Kubernetes Blog

The Kubernetes blog is used by the project to communicate new features, community reports, and any news that might be relevant to the Kubernetes community.

  • SELinux Volume Label Changes Go GA (and likely implications in v1.37)
    on April 22, 2026 at 6:35 pm

    If you run Kubernetes on Linux with SELinux in enforcing mode, plan ahead: a future release (anticipated to be v1.37) is expected to turn the SELinuxMount feature gate on by default. This makes volume setup faster for most workloads, but it can break applications that still depend on the older recursive relabeling model in subtle ways (for example, sharing one volume between privileged and unprivileged Pods on the same node). Kubernetes v1.36 is the right release to audit your cluster and fix or opt out of this change.

    If your nodes do not use SELinux, nothing changes for you: the kubelet skips the SELinux logic entirely when SELinux is unavailable or disabled in the Linux kernel, and you can skip this article completely.

    This blog builds on the earlier work described in the Kubernetes 1.27: Efficient SELinux Relabeling (Beta) post, which introduced the SELinuxMountReadWriteOncePod feature gate. The problem to be addressed remains the same; however, this blog extends that same approach to all volumes.

    The problem

    Linux systems with Security Enhanced Linux (SELinux) enabled use labels attached to objects (for example, files and network sockets) to make access control decisions. Historically, the container runtime applies SELinux labels to a Pod and all its volumes. Kubernetes only passes the SELinux label from a Pod's securityContext fields to the container runtime. The container runtime then recursively changes the SELinux label on all files that are visible to the Pod's containers. This can be time-consuming if there are many files on the volume, especially when the volume is on a remote filesystem.

    Caution: If a container uses a subPath of a volume, only that subPath of the whole volume is relabeled. This allows two Pods that have two different SELinux labels to use the same volume, as long as they use different subpaths of it.
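For reference, this is how a Pod exposes an SELinux label through its security context (a minimal sketch; the Pod name, image, claim name, and level value are illustrative, not from the original post):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: selinux-demo            # hypothetical name
spec:
  securityContext:
    seLinuxOptions:
      # Only the level is set here; user, role, and type fall back to
      # the operating system defaults (typically system_u, system_r,
      # container_t).
      level: "s0:c123,c456"
  containers:
    - name: app
      image: registry.k8s.io/pause:3.10
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-pvc     # hypothetical PVC
```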
    If a Pod does not have any SELinux label assigned in the Kubernetes API, the container runtime assigns a unique random label, so a process that potentially escapes the container boundary cannot access data of any other container on the host. The container runtime still recursively relabels all Pod volumes with this random SELinux label.

    What Kubernetes is improving

    Where the stack supports it, the kubelet can mount the volume with -o context=<label> so the kernel applies the correct label for all inodes on that mount without a recursive inode traversal. That path is gated by feature flags and requires, among other things, that the Pod expose enough of an SELinux label (for example spec.securityContext.seLinuxOptions.level) and that the volume driver opts in (for CSI, the CSIDriver field spec.seLinuxMount: true).

    The project rolled this out in phases: ReadWriteOncePod volumes were handled under the SELinuxMountReadWriteOncePod feature gate, on by default since v1.28 and GA in v1.36. Broader coverage was handled under the SELinuxMount flag, paired with the spec.securityContext.seLinuxChangePolicy field on Pods.

    If a Pod and its volume meet all of the following conditions, Kubernetes will mount the volume directly with the right SELinux label. Such a mount happens in constant time and the container runtime does not need to recursively relabel any files on it. For such a mount to happen:

    - The operating system must support SELinux. Without SELinux support detected, the kubelet and the container runtime do not do anything with regard to SELinux.
    - The feature gate SELinuxMountReadWriteOncePod must be enabled. If you're running Kubernetes v1.36, the feature is enabled unconditionally.
    - The Pod must use a PersistentVolumeClaim with applicable accessModes: either the volume has accessModes: ["ReadWriteOncePod"], or the volume can use any other access mode(s), provided that the feature gates SELinuxChangePolicy and SELinuxMount are both enabled and the Pod has spec.securityContext.seLinuxChangePolicy set to nil (the default) or to MountOption. The feature gate SELinuxMount is beta and disabled by default in Kubernetes 1.36. All other SELinux-related feature gates are now generally available (GA). With any of these feature gates disabled, SELinux labels will always be applied by the container runtime by recursively traversing the volume (or its subPaths).
    - The Pod must have at least seLinuxOptions.level assigned in its security context, or all containers in that Pod must have it set in their container-level security contexts. Kubernetes will read the default user, role and type from the operating system defaults (typically system_u, system_r and container_t). Without Kubernetes knowing at least the SELinux level, the container runtime will assign a random level after the volumes are mounted, and will still relabel the volumes recursively in that case.
    - The volume plugin or the CSI driver responsible for the volume must support mounting with SELinux mount options. These in-tree volume plugins support mounting with SELinux mount options: fc and iscsi. CSI drivers that support mounting with SELinux mount options must declare this capability in their CSIDriver instance by setting the seLinuxMount field. Volumes managed by other volume plugins, or by CSI drivers that do not set seLinuxMount: true, will be recursively relabeled by the container runtime.

    The breaking change

    The SELinuxMount feature gate changes what volumes can be shared among multiple Pods in a subtle way. Both of these cases work with recursive relabeling:

    - Two Pods with different SELinux labels share the same volume, but each of them uses a different subPath of the volume.
    - A privileged Pod and an unprivileged Pod share the same volume.

    Neither scenario works with the mount-based labeling behavior that Kubernetes is moving to when SELinux is active. Instead, one of these Pods will be stuck in ContainerCreating until the other Pod is terminated. The first case is very niche and hasn't been seen in practice. Although the second case is still quite rare, this setup has been observed in real applications. Kubernetes v1.36 offers metrics and events to identify these Pods, and allows cluster administrators to opt out of the mount option through the Pod field spec.securityContext.seLinuxChangePolicy.

    seLinuxChangePolicy

    The new Pod field spec.securityContext.seLinuxChangePolicy specifies how the SELinux label is applied to all Pod volumes. In Kubernetes v1.36, this field is part of the stable Pod API. There are three choices available:

    - field not set (default): in Kubernetes v1.36, the behavior depends on whether the SELinuxMount feature gate is enabled. By default that feature gate is not enabled, and the SELinux label is applied recursively. If you enable that feature gate in your cluster, and all other conditions are met, labeling is applied using the mount option.
    - Recursive: the SELinux label is applied recursively. This opts out from using the mount option.
    - MountOption: the SELinux label is applied using the mount option, if all other conditions are met. This choice is available only when the SELinuxMount feature gate is enabled.

    SELinux warning controller (optional)

    Kubernetes v1.36 provides a new controller within the control plane, selinux-warning-controller. This controller runs within the kube-controller-manager. To use it, pass --controllers=*,selinux-warning-controller on the kube-controller-manager command line; you also must not have explicitly overridden the SELinuxChangePolicy feature gate to be disabled.
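For example, on a cluster where the control plane runs as static Pods, the flag can be added to the kube-controller-manager manifest (a sketch; the file path, image tag, and surrounding flags vary by distribution):

```yaml
# /etc/kubernetes/manifests/kube-controller-manager.yaml (typical kubeadm path)
apiVersion: v1
kind: Pod
metadata:
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
    - name: kube-controller-manager
      image: registry.k8s.io/kube-controller-manager:v1.36.0   # illustrative tag
      command:
        - kube-controller-manager
        # Keep all default controllers and additionally enable the
        # opt-in SELinux warning controller.
        - --controllers=*,selinux-warning-controller
        # ...the rest of the flags from your existing manifest...
```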
    The controller watches all Pods in the cluster and emits an Event when it finds two Pods that share the same volume in a way that is not compatible with the SELinuxMount feature gate. All such conflicting Pods will receive an event, such as:

    SELinuxLabel "system_u:system_r:container_t:s0:c98,c99" conflicts with pod my-other-pod that uses the same volume as this pod with SELinuxLabel "system_u:system_r:container_t:s0:c0,c1". If both pods land on the same node, only one of them may access the volume.

    The actual Pod name may be censored when the conflicting Pods run in different namespaces, to prevent leaking information across namespace boundaries. The controller reports such an event even when these Pods don't run on the same node, to make sure all Pods work regardless of the Kubernetes scheduler's decisions; they could run on the same node next time.

    In addition, the controller emits the metric selinux_warning_controller_selinux_volume_conflict that lists all current conflicts among Pods. The metric has labels that identify the conflicting Pods and their SELinux labels, such as:

    selinux_warning_controller_selinux_volume_conflict{pod1_name="my-other-pod",pod1_namespace="default",pod1_value="system_u:object_r:container_file_t:s0:c0,c1",pod2_name="my-pod",pod2_namespace="default",pod2_value="system_u:object_r:container_file_t:s0:c0,c2",property="SELinuxLabel"} 1

    There is a security consequence of enabling this opt-in controller: it may reveal namespace names, which are always present in the metric. The Kubernetes project assumes only cluster administrators can access kube-controller-manager metrics.

    Suggested upgrade path

    To ensure a smooth upgrade path from v1.36 to a release with SELinuxMount enabled (anticipated to be v1.37), we suggest you follow these steps:

    1. Enable selinux-warning-controller in the kube-controller-manager.
    2. Check the selinux_warning_controller_selinux_volume_conflict metric. It shows all potential conflicts between Pods.
    3. For each conflicting Pod (Deployment, StatefulSet, etc.), either apply the opt-out (set the Pod's spec.securityContext.seLinuxChangePolicy: Recursive) or re-architect the application to remove the conflict. For example, do your Pods really need to run as privileged? Consider using a MutatingAdmissionPolicy, a mutating webhook, or a policy engine like Kyverno or Gatekeeper to apply the opt-out to all Pods in a namespace or across the entire cluster.
    4. Check the volume_manager_selinux_volume_context_mismatch_warnings_total metric. The kubelet emits this metric when it starts a Pod that runs today, while SELinuxMount is disabled, but that would fail to start with SELinuxMount enabled. In other words, it counts the Pods that will experience a true conflict. Unfortunately, this metric does not expose the exact Pod name as a label; the full Pod name is available only in the selinux_warning_controller_selinux_volume_conflict metric.
    5. Once both metrics have been accounted for, upgrade to a Kubernetes version that has SELinuxMount enabled.

    When SELinuxMount is enabled, the kubelet will emit the metric volume_manager_selinux_volume_context_mismatch_errors_total with the number of Pods that could not start because their SELinux label conflicts with an existing Pod that uses the same volume. The exact Pod names should still be available in the selinux_warning_controller_selinux_volume_conflict metric, if the selinux-warning-controller is enabled.

    Further reading

    - KEP: Speed up SELinux volume relabeling using mounts
    - SELinux Volume Relabeling
    - Feature Gates
    - Story 3: cluster upgrade
    - Configure a security context for a Pod: Efficient SELinux volume relabeling and selinux-warning-controller

    Acknowledgements

    If you run into issues, have feedback, or want to contribute, find us on the Kubernetes Slack in #sig-node and #sig-storage, or join the SIG Node or SIG Storage meetings.
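As a sketch, the per-workload opt-out described in the upgrade path looks like this on a Pod (the Pod, container, and claim names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: legacy-shared-volume-app    # hypothetical name
spec:
  securityContext:
    # Opt out of mount-based labeling: the container runtime keeps
    # relabeling this Pod's volumes recursively, as it did before
    # SELinuxMount.
    seLinuxChangePolicy: Recursive
  containers:
    - name: app
      image: registry.k8s.io/pause:3.10
      volumeMounts:
        - name: shared
          mountPath: /shared
  volumes:
    - name: shared
      persistentVolumeClaim:
        claimName: shared-pvc       # hypothetical PVC
```

The same field can be stamped onto every Pod in a namespace or cluster with a MutatingAdmissionPolicy, a mutating webhook, or a policy engine, as the upgrade path suggests.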

  • Kubernetes v1.36: ハル (Haru)
    on April 22, 2026 at 12:00 am

    Editors: Chad M. Crowell, Kirti Goyal, Sophia Ugochukwu, Swathi Rao, Utkarsh Umre

    Similar to previous releases, the release of Kubernetes v1.36 introduces new stable, beta, and alpha features. The consistent delivery of high-quality releases underscores the strength of our development cycle and the vibrant support from our community. This release consists of 70 enhancements. Of those enhancements, 18 have graduated to Stable, 25 are entering Beta, and 25 have entered Alpha. There are also some deprecations and removals in this release; make sure to read about those.

    Release theme and logo

    We open 2026 with Kubernetes v1.36, a release that arrives as the season turns and the light shifts on the mountain. ハル (Haru) is a sound in Japanese that carries many meanings; among those we hold closest are 春 (spring), 晴れ (hare, clear skies), and 遥か (haruka, far-off, distant). A season, a sky, and a horizon. You will find all three in what follows.

    The logo, created by avocadoneko / Natsuho Ide, draws inspiration from Katsushika Hokusai's Thirty-six Views of Mount Fuji (富嶽三十六景, Fugaku Sanjūrokkei), the same series that gave the world The Great Wave off Kanagawa. Our v1.36 logo reimagines one of the series' most celebrated prints, Fine Wind, Clear Morning (凱風快晴, Gaifū Kaisei), also known as Red Fuji (赤富士, Aka Fuji): the mountain lit red in a summer dawn, bare of snow after the long thaw. Thirty-six views felt like a fitting number to sit with at v1.36, and a reminder that even Hokusai didn't stop there.[1]

    Keeping watch over the scene is the Kubernetes helm, set into the sky alongside the mountain. At the foot of Fuji sit Stella (left) and Nacho (right), two cats with the Kubernetes helm on their collars, standing in for the role of komainu, the paired lion-dog guardians that watch over Japanese shrines. Paired, because nothing is guarded alone.
    Stella and Nacho stand in for a much larger set of paws: the SIGs and working groups, the maintainers and reviewers, the people behind docs, blogs, and translations, the release team, first-time contributors taking their first steps, and lifelong contributors returning season after season. Kubernetes v1.36 is, as always, held up by many hands.

    Brushed across Red Fuji in the logo is the calligraphy 晴れに翔け (hare ni kake), "soar into clear skies". It is the first half of a couplet that was too long to fit on the mountain:

    晴れに翔け、未来よ明け
    hare ni kake, asu yo ake
    "Soar into clear skies; toward tomorrow's sunrise."[2]

    That is the wish we carry for this release: to soar into clear skies, for the release itself, for the project, and for everyone who ships it together. The dawn breaking over Red Fuji is not an ending but a passage: this release carries us to the next, and that one to the one after, on toward horizons far beyond what any single view can hold.

    [1] The series was so popular that Hokusai added ten more prints, bringing the total to forty-six.
    [2] 未来 means "the future" in its widest sense, not just tomorrow but everything still to come. It is usually read mirai; here it takes the informal reading asu.

    Spotlight on key updates

    Kubernetes v1.36 is packed with new features and improvements. Here are a few select updates the Release Team would like to highlight!

    Stable: Fine-grained API authorization

    On behalf of Kubernetes SIG Auth and SIG Node, we are pleased to announce the graduation of fine-grained kubelet API authorization to General Availability (GA) in Kubernetes v1.36! The KubeletFineGrainedAuthz feature gate was introduced as an opt-in alpha feature in Kubernetes v1.32, then graduated to beta (enabled by default) in v1.33. Now, the feature is generally available.
    This feature enables more precise, least-privilege access control over the kubelet's HTTPS API, replacing the need to grant the overly broad nodes/proxy permission for common monitoring and observability use cases. This work was done as a part of KEP #2862 led by SIG Auth and SIG Node.

    Beta: Resource health status

    Before the v1.34 release, Kubernetes lacked a native way to report the health of allocated devices, making it difficult to diagnose Pod crashes caused by hardware failures. Building on the initial alpha release in v1.31, which focused on Device Plugins, Kubernetes v1.36 expands this feature by promoting the allocatedResourcesStatus field within each Pod's .status to beta. This field provides a unified health reporting mechanism for all specialized hardware. Users can now run kubectl describe pod to determine if a container's crash loop is due to an Unhealthy or Unknown device status, regardless of whether the hardware was provisioned via traditional plugins or the newer DRA framework. This enhanced visibility allows administrators and automated controllers to quickly identify faulty hardware and streamline the recovery of high-performance workloads. This work was done as part of KEP #4680 led by SIG Node.

    Alpha: Workload Aware Scheduling (WAS) features

    Previously, the Kubernetes scheduler and job controllers managed pods as independent units, often leading to fragmented scheduling or resource waste for complex, distributed workloads. Kubernetes v1.36 introduces a comprehensive suite of Workload Aware Scheduling (WAS) features in Alpha, natively integrating the Job controller with a revised Workload API and a new decoupled PodGroup API, to treat related pods as a single logical entity. Kubernetes v1.35 already supported gang scheduling by requiring a minimum number of pods to be schedulable before any were bound to nodes.
    v1.36 goes further with a new PodGroup scheduling cycle that evaluates the entire group atomically: either all pods in the group are bound together, or none are. This work was done across several KEPs (including #4671, #5547, #5832, #5732, and #5710) led by SIG Scheduling and SIG Apps.

    Features graduating to Stable

    This is a selection of some of the improvements that are now stable following the v1.36 release.

    Volume group snapshots

    After several cycles in beta, VolumeGroupSnapshot support reaches General Availability (GA) in Kubernetes v1.36. This feature allows you to take crash-consistent snapshots across multiple PersistentVolumeClaims simultaneously. The support for volume group snapshots relies on a set of extension APIs for group snapshots. These APIs allow users to take crash-consistent snapshots for a set of volumes. A key aim is to allow you to restore that set of snapshots to new volumes and recover your workload based on a crash-consistent recovery point. This work was done as part of KEP #3476 led by SIG Storage.

    Mutable volume attach limits

    In Kubernetes v1.36, the mutable CSINode allocatable feature graduates to stable. This enhancement allows Container Storage Interface (CSI) drivers to dynamically update the reported maximum number of volumes that a node can handle. With this update, the kubelet can dynamically update a node's volume limits and capacity information. The kubelet adjusts these limits based on periodic checks or in response to resource exhaustion errors from the CSI driver, without requiring a component restart. This ensures the Kubernetes scheduler maintains an accurate view of storage availability, preventing pod scheduling failures caused by outdated volume limits. This work was done as part of KEP #4876 led by SIG Storage.
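For reference, the limit that such a driver can now update at runtime lives in the CSINode object for each node; a sketch (the driver name, node ID, and count are illustrative):

```yaml
apiVersion: storage.k8s.io/v1
kind: CSINode
metadata:
  name: node-1                    # matches the Node name
spec:
  drivers:
    - name: ebs.csi.example.com   # hypothetical CSI driver
      nodeID: i-0123456789abcdef0
      allocatable:
        # Maximum number of this driver's volumes attachable to the node.
        # With mutable attach limits stable, the driver can revise this
        # value without a kubelet or driver restart.
        count: 25
```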
    API for external signing of ServiceAccount tokens

    In Kubernetes v1.36, the external ServiceAccount token signer feature graduates to stable, making it possible to offload token signing to an external system while still integrating cleanly with the Kubernetes API. Clusters can now rely on an external JWT signer for issuing projected service account tokens that follow the standard service account token format, including support for extended expiration when needed. This is especially useful for clusters that already rely on external identity or key management systems, allowing Kubernetes to integrate without duplicating key management inside the control plane. The kube-apiserver is wired to discover public keys from the external signer, cache them, and validate tokens it did not sign itself, so existing authentication and authorization flows continue to work as expected. Over the alpha and beta phases, the API and configuration for the external signer plugin, path validation, and OIDC discovery were hardened to handle real-world deployments and rotation patterns safely. With GA in v1.36, external ServiceAccount token signing is now a fully supported option for platforms that centralize identity and signing, simplifying integration with external IAM systems and reducing the need to manage signing keys directly inside the control plane. This work was done as part of KEP #740 led by SIG Auth.

    DRA features graduating to Stable

    Part of the Dynamic Resource Allocation (DRA) ecosystem reaches full production maturity in Kubernetes v1.36 as key governance and selection features graduate to Stable. The transition of DRA admin access to GA provides a permanent, secure framework for cluster administrators to access and manage hardware resources globally, while the stabilization of prioritized lists ensures that resource selection logic remains consistent and predictable across all cluster environments.
    Now, organizations can confidently deploy mission-critical hardware automation with the guarantee of long-term API stability and backward compatibility. These features empower users to implement sophisticated resource-sharing policies and administrative overrides that are essential for large-scale GPU clusters and multi-tenant AI platforms, marking the completion of the core architectural foundation for next-generation resource management. This work was done as part of KEPs #5018 and #4816 led by SIG Auth and SIG Scheduling.

    Mutating admission policies

    Declarative cluster management reaches a new level of sophistication in Kubernetes v1.36 with the graduation of MutatingAdmissionPolicies to Stable. This milestone provides a native, high-performance alternative to traditional webhooks by allowing administrators to define resource mutations directly in the API server using the Common Expression Language (CEL), fully replacing the need for external infrastructure for many common use cases. Now, cluster operators can modify incoming requests without the latency and operational complexity associated with managing custom admission webhooks. By moving mutation logic into a declarative, versioned policy, organizations can achieve more predictable cluster behavior, reduced network overhead, and a hardened security model with the full guarantee of long-term API stability. This work was done as part of KEP #3962 led by SIG API Machinery.

    Declarative validation of Kubernetes native types with validation-gen

    The development of Kubernetes native API types reaches a new level of efficiency in Kubernetes v1.36 as declarative validation (with validation-gen) graduates to Stable. This milestone replaces the manual and often error-prone task of writing complex validation by hand, allowing developers to declare sophisticated validation logic directly on the Go type definitions.
    Instead of writing custom validation functions, Kubernetes contributors can now define validation rules using IDL marker comments (such as +k8s:minimum or +k8s:enum) directly within the API type definitions (types.go). The validation-gen tool parses these comments to automatically generate robust API validation code at compile-time. This reduces maintenance overhead and ensures that API validation remains consistent and synchronized with the source code. This work was done as part of KEP #5073 led by SIG API Machinery.

    Removal of gogo protobuf dependency for Kubernetes API types

    Security and long-term maintainability for the Kubernetes codebase take a major step forward in Kubernetes v1.36 with the completion of the gogoprotobuf removal. This initiative has eliminated a significant dependency on the unmaintained gogoprotobuf library, which had become a source of potential security vulnerabilities and a blocker for adopting modern Go language features. Instead of migrating to standard Protobuf generation, which presented compatibility risks for Kubernetes API types, the project opted to fork and internalize the required generation logic within k8s.io/code-generator. This approach successfully eliminates the unmaintained runtime dependencies from the Kubernetes dependency graph while preserving existing API behavior and serialization compatibility. For consumers of Kubernetes API Go types, this change reduces technical debt and prevents accidental misuse with standard protobuf libraries. This work was done as part of KEP #5589 led by SIG API Machinery.

    Node log query

    Previously, Kubernetes required cluster administrators to log into nodes via SSH or implement a client-side reader for debugging issues pertaining to control-plane or worker nodes. While certain issues still require direct node access, issues with the kube-proxy or kubelet can be diagnosed by inspecting their logs.
    Node logs offer cluster administrators a method to view these logs using the kubelet API and a kubectl plugin, simplifying troubleshooting without logging into nodes, similar to debugging issues related to a pod or container. This method is operating system agnostic and requires the services or nodes to log to /var/log. As this feature reaches GA in Kubernetes 1.36 after thorough performance validation on production workloads, it is enabled by default on the kubelet through the NodeLogQuery feature gate. In addition, the enableSystemLogQuery kubelet configuration option must also be enabled. This work was done as a part of KEP #2258 led by SIG Windows.

    Support User Namespaces in pods

    Container isolation and node security reach a major maturity milestone in Kubernetes v1.36 as support for User Namespaces graduates to Stable. This long-awaited feature provides a critical layer of defense-in-depth by allowing the mapping of a container's root user to a non-privileged user on the host, ensuring that even if a process escapes the container, it possesses no administrative power over the underlying node. Now, cluster operators can confidently enable this hardened isolation for production workloads to mitigate the impact of container breakout vulnerabilities. By decoupling the container's internal identity from the host's identity, Kubernetes provides a robust, standardized mechanism to protect multi-tenant environments and sensitive infrastructure from unauthorized access, all with the full guarantee of long-term API stability. This work was done as part of KEP #127 led by SIG Node.

    Support PSI based on cgroupv2

    Node resource management and observability become more precise in Kubernetes v1.36 as the export of Pressure Stall Information (PSI) metrics graduates to Stable. This feature provides the kubelet with the ability to report "pressure" metrics for CPU, memory, and I/O, offering a more granular view of resource contention than traditional utilization metrics.
    Cluster operators and autoscalers can use these metrics to distinguish between a system that is simply busy and one that is actively stalling due to resource exhaustion. By leveraging these signals, users can more accurately tune pod resource requests, improve the reliability of vertical autoscaling, and detect noisy neighbor effects before they lead to application performance degradation or node instability. This work was done as part of KEP #4205 led by SIG Node.

    Volume source: OCI artifact and/or image

    The distribution of container data becomes more flexible in Kubernetes v1.36 as OCI volume source support graduates to Stable. This feature moves beyond the traditional requirement of mounting volumes from external storage providers or config maps by allowing the kubelet to pull and mount content directly from any OCI-compliant registry, such as a container image or an artifact repository. Now, developers and platform engineers can package application data, models, or static assets as OCI artifacts and deliver them to pods using the same registries and versioning workflows they already use for container images. This convergence of image and volume management simplifies CI/CD pipelines, reduces dependency on specialized storage backends for read-only content, and ensures that data remains portable and securely accessible across any environment. This work was done as part of KEP #4639 led by SIG Node.

    New features in Beta

    This is a selection of some of the improvements that are now beta following the v1.36 release.

    Staleness mitigation for controllers

    Staleness in Kubernetes controllers is a problem that affects many controllers and can subtly affect controller behavior. Often it is not discovered until it is too late, when a controller in production has already taken incorrect action because of some underlying assumption made by the controller author.
    This could lead to conflicting updates or data corruption upon controller reconciliation during times of cache staleness. We are excited to announce that Kubernetes v1.36 includes new features that help mitigate controller staleness and provide better observability of controller behavior. This prevents reconciliation based on an outdated view of cluster state, which can often lead to harmful behavior. This work was done as part of KEP #5647 led by SIG API Machinery.

    IP/CIDR validation improvements

    In Kubernetes v1.36, the StrictIPCIDRValidation feature for API IP and CIDR fields graduates to beta, tightening validation to catch malformed addresses and prefixes that previously slipped through. This helps prevent subtle configuration bugs where Services, Pods, NetworkPolicies, or other resources reference invalid IPs, which could otherwise lead to confusing runtime behavior or security surprises. Controllers are updated to canonicalize IPs they write back into objects and to warn when they encounter bad values that were already stored, so clusters can gradually converge on clean, consistent data. With beta, StrictIPCIDRValidation is ready for wider use, giving operators more reliable guardrails around IP-related configuration as they evolve networks and policies over time. This work was done as a part of KEP #4858 led by SIG Network.

    Separate kubectl user preferences from cluster configs

    The .kuberc feature for customizing kubectl user preferences continues in beta and is enabled by default. The ~/.kube/kuberc file allows users to store aliases, default flags, and other personal settings separately from kubeconfig files, which hold cluster endpoints and credentials. This separation prevents personal preferences from interfering with CI pipelines or shared kubeconfig files, while maintaining a consistent kubectl experience across different clusters and contexts.
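A minimal ~/.kube/kuberc preferences file might look like this (a sketch; verify the apiVersion against your kubectl release, and treat the alias and flag choices as illustrative):

```yaml
apiVersion: kubectl.config.k8s.io/v1beta1
kind: Preference
aliases:
  # Adds "kubectl getn" as shorthand for "kubectl get namespaces".
  - name: getn
    command: get
    appendArgs:
      - namespaces
defaults:
  # Makes "kubectl apply" default to server-side apply; an explicit
  # --server-side=false on the command line still wins.
  - command: apply
    options:
      - name: server-side
        default: "true"
```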
    In Kubernetes v1.36, .kuberc was expanded with the ability to define policies for credential plugins (allowlists or denylists) to enforce safer authentication practices. Users can disable this functionality if needed by setting the KUBECTL_KUBERC=false or KUBERC=off environment variables. This work was done as a part of KEP #3104 led by SIG CLI, with help from SIG Auth.

    Mutable container resources when Job is suspended

    In Kubernetes v1.36, the MutablePodResourcesForSuspendedJobs feature graduates to beta and is enabled by default. This update relaxes Job validation to allow updates to container CPU, memory, GPU, and extended resource requests and limits while a Job is suspended. This capability allows queue controllers and operators to adjust batch workload requirements based on real-time cluster conditions. For example, a queueing system can suspend incoming Jobs, adjust their resource requirements to match available capacity or quota, and then unsuspend them. The feature strictly limits mutability to suspended Jobs (or Jobs whose pods have been terminated upon suspension) to prevent disruptive changes to actively running pods. This work was done as a part of KEP #5440 led by SIG Apps.

    Constrained impersonation

    In Kubernetes v1.36, the ConstrainedImpersonation feature for user impersonation graduates to beta, tightening a historically all-or-nothing mechanism into something that can actually follow least-privilege principles. When this feature is enabled, an impersonator must have two distinct sets of permissions: one to impersonate a given identity, and another to perform specific actions on that identity's behalf. This prevents support tools, controllers, or node agents from using impersonation to gain broader access than they themselves are allowed, even if their impersonation RBAC is misconfigured. Existing impersonate rules keep working, but the API server prefers the new constrained checks first, making the transition incremental instead of a flag day.
With beta in v1.36, ConstrainedImpersonation is tested, documented, and ready for wider adoption by platform teams that rely on impersonation for debugging, proxying, or node-level controllers. This work was done as a part of KEP #5284 led by SIG Auth.

DRA features in beta

The Dynamic Resource Allocation (DRA) framework reaches another maturity milestone in Kubernetes v1.36 as several core features graduate to beta and are enabled by default. This transition moves DRA beyond basic allocation by graduating partitionable devices and consumable capacity, allowing for more granular sharing of hardware like GPUs, while device taints and tolerations ensure that specialized resources are only utilized by the appropriate workloads. Now, users benefit from a much more reliable and observable resource lifecycle through ResourceClaim device status and the ability to ensure device attachment before Pod scheduling. By integrating these features with extended resource support, Kubernetes provides a robust, production-ready alternative to the legacy device plugin system, enabling complex AI and HPC workloads to manage hardware with unprecedented precision and operational safety. This work was done across several KEPs (including #5004, #4817, #5055, #5075, #4815, and #5007) led by SIG Scheduling and SIG Node.

Statusz for Kubernetes components

In Kubernetes v1.36, the ComponentStatusz feature gate for core Kubernetes components graduates to beta, providing a /statusz endpoint (enabled by default) that surfaces real-time build and version details for each component. This low-overhead z-page exposes information like start time, uptime, Go version, binary version, emulation version, and minimum compatibility version, so operators and developers can quickly see exactly what is running without digging through logs or configs.
The endpoint offers a human-readable text view by default, plus a versioned structured API (config.k8s.io/v1beta1) for programmatic access in JSON, YAML, or CBOR via explicit content negotiation. Access is granted to the system:monitoring group, keeping it aligned with existing protections on health and metrics endpoints and avoiding exposure of sensitive data. With beta, ComponentStatusz is enabled by default across all core control-plane components and node agents, backed by unit, integration, and end-to-end tests so it can be safely used in production for observability and debugging workflows. This work was done as a part of KEP #4827 led by SIG Instrumentation.

Flagz for Kubernetes components

In Kubernetes v1.36, the ComponentFlagz feature gate for core Kubernetes components graduates to beta, standardizing a /flagz endpoint that exposes the effective command-line flags each component was started with. This gives cluster operators and developers real-time, in-cluster visibility into component configuration, making it much easier to debug unexpected behavior or verify that a flag rollout actually took effect after a restart. The endpoint supports both a human-readable text view and a versioned structured API (initially config.k8s.io/v1beta1), so you can either curl it during an incident or wire it into automated tooling once you are ready. Access is granted to the system:monitoring group and sensitive values can be redacted, keeping configuration insight aligned with existing security practices around health and status endpoints. With beta, ComponentFlagz is now enabled by default and implemented across all core control-plane components and node agents, backed by unit, integration, and end-to-end tests to ensure the endpoint is reliable in production clusters. This work was done as a part of KEP #4828 led by SIG Instrumentation.
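Both z-pages follow the same delegated-authorization pattern as the health and metrics endpoints, so access for additional identities can be granted with an RBAC rule on the non-resource URLs. The sketch below assumes this pattern; the ClusterRole/ClusterRoleBinding names and the ops-debuggers group are illustrative, not defaults shipped with Kubernetes:

```yaml
# Hedged sketch: grant an extra group read access to the new z-pages.
# All names below are illustrative.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: zpages-reader
rules:
- nonResourceURLs: ["/statusz", "/flagz"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: zpages-reader-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: zpages-reader
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: ops-debuggers   # assumed example group
```

Members of system:monitoring need no extra grant; this is only for widening access deliberately.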
Mixed version proxy (aka unknown version interoperability proxy)

In Kubernetes v1.36, the mixed version proxy feature graduates to beta, building on its alpha introduction in v1.28 to provide safer control-plane upgrades for mixed-version clusters. Each API request can now be routed to the apiserver instance that serves the requested group, version, and resource, reducing 404s and failures due to version skew. The feature relies on peer-aggregated discovery, so apiservers share information about which resources and versions they expose, then use that data to transparently reroute requests when needed. New metrics on rerouted traffic and proxy behavior help operators understand how often requests are forwarded and to which peers. Together, these changes make it easier to run highly available, mixed-version API control planes in production while performing multi-step or partial control-plane upgrades. This work was done as a part of KEP #4020 led by SIG API Machinery.

Memory QoS with cgroups v2

Kubernetes now enhances memory QoS on Linux cgroup v2 nodes with smarter, tiered memory protection that better aligns kernel controls with pod requests and limits, reducing interference and thrashing for workloads sharing the same node. This iteration also refines how the kubelet programs memory.high and memory.min, adds metrics and safeguards to avoid livelocks, and introduces configuration options so cluster operators can tune memory protection behavior for their environments. This work was done as part of KEP #2570 led by SIG Node.

New features in Alpha

This is a selection of some of the improvements that are now alpha following the v1.36 release.

HPA scale to zero for custom metrics

Until now, the HorizontalPodAutoscaler (HPA) required at least one replica to remain active, as it could only calculate scaling needs based on metrics (like CPU or memory) from running pods.
Kubernetes v1.36 continues the development of the HPA scale to zero feature (disabled by default) in Alpha, allowing workloads to scale down to zero replicas specifically when using Object or External metrics. Now, users can experiment with significantly reducing infrastructure costs by completely idling heavy workloads when no work is pending. While the feature remains behind the HPAScaleToZero feature gate, it enables the HPA to stay active even with zero running pods, automatically scaling the deployment back up as soon as the external metric (e.g., queue length) indicates that new tasks have arrived. This work was done as part of KEP #2021 led by SIG Autoscaling.

DRA features in Alpha

Historically, the Dynamic Resource Allocation (DRA) framework lacked seamless integration with high-level controllers and provided limited visibility into device-specific metadata or availability. Kubernetes v1.36 introduces a wave of DRA enhancements in Alpha, including native ResourceClaim support for workloads, and DRA native resources that bring the flexibility of DRA to CPU management. Now, users can leverage the downward API to expose complex resource attributes directly to containers and benefit from improved resource availability visibility for more predictable scheduling. These updates, combined with support for list types in device attributes, transform DRA from a low-level primitive into a robust system capable of handling the sophisticated networking and compute requirements of modern AI and high-performance computing (HPC) stacks. This work was done across several KEPs (including #5729, #5304, #5517, #5677, and #5491) led by SIG Scheduling and SIG Node.

Native histogram support for Kubernetes metrics

High-resolution monitoring reaches a new milestone in Kubernetes v1.36 with the introduction of native histogram support in Alpha.
While classical Prometheus histograms relied on static, pre-defined buckets that often forced a compromise between data accuracy and memory usage, this update allows the control plane to export sparse histograms that dynamically adjust their resolution based on real-time data. Now, cluster operators can capture precise latency distributions for the kube-apiserver and other core components without the overhead of manual bucket management. This architectural shift ensures more reliable SLIs and SLOs, providing high-fidelity heatmaps that remain accurate even during the most unpredictable workload spikes. This work was done as part of KEP #5808 led by SIG Instrumentation.

Manifest based admission control config

Managing admission controllers moves toward a more declarative and consistent model in Kubernetes v1.36 with the introduction of manifest-based admission control configuration in Alpha. This change addresses the long-standing challenge of configuring admission plugins through disparate command-line flags or separate, complex config files by allowing administrators to define the desired state of admission control directly through a structured manifest. Now, cluster operators can manage admission plugin settings with the same versioned, declarative workflows used for other Kubernetes objects, significantly reducing the risk of configuration drift and manual errors during cluster upgrades. By centralizing these configurations into a unified manifest, the kube-apiserver becomes easier to audit and automate, paving the way for more secure and reproducible cluster deployments. This work was done as part of KEP #5793 led by SIG API Machinery.

CRI list streaming

With the introduction of CRI list streaming in Alpha, Kubernetes v1.36 uses new internal streaming operations.
This enhancement addresses the memory pressure and latency spikes often seen on large-scale nodes by replacing traditional, monolithic List requests between the kubelet and the container runtime with a more efficient server-side streaming RPC. Now, instead of waiting for a single, massive response containing all container or image data, the kubelet can process results incrementally as they are streamed. This shift significantly reduces the peak memory footprint of the kubelet and improves responsiveness on high-density nodes, ensuring that cluster management remains fluid even as the number of containers per node continues to grow. This work was done as part of KEP #5825 led by SIG Node.

Other notable changes

Ingress NGINX retirement

To prioritize the safety and security of the ecosystem, Kubernetes SIG Network and the Security Response Committee retired Ingress NGINX on March 24, 2026. Since that date, there have been no further releases, no bugfixes, and no updates to resolve any security vulnerabilities discovered. Existing deployments of Ingress NGINX will continue to function, and installation artifacts like Helm charts and container images will remain available. For full details, see the official retirement announcement.

Faster SELinux labelling for volumes (GA)

Kubernetes v1.36 makes the SELinux volume mounting improvement generally available. This change replaces recursive file relabeling with the mount -o context=XYZ option, applying the correct SELinux label to the entire volume at mount time. It brings more consistent performance and reduces Pod startup delays on SELinux-enforcing systems. This feature was introduced as beta in v1.28 for ReadWriteOncePod volumes. In v1.32, it gained metrics and an opt-out option (securityContext.seLinuxChangePolicy: Recursive) to help catch conflicts. Now in v1.36, it reaches Stable and applies to all volumes by default, with CSIDriver objects opting in via spec.seLinuxMount.
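For workloads that still depend on the old recursive relabeling behavior, the opt-out mentioned above is set in the Pod's securityContext. A minimal sketch (the Pod name, image, and claim name are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: legacy-selinux-app        # illustrative name
spec:
  securityContext:
    # Opt this Pod out of mount-option labeling; the container runtime
    # falls back to recursively relabeling files on its volumes.
    seLinuxChangePolicy: Recursive
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9   # placeholder image
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: shared-data           # illustrative claim name
```

This keeps the pre-GA behavior for that Pod only, at the cost of slower startup on volumes with many files.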
However, this feature may surface breaking changes in future Kubernetes releases, most likely where one volume is shared between privileged and unprivileged Pods on the same node. Developers are responsible for setting the seLinuxChangePolicy field and SELinux volume labels on their Pods. Whether you are writing a Deployment, StatefulSet, DaemonSet, or even a custom resource that includes a Pod template, careless settings can lead to a range of problems, such as Pods failing to start when they share a volume. Kubernetes v1.36 is the ideal release to audit your clusters. To learn more, check out the SELinux Volume Label Changes goes GA (and likely implications in v1.37) blog post. For more details on this enhancement, refer to KEP-1710: Speed up recursive SELinux label change.

Graduations, deprecations, and removals in v1.36

Graduations to stable

This lists all the features that graduated to stable (also known as general availability). For a full list of updates, including new features and graduations from alpha to beta, see the release notes.
This release includes a total of 18 enhancements promoted to stable:

- Support User Namespaces in pods
- API for external signing of Service Account tokens
- Speed up recursive SELinux label change
- Portworx file in-tree to CSI driver migration
- Fine grained Kubelet API authorization
- Mutating Admission Policies
- Node log query
- VolumeGroupSnapshot
- Mutable CSINode Allocatable Property
- DRA: Prioritized Alternatives in Device Requests
- Support PSI based on cgroupv2
- add ProcMount option
- DRA: Extend PodResources to include resources from Dynamic Resource Allocation
- VolumeSource: OCI Artifact and/or Image
- Split L3 Cache Topology Awareness in CPU Manager
- DRA: AdminAccess for ResourceClaims and ResourceClaimTemplates
- Remove gogo protobuf dependency for Kubernetes API types
- CSI driver opt-in for service account tokens via secrets field

Deprecations, removals, and community updates

As Kubernetes develops and matures, features may be deprecated, removed, or replaced with better ones to improve the project's overall health. See the Kubernetes deprecation and removal policy for more details on this process. Kubernetes v1.36 includes a couple of deprecations.

Deprecation of Service .spec.externalIPs

With this release, the externalIPs field in Service spec is deprecated. The field still works today, but it is planned for removal in a future version of Kubernetes, so you should plan to migrate if you currently rely on it. This field has been a known security headache for years, enabling man-in-the-middle attacks on cluster traffic, as documented in CVE-2020-8554. From Kubernetes v1.36 onwards, you will see deprecation warnings when using it, with full removal planned for v1.43. If your Services still lean on externalIPs, consider using LoadBalancer Services for cloud-managed ingress, NodePort for simple port exposure, or Gateway API for a more flexible and secure way to handle external traffic.
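To make the migration concrete, here is a hedged before-and-after sketch (the Service names, selector, ports, and IP address are all illustrative):

```yaml
# Before: a Service relying on the deprecated externalIPs field.
apiVersion: v1
kind: Service
metadata:
  name: legacy-web          # illustrative name
spec:
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
  externalIPs:              # deprecated in v1.36; removal planned for v1.43
  - 192.0.2.10              # documentation-range example IP
---
# After: the same Service exposed through a cloud load balancer instead.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
```

On clusters without a cloud load balancer, a NodePort Service or a Gateway API route are the analogous replacements.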
For more details on this field and its deprecation, refer to External IPs or read KEP-5707: Deprecate service.spec.externalIPs.

Removal of the gitRepo volume driver

The gitRepo volume type has been deprecated since v1.11. For Kubernetes v1.36, the gitRepo volume plugin is permanently disabled and cannot be turned back on. This change protects clusters from a critical security issue where using gitRepo could let an attacker run code as root on the node. Although gitRepo has been deprecated for years and better alternatives have been recommended, it was still technically possible to use it in previous releases. From v1.36 onward, that path is closed for good, so any existing workloads depending on gitRepo will need to migrate to supported approaches such as init containers or external git-sync style tools. For more details on this removal, refer to KEP-5040: Remove gitRepo volume driver.

Release notes

Check out the full details of the Kubernetes v1.36 release in our release notes.

Availability

Kubernetes v1.36 is available for download on GitHub or on the Kubernetes download page. To get started with Kubernetes, check out these tutorials or run local Kubernetes clusters using minikube. You can also easily install v1.36 using kubeadm.

Release Team

Kubernetes is only possible with the support, commitment, and hard work of its community. Each release team is made up of dedicated community volunteers who work together to build the many pieces that make up the Kubernetes releases you rely on. This requires the specialized skills of people from all corners of our community, from the code itself to its documentation and project management. We would like to thank the entire Release Team for the hours spent hard at work to deliver the Kubernetes v1.36 release to our community. The Release Team's membership ranges from first-time shadows to returning team leads with experience forged over several release cycles.
A very special thanks goes out to our release lead, Ryota Sawada, for guiding us through a successful release cycle, for his hands-on approach to solving challenges, and for bringing the energy and care that drives our community forward.

Project Velocity

The CNCF K8s DevStats project aggregates a number of interesting data points related to the velocity of Kubernetes and various sub-projects. This includes everything from individual contributions to the number of companies that are contributing, and is an illustration of the depth and breadth of effort that goes into evolving this ecosystem. During the v1.36 release cycle, which spanned 15 weeks from 12th January 2026 to 22nd April 2026, Kubernetes received contributions from as many as 106 different companies and 491 individuals. In the wider cloud native ecosystem, the figure goes up to 370 companies, counting 2235 total contributors. Note that "contribution" counts when someone makes a commit, code review, comment, creates an issue or PR, reviews a PR (including blogs and documentation) or comments on issues and PRs. If you are interested in contributing, visit Getting Started on our contributor website. Source for this data:

- Companies contributing to Kubernetes
- Overall ecosystem contributions

Events Update

Explore upcoming Kubernetes and cloud native events, including KubeCon + CloudNativeCon, KCD, and other notable conferences worldwide. Stay informed and get involved with the Kubernetes community!
April 2026
- KCD – Kubernetes Community Days: Guadalajara: April 18, 2026 | Guadalajara, Mexico

May 2026
- KCD – Kubernetes Community Days: Toronto: May 13, 2026 | Toronto, Canada
- KCD – Kubernetes Community Days: Texas: May 15, 2026 | Austin, USA
- KCD – Kubernetes Community Days: Istanbul: May 15, 2026 | Istanbul, Turkey
- KCD – Kubernetes Community Days: Helsinki: May 20, 2026 | Helsinki, Finland
- KCD – Kubernetes Community Days: Czech & Slovak: May 21, 2026 | Prague, Czechia

June 2026
- KCD – Kubernetes Community Days: New York: June 10, 2026 | New York, USA
- KubeCon + CloudNativeCon India 2026: June 18-19, 2026 | Mumbai, India

July 2026
- KubeCon + CloudNativeCon Japan 2026: July 29-30, 2026 | Yokohama, Japan

September 2026
- KubeCon + CloudNativeCon China 2026: September 8-9, 2026 | Shanghai, China

October 2026
- KCD – Kubernetes Community Days: UK: Oct 19, 2026 | Edinburgh, UK

November 2026
- KCD – Kubernetes Community Days: Porto: Nov 19, 2026 | Porto, Portugal
- KubeCon + CloudNativeCon North America 2026: Nov 9-12, 2026 | Salt Lake, USA

You can find the latest event details here.

Upcoming Release Webinar

Join members of the Kubernetes v1.36 Release Team on Wednesday, May 20th 2026 at 4:00 PM (UTC) to learn about this release's highlights. For more information and registration, visit the event page on the CNCF Online Programs site.

Get Involved

The simplest way to get involved with Kubernetes is by joining one of the many Special Interest Groups (SIGs) that align with your interests. Have something you'd like to broadcast to the Kubernetes community? Share your voice at our weekly community meeting, and through the channels below. Thank you for your continued feedback and support.
- Follow us on Bluesky @kubernetes.io for the latest updates
- Join the community discussion on Discuss
- Join the community on Slack
- Post questions (or answer questions) on Stack Overflow
- Share your Kubernetes story
- Read more about what's happening with Kubernetes on the blog
- Learn more about the Kubernetes Release Team

  • Gateway API v1.5: Moving features to Stable
    on April 21, 2026 at 4:30 pm

    The Kubernetes SIG Network community presents the release of Gateway API v1.5! Released on March 14, 2026, version 1.5 is our biggest release yet, and concentrates on moving existing Experimental features to Standard (Stable). The Gateway API v1.5.1 patch release is already available.

    Gateway API v1.5 brings six widely-requested feature promotions to the Standard channel (Gateway API's GA release channel):

    - ListenerSet
    - TLSRoute promoted to Stable
    - HTTPRoute CORS Filter
    - Client Certificate Validation
    - Certificate Selection for Gateway TLS Origination
    - ReferenceGrant promoted to Stable

    Special thanks to the Gateway API contributors for their efforts on this release.

    New release process

    As of Gateway API v1.5, the project has moved to a release train model, where on a feature freeze date, any features that are ready are shipped in the release. This applies to both Experimental and Standard, and also to documentation: if the documentation isn't ready to ship, the feature isn't ready to ship. We are aiming for this to produce a more reliable release cadence (since we are basing our work off the excellent work done by SIG Release on Kubernetes itself). As part of this change, we've also introduced Release Manager and Release Shadow roles to our release team. Many thanks to Flynn (Buoyant) and Beka Modebadze (Google) for all the great work coordinating and filing down the rough edges of our release process. They are both going to continue in this role for the next release as well.

    New standard features

    ListenerSet

    Leads: Dave Protasowski, David Jumani
    GEP-1713

    Why ListenerSet?

    Prior to ListenerSet, all listeners had to be specified directly on the Gateway object.
    While this worked well for simple use cases, it created challenges for more complex or multi-tenant environments:

    - Platform teams and application teams often needed to coordinate changes to the same Gateway
    - Safely delegating ownership of individual listeners was difficult
    - Extending existing Gateways required direct modification of the original resource

    ListenerSet addresses these limitations by allowing listeners to be defined independently and then merged onto a target Gateway. ListenerSets also enable attaching more than 64 listeners to a single, shared Gateway. This is critical for large scale deployments and scenarios with multiple hostnames per listener. Even though the ListenerSet feature significantly enhances scalability, the listeners field in Gateway remains mandatory and the Gateway must have at least one valid listener.

    How it works

    A ListenerSet attaches to a Gateway and contributes one or more listeners. The Gateway controller is responsible for merging listeners from the Gateway resource itself and any attached ListenerSet resources. In this example, a central infrastructure team defines a Gateway with a default HTTP listener, while two different application teams define their own ListenerSet resources in separate namespaces. Both ListenerSets attach to the same Gateway and contribute additional HTTPS listeners.
    ---
    apiVersion: gateway.networking.k8s.io/v1
    kind: Gateway
    metadata:
      name: example-gateway
      namespace: infra
    spec:
      gatewayClassName: example-gateway-class
      allowedListeners:
        namespaces:
          from: All # A selector lets you fine tune this
      listeners:
      - name: http
        protocol: HTTP
        port: 80
    ---
    apiVersion: gateway.networking.k8s.io/v1
    kind: ListenerSet
    metadata:
      name: team-a-listeners
      namespace: team-a
    spec:
      parentRef:
        name: example-gateway
        namespace: infra
      listeners:
      - name: https-a
        protocol: HTTPS
        port: 443
        hostname: a.example.com
        tls:
          certificateRefs:
          - name: a-cert
    ---
    apiVersion: gateway.networking.k8s.io/v1
    kind: ListenerSet
    metadata:
      name: team-b-listeners
      namespace: team-b
    spec:
      parentRef:
        name: example-gateway
        namespace: infra
      listeners:
      - name: https-b
        protocol: HTTPS
        port: 443
        hostname: b.example.com
        tls:
          certificateRefs:
          - name: b-cert

    TLSRoute

    Leads: Rostislav Bobrovsky, Ricardo Pchevuzinske Katz
    GEP-2643

    The TLSRoute resource allows you to route requests by matching the Server Name Indication (SNI) presented by the client during the TLS handshake and directing the stream to the appropriate Kubernetes backends. When working with TLSRoute, a Gateway's TLS listener can be configured in one of two modes: Passthrough or Terminate.

    If you install Gateway API v1.5 Standard over v1.4 or earlier Experimental, your existing Experimental TLSRoutes will not be usable. This is because they will be stored in the v1alpha2 or v1alpha3 version, which is not included in the v1.5 Standard YAMLs. If this applies to you, either continue using Experimental for v1.5.1 and onward, or you'll need to download and migrate your TLSRoutes to v1, which is present in the Standard YAMLs.

    Passthrough mode

    The Passthrough mode is designed for strict security requirements.
    It is ideal for scenarios where traffic must remain encrypted end-to-end until it reaches the destination backend, when the external client and backend need to authenticate directly with each other, or when you can't store certificates on the Gateway. This configuration is also applicable when an encrypted TCP stream is required instead of standard HTTP traffic. In this mode, the encrypted byte stream is proxied directly to the destination backend. The Gateway has zero access to private keys or unencrypted data. The following TLSRoute is attached to a listener that is configured in Passthrough mode. It will match only TLS handshakes with the foo.example.com SNI hostname and apply its routing rules to pass the encrypted TCP stream to the configured backend:

    ---
    apiVersion: gateway.networking.k8s.io/v1
    kind: Gateway
    metadata:
      name: example-gateway
    spec:
      gatewayClassName: example-gateway-class
      listeners:
      - name: tls-passthrough
        protocol: TLS
        port: 8443
        tls:
          mode: Passthrough
    ---
    apiVersion: gateway.networking.k8s.io/v1
    kind: TLSRoute
    metadata:
      name: foo-route
    spec:
      parentRefs:
      - name: example-gateway
        sectionName: tls-passthrough
      hostnames:
      - "foo.example.com"
      rules:
      - backendRefs:
        - name: foo-svc
          port: 8443

    Terminate mode

    The Terminate mode provides the convenience of centralized TLS certificate management directly at the Gateway. In this mode, the TLS session is fully terminated at the Gateway, which then routes the decrypted payload to the destination backend as a plain text TCP stream. The following TLSRoute is attached to a listener that is configured in Terminate mode.
    It will match only TLS handshakes with the bar.example.com SNI hostname and apply its routing rules to pass the decrypted TCP stream to the configured backend:

    apiVersion: gateway.networking.k8s.io/v1
    kind: Gateway
    metadata:
      name: example-gateway
    spec:
      gatewayClassName: example-gateway-class
      listeners:
      - name: tls-terminate
        protocol: TLS
        port: 443
        tls:
          mode: Terminate
          certificateRefs:
          - name: tls-terminate-certificate
    ---
    apiVersion: gateway.networking.k8s.io/v1
    kind: TLSRoute
    metadata:
      name: bar-route
    spec:
      parentRefs:
      - name: example-gateway
        sectionName: tls-terminate
      hostnames:
      - "bar.example.com"
      rules:
      - backendRefs:
        - name: bar-svc
          port: 8080

    HTTPRoute CORS filter

    Leads: Damian Sawicki, Ricardo Pchevuzinske Katz, Norwin Schnyder, Huabing (Robin) Zhao, LiangLliu
    GEP-1767

    Cross-origin resource sharing (CORS) is an HTTP-header based security mechanism that allows (or denies) a web page to access resources from a server on an origin different from the domain that served the web page. See our documentation page for more information. The HTTPRoute resource can be used to configure Cross-Origin Resource Sharing (CORS). The following HTTPRoute allows requests from https://app.example:

    apiVersion: gateway.networking.k8s.io/v1
    kind: HTTPRoute
    metadata:
      name: cors
    spec:
      parentRefs:
      - name: same-namespace
      rules:
      - matches:
        - path:
            type: PathPrefix
            value: /cors-behavior-creds-false
        backendRefs:
        - name: infra-backend-v1
          port: 8080
        filters:
        - cors:
            allowOrigins:
            - https://app.example
          type: CORS

    Instead of specifying a list of specific origins, you can also specify a single wildcard ("*"), which will allow any origin. It is also allowed to use semi-specified origins in the list, where the wildcard appears after the scheme and at the beginning of the hostname, e.g.
    https://*.bar.com:

    apiVersion: gateway.networking.k8s.io/v1
    kind: HTTPRoute
    metadata:
      name: cors
    spec:
      parentRefs:
      - name: same-namespace
      rules:
      - matches:
        - path:
            type: PathPrefix
            value: /cors-behavior-creds-false
        backendRefs:
        - name: infra-backend-v1
          port: 8080
        filters:
        - cors:
            allowOrigins:
            - https://www.baz.com
            - https://*.bar.com
            - https://*.foo.com
          type: CORS

    HTTPRoute filters allow for the configuration of CORS settings. See the list of supported options below:

    - allowCredentials: Specifies whether the browser is allowed to include credentials (such as cookies and HTTP authentication) in the CORS request.
    - allowMethods: The HTTP methods that are allowed for CORS requests.
    - allowHeaders: The HTTP headers that are allowed for CORS requests.
    - exposeHeaders: The HTTP headers that are exposed to the client.
    - maxAge: The maximum time in seconds that the browser should cache the preflight response.

    A comprehensive example:

    apiVersion: gateway.networking.k8s.io/v1
    kind: HTTPRoute
    metadata:
      name: cors-allow-credentials
    spec:
      parentRefs:
      - name: same-namespace
      rules:
      - matches:
        - path:
            type: PathPrefix
            value: /cors-behavior-creds-true
        backendRefs:
        - name: infra-backend-v1
          port: 8080
        filters:
        - cors:
            allowOrigins:
            - "https://www.foo.example.com"
            - "https://*.bar.example.com"
            allowMethods:
            - GET
            - OPTIONS
            allowHeaders:
            - "*"
            exposeHeaders:
            - "x-header-3"
            - "x-header-4"
            allowCredentials: true
            maxAge: 3600
          type: CORS

    Gateway client certificate validation

    Leads: Arko Dasgupta, Katarzyna Łach, Norwin Schnyder
    GEP-91

    Client certificate validation, also known as mutual TLS (mTLS), is a security mechanism where the client provides a certificate to the server to prove its identity. This is in contrast to standard TLS, where only the server presents a certificate to the client. In the context of the Gateway API, frontend mTLS means that the Gateway validates the client's certificate before allowing the connection to proceed to a backend service.
    This validation is done by checking the client certificate against a set of trusted Certificate Authorities (CAs) configured on the Gateway. The API was shaped this way to address a critical security vulnerability related to connection reuse while still providing some level of flexibility.

    Configuration overview

    Client validation is defined using the frontendValidation struct, which specifies how the Gateway should verify the client's identity.

    - caCertificateRefs: A list of references to Kubernetes objects (typically ConfigMaps) containing PEM-encoded CA certificate bundles used as trust anchors to validate the client's certificate.
    - mode: Defines the validation behavior.
      - AllowValidOnly (default): The Gateway accepts connections only if the client presents a valid certificate that passes validation against the specified CA bundle.
      - AllowInsecureFallback: The Gateway accepts connections even if the client certificate is missing or fails verification. This mode typically delegates authorization to the backend and should be used with caution.

    Validation can be applied globally to the Gateway or overridden for specific ports:

    - Default configuration: applies to all HTTPS listeners on the Gateway, unless a per-port override is defined.
    - Per-port configuration: allows for fine-grained control, overriding the default configuration for all listeners handling traffic on a specific port.
    Example:

    apiVersion: gateway.networking.k8s.io/v1
    kind: Gateway
    metadata:
      name: client-validation-basic
    spec:
      gatewayClassName: acme-lb
      tls:
        frontend:
          default:
            validation:
              caCertificateRefs:
              - kind: ConfigMap
                group: ""
                name: foo-example-com-ca-cert
          perPort:
          - port: 8443
            tls:
              validation:
                caCertificateRefs:
                - kind: ConfigMap
                  group: ""
                  name: foo-example-com-ca-cert
                mode: "AllowInsecureFallback"
      listeners:
      - name: foo-https
        protocol: HTTPS
        port: 443
        hostname: foo.example.com
        tls:
          certificateRefs:
          - kind: Secret
            group: ""
            name: foo-example-com-cert
      - name: bar-https
        protocol: HTTPS
        port: 8443
        hostname: bar.example.com
        tls:
          certificateRefs:
          - kind: Secret
            group: ""
            name: bar-example-com-cert

    Certificate selection for Gateway TLS origination

    Leads: Marcin Kosieradzki, Rob Scott, Norwin Schnyder, Lior Lieberman, Katarzyna Lach
    GEP-3155

    Mutual TLS (mTLS) for upstream connections requires the Gateway to present a client certificate to the backend, in addition to verifying the backend's certificate. This ensures that the backend only accepts connections from authorized Gateways.

    Gateway's client certificate configuration

    To configure the client certificate that the Gateway uses when connecting to backends, use the tls.backend.clientCertificateRef field in the Gateway resource. This configuration applies to the Gateway as a client for all upstream connections managed by that Gateway.

    apiVersion: gateway.networking.k8s.io/v1
    kind: Gateway
    metadata:
      name: backend-tls
    spec:
      gatewayClassName: acme-lb
      tls:
        backend:
          clientCertificateRef:
            kind: Secret
            group: "" # empty string means core API group
            name: foo-example-cert
      listeners:
      - name: foo-http
        protocol: HTTP
        port: 80
        hostname: foo.example.com

    ReferenceGrant promoted to v1

    The ReferenceGrant resource has not changed in more than a year, and we do not expect it to change further. Its version has therefore been bumped to v1: it is now officially in the Standard channel and abides by the GA API contract (that is, no breaking changes).
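    As a refresher, a ReferenceGrant lives in the namespace being referenced into and explicitly allows cross-namespace references. A minimal sketch (names and namespaces are illustrative), letting HTTPRoutes in app-ns reference Services in backend-ns:

```yaml
# Illustrative names and namespaces.
apiVersion: gateway.networking.k8s.io/v1
kind: ReferenceGrant
metadata:
  name: allow-routes-to-backends
  namespace: backend-ns        # the namespace being referenced into
spec:
  from:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    namespace: app-ns          # where the referring HTTPRoutes live
  to:
  - group: ""                  # core API group
    kind: Service
```

    Without such a grant, cross-namespace backendRefs are rejected, which is the safe default.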
Try it out

Unlike other Kubernetes APIs, you don't need to upgrade to the latest version of Kubernetes to get the latest version of Gateway API. As long as you're running Kubernetes 1.30 or later, you'll be able to get up and running with this version of Gateway API. To try out the API, follow the Getting Started Guide.

As of this writing, seven implementations are already fully conformant with Gateway API v1.5. In alphabetical order:

- Agentgateway
- Airlock Microgateway
- GKE Gateway
- HAProxy Ingress
- kgateway
- NGINX Gateway Fabric
- Traefik Proxy

Get involved

Wondering when a feature will be added? There are lots of opportunities to get involved and help define the future of Kubernetes routing APIs for both ingress and service mesh.

- Check out the user guides to see what use cases can be addressed.
- Try out one of the existing Gateway controllers.
- Or join us in the community and help us build the future of Gateway API together!

The maintainers would like to thank everyone who's contributed to Gateway API, whether in the form of commits to the repo, discussion, ideas, or general support. We could never have made this kind of progress without the support of this dedicated and active community.

  • Kubernetes v1.36 Sneak Peek
    on March 30, 2026 at 12:00 am

Kubernetes v1.36 is coming at the end of April 2026. This release will include removals and deprecations, and it is packed with an impressive number of enhancements. Here are some of the features we are most excited about in this cycle! Please note that this information reflects the current state of v1.36 development and may change before release.

The Kubernetes API removal and deprecation process

The Kubernetes project has a well-documented deprecation policy for features. This policy states that stable APIs may only be deprecated when a newer, stable version of that same API is available, and that APIs have a minimum lifetime for each stability level. A deprecated API has been marked for removal in a future Kubernetes release. It will continue to function until removal (at least one year from the deprecation), but usage will result in a warning being displayed. Removed APIs are no longer available in the current version, at which point you must migrate to using the replacement.

- Generally available (GA) or stable API versions may be marked as deprecated but must not be removed within a major version of Kubernetes.
- Beta or pre-release API versions must be supported for 3 releases after the deprecation.
- Alpha or experimental API versions may be removed in any release without prior deprecation notice; this process can become a withdrawal in cases where a different implementation for the same feature is already in place.

Whether an API is removed as a result of a feature graduating from beta to stable, or because that API simply did not succeed, all removals comply with this deprecation policy. Whenever an API is removed, migration options are communicated in the deprecation guide. A recent example of this principle in action is the retirement of the ingress-nginx project, announced by SIG Security on March 24, 2026.
As stewardship shifts away from the project, the community has been encouraged to evaluate alternative ingress controllers that align with current security and maintenance best practices. This transition reflects the same lifecycle discipline that underpins Kubernetes itself, ensuring continued evolution without abrupt disruption.

Ingress NGINX retirement

To prioritize the safety and security of the ecosystem, Kubernetes SIG Network and the Security Response Committee retired Ingress NGINX on March 24, 2026. Since that date, there have been no further releases, no bugfixes, and no updates to resolve any security vulnerabilities discovered. Existing deployments of Ingress NGINX will continue to function, and installation artifacts like Helm charts and container images will remain available. For full details, see the official retirement announcement.

Deprecations and removals for Kubernetes v1.36

Deprecation of .spec.externalIPs in Service

The externalIPs field in the Service spec is being deprecated, which means you will soon lose a quick way to route traffic for arbitrary external IPs to your Services. This field has been a known security headache for years, enabling man-in-the-middle attacks on your cluster traffic, as documented in CVE-2020-8554. From Kubernetes v1.36 onwards, you will see deprecation warnings when using it, with full removal planned for v1.43. If your Services still lean on externalIPs, consider using LoadBalancer Services for cloud-managed ingress, NodePort for simple port exposure, or Gateway API for a more flexible and secure way to handle external traffic. For more details on this enhancement, refer to KEP-5707: Deprecate service.spec.externalIPs.

Removal of gitRepo volume driver

The gitRepo volume type has been deprecated since v1.11. Starting with Kubernetes v1.36, the gitRepo volume plugin is permanently disabled and cannot be turned back on.
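As one migration sketch (the Pod name, image names, repository URL, and mount paths below are illustrative, not prescribed by the project), an init container can clone a repository into an emptyDir volume that the main container then mounts, replacing a gitRepo volume:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: git-clone-example          # hypothetical name
spec:
  initContainers:
  - name: clone-repo
    image: alpine/git              # illustrative image that ships a git binary
    args: ["clone", "--depth=1", "https://example.com/repo.git", "/repo"]
    volumeMounts:
    - name: repo
      mountPath: /repo
  containers:
  - name: app
    image: registry.k8s.io/pause:3.10   # placeholder workload
    volumeMounts:
    - name: repo
      mountPath: /repo
      readOnly: true
  volumes:
  - name: repo
    emptyDir: {}                   # replaces the removed gitRepo volume type
```

For repositories that must stay in sync after startup, an external git-sync style sidecar writing into the same emptyDir is the usual alternative to a one-shot init container.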
This change protects clusters from a critical security issue where using gitRepo could let an attacker run code as root on the node. Although gitRepo has been deprecated for years and better alternatives have been recommended, it was still technically possible to use it in previous releases. From v1.36 onward, that path is closed for good, so any existing workloads depending on gitRepo will need to migrate to supported approaches such as init containers or external git-sync style tools. For more details on this enhancement, refer to KEP-5040: Remove gitRepo volume driver.

Featured enhancements of Kubernetes v1.36

The following list of enhancements is likely to be included in the upcoming v1.36 release. This is not a commitment and the release content is subject to change.

Faster SELinux labelling for volumes (GA)

Kubernetes v1.36 makes the SELinux volume mounting improvement generally available. This change replaced recursive file relabeling with the mount -o context=XYZ option, applying the correct SELinux label to the entire volume at mount time. It brings more consistent performance and reduces Pod startup delays on SELinux-enforcing systems. This feature was introduced as beta in v1.28 for ReadWriteOncePod volumes. In v1.32, it gained metrics and an opt-out option (securityContext.seLinuxChangePolicy: Recursive) to help catch conflicts. Now in v1.36, it reaches stable and defaults to all volumes, with Pods or CSIDrivers opting in via spec.seLinuxMount. However, we expect this feature to create a risk of breaking changes in future Kubernetes releases, due to the potential for mixing of privileged and unprivileged Pods. Setting the seLinuxChangePolicy field and SELinux volume labels on Pods correctly is the responsibility of the Pod author. Developers have that responsibility whether they are writing a Deployment, StatefulSet, DaemonSet, or even a custom resource that includes a Pod template.
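As a sketch of the opt-out described above (the Pod name, image, and claim name are illustrative), a Pod author can request the older recursive relabeling behavior through the securityContext:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: selinux-recursive-example  # hypothetical name
spec:
  securityContext:
    seLinuxChangePolicy: Recursive # opt out of mount-time labeling for this Pod's volumes
    seLinuxOptions:
      level: "s0:c123,c456"        # illustrative MCS label
  containers:
  - name: app
    image: registry.k8s.io/pause:3.10   # placeholder workload
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: my-pvc            # hypothetical claim shared with other Pods
```

Opting out trades slower Pod startup for the old behavior, which can matter when Pods with different SELinux labels must share one volume.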
Being careless with these settings can lead to a range of problems when Pods share volumes. For more details on this enhancement, refer to KEP-1710: Speed up recursive SELinux label change.

External signing of ServiceAccount tokens

As a beta feature, Kubernetes already supports external signing of ServiceAccount tokens. This allows clusters to integrate with external key management systems or signing services instead of relying only on internally managed keys. With this enhancement, the kube-apiserver can delegate token signing to external systems such as cloud key management services or hardware security modules. This improves security and simplifies key management for clusters that rely on centralized signing infrastructure. We expect that this will graduate to stable (GA) in Kubernetes v1.36. For more details on this enhancement, refer to KEP-740: Support external signing of service account tokens.

DRA driver support for device taints and tolerations

Kubernetes v1.33 introduced support for taints and tolerations for physical devices managed through Dynamic Resource Allocation (DRA). Normally, any device can be used for scheduling. However, this enhancement allows DRA drivers to mark devices as tainted, which ensures that they will not be used for scheduling purposes. Alternatively, cluster administrators can create a DeviceTaintRule to mark devices that match certain selection criteria (such as all devices of a certain driver) as tainted. This improves scheduling control and helps ensure that specialized hardware resources are only used by workloads that explicitly request them. In Kubernetes v1.36, this feature graduates to beta with more comprehensive testing complete, making it accessible by default without the need for a feature flag and open to user feedback. To learn about taints and tolerations, see taints and tolerations. For more details on this enhancement, refer to KEP-5055: DRA: device taints and tolerations.
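As an illustrative sketch only: the shape below follows the alpha API introduced in v1.33, and the API group version and field names may differ in the beta graduation, so verify them against the DRA API installed in your cluster. A DeviceTaintRule that taints every device of a hypothetical driver might look like:

```yaml
# Sketch based on the v1.33 alpha API; all names here are hypothetical.
apiVersion: resource.k8s.io/v1alpha3
kind: DeviceTaintRule
metadata:
  name: taint-example-gpus         # hypothetical name
spec:
  deviceSelector:
    driver: gpu.example.com        # illustrative driver name
  taint:
    key: example.com/unhealthy     # illustrative taint key
    value: "true"
    effect: NoSchedule             # matching devices are excluded from scheduling
```

Workloads that should still receive such devices would declare a matching toleration in their ResourceClaim requests.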
DRA support for partitionable devices

Kubernetes v1.36 expands Dynamic Resource Allocation (DRA) by introducing support for partitionable devices, allowing a single hardware accelerator to be split into multiple logical units that can be shared across workloads. This is especially useful for high-cost resources like GPUs, where dedicating an entire device to a single workload can lead to underutilization. With this enhancement, platform teams can improve overall cluster efficiency by allocating only the required portion of a device to each workload, rather than reserving it entirely. This makes it easier to run multiple workloads on the same hardware while maintaining isolation and control, helping organizations get more value out of their infrastructure. To learn more about this enhancement, refer to KEP-4815: DRA Partitionable Devices.

Want to know more?

New features and deprecations are also announced in the Kubernetes release notes. We will formally announce what's new in Kubernetes v1.36 as part of the CHANGELOG for that release. The Kubernetes v1.36 release is planned for Wednesday, April 22, 2026. Stay tuned for updates!

You can also see the announcements of changes in the release notes for:

- Kubernetes v1.35
- Kubernetes v1.34
- Kubernetes v1.33
- Kubernetes v1.32
- Kubernetes v1.31
- Kubernetes v1.30

Get involved

The simplest way to get involved with Kubernetes is by joining one of the many Special Interest Groups (SIGs) that align with your interests. Have something you'd like to broadcast to the Kubernetes community? Share your voice at our weekly community meeting, and through the channels below. Thank you for your continued feedback and support.
- Follow us on Bluesky @kubernetes.io for the latest updates
- Join the community discussion on Discuss
- Join the community on Slack
- Post questions (or answer questions) on Server Fault or Stack Overflow
- Share your Kubernetes story
- Read more about what's happening with Kubernetes on the blog
- Learn more about the Kubernetes Release Team

  • Announcing Ingress2Gateway 1.0: Your Path to Gateway API
    on March 20, 2026 at 7:00 pm

With the Ingress-NGINX retirement scheduled for March 2026, the Kubernetes networking landscape is at a turning point. For most organizations, the question isn't whether to migrate to Gateway API, but how to do so safely.

Migrating from Ingress to Gateway API is a fundamental shift in API design. Gateway API provides a modular, extensible API with strong support for Kubernetes-native RBAC. Conversely, the Ingress API is simple, and implementations such as Ingress-NGINX extend the API through esoteric annotations, ConfigMaps, and CRDs. Migrating away from Ingress controllers such as Ingress-NGINX presents the daunting task of capturing all the nuances of the Ingress controller and mapping that behavior to Gateway API.

Ingress2Gateway is an assistant that helps teams confidently move from Ingress to Gateway API. It translates Ingress resources/manifests, along with implementation-specific annotations, to Gateway API while warning you about untranslatable configuration and offering suggestions. Today, SIG Network is proud to announce the 1.0 release of Ingress2Gateway. This milestone represents a stable, tested migration assistant for teams ready to modernize their networking stack.

Ingress2Gateway 1.0

Ingress-NGINX annotation support

The main improvement for the 1.0 release is more comprehensive Ingress-NGINX support. Before the 1.0 release, Ingress2Gateway only supported three Ingress-NGINX annotations. For the 1.0 release, Ingress2Gateway supports over 30 common annotations (CORS, backend TLS, regex matching, path rewrite, etc.).

Comprehensive integration testing

Each supported Ingress-NGINX annotation, and representative combinations of common annotations, is backed by controller-level integration tests that verify the behavioral equivalence of the Ingress-NGINX configuration and the generated Gateway API. These tests exercise real controllers in live clusters and compare runtime behavior (routing, redirects, rewrites, etc.), not just YAML structure.
The tests:

- spin up an Ingress-NGINX controller
- spin up multiple Gateway API controllers
- apply Ingress resources that have implementation-specific configuration
- translate Ingress resources to Gateway API with ingress2gateway and apply generated manifests
- verify that the Gateway API controllers and the Ingress controller exhibit equivalent behavior

A comprehensive test suite not only catches bugs in development, but also ensures the correctness of the translation, especially given surprising edge cases and unexpected defaults, so that you don't find out about them in production.

Notification & error handling

Migration is not a "one-click" affair. Surfacing subtleties and untranslatable behavior is as important as translating supported configuration. The 1.0 release cleans up the formatting and content of notifications, so it is clear what is missing and how you can fix it.

Using Ingress2Gateway

Ingress2Gateway is a migration assistant, not a one-shot replacement. Its goal is to:

- migrate supported Ingress configuration and behavior
- identify unsupported configuration and suggest alternatives
- reevaluate and potentially discard undesirable configuration

The rest of the section shows you how to safely migrate the following Ingress-NGINX configuration:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: "1G"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "1"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "1"
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      more_set_headers "Request-Id: $req_id";
  name: my-ingress
  namespace: my-ns
spec:
  ingressClassName: nginx
  rules:
  - host: my-host.example.com
    http:
      paths:
      - backend:
          service:
            name: website-service
            port:
              number: 80
        path: /users/(\d+)
        pathType: ImplementationSpecific
  tls:
  - hosts:
    - my-host.example.com
    secretName: my-secret

1. Install Ingress2Gateway

If you have a Go environment set up, you can install Ingress2Gateway with:

go install github.com/kubernetes-sigs/ingress2gateway@v1.0.0

Otherwise:

brew install ingress2gateway

You can also download the binary from GitHub or build from source.

2. Run Ingress2Gateway

You can pass Ingress2Gateway Ingress manifests, or have the tool read directly from your cluster.

# Pass it files
ingress2gateway print --input-file my-manifest.yaml,my-other-manifest.yaml --providers=ingress-nginx > gwapi.yaml

# Use a namespace in your cluster
ingress2gateway print --namespace my-api --providers=ingress-nginx > gwapi.yaml

# Or your whole cluster
ingress2gateway print --providers=ingress-nginx --all-namespaces > gwapi.yaml

Note: You can also pass --emitter <agentgateway|envoy-gateway|kgateway> to output implementation-specific extensions.

3. Review the output

This is the most critical step. The commands from the previous section output a Gateway API manifest to gwapi.yaml, and they also emit warnings that explain what did not translate exactly and what to review manually.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  annotations:
    gateway.networking.k8s.io/generator: ingress2gateway-dev
  name: nginx
  namespace: my-ns
spec:
  gatewayClassName: nginx
  listeners:
  - hostname: my-host.example.com
    name: my-host-example-com-http
    port: 80
    protocol: HTTP
  - hostname: my-host.example.com
    name: my-host-example-com-https
    port: 443
    protocol: HTTPS
    tls:
      certificateRefs:
      - group: ""
        kind: Secret
        name: my-secret
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  annotations:
    gateway.networking.k8s.io/generator: ingress2gateway-dev
  name: my-ingress-my-host-example-com
  namespace: my-ns
spec:
  hostnames:
  - my-host.example.com
  parentRefs:
  - name: nginx
    port: 443
  rules:
  - backendRefs:
    - name: website-service
      port: 80
    filters:
    - cors:
        allowCredentials: true
        allowHeaders:
        - DNT
        - Keep-Alive
        - User-Agent
        - X-Requested-With
        - If-Modified-Since
        - Cache-Control
        - Content-Type
        - Range
        - Authorization
        allowMethods:
        - GET
        - PUT
        - POST
        - DELETE
        - PATCH
        - OPTIONS
        allowOrigins:
        - '*'
        maxAge: 1728000
      type: CORS
    matches:
    - path:
        type: RegularExpression
        value: (?i)/users/(\d+).*
    name: rule-0
    timeouts:
      request: 10s
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  annotations:
    gateway.networking.k8s.io/generator: ingress2gateway-dev
  name: my-ingress-my-host-example-com-ssl-redirect
  namespace: my-ns
spec:
  hostnames:
  - my-host.example.com
  parentRefs:
  - name: nginx
    port: 80
  rules:
  - filters:
    - requestRedirect:
        scheme: https
        statusCode: 308
      type: RequestRedirect

Ingress2Gateway successfully translated some annotations into their Gateway API equivalents. For example, the nginx.ingress.kubernetes.io/enable-cors annotation was translated into a CORS filter. But upon closer inspection, the nginx.ingress.kubernetes.io/proxy-{read,send}-timeout and nginx.ingress.kubernetes.io/proxy-body-size annotations do not map perfectly. The logs show the reason for these omissions as well as the reasoning behind the translation.
┌─ WARN ────────────────────────────────────────
│ Unsupported annotation nginx.ingress.kubernetes.io/configuration-snippet
│ source: INGRESS-NGINX
│ object: Ingress: my-ns/my-ingress
└─
┌─ INFO ────────────────────────────────────────
│ Using case-insensitive regex path matches. You may want to change this.
│ source: INGRESS-NGINX
│ object: HTTPRoute: my-ns/my-ingress-my-host-example-com
└─
┌─ WARN ────────────────────────────────────────
│ ingress-nginx only supports TCP-level timeouts; i2gw has made a best-effort translation to Gateway API timeouts.request. Please verify that this meets your needs. See documentation: https://gateway-api.sigs.k8s.io/guides/http-timeouts/
│ source: INGRESS-NGINX
│ object: HTTPRoute: my-ns/my-ingress-my-host-example-com
└─
┌─ WARN ────────────────────────────────────────
│ Failed to apply my-ns.my-ingress.metadata.annotations."nginx.ingress.kubernetes.io/proxy-body-size" from my-ns/my-ingress: Most Gateway API implementations have reasonable body size and buffering defaults
│ source: STANDARD_EMITTER
│ object: HTTPRoute: my-ns/my-ingress-my-host-example-com
└─
┌─ WARN ────────────────────────────────────────
│ Gateway API does not support configuring URL normalization (RFC 3986, Section 6). Please check if this matters for your use case and consult implementation-specific details.
│ source: STANDARD_EMITTER
└─

There is a warning that Ingress2Gateway does not support the nginx.ingress.kubernetes.io/configuration-snippet annotation. You will have to check your Gateway API implementation documentation to see if there is a way to achieve equivalent behavior. The tool also notified us that Ingress-NGINX regex matches are case-insensitive prefix matches, which is why there is a match pattern of (?i)/users/(\d+).*. Most organizations will want to change this behavior to be an exact case-sensitive match by removing the leading (?i) and the trailing .* from the path pattern.
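To see the behavioral difference concretely, here is a small Python sketch comparing the two patterns, assuming the controller anchors RegularExpression path matches against the full request path (implementations vary, so verify yours):

```python
import re

# Pattern emitted by Ingress2Gateway: case-insensitive, with a trailing
# wildcard that turns it into a prefix-style match.
permissive = re.compile(r"(?i)/users/(\d+).*")

# Tightened pattern after removing the leading (?i) and trailing .*:
# case-sensitive, and only the exact /users/<id> path matches.
strict = re.compile(r"/users/(\d+)")

# fullmatch mirrors a controller that anchors the pattern to the whole path.
assert permissive.fullmatch("/Users/42/settings") is not None  # still routed
assert strict.fullmatch("/Users/42/settings") is None          # rejected
assert strict.fullmatch("/users/42") is not None               # exact path only
```

If your clients depend on case-insensitive paths or trailing segments, keep the permissive pattern deliberately rather than by accident.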
Ingress2Gateway made a best-effort translation from the nginx.ingress.kubernetes.io/proxy-{send,read}-timeout annotations to a 10 second request timeout in our HTTPRoute. If requests for this service should be much shorter, say 3 seconds, you can make the corresponding changes to your Gateway API manifests.

Also, nginx.ingress.kubernetes.io/proxy-body-size does not have a Gateway API equivalent, and was thus not translated. However, most Gateway API implementations have reasonable defaults for maximum body size and buffering, so this might not be a problem in practice. Further, some emitters might offer support for this annotation through implementation-specific extensions. For example, adding the --emitter agentgateway, --emitter envoy-gateway, or --emitter kgateway flag to the previous ingress2gateway print command would have resulted in additional implementation-specific configuration in the generated Gateway API manifests that attempted to capture the body size configuration.

We also see a warning about URL normalization. Gateway API implementations such as Agentgateway, Envoy Gateway, Kgateway, and Istio have some level of URL normalization, but the behavior varies across implementations and is not configurable through standard Gateway API. You should check and test the URL normalization behavior of your Gateway API implementation to ensure it is compatible with your use case.

To match Ingress-NGINX default behavior, Ingress2Gateway also added a listener on port 80 and an HTTP request redirect filter to redirect HTTP traffic to HTTPS. If you do not want to serve HTTP traffic at all, remove the listener on port 80 and the corresponding HTTPRoute.

Caution: Always thoroughly review the generated output and logs.

After manually applying these changes, the Gateway API manifests might look as follows.
---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  annotations:
    gateway.networking.k8s.io/generator: ingress2gateway-dev
  name: nginx
  namespace: my-ns
spec:
  gatewayClassName: nginx
  listeners:
  - hostname: my-host.example.com
    name: my-host-example-com-https
    port: 443
    protocol: HTTPS
    tls:
      certificateRefs:
      - group: ""
        kind: Secret
        name: my-secret
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  annotations:
    gateway.networking.k8s.io/generator: ingress2gateway-dev
  name: my-ingress-my-host-example-com
  namespace: my-ns
spec:
  hostnames:
  - my-host.example.com
  parentRefs:
  - name: nginx
    port: 443
  rules:
  - backendRefs:
    - name: website-service
      port: 80
    filters:
    - cors:
        allowCredentials: true
        allowHeaders:
        - DNT
        # …
        allowMethods:
        - GET
        # …
        allowOrigins:
        - '*'
        maxAge: 1728000
      type: CORS
    matches:
    - path:
        type: RegularExpression
        value: /users/(\d+)
    name: rule-0
    timeouts:
      request: 3s

4. Verify

Now that you have Gateway API manifests, you should thoroughly test them in a development cluster. In this case, you should at least double-check that your Gateway API implementation's maximum body size defaults are appropriate for you and verify that a three-second timeout is enough. After validating behavior in a development cluster, deploy your Gateway API configuration alongside your existing Ingress. We strongly suggest that you then gradually shift traffic using weighted DNS, your cloud load balancer, or traffic-splitting features of your platform. This way, you can quickly recover from any misconfiguration that made it through your tests. Finally, when you have shifted all your traffic to your Gateway API controller, delete your Ingress resources and uninstall your Ingress controller.

Conclusion

The Ingress2Gateway 1.0 release is just the beginning, and we hope that you use Ingress2Gateway to safely migrate to Gateway API.
As we approach the March 2026 Ingress-NGINX retirement, we invite the community to help us increase our configuration coverage, expand testing, and improve UX.

Resources about Gateway API

The scope of Gateway API can be daunting. Here are some resources to help you work with Gateway API:

- Listener sets allow application developers to manage gateway listeners.
- gwctl gives you a comprehensive view of your Gateway resources, such as attachments and linter errors.
- Gateway API Slack: #sig-network-gateway-api on Kubernetes Slack
- Ingress2Gateway Slack: #sig-network-ingress2gateway on Kubernetes Slack
- GitHub: kubernetes-sigs/ingress2gateway
