Kubernetes Blog

The Kubernetes blog is used by the project to communicate new features, community reports, and any news that might be relevant to the Kubernetes community.
-
How we built a dynamic Kubernetes API Server for the API Aggregation Layer in Cozystack
on November 21, 2024 at 12:00 am
Hi there! I'm Andrei Kvapil, but you might know me as @kvaps in communities dedicated to Kubernetes and cloud-native tools. In this article, I want to share how we implemented our own extension api-server in the open-source PaaS platform, Cozystack.

Kubernetes truly amazes me with its powerful extensibility features. You're probably already familiar with the controller concept and frameworks like kubebuilder and operator-sdk that help you implement it. In a nutshell, they allow you to extend your Kubernetes cluster by defining custom resources (CRDs) and writing additional controllers that handle your business logic for reconciling and managing these kinds of resources. This approach is well documented, with a wealth of information available online on how to develop your own operators.

However, this is not the only way to extend the Kubernetes API. For more complex scenarios, such as implementing imperative logic, managing subresources, and dynamically generating responses, the Kubernetes API aggregation layer provides an effective alternative. Through the aggregation layer, you can develop a custom extension API server and seamlessly integrate it within the broader Kubernetes API framework. In this article, I will explore the API aggregation layer, the types of challenges it is well suited to address, cases where it may be less appropriate, and how we utilized this model to implement our own extension API server in Cozystack.

What Is the API Aggregation Layer?

First, let's get the definitions straight to avoid any confusion down the road. The API aggregation layer is a feature in Kubernetes, while an extension api-server is a specific implementation of an API server for the aggregation layer. An extension API server is just like the standard Kubernetes API server, except it runs separately and handles requests for your specific resource types.
So, the aggregation layer lets you write your own extension API server, integrate it easily into Kubernetes, and directly process requests for resources in a certain group. Unlike the CRD mechanism, the extension API is registered in Kubernetes as an APIService, telling Kubernetes to consider this new API server and acknowledge that it serves certain APIs.

You can execute this command to list all registered APIServices:

kubectl get apiservices.apiregistration.k8s.io

Example APIService:

NAME                         SERVICE                     AVAILABLE   AGE
v1alpha1.apps.cozystack.io   cozy-system/cozystack-api   True        7h29m

As soon as the Kubernetes api-server receives requests for resources in the group v1alpha1.apps.cozystack.io, it redirects all those requests to our extension api-server, which can handle them based on the business logic we've built into it.

When to use the API Aggregation Layer

The API Aggregation Layer helps solve several issues where the usual CRD mechanism might not be enough. Let's break them down.

Imperative Logic and Subresources

Besides regular resources, Kubernetes also has something called subresources. In Kubernetes, subresources are additional actions or operations you can perform on primary resources (like Pods, Deployments, Services) via the Kubernetes API. They provide interfaces to manage specific aspects of resources without affecting the entire object. A simple example is status, which is traditionally exposed as a separate subresource that you can access independently from the parent object; the status field isn't meant to be changed through ordinary updates to the object itself.

But beyond /status, Pods in Kubernetes also have subresources like /exec, /portforward, and /log. Interestingly, instead of the usual declarative resources in Kubernetes, these represent endpoints for imperative operations like viewing logs, proxying connections, executing commands in a running container, and so on. To support such imperative commands on your own API, you need to implement an extension API and an extension API server.
Here are some well-known examples:

KubeVirt: An add-on for Kubernetes that extends its API capabilities to run traditional virtual machines. The extension api-server created as part of KubeVirt handles subresources like /restart, /console, and /vnc for virtual machines.

Knative: A Kubernetes add-on that extends its capabilities for serverless computing, implementing the /scale subresource to set up autoscaling for its resource types.

By the way, even though subresource logic in Kubernetes can be imperative, you can manage access to it declaratively using the standard Kubernetes RBAC model. For example, this is how you can control access to the /log and /exec subresources of the Pod kind:

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: pod-and-pod-logs-reader
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create"]

You're not tied to using etcd

Usually, the Kubernetes API server uses etcd for its backend. However, implementing your own API server doesn't lock you into using only etcd. If it doesn't make sense to store your server's state in etcd, you can store information in any other system and generate responses on the fly. Here are a few cases to illustrate:

metrics-server is a standard extension for Kubernetes which allows you to view real-time metrics of your nodes and pods. It defines alternative Pod and Node kinds in its own metrics.k8s.io API. Requests to these resources are translated into metrics directly from the Kubelet. So when you run kubectl top node or kubectl top pod, metrics-server fetches metrics from cAdvisor in real time and returns them to you. Since the information is generated in real time and is only relevant at the moment of the request, there is no need to store it in etcd. This approach saves resources.

If needed, you can use a backend other than etcd. You can even implement a Kubernetes-compatible API for it.
For example, if you use Postgres, you can create a transparent representation of its entities in the Kubernetes API: databases, users, and grants within Postgres would appear as regular Kubernetes resources, thanks to your extension API server. You could manage them using kubectl or any other Kubernetes-compatible tool. Unlike controllers, which implement business logic using custom resources and reconciliation loops, an extension API server eliminates the need for a separate controller for every kind. This means you don't have to sync state between the Kubernetes API and your backend.

One-Time resources

Kubernetes has a special API used to provide users with information about their permissions: the SelfSubjectAccessReview API. One unusual detail of these resources is that you can't view them using the get or list verbs. You can only create them (using the create verb) and receive output with information about what you have access to at that moment. If you try to run kubectl get selfsubjectaccessreviews directly, you'll just get an error like this:

Error from server (MethodNotAllowed): the server does not allow this method on the requested resource

The reason is that the Kubernetes API server doesn't support any other interaction with this type of resource (you can only CREATE them). The SelfSubjectAccessReview API supports commands such as:

kubectl auth can-i create deployments --namespace dev

When you run the command above, kubectl creates a SelfSubjectAccessReview using the Kubernetes API. This allows Kubernetes to fetch a list of possible permissions for your user and generate a personalized response to your request in real time. This logic is different from a scenario where the resource is simply stored in etcd.
Similarly, in KubeVirt's CDI (Containerized Data Importer) extension, which allows file uploads into a PVC from a local machine using the virtctl tool, a special token is required before the upload process begins. This token is generated by creating an UploadTokenRequest resource via the Kubernetes API. Kubernetes routes (proxies) all UploadTokenRequest creation requests to the CDI extension API server, which generates and returns the token in response.

Full control over conversion, validation, and output formatting

Your own API server can have all the capabilities of the vanilla Kubernetes API server. The resources you create in your API server can be validated immediately on the server side without additional webhooks. While CRDs also support server-side validation using Common Expression Language (CEL) for declarative validation and ValidatingAdmissionPolicies without the need for webhooks, a custom API server allows for more complex and tailored validation logic if needed.

Kubernetes allows you to serve multiple API versions for each resource type, traditionally v1alpha1, v1beta1, and v1. Only one version can be specified as the storage version; all requests to other versions must be automatically converted to the version specified as the storage version. With CRDs, this mechanism is implemented using conversion webhooks, whereas in an extension API server you can implement your own conversion mechanism, choose to mix different storage versions (one object might be serialized as v1, another as v2), or rely on an external backing API.

Directly implementing the Kubernetes API also lets you format table output however you like and doesn't force you to follow the additionalPrinterColumns logic in CRDs. Instead, you can write your own formatter for the table output and the custom fields in it. For example, when using additionalPrinterColumns, you can display field values only by following JSONPath logic.
In your own API server, you can generate and insert values on the fly, formatting the table output as you wish.

Dynamic resource registration

The resources served by an extension api-server don't need to be pre-registered as CRDs. Once your extension API server is registered using an APIService, Kubernetes starts polling it to discover the APIs and resources it can serve. After receiving a discovery response, the Kubernetes API server automatically registers all available types for this API group. Although this isn't considered common practice, you can implement logic that dynamically registers the resource types you need in your Kubernetes cluster.

When not to use the API Aggregation Layer

There are some anti-patterns where using the API Aggregation Layer isn't recommended. Let's go through them.

Unstable backend

If your API server stops responding for some reason, whether due to an unavailable backend or other issues, it may block some Kubernetes functionality. For example, when deleting namespaces, Kubernetes will wait for a response from your API server to see if there are any remaining resources; if the response doesn't come, the namespace deletion will be blocked. Also, you might have encountered a situation where, when the metrics-server is unavailable, an extra message appears in stderr after every API request (even those unrelated to metrics) stating that metrics.k8s.io is unavailable. This is another example of how using the API Aggregation Layer can lead to problems when the api-server handling requests is unavailable.

Slow requests

If you can't guarantee an instant response for user requests, it's better to consider using a CustomResourceDefinition and a controller. Otherwise, you might make your cluster less stable. Many projects implement an extension API server only for a limited set of resources, particularly for imperative logic and subresources. This recommendation is also mentioned in the official Kubernetes documentation.
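Going back to the discovery handshake behind dynamic resource registration: a rough sketch of the response an extension api-server returns when the kube-apiserver polls it looks like this. The field names match the APIResourceList wire format, but the code itself is illustrative (a real server also serves OpenAPI and per-group endpoints), and the pluralization here is deliberately naive:

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// Simplified mirrors of the discovery types; the JSON tags match the
// wire format of APIResourceList.
type apiResource struct {
	Name       string   `json:"name"`
	Kind       string   `json:"kind"`
	Namespaced bool     `json:"namespaced"`
	Verbs      []string `json:"verbs"`
}

type apiResourceList struct {
	Kind         string        `json:"kind"`
	APIVersion   string        `json:"apiVersion"`
	GroupVersion string        `json:"groupVersion"`
	Resources    []apiResource `json:"resources"`
}

// discovery builds the response from a runtime-supplied list of kinds,
// so adding a kind to the server's configuration is enough to make it
// appear in the cluster; nothing is compiled in. Pluralization is
// naive (lowercase + "s") for the sketch only.
func discovery(kinds []string) apiResourceList {
	out := apiResourceList{
		Kind:         "APIResourceList",
		APIVersion:   "v1",
		GroupVersion: "apps.cozystack.io/v1alpha1",
	}
	for _, k := range kinds {
		out.Resources = append(out.Resources, apiResource{
			Name:       strings.ToLower(k) + "s",
			Kind:       k,
			Namespaced: true,
			Verbs:      []string{"get", "list", "create", "delete"},
		})
	}
	return out
}

func main() {
	b, _ := json.Marshal(discovery([]string{"Bucket", "Redis"}))
	fmt.Println(string(b))
}
```

Once the kube-apiserver receives such a response, the listed kinds become addressable through the normal Kubernetes API without any CRDs.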
Why we needed it in Cozystack

As a reminder, we're developing the open-source PaaS platform Cozystack, which can also be used as a framework for building your own private cloud. Therefore, the ability to easily extend the platform is crucial for us.

Cozystack is built on top of FluxCD. Any application is packaged into its own Helm chart, ready for deployment in a tenant namespace. Deploying any application on the platform is done by creating a HelmRelease resource that specifies the chart name and parameters for the application; all the remaining logic is handled by FluxCD. This pattern allows us to easily extend the platform with new applications, which just need to be packaged into the appropriate Helm chart.

Interface of the Cozystack platform

So, in our platform, everything is configured as HelmRelease resources. However, we ran into two problems: limitations of the RBAC model and the need for a public API. Let's delve into these.

Limitations of the RBAC model

The widely deployed RBAC system in Kubernetes doesn't allow you to restrict access to a list of resources of the same kind based on labels or specific fields in the spec. When creating a role, you can limit access across resources of the same kind only by specifying specific resource names in resourceNames. For verbs like get or update this works. However, filtering by resourceNames doesn't work for the list verb: you can limit listing of certain resources by kind, but not by name.
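This list limitation can be illustrated with a toy model of RBAC rule evaluation. The field names echo the real Role rule fields, but this is a sketch of the semantics, not the kube-apiserver's actual authorizer code:

```go
package main

import "fmt"

// rule is a toy stand-in for one RBAC rule.
type rule struct {
	Resources     []string
	Verbs         []string
	ResourceNames []string // empty means "all names"
}

func contains(list []string, s string) bool {
	for _, v := range list {
		if v == s {
			return true
		}
	}
	return false
}

// allowed checks one rule against one request. The key detail: a list
// request addresses the whole collection and carries no object name,
// so a rule restricted by resourceNames can never grant it.
func allowed(r rule, verb, resource, name string) bool {
	if !contains(r.Verbs, verb) || !contains(r.Resources, resource) {
		return false
	}
	if len(r.ResourceNames) == 0 {
		return true
	}
	if name == "" { // collection request (list, watch)
		return false
	}
	return contains(r.ResourceNames, name)
}

func main() {
	r := rule{
		Resources:     []string{"helmreleases"},
		Verbs:         []string{"get", "list"},
		ResourceNames: []string{"redis-test"},
	}
	fmt.Println(allowed(r, "get", "helmreleases", "redis-test")) // named get: allowed
	fmt.Println(allowed(r, "list", "helmreleases", ""))          // list: denied despite the rule
}
```

Because listing is all-or-nothing per resource kind, the only way to give a tenant a filtered list through plain RBAC is to give each filter its own kind.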
So, we decided to introduce new resource types based on the names of the Helm charts they use, and to generate the list of available kinds dynamically at runtime in our extension api-server. This way, we can reuse the standard Kubernetes RBAC model to manage access to specific resource types.

Need for a public API

Since our platform provides capabilities for deploying various managed services, we want to organize public access to the platform's API. However, we can't allow users to interact directly with resources like HelmRelease, because that would let them specify arbitrary names and parameters for Helm charts to deploy, potentially compromising our system.

We wanted to give users the ability to deploy a specific service simply by creating a resource with the corresponding kind in Kubernetes. The kind of this resource should be named the same as the chart from which it's deployed. Here are some examples:

kind: Kubernetes → chart: kubernetes
kind: Postgres → chart: postgres
kind: Redis → chart: redis
kind: VirtualMachine → chart: virtual-machine

Moreover, we don't want to have to add a new type to codegen and recompile our extension API server every time we add a new chart for it to start being served. The schema update should happen dynamically or be provided via a ConfigMap by the administrator.

Two-Way conversion

Currently, we already have integrations and a dashboard that continue to use HelmRelease resources, and at this stage we didn't want to lose the ability to support this API. Considering that we're simply translating one resource into another, support is maintained and works both ways: if you create a HelmRelease, you'll get a custom resource in Kubernetes, and if you create a custom resource in Kubernetes, it will also be available as a HelmRelease. We don't have any additional controllers that synchronize state between these resources. All requests to resources in our extension API server are transparently proxied to HelmRelease and vice versa.
This eliminates intermediate states and the need to write controllers and synchronization logic.

Implementation

To implement the aggregation API, you might consider starting with the following projects:

apiserver-builder: Currently in alpha and hasn't been updated for two years. It works like kubebuilder, providing a framework for creating an extension API server, allowing you to sequentially create a project structure and generate code for your resources.

sample-apiserver: A ready-made example of an implemented API server, based on official Kubernetes libraries, which you can use as a foundation for your project.

For practical reasons, we chose the second project. Here's what we needed to do:

Disable etcd support

In our case, we don't need it, since all resources are stored directly in the Kubernetes API. You can disable the etcd options by passing nil to RecommendedOptions.Etcd:

Disabling etcd options

Generate a common resource kind

We called it Application, and it looks like this:

Application type definition

This is a generic type used for any application type, and its handling logic is the same for all charts.
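The two ideas at the heart of this design, deriving a chart from a kind and translating Application to HelmRelease and back without storing anything in between, can be sketched with simplified stand-in types. The real types carry many more fields, the helper names below are ours, and in the actual platform the kind-to-chart mapping comes from configuration rather than being computed (naive kebab-casing would mishandle acronym kinds like VMDisk, which is one more reason to keep the mapping in config):

```go
package main

import (
	"fmt"
	"strings"
	"unicode"
)

// chartNameFor derives a chart name from a kind by converting
// CamelCase to kebab-case: VirtualMachine -> virtual-machine.
func chartNameFor(kind string) string {
	var b strings.Builder
	for i, r := range kind {
		if unicode.IsUpper(r) {
			if i > 0 {
				b.WriteByte('-')
			}
			b.WriteRune(unicode.ToLower(r))
		} else {
			b.WriteRune(r)
		}
	}
	return b.String()
}

// Simplified stand-ins for the real Application and HelmRelease types.
type application struct {
	Kind   string
	Name   string
	Values map[string]string
}

type helmRelease struct {
	Name   string // prefix + application name, e.g. "redis-test"
	Chart  string
	Values map[string]string
}

// toHelmRelease translates an Application into the HelmRelease that
// FluxCD acts on; no controller, no intermediate state.
func toHelmRelease(a application, prefix string) helmRelease {
	return helmRelease{Name: prefix + a.Name, Chart: chartNameFor(a.Kind), Values: a.Values}
}

// toApplication is the reverse mapping. The name prefix doubles as the
// filter deciding which HelmReleases belong to this kind.
func toApplication(hr helmRelease, kind, prefix string) (application, bool) {
	if !strings.HasPrefix(hr.Name, prefix) {
		return application{}, false // another kind's release
	}
	return application{Kind: kind, Name: strings.TrimPrefix(hr.Name, prefix), Values: hr.Values}, true
}

func main() {
	app := application{Kind: "Redis", Name: "test", Values: map[string]string{"size": "10Gi"}}
	hr := toHelmRelease(app, "redis-")
	back, ok := toApplication(hr, "Redis", "redis-")
	fmt.Println(hr.Chart, hr.Name, ok, back.Name) // redis redis-test true test
}
```

Because the translation is lossless in both directions, every read can be answered from the HelmRelease and every write can be applied to it directly.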
Configure configuration loading

Since we want to configure our extension api-server via a config file, we defined the config structure in Go:

Config type definition

We also modified the resource registration logic so that the resources we create are registered in the scheme with different Kind values:

Dynamic resource registration

As a result, we got a config where you can pass all possible types and specify what they should map to:

ConfigMap example

Implement our own registry

To store state not in etcd, but translate it directly into Kubernetes HelmRelease resources (and vice versa), we wrote conversion functions from Application to HelmRelease and from HelmRelease to Application:

Conversion functions

We implemented logic to filter resources by chart name, sourceRef, and prefix in the HelmRelease name:

Filtering functions

Then, using this logic, we implemented the methods Get(), Delete(), List(), and Create(). You can see the full example here: Registry Implementation

At the end of each method, we set the correct Kind and return an unstructured.Unstructured{} object so that Kubernetes serializes the object correctly. Otherwise, it would always serialize objects with kind: Application, which we don't want.

What did we achieve?
In Cozystack, all our types from the ConfigMap are now available in Kubernetes as-is:

kubectl api-resources | grep cozystack

buckets           apps.cozystack.io/v1alpha1   true   Bucket
clickhouses       apps.cozystack.io/v1alpha1   true   ClickHouse
etcds             apps.cozystack.io/v1alpha1   true   Etcd
ferretdb          apps.cozystack.io/v1alpha1   true   FerretDB
httpcaches        apps.cozystack.io/v1alpha1   true   HTTPCache
ingresses         apps.cozystack.io/v1alpha1   true   Ingress
kafkas            apps.cozystack.io/v1alpha1   true   Kafka
kuberneteses      apps.cozystack.io/v1alpha1   true   Kubernetes
monitorings       apps.cozystack.io/v1alpha1   true   Monitoring
mysqls            apps.cozystack.io/v1alpha1   true   MySQL
natses            apps.cozystack.io/v1alpha1   true   NATS
postgreses        apps.cozystack.io/v1alpha1   true   Postgres
rabbitmqs         apps.cozystack.io/v1alpha1   true   RabbitMQ
redises           apps.cozystack.io/v1alpha1   true   Redis
seaweedfses       apps.cozystack.io/v1alpha1   true   SeaweedFS
tcpbalancers      apps.cozystack.io/v1alpha1   true   TCPBalancer
tenants           apps.cozystack.io/v1alpha1   true   Tenant
virtualmachines   apps.cozystack.io/v1alpha1   true   VirtualMachine
vmdisks           apps.cozystack.io/v1alpha1   true   VMDisk
vminstances       apps.cozystack.io/v1alpha1   true   VMInstance
vpns              apps.cozystack.io/v1alpha1   true   VPN

We can work with them just like regular Kubernetes resources.
Listing S3 Buckets:

kubectl get buckets.apps.cozystack.io -n tenant-kvaps

Example output:

NAME       READY   AGE   VERSION
foo        True    22h   0.1.0
testaasd   True    27h   0.1.0

Listing Kubernetes Clusters:

kubectl get kuberneteses.apps.cozystack.io -n tenant-kvaps

Example output:

NAME    READY   AGE   VERSION
abc     False   19h   0.14.0
asdte   True    22h   0.13.0

Listing Virtual Machine Disks:

kubectl get vmdisks.apps.cozystack.io -n tenant-kvaps

Example output:

NAME             READY   AGE   VERSION
docker           True    21d   0.1.0
test             True    18d   0.1.0
win2k25-iso      True    21d   0.1.0
win2k25-system   True    21d   0.1.0

Listing Virtual Machine Instances:

kubectl get vminstances.apps.cozystack.io -n tenant-kvaps

Example output:

NAME      READY   AGE   VERSION
docker    True    21d   0.1.0
test      True    18d   0.1.0
win2k25   True    20d   0.1.0

We can create, modify, and delete each of them, and any interaction with them will be translated into HelmRelease resources, while also applying the resource structure and prefix in the name. To see all related Helm releases:

kubectl get helmreleases -n tenant-kvaps -l cozystack.io/ui

Example output:

NAME                     AGE   READY
bucket-foo               22h   True
bucket-testaasd          27h   True
kubernetes-abc           19h   False
kubernetes-asdte         22h   True
redis-test               18d   True
redis-yttt               12d   True
vm-disk-docker           21d   True
vm-disk-test             18d   True
vm-disk-win2k25-iso      21d   True
vm-disk-win2k25-system   21d   True
vm-instance-docker       21d   True
vm-instance-test         18d   True
vm-instance-win2k25      20d   True

Next Steps

We don't intend to stop here with our API. In the future, we plan to add new features:

Add validation based on an OpenAPI spec generated directly from Helm charts.
Develop a controller that collects release notes from deployed releases and shows users access information for specific services.
Revamp our dashboard to work directly with the new API.

Conclusion

The API Aggregation Layer allowed us to quickly and efficiently solve our problem by providing a flexible mechanism for extending the Kubernetes API with dynamically registered resources and converting them on the fly.
Ultimately, this made our platform even more flexible and extensible without the need to write code for each new resource. You can test the API yourself in the open-source PaaS platform Cozystack, starting from version v0.18.
-
Kubernetes v1.32 sneak peek
on November 8, 2024 at 12:00 am
As we get closer to the release date for Kubernetes v1.32, the project develops and matures. Features may be deprecated, removed, or replaced with better ones for the project's overall health.

This blog outlines some of the planned changes for the Kubernetes v1.32 release that the release team feels you should be aware of, for the continued maintenance of your Kubernetes environment and to keep up to date with the latest changes. The information listed below is based on the current status of the v1.32 release and may change before the actual release date.

The Kubernetes API removal and deprecation process

The Kubernetes project has a well-documented deprecation policy for features. This policy states that stable APIs may only be deprecated when a newer, stable version of that API is available and that APIs have a minimum lifetime for each stability level. A deprecated API is one that has been marked for removal in a future Kubernetes release; it will continue to function until removal (at least one year from the deprecation), but its usage will result in a warning being displayed. Removed APIs are no longer available in the current version, so you must migrate to the replacement instead.

Generally available (GA) or stable API versions may be marked as deprecated but must not be removed within a major version of Kubernetes. Beta or pre-release API versions must be supported for 3 releases after the deprecation. Alpha or experimental API versions may be removed in any release without prior deprecation notice; this process can become a withdrawal in cases where a different implementation for the same feature is already in place.

Whether an API is removed due to a feature graduating from beta to stable or because that API did not succeed, all removals comply with this deprecation policy. Whenever an API is removed, migration options are communicated in the deprecation guide.
Note on the withdrawal of the old DRA implementation

The enhancement #3063 introduced Dynamic Resource Allocation (DRA) in Kubernetes 1.26. However, in Kubernetes v1.32, this approach to DRA will be significantly changed. Code related to the original implementation will be removed, leaving KEP #4381 as the "new" base functionality. The decision to change the existing approach originated from its incompatibility with cluster autoscaling, as resource availability was non-transparent, complicating decision-making for both the Cluster Autoscaler and controllers. The newly added Structured Parameter model substitutes the functionality. This removal will allow Kubernetes to handle new hardware requirements and resource claims more predictably, bypassing the complexities of back-and-forth API calls to the kube-apiserver. Please also see the enhancement issue #3063 to find out more.

API removal

There is only a single API removal planned for Kubernetes v1.32:

The flowcontrol.apiserver.k8s.io/v1beta3 API version of FlowSchema and PriorityLevelConfiguration has been removed. To prepare for this, you can edit your existing manifests and rewrite client software to use the flowcontrol.apiserver.k8s.io/v1 API version, available since v1.29. All existing persisted objects are accessible via the new API. Notable changes in flowcontrol.apiserver.k8s.io/v1 include that the PriorityLevelConfiguration spec.limited.nominalConcurrencyShares field only defaults to 30 when unspecified, and an explicit value of 0 is not changed to 30. For more information, please refer to the API deprecation guide.

Sneak peek of Kubernetes v1.32

The following list of enhancements is likely to be included in the v1.32 release. This is not a commitment and the release content is subject to change.

Even more DRA enhancements!
In this release, like the previous one, the Kubernetes project continues to propose a number of enhancements to Dynamic Resource Allocation (DRA), a key component of the Kubernetes resource management system. These enhancements aim to improve the flexibility and efficiency of resource allocation for workloads that require specialized hardware, such as GPUs, FPGAs, and network adapters. This release introduces improvements including the addition of resource health status in the Pod status, as outlined in KEP #4680.

Add resource health status to the Pod status

It isn't easy to know when a Pod uses a device that has failed or is temporarily unhealthy. KEP #4680 proposes exposing device health via the Pod status, making troubleshooting of Pod crashes easier.

Windows strikes back!

KEP #4802 adds support for graceful shutdowns of Windows nodes in Kubernetes clusters. Before this release, Kubernetes provided graceful node shutdown functionality for Linux nodes but lacked equivalent support for Windows. This enhancement enables the kubelet on Windows nodes to handle system shutdown events properly. In doing so, it ensures that Pods running on Windows nodes are gracefully terminated, allowing workloads to be rescheduled without disruption. This improvement enhances the reliability and stability of clusters that include Windows nodes, especially during planned maintenance or system updates.

Allow special characters in environment variables

With the graduation of this enhancement to beta, Kubernetes now allows almost all printable ASCII characters (excluding "=") to be used as environment variable names. This change addresses the limitations previously imposed on variable naming, facilitating broader adoption of Kubernetes by accommodating various application needs.
The relaxed validation will be enabled by default via the RelaxedEnvironmentVariableValidation feature gate, ensuring that users can easily utilize environment variables without strict constraints, enhancing flexibility for developers working with applications like .NET Core that require special characters in their configurations.

Make Kubernetes aware of the LoadBalancer behavior

KEP #1860 graduates to GA, introducing the ipMode field for a Service of type: LoadBalancer, which can be set to either "VIP" or "Proxy". This enhancement is aimed at improving how cloud providers' load balancers interact with kube-proxy, and it is a change transparent to the end user. The existing behavior of kube-proxy is preserved when using "VIP", where kube-proxy handles the load balancing. Using "Proxy" results in traffic being sent directly to the load balancer, giving cloud providers greater control over relying on kube-proxy; this means that you could see an improvement in the performance of your load balancer for some cloud providers.

Retry generate name for resources

This enhancement improves how name conflicts are handled for Kubernetes resources created with the generateName field. Previously, if a name conflict occurred, the API server returned a 409 HTTP Conflict error and clients had to manually retry the request. With this update, the API server automatically retries generating a new name up to seven times in case of a conflict. This significantly reduces the chance of collision, ensuring smooth generation of up to 1 million names with less than a 0.1% probability of a conflict, and providing more resilience for large-scale workloads.

Want to know more?

New features and deprecations are also announced in the Kubernetes release notes. We will formally announce what's new in Kubernetes v1.32 as part of the CHANGELOG for this release. You can see the announcements of changes in the release notes for: Kubernetes v1.31, Kubernetes v1.30, Kubernetes v1.29, Kubernetes v1.28
-
Spotlight on Kubernetes Upstream Training in Japan
on October 28, 2024 at 12:00 am
We are the organizers of Kubernetes Upstream Training in Japan. Our team is composed of members who actively contribute to Kubernetes, including individuals who hold roles such as member, reviewer, approver, and chair.

Our goal is to increase the number of Kubernetes contributors and foster the growth of the community. While the Kubernetes community is friendly and collaborative, newcomers may find the first step of contributing to be a bit challenging. Our training program aims to lower that barrier and create an environment where even beginners can participate smoothly.

What is Kubernetes upstream training in Japan?

Our training started in 2019 and is held once or twice a year. Initially, Kubernetes Upstream Training was conducted as a co-located event of KubeCon (the Kubernetes Contributor Summit), but we launched Kubernetes Upstream Training in Japan with the aim of increasing the number of Japanese contributors by hosting a similar event in Japan. Before the pandemic, the training was held in person, but since 2020, it has been conducted online.

The training offers the following content for those who have not yet contributed to Kubernetes:

Introduction to the Kubernetes community
Overview of the Kubernetes codebase and how to create your first PR
Tips and encouragement to lower participation barriers, such as language
How to set up the development environment
Hands-on session using kubernetes-sigs/contributor-playground

At the beginning of the program, we explain why contributing to Kubernetes is important and who can contribute. We emphasize that contributing to Kubernetes allows you to make a global impact and that the Kubernetes community is looking forward to your contributions! We also explain the Kubernetes community, SIGs, and Working Groups. Next, we explain the roles and responsibilities of Member, Reviewer, Approver, Tech Lead, and Chair. Additionally, we introduce the communication tools we primarily use, such as Slack, GitHub, and mailing lists.
Some Japanese speakers may feel that communicating in English is a barrier. Additionally, those who are new to the community need to understand where and how communication takes place. We emphasize the importance of taking that first step, which is the most important aspect we focus on in our training!

We then go over the structure of the Kubernetes codebase, the main repositories, how to create a PR, and the CI/CD process using Prow. We explain in detail the process from creating a PR to getting it merged. After several lectures, participants get hands-on experience using kubernetes-sigs/contributor-playground, where they can create a simple PR. The goal is for participants to get a feel for the process of contributing to Kubernetes. At the end of the program, we also provide a detailed explanation of setting up the development environment for contributing to the kubernetes/kubernetes repository, including building code locally, running tests efficiently, and setting up clusters.

Interview with participants

We conducted interviews with those who participated in our training program. We asked them about their reasons for joining, their impressions, and their future goals.

Keita Mochizuki (NTT DATA Group Corporation)

Keita Mochizuki is a contributor who consistently contributes to Kubernetes and related projects. Keita is also a professional in container security and has recently published a book. Additionally, he has made available a Roadmap for New Contributors, which is highly beneficial for those new to contributing.

Junya: Why did you decide to participate in Kubernetes Upstream Training?

Keita: Actually, I participated twice, in 2020 and 2022. In 2020, I had just started learning about Kubernetes and wanted to try getting involved in activities outside of work, so I signed up after seeing the event on Twitter by chance. However, I didn't have much knowledge at the time, and contributing to OSS felt like something beyond my reach.
As a result, my understanding after the training was shallow, and I left with more of a “hmm, okay” feeling. In 2022, I participated again when I was at a stage where I was seriously considering starting contributions. This time, I did prior research and was able to resolve my questions during the lectures, making it a very productive experience.

Junya: How did you feel after participating?

Keita: I felt that the significance of this training greatly depends on the participant’s mindset. The training itself consists of general explanations and simple hands-on exercises, but it doesn’t mean that attending the training will immediately lead to contributions.

Junya: What is your purpose for contributing?

Keita: My initial motivation was to “gain a deep understanding of Kubernetes and build a track record,” meaning “contributing itself was the goal.” Nowadays, I also contribute to address bugs or constraints I discover during my work. Additionally, through contributing, I’ve become less hesitant to analyze undocumented features directly from the source code.

Junya: What has been challenging about contributing?

Keita: The most difficult part was taking the first step. Contributing to OSS requires a certain level of knowledge, and leveraging resources like this training and support from others was essential. One phrase that stuck with me was, “Once you take the first step, it becomes easier to move forward.” Also, in terms of continuing contributions as part of my job, the most challenging aspect is presenting the outcomes as achievements. To keep contributing over time, it’s important to align it with business goals and strategies, but upstream contributions don’t always lead to immediate results that can be directly tied to performance. Therefore, it’s crucial to ensure mutual understanding with managers and gain their support.

Junya: What are your future goals?

Keita: My goal is to contribute to areas with a larger impact.
So far, I’ve mainly contributed by fixing smaller bugs, as my primary focus was building a track record, but moving forward, I’d like to challenge myself with contributions that have a greater impact on Kubernetes users or that address issues related to my work. Recently, I’ve also been working on reflecting the changes I’ve made to the codebase in the official documentation, and I see this as a step toward achieving my goals.

Junya: Thank you very much!

Yoshiki Fujikane (CyberAgent, Inc.)

Yoshiki Fujikane is one of the maintainers of PipeCD, a CNCF Sandbox project. In addition to developing new features for Kubernetes support in PipeCD, Yoshiki actively participates in community management and speaks at various technical conferences.

Junya: Why did you decide to participate in the Kubernetes Upstream Training?

Yoshiki: At the time I participated, I was still a student. I had only briefly worked with EKS, but I thought Kubernetes seemed complex yet cool, and I was casually interested in it. Back then, OSS felt like something out of reach, and upstream development for Kubernetes seemed incredibly daunting. While I had always been interested in OSS, I didn’t know where to start. It was during this time that I learned about the Kubernetes Upstream Training and decided to take on the challenge of contributing to Kubernetes.

Junya: What were your impressions after participating?

Yoshiki: I found it extremely valuable as a way to understand what it’s like to be part of an OSS community. At the time, my English skills weren’t very strong, so accessing primary sources of information felt like a big hurdle for me. Kubernetes is a very large project, and I didn’t have a clear understanding of the overall structure, let alone what was necessary for contributing. The upstream training provided a Japanese explanation of the community structure and allowed me to gain hands-on experience with actual contributions.
Thanks to the guidance I received, I was able to learn how to approach primary sources and use them as entry points for further investigation, which was incredibly helpful. This experience made me realize the importance of organizing and reviewing primary sources, and now I often dive into GitHub issues and documentation when something piques my interest. As a result, while I am no longer contributing to Kubernetes itself, the experience has been a great foundation for contributing to other projects.

Junya: What areas are you currently contributing to, and what are the other projects you’re involved in?

Yoshiki: Right now, I’m no longer working with Kubernetes; instead, I’m a maintainer of PipeCD, a CNCF Sandbox project. PipeCD is a CD tool that supports GitOps-style deployments for various application platforms. The tool originally started as an internal project at CyberAgent. With different teams adopting different platforms, PipeCD was developed to provide a unified CD platform with a consistent user experience. Currently, it supports Kubernetes, AWS ECS, Lambda, Cloud Run, and Terraform.

Junya: What role do you play within the PipeCD team?

Yoshiki: I work full-time on improving and developing Kubernetes-related features within the team. Since we provide PipeCD as a SaaS internally, my main focus is on adding new features and improving existing ones as part of that support. In addition to code contributions, I also contribute by giving talks at various events and managing community meetings to help grow the PipeCD community.

Junya: Could you explain what kind of improvements or developments you are working on with regard to Kubernetes?

Yoshiki: PipeCD supports GitOps and Progressive Delivery for Kubernetes, so I’m involved in the development of those features. Recently, I’ve been working on features that streamline deployments across multiple clusters.

Junya: Have you encountered any challenges while contributing to OSS?
Yoshiki: One challenge is developing features that maintain generality while meeting user use cases. When we receive feature requests while operating the internal SaaS, we first consider adding features to solve those issues. At the same time, we want PipeCD to be used by a broader audience as an OSS tool. So, I always think about whether a feature designed for one use case could be applied to another, ensuring the software remains flexible and widely usable.

Junya: What are your goals moving forward?

Yoshiki: I want to focus on expanding PipeCD’s functionality. Currently, we are developing PipeCD under the slogan “One CD for All.” As I mentioned earlier, it supports Kubernetes, AWS ECS, Lambda, Cloud Run, and Terraform, but there are many other platforms out there, and new platforms may emerge in the future. For this reason, we are currently developing a plugin system that will allow users to extend PipeCD on their own, and I want to push this effort forward. I’m also working on features for multi-cluster deployments in Kubernetes, and I aim to continue making impactful contributions.

Junya: Thank you very much!

Future of Kubernetes upstream training

We plan to continue hosting Kubernetes Upstream Training in Japan and look forward to welcoming many new contributors. Our next session is scheduled to take place at the end of November during CloudNative Days Winter 2024. Moreover, our goal is to expand these training programs not only in Japan but also around the world. Kubernetes celebrated its 10th anniversary this year, and for the community to become even more active, it’s crucial for people across the globe to continue contributing. While Upstream Training is already held in several regions, we aim to bring it to even more places. We hope that as more people join the Kubernetes community and contribute, our community will become even more vibrant!
-
Announcing the 2024 Steering Committee Election Results
on October 2, 2024 at 8:10 pm
The 2024 Steering Committee Election is now complete. The Kubernetes Steering Committee consists of 7 seats, 3 of which were up for election in 2024. Incoming committee members serve a term of 2 years, and all members are elected by the Kubernetes Community. This community body is significant since it oversees the governance of the entire Kubernetes project. With that great power comes great responsibility. You can learn more about the steering committee’s role in their charter. Thank you to everyone who voted in the election; your participation helps support the community’s continued health and success.

Results

Congratulations to the elected committee members whose two-year terms begin immediately (listed in alphabetical order by GitHub handle):
- Antonio Ojea (@aojea), Google
- Benjamin Elder (@BenTheElder), Google
- Sascha Grunert (@saschagrunert), Red Hat

They join continuing members:
- Stephen Augustus (@justaugustus), Cisco
- Paco Xu 徐俊杰 (@pacoxu), DaoCloud
- Patrick Ohly (@pohly), Intel
- Maciej Szulik (@soltysh), Defense Unicorns

Benjamin Elder is a returning Steering Committee Member. Big thanks! Thank you and congratulations on a successful election to this round’s election officers:
- Bridget Kromhout (@bridgetkromhout)
- Christoph Blecker (@cblecker)
- Priyanka Saggu (@Priyankasaggu11929)

Thanks to the Emeritus Steering Committee Members. Your service is appreciated by the community:
- Bob Killen (@mrbobbytables)
- Nabarun Pal (@palnabarun)

And thank you to all the candidates who came forward to run for election.

Get involved with the Steering Committee

This governing body, like all of Kubernetes, is open to all. You can follow along with Steering Committee meeting notes and weigh in by filing an issue or creating a PR against their repo. They have an open meeting on the first Monday of every month at 8am PT. They can also be contacted at their public mailing list, steering@kubernetes.io.
You can see what the Steering Committee meetings are all about by watching past meetings on the YouTube Playlist. If you want to meet some of the newly elected Steering Committee members, join us for the Steering AMA at the Kubernetes Contributor Summit North America 2024 in Salt Lake City. This post was adapted from one written by the Contributor Comms Subproject. If you want to write stories about the Kubernetes community, learn more about us.
-
Spotlight on CNCF Deaf and Hard-of-hearing Working Group (DHHWG)
on September 30, 2024 at 12:00 am
In recognition of Deaf Awareness Month and the importance of inclusivity in the tech community, we are spotlighting Catherine Paganini, facilitator and one of the founding members of the CNCF Deaf and Hard-of-Hearing Working Group (DHHWG). In this interview, Sandeep Kanabar, a deaf member of the DHHWG and part of the Kubernetes SIG ContribEx Communications team, sits down with Catherine to explore the impact of the DHHWG on cloud native projects like Kubernetes.

Sandeep’s journey is a testament to the power of inclusion. Through his involvement in the DHHWG, he connected with members of the Kubernetes community who encouraged him to join SIG ContribEx – the group responsible for sustaining the Kubernetes contributor experience. In an ecosystem where open-source projects are actively seeking contributors and maintainers, this story highlights how important it is to create pathways for underrepresented groups, including those with disabilities, to contribute their unique perspectives and skills.

In this interview, we delve into Catherine’s journey, the challenges and triumphs of establishing the DHHWG, and the vision for a more inclusive future in cloud native. We invite Kubernetes contributors, maintainers, and community members to reflect on the significance of empathy, advocacy, and community in fostering a truly inclusive environment for all, and to think about how they can support efforts to increase diversity and accessibility within their own projects.

Introduction

Sandeep Kanabar (SK): Hello Catherine, could you please introduce yourself, share your professional background, and explain your connection to the Kubernetes ecosystem?

Catherine Paganini (CP): I’m the Head of Marketing at Buoyant, the creator of Linkerd, the CNCF-graduated service mesh and 5th CNCF project. Four years ago, I started contributing to open source. The initial motivation was to make cloud native concepts more accessible to newbies and non-technical people.
Without a technical background, it was hard for me to understand what Kubernetes, containers, service meshes, etc. mean. All content was targeted at engineers already familiar with foundational concepts. Clearly, I couldn’t be the only one struggling to wrap my head around cloud native. My first contribution was the CNCF Landscape Guide, which I co-authored with my former colleague Jason Morgan. Next, we started the CNCF Glossary, which explains cloud native concepts in simple terms. Today, the glossary has been (partially) localised into 14 languages!

Currently, I’m the co-chair of the TAG Contributor Strategy and the Facilitator of the Deaf and Hard of Hearing Working Group (DHHWG) and the Blind and Visually Impaired WG (BVIWG), which is still in formation. I’m also working on a new Linux Foundation (LF) initiative called ABIDE (Accessibility and Belonging through Inclusion, Diversity, and Equity), so stay tuned to learn more about it!

Motivation and early milestones

SK: That’s inspiring! Building on your passion for accessibility, what motivated you to facilitate the creation of the DHHWG? Was there a specific moment or experience that sparked this initiative?

CP: Last year at KubeCon Amsterdam, I learned about a great initiative by Jay Tihema that creates pathways for Maori youth into cloud native and open source. While telling my CODA (child of deaf adults) high school friend about it, I thought it’d be great to create something similar for deaf folks. A few months later, I posted about it in a LinkedIn post that the CNCF shared. Deaf people started to reach out, wanting to participate. And the rest is history.

SK: Speaking of history, since its launch, how has the DHHWG evolved? Could you highlight some of the key milestones or achievements the group has reached recently?

CP: Our WG is about a year old. It started with a few deaf engineers and me brainstorming how to make KubeCon more accessible.
We published an initial draft of Best practices for an inclusive conference and shared it with the LF events team. KubeCon Chicago was two months later, and we had a couple of deaf attendees. It was the first KubeCon accessible to deaf signers. Destiny, one of our co-chairs, even participated in a keynote panel. It was incredible how quickly everything happened!

DHHWG members at KubeCon Chicago

The team has grown since then, and we’ve been able to do much more. With a kiosk in the project pavilion, an open space discussion, a sign language crash course, and a few media interviews, KubeCon Paris had a stronger advocacy and outreach focus. Check out this video of our team in Paris to get a glimpse of all the different KubeCon activities — it was such a great event! The team also launched the first CNCF Community Group in sign language, Deaf in Cloud Native, a glossary team that creates sign language videos for each technical term to help standardize technical signs across the globe. It’s crazy to think that it all happened within one year!

Overcoming challenges and addressing misconceptions

SK: That’s remarkable progress in just a year! Building such momentum must have come with its challenges. What barriers have you encountered in facilitating the DHHWG, and how did you and the group work to overcome them?

CP: The support from the community, LF, and CNCF has been incredible. The fact that we achieved so much is proof of it. The challenges are more in helping some team members overcome their fear of contributing. Most are new to open source, and it can be intimidating to put your work out there for everyone to see. The fear of being criticized in public is real; however, as they will hopefully realize over time, our community is incredibly supportive. Instead of criticizing, people tend to help improve the work, leading to better outcomes.

SK: Are there any misconceptions about the deaf and hard-of-hearing community in tech that you’d like to address?
CP: Deaf and hard of hearing individuals are very diverse — there is no one-size-fits-all. Some deaf people are oral (speak), others sign, while some lip read or prefer captions. It generally depends on how people grew up. While some people come from deaf families and sign language is their native language, others were born into hearing families who may or may not have learned how to sign. Some deaf people grew up surrounded by hearing people, while others grew up deeply embedded in Deaf culture. Hard-of-hearing individuals, on the other hand, typically can communicate well with hearing peers one-on-one in quiet settings, but loud environments or conversations with multiple people can make it hard to follow the conversation. Most rely heavily on captions. Each background and experience will shape their communication style and preferences. In short, what works for one person doesn’t necessarily work for others. So never assume, and always ask about accessibility needs and preferences.

Impact and the role of allies

SK: Can you share some key impacts/outcomes of the conference best practices document?

CP: Here are the two most important ones:
- Captions should be on the monitor, not in an app. That’s especially important during technical talks with live demos. Deaf and hard of hearing attendees will miss important information switching between captions on their phone and code on the screen.
- Interpreters are most valuable during networking, not in talks (with captions). Most people come to conferences for the hallway track. That is no different for deaf attendees. If they can’t network, they are missing out on key professional connections, affecting their career prospects.

SK: In your view, how crucial is the role of allies within the DHHWG, and what contributions have they made to the group’s success?

CP: Deaf and hard of hearing individuals are a minority and can only do so much. Allies are the key to any diversity and inclusion initiative.
As a majority, allies can help spread the word and educate their peers, playing a key role in scaling advocacy efforts. They also have the power to demand change. It’s easy for companies to ignore minorities, but if the majority demands that their employers be accessible, environmentally conscious, and good citizens, they will ultimately be pushed to adapt to new societal values.

Expanding DEI efforts and future vision

SK: The importance of allies in driving change is clear. Beyond the DHHWG, are you involved in any other DEI groups or initiatives within the tech community?

CP: As mentioned above, I’m working on an initiative called ABIDE, which is still a work in progress. I don’t want to share too much about it yet, but what I can say is that the DHHWG will be part of it and that we just started a Blind and Visually Impaired WG (BVIWG). ABIDE will start by focusing on accessibility, so if anyone reading this has an idea for another WG, please reach out to me via the CNCF Slack @Catherine Paganini.

SK: What does the future hold for the DHHWG? Can you share details about any ongoing or upcoming initiatives?

CP: I think we’ve been very successful in terms of visibility and awareness so far. We can’t stop, though. Awareness work is ongoing, and most people in our community haven’t heard about us or met anyone on our team yet, so a lot of work still lies ahead.

DHHWG members at KubeCon Paris

The next step is to refocus on advocacy. The same thing we did with the conference best practices, but for other areas. The goal is to help educate the community about what real accessibility looks like, how projects can be more accessible, and why employers should seriously consider deaf candidates while providing them with the tools they need to conduct successful interviews and employee onboarding. We need to capture all that in documents, publish it, and then get the word out. That last part is certainly the most challenging, but it’s also where everyone can get involved.
Call to action

SK: Thank you for sharing your insights, Catherine. As we wrap up, do you have any final thoughts or a call to action for our readers?

CP: As we build our accessibility page, check in regularly to see what’s new. Share the docs with your team, employer, and network — anyone, really. The more people understand what accessibility really means and why it matters, the more they will recognize when something isn’t accessible and be able to call out marketing BS, which, unfortunately, is more often the case than not. We need allies to help push for change. No minority can do this on their own. So please learn about accessibility, keep an eye out for it, and call it out when something isn’t accessible. We need your help!

Wrapping up

Catherine and the DHHWG’s work exemplify the power of community and advocacy. As we celebrate Deaf Awareness Month, let’s reflect on her role as an ally and consider how we can all contribute to building a more inclusive tech community, particularly within open-source projects like Kubernetes. Together, we can break down barriers, challenge misconceptions, and ensure that everyone feels welcome and valued. By advocating for accessibility, supporting initiatives like the DHHWG, and fostering a culture of empathy, we can create a truly inclusive and welcoming space for all.