diff --git a/config.toml b/config.toml
index 9b649a4230..c34cfa20d2 100644
--- a/config.toml
+++ b/config.toml
@@ -188,30 +188,30 @@ docsbranch = "main"
url = "https://kubernetes.io"

[[params.versions]]
-fullversion = "v1.25.0"
+fullversion = "v1.25.5"
version = "v1.25"
-githubbranch = "v1.25.0"
-docsbranch = "main"
-url = "https://kubernetes.io"
+githubbranch = "v1.25.5"
+docsbranch = "release-1.25"
+url = "https://v1-25.docs.kubernetes.io"

[[params.versions]]
-fullversion = "v1.24.2"
+fullversion = "v1.24.9"
version = "v1.24"
-githubbranch = "v1.24.2"
+githubbranch = "v1.24.9"
docsbranch = "release-1.24"
url = "https://v1-24.docs.kubernetes.io"

[[params.versions]]
-fullversion = "v1.23.8"
+fullversion = "v1.23.15"
version = "v1.23"
-githubbranch = "v1.23.8"
+githubbranch = "v1.23.15"
docsbranch = "release-1.23"
url = "https://v1-23.docs.kubernetes.io"

[[params.versions]]
-fullversion = "v1.22.11"
+fullversion = "v1.22.17"
version = "v1.22"
-githubbranch = "v1.22.11"
+githubbranch = "v1.22.17"
docsbranch = "release-1.22"
url = "https://v1-22.docs.kubernetes.io"

diff --git a/content/en/blog/_posts/2022-12-13-host-process-containers-ga/hpc_architecture.svg b/content/en/blog/_posts/2022-12-13-host-process-containers-ga/hpc_architecture.svg
new file mode 100644
index 0000000000..81d0044b99
--- /dev/null
+++ b/content/en/blog/_posts/2022-12-13-host-process-containers-ga/hpc_architecture.svg
@@ -0,0 +1 @@
+ \ No newline at end of file
diff --git a/content/en/blog/_posts/2022-12-13-host-process-containers-ga/index.md b/content/en/blog/_posts/2022-12-13-host-process-containers-ga/index.md
new file mode 100644
index 0000000000..72488b9bf8
--- /dev/null
+++ b/content/en/blog/_posts/2022-12-13-host-process-containers-ga/index.md
@@ -0,0 +1,122 @@
---
layout: blog
title: "Kubernetes 1.26: Windows HostProcess Containers Are Generally Available"
date: 2022-12-13
slug: windows-host-process-containers-ga
---

**Authors**: Brandon Smith (Microsoft) and Mark Rossetti (Microsoft)

The long-awaited day has arrived: HostProcess containers, the Windows equivalent to Linux privileged
containers, have finally made it to **GA in Kubernetes 1.26**!

What are HostProcess containers and why are they useful?

Cluster operators are often faced with the need to configure their nodes upon provisioning, such as
installing Windows services, configuring registry keys, managing TLS certificates,
making network configuration changes, or even deploying monitoring tools such as the Prometheus node-exporter.
Previously, performing these actions on Windows nodes was usually done by running PowerShell scripts
over SSH or WinRM sessions and/or working with your cloud provider's virtual machine management tooling.
HostProcess containers now enable you to do all of this and more with minimal effort using Kubernetes native APIs.

With HostProcess containers you can now package any payload
into the container image, map volumes into containers at runtime, and manage them like any other Kubernetes workload.
You get all the benefits of containerized packaging and deployment methods combined with a reduction in
both administrative and development cost.
Gone are the days when cluster operators would need to manually log onto
Windows nodes to perform administrative duties.

[HostProcess containers](/docs/tasks/configure-pod-container/create-hostprocess-pod/) differ
quite significantly from regular Windows Server containers.
They are run directly as processes on the host with the access policies of
a user you specify. HostProcess containers run either as one of the built-in Windows system accounts or
as an ephemeral user within a user group defined by you. HostProcess containers also share
the host's network namespace and can access and configure storage mounts visible to the host.
On the other hand, Windows Server containers are highly isolated and exist in a separate
execution namespace. Direct access to the host from a Windows Server container is explicitly disallowed
by default.

## How does it work?

Windows HostProcess containers are implemented with Windows [_Job Objects_](https://learn.microsoft.com/en-us/windows/win32/procthread/job-objects),
a break from the previous container model, which uses server silos.
Job Objects are components of the Windows OS which offer the ability to
manage a group of processes as a unit (known as a _job_) and assign resource constraints to the
group as a whole. Job objects are specific to the Windows OS and are not associated with
the Kubernetes [Job API](/docs/concepts/workloads/controllers/job/). They have no process
or file system isolation,
enabling the privileged payload to view and edit the host file system with the
desired permissions, among other host resources. The init process, and any processes
it launches (including processes explicitly launched by the user), are all assigned to the
job object of that container. When the init process exits or is signaled to exit,
all the processes in the job will be signaled to exit, the job handle will be
closed and the storage will be unmounted.

HostProcess and Linux privileged containers enable similar scenarios but differ
greatly in their implementation (hence the naming difference). HostProcess containers
have their own [PodSecurityContext](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#windowssecuritycontextoptions-v1-core) fields.
Those used to configure Linux privileged containers **do not** apply. Enabling privileged access to a Windows host is a
fundamentally different process than with Linux, so the configuration and
capabilities of each differ significantly. Below is a diagram detailing the
overall architecture of Windows HostProcess containers:

{{< figure src="hpc_architecture.svg" alt="HostProcess Architecture" >}}

Two major features were added prior to moving to stable: the ability to run as local user accounts, and
a simplified method of accessing volume mounts. To learn more, read
[Create a Windows HostProcess Pod](/docs/tasks/configure-pod-container/create-hostprocess-pod/).

## HostProcess containers in action

Kubernetes SIG Windows has been busy putting HostProcess containers to use - even before GA!
They've been very excited to use HostProcess containers for a number of important activities
that were a pain to perform in the past.
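Concretely, a HostProcess workload is declared through the Pod's Windows security context. Here is a minimal, hedged sketch; the Pod name, image, and command are placeholder assumptions, while the `windowsOptions` fields are the ones this feature defines:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hostprocess-demo   # hypothetical name
spec:
  nodeSelector:
    kubernetes.io/os: windows
  securityContext:
    windowsOptions:
      hostProcess: true                      # run directly on the host
      runAsUserName: "NT AUTHORITY\\SYSTEM"  # one of the built-in system accounts
  hostNetwork: true   # HostProcess pods must share the host's network namespace
  containers:
  - name: node-setup
    image: registry.example.com/node-setup:v0.1   # placeholder image
    command: ["powershell.exe", "-Command", "Get-Service kubelet"]
```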
Here are just a few of the many use cases with example deployments:

- [CNI solutions and kube-proxy](https://github.com/kubernetes-sigs/sig-windows-tools/tree/master/hostprocess/calico#calico-example)
- [windows-exporter](https://github.com/prometheus-community/windows_exporter/blob/master/kubernetes/windows-exporter-daemonset.yaml)
- [csi-proxy](https://github.com/kubernetes-sigs/sig-windows-tools/tree/master/hostprocess/csi-proxy)
- [Windows-debug container](https://github.com/jsturtevant/windows-debug)
- [ETW event streaming](https://github.com/kubernetes-sigs/sig-windows-tools/tree/master/hostprocess/eventflow-logger)

## How do I use it?

A HostProcess container can be built using any base image of your choosing; however, for convenience we have
created [a HostProcess container base image](https://github.com/microsoft/windows-host-process-containers-base-image).
This image is only a few KB in size and does not inherit any of the compatibility requirements of regular Windows
Server containers, which allows it to run on any Windows Server version.

To use that Microsoft image, put this in your `Dockerfile`:

```dockerfile
FROM mcr.microsoft.com/oss/kubernetes/windows-host-process-containers-base-image:v1.0.0
```

You can run HostProcess containers from within a
[HostProcess Pod](/docs/concepts/workloads/pods/#privileged-mode-for-containers).

To get started with running Windows containers,
see the general guidance for [deploying Windows nodes](/docs/setup/production-environment/windows/).
If you have a compatible node (for example: Windows as the operating system
with containerd v1.7 or later as the container runtime), you can deploy a Pod with one
or more HostProcess containers.
See the [Create a Windows HostProcess Pod - Prerequisites](/docs/tasks/configure-pod-container/create-hostprocess-pod/#before-you-begin)
for more information.

Please note that within a Pod, you can't mix HostProcess containers with normal Windows containers.

## How can I learn more?

- Work through [Create a Windows HostProcess Pod](/docs/tasks/configure-pod-container/create-hostprocess-pod/)

- Read about Kubernetes [Pod Security Standards](/docs/concepts/security/pod-security-standards/) and [Pod Security Admission](/docs/concepts/security/pod-security-admission/)

- Read the enhancement proposal [Windows Privileged Containers and Host Networking Mode](https://github.com/kubernetes/enhancements/tree/master/keps/sig-windows/1981-windows-privileged-container-support) (KEP-1981)

- Watch the [Windows HostProcess for Configuration and Beyond](https://www.youtube.com/watch?v=LcXT9pVkwvo) KubeCon NA 2022 talk

## How do I get involved?

Get involved with [SIG Windows](https://github.com/kubernetes/community/tree/master/sig-windows)
to contribute!
diff --git a/content/en/blog/_posts/2022-12-19-devicemanager-ga.md/deviceplugin-framework-overview.svg b/content/en/blog/_posts/2022-12-19-devicemanager-ga.md/deviceplugin-framework-overview.svg
new file mode 100644
index 0000000000..64d4288a20
--- /dev/null
+++ b/content/en/blog/_posts/2022-12-19-devicemanager-ga.md/deviceplugin-framework-overview.svg
@@ -0,0 +1,4 @@
+
+
+
+<svg>…</svg> (draw.io diagram; visible labels: "Kubelet", "Device Plugin", "Device Plugin gRPC server" with GetDevicePluginOptions, ListAndWatch, GetPreferredAllocation, Allocate, PreStartContainer; "Kubelet gRPC server" with Register; "Kubelet gRPC API implementation"; "Kubelet gRPC client call"; "Device Plugin gRPC API implementation"; "Device Plugin gRPC client call")
\ No newline at end of file
diff --git a/content/en/blog/_posts/2022-12-19-devicemanager-ga.md/index.md b/content/en/blog/_posts/2022-12-19-devicemanager-ga.md/index.md
new file mode 100644
index 0000000000..8b3893db2f
--- /dev/null
+++ b/content/en/blog/_posts/2022-12-19-devicemanager-ga.md/index.md
@@ -0,0 +1,93 @@
---
layout: blog
title: 'Kubernetes 1.26: Device Manager graduates to GA'
date: 2022-12-19
slug: devicemanager-ga
---

**Author:** Swati Sehgal (Red Hat)

The Device Plugin framework was introduced in the Kubernetes v1.8 release as a vendor-independent
framework to enable discovery, advertisement and allocation of external
devices without modifying core Kubernetes. The feature graduated to Beta in v1.10.
With the recent release of Kubernetes v1.26, Device Manager is now generally
available (GA).

Within the kubelet, the Device Manager facilitates communication with device plugins
using gRPC through Unix sockets. Device Manager and device plugins both act as gRPC
servers and clients by serving and connecting to the exposed gRPC services respectively.
Device plugins serve a gRPC service that the kubelet connects to for device discovery,
advertisement (as extended resources) and allocation. A device plugin connects to
the `Registration` gRPC service served by the kubelet to register itself with the kubelet.

Please refer to the documentation for an [example](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/#example-pod) of how a pod can request a device exposed to the cluster by a device plugin.

Here are some example implementations of device plugins:
- [AMD GPU device plugin](https://github.com/RadeonOpenCompute/k8s-device-plugin)
- [Collection of Intel device plugins for Kubernetes](https://github.com/intel/intel-device-plugins-for-kubernetes)
- [NVIDIA device plugin for Kubernetes](https://github.com/NVIDIA/k8s-device-plugin)
- [SRIOV network device plugin for Kubernetes](https://github.com/k8snetworkplumbingwg/sriov-network-device-plugin)

## Noteworthy developments since Device Plugin framework introduction

### Kubelet APIs moved to kubelet staging repo
External facing `deviceplugin` API packages moved from `k8s.io/kubernetes/pkg/kubelet/apis/`
to `k8s.io/kubelet/pkg/apis/` in v1.17. Refer to [Move external facing kubelet apis to staging](https://github.com/kubernetes/kubernetes/pull/83551) for more details on the rationale behind this change.

### Device Plugin API updates
Additional gRPC endpoints introduced:
 1. `GetDevicePluginOptions` is used by device plugins to communicate
    options to the `DeviceManager` in order to indicate if `PreStartContainer`,
    `GetPreferredAllocation` or other future optional calls are supported and
    can be called before making devices available to the container.
 1. `GetPreferredAllocation` allows a device plugin to forward allocation
    preferences to the `DeviceManager` so it can incorporate this information
    into its allocation decisions. The `DeviceManager` will call out to a
    plugin at pod admission time asking for a preferred device allocation
    of a given size from a list of available devices to make a more informed
    decision. For example, specifying inter-device constraints to indicate a preference
    for the best-connected set of devices when allocating devices to a container.
 1. `PreStartContainer` is called before each container start if indicated by
    device plugins during the registration phase. It allows device plugins to run
    device-specific operations on the requested devices. For example,
reconfiguring or reprogramming FPGAs before the container starts running.

Pull Requests that introduced these changes are here:
1. [Invoke preStart RPC call before container start, if desired by plugin](https://github.com/kubernetes/kubernetes/pull/58282)
1. [Add GetPreferredAllocation() call to the v1beta1 device plugin API](https://github.com/kubernetes/kubernetes/pull/92665)

With the introduction of the above endpoints, the interaction between the Device Manager in the
kubelet and the device plugins can be shown as below:

{{< figure src="deviceplugin-framework-overview.svg" alt="Representation of the Device Plugin framework showing the relationship between the kubelet and a device plugin" class="diagram-large" caption="Device Plugin framework Overview" >}}

### Change in semantics of device plugin registration process
Device plugin code was refactored into a separate 'plugin' package under the `devicemanager`
package to lay the groundwork for introducing a `v1beta2` device plugin API. This would
allow adding support in `devicemanager` to service multiple device plugin APIs at the
same time.

With this refactoring work, it is now mandatory for a device plugin to start serving its gRPC
service before registering itself with the kubelet. Previously, these two operations were asynchronous
and a device plugin could register itself before starting its gRPC server, which is no longer the
case. For more details, refer to [PR #109016](https://github.com/kubernetes/kubernetes/pull/109016) and [Issue #112395](https://github.com/kubernetes/kubernetes/issues/112395).

### Dynamic resource allocation
In Kubernetes 1.26, inspired by how [Persistent Volumes](/docs/concepts/storage/persistent-volumes)
are handled in Kubernetes, [Dynamic Resource Allocation](/docs/concepts/scheduling-eviction/dynamic-resource-allocation/)
has been introduced to cater to devices that have more sophisticated resource requirements, such as the need to:

1. Decouple device initialization and allocation from the pod lifecycle.
1. Facilitate dynamic sharing of devices between containers and pods.
1. Support custom resource-specific parameters.
1. Enable resource-specific setup and cleanup actions.
1. Enable support for network-attached resources, not just node-local resources.

## Is the Device Plugin API stable now?
No, the Device Plugin API is still not stable; the latest Device Plugin API version
available is `v1beta1`. There are plans in the community to introduce a `v1beta2` API
to service multiple plugin APIs at once. A per-API call with request/response types
would allow adding support for newer API versions without explicitly bumping the API.

In addition to that, there are existing proposals in the community to introduce additional
endpoints; see [KEP-3162: Add Deallocate and PostStopContainer to Device Manager API](https://github.com/kubernetes/kubernetes/pull/109016).
diff --git a/content/en/blog/_posts/2022-12-20-validating-admission-policies-alpha/index.md b/content/en/blog/_posts/2022-12-20-validating-admission-policies-alpha/index.md
new file mode 100644
index 0000000000..3597ca03fb
--- /dev/null
+++ b/content/en/blog/_posts/2022-12-20-validating-admission-policies-alpha/index.md
@@ -0,0 +1,156 @@
---
layout: blog
title: "Kubernetes 1.26: Introducing Validating Admission Policies"
date: 2022-12-20
slug: validating-admission-policies-alpha
---

**Authors:** Joe Betz (Google), Cici Huang (Google)

In Kubernetes 1.26, the first alpha release of validating admission policies is
available!
Validating admission policies use the [Common Expression
Language](https://github.com/google/cel-spec) (CEL) to offer a declarative,
in-process alternative to [validating admission
webhooks](/docs/reference/access-authn-authz/extensible-admission-controllers/#what-are-admission-webhooks).

CEL was first introduced to Kubernetes for the [Validation rules for
CustomResourceDefinitions](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#validation-rules).
This enhancement expands the use of CEL in Kubernetes to support a far wider
range of admission use cases.

Admission webhooks can be burdensome to develop and operate. Webhook developers
must implement and maintain a webhook binary to handle admission requests. Also,
admission webhooks are complex to operate. Each webhook must be deployed,
monitored and have a well defined upgrade and rollback plan. To make matters
worse, if a webhook times out or becomes unavailable, the Kubernetes control
plane can become unavailable. This enhancement avoids much of this complexity of
admission webhooks by embedding CEL expressions into Kubernetes resources
instead of calling out to a remote webhook binary.

For example, to set a limit on how many replicas a Deployment can have,
start by defining a validation policy:

```yaml
apiVersion: admissionregistration.k8s.io/v1alpha1
kind: ValidatingAdmissionPolicy
metadata:
  name: "demo-policy.example.com"
spec:
  matchConstraints:
    resourceRules:
    - apiGroups: ["apps"]
      apiVersions: ["v1"]
      operations: ["CREATE", "UPDATE"]
      resources: ["deployments"]
  validations:
  - expression: "object.spec.replicas <= 5"
```

The `expression` field contains the CEL expression that is used to validate
admission requests. `matchConstraints` declares what types of requests this
`ValidatingAdmissionPolicy` may validate.

Next, bind the policy to the appropriate resources:

```yaml
apiVersion: admissionregistration.k8s.io/v1alpha1
kind: ValidatingAdmissionPolicyBinding
metadata:
  name: "demo-binding-test.example.com"
spec:
  policyName: "demo-policy.example.com"
  matchResources:
    namespaceSelector:
      matchExpressions:
      - key: environment
        operator: In
        values: ["test"]
```

This `ValidatingAdmissionPolicyBinding` resource binds the above policy only to
namespaces where the `environment` label is set to `test`. Once this binding
is created, the kube-apiserver will begin enforcing this admission policy.

To emphasize how much simpler this approach is than admission webhooks: if this example
were instead implemented with a webhook, an entire binary would need to be
developed and maintained just to perform a `<=` check. In our review of a wide
range of admission webhooks used in production, the vast majority performed
relatively simple checks, all of which can easily be expressed using CEL.

Validating admission policies are highly configurable, enabling policy authors
to define policies that can be parameterized and scoped to resources as needed
by cluster administrators.
For example, the above admission policy can be modified to make it configurable:

```yaml
apiVersion: admissionregistration.k8s.io/v1alpha1
kind: ValidatingAdmissionPolicy
metadata:
  name: "demo-policy.example.com"
spec:
  paramKind:
    apiVersion: rules.example.com/v1 # You also need a CustomResourceDefinition for this API
    kind: ReplicaLimit
  matchConstraints:
    resourceRules:
    - apiGroups: ["apps"]
      apiVersions: ["v1"]
      operations: ["CREATE", "UPDATE"]
      resources: ["deployments"]
  validations:
  - expression: "object.spec.replicas <= params.maxReplicas"
```

Here, `paramKind` defines the resources used to configure the policy and the
`expression` uses the `params` variable to access the parameter resource.

This allows multiple bindings to be defined, each configured differently. For
example:

```yaml
apiVersion: admissionregistration.k8s.io/v1alpha1
kind: ValidatingAdmissionPolicyBinding
metadata:
  name: "demo-binding-production.example.com"
spec:
  policyName: "demo-policy.example.com"
  paramRef:
    name: "demo-params-production.example.com"
  matchResources:
    namespaceSelector:
      matchExpressions:
      - key: environment
        operator: In
        values: ["production"]
```

```yaml
apiVersion: rules.example.com/v1 # defined via a CustomResourceDefinition
kind: ReplicaLimit
metadata:
  name: "demo-params-production.example.com"
maxReplicas: 1000
```

This binding and parameter resource pair limits deployments in namespaces with the
`environment` label set to `production` to a max of 1000 replicas.

You can then use a separate binding and parameter pair to set a different limit
for namespaces in the `test` environment.

I hope this has given you a glimpse of what is possible with validating
admission policies! There are many features that we have not yet touched on.

To learn more, read
[Validating Admission Policy](/docs/reference/access-authn-authz/validating-admission-policy/).

We are working hard to add more features to admission policies and make the
enhancement easier to use. Try it out, send us your feedback and help us build
a simpler alternative to admission webhooks!

## How do I get involved?

If you want to get involved in development of admission policies, discuss enhancement
roadmaps, or report a bug, you can get in touch with developers at
[SIG API Machinery](https://github.com/kubernetes/community/tree/master/sig-api-machinery).
diff --git a/content/en/blog/_posts/2022-12-22-kubelet-credential-providers/index.md b/content/en/blog/_posts/2022-12-22-kubelet-credential-providers/index.md
new file mode 100644
index 0000000000..58bb57366b
--- /dev/null
+++ b/content/en/blog/_posts/2022-12-22-kubelet-credential-providers/index.md
@@ -0,0 +1,129 @@
---
layout: blog
title: 'Kubernetes v1.26: GA Support for Kubelet Credential Providers'
date: 2022-12-22
slug: kubelet-credential-providers
---

**Authors:** Andrew Sy Kim (Google), Dixita Narang (Google)

Kubernetes v1.26 introduced generally available (GA) support for [_kubelet credential
provider plugins_](/docs/tasks/kubelet-credential-provider/kubelet-credential-provider/),
offering an extensible plugin framework to dynamically fetch credentials
for any container image registry.

## Background

Kubernetes supports the ability to dynamically fetch credentials for a container registry service.
Prior to Kubernetes v1.20, this capability was compiled into the kubelet and only available for
Amazon Elastic Container Registry, Azure Container Registry, and Google Cloud Container Registry.

{{< figure src="kubelet-credential-providers-in-tree.png" caption="Figure 1: Kubelet built-in credential provider support for Amazon Elastic Container Registry, Azure Container Registry, and Google Cloud Container Registry." >}}

Kubernetes v1.20 introduced alpha support for kubelet credential provider plugins,
which provides a mechanism for the kubelet to dynamically authenticate and pull images
for arbitrary container registries - whether these are public registries, managed services,
or even a self-hosted registry.
In Kubernetes v1.26, this feature is now GA.

{{< figure src="kubelet-credential-providers-plugin.png" caption="Figure 2: Kubelet credential provider overview" >}}

## Why is it important?

Prior to Kubernetes v1.20, if you wanted to dynamically fetch credentials for image registries
other than ACR (Azure Container Registry), ECR (Elastic Container Registry), or GCR
(Google Container Registry), you needed to modify the kubelet code.
The new plugin mechanism can be used in any cluster, and lets you authenticate to new registries without
any changes to Kubernetes itself. Any cloud provider or vendor can publish a plugin that lets you authenticate with their image registry.

## How it works

The kubelet and the exec plugin binary communicate through stdio (stdin, stdout, and stderr) by sending and receiving
JSON-serialized, API-versioned types. If the exec plugin is enabled and the kubelet requires authentication information for an image
that matches against a plugin, the kubelet will execute the plugin binary, passing the `CredentialProviderRequest` API via stdin. Then
the exec plugin communicates with the container registry to dynamically fetch the credentials and returns the credentials in an
encoded response of the `CredentialProviderResponse` API to the kubelet via stdout.

{{< figure src="kubelet-credential-providers-how-it-works.png" caption="Figure 3: Kubelet credential provider plugin flow" >}}

When returning credentials to the kubelet, the plugin can also indicate how long the credentials can be cached for, to prevent unnecessary
execution of the plugin by the kubelet for subsequent image pull requests to the same registry. In cases where the cache duration
is not specified by the plugin, a default cache duration can be specified by the kubelet (more details below).

```json
{
  "apiVersion": "kubelet.k8s.io/v1",
  "kind": "CredentialProviderResponse",
  "cacheDuration": "6h",
  "auth": {
    "private-registry.io/my-app": {
      "username": "exampleuser",
      "password": "token12345"
    }
  }
}
```

In addition, the plugin can specify the scope in which cached credentials are valid. This is specified through the `cacheKeyType` field
in `CredentialProviderResponse`. When the value is `Image`, the kubelet will only use cached credentials for future image pulls that exactly
match the image of the first request. When the value is `Registry`, the kubelet will use cached credentials for any subsequent image pulls
destined for the same registry host but using different paths (for example, `gcr.io/foo/bar` and `gcr.io/bar/foo` refer to different images
from the same registry).
Lastly, when the value is `Global`, the kubelet will use returned credentials for all images that match against
the plugin, including images that can map to different registry hosts (for example, gcr.io vs k8s.gcr.io). The `cacheKeyType` field is required by plugin
implementations.

```json
{
  "apiVersion": "kubelet.k8s.io/v1",
  "kind": "CredentialProviderResponse",
  "cacheKeyType": "Registry",
  "auth": {
    "private-registry.io/my-app": {
      "username": "exampleuser",
      "password": "token12345"
    }
  }
}
```

## Using kubelet credential providers

You can configure credential providers by installing the exec plugin(s) into
a local directory accessible by the kubelet on every node. Then you set two command line arguments for the kubelet:
* `--image-credential-provider-config`: the path to the credential provider plugin config file.
* `--image-credential-provider-bin-dir`: the path to the directory where credential provider plugin binaries are located.

The configuration file passed into `--image-credential-provider-config` is read by the kubelet to determine which exec plugins should be invoked for a container image used by a Pod.
Note that the name of each _provider_ must match the name of the binary located in the local directory specified in `--image-credential-provider-bin-dir`, otherwise the kubelet
cannot locate the path of the plugin to invoke.

```yaml
kind: CredentialProviderConfig
apiVersion: kubelet.config.k8s.io/v1
providers:
- name: auth-provider-gcp
  apiVersion: credentialprovider.kubelet.k8s.io/v1
  matchImages:
  - "container.cloud.google.com"
  - "gcr.io"
  - "*.gcr.io"
  - "*.pkg.dev"
  args:
  - get-credentials
  - --v=3
  defaultCacheDuration: 1m
```

Below is an overview of how the Kubernetes project is using kubelet credential providers for end-to-end testing.

{{< figure src="kubelet-credential-providers-enabling.png" caption="Figure 4: Kubelet credential provider configuration used for Kubernetes e2e testing" >}}

For more configuration details, see [Kubelet Credential Providers](https://kubernetes.io/docs/tasks/kubelet-credential-provider/kubelet-credential-provider/).

## Getting Involved

Come join SIG Node if you want to report bugs or have feature requests for the Kubelet Credential Provider.
You can reach us through the following ways:
* Slack: [#sig-node](https://kubernetes.slack.com/messages/sig-node)
* [Mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-node)
* [Open Community Issues/PRs](https://github.com/kubernetes/community/labels/sig%2Fnode)
* [Biweekly meetings](https://github.com/kubernetes/community/tree/master/sig-node#meetings)
diff --git a/content/en/blog/_posts/2022-12-22-kubelet-credential-providers/kubelet-credential-providers-enabling.png b/content/en/blog/_posts/2022-12-22-kubelet-credential-providers/kubelet-credential-providers-enabling.png
new file mode 100644
index 0000000000..5aa0886e90
Binary files /dev/null and b/content/en/blog/_posts/2022-12-22-kubelet-credential-providers/kubelet-credential-providers-enabling.png differ
diff --git a/content/en/blog/_posts/2022-12-22-kubelet-credential-providers/kubelet-credential-providers-how-it-works.png b/content/en/blog/_posts/2022-12-22-kubelet-credential-providers/kubelet-credential-providers-how-it-works.png
new file mode 100644
index 0000000000..11054229f8
Binary files /dev/null and b/content/en/blog/_posts/2022-12-22-kubelet-credential-providers/kubelet-credential-providers-how-it-works.png differ
diff --git a/content/en/blog/_posts/2022-12-22-kubelet-credential-providers/kubelet-credential-providers-in-tree.png b/content/en/blog/_posts/2022-12-22-kubelet-credential-providers/kubelet-credential-providers-in-tree.png
new file mode 100644
index 0000000000..f26b42d45e
Binary files /dev/null and b/content/en/blog/_posts/2022-12-22-kubelet-credential-providers/kubelet-credential-providers-in-tree.png differ
diff --git a/content/en/blog/_posts/2022-12-22-kubelet-credential-providers/kubelet-credential-providers-plugin.png b/content/en/blog/_posts/2022-12-22-kubelet-credential-providers/kubelet-credential-providers-plugin.png
new file mode 100644
index 0000000000..2aeedb738f
Binary files /dev/null and b/content/en/blog/_posts/2022-12-22-kubelet-credential-providers/kubelet-credential-providers-plugin.png differ
diff --git a/content/en/blog/_posts/2022-12-23-fsgroup-on-mount.md b/content/en/blog/_posts/2022-12-23-fsgroup-on-mount.md
new file mode 100644
index 0000000000..671334c489
--- /dev/null
+++ b/content/en/blog/_posts/2022-12-23-fsgroup-on-mount.md
@@ -0,0 +1,72 @@
---
layout: blog
title: "Kubernetes 1.26: Support for Passing Pod fsGroup to CSI Drivers At Mount Time"
date: 2022-12-23
slug: kubernetes-12-06-fsgroup-on-mount
---

**Authors:** Fabio Bertinatto (Red Hat), Hemant Kumar (Red Hat)

Delegation of `fsGroup` to CSI drivers was first introduced as alpha in Kubernetes 1.22,
and graduated to beta in Kubernetes 1.25.
For Kubernetes 1.26, we are happy to announce that this feature has graduated to
General Availability (GA).

In this release, if you specify an `fsGroup` in the
[security context](/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod)
of a (Linux) Pod, all processes in the pod's containers are part of the additional group
that you specified.

In previous Kubernetes releases, the kubelet would *always* apply the
`fsGroup` ownership and permission changes to files in the volume according to the policy
you specified in the Pod's `.spec.securityContext.fsGroupChangePolicy` field.

Starting with Kubernetes 1.26, CSI drivers have the option to apply the `fsGroup` settings during
volume mount time, which frees the kubelet from changing the permissions of files and directories
in those volumes.

## How does it work?
CSI drivers that support this feature should advertise the
[`VOLUME_MOUNT_GROUP`](https://github.com/container-storage-interface/spec/blob/master/spec.md#nodegetcapabilities) node capability.

After recognizing this capability, the kubelet passes the `fsGroup` information to
the CSI driver during pod startup. This is done through the
[`NodeStageVolumeRequest`](https://github.com/container-storage-interface/spec/blob/v1.7.0/spec.md#nodestagevolume) and
[`NodePublishVolumeRequest`](https://github.com/container-storage-interface/spec/blob/v1.7.0/spec.md#nodepublishvolume)
CSI calls.

Consequently, the CSI driver is expected to apply the `fsGroup` to the files in the volume using a
_mount option_. As an example, the [Azure File CSI driver](https://github.com/kubernetes-sigs/azurefile-csi-driver) utilizes the `gid` mount option to map
the `fsGroup` information to all the files in the volume.

It should be noted that in the example above the kubelet refrains from directly
applying the permission changes to the files and directories in that volume.
Additionally, two policy definitions no longer have an effect: neither
`.spec.fsGroupPolicy` for the CSIDriver object, nor
`.spec.securityContext.fsGroupChangePolicy` for the Pod.

For more details about the inner workings of this feature, check out the
[enhancement proposal](https://github.com/kubernetes/enhancements/blob/master/keps/sig-storage/2317-fsgroup-on-mount/)
and the [CSI Driver `fsGroup` Support](https://kubernetes-csi.github.io/docs/support-fsgroup.html)
in the CSI developer documentation.

## Why is it important?

Without this feature, applying the `fsGroup` information to files is not possible in certain storage environments.

For instance, Azure File does not support a concept of POSIX-style ownership and permissions
of files. The CSI driver is only able to set the file permissions at the volume level.

## How do I use it?

This feature should be mostly transparent to users. If you maintain a CSI driver that should
support this feature, read
[CSI Driver `fsGroup` Support](https://kubernetes-csi.github.io/docs/support-fsgroup.html)
for more information on how to support this feature in your CSI driver.

Existing CSI drivers that do not support this feature will continue to work as usual:
they will not receive any `fsGroup` information from the kubelet. In addition to that,
the kubelet will continue to perform the ownership and permissions changes to files
for those volumes, according to the policies specified in `.spec.fsGroupPolicy` for the
CSIDriver and `.spec.securityContext.fsGroupChangePolicy` for the relevant Pod.
diff --git a/content/en/blog/_posts/2022-12-27-cpumanager-goes-GA.md b/content/en/blog/_posts/2022-12-27-cpumanager-goes-GA.md
new file mode 100644
index 0000000000..d1edc4575b
--- /dev/null
+++ b/content/en/blog/_posts/2022-12-27-cpumanager-goes-GA.md
@@ -0,0 +1,71 @@
---
layout: blog
title: 'Kubernetes v1.26: CPUManager goes GA'
date: 2022-12-27
slug: cpumanager-ga
---

**Author:** Francesco Romani (Red Hat)

The CPU Manager is a part of the kubelet, the Kubernetes node agent, which enables the user to allocate exclusive CPUs to containers.
Since Kubernetes v1.10, when it [graduated to Beta](/blog/2018/07/24/feature-highlight-cpu-manager/), the CPU Manager proved itself reliable and
fulfilled its role of allocating exclusive CPUs to containers, so adoption has steadily grown, making it a staple component of performance-critical
and low-latency setups.
Over time, most changes were about bugfixes or internal refactoring, with the following noteworthy user-visible changes:

- [support explicit reservation of CPUs](https://github.com/kubernetes/kubernetes/pull/83592): it was already possible to request that a given
  number of CPUs be reserved for system resources, including the kubelet itself; these reserved CPUs will not be used for exclusive CPU allocation. Now it is possible to also
  explicitly select which CPUs to reserve instead of letting the kubelet pick them up automatically.
- [report the exclusively allocated CPUs](https://github.com/kubernetes/kubernetes/pull/97415) to containers, much like what is already done for devices,
  using the kubelet-local [PodResources API](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/#monitoring-device-plugin-resources).
- [optimize the usage of system resources](https://github.com/kubernetes/kubernetes/pull/101771), eliminating unnecessary sysfs changes.

The CPU Manager reached the point where it "just works", so in Kubernetes v1.26 it has graduated to generally available (GA).

## Customization options for CPU Manager {#cpu-managed-customization}

The CPU Manager supports two operation modes, configured using its _policies_. With the `none` policy, the CPU Manager allocates CPUs to containers
without any specific constraint except the (optional) quota set in the Pod spec.
With the `static` policy, provided that the pod is in the Guaranteed QoS class and every container in that Pod requests an integer amount of vCPU cores,
the CPU Manager allocates CPUs exclusively. Exclusive assignment means that other containers (whether from the same Pod, or from a different Pod) do not
get scheduled onto that CPU.

This simple operational model served the user base pretty well, but as the CPU Manager matured more and more, users started to look at more elaborate use
cases and how to better support them.

Rather than add more policies, the community realized that pretty much all the novel use cases are some variation of the behavior enabled by the `static`
CPU Manager policy. Hence, it was decided to add [options to tune the behavior of the static policy](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/2625-cpumanager-policies-thread-placement#proposed-change).
The options have a varying degree of maturity, like any other Kubernetes feature, and in order to be accepted, each new option must provide
backward-compatible behavior when disabled, and must document how it interacts with the other options, should they interact at all.

This enabled the Kubernetes project to graduate the CPU Manager core component and core CPU allocation algorithms to GA,
while also enabling a new age of experimentation in this area.
In Kubernetes v1.26, the CPU Manager supports [three different policy options](/docs/tasks/administer-cluster/cpu-management-policies/#static-policy-options):

`full-pcpus-only`
: restrict the CPU Manager core allocation algorithm to full physical cores only, reducing noisy neighbor issues from hardware technologies that allow sharing cores.

`distribute-cpus-across-numa`
: drive the CPU Manager to evenly distribute CPUs across NUMA nodes, for cases where more than one NUMA node is required to satisfy the allocation.

`align-by-socket`
: change how the CPU Manager allocates CPUs to a container: consider CPUs to be aligned at the socket boundary, instead of the NUMA node boundary.
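As a rough sketch of how these pieces fit together, here is a hedged example of a kubelet configuration that enables the `static` policy with one of the options above; the reserved CPU list is a placeholder, and some policy options may additionally require their corresponding feature gates, depending on their maturity:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cpuManagerPolicy: static          # enable exclusive CPU allocation
cpuManagerPolicyOptions:
  full-pcpus-only: "true"         # one of the options described above
reservedSystemCPUs: "0,1"         # placeholder: CPUs explicitly reserved for system daemons and the kubelet
```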
## Further development

After graduating the main CPU Manager feature, each existing policy option will follow its own graduation process, independent from the CPU Manager and from each other option.
There is room for new options to be added, but there's also a growing demand for even more flexibility than what the CPU Manager, and its policy options, currently grant.

Conversations are in progress in the community about splitting the CPU Manager and the other resource managers currently part of the kubelet executable
into pluggable, independent kubelet plugins. If you are interested in this effort, please join the conversation on SIG Node communication channels (Slack, mailing list, weekly meeting).

## Further reading

Please check out the [Control CPU Management Policies on the Node](/docs/tasks/administer-cluster/cpu-management-policies/)
task page to learn more about the CPU Manager, and how it fits in relation to the other node-level resource managers.

## Getting involved

This feature is driven by the [SIG Node](https://github.com/kubernetes/community/blob/master/sig-node/README.md) community.
Please join us to connect with the community and share your ideas and feedback around the above feature and
beyond. We look forward to hearing from you!
diff --git a/content/en/blog/_posts/2022-12-29-scalable-job-tracking-ga/index.md b/content/en/blog/_posts/2022-12-29-scalable-job-tracking-ga/index.md
new file mode 100644
index 0000000000..42f4b25b40
--- /dev/null
+++ b/content/en/blog/_posts/2022-12-29-scalable-job-tracking-ga/index.md
@@ -0,0 +1,155 @@
---
layout: blog
title: "Kubernetes 1.26: Job Tracking, to Support Massively Parallel Batch Workloads, Is Generally Available"
date: 2022-12-29
slug: "scalable-job-tracking-ga"
---

**Author:** Aldo Culquicondor (Google)

The Kubernetes 1.26 release includes a stable implementation of the [Job](/docs/concepts/workloads/controllers/job/)
controller that can reliably track a large number of Jobs with high levels of
parallelism. [SIG Apps](https://github.com/kubernetes/community/tree/master/sig-apps)
and [WG Batch](https://github.com/kubernetes/community/tree/master/wg-batch)
have worked on this foundational improvement since Kubernetes 1.22. After
multiple iterations and scale verifications, this is now the default
implementation of the Job controller.

Paired with the Indexed [completion mode](/docs/concepts/workloads/controllers/job/#completion-mode),
the Job controller can handle massively parallel batch Jobs, supporting up to
100k concurrent Pods.

The new implementation also made possible the development of [Pod failure policy](/docs/concepts/workloads/controllers/job/#pod-failure-policy),
which is in beta in the 1.26 release.

## How do I use this feature?

To use Job tracking with finalizers, upgrade to Kubernetes 1.25 or newer and
create new Jobs. You can also use this feature in v1.23 and v1.24, if you have the
ability to enable the `JobTrackingWithFinalizers` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/).

If your cluster runs Kubernetes 1.26, Job tracking with finalizers is a stable
feature. For v1.25, it's behind that feature gate, and your cluster administrators may have
explicitly disabled it - for example, if you have a policy of not using
beta features.

Jobs created before the upgrade will still be tracked using the legacy behavior.
This is to avoid retroactively adding finalizers to running Pods, which might
introduce race conditions.
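For orientation, here is a hedged sketch of the kind of massively parallel Job this feature is designed to track; the name, image, and sizes are illustrative placeholders, and it uses the Indexed completion mode recommended below:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: parallel-work        # hypothetical name
spec:
  completions: 10000         # placeholder scale
  parallelism: 500           # placeholder scale
  completionMode: Indexed    # recommended for large Jobs, see below
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: registry.example.com/worker:v1   # placeholder image
        command: ["sh", "-c", "echo processing item $JOB_COMPLETION_INDEX"]
```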
For maximum performance on large Jobs, the Kubernetes project recommends
using the [Indexed completion mode](/docs/concepts/workloads/controllers/job/#completion-mode).
In this mode, the control plane is able to track Job progress with fewer API
calls.

If you are a developer of operator(s) for batch, [HPC](https://en.wikipedia.org/wiki/High-performance_computing),
[AI](https://en.wikipedia.org/wiki/Artificial_intelligence), [ML](https://en.wikipedia.org/wiki/Machine_learning)
or related workloads, we encourage you to use the Job API to delegate accurate
progress tracking to Kubernetes. If there is something missing in the Job API
that forces you to manage plain Pods, the [Working Group Batch](https://github.com/kubernetes/community/tree/master/wg-batch)
welcomes your feedback and contributions.

### Deprecation notices

During the development of the feature, the control plane added the annotation
[`batch.kubernetes.io/job-tracking`](/docs/reference/labels-annotations-taints/#batch-kubernetes-io-job-tracking)
to the Jobs that were created when the feature was enabled.
This allowed a safe transition for older Jobs, but it was never meant to stay.

In the 1.26 release, we deprecated the annotation `batch.kubernetes.io/job-tracking`
and the control plane will stop adding it in Kubernetes 1.27.
Along with that change, we will remove the legacy Job tracking implementation.
As a result, the Job controller will track all Jobs using finalizers and it will
ignore Pods that don't have the aforementioned finalizer.

Before you upgrade your cluster to 1.27, we recommend that you verify that there
are no running Jobs that don't have the annotation, or you wait for those jobs
to complete.
Otherwise, you might observe the control plane recreating some Pods.
We expect that this shouldn't affect any users, as the feature is enabled by
default since Kubernetes 1.25, giving enough buffer for old jobs to complete.

## What problem does the new implementation solve?

Generally, Kubernetes workload controllers, such as ReplicaSet or StatefulSet,
rely on the existence of Pods or other objects in the API to determine the
status of the workload and whether replacements are needed.
For example, if a Pod that belonged to a ReplicaSet terminates or ceases to
exist, the ReplicaSet controller needs to create a replacement Pod to satisfy
the desired number of replicas (`.spec.replicas`).

Since its inception, the Job controller also relied on the existence of Pods in
the API to track Job status. A Job has [completion](/docs/concepts/workloads/controllers/job/#completion-mode)
and [failure handling](/docs/concepts/workloads/controllers/job/#handling-pod-and-container-failures)
policies, requiring the end state of a finished Pod to determine whether to
create a replacement Pod or mark the Job as completed or failed. As a result,
the Job controller depended on Pods, even terminated ones, to remain in the API
in order to keep track of the status.

This dependency made the tracking of Job status unreliable, because Pods can be
deleted from the API for a number of reasons, including:
- The garbage collector removing orphan Pods when a Node goes down.
- The garbage collector removing terminated Pods when they reach a threshold.
- The Kubernetes scheduler preempting a Pod to accommodate higher priority Pods.
- The taint manager evicting a Pod that doesn't tolerate a `NoExecute` taint.
- External controllers, not included as part of Kubernetes, or humans deleting
  Pods.
### The new implementation

When a controller needs to take an action on objects before they are removed, it
should add a [finalizer](/docs/concepts/overview/working-with-objects/finalizers/)
to the objects that it manages.
A finalizer prevents the objects from being deleted from the API until the
finalizers are removed. Once the controller is done with the cleanup and
accounting for the deleted object, it can remove the finalizer from the object and the
control plane removes the object from the API.

This is what the new Job controller is doing: adding a finalizer during Pod
creation, and removing the finalizer after the Pod has terminated and has been
accounted for in the Job status. However, it wasn't that simple.

The main challenge is that there are at least two objects involved: the Pod
and the Job. While the finalizer lives in the Pod object, the accounting lives
in the Job object. There is no mechanism to atomically remove the finalizer in
the Pod and update the counters in the Job status. Additionally, there could be
more than one terminated Pod at a given time.

To solve this problem, we implemented a three-stage approach, each stage translating
to an API call.
1. For each terminated Pod, add the unique ID (UID) of the Pod into short-lived
   lists stored in the `.status` of the owning Job
   ([.status.uncountedTerminatedPods](/docs/reference/kubernetes-api/workload-resources/job-v1/#JobStatus)).
2. Remove the finalizer from the Pod(s).
3. Atomically do the following operations:
   - remove UIDs from the short-lived lists
   - increment the overall `succeeded` and `failed` counters in the `status` of
     the Job.

Additional complications come from the fact that the Job controller might
receive the results of the API changes in steps 1 and 2 out of order. We solved
this by adding an in-memory cache for removed finalizers.

Still, we faced some issues during the beta stage, leaving some pods stuck
with finalizers in some conditions ([#108645](https://github.com/kubernetes/kubernetes/issues/108645),
[#109485](https://github.com/kubernetes/kubernetes/issues/109485), and
[#111646](https://github.com/kubernetes/kubernetes/pull/111646)). As a result,
we decided to switch that feature gate to be disabled by default for the 1.23
and 1.24 releases.

Once resolved, we re-enabled the feature for the 1.25 release. Since then, we
have received reports from our users running tens of thousands of Pods at a
time in their clusters through the Job API. Seeing this success, we decided to
graduate the feature to stable in 1.26, as part of our long term commitment to
make the Job API the best way to run large batch Jobs in a Kubernetes cluster.

To learn more about the feature, you can read the [KEP](https://github.com/kubernetes/enhancements/tree/master/keps/sig-apps/2307-job-tracking-without-lingering-pods).

## Acknowledgments

As with any Kubernetes feature, multiple people contributed to getting this
done, from testing and filing bugs to reviewing code.

On behalf of SIG Apps, I would like to especially thank Jordan Liggitt (Google)
for helping me debug and brainstorm solutions for more than one race condition,
and Maciej Szulik (Red Hat) for his conscientious reviews.
diff --git a/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/endpointslice-overview.png b/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/endpointslice-overview.png
new file mode 100644
index 0000000000..c6cdbef25f
Binary files /dev/null and b/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/endpointslice-overview.png differ
diff --git a/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/endpointslice-with-terminating-pod.png b/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/endpointslice-with-terminating-pod.png
new file mode 100644
index 0000000000..b5a516a01d
Binary files /dev/null and b/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/endpointslice-with-terminating-pod.png differ
diff --git a/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/index.md b/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/index.md
new file mode 100644
index 0000000000..91ecd167cc
--- /dev/null
+++ b/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/index.md
@@ -0,0 +1,117 @@
---
layout: blog
title: "Kubernetes v1.26: Advancements in Kubernetes Traffic Engineering"
date: 2022-12-30
slug: advancements-in-kubernetes-traffic-engineering
---

**Author:** Andrew Sy Kim (Google)

Kubernetes v1.26 includes significant advancements in network traffic engineering with the graduation of
two features (Service internal traffic policy support, and EndpointSlice terminating conditions) to GA,
and a third feature (Proxy terminating endpoints) to beta. The combination of these enhancements aims
to address shortcomings in traffic engineering that people face today, and unlock new capabilities for the future.

## Traffic Loss from Load Balancers During Rolling Updates

Prior to Kubernetes v1.26, clusters could experience [loss of traffic](https://github.com/kubernetes/kubernetes/issues/85643)
from Service load balancers during rolling updates when setting the `externalTrafficPolicy` field to `Local`.
There are a lot of moving parts at play here, so a quick overview of how Kubernetes manages load balancers might help!

In Kubernetes, you can create a Service with `type: LoadBalancer` to expose an application externally with a load balancer.
The load balancer implementation varies between clusters and platforms, but the Service provides a generic abstraction
representing the load balancer that is consistent across all Kubernetes installations.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app.kubernetes.io/name: my-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
  type: LoadBalancer
```

Under the hood, Kubernetes allocates a NodePort for the Service, which is then used by kube-proxy to provide a
network data path from the NodePort to the Pod. A controller will then add all available Nodes in the cluster
to the load balancer’s backend pool, using the designated NodePort for the Service as the backend target port.

{{< figure src="traffic-engineering-service-load-balancer.png" caption="Figure 1: Overview of Service load balancers" >}}

Oftentimes it is beneficial to set `externalTrafficPolicy: Local` for Services, to avoid extra hops between
Nodes that are not running healthy Pods backing that Service.
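For reference, this is a hedged sketch of the same Service with the local external traffic policy applied:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app.kubernetes.io/name: my-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
  type: LoadBalancer
  externalTrafficPolicy: Local   # only route external traffic to Nodes with local endpoints
```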
When using `externalTrafficPolicy: Local`,
an additional NodePort is allocated for health checking purposes, such that Nodes that do not contain healthy
Pods are excluded from the backend pool for a load balancer.

{{< figure src="traffic-engineering-lb-healthy.png" caption="Figure 2: Load balancer traffic to a healthy Node, when externalTrafficPolicy is Local" >}}

One such scenario where traffic can be lost is when a Node loses all Pods for a Service,
but the external load balancer has not probed the health check NodePort yet. The likelihood of this situation
is largely dependent on the health checking interval configured on the load balancer. The larger the interval,
the more likely this will happen, since the load balancer will continue to send traffic to a node
even after kube-proxy has removed forwarding rules for that Service. This also occurs when Pods start terminating
during rolling updates. Since Kubernetes does not consider terminating Pods as “Ready”, traffic can be lost
when there are only terminating Pods on any given Node during a rolling update.

{{< figure src="traffic-engineering-lb-without-proxy-terminating-endpoints.png" caption="Figure 3: Load balancer traffic to terminating endpoints, when externalTrafficPolicy is Local" >}}

Starting in Kubernetes v1.26, kube-proxy enables the `ProxyTerminatingEndpoints` feature by default, which
adds automatic failover and routing to terminating endpoints in scenarios where the traffic would otherwise
be dropped. More specifically, when there is a rolling update and a Node only contains terminating Pods,
kube-proxy will route traffic to the terminating Pods based on their readiness. In addition, kube-proxy will
actively fail the health check NodePort if there are only terminating Pods available. By doing so,
kube-proxy alerts the external load balancer that new connections should not be sent to that Node but will
gracefully handle requests for existing connections.

{{< figure src="traffic-engineering-lb-with-proxy-terminating-endpoints.png" caption="Figure 4: Load Balancer traffic to terminating endpoints with ProxyTerminatingEndpoints enabled, when externalTrafficPolicy is Local" >}}

### EndpointSlice Conditions

In order to support this new capability in kube-proxy, the EndpointSlice API introduced new conditions for endpoints:
`serving` and `terminating`.

{{< figure src="endpointslice-overview.png" caption="Figure 5: Overview of EndpointSlice conditions" >}}

The `serving` condition is semantically identical to `ready`, except that it can be `true` or `false`
while a Pod is terminating, unlike `ready` which will always be `false` for terminating Pods for compatibility reasons.
The `terminating` condition is true for Pods undergoing termination (non-empty deletionTimestamp), and false otherwise.

The addition of these two conditions enables consumers of this API to understand Pod states that were previously not possible.
For example, we can now track "ready" and "not ready" Pods that are also terminating.

{{< figure src="endpointslice-with-terminating-pod.png" caption="Figure 6: EndpointSlice conditions with a terminating Pod" >}}

Consumers of the EndpointSlice API, such as kube-proxy and ingress controllers, can now use these conditions to coordinate connection draining
events, by continuing to forward traffic for existing connections but rerouting new connections to other non-terminating endpoints.
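To make the conditions concrete, here is a hedged sketch of an EndpointSlice entry for a terminating Pod that is still passing its readiness probe; the name and address are placeholders:

```yaml
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: my-service-abc12   # hypothetical generated name
  labels:
    kubernetes.io/service-name: my-service
addressType: IPv4
ports:
- name: http
  protocol: TCP
  port: 9376
endpoints:
- addresses: ["10.1.2.3"]   # placeholder Pod IP
  conditions:
    ready: false        # always false once the Pod is terminating
    serving: true       # the Pod still passes its readiness probe
    terminating: true   # the Pod has a non-empty deletionTimestamp
```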
+
+## Optimizing Internal Node-Local Traffic
+
+Similar to how Services can set `externalTrafficPolicy: Local` to avoid extra hops for externally sourced traffic, Kubernetes
+now supports `internalTrafficPolicy: Local`, to enable the same optimization for traffic originating within the cluster, specifically
+for traffic using the Service Cluster IP as the destination address. This feature graduated to Beta in Kubernetes v1.24 and is graduating to GA in v1.26.
+
+Services default the `internalTrafficPolicy` field to `Cluster`, where traffic is randomly distributed to all endpoints.
+
+{{< figure src="service-internal-traffic-policy-cluster.png" caption="Figure 7: Service routing when internalTrafficPolicy is Cluster" >}}
+
+When `internalTrafficPolicy` is set to `Local`, kube-proxy will forward internal traffic for a Service only if there is an available endpoint
+that is local to the same Node.
+
+{{< figure src="service-internal-traffic-policy-local.png" caption="Figure 8: Service routing when internalTrafficPolicy is Local" >}}
+
+{{< caution >}}
+When using `internalTrafficPolicy: Local`, traffic will be dropped by kube-proxy when no local endpoints are available.
+{{< /caution >}}
+
+## Getting Involved
+
+If you're interested in future discussions on Kubernetes traffic engineering, you can get involved in SIG Network in the following ways:
+* Slack: [#sig-network](https://kubernetes.slack.com/messages/sig-network)
+* [Mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-network)
+* [Open Community Issues/PRs](https://github.com/kubernetes/community/labels/sig%2Fnetwork)
+* [Biweekly meetings](https://github.com/kubernetes/community/tree/master/sig-network#meetings)
diff --git a/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/service-internal-traffic-policy-cluster.png b/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/service-internal-traffic-policy-cluster.png
new file mode 100644
index 0000000000..e0f477aa2e
Binary files /dev/null and b/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/service-internal-traffic-policy-cluster.png differ
diff --git a/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/service-internal-traffic-policy-local.png b/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/service-internal-traffic-policy-local.png
new file mode 100644
index 0000000000..407a0db0ed
Binary files /dev/null and b/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/service-internal-traffic-policy-local.png differ
diff --git a/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/traffic-engineering-lb-healthy.png b/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/traffic-engineering-lb-healthy.png
new file mode 100644
index 0000000000..74ac7f4f5c
Binary files /dev/null and b/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/traffic-engineering-lb-healthy.png differ
diff --git a/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/traffic-engineering-lb-with-proxy-terminating-endpoints.png b/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/traffic-engineering-lb-with-proxy-terminating-endpoints.png
new file mode 100644
index 0000000000..0faa5d960a
Binary files /dev/null and b/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/traffic-engineering-lb-with-proxy-terminating-endpoints.png differ
diff --git
a/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/traffic-engineering-lb-without-proxy-terminating-endpoints.png b/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/traffic-engineering-lb-without-proxy-terminating-endpoints.png
new file mode 100644
index 0000000000..43db9c9efb
Binary files /dev/null and b/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/traffic-engineering-lb-without-proxy-terminating-endpoints.png differ
diff --git a/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/traffic-engineering-service-load-balancer.png b/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/traffic-engineering-service-load-balancer.png
new file mode 100644
index 0000000000..a4e58c6207
Binary files /dev/null and b/content/en/blog/_posts/2022-12-30-advancements-in-traffic-engineering/traffic-engineering-service-load-balancer.png differ
diff --git a/content/en/blog/_posts/2023-01-02-cross-namespace-data-sources-alpha.md b/content/en/blog/_posts/2023-01-02-cross-namespace-data-sources-alpha.md
new file mode 100644
index 0000000000..2f7cd683e0
--- /dev/null
+++ b/content/en/blog/_posts/2023-01-02-cross-namespace-data-sources-alpha.md
@@ -0,0 +1,159 @@
+---
+layout: blog
+title: "Kubernetes v1.26: Alpha support for cross-namespace storage data sources"
+date: 2023-01-02
+slug: cross-namespace-data-sources-alpha
+---
+
+**Author:** Takafumi Takahashi (Hitachi Vantara)
+
+Kubernetes v1.26, released last month, introduced an alpha feature that
+lets you specify a data source for a PersistentVolumeClaim, even where the source
+data belongs to a different namespace.
+With the new feature enabled, you specify a namespace in the `dataSourceRef` field of
+a new PersistentVolumeClaim. Once Kubernetes checks that access is OK, the new
+PersistentVolume can populate its data from the storage source specified in that other
+namespace.
+Before Kubernetes v1.26, provided your cluster had the `AnyVolumeDataSource` feature enabled,
+you could already provision new volumes from a data source in the **same**
+namespace.
+However, that only worked for data sources in the same namespace,
+so users couldn't provision a PersistentVolume with a claim
+in one namespace from a data source in another namespace.
+To solve this problem, Kubernetes v1.26 added a new alpha `namespace` field
+to the `dataSourceRef` field in the PersistentVolumeClaim API.
+
+## How it works
+
+Once the csi-provisioner finds that a data source is specified with a `dataSourceRef` that
+has a non-empty namespace name,
+it checks all reference grants within the namespace that's specified by the `.spec.dataSourceRef.namespace`
+field of the PersistentVolumeClaim, in order to see if access to the data source is allowed.
+If any ReferenceGrant allows access, the csi-provisioner provisions a volume from the data source.
+
+## Trying it out
+
+The following things are required to use cross-namespace volume provisioning:
+
+* Enable the `AnyVolumeDataSource` and `CrossNamespaceVolumeDataSource` [feature gates](/docs/reference/command-line-tools-reference/feature-gates/) for the kube-apiserver and kube-controller-manager
+* Install a CRD for the specific `VolumeSnapshot` controller
+* Install the CSI Provisioner controller and enable the `CrossNamespaceVolumeDataSource` feature gate
+* Install the CSI driver
+* Install a CRD for ReferenceGrants
+
+## Putting it all together
+
+To see how this works, you can install the sample and try it out.
+This sample creates a PVC in the dev namespace from a VolumeSnapshot in the prod namespace.
+It is a simple example; for real-world use, you might want to use a more complex approach.
+
+### Assumptions for this example {#example-assumptions}
+
+* Your Kubernetes cluster was deployed with `AnyVolumeDataSource` and `CrossNamespaceVolumeDataSource` feature gates enabled
+* There are two namespaces, dev and prod
+* A CSI driver is deployed
+* There is an existing VolumeSnapshot named `new-snapshot-demo` in the _prod_ namespace
+* The ReferenceGrant CRD (from the Gateway API project) is already deployed
+
+### Grant ReferenceGrants read permission to the CSI Provisioner
+
+Access to ReferenceGrants is only needed when the CSI driver
+has the `CrossNamespaceVolumeDataSource` controller capability.
+For this example, the external-provisioner needs **get**, **list**, and **watch**
+permissions for `referencegrants` (API group `gateway.networking.k8s.io`).
+
+```yaml
+  - apiGroups: ["gateway.networking.k8s.io"]
+    resources: ["referencegrants"]
+    verbs: ["get", "list", "watch"]
+```
+
+### Enable the CrossNamespaceVolumeDataSource feature gate for the CSI Provisioner
+
+Add `--feature-gates=CrossNamespaceVolumeDataSource=true` to the csi-provisioner command line.
+For example, use this manifest snippet to redefine the container:
+
+```yaml
+      - args:
+        - -v=5
+        - --csi-address=/csi/csi.sock
+        - --feature-gates=Topology=true
+        - --feature-gates=CrossNamespaceVolumeDataSource=true
+        image: csi-provisioner:latest
+        imagePullPolicy: IfNotPresent
+        name: csi-provisioner
+```
+
+### Create a ReferenceGrant
+
+Here's a manifest for an example ReferenceGrant.
+
+```yaml
+apiVersion: gateway.networking.k8s.io/v1beta1
+kind: ReferenceGrant
+metadata:
+  name: allow-prod-pvc
+  namespace: prod
+spec:
+  from:
+  - group: ""
+    kind: PersistentVolumeClaim
+    namespace: dev
+  to:
+  - group: snapshot.storage.k8s.io
+    kind: VolumeSnapshot
+    name: new-snapshot-demo
+```
+
+### Create a PersistentVolumeClaim by using cross-namespace data source
+
+Kubernetes creates a PersistentVolumeClaim in the dev namespace, and the CSI driver populates
+the PersistentVolume used in dev from the snapshot in the prod namespace.
+
+```yaml
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: example-pvc
+  namespace: dev
+spec:
+  storageClassName: example
+  accessModes:
+  - ReadWriteOnce
+  resources:
+    requests:
+      storage: 1Gi
+  dataSourceRef:
+    apiGroup: snapshot.storage.k8s.io
+    kind: VolumeSnapshot
+    name: new-snapshot-demo
+    namespace: prod
+  volumeMode: Filesystem
+```
+
+## How can I learn more?
+
+The enhancement proposal,
+[Provision volumes from cross-namespace snapshots](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/3294-provision-volumes-from-cross-namespace-snapshots), includes lots of detail about the history and technical implementation of this feature.
+
+Please get involved by joining the [Kubernetes Storage Special Interest Group (SIG)](https://github.com/kubernetes/community/tree/master/sig-storage)
+to help us enhance this feature.
+There are a lot of good ideas already and we'd be thrilled to have more!
+
+## Acknowledgments
+
+It takes a wonderful group to make wonderful software.
+Special thanks to the following people for the insightful reviews,
+thorough consideration, and valuable contributions to the CrossNamespaceVolumeDataSource feature:
+
+* Michelle Au (msau42)
+* Xing Yang (xing-yang)
+* Masaki Kimura (mkimuram)
+* Tim Hockin (thockin)
+* Ben Swartzlander (bswartz)
+* Rob Scott (robscott)
+* John Griffith (j-griffith)
+* Michael Henriksen (mhenriks)
+* Mustafa Elbehery (Elbehery)
+
+It’s been a joy to work with y'all on this.
diff --git a/content/en/blog/_posts/2023-01-05-retroactive-default-storage-class.md b/content/en/blog/_posts/2023-01-05-retroactive-default-storage-class.md
new file mode 100644
index 0000000000..674eaedea1
--- /dev/null
+++ b/content/en/blog/_posts/2023-01-05-retroactive-default-storage-class.md
@@ -0,0 +1,166 @@
+---
+layout: blog
+title: "Kubernetes 1.26: Retroactive Default StorageClass"
+date: 2023-01-05
+slug: retroactive-default-storage-class
+---
+
+**Author:** Roman Bednář (Red Hat)
+
+The v1.25 release of Kubernetes introduced an alpha feature to change how a default StorageClass was assigned to a PersistentVolumeClaim (PVC).
+With the feature enabled, you no longer need to create a default StorageClass first and a PVC second to assign the class. Additionally, any PVCs without a StorageClass assigned can be updated later.
+This feature graduated to beta in Kubernetes 1.26.
+
+You can read [retroactive default StorageClass assignment](/docs/concepts/storage/persistent-volumes/#retroactive-default-storageclass-assignment) in the Kubernetes documentation for more details about how to use it,
+or you can read on to learn about why the Kubernetes project is making this change.
+
+## Why did StorageClass assignment need improvements
+
+Users might already be familiar with a similar feature that assigns default StorageClasses to **new** PVCs at the time of creation. This is currently handled by the [admission controller](/docs/reference/access-authn-authz/admission-controllers/#defaultstorageclass).
+
+But what if there wasn't a default StorageClass defined at the time of PVC creation?
+Users would end up with a PVC that would never be assigned a class.
+As a result, no storage would be provisioned, and the PVC would be somewhat "stuck" at this point.
+Generally, two main scenarios could result in "stuck" PVCs and cause problems later down the road.
+Let's take a closer look at each of them.
+
+### Changing default StorageClass
+
+Before this feature existed, admins had two options when they wanted to change the default StorageClass (a sketch of how a class is marked as the default follows this list):
+
+1. Creating a new StorageClass as default before removing the old one associated with the PVC.
+This would result in having two defaults for a short period.
+At this point, if a user were to create a PersistentVolumeClaim with storageClassName set to null (implying default StorageClass), the newest default StorageClass would be chosen and assigned to this PVC.
+
+2. Removing the old default first and creating a new default StorageClass.
+This would result in having no default for a short time.
+Subsequently, if a user were to create a PersistentVolumeClaim with storageClassName set to null (implying default StorageClass), the PVC would be in Pending state forever.
+The user would have to fix this by deleting the PVC and recreating it once the default StorageClass was available.
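+
+For context, a StorageClass is marked as the cluster default with an annotation rather than a dedicated
+field. A minimal sketch (the class name and provisioner are illustrative):
+
+```yaml
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: my-storageclass
+  annotations:
+    # This annotation is what makes a class "the default"; changing the default
+    # means removing it from one StorageClass object and adding it to another.
+    storageclass.kubernetes.io/is-default-class: "true"
+provisioner: example.com/my-csi-driver   # illustrative; use your storage driver's provisioner
+```
+
+Both problematic windows above exist because the annotation has to be removed from one object and added
+to another, and those two writes cannot happen atomically.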
+
+### Resource ordering during cluster installation
+
+If a cluster installation tool needed to create resources that required storage, for example, an image registry, it was difficult to get the ordering right.
+This is because any Pods that required storage would rely on the presence of a default StorageClass and would fail to be created if it wasn't defined.
+
+## What changed
+
+We've changed the PersistentVolume (PV) controller to assign a default StorageClass to any unbound PersistentVolumeClaim that has the storageClassName set to null.
+We've also modified the PersistentVolumeClaim admission within the API server to allow changing the value from an unset value to an actual StorageClass name.
+
+### Null `storageClassName` versus `storageClassName: ""` - does it matter? { #null-vs-empty-string }
+
+Before this feature was introduced, those values were equal in terms of behavior. Any PersistentVolumeClaim with the storageClassName set to null or "" would bind to an existing PersistentVolume resource with storageClassName also set to null or "".
+
+With this new feature enabled, we wanted to maintain this behavior but also be able to update the StorageClass name.
+With these constraints in mind, the feature changes the semantics of null. If a default StorageClass is present, null would translate to "Give me a default" and "" would mean "Give me a PersistentVolume that also has an empty StorageClass name."
+In the absence of a default StorageClass, the behavior would remain unchanged.
+
+Summarizing the above, we've changed the semantics of null so that its behavior depends on the presence or absence of a default StorageClass.
+
+The table below shows all these cases and describes when a PVC binds and when its StorageClass gets updated.
+**PVC binding behavior with Retroactive default StorageClass**
+
+|  | PVC `storageClassName` = `""` | PVC `storageClassName` = null |
+|---|---|---|
+| Without default class, PV `storageClassName` = `""` | binds | binds |
+| Without default class, PV without `storageClassName` | binds | binds |
+| With default class, PV `storageClassName` = `""` | binds | class updates |
+| With default class, PV without `storageClassName` | binds | class updates |
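+
+To illustrate the two spellings side by side, here is a sketch of two PersistentVolumeClaims
+(the names are illustrative):
+
+```yaml
+# storageClassName omitted (null): with a default class present, the PV
+# controller now assigns that default retroactively.
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: pvc-wants-default
+spec:
+  accessModes:
+  - ReadWriteOnce
+  resources:
+    requests:
+      storage: 1Gi
+---
+# storageClassName explicitly "": requests a PV that also has an empty class
+# name, and is never updated retroactively.
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: pvc-no-class
+spec:
+  storageClassName: ""
+  accessModes:
+  - ReadWriteOnce
+  resources:
+    requests:
+      storage: 1Gi
+```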
+
+## How to use it
+
+If you want to test the feature whilst it's beta, you need to enable the relevant feature gate in the kube-controller-manager and the kube-apiserver. Use the `--feature-gates` command line argument:
+
+```
+--feature-gates="...,RetroactiveDefaultStorageClass=true"
+```
+
+### Test drive
+
+If you would like to see the feature in action and verify it works fine in your cluster, here's what you can try:
+
+1. Define a basic PersistentVolumeClaim:
+```yaml
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: pvc-1
+spec:
+  accessModes:
+  - ReadWriteOnce
+  resources:
+    requests:
+      storage: 1Gi
+```
+
+2. Create the PersistentVolumeClaim when there is no default StorageClass. The PVC won't provision or bind (unless there is an existing, suitable PV already present) and will remain in Pending state.
+```
+$ kubectl get pvc
+NAME    STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
+pvc-1   Pending
+```
+
+3. Configure one StorageClass as default.
+```
+$ kubectl patch storageclass my-storageclass -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
+storageclass.storage.k8s.io/my-storageclass patched
+```
+
+4. Verify that the PersistentVolumeClaim is now provisioned correctly and was retroactively updated with the new default StorageClass.
+```
+$ kubectl get pvc
+NAME    STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
+pvc-1   Bound    pvc-06a964ca-f997-4780-8627-b5c3bf5a87d8   1Gi        RWO            my-storageclass   87m
+```
+
+### New metrics
+
+To help you see that the feature is working as expected, we also introduced a new `retroactive_storageclass_total` metric to show how many times the PV controller attempted to update a PersistentVolumeClaim, and `retroactive_storageclass_errors_total` to show how many of those attempts failed.
+
+## Getting involved
+
+We always welcome new contributors, so if you would like to get involved you can join our [Kubernetes Storage Special Interest Group](https://github.com/kubernetes/community/tree/master/sig-storage) (SIG).
+
+If you would like to share feedback, you can do so on our [public Slack channel](https://app.slack.com/client/T09NY5SBT/C09QZFCE5).
+
+Special thanks to all the contributors that provided great reviews, shared valuable insight, and helped implement this feature (alphabetical order):
+
+- Deep Debroy ([ddebroy](https://github.com/ddebroy))
+- Divya Mohan ([divya-mohan0209](https://github.com/divya-mohan0209))
+- Jan Šafránek ([jsafrane](https://github.com/jsafrane/))
+- Joe Betz ([jpbetz](https://github.com/jpbetz))
+- Jordan Liggitt ([liggitt](https://github.com/liggitt))
+- Michelle Au ([msau42](https://github.com/msau42))
+- Seokho Son ([seokho-son](https://github.com/seokho-son))
+- Shannon Kularathna ([shannonxtreme](https://github.com/shannonxtreme))
+- Tim Bannister ([sftim](https://github.com/sftim))
+- Tim Hockin ([thockin](https://github.com/thockin))
+- Wojciech Tyczynski ([wojtek-t](https://github.com/wojtek-t))
+- Xing Yang ([xing-yang](https://github.com/xing-yang))
diff --git a/content/en/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins.md b/content/en/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins.md
index 6645984559..c8cd33fe14 100644
--- a/content/en/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins.md
+++ b/content/en/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins.md
@@ -172,11 +172,11 @@ Beta graduation of this feature.
Because of this, kubelet upgrades should be sea but there still may be changes in the API before stabilization making upgrades not guaranteed to be non-breaking. -{{< caution >}} +{{< note >}} Although the Device Manager component of Kubernetes is a generally available feature, the _device plugin API_ is not stable. For information on the device plugin API and version compatibility, read [Device Plugin API versions](/docs/reference/node/device-plugin-api-versions/). -{{< caution >}} +{{< /note >}} As a project, Kubernetes recommends that device plugin developers: diff --git a/content/en/docs/concepts/policy/pid-limiting.md b/content/en/docs/concepts/policy/pid-limiting.md index 1e03ccf375..54e1b324f9 100644 --- a/content/en/docs/concepts/policy/pid-limiting.md +++ b/content/en/docs/concepts/policy/pid-limiting.md @@ -73,13 +73,6 @@ The value you specified declares that the specified number of process IDs will be reserved for the system as a whole and for Kubernetes system daemons respectively. -{{< note >}} -Before Kubernetes version 1.20, PID resource limiting with Node-level -reservations required enabling the [feature -gate](/docs/reference/command-line-tools-reference/feature-gates/) -`SupportNodePidsLimit` to work. -{{< /note >}} - ## Pod PID limits Kubernetes allows you to limit the number of processes running in a Pod. You @@ -89,12 +82,6 @@ To configure the limit, you can specify the command line parameter `--pod-max-pi to the kubelet, or set `PodPidsLimit` in the kubelet [configuration file](/docs/tasks/administer-cluster/kubelet-config-file/). -{{< note >}} -Before Kubernetes version 1.20, PID resource limiting for Pods required enabling -the [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) -`SupportPodPidsLimit` to work. -{{< /note >}} - ## PID based eviction You can configure kubelet to start terminating a Pod when it is misbehaving and consuming abnormal amount of resources. diff --git a/content/en/docs/concepts/security/multi-tenancy.md b/content/en/docs/concepts/security/multi-tenancy.md index 8393b3a0f2..49355d08a6 100755 --- a/content/en/docs/concepts/security/multi-tenancy.md +++ b/content/en/docs/concepts/security/multi-tenancy.md @@ -44,7 +44,7 @@ share clusters. The other major form of multi-tenancy frequently involves a Software-as-a-Service (SaaS) vendor running multiple instances of a workload for customers. This business model is so strongly associated with this deployment style that many people call it "SaaS tenancy." However, a better -term might be "multi-customer tenancy,” since SaaS vendors may also use other deployment models, +term might be "multi-customer tenancy," since SaaS vendors may also use other deployment models, and this deployment model can also be used outside of SaaS. 
In this scenario, the customers do not have access to the cluster; Kubernetes is invisible from diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs.md index 46063a427e..db92db3f73 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs.md @@ -17,6 +17,10 @@ Commands related to handling kubernetes certificates Commands related to handling kubernetes certificates +``` +kubeadm certs [flags] +``` + ### Options diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_images_list.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_images_list.md index 9be11c4331..541d9892a1 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_images_list.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_images_list.md @@ -55,7 +55,7 @@ kubeadm config images list [flags] - + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_images_pull.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_images_pull.md index 56083e9ede..3a78f26031 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_images_pull.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_images_pull.md @@ -48,7 +48,7 @@ kubeadm config images pull [flags] - + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init.md index ac2897751e..ad919d2e16 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init.md @@ -55,6 +55,7 @@ kubelet-finalize Updates settings relevant to the kubelet after TLS addon Install required addons for passing conformance tests /coredns Install the CoreDNS addon to a Kubernetes cluster /kube-proxy Install the kube-proxy addon to a Kubernetes cluster +show-join-command Show the join command for control-plane and worker node ``` @@ -138,7 +139,7 @@ kubeadm init [flags] - + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_addon_all.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_addon_all.md index ec2adcb93c..dafa56360a 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_addon_all.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_addon_all.md @@ -58,11 +58,18 @@ kubeadm init phase addon all [flags] + + + + + + + - + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_addon_coredns.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_addon_coredns.md index 2293447d0f..3225abfba1 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_addon_coredns.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_addon_coredns.md @@ -37,11 +37,18 @@ kubeadm init phase addon coredns [flags] + + + + + + + - + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_addon_kube-proxy.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_addon_kube-proxy.md index edfddfa4b3..bfc3a74a5b 100644 --- 
a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_addon_kube-proxy.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_addon_kube-proxy.md @@ -58,6 +58,13 @@ kubeadm init phase addon kube-proxy [flags] + + + + + + + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_bootstrap-token.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_bootstrap-token.md index 769dd1903d..27b722f289 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_bootstrap-token.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_bootstrap-token.md @@ -47,6 +47,13 @@ kubeadm init phase bootstrap-token [flags] + + + + + + + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_all.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_all.md index 18eee93ce3..a6723674af 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_all.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_all.md @@ -65,6 +65,13 @@ kubeadm init phase certs all [flags] + + + + + + + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_apiserver-etcd-client.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_apiserver-etcd-client.md index f78bf3d7c5..416c59933e 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_apiserver-etcd-client.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_apiserver-etcd-client.md @@ -48,6 +48,13 @@ kubeadm init phase certs apiserver-etcd-client [flags] + + + + + + + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_apiserver-kubelet-client.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_apiserver-kubelet-client.md index 1b6bc016c6..e4128aedea 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_apiserver-kubelet-client.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_apiserver-kubelet-client.md @@ -48,6 +48,13 @@ kubeadm init phase certs apiserver-kubelet-client [flags] + + + + + + + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_apiserver.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_apiserver.md index 9e22d779d5..ff6d9adc00 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_apiserver.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_apiserver.md @@ -69,6 +69,13 @@ kubeadm init phase certs apiserver [flags] + + + + + + + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_ca.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_ca.md index 54f9c74f74..7f333a5da4 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_ca.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_ca.md @@ -48,6 +48,13 @@ kubeadm init phase certs ca [flags] + + + + + + + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_etcd-ca.md 
b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_etcd-ca.md index f7236ba5c8..3c72fcdf6a 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_etcd-ca.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_etcd-ca.md @@ -48,6 +48,13 @@ kubeadm init phase certs etcd-ca [flags] + + + + + + + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_etcd-healthcheck-client.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_etcd-healthcheck-client.md index 0c0389c185..708e244f2b 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_etcd-healthcheck-client.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_etcd-healthcheck-client.md @@ -48,6 +48,13 @@ kubeadm init phase certs etcd-healthcheck-client [flags] + + + + + + + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_etcd-peer.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_etcd-peer.md index c2b863f843..54c17d5196 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_etcd-peer.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_etcd-peer.md @@ -50,6 +50,13 @@ kubeadm init phase certs etcd-peer [flags] + + + + + + + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_etcd-server.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_etcd-server.md index 1770f38815..96eeba4003 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_etcd-server.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_etcd-server.md @@ -50,6 +50,13 @@ kubeadm init phase certs etcd-server [flags] + + + + + + + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_front-proxy-ca.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_front-proxy-ca.md index 22cc9f5ddc..1c425e7a2f 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_front-proxy-ca.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_front-proxy-ca.md @@ -48,6 +48,13 @@ kubeadm init phase certs front-proxy-ca [flags] + + + + + + + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_front-proxy-client.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_front-proxy-client.md index e3d9602901..12867c61a6 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_front-proxy-client.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_front-proxy-client.md @@ -48,6 +48,13 @@ kubeadm init phase certs front-proxy-client [flags] + + + + + + + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_control-plane_all.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_control-plane_all.md index 6df0ea58f6..e168a6bb87 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_control-plane_all.md +++ 
b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_control-plane_all.md @@ -101,7 +101,7 @@ kubeadm init phase control-plane all [flags] - + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_control-plane_apiserver.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_control-plane_apiserver.md index 178efcf9ed..3f69982f8f 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_control-plane_apiserver.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_control-plane_apiserver.md @@ -83,7 +83,7 @@ kubeadm init phase control-plane apiserver [flags] - + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_etcd_local.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_etcd_local.md index b5c57f4d2f..d5c168024f 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_etcd_local.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_etcd_local.md @@ -56,6 +56,13 @@ kubeadm init phase etcd local [flags] + + + + + + + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_admin.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_admin.md index cd01b778df..dc6264e2ab 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_admin.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_admin.md @@ -65,6 +65,13 @@ kubeadm init phase kubeconfig admin [flags] + + + + + + + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_all.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_all.md index 52de2366f3..28be544102 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_all.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_all.md @@ -65,6 +65,13 @@ kubeadm init phase kubeconfig all [flags] + + + + + + + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_controller-manager.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_controller-manager.md index c2a63d91c1..5c1563f637 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_controller-manager.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_controller-manager.md @@ -65,6 +65,13 @@ kubeadm init phase kubeconfig controller-manager [flags] + + + + + + + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_kubelet.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_kubelet.md index 4ce731fc17..19b8cfe672 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_kubelet.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_kubelet.md @@ -67,6 +67,13 @@ kubeadm init phase kubeconfig kubelet [flags] + + + + + + + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_scheduler.md 
b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_scheduler.md index 86588c83ca..580f99d255 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_scheduler.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_scheduler.md @@ -65,6 +65,13 @@ kubeadm init phase kubeconfig scheduler [flags] + + + + + + + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubelet-finalize_all.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubelet-finalize_all.md index 680986bebe..62278d5c12 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubelet-finalize_all.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubelet-finalize_all.md @@ -51,6 +51,13 @@ kubeadm init phase kubelet-finalize all [flags] + + + + + + + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubelet-finalize_experimental-cert-rotation.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubelet-finalize_experimental-cert-rotation.md index a6cd628cec..93c521157b 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubelet-finalize_experimental-cert-rotation.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubelet-finalize_experimental-cert-rotation.md @@ -44,6 +44,13 @@ kubeadm init phase kubelet-finalize experimental-cert-rotation [flags] + + + + + + + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubelet-start.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubelet-start.md index 156c101fcb..2dd93c707d 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubelet-start.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubelet-start.md @@ -51,6 +51,13 @@ kubeadm init phase kubelet-start [flags] + + + + + + + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_mark-control-plane.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_mark-control-plane.md index 8e88958204..685dfdcab5 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_mark-control-plane.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_mark-control-plane.md @@ -47,6 +47,13 @@ kubeadm init phase mark-control-plane [flags] + + + + + + + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_preflight.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_preflight.md index 03f02c1251..21fc3f7fea 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_preflight.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_preflight.md @@ -44,6 +44,13 @@ kubeadm init phase preflight [flags] + + + + + + + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_show-join-command.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_show-join-command.md new file mode 100644 index 0000000000..23abc5671c --- /dev/null +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_show-join-command.md @@ -0,0 +1,65 @@ + + + +Show the join 
command for control-plane and worker node + +### Synopsis + + +Show the join command for control-plane and worker node + +``` +kubeadm init phase show-join-command [flags] +``` + +### Options + +
++++ + + + + + + + + + + +
-h, --help

help for show-join-command

+ + + +### Options inherited from parent commands + + ++++ + + + + + + + + + + +
--rootfs string

[EXPERIMENTAL] The path to the 'real' host root filesystem.

+ + + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-certs.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-certs.md index f7717958b6..9915f522ab 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-certs.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-certs.md @@ -15,7 +15,7 @@ Upload certificates to kubeadm-certs ### Synopsis -This command is not meant to be run on its own. See list of available subcommands. +Upload control plane certificates to the kubeadm-certs Secret ``` kubeadm init phase upload-certs [flags] @@ -44,6 +44,13 @@ kubeadm init phase upload-certs [flags]

Path to a kubeadm configuration file.

+ +--dry-run + + +

Don't apply any changes; just output what would be done.

+ + -h, --help diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-config_all.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-config_all.md index 78867154bc..2b15abac96 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-config_all.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-config_all.md @@ -37,6 +37,13 @@ kubeadm init phase upload-config all [flags]

Path to a kubeadm configuration file.

+ +--dry-run + + +

Don't apply any changes; just output what would be done.

+ + -h, --help diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-config_kubeadm.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-config_kubeadm.md index b6fe788708..d8f466b409 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-config_kubeadm.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-config_kubeadm.md @@ -46,6 +46,13 @@ kubeadm init phase upload-config kubeadm [flags]

Path to a kubeadm configuration file.

+ +--dry-run + + +

Don't apply any changes; just output what would be done.

+ + -h, --help diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-config_kubelet.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-config_kubelet.md index d0288825bb..ae2fd63e83 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-config_kubelet.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-config_kubelet.md @@ -44,6 +44,13 @@ kubeadm init phase upload-config kubelet [flags]

Path to a kubeadm configuration file.

+ +--dry-run + + +

Don't apply any changes; just output what would be done.

+ + -h, --help diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join.md index b671f2a64e..c78cd9c9cc 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join.md @@ -114,7 +114,7 @@ kubeadm join [api-server-endpoint] [flags] --config string -

Path to kubeadm config file.

+

Path to a kubeadm configuration file.

diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join_all.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join_all.md index e6cdc8095f..4a94a75f49 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join_all.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join_all.md @@ -41,7 +41,7 @@ kubeadm join phase control-plane-join all [flags] --config string -

Path to kubeadm config file.

+

Path to a kubeadm configuration file.

@@ -51,6 +51,13 @@ kubeadm join phase control-plane-join all [flags]

Create a new control plane instance on this node

+ +--dry-run + + +

Don't apply any changes; just output what would be done.

+ + -h, --help diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join_etcd.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join_etcd.md index 1eadda3fd7..637e909c3b 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join_etcd.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join_etcd.md @@ -41,7 +41,7 @@ kubeadm join phase control-plane-join etcd [flags] --config string -

Path to kubeadm config file.

+

Path to a kubeadm configuration file.

@@ -51,6 +51,13 @@ kubeadm join phase control-plane-join etcd [flags]

Create a new control plane instance on this node

+ +--dry-run + + +

Don't apply any changes; just output what would be done.

+ + -h, --help diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join_mark-control-plane.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join_mark-control-plane.md index 08ce1ada2a..888b17e17e 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join_mark-control-plane.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join_mark-control-plane.md @@ -34,7 +34,7 @@ kubeadm join phase control-plane-join mark-control-plane [flags] --config string -

Path to kubeadm config file.

+

Path to a kubeadm configuration file.

@@ -44,6 +44,13 @@ kubeadm join phase control-plane-join mark-control-plane [flags]

Create a new control plane instance on this node

+ +--dry-run + + +

Don't apply any changes; just output what would be done.

+ + -h, --help diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join_update-status.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join_update-status.md index a8b6a3c8ea..c2e387505c 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join_update-status.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join_update-status.md @@ -41,7 +41,7 @@ kubeadm join phase control-plane-join update-status [flags] --config string -

Path to kubeadm config file.

+

Path to a kubeadm configuration file.

diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_all.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_all.md index 70843944d6..266906c653 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_all.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_all.md @@ -55,7 +55,7 @@ kubeadm join phase control-plane-prepare all [api-server-endpoint] [flags] --config string -

Path to kubeadm config file.

+

Path to a kubeadm configuration file.

@@ -93,6 +93,13 @@ kubeadm join phase control-plane-prepare all [api-server-endpoint] [flags]

For token-based discovery, allow joining without --discovery-token-ca-cert-hash pinning.

+ +--dry-run + + +

Don't apply any changes; just output what would be done.

+ + -h, --help diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_certs.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_certs.md index b9f2357e03..ee1234e5a1 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_certs.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_certs.md @@ -41,7 +41,7 @@ kubeadm join phase control-plane-prepare certs [api-server-endpoint] [flags] --config string -

Path to kubeadm config file.

+

Path to a kubeadm configuration file.

@@ -79,6 +79,13 @@ kubeadm join phase control-plane-prepare certs [api-server-endpoint] [flags]

For token-based discovery, allow joining without --discovery-token-ca-cert-hash pinning.

+ +--dry-run + + +

Don't apply any changes; just output what would be done.

+ + -h, --help diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_control-plane.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_control-plane.md index 71689c1087..b6f19a68a2 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_control-plane.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_control-plane.md @@ -48,7 +48,7 @@ kubeadm join phase control-plane-prepare control-plane [flags] --config string -

Path to kubeadm config file.

+

Path to a kubeadm configuration file.

@@ -58,6 +58,13 @@ kubeadm join phase control-plane-prepare control-plane [flags]

Create a new control plane instance on this node

+ +--dry-run + + +

Don't apply any changes; just output what would be done.

+ + -h, --help diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_download-certs.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_download-certs.md index 494212f9a1..019ea5cb5c 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_download-certs.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_download-certs.md @@ -41,7 +41,7 @@ kubeadm join phase control-plane-prepare download-certs [api-server-endpoint] [f --config string -

Path to kubeadm config file.

+

Path to a kubeadm configuration file.

@@ -79,6 +79,13 @@ kubeadm join phase control-plane-prepare download-certs [api-server-endpoint] [f

For token-based discovery, allow joining without --discovery-token-ca-cert-hash pinning.

+ +--dry-run + + +

Don't apply any changes; just output what would be done.

+ + -h, --help diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_kubeconfig.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_kubeconfig.md index 2bb7cbf4aa..d12f102bbb 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_kubeconfig.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_kubeconfig.md @@ -41,7 +41,7 @@ kubeadm join phase control-plane-prepare kubeconfig [api-server-endpoint] [flags --config string -

Path to kubeadm config file.

+

Path to a kubeadm configuration file.

@@ -79,6 +79,13 @@ kubeadm join phase control-plane-prepare kubeconfig [api-server-endpoint] [flags

For token-based discovery, allow joining without --discovery-token-ca-cert-hash pinning.

+ +--dry-run + + +

Don't apply any changes; just output what would be done.

+ + -h, --help diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_kubelet-start.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_kubelet-start.md index 8f76d4227d..1902c7cc2c 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_kubelet-start.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_kubelet-start.md @@ -34,7 +34,7 @@ kubeadm join phase kubelet-start [api-server-endpoint] [flags] --config string -

Path to kubeadm config file.

+

Path to a kubeadm configuration file.

@@ -72,6 +72,13 @@ kubeadm join phase kubelet-start [api-server-endpoint] [flags]

For token-based discovery, allow joining without --discovery-token-ca-cert-hash pinning.

+ +--dry-run + + +

Don't apply any changes; just output what would be done.

+ + -h, --help diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_preflight.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_preflight.md index 23000e4fcd..ecfb735e7e 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_preflight.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_preflight.md @@ -62,7 +62,7 @@ kubeadm join phase preflight [api-server-endpoint] [flags] --config string -

Path to kubeadm config file.

+

Path to a kubeadm configuration file.

@@ -107,6 +107,13 @@ kubeadm join phase preflight [api-server-endpoint] [flags]

For token-based discovery, allow joining without --discovery-token-ca-cert-hash pinning.

+ +--dry-run + + +

Don't apply any changes; just output what would be done.

+ + -h, --help diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset.md index 7d632e23d4..1d979bfcb9 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset.md @@ -45,6 +45,13 @@ kubeadm reset [flags]

The path to the directory where the certificates are stored. If specified, clean this directory.

+ +--cleanup-tmp-dir + + +

Cleanup the "/etc/kubernetes/tmp" directory
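For example, a reset that also clears the temporary directory could be invoked as follows (a sketch; `kubeadm reset` still asks for confirmation unless `--force` is passed):

```shell
# remove kubeadm state and also clean /etc/kubernetes/tmp
sudo kubeadm reset --cleanup-tmp-dir
```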

+ + --cri-socket string diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset_phase_cleanup-node.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset_phase_cleanup-node.md index 9fadcb29bb..f62de85dd3 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset_phase_cleanup-node.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset_phase_cleanup-node.md @@ -37,6 +37,13 @@ kubeadm reset phase cleanup-node [flags]

The path to the directory where the certificates are stored. If specified, clean this directory.

+ +--cleanup-tmp-dir + + +

Cleanup the "/etc/kubernetes/tmp" directory

+ + --cri-socket string @@ -44,6 +51,13 @@ kubeadm reset phase cleanup-node [flags]

Path to the CRI socket to connect. If empty kubeadm will try to auto-detect this value; use this option only if you have more than one CRI installed or if you have non-standard CRI socket.
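An illustrative invocation with an explicit socket (the containerd path shown is a common default, but verify it for your runtime):

```shell
sudo kubeadm reset phase cleanup-node \
    --cri-socket unix:///var/run/containerd/containerd.sock
```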

+ +--dry-run + + +

Don't apply any changes; just output what would be done.
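Combining the two flags added to this command, a non-destructive preview might look like this sketch:

```shell
# report what cleanup-node would remove, including /etc/kubernetes/tmp, without removing it
sudo kubeadm reset phase cleanup-node --cleanup-tmp-dir --dry-run
```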

+ + -h, --help diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset_phase_preflight.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset_phase_preflight.md index 14298c2f48..dd074f8ecf 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset_phase_preflight.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset_phase_preflight.md @@ -30,6 +30,13 @@ kubeadm reset phase preflight [flags] + +--dry-run + + +

Don't apply any changes; just output what would be done.

+ + -f, --force diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset_phase_remove-etcd-member.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset_phase_remove-etcd-member.md index 3218bb0d9f..54e7cf0e1b 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset_phase_remove-etcd-member.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset_phase_remove-etcd-member.md @@ -30,6 +30,13 @@ kubeadm reset phase remove-etcd-member [flags] + +--dry-run + + +

Don't apply any changes; just output what would be done.
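Because removing an etcd member changes cluster membership, the new flag allows previewing the operation first (a sketch):

```shell
# show the etcd membership change that would be made, without making it
sudo kubeadm reset phase remove-etcd-member --dry-run
```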

+ + -h, --help diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_apply.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_apply.md index 1302e50d38..2d1d3aa37d 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_apply.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_apply.md @@ -76,7 +76,7 @@ kubeadm upgrade apply [version] --feature-gates string -

A set of key=value pairs that describe feature gates for various features. Options are:
PublicKeysECDSA=true|false (ALPHA - default=false)
RootlessControlPlane=true|false (ALPHA - default=false)
UnversionedKubeletConfigMap=true|false (default=true)

+

A set of key=value pairs that describe feature gates for various features. Options are:
PublicKeysECDSA=true|false (ALPHA - default=false)
RootlessControlPlane=true|false (ALPHA - default=false)
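A sketch of setting one of the remaining gates during an upgrade (the target version is a placeholder):

```shell
sudo kubeadm upgrade apply v1.26.0 --feature-gates=PublicKeysECDSA=true
```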

diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_plan.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_plan.md index 28ab989a84..d235a06526 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_plan.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_plan.md @@ -55,7 +55,7 @@ kubeadm upgrade plan [version] [flags] --feature-gates string -

A set of key=value pairs that describe feature gates for various features. Options are:
PublicKeysECDSA=true|false (ALPHA - default=false)
RootlessControlPlane=true|false (ALPHA - default=false)
UnversionedKubeletConfigMap=true|false (default=true)

+

A set of key=value pairs that describe feature gates for various features. Options are:
PublicKeysECDSA=true|false (ALPHA - default=false)
RootlessControlPlane=true|false (ALPHA - default=false)

diff --git a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-join.md b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-join.md index d4f22871ed..107473aeee 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-join.md +++ b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-join.md @@ -244,7 +244,7 @@ it off regardless. Doing so will disable the ability to use the `--discovery-tok * Fetch the `cluster-info` file from the API Server: ```shell -kubectl -n kube-public get cm cluster-info -o yaml | grep "kubeconfig:" -A11 | grep "apiVersion" -A10 | sed "s/ //" | tee cluster-info.yaml +kubectl -n kube-public get cm cluster-info -o jsonpath='{.data.kubeconfig}' | tee cluster-info.yaml ``` The output is similar to this: diff --git a/content/en/docs/tasks/tools/install-kubectl-windows.md b/content/en/docs/tasks/tools/install-kubectl-windows.md index 240e3807a7..0e7bc7c53e 100644 --- a/content/en/docs/tasks/tools/install-kubectl-windows.md +++ b/content/en/docs/tasks/tools/install-kubectl-windows.md @@ -56,7 +56,7 @@ The following methods exist for installing kubectl on Windows: - Using PowerShell to automate the verification using the `-eq` operator to get a `True` or `False` result: ```powershell - $($(CertUtil -hashfile .\kubectl.exe SHA256)[1] -replace " ", "") -eq $(type .\kubectl.exe.sha256) + $(Get-FileHash -Algorithm SHA256 .\kubectl.exe).Hash -eq $(Get-Content .\kubectl.exe.sha256) ``` 1. Append or prepend the `kubectl` binary folder to your `PATH` environment variable. diff --git a/content/ja/docs/concepts/architecture/garbage-collection.md b/content/ja/docs/concepts/architecture/garbage-collection.md index 11f4796fd1..2ac48608b9 100644 --- a/content/ja/docs/concepts/architecture/garbage-collection.md +++ b/content/ja/docs/concepts/architecture/garbage-collection.md @@ -86,10 +86,10 @@ Kubernetesがオーナーオブジェクトを削除すると、残された依 ## 未使用のコンテナとイメージのガベージコレクション {#containers-images} -{{}}は未使用のイメージに対して5分ごとに、未使用のコンテナーに対して1分ごとにガベージコレクションを実行します。 +{{}}は未使用のイメージに対して5分ごとに、未使用のコンテナに対して1分ごとにガベージコレクションを実行します。 外部のガベージコレクションツールは、kubeletの動作を壊し、存在するはずのコンテナを削除する可能性があるため、使用しないでください。 -未使用のコンテナーとイメージのガベージコレクションのオプションを設定するには、[設定ファイル](/docs/tasks/administer-cluster/kubelet-config-file/)を使用してkubeletを調整し、[`KubeletConfiguration`](/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)リソースタイプを使用してガベージコレクションに関連するパラメーターを変更します。 +未使用のコンテナとイメージのガベージコレクションのオプションを設定するには、[設定ファイル](/docs/tasks/administer-cluster/kubelet-config-file/)を使用してkubeletを調整し、[`KubeletConfiguration`](/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)リソースタイプを使用してガベージコレクションに関連するパラメーターを変更します。 ### コンテナイメージのライフサイクル @@ -108,12 +108,12 @@ kubeletは、次の変数に基づいて未使用のコンテナをガベージ * `MinAge`: kubeletがガベージコレクションできるコンテナの最低期間。`0`を設定すると無効化されます。 * `MaxPerPodContainer`: 各Podのペアが持つことができるデッドコンテナの最大数。`0`未満に設定すると無効化されます。 - * `MaxContainers`: クラスターが持つことができるデッドコンテナーの最大数。`0`未満に設定すると無効化されます。 + * `MaxContainers`: クラスターが持つことができるデッドコンテナの最大数。`0`未満に設定すると無効化されます。 これらの変数に加えて、kubeletは、通常、最も古いものから順に、定義されていない削除されたコンテナをガベージコレクションします。 -`MaxPerPodContainer`と`MaxContainers`は、Podごとのコンテナーの最大数(`MaxPerPodContainer`)を保持すると、グローバルなデッドコンテナの許容合計(`MaxContainers`)を超える状況で、互いに競合する可能性があります。 -この状況では、kubeletは`MaxPerPodContainer`を調整して競合に対処します。最悪のシナリオは、`MaxPerPodContainer`を1にダウングレードし、最も古いコンテナーを削除することです。 +`MaxPerPodContainer`と`MaxContainers`は、Podごとのコンテナの最大数(`MaxPerPodContainer`)を保持すると、グローバルなデッドコンテナの許容合計(`MaxContainers`)を超える状況で、互いに競合する可能性があります。 
+この状況では、kubeletは`MaxPerPodContainer`を調整して競合に対処します。最悪のシナリオは、`MaxPerPodContainer`を1にダウングレードし、最も古いコンテナを削除することです。 さらに、削除されたPodが所有するコンテナは、`MinAge`より古くなると削除されます。 {{}} diff --git a/content/ja/docs/concepts/cluster-administration/certificates.md b/content/ja/docs/concepts/cluster-administration/certificates.md index 2cc32294ff..9a5212ad66 100644 --- a/content/ja/docs/concepts/cluster-administration/certificates.md +++ b/content/ja/docs/concepts/cluster-administration/certificates.md @@ -105,7 +105,7 @@ weight: 20 openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \ -CAcreateserial -out server.crt -days 10000 \ - -extensions v3_ext -extfile csr.conf + -extensions v3_ext -extfile csr.conf -sha256 1. 証明書を表示します。 openssl x509 -noout -text -in ./server.crt diff --git a/content/ja/docs/concepts/configuration/manage-resources-containers.md b/content/ja/docs/concepts/configuration/manage-resources-containers.md index 6d512e5d92..499e2a7214 100644 --- a/content/ja/docs/concepts/configuration/manage-resources-containers.md +++ b/content/ja/docs/concepts/configuration/manage-resources-containers.md @@ -24,9 +24,9 @@ Podが動作しているNodeに利用可能なリソースが十分にある場 たとえば、コンテナに256MiBの`メモリー`要求を設定し、そのコンテナが8GiBのメモリーを持つNodeにスケジュールされたPod内に存在し、他のPodが存在しない場合、コンテナはより多くのRAMを使用しようとする可能性があります。 -そのコンテナに4GiBの`メモリー`制限を設定すると、kubelet(および{{< glossary_tooltip text="コンテナランタイム" term_id="container-runtime" >}}) が制限を適用します。ランタイムは、コンテナーが設定済みのリソース制限を超えて使用するのを防ぎます。例えば、コンテナ内のプロセスが、許容量を超えるメモリを消費しようとすると、システムカーネルは、メモリ不足(OOM)エラーで、割り当てを試みたプロセスを終了します。 +そのコンテナに4GiBの`メモリー`制限を設定すると、kubelet(および{{< glossary_tooltip text="コンテナランタイム" term_id="container-runtime" >}}) が制限を適用します。ランタイムは、コンテナが設定済みのリソース制限を超えて使用するのを防ぎます。例えば、コンテナ内のプロセスが、許容量を超えるメモリを消費しようとすると、システムカーネルは、メモリ不足(OOM)エラーで、割り当てを試みたプロセスを終了します。 -制限は、違反が検出されるとシステムが介入するように事後的に、またはコンテナーが制限を超えないようにシステムが防ぐように強制的に、実装できます。 +制限は、違反が検出されるとシステムが介入するように事後的に、またはコンテナが制限を超えないようにシステムが防ぐように強制的に、実装できます。 異なるランタイムは、同じ制限を実装するために異なる方法をとることができます。 {{< note >}} diff --git a/content/ja/docs/concepts/configuration/secret.md b/content/ja/docs/concepts/configuration/secret.md index 3a2dd6ce43..f06bb7edf8 100644 --- a/content/ja/docs/concepts/configuration/secret.md +++ b/content/ja/docs/concepts/configuration/secret.md @@ -838,7 +838,7 @@ spec: /etc/secret-volume/ssh-privatekey ``` -コンテナーはSecretのデータをSSH接続を確立するために使用することができます。 +コンテナはSecretのデータをSSH接続を確立するために使用することができます。 ### ユースケース: 本番、テスト用の認証情報を持つPod diff --git a/content/ja/docs/concepts/containers/overview.md b/content/ja/docs/concepts/containers/overview.md deleted file mode 100644 index 6659bfb35c..0000000000 --- a/content/ja/docs/concepts/containers/overview.md +++ /dev/null @@ -1,26 +0,0 @@ ---- -title: コンテナの概要 -content_type: concept -weight: 1 ---- - - - -コンテナは、アプリケーションの(コンパイルされた)コードと、実行時に必要な依存関係をパッケージ化するための技術です。実行する各コンテナは再現性があります。依存関係を含めることによる標準化は、どこで実行しても同じ動作が得られることを意味します。 - -コンテナは、基礎となるホストインフラストラクチャからアプリケーションを切り離します。これにより、さまざまなクラウド環境やOS環境でのデプロイが容易になります。 - - - -## コンテナイメージ {#container-images} -[コンテナイメージ](/docs/concepts/containers/images/)は、アプリケーションを実行するために必要なすべてのものを含んだ、すぐに実行可能なソフトウェアパッケージです。コードとそれが必要とする任意のランタイム、アプリケーションとシステムのライブラリ、および必須の設定のデフォルト値が含まれています。 - -設計上、コンテナは不変であるため、すでに実行中のコンテナのコードを変更することはできません。コンテナ化されたアプリケーションがあり、変更を加えたい場合は、変更を含む新しいコンテナをビルドし、コンテナを再作成して更新されたイメージから起動する必要があります。 - -## コンテナランタイム {#container-runtimes} - -{{< glossary_definition term_id="container-runtime" length="all" >}} - -## {{% heading "whatsnext" %}} -* [コンテナイメージ](/docs/concepts/containers/images/)についてお読みください。 -* [Pod](/ja/docs/concepts/workloads/pods/)についてお読みください。 diff --git 
a/content/ja/docs/concepts/extend-kubernetes/_index.md b/content/ja/docs/concepts/extend-kubernetes/_index.md index 132f1c100c..67ccb50adf 100644 --- a/content/ja/docs/concepts/extend-kubernetes/_index.md +++ b/content/ja/docs/concepts/extend-kubernetes/_index.md @@ -72,7 +72,7 @@ Webhookのモデルでは、Kubernetesは外部のサービスを呼び出しま 1. ユーザーは頻繁に`kubectl`を使って、Kubernetes APIとやり取りをします。[Kubectlプラグイン](/docs/tasks/extend-kubectl/kubectl-plugins/)は、kubectlのバイナリを拡張します。これは個別ユーザーのローカル環境のみに影響を及ぼすため、サイト全体にポリシーを強制することはできません。 2. APIサーバーは全てのリクエストを処理します。APIサーバーのいくつかの拡張ポイントは、リクエストを認可する、コンテキストに基づいてブロックする、リクエストを編集する、そして削除を処理することを可能にします。これらは[APIアクセス拡張](/docs/concepts/extend-kubernetes/#api-access-extensions)セクションに記載されています。 3. APIサーバーは様々な種類の *リソース* を扱います。`Pod`のような *ビルトインリソース* はKubernetesプロジェクトにより定義され、変更できません。ユーザーも、自身もしくは、他のプロジェクトで定義されたリソースを追加することができます。それは *カスタムリソース* と呼ばれ、[カスタムリソース](/docs/concepts/extend-kubernetes/#user-defined-types)セクションに記載されています。カスタムリソースは度々、APIアクセス拡張と一緒に使われます。 -4. KubernetesのスケジューラーはPodをどのノードに配置するかを決定します。スケジューリングを拡張するには、いくつかの方法があります。それらは[スケジューラー拡張](/docs/concepts/extend-kubernetes/#scheduler-extensions)セクションに記載されています。 +4. KubernetesのスケジューラーはPodをどのノードに配置するかを決定します。スケジューリングを拡張するには、いくつかの方法があります。それらは[スケジューラー拡張](#scheduling-extensions)セクションに記載されています。 5. Kubernetesにおける多くの振る舞いは、APIサーバーのクライアントであるコントローラーと呼ばれるプログラムに実装されています。コントローラーは度々、カスタムリソースと共に使われます。 6. kubeletはサーバー上で実行され、Podが仮想サーバーのようにクラスターネットワーク上にIPを持った状態で起動することをサポートします。[ネットワークプラグイン](/docs/concepts/extend-kubernetes/#network-plugins)がPodのネットワーキングにおける異なる実装を適用することを可能にします。 7. kubeletはまた、コンテナのためにボリュームをマウント、アンマウントします。新しい種類のストレージは[ストレージプラグイン](/docs/concepts/extend-kubernetes/#storage-plugins)を通じてサポートされます。 @@ -139,7 +139,7 @@ Kubernetesはいくつかのビルトイン認証方式と、それらが要件 他のネットワークファブリックが[ネットワークプラグイン](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/)を通じてサポートされます。 -### スケジューラー拡張 +### スケジューラー拡張 {#scheduling-extensions} スケジューラーは特別な種類のコントローラーで、Podを監視し、Podをノードに割り当てます。デフォルトのコントローラーを完全に置き換えることもできますが、他のKubernetesのコンポーネントの利用を継続する、または[複数のスケジューラー](/docs/tasks/extend-kubernetes/configure-multiple-schedulers/)を同時に動かすこともできます。 diff --git a/content/ja/docs/concepts/storage/persistent-volumes.md b/content/ja/docs/concepts/storage/persistent-volumes.md index d976b7ef29..4e874ea755 100644 --- a/content/ja/docs/concepts/storage/persistent-volumes.md +++ b/content/ja/docs/concepts/storage/persistent-volumes.md @@ -655,7 +655,7 @@ spec: ``` {{< note >}} -Podにrawブロックデバイスを追加する場合は、マウントパスの代わりにコンテナーでデバイスパスを指定します。 +Podにrawブロックデバイスを追加する場合は、マウントパスの代わりにコンテナでデバイスパスを指定します。 {{< /note >}} ### ブロックボリュームのバインド @@ -678,7 +678,7 @@ Podにrawブロックデバイスを追加する場合は、マウントパス アルファリリースでは、静的にプロビジョニングされたボリュームのみがサポートされます。管理者は、rawブロックデバイスを使用する場合、これらの値を考慮するように注意する必要があります。 {{< /note >}} -## ボリュームのスナップショットとスナップショットからのボリュームの復元のサポート +## ボリュームのスナップショットとスナップショットからのボリュームの復元のサポート {#volume-snapshot-and-restore-volume-from-snapshot-support} {{< feature-state for_k8s_version="v1.17" state="beta" >}} diff --git a/content/ja/docs/concepts/storage/volume-snapshots.md b/content/ja/docs/concepts/storage/volume-snapshots.md new file mode 100644 index 0000000000..82c573c651 --- /dev/null +++ b/content/ja/docs/concepts/storage/volume-snapshots.md @@ -0,0 +1,182 @@ +--- +title: ボリュームのスナップショット +content_type: concept +weight: 60 +--- + + + +Kubernetesでは、*VolumeSnapshot*はストレージシステム上のボリュームのスナップショットを表します。このドキュメントは、Kubernetes[永続ボリューム](/ja/docs/concepts/storage/persistent-volumes/)に既に精通していることを前提としています。 + + + +## 概要 {#introduction} + 
+APIリソース`PersistentVolume`と`PersistentVolumeClaim`を使用してユーザーと管理者にボリュームをプロビジョニングする方法と同様に、`VolumeSnapshotContent`と`VolumeSnapshot`APIリソースは、ユーザーと管理者のボリュームスナップショットを作成するために提供されます。 + +`VolumeSnapshotContent`は、管理者によってプロビジョニングされたクラスター内のボリュームから取得されたスナップショットです。PersistentVolumeがクラスターリソースであるように、これはクラスターのリソースです。 + +`VolumeSnapshot`は、ユーザーによるボリュームのスナップショットの要求です。PersistentVolumeClaimに似ています。 + +`VolumeSnapshotClass`を使用すると、`VolumeSnapshot`に属するさまざまな属性を指定できます。これらの属性は、ストレージシステム上の同じボリュームから取得されたスナップショット間で異なる場合があるため、`PersistentVolumeClaim`の同じ`StorageClass`を使用して表現することはできません。 + +ボリュームスナップショットは、完全に新しいボリュームを作成することなく、特定の時点でボリュームの内容をコピーするための標準化された方法をKubernetesユーザーに提供します。この機能により、たとえばデータベース管理者は、編集または削除の変更を実行する前にデータベースをバックアップできます。 + +この機能を使用する場合、ユーザーは次のことに注意する必要があります。 + +- APIオブジェクト`VolumeSnapshot`、`VolumeSnapshotContent`、および`VolumeSnapshotClass`は{{< glossary_tooltip term_id="CustomResourceDefinition" text="CRD" >}}であり、コアAPIの一部ではありません。 +- `VolumeSnapshot`のサポートは、CSIドライバーでのみ利用できます。 +- `VolumeSnapshot`の展開プロセスの一環として、Kubernetesチームは、コントロールプレーンに展開されるスナップショットコントローラーと、CSIドライバーと共に展開されるcsi-snapshotterと呼ばれるサイドカーヘルパーコンテナを提供します。スナップショットコントローラーは、`VolumeSnapshot`および`VolumeSnapshotContent`オブジェクトを管理し、`VolumeSnapshotContent`オブジェクトの作成と削除を担当します。サイドカーcsi-snapshotterは、`VolumeSnapshotContent`オブジェクトを監視し、CSIエンドポイントに対して`CreateSnapshot`および`DeleteSnapshot`操作をトリガーします。 +- スナップショットオブジェクトの厳密な検証を提供するvalidation Webhookサーバーもあります。これは、CSIドライバーではなく、スナップショットコントローラーおよびCRDと共にKubernetesディストリビューションによってインストールする必要があります。スナップショット機能が有効になっているすべてのKubernetesクラスターにインストールする必要があります。 +- CSIドライバーは、ボリュームスナップショット機能を実装している場合と実装していない場合があります。ボリュームスナップショットのサポートを提供するCSIドライバーは、csi-snapshotterを使用する可能性があります。詳細については、[CSIドライバーのドキュメント](https://kubernetes-csi.github.io/docs/)を参照してください。 +- CRDとスナップショットコントローラーのインストールは、Kubernetesディストリビューションの責任です。 + +## ボリュームスナップショットとボリュームスナップショットのコンテンツのライフサイクル + +`VolumeSnapshotContents`はクラスター内のリソースです。`VolumeSnapshots`は、これらのリソースに対するリクエストです。`VolumeSnapshotContents`と`VolumeSnapshots`の間の相互作用は、次のライフサイクルに従います。 + +### プロビジョニングボリュームのスナップショット + +スナップショットをプロビジョニングするには、事前プロビジョニングと動的プロビジョニングの2つの方法があります。 + +#### 事前プロビジョニング{#static} + +クラスター管理者は、多数の`VolumeSnapshotContents`を作成します。それらは、クラスターユーザーが使用できるストレージシステム上の実際のボリュームスナップショットの詳細を保持します。それらはKubernetesAPIに存在し、消費することができます。 + +#### 動的プロビジョニング + +既存のスナップショットを使用する代わりに、スナップショットをPersistentVolumeClaimから動的に取得するように要求できます。[VolumeSnapshotClass](/ja/docs/concepts/storage/volume-snapshot-classes/)は、スナップショットを作成するときに使用するストレージプロバイダー固有のパラメーターを指定します。 + +### バインディング + +スナップショットコントローラーは、事前プロビジョニングされたシナリオと動的にプロビジョニングされたシナリオの両方で、適切な`VolumeSnapshotContent`オブジェクトを使用した`VolumeSnapshot`オブジェクトのバインディングを処理します。バインディングは1対1のマッピングです。 + +事前プロビジョニングされたバインディングの場合、要求されたVolumeSnapshotContentオブジェクトが作成されるまで、VolumeSnapshotはバインドされないままになります。 + +### スナップショットソース保護としてのPersistentVolumeClaim + +この保護の目的は、スナップショットがシステムから取得されている間、使用中の{{< glossary_tooltip text="PersistentVolumeClaim" term_id="persistent-volume-claim" >}}APIオブジェクトがシステムから削除されないようにすることです(これにより、データが失われる可能性があります)。 + +PersistentVolumeClaimのスナップショットが作成されている間、そのPersistentVolumeClaimは使用中です。スナップショットソースとしてアクティブに使用されているPersistentVolumeClaim APIオブジェクトを削除しても、PersistentVolumeClaimオブジェクトはすぐには削除されません。代わりに、PersistentVolumeClaimオブジェクトの削除は、スナップショットがReadyToUseになるか中止されるまで延期されます。 + +### 削除 + +削除は`VolumeSnapshot`オブジェクトの削除によってトリガーされ、`DeletionPolicy`に従います。`DeletionPolicy`が`Delete`の場合、基になるストレージスナップショットは`VolumeSnapshotContent`オブジェクトとともに削除されます。`DeletionPolicy`が`Retain`の場合、基になるスナップショットと`VolumeSnapshotContent`の両方が残ります。 + +## ボリュームスナップショット + +各VolumeSnapshotには、仕様とステータスが含まれています。 + +```yaml +apiVersion: snapshot.storage.k8s.io/v1 +kind: 
VolumeSnapshot +metadata: + name: new-snapshot-test +spec: + volumeSnapshotClassName: csi-hostpath-snapclass + source: + persistentVolumeClaimName: pvc-test +``` + +`persistentVolumeClaimName`は、スナップショットのPersistentVolumeClaimデータソースの名前です。このフィールドは、スナップショットを動的にプロビジョニングするために必要です。 + +ボリュームスナップショットは、属性`volumeSnapshotClassName`を使用して[VolumeSnapshotClass](/ja/docs/concepts/storage/volume-snapshot-classes/)の名前を指定することにより、特定のクラスを要求できます。何も設定されていない場合、利用可能な場合はデフォルトのクラスが使用されます。 + +事前プロビジョニングされたスナップショットの場合、次の例に示すように、スナップショットのソースとして`volumeSnapshotContentName`を指定する必要があります。事前プロビジョニングされたスナップショットには、`volumeSnapshotContentName`ソースフィールドが必要です。 + +```yaml +apiVersion: snapshot.storage.k8s.io/v1 +kind: VolumeSnapshot +metadata: + name: test-snapshot +spec: + source: + volumeSnapshotContentName: test-content +``` + +## ボリュームスナップショットコンテンツ + +各VolumeSnapshotContentには、仕様とステータスが含まれています。動的プロビジョニングでは、スナップショット共通コントローラーが`VolumeSnapshotContent`オブジェクトを作成します。以下に例を示します。 + +```yaml +apiVersion: snapshot.storage.k8s.io/v1 +kind: VolumeSnapshotContent +metadata: + name: snapcontent-72d9a349-aacd-42d2-a240-d775650d2455 +spec: + deletionPolicy: Delete + driver: hostpath.csi.k8s.io + source: + volumeHandle: ee0cfb94-f8d4-11e9-b2d8-0242ac110002 + sourceVolumeMode: Filesystem + volumeSnapshotClassName: csi-hostpath-snapclass + volumeSnapshotRef: + name: new-snapshot-test + namespace: default + uid: 72d9a349-aacd-42d2-a240-d775650d2455 +``` + +`volumeHandle`は、ストレージバックエンドで作成され、ボリュームの作成中にCSIドライバーによって返されるボリュームの一意の識別子です。このフィールドは、スナップショットを動的にプロビジョニングするために必要です。これは、スナップショットのボリュームソースを指定します。 +事前プロビジョニングされたスナップショットの場合、(クラスター管理者として)次のように`VolumeSnapshotContent`オブジェクトを作成する必要があります。 + +```yaml +apiVersion: snapshot.storage.k8s.io/v1 +kind: VolumeSnapshotContent +metadata: + name: new-snapshot-content-test +spec: + deletionPolicy: Delete + driver: hostpath.csi.k8s.io + source: + snapshotHandle: 7bdd0de3-aaeb-11e8-9aae-0242ac110002 + sourceVolumeMode: Filesystem + volumeSnapshotRef: + name: new-snapshot-test + namespace: default +``` + +`snapshotHandle`は、ストレージバックエンドで作成されたボリュームスナップショットの一意の識別子です。このフィールドは、事前プロビジョニングされたスナップショットに必要です。この`VolumeSnapshotContent`が表すストレージシステムのCSIスナップショットIDを指定します。 + +`sourceVolumeMode`は、スナップショットが作成されるボリュームのモードです。`sourceVolumeMode`フィールドの値は、`Filesystem`または`Block`のいずれかです。ソースボリュームモードが指定されていない場合、Kubernetesはスナップショットをソースボリュームのモードが不明であるかのように扱います。 + +`volumeSnapshotRef`は、対応する`VolumeSnapshot`の参照です。`VolumeSnapshotContent`が事前プロビジョニングされたスナップショットとして作成されている場合、`volumeSnapshotRef`で参照される`VolumeSnapshot`がまだ存在しない可能性があることに注意してください。 + +## スナップショットのボリュームモードの変換 {#convert-volume-mode} + +クラスターにインストールされている`VolumeSnapshots`APIが`sourceVolumeMode`フィールドをサポートしている場合、APIには、権限のないユーザーがボリュームのモードを変換するのを防ぐ機能があります。 + +クラスターにこの機能があるかどうかを確認するには、次のコマンドを実行します。 + +```shell +kubectl get crd volumesnapshotcontent -o yaml +``` + +ユーザーが既存の`VolumeSnapshot`から`PersistentVolumeClaim`を作成できるようにしたいが、ソースとは異なるボリュームモードを使用する場合は、`VolumeSnapshot`に対応する`VolumeSnapshotContent`にアノテーション`snapshot.storage.kubernetes.io/allowVolumeModeChange: "true"`を追加する必要があります。 + +事前プロビジョニングされたスナップショットの場合、クラスター管理者が`spec.sourceVolumeMode`を入力する必要があります。 + +この機能を有効にした`VolumeSnapshotContent`リソースの例は次のようになります。 + +```yaml +apiVersion: snapshot.storage.k8s.io/v1 +kind: VolumeSnapshotContent +metadata: + name: new-snapshot-content-test + annotations: + snapshot.storage.kubernetes.io/allowVolumeModeChange: "true" +spec: + deletionPolicy: Delete + driver: hostpath.csi.k8s.io + source: + snapshotHandle: 7bdd0de3-aaeb-11e8-9aae-0242ac110002 + sourceVolumeMode: Filesystem + volumeSnapshotRef: + name: new-snapshot-test + 
namespace: default +``` + +## スナップショットからのボリュームのプロビジョニング + +`PersistentVolumeClaim`オブジェクトの*dataSource*フィールドを使用して、スナップショットからのデータが事前に取り込まれた新しいボリュームをプロビジョニングできます。 + +詳細については、[ボリュームのスナップショットとスナップショットからのボリュームの復元](/ja/docs/concepts/storage/persistent-volumes/#volume-snapshot-and-restore-volume-from-snapshot-support)を参照してください。 diff --git a/content/ja/docs/concepts/workloads/controllers/deployment.md b/content/ja/docs/concepts/workloads/controllers/deployment.md index b62151fc54..548416d73e 100644 --- a/content/ja/docs/concepts/workloads/controllers/deployment.md +++ b/content/ja/docs/concepts/workloads/controllers/deployment.md @@ -145,7 +145,7 @@ Deploymentに対して適切なセレクターとPodテンプレートのラベ ## Deploymentの更新 {#updating-a-deployment} {{< note >}} -Deploymentのロールアウトは、DeploymentのPodテンプレート(この場合`.spec.template`)が変更された場合にのみトリガーされます。例えばテンプレートのラベルもしくはコンテナーイメージが更新された場合です。Deploymentのスケールのような更新では、ロールアウトはトリガーされません。 +Deploymentのロールアウトは、DeploymentのPodテンプレート(この場合`.spec.template`)が変更された場合にのみトリガーされます。例えばテンプレートのラベルもしくはコンテナイメージが更新された場合です。Deploymentのスケールのような更新では、ロールアウトはトリガーされません。 {{< /note >}} Deploymentを更新するには以下のステップに従ってください。 @@ -938,7 +938,7 @@ Deploymentを使って一部のユーザーやサーバーに対してリリー ## Deployment Specの記述 他の全てのKubernetesの設定と同様に、Deploymentは`.apiVersion`、`.kind`や`.metadata`フィールドを必要とします。 -設定ファイルの利用に関する情報は[アプリケーションのデプロイ](/ja/docs/tasks/run-application/run-stateless-application-deployment/)を参照してください。コンテナーの設定に関しては[リソースを管理するためのkubectlの使用](/ja/docs/concepts/overview/working-with-objects/object-management/)を参照してください。 +設定ファイルの利用に関する情報は[アプリケーションのデプロイ](/ja/docs/tasks/run-application/run-stateless-application-deployment/)を参照してください。コンテナの設定に関しては[リソースを管理するためのkubectlの使用](/ja/docs/concepts/overview/working-with-objects/object-management/)を参照してください。 Deploymentオブジェクトの名前は、有効な[DNSサブドメイン名](/ja/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)でなければなりません。 Deploymentは[`.spec`セクション](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status)も必要とします。 @@ -1008,7 +1008,7 @@ Deploymentのセレクターに一致するラベルを持つPodを直接作成 ### Min Ready Seconds {#min-ready-seconds} -`.spec.minReadySeconds`はオプションのフィールドで、新しく作成されたPodが利用可能となるために、最低どれくらいの秒数コンテナーがクラッシュすることなく稼働し続ければよいかを指定するものです。デフォルトでは0です(Podは作成されるとすぐに利用可能と判断されます)。Podが利用可能と判断された場合についてさらに学ぶために[Container Probes](/ja/docs/concepts/workloads/pods/pod-lifecycle/#container-probes)を参照してください。 +`.spec.minReadySeconds`はオプションのフィールドで、新しく作成されたPodが利用可能となるために、最低どれくらいの秒数コンテナがクラッシュすることなく稼働し続ければよいかを指定するものです。デフォルトでは0です(Podは作成されるとすぐに利用可能と判断されます)。Podが利用可能と判断された場合についてさらに学ぶために[Container Probes](/ja/docs/concepts/workloads/pods/pod-lifecycle/#container-probes)を参照してください。 ### リビジョン履歴の保持上限 diff --git a/content/ja/docs/reference/_index.md b/content/ja/docs/reference/_index.md index aca4c278b5..5888dd4511 100644 --- a/content/ja/docs/reference/_index.md +++ b/content/ja/docs/reference/_index.md @@ -36,7 +36,7 @@ content_type: concept ## コンポーネントリファレンス -* [kubelet](/docs/reference/command-line-tools-reference/kubelet/) - 各ノード上で動作する最も重要なノードエージェントです。kubeletは一通りのPodSpecを受け取り、コンテナーが実行中で正常であることを確認します。 +* [kubelet](/docs/reference/command-line-tools-reference/kubelet/) - 各ノード上で動作する最も重要なノードエージェントです。kubeletは一通りのPodSpecを受け取り、コンテナが実行中で正常であることを確認します。 * [kube-apiserver](/docs/reference/command-line-tools-reference/kube-apiserver/) - Pod、Service、Replication Controller等、APIオブジェクトのデータを検証・設定するREST APIサーバーです。 * [kube-controller-manager](/docs/reference/command-line-tools-reference/kube-controller-manager/) - Kubernetesに同梱された、コアのコントロールループを埋め込むデーモンです。 * 
[kube-proxy](/docs/reference/command-line-tools-reference/kube-proxy/) - 単純なTCP/UDPストリームのフォワーディングや、一連のバックエンド間でTCP/UDPのラウンドロビンでのフォワーディングを実行できます。 diff --git a/content/ja/docs/reference/command-line-tools-reference/feature-gates.md b/content/ja/docs/reference/command-line-tools-reference/feature-gates.md index 9150d77b49..76a711bc35 100644 --- a/content/ja/docs/reference/command-line-tools-reference/feature-gates.md +++ b/content/ja/docs/reference/command-line-tools-reference/feature-gates.md @@ -191,133 +191,119 @@ content_type: concept | 機能名 | デフォルト値 | ステージ | 導入開始バージョン | 最終利用可能バージョン | |---------|---------|-------|-------|-------| -| `Accelerators` | `false` | Alpha | 1.6 | 1.10 | -| `Accelerators` | - | Deprecated | 1.11 | - | | `AdvancedAuditing` | `false` | Alpha | 1.7 | 1.7 | | `AdvancedAuditing` | `true` | Beta | 1.8 | 1.11 | | `AdvancedAuditing` | `true` | GA | 1.12 | - | -| `AffinityInAnnotations` | `false` | Alpha | 1.6 | 1.7 | -| `AffinityInAnnotations` | - | Deprecated | 1.8 | - | -| `AllowExtTrafficLocalEndpoints` | `false` | Beta | 1.4 | 1.6 | -| `AllowExtTrafficLocalEndpoints` | `true` | GA | 1.7 | - | -| `BlockVolume` | `false` | Alpha | 1.9 | 1.12 | -| `BlockVolume` | `true` | Beta | 1.13 | 1.17 | -| `BlockVolume` | `true` | GA | 1.18 | - | -| `CSIBlockVolume` | `false` | Alpha | 1.11 | 1.13 | -| `CSIBlockVolume` | `true` | Beta | 1.14 | 1.17 | -| `CSIBlockVolume` | `true` | GA | 1.18 | - | -| `CSIDriverRegistry` | `false` | Alpha | 1.12 | 1.13 | -| `CSIDriverRegistry` | `true` | Beta | 1.14 | 1.17 | -| `CSIDriverRegistry` | `true` | GA | 1.18 | | -| `CSINodeInfo` | `false` | Alpha | 1.12 | 1.13 | -| `CSINodeInfo` | `true` | Beta | 1.14 | 1.16 | -| `CSINodeInfo` | `true` | GA | 1.17 | | -| `AttachVolumeLimit` | `false` | Alpha | 1.11 | 1.11 | -| `AttachVolumeLimit` | `true` | Beta | 1.12 | 1.16 | -| `AttachVolumeLimit` | `true` | GA | 1.17 | - | -| `CSIPersistentVolume` | `false` | Alpha | 1.9 | 1.9 | -| `CSIPersistentVolume` | `true` | Beta | 1.10 | 1.12 | -| `CSIPersistentVolume` | `true` | GA | 1.13 | - | -| `CustomPodDNS` | `false` | Alpha | 1.9 | 1.9 | -| `CustomPodDNS` | `true` | Beta| 1.10 | 1.13 | -| `CustomPodDNS` | `true` | GA | 1.14 | - | -| `CustomResourcePublishOpenAPI` | `false` | Alpha| 1.14 | 1.14 | -| `CustomResourcePublishOpenAPI` | `true` | Beta| 1.15 | 1.15 | -| `CustomResourcePublishOpenAPI` | `true` | GA | 1.16 | - | -| `CustomResourceSubresources` | `false` | Alpha | 1.10 | 1.10 | -| `CustomResourceSubresources` | `true` | Beta | 1.11 | 1.15 | -| `CustomResourceSubresources` | `true` | GA | 1.16 | - | -| `CustomResourceValidation` | `false` | Alpha | 1.8 | 1.8 | -| `CustomResourceValidation` | `true` | Beta | 1.9 | 1.15 | -| `CustomResourceValidation` | `true` | GA | 1.16 | - | -| `CustomResourceWebhookConversion` | `false` | Alpha | 1.13 | 1.14 | -| `CustomResourceWebhookConversion` | `true` | Beta | 1.15 | 1.15 | -| `CustomResourceWebhookConversion` | `true` | GA | 1.16 | - | -| `DynamicProvisioningScheduling` | `false` | Alpha | 1.11 | 1.11 | -| `DynamicProvisioningScheduling` | - | Deprecated| 1.12 | - | -| `DynamicVolumeProvisioning` | `true` | Alpha | 1.3 | 1.7 | -| `DynamicVolumeProvisioning` | `true` | GA | 1.8 | - | -| `EnableAggregatedDiscoveryTimeout` | `true` | Deprecated | 1.16 | - | -| `EnableEquivalenceClassCache` | `false` | Alpha | 1.8 | 1.14 | -| `EnableEquivalenceClassCache` | - | Deprecated | 1.15 | - | -| `ExperimentalCriticalPodAnnotation` | `false` | Alpha | 1.5 | 1.12 | -| `ExperimentalCriticalPodAnnotation` | `false` | Deprecated 
| 1.13 | - | -| `GCERegionalPersistentDisk` | `true` | Beta | 1.10 | 1.12 | -| `GCERegionalPersistentDisk` | `true` | GA | 1.13 | - | -| `HugePages` | `false` | Alpha | 1.8 | 1.9 | -| `HugePages` | `true` | Beta| 1.10 | 1.13 | -| `HugePages` | `true` | GA | 1.14 | - | -| `Initializers` | `false` | Alpha | 1.7 | 1.13 | -| `Initializers` | - | Deprecated | 1.14 | - | -| `KubeletConfigFile` | `false` | Alpha | 1.8 | 1.9 | -| `KubeletConfigFile` | - | Deprecated | 1.10 | - | -| `KubeletPluginsWatcher` | `false` | Alpha | 1.11 | 1.11 | -| `KubeletPluginsWatcher` | `true` | Beta | 1.12 | 1.12 | -| `KubeletPluginsWatcher` | `true` | GA | 1.13 | - | -| `MountPropagation` | `false` | Alpha | 1.8 | 1.9 | -| `MountPropagation` | `true` | Beta | 1.10 | 1.11 | -| `MountPropagation` | `true` | GA | 1.12 | - | -| `NodeLease` | `false` | Alpha | 1.12 | 1.13 | -| `NodeLease` | `true` | Beta | 1.14 | 1.16 | -| `NodeLease` | `true` | GA | 1.17 | - | -| `PersistentLocalVolumes` | `false` | Alpha | 1.7 | 1.9 | -| `PersistentLocalVolumes` | `true` | Beta | 1.10 | 1.13 | -| `PersistentLocalVolumes` | `true` | GA | 1.14 | - | -| `PodPriority` | `false` | Alpha | 1.8 | 1.10 | -| `PodPriority` | `true` | Beta | 1.11 | 1.13 | -| `PodPriority` | `true` | GA | 1.14 | - | -| `PodReadinessGates` | `false` | Alpha | 1.11 | 1.11 | -| `PodReadinessGates` | `true` | Beta | 1.12 | 1.13 | -| `PodReadinessGates` | `true` | GA | 1.14 | - | -| `PodShareProcessNamespace` | `false` | Alpha | 1.10 | 1.11 | -| `PodShareProcessNamespace` | `true` | Beta | 1.12 | 1.16 | -| `PodShareProcessNamespace` | `true` | GA | 1.17 | - | -| `PVCProtection` | `false` | Alpha | 1.9 | 1.9 | -| `PVCProtection` | - | Deprecated | 1.10 | - | -| `RequestManagement` | `false` | Alpha | 1.15 | 1.16 | -| `ResourceQuotaScopeSelectors` | `false` | Alpha | 1.11 | 1.11 | -| `ResourceQuotaScopeSelectors` | `true` | Beta | 1.12 | 1.16 | -| `ResourceQuotaScopeSelectors` | `true` | GA | 1.17 | - | -| `ScheduleDaemonSetPods` | `false` | Alpha | 1.11 | 1.11 | -| `ScheduleDaemonSetPods` | `true` | Beta | 1.12 | 1.16 | -| `ScheduleDaemonSetPods` | `true` | GA | 1.17 | - | -| `ServiceLoadBalancerFinalizer` | `false` | Alpha | 1.15 | 1.15 | -| `ServiceLoadBalancerFinalizer` | `true` | Beta | 1.16 | 1.16 | -| `ServiceLoadBalancerFinalizer` | `true` | GA | 1.17 | - | -| `StorageObjectInUseProtection` | `true` | Beta | 1.10 | 1.10 | -| `StorageObjectInUseProtection` | `true` | GA | 1.11 | - | -| `SupportIPVSProxyMode` | `false` | Alpha | 1.8 | 1.8 | -| `SupportIPVSProxyMode` | `false` | Beta | 1.9 | 1.9 | -| `SupportIPVSProxyMode` | `true` | Beta | 1.10 | 1.10 | -| `SupportIPVSProxyMode` | `true` | GA | 1.11 | - | -| `TaintBasedEvictions` | `false` | Alpha | 1.6 | 1.12 | -| `TaintBasedEvictions` | `true` | Beta | 1.13 | 1.17 | -| `TaintBasedEvictions` | `true` | GA | 1.18 | - | -| `TaintNodesByCondition` | `false` | Alpha | 1.8 | 1.11 | -| `TaintNodesByCondition` | `true` | Beta | 1.12 | 1.16 | -| `TaintNodesByCondition` | `true` | GA | 1.17 | - | -| `VolumePVCDataSource` | `false` | Alpha | 1.15 | 1.15 | -| `VolumePVCDataSource` | `true` | Beta | 1.16 | 1.17 | -| `VolumePVCDataSource` | `true` | GA | 1.18 | - | -| `VolumeScheduling` | `false` | Alpha | 1.9 | 1.9 | -| `VolumeScheduling` | `true` | Beta | 1.10 | 1.12 | -| `VolumeScheduling` | `true` | GA | 1.13 | - | -| `VolumeScheduling` | `false` | Alpha | 1.9 | 1.9 | -| `VolumeScheduling` | `true` | Beta | 1.10 | 1.12 | -| `VolumeScheduling` | `true` | GA | 1.13 | - | -| `VolumeSubpath` | `true` | GA | 1.13 | - | -| 
`VolumeSubpathEnvExpansion` | `false` | Alpha | 1.14 | 1.14 | -| `VolumeSubpathEnvExpansion` | `true` | Beta | 1.15 | 1.16 | -| `VolumeSubpathEnvExpansion` | `true` | GA | 1.17 | - | +| `CSIInlineVolume` | `false` | Alpha | 1.15 | 1.15 | +| `CSIInlineVolume` | `true` | Beta | 1.16 | 1.24 | +| `CSIInlineVolume` | `true` | GA | 1.25 | - | +| `CSIMigration` | `false` | Alpha | 1.14 | 1.16 | +| `CSIMigration` | `true` | Beta | 1.17 | 1.24 | +| `CSIMigration` | `true` | GA | 1.25 | - | +| `CSIMigrationAWS` | `false` | Alpha | 1.14 | 1.16 | +| `CSIMigrationAWS` | `false` | Beta | 1.17 | 1.22 | +| `CSIMigrationAWS` | `true` | Beta | 1.23 | 1.24 | +| `CSIMigrationAWS` | `true` | GA | 1.25 | - | +| `CSIMigrationAzureDisk` | `false` | Alpha | 1.15 | 1.18 | +| `CSIMigrationAzureDisk` | `false` | Beta | 1.19 | 1.22 | +| `CSIMigrationAzureDisk` | `true` | Beta | 1.23 | 1.23 | +| `CSIMigrationAzureDisk` | `true` | GA | 1.24 | | +| `CSIMigrationGCE` | `false` | Alpha | 1.14 | 1.16 | +| `CSIMigrationGCE` | `false` | Beta | 1.17 | 1.22 | +| `CSIMigrationGCE` | `true` | Beta | 1.23 | 1.24 | +| `CSIMigrationGCE` | `true` | GA | 1.25 | - | +| `CSIMigrationOpenStack` | `false` | Alpha | 1.14 | 1.17 | +| `CSIMigrationOpenStack` | `true` | Beta | 1.18 | 1.23 | +| `CSIMigrationOpenStack` | `true` | GA | 1.24 | | +| `CSIStorageCapacity` | `false` | Alpha | 1.19 | 1.20 | +| `CSIStorageCapacity` | `true` | Beta | 1.21 | 1.23 | +| `CSIStorageCapacity` | `true` | GA | 1.24 | - | +| `CSRDuration` | `true` | Beta | 1.22 | 1.23 | +| `CSRDuration` | `true` | GA | 1.24 | - | +| `ControllerManagerLeaderMigration` | `false` | Alpha | 1.21 | 1.21 | +| `ControllerManagerLeaderMigration` | `true` | Beta | 1.22 | 1.23 | +| `ControllerManagerLeaderMigration` | `true` | GA | 1.24 | - | +| `CronJobTimeZone` | `false` | Alpha | 1.24 | 1.24 | +| `CronJobTimeZone` | `true` | Beta | 1.25 | | +| `DaemonSetUpdateSurge` | `false` | Alpha | 1.21 | 1.21 | +| `DaemonSetUpdateSurge` | `true` | Beta | 1.22 | 1.24 | +| `DaemonSetUpdateSurge` | `true` | GA | 1.25 | - | +| `DefaultPodTopologySpread` | `false` | Alpha | 1.19 | 1.19 | +| `DefaultPodTopologySpread` | `true` | Beta | 1.20 | 1.23 | +| `DefaultPodTopologySpread` | `true` | GA | 1.24 | - | +| `DisableAcceleratorUsageMetrics` | `false` | Alpha | 1.19 | 1.19 | +| `DisableAcceleratorUsageMetrics` | `true` | Beta | 1.20 | 1.24 | +| `DisableAcceleratorUsageMetrics` | `true` | GA | 1.25 |- | +| `DryRun` | `false` | Alpha | 1.12 | 1.12 | +| `DryRun` | `true` | Beta | 1.13 | 1.18 | +| `DryRun` | `true` | GA | 1.19 | - | +| `DynamicKubeletConfig` | `false` | Alpha | 1.4 | 1.10 | +| `DynamicKubeletConfig` | `true` | Beta | 1.11 | 1.21 | +| `DynamicKubeletConfig` | `false` | Deprecated | 1.22 | - | +| `EfficientWatchResumption` | `false` | Alpha | 1.20 | 1.20 | +| `EfficientWatchResumption` | `true` | Beta | 1.21 | 1.23 | +| `EfficientWatchResumption` | `true` | GA | 1.24 | - | +| `EphemeralContainers` | `false` | Alpha | 1.16 | 1.22 | +| `EphemeralContainers` | `true` | Beta | 1.23 | 1.24 | +| `EphemeralContainers` | `true` | GA | 1.25 | - | +| `ExecProbeTimeout` | `true` | GA | 1.20 | - | +| `ExpandCSIVolumes` | `false` | Alpha | 1.14 | 1.15 | +| `ExpandCSIVolumes` | `true` | Beta | 1.16 | 1.23 | +| `ExpandCSIVolumes` | `true` | GA | 1.24 | - | +| `ExpandInUsePersistentVolumes` | `false` | Alpha | 1.11 | 1.14 | +| `ExpandInUsePersistentVolumes` | `true` | Beta | 1.15 | 1.23 | +| `ExpandInUsePersistentVolumes` | `true` | GA | 1.24 | - | +| `ExpandPersistentVolumes` | `false` | Alpha | 1.8 | 1.10 
| +| `ExpandPersistentVolumes` | `true` | Beta | 1.11 | 1.23 | +| `ExpandPersistentVolumes` | `true` | GA | 1.24 |- | +| `IdentifyPodOS` | `false` | Alpha | 1.23 | 1.23 | +| `IdentifyPodOS` | `true` | Beta | 1.24 | 1.24 | +| `IdentifyPodOS` | `true` | GA | 1.25 | - | +| `IndexedJob` | `false` | Alpha | 1.21 | 1.21 | +| `IndexedJob` | `true` | Beta | 1.22 | 1.23 | +| `IndexedJob` | `true` | GA | 1.24 | - | +| `LocalStorageCapacityIsolation` | `false` | Alpha | 1.7 | 1.9 | +| `LocalStorageCapacityIsolation` | `true` | Beta | 1.10 | 1.24 | +| `LocalStorageCapacityIsolation` | `true` | GA | 1.25 | - | +| `NetworkPolicyEndPort` | `false` | Alpha | 1.21 | 1.21 | +| `NetworkPolicyEndPort` | `true` | Beta | 1.22 | 1.24 | +| `NetworkPolicyEndPort` | `true` | GA | 1.25 | - | +| `NonPreemptingPriority` | `false` | Alpha | 1.15 | 1.18 | +| `NonPreemptingPriority` | `true` | Beta | 1.19 | 1.23 | +| `NonPreemptingPriority` | `true` | GA | 1.24 | - | +| `PodAffinityNamespaceSelector` | `false` | Alpha | 1.21 | 1.21 | +| `PodAffinityNamespaceSelector` | `true` | Beta | 1.22 | 1.23 | +| `PodAffinityNamespaceSelector` | `true` | GA | 1.24 | - | +| `PodOverhead` | `false` | Alpha | 1.16 | 1.17 | +| `PodOverhead` | `true` | Beta | 1.18 | 1.23 | +| `PodOverhead` | `true` | GA | 1.24 | - | +| `PodSecurity` | `false` | Alpha | 1.22 | 1.22 | +| `PodSecurity` | `true` | Beta | 1.23 | 1.24 | +| `PodSecurity` | `true` | GA | 1.25 | | +| `PreferNominatedNode` | `false` | Alpha | 1.21 | 1.21 | +| `PreferNominatedNode` | `true` | Beta | 1.22 | 1.23 | +| `PreferNominatedNode` | `true` | GA | 1.24 | - | +| `RemoveSelfLink` | `false` | Alpha | 1.16 | 1.19 | +| `RemoveSelfLink` | `true` | Beta | 1.20 | 1.23 | +| `RemoveSelfLink` | `true` | GA | 1.24 | - | +| `ServerSideApply` | `false` | Alpha | 1.14 | 1.15 | +| `ServerSideApply` | `true` | Beta | 1.16 | 1.21 | +| `ServerSideApply` | `true` | GA | 1.22 | - | +| `ServiceLBNodePortControl` | `false` | Alpha | 1.20 | 1.21 | +| `ServiceLBNodePortControl` | `true` | Beta | 1.22 | 1.23 | +| `ServiceLBNodePortControl` | `true` | GA | 1.24 | - | +| `ServiceLoadBalancerClass` | `false` | Alpha | 1.21 | 1.21 | +| `ServiceLoadBalancerClass` | `true` | Beta | 1.22 | 1.23 | +| `ServiceLoadBalancerClass` | `true` | GA | 1.24 | - | +| `StatefulSetMinReadySeconds` | `false` | Alpha | 1.22 | 1.22 | +| `StatefulSetMinReadySeconds` | `true` | Beta | 1.23 | 1.24 | +| `StatefulSetMinReadySeconds` | `true` | GA | 1.25 | - | +| `SuspendJob` | `false` | Alpha | 1.21 | 1.21 | +| `SuspendJob` | `true` | Beta | 1.22 | 1.23 | +| `SuspendJob` | `true` | GA | 1.24 | - | | `WatchBookmark` | `false` | Alpha | 1.15 | 1.15 | | `WatchBookmark` | `true` | Beta | 1.16 | 1.16 | | `WatchBookmark` | `true` | GA | 1.17 | - | -| `WindowsGMSA` | `false` | Alpha | 1.14 | 1.15 | -| `WindowsGMSA` | `true` | Beta | 1.16 | 1.17 | -| `WindowsGMSA` | `true` | GA | 1.18 | - | -| `WindowsRunAsUserName` | `false` | Alpha | 1.16 | 1.16 | -| `WindowsRunAsUserName` | `true` | Beta | 1.17 | 1.17 | -| `WindowsRunAsUserName` | `true` | GA | 1.18 | - | {{< /table >}} ## 機能を使用する diff --git a/content/ja/docs/setup/best-practices/node-conformance.md b/content/ja/docs/setup/best-practices/node-conformance.md index 355dfdf366..919e64ff66 100644 --- a/content/ja/docs/setup/best-practices/node-conformance.md +++ b/content/ja/docs/setup/best-practices/node-conformance.md @@ -8,12 +8,6 @@ weight: 30 *ノード適合テスト* 
は、システムの検証とノードに対する機能テストを提供するコンテナ型のテストフレームワークです。このテストは、ノードがKubernetesの最小要件を満たしているかどうかを検証するもので、テストに合格したノードはKubernetesクラスタに参加する資格があることになります。 -## 制約 - -Kubernetesのバージョン1.5ではノード適合テストには以下の制約があります: - -* ノード適合テストはコンテナのランタイムとしてDockerのみをサポートします。 - ## ノードの前提条件 適合テストを実行するにはノードは通常のKubernetesノードと同じ前提条件を満たしている必要があります。 最低でもノードに以下のデーモンがインストールされている必要があります: @@ -25,10 +19,11 @@ Kubernetesのバージョン1.5ではノード適合テストには以下の制 ノード適合テストを実行するには、以下の手順に従います: -1. Kubeletをlocalhostに指定します(`--api-servers="http://localhost:8080"`)、 -このテストフレームワークはKubeletのテストにローカルマスターを起動するため、Kubeletをローカルホストに設定します(`--api-servers="http://localhost:8080"`)。他にも配慮するべきKubeletフラグがいくつかあります: - * `--pod-cidr`: `kubenet`を利用している場合は、Kubeletに任意のCIDR(例: `--pod-cidr=10.180.0.0/24`)を指定する必要があります。 - * `--cloud-provider`: `--cloud-provider=gce`を指定している場合は、テストを実行する前にこのフラグを取り除いてください。 +1. kubeletの`--kubeconfig`オプションの値を調べます。例:`--kubeconfig=/var/lib/kubelet/config.yaml`。 + このテストフレームワークはKubeletのテスト用にローカルコントロールプレーンを起動するため、APIサーバーのURLとして`http://localhost:8080`を使用します。 + 他にも使用できるkubeletコマンドラインパラメーターがいくつかあります: + + * `--cloud-provider`: `--cloud-provider=gce`を指定している場合は、テストを実行する前にこのフラグを取り除いてください。 2. 以下のコマンドでノード適合テストを実行します: @@ -37,7 +32,7 @@ Kubernetesのバージョン1.5ではノード適合テストには以下の制 # $LOG_DIRはテスト出力のパスです。 sudo docker run -it --rm --privileged --net=host \ -v /:/rootfs -v $CONFIG_DIR:$CONFIG_DIR -v $LOG_DIR:/var/result \ - k8s.gcr.io/node-test:0.2 + registry.k8s.io/node-test:0.2 ``` ## 他アーキテクチャ向けのノード適合テストの実行 @@ -58,7 +53,7 @@ Kubernetesは他のアーキテクチャ用のノード適合テストのdocker sudo docker run -it --rm --privileged --net=host \ -v /:/rootfs:ro -v $CONFIG_DIR:$CONFIG_DIR -v $LOG_DIR:/var/result \ -e FOCUS=MirrorPod \ # MirrorPodテストのみを実行します - k8s.gcr.io/node-test:0.2 + registry.k8s.io/node-test:0.2 ``` 特定のテストをスキップするには、環境変数`SKIP`をスキップしたいテストの正規表現で上書きします。 @@ -67,7 +62,7 @@ sudo docker run -it --rm --privileged --net=host \ sudo docker run -it --rm --privileged --net=host \ -v /:/rootfs:ro -v $CONFIG_DIR:$CONFIG_DIR -v $LOG_DIR:/var/result \ -e SKIP=MirrorPod \ # MirrorPodテスト以外のすべてのノード適合テストを実行します - k8s.gcr.io/node-test:0.2 + registry.k8s.io/node-test:0.2 ``` ノード適合テストは、[node e2e test](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-node/e2e-node-tests.md)のコンテナ化されたバージョンです。 diff --git a/content/ja/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md b/content/ja/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md index 577d38a5a8..2fa71ad2fc 100644 --- a/content/ja/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md +++ b/content/ja/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md @@ -123,7 +123,7 @@ Kubernetes v1.18におけるWindows上でのContainerDは以下の既知の欠 Kubernetes[ボリューム](/docs/concepts/storage/volumes/)を使用すると、データの永続性とPodボリュームの共有要件を備えた複雑なアプリケーションをKubernetesにデプロイできます。特定のストレージバックエンドまたはプロトコルに関連付けられた永続ボリュームの管理には、ボリュームのプロビジョニング/プロビジョニング解除/サイズ変更、Kubernetesノードへのボリュームのアタッチ/デタッチ、およびデータを永続化する必要があるPod内の個別のコンテナへのボリュームのマウント/マウント解除などのアクションが含まれます。特定のストレージバックエンドまたはプロトコルに対してこれらのボリューム管理アクションを実装するコードは、Kubernetesボリューム[プラグイン](/docs/concepts/storage/volumes/#types-of-volumes)の形式で出荷されます。次の幅広いクラスのKubernetesボリュームプラグインがWindowsでサポートされています。: ##### In-treeボリュームプラグイン -In-treeボリュームプラグインに関連付けられたコードは、コアKubernetesコードベースの一部として提供されます。In-treeボリュームプラグインのデプロイでは、追加のスクリプトをインストールしたり、個別のコンテナ化されたプラグインコンポーネントをデプロイしたりする必要はありません。これらのプラグインは、ストレージバックエンドでのボリュームのプロビジョニング/プロビジョニング解除とサイズ変更、Kubernetesノードへのボリュームのアタッチ/アタッチ解除、Pod内の個々のコンテナーへのボリュームのマウント/マウント解除を処理できます。次のIn-treeプラグインは、Windowsノードをサポートしています。: 
+In-treeボリュームプラグインに関連付けられたコードは、コアKubernetesコードベースの一部として提供されます。In-treeボリュームプラグインのデプロイでは、追加のスクリプトをインストールしたり、個別のコンテナ化されたプラグインコンポーネントをデプロイしたりする必要はありません。これらのプラグインは、ストレージバックエンドでのボリュームのプロビジョニング/プロビジョニング解除とサイズ変更、Kubernetesノードへのボリュームのアタッチ/アタッチ解除、Pod内の個々のコンテナへのボリュームのマウント/マウント解除を処理できます。次のIn-treeプラグインは、Windowsノードをサポートしています。: * [awsElasticBlockStore](/docs/concepts/storage/volumes/#awselasticblockstore) * [azureDisk](/docs/concepts/storage/volumes/#azuredisk) @@ -167,7 +167,7 @@ Windowsは、L2bridge、L2tunnel、Overlay、Transparent、NATの5つの異な | -------------- | ----------- | ------------------------------ | --------------- | ------------------------------ | | L2bridge | コンテナは外部のvSwitchに接続されます。コンテナはアンダーレイネットワークに接続されますが、物理ネットワークはコンテナのMACを上り/下りで書き換えるため、MACを学習する必要はありません。コンテナ間トラフィックは、コンテナホスト内でブリッジされます。 | MACはホストのMACに書き換えられ、IPは変わりません。| [win-bridge](https://github.com/containernetworking/plugins/tree/master/plugins/main/windows/win-bridge)、[Azure-CNI](https://github.com/Azure/azure-container-networking/blob/master/docs/cni.md)、Flannelホストゲートウェイは、win-bridgeを使用します。 | win-bridgeはL2bridgeネットワークモードを使用して、コンテナをホストのアンダーレイに接続して、最高のパフォーマンスを提供します。ノード間接続にはユーザー定義ルート(UDR)が必要です。 | | L2Tunnel | これはl2bridgeの特殊なケースですが、Azureでのみ使用されます。すべてのパケットは、SDNポリシーが適用されている仮想化ホストに送信されます。| MACが書き換えられ、IPがアンダーレイネットワークで表示されます。 | [Azure-CNI](https://github.com/Azure/azure-container-networking/blob/master/docs/cni.md) | Azure-CNIを使用すると、コンテナをAzure vNETと統合し、[Azure Virtual Networkが提供](https://azure.microsoft.com/en-us/services/virtual-network/)する一連の機能を活用できます。たとえば、Azureサービスに安全に接続するか、Azure NSGを使用します。[azure-cniのいくつかの例](https://docs.microsoft.com/en-us/azure/aks/concepts-network#azure-cni-advanced-networking)を参照してください。| -| オーバーレイ(KubernetesのWindows用のオーバーレイネットワークは *アルファ* 段階です) | コンテナには、外部のvSwitchに接続されたvNICが付与されます。各オーバーレイネットワークは、カスタムIPプレフィックスで定義された独自のIPサブネットを取得します。オーバーレイネットワークドライバーは、VXLANを使用してカプセル化します。 | 外部ヘッダーでカプセル化されます。 | [Win-overlay](https://github.com/containernetworking/plugins/tree/master/plugins/main/windows/win-overlay)、Flannel VXLAN (win-overlayを使用) | win-overlayは、仮想コンテナーネットワークをホストのアンダーレイから分離する必要がある場合に使用する必要があります(セキュリティ上の理由など)。データセンター内のIPが制限されている場合に、(異なるVNIDタグを持つ)異なるオーバーレイネットワークでIPを再利用できるようにします。このオプションには、Windows Server 2019で[KB4489899](https://support.microsoft.com/help/4489899)が必要です。| +| オーバーレイ(KubernetesのWindows用のオーバーレイネットワークは *アルファ* 段階です) | コンテナには、外部のvSwitchに接続されたvNICが付与されます。各オーバーレイネットワークは、カスタムIPプレフィックスで定義された独自のIPサブネットを取得します。オーバーレイネットワークドライバーは、VXLANを使用してカプセル化します。 | 外部ヘッダーでカプセル化されます。 | [Win-overlay](https://github.com/containernetworking/plugins/tree/master/plugins/main/windows/win-overlay)、Flannel VXLAN (win-overlayを使用) | win-overlayは、仮想コンテナネットワークをホストのアンダーレイから分離する必要がある場合に使用する必要があります(セキュリティ上の理由など)。データセンター内のIPが制限されている場合に、(異なるVNIDタグを持つ)異なるオーバーレイネットワークでIPを再利用できるようにします。このオプションには、Windows Server 2019で[KB4489899](https://support.microsoft.com/help/4489899)が必要です。| | 透過的([ovn-kubernetes](https://github.com/openvswitch/ovn-kubernetes)の特別な使用例) | 外部のvSwitchが必要です。コンテナは外部のvSwitchに接続され、論理ネットワーク(論理スイッチおよびルーター)を介したPod内通信を可能にします。 | パケットは、[GENEVE](https://datatracker.ietf.org/doc/draft-gross-geneve/)または[STT](https://datatracker.ietf.org/doc/draft-davie-stt/)トンネリングを介してカプセル化され、同じホスト上にないポッドに到達します。パケットは、ovnネットワークコントローラーによって提供されるトンネルメタデータ情報を介して転送またはドロップされます。NATは南北通信のために行われます。 | [ovn-kubernetes](https://github.com/openvswitch/ovn-kubernetes) | [ansible経由でデプロイ](https://github.com/openvswitch/ovn-kubernetes/tree/master/contrib)します。分散ACLは、Kubernetesポリシーを介して適用できます。 IPAMをサポートします。負荷分散は、kube-proxyなしで実現できます。 NATは、ip​​tables/netshを使用せずに行われます。 | | NAT(*Kubernetesでは使用されません*) | 
コンテナには、内部のvSwitchに接続されたvNICが付与されます。DNS/DHCPは、[WinNAT](https://blogs.technet.microsoft.com/virtualization/2016/05/25/windows-nat-winnat-capabilities-and-limitations/)と呼ばれる内部コンポーネントを使用して提供されます。 | MACおよびIPはホストMAC/IPに書き換えられます。 | [nat](https://github.com/Microsoft/windows-container-networking/tree/master/plugins/nat) | 完全を期すためにここに含まれています。 | diff --git a/content/ja/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md b/content/ja/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md index 2ac539bf06..0fe0ffe410 100644 --- a/content/ja/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md +++ b/content/ja/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md @@ -295,7 +295,7 @@ Liveness ProbeおよびReadiness Probeのチェック動作をより正確に制 * `periodSeconds`: Probeが実行される頻度(秒数)。デフォルトは10秒。最小値は1。 * `timeoutSeconds`: Probeがタイムアウトになるまでの秒数。デフォルトは1秒。最小値は1。 * `successThreshold`: 一度Probeが失敗した後、次のProbeが成功したとみなされるための最小連続成功数。 -デフォルトは1。Liveness Probeには1を設定する必要があります。最小値は1。 +デフォルトは1。Liveness ProbeおよびStartup Probeには1を設定する必要があります。最小値は1。 * `failureThreshold`: Probeが失敗した場合、Kubernetesは`failureThreshold`に設定した回数までProbeを試行します。 Liveness Probeにおいて、試行回数に到達することはコンテナを再起動することを意味します。 Readiness Probeの場合は、Podが準備できていない状態として通知されます。デフォルトは3。最小値は1。 diff --git a/content/ja/docs/tasks/debug/debug-application/_index.md b/content/ja/docs/tasks/debug/debug-application/_index.md index 1924abae92..901a82969b 100644 --- a/content/ja/docs/tasks/debug/debug-application/_index.md +++ b/content/ja/docs/tasks/debug/debug-application/_index.md @@ -1,7 +1,7 @@ --- title: アプリケーションのトラブルシューティング -description: Debugging common containerized application issues. +description: コンテナ化されたアプリケーションの一般的な問題をデバッグします。 weight: 20 --- -This doc contains a set of resources for fixing issues with containerized applications. It covers things like common issues with Kubernetes resources (like Pods, Services, or StatefulSets), advice on making sense of container termination messages, and ways to debug running containers. +このドキュメントには、コンテナ化されたアプリケーションの問題を解決するための、一連のリソースが記載されています。Kubernetesリソース(Pod、Service、StatefulSetなど)に関する一般的な問題や、コンテナ終了メッセージを理解するためのアドバイス、実行中のコンテナをデバッグする方法などが網羅されています。 diff --git a/content/ja/docs/tasks/run-application/run-replicated-stateful-application.md b/content/ja/docs/tasks/run-application/run-replicated-stateful-application.md index 18b4dda5bb..9b977d4de2 100644 --- a/content/ja/docs/tasks/run-application/run-replicated-stateful-application.md +++ b/content/ja/docs/tasks/run-application/run-replicated-stateful-application.md @@ -194,7 +194,7 @@ StatefulSetがスケールアップした場合や、次のPodがPersistentVolum ## クライアントトラフィックを送信する テストクエリーをMySQLマスター(ホスト名 `mysql-0.mysql`)に送信するには、 -`mysql:5.7`イメージを使って一時的なコンテナーを実行し、`mysql`クライアントバイナリーを実行します。 +`mysql:5.7`イメージを使って一時的なコンテナを実行し、`mysql`クライアントバイナリーを実行します。 ```shell kubectl run mysql-client --image=mysql:5.7 -i --rm --restart=Never --\ diff --git a/content/pt-br/docs/concepts/workloads/controllers/replicaset.md b/content/pt-br/docs/concepts/workloads/controllers/replicaset.md new file mode 100644 index 0000000000..9bba625090 --- /dev/null +++ b/content/pt-br/docs/concepts/workloads/controllers/replicaset.md @@ -0,0 +1,361 @@ +--- +title: ReplicaSet +content_type: concept +weight: 20 +--- + + + +O propósito de um ReplicaSet é gerenciar um conjunto de réplicas de Pods em execução a qualquer momento. Por isso, é geralmente utilizado para garantir a disponibilidade de um certo número de Pods idênticos. 
+ + + +## Como um ReplicaSet funciona + +Um ReplicaSet é definido por campos, incluindo um seletor que identifica quais Pods podem ser adquiridos, um número de réplicas indicando quantos Pods devem ser mantidos, e um pod template especificando as definições para novos Pods que devem ser criados para atender ao número de réplicas estipuladas. Um ReplicaSet cumpre seu propósito criando e deletando Pods conforme for preciso para atingir o número desejado. Quando um ReplicaSet precisa criar novos Pods, ele usa o seu podTemplate. + +Um ReplicaSet é conectado aos seus Pods pelo campo do Pod [metadata.ownerReferences](/docs/concepts/workloads/controllers/garbage-collection/#owners-and-dependents), que especifica qual recurso é dono do objeto atual. Todos os Pods adquiridos por um ReplicaSet possuem as informações de identificação do ReplicaSet vinculado no campo ownerReferences. É por esse elo que o ReplicaSet tem conhecimento do estado dos Pods que está mantendo e assim faz seu planejamento. + +Um ReplicaSet identifica novos Pods a serem adquiridos utilizando o seu seletor. Caso exista um Pod que não tenha OwnerReference ou se o OwnerReference não for um {{< glossary_tooltip term_id="controller" >}} e o seu seletor corresponder com o do ReplicaSet, o Pod é adquirido imediatamente por esse ReplicaSet. + +## Quando usar um ReplicaSet + +Um ReplicaSet garante que um número de réplicas de um Pod esteja em execução em qualquer momento. Entretanto, um Deployment é um conceito de nível superior que gerencia ReplicaSets e fornece atualizações declarativas aos Pods assim como várias outras funções úteis. Portanto, nós recomendamos a utilização de Deployments ao invés do uso direto de ReplicaSets, exceto se for preciso uma orquestração de atualização customizada ou que nenhuma atualização seja necessária. + +Isso na realidade significa que você pode nunca precisar manipular objetos ReplicaSet: +prefira usar um Deployment, e defina sua aplicação na seção spec. + +## Exemplo + +{{< codenew file="controllers/frontend.yaml" >}} + +Salvando esse manifesto como `frontend.yaml` e submetendo no cluster Kubernetes irá criar o ReplicaSet definido e os Pods mantidos pelo mesmo. + +```shell +kubectl apply -f https://kubernetes.io/pt-br/examples/controllers/frontend.yaml +``` + +Você pode então retornar os ReplicaSets atualmente existentes no cluster: + +```shell +kubectl get rs +``` + +E observar o ReplicaSet com o nome de frontend que você criou: + +```shell +NAME DESIRED CURRENT READY AGE +frontend 3 3 3 6s +``` + +Você também pode checar o estado do ReplicaSet: + +```shell +kubectl describe rs/frontend +``` + +E você deve ver uma saída similar a esta: + +```shell +Name: frontend +Namespace: default +Selector: tier=frontend +Labels: app=guestbook + tier=frontend +Annotations: kubectl.kubernetes.io/last-applied-configuration: + {"apiVersion":"apps/v1","kind":"ReplicaSet","metadata":{"annotations":{},"labels":{"app":"guestbook","tier":"frontend"},"name":"frontend",... 
+Replicas: 3 current / 3 desired +Pods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed +Pod Template: + Labels: tier=frontend + Containers: + php-redis: + Image: gcr.io/google_samples/gb-frontend:v3 + Port: + Host Port: + Environment: + Mounts: + Volumes: +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal SuccessfulCreate 117s replicaset-controller Created pod: frontend-wtsmm + Normal SuccessfulCreate 116s replicaset-controller Created pod: frontend-b2zdv + Normal SuccessfulCreate 116s replicaset-controller Created pod: frontend-vcmts +``` + +E por fim você consegue verificar os Pods que foram criados: + +```shell +kubectl get pods +``` + +Você deve ver uma informação do Pod similar a esta: + +```shell +NAME READY STATUS RESTARTS AGE +frontend-b2zdv 1/1 Running 0 6m36s +frontend-vcmts 1/1 Running 0 6m36s +frontend-wtsmm 1/1 Running 0 6m36s +``` + +Você consegue também validar que a referência de dono desses pods está definida para o ReplicaSet frontend. +Para fazer isso, retorne o yaml de um dos Pods que estão executando: + +```shell +kubectl get pods frontend-b2zdv -o yaml +``` + +O output será semelhante ao exibido abaixo, com as informações do ReplicaSet frontend definidas no campo ownerReferences dentro da metadata do Pod: + +```shell +apiVersion: v1 +kind: Pod +metadata: + creationTimestamp: "2020-02-12T07:06:16Z" + generateName: frontend- + labels: + tier: frontend + name: frontend-b2zdv + namespace: default + ownerReferences: + - apiVersion: apps/v1 + blockOwnerDeletion: true + controller: true + kind: ReplicaSet + name: frontend + uid: f391f6db-bb9b-4c09-ae74-6a1f77f3d5cf +... +``` + +## Aquisições de Pod sem Template + +Enquanto você pode criar Pods diretamente sem problemas, é fortemente recomendado que você se certifique que esses Pods não tenham labels que combinem com o seletor de um dos seus ReplicaSets. O motivo para isso é que um ReplicaSet não é limitado a possuir apenas Pods estipulados por seu template -- ele pode adquirir outros Pods na maneira descrita nas seções anteriores. + +Observe o exemplo anterior do ReplicaSet frontend, e seus Pods especificados no seguinte manifesto: + +{{< codenew file="pods/pod-rs.yaml" >}} + +Como esses Pods não possuem um Controller (ou qualquer objeto) referenciado como seu dono e possuem labels que combinam com o seletor do ReplicaSet frontend, eles serão imediatamente adquiridos pelo ReplicaSet. + +Imagine que você crie os Pods depois que o ReplicaSet frontend foi instalado e criou as réplicas de Pod iniciais definidas para cumprir o número de réplicas requeridas: + +```shell +kubectl apply -f https://kubernetes.io/examples/pods/pod-rs.yaml +``` + +Os novos Pods serão adquiridos pelo ReplicaSet, e logo depois terminados já que o ReplicaSet estará acima do número desejado. + +Buscando os Pods: + +```shell +kubectl get pods +``` + +O output mostra que os novos Pods ou já estão terminados, ou estão no processo de ser terminados. 
+
+```shell
+NAME             READY   STATUS        RESTARTS   AGE
+frontend-b2zdv   1/1     Running       0          10m
+frontend-vcmts   1/1     Running       0          10m
+frontend-wtsmm   1/1     Running       0          10m
+pod1             0/1     Terminating   0          1s
+pod2             0/1     Terminating   0          1s
+```
+
+Se você criar os Pods primeiro:
+
+```shell
+kubectl apply -f https://kubernetes.io/examples/pods/pod-rs.yaml
+```
+
+mas em seguida criar o ReplicaSet:
+
+```shell
+kubectl apply -f https://kubernetes.io/examples/controllers/frontend.yaml
+```
+
+Você vai perceber que o ReplicaSet adquiriu os Pods existentes e criou novos Pods de acordo com o seu spec apenas até que o número de novos Pods somado ao de Pods iniciais seja igual ao número desejado. Listando os Pods:
+
+```shell
+kubectl get pods
+```
+
+Irá retornar a seguinte saída:
+```shell
+NAME             READY   STATUS    RESTARTS   AGE
+frontend-hmmj2   1/1     Running   0          9s
+pod1             1/1     Running   0          36s
+pod2             1/1     Running   0          36s
+```
+
+Nesse sentido, um ReplicaSet pode possuir um grupo não homogêneo de Pods.
+
+## Escrevendo um manifesto ReplicaSet
+
+Como todos os outros objetos da API do Kubernetes, um ReplicaSet necessita dos campos `apiVersion`, `kind` e `metadata`.
+Para ReplicaSets, o `kind` é sempre `ReplicaSet`.
+
+O nome de um objeto ReplicaSet precisa ser um [nome de subdomínio de DNS](/pt-br/docs/concepts/overview/working-with-objects/names#dns-subdomain-names) válido.
+
+Um ReplicaSet também precisa de uma [seção `.spec`](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status).
+
+### Template de Pod
+
+O `.spec.template` é um [template de pod](/docs/concepts/workloads/pods/#pod-templates) que também necessita de labels configurados. No nosso exemplo `frontend.yaml` nós temos uma label: `tier: frontend`.
+Fique atento para que esses labels não se sobreponham aos seletores de outros controllers, para que eles não tentem adquirir esses Pods.
+
+Para o campo de [restart policy](/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy) do template, `.spec.template.spec.restartPolicy`, o único valor permitido é `Always`, que é o padrão.
+
+### Seletor de Pod
+
+O campo `.spec.selector` é um [seletor de labels](/docs/concepts/overview/working-with-objects/labels/). Como discutido [anteriormente](#como-um-replicaset-funciona), esses são os labels usados para identificar Pods em potencial para aquisição. No nosso exemplo `frontend.yaml`, o seletor era:
+
+```yaml
+matchLabels:
+  tier: frontend
+```
+
+No ReplicaSet, `.spec.template.metadata.labels` precisa corresponder a `.spec.selector`, ou o manifesto será rejeitado pela API.
+
+{{< note >}}
+Para dois ReplicaSets que definem o mesmo `.spec.selector`, mas com campos `.spec.template.metadata.labels` e `.spec.template.spec` diferentes, cada ReplicaSet ignora os Pods criados pelo outro ReplicaSet.
+{{< /note >}}
+
+### Replicas
+
+Você pode definir quantos Pods devem executar simultaneamente determinando `.spec.replicas`. O ReplicaSet irá criar/deletar Pods para igualar a esse número.
+
+Se você não especificar `.spec.replicas`, o valor padrão é 1.
+
+## Trabalhando com ReplicaSets
+
+### Deletando um ReplicaSet e seus Pods
+
+Para deletar um ReplicaSet e todos os seus Pods, use [`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands#delete). O [Garbage collector](/docs/concepts/workloads/controllers/garbage-collection/) deleta automaticamente todos os Pods dependentes por padrão.
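+
+Por exemplo, assumindo o ReplicaSet `frontend` criado nos exemplos acima, um esboço:
+
+```shell
+# deleta o ReplicaSet "frontend" e, por padrão, também os seus Pods dependentes
+kubectl delete rs frontend
+```
+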
+Quando usar a API REST ou a biblioteca `client-go`, você precisa definir `propagationPolicy` para `Background` ou `Foreground` na opção `-d`.
+Por exemplo:
+```shell
+kubectl proxy --port=8080
+curl -X DELETE 'localhost:8080/apis/apps/v1/namespaces/default/replicasets/frontend' \
+> -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}' \
+> -H "Content-Type: application/json"
+```
+
+### Deletando apenas o ReplicaSet
+
+Você consegue deletar um ReplicaSet sem afetar qualquer um dos seus Pods usando [`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands#delete) com a opção `--cascade=orphan`.
+Quando usar a API REST ou a biblioteca `client-go`, você precisa definir `propagationPolicy` para `Orphan`.
+Por exemplo:
+```shell
+kubectl proxy --port=8080
+curl -X DELETE 'localhost:8080/apis/apps/v1/namespaces/default/replicasets/frontend' \
+> -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Orphan"}' \
+> -H "Content-Type: application/json"
+```
+
+Quando o ReplicaSet original for deletado, você pode criar um novo ReplicaSet para substituí-lo. Contanto que o `.spec.selector` do antigo e do novo seja o mesmo, o novo irá adquirir os Pods antigos. Porém, o novo ReplicaSet não fará qualquer esforço para que os Pods existentes correspondam a um template de pod novo e diferente.
+Para atualizar esses Pods para um novo spec de um modo controlado, use um [Deployment](/docs/concepts/workloads/controllers/deployment/#creating-a-deployment), já que ReplicaSets não suportam uma atualização gradual diretamente.
+
+### Isolando Pods de um ReplicaSet
+
+Você pode remover Pods de um ReplicaSet trocando suas labels. Essa técnica pode ser usada para remover Pods de um serviço para depuração, recuperação de dados, etc. Pods que forem removidos por esse método serão substituídos imediatamente (assumindo que o número de réplicas não tenha sido alterado).
+
+### Escalonando um ReplicaSet
+
+Um ReplicaSet pode ser facilmente escalonado para cima ou para baixo simplesmente atualizando o campo `.spec.replicas`. O controlador do ReplicaSet garante que o número desejado de Pods com um seletor de label correspondente esteja disponível e operando.
+
+Ao escalonar para baixo, o controlador do ReplicaSet escolhe quais Pods deletar ordenando os Pods disponíveis, priorizando a remoção de acordo com o seguinte algoritmo geral:
+ 1. Pods pendentes (e não agendáveis) são removidos primeiro.
+ 2. Se a anotação `controller.kubernetes.io/pod-deletion-cost` estiver definida, então o Pod com o menor valor será removido primeiro.
+ 3. Pods em nós com mais réplicas são removidos antes de Pods em nós com menos réplicas.
+ 4. Se as datas de criação dos Pods forem diferentes, o Pod que foi criado mais recentemente vem antes do Pod mais antigo (as datas de criação são agrupadas em uma escala logarítmica de inteiros caso o [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) `LogarithmicScaleDown` esteja habilitado).
+
+Se os Pods empatarem em todos os critérios acima, a seleção é aleatória.
+
+### Custo de deleção de Pods
+{{< feature-state for_k8s_version="v1.22" state="beta" >}}
+
+Utilizando a anotação [`controller.kubernetes.io/pod-deletion-cost`](/docs/reference/labels-annotations-taints/#pod-deletion-cost),
+usuários podem definir uma preferência em relação a quais Pods serão removidos primeiro caso o ReplicaSet precise escalonar para baixo.
+
+A anotação deve ser definida no Pod, com um intervalo de [-2147483647, 2147483647]. Isso representa o custo de deletar um Pod comparado com outros Pods que pertencem a esse mesmo ReplicaSet.
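+
+Como ilustração, assumindo o Pod `frontend-b2zdv` dos exemplos anteriores, a anotação poderia ser definida com `kubectl annotate` (um esboço):
+
+```shell
+# um custo menor torna este Pod o candidato preferido quando o ReplicaSet for reduzido
+kubectl annotate pod frontend-b2zdv controller.kubernetes.io/pod-deletion-cost=-1000
+```
+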
+Pods com um custo de deleção menor têm preferência para deleção sobre Pods com um custo maior.
+
+O valor implícito dessa anotação para Pods que não a têm definida é 0; valores negativos são permitidos.
+Valores inválidos serão rejeitados pelo servidor da API.
+
+Esse recurso está em beta e é habilitado por padrão. Você consegue desabilitá-lo usando o
+[feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
+`PodDeletionCost` tanto no kube-apiserver quanto no kube-controller-manager.
+
+{{< note >}}
+- Esse recurso é honrado com base no melhor esforço, portanto não oferece qualquer garantia quanto à ordem de deleção dos Pods.
+- Usuários devem evitar atualizações frequentes da anotação, como atualizá-la com base em alguma métrica, porque isso gera um número significativo de atualizações de Pods no apiserver.
+{{< /note >}}
+
+#### Exemplo de caso de uso
+Os diferentes Pods de uma aplicação podem ter níveis de utilização divergentes. Ao escalonar para baixo, a aplicação pode preferir remover os Pods com a menor utilização. Para evitar atualizações frequentes nos Pods, a aplicação deve atualizar `controller.kubernetes.io/pod-deletion-cost` uma única vez antes de emitir a redução do número de réplicas (configurando a anotação para um valor proporcional ao nível de utilização do Pod). Isso funciona se a própria aplicação controlar o escalonamento; por exemplo, o Pod condutor (driver) de um Deployment de Spark.
+
+### ReplicaSet como um alvo de Horizontal Pod Autoscaler
+
+Um ReplicaSet pode também ser controlado por um
+[Horizontal Pod Autoscaler (HPA)](/docs/tasks/run-application/horizontal-pod-autoscale/). Isto é,
+um ReplicaSet pode ser automaticamente escalonado por um HPA. Aqui está um exemplo de um HPA controlando o ReplicaSet que nós criamos no exemplo anterior.
+
+{{< codenew file="controllers/hpa-rs.yaml" >}}
+
+Salvar esse manifesto como `hpa-rs.yaml` e submetê-lo ao cluster Kubernetes deve
+criar o HPA definido, que escalona automaticamente o ReplicaSet alvo dependendo do uso de CPU
+dos Pods replicados.
+
+```shell
+kubectl apply -f https://k8s.io/examples/controllers/hpa-rs.yaml
+```
+
+
+Alternativamente, você pode usar o comando `kubectl autoscale` para realizar a mesma coisa
+(e é bem mais simples!):
+
+```shell
+kubectl autoscale rs frontend --max=10 --min=3 --cpu-percent=50
+```
+
+## Alternativas ao ReplicaSet
+
+### Deployment (recomendado)
+
+Um [`Deployment`](/docs/concepts/workloads/controllers/deployment/) é um objeto que pode possuir ReplicaSets e atualizá-los (e, por consequência, seus Pods) por meio de atualizações declarativas e graduais do lado do servidor.
+Embora ReplicaSets possam ser usados de forma independente, hoje eles são usados principalmente por Deployments como um mecanismo para orquestrar a criação, a deleção e a atualização de Pods. Quando você usa Deployments, não precisa se preocupar com o gerenciamento dos ReplicaSets criados por eles. Deployments controlam e gerenciam seus ReplicaSets.
+Por isso, é recomendado o uso de Deployments quando você deseja ReplicaSets.
+
+### Bare Pods
+
+Diferentemente do caso em que um usuário cria Pods diretamente, um ReplicaSet substitui Pods que forem deletados ou terminados por qualquer motivo, como em caso de falha de nó ou manutenção disruptiva de nó, como uma atualização de kernel. Por esse motivo, nós recomendamos que você use um ReplicaSet mesmo que sua aplicação necessite apenas de um único Pod.
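+
+Você pode observar esse comportamento de substituição na prática. Um esboço, assumindo o ReplicaSet `frontend` e o Pod `frontend-b2zdv` dos exemplos anteriores:
+
+```shell
+# deleta um dos Pods gerenciados pelo ReplicaSet
+kubectl delete pod frontend-b2zdv
+
+# observe o ReplicaSet criar um Pod substituto em seguida
+kubectl get pods --watch
+```
+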
+Pense no ReplicaSet como algo semelhante a um supervisor de processos, exceto que ele supervisiona vários Pods em múltiplos nós, em vez de processos individuais em um único nó. Um ReplicaSet delega reinicializações de containers locais para algum agente do nó (Kubelet ou Docker, por exemplo).
+
+### Job
+
+Use um [`Job`](/docs/concepts/workloads/controllers/job/) no lugar de um ReplicaSet para Pods que têm por objetivo terminar ao final da execução (como batch jobs).
+
+### DaemonSet
+
+Use um [`DaemonSet`](/docs/concepts/workloads/controllers/daemonset/) no lugar de um ReplicaSet para Pods que precisam prover funções no nível de sistema, como monitoramento do sistema ou logs do sistema. Esses Pods têm um tempo de vida ligado ao tempo de vida da máquina:
+os Pods precisam estar executando na máquina antes de outros Pods inicializarem, e podem ser terminados com segurança quando a máquina estiver pronta para ser reiniciada ou desligada.
+
+### ReplicationController
+ReplicaSets são os sucessores dos [_ReplicationControllers_](/docs/concepts/workloads/controllers/replicationcontroller/).
+Os dois servem para o mesmo propósito e têm comportamentos semelhantes, exceto que um ReplicationController não suporta os requisitos de seletor baseados em conjuntos, como descrito no [guia de usuário de labels](/docs/concepts/overview/working-with-objects/labels/#label-selectors).
+Portanto, ReplicaSets são preferíveis a ReplicationControllers.
+
+
+## {{% heading "whatsnext" %}}
+
+* Aprenda sobre [Pods](/docs/concepts/workloads/pods).
+* Aprenda sobre [Deployments](/docs/concepts/workloads/controllers/deployment/).
+* [Executar uma aplicação stateless usando um Deployment](/docs/tasks/run-application/run-stateless-application-deployment/),
+  que faz uso de ReplicaSets para funcionar.
+* `ReplicaSet` é um recurso de alto nível na API REST do Kubernetes.
+  Leia a definição de objeto {{< api-reference page="workload-resources/replica-set-v1" >}}
+  para entender a API de ReplicaSets.
+* Leia sobre [PodDisruptionBudget](/docs/concepts/workloads/pods/disruptions/) e como
+  você consegue usá-lo para gerenciar a disponibilidade da aplicação durante interrupções.
diff --git a/content/pt-br/docs/contribute/review/reviewing-prs.md b/content/pt-br/docs/contribute/review/reviewing-prs.md
new file mode 100644
index 0000000000..47042ad6db
--- /dev/null
+++ b/content/pt-br/docs/contribute/review/reviewing-prs.md
@@ -0,0 +1,159 @@
+---
+title: Revisando pull requests
+content_type: concept
+main_menu: true
+weight: 10
+---
+
+
+
+Qualquer pessoa pode revisar um _pull request_ da documentação.
+Visite a seção [pull requests](https://github.com/kubernetes/website/pulls) no repositório do site Kubernetes para ver os _pull requests_ abertos.
+
+Revisar os _pull requests_ da documentação é uma ótima maneira de se apresentar à comunidade Kubernetes.
+Isso ajuda você a conhecer a base de código e a construir confiança com outros colaboradores.
+
+Antes de revisar, é uma boa ideia:
+
+- Ler o [guia de conteúdo](/docs/contribute/style/content-guide/) e o [guia de estilo](/docs/contribute/style/style-guide/) para que você possa deixar comentários esclarecedores.
+- Entender as diferentes [funções e responsabilidades](/docs/contribute/participate/roles-and-responsibilities/) na comunidade da documentação do Kubernetes.
+
+
+
+## Antes de começar
+
+Antes de começar uma revisão:
+
+- Leia o [Código de Conduta da CNCF](https://github.com/cncf/foundation/blob/main/code-of-conduct.md) e certifique-se de cumpri-lo o tempo todo.
+- Seja educado, atencioso e prestativo.
+- Comente os aspectos positivos dos PRs, bem como as mudanças.
+- Seja empático e atento a como sua avaliação pode ser recebida.
+- Assuma boas intenções e faça perguntas esclarecedoras.
+- Colaboradores experientes, considerem trabalhar em par com os novos colaboradores cujo trabalho requer grandes mudanças.
+
+## Processo de revisão
+
+Em geral, revise os _pull requests_ quanto ao conteúdo e estilo em inglês.
+A Figura 1 descreve as etapas para o processo de revisão.
+Seguem os detalhes para cada etapa.
+
+
+
+
+
+{{< mermaid >}}
+flowchart LR
+    subgraph fourth[Começar revisão]
+    direction TB
+    S[ ] -.-
+    M[adicionar comentários] --> N[revisar mudanças]
+    N --> O[novos colaboradores devem<br>escolher Comment]
+    end
+    subgraph third[Selecionar PR]
+    direction TB
+    T[ ] -.-
+    J[leia a descrição<br>e comentários]--> K[visualize as mudanças no ambiente<br>de pré-visualização do Netlify]
+    end
+
+  A[Revise a lista de PR abertos]--> B[Filtre os PRs abertos<br>pela label]
+  B --> third --> fourth
+
+
+classDef grey fill:#dddddd,stroke:#ffffff,stroke-width:px,color:#000000, font-size:15px;
+classDef white fill:#ffffff,stroke:#000,stroke-width:px,color:#000,font-weight:bold
+classDef spacewhite fill:#ffffff,stroke:#fff,stroke-width:0px,color:#000
+class A,B,J,K,M,N,O grey
+class S,T spacewhite
+class third,fourth white
+{{< /mermaid >}}
+
+Figura 1. Etapas do processo de revisão.
+
+1. Acesse [https://github.com/kubernetes/website/pulls](https://github.com/kubernetes/website/pulls).
+   Você verá uma lista de todos os _pull requests_ abertos para o site e a documentação do Kubernetes.
+
+2. Filtre os PRs abertos usando um ou todos os seguintes _labels_:
+
+   - `cncf-cla: yes` (Recomendado): PRs enviados por colaboradores que não assinaram o CLA não podem passar por _merge_. Consulte [Assinar o CLA](/docs/contribute/new-content/#sign-the-cla) para obter mais informações.
+   - `language/pt` (Recomendado): Filtro para PRs em português.
+   - `size/`: Filtro para PRs com um determinado tamanho. Se você é novo, comece com PRs menores.
+
+   Além disso, certifique-se de que o PR não esteja marcado como `work in progress`. Os PRs que usam o _label_ `work in progress` ainda não estão prontos para revisão.
+
+3. Depois de selecionar um PR para revisar, entenda a mudança:
+
+   - Lendo a descrição do PR para entender as alterações feitas e ler quaisquer `issues` vinculadas
+   - Lendo quaisquer comentários de outros revisores
+   - Clicando na aba **Files changed** para ver os arquivos e linhas alteradas
+   - Pré-visualizando as alterações no ambiente de pré-visualização do _Netlify_, rolando até a seção _PR's build check_ na parte inferior da aba **Conversation**.
+     Aqui está uma captura de tela (isso mostra o site do GitHub na versão desktop; se você estiver revisando em um tablet ou smartphone, a interface web do GitHub será um pouco diferente):
+     {{< figure src="/images/docs/github_netlify_deploy_preview.png" alt="Detalhes do PR no GitHub, incluindo o link para a visualização do Netlify" >}}
+     Para abrir a visualização, selecione o link **Details** da linha **deploy/netlify** na lista de verificações.
+
+4. Vá para a aba **Files changed** para iniciar sua revisão.
+
+   1. Clique no símbolo `+` ao lado da linha que você deseja comentar.
+
+   1. Preencha com todos os comentários que você tenha sobre a linha e clique em **Add single comment** (se você tiver apenas um comentário para fazer) ou **Start a review** (se você tiver vários comentários para fazer).
+
+   1. Quando terminar, clique em **Review changes** na parte superior da página.
+      Aqui, você pode adicionar um resumo da sua revisão (e deixar alguns comentários positivos para o colaborador!).
+      Por favor, sempre use a opção "Comment".
+
+      - Evite clicar no botão "Request changes" ao concluir sua revisão.
+        Se você quiser bloquear o _merge_ do PR antes que outras alterações sejam realizadas, você pode deixar um comentário "/hold".
+        Mencione por que você está definindo o bloqueio e, opcionalmente, especifique as condições sob as quais o bloqueio pode ser removido por você ou por outros revisores.
+
+      - Evite clicar no botão "Approve" ao concluir sua revisão.
+        Deixar um comentário "/approve" é recomendado na maioria dos casos.
+
+
+## Checklist para revisão
+
+Ao revisar, use o seguinte como ponto de partida.
+
+### Linguagem e gramática
+
+- Existe algum erro óbvio de linguagem ou gramática? Existe uma maneira melhor de expressar algo?
+  - Concentre-se na linguagem e na gramática das partes da página que o autor está mudando.
+    A menos que o autor esteja claramente com o objetivo de atualizar a página inteira, ele não tem obrigação de corrigir todos os problemas na página.
+  - Quando um PR atualiza uma página existente, você deve se concentrar em revisar as partes da página que estão sendo atualizadas.
+    Esse conteúdo alterado deve ser revisado quanto à correção técnica e editorial.
+    Se você encontrar erros na página que não se relacionam diretamente com o que o autor do PR está tentando resolver, eles devem ser tratados em uma `issue` separada (primeiro, verifique se já não existe uma `issue` sobre isso).
+  - Cuidado com os _pull requests_ que movem conteúdo.
+    Se um autor renomear uma página ou combinar duas páginas, nós (_Kubernetes SIG Docs_) geralmente evitamos pedir a esse autor que corrija todas as questões gramaticais ou ortográficas que poderíamos identificar dentro desse conteúdo movido.
+- Existem palavras complicadas ou arcaicas que podem ser substituídas por uma palavra mais simples?
+- Existem palavras, termos ou frases em uso que podem ser substituídos por uma alternativa não discriminatória?
+- A escolha das palavras e o uso de maiúsculas seguem o [guia de estilo](/docs/contribute/style/style-guide/)?
+- Existem frases longas que podem ser mais curtas ou menos complexas?
+- Existem parágrafos longos que podem funcionar melhor como uma lista ou tabela?
+
+### Conteúdo
+
+- Existe conteúdo semelhante em outro lugar no site Kubernetes?
+- O conteúdo está excessivamente vinculado a uma documentação externa, de um fornecedor individual ou de um código não aberto?
+
+### Website
+
+- Esse PR alterou ou removeu um título de página, _slug/alias_ ou link?
+Em caso afirmativo, existem links quebrados como resultado deste PR?
+Existe outra opção, como alterar o título da página sem alterar o _slug_?
+- O PR apresenta uma nova página? Caso afirmativo:
+  - A página está usando corretamente o [tipo de conteúdo](/docs/contribute/style/page-content-types/) e os _shortcodes_ do Hugo relacionados?
+  - A página aparece corretamente na navegação da seção (ou em geral)?
+  - A página deve aparecer na lista em [Documentação/Home](/pt-br/docs/home/)?
+- As alterações aparecem na visualização do Netlify? Esteja particularmente atento a listas, blocos de código, tabelas, notas e imagens.
+
+### Outro
+
+- Cuidado com as [edições triviais](https://www.kubernetes.dev/docs/guide/pull-requests/#trivial-edits);
+  se você observar uma mudança que entender ser uma edição trivial, por favor, aponte essa política (ainda assim, não há problema em aceitar a alteração se ela for genuinamente uma melhoria).
+- Incentive os autores que estão fazendo correções de espaços em branco a fazê-las no primeiro commit de seu PR e, em seguida, adicionar as outras alterações sobre ele.
+  Isso facilita as revisões e o _merge_.
+  Cuidado especialmente com uma mudança trivial que aconteça em um único _commit_ juntamente com uma grande quantidade de limpeza de espaços em branco (e, se você observar isso, incentive o autor a corrigi-lo).
+
+Como revisor, se você identificar pequenos problemas com um PR que não são essenciais para o significado, como erros de digitação ou espaços em branco incorretos, sinalize seus comentários com `nit:`.
+Isso permite que o autor saiba que essa parte do seu _feedback_ não é uma crítica.
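+
+Por exemplo, um comentário sinalizado dessa forma poderia ser algo como (exemplo hipotético):
+
+```
+nit: erro de digitação em "confiuração"; deveria ser "configuração".
+```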
+
+Se você estiver considerando um _pull request_ e todo o _feedback_ restante estiver marcado como um `nit`, você pode realizar o _merge_ do PR de qualquer maneira.
+Nesse caso, muitas vezes é útil abrir uma _issue_ sobre os `nits` restantes.
+Considere se você é capaz de atender aos requisitos para marcar essa nova _issue_ como uma [Good First Issue](https://www.kubernetes.dev/docs/guide/help-wanted/#good-first-issue); se você puder, essas são uma boa fonte.
diff --git a/content/pt-br/docs/reference/glossary/code-contributor.md b/content/pt-br/docs/reference/glossary/code-contributor.md
new file mode 100644
index 0000000000..9b4c993455
--- /dev/null
+++ b/content/pt-br/docs/reference/glossary/code-contributor.md
@@ -0,0 +1,18 @@
+---
+title: Colaborador de Código
+id: code-contributor
+date: 2018-04-12
+full_link: https://github.com/kubernetes/community/tree/master/contributors/devel
+short_description: >
+  Uma pessoa que desenvolve e contribui com código para a base de código aberto do Kubernetes.
+
+aka:
+tags:
+- community
+- user-type
+---
+ Uma pessoa que desenvolve e contribui com código para a base de código aberto do Kubernetes.
+
+
+
+Eles também são {{< glossary_tooltip text="membros da comunidade" term_id="member" >}} ativos que participam de um ou mais {{< glossary_tooltip text="Grupos de Interesse Especial (SIGs)" term_id="sig" >}}.
diff --git a/content/pt-br/examples/controllers/frontend.yaml b/content/pt-br/examples/controllers/frontend.yaml
new file mode 100644
index 0000000000..53be03c176
--- /dev/null
+++ b/content/pt-br/examples/controllers/frontend.yaml
@@ -0,0 +1,21 @@
+apiVersion: apps/v1
+kind: ReplicaSet
+metadata:
+  name: frontend
+  labels:
+    app: guestbook
+    tier: frontend
+spec:
+  # modifique o número de réplicas de acordo com o seu caso
+  replicas: 3
+  selector:
+    matchLabels:
+      tier: frontend
+  template:
+    metadata:
+      labels:
+        tier: frontend
+    spec:
+      containers:
+      - name: php-redis
+        image: gcr.io/google_samples/gb-frontend:v3
diff --git a/content/pt-br/examples/controllers/hpa-rs.yaml b/content/pt-br/examples/controllers/hpa-rs.yaml
new file mode 100644
index 0000000000..a8388530dc
--- /dev/null
+++ b/content/pt-br/examples/controllers/hpa-rs.yaml
@@ -0,0 +1,11 @@
+apiVersion: autoscaling/v1
+kind: HorizontalPodAutoscaler
+metadata:
+  name: frontend-scaler
+spec:
+  scaleTargetRef:
+    kind: ReplicaSet
+    name: frontend
+  minReplicas: 3
+  maxReplicas: 10
+  targetCPUUtilizationPercentage: 50
diff --git a/content/pt-br/examples/pods/pod-rs.yaml b/content/pt-br/examples/pods/pod-rs.yaml
new file mode 100644
index 0000000000..df7b390597
--- /dev/null
+++ b/content/pt-br/examples/pods/pod-rs.yaml
@@ -0,0 +1,23 @@
+apiVersion: v1
+kind: Pod
+metadata:
+  name: pod1
+  labels:
+    tier: frontend
+spec:
+  containers:
+  - name: hello1
+    image: gcr.io/google-samples/hello-app:2.0
+
+---
+
+apiVersion: v1
+kind: Pod
+metadata:
+  name: pod2
+  labels:
+    tier: frontend
+spec:
+  containers:
+  - name: hello2
+    image: gcr.io/google-samples/hello-app:1.0
diff --git a/content/zh-cn/docs/concepts/services-networking/service-traffic-policy.md b/content/zh-cn/docs/concepts/services-networking/service-traffic-policy.md
index f822f32eb0..dfe5207336 100644
--- a/content/zh-cn/docs/concepts/services-networking/service-traffic-policy.md
+++ b/content/zh-cn/docs/concepts/services-networking/service-traffic-policy.md
@@ -1,7 +1,7 @@
---
title: 服务内部流量策略
content_type: concept
-weight: 75
+weight: 120
description: >-
  如果集群中的两个 Pod 想要通信,并且两个 Pod 实际上都在同一节点运行,
  **服务内部流量策略** 可以将网络流量限制在该节点内。
@@ -13,7 +13,7 @@
reviewers: - maplain title: Service Internal Traffic Policy content_type: concept -weight: 75 +weight: 120 description: >- If two Pods in your cluster want to communicate, and both Pods are actually running on the same node, _Service Internal Traffic Policy_ to keep network traffic within that node. @@ -24,7 +24,7 @@ description: >- -{{< feature-state for_k8s_version="v1.23" state="beta" >}} +{{< feature-state for_k8s_version="v1.26" state="stable" >}} ## 使用服务内部流量策略 {#using-service-internal-traffic-policy} - -`ServiceInternalTrafficPolicy` -[特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/) 是 Beta 功能,默认启用。 -启用该功能后,你就可以通过将 {{< glossary_tooltip text="Service" term_id="service" >}} 的 +你可以通过将 {{< glossary_tooltip text="Service" term_id="service" >}} 的 `.spec.internalTrafficPolicy` 项设置为 `Local`, 来为它指定一个内部专用的流量策略。 -此设置就相当于告诉 kube-proxy 对于集群内部流量只能使用本地的服务端口。 +此设置就相当于告诉 kube-proxy 对于集群内部流量只能使用节点本地的服务端口。 ## 工作原理 {#how-it-works} - kube-proxy 基于 `spec.internalTrafficPolicy` 的设置来过滤路由的目标服务端点。 -当它的值设为 `Local` 时,只选择节点本地的服务端点。 -当它的值设为 `Cluster` 或缺省时,则选择所有的服务端点。 -启用[特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/) -`ServiceInternalTrafficPolicy` 后, -`spec.internalTrafficPolicy` 的值默认设为 `Cluster`。 +当它的值设为 `Local` 时,只会选择节点本地的服务端点。 +当它的值设为 `Cluster` 或缺省时,Kubernetes 会选择所有的服务端点。 ## {{% heading "whatsnext" %}} * 请阅读[拓扑感知提示](/zh-cn/docs/concepts/services-networking/topology-aware-hints) * 请阅读 [Service 的外部流量策略](/zh-cn/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip) -* 请阅读[用 Service 连接应用](/zh-cn/docs/concepts/services-networking/connect-applications-service/) +* 遵循[使用 Service 连接到应用](/zh-cn/docs/tutorials/services/connect-applications-service/)教程 \ No newline at end of file diff --git a/content/zh-cn/docs/concepts/services-networking/service.md b/content/zh-cn/docs/concepts/services-networking/service.md index db8907865d..582532be4c 100644 --- a/content/zh-cn/docs/concepts/services-networking/service.md +++ b/content/zh-cn/docs/concepts/services-networking/service.md @@ -3,7 +3,7 @@ title: 服务(Service) feature: title: 服务发现与负载均衡 description: > - 无需修改你的应用程序即可使用陌生的服务发现机制。Kubernetes 为容器提供了自己的 IP 地址和一个 DNS 名称,并且可以在它们之间实现负载均衡。 + 无需修改你的应用程序去使用陌生的服务发现机制。Kubernetes 为容器提供了自己的 IP 地址和一个 DNS 名称,并且可以在它们之间实现负载均衡。 description: >- 将在集群中运行的应用程序暴露在单个外向端点后面,即使工作负载分散到多个后端也是如此。 content_type: concept @@ -33,7 +33,7 @@ With Kubernetes you don't need to modify your application to use an unfamiliar s Kubernetes gives Pods their own IP addresses and a single DNS name for a set of Pods, and can load-balance across them. 
--> -使用 Kubernetes,你无需修改应用程序即可使用不熟悉的服务发现机制。 +使用 Kubernetes,你无需修改应用程序去使用不熟悉的服务发现机制。 Kubernetes 为 Pod 提供自己的 IP 地址,并为一组 Pod 提供相同的 DNS 名, 并且可以在它们之间进行负载均衡。 @@ -1916,4 +1916,4 @@ For more context: 更多上下文: * 阅读[虚拟 IP 和 Service 代理](/zh-cn/docs/reference/networking/virtual-ips/) * 阅读 Service API 的 [API 参考](/zh-cn/docs/reference/kubernetes-api/service-resources/service-v1/) -* 阅读 EndpointSlice API 的 [API 参考](/zh-cn/docs/reference/kubernetes-api/service-resources/endpoint-slice-v1/) \ No newline at end of file +* 阅读 EndpointSlice API 的 [API 参考](/zh-cn/docs/reference/kubernetes-api/service-resources/endpoint-slice-v1/) diff --git a/content/zh-cn/docs/concepts/storage/volumes.md b/content/zh-cn/docs/concepts/storage/volumes.md index 7486107d39..215e8e908b 100644 --- a/content/zh-cn/docs/concepts/storage/volumes.md +++ b/content/zh-cn/docs/concepts/storage/volumes.md @@ -304,7 +304,7 @@ For more details, see the [`azureFile` volume plugin](https://github.com/kuberne --> #### azureFile CSI 迁移 {#azurefile-csi-migration} -{{< feature-state for_k8s_version="v1.21" state="beta" >}} +{{< feature-state for_k8s_version="v1.26" state="stable" >}} -### glusterfs(已弃用) {#glusterfs} +### glusterfs(已移除) {#glusterfs} -{{< feature-state for_k8s_version="v1.25" state="deprecated" >}} + - -`glusterfs` 卷能将 [Glusterfs](https://www.gluster.org) (一个开源的网络文件系统) -挂载到你的 Pod 中。不像 `emptyDir` 那样会在删除 Pod 的同时也会被删除,`glusterfs` -卷的内容在删除 Pod 时会被保存,卷只是被卸载。 -这意味着 `glusterfs` 卷可以被预先填充数据,并且这些数据可以在 Pod 之间共享。 -GlusterFS 可以被多个写者同时挂载。 - -{{< note >}} - -在使用前你必须先安装运行自己的 GlusterFS。 -{{< /note >}} - - -更多详情请参考 [GlusterFS 示例](https://github.com/kubernetes/examples/tree/master/volumes/glusterfs)。 +Kubernetes {{< skew currentVersion >}} 不包含 `glusterfs` 卷类型。 +GlusterFS 树内存储驱动程序在 Kubernetes v1.25 版本中被弃用,然后在 v1.26 版本中被完全移除。 + ### hostPath {#hostpath} {{< warning >}} @@ -1575,37 +1559,34 @@ For more information, see the [vSphere volume](https://github.com/kubernetes/exa --> #### vSphere CSI 迁移 {#vsphere-csi-migration} -{{< feature-state for_k8s_version="v1.19" state="beta" >}} +{{< feature-state for_k8s_version="v1.26" state="stable" >}} + -从 Kubernetes v1.25 开始,针对 `vsphereVolume` 的 `CSIMigrationvSphere` 特性默认被启用。 -来自树内 `vspherevolume` 的所有插件操作将被重新指向到 -`csi.vsphere.vmware.com` {{< glossary_tooltip text="CSI" term_id="csi" >}} 驱动, -除非 `CSIMigrationvSphere` 特性门控被禁用。 +在 Kubernetes {{< skew currentVersion >}} 中,对树内 `vsphereVolume` +类的所有操作都会被重定向至 `csi.vsphere.vmware.com` {{< glossary_tooltip text="CSI" term_id="csi" >}} 驱动程序。 [vSphere CSI 驱动](https://github.com/kubernetes-sigs/vsphere-csi-driver)必须安装到集群上。 你可以在 VMware 的文档页面[迁移树内 vSphere 卷插件到 vSphere 容器存储插件](https://docs.vmware.com/en/VMware-vSphere-Container-Storage-Plug-in/2.0/vmware-vsphere-csp-getting-started/GUID-968D421F-D464-4E22-8127-6CB9FF54423F.html) 中找到有关如何迁移树内 `vsphereVolume` 的其他建议。 +如果未安装 vSphere CSI 驱动程序,则无法对由树内 `vsphereVolume` 类型创建的 PV 执行卷操作。 -从 Kubernetes v1.25 开始,(已弃用)树内 vSphere 存储驱动不支持低于 7.0u2 的 vSphere 版本。 -你必须运行 vSphere 7.0u2 或更高版本才能继续使用这个已弃用的驱动,或迁移到替代的 CSI 驱动。 +你必须运行 vSphere 7.0u2 或更高版本才能迁移到 vSphere CSI 驱动程序。 如果你正在运行 Kubernetes v{{< skew currentVersion >}},请查阅该 Kubernetes 版本的文档。 diff --git a/content/zh-cn/docs/reference/kubectl/_index.md b/content/zh-cn/docs/reference/kubectl/_index.md index 45175bafe2..c5be1ccfbb 100644 --- a/content/zh-cn/docs/reference/kubectl/_index.md +++ b/content/zh-cn/docs/reference/kubectl/_index.md @@ -1,7 +1,7 @@ --- title: 命令行工具 (kubectl) content_type: reference -weight: 60 +weight: 110 no_list: true card: name: reference @@ -10,16 +10,14 @@ card: - + {{< 
glossary_definition prepend="Kubernetes 提供" term_id="kubectl" length="short" >}} -{{< caution >}} 从命令行指定的参数会覆盖默认值和任何相应的环境变量。 {{< /caution >}} @@ -267,6 +265,7 @@ Operation | Syntax | Description `diff` | `kubectl diff -f FILENAME [flags]`| Diff file or stdin against live configuration. `drain` | `kubectl drain NODE [options]` | Drain node in preparation for maintenance. `edit` | kubectl edit (-f FILENAME | TYPE NAME | TYPE/NAME) [flags] | Edit and update the definition of one or more resources on the server by using the default editor. +`events` | `kubectl events` | List events `exec` | `kubectl exec POD [-c CONTAINER] [-i] [-t] [flags] [-- COMMAND [args...]]` | Execute a command against a container in a pod. `explain` | `kubectl explain [--recursive=false] [flags]` | Get documentation of various resources. For instance pods, nodes, services, etc. `expose` | kubectl expose (-f FILENAME | TYPE NAME | TYPE/NAME) [--port=port] [--protocol=TCP|UDP] [--target-port=number-or-name] [--name=name] [--external-ip=external-ip-of-service] [--type=type] [flags] | Expose a replication controller, service, or pod as a new Kubernetes service. @@ -313,6 +312,7 @@ Operation | Syntax | Description `diff` | `kubectl diff -f FILENAME [flags]`| 在当前起作用的配置和文件或标准输之间作对比 (**BETA**) `drain` | `kubectl drain NODE [options]` | 腾空节点以准备维护。 `edit` | kubectl edit (-f FILENAME | TYPE NAME | TYPE/NAME) [flags] | 使用默认编辑器编辑和更新服务器上一个或多个资源的定义。 +`events` | `kubectl events` | 列举事件。 `exec` | `kubectl exec POD [-c CONTAINER] [-i] [-t] [flags] [-- COMMAND [args...]]` | 对 Pod 中的容器执行命令。 `explain` | `kubectl explain [--recursive=false] [flags]` | 获取多种资源的文档。例如 Pod、Node、Service 等。 `expose` | kubectl expose (-f FILENAME | TYPE NAME | TYPE/NAME) [--port=port] [--protocol=TCP|UDP] [--target-port=number-or-name] [--name=name] [--external-ip=external-ip-of-service] [--type=type] [flags] | 将副本控制器、服务或 Pod 作为新的 Kubernetes 服务暴露。 @@ -549,7 +549,7 @@ The result of running either command is similar to: --> 运行这两个命令之一的结果类似于: -```shell +``` NAME RSRC submit-queue 610995 ``` @@ -593,7 +593,7 @@ The output is similar to: --> 输出类似于: -```shell +``` NAME AGE pod-name 1m ``` @@ -788,7 +788,6 @@ kubectl delete pods,services -l = # Delete all pods, including uninitialized ones. kubectl delete pods --all ``` - --> ```shell @@ -874,7 +873,6 @@ cat service.yaml | kubectl diff -f - - ## 示例:创建和使用插件 - 这个插件写好了,把它变成可执行的: ```bash - sudo chmod a+x ./kubectl-hello # 并将其移动到路径中的某个位置 diff --git a/content/zh-cn/docs/reference/kubectl/cheatsheet.md b/content/zh-cn/docs/reference/kubectl/cheatsheet.md index 1c58882098..004d281903 100644 --- a/content/zh-cn/docs/reference/kubectl/cheatsheet.md +++ b/content/zh-cn/docs/reference/kubectl/cheatsheet.md @@ -6,7 +6,7 @@ card: name: reference weight: 30 --- - @@ -35,7 +34,6 @@ This page contains a list of commonly used `kubectl` commands and flags. ### BASH --> - ## Kubectl 自动补全 {#kubectl-autocomplete} ### BASH @@ -79,7 +77,7 @@ echo '[[ $commands[kubectl] ]] && source <(kubectl completion zsh)' >> ~/.zshrc ### 关于 `--all-namespaces` 的一点说明 {#a-note-on-all-namespaces} 我们经常用到 `--all-namespaces` 参数,你应该要知道它的简写: @@ -385,6 +383,9 @@ kubectl get pods --all-namespaces -o jsonpath='{range .items[*].status.initConta # List Events sorted by timestamp kubectl get events --sort-by=.metadata.creationTimestamp +# List all warning events +kubectl events --types=Warning + # Compares the current state of the cluster against the state that the cluster would be in if the manifest was applied. 
kubectl diff -f ./my-manifest.yaml @@ -470,6 +471,9 @@ kubectl get pods --all-namespaces -o jsonpath='{range .items[*].status.initConta # 列出事件(Events),按时间戳排序 kubectl get events --sort-by=.metadata.creationTimestamp +# 列出所有警告事件 +kubectl events --types=Warning + # 比较当前的集群状态和假定某清单被应用之后的集群状态 kubectl diff -f ./my-manifest.yaml @@ -566,7 +570,7 @@ kubectl patch deployment valid-deployment --type json -p='[{"op": "remove", " # Add a new element to a positional array kubectl patch sa default --type='json' -p='[{"op": "add", "path": "/secrets/1", "value": {"name": "whatever" } }]' -# Update a deployment's replica count by patching it's scale subresource +# Update a deployment's replica count by patching its scale subresource kubectl patch deployment nginx-deployment --subresource='scale' --type='merge' -p '{"spec":{"replicas":2}}' ``` --> @@ -638,6 +642,7 @@ kubectl scale --replicas=5 rc/foo rc/bar rc/baz # 伸缩多个 ```bash kubectl delete -f ./pod.json # 删除在 pod.json 中指定的类型和名称的 Pod +kubectl delete pod unwanted --now # 删除 Pod 且无宽限期限(无优雅时段) kubectl delete pod,service baz foo # 删除名称为 "baz" 和 "foo" 的 Pod 和服务 kubectl delete pods,services -l name=myLabel # 删除包含 name=myLabel 标签的 pods 和服务 kubectl -n my-ns delete pod,svc --all # 删除在 my-ns 名字空间中全部的 Pods 和服务 @@ -860,7 +866,7 @@ To output details to your terminal window in a specific format, add the `-o` (or 要以特定格式将详细信息输出到终端窗口,将 `-o`(或者 `--output`)参数添加到支持的 `kubectl` 命令中。 - 使用 `-o=custom-columns` 的示例: diff --git a/content/zh-cn/docs/reference/node/device-plugin-api-versions.md b/content/zh-cn/docs/reference/node/device-plugin-api-versions.md new file mode 100644 index 0000000000..56c03d226a --- /dev/null +++ b/content/zh-cn/docs/reference/node/device-plugin-api-versions.md @@ -0,0 +1,61 @@ +--- +content_type: "reference" +title: Kubelet 设备管理器 API 版本 +weight: 10 +--- + + + +本页详述了 Kubernetes +[设备插件 API](https://github.com/kubernetes/kubelet/tree/master/pkg/apis/deviceplugin) +与不同版本的 Kubernetes 本身之间的版本兼容性。 + + +## 兼容性矩阵 {#compatibility-matrix} + +| | `v1alpha1` | `v1beta1` | +|-----------------|-------------|-------------| +| Kubernetes 1.21 | - | ✓ | +| Kubernetes 1.22 | - | ✓ | +| Kubernetes 1.23 | - | ✓ | +| Kubernetes 1.24 | - | ✓ | +| Kubernetes 1.25 | - | ✓ | +| Kubernetes 1.26 | - | ✓ | + + +简要说明: + +* `✓` 设备插件 API 和 Kubernetes 版本中的特性或 API 对象完全相同。 + +* `+` 设备插件 API 具有 Kubernetes 集群中可能不存在的特性或 API 对象, + 不是因为设备插件 API 添加了额外的新 API 调用,就是因为服务器移除了旧的 API 调用。 + 但它们的共同点是(大多数其他 API)都能工作。 + 请注意,Alpha API 可能会在次要版本的迭代过程中消失或出现重大变更。 + +* `-` Kubernetes 集群具有设备插件 API 无法使用的特性,不是因为服务器添加了额外的 API 调用, + 就是因为设备插件 API 移除了旧的 API 调用。但它们的共同点是(大多数 API)都能工作。