Merge branch 'main' into patch-1

ramobis 2022-12-13 21:37:12 +01:00
commit dd6c9ef831
113 changed files with 2703 additions and 323 deletions


@@ -188,30 +188,30 @@ docsbranch = "main"
 url = "https://kubernetes.io"
 [[params.versions]]
-fullversion = "v1.25.0"
+fullversion = "v1.25.5"
 version = "v1.25"
-githubbranch = "v1.25.0"
-docsbranch = "main"
-url = "https://kubernetes.io"
+githubbranch = "v1.25.5"
+docsbranch = "release-1.25"
+url = "https://v1-25.docs.kubernetes.io"
 [[params.versions]]
-fullversion = "v1.24.2"
+fullversion = "v1.24.9"
 version = "v1.24"
-githubbranch = "v1.24.2"
+githubbranch = "v1.24.9"
 docsbranch = "release-1.24"
 url = "https://v1-24.docs.kubernetes.io"
 [[params.versions]]
-fullversion = "v1.23.8"
+fullversion = "v1.23.15"
 version = "v1.23"
-githubbranch = "v1.23.8"
+githubbranch = "v1.23.15"
 docsbranch = "release-1.23"
 url = "https://v1-23.docs.kubernetes.io"
 [[params.versions]]
-fullversion = "v1.22.11"
+fullversion = "v1.22.17"
 version = "v1.22"
-githubbranch = "v1.22.11"
+githubbranch = "v1.22.17"
 docsbranch = "release-1.22"
 url = "https://v1-22.docs.kubernetes.io"


@@ -0,0 +1,122 @@
---
layout: blog
title: "Kubernetes 1.26: Windows HostProcess Containers Are Generally Available"
date: 2022-12-13
slug: windows-host-process-containers-ga
---
**Authors**: Brandon Smith (Microsoft) and Mark Rossetti (Microsoft)
The long-awaited day has arrived: HostProcess containers, the Windows equivalent to Linux privileged
containers, has finally made it to **GA in Kubernetes 1.26**!
## What are HostProcess containers and why are they useful?
Cluster operators are often faced with the need to configure their nodes upon provisioning, such as
installing Windows services, configuring registry keys, managing TLS certificates,
making network configuration changes, or even deploying monitoring tools such as Prometheus's node-exporter.
Previously, performing these actions on Windows nodes was usually done by running PowerShell scripts
over SSH or WinRM sessions and/or working with your cloud provider's virtual machine management tooling.
HostProcess containers now enable you to do all of this and more with minimal effort using Kubernetes native APIs.
With HostProcess containers you can now package any payload
into the container image, map volumes into containers at runtime, and manage them like any other Kubernetes workload.
You get all the benefits of containerized packaging and deployment methods combined with a reduction in
both administrative and development cost.
Gone are the days when cluster operators needed to manually log onto
Windows nodes to perform administrative duties.
[HostProcess containers](/docs/tasks/configure-pod-container/create-hostprocess-pod/) differ
quite significantly from regular Windows Server containers.
They are run directly as processes on the host with the access policies of
a user you specify. HostProcess containers run as either a built-in Windows system account or
an ephemeral user within a user group defined by you. HostProcess containers also share
the host's network namespace and can access and configure storage mounts visible to the host.
On the other hand, Windows Server containers are highly isolated and exist in a separate
execution namespace. Direct access to the host from a Windows Server container is explicitly disallowed
by default.
## How does it work?
Windows HostProcess containers are implemented with Windows [_Job Objects_](https://learn.microsoft.com/en-us/windows/win32/procthread/job-objects),
a break from the previous container model, which uses server silos.
Job Objects are components of the Windows OS which offer the ability to
manage a set of processes as a group (also known as a _job_) and assign resource constraints to the
group as a whole. Job objects are specific to the Windows OS and are not associated with
the Kubernetes [Job API](/docs/concepts/workloads/controllers/job/). They have no process
or file system isolation,
enabling the privileged payload to view and edit the host file system with the
desired permissions, among other host resources. The init process, and any processes
it launches (including processes explicitly launched by the user), are all assigned to the
job object of that container. When the init process exits or is signaled to exit,
all the processes in the job will be signaled to exit, the job handle will be
closed and the storage will be unmounted.
HostProcess and Linux privileged containers enable similar scenarios but differ
greatly in their implementation (hence the naming difference). HostProcess containers
have their own [PodSecurityContext](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#windowssecuritycontextoptions-v1-core) fields.
Those used to configure Linux privileged containers **do not** apply. Enabling privileged access to a Windows host is a
fundamentally different process than with Linux so the configuration and
capabilities of each differ significantly. Below is a diagram detailing the
overall architecture of Windows HostProcess containers:
{{< figure src="hpc_architecture.svg" alt="HostProcess Architecture" >}}
Two major features were added prior to moving to stable: the ability to run as local user accounts, and
a simplified method of accessing volume mounts. To learn more, read
[Create a Windows HostProcess Pod](/docs/tasks/configure-pod-container/create-hostprocess-pod/).
## HostProcess containers in action
Kubernetes SIG Windows has been busy putting HostProcess containers to use - even before GA!
They've been very excited to use HostProcess containers for a number of important activities
that were a pain to perform in the past.
Here are just a few of the many use cases, with example deployments:
- [CNI solutions and kube-proxy](https://github.com/kubernetes-sigs/sig-windows-tools/tree/master/hostprocess/calico#calico-example)
- [windows-exporter](https://github.com/prometheus-community/windows_exporter/blob/master/kubernetes/windows-exporter-daemonset.yaml)
- [csi-proxy](https://github.com/kubernetes-sigs/sig-windows-tools/tree/master/hostprocess/csi-proxy)
- [Windows-debug container](https://github.com/jsturtevant/windows-debug)
- [ETW event streaming](https://github.com/kubernetes-sigs/sig-windows-tools/tree/master/hostprocess/eventflow-logger)
## How do I use it?
A HostProcess container can be built using any base image of your choosing; however, for convenience we have
created [a HostProcess container base image](https://github.com/microsoft/windows-host-process-containers-base-image).
This image is only a few KB in size and does not inherit the same compatibility requirements as regular Windows
Server containers, which allows it to run on any Windows Server version.
To use that Microsoft image, put this in your `Dockerfile`:
```dockerfile
FROM mcr.microsoft.com/oss/kubernetes/windows-host-process-containers-base-image:v1.0.0
```
You can run HostProcess containers from within a
[HostProcess Pod](/docs/concepts/workloads/pods/#privileged-mode-for-containers).
To get started with running Windows containers,
see the general guidance for [deploying Windows nodes](/docs/setup/production-environment/windows/).
If you have a compatible node (for example: Windows as the operating system
with containerd v1.7 or later as the container runtime), you can deploy a Pod with one
or more HostProcess containers.
See the [Create a Windows HostProcess Pod - Prerequisites](/docs/tasks/configure-pod-container/create-hostprocess-pod/#before-you-begin)
for more information.
Please note that within a Pod, you can't mix HostProcess containers with normal Windows containers.
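To make this concrete, here's a minimal sketch of a Pod manifest with a single HostProcess container; the Pod name, image, and command are illustrative placeholders:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hostprocess-example            # illustrative name
spec:
  securityContext:
    windowsOptions:
      hostProcess: true                # run directly on the host
      runAsUserName: "NT AUTHORITY\\SYSTEM"  # one of the built-in system accounts
  hostNetwork: true                    # HostProcess Pods share the host's network namespace
  containers:
  - name: hostprocess-example
    image: example.registry.io/hostprocess-demo:latest  # hypothetical image
    command: ["powershell.exe", "-Command", "Get-Service"]
  nodeSelector:
    kubernetes.io/os: windows
```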
## How can I learn more?
- Work through [Create a Windows HostProcess Pod](/docs/tasks/configure-pod-container/create-hostprocess-pod/)
- Read about Kubernetes [Pod Security Standards](/docs/concepts/security/pod-security-standards/) and [Pod Security Admission](/docs/concepts/security/pod-security-admission/)
- Read the enhancement proposal [Windows Privileged Containers and Host Networking Mode](https://github.com/kubernetes/enhancements/tree/master/keps/sig-windows/1981-windows-privileged-container-support) (KEP-1981)
- Watch the [Windows HostProcess for Configuration and Beyond](https://www.youtube.com/watch?v=LcXT9pVkwvo) KubeCon NA 2022 talk
## How do I get involved?
Get involved with [SIG Windows](https://github.com/kubernetes/community/tree/master/sig-windows)
to contribute!


@@ -0,0 +1,93 @@
---
layout: blog
title: 'Kubernetes 1.26: Device Manager graduates to GA'
date: 2022-12-19
slug: devicemanager-ga
---
**Author:** Swati Sehgal (Red Hat)
The Device Plugin framework was introduced in the Kubernetes v1.8 release as a vendor-independent
framework to enable discovery, advertisement, and allocation of external
devices without modifying core Kubernetes. The feature graduated to Beta in v1.10.
With the recent release of Kubernetes v1.26, Device Manager is now generally
available (GA).
Within the kubelet, the Device Manager facilitates communication with device plugins
using gRPC through Unix sockets. Device Manager and Device plugins both act as gRPC
servers and clients by serving and connecting to the exposed gRPC services respectively.
Device plugins serve a gRPC service that kubelet connects to for device discovery,
advertisement (as extended resources) and allocation. Device plugins connect to
the `Registration` gRPC service served by the kubelet to register themselves with the kubelet.
Please refer to the documentation for an [example](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/#example-pod) on how a pod can request a device exposed to the cluster by a device plugin.
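As a sketch, a Pod that consumes one such device might request it as an extended resource like this (the resource name `example.com/widget` and the image are hypothetical; a real device plugin advertises its own resource name):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: device-consumer                # illustrative name
spec:
  containers:
  - name: demo
    image: registry.example/device-demo:latest  # hypothetical image
    resources:
      limits:
        example.com/widget: 1          # extended resource advertised by a device plugin
```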
Here are some example implementations of device plugins:
- [AMD GPU device plugin](https://github.com/RadeonOpenCompute/k8s-device-plugin)
- [Collection of Intel device plugins for Kubernetes](https://github.com/intel/intel-device-plugins-for-kubernetes)
- [NVIDIA device plugin for Kubernetes](https://github.com/NVIDIA/k8s-device-plugin)
- [SRIOV network device plugin for Kubernetes](https://github.com/k8snetworkplumbingwg/sriov-network-device-plugin)
## Noteworthy developments since Device Plugin framework introduction
### Kubelet APIs moved to kubelet staging repo
External facing `deviceplugin` API packages moved from `k8s.io/kubernetes/pkg/kubelet/apis/`
to `k8s.io/kubelet/pkg/apis/` in v1.17. Refer to [Move external facing kubelet apis to staging](https://github.com/kubernetes/kubernetes/pull/83551) for more details on the rationale behind this change.
### Device Plugin API updates
Additional gRPC endpoints introduced:
1. `GetDevicePluginOptions` is used by device plugins to communicate
options to the `DeviceManager` in order to indicate if `PreStartContainer`,
`GetPreferredAllocation` or other future optional calls are supported and
can be called before making devices available to the container.
1. `GetPreferredAllocation` allows a device plugin to forward allocation
preferences to the `DeviceManager` so it can incorporate this information
into its allocation decisions. The `DeviceManager` will call out to a
plugin at pod admission time asking for a preferred device allocation
of a given size from a list of available devices to make a more informed
decision. For example, specifying inter-device constraints to indicate a preference
for the best-connected set of devices when allocating devices to a container.
1. `PreStartContainer` is called before each container start if indicated by
device plugins during the registration phase. It allows device plugins to run
device-specific operations on the devices requested. For example, reconfiguring or
reprogramming FPGAs before the container starts running.
Pull Requests that introduced these changes are here:
1. [Invoke preStart RPC call before container start, if desired by plugin](https://github.com/kubernetes/kubernetes/pull/58282)
1. [Add GetPreferredAllocation() call to the v1beta1 device plugin API](https://github.com/kubernetes/kubernetes/pull/92665)
With the introduction of the above endpoints, the interaction between the Device Manager in the
kubelet and a device plugin can be shown as below:
{{< figure src="deviceplugin-framework-overview.svg" alt="Representation of the Device Plugin framework showing the relationship between the kubelet and a device plugin" class="diagram-large" caption="Device Plugin framework Overview" >}}
### Change in semantics of device plugin registration process
Device plugin code was refactored to a separate 'plugin' package under the `devicemanager`
package to lay the groundwork for introducing a `v1beta2` device plugin API. This would
allow adding support in `devicemanager` to service multiple device plugin APIs at the
same time.
With this refactoring work, it is now mandatory for a device plugin to start serving its gRPC
service before registering itself with kubelet. Previously, these two operations were asynchronous
and a device plugin could register itself before starting its gRPC server, which is no longer the
case. For more details, refer to [PR #109016](https://github.com/kubernetes/kubernetes/pull/109016) and [Issue #112395](https://github.com/kubernetes/kubernetes/issues/112395).
### Dynamic resource allocation
In Kubernetes 1.26, inspired by how [Persistent Volumes](/docs/concepts/storage/persistent-volumes)
are handled in Kubernetes, [Dynamic Resource Allocation](/docs/concepts/scheduling-eviction/dynamic-resource-allocation/)
has been introduced to cater to devices that have more sophisticated resource requirements, like the ability to:
1. Decouple device initialization and allocation from the pod lifecycle.
1. Facilitate dynamic sharing of devices between containers and pods.
1. Support custom resource-specific parameters.
1. Enable resource-specific setup and cleanup actions.
1. Enable support for network-attached resources, not just node-local resources.
## Is the Device Plugin API stable now?
No, the Device Plugin API is still not stable; the latest Device Plugin API version
available is `v1beta1`. There are plans in the community to introduce `v1beta2` API
to service multiple plugin APIs at once. A per-API call with request/response types
would allow adding support for newer API versions without explicitly bumping the API.
In addition to that, there are existing proposals in the community to introduce additional
endpoints, such as [KEP-3162: Add Deallocate and PostStopContainer to Device Manager API](https://github.com/kubernetes/kubernetes/pull/109016).


@@ -0,0 +1,156 @@
---
layout: blog
title: "Kubernetes 1.26: Introducing Validating Admission Policies"
date: 2022-12-20
slug: validating-admission-policies-alpha
---
**Authors:** Joe Betz (Google), Cici Huang (Google)
In Kubernetes 1.26, the first alpha release of validating admission policies is
available!
Validating admission policies use the [Common Expression
Language](https://github.com/google/cel-spec) (CEL) to offer a declarative,
in-process alternative to [validating admission
webhooks](/docs/reference/access-authn-authz/extensible-admission-controllers/#what-are-admission-webhooks).
CEL was first introduced to Kubernetes for the [Validation rules for
CustomResourceDefinitions](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#validation-rules).
This enhancement expands the use of CEL in Kubernetes to support a far wider
range of admission use cases.
Admission webhooks can be burdensome to develop and operate. Webhook developers
must implement and maintain a webhook binary to handle admission requests. Also,
admission webhooks are complex to operate. Each webhook must be deployed,
monitored, and have a well-defined upgrade and rollback plan. To make matters
worse, if a webhook times out or becomes unavailable, the Kubernetes control
plane can become unavailable. This enhancement avoids much of this complexity of
admission webhooks by embedding CEL expressions into Kubernetes resources
instead of calling out to a remote webhook binary.
For example, suppose you want to set a limit on how many replicas a Deployment can have.
Start by defining a validation policy:
```yaml
apiVersion: admissionregistration.k8s.io/v1alpha1
kind: ValidatingAdmissionPolicy
metadata:
  name: "demo-policy.example.com"
spec:
  matchConstraints:
    resourceRules:
    - apiGroups:   ["apps"]
      apiVersions: ["v1"]
      operations:  ["CREATE", "UPDATE"]
      resources:   ["deployments"]
  validations:
  - expression: "object.spec.replicas <= 5"
```
The `expression` field contains the CEL expression that is used to validate
admission requests. `matchConstraints` declares what types of requests this
`ValidatingAdmissionPolicy` may validate.
Next, bind the policy to the appropriate resources:
```yaml
apiVersion: admissionregistration.k8s.io/v1alpha1
kind: ValidatingAdmissionPolicyBinding
metadata:
  name: "demo-binding-test.example.com"
spec:
  policyName: "demo-policy.example.com"
  matchResources:
    namespaceSelector:
      matchExpressions:
      - key: environment
        operator: In
        values: ["test"]
```
This `ValidatingAdmissionPolicyBinding` resource binds the above policy only to
namespaces where the `environment` label is set to `test`. Once this binding
is created, the kube-apiserver will begin enforcing this admission policy.
To emphasize how much simpler this approach is than admission webhooks, if this example
were instead implemented with a webhook, an entire binary would need to be
developed and maintained just to perform a `<=` check. In our review of a wide
range of admission webhooks used in production, the vast majority performed
relatively simple checks, all of which can easily be expressed using CEL.
Validating admission policies are highly configurable, enabling policy authors
to define policies that can be parameterized and scoped to resources as needed
by cluster administrators.
For example, the above admission policy can be modified to make it configurable:
```yaml
apiVersion: admissionregistration.k8s.io/v1alpha1
kind: ValidatingAdmissionPolicy
metadata:
  name: "demo-policy.example.com"
spec:
  paramKind:
    apiVersion: rules.example.com/v1 # You also need a CustomResourceDefinition for this API
    kind: ReplicaLimit
  matchConstraints:
    resourceRules:
    - apiGroups:   ["apps"]
      apiVersions: ["v1"]
      operations:  ["CREATE", "UPDATE"]
      resources:   ["deployments"]
  validations:
  - expression: "object.spec.replicas <= params.maxReplicas"
```
Here, `paramKind` defines the resources used to configure the policy and the
`expression` uses the `params` variable to access the parameter resource.
This allows multiple bindings to be defined, each configured differently. For
example:
```yaml
apiVersion: admissionregistration.k8s.io/v1alpha1
kind: ValidatingAdmissionPolicyBinding
metadata:
  name: "demo-binding-production.example.com"
spec:
  policyName: "demo-policy.example.com"
  paramRef:
    name: "demo-params-production.example.com"
  matchResources:
    namespaceSelector:
      matchExpressions:
      - key: environment
        operator: In
        values: ["production"]
```
```yaml
apiVersion: rules.example.com/v1 # defined via a CustomResourceDefinition
kind: ReplicaLimit
metadata:
  name: "demo-params-production.example.com"
maxReplicas: 1000
```
This binding and parameter resource pair limits deployments in namespaces with the
`environment` label set to `production` to a maximum of 1000 replicas.
You can then use a separate binding and parameter pair to set a different limit
for namespaces in the `test` environment.
I hope this has given you a glimpse of what is possible with validating
admission policies! There are many features that we have not yet touched on.
To learn more, read
[Validating Admission Policy](/docs/reference/access-authn-authz/validating-admission-policy/).
We are working hard to add more features to admission policies and make the
enhancement easier to use. Try it out, send us your feedback and help us build
a simpler alternative to admission webhooks!
## How do I get involved?
If you want to get involved in development of admission policies, discuss enhancement
roadmaps, or report a bug, you can get in touch with developers at
[SIG API Machinery](https://github.com/kubernetes/community/tree/master/sig-api-machinery).


@@ -0,0 +1,129 @@
---
layout: blog
title: 'Kubernetes v1.26: GA Support for Kubelet Credential Providers'
date: 2022-12-22
slug: kubelet-credential-providers
---
**Authors:** Andrew Sy Kim (Google), Dixita Narang (Google)
Kubernetes v1.26 introduced generally available (GA) support for [_kubelet credential
provider plugins_](/docs/tasks/kubelet-credential-provider/kubelet-credential-provider/),
offering an extensible plugin framework to dynamically fetch credentials
for any container image registry.
## Background
Kubernetes supports the ability to dynamically fetch credentials for a container registry service.
Prior to Kubernetes v1.20, this capability was compiled into the kubelet and only available for
Amazon Elastic Container Registry, Azure Container Registry, and Google Cloud Container Registry.
{{< figure src="kubelet-credential-providers-in-tree.png" caption="Figure 1: Kubelet built-in credential provider support for Amazon Elastic Container Registry, Azure Container Registry, and Google Cloud Container Registry." >}}
Kubernetes v1.20 introduced alpha support for kubelet credential providers plugins,
which provides a mechanism for the kubelet to dynamically authenticate and pull images
for arbitrary container registries - whether these are public registries, managed services,
or even a self-hosted registry.
In Kubernetes v1.26, this feature is now GA.
{{< figure src="kubelet-credential-providers-plugin.png" caption="Figure 2: Kubelet credential provider overview" >}}
## Why is it important?
Prior to Kubernetes v1.20, if you wanted to dynamically fetch credentials for image registries
other than ACR (Azure Container Registry), ECR (Elastic Container Registry), or GCR
(Google Container Registry), you needed to modify the kubelet code.
The new plugin mechanism can be used in any cluster, and lets you authenticate to new registries without
any changes to Kubernetes itself. Any cloud provider or vendor can publish a plugin that lets you authenticate with their image registry.
## How it works
The kubelet and the exec plugin binary communicate through stdio (stdin, stdout, and stderr) by sending and receiving
JSON-serialized, API-versioned types. If the exec plugin is enabled and the kubelet requires authentication information for an image
that matches against a plugin, the kubelet will execute the plugin binary, passing the `CredentialProviderRequest` API via stdin. Then
the exec plugin communicates with the container registry to dynamically fetch the credentials and returns the credentials in an
encoded response of the `CredentialProviderResponse` API to the kubelet via stdout.
{{< figure src="kubelet-credential-providers-how-it-works.png" caption="Figure 3: Kubelet credential provider plugin flow" >}}
When returning credentials to the kubelet, the plugin can also indicate how long the credentials can be cached, to prevent unnecessary
execution of the plugin by the kubelet for subsequent image pull requests to the same registry. In cases where the cache duration
is not specified by the plugin, a default cache duration can be specified by the kubelet (more details below).
```json
{
  "apiVersion": "kubelet.k8s.io/v1",
  "kind": "CredentialProviderResponse",
  "cacheDuration": "6h",
  "auth": {
    "private-registry.io/my-app": {
      "username": "exampleuser",
      "password": "token12345"
    }
  }
}
```
In addition, the plugin can specify the scope for which cached credentials are valid. This is specified through the `cacheKeyType` field
in `CredentialProviderResponse`. When the value is `Image`, the kubelet will only use cached credentials for future image pulls that exactly
match the image of the first request. When the value is `Registry`, the kubelet will use cached credentials for any subsequent image pulls
destined for the same registry host but using different paths (for example, `gcr.io/foo/bar` and `gcr.io/bar/foo` refer to different images
from the same registry). Lastly, when the value is `Global`, the kubelet will use returned credentials for all images that match against
the plugin, including images that can map to different registry hosts (for example, `gcr.io` vs `k8s.gcr.io`). The `cacheKeyType` field is required by plugin
implementations.
```json
{
  "apiVersion": "kubelet.k8s.io/v1",
  "kind": "CredentialProviderResponse",
  "cacheKeyType": "Registry",
  "auth": {
    "private-registry.io/my-app": {
      "username": "exampleuser",
      "password": "token12345"
    }
  }
}
```
## Using kubelet credential providers
You can configure credential providers by installing the exec plugin(s) into
a local directory accessible by the kubelet on every node. Then you set two command line arguments for the kubelet:
* `--image-credential-provider-config`: the path to the credential provider plugin config file.
* `--image-credential-provider-bin-dir`: the path to the directory where credential provider plugin binaries are located.
The configuration file passed into `--image-credential-provider-config` is read by the kubelet to determine which exec plugins should be invoked for a container image used by a Pod.
Note that the name of each _provider_ must match the name of the binary located in the local directory specified in `--image-credential-provider-bin-dir`, otherwise the kubelet
cannot locate the path of the plugin to invoke.
```yaml
kind: CredentialProviderConfig
apiVersion: kubelet.config.k8s.io/v1
providers:
- name: auth-provider-gcp
  apiVersion: credentialprovider.kubelet.k8s.io/v1
  matchImages:
  - "container.cloud.google.com"
  - "gcr.io"
  - "*.gcr.io"
  - "*.pkg.dev"
  args:
  - get-credentials
  - --v=3
  defaultCacheDuration: 1m
```
Below is an overview of how the Kubernetes project is using kubelet credential providers for end-to-end testing.
{{< figure src="kubelet-credential-providers-enabling.png" caption="Figure 4: Kubelet credential provider configuration used for Kubernetes e2e testing" >}}
For more configuration details, see [Kubelet Credential Providers](https://kubernetes.io/docs/tasks/kubelet-credential-provider/kubelet-credential-provider/).
## Getting Involved
Come join SIG Node if you want to report bugs or have feature requests for the Kubelet Credential Provider. You can reach us through the following ways:
* Slack: [#sig-node](https://kubernetes.slack.com/messages/sig-node)
* [Mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-node)
* [Open Community Issues/PRs](https://github.com/kubernetes/community/labels/sig%2Fnode)
* [Biweekly meetings](https://github.com/kubernetes/community/tree/master/sig-node#meetings)


@@ -0,0 +1,72 @@
---
layout: blog
title: "Kubernetes 1.26: Support for Passing Pod fsGroup to CSI Drivers At Mount Time"
date: 2022-12-23
slug: kubernetes-12-06-fsgroup-on-mount
---
**Authors:** Fabio Bertinatto (Red Hat), Hemant Kumar (Red Hat)
Delegation of `fsGroup` to CSI drivers was first introduced as alpha in Kubernetes 1.22,
and graduated to beta in Kubernetes 1.25.
For Kubernetes 1.26, we are happy to announce that this feature has graduated to
General Availability (GA).
In this release, if you specify an `fsGroup` in the
[security context](/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod)
for a (Linux) Pod, all processes in the pod's containers are part of the additional group
that you specified.
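As a quick illustration, here is a sketch of a Pod that sets `fsGroup`; the image and claim name are hypothetical:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: fsgroup-demo                   # illustrative name
spec:
  securityContext:
    fsGroup: 2000                      # container processes also join group 2000
  containers:
  - name: app
    image: registry.example/app:latest # hypothetical image
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: my-claim              # assumes an existing CSI-backed PVC
```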
In previous Kubernetes releases, the kubelet would *always* apply the
`fsGroup` ownership and permission changes to files in the volume according to the policy
you specified in the Pod's `.spec.securityContext.fsGroupChangePolicy` field.
Starting with Kubernetes 1.26, CSI drivers have the option to apply the `fsGroup` settings during
volume mount time, which frees the kubelet from changing the permissions of files and directories
in those volumes.
## How does it work?
CSI drivers that support this feature should advertise the
[`VOLUME_MOUNT_GROUP`](https://github.com/container-storage-interface/spec/blob/master/spec.md#nodegetcapabilities) node capability.
After recognizing this information, the kubelet passes the `fsGroup` information to
the CSI driver during pod startup. This is done through the
[`NodeStageVolumeRequest`](https://github.com/container-storage-interface/spec/blob/v1.7.0/spec.md#nodestagevolume) and
[`NodePublishVolumeRequest`](https://github.com/container-storage-interface/spec/blob/v1.7.0/spec.md#nodepublishvolume)
CSI calls.
Consequently, the CSI driver is expected to apply the `fsGroup` to the files in the volume using a
_mount option_. As an example, the [Azure File CSI driver](https://github.com/kubernetes-sigs/azurefile-csi-driver) utilizes the `gid` mount option to map
the `fsGroup` information to all the files in the volume.
Note that in the example above, the kubelet refrains from directly
applying permission changes to the files and directories in that volume.
Additionally, two policy definitions no longer have an effect: neither
`.spec.fsGroupPolicy` for the CSIDriver object, nor
`.spec.securityContext.fsGroupChangePolicy` for the Pod.
For more details about the inner workings of this feature, check out the
[enhancement proposal](https://github.com/kubernetes/enhancements/blob/master/keps/sig-storage/2317-fsgroup-on-mount/)
and the [CSI Driver `fsGroup` Support](https://kubernetes-csi.github.io/docs/support-fsgroup.html)
in the CSI developer documentation.
## Why is it important?
Without this feature, applying the `fsGroup` information to files is not possible in certain storage environments.
For instance, Azure File does not support a concept of POSIX-style ownership and permissions
of files. The CSI driver is only able to set the file permissions at the volume level.
## How do I use it?
This feature should be mostly transparent to users. If you maintain a CSI driver that should
support this feature, read
[CSI Driver `fsGroup` Support](https://kubernetes-csi.github.io/docs/support-fsgroup.html)
for more information on how to support this feature in your CSI driver.
Existing CSI drivers that do not support this feature will continue to work as usual:
they will not receive any `fsGroup` information from the kubelet. In addition to that,
the kubelet will continue to perform the ownership and permissions changes to files
for those volumes, according to the policies specified in `.spec.fsGroupPolicy` for the
CSIDriver and `.spec.securityContext.fsGroupChangePolicy` for the relevant Pod.


@@ -0,0 +1,71 @@
---
layout: blog
title: 'Kubernetes v1.26: CPUManager goes GA'
date: 2022-12-27
slug: cpumanager-ga
---
**Author:** Francesco Romani (Red Hat)
The CPU Manager is a part of the kubelet, the Kubernetes node agent, which enables the user to allocate exclusive CPUs to containers.
Since Kubernetes v1.10, when it [graduated to Beta](/blog/2018/07/24/feature-highlight-cpu-manager/), the CPU Manager proved itself reliable and
fulfilled its role of allocating exclusive CPUs to containers, so adoption has steadily grown, making it a staple component of performance-critical
and low-latency setups. Over time, most changes were about bugfixes or internal refactoring, with the following noteworthy user-visible changes:
- [support explicit reservation of CPUs](https://github.com/kubernetes/kubernetes/pull/83592): it was already possible to request to reserve a given
number of CPUs for system resources, including the kubelet itself, which will not be used for exclusive CPU allocation. Now it is possible to also
explicitly select which CPUs to reserve instead of letting the kubelet pick them up automatically.
- [report the exclusively allocated CPUs](https://github.com/kubernetes/kubernetes/pull/97415) to containers, much like is already done for devices,
using the kubelet-local [PodResources API](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/#monitoring-device-plugin-resources).
- [optimize the usage of system resources](https://github.com/kubernetes/kubernetes/pull/101771), eliminating unnecessary sysfs changes.
The CPU Manager reached the point where it "just works", so in Kubernetes v1.26 it has graduated to generally available (GA).
## Customization options for CPU Manager {#cpu-managed-customization}
The CPU Manager supports two operation modes, configured using its _policies_. With the `none` policy, the CPU Manager allocates CPUs to containers
without any specific constraint except the (optional) quota set in the Pod spec.
With the `static` policy, provided that the Pod is in the Guaranteed QoS class and every container in that Pod requests an integer number of vCPU cores,
the CPU Manager allocates CPUs exclusively. Exclusive assignment means that other containers (whether from the same Pod, or from a different Pod) do not
get scheduled onto that CPU.
This simple operational model served the user base pretty well, but as the CPU Manager matured more and more, users started to look at more elaborate use
cases and how to better support them.
Rather than add more policies, the community realized that pretty much all the novel use cases are some variation of the behavior enabled by the `static`
CPU Manager policy. Hence, it was decided to add [options to tune the behavior of the static policy](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/2625-cpumanager-policies-thread-placement#proposed-change).
The options have a varying degree of maturity, like any other Kubernetes feature. In order to be accepted, each new option must provide
backward-compatible behavior when disabled, and must document how it interacts with the other options, should they interact at all.
This enabled the Kubernetes project to graduate the CPU Manager core component and core CPU allocation algorithms to GA,
while also enabling a new age of experimentation in this area.
In Kubernetes v1.26, the CPU Manager supports [three different policy options](/docs/tasks/administer-cluster/cpu-management-policies/#static-policy-options), sketched in the configuration example after this list:
`full-pcpus-only`
: restrict the CPU Manager core allocation algorithm to full physical cores only, reducing noisy neighbor issues from hardware technologies that allow sharing cores.
`distribute-cpus-across-numa`
: drive the CPU Manager to evenly distribute CPUs across NUMA nodes, for cases where more than one NUMA node is required to satisfy the allocation.
`align-by-socket`
: change how the CPU Manager allocates CPUs to a container: consider CPUs to be aligned at the socket boundary, instead of NUMA node boundary.
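As a sketch, enabling the static policy with one of these options in the kubelet configuration file might look like the following; the reserved CPU set is an arbitrary example:
```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cpuManagerPolicy: static
cpuManagerPolicyOptions:
  full-pcpus-only: "true"              # only hand out full physical cores
reservedSystemCPUs: "0,1"              # example: keep CPUs 0 and 1 for system daemons
```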
## Further development
After graduating the main CPU Manager feature, each existing policy option will follow its own graduation process, independent from the CPU Manager and from the other options.
There is room for new options to be added, but there's also a growing demand for even more flexibility than what the CPU Manager, and its policy options, currently grant.
Conversations are in progress in the community about splitting the CPU Manager and the other resource managers currently part of the kubelet executable
into pluggable, independent kubelet plugins. If you are interested in this effort, please join the conversation on SIG Node communication channels (Slack, mailing list, weekly meeting).
## Further reading
Please check out the [Control CPU Management Policies on the Node](/docs/tasks/administer-cluster/cpu-management-policies/)
task page to learn more about the CPU Manager, and how it fits in relation to the other node-level resource managers.
## Getting involved
This feature is driven by the [SIG Node](https://github.com/kubernetes/community/blob/master/sig-node/README.md) community.
Please join us to connect with the community and share your ideas and feedback around the above feature and
beyond. We look forward to hearing from you!


@@ -0,0 +1,155 @@
---
layout: blog
title: "Kubernetes 1.26: Job Tracking, to Support Massively Parallel Batch Workloads, Is Generally Available"
date: 2022-12-29
slug: "scalable-job-tracking-ga"
---
**Author:** Aldo Culquicondor (Google)
The Kubernetes 1.26 release includes a stable implementation of the [Job](/docs/concepts/workloads/controllers/job/)
controller that can reliably track a large number of Jobs with high levels of
parallelism. [SIG Apps](https://github.com/kubernetes/community/tree/master/sig-apps)
and [WG Batch](https://github.com/kubernetes/community/tree/master/wg-batch)
have worked on this foundational improvement since Kubernetes 1.22. After
multiple iterations and scale verifications, this is now the default
implementation of the Job controller.
Paired with the Indexed [completion mode](/docs/concepts/workloads/controllers/job/#completion-mode),
the Job controller can handle massively parallel batch Jobs, supporting up to
100k concurrent Pods.
The new implementation also made possible the development of [Pod failure policy](/docs/concepts/workloads/controllers/job/#pod-failure-policy),
which is in beta in the 1.26 release.
## How do I use this feature?
To use Job tracking with finalizers, upgrade to Kubernetes 1.25 or newer and
create new Jobs. You can also use this feature in v1.23 and v1.24, if you have the
ability to enable the `JobTrackingWithFinalizers` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/).
If your cluster runs Kubernetes 1.26, Job tracking with finalizers is a stable
feature. For v1.25, it's behind that feature gate, and your cluster administrators may have
explicitly disabled it - for example, if you have a policy of not using
beta features.
Jobs created before the upgrade will still be tracked using the legacy behavior.
This is to avoid retroactively adding finalizers to running Pods, which might
introduce race conditions.
For maximum performance on large Jobs, the Kubernetes project recommends
using the [Indexed completion mode](/docs/concepts/workloads/controllers/job/#completion-mode).
In this mode, the control plane is able to track Job progress with fewer API
calls.
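As a sketch, a large Indexed Job might look like this; the sizes, name, and image are illustrative:
```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: parallel-work                  # illustrative name
spec:
  completions: 10000
  parallelism: 500
  completionMode: Indexed              # each Pod receives a distinct completion index
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: registry.example/batch-worker:latest  # hypothetical image
```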
If you are a developer of operator(s) for batch, [HPC](https://en.wikipedia.org/wiki/High-performance_computing),
[AI](https://en.wikipedia.org/wiki/Artificial_intelligence), [ML](https://en.wikipedia.org/wiki/Machine_learning)
or related workloads, we encourage you to use the Job API to delegate accurate
progress tracking to Kubernetes. If there is something missing in the Job API
that forces you to manage plain Pods, the [Working Group Batch](https://github.com/kubernetes/community/tree/master/wg-batch)
welcomes your feedback and contributions.
### Deprecation notices
During the development of the feature, the control plane added the annotation
[`batch.kubernetes.io/job-tracking`](/docs/reference/labels-annotations-taints/#batch-kubernetes-io-job-tracking)
to the Jobs that were created when the feature was enabled.
This allowed a safe transition for older Jobs, but it was never meant to stay.
In the 1.26 release, we deprecated the annotation `batch.kubernetes.io/job-tracking`
and the control plane will stop adding it in Kubernetes 1.27.
Along with that change, we will remove the legacy Job tracking implementation.
As a result, the Job controller will track all Jobs using finalizers and it will
ignore Pods that don't have the aforementioned finalizer.
Before you upgrade your cluster to 1.27, we recommend that you verify that there
are no running Jobs that don't have the annotation, or you wait for those jobs
to complete.
Otherwise, you might observe the control plane recreating some Pods.
We expect that this shouldn't affect any users, as the feature is enabled by
default since Kubernetes 1.25, giving enough buffer for old jobs to complete.
## What problem does the new implementation solve?
Generally, Kubernetes workload controllers, such as ReplicaSet or StatefulSet,
rely on the existence of Pods or other objects in the API to determine the
status of the workload and whether replacements are needed.
For example, if a Pod that belonged to a ReplicaSet terminates or ceases to
exist, the ReplicaSet controller needs to create a replacement Pod to satisfy
the desired number of replicas (`.spec.replicas`).
Since its inception, the Job controller also relied on the existence of Pods in
the API to track Job status. A Job has [completion](/docs/concepts/workloads/controllers/job/#completion-mode)
and [failure handling](/docs/concepts/workloads/controllers/job/#handling-pod-and-container-failures)
policies, requiring the end state of a finished Pod to determine whether to
create a replacement Pod or mark the Job as completed or failed. As a result,
the Job controller depended on Pods, even terminated ones, to remain in the API
in order to keep track of the status.
This dependency made the tracking of Job status unreliable, because Pods can be
deleted from the API for a number of reasons, including:
- The garbage collector removing orphan Pods when a Node goes down.
- The garbage collector removing terminated Pods when they reach a threshold.
- The Kubernetes scheduler preempting a Pod to accommodate higher priority Pods.
- The taint manager evicting a Pod that doesn't tolerate a `NoExecute` taint.
- External controllers, not included as part of Kubernetes, or humans deleting
Pods.
### The new implementation
When a controller needs to take an action on objects before they are removed, it
should add a [finalizer](/docs/concepts/overview/working-with-objects/finalizers/)
to the objects that it manages.
A finalizer prevents the objects from being deleted from the API until the
finalizers are removed. Once the controller is done with the cleanup and
accounting for the deleted object, it can remove the finalizer from the object and the
control plane removes the object from the API.
This is what the new Job controller is doing: adding a finalizer during Pod
creation, and removing the finalizer after the Pod has terminated and has been
accounted for in the Job status. However, it wasn't that simple.
The main challenge is that there are at least two objects involved: the Pod
and the Job. While the finalizer lives in the Pod object, the accounting lives
in the Job object. There is no mechanism to atomically remove the finalizer in
the Pod and update the counters in the Job status. Additionally, there could be
more than one terminated Pod at a given time.
To solve this problem, we implemented a three-stage approach, each stage translating
to an API call (an abridged sketch of the intermediate status follows the list).
1. For each terminated Pod, add the unique ID (UID) of the Pod into short-lived
lists stored in the `.status` of the owning Job
([.status.uncountedTerminatedPods](/docs/reference/kubernetes-api/workload-resources/job-v1/#JobStatus)).
2. Remove the finalizer from the Pod(s).
3. Atomically do the following operations:
- remove UIDs from the short-lived lists
- increment the overall `succeeded` and `failed` counters in the `status` of
the Job.
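As an abridged sketch, a Job's `.status` caught between steps 1 and 3 might look like this (the UID is made up):
```yaml
status:
  uncountedTerminatedPods:
    succeeded:
    - 2837cf8b-0000-1111-2222-333344445555  # hypothetical Pod UID awaiting accounting
  succeeded: 1024
  failed: 3
```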
Additional complications come from the fact that the Job controller might
receive the results of the API changes in steps 1 and 2 out of order. We solved
this by adding an in-memory cache for removed finalizers.
Still, we faced some issues during the beta stage, leaving some pods stuck
with finalizers in some conditions ([#108645](https://github.com/kubernetes/kubernetes/issues/108645),
[#109485](https://github.com/kubernetes/kubernetes/issues/109485), and
[#111646](https://github.com/kubernetes/kubernetes/pull/111646)). As a result,
we decided to switch that feature gate to be disabled by default for the 1.23
and 1.24 releases.
Once resolved, we re-enabled the feature for the 1.25 release. Since then, we
have received reports from our customers running tens of thousands of Pods at a
time in their clusters through the Job API. Seeing this success, we decided to
graduate the feature to stable in 1.26, as part of our long term commitment to
make the Job API the best way to run large batch Jobs in a Kubernetes cluster.
To learn more about the feature, you can read the [KEP](https://github.com/kubernetes/enhancements/tree/master/keps/sig-apps/2307-job-tracking-without-lingering-pods).
## Acknowledgments
As with any Kubernetes feature, multiple people contributed to getting this
done, from testing and filing bugs to reviewing code.
On behalf of SIG Apps, I would like to especially thank Jordan Liggitt (Google)
for helping me debug and brainstorm solutions for more than one race condition
and Maciej Szulik (Red Hat) for his conscientious reviews.


@@ -0,0 +1,117 @@
---
layout: blog
title: "Kubernetes v1.26: Advancements in Kubernetes Traffic Engineering"
date: 2022-12-30
slug: advancements-in-kubernetes-traffic-engineering
---
**Authors:** Andrew Sy Kim (Google)
Kubernetes v1.26 includes significant advancements in network traffic engineering with the graduation of
two features (Service internal traffic policy support, and EndpointSlice terminating conditions) to GA,
and a third feature (Proxy terminating endpoints) to beta. The combination of these enhancements aims
to address shortcomings in traffic engineering that people face today, and unlock new capabilities for the future.
## Traffic Loss from Load Balancers During Rolling Updates
Prior to Kubernetes v1.26, clusters could experience [loss of traffic](https://github.com/kubernetes/kubernetes/issues/85643)
from Service load balancers during rolling updates when setting the `externalTrafficPolicy` field to `Local`.
There are a lot of moving parts at play here so a quick overview of how Kubernetes manages load balancers might help!
In Kubernetes, you can create a Service with `type: LoadBalancer` to expose an application externally with a load balancer.
The load balancer implementation varies between clusters and platforms, but the Service provides a generic abstraction
representing the load balancer that is consistent across all Kubernetes installations.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app.kubernetes.io/name: my-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
  type: LoadBalancer
```
Under the hood, Kubernetes allocates a NodePort for the Service, which is then used by kube-proxy to provide a
network data path from the NodePort to the Pod. A controller will then add all available Nodes in the cluster
to the load balancer's backend pool, using the designated NodePort for the Service as the backend target port.
{{< figure src="traffic-engineering-service-load-balancer.png" caption="Figure 1: Overview of Service load balancers" >}}
Oftentimes it is beneficial to set `externalTrafficPolicy: Local` for Services, to avoid extra hops between
Nodes that are not running healthy Pods backing that Service. When using `externalTrafficPolicy: Local`,
an additional NodePort is allocated for health checking purposes, such that Nodes that do not contain healthy
Pods are excluded from the backend pool for a load balancer.
{{< figure src="traffic-engineering-lb-healthy.png" caption="Figure 2: Load balancer traffic to a healthy Node, when externalTrafficPolicy is Local" >}}
One such scenario where traffic can be lost is when a Node loses all Pods for a Service,
but the external load balancer has not probed the health check NodePort yet. The likelihood of this situation
is largely dependent on the health checking interval configured on the load balancer. The larger the interval,
the more likely this will happen, since the load balancer will continue to send traffic to a node
even after kube-proxy has removed forwarding rules for that Service. This also occurs when Pods start terminating
during rolling updates. Since Kubernetes does not consider terminating Pods as “Ready”, traffic can be lost
when there are only terminating Pods on any given Node during a rolling update.
{{< figure src="traffic-engineering-lb-without-proxy-terminating-endpoints.png" caption="Figure 3: Load balancer traffic to terminating endpoints, when externalTrafficPolicy is Local" >}}
Starting in Kubernetes v1.26, kube-proxy enables the `ProxyTerminatingEndpoints` feature by default, which
adds automatic failover and routing to terminating endpoints in scenarios where the traffic would otherwise
be dropped. More specifically, when there is a rolling update and a Node only contains terminating Pods,
kube-proxy will route traffic to the terminating Pods based on their readiness. In addition, kube-proxy will
actively fail the health check NodePort if there are only terminating Pods available. By doing so,
kube-proxy alerts the external load balancer that new connections should not be sent to that Node, while
still gracefully handling requests for existing connections.
{{< figure src="traffic-engineering-lb-with-proxy-terminating-endpoints.png" caption="Figure 4: Load Balancer traffic to terminating endpoints with ProxyTerminatingEndpoints enabled, when externalTrafficPolicy is Local" >}}
### EndpointSlice Conditions
In order to support this new capability in kube-proxy, the EndpointSlice API introduced new conditions for endpoints:
`serving` and `terminating`.
{{< figure src="endpointslice-overview.png" caption="Figure 5: Overview of EndpointSlice conditions" >}}
The `serving` condition is semantically identical to `ready`, except that it can be `true` or `false`
while a Pod is terminating, unlike `ready`, which will always be `false` for terminating Pods for compatibility reasons.
The `terminating` condition is `true` for Pods undergoing termination (non-empty `deletionTimestamp`), and `false` otherwise.
The addition of these two conditions enables consumers of this API to understand Pod states that were previously not possible.
For example, we can now track "ready" and "not ready" Pods that are also terminating.
{{< figure src="endpointslice-with-terminating-pod.png" caption="Figure 6: EndpointSlice conditions with a terminating Pod" >}}
Consumers of the EndpointSlice API, such as kube-proxy and Ingress controllers, can now use these conditions to coordinate connection draining
events, by continuing to forward traffic for existing connections but rerouting new connections to other non-terminating endpoints.
## Optimizing Internal Node-Local Traffic
Similar to how Services can set `externalTrafficPolicy: Local` to avoid extra hops for externally sourced traffic, Kubernetes
now supports `internalTrafficPolicy: Local`, to enable the same optimization for traffic originating within the cluster, specifically
for traffic using the Service Cluster IP as the destination address. This feature graduated to Beta in Kubernetes v1.24 and is graduating to GA in v1.26.
Services default the `internalTrafficPolicy` field to `Cluster`, where traffic is randomly distributed to all endpoints.
{{< figure src="service-internal-traffic-policy-cluster.png" caption="Figure 7: Service routing when internalTrafficPolicy is Cluster" >}}
When `internalTrafficPolicy` is set to `Local`, kube-proxy will forward internal traffic for a Service only if there is an available endpoint
that is local to the same Node.
{{< figure src="service-internal-traffic-policy-local.png" caption="Figure 8: Service routing when internalTrafficPolicy is Local" >}}
{{< caution >}}
When using `internalTrafficPolicy: Local`, traffic will be dropped by kube-proxy when no local endpoints are available.
{{< /caution >}}
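As a sketch, the policy is set directly in the Service spec (the Service shown is illustrative):
```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-internal-service            # illustrative name
spec:
  selector:
    app.kubernetes.io/name: my-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
  internalTrafficPolicy: Local         # only route to endpoints on the same node
```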
## Getting Involved
If you're interested in future discussions on Kubernetes traffic engineering, you can get involved in SIG Network through the following ways:
* Slack: [#sig-network](https://kubernetes.slack.com/messages/sig-network)
* [Mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-network)
* [Open Community Issues/PRs](https://github.com/kubernetes/community/labels/sig%2Fnetwork)
* [Biweekly meetings](https://github.com/kubernetes/community/tree/master/sig-network#meetings)


@@ -0,0 +1,159 @@
---
layout: blog
title: "Kubernetes v1.26: Alpha support for cross-namespace storage data sources"
date: 2023-01-02
slug: cross-namespace-data-sources-alpha
---
**Author:** Takafumi Takahashi (Hitachi Vantara)
Kubernetes v1.26, released last month, introduced an alpha feature that
lets you specify a data source for a PersistentVolumeClaim, even when the source
data belongs to a different namespace.
With the new feature enabled, you specify a namespace in the `dataSourceRef` field of
a new PersistentVolumeClaim. Once Kubernetes checks that access is OK, the new
PersistentVolume can populate its data from the storage source specified in that other
namespace.
Before Kubernetes v1.26, provided your cluster had the `AnyVolumeDataSource` feature enabled,
you could already provision new volumes from a data source in the **same**
namespace.
However, that only worked for a data source in the same namespace;
users couldn't provision a PersistentVolume (for a claim in one namespace)
from a data source in some other namespace.
To solve this problem, Kubernetes v1.26 added a new alpha `namespace` field
to the `dataSourceRef` field of the PersistentVolumeClaim API.
## How it works
Once the csi-provisioner finds that a data source is specified with a `dataSourceRef` that
has a non-empty namespace name,
it checks all reference grants within the namespace that's specified by the `.spec.dataSourceRef.namespace`
field of the PersistentVolumeClaim, in order to see if access to the data source is allowed.
If any ReferenceGrant allows access, the csi-provisioner provisions a volume from the data source.
## Trying it out
The following things are required to use cross-namespace volume provisioning:
* Enable the `AnyVolumeDataSource` and `CrossNamespaceVolumeDataSource` [feature gates](/docs/reference/command-line-tools-reference/feature-gates/) for the kube-apiserver and kube-controller-manager
* Install a CRD for the specific `VolumeSnapshot` controller
* Install the CSI Provisioner controller and enable the `CrossNamespaceVolumeDataSource` feature gate
* Install the CSI driver
* Install a CRD for ReferenceGrants
## Putting it all together
To see how this works, you can install the sample and try it out.
This sample creates a PVC in the _dev_ namespace from a VolumeSnapshot in the _prod_ namespace.
That is a simple example. For real-world use, you might want to use a more complex approach.
### Assumptions for this example {#example-assumptions}
* Your Kubernetes cluster was deployed with `AnyVolumeDataSource` and `CrossNamespaceVolumeDataSource` feature gates enabled
* There are two namespaces, dev and prod
* CSI driver is being deployed
* There is an existing VolumeSnapshot named `new-snapshot-demo` in the _prod_ namespace (a sketch of it follows this list)
* The ReferenceGrant CRD (from the Gateway API project) is already deployed
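For reference, the VolumeSnapshot assumed above might look like the following sketch; the snapshot class and source claim names are hypothetical:
```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: new-snapshot-demo
  namespace: prod
spec:
  volumeSnapshotClassName: example-snapclass     # hypothetical snapshot class
  source:
    persistentVolumeClaimName: example-prod-pvc  # hypothetical source PVC in prod
```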
### Grant ReferenceGrants read permission to the CSI Provisioner
Access to ReferenceGrants is only needed when the CSI driver
has the `CrossNamespaceVolumeDataSource` controller capability.
For this example, the external-provisioner needs **get**, **list**, and **watch**
permissions for `referencegrants` (API group `gateway.networking.k8s.io`).
```yaml
- apiGroups: ["gateway.networking.k8s.io"]
  resources: ["referencegrants"]
  verbs: ["get", "list", "watch"]
```
### Enable the CrossNamespaceVolumeDataSource feature gate for the CSI Provisioner
Add `--feature-gates=CrossNamespaceVolumeDataSource=true` to the csi-provisioner command line.
For example, use this manifest snippet to redefine the container:
```yaml
- args:
  - -v=5
  - --csi-address=/csi/csi.sock
  - --feature-gates=Topology=true
  - --feature-gates=CrossNamespaceVolumeDataSource=true
  image: csi-provisioner:latest
  imagePullPolicy: IfNotPresent
  name: csi-provisioner
```
### Create a ReferenceGrant
Here's a manifest for an example ReferenceGrant.
```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: ReferenceGrant
metadata:
name: allow-prod-pvc
namespace: prod
spec:
from:
- group: ""
kind: PersistentVolumeClaim
namespace: dev
to:
- group: snapshot.storage.k8s.io
kind: VolumeSnapshot
name: new-snapshot-demo
```
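Assuming you saved the manifest above as `referencegrant.yaml` (a hypothetical filename), apply it and confirm the grant exists in the prod namespace:
```
kubectl apply -f referencegrant.yaml
kubectl -n prod get referencegrant allow-prod-pvc
```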
### Create a PersistentVolumeClaim using a cross-namespace data source
Create a PersistentVolumeClaim in the `dev` namespace; the CSI driver then populates
the PersistentVolume used in `dev` from the snapshot in `prod`.
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: example-pvc
namespace: dev
spec:
storageClassName: example
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
dataSourceRef:
apiGroup: snapshot.storage.k8s.io
kind: VolumeSnapshot
name: new-snapshot-demo
namespace: prod
volumeMode: Filesystem
```
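If everything is in place, the claim binds shortly after creation. A quick check (the volume name and age below are illustrative):
```
$ kubectl -n dev get pvc example-pvc
NAME          STATUS   VOLUME           CAPACITY   ACCESS MODES   STORAGECLASS   AGE
example-pvc   Bound    pvc-<some-uid>   1Gi        RWO            example        30s
```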
## How can I learn more?
The enhancement proposal,
[Provision volumes from cross-namespace snapshots](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/3294-provision-volumes-from-cross-namespace-snapshots), includes lots of detail about the history and technical implementation of this feature.
Please get involved by joining the [Kubernetes Storage Special Interest Group (SIG)](https://github.com/kubernetes/community/tree/master/sig-storage)
to help us enhance this feature.
There are a lot of good ideas already and we'd be thrilled to have more!
## Acknowledgments
It takes a wonderful group to make wonderful software.
Special thanks to the following people for the insightful reviews,
thorough consideration and valuable contributions to the CrossNamespaceVolumeDataSource feature:
* Michelle Au (msau42)
* Xing Yang (xing-yang)
* Masaki Kimura (mkimuram)
* Tim Hockin (thockin)
* Ben Swartzlander (bswartz)
* Rob Scott (robscott)
* John Griffith (j-griffith)
* Michael Henriksen (mhenriks)
* Mustafa Elbehery (Elbehery)
It's been a joy to work with y'all on this.

View File

@ -0,0 +1,166 @@
---
layout: blog
title: "Kubernetes 1.26: Retroactive Default StorageClass"
date: 2023-01-05
slug: retroactive-default-storage-class
---
**Author:** Roman Bednář (Red Hat)
The v1.25 release of Kubernetes introduced an alpha feature to change how a default StorageClass was assigned to a PersistentVolumeClaim (PVC).
With the feature enabled, you no longer need to create the default StorageClass before the PVC in order for the class to be assigned. Additionally, any PVCs without a StorageClass assigned can be updated later.
This feature was graduated to beta in Kubernetes 1.26.
You can read [retroactive default StorageClass assignment](/docs/concepts/storage/persistent-volumes/#retroactive-default-storageclass-assignment) in the Kubernetes documentation for more details about how to use it,
or you can read on to learn about why the Kubernetes project is making this change.
## Why did StorageClass assignment need improvements?
Users might already be familiar with a similar feature that assigns default StorageClasses to **new** PVCs at the time of creation. This is currently handled by the [admission controller](/docs/reference/access-authn-authz/admission-controllers/#defaultstorageclass).
But what if there wasn't a default StorageClass defined at the time of PVC creation?
Users would end up with a PVC that would never be assigned a class.
As a result, no storage would be provisioned, and the PVC would be somewhat "stuck" at this point.
Generally, two main scenarios could result in "stuck" PVCs and cause problems later down the road.
Let's take a closer look at each of them.
### Changing default StorageClass
Before this feature, there were two options admins had when they wanted to change the default StorageClass:
1. Creating a new StorageClass as default before removing the old one associated with the PVC.
This would result in having two defaults for a short period.
At this point, if a user were to create a PersistentVolumeClaim with storageClassName set to <code>null</code> (implying default StorageClass), the newest default StorageClass would be chosen and assigned to this PVC.
2. Removing the old default first and creating a new default StorageClass.
This would result in having no default for a short time.
Subsequently, if a user were to create a PersistentVolumeClaim with storageClassName set to <code>null</code> (implying default StorageClass), the PVC would be in <code>Pending</code> state forever.
The user would have to fix this by deleting the PVC and recreating it once the default StorageClass was available.
### Resource ordering during cluster installation
If a cluster installation tool needed to create resources that required storage, for example, an image registry, it was difficult to get the ordering right.
This is because any Pods that required storage would rely on the presence of a default StorageClass and would fail to be created if it wasn't defined.
## What changed
We've changed the PersistentVolume (PV) controller to assign a default StorageClass to any unbound PersistentVolumeClaim that has the storageClassName set to <code>null</code>.
We've also modified the PersistentVolumeClaim admission within the API server to allow the change of values from an unset value to an actual StorageClass name.
### Null `storageClassName` versus `storageClassName: ""` - does it matter? { #null-vs-empty-string }
Before this feature was introduced, those values were equal in terms of behavior. Any PersistentVolumeClaim with the storageClassName set to <code>null</code> or <code>""</code> would bind to an existing PersistentVolume resource with storageClassName also set to <code>null</code> or <code>""</code>.
With this new feature enabled we wanted to maintain this behavior but also be able to update the StorageClass name.
With these constraints in mind, the feature changes the semantics of <code>null</code>. If a default StorageClass is present, <code>null</code> would translate to "Give me a default" and <code>""</code> would mean "Give me a PersistentVolume that also has <code>""</code> as its StorageClass name." In the absence of a default StorageClass, the behavior would remain unchanged.
Summarizing the above, we've changed the semantics of <code>null</code> so that its behavior depends on the presence or absence of a definition of default StorageClass.
The table below shows all these cases to better describe when a PVC binds and when its StorageClass gets updated.
<table>
<caption>PVC binding behavior with Retroactive default StorageClass</caption>
<thead>
<tr>
<th colspan="2"></th>
<th>PVC <tt>storageClassName</tt> = <code>""</code></th>
<th>PVC <tt>storageClassName</tt> = <code>null</code></th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="2">Without default class</td>
<td>PV <tt>storageClassName</tt> = <code>""</code></td>
<td>binds</td>
<td>binds</td>
</tr>
<tr>
<td>PV without <tt>storageClassName</tt></td>
<td>binds</td>
<td>binds</td>
</tr>
<tr>
<td rowspan="2">With default class</td>
<td>PV <tt>storageClassName</tt> = <code>""</code></td>
<td>binds</td>
<td>class updates</td>
</tr>
<tr>
<td>PV without <tt>storageClassName</tt></td>
<td>binds</td>
<td>class updates</td>
</tr>
</tbody>
</table>
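To make the table concrete, here are two hypothetical claims: the first pins <code>storageClassName</code> to <code>""</code> and is never given a default class; the second omits the field (<code>null</code>), so a default class, when one exists, is assigned retroactively:
```yaml
# storageClassName: "" only ever binds to PVs whose class is also ""
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-empty-class
spec:
  storageClassName: ""
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
---
# storageClassName unset (null): eligible for retroactive default assignment
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-null-class
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
```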
## How to use it
If you want to test the feature whilst it's beta, you need to enable the relevant feature gate in the kube-controller-manager and the kube-apiserver. Use the `--feature-gates` command line argument:
```
--feature-gates="...,RetroactiveDefaultStorageClass=true"
```
### Test drive
If you would like to see the feature in action and verify it works fine in your cluster, here's what you can try:
1. Define a basic PersistentVolumeClaim:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc-1
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
```
2. Create the PersistentVolumeClaim when there is no default StorageClass. The PVC won't provision or bind (unless there is an existing, suitable PV already present) and will remain in <code>Pending</code> state.
```
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pvc-1 Pending
```
3. Configure one StorageClass as default.
```
$ kubectl patch sc my-storageclass -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
storageclass.storage.k8s.io/my-storageclass patched
```
4. Verify that the PersistentVolumeClaim is now provisioned correctly and was updated retroactively with the new default StorageClass.
```
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pvc-1 Bound pvc-06a964ca-f997-4780-8627-b5c3bf5a87d8 1Gi RWO my-storageclass 87m
```
### New metrics
To help you see that the feature is working as expected we also introduced a new <code>retroactive_storageclass_total</code> metric to show how many times the PV controller attempted to update a PersistentVolumeClaim, and <code>retroactive_storageclass_errors_total</code> to show how many of those attempts failed.
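Both metrics are served by the kube-controller-manager. As a rough sketch, assuming you can reach its metrics endpoint (secure port 10257 by default) with a token that is authorized for `/metrics`:
```
curl -sk -H "Authorization: Bearer ${TOKEN}" \
  https://<control-plane-host>:10257/metrics | grep retroactive_storageclass
```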
## Getting involved
We always welcome new contributors, so if you would like to get involved you can join our [Kubernetes Storage Special Interest Group](https://github.com/kubernetes/community/tree/master/sig-storage) (SIG).
If you would like to share feedback, you can do so on our [public Slack channel](https://app.slack.com/client/T09NY5SBT/C09QZFCE5).
Special thanks to all the contributors that provided great reviews, shared valuable insight and helped implement this feature (alphabetical order):
- Deep Debroy ([ddebroy](https://github.com/ddebroy))
- Divya Mohan ([divya-mohan0209](https://github.com/divya-mohan0209))
- Jan Šafránek ([jsafrane](https://github.com/jsafrane/))
- Joe Betz ([jpbetz](https://github.com/jpbetz))
- Jordan Liggitt ([liggitt](https://github.com/liggitt))
- Michelle Au ([msau42](https://github.com/msau42))
- Seokho Son ([seokho-son](https://github.com/seokho-son))
- Shannon Kularathna ([shannonxtreme](https://github.com/shannonxtreme))
- Tim Bannister ([sftim](https://github.com/sftim))
- Tim Hockin ([thockin](https://github.com/thockin))
- Wojciech Tyczynski ([wojtek-t](https://github.com/wojtek-t))
- Xing Yang ([xing-yang](https://github.com/xing-yang))

View File

@ -172,11 +172,11 @@ Beta graduation of this feature. Because of this, kubelet upgrades should be sea
but there still may be changes in the API before stabilization making upgrades not but there still may be changes in the API before stabilization making upgrades not
guaranteed to be non-breaking. guaranteed to be non-breaking.
{{< caution >}} {{< note >}}
Although the Device Manager component of Kubernetes is a generally available feature, Although the Device Manager component of Kubernetes is a generally available feature,
the _device plugin API_ is not stable. For information on the device plugin API and the _device plugin API_ is not stable. For information on the device plugin API and
version compatibility, read [Device Plugin API versions](/docs/reference/node/device-plugin-api-versions/). version compatibility, read [Device Plugin API versions](/docs/reference/node/device-plugin-api-versions/).
{{< caution >}} {{< /note >}}
As a project, Kubernetes recommends that device plugin developers: As a project, Kubernetes recommends that device plugin developers:

View File

@ -73,13 +73,6 @@ The value you specified declares that the specified number of process IDs will
be reserved for the system as a whole and for Kubernetes system daemons be reserved for the system as a whole and for Kubernetes system daemons
respectively. respectively.
{{< note >}}
Before Kubernetes version 1.20, PID resource limiting with Node-level
reservations required enabling the [feature
gate](/docs/reference/command-line-tools-reference/feature-gates/)
`SupportNodePidsLimit` to work.
{{< /note >}}
## Pod PID limits ## Pod PID limits
Kubernetes allows you to limit the number of processes running in a Pod. You Kubernetes allows you to limit the number of processes running in a Pod. You
@ -89,12 +82,6 @@ To configure the limit, you can specify the command line parameter `--pod-max-pi
to the kubelet, or set `PodPidsLimit` in the kubelet to the kubelet, or set `PodPidsLimit` in the kubelet
[configuration file](/docs/tasks/administer-cluster/kubelet-config-file/). [configuration file](/docs/tasks/administer-cluster/kubelet-config-file/).
{{< note >}}
Before Kubernetes version 1.20, PID resource limiting for Pods required enabling
the [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
`SupportPodPidsLimit` to work.
{{< /note >}}
## PID based eviction ## PID based eviction
You can configure kubelet to start terminating a Pod when it is misbehaving and consuming abnormal amount of resources. You can configure kubelet to start terminating a Pod when it is misbehaving and consuming abnormal amount of resources.

View File

@ -44,7 +44,7 @@ share clusters.
The other major form of multi-tenancy frequently involves a Software-as-a-Service (SaaS) vendor The other major form of multi-tenancy frequently involves a Software-as-a-Service (SaaS) vendor
running multiple instances of a workload for customers. This business model is so strongly running multiple instances of a workload for customers. This business model is so strongly
associated with this deployment style that many people call it "SaaS tenancy." However, a better associated with this deployment style that many people call it "SaaS tenancy." However, a better
term might be "multi-customer tenancy, since SaaS vendors may also use other deployment models, term might be "multi-customer tenancy," since SaaS vendors may also use other deployment models,
and this deployment model can also be used outside of SaaS. and this deployment model can also be used outside of SaaS.
In this scenario, the customers do not have access to the cluster; Kubernetes is invisible from In this scenario, the customers do not have access to the cluster; Kubernetes is invisible from

View File

@ -17,6 +17,10 @@ Commands related to handling kubernetes certificates
Commands related to handling kubernetes certificates Commands related to handling kubernetes certificates
```
kubeadm certs [flags]
```
### Options ### Options
<table style="width: 100%; table-layout: fixed;"> <table style="width: 100%; table-layout: fixed;">

View File

@ -55,7 +55,7 @@ kubeadm config images list [flags]
<td colspan="2">--feature-gates string</td> <td colspan="2">--feature-gates string</td>
</tr> </tr>
<tr> <tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>A set of key=value pairs that describe feature gates for various features. Options are:<br/>PublicKeysECDSA=true|false (ALPHA - default=false)<br/>RootlessControlPlane=true|false (ALPHA - default=false)<br/>UnversionedKubeletConfigMap=true|false (default=true)</p></td> <td></td><td style="line-height: 130%; word-wrap: break-word;"><p>A set of key=value pairs that describe feature gates for various features. Options are:<br/>PublicKeysECDSA=true|false (ALPHA - default=false)<br/>RootlessControlPlane=true|false (ALPHA - default=false)</p></td>
</tr> </tr>
<tr> <tr>

View File

@ -48,7 +48,7 @@ kubeadm config images pull [flags]
<td colspan="2">--feature-gates string</td> <td colspan="2">--feature-gates string</td>
</tr> </tr>
<tr> <tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>A set of key=value pairs that describe feature gates for various features. Options are:<br/>PublicKeysECDSA=true|false (ALPHA - default=false)<br/>RootlessControlPlane=true|false (ALPHA - default=false)<br/>UnversionedKubeletConfigMap=true|false (default=true)</p></td> <td></td><td style="line-height: 130%; word-wrap: break-word;"><p>A set of key=value pairs that describe feature gates for various features. Options are:<br/>PublicKeysECDSA=true|false (ALPHA - default=false)<br/>RootlessControlPlane=true|false (ALPHA - default=false)</p></td>
</tr> </tr>
<tr> <tr>

View File

@ -55,6 +55,7 @@ kubelet-finalize Updates settings relevant to the kubelet after TLS
addon Install required addons for passing conformance tests addon Install required addons for passing conformance tests
/coredns Install the CoreDNS addon to a Kubernetes cluster /coredns Install the CoreDNS addon to a Kubernetes cluster
/kube-proxy Install the kube-proxy addon to a Kubernetes cluster /kube-proxy Install the kube-proxy addon to a Kubernetes cluster
show-join-command Show the join command for control-plane and worker node
``` ```
@ -138,7 +139,7 @@ kubeadm init [flags]
<td colspan="2">--feature-gates string</td> <td colspan="2">--feature-gates string</td>
</tr> </tr>
<tr> <tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>A set of key=value pairs that describe feature gates for various features. Options are:<br/>PublicKeysECDSA=true|false (ALPHA - default=false)<br/>RootlessControlPlane=true|false (ALPHA - default=false)<br/>UnversionedKubeletConfigMap=true|false (default=true)</p></td> <td></td><td style="line-height: 130%; word-wrap: break-word;"><p>A set of key=value pairs that describe feature gates for various features. Options are:<br/>PublicKeysECDSA=true|false (ALPHA - default=false)<br/>RootlessControlPlane=true|false (ALPHA - default=false)</p></td>
</tr> </tr>
<tr> <tr>

View File

@ -58,11 +58,18 @@ kubeadm init phase addon all [flags]
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Specify a stable IP address or DNS name for the control plane.</p></td> <td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Specify a stable IP address or DNS name for the control plane.</p></td>
</tr> </tr>
<tr>
<td colspan="2">--dry-run</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Don't apply any changes; just output what would be done.</p></td>
</tr>
<tr> <tr>
<td colspan="2">--feature-gates string</td> <td colspan="2">--feature-gates string</td>
</tr> </tr>
<tr> <tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>A set of key=value pairs that describe feature gates for various features. Options are:<br/>PublicKeysECDSA=true|false (ALPHA - default=false)<br/>RootlessControlPlane=true|false (ALPHA - default=false)<br/>UnversionedKubeletConfigMap=true|false (default=true)</p></td> <td></td><td style="line-height: 130%; word-wrap: break-word;"><p>A set of key=value pairs that describe feature gates for various features. Options are:<br/>PublicKeysECDSA=true|false (ALPHA - default=false)<br/>RootlessControlPlane=true|false (ALPHA - default=false)</p></td>
</tr> </tr>
<tr> <tr>

View File

@ -37,11 +37,18 @@ kubeadm init phase addon coredns [flags]
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a kubeadm configuration file.</p></td> <td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a kubeadm configuration file.</p></td>
</tr> </tr>
<tr>
<td colspan="2">--dry-run</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Don't apply any changes; just output what would be done.</p></td>
</tr>
<tr> <tr>
<td colspan="2">--feature-gates string</td> <td colspan="2">--feature-gates string</td>
</tr> </tr>
<tr> <tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>A set of key=value pairs that describe feature gates for various features. Options are:<br/>PublicKeysECDSA=true|false (ALPHA - default=false)<br/>RootlessControlPlane=true|false (ALPHA - default=false)<br/>UnversionedKubeletConfigMap=true|false (default=true)</p></td> <td></td><td style="line-height: 130%; word-wrap: break-word;"><p>A set of key=value pairs that describe feature gates for various features. Options are:<br/>PublicKeysECDSA=true|false (ALPHA - default=false)<br/>RootlessControlPlane=true|false (ALPHA - default=false)</p></td>
</tr> </tr>
<tr> <tr>

View File

@ -58,6 +58,13 @@ kubeadm init phase addon kube-proxy [flags]
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Specify a stable IP address or DNS name for the control plane.</p></td> <td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Specify a stable IP address or DNS name for the control plane.</p></td>
</tr> </tr>
<tr>
<td colspan="2">--dry-run</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Don't apply any changes; just output what would be done.</p></td>
</tr>
<tr> <tr>
<td colspan="2">-h, --help</td> <td colspan="2">-h, --help</td>
</tr> </tr>

View File

@ -47,6 +47,13 @@ kubeadm init phase bootstrap-token [flags]
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a kubeadm configuration file.</p></td> <td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a kubeadm configuration file.</p></td>
</tr> </tr>
<tr>
<td colspan="2">--dry-run</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Don't apply any changes; just output what would be done.</p></td>
</tr>
<tr> <tr>
<td colspan="2">-h, --help</td> <td colspan="2">-h, --help</td>
</tr> </tr>

View File

@ -65,6 +65,13 @@ kubeadm init phase certs all [flags]
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Specify a stable IP address or DNS name for the control plane.</p></td> <td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Specify a stable IP address or DNS name for the control plane.</p></td>
</tr> </tr>
<tr>
<td colspan="2">--dry-run</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Don't apply any changes; just output what would be done.</p></td>
</tr>
<tr> <tr>
<td colspan="2">-h, --help</td> <td colspan="2">-h, --help</td>
</tr> </tr>

View File

@ -48,6 +48,13 @@ kubeadm init phase certs apiserver-etcd-client [flags]
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a kubeadm configuration file.</p></td> <td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a kubeadm configuration file.</p></td>
</tr> </tr>
<tr>
<td colspan="2">--dry-run</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Don't apply any changes; just output what would be done.</p></td>
</tr>
<tr> <tr>
<td colspan="2">-h, --help</td> <td colspan="2">-h, --help</td>
</tr> </tr>

View File

@ -48,6 +48,13 @@ kubeadm init phase certs apiserver-kubelet-client [flags]
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a kubeadm configuration file.</p></td> <td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a kubeadm configuration file.</p></td>
</tr> </tr>
<tr>
<td colspan="2">--dry-run</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Don't apply any changes; just output what would be done.</p></td>
</tr>
<tr> <tr>
<td colspan="2">-h, --help</td> <td colspan="2">-h, --help</td>
</tr> </tr>

View File

@ -69,6 +69,13 @@ kubeadm init phase certs apiserver [flags]
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Specify a stable IP address or DNS name for the control plane.</p></td> <td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Specify a stable IP address or DNS name for the control plane.</p></td>
</tr> </tr>
<tr>
<td colspan="2">--dry-run</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Don't apply any changes; just output what would be done.</p></td>
</tr>
<tr> <tr>
<td colspan="2">-h, --help</td> <td colspan="2">-h, --help</td>
</tr> </tr>

View File

@ -48,6 +48,13 @@ kubeadm init phase certs ca [flags]
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a kubeadm configuration file.</p></td> <td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a kubeadm configuration file.</p></td>
</tr> </tr>
<tr>
<td colspan="2">--dry-run</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Don't apply any changes; just output what would be done.</p></td>
</tr>
<tr> <tr>
<td colspan="2">-h, --help</td> <td colspan="2">-h, --help</td>
</tr> </tr>

View File

@ -48,6 +48,13 @@ kubeadm init phase certs etcd-ca [flags]
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a kubeadm configuration file.</p></td> <td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a kubeadm configuration file.</p></td>
</tr> </tr>
<tr>
<td colspan="2">--dry-run</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Don't apply any changes; just output what would be done.</p></td>
</tr>
<tr> <tr>
<td colspan="2">-h, --help</td> <td colspan="2">-h, --help</td>
</tr> </tr>

View File

@ -48,6 +48,13 @@ kubeadm init phase certs etcd-healthcheck-client [flags]
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a kubeadm configuration file.</p></td> <td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a kubeadm configuration file.</p></td>
</tr> </tr>
<tr>
<td colspan="2">--dry-run</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Don't apply any changes; just output what would be done.</p></td>
</tr>
<tr> <tr>
<td colspan="2">-h, --help</td> <td colspan="2">-h, --help</td>
</tr> </tr>

View File

@ -50,6 +50,13 @@ kubeadm init phase certs etcd-peer [flags]
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a kubeadm configuration file.</p></td> <td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a kubeadm configuration file.</p></td>
</tr> </tr>
<tr>
<td colspan="2">--dry-run</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Don't apply any changes; just output what would be done.</p></td>
</tr>
<tr> <tr>
<td colspan="2">-h, --help</td> <td colspan="2">-h, --help</td>
</tr> </tr>

View File

@ -50,6 +50,13 @@ kubeadm init phase certs etcd-server [flags]
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a kubeadm configuration file.</p></td> <td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a kubeadm configuration file.</p></td>
</tr> </tr>
<tr>
<td colspan="2">--dry-run</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Don't apply any changes; just output what would be done.</p></td>
</tr>
<tr> <tr>
<td colspan="2">-h, --help</td> <td colspan="2">-h, --help</td>
</tr> </tr>

View File

@ -48,6 +48,13 @@ kubeadm init phase certs front-proxy-ca [flags]
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a kubeadm configuration file.</p></td> <td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a kubeadm configuration file.</p></td>
</tr> </tr>
<tr>
<td colspan="2">--dry-run</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Don't apply any changes; just output what would be done.</p></td>
</tr>
<tr> <tr>
<td colspan="2">-h, --help</td> <td colspan="2">-h, --help</td>
</tr> </tr>

View File

@ -48,6 +48,13 @@ kubeadm init phase certs front-proxy-client [flags]
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a kubeadm configuration file.</p></td> <td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a kubeadm configuration file.</p></td>
</tr> </tr>
<tr>
<td colspan="2">--dry-run</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Don't apply any changes; just output what would be done.</p></td>
</tr>
<tr> <tr>
<td colspan="2">-h, --help</td> <td colspan="2">-h, --help</td>
</tr> </tr>

View File

@ -101,7 +101,7 @@ kubeadm init phase control-plane all [flags]
<td colspan="2">--feature-gates string</td> <td colspan="2">--feature-gates string</td>
</tr> </tr>
<tr> <tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>A set of key=value pairs that describe feature gates for various features. Options are:<br/>PublicKeysECDSA=true|false (ALPHA - default=false)<br/>RootlessControlPlane=true|false (ALPHA - default=false)<br/>UnversionedKubeletConfigMap=true|false (default=true)</p></td> <td></td><td style="line-height: 130%; word-wrap: break-word;"><p>A set of key=value pairs that describe feature gates for various features. Options are:<br/>PublicKeysECDSA=true|false (ALPHA - default=false)<br/>RootlessControlPlane=true|false (ALPHA - default=false)</p></td>
</tr> </tr>
<tr> <tr>

View File

@ -83,7 +83,7 @@ kubeadm init phase control-plane apiserver [flags]
<td colspan="2">--feature-gates string</td> <td colspan="2">--feature-gates string</td>
</tr> </tr>
<tr> <tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>A set of key=value pairs that describe feature gates for various features. Options are:<br/>PublicKeysECDSA=true|false (ALPHA - default=false)<br/>RootlessControlPlane=true|false (ALPHA - default=false)<br/>UnversionedKubeletConfigMap=true|false (default=true)</p></td> <td></td><td style="line-height: 130%; word-wrap: break-word;"><p>A set of key=value pairs that describe feature gates for various features. Options are:<br/>PublicKeysECDSA=true|false (ALPHA - default=false)<br/>RootlessControlPlane=true|false (ALPHA - default=false)</p></td>
</tr> </tr>
<tr> <tr>

View File

@ -56,6 +56,13 @@ kubeadm init phase etcd local [flags]
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a kubeadm configuration file.</p></td> <td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a kubeadm configuration file.</p></td>
</tr> </tr>
<tr>
<td colspan="2">--dry-run</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Don't apply any changes; just output what would be done.</p></td>
</tr>
<tr> <tr>
<td colspan="2">-h, --help</td> <td colspan="2">-h, --help</td>
</tr> </tr>

View File

@ -65,6 +65,13 @@ kubeadm init phase kubeconfig admin [flags]
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Specify a stable IP address or DNS name for the control plane.</p></td> <td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Specify a stable IP address or DNS name for the control plane.</p></td>
</tr> </tr>
<tr>
<td colspan="2">--dry-run</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Don't apply any changes; just output what would be done.</p></td>
</tr>
<tr> <tr>
<td colspan="2">-h, --help</td> <td colspan="2">-h, --help</td>
</tr> </tr>

View File

@ -65,6 +65,13 @@ kubeadm init phase kubeconfig all [flags]
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Specify a stable IP address or DNS name for the control plane.</p></td> <td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Specify a stable IP address or DNS name for the control plane.</p></td>
</tr> </tr>
<tr>
<td colspan="2">--dry-run</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Don't apply any changes; just output what would be done.</p></td>
</tr>
<tr> <tr>
<td colspan="2">-h, --help</td> <td colspan="2">-h, --help</td>
</tr> </tr>

View File

@ -65,6 +65,13 @@ kubeadm init phase kubeconfig controller-manager [flags]
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Specify a stable IP address or DNS name for the control plane.</p></td> <td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Specify a stable IP address or DNS name for the control plane.</p></td>
</tr> </tr>
<tr>
<td colspan="2">--dry-run</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Don't apply any changes; just output what would be done.</p></td>
</tr>
<tr> <tr>
<td colspan="2">-h, --help</td> <td colspan="2">-h, --help</td>
</tr> </tr>

View File

@ -67,6 +67,13 @@ kubeadm init phase kubeconfig kubelet [flags]
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Specify a stable IP address or DNS name for the control plane.</p></td> <td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Specify a stable IP address or DNS name for the control plane.</p></td>
</tr> </tr>
<tr>
<td colspan="2">--dry-run</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Don't apply any changes; just output what would be done.</p></td>
</tr>
<tr> <tr>
<td colspan="2">-h, --help</td> <td colspan="2">-h, --help</td>
</tr> </tr>

View File

@ -65,6 +65,13 @@ kubeadm init phase kubeconfig scheduler [flags]
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Specify a stable IP address or DNS name for the control plane.</p></td> <td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Specify a stable IP address or DNS name for the control plane.</p></td>
</tr> </tr>
<tr>
<td colspan="2">--dry-run</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Don't apply any changes; just output what would be done.</p></td>
</tr>
<tr> <tr>
<td colspan="2">-h, --help</td> <td colspan="2">-h, --help</td>
</tr> </tr>

View File

@ -51,6 +51,13 @@ kubeadm init phase kubelet-finalize all [flags]
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a kubeadm configuration file.</p></td> <td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a kubeadm configuration file.</p></td>
</tr> </tr>
<tr>
<td colspan="2">--dry-run</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Don't apply any changes; just output what would be done.</p></td>
</tr>
<tr> <tr>
<td colspan="2">-h, --help</td> <td colspan="2">-h, --help</td>
</tr> </tr>

View File

@ -44,6 +44,13 @@ kubeadm init phase kubelet-finalize experimental-cert-rotation [flags]
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a kubeadm configuration file.</p></td> <td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a kubeadm configuration file.</p></td>
</tr> </tr>
<tr>
<td colspan="2">--dry-run</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Don't apply any changes; just output what would be done.</p></td>
</tr>
<tr> <tr>
<td colspan="2">-h, --help</td> <td colspan="2">-h, --help</td>
</tr> </tr>

View File

@ -51,6 +51,13 @@ kubeadm init phase kubelet-start [flags]
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to the CRI socket to connect. If empty kubeadm will try to auto-detect this value; use this option only if you have more than one CRI installed or if you have non-standard CRI socket.</p></td> <td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to the CRI socket to connect. If empty kubeadm will try to auto-detect this value; use this option only if you have more than one CRI installed or if you have non-standard CRI socket.</p></td>
</tr> </tr>
<tr>
<td colspan="2">--dry-run</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Don't apply any changes; just output what would be done.</p></td>
</tr>
<tr> <tr>
<td colspan="2">-h, --help</td> <td colspan="2">-h, --help</td>
</tr> </tr>

View File

@ -47,6 +47,13 @@ kubeadm init phase mark-control-plane [flags]
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a kubeadm configuration file.</p></td> <td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a kubeadm configuration file.</p></td>
</tr> </tr>
<tr>
<td colspan="2">--dry-run</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Don't apply any changes; just output what would be done.</p></td>
</tr>
<tr> <tr>
<td colspan="2">-h, --help</td> <td colspan="2">-h, --help</td>
</tr> </tr>

View File

@ -44,6 +44,13 @@ kubeadm init phase preflight [flags]
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a kubeadm configuration file.</p></td> <td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a kubeadm configuration file.</p></td>
</tr> </tr>
<tr>
<td colspan="2">--dry-run</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Don't apply any changes; just output what would be done.</p></td>
</tr>
<tr> <tr>
<td colspan="2">-h, --help</td> <td colspan="2">-h, --help</td>
</tr> </tr>

View File

@ -0,0 +1,65 @@
<!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
-->
Show the join command for control-plane and worker node
### Synopsis
Show the join command for control-plane and worker node
```
kubeadm init phase show-join-command [flags]
```
### Options
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>
<tr>
<td colspan="2">-h, --help</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>help for show-join-command</p></td>
</tr>
</tbody>
</table>
### Options inherited from parent commands
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>
<tr>
<td colspan="2">--rootfs string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>[EXPERIMENTAL] The path to the 'real' host root filesystem.</p></td>
</tr>
</tbody>
</table>

View File

@ -15,7 +15,7 @@ Upload certificates to kubeadm-certs
### Synopsis ### Synopsis
This command is not meant to be run on its own. See list of available subcommands. Upload control plane certificates to the kubeadm-certs Secret
``` ```
kubeadm init phase upload-certs [flags] kubeadm init phase upload-certs [flags]
@ -44,6 +44,13 @@ kubeadm init phase upload-certs [flags]
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a kubeadm configuration file.</p></td> <td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a kubeadm configuration file.</p></td>
</tr> </tr>
<tr>
<td colspan="2">--dry-run</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Don't apply any changes; just output what would be done.</p></td>
</tr>
<tr> <tr>
<td colspan="2">-h, --help</td> <td colspan="2">-h, --help</td>
</tr> </tr>

View File

@ -37,6 +37,13 @@ kubeadm init phase upload-config all [flags]
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a kubeadm configuration file.</p></td> <td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a kubeadm configuration file.</p></td>
</tr> </tr>
<tr>
<td colspan="2">--dry-run</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Don't apply any changes; just output what would be done.</p></td>
</tr>
<tr> <tr>
<td colspan="2">-h, --help</td> <td colspan="2">-h, --help</td>
</tr> </tr>

View File

@ -46,6 +46,13 @@ kubeadm init phase upload-config kubeadm [flags]
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a kubeadm configuration file.</p></td> <td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a kubeadm configuration file.</p></td>
</tr> </tr>
<tr>
<td colspan="2">--dry-run</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Don't apply any changes; just output what would be done.</p></td>
</tr>
<tr> <tr>
<td colspan="2">-h, --help</td> <td colspan="2">-h, --help</td>
</tr> </tr>

View File

@ -44,6 +44,13 @@ kubeadm init phase upload-config kubelet [flags]
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a kubeadm configuration file.</p></td> <td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a kubeadm configuration file.</p></td>
</tr> </tr>
<tr>
<td colspan="2">--dry-run</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Don't apply any changes; just output what would be done.</p></td>
</tr>
<tr> <tr>
<td colspan="2">-h, --help</td> <td colspan="2">-h, --help</td>
</tr> </tr>

View File

@ -114,7 +114,7 @@ kubeadm join [api-server-endpoint] [flags]
<td colspan="2">--config string</td> <td colspan="2">--config string</td>
</tr> </tr>
<tr> <tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to kubeadm config file.</p></td> <td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a kubeadm configuration file.</p></td>
</tr> </tr>
<tr> <tr>

View File

@ -41,7 +41,7 @@ kubeadm join phase control-plane-join all [flags]
<td colspan="2">--config string</td> <td colspan="2">--config string</td>
</tr> </tr>
<tr> <tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to kubeadm config file.</p></td> <td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a kubeadm configuration file.</p></td>
</tr> </tr>
<tr> <tr>
@ -51,6 +51,13 @@ kubeadm join phase control-plane-join all [flags]
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Create a new control plane instance on this node</p></td> <td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Create a new control plane instance on this node</p></td>
</tr> </tr>
<tr>
<td colspan="2">--dry-run</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Don't apply any changes; just output what would be done.</p></td>
</tr>
<tr> <tr>
<td colspan="2">-h, --help</td> <td colspan="2">-h, --help</td>
</tr> </tr>

View File

@ -41,7 +41,7 @@ kubeadm join phase control-plane-join etcd [flags]
<td colspan="2">--config string</td> <td colspan="2">--config string</td>
</tr> </tr>
<tr> <tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to kubeadm config file.</p></td> <td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a kubeadm configuration file.</p></td>
</tr> </tr>
<tr> <tr>
@ -51,6 +51,13 @@ kubeadm join phase control-plane-join etcd [flags]
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Create a new control plane instance on this node</p></td> <td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Create a new control plane instance on this node</p></td>
</tr> </tr>
<tr>
<td colspan="2">--dry-run</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Don't apply any changes; just output what would be done.</p></td>
</tr>
<tr> <tr>
<td colspan="2">-h, --help</td> <td colspan="2">-h, --help</td>
</tr> </tr>

View File

@ -34,7 +34,7 @@ kubeadm join phase control-plane-join mark-control-plane [flags]
<td colspan="2">--config string</td> <td colspan="2">--config string</td>
</tr> </tr>
<tr> <tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to kubeadm config file.</p></td> <td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a kubeadm configuration file.</p></td>
</tr> </tr>
<tr> <tr>
@ -44,6 +44,13 @@ kubeadm join phase control-plane-join mark-control-plane [flags]
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Create a new control plane instance on this node</p></td> <td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Create a new control plane instance on this node</p></td>
</tr> </tr>
<tr>
<td colspan="2">--dry-run</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Don't apply any changes; just output what would be done.</p></td>
</tr>
<tr> <tr>
<td colspan="2">-h, --help</td> <td colspan="2">-h, --help</td>
</tr> </tr>

View File

@ -41,7 +41,7 @@ kubeadm join phase control-plane-join update-status [flags]
<td colspan="2">--config string</td> <td colspan="2">--config string</td>
</tr> </tr>
<tr> <tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to kubeadm config file.</p></td> <td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a kubeadm configuration file.</p></td>
</tr> </tr>
<tr> <tr>

View File

@ -55,7 +55,7 @@ kubeadm join phase control-plane-prepare all [api-server-endpoint] [flags]
<td colspan="2">--config string</td> <td colspan="2">--config string</td>
</tr> </tr>
<tr> <tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to kubeadm config file.</p></td> <td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a kubeadm configuration file.</p></td>
</tr> </tr>
<tr> <tr>
@ -93,6 +93,13 @@ kubeadm join phase control-plane-prepare all [api-server-endpoint] [flags]
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>For token-based discovery, allow joining without --discovery-token-ca-cert-hash pinning.</p></td> <td></td><td style="line-height: 130%; word-wrap: break-word;"><p>For token-based discovery, allow joining without --discovery-token-ca-cert-hash pinning.</p></td>
</tr> </tr>
<tr>
<td colspan="2">--dry-run</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Don't apply any changes; just output what would be done.</p></td>
</tr>
<tr> <tr>
<td colspan="2">-h, --help</td> <td colspan="2">-h, --help</td>
</tr> </tr>

View File

@ -41,7 +41,7 @@ kubeadm join phase control-plane-prepare certs [api-server-endpoint] [flags]
<td colspan="2">--config string</td> <td colspan="2">--config string</td>
</tr> </tr>
<tr> <tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to kubeadm config file.</p></td> <td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a kubeadm configuration file.</p></td>
</tr> </tr>
<tr> <tr>
@ -79,6 +79,13 @@ kubeadm join phase control-plane-prepare certs [api-server-endpoint] [flags]
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>For token-based discovery, allow joining without --discovery-token-ca-cert-hash pinning.</p></td> <td></td><td style="line-height: 130%; word-wrap: break-word;"><p>For token-based discovery, allow joining without --discovery-token-ca-cert-hash pinning.</p></td>
</tr> </tr>
<tr>
<td colspan="2">--dry-run</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Don't apply any changes; just output what would be done.</p></td>
</tr>
<tr> <tr>
<td colspan="2">-h, --help</td> <td colspan="2">-h, --help</td>
</tr> </tr>

View File

@ -48,7 +48,7 @@ kubeadm join phase control-plane-prepare control-plane [flags]
<td colspan="2">--config string</td> <td colspan="2">--config string</td>
</tr> </tr>
<tr> <tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to kubeadm config file.</p></td> <td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a kubeadm configuration file.</p></td>
</tr> </tr>
<tr> <tr>
@ -58,6 +58,13 @@ kubeadm join phase control-plane-prepare control-plane [flags]
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Create a new control plane instance on this node</p></td> <td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Create a new control plane instance on this node</p></td>
</tr> </tr>
<tr>
<td colspan="2">--dry-run</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Don't apply any changes; just output what would be done.</p></td>
</tr>
<tr> <tr>
<td colspan="2">-h, --help</td> <td colspan="2">-h, --help</td>
</tr> </tr>

View File

@ -41,7 +41,7 @@ kubeadm join phase control-plane-prepare download-certs [api-server-endpoint] [f
<td colspan="2">--config string</td> <td colspan="2">--config string</td>
</tr> </tr>
<tr> <tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to kubeadm config file.</p></td> <td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a kubeadm configuration file.</p></td>
</tr> </tr>
<tr> <tr>
@ -79,6 +79,13 @@ kubeadm join phase control-plane-prepare download-certs [api-server-endpoint] [f
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>For token-based discovery, allow joining without --discovery-token-ca-cert-hash pinning.</p></td> <td></td><td style="line-height: 130%; word-wrap: break-word;"><p>For token-based discovery, allow joining without --discovery-token-ca-cert-hash pinning.</p></td>
</tr> </tr>
<tr>
<td colspan="2">--dry-run</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Don't apply any changes; just output what would be done.</p></td>
</tr>
<tr> <tr>
<td colspan="2">-h, --help</td> <td colspan="2">-h, --help</td>
</tr> </tr>

View File

@ -41,7 +41,7 @@ kubeadm join phase control-plane-prepare kubeconfig [api-server-endpoint] [flags
<td colspan="2">--config string</td> <td colspan="2">--config string</td>
</tr> </tr>
<tr> <tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to kubeadm config file.</p></td> <td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a kubeadm configuration file.</p></td>
</tr> </tr>
<tr> <tr>
@ -79,6 +79,13 @@ kubeadm join phase control-plane-prepare kubeconfig [api-server-endpoint] [flags
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>For token-based discovery, allow joining without --discovery-token-ca-cert-hash pinning.</p></td> <td></td><td style="line-height: 130%; word-wrap: break-word;"><p>For token-based discovery, allow joining without --discovery-token-ca-cert-hash pinning.</p></td>
</tr> </tr>
<tr>
<td colspan="2">--dry-run</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Don't apply any changes; just output what would be done.</p></td>
</tr>
<tr> <tr>
<td colspan="2">-h, --help</td> <td colspan="2">-h, --help</td>
</tr> </tr>

View File

@ -34,7 +34,7 @@ kubeadm join phase kubelet-start [api-server-endpoint] [flags]
<td colspan="2">--config string</td> <td colspan="2">--config string</td>
</tr> </tr>
<tr> <tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to kubeadm config file.</p></td> <td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a kubeadm configuration file.</p></td>
</tr> </tr>
<tr> <tr>
@ -72,6 +72,13 @@ kubeadm join phase kubelet-start [api-server-endpoint] [flags]
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>For token-based discovery, allow joining without --discovery-token-ca-cert-hash pinning.</p></td> <td></td><td style="line-height: 130%; word-wrap: break-word;"><p>For token-based discovery, allow joining without --discovery-token-ca-cert-hash pinning.</p></td>
</tr> </tr>
<tr>
<td colspan="2">--dry-run</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Don't apply any changes; just output what would be done.</p></td>
</tr>
<tr> <tr>
<td colspan="2">-h, --help</td> <td colspan="2">-h, --help</td>
</tr> </tr>

View File

@ -62,7 +62,7 @@ kubeadm join phase preflight [api-server-endpoint] [flags]
<td colspan="2">--config string</td> <td colspan="2">--config string</td>
</tr> </tr>
<tr> <tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to kubeadm config file.</p></td> <td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a kubeadm configuration file.</p></td>
</tr> </tr>
<tr> <tr>
@ -107,6 +107,13 @@ kubeadm join phase preflight [api-server-endpoint] [flags]
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>For token-based discovery, allow joining without --discovery-token-ca-cert-hash pinning.</p></td> <td></td><td style="line-height: 130%; word-wrap: break-word;"><p>For token-based discovery, allow joining without --discovery-token-ca-cert-hash pinning.</p></td>
</tr> </tr>
<tr>
<td colspan="2">--dry-run</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Don't apply any changes; just output what would be done.</p></td>
</tr>
<tr> <tr>
<td colspan="2">-h, --help</td> <td colspan="2">-h, --help</td>
</tr> </tr>

View File

@ -45,6 +45,13 @@ kubeadm reset [flags]
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The path to the directory where the certificates are stored. If specified, clean this directory.</p></td> <td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The path to the directory where the certificates are stored. If specified, clean this directory.</p></td>
</tr> </tr>
<tr>
<td colspan="2">--cleanup-tmp-dir</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Cleanup the &quot;/etc/kubernetes/tmp&quot; directory</p></td>
</tr>
<tr> <tr>
<td colspan="2">--cri-socket string</td> <td colspan="2">--cri-socket string</td>
</tr> </tr>

View File

@ -37,6 +37,13 @@ kubeadm reset phase cleanup-node [flags]
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The path to the directory where the certificates are stored. If specified, clean this directory.</p></td> <td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The path to the directory where the certificates are stored. If specified, clean this directory.</p></td>
</tr> </tr>
<tr>
<td colspan="2">--cleanup-tmp-dir</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Cleanup the &quot;/etc/kubernetes/tmp&quot; directory</p></td>
</tr>
<tr> <tr>
<td colspan="2">--cri-socket string</td> <td colspan="2">--cri-socket string</td>
</tr> </tr>
@ -44,6 +51,13 @@ kubeadm reset phase cleanup-node [flags]
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to the CRI socket to connect. If empty kubeadm will try to auto-detect this value; use this option only if you have more than one CRI installed or if you have non-standard CRI socket.</p></td> <td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to the CRI socket to connect. If empty kubeadm will try to auto-detect this value; use this option only if you have more than one CRI installed or if you have non-standard CRI socket.</p></td>
</tr> </tr>
<tr>
<td colspan="2">--dry-run</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Don't apply any changes; just output what would be done.</p></td>
</tr>
<tr> <tr>
<td colspan="2">-h, --help</td> <td colspan="2">-h, --help</td>
</tr> </tr>

View File

@ -30,6 +30,13 @@ kubeadm reset phase preflight [flags]
</colgroup> </colgroup>
<tbody> <tbody>
<tr>
<td colspan="2">--dry-run</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Don't apply any changes; just output what would be done.</p></td>
</tr>
<tr> <tr>
<td colspan="2">-f, --force</td> <td colspan="2">-f, --force</td>
</tr> </tr>

View File

@ -30,6 +30,13 @@ kubeadm reset phase remove-etcd-member [flags]
</colgroup> </colgroup>
<tbody> <tbody>
<tr>
<td colspan="2">--dry-run</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Don't apply any changes; just output what would be done.</p></td>
</tr>
<tr> <tr>
<td colspan="2">-h, --help</td> <td colspan="2">-h, --help</td>
</tr> </tr>

View File

@ -76,7 +76,7 @@ kubeadm upgrade apply [version]
<td colspan="2">--feature-gates string</td> <td colspan="2">--feature-gates string</td>
</tr> </tr>
<tr> <tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>A set of key=value pairs that describe feature gates for various features. Options are:<br/>PublicKeysECDSA=true|false (ALPHA - default=false)<br/>RootlessControlPlane=true|false (ALPHA - default=false)<br/>UnversionedKubeletConfigMap=true|false (default=true)</p></td> <td></td><td style="line-height: 130%; word-wrap: break-word;"><p>A set of key=value pairs that describe feature gates for various features. Options are:<br/>PublicKeysECDSA=true|false (ALPHA - default=false)<br/>RootlessControlPlane=true|false (ALPHA - default=false)</p></td>
</tr> </tr>
<tr> <tr>

View File

@ -55,7 +55,7 @@ kubeadm upgrade plan [version] [flags]
<td colspan="2">--feature-gates string</td> <td colspan="2">--feature-gates string</td>
</tr> </tr>
<tr> <tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>A set of key=value pairs that describe feature gates for various features. Options are:<br/>PublicKeysECDSA=true|false (ALPHA - default=false)<br/>RootlessControlPlane=true|false (ALPHA - default=false)<br/>UnversionedKubeletConfigMap=true|false (default=true)</p></td> <td></td><td style="line-height: 130%; word-wrap: break-word;"><p>A set of key=value pairs that describe feature gates for various features. Options are:<br/>PublicKeysECDSA=true|false (ALPHA - default=false)<br/>RootlessControlPlane=true|false (ALPHA - default=false)</p></td>
</tr> </tr>
<tr> <tr>

View File

@ -244,7 +244,7 @@ it off regardless. Doing so will disable the ability to use the `--discovery-tok
* Fetch the `cluster-info` file from the API Server: * Fetch the `cluster-info` file from the API Server:
```shell ```shell
kubectl -n kube-public get cm cluster-info -o yaml | grep "kubeconfig:" -A11 | grep "apiVersion" -A10 | sed "s/ //" | tee cluster-info.yaml kubectl -n kube-public get cm cluster-info -o jsonpath='{.data.kubeconfig}' | tee cluster-info.yaml
```
The output is similar to this:


@ -56,7 +56,7 @@ The following methods exist for installing kubectl on Windows:
- Using PowerShell to automate the verification using the `-eq` operator to get a `True` or `False` result:
```powershell
$($(CertUtil -hashfile .\kubectl.exe SHA256)[1] -replace " ", "") -eq $(type .\kubectl.exe.sha256) $(Get-FileHash -Algorithm SHA256 .\kubectl.exe).Hash -eq $(Get-Content .\kubectl.exe.sha256)
```
1. Append or prepend the `kubectl` binary folder to your `PATH` environment variable.


@ -86,10 +86,10 @@ When Kubernetes deletes an owner object, the dependents left behind...
## Garbage collection of unused containers and images {#containers-images}
The {{<glossary_tooltip text="kubelet" term_id="kubelet">}} performs garbage collection on unused images every five minutes and on unused containers every minute.
Do not use external garbage collection tools, as they can break the kubelet's behavior and remove containers that should exist.
To configure options for garbage collection of unused containers and images, tune the kubelet using a [configuration file](/docs/tasks/administer-cluster/kubelet-config-file/) and change the parameters related to garbage collection using the [`KubeletConfiguration`](/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration) resource type.
### Container image lifecycle
@ -108,12 +108,12 @@ The kubelet garbage-collects unused containers based on the following variables...
* `MinAge`: the minimum age at which the kubelet can garbage collect a container. Disabled by setting to `0`.
* `MaxPerPodContainer`: the maximum number of dead containers each Pod pair can have. Disabled by setting to less than `0`.
* `MaxContainers`: the maximum number of dead containers the cluster can have. Disabled by setting to less than `0`.
In addition to these variables, the kubelet garbage collects unidentified and deleted containers, typically starting with the oldest first.
`MaxPerPodContainer` and `MaxContainers` may potentially conflict with each other in situations where retaining the maximum number of containers per Pod (`MaxPerPodContainer`) would go outside the allowable total of global dead containers (`MaxContainers`).
In this situation, the kubelet adjusts `MaxPerPodContainer` to address the conflict. A worst-case scenario would be to downgrade `MaxPerPodContainer` to `1` and evict the oldest containers.
Additionally, containers owned by Pods that have been deleted are removed once they are older than `MinAge`.
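As an illustration, a minimal kubelet configuration file tuning the image garbage collection knobs might look like the sketch below. The image GC fields shown are part of `KubeletConfiguration` v1beta1; the container GC variables above have historically been exposed as kubelet command line flags rather than config-file fields, so check the reference for your version.
```yaml
# Sketch of a kubelet configuration file, e.g. /var/lib/kubelet/config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Start image garbage collection when disk usage exceeds 85%.
imageGCHighThresholdPercent: 85
# Garbage collect images until disk usage drops back to 80%.
imageGCLowThresholdPercent: 80
# Never remove an image that has been unused for less than two minutes.
imageMinimumGCAge: 2m
```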
{{<note>}}


@ -105,7 +105,7 @@ weight: 20
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \
-CAcreateserial -out server.crt -days 10000 \
-extensions v3_ext -extfile csr.conf -extensions v3_ext -extfile csr.conf -sha256
1. View the certificate.
openssl x509 -noout -text -in ./server.crt


@ -24,9 +24,9 @@ If the node where a Pod is running has enough of a resource available...
For example, if you set a `memory` request of 256 MiB for a container, and that container is in a Pod scheduled to a node with 8 GiB of memory and no other Pods, then the container can try to use more RAM.
If you set a `memory` limit of 4 GiB for that container, the kubelet (and the {{< glossary_tooltip text="container runtime" term_id="container-runtime" >}}) enforce the limit. The runtime prevents the container from using more than the configured resource limit. For example, when a process in the container tries to consume more than the allowed amount of memory, the system kernel terminates the process that attempted the allocation with an out-of-memory (OOM) error.
Limits can be implemented either reactively (the system intervenes once it sees a violation) or by enforcement (the system prevents the container from ever exceeding the limit).
Different runtimes can take different approaches to implement the same restrictions.
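To make this concrete, a container with the request and limit from the example above would be declared roughly as in this sketch (the Pod name and image are placeholders):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: memory-demo   # hypothetical name
spec:
  containers:
  - name: app
    image: registry.example/app:latest   # placeholder image
    resources:
      requests:
        memory: "256Mi"   # used by the scheduler for placement
      limits:
        memory: "4Gi"     # enforced by the kubelet and container runtime
```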
{{< note >}}


@ -838,7 +838,7 @@ spec:
/etc/secret-volume/ssh-privatekey
```
The container can then use the Secret data to establish an SSH connection.
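For reference, a Pod mounts such a Secret as a volume roughly as in this sketch (the Secret name `ssh-key-secret` and the image are assumptions; only the mount path comes from the example above):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-test-pod   # hypothetical name
spec:
  containers:
  - name: ssh-client
    image: registry.example/ssh-client:latest   # placeholder image
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: ssh-key-secret   # assumed Secret holding ssh-privatekey
```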
### Use case: Pods with production / test credentials


@ -1,26 +0,0 @@
---
title: Containers overview
content_type: concept
weight: 1
---
<!-- overview -->
Containers are a technology for packaging the (compiled) code for an application along with the dependencies it needs at run time. Each container that you run is repeatable; the standardization from having dependencies included means that you get the same behavior wherever you run it.
Containers decouple applications from the underlying host infrastructure. This makes deployment easier in different cloud or OS environments.
<!-- body -->
## Container images {#container-images}
A [container image](/docs/concepts/containers/images/) is a ready-to-run software package containing everything needed to run an application: the code and any runtime it requires, application and system libraries, and default values for any essential settings.
By design, a container is immutable: you cannot change the code of a container that is already running. If you have a containerized application and want to make changes, you need to build a new container image that includes the change, then recreate the container to start from the updated image.
## Container runtimes {#container-runtimes}
{{< glossary_definition term_id="container-runtime" length="all" >}}
## {{% heading "whatsnext" %}}
* Read about [container images](/docs/concepts/containers/images/).
* Read about [Pods](/ja/docs/concepts/workloads/pods/).


@ -72,7 +72,7 @@ In the webhook model, Kubernetes makes a call out to a remote service...
1. Users often interact with the Kubernetes API using `kubectl`. [Kubectl plugins](/docs/tasks/extend-kubectl/kubectl-plugins/) extend the kubectl binary. They only affect the individual user's local environment, and so cannot enforce site-wide policies.
2. The API server handles all requests. Several types of extension points in the API server allow authorizing requests, blocking them based on their context, editing requests, and handling deletion. These are described in the [API Access Extensions](/docs/concepts/extend-kubernetes/#api-access-extensions) section.
3. The API server serves various kinds of *resources*. *Built-in resources* such as `Pod` are defined by the Kubernetes project and can't be changed. You can also add resources that you define, or that other projects have defined, called *custom resources*, as described in the [Custom Resources](/docs/concepts/extend-kubernetes/#user-defined-types) section. Custom resources are often used together with API access extensions.
4. The Kubernetes scheduler decides which nodes to place Pods on. There are several ways to extend scheduling; these are described in the [Scheduler Extensions](/docs/concepts/extend-kubernetes/#scheduler-extensions) section. 4. The Kubernetes scheduler decides which nodes to place Pods on. There are several ways to extend scheduling; these are described in the [Scheduler Extensions](#scheduling-extensions) section.
5. Much of the behavior of Kubernetes is implemented by programs called controllers, which are clients of the API server. Controllers are often used together with custom resources.
6. The kubelet runs on servers and helps Pods start on the cluster network with their own IPs, like virtual servers. [Network plugins](/docs/concepts/extend-kubernetes/#network-plugins) allow for different implementations of Pod networking.
7. The kubelet also mounts and unmounts volumes for containers. New types of storage are supported via [storage plugins](/docs/concepts/extend-kubernetes/#storage-plugins).
@ -139,7 +139,7 @@ Kubernetes provides several built-in authentication methods, and if those don't meet your requirements...
Other network fabrics are supported via [network plugins](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/).
### Scheduler extensions ### Scheduler extensions {#scheduling-extensions}
The scheduler is a special type of controller that watches Pods and assigns them to nodes. The default scheduler can be replaced entirely while you continue to use other Kubernetes components, or [multiple schedulers](/docs/tasks/extend-kubernetes/configure-multiple-schedulers/) can run at the same time.


@ -655,7 +655,7 @@ spec:
```
{{< note >}}
When adding a raw block device to a Pod, you specify the device path in the container instead of a mount path.
{{< /note >}}
### Binding block volumes
@ -678,7 +678,7 @@ When adding a raw block device to a Pod, you specify the device path instead of a mount path...
Only statically provisioned volumes are supported in the alpha release. Administrators should take care to consider these values when working with raw block devices.
{{< /note >}}
## Volume snapshot and restore volume from snapshot support ## Volume snapshot and restore volume from snapshot support {#volume-snapshot-and-restore-volume-from-snapshot-support}
{{< feature-state for_k8s_version="v1.17" state="beta" >}}


@ -0,0 +1,182 @@
---
title: Volume Snapshots
content_type: concept
weight: 60
---
<!-- overview -->
In Kubernetes, a *VolumeSnapshot* represents a snapshot of a volume on a storage system. This document assumes that you are already familiar with Kubernetes [persistent volumes](/ja/docs/concepts/storage/persistent-volumes/).
<!-- body -->
## Introduction {#introduction}
Similar to how API resources `PersistentVolume` and `PersistentVolumeClaim` are used to provision volumes for users and administrators, the `VolumeSnapshotContent` and `VolumeSnapshot` API resources are provided to create volume snapshots for users and administrators.
A `VolumeSnapshotContent` is a snapshot taken from a volume in the cluster that has been provisioned by an administrator. It is a resource in the cluster, just as a PersistentVolume is a cluster resource.
A `VolumeSnapshot` is a request for a snapshot of a volume by a user. It is similar to a PersistentVolumeClaim.
`VolumeSnapshotClass` allows you to specify different attributes belonging to a `VolumeSnapshot`. These attributes may differ among snapshots taken from the same volume on the storage system, and therefore cannot be expressed by using the same `StorageClass` as the `PersistentVolumeClaim`.
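As an illustration, a `VolumeSnapshotClass` might look like the following sketch. The driver name matches the CSI hostpath driver used in the examples later on this page; the `deletionPolicy` and any `parameters` depend on your storage provider.
```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-hostpath-snapclass
driver: hostpath.csi.k8s.io   # CSI driver that handles snapshots for this class
deletionPolicy: Delete        # delete the storage snapshot together with the VolumeSnapshotContent
```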
Volume snapshots provide Kubernetes users with a standardized way to copy a volume's contents at a particular point in time, without creating an entirely new volume. This functionality enables, for example, database administrators to back up databases before performing edit or delete modifications.
Users need to be aware of the following when using this feature:
- The API objects `VolumeSnapshot`, `VolumeSnapshotContent`, and `VolumeSnapshotClass` are {{< glossary_tooltip term_id="CustomResourceDefinition" text="CRDs" >}}, not part of the core API.
- `VolumeSnapshot` support is only available for CSI drivers.
- As part of the deployment process of `VolumeSnapshot`, the Kubernetes team provides a snapshot controller to be deployed into the control plane, and a sidecar helper container called csi-snapshotter to be deployed together with the CSI driver. The snapshot controller watches `VolumeSnapshot` and `VolumeSnapshotContent` objects and is responsible for the creation and deletion of `VolumeSnapshotContent` objects. The sidecar csi-snapshotter watches `VolumeSnapshotContent` objects and triggers `CreateSnapshot` and `DeleteSnapshot` operations against a CSI endpoint.
- There is also a validation webhook server that provides tightened validation on snapshot objects. It should be installed by the Kubernetes distribution along with the snapshot controller and CRDs, not the CSI driver, and it should be installed in all Kubernetes clusters that have the snapshot feature enabled.
- CSI drivers may or may not have implemented the volume snapshot functionality. CSI drivers that provide support for volume snapshots will likely use the csi-snapshotter. See the [CSI driver documentation](https://kubernetes-csi.github.io/docs/) for details.
- Installing the CRDs and the snapshot controller is the responsibility of the Kubernetes distribution.
## Lifecycle of a volume snapshot and volume snapshot content
`VolumeSnapshotContents` are resources in the cluster. `VolumeSnapshots` are requests for those resources. The interaction between `VolumeSnapshotContents` and `VolumeSnapshots` follows this lifecycle:
### Provisioning a volume snapshot
There are two ways snapshots may be provisioned: pre-provisioned or dynamically provisioned.
#### Pre-provisioned {#static}
A cluster administrator creates a number of `VolumeSnapshotContents`. They carry the details of the real volume snapshots on the storage system that are available for use by cluster users. They exist in the Kubernetes API and are available for consumption.
#### Dynamic
Instead of using a pre-existing snapshot, you can request that a snapshot be dynamically taken from a PersistentVolumeClaim. The [VolumeSnapshotClass](/ja/docs/concepts/storage/volume-snapshot-classes/) specifies storage-provider-specific parameters to use when taking the snapshot.
### Binding
The snapshot controller handles the binding of a `VolumeSnapshot` object with an appropriate `VolumeSnapshotContent` object, in both pre-provisioned and dynamically provisioned scenarios. The binding is a one-to-one mapping.
In the case of pre-provisioned binding, the VolumeSnapshot remains unbound until the requested VolumeSnapshotContent object is created.
### PersistentVolumeClaim as a snapshot source protection
The purpose of this protection is to ensure that an in-use {{< glossary_tooltip text="PersistentVolumeClaim" term_id="persistent-volume-claim" >}} API object is not removed from the system while a snapshot is being taken from it (as this could result in data loss).
While a snapshot is being taken of a PersistentVolumeClaim, that PersistentVolumeClaim is in use. If you delete a PersistentVolumeClaim API object that is in active use as a snapshot source, the PersistentVolumeClaim object is not removed immediately. Instead, removal of the PersistentVolumeClaim object is postponed until the snapshot is ReadyToUse or aborted.
### Delete
Deletion is triggered by deleting the `VolumeSnapshot` object, and the `DeletionPolicy` is followed. If the `DeletionPolicy` is `Delete`, the underlying storage snapshot is deleted along with the `VolumeSnapshotContent` object. If the `DeletionPolicy` is `Retain`, both the underlying snapshot and the `VolumeSnapshotContent` remain.
## VolumeSnapshots
Each VolumeSnapshot contains a spec and a status.
```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
name: new-snapshot-test
spec:
volumeSnapshotClassName: csi-hostpath-snapclass
source:
persistentVolumeClaimName: pvc-test
```
`persistentVolumeClaimName` is the name of the PersistentVolumeClaim data source for the snapshot. This field is required for dynamically provisioning a snapshot.
A volume snapshot can request a particular class by specifying the name of a [VolumeSnapshotClass](/ja/docs/concepts/storage/volume-snapshot-classes/) using the attribute `volumeSnapshotClassName`. If nothing is set, the default class is used if available.
For pre-provisioned snapshots, you need to specify a `volumeSnapshotContentName` as the source for the snapshot, as shown in the following example. The `volumeSnapshotContentName` source field is required for pre-provisioned snapshots.
```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
name: test-snapshot
spec:
source:
volumeSnapshotContentName: test-content
```
## VolumeSnapshotContents
Each VolumeSnapshotContent contains a spec and a status. In dynamic provisioning, the snapshot common controller creates `VolumeSnapshotContent` objects. Here is an example:
```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotContent
metadata:
name: snapcontent-72d9a349-aacd-42d2-a240-d775650d2455
spec:
deletionPolicy: Delete
driver: hostpath.csi.k8s.io
source:
volumeHandle: ee0cfb94-f8d4-11e9-b2d8-0242ac110002
sourceVolumeMode: Filesystem
volumeSnapshotClassName: csi-hostpath-snapclass
volumeSnapshotRef:
name: new-snapshot-test
namespace: default
uid: 72d9a349-aacd-42d2-a240-d775650d2455
```
`volumeHandle` is the unique identifier of the volume created on the storage backend and returned by the CSI driver during volume creation. This field is required for dynamically provisioning a snapshot. It specifies the volume source of the snapshot.
For pre-provisioned snapshots, you (as cluster administrator) are responsible for creating the `VolumeSnapshotContent` object as follows.
```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotContent
metadata:
name: new-snapshot-content-test
spec:
deletionPolicy: Delete
driver: hostpath.csi.k8s.io
source:
snapshotHandle: 7bdd0de3-aaeb-11e8-9aae-0242ac110002
sourceVolumeMode: Filesystem
volumeSnapshotRef:
name: new-snapshot-test
namespace: default
```
`snapshotHandle` is the unique identifier of the volume snapshot created on the storage backend. This field is required for pre-provisioned snapshots. It specifies the CSI snapshot ID on the storage system that this `VolumeSnapshotContent` represents.
`sourceVolumeMode` is the mode of the volume whose snapshot is taken. The value of the `sourceVolumeMode` field can be either `Filesystem` or `Block`. If the source volume mode is not specified, Kubernetes treats the snapshot as if the source volume's mode is unknown.
`volumeSnapshotRef` is the reference to the corresponding `VolumeSnapshot`. Note that when the `VolumeSnapshotContent` is being created as a pre-provisioned snapshot, the `VolumeSnapshot` referenced in `volumeSnapshotRef` might not exist yet.
## Converting the volume mode of a snapshot {#convert-volume-mode}
If the `VolumeSnapshots` API installed on your cluster supports the `sourceVolumeMode` field, then the API has the capability to prevent unauthorized users from converting the mode of a volume.
To check whether your cluster has this capability, run the following command:
```shell
kubectl get crd volumesnapshotcontent -o yaml
```
If you want to allow users to create a `PersistentVolumeClaim` from an existing `VolumeSnapshot` but with a volume mode different from the source, the annotation `snapshot.storage.kubernetes.io/allowVolumeModeChange: "true"` needs to be added to the `VolumeSnapshotContent` that corresponds to the `VolumeSnapshot`.
For pre-provisioned snapshots, `spec.sourceVolumeMode` needs to be populated by the cluster administrator.
An example `VolumeSnapshotContent` resource with this feature enabled would look like:
```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotContent
metadata:
name: new-snapshot-content-test
annotations:
snapshot.storage.kubernetes.io/allowVolumeModeChange: "true"
spec:
deletionPolicy: Delete
driver: hostpath.csi.k8s.io
source:
snapshotHandle: 7bdd0de3-aaeb-11e8-9aae-0242ac110002
sourceVolumeMode: Filesystem
volumeSnapshotRef:
name: new-snapshot-test
namespace: default
```
## Provisioning volumes from snapshots
You can provision a new volume, pre-populated with data from a snapshot, by using the *dataSource* field in the `PersistentVolumeClaim` object.
For more details, see [Volume Snapshot and Restore Volume from Snapshot Support](/ja/docs/concepts/storage/persistent-volumes/#volume-snapshot-and-restore-volume-from-snapshot-support).
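As a sketch, a `PersistentVolumeClaim` restoring from the `new-snapshot-test` VolumeSnapshot shown earlier could look like this (the claim name, storage class, and size are placeholders):
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restored-pvc   # hypothetical name
spec:
  storageClassName: csi-hostpath-sc   # placeholder storage class
  dataSource:
    name: new-snapshot-test           # VolumeSnapshot from the example above
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi   # placeholder size
```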


@ -145,7 +145,7 @@ Specify an appropriate selector and Pod template labels for a Deployment...
## Updating a Deployment {#updating-a-deployment}
{{< note >}}
A Deployment's rollout is triggered if and only if the Deployment's Pod template (that is, `.spec.template`) is changed, for example if the labels or container images of the template are updated. Other updates, such as scaling the Deployment, do not trigger a rollout.
{{< /note >}}
Follow the steps below to update your Deployment:
@ -938,7 +938,7 @@ Using a Deployment to release to a subset of users or servers...
## Writing a Deployment spec
As with all other Kubernetes configs, a Deployment needs the `.apiVersion`, `.kind`, and `.metadata` fields.
For information about working with configuration files, see [deploying applications](/ja/docs/tasks/run-application/run-stateless-application-deployment/); for configuring containers, see [managing resources with kubectl](/ja/docs/concepts/overview/working-with-objects/object-management/).
The name of a Deployment object must be a valid [DNS subdomain name](/ja/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
A Deployment also needs a [`.spec` section](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status).
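A minimal Deployment manifest with these required fields might look like this sketch (the name, labels, and image are placeholders):
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment   # must be a valid DNS subdomain name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx   # must match .spec.selector
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2   # placeholder image
```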
@ -1008,7 +1008,7 @@ Do not directly create Pods whose labels match a Deployment's selector...
### Min Ready Seconds {#min-ready-seconds}
`.spec.minReadySeconds` is an optional field that specifies the minimum number of seconds for which a newly created Pod should be ready, without any of its containers crashing, for it to be considered available. It defaults to 0 (the Pod is considered available as soon as it is ready). To learn more about when a Pod is considered ready, see [Container Probes](/ja/docs/concepts/workloads/pods/pod-lifecycle/#container-probes).
### Revision history limit


@ -36,7 +36,7 @@ content_type: concept
## Component reference
* [kubelet](/docs/reference/command-line-tools-reference/kubelet/) - the primary node agent that runs on each node. The kubelet takes a set of PodSpecs and ensures that the described containers are running and healthy.
* [kube-apiserver](/docs/reference/command-line-tools-reference/kube-apiserver/) - a REST API server that validates and configures data for API objects such as Pods, Services, and Replication Controllers.
* [kube-controller-manager](/docs/reference/command-line-tools-reference/kube-controller-manager/) - a daemon that embeds the core control loops shipped with Kubernetes.
* [kube-proxy](/docs/reference/command-line-tools-reference/kube-proxy/) - can do simple TCP/UDP stream forwarding or round-robin TCP/UDP forwarding across a set of backends.


@ -191,133 +191,119 @@ content_type: concept
| Feature | Default | Stage | Since | Until |
|---------|---------|-------|-------|-------|
| `Accelerators` | `false` | Alpha | 1.6 | 1.10 |
| `Accelerators` | - | Deprecated | 1.11 | - |
| `AdvancedAuditing` | `false` | Alpha | 1.7 | 1.7 |
| `AdvancedAuditing` | `true` | Beta | 1.8 | 1.11 |
| `AdvancedAuditing` | `true` | GA | 1.12 | - |
| `AffinityInAnnotations` | `false` | Alpha | 1.6 | 1.7 | | `CSIInlineVolume` | `false` | Alpha | 1.15 | 1.15 |
| `AffinityInAnnotations` | - | Deprecated | 1.8 | - | | `CSIInlineVolume` | `true` | Beta | 1.16 | 1.24 |
| `AllowExtTrafficLocalEndpoints` | `false` | Beta | 1.4 | 1.6 | | `CSIInlineVolume` | `true` | GA | 1.25 | - |
| `AllowExtTrafficLocalEndpoints` | `true` | GA | 1.7 | - | | `CSIMigration` | `false` | Alpha | 1.14 | 1.16 |
| `BlockVolume` | `false` | Alpha | 1.9 | 1.12 | | `CSIMigration` | `true` | Beta | 1.17 | 1.24 |
| `BlockVolume` | `true` | Beta | 1.13 | 1.17 | | `CSIMigration` | `true` | GA | 1.25 | - |
| `BlockVolume` | `true` | GA | 1.18 | - | | `CSIMigrationAWS` | `false` | Alpha | 1.14 | 1.16 |
| `CSIBlockVolume` | `false` | Alpha | 1.11 | 1.13 | | `CSIMigrationAWS` | `false` | Beta | 1.17 | 1.22 |
| `CSIBlockVolume` | `true` | Beta | 1.14 | 1.17 | | `CSIMigrationAWS` | `true` | Beta | 1.23 | 1.24 |
| `CSIBlockVolume` | `true` | GA | 1.18 | - | | `CSIMigrationAWS` | `true` | GA | 1.25 | - |
| `CSIDriverRegistry` | `false` | Alpha | 1.12 | 1.13 | | `CSIMigrationAzureDisk` | `false` | Alpha | 1.15 | 1.18 |
| `CSIDriverRegistry` | `true` | Beta | 1.14 | 1.17 | | `CSIMigrationAzureDisk` | `false` | Beta | 1.19 | 1.22 |
| `CSIDriverRegistry` | `true` | GA | 1.18 | | | `CSIMigrationAzureDisk` | `true` | Beta | 1.23 | 1.23 |
| `CSINodeInfo` | `false` | Alpha | 1.12 | 1.13 | | `CSIMigrationAzureDisk` | `true` | GA | 1.24 | |
| `CSINodeInfo` | `true` | Beta | 1.14 | 1.16 | | `CSIMigrationGCE` | `false` | Alpha | 1.14 | 1.16 |
| `CSINodeInfo` | `true` | GA | 1.17 | | | `CSIMigrationGCE` | `false` | Beta | 1.17 | 1.22 |
| `AttachVolumeLimit` | `false` | Alpha | 1.11 | 1.11 | | `CSIMigrationGCE` | `true` | Beta | 1.23 | 1.24 |
| `AttachVolumeLimit` | `true` | Beta | 1.12 | 1.16 | | `CSIMigrationGCE` | `true` | GA | 1.25 | - |
| `AttachVolumeLimit` | `true` | GA | 1.17 | - | | `CSIMigrationOpenStack` | `false` | Alpha | 1.14 | 1.17 |
| `CSIPersistentVolume` | `false` | Alpha | 1.9 | 1.9 | | `CSIMigrationOpenStack` | `true` | Beta | 1.18 | 1.23 |
| `CSIPersistentVolume` | `true` | Beta | 1.10 | 1.12 | | `CSIMigrationOpenStack` | `true` | GA | 1.24 | |
| `CSIPersistentVolume` | `true` | GA | 1.13 | - | | `CSIStorageCapacity` | `false` | Alpha | 1.19 | 1.20 |
| `CustomPodDNS` | `false` | Alpha | 1.9 | 1.9 | | `CSIStorageCapacity` | `true` | Beta | 1.21 | 1.23 |
| `CustomPodDNS` | `true` | Beta| 1.10 | 1.13 | | `CSIStorageCapacity` | `true` | GA | 1.24 | - |
| `CustomPodDNS` | `true` | GA | 1.14 | - | | `CSRDuration` | `true` | Beta | 1.22 | 1.23 |
| `CustomResourcePublishOpenAPI` | `false` | Alpha| 1.14 | 1.14 | | `CSRDuration` | `true` | GA | 1.24 | - |
| `CustomResourcePublishOpenAPI` | `true` | Beta| 1.15 | 1.15 | | `ControllerManagerLeaderMigration` | `false` | Alpha | 1.21 | 1.21 |
| `CustomResourcePublishOpenAPI` | `true` | GA | 1.16 | - | | `ControllerManagerLeaderMigration` | `true` | Beta | 1.22 | 1.23 |
| `CustomResourceSubresources` | `false` | Alpha | 1.10 | 1.10 | | `ControllerManagerLeaderMigration` | `true` | GA | 1.24 | - |
| `CustomResourceSubresources` | `true` | Beta | 1.11 | 1.15 | | `CronJobTimeZone` | `false` | Alpha | 1.24 | 1.24 |
| `CustomResourceSubresources` | `true` | GA | 1.16 | - | | `CronJobTimeZone` | `true` | Beta | 1.25 | |
| `CustomResourceValidation` | `false` | Alpha | 1.8 | 1.8 | | `DaemonSetUpdateSurge` | `false` | Alpha | 1.21 | 1.21 |
| `CustomResourceValidation` | `true` | Beta | 1.9 | 1.15 | | `DaemonSetUpdateSurge` | `true` | Beta | 1.22 | 1.24 |
| `CustomResourceValidation` | `true` | GA | 1.16 | - | | `DaemonSetUpdateSurge` | `true` | GA | 1.25 | - |
| `CustomResourceWebhookConversion` | `false` | Alpha | 1.13 | 1.14 | | `DefaultPodTopologySpread` | `false` | Alpha | 1.19 | 1.19 |
| `CustomResourceWebhookConversion` | `true` | Beta | 1.15 | 1.15 | | `DefaultPodTopologySpread` | `true` | Beta | 1.20 | 1.23 |
| `CustomResourceWebhookConversion` | `true` | GA | 1.16 | - | | `DefaultPodTopologySpread` | `true` | GA | 1.24 | - |
| `DynamicProvisioningScheduling` | `false` | Alpha | 1.11 | 1.11 | | `DisableAcceleratorUsageMetrics` | `false` | Alpha | 1.19 | 1.19 |
| `DynamicProvisioningScheduling` | - | Deprecated| 1.12 | - | | `DisableAcceleratorUsageMetrics` | `true` | Beta | 1.20 | 1.24 |
| `DynamicVolumeProvisioning` | `true` | Alpha | 1.3 | 1.7 | | `DisableAcceleratorUsageMetrics` | `true` | GA | 1.25 |- |
| `DynamicVolumeProvisioning` | `true` | GA | 1.8 | - | | `DryRun` | `false` | Alpha | 1.12 | 1.12 |
| `EnableAggregatedDiscoveryTimeout` | `true` | Deprecated | 1.16 | - | | `DryRun` | `true` | Beta | 1.13 | 1.18 |
| `EnableEquivalenceClassCache` | `false` | Alpha | 1.8 | 1.14 | | `DryRun` | `true` | GA | 1.19 | - |
| `EnableEquivalenceClassCache` | - | Deprecated | 1.15 | - | | `DynamicKubeletConfig` | `false` | Alpha | 1.4 | 1.10 |
| `ExperimentalCriticalPodAnnotation` | `false` | Alpha | 1.5 | 1.12 | | `DynamicKubeletConfig` | `true` | Beta | 1.11 | 1.21 |
| `ExperimentalCriticalPodAnnotation` | `false` | Deprecated | 1.13 | - | | `DynamicKubeletConfig` | `false` | Deprecated | 1.22 | - |
| `GCERegionalPersistentDisk` | `true` | Beta | 1.10 | 1.12 | | `EfficientWatchResumption` | `false` | Alpha | 1.20 | 1.20 |
| `GCERegionalPersistentDisk` | `true` | GA | 1.13 | - | | `EfficientWatchResumption` | `true` | Beta | 1.21 | 1.23 |
| `HugePages` | `false` | Alpha | 1.8 | 1.9 | | `EfficientWatchResumption` | `true` | GA | 1.24 | - |
| `HugePages` | `true` | Beta| 1.10 | 1.13 | | `EphemeralContainers` | `false` | Alpha | 1.16 | 1.22 |
| `HugePages` | `true` | GA | 1.14 | - | | `EphemeralContainers` | `true` | Beta | 1.23 | 1.24 |
| `Initializers` | `false` | Alpha | 1.7 | 1.13 | | `EphemeralContainers` | `true` | GA | 1.25 | - |
| `Initializers` | - | Deprecated | 1.14 | - | | `ExecProbeTimeout` | `true` | GA | 1.20 | - |
| `KubeletConfigFile` | `false` | Alpha | 1.8 | 1.9 | | `ExpandCSIVolumes` | `false` | Alpha | 1.14 | 1.15 |
| `KubeletConfigFile` | - | Deprecated | 1.10 | - | | `ExpandCSIVolumes` | `true` | Beta | 1.16 | 1.23 |
| `KubeletPluginsWatcher` | `false` | Alpha | 1.11 | 1.11 | | `ExpandCSIVolumes` | `true` | GA | 1.24 | - |
| `KubeletPluginsWatcher` | `true` | Beta | 1.12 | 1.12 | | `ExpandInUsePersistentVolumes` | `false` | Alpha | 1.11 | 1.14 |
| `KubeletPluginsWatcher` | `true` | GA | 1.13 | - | | `ExpandInUsePersistentVolumes` | `true` | Beta | 1.15 | 1.23 |
| `MountPropagation` | `false` | Alpha | 1.8 | 1.9 | | `ExpandInUsePersistentVolumes` | `true` | GA | 1.24 | - |
| `MountPropagation` | `true` | Beta | 1.10 | 1.11 | | `ExpandPersistentVolumes` | `false` | Alpha | 1.8 | 1.10 |
| `MountPropagation` | `true` | GA | 1.12 | - | | `ExpandPersistentVolumes` | `true` | Beta | 1.11 | 1.23 |
| `NodeLease` | `false` | Alpha | 1.12 | 1.13 | | `ExpandPersistentVolumes` | `true` | GA | 1.24 |- |
| `NodeLease` | `true` | Beta | 1.14 | 1.16 | | `IdentifyPodOS` | `false` | Alpha | 1.23 | 1.23 |
| `NodeLease` | `true` | GA | 1.17 | - | | `IdentifyPodOS` | `true` | Beta | 1.24 | 1.24 |
| `PersistentLocalVolumes` | `false` | Alpha | 1.7 | 1.9 | | `IdentifyPodOS` | `true` | GA | 1.25 | - |
| `PersistentLocalVolumes` | `true` | Beta | 1.10 | 1.13 | | `IndexedJob` | `false` | Alpha | 1.21 | 1.21 |
| `PersistentLocalVolumes` | `true` | GA | 1.14 | - | | `IndexedJob` | `true` | Beta | 1.22 | 1.23 |
| `PodPriority` | `false` | Alpha | 1.8 | 1.10 | | `IndexedJob` | `true` | GA | 1.24 | - |
| `PodPriority` | `true` | Beta | 1.11 | 1.13 | | `LocalStorageCapacityIsolation` | `false` | Alpha | 1.7 | 1.9 |
| `PodPriority` | `true` | GA | 1.14 | - | | `LocalStorageCapacityIsolation` | `true` | Beta | 1.10 | 1.24 |
| `PodReadinessGates` | `false` | Alpha | 1.11 | 1.11 | | `LocalStorageCapacityIsolation` | `true` | GA | 1.25 | - |
| `PodReadinessGates` | `true` | Beta | 1.12 | 1.13 | | `NetworkPolicyEndPort` | `false` | Alpha | 1.21 | 1.21 |
| `PodReadinessGates` | `true` | GA | 1.14 | - | | `NetworkPolicyEndPort` | `true` | Beta | 1.22 | 1.24 |
| `PodShareProcessNamespace` | `false` | Alpha | 1.10 | 1.11 | | `NetworkPolicyEndPort` | `true` | GA | 1.25 | - |
| `PodShareProcessNamespace` | `true` | Beta | 1.12 | 1.16 | | `NonPreemptingPriority` | `false` | Alpha | 1.15 | 1.18 |
| `PodShareProcessNamespace` | `true` | GA | 1.17 | - | | `NonPreemptingPriority` | `true` | Beta | 1.19 | 1.23 |
| `PVCProtection` | `false` | Alpha | 1.9 | 1.9 | | `NonPreemptingPriority` | `true` | GA | 1.24 | - |
| `PVCProtection` | - | Deprecated | 1.10 | - | | `PodAffinityNamespaceSelector` | `false` | Alpha | 1.21 | 1.21 |
| `RequestManagement` | `false` | Alpha | 1.15 | 1.16 | | `PodAffinityNamespaceSelector` | `true` | Beta | 1.22 | 1.23 |
| `ResourceQuotaScopeSelectors` | `false` | Alpha | 1.11 | 1.11 | | `PodAffinityNamespaceSelector` | `true` | GA | 1.24 | - |
| `ResourceQuotaScopeSelectors` | `true` | Beta | 1.12 | 1.16 | | `PodOverhead` | `false` | Alpha | 1.16 | 1.17 |
| `ResourceQuotaScopeSelectors` | `true` | GA | 1.17 | - | | `PodOverhead` | `true` | Beta | 1.18 | 1.23 |
| `ScheduleDaemonSetPods` | `false` | Alpha | 1.11 | 1.11 | | `PodOverhead` | `true` | GA | 1.24 | - |
| `ScheduleDaemonSetPods` | `true` | Beta | 1.12 | 1.16 | | `PodSecurity` | `false` | Alpha | 1.22 | 1.22 |
| `ScheduleDaemonSetPods` | `true` | GA | 1.17 | - | | `PodSecurity` | `true` | Beta | 1.23 | 1.24 |
| `ServiceLoadBalancerFinalizer` | `false` | Alpha | 1.15 | 1.15 | | `PodSecurity` | `true` | GA | 1.25 | |
| `ServiceLoadBalancerFinalizer` | `true` | Beta | 1.16 | 1.16 | | `PreferNominatedNode` | `false` | Alpha | 1.21 | 1.21 |
| `ServiceLoadBalancerFinalizer` | `true` | GA | 1.17 | - | | `PreferNominatedNode` | `true` | Beta | 1.22 | 1.23 |
| `StorageObjectInUseProtection` | `true` | Beta | 1.10 | 1.10 | | `PreferNominatedNode` | `true` | GA | 1.24 | - |
| `StorageObjectInUseProtection` | `true` | GA | 1.11 | - | | `RemoveSelfLink` | `false` | Alpha | 1.16 | 1.19 |
| `SupportIPVSProxyMode` | `false` | Alpha | 1.8 | 1.8 | | `RemoveSelfLink` | `true` | Beta | 1.20 | 1.23 |
| `SupportIPVSProxyMode` | `false` | Beta | 1.9 | 1.9 | | `RemoveSelfLink` | `true` | GA | 1.24 | - |
| `SupportIPVSProxyMode` | `true` | Beta | 1.10 | 1.10 | | `ServerSideApply` | `false` | Alpha | 1.14 | 1.15 |
| `SupportIPVSProxyMode` | `true` | GA | 1.11 | - | | `ServerSideApply` | `true` | Beta | 1.16 | 1.21 |
| `TaintBasedEvictions` | `false` | Alpha | 1.6 | 1.12 | | `ServerSideApply` | `true` | GA | 1.22 | - |
| `TaintBasedEvictions` | `true` | Beta | 1.13 | 1.17 | | `ServiceLBNodePortControl` | `false` | Alpha | 1.20 | 1.21 |
| `TaintBasedEvictions` | `true` | GA | 1.18 | - | | `ServiceLBNodePortControl` | `true` | Beta | 1.22 | 1.23 |
| `TaintNodesByCondition` | `false` | Alpha | 1.8 | 1.11 | | `ServiceLBNodePortControl` | `true` | GA | 1.24 | - |
| `TaintNodesByCondition` | `true` | Beta | 1.12 | 1.16 | | `ServiceLoadBalancerClass` | `false` | Alpha | 1.21 | 1.21 |
| `TaintNodesByCondition` | `true` | GA | 1.17 | - | | `ServiceLoadBalancerClass` | `true` | Beta | 1.22 | 1.23 |
| `VolumePVCDataSource` | `false` | Alpha | 1.15 | 1.15 | | `ServiceLoadBalancerClass` | `true` | GA | 1.24 | - |
| `VolumePVCDataSource` | `true` | Beta | 1.16 | 1.17 | | `StatefulSetMinReadySeconds` | `false` | Alpha | 1.22 | 1.22 |
| `VolumePVCDataSource` | `true` | GA | 1.18 | - | | `StatefulSetMinReadySeconds` | `true` | Beta | 1.23 | 1.24 |
| `VolumeScheduling` | `false` | Alpha | 1.9 | 1.9 | | `StatefulSetMinReadySeconds` | `true` | GA | 1.25 | - |
| `VolumeScheduling` | `true` | Beta | 1.10 | 1.12 | | `SuspendJob` | `false` | Alpha | 1.21 | 1.21 |
| `VolumeScheduling` | `true` | GA | 1.13 | - | | `SuspendJob` | `true` | Beta | 1.22 | 1.23 |
| `VolumeScheduling` | `false` | Alpha | 1.9 | 1.9 | | `SuspendJob` | `true` | GA | 1.24 | - |
| `VolumeScheduling` | `true` | Beta | 1.10 | 1.12 |
| `VolumeScheduling` | `true` | GA | 1.13 | - |
| `VolumeSubpath` | `true` | GA | 1.13 | - |
| `VolumeSubpathEnvExpansion` | `false` | Alpha | 1.14 | 1.14 |
| `VolumeSubpathEnvExpansion` | `true` | Beta | 1.15 | 1.16 |
| `VolumeSubpathEnvExpansion` | `true` | GA | 1.17 | - |
| `WatchBookmark` | `false` | Alpha | 1.15 | 1.15 |
| `WatchBookmark` | `true` | Beta | 1.16 | 1.16 |
| `WatchBookmark` | `true` | GA | 1.17 | - |
| `WindowsGMSA` | `false` | Alpha | 1.14 | 1.15 |
| `WindowsGMSA` | `true` | Beta | 1.16 | 1.17 |
| `WindowsGMSA` | `true` | GA | 1.18 | - |
| `WindowsRunAsUserName` | `false` | Alpha | 1.16 | 1.16 |
| `WindowsRunAsUserName` | `true` | Beta | 1.17 | 1.17 |
| `WindowsRunAsUserName` | `true` | GA | 1.18 | - |
{{< /table >}}
## Using a feature


@ -8,12 +8,6 @@ weight: 30
A *node conformance test* is a containerized test framework that provides system verification and functionality tests for a node. The test validates whether the node meets the minimum requirements for Kubernetes; a node that passes the test is qualified to join a Kubernetes cluster.
## Limitations
In Kubernetes version 1.5, the node conformance test has the following limitation:
* The node conformance test only supports Docker as the container runtime.
## Node prerequisites
To run the conformance test, a node must satisfy the same prerequisites as a standard Kubernetes node. At a minimum, the node should have the following daemons installed:
@ -25,10 +19,11 @@ In Kubernetes version 1.5, the node conformance test has the following limitation...
To run the node conformance test, perform the following steps:
1. Point your kubelet at localhost (`--api-servers="http://localhost:8080"`). 1. Work out the value of the kubelet's `--kubeconfig` option; for example: `--kubeconfig=/var/lib/kubelet/config.yaml`.
Because this test framework launches a local master to test the kubelet, point the kubelet at localhost (`--api-servers="http://localhost:8080"`). There are some other kubelet flags to consider: Because this test framework launches a local control plane to test the kubelet, it uses `http://localhost:8080` as the URL of the API server.
* `--pod-cidr`: If you are using `kubenet`, specify an arbitrary CIDR for the kubelet, for example `--pod-cidr=10.180.0.0/24`. There are some other kubelet command line parameters you can use:
* `--cloud-provider`: If you are using `--cloud-provider=gce`, remove the flag before running the test.
* `--cloud-provider`: If you are using `--cloud-provider=gce`, remove the flag before running the test.
2. Run the node conformance test with the following command:
@ -37,7 +32,7 @@ In Kubernetes version 1.5, the node conformance test has the following limitation...
# $LOG_DIR is the test output path.
sudo docker run -it --rm --privileged --net=host \
-v /:/rootfs -v $CONFIG_DIR:$CONFIG_DIR -v $LOG_DIR:/var/result \
k8s.gcr.io/node-test:0.2 registry.k8s.io/node-test:0.2
```
## Running the node conformance test for other architectures
@ -58,7 +53,7 @@ Kubernetes also provides node conformance test docker images for other architectures...
sudo docker run -it --rm --privileged --net=host \
-v /:/rootfs:ro -v $CONFIG_DIR:$CONFIG_DIR -v $LOG_DIR:/var/result \
-e FOCUS=MirrorPod \ # run only the MirrorPod test
k8s.gcr.io/node-test:0.2 registry.k8s.io/node-test:0.2
```
To skip specific tests, overwrite the environment variable `SKIP` with a regular expression matching the tests you want to skip.
@ -67,7 +62,7 @@ sudo docker run -it --rm --privileged --net=host \
sudo docker run -it --rm --privileged --net=host \
-v /:/rootfs:ro -v $CONFIG_DIR:$CONFIG_DIR -v $LOG_DIR:/var/result \
-e SKIP=MirrorPod \ # run all node conformance tests except the MirrorPod test
k8s.gcr.io/node-test:0.2 registry.k8s.io/node-test:0.2
```
The node conformance test is a containerized version of the [node e2e test](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-node/e2e-node-tests.md).


@ -123,7 +123,7 @@ ContainerD on Windows in Kubernetes v1.18 has the following known shortcomings...
Kubernetes [volumes](/docs/concepts/storage/volumes/) enable complex applications with data persistence and Pod volume sharing requirements to be deployed on Kubernetes. Management of persistent volumes associated with a specific storage backend or protocol includes actions such as provisioning/de-provisioning/resizing of volumes, attaching/detaching volumes to/from a Kubernetes node, and mounting/dismounting volumes to/from individual containers in a Pod that needs to persist data. The code implementing these volume management actions for a specific storage backend or protocol is shipped in the form of a Kubernetes volume [plugin](/docs/concepts/storage/volumes/#types-of-volumes). The following broad classes of Kubernetes volume plugins are supported on Windows:
##### In-tree volume plugins
Code associated with in-tree volume plugins ships as part of the core Kubernetes code base. Deployment of in-tree volume plugins does not require installing additional scripts or deploying separate containerized plugin components. These plugins can handle provisioning/de-provisioning and resizing of volumes in the storage backend, attaching/detaching volumes to/from a Kubernetes node, and mounting/dismounting volumes to/from individual containers in a Pod. The following in-tree plugins support Windows nodes:
* [awsElasticBlockStore](/docs/concepts/storage/volumes/#awselasticblockstore)
* [azureDisk](/docs/concepts/storage/volumes/#azuredisk)
@ -167,7 +167,7 @@ Windows supports five different networking drivers/modes: L2bridge, L2tunnel, Overlay, Transparent, and NAT...
| -------------- | ----------- | ------------------------------ | --------------- | ------------------------------ |
| L2bridge | Containers are attached to an external vSwitch. Containers are attached to the underlay network, although the physical network doesn't need to learn the container MACs because they are rewritten on ingress/egress. Inter-container traffic is bridged inside the container host. | MAC is rewritten to the host MAC; the IP remains the same. | [win-bridge](https://github.com/containernetworking/plugins/tree/master/plugins/main/windows/win-bridge), [Azure-CNI](https://github.com/Azure/azure-container-networking/blob/master/docs/cni.md), Flannel host-gateway uses win-bridge. | win-bridge uses L2bridge network mode, connecting containers to the underlay of hosts, which offers the best performance. Inter-node connectivity requires user-defined routes (UDR). |
| L2Tunnel | This is a special case of l2bridge, used only on Azure. All packets are sent to the virtualization host where SDN policy is applied. | MAC is rewritten; the IP is visible on the underlay network. | [Azure-CNI](https://github.com/Azure/azure-container-networking/blob/master/docs/cni.md) | Azure-CNI allows integrating containers with an Azure vNET and leveraging the set of capabilities that [Azure Virtual Network provides](https://azure.microsoft.com/en-us/services/virtual-network/), for example connecting securely to Azure services or using Azure NSGs. See [some examples of azure-cni](https://docs.microsoft.com/en-us/azure/aks/concepts-network#azure-cni-advanced-networking). |
| Overlay (overlay networking for Windows in Kubernetes is in *alpha* stage) | Containers are given a vNIC connected to an external vSwitch. Each overlay network gets its own IP subnet, defined by a custom IP prefix. The overlay network driver uses VXLAN encapsulation. | Encapsulated with an outer header. | [win-overlay](https://github.com/containernetworking/plugins/tree/master/plugins/main/windows/win-overlay), Flannel VXLAN (uses win-overlay) | win-overlay should be used when virtual container networks need to be isolated from the underlay of hosts (for example, for security reasons). It allows IPs to be reused for different overlay networks (which have different VNID tags) if IPs are limited in your datacenter. This option requires [KB4489899](https://support.microsoft.com/help/4489899) on Windows Server 2019. |
| Transparent (special use case of [ovn-kubernetes](https://github.com/openvswitch/ovn-kubernetes)) | Requires an external vSwitch. Containers are attached to an external vSwitch which enables intra-pod communication via logical networks (logical switches and routers). | Packets are encapsulated via [GENEVE](https://datatracker.ietf.org/doc/draft-gross-geneve/) or [STT](https://datatracker.ietf.org/doc/draft-davie-stt/) tunneling to reach pods that are not on the same host. Packets are forwarded or dropped via the tunnel metadata information supplied by the ovn network controller. NAT is done for north-south communication. | [ovn-kubernetes](https://github.com/openvswitch/ovn-kubernetes) | [Deploy via ansible](https://github.com/openvswitch/ovn-kubernetes/tree/master/contrib). Distributed ACLs can be applied via Kubernetes policies. Supports IPAM. Load balancing can be achieved without kube-proxy. NAT is done without using iptables/netsh. |
| NAT (*not used in Kubernetes*) | Containers are given a vNIC connected to an internal vSwitch. DNS/DHCP is provided using an internal component called [WinNAT](https://blogs.technet.microsoft.com/virtualization/2016/05/25/windows-nat-winnat-capabilities-and-limitations/). | MAC and IP are rewritten to the host MAC/IP. | [nat](https://github.com/Microsoft/windows-container-networking/tree/master/plugins/nat) | Included here for completeness. |


@ -295,7 +295,7 @@ To control the check behavior of liveness and readiness probes more precisely...
* `periodSeconds`: How often (in seconds) to perform the probe. Defaults to 10 seconds. The minimum value is 1.
* `timeoutSeconds`: Number of seconds after which the probe times out. Defaults to 1 second. The minimum value is 1.
* `successThreshold`: Minimum consecutive successes for the probe to be considered successful after having failed.
Defaults to 1. Must be 1 for liveness probes. The minimum value is 1. Defaults to 1. Must be 1 for liveness and startup probes. The minimum value is 1.
* `failureThreshold`: When a probe fails, Kubernetes tries the probe up to `failureThreshold` times before giving up.
For a liveness probe, reaching the failure threshold means the container is restarted.
For a readiness probe, the Pod is marked as not ready. Defaults to 3. The minimum value is 1. (See the sketch below.)
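Put together, these fields are set on the probe itself, as in this sketch of an HTTP liveness probe (the path and port are placeholders):
```yaml
livenessProbe:
  httpGet:
    path: /healthz   # placeholder endpoint
    port: 8080       # placeholder port
  periodSeconds: 10      # probe every 10 seconds
  timeoutSeconds: 1      # fail a probe attempt after 1 second
  successThreshold: 1    # must be 1 for liveness probes
  failureThreshold: 3    # restart the container after 3 consecutive failures
```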


@ -1,7 +1,7 @@
---
title: Troubleshooting Applications
description: Debugging common containerized application issues. description: Debugging common issues with containerized applications.
weight: 20
---
This doc contains a set of resources for fixing issues with containerized applications. It covers things like common issues with Kubernetes resources (like Pods, Services, or StatefulSets), advice on making sense of container termination messages, and ways to debug running containers. This document contains a set of resources for fixing issues with containerized applications. It covers common issues with Kubernetes resources (such as Pods, Services, and StatefulSets), advice on making sense of container termination messages, and methods for debugging running containers.

Some files were not shown because too many files have changed in this diff.