Merge remote-tracking branch 'upstream/main' into dev-1.24

commit 0135d3642b
@@ -1,6 +1,7 @@
 [submodule "themes/docsy"]
 	path = themes/docsy
 	url = https://github.com/google/docsy.git
+	branch = v0.2.0
 [submodule "api-ref-generator"]
 	path = api-ref-generator
 	url = https://github.com/kubernetes-sigs/reference-docs
@@ -27,16 +27,18 @@ RUN mkdir $HOME/src && \
 FROM golang:1.16-alpine
 
 RUN apk add --no-cache \
+    runuser \
     git \
     openssh-client \
     rsync \
     npm && \
     npm install -D autoprefixer postcss-cli
 
-RUN mkdir -p /usr/local/src && \
-    cd /usr/local/src && \
+RUN mkdir -p /var/hugo && \
     addgroup -Sg 1000 hugo && \
-    adduser -Sg hugo -u 1000 -h /src hugo
+    adduser -Sg hugo -u 1000 -h /var/hugo hugo && \
+    chown -R hugo: /var/hugo && \
+    runuser -u hugo -- git config --global --add safe.directory /src
 
 COPY --from=0 /go/bin/hugo /usr/local/bin/hugo
 
@@ -131,7 +131,7 @@ Hugo is shipped in two set of binaries for technical reasons. The current websit
 If you run `make serve` on macOS and receive the following error:
 
 -->
-### 对 macOs 上打开太多文件的故障排除
+### 对 macOS 上打开太多文件的故障排除
 
 如果在 macOS 上运行 `make serve` 收到以下错误:
 
@@ -24,14 +24,14 @@ Die Add-Ons in den einzelnen Kategorien sind alphabetisch sortiert - Die Reihenf
 * [Canal](https://github.com/tigera/canal/tree/master/k8s-install) vereint Flannel und Calico um Networking- und Network-Policies bereitzustellen.
 * [Cilium](https://github.com/cilium/cilium) ist ein L3 Network- and Network-Policy-Plugin welches das transparent HTTP/API/L7-Policies durchsetzen kann. Sowohl Routing- als auch Overlay/Encapsulation-Modes werden uterstützt. Außerdem kann Cilium auf andere CNI-Plugins aufsetzen.
 * [CNI-Genie](https://github.com/Huawei-PaaS/CNI-Genie) ermöglicht das nahtlose Verbinden von Kubernetes mit einer Reihe an CNI-Plugins wie z.B. Calico, Canal, Flannel, Romana, oder Weave.
-* [Contiv](http://contiv.github.io) bietet konfigurierbares Networking (Native L3 auf BGP, Overlay mit vxlan, Klassisches L2, Cisco-SDN/ACI) für verschiedene Anwendungszwecke und auch umfangreiches Policy-Framework. Das Contiv-Projekt ist vollständig [Open Source](http://github.com/contiv). Der [installer](http://github.com/contiv/install) bietet sowohl kubeadm als auch nicht-kubeadm basierte Installationen.
+* [Contiv](https://contivpp.io/) bietet konfigurierbares Networking (Native L3 auf BGP, Overlay mit vxlan, Klassisches L2, Cisco-SDN/ACI) für verschiedene Anwendungszwecke und auch umfangreiches Policy-Framework. Das Contiv-Projekt ist vollständig [Open Source](http://github.com/contiv). Der [installer](http://github.com/contiv/install) bietet sowohl kubeadm als auch nicht-kubeadm basierte Installationen.
 * [Contrail](http://www.juniper.net/us/en/products-services/sdn/contrail/contrail-networking/), basierend auf [Tungsten Fabric](https://tungsten.io), ist eine Open Source, multi-Cloud Netzwerkvirtualisierungs- und Policy-Management Plattform. Contrail und Tungsten Fabric sind mit Orechstratoren wie z.B. Kubernetes, OpenShift, OpenStack und Mesos integriert und bieten Isolationsmodi für Virtuelle Maschinen, Container (bzw. Pods) und Bare Metal workloads.
 * [Flannel](https://github.com/flannel-io/flannel#deploying-flannel-manually) ist ein Overlay-Network-Provider der mit Kubernetes genutzt werden kann.
 * [Knitter](https://github.com/ZTE/Knitter/) ist eine Network-Lösung die Mehrfach-Network in Kubernetes ermöglicht.
 * Multus ist ein Multi-Plugin für Mehrfachnetzwerk-Unterstützung um alle CNI-Plugins (z.B. Calico, Cilium, Contiv, Flannel), zusätzlich zu SRIOV-, DPDK-, OVS-DPDK- und VPP-Basierten Workloads in Kubernetes zu unterstützen.
 * [NSX-T](https://docs.vmware.com/en/VMware-NSX-T/2.0/nsxt_20_ncp_kubernetes.pdf) Container Plug-in (NCP) bietet eine Integration zwischen VMware NSX-T und einem Orchestator wie z.B. Kubernetes. Außerdem bietet es eine Integration zwischen NSX-T und Containerbasierten CaaS/PaaS-Plattformen wie z.B. Pivotal Container Service (PKS) und OpenShift.
 * [Nuage](https://github.com/nuagenetworks/nuage-kubernetes/blob/v5.1.1-1/docs/kubernetes-1-installation.rst) ist eine SDN-Plattform die Policy-Basiertes Networking zwischen Kubernetes Pods und nicht-Kubernetes Umgebungen inklusive Sichtbarkeit und Security-Monitoring bereitstellt.
-* [Romana](http://romana.io) ist eine Layer 3 Network-Lösung für Pod-Netzwerke welche auch die [NetworkPolicy API](/docs/concepts/services-networking/network-policies/) unterstützt. Details zur Installation als kubeadm Add-On sind [hier](https://github.com/romana/romana/tree/master/containerize) verfügbar.
+* [Romana](https://github.com/romana/romana) ist eine Layer 3 Network-Lösung für Pod-Netzwerke welche auch die [NetworkPolicy API](/docs/concepts/services-networking/network-policies/) unterstützt. Details zur Installation als kubeadm Add-On sind [hier](https://github.com/romana/romana/tree/master/containerize) verfügbar.
 * [Weave Net](https://www.weave.works/docs/net/latest/kube-addon/) bietet Networking and Network-Policies und arbeitet auf beiden Seiten der Network-Partition ohne auf eine externe Datenbank angwiesen zu sein.
 
 ## Service-Discovery
@@ -52,5 +52,3 @@ Die Add-Ons in den einzelnen Kategorien sind alphabetisch sortiert - Die Reihenf
 Es gibt einige weitere Add-Ons die in dem abgekündigten [cluster/addons](https://git.k8s.io/kubernetes/cluster/addons)-Verzeichnis dokumentiert sind.
 
 Add-Ons die ordentlich gewartet werden dürfen gerne hier aufgezählt werden. Wir freuen uns auf PRs!
-
-
@@ -4,7 +4,12 @@ date: 2017-11-02
 slug: containerd-container-runtime-options-kubernetes
 url: /blog/2017/11/Containerd-Container-Runtime-Options-Kubernetes
 ---
-**_Editor's note: Today's post is by Lantao Liu, Software Engineer at Google, and Mike Brown, Open Source Developer Advocate at IBM._**
+**Authors:** Lantao Liu (Google), and Mike Brown (IBM)
+
+_Update: Kubernetes support for Docker via `dockershim` is now deprecated.
+For more information, read the [deprecation notice](/blog/2020/12/08/kubernetes-1-20-release-announcement/#dockershim-deprecation).
+You can also discuss the deprecation via a dedicated [GitHub issue](https://github.com/kubernetes/kubernetes/issues/106917)._
 
 A _container runtime_ is software that executes containers and manages container images on a node. Today, the most widely known container runtime is [Docker](https://www.docker.com/), but there are other container runtimes in the ecosystem, such as [rkt](https://coreos.com/rkt/), [containerd](https://containerd.io/), and [lxd](https://linuxcontainers.org/lxd/). Docker is by far the most common container runtime used in production Kubernetes environments, but Docker’s smaller offspring, containerd, may prove to be a better option. This post describes using containerd with Kubernetes.
 
@@ -360,7 +360,7 @@ So let's fix the issue by installing the missing package:
 sudo apt install -y conntrack
 ```
 
 
 
 Let's try to launch it again:
 
@@ -7,6 +7,10 @@ slug: dont-panic-kubernetes-and-docker
 
 **Authors:** Jorge Castro, Duffie Cooley, Kat Cosgrove, Justin Garrison, Noah Kantrowitz, Bob Killen, Rey Lejano, Dan “POP” Papandrea, Jeffrey Sica, Davanum “Dims” Srinivas
 
+_Update: Kubernetes support for Docker via `dockershim` is now deprecated.
+For more information, read the [deprecation notice](/blog/2020/12/08/kubernetes-1-20-release-announcement/#dockershim-deprecation).
+You can also discuss the deprecation via a dedicated [GitHub issue](https://github.com/kubernetes/kubernetes/issues/106917)._
+
 Kubernetes is [deprecating
 Docker](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md#deprecation)
 as a container runtime after v1.20.
@@ -5,13 +5,20 @@ date: 2021-11-12
 slug: are-you-ready-for-dockershim-removal
 ---
 
-**Author:** Sergey Kanzhelev, Google. With reviews from Davanum Srinivas, Elana Hashman, Noah Kantrowitz, Rey Lejano.
+**Authors:** Sergey Kanzhelev, Google. With reviews from Davanum Srinivas, Elana Hashman, Noah Kantrowitz, Rey Lejano.
 
 {{% alert color="info" title="Poll closed" %}}
 This poll closed on January 7, 2022.
 {{% /alert %}}
 
-Last year we announced that Dockershim is being deprecated: [Dockershim Deprecation FAQ](/blog/2020/12/02/dockershim-faq/).
+Last year we [announced](/blog/2020/12/08/kubernetes-1-20-release-announcement/#dockershim-deprecation)
+that Kubernetes' dockershim component (which provides a built-in integration for
+Docker Engine) is deprecated.
+
+_Update: There's a [Dockershim Deprecation FAQ](/blog/2020/12/02/dockershim-faq/)
+with more information, and you can also discuss the deprecation via a dedicated
+[GitHub issue](https://github.com/kubernetes/kubernetes/issues/106917)._
 
 Our current plan is to remove dockershim from the Kubernetes codebase soon.
 We are looking for feedback from you whether you are ready for dockershim
 removal and to ensure that you are ready when the time comes.
@@ -1,6 +1,7 @@
 ---
 layout: blog
 title: "Updated: Dockershim Removal FAQ"
+linkTitle: "Dockershim Removal FAQ"
 date: 2022-02-17
 slug: dockershim-faq
 aliases: [ '/dockershim' ]
@@ -184,7 +185,7 @@ options are available as you migrate things over.
 [documentation]: https://github.com/containerd/cri/blob/master/docs/registry.md
 
 For instructions on how to use containerd and CRI-O with Kubernetes, see the
-Kubernetes documentation on [Container Runtimes]
+Kubernetes documentation on [Container Runtimes].
 
 [Container Runtimes]: /docs/setup/production-environment/container-runtimes/
 
@@ -194,12 +195,24 @@ If you use a vendor-supported Kubernetes distribution, you can ask them about
 upgrade plans for their products. For end-user questions, please post them
 to our end user community forum: https://discuss.kubernetes.io/.
 
+You can discuss the decision to remove dockershim via a dedicated
+[GitHub issue](https://github.com/kubernetes/kubernetes/issues/106917).
+
 You can also check out the excellent blog post
 [Wait, Docker is deprecated in Kubernetes now?][dep] a more in-depth technical
 discussion of the changes.
 
 [dep]: https://dev.to/inductor/wait-docker-is-deprecated-in-kubernetes-now-what-do-i-do-e4m
 
+### Is there any tooling that can help me find dockershim in use
+
+Yes! The [Detector for Docker Socket (DDS)][dds] is a kubectl plugin that you can
+install and then use to check your cluster. DDS can detect if active Kubernetes workloads
+are mounting the Docker Engine socket (`docker.sock`) as a volume.
+Find more details and usage patterns in the DDS project's [README][dds].
+
+[dds]: https://github.com/aws-containers/kubectl-detector-for-docker-socket
+
 ### Can I have a hug?
 
 Yes, we're still giving hugs as requested. 🤗🤗🤗
@@ -0,0 +1,25 @@
+---
+layout: blog
+title: "Dockershim: The Historical Context"
+date: 2022-05-03
+slug: dockershim-historical-context
+---
+
+**Author:** Kat Cosgrove
+
+
+Dockershim has been removed as of Kubernetes v1.24, and this is a positive move for the project. However, context is important for fully understanding something, be it socially or in software development, and this deserves a more in-depth review. Alongside the dockershim removal in Kubernetes v1.24, we’ve seen some confusion (sometimes at a panic level) and dissatisfaction with this decision in the community, largely due to a lack of context around this removal. The decision to deprecate and eventually remove dockershim from Kubernetes was not made quickly or lightly. Still, it’s been in the works for so long that many of today’s users are newer than that decision, and certainly newer than the choices that led to the dockershim being necessary in the first place.
+
+So what is the dockershim, and why is it going away?
+
+In the early days of Kubernetes, we only supported one container runtime. That runtime was Docker Engine. Back then, there weren’t really a lot of other options out there and Docker was the dominant tool for working with containers, so this was not a controversial choice. Eventually, we started adding more container runtimes, like rkt and hypernetes, and it became clear that Kubernetes users want a choice of runtimes working best for them. So Kubernetes needed a way to allow cluster operators the flexibility to use whatever runtime they choose.
+
+The [Container Runtime Interface](/blog/2016/12/container-runtime-interface-cri-in-kubernetes/) (CRI) was released to allow that flexibility. The introduction of CRI was great for the project and users alike, but it did introduce a problem: Docker Engine’s use as a container runtime predates CRI, and Docker Engine is not CRI-compatible. To solve this issue, a small software shim (dockershim) was introduced as part of the kubelet component specifically to fill in the gaps between Docker Engine and CRI, allowing cluster operators to continue using Docker Engine as their container runtime largely uninterrupted.
+
+However, this little software shim was never intended to be a permanent solution. Over the course of years, its existence has introduced a lot of unnecessary complexity to the kubelet itself. Some integrations are inconsistently implemented for Docker because of this shim, resulting in an increased burden on maintainers, and maintaining vendor-specific code is not in line with our open source philosophy. To reduce this maintenance burden and move towards a more collaborative community in support of open standards, [KEP-2221 was introduced](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/2221-remove-dockershim), proposing the removal of the dockershim. With the release of Kubernetes v1.20, the deprecation was official.
+
+We didn’t do a great job communicating this, and unfortunately, the deprecation announcement led to some panic within the community. Confusion around what this meant for Docker as a company, if container images built by Docker would still run, and what Docker Engine actually is led to a conflagration on social media. This was our fault; we should have more clearly communicated what was happening and why at the time. To combat this, we released [a blog](/blog/2020/12/02/dont-panic-kubernetes-and-docker/) and [accompanying FAQ](/blog/2020/12/02/dockershim-faq/) to allay the community’s fears and correct some misconceptions about what Docker is and how containers work within Kubernetes. As a result of the community’s concerns, Docker and Mirantis jointly agreed to continue supporting the dockershim code in the form of [cri-dockerd](https://www.mirantis.com/blog/the-future-of-dockershim-is-cri-dockerd/), allowing you to continue using Docker Engine as your container runtime if need be. For the interest of users who want to try other runtimes, like containerd or cri-o, [migration documentation was written](/docs/tasks/administer-cluster/migrating-from-dockershim/change-runtime-containerd/).
+
+We later [surveyed the community](https://kubernetes.io/blog/2021/11/12/are-you-ready-for-dockershim-removal/) and [discovered that there are still many users with questions and concerns](/blog/2022/01/07/kubernetes-is-moving-on-from-dockershim). In response, Kubernetes maintainers and the CNCF committed to addressing these concerns by extending documentation and other programs. In fact, this blog post is a part of this program. With so many end users successfully migrated to other runtimes, and improved documentation, we believe that everyone has a paved way to migration now.
+
+Docker is not going away, either as a tool or as a company. It’s an important part of the cloud native community and the history of the Kubernetes project. We wouldn’t be where we are without them. That said, removing dockershim from kubelet is ultimately good for the community, the ecosystem, the project, and open source at large. This is an opportunity for all of us to come together to support open standards, and we’re glad to be doing so with the help of Docker and the community.
@@ -21,6 +21,7 @@ This page lists some of the available add-ons and links to their respective inst
 * [Canal](https://github.com/tigera/canal/tree/master/k8s-install) unites Flannel and Calico, providing networking and network policy.
 * [Cilium](https://github.com/cilium/cilium) is a L3 network and network policy plugin that can enforce HTTP/API/L7 policies transparently. Both routing and overlay/encapsulation mode are supported, and it can work on top of other CNI plugins.
 * [CNI-Genie](https://github.com/Huawei-PaaS/CNI-Genie) enables Kubernetes to seamlessly connect to a choice of CNI plugins, such as Calico, Canal, Flannel, Romana, or Weave.
+* [Contiv](https://contivpp.io/) provides configurable networking (native L3 using BGP, overlay using vxlan, classic L2, and Cisco-SDN/ACI) for various use cases and a rich policy framework. Contiv project is fully [open sourced](https://github.com/contiv). The [installer](https://github.com/contiv/install) provides both kubeadm and non-kubeadm based installation options.
 * [Contrail](https://www.juniper.net/us/en/products-services/sdn/contrail/contrail-networking/), based on [Tungsten Fabric](https://tungsten.io), is an open source, multi-cloud network virtualization and policy management platform. Contrail and Tungsten Fabric are integrated with orchestration systems such as Kubernetes, OpenShift, OpenStack and Mesos, and provide isolation modes for virtual machines, containers/pods and bare metal workloads.
 * [Flannel](https://github.com/flannel-io/flannel#deploying-flannel-manually) is an overlay network provider that can be used with Kubernetes.
 * [Knitter](https://github.com/ZTE/Knitter/) is a plugin to support multiple network interfaces in a Kubernetes pod.
@@ -478,7 +478,7 @@ in the Pod manifest, and represent parameters to the container runtime.
 
 Security profiles are control plane mechanisms to enforce specific settings in the Security Context,
 as well as other related parameters outside the Security Context. As of July 2021,
-[Pod Security Policies](/docs/concepts/profile/pod-security-profile/) are deprecated in favor of the
+[Pod Security Policies](/docs/concepts/security/pod-security-policy/) are deprecated in favor of the
 built-in [Pod Security Admission Controller](/docs/concepts/security/pod-security-admission/).
 
 {{% thirdparty-content %}}
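The hunk above points readers at the built-in Pod Security Admission controller, which is driven by namespace labels rather than a separate policy object. A minimal sketch, assuming a hypothetical namespace named `example-ns`:

```yaml
# Hypothetical namespace, for illustration only. The pod-security.kubernetes.io
# labels select which profile Pod Security Admission enforces or warns about.
apiVersion: v1
kind: Namespace
metadata:
  name: example-ns
  labels:
    pod-security.kubernetes.io/enforce: baseline
    pod-security.kubernetes.io/warn: restricted
```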
@@ -323,17 +323,6 @@ If the feature gate `ExpandedDNSConfig` is enabled for the kube-apiserver and
 the kubelet, it is allowed for Kubernetes to have at most 32 search domains and
 a list of search domains of up to 2048 characters.
 
-### Feature availability
-
-The availability of Pod DNS Config and DNS Policy "`None`" is shown as below.
-
-| k8s version | Feature support |
-| :---------: |:-----------:|
-| 1.14 | Stable |
-| 1.10 | Beta (on by default)|
-| 1.9 | Alpha |
-
-
 ## {{% heading "whatsnext" %}}
 
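The removed availability table covered Pod DNS Config and the `None` DNS policy; both are expressed directly on the Pod spec. A minimal sketch, with placeholder pod name, image, nameserver, and search domain:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dns-example            # placeholder name
spec:
  containers:
  - name: test
    image: busybox:1.36        # placeholder image
    command: ["sleep", "3600"]
  dnsPolicy: "None"            # skip the cluster DNS defaults entirely
  dnsConfig:
    nameservers:
    - 192.0.2.1                # example nameserver (documentation range)
    searches:
    - ns1.svc.cluster-domain.example
    options:
    - name: ndots
      value: "2"
```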
@@ -45,42 +45,7 @@ See the [NetworkPolicy](/docs/reference/generated/kubernetes-api/{{< param "vers
 
 An example NetworkPolicy might look like this:
 
-```yaml
-apiVersion: networking.k8s.io/v1
-kind: NetworkPolicy
-metadata:
-  name: test-network-policy
-  namespace: default
-spec:
-  podSelector:
-    matchLabels:
-      role: db
-  policyTypes:
-    - Ingress
-    - Egress
-  ingress:
-    - from:
-        - ipBlock:
-            cidr: 172.17.0.0/16
-            except:
-              - 172.17.1.0/24
-        - namespaceSelector:
-            matchLabels:
-              project: myproject
-        - podSelector:
-            matchLabels:
-              role: frontend
-      ports:
-        - protocol: TCP
-          port: 6379
-  egress:
-    - to:
-        - ipBlock:
-            cidr: 10.0.0.0/24
-      ports:
-        - protocol: TCP
-          port: 5978
-```
+{{< codenew file="service/networking/networkpolicy.yaml" >}}
 
 {{< note >}}
 POSTing this to the API server for your cluster will have no effect unless your chosen networking solution supports network policy.
@@ -185,7 +185,7 @@ delete`](/docs/reference/generated/kubectl/kubectl-commands#delete). Kubectl wi
 for it to delete each pod before deleting the ReplicationController itself. If this kubectl
 command is interrupted, it can be restarted.
 
-When using the REST API or Go client library, you need to do the steps explicitly (scale replicas to
+When using the REST API or [client library](/docs/reference/using-api/client-libraries), you need to do the steps explicitly (scale replicas to
 0, wait for pod deletions, then delete the ReplicationController).
 
 ### Deleting only a ReplicationController
@@ -194,7 +194,7 @@ You can delete a ReplicationController without affecting any of its pods.
 
 Using kubectl, specify the `--cascade=orphan` option to [`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands#delete).
 
-When using the REST API or Go client library, you can delete the ReplicationController object.
+When using the REST API or [client library](/docs/reference/using-api/client-libraries), you can delete the ReplicationController object.
 
 Once the original is deleted, you can create a new ReplicationController to replace it. As long
 as the old and new `.spec.selector` are the same, then the new one will adopt the old pods.
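For readers following the ReplicationController changes above, the `.spec.selector` that lets a replacement controller adopt existing pods is the label query in the manifest; a minimal sketch with illustrative names and labels:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx                  # illustrative name
spec:
  replicas: 3
  selector:
    app: nginx                 # a new controller with the same selector adopts the old pods
  template:
    metadata:
      labels:
        app: nginx             # must match .spec.selector
    spec:
      containers:
      - name: nginx
        image: nginx:1.21      # illustrative image
```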
@@ -87,3 +87,17 @@ To close a pull request, leave a `/close` comment on the PR.
 The [`fejta-bot`](https://github.com/fejta-bot) bot marks issues as stale after 90 days of inactivity. After 30 more days it marks issues as rotten and closes them. PR wranglers should close issues after 14-30 days of inactivity.
 
 {{< /note >}}
+
+## PR Wrangler shadow program
+
+In late 2021, SIG Docs introduced the PR Wrangler Shadow Program. The program was introduced to help new contributors understand the PR wrangling process.
+
+### Become a shadow
+
+- If you are interested in shadowing as a PR wrangler, please visit the [PR Wranglers Wiki page](https://github.com/kubernetes/website/wiki/PR-Wranglers) to see the PR wrangling schedule for this year and sign up.
+
+- Kubernetes org members can edit the [PR Wranglers Wiki page](https://github.com/kubernetes/website/wiki/PR-Wranglers) and sign up to shadow an existing PR Wrangler for a week.
+
+- Others can reach out on the [#sig-docs Slack channel](https://kubernetes.slack.com/messages/sig-docs) to request shadowing an assigned PR Wrangler for a specific week. Feel free to reach out to Brad Topol (`@bradtopol`) or one of the [SIG Docs co-chairs/leads](https://github.com/kubernetes/community/tree/master/sig-docs#leadership).
+
+- Once you've signed up to shadow a PR Wrangler, introduce yourself to the PR Wrangler on the [Kubernetes Slack](https://slack.k8s.io).
@@ -31,7 +31,7 @@ kube-apiserver --authorization-mode=Example,RBAC --other-options --more-options
 The RBAC API declares four kinds of Kubernetes object: _Role_, _ClusterRole_,
 _RoleBinding_ and _ClusterRoleBinding_. You can
 [describe objects](/docs/concepts/overview/working-with-objects/kubernetes-objects/#understanding-kubernetes-objects),
-or amend them, using tools such as `kubectl,` just like any other Kubernetes object.
+or amend them, using tools such as `kubectl`, just like any other Kubernetes object.
 
 {{< caution >}}
 These objects, by design, impose access restrictions. If you are making changes
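As a reminder of what the four RBAC kinds mentioned in this hunk look like in practice, here is a minimal sketch of a namespaced Role plus the RoleBinding that grants it; the names, namespace, and subject are placeholders:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader             # placeholder name
rules:
- apiGroups: [""]              # "" means the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods              # placeholder name
  namespace: default
subjects:
- kind: User
  name: jane                   # placeholder subject
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```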
@@ -19,4 +19,9 @@ You can request eviction either by directly calling the Eviction API
 using a client of the kube-apiserver, like the `kubectl drain` command.
 When an `Eviction` object is created, the API server terminates the Pod.
+
+API-initiated evictions respect your configured [`PodDisruptionBudgets`](/docs/tasks/run-application/configure-pdb/)
+and [`terminationGracePeriodSeconds`](/docs/concepts/workloads/pods/pod-lifecycle#pod-termination).
 
 API-initiated eviction is not the same as [node-pressure eviction](/docs/concepts/scheduling-eviction/eviction/#kubelet-eviction).
+
+* See [API-initiated eviction](/docs/concepts/scheduling-eviction/api-eviction/) for more information.
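Because API-initiated eviction honours PodDisruptionBudgets, the budget it is checked against is an ordinary object like the following minimal sketch; the name, threshold, and selector are illustrative:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: example-pdb            # illustrative name
spec:
  minAvailable: 2              # an eviction is refused if it would drop ready pods below this
  selector:
    matchLabels:
      app: example             # illustrative selector
```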
@@ -15,6 +15,76 @@ This document serves both as a reference to the values and as a coordination poi
 
 ## Labels, annotations and taints used on API objects
 
+### app.kubernetes.io/component
+
+Example: `app.kubernetes.io/component=database`
+
+Used on: All Objects
+
+The component within the architecture.
+
+One of the [recommended labels](/docs/concepts/overview/working-with-objects/common-labels/#labels).
+
+### app.kubernetes.io/created-by
+
+Example: `app.kubernetes.io/created-by=controller-manager`
+
+Used on: All Objects
+
+The controller/user who created this resource.
+
+One of the [recommended labels](/docs/concepts/overview/working-with-objects/common-labels/#labels).
+
+### app.kubernetes.io/instance
+
+Example: `app.kubernetes.io/instance=mysql-abcxzy`
+
+Used on: All Objects
+
+A unique name identifying the instance of an application.
+
+One of the [recommended labels](/docs/concepts/overview/working-with-objects/common-labels/#labels).
+
+### app.kubernetes.io/managed-by
+
+Example: `app.kubernetes.io/managed-by=helm`
+
+Used on: All Objects
+
+The tool being used to manage the operation of an application.
+
+One of the [recommended labels](/docs/concepts/overview/working-with-objects/common-labels/#labels).
+
+### app.kubernetes.io/name
+
+Example: `app.kubernetes.io/name=mysql`
+
+Used on: All Objects
+
+The name of the application.
+
+One of the [recommended labels](/docs/concepts/overview/working-with-objects/common-labels/#labels).
+
+### app.kubernetes.io/part-of
+
+Example: `app.kubernetes.io/part-of=wordpress`
+
+Used on: All Objects
+
+The name of a higher level application this one is part of.
+
+One of the [recommended labels](/docs/concepts/overview/working-with-objects/common-labels/#labels).
+
+### app.kubernetes.io/version
+
+Example: `app.kubernetes.io/version="5.7.21"`
+
+Used on: All Objects
+
+The current version of the application (e.g., a semantic version, revision hash, etc.).
+
+One of the [recommended labels](/docs/concepts/overview/working-with-objects/common-labels/#labels).
+
 ### kubernetes.io/arch
 
 Example: `kubernetes.io/arch=amd64`
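The seven `app.kubernetes.io/*` labels added above are normally applied together; a sketch of how they might appear on a hypothetical Deployment (all names and the image are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-abcxzy           # illustrative name
  labels:
    app.kubernetes.io/name: mysql
    app.kubernetes.io/instance: mysql-abcxzy
    app.kubernetes.io/version: "5.7.21"
    app.kubernetes.io/component: database
    app.kubernetes.io/part-of: wordpress
    app.kubernetes.io/managed-by: helm
    app.kubernetes.io/created-by: controller-manager
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: mysql
      app.kubernetes.io/instance: mysql-abcxzy
  template:
    metadata:
      labels:
        app.kubernetes.io/name: mysql
        app.kubernetes.io/instance: mysql-abcxzy
    spec:
      containers:
      - name: mysql
        image: mysql:5.7.21    # illustrative image
```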
@@ -2,27 +2,27 @@
 reviewers:
 - vincepri
 - bart0sh
-title: Container runtimes
+title: Container Runtimes
 content_type: concept
 weight: 20
 ---
 <!-- overview -->
 
+{{% dockershim-removal %}}
+
 You need to install a
 {{< glossary_tooltip text="container runtime" term_id="container-runtime" >}}
 into each node in the cluster so that Pods can run there. This page outlines
 what is involved and describes related tasks for setting up nodes.
 
-<!-- body -->
-
 Kubernetes {{< skew currentVersion >}} requires that you use a runtime that
 conforms with the
 {{< glossary_tooltip term_id="cri" text="Container Runtime Interface">}} (CRI).
 
 See [CRI version support](#cri-versions) for more information.
 
-This page lists details for using several common container runtimes with
-Kubernetes, on Linux:
+This page provides an outline of how to use several common container runtimes with
+Kubernetes.
 
 - [containerd](#containerd)
 - [CRI-O](#cri-o)
@@ -30,12 +30,27 @@ Kubernetes, on Linux:
 - [Mirantis Container Runtime](#mcr)
 
 {{< note >}}
-For other operating systems, look for documentation specific to your platform.
+Kubernetes releases before v1.24 included a direct integration with Docker Engine,
+using a component named _dockershim_. That special direct integration is no longer
+part of Kubernetes (this removal was
+[announced](/blog/2020/12/08/kubernetes-1-20-release-announcement/#dockershim-deprecation)
+as part of the v1.20 release).
+You can read
+[Check whether Dockershim deprecation affects you](/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-deprecation-affects-you/) to understand how this removal might
+affect you. To learn about migrating from using dockershim, see
+[Migrating from dockershim](/docs/tasks/administer-cluster/migrating-from-dockershim/).
+
+If you are running a version of Kubernetes other than v{{< skew currentVersion >}},
+check the documentation for that version.
 {{< /note >}}
 
+<!-- body -->
+
 ## Cgroup drivers
 
-Control groups are used to constrain resources that are allocated to processes.
+On Linux, {{< glossary_tooltip text="control groups" term_id="cgroup" >}}
+are used to constrain resources that are allocated to processes.
 
 When [systemd](https://www.freedesktop.org/wiki/Software/systemd/) is chosen as the init
 system for a Linux distribution, the init process generates and consumes a root control group
|
||||||
configuration, or reinstall it using automation.
|
configuration, or reinstall it using automation.
|
||||||
{{< /caution >}}
|
{{< /caution >}}
|
||||||
|
|
||||||
## Cgroup v2
|
### Cgroup version 2 {#cgroup-v2}
|
||||||
|
|
||||||
Cgroup v2 is the next version of the cgroup Linux API. Differently than cgroup v1, there is a single
|
Cgroup v2 is the next version of the cgroup Linux API. Differently than cgroup v1, there is a single
|
||||||
hierarchy instead of a different one for each controller.
|
hierarchy instead of a different one for each controller.
|
||||||
|
@ -102,8 +117,8 @@ In order to use it, cgroup v2 must be supported by the CRI runtime as well.
|
||||||
|
|
||||||
### Migrating to the `systemd` driver in kubeadm managed clusters
|
### Migrating to the `systemd` driver in kubeadm managed clusters
|
||||||
|
|
||||||
Follow this [Migration guide](/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/)
|
If you wish to migrate to the `systemd` cgroup driver in existing kubeadm managed clusters,
|
||||||
if you wish to migrate to the `systemd` cgroup driver in existing kubeadm managed clusters.
|
follow [configuring a cgroup driver](/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/).
|
||||||
|
|
||||||
## CRI version support {#cri-versions}
|
## CRI version support {#cri-versions}
|
||||||
|
|
||||||
|
@ -120,11 +135,13 @@ using the (deprecated) v1alpha2 API instead.
|
||||||
|
|
||||||
### containerd
|
### containerd
|
||||||
|
|
||||||
This section contains the necessary steps to use containerd as CRI runtime.
|
This section outlines the necessary steps to use containerd as CRI runtime.
|
||||||
|
|
||||||
Use the following commands to install Containerd on your system:
|
Use the following commands to install Containerd on your system:
|
||||||
|
|
||||||
Install and configure prerequisites:
|
1. Install and configure prerequisites:
|
||||||
|
|
||||||
|
(these instructions apply to Linux nodes only)
|
||||||
|
|
||||||
```shell
|
```shell
|
||||||
cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
|
cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
|
||||||
|
@ -146,69 +163,23 @@ EOF
|
||||||
sudo sysctl --system
|
sudo sysctl --system
|
||||||
```
|
```
|
||||||
|
|
||||||
Install containerd:
|
1. Install containerd:
|
||||||
|
|
||||||
{{< tabs name="tab-cri-containerd-installation" >}}
|
Visit
|
||||||
{{% tab name="Linux" %}}
|
[Getting started with containerd](https://containerd.io/docs/getting-started/#starting-containerd)
|
||||||
|
and follow the instructions there, up to the point where you have a valid
|
||||||
1. Install the `containerd.io` package from the official Docker repositories.
|
configuration file (on Linux: `/etc/containerd/config.toml`).
|
||||||
Instructions for setting up the Docker repository for your respective Linux distribution and
|
|
||||||
installing the `containerd.io` package can be found at
|
|
||||||
[Install Docker Engine](https://docs.docker.com/engine/install/#server).
|
|
||||||
|
|
||||||
2. Configure containerd:
|
|
||||||
|
|
||||||
```shell
|
|
||||||
sudo mkdir -p /etc/containerd
|
|
||||||
containerd config default | sudo tee /etc/containerd/config.toml
|
|
||||||
```
|
|
||||||
|
|
||||||
3. Restart containerd:
|
|
||||||
|
|
||||||
```shell
|
|
||||||
sudo systemctl restart containerd
|
|
||||||
```
|
|
||||||
|
|
||||||
{{% /tab %}}
|
|
||||||
{{% tab name="Windows (PowerShell)" %}}
|
|
||||||
|
|
||||||
Start a Powershell session, set `$Version` to the desired version (ex: `$Version="1.4.3"`),
|
|
||||||
and then run the following commands:
|
|
||||||
|
|
||||||
1. Download containerd:
|
|
||||||
|
|
||||||
|
If you are running Windows, you might want to exclude containerd from Windows Defender Scans
|
||||||
```powershell
|
```powershell
|
||||||
curl.exe -L https://github.com/containerd/containerd/releases/download/v$Version/containerd-$Version-windows-amd64.tar.gz -o containerd-windows-amd64.tar.gz
|
# If excluding containerd from Windows Defender scans, consider how else
|
||||||
tar.exe xvf .\containerd-windows-amd64.tar.gz
|
# you will make sure that the executable is genuine.
|
||||||
```
|
|
||||||
|
|
||||||
2. Extract and configure:
|
|
||||||
|
|
||||||
```powershell
|
|
||||||
Copy-Item -Path ".\bin\" -Destination "$Env:ProgramFiles\containerd" -Recurse -Force
|
|
||||||
cd $Env:ProgramFiles\containerd\
|
|
||||||
.\containerd.exe config default | Out-File config.toml -Encoding ascii
|
|
||||||
|
|
||||||
# Review the configuration. Depending on setup you may want to adjust:
|
|
||||||
# - the sandbox_image (Kubernetes pause image)
|
|
||||||
# - cni bin_dir and conf_dir locations
|
|
||||||
Get-Content config.toml
|
|
||||||
|
|
||||||
# (Optional - but highly recommended) Exclude containerd from Windows Defender Scans
|
|
||||||
Add-MpPreference -ExclusionProcess "$Env:ProgramFiles\containerd\containerd.exe"
|
Add-MpPreference -ExclusionProcess "$Env:ProgramFiles\containerd\containerd.exe"
|
||||||
```
|
```
|
||||||
|
|
||||||
3. Start containerd:
|
For containerd, the CRI socket is `/run/containerd/containerd.sock` by default.
|
||||||
|
|
||||||
```powershell
|
#### Configuring the `systemd` cgroup driver {#containerd-systemd}
|
||||||
.\containerd.exe --register-service
|
|
||||||
Start-Service containerd
|
|
||||||
```
|
|
||||||
|
|
||||||
{{% /tab %}}
|
|
||||||
{{< /tabs >}}
|
|
||||||
|
|
||||||
#### Using the `systemd` cgroup driver {#containerd-systemd}
|
|
||||||
|
|
||||||
To use the `systemd` cgroup driver in `/etc/containerd/config.toml` with `runc`, set
|
To use the `systemd` cgroup driver in `/etc/containerd/config.toml` with `runc`, set
|
||||||
|
|
||||||
|
@ -219,7 +190,7 @@ To use the `systemd` cgroup driver in `/etc/containerd/config.toml` with `runc`,
|
||||||
SystemdCgroup = true
|
SystemdCgroup = true
|
||||||
```
|
```
|
||||||
|
|
||||||
If you apply this change make sure to restart containerd again:
|
If you apply this change, make sure to restart containerd:
|
||||||
|
|
||||||
```shell
|
```shell
|
||||||
sudo systemctl restart containerd
|
sudo systemctl restart containerd
|
||||||
|
@ -232,176 +203,14 @@ When using kubeadm, manually configure the
|
||||||
|
|
||||||
This section contains the necessary steps to install CRI-O as a container runtime.
|
This section contains the necessary steps to install CRI-O as a container runtime.
|
||||||
|
|
||||||
Use the following commands to install CRI-O on your system:
|
To install CRI-O, follow [CRI-O Install Instructions](https://github.com/cri-o/cri-o/blob/main/install.md#readme).
|
||||||
|
|
||||||
{{< note >}}
|
|
||||||
The CRI-O major and minor versions must match the Kubernetes major and minor versions.
|
|
||||||
For more information, see the [CRI-O compatibility matrix](https://github.com/cri-o/cri-o#compatibility-matrix-cri-o--kubernetes).
|
|
||||||
{{< /note >}}
|
|
||||||
|
|
||||||
Install and configure prerequisites:
|
|
||||||
|
|
||||||
```shell
|
|
||||||
# Create the .conf file to load the modules at bootup
|
|
||||||
cat <<EOF | sudo tee /etc/modules-load.d/crio.conf
|
|
||||||
overlay
|
|
||||||
br_netfilter
|
|
||||||
EOF
|
|
||||||
|
|
||||||
sudo modprobe overlay
|
|
||||||
sudo modprobe br_netfilter
|
|
||||||
|
|
||||||
# Set up required sysctl params, these persist across reboots.
|
|
||||||
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
|
|
||||||
net.bridge.bridge-nf-call-iptables = 1
|
|
||||||
net.ipv4.ip_forward = 1
|
|
||||||
net.bridge.bridge-nf-call-ip6tables = 1
|
|
||||||
EOF
|
|
||||||
|
|
||||||
sudo sysctl --system
|
|
||||||
```
|
|
||||||
|
|
||||||
{{< tabs name="tab-cri-cri-o-installation" >}}
|
|
||||||
{{% tab name="Debian" %}}
|
|
||||||
|
|
||||||
To install CRI-O on the following operating systems, set the environment variable `OS`
|
|
||||||
to the appropriate value from the following table:
|
|
||||||
|
|
||||||
| Operating system | `$OS` |
|
|
||||||
| ---------------- | ----------------- |
|
|
||||||
| Debian Unstable | `Debian_Unstable` |
|
|
||||||
| Debian Testing | `Debian_Testing` |
|
|
||||||
|
|
||||||
<br />
|
|
||||||
Then, set `$VERSION` to the CRI-O version that matches your Kubernetes version.
|
|
||||||
For instance, if you want to install CRI-O 1.20, set `VERSION=1.20`.
|
|
||||||
You can pin your installation to a specific release.
|
|
||||||
To install version 1.20.0, set `VERSION=1.20:1.20.0`.
|
|
||||||
<br />
|
|
||||||
|
|
||||||
Then run
|
|
||||||
```shell
|
|
||||||
cat <<EOF | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
|
|
||||||
deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /
|
|
||||||
EOF
|
|
||||||
cat <<EOF | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.list
|
|
||||||
deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/ /
|
|
||||||
EOF
|
|
||||||
|
|
||||||
curl -L https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$VERSION/$OS/Release.key | sudo apt-key --keyring /etc/apt/trusted.gpg.d/libcontainers.gpg add -
|
|
||||||
curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key | sudo apt-key --keyring /etc/apt/trusted.gpg.d/libcontainers.gpg add -
|
|
||||||
|
|
||||||
sudo apt-get update
|
|
||||||
sudo apt-get install cri-o cri-o-runc
|
|
||||||
```
|
|
||||||
|
|
||||||
{{% /tab %}}
|
|
||||||
|
|
||||||
{{% tab name="Ubuntu" %}}
|
|
||||||
|
|
||||||
To install on the following operating systems, set the environment variable `OS`
|
|
||||||
to the appropriate field in the following table:
|
|
||||||
|
|
||||||
| Operating system | `$OS` |
|
|
||||||
| ---------------- | ----------------- |
|
|
||||||
| Ubuntu 20.04 | `xUbuntu_20.04` |
|
|
||||||
| Ubuntu 19.10 | `xUbuntu_19.10` |
|
|
||||||
| Ubuntu 19.04 | `xUbuntu_19.04` |
|
|
||||||
| Ubuntu 18.04 | `xUbuntu_18.04` |
|
|
||||||
|
|
||||||
<br />
|
|
||||||
Then, set `$VERSION` to the CRI-O version that matches your Kubernetes version.
|
|
||||||
For instance, if you want to install CRI-O 1.20, set `VERSION=1.20`.
|
|
||||||
You can pin your installation to a specific release.
|
|
||||||
To install version 1.20.0, set `VERSION=1.20:1.20.0`.
|
|
||||||
<br />
|
|
||||||
|
|
||||||
Then run
|
|
||||||
```shell
|
|
||||||
cat <<EOF | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
|
|
||||||
deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /
|
|
||||||
EOF
|
|
||||||
cat <<EOF | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.list
|
|
||||||
deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/ /
|
|
||||||
EOF
|
|
||||||
|
|
||||||
curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key | sudo apt-key --keyring /etc/apt/trusted.gpg.d/libcontainers.gpg add -
|
|
||||||
curl -L https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$VERSION/$OS/Release.key | sudo apt-key --keyring /etc/apt/trusted.gpg.d/libcontainers-cri-o.gpg add -
|
|
||||||
|
|
||||||
sudo apt-get update
|
|
||||||
sudo apt-get install cri-o cri-o-runc
|
|
||||||
```
|
|
||||||
{{% /tab %}}
|
|
||||||
|
|
||||||
{{% tab name="CentOS" %}}
|
|
||||||
|
|
||||||
To install on the following operating systems, set the environment variable `OS`
|
|
||||||
to the appropriate field in the following table:
|
|
||||||
|
|
||||||
| Operating system | `$OS` |
|
|
||||||
| ---------------- | ----------------- |
|
|
||||||
| Centos 8 | `CentOS_8` |
|
|
||||||
| Centos 8 Stream | `CentOS_8_Stream` |
|
|
||||||
| Centos 7 | `CentOS_7` |
|
|
||||||
|
|
||||||
<br />
|
|
||||||
Then, set `$VERSION` to the CRI-O version that matches your Kubernetes version.
|
|
||||||
For instance, if you want to install CRI-O 1.20, set `VERSION=1.20`.
|
|
||||||
You can pin your installation to a specific release.
|
|
||||||
To install version 1.20.0, set `VERSION=1.20:1.20.0`.
|
|
||||||
<br />
|
|
||||||
|
|
||||||
Then run
|
|
||||||
```shell
|
|
||||||
sudo curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable.repo https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/devel:kubic:libcontainers:stable.repo
|
|
||||||
sudo curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.repo https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$VERSION/$OS/devel:kubic:libcontainers:stable:cri-o:$VERSION.repo
|
|
||||||
sudo yum install cri-o
|
|
||||||
```
|
|
||||||
|
|
||||||
{{% /tab %}}
|
|
||||||
|
|
||||||
{{% tab name="openSUSE Tumbleweed" %}}
|
|
||||||
|
|
||||||
```shell
|
|
||||||
sudo zypper install cri-o
|
|
||||||
```
|
|
||||||
{{% /tab %}}
|
|
||||||
{{% tab name="Fedora" %}}
|
|
||||||
|
|
||||||
Set `$VERSION` to the CRI-O version that matches your Kubernetes version.
|
|
||||||
For instance, if you want to install CRI-O 1.20, `VERSION=1.20`.
|
|
||||||
|
|
||||||
You can find available versions with:
|
|
||||||
```shell
|
|
||||||
sudo dnf module list cri-o
|
|
||||||
```
|
|
||||||
CRI-O does not support pinning to specific releases on Fedora.
|
|
||||||
|
|
||||||
Then run
|
|
||||||
```shell
|
|
||||||
sudo dnf module enable cri-o:$VERSION
|
|
||||||
sudo dnf install cri-o
|
|
||||||
```
|
|
||||||
|
|
||||||
{{% /tab %}}
|
|
||||||
{{< /tabs >}}
|
|
||||||
|
|
||||||
Start CRI-O:
|
|
||||||
|
|
||||||
```shell
|
|
||||||
sudo systemctl daemon-reload
|
|
||||||
sudo systemctl enable crio --now
|
|
||||||
```
|
|
||||||
|
|
||||||
Refer to the [CRI-O installation guide](https://github.com/cri-o/cri-o/blob/master/install.md)
|
|
||||||
for more information.
|
|
||||||
|
|
||||||
|
|
||||||
#### cgroup driver
|
#### cgroup driver
|
||||||
|
|
||||||
CRI-O uses the systemd cgroup driver per default. To switch to the `cgroupfs`
|
CRI-O uses the systemd cgroup driver per default, which is likely to work fine
|
||||||
cgroup driver, either edit `/etc/crio/crio.conf` or place a drop-in
|
for you. To switch to the `cgroupfs` cgroup driver, either edit
|
||||||
configuration in `/etc/crio/crio.conf.d/02-cgroup-manager.conf`, for example:
|
`/etc/crio/crio.conf` or place a drop-in configuration in
|
||||||
|
`/etc/crio/crio.conf.d/02-cgroup-manager.conf`, for example:
|
||||||
|
|
||||||
```toml
|
```toml
|
||||||
[crio.runtime]
|
[crio.runtime]
|
||||||
|
@ -409,28 +218,28 @@ conmon_cgroup = "pod"
|
||||||
cgroup_manager = "cgroupfs"
|
cgroup_manager = "cgroupfs"
|
||||||
```
|
```
|
||||||
|
|
||||||
Please also note the changed `conmon_cgroup`, which has to be set to the value
|
You should also note the changed `conmon_cgroup`, which has to be set to the value
|
||||||
`pod` when using CRI-O with `cgroupfs`. It is generally necessary to keep the
|
`pod` when using CRI-O with `cgroupfs`. It is generally necessary to keep the
|
||||||
cgroup driver configuration of the kubelet (usually done via kubeadm) and CRI-O
|
cgroup driver configuration of the kubelet (usually done via kubeadm) and CRI-O
|
||||||
in sync.
|
in sync.
|
||||||
|
|
||||||
|
For CRI-O, the CRI socket is `/var/run/crio/crio.sock` by default.
|
||||||
|
|
||||||
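As a rough sketch of what keeping the kubelet in sync can look like: if CRI-O is switched to `cgroupfs`, the kubelet needs the same driver, for example via a `KubeletConfiguration` passed to kubeadm. The snippet below is a minimal illustration, assuming the `kubelet.config.k8s.io/v1beta1` API; adapt it to your own setup.

```yaml
# Sketch: keep the kubelet cgroup driver aligned with CRI-O (cgroupfs here)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs
```
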
### Docker Engine {#docker}

{{< note >}}
These instructions assume that you are using the
[`cri-dockerd`](https://github.com/Mirantis/cri-dockerd) adapter to integrate
Docker Engine with Kubernetes.
{{< /note >}}

1. On each of your nodes, install Docker for your Linux distribution as per
   [Install Docker Engine](https://docs.docker.com/engine/install/#server).

2. Install [`cri-dockerd`](https://github.com/Mirantis/cri-dockerd), following
   the instructions in that source code repository.

For `cri-dockerd`, the CRI socket is `/run/cri-dockerd.sock` by default.

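One place this socket path typically matters is when a kubeadm-managed node is initialized or joined. A hedged sketch follows; the exact flag usage depends on your kubeadm version and setup.

```shell
# Sketch: point kubeadm explicitly at the cri-dockerd socket
sudo kubeadm init --cri-socket unix:///run/cri-dockerd.sock
```
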
### Mirantis Container Runtime {#mcr}

@ -439,3 +248,14 @@ available container runtime that was formerly known as Docker Enterprise Edition

You can use Mirantis Container Runtime with Kubernetes using the open source
[`cri-dockerd`](https://github.com/Mirantis/cri-dockerd) component, included with MCR.

To learn more about how to install Mirantis Container Runtime,
visit [MCR Deployment Guide](https://docs.mirantis.com/mcr/20.10/install.html).

Check the systemd unit named `cri-docker.socket` to find out the path to the CRI
socket.

## {{% heading "whatsnext" %}}

As well as a container runtime, your cluster will need a working
[network plugin](/docs/concepts/cluster-administration/networking/#how-to-implement-the-kubernetes-networking-model).

@ -70,9 +70,10 @@ Any commands under `kubeadm alpha` are, by definition, supported on an alpha lev

## Instructions

### Preparing the hosts

Install a {{< glossary_tooltip term_id="container-runtime" text="container runtime" >}} and kubeadm on all the hosts.
For detailed instructions and other prerequisites, see [Installing kubeadm](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/).

{{< note >}}
If you have already installed kubeadm, run `apt-get update &&

@ -14,7 +14,7 @@ card:

This page shows how to install the `kubeadm` toolbox.
For information on how to create a cluster with kubeadm once you have performed this installation process, see the [Using kubeadm to Create a Cluster](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/) page.

{{% dockershim-removal %}}

## {{% heading "prerequisites" %}}

@ -69,10 +69,10 @@ For more details please see the [Network Plugin Requirements](/docs/concepts/ext

## Check required ports
These
[required ports](/docs/reference/ports-and-protocols/)
need to be open in order for Kubernetes components to communicate with each other. You can use tools like netcat to check if a port is open. For example:

```shell
nc 127.0.0.1 6443
```

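If you are checking a port on another host (for example, the control plane endpoint from a worker node), a hedged variant of the same idea is shown below; netcat options differ between implementations, so treat the flags as illustrative:

```shell
# Sketch: probe the API server port on a remote control-plane host with a short timeout
nc -v -w 2 <control-plane-ip> 6443
```
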
The pod network plugin you use (see below) may also require certain ports to be

@ -8,6 +8,8 @@ weight: 80

<!-- overview -->

{{% dockershim-removal %}}

The lifecycle of the kubeadm CLI tool is decoupled from the
[kubelet](/docs/reference/command-line-tools-reference/kubelet), which is a daemon that runs
on each node within the Kubernetes cluster. The kubeadm CLI tool is executed by the user when Kubernetes is

@ -16,8 +16,7 @@ weight: 30

You can use Kubernetes to run a mixture of Linux and Windows nodes, so you can mix Pods that run on Linux with Pods that run on Windows. This page shows how to register Windows nodes to your cluster.
{{% dockershim-removal %}}

## {{% heading "prerequisites" %}}
{{< version-check >}}

@ -2,6 +2,7 @@
title: "Migrating from dockershim"
weight: 10
content_type: task
no_list: true
---

<!-- overview -->

@ -22,3 +23,25 @@ section to know your options. Make sure to
[report issues](https://github.com/kubernetes/kubernetes/issues) you encountered
with the migration, so that issues can be fixed in a timely manner and your cluster is
ready for dockershim removal.

Your cluster might have more than one kind of node, although this is not a common
configuration.

These tasks will help you to migrate:

* [Check whether Dockershim deprecation affects you](/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-deprecation-affects-you/)
* [Migrating from dockershim](/docs/tasks/administer-cluster/migrating-from-dockershim/)
* [Migrating telemetry and security agents from dockershim](/docs/tasks/administer-cluster/migrating-from-dockershim/migrating-telemetry-and-security-agents/)

## {{% heading "whatsnext" %}}

* Check out [container runtimes](/docs/setup/production-environment/container-runtimes/)
  to understand your options for a container runtime.
* There is a
  [GitHub issue](https://github.com/kubernetes/kubernetes/issues/106917)
  to track discussion about the deprecation and removal of dockershim.
* If you found a defect or other technical concern relating to migrating away from dockershim,
  you can [report an issue](https://github.com/kubernetes/kubernetes/issues/new/choose)
  to the Kubernetes project.

@ -88,3 +88,8 @@ You can still pull images or build them using `docker build` command. But images
built or pulled by Docker would not be visible to the container runtime and
Kubernetes. They need to be pushed to a registry to allow them to be used
by Kubernetes.

## {{% heading "whatsnext" %}}

- Read [Migrating from dockershim](/docs/tasks/administer-cluster/migrating-from-dockershim/) to understand your next steps.
- Read the [dockershim deprecation FAQ](/blog/2020/12/02/dockershim-faq/) article for more information.

@ -32,16 +32,22 @@ kubectl get nodes -o wide

The output is similar to the following. The column `CONTAINER-RUNTIME` outputs
the runtime and its version.

For Docker Engine, the output is similar to this:

```none
NAME     STATUS   VERSION    CONTAINER-RUNTIME
node-1   Ready    v1.16.15   docker://19.3.1
node-2   Ready    v1.16.15   docker://19.3.1
node-3   Ready    v1.16.15   docker://19.3.1
```

If your runtime shows as Docker Engine, you still might not be affected by the
removal of dockershim in Kubernetes 1.24. [Check the runtime
endpoint](#which-endpoint) to see if you use dockershim. If you don't use
dockershim, you aren't affected.

For containerd, the output is similar to this:

```none
NAME     STATUS   VERSION   CONTAINER-RUNTIME
node-1   Ready    v1.19.6   containerd://1.4.1
node-2   Ready    v1.19.6   containerd://1.4.1
node-3   Ready    v1.19.6   containerd://1.4.1
```

Find out more information about container runtimes
on [Container Runtimes](/docs/setup/production-environment/container-runtimes/)
page.

## Find out what container runtime endpoint you use {#which-endpoint}

The container runtime talks to the kubelet over a Unix socket using the [CRI
protocol](/docs/concepts/architecture/cri/), which is based on the gRPC
framework. The kubelet acts as a client, and the runtime acts as the server.
In some cases, you might find it useful to know which socket your nodes use. For
example, with the removal of dockershim in Kubernetes 1.24 and later, you might
want to know whether you use Docker Engine with dockershim.

{{<note>}}
If you currently use Docker Engine in your nodes with `cri-dockerd`, you aren't
affected by the dockershim removal.
{{</note>}}

You can check which socket you use by checking the kubelet configuration on your
nodes.

1. Read the starting commands for the kubelet process:

   ```
   tr \\0 ' ' < /proc/"$(pgrep kubelet)"/cmdline
   ```
   If you don't have `tr` or `pgrep`, check the command line for the kubelet
   process manually.

1. In the output, look for the `--container-runtime` flag and the
   `--container-runtime-endpoint` flag.

   * If your nodes use Kubernetes v1.23 and earlier and these flags aren't
     present or if the `--container-runtime` flag is not `remote`,
     you use the dockershim socket with Docker Engine.
   * If the `--container-runtime-endpoint` flag is present, check the socket
     name to find out which runtime you use. For example,
     `unix:///run/containerd/containerd.sock` is the containerd endpoint.

If you use Docker Engine with the dockershim, [migrate to a different runtime](/docs/tasks/administer-cluster/migrating-from-dockershim/change-runtime-containerd/),
or, if you want to continue using Docker Engine in v1.24 and later, migrate to a
CRI-compatible adapter like [`cri-dockerd`](https://github.com/Mirantis/cri-dockerd).

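As an illustration only (paths and flags vary between distributions and installers), the kubelet command line on a containerd-based node often contains something like the following, which is what you are looking for in the step above:

```none
/usr/bin/kubelet --container-runtime=remote \
  --container-runtime-endpoint=unix:///run/containerd/containerd.sock ...
```
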
@ -107,7 +107,7 @@ kubectl delete pod qos-demo --namespace=qos-example

A Pod is given a QoS class of Burstable if:

* The Pod does not meet the criteria for QoS class Guaranteed.
* At least one Container in the Pod has a memory or CPU request or limit.

Here is the configuration file for a Pod that has one Container. The Container has a memory limit of 200 MiB
and a memory request of 100 MiB.

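As an illustration only (this is a hedged sketch, not the exact example file the original page embeds; names such as `qos-demo-2` are assumptions), such a Pod could be written as:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo-2          # illustrative name
  namespace: qos-example
spec:
  containers:
  - name: qos-demo-2-ctr    # illustrative name
    image: nginx
    resources:
      limits:
        memory: "200Mi"
      requests:
        memory: "100Mi"
```
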
@ -194,7 +194,7 @@ both Linux and Windows kernels). The time window used to calculate CPU is shown
in Metrics API.

To learn more about how Kubernetes allocates and measures CPU resources, see
[meaning of CPU](/docs/concepts/configuration/manage-resources-containers/#meaning-of-cpu).

### Memory

@ -209,7 +209,7 @@ anonymous memory associated with the container in question. The working set metr
includes some cached (file-backed) memory, because the host OS cannot always reclaim pages.

To learn more about how Kubernetes allocates and measures memory resources, see
[meaning of memory](/docs/concepts/configuration/manage-resources-containers/#meaning-of-memory).

## Metrics Server

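Once a Metrics Server add-on is installed, the collected metrics can typically be queried through `kubectl top`; a quick sketch (the output values are illustrative):

```shell
# Sketch: query node and pod resource usage via the Metrics API
kubectl top nodes
kubectl top pods --all-namespaces
```
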
@ -11,6 +11,9 @@ min-kubernetes-server-version: 1.7

<!-- overview -->

{{% dockershim-removal %}}

Adding entries to a Pod's `/etc/hosts` file provides Pod-level override of hostname resolution when DNS and other options are not applicable. You can add these custom entries with the HostAliases field in PodSpec.

Modification not using HostAliases is not suggested because the file is managed by the kubelet and can be overwritten during Pod creation/restart.

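A minimal sketch of what that looks like in practice (the Pod name, hostnames, and addresses below are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-pod      # illustrative name
spec:
  hostAliases:
  - ip: "127.0.0.1"
    hostnames:
    - "foo.local"
  - ip: "10.1.2.3"
    hostnames:
    - "bar.remote"
  containers:
  - name: cat-hosts
    image: busybox:1.28
    command: ["cat", "/etc/hosts"]
```
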
@ -9,8 +9,10 @@ data:
    # Apply this config only on the primary.
    [mysqld]
    log-bin
    datadir=/var/lib/mysql/mysql
  replica.cnf: |
    # Apply this config only on replicas.
    [mysqld]
    super-read-only
    datadir=/var/lib/mysql/mysql

@ -647,6 +647,7 @@ func TestExampleObjectSchemas(t *testing.T) {
		"service/networking": {
			"curlpod":                    {&apps.Deployment{}},
			"custom-dns":                 {&api.Pod{}},
			"default-ingressclass":       {&networking.IngressClass{}},
			"dual-stack-default-svc":     {&api.Service{}},
			"dual-stack-ipfamilies-ipv6": {&api.Service{}},
			"dual-stack-ipv6-svc":        {&api.Service{}},

@ -662,6 +663,7 @@ func TestExampleObjectSchemas(t *testing.T) {
			"name-virtual-host-ingress":               {&networking.Ingress{}},
			"name-virtual-host-ingress-no-third-host": {&networking.Ingress{}},
			"namespaced-params":                       {&networking.IngressClass{}},
			"networkpolicy":                           {&networking.NetworkPolicy{}},
			"network-policy-allow-all-egress":         {&networking.NetworkPolicy{}},
			"network-policy-allow-all-ingress":        {&networking.NetworkPolicy{}},
			"network-policy-default-deny-egress":      {&networking.NetworkPolicy{}},

@ -0,0 +1,35 @@
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - ipBlock:
            cidr: 172.17.0.0/16
            except:
              - 172.17.1.0/24
        - namespaceSelector:
            matchLabels:
              project: myproject
        - podSelector:
            matchLabels:
              role: frontend
      ports:
        - protocol: TCP
          port: 6379
  egress:
    - to:
        - ipBlock:
            cidr: 10.0.0.0/24
      ports:
        - protocol: TCP
          port: 5978

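One hedged way to load and then inspect this example policy (assuming the manifest above is saved locally as `networkpolicy.yaml`):

```shell
kubectl apply -f networkpolicy.yaml
kubectl describe networkpolicy test-network-policy --namespace=default
```
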
@ -272,7 +272,7 @@ spec:
          number: 80
```

If you create an Ingress resource without any hosts defined in the rules, then any web traffic to the IP address of your Ingress controller can be matched without a name-based virtual host being required. For example, the following Ingress routes traffic requested for `first.bar.com` to `service1`, `second.foo.com` to `service2`, and any traffic to the IP address without a hostname defined in the request (that is, without a request header being presented) to `service3`.

```yaml
apiVersion: networking.k8s.io/v1

@ -198,8 +198,8 @@ allowedTopologies:
- matchLabelExpressions:
  - key: failure-domain.beta.kubernetes.io/zone
    values:
    - us-central-1a
    - us-central-1b
```

## Parameters

@ -106,7 +106,7 @@ membuat Pod pada semua Node.

### Scheduled by the default scheduler

{{< feature-state for_kubernetes_version="1.17" state="stable" >}}

A DaemonSet ensures that all eligible Nodes run a copy of a Pod. Normally, the Node that a Pod runs on is selected by the Kubernetes scheduler.

@ -104,7 +104,7 @@ Ini dilakukan dengan menspesifikasikan _parent_ cgroup sebagai nilai dari _flag_

We recommend that the Kubernetes system daemons are placed under a top-level cgroup
(for example, `runtime.slice` on systemd machines). Ideally, each system daemon
should run in a child cgroup under this parent. See the
[documentation](https://git.k8s.io/design-proposals-archive/node/node-allocatable.md#recommended-cgroups-setup)
for detailed recommendations on the cgroup hierarchy.

Note: the kubelet **does not create** `--kube-reserved-cgroup` if the cgroup

@ -25,13 +25,13 @@ I componenti aggiuntivi in ogni sezione sono ordinati alfabeticamente - l'ordine

* [Canal](https://github.com/tigera/canal/tree/master/k8s-install) unites Flannel and Calico, providing networking and network policy.
* [Cilium](https://github.com/cilium/cilium) is an L3 network and network policy plugin that can transparently enforce HTTP/API/L7 policies. Both routing and overlay/encapsulation modes are supported.
* [CNI-Genie](https://github.com/Huawei-PaaS/CNI-Genie) enables Kubernetes to seamlessly connect to a choice of CNI plugins, such as Calico, Canal, Flannel, Romana, or Weave.
* [Contiv](https://contivpp.io/) provides configurable networking (native L3 using BGP, overlay using vxlan, classic L2, and Cisco-SDN/ACI) for various use cases and a rich policy framework. The Contiv project is fully [open source](http://github.com/contiv). The [installer](http://github.com/contiv/install) provides both kubeadm and non-kubeadm based installation options.
* [Flannel](https://github.com/flannel-io/flannel#deploying-flannel-manually) is an overlay network provider that can be used with Kubernetes.
* [Knitter](https://github.com/ZTE/Knitter/) is a network solution that supports multiple networks in Kubernetes.
* Multus is a multi plugin for multiple network support in Kubernetes, supporting all CNI plugins (e.g. Calico, Cilium, Contiv, Flannel), as well as SRIOV, DPDK, OVS-DPDK and VPP based workloads in Kubernetes.
* [NSX-T](https://docs.vmware.com/en/VMware-NSX-T/2.0/nsxt_20_ncp_kubernetes.pdf) Container Plug-in (NCP) provides integration between VMware NSX-T and container orchestrators such as Kubernetes, as well as integration between NSX-T and container-based CaaS/PaaS platforms such as Pivotal Container Service (PKS) and OpenShift.
* [Nuage](https://github.com/nuagenetworks/nuage-kubernetes/blob/v5.1.1/docs/kubernetes-1-installation.rst) is an SDN platform that provides policy-based networking between Kubernetes Pods and non-Kubernetes environments, with visibility and security monitoring.
* [Romana](https://github.com/romana/romana) is a Layer 3 networking solution for pod networks that also supports the [NetworkPolicy API](/docs/concepts/services-networking/network-policies/). Kubeadm add-on installation details are available [here](https://github.com/romana/romana/tree/master/containerize).
* [Weave Net](https://www.weave.works/docs/net/latest/kube-addon/) provides networking and network policy, will keep working on both sides of a network partition, and does not require an external database.

## Service Discovery

@ -48,5 +48,3 @@ I componenti aggiuntivi in ogni sezione sono ordinati alfabeticamente - l'ordine

There are several other add-ons documented in the deprecated [cluster/addons](https://git.k8s.io/kubernetes/cluster/addons) directory.

Well-maintained ones should be linked to here.

@ -91,7 +91,7 @@ spec:

* `service.beta.kubernetes.io/aws-load-balancer-access-log-enabled`: used on the Service to enable or disable access logs.
* `service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name`: used to specify the access log S3 bucket name.
* `service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix`: used to specify the access log S3 bucket prefix.
* `service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags`: used on the Service to specify a comma-separated list of key-value pairs which will be recorded as additional tags in the ELB. For example: `"Key1=Val1,Key2=Val2,KeyNoVal1=,KeyNoVal2"`.
* `service.beta.kubernetes.io/aws-load-balancer-backend-protocol`: used on the Service to specify the protocol spoken by the backend (pod) behind a listener. If `http` (default) or `https`, an HTTPS listener that terminates the connection and parses headers is created. If set to `ssl` or `tcp`, a "raw" SSL listener is used. If set to `http` and `aws-load-balancer-ssl-cert` is not used, an HTTP listener is used.
* `service.beta.kubernetes.io/aws-load-balancer-ssl-cert`: used on the Service to request a secure listener. The value is a valid certificate ARN. For more information, see [ELB Listener Config](http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/elb-listener-config.html). CertARN is an IAM or CM certificate ARN, e.g. `arn:aws:acm:us-east-1:123456789012:certificate/12345678-1234-1234-1234-123456789012`.
* `service.beta.kubernetes.io/aws-load-balancer-connection-draining-enabled`: used on the Service to enable or disable connection draining.

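A hedged sketch of how such annotations are attached to a Service of type `LoadBalancer` (the Service name, selector, bucket name, and ports below are illustrative assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-elb-service        # illustrative name
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-access-log-enabled: "true"
    service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name: "my-elb-logs"  # illustrative bucket
spec:
  type: LoadBalancer
  selector:
    app: example
  ports:
  - port: 80
    targetPort: 8080
```
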
@ -105,7 +105,7 @@ bypassando il meccanismo di registrazione predefinito. Usano il [klog] [klog]

logging library. You can find the logging severity conventions for those
components in the [development docs on logging](https://git.k8s.io/community/contributors/devel/logging.md).

Similar to the container logs, system component logs in the `/var/log`
directory should be rotated. In Kubernetes clusters brought up by
the `kube-up.sh` script, those logs are configured to be rotated by
the `logrotate` tool daily or once the size exceeds 100MB.

@ -368,7 +368,7 @@ In alternativa, puoi anche aggiornare le risorse con `kubectl edit`:

$ kubectl edit deployment/my-nginx
```

This is equivalent to first `get` the resource, edit it in a text editor, and then `apply` the resource with the updated version:

```shell

@ -127,7 +127,7 @@ di [avere simultaneamente accesso a diverse implementazioni](https://github.com/

of the [Kubernetes network model](https://git.k8s.io/website/docs/concepts/cluster-administration/networking.md#kubernetes-model) at runtime.
This includes any implementation that runs as a [CNI plugin](https://github.com/containernetworking/cni#3rd-party-plugins),
such as [Flannel](https://github.com/coreos/flannel#flanella), [Calico](http://docs.projectcalico.org/),
[Romana](https://github.com/romana/romana), [Weave-net](https://www.weave.works/products/tessere-net/).

CNI-Genie also supports [assigning multiple IP addresses to a Pod](https://github.com/Huawei-PaaS/CNI-Genie/blob/master/docs/multiple-ips/README.md#feature-2-extension-cni-genie-multiple-ip-indirizzi-per-pod), each from a different CNI plugin.

@ -153,7 +153,7 @@ complessità della rete richiesta per implementare Kubernetes su larga scala all

[Contiv](https://github.com/contiv/netplugin) provides configurable networking (native L3 using BGP,
overlay using vxlan, classic L2, or Cisco-SDN/ACI) for various use cases. [Contiv](https://contivpp.io/) is fully open sourced.

### Contrail / Tungsten Fabric

@ -195,7 +195,7 @@ Docker è avviato con:

DOCKER_OPTS="--bridge=cbr0 --iptables=false --ip-masq=false"
```

This bridge is created by the kubelet (controlled by the `--network-plugin=kubenet` flag) according to the Node's `.spec.podCIDR`.

Docker now allocates IPs from the `cbr-cidr` block. Containers can reach each other and Nodes over the
`cbr0` bridge. Those IPs are all routable within the GCE project network.

@ -255,7 +255,7 @@ Lars Kellogg-Stedman.

### Multus (a Multi Network plugin)

[Multus](https://github.com/k8snetworkplumbingwg/multus-cni) is a Multi CNI plugin to support the Multi
Networking feature in Kubernetes using CRD-based network objects in Kubernetes.

Multus supports all [reference plugins](https://github.com/containernetworking/plugins)

@ -316,7 +316,7 @@ Flannel, alias [canal](https://github.com/tigera/canal) o native GCE, AWS o netw

### Romana

[Romana](https://github.com/romana/romana) is an open source network and security automation solution that lets you
deploy Kubernetes without an overlay network. Romana supports the Kubernetes
[Network Policy](/docs/concepts/services-networking/network-policies/) to provide isolation across
network namespaces.

@ -335,5 +335,3 @@ entrambi i casi, la rete fornisce un indirizzo IP per pod, come è standard per

The early design of the network model and its rationale, and some future plans, are described in more
detail in the [networking design document](https://git.k8s.io/community/contributors/design-proposals/network/networking.md).

@ -137,7 +137,7 @@ kubectl describe node <ノード名をここに挿入>

`SchedulingDisabled` is not a Condition in the Kubernetes API; instead, cordoned nodes are marked Unschedulable.
{{< /note >}}

The state of a node is represented as part of the `.status` of the Node resource. For example, the following JSON structure describes a healthy node:

```json
"conditions": [

@ -173,36 +173,25 @@ CapacityとAllocatableについて深く知りたい場合は、ノード上で

### Info {#info}

Describes general information about the node, such as the kernel version, Kubernetes version (kubelet and kube-proxy version), Docker version (if used), and OS name.
This information is gathered by the kubelet from the node and published into the Kubernetes API.

## Heartbeats
Heartbeats, sent by Kubernetes nodes, help determine the availability of a node.
There are two forms of heartbeats:
* updates to the `.status` of a Node
* [Lease objects](/docs/reference/generated/kubernetes-api/{{< latest-version >}}#lease-v1-coordination-k8s-io).
  Each node has an associated Lease object in the `kube-node-lease` {{< glossary_tooltip term_id="namespace" text="namespace">}}.
  A Lease is a lightweight resource that improves the performance of node heartbeats as the cluster scales.

The kubelet is responsible for creating and updating the `NodeStatus` and a Lease object.

- The kubelet updates the `NodeStatus` either when there is a change in status or if there has been no update for a configured interval. The default interval for `NodeStatus` updates is 5 minutes (much longer than the 40 second default timeout for unreachable nodes).
- The kubelet creates and then updates its Lease object every 10 seconds (the default update interval). Lease updates occur independently from the `NodeStatus` updates. If the Lease update fails, the kubelet retries with exponential backoff starting at 200 milliseconds and capped at 7 seconds.

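A quick, hedged way to see both kinds of heartbeat objects on a real cluster:

```shell
# Node status, including conditions and last heartbeat times
kubectl get node <node-name> -o yaml
# Lease objects backing the node heartbeats
kubectl get leases --namespace kube-node-lease
```
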
## Node controller

The node {{< glossary_tooltip text="controller" term_id="controller" >}} is a Kubernetes control plane component that manages various aspects of nodes.

@ -216,16 +205,6 @@ Kubernetesは無効なノードのためにオブジェクトを保存し、そ

When a node becomes unreachable (for example, because the node is down and the node controller stops receiving heartbeats), the node controller is responsible for changing the NodeReady condition in the NodeStatus to ConditionUnknown. If the node remains unreachable, it then evicts all Pods from the node using graceful termination. The default timeouts are 40 seconds before ConditionUnknown starts being reported and 5 minutes after that before Pod eviction begins.
The node controller checks the state of each node every `--node-monitor-period` seconds.

#### Reliability

@ -269,6 +248,88 @@ Pod以外のプロセス用にリソースを明示的に予約したい場合

The kubelet can use topology hints when making resource allocation decisions.
See [Control Topology Management Policies on a Node](/ja/docs/tasks/administer-cluster/topology-manager/) for details.

## Graceful node shutdown {#graceful-node-shutdown}

{{< feature-state state="beta" for_k8s_version="v1.21" >}}

When the kubelet detects a node system shutdown, it terminates the Pods running on the node.

During a node shutdown, the kubelet ensures that Pods follow the normal [Pod termination process](/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination).

Graceful node shutdown relies on systemd, and uses [systemd inhibitor locks](https://www.freedesktop.org/wiki/Software/systemd/inhibit/) to delay the node shutdown for a given duration.

Graceful node shutdown is controlled by the `GracefulNodeShutdown` [feature gate](/ja/docs/reference/command-line-tools-reference/feature-gates/), which is enabled by default in v1.21.

Note that, by default, both of the configuration options described below, `ShutdownGracePeriod` and `ShutdownGracePeriodCriticalPods`, are set to zero, so graceful node shutdown is not activated. To activate the feature, both kubelet settings need to be configured appropriately and set to non-zero values.

During a graceful shutdown, the kubelet terminates Pods in two phases:

1. Terminate regular Pods running on the node.
2. Terminate [critical pods](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical) running on the node.

Graceful node shutdown is configured with two [`KubeletConfiguration`](/docs/tasks/administer-cluster/kubelet-config-file/) options:
* `ShutdownGracePeriod`:
  * Specifies the total duration that the node should delay the shutdown by. This is the total grace period for Pod termination for both regular and [critical pods](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical).
* `ShutdownGracePeriodCriticalPods`:
  * Specifies the duration used to terminate [critical pods](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical) during a node shutdown. This value should be less than `ShutdownGracePeriod`.

For example, if `ShutdownGracePeriod=30s` and `ShutdownGracePeriodCriticalPods=10s`,
the kubelet will delay the node shutdown by 30 seconds. During the shutdown, the first 20 (30-10) seconds are reserved for gracefully terminating regular Pods,
and the last 10 seconds are reserved for terminating critical Pods.

{{< note >}}
When Pods are evicted during a graceful node shutdown, their `.status` becomes `Failed`.
Running `kubectl get pods` shows the status of the evicted Pods as `Shutdown`.
Running `kubectl describe pod` shows that the Pod was evicted because of the node shutdown.

```
Status:         Failed
Reason:         Shutdown
Message:        Node is shutting, evicting pods
```

Failed Pod objects are preserved until explicitly deleted or [cleaned up by the GC](/docs/concepts/workloads/pods/pod-lifecycle/#pod-garbage-collection).
This is a change of behavior compared to abrupt node termination.

{{< /note >}}

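The two options described above map directly onto the kubelet configuration file; a minimal sketch, assuming the `kubelet.config.k8s.io/v1beta1` API and illustrative durations:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
shutdownGracePeriod: 30s
shutdownGracePeriodCriticalPods: 10s
```
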
## Swap memory management {#swap-memory}

{{< feature-state state="alpha" for_k8s_version="v1.22" >}}

Prior to Kubernetes 1.22, nodes did not support the use of swap memory, and the kubelet would by default fail to start if swap was detected on a node. From 1.22 onwards, swap memory support can be enabled on a per-node basis.

To enable swap on a node, the `NodeSwap` [feature gate](/ja/docs/reference/command-line-tools-reference/feature-gates/) must be enabled on the kubelet, and the `--fail-swap-on` command line flag or `failSwapOn` [KubeletConfiguration](/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration) setting must be set to false.

A user can also optionally configure `memorySwap.swapBehavior` in order to specify how a node will use swap memory. For example:

```yaml
memorySwap:
  swapBehavior: LimitedSwap
```

The available configuration options for `swapBehavior` are:
- `LimitedSwap`: Kubernetes workloads are limited in how much swap they can use. Workloads on the node not managed by Kubernetes can still swap.
- `UnlimitedSwap`: Kubernetes workloads can use as much swap memory as they request, up to the system limit.

If configuration for `memorySwap` is not specified and the [feature gate](/ja/docs/reference/command-line-tools-reference/feature-gates/) is enabled, by default the kubelet will apply the same behaviour as the `LimitedSwap` setting.

The behaviour of the `LimitedSwap` setting depends on whether the node is running with cgroups v1 or v2 (also known as "control groups"):

- **cgroups v1:** Kubernetes workloads can use any combination of memory and swap, up to the Pod's memory limit, if set.
- **cgroups v2:** Kubernetes workloads cannot use swap memory.

For more information, and to assist with testing and provide feedback, please see [KEP-2400](https://github.com/kubernetes/enhancements/issues/2400) and its
[design proposal](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/2400-node-swap/README.md).

## {{% heading "whatsnext" %}}

* Learn about the [node components](/ja/docs/concepts/overview/components/#node-components).

||||||
|
|
|
@ -0,0 +1,204 @@
|
||||||
|
---
|
||||||
|
title: ロギングのアーキテクチャ
|
||||||
|
content_type: concept
|
||||||
|
weight: 60
|
||||||
|
---
|
||||||
|
|
||||||
|
<!-- overview -->
|
||||||
|
|
||||||
|
アプリケーションログは、アプリケーション内で何が起こっているかを理解するのに役立ちます。ログは、問題のデバッグとクラスターアクティビティの監視に特に役立ちます。最近のほとんどのアプリケーションには、何らかのロギングメカニズムがあります。同様に、コンテナエンジンはロギングをサポートするように設計されています。コンテナ化されたアプリケーションで、最も簡単で最も採用されているロギング方法は、標準出力と標準エラーストリームへの書き込みです。
|
||||||
|
|
||||||
|
ただし、コンテナエンジンまたはランタイムによって提供されるネイティブ機能は、たいていの場合、完全なロギングソリューションには十分ではありません。
|
||||||
|
|
||||||
|
たとえば、コンテナがクラッシュした場合やPodが削除された場合、またはノードが停止した場合に、アプリケーションのログにアクセスしたい場合があります。
|
||||||
|
|
||||||
|
クラスターでは、ノードやPod、またはコンテナに関係なく、ノードに個別のストレージとライフサイクルが必要です。この概念は、_クラスターレベルロギング_ と呼ばれます。
|
||||||
|
|
||||||
|
<!-- body -->
|
||||||
|
|
||||||
|
クラスターレベルロギングのアーキテクチャでは、ログを保存、分析、およびクエリするための個別のバックエンドが必要です。Kubernetesは、ログデータ用のネイティブストレージソリューションを提供していません。代わりに、Kubernetesに統合される多くのロギングソリューションがあります。次のセクションでは、ノードでログを処理および保存する方法について説明します。
|
||||||
|
|
||||||
|
## Kubernetesでの基本的なロギング {#basic-logging-in-kubernetes}
|
||||||
|
|
||||||
|
この例では、1秒に1回標準出力ストリームにテキストを書き込むコンテナを利用する、`Pod` specificationを使います。
|
||||||
|
|
||||||
|
{{< codenew file="debug/counter-pod.yaml" >}}
|
||||||
|
|
||||||
|
このPodを実行するには、次のコマンドを使用します:
|
||||||
|
|
||||||
|
```shell
|
||||||
|
kubectl apply -f https://k8s.io/examples/debug/counter-pod.yaml
|
||||||
|
```
|
||||||
|
|
||||||
|
出力は次のようになります:
|
||||||
|
|
||||||
|
```console
|
||||||
|
pod/counter created
|
||||||
|
```
|
||||||
|
|
||||||
|
ログを取得するには、以下のように`kubectl logs`コマンドを使用します:
|
||||||
|
|
||||||
|
```shell
|
||||||
|
kubectl logs counter
|
||||||
|
```
|
||||||
|
|
||||||
|
出力は次のようになります:
|
||||||
|
|
||||||
|
```console
|
||||||
|
0: Mon Jan 1 00:00:00 UTC 2001
|
||||||
|
1: Mon Jan 1 00:00:01 UTC 2001
|
||||||
|
2: Mon Jan 1 00:00:02 UTC 2001
|
||||||
|
...
|
||||||
|
```
|
||||||
|
|
||||||
|
コンテナの以前のインスタンスからログを取得するために、`kubectl logs --previous`を使用できます。Podに複数のコンテナがある場合は、次のように-cフラグでコマンドにコンテナ名を追加することで、アクセスするコンテナのログを指定します。
|
||||||
|
|
||||||
|
```console
|
||||||
|
kubectl logs counter -c count
|
||||||
|
```
|
||||||
|
|
||||||
|
詳細については、[`kubectl logs`ドキュメント](/docs/reference/generated/kubectl/kubectl-commands#logs)を参照してください。
|
||||||
|
|
||||||
|
## ノードレベルでのロギング {#logging-at-the-node-level}
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
コンテナエンジンは、生成された出力を処理して、コンテナ化されたアプリケーションの`stdout`と`stderr`ストリームにリダイレクトします。たとえば、Dockerコンテナエンジンは、これら2つのストリームを[ロギングドライバー](https://docs.docker.com/engine/admin/logging/overview)にリダイレクトします。ロギングドライバーは、JSON形式でファイルに書き込むようにKubernetesで設定されています。
|
||||||
|
|
||||||
|
{{< note >}}
|
||||||
|
Docker JSONロギングドライバーは、各行を個別のメッセージとして扱います。Dockerロギングドライバーを使用する場合、複数行メッセージを直接サポートすることはできません。ロギングエージェントレベルあるいはそれ以上のレベルで、複数行のメッセージを処理する必要があります。
|
||||||
|
{{< /note >}}
|
||||||
|
|
||||||
|
デフォルトでは、コンテナが再起動すると、kubeletは1つの終了したコンテナをログとともに保持します。Podがノードから削除されると、対応する全てのコンテナが、ログとともに削除されます。
|
||||||
|
|
||||||
|
ノードレベルロギングでの重要な考慮事項は、ノードで使用可能な全てのストレージをログが消費しないように、ログローテーションを実装することです。Kubernetesはログのローテーションを担当しませんが、デプロイツールでそれに対処するソリューションを構築する必要があります。たとえば、`kube-up.sh`スクリプトによってデプロイされたKubernetesクラスターには、1時間ごとに実行するように構成された[`logrotate`](https://linux.die.net/man/8/logrotate)ツールがあります。アプリケーションのログを自動的にローテーションするようにコンテナランタイムを構築することもできます。
|
||||||
|
|
||||||
|
例として、[`configure-helper` script](https://github.com/kubernetes/kubernetes/blob/master/cluster/gce/gci/configure-helper.sh)に対応するスクリプトである`kube-up.sh`が、どのようにGCPでCOSイメージのロギングを構築しているかについて、詳細な情報を見つけることができます。
|
||||||
|
|
||||||
|
**CRIコンテナランタイム**を使用する場合、kubeletはログのローテーションとログディレクトリ構造の管理を担当します。kubeletはこの情報をCRIコンテナランタイムに送信し、ランタイムはコンテナログを指定された場所に書き込みます。2つのkubeletパラメーター、[`container-log-max-size`と`container-log-max-files`](/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)を[kubelet設定ファイル](/docs/tasks/administer-cluster/kubelet-config-file/)で使うことで、各ログファイルの最大サイズと各コンテナで許可されるファイルの最大数をそれぞれ設定できます。
|
||||||
|
|
||||||
|
基本的なロギングの例のように、[`kubectl logs`](/docs/reference/generated/kubectl/kubectl-commands#logs)を実行すると、ノード上のkubeletがリクエストを処理し、ログファイルから直接読み取ります。kubeletはログファイルの内容を返します。
|
||||||
|
|
||||||
|
{{< note >}}
|
||||||
|
外部システムがローテーションを実行した場合、またはCRIコンテナランタイムが使用されている場合は、最新のログファイルの内容のみが`kubectl logs`で利用可能になります。例えば、10MBのファイルがある場合、`logrotate`によるローテーションが実行されると、2つのファイルが存在することになります: 1つはサイズが10MBのファイルで、もう1つは空のファイルです。この例では、`kubectl logs`は最新のログファイルの内容、つまり空のレスポンスを返します。
|
||||||
|
{{< /note >}}
|
||||||
|
|
||||||
|
### システムコンポーネントログ {#system-component-logs}
|
||||||
|
|
||||||
|
システムコンポーネントには、コンテナ内で実行されるものとコンテナ内で実行されないものの2種類があります。例えば以下のとおりです。
|
||||||
|
|
||||||
|
* Kubernetesスケジューラーとkube-proxyはコンテナ内で実行されます。
|
||||||
|
* kubeletとコンテナランタイムはコンテナ内で実行されません。
|
||||||
|
|
||||||
|
systemdを搭載したマシンでは、kubeletとコンテナランタイムがjournaldに書き込みます。systemdが存在しない場合、kubeletとコンテナランタイムは`var/log`ディレクトリ内の`.log`ファイルに書き込みます。コンテナ内のシステムコンポーネントは、デフォルトのロギングメカニズムを迂回して、常に`/var/log`ディレクトリに書き込みます。それらは[`klog`](https://github.com/kubernetes/klog)というロギングライブラリを使用します。これらのコンポーネントのロギングの重大性に関する規則は、[development docs on logging](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-instrumentation/logging.md)に記載されています。
|
||||||
|
|
||||||
|
コンテナログと同様に、`/var/log`ディレクトリ内のシステムコンポーネントログはローテーションする必要があります。`kube-up.sh`スクリプトによって生成されたKubernetesクラスターでは、これらのログは、`logrotate`ツールによって毎日、またはサイズが100MBを超えた時にローテーションされるように設定されています。
|
||||||
|
|
||||||
|
## Cluster-level logging architectures {#cluster-level-logging-architectures}

While Kubernetes does not provide a native solution for cluster-level logging, there are several common approaches you can consider. Here are some options:

* Use a node-level logging agent that runs on every node.
* Include a dedicated sidecar container for logging in an application pod.
* Push logs directly to a backend from within an application.

### Using a node logging agent {#using-a-node-logging-agent}

![Using a node level logging agent](/images/docs/user-guide/logging/logging-with-node-agent.png)

You can implement cluster-level logging by including a _node-level logging agent_ on each node. The logging agent is a dedicated tool that exposes logs or pushes logs to a backend. Commonly, the logging agent is a container that has access to a directory with log files from all of the application containers on that node.

Because the logging agent must run on every node, it is recommended to run the agent as a `DaemonSet`.

Node-level logging creates only one agent per node and doesn't require any changes to the applications running on the node.
Containers write to stdout and stderr, but with no agreed format. A node-level agent collects these logs and forwards them for aggregation.

### Using a sidecar container with the logging agent {#sidecar-container-with-logging-agent}

You can use a sidecar container in one of the following ways:

* The sidecar container streams application logs to its own `stdout`.
* The sidecar container runs a logging agent, which is configured to pick up logs from an application container.

#### Streaming sidecar container {#streaming-sidecar-container}

![Sidecar container with a streaming container](/images/docs/user-guide/logging/logging-with-streaming-sidecar.png)

By having your sidecar containers write to their own `stdout` and `stderr` streams, you can take advantage of the kubelet and the logging agent that already run on each node. The sidecar containers read logs from a file, a socket, or journald. Each sidecar container prints a log to its own `stdout` or `stderr` stream.

This approach allows you to separate several log streams from different parts of your application, some of which can lack support for writing to `stdout` or `stderr`. The logic behind redirecting logs is minimal, so it's not a significant overhead. Additionally, because `stdout` and `stderr` are handled by the kubelet, you can use built-in tools like `kubectl logs`.

For example, a pod runs a single container, and the container writes to two different log files using two different formats. Here's the configuration file for the Pod:

{{< codenew file="admin/logging/two-files-counter-pod.yaml" >}}

It is not recommended to write log entries with different formats to the same log stream, even if you managed to redirect both components to the `stdout` stream of the container. Instead, you can create two sidecar containers. Each sidecar container could tail a particular log file from a shared volume and then redirect the logs to its own `stdout` stream.

Here's a configuration file for a pod that has two sidecar containers:

{{< codenew file="admin/logging/two-files-counter-pod-streaming-sidecar.yaml" >}}
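The referenced manifest is roughly of the following shape: one `count` container writing two differently formatted files to a shared `emptyDir` volume, plus two sidecars that each `tail` one file to their own stdout. This is a simplified sketch for orientation, not the exact file embedded above:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: busybox:1.28
    # Writes two differently formatted log lines per second to the shared volume.
    args:
    - /bin/sh
    - -c
    - >
      i=0;
      while true;
      do
        echo "$i: $(date)" >> /var/log/1.log;
        echo "$(date) INFO $i" >> /var/log/2.log;
        i=$((i+1));
        sleep 1;
      done
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  - name: count-log-1
    image: busybox:1.28
    # Streams the first log file to this sidecar's stdout.
    args: [/bin/sh, -c, 'tail -n+1 -F /var/log/1.log']
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  - name: count-log-2
    image: busybox:1.28
    # Streams the second log file to this sidecar's stdout.
    args: [/bin/sh, -c, 'tail -n+1 -F /var/log/2.log']
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  volumes:
  - name: varlog
    emptyDir: {}
```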
Now when you run this pod, you can access each log stream separately by running the following commands:

```shell
kubectl logs counter count-log-1
```

The output is similar to this:

```console
0: Mon Jan 1 00:00:00 UTC 2001
1: Mon Jan 1 00:00:01 UTC 2001
2: Mon Jan 1 00:00:02 UTC 2001
...
```

```shell
kubectl logs counter count-log-2
```

The output is similar to this:

```console
Mon Jan 1 00:00:00 UTC 2001 INFO 0
Mon Jan 1 00:00:01 UTC 2001 INFO 1
Mon Jan 1 00:00:02 UTC 2001 INFO 2
...
```

The node-level agent installed in your cluster picks up those log streams automatically without any further configuration. If you like, you can configure the agent to parse log lines depending on the source container.

Note that, despite low CPU and memory usage (on the order of a couple of millicores for CPU and a couple of megabytes for memory), writing logs to a file and then streaming them to `stdout` can double disk usage. If you have an application that writes to a single file, it's recommended to set `/dev/stdout` as the destination rather than implement the streaming sidecar container approach.

Sidecar containers can also be used to rotate log files that cannot be rotated by the application itself. An example of this approach is a small container running `logrotate` periodically. However, it's recommended to use `stdout` and `stderr` directly, and leave rotation and retention policies to the kubelet.
#### Sidecar container with a logging agent {#sidecar-container-with-a-logging-agent}

![Sidecar container with a logging agent](/images/docs/user-guide/logging/logging-with-sidecar-agent.png)

If the node-level logging agent is not flexible enough for your situation, you can create a sidecar container with a separate logging agent that you have configured specifically to run with your application.

{{< note >}}
Using a logging agent in a sidecar container can lead to significant resource consumption. Moreover, you won't be able to access those logs using `kubectl logs` because they are not controlled by the kubelet.
{{< /note >}}

Here are two configuration files that you can use to implement a sidecar container with a logging agent. The first file contains a [`ConfigMap`](/ja/docs/tasks/configure-pod-container/configure-pod-configmap/) to configure fluentd.

{{< codenew file="admin/logging/fluentd-sidecar-config.yaml" >}}

{{< note >}}
For information about configuring fluentd, see the [fluentd documentation](https://docs.fluentd.org/).
{{< /note >}}

The second file describes a pod that has a sidecar container running fluentd. The pod mounts a volume where fluentd can pick up its configuration data.

{{< codenew file="admin/logging/two-files-counter-pod-agent-sidecar.yaml" >}}

In the sample configurations, you can replace fluentd with any logging agent, reading from any source inside an application container.

### Exposing logs directly from the application {#exposing-logs-directly-from-the-application}

![Exposing logs directly from the application](/images/docs/user-guide/logging/logging-from-application.png)

Cluster-logging that exposes or pushes logs directly from every application is outside the scope of Kubernetes.
@ -10,14 +10,29 @@ weight: 30
|
||||||
|
|
||||||
<!-- overview -->
|
<!-- overview -->
|
||||||
|
|
||||||
KubernetesのSecretはパスワード、OAuthトークン、SSHキーのような機密情報を保存し、管理できるようにします。
|
|
||||||
Secretに機密情報を保存することは、それらを{{< glossary_tooltip text="Pod" term_id="pod" >}}の定義や{{< glossary_tooltip text="コンテナイメージ" term_id="image" >}}に直接記載するより、安全で柔軟です。
|
|
||||||
詳しくは[Secretの設計文書](https://git.k8s.io/community/contributors/design-proposals/auth/secrets.md)を参照してください。
|
|
||||||
|
|
||||||
Secretはパスワード、トークン、キーのような小容量の機密データを含むオブジェクトです。
|
Secretとは、パスワードやトークン、キーなどの少量の機密データを含むオブジェクトのことです。
|
||||||
他の方法としては、そのような情報はPodの定義やイメージに含めることができます。
|
このような情報は、Secretを用いないと{{< glossary_tooltip term_id="pod" >}}の定義や{{< glossary_tooltip text="コンテナイメージ" term_id="image" >}}に直接記載することになってしまうかもしれません。
|
||||||
ユーザーはSecretを作ることができ、またシステムが作るSecretもあります。
|
Secretを使用すれば、アプリケーションコードに機密データを含める必要がなくなります。
|
||||||
|
|
||||||
|
なぜなら、Secretは、それを使用するPodとは独立して作成することができ、
|
||||||
|
Podの作成、閲覧、編集といったワークフローの中でSecret(およびそのデータ)が漏洩する危険性が低くなるためです。
|
||||||
|
また、Kubernetesやクラスター内で動作するアプリケーションは、不揮発性ストレージに機密データを書き込まないようにするなど、Secretで追加の予防措置を取ることができます。
|
||||||
|
|
||||||
|
Secretsは、{{< glossary_tooltip text="ConfigMaps" term_id="configmap" >}}に似ていますが、機密データを保持するために用います。
|
||||||
|
|
||||||
|
|
||||||
|
{{< caution >}}
Kubernetes Secrets are, by default, stored unencrypted in the API server's underlying data store (etcd). Anyone with API access can retrieve or modify a Secret, and so can anyone with access to etcd.
Additionally, anyone who is authorized to create a Pod in a namespace can use that access to read any Secret in that namespace; this includes indirect access such as the ability to create a Deployment.

In order to safely use Secrets, take at least the following steps:

1. [Enable Encryption at Rest](/docs/tasks/administer-cluster/encrypt-data/) for Secrets.
2. Enable or configure [RBAC rules](/docs/reference/access-authn-authz/authorization/) that restrict reading data in Secrets.
3. Where appropriate, also use mechanisms such as RBAC to limit which principals are allowed to create new Secrets or replace existing ones.
{{< /caution >}}
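As a rough illustration of the first step, encryption at rest is configured on the kube-apiserver through an `EncryptionConfiguration` file passed via the `--encryption-provider-config` flag. The sketch below is illustrative only; the key name and the base64-encoded key are placeholders:

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      # Encrypt newly written Secrets with AES-CBC using the key below.
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded 32-byte key>
      # Allow reading Secrets that are still stored in plaintext.
      - identity: {}
```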
|
||||||
|
|
||||||
<!-- body -->
|
<!-- body -->
|
||||||
|
|
||||||
|
@ -30,6 +45,7 @@ PodがSecretを使う方法は3種類あります。
|
||||||
- [コンテナの環境変数](#using-secrets-as-environment-variables)として利用する
|
- [コンテナの環境変数](#using-secrets-as-environment-variables)として利用する
|
||||||
- Podを生成するために[kubeletがイメージをpullする](#using-imagepullsecrets)ときに使用する
|
- Podを生成するために[kubeletがイメージをpullする](#using-imagepullsecrets)ときに使用する
|
||||||
|
|
||||||
|
KubernetesのコントロールプレーンでもSecretsは使われています。例えば、[bootstrap token Secrets](#bootstrap-token-secrets)は、ノード登録を自動化するための仕組みです。
|
||||||
|
|
||||||
Secretオブジェクトの名称は正当な[DNSサブドメイン名](/ja/docs/concepts/overview/working-with-objects/names/#dns-subdomain-names)である必要があります。
|
Secretオブジェクトの名称は正当な[DNSサブドメイン名](/ja/docs/concepts/overview/working-with-objects/names/#dns-subdomain-names)である必要があります。
|
||||||
シークレットの構成ファイルを作成するときに、`data`および/または`stringData`フィールドを指定できます。`data`フィールドと`stringData`フィールドはオプションです。
|
シークレットの構成ファイルを作成するときに、`data`および/または`stringData`フィールドを指定できます。`data`フィールドと`stringData`フィールドはオプションです。
|
||||||
|
@ -145,7 +161,8 @@ Docker configファイルがない場合、または`kubectl`を使用してDock
|
||||||
kubectl create secret docker-registry secret-tiger-docker \
|
kubectl create secret docker-registry secret-tiger-docker \
|
||||||
--docker-username=tiger \
|
--docker-username=tiger \
|
||||||
--docker-password=pass113 \
|
--docker-password=pass113 \
|
||||||
--docker-email=tiger@acme.com
|
--docker-email=tiger@acme.com \
|
||||||
|
--docker-server=my-registry.example:5000
|
||||||
```
|
```
|
||||||
|
|
||||||
このコマンドは、`kubernetes.io/dockerconfigjson`型のSecretを作成します。
|
このコマンドは、`kubernetes.io/dockerconfigjson`型のSecretを作成します。
|
||||||
|
@ -153,15 +170,21 @@ kubectl create secret docker-registry secret-tiger-docker \
|
||||||
|
|
||||||
```json
|
```json
|
||||||
{
|
{
|
||||||
"auths": {
|
"apiVersion": "v1",
|
||||||
"https://index.docker.io/v1/": {
|
"data": {
|
||||||
"username": "tiger",
|
".dockerconfigjson": "eyJhdXRocyI6eyJteS1yZWdpc3RyeTo1MDAwIjp7InVzZXJuYW1lIjoidGlnZXIiLCJwYXNzd29yZCI6InBhc3MxMTMiLCJlbWFpbCI6InRpZ2VyQGFjbWUuY29tIiwiYXV0aCI6ImRHbG5aWEk2Y0dGemN6RXhNdz09In19fQ=="
|
||||||
"password": "pass113",
|
},
|
||||||
"email": "tiger@acme.com",
|
"kind": "Secret",
|
||||||
"auth": "dGlnZXI6cGFzczExMw=="
|
"metadata": {
|
||||||
}
|
"creationTimestamp": "2021-07-01T07:30:59Z",
|
||||||
}
|
"name": "secret-tiger-docker",
|
||||||
|
"namespace": "default",
|
||||||
|
"resourceVersion": "566718",
|
||||||
|
"uid": "e15c1d7b-9071-4100-8681-f3a7a2ce89ca"
|
||||||
|
},
|
||||||
|
"type": "kubernetes.io/dockerconfigjson"
|
||||||
}
|
}
|
||||||
|
|
||||||
```
|
```
|
||||||
|
|
||||||
### Basic authentication Secret
|
### Basic authentication Secret
|
||||||
|
@ -1062,3 +1085,4 @@ Podに複数のコンテナが含まれることもあります。しかし、Po
|
||||||
- [`kubectl`を使用してSecretを管理する](/docs/tasks/configmap-secret/managing-secret-using-kubectl/)方法を学ぶ
|
- [`kubectl`を使用してSecretを管理する](/docs/tasks/configmap-secret/managing-secret-using-kubectl/)方法を学ぶ
|
||||||
- [config fileを使用してSecretを管理する](/docs/tasks/configmap-secret/managing-secret-using-config-file/)方法を学ぶ
|
- [config fileを使用してSecretを管理する](/docs/tasks/configmap-secret/managing-secret-using-config-file/)方法を学ぶ
|
||||||
- [kustomizeを使用してSecretを管理する](/docs/tasks/configmap-secret/managing-secret-using-kustomize/)方法を学ぶ
|
- [kustomizeを使用してSecretを管理する](/docs/tasks/configmap-secret/managing-secret-using-kustomize/)方法を学ぶ
|
||||||
|
- [SecretのAPIリファレンス](/docs/reference/kubernetes-api/config-and-storage-resources/secret-v1/)を読む
|
||||||
|
|
|
@ -1,7 +1,7 @@
|
||||||
---
|
---
|
||||||
title: Kubernetesのスケジューラー
|
title: Kubernetesのスケジューラー
|
||||||
content_type: concept
|
content_type: concept
|
||||||
weight: 60
|
weight: 10
|
||||||
---
|
---
|
||||||
|
|
||||||
<!-- overview -->
|
<!-- overview -->
|
||||||
|
@ -62,9 +62,9 @@ _スコアリング_ ステップでは、Podを割り当てるのに最も適
|
||||||
* [Podトポロジーの分散制約](/docs/concepts/workloads/pods/pod-topology-spread-constraints/)を参照してください。
|
* [Podトポロジーの分散制約](/docs/concepts/workloads/pods/pod-topology-spread-constraints/)を参照してください。
|
||||||
* kube-schedulerの[リファレンスドキュメント](/docs/reference/command-line-tools-reference/kube-scheduler/)を参照してください。
|
* kube-schedulerの[リファレンスドキュメント](/docs/reference/command-line-tools-reference/kube-scheduler/)を参照してください。
|
||||||
* [複数のスケジューラーの設定](/docs/tasks/administer-cluster/configure-multiple-schedulers/)について学んでください。
|
* [複数のスケジューラーの設定](/docs/tasks/administer-cluster/configure-multiple-schedulers/)について学んでください。
|
||||||
* [トポロジーの管理ポリシー](/docs/tasks/administer-cluster/topology-manager/)について学んでください。
|
* [トポロジーの管理ポリシー](/ja/docs/tasks/administer-cluster/topology-manager/)について学んでください。
|
||||||
* [Podのオーバーヘッド](/docs/concepts/scheduling-eviction/pod-overhead/)について学んでください。
|
* [Podのオーバーヘッド](/ja/docs/concepts/scheduling-eviction/pod-overhead/)について学んでください。
|
||||||
* ボリュームを使用するPodのスケジューリングについて以下で学んでください。
|
* ボリュームを使用するPodのスケジューリングについて以下で学んでください。
|
||||||
* [Volume Topology Support](/docs/concepts/storage/storage-classes/#volume-binding-mode)
|
* [Volume Topology Support](/docs/concepts/storage/storage-classes/#volume-binding-mode)
|
||||||
* [ストレージ容量の追跡](/ja//ja/docs/concepts/storage/storage-capacity/)
|
* [ストレージ容量の追跡](/ja/docs/concepts/storage/storage-capacity/)
|
||||||
* [Node-specific Volume Limits](/docs/concepts/storage/storage-limits/)
|
* [Node-specific Volume Limits](/docs/concepts/storage/storage-limits/)
|
||||||
|
|
|
@ -0,0 +1,197 @@
|
||||||
|
---
|
||||||
|
title: 拡張リソースのリソースビンパッキング
|
||||||
|
content_type: concept
|
||||||
|
weight: 80
|
||||||
|
---
|
||||||
|
|
||||||
|
<!-- overview -->
|
||||||
|
|
||||||
|
{{< feature-state for_k8s_version="v1.16" state="alpha" >}}
|
||||||
|
|
||||||
|
kube-schedulerでは、優先度関数`RequestedToCapacityRatioResourceAllocation`を使用した、
|
||||||
|
拡張リソースを含むリソースのビンパッキングを有効化できます。優先度関数はそれぞれのニーズに応じて、kube-schedulerを微調整するために使用できます。
|
||||||
|
|
||||||
|
<!-- body -->
|
||||||
|
|
||||||
|
## `RequestedToCapacityRatioResourceAllocation`を使用したビンパッキングの有効化
|
||||||
|
|
||||||
|
Kubernetesでは、キャパシティー比率への要求に基づいたNodeのスコアリングをするために、各リソースの重みと共にリソースを指定することができます。これにより、ユーザーは適切なパラメーターを使用することで拡張リソースをビンパックすることができ、大規模クラスターにおける希少なリソースを有効活用できるようになります。優先度関数`RequestedToCapacityRatioResourceAllocation`の動作は`RequestedToCapacityRatioArgs`と呼ばれる設定オプションによって変わります。この引数は`shape`と`resources`パラメーターによって構成されます。`shape`パラメーターは`utilization`と`score`の値に基づいて、最も要求が多い場合か最も要求が少ない場合の関数をチューニングできます。`resources`パラメーターは、スコアリングの際に考慮されるリソース名の`name`と、各リソースの重みを指定する`weight`で構成されます。
|
||||||
|
|
||||||
|
以下は、拡張リソース`intel.com/foo`と`intel.com/bar`のビンパッキングに`requestedToCapacityRatioArguments`を設定する例になります。
|
||||||
|
|
||||||
|
```yaml
|
||||||
|
apiVersion: kubescheduler.config.k8s.io/v1beta1
|
||||||
|
kind: KubeSchedulerConfiguration
|
||||||
|
profiles:
|
||||||
|
# ...
|
||||||
|
pluginConfig:
|
||||||
|
- name: RequestedToCapacityRatio
|
||||||
|
args:
|
||||||
|
shape:
|
||||||
|
- utilization: 0
|
||||||
|
score: 10
|
||||||
|
- utilization: 100
|
||||||
|
score: 0
|
||||||
|
resources:
|
||||||
|
- name: intel.com/foo
|
||||||
|
weight: 3
|
||||||
|
- name: intel.com/bar
|
||||||
|
weight: 5
|
||||||
|
```
|
||||||
|
This configuration can be passed to the scheduler by specifying the `KubeSchedulerConfiguration` file with the kube-scheduler flag `--config=/path/to/config/file`. For example:
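The invocation is simply the flag pointing at the file shown above; the path here is illustrative:

```shell
kube-scheduler --config=/etc/kubernetes/scheduler-config.yaml
```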
**This feature is disabled by default.**
|
||||||
|
|
||||||
|
### 優先度関数のチューニング
|
||||||
|
|
||||||
|
`shape`は`RequestedToCapacityRatioPriority`関数の動作を指定するために使用されます。
|
||||||
|
|
||||||
|
```yaml
|
||||||
|
shape:
|
||||||
|
- utilization: 0
|
||||||
|
score: 0
|
||||||
|
- utilization: 100
|
||||||
|
score: 10
|
||||||
|
```
|
||||||
|
|
||||||
|
上記の引数は、`utilization`が0%の場合は0、`utilization`が100%の場合は10という`score`をNodeに与え、ビンパッキングの動作を有効にしています。最小要求を有効にするには、次のようにスコアを反転させる必要があります。
|
||||||
|
|
||||||
|
```yaml
|
||||||
|
shape:
|
||||||
|
- utilization: 0
|
||||||
|
score: 10
|
||||||
|
- utilization: 100
|
||||||
|
score: 0
|
||||||
|
```
|
||||||
|
|
||||||
|
`resources`はオプションパラメーターで、デフォルトでは以下の通りです。
|
||||||
|
|
||||||
|
``` yaml
|
||||||
|
resources:
|
||||||
|
- name: cpu
|
||||||
|
weight: 1
|
||||||
|
- name: memory
|
||||||
|
weight: 1
|
||||||
|
```
|
||||||
|
|
||||||
|
|
||||||
|
以下のように拡張リソースの追加に利用できます。
|
||||||
|
|
||||||
|
```yaml
|
||||||
|
resources:
|
||||||
|
- name: intel.com/foo
|
||||||
|
weight: 5
|
||||||
|
- name: cpu
|
||||||
|
weight: 3
|
||||||
|
- name: memory
|
||||||
|
weight: 1
|
||||||
|
```
|
||||||
|
|
||||||
|
`weight`はオプションパラメーターで、指定されてない場合1が設定されます。また、マイナスの値は設定できません。
|
||||||
|
|
||||||
|
### キャパシティ割り当てのためのNodeスコアリング
|
||||||
|
|
||||||
|
このセクションは、本機能の内部詳細について理解したい方を対象としています。以下は、与えられた値に対してNodeのスコアがどのように計算されるかの例です。
|
||||||
|
|
||||||
|
要求されたリソース:
|
||||||
|
|
||||||
|
```
|
||||||
|
intel.com/foo : 2
|
||||||
|
memory: 256MB
|
||||||
|
cpu: 2
|
||||||
|
```
|
||||||
|
|
||||||
|
リソースの重み:
|
||||||
|
|
||||||
|
```
|
||||||
|
intel.com/foo : 5
|
||||||
|
memory: 1
|
||||||
|
cpu: 3
|
||||||
|
```
|
||||||
|
|
||||||
|
`shape`の値 {{0, 0}, {100, 10}}
|
||||||
|
|
||||||
|
Node 1 spec:

```
Available:
  intel.com/foo: 4
  memory: 1 GB
  cpu: 8

Used:
  intel.com/foo: 1
  memory: 256MB
  cpu: 1
```

Node 1 score:

```
intel.com/foo  = resourceScoringFunction((2+1), 4)
               = (100 - ((4-3)*100/4))
               = (100 - 25)
               = 75                       # requested + used = 75% * available
               = rawScoringFunction(75)
               = 7                        # floor(75/10)

memory         = resourceScoringFunction((256+256), 1024)
               = (100 - ((1024-512)*100/1024))
               = 50                       # requested + used = 50% * available
               = rawScoringFunction(50)
               = 5                        # floor(50/10)

cpu            = resourceScoringFunction((2+1), 8)
               = (100 - ((8-3)*100/8))
               = 37.5                     # requested + used = 37.5% * available
               = rawScoringFunction(37.5)
               = 3                        # floor(37.5/10)

NodeScore   =  ((7 * 5) + (5 * 1) + (3 * 3)) / (5 + 1 + 3)
            =  5
```
Node 2 spec:

```
Available:
  intel.com/foo: 8
  memory: 1GB
  cpu: 8

Used:
  intel.com/foo: 2
  memory: 512MB
  cpu: 6
```

Node 2 score:

```
intel.com/foo  = resourceScoringFunction((2+2), 8)
               = (100 - ((8-4)*100/8))
               = (100 - 50)
               = 50
               = rawScoringFunction(50)
               = 5

memory         = resourceScoringFunction((256+512), 1024)
               = (100 - ((1024-768)*100/1024))
               = 75
               = rawScoringFunction(75)
               = 7

cpu            = resourceScoringFunction((2+6), 8)
               = (100 - ((8-8)*100/8))
               = 100
               = rawScoringFunction(100)
               = 10

NodeScore   =  ((5 * 5) + (7 * 1) + (10 * 3)) / (5 + 1 + 3)
            =  7
```
|
||||||
|
|
||||||
|
## {{% heading "whatsnext" %}}
|
||||||
|
|
||||||
|
- [スケジューリングフレームワーク](/ja/docs/concepts/scheduling-eviction/scheduling-framework/)について更に読む
|
||||||
|
- [スケジューラーの設定](/docs/reference/scheduling/config/)について更に読む
|
|
@ -49,6 +49,25 @@ _Pod Security Policy_ はクラスターレベルのリソースで、Pod定義
|
||||||
<td><strong>項目</strong></td>
|
<td><strong>項目</strong></td>
|
||||||
<td><strong>ポリシー</strong></td>
|
<td><strong>ポリシー</strong></td>
|
||||||
</tr>
|
</tr>
|
||||||
|
<tr>
|
||||||
|
<td>ホストのプロセス</td>
|
||||||
|
<td>
|
||||||
|
<p>Windows Podは、Windowsノードへの特権的なアクセスを可能にする<a href="/docs/tasks/configure-pod-container/create-hostprocess-pod">HostProcess</a>コンテナを実行する機能を提供します。ベースラインポリシーでは、ホストへの特権的なアクセスは禁止されています。HostProcess Podは、Kubernetes v1.22時点ではアルファ版の機能です。
|
||||||
|
ホストのネームスペースの共有は無効化すべきです。</p>
|
||||||
|
<p><strong>制限されるフィールド</strong></p>
|
||||||
|
<ul>
|
||||||
|
<li><code>spec.securityContext.windowsOptions.hostProcess</code></li>
|
||||||
|
<li><code>spec.containers[*].securityContext.windowsOptions.hostProcess</code></li>
|
||||||
|
<li><code>spec.initContainers[*].securityContext.windowsOptions.hostProcess</code></li>
|
||||||
|
<li><code>spec.ephemeralContainers[*].securityContext.windowsOptions.hostProcess</code></li>
|
||||||
|
</ul>
|
||||||
|
<p><strong>認められる値</strong></p>
|
||||||
|
<ul>
|
||||||
|
<li>Undefined/nil</li>
|
||||||
|
<li><code>false</code></li>
|
||||||
|
</ul>
|
||||||
|
</td>
|
||||||
|
</tr>
|
||||||
<tr>
|
<tr>
|
||||||
<td>ホストのネームスペース</td>
|
<td>ホストのネームスペース</td>
|
||||||
<td>
|
<td>
|
||||||
|
@ -57,7 +76,7 @@ _Pod Security Policy_ はクラスターレベルのリソースで、Pod定義
|
||||||
spec.hostNetwork<br>
|
spec.hostNetwork<br>
|
||||||
spec.hostPID<br>
|
spec.hostPID<br>
|
||||||
spec.hostIPC<br>
|
spec.hostIPC<br>
|
||||||
<br><b>認められる値:</b> false<br>
|
<br><b>認められる値:</b> false, Undefined/nil<br>
|
||||||
</td>
|
</td>
|
||||||
</tr>
|
</tr>
|
||||||
<tr>
|
<tr>
|
||||||
|
@ -67,6 +86,7 @@ _Pod Security Policy_ はクラスターレベルのリソースで、Pod定義
|
||||||
<br><b>制限されるフィールド:</b><br>
|
<br><b>制限されるフィールド:</b><br>
|
||||||
spec.containers[*].securityContext.privileged<br>
|
spec.containers[*].securityContext.privileged<br>
|
||||||
spec.initContainers[*].securityContext.privileged<br>
|
spec.initContainers[*].securityContext.privileged<br>
|
||||||
|
spec.ephemeralContainers[*].securityContext.privileged<br>
|
||||||
<br><b>認められる値:</b> false, undefined/nil<br>
|
<br><b>認められる値:</b> false, undefined/nil<br>
|
||||||
</td>
|
</td>
|
||||||
</tr>
|
</tr>
|
||||||
|
@ -77,7 +97,22 @@ _Pod Security Policy_ はクラスターレベルのリソースで、Pod定義
|
||||||
<br><b>制限されるフィールド:</b><br>
|
<br><b>制限されるフィールド:</b><br>
|
||||||
spec.containers[*].securityContext.capabilities.add<br>
|
spec.containers[*].securityContext.capabilities.add<br>
|
||||||
spec.initContainers[*].securityContext.capabilities.add<br>
|
spec.initContainers[*].securityContext.capabilities.add<br>
|
||||||
<br><b>認められる値:</b> 空 (または既知のリストに限定)<br>
|
spec.ephemeralContainers[*].securityContext.capabilities.add<br>
|
||||||
|
<br><b>認められる値:</b>
|
||||||
|
Undefined/nil<br>
|
||||||
|
AUDIT_WRITE<br>
|
||||||
|
CHOWN<br>
|
||||||
|
DAC_OVERRIDE<br>
|
||||||
|
FOWNER<br>
|
||||||
|
FSETID<br>
|
||||||
|
KILL<br>
|
||||||
|
MKNOD<br>
|
||||||
|
NET_BIND_SERVICE<br>
|
||||||
|
SETFCAP<br>
|
||||||
|
SETGID<br>
|
||||||
|
SETPCAP<br>
|
||||||
|
SETUID<br>
|
||||||
|
SYS_CHROOT<br>
|
||||||
</td>
|
</td>
|
||||||
</tr>
|
</tr>
|
||||||
<tr>
|
<tr>
|
||||||
|
@ -96,6 +131,7 @@ _Pod Security Policy_ はクラスターレベルのリソースで、Pod定義
|
||||||
<br><b>制限されるフィールド:</b><br>
|
<br><b>制限されるフィールド:</b><br>
|
||||||
spec.containers[*].ports[*].hostPort<br>
|
spec.containers[*].ports[*].hostPort<br>
|
||||||
spec.initContainers[*].ports[*].hostPort<br>
|
spec.initContainers[*].ports[*].hostPort<br>
|
||||||
|
spec.ephemeralContainers[*].ports[*].hostPort<br>
|
||||||
<br><b>認められる値:</b> 0, undefined (または既知のリストに限定)<br>
|
<br><b>認められる値:</b> 0, undefined (または既知のリストに限定)<br>
|
||||||
</td>
|
</td>
|
||||||
</tr>
|
</tr>
|
||||||
|
@ -105,7 +141,7 @@ _Pod Security Policy_ はクラスターレベルのリソースで、Pod定義
|
||||||
サポートされるホストでは、AppArmorの'runtime/default'プロファイルがデフォルトで適用されます。デフォルトのポリシーはポリシーの上書きや無効化を防ぎ、許可されたポリシーのセットを上書きできないよう制限すべきです。<br>
|
サポートされるホストでは、AppArmorの'runtime/default'プロファイルがデフォルトで適用されます。デフォルトのポリシーはポリシーの上書きや無効化を防ぎ、許可されたポリシーのセットを上書きできないよう制限すべきです。<br>
|
||||||
<br><b>制限されるフィールド:</b><br>
|
<br><b>制限されるフィールド:</b><br>
|
||||||
metadata.annotations['container.apparmor.security.beta.kubernetes.io/*']<br>
|
metadata.annotations['container.apparmor.security.beta.kubernetes.io/*']<br>
|
||||||
<br><b>認められる値:</b> 'runtime/default', undefined<br>
|
<br><b>認められる値:</b> 'runtime/default', undefined, localhost/*<br>
|
||||||
</td>
|
</td>
|
||||||
</tr>
|
</tr>
|
||||||
<tr>
|
<tr>
|
||||||
|
@ -116,7 +152,24 @@ _Pod Security Policy_ はクラスターレベルのリソースで、Pod定義
|
||||||
spec.securityContext.seLinuxOptions<br>
|
spec.securityContext.seLinuxOptions<br>
|
||||||
spec.containers[*].securityContext.seLinuxOptions<br>
|
spec.containers[*].securityContext.seLinuxOptions<br>
|
||||||
spec.initContainers[*].securityContext.seLinuxOptions<br>
|
spec.initContainers[*].securityContext.seLinuxOptions<br>
|
||||||
|
spec.ephemeralContainers[*].securityContext.seLinuxOptions.type<br>
|
||||||
<br><b>認められる値:</b>undefined/nil<br>
|
<br><b>認められる値:</b>undefined/nil<br>
|
||||||
|
Undefined/""<br>
|
||||||
|
container_t<br>
|
||||||
|
container_init_t<br>
|
||||||
|
container_kvm_t<br>
|
||||||
|
<hr />
|
||||||
|
<br><b>制限されるフィールド:</b><br>
|
||||||
|
spec.securityContext.seLinuxOptions.user<br>
|
||||||
|
spec.containers[*].securityContext.seLinuxOptions.user<br>
|
||||||
|
spec.initContainers[*].securityContext.seLinuxOptions.user<br>
|
||||||
|
spec.ephemeralContainers[*].securityContext.seLinuxOptions.user<br>
|
||||||
|
spec.securityContext.seLinuxOptions.role<br>
|
||||||
|
spec.containers[*].securityContext.seLinuxOptions.role<br>
|
||||||
|
spec.initContainers[*].securityContext.seLinuxOptions.role<br>
|
||||||
|
spec.ephemeralContainers[*].securityContext.seLinuxOptions.role<br>
|
||||||
|
<br><b>認められる値:</b>undefined/nil<br>
|
||||||
|
Undefined/""
|
||||||
</td>
|
</td>
|
||||||
</tr>
|
</tr>
|
||||||
<tr>
|
<tr>
|
||||||
|
@ -126,9 +179,29 @@ _Pod Security Policy_ はクラスターレベルのリソースで、Pod定義
|
||||||
<br><b>制限されるフィールド:</b><br>
|
<br><b>制限されるフィールド:</b><br>
|
||||||
spec.containers[*].securityContext.procMount<br>
|
spec.containers[*].securityContext.procMount<br>
|
||||||
spec.initContainers[*].securityContext.procMount<br>
|
spec.initContainers[*].securityContext.procMount<br>
|
||||||
|
spec.ephemeralContainers[*].securityContext.procMount<br>
|
||||||
<br><b>認められる値:</b>undefined/nil, 'Default'<br>
|
<br><b>認められる値:</b>undefined/nil, 'Default'<br>
|
||||||
</td>
|
</td>
|
||||||
</tr>
|
</tr>
|
||||||
|
<tr>
|
||||||
|
<td>Seccomp</td>
|
||||||
|
<td>
|
||||||
|
<p>Seccompプロファイルを明示的に<code>Unconfined</code>に設定することはできません。</p>
|
||||||
|
<p><strong>Restricted Fields</strong></p>
|
||||||
|
<ul>
|
||||||
|
<li><code>spec.securityContext.seccompProfile.type</code></li>
|
||||||
|
<li><code>spec.containers[*].securityContext.seccompProfile.type</code></li>
|
||||||
|
<li><code>spec.initContainers[*].securityContext.seccompProfile.type</code></li>
|
||||||
|
<li><code>spec.ephemeralContainers[*].securityContext.seccompProfile.type</code></li>
|
||||||
|
</ul>
|
||||||
|
<p><strong>Allowed Values</strong></p>
|
||||||
|
<ul>
|
||||||
|
<li>Undefined/nil</li>
|
||||||
|
<li><code>RuntimeDefault</code></li>
|
||||||
|
<li><code>Localhost</code></li>
|
||||||
|
</ul>
|
||||||
|
</td>
|
||||||
|
</tr>
|
||||||
<tr>
|
<tr>
|
||||||
<td>Sysctl</td>
|
<td>Sysctl</td>
|
||||||
<td>
|
<td>
|
||||||
|
@ -179,7 +252,7 @@ _Pod Security Policy_ はクラスターレベルのリソースで、Pod定義
|
||||||
spec.volumes[*].rbd<br>
|
spec.volumes[*].rbd<br>
|
||||||
spec.volumes[*].flexVolume<br>
|
spec.volumes[*].flexVolume<br>
|
||||||
spec.volumes[*].cinder<br>
|
spec.volumes[*].cinder<br>
|
||||||
spec.volumes[*].cephFS<br>
|
spec.volumes[*].cephfs<br>
|
||||||
spec.volumes[*].flocker<br>
|
spec.volumes[*].flocker<br>
|
||||||
spec.volumes[*].fc<br>
|
spec.volumes[*].fc<br>
|
||||||
spec.volumes[*].azureFile<br>
|
spec.volumes[*].azureFile<br>
|
||||||
|
@ -189,7 +262,7 @@ _Pod Security Policy_ はクラスターレベルのリソースで、Pod定義
|
||||||
spec.volumes[*].portworxVolume<br>
|
spec.volumes[*].portworxVolume<br>
|
||||||
spec.volumes[*].scaleIO<br>
|
spec.volumes[*].scaleIO<br>
|
||||||
spec.volumes[*].storageos<br>
|
spec.volumes[*].storageos<br>
|
||||||
spec.volumes[*].csi<br>
|
spec.volumes[*].photonPersistentDisk<br>
|
||||||
<br><b>認められる値:</b> undefined/nil<br>
|
<br><b>認められる値:</b> undefined/nil<br>
|
||||||
</td>
|
</td>
|
||||||
</tr>
|
</tr>
|
||||||
|
@ -200,6 +273,7 @@ _Pod Security Policy_ はクラスターレベルのリソースで、Pod定義
|
||||||
<br><b>制限されるフィールド:</b><br>
|
<br><b>制限されるフィールド:</b><br>
|
||||||
spec.containers[*].securityContext.allowPrivilegeEscalation<br>
|
spec.containers[*].securityContext.allowPrivilegeEscalation<br>
|
||||||
spec.initContainers[*].securityContext.allowPrivilegeEscalation<br>
|
spec.initContainers[*].securityContext.allowPrivilegeEscalation<br>
|
||||||
|
spec.ephemeralContainers[*].securityContext.allowPrivilegeEscalation<br>
|
||||||
<br><b>認められる値:</b> false<br>
|
<br><b>認められる値:</b> false<br>
|
||||||
</td>
|
</td>
|
||||||
</tr>
|
</tr>
|
||||||
|
@ -211,6 +285,7 @@ _Pod Security Policy_ はクラスターレベルのリソースで、Pod定義
|
||||||
spec.securityContext.runAsNonRoot<br>
|
spec.securityContext.runAsNonRoot<br>
|
||||||
spec.containers[*].securityContext.runAsNonRoot<br>
|
spec.containers[*].securityContext.runAsNonRoot<br>
|
||||||
spec.initContainers[*].securityContext.runAsNonRoot<br>
|
spec.initContainers[*].securityContext.runAsNonRoot<br>
|
||||||
|
spec.ephemeralContainers[*].securityContext.runAsNonRoot<br>
|
||||||
<br><b>認められる値:</b> true<br>
|
<br><b>認められる値:</b> true<br>
|
||||||
</td>
|
</td>
|
||||||
</tr>
|
</tr>
|
||||||
|
@ -242,6 +317,36 @@ _Pod Security Policy_ はクラスターレベルのリソースで、Pod定義
|
||||||
undefined / nil<br>
|
undefined / nil<br>
|
||||||
</td>
|
</td>
|
||||||
</tr>
|
</tr>
|
||||||
|
<tr>
|
||||||
|
<td style="white-space: nowrap">Capabilities (v1.22+)</td>
|
||||||
|
<td>
|
||||||
|
<p>
|
||||||
|
コンテナはすべてのケイパビリティを削除する必要があり、<code>NET_BIND_SERVICE</code>ケイパビリティを追加することだけが許可されています。
|
||||||
|
</p>
|
||||||
|
<p><strong>Restricted Fields</strong></p>
|
||||||
|
<ul>
|
||||||
|
<li><code>spec.containers[*].securityContext.capabilities.drop</code></li>
|
||||||
|
<li><code>spec.initContainers[*].securityContext.capabilities.drop</code></li>
|
||||||
|
<li><code>spec.ephemeralContainers[*].securityContext.capabilities.drop</code></li>
|
||||||
|
</ul>
|
||||||
|
<p><strong>Allowed Values</strong></p>
|
||||||
|
<ul>
|
||||||
|
<li>Any list of capabilities that includes <code>ALL</code></li>
|
||||||
|
</ul>
|
||||||
|
<hr />
|
||||||
|
<p><strong>Restricted Fields</strong></p>
|
||||||
|
<ul>
|
||||||
|
<li><code>spec.containers[*].securityContext.capabilities.add</code></li>
|
||||||
|
<li><code>spec.initContainers[*].securityContext.capabilities.add</code></li>
|
||||||
|
<li><code>spec.ephemeralContainers[*].securityContext.capabilities.add</code></li>
|
||||||
|
</ul>
|
||||||
|
<p><strong>Allowed Values</strong></p>
|
||||||
|
<ul>
|
||||||
|
<li>Undefined/nil</li>
|
||||||
|
<li><code>NET_BIND_SERVICE</code></li>
|
||||||
|
</ul>
|
||||||
|
</td>
|
||||||
|
</tr>
|
||||||
</tbody>
|
</tbody>
|
||||||
</table>
|
</table>
|
||||||
|
|
||||||
|
@ -284,6 +389,14 @@ Kubernetesでは、Linuxベースのワークロードと比べてWindowsの使
|
||||||
特に、PodのSecurityContextフィールドは[Windows環境では効果がありません](/ja/docs/setup/production-environment/windows/intro-windows-in-kubernetes/#v1-podsecuritycontext)。
|
特に、PodのSecurityContextフィールドは[Windows環境では効果がありません](/ja/docs/setup/production-environment/windows/intro-windows-in-kubernetes/#v1-podsecuritycontext)。
|
||||||
したがって、現段階では標準化されたセキュリティポリシーは存在しません。
|
したがって、現段階では標準化されたセキュリティポリシーは存在しません。
|
||||||
|
|
||||||
|
Windows Podに制限付きプロファイルを適用すると、実行時にPodに影響が出る場合があります。
|
||||||
|
制限付きプロファイルでは、Linux固有の制限(seccompプロファイルや特権昇格の不許可など)を適用する必要があります。
|
||||||
|
kubeletおよび/またはそのコンテナランタイムがこれらのLinux固有の値を無視した場合、Windows Podは制限付きプロファイル内で正常に動作します。
|
||||||
|
ただし、強制力がないため、Windows コンテナを使用するPodについては、ベースラインプロファイルと比較して追加の制限はありません。
|
||||||
|
|
||||||
|
HostProcess Podを作成するためのHostProcessフラグの使用は、特権的なポリシーに沿ってのみ行われるべきです。
|
||||||
|
Windows HostProcess Podの作成は、ベースラインおよび制限されたポリシーの下でブロックされているため、いかなるHostProcess Podも特権的であるとみなされるべきです。
|
||||||
|
|
||||||
### サンドボックス化されたPodはどのように扱えばよいでしょうか?
|
### サンドボックス化されたPodはどのように扱えばよいでしょうか?
|
||||||
|
|
||||||
現在のところ、Podがサンドボックス化されていると見なされるかどうかを制御できるAPI標準はありません。
|
現在のところ、Podがサンドボックス化されていると見なされるかどうかを制御できるAPI標準はありません。
|
||||||
|
|
|
@ -0,0 +1,101 @@
|
||||||
|
---
title: Topology Aware Hints
content_type: concept
weight: 45
---


<!-- overview -->

{{< feature-state for_k8s_version="v1.23" state="beta" >}}

_Topology Aware Hints_ enable topology aware routing by including suggestions for how clients should consume endpoints. This approach adds metadata to EndpointSlice and/or Endpoints objects, so that consumers of those objects can route traffic to these network endpoints closer to where it originated.

For example, you can route traffic within a locality to reduce costs, or to improve network performance.

<!-- body -->
## Motivation

Kubernetes clusters are increasingly deployed in multi-zone environments. _Topology Aware Hints_ provide a mechanism to help keep traffic within the zone it originated from; this concept is commonly referred to as "Topology Aware Routing". When calculating the endpoints for a {{< glossary_tooltip term_id="Service" >}}, the EndpointSlice controller considers the topology (region and zone) of each endpoint and populates the hints field to allocate it to a zone. Cluster components such as {{< glossary_tooltip term_id="kube-proxy" text="kube-proxy" >}} can then consume those hints and use them to influence how traffic is routed (favoring topologically closer endpoints).

## Using Topology Aware Hints

You can activate Topology Aware Hints for a Service by setting the `service.kubernetes.io/topology-aware-hints` annotation to `auto`. This tells the EndpointSlice controller to set topology hints if it is deemed safe; see the example command below.
Importantly, this does not guarantee that hints will always be set.
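For instance, the annotation can be applied with `kubectl`; the Service name `example-svc` is only a placeholder:

```shell
kubectl annotate service example-svc service.kubernetes.io/topology-aware-hints=auto
```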
## How it works {#implementation}

The functionality that enables this feature is split into two components: the EndpointSlice controller and kube-proxy. This section gives a high-level overview of how each component implements it.

### EndpointSlice controller {#implementation-control-plane}

When this feature is enabled, the EndpointSlice controller is responsible for setting hints on EndpointSlices.
The controller allocates a proportional amount of endpoints to each zone.
This proportion is based on the [allocatable](/ja/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable) CPU cores of nodes running in that zone.

For example, if one zone has 2 CPU cores and another zone has only 1 CPU core, the controller allocates twice as many endpoints to the zone with 2 CPU cores.

The following example shows what an EndpointSlice looks like when hints have been populated:

```yaml
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: example-hints
  labels:
    kubernetes.io/service-name: example-svc
addressType: IPv4
ports:
  - name: http
    protocol: TCP
    port: 80
endpoints:
  - addresses:
      - "10.1.2.3"
    conditions:
      ready: true
    hostname: pod-1
    zone: zone-a
    hints:
      forZones:
        - name: "zone-a"
```
### kube-proxy {#implementation-kube-proxy}

kube-proxy filters the endpoints it routes to based on the hints set by the EndpointSlice controller. In most cases, this means that kube-proxy is able to route traffic to endpoints in the same zone. Sometimes the controller allocates endpoints from a different zone to ensure more even distribution of endpoints between zones; this results in some traffic being routed to other zones.

## Safeguards

The Kubernetes control plane and the kube-proxy on each node apply some safeguard rules before using Topology Aware Hints. If these don't check out, kube-proxy selects endpoints from anywhere in the cluster, regardless of zone.

1. **Insufficient number of endpoints:** If there are fewer endpoints than zones in a cluster, the controller does not assign any hints.

2. **Impossible to achieve balanced allocation:** In some cases, it is not possible to achieve a balanced allocation of endpoints between zones. For example, if zone-a is twice as large as zone-b, but there are only 2 endpoints, an endpoint allocated to zone-a may receive twice as much traffic as zone-b. The controller does not assign hints if it can't get this "expected overload" value below an acceptable threshold for each zone. Importantly, this is not based on real-time feedback; it is still possible for individual endpoints to become overloaded.

3. **One or more nodes has insufficient information:** If any node does not have a `topology.kubernetes.io/zone` label or is not reporting a value for allocatable CPU, the control plane does not set any topology-aware endpoint hints and so kube-proxy does not filter endpoints by zone.

4. **One or more endpoints does not have a zone hint:** When this happens, kube-proxy assumes that a transition from or to Topology Aware Hints is underway. Filtering endpoints for a Service in this state would be dangerous, so kube-proxy falls back to using all endpoints.

5. **A zone is not represented in hints:** If kube-proxy is unable to find at least one endpoint with a hint targeting the zone it is running in, it falls back to using endpoints from all zones. This is most likely to happen when a new zone is added to an existing cluster.

## Constraints

* Topology Aware Hints are not used when either `externalTrafficPolicy` or `internalTrafficPolicy` is set to `Local` on a Service. It is possible to use both features in the same cluster on different Services, just not on the same Service.

* This approach does not work well for Services that have a large proportion of traffic originating from a subset of zones. Instead, it assumes that incoming traffic is roughly proportional to the capacity of the nodes in each zone.

* The EndpointSlice controller ignores unready nodes as it calculates the proportions of each zone. This could have unintended consequences if a large portion of nodes are unready.

* The EndpointSlice controller does not take into account {{< glossary_tooltip text="tolerations" term_id="toleration" >}} when deploying or calculating the proportions of each zone. If the Pods backing a Service are limited to a subset of nodes in the cluster, this is not taken into account.

* This may not work well with autoscaling. For example, if a lot of traffic originates from a single zone, only the endpoints allocated to that zone will handle that traffic. That could result in the {{< glossary_tooltip text="Horizontal Pod Autoscaler" term_id="horizontal-pod-autoscaler" >}} either not picking up on this event, or newly added Pods starting in a different zone.

## {{% heading "whatsnext" %}}

* Read [Connecting Applications with Services](/ja/docs/concepts/services-networking/connect-applications-service/)
@ -0,0 +1,20 @@
|
||||||
|
---
title: ReplicaSet
id: replica-set
date: 2018-04-12
full_link: /ja/docs/concepts/workloads/controllers/replicaset/
short_description: >
  A ReplicaSet ensures that a specified number of Pod replicas are running at one time.

aka:
tags:
- fundamental
- core-object
- workload
---
A ReplicaSet (aims to) maintain a set of replica Pods running at any given time.

<!--more-->

Workload objects such as {{< glossary_tooltip term_id="deployment" >}} make use of ReplicaSets to ensure that the configured number of {{< glossary_tooltip term_id="pod" text="Pods" >}} are running in your cluster, based on the spec of that ReplicaSet.
@ -0,0 +1,5 @@
|
||||||
|
---
title: Scheduling
weight: 70
toc-hide: true
---
@ -0,0 +1,387 @@
|
||||||
|
---
|
||||||
|
title: スケジューラーの設定
|
||||||
|
content_type: concept
|
||||||
|
weight: 20
|
||||||
|
---
|
||||||
|
|
||||||
|
{{< feature-state for_k8s_version="v1.19" state="beta" >}}
|
||||||
|
|
||||||
|
設定ファイルを作成し、そのパスをコマンドライン引数として渡すことで`kube-scheduler`の振る舞いをカスタマイズすることができます。
|
||||||
|
|
||||||
|
|
||||||
|
<!-- overview -->
|
||||||
|
|
||||||
|
<!-- body -->
|
||||||
|
|
||||||
|
スケジューリングプロファイルは、{{< glossary_tooltip text="kube-scheduler" term_id="kube-scheduler" >}}でスケジューリングの異なるステージを設定することができます。
|
||||||
|
各ステージは、拡張点に公開されています。プラグインをそれらの拡張点に1つ以上実装することで、スケジューリングの振る舞いを変更できます。
|
||||||
|
|
||||||
|
KubeSchedulerConfiguration([`v1beta2`](/docs/reference/config-api/kube-scheduler-config.v1beta2/)か[`v1beta3`](/docs/reference/config-api/kube-scheduler-config.v1beta3/))構造体を使用して、`kube-scheduler --config <filename>`を実行することで、スケジューリングプロファイルを指定することができます。
|
||||||
|
|
||||||
|
最小限の設定は次の通りです。
|
||||||
|
|
||||||
|
```yaml
|
||||||
|
apiVersion: kubescheduler.config.k8s.io/v1beta2
|
||||||
|
kind: KubeSchedulerConfiguration
|
||||||
|
clientConnection:
|
||||||
|
kubeconfig: /etc/srv/kubernetes/kube-scheduler/kubeconfig
|
||||||
|
```
|
||||||
|
|
||||||
|
## プロファイル
|
||||||
|
|
||||||
|
スケジューリングプロファイルは、{{< glossary_tooltip text="kube-scheduler" term_id="kube-scheduler" >}}でスケジューリングの異なるステージを設定することができます。
|
||||||
|
各ステージは[拡張点](#extension-points)に公開されています。
|
||||||
|
[プラグイン](#scheduling-plugins)をそれらの拡張点に1つ以上実装することで、スケジューリングの振る舞いを変更できます。
|
||||||
|
|
||||||
|
単一の`kube-scheduler`インスタンスで[複数のプロファイル](#multiple-profiles)を実行するように設定することも可能です。
|
||||||
|
|
||||||
|
### 拡張点 {#extension-points}
|
||||||
|
|
||||||
|
スケジューリングは一連のステージで行われ、以下の拡張点に公開されています。
|
||||||
|
|
||||||
|
1. `queueSort`: これらのプラグインは、スケジューリングキューにある`pending`状態のPodをソートするための順序付け関数を提供します。同時に有効化できるプラグインは1つだけです。
|
||||||
|
1. `preFilter`: これらのプラグインは、フィルタリングをする前にPodやクラスターの情報のチェックや前処理のために使用されます。これらのプラグインは、設定された順序で呼び出されます。
|
||||||
|
1. `filter`: これらのプラグインは、スケジューリングポリシーにおけるPredicatesに相当するもので、Podの実行不可能なNodeをフィルターするために使用されます。もし全てのNodeがフィルターされてしまった場合、Podはunschedulableとしてマークされます。
|
||||||
|
1. `postFilter`:これらのプラグインは、Podの実行可能なNodeが見つからなかった場合、設定された順序で呼び出されます。もし`postFilter`プラグインのいずれかが、Podを __スケジュール可能__ とマークした場合、残りの`postFilter`プラグインは呼び出されません。
|
||||||
|
1. `preScore`: これは、スコアリング前の作業を行う際に使用できる情報提供のための拡張点です。
|
||||||
|
1. `score`: これらのプラグインはフィルタリングフェーズを通過してきたそれぞれのNodeに対してスコア付けを行います。その後スケジューラーは、最も高い重み付きスコアの合計を持つノードを選択します。
|
||||||
|
1. `reserve`: これは、指定されたPodのためにリソースが予約された際に、プラグインに通知する、情報提供のための拡張点です。また、プラグインは`Reserve`中に失敗した際、または`Reserve`の後に呼び出される`Unreserve`も実装しています。
|
||||||
|
1. `permit`: これらのプラグインではPodのバインディングを拒む、または遅延させることができます。
|
||||||
|
1. `preBind`: これらのプラグインは、Podがバインドされる前に必要な処理を実行できます。
|
||||||
|
1. `bind`: これらのプラグインはPodをNodeにバインドします。`bind`プラグインは順番に呼び出され、1つのプラグインがバインドを完了すると、残りのプラグインはスキップされます。`bind`プラグインは少なくとも1つは必要です。
|
||||||
|
1. `postBind`: これは、Podがバインドされた後に呼び出される情報提供のための拡張点です。
|
||||||
|
1. `multiPoint`: このフィールドは設定のみ可能で、プラグインが適用されるすべての拡張点に対して同時に有効化または無効化することができます。
|
||||||
|
|
||||||
|
次の例のように、それぞれの拡張点に対して、特定の[デフォルトプラグイン](#scheduling-plugins)を無効化、または自作のプラグインを有効化することができます。
|
||||||
|
|
||||||
|
```yaml
|
||||||
|
apiVersion: kubescheduler.config.k8s.io/v1beta2
|
||||||
|
kind: KubeSchedulerConfiguration
|
||||||
|
profiles:
|
||||||
|
- plugins:
|
||||||
|
score:
|
||||||
|
disabled:
|
||||||
|
- name: PodTopologySpread
|
||||||
|
enabled:
|
||||||
|
- name: MyCustomPluginA
|
||||||
|
weight: 2
|
||||||
|
- name: MyCustomPluginB
|
||||||
|
weight: 1
|
||||||
|
```
|
||||||
|
|
||||||
|
`disabled`配列の`name`フィールドに`*`を使用することで、その拡張点の全てのデフォルトプラグインを無効化できます。また、必要に応じてプラグインの順序を入れ替える場合にも使用されます。
|
||||||
|
|
||||||
|
### Scheduling plugins {#scheduling-plugins}
|
||||||
|
|
||||||
|
以下のプラグインはデフォルトで有効化されており、1つ以上の拡張点に実装されています。
|
||||||
|
|
||||||
|
- `ImageLocality`:Podが実行するコンテナイメージを既に持っているNodeを優先します。
|
||||||
|
拡張点:`score`
|
||||||
|
- `TaintToleration`:[TaintsとTolerations](/ja/docs/concepts/scheduling-eviction/taint-and-toleration/)を実行します。
|
||||||
|
実装する拡張点:`filter`、`preScore`、`score`
|
||||||
|
- `NodeName`: PodのSpecのNode名が、現在のNodeと一致するかをチェックします。
|
||||||
|
拡張点:`filter`
|
||||||
|
- `NodePorts`:要求されたPodのポートに対して、Nodeが空きポートを持っているかチェックします。
|
||||||
|
拡張点:`preFilter`、`filter`
|
||||||
|
- `NodeAffinity`:[nodeselectors](/ja/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector)と[Nodeアフィニティ](/ja/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity)を実行します。
|
||||||
|
拡張点:`filter`、`score`
|
||||||
|
- `PodTopologySpread`:[Podトポロジーの分散制約](/docs/concepts/workloads/pods/pod-topology-spread-constraints/)を実行します。
|
||||||
|
拡張点:`preFilter`、`filter`、`preScore`、`score`
|
||||||
|
- `NodeUnschedulable`:`.spec.unschedulable`がtrueに設定されているNodeをフィルタリングします。
|
||||||
|
拡張点:`filter`.
|
||||||
|
- `NodeResourcesFit`:Podが要求しているすべてのリソースがNodeにあるかをチェックします。スコアは3つのストラテジのうちの1つを使用します:`LeastAllocated`(デフォルト)、`MostAllocated`、 と`RequestedToCapacityRatio`
|
||||||
|
拡張点:`preFilter`、`filter`、`score`
|
||||||
|
- `NodeResourcesBalancedAllocation`:Podがスケジュールされた場合に、よりバランスの取れたリソース使用量となるNodeを優先します。
|
||||||
|
拡張点:`score`
|
||||||
|
- `VolumeBinding`:Nodeが、要求された{{< glossary_tooltip text="ボリューム" term_id="volume" >}}を持っている、もしくはバインドしているかチェックします。
|
||||||
|
拡張点:`preFilter`、`filter`、`reserve`、`preBind`、`score`
|
||||||
|
{{< note >}}
|
||||||
|
`score`拡張点は、`VolumeCapacityPriority`機能が有効になっている時に有効化されます。
|
||||||
|
要求されたボリュームに適合する最小のPVを優先的に使用します。
|
||||||
|
{{< /note >}}
|
||||||
|
- `VolumeRestrictions`:Nodeにマウントされたボリュームが、ボリュームプロバイダ固有の制限を満たしているかを確認します。
|
||||||
|
拡張点:`filter`
|
||||||
|
- `VolumeZone`:要求されたボリュームがゾーン要件を満たしているかどうかを確認します。
|
||||||
|
拡張点:`filter`
|
||||||
|
- `NodeVolumeLimits`:NodeのCSIボリューム制限を満たすかどうかをチェックします。
|
||||||
|
拡張点:`filter`
|
||||||
|
- `EBSLimits`:NodeのAWSのEBSボリューム制限を満たすかどうかをチェックします。
|
||||||
|
拡張点:`filter`
|
||||||
|
- `GCEPDLimits`:NodeのGCP-PDボリューム制限を満たすかどうかをチェックします。
|
||||||
|
拡張点:`filter`
|
||||||
|
- `AzureDiskLimits`:NodeのAzureディスクボリューム制限を満たすかどうかをチェックします。
|
||||||
|
拡張点:`filter`
|
||||||
|
- `InterPodAffinity`:[Pod間のaffinityとanti-affinity](/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity)を実行します。
|
||||||
|
拡張点:`preFilter`、`filter`、`preScore`、`score`
|
||||||
|
- `PrioritySort`:デフォルトの優先順位に基づくソートを提供します。
|
||||||
|
拡張点:`queueSort`.
|
||||||
|
- `DefaultBinder`:デフォルトのバインディングメカニズムを提供します。
|
||||||
|
拡張点:`bind`
|
||||||
|
- `DefaultPreemption`:デフォルトのプリエンプションメカニズムを提供します。
|
||||||
|
拡張点:`postFilter`
|
||||||
|
|
||||||
|
また、コンポーネント設定のAPIにより、以下のプラグインを有効にすることができます。
|
||||||
|
デフォルトでは有効になっていません。
|
||||||
|
|
||||||
|
- `SelectorSpread`:{{< glossary_tooltip text="サービス" term_id="service" >}}と{{< glossary_tooltip text="レプリカセット" term_id="replica-set" >}}、{{< glossary_tooltip text="ステートフルセット" term_id="statefulset" >}}、に属するPodのNode間の拡散を優先します。
|
||||||
|
拡張点:`preScore`、`score`
|
||||||
|
- `CinderLimits`:Nodeが[`OpenStack Cinder`](https://docs.openstack.org/cinder/)ボリューム制限を満たせるかチェックします。
|
||||||
|
拡張点:`filter`
|
||||||
|
|
||||||
|
### 複数のプロファイル {#multiple-profiles}
|
||||||
|
|
||||||
|
`kube-scheduler`は複数のプロファイルを実行するように設定することができます。
|
||||||
|
各プロファイルは関連するスケジューラー名を持ち、その[拡張点](#extension-points)に異なるプラグインを設定することが可能です。
|
||||||
|
|
||||||
|
以下のサンプル設定では、スケジューラーは2つのプロファイルで実行されます。1つはデフォルトプラグインで、もう1つはすべてのスコアリングプラグインを無効にしたものです。
|
||||||
|
|
||||||
|
```yaml
|
||||||
|
apiVersion: kubescheduler.config.k8s.io/v1beta2
|
||||||
|
kind: KubeSchedulerConfiguration
|
||||||
|
profiles:
|
||||||
|
- schedulerName: default-scheduler
|
||||||
|
- schedulerName: no-scoring-scheduler
|
||||||
|
plugins:
|
||||||
|
preScore:
|
||||||
|
disabled:
|
||||||
|
- name: '*'
|
||||||
|
score:
|
||||||
|
disabled:
|
||||||
|
- name: '*'
|
||||||
|
```
|
||||||
|
|
||||||
|
特定のプロファイルに従ってスケジュールさせたいPodは、その`.spec.schedulerName`に、対応するスケジューラー名を含めることができます。
|
||||||
|
|
||||||
|
デフォルトでは、スケジューラー名`default-scheduler`としてプロファイルが生成されます。
|
||||||
|
このプロファイルは、上記のデフォルトプラグインを含みます。複数のプロファイルを宣言する場合は、それぞれユニークなスケジューラー名にする必要があります。
|
||||||
|
|
||||||
|
もしPodがスケジューラー名を指定しない場合、kube-apiserverは`default-scheduler`を設定します。
|
||||||
|
従って、これらのPodをスケジュールするために、このスケジューラー名を持つプロファイルが存在する必要があります。
|
||||||
|
|
||||||
|
{{< note >}}
|
||||||
|
Podのスケジューリングイベントには、ReportingControllerとして`.spec.schedulerName`が設定されています。
|
||||||
|
リーダー選出のイベントには、リスト先頭のプロファイルのスケジューラー名が使用されます。
|
||||||
|
{{< /note >}}
|
||||||
|
|
||||||
|
{{< note >}}
|
||||||
|
すべてのプロファイルは、`queueSort`拡張点で同じプラグインを使用し、同じ設定パラメーターを持つ必要があります (該当する場合)。これは、pending状態のPodキューがスケジューラーに1つしかないためです。
|
||||||
|
{{< /note >}}
|
||||||
|
|
||||||
|
### 複数の拡張点に適用されるプラグイン {#multipoint}
|
||||||
|
|
||||||
|
`kubescheduler.config.k8s.io/v1beta3`からは、プロファイル設定に`multiPoint`というフィールドが追加され、複数の拡張点でプラグインを簡単に有効・無効化できるようになりました。
|
||||||
|
`multiPoint`設定の目的は、カスタムプロファイルを使用する際に、ユーザーや管理者が必要とする設定を簡素化することです。
|
||||||
|
|
||||||
|
`MyPlugin`というプラグインがあり、`preScore`、`score`、`preFilter`、`filter`拡張点を実装しているとします。
|
||||||
|
すべての利用可能な拡張点で`MyPlugin`を有効化するためには、プロファイル設定は次のようにします。
|
||||||
|
|
||||||
|
```yaml
|
||||||
|
apiVersion: kubescheduler.config.k8s.io/v1beta3
|
||||||
|
kind: KubeSchedulerConfiguration
|
||||||
|
profiles:
|
||||||
|
- schedulerName: multipoint-scheduler
|
||||||
|
plugins:
|
||||||
|
multiPoint:
|
||||||
|
enabled:
|
||||||
|
- name: MyPlugin
|
||||||
|
```
|
||||||
|
|
||||||
|
これは以下のように、`MyPlugin`を手動ですべての拡張ポイントに対して有効にすることと同じです。
|
||||||
|
|
||||||
|
```yaml
|
||||||
|
apiVersion: kubescheduler.config.k8s.io/v1beta3
|
||||||
|
kind: KubeSchedulerConfiguration
|
||||||
|
profiles:
|
||||||
|
- schedulerName: non-multipoint-scheduler
|
||||||
|
plugins:
|
||||||
|
preScore:
|
||||||
|
enabled:
|
||||||
|
- name: MyPlugin
|
||||||
|
score:
|
||||||
|
enabled:
|
||||||
|
- name: MyPlugin
|
||||||
|
preFilter:
|
||||||
|
enabled:
|
||||||
|
- name: MyPlugin
|
||||||
|
filter:
|
||||||
|
enabled:
|
||||||
|
- name: MyPlugin
|
||||||
|
```
|
||||||
|
|
||||||
|
`multiPoint`を使用する利点の一つは、将来的に`MyPlugin`が別の拡張点を実装した場合に、`multiPoint`設定が自動的に新しい拡張点に対しても有効化されることです。
|
||||||
|
|
||||||
|
特定の拡張点は、その拡張点の`disabled`フィールドを使用して、`MultiPoint`の展開から除外することができます。
|
||||||
|
これは、デフォルトのプラグインを無効にしたり、デフォルト以外のプラグインを無効にしたり、ワイルドカード(`'*'`)を使ってすべてのプラグインを無効にしたりする場合に有効です。
|
||||||
|
`Score`と`PreScore`を無効にするためには、次の例のようにします。
|
||||||
|
|
||||||
|
```yaml
|
||||||
|
apiVersion: kubescheduler.config.k8s.io/v1beta3
|
||||||
|
kind: KubeSchedulerConfiguration
|
||||||
|
profiles:
|
||||||
|
- schedulerName: non-multipoint-scheduler
|
||||||
|
plugins:
|
||||||
|
multiPoint:
|
||||||
|
enabled:
|
||||||
|
- name: 'MyPlugin'
|
||||||
|
preScore:
|
||||||
|
disabled:
|
||||||
|
- name: '*'
|
||||||
|
score:
|
||||||
|
disabled:
|
||||||
|
- name: '*'
|
||||||
|
```
|
||||||
|
|
||||||
|
`v1beta3`では、`MultiPoint`を通じて、内部的に全ての[デフォルトプラグイン](#scheduling-plugins)が有効化されています。
|
||||||
|
しかしながら、デフォルト値(並び順やスコアの重みなど)を柔軟に設定し直せるように、個別の拡張点は用意されています。
|
||||||
|
例えば、2つのスコアプラグイン`DefaultScore1`と`DefaultScore2`に、重み1が設定されているとします。
|
||||||
|
その場合、次のように重さを変更し、並べ替えることができます。
|
||||||
|
|
||||||
|
```yaml
|
||||||
|
apiVersion: kubescheduler.config.k8s.io/v1beta3
|
||||||
|
kind: KubeSchedulerConfiguration
|
||||||
|
profiles:
|
||||||
|
- schedulerName: multipoint-scheduler
|
||||||
|
plugins:
|
||||||
|
score:
|
||||||
|
enabled:
|
||||||
|
- name: 'DefaultScore2'
|
||||||
|
weight: 5
|
||||||
|
```
|
||||||
|
|
||||||
|
この例では、`MultiPoint`はデフォルトプラグインであるため、明示的にプラグイン名を指定する必要はありません。
|
||||||
|
そして、`Score`に指定されているプラグインは`DefaultScore2`のみです。
|
||||||
|
これは、特定の拡張点を通じて設定されたプラグインは、常に`MultiPoint`プラグインよりも優先されるためです。つまり、この設定例では、結果的に2つのプラグインを両方指定することなく、並び替えが行えます。
|
||||||
|
|
||||||
|
`MultiPoint`プラグインを設定する際の一般的な優先順位は、以下の通りです。
|
||||||
|
1. 特定の拡張点が最初に実行され、その設定は他の場所で設定されたものよりも優先される
|
||||||
|
2. `MultiPoint`を使用して、手動で設定したプラグインとその設定内容
|
||||||
|
3. デフォルトプラグインとそのデフォルト設定
|
||||||
|
|
||||||
|
上記の優先順位を示すために、次の例はこれらのプラグインをベースにします。
|
||||||
|
|
||||||
|
|プラグイン|拡張点|
|
||||||
|
|---|---|
|
||||||
|
|`DefaultQueueSort`|`QueueSort`|
|
||||||
|
|`CustomQueueSort`|`QueueSort`|
|
||||||
|
|`DefaultPlugin1`|`Score`, `Filter`|
|
||||||
|
|`DefaultPlugin2`|`Score`|
|
||||||
|
|`CustomPlugin1`|`Score`, `Filter`|
|
||||||
|
|`CustomPlugin2`|`Score`, `Filter`|
|
||||||
|
|
||||||
|
これらのプラグインの有効な設定例は次の通りです。
|
||||||
|
|
||||||
|
```yaml
|
||||||
|
apiVersion: kubescheduler.config.k8s.io/v1beta3
|
||||||
|
kind: KubeSchedulerConfiguration
|
||||||
|
profiles:
|
||||||
|
- schedulerName: multipoint-scheduler
|
||||||
|
plugins:
|
||||||
|
multiPoint:
|
||||||
|
enabled:
|
||||||
|
- name: 'CustomQueueSort'
|
||||||
|
- name: 'CustomPlugin1'
|
||||||
|
weight: 3
|
||||||
|
- name: 'CustomPlugin2'
|
||||||
|
disabled:
|
||||||
|
- name: 'DefaultQueueSort'
|
||||||
|
filter:
|
||||||
|
disabled:
|
||||||
|
- name: 'DefaultPlugin1'
|
||||||
|
score:
|
||||||
|
enabled:
|
||||||
|
- name: 'DefaultPlugin2'
|
||||||
|
```
|
||||||
|
|
||||||
|
なお、特定の拡張点に`MultiPoint`プラグインを再宣言しても、エラーにはなりません。
|
||||||
|
特定の拡張点が優先されるため、再宣言は無視されます(ログは記録されます)。
|
||||||
|
|
||||||
|
|
||||||
|
このサンプルは、ほとんどのコンフィグを一箇所にまとめるだけでなく、いくつかの工夫をしています。
|
||||||
|
* カスタムの`queueSort`プラグインを有効にし、デフォルトのプラグインを無効にする。
|
||||||
|
* `CustomPlugin1`と`CustomPlugin2`を有効にし、この拡張点のプラグイン内で、最初に実行されるようにする。
|
||||||
|
* `filter`拡張点でのみ、`DefaultPlugin1`を無効にする。
|
||||||
|
* `score`拡張点で`DefaultPlugin2`が最初に実行されるように並べ替える(カスタムプラグインより先に)。
|
||||||
|
|
||||||
|
`v1beta3`以前のバージョンで、`multiPoint`がない場合、上記の設定例は、次のものと同等になります。
|
||||||
|
|
||||||
|
```yaml
|
||||||
|
apiVersion: kubescheduler.config.k8s.io/v1beta2
|
||||||
|
kind: KubeSchedulerConfiguration
|
||||||
|
profiles:
|
||||||
|
- schedulerName: multipoint-scheduler
|
||||||
|
plugins:
|
||||||
|
|
||||||
|
# デフォルトQueueSortプラグインを無効化
|
||||||
|
queueSort:
|
||||||
|
enabled:
|
||||||
|
- name: 'CustomQueueSort'
|
||||||
|
disabled:
|
||||||
|
- name: 'DefaultQueueSort'
|
||||||
|
|
||||||
|
# カスタムFilterプラグインを有効化
|
||||||
|
filter:
|
||||||
|
enabled:
|
||||||
|
- name: 'CustomPlugin1'
|
||||||
|
- name: 'CustomPlugin2'
|
||||||
|
- name: 'DefaultPlugin2'
|
||||||
|
disabled:
|
||||||
|
- name: 'DefaultPlugin1'
|
||||||
|
|
||||||
|
# カスタムScoreプラグインを有効化し、実行順を並べ替える
|
||||||
|
score:
|
||||||
|
enabled:
|
||||||
|
- name: 'DefaultPlugin2'
|
||||||
|
weight: 1
|
||||||
|
- name: 'DefaultPlugin1'
|
||||||
|
weight: 3
|
||||||
|
```
|
||||||
|
|
||||||
|
これは複雑な例ですが、`MultiPoint`設定の柔軟性と、拡張点を設定する既存の方法とのシームレスな統合を実証しています。
|
||||||
|
|
||||||
|
## スケジューラー設定の移行
|
||||||
|
|
||||||
|
{{< tabs name="tab_with_md" >}}
|
||||||
|
{{% tab name="v1beta1 → v1beta2" %}}
|
||||||
|
* With the v1beta2 configuration version, you can use a new score extension for the `NodeResourcesFit` plugin.
  This new extension combines the functionality of the `NodeResourcesLeastAllocated`, `NodeResourcesMostAllocated` and `RequestedToCapacityRatio` plugins.
  For example, if you previously used the `NodeResourcesMostAllocated` plugin, you would instead use the `NodeResourcesFit` plugin (enabled by default) and add a `pluginConfig` with a `scoringStrategy` similar to the following:
|
||||||
|
|
||||||
|
```yaml
|
||||||
|
apiVersion: kubescheduler.config.k8s.io/v1beta2
|
||||||
|
kind: KubeSchedulerConfiguration
|
||||||
|
profiles:
|
||||||
|
- pluginConfig:
|
||||||
|
- args:
|
||||||
|
scoringStrategy:
|
||||||
|
resources:
|
||||||
|
- name: cpu
|
||||||
|
weight: 1
|
||||||
|
type: MostAllocated
|
||||||
|
name: NodeResourcesFit
|
||||||
|
```
|
||||||
|
|
||||||
|
* スケジューラープラグインの`NodeLabel`は廃止されました。代わりに[`NodeAffinity`](/ja/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity)プラグイン(デフォルトで有効)を使用することで同様の振る舞いを実現できます。
|
||||||
|
|
||||||
|
* スケジューラープラグインの`ServiceAffinity`は廃止されました。代わりに[`InterPodAffinity`](/ja/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity)プラグイン(デフォルトで有効)を使用することで同様の振る舞いを実現できます。
|
||||||
|
|
||||||
|
* スケジューラープラグインの`NodePreferAvoidPods`は廃止されました。代わりに[Node taints](/ja/docs/concepts/scheduling-eviction/taint-and-toleration/)を使用することで同様の振る舞いを実現できます。
|
||||||
|
|
||||||
|
* v1beta2で有効化されたプラグインは、そのプラグインのデフォルトの設定より優先されます。
|
||||||
|
|
||||||
|
* スケジューラーのヘルスとメトリクスのバインドアドレスに設定されている`host`や`port`が無効な場合、バリデーションに失敗します。
|
||||||
|
{{% /tab %}}
|
||||||
|
|
||||||
|
{{% tab name="v1beta2 → v1beta3" %}}
|
||||||
|
* デフォルトで3つのプラグインの重みが増加しました。
|
||||||
|
* `InterPodAffinity`:1から2
|
||||||
|
* `NodeAffinity`:1から2
|
||||||
|
* `TaintToleration`:1から3
|
||||||
|
{{% /tab %}}
|
||||||
|
{{< /tabs >}}
|
||||||
|
|
||||||
|
## {{% heading "whatsnext" %}}
|
||||||
|
|
||||||
|
* [kube-schedulerリファレンス](/docs/reference/command-line-tools-reference/kube-scheduler/)を読む
|
||||||
|
* [scheduling](/ja/docs/concepts/scheduling-eviction/kube-scheduler/)について学ぶ
|
||||||
|
* [kube-scheduler設定(v1beta2)](/docs/reference/config-api/kube-scheduler-config.v1beta2/)のリファレンスを読む
|
||||||
|
* [kube-scheduler設定(v1beta3)](/docs/reference/config-api/kube-scheduler-config.v1beta3/)のリファレンスを読む
|
|
@ -0,0 +1,19 @@
|
||||||
|
---
title: Scheduling Policies
content_type: concept
sitemap:
  priority: 0.2 # Scheduling Policies are deprecated
---

<!-- overview -->

In Kubernetes versions before v1.23, a scheduling policy could be used to specify the *predicates* and *priorities* process. For example, you could set a scheduling policy by running `kube-scheduler --policy-config-file <filename>` or `kube-scheduler --policy-configmap <ConfigMap>`.

This scheduling policy is not supported since Kubernetes v1.23. The associated flags `policy-config-file`, `policy-configmap`, `policy-configmap-namespace` and `use-legacy-policy-config` are also not supported.
Instead, use the [Scheduler Configuration](/ja/docs/reference/scheduling/config/) to achieve similar behavior, as sketched below.
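As a rough sketch, behavior that used to be tuned through a policy file is now expressed in a `KubeSchedulerConfiguration`, for example by enabling or disabling plugins per extension point; the choice of plugin below is only illustrative:

```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta3
kind: KubeSchedulerConfiguration
profiles:
  - schedulerName: default-scheduler
    plugins:
      score:
        disabled:
          # Example: turn off one default scoring plugin for this profile.
          - name: PodTopologySpread
```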
## {{% heading "whatsnext" %}}

* Learn about [scheduling](/ja/docs/concepts/scheduling-eviction/kube-scheduler/)
* Learn about [kube-scheduler configuration](/ja/docs/reference/scheduling/config/)
* Read the [kube-scheduler configuration reference (v1beta3)](/docs/reference/config-api/kube-scheduler-config.v1beta3/)
@ -522,4 +522,4 @@ Minikubeの詳細については、[proposal](https://git.k8s.io/community/contr
|
||||||
|
|
||||||
## コミュニティ
|
## コミュニティ
|
||||||
|
|
||||||
コントリビューションや質問、コメントは歓迎・奨励されています! Minikubeの開発者は[Slack](https://kubernetes.slack.com)の`#minikube`チャンネルにいます(Slackへの招待状は[こちら](http://slack.kubernetes.io/))。[kubernetes-dev Google Groupsメーリングリスト](https://groups.google.com/forum/#!forum/kubernetes-dev)もあります。メーリングリストに投稿する際は件名の最初に "minikube: " をつけてください。
|
コントリビューションや質問、コメントは歓迎・奨励されています! Minikubeの開発者は[Slack](https://kubernetes.slack.com)の`#minikube`チャンネルにいます(Slackへの招待状は[こちら](http://slack.kubernetes.io/))。[dev@kubernetes Google Groupsメーリングリスト](https://groups.google.com/a/kubernetes.io/g/dev/)もあります。メーリングリストに投稿する際は件名の最初に "minikube: " をつけてください。
|
||||||
|
|
|
@ -5,12 +5,14 @@ metadata:
|
||||||
labels:
|
labels:
|
||||||
app: mysql
|
app: mysql
|
||||||
data:
|
data:
|
||||||
master.cnf: |
|
primary.cnf: |
|
||||||
# Apply this config only on the master.
|
# Apply this config only on the primary.
|
||||||
[mysqld]
|
[mysqld]
|
||||||
log-bin
|
log-bin
|
||||||
slave.cnf: |
|
datadir=/var/lib/mysql/mysql
|
||||||
# Apply this config only on slaves.
|
replica.cnf: |
|
||||||
|
# Apply this config only on replicas.
|
||||||
[mysqld]
|
[mysqld]
|
||||||
super-read-only
|
super-read-only
|
||||||
|
datadir=/var/lib/mysql/mysql
|
||||||
|
|
||||||
|
|
|
@ -21,6 +21,7 @@ content_type: concept
|
||||||
* [Canal](https://github.com/tigera/canal/tree/master/k8s-install)은 Flannel과 Calico를 통합하여 네트워킹 및 네트워크 폴리시를 제공한다.
|
* [Canal](https://github.com/tigera/canal/tree/master/k8s-install)은 Flannel과 Calico를 통합하여 네트워킹 및 네트워크 폴리시를 제공한다.
|
||||||
* [Cilium](https://github.com/cilium/cilium)은 L3 네트워크 및 네트워크 폴리시 플러그인으로 HTTP/API/L7 폴리시를 투명하게 시행할 수 있다. 라우팅 및 오버레이/캡슐화 모드를 모두 지원하며, 다른 CNI 플러그인 위에서 작동할 수 있다.
|
* [Cilium](https://github.com/cilium/cilium)은 L3 네트워크 및 네트워크 폴리시 플러그인으로 HTTP/API/L7 폴리시를 투명하게 시행할 수 있다. 라우팅 및 오버레이/캡슐화 모드를 모두 지원하며, 다른 CNI 플러그인 위에서 작동할 수 있다.
|
||||||
* [CNI-Genie](https://github.com/Huawei-PaaS/CNI-Genie)를 사용하면 쿠버네티스는 Calico, Canal, Flannel, Romana 또는 Weave와 같은 CNI 플러그인을 완벽하게 연결할 수 있다.
|
* [CNI-Genie](https://github.com/Huawei-PaaS/CNI-Genie)를 사용하면 쿠버네티스는 Calico, Canal, Flannel, Romana 또는 Weave와 같은 CNI 플러그인을 완벽하게 연결할 수 있다.
|
||||||
|
* [Contiv](https://contivpp.io/)는 다양한 유스케이스와 풍부한 폴리시 프레임워크를 위해 구성 가능한 네트워킹(BGP를 사용하는 네이티브 L3, vxlan을 사용하는 오버레이, 클래식 L2 그리고 Cisco-SDN/ACI)을 제공한다. Contiv 프로젝트는 완전히 [오픈소스](https://github.com/contiv)이다. [인스톨러](https://github.com/contiv/install)는 kubeadm을 이용하거나, 그렇지 않은 경우에 대해서도 설치 옵션을 모두 제공한다.
|
||||||
* [Contrail](https://www.juniper.net/us/en/products-services/sdn/contrail/contrail-networking/)은 [Tungsten Fabric](https://tungsten.io)을 기반으로 하며, 오픈소스이고, 멀티 클라우드 네트워크 가상화 및 폴리시 관리 플랫폼이다. Contrail과 Tungsten Fabric은 쿠버네티스, OpenShift, OpenStack 및 Mesos와 같은 오케스트레이션 시스템과 통합되어 있으며, 가상 머신, 컨테이너/파드 및 베어 메탈 워크로드에 대한 격리 모드를 제공한다.
|
* [Contrail](https://www.juniper.net/us/en/products-services/sdn/contrail/contrail-networking/)은 [Tungsten Fabric](https://tungsten.io)을 기반으로 하며, 오픈소스이고, 멀티 클라우드 네트워크 가상화 및 폴리시 관리 플랫폼이다. Contrail과 Tungsten Fabric은 쿠버네티스, OpenShift, OpenStack 및 Mesos와 같은 오케스트레이션 시스템과 통합되어 있으며, 가상 머신, 컨테이너/파드 및 베어 메탈 워크로드에 대한 격리 모드를 제공한다.
|
||||||
* [Flannel](https://github.com/flannel-io/flannel#deploying-flannel-manually)은 쿠버네티스와 함께 사용할 수 있는 오버레이 네트워크 제공자이다.
|
* [Flannel](https://github.com/flannel-io/flannel#deploying-flannel-manually)은 쿠버네티스와 함께 사용할 수 있는 오버레이 네트워크 제공자이다.
|
||||||
* [Knitter](https://github.com/ZTE/Knitter/)는 쿠버네티스 파드에서 여러 네트워크 인터페이스를 지원하는 플러그인이다.
|
* [Knitter](https://github.com/ZTE/Knitter/)는 쿠버네티스 파드에서 여러 네트워크 인터페이스를 지원하는 플러그인이다.
|
||||||
|
@ -29,7 +30,7 @@ content_type: concept
|
||||||
* [OVN4NFV-K8S-Plugin](https://github.com/opnfv/ovn4nfv-k8s-plugin)은 OVN 기반의 CNI 컨트롤러 플러그인으로 클라우드 네이티브 기반 서비스 기능 체인(Service function chaining(SFC)), 다중 OVN 오버레이 네트워킹, 동적 서브넷 생성, 동적 가상 네트워크 생성, VLAN 공급자 네트워크, 직접 공급자 네트워크와 멀티 클러스터 네트워킹의 엣지 기반 클라우드 등 네이티브 워크로드에 이상적인 멀티 네티워크 플러그인이다.
|
* [OVN4NFV-K8S-Plugin](https://github.com/opnfv/ovn4nfv-k8s-plugin)은 OVN 기반의 CNI 컨트롤러 플러그인으로 클라우드 네이티브 기반 서비스 기능 체인(Service function chaining(SFC)), 다중 OVN 오버레이 네트워킹, 동적 서브넷 생성, 동적 가상 네트워크 생성, VLAN 공급자 네트워크, 직접 공급자 네트워크와 멀티 클러스터 네트워킹의 엣지 기반 클라우드 등 네이티브 워크로드에 이상적인 멀티 네티워크 플러그인이다.
|
||||||
* [NSX-T](https://docs.vmware.com/en/VMware-NSX-T/2.0/nsxt_20_ncp_kubernetes.pdf) 컨테이너 플러그인(NCP)은 VMware NSX-T와 쿠버네티스와 같은 컨테이너 오케스트레이터 간의 통합은 물론 NSX-T와 PKS(Pivotal 컨테이너 서비스) 및 OpenShift와 같은 컨테이너 기반 CaaS/PaaS 플랫폼 간의 통합을 제공한다.
|
* [NSX-T](https://docs.vmware.com/en/VMware-NSX-T/2.0/nsxt_20_ncp_kubernetes.pdf) 컨테이너 플러그인(NCP)은 VMware NSX-T와 쿠버네티스와 같은 컨테이너 오케스트레이터 간의 통합은 물론 NSX-T와 PKS(Pivotal 컨테이너 서비스) 및 OpenShift와 같은 컨테이너 기반 CaaS/PaaS 플랫폼 간의 통합을 제공한다.
|
||||||
* [Nuage](https://github.com/nuagenetworks/nuage-kubernetes/blob/v5.1.1-1/docs/kubernetes-1-installation.rst)는 가시성과 보안 모니터링 기능을 통해 쿠버네티스 파드와 비-쿠버네티스 환경 간에 폴리시 기반 네트워킹을 제공하는 SDN 플랫폼이다.
|
* [Nuage](https://github.com/nuagenetworks/nuage-kubernetes/blob/v5.1.1-1/docs/kubernetes-1-installation.rst)는 가시성과 보안 모니터링 기능을 통해 쿠버네티스 파드와 비-쿠버네티스 환경 간에 폴리시 기반 네트워킹을 제공하는 SDN 플랫폼이다.
|
||||||
* [Romana](https://romana.io)는 [네트워크폴리시 API](/ko/docs/concepts/services-networking/network-policies/)도 지원하는 파드 네트워크용 Layer 3 네트워킹 솔루션이다. Kubeadm 애드온 설치에 대한 세부 정보는 [여기](https://github.com/romana/romana/tree/master/containerize)에 있다.
|
* [Romana](https://github.com/romana/romana)는 [네트워크폴리시 API](/ko/docs/concepts/services-networking/network-policies/)도 지원하는 파드 네트워크용 Layer 3 네트워킹 솔루션이다. Kubeadm 애드온 설치에 대한 세부 정보는 [여기](https://github.com/romana/romana/tree/master/containerize)에 있다.
|
||||||
* [Weave Net](https://www.weave.works/docs/net/latest/kubernetes/kube-addon/)은 네트워킹 및 네트워크 폴리시를 제공하고, 네트워크 파티션의 양면에서 작업을 수행하며, 외부 데이터베이스는 필요하지 않다.
|
* [Weave Net](https://www.weave.works/docs/net/latest/kubernetes/kube-addon/)은 네트워킹 및 네트워크 폴리시를 제공하고, 네트워크 파티션의 양면에서 작업을 수행하며, 외부 데이터베이스는 필요하지 않다.
|
||||||
|
|
||||||
## 서비스 검색
|
## 서비스 검색
|
||||||
|
|
|
@ -6,7 +6,7 @@ weight: 15
|
||||||
|
|
||||||
<!-- overview -->
|
<!-- overview -->
|
||||||
`kubectl` 커맨드라인 툴은 쿠버네티스 오브젝트를 생성하고 관리하기 위한
|
`kubectl` 커맨드라인 툴은 쿠버네티스 오브젝트를 생성하고 관리하기 위한
|
||||||
몇 가지 상이한 방법을 지원한다. 이 문서는 여러가지 접근법에 대한 개요을
|
몇 가지 상이한 방법을 지원한다. 이 문서는 여러가지 접근법에 대한 개요를
|
||||||
제공한다. Kubectl로 오브젝트 관리하기에 대한 자세한 설명은
|
제공한다. Kubectl로 오브젝트 관리하기에 대한 자세한 설명은
|
||||||
[Kubectl 서적](https://kubectl.docs.kubernetes.io)에서 확인한다.
|
[Kubectl 서적](https://kubectl.docs.kubernetes.io)에서 확인한다.
|
||||||
|
|
||||||
|
|
|
@ -8,3 +8,13 @@ menu:
|
||||||
post: >
|
post: >
|
||||||
<p>阅读关于 kubernetes 和容器规范的最新信息,以及获取最新的技术。</p>
|
<p>阅读关于 kubernetes 和容器规范的最新信息,以及获取最新的技术。</p>
|
||||||
---
|
---
|
||||||
|
|
||||||
|
{{< comment >}}
|
||||||
|
|
||||||
|
<!-- For information about contributing to the blog, see
|
||||||
|
https://kubernetes.io/docs/contribute/new-content/blogs-case-studies/#write-a-blog-post -->
|
||||||
|
|
||||||
|
有关为博客提供内容的信息,请参见
|
||||||
|
https://kubernetes.io/zh/docs/contribute/new-content/blogs-case-studies/#write-a-blog-post
|
||||||
|
|
||||||
|
{{< /comment >}}
|
|
@ -0,0 +1,180 @@
|
||||||
|
---
|
||||||
|
layout: blog
|
||||||
|
title: "关注 SIG Node"
|
||||||
|
date: 2021-09-27
|
||||||
|
slug: sig-node-spotlight-2021
|
||||||
|
---
|
||||||
|
<!--
|
||||||
|
---
|
||||||
|
layout: blog
|
||||||
|
title: "Spotlight on SIG Node"
|
||||||
|
date: 2021-09-27
|
||||||
|
slug: sig-node-spotlight-2021
|
||||||
|
---
|
||||||
|
-->
|
||||||
|
**Author:** Dewan Ahmed, Red Hat
|
||||||
|
<!--
|
||||||
|
**Author:** Dewan Ahmed, Red Hat
|
||||||
|
-->
|
||||||
|
|
||||||
|
<!--
|
||||||
|
## Introduction
|
||||||
|
|
||||||
|
In Kubernetes, a _Node_ is a representation of a single machine in your cluster. [SIG Node](https://github.com/kubernetes/community/tree/master/sig-node) owns that very important Node component and supports various subprojects such as Kubelet, Container Runtime Interface (CRI) and more to support how the pods and host resources interact. In this blog, we have summarized our conversation with [Elana Hashman (EH)](https://twitter.com/ehashdn) & [Sergey Kanzhelev (SK)](https://twitter.com/SergeyKanzhelev), who walk us through the various aspects of being a part of the SIG and share some insights about how others can get involved.
|
||||||
|
-->
|
||||||
|
|
||||||
|
## 介绍
|
||||||
|
|
||||||
|
在 Kubernetes 中,_Node_(节点)是对集群中单台机器的抽象表示。
|
||||||
|
[SIG Node](https://github.com/kubernetes/community/tree/master/sig-node) 负责这一非常重要的 Node 组件并支持各种子项目,
|
||||||
|
如 Kubelet, Container Runtime Interface (CRI) 以及其他支持 Pod 和主机资源间交互的子项目。
|
||||||
|
在这篇文章中,我们总结了和 [Elana Hashman (EH)](https://twitter.com/ehashdn) & [Sergey Kanzhelev (SK)](https://twitter.com/SergeyKanzhelev) 的对话,是他们带领我们了解作为此 SIG 一份子的各个方面,并分享一些关于其他人如何参与的见解。
|
||||||
|
|
||||||
|
<!--
|
||||||
|
## A summary of our conversation
|
||||||
|
|
||||||
|
### Could you tell us a little about what SIG Node does?
|
||||||
|
|
||||||
|
SK: SIG Node is a vertical SIG responsible for the components that support the controlled interactions between the pods and host resources. We manage the lifecycle of pods that are scheduled to a node. This SIG's focus is to enable a broad set of workload types, including workloads with hardware specific or performance sensitive requirements. All while maintaining isolation boundaries between pods on a node, as well as the pod and the host. This SIG maintains quite a few components and has many external dependencies (like container runtimes or operating system features), which makes the complexity we deal with huge. We tame the complexity and aim to continuously improve node reliability.
|
||||||
|
-->
|
||||||
|
## 我们的对话总结
|
||||||
|
|
||||||
|
### 你能告诉我们一些关于 SIG Node 的工作吗?
|
||||||
|
|
||||||
|
SK:SIG Node 是一个垂直 SIG,负责支持 Pod 和主机资源之间受控互动的组件。我们管理被调度到节点上的 Pod 的生命周期。
|
||||||
|
这个 SIG 的重点是支持广泛的工作负载类型,包括具有硬件特性或性能敏感要求的工作负载。同时保持节点上 Pod 之间的隔离边界,以及 Pod 和主机的隔离边界。
|
||||||
|
这个 SIG 维护了相当多的组件,并有许多外部依赖(如容器运行时间或操作系统功能),这使得我们处理起来十分复杂。但我们战胜了这种复杂度,旨在不断提高节点的可靠性。
|
||||||
|
|
||||||
|
<!--
|
||||||
|
### "SIG Node is a vertical SIG" could you explain a bit more?
|
||||||
|
|
||||||
|
EH: There are two kinds of SIGs: horizontal and vertical. Horizontal SIGs are concerned with a particular function of every component in Kubernetes: for example, SIG Security considers security aspects of every component in Kubernetes, or SIG Instrumentation looks at the logs, metrics, traces and events of every component in Kubernetes. Such SIGs don't tend to own a lot of code.
|
||||||
|
|
||||||
|
Vertical SIGs, on the other hand, own a single component, and are responsible for approving and merging patches to that code base. SIG Node owns the "Node" vertical, pertaining to the kubelet and its lifecycle. This includes the code for the kubelet itself, as well as the node controller, the container runtime interface, and related subprojects like the node problem detector.
|
||||||
|
-->
|
||||||
|
### 你能再解释一下 “SIG Node 是一种垂直 SIG” 的含义吗?
|
||||||
|
|
||||||
|
EH:有两种 SIG:横向和垂直。横向 SIG 关注 Kubernetes 中每个组件的特定功能:例如,SIG Security 考虑 Kubernetes 中每个组件的安全方面,或者 SIG Instrumentation 关注 Kubernetes 中每个组件的日志、度量、跟踪和事件。
|
||||||
|
这样的 SIG 并不太会拥有大量的代码。
|
||||||
|
|
||||||
|
相反,垂直 SIG 拥有一个单一的组件,并负责批准和合并该代码库的补丁。
|
||||||
|
SIG Node 拥有 "Node" 的垂直性,与 kubelet 和它的生命周期有关。这包括 kubelet 本身的代码,以及节点控制器、容器运行时接口和相关的子项目,比如节点问题检测器。
|
||||||
|
|
||||||
|
<!--
|
||||||
|
### How did the CI subproject start? Is this specific to SIG Node and how does it help the SIG?
|
||||||
|
|
||||||
|
SK: The subproject started as a follow up after one of the releases was blocked by numerous test failures of critical tests. These tests haven’t started falling all at once, rather continuous lack of attention led to slow degradation of tests quality. SIG Node was always prioritizing quality and reliability, and forming of the subproject was a way to highlight this priority.
|
||||||
|
-->
|
||||||
|
### CI 子项目是如何开始的?这是专门针对 SIG Node 的吗?它对 SIG 有什么帮助?
|
||||||
|
|
||||||
|
SK:该子项目是在某个版本因大量关键测试失败而受阻之后,作为后续行动启动的。
|
||||||
|
这些测试并不是一下子全部开始失败的,而是长期缺乏关注导致了测试质量的缓慢下降。
|
||||||
|
SIG Node 一直将质量和可靠性放在首位,组建这个子项目是强调这一优先事项的一种方式。
|
||||||
|
|
||||||
|
<!--
|
||||||
|
### As the 3rd largest SIG in terms of number of issues and PRs, how does your SIG juggle so much work?
|
||||||
|
|
||||||
|
EH: It helps to be organized. When I increased my contributions to the SIG in January of 2021, I found myself overwhelmed by the volume of pull requests and issues and wasn't sure where to start. We were already tracking test-related issues and pull requests on the CI subproject board, but that was missing a lot of our bugfixes and feature work. So I began putting together a triage board for the rest of our pull requests, which allowed me to sort each one by status and what actions to take, and documented its use for other contributors. We closed or merged over 500 issues and pull requests tracked by our two boards in each of the past two releases. The Kubernetes devstats showed that we have significantly increased our velocity as a result.
|
||||||
|
|
||||||
|
In June, we ran our first bug scrub event to work through the backlog of issues filed against SIG Node, ensuring they were properly categorized. We closed over 130 issues over the course of this 48 hour global event, but as of writing we still have 333 open issues.
|
||||||
|
-->
|
||||||
|
### 作为 issue 和 PR 数量第三大的 SIG,你们 SIG 是如何兼顾这么多工作的?
|
||||||
|
|
||||||
|
EH:这归功于有组织性。当我在 2021 年 1 月增加对 SIG 的贡献时,我发现自己被大量的 PR 和 issue 淹没了,不知道该从哪里开始。
|
||||||
|
我们已经在 CI 子项目板上跟踪与测试有关的 issue 和 PR 请求,但这缺少了很多 bug 修复和功能工作。
|
||||||
|
因此,我开始为我们剩余的 PR 建立一个分流板,这使我能够根据状态和采取的行动对其进行分类,并为其他贡献者记录它的用途。
|
||||||
|
在过去的两个版本中,我们每个版本都关闭或合并了这两个看板所跟踪的 500 多个 issue 和 PR。Kubernetes devstats 显示,我们的速度因此而大大提升。
|
||||||
|
|
||||||
|
6月,我们进行了第一次 bug 清除活动,以解决针对 SIG Node 的积压问题,确保它们被正确归类。
|
||||||
|
在这次 48 小时的全球活动中,我们关闭了 130 多个问题,但截至发稿时,我们仍有 333 个问题没有解决。
|
||||||
|
<!--
|
||||||
|
### Why should new and existing contributors consider joining SIG Node?
|
||||||
|
|
||||||
|
SK: Being a SIG Node contributor gives you skills and recognition that are rewarding and useful. Understanding under the hood of a kubelet helps architecting better apps, tune and optimize those apps, and gives leg up in issues troubleshooting. If you are a new contributor, SIG Node gives you the foundational knowledge that is key to understanding why other Kubernetes components are designed the way they are. Existing contributors may benefit as many features will require SIG Node changes one way or another. So being a SIG Node contributor helps building features in other SIGs faster.
|
||||||
|
|
||||||
|
SIG Node maintains numerous components, many of which have dependency on external projects or OS features. This makes the onboarding process quite lengthy and demanding. But if you are up for a challenge, there is always a place for you, and a group of people to support.
|
||||||
|
-->
|
||||||
|
### 为什么新的和现有的贡献者应该考虑加入 Node 兴趣小组呢?
|
||||||
|
|
||||||
|
SK:作为 SIG Node 的贡献者会带给你有意义且有用的技能和认可度。
|
||||||
|
了解 Kubelet 的内部结构有助于构建更好的应用程序,调整和优化这些应用程序,并在 issue 排查上获得优势。
|
||||||
|
如果你是一个新手贡献者,SIG Node 为你提供了基础知识,这是理解其他 Kubernetes 组件的设计方式的关键。
|
||||||
|
对现有贡献者而言,许多特性都或多或少需要 SIG Node 的改动,因此成为 SIG Node 的贡献者有助于更快地在其他 SIG 中构建特性。
|
||||||
|
|
||||||
|
SIG Node 维护着许多组件,其中许多组件都依赖于外部项目或操作系统功能。这使得入职过程相当冗长和苛刻。
|
||||||
|
但如果你愿意接受挑战,总有一个地方适合你,也有一群人支持你。
|
||||||
|
<!--
|
||||||
|
### What do you do to help new contributors get started?
|
||||||
|
|
||||||
|
EH: Getting started in SIG Node can be intimidating, since there is so much work to be done, our SIG meetings are very large, and it can be hard to find a place to start.
|
||||||
|
|
||||||
|
I always encourage new contributors to work on things that they have some investment in already. In SIG Node, that might mean volunteering to help fix a bug that you have personally been affected by, or helping to triage bugs you care about by priority.
|
||||||
|
|
||||||
|
To come up to speed on any open source code base, there are two strategies you can take: start by exploring a particular issue deeply, and follow that to expand the edges of your knowledge as needed, or briefly review as many issues and change requests as you possibly can to get a higher level picture of how the component works. Ultimately, you will need to do both if you want to become a Node reviewer or approver.
|
||||||
|
|
||||||
|
[Davanum Srinivas](https://twitter.com/dims) and I each ran a cohort of group mentoring to help teach new contributors the skills to become Node reviewers, and if there's interest we can work to find a mentor to run another session. I also encourage new contributors to attend our Node CI Subproject meeting: it's a smaller audience and we don't record the triage sessions, so it can be a less intimidating way to get started with the SIG.
|
||||||
|
-->
|
||||||
|
### 你是如何帮助新手贡献者开始工作的?
|
||||||
|
|
||||||
|
EH:在 SIG Node 的起步工作可能是令人生畏的,因为有太多的工作要做,我们的 SIG 会议非常大,而且很难找到一个开始的地方。
|
||||||
|
|
||||||
|
我总是鼓励新手贡献者在他们已经有一些投入的方向上更进一步。
|
||||||
|
在 SIG Node 中,这可能意味着自愿帮助修复一个只影响到你个人的 bug,或者按优先级去分流你关心的 bug。
|
||||||
|
|
||||||
|
为了尽快熟悉任何开源代码库,你可以采取两种策略:从深入探索一个特定的问题开始,再根据需要扩展你的知识边界;或者尽可能多地粗略审查 issue 和变更请求,以便从更高层次了解组件的工作方式。
|
||||||
|
最终,如果你想成为一名 Node reviewer 或 approver,这两种策略都是必需的。
|
||||||
|
|
||||||
|
[Davanum Srinivas](https://twitter.com/dims) 和我各自举办了一次小组辅导,以帮助教导新手贡献者成为 Node reviewer 的技能,如果有兴趣,我们可以努力寻找一个导师来举办另一次会议。
|
||||||
|
我也鼓励新手贡献者参加我们的 Node CI 子项目会议:它的听众较少,而且我们不记录分流会议,所以它可以是一个比较温和的方式来开始 SIG 之旅。
|
||||||
|
<!--
|
||||||
|
### Are there any particular skills you’d like to recruit for? What skills are contributors to SIG Usability likely to learn?
|
||||||
|
|
||||||
|
SK: SIG Node works on many workstreams in very different areas. All of these areas are on system level. For the typical code contributions you need to have a passion for building and utilizing low level APIs and writing performant and reliable components. Being a contributor you will learn how to debug and troubleshoot, profile, and monitor these components, as well as user workload that is run by these components. Often, with the limited to no access to Nodes, as they are running production workloads.
|
||||||
|
|
||||||
|
The other way of contribution is to help document SIG node features. This type of contribution requires a deep understanding of features, and ability to explain them in simple terms.
|
||||||
|
|
||||||
|
Finally, we are always looking for feedback on how best to run your workload. Come and explain specifics of it, and what features in SIG Node components may help to run it better.
|
||||||
|
-->
|
||||||
|
### 有什么特别的技能是你想招募的吗?对 SIG 可用性的贡献者可能会学到什么技能?
|
||||||
|
|
||||||
|
SK:SIG Node 的工作流横跨许多彼此差异很大的领域,而这些领域都处于系统层面。
|
||||||
|
对于典型的代码贡献,你需要对建立和善用低级别的 API 以及编写高性能和可靠的组件有热情。
|
||||||
|
作为一个贡献者,你将学习如何调试和排除故障,剖析和监控这些组件,以及由这些组件运行的用户工作负载。
|
||||||
|
通常情况下,由于节点正在运行生产工作负载,所以对节点的访问是有限的,甚至是没有的。
|
||||||
|
|
||||||
|
另一种贡献方式是帮助记录 SIG Node 的功能。这种类型的贡献需要对功能有深刻的理解,并有能力用简单的术语解释它们。
|
||||||
|
|
||||||
|
最后,我们一直在寻找关于如何最好地运行你的工作负载的反馈。来解释一下它的具体情况,以及 SIG Node 组件中的哪些功能可能有助于更好地运行它。
|
||||||
|
<!--
|
||||||
|
### What are you getting positive feedback on, and what’s coming up next for SIG Node?
|
||||||
|
|
||||||
|
EH: Over the past year SIG Node has adopted some new processes to help manage our feature development and Kubernetes enhancement proposals, and other SIGs have looked to us for inspiration in managing large workloads. I hope that this is an area we can continue to provide leadership in and further iterate on.
|
||||||
|
|
||||||
|
We have a great balance of new features and deprecations in flight right now. Deprecations of unused or difficult to maintain features help us keep technical debt and maintenance load under control, and examples include the dockershim and DynamicKubeletConfiguration deprecations. New features will unlock additional functionality in end users' clusters, and include exciting features like support for cgroups v2, swap memory, graceful node shutdowns, and device management policies.
|
||||||
|
-->
|
||||||
|
### 你在哪些方面得到了积极的反馈,以及 SIG Node 的下一步计划是什么?
|
||||||
|
|
||||||
|
EH:在过去的一年里,SIG Node 采用了一些新的流程来帮助管理我们的功能开发和 Kubernetes 增强提议,其他 SIG 也在如何管理大量工作方面向我们寻求灵感。
|
||||||
|
我希望这是一个我们可以继续领导并进一步迭代的领域。
|
||||||
|
|
||||||
|
现在,我们在新功能和废弃功能之间保持了很好的平衡。
|
||||||
|
废弃未使用或难以维护的功能有助于我们控制技术债务和维护负荷,例子包括 dockershim 和 DynamicKubeletConfiguration 的废弃。
|
||||||
|
新特性则会为最终用户的集群解锁更多能力,其中包括一些令人兴奋的功能,例如 cgroups v2 支持、交换内存、节点体面关闭以及设备管理策略。
|
||||||
|
<!--
|
||||||
|
### Any closing thoughts/resources you’d like to share?
|
||||||
|
|
||||||
|
SK/EH: It takes time and effort to get to any open source community. SIG Node may overwhelm you at first with the number of participants, volume of work, and project scope. But it is totally worth it. Join our welcoming community! [SIG Node GitHub Repo](https://github.com/kubernetes/community/tree/master/sig-node) contains many useful resources including Slack, mailing list and other contact info.
|
||||||
|
-->
|
||||||
|
### 最后你有什么想法/资源要分享吗?
|
||||||
|
|
||||||
|
SK/EH:进入任何开源社区都需要时间和努力。一开始 SIG Node 可能会因为参与者的数量、工作量和项目范围而让你不知所措。但这是完全值得的。
|
||||||
|
请加入我们这个热情的社区! [SIG Node GitHub Repo](https://github.com/kubernetes/community/tree/master/sig-node)包含许多有用的资源,包括 Slack、邮件列表和其他联系信息。
|
||||||
|
<!--
|
||||||
|
## Wrap Up
|
||||||
|
|
||||||
|
SIG Node hosted a [KubeCon + CloudNativeCon Europe 2021 talk](https://www.youtube.com/watch?v=z5aY4e2RENA) with an intro and deep dive to their awesome SIG. Join the SIG's meetings to find out about the most recent research results, what the plans are for the forthcoming year, and how to get involved in the upstream Node team as a contributor!
|
||||||
|
-->
|
||||||
|
## 总结
|
||||||
|
|
||||||
|
SIG Node 举办了一场 [KubeCon + CloudNativeCon Europe 2021 talk](https://www.youtube.com/watch?v=z5aY4e2RENA),对他们强大的 SIG 进行了介绍和深入探讨。
|
||||||
|
加入 SIG 的会议,了解最新的研究成果,未来一年的计划是什么,以及如何作为贡献者参与到上游的 Node 团队中!
|
|
@ -0,0 +1,197 @@
|
||||||
|
---
|
||||||
|
layout: blog
|
||||||
|
title: "认识我们的贡献者 - 亚太地区(印度地区)"
|
||||||
|
date: 2022-01-10T12:00:00+0000
|
||||||
|
slug: meet-our-contributors-india-ep-01
|
||||||
|
canonicalUrl: https://kubernetes.dev/blog/2022/01/10/meet-our-contributors-india-ep-01/
|
||||||
|
---
|
||||||
|
<!--
|
||||||
|
layout: blog
|
||||||
|
title: "Meet Our Contributors - APAC (India region)"
|
||||||
|
date: 2022-01-10T12:00:00+0000
|
||||||
|
slug: meet-our-contributors-india-ep-01
|
||||||
|
canonicalUrl: https://kubernetes.dev/blog/2022/01/10/meet-our-contributors-india-ep-01/
|
||||||
|
-->
|
||||||
|
|
||||||
|
<!--
|
||||||
|
**Authors & Interviewers:** [Anubhav Vardhan](https://github.com/anubha-v-ardhan), [Atharva Shinde](https://github.com/Atharva-Shinde), [Avinesh Tripathi](https://github.com/AvineshTripathi), [Debabrata Panigrahi](https://github.com/Debanitrkl), [Kunal Verma](https://github.com/verma-kunal), [Pranshu Srivastava](https://github.com/PranshuSrivastava), [Pritish Samal](https://github.com/CIPHERTron), [Purneswar Prasad](https://github.com/PurneswarPrasad), [Vedant Kakde](https://github.com/vedant-kakde)
|
||||||
|
-->
|
||||||
|
**作者和采访者:** [Anubhav Vardhan](https://github.com/anubha-v-ardhan) , [Atharva Shinde](https://github.com/Atharva-Shinde) , [Avinesh Tripathi](https://github.com/AvineshTripathi) , [Debabrata Panigrahi](https://github.com/Debanitrkl) , [Kunal Verma](https://github.com/verma-kunal) , [Pranshu Srivastava](https://github.com/PranshuSrivastava) , [Pritish Samal](https://github.com/CIPHERTron) , [Purneswar Prasad](https://github.com/PurneswarPrasad) , [Vedant Kakde](https://github.com/vedant-kakde)
|
||||||
|
|
||||||
|
<!--
|
||||||
|
**Editor:** [Priyanka Saggu](https://psaggu.com)
|
||||||
|
-->
|
||||||
|
**编辑:** [Priyanka Saggu](https://psaggu.com)
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
<!--
|
||||||
|
Good day, everyone 👋
|
||||||
|
-->
|
||||||
|
大家好 👋
|
||||||
|
|
||||||
|
<!--
|
||||||
|
Welcome to the first episode of the APAC edition of the "Meet Our Contributors" blog post series.
|
||||||
|
-->
|
||||||
|
欢迎来到亚太地区的“认识我们的贡献者”博文系列第一期。
|
||||||
|
|
||||||
|
|
||||||
|
<!--
|
||||||
|
In this post, we'll introduce you to five amazing folks from the India region who have been actively contributing to the upstream Kubernetes projects in a variety of ways, as well as being the leaders or maintainers of numerous community initiatives.
|
||||||
|
-->
|
||||||
|
在这篇文章中,我们将向您介绍来自印度地区的五位优秀贡献者,他们一直在以各种方式积极地为上游 Kubernetes 项目做贡献,同时也是众多社区倡议的领导者和维护者。
|
||||||
|
|
||||||
|
<!--
|
||||||
|
💫 *Let's get started, so without further ado…*
|
||||||
|
-->
|
||||||
|
💫 *闲话少说,我们开始吧。*
|
||||||
|
|
||||||
|
|
||||||
|
## [Arsh Sharma](https://github.com/RinkiyaKeDad)
|
||||||
|
|
||||||
|
<!--
|
||||||
|
Arsh is currently employed with Okteto as a Developer Experience engineer. As a new contributor, he realised that 1:1 mentorship opportunities were quite beneficial in getting him started with the upstream project.
|
||||||
|
-->
|
||||||
|
Arsh 目前在 Okteto 公司中担任开发者体验工程师职务。作为一名新的贡献者,他意识到一对一的指导机会让他在开始上游项目中受益匪浅。
|
||||||
|
|
||||||
|
<!--
|
||||||
|
He is presently a CI Signal shadow on the Kubernetes 1.23 release team. He is also contributing to the SIG Testing and SIG Docs projects, as well as to the [cert-manager](https://github.com/cert-manager/infrastructure) tools development work that is being done under the aegis of SIG Architecture.
|
||||||
|
-->
|
||||||
|
他目前是 Kubernetes 1.23 发布团队中 CI Signal 角色的影子成员(shadow)。他也在为 SIG Testing 和 SIG Docs 项目做贡献,并参与在 SIG Architecture 支持下开展的 [cert-manager](https://github.com/cert-manager/infrastructure) 工具开发工作。
|
||||||
|
|
||||||
|
<!--
|
||||||
|
To the newcomers, Arsh helps plan their early contributions sustainably.
|
||||||
|
-->
|
||||||
|
对于新人来说,Arsh 帮助他们可持续地计划早期贡献。
|
||||||
|
|
||||||
|
<!--
|
||||||
|
> _I would encourage folks to contribute in a way that's sustainable. What I mean by that
|
||||||
|
> is that it's easy to be very enthusiastic early on and take up more stuff than one can
|
||||||
|
> actually handle. This can often lead to burnout in later stages. It's much more sustainable
|
||||||
|
> to work on things iteratively._
|
||||||
|
-->
|
||||||
|
> _我鼓励大家以可持续的方式为社区做贡献。我的意思是,一个人很容易在早期的时候非常有热情,并且承担了很多超出个人实际能力的事情。这通常会导致后期的倦怠。迭代地处理事情会让大家对社区的贡献变得可持续。_
|
||||||
|
|
||||||
|
## [Kunal Kushwaha](https://github.com/kunal-kushwaha)
|
||||||
|
|
||||||
|
<!--
|
||||||
|
Kunal Kushwaha is a core member of the Kubernetes marketing council. He is also a CNCF ambassador and one of the founders of the [CNCF Students Program](https://community.cncf.io/cloud-native-students/).. He also served as a Communications role shadow during the 1.22 release cycle.
|
||||||
|
-->
|
||||||
|
Kunal Kushwaha 是 Kubernetes 营销委员会的核心成员。他也是 CNCF 大使,以及 [CNCF 学生计划](https://community.cncf.io/cloud-native-students/) 的创始人之一。他还在 1.22 版本周期中担任通信(Communications)角色的影子成员。
|
||||||
|
|
||||||
|
<!--
|
||||||
|
At the end of his first year, Kunal began contributing to the [fabric8io kubernetes-client](https://github.com/fabric8io/kubernetes-client) project. He was then selected to work on the same project as part of Google Summer of Code. Kunal mentored people on the same project, first through Google Summer of Code then through Google Code-in.
|
||||||
|
-->
|
||||||
|
在他的第一年结束时,Kunal 开始为 [fabric8io kubernetes-client](https://github.com/fabric8io/kubernetes-client) 项目做贡献。随后,他被选中在 Google Summer of Code 中继续参与同一项目。Kunal 先后通过 Google Summer of Code 和 Google Code-in,在该项目中指导过其他参与者。
|
||||||
|
|
||||||
|
<!--
|
||||||
|
As an open-source enthusiast, he believes that diverse participation in the community is beneficial since it introduces new perspectives and opinions and respect for one's peers. He has worked on various open-source projects, and his participation in communities has considerably assisted his development as a developer.
|
||||||
|
-->
|
||||||
|
作为一名开源爱好者,他坚信社区的多元化参与是非常有益的,因为这会引入新的视角和观点,并促进对同伴的尊重。他曾参与过各种开源项目,在这些社区中的参与对他作为开发者的成长帮助很大。
|
||||||
|
|
||||||
|
|
||||||
|
<!--
|
||||||
|
> _I believe if you find yourself in a place where you do not know much about the
|
||||||
|
> project, that's a good thing because now you can learn while contributing and the
|
||||||
|
> community is there to help you. It has helped me a lot in gaining skills, meeting
|
||||||
|
> people from around the world and also helping them. You can learn on the go,
|
||||||
|
> you don't have to be an expert. Make sure to also check out no code contributions
|
||||||
|
> because being a beginner is a skill and you can bring new perspectives to the
|
||||||
|
> organisation._
|
||||||
|
-->
|
||||||
|
> _我相信,如果你发现自己身处一个了解不多的项目,那是件好事,因为现在你可以一边贡献一边学习,而社区也会帮助你。这段经历帮助我获得了很多技能,认识了来自世界各地的人,也帮助了他们。你可以边做边学,不必一开始就是专家。也请务必关注非代码贡献,因为“初学者”本身就是一种技能,你可以为组织带来新的视角。_
|
||||||
|
|
||||||
|
## [Madhav Jivarajani](https://github.com/MadhavJivrajani)
|
||||||
|
|
||||||
|
|
||||||
|
<!--
|
||||||
|
Madhav Jivarajani works on the VMware Upstream Kubernetes stability team. He began contributing to the Kubernetes project in January 2021 and has since made significant contributions to several areas of work under SIG Architecture, SIG API Machinery, and SIG ContribEx (contributor experience).
|
||||||
|
-->
|
||||||
|
Madhav Jivarajani 在 VMware 上游 Kubernetes 稳定性团队工作。他于 2021 年 1 月开始为 Kubernetes 项目做贡献,此后在 SIG Architecture、SIG API Machinery 和 SIG ContribEx(贡献者经验)等项目的几个工作领域做出了重大贡献。
|
||||||
|
|
||||||
|
<!--
|
||||||
|
Among several significant contributions are his recent efforts toward the Archival of [design proposals](https://github.com/kubernetes/community/issues/6055), refactoring the ["groups" codebase](https://github.com/kubernetes/k8s.io/pull/2713) under k8s-infra repository to make it mockable and testable, and improving the functionality of the [GitHub k8s bot](https://github.com/kubernetes/test-infra/issues/23129).
|
||||||
|
-->
|
||||||
|
在这几个重要项目中,他最近致力于 [设计方案](https://github.com/kubernetes/community/issues/6055) 的存档工作,重构 k8s-infra 存储库下的 ["组"代码库](https://github.com/kubernetes/k8s.io/pull/2713) ,使其具有可模拟性和可测试性,以及改进 [GitHub k8s 机器人](https://github.com/kubernetes/test-infra/issues/23129) 的功能。
|
||||||
|
|
||||||
|
<!--
|
||||||
|
In addition to his technical efforts, Madhav oversees many projects aimed at assisting new contributors. He organises bi-weekly "KEP reading club" sessions to help newcomers understand the process of adding new features, deprecating old ones, and making other key changes to the upstream project. He has also worked on developing [Katacoda scenarios](https://github.com/kubernetes-sigs/contributor-katacoda) to assist new contributors to become acquainted with the process of contributing to k/k. In addition to his current efforts to meet with community members every week, he has organised several [new contributors workshops (NCW)](https://www.youtube.com/watch?v=FgsXbHBRYIc).
|
||||||
|
-->
|
||||||
|
除了在技术方面的贡献,Madhav 还负责许多旨在帮助新贡献者的项目。他每两周组织一次“KEP 阅读俱乐部”会议,帮助新人了解为上游项目添加新特性、弃用旧特性以及进行其他关键变更的流程。他还致力于开发 [Katacoda 场景](https://github.com/kubernetes-sigs/contributor-katacoda),帮助新贡献者熟悉为 k/k 做贡献的流程。除了目前每周与社区成员会面之外,他还组织了多场[新贡献者研讨会(NCW)](https://www.youtube.com/watch?v=FgsXbHBRYIc)。
|
||||||
|
|
||||||
|
<!--
|
||||||
|
> _I initially did not know much about Kubernetes. I joined because the community was
|
||||||
|
> super friendly. But what made me stay was not just the people, but the project itself.
|
||||||
|
> My solution to not feeling overwhelmed in the community was to gain as much context
|
||||||
|
> and knowledge into the topics that I was interested in and were being discussed. And
|
||||||
|
> as a result I continued to dig deeper into Kubernetes and the design of it.
|
||||||
|
> I am a systems nut & thus Kubernetes was an absolute goldmine for me._
|
||||||
|
-->
|
||||||
|
> _一开始我对 Kubernetes 了解并不多。我加入是因为这个社区超级友好。但让我留下来的不仅仅是人,还有项目本身。我应对“在社区中不知所措”的办法,是针对自己感兴趣且正在被讨论的主题,尽可能多地积累背景和知识。也因此,我不断深入研究 Kubernetes 及其设计。我是一个系统迷,Kubernetes 对我来说绝对是一个金矿。_
|
||||||
|
|
||||||
|
|
||||||
|
## [Rajas Kakodkar](https://github.com/rajaskakodkar)
|
||||||
|
|
||||||
|
<!--
|
||||||
|
Rajas Kakodkar currently works at VMware as a Member of Technical Staff. He has been engaged in many aspects of the upstream Kubernetes project since 2019.
|
||||||
|
-->
|
||||||
|
Rajas Kakodkar 目前在 VMware 担任 Member of Technical Staff(技术人员)。自 2019 年以来,他一直以多种方式参与上游 Kubernetes 项目。
|
||||||
|
|
||||||
|
<!--
|
||||||
|
He is now a key contributor to the Testing special interest group. He is also active in the SIG Network community. Lately, Rajas has contributed significantly to the [NetworkPolicy++](https://docs.google.com/document/d/1AtWQy2fNa4qXRag9cCp5_HsefD7bxKe3ea2RPn8jnSs/) and [`kpng`](https://github.com/kubernetes-sigs/kpng) sub-projects.
|
||||||
|
-->
|
||||||
|
他现在是 Testing 特别兴趣小组的关键贡献者。他还活跃在 SIG Network 社区。最近,Rajas 为 [NetworkPolicy++](https://docs.google.com/document/d/1AtWQy2fNa4qXRag9cCp5_HsefD7bxKe3ea2RPn8jnSs/) 和 [`kpng`](https://github.com/kubernetes-sigs/kpng) 子项目做出了重大贡献。
|
||||||
|
|
||||||
|
<!--
|
||||||
|
One of the first challenges he ran across was that he was in a different time zone than the upstream project's regular meeting hours. However, async interactions on community forums progressively corrected that problem.
|
||||||
|
-->
|
||||||
|
他遇到的第一个挑战是,他所处的时区与上游项目的日常会议时间不同。不过,社区论坛上的异步交互逐渐解决了这个问题。
|
||||||
|
|
||||||
|
<!--
|
||||||
|
> _I enjoy contributing to Kubernetes not just because I get to work on
|
||||||
|
> cutting edge tech but more importantly because I get to work with
|
||||||
|
> awesome people and help in solving real world problems._
|
||||||
|
-->
|
||||||
|
> _我喜欢为 kubernetes 做出贡献,不仅因为我可以从事尖端技术工作,更重要的是,我可以和优秀的人一起工作,并帮助解决现实问题。_
|
||||||
|
|
||||||
|
## [Rajula Vineet Reddy](https://github.com/rajula96reddy)
|
||||||
|
|
||||||
|
<!--
|
||||||
|
Rajula Vineet Reddy, a Junior Engineer at CERN, is a member of the Marketing Council team under SIG ContribEx . He also served as a release shadow for SIG Release during the 1.22 and 1.23 Kubernetes release cycles.
|
||||||
|
-->
|
||||||
|
Rajula Vineet Reddy,CERN 的初级工程师,是 SIG ContribEx 下营销委员会团队的成员。在 Kubernetes 1.22 和 1.23 版本周期中,他还在 SIG Release 中担任发布影子成员(release shadow)。
|
||||||
|
|
||||||
|
<!--
|
||||||
|
He started looking at the Kubernetes project as part of a university project with the help of one of his professors. Over time, he spent a significant amount of time reading the project's documentation, Slack discussions, GitHub issues, and blogs, which helped him better grasp the Kubernetes project and piqued his interest in contributing upstream. One of his key contributions was his assistance with automation in the SIG ContribEx Upstream Marketing subproject.
|
||||||
|
-->
|
||||||
|
在一位教授的帮助下,他开始把研究 Kubernetes 项目作为大学课程项目的一部分。随着时间的推移,他花费大量时间阅读该项目的文档、Slack 讨论、GitHub issue 和博客,这帮助他更好地理解了 Kubernetes 项目,并激发了他为上游做贡献的兴趣。他的主要贡献之一是在 SIG ContribEx 上游营销(Upstream Marketing)子项目中协助实现自动化。
|
||||||
|
|
||||||
|
<!--
|
||||||
|
According to Rajula, attending project meetings and shadowing various project roles are vital for learning about the community.
|
||||||
|
-->
|
||||||
|
Rajula 认为,参加项目会议并以影子身份参与各种项目角色,对于了解社区至关重要。
|
||||||
|
|
||||||
|
<!--
|
||||||
|
> _I find the community very helpful and it's always_
|
||||||
|
> “you get back as much as you contribute”.
|
||||||
|
> _The more involved you are, the more you will understand, get to learn and
|
||||||
|
> contribute new things._
|
||||||
|
>
|
||||||
|
> _The first step to_ “come forward and start” _is hard. But it's all gonna be
|
||||||
|
> smooth after that. Just take that jump._
|
||||||
|
-->
|
||||||
|
> _我发现社区非常有帮助,而且总是“你得到的回报和你贡献的一样多”。你参与得越多,你就越会了解、学习和贡献新东西。_
|
||||||
|
> _“挺身而出”的第一步是艰难的。但在那之后一切都会顺利的。勇敢地参与进来吧。_
|
||||||
|
---
|
||||||
|
|
||||||
|
<!--
|
||||||
|
If you have any recommendations/suggestions for who we should interview next, please let us know in #sig-contribex. We're thrilled to have other folks assisting us in reaching out to even more wonderful individuals of the community. Your suggestions would be much appreciated.
|
||||||
|
-->
|
||||||
|
如果您对我们下一步应该采访谁有任何意见/建议,请在 #sig-contribex 中告知我们。我们很高兴有其他人帮助我们接触社区中更优秀的人。我们将不胜感激。
|
||||||
|
|
||||||
|
|
||||||
|
<!--
|
||||||
|
We'll see you all in the next one. Everyone, till then, have a happy contributing! 👋
|
||||||
|
-->
|
||||||
|
我们下期见。最后,祝大家都能快乐地为社区做贡献!👋
|
||||||
|
|
|
@ -0,0 +1,301 @@
|
||||||
|
---
|
||||||
|
layout: blog
|
||||||
|
title: "Kubernetes 1.24 的删除和弃用"
|
||||||
|
date: 2022-04-07
|
||||||
|
slug: upcoming-changes-in-kubernetes-1-24
|
||||||
|
---
|
||||||
|
|
||||||
|
<!--
|
||||||
|
layout: blog
|
||||||
|
title: "Kubernetes Removals and Deprecations In 1.24"
|
||||||
|
date: 2022-04-07
|
||||||
|
slug: upcoming-changes-in-kubernetes-1-24
|
||||||
|
-->
|
||||||
|
|
||||||
|
<!--
|
||||||
|
**Author**: Mickey Boxell (Oracle)
|
||||||
|
|
||||||
|
As Kubernetes evolves, features and APIs are regularly revisited and removed. New features may offer
|
||||||
|
an alternative or improved approach to solving existing problems, motivating the team to remove the
|
||||||
|
old approach.
|
||||||
|
-->
|
||||||
|
**作者**: Mickey Boxell (Oracle)
|
||||||
|
|
||||||
|
随着 Kubernetes 的发展,特性和 API 会被定期重新审视,其中一些会被移除。
|
||||||
|
新特性可能会提供替代或改进的方法,来解决现有的问题,从而激励团队移除旧的方法。
|
||||||
|
|
||||||
|
<!--
|
||||||
|
We want to make sure you are aware of the changes coming in the Kubernetes 1.24 release. The release will
|
||||||
|
**deprecate** several (beta) APIs in favor of stable versions of the same APIs. The major change coming
|
||||||
|
in the Kubernetes 1.24 release is the
|
||||||
|
[removal of Dockershim](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/2221-remove-dockershim).
|
||||||
|
This is discussed below and will be explored in more depth at release time. For an early look at the
|
||||||
|
changes coming in Kubernetes 1.24, take a look at the in-progress
|
||||||
|
[CHANGELOG](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.md).
|
||||||
|
-->
|
||||||
|
我们希望确保你了解 Kubernetes 1.24 版本的变化。 该版本将 **弃用** 一些(测试版/beta)API,
|
||||||
|
转而支持相同 API 的稳定版本。 Kubernetes 1.24 版本的主要变化是
|
||||||
|
[移除 Dockershim](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/2221-remove-dockershim)。
|
||||||
|
这将在下面讨论,并将在发布时更深入地探讨。
|
||||||
|
要提前了解 Kubernetes 1.24 中的更改,请查看正在更新中的
|
||||||
|
[CHANGELOG](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.md)。
|
||||||
|
|
||||||
|
<!--
|
||||||
|
## A note about Dockershim
|
||||||
|
|
||||||
|
It's safe to say that the removal receiving the most attention with the release of Kubernetes 1.24
|
||||||
|
is Dockershim. Dockershim was deprecated in v1.20. As noted in the [Kubernetes 1.20 changelog](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md#deprecation):
|
||||||
|
"Docker support in the kubelet is now deprecated and will be removed in a future release. The kubelet
|
||||||
|
uses a module called "dockershim" which implements CRI support for Docker and it has seen maintenance
|
||||||
|
issues in the Kubernetes community." With the upcoming release of Kubernetes 1.24, the Dockershim will
|
||||||
|
finally be removed.
|
||||||
|
-->
|
||||||
|
## 关于 Dockershim {#a-note-about-dockershim}
|
||||||
|
|
||||||
|
可以肯定地说,随着 Kubernetes 1.24 的发布,最受关注的删除是 Dockershim。
|
||||||
|
Dockershim 在 1.20 版本中已被弃用。 如
|
||||||
|
[Kubernetes 1.20 变更日志](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md#deprecation)中所述:
|
||||||
|
"Docker support in the kubelet is now deprecated and will be removed in a future release. The kubelet
|
||||||
|
uses a module called "dockershim" which implements CRI support for Docker and it has seen maintenance
|
||||||
|
issues in the Kubernetes community."
|
||||||
|
随着即将发布的 Kubernetes 1.24,Dockershim 将最终被删除。
|
||||||
|
|
||||||
|
<!--
|
||||||
|
In the article [Don't Panic: Kubernetes and Docker](/blog/2020/12/02/dont-panic-kubernetes-and-docker/),
|
||||||
|
the authors succinctly captured the change's impact and encouraged users to remain calm:
|
||||||
|
> Docker as an underlying runtime is being deprecated in favor of runtimes that use the
|
||||||
|
> Container Runtime Interface (CRI) created for Kubernetes. Docker-produced images
|
||||||
|
> will continue to work in your cluster with all runtimes, as they always have.
|
||||||
|
-->
|
||||||
|
在文章[别慌: Kubernetes 和 Docker](/zh/blog/2020/12/02/dont-panic-kubernetes-and-docker/) 中,
|
||||||
|
作者简洁地记述了变化的影响,并鼓励用户保持冷静:
|
||||||
|
>弃用 Docker 这个底层运行时,转而支持符合为 Kubernetes 创建的容器运行接口
|
||||||
|
>Container Runtime Interface (CRI) 的运行时。
|
||||||
|
>Docker 构建的镜像,将在你的集群的所有运行时中继续工作,一如既往。
|
||||||
|
|
||||||
|
<!--
|
||||||
|
Several guides have been created with helpful information about migrating from dockershim
|
||||||
|
to container runtimes that are directly compatible with Kubernetes. You can find them on the
|
||||||
|
[Migrating from dockershim](/docs/tasks/administer-cluster/migrating-from-dockershim/)
|
||||||
|
page in the Kubernetes documentation.
|
||||||
|
-->
|
||||||
|
已经有一些文档指南,提供了关于从 dockershim 迁移到与 Kubernetes 直接兼容的容器运行时的有用信息。
|
||||||
|
你可以在 Kubernetes 文档中的[从 dockershim 迁移](/zh/docs/tasks/administer-cluster/migrating-from-dockershim/)
|
||||||
|
页面上找到它们。
|
||||||
|
|
||||||
|
<!--
|
||||||
|
For more information about why Kubernetes is moving away from dockershim, check out the aptly
|
||||||
|
named: [Kubernetes is Moving on From Dockershim](/blog/2022/01/07/kubernetes-is-moving-on-from-dockershim/)
|
||||||
|
and the [updated dockershim removal FAQ](/blog/2022/02/17/dockershim-faq/).
|
||||||
|
|
||||||
|
Take a look at the [Is Your Cluster Ready for v1.24?](/blog/2022/03/31/ready-for-dockershim-removal/) post to learn about how to ensure your cluster continues to work after upgrading from v1.23 to v1.24.
|
||||||
|
-->
|
||||||
|
有关 Kubernetes 为何不再使用 dockershim 的更多信息,
|
||||||
|
请参见:[Kubernetes 正在离开 Dockershim](/blog/2022/01/07/kubernetes-is-moving-on-from-dockershim/)
|
||||||
|
和[最新的弃用 Dockershim 的常见问题](/zh/blog/2022/02/17/dockershim-faq/)。
|
||||||
|
|
||||||
|
查看[你的集群准备好使用 v1.24 了吗?](/blog/2022/03/31/ready-for-dockershim-removal/) 一文,
|
||||||
|
了解如何确保你的集群在从 1.23 版本升级到 1.24 版本后继续工作。
|
||||||
|
|
||||||
|
<!--
|
||||||
|
## The Kubernetes API removal and deprecation process
|
||||||
|
|
||||||
|
Kubernetes contains a large number of components that evolve over time. In some cases, this
|
||||||
|
evolution results in APIs, flags, or entire features, being removed. To prevent users from facing
|
||||||
|
breaking changes, Kubernetes contributors adopted a feature [deprecation policy](/docs/reference/using-api/deprecation-policy/).
|
||||||
|
This policy ensures that stable APIs may only be deprecated when a newer stable version of that
|
||||||
|
same API is available and that APIs have a minimum lifetime as indicated by the following stability levels:
|
||||||
|
|
||||||
|
* Generally available (GA) or stable API versions may be marked as deprecated but must not be removed within a major version of Kubernetes.
|
||||||
|
* Beta or pre-release API versions must be supported for 3 releases after deprecation.
|
||||||
|
* Alpha or experimental API versions may be removed in any release without prior deprecation notice.
|
||||||
|
-->
|
||||||
|
## Kubernetes API 删除和弃用流程 {#the-Kubernetes-api-removal-and-deprecation-process}
|
||||||
|
|
||||||
|
Kubernetes 包含大量随时间演变的组件。在某些情况下,这种演变会导致 API、标志或整个特性被删除。
|
||||||
|
为了防止用户面对重大变化,Kubernetes 贡献者采用了一项特性[弃用策略](/zh/docs/reference/using-api/deprecation-policy/)。
|
||||||
|
此策略确保只有当同一 API 的较新稳定版本可用时,才可以弃用稳定的 API 版本,并且
|
||||||
|
各 API 版本都要满足以下稳定性级别所规定的最短生命周期:
|
||||||
|
|
||||||
|
* 正式发布 (GA) 或稳定的 API 版本可能被标记为已弃用,但不得在 Kubernetes 的主版本中删除。
|
||||||
|
* 测试版(beta)或预发布 API 版本在弃用后必须支持 3 个版本。
|
||||||
|
* Alpha 或实验性 API 版本可能会在任何版本中被删除,恕不另行通知。
|
||||||
|
|
||||||
|
<!--
|
||||||
|
Removals follow the same deprecation policy regardless of whether an API is removed due to a beta feature
|
||||||
|
graduating to stable or because that API was not proven to be successful. Kubernetes will continue to make
|
||||||
|
sure migration options are documented whenever APIs are removed.
|
||||||
|
-->
|
||||||
|
删除遵循相同的弃用政策,无论 API 是由于测试版(beta)功能逐渐稳定还是因为该
|
||||||
|
API 未被证明是成功的而被删除。
|
||||||
|
Kubernetes 将继续确保在删除 API 时提供用来迁移的文档。
|
||||||
|
|
||||||
|
<!--
|
||||||
|
**Deprecated** APIs are those that have been marked for removal in a future Kubernetes release. **Removed**
|
||||||
|
APIs are those that are no longer available for use in current, supported Kubernetes versions after having
|
||||||
|
been deprecated. These removals have been superseded by newer, stable/generally available (GA) APIs.
|
||||||
|
-->
|
||||||
|
**弃用的** API 是指那些已标记为在未来 Kubernetes 版本中被删除的 API。
|
||||||
|
**删除的** API 是指那些在被弃用后不再可用于当前受支持的 Kubernetes 版本的 API。
|
||||||
|
这些删除已被更新的、稳定的/普遍可用的 (GA) API 所取代。
|
||||||
|
|
||||||
|
<!--
|
||||||
|
## API removals, deprecations, and other changes for Kubernetes 1.24
|
||||||
|
|
||||||
|
* [Dynamic kubelet configuration](https://github.com/kubernetes/enhancements/issues/281): `DynamicKubeletConfig` is used to enable the dynamic configuration of the kubelet. The `DynamicKubeletConfig` flag was deprecated in Kubernetes 1.22. In v1.24, this feature gate will be removed from the kubelet. See [Reconfigure kubelet](/docs/tasks/administer-cluster/reconfigure-kubelet/). Refer to the ["Dynamic kubelet config is removed" KEP](https://github.com/kubernetes/enhancements/issues/281) for more information.
|
||||||
|
-->
|
||||||
|
## Kubernetes 1.24 的 API 删除、弃用和其他更改 {#api-removals-deprecations-and-other-changes-for-kubernetes-1.24}
|
||||||
|
|
||||||
|
* [动态 kubelet 配置](https://github.com/kubernetes/enhancements/issues/281): `DynamicKubeletConfig`
|
||||||
|
用于启用 kubelet 的动态配置。Kubernetes 1.22 中弃用 `DynamicKubeletConfig` 标志。
|
||||||
|
在 1.24 版本中,此特性门控将从 kubelet 中移除。请参阅[重新配置 kubelet](/docs/tasks/administer-cluster/reconfigure-kubelet/)。
|
||||||
|
更多详细信息,请参阅[“删除动态 kubelet 配置” 的 KEP](https://github.com/kubernetes/enhancements/issues/281)。
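With dynamic configuration removed, kubelet settings are managed through a local configuration file that the kubelet reads via its `--config` flag and that is rolled out by whatever node provisioning tooling is in use. A minimal sketch of such a file is shown below; the specific values are arbitrary examples, not recommendations:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Example tunables only; choose values appropriate for your nodes.
maxPods: 110
serializeImagePulls: false
evictionHard:
  memory.available: "200Mi"
```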
|
||||||
|
<!--
|
||||||
|
* [Dynamic log sanitization](https://github.com/kubernetes/kubernetes/pull/107207): The experimental dynamic log sanitization feature is deprecated and will be removed in v1.24. This feature introduced a logging filter that could be applied to all Kubernetes system components logs to prevent various types of sensitive information from leaking via logs. Refer to [KEP-1753: Kubernetes system components logs sanitization](https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/1753-logs-sanitization#deprecation) for more information and an [alternative approach](https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/1753-logs-sanitization#alternatives=).
|
||||||
|
-->
|
||||||
|
* [动态日志清洗](https://github.com/kubernetes/kubernetes/pull/107207):实验性的动态日志清洗功能已被弃用,
|
||||||
|
将在 1.24 版本中被删除。该功能引入了一个日志过滤器,可以应用于所有 Kubernetes 系统组件的日志,
|
||||||
|
以防止各种类型的敏感信息通过日志泄漏。有关更多信息和替代方法,请参阅
|
||||||
|
[KEP-1753: Kubernetes 系统组件日志清洗](https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/1753-logs-sanitization#deprecation)。
|
||||||
|
<!--
|
||||||
|
* In-tree provisioner to CSI driver migration: This applies to a number of in-tree plugins, including [Portworx](https://github.com/kubernetes/enhancements/issues/2589). Refer to the [In-tree Storage Plugin to CSI Migration Design Doc](https://github.com/kubernetes/design-proposals-archive/blob/main/storage/csi-migration.md#background-and-motivations) for more information.
|
||||||
|
-->
|
||||||
|
* 树内制备器(In-tree provisioner)向 CSI 驱动迁移:这适用于许多树内插件,
|
||||||
|
包括 [Portworx](https://github.com/kubernetes/enhancements/issues/2589)。
|
||||||
|
参见[树内存储插件向 CSI 卷迁移的设计文档](https://github.com/kubernetes/design-proposals-archive/blob/main/storage/csi-migration.md#background-and-motivations)
|
||||||
|
了解更多信息。
|
||||||
|
<!--
|
||||||
|
* [Removing Dockershim from kubelet](https://github.com/kubernetes/enhancements/issues/2221): the Container Runtime Interface (CRI) for Docker (i.e. Dockershim) is currently a built-in container runtime in the kubelet code base. It was deprecated in v1.20. As of v1.24, the kubelet will no longer have dockershim. Check out this blog on [what you need to do be ready for v1.24](/blog/2022/03/31/ready-for-dockershim-removal/).
|
||||||
|
-->
|
||||||
|
* [从 kubelet 中移除 Dockershim](https://github.com/kubernetes/enhancements/issues/2221):Docker
|
||||||
|
的容器运行时接口(CRI)(即 Dockershim)目前是 kubelet 代码中内置的容器运行时。 它在 1.20 版本中已被弃用。
|
||||||
|
从 1.24 版本开始,kubelet 中将不再包含 dockershim。查看这篇博客,
|
||||||
|
[了解你需要为 1.24 版本做些什么](/blog/2022/03/31/ready-for-dockershim-removal/)。
|
||||||
|
<!--
|
||||||
|
* [Storage capacity tracking for pod scheduling](https://github.com/kubernetes/enhancements/issues/1472): The CSIStorageCapacity API supports exposing currently available storage capacity via CSIStorageCapacity objects and enhances scheduling of pods that use CSI volumes with late binding. In v1.24, the CSIStorageCapacity API will be stable. The API graduating to stable initates the deprecation of the v1beta1 CSIStorageCapacity API. Refer to the [Storage Capacity Constraints for Pod Scheduling KEP](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/1472-storage-capacity-tracking) for more information.
|
||||||
|
-->
|
||||||
|
* [pod 调度的存储容量追踪](https://github.com/kubernetes/enhancements/issues/1472):CSIStorageCapacity API
|
||||||
|
支持通过 CSIStorageCapacity 对象暴露当前可用的存储容量,并增强了使用带有延迟绑定的 CSI 卷的 Pod 的调度。
|
||||||
|
CSIStorageCapacity API 自 1.24 版本起提供稳定版本。升级到稳定版的 API 将弃用 v1beta1 CSIStorageCapacity API。
|
||||||
|
更多信息请参见 [Pod 调度存储容量约束 KEP](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/1472-storage-capacity-tracking)。
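For reference, a CSIStorageCapacity object expressed against the now-stable `storage.k8s.io/v1` version looks roughly like the sketch below. In practice these objects are created and updated by CSI drivers rather than written by hand, and the names, namespace, zone label, and capacity here are placeholders:

```yaml
apiVersion: storage.k8s.io/v1
kind: CSIStorageCapacity
metadata:
  name: fast-ssd-zone-a        # placeholder; normally generated by the CSI driver
  namespace: kube-system
storageClassName: fast-ssd     # placeholder StorageClass
capacity: 100Gi
nodeTopology:
  matchLabels:
    topology.kubernetes.io/zone: zone-a
```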
|
||||||
|
<!--
|
||||||
|
* [The `master` label is no longer present on kubeadm control plane nodes](https://github.com/kubernetes/kubernetes/pull/107533). For new clusters, the label 'node-role.kubernetes.io/master' will no longer be added to control plane nodes, only the label 'node-role.kubernetes.io/control-plane' will be added. For more information, refer to [KEP-2067: Rename the kubeadm "master" label and taint](https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/kubeadm/2067-rename-master-label-taint).
|
||||||
|
-->
|
||||||
|
* [kubeadm 控制面节点上不再存在 `master` 标签](https://github.com/kubernetes/kubernetes/pull/107533)。
|
||||||
|
对于新集群,控制平面节点将不再添加 'node-role.kubernetes.io/master' 标签,
|
||||||
|
只会添加 'node-role.kubernetes.io/control-plane' 标签。更多信息请参考
|
||||||
|
[KEP-2067:重命名 kubeadm “master” 标签和污点](https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/kubeadm/2067)。
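Manifests that pin workloads to control plane nodes through the old label may need updating on clusters created with newer kubeadm. The sketch below selects and tolerates the new label and taint names; the Pod name and image are placeholders, and note that during the transition kubeadm may still apply the old `node-role.kubernetes.io/master` taint alongside the new one:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: control-plane-debug                       # placeholder name
spec:
  nodeSelector:
    node-role.kubernetes.io/control-plane: ""     # the 'master' label is no longer added on new clusters
  tolerations:
    - key: node-role.kubernetes.io/control-plane
      operator: Exists
      effect: NoSchedule
    - key: node-role.kubernetes.io/master         # kept for clusters that still carry the old taint
      operator: Exists
      effect: NoSchedule
  containers:
    - name: shell
      image: busybox:1.35
      command: ["sleep", "3600"]
```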
|
||||||
|
<!--
|
||||||
|
* [VolumeSnapshot v1beta1 CRD will be removed](https://github.com/kubernetes/enhancements/issues/177). Volume snapshot and restore functionality for Kubernetes and the [Container Storage Interface](https://github.com/container-storage-interface/spec/blob/master/spec.md) (CSI), which provides standardized APIs design (CRDs) and adds PV snapshot/restore support for CSI volume drivers, entered beta in v1.20. VolumeSnapshot v1beta1 was deprecated in v1.21 and is now unsupported. Refer to [KEP-177: CSI Snapshot](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/177-volume-snapshot#kep-177-csi-snapshot) and [kubernetes-csi/external-snapshotter](https://github.com/kubernetes-csi/external-snapshotter/releases/tag/v4.1.0) for more information.
|
||||||
|
-->
|
||||||
|
* [VolumeSnapshot v1beta1 CRD 在 1.24 版本中将被移除](https://github.com/kubernetes/enhancements/issues/177)。
|
||||||
|
Kubernetes 和 [Container Storage Interface](https://github.com/container-storage-interface/spec/blob/master/spec.md) (CSI)
|
||||||
|
的卷快照和恢复功能在 1.20 版本中进入 beta 阶段。该功能提供了标准化的 API 设计(CRD),并为 CSI 卷驱动程序添加了 PV 快照/恢复支持。
|
||||||
|
VolumeSnapshot v1beta1 在 1.21 版本中已被弃用,现在不受支持。更多信息请参考
|
||||||
|
[KEP-177:CSI 快照](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/177-volume-snapshot#kep-177-csi-snapshot)和
|
||||||
|
[kubernetes-csi/external-snapshotter](https://github.com/kubernetes-csi/external-snapshotter/releases/tag/v4.1.0)。
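Manifests that still request snapshots through the removed v1beta1 CRD can be expressed against the v1 API instead. A minimal hypothetical example follows; the snapshot, class, and PVC names are placeholders, and a CSI driver with snapshot support must already be installed:

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: data-snapshot                        # placeholder name
spec:
  volumeSnapshotClassName: csi-snapclass     # placeholder VolumeSnapshotClass
  source:
    persistentVolumeClaimName: data-pvc      # placeholder for an existing PVC
```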
|
||||||
|
<!--
|
||||||
|
## What to do
|
||||||
|
|
||||||
|
### Dockershim removal
|
||||||
|
|
||||||
|
As stated earlier, there are several guides about
|
||||||
|
[Migrating from dockershim](/docs/tasks/administer-cluster/migrating-from-dockershim/).
|
||||||
|
You can start with [Finding what container runtime are on your nodes](/docs/tasks/administer-cluster/migrating-from-dockershim/find-out-runtime-you-use/).
|
||||||
|
If your nodes are using dockershim, there are other possible Docker Engine dependencies such as
|
||||||
|
Pods or third-party tools executing Docker commands or private registries in the Docker configuration file. You can follow the
|
||||||
|
[Check whether Dockershim deprecation affects you](/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-deprecation-affects-you/) guide to review possible
|
||||||
|
Docker Engine dependencies. Before upgrading to v1.24, you decide to either remain using Docker Engine and
|
||||||
|
[Migrate Docker Engine nodes from dockershim to cri-dockerd](/docs/tasks/administer-cluster/migrating-from-dockershim/migrate-dockershim-dockerd/) or migrate to a CRI-compatible runtime. Here's a guide to
|
||||||
|
[change the container runtime on a node from Docker Engine to containerd](/docs/tasks/administer-cluster/migrating-from-dockershim/change-runtime-containerd/).
|
||||||
|
-->
|
||||||
|
## 需要做什么 {#what-to-do}
|
||||||
|
|
||||||
|
### 删除 Dockershim {#dockershim-removal}
|
||||||
|
如前所述,有一些关于从 [dockershim 迁移](/zh/docs/tasks/administer-cluster/migrating-from-dockershim/)的指南。
|
||||||
|
你可以[从查明节点上所使用的容器运行时](/zh/docs/tasks/administer-cluster/migrating-from-dockershim/find-out-runtime-you-use/)开始。
|
||||||
|
如果你的节点使用 dockershim,则还有其他可能的 Docker Engine 依赖项,
|
||||||
|
例如执行 Docker 命令的 Pod 或第三方工具,或是 Docker 配置文件中配置的私有镜像仓库。
|
||||||
|
你可以按照[检查弃用 Dockershim 对你的影响](/zh/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-deprecation-affects-you/)
|
||||||
|
的指南来排查可能的 Docker Engine 依赖项。在升级到 1.24 版本之前,你需要决定:要么继续使用 Docker Engine 并
|
||||||
|
[将 Docker Engine 节点从 dockershim 迁移到 cri-dockerd](/docs/tasks/administer-cluster/migrating-from-dockershim/migrate-dockershim-dockerd/),
|
||||||
|
要么迁移到与 CRI 兼容的运行时。这是[将节点上的容器运行时从 Docker Engine 更改为 containerd](/zh/docs/tasks/administer-cluster/migrating-from-dockershim/change-runtime-containerd/) 的指南。
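As one hedged illustration of the containerd path: on a kubeadm-managed cluster, a node that is newly joined (or re-provisioned) after containerd is installed points kubeadm at the containerd CRI socket in its join configuration. Everything below is a placeholder sketch; the API server address and bootstrap token do not refer to any real cluster, and existing nodes are instead updated in place as described in the linked guide:

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: JoinConfiguration
nodeRegistration:
  # Use the containerd socket instead of the dockershim / cri-dockerd one.
  criSocket: unix:///run/containerd/containerd.sock
discovery:
  bootstrapToken:
    apiServerEndpoint: "192.0.2.10:6443"     # placeholder control plane address
    token: "abcdef.0123456789abcdef"         # placeholder bootstrap token
    unsafeSkipCAVerification: true           # illustration only; verify the CA in real clusters
```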
|
||||||
|
|
||||||
|
<!--
|
||||||
|
### `kubectl convert`
|
||||||
|
|
||||||
|
The [`kubectl convert`](/docs/tasks/tools/included/kubectl-convert-overview/) plugin for `kubectl`
|
||||||
|
can be helpful to address migrating off deprecated APIs. The plugin facilitates the conversion of
|
||||||
|
manifests between different API versions, for example, from a deprecated to a non-deprecated API
|
||||||
|
version. More general information about the API migration process can be found in the [Deprecated API Migration Guide](/docs/reference/using-api/deprecation-guide/).
|
||||||
|
Follow the [install `kubectl convert` plugin](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#install-kubectl-convert-plugin)
|
||||||
|
documentation to download and install the `kubectl-convert` binary.
|
||||||
|
-->
|
||||||
|
### `kubectl convert` {#kubectl-convert}
|
||||||
|
|
||||||
|
kubectl 的 [`kubectl convert`](/zh/docs/tasks/tools/included/kubectl-convert-overview/)
|
||||||
|
插件有助于解决弃用 API 的迁移问题。该插件方便了不同 API 版本之间清单的转换,
|
||||||
|
例如,从弃用的 API 版本到非弃用的 API 版本。关于 API 迁移过程的更多信息可以在
|
||||||
|
[已弃用 API 的迁移指南](/docs/reference/using-api/deprecation-guide/)中找到。按照
|
||||||
|
[安装 `kubectl convert` 插件](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#install-kubectl-convert-plugin)
|
||||||
|
文档下载并安装 `kubectl-convert` 二进制文件。
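As an example of the kind of rewrite the plugin automates: a CronJob that still declares the deprecated `batch/v1beta1` version can be converted to the stable `batch/v1` version, and for simple specs only the apiVersion line changes. A hypothetical manifest after conversion (name, schedule, and image are placeholders):

```yaml
apiVersion: batch/v1          # previously batch/v1beta1
kind: CronJob
metadata:
  name: nightly-report        # placeholder name
spec:
  schedule: "0 2 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: report
              image: busybox:1.35
              command: ["sh", "-c", "echo generating report"]
```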
|
||||||
|
|
||||||
|
<!--
|
||||||
|
### Looking ahead
|
||||||
|
|
||||||
|
The Kubernetes 1.25 and 1.26 releases planned for later this year will stop serving beta versions
|
||||||
|
of several currently stable Kubernetes APIs. The v1.25 release will also remove PodSecurityPolicy,
|
||||||
|
which was deprecated with Kubernetes 1.21 and will not graduate to stable. See [PodSecurityPolicy
|
||||||
|
Deprecation: Past, Present, and Future](/blog/2021/04/06/podsecuritypolicy-deprecation-past-present-and-future/) for more information.
|
||||||
|
-->
|
||||||
|
### 展望未来 {#looking-ahead}
|
||||||
|
|
||||||
|
计划在今年晚些时候发布的 Kubernetes 1.25 和 1.26 版本,将停止提供一些
|
||||||
|
Kubernetes API 的 beta 版本,这些 API 当前为稳定版。1.25 版本还将删除 PodSecurityPolicy,
|
||||||
|
它已在 Kubernetes 1.21 版本中被弃用,并且不会升级到稳定版。有关详细信息,请参阅
|
||||||
|
[PodSecurityPolicy 弃用:过去、现在和未来](/blog/2021/04/06/podsecuritypolicy-deprecation-past-present-and-future/)。
|
||||||
|
|
||||||
|
<!--
|
||||||
|
The official [list of API removals planned for Kubernetes 1.25](/docs/reference/using-api/deprecation-guide/#v1-25) is:
|
||||||
|
-->
|
||||||
|
[Kubernetes 1.25 计划移除的 API 的官方列表](/zh/docs/reference/using-api/deprecation-guide/#v1-25)是:
|
||||||
|
|
||||||
|
* The beta CronJob API (batch/v1beta1)
|
||||||
|
* The beta EndpointSlice API (discovery.k8s.io/v1beta1)
|
||||||
|
* The beta Event API (events.k8s.io/v1beta1)
|
||||||
|
* The beta HorizontalPodAutoscaler API (autoscaling/v2beta1)
|
||||||
|
* The beta PodDisruptionBudget API (policy/v1beta1)
|
||||||
|
* The beta PodSecurityPolicy API (policy/v1beta1)
|
||||||
|
* The beta RuntimeClass API (node.k8s.io/v1beta1)
|
||||||
|
|
||||||
|
<!--
|
||||||
|
The official [list of API removals planned for Kubernetes 1.26](/docs/reference/using-api/deprecation-guide/#v1-26) is:
|
||||||
|
|
||||||
|
* The beta FlowSchema and PriorityLevelConfiguration APIs (flowcontrol.apiserver.k8s.io/v1beta1)
|
||||||
|
* The beta HorizontalPodAutoscaler API (autoscaling/v2beta2)
|
||||||
|
-->
|
||||||
|
[Kubernetes 1.26 计划移除的 API 的官方列表](/zh/docs/reference/using-api/deprecation-guide/#v1-26)是:
|
||||||
|
|
||||||
|
* The beta FlowSchema 和 PriorityLevelConfiguration API (flowcontrol.apiserver.k8s.io/v1beta1)
|
||||||
|
* The beta HorizontalPodAutoscaler API (autoscaling/v2beta2)
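Both removal lists include beta HorizontalPodAutoscaler versions; HPA manifests can instead target the `autoscaling/v2` API, which has been stable since Kubernetes 1.23. A minimal hypothetical example (the HPA name, target Deployment, and thresholds are placeholders):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa                 # placeholder name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                   # placeholder Deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
```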
|
||||||
|
|
||||||
|
<!--
|
||||||
|
### Want to know more?
|
||||||
|
Deprecations are announced in the Kubernetes release notes. You can see the announcements of pending deprecations in the release notes for:
|
||||||
|
* [Kubernetes 1.21](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.21.md#deprecation)
|
||||||
|
* [Kubernetes 1.22](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.22.md#deprecation)
|
||||||
|
* [Kubernetes 1.23](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.23.md#deprecation)
|
||||||
|
* We will formally announce the deprecations that come with [Kubernetes 1.24](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.md#deprecation) as part of the CHANGELOG for that release.
|
||||||
|
|
||||||
|
For information on the process of deprecation and removal, check out the official Kubernetes [deprecation policy](/docs/reference/using-api/deprecation-policy/#deprecating-parts-of-the-api) document.
|
||||||
|
-->
|
||||||
|
### 了解更多 {#want-to-know-more}
|
||||||
|
Kubernetes 发行说明中宣告了弃用信息。你可以在以下版本的发行说明中看到待弃用的公告:
|
||||||
|
* [Kubernetes 1.21](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.21.md#deprecation)
|
||||||
|
* [Kubernetes 1.22](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.22.md#deprecation)
|
||||||
|
* [Kubernetes 1.23](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.23.md#deprecation)
|
||||||
|
* 我们将正式宣布 [Kubernetes 1.24](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.md#deprecation) 的弃用信息,
|
||||||
|
作为该版本 CHANGELOG 的一部分。
|
||||||
|
|
||||||
|
有关弃用和删除过程的信息,请查看 Kubernetes 官方[弃用策略](/zh/docs/reference/using-api/deprecation-policy/#deprecating-parts-of-the-api) 文档。
|
||||||
|
|
|
@ -52,21 +52,20 @@ There are two main ways to have Nodes added to the {{< glossary_tooltip text="AP
|
||||||
1. The kubelet on a node self-registers to the control plane
|
1. The kubelet on a node self-registers to the control plane
|
||||||
2. You, or another human user, manually add a Node object
|
2. You, or another human user, manually add a Node object
|
||||||
|
|
||||||
After you create a Node object, or the kubelet on a node self-registers, the
|
After you create a Node {{< glossary_tooltip text="object" term_id="object" >}},
|
||||||
control plane checks whether the new Node object is valid. For example, if you
|
or the kubelet on a node self-registers, the control plane checks whether the new Node object is
|
||||||
try to create a Node from the following JSON manifest:
|
valid. For example, if you try to create a Node from the following JSON manifest:
|
||||||
-->
|
-->
|
||||||
## 管理 {#management}
|
## 管理 {#management}
|
||||||
|
|
||||||
向 {{< glossary_tooltip text="API 服务器" term_id="kube-apiserver"
|
向 {{< glossary_tooltip text="API 服务器" term_id="kube-apiserver" >}}添加节点的方式主要有两种:
|
||||||
>}}添加节点的方式主要有两种:
|
|
||||||
|
|
||||||
1. 节点上的 `kubelet` 向控制面执行自注册;
|
1. 节点上的 `kubelet` 向控制面执行自注册;
|
||||||
2. 你,或者别的什么人,手动添加一个 Node 对象。
|
2. 你,或者别的什么人,手动添加一个 Node 对象。
|
||||||
|
|
||||||
在你创建了 Node 对象或者节点上的 `kubelet` 执行了自注册操作之后,
|
在你创建了 Node {{< glossary_tooltip text="对象" term_id="object" >}}或者节点上的
|
||||||
控制面会检查新的 Node 对象是否合法。例如,如果你使用下面的 JSON
|
`kubelet` 执行了自注册操作之后,控制面会检查新的 Node 对象是否合法。
|
||||||
对象来创建 Node 对象:
|
例如,如果你尝试使用下面的 JSON 对象来创建 Node 对象:
|
||||||
|
|
||||||
```json
|
```json
|
||||||
{
|
{
|
||||||
|
@ -93,13 +92,14 @@ Kubernetes 会在内部创建一个 Node 对象作为节点的表示。Kubernete
|
||||||
如果节点是健康的(即所有必要的服务都在运行中),则该节点可以用来运行 Pod。
|
如果节点是健康的(即所有必要的服务都在运行中),则该节点可以用来运行 Pod。
|
||||||
否则,直到该节点变为健康之前,所有的集群活动都会忽略该节点。
|
否则,直到该节点变为健康之前,所有的集群活动都会忽略该节点。
|
||||||
|
|
||||||
|
{{< note >}}
|
||||||
<!--
|
<!--
|
||||||
Kubernetes keeps the object for the invalid Node and continues checking to see whether
|
Kubernetes keeps the object for the invalid Node and continues checking to see whether
|
||||||
it becomes healthy.
|
it becomes healthy.
|
||||||
|
|
||||||
You, or a {{< glossary_tooltip term_id="controller" text="controller">}}, must explicitly
|
You, or a {{< glossary_tooltip term_id="controller" text="controller">}}, must explicitly
|
||||||
delete the Node object to stop that health checking.
|
delete the Node object to stop that health checking.
|
||||||
-->
|
-->
|
||||||
{{< note >}}
|
|
||||||
Kubernetes 会一直保存着非法节点对应的对象,并持续检查该节点是否已经
|
Kubernetes 会一直保存着非法节点对应的对象,并持续检查该节点是否已经
|
||||||
变得健康。
|
变得健康。
|
||||||
你,或者某个{{< glossary_tooltip term_id="controller" text="控制器">}}必需显式地
|
你,或者某个{{< glossary_tooltip term_id="controller" text="控制器">}}必需显式地
|
||||||
|
@ -113,6 +113,27 @@ The name of a Node object must be a valid
|
||||||
Node 对象的名称必须是合法的
|
Node 对象的名称必须是合法的
|
||||||
[DNS 子域名](/zh/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。
|
[DNS 子域名](/zh/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。
|
||||||
|
|
||||||
|
<!--
|
||||||
|
### Node name uniqueness
|
||||||
|
|
||||||
|
The [name](/docs/concepts/overview/working-with-objects/names#names) identifies a Node. Two Nodes
|
||||||
|
cannot have the same name at the same time. Kubernetes also assumes that a resource with the same
|
||||||
|
name is the same object. In case of a Node, it is implicitly assumed that an instance using the
|
||||||
|
same name will have the same state (e.g. network settings, root disk contents)
|
||||||
|
and attributes like node labels. This may lead to
|
||||||
|
inconsistencies if an instance was modified without changing its name. If the Node needs to be
|
||||||
|
replaced or updated significantly, the existing Node object needs to be removed from API server
|
||||||
|
first and re-added after the update.
|
||||||
|
-->
|
||||||
|
### 节点名称唯一性 {#node-name-uniqueness}
|
||||||
|
|
||||||
|
节点的[名称](/zh/docs/concepts/overview/working-with-objects/names#names)用来标识 Node 对象。
没有两个 Node 可以同时使用相同的名称。Kubernetes 还假定名字相同的资源是同一个对象。
就 Node 而言,隐式假定使用相同名称的实例会具有相同的状态(例如网络配置、根磁盘内容)
和类似节点标签这类属性。这可能在节点被更改但其名称未变时导致系统状态不一致。
如果某个 Node 需要被替换或者大量变更,需要先从 API 服务器移除现有的 Node 对象,
并在更新完成之后重新将其加入。
|
||||||
|
|
||||||
<!--
|
<!--
|
||||||
### Self-registration of Nodes
|
### Self-registration of Nodes
|
||||||
|
|
||||||
|
@ -139,15 +160,14 @@ For self-registration, the kubelet is started with the following options:
|
||||||
- `--node-labels` - {{< glossary_tooltip text="Labels" term_id="label" >}} to add when registering the node in the cluster (see label restrictions enforced by the [NodeRestriction admission plugin](/docs/reference/access-authn-authz/admission-controllers/#noderestriction)).
|
- `--node-labels` - {{< glossary_tooltip text="Labels" term_id="label" >}} to add when registering the node in the cluster (see label restrictions enforced by the [NodeRestriction admission plugin](/docs/reference/access-authn-authz/admission-controllers/#noderestriction)).
|
||||||
- `--node-status-update-frequency` - Specifies how often kubelet posts node status to master.
|
- `--node-status-update-frequency` - Specifies how often kubelet posts node status to master.
|
||||||
-->
|
-->
|
||||||
- `--kubeconfig` - 用于向 API 服务器表明身份的凭据路径。
|
- `--kubeconfig` - 用于向 API 服务器执行身份认证所用的凭据的路径。
|
||||||
- `--cloud-provider` - 与某{{< glossary_tooltip text="云驱动" term_id="cloud-provider" >}}
|
- `--cloud-provider` - 与某{{< glossary_tooltip text="云驱动" term_id="cloud-provider" >}}
|
||||||
进行通信以读取与自身相关的元数据的方式。
|
进行通信以读取与自身相关的元数据的方式。
|
||||||
- `--register-node` - 自动向 API 服务注册。
|
- `--register-node` - 自动向 API 服务注册。
|
||||||
- `--register-with-taints` - 使用所给的污点列表(逗号分隔的 `<key>=<value>:<effect>`)注册节点。
|
- `--register-with-taints` - 使用所给的{{< glossary_tooltip text="污点" term_id="taint" >}}列表
|
||||||
当 `register-node` 为 false 时无效。
|
(逗号分隔的 `<key>=<value>:<effect>`)注册节点。当 `register-node` 为 false 时无效。
|
||||||
- `--node-ip` - 节点 IP 地址。
|
- `--node-ip` - 节点 IP 地址。
|
||||||
- `--node-labels` - 在集群中注册节点时要添加的
|
- `--node-labels` - 在集群中注册节点时要添加的{{< glossary_tooltip text="标签" term_id="label" >}}。
|
||||||
{{< glossary_tooltip text="标签" term_id="label" >}}。
|
|
||||||
(参见 [NodeRestriction 准入控制插件](/zh/docs/reference/access-authn-authz/admission-controllers/#noderestriction)所实施的标签限制)。
|
(参见 [NodeRestriction 准入控制插件](/zh/docs/reference/access-authn-authz/admission-controllers/#noderestriction)所实施的标签限制)。
|
||||||
- `--node-status-update-frequency` - 指定 kubelet 向控制面发送状态的频率。
|
- `--node-status-update-frequency` - 指定 kubelet 向控制面发送状态的频率。
|
||||||
|
|
||||||
|
@ -156,10 +176,38 @@ When the [Node authorization mode](/docs/reference/access-authn-authz/node/) and
|
||||||
[NodeRestriction admission plugin](/docs/reference/access-authn-authz/admission-controllers/#noderestriction) are enabled,
|
[NodeRestriction admission plugin](/docs/reference/access-authn-authz/admission-controllers/#noderestriction) are enabled,
|
||||||
kubelets are only authorized to create/modify their own Node resource.
|
kubelets are only authorized to create/modify their own Node resource.
|
||||||
-->
|
-->
|
||||||
启用[节点授权模式](/zh/docs/reference/access-authn-authz/node/)和
|
启用[Node 鉴权模式](/zh/docs/reference/access-authn-authz/node/)和
|
||||||
[NodeRestriction 准入插件](/zh/docs/reference/access-authn-authz/admission-controllers/#noderestriction)
|
[NodeRestriction 准入插件](/zh/docs/reference/access-authn-authz/admission-controllers/#noderestriction)
|
||||||
时,仅授权 `kubelet` 创建或修改其自己的节点资源。
|
时,仅授权 `kubelet` 创建或修改其自己的节点资源。
|
||||||
|
|
||||||
|
{{< note >}}
|
||||||
|
<!--
|
||||||
|
As mentioned in the [Node name uniqueness](#node-name-uniqueness) section,
|
||||||
|
when Node configuration needs to be updated, it is a good practice to re-register
|
||||||
|
the node with the API server. For example, if the kubelet being restarted with
|
||||||
|
the new set of `--node-labels`, but the same Node name is used, the change will
|
||||||
|
not take an effect, as labels are being set on the Node registration.
|
||||||
|
-->
|
||||||
|
正如[节点名称唯一性](#node-name-uniqueness)一节所述,当 Node 的配置需要被更新时,
|
||||||
|
一种好的做法是重新向 API 服务器注册该节点。例如,如果 kubelet 重启时其 `--node-labels`
|
||||||
|
是新的值集,但同一个 Node 名称已经被使用,则所作变更不会起作用,
|
||||||
|
因为节点标签是在 Node 注册时完成的。
|
||||||
|
|
||||||
|
<!--
|
||||||
|
Pods already scheduled on the Node may misbehave or cause issues if the Node
|
||||||
|
configuration will be changed on kubelet restart. For example, already running
|
||||||
|
Pod may be tainted against the new labels assigned to the Node, while other
|
||||||
|
Pods, that are incompatible with that Pod will be scheduled based on this new
|
||||||
|
label. Node re-registration ensures all Pods will be drained and properly
|
||||||
|
re-scheduled.
|
||||||
|
-->
|
||||||
|
如果在 kubelet 重启期间 Node 配置发生了变化,已经被调度到某 Node 上的 Pod
可能会出现行为异常或者其他问题。例如,已经运行的 Pod 可能与 Node 上新设置的标签相互排斥,
而与这类 Pod 不兼容的其他 Pod 则会基于新的标签被调度到该节点。
节点重新注册操作可以确保节点上所有 Pod 都被排空并被正确地重新调度。
|
||||||
|
{{< /note >}}
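
<!--
Illustrative sketch (not part of the original page): one possible way to re-register a node
after changing its configuration; the node name is an assumed example value.
-->
下面是一个仅作示意的操作顺序(节点名 `node-1` 为假设值),用于在变更节点配置后重新注册节点:

```shell
# 先腾空并删除现有的 Node 对象
kubectl drain node-1 --ignore-daemonsets
kubectl delete node node-1
# 然后在该节点上以新的 --node-labels 等参数重启 kubelet,使其重新自注册
```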
|
||||||
|
|
||||||
<!--
|
<!--
|
||||||
### Manual Node administration
|
### Manual Node administration
|
||||||
|
|
||||||
|
@ -192,34 +240,35 @@ preparatory step before a node reboot or other maintenance.
|
||||||
|
|
||||||
To mark a Node unschedulable, run:
|
To mark a Node unschedulable, run:
|
||||||
-->
|
-->
|
||||||
你可以结合使用节点上的标签和 Pod 上的选择算符来控制调度。
|
你可以结合使用 Node 上的标签和 Pod 上的选择算符来控制调度。
|
||||||
例如,你可以限制某 Pod 只能在符合要求的节点子集上运行。
|
例如,你可以限制某 Pod 只能在符合要求的节点子集上运行。
|
||||||
|
|
||||||
如果标记节点为不可调度(unschedulable),将阻止新 Pod 调度到该节点之上,但不会
|
如果标记节点为不可调度(unschedulable),将阻止新 Pod 调度到该 Node 之上,
|
||||||
影响任何已经在其上的 Pod。
|
但不会影响任何已经在其上的 Pod。
|
||||||
这是重启节点或者执行其他维护操作之前的一个有用的准备步骤。
|
这是重启节点或者执行其他维护操作之前的一个有用的准备步骤。
|
||||||
|
|
||||||
要标记一个节点为不可调度,执行以下命令:
|
要标记一个 Node 为不可调度,执行以下命令:
|
||||||
|
|
||||||
```shell
|
```shell
|
||||||
kubectl cordon $NODENAME
|
kubectl cordon $NODENAME
|
||||||
```
|
```
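
<!--
After maintenance, you can mark the Node schedulable again:
-->
维护完成后,可以再次将节点标记为可调度:

```shell
kubectl uncordon $NODENAME
```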
|
||||||
|
|
||||||
<!--
|
<!--
|
||||||
See [Safely Drain a Node](/docs/tasks/administer-cluster/safely-drain-node/)
|
See [Safely Drain a Node](/docs/tasks/administer-cluster/safely-drain-node/)
|
||||||
for more details.
|
for more details.
|
||||||
-->
|
-->
|
||||||
更多细节参考[安全腾空节点](/zh/docs/tasks/administer-cluster/safely-drain-node/)。
|
更多细节参考[安全腾空节点](/zh/docs/tasks/administer-cluster/safely-drain-node/)。
|
||||||
|
|
||||||
|
{{< note >}}
|
||||||
<!--
|
<!--
|
||||||
Pods that are part of a {{< glossary_tooltip term_id="daemonset" >}} tolerate
|
Pods that are part of a {{< glossary_tooltip term_id="daemonset" >}} tolerate
|
||||||
being run on an unschedulable Node. DaemonSets typically provide node-local services
|
being run on an unschedulable Node. DaemonSets typically provide node-local services
|
||||||
that should run on the Node even if it is being drained of workload applications.
|
that should run on the Node even if it is being drained of workload applications.
|
||||||
-->
|
-->
|
||||||
{{< note >}}
|
|
||||||
被 {{< glossary_tooltip term_id="daemonset" text="DaemonSet" >}} 控制器创建的 Pod
|
被 {{< glossary_tooltip term_id="daemonset" text="DaemonSet" >}} 控制器创建的 Pod
|
||||||
能够容忍节点的不可调度属性。
|
能够容忍节点的不可调度属性。
|
||||||
DaemonSet 通常提供节点本地的服务,即使节点上的负载应用已经被腾空,这些服务也仍需
|
DaemonSet 通常提供节点本地的服务,即使节点上的负载应用已经被腾空,
|
||||||
运行在节点之上。
|
这些服务也仍需运行在节点之上。
|
||||||
{{< /note >}}
|
{{< /note >}}
|
||||||
|
|
||||||
<!--
|
<!--
|
||||||
|
@ -263,11 +312,11 @@ The usage of these fields varies depending on your cloud provider or bare metal
|
||||||
这些字段的用法取决于你的云服务商或者物理机配置。
|
这些字段的用法取决于你的云服务商或者物理机配置。
|
||||||
|
|
||||||
<!--
|
<!--
|
||||||
* HostName: The as reported by the node's kernel. Can be overridden via the kubelet `-hostname-override` parameter.
|
* HostName: The hostname as reported by the node's kernel. Can be overridden via the kubelet `-hostname-override` parameter.
|
||||||
* ExternalIP: Typically the IP address of the node that is externally routable (available from outside the cluster).
|
* ExternalIP: Typically the IP address of the node that is externally routable (available from outside the cluster).
|
||||||
* InternalIP: Typically the IP address of the node that is routable only within the cluster.
|
||||||
-->
|
-->
|
||||||
* HostName:由节点的内核设置。可以通过 kubelet 的 `--hostname-override` 参数覆盖。
|
* HostName:由节点的内核报告。可以通过 kubelet 的 `--hostname-override` 参数覆盖。
|
||||||
* ExternalIP:通常是节点的可外部路由(从集群外可访问)的 IP 地址。
|
* ExternalIP:通常是节点的可外部路由(从集群外可访问)的 IP 地址。
|
||||||
* InternalIP:通常是节点的仅可在集群内部路由的 IP 地址。
|
* InternalIP:通常是节点的仅可在集群内部路由的 IP 地址。
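
<!--
Illustrative sketch (not part of the original page): how these addresses may appear in a
Node's `.status.addresses`; all values are assumed examples.
-->
下面是一个仅作示意的片段(取值均为假设值),展示这些地址在 Node 的 `.status.addresses` 中的形式:

```yaml
status:
  addresses:
  - type: Hostname
    address: node-1
  - type: InternalIP
    address: 10.0.0.8
  - type: ExternalIP
    address: 203.0.113.10
```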
|
||||||
|
|
||||||
|
@ -301,14 +350,14 @@ The `conditions` field describes the status of all `Running` nodes. Examples of
|
||||||
| `NetworkUnavailable` | `True` 表示节点网络配置不正确;否则为 `False` |
|
| `NetworkUnavailable` | `True` 表示节点网络配置不正确;否则为 `False` |
|
||||||
{{< /table >}}
|
{{< /table >}}
|
||||||
|
|
||||||
|
{{< note >}}
|
||||||
<!--
|
<!--
|
||||||
If you use command-line tools to print details of a cordoned Node, the Condition includes
|
If you use command-line tools to print details of a cordoned Node, the Condition includes
|
||||||
`SchedulingDisabled`. `SchedulingDisabled` is not a Condition in the Kubernetes API; instead,
|
`SchedulingDisabled`. `SchedulingDisabled` is not a Condition in the Kubernetes API; instead,
|
||||||
cordoned nodes are marked Unschedulable in their spec.
|
cordoned nodes are marked Unschedulable in their spec.
|
||||||
-->
|
-->
|
||||||
{{< note >}}
|
如果使用命令行工具来打印已保护(Cordoned)节点的细节,其中的 Condition 字段可能包括
|
||||||
如果使用命令行工具来打印已保护(Cordoned)节点的细节,其中的 Condition 字段可能
|
`SchedulingDisabled`。`SchedulingDisabled` 不是 Kubernetes API 中定义的
|
||||||
包括 `SchedulingDisabled`。`SchedulingDisabled` 不是 Kubernetes API 中定义的
|
|
||||||
Condition,被保护起来的节点在其规约中被标记为不可调度(Unschedulable)。
|
Condition,被保护起来的节点在其规约中被标记为不可调度(Unschedulable)。
|
||||||
{{< /note >}}
|
{{< /note >}}
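
<!--
Illustrative sketch (not part of the original page): a single entry in a Node's
`.status.conditions`; the timestamps and message are assumed example values.
-->
下面是一个仅作示意的片段(时间戳与消息均为假设值),展示 Node 的 `.status.conditions` 中的一个条目:

```yaml
conditions:
- type: Ready
  status: "True"
  reason: KubeletReady
  message: kubelet is posting ready status
  lastHeartbeatTime: "2022-02-15T08:00:00Z"
  lastTransitionTime: "2022-02-15T07:55:00Z"
```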
|
||||||
|
|
||||||
|
@ -340,16 +389,20 @@ than the `pod-eviction-timeout` (an argument passed to the
|
||||||
{{< glossary_tooltip text="API-initiated eviction" term_id="api-eviction" >}}
|
{{< glossary_tooltip text="API-initiated eviction" term_id="api-eviction" >}}
|
||||||
for all Pods assigned to that node. The default eviction timeout duration is
|
for all Pods assigned to that node. The default eviction timeout duration is
|
||||||
**five minutes**.
|
**five minutes**.
|
||||||
|
-->
|
||||||
|
如果 Ready 状况的 `status` 处于 `Unknown` 或者 `False` 状态的时间超过了
|
||||||
|
`pod-eviction-timeout` 值(一个传递给
|
||||||
|
{{< glossary_tooltip text="kube-controller-manager" term_id="kube-controller-manager" >}}
|
||||||
|
的参数),[节点控制器](#node-controller) 会对节点上的所有 Pod 触发
|
||||||
|
{{< glossary_tooltip text="API-发起的驱逐" term_id="api-eviction" >}}。
|
||||||
|
默认的逐出超时时长为 **5 分钟**。
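
<!--
Illustrative sketch (not part of the original page): the related kube-controller-manager
flags, shown here with their default values.
-->
下面是一个仅作示意的例子,列出与上述行为相关的 kube-controller-manager 参数(取值为默认值):

```shell
kube-controller-manager \
  --node-monitor-grace-period=40s \
  --pod-eviction-timeout=5m0s
```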
|
||||||
|
|
||||||
|
<!--
|
||||||
In some cases when the node is unreachable, the API server is unable to communicate
|
In some cases when the node is unreachable, the API server is unable to communicate
|
||||||
with the kubelet on the node. The decision to delete the pods cannot be communicated to
|
with the kubelet on the node. The decision to delete the pods cannot be communicated to
|
||||||
the kubelet until communication with the API server is re-established. In the meantime,
|
the kubelet until communication with the API server is re-established. In the meantime,
|
||||||
the pods that are scheduled for deletion may continue to run on the partitioned node.
|
the pods that are scheduled for deletion may continue to run on the partitioned node.
|
||||||
-->
|
-->
|
||||||
如果 Ready 条件的 `status` 处于 `Unknown` 或者 `False` 状态的时间超过了 `pod-eviction-timeout` 值,
|
|
||||||
(一个传递给 {{< glossary_tooltip text="kube-controller-manager" term_id="kube-controller-manager" >}} 的参数),
|
|
||||||
[节点控制器](#node-controller) 会对节点上的所有 Pod 触发
|
|
||||||
{{< glossary_tooltip text="API-发起的驱逐" term_id="api-eviction" >}}。
|
|
||||||
默认的逐出超时时长为 **5 分钟**。
|
|
||||||
某些情况下,当节点不可达时,API 服务器不能和其上的 kubelet 通信。
|
某些情况下,当节点不可达时,API 服务器不能和其上的 kubelet 通信。
|
||||||
删除 Pod 的决定不能传达给 kubelet,直到它重新建立和 API 服务器的连接为止。
|
删除 Pod 的决定不能传达给 kubelet,直到它重新建立和 API 服务器的连接为止。
|
||||||
与此同时,被计划删除的 Pod 可能会继续在游离的节点上运行。
|
与此同时,被计划删除的 Pod 可能会继续在游离的节点上运行。
|
||||||
|
@ -370,15 +423,24 @@ names.
|
||||||
从 Kubernetes 删除节点对象将导致 API 服务器删除节点上所有运行的 Pod 对象并释放它们的名字。
|
从 Kubernetes 删除节点对象将导致 API 服务器删除节点上所有运行的 Pod 对象并释放它们的名字。
|
||||||
|
|
||||||
<!--
|
<!--
|
||||||
The node lifecycle controller automatically creates
|
When problems occur on nodes, the Kubernetes control plane automatically creates
|
||||||
[taints](/docs/concepts/scheduling-eviction/taint-and-toleration/) that represent conditions.
|
[taints](/docs/concepts/scheduling-eviction/taint-and-toleration/) that match the conditions
|
||||||
|
affecting the node.
|
||||||
The scheduler takes the Node's taints into consideration when assigning a Pod to a Node.
|
The scheduler takes the Node's taints into consideration when assigning a Pod to a Node.
|
||||||
Pods can also have tolerations which let them tolerate a Node's taints.
|
Pods can also have {{< glossary_tooltip text="tolerations" term_id="toleration" >}} that let
|
||||||
|
them run on a Node even though it has a specific taint.
|
||||||
-->
|
-->
|
||||||
节点生命周期控制器会自动创建代表状况的
|
当节点上出现问题时,Kubernetes 控制面会自动创建与影响节点的状况对应的
|
||||||
[污点](/zh/docs/concepts/scheduling-eviction/taint-and-toleration/)。
|
[污点](/zh/docs/concepts/scheduling-eviction/taint-and-toleration/)。
|
||||||
当调度器将 Pod 指派给某节点时,会考虑节点上的污点。
|
调度器在将 Pod 指派到某 Node 时会考虑 Node 上的污点设置。
|
||||||
Pod 则可以通过容忍度(Toleration)表达所能容忍的污点。
|
Pod 也可以设置{{< glossary_tooltip text="容忍度" term_id="toleration" >}},
|
||||||
|
以便能够在设置了特定污点的 Node 上运行。
|
||||||
|
|
||||||
|
<!--
|
||||||
|
See [Taint Nodes by Condition](/docs/concepts/scheduling-eviction/taint-and-toleration/#taint-nodes-by-condition)
|
||||||
|
for more details.
|
||||||
|
-->
|
||||||
|
进一步的细节可参阅[根据状况为节点设置污点](/zh/docs/concepts/scheduling-eviction/taint-and-toleration/#taint-nodes-by-condition)。
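
<!--
Illustrative sketch (not part of the original page): a Pod toleration that keeps the Pod
bound for 5 minutes after its node becomes unreachable.
-->
下面是一个仅作示意的 Pod 容忍度配置,允许 Pod 在节点不可达后继续保留 5 分钟:

```yaml
tolerations:
- key: "node.kubernetes.io/unreachable"
  operator: "Exists"
  effect: "NoExecute"
  tolerationSeconds: 300
```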
|
||||||
|
|
||||||
<!--
|
<!--
|
||||||
### Capacity and Allocatable {#capacity}
|
### Capacity and Allocatable {#capacity}
|
||||||
|
@ -386,9 +448,9 @@ Pod 则可以通过容忍度(Toleration)表达所能容忍的污点。
|
||||||
Describes the resources available on the node: CPU, memory and the maximum
|
Describes the resources available on the node: CPU, memory and the maximum
|
||||||
number of pods that can be scheduled onto the node.
|
number of pods that can be scheduled onto the node.
|
||||||
-->
|
-->
|
||||||
### 容量与可分配 {#capacity}
|
### 容量(Capacity)与可分配(Allocatable) {#capacity}
|
||||||
|
|
||||||
描述节点上的可用资源:CPU、内存和可以调度到节点上的 Pod 的个数上限。
|
这两个值描述节点上的可用资源:CPU、内存和可以调度到节点上的 Pod 的个数上限。
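
<!--
Illustrative sketch (not part of the original page): how capacity and allocatable may appear
in a Node's status; all quantities are assumed example values.
-->
下面是一个仅作示意的片段(数量均为假设值),展示这两组字段在 Node 状态中的形式:

```yaml
status:
  capacity:
    cpu: "4"
    memory: 8010196Ki
    pods: "110"
  allocatable:
    cpu: 3800m
    memory: 7806796Ki
    pods: "110"
```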
|
||||||
|
|
||||||
<!--
|
<!--
|
||||||
The fields in the capacity block indicate the total amount of resources that a
|
The fields in the capacity block indicate the total amount of resources that a
|
||||||
|
@ -415,34 +477,51 @@ The kubelet gathers this information from the node and publishes it into
|
||||||
the Kubernetes API.
|
the Kubernetes API.
|
||||||
-->
|
-->
|
||||||
|
|
||||||
### 信息 {#info}
|
### 信息(Info) {#info}
|
||||||
|
|
||||||
描述节点的一般信息,如内核版本、Kubernetes 版本(`kubelet` 和 `kube-proxy` 版本)、
|
Info 指的是节点的一般信息,如内核版本、Kubernetes 版本(`kubelet` 和 `kube-proxy` 版本)、
|
||||||
容器运行时详细信息,以及节点使用的操作系统。
|
||||||
`kubelet` 从节点收集这些信息并将其发布到 Kubernetes API。
|
`kubelet` 从节点收集这些信息并将其发布到 Kubernetes API。
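
<!--
Illustrative sketch (not part of the original page): inspecting this information for a node;
the node name is an assumed example value.
-->
下面是一个仅作示意的命令(节点名 `node-1` 为假设值),用于查看节点的这些信息:

```shell
kubectl get node node-1 -o jsonpath='{.status.nodeInfo}'
```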
|
||||||
|
|
||||||
<!--
|
<!--
|
||||||
|
|
||||||
## Heartbeats
|
## Heartbeats
|
||||||
|
|
||||||
Heartbeats, sent by Kubernetes nodes, help your cluster determine the
|
Heartbeats, sent by Kubernetes nodes, help your cluster determine the
|
||||||
availability of each node, and to take action when failures are detected.
|
availability of each node, and to take action when failures are detected.
|
||||||
|
|
||||||
For nodes there are two forms of heartbeats:
|
For nodes there are two forms of heartbeats:
|
||||||
|
-->
|
||||||
|
## 心跳 {#heartbeats}
|
||||||
|
Kubernetes 节点发送的心跳帮助你的集群确定每个节点的可用性,并在检测到故障时采取行动。
|
||||||
|
|
||||||
|
对于节点,有两种形式的心跳:
|
||||||
|
|
||||||
|
<!--
|
||||||
* updates to the `.status` of a Node
|
* updates to the `.status` of a Node
|
||||||
* [Lease](/docs/reference/kubernetes-api/cluster-resources/lease-v1/) objects
|
* [Lease](/docs/reference/kubernetes-api/cluster-resources/lease-v1/) objects
|
||||||
within the `kube-node-lease`
|
within the `kube-node-lease`
|
||||||
{{< glossary_tooltip term_id="namespace" text="namespace">}}.
|
{{< glossary_tooltip term_id="namespace" text="namespace">}}.
|
||||||
Each Node has an associated Lease object.
|
Each Node has an associated Lease object.
|
||||||
|
-->
|
||||||
|
* 更新节点的 `.status`
|
||||||
|
* `kube-node-lease` {{<glossary_tooltip term_id="namespace" text="命名空间">}}中的
|
||||||
|
[Lease(租约)](/docs/reference/kubernetes-api/cluster-resources/lease-v1/)对象。
|
||||||
|
每个节点都有一个关联的 Lease 对象。
|
||||||
|
|
||||||
|
<!--
|
||||||
Compared to updates to `.status` of a Node, a Lease is a lightweight resource.
|
Compared to updates to `.status` of a Node, a Lease is a lightweight resource.
|
||||||
Using Leases for heartbeats reduces the performance impact of these updates
|
Using Leases for heartbeats reduces the performance impact of these updates
|
||||||
for large clusters.
|
for large clusters.
|
||||||
|
|
||||||
The kubelet is responsible for creating and updating the `.status` of Nodes,
|
The kubelet is responsible for creating and updating the `.status` of Nodes,
|
||||||
and for updating their related Leases.
|
and for updating their related Leases.
|
||||||
|
-->
|
||||||
|
与 Node 的 `.status` 更新相比,Lease 是一种轻量级资源。
|
||||||
|
使用 Lease 来表达心跳在大型集群中可以减少这些更新对性能的影响。
|
||||||
|
|
||||||
|
kubelet 负责创建和更新节点的 `.status`,以及更新它们对应的 Lease。
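
<!--
Illustrative sketch (not part of the original page): inspecting the Lease object associated
with a node; the node name is an assumed example value.
-->
下面是一个仅作示意的命令(节点名 `node-1` 为假设值),用于查看某节点关联的 Lease 对象:

```shell
kubectl -n kube-node-lease get lease node-1 -o yaml
```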
|
||||||
|
|
||||||
|
<!--
|
||||||
- The kubelet updates the node's `.status` either when there is change in status
|
- The kubelet updates the node's `.status` either when there is change in status
|
||||||
or if there has been no update for a configured interval. The default interval
|
or if there has been no update for a configured interval. The default interval
|
||||||
for `.status` updates to Nodes is 5 minutes, which is much longer than the 40
|
for `.status` updates to Nodes is 5 minutes, which is much longer than the 40
|
||||||
|
@ -452,27 +531,12 @@ and for updating their related Leases.
|
||||||
updates to the Node's `.status`. If the Lease update fails, the kubelet retries,
|
updates to the Node's `.status`. If the Lease update fails, the kubelet retries,
|
||||||
using exponential backoff that starts at 200 milliseconds and capped at 7 seconds.
|
using exponential backoff that starts at 200 milliseconds and capped at 7 seconds.
|
||||||
-->
|
-->
|
||||||
## 心跳 {#heartbeats}
|
- 当节点状态发生变化时,或者在配置的时间间隔内没有更新事件时,kubelet 会更新 `.status`。
|
||||||
Kubernetes 节点发送的心跳帮助你的集群确定每个节点的可用性,并在检测到故障时采取行动。
|
`.status` 更新的默认间隔为 5 分钟(比节点不可达事件的 40 秒默认超时时间长很多)。
|
||||||
|
- `kubelet` 会创建并每 10 秒(默认更新间隔时间)更新 Lease 对象。
|
||||||
对于节点,有两种形式的心跳:
|
Lease 的更新独立于 Node 的 `.status` 更新而发生。
|
||||||
|
如果 Lease 的更新操作失败,kubelet 会采用指数回退机制,从 200 毫秒开始重试,
|
||||||
* 更新节点的 `.status`
|
最长重试间隔为 7 秒钟。
|
||||||
* [Lease](/docs/reference/kubernetes-api/cluster-resources/lease-v1/) 对象
|
|
||||||
在 `kube-node-lease` {{<glossary_tooltip term_id="namespace" text="命名空间">}}中。
|
|
||||||
每个节点都有一个关联的 Lease 对象。
|
|
||||||
|
|
||||||
与 Node 的 `.status` 更新相比,`Lease` 是一种轻量级资源。
|
|
||||||
使用 `Leases` 心跳在大型集群中可以减少这些更新对性能的影响。
|
|
||||||
|
|
||||||
kubelet 负责创建和更新节点的 `.status`,以及更新它们对应的 `Lease`。
|
|
||||||
|
|
||||||
- 当状态发生变化时,或者在配置的时间间隔内没有更新事件时,kubelet 会更新 `.status`。
|
|
||||||
`.status` 更新的默认间隔为 5 分钟(比不可达节点的 40 秒默认超时时间长很多)。
|
|
||||||
- `kubelet` 会每 10 秒(默认更新间隔时间)创建并更新其 `Lease` 对象。
|
|
||||||
`Lease` 更新独立于 `NodeStatus` 更新而发生。
|
|
||||||
如果 `Lease` 的更新操作失败,`kubelet` 会采用指数回退机制,从 200 毫秒开始
|
|
||||||
重试,最长重试间隔为 7 秒钟。
|
|
||||||
|
|
||||||
<!--
|
<!--
|
||||||
## Node Controller
|
## Node Controller
|
||||||
|
@ -485,8 +549,8 @@ CIDR block to the node when it is registered (if CIDR assignment is turned on).
|
||||||
-->
|
-->
|
||||||
## 节点控制器 {#node-controller}
|
## 节点控制器 {#node-controller}
|
||||||
|
|
||||||
节点{{< glossary_tooltip text="控制器" term_id="controller" >}}是
|
节点{{< glossary_tooltip text="控制器" term_id="controller" >}}是 Kubernetes 控制面组件,
|
||||||
Kubernetes 控制面组件,管理节点的方方面面。
|
管理节点的方方面面。
|
||||||
|
|
||||||
节点控制器在节点的生命周期中扮演多个角色。
|
节点控制器在节点的生命周期中扮演多个角色。
|
||||||
第一个是当节点注册时为它分配一个 CIDR 区段(如果启用了 CIDR 分配)。
|
第一个是当节点注册时为它分配一个 CIDR 区段(如果启用了 CIDR 分配)。
|
||||||
|
@ -505,6 +569,7 @@ controller deletes the node from its list of nodes.
|
||||||
<!--
|
<!--
|
||||||
The third is monitoring the nodes' health. The node controller is
|
The third is monitoring the nodes' health. The node controller is
|
||||||
responsible for:
|
responsible for:
|
||||||
|
|
||||||
- In the case that a node becomes unreachable, updating the NodeReady condition
|
- In the case that a node becomes unreachable, updating the NodeReady condition
|
||||||
of within the Node's `.status`. In this case the node controller sets the
|
of within the Node's `.status`. In this case the node controller sets the
|
||||||
NodeReady condition to `ConditionUnknown`.
|
NodeReady condition to `ConditionUnknown`.
|
||||||
|
@ -516,12 +581,13 @@ responsible for:
|
||||||
|
|
||||||
The node controller checks the state of each node every `-node-monitor-period` seconds.
|
The node controller checks the state of each node every `-node-monitor-period` seconds.
|
||||||
-->
|
-->
|
||||||
第三个是监控节点的健康状况。 节点控制器是负责:
|
第三个是监控节点的健康状况。节点控制器负责:
|
||||||
- 在节点节点不可达的情况下,在 Node 的 `.status` 中更新 `NodeReady` 状况。
|
|
||||||
在这种情况下,节点控制器将 `NodeReady` 状况更新为 `ConditionUnknown` 。
|
- 在节点不可达的情况下,在 Node 的 `.status` 中更新 NodeReady 状况。
|
||||||
|
在这种情况下,节点控制器将 NodeReady 状况更新为 `Unknown` 。
|
||||||
- 如果节点仍然无法访问:对于不可达节点上的所有 Pod 触发
|
- 如果节点仍然无法访问:对于不可达节点上的所有 Pod 触发
|
||||||
[API-发起的逐出](/zh/docs/concepts/scheduling-eviction/api-eviction/)。
|
[API-发起的逐出](/zh/docs/concepts/scheduling-eviction/api-eviction/)。
|
||||||
默认情况下,节点控制器 在将节点标记为 `ConditionUnknown` 后等待 5 分钟 提交第一个驱逐请求。
|
默认情况下,节点控制器在将节点标记为 `Unknown` 后等待 5 分钟提交第一个驱逐请求。
|
||||||
|
|
||||||
节点控制器每隔 `--node-monitor-period` 秒检查每个节点的状态。
|
节点控制器每隔 `--node-monitor-period` 秒检查每个节点的状态。
|
||||||
|
|
||||||
|
@ -542,29 +608,33 @@ The node eviction behavior changes when a node in a given availability zone
|
||||||
becomes unhealthy. The node controller checks what percentage of nodes in the zone
|
becomes unhealthy. The node controller checks what percentage of nodes in the zone
|
||||||
are unhealthy (NodeReady condition is `ConditionUnknown` or `ConditionFalse`) at
|
are unhealthy (NodeReady condition is `ConditionUnknown` or `ConditionFalse`) at
|
||||||
the same time:
|
the same time:
|
||||||
|
-->
|
||||||
|
当一个可用区域(Availability Zone)中的节点变为不健康时,节点的驱逐行为将发生改变。
|
||||||
|
节点控制器会同时检查可用区域中不健康(NodeReady 状况为 `Unknown` 或 `False`)
|
||||||
|
的节点的百分比:
|
||||||
|
|
||||||
|
<!--
|
||||||
- If the fraction of unhealthy nodes is at least `--unhealthy-zone-threshold`
|
- If the fraction of unhealthy nodes is at least `--unhealthy-zone-threshold`
|
||||||
(default 0.55), then the eviction rate is reduced.
|
(default 0.55), then the eviction rate is reduced.
|
||||||
- If the cluster is small (i.e. has less than or equal to
|
- If the cluster is small (i.e. has less than or equal to
|
||||||
`--large-cluster-size-threshold` nodes - default 50), then evictions are stopped.
|
`--large-cluster-size-threshold` nodes - default 50), then evictions are stopped.
|
||||||
- Otherwise, the eviction rate is reduced to `--secondary-node-eviction-rate`
|
- Otherwise, the eviction rate is reduced to `--secondary-node-eviction-rate`
|
||||||
(default 0.01) per second.
|
(default 0.01) per second.
|
||||||
|
-->
|
||||||
|
- 如果不健康节点的比例超过 `--unhealthy-zone-threshold` (默认为 0.55),
|
||||||
|
驱逐速率将会降低。
|
||||||
|
- 如果集群较小(意即小于等于 `--large-cluster-size-threshold` 个节点 - 默认为 50),
|
||||||
|
驱逐操作将会停止。
|
||||||
|
- 否则驱逐速率将降为每秒 `--secondary-node-eviction-rate` 个(默认为 0.01)。
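
<!--
Illustrative sketch (not part of the original page): the kube-controller-manager flags that
govern these eviction rates, shown with their default values.
-->
下面是一个仅作示意的例子,列出控制上述驱逐速率的 kube-controller-manager 参数(取值为默认值):

```shell
kube-controller-manager \
  --node-eviction-rate=0.1 \
  --secondary-node-eviction-rate=0.01 \
  --unhealthy-zone-threshold=0.55 \
  --large-cluster-size-threshold=50
```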
|
||||||
|
|
||||||
|
<!--
|
||||||
The reason these policies are implemented per availability zone is because one
|
The reason these policies are implemented per availability zone is because one
|
||||||
availability zone might become partitioned from the master while the others remain
|
availability zone might become partitioned from the master while the others remain
|
||||||
connected. If your cluster does not span multiple cloud provider availability zones,
|
connected. If your cluster does not span multiple cloud provider availability zones,
|
||||||
then the eviction mechanism does not take per-zone unavailability into account.
|
then the eviction mechanism does not take per-zone unavailability into account.
|
||||||
-->
|
-->
|
||||||
当一个可用区域(Availability Zone)中的节点变为不健康时,节点的驱逐行为将发生改变。
|
在逐个可用区域中实施这些策略的原因是,
|
||||||
节点控制器会同时检查可用区域中不健康(NodeReady 状况为 `ConditionUnknown` 或 `ConditionFalse`)
|
当一个可用区域可能从控制面脱离时其它可用区域可能仍然保持连接。
|
||||||
的节点的百分比:
|
|
||||||
- 如果不健康节点的比例超过 `--unhealthy-zone-threshold` (默认为 0.55),
|
|
||||||
驱逐速率将会降低。
|
|
||||||
- 如果集群较小(意即小于等于 `--large-cluster-size-threshold`
|
|
||||||
个节点 - 默认为 50),驱逐操作将会停止。
|
|
||||||
- 否则驱逐速率将降为每秒 `--secondary-node-eviction-rate` 个(默认为 0.01)。
|
|
||||||
|
|
||||||
在单个可用区域实施这些策略的原因是当一个可用区域可能从控制面脱离时其它可用区域
|
|
||||||
可能仍然保持连接。
|
|
||||||
如果你的集群没有跨越云服务商的多个可用区域,那(整个集群)就只有一个可用区域。
|
如果你的集群没有跨越云服务商的多个可用区域,那(整个集群)就只有一个可用区域。
|
||||||
|
|
||||||
<!--
|
<!--
|
||||||
|
@ -583,8 +653,8 @@ evict pods from the remaining nodes that are unhealthy or unreachable).
|
||||||
因此,如果一个可用区域中的所有节点都不健康时,节点控制器会以正常的速率
|
因此,如果一个可用区域中的所有节点都不健康时,节点控制器会以正常的速率
|
||||||
`--node-eviction-rate` 进行驱逐操作。
|
`--node-eviction-rate` 进行驱逐操作。
|
||||||
在所有的可用区域都不健康(也即集群中没有健康节点)的极端情况下,
|
在所有的可用区域都不健康(也即集群中没有健康节点)的极端情况下,
|
||||||
节点控制器将假设控制面与节点间的连接出了某些问题,它将停止所有驱逐动作(如果故障后部分节点重新连接,
|
节点控制器将假设控制面与节点间的连接出了某些问题,它将停止所有驱逐动作
|
||||||
节点控制器会从剩下不健康或者不可达节点中驱逐 `pods`)。
|
(如果故障后部分节点重新连接,节点控制器会从剩下不健康或者不可达节点中驱逐 Pod)。
|
||||||
|
|
||||||
<!--
|
<!--
|
||||||
The Node Controller is also responsible for evicting pods running on nodes with
|
The Node Controller is also responsible for evicting pods running on nodes with
|
||||||
|
@ -595,8 +665,8 @@ that the scheduler won't place Pods onto unhealthy nodes.
|
||||||
-->
|
-->
|
||||||
节点控制器还负责驱逐运行在拥有 `NoExecute` 污点的节点上的 Pod,
|
节点控制器还负责驱逐运行在拥有 `NoExecute` 污点的节点上的 Pod,
|
||||||
除非这些 Pod 能够容忍此污点。
|
除非这些 Pod 能够容忍此污点。
|
||||||
节点控制器还负责根据节点故障(例如节点不可访问或没有就绪)为其添加
|
节点控制器还负责根据节点故障(例如节点不可访问或没有就绪)
|
||||||
{{< glossary_tooltip text="污点" term_id="taint" >}}。
|
为其添加{{< glossary_tooltip text="污点" term_id="taint" >}}。
|
||||||
这意味着调度器不会将 Pod 调度到不健康的节点上。
|
这意味着调度器不会将 Pod 调度到不健康的节点上。
|
||||||
|
|
||||||
<!--
|
<!--
|
||||||
|
@ -612,8 +682,8 @@ you need to set the node's capacity information when you add it.
|
||||||
|
|
||||||
Node 对象会跟踪节点上资源的容量(例如可用内存和 CPU 数量)。
|
Node 对象会跟踪节点上资源的容量(例如可用内存和 CPU 数量)。
|
||||||
通过[自注册](#self-registration-of-nodes)机制生成的 Node 对象会在注册期间报告自身容量。
|
通过[自注册](#self-registration-of-nodes)机制生成的 Node 对象会在注册期间报告自身容量。
|
||||||
如果你[手动](#manual-node-administration)添加了 Node,你就需要在添加节点时
|
如果你[手动](#manual-node-administration)添加了 Node,
|
||||||
手动设置节点容量。
|
你就需要在添加节点时手动设置节点容量。
|
||||||
|
|
||||||
<!--
|
<!--
|
||||||
The Kubernetes {{< glossary_tooltip text="scheduler" term_id="kube-scheduler" >}} ensures that
|
The Kubernetes {{< glossary_tooltip text="scheduler" term_id="kube-scheduler" >}} ensures that
|
||||||
|
@ -623,18 +693,19 @@ The sum of requests includes all containers started by the kubelet, but excludes
|
||||||
containers started directly by the container runtime, and also excludes any
|
containers started directly by the container runtime, and also excludes any
|
||||||
process running outside of the kubelet's control.
|
process running outside of the kubelet's control.
|
||||||
-->
|
-->
|
||||||
Kubernetes {{< glossary_tooltip text="调度器" term_id="kube-scheduler" >}}保证节点上
|
Kubernetes {{< glossary_tooltip text="调度器" term_id="kube-scheduler" >}}
|
||||||
有足够的资源供其上的所有 Pod 使用。它会检查节点上所有容器的请求的总和不会超过节点的容量。
|
保证节点上有足够的资源供其上的所有 Pod 使用。
|
||||||
|
它会检查节点上所有容器的请求的总和不会超过节点的容量。
|
||||||
总的请求包括由 kubelet 启动的所有容器,但不包括由容器运行时直接启动的容器,
|
总的请求包括由 kubelet 启动的所有容器,但不包括由容器运行时直接启动的容器,
|
||||||
也不包括不受 `kubelet` 控制的其他进程。
|
也不包括不受 `kubelet` 控制的其他进程。
|
||||||
|
|
||||||
|
{{< note >}}
|
||||||
<!--
|
<!--
|
||||||
If you want to explicitly reserve resources for non-Pod processes, follow this tutorial to
|
If you want to explicitly reserve resources for non-Pod processes, follow this tutorial to
|
||||||
[reserve resources for system daemons](/docs/tasks/administer-cluster/reserve-compute-resources/#system-reserved).
|
[reserve resources for system daemons](/docs/tasks/administer-cluster/reserve-compute-resources/#system-reserved).
|
||||||
-->
|
-->
|
||||||
{{< note >}}
|
如果要为非 Pod 进程显式保留资源,
请参考[为系统守护进程预留资源](/zh/docs/tasks/administer-cluster/reserve-compute-resources/#system-reserved)。
|
|
||||||
{{< /note >}}
|
{{< /note >}}
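
<!--
Illustrative sketch (not part of the original page): reserving resources for system daemons
in the kubelet configuration file; the quantities are assumed example values.
-->
下面是一个仅作示意的 kubelet 配置片段(数量均为假设值),用于为系统守护进程预留资源:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
systemReserved:
  cpu: 500m
  memory: 1Gi
kubeReserved:
  cpu: 500m
  memory: 1Gi
```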
|
||||||
|
|
||||||
<!--
|
<!--
|
||||||
|
@ -653,8 +724,7 @@ for more information.
|
||||||
-->
|
-->
|
||||||
如果启用了 `TopologyManager` [特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/),
|
如果启用了 `TopologyManager` [特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/),
|
||||||
`kubelet` 可以在作出资源分配决策时使用拓扑提示。
|
`kubelet` 可以在作出资源分配决策时使用拓扑提示。
|
||||||
参考[控制节点上拓扑管理策略](/zh/docs/tasks/administer-cluster/topology-manager/)
|
参考[控制节点上拓扑管理策略](/zh/docs/tasks/administer-cluster/topology-manager/)了解详细信息。
|
||||||
了解详细信息。
|
|
||||||
|
|
||||||
<!--
|
<!--
|
||||||
## Graceful node shutdown {#graceful-node-shutdown}
|
## Graceful node shutdown {#graceful-node-shutdown}
|
||||||
|
@ -666,11 +736,14 @@ for more information.
|
||||||
<!--
|
<!--
|
||||||
The kubelet attempts to detect node system shutdown and terminates pods running on the node.
|
The kubelet attempts to detect node system shutdown and terminates pods running on the node.
|
||||||
|
|
||||||
Kubelet ensures that pods follow the normal [pod termination process](/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination) during the node shutdown.
|
Kubelet ensures that pods follow the normal
|
||||||
|
[pod termination process](/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination)
|
||||||
|
during the node shutdown.
|
||||||
-->
|
-->
|
||||||
kubelet 会尝试检测节点系统关闭事件并终止在节点上运行的 Pods。
|
kubelet 会尝试检测节点系统关闭事件并终止在节点上运行的 Pods。
|
||||||
|
|
||||||
在节点终止期间,kubelet 保证 Pod 遵从常规的 [Pod 终止流程](/zh/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination)。
|
在节点终止期间,kubelet 保证 Pod 遵从常规的
|
||||||
|
[Pod 终止流程](/zh/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination)。
|
||||||
|
|
||||||
<!--
|
<!--
|
||||||
The graceful node shutdown feature depends on systemd since it takes advantage of
|
The graceful node shutdown feature depends on systemd since it takes advantage of
|
||||||
|
@ -678,7 +751,7 @@ The graceful node shutdown feature depends on systemd since it takes advantage o
|
||||||
delay the node shutdown with a given duration.
|
delay the node shutdown with a given duration.
|
||||||
-->
|
-->
|
||||||
体面节点关闭特性依赖于 systemd,因为它要利用
|
体面节点关闭特性依赖于 systemd,因为它要利用
|
||||||
[systemd 抑制器锁](https://www.freedesktop.org/wiki/Software/systemd/inhibit/)
|
[systemd 抑制器锁](https://www.freedesktop.org/wiki/Software/systemd/inhibit/)机制,
|
||||||
在给定的期限内延迟节点关闭。
|
在给定的期限内延迟节点关闭。
|
||||||
|
|
||||||
<!--
|
<!--
|
||||||
|
@ -687,8 +760,8 @@ Graceful node shutdown is controlled with the `GracefulNodeShutdown`
|
||||||
enabled by default in 1.21.
|
enabled by default in 1.21.
|
||||||
-->
|
-->
|
||||||
体面节点关闭特性受 `GracefulNodeShutdown`
|
体面节点关闭特性受 `GracefulNodeShutdown`
|
||||||
[特性门控](/docs/reference/command-line-tools-reference/feature-gates/)
|
[特性门控](/docs/reference/command-line-tools-reference/feature-gates/)控制,
|
||||||
控制,在 1.21 版本中是默认启用的。
|
在 1.21 版本中是默认启用的。
|
||||||
|
|
||||||
<!--
|
<!--
|
||||||
Note that by default, both configuration options described below,
|
Note that by default, both configuration options described below,
|
||||||
|
@ -697,8 +770,7 @@ thus not activating Graceful node shutdown functionality.
|
||||||
To activate the feature, the two kubelet config settings should be configured appropriately and set to non-zero values.
|
To activate the feature, the two kubelet config settings should be configured appropriately and set to non-zero values.
|
||||||
-->
|
-->
|
||||||
注意,默认情况下,下面描述的两个配置选项,`ShutdownGracePeriod` 和
|
注意,默认情况下,下面描述的两个配置选项,`ShutdownGracePeriod` 和
|
||||||
`ShutdownGracePeriodCriticalPods` 都是被设置为 0 的,因此不会激活
|
`ShutdownGracePeriodCriticalPods` 都是被设置为 0 的,因此不会激活体面节点关闭功能。
|
||||||
体面节点关闭功能。
|
|
||||||
要激活此功能特性,这两个 kubelet 配置选项要适当配置,并设置为非零值。
|
要激活此功能特性,这两个 kubelet 配置选项要适当配置,并设置为非零值。
|
||||||
|
|
||||||
<!--
|
<!--
|
||||||
|
@ -707,7 +779,7 @@ During a graceful shutdown, kubelet terminates pods in two phases:
|
||||||
1. Terminate regular pods running on the node.
|
1. Terminate regular pods running on the node.
|
||||||
2. Terminate [critical pods](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical) running on the node.
|
2. Terminate [critical pods](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical) running on the node.
|
||||||
-->
|
-->
|
||||||
在体面关闭节点过程中,kubelet 分两个阶段来终止 Pods:
|
在体面关闭节点过程中,kubelet 分两个阶段来终止 Pod:
|
||||||
|
|
||||||
1. 终止在节点上运行的常规 Pod。
|
1. 终止在节点上运行的常规 Pod。
|
||||||
2. 终止在节点上运行的[关键 Pod](/zh/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical)。
|
2. 终止在节点上运行的[关键 Pod](/zh/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical)。
|
||||||
|
@ -723,11 +795,10 @@ Graceful Node Shutdown feature is configured with two [`KubeletConfiguration`](/
|
||||||
[`KubeletConfiguration`](/zh/docs/tasks/administer-cluster/kubelet-config-file/) 选项:
|
[`KubeletConfiguration`](/zh/docs/tasks/administer-cluster/kubelet-config-file/) 选项:
|
||||||
|
|
||||||
* `ShutdownGracePeriod`:
|
* `ShutdownGracePeriod`:
|
||||||
* 指定节点应延迟关闭的总持续时间。此时间是 Pod 体面终止的时间总和,不区分常规 Pod 还是
|
* 指定节点应延迟关闭的总持续时间。此时间是 Pod 体面终止的时间总和,不区分常规 Pod
|
||||||
[关键 Pod](/zh/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical)。
|
还是[关键 Pod](/zh/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical)。
|
||||||
* `ShutdownGracePeriodCriticalPods`:
|
* `ShutdownGracePeriodCriticalPods`:
|
||||||
* 在节点关闭期间指定用于终止
|
* 在节点关闭期间指定用于终止[关键 Pod](/zh/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical)
|
||||||
[关键 Pod](/zh/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical)
|
|
||||||
的持续时间。该值应小于 `ShutdownGracePeriod`。
|
的持续时间。该值应小于 `ShutdownGracePeriod`。
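
<!--
Illustrative sketch (not part of the original page): enabling graceful node shutdown in the
kubelet configuration file; the durations are assumed example values.
-->
下面是一个仅作示意的 kubelet 配置片段(时长为假设值),用于启用体面节点关闭:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
shutdownGracePeriod: 30s
shutdownGracePeriodCriticalPods: 10s
```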
|
||||||
|
|
||||||
<!--
|
<!--
|
||||||
|
@ -740,8 +811,7 @@ reserved for terminating [critical pods](/docs/tasks/administer-cluster/guarante
|
||||||
例如,如果设置了 `ShutdownGracePeriod=30s` 和 `ShutdownGracePeriodCriticalPods=10s`,
|
例如,如果设置了 `ShutdownGracePeriod=30s` 和 `ShutdownGracePeriodCriticalPods=10s`,
|
||||||
则 kubelet 将延迟 30 秒关闭节点。
|
则 kubelet 将延迟 30 秒关闭节点。
|
||||||
在关闭期间,将保留前 20(30 - 10)秒用于体面终止常规 Pod,
|
在关闭期间,将保留前 20(30 - 10)秒用于体面终止常规 Pod,
|
||||||
而保留最后 10 秒用于终止
|
而保留最后 10 秒用于终止[关键 Pod](/zh/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical)。
|
||||||
[关键 Pod](/zh/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical)。
|
|
||||||
|
|
||||||
<!--
|
<!--
|
||||||
When pods were evicted during the graceful node shutdown, they are marked as failed.
|
When pods were evicted during the graceful node shutdown, they are marked as failed.
|
||||||
|
@ -749,61 +819,199 @@ Running `kubectl get pods` shows the status of the the evicted pods as `Shutdown
|
||||||
And `kubectl describe pod` indicates that the pod was evicted because of node shutdown:
|
And `kubectl describe pod` indicates that the pod was evicted because of node shutdown:
|
||||||
|
|
||||||
```
|
```
|
||||||
Status: Failed
|
Reason: Terminated
|
||||||
Reason: Shutdown
|
Message: Pod was terminated in response to imminent node shutdown.
|
||||||
Message: Node is shutting, evicting pods
|
|
||||||
```
|
```
|
||||||
|
|
||||||
Failed pod objects will be preserved until explicitly deleted or [cleaned up by the GC](/docs/concepts/workloads/pods/pod-lifecycle/#pod-garbage-collection).
|
|
||||||
This is a change of behavior compared to abrupt node termination.
|
|
||||||
-->
|
-->
|
||||||
|
|
||||||
{{< note >}}
|
{{< note >}}
|
||||||
当 Pod 在正常节点关闭期间被驱逐时,它们会被标记为 `failed`。
|
当 Pod 在正常节点关闭期间被驱逐时,它们会被标记为已经失败(Failed)。
|
||||||
运行 `kubectl get pods` 将被驱逐的 pod 的状态显示为 `Shutdown`。
|
运行 `kubectl get pods` 时,被驱逐的 Pod 的状态显示为 `Shutdown`。
|
||||||
并且 `kubectl describe pod` 表示 pod 因节点关闭而被驱逐:
|
并且 `kubectl describe pod` 表示 Pod 因节点关闭而被驱逐:
|
||||||
|
|
||||||
```
|
```
|
||||||
Status: Failed
|
Reason: Terminated
|
||||||
Reason: Shutdown
|
Message: Pod was terminated in response to imminent node shutdown.
|
||||||
Message: Node is shutting, evicting pods
|
|
||||||
```
|
```
|
||||||
|
|
||||||
`Failed` 的 pod 对象将被保留,直到被明确删除或
|
|
||||||
[由 GC 清理](/zh/docs/concepts/workloads/pods/pod-lifecycle/#pod-garbage-collection)。
|
|
||||||
与突然的节点终止相比这是一种行为变化。
|
|
||||||
{{< /note >}}
|
{{< /note >}}
|
||||||
|
|
||||||
|
<!--
|
||||||
|
### Pod Priority based graceful node shutdown {#pod-priority-graceful-node-shutdown}
|
||||||
|
-->
|
||||||
|
### 基于 Pod 优先级的体面节点关闭 {#pod-priority-graceful-node-shutdown}
|
||||||
|
|
||||||
|
{{< feature-state state="alpha" for_k8s_version="v1.23" >}}
|
||||||
|
|
||||||
|
<!--
|
||||||
|
To provide more flexibility during graceful node shutdown around the ordering
|
||||||
|
of pods during shutdown, graceful node shutdown honors the PriorityClass for
|
||||||
|
Pods, provided that you enabled this feature in your cluster. The feature
|
||||||
|
allows cluster administers to explicitly define the ordering of pods
|
||||||
|
during graceful node shutdown based on
|
||||||
|
[priority classes](/docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass).
|
||||||
|
-->
|
||||||
|
为了在体面节点关闭期间提供更多的灵活性,尤其是处理关闭期间的 Pod 排序问题,
|
||||||
|
体面节点关闭机制能够关注 Pod 的 PriorityClass 设置,前提是你已经在集群中启用了此功能特性。
|
||||||
|
此功能特性允许集群管理员基于 Pod
|
||||||
|
的[优先级类(Priority Class)](/zh/docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass)
|
||||||
|
显式地定义体面节点关闭期间 Pod 的处理顺序。
|
||||||
|
|
||||||
|
<!--
|
||||||
|
The [Graceful Node Shutdown](#graceful-node-shutdown) feature, as described
|
||||||
|
above, shuts down pods in two phases, non-critical pods, followed by critical
|
||||||
|
pods. If additional flexibility is needed to explicitly define the ordering of
|
||||||
|
pods during shutdown in a more granular way, pod priority based graceful
|
||||||
|
shutdown can be used.
|
||||||
|
-->
|
||||||
|
前文所述的[体面节点关闭](#graceful-node-shutdown)特性能够分两个阶段关闭 Pod,
|
||||||
|
首先关闭的是非关键的 Pod,之后再处理关键 Pod。
|
||||||
|
如果需要显式地以更细粒度定义关闭期间 Pod 的处理顺序,需要一定的灵活度,
|
||||||
|
这时可以使用基于 Pod 优先级的体面关闭机制。
|
||||||
|
|
||||||
|
<!--
|
||||||
|
When graceful node shutdown honors pod priorities, this makes it possible to do
|
||||||
|
graceful node shutdown in multiple phases, each phase shutting down a
|
||||||
|
particular priority class of pods. The kubelet can be configured with the exact
|
||||||
|
phases and shutdown time per phase.
|
||||||
|
-->
|
||||||
|
当体面节点关闭能够处理 Pod 优先级时,体面节点关闭的处理可以分为多个阶段,
|
||||||
|
每个阶段关闭特定优先级类的 Pod。kubelet 可以被配置为按确切的阶段处理 Pod,
|
||||||
|
且每个阶段可以独立设置关闭时间。
|
||||||
|
|
||||||
|
<!--
|
||||||
|
Assuming the following custom pod
|
||||||
|
[priority classes](/docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass)
|
||||||
|
in a cluster,
|
||||||
|
-->
|
||||||
|
假设集群中存在以下自定义的 Pod
|
||||||
|
[优先级类](/zh/docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass)。
|
||||||
|
|
||||||
|
| Pod 优先级类名称 | Pod 优先级类数值 |
|
||||||
|
|-------------------------|------------------------|
|
||||||
|
|`custom-class-a` | 100000 |
|
||||||
|
|`custom-class-b` | 10000 |
|
||||||
|
|`custom-class-c` | 1000 |
|
||||||
|
|`regular/unset` | 0 |
|
||||||
|
|
||||||
|
<!--
|
||||||
|
Within the [kubelet configuration](/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)
|
||||||
|
the settings for `shutdownGracePeriodByPodPriority` could look like:
|
||||||
|
-->
|
||||||
|
在 [kubelet 配置](/zh/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)中,
|
||||||
|
`shutdownGracePeriodByPodPriority` 可能看起来是这样:
|
||||||
|
|
||||||
|
| Pod 优先级类数值 | 关闭期限 |
|
||||||
|
|------------------------|-----------|
|
||||||
|
| 100000 | 10 秒 |
|
||||||
|
| 10000 | 180 秒 |
|
||||||
|
| 1000 | 120 秒 |
|
||||||
|
| 0 | 60 秒 |
|
||||||
|
|
||||||
|
<!--
|
||||||
|
The corresponding kubelet config YAML configuration would be:
|
||||||
|
-->
|
||||||
|
对应的 kubelet 配置 YAML 将会是:
|
||||||
|
|
||||||
|
```yaml
|
||||||
|
shutdownGracePeriodByPodPriority:
|
||||||
|
- priority: 100000
|
||||||
|
shutdownGracePeriodSeconds: 10
|
||||||
|
- priority: 10000
|
||||||
|
shutdownGracePeriodSeconds: 180
|
||||||
|
- priority: 1000
|
||||||
|
shutdownGracePeriodSeconds: 120
|
||||||
|
- priority: 0
|
||||||
|
shutdownGracePeriodSeconds: 60
|
||||||
|
```
|
||||||
|
|
||||||
|
<!--
|
||||||
|
The above table implies that any pod with `priority` value >= 100000 will get
|
||||||
|
just 10 seconds to stop, any pod with value >= 10000 and < 100000 will get 180
|
||||||
|
seconds to stop, any pod with value >= 1000 and < 10000 will get 120 seconds to stop.
|
||||||
|
Finally, all other pods will get 60 seconds to stop.
|
||||||
|
|
||||||
|
One doesn't have to specify values corresponding to all of the classes. For
|
||||||
|
example, you could instead use these settings:
|
||||||
|
-->
|
||||||
|
上面的表格表明,所有 `priority` 值大于等于 100000 的 Pod 会得到 10 秒钟期限停止,
|
||||||
|
所有 `priority` 值介于 10000 和 100000 之间的 Pod 会得到 180 秒钟期限停止,
|
||||||
|
所有 `priority` 值介于 1000 和 10000 之间的 Pod 会得到 120 秒钟期限停止,
|
||||||
|
所有其他 Pod 将获得 60 秒的时间停止。
|
||||||
|
|
||||||
|
用户不需要为所有的优先级类都设置数值。例如,你也可以使用下面这种配置:
|
||||||
|
|
||||||
|
| Pod 优先级类数值 | 关闭期限 |
|
||||||
|
|------------------------|-----------|
|
||||||
|
| 100000 | 300 秒 |
|
||||||
|
| 1000 | 120 秒 |
|
||||||
|
| 0 | 60 秒 |
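
<!--
The corresponding kubelet configuration for the table above (illustrative only) would be:
-->
与上表对应的 kubelet 配置(仅作示意)是:

```yaml
shutdownGracePeriodByPodPriority:
- priority: 100000
  shutdownGracePeriodSeconds: 300
- priority: 1000
  shutdownGracePeriodSeconds: 120
- priority: 0
  shutdownGracePeriodSeconds: 60
```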
|
||||||
|
|
||||||
|
<!--
|
||||||
|
In the above case, the pods with `custom-class-b` will go into the same bucket
|
||||||
|
as `custom-class-c` for shutdown.
|
||||||
|
|
||||||
|
If there are no pods in a particular range, then the kubelet does not wait
|
||||||
|
for pods in that priority range. Instead, the kubelet immediately skips to the
|
||||||
|
next priority class value range.
|
||||||
|
-->
|
||||||
|
在上面这个场景中,优先级类为 `custom-class-b` 的 Pod 会与优先级类为 `custom-class-c`
|
||||||
|
的 Pod 在关闭时按相同期限处理。
|
||||||
|
|
||||||
|
如果在特定的范围内不存在 Pod,则 kubelet 不会等待对应优先级范围的 Pod。
|
||||||
|
kubelet 会直接跳到下一个优先级数值范围进行处理。
|
||||||
|
|
||||||
|
<!--
|
||||||
|
If this feature is enabled and no configuration is provided, then no ordering
|
||||||
|
action will be taken.
|
||||||
|
|
||||||
|
Using this feature, requires enabling the
|
||||||
|
`GracefulNodeShutdownBasedOnPodPriority` feature gate, and setting the kubelet
|
||||||
|
config's `ShutdownGracePeriodByPodPriority` to the desired configuration
|
||||||
|
containing the pod priority class values and their respective shutdown periods.
|
||||||
|
-->
|
||||||
|
如果此功能特性被启用,但没有提供配置数据,则不会出现排序操作。
|
||||||
|
|
||||||
|
使用此功能特性需要启用 `GracefulNodeShutdownBasedOnPodPriority` 功能特性,
|
||||||
|
并将 kubelet 配置中的 `ShutdownGracePeriodByPodPriority` 设置为期望的配置,
|
||||||
|
其中包含 Pod 的优先级类数值以及对应的关闭期限。
|
||||||
|
|
||||||
|
<!--
|
||||||
|
Metrics `graceful_shutdown_start_time_seconds` and `graceful_shutdown_end_time_seconds`
|
||||||
|
are emitted under the kubelet subsystem to monitor node shutdowns.
|
||||||
|
-->
|
||||||
|
kubelet 子系统中会生成 `graceful_shutdown_start_time_seconds` 和
|
||||||
|
`graceful_shutdown_end_time_seconds` 度量指标以便监视节点关闭行为。
|
||||||
|
|
||||||
<!--
|
<!--
|
||||||
## Swap memory management {#swap-memory}
|
## Swap memory management {#swap-memory}
|
||||||
|
|
||||||
{{< feature-state state="alpha" for_k8s_version="v1.22" >}}
|
|
||||||
|
|
||||||
Prior to Kubernetes 1.22, nodes did not support the use of swap memory, and a
|
|
||||||
kubelet would by default fail to start if swap was detected on a node. In 1.22
|
|
||||||
onwards, swap memory support can be enabled on a per-node basis.
|
|
||||||
|
|
||||||
To enable swap on a node, the `NodeSwap` feature gate must be enabled on
|
|
||||||
the kubelet, and the `--fail-swap-on` command line flag or `failSwapOn`
|
|
||||||
[configuration setting](/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)
|
|
||||||
must be set to false.
|
|
||||||
|
|
||||||
A user can also optionally configure `memorySwap.swapBehavior` in order to
|
|
||||||
specify how a node will use swap memory. For example,
|
|
||||||
-->
|
-->
|
||||||
## 交换内存管理 {#swap-memory}
|
## 交换内存管理 {#swap-memory}
|
||||||
|
|
||||||
{{< feature-state state="alpha" for_k8s_version="v1.22" >}}
|
{{< feature-state state="alpha" for_k8s_version="v1.22" >}}
|
||||||
|
|
||||||
在 Kubernetes 1.22 之前,节点不支持使用交换内存,并且
|
<!--
|
||||||
默认情况下,如果在节点上检测到交换内存配置,kubelet 将无法启动。 在 1.22
|
Prior to Kubernetes 1.22, nodes did not support the use of swap memory, and a
|
||||||
以后,可以在每个节点的基础上启用交换内存支持。
|
kubelet would by default fail to start if swap was detected on a node. In 1.22
|
||||||
|
onwards, swap memory support can be enabled on a per-node basis.
|
||||||
|
-->
|
||||||
|
在 Kubernetes 1.22 之前,节点不支持使用交换内存,并且默认情况下,
|
||||||
|
如果在节点上检测到交换内存配置,kubelet 将无法启动。
|
||||||
|
在 1.22 以后,可以逐个节点地启用交换内存支持。
|
||||||
|
|
||||||
|
<!--
|
||||||
|
To enable swap on a node, the `NodeSwap` feature gate must be enabled on
|
||||||
|
the kubelet, and the `--fail-swap-on` command line flag or `failSwapOn`
|
||||||
|
[configuration setting](/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)
|
||||||
|
must be set to false.
|
||||||
|
-->
|
||||||
要在节点上启用交换内存,必须启用 kubelet 的 `NodeSwap` 特性门控,
|
||||||
同时使用 `--fail-swap-on` 命令行参数或者将 `failSwapOn`
|
同时使用 `--fail-swap-on` 命令行参数或者将 `failSwapOn`
|
||||||
[配置](/zh/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)
|
[配置](/zh/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)
|
||||||
设置为 false。
|
设置为 false。
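
<!--
Illustrative sketch (not part of the original page): enabling swap support in the kubelet
configuration file.
-->
下面是一个仅作示意的 kubelet 配置片段,用于启用交换内存支持:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  NodeSwap: true
failSwapOn: false
```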
|
||||||
|
|
||||||
|
<!--
|
||||||
|
A user can also optionally configure `memorySwap.swapBehavior` in order to
|
||||||
|
specify how a node will use swap memory. For example,
|
||||||
|
-->
|
||||||
用户还可以选择配置 `memorySwap.swapBehavior` 以指定节点使用交换内存的方式。例如:
|
用户还可以选择配置 `memorySwap.swapBehavior` 以指定节点使用交换内存的方式。例如:
|
||||||
|
|
||||||
```yaml
|
```yaml
|
||||||
|
@ -818,41 +1026,43 @@ The available configuration options for `swapBehavior` are:
|
||||||
use. Workloads on the node not managed by Kubernetes can still swap.
|
use. Workloads on the node not managed by Kubernetes can still swap.
|
||||||
- `UnlimitedSwap`: Kubernetes workloads can use as much swap memory as they
|
- `UnlimitedSwap`: Kubernetes workloads can use as much swap memory as they
|
||||||
request, up to the system limit.
|
request, up to the system limit.
|
||||||
|
-->
|
||||||
|
可用的 `swapBehavior` 的配置选项有:
|
||||||
|
|
||||||
|
- `LimitedSwap`:Kubernetes 工作负载的交换内存会受限制。
|
||||||
|
不受 Kubernetes 管理的节点上的工作负载仍然可以交换。
|
||||||
|
- `UnlimitedSwap`:Kubernetes 工作负载可以使用尽可能多的交换内存请求,
|
||||||
|
一直到达到系统限制为止。
|
||||||
|
|
||||||
|
<!--
|
||||||
If configuration for `memorySwap` is not specified and the feature gate is
|
If configuration for `memorySwap` is not specified and the feature gate is
|
||||||
enabled, by default the kubelet will apply the same behaviour as the
|
enabled, by default the kubelet will apply the same behaviour as the
|
||||||
`LimitedSwap` setting.
|
`LimitedSwap` setting.
|
||||||
|
|
||||||
The behaviour of the `LimitedSwap` setting depends if the node is running with
|
The behaviour of the `LimitedSwap` setting depends if the node is running with
|
||||||
v1 or v2 of control groups (also known as "cgroups"):
|
v1 or v2 of control groups (also known as "cgroups"):
|
||||||
|
-->
|
||||||
|
如果启用了特性门控但是未指定 `memorySwap` 的配置,默认情况下 kubelet 将使用
|
||||||
|
`LimitedSwap` 设置。
|
||||||
|
|
||||||
|
`LimitedSwap` 这种设置的行为取决于节点运行的是 v1 还是 v2 的控制组(也就是 `cgroups`):
|
||||||
|
|
||||||
|
<!--
|
||||||
- **cgroupsv1:** Kubernetes workloads can use any combination of memory and
|
- **cgroupsv1:** Kubernetes workloads can use any combination of memory and
|
||||||
swap, up to the pod's memory limit, if set.
|
swap, up to the pod's memory limit, if set.
|
||||||
- **cgroupsv2:** Kubernetes workloads cannot use swap memory.
|
- **cgroupsv2:** Kubernetes workloads cannot use swap memory.
|
||||||
|
-->
|
||||||
|
- **cgroupsv1:** Kubernetes 工作负载可以使用内存和交换,上限为 Pod 的内存限制值(如果设置了的话)。
|
||||||
|
- **cgroupsv2:** Kubernetes 工作负载不能使用交换内存。
|
||||||
|
|
||||||
|
<!--
|
||||||
For more information, and to assist with testing and provide feedback, please
|
For more information, and to assist with testing and provide feedback, please
|
||||||
see [KEP-2400](https://github.com/kubernetes/enhancements/issues/2400) and its
|
see [KEP-2400](https://github.com/kubernetes/enhancements/issues/2400) and its
|
||||||
[design proposal](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/2400-node-swap/README.md).
|
[design proposal](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/2400-node-swap/README.md).
|
||||||
-->
|
-->
|
||||||
已有的 `swapBehavior` 的配置选项有:
|
如需更多信息以及协助测试和提供反馈,请参见
|
||||||
|
[KEP-2400](https://github.com/kubernetes/enhancements/issues/2400)
|
||||||
- `LimitedSwap`:Kubernetes 工作负载的交换内存会受限制。
|
及其[设计提案](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/2400-node-swap/README.md)。
|
||||||
不受 Kubernetes 管理的节点上的工作负载仍然可以交换。
|
|
||||||
- `UnlimitedSwap`:Kubernetes 工作负载可以使用尽可能多的交换内存
|
|
||||||
请求,一直到系统限制。
|
|
||||||
|
|
||||||
如果启用了特性门控但是未指定 `memorySwap` 的配置,默认情况下 kubelet 将使用
|
|
||||||
`LimitedSwap` 设置。
|
|
||||||
|
|
||||||
`LimitedSwap` 设置的行为还取决于节点运行的是 v1 还是 v2 的控制组(也就是 `cgroups`):
|
|
||||||
|
|
||||||
- **cgroupsv1:** Kubernetes 工作负载可以使用内存和
|
|
||||||
交换,达到 pod 的内存限制(如果设置)。
|
|
||||||
- **cgroupsv2:** Kubernetes 工作负载不能使用交换内存。
|
|
||||||
|
|
||||||
如需更多信息以及协助测试和提供反馈,请
|
|
||||||
参见 [KEP-2400](https://github.com/kubernetes/enhancements/issues/2400) 及其
|
|
||||||
[设计方案](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/2400-node-swap/README.md)。
|
|
||||||
|
|
||||||
## {{% heading "whatsnext" %}}
|
## {{% heading "whatsnext" %}}
|
||||||
|
|
||||||
|
@ -863,10 +1073,10 @@ see [KEP-2400](https://github.com/kubernetes/enhancements/issues/2400) and its
|
||||||
section of the architecture design document.
|
section of the architecture design document.
|
||||||
* Read about [taints and tolerations](/docs/concepts/scheduling-eviction/taint-and-toleration/).
|
* Read about [taints and tolerations](/docs/concepts/scheduling-eviction/taint-and-toleration/).
|
||||||
-->
|
-->
|
||||||
* 了解有关节点[组件](/zh/docs/concepts/overview/components/#node-components)。
|
* 进一步了解节点[组件](/zh/docs/concepts/overview/components/#node-components)。
|
||||||
* 阅读 [Node 的 API 定义](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#node-v1-core)。
|
* 阅读 [Node 的 API 定义](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#node-v1-core)。
|
||||||
* 阅读架构设计文档中有关
|
* 阅读架构设计文档中有关
|
||||||
[节点](https://git.k8s.io/community/contributors/design-proposals/architecture/architecture.md#the-kubernetes-node)
|
[Node](https://git.k8s.io/community/contributors/design-proposals/architecture/architecture.md#the-kubernetes-node)
|
||||||
的章节。
|
的章节。
|
||||||
* 了解[污点和容忍度](/zh/docs/concepts/scheduling-eviction/taint-and-toleration/)。
|
* 了解[污点和容忍度](/zh/docs/concepts/scheduling-eviction/taint-and-toleration/)。
|
||||||
|
|
||||||
|
|
|
@@ -31,6 +31,7 @@ Add-ons 扩展了 Kubernetes 的功能。
* [Canal](https://github.com/tigera/canal/tree/master/k8s-install) unites Flannel and Calico, providing networking and network policy.
* [Cilium](https://github.com/cilium/cilium) is a L3 network and network policy plugin that can enforce HTTP/API/L7 policies transparently. Both routing and overlay/encapsulation mode are supported.
* [CNI-Genie](https://github.com/Huawei-PaaS/CNI-Genie) enables Kubernetes to seamlessly connect to a choice of CNI plugins, such as Calico, Canal, Flannel, Romana, or Weave.
* [Contiv](https://contivpp.io/) provides configurable networking (native L3 using BGP, overlay using vxlan, classic L2, and Cisco-SDN/ACI) for various use cases and a rich policy framework. Contiv project is fully [open sourced](https://github.com/contiv). The [installer](https://github.com/contiv/install) provides both kubeadm and non-kubeadm based installation options.
* [Contrail](http://www.juniper.net/us/en/products-services/sdn/contrail/contrail-networking/), based on [Tungsten Fabric](https://tungsten.io), is an open source, multi-cloud network virtualization and policy management platform. Contrail and Tungsten Fabric are integrated with orchestration systems such as Kubernetes, OpenShift, OpenStack and Mesos, and provide isolation modes for virtual machines, containers/pods and bare metal workloads.
* [Flannel](https://github.com/flannel-io/flannel#deploying-flannel-manually) is an overlay network provider that can be used with Kubernetes.
* [Knitter](https://github.com/ZTE/Knitter/) is a network solution supporting multiple networking in Kubernetes.

@@ -54,6 +55,10 @@ Add-ons 扩展了 Kubernetes 的功能。
  同时支持路由(routing)和覆盖/封装(overlay/encapsulation)模式。
* [CNI-Genie](https://github.com/Huawei-PaaS/CNI-Genie) 使 Kubernetes 无缝连接到一种 CNI 插件,
  例如:Flannel、Calico、Canal、Romana 或者 Weave。
* [Contiv](https://contivpp.io/) 为各种用例和丰富的策略框架提供可配置的网络
  (使用 BGP 的本机 L3、使用 vxlan 的覆盖、标准 L2 和 Cisco-SDN/ACI)。
  Contiv 项目完全[开源](https://github.com/contiv)。
  [安装程序](https://github.com/contiv/install)提供了基于 kubeadm 和非 kubeadm 的安装选项。
* 基于 [Tungsten Fabric](https://tungsten.io) 的
  [Contrail](https://www.juniper.net/us/en/products-services/sdn/contrail/contrail-networking/)
  是一个开源的多云网络虚拟化和策略管理平台,Contrail 和 Tungsten Fabric 与业务流程系统
@@ -45,8 +45,7 @@ and possibly a port number as well; for example: `fictional.registry.example:104

If you don't specify a registry hostname, Kubernetes assumes that you mean the Docker public registry.

After the image name part you can add a _tag_ (in the same way you would when using
commands such as `docker` or `podman`).
Tags let you identify different versions of the same series of images.
-->
## 镜像名称 {#image-names}

@@ -57,8 +56,7 @@ Tags let you identify different versions of the same series of images.

如果你不指定仓库的主机名,Kubernetes 认为你在使用 Docker 公共仓库。

在镜像名称之后,你可以添加一个标签(Tag)(与使用 `docker` 或 `podman` 等命令时的方式相同)。
使用标签能让你辨识同一镜像序列中的不同版本。

<!--
@@ -169,7 +167,7 @@ replace `<image-name>:<tag>` with `<image-name>@<digest>`
将 `<image-name>:<tag>` 替换为 `<image-name>@<digest>`,例如 `image@sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2`。

<!--
When using image tags, if the image registry were to change the code that the tag on that image represents, you might end up with a mix of Pods running the old and new code. An image digest uniquely identifies a specific version of the image, so Kubernetes runs the same code every time it starts a container with that image name and digest specified. Specifying an image by digest fixes the code that you run so that a change at the registry cannot lead to that mix of versions.

There are third-party [admission controllers](/docs/reference/access-authn-authz/admission-controllers/)
that mutate Pods (and pod templates) when they are created, so that the
@@ -179,7 +177,7 @@ running the same code no matter what tag changes happen at the registry.
-->
当使用镜像标签时,如果镜像仓库修改了代码所对应的镜像标签,可能会出现新旧代码混杂在 Pod 中运行的情况。
镜像摘要唯一标识了镜像的特定版本,因此 Kubernetes 每次启动具有指定镜像名称和摘要的容器时,都会运行相同的代码。
通过摘要指定镜像可固定你运行的代码,这样镜像仓库的变化就不会导致版本的混杂。

有一些第三方的[准入控制器](/zh/docs/reference/access-authn-authz/admission-controllers/)
在创建 Pod(和 Pod 模板)时产生变更,这样运行的工作负载就是根据镜像摘要,而不是标签来定义的。
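To make the digest discussion above concrete, here is a minimal, hypothetical Pod sketch
that pins a container image by digest rather than by tag; the image name is a placeholder
and the digest is the same example value used earlier on this page.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: digest-pinned-example
spec:
  containers:
    - name: app
      # Pinning by digest: the registry cannot silently change what this Pod runs.
      image: registry.example/app@sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2
```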
@@ -346,17 +344,12 @@ These options are explained in more detail below.

<!--
### Configuring nodes to authenticate to a private registry

Specific instructions for setting credentials depends on the container runtime and registry you chose to use. You should refer to your solution's documentation for the most accurate information.
-->
### 配置 Node 对私有仓库认证

设置凭据的具体说明取决于你选择使用的容器运行时和仓库。
你应该参考解决方案的文档来获取最准确的信息。

<!--
Default Kubernetes only supports the `auths` and `HttpHeaders` section in Docker configuration.
@@ -368,154 +361,13 @@ Kubernetes 默认仅支持 Docker 配置中的 `auths` 和 `HttpHeaders` 部分
{{< /note >}}

<!--
For an example of configuring a private container image registry, see the
[Pull an Image from a Private Registry](/docs/tasks/configure-pod-container/pull-image-private-registry)
task. That example uses a private registry in Docker Hub.
-->
有关配置私有容器镜像仓库的示例,请参阅
[从私有仓库拉取镜像](/zh/docs/tasks/configure-pod-container/pull-image-private-registry)任务。
该示例使用 Docker Hub 中的私有仓库。

Docker 将私有仓库的密钥保存在 `$HOME/.dockercfg` 或 `$HOME/.docker/config.json` 文件中。
如果你将相同的文件放在下面所列的搜索路径中,kubelet 会在拉取镜像时将其用作凭据数据来源:
<!--
* `{--root-dir:-/var/lib/kubelet}/config.json`
* `{cwd of kubelet}/config.json`
* `${HOME}/.docker/config.json`
* `/.docker/config.json`
* `{--root-dir:-/var/lib/kubelet}/.dockercfg`
* `{cwd of kubelet}/.dockercfg`
* `${HOME}/.dockercfg`
* `/.dockercfg`
-->
* `{--root-dir:-/var/lib/kubelet}/config.json`
* `{kubelet 当前工作目录}/config.json`
* `${HOME}/.docker/config.json`
* `/.docker/config.json`
* `{--root-dir:-/var/lib/kubelet}/.dockercfg`
* `{kubelet 当前工作目录}/.dockercfg`
* `${HOME}/.dockercfg`
* `/.dockercfg`
<!--
You may have to set `HOME=/root` explicitly in the environment of the kubelet process.
-->
{{< note >}}
你可能不得不为 kubelet 进程显式地设置 `HOME=/root` 环境变量。
{{< /note >}}

<!--
Here are the recommended steps to configuring your nodes to use a private registry. In this
example, run these on your desktop/laptop:

1. Run `docker login [server]` for each set of credentials you want to use. This updates `$HOME/.docker/config.json` on your PC.
1. View `$HOME/.docker/config.json` in an editor to ensure it contains only the credentials you want to use.
1. Get a list of your nodes; for example:
   - if you want the names: `nodes=$( kubectl get nodes -o jsonpath='{range.items[*].metadata}{.name} {end}' )`
   - if you want to get the IP addresses: `nodes=$( kubectl get nodes -o jsonpath='{range .items[*].status.addresses[?(@.type=="ExternalIP")]}{.address} {end}' )`
1. Copy your local `.docker/config.json` to one of the search paths list above.
   - for example, to test this out: `for n in $nodes; do scp ~/.docker/config.json root@"$n":/var/lib/kubelet/config.json; done`
-->
推荐采用如下步骤来配置节点以便访问私有仓库。以下示例中,在 PC 或笔记本电脑中操作:

1. 针对你要使用的每组凭据,运行 `docker login [服务器]` 命令。
   这会更新你本地环境中的 `$HOME/.docker/config.json` 文件。
1. 在编辑器中打开查看 `$HOME/.docker/config.json` 文件,确保其中仅包含你要使用的凭据信息。
1. 获得节点列表;例如:

   - 如果想要节点名称:`nodes=$(kubectl get nodes -o jsonpath='{range.items[*].metadata}{.name} {end}')`
   - 如果想要节点 IP:`nodes=$(kubectl get nodes -o jsonpath='{range .items[*].status.addresses[?(@.type=="ExternalIP")]}{.address} {end}')`

1. 将本地的 `.docker/config.json` 拷贝到所有节点,放入如上所列的目录之一:

   - 例如,可以试一下:`for n in $nodes; do scp ~/.docker/config.json root@"$n":/var/lib/kubelet/config.json; done`
<!--
For production clusters, use a configuration management tool so that you can apply this
setting to all the nodes where you need it.
-->
{{< note >}}
对于生产环境的集群,可以使用配置管理工具来将这些设置应用到你所期望的节点上。
{{< /note >}}

<!--
Verify by creating a Pod that uses a private image; for example:
-->
创建使用私有镜像的 Pod 来验证。例如:

```shell
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: private-image-test-1
spec:
  containers:
    - name: uses-private-image
      image: $PRIVATE_IMAGE_NAME
      imagePullPolicy: Always
      command: [ "echo", "SUCCESS" ]
EOF
```

输出类似于:

```
pod/private-image-test-1 created
```
<!--
If everything is working, then, after a few moments, you can run:

```shell
kubectl logs private-image-test-1
```
and see that the command outputs:
```
SUCCESS
```
-->
如果一切顺利,那么一段时间后你可以执行:

```shell
kubectl logs private-image-test-1
```

然后可以看到命令的输出:

```
SUCCESS
```

<!--
If you suspect that the command failed, you can run:
-->
如果你怀疑命令失败了,你可以运行:

```shell
kubectl describe pods/private-image-test-1 | grep 'Failed'
```

<!--
In case of failure, the output is similar to:
-->
如果命令确实失败,输出类似于:

```
Fri, 26 Jun 2015 15:36:13 -0700    Fri, 26 Jun 2015 15:39:13 -0700    19    {kubelet node-i2hq}    spec.containers{uses-private-image}    failed    Failed to pull image "user/privaterepo:v1": Error: image user/privaterepo:v1 not found
```

<!--
You must ensure all nodes in the cluster have the same `.docker/config.json`. Otherwise, pods will run on
some nodes and fail to run on others. For example, if you use node autoscaling, then each instance
template needs to include the `.docker/config.json` or mount a drive that contains it.

All pods will have read access to images in any private registry once private
registry keys are added to the `.docker/config.json`.
-->
你必须确保集群中所有节点的 `.docker/config.json` 文件内容相同。
否则,Pod 在一些节点上能正常运行,而在另一些节点上则无法启动。
例如,如果使用节点自动扩缩,那么每个实例模板都需要包含 `.docker/config.json`,
或者挂载一个包含该文件的驱动器。

在 `.docker/config.json` 中配置了私有仓库密钥后,所有 Pod 都将能读取私有仓库中的镜像。
<!--
### Interpretation of config.json {#config-json}

@@ -686,18 +538,17 @@ Kubernetes 支持在 Pod 中设置容器镜像仓库的密钥。
<!--
#### Creating a Secret with a Docker config

You need to know the username, registry password and client email address for authenticating
to the registry, as well as its hostname.
Run the following command, substituting the appropriate uppercase values:
-->
#### 使用 Docker Config 创建 Secret {#creating-a-secret-with-docker-config}

你需要知道用于向仓库进行身份验证的用户名、密码和客户端电子邮件地址,以及它的主机名。
运行以下命令,注意替换适当的大写值:

```shell
kubectl create secret docker-registry <name> \
  --docker-server=DOCKER_REGISTRY_SERVER \
  --docker-username=DOCKER_USER \
  --docker-password=DOCKER_PASSWORD \
  --docker-email=DOCKER_EMAIL
```
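Once the Secret exists, a Pod can reference it through `imagePullSecrets`. This is a
minimal sketch; the Secret name `my-registry-key` and the image are placeholders.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: private-reg-example
spec:
  containers:
    - name: app
      image: registry.example/app:v1
  imagePullSecrets:
    # name of the docker-registry Secret created above
    - name: my-registry-key
```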
<!--
|
<!--
|
||||||
|
|
|
@@ -213,11 +213,10 @@ handler 需要配置在 runtimes 块中:
```

<!--
See the containerd [CRI Plugin Config Guide](https://github.com/containerd/containerd/blob/main/docs/cri/config.md) for more details.
-->
更详细信息,请查阅 containerd 的
[CRI 插件配置指南](https://github.com/containerd/containerd/blob/main/docs/cri/config.md)。
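For context, the handler name configured in the containerd `runtimes` block is what a
Kubernetes RuntimeClass refers to. A minimal sketch, assuming a hypothetical handler
named `myhandler` was configured as described above:

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: myclass
# must match the handler key configured in the containerd runtimes block
handler: myhandler
```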
#### [cri-o](https://cri-o.io/)
|
#### [cri-o](https://cri-o.io/)
|
||||||
|
|
||||||
|
|
File diff suppressed because it is too large

@@ -884,7 +884,7 @@ in the Pod manifest, and represent parameters to the container runtime.
<!--
Security profiles are control plane mechanisms to enforce specific settings in the Security Context,
as well as other related parameters outside the Security Context. As of July 2021,
[Pod Security Policies](/docs/concepts/security/pod-security-policy/) are deprecated in favor of the
built-in [Pod Security Admission Controller](/docs/concepts/security/pod-security-admission/).

Other alternatives for enforcing security profiles are being developed in the Kubernetes
@@ -565,21 +565,6 @@ a list of search domains of up to 2048 characters.
如果启用 kube-apiserver 和 kubelet 的特性门控 `ExpandedDNSConfig`,Kubernetes 将可以有最多 32 个
搜索域以及一个最多 2048 个字符的搜索域列表。
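As a concrete illustration of the Pod-level DNS settings this section covers, here is a
minimal, hypothetical Pod that sets `dnsPolicy: "None"` together with a custom
`dnsConfig`; the name servers, search domains and image are placeholders.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dns-example
spec:
  containers:
    - name: app
      image: registry.example/app:v1
  dnsPolicy: "None"
  dnsConfig:
    nameservers:
      - 192.0.2.1
    searches:
      - ns1.svc.cluster-domain.example
    options:
      - name: ndots
        value: "2"
```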
<!--
### Feature availability

The availability of Pod DNS Config and DNS Policy "`None`" is shown as below.
-->
### 功能的可用性

Pod DNS 配置和 DNS 策略 "`None`" 的可用版本对应如下所示。

| k8s 版本 | 特性支持          |
| :------: | :---------------: |
| 1.14     | 稳定              |
| 1.10     | Beta(默认启用)  |
| 1.9      | Alpha             |
## {{% heading "whatsnext" %}}
|
## {{% heading "whatsnext" %}}
|
||||||
|
|
||||||
<!--
|
<!--
|
||||||
|
|
|
@@ -121,42 +121,7 @@ An example NetworkPolicy might look like this:

下面是一个 NetworkPolicy 的示例:

{{< codenew file="service/networking/networkpolicy.yaml" >}}

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - ipBlock:
            cidr: 172.17.0.0/16
            except:
              - 172.17.1.0/24
        - namespaceSelector:
            matchLabels:
              project: myproject
        - podSelector:
            matchLabels:
              role: frontend
      ports:
        - protocol: TCP
          port: 6379
  egress:
    - to:
        - ipBlock:
            cidr: 10.0.0.0/24
      ports:
        - protocol: TCP
          port: 5978
```
<!--
POSTing this to the API server for your cluster will have no effect unless your chosen networking solution supports network policy.
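If your network plugin does support NetworkPolicy, the example manifest above can be
applied like any other object. The URL below assumes the standard published location of
the referenced example file and may differ in your environment.

```shell
kubectl apply -f https://k8s.io/examples/service/networking/networkpolicy.yaml
```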
File diff suppressed because it is too large

@@ -311,7 +311,7 @@ delete`](/docs/reference/generated/kubectl/kubectl-commands#delete). Kubectl wi
for it to delete each pod before deleting the ReplicationController itself. If this kubectl
command is interrupted, it can be restarted.

When using the REST API or [client library](/docs/reference/using-api/client-libraries), you need to do the steps explicitly (scale replicas to
0, wait for pod deletions, then delete the ReplicationController).
-->
## 使用 ReplicationController {#working-with-replicationcontrollers}

@@ -323,7 +323,7 @@ When using the REST API or Go client library, you need to do the steps explicitl
kubectl 将 ReplicationController 缩放为 0 并等待以便在删除 ReplicationController 本身之前删除每个 Pod。
如果这个 kubectl 命令被中断,可以重新启动它。

当使用 REST API 或[客户端库](/zh/docs/reference/using-api/client-libraries)时,你需要明确地执行这些步骤(缩放副本为 0、
等待 Pod 删除,之后删除 ReplicationController 资源)。
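A rough sketch of those explicit steps using kubectl rather than a client library; the
ReplicationController name and label selector below are hypothetical.

```shell
# scale the ReplicationController down to zero replicas
kubectl scale rc my-rc --replicas=0

# wait until its pods have been deleted
kubectl wait --for=delete pod -l app=my-app --timeout=120s

# then delete the ReplicationController itself
kubectl delete rc my-rc
```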
<!--
@@ -333,7 +333,7 @@ You can delete a ReplicationController without affecting any of its pods.

Using kubectl, specify the `--cascade=orphan` option to [`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands#delete).

When using the REST API or [client library](/docs/reference/using-api/client-libraries), you can delete the ReplicationController object.
-->
### 只删除 ReplicationController

@@ -341,7 +341,7 @@ When using the REST API or Go client library, simply delete the ReplicationContr
使用 kubectl,为 [`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands#delete) 指定 `--cascade=orphan` 选项。

当使用 REST API 或[客户端库](/zh/docs/reference/using-api/client-libraries)时,只需删除 ReplicationController 对象。

<!--
Once the original is deleted, you can create a new ReplicationController to replace it. As long
@@ -498,7 +498,8 @@ apiVersion: kubescheduler.config.k8s.io/v1beta3
kind: KubeSchedulerConfiguration

profiles:
  - schedulerName: default-scheduler
    pluginConfig:
      - name: PodTopologySpread
        args:
          defaultConstraints:

@@ -589,7 +590,8 @@ apiVersion: kubescheduler.config.k8s.io/v1beta3
kind: KubeSchedulerConfiguration

profiles:
  - schedulerName: default-scheduler
    pluginConfig:
      - name: PodTopologySpread
        args:
          defaultConstraints: []
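For readers who want the surrounding context of the fragments above, this is a sketch of
a complete configuration file; the constraint values and the `defaultingType: List` line
are illustrative assumptions, not part of the original diff.

```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta3
kind: KubeSchedulerConfiguration
profiles:
  - schedulerName: default-scheduler
    pluginConfig:
      - name: PodTopologySpread
        args:
          # example default spreading constraints applied to Pods that define none
          defaultConstraints:
            - maxSkew: 1
              topologyKey: topology.kubernetes.io/zone
              whenUnsatisfiable: ScheduleAnyway
          defaultingType: List
```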
@@ -40,6 +40,8 @@ You need to have these tools installed:
- [Golang](https://golang.org/doc/install) version 1.13+
- [Docker](https://docs.docker.com/engine/installation/)
- [etcd](https://github.com/coreos/etcd/)
- [make](https://www.gnu.org/software/make/)
- [gcc compiler/linker](https://gcc.gnu.org/)
-->
- 你需要安装以下工具:

@@ -47,6 +49,8 @@ You need to have these tools installed:
  - [Golang](https://golang.org/doc/install) 的 1.13 版本或更高
  - [Docker](https://docs.docker.com/engine/installation/)
  - [etcd](https://github.com/coreos/etcd/)
  - [make](https://www.gnu.org/software/make/)
  - [gcc compiler/linker](https://gcc.gnu.org/)
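A quick, optional way to confirm that the tools listed above are available on your PATH;
the version hints in the comments only restate the minimums given in the list.

```shell
go version        # expect go1.13 or newer
docker version
etcd --version
make --version
gcc --version
```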
<!--
- Your $GOPATH environment variable must be set, and the location of `etcd`

@@ -237,8 +241,9 @@ hack/update-generated-protobuf.sh
On branch master
...
	modified:   api/openapi-spec/swagger.json
	modified:   api/openapi-spec/v3/apis__apps__v1_openapi.json
	modified:   pkg/generated/openapi/zz_generated.openapi.go
	modified:   staging/src/k8s.io/api/apps/v1/generated.proto
	modified:   staging/src/k8s.io/api/apps/v1/types_swagger_doc_generated.go
```
@@ -534,7 +534,7 @@ imagePolicy:
  kubeConfigFile: /path/to/kubeconfig/for/backend
  # 以秒计的时长,控制批准请求的缓存时间
  allowTTL: 50
  # 以秒计的时长,控制拒绝请求的缓存时间
  denyTTL: 50
  # 以毫秒计的时长,控制重试间隔
  retryBackoff: 500
@@ -50,7 +50,7 @@ kube-apiserver --authorization-mode=Example,RBAC --<其他选项> --<其他选
The RBAC API declares four kinds of Kubernetes object: _Role_, _ClusterRole_,
_RoleBinding_ and _ClusterRoleBinding_. You can
[describe objects](/docs/concepts/overview/working-with-objects/kubernetes-objects/#understanding-kubernetes-objects),
or amend them, using tools such as `kubectl`, just like any other Kubernetes object.
-->
## API 对象 {#api-overview}
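As a hedged illustration of the kind of object the RBAC API declares, here is a minimal
namespaced Role that grants read access to Pods; the role name is hypothetical.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "watch", "list"]
```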
File diff suppressed because it is too large

@@ -76,6 +76,10 @@ of CertificateAuthority, since CA data will always be passed to the plugin as by
Cluster 中包含允许 exec 插件与 Kubernetes 集群进行通信身份认证时所需的信息。

为了确保该结构体包含与 Kubernetes 集群通信所需的全部内容(就像通过 kubeconfig 一样),
除 CertificateAuthority 之外,其字段应该与 "k8s.io/client-go/tools/clientcmd/api/v1".Cluster 保持一致,
因为 CA 数据始终以字节形式传递给插件。
<table class="table">
|
<table class="table">
|
||||||
<thead><tr><th width="30%"><!--Field-->字段</th><th><!--Description-->描述</th></tr></thead>
|
<thead><tr><th width="30%"><!--Field-->字段</th><th><!--Description-->描述</th></tr></thead>
|
||||||
<tbody>
|
<tbody>
|
||||||
|
@ -167,7 +171,7 @@ clusters:
|
||||||
只是针对不同集群会有一些细节上的差异,例如 audience。
|
只是针对不同集群会有一些细节上的差异,例如 audience。
|
||||||
此字段使得特定于集群的配置可以直接使用集群信息来设置。
|
此字段使得特定于集群的配置可以直接使用集群信息来设置。
|
||||||
不建议使用此字段来保存 Secret 数据,因为 exec 插件的主要优势之一是不需要在
|
不建议使用此字段来保存 Secret 数据,因为 exec 插件的主要优势之一是不需要在
|
||||||
kubeconfig 中保存 Secret 数据。
|
kubeconfig 中保存 Secret 数据。</p>
|
||||||
</td>
|
</td>
|
||||||
</tr>
|
</tr>
|
||||||
</tbody>
|
</tbody>
|
||||||
|
@ -222,6 +226,7 @@ ExecCredentialSpec 保存传输组件所提供的特定于请求和运行时的
|
||||||
<!--
|
<!--
|
||||||
**Appears in:**
|
**Appears in:**
|
||||||
-->
|
-->
|
||||||
|
**出现在:**
|
||||||
|
|
||||||
- [ExecCredential](#client-authentication-k8s-io-v1-ExecCredential)
|
- [ExecCredential](#client-authentication-k8s-io-v1-ExecCredential)
|
||||||
|
|
||||||
|
@ -235,7 +240,7 @@ itself should at least be protected via file permissions.
|
||||||
<p>ExecCredentialStatus 中包含传输组件要使用的凭据。</p>
|
<p>ExecCredentialStatus 中包含传输组件要使用的凭据。</p>
|
||||||
<p>字段 token 和 clientKeyData 都是敏感字段。此数据只能在
|
<p>字段 token 和 clientKeyData 都是敏感字段。此数据只能在
|
||||||
客户端与 exec 插件进程之间使用内存来传递。exec 插件本身至少
|
客户端与 exec 插件进程之间使用内存来传递。exec 插件本身至少
|
||||||
应通过文件访问许可来实施保护。</p>》
|
应通过文件访问许可来实施保护。</p>
|
||||||
|
|
||||||
<table class="table">
|
<table class="table">
|
||||||
<thead><tr><th width="30%"><!--Field-->字段</th><th><!--Description-->描述</th></tr></thead>
|
<thead><tr><th width="30%"><!--Field-->字段</th><th><!--Description-->描述</th></tr></thead>
|
||||||
|
|
|
@ -1,12 +1,22 @@
|
||||||
---
|
---
|
||||||
title: Client Authentication (v1beta1)
|
title: 客户端身份认证(Client Authentication)(v1beta1)
|
||||||
content_type: tool-reference
|
content_type: tool-reference
|
||||||
package: client.authentication.k8s.io/v1beta1
|
package: client.authentication.k8s.io/v1beta1
|
||||||
auto_generated: true
|
auto_generated: true
|
||||||
---
|
---
|
||||||
|
|
||||||
|
<!--
|
||||||
|
title: Client Authentication (v1beta1)
|
||||||
|
content_type: tool-reference
|
||||||
|
package: client.authentication.k8s.io/v1beta1
|
||||||
|
auto_generated: true
|
||||||
|
-->
|
||||||
|
|
||||||
|
|
||||||
|
<!--
|
||||||
## Resource Types
|
## Resource Types
|
||||||
|
-->
|
||||||
|
## 资源类型 {#resource-types}
|
||||||
|
|
||||||
|
|
||||||
- [ExecCredential](#client-authentication-k8s-io-v1beta1-ExecCredential)
|
- [ExecCredential](#client-authentication-k8s-io-v1beta1-ExecCredential)
|
||||||
|
@ -20,11 +30,14 @@ auto_generated: true
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
<!--
|
||||||
ExecCredential is used by exec-based plugins to communicate credentials to
|
ExecCredential is used by exec-based plugins to communicate credentials to
|
||||||
HTTP transports.
|
HTTP transports.
|
||||||
|
-->
|
||||||
|
ExecCredential 由基于 exec 的插件使用,与 HTTP 传输组件沟通凭据信息。
|
||||||
|
|
||||||
<table class="table">
|
<table class="table">
|
||||||
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
|
<thead><tr><th width="30%"><!--Field-->字段</th><th><!--Description-->描述</th></tr></thead>
|
||||||
<tbody>
|
<tbody>
|
||||||
|
|
||||||
<tr><td><code>apiVersion</code><br/>string</td><td><code>client.authentication.k8s.io/v1beta1</code></td></tr>
|
<tr><td><code>apiVersion</code><br/>string</td><td><code>client.authentication.k8s.io/v1beta1</code></td></tr>
|
||||||
|
@ -33,11 +46,13 @@ HTTP transports.
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
<tr><td><code>spec</code> <B>[Required]</B><br/>
|
<tr><td><code>spec</code> <B><!--[Required]-->[必需]</B><br/>
|
||||||
<a href="#client-authentication-k8s-io-v1beta1-ExecCredentialSpec"><code>ExecCredentialSpec</code></a>
|
<a href="#client-authentication-k8s-io-v1beta1-ExecCredentialSpec"><code>ExecCredentialSpec</code></a>
|
||||||
</td>
|
</td>
|
||||||
<td>
|
<td>
|
||||||
Spec holds information passed to the plugin by the transport.</td>
|
<!--Spec holds information passed to the plugin by the transport.-->
|
||||||
|
字段 spec 包含由 HTTP 传输组件传递给插件的信息。
|
||||||
|
</td>
|
||||||
</tr>
|
</tr>
|
||||||
|
|
||||||
|
|
||||||
|
@ -45,8 +60,10 @@ HTTP transports.
|
||||||
<a href="#client-authentication-k8s-io-v1beta1-ExecCredentialStatus"><code>ExecCredentialStatus</code></a>
|
<a href="#client-authentication-k8s-io-v1beta1-ExecCredentialStatus"><code>ExecCredentialStatus</code></a>
|
||||||
</td>
|
</td>
|
||||||
<td>
|
<td>
|
||||||
Status is filled in by the plugin and holds the credentials that the transport
|
<!--Status is filled in by the plugin and holds the credentials that the transport
|
||||||
should use to contact the API.</td>
|
should use to contact the API.-->
|
||||||
|
字段 status 由插件填充,包含传输组件与 API 服务器连接时需要提供的凭据。
|
||||||
|
</td>
|
||||||
</tr>
|
</tr>
|
||||||
|
|
||||||
|
|
||||||
|
@ -60,11 +77,13 @@ should use to contact the API.</td>
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
**Appears in:**
|
<!--**Appears in:**-->
|
||||||
|
**出现在:**
|
||||||
|
|
||||||
- [ExecCredentialSpec](#client-authentication-k8s-io-v1beta1-ExecCredentialSpec)
|
- [ExecCredentialSpec](#client-authentication-k8s-io-v1beta1-ExecCredentialSpec)
|
||||||
|
|
||||||
|
|
||||||
|
<!--
|
||||||
Cluster contains information to allow an exec plugin to communicate
|
Cluster contains information to allow an exec plugin to communicate
|
||||||
with the kubernetes cluster being authenticated to.
|
with the kubernetes cluster being authenticated to.
|
||||||
|
|
||||||
|
@ -72,18 +91,27 @@ To ensure that this struct contains everything someone would need to communicate
|
||||||
with a kubernetes cluster (just like they would via a kubeconfig), the fields
|
with a kubernetes cluster (just like they would via a kubeconfig), the fields
|
||||||
should shadow "k8s.io/client-go/tools/clientcmd/api/v1".Cluster, with the exception
|
should shadow "k8s.io/client-go/tools/clientcmd/api/v1".Cluster, with the exception
|
||||||
of CertificateAuthority, since CA data will always be passed to the plugin as bytes.
|
of CertificateAuthority, since CA data will always be passed to the plugin as bytes.
|
||||||
|
-->
|
||||||
|
Cluster 中包含允许 exec 插件与 Kubernetes 集群进行通信身份认证时所需
|
||||||
|
的信息。
|
||||||
|
|
||||||
|
为了确保该结构体包含与 Kubernetes 集群通信所需的全部内容(就像通过 kubeconfig 一样),
除 CertificateAuthority 之外,其字段应该与 "k8s.io/client-go/tools/clientcmd/api/v1".Cluster 保持一致,
因为 CA 数据始终以字节形式传递给插件。
|
||||||
|
|
||||||
<table class="table">
|
<table class="table">
|
||||||
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
|
<thead><tr><th width="30%"><!--Field-->字段</th><th><!--Description-->描述</th></tr></thead>
|
||||||
<tbody>
|
<tbody>
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
<tr><td><code>server</code> <B>[Required]</B><br/>
|
<tr><td><code>server</code> <B><!--[Required]-->[必需]</B><br/>
|
||||||
<code>string</code>
|
<code>string</code>
|
||||||
</td>
|
</td>
|
||||||
<td>
|
<td>
|
||||||
Server is the address of the kubernetes cluster (https://hostname:port).</td>
|
<!--Server is the address of the kubernetes cluster (https://hostname:port).-->
|
||||||
|
字段 server 是 Kubernetes 集群的地址(https://hostname:port)。
|
||||||
|
</td>
|
||||||
</tr>
|
</tr>
|
||||||
|
|
||||||
|
|
||||||
|
@ -91,9 +119,14 @@ of CertificateAuthority, since CA data will always be passed to the plugin as by
|
||||||
<code>string</code>
|
<code>string</code>
|
||||||
</td>
|
</td>
|
||||||
<td>
|
<td>
|
||||||
|
<!--
|
||||||
TLSServerName is passed to the server for SNI and is used in the client to
|
TLSServerName is passed to the server for SNI and is used in the client to
|
||||||
check server certificates against. If ServerName is empty, the hostname
|
check server certificates against. If ServerName is empty, the hostname
|
||||||
used to contact the server is used.</td>
|
used to contact the server is used.
|
||||||
|
-->
|
||||||
|
tls-server-name 是用来提供给服务器用作 SNI 解析的,客户端以此检查服务器的证书。
|
||||||
|
如此字段为空,则使用链接服务器时使用的主机名。
|
||||||
|
</td>
|
||||||
</tr>
|
</tr>
|
||||||
|
|
||||||
|
|
||||||
|
@ -101,8 +134,13 @@ used to contact the server is used.</td>
|
||||||
<code>bool</code>
|
<code>bool</code>
|
||||||
</td>
|
</td>
|
||||||
<td>
|
<td>
|
||||||
|
<!--
|
||||||
InsecureSkipTLSVerify skips the validity check for the server's certificate.
|
InsecureSkipTLSVerify skips the validity check for the server's certificate.
|
||||||
This will make your HTTPS connections insecure.</td>
|
This will make your HTTPS connections insecure.
|
||||||
|
-->
|
||||||
|
设置此字段之后,会令客户端跳过对服务器端证书的合法性检查。
|
||||||
|
这会使得你的 HTTPS 链接不再安全。
|
||||||
|
</td>
|
||||||
</tr>
|
</tr>
|
||||||
|
|
||||||
|
|
||||||
|
@ -110,8 +148,13 @@ This will make your HTTPS connections insecure.</td>
|
||||||
<code>[]byte</code>
|
<code>[]byte</code>
|
||||||
</td>
|
</td>
|
||||||
<td>
|
<td>
|
||||||
|
<!--
|
||||||
CAData contains PEM-encoded certificate authority certificates.
|
CAData contains PEM-encoded certificate authority certificates.
|
||||||
If empty, system roots should be used.</td>
|
If empty, system roots should be used.
|
||||||
|
-->
|
||||||
|
此字段包含 PEM 编码的证书机构(CA)证书。
|
||||||
|
如果为空,则使用系统的根证书。
|
||||||
|
</td>
|
||||||
</tr>
|
</tr>
|
||||||
|
|
||||||
|
|
||||||
|
@ -119,8 +162,9 @@ If empty, system roots should be used.</td>
|
||||||
<code>string</code>
|
<code>string</code>
|
||||||
</td>
|
</td>
|
||||||
<td>
|
<td>
|
||||||
ProxyURL is the URL to the proxy to be used for all requests to this
|
<!--ProxyURL is the URL to the proxy to be used for all requests to this cluster.-->
|
||||||
cluster.</td>
|
此字段用来设置向集群发送所有请求时要使用的代理服务器。
|
||||||
|
</td>
|
||||||
</tr>
|
</tr>
|
||||||
|
|
||||||
|
|
||||||
|
@ -128,27 +172,40 @@ cluster.</td>
|
||||||
<a href="https://godoc.org/k8s.io/apimachinery/pkg/runtime/#RawExtension"><code>k8s.io/apimachinery/pkg/runtime.RawExtension</code></a>
|
<a href="https://godoc.org/k8s.io/apimachinery/pkg/runtime/#RawExtension"><code>k8s.io/apimachinery/pkg/runtime.RawExtension</code></a>
|
||||||
</td>
|
</td>
|
||||||
<td>
|
<td>
|
||||||
|
<!--
|
||||||
Config holds additional config data that is specific to the exec
|
Config holds additional config data that is specific to the exec
|
||||||
plugin with regards to the cluster being authenticated to.
|
plugin with regards to the cluster being authenticated to.
|
||||||
|
|
||||||
This data is sourced from the clientcmd Cluster object's
|
This data is sourced from the clientcmd Cluster object's
|
||||||
extensions[client.authentication.k8s.io/exec] field:
|
extensions[client.authentication.k8s.io/exec] field:
|
||||||
|
-->
|
||||||
|
<p>此字段包含一些额外的、特定于 exec 插件和所连接的集群的数据,</p>
|
||||||
|
<p>此字段来自于 clientcmd 集群对象的 <code>extensions[client.authentication.k8s.io/exec]</code>
|
||||||
|
字段:</p>
|
||||||
|
<pre>
|
||||||
clusters:
|
clusters:
|
||||||
- name: my-cluster
|
- name: my-cluster
|
||||||
cluster:
|
cluster:
|
||||||
...
|
...
|
||||||
extensions:
|
extensions:
|
||||||
- name: client.authentication.k8s.io/exec # reserved extension name for per cluster exec config
|
- name: client.authentication.k8s.io/exec # 针对每个集群 exec 配置所预留的扩展名称
|
||||||
extension:
|
extension:
|
||||||
audience: 06e3fbd18de8 # arbitrary config
|
audience: 06e3fbd18de8 # 任意配置信息
|
||||||
|
</pre>
|
||||||
|
<!--
|
||||||
In some environments, the user config may be exactly the same across many clusters
|
In some environments, the user config may be exactly the same across many clusters
|
||||||
(i.e. call this exec plugin) minus some details that are specific to each cluster
|
(i.e. call this exec plugin) minus some details that are specific to each cluster
|
||||||
such as the audience. This field allows the per cluster config to be directly
|
such as the audience. This field allows the per cluster config to be directly
|
||||||
specified with the cluster info. Using this field to store secret data is not
|
specified with the cluster info. Using this field to store secret data is not
|
||||||
recommended as one of the prime benefits of exec plugins is that no secrets need
|
recommended as one of the prime benefits of exec plugins is that no secrets need
|
||||||
to be stored directly in the kubeconfig.</td>
|
to be stored directly in the kubeconfig.
|
||||||
|
-->
|
||||||
|
<p>在某些环境中,用户配置可能对很多集群而言都完全一样(即调用同一个 exec 插件),
|
||||||
|
只是针对不同集群会有一些细节上的差异,例如 audience。
|
||||||
|
此字段使得特定于集群的配置可以直接使用集群信息来设置。
|
||||||
|
不建议使用此字段来保存 Secret 数据,因为 exec 插件的主要优势之一是不需要在
|
||||||
|
kubeconfig 中保存 Secret 数据。</p>
|
||||||
|
</td>
|
||||||
</tr>
|
</tr>
|
||||||
|
|
||||||
|
|
||||||
|
@ -162,16 +219,20 @@ to be stored directly in the kubeconfig.</td>
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
**Appears in:**
|
<!-- **Appears in:** -->
|
||||||
|
**出现在:**
|
||||||
|
|
||||||
- [ExecCredential](#client-authentication-k8s-io-v1beta1-ExecCredential)
|
- [ExecCredential](#client-authentication-k8s-io-v1beta1-ExecCredential)
|
||||||
|
|
||||||
|
|
||||||
|
<!--
|
||||||
ExecCredentialSpec holds request and runtime specific information provided by
|
ExecCredentialSpec holds request and runtime specific information provided by
|
||||||
the transport.
|
the transport.
|
||||||
|
-->
|
||||||
|
ExecCredentialSpec 保存传输组件所提供的特定于请求和运行时的信息。
|
||||||
|
|
||||||
<table class="table">
|
<table class="table">
|
||||||
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
|
<thead><tr><th width="30%"><!--Field-->字段</th><th><!--Description-->描述</th></tr></thead>
|
||||||
<tbody>
|
<tbody>
|
||||||
|
|
||||||
|
|
||||||
|
@ -180,10 +241,16 @@ the transport.
|
||||||
<a href="#client-authentication-k8s-io-v1beta1-Cluster"><code>Cluster</code></a>
|
<a href="#client-authentication-k8s-io-v1beta1-Cluster"><code>Cluster</code></a>
|
||||||
</td>
|
</td>
|
||||||
<td>
|
<td>
|
||||||
|
<!--
|
||||||
Cluster contains information to allow an exec plugin to communicate with the
|
Cluster contains information to allow an exec plugin to communicate with the
|
||||||
kubernetes cluster being authenticated to. Note that Cluster is non-nil only
|
kubernetes cluster being authenticated to. Note that Cluster is non-nil only
|
||||||
when provideClusterInfo is set to true in the exec provider config (i.e.,
|
when provideClusterInfo is set to true in the exec provider config (i.e.,
|
||||||
ExecConfig.ProvideClusterInfo).</td>
|
ExecConfig.ProvideClusterInfo).
|
||||||
|
-->
|
||||||
|
此字段中包含的信息使得 exec 插件能够与要访问的 Kubernetes 集群通信。
|
||||||
|
注意,cluster 字段只有在 exec 驱动的配置中 provideClusterInfo
|
||||||
|
(即:ExecConfig.ProvideClusterInfo)被设置为 true 时才不能为空。
|
||||||
|
</td>
|
||||||
</tr>
|
</tr>
|
||||||
|
|
||||||
|
|
||||||
|
@ -197,20 +264,27 @@ ExecConfig.ProvideClusterInfo).</td>
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
**Appears in:**
|
<!-- **Appears in:** -->
|
||||||
|
**出现在:**
|
||||||
|
|
||||||
- [ExecCredential](#client-authentication-k8s-io-v1beta1-ExecCredential)
|
- [ExecCredential](#client-authentication-k8s-io-v1beta1-ExecCredential)
|
||||||
|
|
||||||
|
|
||||||
|
<!--
|
||||||
ExecCredentialStatus holds credentials for the transport to use.
|
ExecCredentialStatus holds credentials for the transport to use.
|
||||||
|
|
||||||
Token and ClientKeyData are sensitive fields. This data should only be
|
Token and ClientKeyData are sensitive fields. This data should only be
|
||||||
transmitted in-memory between client and exec plugin process. Exec plugin
|
transmitted in-memory between client and exec plugin process. Exec plugin
|
||||||
itself should at least be protected via file permissions.
|
itself should at least be protected via file permissions.
|
||||||
|
-->
|
||||||
|
<p>ExecCredentialStatus 中包含传输组件要使用的凭据。</p>
|
||||||
|
|
||||||
|
<p>字段 token 和 clientKeyData 都是敏感字段。
|
||||||
|
此数据只能在客户端与 exec 插件进程之间使用内存来传递。
|
||||||
|
exec 插件本身至少应通过文件访问许可来实施保护。</p>
|
||||||
|
|
||||||
<table class="table">
|
<table class="table">
|
||||||
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
|
<thead><tr><th width="30%"><!--Field-->字段</th><th><!--Description-->描述</th></tr></thead>
|
||||||
<tbody>
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
@ -218,31 +292,39 @@ itself should at least be protected via file permissions.
|
||||||
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#time-v1-meta"><code>meta/v1.Time</code></a>
|
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#time-v1-meta"><code>meta/v1.Time</code></a>
|
||||||
</td>
|
</td>
|
||||||
<td>
|
<td>
|
||||||
ExpirationTimestamp indicates a time when the provided credentials expire.</td>
|
<!-- ExpirationTimestamp indicates a time when the provided credentials expire. -->
|
||||||
|
给出所提供的凭据到期的时间。
|
||||||
|
</td>
|
||||||
</tr>
|
</tr>
|
||||||
|
|
||||||
|
|
||||||
<tr><td><code>token</code> <B>[Required]</B><br/>
|
<tr><td><code>token</code> <B><!--[Required]-->[必需]</B><br/>
|
||||||
<code>string</code>
|
<code>string</code>
|
||||||
</td>
|
</td>
|
||||||
<td>
|
<td>
|
||||||
Token is a bearer token used by the client for request authentication.</td>
|
<!-- Token is a bearer token used by the client for request authentication. -->
|
||||||
|
客户端用做请求身份认证的持有者令牌。
|
||||||
|
</td>
|
||||||
</tr>
|
</tr>
|
||||||
|
|
||||||
|
|
||||||
<tr><td><code>clientCertificateData</code> <B>[Required]</B><br/>
|
<tr><td><code>clientCertificateData</code> <B><!--[Required]-->[必需]</B><br/>
|
||||||
<code>string</code>
|
<code>string</code>
|
||||||
</td>
|
</td>
|
||||||
<td>
|
<td>
|
||||||
<!-- PEM-encoded client TLS certificates (including intermediates, if any). -->
PEM 编码的客户端 TLS 证书(如果有中间证书,也会包含)。
||||||
|
</td>
|
||||||
</tr>
|
</tr>
|
||||||
|
|
||||||
|
|
||||||
<tr><td><code>clientKeyData</code> <B>[Required]</B><br/>
|
<tr><td><code>clientKeyData</code> <B><!--[Required]-->[必需]</B><br/>
|
||||||
<code>string</code>
|
<code>string</code>
|
||||||
</td>
|
</td>
|
||||||
<td>
|
<td>
|
||||||
PEM-encoded private key for the above certificate.</td>
|
<!-- PEM-encoded private key for the above certificate. -->
|
||||||
|
与上述证书对应的、PEM 编码的私钥。
|
||||||
|
</td>
|
||||||
</tr>
|
</tr>
|
||||||
|
|
||||||
|
|
||||||
|
|
|
@ -0,0 +1,5 @@
|
||||||
|
---
|
||||||
|
title: "身份认证资源"
|
||||||
|
weight: 4
|
||||||
|
auto_generated: true
|
||||||
|
---
|
|
@ -0,0 +1,7 @@
|
||||||
|
---
|
||||||
|
title: "鉴权资源"
|
||||||
|
weight: 5
|
||||||
|
auto_generated: true
|
||||||
|
---
|
||||||
|
|
||||||
|
|
|
@ -0,0 +1,7 @@
|
||||||
|
---
|
||||||
|
title: "集群资源"
|
||||||
|
weight: 8
|
||||||
|
auto_generated: true
|
||||||
|
---
|
||||||
|
|
||||||
|
|
|
@ -0,0 +1,400 @@
|
||||||
|
---
|
||||||
|
api_metadata:
|
||||||
|
apiVersion: ""
|
||||||
|
import: "k8s.io/apimachinery/pkg/apis/meta/v1"
|
||||||
|
kind: "ObjectMeta"
|
||||||
|
content_type: "api_reference"
|
||||||
|
description: "ObjectMeta 是所有持久化资源必须具有的元数据,其中包括用户必须创建的所有对象。"
|
||||||
|
title: "ObjectMeta"
|
||||||
|
weight: 7
|
||||||
|
auto_generated: true
|
||||||
|
---
|
||||||
|
|
||||||
|
<!--
|
||||||
|
api_metadata:
|
||||||
|
apiVersion: ""
|
||||||
|
import: "k8s.io/apimachinery/pkg/apis/meta/v1"
|
||||||
|
kind: "ObjectMeta"
|
||||||
|
content_type: "api_reference"
|
||||||
|
description: "ObjectMeta is metadata that all persisted resources must have, which includes all objects users must create."
|
||||||
|
title: "ObjectMeta"
|
||||||
|
weight: 7
|
||||||
|
auto_generated: true
|
||||||
|
-->
|
||||||
|
|
||||||
|
<!--
|
||||||
|
The file is auto-generated from the Go source code of the component using a generic
|
||||||
|
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
|
||||||
|
to generate the reference documentation, please read
|
||||||
|
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
|
||||||
|
To update the reference content, please follow the
|
||||||
|
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
|
||||||
|
guide. You can file document formatting bugs against the
|
||||||
|
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
|
||||||
|
-->
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
`import "k8s.io/apimachinery/pkg/apis/meta/v1"`
|
||||||
|
|
||||||
|
|
||||||
|
<!--
|
||||||
|
ObjectMeta is metadata that all persisted resources must have, which includes all objects users must create.
|
||||||
|
-->
|
||||||
|
ObjectMeta 是所有持久化资源必须具有的元数据,其中包括用户必须创建的所有对象。
|
||||||
|
|
||||||
|
<hr>
|
||||||
|
|
||||||
|
- **name** (string)
|
||||||
|
|
||||||
|
<!--
|
||||||
|
Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/identifiers#names
|
||||||
|
-->
|
||||||
|
name 在命名空间内必须是唯一的。创建资源时需要,尽管某些资源可能允许客户端请求自动地生成适当的名称。
|
||||||
|
名称主要用于创建幂等性和配置定义。无法更新。
|
||||||
|
更多信息:http://kubernetes.io/docs/user-guide/identifiers#names
|
||||||
|
|
||||||
|
|
||||||
|
- **generateName** (string)
|
||||||
|
|
||||||
|
<!--
|
||||||
|
GenerateName is an optional prefix, used by the server, to generate a unique name ONLY IF the Name field has not been provided. If this field is used, the name returned to the client will be different than the name passed. This value will also be combined with a unique suffix. The provided value has the same validation rules as the Name field, and may be truncated by the length of the suffix required to make the value unique on the server.
|
||||||
|
-->
|
||||||
|
generateName 是一个可选前缀,由服务器使用,**仅在**未提供 name 字段时生成唯一名称。
|
||||||
|
如果使用此字段,则返回给客户端的名称将与传递的名称不同。该值还将与唯一的后缀组合。
|
||||||
|
提供的值与 name 字段具有相同的验证规则,并且可能会根据所需的后缀长度被截断,以使该值在服务器上唯一。
|
||||||
|
|
||||||
|
<!--
|
||||||
|
If this field is specified and the generated name exists, the server will NOT return a 409 - instead, it will either return 201 Created or 500 with Reason ServerTimeout indicating a unique name could not be found in the time allotted, and the client should retry (optionally after the time indicated in the Retry-After header).
|
||||||
|
|
||||||
|
Applied only if Name is not specified. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency
|
||||||
|
-->
|
||||||
|
如果指定了此字段并且生成的名称存在,则服务器将不会返回 409 ——相反,它将返回 201 Created 或 500,
|
||||||
|
原因是 ServerTimeout 指示在分配的时间内找不到唯一名称,客户端应重试(可选,在 Retry-After 标头中指定的时间之后)。
|
||||||
|
|
||||||
|
仅在未指定 name 时应用。更多信息:https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency
|
||||||
|
|
||||||
|
- **namespace** (string)
|
||||||
|
|
||||||
|
<!--
|
||||||
|
Namespace defines the space within which each name must be unique. An empty namespace is equivalent to the "default" namespace, but "default" is the canonical representation. Not all objects are required to be scoped to a namespace - the value of this field for those objects will be empty.
|
||||||
|
|
||||||
|
Must be a DNS_LABEL. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/namespaces
|
||||||
|
-->
|
||||||
|
namespace 定义了一个值空间,其中每个名称必须唯一。空命名空间相当于 “default” 命名空间,但 “default” 是规范表示。
|
||||||
|
并非所有对象都需要限定在命名空间中——这些对象的此字段的值将为空。
|
||||||
|
|
||||||
|
必须是 DNS_LABEL。无法更新。更多信息:http://kubernetes.io/docs/user-guide/namespaces
|
||||||
|
|
||||||
|
- **labels** (map[string]string)
|
||||||
|
|
||||||
|
<!--
|
||||||
|
Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: http://kubernetes.io/docs/user-guide/labels
|
||||||
|
-->
|
||||||
|
可用于组织和分类(确定范围和选择)对象的字符串键和值的映射。
|
||||||
|
可以匹配 ReplicationControllers 和 Service 的选择器。更多信息:http://kubernetes.io/docs/user-guide/labels
|
||||||
|
|
||||||
|
- **annotations** (map[string]string)
|
||||||
|
|
||||||
|
<!--
|
||||||
|
Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects. More info: http://kubernetes.io/docs/user-guide/annotations
|
||||||
|
-->
|
||||||
|
annotations 是一个非结构化的键值映射,存储在资源中,可以由外部工具设置以存储和检索任意元数据。
|
||||||
|
它们不可查询,在修改对象时应保留。更多信息:http://kubernetes.io/docs/user-guide/annotations
|
||||||
|
|
||||||
|
|
||||||
|
<!-- ### System {#System} -->
|
||||||
|
### 系统字段 {#System}
|
||||||
|
|
||||||
|
|
||||||
|
- **finalizers** ([]string)
|
||||||
|
|
||||||
|
<!--
|
||||||
|
Must be empty before the object is deleted from the registry. Each entry is an identifier for the responsible component that will remove the entry from the list. If the deletionTimestamp of the object is non-nil, entries in this list can only be removed. Finalizers may be processed and removed in any order. Order is NOT enforced because it introduces significant risk of stuck finalizers. finalizers is a shared field, any actor with permission can reorder it. If the finalizer list is processed in order, then this can lead to a situation in which the component responsible for the first finalizer in the list is waiting for a signal (field value, external system, or other) produced by a component responsible for a finalizer later in the list, resulting in a deadlock. Without enforced ordering finalizers are free to order amongst themselves and are not vulnerable to ordering changes in the list.
|
||||||
|
-->
|
||||||
|
在从注册表中删除对象之前该字段必须为空。
|
||||||
|
每个条目都是负责的组件的标识符,各组件将从列表中删除自己对应的条目。
|
||||||
|
如果对象的 deletionTimestamp 非空,则只能删除此列表中的条目。
|
||||||
|
终结器可以按任何顺序处理和删除。**没有**按照顺序执行,
|
||||||
|
因为它引入了终结器卡住的重大风险。finalizers 是一个共享字段,
|
||||||
|
任何有权限的参与者都可以对其进行重新排序。如果按顺序处理终结器列表,
|
||||||
|
那么这可能导致列表中第一个负责终结器的组件正在等待列表中靠后负责终结器的组件产生的信号(字段值、外部系统或其他),
|
||||||
|
从而导致死锁。在没有强制排序的情况下,终结者可以在它们之间自由排序,
|
||||||
|
并且不容易受到列表中排序更改的影响。
|
||||||
|
|
||||||
|
- **managedFields** ([]ManagedFieldsEntry)
|
||||||
|
|
||||||
|
<!--
|
||||||
|
ManagedFields maps workflow-id and version to the set of fields that are managed by that workflow. This is mostly for internal housekeeping, and users typically shouldn't need to set or understand this field. A workflow can be the user's name, a controller's name, or the name of a specific apply path like "ci-cd". The set of fields is always in the version that the workflow used when modifying the object.
|
||||||
|
-->
|
||||||
|
managedFields 将 workflow-id 和版本映射到由该工作流管理的字段集。
|
||||||
|
这主要用于内部管理,用户通常不需要设置或理解该字段。
|
||||||
|
工作流可以是用户名、控制器名或特定应用路径的名称,如 “ci-cd”。
|
||||||
|
字段集始终存在于修改对象时工作流使用的版本。
|
||||||
|
|
||||||
|
<a name="ManagedFieldsEntry"></a>
|
||||||
|
<!--
|
||||||
|
*ManagedFieldsEntry is a workflow-id, a FieldSet and the group version of the resource that the fieldset applies to.*
|
||||||
|
-->
|
||||||
|
ManagedFieldsEntry 是一个 workflow-id,一个 FieldSet,也是该字段集适用的资源的组版本。
|
||||||
|
|
||||||
|
- **managedFields.apiVersion** (string)
|
||||||
|
|
||||||
|
<!--
|
||||||
|
APIVersion defines the version of this resource that this field set applies to. The format is "group/version" just like the top-level APIVersion field. It is necessary to track the version of a field set because it cannot be automatically converted.
|
||||||
|
-->
|
||||||
|
apiVersion 定义此字段集适用的资源的版本。
|
||||||
|
格式是 “group/version”,就像顶级 apiVersion 字段一样。
|
||||||
|
必须跟踪字段集的版本,因为它不能自动转换。
|
||||||
|
|
||||||
|
- **managedFields.fieldsType** (string)
|
||||||
|
|
||||||
|
<!--
|
||||||
|
FieldsType is the discriminator for the different fields format and version. There is currently only one possible value: "FieldsV1"
|
||||||
|
-->
|
||||||
|
FieldsType 是不同字段格式和版本的鉴别器。
|
||||||
|
目前只有一个可能的值:“FieldsV1”
|
||||||
|
|
||||||
|
- **managedFields.fieldsV1** (FieldsV1)
|
||||||
|
|
||||||
|
<!-- FieldsV1 holds the first JSON version format as described in the "FieldsV1" type. -->
|
||||||
|
FieldsV1 包含类型 “FieldsV1” 中描述的第一个 JSON 版本格式。
|
||||||
|
|
||||||
|
<a name="FieldsV1"></a>
|
||||||
|
<!--
|
||||||
|
*FieldsV1 stores a set of fields in a data structure like a Trie, in JSON format.
|
||||||
|
|
||||||
|
Each key is either a '.' representing the field itself, and will always map to an empty set,
|
||||||
|
or a string representing a sub-field or item. The string will follow one of these four formats:
|
||||||
|
'f:<name>', where <name> is the name of a field in a struct, or key in a map
|
||||||
|
'v:<value>', where <value> is the exact json formatted value of a list item
|
||||||
|
'i:<index>', where <index> is position of a item in a list
|
||||||
|
'k:<keys>', where <keys> is a map of a list item's key fields to their unique values
|
||||||
|
If a key maps to an empty Fields value, the field that key represents is part of the set.
|
||||||
|
|
||||||
|
The exact format is defined in sigs.k8s.io/structured-merge-diff*
|
||||||
|
-->
|
||||||
|
FieldsV1 以 JSON 格式将一组字段存储在像 Trie 这样的数据结构中。
|
||||||
|
|
||||||
|
每个键或是 `.` 表示字段本身,并且始终映射到一个空集,
|
||||||
|
或是一个表示子字段或元素的字符串。该字符串将遵循以下四种格式之一:
|
||||||
|
1. `f:<name>`,其中 `<name>` 是结构中字段的名称,或映射中的键
|
||||||
|
2. `v:<value>`,其中 `<value>` 是列表项的精确 json 格式值
|
||||||
|
3. `i:<index>`,其中 `<index>` 是列表中项目的位置
|
||||||
|
4. `k:<keys>`,其中 `<keys>` 是列表项的关键字段到其唯一值的映射
|
||||||
|
如果一个键映射到一个空的 Fields 值,则该键表示的字段是集合的一部分。
|
||||||
|
|
||||||
|
确切的格式在 sigs.k8s.io/structured-merge-diff 中定义。
|
||||||
|
|
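下面是一个 managedFields 条目的示意片段,用来说明上述 `f:` 键的形式(管理者名称与具体字段均为示例,实际内容由服务器生成):

```yaml
managedFields:
  - manager: kubectl-client-side-apply   # 示例管理者名称
    operation: Update
    apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:labels:
          f:app: {}                      # “f:<name>” 表示结构体字段或 map 键
      f:data:
        f:config.yaml: {}                # 空集合表示该字段属于此管理者的字段集
```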
||||||
|
- **managedFields.manager** (string)
|
||||||
|
|
||||||
|
<!-- Manager is an identifier of the workflow managing these fields. -->
|
||||||
|
manager 是管理这些字段的工作流的标识符。
|
||||||
|
|
||||||
|
- **managedFields.operation** (string)
|
||||||
|
|
||||||
|
<!--
|
||||||
|
Operation is the type of operation which lead to this ManagedFieldsEntry being created. The only valid values for this field are 'Apply' and 'Update'.
|
||||||
|
-->
|
||||||
|
operation 是导致创建此 managedFields 表项的操作类型。
|
||||||
|
此字段仅有的合法值是 “Apply” 和 “Update”。
|
||||||
|
|
||||||
|
- **managedFields.subresource** (string)
|
||||||
|
|
||||||
|
<!--
|
||||||
|
Subresource is the name of the subresource used to update that object, or empty string if the object was updated through the main resource. The value of this field is used to distinguish between managers, even if they share the same name. For example, a status update will be distinct from a regular update using the same manager name. Note that the APIVersion field is not related to the Subresource field and it always corresponds to the version of the main resource.
|
||||||
|
-->
|
||||||
|
subresource 是用于更新该对象的子资源的名称,如果对象是通过主资源更新的,则为空字符串。
|
||||||
|
该字段的值用于区分管理者,即使他们共享相同的名称。例如,状态更新将不同于使用相同管理者名称的常规更新。
|
||||||
|
请注意,apiVersion 字段与 subresource 字段无关,它始终对应于主资源的版本。
|
||||||
|
|
||||||
|
- **managedFields.time** (Time)
|
||||||
|
|
||||||
|
<!--
|
||||||
|
Time is timestamp of when these fields were set. It should always be empty if Operation is 'Apply'
|
||||||
|
-->
|
||||||
|
time 是设置这些字段的时间戳。如果 operation 为 “Apply”,则它应始终为空
|
||||||
|
|
||||||
|
<a name="Time"></a>
|
||||||
|
<!--
|
||||||
|
*Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON. Wrappers are provided for many of the factory methods that the time package offers.*
|
||||||
|
-->
|
||||||
|
time 是 time.Time 的包装类,支持正确地序列化为 YAML 和 JSON。
|
||||||
|
为 time 包提供的许多工厂方法提供了包装类。
|
||||||
|
|
||||||
|
|
||||||
|
- **ownerReferences** ([]OwnerReference)
|
||||||
|
|
||||||
|
<!--
|
||||||
|
*Patch strategy: merge on key `uid`*
|
||||||
|
|
||||||
|
List of objects depended by this object. If ALL objects in the list have been deleted, this object will be garbage collected. If this object is managed by a controller, then an entry in this list will point to this controller, with the controller field set to true. There cannot be more than one managing controller.
|
||||||
|
-->
|
||||||
|
补丁策略:在键 `uid` 上执行合并操作
|
||||||
|
|
||||||
|
此对象所依赖的对象列表。如果列表中的所有对象都已被删除,则该对象将被垃圾回收。
|
||||||
|
如果此对象由控制器管理,则此列表中的条目将指向此控制器,controller 字段设置为 true。
|
||||||
|
管理控制器不能超过一个。
|
||||||
|
|
||||||
|
|
||||||
|
<a name="OwnerReference"></a>
|
||||||
|
<!--
|
||||||
|
*OwnerReference contains enough information to let you identify an owning object. An owning object must be in the same namespace as the dependent, or be cluster-scoped, so there is no namespace field.*
|
||||||
|
-->
|
||||||
|
OwnerReference 包含足够可以让你识别拥有对象的信息。
|
||||||
|
拥有对象必须与依赖对象位于同一命名空间中,或者是集群作用域的,因此没有命名空间字段。
|
||||||
|
|
||||||
|
- **ownerReferences.apiVersion** (string),<!-- required -->必选
|
||||||
|
<!-- API version of the referent. -->
|
||||||
|
被引用资源的 API 版本。
|
||||||
|
|
||||||
|
- **ownerReferences.kind** (string),<!-- required -->必选
|
||||||
|
|
||||||
|
<!-- Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds -->
|
||||||
|
被引用资源的类别。更多信息:https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
|
||||||
|
|
||||||
|
- **ownerReferences.name** (string),<!-- required -->必选
|
||||||
|
|
||||||
|
<!-- Name of the referent. More info: http://kubernetes.io/docs/user-guide/identifiers#names -->
|
||||||
|
被引用资源的名称。更多信息:http://kubernetes.io/docs/user-guide/identifiers#names
|
||||||
|
|
||||||
|
- **ownerReferences.uid** (string),<!-- required -->必选
|
||||||
|
|
||||||
|
<!-- UID of the referent. More info: http://kubernetes.io/docs/user-guide/identifiers#uids -->
|
||||||
|
被引用资源的 uid。更多信息:http://kubernetes.io/docs/user-guide/identifiers#uids
|
||||||
|
|
||||||
|
- **ownerReferences.blockOwnerDeletion** (boolean)
|
||||||
|
|
||||||
|
<!--
|
||||||
|
If true, AND if the owner has the "foregroundDeletion" finalizer, then the owner cannot be deleted from the key-value store until this reference is removed. Defaults to false. To set this field, a user needs "delete" permission of the owner, otherwise 422 (Unprocessable Entity) will be returned.
|
||||||
|
-->
|
||||||
|
如果为 true,**并且**如果所有者具有 “foregroundDeletion” 终结器,
|
||||||
|
则在删除此引用之前,无法从键值存储中删除所有者。
|
||||||
|
默认为 false。要设置此字段,用户需要所有者的 “delete” 权限,
|
||||||
|
否则将返回 422 (Unprocessable Entity)。
|
||||||
|
|
||||||
|
- **ownerReferences.controller** (boolean)
|
||||||
|
|
||||||
|
<!-- If true, this reference points to the managing controller. -->
|
||||||
|
如果为 true,则此引用指向管理的控制器。
|
||||||
|
|
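下面是一个带有 ownerReferences 的 metadata 示意片段(属主名称与 uid 均为示例值),展示了上述各必选字段以及 controller、blockOwnerDeletion 的用法:

```yaml
metadata:
  name: demo-pod                                   # 示例名称(假设)
  ownerReferences:
    - apiVersion: apps/v1
      kind: ReplicaSet
      name: demo-rs                                # 假设的属主对象名称
      uid: d9607e19-f88f-11e6-a518-42010a800195    # 示例 UID
      controller: true                             # 指向负责管理此对象的控制器
      blockOwnerDeletion: true                     # 在该引用被移除前阻止前台删除属主
```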
||||||
|
<!-- ### Read-only {#Read-only} -->
|
||||||
|
### 只读字段 {#Read-only}
|
||||||
|
|
||||||
|
|
||||||
|
- **creationTimestamp** (Time)
|
||||||
|
|
||||||
|
<!--
|
||||||
|
CreationTimestamp is a timestamp representing the server time when this object was created. It is not guaranteed to be set in happens-before order across separate operations. Clients may not set this value. It is represented in RFC3339 form and is in UTC.
|
||||||
|
|
||||||
|
Populated by the system. Read-only. Null for lists. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
|
||||||
|
-->
|
||||||
|
creationTimestamp 是一个时间戳,表示创建此对象时的服务器时间。
|
||||||
|
不保证在不同的操作之间按 happens-before(先后发生)顺序设置此字段。
|
||||||
|
客户端不得设置此值。它以 RFC3339 形式表示,并采用 UTC。
|
||||||
|
|
||||||
|
由系统填充。只读。对于列表(list)对象,此字段为 null。更多信息:https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
|
||||||
|
|
||||||
|
<a name="Time"></a>
|
||||||
|
<!--
|
||||||
|
*Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON. Wrappers are provided for many of the factory methods that the time package offers.*
|
||||||
|
-->
|
||||||
|
time 是 time.Time 的包装类,支持正确地序列化为 YAML 和 JSON。
|
||||||
|
为 time 包提供的许多工厂方法提供了包装类。
|
||||||
|
|
||||||
|
- **deletionGracePeriodSeconds** (int64)
|
||||||
|
|
||||||
|
<!--
|
||||||
|
Number of seconds allowed for this object to gracefully terminate before it will be removed from the system. Only set when deletionTimestamp is also set. May only be shortened. Read-only.
|
||||||
|
-->
|
||||||
|
此对象从系统中删除之前允许正常终止的秒数。
|
||||||
|
仅当设置了 deletionTimestamp 时才设置。
|
||||||
|
只能缩短。只读。
|
||||||
|
|
||||||
|
- **deletionTimestamp** (Time)
|
||||||
|
|
||||||
|
<!--
|
||||||
|
DeletionTimestamp is RFC 3339 date and time at which this resource will be deleted. This field is set by the server when a graceful deletion is requested by the user, and is not directly settable by a client. The resource is expected to be deleted (no longer visible from resource lists, and not reachable by name) after the time in this field, once the finalizers list is empty. As long as the finalizers list contains items, deletion is blocked. Once the deletionTimestamp is set, this value may not be unset or be set further into the future, although it may be shortened or the resource may be deleted prior to this time. For example, a user may request that a pod is deleted in 30 seconds. The Kubelet will react by sending a graceful termination signal to the containers in the pod. After that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL) to the container and after cleanup, remove the pod from the API. In the presence of network partitions, this object may still exist after this timestamp, until an administrator or automated process can determine the resource is fully terminated. If not set, graceful deletion of the object has not been requested.
|
||||||
|
|
||||||
|
Populated by the system when a graceful deletion is requested. Read-only. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
|
||||||
|
-->
|
||||||
|
deletionTimestamp 是删除此资源的 RFC 3339 日期和时间。
|
||||||
|
该字段在用户请求优雅删除时由服务器设置,客户端不能直接设置。
|
||||||
|
一旦 finalizers 列表为空,该资源预计将在此字段中的时间之后被删除
|
||||||
|
(不再从资源列表中可见,并且无法通过名称访问)。
|
||||||
|
只要 finalizers 列表包含项目,就阻止删除。一旦设置了 deletionTimestamp,
|
||||||
|
该值就不可以被取消设置,也不可以被改为更晚的未来时间,但可以将其缩短,资源也可能在此时间之前被删除。
|
||||||
|
例如,用户可能要求在 30 秒内删除一个 Pod。
|
||||||
|
Kubelet 将通过向 Pod 中的容器发送优雅的终止信号来做出反应。
|
||||||
|
30 秒后,Kubelet 将向容器发送硬终止信号(SIGKILL),
|
||||||
|
并在清理后从 API 中删除 Pod。在网络存在分区的情况下,
|
||||||
|
此对象可能在此时间戳之后仍然存在,直到管理员或自动化进程可以确定资源已完全终止。
|
||||||
|
如果未设置,则未请求优雅删除该对象。
|
||||||
|
|
||||||
|
请求优雅删除时由系统填充。只读。更多信息:https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
|
||||||
|
|
||||||
|
<a name="Time"></a>
|
||||||
|
<!--
|
||||||
|
*Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON. Wrappers are provided for many of the factory methods that the time package offers.*
|
||||||
|
-->
|
||||||
|
time 是 time.Time 的包装类,支持正确地序列化为 YAML 和 JSON。
|
||||||
|
为 time 包提供的许多工厂方法提供了包装类。
|
||||||
|
|
||||||
|
- **generation** (int64)
|
||||||
|
|
||||||
|
<!--
|
||||||
|
A sequence number representing a specific generation of the desired state. Populated by the system. Read-only.
|
||||||
|
-->
|
||||||
|
表示期望状态的特定生成的序列号。由系统填充。只读。
|
||||||
|
|
||||||
|
- **resourceVersion** (string)
|
||||||
|
|
||||||
|
<!--
|
||||||
|
An opaque value that represents the internal version of this object that can be used by clients to determine when objects have changed. May be used for optimistic concurrency, change detection, and the watch operation on a resource or set of resources. Clients must treat these values as opaque and passed unmodified back to the server. They may only be valid for a particular resource or set of resources.
|
||||||
|
|
||||||
|
Populated by the system. Read-only. Value must be treated as opaque by clients and . More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency
|
||||||
|
-->
|
||||||
|
一个不透明的值,表示此对象的内部版本,客户端可以使用该值来确定对象是否已被更改。
|
||||||
|
可用于乐观并发、变更检测以及对资源或资源集的监听操作。
|
||||||
|
客户端必须将这些值视为不透明的,且未更改地传回服务器。
|
||||||
|
它们可能仅对特定资源或一组资源有效。
|
||||||
|
|
||||||
|
由系统填充。只读。客户端必须将值视为不透明。
|
||||||
|
更多信息:https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency
|
||||||
|
|
||||||
|
- **selfLink** (string)
|
||||||
|
|
||||||
|
<!--
|
||||||
|
SelfLink is a URL representing this object. Populated by the system. Read-only.
|
||||||
|
|
||||||
|
DEPRECATED Kubernetes will stop propagating this field in 1.20 release and the field is planned to be removed in 1.21 release.
|
||||||
|
-->
|
||||||
|
selfLink 是表示此对象的 URL。由系统填充。只读。
|
||||||
|
|
||||||
|
**已弃用**。Kubernetes 将在 1.20 版本中停止传播该字段,并计划在 1.21 版本中删除该字段。
|
||||||
|
|
||||||
|
- **uid** (string)
|
||||||
|
|
||||||
|
<!--
|
||||||
|
UID is the unique in time and space value for this object. It is typically generated by the server on successful creation of a resource and is not allowed to change on PUT operations.
|
||||||
|
|
||||||
|
Populated by the system. Read-only. More info: http://kubernetes.io/docs/user-guide/identifiers#uids
|
||||||
|
-->
|
||||||
|
UID 是该对象在时间和空间上的唯一值。它通常由服务器在成功创建资源时生成,并且不允许使用 PUT 操作更改。
|
||||||
|
|
||||||
|
由系统填充。只读。更多信息:http://kubernetes.io/docs/user-guide/identifiers#uids
|
||||||
|
|
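下面的片段示意了这些由服务器填充的只读字段在对象中的样子(所有取值均为示例,客户端不应自行设置):

```yaml
metadata:
  name: demo-deployment                          # 示例名称(假设)
  uid: 1a2b3c4d-0000-4000-8000-000000000000      # 示例 UID
  resourceVersion: "4325"                        # 不透明的内部版本号(示例)
  generation: 2
  creationTimestamp: "2022-01-01T00:00:00Z"      # RFC3339 格式,UTC
```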
||||||
|
<!-- ### Ignored {#Ignored} -->
|
||||||
|
### 忽略字段 {#Ignored}
|
||||||
|
|
||||||
|
|
||||||
|
- **clusterName** (string)
|
||||||
|
|
||||||
|
<!--
|
||||||
|
The name of the cluster which the object belongs to. This is used to distinguish resources with same name and namespace in different clusters. This field is not set anywhere right now and apiserver is going to ignore it if set in create or update request.
|
||||||
|
-->
|
||||||
|
对象所属的集群的名称。这用于区分不同集群中具有相同名称和命名空间的资源。
|
||||||
|
该字段现在没有在任何地方设置,如果在创建或更新请求中设置,apiserver 将忽略它。
|
||||||
|
|
||||||
|
|
||||||
|
|
|
@ -0,0 +1,208 @@
|
||||||
|
---
|
||||||
|
api_metadata:
|
||||||
|
apiVersion: ""
|
||||||
|
import: "k8s.io/apimachinery/pkg/apis/meta/v1"
|
||||||
|
kind: "Status"
|
||||||
|
content_type: "api_reference"
|
||||||
|
description: "状态(Status)是不返回其他对象的调用的返回值。"
|
||||||
|
title: "Status"
|
||||||
|
weight: 12
|
||||||
|
auto_generated: true
|
||||||
|
---
|
||||||
|
|
||||||
|
<!--
|
||||||
|
api_metadata:
|
||||||
|
apiVersion: ""
|
||||||
|
import: "k8s.io/apimachinery/pkg/apis/meta/v1"
|
||||||
|
kind: "Status"
|
||||||
|
content_type: "api_reference"
|
||||||
|
description: "Status is a return value for calls that don't return other objects."
|
||||||
|
title: "Status"
|
||||||
|
weight: 12
|
||||||
|
auto_generated: true
|
||||||
|
-->
|
||||||
|
|
||||||
|
<!--
|
||||||
|
The file is auto-generated from the Go source code of the component using a generic
|
||||||
|
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
|
||||||
|
to generate the reference documentation, please read
|
||||||
|
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
|
||||||
|
To update the reference content, please follow the
|
||||||
|
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
|
||||||
|
guide. You can file document formatting bugs against the
|
||||||
|
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
|
||||||
|
-->
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
`import "k8s.io/apimachinery/pkg/apis/meta/v1"`
|
||||||
|
|
||||||
|
|
||||||
|
<!-- Status is a return value for calls that don't return other objects. -->
|
||||||
|
状态(Status)是不返回其他对象的调用的返回值。
|
||||||
|
|
||||||
|
<hr>
|
||||||
|
|
||||||
|
- **apiVersion** (string)
|
||||||
|
|
||||||
|
<!--
|
||||||
|
APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
|
||||||
|
-->
|
||||||
|
|
||||||
|
APIVersion 定义对象表示的版本化模式。
|
||||||
|
服务器应将已识别的模式转换为最新的内部值,并可能拒绝无法识别的值。
|
||||||
|
更多信息:https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
|
||||||
|
|
||||||
|
- **code** (int32)
|
||||||
|
|
||||||
|
<!-- Suggested HTTP return code for this status, 0 if not set. -->
|
||||||
|
此状态的建议 HTTP 返回代码,如果未设置,则为 0。
|
||||||
|
|
||||||
|
- **details** (StatusDetails)
|
||||||
|
|
||||||
|
<!--
|
||||||
|
Extended data associated with the reason. Each reason may define its own extended details.
|
||||||
|
This field is optional and the data returned is not guaranteed to conform to any schema except that defined by the reason type.
|
||||||
|
-->
|
||||||
|
与原因(Reason)相关的扩展数据。每个原因都可以定义自己的扩展细节。
|
||||||
|
此字段是可选的,并且不保证返回的数据符合任何模式,除非由原因类型定义。
|
||||||
|
|
||||||
|
<a name="StatusDetails"></a>
|
||||||
|
<!--
|
||||||
|
*StatusDetails is a set of additional properties that MAY be set by the server to provide additional information about a response.
|
||||||
|
The Reason field of a Status object defines what attributes will be set.
|
||||||
|
Clients must ignore fields that do not match the defined type of each attribute,
|
||||||
|
and should assume that any attribute may be empty, invalid, or under defined.*
|
||||||
|
-->
|
||||||
|
*StatusDetails 是一组附加属性,可以由服务器设置以提供有关响应的附加信息。*
|
||||||
|
*状态对象的原因字段定义将设置哪些属性。*
|
||||||
|
*客户端必须忽略与每个属性的定义类型不匹配的字段,并且应该假定任何属性可能为空、无效或未定义。*
|
||||||
|
|
||||||
|
- **details.causes** ([]StatusCause)
|
||||||
|
|
||||||
|
<!--
|
||||||
|
The Causes array includes more details associated with the StatusReason failure.
|
||||||
|
Not all StatusReasons may provide detailed causes.
|
||||||
|
-->
|
||||||
|
Causes 数组包含与 StatusReason 故障相关的更多详细信息。
|
||||||
|
并非所有 StatusReasons 都可以提供详细的原因。
|
||||||
|
|
||||||
|
<a name="StatusCause"></a>
|
||||||
|
<!--
|
||||||
|
*StatusCause provides more information about an api.Status failure, including cases when multiple errors are encountered.*
|
||||||
|
-->
|
||||||
|
*StatusCause 提供有关 api.Status 失败的更多信息,包括遇到多个错误的情况。*
|
||||||
|
|
||||||
|
- **details.causes.field** (string)
|
||||||
|
|
||||||
|
<!--
|
||||||
|
The field of the resource that has caused this error, as named by its JSON serialization.
|
||||||
|
May include dot and postfix notation for nested attributes. Arrays are zero-indexed.
|
||||||
|
Fields may appear more than once in an array of causes due to fields having multiple errors. Optional.
|
||||||
|
-->
|
||||||
|
导致此错误的资源字段,由其 JSON 序列化命名。
|
||||||
|
可能包括嵌套属性的点和后缀表示法。数组是从零开始索引的。
|
||||||
|
由于一个字段可能有多个错误,因此它可能会在 causes 数组中出现多次。此字段可选。
|
||||||
|
|
||||||
|
<!--
|
||||||
|
Examples:
|
||||||
|
"name" - the field "name" on the current resource
|
||||||
|
"items[0].name" - the field "name" on the first array entry in "items"
|
||||||
|
-->
|
||||||
|
示例:
|
||||||
|
- “name”:当前资源上的字段 “name”
|
||||||
|
- “items[0].name”:“items” 中第一个数组条目上的字段 “name”
|
||||||
|
|
||||||
|
- **details.causes.message** (string)
|
||||||
|
|
||||||
|
<!-- A human-readable description of the cause of the error. This field may be presented as-is to a reader. -->
|
||||||
|
对错误原因的可读描述。该字段可以按原样呈现给读者。
|
||||||
|
|
||||||
|
- **details.causes.reason** (string)
|
||||||
|
|
||||||
|
<!-- A machine-readable description of the cause of the error. If this value is empty there is no information available. -->
|
||||||
|
错误原因的机器可读描述。如果此值为空,则没有可用信息。
|
||||||
|
|
||||||
|
- **details.group** (string)
|
||||||
|
|
||||||
|
<!-- The group attribute of the resource associated with the status StatusReason. -->
|
||||||
|
与状态 StatusReason 关联的资源的组属性。
|
||||||
|
|
||||||
|
- **details.kind** (string)
|
||||||
|
|
||||||
|
<!--
|
||||||
|
The kind attribute of the resource associated with the status StatusReason.
|
||||||
|
On some operations may differ from the requested resource Kind.
|
||||||
|
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
|
||||||
|
-->
|
||||||
|
与状态 StatusReason 关联的资源的种类属性。
|
||||||
|
在某些操作上可能与请求的资源种类不同。
|
||||||
|
更多信息:https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
|
||||||
|
|
||||||
|
- **details.name** (string)
|
||||||
|
|
||||||
|
<!-- The name attribute of the resource associated with the status StatusReason (when there is a single name which can be described). -->
|
||||||
|
与状态 StatusReason 关联的资源的名称属性(当有一个可以描述的名称时)。
|
||||||
|
|
||||||
|
- **details.retryAfterSeconds** (int32)
|
||||||
|
|
||||||
|
<!--
|
||||||
|
If specified, the time in seconds before the operation should be retried.
|
||||||
|
Some errors may indicate the client must take an alternate action -
|
||||||
|
for those errors this field may indicate how long to wait before taking the alternate action.
|
||||||
|
-->
|
||||||
|
如果指定,则应重试操作前的时间(以秒为单位)。
|
||||||
|
一些错误可能表明客户端必须采取替代操作——对于这些错误,此字段可能指示在采取替代操作之前等待多长时间。
|
||||||
|
|
||||||
|
- **details.uid** (string)
|
||||||
|
|
||||||
|
<!--
|
||||||
|
UID of the resource. (when there is a single resource which can be described).
|
||||||
|
More info: http://kubernetes.io/docs/user-guide/identifiers#uids
|
||||||
|
-->
|
||||||
|
资源的 UID(当有单个可以描述的资源时)。
|
||||||
|
更多信息:http://kubernetes.io/docs/user-guide/identifiers#uids
|
||||||
|
|
||||||
|
- **kind** (string)
|
||||||
|
|
||||||
|
<!--
|
||||||
|
Kind is a string value representing the REST resource this object represents.
|
||||||
|
Servers may infer this from the endpoint the client submits requests to.
|
||||||
|
Cannot be updated. In CamelCase.
|
||||||
|
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
|
||||||
|
-->
|
||||||
|
Kind 是一个字符串值,表示此对象表示的 REST 资源。
|
||||||
|
服务器可以从客户端提交请求的端点推断出这一点。
|
||||||
|
无法更新。采用驼峰式命名。
|
||||||
|
更多信息:https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
|
||||||
|
|
||||||
|
- **message** (string)
|
||||||
|
|
||||||
|
<!-- A human-readable description of the status of this operation. -->
|
||||||
|
此操作状态的人类可读描述。
|
||||||
|
|
||||||
|
- **metadata** (<a href="{{< ref "../common-definitions/list-meta#ListMeta" >}}">ListMeta</a>)
|
||||||
|
|
||||||
|
<!-- Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds -->
|
||||||
|
标准列表元数据。
|
||||||
|
更多信息:https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
|
||||||
|
|
||||||
|
|
||||||
|
- **reason** (string)
|
||||||
|
|
||||||
|
<!--
|
||||||
|
A machine-readable description of why this operation is in the "Failure" status.
|
||||||
|
If this value is empty there is no information available.
|
||||||
|
A Reason clarifies an HTTP status code but does not override it.
|
||||||
|
-->
|
||||||
|
机器可读的描述,说明此操作为何处于 “Failure” 状态。
|
||||||
|
如果此值为空,则没有可用信息。
|
||||||
|
Reason 澄清了 HTTP 状态代码,但不会覆盖它。
|
||||||
|
|
||||||
|
- **status** (string)
|
||||||
|
|
||||||
|
<!--
|
||||||
|
Status of the operation. One of: "Success" or "Failure". More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
|
||||||
|
-->
|
||||||
|
操作状态。取值为 “Success” 或 “Failure”。
|
||||||
|
更多信息:https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
|
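作为示意,下面是一个典型的 Status 返回体(例如按名称获取一个不存在的 Pod 时可能得到的响应;其中具体取值均为示例):

```yaml
apiVersion: v1
kind: Status
status: Failure
message: 'pods "demo" not found'   # 可直接呈现给读者的描述(示例)
reason: NotFound                   # 机器可读的原因
details:
  name: demo
  kind: pods
code: 404                          # 建议的 HTTP 返回码
```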
|
@ -0,0 +1,220 @@
|
||||||
|
---
|
||||||
|
api_metadata:
|
||||||
|
apiVersion: ""
|
||||||
|
import: ""
|
||||||
|
kind: "Common Parameters"
|
||||||
|
content_type: "api_reference"
|
||||||
|
description: ""
|
||||||
|
title: "常用参数"
|
||||||
|
weight: 10
|
||||||
|
auto_generated: true
|
||||||
|
---
|
||||||
|
|
||||||
|
<!--
|
||||||
|
The file is auto-generated from the Go source code of the component using a generic
|
||||||
|
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
|
||||||
|
to generate the reference documentation, please read
|
||||||
|
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
|
||||||
|
To update the reference content, please follow the
|
||||||
|
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
|
||||||
|
guide. You can file document formatting bugs against the
|
||||||
|
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
|
||||||
|
-->
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
## allowWatchBookmarks {#allowWatchBookmarks}
|
||||||
|
<!--
|
||||||
|
allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored.
|
||||||
|
|
||||||
|
<hr>
|
||||||
|
-->
|
||||||
|
allowWatchBookmarks 字段请求类型为 BOOKMARK 的监视事件。
|
||||||
|
没有实现书签的服务器可能会忽略这个标志,并根据服务器的判断发送书签。
|
||||||
|
客户端不应该假设书签会在任何特定的时间间隔返回,也不应该假设服务器会在会话期间发送任何书签事件。
|
||||||
|
如果当前请求不是 watch 请求,则忽略该字段。
|
||||||
|
<hr>
|
||||||
|
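下面是一个 BOOKMARK 监视事件的示意(resourceVersion 取值为示例),它只携带客户端已经看到的最新 resourceVersion:

```yaml
type: BOOKMARK
object:
  kind: Pod
  apiVersion: v1
  metadata:
    resourceVersion: "12746"   # 示例值:客户端可从此版本继续监视
```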
|
||||||
|
## continue {#continue}
|
||||||
|
<!--
|
||||||
|
The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token.
|
||||||
|
-->
|
||||||
|
当需要从服务器检索更多结果时,应该设置 continue 选项。由于这个值是服务器定义的,
|
||||||
|
客户端只能使用先前查询结果中的 continue 值,并且查询参数必须与之前相同(continue 值本身除外);
|
||||||
|
服务器可能拒绝它识别不到的 continue 值。
|
||||||
|
如果指定的 continue 值不再有效,无论是由于过期(通常是 5 到 15 分钟)
|
||||||
|
还是服务器上的配置更改,服务器将响应 "410 ResourceExpired" 错误和一个 continue 令牌。
|
||||||
|
<!--
|
||||||
|
If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key".
|
||||||
|
-->
|
||||||
|
如果客户端需要一个一致的列表,它必须在没有 continue 字段的情况下重新发起 list 请求。
|
||||||
|
否则,客户端可能会发送另一个带有 410 错误令牌的 list 请求,服务器将响应从下一个键开始的列表,
|
||||||
|
但列表数据来自最新的快照,这与之前
|
||||||
|
的列表结果不一致。在第一个列表请求之后被创建、修改或删除的对象都会包含在响应中,
|
||||||
|
只要这些对象的键排在“下一个键”之后。
|
||||||
|
<!--
|
||||||
|
This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.
|
||||||
|
-->
|
||||||
|
当 watch 字段为 true 时,不支持此字段。客户端可以从服务器返回的最后一个 resourceVersion 值开始监视,就不会错过任何修改。
|
||||||
|
<hr>
|
||||||
|
|
||||||
|
## dryRun {#dryRun}
|
||||||
|
<!--
|
||||||
|
When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
|
||||||
|
<hr>
|
||||||
|
-->
|
||||||
|
此参数存在时,表示不应持久化所请求的修改。无效或无法识别的 dryRun 指令将导致错误响应,
|
||||||
|
并且服务器不再对请求进行进一步处理。有效值为:
|
||||||
|
- All: 将处理所有的演练阶段
|
||||||
|
<hr>
|
||||||
|
|
||||||
|
## fieldManager {#fieldManager}
|
||||||
|
<!--
|
||||||
|
fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.
|
||||||
|
<hr>
|
||||||
|
-->
|
||||||
|
fieldManager 是与进行这些更改的参与者或实体相关联的名称。
|
||||||
|
该值的长度必须小于或等于 128 个字符,且只能包含可打印字符(如 https://golang.org/pkg/unicode/#IsPrint 所定义)。
|
||||||
|
<hr>
|
||||||
|
|
||||||
|
## fieldSelector {#fieldSelector}
|
||||||
|
<!--
|
||||||
|
A selector to restrict the list of returned objects by their fields. Defaults to everything.
|
||||||
|
<hr>
|
||||||
|
-->
|
||||||
|
基于字段来限制所返回对象列表的选择器。默认为返回所有对象。
|
||||||
|
<hr>
|
||||||
|
|
||||||
|
## force {#force}
|
||||||
|
<!--
|
||||||
|
Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests.
|
||||||
|
<hr>
|
||||||
|
-->
|
||||||
|
force 将“强制”执行 Apply 请求。这意味着用户将重新获得由其他人所拥有的、存在冲突的字段。
|
||||||
|
对于非 apply 类的补丁请求,不得设置 force 标志。
|
||||||
|
<hr>
|
||||||
|
|
||||||
|
## gracePeriodSeconds {#gracePeriodSeconds}
|
||||||
|
<!--
|
||||||
|
The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately.
|
||||||
|
<hr>
|
||||||
|
-->
|
||||||
|
删除对象前的持续时间(秒数)。值必须为非负整数。取值为 0 表示立即删除。
|
||||||
|
如果该值为 nil,将使用指定类型的默认宽限期。如果没有指定,默认为每个对象的设置值。0 表示立即删除。
|
||||||
|
<hr>
|
||||||
|
|
||||||
|
## labelSelector {#labelSelector}
|
||||||
|
<!--
|
||||||
|
A selector to restrict the list of returned objects by their labels. Defaults to everything.
|
||||||
|
<hr>
|
||||||
|
-->
|
||||||
|
通过标签限制返回对象列表的选择器。默认为返回所有对象。
|
||||||
|
<hr>
|
||||||
|
|
||||||
|
## limit {#limit}
|
||||||
|
<!--
|
||||||
|
limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results.
|
||||||
|
-->
|
||||||
|
limit 是一个列表调用返回的最大响应数。如果有更多的条目,服务器会将列表元数据上的
|
||||||
|
`continue` 字段设置为一个值,该值可以与相同的初始查询一起使用,以检索下一组结果。
|
||||||
|
<!--
|
||||||
|
Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true.
|
||||||
|
-->
|
||||||
|
设置 limit 可能会在所有请求的对象被过滤掉的情况下返回少于请求的条目数量(下限为零),
|
||||||
|
并且客户端应该只根据 continue 字段是否存在来确定是否有更多的结果可用。
|
||||||
|
服务器可能选择不支持 limit 参数,并将返回所有可用的结果。
|
||||||
|
如果指定了 limit 并且 continue 字段为空,客户端可能会认为没有更多的结果可用。
|
||||||
|
如果 watch 为 true,则不支持此字段。
|
||||||
|
<!--
|
||||||
|
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests.
|
||||||
|
-->
|
||||||
|
服务器保证使用 continue 时返回的对象与发出一个不带 limit 的单次列表调用所返回的对象完全相同——
|
||||||
|
也就是说,在发出第一个请求后所创建、修改或删除的对象将不包含在任何后续的继续请求中。
|
||||||
|
<!--
|
||||||
|
This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned.
|
||||||
|
<hr>
|
||||||
|
-->
|
||||||
|
这有时被称为一致性快照,可以确保使用 limit 来分块接收超大结果集的客户端能够看到所有可能的对象。
|
||||||
|
如果对象在分块列表期间被更新,则返回计算第一个列表结果时存在的对象版本。
|
||||||
|
<hr>
|
||||||
|
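下面是一个使用 limit 分页时返回的列表响应示意(resourceVersion 与 continue 令牌的取值均为假设的占位符):

```yaml
kind: PodList
apiVersion: v1
metadata:
  resourceVersion: "10245"             # 示例值
  continue: ENCODED_CONTINUE_TOKEN     # 占位符:在下一次请求中原样传回以获取下一批结果
items: []                              # 此处省略具体条目
```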
|
||||||
|
## namespace {#namespace}
|
||||||
|
<!--
|
||||||
|
object name and auth scope, such as for teams and projects
|
||||||
|
<hr>
|
||||||
|
-->
|
||||||
|
|
||||||
|
对象名称和身份验证范围,例如用于团队和项目。
|
||||||
|
<hr>
|
||||||
|
|
||||||
|
## pretty {#pretty}
|
||||||
|
<!--
|
||||||
|
If 'true', then the output is pretty printed.
|
||||||
|
<hr>
|
||||||
|
-->
|
||||||
|
|
||||||
|
如果设置为 'true' ,那么输出是规范的打印。
|
||||||
|
|
||||||
|
<hr>
|
||||||
|
|
||||||
|
## propagationPolicy {#propagationPolicy}
|
||||||
|
<!--
|
||||||
|
Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground.
|
||||||
|
<hr>
|
||||||
|
-->
|
||||||
|
该字段决定是否以及如何执行垃圾收集。可以设置此字段或 OrphanDependents,但不能同时设置。
|
||||||
|
默认策略由 metadata.finalizers 中现有的终结器和特定资源的默认策略决定。可接受的取值是:
|
||||||
|
- 'Orphan':孤立依赖项;
|
||||||
|
- 'Background':允许垃圾回收器后台删除依赖;
|
||||||
|
- 'Foreground':一个级联策略,前台删除所有依赖项。
|
||||||
|
<hr>
|
||||||
|
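作为示意,gracePeriodSeconds 与 propagationPolicy 通常出现在删除请求的 DeleteOptions 请求体中,大致形如下面的片段(具体取值为示例):

```yaml
kind: DeleteOptions
apiVersion: v1
gracePeriodSeconds: 30        # 删除前允许的体面终止时长(秒)
propagationPolicy: Foreground # 前台级联删除
```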
|
||||||
|
## resourceVersion {#resourceVersion}
|
||||||
|
<!--
|
||||||
|
resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details.
|
||||||
|
|
||||||
|
Defaults to unset
|
||||||
|
<hr>
|
||||||
|
-->
|
||||||
|
resourceVersion 对可用来处理请求的资源版本设置约束。
|
||||||
|
详情请参见 https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions。
|
||||||
|
|
||||||
|
默认不设置
|
||||||
|
<hr>
|
||||||
|
|
||||||
|
## resourceVersionMatch {#resourceVersionMatch}
|
||||||
|
<!--
|
||||||
|
resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details.
|
||||||
|
|
||||||
|
Defaults to unset
|
||||||
|
<hr>
|
||||||
|
-->
|
||||||
|
resourceVersionMatch 字段决定如何将 resourceVersion 应用于列表调用。
|
||||||
|
强烈建议在设置了 resourceVersion 的列表调用中同时设置 resourceVersionMatch,
|
||||||
|
具体请参见 https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions。
|
||||||
|
|
||||||
|
默认不设置
|
||||||
|
|
||||||
|
<hr>
|
||||||
|
|
||||||
|
## timeoutSeconds {#timeoutSeconds}
|
||||||
|
<!--
|
||||||
|
Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity.
|
||||||
|
<hr>
|
||||||
|
-->
|
||||||
|
list/watch 调用的超时秒数。无论是否有活动,此选项都会限制调用的持续时间。
|
||||||
|
<hr>
|
||||||
|
|
||||||
|
## watch {#watch}
|
||||||
|
<!--
|
||||||
|
Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.
|
||||||
|
<hr>
|
||||||
|
-->
|
||||||
|
监视对所述资源的更改,并以添加、更新和删除通知流的形式返回这些变更。需要指定 resourceVersion。
|
||||||
|
|
||||||
|
<hr>
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
|
@ -0,0 +1,7 @@
|
||||||
|
---
|
||||||
|
title: "配置和存储资源"
|
||||||
|
weight: 3
|
||||||
|
auto_generated: true
|
||||||
|
---
|
||||||
|
|
||||||
|
|
|
@ -0,0 +1,8 @@
|
||||||
|
---
|
||||||
|
title: "扩展资源"
|
||||||
|
weight: 7
|
||||||
|
auto_generated: true
|
||||||
|
---
|
||||||
|
|
||||||
|
|
||||||
|
|
|
@ -0,0 +1,7 @@
|
||||||
|
---
|
||||||
|
title: "策略资源"
|
||||||
|
weight: 6
|
||||||
|
auto_generated: true
|
||||||
|
---
|
||||||
|
|
||||||
|
|
|
@ -0,0 +1,7 @@
|
||||||
|
---
|
||||||
|
title: "Service 资源"
|
||||||
|
weight: 2
|
||||||
|
auto_generated: true
|
||||||
|
---
|
||||||
|
|
||||||
|
|
|
@ -0,0 +1,6 @@
|
||||||
|
---
|
||||||
|
title: "工作负载资源"
|
||||||
|
weight: 1
|
||||||
|
auto_generated: true
|
||||||
|
---
|
||||||
|
|
|
@ -22,7 +22,7 @@ file and passing its path as a command line argument.
|
||||||
<!--
|
<!--
|
||||||
A scheduling Profile allows you to configure the different stages of scheduling
|
A scheduling Profile allows you to configure the different stages of scheduling
|
||||||
in the {{< glossary_tooltip text="kube-scheduler" term_id="kube-scheduler" >}}.
|
in the {{< glossary_tooltip text="kube-scheduler" term_id="kube-scheduler" >}}.
|
||||||
Each stage is exposed in a extension point. Plugins provide scheduling behaviors
|
Each stage is exposed in an extension point. Plugins provide scheduling behaviors
|
||||||
by implementing one or more of these extension points.
|
by implementing one or more of these extension points.
|
||||||
-->
|
-->
|
||||||
调度模板(Profile)允许你配置 {{< glossary_tooltip text="kube-scheduler" term_id="kube-scheduler" >}}
|
调度模板(Profile)允许你配置 {{< glossary_tooltip text="kube-scheduler" term_id="kube-scheduler" >}}
|
||||||
|
@ -31,17 +31,19 @@ by implementing one or more of these extension points.
|
||||||
<!--
|
<!--
|
||||||
You can specify scheduling profiles by running `kube-scheduler --config <filename>`,
|
You can specify scheduling profiles by running `kube-scheduler --config <filename>`,
|
||||||
using the
|
using the
|
||||||
[KubeSchedulerConfiguration (v1beta1)](/docs/reference/config-api/kube-scheduler-config.v1beta1/)
|
KubeSchedulerConfiguration ([v1beta2](/docs/reference/config-api/kube-scheduler-config.v1beta2/)
|
||||||
|
or [v1beta3](/docs/reference/config-api/kube-scheduler-config.v1beta3/))
|
||||||
struct.
|
struct.
|
||||||
-->
|
-->
|
||||||
你可以通过运行 `kube-scheduler --config <filename>` 来设置调度模板,
|
你可以通过运行 `kube-scheduler --config <filename>` 来设置调度模板,
|
||||||
使用 [KubeSchedulerConfiguration (v1beta1)](/docs/reference/config-api/kube-scheduler-config.v1beta1/) 结构体。
|
使用 KubeSchedulerConfiguration ([v1beta2](/zh/docs/reference/config-api/kube-scheduler-config.v1beta2/)
|
||||||
|
或者 [v1beta3](/zh/docs/reference/config-api/kube-scheduler-config.v1beta3/)) 结构体。
|
||||||
|
|
||||||
<!-- A minimal configuration looks as follows: -->
|
<!-- A minimal configuration looks as follows: -->
|
||||||
最简单的配置如下:
|
最简单的配置如下:
|
||||||
|
|
||||||
```yaml
|
```yaml
|
||||||
apiVersion: kubescheduler.config.k8s.io/v1beta1
|
apiVersion: kubescheduler.config.k8s.io/v1beta2
|
||||||
kind: KubeSchedulerConfiguration
|
kind: KubeSchedulerConfiguration
|
||||||
clientConnection:
|
clientConnection:
|
||||||
kubeconfig: /etc/srv/kubernetes/kube-scheduler/kubeconfig
|
kubeconfig: /etc/srv/kubernetes/kube-scheduler/kubeconfig
|
||||||
|
@ -77,62 +79,74 @@ extension points:
|
||||||
调度行为发生在一系列阶段中,这些阶段是通过以下扩展点公开的:
|
调度行为发生在一系列阶段中,这些阶段是通过以下扩展点公开的:
|
||||||
|
|
||||||
<!--
|
<!--
|
||||||
1. `QueueSort`: These plugins provide an ordering function that is used to
|
1. `queueSort`: These plugins provide an ordering function that is used to
|
||||||
sort pending Pods in the scheduling queue. Exactly one queue sort plugin
|
sort pending Pods in the scheduling queue. Exactly one queue sort plugin
|
||||||
may be enabled at a time.
|
may be enabled at a time.
|
||||||
-->
|
-->
|
||||||
1. `QueueSort`:这些插件对调度队列中的悬决的 Pod 排序。
|
1. `queueSort`:这些插件对调度队列中的悬决的 Pod 排序。
|
||||||
一次只能启用一个队列排序插件。
|
一次只能启用一个队列排序插件。
|
||||||
<!--
|
<!--
|
||||||
2. `PreFilter`: These plugins are used to pre-process or check information
|
2. `preFilter`: These plugins are used to pre-process or check information
|
||||||
about a Pod or the cluster before filtering. They can mark a pod as
|
about a Pod or the cluster before filtering. They can mark a pod as
|
||||||
unschedulable.
|
unschedulable.
|
||||||
-->
|
-->
|
||||||
2. `PreFilter`:这些插件用于在过滤之前预处理或检查 Pod 或集群的信息。
|
2. `preFilter`:这些插件用于在过滤之前预处理或检查 Pod 或集群的信息。
|
||||||
它们可以将 Pod 标记为不可调度。
|
它们可以将 Pod 标记为不可调度。
|
||||||
<!--
|
<!--
|
||||||
3. `Filter`: These plugins are the equivalent of Predicates in a scheduling
|
3. `filter`: These plugins are the equivalent of Predicates in a scheduling
|
||||||
Policy and are used to filter out nodes that can not run the Pod. Filters
|
Policy and are used to filter out nodes that can not run the Pod. Filters
|
||||||
are called in the configured order. A pod is marked as unschedulable if no
|
are called in the configured order. A pod is marked as unschedulable if no
|
||||||
nodes pass all the filters.
|
nodes pass all the filters.
|
||||||
-->
|
-->
|
||||||
3. `Filter`:这些插件相当于调度策略中的断言(Predicates),用于过滤不能运行 Pod 的节点。
|
3. `filter`:这些插件相当于调度策略中的断言(Predicates),用于过滤不能运行 Pod 的节点。
|
||||||
过滤器的调用顺序是可配置的。
|
过滤器的调用顺序是可配置的。
|
||||||
如果没有一个节点通过所有过滤器的筛选,Pod 将会被标记为不可调度。
|
如果没有一个节点通过所有过滤器的筛选,Pod 将会被标记为不可调度。
|
||||||
<!--
|
<!--
|
||||||
4. `PreScore`: This is an informational extension point that can be used
|
4. `postFilter`: These plugins are called in their configured order when no
|
||||||
|
feasible nodes were found for the pod. If any `postFilter` plugin marks the
|
||||||
|
Pod _schedulable_, the remaining plugins are not called.
|
||||||
|
-->
|
||||||
|
4. `postFilter`:当无法为 Pod 找到可用节点时,按照这些插件的配置顺序调用它们。
|
||||||
|
如果任何 `postFilter` 插件将 Pod 标记为“可调度”,则不会调用其余插件。
|
||||||
|
<!--
|
||||||
|
1. `preScore`: This is an informational extension point that can be used
|
||||||
for doing pre-scoring work.
|
for doing pre-scoring work.
|
||||||
-->
|
-->
|
||||||
4. `PreScore`:这是一个信息扩展点,可用于预打分工作。
|
5. `preScore`:这是一个信息扩展点,可用于预打分工作。
|
||||||
<!--
|
<!--
|
||||||
5. `Score`: These plugins provide a score to each node that has passed the
|
6. `score`: These plugins provide a score to each node that has passed the
|
||||||
filtering phase. The scheduler will then select the node with the highest
|
filtering phase. The scheduler will then select the node with the highest
|
||||||
weighted scores sum.
|
weighted scores sum.
|
||||||
-->
|
-->
|
||||||
5. `Score`:这些插件给通过筛选阶段的节点打分。调度器会选择得分最高的节点。
|
6. `score`:这些插件给通过筛选阶段的节点打分。调度器会选择得分最高的节点。
|
||||||
<!--
|
<!--
|
||||||
6. `Reserve`: This is an informational extension point that notifies plugins
|
7. `reserve`: This is an informational extension point that notifies plugins
|
||||||
when resources have been reserved for a given Pod. Plugins also implement an
|
when resources have been reserved for a given Pod. Plugins also implement an
|
||||||
`Unreserve` call that gets called in the case of failure during or after
|
`Unreserve` call that gets called in the case of failure during or after
|
||||||
`Reserve`.
|
`Reserve`.
|
||||||
-->
|
-->
|
||||||
6. `Reserve`:这是一个信息扩展点,当资源已经预留给 Pod 时,会通知插件。
|
7. `reserve`:这是一个信息扩展点,当资源已经预留给 Pod 时,会通知插件。
|
||||||
这些插件还实现了 `Unreserve` 接口,在 `Reserve` 期间或之后出现故障时调用。
|
这些插件还实现了 `Unreserve` 接口,在 `Reserve` 期间或之后出现故障时调用。
|
||||||
<!-- 7. `Permit`: These plugins can prevent or delay the binding of a Pod. -->
|
<!-- 8. `permit`: These plugins can prevent or delay the binding of a Pod. -->
|
||||||
7. `Permit`:这些插件可以阻止或延迟 Pod 绑定。
|
8. `permit`:这些插件可以阻止或延迟 Pod 绑定。
|
||||||
<!-- 8. `PreBind`: These plugins perform any work required before a Pod is bound.-->
|
<!-- 9. `preBind`: These plugins perform any work required before a Pod is bound.-->
|
||||||
8. `PreBind`:这些插件在 Pod 绑定节点之前执行。
|
9. `preBind`:这些插件在 Pod 绑定节点之前执行。
|
||||||
<!--
|
<!--
|
||||||
9. `Bind`: The plugins bind a Pod to a Node. Bind plugins are called in order
|
10. `bind`: The plugins bind a Pod to a Node. Bind plugins are called in order
|
||||||
and once one has done the binding, the remaining plugins are skipped. At
|
and once one has done the binding, the remaining plugins are skipped. At
|
||||||
least one bind plugin is required.
|
least one bind plugin is required.
|
||||||
-->
|
-->
|
||||||
9. `Bind`:这个插件将 Pod 与节点绑定。绑定插件是按顺序调用的,只要有一个插件完成了绑定,其余插件都会跳过。绑定插件至少需要一个。
|
10. `bind`:这个插件将 Pod 与节点绑定。绑定插件是按顺序调用的,只要有一个插件完成了绑定,其余插件都会跳过。绑定插件至少需要一个。
|
||||||
<!--
|
<!--
|
||||||
10. `PostBind`: This is an informational extension point that is called after
|
11. `postBind`: This is an informational extension point that is called after
|
||||||
a Pod has been bound.
|
a Pod has been bound.
|
||||||
-->
|
-->
|
||||||
10. `PostBind`:这是一个信息扩展点,在 Pod 绑定了节点之后调用。
|
11. `postBind`:这是一个信息扩展点,在 Pod 绑定了节点之后调用。
|
||||||
|
<!--
|
||||||
|
12. `multiPoint`: This is a config-only field that allows plugins to be enabled
|
||||||
|
or disabled for all of their applicable extension points simultaneously.
|
||||||
|
-->
|
||||||
|
12. `multiPoint`:这是一个仅配置字段,允许同时为所有适用的扩展点启用或禁用插件。
|
||||||
|
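下面是一个使用 `multiPoint` 的最小配置示意(其中 `MyCustomPlugin` 为假设的自定义插件名称):

```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta3
kind: KubeSchedulerConfiguration
profiles:
  - schedulerName: multipoint-scheduler   # 示例调度器名称(假设)
    plugins:
      multiPoint:
        enabled:
          - name: MyCustomPlugin          # 在其实现的所有适用扩展点上同时启用
```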
|
||||||
<!--
|
<!--
|
||||||
For each extension point, you could disable specific [default plugins](#scheduling-plugins)
|
For each extension point, you could disable specific [default plugins](#scheduling-plugins)
|
||||||
|
@ -141,13 +155,13 @@ or enable your own. For example:
|
||||||
对每个扩展点,你可以禁用[默认插件](#scheduling-plugins)或者是启用自己的插件,例如:
|
对每个扩展点,你可以禁用[默认插件](#scheduling-plugins)或者是启用自己的插件,例如:
|
||||||
|
|
||||||
```yaml
|
```yaml
|
||||||
apiVersion: kubescheduler.config.k8s.io/v1beta1
|
apiVersion: kubescheduler.config.k8s.io/v1beta2
|
||||||
kind: KubeSchedulerConfiguration
|
kind: KubeSchedulerConfiguration
|
||||||
profiles:
|
profiles:
|
||||||
- plugins:
|
- plugins:
|
||||||
score:
|
score:
|
||||||
disabled:
|
disabled:
|
||||||
- name: NodeResourcesLeastAllocated
|
- name: PodTopologySpread
|
||||||
enabled:
|
enabled:
|
||||||
- name: MyCustomPluginA
|
- name: MyCustomPluginA
|
||||||
weight: 2
|
weight: 2
|
||||||
|
@ -164,16 +178,7 @@ desired.
|
||||||
如果需要,这个字段也可以用来对插件重新顺序。
|
如果需要,这个字段也可以用来对插件重新顺序。
|
||||||
|
|
||||||
<!-- ### Scheduling plugins -->
|
<!-- ### Scheduling plugins -->
|
||||||
### 调度插件 {#scheduling-plugin}
|
### 调度插件 {#scheduling-plugins}
|
||||||
|
|
||||||
<!--
|
|
||||||
1. `UnReserve`: This is an informational extension point that is called if
|
|
||||||
a Pod is rejected after being reserved and put on hold by a `Permit` plugin.
|
|
||||||
-->
|
|
||||||
1. `UnReserve`:这是一个信息扩展点,如果一个 Pod 在预留后被拒绝,并且被 `Permit` 插件搁置,它就会被调用。
|
|
||||||
|
|
||||||
<!-- ## Scheduling plugins -->
|
|
||||||
## 调度插件 {#scheduling-plugins}
|
|
||||||
|
|
||||||
<!--
|
<!--
|
||||||
The following plugins, enabled by default, implement one or more of these
|
The following plugins, enabled by default, implement one or more of these
|
||||||
|
@ -181,200 +186,175 @@ extension points:
|
||||||
-->
|
-->
|
||||||
下面默认启用的插件实现了一个或多个扩展点:
|
下面默认启用的插件实现了一个或多个扩展点:
|
||||||
|
|
||||||
<!--
|
|
||||||
- `SelectorSpread`: Favors spreading across nodes for Pods that belong to
|
|
||||||
{{< glossary_tooltip text="Services" term_id="service" >}},
|
|
||||||
{{< glossary_tooltip text="ReplicaSets" term_id="replica-set" >}} and
|
|
||||||
{{< glossary_tooltip text="StatefulSets" term_id="statefulset" >}}.
|
|
||||||
Extension points: `PreScore`, `Score`.
|
|
||||||
-->
|
|
||||||
- `SelectorSpread`:对于属于 {{< glossary_tooltip text="Services" term_id="service" >}}、
|
|
||||||
{{< glossary_tooltip text="ReplicaSets" term_id="replica-set" >}} 和
|
|
||||||
{{< glossary_tooltip text="StatefulSets" term_id="statefulset" >}} 的 Pod,偏好跨多个节点部署。
|
|
||||||
|
|
||||||
实现的扩展点:`PreScore`,`Score`。
|
|
||||||
<!--
|
<!--
|
||||||
- `ImageLocality`: Favors nodes that already have the container images that the
|
- `ImageLocality`: Favors nodes that already have the container images that the
|
||||||
Pod runs.
|
Pod runs.
|
||||||
Extension points: `Score`.
|
Extension points: `score`.
|
||||||
-->
|
-->
|
||||||
- `ImageLocality`:选择已经存在 Pod 运行所需容器镜像的节点。
|
- `ImageLocality`:选择已经存在 Pod 运行所需容器镜像的节点。
|
||||||
|
|
||||||
实现的扩展点:`Score`。
|
实现的扩展点:`score`。
|
||||||
<!--
|
<!--
|
||||||
- `TaintToleration`: Implements
|
- `TaintToleration`: Implements
|
||||||
[taints and tolerations](/docs/concepts/scheduling-eviction/taint-and-toleration/).
|
[taints and tolerations](/docs/concepts/scheduling-eviction/taint-and-toleration/).
|
||||||
Implements extension points: `Filter`, `Prescore`, `Score`.
|
Implements extension points: `filter`, `prescore`, `score`.
|
||||||
-->
|
-->
|
||||||
- `TaintToleration`:实现了[污点和容忍](/zh/docs/concepts/scheduling-eviction/taint-and-toleration/)。
|
- `TaintToleration`:实现了[污点和容忍](/zh/docs/concepts/scheduling-eviction/taint-and-toleration/)。
|
||||||
|
|
||||||
实现的扩展点:`Filter`,`Prescore`,`Score`。
|
实现的扩展点:`filter`,`prescore`,`score`。
|
||||||
<!--
|
<!--
|
||||||
- `NodeName`: Checks if a Pod spec node name matches the current node.
|
- `NodeName`: Checks if a Pod spec node name matches the current node.
|
||||||
Extension points: `Filter`.
|
Extension points: `filter`.
|
||||||
-->
|
-->
|
||||||
- `NodeName`:检查 Pod 指定的节点名称与当前节点是否匹配。
|
- `NodeName`:检查 Pod 指定的节点名称与当前节点是否匹配。
|
||||||
|
|
||||||
实现的扩展点:`Filter`。
|
实现的扩展点:`filter`。
|
||||||
<!--
|
<!--
|
||||||
- `NodePorts`: Checks if a node has free ports for the requested Pod ports.
|
- `NodePorts`: Checks if a node has free ports for the requested Pod ports.
|
||||||
Extension points: `PreFilter`, `Filter`.
|
Extension points: `preFilter`, `filter`.
|
||||||
-->
|
-->
|
||||||
- `NodePorts`:检查 Pod 请求的端口在节点上是否可用。
|
- `NodePorts`:检查 Pod 请求的端口在节点上是否可用。
|
||||||
|
|
||||||
实现的扩展点:`PreFilter`,`Filter`。
|
实现的扩展点:`preFilter`,`filter`。
|
||||||
<!--
|
|
||||||
- `NodePreferAvoidPods`: Scores nodes according to the node
|
|
||||||
{{< glossary_tooltip text=" " term_id="annotation" >}}
|
|
||||||
`scheduler.alpha.kubernetes.io/preferAvoidPods`.
|
|
||||||
Extension points: `Score`.
|
|
||||||
-->
|
|
||||||
- `NodePreferAvoidPods`:基于节点的 {{< glossary_tooltip text="注解" term_id="annotation" >}}
|
|
||||||
`scheduler.alpha.kubernetes.io/preferAvoidPods` 打分。
|
|
||||||
|
|
||||||
实现的扩展点:`Score`。
|
|
||||||
<!--
|
<!--
|
||||||
- `NodeAffinity`: Implements
|
- `NodeAffinity`: Implements
|
||||||
[node selectors](/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector)
|
[node selectors](/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector)
|
||||||
and [node affinity](/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity).
|
and [node affinity](/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity).
|
||||||
Extension points: `Filter`, `Score`.
|
Extension points: `filter`, `score`.
|
||||||
-->
|
-->
|
||||||
- `NodeAffinity`:实现了[节点选择器](/zh/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector)
|
- `NodeAffinity`:实现了[节点选择器](/zh/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector)
|
||||||
和[节点亲和性](/zh/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity)。
|
和[节点亲和性](/zh/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity)。
|
||||||
|
|
||||||
实现的扩展点:`Filter`,`Score`.
|
实现的扩展点:`filter`,`score`。
|
||||||
<!--
|
<!--
|
||||||
- `PodTopologySpread`: Implements
|
- `PodTopologySpread`: Implements
|
||||||
[Pod topology spread](/docs/concepts/workloads/pods/pod-topology-spread-constraints/).
|
[Pod topology spread](/docs/concepts/workloads/pods/pod-topology-spread-constraints/).
|
||||||
Extension points: `PreFilter`, `Filter`, `PreScore`, `Score`.
|
Extension points: `preFilter`, `filter`, `preScore`, `score`.
|
||||||
-->
|
-->
|
||||||
- `PodTopologySpread`:实现了 [Pod 拓扑分布](/zh/docs/concepts/workloads/pods/pod-topology-spread-constraints/)。
|
- `PodTopologySpread`:实现了 [Pod 拓扑分布](/zh/docs/concepts/workloads/pods/pod-topology-spread-constraints/)。
|
||||||
|
|
||||||
实现的扩展点:`PreFilter`,`Filter`,`PreScore`,`Score`。
|
实现的扩展点:`preFilter`,`filter`,`preScore`,`score`。
|
||||||
<!--
|
<!--
|
||||||
- `NodeUnschedulable`: Filters out nodes that have `.spec.unschedulable` set to
|
- `NodeUnschedulable`: Filters out nodes that have `.spec.unschedulable` set to
|
||||||
true.
|
true.
|
||||||
Extension points: `Filter`.
|
Extension points: `filter`.
|
||||||
-->
|
-->
|
||||||
- `NodeUnschedulable`:过滤 `.spec.unschedulable` 值为 true 的节点。
|
- `NodeUnschedulable`:过滤 `.spec.unschedulable` 值为 true 的节点。
|
||||||
|
|
||||||
实现的扩展点:`Filter`。
|
实现的扩展点:`filter`。
|
||||||
<!--
|
<!--
|
||||||
- `NodeResourcesFit`: Checks if the node has all the resources that the Pod is
|
- `NodeResourcesFit`: Checks if the node has all the resources that the Pod is
|
||||||
requesting.
|
requesting. The score can use one of three strategies: `LeastAllocated`
|
||||||
Extension points: `PreFilter`, `Filter`.
|
(default), `MostAllocated` and `RequestedToCapacityRatio`.
|
||||||
|
Extension points: `preFilter`, `filter`, `score`.
|
||||||
-->
|
-->
|
||||||
- `NodeResourcesFit`:检查节点是否拥有 Pod 请求的所有资源。
|
- `NodeResourcesFit`:检查节点是否拥有 Pod 请求的所有资源。
|
||||||
|
得分可以使用以下三种策略之一:`LeastAllocated`(默认)、`MostAllocated`
|
||||||
|
和 `RequestedToCapacityRatio`。
|
||||||
|
|
||||||
实现的扩展点:`PreFilter`,`Filter`。
|
实现的扩展点:`preFilter`,`filter`,`score`。
|
||||||
<!--
|
<!--
|
||||||
- `NodeResourcesBalancedAllocation`: Favors nodes that would obtain a more
|
- `NodeResourcesBalancedAllocation`: Favors nodes that would obtain a more
|
||||||
balanced resource usage if the Pod is scheduled there.
|
balanced resource usage if the Pod is scheduled there.
|
||||||
Extension points: `Score`.
|
Extension points: `score`.
|
||||||
-->
|
-->
|
||||||
- `NodeResourcesBalancedAllocation`:调度 Pod 时,选择资源使用更为均衡的节点。
|
- `NodeResourcesBalancedAllocation`:调度 Pod 时,选择资源使用更为均衡的节点。
|
||||||
|
|
||||||
实现的扩展点:`Score`。
|
实现的扩展点:`score`。
|
||||||
<!--
|
|
||||||
- `NodeResourcesLeastAllocated`: Favors nodes that have a low allocation of
|
|
||||||
resources.
|
|
||||||
Extension points: `Score`.
|
|
||||||
-->
|
|
||||||
- `NodeResourcesLeastAllocated`:选择资源分配较少的节点。
|
|
||||||
|
|
||||||
实现的扩展点:`Score`。
|
|
||||||
<!--
|
<!--
|
||||||
- `VolumeBinding`: Checks if the node has or if it can bind the requested
|
- `VolumeBinding`: Checks if the node has or if it can bind the requested
|
||||||
{{< glossary_tooltip text="volumes" term_id="volume" >}}.
|
{{< glossary_tooltip text="volumes" term_id="volume" >}}.
|
||||||
Extension points: `PreFilter`, `Filter`, `Reserve`, `PreBind`, `Score`.
|
Extension points: `preFilter`, `filter`, `reserve`, `preBind`, `score`.
|
||||||
{{< note >}}
|
{{< note >}}
|
||||||
`Score` extension point is enabled when `VolumeCapacityPriority` feature is
|
`score` extension point is enabled when `VolumeCapacityPriority` feature is
|
||||||
enabled. It prioritizes the smallest PVs that can fit the requested volume
|
enabled. It prioritizes the smallest PVs that can fit the requested volume
|
||||||
size.
|
size.
|
||||||
{{< /note >}}
|
{{< /note >}}
|
||||||
-->
|
-->
|
||||||
- `VolumeBinding`:检查节点是否有请求的卷,或是否可以绑定请求的卷。
|
- `VolumeBinding`:检查节点是否有请求的卷,或是否可以绑定请求的卷。
|
||||||
实现的扩展点: `PreFilter`、`Filter`、`Reserve`、`PreBind` 和 `Score`。
|
实现的扩展点: `preFilter`、`filter`、`reserve`、`preBind` 和 `score`。
|
||||||
{{< note >}}
|
{{< note >}}
|
||||||
当 `VolumeCapacityPriority` 特性被启用时,`Score` 扩展点也被启用。
|
当 `VolumeCapacityPriority` 特性被启用时,`score` 扩展点也被启用。
|
||||||
它优先考虑可以满足所需卷大小的最小 PV。
|
它优先考虑可以满足所需卷大小的最小 PV。
|
||||||
{{< /note >}}
|
{{< /note >}}
|
||||||
|
|
||||||
<!--
|
<!--
|
||||||
- `VolumeRestrictions`: Checks that volumes mounted in the node satisfy
|
- `VolumeRestrictions`: Checks that volumes mounted in the node satisfy
|
||||||
restrictions that are specific to the volume provider.
|
restrictions that are specific to the volume provider.
|
||||||
Extension points: `Filter`.
|
Extension points: `filter`.
|
||||||
-->
|
-->
|
||||||
- `VolumeRestrictions`:检查挂载到节点上的卷是否满足卷提供程序的限制。
|
- `VolumeRestrictions`:检查挂载到节点上的卷是否满足卷提供程序的限制。
|
||||||
|
|
||||||
实现的扩展点:`Filter`。
|
实现的扩展点:`filter`。
|
||||||
<!--
|
<!--
|
||||||
- `VolumeZone`: Checks that volumes requested satisfy any zone requirements they
|
- `VolumeZone`: Checks that volumes requested satisfy any zone requirements they
|
||||||
might have.
|
might have.
|
||||||
Extension points: `Filter`.
|
Extension points: `filter`.
|
||||||
-->
|
-->
|
||||||
- `VolumeZone`:检查请求的卷是否在任何区域都满足。
|
- `VolumeZone`:检查请求的卷是否在任何区域都满足。
|
||||||
|
|
||||||
实现的扩展点:`Filter`。
|
实现的扩展点:`filter`。
|
||||||
<!--
|
<!--
|
||||||
- `NodeVolumeLimits`: Checks that CSI volume limits can be satisfied for the
|
- `NodeVolumeLimits`: Checks that CSI volume limits can be satisfied for the
|
||||||
node.
|
node.
|
||||||
Extension points: `Filter`.
|
Extension points: `filter`.
|
||||||
-->
|
-->
|
||||||
- `NodeVolumeLimits`:检查该节点是否满足 CSI 卷限制。
|
- `NodeVolumeLimits`:检查该节点是否满足 CSI 卷限制。
|
||||||
|
|
||||||
实现的扩展点:`Filter`。
|
实现的扩展点:`filter`。
|
||||||
<!--
|
<!--
|
||||||
- `EBSLimits`: Checks that AWS EBS volume limits can be satisfied for the node.
|
- `EBSLimits`: Checks that AWS EBS volume limits can be satisfied for the node.
|
||||||
Extension points: `Filter`.
|
Extension points: `filter`.
|
||||||
-->
|
-->
|
||||||
- `EBSLimits`:检查节点是否满足 AWS EBS 卷限制。
|
- `EBSLimits`:检查节点是否满足 AWS EBS 卷限制。
|
||||||
|
|
||||||
实现的扩展点:`Filter`。
|
实现的扩展点:`filter`。
|
||||||
<!--
|
<!--
|
||||||
- `GCEPDLimits`: Checks that GCP-PD volume limits can be satisfied for the node.
|
- `GCEPDLimits`: Checks that GCP-PD volume limits can be satisfied for the node.
|
||||||
Extension points: `Filter`.
|
Extension points: `filter`.
|
||||||
-->
|
-->
|
||||||
- `GCEPDLimits`:检查该节点是否满足 GCP-PD 卷限制。
|
- `GCEPDLimits`:检查该节点是否满足 GCP-PD 卷限制。
|
||||||
|
|
||||||
实现的扩展点:`Filter`。
|
实现的扩展点:`filter`。
|
||||||
<!--
|
<!--
|
||||||
- `AzureDiskLimits`: Checks that Azure disk volume limits can be satisfied for
|
- `AzureDiskLimits`: Checks that Azure disk volume limits can be satisfied for
|
||||||
the node.
|
the node.
|
||||||
Extension points: `Filter`.
|
Extension points: `filter`.
|
||||||
-->
|
-->
|
||||||
- `AzureDiskLimits`:检查该节点是否满足 Azure 卷限制。
|
- `AzureDiskLimits`:检查该节点是否满足 Azure 卷限制。
|
||||||
|
|
||||||
实现的扩展点:`Filter`。
|
实现的扩展点:`filter`。
|
||||||
<!--
|
<!--
|
||||||
- `InterPodAffinity`: Implements
|
- `InterPodAffinity`: Implements
|
||||||
[inter-Pod affinity and anti-affinity](/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity).
|
[inter-Pod affinity and anti-affinity](/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity).
|
||||||
Extension points: `PreFilter`, `Filter`, `PreScore`, `Score`.
|
Extension points: `preFilter`, `filter`, `preScore`, `score`.
|
||||||
-->
|
-->
|
||||||
- `InterPodAffinity`:实现 [Pod 间亲和性与反亲和性](/zh/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity)。
|
- `InterPodAffinity`:实现 [Pod 间亲和性与反亲和性](/zh/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity)。
|
||||||
|
|
||||||
实现的扩展点:`PreFilter`,`Filter`,`PreScore`,`Score`。
|
实现的扩展点:`preFilter`,`filter`,`preScore`,`score`。
|
||||||
<!--
|
<!--
|
||||||
- `PrioritySort`: Provides the default priority based sorting.
|
- `PrioritySort`: Provides the default priority based sorting.
|
||||||
Extension points: `QueueSort`.
|
Extension points: `queueSort`.
|
||||||
-->
|
-->
|
||||||
- `PrioritySort`:提供默认的基于优先级的排序。
|
- `PrioritySort`:提供默认的基于优先级的排序。
|
||||||
|
|
||||||
实现的扩展点:`QueueSort`。
|
实现的扩展点:`queueSort`。
|
||||||
<!--
|
<!--
|
||||||
- `DefaultBinder`: Provides the default binding mechanism.
|
- `DefaultBinder`: Provides the default binding mechanism.
|
||||||
Extension points: `Bind`.
|
Extension points: `bind`.
|
||||||
-->
|
-->
|
||||||
- `DefaultBinder`:提供默认的绑定机制。
|
- `DefaultBinder`:提供默认的绑定机制。
|
||||||
|
|
||||||
实现的扩展点:`Bind`。
|
实现的扩展点:`bind`。
|
||||||
<!--
|
<!--
|
||||||
- `DefaultPreemption`: Provides the default preemption mechanism.
|
- `DefaultPreemption`: Provides the default preemption mechanism.
|
||||||
Extension points: `PostFilter`.
|
Extension points: `PostFilter`.
|
||||||
-->
|
-->
|
||||||
- `DefaultPreemption`:提供默认的抢占机制。
|
- `DefaultPreemption`:提供默认的抢占机制。
|
||||||
|
|
||||||
实现的扩展点:`PostFilter`。
|
实现的扩展点:`postFilter`。
|
||||||
|
|
||||||
<!--
|
<!--
|
||||||
You can also enable the following plugins, through the component config APIs,
|
You can also enable the following plugins, through the component config APIs,
|
||||||
|
@ -383,52 +363,28 @@ that are not enabled by default:
|
||||||
你也可以通过组件配置 API 启用以下插件(默认不启用),具体启用方法可参考列表之后的配置示例:
|
你也可以通过组件配置 API 启用以下插件(默认不启用),具体启用方法可参考列表之后的配置示例:
|
||||||
|
|
||||||
<!--
|
<!--
|
||||||
- `NodeResourcesMostAllocated`: Favors nodes that have a high allocation of
|
- `SelectorSpread`: Favors spreading across nodes for Pods that belong to
|
||||||
resources.
|
{{< glossary_tooltip text="Services" term_id="service" >}},
|
||||||
Extension points: `Score`.
|
{{< glossary_tooltip text="ReplicaSets" term_id="replica-set" >}} and
|
||||||
|
{{< glossary_tooltip text="StatefulSets" term_id="statefulset" >}}.
|
||||||
|
Extension points: `preScore`, `score`.
|
||||||
-->
|
-->
|
||||||
- `NodeResourcesMostAllocated`:选择已分配资源多的节点。
|
- `SelectorSpread`:偏向把属于
|
||||||
|
{{< glossary_tooltip text="Services" term_id="service" >}},
|
||||||
|
{{< glossary_tooltip text="ReplicaSets" term_id="replica-set" >}} 和
|
||||||
|
{{< glossary_tooltip text="StatefulSets" term_id="statefulset" >}} 的 Pod 跨节点分布。
|
||||||
|
|
||||||
|
实现的扩展点:`preScore`,`score`。
|
||||||
|
|
||||||
实现的扩展点:`Score`。
|
|
||||||
<!--
|
<!--
|
||||||
- `RequestedToCapacityRatio`: Favor nodes according to a configured function of
|
- `CinderLimits`: Checks that [OpenStack Cinder](https://docs.openstack.org/cinder/)
|
||||||
the allocated resources.
|
volume limits can be satisfied for the node.
|
||||||
Extension points: `Score`.
|
Extension points: `filter`.
|
||||||
-->
|
-->
|
||||||
- `RequestedToCapacityRatio`:根据已分配资源的某函数设置选择节点。
|
- `CinderLimits`:检查是否可以满足节点的 [OpenStack Cinder](https://docs.openstack.org/cinder/)
|
||||||
|
卷限制。
|
||||||
|
|
||||||
实现的扩展点:`Score`。
|
|
||||||
<!--
|
|
||||||
- `CinderVolume`: Checks that OpenStack Cinder volume limits can be satisfied
|
|
||||||
for the node.
|
|
||||||
Extension points: `Filter`.
|
|
||||||
-->
|
|
||||||
- `CinderVolume`:检查该节点是否满足 OpenStack Cinder 卷限制。
|
|
||||||
|
|
||||||
实现的扩展点:`Filter`。
|
|
||||||
<!--
|
|
||||||
- `NodeLabel`: Filters and / or scores a node according to configured
|
|
||||||
{{< glossary_tooltip text="label(s)" term_id="label" >}}.
|
|
||||||
Extension points: `Filter`, `Score`.
|
|
||||||
-->
|
|
||||||
- `NodeLabel`:根据配置的 {{< glossary_tooltip text="标签" term_id="label" >}}
|
|
||||||
过滤节点和/或给节点打分。
|
|
||||||
|
|
||||||
实现的扩展点:`Filter`,`Score`。
|
|
||||||
<!--
|
|
||||||
- `ServiceAffinity`: Checks that Pods that belong to a
|
|
||||||
{{< glossary_tooltip term_id="service" >}} fit in a set of nodes defined by
|
|
||||||
configured labels. This plugin also favors spreading the Pods belonging to a
|
|
||||||
Service across nodes.
|
|
||||||
Extension points: `PreFilter`, `Filter`, `Score`.
|
|
||||||
-->
|
|
||||||
- `ServiceAffinity`:检查属于某个 {{< glossary_tooltip term_id="service" >}} 的 Pod
|
|
||||||
与配置的标签所定义的节点集是否适配。
|
|
||||||
这个插件还支持将属于某个 Service 的 Pod 分散到各个节点。
|
|
||||||
|
|
||||||
实现的扩展点:`PreFilter`,`Filter`,`Score`。
|
|
||||||
|
|
||||||
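下面是一个启用上述非默认插件的最小配置示意(此处以列表中的 `CinderLimits` 为例;`default-scheduler` 这一调度器名称与所用的 API 版本仅作演示,请按你的集群实际情况调整):

```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta2
kind: KubeSchedulerConfiguration
profiles:
  - schedulerName: default-scheduler
    plugins:
      # CinderLimits 实现的是 filter 扩展点,因此在 filter 下启用
      filter:
        enabled:
          - name: CinderLimits
```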
<!-- ### Multiple profiles -->
|
|
||||||
### 多配置文件 {#multiple-profiles}
|
### 多配置文件 {#multiple-profiles}
|
||||||
|
|
||||||
<!--
|
<!--
|
||||||
|
@ -447,7 +403,7 @@ disabled.
|
||||||
使用下面的配置样例,调度器将运行两个配置文件:一个使用默认插件,另一个禁用所有打分插件。
|
使用下面的配置样例,调度器将运行两个配置文件:一个使用默认插件,另一个禁用所有打分插件。
|
||||||
|
|
||||||
```yaml
|
```yaml
|
||||||
apiVersion: kubescheduler.config.k8s.io/v1beta1
|
apiVersion: kubescheduler.config.k8s.io/v1beta2
|
||||||
kind: KubeSchedulerConfiguration
|
kind: KubeSchedulerConfiguration
|
||||||
profiles:
|
profiles:
|
||||||
- schedulerName: default-scheduler
|
- schedulerName: default-scheduler
|
||||||
|
@ -496,23 +452,372 @@ Pod 的调度事件把 `.spec.schedulerName` 字段值作为 ReportingController
|
||||||
|
|
||||||
{{< note >}}
|
{{< note >}}
|
||||||
<!--
|
<!--
|
||||||
All profiles must use the same plugin in the QueueSort extension point and have
|
All profiles must use the same plugin in the queueSort extension point and have
|
||||||
the same configuration parameters (if applicable). This is because the scheduler
|
the same configuration parameters (if applicable). This is because the scheduler
|
||||||
only has one pending pods queue.
|
only has one pending pods queue.
|
||||||
-->
|
-->
|
||||||
所有配置文件必须在 QueueSort 扩展点使用相同的插件,并具有相同的配置参数(如果适用)。
|
所有配置文件必须在 queueSort 扩展点使用相同的插件,并具有相同的配置参数(如果适用)。
|
||||||
这是因为调度器只有一个保存 pending 状态 Pod 的队列。
|
这是因为调度器只有一个保存 pending 状态 Pod 的队列。
|
||||||
|
|
||||||
{{< /note >}}
|
{{< /note >}}
|
||||||
|
|
||||||
|
<!--
|
||||||
|
### Plugins that apply to multiple extension points {#multipoint}
|
||||||
|
-->
|
||||||
|
|
||||||
|
### 应用于多个扩展点的插件 {#multipoint}
|
||||||
|
|
||||||
|
<!--
|
||||||
|
Starting from `kubescheduler.config.k8s.io/v1beta3`, there is an additional field in the
|
||||||
|
profile config, `multiPoint`, which allows for easily enabling or disabling a plugin
|
||||||
|
across several extension points. The intent of `multiPoint` config is to simplify the
|
||||||
|
configuration needed for users and administrators when using custom profiles.
|
||||||
|
-->
|
||||||
|
从 `kubescheduler.config.k8s.io/v1beta3` 开始,配置文件配置中有一个附加字段 `multiPoint`,它允许跨多个扩展点轻松启用或禁用插件。
|
||||||
|
`multiPoint` 配置的目的是简化用户和管理员在使用自定义配置文件时所需的配置。
|
||||||
|
|
||||||
|
<!--
|
||||||
|
Consider a plugin, `MyPlugin`, which implements the `preScore`, `score`, `preFilter`,
|
||||||
|
and `filter` extension points. To enable `MyPlugin` for all its available extension
|
||||||
|
points, the profile config looks like:
|
||||||
|
-->
|
||||||
|
|
||||||
|
考虑一个插件,`MyPlugin`,它实现了 `preScore`、`score`、`preFilter` 和 `filter` 扩展点。
|
||||||
|
要为其所有可用的扩展点启用 `MyPlugin`,配置文件配置如下所示:
|
||||||
|
|
||||||
|
```yaml
|
||||||
|
apiVersion: kubescheduler.config.k8s.io/v1beta3
|
||||||
|
kind: KubeSchedulerConfiguration
|
||||||
|
profiles:
|
||||||
|
- schedulerName: multipoint-scheduler
|
||||||
|
plugins:
|
||||||
|
multiPoint:
|
||||||
|
enabled:
|
||||||
|
- name: MyPlugin
|
||||||
|
```
|
||||||
|
|
||||||
|
<!--
|
||||||
|
This would equate to manually enabling `MyPlugin` for all of its extension
|
||||||
|
points, like so:
|
||||||
|
-->
|
||||||
|
|
||||||
|
这相当于为所有扩展点手动启用 `MyPlugin`,如下所示:
|
||||||
|
|
||||||
|
```yaml
|
||||||
|
apiVersion: kubescheduler.config.k8s.io/v1beta3
|
||||||
|
kind: KubeSchedulerConfiguration
|
||||||
|
profiles:
|
||||||
|
- schedulerName: non-multipoint-scheduler
|
||||||
|
plugins:
|
||||||
|
preScore:
|
||||||
|
enabled:
|
||||||
|
- name: MyPlugin
|
||||||
|
score:
|
||||||
|
enabled:
|
||||||
|
- name: MyPlugin
|
||||||
|
preFilter:
|
||||||
|
enabled:
|
||||||
|
- name: MyPlugin
|
||||||
|
filter:
|
||||||
|
enabled:
|
||||||
|
- name: MyPlugin
|
||||||
|
```
|
||||||
|
|
||||||
|
<!--
|
||||||
|
One benefit of using `multiPoint` here is that if `MyPlugin` implements another
|
||||||
|
extension point in the future, the `multiPoint` config will automatically enable it
|
||||||
|
for the new extension.
|
||||||
|
-->
|
||||||
|
|
||||||
|
在这里使用 `multiPoint` 的一个好处是,如果 `MyPlugin` 将来实现另一个扩展点,`multiPoint` 配置将自动为新扩展启用它。
|
||||||
|
|
||||||
|
<!--
|
||||||
|
Specific extension points can be excluded from `MultiPoint` expansion using
|
||||||
|
the `disabled` field for that extension point. This works with disabling default
|
||||||
|
plugins, non-default plugins, or with the wildcard (`'*'`) to disable all plugins.
|
||||||
|
An example of this, disabling `Score` and `PreScore`, would be:
|
||||||
|
-->
|
||||||
|
|
||||||
|
可以使用该扩展点的 `disabled` 字段将特定扩展点从 `MultiPoint` 扩展中排除。
|
||||||
|
这适用于禁用默认插件、非默认插件或使用通配符 (`'*'`) 来禁用所有插件。
|
||||||
|
禁用 `Score` 和 `PreScore` 的一个例子是:
|
||||||
|
|
||||||
|
```yaml
|
||||||
|
apiVersion: kubescheduler.config.k8s.io/v1beta3
|
||||||
|
kind: KubeSchedulerConfiguration
|
||||||
|
profiles:
|
||||||
|
- schedulerName: non-multipoint-scheduler
|
||||||
|
plugins:
|
||||||
|
multiPoint:
|
||||||
|
enabled:
|
||||||
|
- name: 'MyPlugin'
|
||||||
|
preScore:
|
||||||
|
disabled:
|
||||||
|
- name: '*'
|
||||||
|
score:
|
||||||
|
disabled:
|
||||||
|
- name: '*'
|
||||||
|
```
|
||||||
|
|
||||||
|
<!--
|
||||||
|
In `v1beta3`, all [default plugins](#scheduling-plugins) are enabled internally through `MultiPoint`.
|
||||||
|
However, individual extension points are still available to allow flexible
|
||||||
|
reconfiguration of the default values (such as ordering and Score weights). For
|
||||||
|
example, consider two Score plugins `DefaultScore1` and `DefaultScore2`, each with
|
||||||
|
a weight of `1`. They can be reordered with different weights like so:
|
||||||
|
-->
|
||||||
|
|
||||||
|
在 `v1beta3` 中,所有 [默认插件](#scheduling-plugins) 都通过 `MultiPoint` 在内部启用。
|
||||||
|
但是,仍然可以使用单独的扩展点来灵活地重新配置默认值(例如排序和分数权重)。
|
||||||
|
例如,考虑两个 Score 插件 `DefaultScore1` 和 `DefaultScore2`,每个插件的权重为 `1`。
|
||||||
|
它们可以用不同的权重重新排序,如下所示:
|
||||||
|
|
||||||
|
```yaml
|
||||||
|
apiVersion: kubescheduler.config.k8s.io/v1beta3
|
||||||
|
kind: KubeSchedulerConfiguration
|
||||||
|
profiles:
|
||||||
|
- schedulerName: multipoint-scheduler
|
||||||
|
plugins:
|
||||||
|
score:
|
||||||
|
enabled:
|
||||||
|
- name: 'DefaultScore2'
|
||||||
|
weight: 5
|
||||||
|
```
|
||||||
|
|
||||||
|
<!--
|
||||||
|
In this example, it's unnecessary to specify the plugins in `MultiPoint` explicitly
|
||||||
|
because they are default plugins. And the only plugin specified in `Score` is `DefaultScore2`.
|
||||||
|
This is because plugins set through specific extension points will always take precedence
|
||||||
|
over `MultiPoint` plugins. So, this snippet essentially re-orders the two plugins
|
||||||
|
without needing to specify both of them.
|
||||||
|
-->
|
||||||
|
|
||||||
|
在这个例子中,没有必要在 `MultiPoint` 中明确指定插件,因为它们是默认插件。
|
||||||
|
`Score` 中指定的唯一插件是 `DefaultScore2`。
|
||||||
|
这是因为通过特定扩展点设置的插件将始终优先于 `MultiPoint` 插件。
|
||||||
|
因此,此代码段实质上重新排序了这两个插件,而无需同时指定它们。
|
||||||
|
|
||||||
|
|
||||||
|
<!--
|
||||||
|
The general hierarchy for precedence when configuring `MultiPoint` plugins is as follows:
|
||||||
|
-->
|
||||||
|
配置 `MultiPoint` 插件时优先级的一般层次结构如下:
|
||||||
|
|
||||||
|
<!--
|
||||||
|
1. Specific extension points run first, and their settings override whatever is set elsewhere
|
||||||
|
-->
|
||||||
|
1. 特定的扩展点首先运行,它们的设置会覆盖其他地方的设置
|
||||||
|
|
||||||
|
<!--2. Plugins manually configured through `MultiPoint` and their settings
|
||||||
|
-->
|
||||||
|
2. 通过 `MultiPoint` 手动配置的插件及其设置
|
||||||
|
|
||||||
|
<!--3. Default plugins and their default settings
|
||||||
|
-->
|
||||||
|
3. 默认插件及其默认设置
|
||||||
|
|
||||||
|
<!--
|
||||||
|
To demonstrate the above hierarchy, the following example is based on these plugins:
|
||||||
|
-->
|
||||||
|
为了演示上述层次结构,以下示例基于这些插件:
|
||||||
|
|插件|扩展点|
|
||||||
|
|---|---|
|
||||||
|
|`DefaultQueueSort`|`QueueSort`|
|
||||||
|
|`CustomQueueSort`|`QueueSort`|
|
||||||
|
|`DefaultPlugin1`|`Score`, `Filter`|
|
||||||
|
|`DefaultPlugin2`|`Score`|
|
||||||
|
|`CustomPlugin1`|`Score`, `Filter`|
|
||||||
|
|`CustomPlugin2`|`Score`, `Filter`|
|
||||||
|
|
||||||
|
<!--
|
||||||
|
A valid sample configuration for these plugins would be:
|
||||||
|
-->
|
||||||
|
这些插件的一个有效示例配置是:
|
||||||
|
|
||||||
|
```yaml
|
||||||
|
apiVersion: kubescheduler.config.k8s.io/v1beta3
|
||||||
|
kind: KubeSchedulerConfiguration
|
||||||
|
profiles:
|
||||||
|
- schedulerName: multipoint-scheduler
|
||||||
|
plugins:
|
||||||
|
multiPoint:
|
||||||
|
enabled:
|
||||||
|
- name: 'CustomQueueSort'
|
||||||
|
- name: 'CustomPlugin1'
|
||||||
|
weight: 3
|
||||||
|
- name: 'CustomPlugin2'
|
||||||
|
disabled:
|
||||||
|
- name: 'DefaultQueueSort'
|
||||||
|
filter:
|
||||||
|
disabled:
|
||||||
|
- name: 'DefaultPlugin1'
|
||||||
|
score:
|
||||||
|
enabled:
|
||||||
|
- name: 'DefaultPlugin2'
|
||||||
|
```
|
||||||
|
|
||||||
|
<!--
|
||||||
|
Note that there is no error for re-declaring a `MultiPoint` plugin in a specific
|
||||||
|
extension point. The re-declaration is ignored (and logged), as specific extension points
|
||||||
|
take precedence.
|
||||||
|
-->
|
||||||
|
请注意,在特定扩展点中重新声明 `MultiPoint` 插件不会出错。
|
||||||
|
重新声明被忽略(并记录),因为特定的扩展点优先。
|
||||||
|
|
||||||
|
<!--
|
||||||
|
Besides keeping most of the config in one spot, this sample does a few things:
|
||||||
|
-->
|
||||||
|
|
||||||
|
除了将大部分配置保存在一个位置之外,此示例还做了一些事情:
|
||||||
|
|
||||||
|
<!--
|
||||||
|
* Enables the custom `queueSort` plugin and disables the default one
|
||||||
|
* Enables `CustomPlugin1` and `CustomPlugin2`, which will run first for all of their extension points
|
||||||
|
* Disables `DefaultPlugin1`, but only for `filter`
|
||||||
|
* Reorders `DefaultPlugin2` to run first in `score` (even before the custom plugins)
|
||||||
|
-->
|
||||||
|
* 启用自定义 `queueSort` 插件并禁用默认插件
|
||||||
|
|
||||||
|
* 启用 `CustomPlugin1` 和 `CustomPlugin2`,它们会在各自的所有扩展点上最先运行
|
||||||
|
|
||||||
|
* 禁用 `DefaultPlugin1`,但仅适用于 `filter`
|
||||||
|
|
||||||
|
* 重新排序 `DefaultPlugin2` 以在 `score` 中首先运行(甚至在自定义插件之前)
|
||||||
|
|
||||||
|
<!--
|
||||||
|
In versions of the config before `v1beta3`, without `multiPoint`, the above snippet would equate to this:
|
||||||
|
-->
|
||||||
|
在 `v1beta3` 之前的配置版本中,没有 `multiPoint`,上面的代码片段等同于:
|
||||||
|
|
||||||
|
```yaml
|
||||||
|
apiVersion: kubescheduler.config.k8s.io/v1beta2
|
||||||
|
kind: KubeSchedulerConfiguration
|
||||||
|
profiles:
|
||||||
|
- schedulerName: multipoint-scheduler
|
||||||
|
plugins:
|
||||||
|
|
||||||
|
# Disable the default QueueSort plugin
|
||||||
|
queueSort:
|
||||||
|
enabled:
|
||||||
|
- name: 'CustomQueueSort'
|
||||||
|
disabled:
|
||||||
|
- name: 'DefaultQueueSort'
|
||||||
|
|
||||||
|
# Enable custom Filter plugins
|
||||||
|
filter:
|
||||||
|
enabled:
|
||||||
|
- name: 'CustomPlugin1'
|
||||||
|
- name: 'CustomPlugin2'
|
||||||
|
- name: 'DefaultPlugin2'
|
||||||
|
disabled:
|
||||||
|
- name: 'DefaultPlugin1'
|
||||||
|
|
||||||
|
# Enable and reorder custom score plugins
|
||||||
|
score:
|
||||||
|
enabled:
|
||||||
|
- name: 'DefaultPlugin2'
|
||||||
|
weight: 1
|
||||||
|
- name: 'DefaultPlugin1'
|
||||||
|
weight: 3
|
||||||
|
```
|
||||||
|
|
||||||
|
<!--
|
||||||
|
While this is a complicated example, it demonstrates the flexibility of `MultiPoint` config
|
||||||
|
as well as its seamless integration with the existing methods for configuring extension points.
|
||||||
|
-->
|
||||||
|
|
||||||
|
虽然这是一个复杂的例子,但它展示了 `MultiPoint` 配置的灵活性以及它与配置扩展点的现有方法的无缝集成。
|
||||||
|
|
||||||
|
<!--
|
||||||
|
## Scheduler configuration migrations
|
||||||
|
-->
|
||||||
|
|
||||||
|
## 调度程序配置迁移
|
||||||
|
{{< tabs name="tab_with_md" >}}
|
||||||
|
{{% tab name="v1beta1 → v1beta2" %}}
|
||||||
|
<!--
|
||||||
|
* With the v1beta2 configuration version, you can use a new score extension for the
|
||||||
|
`NodeResourcesFit` plugin.
|
||||||
|
The new extension combines the functionalities of the `NodeResourcesLeastAllocated`,
|
||||||
|
`NodeResourcesMostAllocated` and `RequestedToCapacityRatio` plugins.
|
||||||
|
For example, if you previously used the `NodeResourcesMostAllocated` plugin, you
|
||||||
|
would instead use `NodeResourcesFit` (enabled by default) and add a `pluginConfig`
|
||||||
|
with a `scoreStrategy` that is similar to:
|
||||||
|
-->
|
||||||
|
* 在 v1beta2 配置版本中,你可以为 `NodeResourcesFit` 插件使用新的 score 扩展。
|
||||||
|
新的扩展结合了 `NodeResourcesLeastAllocated`、`NodeResourcesMostAllocated` 和 `RequestedToCapacityRatio` 插件的功能。
|
||||||
|
例如,如果你之前使用了 `NodeResourcesMostAllocated` 插件,
|
||||||
|
则可以改用 `NodeResourcesFit`(默认启用)并添加一个 `pluginConfig` 和 `scoreStrategy`,类似于:
|
||||||
|
|
||||||
|
```yaml
|
||||||
|
apiVersion: kubescheduler.config.k8s.io/v1beta2
|
||||||
|
kind: KubeSchedulerConfiguration
|
||||||
|
profiles:
|
||||||
|
- pluginConfig:
|
||||||
|
- args:
|
||||||
|
scoringStrategy:
|
||||||
|
resources:
|
||||||
|
- name: cpu
|
||||||
|
weight: 1
|
||||||
|
type: MostAllocated
|
||||||
|
name: NodeResourcesFit
|
||||||
|
```
|
||||||
|
|
||||||
|
<!--
|
||||||
|
* The scheduler plugin `NodeLabel` is deprecated; instead, use the [`NodeAffinity`](/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity) plugin (enabled by default) to achieve similar behavior.
|
||||||
|
-->
|
||||||
|
* 调度器插件 `NodeLabel` 已弃用;
|
||||||
|
相反,要使用 [`NodeAffinity`](/zh/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity)
|
||||||
|
插件(默认启用)来实现类似的行为。
|
||||||
|
|
||||||
|
<!--
|
||||||
|
* The scheduler plugin `ServiceAffinity` is deprecated; instead, use the [`InterPodAffinity`](/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity) plugin (enabled by default) to achieve similar behavior.
|
||||||
|
-->
|
||||||
|
* 调度程序插件 `ServiceAffinity` 已弃用;
|
||||||
|
相反,使用 [`InterPodAffinity`](/zh/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity)
|
||||||
|
插件(默认启用)来实现类似的行为。
|
||||||
|
|
||||||
|
<!--
|
||||||
|
* The scheduler plugin `NodePreferAvoidPods` is deprecated; instead, use [node taints](/docs/concepts/scheduling-eviction/taint-and-toleration/) to achieve similar behavior.
|
||||||
|
-->
|
||||||
|
* 调度器插件 `NodePreferAvoidPods` 已弃用;
|
||||||
|
相反,使用 [节点污点](/zh/docs/concepts/scheduling-eviction/taint-and-toleration/) 来实现类似的行为。
|
||||||
|
|
||||||
|
<!--
|
||||||
|
* A plugin enabled in a v1beta2 configuration file takes precedence over the default configuration for that plugin.
|
||||||
|
-->
|
||||||
|
* 在 v1beta2 配置文件中启用的插件优先于该插件的默认配置。
|
||||||
|
|
||||||
|
<!--
|
||||||
|
* Invalid `host` or `port` configured for scheduler healthz and metrics bind address will cause validation failure.
|
||||||
|
-->
|
||||||
|
* 为调度器的健康检查(healthz)和指标(metrics)绑定地址配置无效的 `host` 或 `port` 将导致验证失败。
|
||||||
|
|
||||||
|
{{% /tab %}}
|
||||||
|
|
||||||
|
{{% tab name="v1beta2 → v1beta3" %}}
|
||||||
|
<!--
|
||||||
|
* Three plugins' weight are increased by default:
|
||||||
|
* `InterPodAffinity` from 1 to 2
|
||||||
|
* `NodeAffinity` from 1 to 2
|
||||||
|
* `TaintToleration` from 1 to 3
|
||||||
|
-->
|
||||||
|
* 默认增加三个插件的权重(如需沿用旧值,可参考此列表之后的示例显式设置权重):
|
||||||
|
* `InterPodAffinity` 从 1 到 2
|
||||||
|
* `NodeAffinity` 从 1 到 2
|
||||||
|
* `TaintToleration` 从 1 到 3
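
如果想沿用 v1beta2 中的旧权重,可以在 `score` 扩展点显式覆盖这些插件的权重。下面是一个最小示意(调度器名称仅为示例,请按实际情况调整):

```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta3
kind: KubeSchedulerConfiguration
profiles:
  - schedulerName: default-scheduler
    plugins:
      score:
        enabled:
          # 通过特定扩展点设置的权重优先于默认值
          - name: InterPodAffinity
            weight: 1
          - name: NodeAffinity
            weight: 1
          - name: TaintToleration
            weight: 1
```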
|
||||||
|
{{% /tab %}}
|
||||||
|
{{< /tabs >}}
|
||||||
|
|
||||||
## {{% heading "whatsnext" %}}
|
## {{% heading "whatsnext" %}}
|
||||||
|
|
||||||
<!--
|
<!--
|
||||||
* Read the [kube-scheduler reference](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-scheduler/)
|
* Read the [kube-scheduler reference](/docs/reference/command-line-tools-reference/kube-scheduler/)
|
||||||
* Learn about [scheduling](/docs/concepts/scheduling-eviction/kube-scheduler/)
|
* Learn about [scheduling](/docs/concepts/scheduling-eviction/kube-scheduler/)
|
||||||
* Read the [kube-scheduler configuration (v1beta1)](/docs/reference/config-api/kube-scheduler-config.v1beta1/) reference
|
* Read the [kube-scheduler configuration (v1beta2)](/docs/reference/config-api/kube-scheduler-config.v1beta2/) reference
|
||||||
|
* Read the [kube-scheduler configuration (v1beta3)](/docs/reference/config-api/kube-scheduler-config.v1beta3/) reference
|
||||||
-->
|
-->
|
||||||
* 阅读 [kube-scheduler 参考](/zh/docs/reference/command-line-tools-reference/kube-scheduler/)
|
* 阅读 [kube-scheduler 参考](/zh/docs/reference/command-line-tools-reference/kube-scheduler/)
|
||||||
* 了解[调度](/zh/docs/concepts/scheduling-eviction/kube-scheduler/)
|
* 了解[调度](/zh/docs/concepts/scheduling-eviction/kube-scheduler/)
|
||||||
* 阅读 [kube-scheduler 配置 (v1beta1)](/zh/docs/reference/config-api/kube-scheduler-config.v1beta1/) 参考
|
* 阅读 [kube-scheduler 配置 (v1beta2)](/zh/docs/reference/config-api/kube-scheduler-config.v1beta2/) 参考
|
||||||
|
* 阅读 [kube-scheduler 配置 (v1beta3)](/zh/docs/reference/config-api/kube-scheduler-config.v1beta3/) 参考
|
||||||
|
|
|
@ -47,9 +47,16 @@ kubeadm 目前不支持对 CoreDNS 部署进行定制。
|
||||||
有关更多详细信息,请参阅[在 kubeadm 中使用 init phases](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init/#init-phases).
|
有关更多详细信息,请参阅[在 kubeadm 中使用 init phases](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init/#init-phases).
|
||||||
{{< /note >}}
|
{{< /note >}}
|
||||||
|
|
||||||
<!-- body -->
|
{{< note >}}
|
||||||
|
<!--
|
||||||
|
To reconfigure a cluster that has already been created see
|
||||||
|
[Reconfiguring a kubeadm cluster](/docs/tasks/administer-cluster/kubeadm/kubeadm-reconfigure).
|
||||||
|
-->
|
||||||
|
|
||||||
{{< feature-state for_k8s_version="1.12" state="stable" >}}
|
要重新配置已创建的集群,请参阅[重新配置 kubeadm 集群](/zh/docs/tasks/administer-cluster/kubeadm/kubeadm-reconfigure)。
|
||||||
|
{{< /note >}}
|
||||||
|
|
||||||
|
<!-- body -->
|
||||||
|
|
||||||
<!--
|
<!--
|
||||||
## Customizing the control plane with flags in `ClusterConfiguration`
|
## Customizing the control plane with flags in `ClusterConfiguration`
|
||||||
|
|
|
@ -81,10 +81,12 @@ kubeadm init --pod-network-cidr=10.244.0.0/16,2001:db8:42:0::/56 --service-cidr=
|
||||||
```
|
```
|
||||||
|
|
||||||
<!--
|
<!--
|
||||||
To make things clearer, here is an example kubeadm [configuration file](https://pkg.go.dev/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta3) `kubeadm-config.yaml` for the primary dual-stack control plane node.
|
To make things clearer, here is an example kubeadm
|
||||||
|
[configuration file](/docs/reference/config-api/kubeadm-config.v1beta3/)
|
||||||
|
`kubeadm-config.yaml` for the primary dual-stack control plane node.
|
||||||
-->
|
-->
|
||||||
为了更便于理解,参看下面的名为 `kubeadm-config.yaml` 的 kubeadm
|
为了更便于理解,参看下面的名为 `kubeadm-config.yaml` 的 kubeadm
|
||||||
[配置文件](/docs/reference/config-api/kubeadm-config.v1beta3/),
|
[配置文件](/zh/docs/reference/config-api/kubeadm-config.v1beta3/),
|
||||||
该文件用于双协议栈控制面的主控制节点。
|
该文件用于双协议栈控制面的主控制节点。
|
||||||
|
|
||||||
```yaml
|
```yaml
|
||||||
|
@ -138,14 +140,15 @@ The `--apiserver-advertise-address` flag does not support dual-stack.
|
||||||
|
|
||||||
Before joining a node, make sure that the node has IPv6 routable network interface and allows IPv6 forwarding.
|
Before joining a node, make sure that the node has IPv6 routable network interface and allows IPv6 forwarding.
|
||||||
|
|
||||||
Here is an example kubeadm [configuration file](https://pkg.go.dev/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta3) `kubeadm-config.yaml` for joining a worker node to the cluster.
|
Here is an example kubeadm [configuration file](/docs/reference/config-api/kubeadm-config.v1beta3/)
|
||||||
|
`kubeadm-config.yaml` for joining a worker node to the cluster.
|
||||||
-->
|
-->
|
||||||
### 向双协议栈集群添加节点 {#join-a-node-to-dual-stack-cluster}
|
### 向双协议栈集群添加节点 {#join-a-node-to-dual-stack-cluster}
|
||||||
|
|
||||||
在添加节点之前,请确保该节点具有 IPv6 可路由的网络接口并且启用了 IPv6 转发。
|
在添加节点之前,请确保该节点具有 IPv6 可路由的网络接口并且启用了 IPv6 转发。
|
||||||
|
|
||||||
下面的名为 `kubeadm-config.yaml` 的 kubeadm
|
下面的名为 `kubeadm-config.yaml` 的 kubeadm
|
||||||
[配置文件](/docs/reference/config-api/kubeadm-config.v1beta3/)
|
[配置文件](/zh/docs/reference/config-api/kubeadm-config.v1beta3/)
|
||||||
示例用于向集群中添加工作节点。
|
示例用于向集群中添加工作节点。
|
||||||
|
|
||||||
```yaml
|
```yaml
|
||||||
|
@ -164,10 +167,11 @@ nodeRegistration:
|
||||||
```
|
```
|
||||||
|
|
||||||
<!--
|
<!--
|
||||||
Also, here is an example kubeadm [configuration file](https://pkg.go.dev/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta3) `kubeadm-config.yaml` for joining another control plane node to the cluster.
|
Also, here is an example kubeadm [configuration file](/docs/reference/config-api/kubeadm-config.v1beta3/)
|
||||||
|
`kubeadm-config.yaml` for joining another control plane node to the cluster.
|
||||||
-->
|
-->
|
||||||
下面的名为 `kubeadm-config.yaml` 的 kubeadm
|
下面的名为 `kubeadm-config.yaml` 的 kubeadm
|
||||||
[配置文件](/docs/reference/config-api/kubeadm-config.v1beta3/)
|
[配置文件](/zh/docs/reference/config-api/kubeadm-config.v1beta3/)
|
||||||
示例用于向集群中添加另一个控制面节点。
|
示例用于向集群中添加另一个控制面节点。
|
||||||
|
|
||||||
```yaml
|
```yaml
|
||||||
|
@ -215,13 +219,14 @@ You can deploy a single-stack cluster that has the dual-stack networking feature
|
||||||
{{< /note >}}
|
{{< /note >}}
|
||||||
|
|
||||||
<!--
|
<!--
|
||||||
To make things more clear, here is an example kubeadm [configuration file](https://pkg.go.dev/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta3) `kubeadm-config.yaml` for the single-stack control plane node.
|
To make things more clear, here is an example kubeadm
|
||||||
|
[configuration file](/docs/reference/config-api/kubeadm-config.v1beta3/)
|
||||||
|
`kubeadm-config.yaml` for the single-stack control plane node.
|
||||||
-->
|
-->
|
||||||
为了更便于理解,参看下面的名为 `kubeadm-config.yaml` 的 kubeadm
|
为了更便于理解,参看下面的名为 `kubeadm-config.yaml` 的 kubeadm
|
||||||
[配置文件](/docs/reference/config-api/kubeadm-config.v1beta3/)示例,
|
[配置文件](/zh/docs/reference/config-api/kubeadm-config.v1beta3/)示例,
|
||||||
该文件用于单协议栈控制面节点。
|
该文件用于单协议栈控制面节点。
|
||||||
|
|
||||||
|
|
||||||
```yaml
|
```yaml
|
||||||
apiVersion: kubeadm.k8s.io/v1beta3
|
apiVersion: kubeadm.k8s.io/v1beta3
|
||||||
kind: ClusterConfiguration
|
kind: ClusterConfiguration
|
||||||
|
|
|
@ -27,6 +27,8 @@ For information on how to create a cluster with kubeadm once you have performed
|
||||||
有关在执行此安装过程后如何使用 kubeadm 创建集群的信息,请参见
|
有关在执行此安装过程后如何使用 kubeadm 创建集群的信息,请参见
|
||||||
[使用 kubeadm 创建集群](/zh/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/) 页面。
|
[使用 kubeadm 创建集群](/zh/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/) 页面。
|
||||||
|
|
||||||
|
{{% dockershim-removal %}}
|
||||||
|
|
||||||
## {{% heading "prerequisites" %}}
|
## {{% heading "prerequisites" %}}
|
||||||
|
|
||||||
<!--
|
<!--
|
||||||
|
@ -117,15 +119,15 @@ For more details please see the [Network Plugin Requirements](/docs/concepts/ext
|
||||||
## Check required ports
|
## Check required ports
|
||||||
These
|
These
|
||||||
[required ports](/docs/reference/ports-and-protocols/)
|
[required ports](/docs/reference/ports-and-protocols/)
|
||||||
need to be open in order for Kubernetes components to communicate with each other. You can use telnet to check if a port is open. For example:
|
need to be open in order for Kubernetes components to communicate with each other. You can use tools like netcat to check if a port is open. For example:
|
||||||
-->
|
-->
|
||||||
|
|
||||||
## 检查所需端口{#check-required-ports}
|
## 检查所需端口{#check-required-ports}
|
||||||
|
|
||||||
启用这些[必要的端口](/zh/docs/reference/ports-and-protocols/)后才能使 Kubernetes 的各组件相互通信。可以使用 telnet 来检查端口是否启用,例如:
|
启用这些[必要的端口](/zh/docs/reference/ports-and-protocols/)后才能使 Kubernetes 的各组件相互通信。可以使用 netcat 之类的工具来检查端口是否启用,例如:
|
||||||
|
|
||||||
```shell
|
```shell
|
||||||
telnet 127.0.0.1 6443
|
nc 127.0.0.1 6443
|
||||||
```
|
```
|
||||||
|
|
||||||
<!--
|
<!--
|
||||||
|
@ -156,7 +158,7 @@ to interface with your chosen container runtime.
|
||||||
|
|
||||||
If you don't specify a runtime, kubeadm automatically tries to detect an installed
|
If you don't specify a runtime, kubeadm automatically tries to detect an installed
|
||||||
container runtime by scanning through a list of well known Unix domain sockets.
|
container runtime by scanning through a list of well known Unix domain sockets.
|
||||||
The following table lists container runtimes and their associated socket paths:
|
The following table lists container runtimes that kubeadm looks for, and their associated socket paths:
|
||||||
|
|
||||||
| Runtime | Domain Socket |
|
| Runtime | Domain Socket |
|
||||||
|------------|---------------------------------|
|
|------------|---------------------------------|
|
||||||
|
@ -170,33 +172,33 @@ The following table lists container runtimes and their associated socket paths:
|
||||||
|
|
||||||
如果你不指定运行时,则 kubeadm 会自动尝试检测到系统上已经安装的运行时,
|
如果你不指定运行时,则 kubeadm 会自动尝试检测到系统上已经安装的运行时,
|
||||||
方法是扫描一组众所周知的 Unix 域套接字。
|
方法是扫描一组众所周知的 Unix 域套接字。
|
||||||
下面的表格列举了一些容器运行时及其对应的套接字路径:
|
下面的表格列举了一些 kubeadm 查找的容器运行时及其对应的套接字路径:
|
||||||
|
|
||||||
| 运行时 | 域套接字 |
|
| 运行时 | 域套接字 |
|
||||||
|------------|----------------------------------|
|
|------------|----------------------------------|
|
||||||
| Docker | /var/run/dockershim.sock |
|
| Docker Engine | `/var/run/dockershim.sock` |
|
||||||
| containerd | /run/containerd/containerd.sock |
|
| containerd | `/run/containerd/containerd.sock` |
|
||||||
| CRI-O | /var/run/crio/crio.sock |
|
| CRI-O | `/var/run/crio/crio.sock` |
|
||||||
|
|
||||||
<!--
|
<!--
|
||||||
<br />
|
<br />
|
||||||
If both Docker and containerd are detected, Docker takes precedence. This is
|
If both Docker Engine and containerd are detected, kubeadm will give precedence to Docker Engine. This is
|
||||||
needed because Docker 18.09 ships with containerd and both are detectable even if you only
|
needed because Docker 18.09 ships with containerd and both are detectable even if you only
|
||||||
installed Docker.
|
installed Docker.
|
||||||
If any other two or more runtimes are detected, kubeadm exits with an error.
|
**If any other two or more runtimes are detected, kubeadm exits with an error.**
|
||||||
|
|
||||||
The kubelet integrates with Docker through the built-in `dockershim` CRI implementation.
|
The kubelet can integrate with Docker Engine using the deprecated `dockershim` adapter (the dockershim is part of the kubelet itself).
|
||||||
|
|
||||||
See [container runtimes](/docs/setup/production-environment/container-runtimes/)
|
See [container runtimes](/docs/setup/production-environment/container-runtimes/)
|
||||||
for more information.
|
for more information.
|
||||||
-->
|
-->
|
||||||
<br/>
|
<br/>
|
||||||
如果同时检测到 Docker 和 containerd,则优先选择 Docker。
|
如果同时检测到 Docker Engine 和 containerd,kubeadm 将优先考虑 Docker Engine。
|
||||||
之所以需要这样处理,是因为 Docker 18.09 附带了 containerd 并且两者都是可以检测到的,
|
之所以需要这样处理,是因为 Docker 18.09 附带了 containerd 并且两者都是可以检测到的,
|
||||||
即使你仅安装了 Docker。
|
即使你仅安装了 Docker。
|
||||||
如果检测到其他两个或多个运行时,kubeadm 输出错误信息并退出。
|
**如果检测到其他两个或多个运行时,kubeadm 输出错误信息并退出。**
|
||||||
|
|
||||||
kubelet 通过内置的 `dockershim` CRI 实现与 Docker 集成。
|
kubelet 可以使用已弃用的 dockershim 适配器与 Docker Engine 集成(dockershim 是 kubelet 本身的一部分)。
|
||||||
|
|
||||||
参阅[容器运行时](/zh/docs/setup/production-environment/container-runtimes/)
|
参阅[容器运行时](/zh/docs/setup/production-environment/container-runtimes/)
|
||||||
以了解更多信息。
|
以了解更多信息。
|
||||||
|
@ -205,13 +207,13 @@ kubelet 通过内置的 `dockershim` CRI 实现与 Docker 集成。
|
||||||
{{% tab name="其它操作系统" %}}
|
{{% tab name="其它操作系统" %}}
|
||||||
<!--
|
<!--
|
||||||
By default, kubeadm uses {{< glossary_tooltip term_id="docker" >}} as the container runtime.
|
By default, kubeadm uses {{< glossary_tooltip term_id="docker" >}} as the container runtime.
|
||||||
The kubelet integrates with Docker through the built-in `dockershim` CRI implementation.
|
The kubelet can integrate with Docker Engine using the deprecated `dockershim` adapter (the dockershim is part of the kubelet itself).
|
||||||
|
|
||||||
See [container runtimes](/docs/setup/production-environment/container-runtimes/)
|
See [container runtimes](/docs/setup/production-environment/container-runtimes/)
|
||||||
for more information.
|
for more information.
|
||||||
-->
|
-->
|
||||||
默认情况下, kubeadm 使用 {{< glossary_tooltip term_id="docker" >}} 作为容器运行时。
|
默认情况下, kubeadm 使用 {{< glossary_tooltip term_id="docker" >}} 作为容器运行时。
|
||||||
kubelet 通过内置的 `dockershim` CRI 实现与 Docker 集成。
|
kubelet 可以使用已弃用的 dockershim 适配器与 Docker Engine 集成(dockershim 是 kubelet 本身的一部分)。
|
||||||
参阅[容器运行时](/zh/docs/setup/production-environment/container-runtimes/)
|
参阅[容器运行时](/zh/docs/setup/production-environment/container-runtimes/)
|
||||||
以了解更多信息。
|
以了解更多信息。
|
||||||
|
|
||||||
|
@ -355,6 +357,9 @@ sudo systemctl enable --now kubelet
|
||||||
You have to do this until SELinux support is improved in the kubelet.
|
You have to do this until SELinux support is improved in the kubelet.
|
||||||
|
|
||||||
- You can leave SELinux enabled if you know how to configure it but it may require settings that are not supported by kubeadm.
|
- You can leave SELinux enabled if you know how to configure it but it may require settings that are not supported by kubeadm.
|
||||||
|
- If the `baseurl` fails because your Red Hat-based distribution cannot interpret `basearch`, replace `\$basearch` with your computer's architecture.
|
||||||
|
Type `uname -m` to see that value.
|
||||||
|
For example, the `baseurl` URL for `x86_64` could be: `https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64`.
|
||||||
-->
|
-->
|
||||||
**请注意:**
|
**请注意:**
|
||||||
|
|
||||||
|
@ -365,6 +370,9 @@ sudo systemctl enable --now kubelet
|
||||||
你必须这么做,直到 kubelet 做出对 SELinux 的支持进行升级为止。
|
你必须这么做,直到 kubelet 做出对 SELinux 的支持进行升级为止。
|
||||||
|
|
||||||
- 如果你知道如何配置 SELinux 则可以将其保持启用状态,但可能需要设定 kubeadm 不支持的部分配置
|
- 如果你知道如何配置 SELinux 则可以将其保持启用状态,但可能需要设定 kubeadm 不支持的部分配置
|
||||||
|
- 如果由于该 Red Hat 的发行版无法解析 `basearch` 导致获取 `baseurl` 失败,请将 `\$basearch` 替换为你计算机的架构。
|
||||||
|
输入 `uname -m` 以查看该值。
|
||||||
|
例如,`x86_64` 的 `baseurl` URL 可以是:`https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64`。
|
||||||
|
|
||||||
{{% /tab %}}
|
{{% /tab %}}
|
||||||
{{% tab name="无包管理器的情况" %}}
|
{{% tab name="无包管理器的情况" %}}
|
||||||
|
|
|
@ -18,7 +18,6 @@ This page shows how to access clusters using the Kubernetes API.
|
||||||
|
|
||||||
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
|
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
|
||||||
|
|
||||||
|
|
||||||
<!-- steps -->
|
<!-- steps -->
|
||||||
|
|
||||||
<!--
|
<!--
|
||||||
|
@ -56,11 +55,11 @@ kubectl config view
|
||||||
|
|
||||||
<!--
|
<!--
|
||||||
Many of the [examples](https://github.com/kubernetes/examples/tree/master/) provide an introduction to using
|
Many of the [examples](https://github.com/kubernetes/examples/tree/master/) provide an introduction to using
|
||||||
kubectl. Complete documentation is found in the [kubectl manual](/docs/reference/kubectl/overview/).
|
kubectl. Complete documentation is found in the [kubectl manual](/docs/reference/kubectl/).
|
||||||
-->
|
-->
|
||||||
|
|
||||||
许多[样例](https://github.com/kubernetes/examples/tree/master/)
|
许多[样例](https://github.com/kubernetes/examples/tree/master/)
|
||||||
提供了使用 kubectl 的介绍。完整文档请见 [kubectl 手册](/zh/docs/reference/kubectl/overview/)。
|
提供了使用 kubectl 的介绍。完整文档请见 [kubectl 手册](/zh/docs/reference/kubectl/)。
|
||||||
|
|
||||||
<!--
|
<!--
|
||||||
### Directly accessing the REST API
|
### Directly accessing the REST API
|
||||||
|
@ -160,8 +159,25 @@ export CLUSTER_NAME="some_server_name"
|
||||||
# 指向引用该集群名称的 API 服务器
|
# 指向引用该集群名称的 API 服务器
|
||||||
APISERVER=$(kubectl config view -o jsonpath="{.clusters[?(@.name==\"$CLUSTER_NAME\")].cluster.server}")
|
APISERVER=$(kubectl config view -o jsonpath="{.clusters[?(@.name==\"$CLUSTER_NAME\")].cluster.server}")
|
||||||
|
|
||||||
# 获得令牌
|
# 创建一个 secret 来保存默认服务账户的令牌
|
||||||
TOKEN=$(kubectl get secrets -o jsonpath="{.items[?(@.metadata.annotations['kubernetes\.io/service-account\.name']=='default')].data.token}"|base64 -d)
|
kubectl apply -f - <<EOF
|
||||||
|
apiVersion: v1
|
||||||
|
kind: Secret
|
||||||
|
metadata:
|
||||||
|
name: default-token
|
||||||
|
annotations:
|
||||||
|
kubernetes.io/service-account.name: default
|
||||||
|
type: kubernetes.io/service-account-token
|
||||||
|
EOF
|
||||||
|
|
||||||
|
# 等待令牌控制器使用令牌填充 secret:
|
||||||
|
while ! kubectl describe secret default-token | grep -E '^token' >/dev/null; do
|
||||||
|
echo "waiting for token..." >&2
|
||||||
|
sleep 1
|
||||||
|
done
|
||||||
|
|
||||||
|
# 获取令牌
|
||||||
|
TOKEN=$(kubectl get secret default-token -o jsonpath='{.data.token}' | base64 --decode)
|
||||||
|
|
||||||
# 使用令牌玩转 API
|
# 使用令牌玩转 API
|
||||||
curl -X GET $APISERVER/api --header "Authorization: Bearer $TOKEN" --insecure
|
curl -X GET $APISERVER/api --header "Authorization: Bearer $TOKEN" --insecure
|
||||||
|
@ -185,30 +201,6 @@ curl -X GET $APISERVER/api --header "Authorization: Bearer $TOKEN" --insecure
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
<!-- Using `jsonpath` approach: -->
|
|
||||||
使用 `jsonpath` 方式:
|
|
||||||
|
|
||||||
```shell
|
|
||||||
APISERVER=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')
|
|
||||||
TOKEN=$(kubectl get secret $(kubectl get serviceaccount default -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 --decode )
|
|
||||||
curl $APISERVER/api --header "Authorization: Bearer $TOKEN" --insecure
|
|
||||||
```
|
|
||||||
|
|
||||||
```json
|
|
||||||
{
|
|
||||||
"kind": "APIVersions",
|
|
||||||
"versions": [
|
|
||||||
"v1"
|
|
||||||
],
|
|
||||||
"serverAddressByClientCIDRs": [
|
|
||||||
{
|
|
||||||
"clientCIDR": "0.0.0.0/0",
|
|
||||||
"serverAddress": "10.0.1.149:443"
|
|
||||||
}
|
|
||||||
]
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
<!--
|
<!--
|
||||||
The above example uses the `--insecure` flag. This leaves it subject to MITM
|
The above example uses the `--insecure` flag. This leaves it subject to MITM
|
||||||
attacks. When kubectl accesses the cluster it uses a stored root certificate
|
attacks. When kubectl accesses the cluster it uses a stored root certificate
|
||||||
|
|
|
@ -108,7 +108,7 @@ You should be able to access the new `nginx` service from other Pods. To access
|
||||||
要从 default 命名空间中的其它 Pod 来访问该服务,可以启动一个 busybox 容器:
|
要从 default 命名空间中的其它 Pod 来访问该服务,可以启动一个 busybox 容器:
|
||||||
|
|
||||||
```shell
|
```shell
|
||||||
kubectl run busybox --rm -ti --image=busybox /bin/sh
|
kubectl run busybox --rm -ti --image=busybox:1.28 /bin/sh
|
||||||
```
|
```
|
||||||
|
|
||||||
<!--
|
<!--
|
||||||
|
@ -180,7 +180,7 @@ When you attempt to access the `nginx` Service from a Pod without the correct la
|
||||||
如果你尝试从没有设定正确标签的 Pod 中去访问 `nginx` 服务,请求将会超时:
|
如果你尝试从没有设定正确标签的 Pod 中去访问 `nginx` 服务,请求将会超时:
|
||||||
|
|
||||||
```shell
|
```shell
|
||||||
kubectl run busybox --rm -ti --image=busybox -- /bin/sh
|
kubectl run busybox --rm -ti --image=busybox:1.28 -- /bin/sh
|
||||||
```
|
```
|
||||||
|
|
||||||
<!--
|
<!--
|
||||||
|
@ -207,7 +207,7 @@ You can create a Pod with the correct labels to see that the request is allowed:
|
||||||
创建一个拥有正确标签的 Pod,你将看到请求是被允许的:
|
创建一个拥有正确标签的 Pod,你将看到请求是被允许的:
|
||||||
|
|
||||||
```shell
|
```shell
|
||||||
kubectl run busybox --rm -ti --labels="access=true" --image=busybox -- /bin/sh
|
kubectl run busybox --rm -ti --labels="access=true" --image=busybox:1.28 -- /bin/sh
|
||||||
```
|
```
|
||||||
<!--
|
<!--
|
||||||
In your shell, run the command:
|
In your shell, run the command:
|
||||||
|
|
|
@ -25,8 +25,9 @@ You can use Kubernetes to run a mixture of Linux and Windows nodes, so you can m
|
||||||
混合使用运行于 Linux 上的 Pod 和运行于 Windows 上的 Pod。
|
混合使用运行于 Linux 上的 Pod 和运行于 Windows 上的 Pod。
|
||||||
本页面展示如何将 Windows 节点注册到你的集群。
|
本页面展示如何将 Windows 节点注册到你的集群。
|
||||||
|
|
||||||
## {{% heading "prerequisites" %}}
|
{{% dockershim-removal %}}
|
||||||
|
|
||||||
|
## {{% heading "prerequisites" %}}
|
||||||
{{< version-check >}}
|
{{< version-check >}}
|
||||||
|
|
||||||
<!--
|
<!--
|
||||||
|
|
|
@ -0,0 +1,509 @@
|
||||||
|
---
|
||||||
|
title: 重新配置 kubeadm 集群
|
||||||
|
content_type: task
|
||||||
|
weight: 10
|
||||||
|
---
|
||||||
|
<!--
|
||||||
|
reviewers:
|
||||||
|
- sig-cluster-lifecycle
|
||||||
|
title: Reconfiguring a kubeadm cluster
|
||||||
|
content_type: task
|
||||||
|
weight: 10
|
||||||
|
-->
|
||||||
|
|
||||||
|
<!-- overview -->
|
||||||
|
<!--
|
||||||
|
kubeadm does not support automated ways of reconfiguring components that
|
||||||
|
were deployed on managed nodes. One way of automating this would be
|
||||||
|
by using a custom [operator](/docs/concepts/extend-kubernetes/operator/).
|
||||||
|
-->
|
||||||
|
kubeadm 不支持自动重新配置部署在托管节点上的组件的方式。
|
||||||
|
一种自动化的方法是使用自定义的
|
||||||
|
[operator](/zh/docs/concepts/extend-kubernetes/operator/)。
|
||||||
|
|
||||||
|
<!--
|
||||||
|
To modify the components configuration you must manually edit associated cluster
|
||||||
|
objects and files on disk.
|
||||||
|
|
||||||
|
This guide shows the correct sequence of steps that need to be performed
|
||||||
|
to achieve kubeadm cluster reconfiguration.
|
||||||
|
-->
|
||||||
|
要修改组件配置,你必须手动编辑磁盘上关联的集群对象和文件。
|
||||||
|
本指南展示了实现 kubeadm 集群重新配置所需执行的正确步骤顺序。
|
||||||
|
|
||||||
|
## {{% heading "prerequisites" %}}
|
||||||
|
|
||||||
|
<!--
|
||||||
|
- You need a cluster that was deployed using kubeadm
|
||||||
|
- Have administrator credentials (`/etc/kubernetes/admin.conf`) and network connectivity
|
||||||
|
to a running kube-apiserver in the cluster from a host that has kubectl installed
|
||||||
|
- Have a text editor installed on all hosts
|
||||||
|
-->
|
||||||
|
- 你需要一个使用 kubeadm 部署的集群
|
||||||
|
- 拥有管理员凭据(`/etc/kubernetes/admin.conf`)
|
||||||
|
和从安装了 kubectl 的主机到集群中正在运行的 kube-apiserver 的网络连接
|
||||||
|
- 在所有主机上安装文本编辑器
|
||||||
|
|
||||||
|
<!-- steps -->
|
||||||
|
|
||||||
|
<!--
|
||||||
|
## Reconfiguring the cluster
|
||||||
|
kubeadm writes a set of cluster wide component configuration options in
|
||||||
|
ConfigMaps and other objects. These objects must be manually edited. The command `kubectl edit`
|
||||||
|
can be used for that.
|
||||||
|
-->
|
||||||
|
## 重新配置集群
|
||||||
|
|
||||||
|
kubeadm 在 ConfigMap 和其他对象中写入了一组集群范围的组件配置选项。
|
||||||
|
这些对象必须手动编辑,可以使用命令 `kubectl edit`。
|
||||||
|
|
||||||
|
<!--
|
||||||
|
The `kubectl edit` command will open a text editor where you can edit and save the object directly.
|
||||||
|
|
||||||
|
You can use the environment variables `KUBECONFIG` and `KUBE_EDITOR` to specify the location of
|
||||||
|
the kubectl consumed kubeconfig file and preferred text editor.
|
||||||
|
|
||||||
|
For example:
|
||||||
|
-->
|
||||||
|
`kubectl edit` 命令将打开一个文本编辑器,你可以在其中直接编辑和保存对象。
|
||||||
|
你可以使用环境变量 `KUBECONFIG` 和 `KUBE_EDITOR` 来指定 kubectl
|
||||||
|
使用的 kubeconfig 文件和首选文本编辑器的位置。
|
||||||
|
|
||||||
|
例如:
|
||||||
|
```
|
||||||
|
KUBECONFIG=/etc/kubernetes/admin.conf KUBE_EDITOR=nano kubectl edit <parameters>
|
||||||
|
```
|
||||||
|
|
||||||
|
{{< note >}}
|
||||||
|
<!--
|
||||||
|
Upon saving any changes to these cluster objects, components running on nodes may not be
|
||||||
|
automatically updated. The steps below instruct you on how to perform that manually.
|
||||||
|
-->
|
||||||
|
保存对这些集群对象的任何更改后,节点上运行的组件可能不会自动更新。
|
||||||
|
以下步骤将指导你如何手动执行该操作。
|
||||||
|
{{< /note >}}
|
||||||
|
|
||||||
|
{{< warning >}}
|
||||||
|
<!--
|
||||||
|
Component configuration in ConfigMaps is stored as unstructured data (YAML string).
|
||||||
|
This means that validation will not be performed upon updating the contents of a ConfigMap.
|
||||||
|
You have to be careful to follow the documented API format for a particular
|
||||||
|
component configuration and avoid introducing typos and YAML indentation mistakes.
|
||||||
|
-->
|
||||||
|
|
||||||
|
ConfigMaps 中的组件配置存储为非结构化数据(YAML 字符串)。 这意味着在更新
|
||||||
|
ConfigMap 的内容时不会执行验证。 你必须小心遵循特定组件配置的文档化 API 格式,
|
||||||
|
并避免引入拼写错误和 YAML 缩进错误。
|
||||||
|
{{< /warning >}}
|
||||||
|
|
||||||
|
<!--
|
||||||
|
### Applying cluster configuration changes
|
||||||
|
|
||||||
|
#### Updating the `ClusterConfiguration`
|
||||||
|
|
||||||
|
During cluster creation and upgrade, kubeadm writes its
|
||||||
|
[`ClusterConfiguration`](/docs/reference/config-api/kubeadm-config.v1beta3/)
|
||||||
|
in a ConfigMap called `kubeadm-config` in the `kube-system` namespace.
|
||||||
|
|
||||||
|
To change a particular option in the `ClusterConfiguration` you can edit the ConfigMap with this command:
|
||||||
|
|
||||||
|
The configuration is located under the `data.ClusterConfiguration` key.
|
||||||
|
-->
|
||||||
|
### 应用集群配置更改
|
||||||
|
|
||||||
|
#### 更新 `ClusterConfiguration`
|
||||||
|
|
||||||
|
在集群创建和升级期间,kubeadm 将其
|
||||||
|
[`ClusterConfiguration`](/zh/docs/reference/config-api/kubeadm-config.v1beta3/)
|
||||||
|
写入 `kube-system` 命名空间中名为 `kubeadm-config` 的 ConfigMap。
|
||||||
|
|
||||||
|
要更改 `ClusterConfiguration` 中的特定选项,你可以使用以下命令编辑 ConfigMap:
|
||||||
|
|
||||||
|
```shell
|
||||||
|
kubectl edit cm -n kube-system kubeadm-config
|
||||||
|
```
|
||||||
|
|
||||||
|
配置位于 `data.ClusterConfiguration` 键下。
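
作为示意(内容经过大幅精简,真实集群中的配置会包含更多字段),该 ConfigMap 的形式大致如下:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubeadm-config
  namespace: kube-system
data:
  ClusterConfiguration: |
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    kubernetesVersion: v1.23.0
    controlPlaneEndpoint: "cluster-endpoint:6443"
```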
|
||||||
|
|
||||||
|
{{< note >}}
|
||||||
|
<!--
|
||||||
|
The `ClusterConfiguration` includes a variety of options that affect the configuration of individual
|
||||||
|
components such as kube-apiserver, kube-scheduler, kube-controller-manager, CoreDNS, etcd and kube-proxy.
|
||||||
|
Changes to the configuration must be reflected on node components manually.
|
||||||
|
-->
|
||||||
|
`ClusterConfiguration` 包括各种影响单个组件配置的选项, 例如
|
||||||
|
kube-apiserver、kube-scheduler、kube-controller-manager、
|
||||||
|
CoreDNS、etcd 和 kube-proxy。 对配置的更改必须手动反映在节点组件上。
|
||||||
|
{{< /note >}}
|
||||||
|
|
||||||
|
<!--
|
||||||
|
#### Reflecting `ClusterConfiguration` changes on control plane nodes
|
||||||
|
|
||||||
|
kubeadm manages the control plane components as static Pod manifests located in
|
||||||
|
the directory `/etc/kubernetes/manifests`.
|
||||||
|
Any changes to the `ClusterConfiguration` under the `apiServer`, `controllerManager`, `scheduler` or `etcd`
|
||||||
|
keys must be reflected in the associated files in the manifests directory on a control plane node.
|
||||||
|
-->
|
||||||
|
#### 在控制平面节点上反映 `ClusterConfiguration` 更改
|
||||||
|
|
||||||
|
kubeadm 将控制平面组件作为位于 `/etc/kubernetes/manifests`
|
||||||
|
目录中的静态 Pod 清单进行管理。
|
||||||
|
对 `apiServer`、`controllerManager`、`scheduler` 或 `etcd`键下的
|
||||||
|
`ClusterConfiguration` 的任何更改都必须反映在控制平面节点上清单目录中的关联文件中。
|
||||||
|
|
||||||
|
<!--
|
||||||
|
Such changes may include:
|
||||||
|
- `extraArgs` - requires updating the list of flags passed to a component container
|
||||||
|
- `extraMounts` - requires updated the volume mounts for a component container
|
||||||
|
- `*SANs` - requires writing new certificates with updated Subject Alternative Names.
|
||||||
|
|
||||||
|
Before proceeding with these changes, make sure you have backed up the directory `/etc/kubernetes/`.
|
||||||
|
-->
|
||||||
|
|
||||||
|
此类更改可能包括:
|
||||||
|
- `extraArgs` - 需要更新传递给组件容器的标志列表
|
||||||
|
- `extraMounts` - 需要更新组件容器的卷挂载
|
||||||
|
- `*SANs` - 需要使用更新的主题备用名称编写新证书
|
||||||
|
|
||||||
|
在继续进行这些更改之前,请确保你已备份目录 `/etc/kubernetes/`。
|
||||||
|
|
||||||
|
<!--
|
||||||
|
To write new certificates you can use:
|
||||||
|
|
||||||
|
To write new manifest files in `/etc/kubernetes/manifests` you can use:
|
||||||
|
-->
|
||||||
|
|
||||||
|
要编写新证书,你可以使用:
|
||||||
|
|
||||||
|
```shell
|
||||||
|
kubeadm init phase certs <component-name> --config <config-file>
|
||||||
|
```
|
||||||
|
|
||||||
|
要在 `/etc/kubernetes/manifests` 中编写新的清单文件,你可以使用:
|
||||||
|
|
||||||
|
```shell
|
||||||
|
kubeadm init phase control-plane <component-name> --config <config-file>
|
||||||
|
```
|
||||||
|
|
||||||
|
<!--
|
||||||
|
The `<config-file>` contents must match the updated `ClusterConfiguration`.
|
||||||
|
The `<component-name>` value must be the name of the component.
|
||||||
|
-->
|
||||||
|
`<config-file>` 内容必须与更新后的 `ClusterConfiguration` 匹配。
|
||||||
|
`<component-name>` 值必须是组件的名称。
|
||||||
|
|
||||||
|
{{< note >}}
|
||||||
|
<!--
|
||||||
|
Updating a file in `/etc/kubernetes/manifests` will tell the kubelet to restart the static Pod for the corresponding component.
|
||||||
|
Try doing these changes one node at a time to leave the cluster without downtime.
|
||||||
|
-->
|
||||||
|
更新 `/etc/kubernetes/manifests` 中的文件将告诉 kubelet 重新启动相应组件的静态 Pod。
|
||||||
|
请尝试一次只对一个节点进行这些更改,以使集群不发生停机。
|
||||||
|
{{< /note >}}
|
||||||
|
|
||||||
|
<!--
|
||||||
|
### Applying kubelet configuration changes
|
||||||
|
|
||||||
|
#### Updating the `KubeletConfiguration`
|
||||||
|
|
||||||
|
During cluster creation and upgrade, kubeadm writes its
|
||||||
|
[`KubeletConfiguration`](/docs/reference/config-api/kubelet-config.v1beta1/)
|
||||||
|
in a ConfigMap called `kubelet-config` in the `kube-system` namespace.
|
||||||
|
|
||||||
|
You can edit the ConfigMap with this command:
|
||||||
|
|
||||||
|
The configuration is located under the `data.kubelet` key.
|
||||||
|
-->
|
||||||
|
### 应用 kubelet 配置更改
|
||||||
|
|
||||||
|
#### 更新 `KubeletConfiguration`
|
||||||
|
|
||||||
|
在集群创建和升级期间,kubeadm 将其
|
||||||
|
[`KubeletConfiguration`](/zh/docs/reference/config-api/kubelet-config.v1beta1/)
|
||||||
|
写入 `kube-system` 命名空间中名为 `kubelet-config` 的 ConfigMap。
|
||||||
|
你可以使用以下命令编辑 ConfigMap:
|
||||||
|
|
||||||
|
```shell
|
||||||
|
kubectl edit cm -n kube-system kubelet-config
|
||||||
|
```
|
||||||
|
|
||||||
|
配置位于 `data.kubelet` 键下。
|
||||||
|
|
||||||
|
<!--
|
||||||
|
#### Reflecting the kubelet changes
|
||||||
|
|
||||||
|
To reflect the change on kubeadm nodes you must do the following:
|
||||||
|
- Log in to a kubeadm node
|
||||||
|
- Run `kubeadm upgrade node phase kubelet-config` to download the latest `kubelet-config`
|
||||||
|
ConfigMap contents into the local file `/var/lib/kubelet/config.conf`
|
||||||
|
- Edit the file `/var/lib/kubelet/kubeadm-flags.env` to apply additional configuration with
|
||||||
|
flags
|
||||||
|
- Restart the kubelet service with `systemctl restart kubelet`
|
||||||
|
-->
|
||||||
|
#### 反映 kubelet 的更改
|
||||||
|
|
||||||
|
要反映 kubeadm 节点上的更改,你必须执行以下操作(完整的命令顺序可参考列表后的示意):
|
||||||
|
|
||||||
|
- 登录到 kubeadm 节点
|
||||||
|
- 运行 `kubeadm upgrade node phase kubelet-config` 下载最新的
|
||||||
|
`kubelet-config` ConfigMap 内容到本地文件 `/var/lib/kubelet/config.conf`
|
||||||
|
- 编辑文件 `/var/lib/kubelet/kubeadm-flags.env` 以使用标志来应用额外的配置
|
||||||
|
- 使用 `systemctl restart kubelet` 重启 kubelet 服务
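
下面把上述步骤串成一个命令顺序的示意(假设直接在目标节点上以 root 身份执行,编辑器可以换成你习惯的工具):

```shell
# 1. 把集群中最新的 kubelet-config ConfigMap 内容下载到 /var/lib/kubelet/config.conf
kubeadm upgrade node phase kubelet-config

# 2.(可选)通过标志应用节点本地的额外配置
vi /var/lib/kubelet/kubeadm-flags.env

# 3. 重启 kubelet 使配置生效
systemctl restart kubelet
```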
|
||||||
|
|
||||||
|
{{< note >}}
|
||||||
|
<!--
|
||||||
|
Do these changes one node at a time to allow workloads to be rescheduled properly.
|
||||||
|
-->
|
||||||
|
一次执行一个节点的这些更改,以允许正确地重新安排工作负载。
|
||||||
|
{{< /note >}}
|
||||||
|
|
||||||
|
{{< note >}}
|
||||||
|
<!--
|
||||||
|
During `kubeadm upgrade`, kubeadm downloads the `KubeletConfiguration` from the
|
||||||
|
`kubelet-config` ConfigMap and overwrite the contents of `/var/lib/kubelet/config.conf`.
|
||||||
|
This means that node local configuration must be applied either by flags in
|
||||||
|
`/var/lib/kubelet/kubeadm-flags.env` or by manually updating the contents of
|
||||||
|
`/var/lib/kubelet/config.conf` after `kubeadm upgrade`, and then restarting the kubelet.
|
||||||
|
-->
|
||||||
|
在 `kubeadm upgrade` 期间,kubeadm 从 `kubelet-config` ConfigMap
|
||||||
|
下载 `KubeletConfiguration` 并覆盖 `/var/lib/kubelet/config.conf` 的内容。
|
||||||
|
这意味着节点本地配置必须通过 `/var/lib/kubelet/kubeadm-flags.env` 中的标志,或在
|
||||||
|
`kubeadm upgrade` 后手动更新 `/var/lib/kubelet/config.conf` 的内容来应用,然后重新启动 kubelet。
|
||||||
|
{{< /note >}}
|
||||||
|
|
||||||
|
<!--
|
||||||
|
### Applying kube-proxy configuration changes
|
||||||
|
|
||||||
|
#### Updating the `KubeProxyConfiguration`
|
||||||
|
|
||||||
|
During cluster creation and upgrade, kubeadm writes its
|
||||||
|
[`KubeProxyConfiguration`](/docs/reference/config-api/kube-proxy-config.v1alpha1/)
|
||||||
|
in a ConfigMap in the `kube-system` namespace called `kube-proxy`.
|
||||||
|
|
||||||
|
This ConfigMap is used by the `kube-proxy` DaemonSet in the `kube-system` namespace.
|
||||||
|
|
||||||
|
To change a particular option in the `KubeProxyConfiguration`, you can edit the ConfigMap with this command:
|
||||||
|
|
||||||
|
The configuration is located under the `data.config.conf` key.
|
||||||
|
-->
|
||||||
|
### 应用 kube-proxy 配置更改
|
||||||
|
|
||||||
|
#### 更新 `KubeProxyConfiguration`
|
||||||
|
|
||||||
|
在集群创建和升级期间,kubeadm 将其
|
||||||
|
[`KubeProxyConfiguration`](/zh/docs/reference/config-api/kube-proxy-config.v1alpha1/)
|
||||||
|
写入 `kube-system` 命名空间中名为 `kube-proxy` 的 ConfigMap。
|
||||||
|
|
||||||
|
此 ConfigMap 由 `kube-system` 命名空间中的 `kube-proxy` DaemonSet 使用。
|
||||||
|
|
||||||
|
要更改 `KubeProxyConfiguration` 中的特定选项,你可以使用以下命令编辑 ConfigMap:
|
||||||
|
|
||||||
|
```shell
|
||||||
|
kubectl edit cm -n kube-system kube-proxy
|
||||||
|
```
|
||||||
|
|
||||||
|
配置位于 `data.config.conf` 键下。
|
||||||
|
|
||||||
|
<!--
|
||||||
|
#### Reflecting the kube-proxy changes
|
||||||
|
|
||||||
|
Once the `kube-proxy` ConfigMap is updated, you can restart all kube-proxy Pods:
|
||||||
|
|
||||||
|
Obtain the Pod names:
|
||||||
|
|
||||||
|
Delete a Pod with:
|
||||||
|
|
||||||
|
New Pods that use the updated ConfigMap will be created.
|
||||||
|
-->
|
||||||
|
#### 反映 kube-proxy 的更改
|
||||||
|
|
||||||
|
更新 `kube-proxy` ConfigMap 后,你可以重新启动所有 kube-proxy Pod:
|
||||||
|
|
||||||
|
获取 Pod 名称:
|
||||||
|
|
||||||
|
```shell
|
||||||
|
kubectl get po -n kube-system | grep kube-proxy
|
||||||
|
```
|
||||||
|
|
||||||
|
使用以下命令删除 Pod:
|
||||||
|
|
||||||
|
```shell
|
||||||
|
kubectl delete po -n kube-system <pod-name>
|
||||||
|
```
|
||||||
|
|
||||||
|
将创建使用更新的 ConfigMap 的新 Pod。
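
作为另一种做法的示意(与逐个删除 Pod 的效果相同),也可以让该 DaemonSet 执行滚动重启:

```shell
kubectl rollout restart daemonset kube-proxy -n kube-system
```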
|
||||||
|
|
||||||
|
{{< note >}}
|
||||||
|
<!--
|
||||||
|
Because kubeadm deploys kube-proxy as a DaemonSet, node specific configuration is unsupported.
|
||||||
|
-->
|
||||||
|
由于 kubeadm 将 kube-proxy 部署为 DaemonSet,因此不支持特定于节点的配置。
|
||||||
|
{{< /note >}}
|
||||||
|
|
||||||
|
<!--
|
||||||
|
### Applying CoreDNS configuration changes
|
||||||
|
|
||||||
|
#### Updating the CoreDNS Deployment and Service
|
||||||
|
|
||||||
|
kubeadm deploys CoreDNS as a Deployment called `coredns` and with a Service `kube-dns`,
|
||||||
|
both in the `kube-system` namespace.
|
||||||
|
|
||||||
|
To update any of the CoreDNS settings, you can edit the Deployment and
|
||||||
|
Service objects:
|
||||||
|
-->
|
||||||
|
### 应用 CoreDNS 配置更改
|
||||||
|
|
||||||
|
#### 更新 CoreDNS 的 Deployment 和 Service
|
||||||
|
|
||||||
|
kubeadm 将 CoreDNS 部署为名为 `coredns` 的 Deployment,并使用 Service `kube-dns`,
|
||||||
|
两者都在 `kube-system` 命名空间中。
|
||||||
|
|
||||||
|
要更新任何 CoreDNS 设置,你可以编辑 Deployment 和 Service:
|
||||||
|
|
||||||
|
|
||||||
|
```shell
|
||||||
|
kubectl edit deployment -n kube-system coredns
|
||||||
|
kubectl edit service -n kube-system kube-dns
|
||||||
|
```
|
||||||
|
|
||||||
|
<!--
|
||||||
|
#### Reflecting the CoreDNS changes
|
||||||
|
|
||||||
|
Once the CoreDNS changes are applied you can delete the CoreDNS Pods:
|
||||||
|
|
||||||
|
Obtain the Pod names:
|
||||||
|
|
||||||
|
Delete a Pod with:
|
||||||
|
-->
|
||||||
|
#### 反映 CoreDNS 的更改
|
||||||
|
|
||||||
|
应用 CoreDNS 更改后,你可以删除 CoreDNS Pod。
|
||||||
|
|
||||||
|
获取 Pod 名称:
|
||||||
|
|
||||||
|
```shell
|
||||||
|
kubectl get po -n kube-system | grep coredns
|
||||||
|
```
|
||||||
|
|
||||||
|
使用以下命令删除 Pod:
|
||||||
|
|
||||||
|
```shell
|
||||||
|
kubectl delete po -n kube-system <pod-name>
|
||||||
|
```
|
||||||
|
|
||||||
|
<!--
|
||||||
|
New Pods with the updated CoreDNS configuration will be created.
|
||||||
|
-->
|
||||||
|
将创建具有更新的 CoreDNS 配置的新 Pod。
|
||||||
|
|
||||||
|
{{< note >}}
|
||||||
|
<!--
|
||||||
|
kubeadm does not allow CoreDNS configuration during cluster creation and upgrade.
|
||||||
|
This means that if you execute `kubeadm upgrade apply`, your changes to the CoreDNS
|
||||||
|
-->
|
||||||
|
kubeadm 不允许在集群创建和升级期间配置 CoreDNS。
|
||||||
|
这意味着如果执行了 `kubeadm upgrade apply`,你对
|
||||||
|
CoreDNS 对象的更改将丢失并且必须重新应用。
|
||||||
|
{{< /note >}}

<!--
## Persisting the reconfiguration

During the execution of `kubeadm upgrade` on a managed node, kubeadm might overwrite configuration
that was applied after the cluster was created (reconfiguration).
-->
## 持久化重新配置

在受管节点上执行 `kubeadm upgrade` 期间,kubeadm
可能会覆盖集群创建之后所应用的配置(也就是这里所说的重新配置)。

<!--
### Persisting Node object reconfiguration

kubeadm writes Labels, Taints, CRI socket and other information on the Node object for a particular
Kubernetes node. To change any of the contents of this Node object you can use:
-->
### 持久化 Node 对象重新配置

kubeadm 在特定 Kubernetes 节点的 Node 对象上写入标签、污点、CRI
套接字和其他信息。要更改此 Node 对象的任何内容,你可以使用:

```shell
kubectl edit no <node-name>
```

<!--
During `kubeadm upgrade` the contents of such a Node might get overwritten.
If you would like to persist your modifications to the Node object after upgrade,
you can prepare a [kubectl patch](/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/)
and apply it to the Node object:
-->
在 `kubeadm upgrade` 期间,此类节点的内容可能会被覆盖。
如果你想在升级后保留对 Node 对象的修改,你可以准备一个
[kubectl patch](/zh/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/)
并将其应用到 Node 对象:

```shell
kubectl patch no <node-name> --patch-file <patch-file>
```
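
<!--
For illustration only, such a patch file could persist a custom label (the label key below is a
made-up example; `--patch-file` defaults to a strategic merge patch):
-->
仅作示意,这样的补丁文件可以用来持久化一个自定义标签(下面的标签键为虚构示例;`--patch-file` 默认按 strategic merge patch 处理):

```shell
# 准备补丁文件并应用到 Node 对象
cat <<EOF > node-patch.yaml
metadata:
  labels:
    example.com/rack: rack-1
EOF

kubectl patch no <node-name> --patch-file node-patch.yaml
```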

<!--
#### Persisting control plane component reconfiguration

The main source of control plane configuration is the `ClusterConfiguration`
object stored in the cluster. To extend the static Pod manifests configuration,
[patches](/docs/setup/production-environment/tools/kubeadm/control-plane-flags/#patches) can be used.

These patch files must remain as files on the control plane nodes to ensure that
they can be used by the `kubeadm upgrade ... --patches <directory>`.

If reconfiguration is done to the `ClusterConfiguration` and static Pod manifests on disk,
the set of node specific patches must be updated accordingly.
-->
#### 持久化控制平面组件重新配置

控制平面配置的主要来源是存储在集群中的 `ClusterConfiguration` 对象。
要扩展静态 Pod 清单配置,可以使用
[patches](/zh/docs/setup/production-environment/tools/kubeadm/control-plane-flags/#patches)。

这些补丁文件必须作为文件保留在控制平面节点上,以确保它们可以被
`kubeadm upgrade ... --patches <directory>` 使用。

如果对 `ClusterConfiguration` 和磁盘上的静态 Pod 清单进行了重新配置,则必须相应地更新节点特定补丁集。
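
<!--
An illustrative sketch, assuming the patches live in `/etc/kubernetes/patches` and follow the
kubeadm naming convention `target[suffix][+patchtype].extension`; the patch contents and the
`kubeadm upgrade node` invocation are examples only:
-->
下面是一个示意性草图,假设补丁存放在 `/etc/kubernetes/patches` 并遵循 kubeadm 的命名约定 `target[suffix][+patchtype].extension`;补丁内容与 `kubeadm upgrade node` 的调用方式仅作示例:

```shell
# 在控制平面节点上保留 patches 目录,例如调整 kube-apiserver 的内存请求
mkdir -p /etc/kubernetes/patches
cat <<EOF > /etc/kubernetes/patches/kube-apiserver0+strategic.yaml
spec:
  containers:
  - name: kube-apiserver
    resources:
      requests:
        memory: "512Mi"
EOF

# 升级时引用同一目录,这些补丁才会继续生效
kubeadm upgrade node --patches /etc/kubernetes/patches
```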

<!--
#### Persisting kubelet reconfiguration

Any changes to the `KubeletConfiguration` stored in `/var/lib/kubelet/config.conf` will be overwritten on
`kubeadm upgrade` by downloading the contents of the cluster wide `kubelet-config` ConfigMap.
To persist kubelet node specific configuration either the file `/var/lib/kubelet/config.conf`
has to be updated manually post-upgrade or the file `/var/lib/kubelet/kubeadm-flags.env` can include flags.
The kubelet flags override the associated `KubeletConfiguration` options, but note that
some of the flags are deprecated.

A kubelet restart will be required after changing `/var/lib/kubelet/config.conf` or
`/var/lib/kubelet/kubeadm-flags.env`.
-->
#### 持久化 kubelet 重新配置

对存储在 `/var/lib/kubelet/config.conf` 中的 `KubeletConfiguration`
所做的任何更改都将在 `kubeadm upgrade` 时因为下载集群范围内的 `kubelet-config`
ConfigMap 的内容而被覆盖。
要持久保存 kubelet 节点特定的配置,文件 `/var/lib/kubelet/config.conf`
必须在升级后手动更新,或者文件 `/var/lib/kubelet/kubeadm-flags.env` 可以包含标志。
kubelet 标志会覆盖相关的 `KubeletConfiguration` 选项,但请注意,有些标志已被弃用。

更改 `/var/lib/kubelet/config.conf` 或 `/var/lib/kubelet/kubeadm-flags.env`
后需要重启 kubelet。
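
<!--
A hypothetical sketch of such a node specific override; the flag below is only an example, and the
restart command assumes a systemd based host:
-->
下面是这类节点级覆盖的一个假设性示意;所示标志仅为示例,重启命令假定主机使用 systemd:

```shell
# 示例:在 /var/lib/kubelet/kubeadm-flags.env 的 KUBELET_KUBEADM_ARGS 中追加节点特定标志
# KUBELET_KUBEADM_ARGS="--node-labels=example.com/zone=zone-a"

# 修改 config.conf 或 kubeadm-flags.env 之后,重启 kubelet 使其生效
sudo systemctl restart kubelet
```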

{{% heading "whatsnext" %}}

<!--
- [Upgrading kubeadm clusters](/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade)
- [Customizing components with the kubeadm API](/docs/setup/production-environment/tools/kubeadm/control-plane-flags)
- [Certificate management with kubeadm](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs)
-->
- [升级 kubeadm 集群](/zh/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade)
- [使用 kubeadm API 自定义组件](/zh/docs/setup/production-environment/tools/kubeadm/control-plane-flags)
- [使用 kubeadm 管理证书](/zh/docs/tasks/administer-cluster/kubeadm/kubeadm-certs)

@@ -99,6 +99,51 @@ Rootless Podman is not supported.

<!-- Supporting rootless podman is discussed in https://github.com/kubernetes/minikube/issues/8719 -->

<!--
## Running Kubernetes inside Unprivileged Containers

{{% thirdparty-content %}}

### sysbox
-->
## 在非特权容器内运行 Kubernetes

{{% thirdparty-content %}}

### sysbox

<!--
[Sysbox](https://github.com/nestybox/sysbox) is an open-source container runtime
(similar to "runc") that supports running system-level workloads such as Docker
and Kubernetes inside unprivileged containers isolated with the Linux user
namespace.
-->
[Sysbox](https://github.com/nestybox/sysbox) 是一个开源容器运行时
(类似于 “runc”),支持在 Linux 用户命名空间隔离的非特权容器内运行系统级工作负载,
比如 Docker 和 Kubernetes。

<!--
See [Sysbox Quick Start Guide: Kubernetes-in-Docker](https://github.com/nestybox/sysbox/blob/master/docs/quickstart/kind.md) for more info.
-->
查看 [Sysbox 快速入门指南: Kubernetes-in-Docker](https://github.com/nestybox/sysbox/blob/master/docs/quickstart/kind.md)
了解更多细节。

<!--
Sysbox supports running Kubernetes inside unprivileged containers without
requiring Cgroup v2 and without the `KubeletInUserNamespace` feature gate. It
does this by exposing specially crafted `/proc` and `/sys` filesystems inside
the container plus several other advanced OS virtualization techniques.
-->
Sysbox 支持在非特权容器内运行 Kubernetes,
而不需要 Cgroup v2 和 `KubeletInUserNamespace` 特性门控。
Sysbox 通过在容器内暴露特定的 `/proc` 和 `/sys` 文件系统,
以及其它一些先进的操作系统虚拟化技术来实现。

<!--
## Running Rootless Kubernetes directly on a host

@@ -446,7 +491,7 @@ This feature gate also allows kube-proxy to ignore an error during setting `RLIM

The `KubeletInUserNamespace` feature gate was introduced in Kubernetes v1.22 with "alpha" status.

Running kubelet in a user namespace without using this feature gate is also possible
by mounting a specially crafted proc filesystem (as done by [Sysbox](https://github.com/nestybox/sysbox)), but not officially supported.
-->

### 配置 kubelet

@@ -478,7 +523,8 @@ cgroupDriver: "cgroupfs"

`KubeletInUserNamespace` 特性门控从 Kubernetes v1.22 被引入,标记为 "alpha" 状态。

通过挂载特制的 proc 文件系统 (比如 [Sysbox](https://github.com/nestybox/sysbox)),
也可以在不使用这个特性门控的情况下在用户命名空间运行 kubelet,但这不受官方支持。

<!--
### Configuring kube-proxy

@@ -1,6 +1,8 @@
---
title: 将节点上的容器运行时从 Docker Engine 改为 containerd
weight: 8
content_type: task
---

<!--
title: "Changing the Container Runtime on a Node from Docker Engine to containerd"

@@ -9,7 +11,10 @@ content_type: task
-->

<!--
This task outlines the steps needed to update your container runtime to containerd from Docker. It
is applicable for cluster operators running Kubernetes 1.23 or earlier. Also this covers an
example scenario for migrating from dockershim to containerd and alternative container runtimes
can be picked from this [page](/docs/setup/production-environment/container-runtimes/).
-->
本任务给出将容器运行时从 Docker 改为 containerd 所需的步骤。
此任务适用于运行 1.23 或更早版本 Kubernetes 的集群操作人员。

@@ -22,27 +27,32 @@ This task outlines the steps needed to update your container runtime to containe

{{% thirdparty-content %}}

<!--
Install containerd. For more information see
[containerd's installation documentation](https://containerd.io/docs/getting-started/)
and for specific prerequisite follow
[the containerd guide](/docs/setup/production-environment/container-runtimes/#containerd).
-->
安装 containerd。进一步的信息可参见
[containerd 的安装文档](https://containerd.io/docs/getting-started/)。
关于一些特定的环境准备工作,请遵循 [containerd 指南](/zh/docs/setup/production-environment/container-runtimes/#containerd)。

<!--
## Drain the node

```shell
kubectl drain <node-to-drain> --ignore-daemonsets
```

Replace `<node-to-drain>` with the name of your node you are draining.
-->
## 腾空节点 {#drain-the-node}

```shell
kubectl drain <node-to-drain> --ignore-daemonsets
```

将 `<node-to-drain>` 替换为你所要腾空的节点的名称。

<!--
## Stop the Docker daemon
-->

@@ -56,19 +66,21 @@ systemctl disable docker.service --now

<!--
## Install Containerd

Follow the [guide](/docs/setup/production-environment/container-runtimes/#containerd)
for detailed steps to install containerd.
-->
## 安装 Containerd {#install-containerd}

遵循此[指南](/zh/docs/setup/production-environment/container-runtimes/#containerd)
了解安装 containerd 的详细步骤。

{{< tabs name="tab-cri-containerd-installation" >}}
{{% tab name="Linux" %}}

<!--
1. Install the `containerd.io` package from the official Docker repositories.
   Instructions for setting up the Docker repository for your respective Linux distribution and
   installing the `containerd.io` package can be found at
   [Install Docker Engine](https://docs.docker.com/engine/install/#server).
-->
1. 从官方的 Docker 仓库安装 `containerd.io` 包。关于为你所使用的 Linux 发行版来设置

@@ -76,7 +88,7 @@ Instructions for setting up the Docker repository for your respective Linux dist

   [Install Docker Engine](https://docs.docker.com/engine/install/#server)。

<!--
1. Configure containerd:
-->
2. 配置 containerd:

@@ -86,19 +98,19 @@ Instructions for setting up the Docker repository for your respective Linux dist

   ```

<!--
1. Restart containerd:
-->
3. 重启 containerd:

   ```shell
   sudo systemctl restart containerd
   ```

{{% /tab %}}
{{% tab name="Windows (PowerShell)" %}}

<!--
Start a Powershell session, set `$Version` to the desired version (ex: `$Version="1.4.3"`), and
then run the following commands:
-->
启动一个 Powershell 会话,将 `$Version` 设置为期望的版本(例如:`$Version="1.4.3"`),
之后运行下面的命令:

@@ -148,7 +160,9 @@ Start a Powershell session, set `$Version` to the desired version (ex: `$Version

<!--
## Configure the kubelet to use containerd as its container runtime

Edit the file `/var/lib/kubelet/kubeadm-flags.env` and add the containerd runtime to the flags:
`--container-runtime=remote` and
`--container-runtime-endpoint=unix:///run/containerd/containerd.sock`.
-->
## 配置 kubelet 使用 containerd 作为其容器运行时
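
<!--
A hypothetical sketch of how the resulting line in `/var/lib/kubelet/kubeadm-flags.env` could look;
keep any other flags that are already present on your node:
-->
下面是一个假设性示意,展示修改后 `/var/lib/kubelet/kubeadm-flags.env` 中这一行可能的样子;节点上已有的其他标志应当保留:

```shell
# /var/lib/kubelet/kubeadm-flags.env(示例)
KUBELET_KUBEADM_ARGS="--container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock"
```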

@@ -158,21 +172,19 @@ Edit the file `/var/lib/kubelet/kubeadm-flags.env` and add the containerd runtim

<!--
Users using kubeadm should consider the following:

Users using kubeadm should be aware that the `kubeadm` tool stores the CRI socket for each host as
an annotation in the Node object for that host. To change it you can execute the following command
on a machine that has the kubeadm `/etc/kubernetes/admin.conf` file.
-->
对于使用 kubeadm 的用户,可以考虑下面的问题:

使用 `kubeadm` 的用户应该知道,`kubeadm` 工具将每个主机的 CRI 套接字保存在该主机对应的 Node 对象的注解中。
要更改这一注解信息,你可以在一台包含 kubeadm `/etc/kubernetes/admin.conf` 文件的机器上执行以下命令:

```shell
kubectl edit no <node-name>
```
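
<!--
As an illustrative alternative to the editor above, and assuming the annotation name used by
kubeadm, you can set the annotation directly; adjust the socket path to match your runtime endpoint:
-->
除了上面打开编辑器的方式之外,作为示意,也可以在假设注解名即 kubeadm 所用注解的前提下直接设置该注解;套接字路径请按你的运行时端点进行调整:

```shell
# 将 CRI 套接字注解改为 containerd 的套接字(示例值)
kubectl annotate node <node-name> --overwrite \
  kubeadm.alpha.kubernetes.io/cri-socket=unix:///run/containerd/containerd.sock
```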

<!--
This will start a text editor where you can edit the Node object.

@@ -220,7 +232,7 @@ Run `kubectl get nodes -o wide` and containerd appears as the runtime for the no

{{% thirdparty-content %}}

<!--
Finally if everything goes well, remove Docker.
-->
最后,在一切顺利时删除 Docker。
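
<!--
How exactly to remove Docker depends on your distribution. A hypothetical example for a
Debian/Ubuntu based node that installed Docker Engine from the Docker repositories:
-->
删除 Docker 的具体方式取决于你的发行版。下面是一个假设性示例,适用于通过 Docker 仓库安装 Docker Engine 的 Debian/Ubuntu 节点:

```shell
# 仅移除 Docker Engine 相关软件包,保留前面安装的 containerd.io
sudo apt-get purge docker-ce docker-ce-cli
sudo apt-get autoremove
```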