Merge pull request #23197 from tengqm/zh-links-tasks-6
[zh] Tidy up and fix links in tasks section (6/10)
commit 7f7e81a272
@ -1,19 +1,15 @@
|
|||
---
|
||||
reviewers:
|
||||
- johnbelamaric
|
||||
title: 使用 CoreDNS 进行服务发现
|
||||
min-kubernetes-server-version: v1.9
|
||||
content_type: task
|
||||
---
|
||||
|
||||
<!--
|
||||
---
|
||||
reviewers:
|
||||
- johnbelamaric
|
||||
title: Using CoreDNS for Service Discovery
|
||||
min-kubernetes-server-version: v1.9
|
||||
content_type: task
|
||||
---
|
||||
-->
|
||||
|
||||
<!-- overview -->
|
||||
|
@ -23,55 +19,50 @@ This page describes the CoreDNS upgrade process and how to install CoreDNS inste
|
|||
-->
|
||||
此页面介绍了 CoreDNS 升级过程以及如何安装 CoreDNS 而不是 kube-dns。
|
||||
|
||||
|
||||
|
||||
## {{% heading "prerequisites" %}}
|
||||
|
||||
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
|
||||
|
||||
|
||||
<!-- steps -->
|
||||
|
||||
<!--
|
||||
## About CoreDNS
|
||||
-->
|
||||
|
||||
## 关于 CoreDNS
|
||||
|
||||
<!--
|
||||
[CoreDNS](https://coredns.io) is a flexible, extensible DNS server that can serve as the Kubernetes cluster DNS.
|
||||
Like Kubernetes, the CoreDNS project is hosted by the {{< glossary_tooltip text="CNCF" term_id="cncf" >}}.
|
||||
-->
|
||||
[CoreDNS](https://coredns.io) 是一个灵活可扩展的 DNS 服务器,可以作为 Kubernetes 集群 DNS。与 Kubernetes 一样,CoreDNS 项目由 {{< glossary_tooltip text="CNCF" term_id="cncf" >}} 托管。
|
||||
## 关于 CoreDNS
|
||||
|
||||
[CoreDNS](https://coredns.io) 是一个灵活可扩展的 DNS 服务器,可以作为 Kubernetes 集群 DNS。
|
||||
与 Kubernetes 一样,CoreDNS 项目由 {{< glossary_tooltip text="CNCF" term_id="cncf" >}} 托管。
|
||||
|
||||
<!--
|
||||
You can use CoreDNS instead of kube-dns in your cluster by replacing kube-dns in an existing
|
||||
deployment, or by using tools like kubeadm that will deploy and upgrade the cluster for you.
|
||||
-->
|
||||
通过在现有的集群中替换 kube-dns,可以在集群中使用 CoreDNS 代替 kube-dns 部署,或者使用 kubeadm 等工具来为您部署和升级集群。
|
||||
通过在现有的集群中替换 kube-dns,可以在集群中使用 CoreDNS 代替 kube-dns 部署,
|
||||
或者使用 kubeadm 等工具来为你部署和升级集群。
|
||||
|
||||
<!--
|
||||
## Installing CoreDNS
|
||||
-->
|
||||
|
||||
## 安装 CoreDNS
|
||||
|
||||
<!--
|
||||
For manual deployment or replacement of kube-dns, see the documentation at the
|
||||
[CoreDNS GitHub project.](https://github.com/coredns/deployment/tree/master/kubernetes)
|
||||
-->
|
||||
有关手动部署或替换 kube-dns,请参阅 [CoreDNS GitHub 工程](https://github.com/coredns/deployment/tree/master/kubernetes)。
|
||||
## 安装 CoreDNS
|
||||
|
||||
有关手动部署或替换 kube-dns,请参阅
|
||||
[CoreDNS GitHub 工程](https://github.com/coredns/deployment/tree/master/kubernetes)。
|
||||
|
||||
<!--
|
||||
## Migrating to CoreDNS
|
||||
|
||||
### Upgrading an existing cluster with kubeadm
|
||||
-->
|
||||
|
||||
## 迁移到 CoreDNS
|
||||
|
||||
<!--
|
||||
## Upgrading an existing cluster with kubeadm
|
||||
-->
|
||||
|
||||
## 使用 kubeadm 升级现有集群
|
||||
### 使用 kubeadm 升级现有集群
|
||||
|
||||
<!--
|
||||
In Kubernetes version 1.10 and later, you can also move to CoreDNS when you use `kubeadm` to upgrade
|
||||
|
@ -79,14 +70,16 @@ a cluster that is using `kube-dns`. In this case, `kubeadm` will generate the Co
|
|||
("Corefile") based upon the `kube-dns` ConfigMap, preserving configurations for federation,
|
||||
stub domains, and upstream name server.
|
||||
-->
|
||||
在 Kubernetes 1.10 及更高版本中,当您使用 `kubeadm` 升级使用 `kube-dns` 的集群时,您还可以迁移到 CoreDNS。
|
||||
在本例中 `kubeadm` 将生成 CoreDNS 配置("Corefile")基于 `kube-dns` ConfigMap,保存联邦、存根域和上游名称服务器的配置。
|
||||
在 Kubernetes 1.10 及更高版本中,当你使用 `kubeadm` 升级使用 `kube-dns` 的集群时,你还可以迁移到 CoreDNS。
|
||||
在这种情况下,`kubeadm` 会基于 `kube-dns` ConfigMap 生成 CoreDNS 配置("Corefile"),
|
||||
保留联邦、存根域和上游名称服务器的配置。
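If you want to confirm what kubeadm generated, the Corefile lives in the `coredns` ConfigMap. A minimal sketch, assuming the conventional ConfigMap name and the `kube-system` namespace (both may differ in customized clusters):

```shell
# Print only the Corefile data key from the coredns ConfigMap (name/namespace are assumptions)
kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
```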
|
||||
|
||||
<!--
|
||||
If you are moving from kube-dns to CoreDNS, make sure to set the `CoreDNS` feature gate to `true`
|
||||
during an upgrade. For example, here is what a `v1.11.0` upgrade would look like:
|
||||
-->
|
||||
如果您正在从 kube-dns 迁移到 CoreDNS,请确保在升级期间将 `CoreDNS` 特性门设置为 `true`。例如,`v1.11.0` 升级应该是这样的:
|
||||
如果你正在从 kube-dns 迁移到 CoreDNS,请确保在升级期间将 `CoreDNS` 特性门设置为 `true`。
|
||||
例如,`v1.11.0` 升级应该是这样的:
|
||||
|
||||
```
|
||||
kubeadm upgrade apply v1.11.0 --feature-gates=CoreDNS=true
|
||||
|
@ -98,8 +91,8 @@ is used by default. Follow the guide outlined [here](/docs/reference/setup-tools
|
|||
your upgraded cluster to use kube-dns.
|
||||
-->
|
||||
在 Kubernetes 1.13 及更高版本中,`CoreDNS` 特性门控已被移除,默认使用 CoreDNS。
|
||||
如果您想升级集群以使用 kube-dns,请遵循
|
||||
[此处](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase#cmd-phase-addon) 。
|
||||
如果你想让升级后的集群改为使用 kube-dns,请遵循
|
||||
[此处](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init-phase#cmd-phase-addon)概述的指南。
|
||||
|
||||
<!--
|
||||
In versions prior to 1.11 the Corefile will be **overwritten** by the one created during upgrade.
|
||||
|
@ -107,86 +100,92 @@ In versions prior to 1.11 the Corefile will be **overwritten** by the one create
|
|||
customizations after the new ConfigMap is up and running.
|
||||
-->
|
||||
在 1.11 之前的版本中,Corefile 将被升级过程中创建的文件**覆盖**。
|
||||
**如果已对其进行自定义,则应保存现有的 ConfigMap。** 在新的 ConfigMap 启动并运行后,您可以重新应用自定义。
|
||||
**如果已对其进行自定义,则应保存现有的 ConfigMap。**
|
||||
在新的 ConfigMap 启动并运行后,你可以重新应用自定义。
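One way to preserve customizations before such an upgrade is to save the existing ConfigMap to a file and re-apply your changes afterwards. A rough sketch, assuming the ConfigMap is named `coredns` in `kube-system` (for kube-dns clusters the name would be `kube-dns` instead):

```shell
# Back up the current DNS ConfigMap before upgrading (names are assumptions)
kubectl -n kube-system get configmap coredns -o yaml > coredns-configmap-backup.yaml

# After the upgrade, re-apply your customizations on top of the new ConfigMap,
# for example by editing it and pasting the saved snippets back in
kubectl -n kube-system edit configmap coredns
```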
|
||||
|
||||
<!--
|
||||
If you are running CoreDNS in Kubernetes version 1.11 and later, during upgrade,
|
||||
your existing Corefile will be retained.
|
||||
-->
|
||||
如果您在 Kubernetes 1.11 及更高版本中运行 CoreDNS,则在升级期间,将保留现有的 Corefile。
|
||||
如果你在 Kubernetes 1.11 及更高版本中运行 CoreDNS,则在升级期间,将保留现有的 Corefile。
|
||||
|
||||
<!--
|
||||
## Installing kube-dns instead of CoreDNS with kubeadm
|
||||
-->
|
||||
## 使用 kubeadm 安装 kube-dns 而不是 CoreDNS
|
||||
|
||||
{{< note >}}
|
||||
|
||||
<!--
|
||||
In Kubernetes 1.11, CoreDNS has graduated to General Availability (GA)
|
||||
and is installed by default.
|
||||
-->
|
||||
在 Kubernetes 1.11 中,CoreDNS 已经升级到通用可用性(GA),并默认安装。
|
||||
|
||||
{{< note >}}
|
||||
在 Kubernetes 1.11 中,CoreDNS 已经升级到通用可用性(GA),并默认安装。
|
||||
{{< /note >}}
|
||||
|
||||
<!--
|
||||
In Kubernetes 1.18, kube-dns usage with kubeadm has been deprecated and will be removed in a future version.
|
||||
-->
|
||||
{{< warning >}}
|
||||
在 Kubernetes 1.18 中,用 kubeadm 来安装 kube-dns 这一做法已经被废弃,
|
||||
会在将来版本中移除。
|
||||
{{< /warning >}}
|
||||
|
||||
<!--
|
||||
To install kube-dns on versions prior to 1.13, set the `CoreDNS` feature gate
|
||||
value to `false`:
|
||||
-->
|
||||
若要在1.13之前到版本上安装 kube-dns,请将 `CoreDNS` 特性门值设置为 `false`:
|
||||
若要在 1.13 之前版本上安装 kube-dns,请将 `CoreDNS` 特性门控设置为 `false`:
|
||||
|
||||
```
|
||||
```shell
|
||||
kubeadm init --feature-gates=CoreDNS=false
|
||||
```
|
||||
|
||||
<!--
|
||||
For versions 1.13 and later, follow the guide outlined [here](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase#cmd-phase-addon).
|
||||
-->
|
||||
对于 1.13 版和更高版本,请遵循[此处](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase#cmd-phase-addon)概述到指南。
|
||||
对于 1.13 版和更高版本,请遵循
|
||||
[此处](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init-phase#cmd-phase-addon)概述的指南。
|
||||
|
||||
<!--
|
||||
## Upgrading CoreDNS
|
||||
-->
|
||||
## 升级 CoreDNS
|
||||
|
||||
<!--
|
||||
CoreDNS is available in Kubernetes since v1.9.
|
||||
You can check the version of CoreDNS shipped with Kubernetes and the changes made to CoreDNS [here](https://github.com/coredns/deployment/blob/master/kubernetes/CoreDNS-k8s_version.md).
|
||||
-->
|
||||
## 升级 CoreDNS
|
||||
|
||||
从 v1.9 起,Kubernetes 提供了 CoreDNS。
|
||||
您可以在[此处](https://github.com/coredns/deployment/blob/master/kubernetes/CoreDNS-k8s_version.md)检查 Kubernetes 随附的 CoreDNS 版本以及对 CoreDNS 所做的更改。
|
||||
你可以在[此处](https://github.com/coredns/deployment/blob/master/kubernetes/CoreDNS-k8s_version.md)
|
||||
查看 Kubernetes 随附的 CoreDNS 版本以及对 CoreDNS 所做的更改。
|
||||
|
||||
<!--
|
||||
CoreDNS can be upgraded manually in case you want to only upgrade CoreDNS or use your own custom image.
|
||||
There is a helpful [guideline and walkthrough](https://github.com/coredns/deployment/blob/master/kubernetes/Upgrading_CoreDNS.md) available to ensure a smooth upgrade.
|
||||
-->
|
||||
如果您只想升级 CoreDNS 或使用自己的自定义镜像,则可以手动升级 CoreDNS。
|
||||
如果你只想升级 CoreDNS 或使用自己的自定义镜像,则可以手动升级 CoreDNS。
|
||||
参看[指南和演练](https://github.com/coredns/deployment/blob/master/kubernetes/Upgrading_CoreDNS.md)
|
||||
文档了解如何平滑升级。
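For a purely manual upgrade, the walkthrough linked above boils down to updating the image used by the CoreDNS Deployment and, if needed, the Corefile. A hedged sketch, assuming the Deployment and container are both named `coredns` and that the target image tag has been checked against the compatibility table:

```shell
# Point the coredns container at a newer image (names and tag are assumptions, not prescriptions)
kubectl -n kube-system set image deployment/coredns coredns=coredns/coredns:1.8.6

# Watch the rollout to make sure the new Pods become Ready
kubectl -n kube-system rollout status deployment/coredns
```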
|
||||
|
||||
|
||||
<!--
|
||||
## Tuning CoreDNS
|
||||
-->
|
||||
|
||||
## CoreDNS 调优
|
||||
|
||||
<!--
|
||||
When resource utilisation is a concern, it may be useful to tune the configuration of CoreDNS. For more details, check out the
|
||||
[documentation on scaling CoreDNS]((https://github.com/coredns/deployment/blob/master/kubernetes/Scaling_CoreDNS.md)).
|
||||
-->
|
||||
当涉及到资源利用时,优化内核的配置可能是有用的。有关详细信息,请参阅 [关于扩展 CoreDNS 的文档](https://github.com/coredns/deployment/blob/master/kubernetes/Scaling_CoreDNS.md)。
|
||||
|
||||
## CoreDNS 调优
|
||||
|
||||
当资源利用方面有问题时,优化 CoreDNS 的配置可能是有用的。
|
||||
有关详细信息,请参阅[有关扩缩 CoreDNS 的文档](https://github.com/coredns/deployment/blob/master/kubernetes/Scaling_CoreDNS.md)。
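Two common knobs when tuning are the number of replicas and the container resource limits; the scaling document linked above covers when each is appropriate. A sketch, assuming the default `coredns` Deployment name:

```shell
# Add replicas if DNS latency is the problem (the replica count here is only an example)
kubectl -n kube-system scale deployment/coredns --replicas=3

# Inspect current CPU/memory requests and limits before raising them
kubectl -n kube-system get deployment coredns -o jsonpath='{.spec.template.spec.containers[0].resources}'
```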
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
|
||||
<!--
|
||||
You can configure [CoreDNS](https://coredns.io) to support many more use cases than
|
||||
kube-dns by modifying the `Corefile`. For more information, see the
|
||||
[CoreDNS site](https://coredns.io/2017/05/08/custom-dns-entries-for-kubernetes/).
|
||||
-->
|
||||
您可以通过修改 `Corefile` 来配置 [CoreDNS](https://coredns.io),以支持比 ku-dns 更多的用例。有关更多信息,请参考 [CoreDNS 网站](https://coredns.io/2017/05/08/custom-dns-entries-for-kubernetes/)。
|
||||
|
||||
|
||||
|
||||
你可以通过修改 `Corefile` 来配置 [CoreDNS](https://coredns.io),以支持比 kube-dns 更多的用例。
|
||||
请参考 [CoreDNS 网站](https://coredns.io/2017/05/08/custom-dns-entries-for-kubernetes/)
|
||||
以了解更多信息。
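Custom entries of the kind described on the CoreDNS site are added by editing the Corefile and letting CoreDNS reload it. A minimal sketch (the ConfigMap name and namespace are the usual defaults, i.e. assumptions):

```shell
# Open the Corefile for editing; CoreDNS picks up ConfigMap changes after a short delay
kubectl -n kube-system edit configmap coredns
```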
|
||||
|
||||
|
|
|
@ -1,5 +1,6 @@
|
|||
---
|
||||
title: 声明网络策略
|
||||
min-kubernetes-server-version: v1.8
|
||||
content_type: task
|
||||
---
|
||||
<!--
|
||||
|
@ -16,7 +17,9 @@ content_type: task
|
|||
<!--
|
||||
This document helps you get started using the Kubernetes [NetworkPolicy API](/docs/concepts/services-networking/network-policies/) to declare network policies that govern how pods communicate with each other.
|
||||
-->
|
||||
本文可以帮助您开始使用 Kubernetes 的 [NetworkPolicy API](/zh/docs/concepts/services-networking/network-policies/) 声明网络策略去管理 Pod 之间的通信
|
||||
本文可以帮助你开始使用 Kubernetes 的
|
||||
[NetworkPolicy API](/zh/docs/concepts/services-networking/network-policies/)
|
||||
声明网络策略去管理 Pod 之间的通信。
|
||||
|
||||
## {{% heading "prerequisites" %}}
|
||||
|
||||
|
@ -31,13 +34,13 @@ Make sure you've configured a network provider with network policy support. Ther
|
|||
* [Romana](/docs/tasks/administer-cluster/network-policy-provider/romana-network-policy/)
|
||||
* [Weave Net](/docs/tasks/administer-cluster/network-policy-provider/weave-network-policy/)
|
||||
-->
|
||||
您首先需要有一个支持网络策略的 Kubernetes 集群。已经有许多支持 NetworkPolicy 的网络提供商,包括:
|
||||
你首先需要有一个支持网络策略的 Kubernetes 集群。已经有许多支持 NetworkPolicy 的网络提供商,包括:
|
||||
|
||||
* [Calico](/zh/docs/tasks/configure-pod-container/calico-network-policy/)
|
||||
* [Calico](/zh/docs/tasks/administer-cluster/network-policy-provider/calico-network-policy/)
|
||||
* [Cilium](/zh/docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy/)
|
||||
* [Kube-router](/zh/docs/tasks/administer-cluster/network-policy-provider/kube-router-network-policy/)
|
||||
* [Romana](/zh/docs/tasks/configure-pod-container/romana-network-policy/)
|
||||
* [Weave 网络](/zh/docs/tasks/configure-pod-container/weave-network-policy/)
|
||||
* [Romana](/zh/docs/tasks/administer-cluster/network-policy-provider/romana-network-policy/)
|
||||
* [Weave 网络](/zh/docs/tasks/administer-cluster/network-policy-provider/weave-network-policy/)
|
||||
|
||||
<!--
|
||||
The above list is sorted alphabetically by product name, not by recommendation or preference. This example is valid for a Kubernetes cluster using any of these providers.
|
||||
|
@ -56,13 +59,14 @@ To see how Kubernetes network policy works, start off by creating an `nginx` Dep
|
|||
-->
|
||||
## 创建一个 `nginx` Deployment 并且通过服务将其暴露
|
||||
|
||||
为了查看 Kubernetes 网络策略是怎样工作的,可以从创建一个`nginx` deployment 并且通过服务将其暴露开始
|
||||
为了查看 Kubernetes 网络策略是怎样工作的,可以从创建一个 `nginx` Deployment 并且通过服务将其暴露开始。
|
||||
|
||||
```console
|
||||
```shell
|
||||
kubectl create deployment nginx --image=nginx
|
||||
```
|
||||
|
||||
```none
|
||||
deployment "nginx" created
|
||||
deployment.apps/nginx created
|
||||
```
|
||||
|
||||
<!--
|
||||
|
@ -73,8 +77,9 @@ Expose the Deployment through a Service called `nginx`.
|
|||
```console
|
||||
kubectl expose deployment nginx --port=80
|
||||
```
|
||||
|
||||
```none
|
||||
service "nginx" exposed
|
||||
service/nginx exposed
|
||||
```
|
||||
|
||||
<!--
|
||||
|
@ -103,7 +108,7 @@ You should be able to access the new `nginx` service from other Pods. To access
|
|||
-->
|
||||
## 通过从 Pod 访问服务对其进行测试
|
||||
|
||||
您应该可以从其它的 Pod 访问这个新的 `nginx` 服务。
|
||||
你应该可以从其它的 Pod 访问这个新的 `nginx` 服务。
|
||||
要从 `default` 命名空间中的其它 Pod 来访问该服务,可以启动一个 busybox 容器:
|
||||
|
||||
```console
|
||||
|
@ -118,6 +123,7 @@ In your shell, run the following command:
|
|||
```shell
|
||||
wget --spider --timeout=1 nginx
|
||||
```
|
||||
|
||||
```none
|
||||
Connecting to nginx (10.100.0.16:80)
|
||||
remote file exists
|
||||
|
|
|
@ -1,17 +1,13 @@
|
|||
---
|
||||
reviewers:
|
||||
- smarterclayton
|
||||
title: 静态加密 Secret 数据
|
||||
content_type: task
|
||||
---
|
||||
|
||||
<!--
|
||||
---
|
||||
reviewers:
|
||||
- smarterclayton
|
||||
title: Encrypting Secret Data at Rest
|
||||
content_type: task
|
||||
---
|
||||
-->
|
||||
|
||||
<!-- overview -->
|
||||
|
@ -20,42 +16,21 @@ This page shows how to enable and configure encryption of secret data at rest.
|
|||
-->
|
||||
本文展示如何启用和配置静态 Secret 数据的加密。
|
||||
|
||||
|
||||
## {{% heading "prerequisites" %}}
|
||||
|
||||
|
||||
* {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
|
||||
|
||||
<!--
|
||||
* Kubernetes version 1.7.0 or later is required
|
||||
|
||||
* etcd v3 or later is required
|
||||
|
||||
* Encryption at rest is alpha in 1.7.0 which means it may change without notice. Users may be required to decrypt their data prior to upgrading to 1.8.0.
|
||||
-->
|
||||
* 需要 Kubernetes 1.7.0 或者更高版本
|
||||
|
||||
* 需要 etcd v3 或者更高版本
|
||||
|
||||
* 静态数据加密在 1.7.0 中仍然是 alpha 版本,这意味着它可能会在没有通知的情况下进行更改。在升级到 1.8.0 之前,用户可能需要解密他们的数据。
|
||||
|
||||
|
||||
|
||||
{{< toc >}}
|
||||
|
||||
## {{% heading "prerequisites" %}}
|
||||
|
||||
|
||||
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
|
||||
|
||||
|
||||
|
||||
<!-- steps -->
|
||||
|
||||
<!--
|
||||
## Configuration and determining whether encryption at rest is already enabled
|
||||
|
||||
The `kube-apiserver` process accepts an argument `--experimental-encryption-provider-config`
|
||||
The `kube-apiserver` process accepts an argument `--experimental-encryption-provider-config`
|
||||
that controls how API data is encrypted in etcd. An example configuration
|
||||
is provided below.
|
||||
|
||||
|
@ -69,8 +44,8 @@ is provided below.
|
|||
## 理解静态数据加密
|
||||
|
||||
```yaml
|
||||
kind: EncryptionConfiguration
|
||||
apiVersion: apiserver.config.k8s.io/v1
|
||||
kind: EncryptionConfiguration
|
||||
resources:
|
||||
- resources:
|
||||
- secrets
|
||||
|
@ -100,23 +75,31 @@ Each `resources` array item is a separate config and contains a complete configu
|
|||
that should be encrypted. The `providers` array is an ordered list of the possible encryption
|
||||
providers. Only one provider type may be specified per entry (`identity` or `aescbc` may be provided,
|
||||
but not both in the same item).
|
||||
-->
|
||||
每个 `resources` 数组项目是一个单独的完整的配置。
|
||||
`resources.resources` 字段是要加密的 Kubernetes 资源名称(`resource` 或 `resource.group`)的数组。
|
||||
`providers` 数组是可能的加密 provider 的有序列表。
|
||||
每个条目只能指定一个 provider 类型(可以是 `identity` 或 `aescbc`,但不能在同一个项目中同时指定)。
|
||||
|
||||
<!--
|
||||
The first provider in the list is used to encrypt resources going into storage. When reading
|
||||
resources from storage each provider that matches the stored data attempts to decrypt the data in
|
||||
order. If no provider can read the stored data due to a mismatch in format or secret key, an error
|
||||
is returned which prevents clients from accessing that resource.
|
||||
-->
|
||||
列表中的第一个 provider 用于加密进入存储的资源。
|
||||
当从存储器读取资源时,与存储的数据匹配的所有 provider 将按顺序尝试解密数据。
|
||||
如果由于格式或密钥不匹配而导致没有 provider 能够读取存储的数据,则会返回一个错误,以防止客户端访问该资源。
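To make the ordering concrete, here is a hedged sketch of a configuration in which `aescbc` (listed first) encrypts all new writes while `identity` (listed last) still allows previously unencrypted Secrets to be read; the file path and the key material are placeholders you would replace.

```shell
# Write an example EncryptionConfiguration (path and key are placeholders)
cat > /etc/kubernetes/enc/encryption-config.yaml <<EOF
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:            # first entry: used to encrypt new writes
          keys:
            - name: key1
              secret: <BASE64-ENCODED 32-BYTE KEY>
      - identity: {}       # last entry: lets existing plaintext data still be read
EOF
```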
|
||||
|
||||
<!--
|
||||
**IMPORTANT:** If any resource is not readable via the encryption config (because keys were changed),
|
||||
the only recourse is to delete that key from the underlying etcd directly. Calls that attempt to
|
||||
read that resource will fail until it is deleted or a valid decryption key is provided.
|
||||
-->
|
||||
每个 `resources` 数组项目是一个单独的完整的配置。 `resources.resources` 字段是要加密的 Kubernetes 资源名称(`resource` 或 `resource.group`)的数组。
|
||||
`providers` 数组是可能的加密 provider 的有序列表。每个条目只能指定一个 provider 类型(可以是 `identity` 或 `aescbc`,但不能在同一个项目中同时指定)。
|
||||
|
||||
列表中的第一个提供者用于加密进入存储的资源。当从存储器读取资源时,与存储的数据匹配的所有提供者将尝试按顺序解密数据。
|
||||
如果由于格式或密钥不匹配而导致提供者无法读取存储的数据,则会返回一个错误,以防止客户端访问该资源。
|
||||
|
||||
**重要:** 如果通过加密配置无法读取资源(因为密钥已更改),唯一的方法是直接从基础 etcd 中删除该密钥。任何尝试读取资源的调用将会失败,直到它被删除或提供有效的解密密钥。
|
||||
{{< caution >}}
|
||||
**重要:** 如果通过加密配置无法读取资源(因为密钥已更改),唯一的方法是直接从底层 etcd 中删除该密钥。
|
||||
任何尝试读取资源的调用将会失败,直到它被删除或提供有效的解密密钥。
|
||||
{{< /caution >}}
|
||||
|
||||
### Providers:
|
||||
|
||||
|
@ -132,28 +115,57 @@ Name | Encryption | Strength | Speed | Key Length | Other Considerations
|
|||
Each provider supports multiple keys - the keys are tried in order for decryption, and if the provider
|
||||
is the first provider, the first key is used for encryption.
|
||||
-->
|
||||
名称 | 加密类型 | 强度 | 速度 | 密钥长度 | 其它事项
|
||||
{{< table caption="Kubernetes 静态数据加密的 Providers" >}}
|
||||
名称 | 加密类型 | 强度 | 速度 | 密钥长度 | 其它事项
|
||||
-----|------------|----------|-------|------------|---------------------
|
||||
`identity` | 无 | N/A | N/A | N/A | 不加密写入的资源。当设置为第一个 provider 时,资源将在新值写入时被解密。
|
||||
`aescbc` | 填充 PKCS#7 的 AES-CBC | 最强 | 快 | 32字节 | 建议使用的加密项,但可能比 `secretbox` 稍微慢一些。
|
||||
`secretbox` | XSalsa20 和 Poly1305 | 强 | 更快 | 32字节 | 较新的标准,在需要高度评审的环境中可能不被接受。
|
||||
`aesgcm` | 带有随机数的 AES-GCM | 必须每 200k 写入一次 | 最快 | 16, 24, 或者 32字节 | 建议不要使用,除非实施了自动密钥循环方案。
|
||||
`kms` | 使用信封加密方案:数据使用带有 PKCS#7 填充的 AES-CBC 通过 data encryption keys(DEK)加密,DEK 根据 Key Management Service(KMS)中的配置通过 key encryption keys(KEK)加密 | 最强 | 快 | 32字节 | 建议使用第三方工具进行密钥管理。为每个加密生成新的 DEK,并由用户控制 KEK 轮换来简化密钥轮换。[配置 KMS 提供程序](/docs/tasks/administer-cluster/kms-provider/)
|
||||
`aesgcm` | 带有随机数的 AES-GCM | 每写入 200k 次后必须轮换密钥 | 最快 | 16、24 或者 32 字节 | 建议不要使用,除非实施了自动密钥轮换方案。
|
||||
`kms` | 使用信封加密方案:数据使用带有 PKCS#7 填充的 AES-CBC 通过数据加密密钥(DEK)加密,DEK 根据 Key Management Service(KMS)中的配置通过密钥加密密钥(Key Encryption Keys,KEK)加密 | 最强 | 快 | 32字节 | 建议使用第三方工具进行密钥管理。为每个加密生成新的 DEK,并由用户控制 KEK 轮换来简化密钥轮换。[配置 KMS 提供程序](/zh/docs/tasks/administer-cluster/kms-provider/)
|
||||
|
||||
每个 provider 都支持多个密钥 - 在解密时会按顺序使用密钥,如果是第一个 provider,则第一个密钥用于加密。
|
||||
|
||||
<!--
|
||||
__Storing the raw encryption key in the EncryptionConfig only moderately improves your security posture, compared to no encryption.
|
||||
Please use `kms` provider for additional security.__ By default, the `identity` provider is used to protect secrets in etcd, which
|
||||
provides no encryption. `EncryptionConfiguration` was introduced to encrypt secrets locally, with a locally managed key.
|
||||
-->
|
||||
__在 EncryptionConfig 中保存原始的加密密钥与不加密相比只会略微地提升安全级别。
|
||||
请使用 `kms` 驱动以获得更强的安全性。__
|
||||
默认情况下,`identity` 驱动被用来对 etcd 中的 Secret 提供保护,
|
||||
而这个驱动不提供加密能力。
|
||||
`EncryptionConfiguration` 的引入是为了能够使用本地管理的密钥来在本地加密 Secret 数据。
|
||||
|
||||
<!--
|
||||
Encrypting secrets with a locally managed key protects against an etcd compromise, but it fails to protect against a host compromise.
|
||||
Since the encryption keys are stored on the host in the EncryptionConfig YAML file, a skilled attacker can access that file and
|
||||
extract the encryption keys.
|
||||
-->
|
||||
使用本地管理的密钥来加密 Secret 能够保护数据免受 etcd 破坏的影响,不过无法针对
|
||||
主机被侵入提供防护。
|
||||
这是因为加密的密钥保存在主机上的 EncryptionConfig YAML 文件中,有经验的入侵者
|
||||
仍能访问该文件并从中提取出加密密钥。
|
||||
|
||||
<!--
|
||||
Envelope encryption creates dependence on a separate key, not stored in Kubernetes. In this case, an attacker would need to compromise etcd, the kubeapi-server, and the third-party KMS provider to retrieve the plaintext values, providing a higher level of security than locally-stored encryption keys.
|
||||
-->
|
||||
封套加密(Envelope Encryption)引入了对独立密钥的依赖,而这个密钥并不保存在 Kubernetes 中。
|
||||
在这种情况下,入侵者需要攻破 etcd、kube-apiserver 和第三方的 KMS
|
||||
驱动才能获得明文数据,因而这种方案提供了比本地保存加密密钥更高的安全级别。
|
||||
|
||||
<!--
|
||||
## Encrypting your data
|
||||
|
||||
Create a new encryption config file:
|
||||
-->
|
||||
## 加密您的数据
|
||||
## 加密你的数据
|
||||
|
||||
创建一个新的加密配置文件:
|
||||
|
||||
```yaml
|
||||
kind: EncryptionConfiguration
|
||||
apiVersion: apiserver.config.k8s.io/v1
|
||||
kind: EncryptionConfiguration
|
||||
resources:
|
||||
- resources:
|
||||
- secrets
|
||||
|
@ -172,11 +184,11 @@ To create a new secret perform the following steps:
|
|||
-->
|
||||
遵循如下步骤来创建一个新的 secret:
|
||||
|
||||
1. 生成一个 32 字节的随机密钥并进行 base64 编码。如果您在 Linux 或 Mac OS X 上,请运行以下命令:
|
||||
1. 生成一个 32 字节的随机密钥并进行 base64 编码。如果你在 Linux 或 Mac OS X 上,请运行以下命令:
|
||||
|
||||
```
|
||||
head -c 32 /dev/urandom | base64
|
||||
```
|
||||
```
|
||||
head -c 32 /dev/urandom | base64
|
||||
```
|
||||
|
||||
<!--
|
||||
2. Place that value in the secret field.
|
||||
|
@ -186,11 +198,15 @@ To create a new secret perform the following steps:
|
|||
**IMPORTANT:** Your config file contains keys that can decrypt content in etcd, so you must properly restrict permissions on your masters so only the user who runs the kube-apiserver can read it.
|
||||
-->
|
||||
2. 将这个值放入到 secret 字段中。
|
||||
3. 设置 `kube-apiserver` 的 `--experimental-encryption-provider-config` 参数,将其指定到配置文件所在位置。
|
||||
4. 重启您的 API server。
|
||||
3. 设置 `kube-apiserver` 的 `--experimental-encryption-provider-config` 参数,将其指向
|
||||
配置文件所在位置。
|
||||
4. 重启你的 API server。
|
||||
|
||||
**重要:** 您的配置文件包含可以解密 etcd 内容的密钥,因此您必须正确限制主设备的权限,以便只有能运行 kube-apiserver 的用户才能读取它。
|
||||
|
||||
{{< caution >}}
|
||||
你的配置文件包含可以解密 etcd 内容的密钥,因此你必须正确限制主控节点的访问权限,
|
||||
以便只有能运行 kube-apiserver 的用户才能读取它。
|
||||
{{< /caution >}}
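How the flag from step 3 reaches the API server depends on how the cluster was installed. A common case is a kubeadm-style static Pod, where the flag is added to the manifest and the config file is mounted into the container; the paths below are assumptions, not required locations.

```shell
# Check whether the flag is already wired into the static Pod manifest (path is an assumption)
grep encryption-provider-config /etc/kubernetes/manifests/kube-apiserver.yaml

# Expected to show something like:
#   - --experimental-encryption-provider-config=/etc/kubernetes/enc/encryption-config.yaml
# plus a volume/volumeMount that makes the config file visible inside the Pod
```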
|
||||
|
||||
<!--
|
||||
## Verifying that data is encrypted
|
||||
|
@ -201,53 +217,58 @@ program to retrieve the contents of your secret.
|
|||
|
||||
1. Create a new secret called `secret1` in the `default` namespace:
|
||||
-->
|
||||
## 验证数据是否被加密
|
||||
## 验证数据已被加密
|
||||
|
||||
数据在写入 etcd 时会被加密。重新启动你的 `kube-apiserver` 后,任何新创建或更新的 Secret 在存储时都应该被加密。
|
||||
如果想要检查,你可以使用 `etcdctl` 命令行程序来检索你的加密内容。
|
||||
|
||||
1. 创建一个新的 secret,名称为 `secret1`,命名空间为 `default`:
|
||||
|
||||
```
|
||||
kubectl create secret generic secret1 -n default --from-literal=mykey=mydata
|
||||
```
|
||||
```shell
|
||||
kubectl create secret generic secret1 -n default --from-literal=mykey=mydata
|
||||
```
|
||||
|
||||
<!--
|
||||
2. Using the etcdctl commandline, read that secret out of etcd:
|
||||
-->
|
||||
2. 使用 etcdctl 命令行,从 etcd 中读取 secret:
|
||||
|
||||
```
|
||||
ETCDCTL_API=3 etcdctl get /registry/secrets/default/secret1 [...] | hexdump -C
|
||||
```
|
||||
```shell
|
||||
ETCDCTL_API=3 etcdctl get /registry/secrets/default/secret1 [...] | hexdump -C
|
||||
```
|
||||
|
||||
<!--
|
||||
where `[...]` must be the additional arguments for connecting to the etcd server.
|
||||
-->
|
||||
这里的 `[...]` 是用来连接 etcd 服务的额外参数。
|
||||
|
||||
<!--
|
||||
where `[...]` must be the additional arguments for connecting to the etcd server.
|
||||
3. Verify the stored secret is prefixed with `k8s:enc:aescbc:v1:` which indicates the `aescbc` provider has encrypted the resulting data.
|
||||
4. Verify the secret is correctly decrypted when retrieved via the API:
|
||||
-->
|
||||
这里的 `[...]` 是用来连接 etcd 服务的额外参数。
|
||||
3. 验证所存储的 Secret 以 `k8s:enc:aescbc:v1:` 为前缀,这表明 `aescbc` provider 已加密所存储的数据。
|
||||
4. 通过 API 检索,验证 secret 是否被正确解密:
|
||||
|
||||
```
|
||||
kubectl describe secret secret1 -n default
|
||||
```
|
||||
|
||||
<!--
|
||||
should match `mykey: mydata`
|
||||
-->
|
||||
必须匹配 `mykey: mydata`
|
||||
```shell
|
||||
kubectl describe secret secret1 -n default
|
||||
```
|
||||
|
||||
<!--
|
||||
should match `mykey: mydata`, mydata is encoded, check [decoding a secret](/docs/concepts/configuration/secret#decoding-a-secret) to
|
||||
completely decode the secret.
|
||||
-->
|
||||
其输出应该是 `mykey: bXlkYXRh`,其中 `mydata` 是经过 base64 编码的,请参阅
|
||||
[解码 Secret](/zh/docs/concepts/configuration/secret#decoding-a-secret)
|
||||
了解如何完全解码 Secret 内容。
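`bXlkYXRh` is simply the base64 encoding of `mydata`, which you can confirm locally:

```shell
echo 'bXlkYXRh' | base64 --decode
# mydata
```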
|
||||
|
||||
<!--
|
||||
## Ensure all secrets are encrypted
|
||||
|
||||
Since secrets are encrypted on write, performing an update on a secret will encrypt that content.
|
||||
-->
|
||||
## 确保所有 secret 都被加密
|
||||
## 确保所有 Secret 都被加密
|
||||
|
||||
由于 secret 是在写入时被加密,因此对 secret 执行更新也会加密该内容。
|
||||
由于 Secret 是在写入时被加密,因此对 Secret 执行更新也会加密该内容。
|
||||
|
||||
```
|
||||
kubectl get secrets --all-namespaces -o json | kubectl replace -f -
|
||||
|
@ -255,13 +276,17 @@ kubectl get secrets --all-namespaces -o json | kubectl replace -f -
|
|||
|
||||
<!--
|
||||
The command above reads all secrets and then updates them to apply server side encryption.
|
||||
-->
|
||||
上面的命令读取所有 Secret,然后使用服务端加密来更新其内容。
|
||||
|
||||
<!--
|
||||
If an error occurs due to a conflicting write, retry the command.
|
||||
For larger clusters, you may wish to subdivide the secrets by namespace or script an update.
|
||||
-->
|
||||
上面的命令读取所有 secret,然后使用服务端加密来进行更新。
|
||||
{{< note >}}
|
||||
如果由于冲突写入而发生错误,请重试该命令。
|
||||
对于较大的集群,您可能希望通过命名空间或更新脚本来分割 secret。
|
||||
|
||||
对于较大的集群,你可能希望通过命名空间或更新脚本来对 Secret 进行划分。
|
||||
{{< /note >}}
|
||||
|
||||
<!--
|
||||
## Rotating a decryption key
|
||||
|
@ -273,14 +298,15 @@ the presence of a highly available deployment where multiple `kube-apiserver` pr
|
|||
2. Restart all `kube-apiserver` processes to ensure each server can decrypt using the new key
|
||||
3. Make the new key the first entry in the `keys` array so that it is used for encryption in the config
|
||||
4. Restart all `kube-apiserver` processes to ensure each server now encrypts using the new key
|
||||
5. Run `kubectl get secrets --all-namespaces -o json | kubectl replace -f -` to encrypt all existing secrets with the new key
|
||||
5. Run `kubectl get secrets --all-namespaces -o json | kubectl replace -f -` to encrypt all existing secrets with the new key
|
||||
6. Remove the old decryption key from the config after you back up etcd with the new key in use and update all secrets
|
||||
|
||||
With a single `kube-apiserver`, step 2 may be skipped.
|
||||
-->
|
||||
## 回滚解密密钥
|
||||
## 轮换解密密钥
|
||||
|
||||
在不发生停机的情况下更改 secret 需要多步操作,特别是在有多个 `kube-apiserver` 进程正在运行的高可用部署的情况下。
|
||||
在不发生停机的情况下更改 Secret 需要多步操作,特别是在有多个 `kube-apiserver` 进程正在运行的
|
||||
高可用环境中。
|
||||
|
||||
1. 生成一个新密钥并将其添加为所有服务器上当前提供程序的第二个密钥条目
|
||||
2. 重新启动所有 `kube-apiserver` 进程以确保每台服务器都可以使用新密钥进行解密
|
||||
|
@ -291,7 +317,6 @@ With a single `kube-apiserver`, step 2 may be skipped.
|
|||
|
||||
如果只有一个 `kube-apiserver` 实例,第 2 步可以忽略。
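During steps 1 and 3 the configuration on each server might evolve roughly as follows; this is a hedged sketch (path, key names, and key values are placeholders) of the state after step 3, when the new key has been promoted to the first position so it is used for encryption while the old key remains readable.

```shell
# After step 3, the config on every server might look like this (placeholders throughout);
# key2 now encrypts new writes, key1 is kept so data written with it can still be decrypted.
cat > /etc/kubernetes/enc/encryption-config.yaml <<EOF
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key2
              secret: <NEW BASE64-ENCODED 32-BYTE KEY>
            - name: key1
              secret: <OLD BASE64-ENCODED 32-BYTE KEY>
      - identity: {}
EOF
```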
|
||||
|
||||
|
||||
<!--
|
||||
## Decrypting all data
|
||||
|
||||
|
@ -302,8 +327,8 @@ To disable encryption at rest place the `identity` provider as the first entry i
|
|||
要禁用静态加密,请将 `identity` provider 作为配置中的第一个条目:
|
||||
|
||||
```yaml
|
||||
kind: EncryptionConfiguration
|
||||
apiVersion: apiserver.config.k8s.io/v1
|
||||
kind: EncryptionConfiguration
|
||||
resources:
|
||||
- resources:
|
||||
- secrets
|
||||
|
@ -316,11 +341,16 @@ resources:
|
|||
```
|
||||
|
||||
<!--
|
||||
and restart all `kube-apiserver` processes. Then run the command `kubectl get secrets --all-namespaces -o json | kubectl replace -f -`
|
||||
and restart all `kube-apiserver` processes. Then run
|
||||
-->
|
||||
并重新启动所有 `kube-apiserver` 进程。然后运行:
|
||||
|
||||
```
|
||||
kubectl get secrets --all-namespaces -o json | kubectl replace -f -
|
||||
```
|
||||
|
||||
<!--
|
||||
to force all secrets to be decrypted.
|
||||
-->
|
||||
并重新启动所有 `kube-apiserver` 进程。然后运行命令 `kubectl get secrets --all-namespaces -o json | kubectl replace -f -` 强制解密所有 secret。
|
||||
|
||||
|
||||
|
||||
以强制解密所有 secret。
|
||||
|
||||
|
|
|
@ -3,10 +3,8 @@ title: 为节点发布扩展资源
|
|||
content_type: task
|
||||
---
|
||||
<!--
|
||||
---
|
||||
title: Advertise Extended Resources for a Node
|
||||
content_type: task
|
||||
---
|
||||
-->
|
||||
|
||||
<!-- overview -->
|
||||
|
@ -16,38 +14,27 @@ This page shows how to specify extended resources for a Node.
|
|||
Extended resources allow cluster administrators to advertise node-level
|
||||
resources that would otherwise be unknown to Kubernetes.
|
||||
-->
|
||||
本文展示了如何为节点指定扩展资源。 扩展资源允许集群管理员发布节点级别的资源,这些资源在不进行发布的情况下无法被 Kubernetes 感知。
|
||||
|
||||
|
||||
|
||||
|
||||
本文展示了如何为节点指定扩展资源(Extended Resource)。
|
||||
扩展资源允许集群管理员发布节点级别的资源,这些资源在不进行发布的情况下无法被 Kubernetes 感知。
|
||||
|
||||
## {{% heading "prerequisites" %}}
|
||||
|
||||
|
||||
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
|
||||
|
||||
|
||||
|
||||
|
||||
<!-- steps -->
|
||||
|
||||
<!--
|
||||
## Get the names of your Nodes
|
||||
|
||||
```shell
|
||||
kubectl get nodes
|
||||
```
|
||||
|
||||
Choose one of your Nodes to use for this exercise.
|
||||
-->
|
||||
## 获取您的节点名称
|
||||
## 获取你的节点名称
|
||||
|
||||
```shell
|
||||
kubectl get nodes
|
||||
```
|
||||
|
||||
选择您的一个节点用于此练习。
|
||||
选择一个节点用于此练习。
|
||||
|
||||
<!--
|
||||
## Advertise a new extended resource on one of your Nodes
|
||||
|
@ -56,6 +43,12 @@ To advertise a new extended resource on a Node, send an HTTP PATCH request to
|
|||
the Kubernetes API server. For example, suppose one of your Nodes has four dongles
|
||||
attached. Here's an example of a PATCH request that advertises four dongle resources
|
||||
for your Node.
|
||||
-->
|
||||
## 在你的一个节点上发布一种新的扩展资源
|
||||
|
||||
为在一个节点上发布一种新的扩展资源,需要发送一个 HTTP PATCH 请求到 Kubernetes API server。
|
||||
例如:假设你的一个节点上带有四个 dongle 资源。
|
||||
下面是一个 PATCH 请求的示例,该请求为你的节点发布四个 dongle 资源。
|
||||
|
||||
```shell
|
||||
PATCH /api/v1/nodes/<your-node-name>/status HTTP/1.1
|
||||
|
@ -72,101 +65,51 @@ Host: k8s-master:8080
|
|||
]
|
||||
```
|
||||
|
||||
<!--
|
||||
Note that Kubernetes does not need to know what a dongle is or what a dongle is for.
|
||||
The preceding PATCH request just tells Kubernetes that your Node has four things that
|
||||
you call dongles.
|
||||
|
||||
Start a proxy, so that you can easily send requests to the Kubernetes API server:
|
||||
-->
|
||||
注意:Kubernetes 不需要了解 dongle 资源的含义和用途。
|
||||
前面的 PATCH 请求仅仅告诉 Kubernetes 你的节点拥有四个你称之为 dongle 的东西。
|
||||
|
||||
启动一个代理(proxy),以便你可以很容易地向 Kubernetes API server 发送请求:
|
||||
|
||||
```shell
|
||||
kubectl proxy
|
||||
```
|
||||
|
||||
<!--
|
||||
In another command window, send the HTTP PATCH request.
|
||||
Replace `<your-node-name>` with the name of your Node:
|
||||
-->
|
||||
|
||||
在另一个命令窗口中,发送 HTTP PATCH 请求。 用你的节点名称替换 `<your-node-name>`:
|
||||
|
||||
```shell
|
||||
curl --header "Content-Type: application/json-patch+json" \
|
||||
--request PATCH \
|
||||
--data '[{"op": "add", "path": "/status/capacity/example.com~1dongle", "value": "4"}]' \
|
||||
http://localhost:8001/api/v1/nodes/<your-node-name>/status
|
||||
--request PATCH \
|
||||
--data '[{"op": "add", "path": "/status/capacity/example.com~1dongle", "value": "4"}]' \
|
||||
http://localhost:8001/api/v1/nodes/<your-node-name>/status
|
||||
```
|
||||
|
||||
{{< note >}}
|
||||
<!--
|
||||
In the preceding request, `~1` is the encoding for the character / in
|
||||
the patch path. The operation path value in JSON-Patch is interpreted as a
|
||||
JSON-Pointer. For more details, see
|
||||
[IETF RFC 6901](https://tools.ietf.org/html/rfc6901), section 3.
|
||||
{{< /note >}}
|
||||
|
||||
The output shows that the Node has a capacity of 4 dongles:
|
||||
|
||||
```
|
||||
"capacity": {
|
||||
"cpu": "2",
|
||||
"memory": "2049008Ki",
|
||||
"example.com/dongle": "4",
|
||||
```
|
||||
|
||||
Describe your Node:
|
||||
|
||||
```
|
||||
kubectl describe node <your-node-name>
|
||||
```
|
||||
|
||||
Once again, the output shows the dongle resource:
|
||||
|
||||
```yaml
|
||||
Capacity:
|
||||
cpu: 2
|
||||
memory: 2049008Ki
|
||||
example.com/dongle: 4
|
||||
```
|
||||
|
||||
Now, application developers can create Pods that request a certain
|
||||
number of dongles. See
|
||||
[Assign Extended Resources to a Container](/docs/tasks/configure-pod-container/extended-resource/).
|
||||
-->
|
||||
## 在您的一个节点上发布一种新的扩展资源
|
||||
|
||||
为在一个节点上发布一种新的扩展资源,需要发送一个 HTTP PATCH 请求到 Kubernetes API server。 例如:假设您的一个节点上带有四个 dongle 资源。下面是一个 PATCH 请求的示例, 该请求为您的节点发布四个 dongle 资源。
|
||||
|
||||
```shell
|
||||
PATCH /api/v1/nodes/<your-node-name>/status HTTP/1.1
|
||||
Accept: application/json
|
||||
Content-Type: application/json-patch+json
|
||||
Host: k8s-master:8080
|
||||
|
||||
[
|
||||
{
|
||||
"op": "add",
|
||||
"path": "/status/capacity/example.com~1dongle",
|
||||
"value": "4"
|
||||
}
|
||||
]
|
||||
```
|
||||
|
||||
注意:Kubernetes 不需要了解 dongle 资源的含义和用途。 前面的 PATCH 请求仅仅告诉 Kubernetes 您的节点拥有四个您称之为 dongle 的东西。
|
||||
|
||||
启动一个代理(proxy),以便您可以很容易地向 Kubernetes API server 发送请求:
|
||||
|
||||
```shell
|
||||
kubectl proxy
|
||||
```
|
||||
|
||||
在另一个命令窗口中,发送 HTTP PATCH 请求。 用您的节点名称替换 `<your-node-name>`:
|
||||
|
||||
```shell
|
||||
curl --header "Content-Type: application/json-patch+json" \
|
||||
--request PATCH \
|
||||
--data '[{"op": "add", "path": "/status/capacity/example.com~1dongle", "value": "4"}]' \
|
||||
http://localhost:8001/api/v1/nodes/<your-node-name>/status
|
||||
```
|
||||
|
||||
{{< note >}}
|
||||
在前面的请求中,`~1` 为 patch 路径中 “/” 符号的编码。JSON-Patch 中的操作路径值被解析为 JSON 指针。 更多细节,请查看 [IETF RFC 6901](https://tools.ietf.org/html/rfc6901) 的第 3 部分。
|
||||
在前面的请求中,`~1` 为 patch 路径中 “/” 符号的编码。
|
||||
JSON-Patch 中的操作路径值被解析为 JSON 指针。
|
||||
更多细节,请查看 [IETF RFC 6901](https://tools.ietf.org/html/rfc6901) 的第 3 节。
|
||||
{{< /note >}}
|
||||
|
||||
<!--
|
||||
The output shows that the Node has a capacity of 4 dongles:
|
||||
-->
|
||||
输出显示该节点的 dongle 资源容量(capacity)为 4:
|
||||
|
||||
```
|
||||
|
@ -176,12 +119,14 @@ http://localhost:8001/api/v1/nodes/<your-node-name>/status
|
|||
"example.com/dongle": "4",
|
||||
```
|
||||
|
||||
描述您的节点:
|
||||
<!-- Describe your Node: -->
|
||||
描述你的节点:
|
||||
|
||||
```
|
||||
```shell
|
||||
kubectl describe node <your-node-name>
|
||||
```
|
||||
|
||||
<!-- Once again, the output shows the dongle resource: -->
|
||||
输出再次展示了 dongle 资源:
|
||||
|
||||
```yaml
|
||||
|
@ -191,7 +136,13 @@ Capacity:
|
|||
example.com/dongle: 4
|
||||
```
|
||||
|
||||
现在,应用开发者可以创建请求一定数量 dongle 资源的 Pod 了。 参见[将扩展资源分配给容器](/docs/tasks/configure-pod-container/extended-resource/)。
|
||||
<!--
|
||||
Now, application developers can create Pods that request a certain
|
||||
number of dongles. See
|
||||
[Assign Extended Resources to a Container](/docs/tasks/configure-pod-container/extended-resource/).
|
||||
-->
|
||||
现在,应用开发者可以创建请求一定数量 dongle 资源的 Pod 了。
|
||||
参见[将扩展资源分配给容器](/zh/docs/tasks/configure-pod-container/extended-resource/)。
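For illustration, a Pod that asks for one of the dongles advertised above could look like the following hedged sketch (the Pod name and image are arbitrary examples); extended resources are requested under `resources.requests` and `resources.limits` just like CPU and memory:

```shell
# Create a Pod that requests one dongle (Pod name and image are arbitrary examples)
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: dongle-demo
spec:
  containers:
    - name: demo
      image: nginx
      resources:
        requests:
          example.com/dongle: "1"
        limits:
          example.com/dongle: "1"
EOF
```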
|
||||
|
||||
<!--
|
||||
## Discussion
|
||||
|
@ -202,17 +153,25 @@ running on the Node, it can have a certain number of dongles to be shared
|
|||
by all components running on the Node. And just as application developers
|
||||
can create Pods that request a certain amount of memory and CPU, they can
|
||||
create Pods that request a certain number of dongles.
|
||||
-->
|
||||
## 讨论
|
||||
|
||||
扩展资源类似于内存和 CPU 资源。例如,正如一个节点拥有一定数量的内存和 CPU 资源,
|
||||
它们被节点上运行的所有组件共享,该节点也可以拥有一定数量的 dongle 资源,
|
||||
这些资源同样被节点上运行的所有组件共享。
|
||||
此外,正如应用开发者可以创建请求一定数量的内存和 CPU 资源的 Pod,
|
||||
他们也可以创建请求一定数量 dongle 资源的 Pod。
|
||||
|
||||
<!--
|
||||
Extended resources are opaque to Kubernetes; Kubernetes does not
|
||||
know anything about what they are. Kubernetes knows only that a Node
|
||||
has a certain number of them. Extended resources must be advertised in integer
|
||||
amounts. For example, a Node can advertise four dongles, but not 4.5 dongles.
|
||||
-->
|
||||
## 讨论
|
||||
|
||||
扩展资源类似于内存和 CPU 资源。 例如,正如一个节点拥有一定数量的内存和 CPU 资源, 它们被节点上运行的所有组件共享,该节点也可以拥有一定数量的 dongle 资源, 这些资源同样被节点上运行的所有组件共享。 此外,正如应用开发者可以创建请求一定数量的内存和 CPU 资源的 Pod, 他们也可以创建请求一定数量 dongle 资源的 Pod。
|
||||
|
||||
扩展资源对 Kubernetes 是不透明的。 Kubernetes 不知道扩展资源含义相关的任何信息。 Kubernetes 只了解一个节点拥有一定数量的扩展资源。 扩展资源必须以整形数量进行发布。 例如,一个节点可以发布 4 个 dongle 资源,但是不能发布 4.5 个。
|
||||
扩展资源对 Kubernetes 是不透明的。Kubernetes 不知道扩展资源含义相关的任何信息。
|
||||
Kubernetes 只了解一个节点拥有一定数量的扩展资源。
|
||||
扩展资源必须以整数数量进行发布。
|
||||
例如,一个节点可以发布 4 个 dongle 资源,但是不能发布 4.5 个。
|
||||
|
||||
<!--
|
||||
### Storage example
|
||||
|
@ -222,28 +181,13 @@ create a name for the special storage, say example.com/special-storage.
|
|||
Then you could advertise it in chunks of a certain size, say 100 GiB. In that case,
|
||||
your Node would advertise that it has eight resources of type
|
||||
example.com/special-storage.
|
||||
|
||||
```yaml
|
||||
Capacity:
|
||||
...
|
||||
example.com/special-storage: 8
|
||||
```
|
||||
|
||||
If you want to allow arbitrary requests for special storage, you
|
||||
could advertise special storage in chunks of size 1 byte. In that case, you would advertise
|
||||
800Gi resources of type example.com/special-storage.
|
||||
|
||||
```yaml
|
||||
Capacity:
|
||||
...
|
||||
example.com/special-storage: 800Gi
|
||||
```
|
||||
|
||||
Then a Container could request any number of bytes of special storage, up to 800Gi.
|
||||
-->
|
||||
### 存储示例
|
||||
|
||||
假设一个节点拥有一种特殊类型的磁盘存储,其容量为 800 GiB。 您可以为该特殊存储创建一个名称, 如 example.com/special-storage。 然后您就可以按照一定规格的块(如 100 GiB)对其进行发布。 在这种情况下,您的节点将会通知它拥有八个 example.com/special-storage 类型的资源。
|
||||
假设一个节点拥有一种特殊类型的磁盘存储,其容量为 800 GiB。
|
||||
你可以为该特殊存储创建一个名称,如 `example.com/special-storage`。
|
||||
然后你就可以按照一定规格的块(如 100 GiB)对其进行发布。
|
||||
在这种情况下,你的节点将会通知它拥有八个 `example.com/special-storage` 类型的资源。
|
||||
|
||||
```yaml
|
||||
Capacity:
|
||||
|
@ -251,7 +195,13 @@ Capacity:
|
|||
example.com/special-storage: 8
|
||||
```
|
||||
|
||||
如果您想要允许针对特殊存储任意(数量)的请求,您可以按照 1 byte 大小的块来发布特殊存储。 在这种情况下,您将会发布 800Gi 数量的 example.com/special-storage 类型的资源。
|
||||
<!--
|
||||
If you want to allow arbitrary requests for special storage, you
|
||||
could advertise special storage in chunks of size 1 byte. In that case, you would advertise
|
||||
800Gi resources of type example.com/special-storage.
|
||||
-->
|
||||
如果你想要允许针对特殊存储任意(数量)的请求,你可以按照 1 字节大小的块来发布特殊存储。
|
||||
在这种情况下,你将会发布 800Gi 数量的 example.com/special-storage 类型的资源。
|
||||
|
||||
```yaml
|
||||
Capacity:
|
||||
|
@ -259,50 +209,21 @@ Capacity:
|
|||
example.com/special-storage: 800Gi
|
||||
```
|
||||
|
||||
<!--
|
||||
Then a Container could request any number of bytes of special storage, up to 800Gi.
|
||||
-->
|
||||
然后,容器就能够请求任意数量(多达 800Gi)字节的特殊存储。
|
||||
|
||||
```yaml
|
||||
Capacity:
|
||||
...
|
||||
example.com/special-storage: 800Gi
|
||||
```
|
||||
|
||||
<!--
|
||||
## Clean up
|
||||
|
||||
Here is a PATCH request that removes the dongle advertisement from a Node.
|
||||
|
||||
```
|
||||
PATCH /api/v1/nodes/<your-node-name>/status HTTP/1.1
|
||||
Accept: application/json
|
||||
Content-Type: application/json-patch+json
|
||||
Host: k8s-master:8080
|
||||
|
||||
[
|
||||
{
|
||||
"op": "remove",
|
||||
"path": "/status/capacity/example.com~1dongle",
|
||||
}
|
||||
]
|
||||
```
|
||||
|
||||
Start a proxy, so that you can easily send requests to the Kubernetes API server:
|
||||
|
||||
```shell
|
||||
kubectl proxy
|
||||
```
|
||||
|
||||
In another command window, send the HTTP PATCH request.
|
||||
Replace `<your-node-name>` with the name of your Node:
|
||||
|
||||
```shell
|
||||
curl --header "Content-Type: application/json-patch+json" \
|
||||
--request PATCH \
|
||||
--data '[{"op": "remove", "path": "/status/capacity/example.com~1dongle"}]' \
|
||||
http://localhost:8001/api/v1/nodes/<your-node-name>/status
|
||||
```
|
||||
|
||||
Verify that the dongle advertisement has been removed:
|
||||
|
||||
```
|
||||
kubectl describe node <your-node-name> | grep dongle
|
||||
```
|
||||
|
||||
(you should not see any output)
|
||||
-->
|
||||
## 清理
|
||||
|
||||
|
@ -322,13 +243,20 @@ Host: k8s-master:8080
|
|||
]
|
||||
```
|
||||
|
||||
启动一个代理,以便您可以很容易地向 Kubernetes API server 发送请求:
|
||||
<!--
|
||||
Start a proxy, so that you can easily send requests to the Kubernetes API server:
|
||||
-->
|
||||
启动一个代理,以便你可以很容易地向 Kubernetes API 服务器发送请求:
|
||||
|
||||
```shell
|
||||
kubectl proxy
|
||||
```
|
||||
|
||||
在另一个命令窗口中,发送 HTTP PATCH 请求。 用您的节点名称替换 `<your-node-name>`:
|
||||
<!--
|
||||
In another command window, send the HTTP PATCH request.
|
||||
Replace `<your-node-name>` with the name of your Node:
|
||||
-->
|
||||
在另一个命令窗口中,发送 HTTP PATCH 请求。用你的节点名称替换 `<your-node-name>`:
|
||||
|
||||
```shell
|
||||
curl --header "Content-Type: application/json-patch+json" \
|
||||
|
@ -337,20 +265,23 @@ curl --header "Content-Type: application/json-patch+json" \
|
|||
http://localhost:8001/api/v1/nodes/<your-node-name>/status
|
||||
```
|
||||
|
||||
<!--
|
||||
Verify that the dongle advertisement has been removed:
|
||||
-->
|
||||
验证 dongle 资源的发布已经被移除:
|
||||
|
||||
```
|
||||
kubectl describe node <your-node-name> | grep dongle
|
||||
```
|
||||
|
||||
|
||||
<!--
|
||||
(you should not see any output)
|
||||
-->
|
||||
(你应该看不到任何输出)
|
||||
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
|
||||
(你不应该看到任何输出)
|
||||
|
||||
<!--
|
||||
### For application developers
|
||||
|
||||
|
@ -363,11 +294,10 @@ kubectl describe node <your-node-name> | grep dongle
|
|||
-->
|
||||
### 针对应用开发人员
|
||||
|
||||
* [将扩展资源分配给容器](/docs/tasks/configure-pod-container/extended-resource/)
|
||||
* [将扩展资源分配给容器](/zh/docs/tasks/configure-pod-container/extended-resource/)
|
||||
|
||||
### 针对集群管理员
|
||||
|
||||
* [为 Namespace 配置最小和最大内存约束](/docs/tasks/administer-cluster/memory-constraint-namespace/)
|
||||
* [为 Namespace 配置最小和最大 CPU 约束](/docs/tasks/administer-cluster/cpu-constraint-namespace/)
|
||||
|
||||
* [为名字空间配置最小和最大内存约束](/zh/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/)
|
||||
* [为名字空间配置最小和最大 CPU 约束](/zh/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/)
|
||||
|
||||
|
|
|
@ -3,10 +3,8 @@ title: IP Masquerade Agent 用户指南
|
|||
content_type: task
|
||||
---
|
||||
<!--
|
||||
---
|
||||
title: IP Masquerade Agent User Guide
|
||||
content_type: task
|
||||
---
|
||||
-->
|
||||
|
||||
<!-- overview -->
|
||||
|
@ -15,14 +13,10 @@ This page shows how to configure and enable the ip-masq-agent.
|
|||
-->
|
||||
此页面展示如何配置和启用 ip-masq-agent。
|
||||
|
||||
|
||||
## {{% heading "prerequisites" %}}
|
||||
|
||||
|
||||
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
|
||||
|
||||
|
||||
|
||||
<!-- discussion -->
|
||||
<!--
|
||||
## IP Masquerade Agent User Guide
|
||||
|
@ -37,7 +31,7 @@ ip-masq-agent 配置 iptables 规则以隐藏位于集群节点 IP 地址后面
|
|||
<!--
|
||||
### **Key Terms**
|
||||
-->
|
||||
### **关键词**
|
||||
### **关键术语**
|
||||
|
||||
<!--
|
||||
* **NAT (Network Address Translation)**
|
||||
|
@ -70,7 +64,17 @@ ip-masq-agent 配置 iptables 规则以隐藏位于集群节点 IP 地址后面
|
|||
<!--
|
||||
The ip-masq-agent configures iptables rules to handle masquerading node/pod IP addresses when sending traffic to destinations outside the cluster node's IP and the Cluster IP range. This essentially hides pod IP addresses behind the cluster node's IP address. In some environments, traffic to "external" addresses must come from a known machine address. For example, in Google Cloud, any traffic to the internet must come from a VM's IP. When containers are used, as in Google Kubernetes Engine, the Pod IP will be rejected for egress. To avoid this, we must hide the Pod IP behind the VM's own IP address - generally known as "masquerade". By default, the agent is configured to treat the three private IP ranges specified by [RFC 1918](https://tools.ietf.org/html/rfc1918) as non-masquerade [CIDR](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing). These ranges are 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16. The agent will also treat link-local (169.254.0.0/16) as a non-masquerade CIDR by default. The agent is configured to reload its configuration from the location */etc/config/ip-masq-agent* every 60 seconds, which is also configurable.
|
||||
-->
|
||||
ip-masq-agent 配置 iptables 规则,以便在将流量发送到集群节点的IP和集群IP范围之外的目标时处理伪装节点/pod 的 IP 地址。这基本上隐藏了集群节点 IP 地址后面的pod IP地址。在某些环境中,去往“外部”地址的流量必须从已知的机器地址发出。例如,在 Google Cloud 中,任何到互联网的流量都必须来自 VM 的 IP。使用容器时,如 Google Kubernetes Engine,从Pod IP 发出的流量将被拒绝出出站。为了避免这种情况,我们必须将 Pod IP 隐藏在 VM 自己的IP地址后面 - 通常称为“伪装”。默认情况下,代理配置为将[RFC 1918](https://tools.ietf.org/html/rfc1918)指定的三个私有IP范围视为非伪装[CIDR](https://zh.wikipedia.org/wiki/%E6%97%A0%E7%B1%BB%E5%88%AB%E5%9F%9F%E9%97%B4%E8%B7%AF%E7%94%B1)。这些范围是 10.0.0.0/8,172.16.0.0/12 和 192.168.0.0/16。默认情况下,代理还将链路本地地址(169.254.0.0/16)视为非伪装 CIDR。代理程序配置为每隔60秒从*/etc/config/ip-masq-agent*重新加载其配置,这也是可修改的。
|
||||
ip-masq-agent 配置 iptables 规则,以便在将流量发送到集群节点的 IP 和集群 IP 范围之外的目标时
|
||||
处理伪装节点/Pod 的 IP 地址。这基本上隐藏了集群节点 IP 地址后面的 Pod IP 地址。
|
||||
在某些环境中,去往“外部”地址的流量必须从已知的机器地址发出。
|
||||
例如,在 Google Cloud 中,任何到互联网的流量都必须来自 VM 的 IP。
|
||||
使用容器时,如 Google Kubernetes Engine,从 Pod IP 发出的流量将被拒绝出站。
|
||||
为了避免这种情况,我们必须将 Pod IP 隐藏在 VM 自己的 IP 地址后面 - 通常称为“伪装”。
|
||||
默认情况下,代理配置为将[RFC 1918](https://tools.ietf.org/html/rfc1918)指定的三个私有
|
||||
IP 范围视为非伪装 [CIDR](https://zh.wikipedia.org/wiki/%E6%97%A0%E7%B1%BB%E5%88%AB%E5%9F%9F%E9%97%B4%E8%B7%AF%E7%94%B1)。
|
||||
这些范围是 10.0.0.0/8,172.16.0.0/12 和 192.168.0.0/16。
|
||||
默认情况下,代理还将链路本地地址(169.254.0.0/16)视为非伪装 CIDR。
|
||||
代理程序配置为每隔 60 秒从 */etc/config/ip-masq-agent* 重新加载其配置,这也是可修改的。
|
||||
|
||||

|
||||
|
||||
|
@ -87,7 +91,7 @@ The agent configuration file must be written in YAML or JSON syntax, and may con
|
|||
<!--
|
||||
* **masqLinkLocal:** A Boolean (true / false) which indicates whether to masquerade traffic to the link local prefix 169.254.0.0/16. False by default.
|
||||
-->
|
||||
* **masqLinkLocal:** 布尔值 (true / false),表示是否将流量伪装到本地链路前缀169.254.0.0/16。 默认为 false。
|
||||
* **masqLinkLocal:** 布尔值 (true / false),表示是否将流量伪装到本地链路前缀 169.254.0.0/16。默认为 false。
|
||||
|
||||
<!--
|
||||
* **resyncInterval:** An interval at which the agent attempts to reload config from disk. e.g. '30s' where 's' is seconds, 'ms' is milliseconds etc...
|
||||
|
@ -97,7 +101,10 @@ The agent configuration file must be written in YAML or JSON syntax, and may con
|
|||
<!--
|
||||
Traffic to 10.0.0.0/8, 172.16.0.0/12 and 192.168.0.0/16) ranges will NOT be masqueraded. Any other traffic (assumed to be internet) will be masqueraded. An example of a local destination from a pod could be its Node's IP address as well as another node's address or one of the IP addresses in Cluster's IP range. Any other traffic will be masqueraded by default. The below entries show the default set of rules that are applied by the ip-masq-agent:
|
||||
-->
|
||||
10.0.0.0/8,172.16.0.0/12和192.168.0.0/16)范围内的流量不会被伪装。任何其他流量(假设是互联网)将被伪装。pod 访问本地目的地的例子,可以是其节点的 IP 地址、另一节点的地址或集群的 IP 地址范围内的一个 IP 地址。默认情况下,任何其他流量都将伪装。以下条目展示了 ip-masq-agent 的默认使用的规则:
|
||||
10.0.0.0/8、172.16.0.0/12 和 192.168.0.0/16 范围内的流量不会被伪装。
|
||||
任何其他流量(假设是互联网)将被伪装。
|
||||
Pod 访问本地目的地的例子,可以是其节点的 IP 地址、另一节点的地址或集群的 IP 地址范围内的一个 IP 地址。
|
||||
默认情况下,任何其他流量都将被伪装。以下条目展示了 ip-masq-agent 默认使用的规则:
|
||||
|
||||
<!--
|
||||
```
|
||||
|
@ -123,9 +130,11 @@ MASQUERADE all -- anywhere anywhere /* ip-masq-agent:
|
|||
<!--
|
||||
By default, in GCE/Google Kubernetes Engine starting with Kubernetes version 1.7.0, if network policy is enabled or you are using a cluster CIDR not in the 10.0.0.0/8 range, the ip-masq-agent will run in your cluster. If you are running in another environment, you can add the ip-masq-agent [DaemonSet](/docs/concepts/workloads/controllers/daemonset/) to your cluster:
|
||||
-->
|
||||
默认情况下,从 Kubernetes 1.7.0 版本开始的 GCE/Google Kubernetes Engine 中,如果启用了网络策略,或者您使用的集群 CIDR 不在 10.0.0.0/8 范围内,则 ip-masq-agent 将在您的集群中运行。如果您在其他环境中运行,则可以将 ip-masq-agent [DaemonSet](/docs/concepts/workloads/controllers/daemonset/) 添加到您的集群:
|
||||
|
||||
|
||||
默认情况下,从 Kubernetes 1.7.0 版本开始的 GCE/Google Kubernetes Engine 中,
|
||||
如果启用了网络策略,或者你使用的集群 CIDR 不在 10.0.0.0/8 范围内,
|
||||
则 ip-masq-agent 将在你的集群中运行。
|
||||
如果你在其他环境中运行,则可以将 ip-masq-agent
|
||||
[DaemonSet](/zh/docs/concepts/workloads/controllers/daemonset/) 添加到你的集群:
|
||||
|
||||
<!-- steps -->
|
||||
|
||||
|
@ -143,7 +152,7 @@ kubectl apply -f https://raw.githubusercontent.com/kubernetes-incubator/ip-masq-
|
|||
<!--
|
||||
You must also apply the appropriate node label to any nodes in your cluster that you want the agent to run on.
|
||||
-->
|
||||
您必须同时将适当的节点标签应用于集群中希望代理运行的任何节点。
|
||||
你必须同时将适当的节点标签应用于集群中希望代理运行的任何节点。
|
||||
|
||||
`
|
||||
kubectl label nodes my-node beta.kubernetes.io/masq-agent-ds-ready=true
|
||||
|
@ -157,7 +166,11 @@ More information can be found in the ip-masq-agent documentation [here](https://
|
|||
<!--
|
||||
In most cases, the default set of rules should be sufficient; however, if this is not the case for your cluster, you can create and apply a [ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/) to customize the IP ranges that are affected. For example, to allow only 10.0.0.0/8 to be considered by the ip-masq-agent, you can create the following [ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/) in a file called "config".
|
||||
-->
|
||||
在大多数情况下,默认的规则集应该足够;但是,如果您的群集不是这种情况,则可以创建并应用 [ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/) 来自定义受影响的 IP 范围。 例如,要允许 ip-masq-agent 仅作用于 10.0.0.0/8,您可以一个名为 “config” 的文件中创建以下 [ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/) 。
|
||||
在大多数情况下,默认的规则集应该足够;但是,如果你的群集不是这种情况,则可以创建并应用
|
||||
[ConfigMap](/zh/docs/tasks/configure-pod-container/configure-pod-configmap/)
|
||||
来自定义受影响的 IP 范围。
|
||||
例如,要允许 ip-masq-agent 仅作用于 10.0.0.0/8,你可以在一个名为 “config” 的文件中创建以下
|
||||
[ConfigMap](/zh/docs/tasks/configure-pod-container/configure-pod-configmap/)。
|
||||
|
||||
{{< note >}}
|
||||
<!--
|
||||
|
@ -175,7 +188,7 @@ resyncInterval: 60s
|
|||
<!--
|
||||
Run the following command to add the config map to your cluster:
|
||||
-->
|
||||
运行以下命令将配置映射添加到您的集群:
|
||||
运行以下命令将配置映射添加到你的集群:
|
||||
|
||||
```
|
||||
kubectl create configmap ip-masq-agent --from-file=config --namespace=kube-system
|
||||
|
@ -185,8 +198,9 @@ kubectl create configmap ip-masq-agent --from-file=config --namespace=kube-syste
|
|||
This will update a file located at */etc/config/ip-masq-agent* which is periodically checked every *resyncInterval* and applied to the cluster node.
|
||||
After the resync interval has expired, you should see the iptables rules reflect your changes:
|
||||
-->
|
||||
这将更新位于 */etc/config/ip-masq-agent* 的一个文件,该文件以 *resyncInterval* 为周期定期检查并应用于集群节点。
|
||||
重新同步间隔到期后,您应该看到您的更改在 iptables 规则中体现:
|
||||
这将更新位于 */etc/config/ip-masq-agent* 的一个文件,该文件以 *resyncInterval*
|
||||
为周期定期检查并应用于集群节点。
|
||||
重新同步间隔到期后,你应该看到你的更改在 iptables 规则中体现:
|
||||
|
||||
<!--
|
||||
```
|
||||
|
|
|
@ -1,81 +1,82 @@
|
|||
---
|
||||
reviewers:
|
||||
- derekwaynecarr
|
||||
- janetkuo
|
||||
title: 命名空间演练
|
||||
title: 名字空间演练
|
||||
content_type: task
|
||||
---
|
||||
<!-- ---
|
||||
<!--
|
||||
reviewers:
|
||||
- derekwaynecarr
|
||||
- janetkuo
|
||||
title: Namespaces Walkthrough
|
||||
content_type: task
|
||||
--- -->
|
||||
-->
|
||||
|
||||
<!-- overview -->
|
||||
<!-- Kubernetes {{< glossary_tooltip text="namespaces" term_id="namespace" >}}
|
||||
help different projects, teams, or customers to share a Kubernetes cluster. -->
|
||||
|
||||
Kubernetes {{< glossary_tooltip text="命名空间" term_id="namespace" >}}
|
||||
<!--
|
||||
Kubernetes {{< glossary_tooltip text="namespaces" term_id="namespace" >}}
|
||||
help different projects, teams, or customers to share a Kubernetes cluster.
|
||||
-->
|
||||
Kubernetes {{< glossary_tooltip text="名字空间" term_id="namespace" >}}
|
||||
有助于不同的项目、团队或客户去共享 Kubernetes 集群。
|
||||
|
||||
<!-- It does this by providing the following:
|
||||
<!--
|
||||
It does this by providing the following:
|
||||
|
||||
1. A scope for [Names](/docs/concepts/overview/working-with-objects/names/).
|
||||
2. A mechanism to attach authorization and policy to a subsection of the cluster. -->
|
||||
|
||||
2. A mechanism to attach authorization and policy to a subsection of the cluster.
|
||||
-->
|
||||
名字空间通过以下方式实现这点:
|
||||
|
||||
1. 为[名字](/docs/concepts/overview/working-with-objects/names/)设置作用域.
|
||||
1. 为[名字](/zh/docs/concepts/overview/working-with-objects/names/)设置作用域。
|
||||
2. 为集群中的部分资源关联鉴权和策略的机制。
|
||||
|
||||
<!-- Use of multiple namespaces is optional. -->
|
||||
|
||||
使用多个命名空间是可选的。
|
||||
|
||||
<!-- This example demonstrates how to use Kubernetes namespaces to subdivide your cluster. -->
|
||||
|
||||
此示例演示了如何使用 Kubernetes 命名空间细分群集。
|
||||
|
||||
<!--
|
||||
Use of multiple namespaces is optional.
|
||||
|
||||
This example demonstrates how to use Kubernetes namespaces to subdivide your cluster.
|
||||
-->
|
||||
使用多个名字空间是可选的。
|
||||
|
||||
此示例演示了如何使用 Kubernetes 名字空间细分群集。
|
||||
|
||||
## {{% heading "prerequisites" %}}
|
||||
|
||||
|
||||
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
|
||||
|
||||
|
||||
|
||||
<!-- steps -->
|
||||
|
||||
<!-- ## Prerequisites -->
|
||||
<!--
|
||||
## Prerequisites
|
||||
|
||||
## 环境准备
|
||||
|
||||
<!-- This example assumes the following:
|
||||
This example assumes the following:
|
||||
|
||||
1. You have an [existing Kubernetes cluster](/docs/setup/).
|
||||
2. You have a basic understanding of Kubernetes _[Pods](/docs/concepts/workloads/pods/pod/)_, _[Services](/docs/concepts/services-networking/service/)_, and _[Deployments](/docs/concepts/workloads/controllers/deployment/)_. -->
|
||||
2. You have a basic understanding of Kubernetes _[Pods](/docs/concepts/workloads/pods/pod/)_, _[Services](/docs/concepts/services-networking/service/)_, and _[Deployments](/docs/concepts/workloads/controllers/deployment/)_.
|
||||
-->
|
||||
## 环境准备
|
||||
|
||||
此示例作如下假设:
|
||||
|
||||
1. 您已拥有一个 [配置好的 Kubernetes 集群](/docs/setup/)。
|
||||
2. 您已对 Kubernetes 的 _[Pods](/docs/concepts/workloads/pods/pod/)_, _[Services](/docs/concepts/services-networking/service/)_ 和 _[Deployments](/docs/concepts/workloads/controllers/deployment/)_ 有基本理解。
|
||||
1. 你已拥有一个[配置好的 Kubernetes 集群](/zh/docs/setup/)。
|
||||
2. 你已对 Kubernetes 的 _[Pods](/zh/docs/concepts/workloads/pods/)_、
|
||||
_[Services](/zh/docs/concepts/services-networking/service/)_ 和
|
||||
_[Deployments](/zh/docs/concepts/workloads/controllers/deployment/)_
|
||||
有基本理解。
|
||||
|
||||
<!-- ## Understand the default namespace
|
||||
<!--
|
||||
## Understand the default namespace
|
||||
|
||||
By default, a Kubernetes cluster will instantiate a default namespace when provisioning the cluster to hold the default set of Pods,
|
||||
Services, and Deployments used by the cluster. -->
|
||||
Services, and Deployments used by the cluster.
|
||||
-->
|
||||
1. 理解默认名字空间
|
||||
|
||||
1. 理解默认命名空间
|
||||
默认情况下,Kubernetes 集群会在配置集群时实例化一个默认名字空间,用以存放集群所使用的默认
|
||||
Pod、Service 和 Deployment 集合。
|
||||
|
||||
默认情况下,Kubernetes 集群会在配置集群时实例化一个默认命名空间,用以存放集群所使用的默认 Pods、Services 和 Deployments 集合。
|
||||
|
||||
<!-- Assuming you have a fresh cluster, you can inspect the available namespaces by doing the following: -->
|
||||
|
||||
假设您有一个新的集群,您可以通过执行以下操作来检查可用的命名空间:
|
||||
<!--
|
||||
Assuming you have a fresh cluster, you can inspect the available namespaces by doing the following:
|
||||
-->
|
||||
假设你有一个新的集群,你可以通过执行以下操作来检查可用的名字空间:
|
||||
|
||||
```shell
|
||||
kubectl get namespaces
|
||||
|
@ -85,72 +86,89 @@ NAME STATUS AGE
|
|||
default Active 13m
|
||||
```
|
||||
|
||||
<!-- ## Create new namespaces -->
|
||||
<!--
|
||||
## Create new namespaces
|
||||
|
||||
## 创建新的命名空间
|
||||
For this exercise, we will create two additional Kubernetes namespaces to hold our content.
|
||||
-->
|
||||
## 创建新的名字空间
|
||||
|
||||
<!-- For this exercise, we will create two additional Kubernetes namespaces to hold our content. -->
|
||||
|
||||
在本练习中,我们将创建两个额外的 Kubernetes 命名空间来保存我们的内容。
|
||||
|
||||
<!-- Let's imagine a scenario where an organization is using a shared Kubernetes cluster for development and production use cases. -->
|
||||
在本练习中,我们将创建两个额外的 Kubernetes 名字空间来保存我们的内容。
|
||||
|
||||
<!--
|
||||
Let's imagine a scenario where an organization is using a shared Kubernetes cluster for development and production use cases.
|
||||
-->
|
||||
我们假设一个场景,某组织正在使用共享的 Kubernetes 集群来支持开发和生产:
|
||||
|
||||
<!-- The development team would like to maintain a space in the cluster where they can get a view on the list of Pods, Services, and Deployments
|
||||
<!--
|
||||
The development team would like to maintain a space in the cluster where they can get a view on the list of Pods, Services, and Deployments
|
||||
they use to build and run their application. In this space, Kubernetes resources come and go, and the restrictions on who can or cannot modify resources
|
||||
are relaxed to enable agile development. -->
|
||||
are relaxed to enable agile development.
|
||||
-->
|
||||
开发团队希望在集群中维护一个空间,以便他们可以查看用于构建和运行其应用程序的 Pod、Service
|
||||
和 Deployment 列表。在这个空间里,Kubernetes 资源被自由地加入或移除,
|
||||
对谁能够或不能修改资源的限制被放宽,以实现敏捷开发。
|
||||
|
||||
开发团队希望在集群中维护一个空间,以便他们可以查看用于构建和运行其应用程序的 Pods、Services 和 Deployments 列表。在这个空间里,Kubernetes 资源被自由地加入或移除,对谁能够或不能修改资源的限制被放宽,以实现敏捷开发。
|
||||
<!--
|
||||
The operations team would like to maintain a space in the cluster where they can enforce strict procedures on who can or cannot manipulate the set of
|
||||
Pods, Services, and Deployments that run the production site.
|
||||
-->
|
||||
运维团队希望在集群中维护一个空间,以便他们可以强制实施一些严格的规程,
|
||||
对谁可以或谁不可以操作运行生产站点的 Pod、Service 和 Deployment 集合进行控制。
|
||||
|
||||
<!-- The operations team would like to maintain a space in the cluster where they can enforce strict procedures on who can or cannot manipulate the set of
|
||||
Pods, Services, and Deployments that run the production site. -->
|
||||
<!--
|
||||
One pattern this organization could follow is to partition the Kubernetes cluster into two namespaces: `development` and `production`.
|
||||
-->
|
||||
该组织可以遵循的一种模式是将 Kubernetes 集群划分为两个名字空间:`development` 和 `production`。
|
||||
|
||||
运维团队希望在集群中维护一个空间,以便他们可以强制实施一些严格的规程,对谁可以或谁不可以操作运行生产站点的 Pods、Services 和 Deployments 集合进行控制。
|
||||
<!--
|
||||
Let's create two new namespaces to hold our work.
|
||||
-->
|
||||
让我们创建两个新的名字空间来保存我们的工作。
|
||||
|
||||
<!-- One pattern this organization could follow is to partition the Kubernetes cluster into two namespaces: `development` and `production`. -->
|
||||
|
||||
该组织可以遵循的一种模式是将 Kubernetes 集群划分为两个命名空间:development 和 production。
|
||||
|
||||
<!-- Let's create two new namespaces to hold our work. -->
|
||||
|
||||
让我们创建两个新的命名空间来保存我们的工作。
|
||||
|
||||
<!-- Use the file [`namespace-dev.json`](/examples/admin/namespace-dev.json) which describes a `development` namespace: -->
|
||||
|
||||
文件 [`namespace-dev.json`](/examples/admin/namespace-dev.json) 描述了 development 命名空间:
|
||||
<!--
|
||||
Use the file [`namespace-dev.json`](/examples/admin/namespace-dev.json) which describes a `development` namespace:
|
||||
-->
|
||||
文件 [`namespace-dev.json`](/examples/admin/namespace-dev.json) 描述了 `development` 名字空间:
|
||||
|
||||
{{< codenew language="json" file="admin/namespace-dev.json" >}}
|
||||
|
||||
<!-- Create the `development` namespace using kubectl. -->
|
||||
<!--
|
||||
Create the `development` namespace using kubectl.
|
||||
-->
|
||||
|
||||
使用 kubectl 创建 development 命名空间。
|
||||
使用 kubectl 创建 `development` 名字空间。
|
||||
|
||||
```shell
|
||||
kubectl create -f https://k8s.io/examples/admin/namespace-dev.json
|
||||
```
|
||||
|
||||
<!-- Save the following contents into file [`namespace-prod.json`](/examples/admin/namespace-prod.json) which describes a `production` namespace: -->
|
||||
|
||||
将下列的内容保存到文件 [`namespace-prod.json`](/examples/admin/namespace-prod.json) 中,这些内容是对 production 命名空间的描述:
|
||||
<!--
|
||||
Save the following contents into file [`namespace-prod.json`](/examples/admin/namespace-prod.json) which describes a `production` namespace:
|
||||
-->
|
||||
将下列的内容保存到文件 [`namespace-prod.json`](/examples/admin/namespace-prod.json) 中,
|
||||
这些内容是对 `production` 名字空间的描述:
|
||||
|
||||
{{< codenew language="json" file="admin/namespace-prod.json" >}}
|
||||
|
||||
<!-- And then let's create the `production` namespace using kubectl. -->
|
||||
|
||||
让我们使用 kubectl 创建 production 命名空间。
|
||||
<!--
|
||||
And then let's create the `production` namespace using kubectl.
|
||||
-->
|
||||
让我们使用 kubectl 创建 `production` 名字空间。
|
||||
|
||||
```shell
|
||||
kubectl create -f https://k8s.io/examples/admin/namespace-prod.json
|
||||
```
|
||||
|
||||
<!-- To be sure things are right, let's list all of the namespaces in our cluster. -->
|
||||
|
||||
为了确保一切正常,我们列出集群中的所有命名空间。
|
||||
<!--
|
||||
To be sure things are right, let's list all of the namespaces in our cluster.
|
||||
-->
|
||||
为了确保一切正常,我们列出集群中的所有名字空间。
|
||||
|
||||
```shell
|
||||
kubectl get namespaces --show-labels
|
||||
```
|
||||
|
||||
```
|
||||
NAME STATUS AGE LABELS
|
||||
default Active 32m <none>
|
||||
|
@ -158,29 +176,32 @@ development Active 29s name=development
|
|||
production Active 23s name=production
|
||||
```
|
||||
|
||||
<!-- ## Create pods in each namespace -->
|
||||
<!--
|
||||
## Create pods in each namespace
|
||||
|
||||
## 在每个命名空间中创建 pod
|
||||
A Kubernetes namespace provides the scope for Pods, Services, and Deployments in the cluster.
|
||||
|
||||
<!-- A Kubernetes namespace provides the scope for Pods, Services, and Deployments in the cluster.
|
||||
Users interacting with one namespace do not see the content in another namespace.
|
||||
|
||||
Users interacting with one namespace do not see the content in another namespace. -->
|
||||
To demonstrate this, let's spin up a simple Deployment and Pods in the `development` namespace.
|
||||
-->
|
||||
## 在每个名字空间中创建 pod
|
||||
|
||||
Kubernetes 命名空间为集群中的 Pods、Services 和 Deployments 提供了作用域。
|
||||
Kubernetes 名字空间为集群中的 Pod、Service 和 Deployment 提供了作用域。
|
||||
|
||||
与一个命名空间交互的用户不会看到另一个命名空间中的内容。
|
||||
与一个名字空间交互的用户不会看到另一个名字空间中的内容。
|
||||
|
||||
<!-- To demonstrate this, let's spin up a simple Deployment and Pods in the `development` namespace. -->
|
||||
|
||||
为了演示这一点,让我们在 development 命名空间中启动一个简单的 Deployment 和 Pod。
|
||||
|
||||
<!-- We first check what is the current context: -->
|
||||
为了演示这一点,让我们在 development 名字空间中启动一个简单的 Deployment 和 Pod。
|
||||
|
||||
<!--
|
||||
We first check what is the current context:
|
||||
-->
|
||||
我们首先检查一下当前的上下文:
|
||||
|
||||
```shell
|
||||
kubectl config view
|
||||
```
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
clusters:
|
||||
|
@ -207,6 +228,7 @@ users:
|
|||
password: h5M0FtUUIflBSdI7
|
||||
username: admin
|
||||
```
|
||||
|
||||
```shell
|
||||
kubectl config current-context
|
||||
```
|
||||
|
@ -214,9 +236,11 @@ kubectl config current-context
|
|||
lithe-cocoa-92103_kubernetes
|
||||
```
|
||||
|
||||
<!-- The next step is to define a context for the kubectl client to work in each namespace. The value of "cluster" and "user" fields are copied from the current context. -->
|
||||
|
||||
下一步是为 kubectl 客户端定义一个上下文,以便在每个命名空间中工作。"cluster" 和 "user" 字段的值将从当前上下文中复制。
|
||||
<!--
|
||||
The next step is to define a context for the kubectl client to work in each namespace. The value of "cluster" and "user" fields are copied from the current context.
|
||||
-->
|
||||
下一步是为 kubectl 客户端定义一个上下文,以便在每个名字空间中工作。
|
||||
"cluster" 和 "user" 字段的值将从当前上下文中复制。
|
||||
|
||||
```shell
|
||||
kubectl config set-context dev --namespace=development \
|
||||
|
@ -228,15 +252,17 @@ kubectl config set-context prod --namespace=production \
|
|||
--user=lithe-cocoa-92103_kubernetes
|
||||
```
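如果不想从 `kubectl config view` 的完整输出中手工抄写 "cluster" 和 "user" 字段,也可以用 jsonpath 直接提取。下面只是一个示意,jsonpath 表达式和上下文名取自本例,实际取值以你本地的 kubeconfig 为准:

```shell
# 示意:从当前上下文中取出 cluster 和 user 字段
CURRENT_CONTEXT=$(kubectl config current-context)
kubectl config view -o jsonpath="{.contexts[?(@.name=='${CURRENT_CONTEXT}')].context.cluster}"; echo
kubectl config view -o jsonpath="{.contexts[?(@.name=='${CURRENT_CONTEXT}')].context.user}"; echo
```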
|
||||
|
||||
<!-- By default, the above commands adds two contexts that are saved into file
|
||||
<!--
|
||||
By default, the above commands adds two contexts that are saved into file
|
||||
`.kube/config`. You can now view the contexts and alternate against the two
|
||||
new request contexts depending on which namespace you wish to work against. -->
|
||||
|
||||
默认地,上述命令会添加两个上下文到 `.kube/config` 文件中。
|
||||
您现在可以查看上下文并根据您希望使用的命名空间并在这两个新的请求上下文之间切换。
|
||||
|
||||
<!-- To view the new contexts: -->
|
||||
new request contexts depending on which namespace you wish to work against.
|
||||
-->
|
||||
默认情况下,上述命令会添加两个上下文到 `.kube/config` 文件中。
|
||||
你现在可以查看这些上下文,并根据你希望操作的名字空间在这两个新的请求上下文之间切换。
|
||||
|
||||
<!--
|
||||
To view the new contexts:
|
||||
-->
|
||||
查看新的上下文:
|
||||
|
||||
```shell
|
||||
|
@ -279,41 +305,50 @@ users:
|
|||
username: admin
|
||||
```
|
||||
|
||||
<!-- Let's switch to operate in the `development` namespace. -->
|
||||
|
||||
让我们切换到 development 命名空间进行操作。
|
||||
<!--
|
||||
Let's switch to operate in the `development` namespace.
|
||||
-->
|
||||
让我们切换到 `development` 名字空间进行操作。
|
||||
|
||||
```shell
|
||||
kubectl config use-context dev
|
||||
```
|
||||
|
||||
<!-- You can verify your current context by doing the following: -->
|
||||
|
||||
您可以使用下列命令验证当前上下文:
|
||||
<!--
|
||||
You can verify your current context by doing the following:
|
||||
-->
|
||||
你可以使用下列命令验证当前上下文:
|
||||
|
||||
```shell
|
||||
kubectl config current-context
|
||||
```
|
||||
|
||||
```
|
||||
dev
|
||||
```
|
||||
|
||||
<!-- At this point, all requests we make to the Kubernetes cluster from the command line are scoped to the `development` namespace. -->
|
||||
|
||||
此时,我们从命令行向 Kubernetes 集群发出的所有请求都限定在 development 命名空间中。
|
||||
|
||||
<!-- Let's create some contents. -->
|
||||
<!--
|
||||
At this point, all requests we make to the Kubernetes cluster from the command line are scoped to the `development` namespace.
|
||||
-->
|
||||
此时,我们从命令行向 Kubernetes 集群发出的所有请求都限定在 `development` 名字空间中。
|
||||
|
||||
<!--
|
||||
Let's create some contents.
|
||||
-->
|
||||
让我们创建一些内容。
|
||||
|
||||
```shell
|
||||
kubectl run snowflake --image=k8s.gcr.io/serve_hostname --replicas=2
|
||||
```
|
||||
<!-- We have just created a deployment whose replica size is 2 that is running the pod called `snowflake` with a basic container that just serves the hostname.
|
||||
Note that `kubectl run` creates deployments only on Kubernetes cluster >= v1.2. If you are running older versions, it creates replication controllers instead.
|
||||
If you want to obtain the old behavior, use `--generator=run/v1` to create replication controllers. See [`kubectl run`](/docs/reference/generated/kubectl/kubectl-commands/#run) for more details. -->
|
||||
{{< codenew file="admin/snowflake-deployment.yaml" >}}
|
||||
|
||||
我们刚刚创建了一个副本大小为 2 的 deployment,该 deployment 运行名为 snowflake 的 pod,其中包含一个仅提供主机名服务的基本容器。请注意,`kubectl run` 仅在 Kubernetes 集群版本 >= v1.2 时创建 deployment。如果您运行在旧版本上,则会创建 replication controller。如果期望执行旧版本的行为,请使用 `--generator=run/v1` 创建 replication controller。 参见 [`kubectl run`](/docs/reference/generated/kubectl/kubectl-commands/#run) 获取更多细节。
|
||||
<!--
|
||||
Apply the manifest to create a Deployment
|
||||
-->
|
||||
应用清单文件来创建 Deployment。
|
||||
|
||||
<!--
|
||||
We have just created a deployment whose replica size is 2 that is running the pod called `snowflake` with a basic container that just serves the hostname.
|
||||
-->
|
||||
我们刚刚创建了一个副本大小为 2 的 Deployment,该 Deployment 运行名为 `snowflake` 的 Pod,
|
||||
其中包含一个仅提供主机名服务的基本容器。
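如果不使用清单文件,也可以像后文 `production` 名字空间中的例子那样,以命令式方式得到等价的 Deployment。下面只是一个示意写法:

```shell
# 示意:命令式地创建 snowflake Deployment,并扩容到 2 个副本
kubectl create deployment snowflake --image=k8s.gcr.io/serve_hostname
kubectl scale deployment snowflake --replicas=2
```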
|
||||
|
||||
```shell
|
||||
kubectl get deployment
|
||||
|
@ -324,7 +359,7 @@ snowflake 2 2 2 2 2m
|
|||
```
|
||||
|
||||
```shell
|
||||
kubectl get pods -l run=snowflake
|
||||
kubectl get pods -l app=snowflake
|
||||
```
|
||||
```
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
|
@ -332,41 +367,51 @@ snowflake-3968820950-9dgr8 1/1 Running 0 2m
|
|||
snowflake-3968820950-vgc4n 1/1 Running 0 2m
|
||||
```
|
||||
|
||||
<!-- And this is great, developers are able to do what they want, and they do not have to worry about affecting content in the `production` namespace. -->
|
||||
<!--
|
||||
And this is great, developers are able to do what they want, and they do not have to worry about affecting content in the `production` namespace.
|
||||
-->
|
||||
这很棒,开发人员可以做他们想要的事情,而不必担心影响 `production` 名字空间中的内容。
|
||||
|
||||
这很棒,开发人员可以做他们想要的事情,而不必担心影响 production 命名空间中的内容。
|
||||
|
||||
<!-- Let's switch to the `production` namespace and show how resources in one namespace are hidden from the other. -->
|
||||
|
||||
让我们切换到 production 命名空间,展示一个命名空间中的资源如何对另一个命名空间不可见。
|
||||
<!--
|
||||
Let's switch to the `production` namespace and show how resources in one namespace are hidden from the other.
|
||||
-->
|
||||
让我们切换到 `production` 名字空间,展示一个名字空间中的资源如何对另一个名字空间不可见。
|
||||
|
||||
```shell
|
||||
kubectl config use-context prod
|
||||
```
|
||||
|
||||
<!-- The `production` namespace should be empty, and the following commands should return nothing. -->
|
||||
|
||||
`production` 命名空间应该是空的,下列命令应该返回的内容为空。
|
||||
<!--
|
||||
The `production` namespace should be empty, and the following commands should return nothing.
|
||||
-->
|
||||
`production` 名字空间应该是空的,下列命令应该返回的内容为空。
|
||||
|
||||
```shell
|
||||
kubectl get deployment
|
||||
kubectl get pods
|
||||
```
|
||||
|
||||
<!-- Production likes to run cattle, so let's create some cattle pods. -->
|
||||
|
||||
生产环境需要运行 cattle,让我们创建一些名为 cattle 的 pods。
|
||||
<!--
|
||||
Production likes to run cattle, so let's create some cattle pods.
|
||||
-->
|
||||
生产环境需要以放牛的方式运维,让我们创建一些名为 `cattle` 的 Pod。
|
||||
|
||||
```shell
|
||||
kubectl run cattle --image=k8s.gcr.io/serve_hostname --replicas=5
|
||||
kubectl create deployment cattle --image=k8s.gcr.io/serve_hostname
|
||||
kubectl scale deployment cattle --replicas=5
|
||||
|
||||
kubectl get deployment
|
||||
```
|
||||
|
||||
```
|
||||
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
|
||||
cattle 5 5 5 5 10s
|
||||
```
|
||||
|
||||
```shell
|
||||
kubectl get pods -l run=cattle
|
||||
```
|
||||
```
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
cattle-2263376956-41xy6 1/1 Running 0 34s
|
||||
cattle-2263376956-kw466 1/1 Running 0 34s
|
||||
|
@ -375,13 +420,14 @@ cattle-2263376956-p5p3i 1/1 Running 0 34s
|
|||
cattle-2263376956-sxpth 1/1 Running 0 34s
|
||||
```
|
||||
|
||||
<!-- At this point, it should be clear that the resources users create in one namespace are hidden from the other namespace. -->
|
||||
|
||||
此时,应该很清楚的展示了用户在一个命名空间中创建的资源对另一个命名空间是不可见的。
|
||||
|
||||
<!-- As the policy support in Kubernetes evolves, we will extend this scenario to show how you can provide different
|
||||
authorization rules for each namespace. -->
|
||||
|
||||
随着 Kubernetes 中的策略支持的发展,我们将扩展此场景,以展示如何为每个命名空间提供不同的授权规则。
|
||||
<!--
|
||||
At this point, it should be clear that the resources users create in one namespace are hidden from the other namespace.
|
||||
-->
|
||||
此时,应该已经可以清楚地看到,用户在一个名字空间中创建的资源对另一个名字空间是不可见的。
|
||||
|
||||
<!--
|
||||
As the policy support in Kubernetes evolves, we will extend this scenario to show how you can provide different
|
||||
authorization rules for each namespace.
|
||||
-->
|
||||
随着 Kubernetes 中的策略支持的发展,我们将扩展此场景,以展示如何为每个名字空间提供不同的授权规则。
|
||||
|
||||
|
|
|
@ -1,20 +1,14 @@
|
|||
---
|
||||
reviewers:
|
||||
- derekwaynecarr
|
||||
- vishh
|
||||
- timstclair
|
||||
title: 配置资源不足时的处理方式
|
||||
content_type: concept
|
||||
---
|
||||
<!--
|
||||
---
|
||||
reviewers:
|
||||
- derekwaynecarr
|
||||
- vishh
|
||||
- timstclair
|
||||
title: Configure Out Of Resource Handling
|
||||
content_type: concept
|
||||
---
|
||||
-->
|
||||
|
||||
<!-- overview -->
|
||||
|
@ -27,13 +21,11 @@ are low. This is especially important when dealing with incompressible
|
|||
compute resources, such as memory or disk space. If such resources are exhausted,
|
||||
nodes become unstable.
|
||||
-->
|
||||
本页介绍如何使用 `kubelet` 配置资源不足时的处理方式。
|
||||
|
||||
本页介绍了如何使用`kubelet`配置资源不足时的处理方式。
|
||||
|
||||
当可用计算资源较少时,`kubelet`需要保证节点稳定性。这在处理如内存和硬盘之类的不可压缩资源时尤为重要。如果任意一种资源耗尽,节点将会变得不稳定。
|
||||
|
||||
|
||||
|
||||
当可用计算资源较少时,`kubelet`需要保证节点稳定性。
|
||||
这在处理如内存和硬盘之类的不可压缩资源时尤为重要。
|
||||
如果任意一种资源耗尽,节点将会变得不稳定。
|
||||
|
||||
<!-- body -->
|
||||
|
||||
|
@ -47,11 +39,12 @@ a Pod, it terminates all of its containers and transitions its `PodPhase` to `Fa
|
|||
|
||||
If the evicted Pod is managed by a Deployment, the Deployment will create another Pod
|
||||
to be scheduled by Kubernetes.
|
||||
|
||||
-->
|
||||
## 驱逐策略
|
||||
|
||||
`kubelet` 能够主动监测和防止计算资源的全面短缺。在那种情况下,`kubelet`可以主动地结束一个或多个 pod 以回收短缺的资源。当 `kubelet` 结束一个 pod 时,它将终止 pod 中的所有容器,而 pod 的 `PodPhase` 将变为 `Failed`。
|
||||
`kubelet` 能够主动监测和防止计算资源的全面短缺。
|
||||
在那种情况下,`kubelet` 可以主动地结束一个或多个 Pod 以回收短缺的资源。
|
||||
当 `kubelet` 结束一个 Pod 时,它将终止 Pod 中的所有容器,而 Pod 的 `PodPhase` 将变为 `Failed`。
|
||||
|
||||
如果被驱逐的 Pod 由 Deployment 管理,这个 Deployment 会创建另一个 Pod 给 Kubernetes 来调度。
|
||||
|
||||
|
@ -61,7 +54,13 @@ to be scheduled by Kubernetes.
|
|||
The `kubelet` supports eviction decisions based on the signals described in the following
|
||||
table. The value of each signal is described in the Description column, which is based on
|
||||
the `kubelet` summary API.
|
||||
-->
|
||||
### 驱逐信号
|
||||
|
||||
`kubelet` 支持按照以下表格中描述的信号触发驱逐决定。
|
||||
每个信号的取值在描述列中给出定义,这些取值来自 `kubelet` 摘要 API。
|
||||
|
||||
<!--
|
||||
| Eviction Signal | Description |
|
||||
|----------------------------|-----------------------------------------------------------------------|
|
||||
| `memory.available` | `memory.available` := `node.status.capacity[memory]` - `node.stats.memory.workingSet` |
|
||||
|
@ -69,11 +68,23 @@ the `kubelet` summary API.
|
|||
| `nodefs.inodesFree` | `nodefs.inodesFree` := `node.stats.fs.inodesFree` |
|
||||
| `imagefs.available` | `imagefs.available` := `node.stats.runtime.imagefs.available` |
|
||||
| `imagefs.inodesFree` | `imagefs.inodesFree` := `node.stats.runtime.imagefs.inodesFree` |
|
||||
-->
|
||||
| 驱逐信号 | 描述 |
|
||||
|----------------------------|-----------------------------------------------------------------------|
|
||||
| `memory.available` | `memory.available` := `node.status.capacity[memory]` - `node.stats.memory.workingSet` |
|
||||
| `nodefs.available` | `nodefs.available` := `node.stats.fs.available` |
|
||||
| `nodefs.inodesFree` | `nodefs.inodesFree` := `node.stats.fs.inodesFree` |
|
||||
| `imagefs.available` | `imagefs.available` := `node.stats.runtime.imagefs.available` |
|
||||
| `imagefs.inodesFree` | `imagefs.inodesFree` := `node.stats.runtime.imagefs.inodesFree` |
|
||||
|
||||
<!--
|
||||
Each of the above signals supports either a literal or percentage based value.
|
||||
The percentage based value is calculated relative to the total capacity
|
||||
associated with each signal.
|
||||
-->
|
||||
上面的每个信号都支持字面值或百分比的值。基于百分比的值的计算与每个信号对应的总容量相关。
|
||||
|
||||
<!--
|
||||
The value for `memory.available` is derived from the cgroupfs instead of tools
|
||||
like `free -m`. This is important because `free -m` does not work in a
|
||||
container, and if users use the [node
|
||||
|
@ -85,13 +96,30 @@ reproduces the same set of steps that the `kubelet` performs to calculate
|
|||
`memory.available`. The `kubelet` excludes inactive_file (i.e. # of bytes of
|
||||
file-backed memory on inactive LRU list) from its calculation as it assumes that
|
||||
memory is reclaimable under pressure.
|
||||
-->
|
||||
`memory.available` 的值从 cgroupfs 获取,而不是通过类似 `free -m` 的工具。
|
||||
这很重要,因为 `free -m` 不能在容器中工作,并且如果用户使用了
|
||||
[节点可分配资源](/zh/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable)
|
||||
特性,资源不足的判定将同时在本地 cgroup 层次结构的终端用户 Pod 部分和根节点做出。
|
||||
这个[脚本](/zh/docs/tasks/administer-cluster/out-of-resource/memory-available.sh)
|
||||
复现了与 `kubelet` 计算 `memory.available` 相同的步骤。
|
||||
`kubelet` 将 `inactive_file`(即非活动 LRU 列表上基于文件后端的内存字节数)从计算中排除,
|
||||
因为它假设内存在出现压力时将被回收。
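下面是一个与上述脚本思路一致的简化示意,假设节点使用 cgroup v1 且内存 cgroup 挂载在默认路径 `/sys/fs/cgroup/memory` 下:

```shell
#!/bin/bash
# 示意:按照与 kubelet 相同的思路估算 memory.available(假设 cgroup v1、默认挂载路径)
memory_capacity_in_kb=$(grep MemTotal /proc/meminfo | awk '{print $2}')
memory_capacity_in_bytes=$((memory_capacity_in_kb * 1024))
memory_usage_in_bytes=$(cat /sys/fs/cgroup/memory/memory.usage_in_bytes)
memory_total_inactive_file=$(grep total_inactive_file /sys/fs/cgroup/memory/memory.stat | awk '{print $2}')

# 工作集 = 使用量 - inactive_file(kubelet 认为这部分内存在压力下可被回收)
memory_working_set=$((memory_usage_in_bytes - memory_total_inactive_file))
memory_available_in_bytes=$((memory_capacity_in_bytes - memory_working_set))

echo "memory.available: $((memory_available_in_bytes / 1048576)) Mi"
```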
|
||||
|
||||
|
||||
<!--
|
||||
`kubelet` supports only two filesystem partitions.
|
||||
|
||||
1. The `nodefs` filesystem that kubelet uses for volumes, daemon logs, etc.
|
||||
2. The `imagefs` filesystem that container runtimes uses for storing images and
|
||||
container writable layers.
|
||||
-->
|
||||
`kubelet` 只支持两种文件系统分区。
|
||||
|
||||
1. `nodefs` 文件系统,kubelet 将其用于卷和守护程序日志等。
|
||||
2. `imagefs` 文件系统,容器运行时用于保存镜像和容器可写层。
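如果想在节点上大致确认这两个分区,可以查看相应挂载点所在的文件系统。下面的命令只是示意,假设 `nodefs` 对应 kubelet 默认的根目录 `/var/lib/kubelet`,`imagefs` 对应 Docker 默认的数据目录 `/var/lib/docker`;实际路径取决于你的容器运行时和部署方式:

```shell
# 示意:查看 nodefs 与 imagefs 所在文件系统的可用空间及 inode 情况
df -h /var/lib/kubelet /var/lib/docker
df -i /var/lib/kubelet /var/lib/docker
```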
|
||||
|
||||
<!--
|
||||
`imagefs` is optional. `kubelet` auto-discovers these filesystems using
|
||||
cAdvisor. `kubelet` does not care about any other filesystems. Any other types
|
||||
of configurations are not currently supported by the kubelet. For example, it is
|
||||
|
@ -101,30 +129,13 @@ In future releases, the `kubelet` will deprecate the existing [garbage
|
|||
collection](/docs/concepts/cluster-administration/kubelet-garbage-collection/)
|
||||
support in favor of eviction in response to disk pressure.
|
||||
-->
|
||||
### 驱逐信号
|
||||
`imagefs` 可选。`kubelet` 使用 cAdvisor 自动发现这些文件系统。
|
||||
`kubelet` 不关心其它文件系统。当前不支持配置任何其它类型。
|
||||
例如,在专用文件系统中存储卷和日志是不可以的。
|
||||
|
||||
`kubelet` 支持按照以下表格中描述的信号触发驱逐决定。每个信号的值在 description 列描述,基于 `kubelet` 摘要 API。
|
||||
|
||||
| 驱逐信号 | 描述 |
|
||||
|----------------------------|-----------------------------------------------------------------------|
|
||||
| `memory.available` | `memory.available` := `node.status.capacity[memory]` - `node.stats.memory.workingSet` |
|
||||
| `nodefs.available` | `nodefs.available` := `node.stats.fs.available` |
|
||||
| `nodefs.inodesFree` | `nodefs.inodesFree` := `node.stats.fs.inodesFree` |
|
||||
| `imagefs.available` | `imagefs.available` := `node.stats.runtime.imagefs.available` |
|
||||
| `imagefs.inodesFree` | `imagefs.inodesFree` := `node.stats.runtime.imagefs.inodesFree` |
|
||||
|
||||
上面的每个信号都支持字面值或百分比的值。基于百分比的值的计算与每个信号对应的总容量相关。
|
||||
|
||||
`memory.available` 的值从 cgroupfs 获取,而不是通过类似 `free -m` 的工具。这很重要,因为 `free -m` 不能在容器中工作,并且如果用户使用了 [可分配节点](/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable)特性,资源不足的判定将同时在本地 cgroup 层次结构的终端用户 pod 部分和根节点做出。这个 [脚本](/docs/tasks/administer-cluster/out-of-resource/memory-available.sh)复现了与 `kubelet` 计算 `memory.available` 相同的步骤。`kubelet`将`inactive_file`(意即活动 LRU 列表上基于文件后端的内存字节数)从计算中排除,因为它假设内存在出现压力时将被回收。
|
||||
|
||||
`kubelet` 只支持两种文件系统分区。
|
||||
|
||||
1. `nodefs` 文件系统,kubelet 将其用于卷和守护程序日志等。
|
||||
2. `imagefs` 文件系统,容器运行时用于保存镜像和容器可写层。
|
||||
|
||||
`imagefs`可选。`kubelet`使用 cAdvisor 自动发现这些文件系统。`kubelet`不关心其它文件系统。当前不支持配置任何其它类型。例如,在专用`文件系统`中存储卷和日志是不可以的。
|
||||
|
||||
在将来的发布中,`kubelet`将废除当前存在的 [垃圾回收](/docs/concepts/cluster-administration/kubelet-garbage-collection/) 机制,这种机制目前支持将驱逐操作作为对磁盘压力的响应。
|
||||
在将来的发布中,`kubelet`将废除当前存在的
|
||||
[垃圾回收](/zh/docs/concepts/cluster-administration/kubelet-garbage-collection/)
|
||||
机制,转而以驱逐操作来响应磁盘压力。
|
||||
|
||||
<!--
|
||||
### Eviction Thresholds
|
||||
|
@ -157,9 +168,11 @@ either `memory.available<10%` or `memory.available<1Gi`. You cannot use both.
|
|||
|
||||
* 合法的 `eviction-signal` 标志如上所示。
|
||||
* `operator` 是所需的关系运算符,例如 `<`。
|
||||
* `quantity` 是驱逐阈值值标志,例如 `1Gi`。合法的标志必须匹配 Kubernetes 使用的数量表示。驱逐阈值也可以使用 `%` 标记表示百分比。
|
||||
* `quantity` 是驱逐阈值数值,例如 `1Gi`。合法的数值必须与 Kubernetes 所使用的数量表示方式一致。
|
||||
驱逐阈值也可以使用 `%` 标记表示百分比。
|
||||
|
||||
举例说明,如果一个节点有 `10Gi` 内存,希望在可用内存下降到 `1Gi` 以下时引起驱逐操作,则驱逐阈值可以使用下面任意一种方式指定(但不是两者同时)。
|
||||
举例说明,如果一个节点有 `10Gi` 内存,希望在可用内存下降到 `1Gi` 以下时引起驱逐操作,
|
||||
则驱逐阈值可以使用下面任意一种方式指定(但不是两者同时)。
|
||||
|
||||
* `memory.available<10%`
|
||||
* `memory.available<1Gi`
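这类阈值是作为 `kubelet` 的启动参数传入的。下面给出一个示意(标志名是 kubelet 实际支持的,数值仅为举例),两种写法同样只能取其一:

```shell
# 示意:使用字面值设定硬驱逐阈值
kubelet --eviction-hard=memory.available<1Gi
# 或者使用百分比
kubelet --eviction-hard=memory.available<10%
```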
|
||||
|
@ -172,14 +185,25 @@ administrator-specified grace period. No action is taken by the `kubelet`
|
|||
to reclaim resources associated with the eviction signal until that grace
|
||||
period has been exceeded. If no grace period is provided, the `kubelet`
|
||||
returns an error on startup.
|
||||
-->
|
||||
#### 软驱逐阈值
|
||||
|
||||
软驱逐阈值使用一对由驱逐阈值和管理员必须指定的宽限期组成的配置对。在超过宽限期前,`kubelet`不会采取任何动作回收和驱逐信号关联的资源。如果没有提供宽限期,`kubelet`启动时将报错。
|
||||
|
||||
<!--
|
||||
In addition, if a soft eviction threshold has been met, an operator can
|
||||
specify a maximum allowed Pod termination grace period to use when evicting
|
||||
pods from the node. If specified, the `kubelet` uses the lesser value among
|
||||
the `pod.Spec.TerminationGracePeriodSeconds` and the max allowed grace period.
|
||||
If not specified, the `kubelet` kills Pods immediately with no graceful
|
||||
termination.
|
||||
-->
|
||||
此外,如果达到了软驱逐阈值,操作员还可以指定从节点驱逐 Pod 时所允许的最大 Pod 终止宽限期。
如果指定了该值,`kubelet` 将使用 `pod.Spec.TerminationGracePeriodSeconds`
和这个最大允许宽限期二者中较小的一个。
如果没有指定,`kubelet` 将立即终止 Pod,而不会优雅结束它们。
|
||||
|
||||
<!--
|
||||
To configure soft eviction thresholds, the following flags are supported:
|
||||
|
||||
* `eviction-soft` describes a set of eviction thresholds (e.g. `memory.available<1.5Gi`) that if met over a
|
||||
|
@ -189,12 +213,6 @@ correspond to how long a soft eviction threshold must hold before triggering a P
|
|||
* `eviction-max-pod-grace-period` describes the maximum allowed grace period (in seconds) to use when terminating
|
||||
pods in response to a soft eviction threshold being met.
|
||||
-->
|
||||
#### 软驱逐阈值
|
||||
|
||||
软驱逐阈值使用一对由驱逐阈值和管理员必须指定的宽限期组成的配置对。在超过宽限期前,`kubelet`不会采取任何动作回收和驱逐信号关联的资源。如果没有提供宽限期,`kubelet`启动时将报错。
|
||||
|
||||
此外,如果达到了软驱逐阈值,操作员可以指定从节点驱逐 pod 时,在宽限期内允许结束的 pod 的最大数量。如果指定了 `pod.Spec.TerminationGracePeriodSeconds` 值,`kubelet`将使用它和宽限期二者中较小的一个。如果没有指定,`kubelet`将立即终止 pod,而不会优雅结束它们。
|
||||
|
||||
软驱逐阈值的配置支持下列标记:
|
||||
|
||||
* `eviction-soft` 描述了一组驱逐阈值(例如 `memory.available<1.5Gi`),如果条件的持续时间超过对应的宽限期,就会触发 Pod 驱逐;组合用法可参考下面的示意。
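下面是组合使用这些软驱逐相关标志的一个示意(标志名是 kubelet 实际支持的,数值仅为举例):

```shell
# 示意:可用内存低于 1.5Gi 并持续 1 分 30 秒后触发软驱逐,
# 驱逐时最多给 Pod 60 秒的终止宽限期
kubelet --eviction-soft=memory.available<1.5Gi \
  --eviction-soft-grace-period=memory.available=1m30s \
  --eviction-max-pod-grace-period=60
```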
|
||||
|
@ -223,7 +241,8 @@ The `kubelet` has the following default hard eviction threshold:
|
|||
-->
|
||||
#### 硬驱逐阈值
|
||||
|
||||
硬驱逐阈值没有宽限期,一旦察觉,`kubelet`将立即采取行动回收关联的短缺资源。如果满足硬驱逐阈值,`kubelet`将立即结束 pod 而不是优雅终止。
|
||||
硬驱逐阈值没有宽限期,一旦察觉,`kubelet`将立即采取行动回收关联的短缺资源。
|
||||
如果满足硬驱逐阈值,`kubelet`将立即结束 pod 而不是优雅终止。
|
||||
|
||||
硬驱逐阈值的配置支持下列标记:
|
||||
|
||||
|
@ -257,7 +276,14 @@ The `kubelet` maps one or more eviction signals to a corresponding node conditio
|
|||
If a hard eviction threshold has been met, or a soft eviction threshold has been met
|
||||
independent of its associated grace period, the `kubelet` reports a condition that
|
||||
reflects the node is under pressure.
|
||||
-->
|
||||
### 节点状态
|
||||
|
||||
`kubelet` 会将一个或多个驱逐信号映射到对应的节点状态。
|
||||
|
||||
如果满足了硬驱逐阈值,或者满足了软驱逐阈值(无论其关联的宽限期是否已过),`kubelet` 都将报告节点处于压力之下的状况。
|
||||
|
||||
<!--
|
||||
The following node conditions are defined that correspond to the specified eviction signal.
|
||||
|
||||
| Node Condition | Eviction Signal | Description |
|
||||
|
@ -268,18 +294,12 @@ The following node conditions are defined that correspond to the specified evict
|
|||
The `kubelet` continues to report node status updates at the frequency specified by
|
||||
`--node-status-update-frequency` which defaults to `10s`.
|
||||
-->
|
||||
### 节点状态
|
||||
|
||||
`kubelet` 会将一个或多个驱逐信号映射到对应的节点状态。
|
||||
|
||||
如果满足硬驱逐阈值,或者满足独立于其关联宽限期的软驱逐阈值时,`kubelet`将报告节点处于压力下的状态。
|
||||
|
||||
下列节点状态根据相应的驱逐信号定义。
|
||||
|
||||
| 节点状态 | 驱逐信号 | 描述 |
|
||||
|-------------------------|-------------------------------|--------------------------------------------|
|
||||
| `MemoryPressure` | `memory.available` | Available memory on the node has satisfied an eviction threshold |
|
||||
| `DiskPressure` | `nodefs.available`, `nodefs.inodesFree`, `imagefs.available`, or `imagefs.inodesFree` | Available disk space and inodes on either the node's root filesystem or image filesystem has satisfied an eviction threshold |
|
||||
| `MemoryPressure` | `memory.available` | 节点上可用内存量达到逐出阈值 |
|
||||
| `DiskPressure` | `nodefs.available`, `nodefs.inodesFree`, `imagefs.available`, 或 `imagefs.inodesFree` | 节点或者节点的根文件系统或镜像文件系统上可用磁盘空间和 i 节点个数达到逐出阈值 |
|
||||
|
||||
`kubelet` 将以 `--node-status-update-frequency` 指定的频率连续报告节点状态更新,其默认值为 `10s`。
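要查看节点当前报告的这些状况,可以使用 kubectl。下面是一个示意命令,jsonpath 表达式仅为举例:

```shell
# 示意:列出每个节点的 MemoryPressure 与 DiskPressure 状况
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="MemoryPressure")].status}{"\t"}{.status.conditions[?(@.type=="DiskPressure")].status}{"\n"}{end}'
```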
|
||||
|
||||
|
@ -303,7 +323,8 @@ condition back to `false`.
|
|||
-->
|
||||
### 节点状态振荡
|
||||
|
||||
如果节点在软驱逐阈值的上下振荡,但没有超过关联的宽限期时,将引起对应节点的状态持续在 true 和 false 间跳变,并导致不好的调度结果。
|
||||
如果节点在软驱逐阈值的上下振荡,但没有超过关联的宽限期时,将引起对应节点的状态持续在
|
||||
true 和 false 间跳变,并导致不好的调度结果。
|
||||
|
||||
为了防止这种振荡,可以定义下面的标志,用于控制 `kubelet` 从压力状态中退出之前必须等待的时间。
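这里所说的标志是 `--eviction-pressure-transition-period`。下面是一个使用示意,5 分钟这个取值仅为举例:

```shell
# 示意:驱逐条件消失后,节点需保持 5 分钟不再满足条件,才会清除对应的压力状况
kubelet --eviction-pressure-transition-period=5m
```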
|
||||
|
||||
|
@ -326,7 +347,8 @@ machine has a dedicated `imagefs` configured for the container runtime.
|
|||
|
||||
如果满足驱逐阈值并超过了宽限期,`kubelet`将启动回收压力资源的过程,直到它发现低于设定阈值的信号为止。
|
||||
|
||||
`kubelet`将尝试在驱逐终端用户 pod 前回收节点层级资源。发现磁盘压力时,如果节点针对容器运行时配置有独占的 `imagefs`,`kubelet`回收节点层级资源的方式将会不同。
|
||||
`kubelet` 将尝试在驱逐终端用户 pod 前回收节点层级资源。
|
||||
发现磁盘压力时,如果节点针对容器运行时配置有独占的 `imagefs`,`kubelet`回收节点层级资源的方式将会不同。
|
||||
|
||||
<!--
|
||||
#### With `imagefs`
|
||||
|
@ -342,13 +364,13 @@ If `nodefs` filesystem has met eviction thresholds, `kubelet` frees up disk spac
|
|||
1. Delete dead Pods and their containers
|
||||
2. Delete all unused images
|
||||
-->
|
||||
#### 使用 `Imagefs`
|
||||
#### 使用 `imagefs`
|
||||
|
||||
如果 `nodefs` 文件系统满足驱逐阈值,`kubelet` 通过删除已死亡的 Pod 及其容器来释放磁盘空间。
|
||||
|
||||
如果 `imagefs` 文件系统满足驱逐阈值,`kubelet`通过删除所有未使用的镜像来释放磁盘空间。
|
||||
|
||||
#### 未使用 `Imagefs`
|
||||
#### 未使用 `imagefs`
|
||||
|
||||
如果 `nodefs` 满足驱逐阈值,`kubelet`将以下面的顺序释放磁盘空间:
|
||||
|
||||
|
@ -363,6 +385,16 @@ If the `kubelet` is unable to reclaim sufficient resource on the node, `kubelet`
|
|||
The `kubelet` ranks Pods for eviction first by whether or not their usage of the starved resource exceeds requests,
|
||||
then by [Priority](/docs/concepts/configuration/pod-priority-preemption/), and then by the consumption of the starved compute resource relative to the Pods' scheduling requests.
|
||||
|
||||
-->
|
||||
### 驱逐最终用户的 pod
|
||||
|
||||
如果 `kubelet` 在节点上无法回收足够的资源,`kubelet`将开始驱逐 pod。
|
||||
|
||||
`kubelet` 在确定驱逐顺序时,首先考虑 Pod 对短缺资源的使用量是否超过其请求值,
其次考虑 Pod 的[优先级](/zh/docs/concepts/configuration/pod-priority-preemption/),
最后考虑 Pod 相对于其调度请求对短缺计算资源的消耗量。
|
||||
|
||||
<!--
|
||||
As a result, `kubelet` ranks and evicts Pods in the following order:
|
||||
|
||||
* `BestEffort` or `Burstable` Pods whose usage of a starved resource exceeds its request.
|
||||
|
@ -376,25 +408,29 @@ and `journald`) is consuming more resources than were reserved via `system-reser
|
|||
less than requests remaining, then the node must choose to evict such a Pod in order to
|
||||
preserve node stability and to limit the impact of the unexpected consumption to other Pods.
|
||||
In this case, it will choose to evict pods of Lowest Priority first.
|
||||
-->
|
||||
`kubelet` 按以下顺序对要驱逐的 pod 排名:
|
||||
|
||||
* 对短缺资源的使用量超过其请求值的 `BestEffort` 或 `Burstable` Pod。这类 Pod 先按优先级排序,再按使用量超出请求值的程度排序。
|
||||
* `Guaranteed` pod 和 `Burstable` pod,其使用率低于请求,最后被驱逐。
|
||||
`Guaranteed` Pod 只有为所有的容器指定了要求和限制并且它们相等时才能得到保证。
|
||||
由于另一个 Pod 的资源消耗,这些 Pod 保证永远不会被驱逐。
|
||||
如果系统守护进程(例如 `kubelet`、`docker`、和 `journald`)消耗的资源多于通过
|
||||
`system-reserved` 或 `kube-reserved` 分配保留的资源,并且该节点只有 `Guaranteed` 或
|
||||
`Burstable` Pod(其用量仍低于请求值),那么节点必须选择驱逐这样的 Pod,
以保持节点的稳定性,并限制意外消耗对其他 Pod 的影响。
|
||||
在这种情况下,它将首先驱逐优先级最低的 pod。
|
||||
|
||||
<!--
|
||||
If necessary, `kubelet` evicts Pods one at a time to reclaim disk when `DiskPressure`
|
||||
is encountered. If the `kubelet` is responding to `inode` starvation, it reclaims
|
||||
`inodes` by evicting Pods with the lowest quality of service first. If the `kubelet`
|
||||
is responding to lack of available disk, it ranks Pods within a quality of service
|
||||
that consumes the largest amount of disk and kill those first.
|
||||
-->
|
||||
### 驱逐最终用户的 pod
|
||||
|
||||
如果 `kubelet` 在节点上无法回收足够的资源,`kubelet`将开始驱逐 pod。
|
||||
|
||||
`kubelet` 首先根据他们对短缺资源的使用是否超过请求来排除 pod 的驱逐行为,然后通过 [优先级](/docs/concepts/configuration/pod-priority-preemption/),然后通过相对于 pod 的调度请求消耗急需的计算资源。
|
||||
|
||||
`kubelet` 按以下顺序对要驱逐的 pod 排名:
|
||||
|
||||
* `BestEffort` 或 `Burstable`,其对短缺资源的使用超过了其请求,此类 pod 按优先级排序,然后使用高于请求。
|
||||
* `Guaranteed` pod 和 `Burstable` pod,其使用率低于请求,最后被驱逐。`Guaranteed`pod 只有为所有的容器指定了要求和限制并且它们相等时才能得到保证。由于另一个 pod 的资源消耗,这些 pod 保证永远不会被驱逐。如果系统守护进程(例如 `kubelet`、`docker`、和 `journald`)消耗的资源多于通过 `system-reserved` 或 `kube-reserved` 分配保留的资源,并且该节点只有 `Guaranteed` 或 `Burstable` pod 使用少于剩余的请求,然后节点必须选择驱逐这样的 pod 以保持节点的稳定性并限制意外消耗对其他 pod 的影响。在这种情况下,它将首先驱逐优先级最低的 pod。
|
||||
|
||||
必要时,`kubelet`会在遇到 `DiskPressure` 时驱逐一个 pod 来回收磁盘空间。如果 `kubelet` 响应 `inode` 短缺,它会首先驱逐服务质量最低的 pod 来回收 `inodes`。如果 `kubelet` 响应缺少可用磁盘,它会将 pod 排在服务质量范围内,该服务会消耗大量的磁盘并首先结束这些磁盘。
|
||||
必要时,`kubelet`会在遇到 `DiskPressure` 时逐个驱逐 Pod 来回收磁盘空间。
|
||||
如果 `kubelet` 响应 `inode` 短缺,它会首先驱逐服务质量最低的 Pod 来回收 `inodes`。
|
||||
如果 `kubelet` 是因为可用磁盘空间不足而触发驱逐,它会在同一服务质量等级内按磁盘用量对 Pod 排序,并首先结束用量最大的 Pod。
|
||||
|
||||
<!--
|
||||
#### With `imagefs`
|
||||
|
@ -425,31 +461,20 @@ If `nodefs` is triggering evictions, `kubelet` sorts Pods based on their total d
|
|||
In certain scenarios, eviction of Pods could result in reclamation of small amount of resources. This can result in
|
||||
`kubelet` hitting eviction thresholds in repeated successions. In addition to that, eviction of resources like `disk`,
|
||||
is time consuming.
|
||||
|
||||
To mitigate these issues, `kubelet` can have a per-resource `minimum-reclaim`. Whenever `kubelet` observes
|
||||
resource pressure, `kubelet` attempts to reclaim at least `minimum-reclaim` amount of resource below
|
||||
the configured eviction threshold.
|
||||
|
||||
For example, with the following configuration:
|
||||
|
||||
```
|
||||
--eviction-hard=memory.available<500Mi,nodefs.available<1Gi,imagefs.available<100Gi
|
||||
--eviction-minimum-reclaim="memory.available=0Mi,nodefs.available=500Mi,imagefs.available=2Gi"`
|
||||
```
|
||||
|
||||
If an eviction threshold is triggered for `memory.available`, the `kubelet` works to ensure
|
||||
that `memory.available` is at least `500Mi`. For `nodefs.available`, the `kubelet` works
|
||||
to ensure that `nodefs.available` is at least `1.5Gi`, and for `imagefs.available` it
|
||||
works to ensure that `imagefs.available` is at least `102Gi` before no longer reporting pressure
|
||||
on their associated resources.
|
||||
|
||||
The default `eviction-minimum-reclaim` is `0` for all resources.
|
||||
-->
|
||||
### 最小驱逐回收
|
||||
|
||||
在某些场景下,驱逐 Pod 可能只回收了少量资源,这会导致 `kubelet` 反复触及驱逐阈值。除此之外,对 `disk` 这类资源的驱逐是比较耗时的。
|
||||
|
||||
为了减少这类问题,`kubelet`可以为每个资源配置一个 `minimum-reclaim`。当 `kubelet` 发现资源压力时,`kubelet`将尝试至少回收驱逐阈值之下 `minimum-reclaim` 数量的资源。
|
||||
<!--
|
||||
To mitigate these issues, `kubelet` can have a per-resource `minimum-reclaim`. Whenever `kubelet` observes
|
||||
resource pressure, `kubelet` attempts to reclaim at least `minimum-reclaim` amount of resource below
|
||||
the configured eviction threshold.
|
||||
|
||||
For example, with the following configuration:
|
||||
-->
|
||||
为了减少这类问题,`kubelet`可以为每个资源配置一个 `minimum-reclaim`。
|
||||
当 `kubelet` 发现资源压力时,`kubelet`将尝试至少回收驱逐阈值之下 `minimum-reclaim` 数量的资源。
|
||||
|
||||
例如使用下面的配置:
|
||||
|
||||
|
@ -458,7 +483,19 @@ The default `eviction-minimum-reclaim` is `0` for all resources.
|
|||
--eviction-minimum-reclaim="memory.available=0Mi,nodefs.available=500Mi,imagefs.available=2Gi"`
|
||||
```
|
||||
|
||||
如果 `memory.available` 驱逐阈值被触发,`kubelet`将保证 `memory.available` 至少为 `500Mi`。对于 `nodefs.available`,`kubelet`将保证 `nodefs.available` 至少为 `1.5Gi`。对于 `imagefs.available`,`kubelet`将保证 `imagefs.available` 至少为 `102Gi`,直到不再有相关资源报告压力为止。
|
||||
<!--
|
||||
If an eviction threshold is triggered for `memory.available`, the `kubelet` works to ensure
|
||||
that `memory.available` is at least `500Mi`. For `nodefs.available`, the `kubelet` works
|
||||
to ensure that `nodefs.available` is at least `1.5Gi`, and for `imagefs.available` it
|
||||
works to ensure that `imagefs.available` is at least `102Gi` before no longer reporting pressure
|
||||
on their associated resources.
|
||||
|
||||
The default `eviction-minimum-reclaim` is `0` for all resources.
|
||||
-->
|
||||
如果 `memory.available` 驱逐阈值被触发,`kubelet` 将保证 `memory.available` 至少为 `500Mi`。
|
||||
对于 `nodefs.available`,`kubelet` 将保证 `nodefs.available` 至少为 `1.5Gi`。
|
||||
对于 `imagefs.available`,`kubelet` 将保证 `imagefs.available` 至少为 `102Gi`,
|
||||
直到不再有相关资源报告压力为止。
|
||||
|
||||
所有资源的默认 `eviction-minimum-reclaim` 值为 `0`。
|
||||
|
||||
|
@ -480,8 +517,8 @@ pods on the node.
|
|||
|
||||
| 节点状态 | 调度器行为 |
|
||||
| ---------------- | ------------------------------------------------ |
|
||||
| `MemoryPressure` | No new `BestEffort` Pods are scheduled to the node. |
|
||||
| `DiskPressure` | No new Pods are scheduled to the node. |
|
||||
| `MemoryPressure` | 新的 `BestEffort` Pod 不会被调度到该节点 |
|
||||
| `DiskPressure` | 没有新的 Pod 会被调度到该节点 |
|
||||
|
||||
<!--
|
||||
## Node OOM Behavior
|
||||
|
@ -490,13 +527,29 @@ If the node experiences a system OOM (out of memory) event prior to the `kubelet
|
|||
the node depends on the [oom_killer](https://lwn.net/Articles/391222/) to respond.
|
||||
|
||||
The `kubelet` sets a `oom_score_adj` value for each container based on the quality of service for the Pod.
|
||||
-->
|
||||
## 节点 OOM 行为
|
||||
|
||||
如果节点在 `kubelet` 回收内存之前经历了系统 OOM(内存不足)事件,它将基于
|
||||
[oom-killer](https://lwn.net/Articles/391222/) 做出响应。
|
||||
|
||||
`kubelet` 基于 Pod 的服务质量(QoS)为每个容器设置一个 `oom_score_adj` 值。
|
||||
|
||||
<!--
|
||||
| Quality of Service | oom_score_adj |
|
||||
|----------------------------|-----------------------------------------------------------------------|
|
||||
| `Guaranteed` | -998 |
|
||||
| `BestEffort` | 1000 |
|
||||
| `Burstable` | min(max(2, 1000 - (1000 * memoryRequestBytes) / machineMemoryCapacityBytes), 999) |
|
||||
-->
|
||||
|
||||
| 服务质量 | oom_score_adj |
|
||||
|----------------------------|-----------------------------------------------------------------------|
|
||||
| `Guaranteed` | -998 |
|
||||
| `BestEffort` | 1000 |
|
||||
| `Burstable` | min(max(2, 1000 - (1000 * memoryRequestBytes) / machineMemoryCapacityBytes), 999) |
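以 `Burstable` 的公式为例(数值纯属假设):若某容器请求了 4Gi 内存,而节点总内存为 16Gi,则 `oom_score_adj` = min(max(2, 1000 - 1000×4/16), 999) = 750。下面的小片段演示这一算术,并示意如何在节点上查看某个已知进程的实际取值:

```shell
# 示意:验证上面的 Burstable 计算结果
echo $(( 1000 - (1000 * 4) / 16 ))   # 输出 750

# 示意:查看某个容器主进程的 oom_score_adj;PID 仅为占位,需替换为实际值
PID=12345
cat /proc/${PID}/oom_score_adj
```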
|
||||
|
||||
<!--
|
||||
If the `kubelet` is unable to reclaim memory prior to a node experiencing system OOM, the `oom_killer` calculates
|
||||
an `oom_score` based on the percentage of memory it's using on the node, and then add the `oom_score_adj` to get an
|
||||
effective `oom_score` for the container, and then kills the container with the highest score.
|
||||
|
@ -507,23 +560,14 @@ to reclaim memory.
|
|||
|
||||
Unlike Pod eviction, if a Pod container is OOM killed, it may be restarted by the `kubelet` based on its `RestartPolicy`.
|
||||
-->
|
||||
## 节点 OOM 行为
|
||||
如果 `kubelet` 在节点经历系统 OOM 之前无法回收内存,`oom_killer` 将基于它在节点上
|
||||
使用的内存百分比算出一个 `oom_score`,并加上 `oom_score_adj` 得到容器的有效
|
||||
`oom_score`,然后结束得分最高的容器。
|
||||
|
||||
如果节点在 `kubelet` 回收内存之前经历了系统 OOM(内存不足)事件,它将基于 [oom-killer](https://lwn.net/Articles/391222/) 做出响应。
|
||||
预期的行为是:服务质量最低、且相对于调度请求消耗内存最多的容器会第一个被结束,以回收内存。
|
||||
|
||||
`kubelet` 基于 pod 的 service 质量为每个容器设置一个 `oom_score_adj` 值。
|
||||
|
||||
| Service 质量 | oom_score_adj |
|
||||
|----------------------------|-----------------------------------------------------------------------|
|
||||
| `Guaranteed` | -998 |
|
||||
| `BestEffort` | 1000 |
|
||||
| `Burstable` | min(max(2, 1000 - (1000 * memoryRequestBytes) / machineMemoryCapacityBytes), 999) |
|
||||
|
||||
如果 `kubelet` 在节点经历系统 OOM 之前无法回收内存,`oom_killer`将基于它在节点上使用的内存百分比算出一个 `oom_score`,并加上 `oom_score_adj` 得到容器的有效 `oom_score`,然后结束得分最高的容器。
|
||||
|
||||
预期的行为应该是拥有最低 service 质量并消耗和调度请求相关内存量最多的容器第一个被结束,以回收内存。
|
||||
|
||||
和 pod 驱逐不同,如果一个 pod 的容器是被 OOM 结束的,基于其 `RestartPolicy`,它可能会被 `kubelet` 重新启动。
|
||||
和 pod 驱逐不同,如果一个 Pod 的容器是被 OOM 结束的,基于其 `RestartPolicy`,
|
||||
它可能会被 `kubelet` 重新启动。
|
||||
|
||||
<!--
|
||||
## Best Practices
|
||||
|
@ -539,19 +583,6 @@ Consider the following scenario:
|
|||
* Operator wants to evict Pods at 95% memory utilization to reduce incidence of system OOM.
|
||||
|
||||
To facilitate this scenario, the `kubelet` would be launched as follows:
|
||||
|
||||
```
|
||||
--eviction-hard=memory.available<500Mi
|
||||
--system-reserved=memory=1.5Gi
|
||||
```
|
||||
|
||||
Implicit in this configuration is the understanding that "System reserved" should include the amount of memory
|
||||
covered by the eviction threshold.
|
||||
|
||||
To reach that capacity, either some Pod is using more than its request, or the system is using more than `1.5Gi - 500Mi = 1Gi`.
|
||||
|
||||
This configuration ensures that the scheduler does not place Pods on a node that immediately induce memory pressure
|
||||
and trigger eviction assuming those Pods use less than their configured request.
|
||||
-->
|
||||
## 最佳实践
|
||||
|
||||
|
@ -572,6 +603,15 @@ and trigger eviction assuming those Pods use less than their configured request.
|
|||
--system-reserved=memory=1.5Gi
|
||||
```
|
||||
|
||||
<!--
|
||||
Implicit in this configuration is the understanding that "System reserved" should include the amount of memory
|
||||
covered by the eviction threshold.
|
||||
|
||||
To reach that capacity, either some Pod is using more than its request, or the system is using more than `1.5Gi - 500Mi = 1Gi`.
|
||||
|
||||
This configuration ensures that the scheduler does not place Pods on a node that immediately induce memory pressure
|
||||
and trigger eviction assuming those Pods use less than their configured request.
|
||||
-->
|
||||
这个配置隐含的含义是:"系统保留"(System reserved)应当包含驱逐阈值所覆盖的内存量。
|
||||
|
||||
要达到这个容量,要么某些 pod 使用了超过它们请求的资源,要么系统使用的内存超过 `1.5Gi - 500Mi = 1Gi`。
|
||||
|
@ -595,11 +635,13 @@ for eviction. Instead `DaemonSet` should ideally launch `Guaranteed` Pods.
|
|||
-->
|
||||
### DaemonSet
|
||||
|
||||
我们永远都不希望 `kubelet` 驱逐一个从 `DaemonSet` 派生的 pod,因为这个 pod 将立即被重建并调度回相同的节点。
|
||||
我们永远都不希望 `kubelet` 驱逐一个从 `DaemonSet` 派生的 Pod,因为这个 Pod 将立即被重建并调度回相同的节点。
|
||||
|
||||
目前,`kubelet`没有办法区分一个 pod 是由 `DaemonSet` 还是其他对象创建。如果/当这个信息可用时,`kubelet`可能会预先将这些 pod 从提供给驱逐策略的候选集合中过滤掉。
|
||||
目前,`kubelet`没有办法区分一个 Pod 是由 `DaemonSet` 还是其他对象创建。
|
||||
如果/当这个信息可用时,`kubelet` 可能会预先将这些 pod 从提供给驱逐策略的候选集合中过滤掉。
|
||||
|
||||
总之,强烈推荐 `DaemonSet` 不要创建 `BestEffort` 的 pod,防止其被识别为驱逐的候选 pod。相反,理想情况下 `DaemonSet` 应该启动 `Guaranteed` 的 pod。
|
||||
总之,强烈推荐 `DaemonSet` 不要创建 `BestEffort` 的 Pod,防止其被识别为驱逐的候选 Pod。
|
||||
相反,理想情况下 `DaemonSet` 应该启动 `Guaranteed` 的 pod。
|
||||
|
||||
<!--
|
||||
## Deprecation of existing feature flags to reclaim disk
|
||||
|
@ -608,7 +650,14 @@ for eviction. Instead `DaemonSet` should ideally launch `Guaranteed` Pods.
|
|||
|
||||
As disk based eviction matures, the following `kubelet` flags are marked for deprecation
|
||||
in favor of the simpler configuration supported around eviction.
|
||||
-->
|
||||
## 现有的回收磁盘特性标签已被弃用
|
||||
|
||||
`kubelet` 已经按需求清空了磁盘空间以保证节点稳定性。
|
||||
|
||||
当磁盘驱逐成熟时,下面的 `kubelet` 标志将被标记为废弃的,以简化支持驱逐的配置。
|
||||
|
||||
<!--
|
||||
| Existing Flag | New Flag |
|
||||
| ------------- | -------- |
|
||||
| `--image-gc-high-threshold` | `--eviction-hard` or `eviction-soft` |
|
||||
|
@ -619,11 +668,6 @@ in favor of the simpler configuration supported around eviction.
|
|||
| `--low-diskspace-threshold-mb` | `--eviction-hard` or `eviction-soft` |
|
||||
| `--outofdisk-transition-frequency` | `--eviction-pressure-transition-period` |
|
||||
-->
|
||||
## 弃用现有特性标签以回收磁盘
|
||||
|
||||
`kubelet` 已经按需求清空了磁盘空间以保证节点稳定性。
|
||||
|
||||
当磁盘驱逐成熟时,下面的 `kubelet` 标志将被标记为废弃的,以简化支持驱逐的配置。
|
||||
|
||||
| 现有标签 | 新标签 |
|
||||
| ------------- | -------- |
|
||||
|
@ -639,26 +683,28 @@ in favor of the simpler configuration supported around eviction.
|
|||
## Known issues
|
||||
|
||||
The following sections describe known issues related to out of resource handling.
|
||||
-->
|
||||
## 已知问题
|
||||
|
||||
以下各节描述了与资源不足处理有关的已知问题。
|
||||
|
||||
<!--
|
||||
### kubelet may not observe memory pressure right away
|
||||
|
||||
The `kubelet` currently polls `cAdvisor` to collect memory usage stats at a regular interval. If memory usage
|
||||
increases within that window rapidly, the `kubelet` may not observe `MemoryPressure` fast enough, and the `OOMKiller`
|
||||
will still be invoked. We intend to integrate with the `memcg` notification API in a future release to reduce this
|
||||
latency, and instead have the kernel tell us when a threshold has been crossed immediately.
|
||||
|
||||
If you are not trying to achieve extreme utilization, but a sensible measure of overcommit, a viable workaround for
|
||||
this issue is to set eviction thresholds at approximately 75% capacity. This increases the ability of this feature
|
||||
to prevent system OOMs, and promote eviction of workloads so cluster state can rebalance.
|
||||
-->
|
||||
## 已知问题
|
||||
|
||||
以下各节描述了与资源不足处理有关的已知问题。
|
||||
|
||||
### kubelet 可能无法立即发现内存压力
|
||||
|
||||
`kubelet` 当前以固定的时间间隔轮询 `cAdvisor` 来收集内存使用数据。如果内存使用在该时间窗口内迅速增长,`kubelet` 可能无法足够快地发现 `MemoryPressure`,而 `OOMKiller` 仍将被调用。我们准备在将来的发行版本中通过集成 `memcg` 通知 API 来减小这种延迟:一旦超过阈值,内核将立即通知我们。
|
||||
|
||||
<!--
|
||||
If you are not trying to achieve extreme utilization, but a sensible measure of overcommit, a viable workaround for
|
||||
this issue is to set eviction thresholds at approximately 75% capacity. This increases the ability of this feature
|
||||
to prevent system OOMs, and promote eviction of workloads so cluster state can rebalance.
|
||||
-->
|
||||
如果你追求的不是极致的资源利用率,而是合理程度的超量使用,一个可行的变通办法是将驱逐阈值设置为大约 75% 的容量。这样既能增强此特性防止系统 OOM 的能力,也能促使工作负载被驱逐,使集群状态得以重新平衡。
|
||||
|
||||
<!--
|
||||
|
@ -669,5 +715,7 @@ the ability to get root container stats on an on-demand basis [(https://github.c
|
|||
-->
|
||||
### kubelet 可能会驱逐超过需求数量的 pod
|
||||
|
||||
由于状态采集的时间差,驱逐操作可能驱逐比所需的更多的 pod。将来可通过添加从根容器获取所需状态的能力 [https://github.com/google/cadvisor/issues/1247](https://github.com/google/cadvisor/issues/1247) 来减缓这种状况。
|
||||
由于状态采集存在时间差,驱逐操作可能驱逐多于实际所需数量的 Pod。将来可以通过增加按需获取根容器统计信息的能力
|
||||
[https://github.com/google/cadvisor/issues/1247](https://github.com/google/cadvisor/issues/1247)
|
||||
来减缓这种状况。
|
||||
|
||||
|
|
|
@ -4,10 +4,8 @@ content_type: task
|
|||
---
|
||||
|
||||
<!--
|
||||
---
|
||||
title: Configure Quotas for API Objects
|
||||
content_type: task
|
||||
---
|
||||
-->
|
||||
|
||||
<!-- overview -->
|
||||
|
@ -20,22 +18,14 @@ You specify quotas in a
|
|||
[ResourceQuota](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#resourcequota-v1-core)
|
||||
object.
|
||||
-->
|
||||
|
||||
本文讨论如何为 API 对象配置配额,包括 PersistentVolumeClaims 和 Services。
|
||||
本文讨论如何为 API 对象配置配额,包括 PersistentVolumeClaim 和 Service。
|
||||
配额限制了可以在命名空间中创建的特定类型对象的数量。
|
||||
您可以在 [ResourceQuota](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#resourcequota-v1-core) 对象中指定配额。
|
||||
|
||||
|
||||
|
||||
你可以在 [ResourceQuota](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#resourcequota-v1-core) 对象中指定配额。
|
||||
|
||||
## {{% heading "prerequisites" %}}
|
||||
|
||||
|
||||
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
|
||||
|
||||
|
||||
|
||||
|
||||
<!-- steps -->
|
||||
|
||||
<!--
|
||||
|
@ -44,7 +34,6 @@ object.
|
|||
Create a namespace so that the resources you create in this exercise are
|
||||
isolated from the rest of your cluster.
|
||||
-->
|
||||
|
||||
## 创建命名空间
|
||||
|
||||
创建一个命名空间以便本例中创建的资源和集群中的其余部分相隔离。
|
||||
|
@ -58,7 +47,6 @@ kubectl create namespace quota-object-example
|
|||
|
||||
Here is the configuration file for a ResourceQuota object:
|
||||
-->
|
||||
|
||||
## 创建 ResourceQuota
|
||||
|
||||
下面是一个 ResourceQuota 对象的配置文件:
|
||||
|
@ -68,17 +56,15 @@ Here is the configuration file for a ResourceQuota object:
|
|||
<!--
|
||||
Create the ResourceQuota:
|
||||
-->
|
||||
|
||||
创建 ResourceQuota
|
||||
创建 ResourceQuota:
|
||||
|
||||
```shell
|
||||
kubectl create -f https://k8s.io/examples/admin/resource/quota-objects.yaml --namespace=quota-object-example
|
||||
kubectl apply -f https://k8s.io/examples/admin/resource/quota-objects.yaml --namespace=quota-object-example
|
||||
```
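作为一种替代做法,也可以用 `kubectl create quota` 以命令式方式创建内容等价的配额对象。下面只是一个示意,数值与上面清单中的一致,`object-quota-demo-cli` 这个名字只是为了避免与清单创建的对象冲突而取的示例名:

```shell
# 示意:限制该命名空间中最多 1 个 PVC、2 个 LoadBalancer 服务、0 个 NodePort 服务
kubectl create quota object-quota-demo-cli \
  --hard=persistentvolumeclaims=1,services.loadbalancers=2,services.nodeports=0 \
  --namespace=quota-object-example
```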
|
||||
|
||||
<!--
|
||||
View detailed information about the ResourceQuota:
|
||||
-->
|
||||
|
||||
查看 ResourceQuota 的详细信息:
|
||||
|
||||
```shell
|
||||
|
@ -90,9 +76,8 @@ The output shows that in the quota-object-example namespace, there can be at mos
|
|||
one PersistentVolumeClaim, at most two Services of type LoadBalancer, and no Services
|
||||
of type NodePort.
|
||||
-->
|
||||
|
||||
输出结果表明在 quota-object-example 命名空间中,至多只能有一个 PersistentVolumeClaim,最多两个 LoadBalancer 类型的服务,不能有 NodePort 类型的服务。
|
||||
|
||||
输出结果表明在 quota-object-example 命名空间中,至多只能有一个 PersistentVolumeClaim,
|
||||
最多两个 LoadBalancer 类型的服务,不能有 NodePort 类型的服务。
|
||||
|
||||
```yaml
|
||||
status:
|
||||
|
@ -111,7 +96,6 @@ status:
|
|||
|
||||
Here is the configuration file for a PersistentVolumeClaim object:
|
||||
-->
|
||||
|
||||
## 创建 PersistentVolumeClaim
|
||||
|
||||
下面是一个 PersistentVolumeClaim 对象的配置文件:
|
||||
|
@ -121,17 +105,15 @@ Here is the configuration file for a PersistentVolumeClaim object:
|
|||
<!--
|
||||
Create the PersistentVolumeClaim:
|
||||
-->
|
||||
|
||||
创建 PersistentVolumeClaim:
|
||||
|
||||
```shell
|
||||
kubectl create -f https://k8s.io/examples/admin/resource/quota-objects-pvc.yaml --namespace=quota-object-example
|
||||
kubectl apply -f https://k8s.io/examples/admin/resource/quota-objects-pvc.yaml --namespace=quota-object-example
|
||||
```
|
||||
|
||||
<!--
|
||||
Verify that the PersistentVolumeClaim was created:
|
||||
-->
|
||||
|
||||
确认已创建完 PersistentVolumeClaim:
|
||||
|
||||
```shell
|
||||
|
@ -141,10 +123,9 @@ kubectl get persistentvolumeclaims --namespace=quota-object-example
|
|||
<!--
|
||||
The output shows that the PersistentVolumeClaim exists and has status Pending:
|
||||
-->
|
||||
|
||||
输出信息表明 PersistentVolumeClaim 存在并且处于 Pending 状态:
|
||||
|
||||
```shell
|
||||
```
|
||||
NAME STATUS
|
||||
pvc-quota-demo Pending
|
||||
```
|
||||
|
@ -154,7 +135,6 @@ pvc-quota-demo Pending
|
|||
|
||||
Here is the configuration file for a second PersistentVolumeClaim:
|
||||
-->
|
||||
|
||||
## 尝试创建第二个 PersistentVolumeClaim
|
||||
|
||||
下面是第二个 PersistentVolumeClaim 的配置文件:
|
||||
|
@ -164,7 +144,6 @@ Here is the configuration file for a second PersistentVolumeClaim:
|
|||
<!--
|
||||
Attempt to create the second PersistentVolumeClaim:
|
||||
-->
|
||||
|
||||
尝试创建第二个 PersistentVolumeClaim:
|
||||
|
||||
```shell
|
||||
|
@ -174,23 +153,21 @@ kubectl create -f https://k8s.io/examples/admin/resource/quota-objects-pvc-2.yam
|
|||
The output shows that the second PersistentVolumeClaim was not created,
|
||||
because it would have exceeded the quota for the namespace.
|
||||
-->
|
||||
|
||||
输出信息表明第二个 PersistentVolumeClaim 没有创建成功,因为这会超出命名空间的配额。
|
||||
|
||||
|
||||
```
|
||||
persistentvolumeclaims "pvc-quota-demo-2" is forbidden:
|
||||
exceeded quota: object-quota-demo, requested: persistentvolumeclaims=1,
|
||||
used: persistentvolumeclaims=1, limited: persistentvolumeclaims=1
|
||||
```
|
||||
|
||||
<!--
|
||||
## Notes
|
||||
|
||||
These are the strings used to identify API resources that can be constrained
|
||||
by quotas:
|
||||
-->
|
||||
|
||||
## 注意事项
|
||||
## 说明
|
||||
|
||||
下面这些字符串可被用来标识那些能被配额限制的 API 资源:
|
||||
|
||||
|
@ -212,20 +189,16 @@ by quotas:
|
|||
|
||||
Delete your namespace:
|
||||
-->
|
||||
|
||||
## 清理
|
||||
|
||||
删除您的命名空间:
|
||||
删除你的命名空间:
|
||||
|
||||
```shell
|
||||
kubectl delete namespace quota-object-example
|
||||
```
|
||||
|
||||
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
|
||||
<!--
|
||||
### For cluster administrators
|
||||
|
||||
|
@ -244,17 +217,12 @@ kubectl delete namespace quota-object-example
|
|||
|
||||
### 集群管理员参考
|
||||
|
||||
* [为命名空间配置默认的内存请求和限制](/docs/tasks/administer-cluster/memory-default-namespace/)
|
||||
|
||||
* [为命名空间配置默认的 CPU 请求和限制](/docs/tasks/administer-cluster/cpu-default-namespace/)
|
||||
|
||||
* [为命名空间配置内存的最小和最大限制](/docs/tasks/administer-cluster/memory-constraint-namespace/)
|
||||
|
||||
* [为命名空间配置 CPU 的最小和最大限制](/docs/tasks/administer-cluster/cpu-constraint-namespace/)
|
||||
|
||||
* [为命名空间配置 CPU 和内存配额](/docs/tasks/administer-cluster/quota-memory-cpu-namespace/)
|
||||
|
||||
* [为命名空间配置 Pod 配额](/docs/tasks/administer-cluster/quota-pod-namespace/)
|
||||
* [为命名空间配置默认的内存请求和限制](/zh/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/)
|
||||
* [为命名空间配置默认的 CPU 请求和限制](/zh/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/)
|
||||
* [为命名空间配置内存的最小和最大限制](/zh/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/)
|
||||
* [为命名空间配置 CPU 的最小和最大限制](/zh/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/)
|
||||
* [为命名空间配置 CPU 和内存配额](/zh/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/)
|
||||
* [为命名空间配置 Pod 配额](/zh/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/)
|
||||
|
||||
<!--
|
||||
### For app developers
|
||||
|
@ -268,9 +236,7 @@ kubectl delete namespace quota-object-example
|
|||
|
||||
### 应用开发者参考
|
||||
|
||||
* [为容器和 Pod 分配内存资源](/docs/tasks/configure-pod-container/assign-memory-resource/)
|
||||
* [为容器和 Pod 分配 CPU 资源](/docs/tasks/configure-pod-container/assign-cpu-resource/)
|
||||
* [为 Pod 配置服务质量](/docs/tasks/configure-pod-container/quality-service-pod/)
|
||||
|
||||
|
||||
* [为容器和 Pod 分配内存资源](/zh/docs/tasks/configure-pod-container/assign-memory-resource/)
|
||||
* [为容器和 Pod 分配 CPU 资源](/zh/docs/tasks/configure-pod-container/assign-cpu-resource/)
|
||||
* [为 Pod 配置服务质量](/zh/docs/tasks/configure-pod-container/quality-service-pod/)
|
||||
|
||||
|
|
Loading…
Reference in New Issue