Merge pull request #25057 from tengqm/zh-sync-6

[zh] Sync changes from English site (6)
This commit is contained in:
Kubernetes Prow Robot 2020-11-24 19:48:38 -08:00 committed by GitHub
commit 0468384247
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
9 changed files with 618 additions and 357 deletions

View File

@ -1,4 +1,4 @@
---
title: "控制器"
title: "工作负载资源"
weight: 20
---

View File

@ -201,20 +201,20 @@ Follow the steps given below to create the above Deployment:
3. 要查看 Deployment 上线状态,运行 `kubectl rollout status deployment.v1.apps/nginx-deployment`
输出类似于:
```
Waiting for rollout to finish: 2 out of 3 new replicas have been updated...
deployment "nginx-deployment" successfully rolled out
```
<!--
4. Run the `kubectl get deployments` again a few seconds later. The output is similar to this:
-->
4. 几秒钟后再次运行 `kubectl get deployments`。输出类似于:
```
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3         3         3            3           18s
```
<!--
Notice that the Deployment has created all three replicas, and all replicas are up-to-date (they contain the latest Pod template) and available.
@ -1504,7 +1504,7 @@ deployment.apps/nginx-deployment patched
Once the deadline has been exceeded, the Deployment controller adds a DeploymentCondition with the following
attributes to the Deployment's `.status.conditions`:
-->
超过截止时间后, Deployment 控制器将添加具有以下属性的 DeploymentCondition 到
超过截止时间后Deployment 控制器将添加具有以下属性的 DeploymentCondition 到
Deployment 的 `.status.conditions` 中:
* Type=Progressing
@ -1514,7 +1514,9 @@ Deployment 的 `.status.conditions` 中:
<!--
See the [Kubernetes API conventions](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#typical-status-properties) for more information on status conditions.
-->
参考 [Kubernetes API 约定](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#typical-status-properties) 获取更多状态状况相关的信息。
参考
[Kubernetes API 约定](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#typical-status-properties)
获取更多状态状况相关的信息。
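下面给出一个示意性的命令(假定集群中存在前文示例中的 nginx-deployment用来查看 Deployment 的状态状况:
```shell
# 示意:查看 .status.conditions 中的状况列表
kubectl get deployment nginx-deployment -o jsonpath='{.status.conditions}'

# 或者通过 describe 查看其中的 Conditions 部分
kubectl describe deployment nginx-deployment
```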
<!--
Kubernetes takes no action on a stalled Deployment other than to report a status condition with

View File

@ -1,13 +1,13 @@
---
title: 垃圾收集
content_type: concept
weight: 70
weight: 60
---
<!--
title: Garbage Collection
content_type: concept
weight: 70
weight: 60
-->
<!-- overview -->

View File

@ -5,7 +5,7 @@ feature:
title: 批量执行
description: >
除了服务之外Kubernetes 还可以管理你的批处理和 CI 工作负载,在期望时替换掉失效的容器。
weight: 60
weight: 50
---
<!--
reviewers:
@ -17,7 +17,7 @@ feature:
title: Batch execution
description: >
In addition to services, Kubernetes can manage your batch and CI workloads, replacing containers that fail, if desired.
weight: 60
weight: 50
-->
<!-- overview -->

View File

@ -1,8 +1,17 @@
---
title: ReplicaSet
content_type: concept
weight: 10
weight: 20
---
<!--
reviewers:
- Kashomon
- bprashanth
- madhusudancs
title: ReplicaSet
content_type: concept
weight: 20
-->
<!-- overview -->
@ -18,30 +27,25 @@ ReplicaSet 的目的是维护一组在任何时候都处于运行状态的 Pod
<!--
## How a ReplicaSet works
A ReplicaSet is defined with fields, including a selector that specifies how
to identify Pods it can acquire, a number of replicas indicating how many Pods
it should be maintaining, and a pod template specifying the data of new Pods
it should create to meet the number of replicas criteria. A ReplicaSet then
fulfills its purpose by creating and deleting Pods as needed to reach the
desired number. When a ReplicaSet needs to create new Pods, it uses its Pod
A ReplicaSet is defined with fields, including a selector that specifies how to identify Pods it can acquire, a number
of replicas indicating how many Pods it should be maintaining, and a pod template specifying the data of new Pods
it should create to meet the number of replicas criteria. A ReplicaSet then fulfills its purpose by creating
and deleting Pods as needed to reach the desired number. When a ReplicaSet needs to create new Pods, it uses its Pod
template.
-->
## ReplicaSet 的工作原理 {#how-a-replicaset-works}
ReplicaSet 是通过一组字段来定义的,包括一个用来识别可获得的 Pod
的集合的选择算符,一个用来标明应该维护的副本个数的数值,一个用来指定应该创建新 Pod
以满足副本个数条件时要使用的 Pod 模板等等。每个 ReplicaSet 都通过根据需要创建和
删除 Pod 以使得副本个数达到期望值,进而实现其存在价值。当 ReplicaSet 需要创建
新的 Pod 时,会使用所提供的 Pod 模板。
的集合的选择算符、一个用来标明应该维护的副本个数的数值、一个用来指定应该创建新 Pod
以满足副本个数条件时要使用的 Pod 模板等等。
每个 ReplicaSet 都通过根据需要创建和删除 Pod 以使得副本个数达到期望值,
进而实现其存在价值。当 ReplicaSet 需要创建新的 Pod 时,会使用所提供的 Pod 模板。
<!--
A ReplicaSet is linked to its Pods via the Pods'
[metadata.ownerReferences](/docs/concepts/workloads/controllers/garbage-collection/#owners-and-dependents)
field, which specifies what resource the current object is owned by. All Pods
acquired by a ReplicaSet have their owning ReplicaSet's identifying
information within their ownerReferences field. It's through this link that
the ReplicaSet knows of the state of the Pods it is maintaining and plans
accordingly.
A ReplicaSet is linked to its Pods via the Pods' [metadata.ownerReferences](/docs/concepts/workloads/controllers/garbage-collection/#owners-and-dependents)
field, which specifies what resource the current object is owned by. All Pods acquired by a ReplicaSet have their owning
ReplicaSet's identifying information within their ownerReferences field. It's through this link that the ReplicaSet
knows of the state of the Pods it is maintaining and plans accordingly.
-->
ReplicaSet 通过 Pod 上的
[metadata.ownerReferences](/zh/docs/concepts/workloads/controllers/garbage-collection/#owners-and-dependents)
@ -51,41 +55,14 @@ ReplicaSet 所获得的 Pod 都在其 ownerReferences 字段中包含了属主 R
并据此计划其操作行为。
<!--
A ReplicaSet identifies new Pods to acquire by using its selector. If there is
a Pod that has no OwnerReference or the OwnerReference is not a {{<
glossary_tooltip term_id="controller" >}} and it matches a ReplicaSet's
selector, it will be immediately acquired by said ReplicaSet.
A ReplicaSet identifies new Pods to acquire by using its selector. If there is a Pod that has no OwnerReference or the
OwnerReference is not a {{< glossary_tooltip term_id="controller" >}} and it matches a ReplicaSet's selector, it will be immediately acquired by said ReplicaSet.
-->
ReplicaSet 使用其选择算符来辨识要获得的 Pod 集合。如果某个 Pod 没有
OwnerReference 或者其 OwnerReference 不是一个
{{< glossary_tooltip text="控制器" term_id="controller" >}},且其匹配到
某 ReplicaSet 的选择算符,则该 Pod 立即被此 ReplicaSet 获得。
<!--
## How to use a ReplicaSet
Most [`kubectl`](/docs/user-guide/kubectl/) commands that support
Replication Controllers also support ReplicaSets. One exception is the
[`rolling-update`](/docs/reference/generated/kubectl/kubectl-commands#rolling-update) command. If
you want the rolling update functionality please consider using Deployments
instead. Also, the
[`rolling-update`](/docs/reference/generated/kubectl/kubectl-commands#rolling-update) command is
imperative whereas Deployments are declarative, so we recommend using Deployments
through the [`rollout`](/docs/reference/generated/kubectl/kubectl-commands#rollout) command.
While ReplicaSets can be used independently, today it's mainly used by
[Deployments](/docs/concepts/workloads/controllers/deployment/) as a mechanism to orchestrate pod
creation, deletion and updates. When you use Deployments you don't have to worry
about managing the ReplicaSets that they create. Deployments own and manage
their ReplicaSets.
-->
## 怎样使用 ReplicaSet {#how-to-use-a-replicaset}
大多数支持 Replication Controllers 的 [`kubectl`](/zh/docs/reference/kubectl/kubectl/) 命令也支持 ReplicaSet。但 [`rolling-update`](/docs/reference/generated/kubectl/kubectl-commands#rolling-update) 命令是个例外。如果您想要滚动更新功能,请考虑使用 Deployment。此外[`rolling-update`](/docs/reference/generated/kubectl/kubectl-commands#rolling-update) 命令是命令式的,而 Deployment 是声明式的,因此我们建议通过 [`rollout`](/docs/reference/generated/kubectl/kubectl-commands#rollout) 命令使用 Deployment。
虽然 ReplicaSets 可以独立使用,但今天它主要被[Deployments](/zh/docs/concepts/workloads/controllers/deployment/) 用作协调 Pod 创建、删除和更新的机制。
当您使用 Deployment 时,您不必担心还要管理它们创建的 ReplicaSet。Deployment 会拥有并管理它们的 ReplicaSet。
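下面是一个示意性的例子(假定集群中已有名为 nginx-deployment 的 Deployment展示如何通过 `rollout` 命令以声明式方式查看和回滚更新:
```shell
# 示意:查看 Deployment 的上线状态
kubectl rollout status deployment/nginx-deployment

# 查看上线历史,并回滚到上一个版本
kubectl rollout history deployment/nginx-deployment
kubectl rollout undo deployment/nginx-deployment
```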
<!--
## When to use a ReplicaSet
@ -98,14 +75,16 @@ you require custom update orchestration or don't require updates at all.
This actually means that you may never need to manipulate ReplicaSet objects:
use a Deployment instead, and define your application in the spec section.
-->
## 什么时候使用 ReplicaSet
## 何时使用 ReplicaSet
ReplicaSet 确保任何时间都有指定数量的 Pod 副本在运行。
然而Deployment 是一个更高级的概念,它管理 ReplicaSet并向 Pod 提供声明式的更新以及许多其他有用的功能。
因此,我们建议使用 Deployment 而不是直接使用 ReplicaSet除非您需要自定义更新业务流程或根本不需要更新。
然而Deployment 是一个更高级的概念,它管理 ReplicaSet并向 Pod
提供声明式的更新以及许多其他有用的功能。
因此,我们建议使用 Deployment 而不是直接使用 ReplicaSet除非
你需要自定义更新业务流程或根本不需要更新。
这实际上意味着,您可能永远不需要操作 ReplicaSet 对象:而是使用 Deployment并在 spec 部分定义您的应用。
这实际上意味着,你可能永远不需要操作 ReplicaSet 对象:而是使用
Deployment并在 spec 部分定义你的应用。
<!--
## Example
@ -118,151 +97,349 @@ ReplicaSet 确保任何时间都有指定数量的 Pod 副本在运行。
Saving this manifest into `frontend.yaml` and submitting it to a Kubernetes cluster should
create the defined ReplicaSet and the pods that it manages.
-->
将此清单保存到 `frontend.yaml` 中,并将其提交到 Kubernetes 集群,
应该就能创建 yaml 文件所定义的 ReplicaSet 及其管理的 Pod。
将此清单保存到 `frontend.yaml` 中,并将其提交到 Kubernetes 集群,应该就能创建 yaml 文件所定义的 ReplicaSet 及其管理的 Pod。
```shell
$ kubectl create -f http://k8s.io/examples/controllers/frontend.yaml
replicaset.apps/frontend created
$ kubectl describe rs/frontend
kubectl apply -f https://kubernetes.io/examples/controllers/frontend.yaml
```
<!--
You can then get the current ReplicaSets deployed:
-->
你可以看到当前被部署的 ReplicaSet
```shell
kubectl get rs
```
<!--
And see the frontend one you created:
-->
并看到你所创建的前端:
```
NAME DESIRED CURRENT READY AGE
frontend 3 3 3 6s
```
<!--
You can also check on the state of the ReplicaSet:
-->
你也可以查看 ReplicaSet 的状态:
```shell
kubectl describe rs/frontend
```
<!--
And you will see output similar to:
-->
你会看到类似如下的输出:
```
Name: frontend
Namespace: default
Selector: tier=frontend,tier in (frontend)
Selector: tier=frontend
Labels: app=guestbook
tier=frontend
Annotations: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"apps/v1","kind":"ReplicaSet","metadata":{"annotations":{},"labels":{"app":"guestbook","tier":"frontend"},"name":"frontend",...
Replicas: 3 current / 3 desired
Pods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
Labels: app=guestbook
tier=frontend
Labels: tier=frontend
Containers:
php-redis:
Image: gcr.io/google_samples/gb-frontend:v3
Port: 80/TCP
Requests:
cpu: 100m
memory: 100Mi
Environment:
GET_HOSTS_FROM: dns
Port: <none>
Host Port: <none>
Environment: <none>
Mounts: <none>
Volumes: <none>
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
1m 1m 1 {replicaset-controller } Normal SuccessfulCreate Created pod: frontend-qhloh
1m 1m 1 {replicaset-controller } Normal SuccessfulCreate Created pod: frontend-dnjpy
1m 1m 1 {replicaset-controller } Normal SuccessfulCreate Created pod: frontend-9si5l
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
frontend-9si5l 1/1 Running 0 1m
frontend-dnjpy 1/1 Running 0 1m
frontend-qhloh 1/1 Running 0 1m
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulCreate 117s replicaset-controller Created pod: frontend-wtsmm
Normal SuccessfulCreate 116s replicaset-controller Created pod: frontend-b2zdv
Normal SuccessfulCreate 116s replicaset-controller Created pod: frontend-vcmts
```
<!--
And lastly you can check for the Pods brought up:
-->
最后可以查看启动了的 Pods
```shell
kubectl get pods
```
<!--
You should see Pod information similar to:
-->
你会看到类似如下的 Pod 信息:
```
NAME READY STATUS RESTARTS AGE
frontend-b2zdv 1/1 Running 0 6m36s
frontend-vcmts 1/1 Running 0 6m36s
frontend-wtsmm 1/1 Running 0 6m36s
```
<!--
You can also verify that the owner reference of these pods is set to the frontend ReplicaSet.
To do this, get the yaml of one of the Pods running:
-->
你也可以查看 Pods 的属主引用被设置为前端的 ReplicaSet。
要实现这点,可取回运行中的 Pods 之一的 YAML
```shell
kubectl get pods frontend-b2zdv -o yaml
```
<!--
The output will look similar to this, with the frontend ReplicaSet's info set in the metadata's ownerReferences field:
-->
输出将类似这样frontend ReplicaSet 的信息被设置在 metadata 的
`ownerReferences` 字段中:
```yaml
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: "2020-02-12T07:06:16Z"
generateName: frontend-
labels:
tier: frontend
name: frontend-b2zdv
namespace: default
ownerReferences:
- apiVersion: apps/v1
blockOwnerDeletion: true
controller: true
kind: ReplicaSet
name: frontend
uid: f391f6db-bb9b-4c09-ae74-6a1f77f3d5cf
...
```
<!--
## Non-Template Pod acquisitions
While you can create bare Pods with no problems, it is strongly recommended to make sure that the bare Pods do not have
labels which match the selector of one of your ReplicaSets. The reason for this is because a ReplicaSet is not limited
to owning Pods specified by its template - it can acquire other Pods in the manner specified in the previous sections.
-->
## 非模板 Pod 的获得
<!--
While you can create bare Pods with no problems, it is strongly recommended to
make sure that the bare Pods do not have labels which match the selector of
one of your ReplicaSets. The reason for this is because a ReplicaSet is not
limited to owning Pods specified by its template - it can acquire other Pods
in the manner specified in the previous sections.
Take the previous frontend ReplicaSet example, and the Pods specified in the
following manifest:
-->
尽管你完全可以直接创建裸的 Pods强烈建议你确保这些裸的 Pods 并不包含可能与你
的某个 ReplicaSet 的选择算符相匹配的标签。原因在于 ReplicaSet 并不仅限于拥有
在其模板中设置的 Pods它还可以像前面小节中所描述的那样获得其他 Pods。
{{< codenew file="pods/pod-rs.yaml" >}}
<!--
As those Pods do not have a Controller (or any object) as their owner reference and match the selector of the frontend
ReplicaSet, they will immediately be acquired by it.
Suppose you create the Pods after the frontend ReplicaSet has been deployed and has set up its initial Pod replicas to
fulfill its replica count requirement:
-->
由于这些 Pod 没有控制器Controller或其他对象作为其属主引用并且
其标签与 frontend ReplicaSet 的选择算符匹配,它们会立即被该 ReplicaSet
获取。
假定你在 frontend ReplicaSet 已经被部署之后创建 Pods并且你已经在 ReplicaSet
中设置了其初始的 Pod 副本数以满足其副本计数需要:
```shell
kubectl apply -f https://kubernetes.io/examples/pods/pod-rs.yaml
```
<!--
The new Pods will be acquired by the ReplicaSet, and then immediately terminated as the ReplicaSet would be over
its desired count.
Fetching the Pods:
-->
新的 Pods 会被该 ReplicaSet 获取,并立即被 ReplicaSet 终止,因为
它们的存在会使得 ReplicaSet 中 Pod 个数超出其期望值。
取回 Pods
```shell
kubectl get pods
```
<!--
The output shows that the new Pods are either already terminated, or in the process of being terminated:
-->
输出显示新的 Pods 或者已经被终止,或者处于终止过程中:
```shell
NAME READY STATUS RESTARTS AGE
frontend-b2zdv 1/1 Running 0 10m
frontend-vcmts 1/1 Running 0 10m
frontend-wtsmm 1/1 Running 0 10m
pod1 0/1 Terminating 0 1s
pod2 0/1 Terminating 0 1s
```
<!--
If you create the Pods first:
-->
如果你先行创建 Pods
```shell
kubectl apply -f https://kubernetes.io/examples/pods/pod-rs.yaml
```
<!--
And then create the ReplicaSet however:
-->
之后再创建 ReplicaSet
```shell
kubectl apply -f https://kubernetes.io/examples/controllers/frontend.yaml
```
<!--
You shall see that the ReplicaSet has acquired the Pods and has only created new ones according to its spec until the
number of its new Pods and the original matches its desired count. As fetching the Pods:
-->
你会看到 ReplicaSet 已经获得了该 Pods并仅根据其规约创建新的 Pods直到
新的 Pods 和原来的 Pods 的总数达到其预期个数。
这时取回 Pods
```shell
kubectl get pods
```
<!--
Will reveal in its output:
-->
将会生成下面的输出:
```
NAME READY STATUS RESTARTS AGE
frontend-hmmj2 1/1 Running 0 9s
pod1 1/1 Running 0 36s
pod2 1/1 Running 0 36s
```
采用这种方式,一个 ReplicaSet 中可以包含异质的 Pods 集合。
<!--
## Writing a ReplicaSet Spec
As with all other Kubernetes API objects, a ReplicaSet needs the `apiVersion`, `kind`, and `metadata` fields. For
general information about working with manifests, see [object management using kubectl](/docs/concepts/overview/object-management-kubectl/overview/).
As with all other Kubernetes API objects, a ReplicaSet needs the `apiVersion`, `kind`, and `metadata` fields.
For ReplicaSets, the kind is always just ReplicaSet.
In Kubernetes 1.9 the API version `apps/v1` on the ReplicaSet kind is the current version and is enabled by default. The API version `apps/v1beta2` is deprecated.
Refer to the first lines of the `frontend.yaml` example for guidance.
The name of a ReplicaSet object must be a valid
[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
A ReplicaSet also needs a [`.spec` section](https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status).
-->
## 编写 ReplicaSet 的 spec
## 编写 ReplicaSet Spec
与所有其他 Kubernetes API 对象一样ReplicaSet 也需要 `apiVersion`、`kind`、和 `metadata` 字段。
对于 ReplicaSets 而言,其 kind 始终是 ReplicaSet。
在 Kubernetes 1.9 中ReplicaSet 上的 API 版本 `apps/v1` 是其当前版本,且被
默认启用。API 版本 `apps/v1beta2` 已被废弃。
参考 `frontend.yaml` 示例的第一行。
与所有其他 Kubernetes API 对象一样ReplicaSet 也需要 `apiVersion`、`kind`、和 `metadata` 字段。有关使用清单的一般信息,请参见 [使用 kubectl 管理对象](/zh/docs/concepts/overview/working-with-objects/object-management/)。
ReplicaSet 对象的名称必须是合法的
[DNS 子域名](/zh/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。
ReplicaSet 也需要 [`.spec`](https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status) 部分。
ReplicaSet 也需要 [`.spec`](https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status)
部分。
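下面是一个示意性的最小 ReplicaSet 清单骨架(字段取值参照前文的 frontend 示例,仅作演示),展示 `apiVersion`、`kind`、`metadata` 和 `.spec` 字段的位置:
```shell
# 示意:通过标准输入提交一个最小的 ReplicaSet 清单
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend          # 必须是合法的 DNS 子域名
  labels:
    app: guestbook
    tier: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      tier: frontend
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google_samples/gb-frontend:v3
EOF
```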
<!--
### Pod Template
The `.spec.template` is the only required field of the `.spec`. The `.spec.template` is a
[pod template](/docs/concepts/workloads/pods/pod-overview/#pod-templates). It has exactly the same schema as a
[pod](/docs/concepts/workloads/pods/pod/), except that it is nested and does not have an `apiVersion` or `kind`.
The `.spec.template` is a [pod template](/docs/concepts/workloads/pods/#pod-templates) which is also
required to have labels in place. In our `frontend.yaml` example we had one label: `tier: frontend`.
Be careful not to overlap with the selectors of other controllers, lest they try to adopt this Pod.
In addition to required fields of a pod, a pod template in a ReplicaSet must specify appropriate
labels and an appropriate restart policy.
For labels, make sure to not overlap with other controllers. For more information, see [pod selector](#pod-selector).
For [restart policy](/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy), the only allowed value for `.spec.template.spec.restartPolicy` is `Always`, which is the default.
For local container restarts, ReplicaSet delegates to an agent on the node,
for example the [Kubelet](/docs/admin/kubelet/) or Docker.
For the template's [restart policy](/docs/concepts/workloads/Pods/pod-lifecycle/#restart-policy) field,
`.spec.template.spec.restartPolicy`, the only allowed value is `Always`, which is the default.
-->
### Pod 模版
`.spec.template``.spec` 唯一需要的字段。`.spec.template` 是 [Pod 模版](/zh/docs/concepts/workloads/pods/#pod-templates)。它和 [Pod](/zh/docs/concepts/workloads/pods/) 的语法几乎完全一样,只是它是嵌套的,并且没有 `apiVersion``kind`
`.spec.template` 是一个[Pod 模版](/zh/docs/concepts/workloads/pods/#pod-templates)
要求设置标签。在 `frontend.yaml` 示例中,我们指定了标签 `tier: frontend`
注意不要将标签与其他控制器的选择算符重叠,否则那些控制器会尝试收养此 Pod。
除了所需的 Pod 字段之外ReplicaSet 中的 Pod 模板必须指定适当的标签和适当的重启策略。
对于标签,请确保不要与其他控制器重叠。更多信息请参考 [Pod 选择器](#pod-selector)。
对于[重启策略](/zh/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy)`.spec.template.spec.restartPolicy` 唯一允许的取值是 `Always`,这也是默认值。
对于本地容器的重新启动ReplicaSet 委托给节点上的代理(例如 [Kubelet](/zh/docs/reference/command-line-tools-reference/kubelet/) 或 Docker去执行。
对于模板的[重启策略](/zh/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy)
字段,`.spec.template.spec.restartPolicy`,唯一允许的取值是 `Always`,这也是默认值。
<!--
### Pod Selector
The `.spec.selector` field is a [label selector](/docs/concepts/overview/working-with-objects/labels/). A ReplicaSet
manages all the pods with labels that match the selector. It does not distinguish
between pods that it created or deleted and pods that another person or process created or
deleted. This allows the ReplicaSet to be replaced without affecting the running pods.
The `.spec.selector` field is a [label selector](/docs/concepts/overview/working-with-objects/labels/). As discussed
[earlier](#how-a-replicaset-works) these are the labels used to identify potential Pods to acquire. In our
`frontend.yaml` example, the selector was:
The `.spec.template.metadata.labels` must match the `.spec.selector`, or it will
```yaml
matchLabels:
tier: frontend
```
In the ReplicaSet, `.spec.template.metadata.labels` must match `spec.selector`, or it will
be rejected by the API.
In Kubernetes 1.9 the API version `apps/v1` on the ReplicaSet kind is the current version and is enabled by default. The API version `apps/v1beta2` is deprecated.
-->
### Pod 选择算符 {#pod-selector}
### Pod 选择器
`.spec.selector` 字段是一个[标签选择算符](/zh/docs/concepts/overview/working-with-objects/labels/)。
如前文中[所讨论的](#how-a-replicaset-works),这些是用来标识要被获取的 Pods
的标签。在前面的 `frontend.yaml` 示例中,选择算符为:
`.spec.selector` 字段是[标签选择器](/zh/docs/concepts/overview/working-with-objects/labels/)。ReplicaSet 管理所有标签与标签选择器匹配的 Pod。它不区分自己创建或删除的 Pod 和其他人或进程创建或删除的 Pod。这允许在不影响运行中的 Pod 的情况下替换 ReplicaSet。
```yaml
matchLabels:
tier: frontend
```
`.spec.template.metadata.labels` 必须匹配 `.spec.selector`,否则它将被 API 拒绝。
在 ReplicaSet 中,`.spec.template.metadata.labels` 的值必须与 `spec.selector`
相匹配,否则该配置会被 API 拒绝。
Kubernetes 1.9 版本中API 版本 `apps/v1` 中的 ReplicaSet 类型的版本是当前版本并默认开启。API 版本 `apps/v1beta2` 被弃用。
{{< note >}}
<!--
For 2 ReplicaSets specifying the same `.spec.selector` but different `.spec.template.metadata.labels` and `.spec.template.spec` fields, each ReplicaSet ignores the Pods created by the other ReplicaSet.
-->
对于设置了相同的 `.spec.selector`,但
`.spec.template.metadata.labels``.spec.template.spec` 字段不同的
两个 ReplicaSet 而言,每个 ReplicaSet 都会忽略被另一个 ReplicaSet 所
创建的 Pods。
{{< /note >}}
<!--
Also you should not normally create any pods whose labels match this selector, either directly, with
another ReplicaSet, or with another controller such as a Deployment. If you do so, the ReplicaSet thinks that it
created the other pods. Kubernetes does not stop you from doing this.
If you do end up with multiple controllers that have overlapping selectors, you
will have to manage the deletion yourself.
-->
另外,通常你不应该创建任何标签与此选择算符匹配的 Pod无论是直接创建还是通过另一个 ReplicaSet 或其他控制器(如 Deployment来创建。
如果你这样做ReplicaSet 会认为那些 Pod 是它创建的。Kubernetes 并不会阻止你这样做。
如果你最终使用了多个具有重叠选择算符的控制器,则必须自行处理它们之间的删除操作。
<!--
### Labels on a ReplicaSet
The ReplicaSet can itself have labels (`.metadata.labels`). Typically, you
would set these the same as the `.spec.template.metadata.labels`. However, they are allowed to be
different, and the `.metadata.labels` do not affect the behavior of the ReplicaSet.
### Replicas
You can specify how many pods should run concurrently by setting `.spec.replicas`. The number running at any time may be higher
or lower, such as if the replicas were just increased or decreased, or if a pod is gracefully
shut down, and a replacement starts early.
You can specify how many Pods should run concurrently by setting `.spec.replicas`. The ReplicaSet will create/delete
its Pods to match this number.
If you do not specify `.spec.replicas`, then it defaults to 1.
-->
### Replicas
通过设置 `.spec.replicas` 您可以指定要同时运行多少个 Pod。
在任何时间运行的 Pod 数量可能高于或低于 `.spec.replicas` 指定的数量,例如在副本刚刚被增加或减少后、或者 Pod 正在被优雅地关闭、以及替换提前开始。
你可以通过设置 `.spec.replicas` 来指定要同时运行的 Pod 个数。
ReplicaSet 创建、删除 Pods 以与此值匹配。
如果您没有指定 `.spec.replicas`, 那么默认值为 1。
如果你没有指定 `.spec.replicas`, 那么默认值为 1。
<!--
## Working with ReplicaSets
@ -273,58 +450,63 @@ To delete a ReplicaSet and all of its Pods, use [`kubectl delete`](/docs/referen
When using the REST API or the `client-go` library, you must set `propagationPolicy` to `Background` or `Foreground` in delete option. e.g. :
-->
## 使用 ReplicaSets 的具体方法
## 使用 ReplicaSets
### 删除 ReplicaSet 和它的 Pod
要删除 ReplicaSet 和它的所有 Pod使用[`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands#delete) 命令。
默认情况下,[垃圾收集器](/zh/docs/concepts/workloads/controllers/garbage-collection/) 自动删除所有依赖的 Pod。
要删除 ReplicaSet 和它的所有 Pod使用
[`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands#delete) 命令。
默认情况下,[垃圾收集器](/zh/docs/concepts/workloads/controllers/garbage-collection/)
自动删除所有依赖的 Pod。
当使用 REST API 或 `client-go` 库时,您必须在删除选项中将 `propagationPolicy` 设置为 `Background``Foreground`。例如:
当使用 REST API 或 `client-go` 库时,你必须在删除选项中将 `propagationPolicy`
设置为 `Background``Foreground`。例如:
```shell
kubectl proxy --port=8080
curl -X DELETE 'localhost:8080/apis/apps/v1/namespaces/default/replicasets/frontend' \
> -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}' \
> -H "Content-Type: application/json"
-d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}' \
-H "Content-Type: application/json"
```
<!--
### Deleting just a ReplicaSet
You can delete a ReplicaSet without affecting any of its pods using [`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands#delete) with the `--cascade=false` option.
When using the REST API or the `client-go` library, you must set `propagationPolicy` to `Orphan`, e.g. :
You can delete a ReplicaSet without affecting any of its pods using [`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands#delete) with the `--cascade=false` option.
When using the REST API or the `client-go` library, you must set `propagationPolicy` to `Orphan`.
For example:
-->
### 只删除 ReplicaSet
您可以只删除 ReplicaSet 而不影响它的 Pod方法是使用[`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands#delete) 命令并设置 `--cascade=false` 选项。
你可以只删除 ReplicaSet 而不影响它的 Pod方法是使用
[`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands#delete)
命令并设置 `--cascade=false` 选项。
当使用 REST API 或 `client-go` 库时,您必须将 `propagationPolicy` 设置为 `Orphan`。例如:
当使用 REST API 或 `client-go` 库时,你必须将 `propagationPolicy` 设置为 `Orphan`
例如:
```shell
kubectl proxy --port=8080
curl -X DELETE 'localhost:8080/apis/apps/v1/namespaces/default/replicasets/frontend' \
> -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Orphan"}' \
> -H "Content-Type: application/json"
-d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Orphan"}' \
-H "Content-Type: application/json"
```
<!--
Once the original is deleted, you can create a new ReplicaSet to replace it. As long
as the old and new `.spec.selector` are the same, then the new one will adopt the old pods.
However, it will not make any effort to make existing pods match a new, different pod template.
To update pods to a new spec in a controlled way, use a [rolling update](#rolling-updates).
To update Pods to a new spec in a controlled way, use a
[Deployment](/docs/concepts/workloads/controllers/deployment/#creating-a-deployment), as ReplicaSets do not support a rolling update directly.
-->
一旦删除了原来的 ReplicaSet就可以创建一个新的来替换它。
由于新旧 ReplicaSet 的 `.spec.selector` 是相同的,新的 ReplicaSet 将接管老的 Pod。
但是,它不会努力使现有的 Pod 与新的、不同的 Pod 模板匹配。
若想要以可控的方式将 Pod 更新到新的 spec就要使用 [滚动更新](#rolling-updates)的方式。
若想要以可控的方式更新 Pod 的规约,可以使用
[Deployment](/zh/docs/concepts/workloads/controllers/deployment/#creating-a-deployment)
资源,因为 ReplicaSet 并不直接支持滚动更新。
<!--
### Isolating pods from a ReplicaSet
Pods may be removed from a ReplicaSet's target set by changing their labels. This technique may be used to remove pods
@ -332,10 +514,10 @@ from service for debugging, data recovery, etc. Pods that are removed in this wa
assuming that the number of replicas is not also changed).
-->
### 将 Pod 从 ReplicaSet 中隔离
可以通过改变标签来从 ReplicaSet 的目标集中移除 Pod。这种技术可以用来从服务中去除 Pod以便进行排错、数据恢复等。
可以通过改变标签来从 ReplicaSet 的目标集中移除 Pod。
这种技术可以用来从服务中去除 Pod以便进行排错、数据恢复等。
以这种方式移除的 Pod 将被自动替换(假设副本的数量没有改变)。
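下面是一个示意性的例子(所用 Pod 名称取自前文示例输出,仅作演示),展示如何通过修改标签把 Pod 从 ReplicaSet 的目标集中隔离出来:
```shell
# 示意:覆盖 Pod 的 tier 标签,使其不再匹配 frontend ReplicaSet 的选择算符
kubectl label pod frontend-b2zdv tier=debug --overwrite

# 或者直接移除该标签
# kubectl label pod frontend-b2zdv tier-

# ReplicaSet 会自动创建新的 Pod 来替换被隔离出去的 Pod
kubectl get pods
```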
<!--
@ -344,10 +526,10 @@ from service for debugging, data recovery, etc. Pods that are removed in this wa
A ReplicaSet can be easily scaled up or down by simply updating the `.spec.replicas` field. The ReplicaSet controller
ensures that a desired number of pods with a matching label selector are available and operational.
-->
### 缩放 ReplicaSet
通过更新 `.spec.replicas` 字段ReplicaSet 可以被轻松地缩放。ReplicaSet 控制器能确保匹配标签选择器且数量合乎预期的 Pod 是可用和可操作的。
通过更新 `.spec.replicas` 字段ReplicaSet 可以被轻松地缩放。ReplicaSet
控制器能确保匹配标签选择器且数量合乎预期的 Pod 是可用和可操作的。
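下面是一个示意性的命令序列(副本数取值仅作演示),展示如何对前文示例中的 frontend ReplicaSet 进行缩放:
```shell
# 示意:将 frontend ReplicaSet 扩容到 5 个副本
kubectl scale replicaset frontend --replicas=5

# 确认当前副本数
kubectl get rs frontend
```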
<!--
### ReplicaSet as an Horizontal Pod Autoscaler Target
@ -357,13 +539,13 @@ A ReplicaSet can also be a target for
a ReplicaSet can be auto-scaled by an HPA. Here is an example HPA targeting
the ReplicaSet we created in the previous example.
-->
### ReplicaSet 作为水平的 Pod 自动缩放器目标
ReplicaSet 也可以作为 [水平的 Pod 缩放器 (HPA)](/docs/tasks/run-application/horizontal-pod-autoscale/) 的目标。也就是说ReplicaSet 可以被 HPA 自动缩放。
ReplicaSet 也可以作为
[水平的 Pod 缩放器 (HPA)](/zh/docs/tasks/run-application/horizontal-pod-autoscale/)
的目标。也就是说ReplicaSet 可以被 HPA 自动缩放。
以下是 HPA 以我们在前一个示例中创建的副本集为目标的示例。
{{< codenew file="controllers/hpa-rs.yaml" >}}
<!--
@ -371,23 +553,21 @@ Saving this manifest into `hpa-rs.yaml` and submitting it to a Kubernetes cluste
create the defined HPA that autoscales the target ReplicaSet depending on the CPU usage
of the replicated pods.
-->
将这个清单保存到 `hpa-rs.yaml` 并提交到 Kubernetes 集群,就能创建它所定义的 HPA进而就能根据副本 Pod 的 CPU 利用率对目标 ReplicaSet 进行自动缩放。
将这个清单保存到 `hpa-rs.yaml` 并提交到 Kubernetes 集群,就能创建它所定义的
HPA进而就能根据副本 Pod 的 CPU 利用率对目标 ReplicaSet 进行自动缩放。
```shell
kubectl create -f https://k8s.io/examples/controllers/hpa-rs.yaml
kubectl apply -f https://k8s.io/examples/controllers/hpa-rs.yaml
```
<!--
Alternatively, you can use the `kubectl autoscale` command to accomplish the same
(and it's easier!)
-->
或者,可以使用 `kubectl autoscale` 命令完成相同的操作。
(而且它更简单!)
或者,可以使用 `kubectl autoscale` 命令完成相同的操作。 (而且它更简单!)
```shell
kubectl autoscale rs frontend
kubectl autoscale rs frontend --max=10 --min=3 --cpu-percent=50
```
<!--
@ -395,43 +575,49 @@ kubectl autoscale rs frontend
### Deployment (Recommended)
[`Deployment`](/docs/concepts/workloads/controllers/deployment/) is a higher-level API object that updates its underlying ReplicaSets and their Pods
in a similar fashion as `kubectl rolling-update`. Deployments are recommended if you want this rolling update functionality,
because unlike `kubectl rolling-update`, they are declarative, server-side, and have additional features. For more information on running a stateless
application using a Deployment, please read [Run a Stateless Application Using a Deployment](/docs/tasks/run-application/run-stateless-application-deployment/).
[`Deployment`](/docs/concepts/workloads/controllers/deployment/) is an object which can own ReplicaSets and update
them and their Pods via declarative, server-side rolling updates.
While ReplicaSets can be used independently, today they're mainly used by Deployments as a mechanism to orchestrate Pod
creation, deletion and updates. When you use Deployments you don't have to worry about managing the ReplicaSets that
they create. Deployments own and manage their ReplicaSets.
As such, it is recommended to use Deployments when you want ReplicaSets.
-->
## ReplicaSet 的替代方案
### Deployment (推荐)
[`Deployment`](/zh/docs/concepts/workloads/controllers/deployment/) 是一个高级 API 对象,它以 `kubectl rolling-update` 的方式更新其底层副本集及其Pod。
如果您需要滚动更新功能,建议使用 Deployment因为 Deployment 与 `kubectl rolling-update` 不同的是:它是声明式的、服务器端的、并且具有其他特性。
有关使用 Deployment 来运行无状态应用的更多信息,请参阅 [使用 Deployment 运行无状态应用](/zh/docs/tasks/run-application/run-stateless-application-deployment/)。
[`Deployment`](/zh/docs/concepts/workloads/controllers/deployment/) 是一个
可以拥有 ReplicaSet 并使用声明式方式在服务器端完成对 Pods 滚动更新的对象。
尽管 ReplicaSet 可以独立使用,目前它们的主要用途是提供给 Deployment 作为
编排 Pod 创建、删除和更新的一种机制。当使用 Deployment 时,你不必关心
如何管理它所创建的 ReplicaSetDeployment 拥有并管理其 ReplicaSet。
因此,建议你在需要 ReplicaSet 时使用 Deployment。
<!--
### Bare Pods
Unlike the case where a user directly created pods, a ReplicaSet replaces pods that are deleted or terminated for any reason, such as in the case of node failure or disruptive node maintenance, such as a kernel upgrade. For this reason, we recommend that you use a ReplicaSet even if your application requires only a single pod. Think of it similarly to a process supervisor, only it supervises multiple pods across multiple nodes instead of individual processes on a single node. A ReplicaSet delegates local container restarts to some agent on the node (for example, Kubelet or Docker).
Unlike the case where a user directly created Pods, a ReplicaSet replaces Pods that are deleted or terminated for any reason, such as in the case of node failure or disruptive node maintenance, such as a kernel upgrade. For this reason, we recommend that you use a ReplicaSet even if your application requires only a single Pod. Think of it similarly to a process supervisor, only it supervises multiple Pods across multiple nodes instead of individual processes on a single node. A ReplicaSet delegates local container restarts to some agent on the node (for example, Kubelet or Docker).
-->
### 裸 Pod
与用户直接创建 Pod 的情况不同ReplicaSet 会替换那些由于某些原因被删除或被终止的 Pod例如在节点故障或破坏性的节点维护如内核升级的情况下。
因为这个好处,我们建议您使用 ReplicaSet即使应用程序只需要一个 Pod。
想像一下ReplicaSet 类似于进程监视器,只不过它在多个节点上监视多个 Pod而不是在单个节点上监视单个进程。
与用户直接创建 Pod 的情况不同ReplicaSet 会替换那些由于某些原因被删除或被终止的
Pod例如在节点故障或破坏性的节点维护如内核升级的情况下。
因为这个原因,我们建议你使用 ReplicaSet即使应用程序只需要一个 Pod。
想像一下ReplicaSet 类似于进程监视器,只不过它在多个节点上监视多个 Pod
而不是在单个节点上监视单个进程。
ReplicaSet 将本地容器重启的任务委托给了节点上的某个代理例如Kubelet 或 Docker去完成。
<!--
### Job
Use a [`Job`](/docs/concepts/jobs/run-to-completion-finite-workloads/) instead of a ReplicaSet for pods that are expected to terminate on their own
Use a [`Job`](/docs/concepts/workloads/controllers/job/) instead of a ReplicaSet for Pods that are expected to terminate on their own
(that is, batch jobs).
-->
### Job
使用 [`Job`](/zh/docs/concepts/workloads/controllers/job/) 代替 ReplicaSet可以用于那些期望自行终止的 Pod。
使用 [`Job`](/zh/docs/concepts/workloads/controllers/job/) 代替 ReplicaSet
可以用于那些期望自行终止的 Pod。
<!--
### DaemonSet
@ -441,11 +627,25 @@ machine-level function, such as machine monitoring or machine logging. These po
to a machine lifetime: the pod needs to be running on the machine before other pods start, and are
safe to terminate when the machine is otherwise ready to be rebooted/shutdown.
-->
### DaemonSet
对于管理那些提供主机级别功能(如主机监控和主机日志)的容器,就要用[`DaemonSet`](/zh/docs/concepts/workloads/controllers/daemonset/) 而不用 ReplicaSet。
这些 Pod 的寿命与主机寿命有关:这些 Pod 需要先于主机上的其他 Pod 运行,并且在机器准备重新启动/关闭时安全地终止。
对于管理那些提供主机级别功能(如主机监控和主机日志)的容器,
就要用 [`DaemonSet`](/zh/docs/concepts/workloads/controllers/daemonset/)
而不用 ReplicaSet。
这些 Pod 的寿命与主机寿命有关:这些 Pod 需要先于主机上的其他 Pod 运行,
并且在机器准备重新启动/关闭时安全地终止。
### ReplicationController
<!--
ReplicaSets are the successors to [_ReplicationControllers_](/docs/concepts/workloads/controllers/replicationcontroller/).
The two serve the same purpose, and behave similarly, except that a ReplicationController does not support set-based
selector requirements as described in the [labels user guide](/docs/concepts/overview/working-with-objects/labels/#label-selectors).
As such, ReplicaSets are preferred over ReplicationControllers
-->
ReplicaSet 是 [ReplicationController](/zh/docs/concepts/workloads/controllers/replicationcontroller/)
的后继者。二者目的相同且行为类似,只是 ReplicationController 不支持
[标签用户指南](/zh/docs/concepts/overview/working-with-objects/labels/#label-selectors)
中讨论的基于集合的选择算符需求。
因此,相比于 ReplicationController应优先考虑 ReplicaSet。

View File

@ -7,7 +7,7 @@ feature:
重新启动失败的容器,在节点死亡时替换并重新调度容器,杀死不响应用户定义的健康检查的容器,并且在它们准备好服务之前不会将它们公布给客户端。
content_type: concept
weight: 20
weight: 90
---
<!--
@ -22,7 +22,7 @@ feature:
Restarts containers that fail, replaces and reschedules containers when nodes die, kills containers that don't respond to your user-defined health check, and doesn't advertise them to clients until they are ready to serve.
content_type: concept
weight: 20
weight: 90
-->
<!-- overview -->
@ -499,7 +499,7 @@ API object can be found at:
### ReplicaSet
[`ReplicaSet`](/docs/concepts/workloads/controllers/replicaset/) is the next-generation ReplicationController that supports the new [set-based label selector](/docs/concepts/overview/working-with-objects/labels/#set-based-requirement).
It's mainly used by [`Deployment`](/docs/concepts/workloads/controllers/deployment/) as a mechanism to orchestrate Pod creation, deletion and updates.
It's mainly used by [Deployment](/docs/concepts/workloads/controllers/deployment/) as a mechanism to orchestrate Pod creation, deletion and updates.
Note that we recommend using Deployments instead of directly using Replica Sets, unless you require custom update orchestration or don't require updates at all.
-->
## ReplicationController 的替代方案
@ -508,8 +508,10 @@ Note that we recommend using Deployments instead of directly using Replica Sets,
[`ReplicaSet`](/zh/docs/concepts/workloads/controllers/replicaset/) 是下一代 ReplicationController
支持新的[基于集合的标签选择算符](/zh/docs/concepts/overview/working-with-objects/labels/#set-based-requirement)。
它主要被 [`Deployment`](/zh/docs/concepts/workloads/controllers/deployment/) 用来作为一种编排 Pod 创建、删除及更新的机制。
请注意,我们推荐使用 Deployment 而不是直接使用 ReplicaSet除非你需要自定义更新编排或根本不需要更新。
它主要被 [`Deployment`](/zh/docs/concepts/workloads/controllers/deployment/)
用来作为一种编排 Pod 创建、删除及更新的机制。
请注意,我们推荐使用 Deployment 而不是直接使用 ReplicaSet除非
你需要自定义更新编排或根本不需要更新。
<!--
### Deployment (Recommended)

View File

@ -276,7 +276,7 @@ from a _pod template_ and manage those Pods on your behalf.
PodTemplates are specifications for creating Pods, and are included in workload resources such as
[Deployments](/docs/concepts/workloads/controllers/deployment/),
[Jobs](/docs/concepts/jobs/run-to-completion-finite-workloads/), and
[Jobs](/docs/concepts/workloads/controllers/job/), and
[DaemonSets](/docs/concepts/workloads/controllers/daemonset/).
-->
### Pod 模版 {#pod-templates}
@ -405,7 +405,7 @@ or POSIX shared memory. Containers in different Pods have distinct IP addresses
and can not communicate by IPC without
[special configuration](/docs/concepts/policy/pod-security-policy/).
Containers that want to interact with a container running in a different Pod can
use IP networking to comunicate.
use IP networking to communicate.
-->
在同一个 Pod 内,所有容器共享一个 IP 地址和端口空间,并且可以通过 `localhost` 发现对方。
他们也能通过如 SystemV 信号量或 POSIX 共享内存这类标准的进程间通信方式互相通信。
@ -487,7 +487,7 @@ but cannot be controlled from there.
<!--
* Learn about the [lifecycle of a Pod](/docs/concepts/workloads/pods/pod-lifecycle/).
* Learn about [PodPresets](/docs/concepts/workloads/pods/podpreset/).
* Lean about [RuntimeClass](/docs/concepts/containers/runtime-class/) and how you can use it to
* Learn about [RuntimeClass](/docs/concepts/containers/runtime-class/) and how you can use it to
configure different Pods with different container runtime configurations.
* Read about [Pod topology spread constraints](/docs/concepts/workloads/pods/pod-topology-spread-constraints/).
* Read about [PodDisruptionBudget](https://kubernetes.io/docs/concepts/workloads/pods/disruptions/) and how you can use it to manage application availability during disruptions.
@ -506,7 +506,8 @@ but cannot be controlled from there.
* Pod 在 Kubernetes REST API 中是一个顶层资源;
[Pod](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#pod-v1-core)
对象的定义中包含了更多的细节信息。
* 博客 [The Distributed System Toolkit: Patterns for Composite Containers](https://kubernetes.io/blog/2015/06/the-distributed-system-toolkit-patterns) 中解释了在同一 Pod 中包含多个容器时的几种常见布局。
* 博客 [分布式系统工具箱:复合容器模式](https://kubernetes.io/blog/2015/06/the-distributed-system-toolkit-patterns)
中解释了在同一 Pod 中包含多个容器时的几种常见布局。
<!--
To understand the context for why Kubernetes wraps a common Pod API in other resources (such as {{< glossary_tooltip text="StatefulSets" term_id="statefulset" >}} or {{< glossary_tooltip text="Deployments" term_id="deployment" >}}, you can read about the prior art, including:
@ -516,9 +517,9 @@ To understand the context for why Kubernetes wraps a common Pod API in other res
或 {{< glossary_tooltip text="Deployment" term_id="deployment" >}}
封装通用的 Pod API相关的背景信息可以在前人的研究中找到。具体包括
* [Aurora](https://aurora.apache.org/documentation/latest/reference/configuration/#job-schema)
* [Borg](https://research.google.com/pubs/pub43438.html)
* [Marathon](https://mesosphere.github.io/marathon/docs/rest-api.html)
* [Omega](https://research.google/pubs/pub41684/)
* [Tupperware](https://engineering.fb.com/data-center-engineering/tupperware/).

View File

@ -3,20 +3,29 @@ approvers:
- erictune
title: Init 容器
content_type: concept
weight: 40
---
<!---
reviewers:
- erictune
title: Init Containers
content_type: concept
weight: 40
-->
<!-- overview -->
<!--
This page provides an overview of init containers: specialized containers that run before app containers in a {{< glossary_tooltip text="Pod" term_id="pod" >}}.
This page provides an overview of init containers: specialized containers that run
before app containers in a {{< glossary_tooltip text="Pod" term_id="pod" >}}.
Init containers can contain utilities or setup scripts not present in an app image.
-->
本页提供了 Init 容器的概览,它是一种特殊容器,在 {{< glossary_tooltip text="Pod" term_id="pod" >}}
内的应用容器启动之前运行,可以包括一些应用镜像中不存在的实用工具和安装脚本。
本页提供了 Init 容器的概览。Init 容器是一种特殊容器,在 {{< glossary_tooltip text="Pod" term_id="pod" >}}
内的应用容器启动之前运行。Init 容器可以包括一些应用镜像中不存在的实用工具和安装脚本。
<!--
You can specify init containers in the Pod specification alongside the `containers` array (which describes app containers).
You can specify init containers in the Pod specification alongside the `containers`
array (which describes app containers).
-->
你可以在 Pod 的规约中与用来描述应用容器的 `containers` 数组平行的位置指定
Init 容器。
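下面是一个示意性的最小示例(名称、镜像与命令均仅作演示),展示 `initContainers` 字段在 Pod 规约中与 `containers` 平行的位置:
```shell
# 示意:包含一个 Init 容器和一个应用容器的 Pod
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  initContainers:          # 与 containers 数组平行
  - name: init-step
    image: busybox:1.28
    command: ['sh', '-c', 'echo init running; sleep 2']
  containers:
  - name: app
    image: busybox:1.28
    command: ['sh', '-c', 'echo app started; sleep 3600']
EOF
```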
@ -26,7 +35,9 @@ Init 容器。
<!--
## Understanding init containers
A {{< glossary_tooltip text="Pod" term_id="pod" >}} can have multiple containers running apps within it, but it can also have one or more init containers, which are run before the app containers are started.
A {{< glossary_tooltip text="Pod" term_id="pod" >}} can have multiple containers
running apps within it, but it can also have one or more init containers, which are run
before the app containers are started.
-->
## 理解 Init 容器
@ -35,6 +46,7 @@ A {{< glossary_tooltip text="Pod" term_id="pod" >}} can have multiple containers
<!--
Init containers are exactly like regular containers, except:
* Init containers always run to completion.
* Each init container must complete successfully before the next one starts.
-->
@ -44,15 +56,20 @@ Init 容器与普通的容器非常像,除了如下两点:
* 每个都必须在下一个启动之前成功完成。
<!--
If a Pod's init container fails, Kubernetes repeatedly restarts the Pod until the init container succeeds. However, if the Pod has a `restartPolicy` of Never, Kubernetes does not restart the Pod.
If a Pod's init container fails, the kubelet repeatedly restarts that init container until it succeeds.
However, if the Pod has a `restartPolicy` of Never, and an init container fails during startup of that Pod, Kubernetes treats the overall Pod as failed.
-->
如果 Pod 的 Init 容器失败,Kubernetes 会不断地重启该 Pod直到 Init 容器成功为止。
然而,如果 Pod 对应的 `restartPolicy` 值为 NeverKubernetes 不会重新启动 Pod。
如果 Pod 的 Init 容器失败,kubelet 会不断地重启该 Init 容器直到该容器成功为止。
然而,如果 Pod 对应的 `restartPolicy` 值为 "Never"Kubernetes 不会重新启动 Pod。
<!--
To specify an init container for a Pod, add the `initContainers` field into the Pod specification, as an array of objects of type [Container](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#container-v1-core), alongside the app `containers` array.
The status of the init containers is returned in `.status.initContainerStatuses` field as an array of the container statuses (similar to the `.status.containerStatuses` field).
To specify an init container for a Pod, add the `initContainers` field into
the Pod specification, as an array of objects of type
[Container](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#container-v1-core),
alongside the app `containers` array.
The status of the init containers is returned in `.status.initContainerStatuses`
field as an array of the container statuses (similar to the `.status.containerStatuses`
field).
-->
为 Pod 设置 Init 容器需要在 Pod 的 `spec` 中添加 `initContainers` 字段,
该字段以 [Container](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#container-v1-core)
@ -62,9 +79,19 @@ Init 容器的状态在 `status.initContainerStatuses` 字段中以容器状态
<!--
### Differences from regular containers
Init containers support all the fields and features of app containers, including resource limits, volumes, and security settings. However, the resource requests and limits for an init container are handled differently, as documented in [Resources](#resources).
Also, init containers do not support `lifecycle`, `livenessProbe`, `readinessProbe`, or `startupProbe` because they must run to completion before the Pod can be ready.
If you specify multiple init containers for a Pod, Kubelet runs each init container sequentially. Each init container must succeed before the next can run. When all of the init containers have run to completion, Kubelet initializes the application containers for the Pod and runs them as usual.
Init containers support all the fields and features of app containers,
including resource limits, volumes, and security settings. However, the
resource requests and limits for an init container are handled differently,
as documented in [Resources](#resources).
Also, init containers do not support `lifecycle`, `livenessProbe`, `readinessProbe`, or
`startupProbe` because they must run to completion before the Pod can be ready.
If you specify multiple init containers for a Pod, Kubelet runs each init
container sequentially. Each init container must succeed before the next can run.
When all of the init containers have run to completion, Kubelet initializes
the application containers for the Pod and runs them as usual.
-->
### 与普通容器的不同之处
@ -80,12 +107,24 @@ Kubernetes 才会为 Pod 初始化应用容器并像平常一样运行。
<!--
## Using init containers
Because init containers have separate images from app containers, they have some advantages for start-up related code:
* Init containers can contain utilities or custom code for setup that are not present in an app image. For example, there is no need to make an image `FROM` another image just to use a tool like `sed`, `awk`, `python`, or `dig` during setup.
* Init containers can securely run utilities that would make an app container image less secure.
* The application image builder and deployer roles can work independently without the need to jointly build a single app image.
* Init containers can run with a different view of the filesystem than app containers in the same Pod. Consequently, they can be given access to {{< glossary_tooltip text="Secrets" term_id="secret" >}} that app containers cannot access.
* Because init containers run to completion before any app containers start, init containers offer a mechanism to block or delay app container startup until a set of preconditions are met. Once preconditions are met, all of the app containers in a Pod can start in parallel.
Because init containers have separate images from app containers, they
have some advantages for start-up related code:
* Init containers can contain utilities or custom code for setup that are not present in an app
image. For example, there is no need to make an image `FROM` another image just to use a tool like
`sed`, `awk`, `python`, or `dig` during setup.
* The application image builder and deployer roles can work independently without
the need to jointly build a single app image.
* Init containers can run with a different view of the filesystem than app containers in the
same Pod. Consequently, they can be given access to
{{< glossary_tooltip text="Secrets" term_id="secret" >}} that app containers cannot access.
* Because init containers run to completion before any app containers start, init containers offer
a mechanism to block or delay app container startup until a set of preconditions are met. Once
preconditions are met, all of the app containers in a Pod can start in parallel.
* Init containers can securely run utilities or custom code that would otherwise make an app
container image less secure. By keeping unnecessary tools separate you can limit the attack
surface of your app container image.
-->
## 使用 Init 容器
@ -108,12 +147,15 @@ Because init containers have separate images from app containers, they have some
<!--
### Examples
Here are some ideas for how to use init containers:
* Wait for a {{< glossary_tooltip text="Service" term_id="service">}} to
be created, using a shell one-line command like:
```shell
for i in {1..100}; do sleep 1; if dig myservice; then exit 0; fi; done; exit 1
```
* Register this Pod with a remote server from the downward API with a command like:
```shell
curl -X POST http://$MANAGEMENT_SERVICE_HOST:$MANAGEMENT_SERVICE_PORT/register -d 'instance=$(<POD_NAME>)&ip=$(<POD_IP>)'
@ -124,7 +166,11 @@ Here are some ideas for how to use init containers:
```
* Clone a Git repository into a {{< glossary_tooltip text="Volume" term_id="volume" >}}
* Place values into a configuration file and run a template tool to dynamically generate a configuration file for the main app container. For example, place the `POD_IP` value in a configuration and generate the main app configuration file using Jinja.
* Place values into a configuration file and run a template tool to dynamically
generate a configuration file for the main app container. For example,
place the `POD_IP` value in a configuration and generate the main app
configuration file using Jinja.
-->
### 示例 {#examples}
@ -156,24 +202,10 @@ Here are some ideas for how to use init containers:
<!--
#### Init containers in use
This example defines a simple Pod that has two init containers. The first waits for `myservice`, and the second waits for `mydb`. Once both init containers complete, the Pod runs the app container from its `spec` section.
```yaml
```
The following YAML file outlines the `mydb` and `myservice` services:
```yaml
```
You can start this Pod by running:
```shell
```
And check on its status with:
```shell
```
This example defines a simple Pod that has two init containers.
The first waits for `myservice`, and the second waits for `mydb`. Once both
init containers complete, the Pod runs the app container from its `spec` section.
-->
### 使用 Init 容器的情况
@ -201,63 +233,40 @@ spec:
command: ['sh', '-c', "until nslookup mydb.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for mydb; sleep 2; done"]
```
下面的 yaml 文件展示了 `mydb``myservice` 两个 Service
```
kind: Service
apiVersion: v1
metadata:
name: myservice
spec:
ports:
- protocol: TCP
port: 80
targetPort: 9376
---
kind: Service
apiVersion: v1
metadata:
name: mydb
spec:
ports:
- protocol: TCP
port: 80
targetPort: 9377
```
要启动这个 Pod可以执行如下命令
<!--
You can start this Pod by running:
-->
你通过运行下面的命令启动 Pod
```shell
kubectl apply -f myapp.yaml
```
输出为:
```
pod/myapp-pod created
```
要检查其状态:
<!--
And check on its status with:
-->
使用下面的命令检查其状态:
```shell
kubectl get -f myapp.yaml
```
输出类似于:
```
NAME READY STATUS RESTARTS AGE
myapp-pod 0/1 Init:0/2 0 6m
```
如需更详细的信息:
<!--
or for more details:
-->
或者查看更多详细信息:
```shell
kubectl describe -f myapp.yaml
```
输出类似于:
```
Name: myapp-pod
Namespace: default
@ -293,6 +302,9 @@ Events:
13s 13s 1 {kubelet 172.17.4.201} spec.initContainers{init-myservice} Normal Started Started container with docker id 5ced34a04634
```
<!--
To see logs for the init containers in this Pod, run:
-->
如需查看 Pod 内 Init 容器的日志,请执行:
```shell
@ -301,7 +313,8 @@ kubectl logs myapp-pod -c init-mydb # 查看第二个 Init 容器
```
<!--
At this point, those init containers will be waiting to discover Services named `mydb` and `myservice`.
At this point, those init containers will be waiting to discover Services named
`mydb` and `myservice`.
Here's a configuration you can use to make those Services appear:
-->
@ -332,23 +345,27 @@ spec:
targetPort: 9377
```
<!--
To create the `mydb` and `myservice` services:
-->
创建 `mydb``myservice` 服务的命令:
```shell
kubectl create -f services.yaml
```
输出类似于:
```
service "myservice" created
service "mydb" created
```
<!--
You'll then see that those init containers complete, and that the `myapp-pod`
Pod moves into the Running state:
-->
这样你将能看到这些 Init 容器执行完毕,随后 `myapp-pod` 进入 `Running` 状态:
```shell
$ kubectl get -f myapp.yaml
kubectl get -f myapp.yaml
```
```
@ -356,32 +373,43 @@ NAME READY STATUS RESTARTS AGE
myapp-pod 1/1 Running 0 9m
```
一旦我们启动了 `mydb``myservice` 这两个服务,我们能够看到 Init 容器完成,
并且 `myapp-pod` 被创建。
<!--
This simple example should provide some inspiration for you to create your own init containers. [What's next](#what-s-next) contains a link to a more detailed example.
This simple example should provide some inspiration for you to create your own
init containers. [What's next](#whats-next) contains a link to a more detailed example.
-->
这个简单例子应该能为你创建自己的 Init 容器提供一些启发。
[接下来](#what-s-next)节提供了更详细例子的链接。
[接下来](#whats-next)节提供了更详细例子的链接。
<!--
## Detailed behavior
During the startup of a Pod, each init container starts in order, after the network and volumes are initialized. Each container must exit successfully before the next container starts. If a container fails to start due to the runtime or exits with failure, it is retried according to the Pod `restartPolicy`. However, if the Pod `restartPolicy` is set to Always, the init containers use `restartPolicy` OnFailure.
During Pod startup, the kubelet delays running init containers until the networking
and storage are ready. Then the kubelet runs the Pod's init containers in the order
they appear in the Pod's spec.
A Pod cannot be `Ready` until all init containers have succeeded. The ports on an init container are not aggregated under a Service. A Pod that is initializing
is in the `Pending` state but should have a condition `Initializing` set to true.
Each init container must exit successfully before
the next container starts. If a container fails to start due to the runtime or
exits with failure, it is retried according to the Pod `restartPolicy`. However,
if the Pod `restartPolicy` is set to Always, the init containers use
`restartPolicy` OnFailure.
If the Pod [restarts](#pod-restart-reasons), or is restarted, all init containers must execute again.
A Pod cannot be `Ready` until all init containers have succeeded. The ports on an
init container are not aggregated under a Service. A Pod that is initializing
is in the `Pending` state but should have a condition `Initialized` set to true.
If the Pod [restarts](#pod-restart-reasons), or is restarted, all init containers
must execute again.
-->
## 具体行为 {#detailed-behavior}
在 Pod 启动过程中,每个 Init 容器在网络和数据卷初始化之后会按顺序启动。
在 Pod 启动过程中,每个 Init 容器会在网络和数据卷初始化之后按顺序启动。
kubelet 会依据 Init 容器在 Pod 规约中的出现顺序依次运行它们。
每个 Init 容器成功退出后才会启动下一个 Init 容器。
如果它们因为容器运行时的原因无法启动,或以错误状态退出,它会根据 Pod 的 `restartPolicy` 策略进行重试。
然而,如果 Pod 的 `restartPolicy` 设置为 "Always"Init 容器失败时会使用 `restartPolicy`
的 "OnFailure" 策略。
如果某容器因为容器运行时的原因无法启动或以错误状态退出kubelet 会根据
Pod 的 `restartPolicy` 策略进行重试。
然而,如果 Pod 的 `restartPolicy` 设置为 "Always"Init 容器失败时会使用
`restartPolicy` 的 "OnFailure" 策略。
在所有的 Init 容器没有成功之前Pod 将不会变成 `Ready` 状态。
Init 容器的端口将不会在 Service 中进行聚集。正在初始化中的 Pod 处于 `Pending` 状态,
@ -390,11 +418,17 @@ Init 容器的端口将不会在 Service 中进行聚集。正在初始化中的
如果 Pod [重启](#pod-restart-reasons),所有 Init 容器必须重新执行。
<!--
Changes to the init container spec are limited to the container image field. Altering an init container image field is equivalent to restarting the Pod.
Changes to the init container spec are limited to the container image field.
Altering an init container image field is equivalent to restarting the Pod.
Because init containers can be restarted, retried, or re-executed, init container code should be idempotent. In particular, code that writes to files on `EmptyDirs` should be prepared for the possibility that an output file already exists.
Because init containers can be restarted, retried, or re-executed, init container
code should be idempotent. In particular, code that writes to files on `EmptyDirs`
should be prepared for the possibility that an output file already exists.
Init containers have all of the fields of an app container. However, Kubernetes
prohibits `readinessProbe` from being used because init containers cannot
define readiness distinct from completion. This is enforced during validation.
Init containers have all of the fields of an app container. However, Kubernetes prohibits `readinessProbe` from being used because init containers cannot define readiness distinct from completion. This is enforced during validation.
-->
对 Init 容器规约的修改仅限于容器的 `image` 字段。
更改 Init 容器的 `image` 字段,等同于重启该 Pod。
@ -407,9 +441,12 @@ Init 容器具有应用容器的所有字段。然而 Kubernetes 禁止使用 `r
Kubernetes 会在校验时强制执行此检查。
<!--
Use `activeDeadlineSeconds` on the Pod and `livenessProbe` on the container to prevent init containers from failing forever. The active deadline includes init containers.
Use `activeDeadlineSeconds` on the Pod and `livenessProbe` on the container to
prevent init containers from failing forever. The active deadline includes init
containers.
The name of each app and init container in a Pod must be unique; a
validation error is thrown for any container sharing a name with another.
-->
Use `activeDeadlineSeconds` on the Pod and a `livenessProbe` on the container to keep
init containers from failing forever. The `activeDeadlineSeconds` window includes the time spent running the init containers.
@ -418,17 +455,21 @@ Init 容器一直重复失败。`activeDeadlineSeconds` 时间包含了 Init 容
sharing a name with any other container results in a validation error.
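As a hedged sketch of how those two safeguards can be combined, the deadline value, probe endpoint, images, and names below are illustrative assumptions, not values from this page:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: init-deadline-demo        # illustrative
spec:
  activeDeadlineSeconds: 300      # the deadline also counts time spent in init containers
  initContainers:
  - name: init-schema             # hypothetical init step
    image: busybox:1.28
    command: ['sh', '-c', 'sleep 10']
  containers:
  - name: app
    image: nginx:1.19
    livenessProbe:                # probes are only allowed on app containers, not init containers
      httpGet:
        path: /healthz            # assumed health endpoint
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
```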
<!--
### Resources
Given the ordering and execution for init containers, the following rules
for resource usage apply:
* The highest of any particular resource request or limit defined on all init
containers is the *effective init request/limit*
* The Pod's *effective request/limit* for a resource is the higher of:
  * the sum of all app containers' request/limit for a resource
* the effective init request/limit for a resource
* Scheduling is done based on effective requests/limits, which means
init containers can reserve resources for initialization that are not used
during the life of the Pod.
* The Pod's *effective QoS (quality of service) tier* is the QoS tier for init
  containers and app containers alike.
-->
### Resources {#resources}
@ -442,15 +483,27 @@ Pod level control groups (cgroups) are based on the effective Pod request and li
  these resources are not used during the life of the Pod.
* The Pod's *effective QoS tier* is the QoS tier for init containers and app containers alike.
<!--
Quota and limits are applied based on the effective Pod request and limit.
Pod level control groups (cgroups) are based on the effective Pod request and limit, the same as the scheduler.
-->
Quota and limits are applied based on the effective Pod request and limit.
Pod-level control groups (cgroups) are based on the effective Pod request and limit, the same as the scheduler.
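As a worked illustration of these rules, consider the following sketch; every number in it is made up for the example:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resources-demo                 # illustrative
spec:
  initContainers:
  - name: init
    image: busybox:1.28
    command: ['sh', '-c', 'true']
    resources:
      requests:
        cpu: "500m"                    # effective init CPU request: 500m
        memory: "64Mi"                 # effective init memory request: 64Mi
  containers:
  - name: app-1
    image: nginx:1.19
    resources:
      requests:
        cpu: "200m"
        memory: "128Mi"
  - name: app-2
    image: nginx:1.19
    resources:
      requests:
        cpu: "100m"
        memory: "128Mi"
# App containers sum to cpu=300m, memory=256Mi.
# Effective Pod request = max(effective init request, sum of app requests) per resource:
#   cpu:    max(500m, 300m) = 500m
#   memory: max(64Mi, 256Mi) = 256Mi
# The scheduler, quota, and Pod-level cgroups all use these effective values.
```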
<!--
### Pod restart reasons
A Pod can restart, causing re-execution of init containers, for the following
reasons:
* A user updates the Pod specification, causing the init container image to change.
Any changes to the init container image restarts the Pod. App container image
changes only restart the app container.
* The Pod infrastructure container is restarted. This is uncommon and would
have to be done by someone with root access to nodes.
* All containers in a Pod are terminated while `restartPolicy` is set to Always,
forcing a restart, and the init container completion record has been lost due
to garbage collection.
-->
### Pod restart reasons {#pod-restart-reasons}
@ -471,7 +524,6 @@ Pod 重启会导致 Init 容器重新执行,主要有如下几个原因:
* Read about [creating a Pod that has an init container](/docs/tasks/configure-pod-container/configure-pod-initialization/#create-a-pod-that-has-an-init-container)
* Learn how to [debug init containers](/docs/tasks/debug-application-cluster/debug-init-containers/)
-->
* Read about [creating a Pod that has an init container](/zh/docs/tasks/configure-pod-container/configure-pod-initialization/#create-a-pod-that-has-an-init-container)
* Learn how to [debug init containers](/zh/docs/tasks/debug-application-cluster/debug-init-containers/)

View File

@ -19,7 +19,8 @@ of its primary containers starts OK, and then through either the `Succeeded` or
Whilst a Pod is running, the kubelet is able to restart containers to handle some
kinds of faults. Within a Pod, Kubernetes tracks different container
[states](#container-states) and determines what action to take to make the Pod
healthy again.
-->
This page describes the lifecycle of a Pod. Pods follow a defined lifecycle, starting
in the `Pending` [phase](#pod-phase) and, if at least
@ -28,7 +29,7 @@ Pod 遵循一个预定义的生命周期,起始于 `Pending` [阶段](#pod-pha
Whilst a Pod is running, the kubelet is able to restart containers to handle some
kinds of faults. Within a Pod, Kubernetes tracks the different container [states](#container-states)
and determines what action to take to make the Pod healthy again.
<!--
In the Kubernetes API, Pods have both a specification and an actual status. The
@ -88,7 +89,7 @@ Pod 自身不具有自愈能力。如果 Pod 被调度到某{{< glossary_tooltip
<!--
A given Pod (as defined by a UID) is never "rescheduled" to a different node; instead,
that Pod can be replaced by a new, near-identical Pod, with even the same name if
desired, but with a different UID.
When something is said to have the same lifetime as a Pod, such as a
@ -193,7 +194,7 @@ Kubernetes 会跟踪 Pod 中每个容器的状态,就像它跟踪 Pod 总体
`Terminated`.
<!--
To check the state of a Pod's containers, you can use
`kubectl describe pod <name-of-pod>`. The output shows the state for each container
within that Pod.
@ -207,7 +208,7 @@ Each state has a specific meaning:
<!--
### `Waiting` {#container-state-waiting}
If a container is not in either the `Running` or `Terminated` state, it is `Waiting`.
A container in the `Waiting` state is still running the operations it requires in
order to complete start up: for example, pulling the container image from a container
image registry, or applying {{< glossary_tooltip text="Secret" term_id="secret" >}}
@ -228,23 +229,23 @@ Reason 字段,其中给出了容器处于等待状态的原因。
### `Running` {#container-state-running}
The `Running` status indicates that a container is executing without issues. If there
was a `postStart` hook configured, it has already executed and finished. When you use
`kubectl` to query a Pod with a container that is `Running`, you also see information
about when the container entered the `Running` state.
-->
### `Running` {#container-state-running}
The `Running` status indicates that a container is executing without issues.
If a `postStart` hook was configured, it has already run and finished.
When you use `kubectl` to query a Pod with a container that is `Running`, you also see
information about when the container entered the `Running` state.
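For example, a container that configures a `postStart` hook might look like this minimal sketch; the image and command are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: poststart-demo            # illustrative
spec:
  containers:
  - name: app
    image: nginx:1.19
    lifecycle:
      postStart:                  # runs right after the container is created
        exec:
          command: ["/bin/sh", "-c", "echo started > /tmp/started"]
```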
<!--
### `Terminated` {#container-state-terminated}
A container in the `Terminated` state began execution and then either ran to
completion or failed for some reason. When you use `kubectl` to query a Pod with
a container that is `Terminated`, you see a reason, an exit code, and the start and
finish time for that container's period of execution.
If a container has a `preStop` hook configured, that runs before the container enters
@ -268,8 +269,8 @@ and Never. The default value is Always.
The `restartPolicy` applies to all containers in the Pod. `restartPolicy` only
refers to restarts of the containers by the kubelet on the same node. After containers
in a Pod exit, the kubelet restarts them with an exponential back-off delay (10s, 20s,
40s, …), that is capped at five minutes. Once a container has executed for 10 minutes
without any problems, the kubelet resets the restart backoff timer for
that container.
-->
## Container restart policy {#restart-policy}
@ -426,7 +427,8 @@ When a Pod's containers are Ready but at least one custom condition is missing o
## Container probes
A [Probe](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#probe-v1-core) is a diagnostic
performed periodically by the
[kubelet](/docs/reference/command-line-tools-reference/kubelet/)
on a Container. To perform a diagnostic,
the kubelet calls a
[Handler](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#handler-v1-core) implemented by
@ -434,10 +436,10 @@ the container. There are three types of handlers:
-->
## Container probes {#container-probes}
A [Probe](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#probe-v1-core)
is a diagnostic performed periodically by the
[kubelet](/zh/docs/reference/command-line-tools-reference/kubelet/) on a container.
To perform a diagnostic, the kubelet calls a
[Handler](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#handler-v1-core)
implemented by the container. There are three types of handlers:
<!--
@ -593,7 +595,7 @@ to stop.
-->
### When should you use a startup probe? {#when-should-you-use-a-startup-probe}
{{< feature-state for_k8s_version="v1.16" state="alpha" >}}
{{< feature-state for_k8s_version="v1.18" state="beta" >}}
<!--
Startup probes are useful for Pods that have containers that take a long time to
@ -647,14 +649,17 @@ shutdown.
the Pod.
<!--
Typically, the container runtime sends a TERM signal to the main process in each
container. Many container runtimes respect the `STOPSIGNAL` value defined in the container
image and send this instead of TERM.
Once the grace period has expired, the KILL signal is sent to any remaining
processes, and the Pod is then deleted from the
{{< glossary_tooltip text="API Server" term_id="kube-apiserver" >}}. If the kubelet or the
container runtime's management service is restarted while waiting for processes to terminate, the
cluster retries from the start including the full original grace period.
-->
Typically, the container runtime sends a TERM signal to the main process in each container.
Many container runtimes respect the `STOPSIGNAL` value defined in the container image
and send that signal instead of TERM.
Once the grace period has expired, the container runtime sends the KILL signal to any
remaining processes, after which the Pod is removed from the
{{< glossary_tooltip text="API server" term_id="kube-apiserver" >}}. If the kubelet or
the container runtime's management service is restarted while waiting for processes to terminate,
@ -666,9 +671,9 @@ An example flow:
1. You use the `kubectl` tool to manually delete a specific Pod, with the default grace period
(30 seconds).
1. The Pod in the API server is updated with the time beyond which the Pod is considered "dead"
along with the grace period.
If you use `kubectl describe` to check on the Pod you're deleting, that Pod shows up as
"Terminating".
"Terminating".
On the node where the Pod is running: as soon as the kubelet sees that a Pod has been marked
as terminating (a graceful shutdown duration has been set), the kubelet begins the local Pod
shutdown process.
@ -737,7 +742,7 @@ An example flow:
`SIGKILL` to any processes still running in any container in the Pod.
The kubelet also cleans up a hidden `pause` container if that container runtime uses one.
1. The kubelet triggers forcible removal of Pod object from the API server, by setting grace period
to 0 (immediate deletion).
1. The API server deletes the Pod's API object, which is then no longer visible from any client.
-->
4. Once the termination grace period is exceeded, the kubelet triggers the forced shutdown process. The container runtime sends, to every container in the Pod,
@ -745,14 +750,14 @@ An example flow:
   The kubelet also cleans up a hidden `pause` container if the container runtime uses one.
5. The kubelet triggers forcible removal of the Pod object from the API server, setting the grace period to 0
   (meaning immediate deletion).
6. The API server deletes the Pod's API object, which is then no longer visible from any client.
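To make the knobs involved in this flow concrete, here is a minimal Pod sketch that combines a longer grace period with a `preStop` hook; the period, image, and commands are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: graceful-shutdown-demo      # illustrative
spec:
  terminationGracePeriodSeconds: 60 # overrides the 30-second default grace period
  containers:
  - name: app
    image: nginx:1.19
    lifecycle:
      preStop:                      # runs before TERM is sent to the container's main process
        exec:
          command: ["/bin/sh", "-c", "nginx -s quit; sleep 5"]
```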
<!--
### Forced Pod termination {#pod-termination-forced}
Forced deletions can be potentially disruptive for some workloads and their Pods.
By default, all deletes are graceful within 30 seconds. The `kubectl delete` command supports
the `--grace-period=<seconds>` option which allows you to override the default and specify your
@ -850,4 +855,3 @@ and
See [PodStatus](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podstatus-v1-core)
and [ContainerStatus](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#containerstatus-v1-core).