Merge pull request #27766 from tengqm/zh-sync-concepts-9
[zh] Resync concepts section (9)
This commit is contained in:
commit 3532f1f4dc
@@ -12,15 +12,15 @@ weight: 80
 <!-- overview -->
 
-{{< feature-state for_k8s_version="v1.8" state="beta" >}}
+{{< feature-state for_k8s_version="v1.21" state="stable" >}}
 
 <!--
-A _Cron Job_ creates [Jobs](/docs/concepts/workloads/controllers/jobs-run-to-completion/) on a time-based schedule.
+A _CronJob_ creates {{< glossary_tooltip term_id="job" text="Jobs" >}} on a repeating schedule.
+
+One CronJob object is like one line of a _crontab_ (cron table) file. It runs a job periodically
+on a given schedule, written in [Cron](https://en.wikipedia.org/wiki/Cron) format.
 -->
-_Cron Job_ 创建基于时间调度的 [Jobs](/zh/docs/concepts/workloads/controllers/job/)。
+_CronJob_ 创建基于时隔重复调度的 {{< glossary_tooltip term_id="job" text="Jobs" >}}。
+
+一个 CronJob 对象就像 _crontab_ (cron table) 文件中的一行。
+它用 [Cron](https://en.wikipedia.org/wiki/Cron) 格式进行编写,
@@ -102,24 +102,22 @@ This example CronJob manifest prints the current time and a hello message every
 # * * * * *
 ```
 
 <!--
 | Entry | Description | Equivalent to |
 | ------------- | ------------- | ------------- |
 | @yearly (or @annually) | Run once a year at midnight of 1 January | 0 0 1 1 * |
 | @monthly | Run once a month at midnight of the first day of the month | 0 0 1 * * |
 | @weekly | Run once a week at midnight on Sunday morning | 0 0 * * 0 |
 | @daily (or @midnight) | Run once a day at midnight | 0 0 * * * |
 | @hourly | Run once an hour at the beginning of the hour | 0 * * * * |
 -->
 | 输入 | 描述 | 相当于 |
 | ------------- | ------------- | ------------- |
 | @yearly (or @annually) | 每年 1 月 1 日的午夜运行一次 | 0 0 1 1 * |
 | @monthly | 每月第一天的午夜运行一次 | 0 0 1 * * |
 | @weekly | 每周的周日午夜运行一次 | 0 0 * * 0 |
 | @daily (or @midnight) | 每天午夜运行一次 | 0 0 * * * |
 | @hourly | 每小时的开始运行一次 | 0 * * * * |
 
 <!--
 For example, the line below states that the task must be started every Friday at midnight, as well as on the 13th of each month at midnight:
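The @-macro equivalences in the table above are fixed strings, so they can be sanity-checked mechanically. A minimal Python sketch, where `CRON_MACROS` and `expand_macro` are illustrative names invented here, not part of Kubernetes:

```python
# Hypothetical lookup table mirroring the @-macro equivalences in the table above.
CRON_MACROS = {
    "@yearly":   "0 0 1 1 *",   # once a year, midnight of 1 January
    "@annually": "0 0 1 1 *",   # alias of @yearly
    "@monthly":  "0 0 1 * *",   # once a month, midnight of day 1
    "@weekly":   "0 0 * * 0",   # once a week, Sunday midnight
    "@daily":    "0 0 * * *",   # once a day at midnight
    "@midnight": "0 0 * * *",   # alias of @daily
    "@hourly":   "0 * * * *",   # at the beginning of every hour
}

def expand_macro(schedule: str) -> str:
    """Expand an @-macro to its five-field cron form; pass other schedules through."""
    return CRON_MACROS.get(schedule, schedule)
```

This only restates the table; the actual parsing in the CronJob controller is done by its Go cron library.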
@@ -192,8 +190,10 @@ For example, suppose a CronJob is set to schedule a new Job every one minute beg
 `startingDeadlineSeconds` field is not set. If the CronJob controller happens to
 be down from `08:29:00` to `10:21:00`, the job will not start as the number of missed jobs which missed their schedule is greater than 100.
 -->
-例如,假设一个 CronJob 被设置为从 `08:30:00` 开始每隔一分钟创建一个新的 Job,并且它的 `startingDeadlineSeconds` 字段
-未被设置。如果 CronJob 控制器从 `08:29:00` 到 `10:21:00` 终止运行,则该 Job 将不会启动,因为其错过的调度次数超过了100。
+例如,假设一个 CronJob 被设置为从 `08:30:00` 开始每隔一分钟创建一个新的 Job,
+并且它的 `startingDeadlineSeconds` 字段未被设置。如果 CronJob 控制器从
+`08:29:00` 到 `10:21:00` 终止运行,则该 Job 将不会启动,因为其错过的调度
+次数超过了 100。
 
 <!--
 To illustrate this concept further, suppose a CronJob is set to schedule a new Job every one minute beginning at `08:30:00`, and its
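The "greater than 100" claim in the paragraph above is plain arithmetic over the outage window. A hedged sketch of that arithmetic only, not the CronJob controller's actual Go logic:

```python
from datetime import datetime, timedelta

def missed_schedules(start, down_from, down_until, period):
    """Count schedule times falling inside a controller outage.

    Illustrative arithmetic only; the real controller also applies
    startingDeadlineSeconds and other rules."""
    count, t = 0, start
    while t <= down_until:
        if t >= down_from:
            count += 1
        t += period
    return count

day = datetime(2021, 1, 1)
n = missed_schedules(
    start=day.replace(hour=8, minute=30),        # first schedule: 08:30:00
    down_from=day.replace(hour=8, minute=29),    # controller down from 08:29:00
    down_until=day.replace(hour=10, minute=21),  # ...until 10:21:00
    period=timedelta(minutes=1),                 # a new Job every minute
)
# n == 112, i.e. more than the 100-miss limit, so the job is not started
```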
@@ -214,22 +214,25 @@ the Job in turn is responsible for the management of the Pods it represents.
 CronJob 仅负责创建与其调度时间相匹配的 Job,而 Job 又负责管理其代表的 Pod。
 
 <!--
-## New controller
+## Controller version {#new-controller}
 
-There's an alternative implementation of the CronJob controller, available as an alpha feature since Kubernetes 1.20. To select version 2 of the CronJob controller, pass the following [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) flag to the {{< glossary_tooltip term_id="kube-controller-manager" text="kube-controller-manager" >}}.
-
-```
---feature-gates="CronJobControllerV2=true"
-```
+Starting with Kubernetes v1.21 the second version of the CronJob controller
+is the default implementation. To disable the default CronJob controller
+and use the original CronJob controller instead, pass the `CronJobControllerV2`
+[feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
+flag to the {{< glossary_tooltip term_id="kube-controller-manager" text="kube-controller-manager" >}},
+and set this flag to `false`. For example:
 -->
-## 新控制器
+## 控制器版本 {#new-controller}
 
-CronJob 控制器有一个替代的实现,自 Kubernetes 1.20 开始以 alpha 特性引入。
-如果选择 CronJob 控制器的 v2 版本,请在 {{< glossary_tooltip term_id="kube-controller-manager" text="kube-controller-manager" >}}
-中设置以下[特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/)标志。
+从 Kubernetes v1.21 版本开始,CronJob 控制器的第二个版本被用作默认实现。
+要禁用此默认 CronJob 控制器而使用原来的 CronJob 控制器,请在
+{{< glossary_tooltip term_id="kube-controller-manager" text="kube-controller-manager" >}}
+中设置[特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/)
+`CronJobControllerV2`,将此标志设置为 `false`。例如:
 
 ```
--feature-gates="CronJobControllerV2=true"
+--feature-gates="CronJobControllerV2=false"
 ```
 
 ## {{% heading "whatsnext" %}}
@@ -240,7 +243,8 @@ documents the format of CronJob `schedule` fields.
 For instructions on creating and working with cron jobs, and for an example of a spec file for a cron job, see [Running automated tasks with cron jobs](/docs/tasks/job/automated-tasks-with-cron-jobs).
 -->
 
-* 进一步了解 [Cron 表达式的格式](https://en.wikipedia.org/wiki/Cron),学习设置 CronJob `schedule` 字段
+* 进一步了解 [Cron 表达式的格式](https://en.wikipedia.org/wiki/Cron),学习设置
+  CronJob `schedule` 字段
 * 有关创建和使用 CronJob 的说明及示例规约文件,请参见
   [使用 CronJob 运行自动化任务](/zh/docs/tasks/job/automated-tasks-with-cron-jobs/)。
@@ -472,7 +472,7 @@ curl -X DELETE 'localhost:8080/apis/apps/v1/namespaces/default/replicasets/fron
 <!--
 ### Deleting just a ReplicaSet
 
-You can delete a ReplicaSet without affecting any of its Pods using [`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands#delete) with the `--cascade=orphan` option.
+You can delete a ReplicaSet without affecting any of its Pods using [`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands#delete) with the `--cascade=orphan` option.
 When using the REST API or the `client-go` library, you must set `propagationPolicy` to `Orphan`.
 For example:
 -->
@@ -531,6 +531,102 @@ ensures that a desired number of pods with a matching label selector are availab
 通过更新 `.spec.replicas` 字段,ReplicaSet 可以被轻松地进行缩放。ReplicaSet
 控制器能确保匹配标签选择器的数量的 Pod 是可用的和可操作的。
 
+<!--
+When scaling down, the ReplicaSet controller chooses which pods to delete by sorting the available pods to
+prioritize scaling down pods based on the following general algorithm:
+-->
+在降低集合规模时,ReplicaSet 控制器通过对可用的 Pod 进行排序来优先选择
+要被删除的 Pod。其一般性算法如下:
+
+<!--
+1. Pending (and unschedulable) pods are scaled down first
+2. If `controller.kubernetes.io/pod-deletion-cost` annotation is set, then
+   the pod with the lower value will come first.
+3. Pods on nodes with more replicas come before pods on nodes with fewer replicas.
+4. If the pods' creation times differ, the pod that was created more recently
+   comes before the older pod (the creation times are bucketed on an integer log scale
+   when the `LogarithmicScaleDown` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) is enabled)
+-->
+1. 首先选择剔除悬决(Pending,且不可调度)的 Pod
+2. 如果设置了 `controller.kubernetes.io/pod-deletion-cost` 注解,则注解值
+   较小的优先被裁减掉
+3. 所处节点上副本个数较多的 Pod 优先于所处节点上副本较少者
+4. 如果 Pod 的创建时间不同,最近创建的 Pod 优先于早前创建的 Pod 被裁减
+   (当 `LogarithmicScaleDown` 这一[特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/)
+   被启用时,创建时间是按整数幂级来分组的)。
+
+如果以上比较结果都相同,则随机选择。
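The four-step ranking added above can be expressed as a single sort key. A sketch under simplifying assumptions: plain dicts stand in for Pod objects, the log-scale bucketing of creation times is omitted, and all names are invented for illustration:

```python
# Pods that sort FIRST are deleted first, mirroring the ranking above.
def scale_down_rank(pod):
    """Sort key for choosing scale-down victims (simplified sketch)."""
    return (
        0 if pod["pending"] else 1,       # 1. pending/unschedulable pods first
        pod.get("deletion_cost", 0),      # 2. lower pod-deletion-cost first (default 0)
        -pod["replicas_on_node"],         # 3. more co-located replicas first
        -pod["created"],                  # 4. more recently created first
    )

pods = [
    {"name": "a", "pending": False, "replicas_on_node": 2, "created": 100},
    {"name": "b", "pending": True,  "replicas_on_node": 1, "created": 50},
    {"name": "c", "pending": False, "replicas_on_node": 2, "created": 300,
     "deletion_cost": -5},
]
victims = sorted(pods, key=scale_down_rank)
# "b" is pending so it goes first; "c" has a lower deletion cost than "a"
```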
+<!--
+### Pod deletion cost
+-->
+### Pod 删除开销 {#pod-deletion-cost}
+
+{{< feature-state for_k8s_version="v1.21" state="alpha" >}}
+
+<!--
+Using the [`controller.kubernetes.io/pod-deletion-cost`](/docs/reference/labels-annotations-taints/#pod-deletion-cost)
+annotation, users can set a preference regarding which pods to remove first when downscaling a ReplicaSet.
+-->
+通过使用 [`controller.kubernetes.io/pod-deletion-cost`](/zh/docs/reference/labels-annotations-taints/#pod-deletion-cost)
+注解,用户可以对 ReplicaSet 缩容时要先删除哪些 Pod 设置偏好。
+
+<!--
+The annotation should be set on the pod, the range is [-2147483647, 2147483647]. It represents the cost of
+deleting a pod compared to other pods belonging to the same ReplicaSet. Pods with lower deletion
+cost are preferred to be deleted before pods with higher deletion cost.
+-->
+此注解要设置到 Pod 上,取值范围为 [-2147483647, 2147483647]。
+所代表的是与同一 ReplicaSet 中其他 Pod 相比较而言的删除开销。
+删除开销较小的 Pod 比删除开销较高的 Pod 更优先被删除。
+
+<!--
+The implicit value for this annotation for pods that don't set it is 0; negative values are permitted.
+Invalid values will be rejected by the API server.
+-->
+Pod 如果未设置此注解,则隐含的设置值为 0。负值也是可接受的。
+如果注解值非法,API 服务器会拒绝对应的 Pod。
+
+<!--
+This feature is alpha and disabled by default. You can enable it by setting the
+[feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
+`PodDeletionCost` in both kube-apiserver and kube-controller-manager.
+-->
+此功能特性处于 Alpha 阶段,默认被禁用。你可以通过为 kube-apiserver 和
+kube-controller-manager 设置
+[特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/)
+`PodDeletionCost` 来启用此功能。
+
+{{< note >}}
+<!--
+- This is honored on a best-effort basis, so it does not offer any guarantees on pod deletion order.
+- Users should avoid updating the annotation frequently, such as updating it based on a metric value,
+  because doing so will generate a significant number of pod updates on the apiserver.
+-->
+- 此机制实施时仅是尽力而为,并不能对 Pod 的删除顺序作出任何保证;
+- 用户应避免频繁更新注解值,例如根据某观测度量值来更新此注解值是应该避免的。
+  这样做会在 API 服务器上产生大量的 Pod 更新操作。
+{{< /note >}}
+<!--
+#### Example Use Case
+
+The different pods of an application could have different utilization levels. On scale down, the application
+may prefer to remove the pods with lower utilization. To avoid frequently updating the pods, the application
+should update `controller.kubernetes.io/pod-deletion-cost` once before issuing a scale down (setting the
+annotation to a value proportional to pod utilization level). This works if the application itself controls
+the down scaling; for example, the driver pod of a Spark deployment.
+-->
+#### 使用场景示例
+
+同一应用的不同 Pod 可能其利用率是不同的。在对应用执行缩容操作时,可能希望移除
+利用率较低的 Pod。为了避免频繁更新 Pod,应用应该在执行缩容操作之前更新一次
+`controller.kubernetes.io/pod-deletion-cost` 注解值
+(将注解值设置为一个与其 Pod 利用率对应的值)。
+如果应用自身控制缩容操作(例如 Spark 部署的驱动 Pod),这种机制是可以起作用的。
 
 <!--
 ### ReplicaSet as an Horizontal Pod Autoscaler Target
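The use case above amounts to computing one annotation value per pod before issuing the scale-down. A hypothetical sketch: the function name and the 10000 scale factor are invented for illustration, only the annotation key and its documented int32 bounds come from the text:

```python
def utilization_to_deletion_cost(utilization: float) -> int:
    """Map a utilization level in [0.0, 1.0] to a pod-deletion-cost value.

    Higher utilization yields a higher cost, so busier pods are deleted later.
    The clamp matches the documented annotation range."""
    cost = round(utilization * 10000)          # arbitrary illustrative scale
    return max(-2147483647, min(2147483647, cost))

# The application would then write, once per pod, something like:
#   metadata.annotations["controller.kubernetes.io/pod-deletion-cost"] = str(cost)
costs = {name: utilization_to_deletion_cost(u)
         for name, u in {"worker-1": 0.9, "worker-2": 0.1}.items()}
# worker-2 gets the lower cost, so it is the preferred scale-down victim
```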
@@ -5,7 +5,6 @@ feature:
   anchor: ReplicationController 如何工作
   description: >
     重新启动失败的容器,在节点死亡时替换并重新调度容器,杀死不响应用户定义的健康检查的容器,并且在它们准备好服务之前不会将它们公布给客户端。
 
 content_type: concept
 weight: 90
 ---
@@ -97,10 +96,12 @@ Run the example job by downloading the example file and then running this comman
 ```shell
 kubectl apply -f https://k8s.io/examples/controllers/replication.yaml
 ```
 
+<!--
+The output is similar to this:
+-->
+输出类似于:
+
 ```
 replicationcontroller/nginx created
 ```
@@ -113,10 +114,12 @@ Check on the status of the ReplicationController using this command:
 ```shell
 kubectl describe replicationcontrollers/nginx
 ```
 
+<!--
+The output is similar to this:
+-->
+输出类似于:
+
 ```
 Name:        nginx
 Namespace:   default
@@ -167,6 +170,7 @@ echo $pods
 The output is similar to this:
 -->
+输出类似于:
 
 ```
 nginx-3ntk0 nginx-4ok8v nginx-qrm3m
 ```
@@ -183,14 +187,16 @@ specifies an expression with the name from each pod in the returned list.
 ## Writing a ReplicationController Spec
 
 As with all other Kubernetes config, a ReplicationController needs `apiVersion`, `kind`, and `metadata` fields.
-For general information about working with config files, see [object management ](/docs/concepts/overview/working-with-objects/object-management/).
+For general information about working with configuration files, see [object management](/docs/concepts/overview/working-with-objects/object-management/).
 
 A ReplicationController also needs a [`.spec` section](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status).
 -->
-## 编写一个 ReplicationController Spec
+## 编写一个 ReplicationController 规约
 
-与所有其它 Kubernetes 配置一样,ReplicationController 需要 `apiVersion`、`kind` 和 `metadata` 字段。
-有关使用配置文件的常规信息,参考[对象管理](/zh/docs/concepts/overview/working-with-objects/object-management/)。
+与所有其它 Kubernetes 配置一样,ReplicationController 需要 `apiVersion`、
+`kind` 和 `metadata` 字段。
+有关使用配置文件的常规信息,参考
+[对象管理](/zh/docs/concepts/overview/working-with-objects/object-management/)。
 
 ReplicationController 也需要一个 [`.spec` 部分](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status)。
@@ -230,7 +236,7 @@ for example the [Kubelet](/docs/admin/kubelet/) or Docker.
 
 The ReplicationController can itself have labels (`.metadata.labels`). Typically, you
 would set these the same as the `.spec.template.metadata.labels`; if `.metadata.labels` is not specified
 then it defaults to `.spec.template.metadata.labels`. However, they are allowed to be
 different, and the `.metadata.labels` do not affect the behavior of the ReplicationController.
 -->
 ### ReplicationController 上的标签
@@ -351,12 +357,13 @@ To update pods to a new spec in a controlled way, use a [rolling update](#rollin
 <!--
 ### Isolating pods from a ReplicationController
 
-Pods may be removed from a ReplicationController's target set by changing their labels. This technique may be used to remove pods from service for debugging, data recovery, etc. Pods that are removed in this way will be replaced automatically (assuming that the number of replicas is not also changed).
+Pods may be removed from a ReplicationController's target set by changing their labels. This technique may be used to remove pods from service for debugging and data recovery. Pods that are removed in this way will be replaced automatically (assuming that the number of replicas is not also changed).
 -->
 ### 从 ReplicationController 中隔离 Pod
 
 通过更改 Pod 的标签,可以从 ReplicationController 的目标中删除 Pod。
-此技术可用于从服务中删除 Pod 以进行调试、数据恢复等。以这种方式删除的 Pod 将自动替换(假设复制副本的数量也没有更改)。
+此技术可用于从服务中删除 Pod 以进行调试、数据恢复等。以这种方式删除的 Pod
+将被自动替换(假设复制副本的数量也没有更改)。
 
 <!--
 ## Common usage patterns
@@ -374,12 +381,11 @@ As mentioned above, whether you have 1 pod you want to keep running, or 1000, a
 <!--
 ### Scaling
 
-The ReplicationController scales the number of replicas up or down by setting the `replicas` field.
-You can configure the ReplicationController to manage the replicas manually or by an auto-scaling control agent.
+The ReplicationController enables scaling the number of replicas up or down, either manually or by an auto-scaling control agent, by updating the `replicas` field.
 -->
 ### 扩缩容 {#scaling}
 
-通过设置 `replicas` 字段,ReplicationController 可以方便地横向扩容或缩容副本的数量。
+通过设置 `replicas` 字段,ReplicationController 可以允许扩容或缩容副本的数量。
 你可以手动或通过自动缩放控制代理来控制 ReplicationController 执行此操作。
@@ -421,7 +427,8 @@ For instance, a service might target all pods with `tier in (frontend), environm
 -->
 ### 多个版本跟踪
 
-除了在滚动更新过程中运行应用程序的多个版本之外,通常还会使用多个版本跟踪来长时间,甚至持续运行多个版本。这些跟踪将根据标签加以区分。
+除了在滚动更新过程中运行应用程序的多个版本之外,通常还会使用多个版本跟踪来长时间,
+甚至持续运行多个版本。这些跟踪将根据标签加以区分。
 
 例如,一个服务可能把具有 `tier in (frontend), environment in (prod)` 的所有 Pod 作为目标。
 现在假设你有 10 个副本的 Pod 组成了这个层。但是你希望能够 `canary` (`金丝雀`)发布这个组件的新版本。
@@ -429,7 +436,8 @@ For instance, a service might target all pods with `tier in (frontend), environm
 标签为 `tier=frontend, environment=prod, track=stable` 而为 `canary`
 设置另一个 ReplicationController,其中 `replicas` 设置为 1,
 标签为 `tier=frontend, environment=prod, track=canary`。
-现在这个服务覆盖了 `canary` 和非 `canary` Pod。但你可以单独处理 ReplicationController,以测试、监控结果等。
+现在这个服务覆盖了 `canary` 和非 `canary` Pod。但你可以单独处理
+ReplicationController,以测试、监控结果等。
 
 <!--
 ### Using ReplicationControllers with Services
@@ -441,7 +449,8 @@ A ReplicationController will never terminate on its own, but it isn't expected t
 -->
 ### 和服务一起使用 ReplicationController
 
-多个 ReplicationController 可以位于一个服务的后面,例如,一部分流量流向旧版本,一部分流量流向新版本。
+多个 ReplicationController 可以位于一个服务的后面,例如,一部分流量流向旧版本,
+一部分流量流向新版本。
 
 一个 ReplicationController 永远不会自行终止,但它不会像服务那样长时间存活。
 服务可以由多个 ReplicationController 控制的 Pod 组成,并且在服务的生命周期内
@@ -455,8 +464,10 @@ Pods created by a ReplicationController are intended to be fungible and semantic
 -->
 ## 编写多副本的应用
 
-由 ReplicationController 创建的 Pod 是可替换的,语义上是相同的,尽管随着时间的推移,它们的配置可能会变得异构。
-这显然适合于多副本的无状态服务器,但是 ReplicationController 也可以用于维护主选、分片和工作池应用程序的可用性。
+由 ReplicationController 创建的 Pod 是可替换的,语义上是相同的,
+尽管随着时间的推移,它们的配置可能会变得异构。
+这显然适合于多副本的无状态服务器,但是 ReplicationController 也可以用于维护主选、
+分片和工作池应用程序的可用性。
 这样的应用程序应该使用动态的工作分配机制,例如
 [RabbitMQ 工作队列](https://www.rabbitmq.com/tutorials/tutorial-two-python.html),
 而不是静态的或者一次性定制每个 Pod 的配置,这被认为是一种反模式。
@@ -481,8 +492,10 @@ The ReplicationController is forever constrained to this narrow responsibility.
 -->
 ReplicationController 永远被限制在这个狭隘的职责范围内。
 它本身既不执行就绪态探测,也不执行活跃性探测。
-它不负责执行自动缩放,而是由外部自动缩放器控制(如 [#492](https://issue.k8s.io/492) 中所述),后者负责更改其 `replicas` 字段值。
-我们不会向 ReplicationController 添加调度策略(例如,[spreading](https://issue.k8s.io/367#issuecomment-48428019))。
+它不负责执行自动缩放,而是由外部自动缩放器控制(如
+[#492](https://issue.k8s.io/492) 中所述),后者负责更改其 `replicas` 字段值。
+我们不会向 ReplicationController 添加调度策略(例如,
+[spreading](https://issue.k8s.io/367#issuecomment-48428019))。
 它也不应该验证所控制的 Pod 是否与当前指定的模板匹配,因为这会阻碍自动调整大小和其他自动化过程。
 类似地,完成期限、整理依赖关系、配置扩展和其他特性也属于其他地方。
 我们甚至计划考虑批量创建 Pod 的机制(查阅 [#170](https://issue.k8s.io/170))。
@@ -549,7 +562,8 @@ Unlike in the case where a user directly created pods, a ReplicationController r
 -->
 ### 裸 Pod
 
-与用户直接创建 Pod 的情况不同,ReplicationController 能够替换因某些原因被删除或被终止的 Pod,例如在节点故障或中断节点维护的情况下,例如内核升级。
+与用户直接创建 Pod 的情况不同,ReplicationController 能够替换因某些原因
+被删除或被终止的 Pod,例如在节点故障或执行中断性的节点维护(如内核升级)的情况下。
 因此,我们建议你使用 ReplicationController,即使你的应用程序只需要一个 Pod。
 可以将其看作类似于进程管理器,它只管理跨多个节点的多个 Pod,而不是单个节点上的单个进程。
 ReplicationController 将本地容器重启委托给节点上的某个代理(例如,Kubelet 或 Docker)。
@@ -1,17 +1,17 @@
 ---
 title: 已完成资源的 TTL 控制器
 content_type: concept
-weight: 65
+weight: 70
 ---
 <!--
 title: TTL Controller for Finished Resources
 content_type: concept
-weight: 65
+weight: 70
 -->
 
 <!-- overview -->
 
-{{< feature-state for_k8s_version="v1.12" state="alpha" >}}
+{{< feature-state for_k8s_version="v1.21" state="beta" >}}
 
 <!--
 The TTL controller provides a TTL mechanism to limit the lifetime of resource
@@ -25,14 +25,14 @@ TTL 控制器目前只处理 {{< glossary_tooltip text="Job" term_id="job" >}}
 可能以后会扩展以处理将完成执行的其他资源,例如 Pod 和自定义资源。
 
 <!--
-Alpha Disclaimer: this feature is currently alpha, and can be enabled with both kube-apiserver and kube-controller-manager
+This feature is currently beta and enabled by default, and can be disabled via
 [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
-`TTLAfterFinished`.
+`TTLAfterFinished` in both kube-apiserver and kube-controller-manager.
 -->
-Alpha 免责声明:此功能目前是 alpha 版,并且可以通过 `kube-apiserver` 和
+此功能目前是 Beta 版并默认启用,可以通过 `kube-apiserver` 和
 `kube-controller-manager` 上的
 [特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/)
-`TTLAfterFinished` 启用。
+`TTLAfterFinished` 禁用。
 
 <!-- body -->