Merge pull request #39343 from Zhuzhenghao/replicaset
Cleanup page replicaset
This commit is contained in:
commit dd3a6796cf
@ -69,7 +69,7 @@ kubectl get rs

And see the frontend one you created:

```
NAME       DESIRED   CURRENT   READY   AGE
frontend   3         3         3       6s
```

@ -118,7 +118,7 @@ kubectl get pods

You should see Pod information similar to:

```
NAME             READY   STATUS    RESTARTS   AGE
frontend-b2zdv   1/1     Running   0          6m36s
frontend-vcmts   1/1     Running   0          6m36s
```

@ -160,7 +160,7 @@ While you can create bare Pods with no problems, it is strongly recommended to m
|
|||
labels which match the selector of one of your ReplicaSets. The reason for this is because a ReplicaSet is not limited
|
||||
to owning Pods specified by its template-- it can acquire other Pods in the manner specified in the previous sections.
|
||||
|
||||
Take the previous frontend ReplicaSet example, and the Pods specified in the following manifest:
|
||||
Take the previous frontend ReplicaSet example, and the Pods specified in the following manifest:
|
||||
|
||||
{{< codenew file="pods/pod-rs.yaml" >}}
|
||||
|
||||
|
|
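For reference, a rough sketch of what such bare Pods could look like (the Pod names and images below are illustrative assumptions; the authoritative manifest is the `pods/pod-rs.yaml` file referenced above). The essential detail is the `tier: frontend` label, which matches the selector of the frontend ReplicaSet from the earlier example:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  labels:
    tier: frontend   # matches the frontend ReplicaSet's selector
spec:
  containers:
  - name: hello1
    image: gcr.io/google-samples/hello-app:2.0
---
apiVersion: v1
kind: Pod
metadata:
  name: pod2
  labels:
    tier: frontend   # matches the frontend ReplicaSet's selector
spec:
  containers:
  - name: hello2
    image: gcr.io/google-samples/hello-app:1.0
```

Because the labels match its selector, the frontend ReplicaSet will acquire these bare Pods as soon as they are created.
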
@ -229,9 +229,9 @@ As with all other Kubernetes API objects, a ReplicaSet needs the `apiVersion`, `

For ReplicaSets, the `kind` is always a ReplicaSet.

When the control plane creates new Pods for a ReplicaSet, the `.metadata.name` of the
ReplicaSet is part of the basis for naming those Pods. The name of a ReplicaSet must be a valid
[DNS subdomain](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)
value, but this can produce unexpected results for the Pod hostnames. For best compatibility,
the name should follow the more restrictive rules for a
[DNS label](/docs/concepts/overview/working-with-objects/names#dns-label-names).

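Putting those requirements together, the top of a ReplicaSet manifest looks roughly like this (a sketch; the `frontend` name and labels follow the example used throughout this page, and `frontend` also satisfies the stricter DNS-label rules):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  # A valid DNS subdomain; keeping it a valid DNS label too gives
  # the best compatibility for derived Pod names such as frontend-b2zdv.
  name: frontend
  labels:
    app: guestbook
    tier: frontend
```
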
@ -288,8 +288,8 @@ When using the REST API or the `client-go` library, you must set `propagationPol

```shell
kubectl proxy --port=8080
curl -X DELETE 'localhost:8080/apis/apps/v1/namespaces/default/replicasets/frontend' \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}' \
  -H "Content-Type: application/json"
```

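If you would rather not call the REST API directly, a foreground cascading delete can also be done with `kubectl` (assuming the ReplicaSet is named `frontend` in the `default` namespace):

```shell
# Delete the ReplicaSet, blocking until its dependent Pods are deleted first.
kubectl delete rs frontend --cascade=foreground
```
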

### Deleting just a ReplicaSet

@ -303,11 +303,11 @@ For example:

```shell
kubectl proxy --port=8080
curl -X DELETE 'localhost:8080/apis/apps/v1/namespaces/default/replicasets/frontend' \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Orphan"}' \
  -H "Content-Type: application/json"
```

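The `kubectl` equivalent is the `--cascade=orphan` flag, which deletes the owner while leaving its Pods running:

```shell
# Delete only the ReplicaSet; its Pods are orphaned rather than deleted.
kubectl delete rs frontend --cascade=orphan
```
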
Once the original is deleted, you can create a new ReplicaSet to replace it. As long
as the old and new `.spec.selector` are the same, the new one will adopt the old Pods.
However, it will not make any effort to make existing Pods match a new, different pod template.
To update Pods to a new spec in a controlled way, use a

@ -335,19 +335,19 @@ prioritize scaling down pods based on the following general algorithm:

1. If the pods' creation times differ, the pod that was created more recently
   comes before the older pod (the creation times are bucketed on an integer log scale
   when the `LogarithmicScaleDown` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) is enabled)

If all of the above match, then selection is random.

### Pod deletion cost

{{< feature-state for_k8s_version="v1.22" state="beta" >}}

Using the [`controller.kubernetes.io/pod-deletion-cost`](/docs/reference/labels-annotations-taints/#pod-deletion-cost)
annotation, users can set a preference regarding which pods to remove first when downscaling a ReplicaSet.

The annotation should be set on the pod; the valid range is [-2147483647, 2147483647]. It represents the cost of
deleting a pod compared to other pods belonging to the same ReplicaSet. Pods with lower deletion
cost are preferred to be deleted before pods with higher deletion cost.

The implicit value for this annotation for pods that don't set it is 0; negative values are permitted.
Invalid values will be rejected by the API server.

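As an illustrative sketch (the Pod name, image, and cost value below are assumptions, not taken from this page), the annotation lives in the Pod's metadata, and its value must be quoted because annotation values are strings:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: frontend-low-priority   # hypothetical Pod name
  labels:
    tier: frontend
  annotations:
    # Cheaper to delete than its siblings (implicit cost 0),
    # so this Pod is preferred for removal on scale-down.
    controller.kubernetes.io/pod-deletion-cost: "-100"
spec:
  containers:
  - name: app
    image: gcr.io/google-samples/hello-app:1.0
```
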
@ -360,13 +360,13 @@ This feature is beta and enabled by default. You can disable it using the

- This is honored on a best-effort basis, so it does not offer any guarantees on pod deletion order.
- Users should avoid updating the annotation frequently, such as updating it based on a metric value,
  because doing so will generate a significant number of pod updates on the apiserver.
{{< /note >}}

#### Example Use Case

The different pods of an application could have different utilization levels. On scale down, the application
may prefer to remove the pods with lower utilization. To avoid frequently updating the pods, the application
should update `controller.kubernetes.io/pod-deletion-cost` once before issuing a scale-down (setting the
annotation to a value proportional to the pod's utilization level). This works if the application itself controls
the scale-down; for example, the driver pod of a Spark deployment.

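A sketch of that pattern using `kubectl` (the Pod name and cost value are hypothetical; a real application would perform the equivalent API calls itself):

```shell
# Mark the least-utilized Pod as the cheapest to delete...
kubectl annotate pod frontend-vcmts \
  controller.kubernetes.io/pod-deletion-cost="-1000" --overwrite
# ...then scale down; that Pod is preferred for removal.
kubectl scale rs frontend --replicas=2
```
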
@ -400,7 +400,7 @@ kubectl autoscale rs frontend --max=10 --min=3 --cpu-percent=50

[`Deployment`](/docs/concepts/workloads/controllers/deployment/) is an object which can own ReplicaSets and update
them and their Pods via declarative, server-side rolling updates.
While ReplicaSets can be used independently, today they're mainly used by Deployments as a mechanism to orchestrate Pod
creation, deletion and updates. When you use Deployments, you don't have to worry about managing the ReplicaSets that
they create. Deployments own and manage their ReplicaSets.
As such, it is recommended to use Deployments when you want ReplicaSets.

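For comparison, a minimal Deployment that would own an equivalent ReplicaSet might look like this (a sketch modeled on the frontend example; the image is illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      tier: frontend
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google_samples/gb-frontend:v3   # illustrative image
```

The Deployment creates and owns a ReplicaSet (named `frontend-<hash>`) and replaces it on template changes, so you never edit the ReplicaSet directly.
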
@ -422,7 +422,7 @@ expected to terminate on their own (that is, batch jobs).

### DaemonSet

Use a [`DaemonSet`](/docs/concepts/workloads/controllers/daemonset/) instead of a ReplicaSet for Pods that provide a
machine-level function, such as machine monitoring or machine logging. These Pods have a lifetime that is tied
to a machine lifetime: the Pod needs to be running on the machine before other Pods start, and is
safe to terminate when the machine is otherwise ready to be rebooted or shut down.

@ -444,4 +444,3 @@ As such, ReplicaSets are preferred over ReplicationControllers

object definition to understand the API for replica sets.
* Read about [PodDisruptionBudget](/docs/concepts/workloads/pods/disruptions/) and how
  you can use it to manage application availability during disruptions.