mv "Assign Pods" and "Taints and Tolerations" concepts to "Scheduling and Eviction"
* Moved "Assigning Pods to Nodes" article to Concepts -> Scheduling and Eviction * Moved "Taints and Tolerations" article to Concepts -> Scheduling and Eviction * Updated weight of the "Kubernetes Scheduler" article so it appears first * Updated redirects * Replaced links to "Assigning Pods to Nodes" and "Taints and Tolerations" articles to avoid redirects. Signed-off-by: Adam Kaplan <adam.kaplan@redhat.com>
parent 4d5ddc5586
commit 55e17b86f2
@@ -144,7 +144,7 @@ The local persistent volume beta feature is not complete by far. Some notable en
 
 [Pod disruption budget](/docs/concepts/workloads/pods/disruptions/) is also very important for those workloads that must maintain quorum. Setting a disruption budget for your workload ensures that it does not drop below quorum due to voluntary disruption events, such as node drains during upgrade.
 
-[Pod affinity and anti-affinity](/docs/concepts/configuration/assign-pod-node/#inter-pod-affinity-and-anti-affinity-beta-feature) ensures that your workloads stay either co-located or spread out across failure domains. If you have multiple local persistent volumes available on a single node, it may be preferable to specify a pod anti-affinity policy to spread your workload across nodes. Note that if you want multiple pods to share the same local persistent volume, you do not need to specify a pod affinity policy. The scheduler understands the locality constraints of the local persistent volume and schedules your pod to the correct node.
+[Pod affinity and anti-affinity](/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity-beta-feature) ensures that your workloads stay either co-located or spread out across failure domains. If you have multiple local persistent volumes available on a single node, it may be preferable to specify a pod anti-affinity policy to spread your workload across nodes. Note that if you want multiple pods to share the same local persistent volume, you do not need to specify a pod affinity policy. The scheduler understands the locality constraints of the local persistent volume and schedules your pod to the correct node.
 
 ## Getting involved
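As a sketch of the anti-affinity policy the paragraph above recommends, spreading replicas so each can use its own local persistent volume (the `app: local-pv-workload` label is hypothetical):

```yaml
affinity:
  podAntiAffinity:
    # Hard rule: never co-locate two Pods carrying this label on the same node.
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: local-pv-workload    # hypothetical label
      topologyKey: kubernetes.io/hostname
```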
@@ -27,7 +27,7 @@ Why is RuntimeClass a pod level concept? The Kubernetes resource model expects c
 
 ## What's next?
 
-The RuntimeClass resource is an important foundation for surfacing runtime properties to the control plane. For example, to implement scheduler support for clusters with heterogeneous nodes supporting different runtimes, we might add [NodeAffinity](/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity) terms to the RuntimeClass definition. Another area to address is managing the variable resource requirements to run pods of different runtimes. The [Pod Overhead proposal](https://docs.google.com/document/d/1EJKT4gyl58-kzt2bnwkv08MIUZ6lkDpXcxkHqCvvAp4/preview) was an early take on this that aligns nicely with the RuntimeClass design, and may be pursued further.
+The RuntimeClass resource is an important foundation for surfacing runtime properties to the control plane. For example, to implement scheduler support for clusters with heterogeneous nodes supporting different runtimes, we might add [NodeAffinity](/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity) terms to the RuntimeClass definition. Another area to address is managing the variable resource requirements to run pods of different runtimes. The [Pod Overhead proposal](https://docs.google.com/document/d/1EJKT4gyl58-kzt2bnwkv08MIUZ6lkDpXcxkHqCvvAp4/preview) was an early take on this that aligns nicely with the RuntimeClass design, and may be pursued further.
 
 Many other RuntimeClass extensions have also been proposed, and will be revisited as the feature continues to develop and mature. A few more extensions that are being considered include:
@@ -191,7 +191,7 @@ all the Pod objects running on the node to be deleted from the API server, and f
 names.
 
 The node lifecycle controller automatically creates
-[taints](/docs/concepts/configuration/taint-and-toleration/) that represent conditions.
+[taints](/docs/concepts/scheduling-eviction/taint-and-toleration/) that represent conditions.
 The scheduler takes the Node's taints into consideration when assigning a Pod to a Node.
 Pods can also have tolerations which let them tolerate a Node's taints.
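A minimal sketch of a toleration against one of the condition taints this controller creates; `node.kubernetes.io/not-ready` is the standard key for the NotReady condition, while the Pod name and image are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: condition-tolerant-pod   # hypothetical name
spec:
  containers:
  - name: app
    image: nginx
  tolerations:
  # Stay bound for 5 minutes after the node is marked NotReady before being evicted.
  - key: node.kubernetes.io/not-ready
    operator: Exists
    effect: NoExecute
    tolerationSeconds: 300
```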
@@ -163,7 +163,7 @@ with the pod's tolerations in admission, effectively taking the union of the set
 by each.
 
 To learn more about configuring the node selector and tolerations, see [Assigning Pods to
-Nodes](/docs/concepts/configuration/assign-pod-node/).
+Nodes](/docs/concepts/scheduling-eviction/assign-pod-node/).
 
 [RuntimeClass admission controller]: /docs/reference/access-authn-authz/admission-controllers/#runtimeclass
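The union semantics above apply to the RuntimeClass `scheduling` stanza; a minimal sketch, assuming a hypothetical gVisor class (the handler name and label values are illustrative, not from this commit):

```yaml
apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
  name: gvisor              # hypothetical runtime class
handler: runsc              # hypothetical CRI handler name
scheduling:
  # Merged into the node selector of every Pod that requests this runtime class.
  nodeSelector:
    runtime: gvisor
  # At admission, the union of these and the Pod's own tolerations is taken.
  tolerations:
  - key: runtime
    operator: Equal
    value: gvisor
    effect: NoSchedule
```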
@@ -226,6 +226,6 @@ selector:
 #### Selecting sets of nodes
 
 One use case for selecting over labels is to constrain the set of nodes onto which a pod can schedule.
-See the documentation on [node selection](/docs/concepts/configuration/assign-pod-node/) for more information.
+See the documentation on [node selection](/docs/concepts/scheduling-eviction/assign-pod-node/) for more information.
 
 {{% /capture %}}
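A minimal sketch of that use case, assuming the target nodes have already been labeled `disktype=ssd`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  # Only nodes carrying the disktype=ssd label are candidates for this Pod.
  nodeSelector:
    disktype: ssd
```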
@@ -155,7 +155,7 @@ value is `another-node-label-value` should be preferred.
 
 You can see the operator `In` being used in the example. The new node affinity syntax supports the following operators: `In`, `NotIn`, `Exists`, `DoesNotExist`, `Gt`, `Lt`.
 You can use `NotIn` and `DoesNotExist` to achieve node anti-affinity behavior, or use
-[node taints](/docs/concepts/configuration/taint-and-toleration/) to repel pods from specific nodes.
+[node taints](/docs/concepts/scheduling-eviction/taint-and-toleration/) to repel pods from specific nodes.
 
 If you specify both `nodeSelector` and `nodeAffinity`, *both* must be satisfied for the pod
 to be scheduled onto a candidate node.
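A minimal sketch of the `NotIn`-based anti-affinity behavior described above (the label key and value are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: avoid-gpu-nodes      # hypothetical name
spec:
  containers:
  - name: app
    image: nginx
  affinity:
    nodeAffinity:
      # Hard requirement: never schedule onto nodes whose node-type label is "gpu".
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: node-type       # hypothetical label key
            operator: NotIn
            values:
            - gpu
```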
@@ -392,7 +392,7 @@ The above pod will run on the node kube-01.
 
 {{% capture whatsnext %}}
 
-[Taints](/docs/concepts/configuration/taint-and-toleration/) allow a Node to *repel* a set of Pods.
+[Taints](/docs/concepts/scheduling-eviction/taint-and-toleration/) allow a Node to *repel* a set of Pods.
 
 The design documents for
 [node affinity](https://git.k8s.io/community/contributors/design-proposals/scheduling/nodeaffinity.md)
@@ -1,7 +1,7 @@
 ---
 title: Kubernetes Scheduler
 content_template: templates/concept
-weight: 50
+weight: 10
 ---
 
 {{% capture overview %}}
@@ -10,7 +10,7 @@ weight: 40
 
 {{% capture overview %}}
-[_Node affinity_](/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity),
+[_Node affinity_](/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity),
 is a property of {{< glossary_tooltip text="Pods" term_id="pod" >}} that *attracts* them to
 a set of {{< glossary_tooltip text="nodes" term_id="node" >}} (either as a preference or a
 hard requirement). _Taints_ are the opposite -- they allow a node to repel a set of pods.
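For the repelling side, a minimal sketch of a taint declared on a Node object (node name, key, and value are hypothetical):

```yaml
apiVersion: v1
kind: Node
metadata:
  name: node1                # hypothetical node name
spec:
  taints:
  # Pods without a matching toleration will not be scheduled onto this node.
  - key: special-workload    # hypothetical key
    value: "true"
    effect: NoSchedule
```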
@@ -905,7 +905,7 @@ the NLB Target Group's health check on the auto-assigned
 `.spec.healthCheckNodePort` and not receive any traffic.
 
 In order to achieve even traffic, either use a DaemonSet or specify a
-[pod anti-affinity](/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity)
+[pod anti-affinity](/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity)
 to not locate on the same node.
 
 You can also use NLB Services with the [internal load balancer](/docs/concepts/services-networking/service/#internal-load-balancer)
@@ -169,9 +169,9 @@ will delay the binding and provisioning of a PersistentVolume until a Pod using
 PersistentVolumes will be selected or provisioned conforming to the topology that is
 specified by the Pod's scheduling constraints. These include, but are not limited to, [resource
 requirements](/docs/concepts/configuration/manage-compute-resources-container),
-[node selectors](/docs/concepts/configuration/assign-pod-node/#nodeselector),
+[node selectors](/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector),
 [pod affinity and
-anti-affinity](/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity),
+anti-affinity](/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity),
 and [taints and tolerations](/docs/concepts/configuration/taint-and-toleration).
 
 The following plugins support `WaitForFirstConsumer` with dynamic provisioning:
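A minimal sketch of a StorageClass using that delayed binding mode (the class name is hypothetical; the GCE PD provisioner is an assumed example, any `WaitForFirstConsumer`-capable plugin works):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: topology-aware           # hypothetical name
provisioner: kubernetes.io/gce-pd    # assumed provisioner for illustration
# Binding and provisioning wait until a Pod using the PVC is scheduled,
# so the volume's topology can match the Pod's scheduling constraints.
volumeBindingMode: WaitForFirstConsumer
```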
@@ -99,8 +99,8 @@ create a Pod with a different value on a node for testing.
 
 If you specify a `.spec.template.spec.nodeSelector`, then the DaemonSet controller will
 create Pods on nodes which match that [node
-selector](/docs/concepts/configuration/assign-pod-node/). Likewise if you specify a `.spec.template.spec.affinity`,
-then the DaemonSet controller will create Pods on nodes which match that [node affinity](/docs/concepts/configuration/assign-pod-node/).
+selector](/docs/concepts/scheduling-eviction/assign-pod-node/). Likewise if you specify a `.spec.template.spec.affinity`,
+then the DaemonSet controller will create Pods on nodes which match that [node affinity](/docs/concepts/scheduling-eviction/assign-pod-node/).
 If you do not specify either, then the DaemonSet controller will create Pods on all nodes.
 
 ## How Daemon Pods are Scheduled
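A minimal sketch of a DaemonSet restricted to labeled nodes via `.spec.template.spec.nodeSelector` (name, label, and image are hypothetical):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ssd-monitor            # hypothetical name
spec:
  selector:
    matchLabels:
      app: ssd-monitor
  template:
    metadata:
      labels:
        app: ssd-monitor
    spec:
      # Only nodes labeled disk=ssd run a copy of this Pod.
      nodeSelector:
        disk: ssd
      containers:
      - name: monitor
        image: example/ssd-monitor:latest   # hypothetical image
```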
@@ -250,7 +250,7 @@ for more details.
 
 This plug-in facilitates creation of dedicated nodes with extended resources.
 If operators want to create dedicated nodes with extended resources (like GPUs, FPGAs etc.), they are expected to
-[taint the node](/docs/concepts/configuration/taint-and-toleration/#example-use-cases) with the extended resource
+[taint the node](/docs/concepts/scheduling-eviction/taint-and-toleration/#example-use-cases) with the extended resource
 name as the key. This admission controller, if enabled, automatically
 adds tolerations for such taints to pods requesting extended resources, so users don't have to manually
 add these tolerations.
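A sketch of the net effect, using a hypothetical extended resource `example.com/device`: the operator taints dedicated nodes with that key, and this controller adds a matching toleration to Pods requesting the resource:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: device-user            # hypothetical name
spec:
  containers:
  - name: app
    image: nginx
    resources:
      limits:
        example.com/device: 1  # request for the hypothetical extended resource
  # Added automatically by the ExtendedResourceToleration admission controller,
  # matching a node taint whose key is the extended resource name:
  tolerations:
  - key: example.com/device
    operator: Exists
    effect: NoSchedule
```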
@@ -335,7 +335,7 @@ Each feature gate is designed for enabling/disabling a specific feature:
 
 - `Accelerators`: Enable Nvidia GPU support when using Docker
 - `AdvancedAuditing`: Enable [advanced auditing](/docs/tasks/debug-application-cluster/audit/#advanced-audit)
-- `AffinityInAnnotations`(*deprecated*): Enable setting [Pod affinity or anti-affinity](/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity).
+- `AffinityInAnnotations`(*deprecated*): Enable setting [Pod affinity or anti-affinity](/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity).
 - `AllowExtTrafficLocalEndpoints`: Enable a service to route external requests to node local endpoints.
 - `AnyVolumeDataSource`: Enable use of any custom resource as the `DataSource` of a
   {{< glossary_tooltip text="PVC" term_id="persistent-volume-claim" >}}.
@@ -489,7 +489,7 @@ Each feature gate is designed for enabling/disabling a specific feature:
 - `Sysctls`: Enable support for namespaced kernel parameters (sysctls) that can be set for each pod.
   See [sysctls](/docs/tasks/administer-cluster/sysctl-cluster/) for more details.
 - `TaintBasedEvictions`: Enable evicting pods from nodes based on taints on nodes and tolerations on Pods.
-  See [taints and tolerations](/docs/concepts/configuration/taint-and-toleration/) for more details.
+  See [taints and tolerations](/docs/concepts/scheduling-eviction/taint-and-toleration/) for more details.
 - `TaintNodesByCondition`: Enable automatic tainting nodes based on [node conditions](/docs/concepts/architecture/nodes/#condition).
 - `TokenRequest`: Enable the `TokenRequest` endpoint on service account resources.
 - `TokenRequestProjection`: Enable the injection of service account tokens into
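Gates like these are toggled per component with the `--feature-gates` flag; a minimal sketch (the component and gates chosen here are illustrative):

```bash
# Enable taint-based evictions and condition tainting on the controller manager;
# repeat the flag with the relevant gates on each component that needs them.
kube-controller-manager --feature-gates=TaintBasedEvictions=true,TaintNodesByCondition=true
```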
@@ -2,7 +2,7 @@
 title: Taint
 id: taint
 date: 2019-01-11
-full_link: /docs/concepts/configuration/taint-and-toleration/
+full_link: /docs/concepts/scheduling-eviction/taint-and-toleration/
 short_description: >
   A core object consisting of three required properties: key, value, and effect. Taints prevent the scheduling of pods on nodes or node groups.
@@ -2,7 +2,7 @@
 title: Toleration
 id: toleration
 date: 2019-01-11
-full_link: /docs/concepts/configuration/taint-and-toleration/
+full_link: /docs/concepts/scheduling-eviction/taint-and-toleration/
 short_description: >
   A core object consisting of three required properties: key, value, and effect. Tolerations enable the scheduling of pods on nodes or node groups that have a matching taint.
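Both glossary entries name the same three required properties; as a sketch, a taint and the toleration that matches it (key and value are hypothetical):

```yaml
# Node spec fragment -- the taint's three required properties:
taints:
- key: dedicated       # hypothetical key
  value: batch
  effect: NoSchedule
---
# Pod spec fragment -- a toleration matching that taint exactly:
tolerations:
- key: dedicated
  operator: Equal
  value: batch
  effect: NoSchedule
```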
@@ -77,7 +77,7 @@ The following *priorities* implement scoring:
   {{< glossary_tooltip term_id="replica-set" >}}.
 
 - `InterPodAffinityPriority`: Implements preferred
-  [inter pod affinity and anti-affinity](/docs/concepts/configuration/assign-pod-node/#inter-pod-affinity-and-anti-affinity).
+  [inter pod affinity and anti-affinity](/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity).
 
 - `LeastRequestedPriority`: Favors nodes with fewer requested resources. In other
   words, the more Pods that are placed on a Node, and the more resources those
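For intuition, the scoring formula long documented for `LeastRequestedPriority`, averaging the free CPU and memory fractions on a 0-10 scale (a sketch; the exact constants live in the scheduler source):

```latex
\text{score} = \frac{1}{2}\left(
  \frac{(\text{cpu}_{\text{capacity}} - \sum \text{cpu}_{\text{requested}}) \cdot 10}{\text{cpu}_{\text{capacity}}}
  + \frac{(\text{mem}_{\text{capacity}} - \sum \text{mem}_{\text{requested}}) \cdot 10}{\text{mem}_{\text{capacity}}}
\right)
```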
@@ -97,7 +97,7 @@ The following *priorities* implement scoring:
 
 - `NodeAffinityPriority`: Prioritizes nodes according to node affinity scheduling
   preferences indicated in PreferredDuringSchedulingIgnoredDuringExecution.
-  You can read more about this in [Assigning Pods to Nodes](/docs/concepts/configuration/assign-pod-node/).
+  You can read more about this in [Assigning Pods to Nodes](/docs/concepts/scheduling-eviction/assign-pod-node/).
 
 - `TaintTolerationPriority`: Prepares the priority list for all the nodes, based on
   the number of intolerable taints on the node. This policy adjusts a node's rank
@@ -68,7 +68,7 @@ extension points:
   Pod runs.
   Extension points: `Score`.
 - `TaintToleration`: Implements
-  [taints and tolerations](/docs/concepts/configuration/taint-and-toleration/).
+  [taints and tolerations](/docs/concepts/scheduling-eviction/taint-and-toleration/).
   Implements extension points: `Filter`, `Prescore`, `Score`.
 - `NodeName`: Checks if a Pod spec node name matches the current node.
   Extension points: `Filter`.
@@ -79,8 +79,8 @@ extension points:
   `scheduler.alpha.kubernetes.io/preferAvoidPods`.
   Extension points: `Score`.
 - `NodeAffinity`: Implements
-  [node selectors](/docs/concepts/configuration/assign-pod-node/#nodeselector)
-  and [node affinity](/docs/concepts/configuration/assign-pod-node/#node-affinity).
+  [node selectors](/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector)
+  and [node affinity](/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity).
   Extension points: `Filter`, `Score`.
 - `PodTopologySpread`: Implements
   [Pod topology spread](/docs/concepts/workloads/pods/pod-topology-spread-constraints/).
@@ -117,7 +117,7 @@ extension points:
   the node.
   Extension points: `Filter`.
 - `InterPodAffinity`: Implements
-  [inter-Pod affinity and anti-affinity](/docs/concepts/configuration/assign-pod-node/#inter-pod-affinity-and-anti-affinity).
+  [inter-Pod affinity and anti-affinity](/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity).
   Extension points: `PreFilter`, `Filter`, `PreScore`, `Score`.
 - `PrioritySort`: Provides the default priority based sorting.
   Extension points: `QueueSort`.
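Plugins like these are wired into scheduling profiles via the scheduler configuration; a minimal sketch disabling one scoring plugin (the API version is assumed to match the scheduling-framework era of these docs):

```yaml
apiVersion: kubescheduler.config.k8s.io/v1alpha2   # assumed version for this docs era
kind: KubeSchedulerConfiguration
profiles:
- plugins:
    score:
      disabled:
      # Stop scoring by taints/tolerations while keeping taint filtering intact.
      - name: TaintToleration
```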
@@ -356,7 +356,7 @@ the field will be omitted when marshalling. When the field is omitted, kubeadm a
 
 There are at least two workarounds:
 
-1. Use the `node-role.kubernetes.io/master:PreferNoSchedule` taint instead of an empty slice. [Pods will get scheduled on masters](/docs/concepts/configuration/taint-and-toleration/), unless other nodes have capacity.
+1. Use the `node-role.kubernetes.io/master:PreferNoSchedule` taint instead of an empty slice. [Pods will get scheduled on masters](/docs/concepts/scheduling-eviction/taint-and-toleration/), unless other nodes have capacity.
 
 2. Remove the taint after kubeadm init exits:
    ```bash
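The bash block is truncated in this view; for reference, the removal command documented for this step looks like the following sketch (the trailing `-` is kubectl's remove-taint syntax):

```bash
# Remove the master NoSchedule taint from all nodes; the trailing "-" removes it.
kubectl taint nodes --all node-role.kubernetes.io/master-
```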
@@ -173,8 +173,8 @@ to the metadata API, and avoid using provisioning data to deliver secrets.
 ### Controlling which nodes pods may access
 
 By default, there are no restrictions on which nodes may run a pod. Kubernetes offers a
-[rich set of policies for controlling placement of pods onto nodes](/docs/concepts/configuration/assign-pod-node/)
-and the [taint based pod placement and eviction](/docs/concepts/configuration/taint-and-toleration/)
+[rich set of policies for controlling placement of pods onto nodes](/docs/concepts/scheduling-eviction/assign-pod-node/)
+and the [taint based pod placement and eviction](/docs/concepts/scheduling-eviction/taint-and-toleration/)
 that are available to end users. For many clusters use of these policies to separate workloads
 can be a convention that authors adopt or enforce via tooling.
@@ -159,7 +159,7 @@ A pod with the _unsafe_ sysctls will fail to launch on any node which has not
 enabled those two _unsafe_ sysctls explicitly. As with _node-level_ sysctls it
 is recommended to use
 [_taints and toleration_ feature](/docs/reference/generated/kubectl/kubectl-commands/#taint) or
-[taints on nodes](/docs/concepts/configuration/taint-and-toleration/)
+[taints on nodes](/docs/concepts/scheduling-eviction/taint-and-toleration/)
 to schedule those pods onto the right nodes.
 
 ## PodSecurityPolicy
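Nodes opt in to specific unsafe sysctls through a kubelet flag; a minimal sketch (the sysctl names are the usual documented examples):

```bash
# Allow two unsafe sysctls on this node; pods setting them should also carry a
# toleration for a matching node taint so they land only on nodes like this one.
kubelet --allowed-unsafe-sysctls 'kernel.msg*,net.core.somaxconn' ...
```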
@@ -116,5 +116,5 @@ This means that the pod will prefer a node that has a `disktype=ssd` label.
 
 {{% capture whatsnext %}}
 Learn more about
-[Node Affinity](/docs/concepts/configuration/assign-pod-node/#node-affinity).
+[Node Affinity](/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity).
 {{% /capture %}}
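The hunk header above refers to preferring a `disktype=ssd` node; a minimal sketch of that soft preference, in contrast to the hard `nodeSelector` shown earlier:

```yaml
affinity:
  nodeAffinity:
    # Soft preference: the scheduler favors ssd-labeled nodes but can fall back.
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 1
      preference:
        matchExpressions:
        - key: disktype
          operator: In
          values:
          - ssd
```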
@@ -341,7 +341,7 @@ nodes. There are lots of ways to setup the profiles though, such as:
 
 The scheduler is not aware of which profiles are loaded onto which node, so the full set of profiles
 must be loaded onto every node. An alternative approach is to add a node label for each profile (or
 class of profiles) on the node, and use a
-[node selector](/docs/concepts/configuration/assign-pod-node/) to ensure the Pod is run on a
+[node selector](/docs/concepts/scheduling-eviction/assign-pod-node/) to ensure the Pod is run on a
 node with the required profile.
 
 ### Restricting profiles with the PodSecurityPolicy
@@ -91,11 +91,13 @@
 /docs/concepts/cluster-administration/sysctl-cluster/ /docs/tasks/administer-cluster/sysctl-cluster/ 301
 /docs/concepts/cluster-administration/static-pod/ /docs/tasks/administer-cluster/static-pod/ 301
 /docs/concepts/clusters/logging/ /docs/concepts/cluster-administration/logging/ 301
+/docs/concepts/configuration/assign-pod-node/ /docs/concepts/scheduling-eviction/assign-pod-node/ 301
 /docs/concepts/configuration/container-command-arg/ /docs/tasks/inject-data-application/define-command-argument-container/ 301
 /docs/concepts/configuration/container-command-args/ /docs/tasks/inject-data-application/define-command-argument-container/ 301
 /docs/concepts/configuration/manage-compute-resources-container/ /docs/concepts/configuration/manage-resources-containers/ 301
 /docs/concepts/configuration/scheduler-perf-tuning/ /docs/concepts/scheduling-eviction/scheduler-perf-tuning/ 301
 /docs/concepts/configuration/scheduling-framework/ /docs/concepts/scheduling-eviction/scheduling-framework/ 301
+/docs/concepts/configuration/taint-and-toleration/ /docs/concepts/scheduling-eviction/taint-and-toleration/ 301
 /docs/concepts/ecosystem/thirdpartyresource/ /docs/tasks/access-kubernetes-api/extend-api-third-party-resource/ 301
 /docs/concepts/jobs/cron-jobs/ /docs/concepts/workloads/controllers/cron-jobs/ 301
 /docs/concepts/jobs/run-to-completion-finite-workloads/ /docs/concepts/workloads/controllers/jobs-run-to-completion/ 301
@@ -365,8 +367,8 @@
 /docs/user-guide/monitoring/ /docs/tasks/debug-application-cluster/resource-usage-monitoring/ 301
 /docs/user-guide/namespaces/ /docs/concepts/overview/working-with-objects/namespaces/ 301
 /docs/user-guide/networkpolicies/ /docs/concepts/services-networking/network-policies/ 301
-/docs/user-guide/node-selection/ /docs/concepts/configuration/assign-pod-node/ 301
-/docs/user-guide/node-selection/README /docs/concepts/configuration/assign-pod-node/ 301
+/docs/user-guide/node-selection/ /docs/concepts/scheduling-eviction/assign-pod-node/ 301
+/docs/user-guide/node-selection/README /docs/concepts/scheduling-eviction/assign-pod-node/ 301
 /docs/user-guide/overview/ /docs/concepts/overview/what-is-kubernetes/ 301
 /docs/user-guide/persistent-volumes/ /docs/concepts/storage/persistent-volumes/ 301
 /docs/user-guide/persistent-volumes/index /docs/concepts/storage/persistent-volumes/ 301