Merge pull request #1382 from YanTingLittleCute/patch-12

Automatic merge from submit-queue.

Update rescheduling.md
Kubernetes Submit Queue 2017-11-08 18:46:04 -08:00 committed by GitHub
commit 59c0c21d3b
1 changed file with 3 additions and 3 deletions


@@ -45,7 +45,7 @@ Example use cases for rescheduling are
 * moving a pod onto an under-utilized node
 * moving a pod onto a node that meets more of the pod's affinity/anti-affinity preferences
 * moving a running pod off of a node in anticipation of a known or speculated future event
-  * draining a node in preparation for maintenance, decomissioning, auto-scale-down, etc.
+  * draining a node in preparation for maintenance, decommissioning, auto-scale-down, etc.
 * "preempting" a running pod to make room for a pending pod to schedule
 * proactively/speculatively make room for large and/or exclusive pods to facilitate
   fast scheduling in the future (often called "defragmentation")
@@ -145,12 +145,12 @@ it allows the API server to do validation (e.g. to catch mis-spelling).
 In the future, which priorities are usable for a given namespace and pods with certain
 attributes may be configurable, similar to ResourceQuota, LimitRange, or security policy.
-Priority and resource QoS are indepedent.
+Priority and resource QoS are independent.
 The priority we have described here might be used to prioritize the scheduling queue
 (i.e. the order in which a scheduler examines pods in its scheduling loop), but the two
 priority concepts do not have to be connected. It is somewhat logical to tie them
-together, since a higher priority genreally indicates that a pod is more urgent to get
+together, since a higher priority generally indicates that a pod is more urgent to get
 running. Also, scheduling low-priority pods before high-priority pods might lead to
 avoidable preemptions if the high-priority pods end up preempting the low-priority pods
 that were just scheduled.
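
For context on the section being edited: the priority it describes is name-based, validated by the API server, and independent of a pod's resource QoS class. A minimal sketch of how that looks with the PriorityClass API and a pod's `priorityClassName` field; the class name, value, and image below are illustrative assumptions, not part of this change:

```yaml
# Hypothetical example, not part of this commit: a named priority level and a
# pod that references it. Names and values are made up for illustration.
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority          # the name the API server validates on pod admission
value: 1000000                 # higher value = more urgent to schedule
globalDefault: false
description: "Example class for latency-critical workloads."
---
apiVersion: v1
kind: Pod
metadata:
  name: critical-app
spec:
  priorityClassName: high-priority   # independent of the pod's resource QoS class
  containers:
  - name: app
    image: registry.example.com/app:1.0
    resources:
      requests:
        cpu: "500m"
        memory: "256Mi"
```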