Fix broken links.

Slava Semushin 2017-09-21 22:10:18 +02:00
parent 43ce57ac47
commit 0f13a190ad
2 changed files with 4 additions and 4 deletions


@@ -239,8 +239,8 @@ The binding of the `system:bootstrappers` (or similar) group to the ability to s
 ## Revision history
-- Initial proposal ([@jbeda](https://github.com/jbeda)): [link](https://github.com/kubernetes/community/blob/cb9f198a0763e0a7540cdcc9db912a403ab1acab/contributors/design-proposals/bootstrap-discovery.md)
-- v1.6 updates ([@jbeda](https://github.com/jbeda)): [link](https://github.com/kubernetes/community/blob/d8ce9e91b0099795318bb06c13f00d9dad41ac26/contributors/design-proposals/bootstrap-discovery.md)
+- Initial proposal ([@jbeda](https://github.com/jbeda)): [link](https://github.com/kubernetes/community/blob/cb9f198a0763e0a7540cdcc9db912a403ab1acab/contributors/design-proposals/cluster-lifecycle/bootstrap-discovery.md)
+- v1.6 updates ([@jbeda](https://github.com/jbeda)): [link](https://github.com/kubernetes/community/blob/d8ce9e91b0099795318bb06c13f00d9dad41ac26/contributors/design-proposals/cluster-lifecycle/bootstrap-discovery.md)
 - v1.8 updates ([@luxas](https://github.com/luxas))
 <!-- BEGIN MUNGE: GENERATED_ANALYTICS -->


@@ -42,7 +42,7 @@ _Author: @bsalamat_
 # Background
-Running various types of workloads with different priorities is a common practice in medium and large clusters to achieve higher resource utilization. In such scenarios, the amount of work can be larger than what the total resources of the cluster can handle. If so, the cluster chooses the most important workloads and runs them. The importance of workloads is specified by a combination of [priority](https://github.com/bsalamat/community/blob/564ebff843532faf5dcb06a7e50b0db5c5b501cf/contributors/design-proposals/pod-priority-api.md), QoS, or other cluster-specific metrics. The potential to have more work than what cluster resources can handle is called "overcommitment". Overcommitment is very common in on-prem clusters where the number of nodes is fixed, but it can similarly happen in the cloud, as cloud customers may choose to run their clusters overcommitted/overloaded at times in order to save money. For example, a cloud customer may choose to run at most 100 nodes, knowing that all of their critical workloads fit on 100 nodes and that any additional work won't be critical and can wait until cluster load decreases.
+Running various types of workloads with different priorities is a common practice in medium and large clusters to achieve higher resource utilization. In such scenarios, the amount of work can be larger than what the total resources of the cluster can handle. If so, the cluster chooses the most important workloads and runs them. The importance of workloads is specified by a combination of [priority](https://github.com/bsalamat/community/blob/564ebff843532faf5dcb06a7e50b0db5c5b501cf/contributors/design-proposals/scheduling/pod-priority-api.md), QoS, or other cluster-specific metrics. The potential to have more work than what cluster resources can handle is called "overcommitment". Overcommitment is very common in on-prem clusters where the number of nodes is fixed, but it can similarly happen in the cloud, as cloud customers may choose to run their clusters overcommitted/overloaded at times in order to save money. For example, a cloud customer may choose to run at most 100 nodes, knowing that all of their critical workloads fit on 100 nodes and that any additional work won't be critical and can wait until cluster load decreases.
 ## Terminology
@@ -71,7 +71,7 @@ The race condition will still exist if we have multiple schedulers. More on this
 ## Preemption order
-When scheduling a pending pod, the scheduler tries to place the pod on a node that does not require preemption. If there is no such node, the scheduler may favor a node where the number and/or priority of victims (preempted pods) is smallest. After choosing the node, the scheduler considers the lowest [priority](https://github.com/bsalamat/community/blob/564ebff843532faf5dcb06a7e50b0db5c5b501cf/contributors/design-proposals/pod-priority-api.md) pods for preemption first. The scheduler starts from the lowest priority and selects just enough pods to preempt to allow the pending pod to schedule. The scheduler only considers pods that have lower priority than the pending pod.
+When scheduling a pending pod, the scheduler tries to place the pod on a node that does not require preemption. If there is no such node, the scheduler may favor a node where the number and/or priority of victims (preempted pods) is smallest. After choosing the node, the scheduler considers the lowest [priority](https://github.com/bsalamat/community/blob/564ebff843532faf5dcb06a7e50b0db5c5b501cf/contributors/design-proposals/scheduling/pod-priority-api.md) pods for preemption first. The scheduler starts from the lowest priority and selects just enough pods to preempt to allow the pending pod to schedule. The scheduler only considers pods that have lower priority than the pending pod.
 #### Important notes
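
To make the preemption order in the second hunk concrete, here is a minimal sketch of victim selection on a single node, written in Go. It is only an illustration of the ordering described in the proposal, not the actual kube-scheduler code; the `Pod` struct, the `pickVictims` helper, and the CPU-only capacity check are simplified assumptions.

```go
// Sketch of lowest-priority-first victim selection, assuming a CPU-only
// resource model. Not the real kube-scheduler implementation.
package main

import (
	"fmt"
	"sort"
)

type Pod struct {
	Name     string
	Priority int32
	CPUMilli int64 // requested CPU in millicores
}

// pickVictims returns the lowest-priority pods on a node that must be
// preempted so that `pending` fits. It starts from the lowest priority and
// stops as soon as enough capacity is freed. Only pods with strictly lower
// priority than the pending pod are ever considered.
func pickVictims(nodePods []Pod, pending Pod, nodeCapacity, nodeUsed int64) []Pod {
	// Candidates: only pods with lower priority than the pending pod.
	var candidates []Pod
	for _, p := range nodePods {
		if p.Priority < pending.Priority {
			candidates = append(candidates, p)
		}
	}
	// Consider the lowest-priority pods first.
	sort.Slice(candidates, func(i, j int) bool {
		return candidates[i].Priority < candidates[j].Priority
	})

	var victims []Pod
	free := nodeCapacity - nodeUsed
	for _, p := range candidates {
		if free >= pending.CPUMilli {
			break // enough room already freed; no further preemption
		}
		victims = append(victims, p)
		free += p.CPUMilli
	}
	if free < pending.CPUMilli {
		return nil // even preempting all candidates would not make room
	}
	return victims
}

func main() {
	node := []Pod{
		{Name: "batch-a", Priority: 0, CPUMilli: 500},
		{Name: "batch-b", Priority: 0, CPUMilli: 500},
		{Name: "web", Priority: 100, CPUMilli: 1000},
	}
	pending := Pod{Name: "critical", Priority: 1000, CPUMilli: 800}
	fmt.Println(pickVictims(node, pending, 2000, 2000))
}
```

In this example the two priority-0 batch pods are preempted and the priority-100 web pod is left untouched, matching the rule that the scheduler takes victims from the lowest priority upward and only as many as the pending pod needs.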