node: cpumanager: document the graduation process

Document the graduation process and the maturity level
of the cpumanager policy options, and the new feature gate
involved. No changes to the existing options.

For more details: https://github.com/kubernetes/enhancements/pull/2933
Signed-off-by: Francesco Romani <fromani@redhat.com>
Francesco Romani 2021-09-14 18:01:51 +02:00
parent 6e45595d3a
commit 7d8483e0e4
1 changed file with 14 additions and 1 deletion

@@ -60,6 +60,13 @@ duration as `--node-status-update-frequency`.
The behavior of the static policy can be fine-tuned using the `--cpu-manager-policy-options` flag.
The flag takes a comma-separated list of `key=value` policy options.
This feature can be disabled completely using the `CPUManagerPolicyOptions` feature gate.
The policy options are split into two groups: alpha quality (hidden by default) and beta quality
(visible by default). The groups are guarded respectively by the `CPUManagerPolicyAlphaOptions`
and `CPUManagerPolicyBetaOptions` feature gates. Diverging from the Kubernetes standard, these
feature gates guard groups of options, because it would have been too cumbersome to add a feature
gate for each individual option.
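
As a minimal sketch (not part of this change), assuming the kubelet is configured through a
`KubeletConfiguration` file exposing the `featureGates`, `cpuManagerPolicy` and
`cpuManagerPolicyOptions` fields, enabling a single policy option could look like this:

```yaml
# Illustrative KubeletConfiguration fragment; field names assumed from the
# kubelet.config.k8s.io/v1beta1 API, option names are documented later on this page.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  CPUManagerPolicyOptions: true      # master switch for policy options
  CPUManagerPolicyBetaOptions: true  # beta-level options (visible by default)
cpuManagerPolicy: static
cpuManagerPolicyOptions:
  full-pcpus-only: "true"            # a beta-level option, see below
```

The same `key=value` pairs can instead be passed on the command line through
`--cpu-manager-policy-options`, for example `--cpu-manager-policy-options=full-pcpus-only=true`.
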
### None policy
@@ -218,6 +225,12 @@ equal to one. The `nginx` container is granted 2 exclusive CPUs.
#### Static policy options
You can toggle groups of options on and off based upon their maturity level
using the following feature gates:
* `CPUManagerPolicyBetaOptions` (enabled by default). Disable to hide beta-level options.
* `CPUManagerPolicyAlphaOptions` (disabled by default). Enable to show alpha-level options.
You will still have to enable each individual option using the `--cpu-manager-policy-options`
kubelet flag; a configuration sketch follows the option list below.
The following policy options exist for the static `CPUManager` policy:
* `full-pcpus-only` (beta, visible by default)
* `distribute-cpus-across-numa` (alpha, hidden by default)
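
Alpha-level options stay hidden unless the alpha group is explicitly opened. A sketch of the
same hypothetical kubelet configuration as above, this time enabling the alpha-level
`distribute-cpus-across-numa` option:

```yaml
# Illustrative only: an alpha-level option needs both the top-level gate and
# the alpha group gate, since alpha options are hidden by default.
featureGates:
  CPUManagerPolicyOptions: true
  CPUManagerPolicyAlphaOptions: true
cpuManagerPolicy: static
cpuManagerPolicyOptions:
  distribute-cpus-across-numa: "true"
```
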
@@ -237,7 +250,7 @@ one NUMA node is required to satisfy the allocation.
By default, the `CPUManager` will pack CPUs onto one NUMA node until it is
filled, with any remaining CPUs simply spilling over to the next NUMA node.
This can cause undesired bottlenecks in parallel code relying on barriers (and
-similar synchronization primitivies), as this type of code tends to run only as
+similar synchronization primitives), as this type of code tends to run only as
fast as its slowest worker (which is slowed down by the fact that fewer CPUs
are available on at least one NUMA node).
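
As a worked illustration of the difference (hypothetical topology, not taken from this commit):
consider a container requesting 10 exclusive CPUs on a node with two NUMA nodes of 8
allocatable CPUs each.

```
default (packed) allocation:        NUMA node 0: 8 CPUs    NUMA node 1: 2 CPUs
distribute-cpus-across-numa=true:   NUMA node 0: 5 CPUs    NUMA node 1: 5 CPUs
```

With packing, the CPUs spill unevenly across NUMA nodes; with the option enabled they are
split as evenly as possible.
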
By distributing CPUs evenly across NUMA nodes, application developers can more