diff --git a/vertical-pod-autoscaler/README.md b/vertical-pod-autoscaler/README.md
index f4541559cd..f707a38187 100644
--- a/vertical-pod-autoscaler/README.md
+++ b/vertical-pod-autoscaler/README.md
@@ -67,7 +67,7 @@ The current default version is Vertical Pod Autoscaler 0.13.0
 **NOTE:** In 0.13.0 we deprecate `autoscaling.k8s.io/v1beta2` API. We plan to
 remove this API version. While for now you can continue to use `v1beta2` API we
 recommend using `autoscaling.k8s.io/v1` instead. `v1` and `v1beta2` APIs are
-almost identical (`v1` API has some fields which are not present in `v1beta2)
+almost identical (`v1` API has some fields which are not present in `v1beta2`)
 so simply changing which API version you're calling should be enough in almost
 all cases.
 
@@ -307,7 +307,7 @@ You can then choose which recommender to use by setting `recommenders` inside th
 
 ### Custom memory bump-up after OOMKill
 
-After an OOMKill event was observed, VPA increases the memory recommendation based on the observed memory usage in the event according to this formula: `recommendation = memory-usage-in-oomkill-event + max(oom-min-bump-up-bytes, memory-usage-in-oomkill-event * oom-bump-up-ratio)`. 
+After an OOMKill event was observed, VPA increases the memory recommendation based on the observed memory usage in the event according to this formula: `recommendation = memory-usage-in-oomkill-event + max(oom-min-bump-up-bytes, memory-usage-in-oomkill-event * oom-bump-up-ratio)`.
 You can configure the minimum bump-up as well as the multiplier by specifying startup arguments for the recommender:
 `oom-bump-up-ratio` specifies the memory bump up ratio when OOM occurred, default is `1.2`. This means, memory will be increased by 20% after an OOMKill event.
 `oom-min-bump-up-bytes` specifies minimal increase of memory after observing OOM. Defaults to `100 * 1024 * 1024` (=100MiB)
@@ -324,8 +324,8 @@ Usage in recommender deployment
 ### Using CPU management with static policy
 
 If you are using the [CPU management with static policy](https://kubernetes.io/docs/tasks/administer-cluster/cpu-management-policies/#static-policy) for some containers,
-you probably want the CPU recommendation to be an integer. A dedicated recommendation pre-processor can perform a round up on the CPU recommendation. Recommendation capping still applies after the round up. 
-To activate this feature, pass the flag `--cpu-integer-post-processor-enabled` when you start the recommender. 
+you probably want the CPU recommendation to be an integer. A dedicated recommendation pre-processor can perform a round up on the CPU recommendation. Recommendation capping still applies after the round up.
+To activate this feature, pass the flag `--cpu-integer-post-processor-enabled` when you start the recommender.
 The pre-processor only acts on containers having a specific configuration. This configuration consists in an annotation on your VPA object for each impacted container.
 The annotation format is the following:
 ```
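
As a hedged illustration of the bump-up behaviour described in the second hunk: after an OOMKill event, the memory recommendation grows by `oom-bump-up-ratio` (20% with the default `1.2`), but never by less than `oom-min-bump-up-bytes` (100 MiB by default). The sketch below follows that prose description (equivalently `max(usage * ratio, usage + min-bump)`) rather than a literal reading of the quoted formula; the program and all names in it are illustrative only and are not taken from the VPA sources.

```go
package main

import "fmt"

// Defaults quoted in the README text above; they correspond to the
// recommender flags --oom-bump-up-ratio and --oom-min-bump-up-bytes.
const (
	oomBumpUpRatio    = 1.2
	oomMinBumpUpBytes = 100 * 1024 * 1024 // 100 MiB
)

// bumpedMemory returns a post-OOMKill memory recommendation for the usage
// (in bytes) observed in the OOMKill event: the usage scaled by the bump-up
// ratio, but never less than the usage plus the minimum bump-up.
// Illustrative sketch only, not the VPA recommender's actual code.
func bumpedMemory(usageBytes float64) float64 {
	scaled := usageBytes * oomBumpUpRatio
	floored := usageBytes + oomMinBumpUpBytes
	if scaled > floored {
		return scaled
	}
	return floored
}

func main() {
	// A container killed at 1 GiB gets ~1.2 GiB recommended (the 20% bump
	// dominates); one killed at 64 MiB gets 164 MiB (the 100 MiB minimum
	// bump dominates).
	for _, usage := range []float64{1 << 30, 64 << 20} {
		fmt.Printf("usage %.0f MiB -> recommendation %.0f MiB\n",
			usage/(1<<20), bumpedMemory(usage)/(1<<20))
	}
}
```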