packet: Add documentation regarding new env variable

Adds documentation regarding env variables introduced:

 - PACKET_CONTROLLER_NODE_IDENTIFIER_LABEL
 - INSTALLED_CCM

Signed-off-by: Imran Pochi <imran@kinvolk.io>
Imran Pochi 2021-08-19 20:38:35 +05:30
parent 87beac1af7
commit 8e6f109dab
1 changed file with 30 additions and 1 deletion


@@ -79,6 +79,35 @@ affinity:
- t1.small.x86
```
## CCM and Controller node labels
### CCM
By default, the autoscaler assumes that the older, deprecated `packet-ccm` CCM is installed in your
cluster. If you have instead migrated to the new `cloud-provider-equinix-metal` CCM, you must tell the
autoscaler by setting an environment variable in the deployment:
```
env:
- name: INSTALLED_CCM
value: cloud-provider-equinix-metal
```
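For context, here is a minimal sketch of where this block sits in the cluster-autoscaler deployment
manifest; the container name and image below are illustrative placeholders, not values taken from this
document:
```
spec:
  template:
    spec:
      containers:
      - name: cluster-autoscaler          # illustrative container name
        image: <cluster-autoscaler-image> # placeholder image reference
        env:
        - name: INSTALLED_CCM
          value: cloud-provider-equinix-metal
```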
**NOTE**: As a prerequisite, ensure that every worker node in your cluster has the prefix `equinixmetal://` in
the Node spec field `.spec.providerID`. If any existing worker node still has the `packet://` prefix, drain
the node, remove it from the cluster, and restart the kubelet on that worker node so that it re-registers.
This ensures that the `cloud-provider-equinix-metal` CCM sets the UUID with the `equinixmetal://` prefix in
`.spec.providerID`.
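For reference, a worker node that has re-registered through `cloud-provider-equinix-metal` should have a
spec that looks roughly like the following; the node name and device UUID are placeholders:
```
apiVersion: v1
kind: Node
metadata:
  name: worker-1                          # placeholder node name
spec:
  # Set by cloud-provider-equinix-metal when the kubelet re-registers;
  # note the equinixmetal:// prefix rather than the deprecated packet://.
  providerID: equinixmetal://<device-uuid>
```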
### Controller node labels
The autoscaler assumes that control plane nodes in your cluster are identified by the label
`node-role.kubernetes.io/master`. If that assumption does not hold for your cluster, set the following
environment variable in the deployment:
```
env:
- name: PACKET_CONTROLLER_NODE_IDENTIFIER_LABEL
value: <label>
```
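For example, if your control plane nodes carry the newer `node-role.kubernetes.io/control-plane` label
instead, the setting would be:
```
env:
- name: PACKET_CONTROLLER_NODE_IDENTIFIER_LABEL
  value: node-role.kubernetes.io/control-plane
```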
## Notes
The autoscaler will not remove nodes which have non-default kube-system pods.
@@ -99,4 +128,4 @@ the control plane (previously referred to as master) node.
4. Cloud inits in the examples have pinned versions for Kubernetes in order to minimize potential incompatibilities as a result of nodes provisioned with different Kubernetes versions.
5. In the provided cluster-autoscaler deployment example, the autoscaler pod has a nodeaffinity which forces it to deploy on the control plane (previously referred to as master) node, so that the cluster-autoscaler can scale down all of the worker nodes. Without this change there was a possibility for the cluster-autoscaler to be deployed on a worker node that could not be downscaled.