kops-controller manages a few node-role node-labels. We
now remove any extra managed labels that land on the node.
This means we will now actively remove the extra node label if it was
previously erroneously applied to a control-plane node; previous code
changes only stopped applying it.
If we deserialize the yaml directly, we don't go through the version-conversion
logic. That logic maps from Master -> ControlPlane, so without that
logic we see unexpected values in the "string enums".
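A sketch of the mapping that conversion step performs, with
hypothetical type and function names; bypassing it leaves the legacy
"Master" spelling in the internal string enum:

```go
package main

import "fmt"

// InstanceGroupRole is a string enum over instance group roles.
type InstanceGroupRole string

const (
	RoleControlPlane InstanceGroupRole = "ControlPlane"
	RoleNode         InstanceGroupRole = "Node"
)

// normalizeRole is a hypothetical stand-in for the conversion step
// that is skipped when yaml is decoded straight into internal types.
func normalizeRole(raw string) (InstanceGroupRole, error) {
	switch raw {
	case "Master", "ControlPlane": // legacy spelling maps forward
		return RoleControlPlane, nil
	case "Node":
		return RoleNode, nil
	default:
		return "", fmt.Errorf("unknown role %q", raw)
	}
}

func main() {
	role, err := normalizeRole("Master")
	fmt.Println(role, err) // ControlPlane <nil>
}
```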
* feat(karpenter): Upgrade to version 0.27.0
Upgrade Karpenter to the latest stable version, `0.27.0`.
Templates have been updated to match those used by the Helm chart.
* feat(karpenter): Use AWSNodeTemplate for launchTemplate
Setting launch templates in the provisioner is deprecated; it is recommended to use `AWSNodeTemplate` instead (see the sketch after this list).
Ref:
- https://karpenter.sh/v0.27.0/concepts/node-templates/
* feat(karpenter): Enable pruning addon
* Use extra flags in upgrade-ab scenario test
* feat(karpenter): Drop `karpenter` feature flag
* feat(karpenter): Add release note for `1.27`
* feat(karpenter): Upgrade to version 0.27.3
* feat(karpenter): fix template
* feat(karpenter): Upgrade to version 0.27.5
* Update Karpenter documentation with the required kops version
* Delete KOPS_FEATURE_FLAGS from e2e test `run-test`
* Run hack/update-expected.sh
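As referenced above, a minimal sketch of the `AWSNodeTemplate` shape,
using a local mirror of a small subset of Karpenter's
`karpenter.k8s.aws/v1alpha1` fields; the selector tags are hypothetical
values for a kops-managed cluster:

```go
package main

import (
	"fmt"

	"sigs.k8s.io/yaml"
)

// awsNodeTemplate mirrors just the fields we need from Karpenter's
// AWSNodeTemplate CRD; it is an illustrative subset, not the real type.
type awsNodeTemplate struct {
	APIVersion string `json:"apiVersion"`
	Kind       string `json:"kind"`
	Metadata   struct {
		Name string `json:"name"`
	} `json:"metadata"`
	Spec struct {
		SubnetSelector        map[string]string `json:"subnetSelector,omitempty"`
		SecurityGroupSelector map[string]string `json:"securityGroupSelector,omitempty"`
	} `json:"spec"`
}

func main() {
	t := awsNodeTemplate{APIVersion: "karpenter.k8s.aws/v1alpha1", Kind: "AWSNodeTemplate"}
	t.Metadata.Name = "default"
	// Hypothetical cluster tag; selectors match on tags applied to the
	// cluster's subnets and security groups.
	t.Spec.SubnetSelector = map[string]string{"kubernetes.io/cluster/example.k8s.local": "*"}
	t.Spec.SecurityGroupSelector = map[string]string{"kubernetes.io/cluster/example.k8s.local": "*"}

	out, err := yaml.Marshal(t)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out)) // the manifest the provisioner now references
}
```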
We were silently ignoring unknown roles, which makes it hard to know
when our expectations aren't met. It looks like the rename of the
role from "Master" to "ControlPlane" may also have caused some drift
against our expectations.
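A sketch of the stricter handling, with hypothetical label values: an
unrecognized role now returns an error instead of being skipped, so
drift like a leftover "Master" value surfaces immediately:

```go
package main

import "fmt"

// nodeLabelsForRole is a hypothetical sketch: unknown roles are an
// error rather than a silent no-op, so unmet expectations are visible.
func nodeLabelsForRole(role string) (map[string]string, error) {
	switch role {
	case "ControlPlane":
		return map[string]string{"node-role.kubernetes.io/control-plane": ""}, nil
	case "Node":
		return map[string]string{"node-role.kubernetes.io/node": ""}, nil
	default:
		return nil, fmt.Errorf("unhandled role %q", role)
	}
}

func main() {
	if _, err := nodeLabelsForRole("Master"); err != nil {
		fmt.Println(err) // prints: unhandled role "Master"
	}
}
```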
In order to verify that the caller is running on the specified node,
we source the expected IP address from the cloud, and require that the
node set up a simple challenge/response server to answer requests.
Because the challenge server runs on a port outside of the nodePort
range, this also makes it harder for pods to impersonate their host
nodes - though we do combine this with TPM and similar functionality
where it is available.
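A simplified sketch of the node-side challenge server; the port, path,
and shared-secret scheme are hypothetical stand-ins for the actual kops
protocol, and only show the request/response shape:

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
)

// challengeKey is a placeholder secret; it stands in for whatever
// credential the real protocol uses to prove the node's identity.
var challengeKey = []byte("example-bootstrap-secret")

// challengeHandler answers a nonce with an HMAC over it, proving that
// whatever is listening at the cloud-reported IP holds the expected key.
func challengeHandler(w http.ResponseWriter, r *http.Request) {
	nonce, err := io.ReadAll(io.LimitReader(r.Body, 1024))
	if err != nil || len(nonce) == 0 {
		http.Error(w, "missing challenge nonce", http.StatusBadRequest)
		return
	}
	mac := hmac.New(sha256.New, challengeKey)
	mac.Write(nonce)
	fmt.Fprint(w, hex.EncodeToString(mac.Sum(nil)))
}

func main() {
	http.HandleFunc("/challenge", challengeHandler)
	// Port 4321 is illustrative; anything outside the default NodePort
	// range (30000-32767) means a pod cannot claim the node's IP via a
	// NodePort Service and answer the challenge in the node's place.
	if err := http.ListenAndServe(":4321", nil); err != nil {
		panic(err)
	}
}
```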