Adds a release note for gce/metadata-proxy and upgrade instructions

eric-hole 2020-03-14 12:17:01 -07:00
parent 450fad6e4c
commit d39006e2b5
1 changed file with 4 additions and 2 deletions


@@ -10,6 +10,8 @@
* Cilium CNI can now use AWS networking natively through the AWS ENI IPAM mode. Kops can also run a Kubernetes cluster entirely without kube-proxy using Cilium's BPF NodePort implementation.
* New clusters in GCE are configured to run the [metadata-proxy](https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/metadata-proxy) by default. The proxy runs as a DaemonSet and lands on nodes with the nodeLabel `cloud.google.com/metadata-proxy-ready: "true"`. If you want to enable the metadata-proxy on an existing cluster/instance group, add that nodeLabel to your instancegroup specs (`kops edit ig ...`) and run `kops update cluster` (a sketch of the edit follows this list). When the changes are applied, the proxy will roll out to those targeted nodes.
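
As a minimal sketch of that edit, the label goes under `spec.nodeLabels` in the instance group spec. The instance group name `nodes` is illustrative, and a real spec will contain other fields as well:

```
# Illustrative InstanceGroup; only the nodeLabels entry is the required change
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  name: nodes   # hypothetical instance group name
spec:
  nodeLabels:
    cloud.google.com/metadata-proxy-ready: "true"
```
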
# Breaking changes
* Terraform users on AWS may need to rename some resources in their state file in order to prepare for Terraform 0.12 support. See Required Actions below.
@@ -30,7 +32,7 @@
Terraform 0.12 [no longer supports resource names starting with digits](https://www.terraform.io/upgrade-guides/0-12.html#pre-upgrade-checklist). In Kops, both the default route and additional VPC CIDR associations are affected. See [#7957](https://github.com/kubernetes/kops/pull/7957) for more information.
* The default route was named `aws_route.0-0-0-0--0` and will now be named `aws_route.route-0-0-0-0--0`.
* Additional CIDR blocks associated with a VPC were similarly named after the hyphenated CIDR block, with two hyphens standing in for the `/`, for example `aws_vpc_ipv4_cidr_block_association.10-1-0-0--16`. These will now be prefixed with `cidr-`, for example `aws_vpc_ipv4_cidr_block_association.cidr-10-1-0-0--16`.
To prevent downtime, follow these steps with the new version of Kops:
```
kops update cluster --target terraform ...
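# Sketch of the likely remaining steps, assuming the resource names
# described above; use the exact names that `terraform plan` reports
# for your own state file:
terraform plan
terraform state mv aws_route.0-0-0-0--0 aws_route.route-0-0-0-0--0
terraform state mv aws_vpc_ipv4_cidr_block_association.10-1-0-0--16 aws_vpc_ipv4_cidr_block_association.cidr-10-1-0-0--16
terraform apply
```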
@@ -55,7 +57,7 @@
```
featureGates:
  PodPriority: "true"
```
* If a custom Kops build was used on a cluster, a kops-controller Deployment may have been created; it should be deleted.
Run `kubectl -n kube-system delete deployment kops-controller` after upgrading to Kops 1.16.0-beta.1 or later.
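
A minimal cleanup sketch, assuming `kubectl` access to the cluster; the preliminary `get` is an added verification step, not part of the release note:

```
# Check whether a leftover kops-controller Deployment exists (added verification step)
kubectl -n kube-system get deployment kops-controller
# If present, remove it after upgrading to Kops 1.16.0-beta.1 or later
kubectl -n kube-system delete deployment kops-controller
```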