diff --git a/docs/calico-v3.md b/docs/calico-v3.md
new file mode 100644
index 0000000000..7e85183ec4
--- /dev/null
+++ b/docs/calico-v3.md
@@ -0,0 +1,96 @@
# Calico Version 3

Calico Version 3, released in early 2018, includes a reworked data model
and, with it, a switch from the etcd v2 API to the etcd v3 API. This document
covers the requirements, upgrade process, and configuration needed to install
Calico Version 3.

## Requirements
- Calico Version 3 requires the etcd v3 API, which means the etcd servers
  must run etcd version 3.
- The Kubernetes version must be at least v1.7.0.

### etcd
Because the etcd v3 API is a requirement of Calico Version 3 (when using etcd
as the datastore), not all kops installations can be upgraded to Calico
Version 3. Installations using etcd v2 (or earlier) will need to remain on
Calico Version 2 or upgrade to etcd v3 first.

## Configuration of a new cluster
To ensure a new cluster will have Calico Version 3 installed, the following
two configuration options should be set:
- `spec.etcdClusters.etcdMembers[0].Version` (main cluster) should be set to
  an etcd 3.x version, or the default etcd version must already be 3.x.
- The networking config must have the Calico `majorVersion` set to `v3`, like
  the following:
  ```
  spec:
    networking:
      calico:
        majorVersion: v3
  ```

Both of the above settings can be made with `kops edit cluster ...` before
bringing the cluster up for the first time.

With these two settings in place, your kops-deployed cluster will run
Calico Version 3.

### Create cluster networking flag

When Calico is enabled with the `--networking calico` flag, etcd will be set
to a v3 version. Feel free to change this to a different v3 version of etcd.

## Upgrading an existing cluster
Assuming your cluster meets the requirements above, it is possible to upgrade
a kops cluster running Calico.

A few notes about the upgrade:
- During the first portion of the migration, while the calico-kube-controllers
  pod is running its init container, no new policies will be applied, though
  already applied policies will remain active.
- During the migration no new pods will be scheduled, as adding new workloads
  to Calico is blocked. Once the calico-complete-upgrade job has completed,
  pods will once again be schedulable.
- The upgrade process that kops automates is described in
  [the Upgrading Calico docs](https://docs.projectcalico.org/v3.1/getting-started/kubernetes/upgrade/upgrade).

Perform the upgrade with the following steps (a sketch of the corresponding
commands follows the list):

1. First, ensure that you are running Calico v2.6.5 or later. With a recent
   kops (newer than 1.9), this can be done by running `kops update` on the
   cluster.
1. Verify that your Calico data will migrate successfully: install and
   configure the
   [calico-upgrade command](https://docs.projectcalico.org/v3.1/getting-started/kubernetes/upgrade/setup),
   run `calico-upgrade dry-run`, and confirm it reports that the migration
   can be completed successfully.
1. Set the `majorVersion` field as below by editing your cluster
   configuration with `kops edit cluster`:
   ```
   spec:
     networking:
       calico:
         majorVersion: v3
   ```
1. Update your cluster with `kops update` as you normally would.
1. Monitor the progress of the migration by running
   `kubectl get pods -n kube-system` and checking the status of the
   following pods:
   - the calico-node pods should restart one at a time, all eventually
     becoming Running
   - the calico-kube-controllers pod will restart, and will start running
     after the first calico-node pod starts running
   - the calico-complete-upgrade pod will be Completed after all the
     calico-node pods start running
   If any of the above fail by entering a crash loop, investigate by
   checking the logs with `kubectl -n kube-system logs <pod-name>`.
1. Once the calico-node and calico-kube-controllers pods are running and the
   calico-complete-upgrade pod has completed, the migration has finished
   successfully.
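As a rough sketch of steps 2 through 4, the command sequence might look like
the following. The cluster name `example.k8s.local` is a placeholder, and
`calico-upgrade` is assumed to already be installed and configured against
your etcd datastore as described in the setup docs linked above.

```
# Confirm the v2 data can be migrated; this reports the result without
# changing anything.
calico-upgrade dry-run

# Set spec.networking.calico.majorVersion to v3 in the cluster spec.
kops edit cluster example.k8s.local

# Apply the change to the cluster.
kops update cluster example.k8s.local --yes
```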
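To watch the rollout described in step 5, something like the following works;
the pod name in the second command is illustrative, not an actual name from
your cluster.

```
# Watch the Calico pods cycle through the migration.
kubectl get pods -n kube-system --watch

# If a pod enters a crash loop, inspect its logs (substitute the real pod
# name reported by the previous command).
kubectl -n kube-system logs calico-node-x8zvw
```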
### Recovering from a partial migration

The init container of the first calico-node pod that starts performs the
datastore migration necessary for upgrading from Calico v2 to Calico v3. If
this init container is killed or restarted while the new datastore is being
populated, it will be necessary to manually remove the Calico data from the
etcd v3 API (for example, with `etcdctl`) before the migration can succeed.
diff --git a/docs/networking.md b/docs/networking.md
index 3c53fc9c5c..a1aed38b3e 100644
--- a/docs/networking.md
+++ b/docs/networking.md
@@ -34,7 +34,7 @@ has built in support for CNI networking components.
 
 Several different CNI providers are currently built into kops:
 
-* [Calico](http://docs.projectcalico.org/v2.0/getting-started/kubernetes/installation/hosted/)
+* [Calico](https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/calico#installing-with-the-etcd-datastore)
 * [Canal (Flannel + Calico)](https://github.com/projectcalico/canal)
 * [flannel](https://github.com/coreos/flannel) - use `--networking flannel-vxlan` (recommended) or `--networking flannel-udp` (legacy). `--networking flannel` now selects `flannel-vxlan`.
 * [kopeio-vxlan](https://github.com/kopeio/networking)
@@ -170,7 +170,8 @@ To enable this mode in a cluster, with Calico as the CNI and Network Policy prov
 
 ```
 networking:
-  calico: {}
+  calico:
+    majorVersion: v3
 ```
 
 You will need to change that block, and add an additional field, to look like this:
@@ -178,6 +179,7 @@ You will need to change that block, and add an additional field, to look like th
 ```
 networking:
   calico:
+    majorVersion: v3
     crossSubnet: true
 ```
@@ -194,6 +196,8 @@ Only the masters have the IAM policy (`ec2:*`) to allow k8s-ec2-srcdst to execute:
 
 For Calico specific documentation please visit the [Calico Docs](http://docs.projectcalico.org/latest/getting-started/kubernetes/).
 
+For details on upgrading a Calico v2 deployment, see [Calico Version 3](calico-v3.md).
+
 #### Getting help with Calico
 
 For help with Calico or to report any issues: