Edit Route Reflector deployment

Joao Fernandes 2018-05-10 17:05:54 -07:00 committed by Joao Fernandes
parent f75c988053
commit 28f0814c9b
3 changed files with 265 additions and 216 deletions


@ -1600,6 +1600,8 @@ manuals:
title: Use a load balancer
- path: /ee/ucp/admin/configure/integrate-with-multiple-registries/
title: Integrate with multiple registries
- path: /ee/ucp/admin/configure/deploy-route-reflectors/
title: Improve network performance with Route Reflectors
- sectiontitle: Monitor and troubleshoot
section:
- path: /ee/ucp/admin/monitor-and-troubleshoot/


@ -0,0 +1,263 @@
---
title: Improve network performance with Route Reflectors
description: Learn how to deploy Calico Route Reflectors to improve performance of Kubernetes networking
keywords: cluster, node, label, certificate, SAN
---
UCP uses Calico as the default Kubernetes networking solution. Calico is
configured to create a BGP mesh between all nodes in the cluster.

As you add more nodes to the cluster, networking performance starts decreasing.
If your cluster has more than 100 nodes, you should reconfigure Calico to use
Route Reflectors instead of a node-to-node mesh.

This article guides you in deploying Calico Route Reflectors in a UCP cluster.

UCP running on Microsoft Azure uses Azure SDN instead of Calico for multi-host
networking. If your UCP deployment is running on Azure, you don't need to
configure it this way.
## Before you begin

For production-grade systems, you should deploy at least two Route Reflectors,
each running on a dedicated node. These nodes should not be running any other
workloads.

If Route Reflectors run on the same node as other workloads, swarm ingress and
NodePorts might not work in those workloads.
## Choose dedicated nodes

Start by tainting the nodes so that no other workloads run on them. Configure
your CLI with a UCP client bundle, and for each dedicated node, run:
```
kubectl taint node <node-name> \
com.docker.ucp.kubernetes.calico/route-reflector=true:NoSchedule
```
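As an optional check, you can inspect the node's description to confirm the
taint was applied. The `Taints` field should list the taint you just added:
```
kubectl describe node <node-name> | grep Taints
```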
Then add labels to those nodes, so that you can target them when deploying the
Route Reflectors. For each dedicated node, run:
```
kubectl label nodes <node-name> \
com.docker.ucp.kubernetes.calico/route-reflector=true
```
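Optionally, list the nodes that have the label, to confirm that all of your
dedicated nodes are tagged:
```
kubectl get nodes -l com.docker.ucp.kubernetes.calico/route-reflector=true
```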
## Deploy the Route Reflectors
Create a `calico-rr.yaml` file with the following content:
```
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: calico-rr
  namespace: kube-system
  labels:
    app: calico-rr
spec:
  updateStrategy:
    type: RollingUpdate
  selector:
    matchLabels:
      k8s-app: calico-rr
  template:
    metadata:
      labels:
        k8s-app: calico-rr
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      tolerations:
        - key: com.docker.ucp.kubernetes.calico/route-reflector
          value: "true"
          effect: NoSchedule
      hostNetwork: true
      containers:
        - name: calico-rr
          image: calico/routereflector:v0.6.1
          env:
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            - name: ETCD_CA_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_ca
            # Location of the client key for etcd.
            - name: ETCD_KEY_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_key
            # Location of the client certificate for etcd.
            - name: ETCD_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_cert
            - name: IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          volumeMounts:
            - mountPath: /calico-secrets
              name: etcd-certs
          securityContext:
            privileged: true
      nodeSelector:
        com.docker.ucp.kubernetes.calico/route-reflector: "true"
      volumes:
        # Mount in the etcd TLS secrets.
        - name: etcd-certs
          secret:
            secretName: calico-etcd-secrets
```
Then, deploy the DaemonSet using:
```
kubectl create -f calico-rr.yaml
```
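Once the DaemonSet is deployed, you can check that one `calico-rr` pod is
running on each dedicated node:
```
kubectl get pods -n kube-system -l k8s-app=calico-rr -o wide
```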
## Configure calicoctl
To reconfigure Calico to use Route Reflectors instead of a node-to-node mesh,
you'll need to SSH into a UCP node and download the `calicoctl` tool.
Log in to a UCP node using SSH, and run:
```
sudo curl --location https://github.com/projectcalico/calicoctl/releases/download/v3.1.1/calicoctl \
--output /usr/bin/calicoctl
sudo chmod +x /usr/bin/calicoctl
```
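To check that the binary is installed and executable, you can run the following
command. At this point `calicoctl` is not configured yet, so it may only be
able to report the client version:
```
calicoctl version
```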
Now you need to configure `calicoctl` to communicate with the etcd key-value
store managed by UCP. Create a file named `/etc/calico/calicoctl.cfg` with
the following content:
```
apiVersion: projectcalico.org/v3
kind: CalicoAPIConfig
metadata:
spec:
  datastoreType: "etcdv3"
  etcdEndpoints: "127.0.0.1:12379"
  etcdKeyFile: "/var/lib/docker/volumes/ucp-node-certs/_data/key.pem"
  etcdCertFile: "/var/lib/docker/volumes/ucp-node-certs/_data/cert.pem"
  etcdCACertFile: "/var/lib/docker/volumes/ucp-node-certs/_data/ca.pem"
```
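To confirm that `calicoctl` can reach the etcd key-value store, try listing the
Calico nodes. You should see one entry for each node in the cluster:
```
sudo calicoctl get nodes
```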
## Disable node-to-node BGP mesh
Now that you've configured `calicoctl`, you can check the current Calico BGP
configuration:
```
sudo calicoctl get bgpconfig
```
If you don't see any configuration listed, create one by running:
```
cat << EOF | sudo calicoctl create -f -
apiVersion: projectcalico.org/v3
kind: BGPConfiguration
metadata:
  name: default
spec:
  logSeverityScreen: Info
  nodeToNodeMeshEnabled: false
  asNumber: 63400
EOF
```
This creates a new configuration with the node-to-node BGP mesh disabled.
If a configuration already exists and `nodeToNodeMeshEnabled` is set to `true`,
update your configuration:
```
sudo calicoctl get bgpconfig --output yaml > bgp.yaml
```
Edit the `bgp.yaml` file, setting `nodeToNodeMeshEnabled` to `false`. Then
update the Calico configuration by running:
```
sudo calicoctl replace -f bgp.yaml
```
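As a quick check, you can print the configuration again and confirm that
`nodeToNodeMeshEnabled` is now set to `false`:
```
sudo calicoctl get bgpconfig default --output yaml
```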
## Configure Calico to use Route Reflectors
To configure Calico to use the Route Reflectors, you first need to know the AS
number of your network. To get it, run:
```
sudo calicoctl get nodes --output=wide
```
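The AS number is listed in the `ASN` column. The output looks something like
the following, with illustrative node names and addresses:
```
NAME       ASN       IPV4              IPV6
ucp-node   (63400)   172.31.24.86/20
```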
Now that you have the AS number, you can create the Calico configuration.
For each Route Reflector, customize and run the following snippet. Each peer
needs a unique name:
```
sudo calicoctl create -f - << EOF
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: <PEER_NAME>
spec:
  peerIP: <IP_RR>
  asNumber: <AS_NUMBER>
EOF
```
Where:
* `PEER_NAME` is a unique name for the peer, like `bgppeer-rr-1`.
* `IP_RR` is the IP of the node where the Route Reflector pod is deployed.
* `AS_NUMBER` is the AS number reported for your nodes in the previous step.
You can learn more about this configuration in the
[Calico documentation](https://docs.projectcalico.org/v3.1/usage/routereflector/calico-routereflector).
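You can list the peers you've created, to make sure there's one for each Route
Reflector:
```
sudo calicoctl get bgppeer
```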
## Stop calico-node pods
If you have `calico-node` pods running on the nodes dedicated to the Route
Reflectors, manually delete them. This ensures that you don't have both
running on the same node.
Using your UCP client bundle, run:
```
# Find the Pod name
kubectl get pods -n kube-system -o wide | grep <node-name>
# Delete the Pod
kubectl delete pod -n kube-system <pod-name>
```
## Validate peers
Now you can check that the `calico-node` pods running on other nodes are
peering with the Route Reflector:
```
sudo calicoctl node status
```
You should see something like:
```
IPv4 BGP status
+--------------+-----------+-------+----------+-------------+
| PEER ADDRESS | PEER TYPE | STATE | SINCE | INFO |
+--------------+-----------+-------+----------+-------------+
| 172.31.24.86 | global | up | 23:10:04 | Established |
+--------------+-----------+-------+----------+-------------+
IPv6 BGP status
No IPv6 peers found.
```


@ -1,216 +0,0 @@
# Steps to deploy Route Reflector on UCP 3.0
UCP 3.0 ships with Calico for Kubernetes networking. Calico uses BGP as the control plane to distribute Pod routes between the nodes in the UCP cluster. Calico BGP is set up as a mesh between the nodes. For large-scale, production-grade deployments it is recommended to deploy Route Reflectors to avoid a node-to-node BGP mesh. This document provides detailed steps to deploy Route Reflectors in a UCP cluster.
Note:
1) If you are using UCP 3.0 on Azure, deploying Route Reflectors is not required. The networking control plane is handled
by Azure SDN and not Calico.
2) Nodes marked for RR should not run any non-system workloads. These nodes are not guaranteed to work for ingress (swarm) or NodePort (Kubernetes).
3) It is recommended to deploy Route Reflectors if the cluster scale exceeds 100 nodes.
4) It is recommended to deploy at least 2 Route Reflectors in the cluster. Identify the nodes where the Route Reflectors need to be deployed, considering that these nodes will not be available to the scheduler for any other workloads. Choose Kubernetes as the orchestrator for these nodes.
1) On one of the nodes in the Docker EE cluster, download the `calicoctl` binary:
```
curl -L https://github.com/projectcalico/calicoctl/releases/download/v3.1.1/calicoctl --output /usr/bin/calicoctl
chmod +x /usr/bin/calicoctl
```
2) `calicoctl` needs to be configured with the etcd information. There are 2 options:
a) You can set the environment variables:
```
export ETCD_ENDPOINTS=127.0.0.1:12379
export ETCD_KEY_FILE=/var/lib/docker/volumes/ucp-node-certs/_data/key.pem
export ETCD_CA_CERT_FILE=/var/lib/docker/volumes/ucp-node-certs/_data/ca.pem
export ETCD_CERT_FILE=/var/lib/docker/volumes/ucp-node-certs/_data/cert.pem
```
b) You can create a configuration file.
By default, `calicoctl` looks at `/etc/calico/calicoctl.cfg` for the configuration. For a custom
configuration file location, use `--config` with `calicoctl`.
`calicoctl.cfg:`
```
apiVersion: projectcalico.org/v3
kind: CalicoAPIConfig
metadata:
spec:
  datastoreType: "etcdv3"
  etcdEndpoints: "127.0.0.1:12379"
  etcdKeyFile: "/var/lib/docker/volumes/ucp-node-certs/_data/key.pem"
  etcdCertFile: "/var/lib/docker/volumes/ucp-node-certs/_data/cert.pem"
  etcdCACertFile: "/var/lib/docker/volumes/ucp-node-certs/_data/ca.pem"
```
3) Identify the nodes where the Route Reflectors need to be deployed. Taint these nodes to ensure that only Route Reflector pods that tolerate the taint can be scheduled on them.
In this example, node `ubuntu-0` is being tainted. Use the client bundle and run:
```kubectl taint node ubuntu-0 com.docker.ucp.kubernetes.calico/route-reflector=true:NoSchedule```
Do this on every node where a Calico Route Reflector pod needs to be deployed.
4) Add labels to the same nodes. The labels are used by the scheduler for node placement when deploying the Calico Route Reflector pods.
Use the client bundle and run:
```kubectl label nodes ubuntu-0 com.docker.ucp.kubernetes.calico/route-reflector=true```
Do this on every node where a Calico Route Reflector pod needs to be deployed.
5) Deploy the Calico Route Reflector DaemonSet. The calico-rr pods are deployed on all nodes that have the taint
and the node labels.
`calico-rr.yaml`
```
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: calico-rr
  namespace: kube-system
  labels:
    app: calico-rr
spec:
  updateStrategy:
    type: RollingUpdate
  selector:
    matchLabels:
      k8s-app: calico-rr
  template:
    metadata:
      labels:
        k8s-app: calico-rr
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      tolerations:
        - key: com.docker.ucp.kubernetes.calico/route-reflector
          value: "true"
          effect: NoSchedule
      hostNetwork: true
      containers:
        - name: calico-rr
          image: calico/routereflector:v0.6.1
          env:
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            - name: ETCD_CA_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_ca
            # Location of the client key for etcd.
            - name: ETCD_KEY_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_key
            # Location of the client certificate for etcd.
            - name: ETCD_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_cert
            - name: IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          volumeMounts:
            - mountPath: /calico-secrets
              name: etcd-certs
          securityContext:
            privileged: true
      nodeSelector:
        com.docker.ucp.kubernetes.calico/route-reflector: "true"
      volumes:
        # Mount in the etcd TLS secrets.
        - name: etcd-certs
          secret:
            secretName: calico-etcd-secrets
```
```
kubectl create -f calico-rr.yaml
```
6) Use `calicoctl` to get the current BGP configuration and modify it to turn off the node-to-node BGP mesh.
```calicoctl get bgpconfig -o yaml > bgp.yaml```
Set `nodeToNodeMeshEnabled` to `false` in the spec.
`bgp.yaml:`
```
....
spec:
  asNumber: 63400
  logSeverityScreen: Info
  nodeToNodeMeshEnabled: false
....
```
Replace the BGP configuration with the modified `bgp.yaml`:
```
calicoctl replace -f bgp.yaml
```
If the BGP configuration object is empty, create it:
```
cat << EOF | calicoctl create -f -
apiVersion: projectcalico.org/v3
kind: BGPConfiguration
metadata:
  name: default
spec:
  logSeverityScreen: Info
  nodeToNodeMeshEnabled: false
  asNumber: 63400
EOF
```
7) Create a BGP Route Reflector peer configuration. Create one configuration for each Route Reflector, giving each peer a unique name.
```
calicoctl create -f - << EOF
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: bgppeer-global
spec:
  peerIP: <IP_RR> # The IP of the node where the Calico Route Reflector pod is deployed.
  asNumber: <AS_NUM> # Use the same asNumber from the BGP configuration in step 6).
EOF
```
The BGP Route Reflector peer configuration needs to be added for every Route Reflector that is deployed.
Refer to https://docs.projectcalico.org/v3.1/usage/routereflector/calico-routereflector for more information.
8) If you have `calico-node` pods running on the nodes marked for Route Reflectors in step 3), manually delete them. We recommend this to avoid running both the Calico Route Reflector and `calico-node` pods on the same node.
```
kubectl get pods -n kube-system -o wide | grep ubuntu-0
calico-node-t4lwt 2/2 Running 0 3h 172.31.20.89 ubuntu-0
kubectl delete pod -n kube-system calico-node-t4lwt
```
9) You can verify that the `calico-node` pods on other nodes are peering with the Route Reflector by downloading `calicoctl` on those nodes and checking `calicoctl node status`:
```
sudo ./calicoctl node status
Calico process is running.
IPv4 BGP status
+--------------+-----------+-------+----------+-------------+
| PEER ADDRESS | PEER TYPE | STATE | SINCE | INFO |
+--------------+-----------+-------+----------+-------------+
| 172.31.24.86 | global | up | 23:10:04 | Established |
+--------------+-----------+-------+----------+-------------+
IPv6 BGP status
No IPv6 peers found.
```