New production info

paigehargrave 2019-03-27 08:00:00 -04:00 committed by GitHub
parent 60e00cc150
commit 154c74ff2f
1 changed file with 75 additions and 34 deletions


@@ -5,15 +5,31 @@ description: Learn how to configure the layer 7 routing solution for a production environment
keywords: routing, proxy
---

# Deploying to production

This section includes documentation on configuring Interlock
for a production environment. If you have not yet deployed Interlock, refer to [Deploying Interlock](index.md) first, because this information builds on the basic deployment. This topic does not cover infrastructure deployment;
it assumes you have a vanilla Swarm cluster (`docker swarm init` and `docker swarm join` from the nodes).
Refer to the [Swarm](https://docs.docker.com/engine/swarm/) documentation if you need help
getting a Swarm cluster deployed.
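
For reference, here is a minimal sketch of standing up such a cluster; the advertise address and join token are placeholders, so substitute your own values:

```bash
# On the node that will become the first manager (203.0.113.10 is a placeholder address)
docker swarm init --advertise-addr 203.0.113.10

# On a manager, print the join command for workers
docker swarm join-token worker

# On each node that should join the cluster, run the printed command, for example:
docker swarm join --token <worker-join-token> 203.0.113.10:2377
```
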
The layer 7 solution that ships with UCP is highly available
and fault tolerant. It is also designed to work independently of how many
nodes you're managing with UCP.

![production deployment](../../images/interlock-deploy-production-1.svg)

For a production-grade deployment, you need to perform the following actions:

1. Pick two nodes that are going to be dedicated to run the proxy service.
2. Apply labels to those nodes, so that you can constrain the proxy service to
only run on nodes with those labels.
3. Update the `ucp-interlock` service to deploy proxies using that constraint.
4. Configure your load balancer to only route traffic to the dedicated nodes.

## Select dedicated nodes

Tuning the default deployment to
have two nodes dedicated for running the two replicas of the
`ucp-interlock-proxy` service ensures:

* The proxy services have dedicated resources to handle user requests. You
  can configure these nodes with higher performance network interfaces.
@@ -22,45 +38,54 @@ deployment secure.
* The proxy service is running on two nodes. If one node fails, layer 7 routing
  continues working.

## Apply node labels

Configure the selected nodes as load balancer worker nodes (for example, `lb-00` and `lb-01`) with node labels in order to pin the Interlock Proxy service. After you log in to one of the Swarm managers, run the following commands to add node labels
to the dedicated ingress workers:

```bash
$> docker node update --label-add nodetype=loadbalancer lb-00
lb-00
$> docker node update --label-add nodetype=loadbalancer lb-01
lb-01
```

You can inspect each node to ensure the labels were successfully added:

{% raw %}
```bash
$> docker node inspect -f '{{ .Spec.Labels }}' lb-00
map[nodetype:loadbalancer]
$> docker node inspect -f '{{ .Spec.Labels }}' lb-01
map[nodetype:loadbalancer]
```
{% endraw %}

The output of each command should include the `loadbalancer` label.
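
If you want to review labels across every node in one pass, here is a small sketch that relies only on `docker node ls` and `docker node inspect` formatting:

{% raw %}
```bash
# Print each node's hostname together with its node labels
docker node inspect \
  -f '{{ .Description.Hostname }}: {{ .Spec.Labels }}' \
  $(docker node ls -q)
```
{% endraw %}
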

## Update proxy service

Now that your nodes are labelled, you need to update the `ucp-interlock-proxy`
service configuration so the proxy replicas run only on those dedicated workers.
From a manager, add a constraint to the `ucp-interlock-proxy` service to update the running service:

```bash
$> docker service update --replicas=2 \
    --constraint-add node.labels.nodetype==loadbalancer \
    --stop-signal SIGQUIT \
    --stop-grace-period=5s \
    $(docker service ls -f 'label=type=com.docker.interlock.core.proxy' -q)
```

This updates the proxy service to run two replicas, constrains them to
the workers with the label `nodetype==loadbalancer`, and configures the tasks' stop signal
to be `SIGQUIT` with a grace period of five seconds. This gives NGINX a graceful shutdown window
so that in-flight client requests can finish before the task exits.
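
If you want to double-check that these settings took effect, you can query the service spec directly; this is only a verification sketch using the standard Swarm service spec fields:

{% raw %}
```bash
# Should print SIGQUIT
docker service inspect \
  -f '{{ .Spec.TaskTemplate.ContainerSpec.StopSignal }}' \
  $(docker service ls -f 'label=type=com.docker.interlock.core.proxy' -q)

# Should print the placement constraints, including node.labels.nodetype==loadbalancer
docker service inspect \
  -f '{{ .Spec.TaskTemplate.Placement.Constraints }}' \
  $(docker service ls -f 'label=type=com.docker.interlock.core.proxy' -q)
```
{% endraw %}
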
Inspect the service to ensure the replicas have started on the desired nodes:
```bash
$> docker service ps $(docker service ls -f 'label=type=com.docker.interlock.core.proxy' -q)
ID             NAME                    IMAGE          NODE     DESIRED STATE   CURRENT STATE                     ERROR   PORTS
o21esdruwu30   interlock-proxy.1       nginx:alpine   lb-01    Running         Preparing 3 seconds ago
n8yed2gp36o6    \_ interlock-proxy.1   nginx:alpine   mgr-01   Shutdown        Shutdown less than a second ago
aubpjc4cnw79   interlock-proxy.2       nginx:alpine   lb-00    Running         Preparing 3 seconds ago
```

@@ -72,18 +97,34 @@

Then add the constraint to the `ProxyConstraints` array in the `interlock-proxy` service
configuration so it takes effect if Interlock is restored from backup:

```toml
ProxyConstraints = ["node.labels.com.docker.ucp.orchestrator.swarm==true", "node.platform.os==linux", "node.labels.nodetype==loadbalancer"]
```
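
As a sketch of one way to make that edit from the CLI, assuming the TOML is attached to the `ucp-interlock` service as its first Swarm config (the usual UCP setup), you can export the current configuration and edit it locally:

{% raw %}
```bash
# Find the name of the config object currently attached to ucp-interlock
CURRENT_CONFIG_NAME=$(docker service inspect \
  -f '{{ (index .Spec.TaskTemplate.ContainerSpec.Configs 0).ConfigName }}' ucp-interlock)

# Dump its contents to a file, then add the extra ProxyConstraints entry in config.toml
docker config inspect -f '{{ printf "%s" .Spec.Data }}' $CURRENT_CONFIG_NAME > config.toml
```
{% endraw %}

After editing `config.toml`, create it as a new Swarm config and rotate it into the `ucp-interlock` service; the exact steps for your UCP version are covered in the guide linked below.
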
[Learn how to configure ucp-interlock](../config/index.md).

Once reconfigured, you can check if the proxy service is running on the dedicated nodes:

```bash
docker service ps ucp-interlock-proxy
```

## Configure load balancer

Update the settings in the upstream load balancer (for example, ELB or F5) with the
addresses of the dedicated ingress workers. This directs all traffic to these nodes.
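
Before repointing the upstream load balancer, you can confirm that the dedicated nodes answer for a published application. The hostname and published port below are placeholders, so substitute values from your own Interlock configuration:

```bash
# Send a request directly to each dedicated ingress node, setting the Host header
# of one of your published services (demo.example.com and port 8080 are placeholders)
curl -s -o /dev/null -w "%{http_code}\n" -H "Host: demo.example.com" http://lb-00:8080/
curl -s -o /dev/null -w "%{http_code}\n" -H "Host: demo.example.com" http://lb-01:8080/
```
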
You have now configured Interlock for a dedicated ingress production environment. Refer to the [configuration information](../config/tuning.md) if you want to continue tuning.

## Production deployment configuration example

The following example shows the configuration of an eight-node Swarm cluster: three managers
and five workers. Two of the workers are configured with node labels to be dedicated
ingress cluster load balancer nodes. These nodes receive all application traffic.

There is also an upstream load balancer (such as an Elastic Load Balancer or F5). The upstream
load balancer is statically configured with the addresses of the two load balancer worker nodes.

This configuration has several benefits. The management plane is both isolated and redundant.
No application traffic hits the managers, and application ingress traffic can be routed
to the dedicated nodes. These nodes can be configured with higher performance network interfaces
to provide more bandwidth for the user services.
![Interlock 2.0 Production Deployment](../../images/interlock_production_deploy.png)

## Next steps

- [Configuring Interlock](../config/index.md)
- [Deploying applications](../usage/index.md)