mirror of https://github.com/docker/docs.git
interlock service cluster fixes
Technical:
- Fix: need four load balancer nodes because there are two proxy replicas per service cluster and the proxies publish in host port publish mode
- Fix: Interlock core service crashes without `ProxyReplicas` specified in the config
- Fix: image `docker/interlock` does not exist
- Fix: indicate testing commands should be run with access to the swarmkit API for IP lookup
- Improve: renamed the Interlock network to match the UCP default Interlock network
- Improve: changed `PublishedPort` and `PublishedSSLPort` to avoid conflicts with default UCP and DTR ports

Style:
- None

Signed-off-by: Trapier Marshall <trapier.marshall@docker.com>
This commit is contained in:
parent 178b6ceb10
commit 24faf7d265
@@ -7,7 +7,7 @@ keywords: ucp, interlock, load balancing
 
 In this example we will configure an eight (8) node Swarm cluster that uses service clusters
 to route traffic to different proxies. There are three (3) managers
-and five (5) workers. Two of the workers are configured with node labels to be dedicated
+and five (5) workers. Four of the workers are configured with node labels to be dedicated
 ingress cluster load balancer nodes. These will receive all application traffic.
 
 This example will not cover the actual deployment of infrastructure.
@@ -17,15 +17,18 @@ getting a Swarm cluster deployed.
 
-We will configure the load balancer worker nodes (`lb-00` and `lb-01`) with node labels in order to pin the Interlock Proxy
-service. Once you are logged into one of the Swarm managers run the following to add node labels
-to the dedicated ingress workers:
+We will configure four load balancer worker nodes (`lb-00` through `lb-03`) with node labels in order to pin the Interlock Proxy
+service for each Interlock service cluster. Once you are logged into one of the Swarm managers run the following to add node labels to the dedicated ingress workers:
 
 ```bash
 $> docker node update --label-add nodetype=loadbalancer --label-add region=us-east lb-00
 lb-00
-$> docker node update --label-add nodetype=loadbalancer --label-add region=us-west lb-01
+$> docker node update --label-add nodetype=loadbalancer --label-add region=us-east lb-01
 lb-01
+$> docker node update --label-add nodetype=loadbalancer --label-add region=us-west lb-02
+lb-02
+$> docker node update --label-add nodetype=loadbalancer --label-add region=us-west lb-03
+lb-03
 ```
 
 You can inspect each node to ensure the labels were successfully added:
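The four labeling commands in the hunk above differ only in node name and region. As an editorial sketch (not part of the original example), a small loop can generate them, assuming the `lb-00`..`lb-03` names and the east/west split shown above:

```bash
#!/usr/bin/env bash
# Sketch: print the label commands for the four dedicated ingress
# workers. lb-00/lb-01 get region=us-east and lb-02/lb-03 get
# region=us-west, matching the example above. Pipe the output to
# `sh` (or drop the echo) to actually apply the labels.
label_commands() {
    local i region
    for i in 0 1 2 3; do
        if [ "$i" -lt 2 ]; then region=us-east; else region=us-west; fi
        echo "docker node update --label-add nodetype=loadbalancer" \
             "--label-add region=${region} lb-0${i}"
    done
}

label_commands
```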
@@ -34,7 +37,7 @@ You can inspect each node to ensure the labels were successfully added:
 {% raw %}
 $> docker node inspect -f '{{ .Spec.Labels }}' lb-00
 map[nodetype:loadbalancer region:us-east]
-$> docker node inspect -f '{{ .Spec.Labels }}' lb-01
+$> docker node inspect -f '{{ .Spec.Labels }}' lb-02
 map[nodetype:loadbalancer region:us-west]
 {% endraw %}
 ```
@@ -56,11 +59,12 @@ PollInterval = "3s"
 ProxyArgs = []
 ProxyServiceName = "ucp-interlock-proxy-us-east"
 ProxyConfigPath = "/etc/nginx/nginx.conf"
+ProxyReplicas = 2
 ServiceCluster = "us-east"
 PublishMode = "host"
-PublishedPort = 80
+PublishedPort = 8080
 TargetPort = 80
-PublishedSSLPort = 443
+PublishedSSLPort = 8443
 TargetSSLPort = 443
 [Extensions.us-east.Config]
 User = "nginx"
@@ -81,11 +85,12 @@ PollInterval = "3s"
 ProxyArgs = []
 ProxyServiceName = "ucp-interlock-proxy-us-west"
 ProxyConfigPath = "/etc/nginx/nginx.conf"
+ProxyReplicas = 2
 ServiceCluster = "us-west"
 PublishMode = "host"
-PublishedPort = 80
+PublishedPort = 8080
 TargetPort = 80
-PublishedSSLPort = 443
+PublishedSSLPort = 8443
 TargetSSLPort = 443
 [Extensions.us-west.Config]
 User = "nginx"
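The first fix in the commit message follows from a sizing rule worth spelling out: with host-mode publishing, every proxy task binds the published ports directly on its node, and both service clusters use the same `8080`/`8443` pair, so no two proxy tasks can share a node. The cluster therefore needs one dedicated load balancer node per proxy replica per service cluster. A minimal sketch of that arithmetic:

```bash
# Sketch of the sizing rule behind the four-node requirement.
# Values mirror the configs above: two service clusters, each
# with ProxyReplicas = 2, publishing in host mode.
service_clusters=2
proxy_replicas=2
needed=$(( service_clusters * proxy_replicas ))
echo "${needed} dedicated load balancer nodes required"
```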
@@ -100,14 +105,14 @@ PollInterval = "3s"
 EOF
 oqkvv1asncf6p2axhx41vylgt
 ```
-Note that we are using "host" mode networking in order to use the same ports (`80` and `443`) in the cluster. We cannot use ingress
+Note that we are using "host" mode networking in order to use the same ports (`8080` and `8443`) in the cluster. We cannot use ingress
 networking as it reserves the port across all nodes. If you want to use ingress networking you will have to use different ports
 for each service cluster.
 
 Next we will create a dedicated network for Interlock and the extensions:
 
 ```bash
-$> docker network create -d overlay interlock
+$> docker network create -d overlay ucp-interlock
 ```
 
 Now we can create the Interlock service:
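The ingress alternative mentioned in the hunk above would require cluster-unique ports for each service cluster, since ingress mode reserves a published port on every node in the swarm. A hypothetical fragment, not part of the original example (the `8081`/`8444` values are purely illustrative):

```toml
# Illustrative only: ingress-mode publishing reserves each
# published port swarm-wide, so the two service clusters must
# use different port pairs.
[Extensions.us-east]
  PublishMode = "ingress"
  PublishedPort = 8080
  PublishedSSLPort = 8443

[Extensions.us-west]
  PublishMode = "ingress"
  PublishedPort = 8081
  PublishedSSLPort = 8444
```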
@@ -116,10 +121,10 @@ Now we can create the Interlock service:
 $> docker service create \
     --name ucp-interlock \
     --mount src=/var/run/docker.sock,dst=/var/run/docker.sock,type=bind \
-    --network interlock \
+    --network ucp-interlock \
     --constraint node.role==manager \
     --config src=com.docker.ucp.interlock.conf-1,target=/config.toml \
-    {{ page.ucp_org }}/interlock:{{ page.ucp_version }} run -c /config.toml
+    {{ page.ucp_org }}/ucp-interlock:{{ page.ucp_version }} run -c /config.toml
 sjpgq7h621exno6svdnsvpv9z
 ```
@@ -177,13 +182,13 @@ Only the service cluster that is designated will be configured for the application
 will not be configured to serve traffic for the `us-west` service cluster and vice versa. We can see this in action when we
 send requests to each service cluster.
 
-When we send a request to the `us-east` service cluster it only knows about the `us-east` application (be sure to ssh to the `lb-00` node):
+When we send a request to the `us-east` service cluster it only knows about the `us-east` application. This example uses IP address lookup from the swarm API, so ssh to a manager node or configure your shell with a UCP client bundle before testing:
 
 ```bash
 {% raw %}
-$> curl -H "Host: demo-east.local" http://$(docker node inspect -f '{{ .Status.Addr }}' lb-00)/ping
+$> curl -H "Host: demo-east.local" http://$(docker node inspect -f '{{ .Status.Addr }}' lb-00):8080/ping
 {"instance":"1b2d71619592","version":"0.1","metadata":"us-east","request_id":"3d57404cf90112eee861f9d7955d044b"}
-$> curl -H "Host: demo-west.local" http://$(docker node inspect -f '{{ .Status.Addr }}' lb-00)/ping
+$> curl -H "Host: demo-west.local" http://$(docker node inspect -f '{{ .Status.Addr }}' lb-00):8080/ping
 <html>
 <head><title>404 Not Found</title></head>
 <body bgcolor="white">
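The two requests in the hunk above combine a swarm-API address lookup with the `8080` published port from the config. As an editorial sketch, not part of the original example, the command-assembly half can be separated out and checked without a cluster; the address would normally come from `docker node inspect`:

```bash
#!/usr/bin/env bash
# Sketch: build the test request for a given load balancer address
# and Host header. Address lookup needs the swarm API, e.g.
#   docker node inspect -f '{{ .Status.Addr }}' lb-00
# so it is left to the caller; here we only assemble the command
# around a known address. 8080 is the PublishedPort configured above.
ping_command() {
    local addr="$1" host="$2"
    echo "curl -s -H \"Host: ${host}\" http://${addr}:8080/ping"
}

# Example with a placeholder address:
ping_command 10.0.0.10 demo-east.local
```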