Merge pull request #9816 from ollypom/interlock-patch

Interlock Service Cluster Config Note
Traci Morrison 2019-12-11 11:44:05 -05:00 committed by GitHub
commit 070fcc6101
2 changed files with 22 additions and 22 deletions

@@ -1,5 +1,5 @@
---
title: Deploy a layer 7 routing solution
description: Learn the deployment steps for the UCP layer 7 routing solution
keywords: routing, proxy, interlock
redirect_from:
@@ -43,11 +43,11 @@ to communicate.
4. The `ucp-interlock-extension` generates a configuration to be used by
the proxy service. By default the proxy service is NGINX, so this service
generates a standard NGINX configuration. UCP creates the `com.docker.ucp.interlock.conf-1` configuration file and uses it to configure all
the internal components of this service.
5. The `ucp-interlock` service takes the proxy configuration and uses it to
start the `ucp-interlock-proxy` service.
Now you are ready to use the layer 7 routing service with your Swarm workloads. There are three primary Interlock services: core, extension, and proxy. To learn more about these services, see [TOML configuration options](https://docs.docker.com/ee/ucp/interlock/config/#toml-file-configuration-options).
The following code sample provides the default UCP configuration, which is created automatically when Interlock is enabled as described in this section.
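If Interlock was enabled through UCP rather than deployed manually, the automatically created configuration object can be inspected from a manager node; a minimal check, assuming the default object name:
```bash
$> docker config inspect --pretty com.docker.ucp.interlock.conf-1
```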
@@ -160,7 +160,7 @@ oqkvv1asncf6p2axhx41vylgt
Next, create a dedicated network for Interlock and the extensions:
```bash
$> docker network create --driver overlay ucp-interlock
```
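Optionally, confirm that the network was created and uses the overlay driver:
```bash
$> docker network ls --filter name=ucp-interlock
```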
### Create the Interlock service
@@ -172,13 +172,12 @@ on setting up for a production environment.
```bash
$> docker service create \
--name ucp-interlock \
--mount src=/var/run/docker.sock,dst=/var/run/docker.sock,type=bind \
--network ucp-interlock \
--constraint node.role==manager \
--config src=service.interlock.conf,target=/config.toml \
{{ page.ucp_org }}/ucp-interlock:{{ page.ucp_version }} -D run -c /config.toml
sjpgq7h621exno6svdnsvpv9z
```
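Before checking the full service list, it can be worth confirming that the Interlock task converged rather than restarting in a loop (for example, because of a bad config mount); a quick check:
```bash
$> docker service ps ucp-interlock
```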
At this point, there should be three (3) services created: one for the Interlock service,
@@ -186,17 +185,17 @@ one for the extension service, and one for the proxy service:
```bash
$> docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
sjpgq7h621ex ucp-interlock replicated 1/1 {{ page.ucp_org }}/ucp-interlock:{{ page.ucp_version }}
oxjvqc6gxf91 ucp-interlock-extension replicated 1/1 {{ page.ucp_org }}/ucp-interlock-extension:{{ page.ucp_version }}
lheajcskcbby ucp-interlock-proxy replicated 1/1 {{ page.ucp_org }}/ucp-interlock-proxy:{{ page.ucp_version }} *:80->80/tcp *:443->443/tcp
```
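As an optional smoke test, the proxy's published ports can be probed from any node. Until an application is published through Interlock, expect an error status such as a 404 or 503 response:
```bash
$> curl -sI http://localhost:80/
```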
The Interlock traffic layer is now deployed.
## Next steps
- [Configure Interlock](../config/index.md)
- [Deploy applications](../usage/index.md)
- [Production deployment information](./production.md)
- [Offline installation](./offline-install.md)

@@ -26,10 +26,10 @@ $> docker network create -d overlay demo-east
$> docker network create -d overlay demo-west
```
Add the networks to the Interlock configuration file. Interlock automatically adds networks to the proxy service upon the next proxy update. See *Minimizing the number of overlay networks* in [Interlock architecture](https://docs.docker.com/ee/ucp/interlock/architecture/) for more information.
> Note
>
> Interlock will _only_ connect to the specified networks, and will connect to them all at startup.
Next, deploy the application in the `us-east` service cluster:
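A minimal sketch of such a deployment, assuming a hypothetical demo service based on `nginx:alpine` and the Interlock `com.docker.lb.*` service labels (the service name and hostname are placeholders):
```bash
# Hypothetical service name, hostname, and image; adjust for your application.
$> docker service create \
  --name demo-app-east \
  --network demo-east \
  --detach=false \
  --label com.docker.lb.hosts=demo-east.example.org \
  --label com.docker.lb.network=demo-east \
  --label com.docker.lb.port=80 \
  --label com.docker.lb.service_cluster=us-east \
  nginx:alpine
```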
@@ -88,7 +88,7 @@ Application traffic is isolated to each service cluster. Interlock also ensures
The following example configures an eight (8) node Swarm cluster that uses service clusters
to route traffic to different proxies. This example includes:
- Three (3) managers and five (5) workers
- Four workers that are configured with node labels to be dedicated
ingress cluster load balancer nodes. These nodes receive all application traffic.
@@ -127,10 +127,11 @@ map[nodetype:loadbalancer region:us-west]
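Node labels such as `nodetype=loadbalancer` and `region=us-west` are typically applied and verified with standard Swarm commands; a minimal sketch, assuming a hypothetical node name `lb-01`:
```bash
$> docker node update --label-add nodetype=loadbalancer --label-add region=us-west lb-01
$> docker node inspect -f '{{ .Spec.Labels }}' lb-01
map[nodetype:loadbalancer region:us-west]
```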
Next, create an Interlock configuration object that contains multiple extensions with varying service clusters.
>
> The configuration object specified in the following code sample applies to
> UCP versions 3.0.10 and later, and versions 3.1.4 and later. If you are
> working with UCP version 3.0.0 - 3.0.9 or 3.1.0 - 3.1.3, the config object
> should be named `com.docker.ucp.interlock.service-clusters.conf`.
```bash
$> cat << EOF | docker config create com.docker.ucp.interlock.conf-1 -
@@ -193,8 +194,8 @@ PollInterval = "3s"
EOF
oqkvv1asncf6p2axhx41vylgt
```
> Note
>
> "Host" mode networking is used in order to use the same ports (`8080` and `8443`) in the cluster. You cannot use ingress
> networking as it reserves the port across all nodes. If you want to use ingress networking, you must use different ports
> for each service cluster.
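If ingress networking were used instead, each service cluster's proxy would have to publish distinct ports (for example, `8080`/`8443` for `us-east` and `8081`/`8444` for `us-west`), and requests would then be routed by port from any node. A hypothetical illustration with placeholder ports and hostnames:
```bash
# Hypothetical: us-east proxy published on 8080, us-west proxy on 8081 (ingress mode).
$> curl -sI -H "Host: demo-east.example.org" http://<any-node-address>:8080/
$> curl -sI -H "Host: demo-west.example.org" http://<any-node-address>:8081/
```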