Merge pull request #1333 from traci-morrison/interlock3-updates

Updates for Interlock 3
Traci Morrison 2019-10-08 12:04:10 -04:00 committed by GitHub
commit 472866de7f
11 changed files with 83 additions and 63 deletions


@ -95,6 +95,7 @@ Minimizing the number of overlay networks that Interlock connects to can be acco
- Reduce the number of networks. If the architecture permits it, applications can be grouped together to use the same networks.
- Use Interlock service clusters. By segmenting Interlock, service clusters also segment which networks are connected to Interlock, reducing the number of networks to which each proxy is connected.
- Use admin-defined networks and limit the number of networks per service cluster.
#### Use Interlock VIP Mode
VIP mode can be used to reduce the impact of application updates on the Interlock proxies. It utilizes the Swarm L4 load-balancing VIPs instead of individual task IPs to load balance traffic to a more stable internal endpoint. This prevents the proxy load balancer configurations from changing for most kinds of application service updates, reducing churn for Interlock. The following features are not supported in VIP mode:


@ -107,9 +107,11 @@ It assumes you have a vanilla Swarm cluster (`docker swarm init` and `docker swarm joi
See the [Swarm](https://docs.docker.com/engine/swarm/) documentation if you need help
getting a Swarm cluster deployed.
> Note
>
> When using host mode networking, you cannot use the DNS service discovery because that
> requires overlay networking. You can use other tooling such as [Registrator](https://github.com/gliderlabs/registrator)
> that will give you that functionality if needed.
Configure the load balancer worker nodes (`lb-00` and `lb-01`) with node labels in order to pin the Interlock Proxy
service. Once you are logged in to one of the Swarm managers, run the following to add node labels:
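A minimal sketch of those commands (the `nodetype=loadbalancer` label value is taken from the proxy constraints shown later in this changeset; your label scheme may differ):

```bash
# Label the two load balancer workers so the proxy service can be pinned to them.
$> docker node update --label-add nodetype=loadbalancer lb-00
$> docker node update --label-add nodetype=loadbalancer lb-01
```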


@ -50,9 +50,11 @@ docker service update \
ucp-interlock
```
> Note
>
> When you enable the layer 7 routing
> solution from the UCP UI, the `ucp-interlock` service is started using the
> default configuration.
If you've customized the configuration used by the `ucp-interlock` service,
you must update it again to use the Docker configuration object
@ -67,7 +69,7 @@ The following sections describe how to configure the primary Interlock services:
### Core configuration
The core configuration handles the Interlock service itself. The following configuration options are available for the `ucp-interlock` service.
| Option | Type | Description |
|:-------------------|:------------|:-----------------------------------------------------------------------------------------------|
@ -78,40 +80,44 @@ The core configuration handles the Interlock service itself. These are the config
| `TLSKey` | string | Path to the key for connecting securely to the Docker API. |
| `AllowInsecure` | bool | Skip TLS verification when connecting to the Docker API via TLS. |
| `PollInterval` | string | Interval to poll the Docker API for changes. Defaults to `3s`. |
| `EndpointOverride` | string | Override the default GRPC API endpoint for extensions. The default is detected via Swarm. |
| `Extensions` | []Extension | Array of extensions as listed below. |
### Extension configuration
Interlock must contain at least one extension to service traffic. The following options are available to configure the extensions.
| Option | Type | Description |
|:-------------------|:------------|:-----------------------------------------------------------|
| `Image` | string | Name of the Docker Image to use for the extension service |
| `Args` | []string | Arguments to be passed to the Docker extension service upon creation |
| `Labels` | map[string]string | Labels to add to the extension service |
| `ContainerLabels` | map[string]string | Labels to be added to the extension service tasks |
| `Constraints` | []string | One or more [constraints](https://docs.docker.com/engine/reference/commandline/service_create/#specify-service-constraints-constraint) to use when scheduling the extension service |
| `PlacementPreferences` | []string | One or more [placement prefs](https://docs.docker.com/engine/reference/commandline/service_create/#specify-service-placement-preferences-placement-pref) to use when scheduling the extension service |
| `ServiceName` | string | Name of the extension service |
| `ProxyImage` | string | Name of the Docker Image to use for the proxy service |
| `ProxyArgs` | []string | Arguments to be passed to the Docker proxy service upon creation |
| `ProxyLabels` | map[string]string | Labels to add to the proxy service |
| `ProxyContainerLabels` | map[string]string | Labels to be added to the proxy service tasks |
| `ProxyServiceName` | string | Name of the proxy service |
| `ProxyConfigPath` | string | Path in the service for the generated proxy config |
| `ProxyReplicas` | uint | Number of proxy service replicas |
| `ProxyStopSignal` | string | Stop signal for the proxy service (for example, `SIGQUIT`) |
| `ProxyStopGracePeriod` | string | Stop grace period for the proxy service (for example, `5s`) |
| `ProxyConstraints` | []string | One or more [constraints](https://docs.docker.com/engine/reference/commandline/service_create/#specify-service-constraints-constraint) to use when scheduling the proxy service. Set the variable to `false`, as it is currently set to `true` by default. |
| `ProxyPlacementPreferences` | []string | One or more [placement prefs](https://docs.docker.com/engine/reference/commandline/service_create/#specify-service-placement-preferences-placement-pref) to use when scheduling the proxy service |
| `ProxyUpdateDelay` | string | Delay between rolling proxy container updates |
| `ServiceCluster` | string | Name of the cluster this extension services |
| `PublishMode` | string (`ingress` or `host`) | Publish mode that the proxy service uses |
| `PublishedPort` | int | Port on which the proxy service serves non-SSL traffic |
| `PublishedSSLPort` | int | Port on which the proxy service serves SSL traffic |
| `Template` | string | Docker configuration object that is used as the extension template |
| `Config` | Config | Proxy configuration used by the extensions as described in this section |
| `HitlessServiceUpdate` | bool | When set to `true`, services can be updated without restarting the proxy container. |
| `ConfigImage` | string | Name of the Docker image to use for the config service (used by hitless service updates). For example, `docker/ucp-interlock-config:3.2.1`. |
| `ConfigServiceName` | string | Name of the config service. This name is equivalent to `ProxyServiceName`. For example, `ucp-interlock-config`. |
### Proxy
Proxy options are made available to the extensions, and each extension uses the options it needs to configure the proxy service. These options provide overrides to the extension configuration.
@ -119,14 +125,14 @@ Options are made available to the extensions, and the extensions utilize the opt
Because Interlock passes the extension configuration directly to the extension, each extension has
different configuration options available. Refer to the documentation for each extension for supported options:
- [NGINX](nginx-config.md)
#### Customize the default proxy service
The default proxy service used by UCP to provide layer 7 routing is NGINX. If users try to access a route that hasn't been configured, they will see the default NGINX 404 page:
![Default NGINX page](../../images/interlock-default-service-1.png){: .with-border}
You can customize this by labeling a service with
`com.docker.lb.default_backend=true`. In this case, if users try to access a route that's
not configured, they are redirected to this service.
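For instance, a sketch of deploying such a default backend (the service name and image are placeholders, not from this changeset):

```bash
# Requests for unconfigured routes are forwarded to this service.
$> docker service create \
    --name demo-default-backend \
    --label com.docker.lb.default_backend=true \
    --label com.docker.lb.port=8080 \
    <your-default-backend-image>
```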
@ -163,8 +169,10 @@ to this demo service.
![Custom default page](../../images/interlock-default-service-2.png){: .with-border}
To minimize forwarding interruption to the updating service while updating a single replicated service, use `com.docker.lb.backend_mode=vip`.
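Hypothetically, for an already-running replicated service named `demo`, this could be applied in place:

```bash
# Switch the service's Interlock backend mode to the more stable Swarm VIP.
$> docker service update \
    --label-add com.docker.lb.backend_mode=vip \
    demo
```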
### Example Configuration
The following is an example configuration to use with the NGINX extension.
```toml
ListenAddr = ":8080"
@ -197,7 +205,7 @@ PollInterval = "3s"
## Next steps
- [Configure host mode networking](host-mode-networking.md)
- [Configure an NGINX extension](nginx-config.md)
- [Use application service labels](service-labels.md)
- [Tune the proxy service](tuning.md)
- [Update Interlock services](updates.md)


@ -6,7 +6,7 @@ redirect_from:
- /ee/ucp/interlock/deploy/configuration-reference/
---
This topic covers deploying a layer 7 routing solution into a Docker Swarm to route traffic to Swarm services. Layer 7 routing is also referred to as an HTTP routing mesh (HRM).
## Prerequisites
@ -20,8 +20,9 @@ By default, layer 7 routing is disabled, so you must first
enable this service from the UCP web UI.
1. Log in to the UCP web UI as an administrator.
2. Navigate to **Admin Settings**.
3. Select **Layer 7 Routing**.
4. Select the **Enable Layer 7 Routing** check box.
![http routing mesh](../../images/interlock-install-4.png){: .with-border}
@ -29,7 +30,7 @@ By default, the routing mesh service listens on port 8080 for HTTP and port
8443 for HTTPS. Change the ports if you already have services that are using
them.
When layer 7 routing is enabled:
1. UCP creates the `ucp-interlock` overlay network.
2. UCP deploys the `ucp-interlock` service and attaches it both to the Docker
@ -46,10 +47,9 @@ the internal components of this service.
5. The `ucp-interlock` service takes the proxy configuration and uses it to
start the `ucp-interlock-proxy` service.
Now you are ready to use the layer 7 routing service with your Swarm workloads. There are three primary Interlock services: core, extension, and proxy. To learn more about these services, see [TOML configuration options](https://docs.docker.com/ee/ucp/interlock/config/#toml-file-configuration-options).
The following code sample provides a default UCP configuration. It is created automatically when you enable Interlock, as described earlier in this section.
```toml
ListenAddr = ":8080"
@ -118,7 +118,7 @@ PollInterval = "3s"
## Enable layer 7 routing manually
Interlock can also be enabled from the command line, as described in the following sections.
### Work with the core service configuration file
@ -167,8 +167,8 @@ $> docker network create -d overlay interlock
Now you can create the Interlock service. Note the requirement to constrain it to a manager. The
Interlock core service must have access to a Swarm manager; however, the extension and proxy services
are recommended to run on workers. See the [Production](./production.md) section for more information
on setting up for a production environment.
```bash
$> docker service create \


@ -5,7 +5,7 @@ keywords: routing, proxy, interlock
---
To install Interlock on a Docker cluster without internet access, the Docker images must be loaded. This topic describes how to export the images from a local Docker
engine and then load them to the Docker Swarm cluster.
First, using an existing Docker engine, save the images:
@ -15,17 +15,19 @@ $> docker save {{ page.ucp_org }}/ucp-interlock-extension:{{ page.ucp_version }}
$> docker save {{ page.ucp_org }}/ucp-interlock-proxy:{{ page.ucp_version }} > interlock-proxy-nginx.tar
```
> Note
>
> Replace `{{ page.ucp_org }}/ucp-interlock-extension:{{ page.ucp_version
> }}` and `{{ page.ucp_org }}/ucp-interlock-proxy:{{ page.ucp_version }}` with the
> corresponding extension and proxy image if you are not using NGINX.
You should have the following three files:
- `interlock.tar`: This is the core Interlock application.
- `interlock-extension-nginx.tar`: This is the Interlock extension for NGINX.
- `interlock-proxy-nginx.tar`: This is the official NGINX image based on Alpine.
Next, copy these files to each node in the Docker Swarm cluster and run the following commands to load each image:
```bash
# Load each of the three images saved earlier.
$> docker load < interlock.tar
$> docker load < interlock-extension-nginx.tar
$> docker load < interlock-proxy-nginx.tar
```


@ -62,7 +62,7 @@ map[nodetype:loadbalancer]
The command should print "loadbalancer".
## Update proxy service
Now that your nodes are labeled, you need to update the `ucp-interlock-proxy`
service configuration to deploy the proxy service with the correct constraints (constrained to those
workers). From a manager, add a constraint to the `ucp-interlock-proxy` service to update the running service:
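A sketch of that update, assuming the `nodetype=loadbalancer` label used elsewhere in this guide:

```bash
# Constrain the proxy tasks to the dedicated load balancer workers.
$> docker service update \
    --constraint-add node.labels.nodetype==loadbalancer \
    ucp-interlock-proxy
```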
@ -98,7 +98,7 @@ configuration so it takes effect if Interlock is restored from backup:
ProxyConstraints = ["node.labels.com.docker.ucp.orchestrator.swarm==true", "node.platform.os==linux", "node.labels.nodetype==loadbalancer"]
```
By default, the config service is global, scheduling one task on every node in the cluster, but it will use proxy constraints if available. To add or change scheduling constraints, update the `ProxyConstraints` variable in the Interlock configuration file. See [Configure ucp-interlock](../config/index.md) for more information.
Once reconfigured, you can check if the proxy service is running on the dedicated nodes:
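For example, using the service name from this guide:

```bash
# Each task should be scheduled on a node labeled nodetype=loadbalancer.
$> docker service ps ucp-interlock-proxy
```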


@ -94,6 +94,7 @@ networks:
Note that:
* Docker Compose files _must_ reference networks as external. Include `external: true` in the `docker-compose.yml` file (see the sketch after this list).
* The `com.docker.lb.hosts` label defines the hostname for the service. When
the layer 7 routing solution gets a request containing `app.example.org` in
the host header, that request is forwarded to the demo service.
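As a usage note, the external network that the Compose file references must already exist before the stack is deployed; a sketch, assuming the network is named `demo`:

```bash
# Create the overlay network that the Compose file declares as external.
$> docker network create -d overlay demo
```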


@ -28,7 +28,7 @@ VIP mode optimizes for fewer proxy updates in a tradeoff for a reduced feature s
Most application updates do not require configuring backends in VIP mode.
In VIP routing mode, Interlock uses the service VIP (a persistent endpoint that exists from service creation to service deletion) as the proxy backend.
VIP routing mode was introduced in UCP 3.0 version 3.0.3 and 3.1 version 3.1.2.
VIP routing mode applies L7 routing and then sends packets to the Swarm L4 load balancer, which routes traffic to service containers.
![vip mode](../../images/interlock-vip-mode.png)
@ -91,21 +91,19 @@ $> docker service create \
```
Interlock detects when the service is available and publishes it. After tasks are running
and the proxy service is updated, the application is available via any URL that is not
configured:
![Default Backend](../../images/interlock_default_backend.png)
#### Publish a service using "vip" backend mode
1. Create an overlay network so that service traffic is isolated and secure:
```bash
$> docker network create -d overlay demo
1se1glh749q1i4pw0kf26mfx5
```
2. Create the initial service:
```bash
# A sketch of the full command, which is truncated in this diff; the image and
# label values below are assumptions based on surrounding examples in these docs.
$> docker service create \
    --name demo \
    --network demo \
    --detach=false \
    --label com.docker.lb.hosts=demo.local \
    --label com.docker.lb.port=8080 \
    --label com.docker.lb.backend_mode=vip \
    ehazlett/docker-demo
```


@ -6,7 +6,7 @@ keywords: ucp, interlock, load balancing, routing
## Configure Proxy Services
With the node labels, you can re-configure the Interlock Proxy services to be constrained to the
workers for each region. For example, from a manager, run the following commands to pin the proxy services to the ingress workers:
```bash
$> docker service update \
@ -26,6 +26,12 @@ $> docker network create -d overlay demo-east
$> docker network create -d overlay demo-west
```
Add the networks to the Interlock configuration file. Interlock automatically adds networks to the proxy service upon the next proxy update. See *Minimizing the number of overlay networks* in [Interlock architecture](https://docs.docker.com/ee/ucp/interlock/architecture/) for more information.
> Note
>
> Interlock will _only_ connect to the specified networks, and will connect to them all at startup.
Next, deploy the application in the `us-east` service cluster:
```bash
@ -120,7 +126,9 @@ map[nodetype:loadbalancer region:us-west]
Next, create an Interlock configuration object that contains multiple extensions with varying service clusters.
> Important
>
> The configuration object specified in the following code sample applies to UCP versions 3.0.10 and later, and versions 3.1.4 and later.
If you are working with UCP version 3.0.0 - 3.0.9 or 3.1.0 - 3.1.3, specify `com.docker.ucp.interlock.service-clusters.conf`.
@ -185,9 +193,11 @@ PollInterval = "3s"
EOF
oqkvv1asncf6p2axhx41vylgt
```
Note that "host" mode networking is used in order to use the same ports (`8080` and `8443`) in the cluster. You cannot use ingress
networking as it reserves the port across all nodes. If you want to use ingress networking, you must use different ports
for each service cluster.
> Note
>
> "Host" mode networking is used in order to use the same ports (`8080` and `8443`) in the cluster. You cannot use ingress
> networking as it reserves the port across all nodes. If you want to use ingress networking, you must use different ports
> for each service cluster.
Next, create a dedicated network for Interlock and the extensions:
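A sketch of that step (the network name `interlock` is an assumption, mirroring the manual-enable example earlier in this changeset):

```bash
# Dedicated overlay network shared by the Interlock core service and its extensions.
$> docker network create -d overlay interlock
```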


@ -14,14 +14,11 @@ You can publish a service and configure the proxy for persistent (sticky) sessio
To configure sticky sessions using cookies:
1. Create an overlay network so that service traffic is isolated and secure, as shown in the following example:
```bash
$> docker network create -d overlay demo
1se1glh749q1i4pw0kf26mfx5
```
2. Create a service with the cookie to use for sticky sessions:
```bash
# A sketch of the full command, which is truncated in this diff; the cookie label
# value and image are assumptions based on the Interlock service-label reference.
$> docker service create \
    --name demo \
    --network demo \
    --detach=false \
    --label com.docker.lb.hosts=demo.local \
    --label com.docker.lb.port=8080 \
    --label com.docker.lb.sticky_session_cookie=session \
    ehazlett/docker-demo
```
@ -75,14 +72,11 @@ The following example shows how to configure sticky sessions using client IP has
as cookies but enables workarounds for some applications that cannot use the other method. When using IP hashing, reconfigure Interlock proxy to use [host mode networking](../config/host-mode-networking.md), because the default `ingress` networking mode uses SNAT, which obscures client IP addresses.
1. Create an overlay network so that service traffic is isolated and secure:
```bash
$> docker network create -d overlay demo
1se1glh749q1i4pw0kf26mfx5
```
2. Create a service that uses IP hashing for sticky sessions:
```bash
# A sketch of the full command, which is truncated in this diff; the ip_hash label
# and image are assumptions based on the Interlock service-label reference.
$> docker service create \
    --name demo \
    --network demo \
    --detach=false \
    --label com.docker.lb.hosts=demo.local \
    --label com.docker.lb.port=8080 \
    --label com.docker.lb.ip_hash=true \
    ehazlett/docker-demo
```
@ -128,6 +122,8 @@ $> curl -vs -H "Host: demo.local" http://127.0.0.1/ping
You can use `docker service scale demo=10` to add more replicas. When scaled, requests are pinned
to a specific backend.
> Note
>
> Due to the way the IP hashing works for extensions, you will notice a new upstream address when scaling replicas. This is
> expected, because internally the proxy uses the new set of replicas to determine a backend on which to pin. When the upstreams are
> determined, a new "sticky" backend is chosen as the dedicated upstream.


@ -89,7 +89,7 @@ Because the private key and certificate are stored as Docker secrets, you can
easily scale the number of replicas used for running the proxy service. Docker
distributes the secrets to the replicas.
Set up your CLI client with a [UCP client bundle](../../user-access/cli.md)
and deploy the service:
```bash
@ -107,7 +107,7 @@ After creating the DNS entry, you can access your service:
https://<hostname>:<https-port>
```
For this example:
* `hostname` is the name you specified with the `com.docker.lb.hosts` label.
* `https-port` is the port you configured in the [UCP settings](../deploy/index.md).
@ -135,7 +135,9 @@ using a version of `curl` that includes the SNI header with insecure requests.
Otherwise, `curl` displays an error saying that the SSL handshake
was aborted.
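A hypothetical example of such a request (hostname, port, and IP are placeholders):

```bash
# --resolve makes curl connect to 127.0.0.1 while still sending the hostname in
# the SNI header; --insecure skips certificate verification.
$> curl --insecure \
    --resolve app.example.org:8443:127.0.0.1 \
    https://app.example.org:8443/ping
```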
> Note
>
> Currently there is no way to update expired certificates using this method.
> The proper way is to create a new secret and then update the corresponding service.
## Let your service handle TLS