Introduce Interlock (#311)

* Introduce Interlock

* Organize directory structure for Interlock
This commit is contained in:
Joao Fernandes 2017-12-04 14:14:36 -08:00 committed by Jim Galasyn
parent 572cac2928
commit 2cfe2b8a83
26 changed files with 1667 additions and 0 deletions

View File

@ -1732,6 +1732,62 @@ manuals:
title: Manage secrets
- path: /datacenter/ucp/3.0/guides/user/secrets/grant-revoke-access/
title: Grant access to secrets
- sectiontitle: Interlock
  section:
  - title: Interlock overview
    path: /datacenter/ucp/3.0/guides/interlock/
  - sectiontitle: Introduction
    section:
    - title: What is Interlock
      path: /datacenter/ucp/3.0/guides/interlock/intro/
    - title: Architecture
      path: /datacenter/ucp/3.0/guides/interlock/intro/architecture/
  - sectiontitle: Deployment
    section:
    - title: Get started
      path: /datacenter/ucp/3.0/guides/interlock/install/
    - title: Production
      path: /datacenter/ucp/3.0/guides/interlock/install/production/
    - title: Offline install
      path: /datacenter/ucp/3.0/guides/interlock/install/offline/
  - sectiontitle: Configuration
    section:
    - title: Interlock
      path: /datacenter/ucp/3.0/guides/interlock/configuration/
    - title: Service labels
      path: /datacenter/ucp/3.0/guides/interlock/configuration/service-labels/
  - sectiontitle: Extensions
    section:
    - title: Nginx
      path: /datacenter/ucp/3.0/guides/interlock/extensions/nginx/
    - title: HAProxy
      path: /datacenter/ucp/3.0/guides/interlock/extensions/haproxy/
  - sectiontitle: Deploy apps with Interlock
    section:
    - title: Basic deployment
      path: /datacenter/ucp/3.0/guides/interlock/usage/
    - title: Applications with SSL
      path: /datacenter/ucp/3.0/guides/interlock/usage/ssl/
    - title: Application redirects
      path: /datacenter/ucp/3.0/guides/interlock/usage/redirects/
    - title: Persistent (sticky) sessions
      path: /datacenter/ucp/3.0/guides/interlock/usage/sessions/
    - title: Websockets
      path: /datacenter/ucp/3.0/guides/interlock/usage/websockets/
    - title: Canary application instances
      path: /datacenter/ucp/3.0/guides/interlock/usage/canary/
    - title: Service clusters
      path: /datacenter/ucp/3.0/guides/interlock/usage/service-clusters/
    - title: Context/Path based routing
      path: /datacenter/ucp/3.0/guides/interlock/usage/context/
    - title: Host mode networking
      path: /datacenter/ucp/3.0/guides/interlock/usage/host-mode-networking/
  - sectiontitle: Operations
    section:
    - title: Updates
      path: /datacenter/ucp/3.0/guides/interlock/ops/
    - title: Tuning
      path: /datacenter/ucp/3.0/guides/interlock/ops/tuning/
- path: /datacenter/ucp/3.0/reference/api/
title: API reference
- path: /datacenter/ucp/3.0/guides/release-notes/

View File

@ -0,0 +1,87 @@
---
title: Configure Interlock
description: Learn about Interlock, an application routing and load balancing system
for Docker Swarm.
keywords: ucp, interlock, load balancing
---
Interlock configuration is managed with a [TOML](https://github.com/toml-lang/toml) file.
The following sections describe how to configure the various components of Interlock.
## Core
The core configuration handles the Interlock service itself. The following options
are available:
| Option | Type | Description |
|:-------------------|:------------|:----------------------------------------------------------------------------------------------|
| `ListenAddr` | string | address to serve the Interlock GRPC API (default: `:8080`) |
| `DockerURL` | string | path to the socket or TCP address to the Docker API (default: `unix:///var/run/docker.sock`) |
| `TLSCACert` | string | path to the CA certificate for connecting securely to the Docker API |
| `TLSCert` | string | path to the certificate for connecting securely to the Docker API |
| `TLSKey` | string | path to the key for connecting securely to the Docker API |
| `AllowInsecure` | bool | skip TLS verification when connecting to the Docker API via TLS |
| `PollInterval` | string | interval to poll the Docker API for changes (default: `3s`) |
| `EndpointOverride` | string | override the default GRPC API endpoint for extensions (by default this is detected via Swarm) |
| `Extensions` | []Extension | array of extensions as listed below |
## Extension
Interlock must contain at least one extension to service traffic. The following options are
available to configure the extensions.
| Option | Type | Description |
|:-------------------|:-----------------------------|:---------------------------------------------------------------------|
| `Image` | string | name of the Docker Image to use for the extension service |
| `Args` | []string | arguments to be passed to the Docker extension service upon creation |
| `Labels` | map[string]string | labels to be added to the extension service |
| `ServiceName` | string | name of the extension service |
| `ProxyImage` | string | name of the Docker Image to use for the proxy service |
| `ProxyArgs` | []string | arguments to be passed to the Docker proxy service upon creation |
| `ProxyLabels` | map[string]string | labels to be added to the proxy service |
| `ProxyServiceName` | string | name of the proxy service |
| `ProxyConfigPath` | string | path in the service for the generated proxy config |
| `ServiceCluster`   | string                       | name of the service cluster that this extension serves                |
| `PublishMode`      | string (`ingress` or `host`) | publish mode that the proxy service uses                              |
| `PublishedPort`    | int                          | port on which the proxy service serves non-SSL traffic                |
| `PublishedSSLPort` | int                          | port on which the proxy service serves SSL traffic                    |
| `Template` | string | Docker config object that is used as the extension template |
| `Config` | Config | proxy configuration used by the extensions as listed below |
## Proxy
The following options are made available to the extensions. Each extension uses whichever options it needs to configure
the proxy service. This gives the user a way to override the extension configuration.
Interlock passes the proxy configuration directly through to the extension, so each extension has
different configuration options available. See the docs for each extension for the officially supported options.
## Example Configuration
The following is an example configuration to use with the Nginx extension.
```toml
ListenAddr = ":8080"
DockerURL = "unix:///var/run/docker.sock"
PollInterval = "3s"
[Extensions]
[Extensions.default]
Image = "docker/interlock-extension-nginx:latest"
Args = ["-D"]
ProxyImage = "nginx:alpine"
ProxyArgs = []
ProxyConfigPath = "/etc/nginx/nginx.conf"
ServiceCluster = ""
PublishMode = "ingress"
PublishedPort = 80
TargetPort = 80
PublishedSSLPort = 443
TargetSSLPort = 443
[Extensions.default.Config]
User = "nginx"
PidPath = "/var/run/proxy.pid"
WorkerProcesses = 1
RlimitNoFile = 65535
MaxConnections = 2048
[Extensions.default.Labels]
extension_name = "defaultExtension"
[Extensions.default.ProxyLabels]
proxy_name = "defaultProxy"
```
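As a sketch, you can store this configuration as a Swarm config object so the Interlock service can mount it. The file name `config.toml` is illustrative; the object name matches the deployment examples later in this guide:

```bash
# Save the TOML above to config.toml, then create the config object
# that the Interlock service mounts at startup.
docker config create service.interlock.conf config.toml
```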

View File

@ -0,0 +1,27 @@
---
title: Interlock service labels
description: Learn about Interlock, an application routing and load balancing system
for Docker Swarm.
keywords: ucp, interlock, load balancing
---
Services published through Interlock are configured with service labels.
The following table describes the available labels and what they do.
| Label | Description | Example |
|:---------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------|
| `com.docker.lb.hosts`                   | Comma-separated list of the hosts that the service should serve                                                                                   | `example.com,test.com`  |
| `com.docker.lb.port` | Port to use for internal upstream communication | `8080` |
| `com.docker.lb.network` | Name of network the proxy service should attach to for upstream connectivity | `app-network-a` |
| `com.docker.lb.context_root` | Context or path to use for the application | `/app` |
| `com.docker.lb.context_root_rewrite` | Boolean to enable rewrite for the context root | `true` |
| `com.docker.lb.ssl_only` | Boolean to force SSL for application | `true` |
| `com.docker.lb.ssl_cert` | Docker secret to use for the SSL certificate | `example.com.cert` |
| `com.docker.lb.ssl_key` | Docker secret to use for the SSL key | `example.com.key` |
| `com.docker.lb.websocket_endpoints`    | Comma-separated list of endpoints to configure to be upgraded for websockets                                                                      | `/ws,/foo`              |
| `com.docker.lb.service_cluster` | Name of the service cluster to use for the application | `us-east` |
| `com.docker.lb.ssl_backend` | Enable SSL communication to the upstreams | `true` |
| `com.docker.lb.ssl_backend_tls_verify` | Verification mode for the upstream TLS | `none` |
| `com.docker.lb.sticky_session_cookie` | Cookie to use for sticky sessions | `none` |
| `com.docker.lb.redirects`              | Semicolon-separated list of redirects to add, in the format `<source>,<target>` (for example, `http://old.example.com,http://new.example.com;`)  | `none`                  |
| `com.docker.lb.ssl_passthrough` | Enable SSL passthrough | `false` |
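For example, a minimal sketch of publishing a service with several of these labels; the hostname, secret names, and image are illustrative placeholders:

```bash
# Sketch: publish a service at demo.local with SSL termination and a
# sticky-session cookie. Hostname, secrets, and image are placeholders.
docker service create \
    --name demo \
    --label com.docker.lb.hosts=demo.local \
    --label com.docker.lb.port=8080 \
    --label com.docker.lb.ssl_cert=demo.local.cert \
    --label com.docker.lb.ssl_key=demo.local.key \
    --label com.docker.lb.sticky_session_cookie=session \
    ehazlett/docker-demo
```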

View File

@ -0,0 +1,28 @@
---
title: Use HAProxy with Interlock
description: Learn about Interlock, an application routing and load balancing system
for Docker Swarm.
keywords: ucp, interlock, load balancing
---
The following configuration options are available:
| Option | Type | Description |
|:--------------------|:-------|:--------------------------------------------------------------------------------|
| `PidPath` | string | path to the pid file for the proxy service |
| `MaxConnections`    | int    | maximum number of connections for the proxy service                              |
| `ConnectTimeout` | int | timeout in seconds for clients to connect |
| `ClientTimeout` | int | timeout in seconds for the service to send a request to the proxied upstream |
| `ServerTimeout` | int | timeout in seconds for the service to read a response from the proxied upstream |
| `AdminUser` | string | username to be used with authenticated access to the proxy service |
| `AdminPass` | string | password to be used with authenticated access to the proxy service |
| `SSLOpts` | string | options to be passed when configuring SSL |
| `SSLDefaultDHParam` | int | size of DH parameters |
| `SSLVerify` | string | SSL client verification |
| `SSLCiphers` | string | SSL ciphers to use for the proxy service |
| `SSLProtocols` | string | enable the specified TLS protocols |
## Notes
When using SSL termination, the certificate and key must be combined into a single PEM file (for example, `cat cert.pem key.pem > combined.pem`). The HAProxy extension
uses only the certificate label to configure SSL.
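A minimal sketch of preparing the combined certificate, assuming files named `cert.pem` and `key.pem` and an illustrative secret name:

```bash
# Combine the certificate and key into a single PEM file for HAProxy.
cat cert.pem key.pem > combined.pem

# Store it as a Docker secret; per the note above, only the certificate
# label is used to configure SSL with the HAProxy extension.
docker secret create demo.local.combined combined.pem
docker service create \
    --name demo \
    --label com.docker.lb.hosts=demo.local \
    --label com.docker.lb.port=8080 \
    --label com.docker.lb.ssl_cert=demo.local.combined \
    ehazlett/docker-demo
```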

View File

@ -0,0 +1,30 @@
---
title: Use NGINX with Interlock
description: Learn about Interlock, an application routing and load balancing system
for Docker Swarm.
keywords: ucp, interlock, load balancing
---
The following configuration options are available for the Nginx extension:
| Option | Type | Description |
|:------------------------|:-------|:----------------------------------------------------------------------------------------------------|
| `User` | string | user to be used in the proxy |
| `PidPath` | string | path to the pid file for the proxy service |
| `MaxConnections`         | int    | maximum number of connections for the proxy service                                                  |
| `ConnectTimeout` | int | timeout in seconds for clients to connect |
| `SendTimeout` | int | timeout in seconds for the service to send a request to the proxied upstream |
| `ReadTimeout` | int | timeout in seconds for the service to read a response from the proxied upstream |
| `IPHash` | bool | specifies that requests are distributed between servers based on client IP addresses |
| `SSLOpts` | string | options to be passed when configuring SSL |
| `SSLDefaultDHParam` | int | size of DH parameters |
| `SSLDefaultDHParamPath` | string | path to DH parameters file |
| `SSLVerify` | string | SSL client verification |
| `WorkerProcesses` | string | number of worker processes for the proxy service |
| `RLimitNoFile`           | int    | maximum number of open files for the proxy service                                                   |
| `SSLCiphers` | string | SSL ciphers to use for the proxy service |
| `SSLProtocols` | string | enable the specified TLS protocols |
| `AccessLogPath`          | string | path to use for access logs (default: `/dev/stdout`)                                                 |
| `ErrorLogPath`           | string | path to use for error logs (default: `/dev/stdout`)                                                  |
| `MainLogFormat` | string | [Format](http://nginx.org/en/docs/http/ngx_http_log_module.html#log_format) to use for main logger |
| `TraceLogFormat` | string | [Format](http://nginx.org/en/docs/http/ngx_http_log_module.html#log_format) to use for trace logger |
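As a sketch, these options are set under the extension's `Config` table in the Interlock TOML configuration. The values below are illustrative and mirror the example on the configuration page:

```toml
# Sketch: NGINX proxy options live under the extension Config table.
# Values are illustrative; see the table above for the full option set.
[Extensions.default]
  Image = "docker/interlock-extension-nginx:latest"
  ProxyImage = "nginx:alpine"
  ProxyConfigPath = "/etc/nginx/nginx.conf"

  [Extensions.default.Config]
    User = "nginx"
    PidPath = "/var/run/proxy.pid"
    WorkerProcesses = 1
    RlimitNoFile = 65535
    MaxConnections = 2048
    AccessLogPath = "/dev/stdout"
```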

View File

@ -0,0 +1,46 @@
---
title: Interlock overview
description: Learn about Interlock, an application routing and load balancing system
for Docker Swarm.
keywords: ucp, interlock, load balancing
---
Interlock is an application routing and load balancing system for Docker Swarm. It uses
the Docker Remote API to automatically configure extensions such as Nginx or HAProxy for
application traffic.
## About
- [Introduction](intro/index.md)
  - [What is Interlock](intro/index.md)
  - [Architecture](intro/architecture.md)
- [Deployment](install/)
  - [Requirements](install/index.md#requirements)
  - [Installation](install/index.md#deployment)
## Configuration
- [Interlock configuration](configuration/index.md)
- [Service labels](configuration/service-labels.md)
## Extensions
- [NGINX](extensions/nginx.md)
- [HAProxy](extensions/haproxy.md)
## Usage
- [Basic deployment](usage/index.md)
- [Applications with SSL](usage/ssl.md)
- [Application redirects](usage/redirects.md)
- [Persistent (sticky) sessions](usage/sessions.md)
- [Websockets](usage/websockets.md)
- [Canary application instances](usage/canary.md)
- [Service clusters](usage/service-clusters.md)
- [Context/path based routing](usage/context.md)
- [Host mode networking](usage/host-mode-networking.md)
## Operations
- [Updates](ops/index.md)
- [Tuning](ops/tuning.md)

View File

@ -0,0 +1,82 @@
---
title: Get started with Interlock
description: Learn about Interlock, an application routing and load balancing system
for Docker Swarm.
keywords: ucp, interlock, load balancing
---
## Requirements
- [Docker](https://www.docker.com) version 17.06+ is required to use Interlock
- Docker must be running in [Swarm mode](https://docs.docker.com/engine/swarm/)
- Internet access (see [Offline Installation](offline.md) for installing without internet access)
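A quick sketch to verify these requirements on a node:

```bash
{% raw %}
# Check the engine version (must be 17.06 or newer).
docker version --format '{{ .Server.Version }}'

# Confirm the engine is part of an active Swarm.
docker info --format '{{ .Swarm.LocalNodeState }}'
{% endraw %}
```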
## Deployment
Interlock uses a configuration file for the core service. The following is an example configuration
to get started. To take advantage of the deployment and recovery features in Swarm, we will
store it as a Docker config object:
```bash
$> cat << EOF | docker config create service.interlock.conf -
ListenAddr = ":8080"
DockerURL = "unix:///var/run/docker.sock"
PollInterval = "3s"
[Extensions]
[Extensions.default]
Image = "interlockpreview/interlock-extension-nginx:2.0.0-preview"
Args = ["-D"]
ProxyImage = "nginx:alpine"
ProxyArgs = []
ProxyConfigPath = "/etc/nginx/nginx.conf"
ServiceCluster = ""
PublishMode = "ingress"
PublishedPort = 80
TargetPort = 80
PublishedSSLPort = 443
TargetSSLPort = 443
[Extensions.default.Config]
User = "nginx"
PidPath = "/var/run/proxy.pid"
WorkerProcesses = 1
RlimitNoFile = 65535
MaxConnections = 2048
EOF
oqkvv1asncf6p2axhx41vylgt
```
Next we will create a dedicated network for Interlock and the extensions:
```bash
$> docker network create -d overlay interlock
```
Now we can create the Interlock service. Note the constraint to a manager node: the
Interlock core service must have access to a Swarm manager, but the extension and proxy services
should run on workers. See the [Production](production.md) section for more information
on setting up a production environment.
```bash
$> docker service create \
--name interlock \
--mount src=/var/run/docker.sock,dst=/var/run/docker.sock,type=bind \
--network interlock \
--constraint node.role==manager \
--config src=service.interlock.conf,target=/config.toml \
interlockpreview/interlock:2.0.0-preview -D run -c /config.toml
sjpgq7h621exno6svdnsvpv9z
```
Three (3) services are created: one for the Interlock core service,
one for the extension service, and one for the proxy service:
```bash
$> docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
lheajcskcbby modest_raman replicated 1/1 nginx:alpine *:80->80/tcp *:443->443/tcp
oxjvqc6gxf91 keen_clarke replicated 1/1 interlockpreview/interlock-extension-nginx:2.0.0-preview
sjpgq7h621ex interlock replicated 1/1 interlockpreview/interlock:2.0.0-preview
```
The Interlock traffic layer is now deployed. Continue with [Deploying Applications](../usage/index.md) to publish applications.

Binary file not shown.

View File

@ -0,0 +1,38 @@
---
title: Deploy Interlock offline
description: Learn about Interlock, an application routing and load balancing system
for Docker Swarm.
keywords: ucp, interlock, load balancing
---
To install Interlock on a Docker cluster without internet access, the Docker images must
be loaded manually. This guide shows how to export the images from a local Docker
engine and then load them onto the nodes of the Docker Swarm cluster.
First, save the images using an existing Docker engine with internet access:
```bash
$> docker save docker/interlock:latest > interlock.tar
$> docker save docker/interlock-extension-nginx:latest > interlock-extension-nginx.tar
$> docker save nginx:alpine > nginx.tar
```
Note: replace `docker/interlock-extension-nginx:latest` and `nginx:alpine` with the corresponding
extension and proxy image if you are not using Nginx.
You should have three files:
- `interlock.tar`: the core Interlock application
- `interlock-extension-nginx.tar`: the Interlock extension for Nginx
- `nginx.tar`: the official Nginx image based on Alpine
Copy these files to each node in the Docker Swarm cluster and run the following to load each image:
```bash
$> docker load < interlock.tar
$> docker load < interlock-extension-nginx.tar
$> docker load < nginx.tar
```
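You can confirm the images are present on each node (a quick sketch):

```bash
# List local images; the Interlock, extension, and proxy images should appear.
docker image ls
```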
After loading the images on each node, continue to the [Deployment](index.md#deployment) section to
finish the installation.

View File

@ -0,0 +1,84 @@
---
title: Deploy Interlock for Production
description: Learn about Interlock, an application routing and load balancing system
for Docker Swarm.
keywords: ucp, interlock, load balancing
---
## Production Deployment
In this section you will find documentation on configuring Interlock
for a production environment. If you have not yet deployed Interlock, see the
[Getting Started](index.md) section first, as this information builds upon the
basic deployment. This example does not cover the actual deployment of infrastructure.
It assumes you have a vanilla Swarm cluster (`docker swarm init` and `docker swarm join` from the nodes).
See the [Swarm](https://docs.docker.com/engine/swarm/) documentation if you need help
getting a Swarm cluster deployed.
In this example we will configure an eight (8) node Swarm cluster. There are three (3) managers
and five (5) workers. Two of the workers are configured with node labels to be dedicated
ingress cluster load balancer nodes. These will receive all application traffic.
There is also an upstream load balancer (such as an Elastic Load Balancer or F5). The upstream
load balancers will be statically configured for the two load balancer worker nodes.
This configuration has several benefits. The management plane is both isolated and redundant.
No application traffic hits the managers and application ingress traffic can be routed
to the dedicated nodes. These nodes can be configured with higher performance network interfaces
to provide more bandwidth for the user services.
![Interlock 2.0 Production Deployment](interlock_production_deploy.png)
## Node Labels
We will configure the load balancer worker nodes (`lb-00` and `lb-01`) with node labels in order to pin the Interlock Proxy
service. Once you are logged into one of the Swarm managers run the following to add node labels
to the dedicated ingress workers:
```bash
$> docker node update --label-add nodetype=loadbalancer lb-00
lb-00
$> docker node update --label-add nodetype=loadbalancer lb-01
lb-01
```
You can inspect each node to ensure the labels were successfully added:
```bash
{% raw %}
$> docker node inspect -f '{{ .Spec.Labels }}' lb-00
map[nodetype:loadbalancer]
$> docker node inspect -f '{{ .Spec.Labels }}' lb-01
map[nodetype:loadbalancer]
{% endraw %}
```
## Configure Proxy Service
Once we have the node labels we can re-configure the Interlock Proxy service to be constrained to those
workers. Again, from a manager run the following to pin the proxy service to the ingress workers:
```bash
$> docker service update --replicas=2 \
--constraint-add node.labels.nodetype==loadbalancer \
--stop-signal SIGTERM \
--stop-grace-period=5s \
$(docker service ls -f 'label=type=com.docker.interlock.core.proxy' -q)
```
This updates the proxy service to two (2) replicas, ensures they are constrained to
the workers with the label `nodetype==loadbalancer`, and configures the stop signal for the tasks
to be `SIGTERM` with a grace period of five (5) seconds. The stop signal and grace period give Nginx
time to close open connections before exiting, so in-flight client requests can finish.
Inspect the service to ensure the replicas have started on the desired nodes:
```bash
$> docker service ps $(docker service ls -f 'label=type=com.docker.interlock.core.proxy' -q)
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
o21esdruwu30 interlock-proxy.1 nginx:alpine lb-01 Running Preparing 3 seconds ago
n8yed2gp36o6 \_ interlock-proxy.1 nginx:alpine mgr-01 Shutdown Shutdown less than a second ago
aubpjc4cnw79 interlock-proxy.2 nginx:alpine lb-00 Running Preparing 3 seconds ago
```
Once configured, you can update the settings in the upstream load balancer (ELB, F5, and so on) with the
addresses of the dedicated ingress workers. This directs all traffic to these nodes.
You have now configured Interlock for a dedicated ingress production environment. See the [Configuration](../configuration/index.md) section
if you want to continue tuning.

View File

@ -0,0 +1,39 @@
---
title: Interlock architecture
description: Learn about Interlock, an application routing and load balancing system
for Docker Swarm.
keywords: ucp, interlock, load balancing
---
The following definitions are used throughout this document:
- Cluster: A group of compute resources running Docker
- Swarm: A Docker cluster running in Swarm mode
- Upstream: An upstream container that serves an application
- Proxy Service: A service that provides load balancing and proxying (such as Nginx)
- Extension Service: A helper service that configures the proxy service
- Service Cluster: An Interlock extension+proxy service combination
- GRPC: A high-performance RPC framework
## Services
Interlock runs entirely as Docker Swarm services. There are three core services
in an Interlock routing layer: core, extension and proxy.
## Core
The core service is responsible for interacting with the Docker Remote API and building
an upstream configuration for the extensions. This is served on a GRPC API that the
extensions are configured to access.
## Extension
The extension service is a helper service that queries the Interlock GRPC API for the
upstream configuration. The extension service uses this to configure
the proxy service. For proxy services that use files such as Nginx or HAProxy the
extension service generates the file and sends it to Interlock using the GRPC API. Interlock
then updates the corresponding Docker Config object for the proxy service.
## Proxy
The proxy service handles the actual requests for the upstream application services. These
are configured using the data created by the corresponding extension service.
Interlock manages both the extension and proxy service updates for both configuration changes
and application service deployments. There is no intervention from the operator required.

View File

@ -0,0 +1,60 @@
---
title: What is Interlock
description: Learn about Interlock, an application routing and load balancing system
for Docker Swarm.
keywords: ucp, interlock, load balancing
---
Interlock is an application routing proxy service for Docker.
## Design Goals
- Fully integrate with Docker (Swarm, Services, Secrets, Configs)
- Enhanced configuration (context roots, TLS, zero downtime deploy, rollback)
- Support external load balancers (nginx, haproxy, F5, etc.) via extensions
- Least privilege for extensions (no Docker API access)
Interlock was designed to be a first-class application routing layer for Docker.
The following are the high-level features it provides:
## Automatic Configuration
Interlock uses the Docker API for configuration. The user does not have to manually
update or restart anything to make services available.
## Native Swarm Support
Interlock is fully Docker native. It runs on Docker Swarm and routes traffic using
cluster networking and Docker services.
## High Availability
Interlock runs as Docker services which are highly available and handle failures gracefully.
## Scalability
Interlock uses a modular design where the proxy service is separate. This allows an
operator to individually customize and scale the proxy layer to meet demand. This is
transparent to the user and causes no downtime.
## SSL
Interlock leverages Docker Secrets to securely store and use SSL certificates for services. Both
SSL termination and TCP passthrough are supported.
## Context Based Routing
Interlock supports advanced application request routing by context or path.
## Host Mode Networking
Interlock supports running the proxy and application services in "host" mode networking, allowing
the operator to bypass the routing mesh completely. This is beneficial if you want
maximum performance for your applications.
## Blue-Green and Canary Service Deployment
Interlock supports blue-green service deployment, allowing an operator to deploy a new application
while the current version is still serving. Once traffic to the new application is verified, the operator
can scale the older version to zero. If there is a problem, the operation is quickly reversible.
## Service Cluster Support
Interlock supports multiple extension+proxy combinations, allowing operators to partition load
balancing resources for uses such as region- or organization-based load balancing.
## Least Privilege
Interlock supports (and recommends) deployments where the load balancing
proxies are not colocated with a Swarm manager. This makes the
deployment more secure by not exposing Docker API access to the extension or proxy services.

View File

@ -0,0 +1,36 @@
---
title: Update Interlock
description: Learn about Interlock, an application routing and load balancing system
for Docker Swarm.
keywords: ucp, interlock, load balancing
---
The following describes how to update Interlock. There are two parts
to the upgrade: first, the Interlock configuration must be updated
to specify the new extension and/or proxy image versions; then the Interlock
service itself is updated.
First we will create the new configuration:
```bash
$> docker config create service.interlock.conf.v2 <path-to-new-config>
```
Then you can update the Interlock service to remove the old and use the new:
```bash
$> docker service update --config-rm service.interlock.conf interlock
$> docker service update --config-add source=service.interlock.conf.v2,target=/config.toml interlock
```
Next update the Interlock service to use the new image:
```bash
$> docker service update \
--image interlockpreview/interlock@sha256:d173014908eb09e9a70d8e5ed845469a61f7cbf4032c28fad0ed9af3fc04ef51 \
interlock
```
This updates the Interlock core service to use the `sha256:d173014908eb09e9a70d8e5ed845469a61f7cbf4032c28fad0ed9af3fc04ef51`
version of Interlock. Interlock starts, checks the config object (which contains the new extension version), and
performs a rolling deploy to update all extensions.
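To verify the update, a sketch that reads the image back from the service spec (assumes the core service is named `interlock`):

```bash
{% raw %}
# Print the image currently used by the Interlock core service.
docker service inspect -f '{{ .Spec.TaskTemplate.ContainerSpec.Image }}' interlock
{% endraw %}
```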

View File

@ -0,0 +1,35 @@
---
title: Tune Interlock
description: Learn about Interlock, an application routing and load balancing system
for Docker Swarm.
keywords: ucp, interlock, load balancing
---
It is [recommended](../install/production.md) to constrain the proxy service to multiple dedicated worker nodes.
Here are a few other tips for tuning:
## Stopping
You can adjust the stop signal and period by using the `stop-signal` and `stop-grace-period` settings. For example,
to set the stop signal to `SIGTERM` and grace period to ten (10) seconds use the following:
```bash
$> docker service update --stop-signal=SIGTERM --stop-grace-period=10s interlock-proxy
```
## Update Actions
In the event of an update failure, the default Swarm action is to "pause". This stops further updates from
Interlock until an operator intervenes. You can change this behavior with the `update-failure-action` setting. For example,
to automatically roll back to the previous configuration upon failure, use the following:
```bash
$> docker service update --update-failure-action=rollback interlock-proxy
```
## Update Interval
By default, Interlock configures the proxy service with rolling updates. If you would like more time between proxy
updates, such as to let the service settle, you can use the `update-delay` setting. For example, to allow
thirty (30) seconds between updates use the following:
```bash
$> docker service update --update-delay=30s interlock-proxy
```
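These settings can also be combined in a single update (a sketch, assuming the proxy service is named `interlock-proxy`):

```bash
# Apply the stop and update tuning settings in one pass.
docker service update \
    --stop-signal=SIGTERM \
    --stop-grace-period=10s \
    --update-failure-action=rollback \
    --update-delay=30s \
    interlock-proxy
```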

View File

@ -0,0 +1,107 @@
---
title: Canary application instances
description: Learn about Interlock, an application routing and load balancing system
for Docker Swarm.
keywords: ucp, interlock, load balancing
---
In this example we will publish a service and deploy an updated service as canary instances.
First we will create an overlay network so that service traffic is isolated and secure:
```bash
$> docker network create -d overlay demo
1se1glh749q1i4pw0kf26mfx5
```
Next we will create the initial service:
```bash
$> docker service create \
--name demo-v1 \
--network demo \
--detach=false \
--replicas=4 \
--label com.docker.lb.hosts=demo.local \
--label com.docker.lb.port=8080 \
--env METADATA="demo-version-1" \
ehazlett/docker-demo
```
Interlock detects when the service is available and publishes it. Once the tasks are running
and the proxy service has been updated, the application is available via `http://demo.local`:
```bash
$> curl -vs -H "Host: demo.local" http://127.0.0.1/ping
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to demo.local (127.0.0.1) port 80 (#0)
> GET /ping HTTP/1.1
> Host: demo.local
> User-Agent: curl/7.54.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: nginx/1.13.6
< Date: Wed, 08 Nov 2017 20:28:26 GMT
< Content-Type: text/plain; charset=utf-8
< Content-Length: 120
< Connection: keep-alive
< Set-Cookie: session=1510172906715624280; Path=/; Expires=Thu, 09 Nov 2017 20:28:26 GMT; Max-Age=86400
< x-request-id: f884cf37e8331612b8e7630ad0ee4e0d
< x-proxy-id: 5ad7c31f9f00
< x-server-info: interlock/2.0.0-development (147ff2b1) linux/amd64
< x-upstream-addr: 10.0.2.4:8080
< x-upstream-response-time: 1510172906.714
<
{"instance":"df20f55fc943","version":"0.1","metadata":"demo-version-1","request_id":"f884cf37e8331612b8e7630ad0ee4e0d"}
```
Notice the `metadata` with `demo-version-1`.
Now we will deploy a "new" version:
```bash
$> docker service create \
--name demo-v2 \
--network demo \
--detach=false \
--label com.docker.lb.hosts=demo.local \
--label com.docker.lb.port=8080 \
--env METADATA="demo-version-2" \
--env VERSION="0.2" \
ehazlett/docker-demo
```
Since the new service has one (1) replica and the initial version has four (4), 20% of application traffic
is sent to `demo-version-2`:
```bash
$> curl -vs -H "Host: demo.local" http://127.0.0.1/ping
{"instance":"23d9a5ec47ef","version":"0.1","metadata":"demo-version-1","request_id":"060c609a3ab4b7d9462233488826791c"}
$> curl -vs -H "Host: demo.local" http://127.0.0.1/ping
{"instance":"f42f7f0a30f9","version":"0.1","metadata":"demo-version-1","request_id":"c848e978e10d4785ac8584347952b963"}
$> curl -vs -H "Host: demo.local" http://127.0.0.1/ping
{"instance":"c2a686ae5694","version":"0.1","metadata":"demo-version-1","request_id":"724c21d0fb9d7e265821b3c95ed08b61"}
$> curl -vs -H "Host: demo.local" http://127.0.0.1/ping
{"instance":"1b0d55ed3d2f","version":"0.2","metadata":"demo-version-2","request_id":"b86ff1476842e801bf20a1b5f96cf94e"}
$> curl -vs -H "Host: demo.local" http://127.0.0.1/ping
{"instance":"c2a686ae5694","version":"0.1","metadata":"demo-version-1","request_id":"724c21d0fb9d7e265821b3c95ed08b61"}
```
To increase traffic to the new version, add more replicas with `docker service scale`:
```bash
$> docker service scale demo-v2=4
demo-v2
```
To complete the upgrade, scale the `demo-v1` service to zero (0):
```bash
$> docker service scale demo-v1=0
demo-v1
```
This routes all application traffic to the new version. If you need to roll back, simply scale the v1 service
back up and v2 down.
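For example, a rollback sketch using the same scale commands:

```bash
# Roll back: restore the v1 replicas and drain v2.
docker service scale demo-v1=4
docker service scale demo-v2=0
```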

View File

@ -0,0 +1,57 @@
---
title: Context/path based routing
description: Learn about Interlock, an application routing and load balancing system
for Docker Swarm.
keywords: ucp, interlock, load balancing
---
In this example we will publish a service using context or path based routing.
First we will create an overlay network so that service traffic is isolated and secure:
```bash
$> docker network create -d overlay demo
1se1glh749q1i4pw0kf26mfx5
```
Next we will create the initial service:
```bash
$> docker service create \
--name demo \
--network demo \
--detach=false \
--label com.docker.lb.hosts=demo.local \
--label com.docker.lb.port=8080 \
--label com.docker.lb.context_root=/app \
--label com.docker.lb.context_root_rewrite=true \
--env METADATA="demo-context-root" \
ehazlett/docker-demo
```
Interlock detects when the service is available and publishes it. Once the tasks are running
and the proxy service has been updated, the application is available via `http://demo.local`:
```bash
$> curl -vs -H "Host: demo.local" http://127.0.0.1/app/
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 80 (#0)
> GET /app/ HTTP/1.1
> Host: demo.local
> User-Agent: curl/7.54.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: nginx/1.13.6
< Date: Fri, 17 Nov 2017 14:25:17 GMT
< Content-Type: text/html; charset=utf-8
< Transfer-Encoding: chunked
< Connection: keep-alive
< x-request-id: 077d18b67831519defca158e6f009f82
< x-proxy-id: 77c0c37d2c46
< x-server-info: interlock/2.0.0-dev (732c77e7) linux/amd64
< x-upstream-addr: 10.0.1.3:8080
< x-upstream-response-time: 1510928717.306
...
```

View File

@ -0,0 +1,145 @@
---
title: Host mode networking
description: Learn about Interlock, an application routing and load balancing system
for Docker Swarm.
keywords: ucp, interlock, load balancing
---
In some scenarios, operators cannot use overlay networks. Interlock supports
host mode networking in a variety of ways (proxy only, Interlock only, application only, hybrid).
In this example we will configure an eight (8) node Swarm cluster that uses host mode
networking to route traffic without using overlay networks. There are three (3) managers
and five (5) workers. Two of the workers are configured with node labels to be dedicated
ingress cluster load balancer nodes. These will receive all application traffic.
This example will not cover the actual deployment of infrastructure.
It assumes you have a vanilla Swarm cluster (`docker swarm init` and `docker swarm join` from the nodes).
See the [Swarm](https://docs.docker.com/engine/swarm/) documentation if you need help
getting a Swarm cluster deployed.
Note: when using host mode networking you cannot use DNS service discovery, because it
requires overlay networking. If you need that functionality, you can use other tooling such as
[Registrator](https://github.com/gliderlabs/registrator).
We will configure the load balancer worker nodes (`lb-00` and `lb-01`) with node labels in order to pin the Interlock Proxy
service. Once you are logged into one of the Swarm managers run the following to add node labels
to the dedicated load balancer worker nodes:
```bash
$> docker node update --label-add nodetype=loadbalancer lb-00
lb-00
$> docker node update --label-add nodetype=loadbalancer lb-01
lb-01
```
You can inspect each node to ensure the labels were successfully added:
```bash
{% raw %}
$> docker node inspect -f '{{ .Spec.Labels }}' lb-00
map[nodetype:loadbalancer]
$> docker node inspect -f '{{ .Spec.Labels }}' lb-01
map[nodetype:loadbalancer]
{% endraw %}
```
Next, we will create a configuration object for Interlock that specifies host mode networking:
```bash
$> cat << EOF | docker config create service.interlock.conf -
ListenAddr = ":8080"
DockerURL = "unix:///var/run/docker.sock"
PollInterval = "3s"
[Extensions]
[Extensions.default]
Image = "interlockpreview/interlock-extension-nginx:2.0.0-preview"
Args = []
ServiceName = "interlock-ext"
ProxyImage = "nginx:alpine"
ProxyArgs = []
ProxyServiceName = "interlock-proxy"
ProxyConfigPath = "/etc/nginx/nginx.conf"
PublishMode = "host"
PublishedPort = 80
TargetPort = 80
PublishedSSLPort = 443
TargetSSLPort = 443
[Extensions.default.Config]
User = "nginx"
PidPath = "/var/run/proxy.pid"
WorkerProcesses = 1
RlimitNoFile = 65535
MaxConnections = 2048
EOF
oqkvv1asncf6p2axhx41vylgt
```
Note the `PublishMode = "host"` setting. This instructs Interlock to configure the proxy service for host mode networking.
Now we can create the Interlock service also using host mode networking:
```bash
$> docker service create \
--name interlock \
--mount src=/var/run/docker.sock,dst=/var/run/docker.sock,type=bind \
--constraint node.role==manager \
--publish mode=host,target=8080 \
--config src=service.interlock.conf,target=/config.toml \
interlockpreview/interlock:2.0.0-preview -D run -c /config.toml
sjpgq7h621exno6svdnsvpv9z
```
## Configure Proxy Services
Once we have the node labels we can re-configure the Interlock Proxy services to be constrained to the
workers. Again, from a manager run the following to pin the proxy services to the load balancer worker nodes:
```bash
$> docker service update \
--constraint-add node.labels.nodetype==loadbalancer \
interlock-proxy
```
Now we can deploy the application:
```bash
$> docker service create \
--name demo \
--detach=false \
--label com.docker.lb.hosts=demo.local \
--label com.docker.lb.port=8080 \
--publish mode=host,target=8080 \
--env METADATA="demo" \
ehazlett/docker-demo
```
This runs the service using host mode networking. Each task for the service gets a high port (for example, 32768) and uses
the node IP address to connect. You can see this when inspecting the headers from the request:
```bash
$> curl -vs -H "Host: demo.local" http://127.0.0.1/ping
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 80 (#0)
> GET /ping HTTP/1.1
> Host: demo.local
> User-Agent: curl/7.54.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: nginx/1.13.6
< Date: Fri, 10 Nov 2017 15:38:40 GMT
< Content-Type: text/plain; charset=utf-8
< Content-Length: 110
< Connection: keep-alive
< Set-Cookie: session=1510328320174129112; Path=/; Expires=Sat, 11 Nov 2017 15:38:40 GMT; Max-Age=86400
< x-request-id: e4180a8fc6ee15f8d46f11df67c24a7d
< x-proxy-id: d07b29c99f18
< x-server-info: interlock/2.0.0-preview (17476782) linux/amd64
< x-upstream-addr: 172.20.0.4:32768
< x-upstream-response-time: 1510328320.172
<
{"instance":"897d3c7b9e9c","version":"0.1","metadata":"demo","request_id":"e4180a8fc6ee15f8d46f11df67c24a7d"}
```

View File

@ -0,0 +1,61 @@
---
title: Basic deployment
description: Learn about Interlock, an application routing and load balancing system
for Docker Swarm.
keywords: ucp, interlock, load balancing
---
Once Interlock is deployed, you are ready to launch and publish applications.
Using [Service Labels](https://docs.docker.com/engine/reference/commandline/service_create/#set-metadata-on-a-service--l-label)
a service is configured to publish itself to the load balancer.
Note: the examples below assume a DNS entry (or local hosts entry if you are testing locally) exists
for each of the applications.
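For local testing, a sketch of adding a hosts entry (assumes requests are sent to the local node):

```bash
# Point demo.local at the node that receives proxy traffic (127.0.0.1 here).
echo "127.0.0.1 demo.local" | sudo tee -a /etc/hosts
```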
To publish it, we will create a Docker service using two labels:
- `com.docker.lb.hosts`
- `com.docker.lb.port`
The `com.docker.lb.hosts` label tells Interlock the hosts where the service should be available.
The `com.docker.lb.port` label specifies which port the proxy service should use to access
the upstreams.
In this example we will publish a demo service to the host `demo.local`.
First we will create an overlay network so that service traffic is isolated and secure:
```bash
$> docker network create -d overlay demo
1se1glh749q1i4pw0kf26mfx5
```
Next we will deploy the application:
```bash
$> docker service create \
--name demo \
--network demo \
--label com.docker.lb.hosts=demo.local \
--label com.docker.lb.port=8080 \
ehazlett/docker-demo
6r0wiglf5f3bdpcy6zesh1pzx
```
Interlock detects when the service is available and publishes it. Once the tasks are running
and the proxy service has been updated, the application is available via `http://demo.local`:
```bash
$> curl -s -H "Host: demo.local" http://127.0.0.1/ping
{"instance":"c2f1afe673d4","version":"0.1",request_id":"7bcec438af14f8875ffc3deab9215bc5"}
```
To increase service capacity use the Docker Service [Scale](https://docs.docker.com/engine/swarm/swarm-tutorial/scale-service/) command:
```bash
$> docker service scale demo=4
demo scaled to 4
```
The four service replicas will be configured as upstreams. The load balancer will balance traffic
across all service replicas.

Binary file not shown.

Binary file not shown.

Binary file not shown.

View File

@ -0,0 +1,64 @@
---
title: Application redirects
description: Learn about Interlock, an application routing and load balancing system
for Docker Swarm.
keywords: ucp, interlock, load balancing
---
In this example we will publish a service and configure a redirect from `old.local` to `new.local`.
First we will create an overlay network so that service traffic is isolated and secure:
```bash
$> docker network create -d overlay demo
1se1glh749q1i4pw0kf26mfx5
```
Next we will create the service with the redirect:
```bash
$> docker service create \
--name demo \
--network demo \
--detach=false \
--label com.docker.lb.hosts=old.local,new.local \
--label com.docker.lb.port=8080 \
--label com.docker.lb.redirects=http://old.local,http://new.local \
--env METADATA="demo-new" \
ehazlett/docker-demo
```
Interlock detects when the service is available and publishes it. Once the tasks are running
and the proxy service has been updated, the application is available via `http://new.local`,
with a redirect configured that sends `http://old.local` to `http://new.local`:
```bash
$> curl -vs -H "Host: old.local" http://127.0.0.1
* Rebuilt URL to: http://127.0.0.1/
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 80 (#0)
> GET / HTTP/1.1
> Host: old.local
> User-Agent: curl/7.54.0
> Accept: */*
>
< HTTP/1.1 302 Moved Temporarily
< Server: nginx/1.13.6
< Date: Wed, 08 Nov 2017 19:06:27 GMT
< Content-Type: text/html
< Content-Length: 161
< Connection: keep-alive
< Location: http://new.local/
< x-request-id: c4128318413b589cafb6d9ff8b2aef17
< x-proxy-id: 48854cd435a4
< x-server-info: interlock/2.0.0-development (147ff2b1) linux/amd64
<
<html>
<head><title>302 Found</title></head>
<body bgcolor="white">
<center><h1>302 Found</h1></center>
<hr><center>nginx/1.13.6</center>
</body>
</html>
```
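Requests to the new hostname are served directly; as a quick check (a sketch):

```bash
# The new hostname serves the application without a redirect.
curl -s -H "Host: new.local" http://127.0.0.1/ping
```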

View File

@ -0,0 +1,200 @@
---
title: Service clusters
description: Learn about Interlock, an application routing and load balancing system
for Docker Swarm.
keywords: ucp, interlock, load balancing
---
In this example we will configure an eight (8) node Swarm cluster that uses service clusters
to route traffic to different proxies. There are three (3) managers
and five (5) workers. Two of the workers are configured with node labels to be dedicated
ingress cluster load balancer nodes. These will receive all application traffic.
This example will not cover the actual deployment of infrastructure.
It assumes you have a vanilla Swarm cluster (`docker swarm init` and `docker swarm join` from the nodes).
See the [Swarm](https://docs.docker.com/engine/swarm/) documentation if you need help
getting a Swarm cluster deployed.
![Interlock Service Clusters](interlock_service_clusters.png)
We will configure the load balancer worker nodes (`lb-00` and `lb-01`) with node labels in order to pin the Interlock Proxy
service. Once you are logged into one of the Swarm managers run the following to add node labels
to the dedicated ingress workers:
```bash
$> docker node update --label-add nodetype=loadbalancer --label-add region=us-east lb-00
lb-00
$> docker node update --label-add nodetype=loadbalancer --label-add region=us-west lb-01
lb-01
```
You can inspect each node to ensure the labels were successfully added:
```bash
{% raw %}
$> docker node inspect -f '{{ .Spec.Labels }}' lb-00
map[nodetype:loadbalancer region:us-east]
$> docker node inspect -f '{{ .Spec.Labels }}' lb-01
map[nodetype:loadbalancer region:us-west]
{% endraw %}
```
Next, we will create a configuration object for Interlock that contains multiple extensions with varying service clusters:
```bash
$> cat << EOF | docker config create service.interlock.conf -
ListenAddr = ":8080"
DockerURL = "unix:///var/run/docker.sock"
PollInterval = "3s"
[Extensions]
[Extensions.us-east]
Image = "interlockpreview/interlock-extension-nginx:2.0.0-preview"
Args = ["-D"]
ServiceName = "interlock-ext-us-east"
ProxyImage = "nginx:alpine"
ProxyArgs = []
ProxyServiceName = "interlock-proxy-us-east"
ProxyConfigPath = "/etc/nginx/nginx.conf"
ServiceCluster = "us-east"
PublishMode = "host"
PublishedPort = 80
TargetPort = 80
PublishedSSLPort = 443
TargetSSLPort = 443
[Extensions.us-east.Config]
User = "nginx"
PidPath = "/var/run/proxy.pid"
WorkerProcesses = 1
RlimitNoFile = 65535
MaxConnections = 2048
[Extensions.us-east.Labels]
ext_region = "us-east"
[Extensions.us-east.ProxyLabels]
proxy_region = "us-east"
[Extensions.us-west]
Image = "interlockpreview/interlock-extension-nginx:2.0.0-preview"
Args = ["-D"]
ServiceName = "interlock-ext-us-west"
ProxyImage = "nginx:alpine"
ProxyArgs = []
ProxyServiceName = "interlock-proxy-us-west"
ProxyConfigPath = "/etc/nginx/nginx.conf"
ServiceCluster = "us-west"
PublishMode = "host"
PublishedPort = 80
TargetPort = 80
PublishedSSLPort = 443
TargetSSLPort = 443
[Extensions.us-west.Config]
User = "nginx"
PidPath = "/var/run/proxy.pid"
WorkerProcesses = 1
RlimitNoFile = 65535
MaxConnections = 2048
[Extensions.us-west.Labels]
ext_region = "us-west"
[Extensions.us-west.ProxyLabels]
proxy_region = "us-west"
EOF
oqkvv1asncf6p2axhx41vylgt
```
Note that we are using "host" mode networking in order to use the same ports (`80` and `443`) in the cluster. We cannot use ingress
networking here because it reserves each port across all nodes. If you want to use ingress networking, you must use different ports
for each service cluster.
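For example, a sketch of per-cluster ports with ingress mode (the port values are illustrative):

```toml
# Sketch: with ingress networking, each service cluster must publish on
# distinct ports. The port values here are illustrative.
[Extensions.us-east]
  PublishMode = "ingress"
  PublishedPort = 8080
  PublishedSSLPort = 8443

[Extensions.us-west]
  PublishMode = "ingress"
  PublishedPort = 9080
  PublishedSSLPort = 9443
```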
Next we will create a dedicated network for Interlock and the extensions:
```bash
$> docker network create -d overlay interlock
```
Now we can create the Interlock service:
```bash
$> docker service create \
--name interlock \
--mount src=/var/run/docker.sock,dst=/var/run/docker.sock,type=bind \
--network interlock \
--constraint node.role==manager \
--config src=service.interlock.conf,target=/config.toml \
interlockpreview/interlock:2.0.0-preview -D run -c /config.toml
sjpgq7h621exno6svdnsvpv9z
```
## Configure Proxy Services
Once we have the node labels we can re-configure the Interlock Proxy services to be constrained to the
workers for each region. Again, from a manager run the following to pin the proxy services to the ingress workers:
```bash
$> docker service update \
--constraint-add node.labels.nodetype==loadbalancer \
--constraint-add node.labels.region==us-east \
interlock-proxy-us-east
$> docker service update \
--constraint-add node.labels.nodetype==loadbalancer \
--constraint-add node.labels.region==us-west \
interlock-proxy-us-west
```
We are now ready to deploy applications. First we will create individual networks for each application:
```bash
$> docker network create -d overlay demo-east
$> docker network create -d overlay demo-west
```
Next we will deploy the application in the `us-east` service cluster:
```bash
$> docker service create \
--name demo-east \
--network demo-east \
--detach=true \
--label com.docker.lb.hosts=demo-east.local \
--label com.docker.lb.port=8080 \
--label com.docker.lb.service_cluster=us-east \
--env METADATA="us-east" \
ehazlett/docker-demo
```
Now we deploy the application in the `us-west` service cluster:
```bash
$> docker service create \
--name demo-west \
--network demo-west \
--detach=true \
--label com.docker.lb.hosts=demo-west.local \
--label com.docker.lb.port=8080 \
--label com.docker.lb.service_cluster=us-west \
--env METADATA="us-west" \
ehazlett/docker-demo
```
Only the designated service cluster is configured for each application. For example, the `us-east` service cluster
is not configured to serve traffic for the `us-west` service cluster and vice versa. We can see this in action when we
send requests to each service cluster.
When we send a request to the `us-east` service cluster, it only knows about the `us-east` application (be sure to ssh to the `lb-00` node):
```bash
{% raw %}
$> curl -H "Host: demo-east.local" http://$(docker node inspect -f '{{ .Status.Addr }}' lb-00)/ping
{"instance":"1b2d71619592","version":"0.1","metadata":"us-east","request_id":"3d57404cf90112eee861f9d7955d044b"}
$> curl -H "Host: demo-west.local" http://$(docker node inspect -f '{{ .Status.Addr }}' lb-00)/ping
<html>
<head><title>404 Not Found</title></head>
<body bgcolor="white">
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.13.6</center>
</body>
</html>
{% endraw %}
```
Application traffic is isolated to each service cluster. Interlock also ensures that a proxy is updated only when there are corresponding updates
to its designated service cluster. In this example, updates to the `us-east` cluster do not affect the `us-west` cluster; if one has
a problem, the others are not affected.

View File

@ -0,0 +1,130 @@
---
title: Persistent (sticky) sessions
description: Learn about Interlock, an application routing and load balancing system
for Docker Swarm.
keywords: ucp, interlock, load balancing
---
In this example we will publish a service and configure the proxy for persistent (sticky) sessions.
# Cookies
In the following example we will show how to configure sticky sessions using cookies.
First we will create an overlay network so that service traffic is isolated and secure:
```bash
$> docker network create -d overlay demo
1se1glh749q1i4pw0kf26mfx5
```
Next we will create the service with the cookie to use for sticky sessions:
```bash
$> docker service create \
--name demo \
--network demo \
--detach=false \
--replicas=5 \
--label com.docker.lb.hosts=demo.local \
--label com.docker.lb.sticky_session_cookie=session \
--label com.docker.lb.port=8080 \
--env METADATA="demo-sticky" \
ehazlett/docker-demo
```
Interlock detects when the service is available and publishes it. Once the tasks are running
and the proxy service has been updated, the application is available via `http://demo.local`
and configured to use sticky sessions:
```bash
$> curl -vs -c cookie.txt -b cookie.txt -H "Host: demo.local" http://127.0.0.1/ping
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 80 (#0)
> GET /ping HTTP/1.1
> Host: demo.local
> User-Agent: curl/7.54.0
> Accept: */*
> Cookie: session=1510171444496686286
>
< HTTP/1.1 200 OK
< Server: nginx/1.13.6
< Date: Wed, 08 Nov 2017 20:04:36 GMT
< Content-Type: text/plain; charset=utf-8
< Content-Length: 117
< Connection: keep-alive
* Replaced cookie session="1510171444496686286" for domain demo.local, path /, expire 0
< Set-Cookie: session=1510171444496686286
< x-request-id: 3014728b429320f786728401a83246b8
< x-proxy-id: eae36bf0a3dc
< x-server-info: interlock/2.0.0-development (147ff2b1) linux/amd64
< x-upstream-addr: 10.0.2.5:8080
< x-upstream-response-time: 1510171476.948
<
{"instance":"9c67a943ffce","version":"0.1","metadata":"demo-sticky","request_id":"3014728b429320f786728401a83246b8"}
```
Notice the `Set-Cookie` header from the application. This is stored by the `curl` command and sent with subsequent requests,
which are pinned to the same instance. If you make a few requests, you will notice the same `x-upstream-addr` each time.
# IP Hashing
In this example we show how to configure sticky sessions using client IP hashing. This is not as flexible or consistent
as cookies but enables workarounds for some applications that cannot use the other method.
First we will create an overlay network so that service traffic is isolated and secure:
```bash
$> docker network create -d overlay demo
1se1glh749q1i4pw0kf26mfx5
```
Next we will create the service with IP hashing enabled for sticky sessions:
```bash
$> docker service create \
--name demo \
--network demo \
--detach=false \
--replicas=5 \
--label com.docker.lb.hosts=demo.local \
--label com.docker.lb.port=8080 \
--label com.docker.lb.ip_hash=true \
--env METADATA="demo-sticky" \
ehazlett/docker-demo
```
Interlock detects when the service is available and publishes it. Once the tasks are running
and the proxy service has been updated, the application is available via `http://demo.local`
and configured to use sticky sessions:
```bash
$> curl -vs -H "Host: demo.local" http://127.0.0.1/ping
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 80 (#0)
> GET /ping HTTP/1.1
> Host: demo.local
> User-Agent: curl/7.54.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: nginx/1.13.6
< Date: Wed, 08 Nov 2017 20:04:36 GMT
< Content-Type: text/plain; charset=utf-8
< Content-Length: 117
< Connection: keep-alive
< x-request-id: 3014728b429320f786728401a83246b8
< x-proxy-id: eae36bf0a3dc
< x-server-info: interlock/2.0.0-development (147ff2b1) linux/amd64
< x-upstream-addr: 10.0.2.5:8080
< x-upstream-response-time: 1510171476.948
<
{"instance":"9c67a943ffce","version":"0.1","metadata":"demo-sticky","request_id":"3014728b429320f786728401a83246b8"}
```
You can use `docker service scale demo=10` to add more replicas. Once scaled, you will notice that requests are pinned
to a specific backend.
Note: due to the way IP hashing works in the extensions, you will see a new upstream address when scaling replicas. This is
expected, because internally the proxy uses the new set of replicas to decide on a backend on which to pin. Once the upstreams are
determined, a new "sticky" backend is chosen and that becomes the dedicated upstream.
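A quick sketch to observe the pinning: repeat the request and watch the `x-upstream-addr` header stay constant for the same client IP:

```bash
# Print only the upstream address header for several requests.
for i in 1 2 3; do
  curl -s -D - -o /dev/null -H "Host: demo.local" http://127.0.0.1/ping \
    | grep x-upstream-addr
done
```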

View File

@ -0,0 +1,220 @@
---
title: Applications with SSL
description: Learn about Interlock, an application routing and load balancing system
for Docker Swarm.
keywords: ucp, interlock, load balancing
---
In this example we will publish a demo service to the host `demo.local` using SSL.
# SSL Termination
In this example we will be using Docker [Secrets](https://docs.docker.com/engine/swarm/secrets/)
to centrally and securely store SSL certificates, in order to terminate SSL at the proxy service.
Application traffic is encrypted in transit to the proxy service, which terminates SSL and then
sends unencrypted traffic inside the secure datacenter.
![Interlock SSL Termination](interlock_ssl_termination.png)
First we will generate certificates for the example:
```bash
$> openssl req \
-new \
-newkey rsa:4096 \
-days 3650 \
-nodes \
-x509 \
-subj "/C=US/ST=SomeState/L=SomeCity/O=Interlock/CN=demo.local" \
-keyout demo.local.key \
-out demo.local.cert
```
This should result in two files being created: `demo.local.cert` and `demo.local.key`. Next we
will use these to create Docker Secrets.
```bash
$> docker secret create demo.local.cert demo.local.cert
ywn8ykni6cmnq4iz64um1pj7s
$> docker secret create demo.local.key demo.local.key
e2xo036ukhfapip05c0sizf5w
```
Now we will create an overlay network so that service traffic is isolated and secure:
```bash
$> docker network create -d overlay demo
1se1glh749q1i4pw0kf26mfx5
```
```bash
$> docker service create \
--name demo \
--network demo \
--label com.docker.lb.hosts=demo.local \
--label com.docker.lb.port=8080 \
--label com.docker.lb.ssl_cert=demo.local.cert \
--label com.docker.lb.ssl_key=demo.local.key \
ehazlett/docker-demo
6r0wiglf5f3bdpcy6zesh1pzx
```
Interlock detects when the service is available and publishes it. Once the tasks are running
and the proxy service has been updated, the application is available via `https://demo.local`.
Note: for this to work you must have an entry for `demo.local` in your local hosts (i.e. `/etc/hosts`) file.
You cannot use a host header as in other examples due to the way [SNI](https://tools.ietf.org/html/rfc3546#page-8) works.
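For example, if the proxy is reachable on your local machine, you could add the entry like this (a sketch; substitute the address where the proxy is actually published):

```bash
# Map demo.local to the address of the Interlock proxy.
$> echo "127.0.0.1 demo.local" | sudo tee -a /etc/hosts
```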
```bash
$> curl -vsk https://demo.local/ping
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to demo.local (127.0.0.1) port 443 (#0)
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384
* ALPN, server accepted to use http/1.1
* Server certificate:
* subject: C=US; ST=SomeState; L=SomeCity; O=Interlock; CN=demo.local
* start date: Nov 8 16:23:03 2017 GMT
* expire date: Nov 6 16:23:03 2027 GMT
* issuer: C=US; ST=SomeState; L=SomeCity; O=Interlock; CN=demo.local
* SSL certificate verify result: self signed certificate (18), continuing anyway.
> GET /ping HTTP/1.1
> Host: demo.local
> User-Agent: curl/7.54.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: nginx/1.13.6
< Date: Wed, 08 Nov 2017 16:26:55 GMT
< Content-Type: text/plain; charset=utf-8
< Content-Length: 92
< Connection: keep-alive
< Set-Cookie: session=1510158415298009207; Path=/; Expires=Thu, 09 Nov 2017 16:26:55 GMT; Max-Age=86400
< x-request-id: 4b15ab2aaf2e0bbdea31f5e4c6b79ebd
< x-proxy-id: a783b7e646af
< x-server-info: interlock/2.0.0-development (147ff2b1) linux/amd64
< x-upstream-addr: 10.0.2.3:8080
{"instance":"c2f1afe673d4","version":"0.1",request_id":"7bcec438af14f8875ffc3deab9215bc5"}
```
Since the certificate and key are stored securely in Swarm, you can safely scale this service as well as the proxy
service, and Swarm grants access to the credentials only as needed.
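For example (a sketch; the proxy service name depends on your deployment, and `interlock-proxy` is assumed here):

```bash
# Scale the application; Swarm delivers the secrets only to nodes
# that run a task needing them.
$> docker service scale demo=4

# The proxy service can be scaled the same way (service name assumed).
$> docker service scale interlock-proxy=2
```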
# SSL Passthrough
In this example we use SSL passthrough to keep communication encrypted from the client request all the way to the application
service. This provides maximum security, because there is no unencrypted transport anywhere along the path.
![Interlock SSL Passthrough](interlock_ssl_passthrough.png)
First, generate a self-signed certificate and key for the application:
```bash
$> openssl req \
-new \
-newkey rsa:4096 \
-days 3650 \
-nodes \
-x509 \
-subj "/C=US/ST=SomeState/L=SomeCity/O=Interlock/CN=demo.local" \
-keyout app.key \
-out app.cert
```
This creates two files: `app.cert` and `app.key`. Next, use them to create Docker secrets:
```bash
$> docker secret create app.cert app.cert
ywn8ykni6cmnq4iz64um1pj7s
$> docker secret create app.key app.key
e2xo036ukhfapip05c0sizf5w
```
Now we will create an overlay network so that service traffic is isolated and secure:
```bash
$> docker network create -d overlay demo
1se1glh749q1i4pw0kf26mfx5
```
Next, create the demo service. The secrets are mounted into the service so the application terminates SSL itself,
and the `com.docker.lb.ssl_passthrough` label tells Interlock to pass the TCP traffic through to the backend
without decrypting it:

```bash
$> docker service create \
--name demo \
--network demo \
--detach=false \
--secret source=app.cert,target=/run/secrets/cert.pem \
--secret source=app.key,target=/run/secrets/key.pem \
--label com.docker.lb.hosts=demo.local \
--label com.docker.lb.port=8080 \
--label com.docker.lb.ssl_passthrough=true \
--env METADATA="demo-ssl-passthrough" \
ehazlett/docker-demo --tls-cert=/run/secrets/cert.pem --tls-key=/run/secrets/key.pem
```
Interlock detects when the service becomes available and publishes it. Once the tasks are running
and the proxy service has been updated, the application is available at `https://demo.local`.
Note: as before, you must have an entry for `demo.local` in your local hosts file (for example, `/etc/hosts`);
a `Host` header does not work here because of the way [SNI](https://tools.ietf.org/html/rfc3546#page-8) works.
```bash
$> curl -vsk https://demo.local/ping
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to demo.local (127.0.0.1) port 443 (#0)
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* ALPN, server accepted to use http/1.1
* Server certificate:
* subject: C=US; ST=SomeState; L=SomeCity; O=Interlock; CN=demo.local
* start date: Nov 8 16:39:45 2017 GMT
* expire date: Nov 6 16:39:45 2027 GMT
* issuer: C=US; ST=SomeState; L=SomeCity; O=Interlock; CN=demo.local
* SSL certificate verify result: self signed certificate (18), continuing anyway.
> GET /ping HTTP/1.1
> Host: demo.local
> User-Agent: curl/7.54.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Connection: close
< Set-Cookie: session=1510159255159600720; Path=/; Expires=Thu, 09 Nov 2017 16:40:55 GMT; Max-Age=86400
< Date: Wed, 08 Nov 2017 16:40:55 GMT
< Content-Length: 78
< Content-Type: text/plain; charset=utf-8
<
{"instance":"327d5a26bc30","version":"0.1","metadata":"demo-ssl-passthrough"}
```
Application traffic travels fully encrypted from the client all the way to the application service.
Note that the proxy cannot add its metadata response headers (version info, request ID, and so on),
because with TCP passthrough it never decrypts the traffic.
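One way to see the difference is to inspect the certificate served on port 443. With passthrough, the certificate belongs to the application itself rather than to the proxy. This is a sketch assuming `openssl` is available on the client:

```bash
# Print the subject and issuer of the certificate served for demo.local.
# -servername sends SNI, which is what the proxy uses to route passthrough traffic.
$> echo | openssl s_client -connect demo.local:443 -servername demo.local 2>/dev/null \
    | openssl x509 -noout -subject -issuer
```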
---
title: Websockets
description: Learn about Interlock, an application routing and load balancing system
for Docker Swarm.
keywords: ucp, interlock, load balancing
---
In this example we will publish a service and configure support for websockets.
First we will create an overlay network so that service traffic is isolated and secure:
```bash
$> docker network create -d overlay demo
1se1glh749q1i4pw0kf26mfx5
```
Next we will create the service with websocket endpoints:
```bash
$> docker service create \
--name demo \
--network demo \
--detach=false \
--label com.docker.lb.hosts=demo.local \
--label com.docker.lb.port=8080 \
--label com.docker.lb.websocket_endpoints=/ws \
ehazlett/websocket-chat
```
Note: for this to work you must have an entry for `demo.local` in your local hosts file (for example, `/etc/hosts`).
Because this example uses the browser for websocket communication, you need a hosts entry or a routable domain.
Interlock detects when the service becomes available and publishes it. Once the tasks are running
and the proxy service has been updated, the application is available at `http://demo.local`. Open
two browser windows and you should see the text appear in both as you type.
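If you want to verify the proxy from the command line first, you can send a raw upgrade handshake with `curl` (a sketch; it assumes the proxy is reachable on `127.0.0.1`). A `HTTP/1.1 101 Switching Protocols` response means the upgrade was forwarded to the service:

```bash
# Send a websocket upgrade handshake through the proxy and print the
# status line; --max-time keeps curl from waiting on the open stream.
$> curl -i -s --max-time 3 \
    -H "Host: demo.local" \
    -H "Connection: Upgrade" \
    -H "Upgrade: websocket" \
    -H "Sec-WebSocket-Version: 13" \
    -H "Sec-WebSocket-Key: SGVsbG8sIHdvcmxkIQ==" \
    http://127.0.0.1/ws | head -n 1
```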