Merge branch 'amberjack' of github.com:bermudezmt/docs-private into amberjack

This commit is contained in:
Maria Bermudez 2019-05-09 14:55:38 -07:00
commit 657b73b4c1
25 changed files with 767 additions and 5 deletions

Jenkinsfile vendored
View File

@ -20,6 +20,7 @@ pipeline {
expression { env.GIT_URL == 'https://github.com/Docker/docker.github.io.git' }
}
stages {
stage( 'build and push stage image' ) {
when {
branch 'master'
@ -101,6 +102,8 @@ pipeline {
expression { env.GIT_URL == "https://github.com/docker/docs-private.git" }
}
stages {
stage( 'build and push new beta stage image' ) {
when {
branch 'amberjack'
@ -132,8 +135,15 @@ pipeline {
branch 'amberjack'
}
steps {
withVpn("$DTR_VPN_ADDRESS") {
sh "unzip -o $UCP_BUNDLE"
withDockerRegistry(reg) {
sh """
export DOCKER_TLS_VERIFY=1
@ -151,8 +161,15 @@ pipeline {
branch 'published'
}
steps {
withVpn("$DTR_VPN_ADDRESS") {
sh "unzip -o $UCP_BUNDLE"
withDockerRegistry(reg) {
sh """
export DOCKER_TLS_VERIFY=1
@ -168,4 +185,4 @@ pipeline {
}
}
}
}
}

View File

@ -1352,11 +1352,22 @@ manuals:
- title: Offline installation
path: /ee/ucp/interlock/deploy/offline-install/
- title: Layer 7 routing upgrade
path: /ee/ucp/interlock/deploy/upgrade/
- sectiontitle: Configuration
section:
- title: Configure your deployment
path: /ee/ucp/interlock/config/
- title: Using a custom extension template
path: /ee/ucp/interlock/config/custom-template/
- title: Configuring an HAProxy extension
path: /ee/ucp/interlock/config/haproxy-config/
- title: Configuring host mode networking
path: /ee/ucp/interlock/config/host-mode-networking/
- title: Configuring an nginx extension
@ -1375,20 +1386,38 @@ manuals:
path: /ee/ucp/interlock/usage/canary/
- title: Using context or path-based routing
path: /ee/ucp/interlock/usage/context/
- title: Publishing a default host service
path: /ee/ucp/interlock/usage/default-backend/
- title: Specifying a routing mode
path: /ee/ucp/interlock/usage/interlock-vip-mode/
- title: Using routing labels
path: /ee/ucp/interlock/usage/labels-reference/
- title: Implementing redirects
path: /ee/ucp/interlock/usage/redirects/
- title: Implementing a service cluster
path: /ee/ucp/interlock/usage/service-clusters/
- title: Implementing persistent (sticky) sessions
path: /ee/ucp/interlock/usage/sessions/
- title: Implementing SSL
path: /ee/ucp/interlock/usage/ssl/
- title: Securing services with TLS
path: /ee/ucp/interlock/usage/tls/
- title: Configuring websockets
path: /ee/ucp/interlock/usage/websockets/
- sectiontitle: Deploy apps with Kubernetes
section:
- title: Access Kubernetes Resources

Binary file not shown.

After

Width:  |  Height:  |  Size: 22 KiB

View File

@ -114,4 +114,8 @@ The following features are supported in VIP mode:
## Next steps
- [Deploy Interlock](deploy/index.md)
- [Configure Interlock](config/index.md)

View File

@ -1,24 +1,34 @@
---
title: Configure host mode networking
description: Learn how to configure the UCP layer 7 routing solution with
host mode networking.
keywords: routing, proxy, interlock, load balancing
redirect_from:
- /ee/ucp/interlock/usage/host-mode-networking/
- /ee/ucp/interlock/deploy/host-mode-networking/
---
By default, layer 7 routing components communicate with one another using
overlay networks, but Interlock supports
host mode networking in a variety of ways, including proxy only, Interlock only, application only, and hybrid.
@ -38,21 +48,29 @@ To use host mode networking instead of overlay networking:
If you have not done so, configure the
[layer 7 routing solution for production](../deploy/production.md).
The `ucp-interlock-proxy` service replicas should then be
running on their own dedicated nodes.
## Update the ucp-interlock config
[Update the ucp-interlock service configuration](./index.md) so that it uses
host mode networking.
Update the `PublishMode` key to:
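The value itself is elided by this hunk; per the `PublishMode = "host"` setting referenced later on this page, the updated key is:

```toml
# Instructs Interlock to configure the proxy service for host mode networking
PublishMode = "host"
```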
@ -110,6 +128,7 @@ service is running.
If everything is working correctly, you should get a JSON result like:
{% raw %}
```json
{"instance":"63b855978452", "version":"0.1", "request_id":"d641430be9496937f2669ce6963b67d6"}
```
{% endraw %}
The following example describes how to configure an eight (8) node Swarm cluster that uses host mode
networking to route traffic without using overlay networks. There are three (3) managers
and five (5) workers. Two of the workers are configured with node labels to be dedicated
ingress cluster load balancer nodes. These will receive all application traffic.
This example does not cover the actual deployment of infrastructure.
It assumes you have a vanilla Swarm cluster (`docker swarm init` and `docker swarm join` from the nodes).
See the [Swarm](https://docs.docker.com/engine/swarm/) documentation if you need help
getting a Swarm cluster deployed.
Note: When using host mode networking, you cannot use the DNS service discovery because that
requires overlay networking. You can use other tooling such as [Registrator](https://github.com/gliderlabs/registrator)
that will give you that functionality if needed.
Configure the load balancer worker nodes (`lb-00` and `lb-01`) with node labels in order to pin the Interlock Proxy
service. Once you are logged into one of the Swarm managers run the following to add node labels
to the dedicated load balancer worker nodes:
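The label commands themselves fall outside this hunk; a minimal sketch, assuming the `nodetype` label key shown in the inspect output below:

```bash
$> docker node update --label-add nodetype=loadbalancer lb-00
lb-00
$> docker node update --label-add nodetype=loadbalancer lb-01
lb-01
```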
@ -168,10 +202,13 @@ lb-01
Inspect each node to ensure the labels were successfully added:
{% raw %}
```bash
$> docker node inspect -f '{{ .Spec.Labels }}' lb-00
map[nodetype:loadbalancer]
@ -179,9 +216,12 @@ $> docker node inspect -f '{{ .Spec.Labels }}' lb-01
map[nodetype:loadbalancer]
```
{% endraw %}
Next, create a configuration object for Interlock that specifies host mode networking:
@ -193,17 +233,23 @@ PollInterval = "3s"
[Extensions]
[Extensions.default]
Image = "{{ page.ucp_org }}/ucp-interlock-extension:{{ page.ucp_version }}"
Args = []
ServiceName = "interlock-ext"
ProxyImage = "{{ page.ucp_org }}/ucp-interlock-proxy:{{ page.ucp_version }}"
ProxyArgs = []
ProxyServiceName = "interlock-proxy"
ProxyConfigPath = "/etc/nginx/nginx.conf"
@ -225,11 +271,15 @@ oqkvv1asncf6p2axhx41vylgt
Note the `PublishMode = "host"` setting. This instructs Interlock to configure the proxy service for host mode networking.
Now create the Interlock service also using host mode networking:
```bash
$> docker service create \
@ -238,6 +288,7 @@ $> docker service create \
--constraint node.role==manager \
--publish mode=host,target=8080 \
--config src=service.interlock.conf,target=/config.toml \
{{ page.ucp_org }}/ucp-interlock:{{ page.ucp_version }} -D run -c /config.toml
sjpgq7h621exno6svdnsvpv9z
```
## Configure proxy services
With the node labels, you can re-configure the Interlock Proxy services to be constrained to the
workers. From a manager run the following to pin the proxy services to the load balancer worker nodes:
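The pinning command is truncated by the diff; a sketch of the re-configuration, assuming the proxy service name `interlock-proxy` set in the config above:

```bash
$> docker service update \
    --constraint-add node.labels.nodetype==loadbalancer \
    interlock-proxy
```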

View File

@ -1,24 +1,33 @@
---
title: Configure layer 7 routing service
description: Learn how to configure the layer 7 routing solution for UCP.
keywords: routing, proxy, interlock, load balancing
redirect_from:
- /ee/ucp/interlock/deploy/configure/
- /ee/ucp/interlock/usage/default-service/
---
To further customize the layer 7 routing solution, you must update the
`ucp-interlock` service with a new Docker configuration.
Here's how it works:
1. Find out what configuration is currently being used for the `ucp-interlock`
service and save it to a file:
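The commands for this step fall outside the hunk; a sketch using standard `docker service inspect` and `docker config inspect` calls (config file name is illustrative):

{% raw %}
```bash
# Find the config object currently attached to the ucp-interlock service
$> CURRENT_CONFIG_NAME=$(docker service inspect --format \
    '{{ (index .Spec.TaskTemplate.ContainerSpec.Configs 0).ConfigName }}' ucp-interlock)
# Save its payload to a local file for editing
$> docker config inspect --format '{{ printf "%s" .Spec.Data }}' \
    $CURRENT_CONFIG_NAME > config.toml
```
{% endraw %}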
@ -103,6 +115,7 @@ The core configuraton handles the Interlock service itself. These are the config
Interlock must contain at least one extension to service traffic. The following options are available to configure the extensions:
| Option | Type | Description |
|:-------------------|:------------|:-----------------------------------------------------------|
@ -120,6 +133,8 @@ Interlock must contain at least one extension to service traffic. The following
| `ProxyServiceName` | string | Name of the proxy service |
| `ProxyConfigPath` | string | Path in the service for the generated proxy config |
| `ProxyReplicas` | uint | Number of proxy service replicas |
| `ProxyStopSignal` | string | Stop signal for the proxy service (e.g. `SIGQUIT`) |
| `ProxyStopGracePeriod` | string | Stop grace period for the proxy service (e.g. `5s`) |
| `ProxyConstraints` | []string | One or more [constraints](https://docs.docker.com/engine/reference/commandline/service_create/#specify-service-constraints-constraint) to use when scheduling the proxy service |
| `ProxyPlacementPreferences` | []string | One or more [placement prefs](https://docs.docker.com/engine/reference/commandline/service_create/#specify-service-placement-preferences-placement-pref) to use when scheduling the proxy service |
| `ProxyUpdateDelay` | string | Delay between rolling proxy container updates |
| `ServiceCluster` | string | Name of the cluster this extension services |
| `PublishMode` | string (`ingress` or `host`) | Publish mode that the proxy service uses |
@ -170,13 +189,18 @@ Interlock must contain at least one extension to service traffic. The following
| `Template` | string | Docker configuration object that is used as the extension template |
| `Config` | Config | Proxy configuration used by the extensions as described in the following table |
### Proxy
Options are made available to the extensions, and the extensions utilize the options needed for proxy service configuration. This provides overrides to the extension configuration.
@ -187,11 +211,15 @@ different configuration options available. Refer to the documentation for each
- [Nginx](nginx-config.md)
- [HAProxy](haproxy-config.md)
#### Customize the default proxy service
The default proxy service used by UCP to provide layer 7 routing is NGINX. If users try to access a route that hasn't been configured, they will see the default NGINX 404 page:
![Default NGINX page](../../images/interlock-default-service-1.png){: .with-border}
@ -242,17 +270,23 @@ DockerURL = "unix:///var/run/docker.sock"
PollInterval = "3s"
[Extensions.default]
Image = "{{ page.ucp_org }}/interlock-extension-nginx:{{ page.ucp_version }}"
Args = ["-D"]
ServiceName = "interlock-ext"
ProxyImage = "{{ page.ucp_org }}/ucp-interlock-proxy:{{ page.ucp_version }}"
ProxyArgs = []
ProxyServiceName = "interlock-proxy"
ProxyConfigPath = "/etc/nginx/nginx.conf"
@ -273,6 +307,7 @@ PollInterval = "3s"
## Next steps
- [Use a custom extension template](custom-template.md)
- [Configure an HAProxy extension](haproxy-config.md)
- [Configure host mode networking](host-mode-networking.md)
- [Configure an nginx extension](nginx-config.md)
- [Use application service labels](service-labels.md)
- [Tune the proxy service](tuning.md)
- [Update Interlock services](updates.md)

View File

@ -1,5 +1,6 @@
---
title: Configure Nginx
description: Learn how to configure an nginx extension
keywords: routing, proxy, interlock, load balancing
@ -26,6 +27,8 @@ available for the nginx extension:
| `SSLProtocols` | string | Enable the specified TLS protocols | `TLSv1.2` |
| `HideInfoHeaders` | bool | Hide proxy-related response headers. |
| `KeepaliveTimeout` | string | connection keepalive timeout | `75s` |
| `ClientMaxBodySize` | string | maximum allowed size of the client request body | `1m` |
| `ClientBodyBufferSize` | string | sets buffer size for reading client request body | `8k` |
@ -94,7 +100,11 @@ This is from Interlock docs - which is correct???????
| `MainLogFormat` | string | [Format](http://nginx.org/en/docs/http/ngx_http_log_module.html#log_format) to use for main logger | see default format |
| `TraceLogFormat` | string | [Format](http://nginx.org/en/docs/http/ngx_http_log_module.html#log_format) to use for trace logger | see default format |

View File

@ -1,11 +1,14 @@
---
title: Use application service labels
description: Learn how applications use service labels for publishing
keywords: routing, proxy, interlock, load balancing
---
Service labels define hostnames that are routed to the
service, the applicable ports, and other routing configurations. Applications that publish using Interlock use service labels to configure how they are published.
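As an illustration, a minimal sketch of publishing a service with Interlock labels (the hostname, port, and demo image are placeholders):

```bash
$> docker service create \
    --name demo \
    --network interlock \
    --label com.docker.lb.hosts=app.example.org \
    --label com.docker.lb.port=8080 \
    ehazlett/docker-demo
```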

View File

@ -1,5 +1,6 @@
---
title: Tune the proxy service
description: Learn how to tune the proxy service for environment optimization
keywords: routing, proxy, interlock
@ -10,6 +11,8 @@ Refer to [Proxy service constraints](../deploy/production.md) for information on
## Stop
To adjust the stop signal and period, use the `stop-signal` and `stop-grace-period` settings. For example,
to set the stop signal to `SIGTERM` and grace period to ten (10) seconds, use the following command:
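The command itself is below the fold of this hunk; a sketch, assuming the proxy service is named `interlock-proxy` as elsewhere in these pages:

```bash
$> docker service update \
    --stop-signal SIGTERM \
    --stop-grace-period 10s \
    interlock-proxy
```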

View File

@ -1,5 +1,6 @@
---
title: Update Interlock services
description: Learn how to update the UCP layer 7 routing solution services
keywords: routing, proxy, interlock
@ -12,6 +13,8 @@ There are two parts to the update process:
## Update the Interlock configuration
Create the new configuration:
```bash
$> docker config create service.interlock.conf.v2 <path-to-new-config>
```
## Update the Interlock service
Remove the old configuration and specify the new configuration:
@ -97,6 +104,8 @@ docker/ucp:{{ page.ucp_version }}
Interlock starts and checks the config object, which has the new extension version, and
performs a rolling deploy to update all extensions.
```bash
$> docker service update \
--image {{ page.ucp_org }}/ucp-interlock:{{ page.ucp_version }} \
ucp-interlock
```

View File

@ -1,11 +1,18 @@
---
title: Deploy a layer 7 routing solution
description: Learn the deployment steps for the UCP layer 7 routing solution
keywords: routing, proxy, interlock
redirect_from:
- /ee/ucp/interlock/deploy/configuration-reference/
---
This topic covers deploying a layer 7 routing solution into a Docker Swarm to route traffic to Swarm services. Layer 7 routing is also referred to as an HTTP routing mesh.
1. [Prerequisites](#prerequisites)
2. [Enable layer 7 routing](#enable-layer-7-routing)
3. [Work with the core service configuration file](#work-with-the-core-service-configuration-file)
4. [Create a dedicated network for Interlock and extensions](#create-a-dedicated-network-for-interlock-and-extensions)
5. [Create the Interlock service](#create-the-interlock-service)
## Prerequisites
- [Docker](https://www.docker.com) version 17.06 or later
- Docker must be running in [Swarm mode](/engine/swarm/)
- Internet access (see [Offline installation](./offline-install.md) for installing without internet access)
## Enable layer 7 routing
By default, layer 7 routing is disabled, so you must first
@ -120,10 +142,16 @@ PollInterval = "3s"
LargeClientHeaderBuffers = "4 8k"
ClientBodyTimeout = "60s"
UnderscoresInHeaders = false
HideInfoHeaders = false
```
### Work with the core service configuration file
Interlock uses the TOML file for the core service configuration. The following example utilizes Swarm deployment and recovery features by creating a Docker Config object:
```bash
@ -134,9 +162,15 @@ PollInterval = "3s"
[Extensions]
[Extensions.default]
Image = "{{ page.ucp_org }}/ucp-interlock-extension:{{ page.ucp_version }}"
Args = ["-D"]
ProxyImage = "{{ page.ucp_org }}/ucp-interlock-proxy:{{ page.ucp_version }}"
ProxyArgs = []
ProxyConfigPath = "/etc/nginx/nginx.conf"
ProxyReplicas = 1
@ -157,7 +191,11 @@ EOF
oqkvv1asncf6p2axhx41vylgt
```
### Create a dedicated network for Interlock and extensions
Next, create a dedicated network for Interlock and the extensions:
@ -165,10 +203,17 @@ Next, create a dedicated network for Interlock and the extensions:
$> docker network create -d overlay interlock
```
### Create the Interlock service
Now you can create the Interlock service. Note the requirement to constrain it to a manager. The
Interlock core service must have access to a Swarm manager; however, the extension and proxy services
are recommended to run on workers. See the [Production](./production.md) section for more information
on setting up a production environment.
```bash
@ -178,7 +223,11 @@ $> docker service create \
--network interlock \
--constraint node.role==manager \
--config src=service.interlock.conf,target=/config.toml \
{{ page.ucp_org }}/ucp-interlock:{{ page.ucp_version }} -D run -c /config.toml
sjpgq7h621exno6svdnsvpv9z
```
@ -189,15 +238,27 @@ one for the extension service, and one for the proxy service:
$> docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
lheajcskcbby modest_raman replicated 1/1 nginx:alpine *:80->80/tcp *:443->443/tcp
oxjvqc6gxf91 keen_clarke replicated 1/1 {{ page.ucp_org }}/ucp-interlock-extension:{{ page.ucp_version }}
sjpgq7h621ex interlock replicated 1/1 {{ page.ucp_org }}/ucp-interlock:{{ page.ucp_version }}
```
The Interlock traffic layer is now deployed.
## Next steps
- [Configure Interlock](../config/index.md)
- [Deploy applications](../usage/index.md)
- [Production deployment information](./production.md)
- [Offline installation](./offline-install.md)

View File

@ -1,15 +1,24 @@
---
title: Offline installation considerations
description: Learn how to install Interlock on a Docker cluster without internet access.
keywords: routing, proxy, interlock
---
To install Interlock on a Docker cluster without internet access, the Docker images must be loaded. This topic describes how to export the images from a local Docker
engine and then load them into the Docker Swarm cluster.
First, using an existing Docker engine, save the images:
```bash
$> docker save {{ page.ucp_org }}/ucp-interlock:{{ page.ucp_version }} > interlock.tar
$> docker save {{ page.ucp_org }}/ucp-interlock-extension:{{ page.ucp_version }} > interlock-extension-nginx.tar
$> docker save {{ page.ucp_org }}/ucp-interlock-proxy:{{ page.ucp_version }} > nginx.tar
```
@ -18,6 +27,15 @@ $> docker save {{ page.ucp_org }}/ucp-interlock-proxy:{{ page.ucp_version }} > n
Note: replace `{{ page.ucp_org }}/ucp-interlock-extension:{{ page.ucp_version }}` and `{{ page.ucp_org }}/ucp-interlock-proxy:{{ page.ucp_version }}` with the
corresponding extension and proxy image if you are not using Nginx.
You should have the following three files:
Copy these files to each node in the Swarm cluster, then load the images on each node:

```bash
$> docker load < interlock.tar
$> docker load < interlock-extension-nginx.tar
$> docker load < nginx.tar
```
## Next steps
After loading the images on each node, refer to the [Deploy](./index.md) section to
continue the installation.

View File

@ -5,10 +5,18 @@ description: Learn how to configure the layer 7 routing solution for a productio
keywords: routing, proxy, interlock
---
This section describes how to configure Interlock
for a production environment. If you have not yet deployed Interlock, refer to [Deploying Interlock](./index.md), because this information builds on the basic deployment. This topic does not cover infrastructure deployment;
it assumes you have a vanilla Swarm cluster (`docker swarm init` and `docker swarm join` from the nodes).
Refer to the [Swarm](/engine/swarm/) documentation if you need help
getting a Swarm cluster deployed.
The layer 7 solution that ships with UCP is highly available
@ -50,14 +58,12 @@ lb-01
You can inspect each node to ensure the labels were successfully added:
{% raw %}
```bash
$> docker node inspect -f '{{ .Spec.Labels }}' lb-00
map[nodetype:loadbalancer]
$> docker node inspect -f '{{ .Spec.Labels }}' lb-01
map[nodetype:loadbalancer]
```
{% endraw %}
Each command should print `map[nodetype:loadbalancer]`.
@ -127,5 +133,10 @@ to provide more bandwidth for the user services.
![Interlock 2.0 Production Deployment](../../images/interlock_production_deploy.png)
## Next steps
- [Configure Interlock](../config/index.md)
- [Deploy applications](../usage/index.md)

View File

@ -0,0 +1,130 @@
---
title: Layer 7 routing upgrade
description: Learn how to upgrade your existing layer 7 routing solution
keywords: routing, proxy, hrm
redirect_from:
- /ee/ucp/interlock/upgrade/
---
# UCP upgrade process
The [HTTP routing mesh](/datacenter/ucp/2.2/guides/admin/configure/use-domain-names-to-access-services.md)
functionality was redesigned in UCP 3.0 for greater security and flexibility.
The functionality was also renamed to "layer 7 routing", to make it easier for
new users to get started.
[Learn about the new layer 7 routing functionality](../index.md).
To route traffic to your service you apply specific labels to your swarm
services, describing the hostname for the service and other configurations.
Things work in the same way as they did with the HTTP routing mesh, with the
only difference being that you use different labels.
You don't have to manually update your services. During the upgrade process to
3.0, UCP updates the services to start using new labels.
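As an illustration, the relabeling is roughly equivalent to the following sketch. The service name, hostname, and port are placeholders, and the exact HRM label syntax varied between UCP 2.x releases, so this is not a literal transcript of what UCP runs:

```shell
# Hypothetical service "demo" routed by hostname demo.local on port 8080.
# During the upgrade, UCP applies the equivalent layer 7 routing labels
# automatically; done by hand it would look like this:
docker service update \
  --label-add com.docker.lb.hosts=demo.local \
  --label-add com.docker.lb.port=8080 \
  demo
```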
This article describes the upgrade process for the routing component, so that
you can troubleshoot UCP and your services, in case something goes wrong with
the upgrade.
If you are using the HTTP routing mesh, and start an upgrade to UCP 3.0:
1. UCP starts a reconciliation process to ensure all internal components are
deployed. As part of this, services using HRM labels are inspected.
2. UCP creates the `com.docker.ucp.interlock.conf-<id>` configuration object based on HRM configurations.
3. The HRM service is removed.
4. The `ucp-interlock` service is deployed with the configuration created.
5. The `ucp-interlock` service deploys the `ucp-interlock-extension` and
`ucp-interlock-proxy` services.
The only way to roll back from an upgrade is by restoring from a backup taken
before the upgrade. If something goes wrong during the upgrade process, you
need to troubleshoot the Interlock services and your services, since the HRM
service won't be running after the upgrade.
[Learn more about the interlock services and architecture](../architecture.md).
## Check that routing works
After upgrading to UCP 3.0, you should check if all swarm services are still
routable.
For services using HTTP:
```bash
curl -vs http://<ucp-url>:<hrm-http-port>/ -H "Host: <service-hostname>"
```
For services using HTTPS:
```bash
curl -vs https://<ucp-url>:<hrm-https-port>
```
After the upgrade, check that you can still use the same hostnames to access
the swarm services.
## The ucp-interlock services are not running
After the upgrade to UCP 3.0, the following services should be running:
* `ucp-interlock`: Monitors swarm workloads configured to use layer 7 routing.
* `ucp-interlock-extension`: Helper service that generates the configuration for
the `ucp-interlock-proxy` service.
* `ucp-interlock-proxy`: Provides load balancing and proxying for
swarm workloads.
To check if these services are running, use a client bundle with administrator
permissions and run:
```bash
docker ps --filter "name=ucp-interlock"
```
* If the `ucp-interlock` service doesn't exist or is not running, something went
wrong with the reconciliation step.
* If this still doesn't work, it's possible that UCP is having problems creating
the `com.docker.ucp.interlock.conf-1` configuration object due to name conflicts. Make sure you
don't have any configuration with the same name by running:
```
docker config ls --filter "name=com.docker.ucp.interlock"
```
* If either the `ucp-interlock-extension` or `ucp-interlock-proxy` services are
not running, it's possible that there are port conflicts.
As a workaround, re-enable the layer 7 routing configuration from the
[UCP settings page](deploy/index.md). Make sure the ports you choose are not
being used by other services.
## Workarounds and clean-up
If you have any of the problems above, disable and enable the layer 7 routing
setting on the [UCP settings page](index.md). This redeploys the
services with their default configuration.
When doing so, make sure you specify the same ports you were using for HRM,
and that no other services are listening on those ports.
You should also check if the `ucp-hrm` service is running. If it is, you should
stop it since it can conflict with the `ucp-interlock-proxy` service.
## Optionally remove labels
As part of the upgrade process UCP adds the
[labels specific to the new layer 7 routing solution](../usage/labels-reference.md).
You can update your services to remove the old HRM labels, since they won't be
used anymore.
## Optionally segregate control traffic
Interlock is designed so that all the control traffic is kept separate from
the application traffic.
If before upgrading you had all your applications attached to the `ucp-hrm`
network, after upgrading you can update your services to start using a
dedicated network for routing that's not shared with other services.
[Learn how to use a dedicated network](../usage/index.md).
If before upgrading you had a dedicated network to route traffic to each service,
Interlock continues using those dedicated networks. However, the
`ucp-interlock` service is attached to each of those networks. You can update
the `ucp-interlock` service so that it is only connected to the `ucp-hrm` network.

View File

@ -2,17 +2,23 @@
title: Layer 7 routing overview
description: Learn how to route layer 7 traffic to your Swarm services
keywords: routing, UCP, interlock, load balancing
---

Application-layer (Layer 7) routing is the application routing and load balancing (ingress routing) system included with Docker Enterprise for Swarm orchestration. The Interlock architecture takes advantage of the underlying Swarm components to provide scalable Layer 7 routing and Layer 4 VIP mode functionality.
Interlock is specific to the Swarm orchestrator. If you're trying to route
traffic to your Kubernetes applications, check

View File

@ -4,6 +4,10 @@ description: Learn how to do canary deployments for your Docker swarm services
keywords: routing, proxy
---
The following example publishes a service as a canary instance.
First, create an overlay network to isolate and secure service traffic:
@ -27,7 +31,11 @@ $> docker service create \
ehazlett/docker-demo
```
Interlock detects when the service is available and publishes it. After tasks are running
and the proxy service is updated, the application is available via `http://demo.local`:
```bash
@ -58,7 +66,11 @@ $> curl -vs -H "Host: demo.local" http://127.0.0.1/ping
Notice `metadata` is specified with `demo-version-1`.
## Deploy an updated service as a canary instance
The following example deploys an updated service as a canary instance:
```bash

View File

@ -1,10 +1,18 @@
---
title: Use context and path-based routing
description: Learn how to route traffic to your Docker swarm services based
on a url path.
keywords: routing, proxy
---
The following example publishes a service using context or path-based routing.
First, create an overlay network so that service traffic is isolated and secure:

View File

@ -7,6 +7,7 @@ redirect_from:
- /ee/ucp/interlock/deploy/configure/
---
After Interlock is deployed, you can launch and publish services and applications.
Use [Service Labels](/engine/reference/commandline/service_create/#set-metadata-on-a-service--l-label)
to configure services to publish themselves to the load balancer.
@ -16,6 +17,18 @@ for each of the applications.
## Publish a service with four replicas
Create a Docker Service using two labels:
- `com.docker.lb.hosts`
- `com.docker.lb.port`
The `com.docker.lb.hosts` label instructs Interlock which hosts the service should be published on.
The `com.docker.lb.port` label specifies which port the proxy service should use to reach
the upstream tasks.
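Taken together, a minimal sketch of a service using both labels looks like this (the service name, hostname, network, and image are placeholders):

```shell
# com.docker.lb.hosts: hostname the proxy publishes the service on
# com.docker.lb.port: port the proxy uses to reach the service tasks
docker service create \
  --name demo \
  --network demo \
  --label com.docker.lb.hosts=demo.local \
  --label com.docker.lb.port=8080 \
  ehazlett/docker-demo
```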
Publish a demo service to the host `demo.local`:
First, create an overlay network so that service traffic is isolated and secure:
@ -63,8 +80,15 @@ demo scaled to 4
In this example, four service replicas are configured as upstreams. The load balancer balances traffic
across all service replicas.
## Publish a service with a web interface

This example deploys a simple service that:
* Has a JSON endpoint that returns the ID of the task serving the request.
* Has a web interface that shows how many tasks the service is running.
@ -148,6 +172,7 @@ able to start using the service from your browser.
![browser](../../images/route-simple-app-1.png){: .with-border }
## Next steps

- [Publish a service as a canary instance](./canary.md)
- [Use context or path-based routing](./context.md)
- [Publish a default host service](./default-backend.md)
- [Specify a routing mode](./interlock-vip-mode.md)
- [Use routing labels](./labels-reference.md)
- [Implement redirects](./redirects.md)
- [Implement a service cluster](./service-clusters.md)
- [Implement persistent (sticky) sessions](./sessions.md)
- [Implement SSL](./ssl.md)
- [Secure services with TLS](./tls.md)
- [Configure websockets](./websockets.md)

View File

@ -1,4 +1,5 @@
---
title: Specify a routing mode
description: Learn about task and VIP backend routing modes for Layer 7 routing
keywords: routing, proxy, interlock
@ -9,6 +10,17 @@ redirect_from:
You can publish services using "vip" and "task" backend routing modes.
## Task routing mode
Task routing is the default Interlock behavior and the default backend mode if one is not specified.
In task routing mode, Interlock uses backend task IPs to route traffic from the proxy to each container.
@ -18,7 +30,11 @@ Task routing mode applies L7 routing and then sends packets directly to a contai
![task mode](../../images/interlock-task-mode.png)
## VIP routing mode
VIP mode is an alternative mode of routing in which Interlock uses the Swarm service VIP as the backend IP instead of container IPs.
Traffic to the frontend route is L7 load balanced to the Swarm service VIP, which L4 load balances to backend tasks.
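A service can opt into VIP mode with the `com.docker.lb.backend_mode` label; the following is a sketch (service name, hostname, network, and image are placeholders):

```shell
# Publish demo.local using the Swarm service VIP as the proxy backend
docker service create \
  --name demo \
  --network demo \
  --label com.docker.lb.hosts=demo.local \
  --label com.docker.lb.port=8080 \
  --label com.docker.lb.backend_mode=vip \
  ehazlett/docker-demo
```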
@ -65,7 +81,11 @@ The following two updates still require a proxy reconfiguration (because these a
- Add/Remove a network on a service
- Deployment/Deletion of a service
#### Publish a default host service
The following example publishes a service to be a default host. The service responds
whenever there is a request to a host that is not configured.

View File

@ -5,7 +5,13 @@ description: Learn about the labels you can use in your swarm services to route
keywords: routing, proxy
---
After you enable the layer 7 routing solution, you can
[start using it in your swarm services](index.md).

View File

@ -5,6 +5,119 @@ description: Learn how to implement redirects using swarm services and the
keywords: routing, proxy, redirects, interlock
---
The following example publishes a service and configures a redirect from `old.local` to `new.local`.

First, create an overlay network so that service traffic is isolated and secure:
```bash
$> docker network create -d overlay demo
1se1glh749q1i4pw0kf26mfx5
```
Next, create the service with the redirect:
```bash
$> docker service create \
--name demo \
--network demo \
--detach=false \
--label com.docker.lb.hosts=old.local,new.local \
--label com.docker.lb.port=8080 \
--label com.docker.lb.redirects=http://old.local,http://new.local \
--env METADATA="demo-new" \
ehazlett/docker-demo
```
Interlock detects when the service is available and publishes it. After tasks are running
and the proxy service is updated, the application is available via `http://new.local`
with a redirect configured that sends `http://old.local` to `http://new.local`:
```bash
$> curl -vs -H "Host: old.local" http://127.0.0.1
* Rebuilt URL to: http://127.0.0.1/
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 80 (#0)
> GET / HTTP/1.1
> Host: old.local
> User-Agent: curl/7.54.0
> Accept: */*
>
< HTTP/1.1 302 Moved Temporarily
< Server: nginx/1.13.6
< Date: Wed, 08 Nov 2017 19:06:27 GMT
< Content-Type: text/html
< Content-Length: 161
< Connection: keep-alive
< Location: http://new.local/
< x-request-id: c4128318413b589cafb6d9ff8b2aef17
< x-proxy-id: 48854cd435a4
< x-server-info: interlock/2.0.0-development (147ff2b1) linux/amd64
<
<html>
<head><title>302 Found</title></head>
<body bgcolor="white">
<center><h1>302 Found</h1></center>
<hr><center>nginx/1.13.6</center>
</body>
</html>
```

View File

@ -1,4 +1,5 @@
---
title: Implement service clusters
description: Learn how to route traffic to different proxies using a service cluster.
keywords: ucp, interlock, load balancing, routing
@ -83,6 +84,18 @@ The following example configures an eight (8) node Swarm cluster that uses servi
to route traffic to different proxies. This example includes:
- Three (3) managers and five (5) workers
- Four workers that are configured with node labels to be dedicated
ingress cluster load balancer nodes. These nodes receive all application traffic.
@ -196,3 +209,97 @@ $> docker network create -d overlay ucp-interlock
```
Now [enable the Interlock service](../deploy/index.md#enable-layer-7-routing).
## Configure proxy services
With the node labels, you can re-configure the Interlock Proxy services to be constrained to the
workers for each region. From a manager, run the following commands to pin the proxy services to the ingress workers:
```bash
$> docker service update \
--constraint-add node.labels.nodetype==loadbalancer \
--constraint-add node.labels.region==us-east \
ucp-interlock-proxy-us-east
$> docker service update \
--constraint-add node.labels.nodetype==loadbalancer \
--constraint-add node.labels.region==us-west \
ucp-interlock-proxy-us-west
```
You are now ready to deploy applications. First, create individual networks for each application:
```bash
$> docker network create -d overlay demo-east
$> docker network create -d overlay demo-west
```
Next, deploy the application in the `us-east` service cluster:
```bash
$> docker service create \
--name demo-east \
--network demo-east \
--detach=true \
--label com.docker.lb.hosts=demo-east.local \
--label com.docker.lb.port=8080 \
--label com.docker.lb.service_cluster=us-east \
--env METADATA="us-east" \
ehazlett/docker-demo
```
Now deploy the application in the `us-west` service cluster:
```bash
$> docker service create \
--name demo-west \
--network demo-west \
--detach=true \
--label com.docker.lb.hosts=demo-west.local \
--label com.docker.lb.port=8080 \
--label com.docker.lb.service_cluster=us-west \
--env METADATA="us-west" \
ehazlett/docker-demo
```
Only the designated service cluster is configured for the applications. For example, the `us-east` service cluster
is not configured to serve traffic for the `us-west` service cluster and vice versa. You can observe this when you
send requests to each service cluster.
When you send a request to the `us-east` service cluster, it only knows about the `us-east` application. This example uses IP address lookup from the swarm API, so you must `ssh` to a manager node or configure your shell with a UCP client bundle before testing:
```bash
{% raw %}
$> curl -H "Host: demo-east.local" http://$(docker node inspect -f '{{ .Status.Addr }}' lb-00):8080/ping
{"instance":"1b2d71619592","version":"0.1","metadata":"us-east","request_id":"3d57404cf90112eee861f9d7955d044b"}
$> curl -H "Host: demo-west.local" http://$(docker node inspect -f '{{ .Status.Addr }}' lb-00):8080/ping
<html>
<head><title>404 Not Found</title></head>
<body bgcolor="white">
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.13.6</center>
</body>
</html>
{% endraw %}
```
Application traffic is isolated to each service cluster. Interlock also ensures that a proxy is updated only if it has corresponding updates to its designated service cluster. In this example, updates to the `us-east` cluster do not affect the `us-west` cluster. If there is a problem, the others are not affected.

View File

@ -5,10 +5,18 @@ description: Learn how to configure your swarm services with persistent sessions
keywords: routing, proxy, cookies, IP hash
---
You can publish a service and configure the proxy for persistent (sticky) sessions using:

- Cookies
- IP hashing
## Cookies
To configure sticky sessions using cookies:
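A minimal sketch using the `com.docker.lb.sticky_session_cookie` label follows; the service name, hostname, network, and cookie name are placeholders:

```shell
# Pin each client to one backend task using an application cookie named "session"
docker service create \
  --name demo \
  --network demo \
  --label com.docker.lb.hosts=demo.local \
  --label com.docker.lb.port=8080 \
  --label com.docker.lb.sticky_session_cookie=session \
  ehazlett/docker-demo
```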
@ -131,3 +139,7 @@ to a specific backend.
> **Note**: due to the way the IP hashing works for extensions, you will notice a new upstream address when scaling replicas. This is
> expected, because internally the proxy uses the new set of replicas to determine a backend on which to pin. When the upstreams are
> determined, a new "sticky" backend is chosen as the dedicated upstream.

View File

@ -1,11 +1,19 @@
---
title: Secure services with TLS
description: Learn how to configure your swarm services with TLS.
keywords: routing, proxy, tls
redirect_from:
- /ee/ucp/interlock/usage/ssl/
---
After [deploying a layer 7 routing solution](../deploy/index.md), you have two options for securing your
services with TLS:
@ -188,3 +196,4 @@ service such that TLS traffic for `app.example.org` is passed to the service.
Since the connection is fully encrypted from end-to-end, the proxy service
cannot add metadata such as version information or request ID to the
response headers.

View File

@ -1,9 +1,20 @@
---
title: Use websockets
description: Learn how to use websockets in your swarm services.
keywords: routing, proxy, websockets
---
First, create an overlay network to isolate and secure service traffic:
```bash
@ -26,6 +37,13 @@ $> docker service create \
> **Note**: for this to work, you must have an entry for `demo.local` in your local hosts (i.e. `/etc/hosts`) file.
> This uses the browser for websocket communication, so you must have an entry or use a routable domain.
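A sketch of such a service definition, assuming the `com.docker.lb.websocket_endpoints` label for the websocket path (service name, hostname, network, and path are placeholders):

```shell
# Route /ws with websocket (connection upgrade) support through the proxy
docker service create \
  --name demo \
  --network demo \
  --label com.docker.lb.hosts=demo.local \
  --label com.docker.lb.port=8080 \
  --label com.docker.lb.websocket_endpoints=/ws \
  ehazlett/docker-demo
```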
Interlock detects when the service is available and publishes it. Once tasks are running
and the proxy service is updated, the application should be available via `http://demo.local`. Open