|
@ -1313,40 +1313,56 @@ manuals:
|
|||
section:
|
||||
- title: Simple deployment
|
||||
path: /ee/ucp/interlock/deploy/
|
||||
- title: Configure your deployment
|
||||
path: /ee/ucp/interlock/deploy/configure/
|
||||
- title: Production deployment
|
||||
path: /ee/ucp/interlock/deploy/production/
|
||||
- title: Host mode networking
|
||||
path: /ee/ucp/interlock/deploy/host-mode-networking/
|
||||
- title: Configuration reference
|
||||
path: /ee/ucp/interlock/deploy/configuration-reference/
|
||||
- sectiontitle: Route traffic to services
|
||||
- title: Offline installation
|
||||
path: /ee/ucp/interlock/deploy/offline-install/
|
||||
- title: Layer 7 routing upgrade
|
||||
path: /ee/ucp/interlock/upgrade/
|
||||
- sectiontitle: Configuration
|
||||
section:
|
||||
- title: Simple swarm service
|
||||
- title: Configure your deployment
|
||||
path: /ee/ucp/interlock/config/
|
||||
- title: Using a custom extension template
|
||||
path: /ee/ucp/interlock/config/custom-template/
|
||||
- title: Configuring an HAProxy extension
|
||||
path: /ee/ucp/interlock/config/haproxy-config/
|
||||
- title: Configuring host mode networking
|
||||
path: /ee/ucp/interlock/config/host-mode-networking/
|
||||
- title: Configuring an nginx extension
|
||||
path: /ee/ucp/interlock/config/nginx-config/
|
||||
- title: Using application service labels
|
||||
path: /ee/ucp/interlock/config/service-labels/
|
||||
- title: Tuning the proxy service
|
||||
path: /ee/ucp/interlock/config/tuning/
|
||||
- title: Updating Interlock services
|
||||
path: /ee/ucp/interlock/config/updates/
|
||||
- sectiontitle: Routing traffic to services
|
||||
section:
|
||||
- title: Deploying services
|
||||
path: /ee/ucp/interlock/usage/
|
||||
- title: Set a default service
|
||||
path: /ee/ucp/interlock/usage/default-service/
|
||||
- title: Applications with TLS
|
||||
path: /ee/ucp/interlock/usage/tls/
|
||||
- title: Application redirects
|
||||
path: /ee/ucp/interlock/usage/redirects/
|
||||
- title: Persistent (sticky) sessions
|
||||
path: /ee/ucp/interlock/usage/sessions/
|
||||
- title: Websockets
|
||||
path: /ee/ucp/interlock/usage/websockets/
|
||||
- title: Canary application instances
|
||||
- title: Publishing a service as a canary instance
|
||||
path: /ee/ucp/interlock/usage/canary/
|
||||
- title: Service clusters
|
||||
path: /ee/ucp/interlock/usage/service-clusters/
|
||||
- title: Context/Path based routing
|
||||
- title: Using context or path-based routing
|
||||
path: /ee/ucp/interlock/usage/context/
|
||||
- title: VIP backend mode
|
||||
path: /ee/ucp/interlock/usage/interlock-vip-mode/
|
||||
- title: Service labels reference
|
||||
path: /ee/ucp/interlock/usage/labels-reference/
|
||||
- title: Layer 7 routing upgrade
|
||||
path: /ee/ucp/interlock/upgrade/
|
||||
- title: Publishing a default host service
|
||||
path: /ee/ucp/interlock/usage/default-backend/
|
||||
- title: Specifying a routing mode
|
||||
path: /ee/ucp/interlock/usage/interlock-vip-mode/
|
||||
- title: Using routing labels
|
||||
path: /ee/ucp/interlock/usage/labels-reference/
|
||||
- title: Implementing redirects
|
||||
path: /ee/ucp/interlock/usage/redirects/
|
||||
- title: Implementing a service cluster
|
||||
path: /ee/ucp/interlock/usage/service-clusters/
|
||||
- title: Implementing persistent (sticky) sessions
|
||||
path: /ee/ucp/interlock/usage/sessions/
|
||||
- title: Implementing SSL
|
||||
path: /ee/ucp/interlock/usage/ssl/
|
||||
- title: Securing services with TLS
|
||||
path: /ee/ucp/interlock/usage/tls/
|
||||
- title: Configuring websockets
|
||||
path: /ee/ucp/interlock/usage/websockets/
|
||||
- sectiontitle: Deploy apps with Kubernetes
|
||||
section:
|
||||
- title: Access Kubernetes Resources
|
||||
|
|
After Width: | Height: | Size: 32 KiB |
After Width: | Height: | Size: 38 KiB |
After Width: | Height: | Size: 32 KiB |
After Width: | Height: | Size: 31 KiB |
Before Width: | Height: | Size: 22 KiB After Width: | Height: | Size: 22 KiB |
After Width: | Height: | Size: 24 KiB |
After Width: | Height: | Size: 22 KiB |
After Width: | Height: | Size: 34 KiB |
After Width: | Height: | Size: 33 KiB |
After Width: | Height: | Size: 32 KiB |
After Width: | Height: | Size: 32 KiB |
|
@ -2,72 +2,116 @@
|
|||
title: Interlock architecture
|
||||
description: Learn more about the architecture of the layer 7 routing solution
|
||||
for Docker swarm services.
|
||||
keywords: routing, proxy
|
||||
keywords: routing, UCP, interlock, load balancing
|
||||
---
|
||||
|
||||
The layer 7 routing solution for swarm workloads is known as Interlock, and has
|
||||
three components:
|
||||
This document covers the following considerations:
|
||||
|
||||
* **Interlock-proxy**: This is a proxy/load-balancing service that handles the
|
||||
requests from the outside world. By default this service is a containerized
|
||||
NGINX deployment.
|
||||
* **Interlock-extension**: This is a helper service that generates the
|
||||
configuration used by the proxy service.
|
||||
* **Interlock**: This is the central piece of the layer 7 routing solution.
|
||||
It uses the Docker API to monitor events, and manages the extension and
|
||||
proxy services.
|
||||
- **Interlock default architecture**
|
||||
- **Single Interlock deployment** (default)
|
||||
A single Interlock deployment creates a /24 ingress network that is used by all applications in a Docker Enterprise cluster.
|
||||
- **Service clusters**
|
||||
An Interlock service cluster creates separate Interlock proxies that are assigned to specific applications.
|
||||
- **Application optimization for Interlock**
|
||||
Interlock has several configuration options so that it can be deployed in a manner that best matches the application and infrastructure requirements of a deployment.
|
||||
|
||||
This is what the default configuration looks like, once you enable layer 7
|
||||
routing in UCP:
|
||||
A good understanding of this content is necessary for the successful deployment and use of Interlock.
|
||||
|
||||

|
||||
### Interlock default architecture
|
||||
|
||||
The Interlock service starts a single replica on a manager node. The
|
||||
Interlock-extension service runs a single replica on any available node, and
|
||||
the Interlock-proxy service starts two replicas on any available node.
|
||||

|
||||
|
||||
If you don't have any worker nodes in your cluster, then all Interlock
|
||||
components run on manager nodes.
|
||||
### Single Interlock deployment
|
||||
|
||||
## Deployment lifecycle
|
||||
When an application image is updated, the following actions occur:
|
||||
|
||||
By default, layer 7 routing is disabled, so an administrator first needs to
|
||||
enable this service from the UCP web UI.
|
||||
1. The service is updated with a new version of the application.
|
||||
2. The default `stop-first` policy stops the first replica before scheduling the new one. The Interlock proxies remove `ip1.0` from the backend pool as the `app.1` task is removed.
|
||||
3. The first application task is rescheduled with the new image after the first task stops.
|
||||
|
||||
Once that happens:
|
||||

|
||||
|
||||
1. UCP creates the `ucp-interlock` overlay network.
|
||||
2. UCP deploys the `ucp-interlock` service and attaches it both to the Docker
|
||||
socket and the overlay network that was created. This allows the Interlock
|
||||
service to use the Docker API. That's also the reason why this service needs to
|
||||
run on a manager node.
|
||||
3. The `ucp-interlock` service starts the `ucp-interlock-extension` service
|
||||
and attaches it to the `ucp-interlock` network. This allows both services
|
||||
to communicate.
|
||||
4. The `ucp-interlock-extension` generates a configuration to be used by
|
||||
the proxy service. By default the proxy service is NGINX, so this service
|
||||
generates a standard NGINX configuration.
|
||||
5. The `ucp-interlock` service takes the proxy configuration and uses it to
|
||||
start the `ucp-interlock-proxy` service.
|
||||
The Interlock proxy.1 task is then rescheduled with the new NGINX configuration that contains the update for the new app.1 task.
|
||||
|
||||
At this point everything is ready for you to start using the layer 7 routing
|
||||
service with your swarm workloads.
|
||||

|
||||
|
||||
## Routing lifecycle
|
||||
After proxy.1 is complete, proxy.2 redeploys with the updated NGINX configuration for the app.1 task.
|
||||
|
||||
Once the layer 7 routing service is enabled, you apply specific labels to
|
||||
your swarm services. The labels define the hostnames that are routed to the
|
||||
service, the ports used, and other routing configurations.
|
||||

|
||||
|
||||
Once you deploy or update a swarm service with those labels:
|
||||
In this scenario, the amount of time that the service is unavailable is less than 30 seconds.
|
||||
|
||||
### Service clusters
|
||||
|
||||
1. The `ucp-interlock` service is monitoring the Docker API for events and
|
||||
publishes the events to the `ucp-interlock-extension` service.
|
||||
2. That service in turn generates a new configuration for the proxy service,
|
||||
based on the labels you've added to your services.
|
||||
3. The `ucp-interlock` service takes the new configuration and reconfigures the
|
||||
`ucp-interlock-proxy` to start using it.
|
||||

|
||||
|
||||
This all happens in milliseconds and with rolling updates. Even though
|
||||
services are being reconfigured, users won't notice it.
|
||||
### Optimizing Interlock for applications
|
||||
|
||||
#### Application update order
|
||||
Swarm provides control over the order in which old tasks are removed while new ones are created. This is controlled at the service level with `--update-order`.
|
||||
|
||||
- `stop-first` (default) - Stops the current task before the new task is scheduled.
|
||||
- `start-first` - Stops the current task only after the new task has been scheduled. This guarantees that the new task is running before the old task shuts down.
|
||||
|
||||
Use `start-first` if …
|
||||
|
||||
- You have a single application replica and you cannot tolerate a service interruption. Both the old and new tasks run simultaneously during the update, but this ensures that there is no gap in service during the update.
|
||||
|
||||
Use `stop-first` if …
|
||||
|
||||
- Old and new tasks of your service cannot serve clients simultaneously.
|
||||
- You do not have enough cluster resources to run old and new replicas simultaneously.
|
||||
|
||||
In most cases, `start-first` is the best choice because it optimizes for high availability during updates.
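For example, assuming a service named `demo` (the demo service name used elsewhere in this guide), the update order can be changed with a single command; this is a minimal sketch rather than a required step:

```bash
# Start the replacement task before stopping the old one during updates.
docker service update --update-order start-first demo
```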
|
||||
|
||||
#### Application update delay
|
||||
Swarm services use `update-delay` to control the speed at which a service is updated. This adds a timed delay between application tasks as they are updated. The delay is measured from the time the first task of a service transitions to a healthy state until the time the second task begins its update. The default is 0 seconds, which means that a replica task begins updating as soon as the previously updated task transitions into a healthy state.
|
||||
|
||||
Use `update-delay` if …
|
||||
|
||||
- You are optimizing for the least number of dropped connections and a longer update cycle is an acceptable tradeoff.
|
||||
- Interlock update convergence takes a long time in your environment (this can occur when there is a large number of overlay networks).
|
||||
|
||||
Do not use `update-delay` if …
|
||||
|
||||
- Service updates must occur rapidly.
|
||||
- Old and new tasks of your service cannot serve clients simultaneously.
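As a sketch, assuming the same hypothetical `demo` service, a delay between task updates can be set as follows:

```bash
# Wait 30 seconds after each updated task becomes healthy before updating the next task.
docker service update --update-delay 30s demo
```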
|
||||
|
||||
#### Use application health checks
|
||||
Swarm uses application health checks extensively to ensure that its updates do not cause service interruption. `health-cmd` can be configured in a Dockerfile or compose file to define a method for health checking an application. Without health checks, Swarm cannot determine when an application is truly ready to service traffic and will mark it as healthy as soon as the container process is running. This can potentially send traffic to an application before it is capable of serving clients, leading to dropped connections.
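The following sketch shows one way to add a health check on the command line; it assumes the `ehazlett/docker-demo` image used elsewhere in this guide, that the image ships with `curl`, and that it serves a `/ping` endpoint on port 8080:

```bash
# Swarm marks a task healthy only after the health check succeeds,
# so Interlock does not send traffic to tasks that are not ready.
docker service create \
  --name demo \
  --label com.docker.lb.hosts=app.example.org \
  --label com.docker.lb.port=8080 \
  --health-cmd "curl -fsS http://localhost:8080/ping || exit 1" \
  --health-interval 10s \
  --health-retries 3 \
  ehazlett/docker-demo
```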
|
||||
|
||||
#### Application stop grace period
|
||||
`stop-grace-period` configures a time period during which the task continues to run but does not accept new connections. This allows connections to drain before the task is stopped, reducing the possibility of terminating in-flight requests. The default value is 10 seconds. This means that a task continues to run for 10 seconds after starting its shutdown cycle, which also removes it from the load balancer to prevent it from accepting new connections. Applications that receive long-lived connections can benefit from longer shutdown cycles so that connections can terminate normally.
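For example, to give a hypothetical `demo` service 60 seconds to drain connections before its tasks are stopped:

```bash
# Allow up to 60 seconds for in-flight connections to finish before the task is killed.
docker service update --stop-grace-period 60s demo
```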
|
||||
|
||||
### Interlock optimizations
|
||||
|
||||
#### Use service clusters for Interlock segmentation
|
||||
Interlock service clusters allow Interlock to be segmented into multiple logical instances called “service clusters”, which have independently managed proxies. Application traffic only uses the proxies for a specific service cluster, allowing the full segmentation of traffic. Each service cluster only connects to the networks using that specific service cluster, which reduces the number of overlay networks to which proxies connect. Because service clusters also deploy separate proxies, this also reduces the amount of churn in LB configs when there are service updates.
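An application opts into a service cluster with the `com.docker.lb.service_cluster` service label (see the service labels reference). The following is a minimal sketch; the `us-east` cluster name is illustrative and must match a `ServiceCluster` defined in the Interlock configuration:

```bash
# Publish this application only through the proxies of the "us-east" service cluster.
docker service create \
  --name demo \
  --label com.docker.lb.hosts=app.example.org \
  --label com.docker.lb.port=8080 \
  --label com.docker.lb.service_cluster=us-east \
  ehazlett/docker-demo
```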
|
||||
|
||||
#### Minimizing number of overlay networks
|
||||
Interlock proxy containers connect to the overlay network of every Swarm service. Having many networks connected to Interlock adds incremental delay when Interlock updates its load balancer configuration. Each network connected to Interlock generally adds 1-2 seconds of update delay. With many networks, the Interlock update delay causes the LB config to be out of date for too long, which can cause traffic to be dropped.
|
||||
|
||||
Minimizing the number of overlay networks that Interlock connects to can be accomplished in two ways:
|
||||
|
||||
- Reduce the number of networks. If the architecture permits it, applications can be grouped together to use the same networks.
|
||||
- Use Interlock service clusters. By segmenting Interlock, service clusters also segment which networks are connected to Interlock, reducing the number of networks to which each proxy is connected.
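The following sketch illustrates the first approach: two hypothetical applications share a single overlay network, and the `com.docker.lb.network` label tells Interlock which network the proxy should attach to:

```bash
# Create one shared ingress overlay network instead of one network per application.
docker network create -d overlay shared-ingress

# Attach each published application to the shared network.
docker service create \
  --name app-a \
  --network shared-ingress \
  --label com.docker.lb.hosts=a.example.org \
  --label com.docker.lb.port=8080 \
  --label com.docker.lb.network=shared-ingress \
  ehazlett/docker-demo
```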
|
||||
|
||||
#### Use Interlock VIP Mode
|
||||
VIP mode can be used to reduce the impact of application updates on the Interlock proxies. It uses the Swarm L4 load-balancing VIPs instead of individual task IPs to load balance traffic to a more stable internal endpoint. This prevents the proxy LB configs from changing for most kinds of app service updates, reducing churn for Interlock. The following features are not supported in VIP mode:
|
||||
|
||||
- Sticky sessions
|
||||
- Websockets
|
||||
- Canary deployments
|
||||
|
||||
The following features are supported in VIP mode:
|
||||
|
||||
- Host & context routing
|
||||
- Context root rewrites
|
||||
- Interlock TLS termination
|
||||
- TLS passthrough
|
||||
- Service clusters
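VIP mode is selected per application with the `com.docker.lb.backend_mode` service label (see the service labels reference), which defaults to `task`. A minimal sketch, reusing the demo image from elsewhere in this guide:

```bash
# Route traffic to the service VIP instead of individual task IP addresses.
docker service create \
  --name demo \
  --label com.docker.lb.hosts=app.example.org \
  --label com.docker.lb.port=8080 \
  --label com.docker.lb.backend_mode=vip \
  ehazlett/docker-demo
```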
|
||||
|
||||
## Next steps
|
||||
|
||||
- [Deploy Interlock](deploy/index.md)
|
||||
- [Configure Interlock](config/index.md)
|
||||
|
|
|
@ -0,0 +1,304 @@
|
|||
---
|
||||
title: Custom templates
|
||||
description: Learn how to use a custom extension template
|
||||
keywords: routing, proxy, interlock, load balancing
|
||||
---
|
||||
|
||||
Use a custom extension template if an option you need is not available in the extension configuration.
|
||||
|
||||
> Warning:
|
||||
This should be used with extreme caution as this completely bypasses the built-in
|
||||
extension template. Therefore, if you update the extension image in the future,
|
||||
you will not receive the updated template because you are using a custom one.
|
||||
|
||||
To use a custom template:
|
||||
|
||||
1. Create a Swarm configuration using a new template
|
||||
2. Create a Swarm configuration object
|
||||
3. Update the extension
|
||||
|
||||
## Create a Swarm configuration using a new template
|
||||
First, create a Swarm config using the new template, as shown in the following example. This example uses a custom Nginx configuration template, but you can use any extension configuration (for example, HAProxy).
|
||||
|
||||
The contents of the example `custom-template.conf` include:
|
||||
|
||||
{% raw %}
|
||||
```
|
||||
# CUSTOM INTERLOCK CONFIG
|
||||
user {{ .ExtensionConfig.User }};
|
||||
worker_processes {{ .ExtensionConfig.WorkerProcesses }};
|
||||
|
||||
error_log {{ .ExtensionConfig.ErrorLogPath }} warn;
|
||||
pid {{ .ExtensionConfig.PidPath }};
|
||||
|
||||
|
||||
events {
|
||||
worker_connections {{ .ExtensionConfig.MaxConnections }};
|
||||
|
||||
}
|
||||
|
||||
http {
|
||||
include /etc/nginx/mime.types;
|
||||
default_type application/octet-stream;
|
||||
server_names_hash_bucket_size 128;
|
||||
|
||||
# add custom HTTP options here, etc.
|
||||
|
||||
log_format main {{ .ExtensionConfig.MainLogFormat }}
|
||||
|
||||
log_format trace {{ .ExtensionConfig.TraceLogFormat }}
|
||||
|
||||
access_log {{ .ExtensionConfig.AccessLogPath }} main;
|
||||
|
||||
sendfile on;
|
||||
#tcp_nopush on;
|
||||
|
||||
keepalive_timeout {{ .ExtensionConfig.KeepaliveTimeout }};
|
||||
client_max_body_size {{ .ExtensionConfig.ClientMaxBodySize }};
|
||||
client_body_buffer_size {{ .ExtensionConfig.ClientBodyBufferSize }};
|
||||
client_header_buffer_size {{ .ExtensionConfig.ClientHeaderBufferSize }};
|
||||
large_client_header_buffers {{ .ExtensionConfig.LargeClientHeaderBuffers }};
|
||||
client_body_timeout {{ .ExtensionConfig.ClientBodyTimeout }};
|
||||
underscores_in_headers {{ if .ExtensionConfig.UnderscoresInHeaders }}on{{ else }}off{{ end }};
|
||||
|
||||
add_header x-request-id $request_id;
|
||||
add_header x-proxy-id $hostname;
|
||||
add_header x-server-info "{{ .Version }}";
|
||||
add_header x-upstream-addr $upstream_addr;
|
||||
add_header x-upstream-response-time $upstream_response_time;
|
||||
|
||||
proxy_connect_timeout {{ .ExtensionConfig.ConnectTimeout }};
|
||||
proxy_send_timeout {{ .ExtensionConfig.SendTimeout }};
|
||||
proxy_read_timeout {{ .ExtensionConfig.ReadTimeout }};
|
||||
proxy_set_header X-Real-IP $remote_addr;
|
||||
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
|
||||
proxy_set_header Host $http_host;
|
||||
proxy_set_header x-request-id $request_id;
|
||||
send_timeout {{ .ExtensionConfig.SendTimeout }};
|
||||
proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
|
||||
|
||||
ssl_prefer_server_ciphers on;
|
||||
ssl_ciphers {{ .ExtensionConfig.SSLCiphers }};
|
||||
ssl_protocols {{ .ExtensionConfig.SSLProtocols }};
|
||||
{{ if (and (gt .ExtensionConfig.SSLDefaultDHParam 0) (ne .ExtensionConfig.SSLDefaultDHParamPath "")) }}ssl_dhparam {{ .ExtensionConfig.SSLDefaultDHParamPath }};{{ end }}
|
||||
|
||||
map $http_upgrade $connection_upgrade {
|
||||
default upgrade;
|
||||
'' close;
|
||||
}
|
||||
|
||||
{{ if not .HasDefaultBackend }}
|
||||
# default host return 503
|
||||
server {
|
||||
listen {{ .Port }} default_server;
|
||||
server_name _;
|
||||
|
||||
root /usr/share/nginx/html;
|
||||
|
||||
error_page 503 /503.html;
|
||||
location = /503.html {
|
||||
try_files /503.html @error;
|
||||
internal;
|
||||
}
|
||||
|
||||
location @error {
|
||||
root /usr/share/nginx/html;
|
||||
}
|
||||
|
||||
location / {
|
||||
return 503;
|
||||
|
||||
}
|
||||
|
||||
location /nginx_status {
|
||||
stub_status on;
|
||||
access_log off;
|
||||
}
|
||||
|
||||
}
|
||||
{{ end }}
|
||||
|
||||
{{ range $host, $backends := .Hosts }}
|
||||
{{ with $hostBackend := index $backends 0 }}
|
||||
{{ $sslBackend := index $.SSLBackends $host }}
|
||||
upstream {{ backendName $host }} {
|
||||
{{ if $hostBackend.IPHash }}ip_hash; {{else}}zone {{ backendName $host }}_backend 64k;{{ end }}
|
||||
{{ if ne $hostBackend.StickySessionCookie "" }}hash $cookie_{{ $hostBackend.StickySessionCookie }} consistent; {{ end }}
|
||||
{{ range $backend := $backends }}
|
||||
{{ range $up := $backend.Targets }}server {{ $up }};
|
||||
{{ end }}
|
||||
{{ end }} {{/* end range backends */}}
|
||||
|
||||
}
|
||||
{{ if not $sslBackend.Passthrough }}
|
||||
server {
|
||||
listen {{ $.Port }}{{ if $hostBackend.DefaultBackend }} default_server{{ end }};
|
||||
{{ if $hostBackend.DefaultBackend }}server_name _;{{ else }}server_name {{$host}};{{ end }}
|
||||
|
||||
{{ if (isRedirectHost $host $hostBackend.Redirects) }}
|
||||
{{ range $redirect := $hostBackend.Redirects }}
|
||||
{{ if isRedirectMatch $redirect.Source $host }}return 302 {{ $redirect.Target }}$request_uri;{{ end }}
|
||||
{{ end }}
|
||||
{{ else }}
|
||||
|
||||
{{ if eq ( len $hostBackend.ContextRoots ) 0 }}
|
||||
{{ if not (isWebsocketRoot $hostBackend.WebsocketEndpoints) }}
|
||||
location / {
|
||||
proxy_pass {{ if $hostBackend.SSLBackend }}https://{{ else }}http://{{ backendName $host }};{{ end }}
|
||||
}
|
||||
{{ end }}
|
||||
|
||||
{{ range $ws := $hostBackend.WebsocketEndpoints }}
|
||||
location {{ $ws }} {
|
||||
proxy_pass {{ if $hostBackend.SSLBackend }}https://{{ else }}http://{{ backendName $host }};{{ end }}
|
||||
proxy_http_version 1.1;
|
||||
proxy_set_header Upgrade $http_upgrade;
|
||||
proxy_set_header Connection $connection_upgrade;
|
||||
proxy_set_header Origin '';
|
||||
}
|
||||
{{ end }} {{/* end range WebsocketEndpoints */}}
|
||||
{{ else }}
|
||||
|
||||
{{ range $ctxroot := $hostBackend.ContextRoots }}
|
||||
location {{ $ctxroot.Path }} {
|
||||
{{ if $ctxroot.Rewrite }}rewrite ^([^.]*[^/])$ $1/ permanent;
|
||||
rewrite ^{{ $ctxroot.Path }}/(.*) /$1 break;{{ end }}
|
||||
proxy_pass http://{{ backendName $host }};
|
||||
}
|
||||
{{ end }} {{/* end range contextroots */}}
|
||||
|
||||
{{ end }} {{/* end len $hostBackend.ContextRoots */}}
|
||||
location /nginx_status {
|
||||
stub_status on;
|
||||
access_log off;
|
||||
}
|
||||
{{ end }}{{/* end isRedirectHost */}}
|
||||
|
||||
}
|
||||
{{ end }} {{/* end if not sslBackend.Passthrough */}}
|
||||
|
||||
{{/* SSL */}}
|
||||
{{ if ne $hostBackend.SSLCert "" }}
|
||||
{{ $sslBackend := index $.SSLBackends $host }}
|
||||
server {
|
||||
listen 127.0.0.1:{{ $sslBackend.Port }} ssl proxy_protocol;
|
||||
server_name {{ $host }};
|
||||
ssl on;
|
||||
ssl_certificate /run/secrets/{{ $hostBackend.SSLCertTarget }};
|
||||
{{ if ne $hostBackend.SSLKey "" }}ssl_certificate_key /run/secrets/{{ $hostBackend.SSLKeyTarget }};{{ end }}
|
||||
set_real_ip_from 127.0.0.1/32;
|
||||
real_ip_header proxy_protocol;
|
||||
|
||||
{{ if eq ( len $hostBackend.ContextRoots ) 0 }}
|
||||
{{ if not (isWebsocketRoot $hostBackend.WebsocketEndpoints) }}
|
||||
location / {
|
||||
proxy_pass {{ if $hostBackend.SSLBackend }}https://{{ else }}http://{{ backendName $host }};{{ end }}
|
||||
}
|
||||
{{ end }}
|
||||
|
||||
{{ range $ws := $hostBackend.WebsocketEndpoints }}
|
||||
location {{ $ws }} {
|
||||
proxy_pass {{ if $hostBackend.SSLBackend }}https://{{ else }}http://{{ backendName $host }};{{ end }}
|
||||
proxy_http_version 1.1;
|
||||
proxy_set_header Upgrade $http_upgrade;
|
||||
proxy_set_header Connection $connection_upgrade;
|
||||
proxy_set_header Origin {{$host}};
|
||||
|
||||
}
|
||||
{{ end }} {{/* end range WebsocketEndpoints */}}
|
||||
{{ else }}
|
||||
|
||||
{{ range $ctxroot := $hostBackend.ContextRoots }}
|
||||
location {{ $ctxroot.Path }} {
|
||||
{{ if $ctxroot.Rewrite }}rewrite ^([^.]*[^/])$ $1/ permanent;
|
||||
rewrite ^{{ $ctxroot.Path }}/(.*) /$1 break;{{ end }}
|
||||
proxy_pass http://{{ backendName $host }};
|
||||
}
|
||||
{{ end }} {{/* end range contextroots */}}
|
||||
|
||||
{{ end }} {{/* end len $hostBackend.ContextRoots */}}
|
||||
location /nginx_status {
|
||||
stub_status on;
|
||||
access_log off;
|
||||
}
|
||||
|
||||
} {{ end }} {{/* end $hostBackend.SSLCert */}}
|
||||
{{ end }} {{/* end with hostBackend */}}
|
||||
|
||||
{{ end }} {{/* end range .Hosts */}}
|
||||
|
||||
include /etc/nginx/conf.d/*.conf;
|
||||
}
|
||||
stream {
|
||||
# main log compatible format
|
||||
log_format stream '$remote_addr - - [$time_local] "$ssl_preread_server_name -> $name ($protocol)" '
|
||||
'$status $bytes_sent "" "" "" ';
|
||||
map $ssl_preread_server_name $name {
|
||||
{{ range $host, $sslBackend := $.SSLBackends }}
|
||||
{{ $sslBackend.Host }} {{ if $sslBackend.Passthrough }}pt-{{ backendName $host }};{{ else }}127.0.0.1:{{ $sslBackend.Port }}; {{ end }}
|
||||
{{ if $sslBackend.DefaultBackend }}default {{ if $sslBackend.Passthrough }}pt-{{ backendName $host }};{{ else }}127.0.0.1:{{ $sslBackend.Port }}; {{ end }}{{ end }}
|
||||
{{ end }}
|
||||
|
||||
}
|
||||
{{ range $host, $sslBackend := $.SSLBackends }}
|
||||
upstream pt-{{ backendName $sslBackend.Host }} {
|
||||
{{ $h := index $.Hosts $sslBackend.Host }}{{ $hostBackend := index $h 0 }}
|
||||
{{ if $sslBackend.Passthrough }}
|
||||
server 127.0.0.1:{{ $sslBackend.ProxyProtocolPort }};
|
||||
{{ else }}
|
||||
{{ range $up := $hostBackend.Targets }}server {{ $up }};
|
||||
{{ end }} {{/* end range backend targets */}}
|
||||
{{ end }} {{/* end range sslbackend */}}
|
||||
|
||||
}{{ end }} {{/* end range SSLBackends */}}
|
||||
|
||||
{{ range $host, $sslBackend := $.SSLBackends }}
|
||||
{{ $proxyProtocolPort := $sslBackend.ProxyProtocolPort }}
|
||||
{{ $h := index $.Hosts $sslBackend.Host }}{{ $hostBackend := index $h 0 }}
|
||||
{{ if ne $proxyProtocolPort 0 }}
|
||||
upstream proxy-{{ backendName $sslBackend.Host }} {
|
||||
{{ range $up := $hostBackend.Targets }}server {{ $up }};
|
||||
{{ end }} {{/* end range backend targets */}}
|
||||
|
||||
}
|
||||
server {
|
||||
listen {{ $proxyProtocolPort }} proxy_protocol;
|
||||
proxy_pass proxy-{{ backendName $sslBackend.Host }};
|
||||
|
||||
}
|
||||
{{ end }} {{/* end if ne proxyProtocolPort 0 */}}
|
||||
{{ end }} {{/* end range SSLBackends */}}
|
||||
|
||||
server {
|
||||
listen {{ $.SSLPort }};
|
||||
proxy_pass $name;
|
||||
proxy_protocol on;
|
||||
ssl_preread on;
|
||||
access_log {{ .ExtensionConfig.AccessLogPath }} stream;
|
||||
}
|
||||
}
|
||||
```
|
||||
{% endraw %}
|
||||
|
||||
## Create a Swarm configuration object
|
||||
To create a Swarm config object:
|
||||
|
||||
```
|
||||
$> docker config create interlock-custom-template custom-template.conf
|
||||
```
|
||||
|
||||
## Update the extension
|
||||
Now update the extension to use this new template:
|
||||
|
||||
```
|
||||
$> docker service update --config-add source=interlock-custom-template,target=/etc/docker/extension-template.conf interlock-ext
|
||||
```
|
||||
|
||||
This should trigger an update and a new proxy configuration will be generated.
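To confirm that the custom template is attached to the extension service, you can inspect the service's configs. This assumes the extension service is named `interlock-ext`, as in the example above:

```bash
$> docker service inspect \
  --format '{{ json .Spec.TaskTemplate.ContainerSpec.Configs }}' \
  interlock-ext
```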
|
||||
|
||||
## Remove the custom template
|
||||
To remove the custom template and revert to using the built-in template:
|
||||
|
||||
```
|
||||
$> docker service update --config-rm interlock-custom-template interlock-ext
|
||||
```
|
|
@ -0,0 +1,28 @@
|
|||
---
|
||||
title: Configure HAProxy
|
||||
description: Learn how to configure an HAProxy extension
|
||||
keywords: routing, proxy, interlock, load balancing
|
||||
---
|
||||
|
||||
The following HAProxy configuration options are available:
|
||||
|
||||
| Option | Type | Description |
|
||||
| --- | --- | --- |
|
||||
| `PidPath` | string | path to the pid file for the proxy service |
|
||||
| `MaxConnections` | int | maximum number of connections for proxy service |
|
||||
| `ConnectTimeout` | int | timeout in seconds for clients to connect |
|
||||
| `ClientTimeout` | int | timeout in seconds for the service to send a request to the proxied upstream |
|
||||
| `ServerTimeout` | int | timeout in seconds for the service to read a response from the proxied upstream |
|
||||
| `AdminUser` | string | username to be used with authenticated access to the proxy service |
|
||||
| `AdminPass` | string | password to be used with authenticated access to the proxy service |
|
||||
| `SSLOpts` | string | options to be passed when configuring SSL |
|
||||
| `SSLDefaultDHParam` | int | size of DH parameters |
|
||||
| `SSLVerify` | string | SSL client verification |
|
||||
| `SSLCiphers` | string | SSL ciphers to use for the proxy service |
|
||||
| `SSLProtocols` | string | enable the specified TLS protocols |
|
||||
| `GlobalOptions` | []string | list of options that are included in the global configuration |
|
||||
| `DefaultOptions` | []string | list of options that are included in the default configuration |
|
||||
|
||||
## Notes
|
||||
|
||||
When using SSL termination, the certificate and key must be combined into a single PEM file (for example, `cat cert.pem key.pem > combined.pem`). The HAProxy extension only uses the certificate label to configure SSL.
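As a sketch, assuming an example certificate and key pair, the combined file can be created and stored as a Docker secret so that it can be referenced by the `com.docker.lb.ssl_cert` service label; the file and secret names are illustrative:

```bash
# Combine the certificate and key into a single PEM file.
$> cat cert.pem key.pem > combined.pem

# Store the combined file as a Docker secret for use with the certificate label.
$> docker secret create example.com.pem combined.pem
```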
|
|
@ -0,0 +1,235 @@
|
|||
---
|
||||
title: Configure host mode networking
|
||||
description: Learn how to configure the UCP layer 7 routing solution with
|
||||
host mode networking.
|
||||
keywords: routing, proxy, interlock, load balancing
|
||||
redirect_from:
|
||||
- /ee/ucp/interlock/usage/host-mode-networking/
|
||||
- /ee/ucp/interlock/deploy/host-mode-networking/
|
||||
---
|
||||
|
||||
By default, layer 7 routing components communicate with one another using
|
||||
overlay networks, but Interlock supports
|
||||
host mode networking in a variety of ways, including proxy only, Interlock only, application only, and hybrid.
|
||||
|
||||
When using host mode networking, you cannot use DNS service discovery,
|
||||
since that functionality requires overlay networking.
|
||||
For services to communicate, each service needs to know the IP address of
|
||||
the node where the other service is running.
|
||||
|
||||
To use host mode networking instead of overlay networking:
|
||||
|
||||
1. Perform the configuration needed for a production-grade deployment
|
||||
2. Update the ucp-interlock configuration
|
||||
3. Deploy your Swarm services
|
||||
|
||||
## Configuration for a production-grade deployment
|
||||
|
||||
If you have not done so, configure the
|
||||
[layer 7 routing solution for production](../deploy/production.md).
|
||||
|
||||
The `ucp-interlock-proxy` service replicas should then be
|
||||
running on their own dedicated nodes.
|
||||
|
||||
## Update the ucp-interlock config
|
||||
|
||||
[Update the ucp-interlock service configuration](./index.md) so that it uses
|
||||
host mode networking.
|
||||
|
||||
Update the `PublishMode` key to:
|
||||
|
||||
```toml
|
||||
PublishMode = "host"
|
||||
```
|
||||
|
||||
When updating the `ucp-interlock` service to use the new Docker configuration,
|
||||
make sure to update it so that it starts publishing its port on the host:
|
||||
|
||||
```bash
|
||||
docker service update \
|
||||
--config-rm $CURRENT_CONFIG_NAME \
|
||||
--config-add source=$NEW_CONFIG_NAME,target=/config.toml \
|
||||
--publish-add mode=host,target=8080 \
|
||||
ucp-interlock
|
||||
```
|
||||
|
||||
The `ucp-interlock` and `ucp-interlock-extension` services are now communicating
|
||||
using host mode networking.
|
||||
|
||||
## Deploy your swarm services
|
||||
|
||||
Now you can deploy your swarm services.
|
||||
Set up your CLI client with a [UCP client bundle](../../user-access/cli.md),
|
||||
and deploy the service. The following example deploys a demo
|
||||
service that also uses host mode networking:
|
||||
|
||||
```bash
|
||||
docker service create \
|
||||
--name demo \
|
||||
--detach=false \
|
||||
--label com.docker.lb.hosts=app.example.org \
|
||||
--label com.docker.lb.port=8080 \
|
||||
--publish mode=host,target=8080 \
|
||||
--env METADATA="demo" \
|
||||
ehazlett/docker-demo
|
||||
```
|
||||
|
||||
In this example, Docker allocates a high random port on the host where the service can be reached.
|
||||
|
||||
To test that everything is working, run the following command:
|
||||
|
||||
```bash
|
||||
curl --header "Host: app.example.org" \
|
||||
http://<proxy-address>:<routing-http-port>/ping
|
||||
```
|
||||
|
||||
Where:
|
||||
|
||||
* `<proxy-address>` is the domain name or IP address of a node where the proxy
|
||||
service is running.
|
||||
* `<routing-http-port>` is the [port you're using to route HTTP traffic](index.md).
|
||||
|
||||
If everything is working correctly, you should get a JSON result like:
|
||||
|
||||
{% raw %}
|
||||
```json
|
||||
{"instance":"63b855978452", "version":"0.1", "request_id":"d641430be9496937f2669ce6963b67d6"}
|
||||
```
|
||||
{% endraw %}
|
||||
|
||||
The following example describes how to configure an eight (8) node Swarm cluster that uses host mode
|
||||
networking to route traffic without using overlay networks. There are three (3) managers
|
||||
and five (5) workers. Two of the workers are configured with node labels to be dedicated
|
||||
ingress cluster load balancer nodes. These will receive all application traffic.
|
||||
|
||||
This example does not cover the actual deployment of infrastructure.
|
||||
It assumes you have a vanilla Swarm cluster (`docker swarm init` and `docker swarm join` from the nodes).
|
||||
See the [Swarm](https://docs.docker.com/engine/swarm/) documentation if you need help
|
||||
getting a Swarm cluster deployed.
|
||||
|
||||
Note: When using host mode networking, you cannot use DNS service discovery because that
|
||||
requires overlay networking. You can use other tooling such as [Registrator](https://github.com/gliderlabs/registrator)
|
||||
that will give you that functionality if needed.
|
||||
|
||||
Configure the load balancer worker nodes (`lb-00` and `lb-01`) with node labels in order to pin the Interlock Proxy
|
||||
service. Once you are logged in to one of the Swarm managers, run the following to add node labels
|
||||
to the dedicated load balancer worker nodes:
|
||||
|
||||
```bash
|
||||
$> docker node update --label-add nodetype=loadbalancer lb-00
|
||||
lb-00
|
||||
$> docker node update --label-add nodetype=loadbalancer lb-01
|
||||
lb-01
|
||||
```
|
||||
|
||||
Inspect each node to ensure the labels were successfully added:
|
||||
|
||||
{% raw %}
|
||||
```bash
|
||||
$> docker node inspect -f '{{ .Spec.Labels }}' lb-00
|
||||
map[nodetype:loadbalancer]
|
||||
$> docker node inspect -f '{{ .Spec.Labels }}' lb-01
|
||||
map[nodetype:loadbalancer]
|
||||
```
|
||||
{% endraw %}
|
||||
|
||||
Next, create a configuration object for Interlock that specifies host mode networking:
|
||||
|
||||
```bash
|
||||
$> cat << EOF | docker config create service.interlock.conf -
|
||||
ListenAddr = ":8080"
|
||||
DockerURL = "unix:///var/run/docker.sock"
|
||||
PollInterval = "3s"
|
||||
|
||||
[Extensions]
|
||||
[Extensions.default]
|
||||
Image = "interlockpreview/interlock-extension-nginx:2.0.0-preview"
|
||||
Args = []
|
||||
ServiceName = "interlock-ext"
|
||||
ProxyImage = "nginx:alpine"
|
||||
ProxyArgs = []
|
||||
ProxyServiceName = "interlock-proxy"
|
||||
ProxyConfigPath = "/etc/nginx/nginx.conf"
|
||||
ProxyReplicas = 1
|
||||
PublishMode = "host"
|
||||
PublishedPort = 80
|
||||
TargetPort = 80
|
||||
PublishedSSLPort = 443
|
||||
TargetSSLPort = 443
|
||||
[Extensions.default.Config]
|
||||
User = "nginx"
|
||||
PidPath = "/var/run/proxy.pid"
|
||||
WorkerProcesses = 1
|
||||
RlimitNoFile = 65535
|
||||
MaxConnections = 2048
|
||||
EOF
|
||||
oqkvv1asncf6p2axhx41vylgt
|
||||
```
|
||||
|
||||
Note the `PublishMode = "host"` setting. This instructs Interlock to configure the proxy service for host mode networking.
|
||||
|
||||
Now create the Interlock service also using host mode networking:
|
||||
|
||||
```bash
|
||||
$> docker service create \
|
||||
--name interlock \
|
||||
--mount src=/var/run/docker.sock,dst=/var/run/docker.sock,type=bind \
|
||||
--constraint node.role==manager \
|
||||
--publish mode=host,target=8080 \
|
||||
--config src=service.interlock.conf,target=/config.toml \
|
||||
interlockpreview/interlock:2.0.0-preview -D run -c /config.toml
|
||||
sjpgq7h621exno6svdnsvpv9z
|
||||
```
|
||||
|
||||
## Configure proxy services
|
||||
With the node labels, you can re-configure the Interlock Proxy services to be constrained to the
|
||||
workers. From a manager run the following to pin the proxy services to the load balancer worker nodes:
|
||||
|
||||
```bash
|
||||
$> docker service update \
|
||||
--constraint-add node.labels.nodetype==loadbalancer \
|
||||
interlock-proxy
|
||||
```
|
||||
|
||||
Now you can deploy the application:
|
||||
|
||||
```bash
|
||||
$> docker service create \
|
||||
--name demo \
|
||||
--detach=false \
|
||||
--label com.docker.lb.hosts=demo.local \
|
||||
--label com.docker.lb.port=8080 \
|
||||
--publish mode=host,target=8080 \
|
||||
--env METADATA="demo" \
|
||||
ehazlett/docker-demo
|
||||
```
|
||||
|
||||
This runs the service using host mode networking. Each task for the service has a high port (for example, 32768) and uses
|
||||
the node IP address to connect. You can see this when inspecting the headers from the request:
|
||||
|
||||
```bash
|
||||
$> curl -vs -H "Host: demo.local" http://127.0.0.1/ping
|
||||
|
||||
* Trying 127.0.0.1...
|
||||
* TCP_NODELAY set
|
||||
* Connected to 127.0.0.1 (127.0.0.1) port 80 (#0)
|
||||
> GET /ping HTTP/1.1
|
||||
> Host: demo.local
|
||||
> User-Agent: curl/7.54.0
|
||||
> Accept: */*
|
||||
>
|
||||
< HTTP/1.1 200 OK
|
||||
< Server: nginx/1.13.6
|
||||
< Date: Fri, 10 Nov 2017 15:38:40 GMT
|
||||
< Content-Type: text/plain; charset=utf-8
|
||||
< Content-Length: 110
|
||||
< Connection: keep-alive
|
||||
< Set-Cookie: session=1510328320174129112; Path=/; Expires=Sat, 11 Nov 2017 15:38:40 GMT; Max-Age=86400
|
||||
< x-request-id: e4180a8fc6ee15f8d46f11df67c24a7d
|
||||
< x-proxy-id: d07b29c99f18
|
||||
< x-server-info: interlock/2.0.0-preview (17476782) linux/amd64
|
||||
< x-upstream-addr: 172.20.0.4:32768
|
||||
< x-upstream-response-time: 1510328320.172
|
||||
<
|
||||
{"instance":"897d3c7b9e9c","version":"0.1","metadata":"demo","request_id":"e4180a8fc6ee15f8d46f11df67c24a7d"}
|
||||
```
|
|
@ -0,0 +1,206 @@
|
|||
---
|
||||
title: Configure layer 7 routing service
|
||||
description: Learn how to configure the layer 7 routing solution for UCP.
|
||||
keywords: routing, proxy, interlock, load balancing
|
||||
redirect_from:
|
||||
- /ee/ucp/interlock/deploy/configure/
|
||||
- /ee/ucp/interlock/usage/default-service/
|
||||
---
|
||||
|
||||
To further customize the layer 7 routing solution, you must update the
|
||||
`ucp-interlock` service with a new Docker configuration.
|
||||
|
||||
1. Find out what configuration is currently being used for the `ucp-interlock`
|
||||
service and save it to a file:
|
||||
|
||||
{% raw %}
|
||||
```bash
|
||||
CURRENT_CONFIG_NAME=$(docker service inspect --format '{{ (index .Spec.TaskTemplate.ContainerSpec.Configs 0).ConfigName }}' ucp-interlock)
|
||||
docker config inspect --format '{{ printf "%s" .Spec.Data }}' $CURRENT_CONFIG_NAME > config.toml
|
||||
```
|
||||
{% endraw %}
|
||||
|
||||
2. Make the necessary changes to the `config.toml` file. See [TOML file configuration options](#toml-file-configuration-options) for more information.
|
||||
|
||||
3. Create a new Docker configuration object from the `config.toml` file:
|
||||
|
||||
```bash
|
||||
NEW_CONFIG_NAME="com.docker.ucp.interlock.conf-$(( $(cut -d '-' -f 2 <<< "$CURRENT_CONFIG_NAME") + 1 ))"
|
||||
docker config create $NEW_CONFIG_NAME config.toml
|
||||
```
|
||||
|
||||
4. Update the `ucp-interlock` service to start using the new configuration:
|
||||
|
||||
```bash
|
||||
docker service update \
|
||||
--config-rm $CURRENT_CONFIG_NAME \
|
||||
--config-add source=$NEW_CONFIG_NAME,target=/config.toml \
|
||||
ucp-interlock
|
||||
```
|
||||
|
||||
By default, the `ucp-interlock` service is configured to pause if you provide an
|
||||
invalid configuration. The service will not restart without manual intervention.
|
||||
|
||||
If you want the service to automatically rollback to a previous stable
|
||||
configuration, you can update it with the following command:
|
||||
|
||||
```bash
|
||||
docker service update \
|
||||
--update-failure-action rollback \
|
||||
ucp-interlock
|
||||
```
|
||||
|
||||
**Note**: Every time you enable the layer 7 routing
|
||||
solution from the UCP UI, the `ucp-interlock` service is started using the
|
||||
default configuration.
|
||||
|
||||
If you've customized the configuration used by the `ucp-interlock` service,
|
||||
you must update it again to use the Docker configuration object
|
||||
you've created.
|
||||
|
||||
## TOML file configuration options
|
||||
The following sections describe how to configure the primary Interlock services:
|
||||
|
||||
- Core
|
||||
- Extension
|
||||
- Proxy
|
||||
|
||||
### Core configuration
|
||||
|
||||
The core configuration handles the Interlock service itself. These are the configuration options for the `ucp-interlock` service:
|
||||
|
||||
| Option | Type | Description |
|
||||
|:-------------------|:------------|:-----------------------------------------------------------------------------------------------|
|
||||
| `ListenAddr` | string | Address to serve the Interlock GRPC API. Defaults to `8080`. |
|
||||
| `DockerURL` | string | Path to the socket or TCP address to the Docker API. Defaults to `unix:///var/run/docker.sock` |
|
||||
| `TLSCACert` | string | Path to the CA certificate for connecting securely to the Docker API. |
|
||||
| `TLSCert` | string | Path to the certificate for connecting securely to the Docker API. |
|
||||
| `TLSKey` | string | Path to the key for connecting securely to the Docker API. |
|
||||
| `AllowInsecure` | bool | Skip TLS verification when connecting to the Docker API via TLS. |
|
||||
| `PollInterval` | string | Interval to poll the Docker API for changes. Defaults to `3s`. |
|
||||
| `EndpointOverride` | string | Override the default GRPC API endpoint for extensions. The default is detected via Swarm. |
|
||||
| `Extensions` | []Extension | Array of extensions as listed below. |
|
||||
|
||||
### Extension configuration
|
||||
|
||||
Interlock must contain at least one extension to service traffic. The following options are available to configure the extensions:
|
||||
|
||||
| Option | Type | Description |
|
||||
|:-------------------|:------------|:-----------------------------------------------------------|
|
||||
| `Image` | string | Name of the Docker Image to use for the extension service |
|
||||
| `Args` | []string | Arguments to be passed to the Docker extension service upon creation |
|
||||
| `Labels` | map[string]string | Labels to add to the extension service |
|
||||
| `ContainerLabels` | map[string]string | labels to be added to the extension service tasks |
|
||||
| `Constraints` | []string | one or more [constraints](https://docs.docker.com/engine/reference/commandline/service_create/#specify-service-constraints-constraint) to use when scheduling the extension service |
|
||||
| `PlacementPreferences` | []string | one or more [placement prefs](https://docs.docker.com/engine/reference/commandline/service_create/#specify-service-placement-preferences-placement-pref) to use when scheduling the extension service |
|
||||
| `ServiceName` | string | Name of the extension service |
|
||||
| `ProxyImage` | string | Name of the Docker Image to use for the proxy service |
|
||||
| `ProxyArgs` | []string | Arguments to be passed to the Docker proxy service upon creation |
|
||||
| `ProxyLabels` | map[string]string | Labels to add to the proxy service |
|
||||
| `ProxyContainerLabels` | map[string]string | labels to be added to the proxy service tasks |
|
||||
| `ProxyServiceName` | string | Name of the proxy service |
|
||||
| `ProxyConfigPath` | string | Path in the service for the generated proxy config |
|
||||
| `ProxyReplicas` | uint | number of proxy service replicas |
|
||||
| `ProxyStopSignal` | string | Stop signal for the proxy service (for example, `SIGQUIT`) |
|
||||
| `ProxyStopGracePeriod` | string | Stop grace period for the proxy service (for example, `5s`) |
|
||||
| `ProxyConstraints` | []string | one or more [constraints](https://docs.docker.com/engine/reference/commandline/service_create/#specify-service-constraints-constraint) to use when scheduling the proxy service |
|
||||
| `ProxyPlacementPreferences` | []string | one or more [placement prefs](https://docs.docker.com/engine/reference/commandline/service_create/#specify-service-placement-preferences-placement-pref) to use when scheduling the proxy service |
|
||||
| `ProxyUpdateDelay` | string | delay between rolling proxy container updates |
|
||||
| `ServiceCluster` | string | Name of the service cluster that this extension serves |
|
||||
| `PublishMode` | string (`ingress` or `host`) | Publish mode that the proxy service uses |
|
||||
| `PublishedPort` | int | Port on which the proxy service serves non-SSL traffic |
|
||||
| `PublishedSSLPort` | int | Port on which the proxy service serves SSL traffic |
|
||||
| `Template` | string | Docker configuration object that is used as the extension template |
|
||||
| `Config` | Config | Proxy configuration used by the extensions, as described in the following section |
|
||||
|
||||
### Proxy
|
||||
Options are made available to the extensions, and each extension uses the options it needs for proxy service configuration. This provides overrides to the extension configuration.
|
||||
|
||||
Because Interlock passes the extension configuration directly to the extension, each extension has
|
||||
different configuration options available. Refer to the documentation for each extension for supported options:
|
||||
|
||||
- [Nginx](nginx-config.md)
|
||||
- [HAproxy](haproxy-config.md)
|
||||
|
||||
#### Customize the default proxy service
|
||||
The default proxy service used by UCP to provide layer 7 routing is NGINX. If users try to access a route that hasn't been configured, they will see the default NGINX 404 page:
|
||||
|
||||
{: .with-border}
|
||||
|
||||
You can customize this by labelling a service with
|
||||
`com.docker.lb.default_backend=true`. In this case, if users try to access a route that's
|
||||
not configured, they are redirected to this service.
|
||||
|
||||
As an example, create a `docker-compose.yml` file with:
|
||||
|
||||
```yaml
|
||||
version: "3.2"
|
||||
|
||||
services:
|
||||
demo:
|
||||
image: ehazlett/interlock-default-app
|
||||
deploy:
|
||||
replicas: 1
|
||||
labels:
|
||||
com.docker.lb.default_backend: "true"
|
||||
com.docker.lb.port: 80
|
||||
networks:
|
||||
- demo-network
|
||||
|
||||
networks:
|
||||
demo-network:
|
||||
driver: overlay
|
||||
```
|
||||
|
||||
Set up your CLI client with a [UCP client bundle](../../user-access/cli.md),
|
||||
and deploy the service:
|
||||
|
||||
```bash
|
||||
docker stack deploy --compose-file docker-compose.yml demo
|
||||
```
|
||||
|
||||
If users try to access a route that's not configured, they are directed
|
||||
to this demo service.
|
||||
|
||||
{: .with-border}
|
||||
|
||||
### Example Configuration
|
||||
The following is an example configuration to use with the Nginx extension.
|
||||
|
||||
```toml
|
||||
ListenAddr = ":8080"
|
||||
DockerURL = "unix:///var/run/docker.sock"
|
||||
PollInterval = "3s"
|
||||
|
||||
[Extensions.default]
|
||||
Image = "docker/interlock-extension-nginx:latest"
|
||||
Args = ["-D"]
|
||||
ServiceName = "interlock-ext"
|
||||
ProxyImage = "nginx:alpine"
|
||||
ProxyArgs = []
|
||||
ProxyServiceName = "interlock-proxy"
|
||||
ProxyConfigPath = "/etc/nginx/nginx.conf"
|
||||
ProxyStopGracePeriod = "3s"
|
||||
PublishMode = "ingress"
|
||||
PublishedPort = 80
|
||||
ProxyReplicas = 1
|
||||
TargetPort = 80
|
||||
PublishedSSLPort = 443
|
||||
TargetSSLPort = 443
|
||||
[Extensions.default.Config]
|
||||
User = "nginx"
|
||||
PidPath = "/var/run/proxy.pid"
|
||||
WorkerProcesses = 1
|
||||
RlimitNoFile = 65535
|
||||
MaxConnections = 2048
|
||||
```
|
||||
|
||||
## Next steps
|
||||
|
||||
- [Use a custom extension template](custom-template.md)
|
||||
- [Configure an HAProxy extension](haproxy-config.md)
|
||||
- [Configure host mode networking](host-mode-networking.md)
|
||||
- [Configure an nginx extension](nginx-config.md)
|
||||
- [Use application service labels](service-labels.md)
|
||||
- [Tune the proxy service](tuning.md)
|
||||
- [Update Interlock services](updates.md)
|
|
@ -0,0 +1,43 @@
|
|||
---
|
||||
title: Configure Nginx
|
||||
description: Learn how to configure an nginx extension
|
||||
keywords: routing, proxy, interlock, load balancing
|
||||
---
|
||||
|
||||
By default, nginx is used as a proxy, so the following configuration options are
|
||||
available for the nginx extension:
|
||||
|
||||
| Option | Type | Description | Defaults |
|
||||
|:------ |:------ |:------ |:------ |
|
||||
| `User` | string | User to be used in the proxy | `nginx` |
|
||||
| `PidPath` | string | Path to the pid file for the proxy service | `/var/run/proxy.pid` |
|
||||
| `MaxConnections` | int | Maximum number of connections for proxy service | `1024` |
|
||||
| `ConnectTimeout` | int | Timeout in seconds for clients to connect | `600` |
|
||||
| `SendTimeout` | int | Timeout in seconds for the service to send a request to the proxied upstream | `600` |
|
||||
| `ReadTimeout` | int | Timeout in seconds for the service to read a response from the proxied upstream | `600` |
|
||||
| `SSLOpts` | string | Options to be passed when configuring SSL | |
|
||||
| `SSLDefaultDHParam` | int | Size of DH parameters | `1024` |
|
||||
| `SSLDefaultDHParamPath` | string | Path to DH parameters file | |
|
||||
| `SSLVerify` | string | SSL client verification | `required` |
|
||||
| `WorkerProcesses` | string | Number of worker processes for the proxy service | `1` |
|
||||
| `RLimitNoFile` | int | Number of maximum open files for the proxy service | `65535` |
|
||||
| `SSLCiphers` | string | SSL ciphers to use for the proxy service | `HIGH:!aNULL:!MD5` |
|
||||
| `SSLProtocols` | string | Enable the specified TLS protocols | `TLSv1.2` |
|
||||
| `HideInfoHeaders` | bool | Hide proxy-related response headers | |
|
||||
| `KeepaliveTimeout` | string | connection keepalive timeout | `75s` |
|
||||
| `ClientMaxBodySize` | string | maximum allowed size of the client request body | `1m` |
|
||||
| `ClientBodyBufferSize` | string | sets buffer size for reading client request body | `8k` |
|
||||
| `ClientHeaderBufferSize` | string | sets buffer size for reading client request header | `1k` |
|
||||
| `LargeClientHeaderBuffers` | string | sets the maximum number and size of buffers used for reading large client request header | `4 8k` |
|
||||
| `ClientBodyTimeout` | string | timeout for reading client request body | `60s` |
|
||||
| `UnderscoresInHeaders` | bool | enables or disables the use of underscores in client request header fields| `false` |
|
||||
| `ServerNamesHashBucketSize` | int | sets the bucket size for the server names hash tables (in KB) | `128` |
|
||||
| `UpstreamZoneSize` | int | size of the shared memory zone (in KB) | `64` |
|
||||
| `GlobalOptions` | []string | list of options that are included in the global configuration | |
|
||||
| `HTTPOptions` | []string | list of options that are included in the http configuration | |
|
||||
| `TCPOptions` | []string | list of options that are included in the stream (TCP) configuration | |
|
||||
| `AccessLogPath` | string | Path to use for access logs | `/dev/stdout` |
|
||||
| `ErrorLogPath` | string | Path to use for error logs | `/dev/stdout` |
|
||||
| `MainLogFormat` | string | [Format](http://nginx.org/en/docs/http/ngx_http_log_module.html#log_format) to use for main logger | see default format |
|
||||
| `TraceLogFormat` | string | [Format](http://nginx.org/en/docs/http/ngx_http_log_module.html#log_format) to use for trace logger | see default format |
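As a sketch of how these options fit into the Interlock configuration, the following creates a configuration object that overrides a few nginx settings under `[Extensions.default.Config]`. It follows the heredoc pattern used in the host mode networking example; the image version, service names, and the chosen values are illustrative assumptions, not requirements:

```bash
$> cat << EOF | docker config create service.interlock.conf -
ListenAddr = ":8080"
DockerURL = "unix:///var/run/docker.sock"
PollInterval = "3s"

[Extensions]
  [Extensions.default]
    Image = "interlockpreview/interlock-extension-nginx:2.0.0-preview"
    ServiceName = "interlock-ext"
    ProxyImage = "nginx:alpine"
    ProxyServiceName = "interlock-proxy"
    ProxyConfigPath = "/etc/nginx/nginx.conf"
    [Extensions.default.Config]
      User = "nginx"
      PidPath = "/var/run/proxy.pid"
      WorkerProcesses = 2
      MaxConnections = 4096
      KeepaliveTimeout = "75s"
      ClientMaxBodySize = "10m"
      HideInfoHeaders = true
EOF
```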
|
||||
|
|
@ -0,0 +1,43 @@
|
|||
---
|
||||
title: Use application service labels
|
||||
description: Learn how applications use service labels for publishing
|
||||
keywords: routing, proxy, interlock, load balancing
|
||||
---
|
||||
|
||||
Service labels define hostnames that are routed to the
|
||||
service, the applicable ports, and other routing configurations. Applications that publish using Interlock use service labels to configure how they are published.
|
||||
|
||||
When you deploy or update a swarm service with service labels, the following actions occur:
|
||||
|
||||
1. The `ucp-interlock` service monitors the Docker API for events and
|
||||
publishes the events to the `ucp-interlock-extension` service.
|
||||
2. That service then generates a new configuration for the proxy service,
|
||||
based on the labels you added to your services.
|
||||
3. The `ucp-interlock` service takes the new configuration and reconfigures the
|
||||
`ucp-interlock-proxy` to start using the new configuration.
|
||||
|
||||
The previous steps occur in milliseconds and with rolling updates. Even though
|
||||
services are being reconfigured, users won't notice it.
|
||||
|
||||
## Service label options
|
||||
|
||||
The following table describes the available options:
|
||||
|
||||
| Label | Description | Example |
|
||||
| --- | --- | --- |
|
||||
| `com.docker.lb.hosts` | Comma separated list of the hosts that the service should serve | `example.com,test.com` |
|
||||
| `com.docker.lb.port` | Port to use for internal upstream communication | `8080` |
|
||||
| `com.docker.lb.network` | Name of network the proxy service should attach to for upstream connectivity | `app-network-a` |
|
||||
| `com.docker.lb.context_root` | Context or path to use for the application | `/app` |
|
||||
| `com.docker.lb.context_root_rewrite` | Boolean to enable rewrite for the context root | `true` |
|
||||
| `com.docker.lb.ssl_only` | Boolean to force SSL for application | `true` |
|
||||
| `com.docker.lb.ssl_cert` | Docker secret to use for the SSL certificate | `example.com.cert` |
|
||||
| `com.docker.lb.ssl_key` | Docker secret to use for the SSL key | `example.com.key` |
|
||||
| `com.docker.lb.websocket_endpoints` | Comma separated list of endpoints to configure to be upgraded for websockets | `/ws,/foo` |
|
||||
| `com.docker.lb.service_cluster` | Name of the service cluster to use for the application | `us-east` |
|
||||
| `com.docker.lb.ssl_backend` | Enable SSL communication to the upstreams | `true` |
|
||||
| `com.docker.lb.ssl_backend_tls_verify` | Verification mode for the upstream TLS | `none` |
|
||||
| `com.docker.lb.sticky_session_cookie` | Cookie to use for sticky sessions | `none` |
|
||||
| `com.docker.lb.redirects` | Semi-colon separated list of redirects to add in the format of `<source>,<target>`. Example: (`http://old.example.com,http://new.example.com;`) | `none` |
|
||||
| `com.docker.lb.ssl_passthrough` | Enable SSL passthrough | `false` |
|
||||
| `com.docker.lb.backend_mode` | Select the backend mode that the proxy should use to access the upstreams. Defaults to `task`. | `vip` |
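The following sketch shows several of these labels applied to a service at deploy time. The network, hostname, cookie name, and image are illustrative; the image is the demo application used elsewhere in this guide:

```bash
# Create an overlay network and publish a service through Interlock using service labels.
docker network create -d overlay demo-network

docker service create \
  --name demo \
  --network demo-network \
  --label com.docker.lb.hosts=example.com \
  --label com.docker.lb.network=demo-network \
  --label com.docker.lb.port=8080 \
  --label com.docker.lb.sticky_session_cookie=session \
  ehazlett/docker-demo
```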
|
|
@ -0,0 +1,34 @@
|
|||
---
|
||||
title: Tune the proxy service
|
||||
description: Learn how to tune the proxy service for environment optimization
|
||||
keywords: routing, proxy, interlock
|
||||
---
|
||||
|
||||
## Constrain the proxy service to multiple dedicated worker nodes
|
||||
Refer to [Proxy service constraints](../deploy/production.md) for information on how to constrain the proxy service to multiple dedicated worker nodes.
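As a sketch, assuming the dedicated worker nodes carry a `nodetype=loadbalancer` label (as in the host mode networking example), the proxy service can be pinned to them with a constraint:

```bash
$> docker service update \
  --constraint-add node.labels.nodetype==loadbalancer \
  interlock-proxy
```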
|
||||
|
||||
## Stop
|
||||
To adjust the stop signal and period, use the `stop-signal` and `stop-grace-period` settings. For example,
|
||||
to set the stop signal to `SIGTERM` and grace period to ten (10) seconds, use the following command:
|
||||
|
||||
```bash
|
||||
$> docker service update --stop-signal=SIGTERM --stop-grace-period=10s interlock-proxy
|
||||
```
|
||||
|
||||
## Update actions
|
||||
In the event of an update failure, the default Swarm action is to "pause". This prevents Interlock updates from happening
|
||||
without operator intervention. You can change this behavior using the `update-failure-action` setting. For example,
|
||||
to automatically rollback to the previous configuration upon failure, use the following command:
|
||||
|
||||
```bash
|
||||
$> docker service update --update-failure-action=rollback interlock-proxy
|
||||
```
|
||||
|
||||
## Update interval
|
||||
By default, Interlock configures the proxy service using rolling updates. To allow more time between proxy
|
||||
updates, such as to let a service settle, use the `update-delay` setting. For example, if you want to have
|
||||
thirty (30) seconds between updates, use the following command:
|
||||
|
||||
```bash
|
||||
$> docker service update --update-delay=30s interlock-proxy
|
||||
```
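
To confirm which update settings are now in effect, you can inspect the proxy service's update configuration (output varies with your settings):

```bash
$> docker service inspect --format '{{ json .Spec.UpdateConfig }}' interlock-proxy
```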
|
|
@ -0,0 +1,35 @@
|
|||
---
|
||||
title: Update Interlock services
|
||||
description: Learn how to update the UCP layer 7 routing solution services
|
||||
keywords: routing, proxy, interlock
|
||||
---
|
||||
|
||||
There are two parts to the update process:
|
||||
|
||||
1. Update the Interlock configuration to specify the new extension and/or proxy image versions.
|
||||
2. Update the Interlock service to use the new configuration and image.
|
||||
|
||||
## Update the Interlock configuration
|
||||
Create the new configuration:
|
||||
|
||||
```bash
|
||||
$> docker config create service.interlock.conf.v2 <path-to-new-config>
|
||||
```
|
||||
|
||||
## Update the Interlock service
|
||||
Remove the old configuration and specify the new configuration:
|
||||
|
||||
```bash
|
||||
$> docker service update --config-rm service.interlock.conf interlock
|
||||
$> docker service update --config-add source=service.interlock.conf.v2,target=/config.toml interlock
|
||||
```
|
||||
|
||||
Next, update the Interlock service to use the new image. The following example updates the Interlock core service to use the `sha256:d173014908eb09e9a70d8e5ed845469a61f7cbf4032c28fad0ed9af3fc04ef51`
|
||||
version of Interlock. Interlock starts and checks the config object, which has the new extension version, and
|
||||
performs a rolling deploy to update all extensions.
|
||||
|
||||
```bash
|
||||
$> docker service update \
|
||||
--image interlockpreview/interlock@sha256:d173014908eb09e9a70d8e5ed845469a61f7cbf4032c28fad0ed9af3fc04ef51 \
|
||||
interlock
|
||||
```
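
To verify that the rolling update completed, you can check which image the Interlock service is now running and review its tasks:

```bash
$> docker service inspect --format '{{ .Spec.TaskTemplate.ContainerSpec.Image }}' interlock
$> docker service ps interlock
```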
|
|
@ -1,148 +0,0 @@
|
|||
---
|
||||
title: Layer 7 routing configuration reference
|
||||
description: Learn the configuration options for the UCP layer 7 routing solution
|
||||
keywords: routing, proxy
|
||||
---
|
||||
|
||||
Once you enable the layer 7 routing service, UCP creates the
|
||||
`com.docker.ucp.interlock.conf-1` configuration and uses it to configure all
|
||||
the internal components of this service.
|
||||
|
||||
The configuration is managed as a TOML file.
|
||||
|
||||
## Example configuration
|
||||
|
||||
Here's an example of the default configuration used by UCP:
|
||||
|
||||
```toml
|
||||
ListenAddr = ":8080"
|
||||
DockerURL = "unix:///var/run/docker.sock"
|
||||
AllowInsecure = false
|
||||
PollInterval = "3s"
|
||||
|
||||
[Extensions]
|
||||
[Extensions.default]
|
||||
Image = "{{ page.ucp_org }}/ucp-interlock-extension:{{ page.ucp_version }}"
|
||||
ServiceName = "ucp-interlock-extension"
|
||||
Args = []
|
||||
Constraints = ["node.labels.com.docker.ucp.orchestrator.swarm==true", "node.platform.os==linux"]
|
||||
ProxyImage = "{{ page.ucp_org }}/ucp-interlock-proxy:{{ page.ucp_version }}"
|
||||
ProxyServiceName = "ucp-interlock-proxy"
|
||||
ProxyConfigPath = "/etc/nginx/nginx.conf"
|
||||
ProxyReplicas = 2
|
||||
ProxyStopSignal = "SIGQUIT"
|
||||
ProxyStopGracePeriod = "5s"
|
||||
ProxyConstraints = ["node.labels.com.docker.ucp.orchestrator.swarm==true", "node.platform.os==linux"]
|
||||
PublishMode = "ingress"
|
||||
PublishedPort = 80
|
||||
TargetPort = 80
|
||||
PublishedSSLPort = 8443
|
||||
TargetSSLPort = 443
|
||||
[Extensions.default.Labels]
|
||||
"com.docker.ucp.InstanceID" = "fewho8k85kyc6iqypvvdh3ntm"
|
||||
[Extensions.default.ContainerLabels]
|
||||
"com.docker.ucp.InstanceID" = "fewho8k85kyc6iqypvvdh3ntm"
|
||||
[Extensions.default.ProxyLabels]
|
||||
"com.docker.ucp.InstanceID" = "fewho8k85kyc6iqypvvdh3ntm"
|
||||
[Extensions.default.ProxyContainerLabels]
|
||||
"com.docker.ucp.InstanceID" = "fewho8k85kyc6iqypvvdh3ntm"
|
||||
[Extensions.default.Config]
|
||||
Version = ""
|
||||
User = "nginx"
|
||||
PidPath = "/var/run/proxy.pid"
|
||||
MaxConnections = 1024
|
||||
ConnectTimeout = 600
|
||||
SendTimeout = 600
|
||||
ReadTimeout = 600
|
||||
IPHash = false
|
||||
AdminUser = ""
|
||||
AdminPass = ""
|
||||
SSLOpts = ""
|
||||
SSLDefaultDHParam = 1024
|
||||
SSLDefaultDHParamPath = ""
|
||||
SSLVerify = "required"
|
||||
WorkerProcesses = 1
|
||||
RLimitNoFile = 65535
|
||||
SSLCiphers = "HIGH:!aNULL:!MD5"
|
||||
SSLProtocols = "TLSv1.2"
|
||||
AccessLogPath = "/dev/stdout"
|
||||
ErrorLogPath = "/dev/stdout"
|
||||
MainLogFormat = "'$remote_addr - $remote_user [$time_local] \"$request\" '\n\t\t '$status $body_bytes_sent \"$http_referer\" '\n\t\t '\"$http_user_agent\" \"$http_x_forwarded_for\"';"
|
||||
TraceLogFormat = "'$remote_addr - $remote_user [$time_local] \"$request\" $status '\n\t\t '$body_bytes_sent \"$http_referer\" \"$http_user_agent\" '\n\t\t '\"$http_x_forwarded_for\" $request_id $msec $request_time '\n\t\t '$upstream_connect_time $upstream_header_time $upstream_response_time';"
|
||||
KeepaliveTimeout = "75s"
|
||||
ClientMaxBodySize = "32m"
|
||||
ClientBodyBufferSize = "8k"
|
||||
ClientHeaderBufferSize = "1k"
|
||||
LargeClientHeaderBuffers = "4 8k"
|
||||
ClientBodyTimeout = "60s"
|
||||
UnderscoresInHeaders = false
|
||||
HideInfoHeaders = false
|
||||
```
|
||||
|
||||
## Core configurations
|
||||
|
||||
These are the configurations used for the `ucp-interlock` service. The following
|
||||
options are available:
|
||||
|
||||
| Option | Type | Description |
|
||||
|:-------------------|:------------|:-----------------------------------------------------------------------------------------------|
|
||||
| `ListenAddr` | string | Address to serve the Interlock GRPC API. Defaults to `8080`. |
|
||||
| `DockerURL` | string | Path to the socket or TCP address to the Docker API. Defaults to `unix:///var/run/docker.sock` |
|
||||
| `TLSCACert` | string | Path to the CA certificate for connecting securely to the Docker API. |
|
||||
| `TLSCert` | string | Path to the certificate for connecting securely to the Docker API. |
|
||||
| `TLSKey` | string | Path to the key for connecting securely to the Docker API. |
|
||||
| `AllowInsecure` | bool | Skip TLS verification when connecting to the Docker API via TLS. |
|
||||
| `PollInterval` | string | Interval to poll the Docker API for changes. Defaults to `3s`. |
|
||||
| `EndpointOverride` | string | Override the default GRPC API endpoint for extensions. The default is detected via Swarm. |
|
||||
| `Extensions` | []Extension | Array of extensions as listed below. |
|
||||
|
||||
## Extension configuration
|
||||
|
||||
Interlock must contain at least one extension to service traffic.
|
||||
The following options are available to configure the extensions:
|
||||
|
||||
| Option | Type | Description |
|
||||
|:-------------------|:------------------|:------------------------------------------------------------------------------|
|
||||
| `Image` | string | Name of the Docker image to use for the extension service. |
|
||||
| `Args` | []string | Arguments to be passed to the Docker extension service upon creation. |
|
||||
| `Labels` | map[string]string | Labels to add to the extension service. |
|
||||
| `ServiceName` | string | Name of the extension service. |
|
||||
| `ProxyImage` | string | Name of the Docker image to use for the proxy service. |
|
||||
| `ProxyArgs` | []string | Arguments to be passed to the proxy service upon creation. |
|
||||
| `ProxyLabels` | map[string]string | Labels to add to the proxy service. |
|
||||
| `ProxyServiceName` | string | Name of the proxy service. |
|
||||
| `ProxyConfigPath` | string | Path in the service for the generated proxy configuration. |
|
||||
| `ServiceCluster` | string | Name of the cluster this extension services. |
|
||||
| `PublishMode` | string | Publish mode for the proxy service. Supported values are `ingress` or `host`. |
|
||||
| `PublishedPort` | int | Port where the proxy service serves non-TLS traffic. |
|
||||
| `PublishedSSLPort` | int | Port where the proxy service serves TLS traffic. |
|
||||
| `Template` | string | Docker configuration object that is used as the extension template. |
|
||||
| `Config` | Config | Proxy configuration used by the extensions as listed below. |
|
||||
|
||||
## Proxy configuration
|
||||
|
||||
By default NGINX is used as a proxy, so the following NGINX options are
|
||||
available for the proxy service:
|
||||
|
||||
| Option | Type | Description |
|
||||
|:------------------------|:-------|:-----------------------------------------------------------------------------------------------------|
|
||||
| `User` | string | User to be used in the proxy. |
|
||||
| `PidPath` | string | Path to the pid file for the proxy service. |
|
||||
| `MaxConnections` | int | Maximum number of connections for proxy service. |
|
||||
| `ConnectTimeout` | int | Timeout in seconds for clients to connect. |
|
||||
| `SendTimeout` | int | Timeout in seconds for the service to send a request to the proxied upstream. |
|
||||
| `ReadTimeout` | int | Timeout in seconds for the service to read a response from the proxied upstream. |
|
||||
| `IPHash` | bool | Specifies that requests are distributed between servers based on client IP addresses. |
|
||||
| `SSLOpts` | string | Options to be passed when configuring SSL. |
|
||||
| `SSLDefaultDHParam` | int | Size of DH parameters. |
|
||||
| `SSLDefaultDHParamPath` | string | Path to DH parameters file. |
|
||||
| `SSLVerify` | string | SSL client verification. |
|
||||
| `WorkerProcesses` | string | Number of worker processes for the proxy service. |
|
||||
| `RLimitNoFile` | int | Maximum number of open files for the proxy service. |
|
||||
| `SSLCiphers` | string | SSL ciphers to use for the proxy service. |
|
||||
| `SSLProtocols` | string | Enable the specified TLS protocols. |
|
||||
| `HideInfoHeaders` | bool | Hide proxy related response headers. |
|
||||
| `AccessLogPath` | string | Path to use for access logs (default: `/dev/stdout`). |
|
||||
| `ErrorLogPath` | string | Path to use for error logs (default: `/dev/stdout`). |
|
||||
| `MainLogFormat` | string | [Format](http://nginx.org/en/docs/http/ngx_http_log_module.html#log_format) to use for main logger. |
|
||||
| `TraceLogFormat` | string | [Format](http://nginx.org/en/docs/http/ngx_http_log_module.html#log_format) to use for trace logger. |
|
|
@ -1,64 +0,0 @@
|
|||
---
|
||||
title: Configure the layer 7 routing service
|
||||
description: Learn how to configure the layer 7 routing solution for UCP, that allows
|
||||
you to route traffic to swarm services.
|
||||
keywords: routing, proxy
|
||||
---
|
||||
|
||||
[When enabling the layer 7 routing solution](index.md) from the UCP web UI,
|
||||
you can configure the ports for incoming traffic. If you want to further
|
||||
customize the layer 7 routing solution, you can do it by updating the
|
||||
`ucp-interlock` service with a new Docker configuration.
|
||||
|
||||
Here's how it works:
|
||||
|
||||
1. Find out what configuration is currently being used for the `ucp-interlock`
|
||||
service and save it to a file:
|
||||
|
||||
{% raw %}
|
||||
```bash
|
||||
CURRENT_CONFIG_NAME=$(docker service inspect --format '{{ (index .Spec.TaskTemplate.ContainerSpec.Configs 0).ConfigName }}' ucp-interlock)
|
||||
docker config inspect --format '{{ printf "%s" .Spec.Data }}' $CURRENT_CONFIG_NAME > config.toml
|
||||
```
|
||||
{% endraw %}
|
||||
|
||||
2. Make the necessary changes to the `config.toml` file.
|
||||
[Learn about the configuration options available](configuration-reference.md).
|
||||
|
||||
3. Create a new Docker configuration object from the file you've edited:
|
||||
|
||||
```bash
|
||||
NEW_CONFIG_NAME="com.docker.ucp.interlock.conf-$(( $(cut -d '-' -f 2 <<< "$CURRENT_CONFIG_NAME") + 1 ))"
|
||||
docker config create $NEW_CONFIG_NAME config.toml
|
||||
```
|
||||
|
||||
4. Update the `ucp-interlock` service to start using the new configuration:
|
||||
|
||||
```bash
|
||||
docker service update \
|
||||
--config-rm $CURRENT_CONFIG_NAME \
|
||||
--config-add source=$NEW_CONFIG_NAME,target=/config.toml \
|
||||
ucp-interlock
|
||||
```
|
||||
|
||||
By default, the `ucp-interlock` service is configured to pause if you provide an
invalid configuration. The service won't restart without manual intervention.
|
||||
|
||||
If you want the service to automatically rollback to a previous stable
|
||||
configuration, you can update it with:
|
||||
|
||||
```bash
|
||||
docker service update \
|
||||
--update-failure-action rollback \
|
||||
ucp-interlock
|
||||
```
|
||||
|
||||
Another thing to be aware of is that every time you enable the layer 7 routing
|
||||
solution from the UCP UI, the `ucp-interlock` service is started using the
|
||||
default configuration.
|
||||
|
||||
If you've customized the configuration used by the `ucp-interlock` service,
|
||||
you'll have to update it again to use the Docker configuration object
|
||||
you've created.
|
||||
|
||||
|
|
@ -1,100 +0,0 @@
|
|||
---
|
||||
title: Host mode networking
|
||||
description: Learn how to configure the UCP layer 7 routing solution with
|
||||
host mode networking.
|
||||
keywords: routing, proxy
|
||||
redirect_from:
|
||||
- /ee/ucp/interlock/usage/host-mode-networking/
|
||||
---
|
||||
|
||||
By default the layer 7 routing components communicate with one another using
|
||||
overlay networks. You can customize the components to use host mode networking
|
||||
instead.
|
||||
|
||||
You can choose to:
|
||||
|
||||
* Configure the `ucp-interlock` and `ucp-interlock-extension` services to
|
||||
communicate using host mode networking.
|
||||
* Configure the `ucp-interlock-proxy` and your swarm service to communicate
|
||||
using host mode networking.
|
||||
* Use host mode networking for all of the components.
|
||||
|
||||
In this example we'll start with a production-grade deployment of the layer 7
|
||||
routing solution and update it so that it uses host mode networking instead of
|
||||
overlay networking.
|
||||
|
||||
When using host mode networking you won't be able to use DNS service discovery,
|
||||
since that functionality requires overlay networking.
|
||||
For two services to communicate, each service needs to know the IP address of
|
||||
the node where the other service is running.
|
||||
|
||||
## Production-grade deployment
|
||||
|
||||
If you haven't already, configure the
|
||||
[layer 7 routing solution for production](production.md).
|
||||
|
||||
Once you've done that, the `ucp-interlock-proxy` service replicas should be
|
||||
running on their own dedicated nodes.
|
||||
|
||||
## Update the ucp-interlock config
|
||||
|
||||
[Update the ucp-interlock service configuration](configure.md) so that it uses
|
||||
host mode networking.
|
||||
|
||||
Update the `PublishMode` key to:
|
||||
|
||||
```toml
|
||||
PublishMode = "host"
|
||||
```
|
||||
|
||||
When updating the `ucp-interlock` service to use the new Docker configuration,
|
||||
make sure to update it so that it starts publishing its port on the host:
|
||||
|
||||
```bash
|
||||
docker service update \
|
||||
--config-rm $CURRENT_CONFIG_NAME \
|
||||
--config-add source=$NEW_CONFIG_NAME,target=/config.toml \
|
||||
--publish-add mode=host,target=8080 \
|
||||
ucp-interlock
|
||||
```
|
||||
|
||||
The `ucp-interlock` and `ucp-interlock-extension` services are now communicating
|
||||
using host mode networking.
|
||||
|
||||
## Deploy your swarm services
|
||||
|
||||
Now you can deploy your swarm services. In this example we'll deploy a demo
|
||||
service that also uses host mode networking.
|
||||
Set up your CLI client with a [UCP client bundle](../../user-access/cli.md),
|
||||
and deploy the service:
|
||||
|
||||
```bash
|
||||
docker service create \
|
||||
--name demo \
|
||||
--detach=false \
|
||||
--label com.docker.lb.hosts=app.example.org \
|
||||
--label com.docker.lb.port=8080 \
|
||||
--publish mode=host,target=8080 \
|
||||
--env METADATA="demo" \
|
||||
ehazlett/docker-demo
|
||||
```
|
||||
|
||||
Docker allocates a high random port on the host where the service can be reached.
|
||||
To test that everything is working you can run:
|
||||
|
||||
```bash
|
||||
curl --header "Host: app.example.org" \
|
||||
http://<proxy-address>:<routing-http-port>/ping
|
||||
```
|
||||
|
||||
Where:
|
||||
|
||||
* `<proxy-address>` is the domain name or IP address of a node where the proxy
|
||||
service is running.
|
||||
* `<routing-http-port>` is the [port you're using to route HTTP traffic](index.md).
|
||||
|
||||
If everything is working correctly, you should get a JSON result like:
|
||||
|
||||
```json
|
||||
{"instance":"63b855978452", "version":"0.1", "request_id":"d641430be9496937f2669ce6963b67d6"}
|
||||
```
|
|
@ -1,13 +1,32 @@
|
|||
---
|
||||
|
||||
title: Deploy a layer 7 routing solution
|
||||
description: Learn the deployment steps for the UCP layer 7 routing solution
|
||||
keywords: routing, proxy, interlock
|
||||
redirect_from:
|
||||
- /ee/ucp/interlock/deploy/configuration-reference/
|
||||
---
|
||||
|
||||
|
||||
This topic covers deploying a layer 7 routing solution into a Docker Swarm to route traffic to Swarm services. Layer 7 routing is also referred to as an HTTP routing mesh.
|
||||
|
||||
1. [Prerequisites](#prerequisites)
|
||||
2. [Enable layer 7 routing](#enable-layer-7-routing)
|
||||
3. [Work with the core service configuration file](#work-with-the-core-service-configuration-file)
|
||||
4. [Create a dedicated network for Interlock and extensions](#create-a-dedicated-network-for-interlock-and-extensions)
|
||||
5. [Create the Interlock service](#create-the-interlock-service)
|
||||
|
||||
## Prerequisites
|
||||
|
||||
- [Docker](https://www.docker.com) version 17.06 or later
|
||||
- Docker must be running in [Swarm mode](/engine/swarm/)
|
||||
- Internet access (see [Offline installation](./offline-install.md) for installing without internet access)
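
To confirm the Swarm mode prerequisite, you can check the local Swarm state; this prints `active` on a node that has joined a swarm:

```bash
$> docker info --format '{{ .Swarm.LocalNodeState }}'
active
```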
|
||||
|
||||
## Enable layer 7 routing
|
||||
By default, layer 7 routing is disabled, so you must first
|
||||
enable this service from the UCP web UI.
|
||||
|
||||
1. Log in to the UCP web UI as an administrator.
|
||||
2. Navigate to **Admin Settings**
|
||||
3. Select **Layer 7 Routing** and then select **Enable Layer 7 Routing**
|
||||
|
||||
{: .with-border}
|
||||
|
||||
|
@ -15,4 +34,170 @@ By default, the routing mesh service listens on port 8080 for HTTP and port
|
|||
8443 for HTTPS. Change the ports if you already have services that are using
|
||||
them.
|
||||
|
||||
Once you save, the layer 7 routing service can be used by your swarm services.
|
||||
When layer 7 routing is enabled:
|
||||
|
||||
1. UCP creates the `ucp-interlock` overlay network.
|
||||
2. UCP deploys the `ucp-interlock` service and attaches it both to the Docker
|
||||
socket and the overlay network that was created. This allows the Interlock
|
||||
service to use the Docker API. That's also the reason why this service needs to
|
||||
run on a manager node.
|
||||
3. The `ucp-interlock` service starts the `ucp-interlock-extension` service
|
||||
and attaches it to the `ucp-interlock` network. This allows both services
|
||||
to communicate.
|
||||
4. The `ucp-interlock-extension` generates a configuration to be used by
|
||||
the proxy service. By default the proxy service is NGINX, so this service
|
||||
generates a standard NGINX configuration.
|
||||
UCP also creates the `com.docker.ucp.interlock.conf-1` configuration object and uses it to configure all
|
||||
the internal components of this service.
|
||||
5. The `ucp-interlock` service takes the proxy configuration and uses it to
|
||||
start the `ucp-interlock-proxy` service.
|
||||
|
||||
At this point everything is ready for you to start using the layer 7 routing
|
||||
service with your swarm workloads.
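
To confirm that these components were created, you can list the UCP Interlock services (replica counts depend on your configuration):

```bash
$> docker service ls --filter name=ucp-interlock
```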
|
||||
|
||||
|
||||
The following code sample provides a default UCP configuration:
|
||||
|
||||
```toml
|
||||
ListenAddr = ":8080"
|
||||
DockerURL = "unix:///var/run/docker.sock"
|
||||
AllowInsecure = false
|
||||
PollInterval = "3s"
|
||||
|
||||
[Extensions]
|
||||
[Extensions.default]
|
||||
Image = "{{ page.ucp_org }}/ucp-interlock-extension:{{ page.ucp_version }}"
|
||||
ServiceName = "ucp-interlock-extension"
|
||||
Args = []
|
||||
Constraints = ["node.labels.com.docker.ucp.orchestrator.swarm==true", "node.platform.os==linux"]
|
||||
ProxyImage = "{{ page.ucp_org }}/ucp-interlock-proxy:{{ page.ucp_version }}"
|
||||
ProxyServiceName = "ucp-interlock-proxy"
|
||||
ProxyConfigPath = "/etc/nginx/nginx.conf"
|
||||
ProxyReplicas = 2
|
||||
ProxyStopSignal = "SIGQUIT"
|
||||
ProxyStopGracePeriod = "5s"
|
||||
ProxyConstraints = ["node.labels.com.docker.ucp.orchestrator.swarm==true", "node.platform.os==linux"]
|
||||
PublishMode = "ingress"
|
||||
PublishedPort = 80
|
||||
TargetPort = 80
|
||||
PublishedSSLPort = 8443
|
||||
TargetSSLPort = 443
|
||||
[Extensions.default.Labels]
|
||||
"com.docker.ucp.InstanceID" = "fewho8k85kyc6iqypvvdh3ntm"
|
||||
[Extensions.default.ContainerLabels]
|
||||
"com.docker.ucp.InstanceID" = "fewho8k85kyc6iqypvvdh3ntm"
|
||||
[Extensions.default.ProxyLabels]
|
||||
"com.docker.ucp.InstanceID" = "fewho8k85kyc6iqypvvdh3ntm"
|
||||
[Extensions.default.ProxyContainerLabels]
|
||||
"com.docker.ucp.InstanceID" = "fewho8k85kyc6iqypvvdh3ntm"
|
||||
[Extensions.default.Config]
|
||||
Version = ""
|
||||
User = "nginx"
|
||||
PidPath = "/var/run/proxy.pid"
|
||||
MaxConnections = 1024
|
||||
ConnectTimeout = 600
|
||||
SendTimeout = 600
|
||||
ReadTimeout = 600
|
||||
IPHash = false
|
||||
AdminUser = ""
|
||||
AdminPass = ""
|
||||
SSLOpts = ""
|
||||
SSLDefaultDHParam = 1024
|
||||
SSLDefaultDHParamPath = ""
|
||||
SSLVerify = "required"
|
||||
WorkerProcesses = 1
|
||||
RLimitNoFile = 65535
|
||||
SSLCiphers = "HIGH:!aNULL:!MD5"
|
||||
SSLProtocols = "TLSv1.2"
|
||||
AccessLogPath = "/dev/stdout"
|
||||
ErrorLogPath = "/dev/stdout"
|
||||
MainLogFormat = "'$remote_addr - $remote_user [$time_local] \"$request\" '\n\t\t '$status $body_bytes_sent \"$http_referer\" '\n\t\t '\"$http_user_agent\" \"$http_x_forwarded_for\"';"
|
||||
TraceLogFormat = "'$remote_addr - $remote_user [$time_local] \"$request\" $status '\n\t\t '$body_bytes_sent \"$http_referer\" \"$http_user_agent\" '\n\t\t '\"$http_x_forwarded_for\" $request_id $msec $request_time '\n\t\t '$upstream_connect_time $upstream_header_time $upstream_response_time';"
|
||||
KeepaliveTimeout = "75s"
|
||||
ClientMaxBodySize = "32m"
|
||||
ClientBodyBufferSize = "8k"
|
||||
ClientHeaderBufferSize = "1k"
|
||||
LargeClientHeaderBuffers = "4 8k"
|
||||
ClientBodyTimeout = "60s"
|
||||
UnderscoresInHeaders = false
|
||||
HideInfoHeaders = false
|
||||
```
|
||||
|
||||
### Work with the core service configuration file
|
||||
Interlock uses a TOML file for the core service configuration. The following example uses the Swarm deployment and recovery features by creating a Docker config object:
|
||||
|
||||
```bash
|
||||
$> cat << EOF | docker config create service.interlock.conf -
|
||||
ListenAddr = ":8080"
|
||||
DockerURL = "unix:///var/run/docker.sock"
|
||||
PollInterval = "3s"
|
||||
|
||||
[Extensions]
|
||||
[Extensions.default]
|
||||
Image = "interlockpreview/interlock-extension-nginx:2.0.0-preview"
|
||||
Args = ["-D"]
|
||||
ProxyImage = "nginx:alpine"
|
||||
ProxyArgs = []
|
||||
ProxyConfigPath = "/etc/nginx/nginx.conf"
|
||||
ProxyReplicas = 1
|
||||
ProxyStopGracePeriod = "3s"
|
||||
ServiceCluster = ""
|
||||
PublishMode = "ingress"
|
||||
PublishedPort = 80
|
||||
TargetPort = 80
|
||||
PublishedSSLPort = 443
|
||||
TargetSSLPort = 443
|
||||
[Extensions.default.Config]
|
||||
User = "nginx"
|
||||
PidPath = "/var/run/proxy.pid"
|
||||
WorkerProcesses = 1
|
||||
RlimitNoFile = 65535
|
||||
MaxConnections = 2048
|
||||
EOF
|
||||
oqkvv1asncf6p2axhx41vylgt
|
||||
```
|
||||
|
||||
### Create a dedicated network for Interlock and extensions
|
||||
|
||||
Next, create a dedicated network for Interlock and the extensions:
|
||||
|
||||
```bash
|
||||
$> docker network create -d overlay interlock
|
||||
```
|
||||
|
||||
### Create the Interlock service
|
||||
Now you can create the Interlock service. Note the requirement to constrain it to a manager. The
Interlock core service must have access to a Swarm manager; however, the extension and proxy services
are recommended to run on workers. See the [Production](./production.md) section for more information
on setting up for a production environment.
|
||||
|
||||
```bash
|
||||
$> docker service create \
|
||||
--name interlock \
|
||||
--mount src=/var/run/docker.sock,dst=/var/run/docker.sock,type=bind \
|
||||
--network interlock \
|
||||
--constraint node.role==manager \
|
||||
--config src=service.interlock.conf,target=/config.toml \
|
||||
interlockpreview/interlock:2.0.0-preview -D run -c /config.toml
|
||||
sjpgq7h621exno6svdnsvpv9z
|
||||
```
|
||||
|
||||
At this point, there should be three (3) services created: one for the Interlock service,
|
||||
one for the extension service, and one for the proxy service:
|
||||
|
||||
```bash
|
||||
$> docker service ls
|
||||
ID NAME MODE REPLICAS IMAGE PORTS
|
||||
lheajcskcbby modest_raman replicated 1/1 nginx:alpine *:80->80/tcp *:443->443/tcp
|
||||
oxjvqc6gxf91 keen_clarke replicated 1/1 interlockpreview/interlock-extension-nginx:2.0.0-preview
|
||||
sjpgq7h621ex interlock replicated 1/1 interlockpreview/interlock:2.0.0-preview
|
||||
```
|
||||
|
||||
The Interlock traffic layer is now deployed.
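
As a quick smoke test, you can send a request to the published proxy port on any Swarm node. Until an application is published with the routing labels, the proxy is expected to return an error response such as a 404 or 503 (the node address below is a placeholder):

```bash
$> curl -I http://<swarm-node-address>:80/
```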
|
||||
|
||||
## Next steps
|
||||
|
||||
- [Configure Interlock](../config/index.md)
|
||||
- [Deploy applications](../usage/index.md)
|
||||
- [Production deployment information](./production.md)
|
||||
- [Offline installation](./offline-install.md)
|
||||
|
|
|
@ -0,0 +1,37 @@
|
|||
---
|
||||
title: Offline installation considerations
|
||||
description: Learn how to install Interlock on a Docker cluster without internet access.
|
||||
keywords: routing, proxy, interlock
|
||||
---
|
||||
|
||||
To install Interlock on a Docker cluster without internet access, the Docker images must be loaded. This topic describes how to export the images from a local Docker
engine and then load them into the Docker Swarm cluster.
|
||||
|
||||
First, using an existing Docker engine, save the images:
|
||||
|
||||
```bash
|
||||
$> docker save docker/interlock:latest > interlock.tar
|
||||
$> docker save docker/interlock-extension-nginx:latest > interlock-extension-nginx.tar
|
||||
$> docker save nginx:alpine > nginx.tar
|
||||
```
|
||||
|
||||
Note: replace `docker/interlock-extension-nginx:latest` and `nginx:alpine` with the corresponding
|
||||
extension and proxy image if you are not using Nginx.
|
||||
|
||||
You should have the following three files:
|
||||
|
||||
- `interlock.tar`: This is the core Interlock application.
|
||||
- `interlock-extension-nginx.tar`: This is the Interlock extension for Nginx.
|
||||
- `nginx.tar`: This is the official Nginx image based on Alpine.
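
One possible way to distribute the archives is with `scp`; the node names below are placeholders for your actual Swarm nodes:

```bash
$> for node in node-01 node-02 node-03; do
     scp interlock.tar interlock-extension-nginx.tar nginx.tar ${node}:~/
   done
```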
|
||||
|
||||
Copy these files to each node in the Docker Swarm cluster and run the following commands to load each image:
|
||||
|
||||
```bash
|
||||
$> docker load < interlock.tar
|
||||
$> docker load < interlock-extension-nginx.tar
|
||||
$> docker load < nginx.tar
|
||||
```
|
||||
|
||||
## Next steps
|
||||
After loading the images on each node, refer to the [Deploy](./index.md) section to
|
||||
continue the installation.
|
|
@ -2,18 +2,33 @@
|
|||
title: Configure layer 7 routing for production
|
||||
description: Learn how to configure the layer 7 routing solution for a production
|
||||
environment.
|
||||
|
||||
keywords: routing, proxy, interlock
|
||||
---
|
||||
|
||||
|
||||
This section includes documentation on configuring Interlock
for a production environment. If you have not yet deployed Interlock, refer to [Deploying Interlock](./index.md), because this information builds upon the basic deployment. This topic does not cover infrastructure deployment;
it assumes you have a vanilla Swarm cluster (`docker swarm init` on a manager and `docker swarm join` from the other nodes).
|
||||
Refer to the [Swarm](/engine/swarm/) documentation if you need help
|
||||
getting a Swarm cluster deployed.
|
||||
|
||||
The layer 7 solution that ships with UCP is highly available
|
||||
and fault tolerant. It is also designed to work independently of how many
|
||||
nodes you're managing with UCP.
|
||||
|
||||

|
||||
|
||||
|
||||
For a production-grade deployment, you need to perform the following actions:
|
||||
|
||||
1. Pick two nodes that are going to be dedicated to run the proxy service.
|
||||
2. Apply labels to those nodes, so that you can constrain the proxy service to
|
||||
only run on nodes with those labels.
|
||||
3. Update the `ucp-interlock` service to deploy proxies using that constraint.
|
||||
4. Configure your load balancer to only route traffic to the dedicated nodes.
|
||||
|
||||
## Select dedicated nodes
|
||||
Tuning the default deployment to
have two nodes dedicated to running the two replicas of the
`ucp-interlock-proxy` service ensures:
|
||||
|
||||
* The proxy services have dedicated resources to handle user requests. You
|
||||
can configure these nodes with higher performance network interfaces.
|
||||
|
@ -22,45 +37,56 @@ deployment secure.
|
|||
* The proxy service is running on two nodes. If one node fails, layer 7 routing
|
||||
continues working.
|
||||
|
||||
|
||||
|
||||
|
||||
## Apply node labels
|
||||
Configure the selected nodes as load balancer worker nodes (for example, `lb-00` and `lb-01`) with node labels in order to pin the Interlock Proxy service. After you log in to one of the Swarm managers, run the following commands to add node labels
|
||||
to the dedicated ingress workers:
|
||||
|
||||
```bash
|
||||
|
||||
$> docker node update --label-add nodetype=loadbalancer lb-00
|
||||
lb-00
|
||||
$> docker node update --label-add nodetype=loadbalancer lb-01
|
||||
lb-01
|
||||
```
|
||||
|
||||
|
||||
You can inspect each node to ensure the labels were successfully added:
|
||||
|
||||
{% raw %}
|
||||
```bash
|
||||
|
||||
$> docker node inspect -f '{{ .Spec.Labels }}' lb-00
|
||||
map[nodetype:loadbalancer]
|
||||
$> docker node inspect -f '{{ .Spec.Labels }}' lb-01
|
||||
map[nodetype:loadbalancer]
|
||||
```
|
||||
{% endraw %}
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
## Update proxy service
|
||||
Now that your nodes are labelled, you need to update the `ucp-interlock-proxy`
|
||||
|
||||
service configuration to deploy the proxy service with the correct constraints (constrained to those
|
||||
workers). From a manager, add a constraint to the `ucp-interlock-proxy` service to update the running service:
|
||||
|
||||
```bash
|
||||
|
||||
$> docker service update --replicas=2 \
|
||||
--constraint-add node.labels.nodetype==loadbalancer \
|
||||
|
||||
--stop-signal SIGQUIT \
|
||||
--stop-grace-period=5s \
|
||||
$(docker service ls -f 'label=type=com.docker.interlock.core.proxy' -q)
|
||||
```
|
||||
|
||||
This updates the proxy service to have two (2) replicas, ensures they are constrained to
the workers with the label `nodetype==loadbalancer`, and configures the stop signal for the tasks
to be a `SIGQUIT` with a grace period of five (5) seconds. This ensures that NGINX shuts down gracefully
and finishes serving in-flight client requests before exiting.
|
||||
|
||||
Inspect the service to ensure the replicas have started on the desired nodes:
|
||||
|
||||
```bash
|
||||
$> docker service ps $(docker service ls -f 'label=type=com.docker.interlock.core.proxy' -q)
|
||||
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
|
||||
o21esdruwu30 interlock-proxy.1 nginx:alpine lb-01 Running Preparing 3 seconds ago
|
||||
n8yed2gp36o6 \_ interlock-proxy.1 nginx:alpine mgr-01 Shutdown Shutdown less than a second ago
|
||||
aubpjc4cnw79 interlock-proxy.2 nginx:alpine lb-00 Running Preparing 3 seconds ago
|
||||
```
|
||||
|
||||
Then add the constraint to the `ProxyConstraints` array in the `interlock-proxy` service
|
||||
|
@ -72,18 +98,34 @@ configuration so it takes effect if Interlock is restored from backup:
|
|||
ProxyConstraints = ["node.labels.com.docker.ucp.orchestrator.swarm==true", "node.platform.os==linux", "node.labels.nodetype==loadbalancer"]
|
||||
```
|
||||
|
||||
|
||||
[Learn how to configure ucp-interlock](../config/index.md).
|
||||
|
||||
|
||||
Once reconfigured, you can check if the proxy service is running on the dedicated nodes:
|
||||
|
||||
```bash
|
||||
docker service ps ucp-interlock-proxy
|
||||
```
|
||||
|
||||
|
||||
## Configure load balancer
|
||||
Update the settings in the upstream load balancer (ELB, F5, etc) with the
|
||||
addresses of the dedicated ingress workers. This directs all traffic to these nodes.
|
||||
|
||||
|
||||
You have now configured Interlock for a dedicated ingress production environment. Refer to the [configuration information](../config/tuning.md) if you want to continue tuning.
|
||||
|
||||
|
||||
## Production deployment configuration example
|
||||
The following example shows the configuration of an eight (8) node Swarm cluster. There are three (3) managers
|
||||
and five (5) workers. Two of the workers are configured with node labels to be dedicated
|
||||
ingress cluster load balancer nodes. These will receive all application traffic.
|
||||
There is also an upstream load balancer (such as an Elastic Load Balancer or F5). The upstream
|
||||
load balancers will be statically configured for the two load balancer worker nodes.
|
||||
|
||||
This configuration has several benefits. The management plane is both isolated and redundant.
|
||||
No application traffic hits the managers and application ingress traffic can be routed
|
||||
to the dedicated nodes. These nodes can be configured with higher performance network interfaces
|
||||
to provide more bandwidth for the user services.
|
||||
|
||||

|
||||
|
||||
## Next steps
|
||||
- [Configure Interlock](../config/index.md)
|
||||
- [Deploy applications](../usage/index.md)
|
||||
|
|
|
@ -1,7 +1,9 @@
|
|||
---
|
||||
title: Layer 7 routing upgrade
|
||||
|
||||
description: Learn how to upgrade your existing layer 7 routing solution
|
||||
keywords: routing, proxy, hrm
|
||||
redirect_from:
|
||||
- /ee/ucp/interlock/upgrade/
|
||||
---
|
||||
|
||||
The [HTTP routing mesh](/datacenter/ucp/2.2/guides/admin/configure/use-domain-names-to-access-services.md)
|
||||
|
@ -9,7 +11,7 @@ functionality was redesigned in UCP 3.0 for greater security and flexibility.
|
|||
The functionality was also renamed to "layer 7 routing", to make it easier for
|
||||
new users to get started.
|
||||
|
||||
|
||||
[Learn about the new layer 7 routing functionality](../index.md).
|
||||
|
||||
To route traffic to your service you apply specific labels to your swarm
|
||||
services, describing the hostname for the service and other configurations.
|
||||
|
@ -23,8 +25,6 @@ This article describes the upgrade process for the routing component, so that
|
|||
you can troubleshoot UCP and your services, in case something goes wrong with
|
||||
the upgrade.
|
||||
|
||||
## UCP upgrade process
|
||||
|
||||
If you are using the HTTP routing mesh, and start an upgrade to UCP 3.0:
|
||||
|
||||
1. UCP starts a reconciliation process to ensure all internal components are
|
||||
|
@ -40,7 +40,7 @@ before the upgrade. If something goes wrong during the upgrade process, you
|
|||
need to troubleshoot the interlock services and your services, since the HRM
|
||||
service won't be running after the upgrade.
|
||||
|
||||
|
||||
[Learn more about the interlock services and architecture](../architecture.md).
|
||||
|
||||
## Check that routing works
|
||||
|
||||
|
@ -96,7 +96,7 @@ being used by other services.
|
|||
## Workarounds and clean-up
|
||||
|
||||
If you have any of the problems above, disable and enable the layer 7 routing
|
||||
|
||||
setting on the [UCP settings page](index.md). This redeploys the
|
||||
services with their default configuration.
|
||||
|
||||
When doing that make sure you specify the same ports you were using for HRM,
|
||||
|
@ -108,7 +108,7 @@ stop it since it can conflict with the `ucp-interlock-proxy` service.
|
|||
## Optionally remove labels
|
||||
|
||||
As part of the upgrade process UCP adds the
|
||||
|
||||
[labels specific to the new layer 7 routing solution](../usage/labels-reference.md).
|
||||
|
||||
You can update your services to remove the old HRM labels, since they won't be
|
||||
used anymore.
|
||||
|
@ -121,7 +121,7 @@ the application traffic.
|
|||
If before upgrading you had all your applications attached to the `ucp-hrm`
|
||||
network, after upgrading you can update your services to start using a
|
||||
dedicated network for routing that's not shared with other services.
|
||||
|
||||
[Learn how to use a dedicated network](../usage/index.md).
|
||||
|
||||
If before upgrading you had a dedicated network to route traffic to each service,
Interlock will continue using those dedicated networks. However, the
|
|
@ -1,35 +1,87 @@
|
|||
---
|
||||
title: Layer 7 routing overview
|
||||
|
||||
description: Learn how to route layer 7 traffic to your Swarm services
|
||||
keywords: routing, UCP, interlock, load balancing
|
||||
---
|
||||
|
||||
Application-layer (Layer 7) routing is the application routing and load balancing (ingress routing) system included with Docker Enterprise for Swarm orchestration. Interlock architecture takes advantage of the underlying Swarm components to provide scalable Layer 7 routing and Layer 4 VIP mode functionality.
|
||||
|
||||
Interlock is specific to the Swarm orchestrator. If you're trying to route
|
||||
traffic to your Kubernetes applications, check
|
||||
[layer 7 routing with Kubernetes.](../kubernetes/layer-7-routing.md)
|
||||
|
||||
Interlock uses the Docker Remote API to automatically configure extensions such as NGINX or HAProxy for application traffic. Interlock is designed for:
|
||||
|
||||
- Full integration with Docker (Swarm, Services, Secrets, Configs)
|
||||
- Enhanced configuration (context roots, TLS, zero downtime deploy, rollback)
|
||||
- Support for external load balancers (nginx, haproxy, F5, etc) via extensions
|
||||
- Least privilege for extensions (no Docker API access)
|
||||
|
||||
Docker Engine running in swarm mode has a routing mesh, which makes it easy
|
||||
to expose your services to the outside world. Since all nodes participate
|
||||
|
||||
in the routing mesh, users can access a service by contacting any node.
|
||||
|
||||

|
||||
|
||||
|
||||
For example, a WordPress service is listening on port 8000 of the routing
|
||||
mesh. Even though the service is running on a single node, users can access
|
||||
WordPress using the domain name or IP of any of the nodes that are part of
|
||||
the swarm.
|
||||
|
||||
UCP extends this one step further with layer 7 routing (also known as
application layer routing), allowing users to access Docker services using domain names
|
||||
|
||||
instead of IP addresses. This functionality is made available through the Interlock component.
|
||||
|
||||

|
||||
|
||||
|
||||
Using Interlock in the previous example, users can access the WordPress service using
|
||||
`http://wordpress.example.org`. Interlock takes care of routing traffic to
|
||||
the right place.
|
||||
|
||||
|
||||
## Terminology
|
||||
|
||||
- Cluster: A group of compute resources running Docker
|
||||
- Swarm: A Docker cluster running in Swarm mode
|
||||
- Upstream: An upstream container that serves an application
|
||||
- Proxy Service: A service that provides load balancing and proxying (such as Nginx)
|
||||
- Extension Service: A helper service that configures the proxy service
|
||||
- Service Cluster: A service cluster is an Interlock extension+proxy service
|
||||
- gRPC: A high-performance RPC framework
|
||||
|
||||

|
||||
|
||||
## Interlock services
|
||||
Interlock has
|
||||
three primary services:
|
||||
|
||||
* **Interlock**: This is the central piece of the layer 7 routing solution.
|
||||
The core service is responsible for interacting with the Docker Remote API and building
|
||||
an upstream configuration for the extensions. It uses the Docker API to monitor events, and manages the extension and
|
||||
proxy services. This is served on a gRPC API that the
|
||||
extensions are configured to access.
|
||||
* **Interlock-extension**: This is a helper service that queries the Interlock gRPC API for the
|
||||
upstream configuration. The extension service uses this to configure
|
||||
the proxy service. For proxy services that use files such as Nginx or HAProxy, the
|
||||
extension service generates the file and sends it to Interlock using the gRPC API. Interlock
|
||||
then updates the corresponding Docker Config object for the proxy service.
|
||||
* **Interlock-proxy**: This is a proxy/load-balancing service that handles requests for the upstream application services. These
|
||||
are configured using the data created by the corresponding extension service. By default, this service is a containerized
|
||||
NGINX deployment.
|
||||
|
||||
Interlock manages both extension and proxy service updates for both configuration changes
|
||||
and application service deployments. There is no intervention from the operator required.
|
||||
|
||||
The following image shows the default Interlock configuration, once you enable layer 7
|
||||
routing in UCP:
|
||||
|
||||

|
||||
|
||||
The Interlock service starts a single replica on a manager node. The
|
||||
Interlock-extension service runs a single replica on any available node, and
|
||||
the Interlock-proxy service starts two replicas on any available node.
|
||||
|
||||
If you don't have any worker nodes in your cluster, then all Interlock
|
||||
components run on manager nodes.
|
||||
|
||||
## Features and benefits
|
||||
|
||||
|
@ -37,16 +89,31 @@ Layer 7 routing in UCP supports:
|
|||
|
||||
* **High availability**: All the components used for layer 7 routing leverage
|
||||
Docker swarm for high availability, and handle failures gracefully.
|
||||
|
||||
* **Automatic configuration**: Interlock uses the Docker API for configuration. You do not have to manually
|
||||
update or restart anything to make services available. UCP monitors your services and automatically
|
||||
reconfigures proxy services.
|
||||
* **Scalability**: Interlock uses a modular design with a separate proxy service. This allows an
|
||||
operator to individually customize and scale the proxy layer to handle user requests and meet service demands, with transparency and no downtime for users.
|
||||
|
||||
* **Context-based routing**: Interlock supports advanced application request routing by context or path.
|
||||
* **Host mode networking**: By default, layer 7 routing leverages the Docker Swarm
|
||||
routing mesh, but Interlock supports running proxy and application services in "host" mode networking, allowing
|
||||
you to bypass the routing mesh completely. This is beneficial if you want
|
||||
maximum performance for your applications.
|
||||
* **Security**: The layer 7 routing components that are exposed to the outside
|
||||
world run on worker nodes. Even if they are compromised, your cluster isn't.
|
||||
* **SSL**: Interlock leverages Docker Secrets to securely store and use SSL certificates for services. Both
|
||||
SSL termination and TCP passthrough are supported.
|
||||
* **Blue-Green and Canary Service Deployment**: Interlock supports blue-green service deployment allowing an operator to deploy a new application while the current version is serving. Once traffic is verified to the new application, the operator
|
||||
can scale the older version to zero. If there is a problem, the operation is easily reversible.
|
||||
* **Service Cluster Support**: Interlock supports multiple extension+proxy combinations allowing for operators to partition load
|
||||
balancing resources for uses such as region or organization based load balancing.
|
||||
* **Least Privilege**: Interlock supports (and recommends) being deployed where the load balancing
|
||||
proxies do not need to be colocated with a Swarm manager. This makes the
|
||||
deployment more secure by not exposing the Docker API access to the extension or proxy services.
|
||||
|
||||
## Next steps
|
||||
|
||||
- [Interlock architecture](architecture.md)
|
||||
- [Deploy Interlock](deploy/index.md)
|
||||
|
|
|
@ -1,19 +1,19 @@
|
|||
---
|
||||
|
||||
title: Publish Canary application instances
|
||||
description: Learn how to do canary deployments for your Docker swarm services
|
||||
keywords: routing, proxy
|
||||
---
|
||||
|
||||
|
||||
The following example publishes a service as a canary instance.
|
||||
|
||||
|
||||
First, create an overlay network to isolate and secure service traffic:
|
||||
|
||||
```bash
|
||||
$> docker network create -d overlay demo
|
||||
1se1glh749q1i4pw0kf26mfx5
|
||||
```
|
||||
|
||||
|
||||
Next, create the initial service:
|
||||
|
||||
```bash
|
||||
$> docker service create \
|
||||
|
@ -27,8 +27,8 @@ $> docker service create \
|
|||
ehazlett/docker-demo
|
||||
```
|
||||
|
||||
|
||||
Interlock detects when the service is available and publishes it. After tasks are running
|
||||
and the proxy service is updated, the application is available via `http://demo.local`:
|
||||
|
||||
```bash
|
||||
$> curl -vs -H "Host: demo.local" http://127.0.0.1/ping
|
||||
|
@ -56,9 +56,10 @@ $> curl -vs -H "Host: demo.local" http://127.0.0.1/ping
|
|||
{"instance":"df20f55fc943","version":"0.1","metadata":"demo-version-1","request_id":"f884cf37e8331612b8e7630ad0ee4e0d"}
|
||||
```
|
||||
|
||||
|
||||
Notice `metadata` is specified with `demo-version-1`.
|
||||
|
||||
|
||||
## Deploy an updated service as a canary instance
|
||||
The following example deploys an updated service as a canary instance:
|
||||
|
||||
```bash
|
||||
$> docker service create \
|
||||
|
@ -72,8 +73,8 @@ $> docker service create \
|
|||
ehazlett/docker-demo
|
||||
```
|
||||
|
||||
|
||||
Since this has a replica of one (1), and the initial version has four (4) replicas, 20% of application traffic
|
||||
is sent to `demo-version-2`:
|
||||
|
||||
```bash
|
||||
$> curl -vs -H "Host: demo.local" http://127.0.0.1/ping
|
||||
|
@ -88,7 +89,7 @@ $> curl -vs -H "Host: demo.local" http://127.0.0.1/ping
|
|||
{"instance":"c2a686ae5694","version":"0.1","metadata":"demo-version-1","request_id":"724c21d0fb9d7e265821b3c95ed08b61"}
|
||||
```
|
||||
|
||||
To increase traffic to the new version, add more replicas with `docker service scale`:
|
||||
|
||||
```bash
|
||||
$> docker service scale demo-v2=4
|
||||
|
@ -102,6 +103,5 @@ $> docker service scale demo-v1=0
|
|||
demo-v1
|
||||
```
|
||||
|
||||
This routes all application traffic to the new version. If you need to roll back, simply scale the v1 service
|
||||
back up and v2 down.
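
For example, a rollback of this scenario could look like the following, mirroring the replica counts used above:

```bash
$> docker service scale demo-v1=4
demo-v1
$> docker service scale demo-v2=0
demo-v2
```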
|
||||
|
||||
|
|
|
@ -1,20 +1,20 @@
|
|||
---
|
||||
|
||||
title: Use context and path-based routing
|
||||
description: Learn how to route traffic to your Docker swarm services based
|
||||
on a url path.
|
||||
keywords: routing, proxy
|
||||
---
|
||||
|
||||
|
||||
The following example publishes a service using context or path based routing.
|
||||
|
||||
|
||||
First, create an overlay network so that service traffic is isolated and secure:
|
||||
|
||||
```bash
|
||||
$> docker network create -d overlay demo
|
||||
1se1glh749q1i4pw0kf26mfx5
|
||||
```
|
||||
|
||||
|
||||
Next, create the initial service:
|
||||
|
||||
```bash
|
||||
$> docker service create \
|
||||
|
@ -31,13 +31,13 @@ $> docker service create \
|
|||
|
||||
> Only one path per host
|
||||
>
|
||||
|
||||
> Interlock only supports one path per host per service cluster. When a
|
||||
> specific `com.docker.lb.hosts` label is applied, it cannot be applied
|
||||
> again in the same service cluster.
|
||||
{: .important}
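
The labels that drive context or path-based routing are `com.docker.lb.context_root` and, optionally, `com.docker.lb.context_root_rewrite`. A sketch of a service carrying them; the host, port, and path are illustrative:

```bash
$> docker service create \
    --name demo \
    --network demo \
    --label com.docker.lb.hosts=demo.local \
    --label com.docker.lb.port=8080 \
    --label com.docker.lb.context_root=/app \
    --label com.docker.lb.context_root_rewrite=true \
    ehazlett/docker-demo
```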
|
||||
|
||||
|
||||
Interlock detects when the service is available and publishes it. After tasks are running
|
||||
and the proxy service is updated, the application is available via `http://demo.local`:
|
||||
|
||||
```bash
|
||||
$> curl -vs -H "Host: demo.local" http://127.0.0.1/app/
|
||||
|
@ -62,4 +62,3 @@ $> curl -vs -H "Host: demo.local" http://127.0.0.1/app/
|
|||
< x-upstream-response-time: 1510928717.306
|
||||
...
|
||||
```
|
||||
|
||||
|
|
|
@ -1,50 +0,0 @@
|
|||
---
|
||||
title: Set a default service
|
||||
description: Learn about Interlock, an application routing and load balancing system
|
||||
for Docker Swarm.
|
||||
keywords: ucp, interlock, load balancing
|
||||
---
|
||||
|
||||
The default proxy service used by UCP to provide layer 7 routing is NGINX,
|
||||
so when users try to access a route that hasn't been configured, they will
|
||||
see the default NGINX 404 page.
|
||||
|
||||
{: .with-border}
|
||||
|
||||
You can customize this by labelling a service with
|
||||
`com.docker.lb.default_backend=true`. When users try to access a route that's
|
||||
not configured, they are redirected to this service.
|
||||
|
||||
As an example, create a `docker-compose.yml` file with:
|
||||
|
||||
```yaml
|
||||
version: "3.2"
|
||||
|
||||
services:
|
||||
demo:
|
||||
image: ehazlett/interlock-default-app
|
||||
deploy:
|
||||
replicas: 1
|
||||
labels:
|
||||
com.docker.lb.default_backend: "true"
|
||||
com.docker.lb.port: 80
|
||||
networks:
|
||||
- demo-network
|
||||
|
||||
networks:
|
||||
demo-network:
|
||||
driver: overlay
|
||||
```
|
||||
|
||||
Set up your CLI client with a [UCP client bundle](../../user-access/cli.md),
|
||||
and deploy the service:
|
||||
|
||||
```bash
|
||||
docker stack deploy --compose-file docker-compose.yml demo
|
||||
```
|
||||
|
||||
Once users try to access a route that's not configured, they are directed
|
||||
to this demo service.
|
||||
|
||||
{: .with-border}
|
||||
|
|
@ -1,20 +1,75 @@
|
|||
---
|
||||
|
||||
title: Route traffic to a swarm service
|
||||
description: Learn how to deploy your Docker swarm services and applications
|
||||
keywords: routing, proxy
|
||||
redirect_from:
|
||||
- /ee/ucp/interlock/deploy/configuration-reference/
|
||||
- /ee/ucp/interlock/deploy/configure/
|
||||
---
|
||||
|
||||
|
||||
After Interlock is deployed, you can launch and publish services and applications.
|
||||
Use [Service Labels](/engine/reference/commandline/service_create/#set-metadata-on-a-service--l-label)
|
||||
to configure services to publish themselves to the load balancer.
|
||||
|
||||
|
||||
The following examples assume a DNS entry (or local hosts entry if you are testing locally) exists
|
||||
for each of the applications.
|
||||
|
||||
## Publish a service with four replicas
|
||||
Create a Docker Service using two labels:
|
||||
|
||||
- `com.docker.lb.hosts`
|
||||
- `com.docker.lb.port`
|
||||
|
||||
The `com.docker.lb.hosts` label instructs Interlock where the service should be available.
|
||||
The `com.docker.lb.port` label specifies which port the proxy service should use to access
the upstreams.
|
||||
|
||||
Publish a demo service to the host `demo.local`:
|
||||
|
||||
First, create an overlay network so that service traffic is isolated and secure:
|
||||
|
||||
```bash
|
||||
$> docker network create -d overlay demo
|
||||
1se1glh749q1i4pw0kf26mfx5
|
||||
```
|
||||
|
||||
Next, deploy the application:
|
||||
|
||||
```bash
|
||||
$> docker service create \
|
||||
--name demo \
|
||||
--network demo \
|
||||
--label com.docker.lb.hosts=demo.local \
|
||||
--label com.docker.lb.port=8080 \
|
||||
ehazlett/docker-demo
|
||||
6r0wiglf5f3bdpcy6zesh1pzx
|
||||
```
|
||||
|
||||
Interlock detects when the service is available and publishes it. After tasks are running
|
||||
and the proxy service is updated, the application is available via `http://demo.local`.
|
||||
|
||||
```bash
|
||||
$> curl -s -H "Host: demo.local" http://127.0.0.1/ping
|
||||
{"instance":"c2f1afe673d4","version":"0.1",request_id":"7bcec438af14f8875ffc3deab9215bc5"}
|
||||
```
|
||||
|
||||
To increase service capacity, use the Docker Service [Scale](https://docs.docker.com/engine/swarm/swarm-tutorial/scale-service/) command:
|
||||
|
||||
```bash
|
||||
$> docker service scale demo=4
|
||||
demo scaled to 4
|
||||
```
|
||||
|
||||
In this example, four service replicas are configured as upstreams. The load balancer balances traffic
|
||||
across all service replicas.
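One informal way to observe this, assuming the `demo` service from above, is to repeat the request and watch the `instance` field change across replies:

```bash
# Each reply may come from a different replica
$> for i in 1 2 3 4; do curl -s -H "Host: demo.local" http://127.0.0.1/ping; echo; done
```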
|
||||
|
||||
## Publish a service with a web interface
|
||||
This example deploys a simple service that:
|
||||
|
||||
* Has a JSON endpoint that returns the ID of the task serving the request.
|
||||
* Has a web interface that shows how many tasks the service is running.
|
||||
* Can be reached at `http://app.example.org`.
|
||||
|
||||
## Deploy the service
|
||||
|
||||
Create a `docker-compose.yml` file with:
|
||||
|
||||
```yaml
|
||||
|
@ -47,8 +102,6 @@ should attach to in order to be able to communicate with the demo service.
|
|||
To use layer 7 routing, your services need to be attached to at least one network.
|
||||
If your service is only attached to a single network, you don't need to add
a label to specify which network to use for routing. When you use a common stack file for multiple deployments that leverage UCP Interlock layer 7 routing, prefix `com.docker.lb.network` with the stack name to ensure traffic is directed to the correct overlay network, as shown in the sketch below.
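As a sketch only: if a stack file were deployed with `docker stack deploy --compose-file docker-compose.yml demo`, a network declared as `demo-network` would be created as `demo_demo-network` and the `demo` service would be named `demo_demo`. In that hypothetical case you could set or correct the routing label after deployment like this:

```bash
# Hypothetical names: stack "demo", service "demo", network "demo-network" in the stack file
$> docker service update --label-add com.docker.lb.network=demo_demo-network demo_demo
```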
|
||||
|
||||
|
||||
* The `com.docker.lb.port` label specifies which port the `ucp-interlock-proxy`
|
||||
service should use to communicate with this demo service.
|
||||
* Your service doesn't need to expose a port in the swarm routing mesh. All
|
||||
|
@ -64,7 +117,7 @@ docker stack deploy --compose-file docker-compose.yml demo
|
|||
The `ucp-interlock` service detects that your service is using these labels
|
||||
and automatically reconfigures the `ucp-interlock-proxy` service.
|
||||
|
||||
## Test using the CLI
|
||||
### Test using the CLI
|
||||
|
||||
To test that requests are routed to the demo service, run:
|
||||
|
||||
|
@ -84,7 +137,7 @@ If everything is working correctly, you should get a JSON result like:
|
|||
{"instance":"63b855978452", "version":"0.1", "request_id":"d641430be9496937f2669ce6963b67d6"}
|
||||
```
|
||||
|
||||
## Test using a browser
|
||||
### Test using a browser
|
||||
|
||||
Since the demo service exposes an HTTP endpoint, you can also use your browser
|
||||
to validate that everything is working.
|
||||
|
@ -95,3 +148,16 @@ able to start using the service from your browser.
|
|||
|
||||
{: .with-border }
|
||||
|
||||
## Next steps
|
||||
|
||||
- [Publish a service as a canary instance](./canary.md)
|
||||
- [Use context or path-based routing](./context.md)
|
||||
- [Publish a default host service](./default-backend.md)
|
||||
- [Specify a routing mode](./interlock-vip-mode.md)
|
||||
- [Use routing labels](./labels-reference.md)
|
||||
- [Implement redirects](./redirects.md)
|
||||
- [Implement a service cluster](./service-clusters.md)
|
||||
- [Implement persistent (sticky) sessions](./sessions.md)
|
||||
- [Implement SSL](./ssl.md)
|
||||
- [Secure services with TLS](./tls.md)
|
||||
- [Configure websockets](./websockets.md)
|
||||
|
|
|
@ -1,18 +1,12 @@
|
|||
---
|
||||
title: VIP Mode
|
||||
description: Learn about the VIP backend mode for Layer 7 routing
|
||||
keywords: routing, proxy
|
||||
title: Specify a routing mode
|
||||
description: Learn about task and VIP backend routing modes for Layer 7 routing
|
||||
keywords: routing, proxy, interlock
|
||||
---
|
||||
|
||||
## VIP Mode
|
||||
You can publish services using "vip" and "task" backend routing modes.
|
||||
|
||||
VIP mode is an alternative mode of routing in which Interlock uses the Swarm service VIP as the backend IP instead of container IPs.
|
||||
Traffic to the frontend route is L7 load balanced to the Swarm service VIP which L4 load balances to backend tasks.
|
||||
VIP mode can be useful to reduce the amount of churn in Interlock proxy service configuration, which may be an advantage in highly dynamic environments.
|
||||
It optimizes for fewer proxy updates in a tradeoff for a reduced feature set.
|
||||
Most kinds of application updates do not require configuring backends in VIP mode.
|
||||
|
||||
#### Task Routing Mode
|
||||
## Task routing mode
|
||||
|
||||
Task routing is the default Interlock behavior and the default backend mode if one is not specified.
|
||||
In task routing mode, Interlock uses backend task IPs to route traffic from the proxy to each container.
|
||||
|
@ -20,16 +14,21 @@ Traffic to the frontend route is L7 load balanced directly to service tasks.
|
|||
This allows for per-container routing functionality such as sticky sessions.
|
||||
Task routing mode applies L7 routing and then sends packets directly to a container.
|
||||
|
||||
|
||||

|
||||
|
||||
#### VIP Routing Mode
|
||||
## VIP routing mode
|
||||
|
||||
VIP mode is an alternative mode of routing in which Interlock uses the Swarm service VIP as the backend IP instead of container IPs.
|
||||
Traffic to the frontend route is L7 load balanced to the Swarm service VIP, which L4 load balances to backend tasks.
|
||||
VIP mode can be useful to reduce the amount of churn in Interlock proxy service configuration, which can be an advantage in highly dynamic environments.
|
||||
|
||||
VIP mode optimizes for fewer proxy updates in a tradeoff for a reduced feature set.
|
||||
Most application updates do not require configuring backends in VIP mode.
|
||||
|
||||
In VIP routing mode, Interlock uses the service VIP (a persistent endpoint that exists from service creation to service deletion) as the proxy backend.
VIP routing mode was introduced in Universal Control Plane (UCP) 3.0 version 3.0.3 and 3.1 version 3.1.2.
VIP routing mode applies L7 routing and then sends packets to the Swarm L4 load balancer, which routes traffic to service containers.
|
||||
|
||||
|
||||

|
||||
|
||||
While VIP mode provides endpoint stability in the face of application churn, it cannot support sticky sessions because sticky sessions depend on routing directly to container IPs.
|
||||
|
@ -39,19 +38,18 @@ Because VIP mode routes by service IP rather than by task IP it also affects the
|
|||
In task mode a canary service with one task next to an existing service with four tasks represents one out of five total tasks, so the canary will receive 20% of incoming requests.
|
||||
By contrast the same canary service in VIP mode will receive 50% of incoming requests, because it represents one out of two total services.
|
||||
|
||||
#### Usage
|
||||
|
||||
### Usage
|
||||
You can set the backend mode on a per-service basis, which means that some applications can be deployed in task mode, while others are deployed in VIP mode.
|
||||
The following label must be applied to services to use Interlock VIP mode:
|
||||
|
||||
The default backend mode is `task`. If a label is set to `task` or a label does not exist, then Interlock uses the `task` routing mode.
|
||||
|
||||
To use Interlock VIP mode, the following label must be applied:
|
||||
|
||||
```
|
||||
com.docker.lb.backend_mode=vip
|
||||
```
|
||||
|
||||
The default backend mode is `task`.
|
||||
If the label is set to `task` or the label does not exist then Interlock will use `task` routing mode.
|
||||
|
||||
In VIP mode the following non-exhaustive list of application events will not require proxy reconfiguration:
|
||||
In VIP mode, the following non-exhaustive list of application events does not require proxy reconfiguration:
|
||||
|
||||
- Service replica increase/decrease
|
||||
- New image deployment
|
||||
|
@ -59,9 +57,120 @@ In VIP mode the following non-exhaustive list of application events will not req
|
|||
- Add/Remove labels
|
||||
- Add/Remove environment variables
|
||||
- Rescheduling a failed application task
|
||||
- ...
|
||||
|
||||
The following two updates still require a proxy reconfiguration (because these actions will create or destroy a service VIP):
|
||||
The following two updates still require a proxy reconfiguration (because these actions create or destroy a service VIP):
|
||||
|
||||
- Add/Remove a network on a service
|
||||
- Deployment/Deletion of a service
|
||||
|
||||
#### Publish a default host service
|
||||
|
||||
The following example publishes a service to be a default host. The service responds
|
||||
whenever there is a request to a host that is not configured.
|
||||
|
||||
First, create an overlay network so that service traffic is isolated and secure:
|
||||
|
||||
```bash
|
||||
$> docker network create -d overlay demo
|
||||
1se1glh749q1i4pw0kf26mfx5
|
||||
```
|
||||
|
||||
Next, create the initial service:
|
||||
|
||||
```bash
|
||||
$> docker service create \
|
||||
--name demo-default \
|
||||
--network demo \
|
||||
--detach=false \
|
||||
--replicas=1 \
|
||||
--label com.docker.lb.default_backend=true \
|
||||
--label com.docker.lb.port=8080 \
|
||||
ehazlett/interlock-default-app
|
||||
```
|
||||
|
||||
Interlock detects when the service is available and publishes it. After tasks are running
|
||||
and the proxy service is updated, the application is available via any URL that is not
|
||||
configured:
|
||||
|
||||
|
||||

|
||||
|
||||
#### Publish a service using VIP backend mode
|
||||
|
||||
1. Create an overlay network so that service traffic is isolated and secure:
|
||||
|
||||
```bash
|
||||
$> docker network create -d overlay demo
|
||||
1se1glh749q1i4pw0kf26mfx5
|
||||
```
|
||||
|
||||
2. Create the initial service:
|
||||
|
||||
```bash
|
||||
$> docker service create \
|
||||
--name demo \
|
||||
--network demo \
|
||||
--detach=false \
|
||||
--replicas=4 \
|
||||
--label com.docker.lb.hosts=demo.local \
|
||||
--label com.docker.lb.port=8080 \
|
||||
--label com.docker.lb.backend_mode=vip \
|
||||
--env METADATA="demo-vip-1" \
|
||||
ehazlett/docker-demo
|
||||
```
|
||||
|
||||
Interlock detects when the service is available and publishes it. After tasks are running
|
||||
and the proxy service is updated, the application should be available via `http://demo.local`:
|
||||
|
||||
```bash
|
||||
$> curl -vs -H "Host: demo.local" http://127.0.0.1/ping
|
||||
* Trying 127.0.0.1...
|
||||
* TCP_NODELAY set
|
||||
* Connected to demo.local (127.0.0.1) port 80 (#0)
|
||||
> GET /ping HTTP/1.1
|
||||
> Host: demo.local
|
||||
> User-Agent: curl/7.54.0
|
||||
> Accept: */*
|
||||
>
|
||||
< HTTP/1.1 200 OK
|
||||
< Server: nginx/1.13.6
|
||||
< Date: Wed, 08 Nov 2017 20:28:26 GMT
|
||||
< Content-Type: text/plain; charset=utf-8
|
||||
< Content-Length: 120
|
||||
< Connection: keep-alive
|
||||
< Set-Cookie: session=1510172906715624280; Path=/; Expires=Thu, 09 Nov 2017 20:28:26 GMT; Max-Age=86400
|
||||
< x-request-id: f884cf37e8331612b8e7630ad0ee4e0d
|
||||
< x-proxy-id: 5ad7c31f9f00
|
||||
< x-server-info: interlock/2.0.0-development (147ff2b1) linux/amd64
|
||||
< x-upstream-addr: 10.0.2.9:8080
|
||||
< x-upstream-response-time: 1510172906.714
|
||||
<
|
||||
{"instance":"df20f55fc943","version":"0.1","metadata":"demo","request_id":"f884cf37e8331612b8e7630ad0ee4e0d"}
|
||||
```
|
||||
|
||||
Instead of using each task IP for load balancing, configuring VIP mode causes Interlock to use
the virtual IPs of the service. Inspecting the service shows the VIPs:
|
||||
|
||||
```
|
||||
"Endpoint": {
|
||||
"Spec": {
|
||||
"Mode": "vip"
|
||||
|
||||
},
|
||||
"VirtualIPs": [
|
||||
{
|
||||
"NetworkID": "jed11c1x685a1r8acirk2ylol",
|
||||
"Addr": "10.0.2.9/24"
|
||||
}
|
||||
]
|
||||
}
|
||||
|
||||
```
|
||||
|
||||
In this case, Interlock configures a single upstream for the host using the IP "10.0.2.9". Interlock
skips further proxy updates as long as there is at least one replica for the service, because the only upstream is the VIP.
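If you want to confirm which VIP Interlock will use, a targeted inspect of the service (named `demo` in this example) returns just the virtual IPs; the output below is abbreviated from the inspection above:

```bash
$> docker service inspect demo --format '{{ json .Endpoint.VirtualIPs }}'
[{"NetworkID":"jed11c1x685a1r8acirk2ylol","Addr":"10.0.2.9/24"}]
```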
|
||||
|
||||
Swarm routes requests for the VIP in a round robin fashion at L4. This means that the following Interlock features are
|
||||
incompatible with VIP mode:
|
||||
|
||||
- Sticky sessions
|
||||
|
|
|
@ -1,15 +1,13 @@
|
|||
---
|
||||
title: Layer 7 routing labels reference
|
||||
title: Use layer 7 routing labels
|
||||
description: Learn about the labels you can use in your swarm services to route
|
||||
layer 7 traffic to them.
|
||||
layer 7 traffic.
|
||||
keywords: routing, proxy
|
||||
---
|
||||
|
||||
Once the layer 7 routing solution is enabled, you can
|
||||
After you enable the layer 7 routing solution, you can
|
||||
[start using it in your swarm services](index.md).
|
||||
|
||||
The following labels are available for you to use in swarm services:
|
||||
|
||||
|
||||
| Label | Description | Example |
|
||||
|:---------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------|
|
||||
|
@ -22,7 +20,7 @@ The following labels are available for you to use in swarm services:
|
|||
| `com.docker.lb.ssl_key` | Docker secret to use for the SSL key. | `example.com.key` |
|
||||
| `com.docker.lb.websocket_endpoints` | Comma separated list of endpoints to configure to be upgraded for websockets. | `/ws,/foo` |
|
||||
| `com.docker.lb.service_cluster` | Name of the service cluster to use for the application. | `us-east` |
|
||||
| `com.docker.lb.sticky_session_cookie` | Cookie to use for sticky sessions. | `none` |
|
||||
| `com.docker.lb.redirects` | Semi-colon separated list of redirects to add in the format of `<source>,<target>`. Example: `http://old.example.com,http://new.example.com;` | `none` |
|
||||
| `com.docker.lb.sticky_session_cookie` | Cookie to use for sticky sessions. | `app_session` |
|
||||
| `com.docker.lb.redirects` | Semi-colon separated list of redirects to add in the format of `<source>,<target>`. | `http://old.example.com,http://new.example.com;` |
|
||||
| `com.docker.lb.ssl_passthrough` | Enable SSL passthrough. | `false` |
|
||||
| `com.docker.lb.backend_mode` | Select the backend mode that the proxy should use to access the upstreams. Defaults to `task`. | `vip` |
|
||||
|
|
|
@ -1,69 +1,64 @@
|
|||
---
|
||||
title: Application redirects
|
||||
title: Implement application redirects
|
||||
description: Learn how to implement redirects using swarm services and the
|
||||
layer 7 routing solution for UCP.
|
||||
keywords: routing, proxy, redirects
|
||||
keywords: routing, proxy, redirects, interlock
|
||||
---
|
||||
|
||||
Once the [layer 7 routing solution is enabled](../deploy/index.md), you can
|
||||
start using it in your swarm services. In this example we'll deploy a simple
|
||||
service that can be reached at `app.example.org`. We'll also redirect
|
||||
requests to `old.example.org` to that service.
|
||||
The following example publishes a service and configures a redirect from `old.local` to `new.local`.
|
||||
|
||||
To do that, create a docker-compose.yml file with:
|
||||
|
||||
```yaml
|
||||
version: "3.2"
|
||||
|
||||
services:
|
||||
demo:
|
||||
image: ehazlett/docker-demo
|
||||
deploy:
|
||||
replicas: 1
|
||||
labels:
|
||||
com.docker.lb.hosts: app.example.org,old.example.org
|
||||
com.docker.lb.network: demo-network
|
||||
com.docker.lb.port: 8080
|
||||
com.docker.lb.redirects: http://old.example.org,http://app.example.org
|
||||
networks:
|
||||
- demo-network
|
||||
|
||||
networks:
|
||||
demo-network:
|
||||
driver: overlay
|
||||
```
|
||||
|
||||
Note that the demo service has labels to signal that traffic for both
|
||||
`app.example.org` and `old.example.org` should be routed to this service.
|
||||
There's also a label indicating that all traffic directed to `old.example.org`
|
||||
should be redirected to `app.example.org`.
|
||||
|
||||
Set up your CLI client with a [UCP client bundle](../../user-access/cli.md),
|
||||
and deploy the service:
|
||||
First, create an overlay network so that service traffic is isolated and secure:
|
||||
|
||||
```bash
|
||||
docker stack deploy --compose-file docker-compose.yml demo
|
||||
$> docker network create -d overlay demo
|
||||
1se1glh749q1i4pw0kf26mfx5
|
||||
```
|
||||
|
||||
You can also use the CLI to test if the redirect is working, by running:
|
||||
Next, create the service with the redirect:
|
||||
|
||||
```bash
|
||||
curl --head --header "Host: old.example.org" http://<ucp-ip>:<http-port>
|
||||
$> docker service create \
|
||||
--name demo \
|
||||
--network demo \
|
||||
--detach=false \
|
||||
--label com.docker.lb.hosts=old.local,new.local \
|
||||
--label com.docker.lb.port=8080 \
|
||||
--label com.docker.lb.redirects=http://old.local,http://new.local \
|
||||
--env METADATA="demo-new" \
|
||||
ehazlett/docker-demo
|
||||
```
|
||||
|
||||
You should see something like:
|
||||
Interlock detects when the service is available and publishes it. After tasks are running
|
||||
and the proxy service is updated, the application is available via `http://new.local`
|
||||
with a redirect configured that sends `http://old.local` to `http://new.local`:
|
||||
|
||||
```none
|
||||
HTTP/1.1 302 Moved Temporarily
|
||||
Server: nginx/1.13.8
|
||||
Date: Thu, 29 Mar 2018 23:16:46 GMT
|
||||
Content-Type: text/html
|
||||
Content-Length: 161
|
||||
Connection: keep-alive
|
||||
Location: http://app.example.org/
|
||||
```bash
|
||||
$> curl -vs -H "Host: old.local" http://127.0.0.1
|
||||
* Rebuilt URL to: http://127.0.0.1/
|
||||
* Trying 127.0.0.1...
|
||||
* TCP_NODELAY set
|
||||
* Connected to 127.0.0.1 (127.0.0.1) port 80 (#0)
|
||||
> GET / HTTP/1.1
|
||||
> Host: old.local
|
||||
> User-Agent: curl/7.54.0
|
||||
> Accept: */*
|
||||
>
|
||||
< HTTP/1.1 302 Moved Temporarily
|
||||
< Server: nginx/1.13.6
|
||||
< Date: Wed, 08 Nov 2017 19:06:27 GMT
|
||||
< Content-Type: text/html
|
||||
< Content-Length: 161
|
||||
< Connection: keep-alive
|
||||
< Location: http://new.local/
|
||||
< x-request-id: c4128318413b589cafb6d9ff8b2aef17
|
||||
< x-proxy-id: 48854cd435a4
|
||||
< x-server-info: interlock/2.0.0-development (147ff2b1) linux/amd64
|
||||
<
|
||||
<html>
|
||||
<head><title>302 Found</title></head>
|
||||
<body bgcolor="white">
|
||||
<center><h1>302 Found</h1></center>
|
||||
<hr><center>nginx/1.13.6</center>
|
||||
</body>
|
||||
</html>
|
||||
```
|
||||
|
||||
You can also test that the redirect works from your browser. To do so, make sure you
add entries for both `old.local` and `new.local` to your `/etc/hosts` file, mapping them to the IP address
of a UCP node.
|
||||
|
|
|
@ -1,24 +1,100 @@
|
|||
---
|
||||
title: Service clusters
|
||||
description: Learn about Interlock, an application routing and load balancing system
|
||||
for Docker Swarm.
|
||||
keywords: ucp, interlock, load balancing
|
||||
title: Implement service clusters
|
||||
description: Learn how to route traffic to different proxies using a service cluster.
|
||||
keywords: ucp, interlock, load balancing, routing
|
||||
---
|
||||
|
||||
In this example we will configure an eight (8) node Swarm cluster that uses service clusters
|
||||
to route traffic to different proxies. There are three (3) managers
|
||||
and five (5) workers. Four of the workers are configured with node labels to be dedicated
|
||||
ingress cluster load balancer nodes. These will receive all application traffic.
|
||||
## Configure Proxy Services
|
||||
With the node labels, you can reconfigure the Interlock Proxy services to be constrained to the
workers for each region. For example, from a manager, run the following commands to pin the proxy services to the ingress workers:
|
||||
|
||||
This example will not cover the actual deployment of infrastructure.
|
||||
```bash
|
||||
$> docker service update \
|
||||
--constraint-add node.labels.nodetype==loadbalancer \
|
||||
--constraint-add node.labels.region==us-east \
|
||||
ucp-interlock-proxy-us-east
|
||||
$> docker service update \
|
||||
--constraint-add node.labels.nodetype==loadbalancer \
|
||||
--constraint-add node.labels.region==us-west \
|
||||
ucp-interlock-proxy-us-west
|
||||
```
|
||||
|
||||
You are now ready to deploy applications. First, create individual networks for each application:
|
||||
|
||||
```bash
|
||||
$> docker network create -d overlay demo-east
|
||||
$> docker network create -d overlay demo-west
|
||||
```
|
||||
|
||||
Next, deploy the application in the `us-east` service cluster:
|
||||
|
||||
```bash
|
||||
$> docker service create \
|
||||
--name demo-east \
|
||||
--network demo-east \
|
||||
--detach=true \
|
||||
--label com.docker.lb.hosts=demo-east.local \
|
||||
--label com.docker.lb.port=8080 \
|
||||
--label com.docker.lb.service_cluster=us-east \
|
||||
--env METADATA="us-east" \
|
||||
ehazlett/docker-demo
|
||||
```
|
||||
|
||||
Now deploy the application in the `us-west` service cluster:
|
||||
|
||||
```bash
|
||||
$> docker service create \
|
||||
--name demo-west \
|
||||
--network demo-west \
|
||||
--detach=true \
|
||||
--label com.docker.lb.hosts=demo-west.local \
|
||||
--label com.docker.lb.port=8080 \
|
||||
--label com.docker.lb.service_cluster=us-west \
|
||||
--env METADATA="us-west" \
|
||||
ehazlett/docker-demo
|
||||
```
|
||||
|
||||
Only the designated service cluster is configured for the applications. For example, the `us-east` service cluster
|
||||
is not configured to serve traffic for the `us-west` service cluster and vice versa. You can observe this when you
|
||||
send requests to each service cluster.
|
||||
|
||||
When you send a request to the `us-east` service cluster, it only knows about the `us-east` application. This example uses IP address lookup from the swarm API, so you must `ssh` to a manager node or configure your shell with a UCP client bundle before testing:
|
||||
|
||||
```bash
|
||||
{% raw %}
|
||||
$> curl -H "Host: demo-east.local" http://$(docker node inspect -f '{{ .Status.Addr }}' lb-00):8080/ping
|
||||
{"instance":"1b2d71619592","version":"0.1","metadata":"us-east","request_id":"3d57404cf90112eee861f9d7955d044b"}
|
||||
$> curl -H "Host: demo-west.local" http://$(docker node inspect -f '{{ .Status.Addr }}' lb-00):8080/ping
|
||||
<html>
|
||||
<head><title>404 Not Found</title></head>
|
||||
<body bgcolor="white">
|
||||
<center><h1>404 Not Found</h1></center>
|
||||
<hr><center>nginx/1.13.6</center>
|
||||
</body>
|
||||
</html>
|
||||
{% endraw %}
|
||||
```
|
||||
|
||||
Application traffic is isolated to each service cluster. Interlock also ensures that a proxy is updated only if it has corresponding updates to its designated service cluster. In this example, updates to the `us-east` cluster do not affect the `us-west` cluster. If there is a problem, the others are not affected.
|
||||
|
||||
## Usage
|
||||
|
||||
The following example configures an eight (8) node Swarm cluster that uses service clusters
|
||||
to route traffic to different proxies. This example includes:
|
||||
|
||||
- Three (3) managers and five (5) workers
|
||||
- Four workers that are configured with node labels to be dedicated
|
||||
ingress cluster load balancer nodes. These nodes receive all application traffic.
|
||||
|
||||
This example does not cover infrastructure deployment.
|
||||
It assumes you have a vanilla Swarm cluster (`docker swarm init` and `docker swarm join` from the nodes).
|
||||
See the [Swarm](https://docs.docker.com/engine/swarm/) documentation if you need help
|
||||
getting a Swarm cluster deployed.
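As a minimal sketch, with placeholder addresses and tokens, bootstrapping such a cluster looks like the following (run the first command on the initial manager and the second on each node that joins):

```bash
$> docker swarm init --advertise-addr <manager-ip>
$> docker swarm join --token <token-from-init-output> <manager-ip>:2377
```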
|
||||
|
||||

|
||||

|
||||
|
||||
We will configure four load balancer worker nodes (`lb-00` through `lb-03`) with node labels in order to pin the Interlock Proxy
|
||||
service for each Interlock service cluster. Once you are logged into one of the Swarm managers run the following to add node labels to the dedicated ingress workers:
|
||||
Configure four load balancer worker nodes (`lb-00` through `lb-03`) with node labels in order to pin the Interlock Proxy
|
||||
service for each Interlock service cluster. After you log in to one of the Swarm managers, run the following commands to add node labels to the dedicated ingress workers:
|
||||
|
||||
```bash
|
||||
$> docker node update --label-add nodetype=loadbalancer --label-add region=us-east lb-00
|
||||
|
@ -31,7 +107,7 @@ $> docker node update --label-add nodetype=loadbalancer --label-add region=us-we
|
|||
lb-03
|
||||
```
|
||||
|
||||
You can inspect each node to ensure the labels were successfully added:
|
||||
Inspect each node to ensure the labels were successfully added:
|
||||
|
||||
```bash
|
||||
{% raw %}
|
||||
|
@ -42,9 +118,9 @@ map[nodetype:loadbalancer region:us-west]
|
|||
{% endraw %}
|
||||
```
|
||||
|
||||
Next, we will create a configuration object for Interlock that contains multiple extensions with varying service clusters.
|
||||
Next, create an Interlock configuration object that contains multiple extensions with varying service clusters.
|
||||
|
||||
Important: The configuration object specified in the following code sample applies to UCP versions 3.0.10 and later, and versions 3.1.4 and later.
|
||||
|
||||
|
||||
If you are working with UCP version 3.0.0 - 3.0.9 or 3.1.0 - 3.1.3, specify `com.docker.ucp.interlock.service-clusters.conf`.
|
||||
|
||||
|
@ -109,105 +185,14 @@ PollInterval = "3s"
|
|||
EOF
|
||||
oqkvv1asncf6p2axhx41vylgt
|
||||
```
|
||||
Note that we are using "host" mode networking in order to use the same ports (`8080` and `8443`) in the cluster. We cannot use ingress
|
||||
networking as it reserves the port across all nodes. If you want to use ingress networking you will have to use different ports
|
||||
Note that "host" mode networking is used in order to use the same ports (`8080` and `8443`) in the cluster. You cannot use ingress
|
||||
networking as it reserves the port across all nodes. If you want to use ingress networking, you must use different ports
|
||||
for each service cluster.
|
||||
|
||||
Next we will create a dedicated network for Interlock and the extensions:
|
||||
Next, create a dedicated network for Interlock and the extensions:
|
||||
|
||||
```bash
|
||||
$> docker network create -d overlay ucp-interlock
|
||||
```
|
||||
|
||||
Now we can create the Interlock service.
|
||||
|
||||
Important: The `--name` value and configuration object specified in the following code sample applies to UCP versions 3.0.10 and later, and versions 3.1.4 and later.
|
||||
|
||||
If you are working with UCP version 3.0.0 - 3.0.9 or 3.1.0 - 3.1.3, specify `--name ucp-interlock-service-clusters` and `src=com.docker.ucp.interlock.service-clusters.conf-1` .
|
||||
|
||||
```bash
|
||||
$> docker service create \
|
||||
--name ucp-interlock \
|
||||
--mount src=/var/run/docker.sock,dst=/var/run/docker.sock,type=bind \
|
||||
--network ucp-interlock \
|
||||
--constraint node.role==manager \
|
||||
--config src=com.docker.ucp.interlock.conf-1,target=/config.toml \
|
||||
{{ page.ucp_org }}/ucp-interlock:{{ page.ucp_version }} run -c /config.toml
|
||||
sjpgq7h621exno6svdnsvpv9z
|
||||
```
|
||||
|
||||
## Configure Proxy Services
|
||||
Once we have the node labels we can re-configure the Interlock Proxy services to be constrained to the
|
||||
workers for each region. Again, from a manager run the following to pin the proxy services to the ingress workers:
|
||||
|
||||
```bash
|
||||
$> docker service update \
|
||||
--constraint-add node.labels.nodetype==loadbalancer \
|
||||
--constraint-add node.labels.region==us-east \
|
||||
ucp-interlock-proxy-us-east
|
||||
$> docker service update \
|
||||
--constraint-add node.labels.nodetype==loadbalancer \
|
||||
--constraint-add node.labels.region==us-west \
|
||||
ucp-interlock-proxy-us-west
|
||||
```
|
||||
|
||||
We are now ready to deploy applications. First we will create individual networks for each application:
|
||||
|
||||
```bash
|
||||
$> docker network create -d overlay demo-east
|
||||
$> docker network create -d overlay demo-west
|
||||
```
|
||||
|
||||
Next we will deploy the application in the `us-east` service cluster:
|
||||
|
||||
```bash
|
||||
$> docker service create \
|
||||
--name demo-east \
|
||||
--network demo-east \
|
||||
--detach=true \
|
||||
--label com.docker.lb.hosts=demo-east.local \
|
||||
--label com.docker.lb.port=8080 \
|
||||
--label com.docker.lb.service_cluster=us-east \
|
||||
--env METADATA="us-east" \
|
||||
ehazlett/docker-demo
|
||||
```
|
||||
|
||||
Now we deploy the application in the `us-west` service cluster:
|
||||
|
||||
```bash
|
||||
$> docker service create \
|
||||
--name demo-west \
|
||||
--network demo-west \
|
||||
--detach=true \
|
||||
--label com.docker.lb.hosts=demo-west.local \
|
||||
--label com.docker.lb.port=8080 \
|
||||
--label com.docker.lb.service_cluster=us-west \
|
||||
--env METADATA="us-west" \
|
||||
ehazlett/docker-demo
|
||||
```
|
||||
|
||||
Only the service cluster that is designated will be configured for the applications. For example, the `us-east` service cluster
|
||||
will not be configured to serve traffic for the `us-west` service cluster and vice versa. We can see this in action when we
|
||||
send requests to each service cluster.
|
||||
|
||||
When we send a request to the `us-east` service cluster it only knows about the `us-east` application. This example uses IP address lookup from the swarm API, so ssh to a manager node or configure your shell with a UCP client bundle before testing:
|
||||
|
||||
```bash
|
||||
{% raw %}
|
||||
$> curl -H "Host: demo-east.local" http://$(docker node inspect -f '{{ .Status.Addr }}' lb-00):8080/ping
|
||||
{"instance":"1b2d71619592","version":"0.1","metadata":"us-east","request_id":"3d57404cf90112eee861f9d7955d044b"}
|
||||
$> curl -H "Host: demo-west.local" http://$(docker node inspect -f '{{ .Status.Addr }}' lb-00):8080/ping
|
||||
<html>
|
||||
<head><title>404 Not Found</title></head>
|
||||
<body bgcolor="white">
|
||||
<center><h1>404 Not Found</h1></center>
|
||||
<hr><center>nginx/1.13.6</center>
|
||||
</body>
|
||||
</html>
|
||||
{% endraw %}
|
||||
```
|
||||
|
||||
Application traffic is isolated to each service cluster. Interlock also ensures that a proxy will only be updated if it has corresponding updates
|
||||
to its designated service cluster. So in this example, updates to the `us-east` cluster will not affect the `us-west` cluster. If there is a problem
|
||||
the others will not be affected.
|
||||
|
||||
Now [enable the Interlock service](../deploy/index.md#enable-layer-7-routing).
|
||||
|
|
|
@ -1,23 +1,26 @@
|
|||
---
|
||||
title: Persistent (sticky) sessions
|
||||
title: Implement persistent (sticky) sessions
|
||||
description: Learn how to configure your swarm services with persistent sessions
|
||||
using UCP.
|
||||
keywords: routing, proxy
|
||||
keywords: routing, proxy, cookies, IP hash
|
||||
---
|
||||
|
||||
In this example we will publish a service and configure the proxy for persistent (sticky) sessions.
|
||||
You can publish a service and configure the proxy for persistent (sticky) sessions using:
|
||||
|
||||
# Cookies
|
||||
In the following example we will show how to configure sticky sessions using cookies.
|
||||
- Cookies
|
||||
- IP hashing
|
||||
|
||||
First we will create an overlay network so that service traffic is isolated and secure:
|
||||
## Cookies
|
||||
To configure sticky sessions using cookies:
|
||||
|
||||
1. Create an overlay network so that service traffic is isolated and secure, as shown in the following example:
|
||||
|
||||
```bash
|
||||
$> docker network create -d overlay demo
|
||||
1se1glh749q1i4pw0kf26mfx5
|
||||
```
|
||||
|
||||
Next we will create the service with the cookie to use for sticky sessions:
|
||||
2. Create a service with the cookie to use for sticky sessions:
|
||||
|
||||
```bash
|
||||
$> docker service create \
|
||||
|
@ -32,9 +35,9 @@ $> docker service create \
|
|||
ehazlett/docker-demo
|
||||
```
|
||||
|
||||
Interlock will detect once the service is available and publish it. Once the tasks are running
|
||||
and the proxy service has been updated the application should be available via `http://demo.local`
|
||||
and configured to use sticky sessions:
|
||||
Interlock detects when the service is available and publishes it. When tasks are running
|
||||
and the proxy service is updated, the application is available via `http://demo.local`
|
||||
and is configured to use sticky sessions:
|
||||
|
||||
```bash
|
||||
$> curl -vs -c cookie.txt -b cookie.txt -H "Host: demo.local" http://127.0.0.1/ping
|
||||
|
@ -64,21 +67,21 @@ $> curl -vs -c cookie.txt -b cookie.txt -H "Host: demo.local" http://127.0.0.1/p
|
|||
{"instance":"9c67a943ffce","version":"0.1","metadata":"demo-sticky","request_id":"3014728b429320f786728401a83246b8"}
|
||||
```
|
||||
|
||||
Notice the `Set-Cookie` from the application. This is stored by the `curl` command and sent with subsequent requests
|
||||
which are pinned to the same instance. If you make a few requests you will notice the same `x-upstream-addr`.
|
||||
Notice the `Set-Cookie` from the application. This is stored by the `curl` command and is sent with subsequent requests,
|
||||
which are pinned to the same instance. If you make a few requests, you will notice the same `x-upstream-addr`.
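For example, assuming the same cookie jar, repeating the request should keep reporting the same upstream address:

```bash
# The x-upstream-addr value should stay the same across iterations
$> for i in 1 2 3; do
     curl -s -o /dev/null -D - -c cookie.txt -b cookie.txt -H "Host: demo.local" http://127.0.0.1/ping | grep -i x-upstream-addr
   done
```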
|
||||
|
||||
# IP Hashing
|
||||
In this example we show how to configure sticky sessions using client IP hashing. This is not as flexible or consistent
|
||||
as cookies but enables workarounds for some applications that cannot use the other method. When using IP hashing you should reconfigure Interlock proxy to use [host mode networking](../deploy/host-mode-networking.md) because the default `ingress` networking mode uses SNAT which obscures client IP addresses.
|
||||
## IP Hashing
|
||||
The following example shows how to configure sticky sessions using client IP hashing. This is not as flexible or consistent
|
||||
as cookies but enables workarounds for some applications that cannot use the other method. When using IP hashing, reconfigure Interlock proxy to use [host mode networking](../config/host-mode-networking.md), because the default `ingress` networking mode uses SNAT, which obscures client IP addresses.
|
||||
|
||||
First we will create an overlay network so that service traffic is isolated and secure:
|
||||
1. Create an overlay network so that service traffic is isolated and secure:
|
||||
|
||||
```bash
|
||||
$> docker network create -d overlay demo
|
||||
1se1glh749q1i4pw0kf26mfx5
|
||||
```
|
||||
|
||||
Next we will create the service with the cookie to use for sticky sessions using IP hashing:
|
||||
2. Create a service with the cookie to use for sticky sessions using IP hashing:
|
||||
|
||||
```bash
|
||||
$> docker service create \
|
||||
|
@ -93,9 +96,9 @@ $> docker service create \
|
|||
ehazlett/docker-demo
|
||||
```
|
||||
|
||||
Interlock will detect once the service is available and publish it. Once the tasks are running
|
||||
and the proxy service has been updated the application should be available via `http://demo.local`
|
||||
and configured to use sticky sessions:
|
||||
Interlock detects when the service is available and publishes it. When tasks are running
|
||||
and the proxy service is updated, the application is available via `http://demo.local`
|
||||
and is configured to use sticky sessions:
|
||||
|
||||
```bash
|
||||
$> curl -vs -H "Host: demo.local" http://127.0.0.1/ping
|
||||
|
@ -122,10 +125,9 @@ $> curl -vs -H "Host: demo.local" http://127.0.0.1/ping
|
|||
{"instance":"9c67a943ffce","version":"0.1","metadata":"demo-sticky","request_id":"3014728b429320f786728401a83246b8"}
|
||||
```
|
||||
|
||||
You can use `docker service scale demo=10` to add some more replicas. Once scaled, you will notice that requests are pinned
|
||||
You can use `docker service scale demo=10` to add more replicas. When scaled, requests are pinned
|
||||
to a specific backend.
|
||||
|
||||
> **Note**: due to the way the IP hashing works for extensions, you will notice a new upstream address when scaling replicas. This is
|
||||
> expected as internally the proxy uses the new set of replicas to decide on a backend on which to pin. Once the upstreams are
|
||||
> determined a new "sticky" backend will be chosen and that will be the dedicated upstream.
|
||||
|
||||
> expected, because internally the proxy uses the new set of replicas to determine a backend on which to pin. When the upstreams are
|
||||
> determined, a new "sticky" backend is chosen as the dedicated upstream.
|
||||
|
|
|
@ -0,0 +1,224 @@
|
|||
---
|
||||
title: Implement applications with SSL
|
||||
description: Learn how to configure your swarm services with SSL.
|
||||
keywords: routing, proxy, tls, ssl
|
||||
redirect_from:
|
||||
- /ee/ucp/interlock/usage/ssl/
|
||||
---
|
||||
|
||||
This topic covers implementing Swarm services with:
|
||||
|
||||
- SSL termination
|
||||
- SSL passthrough
|
||||
|
||||
## SSL termination
|
||||
In the following example, Docker [Secrets](/engine/swarm/secrets/)
|
||||
are used to centrally and securely store SSL certificates in order to terminate SSL at the proxy service.
|
||||
Application traffic is encrypted in transit to the proxy service, which terminates SSL and then
sends unencrypted traffic inside the secure datacenter.
|
||||
|
||||

|
||||
|
||||
First, certificates are generated:
|
||||
|
||||
```bash
|
||||
$> openssl req \
|
||||
-new \
|
||||
-newkey rsa:4096 \
|
||||
-days 3650 \
|
||||
-nodes \
|
||||
-x509 \
|
||||
-subj "/C=US/ST=SomeState/L=SomeCity/O=Interlock/CN=demo.local" \
|
||||
-keyout demo.local.key \
|
||||
-out demo.local.cert
|
||||
```
|
||||
|
||||
Two files are created: `demo.local.cert` and `demo.local.key`. Next, we
|
||||
use these to create Docker Secrets.
|
||||
|
||||
```bash
|
||||
$> docker secret create demo.local.cert demo.local.cert
|
||||
ywn8ykni6cmnq4iz64um1pj7s
|
||||
$> docker secret create demo.local.key demo.local.key
|
||||
e2xo036ukhfapip05c0sizf5w
|
||||
```
|
||||
|
||||
Next, we create an overlay network so that service traffic is isolated and secure:
|
||||
|
||||
```bash
|
||||
$> docker network create -d overlay demo
|
||||
1se1glh749q1i4pw0kf26mfx5
|
||||
```
|
||||
|
||||
```bash
|
||||
$> docker service create \
|
||||
--name demo \
|
||||
--network demo \
|
||||
--label com.docker.lb.hosts=demo.local \
|
||||
--label com.docker.lb.port=8080 \
|
||||
--label com.docker.lb.ssl_cert=demo.local.cert \
|
||||
--label com.docker.lb.ssl_key=demo.local.key \
|
||||
ehazlett/docker-demo
|
||||
6r0wiglf5f3bdpcy6zesh1pzx
|
||||
```
|
||||
|
||||
Interlock detects when the service is available and publishes it. After tasks are running
|
||||
and the proxy service is updated, the application should be available via `https://demo.local`.
|
||||
|
||||
Note: You must have an entry for `demo.local` in your local hosts (i.e. `/etc/hosts`) file.
|
||||
You cannot use a host header as shown in other examples due to the way [SNI](https://tools.ietf.org/html/rfc3546#page-8) works.
|
||||
|
||||
```bash
|
||||
$> curl -vsk https://demo.local/ping
|
||||
* Trying 127.0.0.1...
|
||||
* TCP_NODELAY set
|
||||
* Connected to demo.local (127.0.0.1) port 443 (#0)
|
||||
* ALPN, offering http/1.1
|
||||
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
|
||||
* successfully set certificate verify locations:
|
||||
* CAfile: /etc/ssl/certs/ca-certificates.crt
|
||||
CApath: none
|
||||
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
|
||||
* TLSv1.2 (IN), TLS handshake, Server hello (2):
|
||||
* TLSv1.2 (IN), TLS handshake, Certificate (11):
|
||||
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
|
||||
* TLSv1.2 (IN), TLS handshake, Server finished (14):
|
||||
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
|
||||
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
|
||||
* TLSv1.2 (OUT), TLS handshake, Finished (20):
|
||||
* TLSv1.2 (IN), TLS change cipher, Client hello (1):
|
||||
* TLSv1.2 (IN), TLS handshake, Finished (20):
|
||||
* SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384
|
||||
* ALPN, server accepted to use http/1.1
|
||||
* Server certificate:
|
||||
* subject: C=US; ST=SomeState; L=SomeCity; O=Interlock; CN=demo.local
|
||||
* start date: Nov 8 16:23:03 2017 GMT
|
||||
* expire date: Nov 6 16:23:03 2027 GMT
|
||||
* issuer: C=US; ST=SomeState; L=SomeCity; O=Interlock; CN=demo.local
|
||||
* SSL certificate verify result: self signed certificate (18), continuing anyway.
|
||||
> GET /ping HTTP/1.1
|
||||
> Host: demo.local
|
||||
> User-Agent: curl/7.54.0
|
||||
> Accept: */*
|
||||
>
|
||||
< HTTP/1.1 200 OK
|
||||
< Server: nginx/1.13.6
|
||||
< Date: Wed, 08 Nov 2017 16:26:55 GMT
|
||||
< Content-Type: text/plain; charset=utf-8
|
||||
< Content-Length: 92
|
||||
< Connection: keep-alive
|
||||
< Set-Cookie: session=1510158415298009207; Path=/; Expires=Thu, 09 Nov 2017 16:26:55 GMT; Max-Age=86400
|
||||
< x-request-id: 4b15ab2aaf2e0bbdea31f5e4c6b79ebd
|
||||
< x-proxy-id: a783b7e646af
|
||||
< x-server-info: interlock/2.0.0-development (147ff2b1) linux/amd64
|
||||
< x-upstream-addr: 10.0.2.3:8080
|
||||
|
||||
{"instance":"c2f1afe673d4","version":"0.1",request_id":"7bcec438af14f8875ffc3deab9215bc5"}
|
||||
```
|
||||
|
||||
Because the certificate and key are stored securely in Swarm, you can safely scale this service, as well as the proxy
|
||||
service, and Swarm handles granting access to the credentials as needed.
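For example, scaling the service does not require any manual handling of the certificate or key; Swarm delivers the secrets to the new tasks:

```bash
$> docker service scale demo=4
demo scaled to 4
```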
|
||||
|
||||
## SSL passthrough
|
||||
In the following example, SSL passthrough is used to ensure encrypted communication from the request to the application
|
||||
service. This ensures maximum security because there is no unencrypted transport.
|
||||
|
||||

|
||||
|
||||
First, generate certificates for the application:
|
||||
|
||||
```bash
|
||||
$> openssl req \
|
||||
-new \
|
||||
-newkey rsa:4096 \
|
||||
-days 3650 \
|
||||
-nodes \
|
||||
-x509 \
|
||||
-subj "/C=US/ST=SomeState/L=SomeCity/O=Interlock/CN=demo.local" \
|
||||
-keyout app.key \
|
||||
-out app.cert
|
||||
```
|
||||
|
||||
Two files are created: `app.cert` and `app.key`. Next, we
|
||||
use these to create Docker Secrets.
|
||||
|
||||
```bash
|
||||
$> docker secret create app.cert app.cert
|
||||
ywn8ykni6cmnq4iz64um1pj7s
|
||||
$> docker secret create app.key app.key
|
||||
e2xo036ukhfapip05c0sizf5w
|
||||
```
|
||||
|
||||
Now create an overlay network to isolate and secure service traffic:
|
||||
|
||||
```bash
|
||||
$> docker network create -d overlay demo
|
||||
1se1glh749q1i4pw0kf26mfx5
|
||||
```
|
||||
|
||||
```bash
|
||||
$> docker service create \
|
||||
--name demo \
|
||||
--network demo \
|
||||
--detach=false \
|
||||
--secret source=app.cert,target=/run/secrets/cert.pem \
|
||||
--secret source=app.key,target=/run/secrets/key.pem \
|
||||
--label com.docker.lb.hosts=demo.local \
|
||||
--label com.docker.lb.port=8080 \
|
||||
--label com.docker.lb.ssl_passthrough=true \
|
||||
--env METADATA="demo-ssl-passthrough" \
|
||||
ehazlett/docker-demo --tls-cert=/run/secrets/cert.pem --tls-key=/run/secrets/key.pem
|
||||
```
|
||||
|
||||
Interlock detects when the service is available and publishes it. When tasks are running
|
||||
and the proxy service is updated, the application is available via `https://demo.local`.
|
||||
|
||||
Note: You must have an entry for `demo.local` in your local hosts (i.e. `/etc/hosts`) file.
|
||||
You cannot use a host header as in other examples due to the way [SNI](https://tools.ietf.org/html/rfc3546#page-8) works.
|
||||
|
||||
```bash
|
||||
$> curl -vsk https://demo.local/ping
|
||||
* Trying 127.0.0.1...
|
||||
* TCP_NODELAY set
|
||||
* Connected to demo.local (127.0.0.1) port 443 (#0)
|
||||
* ALPN, offering http/1.1
|
||||
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
|
||||
* successfully set certificate verify locations:
|
||||
* CAfile: /etc/ssl/certs/ca-certificates.crt
|
||||
CApath: none
|
||||
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
|
||||
* TLSv1.2 (IN), TLS handshake, Server hello (2):
|
||||
* TLSv1.2 (IN), TLS handshake, Certificate (11):
|
||||
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
|
||||
* TLSv1.2 (IN), TLS handshake, Server finished (14):
|
||||
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
|
||||
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
|
||||
* TLSv1.2 (OUT), TLS handshake, Finished (20):
|
||||
* TLSv1.2 (IN), TLS change cipher, Client hello (1):
|
||||
* TLSv1.2 (IN), TLS handshake, Finished (20):
|
||||
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
|
||||
* ALPN, server accepted to use http/1.1
|
||||
* Server certificate:
|
||||
* subject: C=US; ST=SomeState; L=SomeCity; O=Interlock; CN=demo.local
|
||||
* start date: Nov 8 16:39:45 2017 GMT
|
||||
* expire date: Nov 6 16:39:45 2027 GMT
|
||||
* issuer: C=US; ST=SomeState; L=SomeCity; O=Interlock; CN=demo.local
|
||||
* SSL certificate verify result: self signed certificate (18), continuing anyway.
|
||||
> GET /ping HTTP/1.1
|
||||
> Host: demo.local
|
||||
> User-Agent: curl/7.54.0
|
||||
> Accept: */*
|
||||
>
|
||||
< HTTP/1.1 200 OK
|
||||
< Connection: close
|
||||
< Set-Cookie: session=1510159255159600720; Path=/; Expires=Thu, 09 Nov 2017 16:40:55 GMT; Max-Age=86400
|
||||
< Date: Wed, 08 Nov 2017 16:40:55 GMT
|
||||
< Content-Length: 78
|
||||
< Content-Type: text/plain; charset=utf-8
|
||||
<
|
||||
{"instance":"327d5a26bc30","version":"0.1","metadata":"demo-ssl-passthrough"}
|
||||
```
|
||||
|
||||
Application traffic travels securely, fully encrypted from the request to the application service.
Notice that Interlock cannot add the metadata response headers (version information, request ID, and so on),
because the traffic is passed through at the TCP level and cannot be modified.
|
|
@ -1,39 +1,30 @@
|
|||
---
|
||||
title: Applications with SSL
|
||||
description: Learn how to configure your swarm services with TLS using the layer
|
||||
7 routing solution for UCP.
|
||||
title: Secure services with TLS
|
||||
description: Learn how to configure your swarm services with TLS.
|
||||
keywords: routing, proxy, tls
|
||||
redirect_from:
|
||||
- /ee/ucp/interlock/usage/ssl/
|
||||
---
|
||||
|
||||
Once the [layer 7 routing solution is enabled](../deploy/index.md), you can
|
||||
start using it in your swarm services. You have two options for securing your
|
||||
After [deploying a layer 7 routing solution](../deploy/index.md), you have two options for securing your
|
||||
services with TLS:
|
||||
|
||||
* Let the proxy terminate the TLS connection. All traffic between end-users and
|
||||
* [Let the proxy terminate the TLS connection.](#let-the-proxy-handle-tls) All traffic between end-users and
|
||||
the proxy is encrypted, but the traffic going between the proxy and your swarm
|
||||
service is not secured.
|
||||
* Let your swarm service terminate the TLS connection. The end-to-end traffic
|
||||
* [Let your swarm service terminate the TLS connection.](#let-your-service-handle-tls) The end-to-end traffic
|
||||
is encrypted and the proxy service allows TLS traffic to passthrough unchanged.
|
||||
|
||||
In this example we'll deploy a service that can be reached at `app.example.org`
|
||||
using these two options.
|
||||
|
||||
No matter how you choose to secure your swarm services, there are two steps to
|
||||
Regardless of the option selected to secure swarm services, there are two steps required to
|
||||
route traffic with TLS:
|
||||
|
||||
1. Create [Docker secrets](/engine/swarm/secrets.md) to manage the private key and
   certificate used for TLS from a central place (see the sketch after this list).
|
||||
2. Add labels to your swarm service for UCP to reconfigure the proxy service.
|
||||
|
||||
|
||||
## Let the proxy handle TLS
|
||||
|
||||
In this example we'll deploy a swarm service and let the proxy service handle
|
||||
The following example deploys a swarm service and lets the proxy service handle
|
||||
the TLS connection. All traffic between the proxy and the swarm service is
|
||||
not secured, so you should only use this option if you trust that no one can
|
||||
monitor traffic inside services running on your datacenter.
|
||||
not secured, so use this option only if you trust that no one can
|
||||
monitor traffic inside services running in your datacenter.
|
||||
|
||||

|
||||
|
||||
|
@ -86,15 +77,15 @@ secrets:
|
|||
file: ./app.example.org.key
|
||||
```
|
||||
|
||||
Notice that the demo service has labels describing that the proxy service should
|
||||
route traffic to `app.example.org` to this service. All traffic between the
|
||||
Notice that the demo service has labels specifying that the proxy service should
|
||||
route `app.example.org` traffic to this service. All traffic between the
|
||||
service and proxy takes place using the `demo-network` network. The service also
|
||||
has labels describing the Docker secrets to use on the proxy service to terminate
|
||||
has labels specifying the Docker secrets to use on the proxy service for terminating
|
||||
the TLS connection.
|
||||
|
||||
Since the private key and certificate are stored as Docker secrets, you can
|
||||
Because the private key and certificate are stored as Docker secrets, you can
|
||||
easily scale the number of replicas used for running the proxy service. Docker
|
||||
takes care of distributing the secrets to the replicas.
|
||||
distributes the secrets to the replicas.
|
||||
|
||||
Set up your CLI client with a [UCP client bundle](../../user-access/cli.md),
|
||||
and deploy the service:
|
||||
|
@ -103,25 +94,24 @@ and deploy the service:
|
|||
docker stack deploy --compose-file docker-compose.yml demo
|
||||
```
|
||||
|
||||
The service is now running. To test that everything is working correctly you
|
||||
first need to update your `/etc/hosts` file to map `app.example.org` to the
|
||||
The service is now running. To test that everything is working correctly, update your `/etc/hosts` file to map `app.example.org` to the
|
||||
IP address of a UCP node.
|
||||
|
||||
In a production deployment, you'll have to create a DNS entry so that your
|
||||
In a production deployment, you must create a DNS entry so that
|
||||
users can access the service using the domain name of your choice.
|
||||
After doing that, you'll be able to access your service at:
|
||||
After creating the DNS entry, you can access your service:
|
||||
|
||||
```bash
|
||||
https://<hostname>:<https-port>
|
||||
```
|
||||
|
||||
Where:
|
||||
* `hostname` is the name you used with the `com.docker.lb.hosts` label.
|
||||
* `https-port` is the port you've configured in the [UCP settings](../deploy/index.md).
|
||||
For this example:
|
||||
* `hostname` is the name you specified with the `com.docker.lb.hosts` label.
|
||||
* `https-port` is the port you configured in the [UCP settings](../deploy/index.md).
|
||||
|
||||
{: .with-border}
|
||||
|
||||
Since we're using self-sign certificates in this example, client tools like
|
||||
Because this example uses self-signed certificates, client tools like
|
||||
browsers display a warning that the connection is insecure.
|
||||
|
||||
You can also test from the CLI:
|
||||
|
@ -132,23 +122,22 @@ curl --insecure \
|
|||
https://<hostname>:<https-port>/ping
|
||||
```
|
||||
|
||||
If everything is properly configured you should get a JSON payload:
|
||||
If everything is properly configured, you should get a JSON payload:
|
||||
|
||||
```json
|
||||
{"instance":"f537436efb04","version":"0.1","request_id":"5a6a0488b20a73801aa89940b6f8c5d2"}
|
||||
```
|
||||
|
||||
Since the proxy uses SNI to decide where to route traffic, make sure you're
|
||||
Because the proxy uses SNI to decide where to route traffic, make sure you are
|
||||
using a version of `curl` that includes the SNI header with insecure requests.
|
||||
If this doesn't happen, `curl` displays an error saying that the SSL handshake
|
||||
was aborterd.
|
||||
Otherwise, `curl` displays an error saying that the SSL handshake
|
||||
was aborted.
|
||||
|
||||
> **Note**: Currently there is no way to update expired certificates using this method.
|
||||
> The proper way is to create a new secret and then update the corresponding service.
|
||||
|
||||
## Let your service handle TLS
|
||||
|
||||
You can also encrypt the traffic from end-users to your swarm service.
|
||||
The second option for securing with TLS involves encrypting traffic from end users to your swarm service.
|
||||
|
||||

|
||||
|
||||
|
@ -189,11 +178,11 @@ secrets:
|
|||
file: ./app.example.org.key
|
||||
```
|
||||
|
||||
Notice that we've update the service to start using the secrets with the
|
||||
The service is updated to start using the secrets with the
|
||||
private key and certificate. The service is also labeled with
|
||||
`com.docker.lb.ssl_passthrough: true`, signaling UCP to configure the proxy
|
||||
service such that TLS traffic for `app.example.org` is passed to the service.
|
||||
|
||||
Since the connection is fully encrypt from end-to-end, the proxy service
|
||||
won't be able to add metadata such as version info or request ID to the
|
||||
Since the connection is fully encrypted from end-to-end, the proxy service
|
||||
cannot add metadata such as version information or request ID to the
|
||||
response headers.
|
||||
|
|
|
@ -1,20 +1,17 @@
|
|||
---
|
||||
title: Websockets
|
||||
description: Learn how to use websocket in your swarm services when using the
|
||||
layer 7 routing solution for UCP.
|
||||
keywords: routing, proxy
|
||||
title: Use websockets
|
||||
description: Learn how to use websockets in your swarm services.
|
||||
keywords: routing, proxy, websockets
|
||||
---
|
||||
|
||||
In this example we will publish a service and configure support for websockets.
|
||||
|
||||
First we will create an overlay network so that service traffic is isolated and secure:
|
||||
First, create an overlay network to isolate and secure service traffic:
|
||||
|
||||
```bash
|
||||
$> docker network create -d overlay demo
|
||||
1se1glh749q1i4pw0kf26mfx5
|
||||
```
|
||||
|
||||
Next we will create the service with websocket endpoints:
|
||||
Next, create the service with websocket endpoints:
|
||||
|
||||
```bash
|
||||
$> docker service create \
|
||||
|
@ -27,10 +24,9 @@ $> docker service create \
|
|||
ehazlett/websocket-chat
|
||||
```
|
||||
|
||||
> **Note**: for this to work you must have an entry for `demo.local` in your local hosts (i.e. `/etc/hosts`) file.
|
||||
> This uses the browser for websocket communication so you will need to have an entry or use a routable domain.
|
||||
|
||||
Interlock will detect once the service is available and publish it. Once the tasks are running
|
||||
and the proxy service has been updated the application should be available via `http://demo.local`. Open
|
||||
two instances of your browser and you should see text on both instances as you type.
|
||||
> **Note**: for this to work, you must have an entry for `demo.local` in your local hosts (i.e. `/etc/hosts`) file.
|
||||
> This uses the browser for websocket communication, so you must have an entry or use a routable domain.
|
||||
|
||||
Interlock detects when the service is available and publishes it. Once tasks are running
|
||||
and the proxy service is updated, the application should be available via `http://demo.local`. Open
|
||||
two instances of your browser and text should be displayed on both instances as you type.
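If you also want a quick check from the CLI, you can attempt a websocket handshake against one of the configured endpoints. The `/ws` path below is an assumption; use whatever you set in `com.docker.lb.websocket_endpoints`. A `101 Switching Protocols` response indicates the upgrade is being proxied:

```bash
# Press Ctrl+C to exit once the 101 response is printed
$> curl -i -N \
   -H "Host: demo.local" \
   -H "Connection: Upgrade" \
   -H "Upgrade: websocket" \
   -H "Sec-WebSocket-Version: 13" \
   -H "Sec-WebSocket-Key: SGVsbG8sIHdvcmxkIQ==" \
   http://127.0.0.1/ws
```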
|
||||
|
|