Update Interlock with TLS

This commit is contained in:
Joao Fernandes 2018-03-28 16:45:07 -07:00 committed by Jim Galasyn
parent 72e6b3d49a
commit af95c50375
5 changed files with 194 additions and 229 deletions

View File

@@ -1715,8 +1715,8 @@ manuals:
path: /ee/ucp/interlock/usage/
- title: Set a default service
path: /ee/ucp/interlock/usage/default-service/
- title: Applications with SSL
path: /ee/ucp/interlock/usage/ssl/
- title: Applications with TLS
path: /ee/ucp/interlock/usage/tls/
- title: Application redirects
path: /ee/ucp/interlock/usage/redirects/
- title: Persistent (sticky) sessions

View File

Binary image changed (22 KiB; not shown).

View File

Binary image added (188 KiB; not shown).

View File

@ -1,227 +0,0 @@
---
title: Applications with SSL
description: Learn how to configure your swarm services with TLS using the layer
7 routing solution for UCP.
keywords: routing, proxy
ui_tabs:
- version: ucp-3.0
orhigher: false
---
{% if include.version=="ucp-3.0" %}
In this example we will publish a demo service to the host `demo.local` using SSL.
# SSL Termination
In this example we use Docker [Secrets](https://docs.docker.com/engine/swarm/secrets/)
to centrally and securely store SSL certificates and terminate SSL at the proxy service.
Application traffic is encrypted in transit to the proxy service, which terminates SSL and then
forwards unencrypted traffic inside the secure datacenter.
![Interlock SSL Termination](interlock_ssl_termination.png)
First we will generate certificates for the example:
```bash
$> openssl req \
-new \
-newkey rsa:4096 \
-days 3650 \
-nodes \
-x509 \
-subj "/C=US/ST=SomeState/L=SomeCity/O=Interlock/CN=demo.local" \
-keyout demo.local.key \
-out demo.local.cert
```
This should result in two files being created: `demo.local.cert` and `demo.local.key`. Next we
will use these to create Docker Secrets.
```bash
$> docker secret create demo.local.cert demo.local.cert
ywn8ykni6cmnq4iz64um1pj7s
$> docker secret create demo.local.key demo.local.key
e2xo036ukhfapip05c0sizf5w
```
Now we will create an overlay network so that service traffic is isolated and secure:
```bash
$> docker network create -d overlay demo
1se1glh749q1i4pw0kf26mfx5
```
Next, create the service and use labels to reference the secrets so that the proxy can terminate SSL for `demo.local`:
```bash
$> docker service create \
--name demo \
--network demo \
--label com.docker.lb.hosts=demo.local \
--label com.docker.lb.port=8080 \
--label com.docker.lb.ssl_cert=demo.local.cert \
--label com.docker.lb.ssl_key=demo.local.key \
ehazlett/docker-demo
6r0wiglf5f3bdpcy6zesh1pzx
```
Interlock detects when the service is available and publishes it. Once the tasks are running
and the proxy service has been updated, the application is available at `https://demo.local`.
Note: for this to work you must have an entry for `demo.local` in your local hosts file (for example, `/etc/hosts`).
You cannot use a host header as in other examples because of how [SNI](https://tools.ietf.org/html/rfc3546#page-8) works.
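For local testing, a minimal sketch of adding that entry (the address is an assumption; the trace below happens to resolve to `127.0.0.1`, but use whichever Swarm node address suits your setup):

```bash
# Hypothetical /etc/hosts entry for local testing
echo "127.0.0.1 demo.local" | sudo tee -a /etc/hosts
```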
```bash
$> curl -vsk https://demo.local/ping
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to demo.local (127.0.0.1) port 443 (#0)
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384
* ALPN, server accepted to use http/1.1
* Server certificate:
* subject: C=US; ST=SomeState; L=SomeCity; O=Interlock; CN=demo.local
* start date: Nov 8 16:23:03 2017 GMT
* expire date: Nov 6 16:23:03 2027 GMT
* issuer: C=US; ST=SomeState; L=SomeCity; O=Interlock; CN=demo.local
* SSL certificate verify result: self signed certificate (18), continuing anyway.
> GET /ping HTTP/1.1
> Host: demo.local
> User-Agent: curl/7.54.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: nginx/1.13.6
< Date: Wed, 08 Nov 2017 16:26:55 GMT
< Content-Type: text/plain; charset=utf-8
< Content-Length: 92
< Connection: keep-alive
< Set-Cookie: session=1510158415298009207; Path=/; Expires=Thu, 09 Nov 2017 16:26:55 GMT; Max-Age=86400
< x-request-id: 4b15ab2aaf2e0bbdea31f5e4c6b79ebd
< x-proxy-id: a783b7e646af
< x-server-info: interlock/2.0.0-development (147ff2b1) linux/amd64
< x-upstream-addr: 10.0.2.3:8080
{"instance":"c2f1afe673d4","version":"0.1",request_id":"7bcec438af14f8875ffc3deab9215bc5"}
```
Since the certificate and key are stored securely in Swarm, you can safely scale this service as well as the proxy
service; Swarm grants access to the credentials only where needed.
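For example, a minimal sketch of scaling the application (no extra handling of the certificates is needed):

```bash
# Scale the demo service; Swarm keeps managing access to the stored secrets
docker service scale demo=3
```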
# SSL Passthrough
In this example we use SSL passthrough to keep communication encrypted from the client all the way to the application
service. This provides maximum security because there is no unencrypted transport.
![Interlock SSL Passthrough](interlock_ssl_passthrough.png)
First we will generate certificates for our application:
```bash
$> openssl req \
-new \
-newkey rsa:4096 \
-days 3650 \
-nodes \
-x509 \
-subj "/C=US/ST=SomeState/L=SomeCity/O=Interlock/CN=demo.local" \
-keyout app.key \
-out app.cert
```
This should result in two files being created: `app.cert` and `app.key`. Next we
will use these to create Docker Secrets.
```bash
$> docker secret create app.cert app.cert
ywn8ykni6cmnq4iz64um1pj7s
$> docker secret create app.key app.key
e2xo036ukhfapip05c0sizf5w
```
Now we will create an overlay network so that service traffic is isolated and secure:
```bash
$> docker network create -d overlay demo
1se1glh749q1i4pw0kf26mfx5
```
Next, create the service with the secrets mounted into the container and the `ssl_passthrough` label set:
```bash
$> docker service create \
--name demo \
--network demo \
--detach=false \
--secret source=app.cert,target=/run/secrets/cert.pem \
--secret source=app.key,target=/run/secrets/key.pem \
--label com.docker.lb.hosts=demo.local \
--label com.docker.lb.port=8080 \
--label com.docker.lb.ssl_passthrough=true \
--env METADATA="demo-ssl-passthrough" \
ehazlett/docker-demo --tls-cert=/run/secrets/cert.pem --tls-key=/run/secrets/key.pem
```
Interlock detects when the service is available and publishes it. Once the tasks are running
and the proxy service has been updated, the application is available at `https://demo.local`.
Note: for this to work you must have an entry for `demo.local` in your local hosts file (for example, `/etc/hosts`).
You cannot use a host header as in other examples because of how [SNI](https://tools.ietf.org/html/rfc3546#page-8) works.
```bash
$> curl -vsk https://demo.local/ping
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to demo.local (127.0.0.1) port 443 (#0)
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* ALPN, server accepted to use http/1.1
* Server certificate:
* subject: C=US; ST=SomeState; L=SomeCity; O=Interlock; CN=demo.local
* start date: Nov 8 16:39:45 2017 GMT
* expire date: Nov 6 16:39:45 2027 GMT
* issuer: C=US; ST=SomeState; L=SomeCity; O=Interlock; CN=demo.local
* SSL certificate verify result: self signed certificate (18), continuing anyway.
> GET /ping HTTP/1.1
> Host: demo.local
> User-Agent: curl/7.54.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Connection: close
< Set-Cookie: session=1510159255159600720; Path=/; Expires=Thu, 09 Nov 2017 16:40:55 GMT; Max-Age=86400
< Date: Wed, 08 Nov 2017 16:40:55 GMT
< Content-Length: 78
< Content-Type: text/plain; charset=utf-8
<
{"instance":"327d5a26bc30","version":"0.1","metadata":"demo-ssl-passthrough"}
```
Application traffic travels fully encrypted from the request all the way to the application service.
Notice that Interlock cannot add its metadata response headers (version info, request ID, and so on),
because with TCP passthrough the proxy never sees the decrypted traffic.
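A quick, hedged way to confirm this from the client is to dump the response headers and check that the proxy's `x-proxy-id` and `x-server-info` headers are absent:

```bash
# Show only the response headers; with passthrough, no x-proxy-* or x-server-info headers should appear
curl -sk -D - -o /dev/null https://demo.local/ping
```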
{% endif %}

View File

@@ -0,0 +1,192 @@
---
title: Applications with TLS
description: Learn how to configure your swarm services with TLS using the layer
7 routing solution for UCP.
keywords: routing, proxy, tls
---
Once the [layer 7 routing solution is enabled](../deploy/index.md), you can
start using it in your swarm services. You have two options for securing your
services with TLS:
* Let the proxy terminate the TLS connection. All traffic between end-users and
the proxy is encrypted, but the traffic going between the proxy and your swarm
service is not secured.
* Let your swarm service terminate the TLS connection. Traffic is encrypted
end to end, and the proxy service passes the TLS traffic through unchanged.
In this example we'll deploy a service that can be reached at `app.example.org`
using these two options.
No matter how you choose to secure your swarm services, there are two steps to
route traffic with TLS:
1. Create [Docker secrets](/engine/swarm/secrets.md) to manage the private key and
certificate used for TLS from a central place (see the sketch after this list).
2. Add labels to your swarm service so that UCP reconfigures the proxy service.
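The compose files in the following sections create these secrets automatically from local files, but as a standalone sketch (assuming you already have the key and certificate files generated in the next step), the first step looks roughly like this:

```bash
# Store an existing certificate and private key as Docker secrets
docker secret create app.example.org.cert app.example.org.cert
docker secret create app.example.org.key app.example.org.key
```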
## Let the proxy handle TLS
In this example we'll deploy a swarm service and let the proxy service handle
the TLS connection. Traffic between the proxy and the swarm service is not
secured, so use this option only if you trust that no one can monitor traffic
between services running in your datacenter.
![TLS Termination](../../images/interlock-tls-1.png)
Start by getting a private key and certificate for the TLS connection. Make
sure the Common Name in the certificate matches the name where your service
is going to be available.
You can generate a self-signed certificate for `app.example.org` by running:
```bash
openssl req \
-new \
-newkey rsa:4096 \
-days 3650 \
-nodes \
-x509 \
-subj "/C=US/ST=CA/L=SF/O=Docker-demo/CN=app.example.org" \
-keyout app.example.org.key \
-out app.example.org.cert
```
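Optionally, confirm that the Common Name in the generated certificate matches the hostname you plan to use:

```bash
# Print the certificate subject; the CN should be app.example.org
openssl x509 -noout -subject -in app.example.org.cert
```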
Then, create a docker-compose.yml file with the following content:
```yml
version: "3.2"

services:
  demo:
    image: ehazlett/docker-demo
    deploy:
      replicas: 1
      labels:
        com.docker.lb.hosts: app.example.org
        com.docker.lb.network: demo-network
        com.docker.lb.port: 8080
        com.docker.lb.ssl_cert: demo_app.example.org.cert
        com.docker.lb.ssl_key: demo_app.example.org.key
    environment:
      METADATA: proxy-handles-tls
    networks:
      - demo-network

networks:
  demo-network:
    driver: overlay

secrets:
  app.example.org.cert:
    file: ./app.example.org.cert
  app.example.org.key:
    file: ./app.example.org.key
```
Notice that the demo service has labels telling the proxy service to route
traffic for `app.example.org` to this service. All traffic between the service
and the proxy takes place over the `demo-network` network. The service also has
labels specifying the Docker secrets the proxy service uses to terminate
the TLS connection.
Since the private key and certificate are stored as Docker secrets, you can
easily scale the number of replicas used for running the proxy service. Docker
takes care of distributing the secrets to the replicas.
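As a hedged example, assuming the proxy service keeps its default `ucp-interlock-proxy` name, scaling it requires no extra certificate handling:

```bash
# Swarm distributes the TLS secrets to the new proxy replicas automatically
docker service scale ucp-interlock-proxy=3
```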
Set up your CLI client with a [UCP client bundle](../../user-access/cli.md),
and deploy the service:
```bash
docker stack deploy --compose-file docker-compose.yml demo
```
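Optionally, check that the stack deployed correctly. These commands assume the `demo` stack name used above, which prefixes the service name as `demo_demo`:

```bash
# List the tasks of the stack and confirm they are running
docker stack ps demo

# Confirm the com.docker.lb.* labels were applied to the service
docker service inspect --format '{{ json .Spec.Labels }}' demo_demo
```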
The service is now running. To test that everything is working correctly you
first need to update your `/etc/hosts` file to map `app.example.org` to the
IP address of a UCP node.
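For example (the address is hypothetical; use the IP of one of your UCP nodes):

```bash
# Map app.example.org to a UCP node for local testing
echo "203.0.113.10 app.example.org" | sudo tee -a /etc/hosts
```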
In a production deployment, you'll have to create a DNS entry so that your
users can access the service using the domain name of your choice.
After doing that, you'll be able to access your service at:
```bash
https://<hostname>:<https-port>
```
Where:
* `hostname` is the name you used with the `com.docker.lb.hosts` label.
* `https-port` is the port you've configured in the [UCP settings](../deploy/index.md).
![Browser screenshot](../../images/interlock-tls-2.png){: .with-border}
Since we're using self-signed certificates in this example, client tools like
browsers display a warning that the connection is insecure.
You can also test from the CLI:
```bash
curl --insecure \
--resolve <hostname>:<https-port>:<ucp-ip-address> \
https://<hostname>:<https-port>/ping
```
If everything is properly configured you should get a JSON payload:
```json
{"instance":"f537436efb04","version":"0.1","request_id":"5a6a0488b20a73801aa89940b6f8c5d2"}
```
Since the proxy uses SNI to decide where to route traffic, make sure you're
using a version of curl that includes the SNI header with insecure requests.
If it doesn't, curl displays an error saying that the SSL handshake
was aborted.
## Let your service handle TLS
If you want to encrypt the traffic from end-users to your swarm service,
use the following docker-compose.yml file instead:
```yml
version: "3.2"

services:
  demo:
    image: ehazlett/docker-demo
    command: --tls-cert=/run/secrets/cert.pem --tls-key=/run/secrets/key.pem
    deploy:
      replicas: 1
      labels:
        com.docker.lb.hosts: app.example.org
        com.docker.lb.network: demo-network
        com.docker.lb.port: 8080
        com.docker.lb.ssl_passthrough: "true"
    environment:
      METADATA: end-to-end-TLS
    networks:
      - demo-network
    secrets:
      - source: app.example.org.cert
        target: /run/secrets/cert.pem
      - source: app.example.org.key
        target: /run/secrets/key.pem

networks:
  demo-network:
    driver: overlay

secrets:
  app.example.org.cert:
    file: ./app.example.org.cert
  app.example.org.key:
    file: ./app.example.org.key
```
Notice that we've updated the service to use the secrets with the
private key and certificate. The service is also labeled with
`com.docker.lb.ssl_passthrough: "true"`, signaling UCP to configure the proxy
service so that TLS traffic for `app.example.org` is passed through to the service.
Since the connection is fully encrypted end to end, the proxy service
can't add metadata such as version info or a request ID to the
response headers.
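As a hedged sanity check, you can confirm that the certificate presented for `app.example.org` is the one you generated for the application (placeholders as in the curl example above):

```bash
# With passthrough, the certificate subject is the application's own, not one held by the proxy
openssl s_client -connect <ucp-ip-address>:<https-port> -servername app.example.org </dev/null 2>/dev/null \
  | openssl x509 -noout -subject
```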