---
description: Learn how to configure your swarm services with persistent sessions
keywords: routing, proxy
---

# Implementing persistent (sticky) sessions

You can publish a service and configure the proxy for persistent (sticky) sessions using:

- Cookies
- IP Hashing

## Cookies

To configure sticky sessions using cookies:

1. Create an overlay network so that service traffic is isolated and secure, as shown in the following example:

   ```bash
   $> docker network create -d overlay demo
   1se1glh749q1i4pw0kf26mfx5
   ```

2. Create a service with the cookie to use for sticky sessions. The `com.docker.lb.sticky_session_cookie` label names the cookie the proxy uses to pin clients; the service name and network below are illustrative:

   ```bash
   # Service name, network, and cookie name are illustrative; adjust as needed.
   $> docker service create \
       --name demo \
       --network demo \
       --label com.docker.lb.hosts=demo.local \
       --label com.docker.lb.sticky_session_cookie=session \
       --label com.docker.lb.port=8080 \
       --env METADATA="demo-sticky" \
       ehazlett/docker-demo
   ```
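
Before testing, you can confirm that the tasks are running (the output below is abbreviated and illustrative):

```bash
$> docker service ps demo
ID        NAME     IMAGE                  NODE     DESIRED STATE   CURRENT STATE
...       demo.1   ehazlett/docker-demo   node-1   Running         Running 30 seconds ago
```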

Interlock detects when the service is available and publishes it. When tasks are running and the proxy service is updated, the application is available via `http://demo.local` and is configured to use sticky sessions:

```bash
$> curl -vs -c cookie.txt -b cookie.txt -H "Host: demo.local" http://127.0.0.1/ping
...
{"instance":"9c67a943ffce","version":"0.1","metadata":"demo-sticky","request_id":"3014728b429320f786728401a83246b8"}
```

Notice the `Set-Cookie` header from the application. This is stored by the `curl` command and is sent with subsequent requests, which are pinned to the same instance. If you make a few requests, you will notice the same `x-upstream-addr`.
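
To verify the pinning (a quick sketch; the upstream address shown is hypothetical), repeat the request with the saved cookie and compare the `x-upstream-addr` header:

```bash
# The address below is a hypothetical example.
$> for i in 1 2 3; do curl -vs -c cookie.txt -b cookie.txt -H "Host: demo.local" http://127.0.0.1/ping 2>&1 | grep -i x-upstream-addr; done
< x-upstream-addr: 10.0.2.5:8080
< x-upstream-addr: 10.0.2.5:8080
< x-upstream-addr: 10.0.2.5:8080
```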

## IP Hashing

The following example shows how to configure sticky sessions using client IP hashing. This is not as flexible or consistent as cookies, but it enables workarounds for some applications that cannot use cookies. When using IP hashing, reconfigure the Interlock proxy to use [host mode networking](../config/host-mode-networking.md), because the default `ingress` networking mode uses SNAT, which obscures client IP addresses.

1. Create an overlay network so that service traffic is isolated and secure:

   ```bash
   $> docker network create -d overlay demo
   1se1glh749q1i4pw0kf26mfx5
   ```

2. Create the service with IP hashing enabled for sticky sessions. The `com.docker.lb.ip_hash` label turns on client-IP hashing; the service name and network below are illustrative:

   ```bash
   # Service name and network are illustrative; adjust as needed.
   $> docker service create \
       --name demo \
       --network demo \
       --label com.docker.lb.hosts=demo.local \
       --label com.docker.lb.port=8080 \
       --label com.docker.lb.ip_hash=true \
       --env METADATA="demo-sticky" \
       ehazlett/docker-demo
   ```

Interlock detects when the service is available and publishes it. When tasks are running and the proxy service is updated, the application is available via `http://demo.local` and is configured to use sticky sessions:

```bash
$> curl -vs -H "Host: demo.local" http://127.0.0.1/ping
...
{"instance":"9c67a943ffce","version":"0.1","metadata":"demo-sticky","request_id":"3014728b429320f786728401a83246b8"}
```

You can use `docker service scale demo=10` to add more replicas. When scaled, requests are pinned to a specific backend.
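
For example, a minimal sketch (the confirmation line follows the usual `docker service scale` output format):

```bash
$> docker service scale demo=10
demo scaled to 10
```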

> **Note**: Due to the way IP hashing works for extensions, you will notice a new upstream address when scaling replicas. This is
> expected, because internally the proxy uses the new set of replicas to determine a backend on which to pin. When the upstreams are
> determined, a new "sticky" backend is chosen as the dedicated upstream.
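
To see this, compare the `x-upstream-addr` header after scaling (the addresses shown are hypothetical); it changes once when the replica set changes and then stays pinned:

```bash
# Hypothetical addresses for illustration.
$> curl -vs -H "Host: demo.local" http://127.0.0.1/ping 2>&1 | grep -i x-upstream-addr
< x-upstream-addr: 10.0.2.9:8080
$> curl -vs -H "Host: demo.local" http://127.0.0.1/ping 2>&1 | grep -i x-upstream-addr
< x-upstream-addr: 10.0.2.9:8080
```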