Fix merge conflicts

Maria Bermudez 2019-05-20 12:21:29 -07:00
parent 708c116dfc
commit 1bca1115a7
16 changed files with 190 additions and 284 deletions

Jenkinsfile vendored
View File

@ -246,4 +246,4 @@ pipeline {
}
}
}
}
}

View File

@ -35,6 +35,7 @@ examples: |-
Images 5 2 16.43 MB 11.63 MB (70%)
Containers 2 0 212 B 212 B (100%)
Local Volumes 2 1 36 B 0 B (0%)
Build Cache 0 0 0B 0B
```
A more detailed view can be requested using the `-v, --verbose` flag:
@ -62,6 +63,14 @@ examples: |-
NAME LINKS SIZE
07c7bdf3e34ab76d921894c2b834f073721fccfbbcba792aa7648e3a7a664c2e 2 36 B
my-named-vol 0 0 B
Build cache usage: 0B
CACHE ID CACHE TYPE SIZE CREATED LAST USED USAGE SHARED
0d8ab63ff30d regular 4.34MB 7 days ago 0 true
189876ac9226 regular 11.5MB 7 days ago 0 true
```
* `SHARED SIZE` is the amount of space that an image shares with another one (i.e. their common data)
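For reference, the summary and verbose listings shown above come from the disk-usage command; a minimal sketch of the two invocations (assuming a local Docker CLI) is:
```bash
# Summary of disk usage by images, containers, local volumes, and build cache
$ docker system df

# Detailed, per-object breakdown of the same information
$ docker system df -v
```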

View File

@ -574,7 +574,6 @@ configs:
file: ./my-credential-spec.json
```
### depends_on
Express dependency between services. Service dependencies cause the following

View File

@ -1,4 +1,5 @@
---
<<<<<<< HEAD
description: Describes how to use the local logging driver.
keywords: local, docker, logging, driver
redirect_from:
@ -18,22 +19,42 @@ uses automatic compression to reduce the size on disk.
> file-format and storage mechanism are designed to be exclusively accessed by
> the Docker daemon, and should not be used by external tools as the
> implementation may change in future releases.
=======
description: Describes how to use the local binary (Protobuf) logging driver.
keywords: local, protobuf, docker, logging, driver
redirect_from:
- /engine/reference/logging/local/
- /engine/admin/logging/local/
title: local binary file Protobuf logging driver
---
This `log-driver` writes to `local` binary files using [Protocol Buffers](https://en.wikipedia.org/wiki/Protocol_Buffers) (Protobuf)
>>>>>>> Sync forked amberjack branch with docs-private (#1068)
## Usage
To use the `local` driver as the default logging driver, set the `log-driver`
and `log-opt` keys to appropriate values in the `daemon.json` file, which is
located in `/etc/docker/` on Linux hosts or
<<<<<<< HEAD
`C:\ProgramData\docker\config\daemon.json` on Windows Server. For more about
configuring Docker using `daemon.json`, see
[daemon.json](/engine/reference/commandline/dockerd.md#daemon-configuration-file).
The following example sets the log driver to `local` and sets the `max-size`
option.
=======
`C:\ProgramData\docker\config\daemon.json` on Windows Server. For more information about
configuring Docker using `daemon.json`, see
[daemon.json](/engine/reference/commandline/dockerd.md#daemon-configuration-file).
The following example sets the log driver to `local`.
>>>>>>> Sync forked amberjack branch with docs-private (#1068)
```json
{
"log-driver": "local",
<<<<<<< HEAD
"log-opts": {
"max-size": "10m"
}
@ -41,13 +62,30 @@ option.
```
Restart Docker for the changes to take effect for newly created containers. Existing containers do not use the new logging configuration.
=======
"log-opts": {}
}
```
> **Note**: `log-opt` configuration options in the `daemon.json` configuration
> file must be provided as strings. Boolean and numeric values (such as the value
> for `max-file` in the example above) must therefore be enclosed in quotes (`"`).
Restart Docker for the changes to take effect for newly created containers.
Existing containers will not use the new logging configuration.
>>>>>>> Sync forked amberjack branch with docs-private (#1068)
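The restart step is described without a command; a minimal sketch for a systemd-based Linux host (an assumption; on Windows Server you would restart the `docker` Windows service instead):
```bash
# Apply the new daemon.json settings; only containers created afterwards pick them up
$ sudo systemctl restart docker
```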
You can set the logging driver for a specific container by using the
`--log-driver` flag to `docker container create` or `docker run`:
```bash
$ docker run \
<<<<<<< HEAD
--log-driver local --log-opt max-size=10m \
=======
--log-driver local --log-opt compress="false" \
>>>>>>> Sync forked amberjack branch with docs-private (#1068)
alpine echo hello world
```
@ -57,6 +95,7 @@ The `local` logging driver supports the following logging options:
| Option | Description | Example value |
|:------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------|
<<<<<<< HEAD
| `max-size` | The maximum size of the log before it is rolled. A positive integer plus a modifier representing the unit of measure (`k`, `m`, or `g`). Defaults to 20m. | `--log-opt max-size=10m` |
| `max-file` | The maximum number of log files that can be present. If rolling the logs creates excess files, the oldest file is removed. **Only effective when `max-size` is also set.** A positive integer. Defaults to 5. | `--log-opt max-file=3` |
| `compress` | Toggle compression of rotated log files. Enabled by default. | `--log-opt compress=false` |
@ -68,4 +107,9 @@ files no larger than 10 megabytes each.
```bash
$ docker run -it --log-opt max-size=10m --log-opt max-file=3 alpine ash
```
```
=======
| `max-size` | The maximum size of each binary log file before rotation. A positive integer plus a modifier representing the unit of measure (`k`, `m`, or `g`). Defaults to `20m`. | `--log-opt max-size=10m` |
| `max-file` | The maximum number of binary log files. If rotating the logs creates an excess file, the oldest file is removed. **Only effective when `max-size` is also set.** A positive integer. Defaults to `5`. | `--log-opt max-file=5` |
| `compress` | Whether or not the binary files should be compressed. Defaults to `true` | `--log-opt compress=true` |
>>>>>>> Sync forked amberjack branch with docs-private (#1068)

View File

@ -78,6 +78,8 @@ to upgrade your installation to the latest release.
* The `--no-image-check` flag has been removed from the `upgrade` command as image check is no longer a part of the upgrade process.
=======
>>>>>>> Sync forked amberjack branch with docs-private (#1068)
# Version 2.6
## 2.6.6

View File

@ -1,24 +1,13 @@
---
<<<<<<< HEAD
title: Configure host mode networking
description: Learn how to configure the UCP layer 7 routing solution with
host mode networking.
keywords: routing, proxy, interlock, load balancing
=======
title: Host mode networking
description: Learn how to configure the UCP layer 7 routing solution with
host mode networking.
keywords: routing, proxy
>>>>>>> Raw content addition
redirect_from:
- /ee/ucp/interlock/usage/host-mode-networking/
- /ee/ucp/interlock/deploy/host-mode-networking/
---
<<<<<<< HEAD
=======
# Configuring host mode networking
>>>>>>> Raw content addition
By default, layer 7 routing components communicate with one another using
overlay networks, but Interlock supports
host mode networking in a variety of ways, including proxy only, Interlock only, application only, and hybrid.
@ -37,22 +26,14 @@ To use host mode networking instead of overlay networking:
## Configuration for a production-grade deployment
If you have not done so, configure the
<<<<<<< HEAD
[layer 7 routing solution for production](../deploy/production.md).
=======
[layer 7 routing solution for production](production.md).
>>>>>>> Raw content addition
The `ucp-interlock-proxy` service replicas should then be
running on their own dedicated nodes.
## Update the ucp-interlock config
<<<<<<< HEAD
[Update the ucp-interlock service configuration](./index.md) so that it uses
=======
[Update the ucp-interlock service configuration](configure.md) so that it uses
>>>>>>> Raw content addition
host mode networking.
Update the `PublishMode` key to:
@ -110,7 +91,6 @@ service is running.
If everything is working correctly, you should get a JSON result like:
<<<<<<< HEAD
{% raw %}
```json
{"instance":"63b855978452", "version":"0.1", "request_id":"d641430be9496937f2669ce6963b67d6"}
@ -118,44 +98,20 @@ If everything is working correctly, you should get a JSON result like:
{% endraw %}
The following example describes how to configure an eight (8) node Swarm cluster that uses host mode
=======
```json
{"instance":"63b855978452", "version":"0.1", "request_id":"d641430be9496937f2669ce6963b67d6"}
```
---------------------------REPLACE WITH THE FOLLOWING INFO??-------------------------------------------------
In this example we will configure an eight (8) node Swarm cluster that uses host mode
>>>>>>> Raw content addition
networking to route traffic without using overlay networks. There are three (3) managers
and five (5) workers. Two of the workers are configured with node labels to be dedicated
ingress cluster load balancer nodes. These will receive all application traffic.
<<<<<<< HEAD
This example does not cover the actual deployment of infrastructure.
=======
This example will not cover the actual deployment of infrastructure.
>>>>>>> Raw content addition
It assumes you have a vanilla Swarm cluster (`docker swarm init` and `docker swarm join` from the nodes).
See the [Swarm](https://docs.docker.com/engine/swarm/) documentation if you need help
getting a Swarm cluster deployed.
<<<<<<< HEAD
Note: When using host mode networking, you cannot use the DNS service discovery because that
requires overlay networking. You can use other tooling such as [Registrator](https://github.com/gliderlabs/registrator)
that will give you that functionality if needed.
Configure the load balancer worker nodes (`lb-00` and `lb-01`) with node labels in order to pin the Interlock Proxy
=======
Note: when using host mode networking you will not be able to use the DNS service discovery as that
requires overlay networking. You can use other tooling such as [Registrator](https://github.com/gliderlabs/registrator)
that will give you that functionality if needed.
We will configure the load balancer worker nodes (`lb-00` and `lb-01`) with node labels in order to pin the Interlock Proxy
>>>>>>> Raw content addition
service. Once you are logged in to one of the Swarm managers, run the following to add node labels
to the dedicated load balancer worker nodes:
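The label commands themselves are elided by the diff hunk that follows; a plausible sketch, using the `nodetype` label that the inspection output below confirms:
```bash
# Label both dedicated load balancer workers so the proxy can be pinned to them
$> docker node update --label-add nodetype=loadbalancer lb-00
lb-00
$> docker node update --label-add nodetype=loadbalancer lb-01
lb-01
```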
@ -168,20 +124,14 @@ lb-01
Inspect each node to ensure the labels were successfully added:
<<<<<<< HEAD
{% raw %}
=======
>>>>>>> Raw content addition
```bash
$> docker node inspect -f '{{ .Spec.Labels }}' lb-00
map[nodetype:loadbalancer]
$> docker node inspect -f '{{ .Spec.Labels }}' lb-01
map[nodetype:loadbalancer]
```
<<<<<<< HEAD
{% endraw %}
=======
>>>>>>> Raw content addition
Next, create a configuration object for Interlock that specifies host mode networking:
@ -193,17 +143,10 @@ PollInterval = "3s"
[Extensions]
[Extensions.default]
<<<<<<< HEAD
Image = "{{ page.ucp_org }}/ucp-interlock-extension:{{ page.ucp_version }}"
Args = []
ServiceName = "interlock-ext"
ProxyImage = "{{ page.ucp_org }}/ucp-interlock-proxy:{{ page.ucp_version }}"
=======
Image = "interlockpreview/interlock-extension-nginx:2.0.0-preview"
Args = []
ServiceName = "interlock-ext"
ProxyImage = "nginx:alpine"
>>>>>>> Raw content addition
ProxyArgs = []
ProxyServiceName = "interlock-proxy"
ProxyConfigPath = "/etc/nginx/nginx.conf"
@ -225,11 +168,7 @@ oqkvv1asncf6p2axhx41vylgt
Note the `PublishMode = "host"` setting. This instructs Interlock to configure the proxy service for host mode networking.
<<<<<<< HEAD
Now create the Interlock service also using host mode networking:
=======
Now we can create the Interlock service also using host mode networking:
>>>>>>> Raw content addition
```bash
$> docker service create \
@ -238,19 +177,11 @@ $> docker service create \
--constraint node.role==manager \
--publish mode=host,target=8080 \
--config src=service.interlock.conf,target=/config.toml \
<<<<<<< HEAD
{{ page.ucp_org }}/ucp-interlock:{{ page.ucp_version }} -D run -c /config.toml
sjpgq7h621exno6svdnsvpv9z
```
## Configure proxy services
=======
interlockpreview/interlock:2.0.0-preview -D run -c /config.toml
sjpgq7h621exno6svdnsvpv9z
```
## Configure Proxy Services
>>>>>>> Raw content addition
With the node labels, you can re-configure the Interlock Proxy services to be constrained to the
workers. From a manager, run the following to pin the proxy services to the load balancer worker nodes:
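The command is not shown in this diff; a sketch under the assumption that the proxy service is named `ucp-interlock-proxy` (in the preview deployment configured above it may be `interlock-proxy`):
```bash
# Constrain the proxy replicas to the labeled load balancer workers,
# running one replica per dedicated node
$> docker service update \
    --constraint-add node.labels.nodetype==loadbalancer \
    --replicas 2 \
    ucp-interlock-proxy
```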

View File

@ -1,33 +1,15 @@
---
<<<<<<< HEAD
title: Configure layer 7 routing service
description: Learn how to configure the layer 7 routing solution for UCP.
keywords: routing, proxy, interlock, load balancing
=======
title: Configuring layer 7 routing service
description: Learn how to configure the layer 7 routing solution for UCP, that allows
you to route traffic to swarm services.
keywords: routing, proxy
>>>>>>> Raw content addition
redirect_from:
- /ee/ucp/interlock/deploy/configure/
- /ee/ucp/interlock/usage/default-service/
---
<<<<<<< HEAD
To further customize the layer 7 routing solution, you must update the
`ucp-interlock` service with a new Docker configuration.
=======
# Configuring layer 7 routing services
You can configure ports for incoming traffic from the UCP web UI.
To further customize the layer 7 routing solution, you must update the
`ucp-interlock` service with a new Docker configuration.
Here's how it works:
>>>>>>> Raw content addition
1. Find out what configuration is currently being used for the `ucp-interlock`
service and save it to a file:
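The command for this step is elided by the hunk below; a hedged sketch (the output file name `config.toml` is illustrative):
```bash
# Find the config object currently attached to the ucp-interlock service
# and dump its contents to a local file
CURRENT_CONFIG_NAME=$(docker service inspect ucp-interlock \
  --format '{{ (index .Spec.TaskTemplate.ContainerSpec.Configs 0).ConfigName }}')
docker config inspect "$CURRENT_CONFIG_NAME" \
  --format '{{ printf "%s" .Spec.Data }}' > config.toml
```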
@ -103,7 +85,6 @@ The core configuration handles the Interlock service itself. These are the config
Interlock must contain at least one extension to service traffic. The following options are available to configure the extensions:
<<<<<<< HEAD
| Option | Type | Description |
|:-------------------|:------------|:-----------------------------------------------------------|
| `Image` | string | Name of the Docker Image to use for the extension service |
@ -119,49 +100,11 @@ Interlock must contain at least one extension to service traffic. The following
| `ProxyContainerLabels` | map[string]string | labels to be added to the proxy service tasks |
| `ProxyServiceName` | string | Name of the proxy service |
| `ProxyConfigPath` | string | Path in the service for the generated proxy config |
=======
| Option | Type | Description |
|:-------------------|:------------------|:------------------------------------------------------------------------------|
| `Image` | string | Name of the Docker image to use for the extension service. |
| `Args` | []string | Arguments to be passed to the Docker extension service upon creation. |
| `Labels` | map[string]string | Labels to add to the extension service. |
| `ServiceName` | string | Name of the extension service. |
| `ProxyImage` | string | Name of the Docker image to use for the proxy service. |
| `ProxyArgs` | []string | Arguments to be passed to the proxy service upon creation. |
| `ProxyLabels` | map[string]string | Labels to add to the proxy service. |
| `ProxyServiceName` | string | Name of the proxy service. |
| `ProxyConfigPath` | string | Path in the service for the generated proxy configuration. |
| `ServiceCluster` | string | Name of the cluster this extension services. |
| `PublishMode` | string | Publish mode for the proxy service. Supported values are `ingress` or `host`. |
| `PublishedPort` | int | Port where the proxy service serves non-TLS traffic. |
| `PublishedSSLPort` | int | Port where the proxy service serves TLS traffic. |
| `Template` | string | Docker configuration object that is used as the extension template. |
| `Config` | Config | Proxy configuration used by the extensions as listed below. |
--------------------------WHICH INFO IS CORRECT???-------------------------------------------
| Option | Type | Description |
| --- | --- | --- |
| `Image` | string | name of the Docker Image to use for the extension service |
| `Args` | []string | arguments to be passed to the Docker extension service upon creation |
| `Labels` | map[string]string | labels to be added to the extension service |
| `ContainerLabels` | map[string]string | labels to be added to the extension service tasks |
| `Constraints` | []string | one or more [constraints](https://docs.docker.com/engine/reference/commandline/service_create/#specify-service-constraints-constraint) to use when scheduling the extension service |
| `PlacementPreferences` | []string | one or more [placement prefs](https://docs.docker.com/engine/reference/commandline/service_create/#specify-service-placement-preferences-placement-pref) to use when scheduling the extension service |
| `ServiceName` | string | name of the extension service |
| `ProxyImage` | string | name of the Docker Image to use for the proxy service |
| `ProxyArgs` | []string | arguments to be passed to the Docker proxy service upon creation |
| `ProxyLabels` | map[string]string | labels to be added to the proxy service |
| `ProxyContainerLabels` | map[string]string | labels to be added to the proxy service tasks |
| `ProxyServiceName` | string | name of the proxy service |
| `ProxyConfigPath` | string | path in the service for the generated proxy config |
>>>>>>> Raw content addition
| `ProxyReplicas` | uint | number of proxy service replicas |
| `ProxyStopSignal` | string | stop signal for the proxy service (e.g. `SIGQUIT`) |
| `ProxyStopGracePeriod` | string | stop grace period for the proxy service (e.g. `5s`) |
| `ProxyConstraints` | []string | one or more [constraints](https://docs.docker.com/engine/reference/commandline/service_create/#specify-service-constraints-constraint) to use when scheduling the proxy service |
| `ProxyPlacementPreferences` | []string | one or more [placement prefs](https://docs.docker.com/engine/reference/commandline/service_create/#specify-service-placement-preferences-placement-pref) to use when scheduling the proxy service |
<<<<<<< HEAD
| `ProxyUpdateDelay` | string | delay between rolling proxy container updates |
| `ServiceCluster` | string | Name of the cluster this extension services |
| `PublishMode` | string (`ingress` or `host`) | Publish mode that the proxy service uses |
@ -169,14 +112,6 @@ Interlock must contain at least one extension to service traffic. The following
| `PublishedSSLPort` | int | Port on which the proxy service serves SSL traffic |
| `Template` | string | Docker configuration object that is used as the extension template |
| `Config` | Config | Proxy configuration used by the extensions as described in the following table |
=======
| `ServiceCluster` | string | name of the cluster this extension services |
| `PublishMode` | string (`ingress` or `host`) | publish mode that the proxy service uses |
| `PublishedPort` | int | port that the proxy service serves non-SSL traffic |
| `PublishedSSLPort` | int | port that the proxy service serves SSL traffic |
| `Template` | string | Docker config object that is used as the extension template |
| `Config` | Config | proxy configuration used by the extensions as listed below |
>>>>>>> Raw content addition
### Proxy
Options are made available to the extensions, and the extensions utilize the options needed for proxy service configuration. This provides overrides to the extension configuration.
@ -186,11 +121,7 @@ different configuration options available. Refer to the documentation for each
- [Nginx](nginx-config.md)
<<<<<<< HEAD
#### Customize the default proxy service
=======
#### Customizing the default proxy service
>>>>>>> Raw content addition
The default proxy service used by UCP to provide layer 7 routing is NGINX. If users try to access a route that hasn't been configured, they will see the default NGINX 404 page:
![Default NGINX page](../../images/interlock-default-service-1.png){: .with-border}
@ -241,17 +172,10 @@ DockerURL = "unix:///var/run/docker.sock"
PollInterval = "3s"
[Extensions.default]
<<<<<<< HEAD
Image = "{{ page.ucp_org }}/interlock-extension-nginx:{{ page.ucp_version }}"
Args = ["-D"]
ServiceName = "interlock-ext"
ProxyImage = "{{ page.ucp_org }}/ucp-interlock-proxy:{{ page.ucp_version }}"
=======
Image = "docker/interlock-extension-nginx:latest"
Args = ["-D"]
ServiceName = "interlock-ext"
ProxyImage = "nginx:alpine"
>>>>>>> Raw content addition
ProxyArgs = []
ProxyServiceName = "interlock-proxy"
ProxyConfigPath = "/etc/nginx/nginx.conf"
@ -272,18 +196,8 @@ PollInterval = "3s"
## Next steps
<<<<<<< HEAD
- [Configure host mode networking](host-mode-networking.md)
- [Configure an nginx extension](nginx-config.md)
- [Use application service labels](service-labels.md)
- [Tune the proxy service](tuning.md)
- [Update Interlock services](updates.md)
=======
- [Using a custom extension template](custom-template.md)
- [Configuring an HAProxy extension](haproxy-config.md)
- [Configuring host mode networking](host-mode-networking.md)
- [Configuring an nginx extension](nginx-config.md)
- [Using application service labels](service-lables.md)
- [Tuning the proxy service](tuning.md)
- [Updating Interlock services](updates.md)
>>>>>>> Raw content addition

View File

@ -1,5 +1,4 @@
---
<<<<<<< HEAD
title: Configure Nginx
description: Learn how to configure an nginx extension
keywords: routing, proxy, interlock, load balancing
@ -25,58 +24,6 @@ available for the nginx extension:
| `SSLCiphers` | string | SSL ciphers to use for the proxy service | `HIGH:!aNULL:!MD5` |
| `SSLProtocols` | string | Enable the specified TLS protocols | `TLSv1.2` |
| `HideInfoHeaders` | bool | Hide proxy-related response headers. | |
=======
title: Nginx configuration
description: Learn how to configure an nginx extension
keywords: routing, proxy
---
# Configuring an nginx extension
By default, nginx is used as a proxy, so the following configuration options are
available for the nginx extension:
| Option | Type | Description |
|:------------------------|:-------|:-----------------------------------------------------------------------------------------------------|
| `User` | string | User to be used in the proxy. |
| `PidPath` | string | Path to the pid file for the proxy service. |
| `MaxConnections` | int | Maximum number of connections for proxy service. |
| `ConnectTimeout` | int | Timeout in seconds for clients to connect. |
| `SendTimeout` | int | Timeout in seconds for the service to send a request to the proxied upstream. |
| `ReadTimeout` | int | Timeout in seconds for the service to read a response from the proxied upstream. |
| `IPHash` | bool | Specifies that requests are distributed between servers based on client IP addresses. |
| `SSLOpts` | string | Options to be passed when configuring SSL. |
| `SSLDefaultDHParam` | int | Size of DH parameters. |
| `SSLDefaultDHParamPath` | string | Path to DH parameters file. |
| `SSLVerify` | string | SSL client verification. |
| `WorkerProcesses` | string | Number of worker processes for the proxy service. |
| `RLimitNoFile` | int | Maximum number of open files for the proxy service. |
| `SSLCiphers` | string | SSL ciphers to use for the proxy service. |
| `SSLProtocols` | string | Enable the specified TLS protocols. |
| `AccessLogPath` | string | Path to use for access logs (default: `/dev/stdout`). |
| `ErrorLogPath` | string | Path to use for error logs (default: `/dev/stdout`). |
| `MainLogFormat` | string | [Format](http://nginx.org/en/docs/http/ngx_http_log_module.html#log_format) to use for main logger. |
| `TraceLogFormat` | string | [Format](http://nginx.org/en/docs/http/ngx_http_log_module.html#log_format) to use for trace logger. |
This is from Interlock docs - which is correct???????
| Option | Type | Description | Defaults |
| --- | --- | --- | --- |
| `User` | string | user to be used in the proxy | `nginx` |
| `PidPath` | string | path to the pid file for the proxy service | `/var/run/proxy.pid` |
| `MaxConnections` | int | maximum number of connections for proxy service | `1024` |
| `ConnectTimeout` | int | timeout in seconds for clients to connect | `600` |
| `SendTimeout` | int | timeout in seconds for the service to send a request to the proxied upstream | `600` |
| `ReadTimeout` | int | timeout in seconds for the service to read a response from the proxied upstream | `600` |
| `SSLOpts` | string | options to be passed when configuring SSL | |
| `SSLDefaultDHParam` | int | size of DH parameters | `1024` |
| `SSLDefaultDHParamPath` | string | path to DH parameters file | |
| `SSLVerify` | string | SSL client verification | `required` |
| `WorkerProcesses` | string | number of worker processes for the proxy service | `1` |
| `RLimitNoFile` | int | maximum number of open files for the proxy service | `65535` |
| `SSLCiphers` | string | SSL ciphers to use for the proxy service | `HIGH:!aNULL:!MD5` |
| `SSLProtocols` | string | enable the specified TLS protocols | `TLSv1.2` |
>>>>>>> Raw content addition
| `KeepaliveTimeout` | string | connection keepalive timeout | `75s` |
| `ClientMaxBodySize` | string | maximum allowed size of the client request body | `1m` |
| `ClientBodyBufferSize` | string | sets buffer size for reading client request body | `8k` |
@ -94,7 +41,3 @@ This is from Interlock docs - which is correct???????
| `MainLogFormat` | string | [Format](http://nginx.org/en/docs/http/ngx_http_log_module.html#log_format) to use for main logger | see default format |
| `TraceLogFormat` | string | [Format](http://nginx.org/en/docs/http/ngx_http_log_module.html#log_format) to use for trace logger | see default format |
<<<<<<< HEAD
=======
>>>>>>> Raw content addition

View File

@ -1,19 +1,9 @@
---
<<<<<<< HEAD
title: Use application service labels
description: Learn how applications use service labels for publishing
keywords: routing, proxy, interlock, load balancing
---
=======
title: Application service labels
description: Learn how applications use service labels for publishing
keywords: routing, proxy
---
# Using application service labels
>>>>>>> Raw content addition
Service labels define hostnames that are routed to the
service, the applicable ports, and other routing configurations. Applications that publish using Interlock use service labels to configure how they are published.
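As an illustration only (the service name, network, and image below are hypothetical; the `com.docker.lb.*` keys are the publishing labels documented in the labels reference), publishing a service might look like:
```bash
# Route requests for app.example.org to port 8080 of this service
$> docker service create \
    --name demo \
    --network demo-network \
    --label com.docker.lb.hosts=app.example.org \
    --label com.docker.lb.port=8080 \
    --label com.docker.lb.network=demo-network \
    nginx:alpine
```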

View File

@ -1,5 +1,4 @@
---
<<<<<<< HEAD
title: Tune the proxy service
description: Learn how to tune the proxy service for environment optimization
keywords: routing, proxy, interlock
@ -9,19 +8,6 @@ keywords: routing, proxy, interlock
Refer to [Proxy service constraints](../deploy/production.md) for information on how to constrain the proxy service to multiple dedicated worker nodes.
## Stop
=======
title: Proxy service tuning
description: Learn how to ?????
keywords: routing, proxy
---
# Tuning the proxy service
## Constraining the proxy service to multiple dedicated worker nodes
Refer to [Proxy service constraints](../deploy/production.md) for information on how to constrain the proxy service to multiple dedicated worker nodes.
## Stopping
>>>>>>> Raw content addition
To adjust the stop signal and period, use the `stop-signal` and `stop-grace-period` settings. For example,
to set the stop signal to `SIGTERM` and grace period to ten (10) seconds, use the following command:
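The command is cut off by this diff; a sketch, assuming the proxy service is named `ucp-interlock-proxy`:
```bash
# Set the stop signal to SIGTERM and the stop grace period to ten seconds
$> docker service update \
    --stop-signal SIGTERM \
    --stop-grace-period 10s \
    ucp-interlock-proxy
```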

View File

@ -1,5 +1,4 @@
---
<<<<<<< HEAD
title: Update Interlock services
description: Learn how to update the UCP layer 7 routing solution services
keywords: routing, proxy, interlock
@ -11,27 +10,12 @@ There are two parts to the update process:
2. Update the Interlock service to use the new configuration and image.
## Update the Interlock configuration
=======
title: Updating Interlock services
description: Learn how to update the UCP layer 7 routing solution services
keywords: routing, proxy
---
# Updating Interlock services
There are two parts to the update process:
1. Updating the Interlock configuration to specify the new extension and/or proxy image versions.
2. Updating the Interlock service to use the new configuration and image.
## Updating the Interlock configuration
>>>>>>> Raw content addition
Create the new configuration:
```bash
$> docker config create service.interlock.conf.v2 <path-to-new-config>
```
<<<<<<< HEAD
## Update the Interlock service
Remove the old configuration and specify the new configuration:
@ -96,27 +80,10 @@ docker/ucp:{{ page.ucp_version }}
```
Interlock starts and checks the config object, which has the new extension version, and
=======
## Updating the Interlock service
Remove the old configuration and specify the new configuration:
```bash
$> docker service update --config-rm service.interlock.conf interlock
$> docker service update --config-add source=service.interlock.conf.v2,target=/config.toml interlock
```
Next, update the Interlock service to use the new image. The following example updates the Interlock core service to use the `sha256:d173014908eb09e9a70d8e5ed845469a61f7cbf4032c28fad0ed9af3fc04ef51`
version of Interlock. Interlock starts and checks the config object, which has the new extension version, and
>>>>>>> Raw content addition
performs a rolling deploy to update all extensions.
```bash
$> docker service update \
<<<<<<< HEAD
--image {{ page.ucp_org }}/ucp-interlock:{{ page.ucp_version }} \
ucp-interlock
=======
--image interlockpreview/interlock@sha256:d173014908eb09e9a70d8e5ed845469a61f7cbf4032c28fad0ed9af3fc04ef51 \
interlock
>>>>>>> Raw content addition
```

View File

@ -128,4 +128,4 @@ to provide more bandwidth for the user services.
## Next steps
- [Configure Interlock](../config/index.md)
- [Deploy applications](../usage.index.md)
- [Deploy applications](./index.md)

View File

@ -0,0 +1,129 @@
---
title: Layer 7 routing upgrade
description: Learn how to upgrade your existing layer 7 routing solution
keywords: routing, proxy, hrm
redirect_from:
- /ee/ucp/interlock/upgrade/
---
The [HTTP routing mesh](/datacenter/ucp/2.2/guides/admin/configure/use-domain-names-to-access-services.md)
functionality was redesigned in UCP 3.0 for greater security and flexibility.
The functionality was also renamed to "layer 7 routing", to make it easier for
new users to get started.
[Learn about the new layer 7 routing functionality](../index.md).
To route traffic to your service you apply specific labels to your swarm
services, describing the hostname for the service and other configurations.
Things work in the same way as they did with the HTTP routing mesh, with the
only difference being that you use different labels.
You don't have to manually update your services. During the upgrade process to
3.0, UCP updates the services to start using new labels.
This article describes the upgrade process for the routing component, so that
you can troubleshoot UCP and your services, in case something goes wrong with
the upgrade.
If you are using the HTTP routing mesh, and start an upgrade to UCP 3.0:
1. UCP starts a reconciliation process to ensure all internal components are
deployed. As part of this, services using HRM labels are inspected.
2. UCP creates the `com.docker.ucp.interlock.conf-<id>` configuration based on the HRM configurations.
3. The HRM service is removed.
4. The `ucp-interlock` service is deployed with the configuration created.
5. The `ucp-interlock` service deploys the `ucp-interlock-extension` and
`ucp-interlock-proxy` services.
The only way to roll back an upgrade is to restore from a backup taken
before the upgrade. If something goes wrong during the upgrade process, you
need to troubleshoot the Interlock services and your own services, since the HRM
service won't be running after the upgrade.
[Learn more about the interlock services and architecture](../architecture.md).
## Check that routing works
After upgrading to UCP 3.0, you should check if all swarm services are still
routable.
For services using HTTP:
```bash
curl -vs http://<ucp-url>:<hrm-http-port>/ -H "Host: <service-hostname>"
```
For services using HTTPS:
```bash
curl -vs https://<ucp-url>:<hrm-https-port>
```
After the upgrade, check that you can still use the same hostnames to access
the swarm services.
## The ucp-interlock services are not running
After the upgrade to UCP 3.0, the following services should be running:
* `ucp-interlock`: Monitors swarm workloads configured to use layer 7 routing.
* `ucp-interlock-extension`: Helper service that generates the configuration for
the `ucp-interlock-proxy` service.
* `ucp-interlock-proxy`: A service that provides load balancing and proxying for
swarm workloads.
To check if these services are running, use a client bundle with administrator
permissions and run:
```bash
docker ps --filter "name=ucp-interlock"
```
* If the `ucp-interlock` service doesn't exist or is not running, something went
wrong with the reconciliation step.
* If this still doesn't work, it's possible that UCP is having problems creating
the `com.docker.ucp.interlock.conf-1` configuration due to name conflicts. Make sure you
don't have any configuration with the same name by running:
```
docker config ls --filter "name=com.docker.ucp.interlock"
```
* If either the `ucp-interlock-extension` or `ucp-interlock-proxy` services are
not running, it's possible that there are port conflicts.
As a workaround, re-enable the layer 7 routing configuration from the
[UCP settings page](deploy/index.md). Make sure the ports you choose are not
being used by other services.
## Workarounds and clean-up
If you have any of the problems above, disable and enable the layer 7 routing
setting on the [UCP settings page](index.md). This redeploys the
services with their default configuration.
When doing that, make sure you specify the same ports you were using for HRM,
and that no other services are listening on those ports.
You should also check if the `ucp-hrm` service is running. If it is, you should
stop it since it can conflict with the `ucp-interlock-proxy` service.
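A minimal sketch of that check (scaling to zero replicas is one way to stop a swarm service without removing it):
```bash
# See whether the old HTTP routing mesh service is still deployed
docker service ls --filter name=ucp-hrm

# If it is, scale it down so it cannot conflict with ucp-interlock-proxy
docker service scale ucp-hrm=0
```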
## Optionally remove labels
As part of the upgrade process UCP adds the
[labels specific to the new layer 7 routing solution](../usage/labels-reference.md).
You can update your services to remove the old HRM labels, since they won't be
used anymore.
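For example, a hedged sketch where `<old-hrm-label>` and `my-service` are placeholders for your own label key and service name:
```bash
# Remove a legacy HRM label from a service; repeat --label-rm for each old label
docker service update --label-rm <old-hrm-label> my-service
```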
## Optionally segregate control traffic
Interlock is designed so that all the control traffic is kept separate from
the application traffic.
If before upgrading you had all your applications attached to the `ucp-hrm`
network, after upgrading you can update your services to start using a
dedicated network for routing that's not shared with other services.
[Learn how to use a dedicated network](../usage/index.md).
If before upgrading you had a dedicated network to route traffic to each service,
Interlock will continue using those dedicated networks. However, the
`ucp-interlock` service will be attached to each of those networks. You can update
the `ucp-interlock` service so that it is only connected to the `ucp-hrm` network.

View File

@ -1,18 +1,10 @@
---
title: Layer 7 routing overview
description: Learn how to route layer 7 traffic to your Swarm services
<<<<<<< HEAD
keywords: routing, UCP, interlock, load balancing
---
Application-layer (Layer 7) routing is the application routing and load balancing (ingress routing) system included with Docker Enterprise for Swarm orchestration. Interlock architecture takes advantage of the underlying Swarm components to provide scalable Layer 7 routing and Layer 4 VIP mode functionality.
=======
keywords: routing, proxy
---
## Introduction
Interlock is the application routing and load balancing (ingress routing) system included with Docker Enterprise for Swarm orchestration. Interlock takes advantage of the underlying Swarm components to provide scalable Layer 7 routing and Layer 4 VIP mode functionality.
>>>>>>> Raw content addition
Interlock is specific to the Swarm orchestrator. If you're trying to route
traffic to your Kubernetes applications, check

View File

@ -151,13 +151,12 @@ able to start using the service from your browser.
## Next steps
- [Publish a service as a canary instance](./canary.md)
- [Usie context or path-based routing](./context.md)
- [Use context or path-based routing](./context.md)
- [Publish a default host service](./interlock-vip-mode.md)
- [Specify a routing mode](./interlock-vip-mode.md)
- [Use routing labels](./labels-reference.md)
- [Implement redirects](./redirects.md)
- [Implement a service cluster](./service-clusters.md)
- [Implement persistent (sticky) sessions](./sessions.md)
- [Implement SSL](./ssl.md)
- [Secure services with TLS](./tls.md)
- [Configure websockets](./websockets.md)

View File

@ -5,7 +5,8 @@ description: Learn about the labels you can use in your swarm services to route
keywords: routing, proxy
---
After you enable the layer 7 routing solution, you can [start using it in your swarm services](index.md).
After you enable the layer 7 routing solution, you can
[start using it in your swarm services](index.md).
| Label | Description | Example |