Merge branch 'master' of https://github.com/docker/docs-private
10
Dockerfile
|
|
@ -66,16 +66,6 @@ COPY --from=docs/docker.github.io:nginx-onbuild /etc/nginx/conf.d/default.conf /
|
|||
# archives less often than new ones.
|
||||
# To add a new archive, add it here
|
||||
# AND ALSO edit _data/docsarchives/archives.yaml to add it to the drop-down
|
||||
COPY --from=docs/docker.github.io:v1.4 ${TARGET} ${TARGET}
|
||||
COPY --from=docs/docker.github.io:v1.5 ${TARGET} ${TARGET}
|
||||
COPY --from=docs/docker.github.io:v1.6 ${TARGET} ${TARGET}
|
||||
COPY --from=docs/docker.github.io:v1.7 ${TARGET} ${TARGET}
|
||||
COPY --from=docs/docker.github.io:v1.8 ${TARGET} ${TARGET}
|
||||
COPY --from=docs/docker.github.io:v1.9 ${TARGET} ${TARGET}
|
||||
COPY --from=docs/docker.github.io:v1.10 ${TARGET} ${TARGET}
|
||||
COPY --from=docs/docker.github.io:v1.11 ${TARGET} ${TARGET}
|
||||
COPY --from=docs/docker.github.io:v1.12 ${TARGET} ${TARGET}
|
||||
COPY --from=docs/docker.github.io:v1.13 ${TARGET} ${TARGET}
|
||||
COPY --from=docs/docker.github.io:v17.03 ${TARGET} ${TARGET}
|
||||
COPY --from=docs/docker.github.io:v17.06 ${TARGET} ${TARGET}
|
||||
COPY --from=docs/docker.github.io:v17.09 ${TARGET} ${TARGET}
|
||||
|
|
|
|||
15
README.md
|
|
@ -317,6 +317,21 @@ still optimizes the bandwidth during browsing).
|
|||
> This is beta content. It is not yet complete and should be considered a work in progress. This content is subject to change without notice.
|
||||
```
|
||||
|
||||
## Accessing unsupported archived documentation
|
||||
|
||||
Supported documentation includes the current version plus the previous five versions.
|
||||
|
||||
If you are using a version of the documentation that is no longer supported, which means that the version number is not listed in the site dropdown list, you can still access that documentation in the following ways:
|
||||
|
||||
- By entering your version number and selecting it from the branch selection list for this repo
|
||||
- By directly accessing the Github URL for your version. For example, https://github.com/docker/docker.github.io/tree/v1.9 for `v1.9`
|
||||
- By running a container of the specific [tag for your documentation version](https://cloud.docker.com/u/docs/repository/docker/docs/docker.github.io/general#read-these-docs-offline)
|
||||
in Docker Hub. For example, run the following to access `v1.9`:
|
||||
|
||||
```bash
|
||||
docker run -it -p 4000:4000 docs/docker.github.io:v1.9
|
||||
```
|
||||
|
||||
## Building archives and the live published docs
|
||||
|
||||
All the images described below are automatically built using Docker Hub. To
|
||||
|
|
|
|||
|
|
@ -13,7 +13,7 @@ safe: false
|
|||
lsi: false
|
||||
url: https://docs.docker.com
|
||||
# This needs to have all the directories you expect to be in the archives (delivered by docs-base in the Dockerfile)
|
||||
keep_files: ["v1.4", "v1.5", "v1.6", "v1.7", "v1.8", "v1.9", "v1.10", "v1.11", "v1.12", "v1.13", "v17.03", "v17.06", "v17.09", "v17.12", "v18.03"]
|
||||
keep_files: ["v17.03", "v17.06", "v17.09", "v17.12", "v18.03"]
|
||||
exclude: ["_scripts", "apidocs/layouts", "Gemfile", "hooks", "index.html", "404.html"]
|
||||
|
||||
# Component versions -- address like site.docker_ce_version
|
||||
|
|
@ -94,7 +94,7 @@ defaults:
|
|||
- scope:
|
||||
path: "install"
|
||||
values:
|
||||
win_latest_build: "docker-18.09.1"
|
||||
win_latest_build: "docker-18.09.2"
|
||||
- scope:
|
||||
path: "datacenter"
|
||||
values:
|
||||
|
|
|
|||
|
|
@ -22,33 +22,3 @@
|
|||
- archive:
|
||||
name: v17.03
|
||||
image: docs/docker.github.io:v17.03
|
||||
- archive:
|
||||
name: v1.13
|
||||
image: docs/docker.github.io:v1.13
|
||||
- archive:
|
||||
name: v1.12
|
||||
image: docs/docker.github.io:v1.12
|
||||
- archive:
|
||||
name: v1.11
|
||||
image: docs/docker.github.io:v1.11
|
||||
- archive:
|
||||
name: v1.10
|
||||
image: docs/docker.github.io:v1.10
|
||||
- archive:
|
||||
name: v1.9
|
||||
image: docs/docker.github.io:v1.9
|
||||
- archive:
|
||||
name: v1.8
|
||||
image: docs/docker.github.io:v1.8
|
||||
- archive:
|
||||
name: v1.7
|
||||
image: docs/docker.github.io:v1.7
|
||||
- archive:
|
||||
name: v1.6
|
||||
image: docs/docker.github.io:v1.6
|
||||
- archive:
|
||||
name: v1.5
|
||||
image: docs/docker.github.io:v1.5
|
||||
- archive:
|
||||
name: v1.4
|
||||
image: docs/docker.github.io:v1.4
|
||||
|
|
|
|||
|
|
@ -1454,7 +1454,7 @@ examples: |-
|
|||
On Windows server, assuming the default configuration, these commands are equivalent
|
||||
and result in `process` isolation:
|
||||
|
||||
```PowerShell
|
||||
```powershell
|
||||
PS C:\> docker run -d microsoft/nanoserver powershell echo process
|
||||
PS C:\> docker run -d --isolation default microsoft/nanoserver powershell echo process
|
||||
PS C:\> docker run -d --isolation process microsoft/nanoserver powershell echo process
|
||||
|
|
@ -1464,7 +1464,7 @@ examples: |-
|
|||
are running against a Windows client-based daemon, these commands are equivalent and
|
||||
result in `hyperv` isolation:
|
||||
|
||||
```PowerShell
|
||||
```powershell
|
||||
PS C:\> docker run -d microsoft/nanoserver powershell echo hyperv
|
||||
PS C:\> docker run -d --isolation default microsoft/nanoserver powershell echo hyperv
|
||||
PS C:\> docker run -d --isolation hyperv microsoft/nanoserver powershell echo hyperv
|
||||
|
|
|
|||
|
|
@ -32,7 +32,7 @@ collection: |
|
|||
nodes, services, containers, volumes, networks, and secrets. [Learn how to manage collections](/datacenter/ucp/2.2/guides/access-control/manage-access-with-collections/).
|
||||
Compose: |
|
||||
[Compose](https://github.com/docker/compose) is a tool for defining and
|
||||
running complex applications with Docker. With compose, you define a
|
||||
running complex applications with Docker. With Compose, you define a
|
||||
multi-container application in a single file, then spin your
|
||||
application up in a single command which does everything that needs to
|
||||
be done to get it running.
|
||||
|
|
|
|||
|
|
@ -1182,6 +1182,8 @@ manuals:
|
|||
title: Add SANs to cluster certificates
|
||||
- path: /ee/ucp/admin/configure/collect-cluster-metrics/
|
||||
title: Collect UCP cluster metrics with Prometheus
|
||||
- path: /ee/ucp/admin/configure/metrics-descriptions/
|
||||
title: Using UCP cluster metrics with Prometheus
|
||||
- path: /ee/ucp/admin/configure/configure-rbac-kube/
|
||||
title: Configure native Kubernetes role-based access control
|
||||
- path: /ee/ucp/admin/configure/create-audit-logs/
|
||||
|
|
@ -1204,8 +1206,6 @@ manuals:
|
|||
title: UCP configuration file
|
||||
- path: /ee/ucp/admin/configure/use-node-local-network-in-swarm/
|
||||
title: Use a local node network in a swarm
|
||||
- path: /ee/ucp/admin/configure/use-nfs-volumes/
|
||||
title: Use NFS persistent storage
|
||||
- path: /ee/ucp/admin/configure/use-your-own-tls-certificates/
|
||||
title: Use your own TLS certificates
|
||||
- path: /ee/ucp/admin/configure/manage-and-deploy-private-images/
|
||||
|
|
@ -1335,6 +1335,8 @@ manuals:
|
|||
path: /ee/ucp/interlock/usage/service-clusters/
|
||||
- title: Context/Path based routing
|
||||
path: /ee/ucp/interlock/usage/context/
|
||||
- title: VIP backend mode
|
||||
path: /ee/ucp/interlock/usage/interlock-vip-mode/
|
||||
- title: Service labels reference
|
||||
path: /ee/ucp/interlock/usage/labels-reference/
|
||||
- title: Layer 7 routing upgrade
|
||||
|
|
@ -1343,6 +1345,8 @@ manuals:
|
|||
section:
|
||||
- title: Access Kubernetes Resources
|
||||
path: /ee/ucp/kubernetes/kube-resources/
|
||||
- title: Use NFS persistent storage
|
||||
path: /ee/ucp/admin/configure/use-nfs-volumes/
|
||||
- title: Configure AWS EBS Storage for Kubernetes
|
||||
path: /ee/ucp/kubernetes/configure-aws-storage/
|
||||
- title: Deploy a workload
|
||||
|
|
@ -2148,8 +2152,8 @@ manuals:
|
|||
title: Monitor the cluster status
|
||||
- path: /ee/dtr/admin/monitor-and-troubleshoot/notary-audit-logs/
|
||||
title: Check Notary audit logs
|
||||
- path: /ee/dtr/admin/monitor-and-troubleshoot/troubleshoot-with-logs/
|
||||
title: Troubleshoot with logs
|
||||
- path: /ee/dtr/admin/monitor-and-troubleshoot/troubleshoot-dtr/
|
||||
title: Troubleshoot Docker Trusted Registry
|
||||
- sectiontitle: Disaster recovery
|
||||
section:
|
||||
- title: Overview
|
||||
|
|
@ -2306,8 +2310,8 @@ manuals:
|
|||
title: Monitor the cluster status
|
||||
- path: /datacenter/dtr/2.5/guides/admin/monitor-and-troubleshoot/notary-audit-logs/
|
||||
title: Check Notary audit logs
|
||||
- path: /datacenter/dtr/2.5/guides/admin/monitor-and-troubleshoot/troubleshoot-with-logs/
|
||||
title: Troubleshoot with logs
|
||||
- path: /datacenter/dtr/2.5/guides/admin/monitor-and-troubleshoot/troubleshoot-dtr/
|
||||
title: Troubleshoot Docker Trusted Registry
|
||||
- path: /datacenter/dtr/2.5/guides/admin/monitor-and-troubleshoot/troubleshoot-batch-jobs/
|
||||
title: Troubleshoot batch jobs
|
||||
- path: /datacenter/dtr/2.5/guides/admin/backups-and-disaster-recovery/
|
||||
|
|
@ -3112,6 +3116,8 @@ manuals:
|
|||
title: File system sharing
|
||||
- path: /docker-for-mac/osxfs-caching/
|
||||
title: Performance tuning for volume mounts (shared filesystems)
|
||||
- path: /docker-for-mac/space/
|
||||
title: Disk utilization
|
||||
- path: /docker-for-mac/troubleshoot/
|
||||
title: Logs and troubleshooting
|
||||
- path: /docker-for-mac/faqs/
|
||||
|
|
|
|||
|
|
@ -137,7 +137,7 @@ You only need to set up the repository once, after which you can install Docker
|
|||
|
||||
{% elsif section == "install-using-yum-repo" %}
|
||||
|
||||
> ***NOTE:*** If you need to run Docker EE 2.0, please see the following instructions:
|
||||
> **Note**: If you need to run Docker EE 2.0, please see the following instructions:
|
||||
> * [18.03](https://docs.docker.com/v18.03/ee/supported-platforms/) - Older Docker EE Engine only release
|
||||
> * [17.06](https://docs.docker.com/v17.06/engine/installation/) - Docker Enterprise Edition 2.0 (Docker Engine,
|
||||
> UCP, and DTR).
|
||||
|
|
|
|||
|
|
@ -13,19 +13,30 @@ for the bash and zsh shell.
|
|||
|
||||
Make sure bash completion is installed.
|
||||
|
||||
* On a current Linux OS (in a non-minimal installation), bash completion should be
|
||||
#### Linux
|
||||
|
||||
1. On a current Linux OS (in a non-minimal installation), bash completion should be
|
||||
available.
|
||||
|
||||
* On a Mac, install with `brew install bash-completion`.
|
||||
|
||||
Place the completion script in `/etc/bash_completion.d/`
|
||||
(or `/usr/local/etc/bash_completion.d/` on a Mac):
|
||||
2. Place the completion script in `/etc/bash_completion.d/`.
|
||||
|
||||
```shell
|
||||
sudo curl -L https://raw.githubusercontent.com/docker/compose/{{site.compose_version}}/contrib/completion/bash/docker-compose -o /etc/bash_completion.d/docker-compose
|
||||
```
|
||||
|
||||
On a Mac, add the following to your `~/.bash_profile`:
|
||||
### Mac
|
||||
|
||||
##### Install via Homebrew
|
||||
|
||||
1. Install with `brew install bash-completion`.
|
||||
2. After the installation, Brew displays the installation path. Make sure to place the completion script in the path.
|
||||
|
||||
For example, when running this command on Mac 10.13.2, place the completion script in `/usr/local/etc/bash_completion.d/`.
|
||||
|
||||
```shell
|
||||
sudo curl -L https://raw.githubusercontent.com/docker/compose/{{site.compose_version}}/contrib/completion/bash/docker-compose -o /usr/local/etc/bash_completion.d/docker-compose
|
||||
```
|
||||
|
||||
3. Add the following to your `~/.bash_profile`:
|
||||
|
||||
```shell
|
||||
if [ -f $(brew --prefix)/etc/bash_completion ]; then
|
||||
|
|
@ -33,13 +44,13 @@ if [ -f $(brew --prefix)/etc/bash_completion ]; then
|
|||
fi
|
||||
```
|
||||
|
||||
You can source your `~/.bash_profile` or launch a new terminal to utilize
|
||||
4. You can source your `~/.bash_profile` or launch a new terminal to utilize
|
||||
completion.
|
||||
|
||||
If you're using MacPorts instead of brew, use the following steps instead:
|
||||
##### Install via MacPorts
|
||||
|
||||
Run `sudo port install bash-completion` to install bash completion.
|
||||
Add the following lines to `~/.bash_profile`:
|
||||
1. Run `sudo port install bash-completion` to install bash completion.
|
||||
2. Add the following lines to `~/.bash_profile`:
|
||||
|
||||
```shell
|
||||
if [ -f /opt/local/etc/profile.d/bash_completion.sh ]; then
|
||||
|
|
@ -47,43 +58,44 @@ if [ -f /opt/local/etc/profile.d/bash_completion.sh ]; then
|
|||
fi
|
||||
```
|
||||
|
||||
You can source your `~/.bash_profile` or launch a new terminal to utilize
|
||||
3. You can source your `~/.bash_profile` or launch a new terminal to utilize
|
||||
completion.
|
||||
|
||||
### Zsh
|
||||
|
||||
#### With oh-my-zsh
|
||||
Make sure you have [installed `oh-my-zsh`](https://ohmyz.sh/) on your computer.
|
||||
|
||||
Add `docker` to the plugins list in `~/.zshrc`:
|
||||
#### With oh-my-zsh shell
|
||||
|
||||
Add `docker` and `docker-compose` to the plugins list in `~/.zshrc` to run autocompletion within the oh-my-zsh shell. In the following example, `...` represent other Zsh plugins you may have installed.
|
||||
|
||||
```shell
|
||||
plugins=(
|
||||
<existing-plugins> docker
|
||||
plugins=(... docker docker-compose
|
||||
)
|
||||
```
|
||||
|
||||
#### Without oh-my-zsh
|
||||
#### Without oh-my-zsh shell
|
||||
|
||||
Place the completion script in your `/path/to/zsh/completion` (typically `~/.zsh/completion/`):
|
||||
1. Place the completion script in your `/path/to/zsh/completion` (typically `~/.zsh/completion/`):
|
||||
|
||||
```shell
|
||||
$ mkdir -p ~/.zsh/completion
|
||||
$ curl -L https://raw.githubusercontent.com/docker/compose/{{site.compose_version}}/contrib/completion/zsh/_docker-compose > ~/.zsh/completion/_docker-compose
|
||||
```
|
||||
|
||||
Include the directory in your `$fpath` by adding in `~/.zshrc`:
|
||||
2. Include the directory in your `$fpath` by adding in `~/.zshrc`:
|
||||
|
||||
```shell
|
||||
fpath=(~/.zsh/completion $fpath)
|
||||
```
|
||||
|
||||
Make sure `compinit` is loaded or do it by adding in `~/.zshrc`:
|
||||
3. Make sure `compinit` is loaded or do it by adding in `~/.zshrc`:
|
||||
|
||||
```shell
|
||||
autoload -Uz compinit && compinit -i
|
||||
```
|
||||
|
||||
Then reload your shell:
|
||||
4. Then reload your shell:
|
||||
|
||||
```shell
|
||||
exec $SHELL -l
|
||||
|
|
|
|||
|
|
@ -205,7 +205,7 @@ the value assigned to a variable that shows up more than once_. The files in the
|
|||
list are processed from the top down. For the same variable specified in file
|
||||
`a.env` and assigned a different value in file `b.env`, if `b.env` is
|
||||
listed below (after), then the value from `b.env` stands. For example, given the
|
||||
following declaration in `docker_compose.yml`:
|
||||
following declaration in `docker-compose.yml`:
|
||||
|
||||
```yaml
|
||||
services:
|
||||
|
|
|
|||
|
|
@ -532,7 +532,7 @@ the value assigned to a variable that shows up more than once_. The files in the
|
|||
list are processed from the top down. For the same variable specified in file
|
||||
`a.env` and assigned a different value in file `b.env`, if `b.env` is
|
||||
listed below (after), then the value from `b.env` stands. For example, given the
|
||||
following declaration in `docker_compose.yml`:
|
||||
following declaration in `docker-compose.yml`:
|
||||
|
||||
```yaml
|
||||
services:
|
||||
|
|
@ -990,7 +990,7 @@ as it has the highest priority. It then connects to `app_net_3`, then
|
|||
app_net_2:
|
||||
app_net_3:
|
||||
|
||||
> **Note:** If multiple networks have the same priority, the connection order
|
||||
> **Note**: If multiple networks have the same priority, the connection order
|
||||
> is undefined.
|
||||
|
||||
### pid
|
||||
|
|
@ -1235,7 +1235,7 @@ volumes:
|
|||
mydata:
|
||||
```
|
||||
|
||||
> **Note:** When creating bind mounts, using the long syntax requires the
|
||||
> **Note**: When creating bind mounts, using the long syntax requires the
|
||||
> referenced folder to be created beforehand. Using the short syntax
|
||||
> creates the folder on the fly if it doesn't exist.
|
||||
> See the [bind mounts documentation](/engine/admin/volumes/bind-mounts.md/#differences-between--v-and---mount-behavior)
|
||||
|
|
@ -1248,7 +1248,7 @@ service.
|
|||
|
||||
volume_driver: mydriver
|
||||
|
||||
> **Note:** In [version 2 files](compose-versioning.md#version-2), this
|
||||
> **Note**: In [version 2 files](compose-versioning.md#version-2), this
|
||||
> option only applies to anonymous volumes (those specified in the image,
|
||||
> or specified under `volumes` without an explicit named volume or host path).
|
||||
> To configure the driver for a named volume, use the `driver` key under the
|
||||
|
|
@ -1298,7 +1298,7 @@ then read-write is used.
|
|||
Each of these is a single value, analogous to its
|
||||
[docker run](/engine/reference/run.md) counterpart.
|
||||
|
||||
> **Note:** The following options were added in [version 2.2](compose-versioning.md#version-22):
|
||||
> **Note**: The following options were added in [version 2.2](compose-versioning.md#version-22):
|
||||
> `cpu_count`, `cpu_percent`, `cpus`.
|
||||
> The following options were added in [version 2.1](compose-versioning.md#version-21):
|
||||
> `oom_kill_disable`, `cpu_period`
|
||||
|
|
|
|||
|
|
@ -279,7 +279,7 @@ at build time is the value in the environment where Compose is running.
|
|||
|
||||
#### cache_from
|
||||
|
||||
> **Note:** This option is new in v3.2
|
||||
> **Note**: This option is new in v3.2
|
||||
|
||||
A list of images that the engine uses for cache resolution.
|
||||
|
||||
|
|
@ -291,7 +291,7 @@ A list of images that the engine uses for cache resolution.
|
|||
|
||||
#### labels
|
||||
|
||||
> **Note:** This option is new in v3.3
|
||||
> **Note**: This option is new in v3.3
|
||||
|
||||
Add metadata to the resulting image using [Docker labels](/engine/userguide/labels-custom-metadata.md).
|
||||
You can use either an array or a dictionary.
|
||||
|
|
@ -490,7 +490,7 @@ an error.
|
|||
|
||||
### credential_spec
|
||||
|
||||
> **Note:** this option was added in v3.3.
|
||||
> **Note**: this option was added in v3.3.
|
||||
|
||||
Configure the credential spec for managed service account. This option is only
|
||||
used for services using Windows containers. The `credential_spec` must be in the
|
||||
|
|
@ -1001,7 +1001,7 @@ the value assigned to a variable that shows up more than once_. The files in the
|
|||
list are processed from the top down. For the same variable specified in file
|
||||
`a.env` and assigned a different value in file `b.env`, if `b.env` is
|
||||
listed below (after), then the value from `b.env` stands. For example, given the
|
||||
following declaration in `docker_compose.yml`:
|
||||
following declaration in `docker-compose.yml`:
|
||||
|
||||
```none
|
||||
services:
|
||||
|
|
@ -1195,7 +1195,7 @@ It's recommended that you use reverse-DNS notation to prevent your labels from c
|
|||
|
||||
### links
|
||||
|
||||
>**Warning**: >The `--link` flag is a legacy feature of Docker. It
|
||||
>**Warning**: The `--link` flag is a legacy feature of Docker. It
|
||||
may eventually be removed. Unless you absolutely need to continue using it, we
|
||||
recommend that you use [user-defined networks](/engine/userguide/networking//#user-defined-networks)
|
||||
to facilitate communication between two containers instead of using `--link`.
|
||||
|
|
@ -1431,7 +1431,7 @@ containers in the bare-metal machine's namespace and vice versa.
|
|||
|
||||
Expose ports.
|
||||
|
||||
> **Note:** Port mapping is incompatible with `network_mode: host`
|
||||
> **Note**: Port mapping is incompatible with `network_mode: host`
|
||||
|
||||
#### Short syntax
|
||||
|
||||
|
|
@ -1473,7 +1473,7 @@ ports:
|
|||
|
||||
```
|
||||
|
||||
> **Note:** The long syntax is new in v3.2
|
||||
> **Note**: The long syntax is new in v3.2
|
||||
|
||||
### restart
|
||||
|
||||
|
|
@ -1810,7 +1810,7 @@ volumes:
|
|||
mydata:
|
||||
```
|
||||
|
||||
> **Note:** The long syntax is new in v3.2
|
||||
> **Note**: The long syntax is new in v3.2
|
||||
|
||||
|
||||
#### Volumes for services, swarms, and stack files
|
||||
|
|
|
|||
|
|
@ -77,14 +77,14 @@ Docker Compose. To do so, follow these steps:
|
|||
version of Compose you want to use:
|
||||
|
||||
```none
|
||||
Invoke-WebRequest "https://github.com/docker/compose/releases/download/$dockerComposeVersion/docker-compose-Windows-x86_64.exe" -UseBasicParsing -OutFile $Env:ProgramFiles\docker\docker-compose.exe
|
||||
Invoke-WebRequest "https://github.com/docker/compose/releases/download/$dockerComposeVersion/docker-compose-Windows-x86_64.exe" -UseBasicParsing -OutFile $Env:ProgramFiles\Docker\Docker\resources\bin\docker-compose.exe
|
||||
```
|
||||
|
||||
For example, to download Compose version {{site.compose_version}},
|
||||
the command is:
|
||||
|
||||
```none
|
||||
Invoke-WebRequest "https://github.com/docker/compose/releases/download/{{site.compose_version}}/docker-compose-Windows-x86_64.exe" -UseBasicParsing -OutFile $Env:ProgramFiles\docker\docker-compose.exe
|
||||
Invoke-WebRequest "https://github.com/docker/compose/releases/download/{{site.compose_version}}/docker-compose-Windows-x86_64.exe" -UseBasicParsing -OutFile $Env:ProgramFiles\Docker\Docker\resources\bin\docker-compose.exe
|
||||
```
|
||||
> Use the latest Compose release number in the download command.
|
||||
>
|
||||
|
|
@ -129,7 +129,7 @@ by step instructions are also included below.
|
|||
sudo chmod +x /usr/local/bin/docker-compose
|
||||
```
|
||||
|
||||
> ***Note:*** If the command `docker-compose` fails after installation, check your path.
|
||||
> **Note**: If the command `docker-compose` fails after installation, check your path.
|
||||
> You can also create a symbolic link to `/usr/bin` or any other directory in your path.
|
||||
|
||||
For example:
|
||||
|
|
|
|||
|
|
@ -169,7 +169,7 @@ The following `strftime` codes are supported:
|
|||
| `%p` | AM or PM. | AM |
|
||||
| `%M` | Minute as a zero-padded decimal number. | 57 |
|
||||
| `%S` | Second as a zero-padded decimal number. | 04 |
|
||||
| `%L` | Milliseconds as a zero-padded decimal number. | 123 |
|
||||
| `%L` | Milliseconds as a zero-padded decimal number. | .123 |
|
||||
| `%f` | Microseconds as a zero-padded decimal number. | 000345 |
|
||||
| `%z` | UTC offset in the form +HHMM or -HHMM. | +1300 |
|
||||
| `%Z` | Time zone name. | PST |
|
||||
|
|
|
|||
|
|
@ -147,6 +147,7 @@ see more options.
|
|||
|:------------------------------|:--------------------------------------------------------------------------------------------------------------|
|
||||
| `none` | No logs are available for the container and `docker logs` does not return any output. |
|
||||
| [`json-file`](json-file.md) | The logs are formatted as JSON. The default logging driver for Docker. |
|
||||
| [`local`](local.md) | Writes logs messages to local filesystem in binary files using Protobuf. |
|
||||
| [`syslog`](syslog.md) | Writes logging messages to the `syslog` facility. The `syslog` daemon must be running on the host machine. |
|
||||
| [`journald`](journald.md) | Writes log messages to `journald`. The `journald` daemon must be running on the host machine. |
|
||||
| [`gelf`](gelf.md) | Writes log messages to a Graylog Extended Log Format (GELF) endpoint such as Graylog or Logstash. |
|
||||
|
|
|
|||
|
|
@ -110,7 +110,7 @@ for advanced [log tag options](log_tags.md).
|
|||
### fluentd-async-connect
|
||||
|
||||
Docker connects to Fluentd in the background. Messages are buffered until the
|
||||
connection is established.
|
||||
connection is established. Defaults to `false`.
|
||||
|
||||
### fluentd-buffer-limit
|
||||
|
||||
|
|
@ -123,11 +123,11 @@ How long to wait between retries. Defaults to 1 second.
|
|||
|
||||
### fluentd-max-retries
|
||||
|
||||
The maximum number of retries. Defaults to 10.
|
||||
The maximum number of retries. Defaults to `10`.
|
||||
|
||||
### fluentd-sub-second-precision
|
||||
|
||||
Generates event logs in nanosecond resolution. Defaults to false.
|
||||
Generates event logs in nanosecond resolution. Defaults to `false`.
|
||||
|
||||
## Fluentd daemon management with Docker
|
||||
|
||||
|
|
|
|||
|
|
@ -24,7 +24,7 @@ a specific plugin using `docker inspect`.
|
|||
## Configure the plugin as the default logging driver
|
||||
|
||||
After the plugin is installed, you can configure the Docker daemon to use it as
|
||||
the default by setting the plugin's name as the value of the `logging-driver`
|
||||
the default by setting the plugin's name as the value of the `log-driver`
|
||||
key in the `daemon.json`, as detailed in the
|
||||
[logging overview](configure.md#configure-the-default-logging-driver). If the
|
||||
logging driver supports additional options, you can set those as the values of
|
||||
|
|
|
|||
|
|
@ -28,8 +28,8 @@ any of the following:
|
|||
|:-----------------|:------------------------------------------------------------------------------------------------|
|
||||
| `no` | Do not automatically restart the container. (the default) |
|
||||
| `on-failure` | Restart the container if it exits due to an error, which manifests as a non-zero exit code. |
|
||||
| `unless-stopped` | Restart the container unless it is explicitly stopped or Docker itself is stopped or restarted. |
|
||||
| `always` | Always restart the container if it stops. |
|
||||
| `always` | Always restart the container if it stops. If it is manually stopped, it is restarted only when Docker daemon restarts or the container itself is manually restarted. (See the second bullet listed in [restart policy details](#restart-policy-details)) |
|
||||
| `unless-stopped` | Similar to `always`, except that when the container is stopped (manually or otherwise), it is not restarted even after Docker daemon restarts. |
|
||||
|
||||
The following example starts a Redis container and configures it to always
|
||||
restart unless it is explicitly stopped or Docker is restarted.
|
||||
|
|
|
|||
|
|
@ -101,7 +101,7 @@ you need to add this configuration in the Docker systemd service file.
|
|||
The `NO_PROXY` variable specifies a string that contains comma-separated
|
||||
values for hosts that should be excluded from proxying. These are the
|
||||
options you can specify to exclude hosts:
|
||||
* IP address prefix (`1.2.3.4`) or in CIDR notation (`1.2.3.4/8`)
|
||||
* IP address prefix (`1.2.3.4`)
|
||||
* Domain name, or a special DNS label (`*`)
|
||||
* A domain name matches that name and all subdomains. A domain name with
|
||||
a leading "." matches subdomains only. For example, given the domains
|
||||
|
|
|
|||
|
|
@ -11,6 +11,14 @@ known issues for each DTR version.
|
|||
You can then use [the upgrade instructions](admin/upgrade.md),
|
||||
to upgrade your installation to the latest release.
|
||||
|
||||
## Version 2.3.11
|
||||
|
||||
(28 February 2019)
|
||||
|
||||
### Changelog
|
||||
|
||||
* Bump the Golang version that is used to build DTR to version 1.10.8. (docker/dhe-deploy#10064)
|
||||
|
||||
## Version 2.3.10
|
||||
|
||||
(29 January 2019)
|
||||
|
|
|
|||
|
|
@ -24,11 +24,13 @@ command.
|
|||
|
||||
Example usage:
|
||||
|
||||
```bash
|
||||
$ docker run -it --rm dtr-internal.caas.docker.io/caas/dtr:2.4.0-alpha-008434_ge02413a install \
|
||||
--ucp-node <UCP_NODE_HOSTNAME> \
|
||||
--ucp-insecure-tls
|
||||
```
|
||||
|
||||
Note: Use --ucp-ca "$(cat ca.pem)" instead of --ucp-insecure-tls for a production deployment.
|
||||
> **Note**: Use `--ucp-ca "$(cat ca.pem)"` instead of `--ucp-insecure-tls` for a production deployment.
|
||||
|
||||
## Options
|
||||
|
||||
|
|
|
|||
|
|
@ -24,11 +24,13 @@ command.
|
|||
|
||||
Example usage:
|
||||
|
||||
```bash
|
||||
$ docker run -it --rm docker/dtr:2.4.1 install \
|
||||
--ucp-node <UCP_NODE_HOSTNAME> \
|
||||
--ucp-insecure-tls
|
||||
```
|
||||
|
||||
Note: Use --ucp-ca "$(cat ca.pem)" instead of --ucp-insecure-tls for a production deployment.
|
||||
> **Note**: Use `--ucp-ca "$(cat ca.pem)"` instead of `--ucp-insecure-tls` for a production deployment.
|
||||
|
||||
## Options
|
||||
|
||||
|
|
|
|||
|
|
@ -0,0 +1,243 @@
|
|||
---
|
||||
title: Troubleshoot Docker Trusted Registry
|
||||
description: Learn how to troubleshoot your DTR installation.
|
||||
keywords: registry, monitor, troubleshoot
|
||||
redirect_from: /ee/dtr/admin/monitor-and-troubleshoot/troubleshoot-with-logs
|
||||
---
|
||||
|
||||
This guide contains tips and tricks for troubleshooting DTR problems.
|
||||
|
||||
## Troubleshoot overlay networks
|
||||
|
||||
High availability in DTR depends on swarm overlay networking. One way to test
|
||||
if overlay networks are working correctly is to deploy containers to the same
|
||||
overlay network on different nodes and see if they can ping one another.
|
||||
|
||||
Use SSH to log into a node and run:
|
||||
|
||||
```bash
|
||||
docker run -it --rm \
|
||||
--net dtr-ol --name overlay-test1 \
|
||||
--entrypoint sh {{ page.dtr_org }}/{{ page.dtr_repo }}
|
||||
```
|
||||
|
||||
Then use SSH to log into another node and run:
|
||||
|
||||
```bash
|
||||
docker run -it --rm \
|
||||
--net dtr-ol --name overlay-test2 \
|
||||
--entrypoint ping {{ page.dtr_org }}/{{ page.dtr_repo }} -c 3 overlay-test1
|
||||
```
|
||||
|
||||
If the second command succeeds, it indicates overlay networking is working
|
||||
correctly between those nodes.
|
||||
|
||||
You can run this test with any attachable overlay network and any Docker image
|
||||
that has `sh` and `ping`.
|
||||
|
||||
|
||||
## Access RethinkDB directly
|
||||
|
||||
DTR uses RethinkDB for persisting data and replicating it across replicas.
|
||||
It might be helpful to connect directly to the RethinkDB instance running on a
|
||||
DTR replica to check the DTR internal state.
|
||||
|
||||
> **Warning**: Modifying RethinkDB directly is not supported and may cause
|
||||
> problems.
|
||||
{: .warning }
|
||||
|
||||
### via RethinkCLI
|
||||
|
||||
As of v2.5.5, the [RethinkCLI has been removed](/ee/dtr/release-notes/#255) from the RethinkDB image along with other unused components. You can now run RethinkCLI from a separate image in the `dockerhubenterprise` organization. Note that the commands below are using separate tags for non-interactive and interactive modes.
|
||||
|
||||
#### Non-interactive
|
||||
|
||||
Use SSH to log into a node that is running a DTR replica, and run the following:
|
||||
|
||||
{% raw %}
|
||||
```bash
|
||||
# List problems in the cluster detected by the current node.
|
||||
REPLICA_ID=$(docker container ls --filter=name=dtr-rethink --format '{{.Names}}' | cut -d'/' -f2 | cut -d'-' -f3 | head -n 1) && echo 'r.db("rethinkdb").table("current_issues")' | docker run --rm -i --net dtr-ol -v "dtr-ca-${REPLICA_ID}:/ca" -e DTR_REPLICA_ID=$REPLICA_ID dockerhubenterprise/rethinkcli:v2.2.0-ni non-interactive
|
||||
```
|
||||
{% endraw %}
|
||||
|
||||
On a healthy cluster the output will be `[]`.
|
||||
|
||||
#### Interactive
|
||||
|
||||
Starting in DTR 2.5.5, you can run RethinkCLI from a separate image. First, set an environment variable for your DTR replica ID:
|
||||
|
||||
```bash
|
||||
REPLICA_ID=$(docker inspect -f '{{.Name}}' $(docker ps -q -f name=dtr-rethink) | cut -f 3 -d '-')
|
||||
```
|
||||
|
||||
RethinkDB stores data in different databases that contain multiple tables. Run the following command to get into interactive mode
|
||||
and query the contents of the DB:
|
||||
|
||||
```bash
|
||||
docker run -it --rm --net dtr-ol -v dtr-ca-$REPLICA_ID:/ca dockerhubenterprise/rethinkcli:v2.3.0 $REPLICA_ID
|
||||
```
|
||||
|
||||
```none
|
||||
# List problems in the cluster detected by the current node.
|
||||
> r.db("rethinkdb").table("current_issues")
|
||||
[]
|
||||
|
||||
# List all the DBs in RethinkDB
|
||||
> r.dbList()
|
||||
[ 'dtr2',
|
||||
'jobrunner',
|
||||
'notaryserver',
|
||||
'notarysigner',
|
||||
'rethinkdb' ]
|
||||
|
||||
# List the tables in the dtr2 db
|
||||
> r.db('dtr2').tableList()
|
||||
[ 'blob_links',
|
||||
'blobs',
|
||||
'client_tokens',
|
||||
'content_caches',
|
||||
'events',
|
||||
'layer_vuln_overrides',
|
||||
'manifests',
|
||||
'metrics',
|
||||
'namespace_team_access',
|
||||
'poll_mirroring_policies',
|
||||
'promotion_policies',
|
||||
'properties',
|
||||
'pruning_policies',
|
||||
'push_mirroring_policies',
|
||||
'repositories',
|
||||
'repository_team_access',
|
||||
'scanned_images',
|
||||
'scanned_layers',
|
||||
'tags',
|
||||
'user_settings',
|
||||
'webhooks' ]
|
||||
|
||||
# List the entries in the repositories table
|
||||
> r.db('dtr2').table('repositories')
|
||||
[ { enableManifestLists: false,
|
||||
id: 'ac9614a8-36f4-4933-91fa-3ffed2bd259b',
|
||||
immutableTags: false,
|
||||
name: 'test-repo-1',
|
||||
namespaceAccountID: 'fc3b4aec-74a3-4ba2-8e62-daed0d1f7481',
|
||||
namespaceName: 'admin',
|
||||
pk: '3a4a79476d76698255ab505fb77c043655c599d1f5b985f859958ab72a4099d6',
|
||||
pulls: 0,
|
||||
pushes: 0,
|
||||
scanOnPush: false,
|
||||
tagLimit: 0,
|
||||
visibility: 'public' },
|
||||
{ enableManifestLists: false,
|
||||
id: '9f43f029-9683-459f-97d9-665ab3ac1fda',
|
||||
immutableTags: false,
|
||||
longDescription: '',
|
||||
name: 'testing',
|
||||
namespaceAccountID: 'fc3b4aec-74a3-4ba2-8e62-daed0d1f7481',
|
||||
namespaceName: 'admin',
|
||||
pk: '6dd09ac485749619becaff1c17702ada23568ebe0a40bb74a330d058a757e0be',
|
||||
pulls: 0,
|
||||
pushes: 0,
|
||||
scanOnPush: false,
|
||||
shortDescription: '',
|
||||
tagLimit: 1,
|
||||
visibility: 'public' } ]
|
||||
```
|
||||
|
||||
Individual DBs and tables are a private implementation detail and may change in DTR
|
||||
from version to version, but you can always use `dbList()` and `tableList()` to explore
|
||||
the contents and data structure.
|
||||
|
||||
[Learn more about RethinkDB queries](https://www.rethinkdb.com/docs/guide/javascript/).
|
||||
|
||||
### via API
|
||||
|
||||
To check on the overall status of your DTR cluster without interacting with RethinkCLI, run the following API request:
|
||||
|
||||
```bash
|
||||
curl -u admin:$TOKEN -X GET "https://<dtr-url>/api/v0/meta/cluster_status" -H "accept: application/json"
|
||||
```
|
||||
|
||||
#### Example API Response
|
||||
```none
|
||||
{
|
||||
"rethink_system_tables": {
|
||||
"cluster_config": [
|
||||
{
|
||||
"heartbeat_timeout_secs": 10,
|
||||
"id": "heartbeat"
|
||||
}
|
||||
],
|
||||
"current_issues": [],
|
||||
"db_config": [
|
||||
{
|
||||
"id": "339de11f-b0c2-4112-83ac-520cab68d89c",
|
||||
"name": "notaryserver"
|
||||
},
|
||||
{
|
||||
"id": "aa2e893f-a69a-463d-88c1-8102aafebebc",
|
||||
"name": "dtr2"
|
||||
},
|
||||
{
|
||||
"id": "bdf14a41-9c31-4526-8436-ab0fed00c2fd",
|
||||
"name": "jobrunner"
|
||||
},
|
||||
{
|
||||
"id": "f94f0e35-b7b1-4a2f-82be-1bdacca75039",
|
||||
"name": "notarysigner"
|
||||
}
|
||||
],
|
||||
"server_status": [
|
||||
{
|
||||
"id": "9c41fbc6-bcf2-4fad-8960-d117f2fdb06a",
|
||||
"name": "dtr_rethinkdb_5eb9459a7832",
|
||||
"network": {
|
||||
"canonical_addresses": [
|
||||
{
|
||||
"host": "dtr-rethinkdb-5eb9459a7832.dtr-ol",
|
||||
"port": 29015
|
||||
}
|
||||
],
|
||||
"cluster_port": 29015,
|
||||
"connected_to": {
|
||||
"dtr_rethinkdb_56b65e8c1404": true
|
||||
},
|
||||
"hostname": "9e83e4fee173",
|
||||
"http_admin_port": "<no http admin>",
|
||||
"reql_port": 28015,
|
||||
"time_connected": "2019-02-15T00:19:22.035Z"
|
||||
},
|
||||
}
|
||||
...
|
||||
]
|
||||
}
|
||||
}
|
||||
|
||||
```
|
||||
|
||||
## Recover from an unhealthy replica
|
||||
|
||||
When a DTR replica is unhealthy or down, the DTR web UI displays a warning:
|
||||
|
||||
```none
|
||||
Warning: The following replicas are unhealthy: 59e4e9b0a254; Reasons: Replica reported health too long ago: 2017-02-18T01:11:20Z; Replicas 000000000000, 563f02aba617 are still healthy.
|
||||
```
|
||||
|
||||
To fix this, you should remove the unhealthy replica from the DTR cluster,
|
||||
and join a new one. Start by running:
|
||||
|
||||
```bash
|
||||
docker run -it --rm \
|
||||
{{ page.dtr_org }}/{{ page.dtr_repo }}:{{ page.dtr_version }} remove \
|
||||
--ucp-insecure-tls
|
||||
```
|
||||
|
||||
And then:
|
||||
|
||||
```bash
|
||||
docker run -it --rm \
|
||||
{{ page.dtr_org }}/{{ page.dtr_repo }}:{{ page.dtr_version }} join \
|
||||
--ucp-node <ucp-node-name> \
|
||||
--ucp-insecure-tls
|
||||
```
|
||||
|
|
@ -1,138 +0,0 @@
|
|||
---
|
||||
title: Troubleshoot Docker Trusted Registry
|
||||
description: Learn how to troubleshoot your DTR installation.
|
||||
keywords: registry, monitor, troubleshoot
|
||||
---
|
||||
|
||||
This guide contains tips and tricks for troubleshooting DTR problems.
|
||||
|
||||
## Troubleshoot overlay networks
|
||||
|
||||
High availability in DTR depends on swarm overlay networking. One way to test
|
||||
if overlay networks are working correctly is to deploy containers to the same
|
||||
overlay network on different nodes and see if they can ping one another.
|
||||
|
||||
Use SSH to log into a node and run:
|
||||
|
||||
```bash
|
||||
docker run -it --rm \
|
||||
--net dtr-ol --name overlay-test1 \
|
||||
--entrypoint sh {{ page.dtr_org }}/{{ page.dtr_repo }}
|
||||
```
|
||||
|
||||
Then use SSH to log into another node and run:
|
||||
|
||||
```bash
|
||||
docker run -it --rm \
|
||||
--net dtr-ol --name overlay-test2 \
|
||||
--entrypoint ping {{ page.dtr_org }}/{{ page.dtr_repo }} -c 3 overlay-test1
|
||||
```
|
||||
|
||||
If the second command succeeds, it indicates overlay networking is working
|
||||
correctly between those nodes.
|
||||
|
||||
You can run this test with any attachable overlay network and any Docker image
|
||||
that has `sh` and `ping`.
|
||||
|
||||
|
||||
## Access RethinkDB directly
|
||||
|
||||
DTR uses RethinkDB for persisting data and replicating it across replicas.
|
||||
It might be helpful to connect directly to the RethinkDB instance running on a
|
||||
DTR replica to check the DTR internal state.
|
||||
|
||||
> **Warning**: Modifying RethinkDB directly is not supported and may cause
|
||||
> problems.
|
||||
{: .warning }
|
||||
|
||||
Use SSH to log into a node that is running a DTR replica, and run the following
|
||||
commands:
|
||||
|
||||
{% raw %}
|
||||
```bash
|
||||
# List problems in the cluster detected by the current node.
|
||||
echo 'r.db("rethinkdb").table("current_issues")' | \
|
||||
docker exec -i \
|
||||
$(docker ps -q --filter name=dtr-rethinkdb) \
|
||||
rethinkcli non-interactive; \
|
||||
echo
|
||||
```
|
||||
{% endraw %}
|
||||
|
||||
On a healthy cluster the output will be `[]`.
|
||||
|
||||
RethinkDB stores data in different databases that contain multiple tables. This
|
||||
container can also be used to connect to the local DTR replica and
|
||||
interactively query the contents of the DB.
|
||||
|
||||
{% raw %}
|
||||
```bash
|
||||
docker exec -it $(docker ps -q --filter name=dtr-rethinkdb) rethinkcli
|
||||
```
|
||||
{% endraw %}
|
||||
|
||||
```none
|
||||
# List problems in the cluster detected by the current node.
|
||||
> r.db("rethinkdb").table("current_issues")
|
||||
[]
|
||||
|
||||
# List all the DBs in RethinkDB
|
||||
> r.dbList()
|
||||
[ 'dtr2',
|
||||
'jobrunner',
|
||||
'notaryserver',
|
||||
'notarysigner',
|
||||
'rethinkdb' ]
|
||||
|
||||
# List the tables in the dtr2 db
|
||||
> r.db('dtr2').tableList()
|
||||
[ 'client_tokens',
|
||||
'events',
|
||||
'manifests',
|
||||
'namespace_team_access',
|
||||
'properties',
|
||||
'repositories',
|
||||
'repository_team_access',
|
||||
'tags' ]
|
||||
|
||||
# List the entries in the repositories table
|
||||
> r.db('dtr2').table('repositories')
|
||||
[ { id: '19f1240a-08d8-4979-a898-6b0b5b2338d8',
|
||||
name: 'my-test-repo',
|
||||
namespaceAccountID: '924bf131-6213-43fa-a5ed-d73c7ccf392e',
|
||||
pk: 'cf5e8bf1197e281c747f27e203e42e22721d5c0870b06dfb1060ad0970e99ada',
|
||||
visibility: 'public' },
|
||||
...
|
||||
```
|
||||
|
||||
Individual DBs and tables are a private implementation detail and may change in DTR
|
||||
from version to version, but you can always use `dbList()` and `tableList()` to explore
|
||||
the contents and data structure.
|
||||
|
||||
[Learn more about RethinkDB queries](https://www.rethinkdb.com/docs/guide/javascript/).
|
||||
|
||||
## Recover from an unhealthy replica
|
||||
|
||||
When a DTR replica is unhealthy or down, the DTR web UI displays a warning:
|
||||
|
||||
```none
|
||||
Warning: The following replicas are unhealthy: 59e4e9b0a254; Reasons: Replica reported health too long ago: 2017-02-18T01:11:20Z; Replicas 000000000000, 563f02aba617 are still healthy.
|
||||
```
|
||||
|
||||
To fix this, you should remove the unhealthy replica from the DTR cluster,
|
||||
and join a new one. Start by running:
|
||||
|
||||
```none
|
||||
docker run -it --rm \
|
||||
{{ page.dtr_org }}/{{ page.dtr_repo }}:{{ page.dtr_version }} remove \
|
||||
--ucp-insecure-tls
|
||||
```
|
||||
|
||||
And then:
|
||||
|
||||
```none
|
||||
docker run -it --rm \
|
||||
{{ page.dtr_org }}/{{ page.dtr_repo }}:{{ page.dtr_version }} join \
|
||||
--ucp-node <ucp-node-name> \
|
||||
--ucp-insecure-tls
|
||||
```
|
||||
|
|
@ -20,9 +20,9 @@ docker run -it --rm docker/dtr \
|
|||
|
||||
This command forcefully removes all containers and volumes associated with
|
||||
a DTR replica without notifying the rest of the cluster. Use this command
|
||||
on all replicas uninstall DTR.
|
||||
on all replicas to uninstall DTR.
|
||||
|
||||
Use the 'remove' command to gracefully scale down your DTR cluster.
|
||||
Use `docker/dtr remove` to gracefully scale down your DTR cluster.
|
||||
|
||||
|
||||
## Options
|
||||
|
|
|
|||
|
|
@ -14,10 +14,10 @@ upgrade your installation to the latest release.
|
|||
|
||||
(18 Jan 2017)
|
||||
|
||||
Note: UCP 1.1.6 supports Docker Engine 1.12 but does not use the built-in
|
||||
orchestration capabilities provided by the Docker Engine with swarm mode enabled.
|
||||
When installing this UCP version on a Docker Engine 1.12 host, UCP creates a
|
||||
cluster using the older Docker Swarm v1.2.
|
||||
> **Note**: UCP 1.1.6 supports Docker Engine 1.12 but does not use the built-in
|
||||
> orchestration capabilities provided by the Docker Engine with swarm mode enabled.
|
||||
> When installing this UCP version on a Docker Engine 1.12 host, UCP creates a
|
||||
> cluster using the older Docker Swarm v1.2.
|
||||
|
||||
**Security Update**
|
||||
|
||||
|
|
@ -41,10 +41,10 @@ the [permissions levels section](user-management/permission-levels.md) for more
|
|||
|
||||
(8 Dec 2016)
|
||||
|
||||
Note: UCP 1.1.5 supports Docker Engine 1.12 but does not use the built-in
|
||||
orchestration capabilities provided by the Docker Engine with swarm mode enabled.
|
||||
When installing this UCP version on a Docker Engine 1.12 host, UCP creates a
|
||||
cluster using the older Docker Swarm v1.2.
|
||||
> **Note**: UCP 1.1.5 supports Docker Engine 1.12 but does not use the built-in
|
||||
> orchestration capabilities provided by the Docker Engine with swarm mode enabled.
|
||||
> When installing this UCP version on a Docker Engine 1.12 host, UCP creates a
|
||||
> cluster using the older Docker Swarm v1.2.
|
||||
|
||||
**Bug fixes**
|
||||
|
||||
|
|
@ -61,10 +61,10 @@ the authentication process.
|
|||
|
||||
(29 Sept 2016)
|
||||
|
||||
Note: UCP 1.1.4 supports Docker Engine 1.12 but does not use the built-in
|
||||
orchestration capabilities provided by the Docker Engine with swarm mode enabled.
|
||||
When installing this UCP version on a Docker Engine 1.12 host, UCP creates a
|
||||
cluster using Docker Swarm v1.2.5.
|
||||
> **Note**: UCP 1.1.4 supports Docker Engine 1.12 but does not use the built-in
|
||||
> orchestration capabilities provided by the Docker Engine with swarm mode enabled.
|
||||
> When installing this UCP version on a Docker Engine 1.12 host, UCP creates a
|
||||
> cluster using Docker Swarm v1.2.5.
|
||||
|
||||
**Bug fixes**
|
||||
|
||||
|
|
@ -76,10 +76,10 @@ organization accounts
|
|||
|
||||
## Version 1.1.3
|
||||
|
||||
Note: UCP 1.1.3 supports Docker Engine 1.12 but does not use the built-in
|
||||
orchestration capabilities provided by the Docker Engine with swarm mode enabled.
|
||||
When installing this UCP version on a Docker Engine 1.12 host, UCP creates a
|
||||
cluster using Docker Swarm v1.2.5.
|
||||
> **Note**: UCP 1.1.3 supports Docker Engine 1.12 but does not use the built-in
|
||||
> orchestration capabilities provided by the Docker Engine with swarm mode enabled.
|
||||
> When installing this UCP version on a Docker Engine 1.12 host, UCP creates a
|
||||
> cluster using Docker Swarm v1.2.5.
|
||||
|
||||
**Security Update**
|
||||
|
||||
|
|
@ -125,9 +125,9 @@ enabled, and is not compatible with swarm-mode based APIs, e.g. `docker service`
|
|||
|
||||
## Version 1.1.2
|
||||
|
||||
Note: UCP 1.1.2 supports Docker Engine 1.12 but doesn't use the new clustering
|
||||
capabilities provided by the Docker swarm mode. When installing this UCP version
|
||||
on a Docker Engine 1.12, UCP creates a "classic" Docker Swarm 1.2.3 cluster.
|
||||
> **Note**: UCP 1.1.2 supports Docker Engine 1.12 but doesn't use the new clustering
|
||||
> capabilities provided by the Docker swarm mode. When installing this UCP version
|
||||
> on a Docker Engine 1.12, UCP creates a "classic" Docker Swarm 1.2.3 cluster.
|
||||
|
||||
**Features**
|
||||
|
||||
|
|
|
|||
|
|
@ -63,8 +63,6 @@ might be serving your request. Make sure you're connecting directly to the
|
|||
URL of a manager node, and not a load balancer. In addition, pinging the
|
||||
endpoint with a `HEAD` results in a 404 error code. Use a `GET` request instead.
|
||||
|
||||
|
||||
|
||||
## Where to go next
|
||||
|
||||
* [Troubleshoot with logs](troubleshoot-with-logs.md)
|
||||
|
|
|
|||
|
|
@ -194,7 +194,8 @@ apply two labels to your service:
|
|||
com.docker.ucp.mesh.http.1=external_route=http://example.org,redirect=https://example.org
|
||||
com.docker.ucp.mesh.http.2=external_route=sni://example.org
|
||||
```
|
||||
Note: It is not possible to redirect HTTPS to HTTP.
|
||||
|
||||
> **Note**: It is not possible to redirect HTTPS to HTTP.
|
||||
|
||||
### X-Forwarded-For header
|
||||
|
||||
|
|
|
|||
|
|
@ -41,6 +41,17 @@ As part of your backup policy you should regularly create backups of UCP.
|
|||
DTR is backed up independently.
|
||||
[Learn about DTR backups and recovery](../../../../dtr/2.3/guides/admin/backups-and-disaster-recovery.md).
|
||||
|
||||
> Warning: On UCP versions 3.0.0 - 3.0.7, before performing a UCP backup, you must clean up multiple /dev/shm mounts in the ucp-kublet entrypoint script by running the following script on all nodes via cron job:
|
||||
|
||||
```
|
||||
SHM_MOUNT=$(grep -m1 '^tmpfs./dev/shm' /proc/mounts)
|
||||
while [ $(grep -cm2 '^tmpfs./dev/shm' /proc/mounts) -gt 1 ]; do
|
||||
sudo umount /dev/shm
|
||||
done
|
||||
grep -q '^tmpfs./dev/shm' /proc/mounts || sudo mount "${SHM_MOUNT}"
|
||||
```
|
||||
For additional details, refer to [Docker KB000934](https://success.docker.com/article/more-than-one-dev-shm-mount-in-the-host-namespace){: target="_blank"}
|
||||
|
||||
To create a UCP backup, run the `{{ page.ucp_org }}/{{ page.ucp_repo }}:{{ page.ucp_version }} backup` command
|
||||
on a single UCP manager. This command creates a tar archive with the
|
||||
contents of all the [volumes used by UCP](../architecture.md) to persist data
|
||||
|
|
|
|||
|
|
@ -40,6 +40,10 @@ Docker UCP requires each node on the cluster to have a static IP address.
|
|||
Before installing UCP, ensure your network and nodes are configured to support
|
||||
this.
|
||||
|
||||
## Avoid IP range conflicts
|
||||
|
||||
The `service-cluster-ip-range` Kubernetes API Server flag is currently set to `10.96.0.0/16` and cannot be changed.
|
||||
|
||||
## Time synchronization
|
||||
|
||||
In distributed systems like Docker UCP, time synchronization is critical
|
||||
|
|
|
|||
|
|
@ -22,7 +22,7 @@ impact to your users.
|
|||
Don't make changes to UCP configurations while you're upgrading.
|
||||
This can lead to misconfigurations that are difficult to troubleshoot.
|
||||
|
||||
> Note: If you are upgrading a cluster to UCP 3.0.2 or higher on Microsoft
|
||||
> **Note**: If you are upgrading a cluster to UCP 3.0.2 or higher on Microsoft
|
||||
> Azure then please ensure all of the Azure [prerequisities](install-on-azure.md/#azure-prerequisites)
|
||||
> are met.
|
||||
|
||||
|
|
|
|||
|
|
@ -64,6 +64,10 @@ URL of a manager node, and not a load balancer. In addition, pinging the
|
|||
endpoint with a `HEAD` results in a 404 error code. Use a `GET` request instead.
|
||||
|
||||
|
||||
## Monitoring disk usage
|
||||
|
||||
Web UI disk usage metrics, including free space, only reflect the Docker managed portion of the filesystem: `/var/lib/docker`. To monitor the total space available on each filesystem of a UCP worker or manager, you must deploy a third party monitoring solution to monitor the operating system.
|
||||
|
||||
|
||||
## Where to go next
|
||||
|
||||
|
|
|
|||
|
|
@ -187,7 +187,8 @@ apply two labels to your service:
|
|||
com.docker.ucp.mesh.http.1=external_route=http://example.org,redirect=https://example.org
|
||||
com.docker.ucp.mesh.http.2=external_route=sni://example.org
|
||||
```
|
||||
Note: It is not possible to redirect HTTPS to HTTP.
|
||||
|
||||
> **Note**: It is not possible to redirect HTTPS to HTTP.
|
||||
|
||||
### X-Forwarded-For header
|
||||
|
||||
|
|
|
|||
|
|
@ -63,9 +63,9 @@ command.
|
|||
| `--swarm-port` | Port for the Docker Swarm manager. Used for backwards compatibility |
|
||||
| `--swarm-grpc-port` | Port for communication between nodes |
|
||||
| `--cni-installer-url` | A URL pointing to a Kubernetes YAML file to be used as an installer for the CNI plugin of the cluster. If specified, the default CNI plugin is not installed. If the URL uses the HTTPS scheme, no certificate verification is performed. |
|
||||
|
||||
| `--pod-cidr` | Kubernetes cluster IP pool for the pods to allocated IPs from (Default: 192.168.0.0/16) |
|
||||
| `--cloud-provider` | The cloud provider for the cluster |
|
||||
| `--skip-cloud-provider` | Disables checks that rely on detecting the cloud provider (if any) on which the cluster is currently running. |
|
||||
| `--dns` | Set custom DNS servers for the UCP containers |
|
||||
| `--dns-opt` | Set DNS options for the UCP containers |
|
||||
| `--dns-search` | Set custom DNS search domains for the UCP containers |
|
||||
|
|
@ -80,7 +80,8 @@ command.
|
|||
| `--swarm-experimental` | Enable Docker Swarm experimental features. Used for backwards compatibility |
|
||||
| `--disable-tracking` | Disable anonymous tracking and analytics |
|
||||
| `--disable-usage` | Disable anonymous usage reporting |
|
||||
| `--external-server-cert` | Use the certificates in the `ucp-controller-server-certs` volume instead of generating self-signed certs during installation |
|
||||
| `--external-server-cert` | Use the certificates in the `ucp-controller-server-certs` volume instead of generating self-signed certs during installation
|
||||
|
|
||||
| `--preserve-certs` | Don't generate certificates if they already exist |
|
||||
| `--binpack` | Set the Docker Swarm scheduler to binpack mode. Used for backwards compatibility |
|
||||
| `--random` | Set the Docker Swarm scheduler to random mode. Used for backwards compatibility |
|
||||
|
|
|
|||
|
|
@ -29,4 +29,4 @@ most benefits from Docker.
|
|||
|
||||
## Advanced development with the SDK or API
|
||||
|
||||
After you can write Dockerfiles or Compose files and use Docker CLI, take it to the next level by using Docker Engine SDK for Go/Python or use HTTP API directly.
|
||||
After you can write Dockerfiles or Compose files and use Docker CLI, take it to the next level by using Docker Engine SDK for Go/Python or use the HTTP API directly.
|
||||
|
|
|
|||
|
|
@ -27,32 +27,11 @@ one year**.
|
|||
|
||||
If your account [has the proper
|
||||
permissions](/docker-for-aws/iam-permissions.md), you can
|
||||
use the blue button from the stable or edge channel to bootstrap Docker for AWS
|
||||
using CloudFormation. For more about stable and edge channels, see the
|
||||
[FAQs](/docker-for-aws/faqs.md#stable-and-edge-channels).
|
||||
use the blue button to bootstrap Docker for AWS
|
||||
using CloudFormation.
|
||||
|
||||
<table style="width:100%">
|
||||
<tr>
|
||||
<th style="font-size: x-large; font-family: arial">Stable channel</th>
|
||||
<th style="font-size: x-large; font-family: arial">Edge channel</th>
|
||||
</tr>
|
||||
<tr valign="top">
|
||||
<td width="33%">This deployment is fully baked and tested, and comes with the latest CE version of Docker. <br><br>This is the best channel to use if you want a reliable platform to work with. <br><br>Stable is released quarterly and is for users that want an easier-to-maintain release pace.</td>
|
||||
<td width="34%">This deployment offers cutting edge features of the CE version of Docker and comes with experimental features turned on, described in the <a href="https://github.com/docker/docker-ce/blob/master/components/cli/experimental/README.md">Docker Experimental Features README</a> on GitHub. (Adjust the branch or tag in the URL to match your version.)<br><br>This is the best channel to use if you want to experiment with features under development, and can weather some instability and bugs. Edge is for users wanting a drop of the latest and greatest features every month. <br><br>We collect usage data on edges across the board.</td>
|
||||
</tr>
|
||||
<tr valign="top">
|
||||
<td width="33%">
|
||||
{{aws_blue_latest}}
|
||||
{{aws_blue_vpc_latest}}
|
||||
</td>
|
||||
<td width="34%">
|
||||
{{aws_blue_edge}}
|
||||
{{aws_blue_vpc_edge}}
|
||||
</td>
|
||||
</tr>
|
||||
</table>
|
||||
|
||||
> **Note* During stable channel updates, edge channel will have the same release (unless it's a patch release)
|
||||
|
||||
### Deployment options
|
||||
|
||||
|
|
|
|||
|
|
@ -90,6 +90,8 @@ All regions can be found here: [Microsoft Azure Regions](https://azure.microsoft
|
|||
An excerpt of the above regions to use when you create your service principal are:
|
||||
|
||||
```none
|
||||
australiacentral
|
||||
australiacentral2
|
||||
australiaeast
|
||||
australiasoutheast
|
||||
brazilsouth
|
||||
|
|
@ -100,6 +102,8 @@ centralus
|
|||
eastasia
|
||||
eastus
|
||||
eastus2
|
||||
francecentral
|
||||
francesouth
|
||||
japaneast
|
||||
japanwest
|
||||
koreacentral
|
||||
|
|
@ -111,8 +115,8 @@ southeastasia
|
|||
southindia
|
||||
uksouth
|
||||
ukwest
|
||||
usgovvirginia
|
||||
usgoviowa
|
||||
usgovvirginia
|
||||
westcentralus
|
||||
westeurope
|
||||
westindia
|
||||
|
|
|
|||
|
|
@ -20,29 +20,9 @@ This deployment is fully baked and tested, and comes with the latest Enterprise
|
|||
### Quickstart
|
||||
|
||||
If your account has the [proper permissions](#prerequisites), you can generate the [Service Principal](#service-principal) and
|
||||
then choose from the stable or edge channel to bootstrap Docker for Azure using Azure Resource Manager.
|
||||
For more about stable and edge channels, see the [FAQs](/docker-for-azure/faqs.md#stable-and-edge-channels).
|
||||
<table style="width:100%">
|
||||
<tr>
|
||||
<th style="font-size: x-large; font-family: arial">Stable channel</th>
|
||||
<th style="font-size: x-large; font-family: arial">Edge channel</th>
|
||||
</tr>
|
||||
<tr valign="top">
|
||||
<td width="50%">This deployment is fully baked and tested, and comes with the latest CE version of Docker. <br><br>This is the best channel to use if you want a reliable platform to work with. <br><br>Stable is released quarterly and is for users that want an easier-to-maintain release pace.</td>
|
||||
<td width="50%">This deployment offers cutting edge features of the CE version of Docker and comes with experimental features turned on, described in the <a href="https://github.com/docker/docker-ce/blob/master/components/cli/experimental/README.md">Docker Experimental Features README</a> on GitHub. (Adjust the branch or tag in the URL to match your version.)<br><br>This is the best channel to use if you want to experiment with features under development, and can weather some instability and bugs. Edge is for users wanting a drop of the latest and greatest features every month <br><br>We collect usage data on edges across the board.</td>
|
||||
</tr>
|
||||
<tr valign="top">
|
||||
<td width="50%">
|
||||
then bootstrap Docker for Azure using Azure Resource Manager.
|
||||
|
||||
{{azure_blue_latest}}
|
||||
</td>
|
||||
<td width="50%">
|
||||
{{azure_blue_edge}}
|
||||
</td>
|
||||
</tr>
|
||||
</table>
|
||||
|
||||
> **Note**: During stable channel updates, the edge channel has the same release as the stable channel (unless it's a patch release).
|
||||
|
||||
|
||||
### Prerequisites
|
||||
|
||||
|
|
|
|||
|
|
@ -19,6 +19,7 @@ redirect_from:
|
|||
- /docker-for-ibm-cloud/release-notes/
|
||||
- /docker-for-ibm-cloud/scaling/
|
||||
- /docker-for-ibm-cloud/why/
|
||||
- /v17.12/docker-for-ibm-cloud/quickstart/
|
||||
---
|
||||
|
||||
Docker for IBM Cloud has been replaced by
|
||||
|
|
|
|||
|
|
@ -16,7 +16,31 @@ notes](release-notes) are also available. (Following the CE release model,
|
|||
releases, and download stable and edge product installers at [Download Docker
|
||||
for Mac](install.md#download-docker-for-mac).
|
||||
|
||||
## Edge Releases of 2018
|
||||
## Edge Releases of 2019
|
||||
|
||||
### Docker Community Edition 2.0.2.1 2019-02-15
|
||||
|
||||
[Download](https://download.docker.com/mac/edge/31274/Docker.dmg)
|
||||
|
||||
* Upgrades
|
||||
- [Docker 18.09.2](https://github.com/docker/docker-ce/releases/tag/v18.09.2), fixes [CVE-2019-5736](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-5736)
|
||||
|
||||
### Docker Community Edition 2.0.2.0 2019-02-06
|
||||
|
||||
[Download](https://download.docker.com/mac/edge/30972/Docker.dmg)
|
||||
|
||||
* Upgrades
|
||||
- [Docker Compose 1.24.0-rc1](https://github.com/docker/compose/releases/tag/1.24.0-rc1)
|
||||
- [Docker Machine 0.16.1](https://github.com/docker/machine/releases/tag/v0.16.1)
|
||||
- [Compose on Kubernetes 0.4.18](https://github.com/docker/compose-on-kubernetes/releases/tag/v0.4.18)
|
||||
|
||||
* New
|
||||
- Rebranded UI
|
||||
|
||||
* Bug fixes and minor changes
|
||||
- Kubernetes: use default maximum number of pods for kubelet. [docker/for-mac#3453](https://github.com/docker/for-mac/issues/3453)
|
||||
- Fix DockerHelper crash. [docker/for-mac#3470](https://github.com/docker/for-mac/issues/3470)
|
||||
- Fix binding of privileged ports with specified IP. [docker/for-mac#3464](https://github.com/docker/for-mac/issues/3464)
|
||||
|
||||
### Docker Community Edition 2.0.1.0 2019-01-11
|
||||
|
||||
|
|
@ -38,6 +62,8 @@ for Mac](install.md#download-docker-for-mac).
|
|||
- Rename Docker for Mac to Docker Desktop
|
||||
- Partially open services ports if possible. [docker/for-mac#3438](https://github.com/docker/for-mac/issues/3438)
|
||||
|
||||
## Edge Releases of 2018
|
||||
|
||||
### Docker Community Edition 2.0.0.0-mac82 2018-12-07
|
||||
|
||||
[Download](https://download.docker.com/mac/edge/29268/Docker.dmg)
|
||||
|
|
|
|||
|
After Width: | Height: | Size: 48 KiB |
|
|
@ -412,9 +412,9 @@ $ security add-trusted-cert -d -r trustRoot -k ~/Library/Keychains/login.keychai
|
|||
See also, [Directory structures for
|
||||
certificates](#directory-structures-for-certificates).
|
||||
|
||||
> **Note:** You need to restart Docker Desktop for Mac after making any changes to the
|
||||
keychain or to the `~/.docker/certs.d` directory in order for the changes to
|
||||
take effect.
|
||||
> **Note**: You need to restart Docker Desktop for Mac after making any changes to the
|
||||
> keychain or to the `~/.docker/certs.d` directory in order for the changes to
|
||||
> take effect.
|
||||
|
||||
For a complete explanation of how to do this, see the blog post [Adding
|
||||
Self-signed Registry Certs to Docker & Docker Desktop for
|
||||
|
|
|
|||
|
|
@ -6,29 +6,51 @@ redirect_from:
|
|||
title: Leverage multi-CPU architecture support
|
||||
notoc: true
|
||||
---
|
||||
Docker images can support multiple architectures, which means that a single
|
||||
image may contain variants for different architectures, and sometimes for different
|
||||
operating systems, such as Windows.
|
||||
|
||||
Docker Desktop for Mac provides `binfmt_misc` multi architecture support, so you can run
|
||||
containers for different Linux architectures, such as `arm`, `mips`, `ppc64le`,
|
||||
and even `s390x`.
|
||||
When running an image with multi-architecture support, `docker` will
|
||||
automatically select an image variant which matches your OS and architecture.
|
||||
|
||||
Most of the official images on Docker Hub provide a [variety of architectures](https://github.com/docker-library/official-images#architectures-other-than-amd64).
|
||||
For example, the `busybox` image supports `amd64`, `arm32v5`, `arm32v6`,
|
||||
`arm32v7`, `arm64v8`, `i386`, `ppc64le`, and `s390x`. When running this image
|
||||
on an `x86_64` / `amd64` machine, the `x86_64` variant will be pulled and run,
|
||||
which can be seen from the output of the `uname -a` command that's run inside
|
||||
the container:
|
||||
|
||||
```bash
|
||||
$ docker run busybox uname -a
|
||||
|
||||
Linux 82ef1a0c07a2 4.9.125-linuxkit #1 SMP Fri Sep 7 08:20:28 UTC 2018 x86_64 GNU/Linux
|
||||
```
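You can also confirm which variant of an image you have locally. The following is a quick sketch that assumes the `busybox` image has already been pulled and uses the `Os` and `Architecture` fields reported by `docker image inspect`:

{% raw %}
```bash
$ docker image inspect busybox --format '{{.Os}}/{{.Architecture}}'

linux/amd64
```
{% endraw %}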
|
||||
|
||||
**Docker Desktop for Mac** provides `binfmt_misc` multi-architecture support,
|
||||
which means you can run containers for different Linux architectures
|
||||
such as `arm`, `mips`, `ppc64le`, and even `s390x`.
|
||||
|
||||
This does not require any special configuration in the container itself as it uses
|
||||
<a href="http://wiki.qemu.org/" target="_blank">qemu-static</a> from the Docker for
|
||||
Mac VM.
|
||||
<a href="http://wiki.qemu.org/" target="_blank">qemu-static</a> from the **Docker for
|
||||
Mac VM**. Because of this, you can run an ARM container, like the `arm32v7` or `ppc64le`
|
||||
variants of the busybox image:
|
||||
|
||||
You can run an ARM container, like the <a href="https://resin.io/how-it-works/" target="_blank">
|
||||
resin</a> arm builds:
|
||||
|
||||
```
|
||||
$ docker run resin/armv7hf-debian uname -a
|
||||
|
||||
Linux 7ed2fca7a3f0 4.1.12 #1 SMP Tue Jan 12 10:51:00 UTC 2016 armv7l GNU/Linux
|
||||
|
||||
$ docker run justincormack/ppc64le-debian uname -a
|
||||
|
||||
Linux edd13885f316 4.1.12 #1 SMP Tue Jan 12 10:51:00 UTC 2016 ppc64le GNU/Linux
|
||||
### arm32v7 variant
|
||||
```bash
|
||||
$ docker run arm32v7/busybox uname -a
|
||||
|
||||
Linux 9e3873123d09 4.9.125-linuxkit #1 SMP Fri Sep 7 08:20:28 UTC 2018 armv7l GNU/Linux
|
||||
```
|
||||
|
||||
Multi architecture support makes it easy to build <a href="https://blog.docker.com/2017/11/multi-arch-all-the-things/" target="_blank">
|
||||
multi architecture Docker images</a> or experiment with ARM images and binaries
|
||||
from your Mac.
|
||||
### ppc64le variant
|
||||
```bash
|
||||
$ docker run ppc64le/busybox uname -a
|
||||
|
||||
Linux 57a073cc4f10 4.9.125-linuxkit #1 SMP Fri Sep 7 08:20:28 UTC 2018 ppc64le GNU/Linux
|
||||
```
|
||||
|
||||
Notice that this time, the `uname -a` output shows `armv7l` and
|
||||
`ppc64le` respectively.
|
||||
|
||||
Multi-architecture support makes it easy to build <a href="https://blog.docker.com/2017/11/multi-arch-all-the-things/" target="_blank">multi-architecture Docker images</a> or experiment with ARM images and binaries from your Mac.
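To see which architectures a given image offers before pulling it, you can inspect its manifest list. This sketch assumes the experimental `docker manifest` CLI command is enabled in your Docker CLI configuration:

```bash
$ docker manifest inspect busybox | grep architecture
```

Each matching line names one architecture the image provides; for `busybox` this includes `amd64`, `arm`, `arm64`, `386`, `ppc64le`, and `s390x`.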
|
||||
|
||||
|
|
|
|||
|
|
@ -63,7 +63,7 @@ By default, you can share files in `/Users/`, `/Volumes/`, `/private/`, and
|
|||
`/tmp` directly. To add or remove directory trees that are exported to Docker,
|
||||
use the **File sharing** tab in Docker preferences {: .inline} -> **Preferences** ->
|
||||
**File sharing**. (See [Preferences](index.md#preferences).)
|
||||
**File sharing**. (See [Preferences](/docker-for-mac/index.md#preferences-menu).)
|
||||
|
||||
All other paths
|
||||
used in `-v` bind mounts are sourced from the Moby Linux VM running the Docker
|
||||
|
|
|
|||
|
|
@ -20,6 +20,13 @@ Desktop for Mac](install.md#download-docker-for-mac).
|
|||
|
||||
## Stable Releases of 2019
|
||||
|
||||
### Docker Community Edition 2.0.0.3 2019-02-15
|
||||
|
||||
[Download](https://download.docker.com/mac/stable/31259/Docker.dmg)
|
||||
|
||||
* Upgrades
|
||||
- [Docker 18.09.2](https://github.com/docker/docker-ce/releases/tag/v18.09.2), fixes [CVE-2019-5736](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-5736)
|
||||
|
||||
### Docker Community Edition 2.0.0.2 2019-01-16
|
||||
|
||||
[Download](https://download.docker.com/mac/stable/30215/Docker.dmg)
|
||||
|
|
|
|||
|
|
@ -0,0 +1,94 @@
|
|||
---
|
||||
description: Disk utilization
|
||||
keywords: mac, disk
|
||||
title: Disk utilization in Docker for Mac
|
||||
---
|
||||
|
||||
Docker for Mac stores Linux containers and images in a single, large "disk image" file
|
||||
in the Mac filesystem. This is different from Docker on Linux, which usually stores containers
|
||||
and images in the `/var/lib/docker` directory.
|
||||
|
||||
## Where is the "disk image" file?
|
||||
|
||||
To locate the "disk image" file, first select the whale menu icon and then select
|
||||
**Preferences...**. When the **Preferences...** window is displayed, select **Disk** and then **Reveal in Finder**:
|
||||
|
||||

|
||||
|
||||
The **Preferences...** window shows how much actual disk space the "disk image" file is consuming.
|
||||
In this example, the "disk image" file is consuming 2.4 GB out of a maximum of 64 GB.
|
||||
|
||||
Note that other tools might display space usage of the file in terms of the maximum file size, not the actual file size.
|
||||
|
||||
## If the file is too big
|
||||
|
||||
If the file is too big, you can
|
||||
- move it to a bigger drive,
|
||||
- delete unnecessary containers and images, or
|
||||
- reduce the maximum allowable size of the file.
|
||||
|
||||
### Move the file to a bigger drive
|
||||
|
||||
To move the file, open the **Preferences...** menu, select **Disk** and then select
|
||||
**Move disk image**. Do not move the file directly in the Finder or Docker for Mac will
|
||||
lose track of it.
|
||||
|
||||
### Delete unnecessary containers and images
|
||||
|
||||
To check whether you have too many unnecessary containers and images:
|
||||
|
||||
If your client and daemon API are version 1.25 or later (use the `docker version` command on the client to check your client and daemon API versions), you can display detailed space usage information with:
|
||||
|
||||
```bash
|
||||
$ docker system df -v
|
||||
```
|
||||
|
||||
Alternatively, you can list images with:
|
||||
```bash
|
||||
$ docker image ls
|
||||
```
|
||||
and then list containers with:
|
||||
```bash
|
||||
$ docker container ls -a
|
||||
```
|
||||
|
||||
If there are lots of unneeded objects, try the command:
|
||||
```bash
|
||||
$ docker system prune
|
||||
```
|
||||
This removes all stopped containers, unused networks, dangling images, and build cache.
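If you also want to remove unused (not only dangling) images and unused local volumes, a more aggressive sketch, assuming Docker 17.06.1 or later, is:

```bash
$ docker system prune --all --volumes
```

Review the confirmation prompt carefully: this deletes every image and volume that is not referenced by an existing container.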
|
||||
|
||||
Note that it might take a few minutes before space becomes free on the host, depending
|
||||
on what format the "disk image" file is in:
|
||||
- If the file is named `Docker.raw`: space on the host should be reclaimed within a few
|
||||
seconds.
|
||||
- If the file is named `Docker.qcow2`: space will be freed by a background process after
|
||||
a few minutes.
|
||||
|
||||
Note that space is only freed when images are deleted. Space is not freed automatically
|
||||
when files are deleted inside running containers. To trigger a space reclamation at any
|
||||
point, use the command:
|
||||
|
||||
```bash
|
||||
$ docker run --privileged --pid=host justincormack/nsenter1 /sbin/fstrim /var/lib/docker
|
||||
```
|
||||
|
||||
Note that many tools will report the maximum file size, not the actual file size.
|
||||
To query the actual size of the file on the host from a terminal, use:
|
||||
```bash
|
||||
$ cd ~/Library/Containers/com.docker.docker/Data
|
||||
$ cd vms/0 # or com.docker.driver.amd64-linux
|
||||
$ ls -klsh Docker.raw
|
||||
2333548 -rw-r--r--@ 1 akim staff 64G Dec 13 17:42 Docker.raw
|
||||
```
|
||||
In this example, the actual size of the disk is `2333548` KB, whereas the maximum size
|
||||
of the disk is `64` GB.
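As a cross-check, `du` reports the blocks actually allocated on disk rather than the maximum size (a sketch, assuming the same working directory as above):

```bash
$ du -h Docker.raw
```

The value reported should roughly match the actual size shown by `ls -klsh` (about 2.3 GB in this example), not the 64 GB maximum.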
|
||||
|
||||
### Reduce the maximum size of the file
|
||||
|
||||
To reduce the maximum size of the file, select the whale menu icon and then select
|
||||
**Preferences...**. When the **Preferences...** window is displayed, select **Disk**.
|
||||
The **Disk** window contains a slider that allows the maximum disk size to be set.
|
||||
**Warning**: If the maximum size is reduced, the current file will be deleted and, therefore, all
|
||||
containers and images will be lost.
|
||||
|
||||
|
|
@ -112,9 +112,7 @@ Docker logs.
|
|||
The Console lives in `/Applications/Utilities`; you can search for it with
|
||||
Spotlight Search.
|
||||
|
||||
To read the Docker app log messages, in the top left corner of the window, type
|
||||
"docker" and press Enter. Then select the "Any" button that appeared on its
|
||||
left, and select "Process" instead.
|
||||
To read the Docker app log messages, type `docker` in the Console window search bar and press Enter. Then select `ANY` to expand the drop-down list next to your `docker` search entry, and select `Process`.
|
||||
|
||||

|
||||
|
||||
|
|
|
|||
|
|
@ -16,7 +16,29 @@ notes](release-notes) are also available. (Following the CE release model,
|
|||
releases, and download stable and edge product installers at [Download Docker
|
||||
for Windows](install.md#download-docker-for-windows).
|
||||
|
||||
## Edge Releases of 2018
|
||||
## Edge Releases of 2019
|
||||
|
||||
### Docker Community Edition 2.0.2.1 2019-02-15
|
||||
|
||||
[Download](https://download.docker.com/win/edge/31274/Docker%20Desktop%20Installer.exe)
|
||||
|
||||
* Upgrades
|
||||
- [Docker 18.09.2](https://github.com/docker/docker-ce/releases/tag/v18.09.2), fixes [CVE-2019-5736](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-5736)
|
||||
|
||||
### Docker Community Edition 2.0.2.0 2019-02-06
|
||||
|
||||
[Download](https://download.docker.com/win/edge/30972/Docker%20Desktop%20Installer.exe)
|
||||
|
||||
* Upgrades
|
||||
- [Docker Compose 1.24.0-rc1](https://github.com/docker/compose/releases/tag/1.24.0-rc1)
|
||||
- [Docker Machine 0.16.1](https://github.com/docker/machine/releases/tag/v0.16.1)
|
||||
- [Compose on Kubernetes 0.4.18](https://github.com/docker/compose-on-kubernetes/releases/tag/v0.4.18)
|
||||
|
||||
* New
|
||||
- Rebranded UI
|
||||
|
||||
* Bug fixes and minor changes
|
||||
- Kubernetes: use default maximum number of pods for kubelet. [docker/for-mac#3453](https://github.com/docker/for-mac/issues/3453)
|
||||
|
||||
### Docker Community Edition 2.0.1.0 2019-01-11
|
||||
|
||||
|
|
@ -39,6 +61,8 @@ for Windows](install.md#download-docker-for-windows).
|
|||
- Quit will not check if service is running anymore
|
||||
- Fix UI lock when changing kubernetes state
|
||||
|
||||
## Edge Releases of 2018
|
||||
|
||||
### Docker Community Edition 2.0.0.0-win82 2018-12-07
|
||||
|
||||
[Download](https://download.docker.com/win/edge/29268/Docker%20for%20Windows%20Installer.exe)
|
||||
|
|
|
|||
|
|
@ -54,7 +54,7 @@ Hub](https://hub.docker.com/editions/community/docker-ce-desktop-windows){:
|
|||
Looking for information on using Windows containers?
|
||||
|
||||
* [Switch between Windows and Linux
|
||||
containers](https://docs.docker.com/docker-for-windows/#switch-between-windows-and-linux-containers)
|
||||
containers](/docker-for-windows/index.md#switch-between-windows-and-linux-containers)
|
||||
describes the Linux / Windows containers toggle in Docker Desktop for Windows and
|
||||
points you to the tutorial mentioned above.
|
||||
* [Getting Started with Windows Containers
|
||||
|
|
@ -99,7 +99,7 @@ accessible from any terminal window.
|
|||
|
||||
If the whale is hidden in the Notifications area, click the up arrow on the
|
||||
taskbar to show it. To learn more, see [Docker
|
||||
Settings](index.md#docker-settings-dialog).
|
||||
Settings](/docker-for-windows/index.md#docker-settings-dialog).
|
||||
|
||||
If you just installed the app, you also get a popup success message with
|
||||
suggested next steps, and a link to this documentation.
|
||||
|
|
|
|||
|
|
@ -20,6 +20,16 @@ for Windows](install.md#download-docker-for-windows).
|
|||
|
||||
## Stable Releases of 2019
|
||||
|
||||
### Docker Community Edition 2.0.0.3 2019-02-15
|
||||
|
||||
[Download](https://download.docker.com/win/stable/31259/Docker%20for%20Windows%20Installer.exe)
|
||||
|
||||
* Upgrades
|
||||
- [Docker 18.09.2](https://github.com/docker/docker-ce/releases/tag/v18.09.2), fixes [CVE-2019-5736](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-5736)
|
||||
|
||||
* Bug fix
|
||||
- Fix crash in system tray menu when the Hub login fails or Air gap mode
|
||||
|
||||
### Docker Community Edition 2.0.0.2 2019-01-16
|
||||
|
||||
[Download](https://download.docker.com/win/stable/30215/Docker%20for%20Windows%20Installer.exe)
|
||||
|
|
|
|||
|
|
@ -129,9 +129,9 @@ For each source:
|
|||
|
||||
* Specify the **Dockerfile location** as a path relative to the root of the source code repository. (If the Dockerfile is at the repository root, leave this path set to `/`.)
|
||||
|
||||
> **Note:** When Docker Hub pulls a branch from a source code repository, it performs
|
||||
a shallow clone (only the tip of the specified branch). Refer to [Advanced options for Autobuild and Autotest](advanced.md)
|
||||
for more information.
|
||||
> **Note**: When Docker Hub pulls a branch from a source code repository, it performs
|
||||
> a shallow clone (only the tip of the specified branch). Refer to [Advanced options for Autobuild and Autotest](advanced.md)
|
||||
> for more information.
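If a build needs the full git history (for example, to derive a version string from tags), one possible workaround is a custom build phase hook, as described in the advanced options page linked above. The following is only a sketch; it assumes a `hooks/post_checkout` script placed in the same directory as your Dockerfile:

```bash
#!/bin/bash
# hooks/post_checkout
# Fetch the full history so that tags and earlier commits are available
# during the build; ignore the error if the clone is already complete.
git fetch --unshallow || true
```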
|
||||
|
||||
### Environment variables for builds
|
||||
|
||||
|
|
|
|||
|
|
@ -13,7 +13,7 @@ Docker Hub Organizations let you create teams so you can give your team access t
|
|||
- **Organizations** are a collection of teams and repositories that can be managed together.
|
||||
- **Teams** are groups of Docker Hub users that belong to your organization.
|
||||
|
||||
**Note:** in Docker Hub, users cannot be associated directly to an organization. They below only to teams within an the organization.
|
||||
> **Note**: in Docker Hub, users cannot be associated directly to an organization. They belong only to teams within an organization.
|
||||
|
||||
### Creating an organization
|
||||
|
||||
|
|
@ -48,7 +48,7 @@ To create a team:
|
|||
2. Click on **Add User**
|
||||
3. Provide the user's Docker ID username _or_ email to add them to the team 
|
||||
|
||||
**Note:** you are not automatically added to teams created by your.
|
||||
> **Note**: you are not automatically added to teams created by your organization.
|
||||
|
||||
### Removing team members
|
||||
|
||||
|
|
@ -58,7 +58,7 @@ To remove a member from a team, click the **x** next to their name:
|
|||
|
||||
### Giving a team access to a repository
|
||||
|
||||
To provide a team to access a repository:
|
||||
To provide a team access to a repository:
|
||||
|
||||
1. Visit the repository list on Docker Hub by clicking on **Repositories**
|
||||
2. Select your organization in the namespace dropdown list
|
||||
|
|
|
|||
|
|
@ -466,11 +466,12 @@ root:[~/] #
|
|||
root:[~/] # ./inspectDockerImage --json gforghetti/apache:latest | jq
|
||||
```
|
||||
|
||||
Note: The output was piped to the **jq** command to display it "nicely".
|
||||
|
||||
> **Note**: The output was piped to the `jq` command to display it "nicely".
|
||||
|
||||
#### Output:
|
||||
|
||||
```
|
||||
```json
|
||||
{
|
||||
"Date": "Mon May 21 13:23:37 2018",
|
||||
"SystemOperatingSystem": "Operating System: Ubuntu 16.04.4 LTS",
|
||||
|
|
@ -580,7 +581,6 @@ Note: The output was piped to the **jq** command to display it "nicely".
|
|||
}
|
||||
]
|
||||
}
|
||||
root:[~/] #
|
||||
```
|
||||
|
||||
<a name="linux-with-html">
|
||||
|
|
|
|||
|
|
@ -364,12 +364,11 @@ gforghetti:~/$
|
|||
gforghetti:~:$ ./inspectDockerLoggingPlugin --json gforghetti/docker-log-driver-test:latest | jq
|
||||
```
|
||||
|
||||
> Note: The output was piped to the **jq** command to display it "nicely".
|
||||
> **Note**: The output was piped to the `jq` command to display it "nicely".
|
||||
|
||||
#### Output:
|
||||
|
||||
|
||||
```
|
||||
```json
|
||||
{
|
||||
"Date": "Mon May 21 14:38:28 2018",
|
||||
"SystemOperatingSystem": "Operating System: Ubuntu 16.04.4 LTS",
|
||||
|
|
|
|||
|
|
@ -11,7 +11,7 @@ for each of the following components:
|
|||
|
||||
1. Docker Swarm. [Backup Swarm resources like service and network definitions](/engine/swarm/admin_guide.md#back-up-the-swarm).
|
||||
2. Universal Control Plane (UCP). [Backup UCP configurations](/ee/ucp/admin/backups-and-disaster-recovery.md).
|
||||
3. Docker Trusted Registry (DTR). [Backup DTR configurations and images](/ee/dtr/admin/disaster-recovery/index.md).
|
||||
3. Docker Trusted Registry (DTR). [Backup DTR configurations and images](/ee/dtr/admin/disaster-recovery/create-a-backup.md).
|
||||
|
||||
Before proceeding to back up the next component, you should test the backup you've
|
||||
created to make sure it's not corrupt. One way to test your backups is to do
|
||||
|
|
@ -30,9 +30,9 @@ swarm and join new ones to bring the swarm to an healthy state.
|
|||
To restore Docker Enterprise Edition, you need to restore the individual
|
||||
components one by one:
|
||||
|
||||
1. Docker Engine. [Learn more](/engine/swarm/admin_guide.md#recover-from-disaster).
|
||||
1. Docker Swarm. [Learn more](/engine/swarm/admin_guide.md#recover-from-disaster).
|
||||
2. Universal Control Plane (UCP). [Learn more](/ee/ucp/admin/backups-and-disaster-recovery.md#restore-your-swarm).
|
||||
3. Docker Trusted Registry (DTR). [Learn more](/ee/dtr/admin/disaster-recovery/index.md).
|
||||
3. Docker Trusted Registry (DTR). [Learn more](/ee/dtr/admin/disaster-recovery/restore-from-backup.md).
|
||||
|
||||
## Where to go next
|
||||
|
||||
|
|
|
|||
|
|
@ -82,7 +82,7 @@ stored in the primary DTR. You can
|
|||
[customize the storage parameters](/registry/configuration/#storage),
|
||||
if you want the cached images to be backed by persistent storage.
|
||||
|
||||
> Note: Kubernetes Peristent Volumes or Persistent Volume Claims would have to be
|
||||
> **Note**: Kubernetes Persistent Volumes or Persistent Volume Claims would have to be
|
||||
> used to provide persistent backend storage capabilities for the cache.
|
||||
|
||||
```
|
||||
|
|
|
|||
|
|
@ -38,7 +38,8 @@ docker run -it --rm \
|
|||
--https-proxy username:password@<domain>:<port> \
|
||||
--ucp-insecure-tls
|
||||
```
|
||||
NOTE: DTR will hide the password portion of the URL, when it is displayed in the DTR UI.
|
||||
|
||||
> **Note**: DTR will hide the password portion of the URL when it is displayed in the DTR UI.
|
||||
|
||||
## Where to go next
|
||||
|
||||
|
|
|
|||
|
|
@ -45,7 +45,7 @@ It also reconfigures DTR removing all other nodes from the cluster, leaving DTR
|
|||
as a single-replica cluster with the replica you chose.
|
||||
|
||||
Start by finding the ID of the DTR replica that you want to repair from.
|
||||
You can find the list of replicas by navigating to the UCP web UI, or by using
|
||||
You can find the list of replicas by navigating to **Shared Resources > Stacks** or **Swarm > Volumes** (when using [swarm mode](/engine/swarm/)) on the UCP web interface, or by using
|
||||
a UCP client bundle to run:
|
||||
|
||||
{% raw %}
|
||||
|
|
@ -57,6 +57,15 @@ docker ps --format "{{.Names}}" | grep dtr
|
|||
```
|
||||
{% endraw %}
|
||||
|
||||
Another way to determine the replica ID is to SSH into a DTR node and run the following:
|
||||
|
||||
{% raw %}
|
||||
```bash
|
||||
REPLICA_ID=$(docker inspect -f '{{.Name}}' $(docker ps -q -f name=dtr-rethink) | cut -f 3 -d '-') \
|
||||
&& echo $REPLICA_ID
|
||||
```
|
||||
{% endraw %}
|
||||
|
||||
Then, use your UCP client bundle to run the emergency repair command:
|
||||
|
||||
```bash
|
||||
|
|
|
|||
|
|
@ -54,7 +54,7 @@ To remove unhealthy replicas, you'll first have to find the replica ID
|
|||
of one of the replicas you want to keep, and the replica IDs of the unhealthy
|
||||
replicas you want to remove.
|
||||
|
||||
You can find this in the **Stacks** page of the UCP web UI, or by using the UCP
|
||||
You can find the list of replicas by navigating to **Shared Resources > Stacks** or **Swarm > Volumes** (when using [swarm mode](/engine/swarm/)) on the UCP web interface, or by using the UCP
|
||||
client bundle to run:
|
||||
|
||||
{% raw %}
|
||||
|
|
@ -66,6 +66,15 @@ docker ps --format "{{.Names}}" | grep dtr
|
|||
```
|
||||
{% endraw %}
|
||||
|
||||
Another way to determine the replica ID is to SSH into a DTR node and run the following:
|
||||
|
||||
{% raw %}
|
||||
```bash
|
||||
REPLICA_ID=$(docker inspect -f '{{.Name}}' $(docker ps -q -f name=dtr-rethink) | cut -f 3 -d '-') \
|
||||
&& echo $REPLICA_ID
|
||||
```
|
||||
{% endraw %}
|
||||
|
||||
Then use the UCP client bundle to remove the unhealthy replicas:
|
||||
|
||||
```bash
|
||||
|
|
|
|||
|
|
@ -44,7 +44,7 @@ For each machine where you want to install DTR:
|
|||
`docker load` command to load the Docker images from the tar archive:
|
||||
|
||||
```bash
|
||||
$ docker load < dtr.tar.gz
|
||||
$ docker load -i dtr.tar.gz
|
||||
```
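To verify that the images were loaded, list them afterwards. This is a sketch that assumes the DTR images contain `dtr` in their repository names:

```bash
$ docker image ls | grep dtr
```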
|
||||
|
||||
## Install DTR
|
||||
|
|
|
|||
|
|
@ -0,0 +1,243 @@
|
|||
---
|
||||
title: Troubleshoot Docker Trusted Registry
|
||||
description: Learn how to troubleshoot your DTR installation.
|
||||
keywords: registry, monitor, troubleshoot
|
||||
redirect_from: /ee/dtr/admin/monitor-and-troubleshoot/troubleshoot-with-logs
|
||||
---
|
||||
|
||||
This guide contains tips and tricks for troubleshooting DTR problems.
|
||||
|
||||
## Troubleshoot overlay networks
|
||||
|
||||
High availability in DTR depends on swarm overlay networking. One way to test
|
||||
if overlay networks are working correctly is to deploy containers to the same
|
||||
overlay network on different nodes and see if they can ping one another.
|
||||
|
||||
Use SSH to log into a node and run:
|
||||
|
||||
```bash
|
||||
docker run -it --rm \
|
||||
--net dtr-ol --name overlay-test1 \
|
||||
--entrypoint sh {{ page.dtr_org }}/{{ page.dtr_repo }}
|
||||
```
|
||||
|
||||
Then use SSH to log into another node and run:
|
||||
|
||||
```bash
|
||||
docker run -it --rm \
|
||||
--net dtr-ol --name overlay-test2 \
|
||||
--entrypoint ping {{ page.dtr_org }}/{{ page.dtr_repo }} -c 3 overlay-test1
|
||||
```
|
||||
|
||||
If the second command succeeds, it indicates overlay networking is working
|
||||
correctly between those nodes.
|
||||
|
||||
You can run this test with any attachable overlay network and any Docker image
|
||||
that has `sh` and `ping`.
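If you prefer not to reuse the `dtr-ol` network for this test, a minimal sketch, assuming you run it from a manager node and using a throwaway network name of your own choosing (`overlay-test-net` here), is:

```bash
# Create an attachable overlay network just for the test...
docker network create --driver overlay --attachable overlay-test-net

# ...run the two test containers above with `--net overlay-test-net`, then clean up:
docker network rm overlay-test-net
```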
|
||||
|
||||
|
||||
## Access RethinkDB directly
|
||||
|
||||
DTR uses RethinkDB for persisting data and replicating it across replicas.
|
||||
It might be helpful to connect directly to the RethinkDB instance running on a
|
||||
DTR replica to check the DTR internal state.
|
||||
|
||||
> **Warning**: Modifying RethinkDB directly is not supported and may cause
|
||||
> problems.
|
||||
{: .warning }
|
||||
|
||||
### via RethinkCLI
|
||||
|
||||
As of v2.5.5, the [RethinkCLI has been removed](/ee/dtr/release-notes/#255) from the RethinkDB image along with other unused components. You can now run RethinkCLI from a separate image in the `dockerhubenterprise` organization. Note that the commands below are using separate tags for non-interactive and interactive modes.
|
||||
|
||||
#### Non-interactive
|
||||
|
||||
Use SSH to log into a node that is running a DTR replica, and run the following:
|
||||
|
||||
{% raw %}
|
||||
```bash
|
||||
# List problems in the cluster detected by the current node.
|
||||
REPLICA_ID=$(docker container ls --filter=name=dtr-rethink --format '{{.Names}}' | cut -d'/' -f2 | cut -d'-' -f3 | head -n 1) && echo 'r.db("rethinkdb").table("current_issues")' | docker run --rm -i --net dtr-ol -v "dtr-ca-${REPLICA_ID}:/ca" -e DTR_REPLICA_ID=$REPLICA_ID dockerhubenterprise/rethinkcli:v2.2.0-ni non-interactive
|
||||
```
|
||||
{% endraw %}
|
||||
|
||||
On a healthy cluster the output will be `[]`.
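Other read-only queries can be run the same way. For example, the following sketch (same assumptions as above) lists the names of the RethinkDB servers known to the current node:

{% raw %}
```bash
REPLICA_ID=$(docker container ls --filter=name=dtr-rethink --format '{{.Names}}' | cut -d'/' -f2 | cut -d'-' -f3 | head -n 1) && echo 'r.db("rethinkdb").table("server_status").pluck("name")' | docker run --rm -i --net dtr-ol -v "dtr-ca-${REPLICA_ID}:/ca" -e DTR_REPLICA_ID=$REPLICA_ID dockerhubenterprise/rethinkcli:v2.2.0-ni non-interactive
```
{% endraw %}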
|
||||
|
||||
#### Interactive
|
||||
|
||||
Starting in DTR 2.5.5, you can run RethinkCLI from a separate image. First, set an environment variable for your DTR replica ID:
|
||||
|
||||
{% raw %}
```bash
|
||||
REPLICA_ID=$(docker inspect -f '{{.Name}}' $(docker ps -q -f name=dtr-rethink) | cut -f 3 -d '-')
|
||||
```
{% endraw %}
|
||||
|
||||
RethinkDB stores data in different databases that contain multiple tables. Run the following command to get into interactive mode
|
||||
and query the contents of the DB:
|
||||
|
||||
```bash
|
||||
docker run -it --rm --net dtr-ol -v dtr-ca-$REPLICA_ID:/ca dockerhubenterprise/rethinkcli:v2.3.0 $REPLICA_ID
|
||||
```
|
||||
|
||||
```none
|
||||
# List problems in the cluster detected by the current node.
|
||||
> r.db("rethinkdb").table("current_issues")
|
||||
[]
|
||||
|
||||
# List all the DBs in RethinkDB
|
||||
> r.dbList()
|
||||
[ 'dtr2',
|
||||
'jobrunner',
|
||||
'notaryserver',
|
||||
'notarysigner',
|
||||
'rethinkdb' ]
|
||||
|
||||
# List the tables in the dtr2 db
|
||||
> r.db('dtr2').tableList()
|
||||
[ 'blob_links',
|
||||
'blobs',
|
||||
'client_tokens',
|
||||
'content_caches',
|
||||
'events',
|
||||
'layer_vuln_overrides',
|
||||
'manifests',
|
||||
'metrics',
|
||||
'namespace_team_access',
|
||||
'poll_mirroring_policies',
|
||||
'promotion_policies',
|
||||
'properties',
|
||||
'pruning_policies',
|
||||
'push_mirroring_policies',
|
||||
'repositories',
|
||||
'repository_team_access',
|
||||
'scanned_images',
|
||||
'scanned_layers',
|
||||
'tags',
|
||||
'user_settings',
|
||||
'webhooks' ]
|
||||
|
||||
# List the entries in the repositories table
|
||||
> r.db('dtr2').table('repositories')
|
||||
[ { enableManifestLists: false,
|
||||
id: 'ac9614a8-36f4-4933-91fa-3ffed2bd259b',
|
||||
immutableTags: false,
|
||||
name: 'test-repo-1',
|
||||
namespaceAccountID: 'fc3b4aec-74a3-4ba2-8e62-daed0d1f7481',
|
||||
namespaceName: 'admin',
|
||||
pk: '3a4a79476d76698255ab505fb77c043655c599d1f5b985f859958ab72a4099d6',
|
||||
pulls: 0,
|
||||
pushes: 0,
|
||||
scanOnPush: false,
|
||||
tagLimit: 0,
|
||||
visibility: 'public' },
|
||||
{ enableManifestLists: false,
|
||||
id: '9f43f029-9683-459f-97d9-665ab3ac1fda',
|
||||
immutableTags: false,
|
||||
longDescription: '',
|
||||
name: 'testing',
|
||||
namespaceAccountID: 'fc3b4aec-74a3-4ba2-8e62-daed0d1f7481',
|
||||
namespaceName: 'admin',
|
||||
pk: '6dd09ac485749619becaff1c17702ada23568ebe0a40bb74a330d058a757e0be',
|
||||
pulls: 0,
|
||||
pushes: 0,
|
||||
scanOnPush: false,
|
||||
shortDescription: '',
|
||||
tagLimit: 1,
|
||||
visibility: 'public' } ]
|
||||
```
|
||||
|
||||
Individual DBs and tables are a private implementation detail and may change in DTR
|
||||
from version to version, but you can always use `dbList()` and `tableList()` to explore
|
||||
the contents and data structure.
|
||||
|
||||
[Learn more about RethinkDB queries](https://www.rethinkdb.com/docs/guide/javascript/).
|
||||
|
||||
### via API
|
||||
|
||||
To check on the overall status of your DTR cluster without interacting with RethinkCLI, run the following API request:
|
||||
|
||||
```bash
|
||||
curl -u admin:$TOKEN -X GET "https://<dtr-url>/api/v0/meta/cluster_status" -H "accept: application/json"
|
||||
```
|
||||
|
||||
#### Example API Response
|
||||
```none
|
||||
{
|
||||
"rethink_system_tables": {
|
||||
"cluster_config": [
|
||||
{
|
||||
"heartbeat_timeout_secs": 10,
|
||||
"id": "heartbeat"
|
||||
}
|
||||
],
|
||||
"current_issues": [],
|
||||
"db_config": [
|
||||
{
|
||||
"id": "339de11f-b0c2-4112-83ac-520cab68d89c",
|
||||
"name": "notaryserver"
|
||||
},
|
||||
{
|
||||
"id": "aa2e893f-a69a-463d-88c1-8102aafebebc",
|
||||
"name": "dtr2"
|
||||
},
|
||||
{
|
||||
"id": "bdf14a41-9c31-4526-8436-ab0fed00c2fd",
|
||||
"name": "jobrunner"
|
||||
},
|
||||
{
|
||||
"id": "f94f0e35-b7b1-4a2f-82be-1bdacca75039",
|
||||
"name": "notarysigner"
|
||||
}
|
||||
],
|
||||
"server_status": [
|
||||
{
|
||||
"id": "9c41fbc6-bcf2-4fad-8960-d117f2fdb06a",
|
||||
"name": "dtr_rethinkdb_5eb9459a7832",
|
||||
"network": {
|
||||
"canonical_addresses": [
|
||||
{
|
||||
"host": "dtr-rethinkdb-5eb9459a7832.dtr-ol",
|
||||
"port": 29015
|
||||
}
|
||||
],
|
||||
"cluster_port": 29015,
|
||||
"connected_to": {
|
||||
"dtr_rethinkdb_56b65e8c1404": true
|
||||
},
|
||||
"hostname": "9e83e4fee173",
|
||||
"http_admin_port": "<no http admin>",
|
||||
"reql_port": 28015,
|
||||
"time_connected": "2019-02-15T00:19:22.035Z"
|
||||
},
|
||||
}
|
||||
...
|
||||
]
|
||||
}
|
||||
}
|
||||
|
||||
```
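If you only care about the detected problems, you can filter the response. This sketch assumes `jq` is installed and that `$TOKEN` holds valid credentials, as in the request above:

```bash
curl -u admin:$TOKEN -X GET "https://<dtr-url>/api/v0/meta/cluster_status" -H "accept: application/json" | jq '.rethink_system_tables.current_issues'
```

On a healthy cluster this prints `[]`.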
|
||||
|
||||
## Recover from an unhealthy replica
|
||||
|
||||
When a DTR replica is unhealthy or down, the DTR web UI displays a warning:
|
||||
|
||||
```none
|
||||
Warning: The following replicas are unhealthy: 59e4e9b0a254; Reasons: Replica reported health too long ago: 2017-02-18T01:11:20Z; Replicas 000000000000, 563f02aba617 are still healthy.
|
||||
```
|
||||
|
||||
To fix this, you should remove the unhealthy replica from the DTR cluster,
|
||||
and join a new one. Start by running:
|
||||
|
||||
```bash
|
||||
docker run -it --rm \
|
||||
{{ page.dtr_org }}/{{ page.dtr_repo }}:{{ page.dtr_version }} remove \
|
||||
--ucp-insecure-tls
|
||||
```
|
||||
|
||||
And then:
|
||||
|
||||
```bash
|
||||
docker run -it --rm \
|
||||
{{ page.dtr_org }}/{{ page.dtr_repo }}:{{ page.dtr_version }} join \
|
||||
--ucp-node <ucp-node-name> \
|
||||
--ucp-insecure-tls
|
||||
```
|
||||
|
|
@ -1,138 +0,0 @@
|
|||
---
|
||||
title: Troubleshoot Docker Trusted Registry
|
||||
description: Learn how to troubleshoot your DTR installation.
|
||||
keywords: registry, monitor, troubleshoot
|
||||
---
|
||||
|
||||
This guide contains tips and tricks for troubleshooting DTR problems.
|
||||
|
||||
## Troubleshoot overlay networks
|
||||
|
||||
High availability in DTR depends on swarm overlay networking. One way to test
|
||||
if overlay networks are working correctly is to deploy containers to the same
|
||||
overlay network on different nodes and see if they can ping one another.
|
||||
|
||||
Use SSH to log into a node and run:
|
||||
|
||||
```bash
|
||||
docker run -it --rm \
|
||||
--net dtr-ol --name overlay-test1 \
|
||||
--entrypoint sh {{ page.dtr_org }}/{{ page.dtr_repo }}
|
||||
```
|
||||
|
||||
Then use SSH to log into another node and run:
|
||||
|
||||
```bash
|
||||
docker run -it --rm \
|
||||
--net dtr-ol --name overlay-test2 \
|
||||
--entrypoint ping {{ page.dtr_org }}/{{ page.dtr_repo }} -c 3 overlay-test1
|
||||
```
|
||||
|
||||
If the second command succeeds, it indicates overlay networking is working
|
||||
correctly between those nodes.
|
||||
|
||||
You can run this test with any attachable overlay network and any Docker image
|
||||
that has `sh` and `ping`.
|
||||
|
||||
|
||||
## Access RethinkDB directly
|
||||
|
||||
DTR uses RethinkDB for persisting data and replicating it across replicas.
|
||||
It might be helpful to connect directly to the RethinkDB instance running on a
|
||||
DTR replica to check the DTR internal state.
|
||||
|
||||
> **Warning**: Modifying RethinkDB directly is not supported and may cause
|
||||
> problems.
|
||||
{: .warning }
|
||||
|
||||
Use SSH to log into a node that is running a DTR replica, and run the following
|
||||
commands:
|
||||
|
||||
{% raw %}
|
||||
```bash
|
||||
# List problems in the cluster detected by the current node.
|
||||
echo 'r.db("rethinkdb").table("current_issues")' | \
|
||||
docker exec -i \
|
||||
$(docker ps -q --filter name=dtr-rethinkdb) \
|
||||
rethinkcli non-interactive; \
|
||||
echo
|
||||
```
|
||||
{% endraw %}
|
||||
|
||||
On a healthy cluster the output will be `[]`.
|
||||
|
||||
RethinkDB stores data in different databases that contain multiple tables. This
|
||||
container can also be used to connect to the local DTR replica and
|
||||
interactively query the contents of the DB.
|
||||
|
||||
{% raw %}
|
||||
```bash
|
||||
docker exec -it $(docker ps -q --filter name=dtr-rethinkdb) rethinkcli
|
||||
```
|
||||
{% endraw %}
|
||||
|
||||
```none
|
||||
# List problems in the cluster detected by the current node.
|
||||
> r.db("rethinkdb").table("current_issues")
|
||||
[]
|
||||
|
||||
# List all the DBs in RethinkDB
|
||||
> r.dbList()
|
||||
[ 'dtr2',
|
||||
'jobrunner',
|
||||
'notaryserver',
|
||||
'notarysigner',
|
||||
'rethinkdb' ]
|
||||
|
||||
# List the tables in the dtr2 db
|
||||
> r.db('dtr2').tableList()
|
||||
[ 'client_tokens',
|
||||
'events',
|
||||
'manifests',
|
||||
'namespace_team_access',
|
||||
'properties',
|
||||
'repositories',
|
||||
'repository_team_access',
|
||||
'tags' ]
|
||||
|
||||
# List the entries in the repositories table
|
||||
> r.db('dtr2').table('repositories')
|
||||
[ { id: '19f1240a-08d8-4979-a898-6b0b5b2338d8',
|
||||
name: 'my-test-repo',
|
||||
namespaceAccountID: '924bf131-6213-43fa-a5ed-d73c7ccf392e',
|
||||
pk: 'cf5e8bf1197e281c747f27e203e42e22721d5c0870b06dfb1060ad0970e99ada',
|
||||
visibility: 'public' },
|
||||
...
|
||||
```
|
||||
|
||||
Individual DBs and tables are a private implementation detail and may change in DTR
|
||||
from version to version, but you can always use `dbList()` and `tableList()` to explore
|
||||
the contents and data structure.
|
||||
|
||||
[Learn more about RethinkDB queries](https://www.rethinkdb.com/docs/guide/javascript/).
|
||||
|
||||
## Recover from an unhealthy replica
|
||||
|
||||
When a DTR replica is unhealthy or down, the DTR web UI displays a warning:
|
||||
|
||||
```none
|
||||
Warning: The following replicas are unhealthy: 59e4e9b0a254; Reasons: Replica reported health too long ago: 2017-02-18T01:11:20Z; Replicas 000000000000, 563f02aba617 are still healthy.
|
||||
```
|
||||
|
||||
To fix this, you should remove the unhealthy replica from the DTR cluster,
|
||||
and join a new one. Start by running:
|
||||
|
||||
```bash
|
||||
docker run -it --rm \
|
||||
{{ page.dtr_org }}/{{ page.dtr_repo }}:{{ page.dtr_version }} remove \
|
||||
--ucp-insecure-tls
|
||||
```
|
||||
|
||||
And then:
|
||||
|
||||
```bash
|
||||
docker run -it --rm \
|
||||
{{ page.dtr_org }}/{{ page.dtr_repo }}:{{ page.dtr_version }} join \
|
||||
--ucp-node <ucp-node-name> \
|
||||
--ucp-insecure-tls
|
||||
```
|
||||
|
Before Width: | Height: | Size: 129 KiB After Width: | Height: | Size: 175 KiB |
|
Before Width: | Height: | Size: 105 KiB After Width: | Height: | Size: 122 KiB |
|
Before Width: | Height: | Size: 124 KiB After Width: | Height: | Size: 154 KiB |
|
Before Width: | Height: | Size: 90 KiB After Width: | Height: | Size: 118 KiB |
|
Before Width: | Height: | Size: 119 KiB After Width: | Height: | Size: 136 KiB |
|
Before Width: | Height: | Size: 105 KiB After Width: | Height: | Size: 109 KiB |
|
Before Width: | Height: | Size: 174 KiB After Width: | Height: | Size: 175 KiB |
|
|
@ -22,6 +22,35 @@ to upgrade your installation to the latest release.
|
|||
|
||||
# Version 2.6
|
||||
|
||||
## 2.6.3
|
||||
|
||||
(2019-2-28)
|
||||
|
||||
### Changelog
|
||||
|
||||
* Bump the Golang version that is used to build DTR to version 1.11.5. (docker/dhe-deploy#10060)
|
||||
|
||||
### Bug Fixes
|
||||
|
||||
* Users with read-only permissions can no longer see the README edit button for a repository. (docker/dhe-deploy#10056)
|
||||
|
||||
### Known issues
|
||||
|
||||
* Docker Engine Enterprise Edition (Docker EE) Upgrade
|
||||
* There are [important changes to the upgrade process](/ee/upgrade) that, if not correctly followed, can have impact on the availability of applications running on the Swarm during upgrades. These constraints impact any upgrades coming from any version before `18.09` to version `18.09` or greater. For DTR-specific changes, see [2.5 to 2.6 upgrade](/ee/dtr/admin/upgrade/#25-to-26-upgrade).
|
||||
|
||||
* Web Interface
|
||||
* Poll mirroring for Docker plugins such as `docker/imagefs` is currently broken. (docker/dhe-deploy #9490)
|
||||
* When viewing the details of a scanned image tag, the header may display a different vulnerability count from the layer details. (docker/dhe-deploy #9474)
|
||||
* In order to set a tag limit for pruning purposes, immutability must be turned off for a repository. This limitation is not clear in the **Repository Settings** view. (docker/dhe-deploy #9554)
|
||||
|
||||
* Webhooks
|
||||
* When configured for "Image promoted from repository" events, a webhook notification is triggered twice during an image promotion when scanning is enabled on a repository. (docker/dhe-deploy #9685)
|
||||
* HTTPS webhooks do not go through HTTPS proxy when configured. (docker/dhe-deploy #9492)
|
||||
|
||||
* System
|
||||
* When upgrading from `2.5` to `2.6`, the system will run a `metadatastoremigration` job after a successful upgrade. This is necessary for online garbage collection. If the three system attempts fail, you will have to retrigger the `metadatastoremigration` job manually. [Learn about manual metadata store migration](/ee/dtr/admin/upgrade/#25-to-26-upgrade).
|
||||
|
||||
## 2.6.2
|
||||
|
||||
(2019-1-29)
|
||||
|
|
@ -30,6 +59,24 @@ to upgrade your installation to the latest release.
|
|||
|
||||
* Fixed a bug where scans of Windows images were stuck in the Pending state. (docker/dhe-deploy #9969)
|
||||
|
||||
### Known issues
|
||||
|
||||
* Docker Engine Enterprise Edition (Docker EE) Upgrade
|
||||
* There are [important changes to the upgrade process](/ee/upgrade) that, if not correctly followed, can have impact on the availability of applications running on the Swarm during upgrades. These constraints impact any upgrades coming from any version before `18.09` to version `18.09` or greater. For DTR-specific changes, see [2.5 to 2.6 upgrade](/ee/dtr/admin/upgrade/#25-to-26-upgrade).
|
||||
|
||||
* Web Interface
|
||||
* Users with read-only permissions to a repository can edit the repository README but their changes will not be saved. Only repository admins should have the ability to [edit the description](/ee/dtr/admin/manage-users/permission-levels/#team-permission-levels) of a repository. (docker/dhe-deploy #9677)
|
||||
* Poll mirroring for Docker plugins such as `docker/imagefs` is currently broken. (docker/dhe-deploy #9490)
|
||||
* When viewing the details of a scanned image tag, the header may display a different vulnerability count from the layer details. (docker/dhe-deploy #9474)
|
||||
* In order to set a tag limit for pruning purposes, immutability must be turned off for a repository. This limitation is not clear in the **Repository Settings** view. (docker/dhe-deploy #9554)
|
||||
|
||||
* Webhooks
|
||||
* When configured for "Image promoted from repository" events, a webhook notification is triggered twice during an image promotion when scanning is enabled on a repository. (docker/dhe-deploy #9685)
|
||||
* HTTPS webhooks do not go through HTTPS proxy when configured. (docker/dhe-deploy #9492)
|
||||
|
||||
* System
|
||||
* When upgrading from `2.5` to `2.6`, the system will run a `metadatastoremigration` job after a successful upgrade. This is necessary for online garbage collection. If the three system attempts fail, you will have to retrigger the `metadatastoremigration` job manually. [Learn about manual metadata store migration](/ee/dtr/admin/upgrade/#25-to-26-upgrade).
|
||||
|
||||
## 2.6.1
|
||||
|
||||
(2019-01-09)
|
||||
|
|
@ -43,6 +90,24 @@ to upgrade your installation to the latest release.
|
|||
### Changelog
|
||||
* GoLang version bump to 1.11.4.
|
||||
|
||||
### Known issues
|
||||
|
||||
* Docker Engine Enterprise Edition (Docker EE) Upgrade
|
||||
* There are [important changes to the upgrade process](/ee/upgrade) that, if not correctly followed, can have impact on the availability of applications running on the Swarm during upgrades. These constraints impact any upgrades coming from any version before `18.09` to version `18.09` or greater. For DTR-specific changes, see [2.5 to 2.6 upgrade](/ee/dtr/admin/upgrade/#25-to-26-upgrade).
|
||||
|
||||
* Web Interface
|
||||
* Users with read-only permissions to a repository can edit the repository README but their changes will not be saved. Only repository admins should have the ability to [edit the description](/ee/dtr/admin/manage-users/permission-levels/#team-permission-levels) of a repository. (docker/dhe-deploy #9677)
|
||||
* Poll mirroring for Docker plugins such as `docker/imagefs` is currently broken. (docker/dhe-deploy #9490)
|
||||
* When viewing the details of a scanned image tag, the header may display a different vulnerability count from the layer details. (docker/dhe-deploy #9474)
|
||||
* In order to set a tag limit for pruning purposes, immutability must be turned off for a repository. This limitation is not clear in the **Repository Settings** view. (docker/dhe-deploy #9554)
|
||||
|
||||
* Webhooks
|
||||
* When configured for "Image promoted from repository" events, a webhook notification is triggered twice during an image promotion when scanning is enabled on a repository. (docker/dhe-deploy #9685)
|
||||
* HTTPS webhooks do not go through HTTPS proxy when configured. (docker/dhe-deploy #9492)
|
||||
|
||||
* System
|
||||
* When upgrading from `2.5` to `2.6`, the system will run a `metadatastoremigration` job after a successful upgrade. This is necessary for online garbage collection. If the three system attempts fail, you will have to retrigger the `metadatastoremigration` job manually. [Learn about manual metadata store migration](/ee/dtr/admin/upgrade/#25-to-26-upgrade).
|
||||
|
||||
## 2.6.0
|
||||
|
||||
(2018-11-08)
|
||||
|
|
@ -84,6 +149,7 @@ to upgrade your installation to the latest release.
|
|||
* Users with read-only permissions to a repository can edit the repository README but their changes will not be saved. Only repository admins should have the ability to [edit the description](/ee/dtr/admin/manage-users/permission-levels/#team-permission-levels) of a repository. (docker/dhe-deploy #9677)
|
||||
* Poll mirroring for Docker plugins such as `docker/imagefs` is currently broken. (docker/dhe-deploy #9490)
|
||||
* When viewing the details of a scanned image tag, the header may display a different vulnerability count from the layer details. (docker/dhe-deploy #9474)
|
||||
* In order to set a tag limit for pruning purposes, immutability must be turned off for a repository. This limitation is not clear in the **Repository Settings** view. (docker/dhe-deploy #9554)
|
||||
|
||||
* Webhooks
|
||||
* When configured for "Image promoted from repository" events, a webhook notification is triggered twice during an image promotion when scanning is enabled on a repository. (docker/dhe-deploy #9685)
|
||||
|
|
@ -104,6 +170,43 @@ to upgrade your installation to the latest release.
|
|||
|
||||
# Version 2.5
|
||||
|
||||
## 2.5.9
|
||||
|
||||
(2019-2-28)
|
||||
|
||||
### Changelog
|
||||
|
||||
* Bump the Golang version that is used to build DTR to version 1.10.8. (docker/dhe-deploy#10071)
|
||||
|
||||
### Known Issues
|
||||
* Web Interface
|
||||
* The web interface shows "This repository has no tags" in repositories where tags
|
||||
have long names. As a workaround, reduce the length of the name for the
|
||||
repository and tag.
|
||||
* When deleting a repository with signed images, the DTR web interface no longer
|
||||
shows instructions on how to delete trust data.
|
||||
* There's no web interface support to update mirroring policies when rotating the TLS
|
||||
certificates used by DTR. Use the API instead.
|
||||
* The web interface for promotion policies is currently broken if you have a large number
|
||||
of repositories.
|
||||
* Clicking "Save & Apply" on a promotion policy doesn't work.
|
||||
* Webhooks
|
||||
* There is no webhook event for when an image is pulled.
|
||||
* HTTPS webhooks do not go through HTTPS proxy when configured. (docker/dhe-deploy #9492)
|
||||
* When configured for "Image promoted from repository" events, a webhook notification will be triggered twice during an image promotion when scanning is enabled on a repository. (docker/dhe-deploy #9685)
|
||||
* Online garbage collection
|
||||
* The events API won't report events when tags and manifests are deleted.
|
||||
* The events API won't report blobs deleted by the garbage collection job.
|
||||
* Docker EE Advanced features
|
||||
* Scanning any new push after metadatastore migration will not yet work.
|
||||
* Pushes to repos with promotion policies (repo as source) are broken when an
|
||||
image has a layer over 100MB.
|
||||
* On upgrade the scanningstore container may restart with this error message:
|
||||
FATAL: database files are incompatible with server
|
||||
|
||||
* System
|
||||
* When opting into online garbage collection, the system will run a `metadatastoremigration` job after a successful upgrade. If the three system attempts fail, you will have to retrigger the `metadatastoremigration` job manually. [Learn about manual metadata store migration](../../v18.03/ee/dtr/admin/configure/garbage-collection/#metadata-store-migration).
|
||||
|
||||
## 2.5.8
|
||||
|
||||
(2019-1-29)
|
||||
|
|
@ -112,6 +215,35 @@ to upgrade your installation to the latest release.
|
|||
|
||||
* Fixed an issue that prevented vulnerability updates from running if they were previously interrupted. (docker/dhe-deploy #9958)
|
||||
|
||||
### Known Issues
|
||||
* Web Interface
|
||||
* The web interface shows "This repository has no tags" in repositories where tags
|
||||
have long names. As a workaround, reduce the length of the name for the
|
||||
repository and tag.
|
||||
* When deleting a repository with signed images, the DTR web interface no longer
|
||||
shows instructions on how to delete trust data.
|
||||
* There's no web interface support to update mirroring policies when rotating the TLS
|
||||
certificates used by DTR. Use the API instead.
|
||||
* The web interface for promotion policies is currently broken if you have a large number
|
||||
of repositories.
|
||||
* Clicking "Save & Apply" on a promotion policy doesn't work.
|
||||
* Webhooks
|
||||
* There is no webhook event for when an image is pulled.
|
||||
* HTTPS webhooks do not go through HTTPS proxy when configured. (docker/dhe-deploy #9492)
|
||||
* When configured for "Image promoted from repository" events, a webhook notification will be triggered twice during an image promotion when scanning is enabled on a repository. (docker/dhe-deploy #9685)
|
||||
* Online garbage collection
|
||||
* The events API won't report events when tags and manifests are deleted.
|
||||
* The events API won't report blobs deleted by the garbage collection job.
|
||||
* Docker EE Advanced features
|
||||
* Scanning any new push after metadatastore migration will not yet work.
|
||||
* Pushes to repos with promotion policies (repo as source) are broken when an
|
||||
image has a layer over 100MB.
|
||||
* On upgrade the scanningstore container may restart with this error message:
|
||||
FATAL: database files are incompatible with server
|
||||
|
||||
* System
|
||||
* When opting into online garbage collection, the system will run a `metadatastoremigration` job after a successful upgrade. If the three system attempts fail, you will have to retrigger the `metadatastoremigration` job manually. [Learn about manual metadata store migration](../../v18.03/ee/dtr/admin/configure/garbage-collection/#metadata-store-migration).
|
||||
|
||||
## 2.5.7
|
||||
|
||||
(2019-01-09)
|
||||
|
|
@ -125,6 +257,35 @@ to upgrade your installation to the latest release.
|
|||
### Changelog
|
||||
* GoLang version bump to 1.10.7.
|
||||
|
||||
### Known Issues
|
||||
* Web Interface
|
||||
* The web interface shows "This repository has no tags" in repositories where tags
|
||||
have long names. As a workaround, reduce the length of the name for the
|
||||
repository and tag.
|
||||
* When deleting a repository with signed images, the DTR web interface no longer
|
||||
shows instructions on how to delete trust data.
|
||||
* There's no web interface support to update mirroring policies when rotating the TLS
|
||||
certificates used by DTR. Use the API instead.
|
||||
* The web interface for promotion policies is currently broken if you have a large number
|
||||
of repositories.
|
||||
* Clicking "Save & Apply" on a promotion policy doesn't work.
|
||||
* Webhooks
|
||||
* There is no webhook event for when an image is pulled.
|
||||
* HTTPS webhooks do not go through HTTPS proxy when configured. (docker/dhe-deploy #9492)
|
||||
* When configured for "Image promoted from repository" events, a webhook notification will be triggered twice during an image promotion when scanning is enabled on a repository. (docker/dhe-deploy #9685)
|
||||
* Online garbage collection
|
||||
* The events API won't report events when tags and manifests are deleted.
|
||||
* The events API won't report blobs deleted by the garbage collection job.
|
||||
* Docker EE Advanced features
|
||||
* Scanning any new push after metadatastore migration will not yet work.
|
||||
* Pushes to repos with promotion policies (repo as source) are broken when an
|
||||
image has a layer over 100MB.
|
||||
* On upgrade the scanningstore container may restart with this error message:
|
||||
FATAL: database files are incompatible with server
|
||||
|
||||
* System
|
||||
* When opting into online garbage collection, the system will run a `metadatastoremigration` job after a successful upgrade. If the three system attempts fail, you will have to retrigger the `metadatastoremigration` job manually. [Learn about manual metadata store migration](../../v18.03/ee/dtr/admin/configure/garbage-collection/#metadata-store-migration).
|
||||
|
||||
## 2.5.6
|
||||
|
||||
(2018-10-25)
|
||||
|
|
@ -138,6 +299,35 @@ to upgrade your installation to the latest release.
|
|||
* Backported ManifestList fixes. (docker/dhe-deploy#9547)
|
||||
* Removed support sidebar link and associated content. (docker/dhe-deploy#9411)
|
||||
|
||||
### Known Issues
|
||||
* Web Interface
|
||||
* The web interface shows "This repository has no tags" in repositories where tags
|
||||
have long names. As a workaround, reduce the length of the name for the
|
||||
repository and tag.
|
||||
* When deleting a repository with signed images, the DTR web interface no longer
|
||||
shows instructions on how to delete trust data.
|
||||
* There's no web interface support to update mirroring policies when rotating the TLS
|
||||
certificates used by DTR. Use the API instead.
|
||||
* The web interface for promotion policies is currently broken if you have a large number
|
||||
of repositories.
|
||||
* Clicking "Save & Apply" on a promotion policy doesn't work.
|
||||
* Webhooks
|
||||
* There is no webhook event for when an image is pulled.
|
||||
* HTTPS webhooks do not go through HTTPS proxy when configured. (docker/dhe-deploy #9492)
|
||||
* When configured for "Image promoted from repository" events, a webhook notification will be triggered twice during an image promotion when scanning is enabled on a repository. (docker/dhe-deploy #9685)
|
||||
* Online garbage collection
|
||||
* The events API won't report events when tags and manifests are deleted.
|
||||
* The events API won't report blobs deleted by the garbage collection job.
|
||||
* Docker EE Advanced features
|
||||
* Scanning any new push after metadatastore migration will not yet work.
|
||||
* Pushes to repos with promotion policies (repo as source) are broken when an
|
||||
image has a layer over 100MB.
|
||||
* On upgrade the scanningstore container may restart with this error message:
|
||||
FATAL: database files are incompatible with server
|
||||
|
||||
* System
|
||||
* When opting into online garbage collection, the system will run a `metadatastoremigration` job after a successful upgrade. If the three system attempts fail, you will have to retrigger the `metadatastoremigration` job manually. [Learn about manual metadata store migration](../../v18.03/ee/dtr/admin/configure/garbage-collection/#metadata-store-migration).
|
||||
|
||||
## 2.5.5
|
||||
|
||||
(2018-8-30)
|
||||
|
|
@ -148,6 +338,35 @@ to upgrade your installation to the latest release.
|
|||
* Fixed bug to enable poll mirroring with Windows images.
|
||||
* The RethinkDB image has been patched to remove unused components with known vulnerabilities including the RethinkCLI. To get an equivalent interface, run RethinkCLI from a separate image using `docker run -it --rm --net dtr-ol -v dtr-ca-$REPLICA_ID:/ca dockerhubenterprise/rethinkcli:v2.3.0 $REPLICA_ID`.
|
||||
|
||||
### Known Issues
|
||||
* Web Interface
|
||||
* The web interface shows "This repository has no tags" in repositories where tags
|
||||
have long names. As a workaround, reduce the length of the name for the
|
||||
repository and tag.
|
||||
* When deleting a repository with signed images, the DTR web interface no longer
|
||||
shows instructions on how to delete trust data.
|
||||
* There's no web interface support to update mirroring policies when rotating the TLS
|
||||
certificates used by DTR. Use the API instead.
|
||||
* The web interface for promotion policies is currently broken if you have a large number
|
||||
of repositories.
|
||||
* Clicking "Save & Apply" on a promotion policy doesn't work.
|
||||
* Webhooks
|
||||
* There is no webhook event for when an image is pulled.
|
||||
* HTTPS webhooks do not go through HTTPS proxy when configured. (docker/dhe-deploy #9492)
|
||||
* When configured for "Image promoted from repository" events, a webhook notification will be triggered twice during an image promotion when scanning is enabled on a repository. (docker/dhe-deploy #9685)
|
||||
* Online garbage collection
|
||||
* The events API won't report events when tags and manifests are deleted.
|
||||
* The events API won't report blobs deleted by the garbage collection job.
|
||||
* Docker EE Advanced features
|
||||
* Scanning any new push after metadatastore migration will not yet work.
|
||||
* Pushes to repos with promotion policies (repo as source) are broken when an
|
||||
image has a layer over 100MB.
|
||||
* On upgrade the scanningstore container may restart with this error message:
|
||||
FATAL: database files are incompatible with server
|
||||
|
||||
* System
|
||||
* When opting into online garbage collection, the system will run a `metadatastoremigration` job after a successful upgrade. If the three system attempts fail, you will have to retrigger the `metadatastoremigration` job manually. [Learn about manual metadata store migration](../../v18.03/ee/dtr/admin/configure/garbage-collection/#metadata-store-migration).
|
||||
|
||||
## 2.5.3
|
||||
|
||||
(2018-6-21)
|
||||
|
|
@ -163,8 +382,33 @@ to upgrade your installation to the latest release.
|
|||
* Fixed issue where worker capacities wouldn't update on minor version upgrades.
|
||||
|
||||
### Known Issues
|
||||
* Web Interface
|
||||
* The web interface shows "This repository has no tags" in repositories where tags
|
||||
have long names. As a workaround, reduce the length of the name for the
|
||||
repository and tag.
|
||||
* When deleting a repository with signed images, the DTR web interface no longer
|
||||
shows instructions on how to delete trust data.
|
||||
* There's no web interface support to update mirroring policies when rotating the TLS
|
||||
certificates used by DTR. Use the API instead.
|
||||
* The web interface for promotion policies is currently broken if you have a large number
|
||||
of repositories.
|
||||
* Clicking "Save & Apply" on a promotion policy doesn't work.
|
||||
* Webhooks
|
||||
* There is no webhook event for when an image is pulled.
|
||||
* HTTPS webhooks do not go through HTTPS proxy when configured. (docker/dhe-deploy #9492)
|
||||
* When configured for "Image promoted from repository" events, a webhook notification will be triggered twice during an image promotion when scanning is enabled on a repository. (docker/dhe-deploy #9685)
|
||||
* Online garbage collection
|
||||
* The events API won't report events when tags and manifests are deleted.
|
||||
* The events API won't report blobs deleted by the garbage collection job.
|
||||
* Docker EE Advanced features
|
||||
* Scanning any new push after metadatastore migration will not yet work.
|
||||
* Pushes to repos with promotion policies (repo as source) are broken when an
|
||||
image has a layer over 100MB.
|
||||
* On upgrade the scanningstore container may restart with this error message:
|
||||
FATAL: database files are incompatible with server
|
||||
|
||||
* System
|
||||
* When opting into online garbage collection, the system will run a `metadatastoremigration` job after a successful upgrade. If the three system attempts fail, you will have to retrigger the `metadatastoremigration` job manually. [Learn about manual metadata store migration](../../v18.03/ee/dtr/admin/configure/garbage-collection/#metadata-store-migration).
|
||||
|
||||
|
||||
## 2.5.2
|
||||
|
|
@ -175,6 +419,35 @@ to upgrade your installation to the latest release.
|
|||
|
||||
* Fixed a problem where promotion policies based on scanning results would not be executed correctly.
|
||||
|
||||
### Known issues
|
||||
|
||||
* Web Interface
|
||||
* The web interface shows "This repository has no tags" in repositories where tags
|
||||
have long names. As a workaround, reduce the length of the name for the
|
||||
repository and tag.
|
||||
* When deleting a repository with signed images, the DTR web interface no longer
|
||||
shows instructions on how to delete trust data.
|
||||
* There's no web interface support to update mirroring policies when rotating the TLS
|
||||
certificates used by DTR. Use the API instead.
|
||||
* The web interface for promotion policies is currently broken if you have a large number
|
||||
of repositories.
|
||||
* Clicking "Save & Apply" on a promotion policy doesn't work.
|
||||
* Webhooks
|
||||
* There is no webhook event for when an image is pulled.
|
||||
* HTTPS webhooks do not go through HTTPS proxy when configured. (docker/dhe-deploy #9492)
|
||||
* Online garbage collection
|
||||
* The events API won't report events when tags and manifests are deleted.
|
||||
* The events API won't report blobs deleted by the garbage collection job.
|
||||
* Docker EE Advanced features
|
||||
* Scanning any new push after metadatastore migration will not yet work.
|
||||
* Pushes to repos with promotion policies (repo as source) are broken when an
|
||||
image has a layer over 100MB.
|
||||
* On upgrade the scanningstore container may restart with this error message:
|
||||
FATAL: database files are incompatible with server
|
||||
|
||||
* System
|
||||
* When opting into online garbage collection, the system will run a `metadatastoremigration` job after a successful upgrade. If the three system attempts fail, you will have to retrigger the `metadatastoremigration` job manually. [Learn about manual metadata store migration](../../v18.03/ee/dtr/admin/configure/garbage-collection/#metadata-store-migration).
|
||||
|
||||
## 2.5.1
|
||||
|
||||
(2018-5-17)
|
||||
|
|
@ -202,6 +475,35 @@ to upgrade your installation to the latest release.
|
|||
* Fixed URL for the destination repository.
|
||||
* Option to skip TLS verification when testing mirroring.
|
||||
|
||||
### Known issues
|
||||
|
||||
* Web Interface
|
||||
* The web interface shows "This repository has no tags" in repositories where tags
|
||||
have long names. As a workaround, reduce the length of the name for the
|
||||
repository and tag.
|
||||
* When deleting a repository with signed images, the DTR web interface no longer
|
||||
shows instructions on how to delete trust data.
|
||||
* There's no web interface support to update mirroring policies when rotating the TLS
|
||||
certificates used by DTR. Use the API instead.
|
||||
* The web interface for promotion policies is currently broken if you have a large number
|
||||
of repositories.
|
||||
* Clicking "Save & Apply" on a promotion policy doesn't work.
|
||||
* Webhooks
|
||||
* There is no webhook event for when an image is pulled.
|
||||
* HTTPS webhooks do not go through HTTPS proxy when configured. (docker/dhe-deploy #9492)
|
||||
* Online garbage collection
|
||||
* The events API won't report events when tags and manifests are deleted.
|
||||
* The events API won't report blobs deleted by the garbage collection job.
|
||||
* Docker EE Advanced features
|
||||
* Scanning any new push after metadatastore migration will not yet work.
|
||||
* Pushes to repos with promotion policies (repo as source) are broken when an
|
||||
image has a layer over 100MB.
|
||||
* On upgrade the scanningstore container may restart with this error message:
|
||||
FATAL: database files are incompatible with server
|
||||
|
||||
* System
|
||||
* When opting into online garbage collection, the system will run a `metadatastoremigration` job after a successful upgrade. If the three system attempts fail, you will have to retrigger the `metadatastoremigration` job manually. [Learn about manual metadata store migration](../../v18.03/ee/dtr/admin/configure/garbage-collection/#metadata-store-migration).
|
||||
|
||||
## 2.5.0
|
||||
|
||||
(2018-4-17)
|
||||
|
|
@ -298,6 +600,21 @@ specify `--log-protocol`.
|
|||
|
||||
# Version 2.4
|
||||
|
||||
## 2.4.10
|
||||
|
||||
(2019-2-28)
|
||||
|
||||
### Changelog
|
||||
|
||||
* Bump the Golang version that is used to build DTR to version 1.10.8. (docker/dhe-deploy#10068)
|
||||
|
||||
**Known issues**
|
||||
|
||||
* Backup uses too much memory and can cause out of memory issues for large databases.
|
||||
* The `--nfs-storage-url` option uses the system's default NFS version instead
|
||||
of testing the server to find which version works.
|
||||
|
||||
|
||||
## Version 2.4.8
|
||||
|
||||
(2019-01-29)
|
||||
|
|
@ -305,6 +622,13 @@ specify `--log-protocol`.
|
|||
### Changelog
|
||||
* GoLang version bump to 1.10.6.
|
||||
|
||||
**Known issues**
|
||||
|
||||
* Backup uses too much memory and can cause out of memory issues for large databases.
|
||||
* The `--nfs-storage-url` option uses the system's default NFS version instead
|
||||
of testing the server to find which version works.
|
||||
|
||||
|
||||
## Version 2.4.7
|
||||
|
||||
(2018-10-25)
|
||||
|
|
@ -317,6 +641,12 @@ specify `--log-protocol`.
|
|||
* Patched security vulnerabilities in the load balancer.
|
||||
* Patch packages and base OS to eliminate and address some critical vulnerabilities in DTR dependencies.
|
||||
|
||||
**Known issues**
|
||||
|
||||
* Backup uses too much memory and can cause out of memory issues for large databases.
|
||||
* The `--nfs-storage-url` option uses the system's default NFS version instead
|
||||
of testing the server to find which version works.
|
||||
|
||||
## Version 2.4.6
|
||||
|
||||
(2018-07-26)
|
||||
|
|
@ -325,6 +655,12 @@ specify `--log-protocol`.
|
|||
* Fixed bug where repository tag list UI was not loading after a tag migration.
|
||||
* The RethinkDB image has been patched to remove unused components with known vulnerabilities, including the RethinkCLI. To get an equivalent interface, run the RethinkCLI from a separate image using `docker run -it --rm --net dtr-ol -v dtr-ca-$REPLICA_ID:/ca dockerhubenterprise/rethinkcli $REPLICA_ID`.
|
||||
|
||||
**Known issues**
|
||||
|
||||
* Backup uses too much memory and can cause out of memory issues for large databases.
|
||||
* The `--nfs-storage-url` option uses the system's default NFS version instead
|
||||
of testing the server to find which version works.
|
||||
|
||||
## Version 2.4.5
|
||||
|
||||
(2018-06-21)
|
||||
|
|
@ -337,6 +673,12 @@ specify `--log-protocol`.
|
|||
|
||||
* Prevent OOM during garbage collection by reading less data into memory at a time.
|
||||
|
||||
**Known issues**
|
||||
|
||||
* Backup uses too much memory and can cause out of memory issues for large databases.
|
||||
* The `--nfs-storage-url` option uses the system's default NFS version instead
|
||||
of testing the server to find which version works.
|
||||
|
||||
## Version 2.4.4
|
||||
|
||||
(2018-05-17)
|
||||
|
|
@ -352,11 +694,17 @@ specify `--log-protocol`.
|
|||
* Reduce noise in the jobrunner logs by changing some of the more detailed messages to debug level.
|
||||
* Eliminate a race condition in which webhook for license updates doesn't fire.
|
||||
|
||||
**Known issues**
|
||||
|
||||
* Backup uses too much memory and can cause out of memory issues for large databases.
|
||||
* The `--nfs-storage-url` option uses the system's default NFS version instead
|
||||
of testing the server to find which version works.
|
||||
|
||||
## Version 2.4.3
|
||||
|
||||
(2018-03-19)
|
||||
|
||||
**Security**
|
||||
**Security notice**
|
||||
|
||||
* Dependencies updated to consume upstream CVE patches.
|
||||
|
||||
|
|
@ -410,6 +758,12 @@ vulnerability database.
|
|||
removed in DTR 2.5. You can use the
|
||||
`/api/v0/imagescan/repositories/{namespace}/{reponame}/{tag}` endpoint instead.
|
||||
|
||||
**Known issues**
|
||||
|
||||
* Backup uses too much memory and can cause out of memory issues for large databases.
|
||||
* The `--nfs-storage-url` option uses the system's default NFS version instead
|
||||
of testing the server to find which version works.
|
||||
|
||||
|
||||
## DTR 2.4.0
|
||||
|
||||
|
|
|
|||
|
|
@ -11,11 +11,14 @@ Tag pruning is the process of cleaning up unnecessary or unwanted repository tag
|
|||
* specifying a tag pruning policy or alternatively,
|
||||
* setting a tag limit
|
||||
|
||||
|
||||
> Tag Pruning
|
||||
>
|
||||
> When run, tag pruning only deletes a tag and does not carry out any actual blob deletion. For actual blob deletions, see [Garbage Collection](../../admin/configure/garbage-collection.md).
|
||||
|
||||
> Known Issue
|
||||
>
|
||||
> While the tag limit field is disabled when you turn on immutability for a new repository, this is currently [not the case with **Repository Settings**](/ee/dtr/release-notes/#known-issues). As a workaround, turn off immutability when setting a tag limit via **Repository Settings > Pruning**.
|
||||
|
||||
In the following section, we will cover how to specify a tag pruning policy and set a tag limit on repositories that you manage. It will not include modifying or deleting a tag pruning policy.
|
||||
|
||||
## Specify a tag pruning policy
|
||||
|
|
@ -65,7 +68,10 @@ In addition to pruning policies, you can also set tag limits on repositories tha
|
|||
|
||||
{: .with-border}
|
||||
|
||||
To set a tag limit, select the repository that you want to update and click the **Settings** tab. Specify a number in the **Pruning** section and click **Save**. The **Pruning** tab will now display your tag limit above the prune triggers list along with a link to modify this setting.
|
||||
To set a tag limit, do the following:
|
||||
1. Select the repository that you want to update and click the **Settings** tab.
|
||||
2. Turn off immutability for the repository.
|
||||
3. Specify a number in the **Pruning** section and click **Save**. The **Pruning** tab will now display your tag limit above the prune triggers list along with a link to modify this setting.
|
||||
|
||||
|
||||
{: .with-border}
|
||||
|
|
|
|||
|
|
@ -37,7 +37,7 @@ Back up your Docker EE components in the following order:
|
|||
|
||||
1. [Back up your swarm](/engine/swarm/admin_guide/#back-up-the-swarm)
|
||||
2. Back up UCP
|
||||
3. [Back up DTR](../../dtr/2.5/admin/disaster-recovery/index.md)
|
||||
3. [Back up DTR](/ee/dtr/admin/disaster-recovery/)
|
||||
|
||||
## Backup policy
|
||||
|
||||
|
|
@ -45,6 +45,17 @@ As part of your backup policy you should regularly create backups of UCP.
|
|||
DTR is backed up independently.
|
||||
[Learn about DTR backups and recovery](../../dtr/2.5/admin/disaster-recovery/index.md).
|
||||
|
||||
> Warning: On UCP versions 3.1.0 - 3.1.2, before performing a UCP backup, you must clean up multiple /dev/shm mounts created by the `ucp-kubelet` entrypoint script. Run the following script on all nodes, for example via a cron job:
|
||||
|
||||
```
# Record the first tmpfs /dev/shm mount entry so it can be restored afterwards
SHM_MOUNT=$(grep -m1 '^tmpfs./dev/shm' /proc/mounts)
# While more than one /dev/shm mount exists, unmount the duplicates
while [ $(grep -cm2 '^tmpfs./dev/shm' /proc/mounts) -gt 1 ]; do
  sudo umount /dev/shm
done
# If no /dev/shm mount is left, remount the entry recorded above
grep -q '^tmpfs./dev/shm' /proc/mounts || sudo mount "${SHM_MOUNT}"
```
|
||||
For additional details, refer to [Docker KB000934](https://success.docker.com/article/more-than-one-dev-shm-mount-in-the-host-namespace){: target="_blank"}
|
||||
|
||||
To create a UCP backup, run the `{{ page.ucp_org }}/{{ page.ucp_repo }}:{{ page.ucp_version }} backup` command
|
||||
on a single UCP manager. This command creates a tar archive with the
|
||||
contents of all the [volumes used by UCP](../ucp-architecture.md) to persist data
|
||||
|
|
|
|||
|
|
@ -44,6 +44,8 @@ These are metrics about the state of services running on the container platform.
|
|||
- Convergence of K8s deployments and Swarm services
|
||||
- Cluster load by number of services or containers or pods
|
||||
|
||||
Web UI disk usage metrics, including free space, only reflect the Docker managed portion of the filesystem: `/var/lib/docker`. To monitor the total space available on each filesystem of a UCP worker or manager, you must deploy a third party monitoring solution to monitor the operating system.
|
||||
|
||||
## Deploy Prometheus on worker nodes
|
||||
|
||||
Universal Control Plane deploys Prometheus by default on the manager nodes to provide a built-in metrics backend. For clusters with more than 100 nodes, or when you need to scrape metrics directly from the Prometheus instances, we recommend that you deploy Prometheus on dedicated worker nodes in the cluster.
|
||||
|
|
|
|||
|
|
@ -1,6 +1,6 @@
|
|||
---
|
||||
title: Create UCP audit logs
|
||||
description: Learn how to create audit logs of all activity in UCP
|
||||
title: Enable audit logging on UCP
|
||||
description: Learn how to enable audit logging of all activity in UCP
|
||||
keywords: logs, ucp, swarm, kubernetes, audits
|
||||
---
|
||||
|
||||
|
|
@ -121,7 +121,10 @@ The section of the UCP configuration file that controls UCP auditing logging is:
|
|||
support_dump_include_audit_logs = false
|
||||
```
|
||||
|
||||
The supported variables are `""`, `"metadata"` or `"request"`.
|
||||
The supported variables for `level` are `""`, `"metadata"` or `"request"`.
|
||||
|
||||
> Important: The `support_dump_include_audit_logs` flag specifies whether user identification information from the ucp-controller container logs is included in the support dump. To prevent this information from being sent with the support dump, make sure that `support_dump_include_audit_logs` is set to `false`. When disabled, the support dump collection tool filters out any lines from the `ucp-controller` container logs that contain the substring `auditID`.
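If you manage this configuration through the UCP API rather than editing a file on disk, one possible workflow is sketched below. It assumes your UCP 3.x deployment exposes the `config-toml` endpoint, and `$UCP_URL` and `$AUTHTOKEN` are placeholders you supply:

```bash
# Download the current UCP configuration (admin bearer token and UCP URL are placeholders)
curl -sk -H "Authorization: Bearer $AUTHTOKEN" "$UCP_URL/api/ucp/config-toml" -o ucp-config.toml

# Edit the audit logging level in the downloaded file, for example level = "metadata",
# then upload the modified configuration back to UCP
curl -sk -X PUT -H "Authorization: Bearer $AUTHTOKEN" \
  --upload-file ucp-config.toml "$UCP_URL/api/ucp/config-toml"
```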
|
||||
|
||||
|
||||
## Accessing Audit Logs
|
||||
|
||||
|
|
@ -195,6 +198,17 @@ events and may create a large amount of log entries.
|
|||
- /kubernetesdocs
|
||||
- /manage
|
||||
|
||||
## API endpoint information redacted
|
||||
|
||||
Information for the following API endpoints is redacted from the audit logs for security purposes:
|
||||
|
||||
- `/secrets/create` (POST)
|
||||
- `/secrets/{id}/update` (POST)
|
||||
- `/swarm/join` (POST)
|
||||
- `/swarm/update` (POST)
|
||||
- `/auth/login` (POST)
|
||||
- Kubernetes secret create and update endpoints
|
||||
|
||||
## Where to go next
|
||||
|
||||
- [Collect UCP Cluster Metrics with Prometheus](collect-cluster-metrics.md)
|
||||
|
|
|
|||
|
|
@ -27,7 +27,7 @@ workloads.
|
|||
If Route Reflectors are running on a same node as other workloads, swarm ingress
|
||||
and NodePorts might not work in these workloads.
|
||||
|
||||
## Choose dedicated notes
|
||||
## Choose dedicated nodes
|
||||
|
||||
Start by tainting the nodes, so that no other workload runs there. Configure
|
||||
your CLI with a UCP client bundle, and for each dedicated node, run:
|
||||
|
|
|
|||
|
|
@ -141,7 +141,7 @@ Click **Yes** to enable integrating UCP users and teams with LDAP servers.
|
|||
| No simple pagination | If your LDAP server doesn't support pagination. |
|
||||
| Just-In-Time User Provisioning | Whether to create user accounts only when users log in for the first time. The default value of `true` is recommended. If you upgraded from UCP 2.0.x, the default is `false`. |
|
||||
|
||||
> **Note:** LDAP connections using certificates created with TLS v1.2 do not currently advertise support for sha512WithRSAEncryption in the TLS handshake which leads to issues establishing connections with some clients. Support for advertising sha512WithRSAEncryption will be added in UCP 3.1.0.
|
||||
> **Note**: LDAP connections using certificates created with TLS v1.2 do not currently advertise support for sha512WithRSAEncryption in the TLS handshake which leads to issues establishing connections with some clients. Support for advertising sha512WithRSAEncryption will be added in UCP 3.1.0.
|
||||
|
||||
{: .with-border}
|
||||
|
||||
|
|
|
|||
|
|
@ -2,6 +2,8 @@
|
|||
title: Set up high availability
|
||||
description: Docker Universal Control plane has support for high availability. Learn how to set up your installation to ensure it tolerates failures.
|
||||
keywords: ucp, high availability, replica
|
||||
redirect_from:
|
||||
- /ee/ucp/admin/configure/set-up-high-availability/
|
||||
---
|
||||
|
||||
Docker Universal Control Plane is designed for high availability (HA). You can
|
||||
|
|
|
|||
|
|
@ -0,0 +1,86 @@
|
|||
---
|
||||
description: Using UCP cluster metrics with Prometheus
|
||||
keywords: prometheus, metrics, ucp
|
||||
title: Using UCP cluster metrics with Prometheus
|
||||
redirect_from:
|
||||
- /engine/admin/prometheus/
|
||||
---
|
||||
|
||||
# UCP metrics
|
||||
|
||||
The following table lists the metrics that UCP exposes in Prometheus, along with descriptions. Note that only the metrics
|
||||
labeled with `ucp_` are documented. Other metrics are exposed in Prometheus but are not documented.
|
||||
|
||||
| Name | Units | Description | Labels | Metric source |
|
||||
|---------------------------------------------------------|----------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------|---------------|
|
||||
| `ucp_controller_services` | number of services | The total number of Swarm services | | Controller |
|
||||
| `ucp_engine_container_cpu_percent` | percentage | The percentage of CPU time this container is using. | container labels | Node |
|
||||
| `ucp_engine_container_cpu_total_time_nanoseconds` | nanoseconds | Total CPU time used by this container in nanoseconds | container labels | Node |
|
||||
| `ucp_engine_container_health` | 0.0 or 1.0 | Whether or not this container is healthy, according to its healthcheck. Note that if this value is 0, it just means that the container is not reporting healthy; it might not have a healthcheck defined at all, or its healthcheck might not have returned any results yet | container labels | Node |
|
||||
| `ucp_engine_container_memory_max_usage_bytes` | bytes | Maximum memory used by this container in bytes | container labels | Node |
|
||||
| `ucp_engine_container_memory_usage_bytes` | bytes | Current memory used by this container in bytes | container labels | Node |
|
||||
| `ucp_engine_container_memory_usage_percent` | percentage | Percentage of total node memory currently being used by this container | container labels | Node |
|
||||
| `ucp_engine_container_network_rx_bytes_total` | bytes | Number of bytes received by this container on this network in the last sample | container networking labels | Node |
|
||||
| `ucp_engine_container_network_rx_dropped_packets_total` | number of packets | Number of packets bound for this container on this network that were dropped in the last sample | container networking labels | Node |
|
||||
| `ucp_engine_container_network_rx_errors_total` | number of errors | Number of received network errors for this container on this network in the last sample | container networking labels | Node |
|
||||
| `ucp_engine_container_network_rx_packets_total` | number of packets | Number of received packets for this container on this network in the last sample | container networking labels | Node |
|
||||
| `ucp_engine_container_network_tx_bytes_total` | bytes | Number of bytes sent by this container on this network in the last sample | container networking labels | Node |
|
||||
| `ucp_engine_container_network_tx_dropped_packets_total` | number of packets | Number of packets sent from this container on this network that were dropped in the last sample | container networking labels | Node |
|
||||
| `ucp_engine_container_network_tx_errors_total` | number of errors | Number of sent network errors for this container on this network in the last sample | container networking labels | Node |
|
||||
| `ucp_engine_container_network_tx_packets_total` | number of packets | Number of sent packets for this container on this network in the last sample | container networking labels | Node |
|
||||
| `ucp_engine_container_unhealth` | 0.0 or 1.0 | Whether or not this container is unhealthy, according to its healthcheck. Note that if this value is 0, it just means that the container is not reporting unhealthy; it might not have a healthcheck defined at all, or its healthcheck might not have returned any results yet | container labels | Node |
|
||||
| `ucp_engine_containers` | number of containers | Total number of containers on this node | node labels | Node |
|
||||
| `ucp_engine_cpu_total_time_nanoseconds` | nanoseconds | System CPU time used by this container in nanoseconds | container labels | Node |
|
||||
| `ucp_engine_disk_free_bytes` | bytes | Free disk space on the Docker root directory on this node in bytes. Note that this metric is not available for Windows nodes | node labels | Node |
|
||||
| `ucp_engine_disk_total_bytes` | bytes | Total disk space on the Docker root directory on this node in bytes. Note that this metric is not available for Windows nodes | node labels | Node |
|
||||
| `ucp_engine_images` | number of images | Total number of images on this node | node labels | Node |
|
||||
| `ucp_engine_memory_total_bytes` | bytes | Total amount of memory on this node in bytes | node labels | Node |
|
||||
| `ucp_engine_networks` | number of networks | Total number of networks on this node | node labels | Node |
|
||||
| `ucp_engine_node_health` | 0.0 or 1.0 | Whether or not this node is healthy, as determined by UCP | nodeName: node name, nodeAddr: node IP address | Controller |
|
||||
| `ucp_engine_num_cpu_cores` | number of cores | Number of CPU cores on this node | node labels | Node |
|
||||
| `ucp_engine_pod_container_ready` | 0.0 or 1.0 | Whether or not this container in a Kubernetes pod is ready, as determined by its readiness probe. | pod labels | Controller |
|
||||
| `ucp_engine_pod_ready` | 0.0 or 1.0 | Whether or not this Kubernetes pod is ready, as determined by its readiness probe. | pod labels | Controller |
|
||||
| `ucp_engine_volumes` | number of volumes | Total number of volumes on this node | node labels | Node |
|
||||
|
||||
## Metrics labels
|
||||
|
||||
Metrics exposed by UCP in Prometheus have standardized labels, depending on the resource that they are measuring.
|
||||
The following table lists some of the labels that are used, along with their values:
|
||||
|
||||
### Container labels
|
||||
|
||||
| Label name | Value |
|
||||
|--------------------|---------------------------------------------------------------------------------------------|
|
||||
| `collection` | The collection ID of the collection this container is in, if any |
|
||||
| `container` | The ID of this container |
|
||||
| `image` | The name of this container's image |
|
||||
| `manager` | "true" if the container's node is a UCP manager, "false" otherwise |
|
||||
| `name` | The name of the container |
|
||||
| `podName` | If this container is part of a Kubernetes pod, this is the pod's name |
|
||||
| `podNamespace` | If this container is part of a Kubernetes pod, this is the pod's namespace |
|
||||
| `podContainerName` | If this container is part of a Kubernetes pod, this is the container's name in the pod spec |
|
||||
| `service` | If this container is part of a Swarm service, this is the service ID |
|
||||
| `stack` | If this container is part of a Docker compose stack, this is the name of the stack |
|
||||
|
||||
### Container networking labels
|
||||
|
||||
The following metrics measure network activity for a given network attached to a given
|
||||
container. They have the same labels as Container labels, with one addition:
|
||||
|
||||
| Label name | Value |
|
||||
|------------|-----------------------|
|
||||
| `network` | The ID of the network |
|
||||
|
||||
### Node labels
|
||||
|
||||
| Label name | Value |
|
||||
|------------|--------------------------------------------------------|
|
||||
| `manager` | "true" if the node is a UCP manager, "false" otherwise |
|
||||
|
||||
## Metric source
|
||||
|
||||
UCP exports metrics on every node and also exports additional metrics from
|
||||
every controller. The metrics that are exported from controllers are
|
||||
cluster-scoped, for example, the total number of Swarm services. Metrics that
|
||||
are exported from nodes are specific to those nodes, for example, the total memory
|
||||
on that node.
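As an example of consuming these metrics, the following query asks Prometheus for the `ucp_engine_node_health` series restricted to manager nodes. The Prometheus host and port are placeholders for wherever your scraping instance is reachable:

```bash
# Query the node health metric for all manager nodes (Prometheus address is a placeholder)
curl -s "http://prometheus.example.com:9090/api/v1/query" \
  --data-urlencode 'query=ucp_engine_node_health{manager="true"}'
```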
|
||||
|
|
@ -12,8 +12,10 @@ If a user deploys a malicious service that can affect the node where it
|
|||
is running, it won't be able to affect other nodes in the cluster, or
|
||||
any cluster management functionality.
|
||||
|
||||
## Swarm Workloads
|
||||
|
||||
To restrict users from deploying to manager nodes, log in with administrator
|
||||
credentials to the UCP web UI, navigate to the **Admin Settings**
|
||||
credentials to the UCP web interface, navigate to the **Admin Settings**
|
||||
page, and choose **Scheduler**.
|
||||
|
||||
{: .with-border}
|
||||
|
|
@ -24,4 +26,82 @@ or not.
|
|||
Having a grant with the `Scheduler` role against the `/` collection takes
|
||||
precedence over any other grants with `Node Schedule` on subcollections.
|
||||
|
||||
## Kubernetes Workloads
|
||||
|
||||
By default, Universal Control Plane clusters take advantage of [Taints and
Tolerations](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/)
to prevent a user's workloads from being deployed onto UCP manager or DTR nodes.
|
||||
|
||||
You can view this taint by running:
|
||||
|
||||
```bash
|
||||
$ kubectl get nodes <ucpmanager> -o json | jq -r '.spec.taints | .[]'
|
||||
{
|
||||
"effect": "NoSchedule",
|
||||
"key": "com.docker.ucp.manager"
|
||||
}
|
||||
```
|
||||
|
||||
> Note: Workloads deployed by an Administrator in the `kube-system` namespace do
|
||||
> not follow these scheduling constraints. If an Administrator deploys a
|
||||
> workload in the `kube-system` namespace, a toleration is applied to bypass
|
||||
> this taint, and the workload is scheduled on all node types.
|
||||
|
||||
### Allow Administrators to Schedule on Manager / DTR Nodes
|
||||
|
||||
To allow Administrators to deploy workloads across all node types, an
Administrator can tick the "Allow administrators to deploy containers on UCP
managers or nodes running DTR" box in the UCP web interface.
|
||||
|
||||
{: .with-border}
|
||||
|
||||
For all new workloads deployed by Administrators after this box has been
|
||||
ticked, UCP will apply a toleration to your workloads to allow the pods to be
|
||||
scheduled on all node types.
|
||||
|
||||
For existing workloads, the Administrator will need to edit the Pod
|
||||
specification, through `kubectl edit <object> <workload>` or the UCP web interface and add
|
||||
the following toleration:
|
||||
|
||||
```bash
|
||||
tolerations:
|
||||
- key: "com.docker.ucp.manager"
|
||||
operator: "Exists"
|
||||
```
|
||||
|
||||
You can check that this has been applied successfully by running:
|
||||
|
||||
```bash
|
||||
$ kubectl get <object> <workload> -o json | jq -r '.spec.template.spec.tolerations | .[]'
|
||||
{
|
||||
"key": "com.docker.ucp.manager",
|
||||
"operator": "Exists"
|
||||
}
|
||||
```
|
||||
|
||||
### Allow Users and Service Accounts to Schedule on Manager / DTR Nodes
|
||||
|
||||
To allow Kubernetes users and service accounts to deploy workloads across all
node types in your cluster, an Administrator needs to tick "Allow all
authenticated users, including service accounts, to schedule on all nodes,
including UCP managers and DTR nodes." in the UCP web interface.
|
||||
|
||||
{: .with-border}
|
||||
|
||||
For all new workloads deployed by Kubernetes Users after this box has been
|
||||
ticked, UCP will apply a toleration to your workloads to allow the pods to be
|
||||
scheduled on all node types. For existing workloads, the User would need to edit
|
||||
Pod Specification as detailed above in the "Allow Administrators to Schedule on
|
||||
Manager / DTR Nodes" section.
|
||||
|
||||
There is a `NoSchedule` taint on UCP manager and DTR nodes. If scheduling on
managers and DTR nodes is disabled in the UCP scheduling options, a toleration
for that taint is not applied to deployments, so they are not scheduled on
those nodes, unless the workload is deployed in the `kube-system` namespace.
|
||||
|
||||
## Where to go next
|
||||
|
||||
- [Deploy an Application Package](/ee/ucp/deploy-application-package/)
|
||||
- [Deploy a Swarm Workload](/ee/ucp/swarm/)
|
||||
- [Deploy a Kubernetes Workload](/ee/ucp/kubernetes/)
|
||||
|
|
|
|||
|
|
@ -112,6 +112,8 @@ Configures audit logging options for UCP components.
|
|||
|
||||
Specifies scheduling options and the default orchestrator for new nodes.
|
||||
|
||||
> **Note**: Running a `kubectl` command such as `kubectl describe nodes` to view scheduling rules on Kubernetes nodes does not reflect what is configured in UCP Admin settings. UCP uses taints to control container scheduling on nodes; this is unrelated to the `Unschedulable` boolean flag reported by `kubectl`.
|
||||
|
||||
| Parameter | Required | Description |
|
||||
|:------------------------------|:---------|:-------------------------------------------------------------------------------------------------------------------------------------------|
|
||||
| `enable_admin_ucp_scheduling` | no | Set to `true` to allow admins to schedule containers on manager nodes. The default is `false`. |
|
||||
|
|
@ -181,7 +183,7 @@ components. Assigning these values overrides the settings in a container's
|
|||
| `metrics_retention_time` | no | Adjusts the metrics retention time. |
|
||||
| `metrics_scrape_interval` | no | Sets the interval for how frequently managers gather metrics from nodes in the cluster. |
|
||||
| `metrics_disk_usage_interval` | no | Sets the interval for how frequently storage metrics are gathered. This operation can be expensive when large volumes are present. |
|
||||
| `rethinkdb_cache_size` | no | Sets the size of the cache used by UCP's RethinkDB servers. The default is 512MB, but leaving this field empty or specifying `auto` instructs RethinkDB to determine a cache size automatically. |
|
||||
| `rethinkdb_cache_size` | no | Sets the size of the cache used by UCP's RethinkDB servers. The default is 1GB, but leaving this field empty or specifying `auto` instructs RethinkDB to determine a cache size automatically. |
|
||||
| `cloud_provider` | no | Set the cloud provider for the kubernetes cluster. |
|
||||
| `pod_cidr` | yes | Sets the subnet pool from which the IP for the Pod should be allocated from the CNI ipam plugin. Default is `192.168.0.0/16`. |
|
||||
| `calico_mtu` | no | Set the MTU (maximum transmission unit) size for the Calico plugin. |
|
||||
|
|
|
|||
|
|
@ -8,9 +8,9 @@ Docker UCP supports Network File System (NFS) persistent volumes for
|
|||
Kubernetes. To enable this feature on a UCP cluster, you need to set up
|
||||
an NFS storage volume provisioner.
|
||||
|
||||
> Kubernetes storage drivers
|
||||
> ### Kubernetes storage drivers
|
||||
>
|
||||
> Currently, NFS is the only Kubernetes storage driver that UCP supports.
|
||||
>NFS is one of the Kubernetes storage drivers that UCP supports. See [Kubernetes Volume Drivers](https://success.docker.com/article/compatibility-matrix#kubernetesvolumedrivers) in the Compatibility Matrix for the full list.
|
||||
{: important}
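For orientation, a minimal statically provisioned NFS volume looks like the sketch below; the NFS server address and export path are placeholders, and the provisioner-based setup is covered in the next section:

```bash
# A minimal, statically provisioned NFS PersistentVolume (server and path are placeholders)
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-example
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: nfs.example.com
    path: /exported/path
EOF
```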
|
||||
|
||||
## Enable NFS volume provisioning
|
||||
|
|
|
|||
|
|
@ -57,7 +57,7 @@ For each machine that you want to manage with UCP:
|
|||
`docker load` command, to load the Docker images from the tar archive:
|
||||
|
||||
```bash
|
||||
$ docker load < ucp.tar.gz
|
||||
$ docker load -i ucp.tar.gz
|
||||
```
|
||||
|
||||
Follow the same steps for the DTR binaries.
|
||||
|
|
|
|||
|
|
@ -42,12 +42,22 @@ this.
|
|||
|
||||
## Avoid IP range conflicts
|
||||
|
||||
The `service-cluster-ip-range` Kubernetes API Server flag is currently set to `10.96.0.0/16` and cannot be changed.
|
||||
|
||||
Swarm uses a default address pool of `10.0.0.0/16` for its overlay networks. If this conflicts with your current network implementation, please use a custom IP address pool. To specify a custom IP address pool, use the `--default-address-pool` command line option during [Swarm initialization](../../../../engine/swarm/swarm-mode.md).
|
||||
|
||||
**NOTE:** Currently, the UCP installation process does not support this flag. To deploy with a custom IP pool, Swarm must first be installed using this flag and UCP must be installed on top of it.
|
||||
> **Note**: Currently, the UCP installation process does not support this flag. To deploy with a custom IP pool, Swarm must first be installed using this flag and UCP must be installed on top of it.
|
||||
|
||||
Kubernetes uses a default cluster IP pool for pods that is `192.168.0.0/16`. If it conflicts with your current networks, please use a custom IP pool by specifying `--pod-cidr` during UCP installation.
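Put together, a sketch of bringing up a cluster with non-conflicting custom pools might look like the following. The address ranges, image tag, and exact flags are assumptions for illustration and depend on your environment and UCP version:

```bash
# Initialize the swarm with a custom default address pool (example range)
docker swarm init --default-address-pool 10.85.0.0/16

# Then install UCP on top of it, overriding the Kubernetes pod CIDR (example range)
docker container run --rm -it --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp:3.1.0 install --pod-cidr 10.86.0.0/16 --interactive
```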
|
||||
|
||||
## Avoid firewall conflicts
|
||||
|
||||
For SUSE Linux Enterprise Server 12 SP2 (SLES12), the `FW_LO_NOTRACK` flag is turned on by default in the openSUSE firewall. This speeds up packet processing on the loopback interface, and breaks certain firewall setups that need to redirect outgoing packets via custom rules on the local machine.
|
||||
|
||||
To turn off the FW_LO_NOTRACK option, edit the `/etc/sysconfig/SuSEfirewall2` file and set `FW_LO_NOTRACK="no"`. Save the file and restart the firewall or reboot.
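For example, a one-off way to apply this change non-interactively is sketched below; verify the file path and firewall service name on your system first:

```bash
# Set FW_LO_NOTRACK="no" in place, then restart the firewall (or reboot, as noted above)
sudo sed -i 's/^FW_LO_NOTRACK=.*/FW_LO_NOTRACK="no"/' /etc/sysconfig/SuSEfirewall2
sudo systemctl restart SuSEfirewall2
```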
|
||||
|
||||
For SUSE Linux Enterprise Server 12 SP3, the default value for `FW_LO_NOTRACK` was changed to `no`.
|
||||
|
||||
## Time synchronization
|
||||
|
||||
In distributed systems like Docker UCP, time synchronization is critical
|
||||
|
|
|
|||
|
|
@ -31,7 +31,7 @@ You can install UCP on-premises or on a cloud provider. Common requirements:
|
|||
* 4 vCPUs for manager nodes
|
||||
* 25-100GB of free disk space
|
||||
|
||||
Note that Windows container images are typically larger than Linux ontainer images. For
|
||||
Note that Windows container images are typically larger than Linux container images. For
|
||||
this reason, you should provision more local storage for Windows
|
||||
nodes and for any DTR setups that store Windows container images.
|
||||
|
||||
|
|
@ -70,6 +70,7 @@ host types:
|
|||
| managers | TCP 6443 (configurable) | External, Internal | Port for Kubernetes API server endpoint |
|
||||
| managers, workers | TCP 6444 | Self | Port for Kubernetes API reverse proxy |
|
||||
| managers, workers | TCP, UDP 7946 | Internal | Port for gossip-based clustering |
|
||||
| managers, workers | TCP 9099 | Self | Port for Calico health check |
|
||||
| managers, workers | TCP 10250 | Internal | Port for Kubelet |
|
||||
| managers, workers | TCP 12376 | Internal | Port for a TLS authentication proxy that provides access to the Docker Engine |
|
||||
| managers, workers | TCP 12378 | Self | Port for Etcd reverse proxy |
|
||||
|
|
@ -83,6 +84,14 @@ host types:
|
|||
| managers | TCP 12386 | Internal | Port for the authentication worker |
|
||||
| managers | TCP 12388 | Internal | Internal Port for the Kubernetes API Server |
|
||||
|
||||
## Avoid firewall conflicts
|
||||
|
||||
For SUSE Linux Enterprise Server 12 SP2 (SLES12), the `FW_LO_NOTRACK` flag is turned on by default in the openSUSE firewall. This speeds up packet processing on the loopback interface, and breaks certain firewall setups that need to redirect outgoing packets via custom rules on the local machine.
|
||||
|
||||
To turn off the FW_LO_NOTRACK option, edit the `/etc/sysconfig/SuSEfirewall2` file and set `FW_LO_NOTRACK="no"`. Save the file and restart the firewall or reboot.
|
||||
|
||||
For SUSE Linux Enterprise Server 12 SP3, the default value for `FW_LO_NOTRACK` was changed to `no`.
|
||||
|
||||
## Enable ESP traffic
|
||||
|
||||
For overlay networks with encryption to work, you need to ensure that
|
||||
|
|
|
|||
|
|
@ -47,7 +47,7 @@ For each machine that you want to manage with UCP:
|
|||
`docker load` command, to load the Docker images from the tar archive:
|
||||
|
||||
```bash
|
||||
$ docker load < ucp.tar.gz
|
||||
$ docker load -i ucp.tar.gz
|
||||
```
|
||||
|
||||
## Upgrade UCP
|
||||
|
|
|
|||
|
|
@ -29,7 +29,7 @@ Learn about [UCP system requirements](system-requirements.md).
|
|||
Ensure that your cluster nodes meet the minimum requirements for port openings.
|
||||
[Ports used](system-requirements.md/#ports-used) are documented in the UCP system requirements.
|
||||
|
||||
> Note: If you are upgrading a cluster to UCP 3.0.2 or higher on Microsoft
|
||||
> **Note**: If you are upgrading a cluster to UCP 3.0.2 or higher on Microsoft
|
||||
> Azure then please ensure all of the Azure [prerequisites](install-on-azure.md/#azure-prerequisites)
|
||||
> are met.
|
||||
|
||||
|
|
@ -56,17 +56,17 @@ to install the Docker Enterprise Edition.
|
|||
Starting with the manager nodes, and then worker nodes:
|
||||
|
||||
1. Log into the node using ssh.
|
||||
2. Upgrade the Docker Engine to version 17.06.2-ee-8 or higher. See [Upgrade Docker EE](https://docs.docker.com/ee/upgrade/).
|
||||
2. Upgrade the Docker Engine to version 18.09.0 or higher. See [Upgrade Docker EE](https://docs.docker.com/ee/upgrade/).
|
||||
3. Make sure the node is healthy.
|
||||
|
||||
In your browser, navigate to the **Nodes** page in the UCP web UI,
|
||||
In your browser, navigate to **Nodes** in the UCP web interface,
|
||||
and check that the node is healthy and is part of the cluster.
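   If you prefer the CLI, you can also verify node status and engine versions from a manager node or a client bundle; this is a convenience check, not a required step:

   ```bash
   # List all nodes with their status, manager role, and engine version
   docker node ls --format 'table {{.Hostname}}\t{{.Status}}\t{{.ManagerStatus}}\t{{.EngineVersion}}'
   ```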
|
||||
|
||||
## Upgrade UCP
|
||||
|
||||
You can upgrade UCP from the web UI or the CLI.
|
||||
You can upgrade UCP from the web or the command line interface.
|
||||
|
||||
### Use the UI to perform an upgrade
|
||||
### Use the web interface to perform an upgrade
|
||||
|
||||
When an upgrade is available for a UCP installation, a banner appears.
|
||||
|
||||
|
|
@ -77,17 +77,17 @@ It can be found under the **Upgrade** tab of the **Admin Settings** section.
|
|||
|
||||
{: .with-border}
|
||||
|
||||
In the **Available Versions** dropdown, select **3.0.0** and click
|
||||
In the **Available Versions** dropdown, select the version you want to update to and click
|
||||
**Upgrade UCP**.
|
||||
|
||||
During the upgrade, the UI will be unavailable, and you should wait
|
||||
During the upgrade, the web interface will be unavailable, and you should wait
|
||||
until completion before continuing to interact with it. When the upgrade
|
||||
completes, you'll see a notification that a newer version of the UI
|
||||
is available and a browser refresh is required to see the latest UI.
|
||||
completes, you'll see a notification that a newer version of the web interface
|
||||
is available and a browser refresh is required to see it.
|
||||
|
||||
### Use the CLI to perform an upgrade
|
||||
|
||||
To upgrade from the CLI, log into a UCP manager node using ssh, and run:
|
||||
To upgrade from the CLI, log into a UCP manager node using SSH, and run:
|
||||
|
||||
```
|
||||
# Get the latest version of UCP
|
||||
|
|
@ -100,10 +100,10 @@ docker container run --rm -it \
|
|||
upgrade --interactive
|
||||
```
|
||||
|
||||
This runs the upgrade command in interactive mode, so that you are prompted
|
||||
for any necessary configuration values.
|
||||
This runs the upgrade command in interactive mode, which will prompt you
|
||||
for required configuration values.
|
||||
|
||||
Once the upgrade finishes, navigate to the UCP web UI and make sure that
|
||||
Once the upgrade finishes, navigate to the UCP web interface and make sure that
|
||||
all the nodes managed by UCP are healthy.
|
||||
|
||||
## Where to go next
|
||||
|
|
|
|||
|
|
@ -70,6 +70,10 @@ To enable this feature, DTR 2.6 is required and single sign-on with UCP must be
|
|||
|
||||

|
||||
|
||||
## Monitoring disk usage
|
||||
|
||||
Web UI disk usage metrics, including free space, only reflect the Docker managed portion of the filesystem: `/var/lib/docker`. To monitor the total space available on each filesystem of a UCP worker or manager, you must deploy a third party monitoring solution to monitor the operating system.
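One common approach is to run the Prometheus `node-exporter` as a global service so that every node exposes operating-system-level disk metrics; the image and flags below are illustrative third-party tooling, not a supported UCP component:

```bash
# Run node-exporter on every node, mounting the host root filesystem read-only
docker service create --name node-exporter --mode global \
  --mount type=bind,source=/,target=/rootfs,readonly \
  --publish 9100:9100 \
  prom/node-exporter --path.rootfs=/rootfs
```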
|
||||
|
||||
## Where to go next
|
||||
|
||||
- [Troubleshoot with logs](troubleshoot-with-logs.md)
|
||||
|
|
|
|||
|
|
@ -53,7 +53,7 @@ built-in collection, `/Shared`.
|
|||
|
||||
Other collections are also being created to enable shared `db` applications.
|
||||
|
||||
> **Note:** For increased security with node-based isolation, use Docker
|
||||
> **Note**: For increased security with node-based isolation, use Docker
|
||||
> Enterprise Advanced.
|
||||
|
||||
- `/Shared/mobile` hosts all Mobile applications and resources.
|
||||
|
|
@ -107,7 +107,7 @@ collection boundaries. By assigning multiple grants per team, the Mobile and
|
|||
Payments applications teams can connect to dedicated Database resources through
|
||||
a secure and controlled interface, leveraging Database networks and secrets.
|
||||
|
||||
> **Note:** In Docker Enterprise Standard, all resources are deployed across the
|
||||
> **Note**: In Docker Enterprise Standard, all resources are deployed across the
|
||||
> same group of UCP worker nodes. Node segmentation is provided in Docker
|
||||
> Enterprise Advanced and discussed in the [next tutorial](ee-advanced.md).
|
||||
|
||||
|
|
|
|||
|
|
@ -40,7 +40,14 @@ can be nested inside one another, to create hierarchies.
|
|||
|
||||
You can nest collections inside one another. If a user is granted permissions
|
||||
for one collection, they'll have permissions for its child collections,
|
||||
pretty much like a directory structure..
|
||||
pretty much like a directory structure. As of UCP `3.1`, the ability to create a nested
|
||||
collection more than two layers deep within the root `/Swarm/` collection has been deprecated.
|
||||
|
||||
The following image provides two examples of nested collections with the recommended maximum
|
||||
of two nesting layers. The first example illustrates an environment-oriented collection, and the second
|
||||
example illustrates an application-oriented collection.
|
||||
|
||||
{: .with-border}
|
||||
|
||||
For a child collection, or for a user who belongs to more than one team, the
|
||||
system concatenates permissions from multiple roles into an "effective role" for
|
||||
|
|
@ -57,7 +64,7 @@ Docker EE provides a number of built-in collections.
|
|||
| `/` | Path to all resources in the Swarm cluster. Resources not in a collection are put here. |
|
||||
| `/System` | Path to UCP managers, DTR nodes, and UCP/DTR system services. By default, only admins have access, but this is configurable. |
|
||||
| `/Shared` | Default path to all worker nodes for scheduling. In Docker EE Standard, all worker nodes are located here. In [Docker EE Advanced](https://www.docker.com/enterprise-edition), worker nodes can be moved and [isolated](isolate-nodes.md). |
|
||||
| `/Shared/Private/` | Path to a user's private collection. |
|
||||
| `/Shared/Private/` | Path to a user's private collection. Note that private collections are not created until the user logs in for the first time. |
|
||||
| `/Shared/Legacy` | Path to the access control labels of legacy versions (UCP 2.1 and lower). |
|
||||
|
||||
|
||||
|
|
|
|||
|
Before Width: | Height: | Size: 107 KiB After Width: | Height: | Size: 50 KiB |
|
After Width: | Height: | Size: 37 KiB |
|
Before Width: | Height: | Size: 84 KiB After Width: | Height: | Size: 104 KiB |
|
After Width: | Height: | Size: 104 KiB |
|
After Width: | Height: | Size: 104 KiB |
|
Before Width: | Height: | Size: 181 KiB After Width: | Height: | Size: 110 KiB |
|
Before Width: | Height: | Size: 58 KiB After Width: | Height: | Size: 68 KiB |
|
|
@ -18,7 +18,6 @@ The following labels are available for you to use in swarm services:
|
|||
| `com.docker.lb.network` | Name of network the proxy service should attach to for upstream connectivity. | `app-network-a` |
|
||||
| `com.docker.lb.context_root` | Context or path to use for the application. | `/app` |
|
||||
| `com.docker.lb.context_root_rewrite` | Boolean to enable rewrite for the context root. | `true` |
|
||||
| `com.docker.lb.ssl_only` | Boolean to force SSL for application. | `true` |
|
||||
| `com.docker.lb.ssl_cert` | Docker secret to use for the SSL certificate. | `example.com.cert` |
|
||||
| `com.docker.lb.ssl_key` | Docker secret to use for the SSL key. | `example.com.key` |
|
||||
| `com.docker.lb.websocket_endpoints` | Comma separated list of endpoints to configure to be upgraded for websockets. | `/ws,/foo` |
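As a sketch of how these labels are applied in practice, the following service publishes a demo application through Interlock. The `com.docker.lb.hosts` and `com.docker.lb.port` labels are assumed from the full label reference (they are not shown in the excerpt above), and the host name, port, and image are placeholders:

```bash
# Publish a demo service through Interlock using service labels
docker service create \
  --name demo-app \
  --network app-network-a \
  --label com.docker.lb.hosts=app.example.org \
  --label com.docker.lb.port=8080 \
  --label com.docker.lb.network=app-network-a \
  --label com.docker.lb.context_root=/app \
  nginx:alpine
```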
|
||||
|
|
|
|||
|
|
@ -69,7 +69,7 @@ which are pinned to the same instance. If you make a few requests you will noti
|
|||
|
||||
# IP Hashing
|
||||
In this example we show how to configure sticky sessions using client IP hashing. This is not as flexible or consistent
|
||||
as cookies but enables workarounds for some applications that cannot use the other method.
|
||||
as cookies but enables workarounds for some applications that cannot use the other method. When using IP hashing you should reconfigure Interlock proxy to use [host mode networking](../deploy/host-mode-networking.md) because the default `ingress` networking mode uses SNAT which obscures client IP addresses.
|
||||
|
||||
First we will create an overlay network so that service traffic is isolated and secure:
|
||||
|
||||
|
|
@ -125,7 +125,7 @@ $> curl -vs -H "Host: demo.local" http://127.0.0.1/ping
|
|||
You can use `docker service scale demo=10` to add some more replicas. Once scaled, you will notice that requests are pinned
|
||||
to a specific backend.
|
||||
|
||||
Note: due to the way the IP hashing works for extensions, you will notice a new upstream address when scaling replicas. This is
|
||||
expected as internally the proxy uses the new set of replicas to decide on a backend on which to pin. Once the upstreams are
|
||||
determined a new "sticky" backend will be chosen and that will be the dedicated upstream.
|
||||
> **Note**: due to the way the IP hashing works for extensions, you will notice a new upstream address when scaling replicas. This is
|
||||
> expected as internally the proxy uses the new set of replicas to decide on a backend on which to pin. Once the upstreams are
|
||||
> determined a new "sticky" backend will be chosen and that will be the dedicated upstream.
|
||||
|
||||
|
|
|
|||
|
|
@ -143,7 +143,7 @@ using a version of `curl` that includes the SNI header with insecure requests.
|
|||
If this doesn't happen, `curl` displays an error saying that the SSL handshake
|
||||
was aborted.
|
||||
|
||||
> ***NOTE:*** Currently there is no way to update expired certificates using this method.
|
||||
> **Note**: Currently there is no way to update expired certificates using this method.
|
||||
> The proper way is to create a new secret then update the corresponding service.
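A sketch of that workflow, with placeholder secret, file, and service names, might look like:

```bash
# Create new secrets from the renewed certificate and key (names and paths are placeholders)
docker secret create example.com.cert.v2 ./renewed/example.com.cert
docker secret create example.com.key.v2 ./renewed/example.com.key

# Point the published service at the new secrets through its Interlock labels
docker service update \
  --label-add com.docker.lb.ssl_cert=example.com.cert.v2 \
  --label-add com.docker.lb.ssl_key=example.com.key.v2 \
  demo-service
```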
|
||||
|
||||
## Let your service handle TLS
|
||||
|
|
|
|||
|
|
@ -27,8 +27,8 @@ $> docker service create \
|
|||
ehazlett/websocket-chat
|
||||
```
|
||||
|
||||
Note: for this to work you must have an entry for `demo.local` in your local hosts (i.e. `/etc/hosts`) file.
|
||||
This uses the browser for websocket communication so you will need to have an entry or use a routable domain.
|
||||
> **Note**: for this to work you must have an entry for `demo.local` in your local hosts (i.e. `/etc/hosts`) file.
|
||||
> This uses the browser for websocket communication so you will need to have an entry or use a routable domain.
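For a quick test from a single machine, an entry can be appended like this; the address is a placeholder for a node running the Interlock proxy:

```bash
# Replace 203.0.113.10 with the address of a node running the Interlock proxy
echo "203.0.113.10 demo.local" | sudo tee -a /etc/hosts
```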
|
||||
|
||||
Interlock will detect once the service is available and publish it. Once the tasks are running
|
||||
and the proxy service has been updated the application should be available via `http://demo.local`. Open
|
||||
|
|
|
|||