diff --git a/Dockerfile b/Dockerfile index b133731198..7fd9846510 100644 --- a/Dockerfile +++ b/Dockerfile @@ -66,16 +66,6 @@ COPY --from=docs/docker.github.io:nginx-onbuild /etc/nginx/conf.d/default.conf / # archives less often than new ones. # To add a new archive, add it here # AND ALSO edit _data/docsarchives/archives.yaml to add it to the drop-down -COPY --from=docs/docker.github.io:v1.4 ${TARGET} ${TARGET} -COPY --from=docs/docker.github.io:v1.5 ${TARGET} ${TARGET} -COPY --from=docs/docker.github.io:v1.6 ${TARGET} ${TARGET} -COPY --from=docs/docker.github.io:v1.7 ${TARGET} ${TARGET} -COPY --from=docs/docker.github.io:v1.8 ${TARGET} ${TARGET} -COPY --from=docs/docker.github.io:v1.9 ${TARGET} ${TARGET} -COPY --from=docs/docker.github.io:v1.10 ${TARGET} ${TARGET} -COPY --from=docs/docker.github.io:v1.11 ${TARGET} ${TARGET} -COPY --from=docs/docker.github.io:v1.12 ${TARGET} ${TARGET} -COPY --from=docs/docker.github.io:v1.13 ${TARGET} ${TARGET} COPY --from=docs/docker.github.io:v17.03 ${TARGET} ${TARGET} COPY --from=docs/docker.github.io:v17.06 ${TARGET} ${TARGET} COPY --from=docs/docker.github.io:v17.09 ${TARGET} ${TARGET} diff --git a/README.md b/README.md index 0cda197bd6..d700366c81 100644 --- a/README.md +++ b/README.md @@ -317,6 +317,21 @@ still optimizes the bandwidth during browsing). > This is beta content. It is not yet complete and should be considered a work in progress. This content is subject to change without notice. ``` +## Accessing unsupported archived documentation + +Supported documentation includes the current version plus the previous five versions. + +If you are using a version of the documentation that is no longer supported, meaning the version number is not listed in the site's version drop-down, you can still access that documentation in the following ways: + +- By selecting the branch for your version from this repo's branch list +- By directly accessing the GitHub URL for your version.
For example, https://github.com/docker/docker.github.io/tree/v1.9 for `v1.9` +- By running a container of the specific [tag for your documentation version](https://cloud.docker.com/u/docs/repository/docker/docs/docker.github.io/general#read-these-docs-offline) +in Docker Hub. For example, run the following to access `v1.9`: + + ```bash + docker run -it -p 4000:4000 docs/docker.github.io:v1.9 + ``` + ## Building archives and the live published docs All the images described below are automatically built using Docker Hub. To diff --git a/_config.yml b/_config.yml index 463ec13ade..0711e1a7ee 100644 --- a/_config.yml +++ b/_config.yml @@ -13,7 +13,7 @@ safe: false lsi: false url: https://docs.docker.com # This needs to have all the directories you expect to be in the archives (delivered by docs-base in the Dockerfile) -keep_files: ["v1.4", "v1.5", "v1.6", "v1.7", "v1.8", "v1.9", "v1.10", "v1.11", "v1.12", "v1.13", "v17.03", "v17.06", "v17.09", "v17.12", "v18.03"] +keep_files: ["v17.03", "v17.06", "v17.09", "v17.12", "v18.03"] exclude: ["_scripts", "apidocs/layouts", "Gemfile", "hooks", "index.html", "404.html"] # Component versions -- address like site.docker_ce_version @@ -94,7 +94,7 @@ defaults: - scope: path: "install" values: - win_latest_build: "docker-18.09.1" + win_latest_build: "docker-18.09.2" - scope: path: "datacenter" values: diff --git a/_data/docsarchive/archives.yaml b/_data/docsarchive/archives.yaml index 3069fd3570..a961160d4f 100644 --- a/_data/docsarchive/archives.yaml +++ b/_data/docsarchive/archives.yaml @@ -22,33 +22,3 @@ - archive: name: v17.03 image: docs/docker.github.io:v17.03 -- archive: - name: v1.13 - image: docs/docker.github.io:v1.13 -- archive: - name: v1.12 - image: docs/docker.github.io:v1.12 -- archive: - name: v1.11 - image: docs/docker.github.io:v1.11 -- archive: - name: v1.10 - image: docs/docker.github.io:v1.10 -- archive: - name: v1.9 - image: docs/docker.github.io:v1.9 -- archive: - name: v1.8 - image: docs/docker.github.io:v1.8 
-- archive: - name: v1.7 - image: docs/docker.github.io:v1.7 -- archive: - name: v1.6 - image: docs/docker.github.io:v1.6 -- archive: - name: v1.5 - image: docs/docker.github.io:v1.5 -- archive: - name: v1.4 - image: docs/docker.github.io:v1.4 diff --git a/_data/engine-cli/docker_run.yaml b/_data/engine-cli/docker_run.yaml index 28e07ab825..dca4428f3c 100644 --- a/_data/engine-cli/docker_run.yaml +++ b/_data/engine-cli/docker_run.yaml @@ -1454,7 +1454,7 @@ examples: |- On Windows server, assuming the default configuration, these commands are equivalent and result in `process` isolation: - ```PowerShell + ```powershell PS C:\> docker run -d microsoft/nanoserver powershell echo process PS C:\> docker run -d --isolation default microsoft/nanoserver powershell echo process PS C:\> docker run -d --isolation process microsoft/nanoserver powershell echo process @@ -1464,7 +1464,7 @@ examples: |- are running against a Windows client-based daemon, these commands are equivalent and result in `hyperv` isolation: - ```PowerShell + ```powershell PS C:\> docker run -d microsoft/nanoserver powershell echo hyperv PS C:\> docker run -d --isolation default microsoft/nanoserver powershell echo hyperv PS C:\> docker run -d --isolation hyperv microsoft/nanoserver powershell echo hyperv diff --git a/_data/glossary.yaml b/_data/glossary.yaml index d42732b086..ddc581405c 100644 --- a/_data/glossary.yaml +++ b/_data/glossary.yaml @@ -32,7 +32,7 @@ collection: | nodes, services, containers, volumes, networks, and secrets. [Learn how to manage collections](/datacenter/ucp/2.2/guides/access-control/manage-access-with-collections/). Compose: | [Compose](https://github.com/docker/compose) is a tool for defining and - running complex applications with Docker. With compose, you define a + running complex applications with Docker. 
With Compose, you define a multi-container application in a single file, then spin your application up in a single command which does everything that needs to be done to get it running. diff --git a/_data/toc.yaml b/_data/toc.yaml index 419588ec09..02932e0ff8 100644 --- a/_data/toc.yaml +++ b/_data/toc.yaml @@ -1182,6 +1182,8 @@ manuals: title: Add SANs to cluster certificates - path: /ee/ucp/admin/configure/collect-cluster-metrics/ title: Collect UCP cluster metrics with Prometheus + - path: /ee/ucp/admin/configure/metrics-descriptions/ + title: Using UCP cluster metrics with Prometheus - path: /ee/ucp/admin/configure/configure-rbac-kube/ title: Configure native Kubernetes role-based access control - path: /ee/ucp/admin/configure/create-audit-logs/ @@ -1204,8 +1206,6 @@ manuals: title: UCP configuration file - path: /ee/ucp/admin/configure/use-node-local-network-in-swarm/ title: Use a local node network in a swarm - - path: /ee/ucp/admin/configure/use-nfs-volumes/ - title: Use NFS persistent storage - path: /ee/ucp/admin/configure/use-your-own-tls-certificates/ title: Use your own TLS certificates - path: /ee/ucp/admin/configure/manage-and-deploy-private-images/ @@ -1335,6 +1335,8 @@ manuals: path: /ee/ucp/interlock/usage/service-clusters/ - title: Context/Path based routing path: /ee/ucp/interlock/usage/context/ + - title: VIP backend mode + path: /ee/ucp/interlock/usage/interlock-vip-mode/ - title: Service labels reference path: /ee/ucp/interlock/usage/labels-reference/ - title: Layer 7 routing upgrade @@ -1343,6 +1345,8 @@ manuals: section: - title: Access Kubernetes Resources path: /ee/ucp/kubernetes/kube-resources/ + - title: Use NFS persistent storage + path: /ee/ucp/admin/configure/use-nfs-volumes/ - title: Configure AWS EBS Storage for Kubernetes path: /ee/ucp/kubernetes/configure-aws-storage/ - title: Deploy a workload @@ -2148,8 +2152,8 @@ manuals: title: Monitor the cluster status - path: /ee/dtr/admin/monitor-and-troubleshoot/notary-audit-logs/ title: 
Check Notary audit logs - - path: /ee/dtr/admin/monitor-and-troubleshoot/troubleshoot-with-logs/ - title: Troubleshoot with logs + - path: /ee/dtr/admin/monitor-and-troubleshoot/troubleshoot-dtr/ + title: Troubleshoot Docker Trusted Registry - sectiontitle: Disaster recovery section: - title: Overview @@ -2306,8 +2310,8 @@ manuals: title: Monitor the cluster status - path: /datacenter/dtr/2.5/guides/admin/monitor-and-troubleshoot/notary-audit-logs/ title: Check Notary audit logs - - path: /datacenter/dtr/2.5/guides/admin/monitor-and-troubleshoot/troubleshoot-with-logs/ - title: Troubleshoot with logs + - path: /datacenter/dtr/2.5/guides/admin/monitor-and-troubleshoot/troubleshoot-dtr/ + title: Troubleshoot Docker Trusted Registry - path: /datacenter/dtr/2.5/guides/admin/monitor-and-troubleshoot/troubleshoot-batch-jobs/ title: Troubleshoot batch jobs - path: /datacenter/dtr/2.5/guides/admin/backups-and-disaster-recovery/ @@ -3112,6 +3116,8 @@ manuals: title: File system sharing - path: /docker-for-mac/osxfs-caching/ title: Performance tuning for volume mounts (shared filesystems) + - path: /docker-for-mac/space/ + title: Disk utilization - path: /docker-for-mac/troubleshoot/ title: Logs and troubleshooting - path: /docker-for-mac/faqs/ diff --git a/_includes/ee-linux-install-reuse.md b/_includes/ee-linux-install-reuse.md index bbd1cfa6b5..62e845c7a2 100644 --- a/_includes/ee-linux-install-reuse.md +++ b/_includes/ee-linux-install-reuse.md @@ -137,7 +137,7 @@ You only need to set up the repository once, after which you can install Docker {% elsif section == "install-using-yum-repo" %} -> ***NOTE:*** If you need to run Docker EE 2.0, please see the following instructions: +> **Note**: If you need to run Docker EE 2.0, please see the following instructions: > * [18.03](https://docs.docker.com/v18.03/ee/supported-platforms/) - Older Docker EE Engine only release > * [17.06](https://docs.docker.com/v17.06/engine/installation/) - Docker Enterprise Edition 2.0 (Docker 
Engine, > UCP, and DTR). diff --git a/compose/completion.md b/compose/completion.md index f688c6b712..91c8ba2cd4 100644 --- a/compose/completion.md +++ b/compose/completion.md @@ -13,19 +13,30 @@ for the bash and zsh shell. Make sure bash completion is installed. -* On a current Linux OS (in a non-minimal installation), bash completion should be +#### Linux + +1. On a current Linux OS (in a non-minimal installation), bash completion should be available. - -* On a Mac, install with `brew install bash-completion`. - -Place the completion script in `/etc/bash_completion.d/` -(or `/usr/local/etc/bash_completion.d/` on a Mac): +2. Place the completion script in `/etc/bash_completion.d/`. ```shell sudo curl -L https://raw.githubusercontent.com/docker/compose/{{site.compose_version}}/contrib/completion/bash/docker-compose -o /etc/bash_completion.d/docker-compose ``` -On a Mac, add the following to your `~/.bash_profile`: +#### Mac + +##### Install via Homebrew + +1. Install with `brew install bash-completion`. +2. After installation, Homebrew displays the installation path. Make sure to place the completion script in that path. + + For example, on macOS 10.13.2, place the completion script in `/usr/local/etc/bash_completion.d/`. + +```shell +sudo curl -L https://raw.githubusercontent.com/docker/compose/{{site.compose_version}}/contrib/completion/bash/docker-compose -o /usr/local/etc/bash_completion.d/docker-compose +``` + +3. Add the following to your `~/.bash_profile`: ```shell if [ -f $(brew --prefix)/etc/bash_completion ]; then @@ -33,13 +44,13 @@ if [ -f $(brew --prefix)/etc/bash_completion ]; then fi ``` -You can source your `~/.bash_profile` or launch a new terminal to utilize +4. You can source your `~/.bash_profile` or launch a new terminal to utilize completion. -If you're using MacPorts instead of brew, use the following steps instead: +##### Install via MacPorts -Run `sudo port install bash-completion` to install bash completion.
-Add the following lines to `~/.bash_profile`: +1. Run `sudo port install bash-completion` to install bash completion. +2. Add the following lines to `~/.bash_profile`: ```shell if [ -f /opt/local/etc/profile.d/bash_completion.sh ]; then @@ -47,43 +58,44 @@ if [ -f /opt/local/etc/profile.d/bash_completion.sh ]; then fi ``` -You can source your `~/.bash_profile` or launch a new terminal to utilize +3. You can source your `~/.bash_profile` or launch a new terminal to utilize completion. ### Zsh -#### With oh-my-zsh +To use the plugin-based setup in the next section, make sure you have [installed `oh-my-zsh`](https://ohmyz.sh/) on your computer; otherwise, follow the manual steps under *Without oh-my-zsh shell*. -Add `docker` to the plugins list in `~/.zshrc`: +#### With oh-my-zsh shell + +Add `docker` and `docker-compose` to the plugins list in `~/.zshrc` to run autocompletion within the oh-my-zsh shell. In the following example, `...` represents any other Zsh plugins you may have installed. ```shell -plugins=( - docker +plugins=(... docker docker-compose ) ``` -#### Without oh-my-zsh +#### Without oh-my-zsh shell -Place the completion script in your `/path/to/zsh/completion` (typically `~/.zsh/completion/`): +1. Place the completion script in your `/path/to/zsh/completion` (typically `~/.zsh/completion/`): ```shell $ mkdir -p ~/.zsh/completion $ curl -L https://raw.githubusercontent.com/docker/compose/{{site.compose_version}}/contrib/completion/zsh/_docker-compose > ~/.zsh/completion/_docker-compose ``` -Include the directory in your `$fpath` by adding in `~/.zshrc`: +2. Include the directory in your `$fpath` by adding the following to `~/.zshrc`: ```shell fpath=(~/.zsh/completion $fpath) ``` -Make sure `compinit` is loaded or do it by adding in `~/.zshrc`: +3. Make sure `compinit` is loaded, or load it by adding the following to `~/.zshrc`: ```shell autoload -Uz compinit && compinit -i ``` -Then reload your shell: +4.
Then reload your shell: ```shell exec $SHELL -l diff --git a/compose/compose-file/compose-file-v1.md b/compose/compose-file/compose-file-v1.md index 7e0ede1bc1..9e5c1ac1de 100644 --- a/compose/compose-file/compose-file-v1.md +++ b/compose/compose-file/compose-file-v1.md @@ -205,7 +205,7 @@ the value assigned to a variable that shows up more than once_. The files in the list are processed from the top down. For the same variable specified in file `a.env` and assigned a different value in file `b.env`, if `b.env` is listed below (after), then the value from `b.env` stands. For example, given the -following declaration in `docker_compose.yml`: +following declaration in `docker-compose.yml`: ```yaml services: diff --git a/compose/compose-file/compose-file-v2.md b/compose/compose-file/compose-file-v2.md index cf59a2a446..f6c356dab2 100644 --- a/compose/compose-file/compose-file-v2.md +++ b/compose/compose-file/compose-file-v2.md @@ -532,7 +532,7 @@ the value assigned to a variable that shows up more than once_. The files in the list are processed from the top down. For the same variable specified in file `a.env` and assigned a different value in file `b.env`, if `b.env` is listed below (after), then the value from `b.env` stands. For example, given the -following declaration in `docker_compose.yml`: +following declaration in `docker-compose.yml`: ```yaml services: @@ -990,7 +990,7 @@ as it has the highest priority. It then connects to `app_net_3`, then app_net_2: app_net_3: -> **Note:** If multiple networks have the same priority, the connection order +> **Note**: If multiple networks have the same priority, the connection order > is undefined. ### pid @@ -1235,7 +1235,7 @@ volumes: mydata: ``` -> **Note:** When creating bind mounts, using the long syntax requires the +> **Note**: When creating bind mounts, using the long syntax requires the > referenced folder to be created beforehand. Using the short syntax > creates the folder on the fly if it doesn't exist. 
> See the [bind mounts documentation](/engine/admin/volumes/bind-mounts.md/#differences-between--v-and---mount-behavior) @@ -1248,7 +1248,7 @@ service. volume_driver: mydriver -> **Note:** In [version 2 files](compose-versioning.md#version-2), this +> **Note**: In [version 2 files](compose-versioning.md#version-2), this > option only applies to anonymous volumes (those specified in the image, > or specified under `volumes` without an explicit named volume or host path). > To configure the driver for a named volume, use the `driver` key under the @@ -1298,7 +1298,7 @@ then read-write is used. Each of these is a single value, analogous to its [docker run](/engine/reference/run.md) counterpart. -> **Note:** The following options were added in [version 2.2](compose-versioning.md#version-22): +> **Note**: The following options were added in [version 2.2](compose-versioning.md#version-22): > `cpu_count`, `cpu_percent`, `cpus`. > The following options were added in [version 2.1](compose-versioning.md#version-21): > `oom_kill_disable`, `cpu_period` diff --git a/compose/compose-file/index.md b/compose/compose-file/index.md index 2ab9179e5f..4a736aaaa2 100644 --- a/compose/compose-file/index.md +++ b/compose/compose-file/index.md @@ -279,7 +279,7 @@ at build time is the value in the environment where Compose is running. #### cache_from -> **Note:** This option is new in v3.2 +> **Note**: This option is new in v3.2 A list of images that the engine uses for cache resolution. @@ -291,7 +291,7 @@ A list of images that the engine uses for cache resolution. #### labels -> **Note:** This option is new in v3.3 +> **Note**: This option is new in v3.3 Add metadata to the resulting image using [Docker labels](/engine/userguide/labels-custom-metadata.md). You can use either an array or a dictionary. @@ -490,7 +490,7 @@ an error. ### credential_spec -> **Note:** this option was added in v3.3. +> **Note**: this option was added in v3.3. 
Configure the credential spec for managed service account. This option is only used for services using Windows containers. The `credential_spec` must be in the @@ -1001,7 +1001,7 @@ the value assigned to a variable that shows up more than once_. The files in the list are processed from the top down. For the same variable specified in file `a.env` and assigned a different value in file `b.env`, if `b.env` is listed below (after), then the value from `b.env` stands. For example, given the -following declaration in `docker_compose.yml`: +following declaration in `docker-compose.yml`: ```none services: @@ -1195,7 +1195,7 @@ It's recommended that you use reverse-DNS notation to prevent your labels from c ### links ->**Warning**: >The `--link` flag is a legacy feature of Docker. It +>**Warning**: The `--link` flag is a legacy feature of Docker. It may eventually be removed. Unless you absolutely need to continue using it, we recommend that you use [user-defined networks](/engine/userguide/networking//#user-defined-networks) to facilitate communication between two containers instead of using `--link`. @@ -1431,7 +1431,7 @@ containers in the bare-metal machine's namespace and vice versa. Expose ports. -> **Note:** Port mapping is incompatible with `network_mode: host` +> **Note**: Port mapping is incompatible with `network_mode: host` #### Short syntax @@ -1473,7 +1473,7 @@ ports: ``` -> **Note:** The long syntax is new in v3.2 +> **Note**: The long syntax is new in v3.2 ### restart @@ -1810,7 +1810,7 @@ volumes: mydata: ``` -> **Note:** The long syntax is new in v3.2 +> **Note**: The long syntax is new in v3.2 #### Volumes for services, swarms, and stack files diff --git a/compose/install.md b/compose/install.md index d461135877..4b190c3d6f 100644 --- a/compose/install.md +++ b/compose/install.md @@ -77,14 +77,14 @@ Docker Compose. 
To do so, follow these steps: version of Compose you want to use: ```none - Invoke-WebRequest "https://github.com/docker/compose/releases/download/$dockerComposeVersion/docker-compose-Windows-x86_64.exe" -UseBasicParsing -OutFile $Env:ProgramFiles\docker\docker-compose.exe + Invoke-WebRequest "https://github.com/docker/compose/releases/download/$dockerComposeVersion/docker-compose-Windows-x86_64.exe" -UseBasicParsing -OutFile $Env:ProgramFiles\Docker\Docker\resources\bin\docker-compose.exe ``` For example, to download Compose version {{site.compose_version}}, the command is: ```none - Invoke-WebRequest "https://github.com/docker/compose/releases/download/{{site.compose_version}}/docker-compose-Windows-x86_64.exe" -UseBasicParsing -OutFile $Env:ProgramFiles\docker\docker-compose.exe + Invoke-WebRequest "https://github.com/docker/compose/releases/download/{{site.compose_version}}/docker-compose-Windows-x86_64.exe" -UseBasicParsing -OutFile $Env:ProgramFiles\Docker\Docker\resources\bin\docker-compose.exe ``` > Use the latest Compose release number in the download command. > @@ -129,7 +129,7 @@ by step instructions are also included below. sudo chmod +x /usr/local/bin/docker-compose ``` -> ***Note:*** If the command `docker-compose` fails after installation, check your path. +> **Note**: If the command `docker-compose` fails after installation, check your path. > You can also create a symbolic link to `/usr/bin` or any other directory in your path. For example: diff --git a/config/containers/logging/awslogs.md b/config/containers/logging/awslogs.md index 3b7e0c6c62..282f4c71cf 100644 --- a/config/containers/logging/awslogs.md +++ b/config/containers/logging/awslogs.md @@ -169,7 +169,7 @@ The following `strftime` codes are supported: | `%p` | AM or PM. | AM | | `%M` | Minute as a zero-padded decimal number. | 57 | | `%S` | Second as a zero-padded decimal number. | 04 | -| `%L` | Milliseconds as a zero-padded decimal number. 
| 123 | +| `%L` | Milliseconds as a zero-padded decimal number. | .123 | | `%f` | Microseconds as a zero-padded decimal number. | 000345 | | `%z` | UTC offset in the form +HHMM or -HHMM. | +1300 | | `%Z` | Time zone name. | PST | diff --git a/config/containers/logging/configure.md b/config/containers/logging/configure.md index cad9544e08..f281ed2140 100644 --- a/config/containers/logging/configure.md +++ b/config/containers/logging/configure.md @@ -147,6 +147,7 @@ see more options. |:------------------------------|:--------------------------------------------------------------------------------------------------------------| | `none` | No logs are available for the container and `docker logs` does not return any output. | | [`json-file`](json-file.md) | The logs are formatted as JSON. The default logging driver for Docker. | +| [`local`](local.md) | Writes logs messages to local filesystem in binary files using Protobuf. | | [`syslog`](syslog.md) | Writes logging messages to the `syslog` facility. The `syslog` daemon must be running on the host machine. | | [`journald`](journald.md) | Writes log messages to `journald`. The `journald` daemon must be running on the host machine. | | [`gelf`](gelf.md) | Writes log messages to a Graylog Extended Log Format (GELF) endpoint such as Graylog or Logstash. | diff --git a/config/containers/logging/fluentd.md b/config/containers/logging/fluentd.md index 575824e2bc..6f934e2587 100644 --- a/config/containers/logging/fluentd.md +++ b/config/containers/logging/fluentd.md @@ -110,7 +110,7 @@ for advanced [log tag options](log_tags.md). ### fluentd-async-connect Docker connects to Fluentd in the background. Messages are buffered until the -connection is established. +connection is established. Defaults to `false`. ### fluentd-buffer-limit @@ -123,11 +123,11 @@ How long to wait between retries. Defaults to 1 second. ### fluentd-max-retries -The maximum number of retries. Defaults to 10. +The maximum number of retries. 
Defaults to `10`. ### fluentd-sub-second-precision -Generates event logs in nanosecond resolution. Defaults to false. +Generates event logs in nanosecond resolution. Defaults to `false`. ## Fluentd daemon management with Docker diff --git a/config/containers/logging/plugins.md b/config/containers/logging/plugins.md index f49e537f7e..30b3342f72 100644 --- a/config/containers/logging/plugins.md +++ b/config/containers/logging/plugins.md @@ -24,7 +24,7 @@ a specific plugin using `docker inspect`. ## Configure the plugin as the default logging driver After the plugin is installed, you can configure the Docker daemon to use it as -the default by setting the plugin's name as the value of the `logging-driver` +the default by setting the plugin's name as the value of the `log-driver` key in the `daemon.json`, as detailed in the [logging overview](configure.md#configure-the-default-logging-driver). If the logging driver supports additional options, you can set those as the values of diff --git a/config/containers/start-containers-automatically.md b/config/containers/start-containers-automatically.md index a695072aeb..4c24788bc1 100644 --- a/config/containers/start-containers-automatically.md +++ b/config/containers/start-containers-automatically.md @@ -28,8 +28,8 @@ any of the following: |:-----------------|:------------------------------------------------------------------------------------------------| | `no` | Do not automatically restart the container. (the default) | | `on-failure` | Restart the container if it exits due to an error, which manifests as a non-zero exit code. | -| `unless-stopped` | Restart the container unless it is explicitly stopped or Docker itself is stopped or restarted. | -| `always` | Always restart the container if it stops. | +| `always` | Always restart the container if it stops. If it is manually stopped, it is restarted only when Docker daemon restarts or the container itself is manually restarted. 
(See the second bullet listed in [restart policy details](#restart-policy-details)) | +| `unless-stopped` | Similar to `always`, except that when the container is stopped (manually or otherwise), it is not restarted even after Docker daemon restarts. | The following example starts a Redis container and configures it to always restart unless it is explicitly stopped or Docker is restarted. diff --git a/config/daemon/systemd.md b/config/daemon/systemd.md index 57d7863603..1c18eeae3e 100644 --- a/config/daemon/systemd.md +++ b/config/daemon/systemd.md @@ -101,7 +101,7 @@ you need to add this configuration in the Docker systemd service file. The `NO_PROXY` variable specifies a string that contains comma-separated values for hosts that should be excluded from proxying. These are the options you can specify to exclude hosts: - * IP address prefix (`1.2.3.4`) or in CIDR notation (`1.2.3.4/8`) + * IP address prefix (`1.2.3.4`) * Domain name, or a special DNS label (`*`) * A domain name matches that name and all subdomains. A domain name with a leading "." matches subdomains only. For example, given the domains diff --git a/datacenter/dtr/2.3/guides/release-notes.md b/datacenter/dtr/2.3/guides/release-notes.md index 64a3cbb3de..f67ab8eb1d 100644 --- a/datacenter/dtr/2.3/guides/release-notes.md +++ b/datacenter/dtr/2.3/guides/release-notes.md @@ -11,6 +11,14 @@ known issues for each DTR version. You can then use [the upgrade instructions](admin/upgrade.md), to upgrade your installation to the latest release. +## Version 2.3.11 + +(28 February 2019) + +### Changelog + +* Bump the Golang version that is used to build DTR to version 1.10.8. (docker/dhe-deploy#10064) + ## Version 2.3.10 (29 January 2019) diff --git a/datacenter/dtr/2.3/reference/cli/install.md b/datacenter/dtr/2.3/reference/cli/install.md index af70dd5652..9ab28e346b 100644 --- a/datacenter/dtr/2.3/reference/cli/install.md +++ b/datacenter/dtr/2.3/reference/cli/install.md @@ -24,11 +24,13 @@ command. 
Example usage: +```bash $ docker run -it --rm dtr-internal.caas.docker.io/caas/dtr:2.4.0-alpha-008434_ge02413a install \ --ucp-node \ --ucp-insecure-tls +``` -Note: Use --ucp-ca "$(cat ca.pem)" instead of --ucp-insecure-tls for a production deployment. +> **Note**: Use `--ucp-ca "$(cat ca.pem)"` instead of `--ucp-insecure-tls` for a production deployment. ## Options diff --git a/datacenter/dtr/2.4/reference/cli/install.md b/datacenter/dtr/2.4/reference/cli/install.md index 8769054686..34590c63a5 100644 --- a/datacenter/dtr/2.4/reference/cli/install.md +++ b/datacenter/dtr/2.4/reference/cli/install.md @@ -24,11 +24,13 @@ command. Example usage: +```bash $ docker run -it --rm docker/dtr:2.4.1 install \ --ucp-node \ --ucp-insecure-tls +``` -Note: Use --ucp-ca "$(cat ca.pem)" instead of --ucp-insecure-tls for a production deployment. +> **Note**: Use `--ucp-ca "$(cat ca.pem)"` instead of `--ucp-insecure-tls` for a production deployment. ## Options diff --git a/datacenter/dtr/2.5/guides/admin/monitor-and-troubleshoot/troubleshoot-dtr.md b/datacenter/dtr/2.5/guides/admin/monitor-and-troubleshoot/troubleshoot-dtr.md new file mode 100644 index 0000000000..58d203e04f --- /dev/null +++ b/datacenter/dtr/2.5/guides/admin/monitor-and-troubleshoot/troubleshoot-dtr.md @@ -0,0 +1,243 @@ +--- +title: Troubleshoot Docker Trusted Registry +description: Learn how to troubleshoot your DTR installation. +keywords: registry, monitor, troubleshoot +redirect_from: /ee/dtr/admin/monitor-and-troubleshoot/troubleshoot-with-logs +--- + +This guide contains tips and tricks for troubleshooting DTR problems. + +## Troubleshoot overlay networks + +High availability in DTR depends on swarm overlay networking. One way to test +if overlay networks are working correctly is to deploy containers to the same +overlay network on different nodes and see if they can ping one another. 
+ +Use SSH to log into a node and run: + +```bash +docker run -it --rm \ + --net dtr-ol --name overlay-test1 \ + --entrypoint sh {{ page.dtr_org }}/{{ page.dtr_repo }} +``` + +Then use SSH to log into another node and run: + +```bash +docker run -it --rm \ + --net dtr-ol --name overlay-test2 \ + --entrypoint ping {{ page.dtr_org }}/{{ page.dtr_repo }} -c 3 overlay-test1 +``` + +If the second command succeeds, it indicates overlay networking is working +correctly between those nodes. + +You can run this test with any attachable overlay network and any Docker image +that has `sh` and `ping`. + + +## Access RethinkDB directly + +DTR uses RethinkDB for persisting data and replicating it across replicas. +It might be helpful to connect directly to the RethinkDB instance running on a +DTR replica to check the DTR internal state. + +> **Warning**: Modifying RethinkDB directly is not supported and may cause +> problems. +{: .warning } + +### via RethinkCLI + +As of v2.5.5, the [RethinkCLI has been removed](/ee/dtr/release-notes/#255) from the RethinkDB image along with other unused components. You can now run RethinkCLI from a separate image in the `dockerhubenterprise` organization. Note that the commands below are using separate tags for non-interactive and interactive modes. + +#### Non-interactive + +Use SSH to log into a node that is running a DTR replica, and run the following: + +{% raw %} +```bash +# List problems in the cluster detected by the current node. +REPLICA_ID=$(docker container ls --filter=name=dtr-rethink --format '{{.Names}}' | cut -d'/' -f2 | cut -d'-' -f3 | head -n 1) && echo 'r.db("rethinkdb").table("current_issues")' | docker run --rm -i --net dtr-ol -v "dtr-ca-${REPLICA_ID}:/ca" -e DTR_REPLICA_ID=$REPLICA_ID dockerhubenterprise/rethinkcli:v2.2.0-ni non-interactive +``` +{% endraw %} + +On a healthy cluster the output will be `[]`. + +#### Interactive + +Starting in DTR 2.5.5, you can run RethinkCLI from a separate image. 
First, set an environment variable for your DTR replica ID: + +```bash +REPLICA_ID=$(docker inspect -f '{{.Name}}' $(docker ps -q -f name=dtr-rethink) | cut -f 3 -d '-') +``` + +RethinkDB stores data in different databases that contain multiple tables. Run the following command to get into interactive mode +and query the contents of the DB: + +```bash +docker run -it --rm --net dtr-ol -v dtr-ca-$REPLICA_ID:/ca dockerhubenterprise/rethinkcli:v2.3.0 $REPLICA_ID +``` + +```none +# List problems in the cluster detected by the current node. +> r.db("rethinkdb").table("current_issues") +[] + +# List all the DBs in RethinkDB +> r.dbList() +[ 'dtr2', + 'jobrunner', + 'notaryserver', + 'notarysigner', + 'rethinkdb' ] + +# List the tables in the dtr2 db +> r.db('dtr2').tableList() +[ 'blob_links', + 'blobs', + 'client_tokens', + 'content_caches', + 'events', + 'layer_vuln_overrides', + 'manifests', + 'metrics', + 'namespace_team_access', + 'poll_mirroring_policies', + 'promotion_policies', + 'properties', + 'pruning_policies', + 'push_mirroring_policies', + 'repositories', + 'repository_team_access', + 'scanned_images', + 'scanned_layers', + 'tags', + 'user_settings', + 'webhooks' ] + +# List the entries in the repositories table +> r.db('dtr2').table('repositories') +[ { enableManifestLists: false, + id: 'ac9614a8-36f4-4933-91fa-3ffed2bd259b', + immutableTags: false, + name: 'test-repo-1', + namespaceAccountID: 'fc3b4aec-74a3-4ba2-8e62-daed0d1f7481', + namespaceName: 'admin', + pk: '3a4a79476d76698255ab505fb77c043655c599d1f5b985f859958ab72a4099d6', + pulls: 0, + pushes: 0, + scanOnPush: false, + tagLimit: 0, + visibility: 'public' }, + { enableManifestLists: false, + id: '9f43f029-9683-459f-97d9-665ab3ac1fda', + immutableTags: false, + longDescription: '', + name: 'testing', + namespaceAccountID: 'fc3b4aec-74a3-4ba2-8e62-daed0d1f7481', + namespaceName: 'admin', + pk: '6dd09ac485749619becaff1c17702ada23568ebe0a40bb74a330d058a757e0be', + pulls: 0, + pushes: 0, + scanOnPush: 
false, + shortDescription: '', + tagLimit: 1, + visibility: 'public' } ] +``` + +Individual DBs and tables are a private implementation detail and may change in DTR +from version to version, but you can always use `dbList()` and `tableList()` to explore +the contents and data structure. + +[Learn more about RethinkDB queries](https://www.rethinkdb.com/docs/guide/javascript/). + +### via API + +To check on the overall status of your DTR cluster without interacting with RethinkCLI, run the following API request: + +```bash +curl -u admin:$TOKEN -X GET "https:///api/v0/meta/cluster_status" -H "accept: application/json" +``` + +#### Example API Response +```none +{ + "rethink_system_tables": { + "cluster_config": [ + { + "heartbeat_timeout_secs": 10, + "id": "heartbeat" + } + ], + "current_issues": [], + "db_config": [ + { + "id": "339de11f-b0c2-4112-83ac-520cab68d89c", + "name": "notaryserver" + }, + { + "id": "aa2e893f-a69a-463d-88c1-8102aafebebc", + "name": "dtr2" + }, + { + "id": "bdf14a41-9c31-4526-8436-ab0fed00c2fd", + "name": "jobrunner" + }, + { + "id": "f94f0e35-b7b1-4a2f-82be-1bdacca75039", + "name": "notarysigner" + } + ], + "server_status": [ + { + "id": "9c41fbc6-bcf2-4fad-8960-d117f2fdb06a", + "name": "dtr_rethinkdb_5eb9459a7832", + "network": { + "canonical_addresses": [ + { + "host": "dtr-rethinkdb-5eb9459a7832.dtr-ol", + "port": 29015 + } + ], + "cluster_port": 29015, + "connected_to": { + "dtr_rethinkdb_56b65e8c1404": true + }, + "hostname": "9e83e4fee173", + "http_admin_port": "", + "reql_port": 28015, + "time_connected": "2019-02-15T00:19:22.035Z" + }, + } + ... + ] + } +} + +``` + +## Recover from an unhealthy replica + +When a DTR replica is unhealthy or down, the DTR web UI displays a warning: + +```none +Warning: The following replicas are unhealthy: 59e4e9b0a254; Reasons: Replica reported health too long ago: 2017-02-18T01:11:20Z; Replicas 000000000000, 563f02aba617 are still healthy. 
+``` + +To fix this, you should remove the unhealthy replica from the DTR cluster, +and join a new one. Start by running: + +```bash +docker run -it --rm \ + {{ page.dtr_org }}/{{ page.dtr_repo }}:{{ page.dtr_version }} remove \ + --ucp-insecure-tls +``` + +And then: + +```bash +docker run -it --rm \ + {{ page.dtr_org }}/{{ page.dtr_repo }}:{{ page.dtr_version }} join \ + --ucp-node \ + --ucp-insecure-tls +``` diff --git a/datacenter/dtr/2.5/guides/admin/monitor-and-troubleshoot/troubleshoot-with-logs.md b/datacenter/dtr/2.5/guides/admin/monitor-and-troubleshoot/troubleshoot-with-logs.md deleted file mode 100644 index 5d41c27340..0000000000 --- a/datacenter/dtr/2.5/guides/admin/monitor-and-troubleshoot/troubleshoot-with-logs.md +++ /dev/null @@ -1,138 +0,0 @@ ---- -title: Troubleshoot Docker Trusted Registry -description: Learn how to troubleshoot your DTR installation. -keywords: registry, monitor, troubleshoot ---- - -This guide contains tips and tricks for troubleshooting DTR problems. - -## Troubleshoot overlay networks - -High availability in DTR depends on swarm overlay networking. One way to test -if overlay networks are working correctly is to deploy containers to the same -overlay network on different nodes and see if they can ping one another. - -Use SSH to log into a node and run: - -```bash -docker run -it --rm \ - --net dtr-ol --name overlay-test1 \ - --entrypoint sh {{ page.dtr_org }}/{{ page.dtr_repo }} -``` - -Then use SSH to log into another node and run: - -```bash -docker run -it --rm \ - --net dtr-ol --name overlay-test2 \ - --entrypoint ping {{ page.dtr_org }}/{{ page.dtr_repo }} -c 3 overlay-test1 -``` - -If the second command succeeds, it indicates overlay networking is working -correctly between those nodes. - -You can run this test with any attachable overlay network and any Docker image -that has `sh` and `ping`. - - -## Access RethinkDB directly - -DTR uses RethinkDB for persisting data and replicating it across replicas. 
-It might be helpful to connect directly to the RethinkDB instance running on a -DTR replica to check the DTR internal state. - -> **Warning**: Modifying RethinkDB directly is not supported and may cause -> problems. -{: .warning } - -Use SSH to log into a node that is running a DTR replica, and run the following -commands: - -{% raw %} -```bash -# List problems in the cluster detected by the current node. -echo 'r.db("rethinkdb").table("current_issues")' | \ - docker exec -i \ - $(docker ps -q --filter name=dtr-rethinkdb) \ - rethinkcli non-interactive; \ - echo -``` -{% endraw %} - -On a healthy cluster the output will be `[]`. - -RethinkDB stores data in different databases that contain multiple tables. This -container can also be used to connect to the local DTR replica and -interactively query the contents of the DB. - -{% raw %} -```bash -docker exec -it $(docker ps -q --filter name=dtr-rethinkdb) rethinkcli -``` -{% endraw %} - -```none -# List problems in the cluster detected by the current node. -> r.db("rethinkdb").table("current_issues") -[] - -# List all the DBs in RethinkDB -> r.dbList() -[ 'dtr2', - 'jobrunner', - 'notaryserver', - 'notarysigner', - 'rethinkdb' ] - -# List the tables in the dtr2 db -> r.db('dtr2').tableList() -[ 'client_tokens', - 'events', - 'manifests', - 'namespace_team_access', - 'properties', - 'repositories', - 'repository_team_access', - 'tags' ] - -# List the entries in the repositories table -> r.db('dtr2').table('repositories') -[ { id: '19f1240a-08d8-4979-a898-6b0b5b2338d8', - name: 'my-test-repo', - namespaceAccountID: '924bf131-6213-43fa-a5ed-d73c7ccf392e', - pk: 'cf5e8bf1197e281c747f27e203e42e22721d5c0870b06dfb1060ad0970e99ada', - visibility: 'public' }, -... -``` - -Individual DBs and tables are a private implementation detail and may change in DTR -from version to version, but you can always use `dbList()` and `tableList()` to explore -the contents and data structure. 
- -[Learn more about RethinkDB queries](https://www.rethinkdb.com/docs/guide/javascript/). - -## Recover from an unhealthy replica - -When a DTR replica is unhealthy or down, the DTR web UI displays a warning: - -```none -Warning: The following replicas are unhealthy: 59e4e9b0a254; Reasons: Replica reported health too long ago: 2017-02-18T01:11:20Z; Replicas 000000000000, 563f02aba617 are still healthy. -``` - -To fix this, you should remove the unhealthy replica from the DTR cluster, -and join a new one. Start by running: - -```none -docker run -it --rm \ - {{ page.dtr_org }}/{{ page.dtr_repo }}:{{ page.dtr_version }} remove \ - --ucp-insecure-tls -``` - -And then: - -```none -docker run -it --rm \ - {{ page.dtr_org }}/{{ page.dtr_repo }}:{{ page.dtr_version }} join \ - --ucp-node \ - --ucp-insecure-tls -``` diff --git a/datacenter/dtr/2.5/reference/cli/destroy.md b/datacenter/dtr/2.5/reference/cli/destroy.md index e6219aba0b..f3a84067f1 100644 --- a/datacenter/dtr/2.5/reference/cli/destroy.md +++ b/datacenter/dtr/2.5/reference/cli/destroy.md @@ -20,9 +20,9 @@ docker run -it --rm docker/dtr \ This command forcefully removes all containers and volumes associated with a DTR replica without notifying the rest of the cluster. Use this command -on all replicas uninstall DTR. +on all replicas to uninstall DTR. -Use the 'remove' command to gracefully scale down your DTR cluster. +Use `docker/dtr remove` to gracefully scale down your DTR cluster. ## Options diff --git a/datacenter/ucp/1.1/release_notes.md b/datacenter/ucp/1.1/release_notes.md index 1586de609e..1cc0707fb3 100644 --- a/datacenter/ucp/1.1/release_notes.md +++ b/datacenter/ucp/1.1/release_notes.md @@ -14,10 +14,10 @@ upgrade your installation to the latest release. (18 Jan 2017) -Note: UCP 1.1.6 supports Docker Engine 1.12 but does not use the built-in -orchestration capabilities provided by the Docker Engine with swarm mode enabled. 
-When installing this UCP version on a Docker Engine 1.12 host, UCP creates a -cluster using the older Docker Swarm v1.2. +> **Note**: UCP 1.1.6 supports Docker Engine 1.12 but does not use the built-in +> orchestration capabilities provided by the Docker Engine with swarm mode enabled. +> When installing this UCP version on a Docker Engine 1.12 host, UCP creates a +> cluster using the older Docker Swarm v1.2. **Security Update** @@ -41,10 +41,10 @@ the [permissions levels section](user-management/permission-levels.md) for more (8 Dec 2016) -Note: UCP 1.1.5 supports Docker Engine 1.12 but does not use the built-in -orchestration capabilities provided by the Docker Engine with swarm mode enabled. -When installing this UCP version on a Docker Engine 1.12 host, UCP creates a -cluster using the older Docker Swarm v1.2. +> **Note**: UCP 1.1.5 supports Docker Engine 1.12 but does not use the built-in +> orchestration capabilities provided by the Docker Engine with swarm mode enabled. +> When installing this UCP version on a Docker Engine 1.12 host, UCP creates a +> cluster using the older Docker Swarm v1.2. **Bug fixes** @@ -61,10 +61,10 @@ the authentication process. (29 Sept 2016) -Note: UCP 1.1.4 supports Docker Engine 1.12 but does not use the built-in -orchestration capabilities provided by the Docker Engine with swarm mode enabled. -When installing this UCP version on a Docker Engine 1.12 host, UCP creates a -cluster using Docker Swarm v1.2.5. +> **Note**: UCP 1.1.4 supports Docker Engine 1.12 but does not use the built-in +> orchestration capabilities provided by the Docker Engine with swarm mode enabled. +> When installing this UCP version on a Docker Engine 1.12 host, UCP creates a +> cluster using Docker Swarm v1.2.5. **Bug fixes** @@ -76,10 +76,10 @@ organization accounts ## Version 1.1.3 -Note: UCP 1.1.3 supports Docker Engine 1.12 but does not use the built-in -orchestration capabilities provided by the Docker Engine with swarm mode enabled. 
-When installing this UCP version on a Docker Engine 1.12 host, UCP creates a -cluster using Docker Swarm v1.2.5. +> **Note**: UCP 1.1.3 supports Docker Engine 1.12 but does not use the built-in +> orchestration capabilities provided by the Docker Engine with swarm mode enabled. +> When installing this UCP version on a Docker Engine 1.12 host, UCP creates a +> cluster using Docker Swarm v1.2.5. **Security Update** @@ -125,9 +125,9 @@ enabled, and is not compatible with swarm-mode based APIs, e.g. `docker service` ## Version 1.1.2 -Note: UCP 1.1.2 supports Docker Engine 1.12 but doesn't use the new clustering -capabilities provided by the Docker swarm mode. When installing this UCP version -on a Docker Engine 1.12, UCP creates a "classic" Docker Swarm 1.2.3 cluster. +> **Note**: UCP 1.1.2 supports Docker Engine 1.12 but doesn't use the new clustering +> capabilities provided by the Docker swarm mode. When installing this UCP version +> on a Docker Engine 1.12, UCP creates a "classic" Docker Swarm 1.2.3 cluster. **Features** diff --git a/datacenter/ucp/2.2/guides/admin/monitor-and-troubleshoot/index.md b/datacenter/ucp/2.2/guides/admin/monitor-and-troubleshoot/index.md index 6fc18596ce..e8e600ed38 100644 --- a/datacenter/ucp/2.2/guides/admin/monitor-and-troubleshoot/index.md +++ b/datacenter/ucp/2.2/guides/admin/monitor-and-troubleshoot/index.md @@ -63,8 +63,6 @@ might be serving your request. Make sure you're connecting directly to the URL of a manager node, and not a load balancer. In addition, pinging the endpoint with a `HEAD` results in a 404 error code. Use a `GET` request instead. 
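The advice above can be sketched as a quick check from a terminal. This is a minimal illustration, assuming the standard `/_ping` health endpoint and a placeholder hostname; substitute the address of one of your own manager nodes:

```bash
# Point at a specific manager node, not a load balancer; the hostname
# below is a placeholder.
MANAGER_HOST="ucp-manager.example.com"
HEALTH_URL="https://${MANAGER_HOST}/_ping"

echo "Checking ${HEALTH_URL}"
# Uncomment to run the check for real. curl sends a GET by default;
# a HEAD request (curl -I) against this endpoint returns a 404.
# curl --silent --insecure "${HEALTH_URL}"
```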
- - ## Where to go next * [Troubleshoot with logs](troubleshoot-with-logs.md) diff --git a/datacenter/ucp/2.2/guides/user/services/use-domain-names-to-access-services.md b/datacenter/ucp/2.2/guides/user/services/use-domain-names-to-access-services.md index 20bc8c6d6c..81a13d9361 100644 --- a/datacenter/ucp/2.2/guides/user/services/use-domain-names-to-access-services.md +++ b/datacenter/ucp/2.2/guides/user/services/use-domain-names-to-access-services.md @@ -194,7 +194,8 @@ apply two labels to your service: com.docker.ucp.mesh.http.1=external_route=http://example.org,redirect=https://example.org com.docker.ucp.mesh.http.2=external_route=sni://example.org ``` -Note: It is not possible to redirect HTTPS to HTTP. + +> **Note**: It is not possible to redirect HTTPS to HTTP. ### X-Forwarded-For header diff --git a/datacenter/ucp/3.0/guides/admin/backups-and-disaster-recovery.md b/datacenter/ucp/3.0/guides/admin/backups-and-disaster-recovery.md index 58bb999f5b..688f6d6b98 100644 --- a/datacenter/ucp/3.0/guides/admin/backups-and-disaster-recovery.md +++ b/datacenter/ucp/3.0/guides/admin/backups-and-disaster-recovery.md @@ -41,6 +41,17 @@ As part of your backup policy you should regularly create backups of UCP. DTR is backed up independently. [Learn about DTR backups and recovery](../../../../dtr/2.3/guides/admin/backups-and-disaster-recovery.md). 
+> **Warning**: On UCP versions 3.0.0 - 3.0.7, before performing a UCP backup, you must clean up multiple `/dev/shm` mounts in the `ucp-kubelet` entrypoint script by running the following script on all nodes via a cron job: + +``` +SHM_MOUNT=$(grep -m1 '^tmpfs./dev/shm' /proc/mounts) +while [ $(grep -cm2 '^tmpfs./dev/shm' /proc/mounts) -gt 1 ]; do + sudo umount /dev/shm +done +grep -q '^tmpfs./dev/shm' /proc/mounts || sudo mount "${SHM_MOUNT}" +``` +For additional details, refer to [Docker KB000934](https://success.docker.com/article/more-than-one-dev-shm-mount-in-the-host-namespace){: target="_blank"}. + To create a UCP backup, run the `{{ page.ucp_org }}/{{ page.ucp_repo }}:{{ page.ucp_version }} backup` command on a single UCP manager. This command creates a tar archive with the contents of all the [volumes used by UCP](../architecture.md) to persist data diff --git a/datacenter/ucp/3.0/guides/admin/install/plan-installation.md index 04ea8b586c..98e56c29df 100644 --- a/datacenter/ucp/3.0/guides/admin/install/plan-installation.md +++ b/datacenter/ucp/3.0/guides/admin/install/plan-installation.md @@ -40,6 +40,10 @@ Docker UCP requires each node on the cluster to have a static IP address. Before installing UCP, ensure your network and nodes are configured to support this. +## Avoid IP range conflicts + +The `service-cluster-ip-range` Kubernetes API Server flag is currently set to `10.96.0.0/16` and cannot be changed. + ## Time synchronization In distributed systems like Docker UCP, time synchronization is critical diff --git a/datacenter/ucp/3.0/guides/admin/install/upgrade.md index 243c401039..47a6b2271d 100644 --- a/datacenter/ucp/3.0/guides/admin/install/upgrade.md +++ b/datacenter/ucp/3.0/guides/admin/install/upgrade.md @@ -22,7 +22,7 @@ impact to your users. Don't make changes to UCP configurations while you're upgrading.
This can lead to misconfigurations that are difficult to troubleshoot. -> Note: If you are upgrading a cluster to UCP 3.0.2 or higher on Microsoft +> **Note**: If you are upgrading a cluster to UCP 3.0.2 or higher on Microsoft > Azure then please ensure all of the Azure [prerequisites](install-on-azure.md#azure-prerequisites) > are met. diff --git a/datacenter/ucp/3.0/guides/admin/monitor-and-troubleshoot/index.md index 6fc18596ce..2eee480e0a 100644 --- a/datacenter/ucp/3.0/guides/admin/monitor-and-troubleshoot/index.md +++ b/datacenter/ucp/3.0/guides/admin/monitor-and-troubleshoot/index.md @@ -64,6 +64,10 @@ URL of a manager node, and not a load balancer. In addition, pinging the endpoint with a `HEAD` results in a 404 error code. Use a `GET` request instead. +## Monitoring disk usage + +Web UI disk usage metrics, including free space, only reflect the Docker-managed portion of the filesystem: `/var/lib/docker`. To monitor the total space available on each filesystem of a UCP worker or manager, you must deploy a third-party monitoring solution to monitor the operating system. + ## Where to go next diff --git a/datacenter/ucp/3.0/guides/user/services/use-domain-names-to-access-services.md index 4f70c76168..f7be26d1dc 100644 --- a/datacenter/ucp/3.0/guides/user/services/use-domain-names-to-access-services.md +++ b/datacenter/ucp/3.0/guides/user/services/use-domain-names-to-access-services.md @@ -187,7 +187,8 @@ apply two labels to your service: com.docker.ucp.mesh.http.1=external_route=http://example.org,redirect=https://example.org com.docker.ucp.mesh.http.2=external_route=sni://example.org ``` -Note: It is not possible to redirect HTTPS to HTTP. + +> **Note**: It is not possible to redirect HTTPS to HTTP.
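For illustration, the two labels shown above could be attached when creating a service from the CLI. This is a sketch with placeholder service and image names, not part of the documented example:

```bash
# Label 1 routes plain HTTP for example.org and redirects it to HTTPS;
# label 2 passes TLS connections for example.org through using SNI.
LABEL_HTTP="com.docker.ucp.mesh.http.1=external_route=http://example.org,redirect=https://example.org"
LABEL_SNI="com.docker.ucp.mesh.http.2=external_route=sni://example.org"

echo "$LABEL_HTTP"
echo "$LABEL_SNI"

# Run against a UCP cluster to take effect (placeholder name and image):
# docker service create --name demo-web \
#   --label "$LABEL_HTTP" \
#   --label "$LABEL_SNI" \
#   nginx
```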
### X-Forwarded-For header diff --git a/datacenter/ucp/3.0/reference/cli/install.md b/datacenter/ucp/3.0/reference/cli/install.md index 79ee12a366..2c562d90b8 100644 --- a/datacenter/ucp/3.0/reference/cli/install.md +++ b/datacenter/ucp/3.0/reference/cli/install.md @@ -63,9 +63,9 @@ command. | `--swarm-port` | Port for the Docker Swarm manager. Used for backwards compatibility | | `--swarm-grpc-port` | Port for communication between nodes | | `--cni-installer-url` | A URL pointing to a Kubernetes YAML file to be used as an installer for the CNI plugin of the cluster. If specified, the default CNI plugin is not installed. If the URL uses the HTTPS scheme, no certificate verification is performed. | - | `--pod-cidr` | Kubernetes cluster IP pool for the pods to allocated IPs from (Default: 192.168.0.0/16) | | `--cloud-provider` | The cloud provider for the cluster | +| `--skip-cloud-provider` | Disables checks that rely on detecting the cloud provider (if any) on which the cluster is currently running. | | `--dns` | Set custom DNS servers for the UCP containers | | `--dns-opt` | Set DNS options for the UCP containers | | `--dns-search` | Set custom DNS search domains for the UCP containers | @@ -80,7 +80,8 @@ command. | `--swarm-experimental` | Enable Docker Swarm experimental features. Used for backwards compatibility | | `--disable-tracking` | Disable anonymous tracking and analytics | | `--disable-usage` | Disable anonymous usage reporting | -| `--external-server-cert` | Use the certificates in the `ucp-controller-server-certs` volume instead of generating self-signed certs during installation | +| `--external-server-cert` | Use the certificates in the `ucp-controller-server-certs` volume instead of generating self-signed certs during installation +| | `--preserve-certs` | Don't generate certificates if they already exist | | `--binpack` | Set the Docker Swarm scheduler to binpack mode. 
Used for backwards compatibility | | `--random` | Set the Docker Swarm scheduler to random mode. Used for backwards compatibility | diff --git a/develop/index.md b/develop/index.md index d07d2a462d..142b1fb173 100644 --- a/develop/index.md +++ b/develop/index.md @@ -29,4 +29,4 @@ most benefits from Docker. ## Advanced development with the SDK or API -After you can write Dockerfiles or Compose files and use Docker CLI, take it to the next level by using Docker Engine SDK for Go/Python or use HTTP API directly. +After you can write Dockerfiles or Compose files and use Docker CLI, take it to the next level by using Docker Engine SDK for Go/Python or use the HTTP API directly. diff --git a/docker-for-aws/index.md b/docker-for-aws/index.md index b88d8ac21a..5771e08560 100644 --- a/docker-for-aws/index.md +++ b/docker-for-aws/index.md @@ -27,32 +27,11 @@ one year**. If your account [has the proper permissions](/docker-for-aws/iam-permissions.md), you can -use the blue button from the stable or edge channel to bootstrap Docker for AWS -using CloudFormation. For more about stable and edge channels, see the -[FAQs](/docker-for-aws/faqs.md#stable-and-edge-channels). +use the blue button to bootstrap Docker for AWS +using CloudFormation. - - - - - - - - - - - - - -
Stable channelEdge channel
This deployment is fully baked and tested, and comes with the latest CE version of Docker.

This is the best channel to use if you want a reliable platform to work with.

Stable is released quarterly and is for users that want an easier-to-maintain release pace.
This deployment offers cutting edge features of the CE version of Docker and comes with experimental features turned on, described in the Docker Experimental Features README on GitHub. (Adjust the branch or tag in the URL to match your version.)

This is the best channel to use if you want to experiment with features under development, and can weather some instability and bugs. Edge is for users wanting a drop of the latest and greatest features every month.

We collect usage data on edges across the board.
- {{aws_blue_latest}} - {{aws_blue_vpc_latest}} - - {{aws_blue_edge}} - {{aws_blue_vpc_edge}} -
- -> **Note* During stable channel updates, edge channel will have the same release (unless it's a patch release) +{{aws_blue_latest}} +{{aws_blue_vpc_latest}} ### Deployment options diff --git a/docker-for-azure/faqs.md b/docker-for-azure/faqs.md index 142a29b1dc..8bee5ab671 100644 --- a/docker-for-azure/faqs.md +++ b/docker-for-azure/faqs.md @@ -90,6 +90,8 @@ All regions can be found here: [Microsoft Azure Regions](https://azure.microsoft An excerpt of the above regions to use when you create your service principal are: ```none +australiacentral +australiacentral2 australiaeast australiasoutheast brazilsouth @@ -100,6 +102,8 @@ centralus eastasia eastus eastus2 +francecentral +francesouth japaneast japanwest koreacentral @@ -111,8 +115,8 @@ southeastasia southindia uksouth ukwest -usgovvirginia usgoviowa +usgovvirginia westcentralus westeurope westindia diff --git a/docker-for-azure/index.md b/docker-for-azure/index.md index 61c4d28df6..59c59933b4 100644 --- a/docker-for-azure/index.md +++ b/docker-for-azure/index.md @@ -20,29 +20,9 @@ This deployment is fully baked and tested, and comes with the latest Enterprise ### Quickstart If your account has the [proper permissions](#prerequisites), you can generate the [Service Principal](#service-principal) and -then choose from the stable or edge channel to bootstrap Docker for Azure using Azure Resource Manager. -For more about stable and edge channels, see the [FAQs](/docker-for-azure/faqs.md#stable-and-edge-channels). - - - - - - - - - - - - - -
Stable channelEdge channel
This deployment is fully baked and tested, and comes with the latest CE version of Docker.

This is the best channel to use if you want a reliable platform to work with.

Stable is released quarterly and is for users that want an easier-to-maintain release pace.
This deployment offers cutting edge features of the CE version of Docker and comes with experimental features turned on, described in the Docker Experimental Features README on GitHub. (Adjust the branch or tag in the URL to match your version.)

This is the best channel to use if you want to experiment with features under development, and can weather some instability and bugs. Edge is for users wanting a drop of the latest and greatest features every month

We collect usage data on edges across the board.
- {{azure_blue_latest}} - - {{azure_blue_edge}} -
- -> **Note* During stable channel updates, edge channel will have the same release (unless it's a patch release) +then bootstrap Docker for Azure using Azure Resource Manager. +{{azure_blue_latest}} ### Prerequisites diff --git a/docker-for-ibm-cloud.md b/docker-for-ibm-cloud.md index 4de47fb7a5..f5f37fb2dc 100644 --- a/docker-for-ibm-cloud.md +++ b/docker-for-ibm-cloud.md @@ -19,6 +19,7 @@ redirect_from: - /docker-for-ibm-cloud/release-notes/ - /docker-for-ibm-cloud/scaling/ - /docker-for-ibm-cloud/why/ + - /v17.12/docker-for-ibm-cloud/quickstart/ --- Docker for IBM Cloud has been replaced by diff --git a/docker-for-mac/edge-release-notes.md b/docker-for-mac/edge-release-notes.md index 1cbf230005..0d5f65247f 100644 --- a/docker-for-mac/edge-release-notes.md +++ b/docker-for-mac/edge-release-notes.md @@ -16,7 +16,31 @@ notes](release-notes) are also available. (Following the CE release model, releases, and download stable and edge product installers at [Download Docker for Mac](install.md#download-docker-for-mac). -## Edge Releases of 2018 +## Edge Releases of 2019 + +### Docker Community Edition 2.0.2.1 2019-02-15 + +[Download](https://download.docker.com/mac/edge/31274/Docker.dmg) + +* Upgrades + - [Docker 18.09.2](https://github.com/docker/docker-ce/releases/tag/v18.09.2), fixes [CVE-2019-5736](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-5736) + +### Docker Community Edition 2.0.2.0 2019-02-06 + +[Download](https://download.docker.com/mac/edge/30972/Docker.dmg) + +* Upgrades + - [Docker Compose 1.24.0-rc1](https://github.com/docker/compose/releases/tag/1.24.0-rc1) + - [Docker Machine 0.16.1](https://github.com/docker/machine/releases/tag/v0.16.1) + - [Compose on Kubernetes 0.4.18](https://github.com/docker/compose-on-kubernetes/releases/tag/v0.4.18) + +* New + - Rebranded UI + +* Bug fixes and minor changes + - Kubernetes: use default maximum number of pods for kubelet. 
[docker/for-mac#3453](https://github.com/docker/for-mac/issues/3453) + - Fix DockerHelper crash. [docker/for-mac#3470](https://github.com/docker/for-mac/issues/3470) + - Fix binding of privileged ports with specified IP. [docker/for-mac#3464](https://github.com/docker/for-mac/issues/3464) ### Docker Community Edition 2.0.1.0 2019-01-11 @@ -38,6 +62,8 @@ for Mac](install.md#download-docker-for-mac). - Rename Docker for Mac to Docker Desktop - Partially open services ports if possible. [docker/for-mac#3438](https://github.com/docker/for-mac/issues/3438) +## Edge Releases of 2018 + ### Docker Community Edition 2.0.0.0-mac82 2018-12-07 [Download](https://download.docker.com/mac/edge/29268/Docker.dmg) diff --git a/docker-for-mac/images/settings-disk.png b/docker-for-mac/images/settings-disk.png new file mode 100644 index 0000000000..e025b16e15 Binary files /dev/null and b/docker-for-mac/images/settings-disk.png differ diff --git a/docker-for-mac/index.md b/docker-for-mac/index.md index 7d4dcde633..3032e0ce5a 100644 --- a/docker-for-mac/index.md +++ b/docker-for-mac/index.md @@ -412,9 +412,9 @@ $ security add-trusted-cert -d -r trustRoot -k ~/Library/Keychains/login.keychai See also, [Directory structures for certificates](#directory-structures-for-certificates). -> **Note:** You need to restart Docker Desktop for Mac after making any changes to the -keychain or to the `~/.docker/certs.d` directory in order for the changes to -take effect. +> **Note**: You need to restart Docker Desktop for Mac after making any changes to the +> keychain or to the `~/.docker/certs.d` directory in order for the changes to +> take effect. 
For a complete explanation of how to do this, see the blog post [Adding Self-signed Registry Certs to Docker & Docker Desktop for diff --git a/docker-for-mac/multi-arch.md b/docker-for-mac/multi-arch.md index 78132c1f12..bb248844e3 100644 --- a/docker-for-mac/multi-arch.md +++ b/docker-for-mac/multi-arch.md @@ -6,29 +6,51 @@ redirect_from: title: Leverage multi-CPU architecture support notoc: true --- +Docker images can support multiple architectures, which means that a single +image may contain variants for different architectures, and sometimes for different +operating systems, such as Windows. -Docker Desktop for Mac provides `binfmt_misc` multi architecture support, so you can run -containers for different Linux architectures, such as `arm`, `mips`, `ppc64le`, -and even `s390x`. +When running an image with multi-architecture support, `docker` will +automatically select an image variant which matches your OS and architecture. + +Most of the official images on Docker Hub provide a [variety of architectures](https://github.com/docker-library/official-images#architectures-other-than-amd64). +For example, the `busybox` image supports `amd64`, `arm32v5`, `arm32v6`, +`arm32v7`, `arm64v8`, `i386`, `ppc64le`, and `s390x`. When running this image +on an `x86_64` / `amd64` machine, the `x86_64` variant will be pulled and run, +which can be seen from the output of the `uname -a` command that's run inside +the container: + +```bash +$ docker run busybox uname -a + +Linux 82ef1a0c07a2 4.9.125-linuxkit #1 SMP Fri Sep 7 08:20:28 UTC 2018 x86_64 GNU/Linux +``` + +**Docker Desktop for Mac** provides `binfmt_misc` multi-architecture support, +which means you can run containers for different Linux architectures +such as `arm`, `mips`, `ppc64le`, and even `s390x`. This does not require any special configuration in the container itself as it uses -qemu-static from the Docker for -Mac VM. +qemu-static from the **Docker for +Mac VM**. 
Because of this, you can run containers built for other architectures, like the `arm32v7` or `ppc64le` +variants of the busybox image: -You can run an ARM container, like the -resin arm builds: - -``` -$ docker run resin/armv7hf-debian uname -a - -Linux 7ed2fca7a3f0 4.1.12 #1 SMP Tue Jan 12 10:51:00 UTC 2016 armv7l GNU/Linux - -$ docker run justincormack/ppc64le-debian uname -a - -Linux edd13885f316 4.1.12 #1 SMP Tue Jan 12 10:51:00 UTC 2016 ppc64le GNU/Linux +### arm32v7 variant +```bash +$ docker run arm32v7/busybox uname -a +Linux 9e3873123d09 4.9.125-linuxkit #1 SMP Fri Sep 7 08:20:28 UTC 2018 armv7l GNU/Linux ``` -Multi architecture support makes it easy to build -multi architecture Docker images or experiment with ARM images and binaries -from your Mac. +### ppc64le variant +```bash +$ docker run ppc64le/busybox uname -a + +Linux 57a073cc4f10 4.9.125-linuxkit #1 SMP Fri Sep 7 08:20:28 UTC 2018 ppc64le GNU/Linux +``` + +Notice that this time, the `uname -a` output shows `armv7l` and +`ppc64le`, respectively. + +Multi-architecture support makes it easy to build multi-architecture Docker images or experiment with ARM images and binaries from your Mac. + diff --git a/docker-for-mac/osxfs.md index 02e3bfac3f..e1e17bdecd 100644 --- a/docker-for-mac/osxfs.md +++ b/docker-for-mac/osxfs.md @@ -63,7 +63,7 @@ By default, you can share files in `/Users/`, `/Volumes/`, `/private/`, and `/tmp` directly. To add or remove directory trees that are exported to Docker, use the **File sharing** tab in Docker preferences ![whale menu](images/whale-x.png){: .inline} -> **Preferences** -> -**File sharing**. (See [Preferences](index.md#preferences).) +**File sharing**. (See [Preferences](/docker-for-mac/index.md#preferences-menu).)
All other paths used in `-v` bind mounts are sourced from the Moby Linux VM running the Docker diff --git a/docker-for-mac/release-notes.md b/docker-for-mac/release-notes.md index 52d6f3fd16..bb6de1d393 100644 --- a/docker-for-mac/release-notes.md +++ b/docker-for-mac/release-notes.md @@ -20,6 +20,13 @@ Desktop for Mac](install.md#download-docker-for-mac). ## Stable Releases of 2019 +### Docker Community Edition 2.0.0.3 2019-02-15 + +[Download](https://download.docker.com/mac/stable/31259/Docker.dmg) + +* Upgrades + - [Docker 18.09.2](https://github.com/docker/docker-ce/releases/tag/v18.09.2), fixes [CVE-2019-5736](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-5736) + ### Docker Community Edition 2.0.0.2 2019-01-16 [Download](https://download.docker.com/mac/stable/30215/Docker.dmg) diff --git a/docker-for-mac/space.md b/docker-for-mac/space.md new file mode 100644 index 0000000000..61ec635d09 --- /dev/null +++ b/docker-for-mac/space.md @@ -0,0 +1,94 @@ +--- +description: Disk utilization +keywords: mac, disk +title: Disk utilization in Docker for Mac +--- + +Docker for Mac stores Linux containers and images in a single, large "disk image" file +in the Mac filesystem. This is different from Docker on Linux, which usually stores containers +and images in the `/var/lib/docker` directory. + +## Where is the "disk image" file? + +To locate the "disk image" file, first select the whale menu icon and then select +**Preferences...**. When the **Preferences...** window is displayed, select **Disk** and then **Reveal in Finder**: + +![Disk preferences](images/settings-disk.png) + +The **Preferences...** window shows how much actual disk space the "disk image" file is consuming. +In this example, the "disk image" file is consuming 2.4 GB out of a maximum of 64 GB. + +Note that other tools might display space usage of the file in terms of the maximum file size, not the actual file size. 
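The gap between maximum and actual size is a general property of sparse files, not something specific to Docker. As a generic illustration using a throwaway file (not the real disk image):

```bash
# Create a 1 MiB sparse file: the size is set but no data blocks are
# written, so apparent size and on-disk usage differ.
dd if=/dev/null of=/tmp/sparse-demo.img bs=1 count=0 seek=1048576 2>/dev/null

APPARENT=$(wc -c < /tmp/sparse-demo.img)  # 1048576 bytes
ls -lh /tmp/sparse-demo.img               # apparent size (what many tools show)
du -h /tmp/sparse-demo.img                # actual space used, typically 0

rm /tmp/sparse-demo.img
```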
+ +## If the file is too big + +If the file is too big, you can +- move it to a bigger drive, +- delete unnecessary containers and images, or +- reduce the maximum allowable size of the file. + +### Move the file to a bigger drive + +To move the file, open the **Preferences...** menu, select **Disk** and then select +**Move disk image**. Do not move the file directly in the Finder or Docker for Mac will +lose track of it. + +### Delete unnecessary containers and images + +To check whether you have too many unnecessary containers and images: + +If your client and daemon API are version 1.25 or later (use the `docker version` command on the client to check your client and daemon API versions), you can display detailed space usage information with: + +```bash +docker system df -v +``` + +Alternatively, you can list images with: +```bash +$ docker image ls +``` +and then list containers with: +```bash +$ docker container ls -a +``` + +If there are lots of unneeded objects, try the command: +```bash +$ docker system prune +``` +This removes all stopped containers, unused networks, dangling images, and build cache. + +Note that it might take a few minutes before space becomes free on the host, depending +on what format the "disk image" file is in: +- If the file is named `Docker.raw`: space on the host should be reclaimed within a few + seconds. +- If the file is named `Docker.qcow2`: space will be freed by a background process after + a few minutes. + +Note that space is only freed when images are deleted. Space is not freed automatically +when files are deleted inside running containers. To trigger a space reclamation at any +point, use the command: + +```bash +$ docker run --privileged --pid=host justincormack/nsenter1 /sbin/fstrim /var/lib/docker +``` + +Note that many tools will report the maximum file size, not the actual file size.
+To query the actual size of the file on the host from a terminal, use: +```bash +$ cd ~/Library/Containers/com.docker.docker/Data +$ cd vms/0 # or com.docker.driver.amd64-linux +$ ls -klsh Docker.raw +2333548 -rw-r--r--@ 1 akim staff 64G Dec 13 17:42 Docker.raw +``` +In this example, the actual size of the disk is `2333548` KB, whereas the maximum size +of the disk is `64` GB. + +### Reduce the maximum size of the file + +To reduce the maximum size of the file, select the whale menu icon and then select +**Preferences...**. When the **Preferences...** window is displayed, select **Disk**. +The **Disk** window contains a slider that allows the maximum disk size to be set. +**Warning**: If the maximum size is reduced, the current file will be deleted and, therefore, all +containers and images will be lost. + diff --git a/docker-for-mac/troubleshoot.md b/docker-for-mac/troubleshoot.md index 3649cca2b0..a858f1eb76 100644 --- a/docker-for-mac/troubleshoot.md +++ b/docker-for-mac/troubleshoot.md @@ -112,9 +112,7 @@ Docker logs. The Console lives in `/Applications/Utilities`; you can search for it with Spotlight Search. -To read the Docker app log messages, in the top left corner of the window, type -"docker" and press Enter. Then select the "Any" button that appeared on its -left, and select "Process" instead. +To read the Docker app log messages, type `docker` in the Console window search bar and press Enter. Then select `ANY` to expand the drop-down list next to your `docker` search entry, and select `Process`. ![Mac Console search for Docker app](images/console.png) diff --git a/docker-for-windows/edge-release-notes.md b/docker-for-windows/edge-release-notes.md index 98f39b47c2..28c3ca60fa 100644 --- a/docker-for-windows/edge-release-notes.md +++ b/docker-for-windows/edge-release-notes.md @@ -16,7 +16,29 @@ notes](release-notes) are also available. 
(Following the CE release model, releases, and download stable and edge product installers at [Download Docker for Windows](install.md#download-docker-for-windows). -## Edge Releases of 2018 +## Edge Releases of 2019 + +### Docker Community Edition 2.0.2.1 2019-02-15 + +[Download](https://download.docker.com/win/edge/31274/Docker%20Desktop%20Installer.exe) + +* Upgrades + - [Docker 18.09.2](https://github.com/docker/docker-ce/releases/tag/v18.09.2), fixes [CVE-2019-5736](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-5736) + +### Docker Community Edition 2.0.2.0 2019-02-06 + +[Download](https://download.docker.com/win/edge/30972/Docker%20Desktop%20Installer.exe) + +* Upgrades + - [Docker Compose 1.24.0-rc1](https://github.com/docker/compose/releases/tag/1.24.0-rc1) + - [Docker Machine 0.16.1](https://github.com/docker/machine/releases/tag/v0.16.1) + - [Compose on Kubernetes 0.4.18](https://github.com/docker/compose-on-kubernetes/releases/tag/v0.4.18) + +* New + - Rebranded UI + +* Bug fixes and minor changes + - Kubernetes: use default maximum number of pods for kubelet. [docker/for-mac#3453](https://github.com/docker/for-mac/issues/3453) ### Docker Community Edition 2.0.1.0 2019-01-11 @@ -39,6 +61,8 @@ for Windows](install.md#download-docker-for-windows). - Quit will not check if service is running anymore - Fix UI lock when changing kubernetes state +## Edge Releases of 2018 + ### Docker Community Edition 2.0.0.0-win82 2018-12-07 [Download](https://download.docker.com/win/edge/29268/Docker%20for%20Windows%20Installer.exe) diff --git a/docker-for-windows/install.md b/docker-for-windows/install.md index 9b5b769e81..b1fd76f684 100644 --- a/docker-for-windows/install.md +++ b/docker-for-windows/install.md @@ -54,7 +54,7 @@ Hub](https://hub.docker.com/editions/community/docker-ce-desktop-windows){: Looking for information on using Windows containers? 
* [Switch between Windows and Linux - containers](https://docs.docker.com/docker-for-windows/#switch-between-windows-and-linux-containers) + containers](/docker-for-windows/index.md#switch-between-windows-and-linux-containers) describes the Linux / Windows containers toggle in Docker Desktop for Windows and points you to the tutorial mentioned above. * [Getting Started with Windows Containers @@ -99,7 +99,7 @@ accessible from any terminal window. If the whale is hidden in the Notifications area, click the up arrow on the taskbar to show it. To learn more, see [Docker -Settings](index.md#docker-settings-dialog). +Settings](/docker-for-windows/index.md#docker-settings-dialog). If you just installed the app, you also get a popup success message with suggested next steps, and a link to this documentation. diff --git a/docker-for-windows/release-notes.md b/docker-for-windows/release-notes.md index 13527bfb0a..006a63c25b 100644 --- a/docker-for-windows/release-notes.md +++ b/docker-for-windows/release-notes.md @@ -20,6 +20,16 @@ for Windows](install.md#download-docker-for-windows). ## Stable Releases of 2019 +### Docker Community Edition 2.0.0.3 2019-02-15 + +[Download](https://download.docker.com/win/stable/31259/Docker%20for%20Windows%20Installer.exe) + +* Upgrades + - [Docker 18.09.2](https://github.com/docker/docker-ce/releases/tag/v18.09.2), fixes [CVE-2019-5736](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-5736) + +* Bug fix + - Fix crash in system tray menu when the Hub login fails or Air gap mode + ### Docker Community Edition 2.0.0.2 2019-01-16 [Download](https://download.docker.com/win/stable/30215/Docker%20for%20Windows%20Installer.exe) diff --git a/docker-hub/builds/index.md b/docker-hub/builds/index.md index e773c20e8e..0b128ec895 100644 --- a/docker-hub/builds/index.md +++ b/docker-hub/builds/index.md @@ -129,9 +129,9 @@ For each source: * Specify the **Dockerfile location** as a path relative to the root of the source code repository. 
(If the Dockerfile is at the repository root, leave this path set to `/`.) -> **Note:** When Docker Hub pulls a branch from a source code repository, it performs -a shallow clone (only the tip of the specified branch). Refer to [Advanced options for Autobuild and Autotest](advanced.md) -for more information. +> **Note**: When Docker Hub pulls a branch from a source code repository, it performs +> a shallow clone (only the tip of the specified branch). Refer to [Advanced options for Autobuild and Autotest](advanced.md) +> for more information. ### Environment variables for builds diff --git a/docker-hub/orgs.md b/docker-hub/orgs.md index 1244af347f..72ba707b37 100644 --- a/docker-hub/orgs.md +++ b/docker-hub/orgs.md @@ -13,7 +13,7 @@ Docker Hub Organizations let you create teams so you can give your team access t - **Organizations** are a collection of teams and repositories that can be managed together. - **Teams** are groups of Docker Hub users that belong to your organization. -**Note:** in Docker Hub, users cannot be associated directly to an organization. They below only to teams within an the organization. +> **Note**: in Docker Hub, users cannot be associated directly to an organization. They belong only to teams within an organization. ### Creating an organization @@ -48,7 +48,7 @@ To create a team: 2. Click on **Add User** 3. Provide the user's Docker ID username _or_ email to add them to the team ![Add User to Team](images/orgs-team-add-user.png) -**Note:** you are not automatically added to teams created by your. +> **Note**: you are not automatically added to teams created by your organization. ### Removing team members @@ -58,7 +58,7 @@ To remove a member from a team, click the **x** next to their name: ### Giving a team access to a repository -To provide a team to access a repository: +To provide a team access to a repository: 1. Visit the repository list on Docker Hub by clicking on **Repositories** 2. 
Select your organization in the namespace dropdown list diff --git a/docker-hub/publish/certify-images.md b/docker-hub/publish/certify-images.md index ab96a2045a..3cc5e6dc8f 100644 --- a/docker-hub/publish/certify-images.md +++ b/docker-hub/publish/certify-images.md @@ -466,11 +466,12 @@ root:[~/] # root:[~/] # ./inspectDockerImage --json gforghetti/apache:latest | jq ``` -Note: The output was piped to the **jq** command to display it "nicely". + +> **Note**: The output was piped to the `jq` command to display it "nicely". #### Output: -``` +```json { "Date": "Mon May 21 13:23:37 2018", "SystemOperatingSystem": "Operating System: Ubuntu 16.04.4 LTS", @@ -580,7 +581,6 @@ Note: The output was piped to the **jq** command to display it "nicely". } ] } -root:[~/] # ``` diff --git a/docker-hub/publish/certify-plugins-logging.md b/docker-hub/publish/certify-plugins-logging.md index fd806ae7bd..69cb248e27 100644 --- a/docker-hub/publish/certify-plugins-logging.md +++ b/docker-hub/publish/certify-plugins-logging.md @@ -364,12 +364,11 @@ gforghetti:~/$ gforghetti:~:$ ./inspectDockerLoggingPlugin --json gforghetti/docker-log-driver-test:latest | jq ``` -> Note: The output was piped to the **jq** command to display it "nicely". +> **Note**: The output was piped to the `jq` command to display it "nicely". #### Output: - -``` +```json { "Date": "Mon May 21 14:38:28 2018", "SystemOperatingSystem": "Operating System: Ubuntu 16.04.4 LTS", diff --git a/ee/backup.md b/ee/backup.md index a370906af1..c6cb6d70a9 100644 --- a/ee/backup.md +++ b/ee/backup.md @@ -11,7 +11,7 @@ for each of the following components: 1. Docker Swarm. [Backup Swarm resources like service and network definitions](/engine/swarm/admin_guide.md#back-up-the-swarm). 2. Universal Control Plane (UCP). [Backup UCP configurations](/ee/ucp/admin/backups-and-disaster-recovery.md). -3. Docker Trusted Registry (DTR). [Backup DTR configurations and images](/ee/dtr/admin/disaster-recovery/index.md). +3. 
Docker Trusted Registry (DTR). [Backup DTR configurations and images](/ee/dtr/admin/disaster-recovery/create-a-backup.md). Before proceeding to backup the next component, you should test the backup you've created to make sure it's not corrupt. One way to test your backups is to do @@ -30,10 +30,10 @@ swarm and join new ones to bring the swarm to an healthy state. To restore Docker Enterprise Edition, you need to restore the individual components one by one: -1. Docker Engine. [Learn more](/engine/swarm/admin_guide.md#recover-from-disaster). +1. Docker Swarm. [Learn more](/engine/swarm/admin_guide.md#recover-from-disaster). 2. Universal Control Plane (UCP). [Learn more](/ee/ucp/admin/backups-and-disaster-recovery.md#restore-your-swarm). -3. Docker Trusted Registry (DTR). [Learn more](/ee/dtr/admin/disaster-recovery/index.md). +3. Docker Trusted Registry (DTR). [Learn more](/ee/dtr/admin/disaster-recovery/restore-from-backup.md). ## Where to go next -- [Upgrade Docker EE](upgrade.md) \ No newline at end of file +- [Upgrade Docker EE](upgrade.md) diff --git a/ee/dtr/admin/configure/deploy-caches/simple-kube.md b/ee/dtr/admin/configure/deploy-caches/simple-kube.md index cd9b70d2a1..0236edb6bc 100644 --- a/ee/dtr/admin/configure/deploy-caches/simple-kube.md +++ b/ee/dtr/admin/configure/deploy-caches/simple-kube.md @@ -82,7 +82,7 @@ stored in the primary DTR. You can [customize the storage parameters](/registry/configuration/#storage), if you want the cached images to be backended by persistent storage. -> Note: Kubernetes Peristent Volumes or Persistent Volume Claims would have to be +> **Note**: Kubernetes Persistent Volumes or Persistent Volume Claims would have to be > used to provide persistent backend storage capabilities for the cache.
``` diff --git a/ee/dtr/admin/configure/use-a-web-proxy.md b/ee/dtr/admin/configure/use-a-web-proxy.md index 9ea427ec12..56f8c86a51 100644 --- a/ee/dtr/admin/configure/use-a-web-proxy.md +++ b/ee/dtr/admin/configure/use-a-web-proxy.md @@ -38,7 +38,8 @@ docker run -it --rm \ --https-proxy username:password@: \ --ucp-insecure-tls ``` -NOTE: DTR will hide the password portion of the URL, when it is displayed in the DTR UI. + +> **Note**: DTR hides the password portion of the URL when it is displayed in the DTR UI. ## Where to go next diff --git a/ee/dtr/admin/disaster-recovery/repair-a-cluster.md b/ee/dtr/admin/disaster-recovery/repair-a-cluster.md index 3a2a89d700..e6bdf42edb 100644 --- a/ee/dtr/admin/disaster-recovery/repair-a-cluster.md +++ b/ee/dtr/admin/disaster-recovery/repair-a-cluster.md @@ -45,7 +45,7 @@ It also reconfigures DTR removing all other nodes from the cluster, leaving DTR as a single-replica cluster with the replica you chose. Start by finding the ID of the DTR replica that you want to repair from.
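The replica ID always appears as the trailing field of the `dtr-rethink` container name, so the shell pipelines in this section simply cut it off the name. As a standalone sketch (the container name below is a hypothetical example of what `docker inspect -f '{{.Name}}'` might print):

```shell
# DTR containers are named <component>-<replica ID>, so the third
# dash-separated field of the rethink container's name is the replica ID.
name='/dtr-rethink-5eb9459a7832'   # hypothetical docker inspect output
REPLICA_ID=$(echo "$name" | cut -f 3 -d '-')
echo "$REPLICA_ID"   # 5eb9459a7832
```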
-You can find the list of replicas by navigating to the UCP web UI, or by using +You can find the list of replicas by navigating to **Shared Resources > Stacks** or **Swarm > Volumes** (when using [swarm mode](/engine/swarm/)) on the UCP web interface, or by using a UCP client bundle to run: {% raw %} @@ -57,6 +57,15 @@ docker ps --format "{{.Names}}" | grep dtr ``` {% endraw %} +Another way to determine the replica ID is to SSH into a DTR node and run the following: + +{% raw %} +```bash +REPLICA_ID=$(docker inspect -f '{{.Name}}' $(docker ps -q -f name=dtr-rethink) | cut -f 3 -d '-') \ +&& echo $REPLICA_ID +``` +{% endraw %} + Then, use your UCP client bundle to run the emergency repair command: ```bash diff --git a/ee/dtr/admin/disaster-recovery/repair-a-single-replica.md b/ee/dtr/admin/disaster-recovery/repair-a-single-replica.md index 5224219ee3..f61c21c13b 100644 --- a/ee/dtr/admin/disaster-recovery/repair-a-single-replica.md +++ b/ee/dtr/admin/disaster-recovery/repair-a-single-replica.md @@ -54,7 +54,7 @@ To remove unhealthy replicas, you'll first have to find the replica ID of one of the replicas you want to keep, and the replica IDs of the unhealthy replicas you want to remove.
-You can find this in the **Stacks** page of the UCP web UI, or by using the UCP +You can find the list of replicas by navigating to **Shared Resources > Stacks** or **Swarm > Volumes** (when using [swarm mode](/engine/swarm/)) on the UCP web interface, or by using the UCP client bundle to run: {% raw %} @@ -66,6 +66,15 @@ docker ps --format "{{.Names}}" | grep dtr ``` {% endraw %} +Another way to determine the replica ID is to SSH into a DTR node and run the following: + +{% raw %} +```bash +REPLICA_ID=$(docker inspect -f '{{.Name}}' $(docker ps -q -f name=dtr-rethink) | cut -f 3 -d '-') \ +&& echo $REPLICA_ID +``` +{% endraw %} + Then use the UCP client bundle to remove the unhealthy replicas: ```bash diff --git a/ee/dtr/admin/install/install-offline.md b/ee/dtr/admin/install/install-offline.md index d32aac0fe7..5c22de8cb6 100644 --- a/ee/dtr/admin/install/install-offline.md +++ b/ee/dtr/admin/install/install-offline.md @@ -44,7 +44,7 @@ For each machine where you want to install DTR: `docker load` command to load the Docker images from the tar archive: ```bash - $ docker load < dtr.tar.gz + $ docker load -i dtr.tar.gz ``` ## Install DTR diff --git a/ee/dtr/admin/monitor-and-troubleshoot/troubleshoot-dtr.md b/ee/dtr/admin/monitor-and-troubleshoot/troubleshoot-dtr.md new file mode 100644 index 0000000000..58d203e04f --- /dev/null +++ b/ee/dtr/admin/monitor-and-troubleshoot/troubleshoot-dtr.md @@ -0,0 +1,243 @@ +--- +title: Troubleshoot Docker Trusted Registry +description: Learn how to troubleshoot your DTR installation. +keywords: registry, monitor, troubleshoot +redirect_from: /ee/dtr/admin/monitor-and-troubleshoot/troubleshoot-with-logs +--- + +This guide contains tips and tricks for troubleshooting DTR problems. + +## Troubleshoot overlay networks + +High availability in DTR depends on swarm overlay networking.
One way to test +if overlay networks are working correctly is to deploy containers to the same +overlay network on different nodes and see if they can ping one another. + +Use SSH to log into a node and run: + +```bash +docker run -it --rm \ + --net dtr-ol --name overlay-test1 \ + --entrypoint sh {{ page.dtr_org }}/{{ page.dtr_repo }} +``` + +Then use SSH to log into another node and run: + +```bash +docker run -it --rm \ + --net dtr-ol --name overlay-test2 \ + --entrypoint ping {{ page.dtr_org }}/{{ page.dtr_repo }} -c 3 overlay-test1 +``` + +If the second command succeeds, it indicates overlay networking is working +correctly between those nodes. + +You can run this test with any attachable overlay network and any Docker image +that has `sh` and `ping`. + + +## Access RethinkDB directly + +DTR uses RethinkDB for persisting data and replicating it across replicas. +It might be helpful to connect directly to the RethinkDB instance running on a +DTR replica to check the DTR internal state. + +> **Warning**: Modifying RethinkDB directly is not supported and may cause +> problems. +{: .warning } + +### via RethinkCLI + +As of v2.5.5, the [RethinkCLI has been removed](/ee/dtr/release-notes/#255) from the RethinkDB image along with other unused components. You can now run RethinkCLI from a separate image in the `dockerhubenterprise` organization. Note that the commands below are using separate tags for non-interactive and interactive modes. + +#### Non-interactive + +Use SSH to log into a node that is running a DTR replica, and run the following: + +{% raw %} +```bash +# List problems in the cluster detected by the current node. 
+REPLICA_ID=$(docker container ls --filter=name=dtr-rethink --format '{{.Names}}' | cut -d'/' -f2 | cut -d'-' -f3 | head -n 1) && echo 'r.db("rethinkdb").table("current_issues")' | docker run --rm -i --net dtr-ol -v "dtr-ca-${REPLICA_ID}:/ca" -e DTR_REPLICA_ID=$REPLICA_ID dockerhubenterprise/rethinkcli:v2.2.0-ni non-interactive +``` +{% endraw %} + +On a healthy cluster the output will be `[]`. + +#### Interactive + +Starting in DTR 2.5.5, you can run RethinkCLI from a separate image. First, set an environment variable for your DTR replica ID: + +```bash +REPLICA_ID=$(docker inspect -f '{{.Name}}' $(docker ps -q -f name=dtr-rethink) | cut -f 3 -d '-') +``` + +RethinkDB stores data in different databases that contain multiple tables. Run the following command to get into interactive mode +and query the contents of the DB: + +```bash +docker run -it --rm --net dtr-ol -v dtr-ca-$REPLICA_ID:/ca dockerhubenterprise/rethinkcli:v2.3.0 $REPLICA_ID +``` + +```none +# List problems in the cluster detected by the current node. 
+> r.db("rethinkdb").table("current_issues") +[] + +# List all the DBs in RethinkDB +> r.dbList() +[ 'dtr2', + 'jobrunner', + 'notaryserver', + 'notarysigner', + 'rethinkdb' ] + +# List the tables in the dtr2 db +> r.db('dtr2').tableList() +[ 'blob_links', + 'blobs', + 'client_tokens', + 'content_caches', + 'events', + 'layer_vuln_overrides', + 'manifests', + 'metrics', + 'namespace_team_access', + 'poll_mirroring_policies', + 'promotion_policies', + 'properties', + 'pruning_policies', + 'push_mirroring_policies', + 'repositories', + 'repository_team_access', + 'scanned_images', + 'scanned_layers', + 'tags', + 'user_settings', + 'webhooks' ] + +# List the entries in the repositories table +> r.db('dtr2').table('repositories') +[ { enableManifestLists: false, + id: 'ac9614a8-36f4-4933-91fa-3ffed2bd259b', + immutableTags: false, + name: 'test-repo-1', + namespaceAccountID: 'fc3b4aec-74a3-4ba2-8e62-daed0d1f7481', + namespaceName: 'admin', + pk: '3a4a79476d76698255ab505fb77c043655c599d1f5b985f859958ab72a4099d6', + pulls: 0, + pushes: 0, + scanOnPush: false, + tagLimit: 0, + visibility: 'public' }, + { enableManifestLists: false, + id: '9f43f029-9683-459f-97d9-665ab3ac1fda', + immutableTags: false, + longDescription: '', + name: 'testing', + namespaceAccountID: 'fc3b4aec-74a3-4ba2-8e62-daed0d1f7481', + namespaceName: 'admin', + pk: '6dd09ac485749619becaff1c17702ada23568ebe0a40bb74a330d058a757e0be', + pulls: 0, + pushes: 0, + scanOnPush: false, + shortDescription: '', + tagLimit: 1, + visibility: 'public' } ] +``` + +Individual DBs and tables are a private implementation detail and may change in DTR +from version to version, but you can always use `dbList()` and `tableList()` to explore +the contents and data structure. + +[Learn more about RethinkDB queries](https://www.rethinkdb.com/docs/guide/javascript/). 
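Saved output from these interactive queries can also be post-processed with ordinary shell tools. A sketch that pulls the repository names out of a captured copy of the `repositories` listing (the two records are the sample entries shown above; the file name is arbitrary):

```shell
# Extract repository names from a saved copy of the
# r.db('dtr2').table('repositories') output.
cat > /tmp/repos.txt <<'EOF'
[ { id: 'ac9614a8-36f4-4933-91fa-3ffed2bd259b',
    name: 'test-repo-1' },
  { id: '9f43f029-9683-459f-97d9-665ab3ac1fda',
    name: 'testing' } ]
EOF
grep -o "name: '[^']*'" /tmp/repos.txt | cut -d"'" -f2
# test-repo-1
# testing
rm -f /tmp/repos.txt
```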
+ +### via API + +To check on the overall status of your DTR cluster without interacting with RethinkCLI, run the following API request: + +```bash +curl -u admin:$TOKEN -X GET "https:///api/v0/meta/cluster_status" -H "accept: application/json" +``` + +#### Example API Response +```none +{ + "rethink_system_tables": { + "cluster_config": [ + { + "heartbeat_timeout_secs": 10, + "id": "heartbeat" + } + ], + "current_issues": [], + "db_config": [ + { + "id": "339de11f-b0c2-4112-83ac-520cab68d89c", + "name": "notaryserver" + }, + { + "id": "aa2e893f-a69a-463d-88c1-8102aafebebc", + "name": "dtr2" + }, + { + "id": "bdf14a41-9c31-4526-8436-ab0fed00c2fd", + "name": "jobrunner" + }, + { + "id": "f94f0e35-b7b1-4a2f-82be-1bdacca75039", + "name": "notarysigner" + } + ], + "server_status": [ + { + "id": "9c41fbc6-bcf2-4fad-8960-d117f2fdb06a", + "name": "dtr_rethinkdb_5eb9459a7832", + "network": { + "canonical_addresses": [ + { + "host": "dtr-rethinkdb-5eb9459a7832.dtr-ol", + "port": 29015 + } + ], + "cluster_port": 29015, + "connected_to": { + "dtr_rethinkdb_56b65e8c1404": true + }, + "hostname": "9e83e4fee173", + "http_admin_port": "", + "reql_port": 28015, + "time_connected": "2019-02-15T00:19:22.035Z" + }, + } + ... + ] + } +} + +``` + +## Recover from an unhealthy replica + +When a DTR replica is unhealthy or down, the DTR web UI displays a warning: + +```none +Warning: The following replicas are unhealthy: 59e4e9b0a254; Reasons: Replica reported health too long ago: 2017-02-18T01:11:20Z; Replicas 000000000000, 563f02aba617 are still healthy. +``` + +To fix this, you should remove the unhealthy replica from the DTR cluster, +and join a new one. 
Start by running: + +```bash +docker run -it --rm \ + {{ page.dtr_org }}/{{ page.dtr_repo }}:{{ page.dtr_version }} remove \ + --ucp-insecure-tls +``` + +And then: + +```bash +docker run -it --rm \ + {{ page.dtr_org }}/{{ page.dtr_repo }}:{{ page.dtr_version }} join \ + --ucp-node \ + --ucp-insecure-tls +``` diff --git a/ee/dtr/admin/monitor-and-troubleshoot/troubleshoot-with-logs.md b/ee/dtr/admin/monitor-and-troubleshoot/troubleshoot-with-logs.md deleted file mode 100644 index 15ef5ac2ab..0000000000 --- a/ee/dtr/admin/monitor-and-troubleshoot/troubleshoot-with-logs.md +++ /dev/null @@ -1,138 +0,0 @@ ---- -title: Troubleshoot Docker Trusted Registry -description: Learn how to troubleshoot your DTR installation. -keywords: registry, monitor, troubleshoot ---- - -This guide contains tips and tricks for troubleshooting DTR problems. - -## Troubleshoot overlay networks - -High availability in DTR depends on swarm overlay networking. One way to test -if overlay networks are working correctly is to deploy containers to the same -overlay network on different nodes and see if they can ping one another. - -Use SSH to log into a node and run: - -```bash -docker run -it --rm \ - --net dtr-ol --name overlay-test1 \ - --entrypoint sh {{ page.dtr_org }}/{{ page.dtr_repo }} -``` - -Then use SSH to log into another node and run: - -```bash -docker run -it --rm \ - --net dtr-ol --name overlay-test2 \ - --entrypoint ping {{ page.dtr_org }}/{{ page.dtr_repo }} -c 3 overlay-test1 -``` - -If the second command succeeds, it indicates overlay networking is working -correctly between those nodes. - -You can run this test with any attachable overlay network and any Docker image -that has `sh` and `ping`. - - -## Access RethinkDB directly - -DTR uses RethinkDB for persisting data and replicating it across replicas. -It might be helpful to connect directly to the RethinkDB instance running on a -DTR replica to check the DTR internal state. 
- -> **Warning**: Modifying RethinkDB directly is not supported and may cause -> problems. -{: .warning } - -Use SSH to log into a node that is running a DTR replica, and run the following -commands: - -{% raw %} -```bash -# List problems in the cluster detected by the current node. -echo 'r.db("rethinkdb").table("current_issues")' | \ - docker exec -i \ - $(docker ps -q --filter name=dtr-rethinkdb) \ - rethinkcli non-interactive; \ - echo -``` -{% endraw %} - -On a healthy cluster the output will be `[]`. - -RethinkDB stores data in different databases that contain multiple tables. This -container can also be used to connect to the local DTR replica and -interactively query the contents of the DB. - -{% raw %} -```bash -docker exec -it $(docker ps -q --filter name=dtr-rethinkdb) rethinkcli -``` -{% endraw %} - -```none -# List problems in the cluster detected by the current node. -> r.db("rethinkdb").table("current_issues") -[] - -# List all the DBs in RethinkDB -> r.dbList() -[ 'dtr2', - 'jobrunner', - 'notaryserver', - 'notarysigner', - 'rethinkdb' ] - -# List the tables in the dtr2 db -> r.db('dtr2').tableList() -[ 'client_tokens', - 'events', - 'manifests', - 'namespace_team_access', - 'properties', - 'repositories', - 'repository_team_access', - 'tags' ] - -# List the entries in the repositories table -> r.db('dtr2').table('repositories') -[ { id: '19f1240a-08d8-4979-a898-6b0b5b2338d8', - name: 'my-test-repo', - namespaceAccountID: '924bf131-6213-43fa-a5ed-d73c7ccf392e', - pk: 'cf5e8bf1197e281c747f27e203e42e22721d5c0870b06dfb1060ad0970e99ada', - visibility: 'public' }, -... -``` - -Individual DBs and tables are a private implementation detail and may change in DTR -from version to version, but you can always use `dbList()` and `tableList()` to explore -the contents and data structure. - -[Learn more about RethinkDB queries](https://www.rethinkdb.com/docs/guide/javascript/). 
- -## Recover from an unhealthy replica - -When a DTR replica is unhealthy or down, the DTR web UI displays a warning: - -```none -Warning: The following replicas are unhealthy: 59e4e9b0a254; Reasons: Replica reported health too long ago: 2017-02-18T01:11:20Z; Replicas 000000000000, 563f02aba617 are still healthy. -``` - -To fix this, you should remove the unhealthy replica from the DTR cluster, -and join a new one. Start by running: - -```bash -docker run -it --rm \ - {{ page.dtr_org }}/{{ page.dtr_repo }}:{{ page.dtr_version }} remove \ - --ucp-insecure-tls -``` - -And then: - -```bash -docker run -it --rm \ - {{ page.dtr_org }}/{{ page.dtr_repo }}:{{ page.dtr_version }} join \ - --ucp-node \ - --ucp-insecure-tls -``` diff --git a/ee/dtr/images/security-scanning-setup-1.png b/ee/dtr/images/security-scanning-setup-1.png index f2afa4a162..39867d63a7 100644 Binary files a/ee/dtr/images/security-scanning-setup-1.png and b/ee/dtr/images/security-scanning-setup-1.png differ diff --git a/ee/dtr/images/security-scanning-setup-2.png b/ee/dtr/images/security-scanning-setup-2.png index bbff97fe76..a9d161d95b 100644 Binary files a/ee/dtr/images/security-scanning-setup-2.png and b/ee/dtr/images/security-scanning-setup-2.png differ diff --git a/ee/dtr/images/security-scanning-setup-3.png b/ee/dtr/images/security-scanning-setup-3.png index d90a56a2f0..a3d77ab5b2 100644 Binary files a/ee/dtr/images/security-scanning-setup-3.png and b/ee/dtr/images/security-scanning-setup-3.png differ diff --git a/ee/dtr/images/security-scanning-setup-4.png b/ee/dtr/images/security-scanning-setup-4.png index 9a1eaae531..0921546f06 100644 Binary files a/ee/dtr/images/security-scanning-setup-4.png and b/ee/dtr/images/security-scanning-setup-4.png differ diff --git a/ee/dtr/images/security-scanning-setup-5.png b/ee/dtr/images/security-scanning-setup-5.png index 4e93286bf0..b357cef19a 100644 Binary files a/ee/dtr/images/security-scanning-setup-5.png and b/ee/dtr/images/security-scanning-setup-5.png 
differ diff --git a/ee/dtr/images/security-scanning-setup-6.png b/ee/dtr/images/security-scanning-setup-6.png index 6fc1a38b4e..04fac6c638 100644 Binary files a/ee/dtr/images/security-scanning-setup-6.png and b/ee/dtr/images/security-scanning-setup-6.png differ diff --git a/ee/dtr/images/security-scanning-setup-7.png b/ee/dtr/images/security-scanning-setup-7.png index 2acf1ae462..39867d63a7 100644 Binary files a/ee/dtr/images/security-scanning-setup-7.png and b/ee/dtr/images/security-scanning-setup-7.png differ diff --git a/ee/dtr/release-notes.md b/ee/dtr/release-notes.md index 497cf07e83..35fd1a9d99 100644 --- a/ee/dtr/release-notes.md +++ b/ee/dtr/release-notes.md @@ -22,13 +22,60 @@ to upgrade your installation to the latest release. # Version 2.6 +## 2.6.3 + +(2019-2-28) + +### Changelog + +* Bump the Golang version that is used to build DTR to version 1.11.5. (docker/dhe-deploy#10060) + +### Bug Fixes + +* Users with read-only permissions can no longer see the README edit button for a repository. (docker/dhe-deploy#10056) + +### Known issues + +* Docker Engine Enterprise Edition (Docker EE) Upgrade + * There are [important changes to the upgrade process](/ee/upgrade) that, if not correctly followed, can have impact on the availability of applications running on the Swarm during upgrades. These constraints impact any upgrades coming from any version before `18.09` to version `18.09` or greater. For DTR-specific changes, see [2.5 to 2.6 upgrade](/ee/dtr/admin/upgrade/#25-to-26-upgrade). + +* Web Interface + * Poll mirroring for Docker plugins such as `docker/imagefs` is currently broken. (docker/dhe-deploy #9490) + * When viewing the details of a scanned image tag, the header may display a different vulnerability count from the layer details. (docker/dhe-deploy #9474) + * In order to set a tag limit for pruning purposes, immutability must be turned off for a repository. This limitation is not clear in the **Repository Settings** view. 
(docker/dhe-deploy #9554) + +* Webhooks + * When configured for "Image promoted from repository" events, a webhook notification is triggered twice during an image promotion when scanning is enabled on a repository. (docker/dhe-deploy #9685) + * HTTPS webhooks do not go through HTTPS proxy when configured. (docker/dhe-deploy #9492) + +* System + * When upgrading from `2.5` to `2.6`, the system will run a `metadatastoremigration` job after a successful upgrade. This is necessary for online garbage collection. If the three system attempts fail, you will have to retrigger the `metadatastoremigration` job manually. [Learn about manual metadata store migration](/ee/dtr/admin/upgrade/#25-to-26-upgrade). + ## 2.6.2 (2019-1-29) ### Bug Fixes - * Fixed a bug where scanning Windows images were stuck in Pending state. (docker/dhe-deploy #9969) +* Fixed a bug where scanning Windows images were stuck in Pending state. (docker/dhe-deploy #9969) + +### Known issues + +* Docker Engine Enterprise Edition (Docker EE) Upgrade + * There are [important changes to the upgrade process](/ee/upgrade) that, if not correctly followed, can have impact on the availability of applications running on the Swarm during upgrades. These constraints impact any upgrades coming from any version before `18.09` to version `18.09` or greater. For DTR-specific changes, see [2.5 to 2.6 upgrade](/ee/dtr/admin/upgrade/#25-to-26-upgrade). + +* Web Interface + * Users with read-only permissions to a repository can edit the repository README but their changes will not be saved. Only repository admins should have the ability to [edit the description](/ee/dtr/admin/manage-users/permission-levels/#team-permission-levels) of a repository. (docker/dhe-deploy #9677) + * Poll mirroring for Docker plugins such as `docker/imagefs` is currently broken. (docker/dhe-deploy #9490) + * When viewing the details of a scanned image tag, the header may display a different vulnerability count from the layer details. 
(docker/dhe-deploy #9474) + * In order to set a tag limit for pruning purposes, immutability must be turned off for a repository. This limitation is not clear in the **Repository Settings** view. (docker/dhe-deploy #9554) + +* Webhooks + * When configured for "Image promoted from repository" events, a webhook notification is triggered twice during an image promotion when scanning is enabled on a repository. (docker/dhe-deploy #9685) + * HTTPS webhooks do not go through HTTPS proxy when configured. (docker/dhe-deploy #9492) + +* System + * When upgrading from `2.5` to `2.6`, the system will run a `metadatastoremigration` job after a successful upgrade. This is necessary for online garbage collection. If the three system attempts fail, you will have to retrigger the `metadatastoremigration` job manually. [Learn about manual metadata store migration](/ee/dtr/admin/upgrade/#25-to-26-upgrade). ## 2.6.1 @@ -43,6 +90,24 @@ to upgrade your installation to the latest release. ### Changelog * GoLang version bump to 1.11.4. +### Known issues + +* Docker Engine Enterprise Edition (Docker EE) Upgrade + * There are [important changes to the upgrade process](/ee/upgrade) that, if not correctly followed, can have impact on the availability of applications running on the Swarm during upgrades. These constraints impact any upgrades coming from any version before `18.09` to version `18.09` or greater. For DTR-specific changes, see [2.5 to 2.6 upgrade](/ee/dtr/admin/upgrade/#25-to-26-upgrade). + +* Web Interface + * Users with read-only permissions to a repository can edit the repository README but their changes will not be saved. Only repository admins should have the ability to [edit the description](/ee/dtr/admin/manage-users/permission-levels/#team-permission-levels) of a repository. (docker/dhe-deploy #9677) + * Poll mirroring for Docker plugins such as `docker/imagefs` is currently broken. 
(docker/dhe-deploy #9490) + * When viewing the details of a scanned image tag, the header may display a different vulnerability count from the layer details. (docker/dhe-deploy #9474) + * In order to set a tag limit for pruning purposes, immutability must be turned off for a repository. This limitation is not clear in the **Repository Settings** view. (docker/dhe-deploy #9554) + +* Webhooks + * When configured for "Image promoted from repository" events, a webhook notification is triggered twice during an image promotion when scanning is enabled on a repository. (docker/dhe-deploy #9685) + * HTTPS webhooks do not go through HTTPS proxy when configured. (docker/dhe-deploy #9492) + +* System + * When upgrading from `2.5` to `2.6`, the system will run a `metadatastoremigration` job after a successful upgrade. This is necessary for online garbage collection. If the three system attempts fail, you will have to retrigger the `metadatastoremigration` job manually. [Learn about manual metadata store migration](/ee/dtr/admin/upgrade/#25-to-26-upgrade). + ## 2.6.0 (2018-11-08) @@ -84,6 +149,7 @@ to upgrade your installation to the latest release. * Users with read-only permissions to a repository can edit the repository README but their changes will not be saved. Only repository admins should have the ability to [edit the description](/ee/dtr/admin/manage-users/permission-levels/#team-permission-levels) of a repository. (docker/dhe-deploy #9677) * Poll mirroring for Docker plugins such as `docker/imagefs` is currently broken. (docker/dhe-deploy #9490) * When viewing the details of a scanned image tag, the header may display a different vulnerability count from the layer details. (docker/dhe-deploy #9474) + * In order to set a tag limit for pruning purposes, immutability must be turned off for a repository. This limitation is not clear in the **Repository Settings** view. 
(docker/dhe-deploy #9554) * Webhooks * When configured for "Image promoted from repository" events, a webhook notification is triggered twice during an image promotion when scanning is enabled on a repository. (docker/dhe-deploy #9685) @@ -104,6 +170,43 @@ to upgrade your installation to the latest release. # Version 2.5 +## 2.5.9 + +(2019-2-28) + +### Changelog + +* Bump the Golang version that is used to build DTR to version 1.10.8. (docker/dhe-deploy#10071) + +### Known Issues +* Web Interface + * The web interface shows "This repository has no tags" in repositories where tags + have long names. As a workaround, reduce the length of the name for the + repository and tag. + * When deleting a repository with signed images, the DTR web interface no longer + shows instructions on how to delete trust data. + * There's no web interface support to update mirroring policies when rotating the TLS + certificates used by DTR. Use the API instead. + * The web interface for promotion policies is currently broken if you have a large number + of repositories. + * Clicking "Save & Apply" on a promotion policy doesn't work. +* Webhooks + * There is no webhook event for when an image is pulled. + * HTTPS webhooks do not go through HTTPS proxy when configured. (docker/dhe-deploy #9492) + * When configured for "Image promoted from repository" events, a webhook notification will be triggered twice during an image promotion when scanning is enabled on a repository. (docker/dhe-deploy #9685) +* Online garbage collection + * The events API won't report events when tags and manifests are deleted. + * The events API won't report blobs deleted by the garbage collection job. +* Docker EE Advanced features + * Scanning any new push after metadatastore migration will not yet work. + * Pushes to repos with promotion policies (repo as source) are broken when an + image has a layer over 100MB. 
+ * On upgrade the scanningstore container may restart with this error message: + FATAL: database files are incompatible with server + +* System + * When opting into online garbage collection, the system will run a `metadatastoremigration` job after a successful upgrade. If the three system attempts fail, you will have to retrigger the `metadatastoremigration` job manually. [Learn about manual metadata store migration](../../v18.03/ee/dtr/admin/configure/garbage-collection/#metadata-store-migration). + ## 2.5.8 (2019-1-29) @@ -112,6 +215,35 @@ to upgrade your installation to the latest release. * Fixed an issue that prevented vulnerability updates from running if they were previously interrupted. (docker/dhe-deploy #9958) +### Known Issues +* Web Interface + * The web interface shows "This repository has no tags" in repositories where tags + have long names. As a workaround, reduce the length of the name for the + repository and tag. + * When deleting a repository with signed images, the DTR web interface no longer + shows instructions on how to delete trust data. + * There's no web interface support to update mirroring policies when rotating the TLS + certificates used by DTR. Use the API instead. + * The web interface for promotion policies is currently broken if you have a large number + of repositories. + * Clicking "Save & Apply" on a promotion policy doesn't work. +* Webhooks + * There is no webhook event for when an image is pulled. + * HTTPS webhooks do not go through HTTPS proxy when configured. (docker/dhe-deploy #9492) + * When configured for "Image promoted from repository" events, a webhook notification will be triggered twice during an image promotion when scanning is enabled on a repository. (docker/dhe-deploy #9685) +* Online garbage collection + * The events API won't report events when tags and manifests are deleted. + * The events API won't report blobs deleted by the garbage collection job. 
+* Docker EE Advanced features + * Scanning any new push after metadatastore migration will not yet work. + * Pushes to repos with promotion policies (repo as source) are broken when an + image has a layer over 100MB. + * On upgrade the scanningstore container may restart with this error message: + FATAL: database files are incompatible with server + +* System + * When opting into online garbage collection, the system will run a `metadatastoremigration` job after a successful upgrade. If the three system attempts fail, you will have to retrigger the `metadatastoremigration` job manually. [Learn about manual metadata store migration](../../v18.03/ee/dtr/admin/configure/garbage-collection/#metadata-store-migration). + ## 2.5.7 (2019-01-09) @@ -125,6 +257,35 @@ to upgrade your installation to the latest release. ### Changelog * GoLang version bump to 1.10.7. +### Known Issues +* Web Interface + * The web interface shows "This repository has no tags" in repositories where tags + have long names. As a workaround, reduce the length of the name for the + repository and tag. + * When deleting a repository with signed images, the DTR web interface no longer + shows instructions on how to delete trust data. + * There's no web interface support to update mirroring policies when rotating the TLS + certificates used by DTR. Use the API instead. + * The web interface for promotion policies is currently broken if you have a large number + of repositories. + * Clicking "Save & Apply" on a promotion policy doesn't work. +* Webhooks + * There is no webhook event for when an image is pulled. + * HTTPS webhooks do not go through HTTPS proxy when configured. (docker/dhe-deploy #9492) + * When configured for "Image promoted from repository" events, a webhook notification will be triggered twice during an image promotion when scanning is enabled on a repository. 
(docker/dhe-deploy #9685) +* Online garbage collection + * The events API won't report events when tags and manifests are deleted. + * The events API won't report blobs deleted by the garbage collection job. +* Docker EE Advanced features + * Scanning any new push after metadatastore migration will not yet work. + * Pushes to repos with promotion policies (repo as source) are broken when an + image has a layer over 100MB. + * On upgrade the scanningstore container may restart with this error message: + FATAL: database files are incompatible with server + +* System + * When opting into online garbage collection, the system will run a `metadatastoremigration` job after a successful upgrade. If the three system attempts fail, you will have to retrigger the `metadatastoremigration` job manually. [Learn about manual metadata store migration](../../v18.03/ee/dtr/admin/configure/garbage-collection/#metadata-store-migration). + ## 2.5.6 (2018-10-25) @@ -138,6 +299,35 @@ to upgrade your installation to the latest release. * Backported ManifestList fixes. (docker/dhe-deploy#9547) * Removed support sidebar link and associated content. (docker/dhe-deploy#9411) +### Known Issues +* Web Interface + * The web interface shows "This repository has no tags" in repositories where tags + have long names. As a workaround, reduce the length of the name for the + repository and tag. + * When deleting a repository with signed images, the DTR web interface no longer + shows instructions on how to delete trust data. + * There's no web interface support to update mirroring policies when rotating the TLS + certificates used by DTR. Use the API instead. + * The web interface for promotion policies is currently broken if you have a large number + of repositories. + * Clicking "Save & Apply" on a promotion policy doesn't work. +* Webhooks + * There is no webhook event for when an image is pulled. + * HTTPS webhooks do not go through HTTPS proxy when configured. 
(docker/dhe-deploy #9492) + * When configured for "Image promoted from repository" events, a webhook notification will be triggered twice during an image promotion when scanning is enabled on a repository. (docker/dhe-deploy #9685) +* Online garbage collection + * The events API won't report events when tags and manifests are deleted. + * The events API won't report blobs deleted by the garbage collection job. +* Docker EE Advanced features + * Scanning any new push after metadatastore migration will not yet work. + * Pushes to repos with promotion policies (repo as source) are broken when an + image has a layer over 100MB. + * On upgrade the scanningstore container may restart with this error message: + FATAL: database files are incompatible with server + +* System + * When opting into online garbage collection, the system will run a `metadatastoremigration` job after a successful upgrade. If the three system attempts fail, you will have to retrigger the `metadatastoremigration` job manually. [Learn about manual metadata store migration](../../v18.03/ee/dtr/admin/configure/garbage-collection/#metadata-store-migration). + ## 2.5.5 (2018-8-30) @@ -148,6 +338,35 @@ to upgrade your installation to the latest release. * Fixed bug to enable poll mirroring with Windows images. * The RethinkDB image has been patched to remove unused components with known vulnerabilities including the RethinkCLI. To get an equivalent interface, run RethinkCLI from a separate image using `docker run -it --rm --net dtr-ol -v dtr-ca-$REPLICA_ID:/ca dockerhubenterprise/rethinkcli:v2.3.0 $REPLICA_ID`. +### Known Issues +* Web Interface + * The web interface shows "This repository has no tags" in repositories where tags + have long names. As a workaround, reduce the length of the name for the + repository and tag. + * When deleting a repository with signed images, the DTR web interface no longer + shows instructions on how to delete trust data. 
+ * There's no web interface support to update mirroring policies when rotating the TLS + certificates used by DTR. Use the API instead. + * The web interface for promotion policies is currently broken if you have a large number + of repositories. + * Clicking "Save & Apply" on a promotion policy doesn't work. +* Webhooks + * There is no webhook event for when an image is pulled. + * HTTPS webhooks do not go through HTTPS proxy when configured. (docker/dhe-deploy #9492) + * When configured for "Image promoted from repository" events, a webhook notification will be triggered twice during an image promotion when scanning is enabled on a repository. (docker/dhe-deploy #9685) +* Online garbage collection + * The events API won't report events when tags and manifests are deleted. + * The events API won't report blobs deleted by the garbage collection job. +* Docker EE Advanced features + * Scanning any new push after metadatastore migration will not yet work. + * Pushes to repos with promotion policies (repo as source) are broken when an + image has a layer over 100MB. + * On upgrade the scanningstore container may restart with this error message: + FATAL: database files are incompatible with server + +* System + * When opting into online garbage collection, the system will run a `metadatastoremigration` job after a successful upgrade. If the three system attempts fail, you will have to retrigger the `metadatastoremigration` job manually. [Learn about manual metadata store migration](../../v18.03/ee/dtr/admin/configure/garbage-collection/#metadata-store-migration). + ## 2.5.3 (2018-6-21) @@ -163,8 +382,33 @@ to upgrade your installation to the latest release. * Fixed issue where worker capacities wouldn't update on minor version upgrades. ### Known Issues +* Web Interface + * The web interface shows "This repository has no tags" in repositories where tags + have long names. As a workaround, reduce the length of the name for the + repository and tag. 
+ * When deleting a repository with signed images, the DTR web interface no longer + shows instructions on how to delete trust data. + * There's no web interface support to update mirroring policies when rotating the TLS + certificates used by DTR. Use the API instead. + * The web interface for promotion policies is currently broken if you have a large number + of repositories. + * Clicking "Save & Apply" on a promotion policy doesn't work. * Webhooks + * There is no webhook event for when an image is pulled. + * HTTPS webhooks do not go through HTTPS proxy when configured. (docker/dhe-deploy #9492) * When configured for "Image promoted from repository" events, a webhook notification will be triggered twice during an image promotion when scanning is enabled on a repository. (docker/dhe-deploy #9685) +* Online garbage collection + * The events API won't report events when tags and manifests are deleted. + * The events API won't report blobs deleted by the garbage collection job. +* Docker EE Advanced features + * Scanning any new push after metadatastore migration will not yet work. + * Pushes to repos with promotion policies (repo as source) are broken when an + image has a layer over 100MB. + * On upgrade the scanningstore container may restart with this error message: + FATAL: database files are incompatible with server + +* System + * When opting into online garbage collection, the system will run a `metadatastoremigration` job after a successful upgrade. If the three system attempts fail, you will have to retrigger the `metadatastoremigration` job manually. [Learn about manual metadata store migration](../../v18.03/ee/dtr/admin/configure/garbage-collection/#metadata-store-migration). ## 2.5.2 @@ -175,6 +419,35 @@ to upgrade your installation to the latest release. * Fixed a problem where promotion policies based on scanning results would not be executed correctly. 
+### Known issues + +* Web Interface + * The web interface shows "This repository has no tags" in repositories where tags + have long names. As a workaround, reduce the length of the name for the + repository and tag. + * When deleting a repository with signed images, the DTR web interface no longer + shows instructions on how to delete trust data. + * There's no web interface support to update mirroring policies when rotating the TLS + certificates used by DTR. Use the API instead. + * The web interface for promotion policies is currently broken if you have a large number + of repositories. + * Clicking "Save & Apply" on a promotion policy doesn't work. +* Webhooks + * There is no webhook event for when an image is pulled. + * HTTPS webhooks do not go through HTTPS proxy when configured. (docker/dhe-deploy #9492) +* Online garbage collection + * The events API won't report events when tags and manifests are deleted. + * The events API won't report blobs deleted by the garbage collection job. +* Docker EE Advanced features + * Scanning any new push after metadatastore migration will not yet work. + * Pushes to repos with promotion policies (repo as source) are broken when an + image has a layer over 100MB. + * On upgrade the scanningstore container may restart with this error message: + FATAL: database files are incompatible with server + +* System + * When opting into online garbage collection, the system will run a `metadatastoremigration` job after a successful upgrade. If the three system attempts fail, you will have to retrigger the `metadatastoremigration` job manually. [Learn about manual metadata store migration](../../v18.03/ee/dtr/admin/configure/garbage-collection/#metadata-store-migration). + ## 2.5.1 (2018-5-17) @@ -201,6 +474,35 @@ to upgrade your installation to the latest release. * Enhancements to the mirroring interface including: * Fixed URL for the destination repository. * Option to skip TLS verification when testing mirroring. 
+ + ### Known issues + +* Web Interface + * The web interface shows "This repository has no tags" in repositories where tags + have long names. As a workaround, reduce the length of the name for the + repository and tag. + * When deleting a repository with signed images, the DTR web interface no longer + shows instructions on how to delete trust data. + * There's no web interface support to update mirroring policies when rotating the TLS + certificates used by DTR. Use the API instead. + * The web interface for promotion policies is currently broken if you have a large number + of repositories. + * Clicking "Save & Apply" on a promotion policy doesn't work. +* Webhooks + * There is no webhook event for when an image is pulled. + * HTTPS webhooks do not go through HTTPS proxy when configured. (docker/dhe-deploy #9492) +* Online garbage collection + * The events API won't report events when tags and manifests are deleted. + * The events API won't report blobs deleted by the garbage collection job. +* Docker EE Advanced features + * Scanning any new push after metadatastore migration will not yet work. + * Pushes to repos with promotion policies (repo as source) are broken when an + image has a layer over 100MB. + * On upgrade the scanningstore container may restart with this error message: + FATAL: database files are incompatible with server + +* System + * When opting into online garbage collection, the system will run a `metadatastoremigration` job after a successful upgrade. If the three system attempts fail, you will have to retrigger the `metadatastoremigration` job manually. [Learn about manual metadata store migration](../../v18.03/ee/dtr/admin/configure/garbage-collection/#metadata-store-migration). ## 2.5.0 @@ -298,6 +600,21 @@ specify `--log-protocol`. # Version 2.4 +## 2.4.10 + +(2019-2-28) + +### Changelog + +* Bump the Golang version that is used to build DTR to version 1.10.8. 
(docker/dhe-deploy#10068) + +**Known issues** + +* Backup uses too much memory and can cause out of memory issues for large databases. +* The `--nfs-storage-url` option uses the system's default NFS version instead +of testing the server to find which version works. + + ## Version 2.4.8 (2019-01-29) @@ -305,6 +622,13 @@ specify `--log-protocol`. ### Changelog * GoLang version bump to 1.10.6. +**Known issues** + +* Backup uses too much memory and can cause out of memory issues for large databases. +* The `--nfs-storage-url` option uses the system's default NFS version instead +of testing the server to find which version works. + + ## Version 2.4.7 (2018-10-25) @@ -317,6 +641,12 @@ specify `--log-protocol`. * Patched security vulnerabilities in the load balancer. * Patch packages and base OS to eliminate and address some critical vulnerabilities in DTR dependencies. +**Known issues** + +* Backup uses too much memory and can cause out of memory issues for large databases. +* The `--nfs-storage-url` option uses the system's default NFS version instead +of testing the server to find which version works. + ## Version 2.4.6 (2018-07-26) @@ -325,6 +655,12 @@ specify `--log-protocol`. * Fixed bug where repository tag list UI was not loading after a tag migration. * The RethinkDB image has been patched to remove unused components with known vulnerabilities including the rethinkcli. To get an equivalent interface please run the rethinkcli from a separate image using `docker run -it --rm --net dtr-ol -v dtr-ca-$REPLICA_ID:/ca dockerhubenterprise/rethinkcli $REPLICA_ID`. +**Known issues** + +* Backup uses too much memory and can cause out of memory issues for large databases. +* The `--nfs-storage-url` option uses the system's default NFS version instead +of testing the server to find which version works. + ## Version 2.4.5 (2018-06-21) @@ -337,6 +673,12 @@ specify `--log-protocol`. * Prevent OOM during garbage collection by reading less data into memory at a time. 
+**Known issues** + +* Backup uses too much memory and can cause out of memory issues for large databases. +* The `--nfs-storage-url` option uses the system's default NFS version instead +of testing the server to find which version works. + ## Version 2.4.4 (2018-05-17) @@ -352,11 +694,17 @@ specify `--log-protocol`. * Reduce noise in the jobrunner logs by changing some of the more detailed messages to debug level. * Eliminate a race condition in which webhook for license updates doesn't fire. +**Known issues** + +* Backup uses too much memory and can cause out of memory issues for large databases. +* The `--nfs-storage-url` option uses the system's default NFS version instead +of testing the server to find which version works. + ## Version 2.4.3 (2018-03-19) -**Security** +**Security notice** * Dependencies updated to consume upstream CVE patches. @@ -410,6 +758,12 @@ vulnerability database. removed in DTR 2.5. You can use the `/api/v0/imagescan/repositories/{namespace}/{reponame}/{tag}` endpoint instead. +**Known issues** + +* Backup uses too much memory and can cause out of memory issues for large databases. +* The `--nfs-storage-url` option uses the system's default NFS version instead +of testing the server to find which version works. + ## DTR 2.4.0 diff --git a/ee/dtr/user/tag-pruning.md b/ee/dtr/user/tag-pruning.md index a12fbccac6..6853a0c777 100644 --- a/ee/dtr/user/tag-pruning.md +++ b/ee/dtr/user/tag-pruning.md @@ -11,11 +11,14 @@ Tag pruning is the process of cleaning up unnecessary or unwanted repository tag * specifying a tag pruning policy or alternatively, * setting a tag limit - > Tag Pruning > > When run, tag pruning only deletes a tag and does not carry out any actual blob deletion. For actual blob deletions, see [Garbage Collection](../../admin/configure/garbage-collection.md). 
+> Known Issue +> +> While the tag limit field is disabled when you turn on immutability for a new repository, this is currently [not the case with **Repository Settings**](/ee/dtr/release-notes/#known-issues). As a workaround, turn off immutability when setting a tag limit via **Repository Settings > Pruning**. + In the following section, we will cover how to specify a tag pruning policy and set a tag limit on repositories that you manage. It will not include modifying or deleting a tag pruning policy. ## Specify a tag pruning policy @@ -65,7 +68,10 @@ In addition to pruning policies, you can also set tag limits on repositories tha ![](../images/tag-pruning-4.png){: .with-border} -To set a tag limit, select the repository that you want to update and click the **Settings** tab. Specify a number in the **Pruning** section and click **Save**. The **Pruning** tab will now display your tag limit above the prune triggers list along with a link to modify this setting. +To set a tag limit, do the following: +1. Select the repository that you want to update and click the **Settings** tab. +2. Turn off immutability for the repository. +3. Specify a number in the **Pruning** section and click **Save**. The **Pruning** tab will now display your tag limit above the prune triggers list along with a link to modify this setting. ![](../images/tag-pruning-5.png){: .with-border} diff --git a/ee/ucp/admin/backups-and-disaster-recovery.md b/ee/ucp/admin/backups-and-disaster-recovery.md index 3e5be6d0b1..c07aa4f8a0 100644 --- a/ee/ucp/admin/backups-and-disaster-recovery.md +++ b/ee/ucp/admin/backups-and-disaster-recovery.md @@ -37,7 +37,7 @@ Back up your Docker EE components in the following order: 1. [Back up your swarm](/engine/swarm/admin_guide/#back-up-the-swarm) 2. Back up UCP -3. [Back up DTR](../../dtr/2.5/admin/disaster-recovery/index.md) +3. 
[Back up DTR](/ee/dtr/admin/disaster-recovery/) ## Backup policy @@ -45,6 +45,17 @@ As part of your backup policy you should regularly create backups of UCP. DTR is backed up independently. [Learn about DTR backups and recovery](../../dtr/2.5/admin/disaster-recovery/index.md). +> Warning: On UCP versions 3.1.0 - 3.1.2, before performing a UCP backup, you must clean up multiple /dev/shm mounts in the ucp-kubelet entrypoint script by running the following script on all nodes via cron job: + +``` +SHM_MOUNT=$(grep -m1 '^tmpfs./dev/shm' /proc/mounts) +while [ $(grep -cm2 '^tmpfs./dev/shm' /proc/mounts) -gt 1 ]; do + sudo umount /dev/shm +done +grep -q '^tmpfs./dev/shm' /proc/mounts || sudo mount "${SHM_MOUNT}" +``` +For additional details, refer to [Docker KB000934](https://success.docker.com/article/more-than-one-dev-shm-mount-in-the-host-namespace){: target="_blank"}. + To create a UCP backup, run the `{{ page.ucp_org }}/{{ page.ucp_repo }}:{{ page.ucp_version }} backup` command on a single UCP manager. This command creates a tar archive with the contents of all the [volumes used by UCP](../ucp-architecture.md) to persist data diff --git a/ee/ucp/admin/configure/collect-cluster-metrics.md b/ee/ucp/admin/configure/collect-cluster-metrics.md index e3f2f9c304..ce869338a6 100644 --- a/ee/ucp/admin/configure/collect-cluster-metrics.md +++ b/ee/ucp/admin/configure/collect-cluster-metrics.md @@ -22,16 +22,16 @@ The Docker EE platform provides a base set of metrics that gets you running and ## Business metrics ## These are high-level aggregate metrics that typically combine technical, financial, and organizational data to create metrics for business leaders of the IT infrastructure.
Some examples of business metrics might be: - - Company or division-level application downtime - - Aggregate resource utilization - - Application resource demand growth + - Company or division-level application downtime + - Aggregate resource utilization + - Application resource demand growth ## Application metrics ## These are metrics about domain of APM tools like AppDynamics or DynaTrace and provide metrics about the state or performance of the application itself. - - Service state metrics - - Container platform metrics - - Host infrastructure metrics + - Service state metrics + - Container platform metrics + - Host infrastructure metrics Docker EE 2.1 does not collect or expose application level metrics. @@ -40,9 +40,11 @@ The following are metrics Docker EE 2.1 collects, aggregates, and exposes: ## Service state metrics ## These are metrics about the state of services running on the container platform. These types of metrics have very low cardinality, meaning the values are typically from a small fixed set of possibilities, commonly binary. - - Application health - - Convergence of K8s deployments and Swarm services - - Cluster load by number of services or containers or pods + - Application health + - Convergence of K8s deployments and Swarm services + - Cluster load by number of services or containers or pods + +Web UI disk usage metrics, including free space, only reflect the Docker managed portion of the filesystem: `/var/lib/docker`. To monitor the total space available on each filesystem of a UCP worker or manager, you must deploy a third party monitoring solution to monitor the operating system. 
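The disk-usage caveat above can be checked from a node's shell. A minimal sketch, assuming the default Docker data root of `/var/lib/docker`; the `docker` call is best-effort in case the CLI or daemon is unavailable on the host:

```shell
# Space as Docker accounts for it -- roughly what the UCP web UI reflects
if command -v docker >/dev/null 2>&1; then
  docker system df || true
fi

# Space on the whole filesystem, which the UCP web UI does not report;
# an OS-level monitoring agent should watch this instead
df -h /
```

Comparing the two outputs makes the gap concrete: a nearly full root filesystem can coexist with modest `docker system df` numbers.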
## Deploy Prometheus on worker nodes diff --git a/ee/ucp/admin/configure/create-audit-logs.md b/ee/ucp/admin/configure/create-audit-logs.md index 931e670104..871d6a3194 100644 --- a/ee/ucp/admin/configure/create-audit-logs.md +++ b/ee/ucp/admin/configure/create-audit-logs.md @@ -1,6 +1,6 @@ --- -title: Create UCP audit logs -description: Learn how to create audit logs of all activity in UCP +title: Enable audit logging on UCP +description: Learn how to enable audit logging of all activity in UCP keywords: logs, ucp, swarm, kubernetes, audits --- @@ -121,7 +121,10 @@ The section of the UCP configuration file that controls UCP auditing logging is: support_dump_include_audit_logs = false ``` -The supported variables are `""`, `"metadata"` or `"request"`. +The supported variables for `level` are `""`, `"metadata"` or `"request"`. + +> Important: The `support_dump_include_audit_logs` flag specifies whether user identification information from the ucp-controller container logs is included in the support dump. To prevent this information from being sent with the support dump, make sure that `support_dump_include_audit_logs` is set to `false`. When disabled, the support dump collection tool filters out any lines from the `ucp-controller` container logs that contain the substring `auditID`. + ## Accessing Audit Logs @@ -195,6 +198,17 @@ events and may create a large amount of log entries. 
- /kubernetesdocs + /manage +## API endpoint information redacted + +Information for the following API endpoints is redacted from the audit logs for security purposes: + +- `/secrets/create` (POST) +- `/secrets/{id}/update` (POST) +- `/swarm/join` (POST) +- `/swarm/update` (POST) +- `/auth/login` (POST) +- Kube secret create/update endpoints + ## Where to go next - [Collect UCP Cluster Metrics with Prometheus](collect-cluster-metrics.md) diff --git a/ee/ucp/admin/configure/deploy-route-reflectors.md b/ee/ucp/admin/configure/deploy-route-reflectors.md index d4978db9e5..ca8f6090ed 100644 --- a/ee/ucp/admin/configure/deploy-route-reflectors.md +++ b/ee/ucp/admin/configure/deploy-route-reflectors.md @@ -27,7 +27,7 @@ workloads. If Route Reflectors are running on the same node as other workloads, swarm ingress and NodePorts might not work in these workloads. -## Choose dedicated notes +## Choose dedicated nodes Start by tainting the nodes, so that no other workload runs there. Configure your CLI with a UCP client bundle, and for each dedicated node, run: diff --git a/ee/ucp/admin/configure/external-auth/index.md b/ee/ucp/admin/configure/external-auth/index.md index a98deb7ea2..fd6f2e852e 100644 --- a/ee/ucp/admin/configure/external-auth/index.md +++ b/ee/ucp/admin/configure/external-auth/index.md @@ -141,7 +141,7 @@ Click **Yes** to enable integrating UCP users and teams with LDAP servers. | No simple pagination | If your LDAP server doesn't support pagination. | | Just-In-Time User Provisioning | Whether to create user accounts only when users log in for the first time. The default value of `true` is recommended. If you upgraded from UCP 2.0.x, the default is `false`. | -> **Note:** LDAP connections using certificates created with TLS v1.2 do not currently advertise support for sha512WithRSAEncryption in the TLS handshake which leads to issues establishing connections with some clients. Support for advertising sha512WithRSAEncryption will be added in UCP 3.1.0.
+> **Note**: LDAP connections using certificates created with TLS v1.2 do not currently advertise support for sha512WithRSAEncryption in the TLS handshake which leads to issues establishing connections with some clients. Support for advertising sha512WithRSAEncryption will be added in UCP 3.1.0. ![](../../../images/ldap-integration-1.png){: .with-border} diff --git a/ee/ucp/admin/configure/join-nodes/index.md b/ee/ucp/admin/configure/join-nodes/index.md index 41f7642dd6..3f305d7c6f 100644 --- a/ee/ucp/admin/configure/join-nodes/index.md +++ b/ee/ucp/admin/configure/join-nodes/index.md @@ -2,6 +2,8 @@ title: Set up high availability description: Docker Universal Control plane has support for high availability. Learn how to set up your installation to ensure it tolerates failures. keywords: ucp, high availability, replica +redirect_from: +- /ee/ucp/admin/configure/set-up-high-availability/ --- Docker Universal Control Plane is designed for high availability (HA). You can diff --git a/ee/ucp/admin/configure/metrics-descriptions.md b/ee/ucp/admin/configure/metrics-descriptions.md new file mode 100644 index 0000000000..71799ef204 --- /dev/null +++ b/ee/ucp/admin/configure/metrics-descriptions.md @@ -0,0 +1,86 @@ +--- +description: Using UCP cluster metrics with Prometheus +keywords: prometheus, metrics, ucp +title: Using UCP cluster metrics with Prometheus +redirect_from: +- /engine/admin/prometheus/ +--- + +# UCP metrics + +The following table lists the metrics that UCP exposes in Prometheus, along with descriptions. Note that only the metrics +labeled with `ucp_` are documented. Other metrics are exposed in Prometheus but are not documented. 
+ +| Name | Units | Description | Labels | Metric source | +|---------------------------------------------------------|----------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------|---------------| +| `ucp_controller_services` | number of services | The total number of Swarm services | | Controller | +| `ucp_engine_container_cpu_percent` | percentage | The percentage of CPU time this container is using. | container labels | Node | +| `ucp_engine_container_cpu_total_time_nanoseconds` | nanoseconds | Total CPU time used by this container in nanoseconds | container labels | Node | +| `ucp_engine_container_health` | 0.0 or 1.0 | Whether or not this container is healthy, according to its healthcheck. Note that if this value is 0, it just means that the container is not reporting healthy; it might not have a healthcheck defined at all, or its healthcheck might not have returned any results yet | container labels | Node | +| `ucp_engine_container_memory_max_usage_bytes` | bytes | Maximum memory used by this container in bytes | container labels | Node | +| `ucp_engine_container_memory_usage_bytes` | bytes | Current memory used by this container in bytes | container labels | Node | +| `ucp_engine_container_memory_usage_percent` | percentage | Percentage of total node memory currently being used by this container | container labels | Node | +| `ucp_engine_container_network_rx_bytes_total` | bytes | Number of bytes received by this container on this network in the last sample | container networking labels | Node | +| `ucp_engine_container_network_rx_dropped_packets_total` | number of packets | Number of packets bound for this container on this network that were dropped in the last sample | 
container networking labels | Node | +| `ucp_engine_container_network_rx_errors_total` | number of errors | Number of received network errors for this container on this network in the last sample | container networking labels | Node | +| `ucp_engine_container_network_rx_packets_total` | number of packets | Number of received packets for this container on this network in the last sample | container networking labels | Node | +| `ucp_engine_container_network_tx_bytes_total` | bytes | Number of bytes sent by this container on this network in the last sample | container networking labels | Node | +| `ucp_engine_container_network_tx_dropped_packets_total` | number of packets | Number of packets sent from this container on this network that were dropped in the last sample | container networking labels | Node | +| `ucp_engine_container_network_tx_errors_total` | number of errors | Number of sent network errors for this container on this network in the last sample | container networking labels | Node | +| `ucp_engine_container_network_tx_packets_total` | number of packets | Number of sent packets for this container on this network in the last sample | container networking labels | Node | +| `ucp_engine_container_unhealth` | 0.0 or 1.0 | Whether or not this container is unhealthy, according to its healthcheck. Note that if this value is 0, it just means that the container is not reporting unhealthy; it might not have a healthcheck defined at all, or its healthcheck might not have returned any results yet | container labels | Node | +| `ucp_engine_containers` | number of containers | Total number of containers on this node | node labels | Node | +| `ucp_engine_cpu_total_time_nanoseconds` | nanoseconds | System CPU time used by this container in nanoseconds | container labels | Node | +| `ucp_engine_disk_free_bytes` | bytes | Free disk space on the Docker root directory on this node in bytes. 
Note that this metric is not available for Windows nodes | node labels | Node | +| `ucp_engine_disk_total_bytes` | bytes | Total disk space on the Docker root directory on this node in bytes. Note that this metric is not available for Windows nodes | node labels | Node | +| `ucp_engine_images` | number of images | Total number of images on this node | node labels | Node | +| `ucp_engine_memory_total_bytes` | bytes | Total amount of memory on this node in bytes | node labels | Node | +| `ucp_engine_networks` | number of networks | Total number of networks on this node | node labels | Node | +| `ucp_engine_node_health` | 0.0 or 1.0 | Whether or not this node is healthy, as determined by UCP | nodeName: node name, nodeAddr: node IP address | Controller | +| `ucp_engine_num_cpu_cores` | number of cores | Number of CPU cores on this node | node labels | Node | +| `ucp_engine_pod_container_ready` | 0.0 or 1.0 | Whether or not this container in a Kubernetes pod is ready, as determined by its readiness probe. | pod labels | Controller | +| `ucp_engine_pod_ready` | 0.0 or 1.0 | Whether or not this Kubernetes pod is ready, as determined by its readiness probe. | pod labels | Controller | +| `ucp_engine_volumes` | number of volumes | Total number of volumes on this node | node labels | Node | + +## Metrics labels + +Metrics exposed by UCP in Prometheus have standardized labels, depending on the resource that they are measuring. 
+The following table lists some of the labels that are used, along with their values: + +### Container labels + +| Label name | Value | +|--------------------|---------------------------------------------------------------------------------------------| +| `collection` | The collection ID of the collection this container is in, if any | +| `container` | The ID of this container | +| `image` | The name of this container's image | +| `manager` | "true" if the container's node is a UCP manager, "false" otherwise | +| `name` | The name of the container | +| `podName` | If this container is part of a Kubernetes pod, this is the pod's name | +| `podNamespace` | If this container is part of a Kubernetes pod, this is the pod's namespace | +| `podContainerName` | If this container is part of a Kubernetes pod, this is the container's name in the pod spec | +| `service` | If this container is part of a Swarm service, this is the service ID | +| `stack` | If this container is part of a Docker compose stack, this is the name of the stack | + +### Container networking labels + +The following metrics measure network activity for a given network attached to a given +container. They have the same labels as Container labels, with one addition: + +| Label name | Value | +|------------|-----------------------| +| `network` | The ID of the network | + +### Node labels + +| Label name | Value | +|------------|--------------------------------------------------------| +| `manager` | "true" if the node is a UCP manager, "false" otherwise | + +## Metric source + +UCP exports metrics on every node and also exports additional metrics from +every controller. The metrics that are exported from controllers are +cluster-scoped, for example, the total number of Swarm services. Metrics that +are exported from nodes are specific to those nodes, for example, the total memory +on that node. 
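The metrics above are served in the standard Prometheus text exposition format. As a rough sketch of how the names and labels documented here fit together, the following Python snippet filters a scrape for `ucp_`-prefixed series. The sample scrape lines and label values are invented for illustration; real UCP output carries the fuller label sets shown in the tables above.

```python
import re

# Invented sample scrape in Prometheus text exposition format.
SAMPLE_SCRAPE = """\
# HELP ucp_engine_containers Total number of containers on this node
ucp_engine_containers{manager="true"} 12
ucp_engine_node_health{nodeName="node-1",nodeAddr="10.0.0.5"} 1.0
go_goroutines 42
"""

# metric_name{optional,labels} value
LINE_RE = re.compile(
    r'^(?P<name>[a-zA-Z_:][a-zA-Z0-9_:]*)(?:\{(?P<labels>[^}]*)\})?\s+(?P<value>\S+)$'
)

def ucp_metrics(scrape_text):
    """Return {metric_name: float_value} for ucp_-prefixed series only."""
    out = {}
    for line in scrape_text.splitlines():
        line = line.strip()
        if not line or line.startswith('#'):
            continue  # skip blank lines and HELP/TYPE comments
        m = LINE_RE.match(line)
        if m and m.group('name').startswith('ucp_'):
            out[m.group('name')] = float(m.group('value'))
    return out

print(ucp_metrics(SAMPLE_SCRAPE))
# {'ucp_engine_containers': 12.0, 'ucp_engine_node_health': 1.0}
```

Non-UCP series such as `go_goroutines` are dropped, matching the note above that only `ucp_`-labeled metrics are documented.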
diff --git a/ee/ucp/admin/configure/restrict-services-to-worker-nodes.md b/ee/ucp/admin/configure/restrict-services-to-worker-nodes.md index f8e673d283..358cdcdfe1 100644 --- a/ee/ucp/admin/configure/restrict-services-to-worker-nodes.md +++ b/ee/ucp/admin/configure/restrict-services-to-worker-nodes.md @@ -12,8 +12,10 @@ If a user deploys a malicious service that can affect the node where it is running, it won't be able to affect other nodes in the cluster, or any cluster management functionality. +## Swarm Workloads + To restrict users from deploying to manager nodes, log in with administrator -credentials to the UCP web UI, navigate to the **Admin Settings** +credentials to the UCP web interface, navigate to the **Admin Settings** page, and choose **Scheduler**. ![](../../images/restrict-services-to-worker-nodes-1.png){: .with-border} @@ -24,4 +26,82 @@ or not. Having a grant with the `Scheduler` role against the `/` collection takes precedence over any other grants with `Node Schedule` on subcollections. +## Kubernetes Workloads +By default, Universal Control Plane clusters take advantage of [Taints and +Tolerations](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/) +to prevent a user's workload from being deployed onto UCP manager or DTR nodes. + +You can view this taint by running: + +```bash +$ kubectl get nodes -o json | jq -r '.items[].spec.taints | select(. != null) | .[]' +{ + "effect": "NoSchedule", + "key": "com.docker.ucp.manager" +} +``` + +> **Note**: Workloads deployed by an Administrator in the `kube-system` namespace do +> not follow these scheduling constraints. If an Administrator deploys a +> workload in the `kube-system` namespace, a toleration is applied to bypass +> this taint, and the workload is scheduled on all node types.
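The taint lookup that the `kubectl ... | jq` command performs can be sketched in a few lines of Python over the same JSON shape. The node names below are invented for illustration, and the sample is trimmed to only the fields the check needs:

```python
# The taint UCP places on manager and DTR nodes.
UCP_MANAGER_TAINT = {"effect": "NoSchedule", "key": "com.docker.ucp.manager"}

# Shape of `kubectl get nodes -o json`, trimmed; node names are invented.
nodes = {
    "items": [
        {"metadata": {"name": "manager-1"},
         "spec": {"taints": [UCP_MANAGER_TAINT]}},
        {"metadata": {"name": "worker-1"},
         "spec": {}},  # worker nodes carry no UCP taint
    ]
}

def tainted_nodes(node_list):
    """Names of nodes carrying the UCP manager NoSchedule taint."""
    return [
        n["metadata"]["name"]
        for n in node_list["items"]
        if UCP_MANAGER_TAINT in n.get("spec", {}).get("taints", [])
    ]

print(tainted_nodes(nodes))  # ['manager-1']
```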
+ +### Allow Administrators to Schedule on Manager / DTR Nodes + +To allow Administrators to deploy workloads across all node types, an +Administrator can tick the "Allow administrators to deploy containers on UCP +managers or nodes running DTR" box in the UCP web interface. + +![](../../images/restrict-services-to-worker-nodes-2.png){: .with-border} + +For all new workloads deployed by Administrators after this box has been +ticked, UCP will apply a toleration to your workloads to allow the pods to be +scheduled on all node types. + +For existing workloads, the Administrator will need to edit the Pod +specification through `kubectl edit ` or the UCP web interface, and add +the following toleration: + +```yaml +tolerations: +- key: "com.docker.ucp.manager" + operator: "Exists" +``` + +You can check that this has been applied successfully by running: + +```bash +$ kubectl get -o json | jq -r '.spec.template.spec.tolerations | .[]' +{ + "key": "com.docker.ucp.manager", + "operator": "Exists" +} +``` + +### Allow Users and Service Accounts to Schedule on Manager / DTR Nodes + +To allow Kubernetes Users and Service Accounts to deploy workloads across all +node types in your cluster, an Administrator will need to tick "Allow all +authenticated users, including service accounts, to schedule on all nodes, +including UCP managers and DTR nodes." in the UCP web interface. + +![](../../images/restrict-services-to-worker-nodes-3.png){: .with-border} + +For all new workloads deployed by Kubernetes Users after this box has been +ticked, UCP will apply a toleration to your workloads to allow the pods to be +scheduled on all node types. For existing workloads, the User would need to edit +the Pod specification as detailed above in the "Allow Administrators to Schedule on +Manager / DTR Nodes" section.
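The scheduling rule at work in these settings is simple: a pod can land on a tainted node only if one of its tolerations matches the taint. The following Python sketch shows a simplified version of that matching logic; the full Kubernetes rules also compare effects and handle empty keys, so treat this only as an illustration of why the `Exists` toleration above suffices:

```python
def tolerates(toleration, taint):
    """Simplified Kubernetes toleration matching: 'Exists' matches any taint
    with the same key; 'Equal' (the default) also requires matching values."""
    if toleration.get("operator") == "Exists":
        return toleration.get("key") == taint["key"]
    return (toleration.get("key") == taint["key"]
            and toleration.get("value") == taint.get("value"))

manager_taint = {"key": "com.docker.ucp.manager", "effect": "NoSchedule"}

# The toleration UCP injects, per the YAML snippet above.
ucp_toleration = {"key": "com.docker.ucp.manager", "operator": "Exists"}

print(tolerates(ucp_toleration, manager_taint))                          # True
print(tolerates({"key": "other", "operator": "Exists"}, manager_taint))  # False
```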
+ +There is a NoSchedule taint on UCP managers and DTR nodes. If you have +scheduling on managers/workers disabled in the UCP scheduling options, the +toleration for that taint will not be applied to deployments, so they +will not schedule on those nodes, unless the Kubernetes workload is deployed in the +`kube-system` namespace. + +## Where to go next + +- [Deploy an Application Package](/ee/ucp/deploy-application-package/) +- [Deploy a Swarm Workload](/ee/ucp/swarm/) +- [Deploy a Kubernetes Workload](/ee/ucp/kubernetes/) diff --git a/ee/ucp/admin/configure/ucp-configuration-file.md b/ee/ucp/admin/configure/ucp-configuration-file.md index 2af248e983..78ce30c8a5 100644 --- a/ee/ucp/admin/configure/ucp-configuration-file.md +++ b/ee/ucp/admin/configure/ucp-configuration-file.md @@ -112,6 +112,8 @@ Configures audit logging options for UCP components. Specifies scheduling options and the default orchestrator for new nodes. +> **Note**: If you run a `kubectl` command, such as `kubectl describe nodes`, to view scheduling rules on Kubernetes nodes, the output does not reflect what is configured in UCP Admin settings. UCP uses taints to control container scheduling on nodes, which is unrelated to kubectl's `Unschedulable` boolean flag. + | Parameter | Required | Description | |:------------------------------|:---------|:-------------------------------------------------------------------------------------------------------------------------------------------| | `enable_admin_ucp_scheduling` | no | Set to `true` to allow admins to schedule containers on manager nodes. The default is `false`. | @@ -181,7 +183,7 @@ components. Assigning these values overrides the settings in a container's | `metrics_retention_time` | no | Adjusts the metrics retention time. | | `metrics_scrape_interval` | no | Sets the interval for how frequently managers gather metrics from nodes in the cluster.
| | `metrics_disk_usage_interval` | no | Sets the interval for how frequently storage metrics are gathered. This operation can be expensive when large volumes are present. | -| `rethinkdb_cache_size` | no | Sets the size of the cache used by UCP's RethinkDB servers. The default is 512MB, but leaving this field empty or specifying `auto` instructs RethinkDB to determine a cache size automatically. | +| `rethinkdb_cache_size` | no | Sets the size of the cache used by UCP's RethinkDB servers. The default is 1GB, but leaving this field empty or specifying `auto` instructs RethinkDB to determine a cache size automatically. | | `cloud_provider` | no | Set the cloud provider for the kubernetes cluster. | | `pod_cidr` | yes | Sets the subnet pool from which the IP for the Pod should be allocated from the CNI ipam plugin. Default is `192.168.0.0/16`. | | `calico_mtu` | no | Set the MTU (maximum transmission unit) size for the Calico plugin. | diff --git a/ee/ucp/admin/configure/use-nfs-volumes.md b/ee/ucp/admin/configure/use-nfs-volumes.md index b4d9fadb49..821a94c827 100644 --- a/ee/ucp/admin/configure/use-nfs-volumes.md +++ b/ee/ucp/admin/configure/use-nfs-volumes.md @@ -8,9 +8,9 @@ Docker UCP supports Network File System (NFS) persistent volumes for Kubernetes. To enable this feature on a UCP cluster, you need to set up an NFS storage volume provisioner. -> Kubernetes storage drivers +> ### Kubernetes storage drivers > -> Currently, NFS is the only Kubernetes storage driver that UCP supports. +>NFS is one of the Kubernetes storage drivers that UCP supports. See [Kubernetes Volume Drivers](https://success.docker.com/article/compatibility-matrix#kubernetesvolumedrivers) in the Compatibility Matrix for the full list. 
{: important} ## Enable NFS volume provisioning diff --git a/ee/ucp/admin/install/install-offline.md b/ee/ucp/admin/install/install-offline.md index d1d1aa9165..40a0fbc209 100644 --- a/ee/ucp/admin/install/install-offline.md +++ b/ee/ucp/admin/install/install-offline.md @@ -57,7 +57,7 @@ For each machine that you want to manage with UCP: `docker load` command, to load the Docker images from the tar archive: ```bash - $ docker load < ucp.tar.gz + $ docker load -i ucp.tar.gz ``` Follow the same steps for the DTR binaries. diff --git a/ee/ucp/admin/install/plan-installation.md b/ee/ucp/admin/install/plan-installation.md index 7e19cf6341..d3b7e0bfba 100644 --- a/ee/ucp/admin/install/plan-installation.md +++ b/ee/ucp/admin/install/plan-installation.md @@ -42,12 +42,22 @@ this. ## Avoid IP range conflicts +The `service-cluster-ip-range` Kubernetes API Server flag is currently set to `10.96.0.0/16` and cannot be changed. + Swarm uses a default address pool of `10.0.0.0/16` for its overlay networks. If this conflicts with your current network implementation, please use a custom IP address pool. To specify a custom IP address pool, use the `--default-address-pool` command line option during [Swarm initialization](../../../../engine/swarm/swarm-mode.md). -**NOTE:** Currently, the UCP installation process does not support this flag. To deploy with a custom IP pool, Swarm must first be installed using this flag and UCP must be installed on top of it. +> **Note**: Currently, the UCP installation process does not support this flag. To deploy with a custom IP pool, Swarm must first be installed using this flag and UCP must be installed on top of it. Kubernetes uses a default cluster IP pool for pods that is `192.168.0.0/16`. If it conflicts with your current networks, please use a custom IP pool by specifying `--pod-cidr` during UCP installation. 
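Before installing, it can be worth checking a candidate address pool against the reserved ranges named above. Python's standard `ipaddress` module makes the overlap test a one-liner; the candidate CIDRs below are arbitrary examples:

```python
import ipaddress

# Ranges named in this section: the fixed Kubernetes service range,
# the default Swarm pool, and the default Kubernetes pod pool.
RESERVED = {
    "service-cluster-ip-range": ipaddress.ip_network("10.96.0.0/16"),
    "swarm default-address-pool": ipaddress.ip_network("10.0.0.0/16"),
    "kubernetes pod-cidr": ipaddress.ip_network("192.168.0.0/16"),
}

def conflicts(candidate_cidr):
    """Names of reserved ranges that overlap the candidate network."""
    candidate = ipaddress.ip_network(candidate_cidr)
    return [name for name, net in RESERVED.items() if candidate.overlaps(net)]

print(conflicts("10.96.128.0/17"))  # ['service-cluster-ip-range']
print(conflicts("172.16.0.0/16"))   # []
```

A non-empty result means you should pick a different pool for `--default-address-pool` or `--pod-cidr`.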
+## Avoid firewall conflicts + +For SUSE Linux Enterprise Server 12 SP2 (SLES12), the `FW_LO_NOTRACK` flag is turned on by default in the openSUSE firewall. This speeds up packet processing on the loopback interface, and breaks certain firewall setups that need to redirect outgoing packets via custom rules on the local machine. + +To turn off the `FW_LO_NOTRACK` option, edit the `/etc/sysconfig/SuSEfirewall2` file and set `FW_LO_NOTRACK="no"`. Save the file and restart the firewall or reboot. + +For SUSE Linux Enterprise Server 12 SP3, the default value for `FW_LO_NOTRACK` was changed to `no`. + ## Time synchronization In distributed systems like Docker UCP, time synchronization is critical diff --git a/ee/ucp/admin/install/system-requirements.md b/ee/ucp/admin/install/system-requirements.md index 3822c99c2c..373e7665d9 100644 --- a/ee/ucp/admin/install/system-requirements.md +++ b/ee/ucp/admin/install/system-requirements.md @@ -31,7 +31,7 @@ You can install UCP on-premises or on a cloud provider. Common requirements: * 4 vCPUs for manager nodes * 25-100GB of free disk space -Note that Windows container images are typically larger than Linux ontainer images. For +Note that Windows container images are typically larger than Linux container images. For this reason, you should provision more local storage for Windows nodes and for any DTR setups that store Windows container images.
@@ -70,6 +70,7 @@ host types: | managers | TCP 6443 (configurable) | External, Internal | Port for Kubernetes API server endpoint | | managers, workers | TCP 6444 | Self | Port for Kubernetes API reverse proxy | | managers, workers | TCP, UDP 7946 | Internal | Port for gossip-based clustering | +| managers, workers | TCP 9099 | Self | Port for Calico health check | | managers, workers | TCP 10250 | Internal | Port for Kubelet | | managers, workers | TCP 12376 | Internal | Port for a TLS authentication proxy that provides access to the Docker Engine | | managers, workers | TCP 12378 | Self | Port for Etcd reverse proxy | @@ -83,6 +84,14 @@ host types: | managers | TCP 12386 | Internal | Port for the authentication worker | | managers | TCP 12388 | Internal | Internal Port for the Kubernetes API Server | +## Avoid firewall conflicts + +For SUSE Linux Enterprise Server 12 SP2 (SLES12), the `FW_LO_NOTRACK` flag is turned on by default in the openSUSE firewall. This speeds up packet processing on the loopback interface, and breaks certain firewall setups that need to redirect outgoing packets via custom rules on the local machine. + +To turn off the `FW_LO_NOTRACK` option, edit the `/etc/sysconfig/SuSEfirewall2` file and set `FW_LO_NOTRACK="no"`. Save the file and restart the firewall or reboot. + +For SUSE Linux Enterprise Server 12 SP3, the default value for `FW_LO_NOTRACK` was changed to `no`.
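The `SuSEfirewall2` edit described above is a one-line change to a shell-style config file. As a sketch, the rewrite can be expressed like this, applied here to an in-memory string rather than the real `/etc/sysconfig/SuSEfirewall2` (the other variable in the sample is invented):

```python
import re

def set_fw_lo_notrack(config_text, value="no"):
    """Rewrite the FW_LO_NOTRACK assignment, leaving the rest of the file intact."""
    return re.sub(r'(?m)^FW_LO_NOTRACK=".*"$',
                  f'FW_LO_NOTRACK="{value}"', config_text)

# Invented sample config contents for illustration.
sample = 'FW_DEV_EXT="eth0"\nFW_LO_NOTRACK="yes"\n'
print(set_fw_lo_notrack(sample), end="")
# FW_DEV_EXT="eth0"
# FW_LO_NOTRACK="no"
```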
+ ## Enable ESP traffic For overlay networks with encryption to work, you need to ensure that diff --git a/ee/ucp/admin/install/upgrade-offline.md b/ee/ucp/admin/install/upgrade-offline.md index 8e0a23d4fc..660a3e1085 100644 --- a/ee/ucp/admin/install/upgrade-offline.md +++ b/ee/ucp/admin/install/upgrade-offline.md @@ -47,7 +47,7 @@ For each machine that you want to manage with UCP: `docker load` command, to load the Docker images from the tar archive: ```bash - $ docker load < ucp.tar.gz + $ docker load -i ucp.tar.gz ``` ## Upgrade UCP diff --git a/ee/ucp/admin/install/upgrade.md b/ee/ucp/admin/install/upgrade.md index 99e6360295..15d8a4f15a 100644 --- a/ee/ucp/admin/install/upgrade.md +++ b/ee/ucp/admin/install/upgrade.md @@ -29,7 +29,7 @@ Learn about [UCP system requirements](system-requirements.md). Ensure that your cluster nodes meet the minimum requirements for port openings. [Ports used](system-requirements.md/#ports-used) are documented in the UCP system requirements. -> Note: If you are upgrading a cluster to UCP 3.0.2 or higher on Microsoft +> **Note**: If you are upgrading a cluster to UCP 3.0.2 or higher on Microsoft > Azure then please ensure all of the Azure [prerequisites](install-on-azure.md/#azure-prerequisites) > are met. @@ -56,17 +56,17 @@ to install the Docker Enterprise Edition. Starting with the manager nodes, and then worker nodes: 1. Log into the node using ssh. -2. Upgrade the Docker Engine to version 17.06.2-ee-8 or higher. See [Upgrade Docker EE](https://docs.docker.com/ee/upgrade/). +2. Upgrade the Docker Engine to version 18.09.0 or higher. See [Upgrade Docker EE](https://docs.docker.com/ee/upgrade/). 3. Make sure the node is healthy. - In your browser, navigate to the **Nodes** page in the UCP web UI, + In your browser, navigate to **Nodes** in the UCP web interface, and check that the node is healthy and is part of the cluster. ## Upgrade UCP -You can upgrade UCP from the web UI or the CLI. 
+You can upgrade UCP from the web or the command line interface. -### Use the UI to perform an upgrade +### Use the web interface to perform an upgrade When an upgrade is available for a UCP installation, a banner appears. @@ -77,17 +77,17 @@ It can be found under the **Upgrade** tab of the **Admin Settings** section. ![](../../images/upgrade-ucp-2.png){: .with-border} -In the **Available Versions** dropdown, select **3.0.0** and click +In the **Available Versions** dropdown, select the version you want to update to and click **Upgrade UCP**. -During the upgrade, the UI will be unavailable, and you should wait +During the upgrade, the web interface will be unavailable, and you should wait until completion before continuing to interact with it. When the upgrade -completes, you'll see a notification that a newer version of the UI -is available and a browser refresh is required to see the latest UI. +completes, you'll see a notification that a newer version of the web interface +is available and a browser refresh is required to see it. ### Use the CLI to perform an upgrade -To upgrade from the CLI, log into a UCP manager node using ssh, and run: +To upgrade from the CLI, log into a UCP manager node using SSH, and run: ``` # Get the latest version of UCP @@ -100,10 +100,10 @@ docker container run --rm -it \ upgrade --interactive ``` -This runs the upgrade command in interactive mode, so that you are prompted -for any necessary configuration values. +This runs the upgrade command in interactive mode, which will prompt you +for required configuration values. -Once the upgrade finishes, navigate to the UCP web UI and make sure that +Once the upgrade finishes, navigate to the UCP web interface and make sure that all the nodes managed by UCP are healthy. 
## Where to go next diff --git a/ee/ucp/admin/monitor-and-troubleshoot/index.md b/ee/ucp/admin/monitor-and-troubleshoot/index.md index cfc1ff3b85..d269c4dbe9 100644 --- a/ee/ucp/admin/monitor-and-troubleshoot/index.md +++ b/ee/ucp/admin/monitor-and-troubleshoot/index.md @@ -70,6 +70,10 @@ To enable this feature, DTR 2.6 is required and single sign-on with UCP must be ![example of vulnerability information in UCP](../../images/example-of-vuln-data-in-ucp.png) +## Monitoring disk usage + +Web UI disk usage metrics, including free space, only reflect the Docker managed portion of the filesystem: `/var/lib/docker`. To monitor the total space available on each filesystem of a UCP worker or manager, you must deploy a third party monitoring solution to monitor the operating system. + ## Where to go next - [Troubleshoot with logs](troubleshoot-with-logs.md) diff --git a/ee/ucp/authorization/ee-standard.md b/ee/ucp/authorization/ee-standard.md index 8e5e9d7743..f43e08f4f3 100644 --- a/ee/ucp/authorization/ee-standard.md +++ b/ee/ucp/authorization/ee-standard.md @@ -53,7 +53,7 @@ built-in collection, `/Shared`. Other collections are also being created to enable shared `db` applications. -> **Note:** For increased security with node-based isolation, use Docker +> **Note**: For increased security with node-based isolation, use Docker > Enterprise Advanced. - `/Shared/mobile` hosts all Mobile applications and resources. @@ -107,7 +107,7 @@ collection boundaries. By assigning multiple grants per team, the Mobile and Payments applications teams can connect to dedicated Database resources through a secure and controlled interface, leveraging Database networks and secrets. -> **Note:** In Docker Enterprise Standard, all resources are deployed across the +> **Note**: In Docker Enterprise Standard, all resources are deployed across the > same group of UCP worker nodes. Node segmentation is provided in Docker > Enterprise Advanced and discussed in the [next tutorial](ee-advanced.md). 
diff --git a/ee/ucp/authorization/group-resources.md b/ee/ucp/authorization/group-resources.md index 50b57f6b1d..67f3fdbad4 100644 --- a/ee/ucp/authorization/group-resources.md +++ b/ee/ucp/authorization/group-resources.md @@ -40,7 +40,14 @@ can be nested inside one another, to create hierarchies. You can nest collections inside one another. If a user is granted permissions for one collection, they'll have permissions for its child collections, -pretty much like a directory structure.. +pretty much like a directory structure. As of UCP `3.1`, the ability to create a nested +collection of more than 2 layers deep within the root `/Swarm/` collection has been deprecated. + +The following image provides two examples of nested collections with the recommended maximum +of two nesting layers. The first example illustrates an environment-oriented collection, and the second +example illustrates an application-oriented collection. + +![](../images/nested-collection.png){: .with-border} For a child collection, or for a user who belongs to more than one team, the system concatenates permissions from multiple roles into an "effective role" for @@ -57,8 +64,8 @@ Docker EE provides a number of built-in collections. | `/` | Path to all resources in the Swarm cluster. Resources not in a collection are put here. | | `/System` | Path to UCP managers, DTR nodes, and UCP/DTR system services. By default, only admins have access, but this is configurable. | | `/Shared` | Default path to all worker nodes for scheduling. In Docker EE Standard, all worker nodes are located here. In [Docker EE Advanced](https://www.docker.com/enterprise-edition), worker nodes can be moved and [isolated](isolate-nodes.md). | -| `/Shared/Private/` | Path to a user's private collection. | -| `/Shared/Legacy` | Path to the access control labels of legacy versions (UCP 2.1 and lower). | +| `/Shared/Private/` | Path to a user's private collection. 
Note that private collections are not created until the user logs in for the first time. | +| `/Shared/Legacy` | Path to the access control labels of legacy versions (UCP 2.1 and lower). | ### Default collections diff --git a/ee/ucp/images/interlock-install-3.png b/ee/ucp/images/interlock-install-3.png index c7ea730e55..1a8284f56a 100644 Binary files a/ee/ucp/images/interlock-install-3.png and b/ee/ucp/images/interlock-install-3.png differ diff --git a/ee/ucp/images/nested-collection.png b/ee/ucp/images/nested-collection.png new file mode 100644 index 0000000000..6da2677646 Binary files /dev/null and b/ee/ucp/images/nested-collection.png differ diff --git a/ee/ucp/images/restrict-services-to-worker-nodes-1.png b/ee/ucp/images/restrict-services-to-worker-nodes-1.png index 70c3382297..67b984982a 100644 Binary files a/ee/ucp/images/restrict-services-to-worker-nodes-1.png and b/ee/ucp/images/restrict-services-to-worker-nodes-1.png differ diff --git a/ee/ucp/images/restrict-services-to-worker-nodes-2.png b/ee/ucp/images/restrict-services-to-worker-nodes-2.png new file mode 100644 index 0000000000..860bafeed2 Binary files /dev/null and b/ee/ucp/images/restrict-services-to-worker-nodes-2.png differ diff --git a/ee/ucp/images/restrict-services-to-worker-nodes-3.png b/ee/ucp/images/restrict-services-to-worker-nodes-3.png new file mode 100644 index 0000000000..3bbd56becb Binary files /dev/null and b/ee/ucp/images/restrict-services-to-worker-nodes-3.png differ diff --git a/ee/ucp/images/upgrade-ucp-1.png b/ee/ucp/images/upgrade-ucp-1.png index 02feffe51f..441ae33737 100644 Binary files a/ee/ucp/images/upgrade-ucp-1.png and b/ee/ucp/images/upgrade-ucp-1.png differ diff --git a/ee/ucp/images/upgrade-ucp-2.png b/ee/ucp/images/upgrade-ucp-2.png index ed55f22e90..9e4a5bd30c 100644 Binary files a/ee/ucp/images/upgrade-ucp-2.png and b/ee/ucp/images/upgrade-ucp-2.png differ diff --git a/ee/ucp/interlock/usage/labels-reference.md b/ee/ucp/interlock/usage/labels-reference.md index 
dc01811e0f..834ad6f13f 100644 --- a/ee/ucp/interlock/usage/labels-reference.md +++ b/ee/ucp/interlock/usage/labels-reference.md @@ -18,7 +18,6 @@ The following labels are available for you to use in swarm services: | `com.docker.lb.network` | Name of network the proxy service should attach to for upstream connectivity. | `app-network-a` | | `com.docker.lb.context_root` | Context or path to use for the application. | `/app` | | `com.docker.lb.context_root_rewrite` | Boolean to enable rewrite for the context root. | `true` | -| `com.docker.lb.ssl_only` | Boolean to force SSL for application. | `true` | | `com.docker.lb.ssl_cert` | Docker secret to use for the SSL certificate. | `example.com.cert` | | `com.docker.lb.ssl_key` | Docker secret to use for the SSL key. | `example.com.key` | | `com.docker.lb.websocket_endpoints` | Comma separated list of endpoints to configure to be upgraded for websockets. | `/ws,/foo` | diff --git a/ee/ucp/interlock/usage/sessions.md b/ee/ucp/interlock/usage/sessions.md index f1104ec486..46f43b65a6 100644 --- a/ee/ucp/interlock/usage/sessions.md +++ b/ee/ucp/interlock/usage/sessions.md @@ -69,7 +69,7 @@ which are pinned to the same instance. If you make a few requests you will noti # IP Hashing In this example we show how to configure sticky sessions using client IP hashing. This is not as flexible or consistent -as cookies but enables workarounds for some applications that cannot use the other method. +as cookies but enables workarounds for some applications that cannot use the other method. When using IP hashing you should reconfigure Interlock proxy to use [host mode networking](../deploy/host-mode-networking.md) because the default `ingress` networking mode uses SNAT which obscures client IP addresses. First we will create an overlay network so that service traffic is isolated and secure: @@ -125,7 +125,7 @@ $> curl -vs -H "Host: demo.local" http://127.0.0.1/ping You can use `docker service scale demo=10` to add some more replicas. 
Once scaled, you will notice that requests are pinned to a specific backend. -Note: due to the way the IP hashing works for extensions, you will notice a new upstream address when scaling replicas. This is -expected as internally the proxy uses the new set of replicas to decide on a backend on which to pin. Once the upstreams are -determined a new "sticky" backend will be chosen and that will be the dedicated upstream. +> **Note**: due to the way the IP hashing works for extensions, you will notice a new upstream address when scaling replicas. This is +> expected as internally the proxy uses the new set of replicas to decide on a backend on which to pin. Once the upstreams are +> determined a new "sticky" backend will be chosen and that will be the dedicated upstream. diff --git a/ee/ucp/interlock/usage/tls.md b/ee/ucp/interlock/usage/tls.md index 7c52129323..aedd6ecabf 100644 --- a/ee/ucp/interlock/usage/tls.md +++ b/ee/ucp/interlock/usage/tls.md @@ -143,7 +143,7 @@ using a version of `curl` that includes the SNI header with insecure requests. If this doesn't happen, `curl` displays an error saying that the SSL handshake was aborterd. -> ***NOTE:*** Currently there is no way to update expired certificates using this method. +> **Note**: Currently there is no way to update expired certificates using this method. > The proper way is to create a new secret then update the corresponding service. ## Let your service handle TLS diff --git a/ee/ucp/interlock/usage/websockets.md b/ee/ucp/interlock/usage/websockets.md index ec2b1b46b5..5aa8a8c18a 100644 --- a/ee/ucp/interlock/usage/websockets.md +++ b/ee/ucp/interlock/usage/websockets.md @@ -27,8 +27,8 @@ $> docker service create \ ehazlett/websocket-chat ``` -Note: for this to work you must have an entry for `demo.local` in your local hosts (i.e. `/etc/hosts`) file. -This uses the browser for websocket communication so you will need to have an entry or use a routable domain. 
+> **Note**: for this to work you must have an entry for `demo.local` in your local hosts (i.e. `/etc/hosts`) file. +> This uses the browser for websocket communication so you will need to have an entry or use a routable domain. Interlock will detect once the service is available and publish it. Once the tasks are running and the proxy service has been updated the application should be available via `http://demo.local`. Open diff --git a/ee/ucp/kubernetes/configure-aws-storage.md b/ee/ucp/kubernetes/configure-aws-storage.md index 0a5f4b7d88..179edc536c 100644 --- a/ee/ucp/kubernetes/configure-aws-storage.md +++ b/ee/ucp/kubernetes/configure-aws-storage.md @@ -32,8 +32,8 @@ Instances must have the following [AWS Identity and Access Management](https://d ### Infrastructure Configuration - Apply the roles and policies to Kubernetes masters and workers as indicated in the above chart. -- EC2 instances must be set to the private DNS hostname of the instance (will typically end in `.internal`) -- EC2 instances must also be labeled with the key `KubernetesCluster` with a matching value across all nodes. +- Set the hostname of the EC2 instances to the private DNS hostname of the instance. See [DNS Hostnames](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-dns.html#vpc-dns-hostnames) and [To change the system hostname without a public DNS name](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-hostname.html#set-hostname-system) for more details. +- Label the EC2 instances with the key `KubernetesCluster` and assign the same value across all nodes, for example, `UCPKubernetesCluster`. ### Cluster Configuration diff --git a/ee/ucp/kubernetes/install-cni-plugin.md b/ee/ucp/kubernetes/install-cni-plugin.md index d91f19981f..ee7c856d0d 100644 --- a/ee/ucp/kubernetes/install-cni-plugin.md +++ b/ee/ucp/kubernetes/install-cni-plugin.md @@ -11,7 +11,7 @@ UCP supports certified third-party Container Networking Interface (CNI) plugins.
built-in [Calico](https://github.com/projectcalico/cni-plugin) plugin, but you can override that and install a Docker certified plugin. -***NOTE:*** The `--cni-installer-url` option is deprecated as of UCP 3.1. It is replaced by the `--unmanaged-cni` option. +> **Note**: The `--cni-installer-url` option is deprecated as of UCP 3.1. It is replaced by the `--unmanaged-cni` option. # Install UCP with a custom CNI plugin @@ -27,9 +27,10 @@ docker container run --rm -it --name ucp \ --unmanaged-cni \ --interactive ``` -***NOTE:*** Setting `--unmanaged-cni` to `true` value installs UCP without a managed CNI plugin. UCP and the -Kubernetes components will be running but pod-to-pod networking will not function until a CNI plugin is manually -installed. This will impact some functionality of UCP until a CNI plugin is running. + +> **Note**: Setting `--unmanaged-cni` to `true` installs UCP without a managed CNI plugin. UCP and the +> Kubernetes components will be running but pod-to-pod networking will not function until a CNI plugin is manually +> installed. This will impact some functionality of UCP until a CNI plugin is running. You must provide a correct YAML installation file for the CNI plugin, but most of the default files work on Docker EE with no modification. diff --git a/ee/ucp/kubernetes/iscsi/dci.sh b/ee/ucp/kubernetes/iscsi/dci.sh deleted file mode 100644 index 75f9d8c6ce..0000000000 --- a/ee/ucp/kubernetes/iscsi/dci.sh +++ /dev/null @@ -1,56 +0,0 @@ -#!/bin/sh - -# Prior to executing this script, user should have -# setup `dci secrets` specific to their cloud provider. -# I used AWS and setup access keys prior. - -export DCI_CLOUD=aws -export DCI_DEPLOYMENT=core-storage9 -export DCI_REPOSITORY=dockereng - -# To override the dci container image, use this option -# For most cases, setting this to ${DCI_CLOUD}-local is sufficient.
-export DCI_TAG=aws-9857f6f - -# Initialize cluster -dci cluster init - -# Install a specific development version of UCP -# Perform a few other cluster specific configs. -dci cluster config set use_dev_version true -dci cluster config set docker_ucp_image_repository dockereng -dci cluster config set docker_ucp_version "3.2.0-latest" -dci cluster config set docker_ucp_storage_driver iscsi - -# Preserve install containers. Ansible is case sensitive. 'False', not 'false' -dci cluster config set docker_remove_containers False - -# set log level to debug. -# Use this to test --iscsiadm-path and --iscsidb-path -dci cluster config set docker_ucp_install_args "'--debug'" - -dci cluster config set region us-west-2 -dci cluster config set linux_ucp_worker_count 2 -dci cluster config set linux_dtr_count 0 - -# Apply presets -dci cluster apply-preset "RHEL 7.5" - -dci cluster provision - -# Post provision installs. -dci cluster ssh "sudo yum install -y iscsi-initiator-utils" -dci cluster ssh "sudo modprobe iscsi_tcp" -# master doubles up as iscsi target. So let firewalld allow iscsi traffic. -# Restarting firewalld before installing dockerd is least intrusive; -# else iptable rules get messed up -dci cluster ssh "sudo yum install -y firewalld" -dci cluster ssh "sudo systemctl enable firewalld" -dci cluster ssh "sudo systemctl start firewalld" -dci cluster ssh "sudo firewall-cmd --add-service=iscsi-target --permanent" -dci cluster ssh "sudo firewall-cmd --add-port=18700/tcp --permanent" -dci cluster ssh "sudo firewall-cmd --reload" - -# Install cluster. 
This is essential -# Apply = provision + install -dci cluster install --log-level debug diff --git a/ee/ucp/kubernetes/iscsi/iscsi-provisioner-d.yaml b/ee/ucp/kubernetes/iscsi/iscsi-provisioner-d.yaml deleted file mode 100644 index b5ab0fac57..0000000000 --- a/ee/ucp/kubernetes/iscsi/iscsi-provisioner-d.yaml +++ /dev/null @@ -1,71 +0,0 @@ -kind: ClusterRole -apiVersion: rbac.authorization.k8s.io/v1 -metadata: - name: iscsi-provisioner-runner -rules: - - apiGroups: [""] - resources: ["persistentvolumes"] - verbs: ["get", "list", "watch", "create", "delete"] - - apiGroups: [""] - resources: ["persistentvolumeclaims"] - verbs: ["get", "list", "watch", "update"] - - apiGroups: ["storage.k8s.io"] - resources: ["storageclasses"] - verbs: ["get", "list", "watch"] - - apiGroups: [""] - resources: ["events"] - verbs: ["create", "update", "patch"] ---- -kind: ClusterRoleBinding -apiVersion: rbac.authorization.k8s.io/v1 -metadata: - name: run-iscsi-provisioner -subjects: - - kind: ServiceAccount - name: iscsi-provisioner - namespace: default -roleRef: - kind: ClusterRole - name: iscsi-provisioner-runner - apiGroup: rbac.authorization.k8s.io ---- -apiVersion: v1 -kind: ServiceAccount -metadata: - name: iscsi-provisioner ---- -kind: Deployment -apiVersion: extensions/v1beta1 -metadata: - name: iscsi-provisioner -spec: - replicas: 1 - template: - metadata: - labels: - app: iscsi-provisioner - spec: - containers: - - name: iscsi-provisioner - imagePullPolicy: Always - image: quay.io/external_storage/iscsi-controller:latest - args: - - "start" - env: - - name: PROVISIONER_NAME - value: iscsi-targetd - - name: LOG_LEVEL - value: debug - - name: TARGETD_USERNAME - valueFrom: - secretKeyRef: - name: targetd-account - key: username - - name: TARGETD_PASSWORD - valueFrom: - secretKeyRef: - name: targetd-account - key: password - - name: TARGETD_ADDRESS - value: 172.31.8.88 - serviceAccount: iscsi-provisioner diff --git a/ee/ucp/kubernetes/iscsi/iscsi-pvc.yaml 
b/ee/ucp/kubernetes/iscsi/iscsi-pvc.yaml deleted file mode 100644 index 658a475e2b..0000000000 --- a/ee/ucp/kubernetes/iscsi/iscsi-pvc.yaml +++ /dev/null @@ -1,11 +0,0 @@ -kind: PersistentVolumeClaim -apiVersion: v1 -metadata: - name: iscsi-claim -spec: - storageClassName: "iscsi-targetd-vg-targetd" - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 100Mi diff --git a/ee/ucp/kubernetes/iscsi/iscsi-storageclass.yaml b/ee/ucp/kubernetes/iscsi/iscsi-storageclass.yaml deleted file mode 100644 index ca34f59572..0000000000 --- a/ee/ucp/kubernetes/iscsi/iscsi-storageclass.yaml +++ /dev/null @@ -1,13 +0,0 @@ -kind: StorageClass -apiVersion: storage.k8s.io/v1 -metadata: - name: iscsi-targetd-vg-targetd -provisioner: iscsi-targetd -parameters: - targetPortal: 172.31.8.88 - iqn: iqn.2019-01.org.iscsi.docker:targetd - iscsiInterface: default - volumeGroup: vg-targetd - initiators: iqn.2019-01.com.example:node1 - chapAuthDiscovery: "false" - chapAuthSession: "false" diff --git a/ee/ucp/kubernetes/iscsi/iscsi_target.sh b/ee/ucp/kubernetes/iscsi/iscsi_target.sh deleted file mode 100644 index b6d15748e7..0000000000 --- a/ee/ucp/kubernetes/iscsi/iscsi_target.sh +++ /dev/null @@ -1,45 +0,0 @@ -#!/bin/bash - -# Main customization needed for this script is -# setting the iscsi target_name. Default is -# "iqn.2019-01.org.iscsi.docker:targetd". Feel free to change it, -# as long as it follows IQN naming rules. - -# Run this script as root -if [[ $EUID -ne 0 ]]; then - echo "This script must be run as root" - exit 1 -fi - -# Setup volume group on Loopback device. 
-mkdir /var/lib/loopback -cd /var/lib/loopback -dd if=/dev/zero of=disk.img bs=1G count=2 - -export LOOP=`sudo losetup -f` -losetup $LOOP disk.img -vgcreate vg-targetd $LOOP - - -# Install targetd and targetcli -yum install -y targetcli targetd - -# Enable targetcli -systemctl enable target -systemctl start target - -# Configure targetd - -echo "password: ciao - -# defaults below; uncomment and edit -# if using a thin pool, use / -# e.g vg-targetd/pool -pool_name: vg-targetd -user: admin -ssl: false -target_name: iqn.2019-01.org.iscsi.docker:targetd" > /etc/target/targetd.yaml - -# Enable targetd -systemctl enable targetd -systemctl start targetd diff --git a/ee/ucp/kubernetes/iscsi/iscsi_testing.txt b/ee/ucp/kubernetes/iscsi/iscsi_testing.txt deleted file mode 100644 index 067c8f31f0..0000000000 --- a/ee/ucp/kubernetes/iscsi/iscsi_testing.txt +++ /dev/null @@ -1,73 +0,0 @@ -This page describes the ISCSI testing setup for the UCP Amberjack release. -Note: When using this setup, update the scripts to have IP addresses reflecting your cluster. - - -1. Deploy UCP with ISCSI storage using DCI script, dci.sh -The main thing to note here is that UCP should be invoked with the newly added -"--storage=iscsi" option. There are 2 other options that can be used to -specify the path of the host iscsiadm binary and path of the host iscsi -database, which are "--iscsiadm-path" and "--iscsidb-path" respectively. -The default paths for these are "/usr/sbin/iscsiadm", "/etc/iscsi". -If the hosts in your cluster have iscsi packages installed in a different -path, use these options to customize the paths. - -Another point worth mentioning is that the hosts should have iscsi packages -installed. This is done in the DCI post-provisioning phase. - -2. ISCSI Initiator setup. -In UCP, all worker nodes are configured as ISCSI initiators. -Log in to the worker and configure it as initiator. 
-sudo sh -c 'echo "InitiatorName=iqn.2019-01.com.example:node1" > /etc/iscsi/initiatorname.iscsi' -sudo systemctl restart iscsid - -3. ISCSI target setup. -In my setup, I configure the UCP master to be the ISCSI target as well. -This is purely for ease of testing in the cluster. - -# Copy iscsi_target.sh setup script to master. -- scp -i anusha-dci.pem iscsi_target.sh ec2-user@54.187.2.136:/tmp/iscsi_target.sh - -# login and run script -sudo /tmp/iscsi_target.sh - - -4. kubectl client setup. -# Download UCP client bundle and setup kubectl on the client. Ensure that it works. -$ kubectl version -Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.3", GitCommit:"d2835416544f298c919e2ead3be3d0864b52323b", GitTreeState:"clean", BuildDate:"2018-02-07T12:22:21Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"} -Server Version: version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.0-docker-preview-7", GitCommit:"6f9542764d9cd2eb95ed50b25498f6fe837fac24", GitTreeState:"clean", -BuildDate:"2019-01-15T18:42:46Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"} - - -5. Deploy the iscsi provisioner pod. - -$ export NS=default -$ kubectl create secret generic targetd-account --from-literal=username=admin --from-literal=password=ciao -n $NS - -# Use the internal IP of master: 172.31.8.88 -$ kubectl apply -f iscsi-provisioner-d.yaml -n $NS -clusterrole "iscsi-provisioner-runner" created -clusterrolebinding "run-iscsi-provisioner" created -serviceaccount "iscsi-provisioner" created -deployment "iscsi-provisioner" created - -6. Apply the storage class. -$ kubectl apply -f iscsi-storageclass.yaml -storageclass "iscsi-targetd-vg-targetd" created - -$ kubectl get sc -NAME PROVISIONER AGE -iscsi-targetd-vg-targetd iscsi-targetd 30s - -7. 
Apply the PVC -$ kubectl apply -f iscsi-pvc.yaml -n $NS -persistentvolumeclaim "iscsi-claim" created - -$ kubectl get pvc -NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE -iscsi-claim Bound pvc-b9560992-24df-11e9-9f09-0242ac11000e 100Mi RWO iscsi-targetd-vg-targetd 1m - -8. View the PV -$ kubectl get pv -NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE -pvc-b9560992-24df-11e9-9f09-0242ac11000e 100Mi RWO Delete Bound default/iscsi-claim iscsi-targetd-vg-targetd 36s diff --git a/ee/ucp/release-notes.md b/ee/ucp/release-notes.md index f9535edbc8..7b3be9a3ad 100644 --- a/ee/ucp/release-notes.md +++ b/ee/ucp/release-notes.md @@ -21,6 +21,28 @@ upgrade your installation to the latest release. # Version 3.1 +## 3.1.4 + +2019-02-28 + +**New platforms** +* Added support for SLES 15. +* Added support for Oracle 7.6. + + **Kubernetes** +* Kubernetes has been updated to version 1.11.7. (docker/orca#16157) + + **Bug Fixes** +* Bump the Golang version that is used to build UCP to version 1.10.8. (docker/orca#16068) +* Fixed an issue that caused UCP upgrades to fail with an Interlock deployment. (docker/orca#16009) +* Fixed an issue that caused Windows node ucp-agent(s) to constantly reboot when audit logging is enabled. (docker/orca#16122) +* Fixed an issue to ensure that non-admin user actions (with the RestrictedControl role) against RBAC resources are read-only. (docker/orca#16121) +* Fixed an issue to prevent UCP users from updating services with a port that conflicts with the UCP controller port. (escalation#855) +* Fixed an issue to validate Calico certs expiration dates and update accordingly. (escalation#981) + +**Enhancements** +* Changed packaging and builds for UCP to build bootstrapper last. This avoids the "upgrade available" banner on all UCPs until the entirety of UCP is available. + ## 3.1.3 2019-01-29 @@ -38,6 +60,8 @@ upgrade your installation to the latest release.
* Non-admin users can no longer create `PersistentVolumes` that mount host directories. (docker/orca#15936) * Added support for the limit arg in `docker ps`. (docker/orca#15812) * Fixed an issue with ucp-proxy health check. (docker/orca#15814, docker/orca#15813, docker/orca#16021, docker/orca#15811) + * Fixed an issue with manual creation of a **ClusterRoleBinding** or **RoleBinding** for `User` or `Group` subjects requiring the ID of the user, organization, or team. (docker/orca#14935) + * Fixed an issue in which Kube Rolebindings only worked on UCP User ID and not UCP username. (docker/orca#14935) ### Known issue * By default, Kubelet begins deleting images, starting with the oldest unused images, after exceeding 85% disk space utilization. This causes an issue in an air-gapped environment. (docker/orca#16082) @@ -191,7 +215,9 @@ There are several backward-incompatible changes in the Kubernetes API that may a The following features are deprecated in UCP 3.1. * Collections - * User-created nested collections more than 2 layers deep within the root `/Swarm/` collection are deprecated and will be removed in future versions of the product. In the future, we recommend that at most only two levels of collections be created within UCP under the shared Cluster collection designated as `/Swarm/`. For example, if a production collection is created as a collection under the cluster collection `/Swarm/` as `/Swarm/production/` then at most one level of nestedness should be created, as in `/Swarm/production/app/`. + * The ability to create nested collections more than 2 layers deep within the root `/Swarm/` collection is now deprecated and will not be included in future versions of the product. However, current nested collections with more than 2 layers are still retained. + + * Docker recommends a maximum of two layers when creating collections within UCP under the shared cluster collection designated as `/Swarm/`.
For example, if a production collection called `/Swarm/production` is created under the shared cluster collection, `/Swarm/`, then only one level of nesting should be created: `/Swarm/production/app/`. See [Nested Collections](/ee/ucp/authorization/group-resources/#nested-collections) for more details. * Kubernetes * **PersistentVolumeLabel** admission controller is deprecated in Kubernetes 1.11. This functionality will be migrated to [Cloud Controller Manager](https://kubernetes.io/docs/tasks/administer-cluster/running-cloud-controller/](https://kubernetes.io/docs/tasks/administer-cluster/running-cloud-controller/) @@ -208,6 +234,16 @@ The following features are deprecated in UCP 3.1. # Version 3.0 +## 3.0.10 + +2019-02-28 + + **Bug Fixes** +* Bump the Golang version that is used to build UCP to version 1.10.8. +* Prevent UCP users from updating services with a port that conflicts with the UCP controller port. (escalation#855) +* Fixed an issue that caused UCP to fail to upgrade with an Interlock deployment. (docker/orca#16009) +* Validate Calico certs expiration date and update accordingly. (escalation#981) + ## 3.0.9 2018-01-29 @@ -472,8 +508,6 @@ Azure Disk when installing UCP with the `--cloud-provider` option. iptables -t filter -D KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP ``` * `ucp-kube-controller-manager` emits a large number of container logs. - * Excessive delay is seen when sending `docker service ls` through UCP client - bundle on a cluster that is running thousands of services. * Inter-node networking may break on Kubernetes pods while the `calico-node` pods are being upgraded on each node. This may cause up to a few minutes of networking disruption for pods on each node during the upgrade process, @@ -650,12 +684,54 @@ deprecated. Deploy your applications as Swarm services or Kubernetes workloads.
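The two-layer nesting limit described in the UCP 3.1 collection deprecation notes can be expressed as a small path check. This is a hypothetical helper for illustration, not part of UCP; it only assumes the `/Swarm/...` path convention shown in those notes:

```python
def collection_depth(path):
    """Count the layers of a collection path below the shared /Swarm/ collection."""
    prefix = "/Swarm/"
    if not path.startswith(prefix):
        raise ValueError("collection is not under the shared cluster collection")
    # Drop the prefix and any empty segments left by a trailing slash.
    return len([p for p in path[len(prefix):].split("/") if p])

# /Swarm/production/app/ is two layers deep -- the recommended maximum.
assert collection_depth("/Swarm/production/app/") == 2
# A third layer falls under the deprecated nesting behavior.
assert collection_depth("/Swarm/production/app/db/") == 3
```

A check like this could gate collection creation in tooling so that new collections stay within the supported depth while existing deeper collections are left in place, mirroring the retention behavior the notes describe.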
# Version 2.2 +## Version 2.2.17 + +2019-02-28 + + **Bug Fixes** +* Bump the Golang version that is used to build UCP to version 1.10.8. +* Prevent UCP users from updating services with a port that conflicts with the UCP controller port. (escalation#855) + +### Known issues + +* Docker currently has limitations related to overlay networking and services using VIP-based endpoints. These limitations apply to use of the HTTP Routing Mesh (HRM). HRM users should familiarize themselves with these limitations. In particular, HRM may encounter virtual IP exhaustion (as evidenced by `failed to allocate network IP for task` Docker log messages). If this happens, and if the HRM service is restarted or rescheduled for any reason, HRM may fail to resume operation automatically. See the Docker EE 17.06-ee5 release notes for details. +* The Swarm admin web interface for UCP versions 2.2.0 and later contain a bug. If used with Docker Engine version 17.06.2-ee5 or earlier, attempting to update "Task History Limit", "Heartbeat Period" and "Node Certificate Expiry" settings using the UI will cause the cluster to crash on next restart. Using UCP 2.2.X and Docker Engine 17.06-ee6 and later, updating these settings will fail (but not cause the cluster to crash). Users are encouraged to update to Docker Engine version 17.06.2-ee6 and later, and to use the Docker CLI (instead of the UCP UI) to update these settings. Rotating join tokens works with any combination of Docker Engine and UCP versions. Docker Engine versions 17.03 and earlier (which use UCP version 2.1 and earlier) are not affected by this problem. +* Upgrading heterogeneous swarms from CLI may fail because x86 images are used +instead of the correct image for the worker architecture. +* Agent container log is empty even though it's running correctly. +* Rapid UI settings updates may cause unintended settings changes for logging + settings and other admin settings. 
+* Attempting to load an (unsupported) `tar.gz` image results in a poor error + message. +* Searching for images in the UCP images UI doesn't work. +* Removing a stack may leave orphaned volumes. +* Storage metrics are not available for Windows. +* You can't create a bridge network from the web interface. As a workaround use + `/`. + ## Version 2.2.16 - 2019-01-29 +2019-01-29 ### Bug fixes * Added support for the `limit` argument in `docker ps`. (#15812) + +### Known issues + +* Docker currently has limitations related to overlay networking and services using VIP-based endpoints. These limitations apply to use of the HTTP Routing Mesh (HRM). HRM users should familiarize themselves with these limitations. In particular, HRM may encounter virtual IP exhaustion (as evidenced by `failed to allocate network IP for task` Docker log messages). If this happens, and if the HRM service is restarted or rescheduled for any reason, HRM may fail to resume operation automatically. See the Docker EE 17.06-ee5 release notes for details. +* The Swarm admin web interface for UCP versions 2.2.0 and later contain a bug. If used with Docker Engine version 17.06.2-ee5 or earlier, attempting to update "Task History Limit", "Heartbeat Period" and "Node Certificate Expiry" settings using the UI will cause the cluster to crash on next restart. Using UCP 2.2.X and Docker Engine 17.06-ee6 and later, updating these settings will fail (but not cause the cluster to crash). Users are encouraged to update to Docker Engine version 17.06.2-ee6 and later, and to use the Docker CLI (instead of the UCP UI) to update these settings. Rotating join tokens works with any combination of Docker Engine and UCP versions. Docker Engine versions 17.03 and earlier (which use UCP version 2.1 and earlier) are not affected by this problem. +* Upgrading heterogeneous swarms from CLI may fail because x86 images are used +instead of the correct image for the worker architecture. 
+* Agent container log is empty even though it's running correctly. +* Rapid UI settings updates may cause unintended settings changes for logging + settings and other admin settings. +* Attempting to load an (unsupported) `tar.gz` image results in a poor error + message. +* Searching for images in the UCP images UI doesn't work. +* Removing a stack may leave orphaned volumes. +* Storage metrics are not available for Windows. +* You can't create a bridge network from the web interface. As a workaround use + `/`. ## Version 2.2.15 @@ -666,7 +742,24 @@ deprecated. Deploy your applications as Swarm services or Kubernetes workloads. * Significantly reduced database load in environments with a lot of concurrent and repeated API requests by the same user. * Added the ability to set custom HTTP response headers to be returned by the UCP Controller API Server. * Web interface - * Fixed stack creation for non admin user when UCP uses a custom controller port. + * Fixed stack creation for non admin user when UCP uses a custom controller port. + +### Known issues + +* Docker currently has limitations related to overlay networking and services using VIP-based endpoints. These limitations apply to use of the HTTP Routing Mesh (HRM). HRM users should familiarize themselves with these limitations. In particular, HRM may encounter virtual IP exhaustion (as evidenced by `failed to allocate network IP for task` Docker log messages). If this happens, and if the HRM service is restarted or rescheduled for any reason, HRM may fail to resume operation automatically. See the Docker EE 17.06-ee5 release notes for details. +* The Swarm admin web interface for UCP versions 2.2.0 and later contain a bug. If used with Docker Engine version 17.06.2-ee5 or earlier, attempting to update "Task History Limit", "Heartbeat Period" and "Node Certificate Expiry" settings using the UI will cause the cluster to crash on next restart. 
Using UCP 2.2.X and Docker Engine 17.06-ee6 and later, updating these settings will fail (but not cause the cluster to crash). Users are encouraged to update to Docker Engine version 17.06.2-ee6 and later, and to use the Docker CLI (instead of the UCP UI) to update these settings. Rotating join tokens works with any combination of Docker Engine and UCP versions. Docker Engine versions 17.03 and earlier (which use UCP version 2.1 and earlier) are not affected by this problem. +* Upgrading heterogeneous swarms from CLI may fail because x86 images are used +instead of the correct image for the worker architecture. +* Agent container log is empty even though it's running correctly. +* Rapid UI settings updates may cause unintended settings changes for logging + settings and other admin settings. +* Attempting to load an (unsupported) `tar.gz` image results in a poor error + message. +* Searching for images in the UCP images UI doesn't work. +* Removing a stack may leave orphaned volumes. +* Storage metrics are not available for Windows. +* You can't create a bridge network from the web interface. As a workaround use + `/`. ## Version 2.2.14 @@ -680,6 +773,23 @@ deprecated. Deploy your applications as Swarm services or Kubernetes workloads. * Web Interface * Fixed an issue that prevented "Per User Limit" on Admin Settings from working. (docker/escalation#639) + +### Known issues + +* Docker currently has limitations related to overlay networking and services using VIP-based endpoints. These limitations apply to use of the HTTP Routing Mesh (HRM). HRM users should familiarize themselves with these limitations. In particular, HRM may encounter virtual IP exhaustion (as evidenced by `failed to allocate network IP for task` Docker log messages). If this happens, and if the HRM service is restarted or rescheduled for any reason, HRM may fail to resume operation automatically. See the Docker EE 17.06-ee5 release notes for details. 
+* The Swarm admin web interface for UCP versions 2.2.0 and later contain a bug. If used with Docker Engine version 17.06.2-ee5 or earlier, attempting to update "Task History Limit", "Heartbeat Period" and "Node Certificate Expiry" settings using the UI will cause the cluster to crash on next restart. Using UCP 2.2.X and Docker Engine 17.06-ee6 and later, updating these settings will fail (but not cause the cluster to crash). Users are encouraged to update to Docker Engine version 17.06.2-ee6 and later, and to use the Docker CLI (instead of the UCP UI) to update these settings. Rotating join tokens works with any combination of Docker Engine and UCP versions. Docker Engine versions 17.03 and earlier (which use UCP version 2.1 and earlier) are not affected by this problem. +* Upgrading heterogeneous swarms from CLI may fail because x86 images are used +instead of the correct image for the worker architecture. +* Agent container log is empty even though it's running correctly. +* Rapid UI settings updates may cause unintended settings changes for logging + settings and other admin settings. +* Attempting to load an (unsupported) `tar.gz` image results in a poor error + message. +* Searching for images in the UCP images UI doesn't work. +* Removing a stack may leave orphaned volumes. +* Storage metrics are not available for Windows. +* You can't create a bridge network from the web interface. As a workaround use + `/`. ## Version 2.2.13 @@ -690,6 +800,23 @@ deprecated. Deploy your applications as Swarm services or Kubernetes workloads. * Security * Fixed a critical security issue to prevent UCP from accepting certificates from the system pool when adding client CAs to the server that requires mutual authentication. + +### Known issues + +* Docker currently has limitations related to overlay networking and services using VIP-based endpoints. These limitations apply to use of the HTTP Routing Mesh (HRM). HRM users should familiarize themselves with these limitations. 
In particular, HRM may encounter virtual IP exhaustion (as evidenced by `failed to allocate network IP for task` Docker log messages). If this happens, and if the HRM service is restarted or rescheduled for any reason, HRM may fail to resume operation automatically. See the Docker EE 17.06-ee5 release notes for details. +* The Swarm admin web interface for UCP versions 2.2.0 and later contain a bug. If used with Docker Engine version 17.06.2-ee5 or earlier, attempting to update "Task History Limit", "Heartbeat Period" and "Node Certificate Expiry" settings using the UI will cause the cluster to crash on next restart. Using UCP 2.2.X and Docker Engine 17.06-ee6 and later, updating these settings will fail (but not cause the cluster to crash). Users are encouraged to update to Docker Engine version 17.06.2-ee6 and later, and to use the Docker CLI (instead of the UCP UI) to update these settings. Rotating join tokens works with any combination of Docker Engine and UCP versions. Docker Engine versions 17.03 and earlier (which use UCP version 2.1 and earlier) are not affected by this problem. +* Upgrading heterogeneous swarms from CLI may fail because x86 images are used +instead of the correct image for the worker architecture. +* Agent container log is empty even though it's running correctly. +* Rapid UI settings updates may cause unintended settings changes for logging + settings and other admin settings. +* Attempting to load an (unsupported) `tar.gz` image results in a poor error + message. +* Searching for images in the UCP images UI doesn't work. +* Removing a stack may leave orphaned volumes. +* Storage metrics are not available for Windows. +* You can't create a bridge network from the web interface. As a workaround use + `/`. ## Version 2.2.12 @@ -702,6 +829,23 @@ deprecated. Deploy your applications as Swarm services or Kubernetes workloads. were stored in cleartext on UCP hosts. 
Please refer to the following KB article https://success.docker.com/article/upgrading-to-ucp-2-2-12-ucp-3-0-4/ for proper implementation of this fix. + +### Known issues + +* Docker currently has limitations related to overlay networking and services using VIP-based endpoints. These limitations apply to use of the HTTP Routing Mesh (HRM). HRM users should familiarize themselves with these limitations. In particular, HRM may encounter virtual IP exhaustion (as evidenced by `failed to allocate network IP for task` Docker log messages). If this happens, and if the HRM service is restarted or rescheduled for any reason, HRM may fail to resume operation automatically. See the Docker EE 17.06-ee5 release notes for details. +* The Swarm admin web interface for UCP versions 2.2.0 and later contains a bug. If used with Docker Engine version 17.06.2-ee5 or earlier, attempting to update "Task History Limit", "Heartbeat Period" and "Node Certificate Expiry" settings using the UI will cause the cluster to crash on next restart. Using UCP 2.2.X and Docker Engine 17.06-ee6 and later, updating these settings will fail (but not cause the cluster to crash). Users are encouraged to update to Docker Engine version 17.06.2-ee6 and later, and to use the Docker CLI (instead of the UCP UI) to update these settings. Rotating join tokens works with any combination of Docker Engine and UCP versions. Docker Engine versions 17.03 and earlier (which use UCP version 2.1 and earlier) are not affected by this problem. +* Upgrading heterogeneous swarms from the CLI may fail because x86 images are used +instead of the correct image for the worker architecture. +* Agent container log is empty even though the agent is running correctly. +* Rapid UI settings updates may cause unintended settings changes for logging + settings and other admin settings. +* Attempting to load an (unsupported) `tar.gz` image results in a poor error + message. +* Searching for images in the UCP images UI doesn't work.
+* Removing a stack may leave orphaned volumes. +* Storage metrics are not available for Windows. +* You can't create a bridge network from the web interface. As a workaround use + `/`. ## Version 2.2.11 @@ -726,6 +870,23 @@ deprecated. Deploy your applications as Swarm services or Kubernetes workloads. * UI * Fixed an issue that causes the web interface to not parse volume options correctly. * Fixed an issue that prevents the user from deploying stacks through the web interface. + +### Known issues + +* Docker currently has limitations related to overlay networking and services using VIP-based endpoints. These limitations apply to use of the HTTP Routing Mesh (HRM). HRM users should familiarize themselves with these limitations. In particular, HRM may encounter virtual IP exhaustion (as evidenced by `failed to allocate network IP for task` Docker log messages). If this happens, and if the HRM service is restarted or rescheduled for any reason, HRM may fail to resume operation automatically. See the Docker EE 17.06-ee5 release notes for details. +* The Swarm admin web interface for UCP versions 2.2.0 and later contains a bug. If used with Docker Engine version 17.06.2-ee5 or earlier, attempting to update "Task History Limit", "Heartbeat Period" and "Node Certificate Expiry" settings using the UI will cause the cluster to crash on next restart. Using UCP 2.2.X and Docker Engine 17.06-ee6 and later, updating these settings will fail (but not cause the cluster to crash). Users are encouraged to update to Docker Engine version 17.06.2-ee6 and later, and to use the Docker CLI (instead of the UCP UI) to update these settings. Rotating join tokens works with any combination of Docker Engine and UCP versions. Docker Engine versions 17.03 and earlier (which use UCP version 2.1 and earlier) are not affected by this problem. +* Upgrading heterogeneous swarms from the CLI may fail because x86 images are used +instead of the correct image for the worker architecture.
+* Agent container log is empty even though the agent is running correctly. +* Rapid UI settings updates may cause unintended settings changes for logging + settings and other admin settings. +* Attempting to load an (unsupported) `tar.gz` image results in a poor error + message. +* Searching for images in the UCP images UI doesn't work. +* Removing a stack may leave orphaned volumes. +* Storage metrics are not available for Windows. +* You can't create a bridge network from the web interface. As a workaround use + `/`. ## Version 2.2.10 @@ -761,11 +922,24 @@ deprecated. Deploy your applications as Swarm services or Kubernetes workloads. * UCP can now be installed on a system with more than 127 logical CPU cores. * Improved the performance of UCP's local and global health checks. -### Known Issue +### Known issues * Excessive delay is seen when sending `docker service ls` through a UCP client bundle on a cluster that is running thousands of services. - + * Docker currently has limitations related to overlay networking and services using VIP-based endpoints. These limitations apply to use of the HTTP Routing Mesh (HRM). HRM users should familiarize themselves with these limitations. In particular, HRM may encounter virtual IP exhaustion (as evidenced by `failed to allocate network IP for task` Docker log messages). If this happens, and if the HRM service is restarted or rescheduled for any reason, HRM may fail to resume operation automatically. See the Docker EE 17.06-ee5 release notes for details. + * The Swarm admin web interface for UCP versions 2.2.0 and later contains a bug. If used with Docker Engine version 17.06.2-ee5 or earlier, attempting to update "Task History Limit", "Heartbeat Period" and "Node Certificate Expiry" settings using the UI will cause the cluster to crash on next restart. Using UCP 2.2.X and Docker Engine 17.06-ee6 and later, updating these settings will fail (but not cause the cluster to crash).
Users are encouraged to update to Docker Engine version 17.06.2-ee6 and later, and to use the Docker CLI (instead of the UCP UI) to update these settings. Rotating join tokens works with any combination of Docker Engine and UCP versions. Docker Engine versions 17.03 and earlier (which use UCP version 2.1 and earlier) are not affected by this problem. +* Upgrading heterogeneous swarms from the CLI may fail because x86 images are used + instead of the correct image for the worker architecture. +* Agent container log is empty even though the agent is running correctly. +* Rapid UI settings updates may cause unintended settings changes for logging + settings and other admin settings. +* Attempting to load an (unsupported) `tar.gz` image results in a poor error + message. +* Searching for images in the UCP images UI doesn't work. +* Removing a stack may leave orphaned volumes. +* Storage metrics are not available for Windows. +* You can't create a bridge network from the web interface. As a workaround use + `/`. ## Version 2.2.9 @@ -781,6 +955,27 @@ deprecated. Deploy your applications as Swarm services or Kubernetes workloads. * Core * Fixed an issue that causes container fail to start with `container ID not found` during high concurrent API calls to create and start containers. + +### Known issues + +* RethinkDB can only run with up to 127 CPU cores. +* When integrating with LDAP and using multiple domain servers, if the +default server configuration is not chosen, then the last server configuration +is always used, regardless of which one is actually the best match. +* Docker currently has limitations related to overlay networking and services using VIP-based endpoints. These limitations apply to use of the HTTP Routing Mesh (HRM). HRM users should familiarize themselves with these limitations. In particular, HRM may encounter virtual IP exhaustion (as evidenced by `failed to allocate network IP for task` Docker log messages).
If this happens, and if the HRM service is restarted or rescheduled for any reason, HRM may fail to resume operation automatically. See the Docker EE 17.06-ee5 release notes for details. +* The Swarm admin web interface for UCP versions 2.2.0 and later contains a bug. If used with Docker Engine version 17.06.2-ee5 or earlier, attempting to update "Task History Limit", "Heartbeat Period" and "Node Certificate Expiry" settings using the UI will cause the cluster to crash on next restart. Using UCP 2.2.X and Docker Engine 17.06-ee6 and later, updating these settings will fail (but not cause the cluster to crash). Users are encouraged to update to Docker Engine version 17.06.2-ee6 and later, and to use the Docker CLI (instead of the UCP UI) to update these settings. Rotating join tokens works with any combination of Docker Engine and UCP versions. Docker Engine versions 17.03 and earlier (which use UCP version 2.1 and earlier) are not affected by this problem. +* Upgrading heterogeneous swarms from the CLI may fail because x86 images are used + instead of the correct image for the worker architecture. +* Agent container log is empty even though the agent is running correctly. +* Rapid UI settings updates may cause unintended settings changes for logging + settings and other admin settings. +* Attempting to load an (unsupported) `tar.gz` image results in a poor error + message. +* Searching for images in the UCP images UI doesn't work. +* Removing a stack may leave orphaned volumes. +* Storage metrics are not available for Windows. +* You can't create a bridge network from the web interface. As a workaround use + `/`. ## Version 2.2.7 @@ -791,6 +986,27 @@ deprecated. Deploy your applications as Swarm services or Kubernetes workloads. * Fixed an issue where the minimum TLS version setting is not correctly handled, leading to non-default values causing `ucp-controller` and `ucp-agent` to keep restarting. + +### Known issues + +* RethinkDB can only run with up to 127 CPU cores.
+* When integrating with LDAP and using multiple domain servers, if the +default server configuration is not chosen, then the last server configuration +is always used, regardless of which one is actually the best match. +* Docker currently has limitations related to overlay networking and services using VIP-based endpoints. These limitations apply to use of the HTTP Routing Mesh (HRM). HRM users should familiarize themselves with these limitations. In particular, HRM may encounter virtual IP exhaustion (as evidenced by `failed to allocate network IP for task` Docker log messages). If this happens, and if the HRM service is restarted or rescheduled for any reason, HRM may fail to resume operation automatically. See the Docker EE 17.06-ee5 release notes for details. +* The Swarm admin web interface for UCP versions 2.2.0 and later contains a bug. If used with Docker Engine version 17.06.2-ee5 or earlier, attempting to update "Task History Limit", "Heartbeat Period" and "Node Certificate Expiry" settings using the UI will cause the cluster to crash on next restart. Using UCP 2.2.X and Docker Engine 17.06-ee6 and later, updating these settings will fail (but not cause the cluster to crash). Users are encouraged to update to Docker Engine version 17.06.2-ee6 and later, and to use the Docker CLI (instead of the UCP UI) to update these settings. Rotating join tokens works with any combination of Docker Engine and UCP versions. Docker Engine versions 17.03 and earlier (which use UCP version 2.1 and earlier) are not affected by this problem. +* Upgrading heterogeneous swarms from the CLI may fail because x86 images are used + instead of the correct image for the worker architecture. +* Agent container log is empty even though the agent is running correctly. +* Rapid UI settings updates may cause unintended settings changes for logging + settings and other admin settings. +* Attempting to load an (unsupported) `tar.gz` image results in a poor error + message.
+* Searching for images in the UCP images UI doesn't work. +* Removing a stack may leave orphaned volumes. +* Storage metrics are not available for Windows. +* You can't create a bridge network from the web interface. As a workaround use + `/`. ## Version 2.2.6 @@ -846,6 +1062,20 @@ deprecated. Deploy your applications as Swarm services or Kubernetes workloads. * When integrating with LDAP and using multiple domain servers, if the default server configuration is not chosen, then the last server configuration is always used, regardless of which one is actually the best match. +* Docker currently has limitations related to overlay networking and services using VIP-based endpoints. These limitations apply to use of the HTTP Routing Mesh (HRM). HRM users should familiarize themselves with these limitations. In particular, HRM may encounter virtual IP exhaustion (as evidenced by `failed to allocate network IP for task` Docker log messages). If this happens, and if the HRM service is restarted or rescheduled for any reason, HRM may fail to resume operation automatically. See the Docker EE 17.06-ee5 release notes for details. +* The Swarm admin web interface for UCP versions 2.2.0 and later contains a bug. If used with Docker Engine version 17.06.2-ee5 or earlier, attempting to update "Task History Limit", "Heartbeat Period" and "Node Certificate Expiry" settings using the UI will cause the cluster to crash on next restart. Using UCP 2.2.X and Docker Engine 17.06-ee6 and later, updating these settings will fail (but not cause the cluster to crash). Users are encouraged to update to Docker Engine version 17.06.2-ee6 and later, and to use the Docker CLI (instead of the UCP UI) to update these settings. Rotating join tokens works with any combination of Docker Engine and UCP versions. Docker Engine versions 17.03 and earlier (which use UCP version 2.1 and earlier) are not affected by this problem.
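The CLI workaround described in the known issue above can be sketched as follows. This is a hedged example: the flag values are illustrative only, and the command must run against a swarm manager node.

```shell
# Update the affected swarm settings ("Task History Limit", "Heartbeat Period",
# "Node Certificate Expiry") from the Docker CLI instead of the UCP UI.
# Values below are examples, not recommendations; run on a manager node.
docker swarm update \
  --task-history-limit 5 \
  --dispatcher-heartbeat 5s \
  --cert-expiry 2160h

# Rotating join tokens works with any combination of Engine and UCP versions:
docker swarm join-token --rotate worker
```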
+* Upgrading heterogeneous swarms from the CLI may fail because x86 images are used + instead of the correct image for the worker architecture. +* Agent container log is empty even though the agent is running correctly. +* Rapid UI settings updates may cause unintended settings changes for logging + settings and other admin settings. +* Attempting to load an (unsupported) `tar.gz` image results in a poor error + message. +* Searching for images in the UCP images UI doesn't work. +* Removing a stack may leave orphaned volumes. +* Storage metrics are not available for Windows. +* You can't create a bridge network from the web interface. As a workaround use + `/`. ## Version 2.2.5 @@ -869,6 +1099,20 @@ plugins. If you’re using certain third-party volume plugins (such as Netapp) and are planning on upgrading UCP, you can skip 2.2.5 and wait for the upcoming 2.2.6 release, which will provide an alternative way to turn on RBAC enforcement for volumes. +* Docker currently has limitations related to overlay networking and services using VIP-based endpoints. These limitations apply to use of the HTTP Routing Mesh (HRM). HRM users should familiarize themselves with these limitations. In particular, HRM may encounter virtual IP exhaustion (as evidenced by `failed to allocate network IP for task` Docker log messages). If this happens, and if the HRM service is restarted or rescheduled for any reason, HRM may fail to resume operation automatically. See the Docker EE 17.06-ee5 release notes for details. +* The Swarm admin web interface for UCP versions 2.2.0 and later contains a bug. If used with Docker Engine version 17.06.2-ee5 or earlier, attempting to update "Task History Limit", "Heartbeat Period" and "Node Certificate Expiry" settings using the UI will cause the cluster to crash on next restart. Using UCP 2.2.X and Docker Engine 17.06-ee6 and later, updating these settings will fail (but not cause the cluster to crash).
Users are encouraged to update to Docker Engine version 17.06.2-ee6 and later, and to use the Docker CLI (instead of the UCP UI) to update these settings. Rotating join tokens works with any combination of Docker Engine and UCP versions. Docker Engine versions 17.03 and earlier (which use UCP version 2.1 and earlier) are not affected by this problem. +* Upgrading heterogeneous swarms from the CLI may fail because x86 images are used + instead of the correct image for the worker architecture. +* Agent container log is empty even though the agent is running correctly. +* Rapid UI settings updates may cause unintended settings changes for logging + settings and other admin settings. +* Attempting to load an (unsupported) `tar.gz` image results in a poor error + message. +* Searching for images in the UCP images UI doesn't work. +* Removing a stack may leave orphaned volumes. +* Storage metrics are not available for Windows. +* You can't create a bridge network from the web interface. As a workaround use + `/`. ## Version 2.2.4 @@ -901,7 +1145,19 @@ for volumes. ### Known issues * Docker currently has limitations related to overlay networking and services using VIP-based endpoints. These limitations apply to use of the HTTP Routing Mesh (HRM). HRM users should familiarize themselves with these limitations. In particular, HRM may encounter virtual IP exhaustion (as evidenced by `failed to allocate network IP for task` Docker log messages). If this happens, and if the HRM service is restarted or rescheduled for any reason, HRM may fail to resume operation automatically. See the Docker EE 17.06-ee5 release notes for details. - * The Swarm admin web interface for UCP versions 2.2.0 and later contain a bug. If used with Docker Engine version 17.06.2-ee5 or earlier, attempting to update "Task History Limit", "Heartbeat Period" and "Node Certificate Expiry" settings using the UI will cause the cluster to crash on next restart.
Using UCP 2.2.X and Docker Engine 17.06-ee6 and later, updating these settings will fail (but not cause the cluster to crash). Users are encouraged to update to Docker Engine version 17.06.2-ee6 and later, and to use the Docker CLI (instead of the UCP UI) to update these settings. Rotating join tokens works with any combination of Docker Engine and UCP versions. Docker Engine versions 17.03 and earlier (which use UCP version 2.1 and earlier) are not affected by this problem. +* The Swarm admin web interface for UCP versions 2.2.0 and later contains a bug. If used with Docker Engine version 17.06.2-ee5 or earlier, attempting to update "Task History Limit", "Heartbeat Period" and "Node Certificate Expiry" settings using the UI will cause the cluster to crash on next restart. Using UCP 2.2.X and Docker Engine 17.06-ee6 and later, updating these settings will fail (but not cause the cluster to crash). Users are encouraged to update to Docker Engine version 17.06.2-ee6 and later, and to use the Docker CLI (instead of the UCP UI) to update these settings. Rotating join tokens works with any combination of Docker Engine and UCP versions. Docker Engine versions 17.03 and earlier (which use UCP version 2.1 and earlier) are not affected by this problem. +* Upgrading heterogeneous swarms from the CLI may fail because x86 images are used + instead of the correct image for the worker architecture. +* Agent container log is empty even though the agent is running correctly. +* Rapid UI settings updates may cause unintended settings changes for logging + settings and other admin settings. +* Attempting to load an (unsupported) `tar.gz` image results in a poor error + message. +* Searching for images in the UCP images UI doesn't work. +* Removing a stack may leave orphaned volumes. +* Storage metrics are not available for Windows. +* You can't create a bridge network from the web interface. As a workaround use + `/`. ## Version 2.2.3 @@ -956,7 +1212,6 @@ for volumes.
* You can't create a bridge network from the web interface. As a workaround use `/`. - ## version 2.2.2 2017-08-30 @@ -990,9 +1245,34 @@ for volumes. ### Known issues -* When deploying compose files that use secrets, the secret definition must -include `external: true`, otherwise the deployment fails with the error +* UI issues: + * Cannot currently remove nodes using the UCP web interface. Workaround is to remove them from the CLI + instead. + * Search does not function correctly for images. + * Cannot view label constraints from a collection's details pages. Workaround + is to view by editing the collection. + * Certain config changes to UCP may take several minutes to update after making + changes in the web interface. In particular, this affects LDAP/AD configuration changes. + * Turning `LDAP Enabled` from "Yes" to "No" disables the save button. Workaround + is to refresh the page, which completes the configuration change. + * Removing stacks from the UI may cause certain resources to not be deleted, + including networks or volumes. Workaround is to delete the resources directly. + * When you create a network and check 'Enable hostname based routing', the web + interface doesn't apply the HRM labels to the network. As a workaround, + [create the network using the CLI](https://docs.docker.com/datacenter/ucp/2.2/guides/user/services/use-domain-names-to-access-services/#service-labels). + * The web interface does not currently persist changes to session timeout settings. + As a workaround you can update the settings from the CLI, by [adapting these instructions for the + session timeout](https://docs.docker.com/datacenter/ucp/2.2/guides/admin/configure/external-auth/enable-ldap-config-file/). +* docker/ucp + * The `support` command does not currently produce a valid support dump. As a + workaround you can download a support dump from the web interface.
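The "create the network using the CLI" workaround for the hostname-based-routing checkbox can be sketched as below. This is an assumption-laden example: the HRM label name `com.docker.ucp.mesh.http` and the network name are illustrative, so verify the exact label against the linked service-labels guide for your UCP version.

```shell
# Create the overlay network from the CLI with the HRM label applied directly,
# since the web interface does not apply it. Label name is an assumption --
# check the linked guide for the label your UCP version expects.
docker network create \
  --driver overlay \
  --label com.docker.ucp.mesh.http=true \
  my-hrm-network
```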
+* Compose + * When deploying compose files that use secrets, the secret definition must include `external: true`, otherwise the deployment fails with the error `unable to inspect secret`. +* Windows issues + * Disk-related metrics do not display for Windows worker nodes. + * If upgrading from an existing deployment, ensure that HRM is using a non-encrypted + network prior to attaching Windows services. ## Version 2.2.0 diff --git a/ee/upgrade.md b/ee/upgrade.md index 144aa818f9..19f1c4cd60 100644 --- a/ee/upgrade.md +++ b/ee/upgrade.md @@ -11,7 +11,7 @@ redirect_from: In Docker Engine - Enterprise 18.09, significant architectural improvements were made to the network architecture in Swarm to increase the performance and scale of the built-in load balancing functionality. -> ***NOTE:*** These changes introduce new constraints to the Docker Engine - Enterprise upgrade process that, +> **Note**: These changes introduce new constraints to the Docker Engine - Enterprise upgrade process that, > if not correctly followed, can have impact on the availability of applications running on the Swarm. These > constraints impact any upgrades coming from any version before 18.09 to version 18.09 or greater. diff --git a/engine/ce-ee-node-activate.md b/engine/ce-ee-node-activate.md index 40cff6f21f..367980534d 100644 --- a/engine/ce-ee-node-activate.md +++ b/engine/ce-ee-node-activate.md @@ -26,7 +26,7 @@ on your hub/store account after starting the trial or paid license. This allows upgrade operations to work as expected and keep them current as long as your license is still valid and has not expired. -> ***NOTE:*** You can use the `docker engine update` command. However, if you continue to use +> **Note**: You can use the `docker engine update` command.
However, if you continue to use > the CE packages, the OS package will no longer replace the active daemon binary during apt/yum > updates, so you are responsible for performing the `docker engine update` operation periodically > to keep your engine up to date. @@ -61,10 +61,10 @@ Server: Docker Engine - Community 2. Log into the Docker engine from the command line. -**NOTE:** When running the command `docker login`, the shell stores the credentials in the current user's home -directory. RHEL and Ubuntu-based Linux distributions have different behavior for sudo. RHEL sets $HOME to point -to `/root` while Ubuntu leaves `$HOME` pointing to the user's home directory who ran `sudo` and this can cause -permission and access problems when switching between `sudo` and non-sudo'd commands. +> **Note**: When running the command `docker login`, the shell stores the credentials in the current user's home +> directory. RHEL and Ubuntu-based Linux distributions have different behavior for sudo. RHEL sets `$HOME` to point +> to `/root`, while Ubuntu leaves `$HOME` pointing to the home directory of the user who ran `sudo`, which can cause +> permission and access problems when switching between `sudo` and non-`sudo` commands. For Ubuntu or Debian: diff --git a/engine/release-notes.md b/engine/release-notes.md index 399665134d..cac26a4d50 100644 --- a/engine/release-notes.md +++ b/engine/release-notes.md @@ -16,19 +16,53 @@ Docker EE is a superset of all the features in Docker CE. It incorporates defect that you can use in environments where new features cannot be adopted as quickly for consistency and compatibility reasons. -> ***NOTE:*** +> **Note**: > New in 18.09 is an aligned release model for Docker Engine - Community and Docker > Engine - Enterprise. The new versioning scheme is YY.MM.x where x is an incrementing > patch version. The enterprise engine is a superset of the community engine. They > will ship concurrently with the same x patch version based on the same code base.
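The periodic `docker engine update` operation mentioned in the note above looks roughly like this. This is a sketch under assumptions: the `check` subcommand and `--version` flag are assumed from the 18.09-era CLI, and the target version is illustrative only.

```shell
# List available engine updates for the active daemon (assumed subcommand),
# then apply one. The version number is an example, not a recommendation.
docker engine check
docker engine update --version 18.09.2
```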
-> ***NOTE:*** +> **Note**: > The client and container runtime are now in separate packages from the daemon in > Docker Engine 18.09. Users should install and update all three packages at the same time > to get the latest patch releases. For example, on Ubuntu: > `sudo apt install docker-ce docker-ce-cli containerd.io`. See the install instructions > for the corresponding linux distro for details. +## 18.09.3 + +2019-02-28 + +### Networking fixes for Docker Engine EE and CE +* Windows: Now avoids regeneration of network IDs to prevent broken references to networks. [docker/engine#149](https://github.com/docker/engine/pull/149) +* Windows: Fixed an issue to address the `--restart always` flag on standalone containers not working when specifying a network. (docker/escalation#1037) +* Fixed an issue to address the IPAM state from networkdb if the manager is not attached to the overlay network. (docker/escalation#1049) + +### Runtime fixes and updates for Docker Engine EE and CE + +* Updated to Go version 1.10.8. +* Modified names in the container name generator. [docker/engine#159](https://github.com/docker/engine/pull/159) +* When copying an existing folder, xattr set errors when the target filesystem doesn't support xattr are now ignored. [docker/engine#135](https://github.com/docker/engine/pull/135) +* Graphdriver: fixed "device" mode not being detected if "character-device" bit is set. [docker/engine#160](https://github.com/docker/engine/pull/160) +* Fixed nil pointer dereference on failure to connect to containerd. [docker/engine#162](https://github.com/docker/engine/pull/162) +* Deleted stale containerd object on start failure. [docker/engine#154](https://github.com/docker/engine/pull/154) + +### Known Issues +* There are [important changes to the upgrade process](/ee/upgrade) that, if not correctly followed, can impact the availability of applications running on the Swarm during upgrades.
These constraints impact any upgrades coming from any version before 18.09 to version 18.09 or greater. + +## 18.09.2 + +2019-02-11 + +### Security fixes for Docker Engine - Enterprise and Docker Engine - Community +* Update `runc` to address a critical vulnerability that allows specially-crafted containers to gain administrative privileges on the host. [CVE-2019-5736](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-5736) +* Ubuntu 14.04 customers using a 3.13 kernel will need to upgrade to a supported Ubuntu 4.x kernel. + +For additional information, [refer to the Docker blog post](https://blog.docker.com/2019/02/docker-security-update-cve-2018-5736-and-container-security-best-practices/). + +### Known Issues +* There are [important changes to the upgrade process](/ee/upgrade) that, if not correctly followed, can impact the availability of applications running on the Swarm during upgrades. These constraints impact any upgrades coming from any version before 18.09 to version 18.09 or greater. + ## 18.09.1 2019-01-09 @@ -82,6 +116,7 @@ Update your configuration if this command prints a non-empty value for `MountFlags`. ### Known Issues * When upgrading from 18.09.0 to 18.09.1, `containerd` is not upgraded to the correct version on Ubuntu. [Learn more](https://success.docker.com/article/error-upgrading-to-engine-18091-with-containerd). +* There are [important changes to the upgrade process](/ee/upgrade) that, if not correctly followed, can impact the availability of applications running on the Swarm during upgrades. These constraints impact any upgrades coming from any version before 18.09 to version 18.09 or greater. ## 18.09.0 @@ -215,8 +250,7 @@ As of EE 2.2, Docker will deprecate support for Device Mapper as a storage driver time, but support will be removed in a future release. Docker will continue to support Device Mapper for existing EE 2.0 and 2.1 customers. Please contact Sales for more information.
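Given the Device Mapper deprecation described above, a quick way to check which storage driver a host currently uses is shown below. This assumes a running daemon; `overlay2` in the output means no migration is needed.

```shell
# Print the active storage driver for this Docker host.
# "overlay2" is the recommended default; "devicemapper" indicates a
# candidate for migration.
docker info --format '{{.Driver}}'
```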
-Docker recommends that existing customers [migrate to using Overlay2 for the storage driver] -(https://success.docker.com/article/how-do-i-migrate-an-existing-ucp-cluster-to-the-overlay2-graph-driver). +Docker recommends that existing customers [migrate to using Overlay2 for the storage driver](https://success.docker.com/article/how-do-i-migrate-an-existing-ucp-cluster-to-the-overlay2-graph-driver). The [Overlay2 storage driver](https://docs.docker.com/storage/storagedriver/overlayfs-driver/) is now the default for Docker engine implementations. @@ -227,40 +261,51 @@ For more information on the list of deprecated flags and APIs, have a look at th In this release, Docker has also removed support for TLS < 1.2 [moby/moby#37660](https://github.com/moby/moby/pull/37660), Ubuntu 14.04 "Trusty Tahr" [docker-ce-packaging#255](https://github.com/docker/docker-ce-packaging/pull/255) / [docker-ce-packaging#254](https://github.com/docker/docker-ce-packaging/pull/254), and Debian 8 "Jessie" [docker-ce-packaging#255](https://github.com/docker/docker-ce-packaging/pull/255) / [docker-ce-packaging#254](https://github.com/docker/docker-ce-packaging/pull/254). -### 18.03.1-ee-5 -2019-01-09 - -### Security fixes -* Upgraded Go language to 1.10.6 to resolve CVE-2018-16873, CVE-2018-16874, and CVE-2018-16875. -* Added `/proc/asound` to masked paths -* Fixed authz plugin for 0-length content and path validation. - -### Fixes for Docker Engine EE -* Disable kmem accounting in runc on RHEL/CentOS (docker/escalation#614, docker/escalation#692) -* Fix resource leak on `docker logs --follow` [moby/moby#37576](https://github.com/moby/moby/pull/37576) -* Mask proxy credentials from URL when displayed in system info (docker/escalation#879) - -### 17.06.2-ee-18 -2019-01-09 - -### Security fixes -* Upgraded Go language to 1.10.6 to resolve CVE-2018-16873, CVE-2018-16874, and CVE-2018-16875. -* Added `/proc/asound` to masked paths -* Fixed authz plugin for 0-length content and path validation. 
- -### Fixes for Docker Engine EE -* Disable kmem accounting in runc on RHEL/CentOS (docker/escalation#614, docker/escalation#692) -* Fix resource leak on `docker logs --follow` [moby/moby#37576](https://github.com/moby/moby/pull/37576) -* Mask proxy credentials from URL when displayed in system info (docker/escalation#879) - - ## Older Docker Engine EE Release notes -### 18.03.1-ee-4 +## 18.03.1-ee-7 + +2019-02-28 + +### Runtime + +* Updated to Go version 1.10.8. +* Updated to containerd version 1.1.6. +* When copying an existing folder, xattr set errors when the target filesystem doesn't support xattr are now ignored. [moby/moby#38316](https://github.com/moby/moby/pull/38316) +* Fixed FIFO, sockets, and device files in userns, and fixed device mode not being detected. [moby/moby#38758](https://github.com/moby/moby/pull/38758) +* Deleted stale containerd object on start failure. [moby/moby#38364](https://github.com/moby/moby/pull/38364) + +## 18.03.1-ee-7 +2019-02-28 + +### Bug fixes +* Fixed an issue to address the IPAM state from networkdb if the manager is not attached to the overlay network. (docker/escalation#1049) + +## 18.03.1-ee-6 +2019-02-11 + +### Security fixes for Docker Engine - Enterprise +* Update `runc` to address a critical vulnerability that allows specially-crafted containers to gain administrative privileges on the host. [CVE-2019-5736](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-5736) +* Ubuntu 14.04 customers using a 3.13 kernel will need to upgrade to a supported Ubuntu 4.x kernel. + +## 18.03.1-ee-5 +2019-01-09 + +### Security fixes +* Upgraded Go language to 1.10.6 to resolve CVE-2018-16873, CVE-2018-16874, and CVE-2018-16875. +* Added `/proc/asound` to masked paths. +* Fixed authz plugin for 0-length content and path validation.
+ +### Fixes for Docker Engine - Enterprise +* Disabled kmem accounting in runc on RHEL/CentOS (docker/escalation#614, docker/escalation#692) +* Fixed a resource leak on `docker logs --follow` [moby/moby#37576](https://github.com/moby/moby/pull/37576) +* Masked proxy credentials from the URL when displayed in system info (docker/escalation#879) + +## 18.03.1-ee-4 2018-10-25 - > *** NOTE: *** If you're deploying UCP or DTR, use Docker EE Engine 18.09 or higher. 18.03 is an engine only release. + > **Note**: If you're deploying UCP or DTR, use Docker EE Engine 18.09 or higher. 18.03 is an engine-only release. #### Client @@ -283,59 +328,7 @@ Ubuntu 14.04 "Trusty Tahr" [docker-ce-packaging#255](https://github.com/docker/d * Fixed the logic used for skipping over running tasks. [docker/swarmkit#2724](https://github.com/docker/swarmkit/pull/2724) * Addressed unassigned task leak when a service is removed. [docker/swarmkit#2709](https://github.com/docker/swarmkit/pull/2709) - ### 18.03.1-ee-3 - 2018-08-30 - - #### Builder - - * Fix: no error if build args are missing during docker build. [docker/engine#25](https://github.com/docker/engine/pull/25) - * Ensure RUN instruction to run without healthcheck. [moby/moby#37413](https://github.com/moby/moby/pull/37413) - - #### Client - - * Fix manifest list to always use correct size. [docker/cli#1156](https://github.com/docker/cli/pull/1156) - * Various shell completion script updates. [docker/cli#1159](https://github.com/docker/cli/pull/1159) [docker/cli#1227](https://github.com/docker/cli/pull/1227) - * Improve version output alignment. [docker/cli#1204](https://github.com/docker/cli/pull/1204) - - #### Runtime - - * Disable CRI plugin listening on port 10010 by default. [docker/engine#29](https://github.com/docker/engine/pull/29) - * Update containerd to v1.1.2. [docker/engine#33](https://github.com/docker/engine/pull/33) - * Windows: Pass back system errors on container exit.
[moby/moby#35967](https://github.com/moby/moby/pull/35967) - * Windows: Fix named pipe support for hyper-v isolated containers. [docker/engine#2](https://github.com/docker/engine/pull/2) [docker/cli#1165](https://github.com/docker/cli/pull/1165) - * Register OCI media types. [docker/engine#4](https://github.com/docker/engine/pull/4) - - #### Swarm Mode - - * Clean up tasks in dirty list for which the service has been deleted. [docker/swarmkit#2694](https://github.com/docker/swarmkit/pull/2694) - * Propagate the provided external CA certificate to the external CA object in swarm. [docker/cli#1178](https://github.com/docker/cli/pull/1178) - -2018-10-25 - -> ***NOTE:*** If you're deploying UCP or DTR, use Docker EE Engine 18.09 or higher. 18.03 is an engine only release. - -#### Client - -* Fixed help message flags on docker stack commands and child commands. [docker/cli#1251](https://github.com/docker/cli/pull/1251) -* Fixed typo breaking zsh docker update autocomplete. [docker/cli#1232](https://github.com/docker/cli/pull/1232) - -### Networking - -* Added optimizations to reduce the messages in the NetworkDB queue. [docker/libnetwork#2225](https://github.com/docker/libnetwork/pull/2225) -* Fixed a very rare condition where managers are not correctly triggering the reconnection logic. [docker/libnetwork#2226](https://github.com/docker/libnetwork/pull/2226) -* Changed loglevel from error to warning for missing disable_ipv6 file. [docker/libnetwork#2224](https://github.com/docker/libnetwork/pull/2224) - -#### Runtime - -* Fixed denial of service with large numbers in cpuset-cpus and cpuset-mems. [moby/moby#37967](https://github.com/moby/moby/pull/37967) -* Added stability improvements for devicemapper shutdown. [moby/moby#36307](https://github.com/moby/moby/pull/36307) [moby/moby#36438](https://github.com/moby/moby/pull/36438) - -#### Swarm Mode - -* Fixed the logic used for skipping over running tasks. 
[docker/swarmkit#2724](https://github.com/docker/swarmkit/pull/2724) -* Addressed unassigned task leak when a service is removed. [docker/swarmkit#2709](https://github.com/docker/swarmkit/pull/2709) - -### 18.03.1-ee-3 +## 18.03.1-ee-3 2018-08-30 #### Builder @@ -362,7 +355,7 @@ Ubuntu 14.04 "Trusty Tahr" [docker-ce-packaging#255](https://github.com/docker/d * Clean up tasks in dirty list for which the service has been deleted. [docker/swarmkit#2694](https://github.com/docker/swarmkit/pull/2694) * Propagate the provided external CA certificate to the external CA object in swarm. [docker/cli#1178](https://github.com/docker/cli/pull/1178) -### 18.03.1-ee-2 +## 18.03.1-ee-2 2018-07-10 > #### Important notes about this release @@ -374,8 +367,7 @@ Ubuntu 14.04 "Trusty Tahr" [docker-ce-packaging#255](https://github.com/docker/d + Add /proc/acpi to masked paths [(CVE-2018-10892)](https://cve.mitre.org/cgi-bin/cvename.cgi?name=2018-10892). [moby/moby#37404](https://github.com/moby/moby/pull/37404) - -### 18.03.1-ee-1 +## 18.03.1-ee-1 2018-06-27 > #### Important notes about this release @@ -399,7 +391,109 @@ Ubuntu 14.04 "Trusty Tahr" [docker-ce-packaging#255](https://github.com/docker/d + Support for `--chown` with `COPY` and `ADD` in `Dockerfile`. + Added functionality for the `docker logs` command to include the output of multiple logging drivers. -### 17.06.2-ee-17 +## 17.06.2-ee-20 +2019-02-28 + +### Bug fixes +* Fixed an issue to address the IPAM state from networkdb if the manager is not attached to the overlay network. (docker/escalation#1049) + +### Runtime + +* Updated to Go version 1.10.8. ++ Added cgroup namespace support. [docker/runc#7](https://github.com/docker/runc/pull/7) + +### Windows + +* Fixed `failed to register layer` bug on `docker pull` of Windows images. + +#### Known issues + +* When all Swarm managers are stopped at the same time, the swarm might end up in a +split-brain scenario. [Learn more](https://success.docker.com/article/KB000759).
+* Under certain conditions, swarm leader re-election may time out prematurely. During this period, docker commands may fail. Also during this time, creation of globally-scoped networks may be unstable. As a workaround, wait for leader election to complete before issuing commands to the cluster. +* It's recommended that users create overlay networks with `/24` blocks (the default), which provide 256 IP addresses, when networks are used by services created using VIP-based endpoint-mode (the default). This is because of limitations with Docker Swarm [moby/moby#30820](https://github.com/moby/moby/issues/30820). Users should _not_ work around this by increasing the IP block size. To work around this limitation, either use `dnsrr` endpoint-mode or use multiple smaller overlay networks. +* Docker may experience IP exhaustion if many tasks are assigned to a single overlay network, for example if many services are attached to that network or if services on the network are scaled to many replicas. The problem may also manifest when tasks are rescheduled because of node failures. After a node failure, Docker currently waits 24 hours to release overlay IP addresses. The problem can be diagnosed by looking for `failed to allocate network IP for task` messages in the Docker logs. +* SELinux enablement is not supported for containers on IBM Z on RHEL because of a missing Red Hat package. +* If a container is spawned on node A using the same IP address as a container destroyed on node B within the previous five minutes, the container on node A is not reachable until one of these two conditions occurs: + +1. The container on node A sends a packet out, or +2. The timer that cleans the ARP entry in the overlay namespace fires (around 5 minutes). + +As a workaround, send at least one packet out from each container (for example, a ping or a GARP).
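The `/24` sizing guidance in the known issues above can be made concrete with a short shell sketch. The network and service names, subnets, and image are hypothetical; the `docker` commands are only attempted when a Docker CLI is present and assume you are on a Swarm manager:

```shell
#!/bin/sh
# A /24 block carries 2^(32-24) = 256 addresses, the recommended ceiling
# for an overlay network backing services in VIP-based endpoint-mode.
echo "addresses per /24: $((1 << (32 - 24)))"

# Illustrative only -- runs solely where a Docker CLI is available.
if command -v docker >/dev/null 2>&1; then
  # Workaround 1: several small overlay networks instead of one large block.
  docker network create --driver overlay --subnet 10.10.1.0/24 app-net-1
  docker network create --driver overlay --subnet 10.10.2.0/24 app-net-2

  # Workaround 2: DNS round-robin endpoint mode, which avoids VIP allocation.
  docker service create --name web --network app-net-1 \
    --endpoint-mode dnsrr nginx:alpine
fi
```

With `dnsrr`, the swarm DNS returns the task IPs directly instead of allocating a virtual IP per service, which is why it sidesteps the per-network VIP limit.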
+ +## 17.06.2-ee-19 + +2019-02-11 + +### Security fixes for Docker Engine - Enterprise +* Updated `runc` to address a critical vulnerability that allows specially-crafted containers to gain administrative privileges on the host. [CVE-2019-5736](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-5736) +* Ubuntu 14.04 customers using a 3.13 kernel will need to upgrade to a supported Ubuntu 4.x kernel. + +#### Known issues + +* When all Swarm managers are stopped at the same time, the swarm might end up in a +split-brain scenario. [Learn more](https://success.docker.com/article/KB000759). +* Under certain conditions, swarm leader re-election may time out prematurely. During this period, docker commands may fail. Also during this time, creation of globally-scoped networks may be unstable. As a workaround, wait for leader election to complete before issuing commands to the cluster. +* It's recommended that users create overlay networks with `/24` blocks (the default), which provide 256 IP addresses, when networks are used by services created using VIP-based endpoint-mode (the default). This is because of limitations with Docker Swarm [moby/moby#30820](https://github.com/moby/moby/issues/30820). Users should _not_ work around this by increasing the IP block size. To work around this limitation, either use `dnsrr` endpoint-mode or use multiple smaller overlay networks. +* Docker may experience IP exhaustion if many tasks are assigned to a single overlay network, for example if many services are attached to that network or if services on the network are scaled to many replicas. The problem may also manifest when tasks are rescheduled because of node failures. After a node failure, Docker currently waits 24 hours to release overlay IP addresses. The problem can be diagnosed by looking for `failed to allocate network IP for task` messages in the Docker logs. +* SELinux enablement is not supported for containers on IBM Z on RHEL because of a missing Red Hat package.
+* If a container is spawned on node A using the same IP address as a container destroyed on node B within the previous five minutes, the container on node A is not reachable until one of these two conditions occurs: + +1. The container on node A sends a packet out, or +2. The timer that cleans the ARP entry in the overlay namespace fires (around 5 minutes). + +As a workaround, send at least one packet out from each container (for example, a ping or a GARP). + +## 17.06.2-ee-18 +2019-01-09 + +### Security fixes +* Upgraded Go language to 1.10.6 to resolve CVE-2018-16873, CVE-2018-16874, and CVE-2018-16875. +* Added `/proc/asound` to masked paths. +* Fixed authz plugin for 0-length content and path validation. + +### Fixes for Docker Engine - Enterprise +* Disabled kmem accounting in runc on RHEL/CentOS (docker/escalation#614, docker/escalation#692) +* Fixed a resource leak on `docker logs --follow` [moby/moby#37576](https://github.com/moby/moby/pull/37576) +* Masked proxy credentials from the URL when displayed in system info (docker/escalation#879) + +#### Known issues + +* When all Swarm managers are stopped at the same time, the swarm might end up in a +split-brain scenario. [Learn more](https://success.docker.com/article/KB000759). +* Under certain conditions, swarm leader re-election may time out prematurely. During this period, docker commands may fail. Also during this time, creation of globally-scoped networks may be unstable. As a workaround, wait for leader election to complete before issuing commands to the cluster. +* It's recommended that users create overlay networks with `/24` blocks (the default), which provide 256 IP addresses, when networks are used by services created using VIP-based endpoint-mode (the default). This is because of limitations with Docker Swarm [moby/moby#30820](https://github.com/moby/moby/issues/30820). Users should _not_ work around this by increasing the IP block size. To work around this limitation, either use `dnsrr` endpoint-mode or use multiple smaller overlay networks.
+* Docker may experience IP exhaustion if many tasks are assigned to a single overlay network, for example if many services are attached to that network or if services on the network are scaled to many replicas. The problem may also manifest when tasks are rescheduled because of node failures. After a node failure, Docker currently waits 24 hours to release overlay IP addresses. The problem can be diagnosed by looking for `failed to allocate network IP for task` messages in the Docker logs. +* SELinux enablement is not supported for containers on IBM Z on RHEL because of a missing Red Hat package. +* If a container is spawned on node A using the same IP address as a container destroyed on node B within the previous five minutes, the container on node A is not reachable until one of these two conditions occurs: + +1. The container on node A sends a packet out, or +2. The timer that cleans the ARP entry in the overlay namespace fires (around 5 minutes). + +As a workaround, send at least one packet out from each container (for example, a ping or a GARP). + +## 17.06.2-ee-17 2018-10-25 #### Networking @@ -418,7 +512,29 @@ Ubuntu 14.04 "Trusty Tahr" [docker-ce-packaging#255](https://github.com/docker/d * Fixed leaking task resources. [docker/swarmkit#2755](https://github.com/docker/swarmkit/pull/2755) * Fixed deadlock in dispatcher that could cause node to crash. [docker/swarmkit#2753](https://github.com/docker/swarmkit/pull/2753) -### 17.06.2-ee-16 +#### Known issues + +* When all Swarm managers are stopped at the same time, the swarm might end up in a +split-brain scenario. [Learn more](https://success.docker.com/article/KB000759). +* Under certain conditions, swarm leader re-election may time out prematurely. During this period, docker commands may fail. Also during this time, creation of globally-scoped networks may be unstable. As a workaround, wait for leader election to complete before issuing commands to the cluster.
+* It's recommended that users create overlay networks with `/24` blocks (the default), which provide 256 IP addresses, when networks are used by services created using VIP-based endpoint-mode (the default). This is because of limitations with Docker Swarm [moby/moby#30820](https://github.com/moby/moby/issues/30820). Users should _not_ work around this by increasing the IP block size. To work around this limitation, either use `dnsrr` endpoint-mode or use multiple smaller overlay networks. +* Docker may experience IP exhaustion if many tasks are assigned to a single overlay network, for example if many services are attached to that network or if services on the network are scaled to many replicas. The problem may also manifest when tasks are rescheduled because of node failures. After a node failure, Docker currently waits 24 hours to release overlay IP addresses. The problem can be diagnosed by looking for `failed to allocate network IP for task` messages in the Docker logs. +* SELinux enablement is not supported for containers on IBM Z on RHEL because of a missing Red Hat package. +* If a container is spawned on node A using the same IP address as a container destroyed on node B within the previous five minutes, the container on node A is not reachable until one of these two conditions occurs: + +1. The container on node A sends a packet out, or +2. The timer that cleans the ARP entry in the overlay namespace fires (around 5 minutes). + +As a workaround, send at least one packet out from each container (for example, a ping or a GARP). + +## 17.06.2-ee-16 2018-07-26 #### Client @@ -444,13 +560,57 @@ Ubuntu 14.04 "Trusty Tahr" [docker-ce-packaging#255](https://github.com/docker/d * RoleManager will remove deleted nodes from the cluster membership. [docker/swarmkit#2607](https://github.com/docker/swarmkit/pull/2607) - Fix unassigned task leak when service is removed.
[docker/swarmkit#2708](https://github.com/docker/swarmkit/pull/2708) -### 17.06.2-ee-15 +#### Known issues + +* When all Swarm managers are stopped at the same time, the swarm might end up in a +split-brain scenario. [Learn more](https://success.docker.com/article/KB000759). +* Under certain conditions, swarm leader re-election may time out prematurely. During this period, docker commands may fail. Also during this time, creation of globally-scoped networks may be unstable. As a workaround, wait for leader election to complete before issuing commands to the cluster. +* It's recommended that users create overlay networks with `/24` blocks (the default), which provide 256 IP addresses, when networks are used by services created using VIP-based endpoint-mode (the default). This is because of limitations with Docker Swarm [moby/moby#30820](https://github.com/moby/moby/issues/30820). Users should _not_ work around this by increasing the IP block size. To work around this limitation, either use `dnsrr` endpoint-mode or use multiple smaller overlay networks. +* Docker may experience IP exhaustion if many tasks are assigned to a single overlay network, for example if many services are attached to that network or if services on the network are scaled to many replicas. The problem may also manifest when tasks are rescheduled because of node failures. After a node failure, Docker currently waits 24 hours to release overlay IP addresses. The problem can be diagnosed by looking for `failed to allocate network IP for task` messages in the Docker logs. +* SELinux enablement is not supported for containers on IBM Z on RHEL because of a missing Red Hat package. +* If a container is spawned on node A using the same IP address as a container destroyed on node B within the previous five minutes, the container on node A is not reachable until one of these two conditions occurs: + +1. The container on node A sends a packet out, or +2. The timer that cleans the ARP entry in the overlay namespace fires (around 5 minutes). + +As a workaround, send at least one packet out from each container (for example, a ping or a GARP). + +## 17.06.2-ee-15 2018-07-10 #### Runtime - Add /proc/acpi to masked paths [(CVE-2018-10892)](https://cve.mitre.org/cgi-bin/cvename.cgi?name=2018-10892). [moby/moby#37404](https://github.com/moby/moby/pull/37404) +#### Known issues + +* When all Swarm managers are stopped at the same time, the swarm might end up in a +split-brain scenario. [Learn more](https://success.docker.com/article/KB000759). +* Under certain conditions, swarm leader re-election may time out prematurely. During this period, docker commands may fail. Also during this time, creation of globally-scoped networks may be unstable. As a workaround, wait for leader election to complete before issuing commands to the cluster. +* It's recommended that users create overlay networks with `/24` blocks (the default), which provide 256 IP addresses, when networks are used by services created using VIP-based endpoint-mode (the default). This is because of limitations with Docker Swarm [moby/moby#30820](https://github.com/moby/moby/issues/30820). Users should _not_ work around this by increasing the IP block size. To work around this limitation, either use `dnsrr` endpoint-mode or use multiple smaller overlay networks. +* Docker may experience IP exhaustion if many tasks are assigned to a single overlay network, for example if many services are attached to that network or if services on the network are scaled to many replicas. The problem may also manifest when tasks are rescheduled because of node failures. After a node failure, Docker currently waits 24 hours to release overlay IP addresses. The problem can be diagnosed by looking for `failed to allocate network IP for task` messages in the Docker logs. +* SELinux enablement is not supported for containers on IBM Z on RHEL because of a missing Red Hat package.
+* If a container is spawned on node A using the same IP address as a container destroyed on node B within the previous five minutes, the container on node A is not reachable until one of these two conditions occurs: + +1. The container on node A sends a packet out, or +2. The timer that cleans the ARP entry in the overlay namespace fires (around 5 minutes). + +As a workaround, send at least one packet out from each container (for example, a ping or a GARP). + ### 17.06.2-ee-14 2018-06-21 @@ -470,21 +630,87 @@ Ubuntu 14.04 "Trusty Tahr" [docker-ce-packaging#255](https://github.com/docker/d - Fix `docker stack deploy --prune` with empty name removes all swarm services. [moby/moby#36776](https://github.com/moby/moby/issues/36776) -### 17.06.2-ee-13 +#### Known issues + +* When all Swarm managers are stopped at the same time, the swarm might end up in a +split-brain scenario. [Learn more](https://success.docker.com/article/KB000759). +* Under certain conditions, swarm leader re-election may time out prematurely. During this period, docker commands may fail. Also during this time, creation of globally-scoped networks may be unstable. As a workaround, wait for leader election to complete before issuing commands to the cluster. +* It's recommended that users create overlay networks with `/24` blocks (the default), which provide 256 IP addresses, when networks are used by services created using VIP-based endpoint-mode (the default). This is because of limitations with Docker Swarm [moby/moby#30820](https://github.com/moby/moby/issues/30820). Users should _not_ work around this by increasing the IP block size. To work around this limitation, either use `dnsrr` endpoint-mode or use multiple smaller overlay networks. +* Docker may experience IP exhaustion if many tasks are assigned to a single overlay network, for example if many services are attached to that network or if services on the network are scaled to many replicas. The problem may also manifest when tasks are rescheduled because of node failures. After a node failure, Docker currently waits 24 hours to release overlay IP addresses. The problem can be diagnosed by looking for `failed to allocate network IP for task` messages in the Docker logs. +* SELinux enablement is not supported for containers on IBM Z on RHEL because of a missing Red Hat package. +* If a container is spawned on node A using the same IP address as a container destroyed on node B within the previous five minutes, the container on node A is not reachable until one of these two conditions occurs: + +1. The container on node A sends a packet out, or +2. The timer that cleans the ARP entry in the overlay namespace fires (around 5 minutes). + +As a workaround, send at least one packet out from each container (for example, a ping or a GARP). + +## 17.06.2-ee-13 2018-06-04 #### Networking - Fix attachable containers that may leave DNS state when exiting. [docker/libnetwork#2175](https://github.com/docker/libnetwork/pull/2175) -### 17.06.2-ee-12 +#### Known issues + +* When all Swarm managers are stopped at the same time, the swarm might end up in a +split-brain scenario. [Learn more](https://success.docker.com/article/KB000759). +* Under certain conditions, swarm leader re-election may time out prematurely. During this period, docker commands may fail. Also during this time, creation of globally-scoped networks may be unstable. As a workaround, wait for leader election to complete before issuing commands to the cluster. +* It's recommended that users create overlay networks with `/24` blocks (the default), which provide 256 IP addresses, when networks are used by services created using VIP-based endpoint-mode (the default). This is because of limitations with Docker Swarm [moby/moby#30820](https://github.com/moby/moby/issues/30820). Users should _not_ work around this by increasing the IP block size. To work around this limitation, either use `dnsrr` endpoint-mode or use multiple smaller overlay networks.
+* Docker may experience IP exhaustion if many tasks are assigned to a single overlay network, for example if many services are attached to that network or if services on the network are scaled to many replicas. The problem may also manifest when tasks are rescheduled because of node failures. After a node failure, Docker currently waits 24 hours to release overlay IP addresses. The problem can be diagnosed by looking for `failed to allocate network IP for task` messages in the Docker logs. +* SELinux enablement is not supported for containers on IBM Z on RHEL because of a missing Red Hat package. +* If a container is spawned on node A using the same IP address as a container destroyed on node B within the previous five minutes, the container on node A is not reachable until one of these two conditions occurs: + +1. The container on node A sends a packet out, or +2. The timer that cleans the ARP entry in the overlay namespace fires (around 5 minutes). + +As a workaround, send at least one packet out from each container (for example, a ping or a GARP). + +## 17.06.2-ee-12 2018-05-29 #### Networking - Fix to allow service update with no connection loss. [docker/libnetwork#2157](https://github.com/docker/libnetwork/pull/2157) -### 17.06.2-ee-11 +#### Known issues + +* When all Swarm managers are stopped at the same time, the swarm might end up in a +split-brain scenario. [Learn more](https://success.docker.com/article/KB000759). +* Under certain conditions, swarm leader re-election may time out prematurely. During this period, docker commands may fail. Also during this time, creation of globally-scoped networks may be unstable. As a workaround, wait for leader election to complete before issuing commands to the cluster. +* It's recommended that users create overlay networks with `/24` blocks (the default), which provide 256 IP addresses, when networks are used by services created using VIP-based endpoint-mode (the default). This is because of limitations with Docker Swarm [moby/moby#30820](https://github.com/moby/moby/issues/30820). Users should _not_ work around this by increasing the IP block size. To work around this limitation, either use `dnsrr` endpoint-mode or use multiple smaller overlay networks. +* Docker may experience IP exhaustion if many tasks are assigned to a single overlay network, for example if many services are attached to that network or if services on the network are scaled to many replicas. The problem may also manifest when tasks are rescheduled because of node failures. After a node failure, Docker currently waits 24 hours to release overlay IP addresses. The problem can be diagnosed by looking for `failed to allocate network IP for task` messages in the Docker logs. +* SELinux enablement is not supported for containers on IBM Z on RHEL because of a missing Red Hat package. +* If a container is spawned on node A using the same IP address as a container destroyed on node B within the previous five minutes, the container on node A is not reachable until one of these two conditions occurs: + +1. The container on node A sends a packet out, or +2. The timer that cleans the ARP entry in the overlay namespace fires (around 5 minutes). + +As a workaround, send at least one packet out from each container (for example, a ping or a GARP). + +## 17.06.2-ee-11 2018-05-17 #### Client @@ -508,15 +734,52 @@ Ubuntu 14.04 "Trusty Tahr" [docker-ce-packaging#255](https://github.com/docker/d * When all Swarm managers are stopped at the same time, the swarm might end up in a split-brain scenario. [Learn more](https://success.docker.com/article/KB000759). +* Under certain conditions, swarm leader re-election may time out prematurely. During this period, docker commands may fail. Also during this time, creation of globally-scoped networks may be unstable. As a workaround, wait for leader election to complete before issuing commands to the cluster.
+* It's recommended that users create overlay networks with `/24` blocks (the default), which provide 256 IP addresses, when networks are used by services created using VIP-based endpoint-mode (the default). This is because of limitations with Docker Swarm [moby/moby#30820](https://github.com/moby/moby/issues/30820). Users should _not_ work around this by increasing the IP block size. To work around this limitation, either use `dnsrr` endpoint-mode or use multiple smaller overlay networks. +* Docker may experience IP exhaustion if many tasks are assigned to a single overlay network, for example if many services are attached to that network or if services on the network are scaled to many replicas. The problem may also manifest when tasks are rescheduled because of node failures. After a node failure, Docker currently waits 24 hours to release overlay IP addresses. The problem can be diagnosed by looking for `failed to allocate network IP for task` messages in the Docker logs. +* SELinux enablement is not supported for containers on IBM Z on RHEL because of a missing Red Hat package. +* If a container is spawned on node A using the same IP address as a container destroyed on node B within the previous five minutes, the container on node A is not reachable until one of these two conditions occurs: -### 17.06.2-ee-10 +1. The container on node A sends a packet out, or +2. The timer that cleans the ARP entry in the overlay namespace fires (around 5 minutes). + +As a workaround, send at least one packet out from each container (for example, a ping or a GARP). + +## 17.06.2-ee-10 2018-04-27 #### Runtime * Fix version output to not have `-dev`. -### 17.06.2-ee-9 +#### Known issues + +* Under certain conditions, swarm leader re-election may time out prematurely. During this period, docker commands may fail. Also during this time, creation of globally-scoped networks may be unstable. As a workaround, wait for leader election to complete before issuing commands to the cluster.
+* It's recommended that users create overlay networks with `/24` blocks (the default), which provide 256 IP addresses, when networks are used by services created using VIP-based endpoint-mode (the default). This is because of limitations with Docker Swarm [moby/moby#30820](https://github.com/moby/moby/issues/30820). Users should _not_ work around this by increasing the IP block size. To work around this limitation, either use `dnsrr` endpoint-mode or use multiple smaller overlay networks. +* Docker may experience IP exhaustion if many tasks are assigned to a single overlay network, for example if many services are attached to that network or if services on the network are scaled to many replicas. The problem may also manifest when tasks are rescheduled because of node failures. After a node failure, Docker currently waits 24 hours to release overlay IP addresses. The problem can be diagnosed by looking for `failed to allocate network IP for task` messages in the Docker logs. +* SELinux enablement is not supported for containers on IBM Z on RHEL because of a missing Red Hat package. +* If a container is spawned on node A using the same IP address as a container destroyed on node B within the previous five minutes, the container on node A is not reachable until one of these two conditions occurs: + +1. The container on node A sends a packet out, or +2. The timer that cleans the ARP entry in the overlay namespace fires (around 5 minutes). + +As a workaround, send at least one packet out from each container (for example, a ping or a GARP). + +## 17.06.2-ee-9 2018-04-26 #### Runtime @@ -530,7 +793,27 @@ split-brain scenario. [Learn more](https://success.docker.com/article/KB000759). - Increase raft ElectionTick to 10xHeartbeatTick. [docker/swarmkit#2564](https://github.com/docker/swarmkit/pull/2564) - Adding logic to restore networks in order.
[docker/swarmkit#2584](https://github.com/docker/swarmkit/pull/2584) -### 17.06.2-ee-8 +#### Known issues + +* Under certain conditions, swarm leader re-election may time out prematurely. During this period, docker commands may fail. Also during this time, creation of globally-scoped networks may be unstable. As a workaround, wait for leader election to complete before issuing commands to the cluster. +* It's recommended that users create overlay networks with `/24` blocks (the default), which provide 256 IP addresses, when networks are used by services created using VIP-based endpoint-mode (the default). This is because of limitations with Docker Swarm [moby/moby#30820](https://github.com/moby/moby/issues/30820). Users should _not_ work around this by increasing the IP block size. To work around this limitation, either use `dnsrr` endpoint-mode or use multiple smaller overlay networks. +* Docker may experience IP exhaustion if many tasks are assigned to a single overlay network, for example if many services are attached to that network or if services on the network are scaled to many replicas. The problem may also manifest when tasks are rescheduled because of node failures. After a node failure, Docker currently waits 24 hours to release overlay IP addresses. The problem can be diagnosed by looking for `failed to allocate network IP for task` messages in the Docker logs. +* SELinux enablement is not supported for containers on IBM Z on RHEL because of a missing Red Hat package. +* If a container is spawned on node A using the same IP address as a container destroyed on node B within the previous five minutes, the container on node A is not reachable until one of these two conditions occurs: + +1. The container on node A sends a packet out, or +2. The timer that cleans the ARP entry in the overlay namespace fires (around 5 minutes). + +As a workaround, send at least one packet out from each container (for example, a ping or a GARP).
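The IP-exhaustion symptom called out in the known issues can be checked for by scanning the daemon logs for the allocator error. A self-contained sketch against a sample log line (the timestamp and task ID are hypothetical; on a real host you would pipe `journalctl -u docker.service`, or the daemon log file, into the same `grep`):

```shell
#!/bin/sh
# The message that signals overlay IP exhaustion in the Docker logs.
MSG='failed to allocate network IP for task'

# Sample excerpt standing in for real daemon output.
SAMPLE='time="2018-04-26T10:00:00Z" level=error msg="failed to allocate network IP for task abc123"'

# Count matching lines; a non-zero count suggests the network is out of IPs.
printf '%s\n' "$SAMPLE" | grep -c "$MSG"   # prints 1
```

A rising count over time usually means tasks are churning faster than overlay addresses are released.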
+ +## 17.06.2-ee-8 2018-04-17 #### Runtime @@ -552,8 +835,20 @@ split-brain scenario. [Learn more](https://success.docker.com/article/KB000759). this time, creation of globally-scoped networks may be unstable. As a workaround, wait for leader election to complete before issuing commands to the cluster. +* It's recommended that users create overlay networks with `/24` blocks (the default) of 256 IP addresses when networks are used by services created using VIP-based endpoint-mode (the default). This is because of limitations with Docker Swarm [moby/moby#30820](https://github.com/moby/moby/issues/30820). Users should _not_ work around this by increasing the IP block size. To work around this limitation, either use `dnsrr` endpoint-mode or use multiple smaller overlay networks. +* Docker may experience IP exhaustion if many tasks are assigned to a single overlay network, for example if many services are attached to that network or because services on the network are scaled to many replicas. The problem may also manifest when tasks are rescheduled because of node failures. In the case of node failure, Docker currently waits 24 hours to release overlay IP addresses. The problem can be diagnosed by looking for `failed to allocate network IP for task` messages in the Docker logs. +* SELinux enablement is not supported for containers on IBM Z on RHEL because of a missing Red Hat package. +* If a container is spawned on node A using the same IP as a container destroyed +on node B within 5 minutes of the time that it exited, the container on node A is +not reachable until one of these two conditions happens: -### 17.06.2-ee-7 +1. The container on node A sends a packet out, or +2. The timer that cleans the ARP entry in the overlay namespace is triggered (around 5 minutes). + +As a workaround, send at least one packet out from each container +(for example, a ping or GARP). + +## 17.06.2-ee-7 2018-03-19 #### Important notes about this release @@ -604,7 +899,22 @@ split-brain scenario. 
[Learn more](https://success.docker.com/article/KB000759). - Synchronize Dispatcher.Stop() with incoming rpcs [docker/swarmkit#2524](https://github.com/docker/swarmkit/pull/2524) - Fix IP overlap with empty EndpointSpec [docker/swarmkit#2511](https://github.com/docker/swarmkit/pull/2511) -### 17.06.2-ee-6 +#### Known issues + +* It's recommended that users create overlay networks with `/24` blocks (the default) of 256 IP addresses when networks are used by services created using VIP-based endpoint-mode (the default). This is because of limitations with Docker Swarm [moby/moby#30820](https://github.com/moby/moby/issues/30820). Users should _not_ work around this by increasing the IP block size. To work around this limitation, either use `dnsrr` endpoint-mode or use multiple smaller overlay networks. +* Docker may experience IP exhaustion if many tasks are assigned to a single overlay network, for example if many services are attached to that network or because services on the network are scaled to many replicas. The problem may also manifest when tasks are rescheduled because of node failures. In the case of node failure, Docker currently waits 24 hours to release overlay IP addresses. The problem can be diagnosed by looking for `failed to allocate network IP for task` messages in the Docker logs. +* SELinux enablement is not supported for containers on IBM Z on RHEL because of a missing Red Hat package. +* If a container is spawned on node A using the same IP as a container destroyed +on node B within 5 minutes of the time that it exited, the container on node A is +not reachable until one of these two conditions happens: + +1. The container on node A sends a packet out, or +2. The timer that cleans the ARP entry in the overlay namespace is triggered (around 5 minutes). + +As a workaround, send at least one packet out from each container +(for example, a ping or GARP). + +## 17.06.2-ee-6 2017-11-27 #### Runtime @@ -621,7 +931,22 @@ split-brain scenario. [Learn more](https://success.docker.com/article/KB000759). 
* Only shut down old tasks on success [docker/swarmkit#2308](https://github.com/docker/swarmkit/pull/2308) * Error on cluster spec name change [docker/swarmkit#2436](https://github.com/docker/swarmkit/pull/2436) -### 17.06.2-ee-5 +#### Known issues + +* It's recommended that users create overlay networks with `/24` blocks (the default) of 256 IP addresses when networks are used by services created using VIP-based endpoint-mode (the default). This is because of limitations with Docker Swarm [moby/moby#30820](https://github.com/moby/moby/issues/30820). Users should _not_ work around this by increasing the IP block size. To work around this limitation, either use `dnsrr` endpoint-mode or use multiple smaller overlay networks. +* Docker may experience IP exhaustion if many tasks are assigned to a single overlay network, for example if many services are attached to that network or because services on the network are scaled to many replicas. The problem may also manifest when tasks are rescheduled because of node failures. In the case of node failure, Docker currently waits 24 hours to release overlay IP addresses. The problem can be diagnosed by looking for `failed to allocate network IP for task` messages in the Docker logs. +* SELinux enablement is not supported for containers on IBM Z on RHEL because of a missing Red Hat package. +* If a container is spawned on node A using the same IP as a container destroyed +on node B within 5 minutes of the time that it exited, the container on node A is +not reachable until one of these two conditions happens: + +1. The container on node A sends a packet out, or +2. The timer that cleans the ARP entry in the overlay namespace is triggered (around 5 minutes). + +As a workaround, send at least one packet out from each container +(for example, a ping or GARP). + +## 17.06.2-ee-5 2017-11-02 #### Important notes about this release @@ -670,8 +995,17 @@ split-brain scenario. [Learn more](https://success.docker.com/article/KB000759). 
* It's recommended that users create overlay networks with `/24` blocks (the default) of 256 IP addresses when networks are used by services created using VIP-based endpoint-mode (the default). This is because of limitations with Docker Swarm [moby/moby#30820](moby/moby/issues/30820). Users should _not_ work around this by increasing the IP block size. To work around this limitation, either use `dnsrr` endpoint-mode or use multiple smaller overlay networks. * Docker may experience IP exhaustion if many tasks are assigned to a single overlay network, for example if many services are attached to that network or because services on the network are scaled to many replicas. The problem may also manifest when tasks are rescheduled because of node failures. In case of node failure, Docker currently waits 24h to release overlay IP addresses. The problem can be diagnosed by looking for `failed to allocate network IP for task` messages in the Docker logs. * SELinux enablement is not supported for containers on IBM Z on RHEL because of missing Red Hat package. +* If a container is spawned on node A using the same IP as a container destroyed +on node B within 5 minutes of the time that it exited, the container on node A is +not reachable until one of these two conditions happens: -### 17.06.2-ee-4 +1. The container on node A sends a packet out, or +2. The timer that cleans the ARP entry in the overlay namespace is triggered (around 5 minutes). + +As a workaround, send at least one packet out from each container +(for example, a ping or GARP). + +## 17.06.2-ee-4 2017-10-12 #### Client @@ -689,15 +1023,38 @@ split-brain scenario. [Learn more](https://success.docker.com/article/KB000759). 
* Overlay fix for transient IP reuse [docker/libnetwork#1935](https://github.com/docker/libnetwork/pull/1935) [docker/libnetwork#1968](https://github.com/docker/libnetwork/pull/1968) * Serialize IP allocation [docker/libnetwork#1788](https://github.com/docker/libnetwork/pull/1788) +#### Known issues -### 17.06.2-ee-3 +If a container is spawned on node A using the same IP as a container destroyed +on node B within 5 minutes of the time that it exited, the container on node A is +not reachable until one of these two conditions happens: + +1. The container on node A sends a packet out, or +2. The timer that cleans the ARP entry in the overlay namespace is triggered (around 5 minutes). + +As a workaround, send at least one packet out from each container +(for example, a ping or GARP). + +## 17.06.2-ee-3 2017-09-22 #### Swarm mode - Increase max message size to allow larger snapshots [docker/swarmkit#131](https://github.com/docker/swarmkit/pull/131) -### 17.06.1-ee-2 +#### Known issues + +If a container is spawned on node A using the same IP as a container destroyed +on node B within 5 minutes of the time that it exited, the container on node A is +not reachable until one of these two conditions happens: + +1. The container on node A sends a packet out, or +2. The timer that cleans the ARP entry in the overlay namespace is triggered (around 5 minutes). + +As a workaround, send at least one packet out from each container +(for example, a ping or GARP). + +## 17.06.1-ee-2 2017-08-24 #### Client @@ -717,7 +1074,19 @@ split-brain scenario. [Learn more](https://success.docker.com/article/KB000759). - Ignore PullOptions for running tasks [#2351](https://github.com/docker/swarmkit/pull/2351) -### 17.06.1-ee-1 +#### Known issues + +If a container is spawned on node A using the same IP as a container destroyed +on node B within 5 minutes of the time that it exited, the container on node A is +not reachable until one of these two conditions happens: + +1. The container on node A sends a packet out, or +2. 
The timer that cleans the ARP entry in the overlay namespace is triggered (around 5 minutes). + +As a workaround, send at least one packet out from each container +(for example, a ping or GARP). + +## 17.06.1-ee-1 2017-08-16 #### Important notes about this release @@ -734,7 +1103,6 @@ split-brain scenario. [Learn more](https://success.docker.com/article/KB000759). migrated to the v2 protocol, set the `--disable-legacy-registry=false` daemon option. - #### Builder + Add `--iidfile` option to docker build. It allows specifying a location where to save the resulting image ID @@ -993,7 +1361,7 @@ not reachable until one of these 2 conditions happens: As a workaround, send at least a packet out from each container like (ping, GARP, etc). -### Docker EE 17.03.2-ee-8 +## Docker EE 17.03.2-ee-8 2017-12-13 * Handle cleanup DNS for attachable container to prevent leak in name resolution [docker/libnetwork#1999](https://github.com/docker/libnetwork/pull/1999) @@ -1008,12 +1376,85 @@ As a workaround, send at least a packet out from each container like * Don't abort when setting `may_detach_mounts` [moby/moby#35172](https://github.com/moby/moby/pull/35172) * Protect health monitor channel to prevent engine panic [moby/moby#35482](https://github.com/moby/moby/pull/35482) -### Docker EE 17.03.2-ee-7 +## Docker EE 17.03.2-ee-7 2017-10-04 * Fix logic in network resource reaping to prevent memory leak [docker/libnetwork#1944](https://github.com/docker/libnetwork/pull/1944) [docker/libnetwork#1960](https://github.com/docker/libnetwork/pull/1960) * Increase max GRPC message size to 128MB for larger snapshots so newly added managers can successfully join [docker/swarmkit#2375](https://github.com/docker/swarmkit/pull/2375) +### Docker EE 17.03.2-ee-6 +2017-08-24 + +* Fix daemon panic on docker image push [moby/moby#33105](https://github.com/moby/moby/pull/33105) +* Fix panic in concurrent network creation/deletion operations 
[docker/libnetwork#1861](https://github.com/docker/libnetwork/pull/1861) +* Improve network db stability under stressful situations [docker/libnetwork#1860](https://github.com/docker/libnetwork/pull/1860) +* Enable TCP Keep-Alive in Docker client [docker/cli#415](https://github.com/docker/cli/pull/415) +* Lock goroutine to OS thread while changing NS [docker/libnetwork#1911](https://github.com/docker/libnetwork/pull/1911) +* Ignore PullOptions for running tasks [docker/swarmkit#2351](https://github.com/docker/swarmkit/pull/2351) + +### Docker EE 17.03.2-ee-5 +2017-07-20 + +* Add more locking to storage drivers [#31136](https://github.com/moby/moby/pull/31136) +* Prevent data race on `docker network connect/disconnect` [#33456](https://github.com/moby/moby/pull/33456) +* Improve service discovery reliability [#1796](https://github.com/docker/libnetwork/pull/1796) [#1808](https://github.com/docker/libnetwork/pull/1808) +* Fix resource leak in swarm mode [#2215](https://github.com/docker/swarmkit/pull/2215) +* Optimize `docker system df` for volumes on NFS [#33620](https://github.com/moby/moby/pull/33620) +* Fix validation bug with host-mode ports in swarm mode [#2177](https://github.com/docker/swarmkit/pull/2177) +* Fix potential crash in swarm mode [#2268](https://github.com/docker/swarmkit/pull/2268) +* Improve network control-plane reliability [#1704](https://github.com/docker/libnetwork/pull/1704) +* Do not error out when selinux relabeling is not supported on volume filesystem [#33831](https://github.com/moby/moby/pull/33831) +* Remove debugging code for aufs ebusy errors [#31665](https://github.com/moby/moby/pull/31665) +* Prevent resource leak on healthchecks [#33781](https://github.com/moby/moby/pull/33781) +* Fix issue where containerd supervisor may exit prematurely [#32590](https://github.com/moby/moby/pull/32590) +* Fix potential containerd crash [#2](https://github.com/docker/containerd/pull/2) +* Ensure server details are set in client even when an 
error is returned [#33827](https://github.com/moby/moby/pull/33827) +* Fix issue where slow/dead `docker logs` clients can block the container [#33897](https://github.com/moby/moby/pull/33897) +* Fix potential panic on Windows when running as a service [#32244](https://github.com/moby/moby/pull/32244) + +### Docker EE 17.03.2-ee-4 +2017-06-01 + +Refer to the [detailed list](https://github.com/moby/moby/releases/tag/v17.03.2-ce) of all changes since the release of Docker EE 17.03.1-ee-3 + +**Note**: This release includes a fix for potential data loss under certain +circumstances with the local (built-in) volume driver. + +### Docker EE 17.03.1-ee-3 +2017-03-30 + +* Fix an issue with the SELinux policy for Oracle Linux [#31501](https://github.com/docker/docker/pull/31501) + +### Docker EE 17.03.1-ee-2 +2017-03-28 + +* Fix issue with swarm CA timeouts [#2063](https://github.com/docker/swarmkit/pull/2063) [#2064](https://github.com/docker/swarmkit/pull/2064/files) + +Refer to the [detailed list](https://github.com/moby/moby/releases/tag/v17.03.1-ce) of all changes since the release of Docker EE 17.03.0-ee-1 + +### Docker EE 17.03.0-ee-1 (2 Mar 2017) + +Initial Docker EE release, based on Docker CE 17.03.0 + +* Optimize size calculation for `docker system df` container size [#31159](https://github.com/docker/docker/pull/31159) + +## Older Docker Engine CE Release notes + +## 18.06.3-ce + +2019-02-19 + +### Security fixes for Docker Engine - Community +* Change how the `runc` critical vulnerability patch is applied to include the fix in RPM packages. [docker/engine#156](https://github.com/docker/engine/pull/156) + +## 18.06.2 + +2019-02-11 + +### Security fixes for Docker Engine - Community +* Update `runc` to address a critical vulnerability that allows specially-crafted containers to gain administrative privileges on the host. 
[CVE-2019-5736](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-5736) +* Ubuntu 14.04 customers using a 3.13 kernel will need to upgrade to a supported Ubuntu 4.x kernel + ## 18.06.1-ce 2018-08-21 @@ -1219,67 +1660,6 @@ As a workaround, send at least a packet out from each container like ## 18.03.1-ce 2018-04-26 -### Docker EE 17.03.2-ee-6 -2017-08-24 - -* Fix daemon panic on docker image push [moby/moby#33105](https://github.com/moby/moby/pull/33105) -* Fix panic in concurrent network creation/deletion operations [docker/libnetwork#1861](https://github.com/docker/libnetwork/pull/1861) -* Improve network db stability under stressful situations [docker/libnetwork#1860](https://github.com/docker/libnetwork/pull/1860) -* Enable TCP Keep-Alive in Docker client [docker/cli#415](https://github.com/docker/cli/pull/415) -* Lock goroutine to OS thread while changing NS [docker/libnetwork#1911](https://github.com/docker/libnetwork/pull/1911) -* Ignore PullOptions for running tasks [docker/swarmkit#2351](https://github.com/docker/swarmkit/pull/2351) - -### Docker EE 17.03.2-ee-5 -20 Jul 2017 - -* Add more locking to storage drivers [#31136](https://github.com/moby/moby/pull/31136) -* Prevent data race on `docker network connect/disconnect` [#33456](https://github.com/moby/moby/pull/33456) -* Improve service discovery reliability [#1796](https://github.com/docker/libnetwork/pull/1796) [#18078](https://github.com/docker/libnetwork/pull/1808) -* Fix resource leak in swarm mode [#2215](https://github.com/docker/swarmkit/pull/2215) -* Optimize `docker system df` for volumes on NFS [#33620](https://github.com/moby/moby/pull/33620) -* Fix validation bug with host-mode ports in swarm mode [#2177](https://github.com/docker/swarmkit/pull/2177) -* Fix potential crash in swarm mode [#2268](https://github.com/docker/swarmkit/pull/2268) -* Improve network control-plane reliability [#1704](https://github.com/docker/libnetwork/pull/1704) -* Do not error out when selinux relabeling is 
not supported on volume filesystem [#33831](https://github.com/moby/moby/pull/33831) -* Remove debugging code for aufs ebusy errors [#31665](https://github.com/moby/moby/pull/31665) -* Prevent resource leak on healthchecks [#33781](https://github.com/moby/moby/pull/33781) -* Fix issue where containerd supervisor may exit prematurely [#32590](https://github.com/moby/moby/pull/32590) -* Fix potential containerd crash [#2](https://github.com/docker/containerd/pull/2) -* Ensure server details are set in client even when an error is returned [#33827](https://github.com/moby/moby/pull/33827) -* Fix issue where slow/dead `docker logs` clients can block the container [#33897](https://github.com/moby/moby/pull/33897) -* Fix potential panic on Windows when running as a service [#32244](https://github.com/moby/moby/pull/32244) - -### Docker EE 17.03.2-ee-4 -2017-06-01 - -Refer to the [detailed list](https://github.com/moby/moby/releases/tag/v17.03.2-ce) of all changes since the release of Docker EE 17.03.1-ee-3 - -**Note**: This release includes a fix for potential data loss under certain -circumstances with the local (built-in) volume driver. 
- -### Docker EE 17.03.1-ee-3 -2017-03-30 - -* Fix an issue with the SELinux policy for Oracle Linux [#31501](https://github.com/docker/docker/pull/31501) - -### Docker EE 17.03.1-ee-2 -2017-03-28 - -* Fix issue with swarm CA timeouts [#2063](https://github.com/docker/swarmkit/pull/2063) [#2064](https://github.com/docker/swarmkit/pull/2064/files) - -Refer to the [detailed list](https://github.com/moby/moby/releases/tag/v17.03.1-ce) of all changes since the release of Docker EE 17.03.0-ee-1 - -### Docker EE 17.03.0-ee-1 (2 Mar 2017) - -Initial Docker EE release, based on Docker CE 17.03.0 - -* Optimize size calculation for `docker system df` container size [#31159](https://github.com/docker/docker/pull/31159) - -## Older Docker Engine CE Release notes - -### 18.03.1-ce -2018-04-26 - #### Client - Fix error with merge compose file with networks [docker/cli#983](https://github.com/docker/cli/pull/983) @@ -1315,7 +1695,7 @@ Initial Docker EE release, based on Docker CE 17.03.0 * Allow for larger preset property values, do not override [docker/libnetwork#2124](https://github.com/docker/libnetwork/pull/2124) * Prevent panics on concurrent reads/writes when calling `changeNodeState` [docker/libnetwork#2136](https://github.com/docker/libnetwork/pull/2136) -### 18.03.0-ce +## 18.03.0-ce 2018-03-21 #### Builder @@ -1433,7 +1813,7 @@ Initial Docker EE release, based on Docker CE 17.03.0 + Add swarm types to bash completion event type filter [docker/cli#888](https://github.com/docker/cli/pull/888) - Fix issue where network inspect does not show Created time for networks in swarm scope [moby/moby#36095](https://github.com/moby/moby/pull/36095) -### 17.12.1-ce +## 17.12.1-ce 2018-02-27 #### Client @@ -1474,7 +1854,12 @@ Initial Docker EE release, based on Docker CE 17.03.0 #### Swarm * Remove watchMiss from swarm mode [docker/libnetwork#2047](https://github.com/docker/libnetwork/pull/2047) -### 17.12.0-ce +#### Known Issues +* Health check no longer uses the container's working 
directory [moby/moby#35843](https://github.com/moby/moby/issues/35843) +* Errors not returned from client in stack deploy configs [docker/cli#757](https://github.com/docker/cli/pull/757) +* Docker cannot use memory limit when using systemd options [moby/moby#35123](https://github.com/moby/moby/issues/35123) + +## 17.12.0-ce 2017-12-27 #### Known Issues @@ -1577,7 +1962,7 @@ Initial Docker EE release, based on Docker CE 17.03.0 * Pass Version to engine static builds [docker/docker-ce-packaging#70](https://github.com/docker/docker-ce-packaging/pull/70) + Added support for aarch64 on Debian (stretch/jessie) and Ubuntu Zesty or newer [docker/docker-ce-packaging#35](https://github.com/docker/docker-ce-packaging/pull/35) -### 17.09.1-ce +## 17.09.1-ce 2017-12-07 #### Builder @@ -1621,7 +2006,7 @@ Initial Docker EE release, based on Docker CE 17.03.0 - Provide custom gRPC dialer to override default proxy dialer [docker/swarmkit/#2457](https://github.com/docker/swarmkit/pull/2457) - Avoids recursive readlock on swarm info [moby/moby#35388](https://github.com/moby/moby/pull/35388) -### 17.09.0-ce +## 17.09.0-ce 2017-09-26 #### Builder @@ -1686,7 +2071,7 @@ Initial Docker EE release, based on Docker CE 17.03.0 + Remove deprecated `--enable-api-cors` daemon flag [moby/moby#34821](https://github.com/moby/moby/pull/34821) -### 17.06.2-ce +## 17.06.2-ce 2017-09-05 #### Client @@ -1702,7 +2087,7 @@ Initial Docker EE release, based on Docker CE 17.03.0 - Ignore PullOptions for running tasks [docker/swarmkit#2351](https://github.com/docker/swarmkit/pull/2351) -### 17.06.1-ce +## 17.06.1-ce 2017-08-15 #### Builder @@ -1758,7 +2143,7 @@ Initial Docker EE release, based on Docker CE 17.03.0 * Cluster update and memory issue fixes [#114](https://github.com/docker/docker-ce/pull/114) * Changing get network request to return predefined network in swarm [#150](https://github.com/docker/docker-ce/pull/150) -### 17.06.0-ce +## 17.06.0-ce 2017-06-28 > **Note**: Docker 17.06.0 has an issue in 
the image builder causing a change in the behavior @@ -1863,7 +2248,7 @@ Initial Docker EE release, based on Docker CE 17.03.0 * Disable legacy registry (v1) by default [#33629](https://github.com/moby/moby/pull/33629) -### 17.03.2-ce +## 17.03.2-ce 2017-05-29 ## 17.03.3-ce @@ -1896,7 +2281,7 @@ Initial Docker EE release, based on Docker CE 17.03.0 - Fix a case where tasks could get killed unexpectedly [#33118](https://github.com/moby/moby/pull/33118) - Fix an issue preventing to deploy services if the registry cannot be reached despite the needed images being locally present [#33117](https://github.com/moby/moby/pull/33117) -### 17.03.1-ce +## 17.03.1-ce 2017-03-27 #### Remote API (v1.27) & Client @@ -1929,7 +2314,7 @@ Initial Docker EE release, based on Docker CE 17.03.0 * Cleanup HCS on restore [#31503](https://github.com/docker/docker/pull/31503) -### 17.03.0-ce +## 17.03.0-ce 2017-03-01 **IMPORTANT**: Starting with this release, Docker is on a monthly release cycle and uses a @@ -1978,7 +2363,7 @@ Upgrading from Docker 1.13.1 to 17.03.0 is expected to be simple and low-risk. ## Edge releases -### 18.05.0-ce +## 18.05.0-ce 2018-05-09 #### Builder @@ -2049,7 +2434,7 @@ Upgrading from Docker 1.13.1 to 17.03.0 is expected to be simple and low-risk. * Expose swarmkit's Raft tuning parameters in engine config. [moby/moby#36726](https://github.com/moby/moby/pull/36726) * Make internal/test/daemon.Daemon swarm aware. [moby/moby#36826](https://github.com/moby/moby/pull/36826) -### 18.04.0-ce +## 18.04.0-ce 2018-04-10 #### Builder @@ -2131,7 +2516,7 @@ Upgrading from Docker 1.13.1 to 17.03.0 is expected to be simple and low-risk. - Fix agent logging race. [docker/swarmkit#2578](https://github.com/docker/swarmkit/pull/2578) * Adding logic to restore networks in order. 
[docker/swarmkit#2571](https://github.com/docker/swarmkit/pull/2571) -### 18.02.0-ce +## 18.02.0-ce 2018-02-07 #### Builder @@ -2197,7 +2582,7 @@ Upgrading from Docker 1.13.1 to 17.03.0 is expected to be simple and low-risk. * Update runc to fix hang during start and exec [moby/moby#36097](https://github.com/moby/moby/pull/36097) - Fix "--node-generic-resource" singular/plural [moby/moby#36125](https://github.com/moby/moby/pull/36125) -### 18.01.0-ce +## 18.01.0-ce 2018-01-10 #### Builder @@ -2255,7 +2640,7 @@ Upgrading from Docker 1.13.1 to 17.03.0 is expected to be simple and low-risk. - Fix published ports not being updated if a service has the same number of host-mode published ports with Published Port 0 [docker/swarmkit#2376](https://github.com/docker/swarmkit/pull/2376) * Make the task termination order deterministic [docker/swarmkit#2265](https://github.com/docker/swarmkit/pull/2265) -### 17.11.0-ce +## 17.11.0-ce 2017-11-20 > **Important**: Docker CE 17.11 is the first Docker release based on @@ -2328,7 +2713,7 @@ running, un-managed, on the system. + Build packages for Debian 10 (Buster) [docker/docker-ce-packaging#50](https://github.com/docker/docker-ce-packaging/pull/50) + Build packages for Ubuntu 17.10 (Artful) [docker/docker-ce-packaging#55](https://github.com/docker/docker-ce-packaging/pull/55) -### 17.10.0-ce +## 17.10.0-ce 2017-10-17 > **Important**: Starting with this release, `docker service create`, `docker service update`, @@ -2378,7 +2763,7 @@ use `--detach` to keep the old behaviour. - Do not filter nodes if logdriver is set to `none` [docker/swarmkit#2396](https://github.com/docker/swarmkit/pull/2396) + Adding ipam options to ipam driver requests [docker/swarmkit#2324](https://github.com/docker/swarmkit/pull/2324) -### 17.07.0-ce +## 17.07.0-ce 2017-08-29 #### API & Client @@ -2441,7 +2826,7 @@ use `--detach` to keep the old behaviour. 
* Fix error during service creation if a network with the same name exists both as "local" and "swarm" scoped network [docker/cli#184](https://github.com/docker/cli/pull/184) * (experimental) Add support for plugins on swarm [moby/moby#33575](https://github.com/moby/moby/pull/33575) -### 17.05.0-ce +## 17.05.0-ce 2017-05-04 #### Builder @@ -2528,7 +2913,7 @@ use `--detach` to keep the old behaviour. - Deprecate `--api-enable-cors` daemon flag. This flag was marked deprecated in Docker 1.6.0 but not listed in deprecated features [#32352](https://github.com/docker/docker/pull/32352) - Remove Ubuntu 12.04 (Precise Pangolin) as supported platform. Ubuntu 12.04 is EOL, and no longer receives updates [#32520](https://github.com/docker/docker/pull/32520) -### 17.04.0-ce +## 17.04.0-ce 2017-04-05 #### Builder diff --git a/engine/security/apparmor.md b/engine/security/apparmor.md index 4208519117..3a54d61584 100644 --- a/engine/security/apparmor.md +++ b/engine/security/apparmor.md @@ -55,12 +55,8 @@ $ docker run --rm -it --security-opt apparmor=your_profile hello-world To unload a profile from AppArmor: ```bash -# stop apparmor -$ /etc/init.d/apparmor stop # unload the profile $ apparmor_parser -R /path/to/profile -# start apparmor -$ /etc/init.d/apparmor start ``` ### Resources for writing profiles diff --git a/engine/security/https.md b/engine/security/https.md index 18376b4a93..0e8bdd4d3d 100644 --- a/engine/security/https.md +++ b/engine/security/https.md @@ -102,7 +102,7 @@ Docker clients. For client authentication, create a client key and certificate signing request: -> **Note:** for simplicity of the next couple of steps, you may perform this +> **Note**: for simplicity of the next couple of steps, you may perform this > step on the Docker daemon's host machine as well. 
$ openssl genrsa -out key.pem 4096 diff --git a/engine/security/https/README.md b/engine/security/https/README.md index 41e9fe22ea..8db187c76b 100644 --- a/engine/security/https/README.md +++ b/engine/security/https/README.md @@ -16,7 +16,7 @@ My process is as following: lots of things to see and manually answer, as openssl wants to be interactive -**NOTE:** make sure you enter the hostname (`boot2docker` in my case) when prompted for `Computer Name`) +> **Note**: make sure you enter the hostname (`boot2docker` in my case) when prompted for `Computer Name` root@boot2docker:/# sudo make run diff --git a/engine/security/trust/trust_sandbox.md b/engine/security/trust/trust_sandbox.md index 6cfbf8a326..65551042c9 100644 --- a/engine/security/trust/trust_sandbox.md +++ b/engine/security/trust/trust_sandbox.md @@ -77,9 +77,9 @@ the `trustsandbox` container, the Notary server, and the Registry server. version: "2" services: notaryserver: - image: dockersecurity/notary_autobuilds:server-v0.4.2 + image: dockersecurity/notary_autobuilds:server-v0.5.1 volumes: - - notarycerts:/go/src/github.com/docker/notary/fixtures + - notarycerts:/var/lib/notary/fixtures networks: - sandbox environment: diff --git a/engine/swarm/configs.md b/engine/swarm/configs.md index 23163a220f..99e99eadd4 100644 --- a/engine/swarm/configs.md +++ b/engine/swarm/configs.md @@ -59,7 +59,7 @@ containers, configs are all mounted into `C:\ProgramData\Docker\configs` and symbolic links are created to the desired location, which defaults to `C:\`. -You can set the ownership (`uid` and `gid`) or the config, using either the +You can set the ownership (`uid` and `gid`) for the config, using either the numerical ID or the name of the user or group. You can also specify the file permissions (`mode`). These settings are ignored for Windows containers. 
diff --git a/engine/swarm/join-nodes.md b/engine/swarm/join-nodes.md index f5d8a267d1..3c9e216bfb 100644 --- a/engine/swarm/join-nodes.md +++ b/engine/swarm/join-nodes.md @@ -26,7 +26,7 @@ the `docker swarm join` command. The node only uses the token at join time. If you subsequently rotate the token, it doesn't affect existing swarm nodes. Refer to [Run Docker Engine in swarm mode](swarm-mode.md#view-the-join-command-or-update-a-swarm-join-token). -**NOTE:** Docker engine allows a non-FIPS node to join a FIPS-enabled swarm cluster. +> **Note**: Docker engine allows a non-FIPS node to join a FIPS-enabled swarm cluster. While a mixed FIPS environment makes upgrading or changing status easier, Docker recommends not running a mixed FIPS environment in production. diff --git a/engine/swarm/networking.md b/engine/swarm/networking.md index 615021621c..2b1e07109d 100644 --- a/engine/swarm/networking.md +++ b/engine/swarm/networking.md @@ -208,7 +208,7 @@ Multiple pools can be configured if discontiguous address space is required. How The default mask length can be configured and is the same for all networks. It is set to `/24` by default. To change the default subnet mask length, use the `--default-addr-pool-mask-length` command line option. -**NOTE:** Default address pools can only be configured on `swarm init` and cannot be altered after cluster creation. +> **Note**: Default address pools can only be configured on `swarm init` and cannot be altered after cluster creation. 
##### Overlay network size limitations diff --git a/engine/swarm/services.md b/engine/swarm/services.md index 44c2722a48..3830d5063b 100644 --- a/engine/swarm/services.md +++ b/engine/swarm/services.md @@ -664,7 +664,7 @@ For more information on constraints, refer to the `docker service create` #### Placement preferences While [placement constraints](#placement-constraints) limit the nodes a service -can run on, _placement preferences_ try to place services on appropriate nodes +can run on, _placement preferences_ try to place tasks on appropriate nodes in an algorithmic way (currently, only spread evenly). For instance, if you assign each node a `rack` label, you can set a placement preference to spread the service evenly across nodes with the `rack` label, by value. This way, if diff --git a/engine/swarm/swarm-tutorial/inspect-service.md b/engine/swarm/swarm-tutorial/inspect-service.md index a5cdb8bcc8..07955729c5 100644 --- a/engine/swarm/swarm-tutorial/inspect-service.md +++ b/engine/swarm/swarm-tutorial/inspect-service.md @@ -92,7 +92,7 @@ the Docker CLI to see details about the service running in the swarm. ```bash [manager1]$ docker service ps helloworld - NAME IMAGE NODE DESIRED STATE LAST STATE + NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS helloworld.1.8p1vev3fq5zm0mi8g0as41w35 alpine worker2 Running Running 3 minutes ``` @@ -100,7 +100,7 @@ the Docker CLI to see details about the service running in the swarm. `worker2` node. You may see the service running on your manager node. By default, manager nodes in a swarm can execute tasks just like worker nodes. - Swarm also shows you the `DESIRED STATE` and `LAST STATE` of the service + Swarm also shows you the `DESIRED STATE` and `CURRENT STATE` of the service task so you can see if tasks are running according to the service definition. 
diff --git a/get-started/part2.md b/get-started/part2.md index 7cf2b77df9..cf0407ebf3 100644 --- a/get-started/part2.md +++ b/get-started/part2.md @@ -54,7 +54,7 @@ after doing that, you can expect that the build of your app defined in this ### `Dockerfile` -Create an empty directory. Change directories (`cd`) into the new directory, +Create an empty directory on your local machine. Change directories (`cd`) into the new directory, create a file called `Dockerfile`, copy-and-paste the following content into that file, and save it. Take note of the comments that explain each statement in your new Dockerfile. diff --git a/get-started/part3.md b/get-started/part3.md index c9baf3cc60..28ec86e5ca 100644 --- a/get-started/part3.md +++ b/get-started/part3.md @@ -143,18 +143,28 @@ named it the same as shown in this example, the name is `getstartedlab_web`. The service ID is listed as well, along with the number of replicas, image name, and exposed ports. +Alternatively, you can run `docker stack services`, followed by the name of +your stack. The following example command lets you view all services associated with the +`getstartedlab` stack: + +```bash +docker stack services getstartedlab +ID NAME MODE REPLICAS IMAGE PORTS +bqpve1djnk0x getstartedlab_web replicated 5/5 username/repo:tag *:4000->80/tcp +``` + A single container running in a service is called a **task**. Tasks are given unique IDs that numerically increment, up to the number of `replicas` you defined in `docker-compose.yml`. List the tasks for your service: -```shell +```bash docker service ps getstartedlab_web ``` Tasks also show up if you just list all the containers on your system, though that is not filtered by service: -```shell +```bash docker container ls -q ``` @@ -168,6 +178,18 @@ load-balancing; with each request, one of the 5 tasks is chosen, in a round-robin fashion, to respond. The container IDs match your output from the previous command (`docker container ls -q`). 
+To view all tasks of a stack, you can run `docker stack ps` followed by your app name, as shown in the following example: + +```bash +docker stack ps getstartedlab +ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS +uwiaw67sc0eh getstartedlab_web.1 username/repo:tag docker-desktop Running Running 9 minutes ago +sk50xbhmcae7 getstartedlab_web.2 username/repo:tag docker-desktop Running Running 9 minutes ago +c4uuw5i6h02j getstartedlab_web.3 username/repo:tag docker-desktop Running Running 9 minutes ago +0dyb70ixu25s getstartedlab_web.4 username/repo:tag docker-desktop Running Running 9 minutes ago +aocrb88ap8b0 getstartedlab_web.5 username/repo:tag docker-desktop Running Running 9 minutes ago +``` + > Running Windows 10? > > Windows 10 PowerShell should already have `curl` available, but if not you can diff --git a/install/index.md b/install/index.md index 231e2d8803..06df281e1b 100644 --- a/install/index.md +++ b/install/index.md @@ -14,6 +14,7 @@ redirect_from: - /engine/installation/linux/docker-ce/ - /engine/installation/linux/docker-ee/ - /engine/installation/ +- /en/latest/installation/ --- Docker Community Edition (CE) is ideal for developers and small diff --git a/install/linux/docker-ce/ubuntu.md b/install/linux/docker-ce/ubuntu.md index 9e99ff9945..60fdaccc0f 100644 --- a/install/linux/docker-ce/ubuntu.md +++ b/install/linux/docker-ce/ubuntu.md @@ -56,7 +56,7 @@ networks, are preserved. The Docker CE package is now called `docker-ce`. ### Supported storage drivers Docker CE on Ubuntu supports `overlay2`, `aufs` and `btrfs` storage drivers. -> *** Note: *** In Docker Engine - Enterprise, `btrfs` is only supported on SLES. See the documentation on +> **Note**: In Docker Engine - Enterprise, `btrfs` is only supported on SLES. See the documentation on > [btrfs](/engine/userguide/storagedriver/btrfs-driver.md) for more details. 
For new installations on version 4 and higher of the Linux kernel, `overlay2` diff --git a/install/linux/docker-ee/rhel.md b/install/linux/docker-ee/rhel.md index c182de5aaf..cfb50d1242 100644 --- a/install/linux/docker-ee/rhel.md +++ b/install/linux/docker-ee/rhel.md @@ -57,7 +57,7 @@ $ cat /proc/sys/crypto/fips_enabled 1 ``` -> ***NOTE:*** FIPS is only supported in the Docker Engine EE. UCP and DTR currently do not have support for FIPS-140-2. +> **Note**: FIPS is only supported in the Docker Engine EE. UCP and DTR currently do not have support for FIPS-140-2. To enable FIPS 140-2 compliance on a system that is not in FIPS 140-2 mode, do the following: diff --git a/install/linux/docker-ee/suse.md b/install/linux/docker-ee/suse.md index dc3ecafe5f..be83f8218f 100644 --- a/install/linux/docker-ee/suse.md +++ b/install/linux/docker-ee/suse.md @@ -164,7 +164,7 @@ Before you install Docker EE for the first time on a new host machine, you need to set up the Docker repository. Afterward, you can install and update Docker EE from the repository. -> ***NOTE:*** If you need to run Docker EE 2.0, please see the following instructions: +> **Note**: If you need to run Docker EE 2.0, please see the following instructions: > * [18.03](https://docs.docker.com/v18.03/ee/supported-platforms/) - Older Docker EE Engine only release > * [17.06](https://docs.docker.com/v17.06/engine/installation/) - Docker Enterprise Edition 2.0 (Docker Engine, > UCP, and DTR). diff --git a/install/linux/docker-ee/ubuntu.md b/install/linux/docker-ee/ubuntu.md index e46af2efd9..979220fc50 100644 --- a/install/linux/docker-ee/ubuntu.md +++ b/install/linux/docker-ee/ubuntu.md @@ -137,7 +137,7 @@ from the repository. 4. Temporarily add a `$DOCKER_EE_VERSION` variable into your environment. 
- > ***NOTE:*** If you need to run something other than Docker EE 2.0, please see the following instructions: + > **Note**: If you need to run something other than Docker EE 2.0, please see the following instructions: > * [18.03](https://docs.docker.com/v18.03/ee/supported-platforms/) - Older Docker EE Engine only release > * [17.06](https://docs.docker.com/v17.06/engine/installation/) - Docker Enterprise Edition 2.0 (Docker Engine, > UCP, and DTR). diff --git a/install/windows/docker-ee.md b/install/windows/docker-ee.md index a9b1d57193..31f53e3abb 100644 --- a/install/windows/docker-ee.md +++ b/install/windows/docker-ee.md @@ -14,49 +14,53 @@ Docker Engine - Enterprise enables native Docker containers on Windows Server. W > Release notes > -> [Release notes for all versions](/release-notes/) +> [Release notes for all versions](/engine/release-notes/) ## System requirements -Windows OS requirements around specific CPU and RAM requirements also need to be met as specified -in the [Windows Server Requirements](https://docs.microsoft.com/en-us/windows-server/get-started/system-requirements). -This provides information for specific CPU and memory specs and capabilities (instruction sets like CMPXCHG16b, -LAHF/SAHF, and PrefetchW, security: DEP/NX, etc.). +Windows OS requirements around specific CPU and RAM requirements also need to be +met as specified in the [Windows Server +Requirements](https://docs.microsoft.com/en-us/windows-server/get-started/system-requirements). +This provides information for specific CPU and memory specs and capabilities +(instruction sets like CMPXCHG16b, LAHF/SAHF, and PrefetchW, security: DEP/NX, +etc.). 
-* OS Versions: Server 2016 (Core and GUI), 1709 and 1803 +* OS Versions: + - Long Term Service Channel (LTSC) - 2016 and 2019 (Core and GUI) + - Semi-annual Channel (SAC) - 1709, 1803 and 1809 * RAM: 4GB -* Disk space: [32 GB minimum recommendation for Windows](https://docs.microsoft.com/en-us/windows-server/get-started/system - requirements). An additional 32 GB of Space is recommended for base images for ServerCore and NanoServer along with buffer - space for workload containers running IIS, SQL Server and .Net apps. +* Disk space: [32 GB minimum recommendation for Windows](https://docs.microsoft.com/en-us/windows-server/get-started/system-requirements). +Docker recommends an additional 32 GB of space for base images for ServerCore +and NanoServer along with buffer space for workload containers running IIS, SQL Server and .Net apps. + ## Install Docker Engine - Enterprise -Docker Engine - Enterprise requires Windows Server 2016, 1703, or 1803. See -[What to know before you install](#what-to-know-before-you-install) for a -full list of prerequisites. +To install the Docker Engine - Enterprise on your hosts, Docker provides a +[OneGet](https://github.com/oneget/oneget) PowerShell Module. -1. Open a PowerShell command prompt, and type the following commands. +1. Open an elevated PowerShell command prompt, and type the following commands. - ```PowerShell + ```powershell Install-Module DockerMsftProvider -Force Install-Package Docker -ProviderName DockerMsftProvider -Force ``` 2. Check if a reboot is required, and if yes, restart your instance: - ```PowerShell + ```powershell (Install-WindowsFeature Containers).RestartNeeded ``` If the output of this command is **Yes**, then restart the server with: - ```PowerShell + ```powershell Restart-Computer ``` 3. Test your Docker Engine - Enterprise installation by running the `hello-world` container. 
- ```PowerShell + ```powershell docker run hello-world:nanoserver Unable to find image 'hello-world:nanoserver' locally @@ -78,7 +82,7 @@ Some advanced Docker features, such as swarm mode, require the fixes included in [KB4015217](https://support.microsoft.com/en-us/help/4015217/windows-10-update-kb4015217) (or a later cumulative patch). -```PowerShell +```powershell sconfig ``` @@ -87,49 +91,58 @@ Select option `6) Download and Install Updates`. ### FIPS 140-2 cryptographic module support -[Federal Information Processing Standards (FIPS) Publication 140-2](https://csrc.nist.gov/csrc/media/publications/fips/140/2/final/documents/fips1402.pdf) is a United States Federal security requirement for cryptographic modules. +[Federal Information Processing Standards (FIPS) Publication +140-2](https://csrc.nist.gov/csrc/media/publications/fips/140/2/final/documents/fips1402.pdf) +is a United States Federal security requirement for cryptographic modules. -With Docker EE Basic license for versions 18.09 and later, Docker provides FIPS 140-2 support in Windows Server 2016. This includes a FIPS supported cryptographic module. If the Windows implementation already has FIPS support enabled, FIPS is automatically enabled in the Docker engine. +With Docker EE Basic license for versions 18.09 and later, Docker provides FIPS +140-2 support in Windows Server. This includes a FIPS supported cryptographic +module. If the Windows implementation already has FIPS support enabled, FIPS is +automatically enabled in the Docker engine. -**NOTE:** FIPS 140-2 is only supported in the Docker EE engine. UCP and DTR currently do not have support for FIPS 140-2. -To enable FIPS 140-2 compliance on a system that is not in FIPS 140-2 mode, execute the following command in PowerShell: +> **Note**: FIPS 140-2 is only supported in the Docker EE engine. UCP and DTR currently do not have support for FIPS 140-2. 
+ +To enable FIPS 140-2 compliance on a system that is not in FIPS 140-2 mode, +execute the following command in PowerShell: ```powershell [System.Environment]::SetEnvironmentVariable("DOCKER_FIPS", "1", "Machine") ``` -FIPS 140-2 mode may also be enabled via the Windows Registry. To update the pertinent registry key, execute the following PowerShell command as an Administrator: +FIPS 140-2 mode may also be enabled via the Windows Registry. To update the +pertinent registry key, execute the following PowerShell command as an +Administrator: -```PowerShell +```powershell Set-ItemProperty -Path "HKLM:\System\CurrentControlSet\Control\Lsa\FipsAlgorithmPolicy\" -Name "Enabled" -Value "1" ``` Restart the Docker service by running the following command. -```PowerShell +```powershell net stop docker net start docker ``` To confirm Docker is running with FIPS-140-2 enabled, run the `docker info` command: -```YAML +```yaml Labels: com.docker.security.fips=enabled ``` -**NOTE:** If the system has the FIPS-140-2 cryptographic module installed on the operating system, it is possible to disable FIPS-140-2 compliance. To disable FIPS-140-2 in Docker but not the operating system, set the value `"DOCKER_FIPS","0"` in the `[System.Environment]`.` +> **Note**: If the system has the FIPS-140-2 cryptographic module installed on the operating system, it is possible to disable FIPS-140-2 compliance. To disable FIPS-140-2 in Docker but not the operating system, set the value `"DOCKER_FIPS","0"` in the `[System.Environment]`. ## Use a script to install Docker EE -Use the following steps when you want to install manually, script automated -installs, or install on air-gapped systems. +Use the following guide if you want to install Docker Engine - Enterprise +manually, via a script, or on air-gapped systems. 1. In a PowerShell command prompt, download the installer archive on a machine that has a connection. 
- ```PowerShell + ```powershell # On an online machine, download the zip file. Invoke-WebRequest -UseBasicParsing -OutFile {{ filename }} {{ download_url }} ``` @@ -141,8 +154,8 @@ installs, or install on air-gapped systems. PowerShell command prompt, use the following commands to extract the archive, register, and start the Docker service. - ```PowerShell - #Stop Docker service + ```powershell + # Stop Docker service Stop-Service docker # Extract the archive. @@ -174,7 +187,7 @@ installs, or install on air-gapped systems. 3. Test your Docker EE installation by running the `hello-world` container. - ```PowerShell + ```powershell docker container run hello-world:nanoserver ``` @@ -182,7 +195,7 @@ installs, or install on air-gapped systems. To install a specific version, use the `RequiredVersion` flag: -```PowerShell +```powershell Install-Package -Name docker -ProviderName DockerMsftProvider -Force -RequiredVersion {{ site.docker_ee_version }} ... Name Version Source Summary @@ -192,69 +205,68 @@ Docker {{ site.docker_ee_version }} Docker ### Updating the DockerMsftProvider -Installing specific Docker EE versions may require an update to previously installed DockerMsftProvider modules. To update: +Installing specific Docker EE versions may require an update to previously +installed DockerMsftProvider modules. To update: -```PowerShell +```powershell Update-Module DockerMsftProvider ``` -Then open a new Powershell session for the update to take effect. +Then open a new PowerShell session for the update to take effect. 
## Update Docker Engine - Enterprise -To update Docker Engine - Enterprise to the most recent release, specify the `-RequiredVersion` and `-Update` flags: +To update Docker Engine - Enterprise to the most recent release, specify the +`-RequiredVersion` and `-Update` flags: -```PowerShell +```powershell Install-Package -Name docker -ProviderName DockerMsftProvider -RequiredVersion {{ site.docker_ee_version }} -Update -Force ``` -The required version must match any of the versions available in this json file: https://dockermsft.blob.core.windows.net/dockercontainer/DockerMsftIndex.json +The required version number must match a version available on the [JSON +index](https://dockermsft.blob.core.windows.net/dockercontainer/DockerMsftIndex.json). ## Uninstall Docker EE - Use the following commands to completely remove the Docker Engine - Enterprise from a Windows Server: + Use the following commands to completely remove the Docker Engine - Enterprise + from a Windows Server: 1. Leave any active Docker Swarm - ```PowerShell + + ```powershell docker swarm leave --force ``` - + 1. Remove all running and stopped containers - - ```PowerShell + + ```powershell docker rm -f $(docker ps --all --quiet) ``` - + 1. Prune container data - ```PowerShell + + ```powershell docker system prune --all --volumes ``` - + 1. Uninstall Docker PowerShell Package and Module - ```PowerShell + + ```powershell Uninstall-Package -Name docker -ProviderName DockerMsftProvider Uninstall-Module -Name DockerMsftProvider ``` 1. Clean up Windows Networking and file system - ```PowerShell + + ```powershell Get-HNSNetwork | Remove-HNSNetwork Remove-Item -Path "C:\ProgramData\Docker" -Recurse -Force ``` -## Preparing a Docker EE Engine for use with UCP +## Preparing a Windows Host for use with UCP -Run the -[UCP installation script for Windows](/datacenter/ucp/3.0/guides/admin/configure/join-windows-worker-nodes/#run-the-windows-node-setup-script). 
- -Start the Docker service: - -```PowerShell -Start-Service Docker -``` - -* **What the Docker Engine - Enterprise install includes**: The installation -provides [Docker Engine](/engine/userguide/intro.md) and the -[Docker CLI client](/engine/reference/commandline/cli.md). +To add a Windows Server host to an existing Universal Control Plane cluster, +follow the list of [prerequisites and joining +instructions](/ee/ucp/admin/configure/join-nodes/join-windows-nodes-to-cluster/#run-the-windows-node-setup-script). ## About Docker Engine - Enterprise containers and Windows Server @@ -265,9 +277,6 @@ provides a tutorial on how to set up and run Windows containers on Windows 10 or Windows Server 2016. It shows you how to use a MusicStore application with Windows containers. -* [Setup - Windows Server 2016 (Lab)](https://github.com/docker/labs/blob/master/windows/windows-containers/Setup-Server2016.md) -describes environment setup in detail. - * Docker Container Platform for Windows Server [articles and blog posts](https://www.docker.com/microsoft/) on the Docker website. diff --git a/machine/AVAILABLE_DRIVER_PLUGINS.md b/machine/AVAILABLE_DRIVER_PLUGINS.md index 61e014ae4d..c6119c067b 100644 --- a/machine/AVAILABLE_DRIVER_PLUGINS.md +++ b/machine/AVAILABLE_DRIVER_PLUGINS.md @@ -224,6 +224,20 @@ with Docker Inc. Use 3rd party plugins at your own risk. miqui@hpe.com + + Kamatera + + https://github.com/OriHoch/docker-machine-driver-kamatera + + + OriHoch + + + support@kamatera.com + + KVM @@ -238,6 +252,18 @@ with Docker Inc. Use 3rd party plugins at your own risk. 
"mailto:daniel.hiltgen@docker.com">daniel.hiltgen@docker.com + + Linode + + https://github.com/linode/docker-machine-driver-linode + + + Linode + + + developers@linode.com + + NTT Communications Enterprise Cloud diff --git a/machine/drivers/azure.md b/machine/drivers/azure.md index 9c6f211507..3c7af9a01c 100644 --- a/machine/drivers/azure.md +++ b/machine/drivers/azure.md @@ -7,7 +7,7 @@ title: Microsoft Azure You need an Azure Subscription to use this Docker Machine driver. [Sign up for a free trial.][trial] -> **NOTE:** This documentation is for the new version of the Azure driver, which started +> **Note**: This documentation is for the new version of the Azure driver, which started > shipping with v0.7.0. This driver is not backwards-compatible with the old > Azure driver. If you want to continue managing your existing Azure machines, please > download and use machine versions prior to v0.7.0. diff --git a/machine/drivers/index.md b/machine/drivers/index.md index be0aafad10..4990e89786 100644 --- a/machine/drivers/index.md +++ b/machine/drivers/index.md @@ -9,7 +9,7 @@ title: Machine drivers - [Digital Ocean](digital-ocean.md) - [Exoscale](exoscale.md) - [Google Compute Engine](gce.md) -- [Generic](generic.md) +- [Linode](linode.md) (unofficial plugin, not supported by Docker) - [Microsoft Hyper-V](hyper-v.md) - [OpenStack](openstack.md) - [Rackspace](rackspace.md) diff --git a/machine/drivers/linode.md b/machine/drivers/linode.md new file mode 100644 index 0000000000..89da6ae403 --- /dev/null +++ b/machine/drivers/linode.md @@ -0,0 +1,61 @@ +--- +description: Linode driver for machine +keywords: machine, Linode, driver +title: Linode +--- + +Create machines on [Linode](https://www.linode.com). + +### Credentials + +You will need a Linode APIv4 Personal Access Token. Get one here: . +Supply the token to `docker-machine create -d linode` with `--linode-token`. 
+ +### Install + +`docker-machine` is required, [see the installation documentation](https://docs.docker.com/machine/install-machine/). + +Then, install the latest release of the Linode machine driver for your environment from the [releases list](https://github.com/linode/docker-machine-driver-linode/releases). + +### Usage + +```bash +docker-machine create -d linode --linode-token= linode +``` + +See the [Linode Docker machine driver project page](https://github.com/linode/docker-machine-driver-linode) for more examples. + +#### Options, Environment Variables, and Defaults + +| Argument | Env | Default | Description +| --- | --- | --- | --- +| `linode-token` | `LINODE_TOKEN` | None | **required** Linode APIv4 Token (see [here](https://developers.linode.com/api/v4#section/Personal-Access-Token)) +| `linode-root-pass` | `LINODE_ROOT_PASSWORD` | *generated* | The Linode Instance `root_pass` (password assigned to the `root` account) +| `linode-authorized-users` | `LINODE_AUTHORIZED_USERS` | None | Linode user accounts (separated by commas) whose Linode SSH keys will be permitted root access to the created node +| `linode-label` | `LINODE_LABEL` | *generated* | The Linode Instance `label`, unless overridden this will match the docker-machine name. This `label` must be unique on the account. +| `linode-region` | `LINODE_REGION` | `us-east` | The Linode Instance `region` (see [here](https://api.linode.com/v4/regions)) +| `linode-instance-type` | `LINODE_INSTANCE_TYPE` | `g6-standard-4` | The Linode Instance `type` (see [here](https://api.linode.com/v4/linode/types)) +| `linode-image` | `LINODE_IMAGE` | `linode/ubuntu18.04` | The Linode Instance `image` which provides the Linux distribution (see [here](https://api.linode.com/v4/images)). +| `linode-ssh-port` | `LINODE_SSH_PORT` | `22` | The port that SSH is running on, needed for Docker Machine to provision the Linode. 
+| `linode-ssh-user` | `LINODE_SSH_USER` | `root` | The user as which docker-machine should log in to the Linode instance to install Docker. This user must have passwordless sudo. +| `linode-docker-port` | `LINODE_DOCKER_PORT` | `2376` | The TCP port of the Linode that Docker will be listening on +| `linode-swap-size` | `LINODE_SWAP_SIZE` | `512` | The amount of swap space provisioned on the Linode Instance +| `linode-stackscript` | `LINODE_STACKSCRIPT` | None | Specifies the Linode StackScript to use to create the instance, either by numeric ID, or using the form *username*/*label*. +| `linode-stackscript-data` | `LINODE_STACKSCRIPT_DATA` | None | A JSON string specifying data that is passed (via UDF) to the selected StackScript. +| `linode-create-private-ip` | `LINODE_CREATE_PRIVATE_IP` | None | A flag specifying to create a private IP for the Linode instance. +| `linode-tags` | `LINODE_TAGS` | None | A comma-separated list of tags to apply to the Linode resource +| `linode-ua-prefix` | `LINODE_UA_PREFIX` | None | Prefix the User-Agent in Linode API calls with some 'product/version' + +#### Notes + +* When using the `linode/containerlinux` `linode-image`, the `linode-ssh-user` will default to `core` +* A `linode-root-pass` will be generated if not provided. This password will not be shown. Rely on `docker-machine ssh` or [Linode's Rescue features](https://www.linode.com/docs/quick-answers/linode-platform/reset-the-root-password-on-your-linode/) to access the node directly. + +#### Debugging + +Detailed run output will be emitted when using the LinodeGo `LINODE_DEBUG=1` option along with the `docker-machine` `--debug` option. + +```bash +LINODE_DEBUG=1 docker-machine --debug create -d linode --linode-token=$LINODE_TOKEN machinename +``` + diff --git a/network/overlay.md b/network/overlay.md index c3e75a04d7..18d555de08 100644 --- a/network/overlay.md +++ b/network/overlay.md @@ -230,7 +230,7 @@ preferred because it is somewhat self-documenting. 
-p 8080:80/tcp -p 8080:80/udp or
-p published=8080,target=80,protocol=tcp -p published=8080,target=80,protocol=udp -Map TCP port 80 on the service to TCP port 8080 on the routing mesh, and map UDP port 80 on the service to UDP port 8080 on the routine mesh. +Map TCP port 80 on the service to TCP port 8080 on the routing mesh, and map UDP port 80 on the service to UDP port 8080 on the routing mesh. diff --git a/reference/dtr/2.6/api/dtr-swagger-2.6.json b/reference/dtr/2.6/api/dtr-swagger-2.6.json new file mode 100644 index 0000000000..0958f472c0 --- /dev/null +++ b/reference/dtr/2.6/api/dtr-swagger-2.6.json @@ -0,0 +1,9957 @@ +{ + "swagger": "2.0", + "info": { + "title": "Docker Trusted Registry", + "version": "2.6.0-tp9" + }, + "paths": { + "/api/v0/accounts/language": { + "get": { + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "accounts" + ], + "summary": "Get the chosen language", + "operationId": "GetLanguage", + "responses": { + "200": { + "description": "OK", + "schema": { + "$ref": "#/definitions/responses.Language" + } + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." 
+ }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + }, + "default": { + "description": "OK", + "schema": { + "$ref": "#/definitions/responses.Language" + } + } + } + } + }, + "/api/v0/accounts/{namespace}": { + "delete": { + "description": "\n\t*Authorization:* Client must be authenticated as a system admin.\n\t\t", + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "accounts" + ], + "summary": "Removes a user or organization along with all repositories", + "operationId": "DeleteNamespace", + "parameters": [ + { + "type": "string", + "description": "namespace/owner of repository", + "name": "namespace", + "in": "path", + "required": true + } + ], + "responses": { + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "403": { + "description": "NOT_AUTHORIZED: The client is not authorized." + }, + "404": { + "description": "NO_SUCH_ACCOUNT: An account with the given name does not exist." 
+ }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + } + } + } + }, + "/api/v0/accounts/{namespace}/repositories": { + "delete": { + "description": "\n\t*Authorization:* Client must be authenticated as a system admin, organization admin or user in question\n\t\t", + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "accounts" + ], + "summary": "Removes all of a user or organization's repositories", + "operationId": "DeleteNamespaceRepositories", + "deprecated": true, + "parameters": [ + { + "type": "string", + "description": "namespace/owner of repository", + "name": "namespace", + "in": "path", + "required": true + } + ], + "responses": { + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "403": { + "description": "NOT_AUTHORIZED: The client is not authorized." + }, + "404": { + "description": "NO_SUCH_ACCOUNT: An account with the given name does not exist." 
+ }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + } + } + } + }, + "/api/v0/accounts/{namespace}/webhooks": { + "get": { + "description": "\n*Authorization:* Client must be authenticated as a user who has \"admin\" level access to the repository.\n\t\t", + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "accounts" + ], + "summary": "List the webhook subscriptions for a namespace", + "operationId": "ListNamespaceWebhooks", + "parameters": [ + { + "type": "string", + "description": "namespace/owner of repository", + "name": "namespace", + "in": "path", + "required": true + } + ], + "responses": { + "200": { + "description": "OK", + "schema": { + "type": "array", + "items": { + "$ref": "#/definitions/responses.Webhook" + } + } + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "403": { + "description": "NOT_AUTHORIZED: The client is not authorized." + }, + "404": { + "description": "NO_SUCH_ACCOUNT: An account with the given name does not exist." 
+ }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + }, + "default": { + "description": "OK", + "schema": { + "type": "array", + "items": { + "$ref": "#/definitions/responses.Webhook" + } + } + } + } + } + }, + "/api/v0/accounts/{orgname}/teams/{teamname}/repositoryAccess": { + "get": { + "description": "\n*Authorization:* Client must be authenticated as a user who owns the organization the team is in or be a member of that team.\n\t\t", + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "accounts" + ], + "summary": "List repository access grants for a team", + "operationId": "ListTeamRepoAccess", + "parameters": [ + { + "type": "string", + "description": "organization account name", + "name": "orgname", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "team name", + "name": "teamname", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "The ID of the first record on the page", + "name": "pageStart", + "in": "query" + }, + { + "type": "integer", + "default": 10, + "description": "Maximum number of results to return", + "name": "pageSize", + "in": "query" + }, + { + "type": "boolean", + "default": false, + "description": "Whether to include the resource count in the response header", + "name": "count", + "in": "query" + } + ], + "responses": { + "200": { + "description": "OK", + "schema": { + "$ref": "#/definitions/responses.ListTeamRepoAccess" + } + }, + "400": { + "description": "the team does not belong to the organization" + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "403": { + "description": "NOT_AUTHORIZED: The client is not authorized." 
+ }, + "404": { + "description": "NO_SUCH_TEAM: A team with the given name does not exist in the organization." + }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + }, + "default": { + "description": "OK", + "schema": { + "$ref": "#/definitions/responses.ListTeamRepoAccess" + } + } + } + } + }, + "/api/v0/accounts/{username}/repositoryAccess/{namespace}/{reponame}": { + "get": { + "description": "\n\t*Authorization:* Client must be authenticated either as the user in question or be a system admin.\n\t\t", + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "accounts" + ], + "summary": "Check a user's access to a repository", + "operationId": "GetUserRepoAccess", + "parameters": [ + { + "type": "string", + "description": "user account name", + "name": "username", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "namespace/owner of repository", + "name": "namespace", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "name of repository", + "name": "reponame", + "in": "path", + "required": true + } + ], + "responses": { + "200": { + "description": "OK", + "schema": { + "$ref": "#/definitions/responses.RepoUserAccess" + } + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "403": { + "description": "NOT_AUTHORIZED: The client is not authorized." + }, + "404": { + "description": "NO_SUCH_REPOSITORY: A repository with the given name does not exist." 
+ }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + }, + "default": { + "description": "OK", + "schema": { + "$ref": "#/definitions/responses.RepoUserAccess" + } + } + } + } + }, + "/api/v0/accounts/{username}/settings": { + "get": { + "description": "\n*Authorization:* Client must be authenticated either as the user in question or be a system admin.\n\t\t", + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "accounts" + ], + "summary": "Check a user's settings", + "operationId": "GetUserSettings", + "parameters": [ + { + "type": "string", + "description": "user account name", + "name": "username", + "in": "path", + "required": true + } + ], + "responses": { + "200": { + "description": "OK", + "schema": { + "$ref": "#/definitions/responses.UserSettings" + } + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "403": { + "description": "NOT_AUTHORIZED: The client is not authorized." + }, + "404": { + "description": "NO_SUCH_USER: A user with the given name does not exist." 
+ }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + }, + "default": { + "description": "OK", + "schema": { + "$ref": "#/definitions/responses.UserSettings" + } + } + } + }, + "patch": { + "description": "\n*Authorization:* Client must be authenticated either as the user in question or be a system admin.\n\t\t", + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "accounts" + ], + "summary": "Update a user's settings", + "operationId": "UpdateUserSettings", + "parameters": [ + { + "type": "string", + "description": "user account name", + "name": "username", + "in": "path", + "required": true + }, + { + "name": "body", + "in": "body", + "required": true, + "schema": { + "$ref": "#/definitions/forms.UserSettings" + } + } + ], + "responses": { + "200": { + "description": "Successfully updated user settings." + }, + "400": { + "description": "INVALID_USER_SETTINGS: The submitted user settings change request contains invalid values." + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "403": { + "description": "NOT_AUTHORIZED: The client is not authorized." + }, + "404": { + "description": "NO_SUCH_USER: A user with the given name does not exist." + }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + }, + "default": { + "description": "Successfully updated user settings." 
+ } + } + } + }, + "/api/v0/action_configs": { + "get": { + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "action_configs" + ], + "summary": "List all action configs", + "operationId": "ListActionConfigs", + "responses": { + "200": { + "description": "Success, list of action configs returned.", + "schema": { + "$ref": "#/definitions/tmpresponses.ActionConfigs" + } + }, + "default": { + "description": "Success, list of action configs returned.", + "schema": { + "$ref": "#/definitions/tmpresponses.ActionConfigs" + } + } + } + }, + "post": { + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "action_configs" + ], + "summary": "Configure actions", + "operationId": "UpdateActionConfig", + "parameters": [ + { + "name": "body", + "in": "body", + "required": true, + "schema": { + "$ref": "#/definitions/tmpforms.ActionConfigCreate" + } + } + ], + "responses": { + "202": { + "description": "Success.", + "schema": { + "$ref": "#/definitions/tmpresponses.ActionConfig" + } + } + } + } + }, + "/api/v0/action_configs/{action}": { + "get": { + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "action_configs" + ], + "summary": "Get info about the actionConfig with the given action", + "operationId": "GetActionConfig", + "parameters": [ + { + "type": "string", + "description": "name of action to fetch the config for", + "name": "action", + "in": "path", + "required": true + } + ], + "responses": { + "200": { + "description": "Success, action config info returned.", + "schema": { + "$ref": "#/definitions/tmpresponses.ActionConfig" + } + }, + "default": { + "description": "Success, action config info returned.", + "schema": { + "$ref": "#/definitions/tmpresponses.ActionConfig" + } + } + } + }, + "delete": { + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "action_configs" + ], + "summary": 
"Delete the action config. The defaults will be used.", + "operationId": "DeleteActionConfig", + "parameters": [ + { + "type": "string", + "description": "the name of the action to delete the config for", + "name": "action", + "in": "path", + "required": true + } + ], + "responses": { + "204": { + "description": "Success, action config has been deleted." + } + } + } + }, + "/api/v0/api_tokens": { + "get": { + "description": "listUserAPITokensHandler", + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "API_tokens" + ], + "summary": "Get all API tokens associated with a user. Get all tokens if no user is specified", + "operationId": "GetAllAPITokensByUser", + "parameters": [ + { + "type": "string", + "description": "The ID of the first record on the page", + "name": "pageStart", + "in": "query" + }, + { + "type": "integer", + "default": 10, + "description": "Maximum number of results to return", + "name": "pageSize", + "in": "query" + }, + { + "type": "boolean", + "default": false, + "description": "Whether to include the resource count in the response header", + "name": "count", + "in": "query" + }, + { + "type": "string", + "description": "Limit the API token results to a specific user", + "name": "username", + "in": "query" + } + ], + "responses": { + "200": { + "description": "Successfully retrieved API tokens", + "schema": { + "type": "array", + "items": { + "$ref": "#/definitions/responses.APIToken" + } + } + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "403": { + "description": "NOT_AUTHORIZED: The client is not authorized." + }, + "404": { + "description": "NO_SUCH_USER: A user with the given name does not exist."
+ }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + }, + "default": { + "description": "Successfully retrieved API tokens", + "schema": { + "type": "array", + "items": { + "$ref": "#/definitions/responses.APIToken" + } + } + } + } + }, + "post": { + "description": "createAPITokenHandler", + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "API_tokens" + ], + "summary": "Create a new API token", + "operationId": "CreateAnAPIToken", + "parameters": [ + { + "name": "body", + "in": "body", + "required": true, + "schema": { + "$ref": "#/definitions/forms.CreateAPIToken" + } + }, + { + "type": "string", + "description": "Limit the API token results to a specific user", + "name": "username", + "in": "query" + } + ], + "responses": { + "200": { + "description": "Successfully created API token", + "schema": { + "$ref": "#/definitions/responses.NewAPIToken" + } + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "403": { + "description": "NOT_AUTHORIZED: The client is not authorized." 
+ }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + }, + "default": { + "description": "Successfully created API token", + "schema": { + "$ref": "#/definitions/responses.NewAPIToken" + } + } + } + }, + "delete": { + "description": "cleanupAPITokensHandler", + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "API_tokens" + ], + "summary": "Mass deletion of API tokens from the database based on user, time created, and/or generation method", + "operationId": "APITokenCleanup", + "parameters": [ + { + "type": "string", + "description": "Limit the API token results to a specific user", + "name": "username", + "in": "query" + }, + { + "type": "string", + "description": "The date on which the token was last used", + "name": "usedbefore", + "in": "query" + }, + { + "type": "string", + "default": "auto", + "description": "The method by which the token was created", + "name": "generatedby", + "in": "query" + } + ], + "responses": { + "200": { + "description": "Successfully cleaned up API tokens" + }, + "400": { + "description": "INVALID_PARAMETERS: Unable to parse query parameters" + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "403": { + "description": "NOT_AUTHORIZED: The client is not authorized." + }, + "404": { + "description": "NO_SUCH_USER: A user with the given name does not exist."
+ }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + }, + "default": { + "description": "Successfully cleaned up API tokens" + } + } + } + }, + "/api/v0/api_tokens/{hashedtoken}": { + "get": { + "description": "getAPITokenHandler", + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "API_tokens" + ], + "summary": "Get an API token's information based on its token id", + "operationId": "GetAnAPIToken", + "parameters": [ + { + "type": "string", + "description": "API token id", + "name": "hashedtoken", + "in": "path", + "required": true + } + ], + "responses": { + "200": { + "description": "Successfully retrieved API token", + "schema": { + "$ref": "#/definitions/responses.APIToken" + } + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "403": { + "description": "NOT_AUTHORIZED: The client is not authorized." + }, + "404": { + "description": "NO_SUCH_API_TOKEN: An API token with the given id does not exist."
+ }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + }, + "default": { + "description": "Successfully retrieved API token", + "schema": { + "$ref": "#/definitions/responses.APIToken" + } + } + } + }, + "delete": { + "description": "deleteAPITokenHandler", + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "API_tokens" + ], + "summary": "Delete a specific API token", + "operationId": "DeleteAnAPIToken", + "parameters": [ + { + "type": "string", + "description": "API token id", + "name": "hashedtoken", + "in": "path", + "required": true + } + ], + "responses": { + "200": { + "description": "Successfully deleted API token" + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "403": { + "description": "NOT_AUTHORIZED: The client is not authorized." + }, + "404": { + "description": "NO_SUCH_API_TOKEN: An API token with the given id does not exist."
+ }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + }, + "default": { + "description": "Successfully deleted API token" + } + } + }, + "patch": { + "description": "updateAPITokenHandler", + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "API_tokens" + ], + "summary": "Update information about a specific API token", + "operationId": "UpdateAnAPIToken", + "parameters": [ + { + "type": "string", + "description": "API token id", + "name": "hashedtoken", + "in": "path", + "required": true + }, + { + "name": "body", + "in": "body", + "required": true, + "schema": { + "$ref": "#/definitions/forms.UpdateAPIToken" + } + } + ], + "responses": { + "200": { + "description": "Successfully updated API tokens", + "schema": { + "$ref": "#/definitions/responses.APIToken" + } + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "403": { + "description": "NOT_AUTHORIZED: The client is not authorized." + }, + "404": { + "description": "NO_SUCH_API_TOKEN: An API token with the given id does not exist."
+ }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + }, + "default": { + "description": "Successfully updated API tokens", + "schema": { + "$ref": "#/definitions/responses.APIToken" + } + } + } + } + }, + "/api/v0/content_caches": { + "get": { + "description": "\n*Authorization:* Client must be authenticated as any active user in the system.\n\t\t", + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "content_caches" + ], + "summary": "List all content caches", + "operationId": "ListContentCaches", + "responses": { + "200": { + "description": "OK", + "schema": { + "type": "array", + "items": { + "$ref": "#/definitions/responses.ContentCache" + } + } + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "403": { + "description": "NOT_AUTHORIZED: The client is not authorized." 
+ }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + }, + "default": { + "description": "OK", + "schema": { + "type": "array", + "items": { + "$ref": "#/definitions/responses.ContentCache" + } + } + } + } + }, + "post": { + "description": "\n*Authorization:* Client must be authenticated as an admin.\n\t\t", + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "content_caches" + ], + "summary": "Create content cache", + "operationId": "CreateContentCache", + "parameters": [ + { + "name": "body", + "in": "body", + "required": true, + "schema": { + "$ref": "#/definitions/forms.CreateContentCache" + } + } + ], + "responses": { + "201": { + "description": "success", + "schema": { + "$ref": "#/definitions/responses.ContentCache" + } + }, + "400": { + "description": "invalid content cache details" + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "403": { + "description": "NOT_AUTHORIZED: The client is not authorized."
+ }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + } + } + } + }, + "/api/v0/content_caches/{contentcacheuuid}": { + "get": { + "description": "\n*Authorization:* Client must be authenticated as any active user in the system.\n\t\t", + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "content_caches" + ], + "summary": "View details of a content cache", + "operationId": "GetContentCache", + "parameters": [ + { + "type": "string", + "description": "uuid of content cache", + "name": "contentcacheuuid", + "in": "path", + "required": true + } + ], + "responses": { + "200": { + "description": "OK", + "schema": { + "$ref": "#/definitions/responses.ContentCache" + } + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "403": { + "description": "NOT_AUTHORIZED: The client is not authorized." + }, + "404": { + "description": "NO_SUCH_CONTENT_CACHE: A content cache with the given uuid does not exist." 
+ }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + }, + "default": { + "description": "OK", + "schema": { + "$ref": "#/definitions/responses.ContentCache" + } + } + } + }, + "delete": { + "description": "\n*Authorization:* Client must be authenticated as an admin.\n\t\t", + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "content_caches" + ], + "summary": "Remove a content cache", + "operationId": "DeleteContentCache", + "parameters": [ + { + "type": "string", + "description": "uuid of content cache", + "name": "contentcacheuuid", + "in": "path", + "required": true + } + ], + "responses": { + "204": { + "description": "success or content cache does not exist" + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "403": { + "description": "NOT_AUTHORIZED: The client is not authorized." + }, + "404": { + "description": "NO_SUCH_CONTENT_CACHE: A content cache with the given uuid does not exist."
+ }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + } + } + } + }, + "/api/v0/crons": { + "get": { + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "crons" + ], + "summary": "List all crons", + "operationId": "ListCrons", + "responses": { + "200": { + "description": "Success, list of crons returned.", + "schema": { + "$ref": "#/definitions/tmpresponses.Crons" + } + }, + "default": { + "description": "Success, list of crons returned.", + "schema": { + "$ref": "#/definitions/tmpresponses.Crons" + } + } + } + }, + "post": { + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "crons" + ], + "summary": "Create / update a periodic task", + "operationId": "UpdateCron", + "parameters": [ + { + "name": "body", + "in": "body", + "required": true, + "schema": { + "$ref": "#/definitions/tmpforms.CronCreate" + } + } + ], + "responses": { + "202": { + "description": "Success.", + "schema": { + "$ref": "#/definitions/tmpresponses.Cron" + } + } + } + } + }, + "/api/v0/crons/{action}": { + "get": { + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "crons" + ], + "summary": "Get info about the cron with the given action", + "operationId": "GetCron", + "parameters": [ + { + "type": "string", + "description": "action of the cron to fetch", + "name": "action", + "in": "path", + "required": true + } + ], + "responses": { + "200": { + "description": "Success, cron info returned.", + "schema": { + "$ref": "#/definitions/tmpresponses.Cron" + } + }, + "default": { + "description": "Success, cron info returned.", + "schema": { + "$ref": "#/definitions/tmpresponses.Cron" + } + } + } + }, + "delete": { + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + 
"tags": [ + "crons" + ], + "summary": "Delete the cron. Jobs created from it will not be canceled.", + "operationId": "DeleteCron", + "parameters": [ + { + "type": "string", + "description": "action of cron to delete", + "name": "action", + "in": "path", + "required": true + } + ], + "responses": { + "204": { + "description": "Success, cron has been deleted." + } + } + } + }, + "/api/v0/events": { + "get": { + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "events" + ], + "summary": "Get Events", + "operationId": "GetEvents", + "parameters": [ + { + "type": "string", + "description": "The ID of the first record on the page", + "name": "pageStart", + "in": "query" + }, + { + "type": "integer", + "default": 10, + "description": "Maximum number of results to return", + "name": "pageSize", + "in": "query" + }, + { + "type": "boolean", + "default": false, + "description": "Whether to include the resource count in the response header", + "name": "count", + "in": "query" + }, + { + "type": "string", + "name": "publishedBefore", + "in": "query" + }, + { + "type": "string", + "name": "publishedAfter", + "in": "query" + }, + { + "type": "string", + "description": "UUID of the user or organization that performed the event", + "name": "actorId", + "in": "query" + }, + { + "type": "string", + "description": "Type of events to filter by", + "name": "eventType", + "in": "query" + } + ], + "responses": { + "200": { + "description": "OK", + "schema": { + "$ref": "#/definitions/responses.Events" + } + }, + "400": { + "description": "INVALID_PARAMETERS: Unable to parse query parameters" + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "403": { + "description": "NOT_AUTHORIZED: The client is not authorized." 
+ }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + }, + "default": { + "description": "OK", + "schema": { + "$ref": "#/definitions/responses.Events" + } + } + } + } + }, + "/api/v0/imagescan/layeroverride": { + "get": { + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "imagescan" + ], + "summary": "Gets a list of all the available overrides", + "operationId": "GetLayerVulnOverrides", + "responses": { + "200": { + "description": "Successfully set vulnerability override", + "schema": { + "type": "array", + "items": { + "$ref": "#/definitions/responses.LayerVulnOverride" + } + } + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + }, + "500": { + "description": "INTERNAL_ERROR: An internal server error occurred. Contact a system administrator for more information." 
+ }, + "default": { + "description": "Successfully set vulnerability override", + "schema": { + "type": "array", + "items": { + "$ref": "#/definitions/responses.LayerVulnOverride" + } + } + } + } + } + }, + "/api/v0/imagescan/layeroverride/{layerid}": { + "post": { + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "imagescan" + ], + "summary": "Sets a vulnerability override for the given layer", + "operationId": "SetLayerVulnOverride", + "parameters": [ + { + "type": "string", + "description": "layer id", + "name": "layerid", + "in": "path", + "required": true + }, + { + "name": "body", + "in": "body", + "required": true, + "schema": { + "$ref": "#/definitions/forms.VulnOverrideOption" + } + } + ], + "responses": { + "200": { + "description": "Successfully set vulnerability override" + }, + "400": { + "description": "INVALID_SETTINGS: The submitted settings change request contains invalid values." + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "404": { + "description": "NO_SUCH_LAYER: A layer with the given sha does not exist in the repository." + }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + }, + "500": { + "description": "INTERNAL_ERROR: An internal server error occurred. Contact a system administrator for more information." 
+ }, + "default": { + "description": "Successfully set vulnerability override" + } + } + } + }, + "/api/v0/imagescan/layeroverride/{vulnerabilityid}": { + "delete": { + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "imagescan" + ], + "summary": "Deletes a layer vulnerability override", + "operationId": "DeleteLayerVulnOverride", + "parameters": [ + { + "type": "string", + "description": "vulnerability id", + "name": "vulnerabilityid", + "in": "path", + "required": true + } + ], + "responses": { + "200": { + "description": "OK" + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "404": { + "description": "NO_SUCH_LAYER: A layer with the given sha does not exist in the repository." + }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + }, + "default": { + "description": "OK" + } + } + } + }, + "/api/v0/imagescan/repositories/{namespace}/{reponame}/{tag}": { + "get": { + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "imagescan" + ], + "summary": "Deprecated. Use /scansummary/repositories/{namespace}/{reponame}/{tag} instead.", + "operationId": "GetSummaryByManifestDigest", + "deprecated": true, + "parameters": [ + { + "type": "string", + "description": "namespace/owner of repository", + "name": "namespace", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "name of repository", + "name": "reponame", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "tag name", + "name": "tag", + "in": "path", + "required": true + }, + { + "type": "boolean", + "default": true, + "description": "Whether to include detailed summary results", + "name": "detailed", + "in": "query" + }, + { + "type": "string", + "description": "Operating system of the tag", + "name": "os", + "in": "query" + }, + { + "type": "string", + "description": "Architecture of the tag", + "name": "arch", + "in": "query" + } + ], + "responses": { + "200": { + "description": "Successfully retrieved summary.", + "schema": { + "type": "array", + "items": { + "$ref": "#/definitions/responses.OldScanSummary" + } + } + }, + "400": { + "description": "SCANNING_NOT_ENABLED: Scanning is not enabled" + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "404": { + "description": "NO_SUCH_TAG: A tag with the given name does not exist for the given repository." + }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + }, + "default": { + "description": "Successfully retrieved summary.", + "schema": { + "type": "array", + "items": { + "$ref": "#/definitions/responses.OldScanSummary" + } + } + } + } + } + }, + "/api/v0/imagescan/scan": { + "post": { + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "imagescan" + ], + "summary": "Do a scan or a scan/check of all layers", + "operationId": "ScanAllLayers", + "parameters": [ + { + "name": "body", + "in": "body", + "required": true, + "schema": { + "$ref": "#/definitions/forms.ScanOptions" + } + } + ], + "responses": { + "200": { + "description": "Successfully submitted all layers to jobrunner for scan/check." + }, + "400": { + "description": "SCANNING_NOT_ENABLED: Scanning is not enabled" + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated."
+ }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + }, + "default": { + "description": "Successfully submitted all layers to jobrunner for scan/check." + } + } + } + }, + "/api/v0/imagescan/scan/update": { + "put": { + "consumes": [ + "multipart/form-data", + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "imagescan" + ], + "summary": "Update the vulnerability database for security scanning", + "operationId": "UpdateVulnDB", + "parameters": [ + { + "type": "file", + "description": "Upload file to init database", + "name": "file", + "in": "formData" + }, + { + "type": "boolean", + "default": false, + "description": "Init or update vuln db in online mode.", + "name": "online", + "in": "query" + } + ], + "responses": { + "200": { + "description": "Successfully started to update the vulnerability DB." + }, + "400": { + "description": "SCANNING_DB_NOT_READY: Scanning DB is not ready" + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + }, + "default": { + "description": "Successfully started to update the vulnerability DB."
+ } + } + } + }, + "/api/v0/imagescan/scan/{namespace}/{reponame}/{tag}/{os}/{arch}": { + "post": { + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "imagescan" + ], + "summary": "Do a scan or a scan/check of a given image", + "operationId": "ScanImage", + "parameters": [ + { + "type": "string", + "description": "namespace/owner of repository", + "name": "namespace", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "name of repository", + "name": "reponame", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "tag name", + "name": "tag", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "operating system of the tag", + "name": "os", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "architecture of the tag", + "name": "arch", + "in": "path", + "required": true + } + ], + "responses": { + "200": { + "description": "Successfully submitted image to jobrunner for scan/check." + }, + "400": { + "description": "SCANNING_NOT_ENABLED: Scanning is not enabled" + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "404": { + "description": "NO_SUCH_TAG: A tag with the given name does not exist for the given repository." + }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + }, + "default": { + "description": "Successfully submitted image to jobrunner for scan/check."
+ } + } + } + }, + "/api/v0/imagescan/scansummary/component/{component}": { + "get": { + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "imagescan" + ], + "summary": "Get the image by component", + "operationId": "GetScannedImageByComponent", + "parameters": [ + { + "type": "string", + "description": "component", + "name": "component", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "The ID of the first record on the page", + "name": "pageStart", + "in": "query" + }, + { + "type": "integer", + "default": 10, + "description": "Maximum number of results to return", + "name": "pageSize", + "in": "query" + }, + { + "type": "boolean", + "default": false, + "description": "Whether to include the resource count in the response header", + "name": "count", + "in": "query" + }, + { + "type": "boolean", + "default": true, + "description": "Whether to include scan status summary results", + "name": "scanstatus", + "in": "query" + } + ], + "responses": { + "200": { + "description": "Successfully retrieved images", + "schema": { + "type": "array", + "items": { + "$ref": "#/definitions/responses.ScanSummary" + } + } + }, + "400": { + "description": "SCANNING_NOT_ENABLED: Scanning is not enabled" + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "404": { + "description": "NO_SUCH_TAG: A tag with the given name does not exist for the given repository." 
+ }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + }, + "default": { + "description": "Successfully retrieved images", + "schema": { + "type": "array", + "items": { + "$ref": "#/definitions/responses.ScanSummary" + } + } + } + } + } + }, + "/api/v0/imagescan/scansummary/cve/{cve}": { + "get": { + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "imagescan" + ], + "summary": "Get the image by CVE", + "operationId": "GetScannedImageByCVE", + "parameters": [ + { + "type": "string", + "description": "cve", + "name": "cve", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "The ID of the first record on the page", + "name": "pageStart", + "in": "query" + }, + { + "type": "integer", + "default": 10, + "description": "Maximum number of results to return", + "name": "pageSize", + "in": "query" + }, + { + "type": "boolean", + "default": false, + "description": "Whether to include the resource count in the response header", + "name": "count", + "in": "query" + }, + { + "type": "boolean", + "default": true, + "description": "Whether to include scan status summary results", + "name": "scanstatus", + "in": "query" + } + ], + "responses": { + "200": { + "description": "Successfully retrieved images", + "schema": { + "type": "array", + "items": { + "$ref": "#/definitions/responses.ScanSummary" + } + } + }, + "400": { + "description": "SCANNING_NOT_ENABLED: Scanning is not enabled" + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "404": { + "description": "NO_SUCH_TAG: A tag with the given name does not exist for the given repository." 
+ }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + }, + "default": { + "description": "Successfully retrieved images", + "schema": { + "type": "array", + "items": { + "$ref": "#/definitions/responses.ScanSummary" + } + } + } + } + } + }, + "/api/v0/imagescan/scansummary/layer/{layerid}": { + "get": { + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "imagescan" + ], + "summary": "Get the image by layer sha", + "operationId": "GetScannedImageByLayer", + "parameters": [ + { + "type": "string", + "description": "layer id", + "name": "layerid", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "The ID of the first record on the page", + "name": "pageStart", + "in": "query" + }, + { + "type": "integer", + "default": 10, + "description": "Maximum number of results to return", + "name": "pageSize", + "in": "query" + }, + { + "type": "boolean", + "default": false, + "description": "Whether to include the resource count in the response header", + "name": "count", + "in": "query" + }, + { + "type": "boolean", + "default": true, + "description": "Whether to include scan status summary results", + "name": "scanstatus", + "in": "query" + } + ], + "responses": { + "200": { + "description": "Successfully retrieved images", + "schema": { + "type": "array", + "items": { + "$ref": "#/definitions/responses.ScanSummary" + } + } + }, + "400": { + "description": "SCANNING_NOT_ENABLED: Scanning is not enabled" + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "404": { + "description": "NO_SUCH_LAYER: A layer with the given sha does not exist in the repository." 
+ }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + }, + "default": { + "description": "Successfully retrieved images", + "schema": { + "type": "array", + "items": { + "$ref": "#/definitions/responses.ScanSummary" + } + } + } + } + } + }, + "/api/v0/imagescan/scansummary/license/{license}": { + "get": { + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "imagescan" + ], + "summary": "Get the image by license", + "operationId": "GetScannedImageByLicense", + "parameters": [ + { + "type": "string", + "description": "license", + "name": "license", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "The ID of the first record on the page", + "name": "pageStart", + "in": "query" + }, + { + "type": "integer", + "default": 10, + "description": "Maximum number of results to return", + "name": "pageSize", + "in": "query" + }, + { + "type": "boolean", + "default": false, + "description": "Whether to include the resource count in the response header", + "name": "count", + "in": "query" + }, + { + "type": "boolean", + "default": true, + "description": "Whether to include scan status summary results", + "name": "scanstatus", + "in": "query" + } + ], + "responses": { + "200": { + "description": "Successfully retrieved images", + "schema": { + "type": "array", + "items": { + "$ref": "#/definitions/responses.ScanSummary" + } + } + }, + "400": { + "description": "SCANNING_NOT_ENABLED: Scanning is not enabled" + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "404": { + "description": "NO_SUCH_TAG: A tag with the given name does not exist for the given repository." 
+ }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + }, + "default": { + "description": "Successfully retrieved images", + "schema": { + "type": "array", + "items": { + "$ref": "#/definitions/responses.ScanSummary" + } + } + } + } + } + }, + "/api/v0/imagescan/scansummary/repositories/{namespace}/{reponame}/{reference}": { + "get": { + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "imagescan" + ], + "summary": "Get the scan summary info on a namespace/repo:tag or namespace/repo@digest", + "operationId": "GetScanSummary", + "parameters": [ + { + "type": "string", + "description": "namespace/owner of repository", + "name": "namespace", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "name of repository", + "name": "reponame", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "digest or tag for an image manifest", + "name": "reference", + "in": "path", + "required": true + }, + { + "type": "boolean", + "default": true, + "description": "Whether to include scan status summary results", + "name": "scanstatus", + "in": "query" + }, + { + "type": "string", + "description": "Operating system of the tag", + "name": "os", + "in": "query" + }, + { + "type": "string", + "description": "Architecture of the tag", + "name": "arch", + "in": "query" + } + ], + "responses": { + "200": { + "description": "Successfully retrieved summary.", + "schema": { + "type": "array", + "items": { + "$ref": "#/definitions/responses.ScanSummary" + } + } + }, + "400": { + "description": "SCANNING_NOT_ENABLED: Scanning is not enabled" + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." 
+ }, + "404": { + "description": "NO_SUCH_REF: A ref with the given name does not exist for the given repository." + }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + }, + "default": { + "description": "Successfully retrieved summary.", + "schema": { + "type": "array", + "items": { + "$ref": "#/definitions/responses.ScanSummary" + } + } + } + } + } + }, + "/api/v0/imagescan/scansummary/repositories/{namespace}/{reponame}/{tag}/export": { + "get": { + "consumes": [ + "application/json" + ], + "produces": [ + "text/csv", + "application/json" + ], + "tags": [ + "imagescan" + ], + "summary": "Get the scan summary info on a namespace/repo:tag as a file", + "operationId": "ExportScanSummary", + "parameters": [ + { + "type": "string", + "description": "namespace/owner of repository", + "name": "namespace", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "name of repository", + "name": "reponame", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "tag name", + "name": "tag", + "in": "path", + "required": true + }, + { + "type": "boolean", + "default": true, + "description": "Whether to include scan status summary results", + "name": "scanstatus", + "in": "query" + }, + { + "type": "string", + "description": "Operating system of the tag", + "name": "os", + "in": "query" + }, + { + "type": "string", + "description": "Architecture of the tag", + "name": "arch", + "in": "query" + }, + { + "type": "string", + "description": "Scan summary exported filetype", + "name": "Accept", + "in": "header" + } + ], + "responses": { + "400": { + "description": "INVALID_PARAMETERS: Unable to parse query parameters" + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." 
+ }, + "404": { + "description": "NO_SUCH_TAG: A tag with the given name does not exist for the given repository." + }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + } + } + } + }, + "/api/v0/imagescan/scansummary/tags": { + "post": { + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "imagescan" + ], + "summary": "Get a list of scan summaries", + "operationId": "GetScanSummaries", + "parameters": [ + { + "name": "body", + "in": "body", + "required": true, + "schema": { + "$ref": "#/definitions/forms.ImagesForm" + } + } + ], + "responses": { + "200": { + "description": "Successfully retrieved summary.", + "schema": { + "$ref": "#/definitions/responses.ThinScanSummaries" + } + }, + "400": { + "description": "INVALID_PARAMETERS: Unable to parse query parameters" + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." 
+ }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + }, + "default": { + "description": "Successfully retrieved summary.", + "schema": { + "$ref": "#/definitions/responses.ThinScanSummaries" + } + } + } + } + }, + "/api/v0/imagescan/status": { + "get": { + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "imagescan" + ], + "summary": "Get the status and version of scanning service", + "operationId": "GetNautilusDBStatus", + "responses": { + "200": { + "description": "Successfully retrieved DB status", + "schema": { + "$ref": "#/definitions/responses.NautilusStatus" + } + }, + "400": { + "description": "SCANNING_NOT_ENABLED: Scanning is not enabled" + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + }, + "default": { + "description": "Successfully retrieved DB status", + "schema": { + "$ref": "#/definitions/responses.NautilusStatus" + } + } + } + } + }, + "/api/v0/index/autocomplete": { + "get": { + "description": "\nRepository results will be filtered to only those repositories visible to the client. 
Account results will not be filtered.\n\t\t", + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "index" + ], + "summary": "Autocompletion for repositories and/or accounts", + "operationId": "Autocomplete", + "parameters": [ + { + "type": "string", + "description": "Autocomplete query", + "name": "query", + "in": "query", + "required": true + }, + { + "type": "boolean", + "default": true, + "description": "Whether to include repositories in the response", + "name": "includeRepositories", + "in": "query" + }, + { + "type": "boolean", + "default": true, + "description": "Whether to include accounts in the response", + "name": "includeAccounts", + "in": "query" + }, + { + "type": "string", + "description": "Exact repository namespace to limit results to.", + "name": "namespace", + "in": "query" + }, + { + "type": "number", + "default": 25, + "name": "limit", + "in": "query" + } + ], + "responses": { + "200": { + "description": "OK", + "schema": { + "$ref": "#/definitions/responses.Autocomplete" + } + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + }, + "default": { + "description": "OK", + "schema": { + "$ref": "#/definitions/responses.Autocomplete" + } + } + } + } + }, + "/api/v0/index/dockersearch": { + "get": { + "description": "\nThis is used for the Docker CLI's docker search command. 
Repository results will be filtered to only those repositories visible to the client.\n\t\t", + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "index" + ], + "summary": "Search Docker repositories", + "operationId": "Docker Search", + "parameters": [ + { + "type": "string", + "description": "Search query", + "name": "q", + "in": "query", + "required": true + } + ], + "responses": { + "200": { + "description": "OK", + "schema": { + "$ref": "#/definitions/responses.DockerSearch" + } + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "403": { + "description": "NOT_AUTHORIZED: The client is not authorized." + }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + }, + "default": { + "description": "OK", + "schema": { + "$ref": "#/definitions/responses.DockerSearch" + } + } + } + } + }, + "/api/v0/jobs": { + "get": { + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "jobs" + ], + "summary": "List all jobs ordered by most recently scheduled", + "operationId": "ListJobs", + "parameters": [ + { + "type": "string", + "default": "any", + "description": "Filter jobs by action.", + "name": "action", + "in": "query" + }, + { + "type": "string", + "default": "any", + "description": "Filter jobs by worker ID.", + "name": "worker", + "in": "query" + }, + { + "type": "string", + "default": "any", + "description": "Show only jobs that are running.", + "name": "running", + "in": "query" + }, + { + "type": "integer", + "default": 0, + "description": "Return most recently scheduled jobs starting from this offset index.", + "name": "start", + "in": "query" + }, + { + "type": "integer", + "default": 10, + "description": "Maximum number of jobs per page of results.", + "name": "limit", + "in": 
"query" + } + ], + "responses": { + "200": { + "description": "Success, list of jobs returned.", + "schema": { + "$ref": "#/definitions/tmpresponses.Jobs" + } + }, + "default": { + "description": "Success, list of jobs returned.", + "schema": { + "$ref": "#/definitions/tmpresponses.Jobs" + } + } + } + }, + "post": { + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "jobs" + ], + "summary": "Schedule a job to be run immediately", + "operationId": "CreateJob", + "parameters": [ + { + "name": "body", + "in": "body", + "required": true, + "schema": { + "$ref": "#/definitions/tmpforms.JobSubmission" + } + } + ], + "responses": { + "202": { + "description": "Success, job waiting to be claimed.", + "schema": { + "$ref": "#/definitions/tmpresponses.Job" + } + } + } + } + }, + "/api/v0/jobs/{jobID}": { + "get": { + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "jobs" + ], + "summary": "Get info about the job with the given ID", + "operationId": "GetJob", + "parameters": [ + { + "type": "string", + "description": "ID of job to fetch", + "name": "jobID", + "in": "path", + "required": true + } + ], + "responses": { + "200": { + "description": "Success, job info returned.", + "schema": { + "$ref": "#/definitions/tmpresponses.Job" + } + }, + "default": { + "description": "Success, job info returned.", + "schema": { + "$ref": "#/definitions/tmpresponses.Job" + } + } + } + }, + "delete": { + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "jobs" + ], + "summary": "Signal this job's worker to cancel and delete the job", + "operationId": "DeleteJobs", + "parameters": [ + { + "type": "string", + "description": "ID of job to delete", + "name": "jobID", + "in": "path", + "required": true + } + ], + "responses": { + "204": { + "description": "Success, job has been deleted." 
+ } + } + } + }, + "/api/v0/jobs/{jobID}/cancel": { + "post": { + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "jobs" + ], + "summary": "Signal this job's worker to cancel the job", + "operationId": "CancelJob", + "parameters": [ + { + "type": "string", + "description": "ID of job to cancel", + "name": "jobID", + "in": "path", + "required": true + } + ], + "responses": { + "204": { + "description": "Success, job has been canceled." + } + } + } + }, + "/api/v0/jobs/{jobID}/logs": { + "get": { + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "jobs" + ], + "summary": "Retrieve logs for this job from its worker", + "operationId": "GetJobLogs", + "parameters": [ + { + "type": "string", + "description": "ID of job whose logs to retrieve", + "name": "jobID", + "in": "path", + "required": true + }, + { + "type": "boolean", + "default": false, + "description": "Whether to stream new logs", + "name": "stream", + "in": "query" + }, + { + "type": "integer", + "default": 0, + "description": "Line number to start from", + "name": "offset", + "in": "query" + }, + { + "type": "integer", + "default": 0, + "description": "Number of lines to return if not streaming", + "name": "limit", + "in": "query" + } + ], + "responses": { + "200": { + "description": "Success, job's logs returned.", + "schema": { + "type": "array", + "items": { + "$ref": "#/definitions/tmpresponses.JobLog" + } + } + }, + "default": { + "description": "Success, job's logs returned.", + "schema": { + "type": "array", + "items": { + "$ref": "#/definitions/tmpresponses.JobLog" + } + } + } + } + } + }, + "/api/v0/meta/alerts": { + "get": { + "description": "\n*Authorization:* Client must be authenticated as an admin.\n\t\t", + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "meta" + ], + "summary": "Get alerts", + "operationId": "GetAlerts", + "responses": { + "200": { +
"description": "OK", + "schema": { + "type": "array", + "items": { + "$ref": "#/definitions/responses.Alert" + } + } + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + }, + "default": { + "description": "OK", + "schema": { + "type": "array", + "items": { + "$ref": "#/definitions/responses.Alert" + } + } + } + } + } + }, + "/api/v0/meta/cluster_status": { + "get": { + "description": "\n*Authorization:* Client must be authenticated as an admin.\n\t\t", + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "meta" + ], + "summary": "Get cluster status", + "operationId": "GetClusterStatus", + "responses": { + "200": { + "description": "OK", + "schema": { + "$ref": "#/definitions/responses.ClusterStatus" + } + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "403": { + "description": "NOT_AUTHORIZED: The client is not authorized."
+ }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + }, + "default": { + "description": "OK", + "schema": { + "$ref": "#/definitions/responses.ClusterStatus" + } + } + } + } + }, + "/api/v0/meta/features": { + "get": { + "description": "\n*Authorization:* Client must be authenticated as any active user in the system\n\t\t", + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "meta" + ], + "summary": "Get features", + "operationId": "GetFeatures", + "responses": { + "200": { + "description": "OK", + "schema": { + "$ref": "#/definitions/responses.Features" + } + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + }, + "default": { + "description": "OK", + "schema": { + "$ref": "#/definitions/responses.Features" + } + } + } + } + }, + "/api/v0/meta/settings": { + "get": { + "description": "\n*Authorization:* Client must be authenticated as an admin.\n\t\t", + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "meta" + ], + "summary": "Get settings", + "operationId": "GetSettings", + "responses": { + "200": { + "description": "OK", + "schema": { + "$ref": "#/definitions/responses.Settings" + } + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "403": { + "description": "NOT_AUTHORIZED: The client is not authorized."
+ }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + }, + "default": { + "description": "OK", + "schema": { + "$ref": "#/definitions/responses.Settings" + } + } + } + }, + "post": { + "description": "\n*Authorization:* Client must be authenticated as an admin.\n\t\t", + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "meta" + ], + "summary": "Update settings", + "operationId": "UpdateSettings", + "parameters": [ + { + "name": "body", + "in": "body", + "required": true, + "schema": { + "$ref": "#/definitions/forms.Settings" + } + } + ], + "responses": { + "202": { + "description": "success" + }, + "400": { + "description": "INVALID_SETTINGS: The submitted settings change request contains invalid values." + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + } + } + } + }, + "/api/v0/remote/registry": { + "post": { + "description": "\n*Authorization:* Client must be authenticated as any active user in the system.
Credentials provided in the request body must be for an active user in the remote system.\n\t\t", + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "remote" + ], + "summary": "Create a check for connection status of remote registry", + "operationId": "CreateRemoteRegistryCheck", + "parameters": [ + { + "name": "body", + "in": "body", + "required": true, + "schema": { + "$ref": "#/definitions/forms.CreateRemoteRegistryCheck" + } + } + ], + "responses": { + "201": { + "description": "success", + "schema": { + "$ref": "#/definitions/responses.RemoteRegistryCheck" + } + }, + "400": { + "description": "REMOTE_REGISTRY_INVALID_PERMISSIONS: Remote user not authorized to access the requested resource." + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + } + } + } + }, + "/api/v0/repositories": { + "get": { + "description": "\n*Authorization:* Client must be authenticated as any active user in the system. 
Results will be filtered to only those repositories visible to the client.\n\t\t", + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "repositories" + ], + "summary": "List all repositories", + "operationId": "ListRepositories", + "parameters": [ + { + "type": "string", + "description": "The ID of the first record on the page", + "name": "pageStart", + "in": "query" + }, + { + "type": "integer", + "default": 10, + "description": "Maximum number of results to return", + "name": "pageSize", + "in": "query" + }, + { + "type": "boolean", + "default": false, + "description": "Whether to include the resource count in the response header", + "name": "count", + "in": "query" + } + ], + "responses": { + "200": { + "description": "OK", + "schema": { + "$ref": "#/definitions/responses.Repositories" + } + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + }, + "default": { + "description": "OK", + "schema": { + "$ref": "#/definitions/responses.Repositories" + } + } + } + } + }, + "/api/v0/repositories/scan/toggle": { + "post": { + "description": "\n*Authorization:* Client must be authenticated as a user who has \"admin\" access to the repository\n(i.e., user owns the repo or is a member of a team with \"admin\" level access to the organization's repository).\n\t\t", + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "repositories" + ], + "summary": "Toggles scan on push for all repositories", + "operationId": "ToggleAllRepositoriesScanOnPush", + "parameters": [ + { + "name": "body", + "in": "body", + "required": true, + "schema": { + "$ref": "#/definitions/forms.ToggleScanOnPush" + } + } + ], + "responses": { + "200": { + "description":
"Successfully toggled scan on push for all repositories." + }, + "400": { + "description": "INVALID_JSON: Unable to parse JSON" + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "403": { + "description": "NOT_AUTHORIZED: The client is not authorized." + }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + }, + "default": { + "description": "Successfully toggled scan on push for all repositories." + } + } + } + }, + "/api/v0/repositories/{namespace}": { + "get": { + "description": "\n*Authorization:* Client must be authenticated as any active user in the system. Results will be filtered to only those repositories visible to the client.\n\t\t", + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "repositories" + ], + "summary": "List repositories in a namespace", + "operationId": "ListNamespaceRepositories", + "parameters": [ + { + "type": "string", + "description": "namespace/owner of repository", + "name": "namespace", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "The ID of the first record on the page", + "name": "pageStart", + "in": "query" + }, + { + "type": "integer", + "default": 10, + "description": "Maximum number of results to return", + "name": "pageSize", + "in": "query" + }, + { + "type": "boolean", + "default": false, + "description": "Whether to include the resource count in the response header", + "name": "count", + "in": "query" + } + ], + "responses": { + "200": { + "description": "OK", + "schema": { + "$ref": "#/definitions/responses.Repositories" + } + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "404": { + "description": "NO_SUCH_ACCOUNT: An account with the given name does not exist." 
+ }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + }, + "default": { + "description": "OK", + "schema": { + "$ref": "#/definitions/responses.Repositories" + } + } + } + }, + "post": { + "description": "\n*Authorization:* Client must be authenticated as a user who has admin access to the\nrepository namespace (i.e., user owns the repo or is a member of a team with\n\"admin\" level access to the organization's namespace of repositories).\n\t\t", + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "repositories" + ], + "summary": "Create repository", + "operationId": "CreateRepository", + "parameters": [ + { + "type": "string", + "description": "namespace/owner of repository", + "name": "namespace", + "in": "path", + "required": true + }, + { + "name": "body", + "in": "body", + "required": true, + "schema": { + "$ref": "#/definitions/forms.CreateRepo" + } + } + ], + "responses": { + "201": { + "description": "success", + "schema": { + "$ref": "#/definitions/responses.Repository" + } + }, + "400": { + "description": "REPOSITORY_EXISTS: A repository with the same name already exists." + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "403": { + "description": "NOT_AUTHORIZED: The client is not authorized." + }, + "404": { + "description": "NO_SUCH_ACCOUNT: An account with the given name does not exist." 
+ }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + } + } + } + }, + "/api/v0/repositories/{namespace}/{reponame}": { + "get": { + "description": "\n*Authorization:* Client must be authenticated as a user who has visibility to the repository.\n\t\t", + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "repositories" + ], + "summary": "View details of a repository", + "operationId": "GetRepository", + "parameters": [ + { + "type": "string", + "description": "namespace/owner of repository", + "name": "namespace", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "name of repository", + "name": "reponame", + "in": "path", + "required": true + } + ], + "responses": { + "200": { + "description": "OK", + "schema": { + "$ref": "#/definitions/responses.Repository" + } + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "404": { + "description": "NO_SUCH_REPOSITORY: A repository with the given name does not exist." 
+ }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + }, + "default": { + "description": "OK", + "schema": { + "$ref": "#/definitions/responses.Repository" + } + } + } + }, + "delete": { + "description": "\n*Authorization:* Client must be authenticated as a user who has \"admin\" access to the repository\n(i.e., user owns the repo or is a member of a team with \"admin\" level access to the organization's repository).\n\t\t", + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "repositories" + ], + "summary": "Remove a repository", + "operationId": "DeleteRepository", + "parameters": [ + { + "type": "string", + "description": "namespace/owner of repository", + "name": "namespace", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "name of repository", + "name": "reponame", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "The domain used to push tags to DTR. Must be set to obtain/manipulate Notary related information", + "name": "domain", + "in": "query" + } + ], + "responses": { + "204": { + "description": "success or repository does not exist" + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "403": { + "description": "NOT_AUTHORIZED: The client is not authorized." + }, + "404": { + "description": "NO_SUCH_REPOSITORY: A repository with the given name does not exist."
+ }, + "405": { + "description": "REPOSITORY_CONTAINS_TAGS_IN_NOTARY: This repository contains tags in notary and can't be deleted until all tags in notary are removed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + } + } + }, + "patch": { + "description": "\n*Authorization:* Client must be authenticated as a user who has \"admin\" access to the repository\n(i.e., user owns the repo or is a member of a team with \"admin\" level access to the organization's repository).\n\nNote that a repository cannot be renamed this way.\n\t\t", + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "repositories" + ], + "summary": "Update details of a repository", + "operationId": "PatchRepository", + "parameters": [ + { + "type": "string", + "description": "namespace/owner of repository", + "name": "namespace", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "name of repository", + "name": "reponame", + "in": "path", + "required": true + }, + { + "name": "body", + "in": "body", + "required": true, + "schema": { + "$ref": "#/definitions/forms.UpdateRepo" + } + } + ], + "responses": { + "200": { + "description": "OK", + "schema": { + "$ref": "#/definitions/responses.Repository" + } + }, + "400": { + "description": "INVALID_REPOSITORY_VISIBILITY: The visibility value of the repository is invalid." + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "403": { + "description": "NOT_AUTHORIZED: The client is not authorized." + }, + "404": { + "description": "NO_SUCH_REPOSITORY: A repository with the given name does not exist." 
+ }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + }, + "default": { + "description": "OK", + "schema": { + "$ref": "#/definitions/responses.Repository" + } + } + } + } + }, + "/api/v0/repositories/{namespace}/{reponame}/events": { + "get": { + "description": "\n*Authorization:* Client must be authenticated as a user who has visibility to local repository.\n\t\t", + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "repositories" + ], + "summary": "List the events for a repository", + "operationId": "ListRepoEvents", + "parameters": [ + { + "type": "string", + "description": "namespace/owner of repository", + "name": "namespace", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "name of repository", + "name": "reponame", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "The ID of the first record on the page", + "name": "pageStart", + "in": "query" + }, + { + "type": "integer", + "default": 10, + "description": "Maximum number of results to return", + "name": "pageSize", + "in": "query" + }, + { + "type": "boolean", + "default": false, + "description": "Whether to include the resource count in the response header", + "name": "count", + "in": "query" + }, + { + "type": "string", + "name": "publishedBefore", + "in": "query" + }, + { + "type": "string", + "name": "publishedAfter", + "in": "query" + }, + { + "type": "string", + "description": "UUID of the user or organization that performed the event", + "name": "actorId", + "in": "query" + }, + { + "type": "string", + "description": "Type of events to filter by", + "name": "eventType", + "in": "query" + }, + { + "type": "boolean", + "default": false, + "description": "Exclude image pull events", + "name": "excludePull", + "in": "query" + } + ], + 
"responses": { + "200": { + "description": "OK", + "schema": { + "$ref": "#/definitions/responses.Events" + } + }, + "400": { + "description": "INVALID_PARAMETERS: Unable to parse query parameters" + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "403": { + "description": "NOT_AUTHORIZED: The client is not authorized." + }, + "404": { + "description": "NO_SUCH_REPOSITORY: A repository with the given name does not exist." + }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + }, + "default": { + "description": "OK", + "schema": { + "$ref": "#/definitions/responses.Events" + } + } + } + } + }, + "/api/v0/repositories/{namespace}/{reponame}/manifests": { + "get": { + "description": "\n*Authorization:* Client must be authenticated as a user who has visibility to the repository.\n\t\t", + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "repositories" + ], + "summary": "List the available manifests for a repository", + "operationId": "ListRepoManifests", + "parameters": [ + { + "type": "string", + "description": "namespace/owner of repository", + "name": "namespace", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "name of repository", + "name": "reponame", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "The ID of the first record on the page", + "name": "pageStart", + "in": "query" + }, + { + "type": "integer", + "default": 10, + "description": "Maximum number of results to return", + "name": "pageSize", + "in": "query" + }, + { + "type": "boolean", + "default": false, + "description": "Whether to include the resource count in the response header", + "name": "count", + "in": "query" + } + ], + "responses": { + "200": { + "description": "OK", + "schema": 
{ + "type": "array", + "items": { + "$ref": "#/definitions/responses.Manifest" + } + } + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "404": { + "description": "NO_SUCH_REPOSITORY: A repository with the given name does not exist." + }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + }, + "default": { + "description": "OK", + "schema": { + "type": "array", + "items": { + "$ref": "#/definitions/responses.Manifest" + } + } + } + } + } + }, + "/api/v0/repositories/{namespace}/{reponame}/manifests/{reference}": { + "delete": { + "description": "\n*Authorization:* Client must be authenticated as a user who has \"write\" level access to the repository.\n\t\t", + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "repositories" + ], + "summary": "Delete a manifest for a repository", + "operationId": "DeleteRepoManifest", + "deprecated": true, + "parameters": [ + { + "type": "string", + "description": "namespace/owner of repository", + "name": "namespace", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "name of repository", + "name": "reponame", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "digest or tag for an image manifest", + "name": "reference", + "in": "path", + "required": true + } + ], + "responses": { + "204": { + "description": "success" + }, + "400": { + "description": "INVALID_DIGEST: The given digest is invalid." + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "403": { + "description": "NOT_AUTHORIZED: The client is not authorized." + }, + "404": { + "description": "NO_SUCH_REF: A ref with the given name does not exist for the given repository." 
+ }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + } + } + } + }, + "/api/v0/repositories/{namespace}/{reponame}/mirroringPolicies": { + "get": { + "description": "\n*Authorization:* Client must be authenticated as a user who has visibility to local repository.\n\t\t", + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "repositories" + ], + "summary": "List the mirroring policies for a repository", + "operationId": "ListRepoMirroringPolicies", + "parameters": [ + { + "type": "string", + "description": "namespace/owner of repository", + "name": "namespace", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "name of repository", + "name": "reponame", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "The ID of the first record on the page", + "name": "pageStart", + "in": "query" + }, + { + "type": "integer", + "default": 10, + "description": "Maximum number of results to return", + "name": "pageSize", + "in": "query" + }, + { + "type": "boolean", + "default": false, + "description": "Whether to include the resource count in the response header", + "name": "count", + "in": "query" + } + ], + "responses": { + "200": { + "description": "OK", + "schema": { + "type": "array", + "items": { + "$ref": "#/definitions/responses.MirroringPolicy" + } + } + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "403": { + "description": "NOT_AUTHORIZED: The client is not authorized." + }, + "404": { + "description": "NO_SUCH_REPOSITORY: A repository with the given name does not exist." 
+ }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + }, + "default": { + "description": "OK", + "schema": { + "type": "array", + "items": { + "$ref": "#/definitions/responses.MirroringPolicy" + } + } + } + } + }, + "delete": { + "description": "\n*Authorization:* Client must be authenticated as a user who has \"admin\" level access to the local repository.\n\t\t", + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "repositories" + ], + "summary": "Deletes a set of mirroring policies for a repository", + "operationId": "DeleteRepoMirroringPolicies", + "parameters": [ + { + "type": "string", + "description": "namespace/owner of repository", + "name": "namespace", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "name of repository", + "name": "reponame", + "in": "path", + "required": true + }, + { + "name": "body", + "in": "body", + "required": true, + "schema": { + "type": "array", + "items": { + "$ref": "#/definitions/forms.DeleteMirroringPolicyIDs" + } + } + } + ], + "responses": { + "204": { + "description": "success" + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "403": { + "description": "NOT_AUTHORIZED: The client is not authorized." + }, + "404": { + "description": "NO_SUCH_REPOSITORY: A repository with the given name does not exist." 
+ }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + } + } + } + }, + "/api/v0/repositories/{namespace}/{reponame}/pollMirroringPolicies": { + "get": { + "description": "\n*Authorization:* Client must be authenticated as a user who has visibility to local repository.\n\t\t", + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "repositories" + ], + "summary": "List the poll mirroring policies for a repository", + "operationId": "ListRepoPollMirroringPolicies", + "parameters": [ + { + "type": "string", + "description": "namespace/owner of repository", + "name": "namespace", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "name of repository", + "name": "reponame", + "in": "path", + "required": true + } + ], + "responses": { + "200": { + "description": "OK", + "schema": { + "type": "array", + "items": { + "$ref": "#/definitions/responses.PollMirroringPolicy" + } + } + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "403": { + "description": "NOT_AUTHORIZED: The client is not authorized." + }, + "404": { + "description": "NO_SUCH_REPOSITORY: A repository with the given name does not exist." 
+ }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + }, + "default": { + "description": "OK", + "schema": { + "type": "array", + "items": { + "$ref": "#/definitions/responses.PollMirroringPolicy" + } + } + } + } + }, + "post": { + "description": "*Authorization:* Client must be authenticated as a user who has \"admin\" level access to the local repository and credentials to the remote repository.", + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "repositories" + ], + "summary": "Create a poll mirroring policy for a repository", + "operationId": "CreateRepoPollMirroringPolicy", + "parameters": [ + { + "type": "string", + "description": "namespace/owner of repository", + "name": "namespace", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "name of repository", + "name": "reponame", + "in": "path", + "required": true + }, + { + "type": "boolean", + "default": true, + "description": "Whether to evaluate the policy on creation", + "name": "initialEvaluation", + "in": "query" + }, + { + "name": "body", + "in": "body", + "required": true, + "schema": { + "$ref": "#/definitions/forms.CreatePollMirroringPolicy" + } + } + ], + "responses": { + "201": { + "description": "success", + "schema": { + "$ref": "#/definitions/responses.PollMirroringPolicy" + } + }, + "400": { + "description": "REMOTE_REGISTRY_INVALID_PERMISSIONS: Remote user not authorized to access the requested resource." + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "403": { + "description": "NOT_AUTHORIZED: The client is not authorized." + }, + "404": { + "description": "INVALID_POLL_MIRRORING_POLICY: The given poll mirroring policy is invalid." 
+ }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + } + } + } + }, + "/api/v0/repositories/{namespace}/{reponame}/pollMirroringPolicies/{pollmirroringpolicyid}": { + "get": { + "description": "\n*Authorization:* Client must be authenticated as a user who has visibility to the local repository.\n\t\t", + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "repositories" + ], + "summary": "Retrieve a specific poll mirroring policy for a repository", + "operationId": "GetRepoPollMirroringPolicy", + "parameters": [ + { + "type": "string", + "description": "namespace/owner of repository", + "name": "namespace", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "name of repository", + "name": "reponame", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "poll mirroring policy id", + "name": "pollmirroringpolicyid", + "in": "path", + "required": true + } + ], + "responses": { + "200": { + "description": "OK", + "schema": { + "$ref": "#/definitions/responses.PollMirroringPolicy" + } + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "403": { + "description": "NOT_AUTHORIZED: The client is not authorized." + }, + "404": { + "description": "NO_SUCH_POLL_MIRRORING_POLICY: A poll mirroring policy with the given id does not exist." 
+ }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + }, + "default": { + "description": "OK", + "schema": { + "$ref": "#/definitions/responses.PollMirroringPolicy" + } + } + } + }, + "put": { + "description": "\n*Authorization:* Client must be authenticated as a user who has \"admin\" level access to the local repository and credentials to the remote repository.\n\t\t", + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "repositories" + ], + "summary": "Updates a specific poll mirroring policy for a repository", + "operationId": "UpdateRepoPollMirroringPolicy", + "parameters": [ + { + "type": "string", + "description": "namespace/owner of repository", + "name": "namespace", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "name of repository", + "name": "reponame", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "poll mirroring policy id", + "name": "pollmirroringpolicyid", + "in": "path", + "required": true + }, + { + "name": "body", + "in": "body", + "required": true, + "schema": { + "$ref": "#/definitions/forms.UpdatePollMirroringPolicy" + } + } + ], + "responses": { + "200": { + "description": "OK", + "schema": { + "$ref": "#/definitions/responses.PollMirroringPolicy" + } + }, + "400": { + "description": "REMOTE_REGISTRY_INVALID_PERMISSIONS: Remote user not authorized to access the requested resource." + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "403": { + "description": "NOT_AUTHORIZED: The client is not authorized." + }, + "404": { + "description": "INVALID_POLL_MIRRORING_POLICY: The given poll mirroring policy is invalid." 
+ }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + }, + "default": { + "description": "OK", + "schema": { + "$ref": "#/definitions/responses.PollMirroringPolicy" + } + } + } + }, + "delete": { + "description": "\n*Authorization:* Client must be authenticated as a user who has \"admin\" level access to the local repository.\n\t\t", + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "repositories" + ], + "summary": "Deletes a specific poll mirroring policy for a repository", + "operationId": "DeleteRepoPollMirroringPolicy", + "parameters": [ + { + "type": "string", + "description": "namespace/owner of repository", + "name": "namespace", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "name of repository", + "name": "reponame", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "poll mirroring policy id", + "name": "pollmirroringpolicyid", + "in": "path", + "required": true + } + ], + "responses": { + "204": { + "description": "success or poll mirroring policy does not exist" + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "403": { + "description": "NOT_AUTHORIZED: The client is not authorized." + }, + "404": { + "description": "NO_SUCH_POLL_MIRRORING_POLICY: A poll mirroring policy with the given id does not exist." 
+ }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + } + } + } + }, + "/api/v0/repositories/{namespace}/{reponame}/promotionPolicies": { + "get": { + "description": "\n*Authorization:* Client must be authenticated as a user who has visibility to the source or target repository.\n\t\t", + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "repositories" + ], + "summary": "List the promotion policies for a repository", + "operationId": "ListRepoPromotionPolicies", + "parameters": [ + { + "type": "string", + "description": "namespace/owner of repository", + "name": "namespace", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "name of repository", + "name": "reponame", + "in": "path", + "required": true + }, + { + "type": "boolean", + "default": true, + "description": "Whether to list promotion policies for a repository as a source or destination.", + "name": "source", + "in": "query" + } + ], + "responses": { + "200": { + "description": "OK", + "schema": { + "type": "array", + "items": { + "$ref": "#/definitions/responses.PromotionPolicy" + } + } + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "403": { + "description": "NOT_AUTHORIZED: The client is not authorized." + }, + "404": { + "description": "NO_SUCH_REPOSITORY: A repository with the given name does not exist." 
+ }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + }, + "default": { + "description": "OK", + "schema": { + "type": "array", + "items": { + "$ref": "#/definitions/responses.PromotionPolicy" + } + } + } + } + }, + "post": { + "description": "*Authorization:* Client must be authenticated as a user who has \"admin\" level access to the source and target repository.\nRules for the policy can be on the following fields and their respective operators:\n- \"tag\"\n\t- \"eq\": equals\n\t- \"sw\": starts with\n\t- \"ew\": ends with\n\t- \"c\": contains\n\t- \"oo\": one of\n\t- \"noo\": not one of\n- \"license.name\"\n\t- \"oo\": one of\n\t- \"noo\": not one of\n- \"component.name\"\n\t- \"oo\": one of\n\t- \"noo\": not one of\n- \"vulnerability_all\"\n\t- \"gte\": greater than or equals\n\t- \"gt\": greater than\n\t- \"eq\": equals\n\t- \"neq\": not equals\n\t- \"lte\": less than or equals\n\t- \"lt\": less than\n- \"vulnerability_critical\"\n\t- \"gte\": greater than or equals\n\t- \"gt\": greater than\n\t- \"eq\": equals\n\t- \"neq\": not equals\n\t- \"lte\": less than or equals\n\t- \"lt\": less than\n- \"vulnerability_major\"\n\t- \"gte\": greater than or equals\n\t- \"gt\": greater than\n\t- \"eq\": equals\n\t- \"neq\": not equals\n\t- \"lte\": less than or equals\n\t- \"lt\": less than\n- \"vulnerability_minor\"\n\t- \"gte\": greater than or equals\n\t- \"gt\": greater than\n\t- \"eq\": equals\n\t- \"neq\": not equals\n\t- \"lte\": less than or equals\n\t- \"lt\": less than\n- \"updated_at\"\n\t- \"bf\": before\n\t- \"af\": after\n- \"updated_at_duration\"\n\t- \"bf\": before\n\t- \"af\": after\n\nThe tag template is used to rename the tag in the target repository. The\nfollowing symbols are allowed:\n- \"%n\": The tag to promote (e.g. 1, 4.5, latest)\n- \"%A\": Day of the week (e.g. 
Sunday, Monday)\n- \"%a\": Day of the week, abbreviated (e.g. Sun, Mon, Tue)\n- \"%w\": Day of the week, as a number (e.g. 0, 1, 6)\n- \"%d\": Number for the day of the month (e.g. 01, 15, 31)\n- \"%B\": Month (e.g. January, December)\n- \"%b\": Month, abbreviated (e.g. Jan, Jun, Dec)\n- \"%m\": Month, as a number (e.g. 01, 06, 12)\n- \"%Y\": Year (e.g. 1999, 2015, 2048)\n- \"%y\": Year, two digits (e.g. 99, 15, 48)\n- \"%H\": Hour, in 24 hour format (e.g. 00, 12, 23)\n- \"%I\": Hour, in 12 hour format (e.g. 01, 10, 10)\n- \"%p\": Period of the day (e.g. AM, PM)\n- \"%M\": Minute (e.g. 00, 10, 59)\n- \"%S\": Second (e.g. 00, 10, 59)\n- \"%f\": Microsecond (e.g. 000000, 999999)\n- \"%Z\": Name for the timezone (e.g. UTC, PST, EST)\n- \"%j\": Day of the year (e.g. 001, 200, 366)\n- \"%W\": Week of the year (e.g. 00, 10, 53)\n", + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "repositories" + ], + "summary": "Create a promotion policy for a repository", + "operationId": "CreateRepoPromotionPolicy", + "parameters": [ + { + "type": "string", + "description": "namespace/owner of repository", + "name": "namespace", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "name of repository", + "name": "reponame", + "in": "path", + "required": true + }, + { + "type": "boolean", + "default": true, + "description": "Whether to evaluate the policy on creation", + "name": "initialEvaluation", + "in": "query" + }, + { + "name": "body", + "in": "body", + "required": true, + "schema": { + "$ref": "#/definitions/forms.CreatePromotionPolicy" + } + } + ], + "responses": { + "201": { + "description": "success", + "schema": { + "$ref": "#/definitions/responses.PromotionPolicy" + } + }, + "400": { + "description": "INVALID_PROMOTION_POLICY: The given promotion policy is invalid." + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." 
+ }, + "403": { + "description": "NOT_AUTHORIZED: The client is not authorized." + }, + "404": { + "description": "NO_SUCH_REPOSITORY: A repository with the given name does not exist." + }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + } + } + } + }, + "/api/v0/repositories/{namespace}/{reponame}/promotionPolicies/{promotionpolicyid}": { + "get": { + "description": "\n*Authorization:* Client must be authenticated as a user who has visibility to the source or target repository.\n\t\t", + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "repositories" + ], + "summary": "Retrieve a specific promotion policy for a repository", + "operationId": "GetRepoPromotionPolicy", + "parameters": [ + { + "type": "string", + "description": "namespace/owner of repository", + "name": "namespace", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "name of repository", + "name": "reponame", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "promotion policy id", + "name": "promotionpolicyid", + "in": "path", + "required": true + } + ], + "responses": { + "200": { + "description": "OK", + "schema": { + "$ref": "#/definitions/responses.PromotionPolicy" + } + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "403": { + "description": "NOT_AUTHORIZED: The client is not authorized." + }, + "404": { + "description": "NO_SUCH_PROMOTION_POLICY: A promotion policy with the given id does not exist." 
+ }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + }, + "default": { + "description": "OK", + "schema": { + "$ref": "#/definitions/responses.PromotionPolicy" + } + } + } + }, + "put": { + "description": "\n*Authorization:* Client must be authenticated as a user who has \"admin\" level access to the source and target repository.\n\t\t", + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "repositories" + ], + "summary": "Updates a specific promotion policy for a repository", + "operationId": "UpdateRepoPromotionPolicy", + "parameters": [ + { + "type": "string", + "description": "namespace/owner of repository", + "name": "namespace", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "name of repository", + "name": "reponame", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "promotion policy id", + "name": "promotionpolicyid", + "in": "path", + "required": true + }, + { + "type": "boolean", + "default": true, + "description": "Whether to evaluate the policy on creation", + "name": "initialEvaluation", + "in": "query" + }, + { + "name": "body", + "in": "body", + "required": true, + "schema": { + "$ref": "#/definitions/forms.UpdatePromotionPolicy" + } + } + ], + "responses": { + "200": { + "description": "OK", + "schema": { + "$ref": "#/definitions/responses.PromotionPolicy" + } + }, + "400": { + "description": "INVALID_PROMOTION_POLICY: The given promotion policy is invalid." + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "403": { + "description": "NOT_AUTHORIZED: The client is not authorized." + }, + "404": { + "description": "NO_SUCH_PROMOTION_POLICY: A promotion policy with the given id does not exist." 
+ }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + }, + "default": { + "description": "OK", + "schema": { + "$ref": "#/definitions/responses.PromotionPolicy" + } + } + } + }, + "delete": { + "description": "\n*Authorization:* Client must be authenticated as a user who has \"admin\" level access to the source or target repository.\n\t\t", + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "repositories" + ], + "summary": "Deletes a specific promotion policy for a repository", + "operationId": "DeleteRepoPromotionPolicy", + "parameters": [ + { + "type": "string", + "description": "namespace/owner of repository", + "name": "namespace", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "name of repository", + "name": "reponame", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "promotion policy id", + "name": "promotionpolicyid", + "in": "path", + "required": true + } + ], + "responses": { + "204": { + "description": "success or promotion policy does not exist" + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "403": { + "description": "NOT_AUTHORIZED: The client is not authorized." + }, + "404": { + "description": "NO_SUCH_PROMOTION_POLICY: A promotion policy with the given id does not exist." 
+ }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + } + } + } + }, + "/api/v0/repositories/{namespace}/{reponame}/pruningPolicies": { + "get": { + "description": "*Authorization:* Client must be authenticated as a user who has visibility to the repository.", + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "repositories" + ], + "summary": "List the pruning policies for a repository", + "operationId": "ListRepoPruningPolicies", + "parameters": [ + { + "type": "string", + "description": "namespace/owner of repository", + "name": "namespace", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "name of repository", + "name": "reponame", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "The ID of the first record on the page", + "name": "pageStart", + "in": "query" + }, + { + "type": "integer", + "default": 10, + "description": "Maximum number of results to return", + "name": "pageSize", + "in": "query" + }, + { + "type": "boolean", + "default": false, + "description": "Whether to include the resource count in the response header", + "name": "count", + "in": "query" + } + ], + "responses": { + "200": { + "description": "OK", + "schema": { + "type": "array", + "items": { + "$ref": "#/definitions/responses.PruningPolicy" + } + } + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "403": { + "description": "NOT_AUTHORIZED: The client is not authorized." + }, + "404": { + "description": "NO_SUCH_REPOSITORY: A repository with the given name does not exist." 
+ }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + }, + "default": { + "description": "OK", + "schema": { + "type": "array", + "items": { + "$ref": "#/definitions/responses.PruningPolicy" + } + } + } + } + }, + "post": { + "description": "*Authorization:* Client must be authenticated as a user who has \"admin\" level access to the repository.\nRules for the policy can be on the following fields and their respective operators:\n- \"tag\"\n\t- \"eq\": equals\n\t- \"sw\": starts with\n\t- \"ew\": ends with\n\t- \"c\": contains\n\t- \"oo\": one of\n\t- \"noo\": not one of\n- \"license.name\"\n\t- \"oo\": one of\n\t- \"noo\": not one of\n- \"component.name\"\n\t- \"oo\": one of\n\t- \"noo\": not one of\n- \"vulnerability_all\"\n\t- \"gte\": greater than or equals\n\t- \"gt\": greater than\n\t- \"eq\": equals\n\t- \"neq\": not equals\n\t- \"lte\": less than or equals\n\t- \"lt\": less than\n- \"vulnerability_critical\"\n\t- \"gte\": greater than or equals\n\t- \"gt\": greater than\n\t- \"eq\": equals\n\t- \"neq\": not equals\n\t- \"lte\": less than or equals\n\t- \"lt\": less than\n- \"vulnerability_major\"\n\t- \"gte\": greater than or equals\n\t- \"gt\": greater than\n\t- \"eq\": equals\n\t- \"neq\": not equals\n\t- \"lte\": less than or equals\n\t- \"lt\": less than\n- \"vulnerability_minor\"\n\t- \"gte\": greater than or equals\n\t- \"gt\": greater than\n\t- \"eq\": equals\n\t- \"neq\": not equals\n\t- \"lte\": less than or equals\n\t- \"lt\": less than\n- \"updated_at\"\n\t- \"bf\": before\n\t- \"af\": after\n- \"updated_at_duration\"\n\t- \"bf\": before\n\t- \"af\": after\n", + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "repositories" + ], + "summary": "Create a pruning policy for a repository", + "operationId": "CreateRepoPruningPolicy", + 
"parameters": [ + { + "type": "string", + "description": "namespace/owner of repository", + "name": "namespace", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "name of repository", + "name": "reponame", + "in": "path", + "required": true + }, + { + "type": "boolean", + "default": true, + "description": "Whether to evaluate the policy on creation", + "name": "initialEvaluation", + "in": "query" + }, + { + "name": "body", + "in": "body", + "required": true, + "schema": { + "$ref": "#/definitions/forms.CreatePruningPolicy" + } + } + ], + "responses": { + "201": { + "description": "success", + "schema": { + "$ref": "#/definitions/responses.PruningPolicy" + } + }, + "400": { + "description": "REMOTE_REGISTRY_INVALID_PERMISSIONS: Remote user not authorized to access the requested resource." + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "403": { + "description": "NOT_AUTHORIZED: The client is not authorized." + }, + "404": { + "description": "INVALID_PRUNING_POLICY: The given pruning policy is invalid." 
+ }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + } + } + } + }, + "/api/v0/repositories/{namespace}/{reponame}/pruningPolicies/test": { + "post": { + "description": "*Authorization:* Client must be authenticated as a user who has \"admin\" level access to the repository.", + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "repositories" + ], + "summary": "Test a pruning policy for a repository", + "operationId": "TestRepoPruningPolicy", + "parameters": [ + { + "type": "string", + "description": "namespace/owner of repository", + "name": "namespace", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "name of repository", + "name": "reponame", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "The ID of the first record on the page", + "name": "pageStart", + "in": "query" + }, + { + "type": "integer", + "default": 10, + "description": "Maximum number of results to return", + "name": "pageSize", + "in": "query" + }, + { + "type": "boolean", + "default": false, + "description": "Whether to include the resource count in the response header", + "name": "count", + "in": "query" + }, + { + "name": "body", + "in": "body", + "required": true, + "schema": { + "$ref": "#/definitions/forms.CreatePruningPolicy" + } + } + ], + "responses": { + "200": { + "description": "OK", + "schema": { + "type": "array", + "items": { + "$ref": "#/definitions/responses.ThinTag" + } + } + }, + "400": { + "description": "REMOTE_REGISTRY_INVALID_PERMISSIONS: Remote user not authorized to access the requested resource." + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "403": { + "description": "NOT_AUTHORIZED: The client is not authorized." 
+ }, + "404": { + "description": "INVALID_PRUNING_POLICY: The given pruning policy is invalid." + }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + }, + "default": { + "description": "OK", + "schema": { + "type": "array", + "items": { + "$ref": "#/definitions/responses.ThinTag" + } + } + } + } + } + }, + "/api/v0/repositories/{namespace}/{reponame}/pruningPolicies/{pruningpolicyid}": { + "get": { + "description": "\n*Authorization:* Client must be authenticated as a user who has visibility to the local repository.\n\t\t", + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "repositories" + ], + "summary": "Retrieve a specific pruning policy for a repository", + "operationId": "GetRepoPruningPolicy", + "parameters": [ + { + "type": "string", + "description": "namespace/owner of repository", + "name": "namespace", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "name of repository", + "name": "reponame", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "pruning policy id", + "name": "pruningpolicyid", + "in": "path", + "required": true + } + ], + "responses": { + "200": { + "description": "OK", + "schema": { + "$ref": "#/definitions/responses.PruningPolicy" + } + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "403": { + "description": "NOT_AUTHORIZED: The client is not authorized." + }, + "404": { + "description": "NO_SUCH_PRUNING_POLICY: A pruning policy with the given id does not exist." 
+ }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + }, + "default": { + "description": "OK", + "schema": { + "$ref": "#/definitions/responses.PruningPolicy" + } + } + } + }, + "put": { + "description": "\n*Authorization:* Client must be authenticated as a user who has \"admin\" level access to the repository.\n\t\t", + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "repositories" + ], + "summary": "Updates a specific pruning policy for a repository", + "operationId": "UpdateRepoPruningPolicy", + "parameters": [ + { + "type": "string", + "description": "namespace/owner of repository", + "name": "namespace", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "name of repository", + "name": "reponame", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "pruning policy id", + "name": "pruningpolicyid", + "in": "path", + "required": true + }, + { + "type": "boolean", + "default": true, + "description": "Whether to evaluate the policy on creation", + "name": "initialEvaluation", + "in": "query" + }, + { + "name": "body", + "in": "body", + "required": true, + "schema": { + "$ref": "#/definitions/forms.UpdatePruningPolicy" + } + } + ], + "responses": { + "200": { + "description": "OK", + "schema": { + "$ref": "#/definitions/responses.PruningPolicy" + } + }, + "400": { + "description": "REMOTE_REGISTRY_INVALID_PERMISSIONS: Remote user not authorized to access the requested resource." + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "403": { + "description": "NOT_AUTHORIZED: The client is not authorized." + }, + "404": { + "description": "INVALID_PRUNING_POLICY: The given pruning policy is invalid." 
+ }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + }, + "default": { + "description": "OK", + "schema": { + "$ref": "#/definitions/responses.PruningPolicy" + } + } + } + }, + "delete": { + "description": "\n*Authorization:* Client must be authenticated as a user who has \"admin\" level access to the repository.\n\t\t", + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "repositories" + ], + "summary": "Deletes a specific pruning policy for a repository", + "operationId": "DeleteRepoPruningPolicy", + "parameters": [ + { + "type": "string", + "description": "namespace/owner of repository", + "name": "namespace", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "name of repository", + "name": "reponame", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "pruning policy id", + "name": "pruningpolicyid", + "in": "path", + "required": true + } + ], + "responses": { + "204": { + "description": "success or pruning policy does not exist" + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "403": { + "description": "NOT_AUTHORIZED: The client is not authorized." + }, + "404": { + "description": "NO_SUCH_PRUNING_POLICY: A pruning policy with the given id does not exist." 
+ }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + } + } + } + }, + "/api/v0/repositories/{namespace}/{reponame}/pushMirroringPolicies": { + "get": { + "description": "\n*Authorization:* Client must be authenticated as a user who has visibility to the local repository.\n\t\t", + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "repositories" + ], + "summary": "List the push mirroring policies for a repository", + "operationId": "ListRepoPushMirroringPolicies", + "parameters": [ + { + "type": "string", + "description": "namespace/owner of repository", + "name": "namespace", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "name of repository", + "name": "reponame", + "in": "path", + "required": true + } + ], + "responses": { + "200": { + "description": "OK", + "schema": { + "type": "array", + "items": { + "$ref": "#/definitions/responses.PushMirroringPolicy" + } + } + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "403": { + "description": "NOT_AUTHORIZED: The client is not authorized." + }, + "404": { + "description": "NO_SUCH_REPOSITORY: A repository with the given name does not exist."
+ }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + }, + "default": { + "description": "OK", + "schema": { + "type": "array", + "items": { + "$ref": "#/definitions/responses.PushMirroringPolicy" + } + } + } + } + }, + "post": { + "description": "*Authorization:* Client must be authenticated as a user who has \"admin\" level access to the local repository and credentials to the remote repository.\nRules for the policy can be on the following fields and their respective operators:\n- \"tag\"\n\t- \"eq\": equals\n\t- \"sw\": starts with\n\t- \"ew\": ends with\n\t- \"c\": contains\n\t- \"oo\": one of\n\t- \"noo\": not one of\n- \"license.name\"\n\t- \"oo\": one of\n\t- \"noo\": not one of\n- \"component.name\"\n\t- \"oo\": one of\n\t- \"noo\": not one of\n- \"vulnerability_all\"\n\t- \"gte\": greater than or equals\n\t- \"gt\": greater than\n\t- \"eq\": equals\n\t- \"neq\": not equals\n\t- \"lte\": less than or equals\n\t- \"lt\": less than\n- \"vulnerability_critical\"\n\t- \"gte\": greater than or equals\n\t- \"gt\": greater than\n\t- \"eq\": equals\n\t- \"neq\": not equals\n\t- \"lte\": less than or equals\n\t- \"lt\": less than\n- \"vulnerability_major\"\n\t- \"gte\": greater than or equals\n\t- \"gt\": greater than\n\t- \"eq\": equals\n\t- \"neq\": not equals\n\t- \"lte\": less than or equals\n\t- \"lt\": less than\n- \"vulnerability_minor\"\n\t- \"gte\": greater than or equals\n\t- \"gt\": greater than\n\t- \"eq\": equals\n\t- \"neq\": not equals\n\t- \"lte\": less than or equals\n\t- \"lt\": less than\n- \"updated_at\"\n\t- \"bf\": before\n\t- \"af\": after\n- \"updated_at_duration\"\n\t- \"bf\": before\n\t- \"af\": after\n\nThe tag template is used to rename the tag in the target repository. The\nfollowing symbols are allowed:\n- \"%n\": The tag to promote (e.g. 
1, 4.5, latest)\n- \"%A\": Day of the week (e.g. Sunday, Monday)\n- \"%a\": Day of the week, abbreviated (e.g. Sun, Mon, Tue)\n- \"%w\": Day of the week, as a number (e.g. 0, 1, 6)\n- \"%d\": Number for the day of the month (e.g. 01, 15, 31)\n- \"%B\": Month (e.g. January, December)\n- \"%b\": Month, abbreviated (e.g. Jan, Jun, Dec)\n- \"%m\": Month, as a number (e.g. 01, 06, 12)\n- \"%Y\": Year (e.g. 1999, 2015, 2048)\n- \"%y\": Year, two digits (e.g. 99, 15, 48)\n- \"%H\": Hour, in 24 hour format (e.g. 00, 12, 23)\n- \"%I\": Hour, in 12 hour format (e.g. 01, 10, 10)\n- \"%p\": Period of the day (e.g. AM, PM)\n- \"%M\": Minute (e.g. 00, 10, 59)\n- \"%S\": Second (e.g. 00, 10, 59)\n- \"%f\": Microsecond (e.g. 000000, 999999)\n- \"%Z\": Name for the timezone (e.g. UTC, PST, EST)\n- \"%j\": Day of the year (e.g. 001, 200, 366)\n- \"%W\": Week of the year (e.g. 00, 10, 53)\n", + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "repositories" + ], + "summary": "Create a push mirroring policy for a repository", + "operationId": "CreateRepoPushMirroringPolicy", + "parameters": [ + { + "type": "string", + "description": "namespace/owner of repository", + "name": "namespace", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "name of repository", + "name": "reponame", + "in": "path", + "required": true + }, + { + "type": "boolean", + "default": true, + "description": "Whether to evaluate the policy on creation", + "name": "initialEvaluation", + "in": "query" + }, + { + "name": "body", + "in": "body", + "required": true, + "schema": { + "$ref": "#/definitions/forms.CreatePushMirroringPolicy" + } + } + ], + "responses": { + "201": { + "description": "success", + "schema": { + "$ref": "#/definitions/responses.PushMirroringPolicy" + } + }, + "400": { + "description": "REMOTE_REGISTRY_INVALID_PERMISSIONS: Remote user not authorized to access the requested resource."
+ }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "403": { + "description": "NOT_AUTHORIZED: The client is not authorized." + }, + "404": { + "description": "INVALID_PUSH_MIRRORING_POLICY: The given push mirroring policy is invalid." + }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + } + } + } + }, + "/api/v0/repositories/{namespace}/{reponame}/pushMirroringPolicies/{pushmirroringpolicyid}": { + "get": { + "description": "\n*Authorization:* Client must be authenticated as a user who has visibility to the local repository.\n\t\t", + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "repositories" + ], + "summary": "Retrieve a specific push mirroring policy for a repository", + "operationId": "GetRepoPushMirroringPolicy", + "parameters": [ + { + "type": "string", + "description": "namespace/owner of repository", + "name": "namespace", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "name of repository", + "name": "reponame", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "push mirroring policy id", + "name": "pushmirroringpolicyid", + "in": "path", + "required": true + } + ], + "responses": { + "200": { + "description": "OK", + "schema": { + "$ref": "#/definitions/responses.PushMirroringPolicy" + } + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "403": { + "description": "NOT_AUTHORIZED: The client is not authorized." + }, + "404": { + "description": "NO_SUCH_PUSH_MIRRORING_POLICY: A push mirroring policy with the given id does not exist." 
+ }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + }, + "default": { + "description": "OK", + "schema": { + "$ref": "#/definitions/responses.PushMirroringPolicy" + } + } + } + }, + "put": { + "description": "\n*Authorization:* Client must be authenticated as a user who has \"admin\" level access to the local repository and credentials to the remote repository.\n\t\t", + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "repositories" + ], + "summary": "Updates a specific push mirroring policy for a repository", + "operationId": "UpdateRepoPushMirroringPolicy", + "parameters": [ + { + "type": "string", + "description": "namespace/owner of repository", + "name": "namespace", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "name of repository", + "name": "reponame", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "push mirroring policy id", + "name": "pushmirroringpolicyid", + "in": "path", + "required": true + }, + { + "type": "boolean", + "default": true, + "description": "Whether to evaluate the policy on creation", + "name": "initialEvaluation", + "in": "query" + }, + { + "name": "body", + "in": "body", + "required": true, + "schema": { + "$ref": "#/definitions/forms.UpdatePushMirroringPolicy" + } + } + ], + "responses": { + "200": { + "description": "OK", + "schema": { + "$ref": "#/definitions/responses.PushMirroringPolicy" + } + }, + "400": { + "description": "REMOTE_REGISTRY_INVALID_PERMISSIONS: Remote user not authorized to access the requested resource." + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "403": { + "description": "NOT_AUTHORIZED: The client is not authorized." 
+ }, + "404": { + "description": "INVALID_PUSH_MIRRORING_POLICY: The given push mirroring policy is invalid." + }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + }, + "default": { + "description": "OK", + "schema": { + "$ref": "#/definitions/responses.PushMirroringPolicy" + } + } + } + }, + "delete": { + "description": "\n*Authorization:* Client must be authenticated as a user who has \"admin\" level access to the local repository.\n\t\t", + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "repositories" + ], + "summary": "Deletes a specific push mirroring policy for a repository", + "operationId": "DeleteRepoPushMirroringPolicy", + "parameters": [ + { + "type": "string", + "description": "namespace/owner of repository", + "name": "namespace", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "name of repository", + "name": "reponame", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "push mirroring policy id", + "name": "pushmirroringpolicyid", + "in": "path", + "required": true + } + ], + "responses": { + "204": { + "description": "success or push mirroring policy does not exist" + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "403": { + "description": "NOT_AUTHORIZED: The client is not authorized." + }, + "404": { + "description": "NO_SUCH_PUSH_MIRRORING_POLICY: A push mirroring policy with the given id does not exist." 
+ }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + } + } + } + }, + "/api/v0/repositories/{namespace}/{reponame}/tags": { + "get": { + "description": "\n*Authorization:* Client must be authenticated as a user who has visibility to the repository.\n\t\t", + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "repositories" + ], + "summary": "List the available tags for a repository", + "operationId": "ListRepoTags", + "parameters": [ + { + "type": "string", + "description": "namespace/owner of repository", + "name": "namespace", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "name of repository", + "name": "reponame", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "The ID of the first record on the page", + "name": "pageStart", + "in": "query" + }, + { + "type": "integer", + "default": 10, + "description": "Maximum number of results to return", + "name": "pageSize", + "in": "query" + }, + { + "type": "boolean", + "default": false, + "description": "Whether to include the resource count in the response header", + "name": "count", + "in": "query" + }, + { + "type": "boolean", + "default": false, + "description": "Whether to include the manifest for each tag", + "name": "includeManifests", + "in": "query" + }, + { + "type": "string", + "description": "The domain used to push tags to DTR. Must be set to obtain/manipulate Notary related information", + "name": "domain", + "in": "query" + } + ], + "responses": { + "200": { + "description": "OK", + "schema": { + "type": "array", + "items": { + "$ref": "#/definitions/responses.ListTag" + } + } + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." 
+ }, + "403": { + "description": "NOT_AUTHORIZED: The client is not authorized." + }, + "404": { + "description": "NO_SUCH_REPOSITORY: A repository with the given name does not exist." + }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + }, + "default": { + "description": "OK", + "schema": { + "type": "array", + "items": { + "$ref": "#/definitions/responses.ListTag" + } + } + } + } + } + }, + "/api/v0/repositories/{namespace}/{reponame}/tags/{reference}": { + "get": { + "description": "\n*Authorization:* Client must be authenticated as a user who has visibility to the repository.\nIf the ref given is to a manifest list, multiple Tag objects will be returned, one for each manifest in the list.\nSimilarly if the ref is a digest the API will return all tags referencing that digest.\n\t\t", + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "repositories" + ], + "summary": "Retrieve a specific tag for a repository", + "operationId": "ListRepoTag", + "parameters": [ + { + "type": "string", + "description": "namespace/owner of repository", + "name": "namespace", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "name of repository", + "name": "reponame", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "digest or tag for an image manifest", + "name": "reference", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "The domain used to push tags to DTR. 
Must be set to obtain/manipulate Notary related information", + "name": "domain", + "in": "query" + }, + { + "type": "string", + "description": "Operating system of the tag", + "name": "os", + "in": "query" + }, + { + "type": "string", + "description": "Architecture of the tag", + "name": "arch", + "in": "query" + } + ], + "responses": { + "200": { + "description": "OK", + "schema": { + "type": "array", + "items": { + "$ref": "#/definitions/responses.Tag" + } + } + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "403": { + "description": "NOT_AUTHORIZED: The client is not authorized." + }, + "404": { + "description": "NO_SUCH_REF: A ref with the given name does not exist for the given repository." + }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + }, + "default": { + "description": "OK", + "schema": { + "type": "array", + "items": { + "$ref": "#/definitions/responses.Tag" + } + } + } + } + } + }, + "/api/v0/repositories/{namespace}/{reponame}/tags/{tag}": { + "delete": { + "description": "\n*Authorization:* Client must be authenticated as a user who has \"write\" level access to the repository.\n\t\t", + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "repositories" + ], + "summary": "Delete a tag for a repository", + "operationId": "DeleteRepoTag", + "parameters": [ + { + "type": "string", + "description": "namespace/owner of repository", + "name": "namespace", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "name of repository", + "name": "reponame", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "tag name", + "name": "tag", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "The domain used to push tags to DTR. 
Must be set to obtain/manipulate Notary related information", + "name": "domain", + "in": "query" + } + ], + "responses": { + "204": { + "description": "success" + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "403": { + "description": "NOT_AUTHORIZED: The client is not authorized." + }, + "404": { + "description": "NO_SUCH_TAG: A tag with the given name does not exist for the given repository." + }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "409": { + "description": "TAG_IN_NOTARY: This tag is in notary and can't be deleted until it is removed from notary" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + } + } + } + }, + "/api/v0/repositories/{namespace}/{reponame}/tags/{tag}/promotion": { + "post": { + "description": "\n*Authorization:* Client must be authenticated as a user who has \"read\" level access to the source repository and \"write\" level access to the target repository.\n\t\t", + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "repositories" + ], + "summary": "Promotes a specific tag for a repository", + "operationId": "CreateRepoTagPromotion", + "parameters": [ + { + "type": "string", + "description": "namespace/owner of repository", + "name": "namespace", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "name of repository", + "name": "reponame", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "tag name", + "name": "tag", + "in": "path", + "required": true + }, + { + "name": "body", + "in": "body", + "required": true, + "schema": { + "$ref": "#/definitions/forms.CreatePromotion" + } + } + ], + "responses": { + "201": { + "description": "success", + "schema": { + "$ref": "#/definitions/responses.Promotion" + } + }, + "400": { + "description": "INVALID_TAG_NAME: The 
given tag name is either too long or contains illegal characters." + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "403": { + "description": "NOT_AUTHORIZED: The client is not authorized." + }, + "404": { + "description": "NO_SUCH_TAG: A tag with the given name does not exist for the given repository." + }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + } + } + } + }, + "/api/v0/repositories/{namespace}/{reponame}/tags/{tag}/pushMirroring": { + "post": { + "description": "\n*Authorization:* Client must be authenticated as a user who has \"read\" level access to the local repository and \"write\" level access to the remote repository.\n\t\t", + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "repositories" + ], + "summary": "Mirrors a local tag by pushing to a remote repository", + "operationId": "CreateRepoTagPushMirroring", + "parameters": [ + { + "type": "string", + "description": "namespace/owner of repository", + "name": "namespace", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "name of repository", + "name": "reponame", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "tag name", + "name": "tag", + "in": "path", + "required": true + }, + { + "name": "body", + "in": "body", + "required": true, + "schema": { + "$ref": "#/definitions/forms.CreateMirroring" + } + } + ], + "responses": { + "201": { + "description": "success", + "schema": { + "$ref": "#/definitions/responses.Mirroring" + } + }, + "400": { + "description": "INVALID_TAG_NAME: The given tag name is either too long or contains illegal characters." + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." 
+ }, + "403": { + "description": "NOT_AUTHORIZED: The client is not authorized." + }, + "404": { + "description": "NO_SUCH_TAG: A tag with the given name does not exist for the given repository." + }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + } + } + } + }, + "/api/v0/repositories/{namespace}/{reponame}/teamAccess": { + "get": { + "description": "\n*Authorization:* Client must be authenticated as a user who has \"admin\" level access to the repository.\n\t\t", + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "repositories" + ], + "summary": "List teams granted access to an organization-owned repository", + "operationId": "ListRepoTeamAccess", + "parameters": [ + { + "type": "string", + "description": "namespace/owner of repository", + "name": "namespace", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "name of repository", + "name": "reponame", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "The ID of the first record on the page", + "name": "pageStart", + "in": "query" + }, + { + "type": "integer", + "default": 10, + "description": "Maximum number of results to return", + "name": "pageSize", + "in": "query" + }, + { + "type": "boolean", + "default": false, + "description": "Whether to include the resource count in the response header", + "name": "count", + "in": "query" + } + ], + "responses": { + "200": { + "description": "OK", + "schema": { + "$ref": "#/definitions/responses.ListRepoTeamAccess" + } + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "403": { + "description": "NOT_AUTHORIZED: The client is not authorized." + }, + "404": { + "description": "NO_SUCH_REPOSITORY: A repository with the given name does not exist." 
+ }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + }, + "default": { + "description": "OK", + "schema": { + "$ref": "#/definitions/responses.ListRepoTeamAccess" + } + } + } + } + }, + "/api/v0/repositories/{namespace}/{reponame}/teamAccess/{teamname}": { + "put": { + "description": "\n*Authorization:* Client must be authenticated as a user who has \"admin\" level access to the repository.\n\t\t", + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "repositories" + ], + "summary": "Set a team's access to an organization-owned repository", + "operationId": "GrantRepoTeamAccess", + "parameters": [ + { + "type": "string", + "description": "namespace/owner of repository", + "name": "namespace", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "name of repository", + "name": "reponame", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "team name", + "name": "teamname", + "in": "path", + "required": true + }, + { + "name": "body", + "in": "body", + "required": true, + "schema": { + "$ref": "#/definitions/forms.Access" + } + } + ], + "responses": { + "200": { + "description": "OK", + "schema": { + "$ref": "#/definitions/responses.ListRepoTeamAccess" + } + }, + "400": { + "description": "the team does not belong to the organization" + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "403": { + "description": "NOT_AUTHORIZED: The client is not authorized." + }, + "404": { + "description": "NO_SUCH_TEAM: A team with the given name does not exist in the organization."
+ }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + }, + "default": { + "description": "OK", + "schema": { + "$ref": "#/definitions/responses.ListRepoTeamAccess" + } + } + } + }, + "delete": { + "description": "\n*Authorization:* Client must be authenticated as a user who has \"admin\" level access to the repository.\n\t\t", + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "repositories" + ], + "summary": "Revoke a team's access to an organization-owned repository", + "operationId": "RevokeRepoTeamAccess", + "parameters": [ + { + "type": "string", + "description": "namespace/owner of repository", + "name": "namespace", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "name of repository", + "name": "reponame", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "team name", + "name": "teamname", + "in": "path", + "required": true + } + ], + "responses": { + "204": { + "description": "success or the team is not in the access list or there is no such team in the organization" + }, + "400": { + "description": "the repository is not owned by an organization" + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "403": { + "description": "NOT_AUTHORIZED: The client is not authorized." + }, + "404": { + "description": "NO_SUCH_TEAM: A team with the given name does not exist in the organization." 
+ }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + } + } + } + }, + "/api/v0/repositories/{namespace}/{reponame}/webhooks": { + "get": { + "description": "\n*Authorization:* Client must be authenticated as a user who has \"admin\" level access to the repository.\n\t\t", + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "repositories" + ], + "summary": "List the webhook subscriptions for a repository", + "operationId": "ListRepoWebhooks", + "parameters": [ + { + "type": "string", + "description": "namespace/owner of repository", + "name": "namespace", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "name of repository", + "name": "reponame", + "in": "path", + "required": true + } + ], + "responses": { + "200": { + "description": "OK", + "schema": { + "type": "array", + "items": { + "$ref": "#/definitions/responses.Webhook" + } + } + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "403": { + "description": "NOT_AUTHORIZED: The client is not authorized." + }, + "404": { + "description": "NO_SUCH_REPOSITORY: A repository with the given name does not exist." 
+ }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + }, + "default": { + "description": "OK", + "schema": { + "type": "array", + "items": { + "$ref": "#/definitions/responses.Webhook" + } + } + } + } + } + }, + "/api/v0/repositoryNamespaces/{namespace}/teamAccess": { + "get": { + "description": "\n*Authorization:* Client must be authenticated as an admin or a member of the organization.\n\t\t", + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "repositoryNamespaces" + ], + "summary": "List teams granted access to an organization-owned namespace of repositories", + "operationId": "ListRepoNamespaceTeamAccess", + "parameters": [ + { + "type": "string", + "description": "namespace/owner of repository", + "name": "namespace", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "The ID of the first record on the page", + "name": "pageStart", + "in": "query" + }, + { + "type": "integer", + "default": 10, + "description": "Maximum number of results to return", + "name": "pageSize", + "in": "query" + }, + { + "type": "boolean", + "default": false, + "description": "Whether to include the resource count in the response header", + "name": "count", + "in": "query" + } + ], + "responses": { + "200": { + "description": "OK", + "schema": { + "$ref": "#/definitions/responses.ListRepoNamespaceTeamAccess" + } + }, + "400": { + "description": "the namespace is not owned by an organization" + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "403": { + "description": "NOT_AUTHORIZED: The client is not authorized." + }, + "404": { + "description": "NO_SUCH_ACCOUNT: An account with the given name does not exist." 
+ }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + }, + "default": { + "description": "OK", + "schema": { + "$ref": "#/definitions/responses.ListRepoNamespaceTeamAccess" + } + } + } + } + }, + "/api/v0/repositoryNamespaces/{namespace}/teamAccess/{teamname}": { + "get": { + "description": "\n*Authorization:* Client must be authenticated as a user who has \"admin\" level\naccess to the namespace or is a member of the team.\n\t\t", + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "repositoryNamespaces" + ], + "summary": "Get a team's granted access to an organization-owned namespace of repositories", + "operationId": "GetRepoNamespaceTeamAccess", + "parameters": [ + { + "type": "string", + "description": "namespace/owner of repository", + "name": "namespace", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "team name", + "name": "teamname", + "in": "path", + "required": true + } + ], + "responses": { + "200": { + "description": "OK", + "schema": { + "$ref": "#/definitions/responses.NamespaceTeamAccess" + } + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "403": { + "description": "NOT_AUTHORIZED: The client is not authorized." + }, + "404": { + "description": "NO_SUCH_NAMESPACE_TEAM_ACCESS: An access grant for the given team in the given namespace does not exist." 
+ }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + }, + "default": { + "description": "OK", + "schema": { + "$ref": "#/definitions/responses.NamespaceTeamAccess" + } + } + } + }, + "put": { + "description": "\n*Authorization:* Client must be authenticated as a user who has \"admin\" level access to the namespace.\n\t\t", + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "repositoryNamespaces" + ], + "summary": "Set a team's access to an organization-owned namespace of repositories", + "operationId": "GrantRepoNamespaceTeamAccess", + "parameters": [ + { + "type": "string", + "description": "namespace/owner of repository", + "name": "namespace", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "team name", + "name": "teamname", + "in": "path", + "required": true + }, + { + "name": "body", + "in": "body", + "required": true, + "schema": { + "$ref": "#/definitions/forms.Access" + } + } + ], + "responses": { + "200": { + "description": "OK", + "schema": { + "$ref": "#/definitions/responses.NamespaceTeamAccess" + } + }, + "400": { + "description": "the team does not belong to the owning organization" + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "403": { + "description": "NOT_AUTHORIZED: The client is not authorized." + }, + "404": { + "description": "NO_SUCH_TEAM: A team with the given name does not exist in the organization." 
+ }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + }, + "default": { + "description": "OK", + "schema": { + "$ref": "#/definitions/responses.NamespaceTeamAccess" + } + } + } + }, + "delete": { + "description": "\n*Authorization:* Client must be authenticated as a user who has \"admin\" level access to the namespace.\n\t\t", + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "repositoryNamespaces" + ], + "summary": "Revoke a team's access to an organization-owned namespace of repositories", + "operationId": "RevokeRepoNamespaceTeamAccess", + "parameters": [ + { + "type": "string", + "description": "namespace/owner of repository", + "name": "namespace", + "in": "path", + "required": true + }, + { + "type": "string", + "description": "team name", + "name": "teamname", + "in": "path", + "required": true + } + ], + "responses": { + "204": { + "description": "success or the team does not exist in the access list or there is no such team in the organization" + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "403": { + "description": "NOT_AUTHORIZED: The client is not authorized." + }, + "404": { + "description": "NO_SUCH_TEAM: A team with the given name does not exist in the organization." 
+ }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + } + } + } + }, + "/api/v0/webhooks": { + "get": { + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "webhooks" + ], + "summary": "List Webhooks", + "operationId": "ListWebhooks", + "parameters": [ + { + "type": "string", + "default": "any", + "description": "The type of webhook to list", + "name": "webhookType", + "in": "query" + } + ], + "responses": { + "200": { + "description": "OK", + "schema": { + "type": "array", + "items": { + "$ref": "#/definitions/responses.Webhook" + } + } + }, + "400": { + "description": "INVALID_JSON: Unable to parse JSON" + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "403": { + "description": "NOT_AUTHORIZED: The client is not authorized." + }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + }, + "default": { + "description": "OK", + "schema": { + "type": "array", + "items": { + "$ref": "#/definitions/responses.Webhook" + } + } + } + } + }, + "post": { + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "webhooks" + ], + "summary": "Create Webhook", + "operationId": "CreateWebhook", + "parameters": [ + { + "name": "body", + "in": "body", + "required": true, + "schema": { + "$ref": "#/definitions/forms.Webhook" + } + } + ], + "responses": { + "200": { + "description": "OK", + "schema": { + "type": "array", + "items": { + "$ref": "#/definitions/responses.Webhook" + } + } + }, + "400": { + "description": "INVALID_JSON: Unable to parse JSON" + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." 
+ }, + "403": { + "description": "NOT_AUTHORIZED: The client is not authorized." + }, + "404": { + "description": "NO_SUCH_ACCOUNT: An account with the given name does not exist." + }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + }, + "default": { + "description": "OK", + "schema": { + "type": "array", + "items": { + "$ref": "#/definitions/responses.Webhook" + } + } + } + } + } + }, + "/api/v0/webhooks/test": { + "post": { + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "webhooks" + ], + "summary": "Test Webhook", + "operationId": "TestWebhook", + "parameters": [ + { + "name": "body", + "in": "body", + "required": true, + "schema": { + "$ref": "#/definitions/forms.WebhookTestPayload" + } + } + ], + "responses": { + "200": { + "description": "OK" + }, + "400": { + "description": "INVALID_JSON: Unable to parse JSON" + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "403": { + "description": "NOT_AUTHORIZED: The client is not authorized." 
+ }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + }, + "default": { + "description": "OK" + } + } + } + }, + "/api/v0/webhooks/update": { + "post": { + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "webhooks" + ], + "summary": "Update Webhook", + "operationId": "UpdateWebhook", + "parameters": [ + { + "name": "body", + "in": "body", + "required": true, + "schema": { + "$ref": "#/definitions/forms.WebhookUpdate" + } + } + ], + "responses": { + "200": { + "description": "OK", + "schema": { + "type": "array", + "items": { + "$ref": "#/definitions/responses.Webhook" + } + } + }, + "400": { + "description": "INVALID_JSON: Unable to parse JSON" + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." + }, + "403": { + "description": "NOT_AUTHORIZED: The client is not authorized." + }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + }, + "default": { + "description": "OK", + "schema": { + "type": "array", + "items": { + "$ref": "#/definitions/responses.Webhook" + } + } + } + } + } + }, + "/api/v0/webhooks/{webhook}": { + "delete": { + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "webhooks" + ], + "summary": "Delete Webhook", + "operationId": "DeleteWebhook", + "parameters": [ + { + "type": "string", + "description": "webhook subscription ID", + "name": "webhook", + "in": "path", + "required": true + } + ], + "responses": { + "200": { + "description": "OK" + }, + "401": { + "description": "NOT_AUTHENTICATED: The client is not authenticated." 
+ }, + "403": { + "description": "NOT_AUTHORIZED: The client is not authorized." + }, + "404": { + "description": "NO_SUCH_WEBHOOK: A webhook subscription with the given name does not exist for the given repository." + }, + "405": { + "description": "NOT_ALLOWED: Method Not Allowed" + }, + "406": { + "description": "NOT_ACCEPTABLE: Not Acceptable" + }, + "415": { + "description": "UNSUPPORTED_MEDIA_TYPE: Unsupported Media Type" + }, + "default": { + "description": "OK" + } + } + } + }, + "/api/v0/workers": { + "get": { + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "workers" + ], + "summary": "List all workers", + "operationId": "ListWorkers", + "responses": { + "200": { + "description": "Success, list of workers returned.", + "schema": { + "$ref": "#/definitions/tmpresponses.Workers" + } + }, + "default": { + "description": "Success, list of workers returned.", + "schema": { + "$ref": "#/definitions/tmpresponses.Workers" + } + } + } + } + }, + "/api/v0/workers/{id}/capacity": { + "post": { + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "workers" + ], + "summary": "Update the capacity for a worker", + "operationId": "UpdateWorkerCapacity", + "parameters": [ + { + "type": "string", + "description": "ID of worker to update", + "name": "id", + "in": "path", + "required": true + }, + { + "name": "body", + "in": "body", + "required": true, + "schema": { + "$ref": "#/definitions/tmpforms.UpdateWorkerCapacity" + } + } + ], + "responses": { + "202": { + "description": "Success." 
+ } + } + } + } + }, + "definitions": { + "forms.Access": { + "required": [ + "accessLevel" + ], + "properties": { + "accessLevel": { + "type": "string", + "enum": [ + "read-only", + "read-write", + "admin" + ] + } + } + }, + "forms.CreateAPIToken": { + "properties": { + "tokenLabel": { + "type": "string" + } + } + }, + "forms.CreateContentCache": { + "required": [ + "name", + "host" + ], + "properties": { + "host": { + "type": "string" + }, + "name": { + "type": "string" + } + } + }, + "forms.CreateMirroring": { + "required": [ + "remoteHost", + "remoteRepository", + "remoteCA", + "skipTLSVerification", + "remoteTag", + "username", + "password", + "authToken" + ], + "properties": { + "authToken": { + "type": "string" + }, + "password": { + "type": "string" + }, + "remoteCA": { + "type": "string" + }, + "remoteHost": { + "type": "string" + }, + "remoteRepository": { + "type": "string" + }, + "remoteTag": { + "type": "string" + }, + "skipTLSVerification": { + "type": "boolean" + }, + "username": { + "type": "string" + } + } + }, + "forms.CreatePollMirroringPolicy": { + "required": [ + "remoteHost", + "remoteRepository", + "remoteCA", + "skipTLSVerification", + "enabled", + "username", + "password", + "authToken" + ], + "properties": { + "authToken": { + "type": "string" + }, + "enabled": { + "type": "boolean" + }, + "password": { + "type": "string" + }, + "remoteCA": { + "type": "string" + }, + "remoteHost": { + "type": "string" + }, + "remoteRepository": { + "type": "string" + }, + "skipTLSVerification": { + "type": "boolean" + }, + "username": { + "type": "string" + } + } + }, + "forms.CreatePromotion": { + "required": [ + "targetRepository", + "targetTag" + ], + "properties": { + "targetRepository": { + "type": "string" + }, + "targetTag": { + "type": "string" + } + } + }, + "forms.CreatePromotionPolicy": { + "required": [ + "rules", + "targetRepository", + "tagTemplate", + "enabled" + ], + "properties": { + "enabled": { + "type": "boolean" + }, + "rules": { + 
"type": "array", + "items": { + "$ref": "#/definitions/ruleengine.Rule" + } + }, + "tagTemplate": { + "type": "string" + }, + "targetRepository": { + "type": "string" + } + } + }, + "forms.CreatePruningPolicy": { + "required": [ + "rules", + "enabled" + ], + "properties": { + "enabled": { + "type": "boolean" + }, + "rules": { + "type": "array", + "items": { + "$ref": "#/definitions/ruleengine.Rule" + } + } + } + }, + "forms.CreatePushMirroringPolicy": { + "required": [ + "rules", + "remoteHost", + "remoteRepository", + "remoteCA", + "skipTLSVerification", + "tagTemplate", + "enabled", + "username", + "password", + "authToken" + ], + "properties": { + "authToken": { + "type": "string" + }, + "enabled": { + "type": "boolean" + }, + "password": { + "type": "string" + }, + "remoteCA": { + "type": "string" + }, + "remoteHost": { + "type": "string" + }, + "remoteRepository": { + "type": "string" + }, + "rules": { + "type": "array", + "items": { + "$ref": "#/definitions/ruleengine.Rule" + } + }, + "skipTLSVerification": { + "type": "boolean" + }, + "tagTemplate": { + "type": "string" + }, + "username": { + "type": "string" + } + } + }, + "forms.CreateRemoteRegistryCheck": { + "required": [ + "remoteHost", + "remoteRepository", + "remoteCA", + "skipTLSVerification", + "username", + "password", + "authToken" + ], + "properties": { + "authToken": { + "type": "string" + }, + "password": { + "type": "string" + }, + "remoteCA": { + "type": "string" + }, + "remoteHost": { + "type": "string" + }, + "remoteRepository": { + "type": "string" + }, + "skipTLSVerification": { + "type": "boolean" + }, + "username": { + "type": "string" + } + } + }, + "forms.CreateRepo": { + "required": [ + "name", + "shortDescription", + "longDescription", + "scanOnPush", + "immutableTags", + "enableManifestLists", + "tagLimit" + ], + "properties": { + "enableManifestLists": { + "type": "boolean" + }, + "immutableTags": { + "type": "boolean" + }, + "longDescription": { + "type": "string" + }, + "name": 
{ + "type": "string" + }, + "scanOnPush": { + "type": "boolean" + }, + "shortDescription": { + "type": "string" + }, + "tagLimit": { + "type": "integer", + "format": "integer" + }, + "visibility": { + "type": "string", + "enum": [ + "public", + "private" + ] + } + } + }, + "forms.DeleteMirroringPolicyIDs": { + "required": [ + "id" + ], + "properties": { + "id": { + "type": "string" + } + } + }, + "forms.EmptyForm": {}, + "forms.Image": { + "required": [ + "name", + "digest" + ], + "properties": { + "digest": { + "type": "string" + }, + "name": { + "type": "string" + } + } + }, + "forms.ImagesForm": { + "required": [ + "images" + ], + "properties": { + "images": { + "type": "array", + "items": { + "$ref": "#/definitions/forms.Image" + } + } + } + }, + "forms.ScanOptions": { + "required": [ + "scan", + "check" + ], + "properties": { + "check": { + "type": "boolean" + }, + "scan": { + "type": "boolean" + } + } + }, + "forms.Settings": { + "required": [ + "dtrHost", + "sso", + "createRepositoryOnPush", + "disableUpgrades", + "reportAnalytics", + "anonymizeAnalytics", + "disableBackupWarning", + "webTLSCert", + "webTLSKey", + "webTLSCA", + "scanningEnabled", + "scanningSyncOnline", + "scanningDeadline", + "scanningEnableAutoRecheck", + "jobHistoryCompactionEnabled", + "jobHistoryToKeep", + "jobHistoryMaxAge", + "repoEventHistoryCompactionEnabled", + "repoEventHistoryToKeep", + "repoEventHistoryMaxAge", + "readOnlyRegistry" + ], + "properties": { + "anonymizeAnalytics": { + "type": "boolean" + }, + "createRepositoryOnPush": { + "type": "boolean" + }, + "disableBackupWarning": { + "type": "boolean" + }, + "disableUpgrades": { + "type": "boolean" + }, + "dtrHost": { + "type": "string" + }, + "jobHistoryCompactionEnabled": { + "type": "boolean" + }, + "jobHistoryMaxAge": { + "type": "string" + }, + "jobHistoryToKeep": { + "type": "integer", + "format": "int64" + }, + "readOnlyRegistry": { + "type": "boolean" + }, + "repoEventHistoryCompactionEnabled": { + "type": "boolean" 
+ }, + "repoEventHistoryMaxAge": { + "type": "string" + }, + "repoEventHistoryToKeep": { + "type": "integer", + "format": "int64" + }, + "reportAnalytics": { + "type": "boolean" + }, + "scanningDeadline": { + "type": "integer" + }, + "scanningEnableAutoRecheck": { + "type": "boolean" + }, + "scanningEnabled": { + "type": "boolean" + }, + "scanningSyncOnline": { + "type": "boolean" + }, + "sso": { + "type": "boolean" + }, + "webTLSCA": { + "type": "string" + }, + "webTLSCert": { + "type": "string" + }, + "webTLSKey": { + "type": "string" + } + } + }, + "forms.ToggleScanOnPush": { + "required": [ + "scanOnPush" + ], + "properties": { + "scanOnPush": { + "type": "boolean" + } + } + }, + "forms.UpdateAPIToken": { + "properties": { + "isActive": { + "type": "boolean" + }, + "tokenLabel": { + "type": "string" + } + } + }, + "forms.UpdatePollMirroringPolicy": { + "properties": { + "authToken": { + "type": "string" + }, + "enabled": { + "type": "boolean" + }, + "password": { + "type": "string" + }, + "remoteCA": { + "type": "string" + }, + "remoteHost": { + "type": "string" + }, + "remoteRepository": { + "type": "string" + }, + "skipTLSVerification": { + "type": "boolean" + }, + "username": { + "type": "string" + } + } + }, + "forms.UpdatePromotionPolicy": { + "properties": { + "enabled": { + "type": "boolean" + }, + "rules": { + "type": "array", + "items": { + "$ref": "#/definitions/ruleengine.Rule" + } + }, + "tagTemplate": { + "type": "string" + }, + "targetRepository": { + "type": "string" + } + } + }, + "forms.UpdatePruningPolicy": { + "properties": { + "enabled": { + "type": "boolean" + }, + "rules": { + "type": "array", + "items": { + "$ref": "#/definitions/ruleengine.Rule" + } + } + } + }, + "forms.UpdatePushMirroringPolicy": { + "properties": { + "authToken": { + "type": "string" + }, + "enabled": { + "type": "boolean" + }, + "password": { + "type": "string" + }, + "remoteCA": { + "type": "string" + }, + "remoteHost": { + "type": "string" + }, + 
"remoteRepository": { + "type": "string" + }, + "rules": { + "type": "array", + "items": { + "$ref": "#/definitions/ruleengine.Rule" + } + }, + "skipTLSVerification": { + "type": "boolean" + }, + "tagTemplate": { + "type": "string" + }, + "username": { + "type": "string" + } + } + }, + "forms.UpdateRepo": { + "required": [ + "immutableTags" + ], + "properties": { + "enableManifestLists": { + "type": "boolean" + }, + "immutableTags": { + "type": "boolean" + }, + "longDescription": { + "type": "string" + }, + "scanOnPush": { + "type": "boolean" + }, + "shortDescription": { + "type": "string" + }, + "tagLimit": { + "type": "integer", + "format": "integer" + }, + "visibility": { + "type": "string", + "enum": [ + "public", + "private" + ] + } + } + }, + "forms.UserSettings": { + "properties": { + "contentCacheUUID": { + "type": "string" + } + } + }, + "forms.VulnOverrideOption": { + "required": [ + "component", + "componentVersion", + "cve", + "notes" + ], + "properties": { + "component": { + "type": "string" + }, + "componentVersion": { + "type": "string" + }, + "cve": { + "type": "string" + }, + "notes": { + "type": "string" + } + } + }, + "forms.Webhook": { + "required": [ + "endpoint", + "tlsCert", + "skipTLSVerification" + ], + "properties": { + "endpoint": { + "type": "string" + }, + "key": { + "type": "string" + }, + "skipTLSVerification": { + "type": "boolean" + }, + "tlsCert": { + "type": "string" + }, + "type": { + "type": "string", + "enum": [ + "TAG_PUSH", + "TAG_PULL", + "TAG_DELETE", + "PROMOTION", + "PUSH_MIRRORING", + "POLL_MIRRORING", + "MANIFEST_PUSH", + "MANIFEST_PULL", + "MANIFEST_DELETE", + "REPO_EVENT", + "SCAN_COMPLETED", + "SCAN_FAILED", + "SCANNER_UPDATE_COMPLETED" + ] + } + } + }, + "forms.WebhookTestPayload": { + "required": [ + "type", + "endpoint", + "tlsCert", + "skipTLSVerification" + ], + "properties": { + "endpoint": { + "type": "string" + }, + "skipTLSVerification": { + "type": "boolean" + }, + "tlsCert": { + "type": "string" + }, + 
"type": { + "type": "string" + } + } + }, + "forms.WebhookUpdate": { + "required": [ + "id", + "inactive", + "tlsCert", + "skipTLSVerification" + ], + "properties": { + "id": { + "type": "string" + }, + "inactive": { + "type": "boolean" + }, + "skipTLSVerification": { + "type": "boolean" + }, + "tlsCert": { + "type": "string" + } + } + }, + "responses.APIToken": { + "required": [ + "hashedToken", + "tokenLabel", + "isActive", + "lastUsed", + "createdAt", + "generatedBy", + "creatorUa" + ], + "properties": { + "createdAt": { + "type": "string", + "format": "date-time" + }, + "creatorUa": { + "type": "string" + }, + "generatedBy": { + "type": "string" + }, + "hashedToken": { + "type": "string" + }, + "isActive": { + "type": "boolean" + }, + "lastUsed": { + "type": "string", + "format": "date-time" + }, + "tokenLabel": { + "type": "string" + } + } + }, + "responses.Account": { + "required": [ + "name", + "id", + "fullName", + "isOrg" + ], + "properties": { + "fullName": { + "description": "Full Name of the account", + "type": "string" + }, + "id": { + "description": "ID of the account", + "type": "string" + }, + "isActive": { + "description": "Whether the user is active and can login (users only)", + "type": "boolean" + }, + "isAdmin": { + "description": "Whether the user is a system admin (users only)", + "type": "boolean" + }, + "isImported": { + "description": "Whether the user was imported from an upstream identity provider", + "type": "boolean" + }, + "isOrg": { + "description": "Whether the account is an organization (or user)", + "type": "boolean" + }, + "membersCount": { + "description": "The number of members of the organization", + "type": "integer", + "format": "int32" + }, + "name": { + "description": "Name of the account", + "type": "string" + }, + "teamsCount": { + "description": "The number of teams in the organization", + "type": "integer", + "format": "int32" + } + } + }, + "responses.Alert": { + "required": [ + "message" + ], + "properties": { + 
"class": { + "type": "string" + }, + "id": { + "type": "string" + }, + "img": { + "type": "string" + }, + "message": { + "type": "string" + }, + "url": { + "type": "string" + } + } + }, + "responses.Autocomplete": { + "properties": { + "accountResults": { + "type": "array", + "items": { + "$ref": "#/definitions/responses.Account" + } + }, + "repositoryResults": { + "type": "array", + "items": { + "$ref": "#/definitions/responses.Repository" + } + } + } + }, + "responses.ClusterStatus": { + "required": [ + "rethink_system_tables", + "replica_health", + "replica_timestamp", + "replica_readonly", + "gc_lock_holder" + ], + "properties": { + "gc_lock_holder": { + "type": "string" + }, + "replica_health": { + "type": "object", + "additionalProperties": { + "type": "string" + } + }, + "replica_readonly": { + "type": "object", + "additionalProperties": { + "type": "boolean" + } + }, + "replica_timestamp": { + "type": "object", + "additionalProperties": { + "type": "string" + } + }, + "rethink_system_tables": { + "type": "object" + } + } + }, + "responses.Component": { + "required": [ + "component", + "version", + "vulns", + "fullpath" + ], + "properties": { + "component": { + "type": "string" + }, + "fullpath": { + "type": "array", + "items": { + "type": "string" + } + }, + "license": { + "$ref": "#/definitions/responses.License" + }, + "version": { + "type": "string" + }, + "vulns": { + "type": "array", + "items": { + "$ref": "#/definitions/responses.VulnerabilityDetails" + } + } + } + }, + "responses.ContentCache": { + "required": [ + "id", + "name", + "host" + ], + "properties": { + "host": { + "type": "string" + }, + "id": { + "type": "string" + }, + "name": { + "type": "string" + } + } + }, + "responses.DetailedSummary": { + "required": [ + "sha256sum" + ], + "properties": { + "components": { + "type": "array", + "items": { + "$ref": "#/definitions/responses.Component" + } + }, + "sha256sum": { + "type": "string" + } + } + }, + "responses.DockerRepository": { + 
"required": [ + "description", + "is_official", + "is_trusted", + "name", + "star_count" + ], + "properties": { + "description": { + "type": "string" + }, + "is_official": { + "type": "boolean" + }, + "is_trusted": { + "type": "boolean" + }, + "name": { + "type": "string" + }, + "star_count": { + "type": "integer", + "format": "int32" + } + } + }, + "responses.DockerSearch": { + "required": [ + "num_results", + "query", + "results" + ], + "properties": { + "num_results": { + "type": "integer", + "format": "int32" + }, + "query": { + "type": "string" + }, + "results": { + "type": "array", + "items": { + "$ref": "#/definitions/responses.DockerRepository" + } + } + } + }, + "responses.DockerfileLine": { + "required": [ + "line", + "layerDigest", + "size", + "isEmpty" + ], + "properties": { + "isEmpty": { + "type": "boolean" + }, + "layerDigest": { + "type": "string" + }, + "line": { + "type": "string" + }, + "mediaType": { + "type": "string" + }, + "size": { + "type": "integer", + "format": "int64" + }, + "urls": { + "type": "array", + "items": { + "type": "string" + } + } + } + }, + "responses.Events": { + "required": [ + "events" + ], + "properties": { + "events": { + "type": "array", + "items": { + "$ref": "#/definitions/schema.Event" + } + } + } + }, + "responses.Features": { + "required": [ + "scanningEnabled", + "scanningLicensed", + "promotionLicensed", + "mirroringLicensed", + "metadataStoreOptedIn", + "onlineGCEnabled", + "db_version", + "ucpHost" + ], + "properties": { + "db_version": { + "type": "integer", + "format": "int32" + }, + "metadataStoreOptedIn": { + "type": "boolean" + }, + "mirroringLicensed": { + "type": "boolean" + }, + "onlineGCEnabled": { + "type": "boolean" + }, + "promotionLicensed": { + "type": "boolean" + }, + "scanningEnabled": { + "type": "boolean" + }, + "scanningLicensed": { + "type": "boolean" + }, + "ucpHost": { + "type": "string" + } + } + }, + "responses.Language": { + "required": [ + "language" + ], + "properties": { + 
"language": { + "type": "string" + } + } + }, + "responses.LayerVulnOverride": { + "required": [ + "pk", + "digest", + "component", + "componentVersion", + "cve", + "notes" + ], + "properties": { + "component": { + "type": "string" + }, + "componentVersion": { + "type": "string" + }, + "cve": { + "type": "string" + }, + "digest": { + "type": "string" + }, + "notes": { + "type": "string" + }, + "pk": { + "type": "string" + } + } + }, + "responses.License": { + "required": [ + "name", + "type", + "url" + ], + "properties": { + "name": { + "type": "string" + }, + "type": { + "type": "string" + }, + "url": { + "type": "string" + } + } + }, + "responses.ListRepoNamespaceTeamAccess": { + "required": [ + "namespace", + "teamAccessList" + ], + "properties": { + "namespace": { + "type": "string" + }, + "teamAccessList": { + "type": "array", + "items": { + "$ref": "#/definitions/responses.TeamAccess" + } + } + } + }, + "responses.ListRepoTeamAccess": { + "required": [ + "repository", + "teamAccessList" + ], + "properties": { + "repository": { + "$ref": "#/definitions/responses.Repository" + }, + "teamAccessList": { + "type": "array", + "items": { + "$ref": "#/definitions/responses.TeamAccess" + } + } + } + }, + "responses.ListTag": { + "required": [ + "name", + "digest", + "author", + "updatedAt", + "createdAt", + "hashMismatch", + "inNotary", + "manifest" + ], + "properties": { + "author": { + "type": "string" + }, + "createdAt": { + "type": "string", + "format": "date-time" + }, + "digest": { + "type": "string" + }, + "hashMismatch": { + "description": "true if the hashes from notary and registry don't match", + "type": "boolean" + }, + "inNotary": { + "description": "true if the tag exists in Notary", + "type": "boolean" + }, + "manifest": { + "$ref": "#/definitions/responses.Manifest" + }, + "mirroring": { + "$ref": "#/definitions/responses.Mirroring" + }, + "name": { + "type": "string" + }, + "promotion": { + "$ref": "#/definitions/responses.Promotion" + }, + 
"updatedAt": { + "type": "string", + "format": "date-time" + } + } + }, + "responses.ListTeamRepoAccess": { + "required": [ + "team", + "repositoryAccessList" + ], + "properties": { + "repositoryAccessList": { + "type": "array", + "items": { + "$ref": "#/definitions/responses.RepoAccess" + } + }, + "team": { + "$ref": "#/definitions/responses.Team" + } + } + }, + "responses.Manifest": { + "required": [ + "digest" + ], + "properties": { + "architecture": { + "type": "string" + }, + "author": { + "type": "string" + }, + "configDigest": { + "type": "string" + }, + "configMediaType": { + "type": "string" + }, + "createdAt": { + "type": "string", + "format": "date-time" + }, + "digest": { + "type": "string" + }, + "dockerfile": { + "type": "array", + "items": { + "$ref": "#/definitions/responses.DockerfileLine" + } + }, + "mediaType": { + "type": "string" + }, + "os": { + "type": "string" + }, + "osVersion": { + "type": "string" + }, + "size": { + "type": "integer", + "format": "int64" + } + } + }, + "responses.Mirroring": { + "required": [ + "mirroringPolicyID", + "digest", + "remoteRepository", + "remoteTag" + ], + "properties": { + "digest": { + "type": "string" + }, + "mirroringPolicyID": { + "type": "string" + }, + "remoteRepository": { + "type": "string" + }, + "remoteTag": { + "type": "string" + } + } + }, + "responses.MirroringPolicy": { + "required": [ + "id", + "mirroringType", + "username", + "localRepository", + "remoteHost", + "remoteRepository", + "remoteCA", + "skipTLSVerification", + "enabled", + "lastMirroredAt", + "lastStatus" + ], + "properties": { + "enabled": { + "type": "boolean" + }, + "id": { + "type": "string" + }, + "lastMirroredAt": { + "type": "string" + }, + "lastStatus": { + "$ref": "#/definitions/schema.MirroringStatus" + }, + "localRepository": { + "type": "string" + }, + "mirroringType": { + "type": "string" + }, + "remoteCA": { + "type": "string" + }, + "remoteHost": { + "type": "string" + }, + "remoteRepository": { + "type": "string" + 
}, + "rules": { + "type": "array", + "items": { + "$ref": "#/definitions/ruleengine.Rule" + } + }, + "skipTLSVerification": { + "type": "boolean" + }, + "tagTemplate": { + "type": "string" + }, + "username": { + "type": "string" + } + } + }, + "responses.NamespaceTeamAccess": { + "required": [ + "accessLevel", + "team", + "namespace" + ], + "properties": { + "accessLevel": { + "type": "string", + "enum": [ + "read-only", + "read-write", + "admin" + ] + }, + "namespace": { + "type": "string" + }, + "team": { + "$ref": "#/definitions/responses.Team" + } + } + }, + "responses.NautilusStatus": { + "required": [ + "state", + "scanner_version", + "scannerUpdatedAt", + "db_version", + "db_updated_at", + "lastDBUpdateFailed", + "lastVulnOverridesDBUpdateFailed" + ], + "properties": { + "db_updated_at": { + "type": "string", + "format": "date-time" + }, + "db_version": { + "type": "integer", + "format": "int32" + }, + "lastDBUpdateFailed": { + "type": "boolean" + }, + "lastVulnOverridesDBUpdateFailed": { + "type": "boolean" + }, + "replicas": { + "type": "object", + "additionalProperties": { + "$ref": "#/definitions/schema.ScannerFingerprint" + } + }, + "scannerUpdatedAt": { + "type": "string", + "format": "date-time" + }, + "scanner_version": { + "type": "integer", + "format": "int32" + }, + "state": { + "type": "integer", + "format": "int32" + } + } + }, + "responses.NewAPIToken": { + "required": [ + "token", + "hashedToken", + "tokenLabel", + "isActive", + "lastUsed", + "createdAt", + "generatedBy", + "creatorUa" + ], + "properties": { + "createdAt": { + "type": "string", + "format": "date-time" + }, + "creatorUa": { + "type": "string" + }, + "generatedBy": { + "type": "string" + }, + "hashedToken": { + "type": "string" + }, + "isActive": { + "type": "boolean" + }, + "lastUsed": { + "type": "string", + "format": "date-time" + }, + "token": { + "type": "string" + }, + "tokenLabel": { + "type": "string" + } + } + }, + "responses.Note": { + "required": [ + "reason", + 
"type" + ], + "properties": { + "reason": { + "type": "string" + }, + "type": { + "type": "string" + } + } + }, + "responses.OldScanSummary": { + "required": [ + "namespace", + "reponame", + "tag", + "critical", + "major", + "minor", + "last_scan_status", + "check_completed_at", + "should_rescan", + "has_foreign_layers" + ], + "properties": { + "check_completed_at": { + "type": "string", + "format": "date-time" + }, + "critical": { + "type": "integer", + "format": "int32" + }, + "has_foreign_layers": { + "type": "boolean" + }, + "last_scan_status": { + "type": "integer", + "format": "int32" + }, + "layer_details": { + "type": "array", + "items": { + "$ref": "#/definitions/responses.DetailedSummary" + } + }, + "major": { + "type": "integer", + "format": "int32" + }, + "minor": { + "type": "integer", + "format": "int32" + }, + "namespace": { + "type": "string" + }, + "reponame": { + "type": "string" + }, + "should_rescan": { + "type": "boolean" + }, + "tag": { + "type": "string" + } + } + }, + "responses.PollMirroringPolicy": { + "required": [ + "id", + "username", + "localRepository", + "remoteHost", + "remoteRepository", + "remoteCA", + "skipTLSVerification", + "enabled", + "lastMirroredAt", + "lastStatus" + ], + "properties": { + "enabled": { + "type": "boolean" + }, + "id": { + "type": "string" + }, + "lastMirroredAt": { + "type": "string" + }, + "lastStatus": { + "$ref": "#/definitions/schema.MirroringStatus" + }, + "localRepository": { + "type": "string" + }, + "remoteCA": { + "type": "string" + }, + "remoteHost": { + "type": "string" + }, + "remoteRepository": { + "type": "string" + }, + "skipTLSVerification": { + "type": "boolean" + }, + "username": { + "type": "string" + } + } + }, + "responses.Promotion": { + "required": [ + "promotionPolicyID", + "string", + "sourceRepository", + "sourceTag" + ], + "properties": { + "promotionPolicyID": { + "type": "string" + }, + "sourceRepository": { + "type": "string" + }, + "sourceTag": { + "type": "string" + }, + 
"string": { + "type": "string" + } + } + }, + "responses.PromotionPolicy": { + "required": [ + "id", + "rules", + "sourceRepository", + "targetRepository", + "tagTemplate", + "enabled", + "lastPromotedAt" + ], + "properties": { + "enabled": { + "type": "boolean" + }, + "id": { + "type": "string" + }, + "lastPromotedAt": { + "type": "string", + "format": "date-time" + }, + "rules": { + "type": "array", + "items": { + "$ref": "#/definitions/ruleengine.Rule" + } + }, + "sourceRepository": { + "type": "string" + }, + "tagTemplate": { + "type": "string" + }, + "targetRepository": { + "type": "string" + } + } + }, + "responses.PruningPolicy": { + "required": [ + "id", + "rules", + "repository", + "enabled", + "lastPrunedAt" + ], + "properties": { + "enabled": { + "type": "boolean" + }, + "id": { + "type": "string" + }, + "lastPrunedAt": { + "type": "string" + }, + "repository": { + "type": "string" + }, + "rules": { + "type": "array", + "items": { + "$ref": "#/definitions/ruleengine.Rule" + } + } + } + }, + "responses.PushMirroringPolicy": { + "required": [ + "id", + "rules", + "username", + "localRepository", + "remoteHost", + "remoteRepository", + "remoteCA", + "skipTLSVerification", + "tagTemplate", + "enabled", + "lastMirroredAt", + "lastStatus" + ], + "properties": { + "enabled": { + "type": "boolean" + }, + "id": { + "type": "string" + }, + "lastMirroredAt": { + "type": "string" + }, + "lastStatus": { + "$ref": "#/definitions/schema.MirroringStatus" + }, + "localRepository": { + "type": "string" + }, + "remoteCA": { + "type": "string" + }, + "remoteHost": { + "type": "string" + }, + "remoteRepository": { + "type": "string" + }, + "rules": { + "type": "array", + "items": { + "$ref": "#/definitions/ruleengine.Rule" + } + }, + "skipTLSVerification": { + "type": "boolean" + }, + "tagTemplate": { + "type": "string" + }, + "username": { + "type": "string" + } + } + }, + "responses.RemoteRegistryCheck": { + "required": [ + "registryType", + "permissions" + ], + 
"properties": { + "permissions": { + "$ref": "#/definitions/responses.RemoteRepositoryPermissions" + }, + "registryType": { + "type": "string" + } + } + }, + "responses.RemoteRepositoryPermissions": { + "required": [ + "read", + "write" + ], + "properties": { + "read": { + "type": "boolean" + }, + "write": { + "type": "boolean" + } + } + }, + "responses.ReplicaSettings": { + "required": [ + "HTTPPort", + "HTTPSPort", + "node" + ], + "properties": { + "HTTPPort": { + "type": "integer" + }, + "HTTPSPort": { + "type": "integer" + }, + "node": { + "type": "string" + } + } + }, + "responses.RepoAccess": { + "required": [ + "accessLevel", + "repository" + ], + "properties": { + "accessLevel": { + "type": "string", + "enum": [ + "read-only", + "read-write", + "admin" + ] + }, + "repository": { + "$ref": "#/definitions/responses.Repository" + } + } + }, + "responses.RepoUserAccess": { + "required": [ + "accessLevel", + "user", + "repository" + ], + "properties": { + "accessLevel": { + "type": "string", + "enum": [ + "read-only", + "read-write", + "admin" + ] + }, + "repository": { + "$ref": "#/definitions/responses.Repository" + }, + "user": { + "$ref": "#/definitions/responses.Account" + } + } + }, + "responses.Repositories": { + "required": [ + "repositories" + ], + "properties": { + "repositories": { + "type": "array", + "items": { + "$ref": "#/definitions/responses.Repository" + } + } + } + }, + "responses.Repository": { + "required": [ + "id", + "namespace", + "namespaceType", + "name", + "shortDescription", + "visibility", + "scanOnPush", + "immutableTags", + "enableManifestLists", + "pulls", + "pushes", + "tagLimit" + ], + "properties": { + "enableManifestLists": { + "type": "boolean" + }, + "id": { + "type": "string" + }, + "immutableTags": { + "type": "boolean" + }, + "longDescription": { + "type": "string" + }, + "name": { + "type": "string" + }, + "namespace": { + "type": "string" + }, + "namespaceType": { + "type": "string", + "enum": [ + "user", + 
"organization" + ] + }, + "pulls": { + "type": "integer", + "format": "integer" + }, + "pushes": { + "type": "integer", + "format": "integer" + }, + "scanOnPush": { + "type": "boolean" + }, + "shortDescription": { + "type": "string" + }, + "tagLimit": { + "type": "integer", + "format": "integer" + }, + "visibility": { + "type": "string", + "enum": [ + "public", + "private" + ] + } + } + }, + "responses.ScanSummary": { + "required": [ + "scannedImage", + "shouldRescan" + ], + "properties": { + "scanStatus": { + "type": "integer", + "format": "int32" + }, + "scannedImage": { + "$ref": "#/definitions/schema.ScannedImage" + }, + "shouldRescan": { + "type": "boolean" + } + } + }, + "responses.Settings": { + "required": [ + "dtrHost", + "sso", + "createRepositoryOnPush", + "replicaSettings", + "httpProxy", + "httpsProxy", + "noProxy", + "disableUpgrades", + "reportAnalytics", + "anonymizeAnalytics", + "disableBackupWarning", + "logProtocol", + "logHost", + "logLevel", + "webTLSCert", + "webTLSCA", + "replicaID", + "scanningEnabled", + "scanningSyncOnline", + "scanningDeadline", + "scanningEnableAutoRecheck", + "jobHistoryCompactionEnabled", + "jobHistoryToKeep", + "jobHistoryMaxAge", + "repoEventHistoryCompactionEnabled", + "repoEventHistoryToKeep", + "repoEventHistoryMaxAge", + "storageVolume", + "nfsHost", + "nfsPath", + "readOnlyRegistry" + ], + "properties": { + "anonymizeAnalytics": { + "type": "boolean" + }, + "createRepositoryOnPush": { + "type": "boolean" + }, + "disableBackupWarning": { + "type": "boolean" + }, + "disableUpgrades": { + "type": "boolean" + }, + "dtrHost": { + "type": "string" + }, + "httpProxy": { + "type": "string" + }, + "httpsProxy": { + "type": "string" + }, + "jobHistoryCompactionEnabled": { + "type": "boolean" + }, + "jobHistoryMaxAge": { + "type": "string" + }, + "jobHistoryToKeep": { + "type": "integer", + "format": "int64" + }, + "logHost": { + "type": "string" + }, + "logLevel": { + "type": "string" + }, + "logProtocol": { + "type": 
"string" + }, + "nfsHost": { + "type": "string" + }, + "nfsPath": { + "type": "string" + }, + "noProxy": { + "type": "string" + }, + "readOnlyRegistry": { + "type": "boolean" + }, + "replicaID": { + "type": "string" + }, + "replicaSettings": { + "type": "object", + "additionalProperties": { + "$ref": "#/definitions/responses.ReplicaSettings" + } + }, + "repoEventHistoryCompactionEnabled": { + "type": "boolean" + }, + "repoEventHistoryMaxAge": { + "type": "string" + }, + "repoEventHistoryToKeep": { + "type": "integer", + "format": "int64" + }, + "reportAnalytics": { + "type": "boolean" + }, + "scanningDeadline": { + "type": "integer" + }, + "scanningEnableAutoRecheck": { + "type": "boolean" + }, + "scanningEnabled": { + "type": "boolean" + }, + "scanningSyncOnline": { + "type": "boolean" + }, + "sso": { + "type": "boolean" + }, + "storageVolume": { + "type": "string" + }, + "webTLSCA": { + "type": "string" + }, + "webTLSCert": { + "type": "string" + } + } + }, + "responses.Tag": { + "required": [ + "digest", + "createdAt", + "inNotary", + "manifest", + "name", + "author", + "updatedAt", + "hashMismatch" + ], + "properties": { + "author": { + "type": "string" + }, + "components": { + "type": "array", + "items": { + "$ref": "#/definitions/schema.ScannerComponent" + } + }, + "createdAt": { + "type": "string", + "format": "date-time" + }, + "digest": { + "type": "string" + }, + "hashMismatch": { + "description": "true if the hashes from notary and registry don't match", + "type": "boolean" + }, + "inNotary": { + "description": "true if the tag exists in Notary", + "type": "boolean" + }, + "licenses": { + "type": "array", + "items": { + "$ref": "#/definitions/schema.ScannerLicense" + } + }, + "manifest": { + "$ref": "#/definitions/responses.Manifest" + }, + "mirroring": { + "$ref": "#/definitions/responses.Mirroring" + }, + "name": { + "type": "string" + }, + "promotion": { + "$ref": "#/definitions/responses.Promotion" + }, + "updatedAt": { + "type": "string", + 
"format": "date-time" + }, + "vuln_summary": { + "$ref": "#/definitions/responses.OldScanSummary" + } + } + }, + "responses.Team": { + "required": [ + "id", + "clientUserIsMember" + ], + "properties": { + "clientUserIsMember": { + "type": "boolean" + }, + "id": { + "type": "string" + } + } + }, + "responses.TeamAccess": { + "required": [ + "accessLevel", + "team" + ], + "properties": { + "accessLevel": { + "type": "string", + "enum": [ + "read-only", + "read-write", + "admin" + ] + }, + "team": { + "$ref": "#/definitions/responses.Team" + } + } + }, + "responses.ThinScanSummaries": { + "required": [ + "images" + ], + "properties": { + "images": { + "type": "object", + "additionalProperties": { + "$ref": "#/definitions/schema.ScannerVulnCount" + } + } + } + }, + "responses.ThinTag": { + "required": [ + "name", + "digest" + ], + "properties": { + "digest": { + "type": "string" + }, + "name": { + "type": "string" + } + } + }, + "responses.UserSettings": { + "required": [ + "ContentCacheUUID" + ], + "properties": { + "ContentCacheUUID": { + "type": "string" + } + } + }, + "responses.Vulnerability": { + "required": [ + "cve", + "cvss", + "summary" + ], + "properties": { + "cve": { + "type": "string" + }, + "cvss": { + "type": "number", + "format": "float" + }, + "summary": { + "type": "string" + } + } + }, + "responses.VulnerabilityDetails": { + "required": [ + "vuln", + "exact", + "notes" + ], + "properties": { + "exact": { + "type": "boolean" + }, + "notes": { + "type": "array", + "items": { + "$ref": "#/definitions/responses.Note" + } + }, + "vuln": { + "$ref": "#/definitions/responses.Vulnerability" + } + } + }, + "responses.Webhook": { + "required": [ + "id", + "type", + "key", + "endpoint", + "authorID", + "createdAt", + "inactive", + "tlsCert", + "skipTLSVerification" + ], + "properties": { + "authorID": { + "type": "string" + }, + "createdAt": { + "type": "string", + "format": "date-time" + }, + "endpoint": { + "type": "string" + }, + "id": { + "type": "string" 
+ }, + "inactive": { + "type": "boolean" + }, + "key": { + "type": "string" + }, + "lastSuccessfulAt": { + "type": "string", + "format": "date-time" + }, + "skipTLSVerification": { + "type": "boolean" + }, + "tlsCert": { + "type": "string" + }, + "type": { + "type": "string" + } + } + }, + "ruleengine.Rule": { + "properties": { + "field": { + "type": "string" + }, + "operator": { + "type": "string" + }, + "values": { + "type": "array", + "items": { + "type": "string" + } + } + } + }, + "schema.Event": { + "required": [ + "id", + "publishedAt", + "actor", + "actorType", + "type", + "object", + "repository" + ], + "properties": { + "actor": { + "type": "string" + }, + "actorType": { + "type": "string" + }, + "id": { + "type": "string" + }, + "object": { + "$ref": "#/definitions/schema.Object" + }, + "publishedAt": { + "type": "string", + "format": "date-time" + }, + "repository": { + "type": "string" + }, + "target": { + "$ref": "#/definitions/schema.Object" + }, + "type": { + "type": "string" + } + } + }, + "schema.LayerVulnOverride": { + "required": [ + "pk", + "digest", + "component", + "componentVersion", + "cve", + "notes" + ], + "properties": { + "component": { + "type": "string" + }, + "componentVersion": { + "type": "string" + }, + "cve": { + "type": "string" + }, + "digest": { + "type": "string" + }, + "notes": { + "type": "string" + }, + "pk": { + "type": "string" + } + } + }, + "schema.MirroringStatus": { + "required": [ + "code", + "detail", + "timestamp" + ], + "properties": { + "code": { + "type": "string" + }, + "detail": { + "type": "string" + }, + "timestamp": { + "type": "string", + "format": "date-time" + } + } + }, + "schema.Object": { + "required": [ + "id", + "type" + ], + "properties": { + "content": { + "type": "string" + }, + "id": { + "type": "string" + }, + "name": { + "type": "string" + }, + "type": { + "type": "string" + } + } + }, + "schema.ScannedImage": { + "required": [ + "pk", + "namespace", + "repository", + "tag", + 
"manifestDigest", + "totalVulnCount", + "licenses", + "layers", + "components", + "cves", + "maxCVSSValue", + "scannerFingerprint", + "vulnOverrides" + ], + "properties": { + "components": { + "type": "array", + "items": { + "$ref": "#/definitions/schema.ScannerComponent" + } + }, + "cves": { + "type": "array", + "items": { + "$ref": "#/definitions/schema.ScannerCVE" + } + }, + "layers": { + "type": "array", + "items": { + "$ref": "#/definitions/schema.ScannerLayer" + } + }, + "licenses": { + "type": "array", + "items": { + "$ref": "#/definitions/schema.ScannerLicense" + } + }, + "manifestDigest": { + "type": "string" + }, + "maxCVSSValue": { + "type": "number", + "format": "float" + }, + "namespace": { + "type": "string" + }, + "pk": { + "type": "string" + }, + "repository": { + "type": "string" + }, + "scannerFingerprint": { + "$ref": "#/definitions/schema.ScannerFingerprint" + }, + "tag": { + "type": "string" + }, + "totalVulnCount": { + "$ref": "#/definitions/schema.ScannerVulnCount" + }, + "vulnOverrides": { + "type": "array", + "items": { + "$ref": "#/definitions/schema.LayerVulnOverride" + } + } + } + }, + "schema.ScannerCVE": { + "required": [ + "cvePK", + "summary", + "cvss", + "notes" + ], + "properties": { + "cvePK": { + "type": "string" + }, + "cvss": { + "type": "number", + "format": "float" + }, + "notes": { + "type": "string" + }, + "summary": { + "type": "string" + } + } + }, + "schema.ScannerComponent": { + "required": [ + "componentPK", + "vulnCount", + "name", + "version", + "filepaths", + "cves", + "licenses", + "source" + ], + "properties": { + "componentPK": { + "type": "string" + }, + "cves": { + "type": "array", + "items": { + "type": "string" + } + }, + "filepaths": { + "type": "array", + "items": { + "type": "string" + } + }, + "licenses": { + "type": "array", + "items": { + "type": "string" + } + }, + "name": { + "type": "string" + }, + "source": { + "type": "string" + }, + "version": { + "type": "string" + }, + "vulnCount": { + "$ref": 
"#/definitions/schema.ScannerVulnCount" + } + } + }, + "schema.ScannerFingerprint": { + "required": [ + "scannerType", + "version" + ], + "properties": { + "scannerType": { + "type": "integer", + "format": "int32" + }, + "version": { + "type": "array", + "items": { + "type": "string" + } + } + } + }, + "schema.ScannerLayer": { + "required": [ + "digest", + "mediaType", + "author", + "size", + "components" + ], + "properties": { + "author": { + "type": "string" + }, + "components": { + "type": "array", + "items": { + "type": "string" + } + }, + "digest": { + "type": "string" + }, + "mediaType": { + "type": "string" + }, + "size": { + "type": "integer", + "format": "int64" + } + } + }, + "schema.ScannerLicense": { + "required": [ + "name", + "url", + "type" + ], + "properties": { + "name": { + "type": "string" + }, + "type": { + "type": "string" + }, + "url": { + "type": "string" + } + } + }, + "schema.ScannerVulnCount": { + "required": [ + "critical", + "major", + "minor" + ], + "properties": { + "critical": { + "type": "integer", + "format": "int32" + }, + "major": { + "type": "integer", + "format": "int32" + }, + "minor": { + "type": "integer", + "format": "int32" + } + } + }, + "tmpforms.ActionConfigCreate": { + "required": [ + "action", + "parameters" + ], + "properties": { + "action": { + "description": "The action to modify the config for", + "type": "string" + }, + "parameters": { + "description": "Extra parameters to pass to the job. The available parameters depend on the job. 
These are overwritten by any corresponding parameters set in the job itself.", + "type": "object", + "additionalProperties": { + "type": "string" + } + } + } + }, + "tmpforms.CronCreate": { + "required": [ + "action", + "schedule", + "retries", + "capacityMap", + "parameters", + "deadline", + "stopTimeout" + ], + "properties": { + "action": { + "description": "The action which the cron will perform", + "type": "string" + }, + "capacityMap": { + "description": "The map of required capacity", + "type": "object", + "additionalProperties": { + "type": "integer" + } + }, + "deadline": { + "description": "After this amount of time has passed, a SIGTERM will be sent", + "type": "string" + }, + "parameters": { + "description": "Extra parameters to pass to the job. The available parameters depend on the job.", + "type": "object", + "additionalProperties": { + "type": "string" + } + }, + "retries": { + "description": "The number of times to retry a job if it fails", + "type": "integer", + "format": "int32" + }, + "schedule": { + "description": "The schedule for the cron as a cronspec string: (seconds) (minutes) (hours) (day of month) (month) (day of week) or @hourly, @weekly, etc.", + "type": "string" + }, + "stopTimeout": { + "description": "This long after SIGTERM is sent, SIGKILL will be sent if the process is still alive", + "type": "string" + } + } + }, + "tmpforms.EmptyForm": {}, + "tmpforms.JobSubmission": { + "required": [ + "action", + "parameters", + "retries", + "capacityMap", + "deadline", + "stopTimeout", + "scheduledAt" + ], + "properties": { + "action": { + "description": "The action which the job will perform", + "type": "string" + }, + "capacityMap": { + "description": "The map of required capacity", + "type": "object", + "additionalProperties": { + "type": "integer" + } + }, + "deadline": { + "description": "After this amount of time has passed, a SIGTERM will be sent", + "type": "string" + }, + "parameters": { + "description": "Parameters to start the job with", + 
"type": "object", + "additionalProperties": { + "type": "string" + } + }, + "retries": { + "description": "The number of times to retry a job if it fails", + "type": "integer", + "format": "int32" + }, + "scheduledAt": { + "description": "The time at which to run the job. Empty string or no value means now. Format: RFC3339", + "type": "string" + }, + "stopTimeout": { + "description": "This long after SIGTERM is sent, SIGKILL will be sent if the proccess is still alive", + "type": "string" + } + } + }, + "tmpforms.UpdateWorkerCapacity": { + "required": [ + "capacityMap" + ], + "properties": { + "capacityMap": { + "description": "The new capacity for the worker, representing roughly the amount of RAM to use", + "type": "object", + "additionalProperties": { + "type": "integer" + } + } + } + }, + "tmpresponses.ActionConfig": { + "required": [ + "id", + "action", + "parameters" + ], + "properties": { + "action": { + "description": "The action this config refers to.", + "type": "string" + }, + "id": { + "description": "Randomly generated UUID for foreign references.", + "type": "string" + }, + "parameters": { + "description": "Extra parameters to pass to the job. 
The available parameters depend on the job.", + "type": "object", + "additionalProperties": { + "type": "string" + } + } + } + }, + "tmpresponses.ActionConfigs": { + "required": [ + "actionConfigs" + ], + "properties": { + "actionConfigs": { + "type": "array", + "items": { + "$ref": "#/definitions/tmpresponses.ActionConfig" + } + } + } + }, + "tmpresponses.Cron": { + "required": [ + "id", + "action", + "schedule", + "retries", + "capacityMap", + "parameters", + "deadline", + "stopTimeout", + "nextRun" + ], + "properties": { + "action": { + "description": "The action to be performed by jobs spawned from this cron.", + "type": "string" + }, + "capacityMap": { + "description": "The map of required capacity", + "type": "object", + "additionalProperties": { + "type": "integer" + } + }, + "deadline": { + "type": "string" + }, + "id": { + "description": "Randomly generated UUID for foreign references.", + "type": "string" + }, + "nextRun": { + "description": "The next time the job will run.", + "type": "string", + "format": "date-time" + }, + "parameters": { + "description": "Extra parameters to pass to the job. 
The available parameters depend on the job.", + "type": "object", + "additionalProperties": { + "type": "string" + } + }, + "retries": { + "description": "The number of times to retry the job if it fails", + "type": "integer", + "format": "int32" + }, + "schedule": { + "description": "The schedule for this cron as a cronspec string: (seconds) (minutes) (hours) (day of month) (month) (day of week) or @hourly, @weekly, etc.", + "type": "string" + }, + "stopTimeout": { + "description": "This long after SIGTERM is sent, SIGKILL will be sent if the proccess is still alive", + "type": "string" + } + } + }, + "tmpresponses.Crons": { + "required": [ + "crons" + ], + "properties": { + "crons": { + "type": "array", + "items": { + "$ref": "#/definitions/tmpresponses.Cron" + } + } + } + }, + "tmpresponses.Job": { + "required": [ + "id", + "retryFromID", + "workerID", + "status", + "scheduledAt", + "lastUpdated", + "action", + "retriesLeft", + "retriesTotal", + "capacityMap", + "parameters", + "deadline", + "stopTimeout" + ], + "properties": { + "action": { + "description": "The action this job performs", + "type": "string" + }, + "capacityMap": { + "description": "The map of required capacity", + "type": "object", + "additionalProperties": { + "type": "integer" + } + }, + "deadline": { + "type": "string" + }, + "id": { + "description": "The ID of the job", + "type": "string" + }, + "lastUpdated": { + "description": "The last time at which the status of this job was updated", + "type": "string", + "format": "date-time" + }, + "parameters": { + "description": "Extra parameters to pass to the job. 
The available parameters depend on the job.", + "type": "object", + "additionalProperties": { + "type": "string" + } + }, + "retriesLeft": { + "description": "The number of times to retry the job if it fails", + "type": "integer", + "format": "int32" + }, + "retriesTotal": { + "description": "The total number of times to retry the original job if it fails", + "type": "integer", + "format": "int32" + }, + "retryFromID": { + "description": "The ID of the job this job retried from", + "type": "string" + }, + "scheduledAt": { + "description": "The time at which this job was scheduled", + "type": "string", + "format": "date-time" + }, + "status": { + "description": "The current status of the job", + "type": "string", + "enum": [ + "waiting", + "running", + "done", + "canceled", + "errored" + ] + }, + "stopTimeout": { + "description": "This long after SIGTERM is sent, SIGKILL will be sent if the proccess is still alive", + "type": "string" + }, + "workerID": { + "description": "The ID of the worker which performed the job, unclaimed by a worker if empty", + "type": "string" + } + } + }, + "tmpresponses.JobLog": { + "required": [ + "data", + "lineNum" + ], + "properties": { + "data": { + "type": "string" + }, + "lineNum": { + "type": "integer", + "format": "int32" + } + } + }, + "tmpresponses.Jobs": { + "required": [ + "jobs" + ], + "properties": { + "jobs": { + "type": "array", + "items": { + "$ref": "#/definitions/tmpresponses.Job" + } + } + } + }, + "tmpresponses.Worker": { + "required": [ + "id", + "status", + "capacityMap", + "heartbeatExpiration" + ], + "properties": { + "capacityMap": { + "description": "A map used to represent now much load the worker should be allocated. 
Only security scanning jobs use this and the value is roughly equivalent to expected memory usage in bytes.", + "type": "object", + "additionalProperties": { + "type": "integer" + } + }, + "heartbeatExpiration": { + "description": "Time after which the worker should be considered dead.", + "type": "string" + }, + "id": { + "description": "Randomly generated UUID for foreign references.", + "type": "string" + }, + "status": { + "description": "Status of the worker", + "type": "string" + } + } + }, + "tmpresponses.Workers": { + "required": [ + "workers" + ], + "properties": { + "workers": { + "type": "array", + "items": { + "$ref": "#/definitions/tmpresponses.Worker" + } + } + } + } + }, + "tags": [ + { + "description": "Accounts", + "name": "accounts" + }, + { + "description": "Admin", + "name": "meta" + }, + { + "description": "Content Caches", + "name": "content_caches" + }, + { + "description": "Repositories", + "name": "repositories" + }, + { + "description": "Repository Namespaces", + "name": "repositoryNamespaces" + }, + { + "description": "Events", + "name": "events" + }, + { + "description": "Docker Security Scanner", + "name": "imagescan" + }, + { + "description": "Webhooks", + "name": "webhooks" + }, + { + "description": "Jobs", + "name": "jobs" + }, + { + "description": "Crons", + "name": "crons" + }, + { + "description": "Workers", + "name": "workers" + }, + { + "description": "Action Configs", + "name": "action_configs" + } + ] +} diff --git a/reference/dtr/2.6/cli/install.md b/reference/dtr/2.6/cli/install.md index 36cfed2311..325a09f5b4 100644 --- a/reference/dtr/2.6/cli/install.md +++ b/reference/dtr/2.6/cli/install.md @@ -23,11 +23,13 @@ After installing DTR, you can join additional DTR replicas using `docker/dtr joi ## Example Usage +```bash $ docker run -it --rm docker/dtr:{{ site.dtr_version }}.0 install \ --ucp-node \ --ucp-insecure-tls +``` -> Note: Use `--ucp-ca "$(cat ca.pem)"` instead of `--ucp-insecure-tls` for a production deployment. 
+> **Note**: Use `--ucp-ca "$(cat ca.pem)"` instead of `--ucp-insecure-tls` for a production deployment. ## Options diff --git a/reference/dtr/2.6/cli/reconfigure.md b/reference/dtr/2.6/cli/reconfigure.md index 1c4b124dc1..796144240f 100644 --- a/reference/dtr/2.6/cli/reconfigure.md +++ b/reference/dtr/2.6/cli/reconfigure.md @@ -26,7 +26,7 @@ time, configure your DTR for high availability. | Option | Environment Variable | Description | |:------------------------------|:--------------------------|:-------------------------------------------------------------------------------------| -| `--debug` | $DEBUG | Enable debug mode for additional logs. | +| `--debug` | $DEBUG | Enable debug mode for additional logs of this bootstrap container (the log level of downstream DTR containers can be set with `--log-level`). | | `--dtr-ca` | $DTR_CA | Use a PEM-encoded TLS CA certificate for DTR. By default DTR generates a self-signed TLS certificate during deployment. You can use your own root CA public certificate with `--dtr-ca "$(cat ca.pem)"`. | | `--dtr-cert` | $DTR_CERT | Use a PEM-encoded TLS certificate for DTR. By default DTR generates a self-signed TLS certificate during deployment. You can use your own public key certificate with `--dtr-cert "$(cat cert.pem)"`. If the certificate has been signed by an intermediate certificate authority, append its public key certificate at the end of the file to establish a chain of trust. | | `--dtr-external-url` | $DTR_EXTERNAL_URL | URL of the host or load balancer clients use to reach DTR. When you use this flag, users are redirected to UCP for logging in. Once authenticated they are redirected to the url you specify in this flag. If you don't use this flag, DTR is deployed without single sign-on with UCP. Users and teams are shared but users login separately into the two applications. You can enable and disable single sign-on in the DTR settings. 
Format `https://host[:port]`, where port is the value you used with `--replica-https-port`. | diff --git a/reference/ucp/3.1/cli/install.md b/reference/ucp/3.1/cli/install.md index 355ec80ed1..5528f1cdc3 100644 --- a/reference/ucp/3.1/cli/install.md +++ b/reference/ucp/3.1/cli/install.md @@ -42,41 +42,42 @@ command. | Option | Description | |:-------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| `--debug, D` | Enable debug mode | -| `--jsonlog` | Produce json formatted output for easier parsing | -| `--interactive, i` | Run in interactive mode and prompt for configuration values | -| `--admin-username` | The UCP administrator username | | `--admin-password` | The UCP administrator password | -| `--san` | Add subject alternative names to certificates (e.g. --san www1.acme.com --san www2.acme.com) | | `--unmanaged-cni` | This determines who manages the CNI plugin, using `true` or `false`. The default is `false`. The `true` value installs UCP without a managed CNI plugin. UCP and the Kubernetes components will be running but pod to pod networking will not function until a CNI plugin is manually installed. This will impact some functionality of UCP until a CNI plugin is running. | -| `--host-address` | The network address to advertise to other nodes. Format: IP address or network interface name | -| `--data-path-addr` | Address or interface to use for data path traffic. Format: IP address or network interface name | -| `--controller-port` | Port for the web UI and API | -| `--kube-apiserver-port` | Port for the Kubernetes API server (default: 6443) | -| `--swarm-port` | Port for the Docker Swarm manager. 
Used for backwards compatibility | -| `--swarm-grpc-port` | Port for communication between nodes | +| `--admin-username` | The UCP administrator username | +| `--binpack` | Set the Docker Swarm scheduler to binpack mode. Used for backwards compatibility | +| `--cloud-provider` | The cloud provider for the cluster | `--cni-installer-url` | Deprecated feature. A URL pointing to a Kubernetes YAML file to be used as an installer for the CNI plugin of the cluster. If specified, the default CNI plugin is not installed. If the URL uses the HTTPS scheme, no certificate verification is performed. | -| `--unmanaged-cni` | flag to indicate if cni provider is calico and managed by UCP (calico is the default CNI provider). The default value is "true" when using the default Calico CNI. | -| `--pod-cidr` | Kubernetes cluster IP pool for the pods to allocated IPs from (Default: `192.168.0.0/16`) | -| `--cloud-provider` | The cloud provider for the cluster | -| `--dns` | Set custom DNS servers for the UCP containers | -| `--dns-opt` | Set DNS options for the UCP containers | -| `--dns-search` | Set custom DNS search domains for the UCP containers | -| `--unlock-key` | The unlock key for this swarm-mode cluster, if one exists. | -| `--existing-config` | Use the latest existing UCP config during this installation. The install fails if a config is not found. | -| `--force-minimums` | Force the install/upgrade even if the system doesn't meet the minimum requirements. | -| `--pull` | Pull UCP images: `always`, when `missing`, or `never` | -| `--registry-username` | Username to use when pulling images | -| `--registry-password` | Password to use when pulling images | -| `--kv-timeout` | Timeout in milliseconds for the key-value store | -| `--kv-snapshot-count` | Number of changes between key-value store snapshots | -| `--swarm-experimental` | Enable Docker Swarm experimental features. 
Used for backwards compatibility | +| `--controller-port` | Port for the web UI and API | +| `--data-path-addr` | Address or interface to use for data path traffic. Format: IP address or network interface name | +| `--debug, D` | Enable debug mode | | `--disable-tracking` | Disable anonymous tracking and analytics | | `--disable-usage` | Disable anonymous usage reporting | -| `--external-server-cert` | Use the certificates in the `ucp-controller-server-certs` volume instead of generating self-signed certs during installation | -| `--preserve-certs` | Don't generate certificates if they already exist | -| `--binpack` | Set the Docker Swarm scheduler to binpack mode. Used for backwards compatibility | -| `--random` | Set the Docker Swarm scheduler to random mode. Used for backwards compatibility | -| `--external-service-lb` | Set the external service load balancer reported in the UI | +| `--dns` | Set custom DNS servers for the UCP containers | +| `--dns-opt` | Set DNS options for the UCP containers | +| `--dns-search` | Set custom DNS search domains for the UCP containers | | `--enable-profiling` | Enable performance profiling | -| `--license` | Add a license: e.g.` --license "$(cat license.lic)" ` | +| `--existing-config` | Use the latest existing UCP config during this installation. The install fails if a config is not found. | +| `--external-server-cert` | Use the certificates in the `ucp-controller-server-certs` volume instead of generating self-signed certs during installation | +| `--external-service-lb` | Set the external service load balancer reported in the UI | | `--force-insecure-tcp` | Force install to continue even with unauthenticated Docker Engine ports | +| `--force-minimums` | Force the install/upgrade even if the system doesn't meet the minimum requirements. | +| `--host-address` | The network address to advertise to other nodes. 
Format: IP address or network interface name | +| `--interactive, i` | Run in interactive mode and prompt for configuration values | +| `--jsonlog` | Produce JSON-formatted output for easier parsing | +| `--kube-apiserver-port` | Port for the Kubernetes API server (default: 6443) | +| `--kv-snapshot-count` | Number of changes between key-value store snapshots | +| `--kv-timeout` | Timeout in milliseconds for the key-value store | +| `--license` | Add a license, e.g. `--license "$(cat license.lic)"` | +| `--pod-cidr` | Kubernetes cluster IP pool from which pods are allocated IPs (Default: `192.168.0.0/16`) | +| `--preserve-certs` | Don't generate certificates if they already exist | +| `--pull` | Pull UCP images: `always`, when `missing`, or `never` | +| `--random` | Set the Docker Swarm scheduler to random mode. Used for backwards compatibility | +| `--registry-username` | Username to use when pulling images | +| `--registry-password` | Password to use when pulling images | +| `--san` | Add subject alternative names to certificates (e.g. --san www1.acme.com --san www2.acme.com) | +| `--skip-cloud-provider` | Disables checks that rely on detecting the cloud provider (if any) on which the cluster is currently running. | +| `--swarm-experimental` | Enable Docker Swarm experimental features. Used for backwards compatibility | +| `--swarm-port` | Port for the Docker Swarm manager. Used for backwards compatibility | +| `--swarm-grpc-port` | Port for communication between nodes | +| `--unlock-key` | The unlock key for this swarm-mode cluster, if one exists. | +| `--unmanaged-cni` | The default value of `false` indicates that Kubernetes networking is managed by UCP with its default managed CNI plugin, Calico. When set to `true`, UCP does not deploy or manage the lifecycle of the default CNI plugin; the CNI plugin is deployed and managed independently of UCP. 
Note that when `unmanaged-cni=true`, networking in the cluster will not function for Kubernetes until a CNI plugin is deployed. | diff --git a/registry/help.md b/registry/help.md index ff5a76077b..694af283a1 100644 --- a/registry/help.md +++ b/registry/help.md @@ -7,7 +7,7 @@ title: Get help If you need help, or just want to chat, you can reach us: - on the [Docker forums](https://forums.docker.com/c/open-source-projects/opensrcreg). -- on the [Docker community Slack](https://dockercommunity.slack.com/messages/C31GQCJN7/). +- on the [Docker community Slack](https://blog.docker.com/2016/11/introducing-docker-community-directory-docker-community-slack/). - on the [mailing list](https://groups.google.com/a/dockerproject.org/forum/#!forum/distribution) (mail at ). If you want to report a bug: diff --git a/registry/recipes/nginx.md b/registry/recipes/nginx.md index 6673b2a22b..07ed4b62be 100644 --- a/registry/recipes/nginx.md +++ b/registry/recipes/nginx.md @@ -38,7 +38,7 @@ you want through the secondary authentication mechanism implemented inside your proxy, it also requires that you move TLS termination from the Registry to the proxy itself. -> ***NOTE:*** Docker does not recommend binding your registry to `localhost:5000` without +> **Note**: Docker does not recommend binding your registry to `localhost:5000` without > authentication. This creates a potential loophole in your Docker Registry security. > As a result, anyone who can log on to the server where your Docker Registry is running > can push images without authentication. diff --git a/release-notes/docker-compose.md b/release-notes/docker-compose.md index d4a72da42b..5021768df4 100644 --- a/release-notes/docker-compose.md +++ b/release-notes/docker-compose.md @@ -954,7 +954,7 @@ naming scheme accordingly before upgrading. - Containers dependencies can now be set up to wait on positive healthchecks when declared using `depends_on`. See the documentation for the updated syntax. 
- **Note:** This feature will not be ported to version 3 Compose files. + **Note**: This feature will not be ported to version 3 Compose files. - Added support for the `sysctls` parameter in service definitions diff --git a/storage/storagedriver/overlayfs-driver.md b/storage/storagedriver/overlayfs-driver.md index 3c8092dd18..a8404d1c67 100644 --- a/storage/storagedriver/overlayfs-driver.md +++ b/storage/storagedriver/overlayfs-driver.md @@ -16,8 +16,8 @@ storage driver as `overlay` or `overlay2`. > **Note**: If you use OverlayFS, use the `overlay2` driver rather than the > `overlay` driver, because it is more efficient in terms of inode utilization. > To use the new driver, you need version 4.0 or higher of the Linux kernel, -> unless you are a Docker EE user on RHEL or CentOS, in which case you need -> version 3.10.0-514 or higher of the kernel and to follow some extra steps. +> or RHEL or CentOS with kernel version 3.10.0-514 or higher. When using Docker EE +> on RHEL or CentOS, you need to follow some extra steps. > > For more information about differences between `overlay` vs `overlay2`, check > [Docker storage drivers](select-storage-driver.md). diff --git a/storage/storagedriver/zfs-driver.md b/storage/storagedriver/zfs-driver.md index 46f83af919..2f3faa159a 100644 --- a/storage/storagedriver/zfs-driver.md +++ b/storage/storagedriver/zfs-driver.md @@ -44,7 +44,7 @@ use unless you have substantial experience with ZFS on Linux. and push existing images to Docker Hub or a private repository, so that you do not need to re-create them later. -> ***NOTE:*** There is no need to use `MountFlags=slave` with Docker Engine 18.09 or +> **Note**: There is no need to use `MountFlags=slave` with Docker Engine 18.09 or > later because `dockerd` and `containerd` are in different mount namespaces. 
## Configure Docker with the `zfs` storage driver diff --git a/test.md b/test.md index 5b04db1254..e747cf2620 100644 --- a/test.md +++ b/test.md @@ -709,6 +709,15 @@ incoming := map[string]interface{}{ } ``` +### PowerShell + +```powershell +Install-Module DockerMsftProvider -Force +Install-Package Docker -ProviderName DockerMsftProvider -Force +[System.Environment]::SetEnvironmentVariable("DOCKER_FIPS", "1", "Machine") +Expand-Archive docker-18.09.1.zip -DestinationPath $Env:ProgramFiles -Force +``` + ### Python ```python