Merge branch '2019-09-patch-release-notes' into interlock3-updates
|
|
@ -1512,7 +1512,7 @@ manuals:
|
||||||
- title: Specifying a routing mode
|
- title: Specifying a routing mode
|
||||||
path: /ee/ucp/interlock/usage/interlock-vip-mode/
|
path: /ee/ucp/interlock/usage/interlock-vip-mode/
|
||||||
- title: Using routing labels
|
- title: Using routing labels
|
||||||
path: /ee/ucp/interlock/usage/labels-reference.md/
|
path: /ee/ucp/interlock/usage/labels-reference/
|
||||||
- title: Implementing redirects
|
- title: Implementing redirects
|
||||||
path: /ee/ucp/interlock/usage/redirects/
|
path: /ee/ucp/interlock/usage/redirects/
|
||||||
- title: Implementing a service cluster
|
- title: Implementing a service cluster
|
||||||
|
|
@ -4041,6 +4041,8 @@ manuals:
|
||||||
title: Trust Chain
|
title: Trust Chain
|
||||||
- path: /docker-hub/publish/byol/
|
- path: /docker-hub/publish/byol/
|
||||||
title: Bring Your Own License (BYOL)
|
title: Bring Your Own License (BYOL)
|
||||||
|
- path: /docker-hub/deactivate-account/
|
||||||
|
title: Deactivate an account or an organization
|
||||||
- sectiontitle: Open-source projects
|
- sectiontitle: Open-source projects
|
||||||
section:
|
section:
|
||||||
- sectiontitle: Docker Notary
|
- sectiontitle: Docker Notary
|
||||||
|
|
|
||||||
|
|
@ -45,13 +45,13 @@ your client and daemon API versions.
|
||||||
|
|
||||||
{% if site.data[include.datafolder][include.datafile].experimentalcli %}
|
{% if site.data[include.datafolder][include.datafile].experimentalcli %}
|
||||||
|
|
||||||
> This command is experimental.
|
> This command is experimental on the Docker client.
|
||||||
|
>
|
||||||
|
> **It should not be used in production environments.**
|
||||||
>
|
>
|
||||||
> This command is experimental on the Docker client. It should not be used in
|
|
||||||
> production environments.
|
|
||||||
> To enable experimental features in the Docker CLI, edit the
|
> To enable experimental features in the Docker CLI, edit the
|
||||||
> [config.json](/engine/reference/commandline/cli.md#configuration-files)
|
> [config.json](/engine/reference/commandline/cli.md#configuration-files)
|
||||||
> and set `experimental` to `enabled`. You can go [here](https://github.com/docker/docker.github.io/blob/master/_includes/experimental.md)
|
> and set `experimental` to `enabled`. See [Experimental features](https://docs.docker.com/engine/reference/commandline/cli/#experimental-features)
|
||||||
> for more information.
|
> for more information.
|
||||||
{: .important }
|
{: .important }
|
||||||
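For reference, the relevant fragment of `~/.docker/config.json` looks like this (a minimal sketch; any other keys in the file are left untouched):

```json
{
  "experimental": "enabled"
}
```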
|
|
||||||
|
|
|
||||||
|
|
@ -9,29 +9,27 @@
|
||||||
</div>
|
</div>
|
||||||
<div class="navbar-collapse collapse">
|
<div class="navbar-collapse collapse">
|
||||||
<ul class="primary nav navbar-nav">
|
<ul class="primary nav navbar-nav">
|
||||||
<li><a href="https://docker.com/what-docker">What is Docker?</a></li>
|
<li><a href="https://docker.com/why-docker">Why Docker?</a></li>
|
||||||
<li><a href="https://docker.com/get-docker">Product</a></li>
|
<li><a href="https://docker.com/get-started">Product</a></li>
|
||||||
<li class="dropdown">
|
<li class="dropdown">
|
||||||
<a href="#" class="dropdown-toggle" data-toggle="dropdown" role="button" aria-haspopup="true" aria-expanded="false">Get Docker <span class="caret"></span></a>
|
<a href="#" class="dropdown-toggle" data-toggle="dropdown" role="button" aria-haspopup="true" aria-expanded="false">Get Docker <span class="caret"></span></a>
|
||||||
<ul class="dropdown-menu nav-main">
|
<ul class="dropdown-menu nav-main">
|
||||||
<h6 class="dropdown-header">For Desktops</h6>
|
<h6 class="dropdown-header">For Desktop</h6>
|
||||||
<li><a href="https://docker.com/docker-mac">Mac</a></li>
|
<li><a href="https://docker.com/products/docker-desktop">Mac & Windows</a></li>
|
||||||
<li><a href="https://docker.com/docker-windows">Windows</a></li>
|
|
||||||
<h6 class="dropdown-header">For Cloud Providers</h6>
|
<h6 class="dropdown-header">For Cloud Providers</h6>
|
||||||
<li><a href="https://docker.com/docker-aws">AWS</a></li>
|
<li><a href="https://docker.com/partners/aws">AWS</a></li>
|
||||||
<li><a href="https://docker.com/docker-microsoft-azure">Azure</a></li>
|
<li><a href="https://docker.com/partners/microsoft">Azure</a></li>
|
||||||
<h6 class="dropdown-header">For Servers</h6>
|
<h6 class="dropdown-header">For Servers</h6>
|
||||||
<li><a href="https://docker.com/docker-windows-server">Windows Server</a></li>
|
<li><a href="https://hub.docker.com/editions/enterprise/docker-ee-server-windows">Windows Server</a></li>
|
||||||
<li><a href="https://docker.com/docker-centos">CentOS</a></li>
|
<li><a href="https://hub.docker.com/editions/enterprise/docker-ee-server-centos">CentOS</a></li>
|
||||||
<li><a href="https://docker.com/docker-debian">Debian</a></li>
|
<li><a href="https://hub.docker.com/editions/community/docker-ce-server-debian">Debian</a></li>
|
||||||
<li><a href="https://docker.com/docker-fedora">Fedora</a></li>
|
<li><a href="https://hub.docker.com/editions/community/docker-ce-server-fedora">Fedora</a></li>
|
||||||
<li><a href="https://docker.com/docker-oracle-linux">Oracle Enterprise Linux</a></li>
|
<li><a href="https://hub.docker.com/editions/enterprise/docker-ee-server-oraclelinux">Oracle Enterprise Linux</a></li>
|
||||||
<li><a href="https://docker.com/docker-rhel">RHEL</a></li>
|
<li><a href="https://hub.docker.com/editions/enterprise/docker-ee-server-rhel">RHEL</a></li>
|
||||||
<li><a href="https://docker.com/docker-sles">SLES</a></li>
|
<li><a href="https://hub.docker.com/editions/enterprise/docker-ee-server-sles">SLES</a></li>
|
||||||
<li><a href="https://docker.com/docker-ubuntu">Ubuntu</a></li>
|
<li><a href="https://hub.docker.com/editions/enterprise/docker-ee-server-ubuntu">Ubuntu</a></li>
|
||||||
</ul>
|
</ul>
|
||||||
</li>
|
</li>
|
||||||
<li><a href="https://docs.docker.com">Docs</a></li>
|
|
||||||
<li><a href="https://docker.com/docker-community">Community</a></li>
|
<li><a href="https://docker.com/docker-community">Community</a></li>
|
||||||
<li><a href="https://hub.docker.com/signup">Create Docker ID</a></li>
|
<li><a href="https://hub.docker.com/signup">Create Docker ID</a></li>
|
||||||
<li><a href="https://hub.docker.com/sso/start">Sign In</a></li>
|
<li><a href="https://hub.docker.com/sso/start">Sign In</a></li>
|
||||||
|
|
|
||||||
|
|
@ -10,19 +10,33 @@ keywords: Docker, application template, Application Designer,
|
||||||
|
|
||||||
## Overview
|
## Overview
|
||||||
|
|
||||||
Docker Template is a CLI plugin that introduces a top-level `docker template` command that allows users to create new Docker applications by using a library of templates. There are two types of templates — service templates and application templates.
|
Docker Template is a CLI plugin that introduces a top-level `docker template`
|
||||||
|
command that allows users to create new Docker applications by using a library
|
||||||
|
of templates. There are two types of templates — service templates and
|
||||||
|
application templates.
|
||||||
|
|
||||||
A _service template_ is a container image that generates code and contains the metadata associated with the image.
|
A _service template_ is a container image that generates code and contains the
|
||||||
|
metadata associated with the image.
|
||||||
|
|
||||||
- The container image takes `/run/configuration` mounted file as input to generate assets such as code, Dockerfile, and `docker-compose.yaml` for a given service, and writes the output to the `/project` mounted folder.
|
- The container image takes `/run/configuration` mounted file as input to
|
||||||
|
generate assets such as code, Dockerfile, and `docker-compose.yaml` for a
|
||||||
|
given service, and writes the output to the `/project` mounted folder.
|
||||||
|
|
||||||
- The metadata file that describes the service template is called the service definition. It contains the name of the service, description, and available parameters such as ports, volumes, etc. For a complete list of parameters that are allowed, see [Docker Template API reference](/ee/app-template/api-reference).
|
- The metadata file that describes the service template is called the service
|
||||||
|
definition. It contains the name of the service, description, and available
|
||||||
|
parameters such as ports, volumes, etc. For a complete list of parameters that
|
||||||
|
are allowed, see [Docker Template API
|
||||||
|
reference](/ee/app-template/api-reference).
|
||||||
|
|
||||||
An _application template_ is a collection of one or more service templates. An application template generates a Dockerfile per service and only one Compose file for the entire application, aggregating all services.
|
An _application template_ is a collection of one or more service templates. An
|
||||||
|
application template generates a Dockerfile per service and only one Compose
|
||||||
|
file for the entire application, aggregating all services.
|
||||||
|
|
||||||
## Create a custom service template
|
## Create a custom service template
|
||||||
|
|
||||||
A Docker template contains a predefined set of service and application templates. To create a custom template based on your requirements, you must complete the following steps:
|
A Docker template contains a predefined set of service and application
|
||||||
|
templates. To create a custom template based on your requirements, you must
|
||||||
|
complete the following steps:
|
||||||
|
|
||||||
1. Create a service container image
|
1. Create a service container image
|
||||||
2. Create the service template definition
|
2. Create the service template definition
|
||||||
|
|
@ -31,52 +45,36 @@ A Docker template contains a predefined set of service and application templates
|
||||||
|
|
||||||
### Create a service container image
|
### Create a service container image
|
||||||
|
|
||||||
A service template provides the description required by Docker Template to scaffold a project. A service template runs inside a container with two bind mounts:
|
A service template provides the description required by Docker Template to
|
||||||
|
scaffold a project. A service template runs inside a container with two bind
|
||||||
|
mounts:
|
||||||
|
|
||||||
1. `/run/configuration`, a JSON file which contains all settings such as parameters, image name, etc. For example:
|
1. `/run/configuration`, a JSON file which contains all settings such as
|
||||||
|
parameters, image name, etc. For example:
|
||||||
|
|
||||||
```json
|
```json
|
||||||
{
|
{
|
||||||
"parameters": {
|
"parameters": {
|
||||||
"externalPort": "80",
|
"externalPort": "80",
|
||||||
"artifactId": "com.company.app"
|
"artifactId": "com.company.app"
|
||||||
},
|
},
|
||||||
...
|
...
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
2. `/project`, the output folder to which the container image writes the generated assets.
|
2. `/project`, the output folder to which the container image writes the generated assets.
|
||||||
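To make this contract concrete, a minimal service template image can simply copy a static Compose file into the output folder. This sketch ignores `/run/configuration` entirely and performs no parameter handling:

```conf
FROM alpine
COPY docker-compose.yaml .
CMD cp docker-compose.yaml /project/
```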
|
|
||||||
#### Basic service template
|
#### Basic service template
|
||||||
|
|
||||||
To create a basic service template, you need to create two files — a dockerfile and a docker compose file in a new folder. For example, to create a new MySQL service template, create the following files in a folder called `my-service`:
|
Services that generate a template using code must contain the following files
|
||||||
|
that are valid:
|
||||||
|
|
||||||
`docker-compose.yaml`
|
- A *Dockerfile* located at the root of the `my-service` folder. This is the
|
||||||
|
Dockerfile that is used for the service when running the application.
|
||||||
|
|
||||||
```yaml
|
- A *docker-compose.yaml* file located at the root of the `my-service` folder.
|
||||||
version: "3.6"
|
The `docker-compose.yaml` file must contain the service declaration and any
|
||||||
services:
|
optional volumes or secrets.
|
||||||
mysql:
|
|
||||||
image: mysql
|
|
||||||
```
|
|
||||||
|
|
||||||
`Dockerfile`
|
|
||||||
|
|
||||||
```conf
|
|
||||||
FROM alpine
|
|
||||||
COPY docker-compose.yaml .
|
|
||||||
CMD cp docker-compose.yaml /project/
|
|
||||||
```
|
|
||||||
|
|
||||||
This adds a MySQL service to your application.
|
|
||||||
|
|
||||||
#### Create a service with code
|
|
||||||
|
|
||||||
Services that generate a template using code must contain the following files that are valid:
|
|
||||||
|
|
||||||
- A *Dockerfile* located at the root of the `my-service` folder. This is the Dockerfile that is used for the service when running the application.
|
|
||||||
|
|
||||||
- A *docker-compose.yaml* file located at the root of the `my-service` folder. The `docker-compose.yaml` file must contain the service declaration and any optional volumes or secrets.
|
|
||||||
|
|
||||||
Here’s an example of a simple NodeJS service:
|
Here’s an example of a simple NodeJS service:
|
||||||
|
|
||||||
|
|
@ -124,9 +122,11 @@ COPY . .
|
||||||
CMD ["yarn", "run", "start"]
|
CMD ["yarn", "run", "start"]
|
||||||
```
|
```
|
||||||
|
|
||||||
> **Note:** After scaffolding the template, you can add the default files your template contains to the `assets` folder.
|
> **Note:** After scaffolding the template, you can add the default files your
|
||||||
|
> template contains to the `assets` folder.
|
||||||
|
|
||||||
The next step is to build and push the service template image to a remote repository by running the following command:
|
The next step is to build and push the service template image to a remote
|
||||||
|
repository by running the following command:
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
cd [...]/my-service
|
cd [...]/my-service
|
||||||
|
|
@ -134,20 +134,17 @@ docker build -t org/my-service .
|
||||||
docker push org/my-service
|
docker push org/my-service
|
||||||
```
|
```
|
||||||
|
|
||||||
To build and push the image to an instance of Docker Trusted Registry(DTR), or to an external registry, specify the name of the repository:
|
|
||||||
|
|
||||||
```bash
|
|
||||||
cd [...]/my-service
|
|
||||||
docker build -t myrepo:5000/my-service .
|
|
||||||
docker push myrepo:5000/my-service
|
|
||||||
```
|
|
||||||
|
|
||||||
### Create the service template definition
|
### Create the service template definition
|
||||||
|
|
||||||
The service definition contains metadata that describes a service template. It contains the name of the service, description, and available parameters such as ports, volumes, etc.
|
The service definition contains metadata that describes a service template. It
|
||||||
After creating the service definition, you can proceed to [Add templates to Docker Template](#add-templates-to-docker-template) to add the service definition to the Docker Template repository.
|
contains the name of the service, description, and available parameters such as
|
||||||
|
ports, volumes, etc. After creating the service definition, you can proceed to
|
||||||
|
[Add templates to Docker Template](#add-templates-to-docker-template) to add
|
||||||
|
the service definition to the Docker Template repository.
|
||||||
|
|
||||||
Of all the available service and application definitions, Docker Template has access to only one catalog, referred to as the ‘repository’. It uses the catalog content to display service and application templates to the end user.
|
Of all the available service and application definitions, Docker Template has
|
||||||
|
access to only one catalog, referred to as the ‘repository’. It uses the
|
||||||
|
catalog content to display service and application templates to the end user.
|
||||||
|
|
||||||
Here is an example of the Express service definition:
|
Here is an example of the Express service definition:
|
||||||
|
|
||||||
|
|
@ -155,7 +152,9 @@ Here is an example of the Express service definition:
|
||||||
- apiVersion: v1alpha1 # constant
|
- apiVersion: v1alpha1 # constant
|
||||||
kind: ServiceTemplate # constant
|
kind: ServiceTemplate # constant
|
||||||
metadata:
|
metadata:
|
||||||
name: Express # the name of the service
|
name: Express # the name of the service
|
||||||
|
platforms:
|
||||||
|
- linux
|
||||||
spec:
|
spec:
|
||||||
title: Express # The title/label of the service
|
title: Express # The title/label of the service
|
||||||
icon: https://docker-application-template.s3.amazonaws.com/assets/express.png # url for an icon
|
icon: https://docker-application-template.s3.amazonaws.com/assets/express.png # url for an icon
|
||||||
|
|
@ -164,21 +163,32 @@ Here is an example of the Express service definition:
|
||||||
image: org/my-service:latest
|
image: org/my-service:latest
|
||||||
```
|
```
|
||||||
|
|
||||||
The most important section here is `image: org/my-service:latest`. This is the image associated with this service template. You can use this line to point to any image. For example, you can use an Express image directly from the hub `docker.io/dockertemplate/express:latest` or from the DTR private repository `myrepo:5000/my-service:latest`. The other properties in the service definition are mostly metadata for display and indexation purposes.
|
The most important section here is `image: org/my-service:latest`. This is the
|
||||||
|
image associated with this service template. You can use this line to point to
|
||||||
|
any image. For example, you can use an Express image directly from the hub
|
||||||
|
`docker.io/dockertemplate/express:latest` or from the DTR private repository
|
||||||
|
`myrepo/my-service:latest`. The other properties in the service definition are
|
||||||
|
mostly metadata for display and indexation purposes.
|
||||||
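For instance, pointing the definition at the image pushed earlier is a one-line change in the spec (a fragment; the other fields from the example above are unchanged):

```yaml
spec:
  title: Express
  image: myrepo/my-service:latest
```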
|
|
||||||
#### Adding parameters to the service
|
#### Adding parameters to the service
|
||||||
|
|
||||||
Now that you have created a simple express service, you can customize it based on your requirements. For example, you can choose the version of NodeJS to use when running the service.
|
Now that you have created a simple express service, you can customize it based
|
||||||
|
on your requirements. For example, you can choose the version of NodeJS to use
|
||||||
|
when running the service.
|
||||||
|
|
||||||
To customize a service, you need to complete the following tasks:
|
To customize a service, you need to complete the following tasks:
|
||||||
|
|
||||||
1. Declare the parameters in the service definition. This tells Docker Template whether or not the CLI can accept the parameters, and allows the [Application Designer](/ee/desktop/app-designer) to be aware of the new options.
|
1. Declare the parameters in the service definition. This tells Docker Template
|
||||||
|
whether or not the CLI can accept the parameters, and allows the
|
||||||
|
[Application Designer](/ee/desktop/app-designer) to be aware of the new
|
||||||
|
options.
|
||||||
|
|
||||||
2. Use the parameters during service construction.
|
2. Use the parameters during service construction.
|
||||||
|
|
||||||
#### Declare the parameters
|
#### Declare the parameters
|
||||||
|
|
||||||
Add the parameters available to the application. The following example adds the NodeJS version and the external port:
|
Add the parameters available to the application. The following example adds the
|
||||||
|
NodeJS version and the external port:
|
||||||
|
|
||||||
```yaml
|
```yaml
|
||||||
- [...]
|
- [...]
|
||||||
|
|
@ -205,7 +215,8 @@ Add the parameters available to the application. The following example adds the
|
||||||
|
|
||||||
#### Use the parameters during service construction
|
#### Use the parameters during service construction
|
||||||
|
|
||||||
When you run the service template container, a volume is mounted making the service parameters available at `/run/configuration`.
|
When you run the service template container, a volume is mounted making the
|
||||||
|
service parameters available at `/run/configuration`.
|
||||||
|
|
||||||
The file matches the following go struct:
|
The file matches the following go struct:
|
||||||
|
|
||||||
|
|
@ -232,25 +243,34 @@ type ConfiguredService struct {
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
You can then use the file to obtain values for the parameters and use this information based on your requirements. However, in most cases, the JSON file is used to interpolate the variables. Therefore, we provide a utility called `interpolator` that expands variables in templates. For more information, see [Interpolator](#interpolator).
|
You can then use the file to obtain values for the parameters and use this
|
||||||
|
information based on your requirements. However, in most cases, the JSON file
|
||||||
|
is used to interpolate the variables. Therefore, we provide a utility called
|
||||||
|
`interpolator` that expands variables in templates. For more information, see
|
||||||
|
[Interpolator](#interpolator).
|
||||||
|
|
||||||
To use the `interpolator` image, update `my-service/Dockerfile` to use the following Dockerfile:
|
To use the `interpolator` image, update `my-service/Dockerfile` to use the
|
||||||
|
following Dockerfile:
|
||||||
|
|
||||||
```conf
|
```conf
|
||||||
FROM dockertemplate/interpolator:v0.1.5
|
FROM dockertemplate/interpolator:v0.1.5
|
||||||
COPY assets .
|
COPY assets .
|
||||||
```
|
```
|
||||||
|
|
||||||
> **Note:** The interpolator tag must match the version used in Docker Template. Verify this using the `docker template version` command .
|
> **Note:** The interpolator tag must match the version used in Docker
|
||||||
|
> Template. Verify this using the `docker template version` command.
|
||||||
|
|
||||||
This places the interpolator image in the `/assets` folder and copies the folder to the target `/project` folder. If you prefer to do this manually, use a Dockerfile instead:
|
This places the interpolator image in the `/assets` folder and copies the
|
||||||
|
folder to the target `/project` folder. If you prefer to do this manually, use
|
||||||
|
a Dockerfile instead:
|
||||||
|
|
||||||
```conf
|
```conf
|
||||||
WORKDIR /assets
|
WORKDIR /assets
|
||||||
CMD ["/interpolator", "-config", "/run/configuration", "-source", "/assets", "-destination", "/project"]
|
CMD ["/interpolator", "-config", "/run/configuration", "-source", "/assets", "-destination", "/project"]
|
||||||
```
|
```
|
||||||
|
|
||||||
When this is complete, use the newly added node option in `my-service/assets/Dockerfile`, by replacing the line:
|
When this is complete, use the newly added node option in
|
||||||
|
`my-service/assets/Dockerfile`, by replacing the line:
|
||||||
|
|
||||||
`FROM node:9`
|
`FROM node:9`
|
||||||
|
|
||||||
|
|
@ -262,11 +282,15 @@ Now, build and push the image to your repository.
|
||||||
|
|
||||||
### Add service template to the library
|
### Add service template to the library
|
||||||
|
|
||||||
You must add the service to a repository file in order to see it when you run the `docker template ls` command, or to make the service available in the Application Designer.
|
You must add the service to a repository file in order to see it when you run
|
||||||
|
the `docker template ls` command, or to make the service available in the
|
||||||
|
Application Designer.
|
||||||
|
|
||||||
#### Create the repository file
|
#### Create the repository file
|
||||||
|
|
||||||
Create a local repository file called `library.yaml` anywhere on your local drive and add the newly created service definitions and application definitions to it.
|
Create a local repository file called `library.yaml` anywhere on your local
|
||||||
|
drive and add the newly created service definitions and application definitions
|
||||||
|
to it.
|
||||||
|
|
||||||
`library.yaml`
|
`library.yaml`
|
||||||
|
|
||||||
|
|
@ -284,42 +308,51 @@ services: # List of service templates available
|
||||||
|
|
||||||
#### Add the local repository to docker-template settings
|
#### Add the local repository to docker-template settings
|
||||||
|
|
||||||
> **Note:** You can also use the instructions in this section to add templates to the [Application Designer](/ee/desktop/app-designer).
|
> **Note:** You can also use the instructions in this section to add templates
|
||||||
|
> to the [Application Designer](/ee/desktop/app-designer).
|
||||||
|
|
||||||
Now that you have created a local repository and added service definitions to it, you must make Docker Template aware of these. To do this:
|
Now that you have created a local repository and added service definitions to
|
||||||
|
it, you must make Docker Template aware of these. To do this:
|
||||||
|
|
||||||
1. Edit `~/.docker/application-template/preferences.yaml` as follows:
|
1. Edit `~/.docker/application-template/preferences.yaml` as follows:
|
||||||
|
|
||||||
```yaml
|
```yaml
|
||||||
apiVersion: v1alpha1
|
apiVersion: v1alpha1
|
||||||
channel: master
|
channel: master
|
||||||
kind: Preferences
|
kind: Preferences
|
||||||
repositories:
|
repositories:
|
||||||
- name: library-master
|
- name: library-master
|
||||||
url: https://docker-application-template.s3.amazonaws.com/master/library.yaml
|
url: https://docker-application-template.s3.amazonaws.com/master/library.yaml
|
||||||
```
|
```
|
||||||
|
|
||||||
2. Add your local repository:
|
2. Add your local repository:
|
||||||
|
|
||||||
```yaml
|
> **Note:** Do not remove or comment out the default library `library-master`.
|
||||||
apiVersion: v1alpha1
|
> This library contains template plugins that are required to build all Docker
|
||||||
channel: master
|
> Templates.
|
||||||
kind: Preferences
|
|
||||||
repositories:
|
|
||||||
- name: custom-services
|
|
||||||
url: file:///path/to/my/library.yaml
|
|
||||||
- name: library-master
|
|
||||||
url: https://docker-application-template.s3.amazonaws.com/master/library.yaml
|
|
||||||
```
|
|
||||||
|
|
||||||
When configuring a local repository on Windows, the `url` structure is slightly different:
|
```yaml
|
||||||
|
apiVersion: v1alpha1
|
||||||
|
channel: master
|
||||||
|
kind: Preferences
|
||||||
|
repositories:
|
||||||
|
- name: custom-services
|
||||||
|
url: file:///path/to/my/library.yaml
|
||||||
|
- name: library-master
|
||||||
|
url: https://docker-application-template.s3.amazonaws.com/master/library.yaml
|
||||||
|
```
|
||||||
|
|
||||||
|
When configuring a local repository on Windows, the `url` structure is slightly
|
||||||
|
different:
|
||||||
|
|
||||||
```yaml
|
```yaml
|
||||||
- name: custom-services
|
- name: custom-services
|
||||||
url: file://c:/path/to/my/library.yaml
|
url: file://c:/path/to/my/library.yaml
|
||||||
```
|
```
|
||||||
|
|
||||||
After updating the `preferences.yaml` file, run `docker template ls` or restart the Application Designer and select **Custom application**. The new service should now be visible in the list of available services.
|
After updating the `preferences.yaml` file, run `docker template ls` or restart
|
||||||
|
the Application Designer and select **Custom application**. The new service
|
||||||
|
should now be visible in the list of available services.
|
||||||
|
|
||||||
### Share custom service templates
|
### Share custom service templates
|
||||||
|
|
||||||
|
|
@ -329,11 +362,14 @@ To share a custom service template, you must complete the following steps:
|
||||||
|
|
||||||
2. Share the service definition (for example, GitHub)
|
2. Share the service definition (for example, GitHub)
|
||||||
|
|
||||||
3. Ensure the receiver has modified their `preferences.yaml` file to point to the service definition that you have shared, and are permitted to accept remote images.
|
3. Ensure the receiver has modified their `preferences.yaml` file to point to
|
||||||
|
the service definition that you have shared, and are permitted to accept
|
||||||
|
remote images.
|
||||||
|
|
||||||
## Create a custom application template
|
## Create a custom application template
|
||||||
|
|
||||||
An application template is a collection of one or more service templates. You must complete the following steps to create a custom application template:
|
An application template is a collection of one or more service templates. You
|
||||||
|
must complete the following steps to create a custom application template:
|
||||||
|
|
||||||
1. Create an application template definition
|
1. Create an application template definition
|
||||||
|
|
||||||
|
|
@ -343,17 +379,26 @@ An application template is a collection of one or more service templates. You mu
|
||||||
|
|
||||||
### Create the application definition
|
### Create the application definition
|
||||||
|
|
||||||
An application template definition contains metadata that describes an application template. It contains information such as the name and description of the template, the services it contains, and the parameters for each of the services.
|
An application template definition contains metadata that describes an
|
||||||
|
application template. It contains information such as the name and description
|
||||||
|
of the template, the services it contains, and the parameters for each of the
|
||||||
|
services.
|
||||||
|
|
||||||
Before you create an application template definition, you must create a repository that contains the services you are planning to include in the template. For more information, see [Create the repository file](#create-the-repository-file).
|
Before you create an application template definition, you must create a
|
||||||
|
repository that contains the services you are planning to include in the
|
||||||
|
template. For more information, see [Create the repository
|
||||||
|
file](#create-the-repository-file).
|
||||||
|
|
||||||
For example, to create an Express and MySQL application, the application definition must be similar to the following yaml file:
|
For example, to create an Express and MySQL application, the application
|
||||||
|
definition must be similar to the following yaml file:
|
||||||
|
|
||||||
```yaml
|
```yaml
|
||||||
apiVersion: v1alpha1 #constant
|
apiVersion: v1alpha1 #constant
|
||||||
kind: ApplicationTemplate #constant
|
kind: ApplicationTemplate #constant
|
||||||
metadata:
|
metadata:
|
||||||
name: express-mysql #the name of the application
|
name: express-mysql #the name of the application
|
||||||
|
platforms:
|
||||||
|
- linux
|
||||||
spec:
|
spec:
|
||||||
description: Sample application with a NodeJS backend and a MySQL database
|
description: Sample application with a NodeJS backend and a MySQL database
|
||||||
services: # list of the services
|
services: # list of the services
|
||||||
|
|
@ -368,7 +413,9 @@ spec:
|
||||||
|
|
||||||
### Add the template to the library
|
### Add the template to the library
|
||||||
|
|
||||||
Create a local repository file called `library.yaml` anywhere on your local drive. If you have already created the `library.yaml` file, add the application definitions to it.
|
Create a local repository file called `library.yaml` anywhere on your local
|
||||||
|
drive. If you have already created the `library.yaml` file, add the application
|
||||||
|
definitions to it.
|
||||||
|
|
||||||
`library.yaml`
|
`library.yaml`
|
||||||
|
|
||||||
|
|
@ -392,40 +439,48 @@ templates: # List of application templates available
|
||||||
|
|
||||||
### Add the local repository to `docker-template` settings
|
### Add the local repository to `docker-template` settings
|
||||||
|
|
||||||
Now that you have created a local repository and added application definitions, you must make Docker Template aware of these. To do this:
|
Now that you have created a local repository and added application definitions,
|
||||||
|
you must make Docker Template aware of these. To do this:
|
||||||
|
|
||||||
1. Edit `~/.docker/application-template/preferences.yaml` as follows:
|
1. Edit `~/.docker/application-template/preferences.yaml` as follows:
|
||||||
|
|
||||||
```yaml
|
```yaml
|
||||||
apiVersion: v1alpha1
|
apiVersion: v1alpha1
|
||||||
channel: master
|
channel: master
|
||||||
kind: Preferences
|
kind: Preferences
|
||||||
repositories:
|
repositories:
|
||||||
- name: library-master
|
- name: library-master
|
||||||
url: https://docker-application-template.s3.amazonaws.com/master/library.yaml
|
url: https://docker-application-template.s3.amazonaws.com/master/library.yaml
|
||||||
```
|
```
|
||||||
|
|
||||||
2. Add your local repository:
|
2. Add your local repository:
|
||||||
|
|
||||||
```yaml
|
> **Note:** Do not remove or comment out the default library `library-master`.
|
||||||
apiVersion: v1alpha1
|
> This library contains template plugins that are required to build all Docker
|
||||||
channel: master
|
> Templates.
|
||||||
kind: Preferences
|
|
||||||
repositories:
|
|
||||||
- name: custom-services
|
|
||||||
url: file:///path/to/my/library.yaml
|
|
||||||
- name: library-master
|
|
||||||
url: https://docker-application-template.s3.amazonaws.com/master/library.yaml
|
|
||||||
```
|
|
||||||
|
|
||||||
When configuring a local repository on Windows, the `url` structure is slightly different:
|
```yaml
|
||||||
|
apiVersion: v1alpha1
|
||||||
|
channel: master
|
||||||
|
kind: Preferences
|
||||||
|
repositories:
|
||||||
|
- name: custom-services
|
||||||
|
url: file:///path/to/my/library.yaml
|
||||||
|
- name: library-master
|
||||||
|
url: https://docker-application-template.s3.amazonaws.com/master/library.yaml
|
||||||
|
```
|
||||||
|
|
||||||
|
When configuring a local repository on Windows, the `url` structure is slightly
|
||||||
|
different:
|
||||||
|
|
||||||
```yaml
|
```yaml
|
||||||
- name: custom-services
|
- name: custom-services
|
||||||
url: file://c:/path/to/my/library.yaml
|
url: file://c:/path/to/my/library.yaml
|
||||||
```
|
```
|
||||||
|
|
||||||
After updating the `preferences.yaml` file, run `docker template ls` or restart the Application Designer and select **Custom application**. The new template should now be visible in the list of available templates.
|
After updating the `preferences.yaml` file, run `docker template ls` or restart
|
||||||
|
the Application Designer and select **Custom application**. The new template
|
||||||
|
should now be visible in the list of available templates.
|
||||||
|
|
||||||
### Share the custom application template
|
### Share the custom application template
|
||||||
|
|
||||||
|
|
@ -435,29 +490,39 @@ To share a custom application template, you must complete the following steps:
|
||||||
|
|
||||||
2. Share the application definition (for example, GitHub)
|
2. Share the application definition (for example, GitHub)
|
||||||
|
|
||||||
3. Ensure the receiver has modified their `preferences.yaml` file to point to the application definition that you have shared, and are permitted to accept remote images.
|
3. Ensure the receiver has modified their `preferences.yaml` file to point to
|
||||||
|
the application definition that you have shared, and are permitted to accept
|
||||||
|
remote images.
|
||||||
|
|
||||||
## Interpolator
|
## Interpolator
|
||||||
|
|
||||||
The `interpolator` utility is an image containing a binary that:
|
The `interpolator` utility is an image containing a binary that:
|
||||||
|
|
||||||
- takes a folder (assets folder) and the service parameter file as input,
|
- takes a folder (assets folder) and the service parameter file as input,
|
||||||
- replaces variables in the input folder using the parameters specified by the user (for example, the service name, external port, etc), and
|
- replaces variables in the input folder using the parameters specified by the
|
||||||
|
user (for example, the service name, external port, etc), and
|
||||||
- writes the interpolated files to the destination folder.
|
- writes the interpolated files to the destination folder.
|
||||||
|
|
||||||
The interpolator implementation uses [Golang template](https://golang.org/pkg/text/template/) to aggregate the services to create the final application. If your service template uses the `interpolator` image by default, it expects all the asset files to be located in the `/assets` folder:
|
The interpolator implementation uses [Go
|
||||||
|
templates](https://golang.org/pkg/text/template/) to aggregate the services to
|
||||||
|
create the final application. If your service template uses the `interpolator`
|
||||||
|
image by default, it expects all the asset files to be located in the `/assets`
|
||||||
|
folder:
|
||||||
|
|
||||||
`/interpolator -source /assets -destination /project`
|
`/interpolator -source /assets -destination /project`
|
||||||
|
|
||||||
However, you can create your own scaffolding script that performs calls to the `interpolator`.
|
However, you can create your own scaffolding script that performs calls to the
|
||||||
|
`interpolator`.
|
||||||
|
|
||||||
> **Note:** It is not mandatory to use the `interpolator` utility. You can use a utility of your choice to handle parameter replacement and file copying to achieve the same result.
|
> **Note:** It is not mandatory to use the `interpolator` utility. You can use
|
||||||
|
> a utility of your choice to handle parameter replacement and file copying to
|
||||||
|
> achieve the same result.
|
||||||
|
|
||||||
The following table lists the `interpolator` binary options:
|
The following table lists the `interpolator` binary options:
|
||||||
|
|
||||||
| Parameter | Default value | Description |
|
| Parameter | Default value | Description |
|
||||||
| :----------------------|:---------------------------|:----------------------------------------|
|
| :----------------|:---------------------|:--------------------------------------------------------------|
|
||||||
| `-source` | none | Source file or folder to interpolate from|
|
| `-source` | none | Source file or folder to interpolate from |
|
||||||
| `-destination` | none | Destination file or folder to copy the interpolated files to|
|
| `-destination` | none | Destination file or folder to copy the interpolated files to |
|
||||||
| `-config` | `/run/configuration` | The path to the json configuration file |
|
| `-config` | `/run/configuration` | The path to the json configuration file |
|
||||||
| `-skip-template` | false | If set to `true`, it copies assets without any transformation |
|
| `-skip-template` | false | If set to `true`, it copies assets without any transformation |
|
||||||
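As an example of combining these flags, a custom scaffolding script could interpolate templated assets and copy static ones verbatim. The `/assets/templates` and `/assets/static` paths below are hypothetical, and the sketch assumes standard Go-style boolean flag syntax for `-skip-template`:

```bash
#!/bin/sh
# Expand variables in templated assets using the parameters in /run/configuration.
/interpolator -config /run/configuration -source /assets/templates -destination /project
# Copy static assets without any transformation.
/interpolator -skip-template=true -source /assets/static -destination /project
```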
|
|
|
||||||
|
|
@ -82,7 +82,7 @@ $ docker buildx build --platform linux/amd64,linux/arm64 .
|
||||||
Finally, depending on your project, the language that you use may have good support for cross-compilation. In that case, multi-stage builds in Dockerfiles can be effectively used to build binaries for the platform specified with `--platform` using the native architecture of the build node. A list of build arguments like `BUILDPLATFORM` and `TARGETPLATFORM` is available automatically inside your Dockerfile and can be leveraged by the processes running as part of your build.
|
Finally, depending on your project, the language that you use may have good support for cross-compilation. In that case, multi-stage builds in Dockerfiles can be effectively used to build binaries for the platform specified with `--platform` using the native architecture of the build node. A list of build arguments like `BUILDPLATFORM` and `TARGETPLATFORM` is available automatically inside your Dockerfile and can be leveraged by the processes running as part of your build.
|
||||||
|
|
||||||
```
|
```
|
||||||
FROM --platform $BUILDPLATFORM golang:alpine AS build
|
FROM --platform=$BUILDPLATFORM golang:alpine AS build
|
||||||
ARG TARGETPLATFORM
|
ARG TARGETPLATFORM
|
||||||
ARG BUILDPLATFORM
|
ARG BUILDPLATFORM
|
||||||
RUN echo "I am running on $BUILDPLATFORM, building for $TARGETPLATFORM" > /log
|
RUN echo "I am running on $BUILDPLATFORM, building for $TARGETPLATFORM" > /log
|
||||||
|
|
|
||||||
|
|
@ -94,7 +94,7 @@ The following components are available:
|
||||||
|
|
||||||
- `subscription`: (Optional) Configuration options for Docker Enterprise
|
- `subscription`: (Optional) Configuration options for Docker Enterprise
|
||||||
Subscriptions.
|
Subscriptions.
|
||||||
- `cloudstor`: (Optional) Configuration options for Docker CloudStor.
|
- `cloudstor`: (Optional) Configuration options for Docker Cloudstor.
|
||||||
- `dtr`: (Optional) Configuration options for Docker Trusted Registry.
|
- `dtr`: (Optional) Configuration options for Docker Trusted Registry.
|
||||||
- `engine`: (Optional) Configuration options for Docker Engine.
|
- `engine`: (Optional) Configuration options for Docker Engine.
|
||||||
- `ucp`: (Optional) Configuration options for Docker Universal Control Plane.
|
- `ucp`: (Optional) Configuration options for Docker Universal Control Plane.
|
||||||
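Structurally, these components appear as keys under `cluster` in the Cluster file. The sketch below only shows the shape; the empty mappings stand in for each component's options, which are described in the sections that follow:

```yaml
cluster:
  subscription: {}   # Docker Enterprise subscription options
  cloudstor: {}      # Docker Cloudstor options
  dtr: {}            # Docker Trusted Registry options
  engine: {}         # Docker Engine options
  ucp: {}            # Universal Control Plane options
```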
|
|
@ -110,10 +110,29 @@ Provide Docker Enterprise subscription information
|
||||||
`false`
|
`false`
|
||||||
|
|
||||||
#### cloudstor
|
#### cloudstor
|
||||||
Customizes the installation of Docker Cloudstor.
|
Docker Cloudstor is a Docker Swarm Plugin that provides persistent storage to
|
||||||
|
Docker Swarm Clusters deployed on to AWS or Azure. By default Docker Cloudstor
|
||||||
|
is not installed on Docker Enterprise environments created with Docker Cluster.
|
||||||
|
|
||||||
- `version`: (Optional) The version of Cloudstor to install. Default is `1.0`.
|
```yaml
|
||||||
- `use_efs`: (Optional) Specifies whether an Elastic File System should be provisioned. Defaults to `false`.
|
cluster:
|
||||||
|
cloudstor:
|
||||||
|
version: '1.0'
|
||||||
|
```
|
||||||
|
|
||||||
|
For more information on Docker Cloudstor see:
|
||||||
|
|
||||||
|
- [Cloudstor for AWS](/docker-for-aws/persistent-data-volumes/)
|
||||||
|
- [Cloudstor for Azure](/docker-for-azure/persistent-data-volumes/)
|
||||||
|
|
||||||
|
The following optional elements can be specified:
|
||||||
|
|
||||||
|
- `version`: (Required) The version of Docker Cloudstor to install. The default
|
||||||
|
is `disabled`. The only released version of Docker Cloudstor at this time is
|
||||||
|
`1.0`.
|
||||||
|
- `use_efs`: (Optional) Specifies whether an Elastic File System should be
|
||||||
|
provisioned. By default Docker Cloudstor on AWS uses Elastic Block Store,
|
||||||
|
therefore this value defaults to `false`.
|
||||||
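For example, to install Cloudstor with EFS-backed volumes on AWS, the two options can be combined (a sketch building on the example above):

```yaml
cluster:
  cloudstor:
    version: '1.0'
    use_efs: true
```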
|
|
||||||
#### dtr
|
#### dtr
|
||||||
Customizes the installation of Docker Trusted Registry.
|
Customizes the installation of Docker Trusted Registry.
|
||||||
|
|
|
||||||
|
|
@ -51,12 +51,24 @@ For more information about Cluster files, refer to the
|
||||||
Docker Cluster has commands for managing the whole lifecycle of your cluster:
|
Docker Cluster has commands for managing the whole lifecycle of your cluster:
|
||||||
|
|
||||||
* Create and destroy clusters
|
* Create and destroy clusters
|
||||||
* Scale up or Scale down clusters
|
* Scale up or scale down clusters
|
||||||
* Upgrade clusters
|
* Upgrade clusters
|
||||||
* View the status of clusters
|
* View the status of clusters
|
||||||
* Backup and Restore clusters
|
* Backup and restore clusters
|
||||||
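A hedged sketch of what that lifecycle looks like on the command line; the subcommand names come from the `docker cluster` CLI, but treat the exact flags as assumptions and see the command line reference linked below:

```bash
docker cluster create --file cluster.yml    # create a cluster from a Cluster file
docker cluster ls                           # view the status of clusters
docker cluster update mycluster             # apply changes such as scaling or upgrades
docker cluster backup mycluster --file backup.tar.gz
docker cluster restore mycluster --file backup.tar.gz
docker cluster rm mycluster                 # destroy the cluster
```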
|
|
||||||
## Cluster reference pages
|
## Export Docker Cluster artifacts
|
||||||
|
|
||||||
|
You can export both Terraform and Ansible scripts to deploy certain components standalone or with custom configurations. Use the following commands to export those scripts:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
docker container run --detach --name dci --entrypoint sh docker/cluster:latest
|
||||||
|
docker container cp dci:/cluster/terraform terraform
|
||||||
|
docker container cp dci:/cluster/ansible ansible
|
||||||
|
docker container stop dci
|
||||||
|
docker container rm dci
|
||||||
|
```
|
||||||
|
|
||||||
|
## Where to go next
|
||||||
|
|
||||||
- [Get started with Docker Cluster on AWS](aws.md)
|
- [Get started with Docker Cluster on AWS](aws.md)
|
||||||
- [Command line reference](/engine/reference/commandline/cluster/)
|
- [Command line reference](/engine/reference/commandline/cluster/)
|
||||||
|
|
|
||||||
|
|
@ -8,6 +8,23 @@ This page provides information about Docker Cluster versions.
|
||||||
|
|
||||||
# Version 1
|
# Version 1
|
||||||
|
|
||||||
|
|
||||||
|
## Version 1.2.0
|
||||||
|
(2019-10-02)
|
||||||
|
|
||||||
|
### Features
|
||||||
|
|
||||||
|
- Added a new env variable type, which allows users to supply cluster variable values as environment variables (DCIS-509)
|
||||||
|
|
||||||
|
### Fixes
|
||||||
|
|
||||||
|
- Fixed an issue where errors in the cluster commands would return an exit code of 0 (DCIS-508)
|
||||||
|
|
||||||
|
- A new error message is displayed when a `docker login` is required:
|
||||||
|
```
Checking for licenses on Docker Hub
|
||||||
|
Error: no Hub login info found; please run 'docker login' first
|
||||||
|
```
|
||||||
|
|
||||||
## Version 1.1.0
|
## Version 1.1.0
|
||||||
(2019-09-03)
|
(2019-09-03)
|
||||||
|
|
||||||
|
|
|
||||||
|
|
@ -62,7 +62,7 @@ services.
|
||||||
|
|
||||||
web:
|
web:
|
||||||
image: example/my_web_app:latest
|
image: example/my_web_app:latest
|
||||||
links:
|
depends_on:
|
||||||
- db
|
- db
|
||||||
- cache
|
- cache
|
||||||
|
|
||||||
|
|
@ -136,7 +136,7 @@ Start with a **docker-compose.yml**.
|
||||||
|
|
||||||
web:
|
web:
|
||||||
image: example/my_web_app:latest
|
image: example/my_web_app:latest
|
||||||
links:
|
depends_on:
|
||||||
- db
|
- db
|
||||||
|
|
||||||
db:
|
db:
|
||||||
|
|
@ -147,7 +147,7 @@ export or backup.
|
||||||
|
|
||||||
dbadmin:
|
dbadmin:
|
||||||
build: database_admin/
|
build: database_admin/
|
||||||
links:
|
depends_on:
|
||||||
- db
|
- db
|
||||||
|
|
||||||
To start a normal environment run `docker-compose up -d`. To run a database
|
To start a normal environment run `docker-compose up -d`. To run a database
|
||||||
|
|
@ -177,9 +177,9 @@ is useful if you have several services that reuse a common set of configuration
|
||||||
options. Using `extends` you can define a common set of service options in one
|
options. Using `extends` you can define a common set of service options in one
|
||||||
place and refer to it from anywhere.
|
place and refer to it from anywhere.
|
||||||
|
|
||||||
Keep in mind that `links`, `volumes_from`, and `depends_on` are never shared
|
Keep in mind that `volumes_from` and `depends_on` are never shared between
|
||||||
between services using `extends`. These exceptions exist to avoid implicit
|
services using `extends`. These exceptions exist to avoid implicit
|
||||||
dependencies; you always define `links` and `volumes_from` locally. This ensures
|
dependencies; you always define `volumes_from` and `depends_on` locally. This ensures
|
||||||
dependencies between services are clearly visible when reading the current file.
|
dependencies between services are clearly visible when reading the current file.
|
||||||
Defining these locally also ensures that changes to the referenced file don't
|
Defining these locally also ensures that changes to the referenced file don't
|
||||||
break anything.
|
break anything.
|
||||||
|
|
@ -233,7 +233,7 @@ You can also write other services and link your `web` service to them:
|
||||||
environment:
|
environment:
|
||||||
- DEBUG=1
|
- DEBUG=1
|
||||||
cpu_shares: 5
|
cpu_shares: 5
|
||||||
links:
|
depends_on:
|
||||||
- db
|
- db
|
||||||
db:
|
db:
|
||||||
image: postgres
|
image: postgres
|
||||||
|
|
@ -264,7 +264,7 @@ common configuration:
|
||||||
command: /code/run_web_app
|
command: /code/run_web_app
|
||||||
ports:
|
ports:
|
||||||
- 8080:8080
|
- 8080:8080
|
||||||
links:
|
depends_on:
|
||||||
- queue
|
- queue
|
||||||
- db
|
- db
|
||||||
|
|
||||||
|
|
@ -273,7 +273,7 @@ common configuration:
|
||||||
file: common.yml
|
file: common.yml
|
||||||
service: app
|
service: app
|
||||||
command: /code/run_worker
|
command: /code/run_worker
|
||||||
links:
|
depends_on:
|
||||||
- queue
|
- queue
|
||||||
|
|
||||||
## Adding and overriding configuration
|
## Adding and overriding configuration
|
||||||
|
|
|
||||||
|
|
@ -94,10 +94,7 @@ no separate VirtualBox is required.
|
||||||
|
|
||||||
### What are the system requirements for Docker for Mac?
|
### What are the system requirements for Docker for Mac?
|
||||||
|
|
||||||
You need a Mac that supports hardware virtualization and can run at
|
You need a Mac that supports hardware virtualization. For more information, see [Docker Desktop Mac system requirements](install/#system-requirements).
|
||||||
least macOS `10.11` (macOS El Capitan). See also
|
|
||||||
[What to know before you install](install#what-to-know-before-you-install) in
|
|
||||||
the install guide.
|
|
||||||
|
|
||||||
### Do I need to reinstall Docker for Mac if I change the name of my macOS account?
|
### Do I need to reinstall Docker for Mac if I change the name of my macOS account?
|
||||||
|
|
||||||
|
|
|
||||||
|
|
@ -4,91 +4,96 @@ keywords: mac, disk
|
||||||
title: Disk utilization in Docker for Mac
|
title: Disk utilization in Docker for Mac
|
||||||
---
|
---
|
||||||
|
|
||||||
Docker for Mac stores Linux containers and images in a single, large "disk image" file
|
Docker Desktop stores Linux containers and images in a single, large "disk image" file in the Mac filesystem. This is different from Docker on Linux, which usually stores containers and images in the `/var/lib/docker` directory.
|
||||||
in the Mac filesystem. This is different from Docker on Linux, which usually stores containers
|
|
||||||
and images in the `/var/lib/docker` directory.
|
|
||||||
|
|
||||||
## Where is the "disk image" file?
|
## Where is the disk image file?
|
||||||
|
|
||||||
To locate the "disk image" file, first select the whale menu icon and then select
|
To locate the disk image file, select the Docker icon and then
|
||||||
**Preferences...**. When the **Preferences...** window is displayed, select **Disk** and then **Reveal in Finder**:
|
**Preferences** > **Resources** > **Advanced**.
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
The **Preferences...** window shows how much actual disk space the "disk image" file is consuming.
|
The **Advanced** tab displays the location of the disk image. It also displays the maximum size of the disk image and the actual space the disk image is consuming. Note that other tools might display space usage of the file in terms of the maximum file size, and not the actual file size.
|
||||||
In this example, the "disk image" file is consuming 2.4 GB out of a maximum of 64 GB.
|
|
||||||
|
|
||||||
Note that other tools might display space usage of the file in terms of the maximum file size, not the actual file size.
|
|
||||||
|
|
||||||
## If the file is too big
|
## If the file is too big
|
||||||
|
|
||||||
If the file is too big, you can
|
If the disk image file is too big, you can:
|
||||||
|
|
||||||
- move it to a bigger drive,
|
- move it to a bigger drive,
|
||||||
- delete unnecessary containers and images, or
|
- delete unnecessary containers and images, or
|
||||||
- reduce the maximum allowable size of the file.
|
- reduce the maximum allowable size of the file.
|
||||||
|
|
||||||
### Move the file to a bigger drive
|
### Move the file to a bigger drive
|
||||||
|
|
||||||
To move the file, open the **Preferences...** menu, select **Disk** and then select
|
To move the disk image file to a different location:
|
||||||
on **Move disk image**. Do not move the file directly in the finder or Docker for Mac will
|
|
||||||
lose track of it.
|
1. Select **Preferences** > **Resources** > **Advanced**.
|
||||||
|
|
||||||
|
2. In the **Disk image location** section, click **Browse** and choose a new location for the disk image.
|
||||||
|
|
||||||
|
3. Click **Apply & Restart** for the changes to take effect.
|
||||||
|
|
||||||
|
Do not move the file directly in Finder as this can cause Docker Desktop to lose track of the file.
|
||||||
|
|
||||||
### Delete unnecessary containers and images
|
### Delete unnecessary containers and images
|
||||||
|
|
||||||
To check whether you have too many unnecessary containers and images:
|
Check whether you have any unnecessary containers and images. If your client and daemon API are running version 1.25 or later (use the `docker version` command on the client to check your client and daemon API versions), you can see the detailed space usage information by running:
|
||||||
|
|
||||||
If your client and daemon API are version 1.25 or later (use the docker version command on the client to check your client and daemon API versions.), you can display detailed space usage information with:
|
|
||||||
|
|
||||||
```
|
```
|
||||||
docker system df -v
|
docker system df -v
|
||||||
```
|
```
|
||||||
|
|
||||||
Alternatively, you can list images with:
|
Alternatively, to list images, run:
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
$ docker image ls
|
$ docker image ls
|
||||||
```
|
```
|
||||||
and then list containers with:
|
|
||||||
|
and then, to list containers, run:
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
$ docker container ls -a
|
$ docker container ls -a
|
||||||
```
|
```
|
||||||
|
|
||||||
If there are lots of unneeded objects, try the command
|
If there are lots of redundant objects, run the command:
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
$ docker system prune
|
$ docker system prune
|
||||||
```
|
```
|
||||||
This removes all stopped containers, unused networks, dangling images, and build cache.
|
|
||||||
|
|
||||||
Note that it might take a few minutes before space becomes free on the host, depending
|
This command removes all stopped containers, unused networks, dangling images, and build cache.
|
||||||
on what format the "disk image" file is in:
|
|
||||||
- If the file is named `Docker.raw`: space on the host should be reclaimed within a few
|
|
||||||
seconds.
|
|
||||||
- If the file is named `Docker.qcow2`: space will be freed by a background process after
|
|
||||||
a few minutes.
|
|
||||||
|
|
||||||
Note that space is only freed when images are deleted. Space is not freed automatically
|
It might take a few minutes to reclaim space on the host depending on the format of the disk image file:
|
||||||
when files are deleted inside running containers. To trigger a space reclamation at any
|
|
||||||
point, use the command:
|
- If the file is named `Docker.raw`: space on the host should be reclaimed within a few seconds.
|
||||||
|
- If the file is named `Docker.qcow2`: space will be freed by a background process after a few minutes.
|
||||||
|
|
||||||
|
Space is only freed when images are deleted. Space is not freed automatically when files are deleted inside running containers. To trigger a space reclamation at any point, run the command:
|
||||||
|
|
||||||
```
|
```
|
||||||
$ docker run --privileged --pid=host justincormack/nsenter1 /sbin/fstrim /var/lib/docker
|
$ docker run --privileged --pid=host docker/desktop-reclaim-space
|
||||||
```
|
```
|
||||||
|
|
||||||
Note that many tools will report the maximum file size, not the actual file size.
|
Note that many tools report the maximum file size, not the actual file size.
|
||||||
To query the actual size of the file on the host from a terminal, use:
|
To query the actual size of the file on the host from a terminal, run:
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
$ cd ~/Library/Containers/com.docker.docker/Data
|
$ cd ~/Library/Containers/com.docker.docker/Data
|
||||||
$ cd vms/0 # or com.docker.driver.amd64-linux
|
$ cd vms/0/data
|
||||||
$ ls -klsh Docker.raw
|
$ ls -klsh Docker.raw
|
||||||
2333548 -rw-r--r--@ 1 akim staff 64G Dec 13 17:42 Docker.raw
|
2333548 -rw-r--r--@ 1 username staff 64G Dec 13 17:42 Docker.raw
|
||||||
```
|
```
|
||||||
In this example, the actual size of the disk is `2333548` KB, whereas the maximum size
|
|
||||||
of the disk is `64` GB.
|
In this example, the actual size of the disk is `2333548` KB, whereas the maximum size of the disk is `64` GB.
|
||||||
|
|
||||||
### Reduce the maximum size of the file
|
### Reduce the maximum size of the file
|
||||||
|
|
||||||
To reduce the maximum size of the file, select the whale menu icon and then select
|
To reduce the maximum size of the disk image file:
|
||||||
**Preferences...**. When the **Preferences...** window is displayed, select **Disk**.
|
|
||||||
The **Disk** window contains a slider that allows the maximum disk size to be set.
|
|
||||||
**Warning**: If the maximum size is reduced, the current file will be deleted and, therefore, all
|
|
||||||
containers and images will be lost.
|
|
||||||
|
|
||||||
|
1. Select the Docker icon and then select **Preferences** > **Resources** > **Advanced**.
|
||||||
|
|
||||||
|
2. The **Disk image size** section contains a slider that allows you to change the maximum size of the disk image. Adjust the slider to set a lower limit.
|
||||||
|
|
||||||
|
3. Click **Apply & Restart**.
|
||||||
|
|
||||||
|
When you reduce the maximum size, the current disk image file is deleted, and therefore, all containers and images will be lost.
|
||||||
|
|
|
||||||
|
|
@ -0,0 +1,52 @@
|
||||||
|
---
|
||||||
|
title: Deactivating an account or an organization
|
||||||
|
description: Learn how to deactivate a Docker Hub account or an organization
|
||||||
|
keywords: Docker Hub, delete, deactivate, account, organization
|
||||||
|
---
|
||||||
|
|
||||||
|
Your Docker Hub account or organization may also be linked to other Docker products and services, so deactivating it will also disable access to those products and services.
|
||||||
|
|
||||||
|
## Deactivating an account
|
||||||
|
|
||||||
|
Before deactivating your Docker Hub account, please complete the following:
|
||||||
|
|
||||||
|
1. Download any images and tags you want to keep (see the example after this list):
|
||||||
|
`docker pull -a <image>:<tag>`.
|
||||||
|
|
||||||
|
2. If you have an active subscription, downgrade it to the **free** plan.
|
||||||
|
|
||||||
|
In Docker Hub, navigate to **_Your Account_** > **Account Settings** > **Billing**.
|
||||||
|
|
||||||
|
3. If you have an enterprise license, download the key.
|
||||||
|
|
||||||
|
In Docker Hub, navigate to **_Your Account_** > **Account Settings** > **Licenses**. The download link will no longer be available after your account is deactivated.
|
||||||
|
|
||||||
|
4. If you belong to any organizations, remove your account from all of them.
|
||||||
|
|
||||||
|
5. If you are the sole owner of any organization, either add someone to the **owners** team and then remove yourself from the organization, or deactivate the organization as well.
|
||||||
|
|
||||||
|
6. Unlink your [GitHub and Bitbucket accounts](https://docs.docker.com/docker-hub/builds/link-source/#unlink-a-github-user-account).
|
||||||
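For step 1, a minimal sketch of keeping a local copy of a repository before deactivation (the repository name is a placeholder; `docker pull -a` pulls every tag):

```bash
# Pull all tags of a repository you want to keep.
$ docker pull -a <your-namespace>/<your-repository>

# Optionally archive the pulled images to a tarball.
$ docker save <your-namespace>/<your-repository> > repository-backup.tar
```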
|
|
||||||
|
Once you have completed all the steps above, you may deactivate your account. On Docker Hub, go to **_Your Account_** > **Account Settings** > **Deactivate Account**.
|
||||||
|
|
||||||
|
> This cannot be undone! Be sure you've gathered all the data you need from your account before deactivating it.
|
||||||
|
{: .warning }
|
||||||
|
|
||||||
|
|
||||||
|
## Deactivating an organization
|
||||||
|
|
||||||
|
Before deactivating an organization, please complete the following:
|
||||||
|
|
||||||
|
1. Download any images and tags you want to keep:
|
||||||
|
`docker pull -a <image>:<tag>`.
|
||||||
|
|
||||||
|
2. If you have an active subscription, downgrade it to the **free** plan:
|
||||||
|
|
||||||
|
In Docker Hub, navigate to **Organizations** > **_Your Organization_** > **Billing**.
|
||||||
|
|
||||||
|
3. Unlink your [GitHub and Bitbucket accounts](https://docs.docker.com/docker-hub/builds/link-source/#unlink-a-github-user-account).
|
||||||
|
|
||||||
|
Once you have completed all the steps above, you may deactivate your organization. On Docker Hub, go to **Organizations** > **_Your Organization_** > **Settings** > **Deactivate Org**.
|
||||||
|
|
||||||
|
> This cannot be undone! Be sure you've gathered all the data you need from your organization before deactivating it.
|
||||||
|
{: .warning }
|
||||||
|
After Width: | Height: | Size: 132 KiB |
|
After Width: | Height: | Size: 90 KiB |
|
After Width: | Height: | Size: 120 KiB |
|
After Width: | Height: | Size: 36 KiB |
|
After Width: | Height: | Size: 68 KiB |
|
After Width: | Height: | Size: 125 KiB |
|
After Width: | Height: | Size: 80 KiB |
|
After Width: | Height: | Size: 111 KiB |
|
After Width: | Height: | Size: 128 KiB |
|
After Width: | Height: | Size: 83 KiB |
|
After Width: | Height: | Size: 33 KiB |
|
|
@ -6,72 +6,137 @@ redirect_from:
|
||||||
- /docker-cloud/orgs/
|
- /docker-cloud/orgs/
|
||||||
---
|
---
|
||||||
|
|
||||||
Docker Hub Organizations let you create teams so you can give your team access to shared image repositories.
|
Docker Hub Organizations let you create teams so you can give your team access
|
||||||
|
to shared image repositories.
|
||||||
|
|
||||||
### How Organizations & Teams Work
|
- **Organizations** are collections of teams and repositories that can be managed together.
|
||||||
|
- **Teams** are groups of Docker Hub users that belong to an organization.
|
||||||
|
|
||||||
- **Organizations** are a collection of teams and repositories that can be managed together.
|
> **Note**: In Docker Hub, users cannot belong directly to an organization.
|
||||||
- **Teams** are groups of Docker Hub users that belong to your organization.
|
They belong only to teams within an organization.
|
||||||
|
|
||||||
> **Note**: in Docker Hub, users cannot be associated directly to an organization. They belong only to teams within an organization.
|
## Working with organizations
|
||||||
|
|
||||||
### Create an organization
|
### Create an organization
|
||||||
|
|
||||||
1. Start by clicking on [Organizations](https://cloud.docker.com/orgs) in Docker Hub
|
1. Start by clicking on **[Organizations](https://hub.docker.com/orgs)** in
|
||||||
2. Click on "Create Organization"
|
Docker Hub.
|
||||||
|
|
||||||
|
2. Click on **Create Organization**.
|
||||||
|
|
||||||
3. Provide information about your organization:
|
3. Provide information about your organization:
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
You've created an organization. You'll see you have a team, the **owners** team with a single member (you!)
|
You've created an organization. You'll see you have a team, the **owners** team
|
||||||
|
with a single member (you!).
|
||||||
|
|
||||||
### The owners team
|
#### The owners team
|
||||||
|
|
||||||
The **owners** team is a special team that has full access to all repositories in the Organization.
|
The **owners** team is a special team that has full access to all repositories
|
||||||
|
in the organization.
|
||||||
|
|
||||||
Members of this team can:
|
Members of this team can:
|
||||||
- Manage Organization settings and billing
|
- Manage organization settings and billing
|
||||||
- Create a team and modify the membership of any team
|
- Create a team and modify the membership of any team
|
||||||
- Access and modify any repository belonging to the Organization
|
- Access and modify any repository belonging to the organization
|
||||||
|
|
||||||
|
## Working with teams and members
|
||||||
|
|
||||||
### Create a team
|
### Create a team
|
||||||
|
|
||||||
To create a team:
|
1. Go to **Organizations** in Docker Hub, and select your organization.
|
||||||
|
|
||||||
|
2. Open the **Teams** tab and click **Create Team**.
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
3. Fill out your team's information and click **Create**.
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
1. Go to your organization by clicking on **Organizations** in Docker Hub, and select your organization.
|
|
||||||
2. Click **Create Team** 
|
|
||||||
3. Fill out your team's information and click **Create** 
|
|
||||||
|
|
||||||
### Add a member to a team
|
### Add a member to a team
|
||||||
|
|
||||||
1. Visit your team's page in Docker Hub. Click on **Organizations** > **_Your Organization_** > **_Your Team Name_**
|
You can add a member to a team in one of two ways.
|
||||||
2. Click on **Add User**
|
|
||||||
3. Provide the user's Docker ID username _or_ email to add them to the team 
|
|
||||||
|
|
||||||
> **Note**: You are not automatically added to teams created by your organization.
|
If the user isn't in your organization:
|
||||||
|
|
||||||
|
1. Go to **Organizations** in Docker Hub, and select your organization.
|
||||||
|
|
||||||
|
2. Click **Add Member**.
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
3. Provide the user's Docker ID username _or_ email, and select a team from the dropdown.
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
|
||||||
|
If the user already belongs to another team in the organization:
|
||||||
|
|
||||||
|
1. Open the team's page in Docker Hub: **Organizations** > **_Your Organization_** > **Teams** > **_Your Team Name_**.
|
||||||
|
|
||||||
|
2. Click **Add User**.
|
||||||
|
3. Provide the user's Docker ID username _or_ email to add them to the team.
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
> **Note**: You are not automatically added to teams created by your organization.
|
||||||
|
|
||||||
### Remove team members
|
### Remove team members
|
||||||
|
|
||||||
To remove a member from a team, click the **x** next to their name:
|
To remove a member from all teams in an organization:
|
||||||
|
|
||||||
|
1. Go to **Organizations** in Docker Hub, and select your organization.
|
||||||
|
|
||||||
|
2. Click the **x** next to a member's name:
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
|
||||||
|
To remove a member from a specific team:
|
||||||
|
|
||||||
|
1. Open the team this user is on. You can do this in one of two ways:
|
||||||
|
|
||||||
|
* If you know the team name, go to **Organizations** > **_Your Organization_** > **Teams** > **_Team Name_**.
|
||||||
|
|
||||||
|
> **Note:** You can filter the **Teams** tab by username, but you have to use the format _@username_ in the search field (partial names will not work).
|
||||||
|
|
||||||
|
* If you don't know the team name, go to **Organizations** > **_Your Organization_** and search for the user. Hover over **View** to see all of their teams, then click on **View** > **_Team Name_**.
|
||||||
|
|
||||||
|
2. Find the user in the list, and click the **x** next to the user's name to remove them.
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||

|
|
||||||
|
|
||||||
### Give a team access to a repository
|
### Give a team access to a repository
|
||||||
|
|
||||||
To provide a team access to a repository:
|
1. Visit the repository list on Docker Hub by clicking on **Repositories**.
|
||||||
|
|
||||||
1. Visit the repository list on Docker Hub by clicking on **Repositories**
|
2. Select your organization in the namespace dropdown list.
|
||||||
2. Select your organization in the namespace dropdown list
|
|
||||||
3. Click the repository you'd like to edit 
|
3. Click the repository you'd like to edit.
|
||||||
4. Click the **Permissions** tab
|
|
||||||
5. Select the team, permissions level (more on this below) and click **+**
|

|
||||||
6. Click the **+** button to add 
|
|
||||||
|
4. Click the **Permissions** tab.
|
||||||
|
|
||||||
|
5. Select the team, the [permissions level](#permissions-reference), and click **+** to save.
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
### View a team's permissions for all repositories
|
### View a team's permissions for all repositories
|
||||||
|
|
||||||
To view a team's permissions over all repos:
|
To view a team's permissions over all repos:
|
||||||
1. Click on **Organizations**, then select your organization and team.
|
|
||||||
2. Click on the **Permissions** tab where you can view which repositories this team has access to 
|
1. Open **Organizations** > **_Your Organization_** > **Teams** > **_Team Name_**.
|
||||||
|
|
||||||
|
2. Click on the **Permissions** tab, where you can view the repositories this team can access.
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
You can also edit repository permissions from this tab.
|
||||||
|
|
||||||
|
|
||||||
### Permissions reference
|
### Permissions reference
|
||||||
|
|
@ -81,7 +146,7 @@ automatically have Read permissions:
|
||||||
|
|
||||||
- `Read` access allows users to view, search, and pull a private repository in the same way as they can a public repository.
|
- `Read` access allows users to view, search, and pull a private repository in the same way as they can a public repository.
|
||||||
- `Write` access allows users to push to repositories on Docker Hub.
|
- `Write` access allows users to push to repositories on Docker Hub.
|
||||||
- `Admin` access allows users to modify the repositories "Description", "Collaborators" rights, "Public/Private" visibility and "Delete".
|
- `Admin` access allows users to modify the repository's "Description", "Collaborators" rights, and "Public/Private" visibility, and to delete the repository.
|
||||||
|
|
||||||
> **Note**: A user who has not yet verified their email address only has
|
> **Note**: A user who has not yet verified their email address only has
|
||||||
> `Read` access to the repository, regardless of the rights their team
|
> `Read` access to the repository, regardless of the rights their team
|
||||||
|
|
|
||||||
|
|
@ -8,6 +8,21 @@ toc_max: 2
|
||||||
Here you can learn about the latest changes, new features, bug fixes, and
|
Here you can learn about the latest changes, new features, bug fixes, and
|
||||||
known issues for each Docker Hub release.
|
known issues for each Docker Hub release.
|
||||||
|
|
||||||
|
## 2019-10-02
|
||||||
|
|
||||||
|
### Enhancements
|
||||||
|
* You can now manage teams and members straight from your [organization page](https://hub.docker.com/orgs). Each organization page now breaks down into these tabs:
|
||||||
|
* **New:** Members - manage your members directly from this page (delete, add, or open their teams)
|
||||||
|
* **New:** Teams - search by team or username, and open up any team page to manage the team
|
||||||
|
* Repositories
|
||||||
|
* Settings
|
||||||
|
* Billing
|
||||||
|
|
||||||
|
### Bug fixes
|
||||||
|
|
||||||
|
* Fixed an issue where Kitematic could not connect and log in to Docker Hub.
|
||||||
|
|
||||||
|
|
||||||
## 2019-09-19
|
## 2019-09-19
|
||||||
|
|
||||||
### New features
|
### New features
|
||||||
|
|
|
||||||
|
|
@ -79,7 +79,7 @@ the install command on. DTR will be installed on the UCP worker defined by the
|
||||||
automatically reconfigured to trust DTR.
|
automatically reconfigured to trust DTR.
|
||||||
|
|
||||||
* With DTR 2.7, you can [enable browser authentication via client
|
* With DTR 2.7, you can [enable browser authentication via client
|
||||||
certificates](/ee/enable-authentication-via-client-certificates/) at install
|
certificates](/ee/enable-client-certificate-authentication/) at install
|
||||||
time. This bypasses the DTR login page and hides the logout button, thereby
|
time. This bypasses the DTR login page and hides the logout button, thereby
|
||||||
skipping the need for entering your username and password.
|
skipping the need for entering your username and password.
|
||||||
|
|
||||||
|
|
|
||||||
|
|
@ -11,7 +11,7 @@ Get the [30-day
|
||||||
trial available at Docker Hub](https://hub.docker.com/editions/enterprise/docker-ee-trial/trial).
|
trial available at Docker Hub](https://hub.docker.com/editions/enterprise/docker-ee-trial/trial).
|
||||||
|
|
||||||
Once you get your trial license, you can install Docker Enterprise's Universal
|
Once you get your trial license, you can install Docker Enterprise's Universal
|
||||||
Control Plane and Docker Trusted Regsitry on Linux Servers. Windows Servers
|
Control Plane and Docker Trusted Registry on Linux Servers. Windows Servers
|
||||||
can only be used as Universal Control Plane Worker Nodes.
|
can only be used as Universal Control Plane Worker Nodes.
|
||||||
|
|
||||||
Learn more about the Universal Control Plane's system requirements
|
Learn more about the Universal Control Plane's system requirements
|
||||||
|
|
|
||||||
|
|
@ -105,20 +105,20 @@ email address, for example, `jane.doe@subsidiary1.com`.
|
||||||
## Configure the LDAP integration
|
## Configure the LDAP integration
|
||||||
|
|
||||||
To configure UCP to create and authenticate users by using an LDAP directory,
|
To configure UCP to create and authenticate users by using an LDAP directory,
|
||||||
go to the UCP web interface, navigate to the **Admin Settings** page and click
|
go to the UCP web interface, navigate to the **Admin Settings** page, and click
|
||||||
**Authentication & Authorization** to select the method used to create and
|
**Authentication & Authorization** to select the method used to create and
|
||||||
authenticate users.
|
authenticate users. [Learn about additional UCP configuration options](../../configure/ucp-configuration-file.md#configuration-options).
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
In the **LDAP Enabled** section, click **Yes** to The LDAP settings appear.
|
In the **LDAP Enabled** section, click **Yes**.
|
||||||
Now configure your LDAP directory integration.
|
Now configure your LDAP directory integration.
|
||||||
|
|
||||||
## Default role for all private collections
|
## Default role for all private collections
|
||||||
|
|
||||||
Use this setting to change the default permissions of new users.
|
Use this setting to change the default permissions of new users.
|
||||||
|
|
||||||
Click the dropdown to select the permission level that UCP assigns by default
|
Click the drop-down menu to select the permission level that UCP assigns by default
|
||||||
to the private collections of new users. For example, if you change the value
|
to the private collections of new users. For example, if you change the value
|
||||||
to `View Only`, all users who log in for the first time after the setting is
|
to `View Only`, all users who log in for the first time after the setting is
|
||||||
changed have `View Only` access to their private collections, but permissions
|
changed have `View Only` access to their private collections, but permissions
|
||||||
|
|
@ -141,13 +141,16 @@ Click **Yes** to enable integrating UCP users and teams with LDAP servers.
|
||||||
| No simple pagination | If your LDAP server doesn't support pagination. |
|
| No simple pagination | If your LDAP server doesn't support pagination. |
|
||||||
| Just-In-Time User Provisioning | Whether to create user accounts only when users log in for the first time. The default value of `true` is recommended. If you upgraded from UCP 2.0.x, the default is `false`. |
|
| Just-In-Time User Provisioning | Whether to create user accounts only when users log in for the first time. The default value of `true` is recommended. If you upgraded from UCP 2.0.x, the default is `false`. |
|
||||||
|
|
||||||
> **Note**: LDAP connections using certificates created with TLS v1.2 do not currently advertise support for sha512WithRSAEncryption in the TLS handshake which leads to issues establishing connections with some clients. Support for advertising sha512WithRSAEncryption will be added in UCP 3.1.0.
|
> Note
|
||||||
|
>
|
||||||
|
> LDAP connections using certificates created with TLS v1.2 do not currently advertise support for sha512WithRSAEncryption in the TLS handshake, which leads to issues establishing connections with
|
||||||
|
> some clients. Support for advertising sha512WithRSAEncryption will be added in UCP 3.1.0.
|
||||||
|
|
||||||
{: .with-border}
|
{: .with-border}
|
||||||
|
|
||||||
Click **Confirm** to add your LDAP domain.
|
Click **Confirm** to add your LDAP domain.
|
||||||
|
|
||||||
To integrate with more LDAP servers, click **Add LDAP Domain**.
|
To integrate with more LDAP servers, click **Add LDAP Domain**.
|
||||||
|
|
||||||
## LDAP user search configurations
|
## LDAP user search configurations
|
||||||
|
|
||||||
|
|
|
||||||
|
|
@ -82,6 +82,7 @@ docker container run --rm {{ page.ucp_org }}/{{ page.ucp_repo }}:{{ page.ucp_ver
|
||||||
| `lifetime_minutes` | no | The initial session lifetime, in minutes. The default is 60 minutes. |
|
| `lifetime_minutes` | no | The initial session lifetime, in minutes. The default is 60 minutes. |
|
||||||
| `renewal_threshold_minutes` | no | The window of time, in minutes, before a session expires during which, if the session is used, its expiration is extended by the currently configured lifetime. A zero value disables session extension. The default is 20 minutes. |
|
| `renewal_threshold_minutes` | no | The window of time, in minutes, before a session expires during which, if the session is used, its expiration is extended by the currently configured lifetime. A zero value disables session extension. The default is 20 minutes. |
|
||||||
| `per_user_limit` | no | The maximum number of sessions that a user can have active simultaneously. If creating a new session would put a user over this limit, the least recently used session will be deleted. A value of zero disables limiting the number of sessions that users may have. The default is 10. |
|
| `per_user_limit` | no | The maximum number of sessions that a user can have active simultaneously. If creating a new session would put a user over this limit, the least recently used session will be deleted. A value of zero disables limiting the number of sessions that users may have. The default is 10. |
|
||||||
|
| `store_token_per_session` | no | If set, the user token is stored in `sessionStorage` instead of `localStorage`. Setting this option logs the user out and requires them to log back in, because it changes where their authentication data is stored. |
|
||||||
|
|
||||||
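To inspect these session parameters on a running cluster, one option is to export the current configuration; a sketch, assuming the `config-toml` API endpoint, an admin bearer token in `$AUTHTOKEN`, and the UCP hostname in `$UCP_HOST` (the exported `[auth.sessions]` table contains the parameters listed above):

```bash
# Export the current UCP configuration (TOML) for inspection.
$ curl --silent --insecure \
    -H "Authorization: Bearer $AUTHTOKEN" \
    "https://$UCP_HOST/api/ucp/config-toml" > ucp-config.toml
```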
### registries array (optional)
|
### registries array (optional)
|
||||||
|
|
||||||
|
|
@ -107,7 +108,9 @@ Configures audit logging options for UCP components.
|
||||||
|
|
||||||
Specifies scheduling options and the default orchestrator for new nodes.
|
Specifies scheduling options and the default orchestrator for new nodes.
|
||||||
|
|
||||||
> **Note**: If you run the `kubectl` command, such as `kubectl describe nodes`, to view scheduling rules on Kubernetes nodes, it does not reflect what is configured in UCP Admin settings. UCP uses taints to control container scheduling on nodes and is unrelated to kubectl's `Unschedulable` boolean flag.
|
> Note
|
||||||
|
>
|
||||||
|
> If you run a `kubectl` command, such as `kubectl describe nodes`, to view scheduling rules on Kubernetes nodes, the output does not reflect what is configured in UCP Admin settings. UCP uses taints to control container scheduling on nodes; this is unrelated to kubectl's `Unschedulable` boolean flag.
|
||||||
|
|
||||||
| Parameter | Required | Description |
|
| Parameter | Required | Description |
|
||||||
|:------------------------------|:---------|:-------------------------------------------------------------------------------------------------------------------------------------------|
|
|:------------------------------|:---------|:-------------------------------------------------------------------------------------------------------------------------------------------|
|
||||||
|
|
@ -136,7 +139,9 @@ Specifies whether DTR images require signing.
|
||||||
|
|
||||||
### log_configuration table (optional)
|
### log_configuration table (optional)
|
||||||
|
|
||||||
> Note: This feature has been deprecated. Refer to the [Deprecation notice](https://docs.docker.com/ee/ucp/release-notes/#deprecation-notice) for additional information.
|
> Note
|
||||||
|
>
|
||||||
|
> This feature has been deprecated. Refer to the [Deprecation notice](https://docs.docker.com/ee/ucp/release-notes/#deprecation-notice) for additional information.
|
||||||
|
|
||||||
Configures the logging options for UCP components.
|
Configures the logging options for UCP components.
|
||||||
|
|
||||||
|
|
@ -223,8 +228,9 @@ components. Assigning these values overrides the settings in a container's
|
||||||
| `worker_kube_reserved_resources` | no | Reserve resources for Docker UCP and Kubernetes components which are running on worker nodes. |
|
| `worker_kube_reserved_resources` | no | Reserve resources for Docker UCP and Kubernetes components which are running on worker nodes. |
|
||||||
| `kubelet_max_pods` | yes | Sets the number of pods that can run on a node. The default is `110`. |
|
| `kubelet_max_pods` | yes | Sets the number of pods that can run on a node. The default is `110`. |
|
||||||
|
|
||||||
|
> Note
|
||||||
*dev indicates that the functionality is only for development and testing. Arbitrary Kubernetes configuration parameters are not tested and supported under the Docker Enterprise Software Support Agreement.
|
>
|
||||||
|
> dev indicates that the functionality is only for development and testing. Arbitrary Kubernetes configuration parameters are not tested and supported under the Docker Enterprise Software Support Agreement.
|
||||||
|
|
||||||
### iSCSI (optional)
|
### iSCSI (optional)
|
||||||
Configures iSCSI options for UCP.
|
Configures iSCSI options for UCP.
|
||||||
|
|
|
||||||
|
|
@ -6,11 +6,11 @@ redirect_from:
|
||||||
- /ee/ucp/admin/install/install-on-azure/
|
- /ee/ucp/admin/install/install-on-azure/
|
||||||
---
|
---
|
||||||
|
|
||||||
Docker Universal Control Plane (UCP) closely integrates with Microsoft Azure for its Kubernetes Networking
|
Docker Universal Control Plane (UCP) closely integrates with Microsoft Azure for its Kubernetes Networking
|
||||||
and Persistent Storage feature set. UCP deploys the Calico CNI provider. In Azure,
|
and Persistent Storage feature set. UCP deploys the Calico CNI provider. In Azure,
|
||||||
the Calico CNI leverages the Azure networking infrastructure for data path
|
the Calico CNI leverages the Azure networking infrastructure for data path
|
||||||
networking and the Azure IPAM for IP address management. There are
|
networking and the Azure IPAM for IP address management. There are
|
||||||
infrastructure prerequisites required prior to UCP installation for the
|
infrastructure prerequisites required prior to UCP installation for the
|
||||||
Calico / Azure integration.
|
Calico / Azure integration.
|
||||||
|
|
||||||
## Docker UCP Networking
|
## Docker UCP Networking
|
||||||
|
|
@ -31,10 +31,9 @@ There are two options for provisoning IPs for the Kubernetes cluster on Azure:
|
||||||
or an ARM template. You can find an example of an ARM template
|
or an ARM template. You can find an example of an ARM template
|
||||||
[here](#manually-provision-ip-address-pools-as-part-of-an-azure-virtual-machine-scale-set).
|
[here](#manually-provision-ip-address-pools-as-part-of-an-azure-virtual-machine-scale-set).
|
||||||
|
|
||||||
## Azure Prerequisites
|
## Azure Prerequisites
|
||||||
|
|
||||||
You must meet the following infrastructure prerequisites in order
|
You must meet the following infrastructure prerequisites to successfully deploy Docker UCP on Azure. **Failure to meet these prerequisites may result in significant errors during the installation process.**
|
||||||
to successfully deploy Docker UCP on Azure:
|
|
||||||
|
|
||||||
- All UCP Nodes (Managers and Workers) need to be deployed into the same Azure
|
- All UCP Nodes (Managers and Workers) need to be deployed into the same Azure
|
||||||
Resource Group. The Azure Networking components (Virtual Network, Subnets,
|
Resource Group. The Azure Networking components (Virtual Network, Subnets,
|
||||||
|
|
@ -60,10 +59,10 @@ to successfully deploy Docker UCP on Azure:
|
||||||
|
|
||||||
UCP requires the following information for the installation:
|
UCP requires the following information for the installation:
|
||||||
|
|
||||||
- `subscriptionId` - The Azure Subscription ID in which the UCP
|
- `subscriptionId` - The Azure Subscription ID in which the UCP
|
||||||
objects are being deployed.
|
objects are being deployed.
|
||||||
- `tenantId` - The Azure Active Directory Tenant ID in which the UCP
|
- `tenantId` - The Azure Active Directory Tenant ID in which the UCP
|
||||||
objects are being deployed.
|
objects are being deployed.
|
||||||
- `aadClientId` - The Azure Service Principal ID.
|
- `aadClientId` - The Azure Service Principal ID.
|
||||||
- `aadClientSecret` - The Azure Service Principal Secret Key.
|
- `aadClientSecret` - The Azure Service Principal Secret Key.
|
||||||
|
|
||||||
|
|
@ -80,7 +79,7 @@ parameters as is.
|
||||||
|
|
||||||
```json
|
```json
|
||||||
{
|
{
|
||||||
"cloud":"AzurePublicCloud",
|
"cloud":"AzurePublicCloud",
|
||||||
"tenantId": "***",
|
"tenantId": "***",
|
||||||
"subscriptionId": "***",
|
"subscriptionId": "***",
|
||||||
"aadClientId": "***",
|
"aadClientId": "***",
|
||||||
|
|
@ -97,14 +96,20 @@ parameters as is.
|
||||||
There are some optional parameters for Azure deployments:
|
There are some optional parameters for Azure deployments:
|
||||||
|
|
||||||
- `primaryAvailabilitySetName` - The Worker Nodes availability set.
|
- `primaryAvailabilitySetName` - The Worker Nodes availability set.
|
||||||
- `vnetResourceGroup` - The Virtual Network Resource group, if your Azure Network objects live in a
|
- `vnetResourceGroup` - The Virtual Network Resource group, if your Azure Network objects live in a
|
||||||
separate resource group.
|
separate resource group.
|
||||||
- `routeTableName` - If you have defined multiple Route tables within
|
- `routeTableName` - If you have defined multiple Route tables within
|
||||||
an Azure subnet.
|
an Azure subnet.
|
||||||
|
|
||||||
See the [Kubernetes Azure Cloud Provider Config](https://github.com/kubernetes/cloud-provider-azure/blob/master/docs/cloud-provider-config.md) for more details on this configuration file.
|
See the [Kubernetes Azure Cloud Provider Config](https://github.com/kubernetes/cloud-provider-azure/blob/master/docs/cloud-provider-config.md) for more details on this configuration file.
|
||||||
|
|
||||||
## Considerations for IPAM Configuration
|
## Guidelines for IPAM Configuration
|
||||||
|
|
||||||
|
> **Warning**
|
||||||
|
>
|
||||||
|
> You must follow these guidelines and either use an appropriately sized network in Azure or take the proper action to fit within the subnet.
|
||||||
|
> Failure to follow these guidelines may cause significant issues during the
|
||||||
|
> installation process.
|
||||||
|
|
||||||
The subnet and the virtual network associated with the primary interface of the
|
The subnet and the virtual network associated with the primary interface of the
|
||||||
Azure virtual machines need to be configured with a large enough address
|
Azure virtual machines need to be configured with a large enough address
|
||||||
|
|
@ -113,7 +118,7 @@ the number of nodes in the cluster.
|
||||||
|
|
||||||
For example, in a cluster of 256 nodes, make sure that the address space of the subnet and the
|
For example, in a cluster of 256 nodes, make sure that the address space of the subnet and the
|
||||||
virtual network can allocate at least 128 * 256 IP addresses, in order to run a maximum of 128 pods
|
virtual network can allocate at least 128 * 256 IP addresses, in order to run a maximum of 128 pods
|
||||||
concurrently on a node. This would be ***in addition to*** initial IP allocations to virtual machine
|
concurrently on a node. This would be ***in addition to*** initial IP allocations to virtual machine
|
||||||
NICs (network interfaces) during Azure resource creation.
|
NICs (network interfaces) during Azure resource creation.
|
||||||
|
|
||||||
Accounting for IP addresses that are allocated to NICs during virtual machine bring-up, set
|
Accounting for IP addresses that are allocated to NICs during virtual machine bring-up, set
|
||||||
|
|
@ -122,7 +127,7 @@ ensures that the network can dynamically allocate at least 32768 addresses,
|
||||||
plus a buffer for initial allocations for primary IP addresses.
|
plus a buffer for initial allocations for primary IP addresses.
|
||||||
|
|
||||||
> Azure IPAM, UCP, and Kubernetes
|
> Azure IPAM, UCP, and Kubernetes
|
||||||
>
|
>
|
||||||
> The Azure IPAM module queries an Azure virtual machine's metadata to obtain
|
> The Azure IPAM module queries an Azure virtual machine's metadata to obtain
|
||||||
> a list of IP addresses which are assigned to the virtual machine's NICs. The
|
> a list of IP addresses which are assigned to the virtual machine's NICs. The
|
||||||
> IPAM module allocates these IP addresses to Kubernetes pods. You configure the
|
> IPAM module allocates these IP addresses to Kubernetes pods. You configure the
|
||||||
|
|
@ -192,7 +197,7 @@ for each virtual machine in the virtual machine scale set.
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
## UCP Installation
|
## UCP Installation
|
||||||
|
|
||||||
### Adjust the IP Count Value
|
### Adjust the IP Count Value
|
||||||
|
|
||||||
|
|
@ -246,7 +251,9 @@ subnet, and the `--host-address` maps to the private IP address of the master
|
||||||
node. Finally, if you want to adjust the number of IP addresses provisioned to
|
node. Finally, if you want to adjust the number of IP addresses provisioned to
|
||||||
each virtual machine, pass `--azure-ip-count`.
|
each virtual machine, pass `--azure-ip-count`.
|
||||||
|
|
||||||
> Note: The `pod-cidr` range must match the Azure Virtual Network's Subnet
|
> **Note**
|
||||||
|
>
|
||||||
|
> The `pod-cidr` range must match the Azure Virtual Network's Subnet
|
||||||
> attached to the hosts. For example, if the Azure Virtual Network had the range
|
> attached to the hosts. For example, if the Azure Virtual Network had the range
|
||||||
> `172.0.0.0/16` with Virtual Machines provisioned on an Azure Subnet of
|
> `172.0.0.0/16` with Virtual Machines provisioned on an Azure Subnet of
|
||||||
> `172.0.1.0/24`, then the Pod CIDR should also be `172.0.1.0/24`.
|
> `172.0.1.0/24`, then the Pod CIDR should also be `172.0.1.0/24`.
|
||||||
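Putting the flags from this section together, a UCP install on Azure might look like the following sketch (the image tag and angle-bracket values are placeholders to adjust for your environment):

```bash
$ docker container run --rm -it \
    --name ucp \
    -v /var/run/docker.sock:/var/run/docker.sock \
    docker/ucp:<version> install \
    --host-address <private-ip-of-manager> \
    --pod-cidr <azure-subnet-cidr> \
    --azure-ip-count <ips-per-vm> \
    --interactive
```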
|
|
|
||||||
|
|
@ -26,28 +26,28 @@ This can lead to misconfigurations that are difficult to troubleshoot.
|
||||||
Complete the following checks:
|
Complete the following checks:
|
||||||
|
|
||||||
- Systems:
|
- Systems:
|
||||||
|
|
||||||
- Confirm time sync across all nodes (and check time daemon logs for any large time drift)
|
- Confirm time sync across all nodes (and check time daemon logs for any large time drift)
|
||||||
- Check system requirements (production: 4 vCPU / 16 GB) for UCP managers and DTR replicas
|
- Check system requirements (production: 4 vCPU / 16 GB) for UCP managers and DTR replicas
|
||||||
- Review the full UCP/DTR/Engine port requirements
|
- Review the full UCP/DTR/Engine port requirements
|
||||||
- Ensure that your cluster nodes meet the minimum requirements
|
- Ensure that your cluster nodes meet the minimum requirements
|
||||||
- Before performing any upgrade, ensure that you meet all minimum requirements listed
|
- Before performing any upgrade, ensure that you meet all minimum requirements listed
|
||||||
in [UCP System requirements](/ee/ucp/admin/install/system-requirements/), including port openings (UCP 3.x added more
|
in [UCP System requirements](/ee/ucp/admin/install/system-requirements/), including port openings (UCP 3.x added more
|
||||||
required ports for Kubernetes), memory, and disk space. For example, manager nodes must have at least 8GB of memory.
|
required ports for Kubernetes), memory, and disk space. For example, manager nodes must have at least 8GB of memory.
|
||||||
> **Note**: If you are upgrading a cluster to UCP 3.0.2 or higher on Microsoft
|
> **Note**: If you are upgrading a cluster to UCP 3.0.2 or higher on Microsoft
|
||||||
> Azure, ensure that all of the Azure [prerequisites](install-on-azure.md#azure-prerequisites)
|
> Azure, ensure that all of the Azure [prerequisites](install-on-azure.md#azure-prerequisites)
|
||||||
> are met.
|
> are met.
|
||||||
|
|
||||||
- Storage:
|
- Storage:
|
||||||
|
|
||||||
- Check `/var/` storage allocation and increase it if usage is over 70%.
|
- Check `/var/` storage allocation and increase it if usage is over 70%.
|
||||||
- In addition, check all nodes’ local filesystems for any disk storage issues (and DTR backend storage, for example, NFS).
|
- In addition, check all nodes’ local filesystems for any disk storage issues (and DTR backend storage, for example, NFS).
|
||||||
- If you are not using the overlay2 storage driver, take this opportunity to switch to it for improved stability. (Note:
|
- If you are not using the overlay2 storage driver, take this opportunity to switch to it for improved stability. (Note:
|
||||||
The transition from Device mapper to Overlay2 is a destructive rebuild.)
|
The transition from Device mapper to Overlay2 is a destructive rebuild.)
|
||||||
|
|
||||||
- Operating system:
|
- Operating system:
|
||||||
|
|
||||||
- If the cluster nodes' OS is older (Ubuntu 14.x, RHEL 7.3, etc.), consider patching all relevant packages to the
|
- If the cluster nodes' OS is older (Ubuntu 14.x, RHEL 7.3, etc.), consider patching all relevant packages to the
|
||||||
most recent (including kernel).
|
most recent (including kernel).
|
||||||
- Rolling restart of each node before upgrade (to confirm in-memory settings are the same as startup-scripts).
|
- Rolling restart of each node before upgrade (to confirm in-memory settings are the same as startup-scripts).
|
||||||
- Run `check-config.sh` on each cluster node (after rolling restart) for any kernel compatibility issues.
|
- Run `check-config.sh` on each cluster node (after rolling restart) for any kernel compatibility issues.
|
||||||
|
|
@ -55,7 +55,7 @@ Complete the following checks:
|
||||||
- Procedural:
|
- Procedural:
|
||||||
|
|
||||||
- Perform Swarm, UCP, and DTR backups pre-upgrade
|
- Perform Swarm, UCP, and DTR backups pre-upgrade
|
||||||
- Gather compose file/service/stack files
|
- Gather compose file/service/stack files
|
||||||
- Generate a UCP Support dump (for point in time) pre-upgrade
|
- Generate a UCP Support dump (for point in time) pre-upgrade
|
||||||
- Preload Engine/UCP/DTR images. If your cluster is offline (with no
|
- Preload Engine/UCP/DTR images. If your cluster is offline (with no
|
||||||
connection to the internet) then Docker provides tarballs containing all
|
connection to the internet) then Docker provides tarballs containing all
|
||||||
|
|
@ -69,26 +69,26 @@ Complete the following checks:
|
||||||
```
|
```
|
||||||
|
|
||||||
- Load troubleshooting packages (netshoot, etc)
|
- Load troubleshooting packages (netshoot, etc)
|
||||||
- Best order for upgrades: Engine, UCP, and then DTR. Note: The scope of this topic is limited to upgrade instructions for UCP.
|
- Best order for upgrades: Engine, UCP, and then DTR. Note: The scope of this topic is limited to upgrade instructions for UCP.
|
||||||
|
|
||||||
- Upgrade strategy:
|
- Upgrade strategy:
|
||||||
For each worker node that requires an upgrade, you can upgrade that node in place or you can replace the node
|
For each worker node that requires an upgrade, you can upgrade that node in place or you can replace the node
|
||||||
with a new worker node. The type of upgrade you perform depends on what is needed for each node:
|
with a new worker node. The type of upgrade you perform depends on what is needed for each node:
|
||||||
|
|
||||||
- [Automated, in-place cluster upgrade](#automated-in-place-cluster-upgrade): Performed on any
|
- [Automated, in-place cluster upgrade](#automated-in-place-cluster-upgrade): Performed on any
|
||||||
manager node. Automatically upgrades the entire cluster.
|
manager node. Automatically upgrades the entire cluster.
|
||||||
- Manual cluster upgrade: Performed using the CLI or the UCP UI. Automatically upgrades manager
|
- Manual cluster upgrade: Performed using the CLI. Automatically upgrades manager
|
||||||
nodes and allows you to control the upgrade order of worker nodes. This type of upgrade is more
|
nodes and allows you to control the upgrade order of worker nodes. This type of upgrade is more
|
||||||
advanced than the automated, in-place cluster upgrade.
|
advanced than the automated, in-place cluster upgrade.
|
||||||
- [Upgrade existing nodes in place](#phased-in-place-cluster-upgrade): Performed using the CLI.
|
- [Upgrade existing nodes in place](#phased-in-place-cluster-upgrade): Performed using the CLI.
|
||||||
Automatically upgrades manager nodes and allows you to control the order of worker node upgrades.
|
Automatically upgrades manager nodes and allows you to control the order of worker node upgrades.
|
||||||
- [Replace all worker nodes using blue-green deployment](#replace-existing-worker-nodes-using-blue-green-deployment):
|
- [Replace all worker nodes using blue-green deployment](#replace-existing-worker-nodes-using-blue-green-deployment):
|
||||||
Performed using the CLI. This type of upgrade allows you to
|
Performed using the CLI. This type of upgrade allows you to
|
||||||
stand up a new cluster in parallel to the current code
|
stand up a new cluster in parallel to the current code
|
||||||
and cut over when complete. This type of upgrade allows you to join new worker nodes,
|
and cut over when complete. This type of upgrade allows you to join new worker nodes,
|
||||||
schedule workloads to run on new nodes, pause, drain, and remove old worker nodes
|
schedule workloads to run on new nodes, pause, drain, and remove old worker nodes
|
||||||
in batches of multiple nodes rather than one at a time, and shut down servers to
|
in batches of multiple nodes rather than one at a time, and shut down servers to
|
||||||
remove worker nodes. This type of upgrade is the most advanced.
|
remove worker nodes. This type of upgrade is the most advanced.
|
||||||
|
|
||||||
## Back up your cluster
|
## Back up your cluster
|
||||||
|
|
||||||
|
|
@ -118,48 +118,26 @@ Starting with the manager nodes, and then worker nodes:
|
||||||
and check that the node is healthy and is part of the cluster.
|
and check that the node is healthy and is part of the cluster.
|
||||||
|
|
||||||
## Upgrade UCP
|
## Upgrade UCP
|
||||||
When upgrading Docker Universal Control Plane (UCP) to version {{ page.ucp_version }}, you can choose from
|
When upgrading Docker Universal Control Plane (UCP) to version {{ page.ucp_version }}, you can choose from
|
||||||
different upgrade workflows:
|
different upgrade workflows:
|
||||||
> **Important**: In all upgrade workflows, manager nodes are automatically upgraded in place. You cannot control the order
|
> **Important**: In all upgrade workflows, manager nodes are automatically upgraded in place. You cannot control the order
|
||||||
of manager node upgrades.
|
of manager node upgrades.
|
||||||
|
|
||||||
- [Automated, in-place cluster upgrade](#automated-in-place-cluster-upgrade): Performed on any
|
- [Automated, in-place cluster upgrade](#automated-in-place-cluster-upgrade): Performed on any
|
||||||
manager node. Automatically upgrades the entire cluster.
|
manager node. Automatically upgrades the entire cluster.
|
||||||
- Manual cluster upgrade: Performed using the CLI or the UCP UI. Automatically upgrades manager
|
- Manual cluster upgrade: Performed using the CLI. Automatically upgrades manager
|
||||||
nodes and allows you to control the upgrade order of worker nodes. This type of upgrade is more
|
nodes and allows you to control the upgrade order of worker nodes. This type of upgrade is more
|
||||||
advanced than the automated, in-place cluster upgrade.
|
advanced than the automated, in-place cluster upgrade.
|
||||||
- [Upgrade existing nodes in place](#phased-in-place-cluster-upgrade): Performed using the CLI.
|
- [Upgrade existing nodes in place](#phased-in-place-cluster-upgrade): Performed using the CLI.
|
||||||
Automatically upgrades manager nodes and allows you to control the order of worker node upgrades.
|
Automatically upgrades manager nodes and allows you to control the order of worker node upgrades.
|
||||||
- [Replace all worker nodes using blue-green deployment](#replace-existing-worker-nodes-using-blue-green-deployment):
|
- [Replace all worker nodes using blue-green deployment](#replace-existing-worker-nodes-using-blue-green-deployment):
|
||||||
Performed using the CLI. This type of upgrade allows you to
|
Performed using the CLI. This type of upgrade allows you to
|
||||||
stand up a new cluster in parallel to the current code
|
stand up a new cluster in parallel to the current code
|
||||||
and cut over when complete. This type of upgrade allows you to join new worker nodes,
|
and cut over when complete. This type of upgrade allows you to join new worker nodes,
|
||||||
schedule workloads to run on new nodes, pause, drain, and remove old worker nodes
|
schedule workloads to run on new nodes, pause, drain, and remove old worker nodes
|
||||||
in batches of multiple nodes rather than one at a time, and shut down servers to
|
in batches of multiple nodes rather than one at a time, and shut down servers to
|
||||||
remove worker nodes. This type of upgrade is the most advanced.
|
remove worker nodes. This type of upgrade is the most advanced.
|
||||||
|
|
||||||
### Use the web interface to perform an upgrade
|
|
||||||
|
|
||||||
> **Note**: If you plan to add nodes to the UCP cluster, use the [CLI](#use-the-cli-to-perform-an-upgrade) for the upgrade.
|
|
||||||
|
|
||||||
When an upgrade is available for a UCP installation, a banner appears.
|
|
||||||
|
|
||||||
{: .with-border}
|
|
||||||
|
|
||||||
Clicking this message takes an admin user directly to the upgrade process.
|
|
||||||
It can be found under the **Upgrade** tab of the **Admin Settings** section.
|
|
||||||
|
|
||||||
{: .with-border}
|
|
||||||
|
|
||||||
In the **Available Versions** drop down, select the version you want to update.
|
|
||||||
Copy and paste the CLI command provided into a terminal on a manager node to
|
|
||||||
perform the upgrade.
|
|
||||||
|
|
||||||
During the upgrade, the web interface will be unavailable, and you should wait
|
|
||||||
until completion before continuing to interact with it. When the upgrade
|
|
||||||
completes, you'll see a notification that a newer version of the web interface
|
|
||||||
is available and a browser refresh is required to see it.
|
|
||||||
|
|
||||||
### Use the CLI to perform an upgrade
|
### Use the CLI to perform an upgrade
|
||||||
|
|
||||||
There are two different ways to upgrade a UCP Cluster via the CLI. The first is
|
There are two different ways to upgrade a UCP Cluster via the CLI. The first is
|
||||||
|
|
@ -193,7 +171,7 @@ $ docker container run --rm -it \
|
||||||
|
|
||||||
The upgrade command will print messages regarding the progress of the upgrade as
|
The upgrade command will print messages regarding the progress of the upgrade as
|
||||||
it automatically upgrades UCP on all nodes in the cluster.
|
it automatically upgrades UCP on all nodes in the cluster.
|
||||||
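For reference, the automated in-place upgrade is typically started with a command along these lines (a sketch; `<version>` is a placeholder for the target UCP version tag):

```bash
$ docker container run --rm -it \
    --name ucp \
    -v /var/run/docker.sock:/var/run/docker.sock \
    docker/ucp:<version> \
    upgrade --interactive
```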
|
|
||||||
### Phased in-place cluster upgrade
|
### Phased in-place cluster upgrade
|
||||||
|
|
||||||
The phased approach of upgrading UCP, introduced in UCP 3.2, allows granular
|
The phased approach of upgrading UCP, introduced in UCP 3.2, allows granular
|
||||||
|
|
@ -228,7 +206,7 @@ $ docker container run --rm -it \
|
||||||
|
|
||||||
The `--manual-worker-upgrade` flag will add an upgrade-hold label to all worker
|
The `--manual-worker-upgrade` flag will add an upgrade-hold label to all worker
|
||||||
nodes. UCP constantly monitors this label, and if the label is removed,
|
nodes. UCP constantly monitors this label, and if the label is removed,
|
||||||
UCP will then upgrade the node.
|
UCP will then upgrade the node.
|
||||||
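A sketch of starting the phased upgrade with this flag (same invocation shape as the automated upgrade above; `<version>` is a placeholder):

```bash
$ docker container run --rm -it \
    --name ucp \
    -v /var/run/docker.sock:/var/run/docker.sock \
    docker/ucp:<version> \
    upgrade --manual-worker-upgrade --interactive
```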
|
|
||||||
To trigger the upgrade on a worker node, remove the
|
To trigger the upgrade on a worker node, remove the
|
||||||
label.
|
label.
|
||||||
|
|
@ -240,7 +218,7 @@ $ docker node update --label-rm com.docker.ucp.upgrade-hold <node name or id>
|
||||||
(Optional) Join new worker nodes to the cluster. Once the manager nodes have
|
(Optional) Join new worker nodes to the cluster. Once the manager nodes have
|
||||||
been upgraded to a new UCP version, new worker nodes can be added to the
|
been upgraded to a new UCP version, new worker nodes can be added to the
|
||||||
cluster, assuming they are running the corresponding new Docker Engine
|
cluster, assuming they are running the corresponding new Docker Engine
|
||||||
version.
|
version.
|
||||||
|
|
||||||
The swarm join token can be found in the UCP UI, or from an SSH session on a UCP
|
The swarm join token can be found in the UCP UI, or from an SSH session on a UCP
|
||||||
manager node. More information on finding the swarm token can be found
|
manager node. More information on finding the swarm token can be found
|
||||||
|
|
@ -252,16 +230,16 @@ $ docker swarm join --token SWMTKN-<YOUR TOKEN> <manager ip>:2377
|
||||||
|
|
||||||
### Replace existing worker nodes using blue-green deployment
|
### Replace existing worker nodes using blue-green deployment
|
||||||
|
|
||||||
This workflow creates a parallel environment for a new deployment, which can greatly reduce downtime. It upgrades
|
This workflow creates a parallel environment for a new deployment, which can greatly reduce downtime. It upgrades
|
||||||
worker node engines without disrupting workloads, and allows traffic to be migrated to the new environment with
|
worker node engines without disrupting workloads, and allows traffic to be migrated to the new environment with
|
||||||
worker node rollback capability.
|
worker node rollback capability.
|
||||||
|
|
||||||
> **Note**: Steps 2 through 6 can be repeated for groups of nodes - you do not have to replace all worker
|
> **Note**: Steps 2 through 6 can be repeated for groups of nodes - you do not have to replace all worker
|
||||||
nodes in the cluster at one time.
|
nodes in the cluster at one time.
|
||||||
|
|
||||||
1. Upgrade manager nodes
|
1. Upgrade manager nodes
|
||||||
|
|
||||||
- The `--manual-worker-upgrade` command automatically upgrades manager nodes first, and then allows you to control
|
- The `--manual-worker-upgrade` command automatically upgrades manager nodes first, and then allows you to control
|
||||||
the upgrade of the UCP components on the worker nodes using node labels.
|
the upgrade of the UCP components on the worker nodes using node labels.
|
||||||
|
|
||||||
```
|
```
|
||||||
|
|
@ -275,8 +253,8 @@ nodes in the cluster at one time.
|
||||||
```
|
```
|
||||||
|
|
||||||
2. Join new worker nodes
|
2. Join new worker nodes
|
||||||
|
|
||||||
- New worker nodes have newer engines already installed and have the new UCP version running when they join the cluster.
|
- New worker nodes have newer engines already installed and have the new UCP version running when they join the cluster.
|
||||||
On the manager node, run commands similar to the following examples to get the Swarm Join token and add new worker nodes:
|
On the manager node, run commands similar to the following examples to get the Swarm Join token and add new worker nodes:
|
||||||
```
|
```
|
||||||
docker swarm join-token worker
|
docker swarm join-token worker
|
||||||
|
|
@ -290,21 +268,21 @@ nodes in the cluster at one time.
|
||||||
docker swarm join --token SWMTKN-<YOUR TOKEN> <manager ip>:2377
|
docker swarm join --token SWMTKN-<YOUR TOKEN> <manager ip>:2377
|
||||||
```
|
```
|
||||||
4. Pause all existing worker nodes
|
4. Pause all existing worker nodes
|
||||||
|
|
||||||
- This ensures that new workloads are not deployed on existing nodes.
|
- This ensures that new workloads are not deployed on existing nodes.
|
||||||
```
|
```
|
||||||
docker node update --availability pause <node name>
|
docker node update --availability pause <node name>
|
||||||
```
|
```
|
||||||
5. Drain paused nodes for workload migration
|
5. Drain paused nodes for workload migration
|
||||||
|
|
||||||
- Redeploy workloads on all existing nodes to new nodes. Because all existing nodes are “paused”, workloads are
|
- Redeploy workloads on all existing nodes to new nodes. Because all existing nodes are “paused”, workloads are
|
||||||
automatically rescheduled onto new nodes.
|
automatically rescheduled onto new nodes.
|
||||||
```
|
```
|
||||||
docker node update --availability drain <node name>
|
docker node update --availability drain <node name>
|
||||||
```
|
```
|
||||||
6. Remove drained nodes
|
6. Remove drained nodes
|
||||||
|
|
||||||
- After each node is fully drained, it can be shut down and removed from the cluster. On each worker node that is
|
- After each node is fully drained, it can be shut down and removed from the cluster. On each worker node that is
|
||||||
getting removed from the cluster, run a command similar to the following example:
|
getting removed from the cluster, run a command similar to the following example:
|
||||||
```
|
```
|
||||||
docker swarm leave <node name>
|
docker swarm leave <node name>
|
||||||
|
|
@ -314,9 +292,9 @@ nodes in the cluster at one time.
|
||||||
docker node rm <node name>
|
docker node rm <node name>
|
||||||
```
|
```
|
||||||
7. Remove old UCP agents
|
7. Remove old UCP agents
|
||||||
|
|
||||||
- After upgrade completion, remove the old UCP agents (including the s390x and Windows agents) that were carried over
|
- After upgrade completion, remove the old UCP agents (including the s390x and Windows agents) that were carried over
|
||||||
from the previous install by running the following command on the manager node:
|
from the previous install by running the following command on the manager node:
|
||||||
```
|
```
|
||||||
docker service rm ucp-agent
|
docker service rm ucp-agent
|
||||||
docker service rm ucp-agent-win
|
docker service rm ucp-agent-win
|
||||||
|
|
@ -326,44 +304,44 @@ nodes in the cluster at one time.
|
||||||
### Troubleshooting
|
### Troubleshooting
|
||||||
|
|
||||||
- Upgrade compatibility
|
- Upgrade compatibility
|
||||||
|
|
||||||
- The upgrade command automatically checks for multiple `ucp-worker-agents` before
|
- The upgrade command automatically checks for multiple `ucp-worker-agents` before
|
||||||
proceeding with the upgrade. The existence of multiple `ucp-worker-agents` might indicate
|
proceeding with the upgrade. The existence of multiple `ucp-worker-agents` might indicate
|
||||||
that the cluster is still in the middle of a prior manual upgrade, and you must resolve the
|
that the cluster is still in the middle of a prior manual upgrade, and you must resolve the
|
||||||
conflicting node label issues before proceeding with the upgrade.
|
conflicting node label issues before proceeding with the upgrade.
|
||||||
|
|
||||||
- Upgrade failures
|
- Upgrade failures
|
||||||
- For worker nodes, an upgrade failure can be rolled back by changing the node label back
|
- For worker nodes, an upgrade failure can be rolled back by changing the node label back
|
||||||
to the previous target version. Rollback of manager nodes is not supported.
|
to the previous target version. Rollback of manager nodes is not supported.
|
||||||
|
|
||||||
- Kubernetes errors in node state messages after upgrading UCP
|
- Kubernetes errors in node state messages after upgrading UCP
|
||||||
(from https://github.com/docker/kbase/how-to-resolve-kubernetes-errors-after-upgrading-ucp/readme.md)
|
(from https://github.com/docker/kbase/how-to-resolve-kubernetes-errors-after-upgrading-ucp/readme.md)
|
||||||
|
|
||||||
- The following information applies if you have upgraded to UCP 3.0.0 or newer:
|
- The following information applies if you have upgraded to UCP 3.0.0 or newer:
|
||||||
|
|
||||||
- After performing a UCP upgrade from 2.2.x to 3.x.x, you might see unhealthy nodes in your UCP
|
- After performing a UCP upgrade from 2.2.x to 3.x.x, you might see unhealthy nodes in your UCP
|
||||||
dashboard with any of the following errors listed:
|
dashboard with any of the following errors listed:
|
||||||
```
|
```
|
||||||
Awaiting healthy status in Kubernetes node inventory
|
Awaiting healthy status in Kubernetes node inventory
|
||||||
Kubelet is unhealthy: Kubelet stopped posting node status
|
Kubelet is unhealthy: Kubelet stopped posting node status
|
||||||
```
|
```
|
||||||
|
|
||||||
- Alternatively, you may see other port errors such as the one below in the ucp-controller
|
- Alternatively, you may see other port errors such as the one below in the ucp-controller
|
||||||
container logs:
|
container logs:
|
||||||
```
|
```
|
||||||
http: proxy error: dial tcp 10.14.101.141:12388: connect: no route to host
|
http: proxy error: dial tcp 10.14.101.141:12388: connect: no route to host
|
||||||
```
|
```
|
||||||
|
|
||||||
- UCP 3.x.x requires additional open ports for Kubernetes use. For ports that are used by the
|
- UCP 3.x.x requires additional open ports for Kubernetes use. For ports that are used by the
|
||||||
latest UCP versions and the scope of port use, refer to
|
latest UCP versions and the scope of port use, refer to
|
||||||
[this page](https://docs.docker.com/ee/ucp/admin/install/system-requirements/#ports-used).
|
[this page](https://docs.docker.com/ee/ucp/admin/install/system-requirements/#ports-used).
|
||||||
|
|
||||||
- If you have upgraded from UCP 2.2.x to 3.0.x, verify that the ports 179, 6443, 6444 and 10250 are
|
- If you have upgraded from UCP 2.2.x to 3.0.x, verify that the ports 179, 6443, 6444 and 10250 are
|
||||||
open for Kubernetes traffic.
|
open for Kubernetes traffic.
|
||||||
|
|
||||||
- If you have upgraded to UCP 3.1.x, in addition to the ports listed above, also open
|
- If you have upgraded to UCP 3.1.x, in addition to the ports listed above, also open
|
||||||
ports 9099 and 12388.
|
ports 9099 and 12388.
|
||||||
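One quick way to spot-check that these ports are reachable from another node, assuming netcat (`nc`) is installed (`<node-ip>` is a placeholder):

```bash
# Probe the Kubernetes-related ports on a given node.
for port in 179 6443 6444 9099 10250 12388; do
  nc -zv <node-ip> "$port"
done
```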
|
|
||||||
### Recommended upgrade paths

From UCP 3.0: UCP 3.0 -> UCP 3.1 -> UCP 3.2

@ -378,4 +356,4 @@ From UCP 2.0: UCP 2.0 -> UCP 2.1 -> UCP 2.2

## Where to go next

- [Upgrade DTR](/ee/dtr/admin/upgrade/)

@ -69,7 +69,7 @@ Swarm services use `update-delay` to control the speed at which a service is upd

Use `update-delay` if …

- You are optimizing for the fewest dropped connections, with a longer update cycle as an acceptable tradeoff.
- Interlock update convergence takes a long time in your environment (this can occur when there is a large number of overlay networks).

Do not use `update-delay` if …

@ -12,7 +12,7 @@ First, using an existing Docker engine, save the images:

```bash
$> docker save {{ page.ucp_org }}/ucp-interlock:{{ page.ucp_version }} > interlock.tar
$> docker save {{ page.ucp_org }}/ucp-interlock-extension:{{ page.ucp_version }} > interlock-extension-nginx.tar
$> docker save {{ page.ucp_org }}/ucp-interlock-proxy:{{ page.ucp_version }} > interlock-proxy-nginx.tar
```

> Note

@ -32,7 +32,7 @@ Next, copy these files to each node in the Docker Swarm cluster and run the foll

```bash
$> docker load < interlock.tar
$> docker load < interlock-extension-nginx.tar
$> docker load < interlock-proxy-nginx.tar
```

## Next steps

@ -33,7 +33,7 @@ deployed. As part of this, services using HRM labels are inspected.

3. The HRM service is removed.
4. The `ucp-interlock` service is deployed with the configuration created.
5. The `ucp-interlock` service deploys the `ucp-interlock-extension` and
   `ucp-interlock-proxy` services.

The only way to roll back an upgrade is by restoring from a backup taken
before the upgrade. If something goes wrong during the upgrade process, you
@ -90,7 +90,7 @@ don't have any configuration with the same name by running:

* If either the `ucp-interlock-extension` or `ucp-interlock-proxy` services are
not running, it's possible that there are port conflicts.
As a workaround, re-enable the layer 7 routing configuration from the
[UCP settings page](index.md). Make sure the ports you choose are not
being used by other services; the sketch below shows one way to check.

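A quick way to see which ports existing Swarm services already publish (a sketch using `docker service ls` output formatting):

```bash
# print each service's name and its published ports
$ docker service ls --format '{{.Name}}: {{.Ports}}'
```
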
## Workarounds and clean-up

@ -199,7 +199,7 @@ To customize subnet allocation for your Swarm networks, you can [optionally conf

For example, the following command is used when initializing Swarm:

```bash
$ docker swarm init --default-addr-pool 10.20.0.0/16 --default-addr-pool-mask-length 26
```

Whenever a user creates a network but does not use the `--subnet` command-line option, the subnet for this network is allocated sequentially from the next available subnet in the pool. If the specified network is already allocated, that network is not used for Swarm.
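
To see which subnet a newly created overlay network received from the pool, a sketch (the network name `net1` is illustrative):

```bash
$ docker network create --driver overlay net1
# print the subnet that was allocated from the default address pool
$ docker network inspect net1 --format '{{(index .IPAM.Config 0).Subnet}}'
```
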
@ -79,11 +79,11 @@ To create the custom address pool for Swarm, you must define at least one defaul

Docker allocates subnet addresses from the address ranges specified by the `--default-addr-pool` option. For example, a command line option `--default-addr-pool 10.10.0.0/16` indicates that Docker will allocate subnets from that `/16` address range. If `--default-addr-pool-mask-length` were unspecified or set explicitly to 24, this would result in 256 `/24` networks of the form `10.10.X.0/24`.

The subnet range comes from the `--default-addr-pool` option (such as `10.10.0.0/16`). The prefix length there sets the size of the pool, and together with `--default-addr-pool-mask-length` it determines how many networks can be created within that `default-addr-pool` range. The `--default-addr-pool` option may occur multiple times, with each occurrence providing additional addresses for Docker to use for overlay subnets.

The format of the command is:

```
$ docker swarm init --default-addr-pool <IP range in CIDR> [--default-addr-pool <IP range in CIDR> --default-addr-pool-mask-length <CIDR value>]
```

To create a default IP address pool with a `/16` (Class B) range for the `10.20.0.0` network, the command looks like this:

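A sketch following the syntax above (the pool range is the one named in the sentence; the mask length is left at its default):

```bash
$ docker swarm init --default-addr-pool 10.20.0.0/16
```
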
@ -105,7 +105,7 @@ all the subnets are exhausted.

Refer to the following pages for more information:

- [Swarm networking](./networking.md) for more information about the default address pool usage
- [UCP Installation Planning](../../ee/ucp/admin/install/plan-installation.md) for more information about planning the network design before installation
- `docker swarm init` [CLI reference](../reference/commandline/swarm_init.md) for more detail on the `--default-addr-pool` flag.

### Configure the advertise address

@ -90,7 +90,7 @@ This `Dockerfile` refers to a couple of files we haven't created yet, namely

Create two more files, `requirements.txt` and `app.py`, and put them in the same
folder with the `Dockerfile`. This completes our app, which as you can see is
quite simple. When the above `Dockerfile` is built into an image, `app.py` and
`requirements.txt` are present because of that `Dockerfile`'s `COPY` command,
and the output from `app.py` is accessible over HTTP thanks to the `EXPOSE`
command.

@ -46,22 +46,34 @@ On {{ linux-dist-long }}, Docker EE supports storage drivers, `overlay2` and `de

### FIPS 140-2 cryptographic module support

[Federal Information Processing Standards (FIPS) Publication 140-2](https://csrc.nist.gov/csrc/media/publications/fips/140/2/final/documents/fips1402.pdf)
is a United States Federal security requirement for cryptographic modules.

With a Docker Engine - Enterprise Basic license for versions 18.03 and later,
Docker provides FIPS 140-2 support in RHEL 7.3, 7.4 and 7.5. This includes a
FIPS supported cryptographic module. If the RHEL implementation already has FIPS
support enabled, FIPS is also automatically enabled in the Docker engine. If
FIPS support is not already enabled in your RHEL implementation, visit the
[Red Hat Product Documentation](https://access.redhat.com/documentation/en-us/)
for instructions on how to enable it.

To verify the FIPS-140-2 module is enabled in the Linux kernel, confirm the file
`/proc/sys/crypto/fips_enabled` contains `1`.

```
$ cat /proc/sys/crypto/fips_enabled
1
```

> **Note**: FIPS is only supported in Docker Engine - Enterprise. UCP
> and DTR currently do not have support for FIPS-140-2.

You can override FIPS 140-2 compliance on a system that is not in FIPS 140-2
mode. Note, this **does not** change FIPS 140-2 mode on the system. To override
the FIPS 140-2 mode, follow the steps below.

Create a file called `/etc/systemd/system/docker.service.d/fips-module.conf`.
Add the following:

```
[Service]
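# The remainder of this drop-in file is not shown here. Judging from the
# "Disabling" section below, which sets DOCKER_FIPS=0, the enabling line
# is presumably:
Environment="DOCKER_FIPS=1"
```
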
@ -76,7 +88,8 @@ Restart the Docker service as root.

`$ sudo systemctl restart docker`

To confirm Docker is running with FIPS-140-2 enabled, run the `docker info`
command:

{% raw %}
```
docker info --format {{.SecurityOptions}}
```
{% endraw %}

### Disabling FIPS-140-2

If the system has the FIPS 140-2 cryptographic module installed on the operating
system, it is possible to disable FIPS-140-2 compliance.

To disable FIPS 140-2 in Docker but not the operating system, set the value
`DOCKER_FIPS=0` in `/etc/systemd/system/docker.service.d/fips-module.conf`.

Reload the Docker configuration to systemd.

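Assuming the standard systemd workflow, reloading the configuration and restarting Docker looks like this:

```bash
# pick up the edited drop-in file, then restart the engine
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker
```
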
@ -53,9 +53,9 @@ for a lot more information.

## Prevent Docker from manipulating iptables

It is possible to set the `iptables` key to `false` in the Docker engine's configuration file at `/etc/docker/daemon.json`, but this option is not appropriate for most users. It is not possible to completely prevent Docker from creating `iptables` rules, and creating them after the fact is extremely involved and beyond the scope of these instructions. Setting `iptables` to `false` will more than likely break container networking for the Docker engine.
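
For reference, the setting is a single key in `daemon.json` (shown only to illustrate the option discussed above, not as a recommendation):

```
{
  "iptables": false
}
```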

For system integrators who wish to build the Docker runtime into other applications, explore the [`moby` project](https://mobyproject.org/).

## Next steps

@ -50,7 +50,7 @@ the `--mount` flag was used for swarm services. However, starting with Docker

  is mounted in the container. May be specified as `destination`, `dst`,
  or `target`.
- The `tmpfs-type` and `tmpfs-mode` options. See
  [tmpfs options](#specify-tmpfs-options).

The examples below show both the `--mount` and `--tmpfs` syntax where possible,
and `--mount` is presented first.

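For instance, a minimal pair of commands following that convention (a sketch; `nginx:latest`, the container names, and the `/app` target are illustrative):

```bash
# --mount syntax, presented first
$ docker run -d -it --name tmptest \
  --mount type=tmpfs,destination=/app \
  nginx:latest

# equivalent --tmpfs syntax
$ docker run -d -it --name tmptest2 \
  --tmpfs /app \
  nginx:latest
```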