Merge branch '2019-09-patch-release-notes' into interlock3-updates

This commit is contained in:
Traci Morrison 2019-10-07 09:56:44 -04:00 committed by GitHub
commit 70afdf4765
42 changed files with 646 additions and 392 deletions


@ -1512,7 +1512,7 @@ manuals:
- title: Specifying a routing mode
path: /ee/ucp/interlock/usage/interlock-vip-mode/
- title: Using routing labels
path: /ee/ucp/interlock/usage/labels-reference/
- title: Implementing redirects
path: /ee/ucp/interlock/usage/redirects/
- title: Implementing a service cluster
@ -4041,6 +4041,8 @@ manuals:
title: Trust Chain
- path: /docker-hub/publish/byol/
title: Bring Your Own License (BYOL)
- path: /docker-hub/deactivate-account/
title: Deactivate an account or an organization
- sectiontitle: Open-source projects
section:
- sectiontitle: Docker Notary


@ -45,13 +45,13 @@ your client and daemon API versions.
{% if site.data[include.datafolder][include.datafile].experimentalcli %}
> This command is experimental on the Docker client.
>
> **It should not be used in production environments.**
>
> To enable experimental features in the Docker CLI, edit the
> [config.json](/engine/reference/commandline/cli.md#configuration-files)
> and set `experimental` to `enabled`. You can go [here](https://docs.docker.com/engine/reference/commandline/cli/#experimental-features)
> for more information.
{: .important }


@ -9,29 +9,27 @@
</div>
<div class="navbar-collapse collapse">
<ul class="primary nav navbar-nav">
<li><a href="https://docker.com/why-docker">Why Docker?</a></li>
<li><a href="https://docker.com/get-started">Product</a></li>
<li class="dropdown">
<a href="#" class="dropdown-toggle" data-toggle="dropdown" role="button" aria-haspopup="true" aria-expanded="false">Get Docker <span class="caret"></span></a>
<ul class="dropdown-menu nav-main">
<h6 class="dropdown-header">For Desktop</h6>
<li><a href="https://docker.com/products/docker-desktop">Mac & Windows</a></li>
<h6 class="dropdown-header">For Cloud Providers</h6>
<li><a href="https://docker.com/partners/aws">AWS</a></li>
<li><a href="https://docker.com/partners/microsoft">Azure</a></li>
<h6 class="dropdown-header">For Servers</h6>
<li><a href="https://hub.docker.com/editions/enterprise/docker-ee-server-windows">Windows Server</a></li>
<li><a href="https://hub.docker.com/editions/enterprise/docker-ee-server-centos">CentOS</a></li>
<li><a href="https://hub.docker.com/editions/community/docker-ce-server-debian">Debian</a></li>
<li><a href="https://hub.docker.com/editions/community/docker-ce-server-fedora">Fedora</a></li>
<li><a href="https://hub.docker.com/editions/enterprise/docker-ee-server-oraclelinux">Oracle Enterprise Linux</a></li>
<li><a href="https://hub.docker.com/editions/enterprise/docker-ee-server-rhel">RHEL</a></li>
<li><a href="https://hub.docker.com/editions/enterprise/docker-ee-server-sles">SLES</a></li>
<li><a href="https://hub.docker.com/editions/enterprise/docker-ee-server-ubuntu">Ubuntu</a></li>
</ul>
</li>
<li><a href="https://docs.docker.com">Docs</a></li>
<li><a href="https://docker.com/docker-community">Community</a></li>
<li><a href="https://hub.docker.com/signup">Create Docker ID</a></li>
<li><a href="https://hub.docker.com/sso/start">Sign In</a></li>


@ -10,19 +10,33 @@ keywords: Docker, application template, Application Designer,
## Overview
Docker Template is a CLI plugin that introduces a top-level `docker template`
command that allows users to create new Docker applications by using a library
of templates. There are two types of templates — service templates and
application templates.
A _service template_ is a container image that generates code and contains the
metadata associated with the image.
- The container image takes `/run/configuration` mounted file as input to
generate assets such as code, Dockerfile, and `docker-compose.yaml` for a
given service, and writes the output to the `/project` mounted folder.
- The metadata file that describes the service template is called the service
definition. It contains the name of the service, description, and available
parameters such as ports, volumes, etc. For a complete list of parameters that
are allowed, see [Docker Template API
reference](/ee/app-template/api-reference).
An _application template_ is a collection of one or more service templates. An
application template generates a Dockerfile per service and only one Compose
file for the entire application, aggregating all services.
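The contract described above can be exercised by hand. A sketch, assuming a hypothetical template image `org/my-service` and a local `configuration.json` (both names are placeholders, not Docker Template conventions):

```bash
docker run --rm \
  -v "$(pwd)/configuration.json:/run/configuration" \
  -v "$(pwd)/project:/project" \
  org/my-service
```

The generated assets (code, Dockerfile, `docker-compose.yaml`) should then appear in the local `project` folder.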
## Create a custom service template
A Docker template contains a predefined set of service and application
templates. To create a custom template based on your requirements, you must
complete the following steps:
1. Create a service container image
2. Create the service template definition
@ -31,52 +45,36 @@ A Docker template contains a predefined set of service and application templates
### Create a service container image
A service template provides the description required by Docker Template to
scaffold a project. A service template runs inside a container with two bind
mounts:
1. `/run/configuration`, a JSON file which contains all settings such as
parameters, image name, etc. For example:
   ```json
   {
     "parameters": {
       "externalPort": "80",
       "artifactId": "com.company.app"
     },
     ...
   }
   ```
2. `/project`, the output folder to which the container image writes the generated assets.
#### Basic service template

To create a basic service template, you need to create two files — a Dockerfile
and a Docker Compose file — in a new folder. For example, to create a new MySQL
service template, create the following files in a folder called `my-service`:

`docker-compose.yaml`

```yaml
version: "3.6"
services:
  mysql:
    image: mysql
```

`Dockerfile`

```conf
FROM alpine
COPY docker-compose.yaml .
CMD cp docker-compose.yaml /project/
```

This adds a MySQL service to your application.

#### Create a service with code

Services that generate a template using code must contain the following files
that are valid:

- A *Dockerfile* located at the root of the `my-service` folder. This is the
  Dockerfile that is used for the service when running the application.
- A *docker-compose.yaml* file located at the root of the `my-service` folder.
  The `docker-compose.yaml` file must contain the service declaration and any
  optional volumes or secrets.

Here's an example of a simple NodeJS service:
@ -124,9 +122,11 @@ COPY . .
CMD ["yarn", "run", "start"]
```
> **Note:** After scaffolding the template, you can add the default files your
> template contains to the `assets` folder.
The next step is to build and push the service template image to a remote
repository by running the following command:
```bash
cd [...]/my-service
@ -134,20 +134,17 @@ docker build -t org/my-service .
docker push org/my-service
```
To build and push the image to an instance of Docker Trusted Registry (DTR), or to an external registry, specify the name of the repository:
```bash
cd [...]/my-service
docker build -t myrepo:5000/my-service .
docker push myrepo:5000/my-service
```
### Create the service template definition
The service definition contains metadata that describes a service template. It
contains the name of the service, description, and available parameters such as
ports, volumes, etc. After creating the service definition, you can proceed to
[Add templates to Docker Template](#add-templates-to-docker-template) to add
the service definition to the Docker Template repository.
Of all the available service and application definitions, Docker Template has
access to only one catalog, referred to as the repository. It uses the
catalog content to display service and application templates to the end user.
Here is an example of the Express service definition:
@ -155,7 +152,9 @@ Here is an example of the Express service definition:
- apiVersion: v1alpha1 # constant
  kind: ServiceTemplate # constant
  metadata:
    name: Express # the name of the service
    platforms:
    - linux
  spec:
    title: Express # The title/label of the service
    icon: https://docker-application-template.s3.amazonaws.com/assets/express.png # url for an icon
@ -164,21 +163,32 @@ Here is an example of the Express service definition:
image: org/my-service:latest
```
The most important section here is `image: org/my-service:latest`. This is the
image associated with this service template. You can use this line to point to
any image. For example, you can use an Express image directly from the hub
`docker.io/dockertemplate/express:latest` or from the DTR private repository
`myrepo/my-service:latest`. The other properties in the service definition are
mostly metadata for display and indexation purposes.
#### Adding parameters to the service
Now that you have created a simple express service, you can customize it based
on your requirements. For example, you can choose the version of NodeJS to use
when running the service.
To customize a service, you need to complete the following tasks:
1. Declare the parameters in the service definition. This tells Docker Template
whether or not the CLI can accept the parameters, and allows the
[Application Designer](/ee/desktop/app-designer) to be aware of the new
options.
2. Use the parameters during service construction.
#### Declare the parameters
Add the parameters available to the application. The following example adds the
NodeJS version and the external port:
```yaml
- [...]
@ -205,7 +215,8 @@ Add the parameters available to the application. The following example adds the
#### Use the parameters during service construction
When you run the service template container, a volume is mounted making the
service parameters available at `/run/configuration`.
The file matches the following Go struct:
@ -232,25 +243,34 @@ type ConfiguredService struct {
}
```
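Concretely, the mounted configuration might look like the following sketch (the top-level keys other than `parameters` are assumptions for illustration, since the full struct body is elided in this hunk):

```json
{
  "serviceId": "express",
  "parameters": {
    "externalPort": "8080",
    "node": "10"
  }
}
```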
You can then use the file to obtain values for the parameters and use this
information based on your requirements. However, in most cases, the JSON file
is used to interpolate the variables. Therefore, we provide a utility called
`interpolator` that expands variables in templates. For more information, see
[Interpolator](#interpolator).
To use the `interpolator` image, update `my-service/Dockerfile` to use the
following Dockerfile:
```conf
FROM dockertemplate/interpolator:v0.1.5
COPY assets .
```
> **Note:** The interpolator tag must match the version used in Docker
> Template. Verify this using the `docker template version` command.
This places the interpolator image in the `/assets` folder and copies the
folder to the target `/project` folder. If you prefer to do this manually, use
a Dockerfile instead:
```conf
WORKDIR /assets
CMD ["/interpolator", "-config", "/run/configuration", "-source", "/assets", "-destination", "/project"]
```
When this is complete, use the newly added node option in
`my-service/assets/Dockerfile`, by replacing the line:
`FROM node:9`
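The exact placeholder syntax depends on the interpolator's Go template conventions; as an assumption-laden sketch, the parameterized line might become:

```conf
FROM node:{{ .Parameters.node }}
```

Here `.Parameters.node` is hypothetical; check the interpolator's actual variable path before relying on it.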
@ -262,11 +282,15 @@ Now, build and push the image to your repository.
### Add service template to the library
You must add the service to a repository file in order to see it when you run
the `docker template ls` command, or to make the service available in the
Application Designer.
#### Create the repository file
Create a local repository file called `library.yaml` anywhere on your local
drive and add the newly created service definitions and application definitions
to it.
`library.yaml`
@ -284,42 +308,51 @@ services: # List of service templates available
#### Add the local repository to docker-template settings
> **Note:** You can also use the instructions in this section to add templates
> to the [Application Designer](/ee/desktop/app-designer).
Now that you have created a local repository and added service definitions to
it, you must make Docker Template aware of these. To do this:
1. Edit `~/.docker/application-template/preferences.yaml` as follows:
   ```yaml
   apiVersion: v1alpha1
   channel: master
   kind: Preferences
   repositories:
   - name: library-master
     url: https://docker-application-template.s3.amazonaws.com/master/library.yaml
   ```
2. Add your local repository:

   ```yaml
   apiVersion: v1alpha1
   channel: master
   kind: Preferences
   repositories:
   - name: custom-services
     url: file:///path/to/my/library.yaml
   - name: library-master
     url: https://docker-application-template.s3.amazonaws.com/master/library.yaml
   ```

   > **Note:** Do not remove or comment out the default library `library-master`.
   > This library contains template plugins that are required to build all Docker
   > Templates.

When configuring a local repository on Windows, the `url` structure is slightly
different:

```yaml
- name: custom-services
  url: file://c:/path/to/my/library.yaml
```
After updating the `preferences.yaml` file, run `docker template ls` or restart
the Application Designer and select **Custom application**. The new service
should now be visible in the list of available services.
### Share custom service templates
@ -329,11 +362,14 @@ To share a custom service template, you must complete the following steps:
2. Share the service definition (for example, GitHub)
3. Ensure the receiver has modified their `preferences.yaml` file to point to
the service definition that you have shared, and are permitted to accept
remote images.
## Create a custom application template
An application template is a collection of one or more service templates. You
must complete the following steps to create a custom application template:
1. Create an application template definition
@ -343,17 +379,26 @@ An application template is a collection of one or more service templates. You mu
### Create the application definition
An application template definition contains metadata that describes an
application template. It contains information such as the name and description
of the template, the services it contains, and the parameters for each of the
services.
Before you create an application template definition, you must create a
repository that contains the services you are planning to include in the
template. For more information, see [Create the repository
file](#create-the-repository-file).
For example, to create an Express and MySQL application, the application
definition must be similar to the following yaml file:
```yaml
apiVersion: v1alpha1 #constant
kind: ApplicationTemplate #constant
metadata:
  name: express-mysql #the name of the application
  platforms:
  - linux
spec:
  description: Sample application with a NodeJS backend and a MySQL database
  services: # list of the services
@ -368,7 +413,9 @@ spec:
### Add the template to the library
Create a local repository file called `library.yaml` anywhere on your local
drive. If you have already created the `library.yaml` file, add the application
definitions to it.
`library.yaml`
@ -392,40 +439,48 @@ templates: # List of application templates available
### Add the local repository to `docker-template` settings
Now that you have created a local repository and added application definitions,
you must make Docker Template aware of these. To do this:
1. Edit `~/.docker/application-template/preferences.yaml` as follows:
   ```yaml
   apiVersion: v1alpha1
   channel: master
   kind: Preferences
   repositories:
   - name: library-master
     url: https://docker-application-template.s3.amazonaws.com/master/library.yaml
   ```
2. Add your local repository:

   ```yaml
   apiVersion: v1alpha1
   channel: master
   kind: Preferences
   repositories:
   - name: custom-services
     url: file:///path/to/my/library.yaml
   - name: library-master
     url: https://docker-application-template.s3.amazonaws.com/master/library.yaml
   ```

   > **Note:** Do not remove or comment out the default library `library-master`.
   > This library contains template plugins that are required to build all Docker
   > Templates.

When configuring a local repository on Windows, the `url` structure is slightly
different:

```yaml
- name: custom-services
  url: file://c:/path/to/my/library.yaml
```
After updating the `preferences.yaml` file, run `docker template ls` or restart
the Application Designer and select **Custom application**. The new template
should now be visible in the list of available templates.
### Share the custom application template
@ -435,29 +490,39 @@ To share a custom application template, you must complete the following steps:
2. Share the application definition (for example, GitHub)
3. Ensure the receiver has modified their `preferences.yaml` file to point to
the application definition that you have shared, and are permitted to accept
remote images.
## Interpolator
The `interpolator` utility is an image containing a binary which:
- takes a folder (assets folder) and the service parameter file as input,
- replaces variables in the input folder using the parameters specified by the
user (for example, the service name, external port, etc), and
- writes the interpolated files to the destination folder.
The interpolator implementation uses [Golang
template](https://golang.org/pkg/text/template/) to aggregate the services to
create the final application. If your service template uses the `interpolator`
image by default, it expects all the asset files to be located in the `/assets`
folder:
`/interpolator -source /assets -destination /project`
However, you can create your own scaffolding script that performs calls to the
`interpolator`.
> **Note:** It is not mandatory to use the `interpolator` utility. You can use
> a utility of your choice to handle parameter replacement and file copying to
> achieve the same result.
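As an illustration of that point, a minimal stand-in for the interpolator using only `sed` (the `{{externalPort}}` placeholder syntax and the `/tmp` paths are arbitrary choices for this sketch, not Docker Template conventions):

```bash
# Create a fake assets folder with one templated file.
mkdir -p /tmp/tmpl-assets /tmp/tmpl-project
printf 'PORT={{externalPort}}\n' > /tmp/tmpl-assets/app.conf

# In a real template this value would be read from /run/configuration.
port=80

# Expand the placeholder into the output folder, as the interpolator would.
sed "s/{{externalPort}}/$port/" /tmp/tmpl-assets/app.conf > /tmp/tmpl-project/app.conf
cat /tmp/tmpl-project/app.conf
```

Running this prints `PORT=80`, mirroring the source-to-destination flow of the real binary.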
The following table lists the `interpolator` binary options:
| Parameter | Default value | Description |
| :----------------|:---------------------|:--------------------------------------------------------------|
| `-source` | none | Source file or folder to interpolate from |
| `-destination` | none | Destination file or folder to copy the interpolated files to |
| `-config` | `/run/configuration` | The path to the json configuration file |
| `-skip-template` | false | If set to `true`, it copies assets without any transformation |


@ -82,7 +82,7 @@ $ docker buildx build --platform linux/amd64,linux/arm64 .
Finally, depending on your project, the language that you use may have good support for cross-compilation. In that case, multi-stage builds in Dockerfiles can be effectively used to build binaries for the platform specified with `--platform` using the native architecture of the build node. A list of build arguments like `BUILDPLATFORM` and `TARGETPLATFORM` is available automatically inside your Dockerfile and can be leveraged by the processes running as part of your build.
```
FROM --platform=$BUILDPLATFORM golang:alpine AS build
ARG TARGETPLATFORM
ARG BUILDPLATFORM
RUN echo "I am running on $BUILDPLATFORM, building for $TARGETPLATFORM" > /log


@ -94,7 +94,7 @@ The following components are available:
- `subscription`: (Optional) Configuration options for Docker Enterprise
Subscriptions.
- `cloudstor`: (Optional) Configuration options for Docker Cloudstor.
- `dtr`: (Optional) Configuration options for Docker Trusted Registry.
- `engine`: (Optional) Configuration options for Docker Engine.
- `ucp`: (Optional) Configuration options for Docker Universal Control Plane.
@ -110,10 +110,29 @@ Provide Docker Enterprise subscription information
`false`
#### cloudstor

Docker Cloudstor is a Docker Swarm Plugin that provides persistent storage to
Docker Swarm Clusters deployed onto AWS or Azure. By default, Docker Cloudstor
is not installed on Docker Enterprise environments created with Docker Cluster.

```yaml
cluster:
  cloudstor:
    version: '1.0'
```

For more information on Docker Cloudstor see:

- [Cloudstor for AWS](/docker-for-aws/persistent-data-volumes/)
- [Cloudstor for Azure](/docker-for-azure/persistent-data-volumes/)

The following optional elements can be specified:

- `version`: (Required) The version of Docker Cloudstor to install. The default
is `disabled`. The only released version of Docker Cloudstor at this time is
`1.0`.
- `use_efs`: (Optional) Specifies whether an Elastic File System should be
provisioned. By default Docker Cloudstor on AWS uses Elastic Block Store,
therefore this value defaults to `false`.
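For example, to also provision EFS-backed volumes on AWS, a sketch combining the two options above:

```yaml
cluster:
  cloudstor:
    version: '1.0'
    use_efs: true
```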
#### dtr
Customizes the installation of Docker Trusted Registry.


@ -51,12 +51,24 @@ For more information about Cluster files, refer to the
Docker Cluster has commands for managing the whole lifecycle of your cluster:
* Create and destroy clusters
* Scale up or scale down clusters
* Upgrade clusters
* View the status of clusters
* Backup and restore clusters
## Export Docker Cluster artifacts
You can export both Terraform and Ansible scripts to deploy certain components standalone or with custom configurations. Use the following commands to export those scripts:
```bash
docker container run --detach --name dci --entrypoint sh docker/cluster:latest
docker container cp dci:/cluster/terraform terraform
docker container cp dci:/cluster/ansible ansible
docker container stop dci
docker container rm dci
```
## Where to go next
- [Get started with Docker Cluster on AWS](aws.md)
- [Command line reference](/engine/reference/commandline/cluster/)

View File

@ -8,6 +8,23 @@ This page provides information about Docker Cluster versions.
# Version 1
## 1.2.0
(2019-10-02)
### Features
- Added a new `env` variable type that allows users to supply cluster variable values as environment variables (DCIS-509)
### Fixes
- Fixed an issue where errors in the cluster commands would return an exit code of 0 (DCIS-508)
- A new error message is displayed when a `docker login` is required:
```
Checking for licenses on Docker Hub
Error: no Hub login info found; please run 'docker login' first
```
## Version 1.1.0
(2019-09-03)

View File

@ -62,7 +62,7 @@ services.
web:
image: example/my_web_app:latest
links:
depends_on:
- db
- cache
@ -136,7 +136,7 @@ Start with a **docker-compose.yml**.
web:
image: example/my_web_app:latest
links:
depends_on:
- db
db:
@ -147,7 +147,7 @@ export or backup.
dbadmin:
build: database_admin/
links:
depends_on:
- db
To start a normal environment run `docker-compose up -d`. To run a database
@ -177,9 +177,9 @@ is useful if you have several services that reuse a common set of configuration
options. Using `extends` you can define a common set of service options in one
place and refer to it from anywhere.
Keep in mind that `links`, `volumes_from`, and `depends_on` are never shared
between services using `extends`. These exceptions exist to avoid implicit
dependencies; you always define `links` and `volumes_from` locally. This ensures
Keep in mind that `volumes_from` and `depends_on` are never shared between
services using `extends`. These exceptions exist to avoid implicit
dependencies; you always define `volumes_from` locally. This ensures
dependencies between services are clearly visible when reading the current file.
Defining these locally also ensures that changes to the referenced file don't
break anything.
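A minimal sketch of this rule (file and service names are illustrative): the shared options live in `common.yml`, while `depends_on` is always declared locally in the extending file:

```yaml
# common.yml -- shared service options
web:
  image: example/my_web_app:latest
  environment:
    - DEBUG=1

# docker-compose.yml -- extends the common definition;
# depends_on must be declared here, never in common.yml
web:
  extends:
    file: common.yml
    service: web
  depends_on:
    - db
db:
  image: postgres
```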
@ -233,7 +233,7 @@ You can also write other services and link your `web` service to them:
environment:
- DEBUG=1
cpu_shares: 5
links:
depends_on:
- db
db:
image: postgres
@ -264,7 +264,7 @@ common configuration:
command: /code/run_web_app
ports:
- 8080:8080
links:
depends_on:
- queue
- db
@ -273,7 +273,7 @@ common configuration:
file: common.yml
service: app
command: /code/run_worker
links:
depends_on:
- queue
## Adding and overriding configuration

View File

@ -94,10 +94,7 @@ no separate VirtualBox is required.
### What are system requirements for Docker for Mac?
You need a Mac that supports hardware virtualization and can run at
least macOS `10.11` (macOS El Capitan). See also
[What to know before you install](install#what-to-know-before-you-install) in
the install guide.
You need a Mac that supports hardware virtualization. For more information, see [Docker Desktop Mac system requirements](install/#system-requirements).
### Do I need to reinstall Docker for Mac if I change the name of my macOS account?

View File

@ -4,91 +4,96 @@ keywords: mac, disk
title: Disk utilization in Docker for Mac
---
Docker for Mac stores Linux containers and images in a single, large "disk image" file
in the Mac filesystem. This is different from Docker on Linux, which usually stores containers
and images in the `/var/lib/docker` directory.
Docker Desktop stores Linux containers and images in a single, large "disk image" file in the Mac filesystem. This is different from Docker on Linux, which usually stores containers and images in the `/var/lib/docker` directory.
## Where is the "disk image" file?
## Where is the disk image file?
To locate the "disk image" file, first select the whale menu icon and then select
**Preferences...**. When the **Preferences...** window is displayed, select **Disk** and then **Reveal in Finder**:
To locate the disk image file, select the Docker icon and then
**Preferences** > **Resources** > **Advanced**.
![Disk preferences](images/settings-disk.png)
![Disk preferences](images/menu/prefs-advanced.png)
The **Preferences...** window shows how much actual disk space the "disk image" file is consuming.
In this example, the "disk image" file is consuming 2.4 GB out of a maximum of 64 GB.
Note that other tools might display space usage of the file in terms of the maximum file size, not the actual file size.
The **Advanced** tab displays the location of the disk image. It also displays the maximum size of the disk image and the actual space the disk image is consuming. Note that other tools might display space usage of the file in terms of the maximum file size, and not the actual file size.
## If the file is too big
If the file is too big, you can
If the disk image file is too big, you can:
- move it to a bigger drive,
- delete unnecessary containers and images, or
- reduce the maximum allowable size of the file.
### Move the file to a bigger drive
To move the file, open the **Preferences...** menu, select **Disk** and then select
on **Move disk image**. Do not move the file directly in the finder or Docker for Mac will
lose track of it.
To move the disk image file to a different location:
1. Select **Preferences** > **Resources** > **Advanced**.
2. In the **Disk image location** section, click **Browse** and choose a new location for the disk image.
3. Click **Apply & Restart** for the changes to take effect.
Do not move the file directly in Finder as this can cause Docker Desktop to lose track of the file.
### Delete unnecessary containers and images
To check whether you have too many unnecessary containers and images:
If your client and daemon API are version 1.25 or later (use the docker version command on the client to check your client and daemon API versions.), you can display detailed space usage information with:
Check whether you have any unnecessary containers and images. If your client and daemon API are running version 1.25 or later (use the `docker version` command on the client to check your client and daemon API versions), you can see the detailed space usage information by running:
```
docker system df -v
```
Alternatively, you can list images with:
Alternatively, to list images, run:
```bash
$ docker image ls
```
and then list containers with:
and then, to list containers, run:
```bash
$ docker container ls -a
```
If there are lots of unneeded objects, try the command
If there are lots of redundant objects, run the command:
```bash
$ docker system prune
```
This removes all stopped containers, unused networks, dangling images, and build cache.
Note that it might take a few minutes before space becomes free on the host, depending
on what format the "disk image" file is in:
- If the file is named `Docker.raw`: space on the host should be reclaimed within a few
seconds.
- If the file is named `Docker.qcow2`: space will be freed by a background process after
a few minutes.
This command removes all stopped containers, unused networks, dangling images, and build cache.
Note that space is only freed when images are deleted. Space is not freed automatically
when files are deleted inside running containers. To trigger a space reclamation at any
point, use the command:
It might take a few minutes to reclaim space on the host depending on the format of the disk image file:
- If the file is named `Docker.raw`: space on the host should be reclaimed within a few seconds.
- If the file is named `Docker.qcow2`: space will be freed by a background process after a few minutes.
Space is only freed when images are deleted. Space is not freed automatically when files are deleted inside running containers. To trigger a space reclamation at any point, run the command:
```
$ docker run --privileged --pid=host justincormack/nsenter1 /sbin/fstrim /var/lib/docker
$ docker run --privileged --pid=host docker/desktop-reclaim-space
```
Note that many tools will report the maximum file size, not the actual file size.
To query the actual size of the file on the host from a terminal, use:
Note that many tools report the maximum file size, not the actual file size.
To query the actual size of the file on the host from a terminal, run:
```bash
$ cd ~/Library/Containers/com.docker.docker/Data
$ cd vms/0 # or com.docker.driver.amd64-linux
$ cd vms/0/data
$ ls -klsh Docker.raw
2333548 -rw-r--r--@ 1 akim staff 64G Dec 13 17:42 Docker.raw
2333548 -rw-r--r--@ 1 username staff 64G Dec 13 17:42 Docker.raw
```
In this example, the actual size of the disk is `2333548` KB, whereas the maximum size
of the disk is `64` GB.
In this example, the actual size of the disk is `2333548` KB, whereas the maximum size of the disk is `64` GB.
### Reduce the maximum size of the file
To reduce the maximum size of the file, select the whale menu icon and then select
**Preferences...**. When the **Preferences...** window is displayed, select **Disk**.
The **Disk** window contains a slider that allows the maximum disk size to be set.
**Warning**: If the maximum size is reduced, the current file will be deleted and, therefore, all
containers and images will be lost.
To reduce the maximum size of the disk image file:
1. Select the Docker icon and then select **Preferences** > **Resources** > **Advanced**.
2. The **Disk image size** section contains a slider that allows you to change the maximum size of the disk image. Adjust the slider to set a lower limit.
3. Click **Apply & Restart**.
When you reduce the maximum size, the current disk image file is deleted, and therefore, all containers and images will be lost.

View File

@ -0,0 +1,52 @@
---
title: Deactivating an account or an organization
description: Learn how to deactivate a Docker Hub account or an organization
keywords: Docker Hub, delete, deactivate, account, organization
---
Your Docker Hub account or organization may be linked to other Docker products and services, so deactivating it also disables access to those products and services.
## Deactivating an account
Before deactivating your Docker Hub account, please complete the following:
1. Download any images and tags you want to keep:
`docker pull -a <image>:<tag>`.
2. If you have an active subscription, downgrade it to the **free** plan.
In Docker Hub, navigate to **_Your Account_** > **Account Settings** > **Billing**.
3. If you have an enterprise license, download the key.
In Docker Hub, navigate to **_Your Account_** > **Account Settings** > **Licenses**. The download link will no longer be available after your account is disabled.
4. If you belong to any organizations, remove your account from all of them.
5. If you are the sole owner of an organization, either add someone to the **owners** team and then remove yourself from the organization, or deactivate the organization as well.
6. Unlink your [GitHub and Bitbucket accounts](https://docs.docker.com/docker-hub/builds/link-source/#unlink-a-github-user-account).
Once you have completed all the steps above, you may deactivate your account. On Docker Hub, go to **_Your Account_** > **Account Settings** > **Deactivate Account**.
> This cannot be undone! Be sure you've gathered all the data you need from your account before deactivating it.
{: .warning }
## Deactivating an organization
Before deactivating an organization, please complete the following:
1. Download any images and tags you want to keep:
`docker pull -a <image>:<tag>`.
2. If you have an active subscription, downgrade it to the **free** plan:
In Docker Hub, navigate to **Organizations** > **_Your Organization_** > **Billing**.
3. Unlink your [GitHub and Bitbucket accounts](https://docs.docker.com/docker-hub/builds/link-source/#unlink-a-github-user-account).
Once you have completed all the steps above, you may deactivate your organization. On Docker Hub, go to **Organizations** > **_Your Organization_** > **Settings** > **Deactivate Org**.
> This cannot be undone! Be sure you've gathered all the data you need from your organization before deactivating it.
{: .warning }

Binary file not shown.

After

Width:  |  Height:  |  Size: 132 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 90 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 120 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 36 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 68 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 125 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 80 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 111 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 128 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 83 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 33 KiB

View File

@ -6,72 +6,137 @@ redirect_from:
- /docker-cloud/orgs/
---
Docker Hub Organizations let you create teams so you can give your team access to shared image repositories.
Docker Hub Organizations let you create teams so you can give your team access
to shared image repositories.
### How Organizations & Teams Work
- **Organizations** are collections of teams and repositories that can be managed together.
- **Teams** are groups of Docker Hub users that belong to an organization.
- **Organizations** are a collection of teams and repositories that can be managed together.
- **Teams** are groups of Docker Hub users that belong to your organization.
> **Note**: in Docker Hub, users cannot belong directly to an organization.
They belong only to teams within an organization.
> **Note**: in Docker Hub, users cannot be associated directly to an organization. They belong only to teams within an organization.
## Working with organizations
### Create an organization
1. Start by clicking on [Organizations](https://cloud.docker.com/orgs) in Docker Hub
2. Click on "Create Organization"
1. Start by clicking on **[Organizations](https://hub.docker.com/orgs)** in
Docker Hub.
2. Click on **Create Organization**.
3. Provide information about your organization:
![Create Organization](images/orgs-create.png)
![Create organization](images/orgs-create2019.png)
You've created an organization. You'll see you have a team, the **owners** team with a single member (you!)
You've created an organization. You'll see you have a team, the **owners** team
with a single member (you!).
### The owners team
#### The owners team
The **owners** team is a special team that has full access to all repositories in the Organization.
The **owners** team is a special team that has full access to all repositories
in the organization.
Members of this team can:
- Manage Organization settings and billing
- Manage organization settings and billing
- Create a team and modify the membership of any team
- Access and modify any repository belonging to the Organization
- Access and modify any repository belonging to the organization
## Working with teams and members
### Create a team
To create a team:
1. Go to **Organizations** in Docker Hub, and select your organization.
2. Open the **Teams** tab and click **Create Team**.
![Teams view](images/orgs-teams2019.png)
3. Fill out your team's information and click **Create**.
![Create a team](images/orgs-new-team2019.png)
1. Go to your organization by clicking on **Organizations** in Docker Hub, and select your organization.
2. Click **Create Team** ![Create Team](images/orgs-team-create.png)
3. Fill out your team's information and click **Create** ![Create Modal](images/orgs-team-create-submit.png)
### Add a member to a team
1. Visit your team's page in Docker Hub. Click on **Organizations** > **_Your Organization_** > **_Your Team Name_**
2. Click on **Add User**
3. Provide the user's Docker ID username _or_ email to add them to the team ![Add User to Team](images/orgs-team-add-user.png)
You can add a member to a team in one of two ways.
> **Note**: You are not automatically added to teams created by your organization.
If the user isn't in your organization:
1. Go to **Organizations** in Docker Hub, and select your organization.
2. Click **Add Member**.
![Add member from members list](images/org-members2019.png)
3. Provide the user's Docker ID username _or_ email, and select a team from the dropdown.
![Add user to team from org page](images/orgs-add-member2019.png)
If the user already belongs to another team in the organization:
1. Open the team's page in Docker Hub: **Organizations** > **_Your Organization_** > **Teams** > **_Your Team Name_**
2. Click **Add User**.
3. Provide the user's Docker ID username _or_ email to add them to the team.
![Add user to team from team page](images/teams-add-member2019.png)
> **Note**: You are not automatically added to teams created by your organization.
### Remove team members
To remove a member from a team, click the **x** next to their name:
To remove a member from all teams in an organization:
1. Go to **Organizations** in Docker Hub, and select your organization.
2. Click the **x** next to a member's name:
![Add User to Team](images/org-members2019.png)
To remove a member from a specific team:
1. Open the team this user is on. You can do this in one of two ways:
* If you know the team name, go to **Organizations** > **_Your Organization_** > **Teams** > **_Team Name_**.
> **Note:** You can filter the **Teams** tab by username, but you have to use the format _@username_ in the search field (partial names will not work).
* If you don't know the team name, go to **Organizations** > **_Your Organization_** and search for the user. Hover over **View** to see all of their teams, then click on **View** > **_Team Name_**.
2. Find the user in the list, and click the **x** next to the user's name to remove them.
![List of members on a team](images/orgs-team-members2019.png)
![Add User to Team](images/orgs-team-remove-user.png)
### Give a team access to a repository
To provide a team access to a repository:
1. Visit the repository list on Docker Hub by clicking on **Repositories**.
1. Visit the repository list on Docker Hub by clicking on **Repositories**
2. Select your organization in the namespace dropdown list
3. Click the repository you'd like to edit ![Org Repos](images/orgs-list-repos.png)
4. Click the **Permissions** tab
5. Select the team, permissions level (more on this below) and click **+**
6. Click the **+** button to add ![Add Repo Permissions for Team](images/orgs-add-team-permissions.png)
2. Select your organization in the namespace dropdown list.
3. Click the repository you'd like to edit.
![Org Repos](images/repos-list2019.png)
4. Click the **Permissions** tab.
5. Select the team, the [permissions level](#permissions-reference), and click **+** to save.
![Add Repo Permissions for Team](images/orgs-repo-perms2019.png)
### View a team's permissions for all repositories
To view a team's permissions over all repos:
1. Click on **Organizations**, then select your organization and team.
2. Click on the **Permissions** tab where you can view which repositories this team has access to ![Team Audit Permissions](images/orgs-audit-permissions.png)
1. Open **Organizations** > **_Your Organization_** > **Teams** > **_Team Name_**.
2. Click on the **Permissions** tab, where you can view the repositories this team can access.
![Team Audit Permissions](images/orgs-teams-perms2019.png)
You can also edit repository permissions from this tab.
### Permissions reference
@ -81,7 +146,7 @@ automatically have Read permissions:
- `Read` access allows users to view, search, and pull a private repository in the same way as they can a public repository.
- `Write` access allows users to push to repositories on Docker Hub.
- `Admin` access allows users to modify the repositories "Description", "Collaborators" rights, "Public/Private" visibility and "Delete".
- `Admin` access allows users to modify the repositories "Description", "Collaborators" rights, "Public/Private" visibility, and "Delete".
> **Note**: A User who has not yet verified their email address only has
> `Read` access to the repository, regardless of the rights their team

View File

@ -8,6 +8,21 @@ toc_max: 2
Here you can learn about the latest changes, new features, bug fixes, and
known issues for each Docker Hub release.
## 2019-10-02
### Enhancements
* You can now manage teams and members straight from your [organization page](https://hub.docker.com/orgs). Each organization page now breaks down into these tabs:
* **New:** Members - manage your members directly from this page (delete, add, or open their teams)
* **New:** Teams - search by team or username, and open up any team page to manage the team
* Repositories
* Settings
* Billing
### Bug fixes
* Fixed an issue where Kitematic could not connect and log in to Docker Hub.
## 2019-09-19
### New features

View File

@ -79,7 +79,7 @@ the install command on. DTR will be installed on the UCP worker defined by the
automatically reconfigured to trust DTR.
* With DTR 2.7, you can [enable browser authentication via client
certificates](/ee/enable-authentication-via-client-certificates/) at install
certificates](/ee/enable-client-certificate-authentication/) at install
time. This bypasses the DTR login page and hides the logout button, thereby
skipping the need for entering your username and password.

View File

@ -11,7 +11,7 @@ Get the [30-day
trial available at the Docker hub](https://hub.docker.com/editions/enterprise/docker-ee-trial/trial).
Once you get your trial license, you can install Docker Enterprise's Universal
Control Plane and Docker Trusted Regsitry on Linux Servers. Windows Servers
Control Plane and Docker Trusted Registry on Linux Servers. Windows Servers
can only be used as Universal Control Plane Worker Nodes.
Learn more about the Universal Control Plane's system requirements

View File

@ -105,20 +105,20 @@ email address, for example, `jane.doe@subsidiary1.com`.
## Configure the LDAP integration
To configure UCP to create and authenticate users by using an LDAP directory,
go to the UCP web interface, navigate to the **Admin Settings** page and click
go to the UCP web interface, navigate to the **Admin Settings** page, and click
**Authentication & Authorization** to select the method used to create and
authenticate users.
authenticate users. [Learn about additional UCP configuration options](../../configure/ucp-configuration-file.md#configuration-options).
![](../../../images/authentication-authorization.png)
In the **LDAP Enabled** section, click **Yes** to The LDAP settings appear.
In the **LDAP Enabled** section, click **Yes**.
Now configure your LDAP directory integration.
## Default role for all private collections
Use this setting to change the default permissions of new users.
Click the dropdown to select the permission level that UCP assigns by default
Click the drop-down menu to select the permission level that UCP assigns by default
to the private collections of new users. For example, if you change the value
to `View Only`, all users who log in for the first time after the setting is
changed have `View Only` access to their private collections, but permissions
@ -141,13 +141,16 @@ Click **Yes** to enable integrating UCP users and teams with LDAP servers.
| No simple pagination | If your LDAP server doesn't support pagination. |
| Just-In-Time User Provisioning | Whether to create user accounts only when users log in for the first time. The default value of `true` is recommended. If you upgraded from UCP 2.0.x, the default is `false`. |
> **Note**: LDAP connections using certificates created with TLS v1.2 do not currently advertise support for sha512WithRSAEncryption in the TLS handshake which leads to issues establishing connections with some clients. Support for advertising sha512WithRSAEncryption will be added in UCP 3.1.0.
> Note
>
> LDAP connections using certificates created with TLS v1.2 do not currently advertise support for sha512WithRSAEncryption in the TLS handshake which leads to issues establishing connections with
> some clients. Support for advertising sha512WithRSAEncryption will be added in UCP 3.1.0.
![](../../../images/ldap-integration-1.png){: .with-border}
Click **Confirm** to add your LDAP domain.
To integrate with more LDAP servers, click **Add LDAP Domain**.
To integrate with more LDAP servers, click **Add LDAP Domain**.
## LDAP user search configurations

View File

@ -82,6 +82,7 @@ docker container run --rm {{ page.ucp_org }}/{{ page.ucp_repo }}:{{ page.ucp_ver
| `lifetime_minutes` | no | The initial session lifetime, in minutes. The default is 60 minutes. |
| `renewal_threshold_minutes` | no | The length of time, in minutes, before the expiration of a session where, if used, a session will be extended by the current configured lifetime from then. A zero value disables session extension. The default is 20 minutes. |
| `per_user_limit` | no | The maximum number of sessions that a user can have active simultaneously. If creating a new session would put a user over this limit, the least recently used session will be deleted. A value of zero disables limiting the number of sessions that users may have. The default is 10. |
| `store_token_per_session` | no | If set, the user token is stored in `sessionStorage` instead of `localStorage`. Note that this option will log the user out and require them to log back in since they are actively changing how their authentication is stored. |
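The session parameters above live in the `auth.sessions` section of the UCP configuration file. A sketch of the corresponding TOML (values are the documented defaults; `store_token_per_session` shown enabled for illustration):

```toml
[auth.sessions]
  lifetime_minutes = 60
  renewal_threshold_minutes = 20
  per_user_limit = 10
  store_token_per_session = true
```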
### registries array (optional)
@ -107,7 +108,9 @@ Configures audit logging options for UCP components.
Specifies scheduling options and the default orchestrator for new nodes.
> **Note**: If you run the `kubectl` command, such as `kubectl describe nodes`, to view scheduling rules on Kubernetes nodes, it does not reflect what is configured in UCP Admin settings. UCP uses taints to control container scheduling on nodes and is unrelated to kubectl's `Unschedulable` boolean flag.
> Note
>
> If you run the `kubectl` command, such as `kubectl describe nodes`, to view scheduling rules on Kubernetes nodes, it does not reflect what is configured in UCP Admin settings. UCP uses taints to control container scheduling on nodes and is unrelated to kubectl's `Unschedulable` boolean flag.
| Parameter | Required | Description |
|:------------------------------|:---------|:-------------------------------------------------------------------------------------------------------------------------------------------|
@ -136,7 +139,9 @@ Specifies whether DTR images require signing.
### log_configuration table (optional)
> Note: This feature has been deprecated. Refer to the [Deprecation notice](https://docs.docker.com/ee/ucp/release-notes/#deprecation-notice) for additional information.
> Note
>
> This feature has been deprecated. Refer to the [Deprecation notice](https://docs.docker.com/ee/ucp/release-notes/#deprecation-notice) for additional information.
Configures the logging options for UCP components.
@ -223,8 +228,9 @@ components. Assigning these values overrides the settings in a container's
| `worker_kube_reserved_resources` | no | Reserve resources for Docker UCP and Kubernetes components which are running on worker nodes. |
| `kubelet_max_pods` | yes | Sets the number of Pods that can run on a node. The default is `110`. |
*dev indicates that the functionality is only for development and testing. Arbitrary Kubernetes configuration parameters are not tested and supported under the Docker Enterprise Software Support Agreement.
> Note
>
> dev indicates that the functionality is only for development and testing. Arbitrary Kubernetes configuration parameters are not tested and supported under the Docker Enterprise Software Support Agreement.
### iSCSI (optional)
Configures iSCSI options for UCP.

View File

@ -33,8 +33,7 @@ There are two options for provisioning IPs for the Kubernetes cluster on Azure:
## Azure Prerequisites
You must meet the following infrastructure prerequisites in order
to successfully deploy Docker UCP on Azure:
You must meet the following infrastructure prerequisites to successfully deploy Docker UCP on Azure. **Failure to meet these prerequisites may result in significant errors during the installation process.**
- All UCP Nodes (Managers and Workers) need to be deployed into the same Azure
Resource Group. The Azure Networking components (Virtual Network, Subnets,
@ -104,7 +103,13 @@ an Azure subnet.
See the [Kubernetes Azure Cloud Provider Config](https://github.com/kubernetes/cloud-provider-azure/blob/master/docs/cloud-provider-config.md) for more details on this configuration file.
## Considerations for IPAM Configuration
## Guidelines for IPAM Configuration
> **Warning**
>
> You must follow these guidelines and either use the appropriate size network in Azure or take the proper action to fit within the subnet.
> Failure to follow these guidelines may cause significant issues during the
> installation process.
The subnet and the virtual network associated with the primary interface of the
Azure virtual machines need to be configured with a large enough address
@ -246,7 +251,9 @@ subnet, and the `--host-address` maps to the private IP address of the master
node. Finally if you want to adjust the amount of IP addresses provisioned to
each virtual machine pass `--azure-ip-count`.
> Note: The `pod-cidr` range must match the Azure Virtual Network's Subnet
> **Note**
>
> The `pod-cidr` range must match the Azure Virtual Network's Subnet
> attached to the hosts. For example, if the Azure Virtual Network had the range
> `172.0.0.0/16` with Virtual Machines provisioned on an Azure Subnet of
> `172.0.1.0/24`, then the Pod CIDR should also be `172.0.1.0/24`.

View File

@ -77,7 +77,7 @@ with a new worker node. The type of upgrade you perform depends on what is neede
- [Automated, in-place cluster upgrade](#automated-in-place-cluster-upgrade): Performed on any
manager node. Automatically upgrades the entire cluster.
- Manual cluster upgrade: Performed using the CLI or the UCP UI. Automatically upgrades manager
- Manual cluster upgrade: Performed using the CLI. Automatically upgrades manager
nodes and allows you to control the upgrade order of worker nodes. This type of upgrade is more
advanced than the automated, in-place cluster upgrade.
- [Upgrade existing nodes in place](#phased-in-place-cluster-upgrade): Performed using the CLI.
@ -125,7 +125,7 @@ of manager node upgrades.
- [Automated, in-place cluster upgrade](#automated-in-place-cluster-upgrade): Performed on any
manager node. Automatically upgrades the entire cluster.
- Manual cluster upgrade: Performed using the CLI or the UCP UI. Automatically upgrades manager
- Manual cluster upgrade: Performed using the CLI. Automatically upgrades manager
nodes and allows you to control the upgrade order of worker nodes. This type of upgrade is more
advanced than the automated, in-place cluster upgrade.
- [Upgrade existing nodes in place](#phased-in-place-cluster-upgrade): Performed using the CLI.
@ -138,28 +138,6 @@ advanced than the automated, in-place cluster upgrade.
in batches of multiple nodes rather than one at a time, and shut down servers to
remove worker nodes. This type of upgrade is the most advanced.
### Use the web interface to perform an upgrade
> **Note**: If you plan to add nodes to the UCP cluster, use the [CLI](#use-the-cli-to-perform-an-upgrade) for the upgrade.
When an upgrade is available for a UCP installation, a banner appears.
![](../../images/upgrade-ucp-1.png){: .with-border}
Clicking this message takes an admin user directly to the upgrade process.
It can be found under the **Upgrade** tab of the **Admin Settings** section.
![](../../images/upgrade-ucp-2.png){: .with-border}
In the **Available Versions** drop down, select the version you want to update.
Copy and paste the CLI command provided into a terminal on a manager node to
perform the upgrade.
During the upgrade, the web interface will be unavailable, and you should wait
until completion before continuing to interact with it. When the upgrade
completes, you'll see a notification that a newer version of the web interface
is available and a browser refresh is required to see it.
### Use the CLI to perform an upgrade
There are two different ways to upgrade a UCP cluster via the CLI. The first is
@ -348,7 +326,7 @@ nodes in the cluster at one time.
Kubelet is unhealthy: Kubelet stopped posting node status
```
- Alternatively, you may see other port errors such as the one below in the ucp-controller
- Alternatively, you may see other port errors such as the one below in the ucp-controller
container logs:
```
http: proxy error: dial tcp 10.14.101.141:12388: connect: no route to host
@ -378,4 +356,4 @@ From UCP 2.0: UCP 2.0 -> UCP 2.1 -> UCP 2.2
## Where to go next
- [Upgrade DTR](/e/dtr/admin/upgrade/)
- [Upgrade DTR](/ee/dtr/admin/upgrade/)

Binary file not shown.

Binary file not shown.

View File

@ -69,7 +69,7 @@ Swarm services use `update-delay` to control the speed at which a service is upd
Use `update-delay` if …
- You are optimizing for the least number of dropped connections and a longer update cycle is an acceptable tradeoff.
- You are optimizing for the least number of dropped connections, with a longer update cycle as an acceptable tradeoff.
- Interlock update convergence takes a long time in your environment (this can occur with a large number of overlay networks).
Do not use `update-delay` if …

View File

@ -12,7 +12,7 @@ First, using an existing Docker engine, save the images:
```bash
$> docker save {{ page.ucp_org }}/ucp-interlock:{{ page.ucp_version }} > interlock.tar
$> docker save {{ page.ucp_org }}/ucp-interlock-extension:{{ page.ucp_version }} > interlock-extension-nginx.tar
$> docker save {{ page.ucp_org }}/ucp-interlock-proxy:{{ page.ucp_version }} > nginx.tar
$> docker save {{ page.ucp_org }}/ucp-interlock-proxy:{{ page.ucp_version }} > interlock-proxy-nginx.tar
```
> Note
@ -32,7 +32,7 @@ Next, copy these files to each node in the Docker Swarm cluster and run the foll
```bash
$> docker load < interlock.tar
$> docker load < interlock-extension-nginx.tar
$> docker load < nginx:alpine.tar
$> docker load < interlock-proxy-nginx.tar
```
## Next steps

View File

@ -33,7 +33,7 @@ deployed. As part of this, services using HRM labels are inspected.
3. The HRM service is removed.
4. The `ucp-interlock` service is deployed with the configuration created.
5. The `ucp-interlock` service deploys the `ucp-interlock-extension` and
`ucp-interlock-proxy-services`.
`ucp-interlock-proxy` services.
The only way to rollback from an upgrade is by restoring from a backup taken
before the upgrade. If something goes wrong during the upgrade process, you
@ -90,7 +90,7 @@ don't have any configuration with the same name by running:
* If either the `ucp-interlock-extension` or `ucp-interlock-proxy` services are
not running, it's possible that there are port conflicts.
As a workaround re-enable the layer 7 routing configuration from the
[UCP settings page](deploy/index.md). Make sure the ports you choose are not
[UCP settings page](index.md). Make sure the ports you choose are not
being used by other services.
## Workarounds and clean-up

View File

@ -199,7 +199,7 @@ To customize subnet allocation for your Swarm networks, you can [optionally conf
For example, the following command is used when initializing Swarm:
```bash
$ docker swarm init --default-address-pool 10.20.0.0/16 --default-addr-pool-mask-length 26`
$ docker swarm init --default-addr-pool 10.20.0.0/16 --default-addr-pool-mask-length 26
```
Whenever a user creates a network but does not use the `--subnet` command-line option, the subnet for this network will be allocated sequentially from the next available subnet in the pool. If the specified network is already allocated, that network will not be used for Swarm.

View File

@ -79,11 +79,11 @@ To create the custom address pool for Swarm, you must define at least one defaul
Docker allocates subnet addresses from the address ranges specified by the `--default-addr-pool` option. For example, a command line option `--default-addr-pool 10.10.0.0/16` indicates that Docker will allocate subnets from that `/16` address range. If `--default-addr-pool-mask-length` were unspecified or set explicitly to 24, this would result in 256 `/24` networks of the form `10.10.X.0/24`.
The subnet range comes from the `--default-addr-pool`, (such as `10.10.0.0/16`). The size of 16 there represents the number of networks one can create within that `default-addr-pool` range. The `--default-address-pool` option may occur multiple times with each option providing additional addresses for docker to use for overlay subnets.
The subnet range comes from the `--default-addr-pool` value (such as `10.10.0.0/16`). The `/16` there defines the size of the pool, and together with `--default-addr-pool-mask-length` it determines how many networks can be created within that `default-addr-pool` range. The `--default-addr-pool` option may occur multiple times, with each occurrence providing additional addresses for Docker to use for overlay subnets.
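As a quick check of the arithmetic above, the subnet counts can be derived with plain shell arithmetic (a sketch only; no Docker commands involved):

```bash
# /24 subnets available in a /16 pool: 2^(24 - 16)
echo $(( 1 << (24 - 16) ))   # 256

# With --default-addr-pool-mask-length 26 against the same /16 pool: 2^(26 - 16)
echo $(( 1 << (26 - 16) ))   # 1024
```

In general, a pool with prefix length P split into subnets of mask length M yields 2^(M - P) networks.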
The format of the command is:
```
$ docker swarm init --default-address-pool <IP range in CIDR> [--default-address-pool <IP range in CIDR> --default-addr-pool-mask-length <CIDR value>]
$ docker swarm init --default-addr-pool <IP range in CIDR> [--default-addr-pool <IP range in CIDR> --default-addr-pool-mask-length <CIDR value>]
```
Creating a default IP address pool with a /16 (class B) for the 10.20.0.0 network looks like this:
@ -105,7 +105,7 @@ all the subnets are exhausted.
Refer to the following pages for more information:
- [Swarm networking](./networking.md) for more information about the default address pool usage
- [UCP Installation Planning](../../ee/ucp/admin/install/plan-installation.md) for more information about planning the network design before installation
- `docker swarm init` [CLI reference](../reference/commandline/swarm_init.md) for more detail on the `--default-address-pool` flag.
- `docker swarm init` [CLI reference](../reference/commandline/swarm_init.md) for more detail on the `--default-addr-pool` flag.
### Configure the advertise address

View File

@ -90,7 +90,7 @@ This `Dockerfile` refers to a couple of files we haven't created yet, namely
Create two more files, `requirements.txt` and `app.py`, and put them in the same
folder with the `Dockerfile`. This completes our app, which as you can see is
quite simple. When the above `Dockerfile` is built into an image, `app.py` and
`requirements.txt` is present because of that `Dockerfile`'s `COPY` command,
`requirements.txt` are present because of that `Dockerfile`'s `COPY` command,
and the output from `app.py` is accessible over HTTP thanks to the `EXPOSE`
command.
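The `Dockerfile` itself is outside this fragment; purely as an illustration (a hypothetical sketch, not the guide's actual file), a file matching that description could look like:

```
# Hypothetical sketch only -- the guide's actual Dockerfile may differ.
FROM python:3.7-slim
WORKDIR /app
# COPY makes requirements.txt and app.py present in the image
COPY requirements.txt app.py ./
RUN pip install -r requirements.txt
# EXPOSE makes the output of app.py reachable over HTTP
EXPOSE 80
CMD ["python", "app.py"]
```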

View File

@ -46,22 +46,34 @@ On {{ linux-dist-long }}, Docker EE supports storage drivers, `overlay2` and `de
### FIPS 140-2 cryptographic module support
[Federal Information Processing Standards (FIPS) Publication 140-2](https://csrc.nist.gov/csrc/media/publications/fips/140/2/final/documents/fips1402.pdf) is a United States Federal security requirement for cryptographic modules.
[Federal Information Processing Standards (FIPS) Publication 140-2](https://csrc.nist.gov/csrc/media/publications/fips/140/2/final/documents/fips1402.pdf)
is a United States Federal security requirement for cryptographic modules.
With Docker EE Basic license for versions 18.03 and later, Docker provides FIPS 140-2 support in RHEL 7.3, 7.4 and 7.5. This includes a FIPS supported cryptographic module. If the RHEL implementation already has FIPS support enabled, FIPS is automatically enabled in the Docker engine.
With Docker Engine - Enterprise Basic license for versions 18.03 and later,
Docker provides FIPS 140-2 support in RHEL 7.3, 7.4 and 7.5. This includes a
FIPS supported cryptographic module. If the RHEL implementation already has FIPS
support enabled, FIPS is also automatically enabled in the Docker engine. If
FIPS support is not already enabled in your RHEL implementation, visit the
[Red Hat Product Documentation](https://access.redhat.com/documentation/en-us/)
for instructions on how to enable it.
To verify the FIPS-140-2 module is enabled in the Linux kernel, confirm the file `/proc/sys/crypto/fips_enabled` contains `1`.
To verify the FIPS-140-2 module is enabled in the Linux kernel, confirm the file
`/proc/sys/crypto/fips_enabled` contains `1`.
```
$ cat /proc/sys/crypto/fips_enabled
1
```
> **Note**: FIPS is only supported in the Docker Engine EE. UCP and DTR currently do not have support for FIPS-140-2.
> **Note**: FIPS is only supported in Docker Engine - Enterprise. UCP
> and DTR currently do not have support for FIPS 140-2.
To enable FIPS 140-2 compliance on a system that is not in FIPS 140-2 mode, do the following:
You can override FIPS 140-2 compliance on a system that is not in FIPS 140-2
mode. Note that this **does not** change FIPS 140-2 mode on the system. To
override the FIPS 140-2 mode, follow the steps below.
Create a file called `/etc/systemd/system/docker.service.d/fips-module.conf`. It needs to contain the following:
Create a file called `/etc/systemd/system/docker.service.d/fips-module.conf`.
Add the following:
```
[Service]
@ -76,7 +88,8 @@ Restart the Docker service as root.
`$ sudo systemctl restart docker`
To confirm Docker is running with FIPS-140-2 enabled, run the `docker info` command:
To confirm Docker is running with FIPS-140-2 enabled, run the `docker info`
command:
{% raw %}
```
@ -87,11 +100,11 @@ docker info --format {{.SecurityOptions}}
### Disabling FIPS-140-2
If the system has the FIPS 140-2 cryptographic module installed on the operating system,
it is possible to disable FIPS-140-2 compliance.
If the system has the FIPS 140-2 cryptographic module installed on the operating
system, it is possible to disable FIPS-140-2 compliance.
To disable FIPS 140-2 in Docker but not the operating system, set the value `DOCKER_FIPS=0`
in the `/etc/systemd/system/docker.service.d/fips-module.conf`.
To disable FIPS 140-2 in Docker but not the operating system, set the value
`DOCKER_FIPS=0` in the `/etc/systemd/system/docker.service.d/fips-module.conf`.
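For example, the drop-in file would then contain (a sketch, mirroring the `[Service]` pattern shown earlier on this page):

```
[Service]
Environment="DOCKER_FIPS=0"
```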
Reload the Docker configuration to systemd.

View File

@ -53,9 +53,9 @@ for a lot more information.
## Prevent Docker from manipulating iptables
To prevent Docker from manipulating the `iptables` policies at all, set the
`iptables` key to `false` in `/etc/docker/daemon.json`. This is inappropriate
for most users, because the `iptables` policies then need to be managed by hand.
It is possible to set the `iptables` key to `false` in the Docker engine's configuration file at `/etc/docker/daemon.json`, but this option is not appropriate for most users. It is not possible to completely prevent Docker from creating `iptables` rules, and creating them after the fact is extremely involved and beyond the scope of these instructions. Setting `iptables` to `false` will more than likely break container networking for the Docker engine.
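For reference, if you do accept those caveats, the setting is a single key in the daemon configuration file (a minimal sketch):

```
{
  "iptables": false
}
```

Restart the Docker daemon after changing this file for the setting to take effect.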
For system integrators who wish to build the Docker runtime into other applications, explore the [`moby` project](https://mobyproject.org/).
## Next steps

View File

@ -50,7 +50,7 @@ the `--mount` flag was used for swarm services. However, starting with Docker
is mounted in the container. May be specified as `destination`, `dst`,
or `target`.
- The `tmpfs-type` and `tmpfs-mode` options. See
[tmpfs options](#tmpfs-options).
[tmpfs options](#specify-tmpfs-options).
The examples below show both the `--mount` and `--tmpfs` syntax where possible,
and `--mount` is presented first.