Merge branch 'master' into ip-range-table

README.md

@@ -1,11 +1,9 @@
# Docs @ Docker

Welcome to the repo for our documentation. This is the source for
[https://docs.docker.com/](https://docs.docker.com/).

Feel free to send us pull requests and file issues. Our docs are completely
open source and we deeply appreciate contributions from our community!

## Table of Contents

- [Providing feedback](#providing-feedback)
@@ -360,7 +358,7 @@ If you are using a version of the documentation that is no longer supported, whi

- By entering your version number and selecting it from the branch selection list for this repo
- By directly accessing the GitHub URL for your version. For example, https://github.com/docker/docker.github.io/tree/v1.9 for `v1.9`
- By running a container of the specific [tag for your documentation version](https://hub.docker.com/r/docs/docker.github.io/tags)
  in Docker Hub. For example, run the following to access `v1.9`:
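The code block for that last step is truncated in this hunk. A minimal sketch of what it would contain, assuming the `docs/docker.github.io` repository on Docker Hub publishes a tag per documentation version (the port and run flags are assumptions, not taken from this diff):

```shell
# Build the image reference for a given docs version tag.
# Repository name comes from the Docker Hub link above; the run command
# itself is an assumption since the original code block is truncated.
version="v1.9"
image="docs/docker.github.io:${version}"
echo "$image"
# docker run -ti --rm -p 4000:4000 "$image"   # would serve the docs locally
```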
@@ -199,8 +199,8 @@

*Also known as: docker-machine*

<a class="glossary" name="namespace">namespace</a>:
A [Linux namespace](http://man7.org/linux/man-pages/man7/namespaces.7.html)
is a Linux kernel feature that isolates and virtualizes system resources. Processes which are restricted to
a namespace can only interact with resources or processes that are part of the same namespace. Namespaces
are an important part of Docker's isolation model. Namespaces exist for each type of
resource, including `net` (networking), `mnt` (storage), `pid` (processes), `uts` (hostname control),
and `user` (UID mapping). For more information about namespaces, see [Docker run reference](/engine/reference/run/).
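The namespace types named in the definition can be observed directly on any Linux host, without Docker; a minimal illustration (Linux-only):

```shell
# Every process has one entry per namespace type under /proc/<pid>/ns;
# the names match the types above (net, mnt, pid, uts, user, plus others).
ls -1 /proc/self/ns
```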
_data/toc.yaml

@@ -86,12 +86,6 @@ guides:
  title: Binaries
- path: /install/linux/linux-postinstall/
  title: Optional Linux post-installation steps
- title: MacOS
  path: /docker-for-mac/install/
  nosync: true
- title: Microsoft Windows
  path: /docker-for-windows/install/
  nosync: true
- sectiontitle: Docker Enterprise
  section:
  - title: About Docker Enterprise

@@ -110,9 +104,6 @@ guides:
  title: Ubuntu
- path: /install/windows/docker-ee/
  title: Microsoft Windows Server
- title: Release notes
  path: /engine/release-notes/
  nosync: true
- sectiontitle: Compatibility between Docker versions
  section:
  - path: /engine/ce-ee-node-activate/

@@ -133,13 +124,13 @@ guides:
  section:
  - title: "Part 1: Orientation and setup"
    path: /get-started/
  - title: "Part 2: Containerizing an application"
    path: /get-started/part2/
  - title: "Part 3: Deploying to Kubernetes"
    path: /get-started/part3/
  - title: "Part 4: Deploying to Swarm"
    path: /get-started/part4/
  - title: "Part 5: Sharing images on Docker Hub"
    path: /get-started/part5/
  - path: /get-started/resources/
    title: "Educational resources"

@@ -161,20 +152,8 @@ guides:
  title: Docker build enhancements for 18.09
- path: /develop/develop-images/multistage-build/
  title: Use multi-stage builds
- path: /engine/reference/builder/
  title: Dockerfile reference
  nosync: true
- path: /develop/develop-images/image_management/
  title: Manage images
- path: /samples/
  title: Docker app examples
  nosync: true
- sectiontitle: Develop using the Docker Engine SDKs and API
  section:
  - path: /develop/sdk/
    title: Overview
  - path: /develop/sdk/examples/
    title: SDK and API examples
- sectiontitle: Configure networking
  section:
  - path: /network/

@@ -215,7 +194,6 @@ guides:
  title: (Legacy) Container links
- path: /network/overlay-standalone.swarm/
  title: Overlay networks for Swarm Classic

- sectiontitle: Manage application data
  section:
  - path: /storage/

@@ -262,12 +240,6 @@ guides:
  title: Configure and run Docker
- path: /config/daemon/systemd/
  title: Control Docker with systemd
- path: /config/labels-custom-metadata/
  title: Apply custom metadata to daemons
  nosync: true
- path: /config/containers/logging/configure/
  title: Configuring default drivers
  nosync: true
- sectiontitle: Work with external tools
  section:
  - path: /config/thirdparty/

@@ -286,12 +258,6 @@ guides:
  title: Container runtime metrics
- path: /config/containers/resource_constraints/
  title: Runtime options with Memory, CPUs, and GPUs
- path: /config/labels-custom-metadata/
  title: Apply custom metadata to containers
  nosync: true
- path: /config/pruning/
  title: Prune unused containers
  nosync: true
- sectiontitle: Logging
  section:
  - path: /config/containers/logging/

@@ -328,9 +294,6 @@ guides:
  title: Journald logging driver
- path: /config/containers/logging/splunk/
  title: Splunk logging driver
- path: /registry/recipes/mirror/
  title: Run a local registry mirror
  nosync: true
- sectiontitle: Work with external tools
  section:
  - path: /config/thirdparty/dsc/

@@ -341,8 +304,6 @@ guides:
  title: Chef
- path: /config/thirdparty/puppet/
  title: Puppet
- path: /config/thirdparty/ambassador_pattern_linking/
  title: (Obsolete) Link via an ambassador container
- sectiontitle: Security
  section:
  - path: /engine/security/security/

@@ -375,6 +336,8 @@ guides:
  title: Seccomp security profiles for Docker
- path: /engine/security/userns-remap/
  title: Isolate containers with a user namespace
- path: /engine/security/rootless/
  title: Run the Docker daemon as a non-root user (Rootless mode)
- sectiontitle: Scale your app
  section:
  - path: /engine/swarm/

@@ -427,8 +390,6 @@ guides:
  title: Manage sensitive data with Docker secrets
- path: /engine/swarm/swarm_manager_locking/
  title: Lock your swarm
- path: /engine/swarm/networking/
  title: Manage swarm service networks
- path: /engine/swarm/admin_guide/
  title: Swarm administration guide
- path: /engine/swarm/raft/

@@ -483,29 +444,34 @@ guides:
  title: FedRAMP
- path: /compliance/fisma/
  title: FISMA

- sectiontitle: Open source at Docker
  section:
  - path: /opensource/
    title: Contribute to documentation
  - path: /opensource/ways/
    title: Other ways to contribute

- sectiontitle: Documentation archive
  section:
  - path: /docsarchive/
    title: View the docs archives
  - path: /hackathon/
    title: Docs hackathon results

reference:
- sectiontitle: File formats
  section:
  - title: Dockerfile reference
    path: /engine/reference/builder/
  - title: Compose file reference
    path: /compose/compose-file/
    nosync: true
  - sectiontitle: Compose file reference
    section:
    - path: /compose/compose-file/
      title: Version 3
    - path: /compose/compose-file/compose-file-v2/
      title: Version 2
    - path: /compose/compose-file/compose-file-v1/
      title: Version 1
    - path: /compose/compose-file/compose-versioning/
      title: About versions and upgrading
  - path: /compose/faq/
    title: Frequently asked questions

- sectiontitle: Command-Line Interfaces (CLIs)
  section:
@@ -629,7 +595,6 @@ reference:
  title: docker checkpoint ls
- path: /engine/reference/commandline/checkpoint_rm/
  title: docker checkpoint rm

- sectiontitle: docker cluster *
  section:
  - path: /engine/reference/commandline/cluster/

@@ -1070,14 +1035,64 @@ reference:
  title: docker volume rm
- path: /engine/reference/commandline/wait/
  title: docker wait
- sectiontitle: Docker Compose CLI reference
  section:
  - path: /compose/reference/overview/
    title: Overview of docker-compose CLI
  - path: /compose/reference/envvars/
    title: CLI environment variables
  - path: /compose/completion/
    title: Command-line completion
  - path: /compose/reference/build/
    title: build
  - path: /compose/reference/bundle/
    title: bundle
  - path: /compose/reference/config/
    title: config
  - path: /compose/reference/create/
    title: create
  - path: /compose/reference/down/
    title: down
  - path: /compose/reference/events/
    title: events
  - path: /compose/reference/exec/
    title: exec
  - path: /compose/reference/help/
    title: help
  - path: /compose/reference/kill/
    title: kill
  - path: /compose/reference/logs/
    title: logs
  - path: /compose/reference/pause/
    title: pause
  - path: /compose/reference/port/
    title: port
  - path: /compose/reference/ps/
    title: ps
  - path: /compose/reference/pull/
    title: pull
  - path: /compose/reference/push/
    title: push
  - path: /compose/reference/restart/
    title: restart
  - path: /compose/reference/rm/
    title: rm
  - path: /compose/reference/run/
    title: run
  - path: /compose/reference/scale/
    title: scale
  - path: /compose/reference/start/
    title: start
  - path: /compose/reference/stop/
    title: stop
  - path: /compose/reference/top/
    title: top
  - path: /compose/reference/unpause/
    title: unpause
  - path: /compose/reference/up/
    title: up
- title: Daemon CLI (dockerd)
  path: /engine/reference/commandline/dockerd/
- title: Machine (docker-machine) CLI
  path: /machine/reference/
  nosync: true
- title: Compose (docker-compose) CLI
  path: /compose/reference/overview/
  nosync: true
- sectiontitle: DTR CLI
  section:
  - path: /reference/dtr/2.7/cli/

@@ -1126,7 +1141,6 @@ reference:
  title: uninstall-ucp
- path: /reference/ucp/3.2/cli/upgrade/
  title: upgrade

- sectiontitle: Application Programming Interfaces (APIs)
  section:
  - sectiontitle: Docker Engine API

@@ -1191,23 +1205,52 @@ reference:
  title: v1.18 reference
- title: DTR API
  path: /reference/dtr/2.7/api/
- title: UCP API
  path: /reference/ucp/3.2/api/
- title: Registry API
  path: /registry/spec/api/
  nosync: true

- title: Template API
  path: /app-template/api-reference/
- title: UCP API
  path: /reference/ucp/3.2/api/
- sectiontitle: Drivers and specifications
  section:
  - title: Image specification
    path: /registry/spec/manifest-v2-2/
  - title: Machine drivers
    path: /machine/drivers/os-base/
  - title: Registry token authentication
    path: /registry/spec/auth/
  - title: Registry storage drivers
    path: /registry/storage-drivers/

- sectiontitle: Registry image manifests
  section:
  - path: /registry/spec/manifest-v2-1/
    title: Image manifest v 2, schema 1
  - path: /registry/spec/manifest-v2-2/
    title: Image manifest v 2, schema 2
  - path: /registry/spec/deprecated-schema-v1/
    title: Update deprecated schema v1 images
- sectiontitle: Registry token authorization
  section:
  - path: /registry/spec/auth/
    title: Docker Registry token authentication
  - path: /registry/spec/auth/jwt/
    title: Token authentication implementation
  - path: /registry/spec/auth/oauth/
    title: Oauth2 token authentication
  - path: /registry/spec/auth/scope/
    title: Token scope documentation
  - path: /registry/spec/auth/token/
    title: Token authentication specification
- sectiontitle: Registry storage drivers
  section:
  - path: /registry/storage-drivers/
    title: Storage driver overview
  - path: /registry/storage-drivers/oss/
    title: Aliyun OSS storage driver
  - path: /registry/storage-drivers/filesystem/
    title: Filesystem storage driver
  - path: /registry/storage-drivers/gcs/
    title: GCS storage driver
  - path: /registry/storage-drivers/inmemory/
    title: In-memory storage driver
  - path: /registry/storage-drivers/azure/
    title: Microsoft Azure storage driver
  - path: /registry/storage-drivers/s3/
    title: S3 storage driver
  - path: /registry/storage-drivers/swift/
    title: Swift storage driver
- sectiontitle: Compliance control references
  section:
  - sectiontitle: NIST 800-53
@@ -1305,13 +1348,8 @@ manuals:
  title: Cluster file structure
- path: /cluster/reference/envvars/
  title: Environment variables
- path: /cluster/reference/
  title: Subcommands
- sectiontitle: Docker Engine - Enterprise
  section:
  - path: /ee/supported-platforms/
    title: Install Docker Engine - Enterprise
    nosync: true
  - title: Release notes
    path: /engine/release-notes/
- sectiontitle: Universal Control Plane

@@ -1336,6 +1374,8 @@ manuals:
  section:
  - path: /ee/ucp/admin/install/cloudproviders/install-on-azure/
    title: Install on Azure
  - path: /ee/ucp/admin/install/cloudproviders/install-on-azure-custom/
    title: Custom Azure roles
  - path: /ee/ucp/admin/install/cloudproviders/install-on-aws/
    title: Install on AWS
  - path: /ee/ucp/admin/install/upgrade/

@@ -1364,6 +1404,8 @@ manuals:
  title: Create UCP audit logs
- path: /ee/ucp/admin/configure/enable-saml-authentication/
  title: Enable SAML authentication
- path: /ee/ucp/admin/configure/integrate-saml/
  title: SAML integration
- path: /ee/ucp/admin/configure/integrate-scim/
  title: SCIM integration
- path: /ee/ucp/admin/configure/enable-helm-tiller/

@@ -1416,9 +1458,6 @@ manuals:
  path: /ee/admin/backup/back-up-ucp/
- title: Restore UCP
  path: /ee/admin/restore/restore-ucp/
- title: CLI reference
  path: /reference/ucp/3.2/cli/
  nosync: true
- sectiontitle: Authorize role-based access
  section:
  - path: /ee/ucp/authorization/

@@ -1507,10 +1546,6 @@ manuals:
  path: /ee/ucp/interlock/usage/canary/
- title: Using context or path-based routing
  path: /ee/ucp/interlock/usage/context/
- title: Specifying a routing mode
  path: /ee/ucp/interlock/usage/interlock-vip-mode/
- title: Using routing labels
  path: /ee/ucp/interlock/usage/labels-reference/
- title: Publishing a default host service
  path: /ee/ucp/interlock/usage/default-backend/
- title: Specifying a routing mode

@@ -1571,9 +1606,6 @@ manuals:
  path: /ee/ucp/kubernetes/cluster-ingress/canary/
- title: Implementing Persistent (sticky) Sessions
  path: /ee/ucp/kubernetes/cluster-ingress/sticky/
- title: API reference
  path: /reference/ucp/3.2/api/
  nosync: true
- path: /ee/ucp/release-notes/
  title: Release notes
- sectiontitle: Previous versions
@@ -2692,9 +2724,9 @@ manuals:
- title: Repair a cluster
  path: /ee/dtr/admin/disaster-recovery/repair-a-cluster/
- title: Create a backup
  path: /ee/admin/backup/back-up-dtr/
- title: Restore from a backup
  path: /ee/admin/restore/restore-dtr/
- title: CLI reference
  path: /reference/dtr/2.7/cli/
  nosync: true

@@ -3758,9 +3790,6 @@ manuals:
  section:
  - path: /ee/docker-ee-architecture/
    title: Docker Enterprise Architecture
  - path: /ee/supported-platforms/
    title: Supported platforms
    nosync: true
  - path: /ee/end-to-end-install/
    title: Deploy Docker Enterprise
  - path: /ee/upgrade/

@@ -3771,24 +3800,12 @@ manuals:
  title: Overview
- path: /ee/admin/backup/back-up-swarm/
  title: Back up Docker Swarm
- path: /ee/admin/backup/back-up-ucp/
  title: Back up UCP
- path: /ee/admin/backup/back-up-dtr/
  title: Back up DTR
- path: /engine/reference/commandline/cluster_backup/
  title: Back up clusters with Docker Cluster
- sectiontitle: Restore Docker Enterprise
  section:
  - path: /ee/admin/restore/
    title: Overview
  - path: /ee/admin/restore/restore-swarm/
    title: Restore Docker Swarm
  - path: /ee/admin/restore/restore-ucp/
    title: Restore UCP
  - path: /ee/admin/restore/restore-dtr/
    title: Restore DTR
  - path: /cluster/reference/restore/
    title: Restore clusters with Docker Cluster
- sectiontitle: Disaster Recovery
  section:
  - path: /ee/admin/disaster-recovery/

@@ -3813,28 +3830,10 @@ manuals:
  title: Images
- path: /assemble/adv-backend-manage/
  title: Advanced Backend Management
- path: /engine/reference/commandline/assemble/
  title: CLI reference
- sectiontitle: Docker App
  section:
  - path: /app/working-with-app/
    title: Working with Docker App
  - path: /engine/reference/commandline/app/
    title: CLI reference
- sectiontitle: Docker Template
  section:
  - path: /app-template/working-with-template/
    title: Working with Docker Template
  - path: /app-template/api-reference/
    title: API reference
  - path: /engine/reference/commandline/template/
    title: CLI reference
- sectiontitle: Docker Buildx
  section:
  - path: /buildx/working-with-buildx/
    title: Working with Docker Buildx
  - path: /engine/reference/commandline/buildx/
    title: CLI reference
- path: /app/working-with-app/
  title: Docker App
- path: /buildx/working-with-buildx/
  title: Docker Buildx
- sectiontitle: Docker Compose
  section:
  - path: /compose/
@@ -3843,74 +3842,6 @@ manuals:
  title: Install Compose
- path: /compose/gettingstarted/
  title: Getting started
- sectiontitle: Compose (docker-compose) CLI reference
  section:
  - path: /compose/reference/overview/
    title: Overview of docker-compose CLI
  - path: /compose/reference/envvars/
    title: CLI environment variables
  - path: /compose/completion/
    title: Command-line completion
  - path: /compose/reference/build/
    title: build
  - path: /compose/reference/bundle/
    title: bundle
  - path: /compose/reference/config/
    title: config
  - path: /compose/reference/create/
    title: create
  - path: /compose/reference/down/
    title: down
  - path: /compose/reference/events/
    title: events
  - path: /compose/reference/exec/
    title: exec
  - path: /compose/reference/help/
    title: help
  - path: /compose/reference/kill/
    title: kill
  - path: /compose/reference/logs/
    title: logs
  - path: /compose/reference/pause/
    title: pause
  - path: /compose/reference/port/
    title: port
  - path: /compose/reference/ps/
    title: ps
  - path: /compose/reference/pull/
    title: pull
  - path: /compose/reference/push/
    title: push
  - path: /compose/reference/restart/
    title: restart
  - path: /compose/reference/rm/
    title: rm
  - path: /compose/reference/run/
    title: run
  - path: /compose/reference/scale/
    title: scale
  - path: /compose/reference/start/
    title: start
  - path: /compose/reference/stop/
    title: stop
  - path: /compose/reference/top/
    title: top
  - path: /compose/reference/unpause/
    title: unpause
  - path: /compose/reference/up/
    title: up
- sectiontitle: Compose file reference
  section:
  - path: /compose/compose-file/
    title: Version 3
  - path: /compose/compose-file/compose-file-v2/
    title: Version 2
  - path: /compose/compose-file/compose-file-v1/
    title: Version 1
  - path: /compose/compose-file/compose-versioning/
    title: About versions and upgrading
- path: /compose/faq/
  title: Frequently asked questions
- path: /compose/bundles/
  title: Docker stacks and distributed application bundles
- path: /compose/swarm/

@@ -3925,20 +3856,14 @@ manuals:
  title: Networking in Compose
- path: /compose/production/
  title: Using Compose in production
- path: /compose/link-env-deprecated/
  title: Link environment variables (deprecated)
- path: /compose/startup-order/
  title: Control startup order
- path: /compose/samples-for-compose/
  title: Sample apps with Compose
- path: /release-notes/docker-compose/
  title: Docker Compose release notes
- sectiontitle: Docker Context
  section:
  - path: /engine/context/working-with-contexts/
    title: Working with Docker Contexts
  - path: /engine/reference/commandline/context/
    title: CLI reference
- path: /engine/context/working-with-contexts/
  title: Docker Context
- sectiontitle: Docker Desktop for Mac
  section:
  - path: /docker-for-mac/

@@ -3992,7 +3917,7 @@ manuals:
- path: /docker-for-windows/edge-release-notes/
  title: Edge release notes
- path: /docker-for-windows/wsl-tech-preview/
  title: Docker Desktop WSL 2 backend
- title: Docker ID accounts
  path: /docker-id/
- sectiontitle: Docker Hub

@@ -4045,8 +3970,6 @@ manuals:
  title: Advanced automated builds
- path: /docker-hub/builds/link-source/
  title: Link to GitHub and BitBucket
- path: /docker-hub/builds/classic/
  title: Classic automated builds
- sectiontitle: Publisher & certified content
  section:
  - path: /docker-hub/publish/

@@ -4065,6 +3988,8 @@ manuals:
  title: Trust chain
- path: /docker-hub/publish/byol/
  title: Bring Your Own License (BYOL)
- path: /app-template/working-with-template/
  title: Docker Template
- sectiontitle: Open-source projects
  section:
  - sectiontitle: Docker Notary

@@ -4123,48 +4048,6 @@ manuals:
  title: Compatibility
- path: /registry/help/
  title: Getting help
- sectiontitle: Registry reference
  section:
  - path: /registry/spec/api/
    title: Registry HTTP API v2
  - sectiontitle: Registry image manifests
    section:
    - path: /registry/spec/manifest-v2-1/
      title: Image manifest v 2, schema 1
    - path: /registry/spec/manifest-v2-2/
      title: Image manifest v 2, schema 2
    - path: /registry/spec/deprecated-schema-v1/
      title: Update deprecated schema v1 images
  - sectiontitle: Registry storage drivers
    section:
    - path: /registry/storage-drivers/
      title: Storage driver overview
    - path: /registry/storage-drivers/oss/
      title: Aliyun OSS storage driver
    - path: /registry/storage-drivers/filesystem/
      title: Filesystem storage driver
    - path: /registry/storage-drivers/gcs/
      title: GCS storage driver
    - path: /registry/storage-drivers/inmemory/
      title: In-memory storage driver
    - path: /registry/storage-drivers/azure/
      title: Microsoft Azure storage driver
    - path: /registry/storage-drivers/s3/
      title: S3 storage driver
    - path: /registry/storage-drivers/swift/
      title: Swift storage driver
  - sectiontitle: Registry specifications
    section:
    - path: /registry/spec/auth/
      title: Docker Registry token authentication
    - path: /registry/spec/auth/jwt/
      title: Token authentication implementation
    - path: /registry/spec/auth/oauth/
      title: Oauth2 token authentication
    - path: /registry/spec/auth/scope/
      title: Token scope documentation
    - path: /registry/spec/auth/token/
      title: Token authentication specification
- path: /release-notes/
  title: Release notes
- sectiontitle: Superseded products and tools
@@ -1,4 +1,4 @@
[SSH](/../../../glossary.md#SSH) is a secure protocol for accessing remote machines and applications. It
provides authentication and encrypts data communication over insecure networks.

These topics describe how to find existing SSH keys or generate new ones, and
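As a concrete example of generating a new key pair non-interactively (the key type, file name, and comment here are illustrative choices, not prescribed by these topics):

```shell
# Generate an Ed25519 key pair into the current directory with no passphrase.
ssh-keygen -t ed25519 -C "you@example.com" -f ./demo_key -N ""
# Two files are produced: the private key and the shareable public key.
ls demo_key demo_key.pub
```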
@@ -45,6 +45,14 @@ The advantage of using a repository from which to install Docker Engine - Enterp

{% elsif section == "set-up-yum-repo" %}
You only need to set up the repository once, after which you can install Docker Engine - Enterprise _from_ the repo and repeatedly upgrade as necessary.

{% if linux-dist == "rhel" %}

<ul class="nav nav-tabs">
<li class="active"><a data-toggle="tab" data-target="#RHEL_7" data-group="7">RHEL 7</a></li>
<li><a data-toggle="tab" data-target="#RHEL_8" data-group="8">RHEL 8</a></li>
</ul>
<div class="tab-content" id="myFirstTab">
<div id="RHEL_7" class="tab-pane fade in active" markdown="1">
1. Remove existing Docker repositories from `/etc/yum.repos.d/`:

   ```bash
   $ sudo rm /etc/yum.repos.d/docker*.repo
   ```
@@ -63,14 +71,12 @@ You only need to set up the repository once, after which you can install Docker

   ```bash
   $ sudo -E sh -c 'echo "$DOCKERURL/{{ linux-dist-url-slug }}" > /etc/yum/vars/dockerurl'
   ```

   {% if linux-dist == "rhel" %}
   Also, store your OS version string in `/etc/yum/vars/dockerosversion`. Most users should use `7` or `8`, but you can also use the more specific minor version, starting from `7.2`.

   ```bash
   $ sudo sh -c 'echo "7" > /etc/yum/vars/dockerosversion'
   ```

   {% endif %}

4. Install required packages: `yum-utils` provides the _yum-config-manager_ utility, and `device-mapper-persistent-data` and `lvm2` are required by the _devicemapper_ storage driver:
@@ -80,7 +86,6 @@ You only need to set up the repository once, after which you can install Docker
      lvm2
   ```

{% if linux-dist == "rhel" %}
5. Enable the `extras` RHEL repository. This ensures access to the `container-selinux` package required by `docker-ee`.

   The repository can differ per your architecture and cloud provider, so review the options in this step before running:
@@ -113,9 +118,90 @@ You only need to set up the repository once, after which you can install Docker
   ```bash
   $ sudo yum-config-manager --enable rhui-rhel-7-server-rhui-extras-rpms
   ```

6. Add the Docker Engine - Enterprise **stable** repository:

   ```bash
   $ sudo -E yum-config-manager \
       --add-repo \
       "$DOCKERURL/{{ linux-dist-url-slug }}/docker-ee.repo"
   ```

</div>
<div id="RHEL_8" class="tab-pane fade" markdown="1">
1. Remove existing Docker repositories from `/etc/yum.repos.d/`:

   ```bash
   $ sudo rm /etc/yum.repos.d/docker*.repo
   ```

2. Temporarily store the URL (that you [copied above](#find-your-docker-ee-repo-url)) in an environment variable. Replace `<DOCKER-EE-URL>` with your URL in the following command. This variable assignment does not persist when the session ends:

   ```bash
   $ export DOCKERURL="<DOCKER-EE-URL>"
   ```

3. Store the value of the variable, `DOCKERURL` (from the previous step), in a `yum` variable in `/etc/yum/vars/`:

   ```bash
   $ sudo -E sh -c 'echo "$DOCKERURL/{{ linux-dist-url-slug }}" > /etc/yum/vars/dockerurl'
   ```

   Also, store your OS version string in `/etc/yum/vars/dockerosversion`. Most users should use `8`, but you can also use the more specific minor version.

   ```bash
   $ sudo sh -c 'echo "8" > /etc/yum/vars/dockerosversion'
   ```

4. Install required packages: `yum-utils` provides the _yum-config-manager_ utility, and `device-mapper-persistent-data` and `lvm2` are required by the _devicemapper_ storage driver:

   ```bash
   $ sudo yum install -y yum-utils \
     device-mapper-persistent-data \
     lvm2
   ```

5. Add the Docker Engine - Enterprise **stable** repository:

   ```bash
   $ sudo -E yum-config-manager \
       --add-repo \
       "$DOCKERURL/{{ linux-dist-url-slug }}/docker-ee.repo"
   ```

</div>
</div>
{% endif %}
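The `yum` variable wiring in steps 2-3 can be sanity-checked with plain shell; this sketch writes to a temporary directory with a placeholder URL instead of the real `/etc/yum/vars` and subscription URL:

```shell
# Mimic steps 2-3 against a temp dir (the URL is a placeholder, not a real
# Docker EE subscription URL; yum would substitute $dockerurl and
# $dockerosversion from these files when expanding .repo entries).
vars_dir="$(mktemp -d)"
DOCKERURL="https://example.com/ee/sub-id"
echo "$DOCKERURL/rhel" > "$vars_dir/dockerurl"
echo "8" > "$vars_dir/dockerosversion"
cat "$vars_dir/dockerurl" "$vars_dir/dockerosversion"
```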
||||
|
||||
{% if linux-dist != "rhel" %}
|
||||
|
||||
1. Remove existing Docker repositories from `/etc/yum.repos.d/`:
|
||||
|
||||
```bash
|
||||
$ sudo rm /etc/yum.repos.d/docker*.repo
|
||||
```
|
||||
|
||||
2. Temporarily store the URL (that you [copied above](#find-your-docker-ee-repo-url)) in an environment variable. Replace `<DOCKER-EE-URL>` with your URL in the following command. This variable assignment does not persist when the session ends:
|
||||
|
||||
```bash
|
||||
$ export DOCKERURL="<DOCKER-EE-URL>"
|
||||
```
|
||||
|
||||
3. Store the value of the variable, `DOCKERURL` (from the previous step), in a `yum` variable in `/etc/yum/vars/`:
|
||||
|
||||
```bash
|
||||
$ sudo -E sh -c 'echo "$DOCKERURL/{{ linux-dist-url-slug }}" > /etc/yum/vars/dockerurl'
|
||||
```
|
||||
|
||||
4. Install required packages: `yum-utils` provides the _yum-config-manager_ utility, and `device-mapper-persistent-data` and `lvm2` are required by the _devicemapper_ storage driver:
|
||||
|
||||
```bash
|
||||
$ sudo yum install -y yum-utils \
|
||||
device-mapper-persistent-data \
|
||||
lvm2
|
||||
```
|
||||
|
||||
{% if linux-dist == "oraclelinux" %}
|
||||
|
||||
5. Enable the `ol7_addons` Oracle repository. This ensures access to the `container-selinux` package required by `docker-ee`.
@ -133,13 +219,13 @@ You only need to set up the repository once, after which you can install Docker
        --add-repo \
        "$DOCKERURL/{{ linux-dist-url-slug }}/docker-ee.repo"
    ```

{% endif %}

{% elsif section == "install-using-yum-repo" %}

> **Note**: If you need to run Docker Engine - Enterprise 2.0, please see the following instructions:
> * [18.03](https://docs.docker.com/v18.03/ee/supported-platforms/) - Older Docker Engine - Enterprise engine-only release
> * [17.06](https://docs.docker.com/v17.06/engine/installation/) - Docker Enterprise Edition 2.0 (Docker Engine,
>   UCP, and DTR).

1. Install the latest patch release, or go to the next step to install a specific version:
@ -212,6 +298,15 @@ To manually install Docker Enterprise, download the `.{{ package-format | downca
{% elsif section == "install-using-yum-package" %}

{% if linux-dist == "rhel" %}
<ul class="nav nav-tabs">
<li class="active"><a data-toggle="tab" data-target="#RHEL-7" data-group="7">RHEL 7</a></li>
<li><a data-toggle="tab" data-target="#RHEL-8" data-group="8">RHEL 8</a></li>
</ul>

<div class="tab-content" id="mySecondTab">

<div id="RHEL-7" class="tab-pane fade in active" markdown="1">

1. Enable the `extras` RHEL repository. This ensures access to the `container-selinux` package which is required by `docker-ee`:

    ```bash
@ -219,26 +314,58 @@ To manually install Docker Enterprise, download the `.{{ package-format | downca
    ```

   Alternatively, obtain that package manually from Red Hat. There is no way to publicly browse this repository.
{% endif %}

{% if linux-dist == "centos" %}
1. Go to the Docker Engine - Enterprise repository URL associated with your trial or subscription
   in your browser. Go to `{{ linux-dist-url-slug }}/7/x86_64/stable-<VERSION>/Packages`
   and download the `.{{ package-format | downcase }}` file for the Docker version you want to install.
{% endif %}
2. Go to the Docker Engine - Enterprise repository URL associated with your
   trial or subscription in your browser. Go to
   `{{ linux-dist-url-slug }}/`. Choose your {{ linux-dist-long }} version,
   architecture, and Docker version. Download the
   `.{{ package-format | downcase }}` file from the `Packages` directory.

   > If you have trouble with `selinux` using the packages under the `7` directory,
   > try choosing the version-specific directory instead, such as `7.3`.

3. Install Docker Enterprise, changing the path below to the path where you downloaded
   the Docker package.

    ```bash
    $ sudo yum install /path/to/package.rpm
    ```

   Docker is installed but not started. The `docker` group is created, but no
   users are added to the group.

4. Start Docker:

   > If using `devicemapper`, ensure it is properly configured before starting Docker, per the [storage guide](/storage/storagedriver/device-mapper-driver/){: target="_blank" class="_" }.

    ```bash
    $ sudo systemctl start docker
    ```

5. Verify that Docker Engine - Enterprise is installed correctly by running the `hello-world`
   image. This command downloads a test image, runs it in a container, prints
   an informational message, and exits:

    ```bash
    $ sudo docker run hello-world
    ```

   Docker Engine - Enterprise is installed and running. Use `sudo` to run Docker commands. See
   [Linux postinstall](/install/linux/linux-postinstall.md){: target="_blank" class="_" } to allow
   non-privileged users to run Docker commands.

</div>

<div id="RHEL-8" class="tab-pane fade" markdown="1">

{% if linux-dist == "rhel" or linux-dist == "oraclelinux" %}
1. Go to the Docker Engine - Enterprise repository URL associated with your
   trial or subscription in your browser. Go to
   `{{ linux-dist-url-slug }}/`. Choose your {{ linux-dist-long }} version,
   architecture, and Docker version. Download the
   `.{{ package-format | downcase }}` file from the `Packages` directory.

{% if linux-dist == "rhel" %}
   > If you have trouble with `selinux` using the packages under the `7` directory,
   > try choosing the version-specific directory instead, such as `7.3`.
{% endif %}
{% endif %}

   > If you have trouble with `selinux` using the packages under the `8` directory,
   > try choosing the version-specific directory instead.

2. Install Docker Enterprise, changing the path below to the path where you downloaded
   the Docker package.
@ -270,6 +397,56 @@ To manually install Docker Enterprise, download the `.{{ package-format | downca
   [Linux postinstall](/install/linux/linux-postinstall.md){: target="_blank" class="_" } to allow
   non-privileged users to run Docker commands.

</div>
</div>

{% endif %}
{% if linux-dist != "rhel" %}
{% if linux-dist == "centos" %}
1. Go to the Docker Engine - Enterprise repository URL associated with your trial or subscription
   in your browser. Go to `{{ linux-dist-url-slug }}/7/x86_64/stable-<VERSION>/Packages`
   and download the `.{{ package-format | downcase }}` file for the Docker version you want to install.
{% endif %}

{% if linux-dist == "oraclelinux" %}
1. Go to the Docker Engine - Enterprise repository URL associated with your
   trial or subscription in your browser. Go to
   `{{ linux-dist-url-slug }}/`. Choose your {{ linux-dist-long }} version,
   architecture, and Docker version. Download the
   `.{{ package-format | downcase }}` file from the `Packages` directory.

{% endif %}

2. Install Docker Enterprise, changing the path below to the path where you downloaded
   the Docker package.

    ```bash
    $ sudo yum install /path/to/package.rpm
    ```

   Docker is installed but not started. The `docker` group is created, but no
   users are added to the group.

3. Start Docker:

   > If using `devicemapper`, ensure it is properly configured before starting Docker, per the [storage guide](/storage/storagedriver/device-mapper-driver/){: target="_blank" class="_" }.

    ```bash
    $ sudo systemctl start docker
    ```

4. Verify that Docker Engine - Enterprise is installed correctly by running the `hello-world`
   image. This command downloads a test image, runs it in a container, prints
   an informational message, and exits:

    ```bash
    $ sudo docker run hello-world
    ```

   Docker Engine - Enterprise is installed and running. Use `sudo` to run Docker commands. See
   [Linux postinstall](/install/linux/linux-postinstall.md){: target="_blank" class="_" } to allow
   non-privileged users to run Docker commands.
{% endif %}

{% elsif section == "upgrade-using-yum-package" %}
@ -66,6 +66,13 @@ Docker Engine - Community is installed. It starts automatically on `DEB`-based d
`systemctl` or `service` command. As the message indicates, non-root users can't
run Docker commands by default.

> **Note**:
>
> To install Docker without root privileges, see
> [Run the Docker daemon as a non-root user (Rootless mode)](/engine/security/rootless.md).
>
> Rootless mode is currently available as an experimental feature.

#### Upgrade Docker after using the convenience script

If you installed Docker using the convenience script, you should upgrade Docker
@ -179,7 +179,7 @@
{% unless page.notags == true %}
{% assign keywords = page.keywords | split:"," %}
{% for keyword in keywords %}{% assign strippedKeyword = keyword | strip %}
{% capture keywordlist %}{{ keywordlist }}<a href="https://docs.docker.com/search/?q={{strippedKeyword}}">{{strippedKeyword}}</a>{% unless forloop.last %}, {% endunless %}{% endcapture %}
{% endfor %}
{% if keywordlist.size > 0 %}<span class="glyphicon glyphicon-tags" style="padding-right: 10px"></span><span style="vertical-align: 2px">{{ keywordlist }}</span>{% endif %}
{% endunless %}
@ -1,6 +1,6 @@
---
title: Docker Template
description: Working with Docker Template
keywords: Docker, application template, Application Designer
---
@ -26,7 +26,7 @@ given service, and writes the output to the `/project` mounted folder.
definition. It contains the name of the service, description, and available
parameters such as ports, volumes, etc. For a complete list of parameters that
are allowed, see [Docker Template API
reference](/app-template/api-reference/).

An _application template_ is a collection of one or more service templates. An
application template generates a Dockerfile per service and only one Compose
@ -1,5 +1,5 @@
---
title: Docker App
description: Learn about Docker App
keywords: Docker App, applications, compose, orchestration
---
@ -194,7 +194,7 @@ There are several options for deploying a Docker App project.
- Deploy as a Compose application
- Deploy as a Docker Stack application

All three options are discussed, starting with deploying as a native Docker App application.

#### Deploy as a native Docker App
@ -1,5 +1,5 @@
---
title: Docker Assemble
description: Installing Docker Assemble
keywords: Assemble, Docker Enterprise, plugin, Spring Boot, .NET, c#, F#
---
@ -1,5 +1,5 @@
---
title: Docker Buildx
description: Working with Docker Buildx
keywords: Docker, buildx, multi-arch
---
@ -327,6 +327,36 @@ that are running on worker nodes.
*dev indicates that the functionality is only for development and testing. Arbitrary Kubernetes configuration parameters are not tested and supported under the Docker Enterprise Software Support Agreement.

#### vpc

If you are deploying onto AWS, by default Docker Cluster creates a new AWS
VPC (Virtual Private Cloud) for the Docker Enterprise resources. To use an
existing VPC instead, specify a VPC ID in the Cluster File.

```yaml
cluster:
  vpc:
    id: vpc-existing-vpc-id
```

Docker Cluster assumes the VPC CIDR is `172.31.0.0/16` and therefore
attempts to create AWS subnets from this range. Docker Cluster cannot utilise
existing AWS subnets. To instruct Docker Cluster to provision subnets from an
alternative CIDR, pass a new CIDR into the Cluster File.

```yaml
cluster:
  vpc:
    id: vpc-existing-vpc-id
    cidr: "192.168.0.0/16"
```

The following elements can be specified:

- `id` - (Required) The existing AWS VPC ID `vpc-xxx`
- `cidr` - If the VPC's CIDR is not the default `172.31.0.0/16`, an alternative
  CIDR can be specified here.

### provider

Defines where the cluster's resources are provisioned, as well as provider-specific configuration such as tags.
@ -6,7 +6,7 @@ title: "Quickstart: Compose and ASP.NET Core with SQL Server"
This quick-start guide demonstrates how to use Docker Engine on Linux and Docker
Compose to set up and run the sample ASP.NET Core application using the
[.NET Core SDK image](https://hub.docker.com/_/microsoft-dotnet-core-sdk)
with the
[SQL Server on Linux image](https://hub.docker.com/_/microsoft-mssql-server).
You just need to have [Docker Engine](/install/index.md)
@ -112,7 +112,7 @@ configure this app to use our SQL Server database, and then create a
This file defines the `web` and `db` micro-services, their relationship, the
ports they are using, and their specific environment variables.

> **Note**: You may receive an error if you choose the wrong Compose file
> version. Be sure to choose a version that is compatible with your system.
@ -26,12 +26,6 @@ However, [swarm mode](/engine/swarm/index.md), multi-service applications, and
stack files are now fully supported. A stack file is a particular type of
[version 3 Compose file](/compose/compose-file/index.md).

If you are just getting started with Docker and want to learn the best way to
deploy multi-service applications, a good place to start is the [Get Started
walkthrough](/get-started/). This shows you how to define
a service configuration in a Compose file, deploy the app, and use
the relevant tools and commands.

## Produce a bundle

The easiest way to produce a bundle is to generate it using `docker-compose`
@ -212,8 +206,6 @@ A service has the following fields:
## Related topics

* [Get started walkthrough](/get-started/)

* [docker stack deploy](/engine/reference/commandline/stack_deploy/) command

* [deploy](/compose/compose-file/index.md#deploy) option in [Compose files](/compose/compose-file/index.md)
@ -30,7 +30,7 @@ The default path for a Compose file is `./docker-compose.yml`.
>**Tip**: You can use either a `.yml` or `.yaml` extension for this file. They both work.

A [container](/../../glossary.md#container) definition contains configuration which is applied to each
container started for that service, much like passing command-line parameters to
`docker run`. Likewise, network and volume definitions are analogous to
`docker network create` and `docker volume create`.
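
As a minimal sketch of that analogy (hypothetical service, network, and volume names, not part of the original page), a Compose file tying all three together might look like:

```yaml
version: "3"
services:
  web:
    image: nginx:alpine      # container definition: like flags to `docker run`
    ports:
      - "8080:80"
    networks:
      - frontend
    volumes:
      - webdata:/usr/share/nginx/html
networks:
  frontend: {}               # analogous to `docker network create frontend`
volumes:
  webdata: {}                # analogous to `docker volume create webdata`
```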
@ -196,8 +196,8 @@ or a list:
args:
  - buildno=1
  - gitcommithash=cdc3b19

> **Note**: In your Dockerfile, if you specify `ARG` before the `FROM` instruction,
> `ARG` is not available in the build instructions under `FROM`.
> If you need an argument to be available in both places, also specify it under the `FROM` instruction.
> See [Understand how ARGS and FROM interact](/engine/reference/builder/#understand-how-arg-and-from-interact) for usage details.
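
A minimal sketch of the interaction described in the note (hypothetical image and argument names):

```dockerfile
# VERSION declared before FROM is usable in the FROM line itself...
ARG VERSION=latest
FROM busybox:${VERSION}

# ...but is empty after FROM unless re-declared here:
ARG VERSION
RUN echo "building with version ${VERSION}"
```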
@ -2586,6 +2586,5 @@ stack.
- [User guide](/compose/index.md)
- [Installing Compose](/compose/install/)
- [Compose file versions and upgrading](compose-versioning.md)
- [Get started with Docker](/get-started/)
- [Samples](/samples/)
- [Command line reference](/compose/reference/)
@ -59,7 +59,6 @@ Compose has commands for managing the whole lifecycle of your application:
## Compose documentation

- [Installing Compose](install.md)
- [Getting Started](gettingstarted.md)
- [Get started with Django](django.md)
- [Get started with Rails](rails.md)
- [Get started with WordPress](wordpress.md)
@ -71,10 +70,10 @@ Compose has commands for managing the whole lifecycle of your application:
The features of Compose that make it effective are:

* [Multiple isolated environments on a single host](#multiple-isolated-environments-on-a-single-host)
* [Preserve volume data when containers are created](#preserve-volume-data-when-containers-are-created)
* [Only recreate containers that have changed](#only-recreate-containers-that-have-changed)
* [Variables and moving a composition between environments](#variables-and-moving-a-composition-between-environments)

### Multiple isolated environments on a single host
@ -5,9 +5,12 @@ title: docker-compose create
notoc: true
---

> **This command is deprecated.** Use the [up](up.md) command with `--no-start`
> instead.
{: .warning }

```
Creates containers for a service.
This command is deprecated. Use the `up` command with `--no-start` instead.

Usage: create [options] [SERVICE...]
@ -5,10 +5,11 @@ title: docker-compose scale
notoc: true
---

> **This command is deprecated.** Use the [up](up.md) command with the
> `--scale` flag instead. Beware that using `up` with the `--scale` flag has
> some [subtle differences](https://github.com/docker/compose/issues/5251) with
> the `scale` command, as it incorporates the behaviour of the `up` command.
{: .warning }

```
Usage: scale [options] [SERVICE=NUM...]
@ -31,14 +31,4 @@ Docker Compose to set up and run a Rails/PostgreSQL app.
- [Quickstart: Compose and WordPress](/compose/wordpress.md) - Shows how to
  use Docker Compose to set up and run WordPress in an isolated environment
  with Docker containers.

## Samples that include Compose in the workflows

These samples include working with Docker Compose as part of broader learning
goals:

- [Get Started with Docker](/get-started/index.md) - This multi-part tutorial covers writing your first app, data storage, networking, and swarms,
  and ends with your app running on production servers in the cloud.

- [Deploying an app to a Swarm](https://github.com/docker/labs/blob/master/beginner/chapters/votingapp.md) - This tutorial from [Docker Labs](https://github.com/docker/labs/blob/master/README.md) shows you how to create and customize a sample voting app, deploy it to a [swarm](/engine/swarm.md), test it, reconfigure the app, and redeploy.
@ -50,7 +50,7 @@ example sets two configurable options on the `json-file` logging driver:
}
```

> **Note**: `log-opts` configuration options in the `daemon.json` configuration
> file must be provided as strings. Boolean and numeric values (such as the value
> for `max-file` in the example above) must therefore be enclosed in quotes (`"`).
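
For instance, a `daemon.json` along these lines keeps the numeric options quoted (a sketch; adjust the sizes and counts to your environment):

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```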
@ -80,7 +80,7 @@ The `gelf` logging driver supports the following options:
| Option                     | Required  | Description | Example value |
| :------------------------- | :-------- | :---------- | :------------ |
| `gelf-address`             | required  | The address of the GELF server. `tcp` and `udp` are the only supported URI specifiers and you must specify the port. | `--log-opt gelf-address=udp://192.168.0.42:12201` |
| `gelf-compression-type`    | optional  | `UDP Only` The type of compression the GELF driver uses to compress each log message. Allowed values are `gzip`, `zlib` and `none`. The default is `gzip`. **Note that enabling compression leads to excessive CPU usage, so it is highly recommended to set this to `none`**. | `--log-opt gelf-compression-type=gzip` |
| `gelf-compression-level`   | optional  | `UDP Only` The level of compression when `gzip` or `zlib` is the `gelf-compression-type`. An integer in the range of `-1` to `9` (BestCompression). Default value is 1 (BestSpeed). Higher levels provide more compression at lower speed. Either `-1` or `0` disables compression. | `--log-opt gelf-compression-level=2` |
| `gelf-tcp-max-reconnect`   | optional  | `TCP Only` The maximum number of reconnection attempts when the connection drops. A positive integer. Default value is 3. | `--log-opt gelf-tcp-max-reconnect=3` |
| `gelf-tcp-reconnect-delay` | optional | `TCP Only` The number of seconds to wait between reconnection attempts. A positive integer. Default value is 1. | `--log-opt gelf-tcp-reconnect-delay=1` |
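
As one illustration, the same options can also be set per service in a Compose file (hypothetical service name and GELF server address):

```yaml
version: "3"
services:
  web:
    image: nginx:alpine
    logging:
      driver: gelf
      options:
        # passed through exactly like --log-opt key=value flags
        gelf-address: "udp://192.168.0.42:12201"
        gelf-compression-type: "none"
```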
|||
>**Note**: Changing your storage backend requires you to restart the Trusted Registry.

See the [Docker Registry storage driver](/registry/storage-drivers/)
documentation for the full options specific to each driver. Storage drivers can
be customized through the [Docker Registry storage driver
API](/registry/storage-drivers/index.md#storage-driver-api).
@ -184,21 +184,21 @@ API](/registry/storage-drivers/index.md#storage-driver-api).
### Filesystem settings

The [filesystem storage backend](/registry/storage-drivers/filesystem)
has only one setting, the "Storage directory".

### S3 settings

If you select the [S3 storage backend](/registry/storage-drivers/s3), then you
need to set "AWS region", "Bucket name", "Access Key", and "Secret Key".

### Azure settings

Set the "Account name", "Account key", "Container", and "Realm" on the [Azure storage backend](/registry/storage-drivers/azure) page.

### Openstack Swift settings

View the [Openstack Swift settings](/registry/storage-drivers/openstack-swift)
documentation so that you can set up your storage settings: authurl, username,
password, container, tenant, tenantid, domain, domainid, insecureskipverify,
region, chunksize, and prefix.
@ -230,4 +230,4 @@ ensure your choices make sense.
## See also

* [Configure security settings](config-security.md)
@ -60,7 +60,7 @@ adequate space available. To do so, you can run the following commands:
### Amazon S3

S3 stores data as objects within “buckets” where you read, write, and delete
objects in that container. It, too, has a `rootdirectory` parameter. If you select this option, there will be some tasks that you need to first perform [on AWS](https://aws.amazon.com/s3/getting-started/).

1. You must create an S3 bucket, and write down its name and the AWS zone it
   runs on.
@ -155,7 +155,7 @@ YAML file (which is discussed further in this document.)
>**Note**: Changing your storage backend requires you to restart the Trusted Registry.

See the [Docker Registry storage driver](/registry/storage-drivers/)
documentation for the full options specific to each driver. Storage drivers can
be customized through the [Docker Registry storage driver
API](/registry/storage-drivers/index.md#storage-driver-api).
@ -163,21 +163,21 @@ API](/registry/storage-drivers/index.md#storage-driver-api).
### Filesystem settings

The [filesystem storage backend](/registry/storage-drivers/filesystem)
has only one setting, the "Storage directory".

### S3 settings

If you select the [S3 storage backend](/registry/storage-drivers/s3), then you
need to set "AWS region", "Bucket name", "Access Key", and "Secret Key".

### Azure settings

Set the "Account name", "Account key", "Container", and "Realm" on the [Azure storage backend](/registry/storage-drivers/azure) page.

### Openstack Swift settings

View the [Openstack Swift settings](/registry/storage-drivers/openstack-swift)
documentation so that you can set up your storage settings: authurl, username,
password, container, tenant, tenantid, domain, domainid, insecureskipverify,
region, chunksize, and prefix.
@ -49,14 +49,14 @@ You can run that snippet on any node where Docker is installed. As an example
you can SSH into a UCP node and run the DTR installer from there. By default
the installer runs in interactive mode and prompts you for any additional
information that is necessary.
[Learn more about the installer](/v18.09/reference/dtr/2.6/cli/install/).

By default DTR is deployed with self-signed certificates, so your UCP deployment
might not be able to pull images from DTR.
Use the `--dtr-external-url <dtr-domain>:<port>` optional flag while deploying
DTR, so that UCP is automatically reconfigured to trust DTR. Since the [HSTS (HTTP Strict-Transport-Security)
header](https://en.wikipedia.org/wiki/HTTP_Strict_Transport_Security) is included in all API responses,
make sure to specify the FQDN (Fully Qualified Domain Name) of your DTR, or your browser may refuse
to load the web interface.

## Step 4. Check that DTR is running
@ -125,13 +125,13 @@ To add replicas to a DTR cluster, use the `docker/dtr join` command:
    --ucp-node <ucp-node-name> \
    --ucp-insecure-tls
    ```

> **`--ucp-node`**
>
> The `<ucp-node-name>` following the `--ucp-node` flag is the target node on
> which to install the DTR replica. This is NOT the UCP Manager URL.
{: .important}

3. Check that all replicas are running.

   In your browser, navigate to the Docker **Universal Control Plane**
@ -78,34 +78,6 @@ keep image size small:
standalone containers, consider migrating to use single-replica services, so
that you can take advantage of these service-only features.

## Use swarm services when possible

- When possible, design your application with the ability to scale using swarm
  services.
- Even if you only need to run a single instance of your application, swarm
  services provide several advantages over standalone containers. A service's
  configuration is declarative, and Docker is always working to keep the
  desired and actual state in sync.
- Networks and volumes can be connected and disconnected from swarm services,
  and Docker handles redeploying the individual service containers in a
  non-disruptive way. Standalone containers need to be manually stopped, removed,
  and recreated to accommodate configuration changes.
- Several features, such as the ability to store
  [secrets](/engine/swarm/secrets.md) and [configs](/engine/swarm/configs.md),
  are only available to services rather than standalone containers. These
  features allow you to keep your images as generic as possible and to avoid
  storing sensitive data within the Docker images or containers themselves.
- Let `docker stack deploy` handle any image pulls for you, instead of using
  `docker pull`. This way, your deployment doesn't try to pull from nodes
  that are down. Also, when new nodes are added to the swarm, images are
  pulled automatically.

There are limitations around sharing data amongst nodes of a swarm service.
If you use [Docker for AWS](/docker-for-aws/persistent-data-volumes.md) or
[Docker for Azure](/docker-for-azure/persistent-data-volumes.md), you can use the
Cloudstor plugin to share data amongst your swarm service nodes. You can also
write your application data into a separate database which supports simultaneous
updates.

## Use CI/CD for testing and deployment
@ -11,13 +11,13 @@ Most Dockerfiles start from a parent image. If you need to completely control
the contents of your image, you might need to create a base image instead.
Here's the difference:

- A [parent image](/glossary.md#parent_image) is the image that your
image is based on. It refers to the contents of the `FROM` directive in the
Dockerfile. Each subsequent declaration in the Dockerfile modifies this parent
image. Most Dockerfiles start from a parent image, rather than a base image.
However, the terms are sometimes used interchangeably.

- A [base image](/glossary.md#base_image) has `FROM scratch` in its Dockerfile.

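As an illustrative sketch (the binary name `hello` is an assumption — any statically linked executable in the build context works), a base-image Dockerfile can be as small as:

```dockerfile
# Start from an empty filesystem -- this is what makes it a base image
FROM scratch
# Copy a statically linked binary from the build context into the image root
COPY hello /
# Run it when a container starts
CMD ["/hello"]
```

Built with `docker build -t hello .`, the resulting image contains nothing but that one file.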
This topic shows you several ways to create a base image. The specific process
will depend heavily on the Linux distribution you want to package. We have some

@ -12,6 +12,25 @@ notes](release-notes). For Docker Desktop system requirements, see

## Edge Releases of 2019

### Docker Desktop Community 2.1.6.0
2019-11-18

[Download](https://download.docker.com/mac/edge/40807/Docker.dmg)

### Upgrades

- [Docker 19.03.5](https://github.com/docker/docker-ce/releases/tag/v19.03.5)
- [Go 1.12.13](https://golang.org/doc/devel/release.html#go1.12)

### New

Added the ability to start and stop Compose-based applications and view combined logs in the Docker Desktop **Dashboard** UI.

### Bug fixes and minor changes

- Fixed port forwarding when containers are using `overlay` networks.
- Fixed a container start error when a container has more than one port with an arbitrary or not-yet-configured external port number. For example, `docker run -p 80 -p 443 nginx`. Fixes [docker/for-win#4935](https://github.com/docker/for-win/issues/4935) and [docker/compose#6998](https://github.com/docker/compose/issues/6998).

### Docker Desktop Community 2.1.5.0
2019-11-04

@ -58,6 +77,7 @@ Fixed an issue that caused VMs running on older hardware with macOS Catalina to

- Improved the navigation in **Settings** and **Troubleshoot** UI.
- Fixed a bug in the UEFI boot menu that sometimes caused Docker Desktop to hang during restart. Fixes [docker/for-mac#2655](https://github.com/docker/for-mac/issues/2655) and [docker/for-mac#3921](https://github.com/docker/for-mac/issues/3921).
- Docker Desktop now allows users to access the host’s SSH agent inside containers. Fixes [docker/for-mac#410](https://github.com/docker/for-mac/issues/410).
- Docker Machine is no longer included in the Docker Desktop installer. You can download it separately from the [Docker Machine releases](https://github.com/docker/machine/releases) page.

### Docker Desktop Community 2.1.3.0
2019-09-16

@ -31,12 +31,6 @@ running different versions.

```shell
$ docker --version
Docker version {{ site.docker_ce_version }}, build c97c6d6

$ docker-compose --version
docker-compose version {{ site.compose_version }}, build 8dd22a9

$ docker-machine --version
docker-machine version {{ site.machine_version }}, build 9ba6da9
```

## Explore the application

@ -531,4 +525,4 @@ After you have successfully authenticated, you can access your organizations and

* Check out the blog post, [What’s New in Docker 17.06 Community Edition
(CE)](https://blog.docker.com/2017/07/whats-new-docker-17-06-community-edition-ce/){:
target="_blank" class="_"}.

@ -122,4 +122,4 @@ For information on how to back up and restore data volumes, see [Backup, restore

- [Release notes](release-notes.md) lists component updates, new features, and
improvements associated with Stable releases. For information about Edge releases, see [Edge release
notes](edge-release-notes.md).
- [Get started with Docker](/get-started/) provides a general Docker tutorial.

@ -8,28 +8,32 @@ toc_min: 1
toc_max: 2
---

This page contains information about the new features, improvements, known issues, and bug fixes in Docker Desktop Stable releases.

For information about Edge releases, see the [Edge release notes](edge-release-notes). For Docker Desktop system requirements, see
[What to know before you install](install.md#what-to-know-before-you-install).

## Stable Releases of 2019

## Docker Desktop Community 2.1.0.5
2019-11-18

> [Download](https://hub.docker.com/?overlay=onboarding)
>
> You must sign in to Docker Hub to download Docker Desktop.

Docker Desktop 2.1.0.5 contains a Kubernetes upgrade. Note that your local Kubernetes cluster will be reset after installing this version.

### Upgrades

- [Docker 19.03.5](https://github.com/docker/docker-ce/releases/tag/v19.03.5)
- [Kubernetes 1.14.8](https://github.com/kubernetes/kubernetes/releases/tag/v1.14.8)
- [Go 1.12.13](https://golang.org/doc/devel/release.html#go1.12)

## Docker Desktop Community 2.1.0.4
2019-10-21

[Download](https://download.docker.com/mac/stable/39773/Docker.dmg)

### Upgrades

|
@ -791,4 +795,4 @@ events or unexpected unmounts.
|
|||
|
||||
* Docker 1.12.0
|
||||
* Docker Machine 0.8.0
|
||||
* Docker Compose 1.8.0
|
||||
* Docker Compose 1.8.0
|
||||
|
|
@ -12,6 +12,52 @@ notes](release-notes). For Docker Desktop system requirements, see

## Edge Releases of 2019

### Docker Desktop Community 2.1.6.1
2019-11-20

[Download](https://download.docker.com/win/edge/40920/Docker%20Desktop%20Installer.exe)

### Bug fixes and minor changes

- Fixed an issue that prevented Kubernetes from starting with WSL 2 on machines with multiple CPU cores.
- Fixed a rare issue that caused Docker Desktop to crash with the error `Unable to stop Hyper-V VM: Cannot validate argument on parameter 'SwitchName'. The argument is null or empty.`

### Known issue

Windows Insider Preview Slow Ring users cannot run WSL 2 after upgrading to the Docker Desktop Edge 2.1.6.1 release, as WSL 2 requires Windows 10 Insider Preview build 19018 or higher.

### Docker Desktop Community 2.1.6.0
2019-11-18

[Download](https://download.docker.com/win/edge/40807/Docker%20Desktop%20Installer.exe)

### Upgrades

- [Docker 19.03.5](https://github.com/docker/docker-ce/releases/tag/v19.03.5)
- [Go 1.12.13](https://golang.org/doc/devel/release.html#go1.12)

### New

Added the ability to start and stop Compose-based applications and view combined logs in the Docker Desktop **Dashboard** UI.

### Bug fixes and minor changes

- Docker Desktop now automatically restarts after an update.
- Fixed an issue where Docker Desktop auto-start was not being disabled properly on some machines.
- Fixed a container start error when a container has more than one port with an arbitrary or not-yet-configured external port number. For example, `docker run -p 80 -p 443 nginx`. Fixes [docker/for-win#4935](https://github.com/docker/for-win/issues/4935) and [docker/compose#6998](https://github.com/docker/compose/issues/6998).
- Fixed an issue which caused Docker Desktop to crash when resetting to factory defaults while running Windows containers.
- Fixed multiple issues related to Fast Startup.
- Injected Docker CLI, CLI plugins, Docker Compose, Notary, and kubectl into WSL distros when Docker Desktop WSL integration is enabled.
- Fixed an issue where bind mounts created with Docker Compose from a WSL distro were incorrectly translated. Fixes [docker/for-win#5084](https://github.com/docker/for-win/issues/5084).
- Docker Desktop now supports inotify events on shared filesystems for Windows file sharing.
- Fixed a cache invalidation bug when a file in a shared volume is renamed on the host for Windows file sharing.
- Fixed a handle leak when calling `Mknod` on a shared volume for Windows file sharing.
- To make VM startup more reliable, Docker Desktop now avoids adding a Hyper-V NIC to the Windows VM when using Hypervisor sockets for Windows file sharing (rather than Samba).

### Known issue

Windows Insider Preview Slow Ring users cannot run WSL 2 after upgrading to the Docker Desktop Edge 2.1.6.0 release, as WSL 2 requires Windows 10 Insider Preview build 19018 or higher.

### Docker Desktop Community 2.1.5.0
2019-11-04

@ -31,7 +77,7 @@ This release contains a Kubernetes upgrade. Note that your local Kubernetes clus

To access the Dashboard UI, select the Docker menu from the system tray and then click **Dashboard**.

- **WSL 2 backend:** The new Docker Desktop WSL 2 backend replaces the Docker Desktop WSL 2 Tech Preview. The WSL 2 backend architecture introduces support for Kubernetes, provides an updated Docker daemon, and offers VPN-friendly networking, among other features. For more information, see [Docker Desktop WSL 2 backend](https://docs.docker.com/docker-for-windows/wsl-tech-preview/).

- **New file sharing implementation:** Docker Desktop introduces a new file sharing implementation which uses gRPC, FUSE, and Hypervisor sockets instead of Samba, CIFS, and Hyper-V networking. The new implementation offers improved I/O performance. Additionally, when using the new file system:

@ -64,6 +110,7 @@ This release contains a Kubernetes upgrade. Note that your local Kubernetes clus

- Improved the navigation in **Settings** and **Troubleshoot** UI.
- Fixed a bug that prevented users from accessing WSL 2 Tech Preview. Fixes [docker/for-win#4734](https://github.com/docker/for-win/issues/4734).
- Docker Machine is no longer included in the Docker Desktop installer. You can download it separately from the [Docker Machine releases](https://github.com/docker/machine/releases) page.

### Docker Desktop Community 2.1.3.0
2019-09-16

@ -39,22 +39,15 @@ See [Install Docker Desktop](install.md){: target="_blank" class="_"} for downlo
> docker run hello-world

docker : Unable to find image 'hello-world:latest' locally
...

latest: Pulling from library/hello-world
1b930d010525: Pull complete
Digest: sha256:c3b4ada4687bbaa170745b3e4dd8ac3f194ca95b2d0518b417fb47e5879d9b5f
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.
...

```

4. List the `hello-world` _image_ that was downloaded from Docker Hub:

@ -89,18 +82,12 @@ running something more complex, such as an OS and a webserver.
> docker run --interactive --tty ubuntu bash

docker : Unable to find image 'ubuntu:latest' locally
...

latest: Pulling from library/ubuntu
22e816666fd6: Pull complete
079b6d2a1e53: Pull complete
11048ebae908: Pull complete
c58094023a2e: Pull complete
Digest: sha256:a7b8b7b33e44b123d7f997bd4d3d0a59fafc63e203d17efedf09ff3f6f516152
Status: Downloaded newer image for ubuntu:latest
```

@ -8,27 +8,33 @@ toc_min: 1
toc_max: 2
---

This page contains information about the new features, improvements, known issues, and bug fixes in Docker Desktop Stable releases.

For information about Edge releases, see the [Edge release notes](edge-release-notes). For Docker Desktop system requirements, see
[What to know before you install](install.md#what-to-know-before-you-install).

## Stable Releases of 2019

## Docker Desktop Community 2.1.0.5
2019-11-18

> [Download](https://hub.docker.com/?overlay=onboarding)
>
> You must sign in to Docker Hub to download Docker Desktop.

Docker Desktop 2.1.0.5 contains a Kubernetes upgrade. Note that your local Kubernetes cluster will be reset after installing this version.

### Upgrades

- [Docker 19.03.5](https://github.com/docker/docker-ce/releases/tag/v19.03.5)
- [Kubernetes 1.14.8](https://github.com/kubernetes/kubernetes/releases/tag/v1.14.8)
- [Go 1.12.13](https://golang.org/doc/devel/release.html#go1.12)

## Docker Desktop Community 2.1.0.4
2019-10-21

[Download](https://download.docker.com/win/stable/39773/Docker%20Desktop%20Installer.exe)

Docker Desktop 2.1.0.4 contains a Kubernetes upgrade. Note that your local Kubernetes cluster will be reset after installing this version.

### Upgrades

@ -775,4 +781,4 @@ We did not distribute a 1.12.4 stable release

* Docker 1.12.0
* Docker Machine 0.8.0
* Docker Compose 1.8.0

@ -1,31 +1,45 @@
---
description: Docker Desktop WSL 2 backend
keywords: WSL, WSL 2 Tech Preview, Windows Subsystem for Linux
title: Docker Desktop WSL 2 backend
toc_min: 1
toc_max: 2
---

# Overview
The new Docker Desktop WSL 2 backend replaces the Docker Desktop WSL 2 Tech Preview. The WSL 2 backend architecture introduces support for Kubernetes, provides an updated Docker daemon, and offers VPN-friendly networking, among other features.

WSL 2 introduces a significant architectural change as it is a full Linux kernel built by Microsoft, allowing Linux containers to run natively without emulation. With Docker Desktop running on WSL 2, users can leverage Linux workspaces and avoid having to maintain both Linux and Windows build scripts.

Docker Desktop also leverages the dynamic memory allocation feature in WSL 2 to greatly improve resource consumption. This means Docker Desktop only uses the required amount of CPU and memory resources, enabling CPU and memory-intensive tasks such as building a container to run much faster.

Additionally, with WSL 2, the time required to start a Docker daemon after a cold start is significantly faster. It takes less than 2 seconds to start the Docker daemon when compared to tens of seconds in the current version of Docker Desktop.

Your feedback is very important to us. Please let us know your feedback by creating an issue in the [Docker Desktop for Windows GitHub](https://github.com/docker/for-win/issues) repository and adding the **WSL 2** label.

# Prerequisites

Before you install the Docker Desktop WSL 2 backend, you must complete the following steps:

1. Install Windows 10 Insider Preview build 19018 or higher.
2. Enable the WSL 2 feature on Windows. For detailed instructions, refer to the [Microsoft documentation](https://docs.microsoft.com/en-us/windows/wsl/wsl2-install).
3. Install a default distribution based on Ubuntu 18.04. You can check this with `wsl lsb_release -a`. You can download Ubuntu 18.04 from the [Microsoft store](https://www.microsoft.com/en-us/p/ubuntu-1804-lts/9n9tngvndl3q).
4. Ensure the Ubuntu distribution runs in WSL 2 mode. WSL can run distributions in either v1 or v2 mode.

# Download

Download [Docker Desktop Edge 2.1.6.0](https://download.docker.com/win/edge/40807/Docker%20Desktop%20Installer.exe) or a later release.

# Install

Ensure you have completed the steps described in the Prerequisites section **before** installing the Docker Desktop Edge release.

1. Follow the usual Docker Desktop installation instructions to install Docker Desktop.
2. Start Docker Desktop from the Windows Start menu.
3. From the Docker menu, select **Settings** > **General**.

   

4. Select the **Enable the experimental WSL 2 based engine** check box.
5. Click **Apply & Restart**.
6. Ensure the distribution runs in WSL 2 mode. WSL can run distributions in either v1 or v2 mode.

   To check the WSL mode, run:
@ -34,24 +48,8 @@ Before you install Docker Desktop WSL 2 Tech Preview, you must complete the foll

   To upgrade to v2, run:

   `wsl --set-version <distro name> 2`
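For example, converting a distribution might look like this (the name `Ubuntu-18.04` is an assumption; use whatever name is reported on your machine):

```powershell
# List installed distributions with their current WSL version
wsl -l -v
# Convert the chosen distribution to v2 (distribution name assumed)
wsl --set-version Ubuntu-18.04 2
```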

7. When Docker Desktop restarts, go to **Settings** > **Resources** > **WSL Integration** and then select from which WSL 2 distributions you would like to access Docker.

   

8. Click **Apply & Restart** for the changes to take effect.

@ -65,4 +65,4 @@ a password, enter your token instead.

If you have 2FA enabled, you must use a personal access token when logging in
from the Docker CLI. If you don't have it enabled, this is an optional (but
more secure) method of authentication.

@ -1,90 +0,0 @@
---
description: Explains the difference between Classic and new Automated Builds
keywords: automated, build, images
title: Classic Automated Builds
---

With the launch of the new Docker Hub, we are introducing an improved Automated Build experience.

Automated Builds created using an older version of Docker Hub are now labelled "Classic".
If you were using Docker Cloud to manage builds, your builds are already the latest version of Automated Builds.

All automated builds created going forward will get the new experience. If you are creating a new
Automated Build for the first time, see the [docs](/docker-hub/builds.md#configure-automated-build-settings).

In the coming months, we will gradually convert Classic Automated Builds into new Automated Builds. This should
be a seamless process for most users.

## Managing Classic Automated Builds

You can manage both Classic and new Automated Builds from the **Builds** tab.

Repository with Classic Automated Build:



Build settings can be configured similarly to those on the old Docker Hub.

If you have previously created an automated build in both the old Docker Hub and Docker Cloud, you can switch between
Classic and new Automated Builds.

New Automated Build is displayed by default. You can switch to Classic Automated Build by clicking on this link at the top:



Likewise, you can switch back to new Automated Build by clicking on this link at the top:



## Adding a GitHub webhook manually

A GitHub webhook allows GitHub to notify Docker Hub when something has
been committed to a given Git repository.

When you create a Classic Automated Build, a webhook should be automatically added to your GitHub
repository.

To add, confirm, or modify the webhook, log in to GitHub, then navigate to
the repository. Within the repository, select **Settings > Webhooks**.
You must have admin privileges on the repository to view or modify
this setting. Click **Add webhook**, and use the following settings:

| Field | Value |
| ------|------ |
| Payload URL | https://registry.hub.docker.com/hooks/github |
| Content type | application/json |
| Which events would you like to trigger this webhook? | Just the push event |
| Active | checked |
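The same settings can be expressed as a webhook payload — for instance, if you script the hook creation against GitHub's REST API rather than the web form (a sketch; repository path and authentication are omitted):

```json
{
  "name": "web",
  "active": true,
  "events": ["push"],
  "config": {
    "url": "https://registry.hub.docker.com/hooks/github",
    "content_type": "json"
  }
}
```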

The image below shows the **Webhooks/Add webhook** form with the above settings reflected:



If configured correctly, you'll see this in the **Webhooks** view:



## Frequently Asked Questions

**Q: I've previously linked my GitHub/Bitbucket account in the old Docker Hub. Why do I need to re-link it?**

A: The new Docker Hub uses a different permissions model. [Linking only takes a few clicks in account settings](link-source.md)
with the new Docker Hub.

> **Note**: If you are linking a source code provider to create autobuilds for a team, follow the instructions to [create a service account](/docker-hub/builds.md#service-users-for-team-autobuilds) for the team before linking the account as described below.

**Q: What happens to automated builds I created in the old Docker Hub?**

A: They are now Classic Automated Builds. There are no functional differences from the old automated builds, and everything
(build triggers, existing build rules) should continue to work seamlessly.

**Q: Is it possible to convert an existing Classic Automated Build?**

A: This is currently unsupported. However, we are working to transition all builds into the new experience in
the coming months.
After Width: | Height: | Size: 131 KiB |
|
Before Width: | Height: | Size: 188 KiB After Width: | Height: | Size: 60 KiB |
|
Before Width: | Height: | Size: 254 KiB After Width: | Height: | Size: 106 KiB |
|
Before Width: | Height: | Size: 313 KiB After Width: | Height: | Size: 152 KiB |
|
After Width: | Height: | Size: 176 KiB |
|
Before Width: | Height: | Size: 60 KiB After Width: | Height: | Size: 48 KiB |
|
|
@ -1,12 +1,13 @@
---
description: Set up automated builds
keywords: automated, build, images, Docker Hub
redirect_from:
- /docker-hub/builds/automated-build/
- /docker-cloud/feature-reference/automated-build/
- /docker-cloud/builds/automated-build/
- /docker-cloud/builds/
- /docker-hub/builds/classic/
title: Set up automated builds
---

@ -268,22 +269,9 @@ You can specify a regular expression (regex) so that only matching branches or

tags are built. You can also use the results of the regex to create the Docker
tag that is applied to the built image.

You can use up to nine regular expression capture groups
(expressions enclosed in parentheses) to select a source to build, and reference
these in the **Docker Tag** field using `{\1}` through `{\9}`.
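As a rough illustration of how a capture group maps a source name to a tag (the `release/…` branch-naming scheme here is made up), `{\1}` behaves much like `\1` in a sed substitution:

```shell
# Hypothetical source regex: /release\/([0-9.]+)/ with Docker tag "version-{\1}"
branch="release/1.2.3"
tag=$(echo "$branch" | sed -E 's|^release/([0-9.]+)$|version-\1|')
echo "$tag"
```

This prints `version-1.2.3`: the capture group `([0-9.]+)` matches `1.2.3`, and the reference substitutes it into the tag template.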

<!-- Capture groups Not a priority
#### Regex example: build from version number branch and tag with version number

@ -1,7 +1,7 @@
---
description: Link to GitHub and BitBucket
keywords: Docker, docker, registry, accounts, plans, Dockerfile, Docker Hub, trusted, builds, trusted builds, automated builds, GitHub
title: Configure automated builds from GitHub and BitBucket
redirect_from:
- /docker-hub/github/
- /docker-hub/bitbucket/

@ -18,60 +18,57 @@ organizations.
|
|||
|
||||
## Link to a GitHub user account
|
||||
|
||||
1. Click **Settings** in the top-right dropdown navigation.
|
||||
1. Log in to Docker Hub using your Docker ID.
|
||||
|
||||
2. Click or scroll down to **Linked Accounts**.
|
||||
2. Click **Account Settings** in the top-right dropdown navigation, then open **Linked Accounts**.
|
||||
|
||||
3. Click the plug icon for the source provider you want to link.
|
||||
3. Click **Connect** for the source provider you want to link.
|
||||
|
||||

|
||||

|
||||
|
||||
4. Review the settings for the **Docker Hub Builder** OAuth application.
|
||||

|
||||
|
||||

|
||||
|
||||
>**Note**: If you are the owner of any GitHub organizations, you might see
|
||||
options to grant Docker Hub access to them from this screen. You can also
|
||||
individually edit an organization's Third-party access settings to grant or
|
||||
individually edit an organization's third-party access settings to grant or
|
||||
revoke Docker Hub's access. See [Grant access to a GitHub
|
||||
organization](link-source.md#grant-access-to-a-github-organization) to learn more.
|
||||
organization](link-source.md#grant-access-to-a-github-organization) to
|
||||
learn more.
|
||||
|
||||
5. Click **Authorize application** to save the link.
|
||||
5. Click **Authorize docker** to save the link.
|
||||
|
||||
## Link to a Bitbucket user account

1. Log in to Docker Hub using your Docker ID.

2. Click **Account Settings** in the top-right dropdown navigation, then open
   the **Linked Accounts** section.

3. Click **Connect** for the source provider you want to link.

    

4. If necessary, log in to Bitbucket.

5. On the page that appears, click **Grant access**.

### Unlink a GitHub user account

To revoke Docker Hub's access to your GitHub account, you must unlink it both
from Docker Hub, *and* from your GitHub account.

1. Click **Account Settings** in the top-right dropdown navigation, then open
   the **Linked Accounts** section.

2. Click the plug icon next to the source provider you want to remove.

    The icon turns gray and has a slash through it when the account is disabled
    but not revoked. You can use this to _temporarily_ disable a linked source
    code provider account.

3. Go to your GitHub account's **Settings** page.

4. Click **Applications** in the left navigation bar.

5. Click the `...` menu to the right of the Docker Hub Builder application and select **Revoke**.

> **Note**: Each repository that is configured as an automated build source
contains a webhook that notifies Docker Hub of changes in the repository.

5. Click the pencil icon next to Docker Hub Builder.

6. Click **Grant access** next to the organization.

    

To revoke Docker Hub's access to an organization's GitHub repositories:

1. From your GitHub Account settings, locate the **Organization settings** section at the lower left.

2. Click the organization you want to revoke Docker Hub's access to.

3. From the Organization Profile menu, click **Third-party access**.
   The page displays a list of third-party applications and their access status.

4. Click the pencil icon next to Docker Hub Builder.

5. On the next page, click **Deny access**.

To permanently revoke Docker Hub's access to your Bitbucket account, you must
unlink it both from Docker Hub, *and* from your Bitbucket account.

1. Log in to Docker Hub using your Docker ID.

2. Click **Account Settings** in the top-right dropdown navigation, then open
   the **Linked Accounts** section.

3. Click the plug icon next to the source provider you want to remove.

    The icon turns gray and has a slash through it when the account is disabled,
    however access may not have been revoked. You can use this to _temporarily_
    disable a linked source code provider account.

4. Go to your Bitbucket account and click the user menu icon in the top-right corner.

|
|||
|
After Width: | Height: | Size: 256 KiB |
|
Before Width: | Height: | Size: 160 KiB After Width: | Height: | Size: 82 KiB |
|
After Width: | Height: | Size: 131 KiB |
|
Before Width: | Height: | Size: 110 KiB After Width: | Height: | Size: 120 KiB |
|
After Width: | Height: | Size: 182 KiB |
|
After Width: | Height: | Size: 105 KiB |
|
Before Width: | Height: | Size: 524 KiB After Width: | Height: | Size: 120 KiB |
|
|
> **Note**: A user who has not yet verified their email address only has
> `Read` access to the repository, regardless of the rights their team
> membership has given them.

## Permitted content and support options

* Content that runs on Docker Enterprise may be published on Docker Hub under a
  Verified Publisher profile. This content may also qualify to become a Docker
  Certified Container or Plugin image, and thus become backed by collaborative
  Docker/Publisher support.

* Content that runs on Docker Community may be published in Docker Hub, but
  is not supported by Docker, nor is it eligible to become Certified.

* Content that requires a non-Certified Infrastructure environment may not be
  published.

## Onboarding

The Docker Hub publishing process begins from the landing page: sign in with
your Docker ID and specify a product name and image source from a private or public repository.

After specifying a source, provide the content-manifest items to populate your
product details page. These items include logos, descriptions, and licensing and

To interpret the results of a scanned image:

1. Log on to [Docker Hub](https://hub.docker.com){: target="_blank" class="_"}.

2. Navigate to the repository details page (for example,
   [nodejs](https://hub.docker.com/_/nodejs){: target="_blank" class="_"}).

3. Click **Tags**.

    In this section, you can now view the different architectures separately to
    easily identify the right image for the architecture you need, complete
    with image size and operating system information.

    

4. Click on the digest for a particular architecture. You can now also see the
   actual source of the image: the layer-by-layer details that make up the image.

    

5. Click on any row in the **Image History** list. You’ll see that the image
   contains multiple components, and that some of them have known
   vulnerabilities ranging from minor to critical. To explore further, click
   on the caret to expand and view all of the found vulnerabilities:

    

    Each vulnerability is linked directly to the CVE (Common Vulnerabilities and Exposures) list entry so that you can learn more about the CVE entry and its implications.

#### Classification of issues

#### How is support handled?

All Docker Certified Container images and plugins running on Docker Enterprise come with support provided directly by the publisher, under your existing SLA.
Normally, a customer contacts the publisher for container and application level
issues. Likewise, a customer contacts Docker for Docker Enterprise support. In the
case where a customer calls Docker (or vice versa) about an issue on the

#### What is the difference between Official Images and Docker Certified?

Official Images is a program sponsored by Docker for the curation and packaging of Open Source Software. While upstream vendors are sometimes involved, this is not always the case. Docker Certified content is explicitly provided, maintained, and supported directly by the ISV.

#### How is certification of plugins handled?

The Docker Certification program recognizes the need to apply special scrutiny and
testing to containers that access system-level interfaces like storage volumes
and networking. Docker identifies these special containers as “Plugins”, which
require additional testing by the publisher or Docker.

1. Producers sign and push their images using Docker Content Trust to a private staging area. To do this, run a `docker push` command with Content Trust enabled:

    ```shell
    DOCKER_CONTENT_TRUST=1 docker push <image>
    ```

2. Docker verifies the signatures to guarantee authenticity, integrity, and freshness of the image. All of the individual layers of your image, and the combination thereof, are encompassed as part of this verification check. [Read more detail about Content Trust in Docker's documentation](/engine/security/trust/content_trust/#understand-trust-in-docker).

Here you can learn about the latest changes, new features, bug fixes, and
known issues for each Docker Hub release.

# 2019-11-04

### Enhancements

* The [repositories page](https://docs.docker.com/docker-hub/repos/) and all
  related settings and tabs have been updated and moved from `cloud.docker.com`
  to `hub.docker.com`. You can access the page at its new URL: [https://hub.docker.com/repositories](https://hub.docker.com/repositories).

### Known Issues

* Scan results don't appear for some official images.

# 2019-10-21

### New features

> not be able to recover your account.
{: .important }

### Enhancements

* As a security measure, when two-factor authentication is enabled, the Docker CLI requires a personal access token instead of a password to log in.
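
  For example, a token can be supplied to `docker login` on standard input via
  the `--password-stdin` flag so it never appears in your shell history (the
  Docker ID and token file shown here are placeholders):

  ```shell
  # Log in to Docker Hub with a personal access token instead of a password.
  # <your-docker-id> and the token file path are illustrative placeholders.
  cat ~/my_token.txt | docker login --username <your-docker-id> --password-stdin
  ```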

### Known Issues

* Scan results don't appear for some official images.

Docker Hub repositories allow you to share container images with your team,
customers, or the Docker community at large.

Docker images are pushed to Docker Hub through the [`docker push`](https://docs.docker.com/engine/reference/commandline/push/)
command. A single Docker Hub repository can hold many Docker images (stored as
**tags**).

## Creating repositories

To create a repository, sign into Docker Hub, click on **Repositories** then
**Create Repository**:



When creating a new repository:

* You can choose to put it in your Docker ID
  namespace, or in any [organization](/docker-hub/orgs.md) where you are an
  [_owner_](/orgs/#the-owners-team).

* The repository name needs to be unique in that namespace, can be two
  to 255 characters, and can only contain lowercase letters, numbers, or `-`
  and `_`.

* The description can be up to 100 characters and is used in the search
  result.

* You can link a GitHub or Bitbucket account now, or choose to do it
  later in the repository settings.



After you hit the **Create** button, you can start using `docker push` to push
images to this repository.
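
The naming rule above can be sanity-checked with a small shell function (an illustration of the stated rule, not Docker Hub's actual validator):

```shell
# Succeeds if the candidate matches the rule described above:
# 2 to 255 characters, lowercase letters, digits, '-' and '_' only.
is_valid_repo_name() {
  printf '%s' "$1" | grep -Eq '^[a-z0-9_-]{2,255}$'
}

is_valid_repo_name "my-app_1" && echo "ok"        # allowed
is_valid_repo_name "My-App"   || echo "rejected"  # uppercase not allowed
```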
## Pushing a Docker container image to Docker Hub

To push an image to Docker Hub, you must first name your local image using your
Docker Hub username and the repository name that you created through Docker Hub
on the web.

You can add multiple images to a repository by adding a specific `:<tag>` to
them (for example `docs/base:testing`). If it's not specified, the tag defaults
to `latest`.

Name your local images using one of these methods:

* When you build them, using
  `docker build -t <hub-user>/<repo-name>[:<tag>]`

* By re-tagging an existing local image with
  `docker tag <existing-image> <hub-user>/<repo-name>[:<tag>]`

* By using `docker commit <existing-container> <hub-user>/<repo-name>[:<tag>]`
  to commit changes

Now you can push this repository to the registry designated by its name or tag.
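
For instance, the tag-and-push flow might look like this (the account and image names are hypothetical, for illustration only):

```shell
# Re-tag a local image into your Docker Hub namespace
# ("jdoe" and "my-app" are illustrative names).
docker tag my-app:latest jdoe/my-app:1.0

# Push the tagged image to Docker Hub
docker push jdoe/my-app:1.0
```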

The image is then uploaded and available for use by your teammates and/or
the community.

## Private repositories

Private repositories let you keep container images private, either to your
own account or within an organization or team.

To create a private repository, select **Private** when creating a repository:



You can also make an existing repository private by going to its **Settings** tab:



You get one private repository for free with your Docker Hub user account (not
usable for organizations you're a member of). If you need more private
repositories for your user account, upgrade your Docker Hub plan from your
[Billing Information](https://hub.docker.com/billing/plan) page.

Once the private repository is created, you can `push` and `pull` images to and
from it using Docker.
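
As a sketch (the repository name is hypothetical), working with a private repository is the same as with a public one once you are logged in:

```shell
docker login                      # authenticate with your Docker ID first
docker push jdoe/private-app:1.0  # push to the private repository
docker pull jdoe/private-app:1.0  # pull it back, e.g. on another machine
```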

> **Note**: You need to be signed in and have access to work with a
> private repository.

> **Note**: Private repositories are not currently available to search through
> the top-level search or `docker search`.

You can designate collaborators and manage their access to a private
repository from that repository's **Settings** page. You can also toggle the
repository's status between public and private, if you have an available
repository slot open. Otherwise, you can upgrade your
[Docker Hub](https://hub.docker.com/account/billing-plans/) plan.

## Viewing repository tags

Docker Hub's individual repositories view shows you the available tags and the
size of the associated image. Go to the **Repositories** view and click on a
repository to see its tags.



Image sizes are the cumulative space taken up by the image and all its parent
images. This is also the disk space used by the contents of the `.tar` file
created when you `docker save` an image.
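
You can check that on-disk size yourself by exporting an image you have already pulled (the image name here is only an example):

```shell
# Export an image to a tar archive and inspect its size
# (assumes the centos image is present locally).
docker save centos:latest -o centos.tar
ls -lh centos.tar
```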

To view individual tags, click on the **Tags** tab.



Select a tag's digest to view details.



## Searching for repositories

You can search the [Docker Hub](https://hub.docker.com) registry through its
search interface or by using the command line interface. Searching can find
images by image name, username, or description:

```
$ docker search centos
NAME                      DESCRIPTION                                     STARS   OFFICIAL   AUTOMATED
centos                    The official build of CentOS.                   1034    [OK]
ansible/centos7-ansible   Ansible on Centos7                              43                 [OK]
tutum/centos              Centos image with SSH access. For the root...   13                 [OK]
...
```

There you can see two example results: `centos` and `ansible/centos7-ansible`.
The second result shows that it comes from the public repository of a user,
named `ansible/`, while the first result, `centos`, doesn't explicitly list a
repository, which means that it comes from the top-level namespace for [official
images](/docker-hub/official_images.md). The `/` character separates a user's
repository from the image name.

Once you've found the image you want, you can download it with `docker pull <imagename>`:

```
$ docker pull centos
latest: Pulling from centos
6941bfcbbfca: Pull complete
41459f052977: Pull complete
fd44297e2ddb: Already exists
centos:latest: The image you are pulling has been verified. Important: image verification is a tech preview feature and should not be relied on to provide security.
Digest: sha256:d601d3b928eb2954653c59e65862aabb31edefa868bd5148a41fa45004c12288
Status: Downloaded newer image for centos:latest
```

You now have an image from which you can run containers.

### Where to go next

- [Back up the Docker Trusted Registry](./back-up-dtr/)

When you install Docker Desktop Enterprise, a configuration file with default values is installed at the following location. Do not change the location of the `admin-settings.json` file.

`%ProgramData%\DockerDesktop\admin-settings.json`

which defaults to:

For Docker Enterprise Engine release notes, see [Docker Engine release notes](/engine/release-notes).

## Version 2.1.0.8
2019-11-14

Docker Desktop Enterprise 2.1.0.8 contains a Kubernetes upgrade. Note that your local Kubernetes cluster in Version Pack 3.0 will be reset after installing this version.

### Upgrades

- [Docker 19.03.5](https://docs.docker.com/engine/release-notes/#19035/) in Version Pack Enterprise 3.0
- [Kubernetes 1.14.8](https://github.com/kubernetes/kubernetes/releases/tag/v1.14.8) in Version Pack Enterprise 3.0
- [Docker 18.09.11](https://docs.docker.com/engine/release-notes/#180911) in Version Pack Enterprise 2.1
- [Docker 17.06.2-ee-25](https://docs.docker.com/engine/release-notes/#17062-ee-25) in Version Pack Enterprise 2.0
- [Go 1.12.13](https://golang.org/doc/devel/release.html#go1.12)

## Version 2.1.0.7
2019-10-18

- **Device management**: The Docker Desktop Enterprise installer is available as standard MSI (Win) and PKG (Mac) downloads, which allows administrators to script an installation across many developer machines.

- **Administrative control**: IT organizations can specify and lock configuration parameters for creation of a standardized development environment, including disabling drive sharing and limiting version pack installations. Developers run commands in the command line without worrying about configuration settings.

* [OpenStack Swift](/registry/storage-drivers/swift/)
* [Google Cloud Storage](/registry/storage-drivers/gcs/)

> **Note**: Some of the previous links are meant to be informative and are not representative of DTR's implementation of these storage systems.

To configure the storage backend, log in to the DTR web interface
as an admin, and navigate to **System > Storage**.

By default, DTR creates a volume named `dtr-registry-<replica-id>` to store
your images using the local filesystem. You can customize the name and path of
the volume by using `docker/dtr install --dtr-storage-volume` or `docker/dtr reconfigure --dtr-storage-volume`.
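
A minimal sketch of that customization, with a hypothetical volume name and placeholder UCP connection details (not a complete command for every deployment):

```shell
# Create a dedicated volume on the DTR node (name is illustrative)
docker volume create dtr-registry-local

# Point DTR at it during installation; other flags, such as the UCP
# URL and credentials, depend on your environment.
docker run -it --rm docker/dtr install \
  --dtr-storage-volume dtr-registry-local \
  --ucp-url <ucp-url> \
  --ucp-username <admin-user>
```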

> When running DTR 2.5 (with experimental online garbage collection) and 2.6.0 to 2.6.3, there is an issue with [reconfiguring DTR with `--nfs-storage-url`](/ee/dtr/release-notes#version-26) which leads to erased tags. Make sure to [back up your DTR metadata](/ee/dtr/admin/disaster-recovery/create-a-backup/#back-up-dtr-metadata) before you proceed. To work around the `--nfs-storage-url` flag issue, manually create a storage volume on each DTR node. If DTR is already installed in your cluster, [reconfigure DTR](https://success.docker.com/article/dtr-26-lost-tags-after-reconfiguring-storage#reconfigureusingalocalnfsvolume) with the `--dtr-storage-volume` flag using your newly-created volume.
{: .warning}

If you're deploying DTR with high-availability, you need to use NFS or any other

### Amazon S3

DTR supports Amazon S3 or other storage systems that are S3-compatible like Minio.
[Learn how to configure DTR with Amazon S3](s3.md).

- [Use NFS](nfs.md)
- [Use S3](s3.md)
- CLI reference pages
  - [docker/dtr install](/reference/dtr/{{ site.dtr_version }}/cli/install/)
  - [docker/dtr reconfigure](/reference/dtr/{{ site.dtr_version }}/cli/reconfigure/)
  - [docker/dtr restore](/reference/dtr/{{ site.dtr_version }}/cli/restore/)

You can configure DTR to store Docker images in an NFS directory. Starting in DTR 2.6,
changing storage backends involves initializing a new metadatastore instead of reusing an existing volume.
This helps facilitate [online garbage collection](/ee/dtr/admin/configure/garbage-collection/#under-the-hood).
See [changes to NFS reconfiguration below](/ee/dtr/admin/configure/external-storage/nfs/#reconfigure-dtr-to-use-nfs) if you have previously configured DTR to use NFS.

### Reconfigure DTR to use NFS

To support **NFS v4**, more NFS options have been added to the CLI. See [New Features for 2.6.0 - CLI](/ee/dtr/release-notes/#260) for updates to [docker/dtr reconfigure](/reference/dtr/2.6/cli/reconfigure/).

> When running DTR 2.5 (with experimental online garbage collection) and 2.6.0 to 2.6.3, there is an issue with [reconfiguring and restoring DTR with `--nfs-storage-url`](/ee/dtr/release-notes#version-26) which leads to erased tags. Make sure to [back up your DTR metadata](/ee/dtr/admin/disaster-recovery/create-a-backup/#back-up-dtr-metadata) before you proceed. To work around the `--nfs-storage-url` flag issue, manually create a storage volume. If DTR is already installed in your cluster, [reconfigure DTR](/reference/dtr/2.6/cli/reconfigure/) with the `--dtr-storage-volume` flag using your newly-created volume.
>
> See [Reconfigure Using a Local NFS Volume](https://success.docker.com/article/dtr-26-lost-tags-after-reconfiguring-storage#reconfigureusingalocalnfsvolume) for Docker's recommended recovery strategy.
{: .warning}

#### DTR 2.6.4

In DTR 2.6.4, a new flag, `--storage-migrated`, [has been added to `docker/dtr reconfigure`](/reference/dtr/2.6/cli/reconfigure/) which lets you indicate the migration status of your storage data during a reconfigure. [Upgrade to 2.6.4](/reference/dtr/2.6/cli/upgrade/) and follow [Best practice for data migration in 2.6.4](/ee/dtr/admin/configure/external-storage/storage-backend-migration/#best-practice-for-data-migration) when switching storage backends. The following shows you how to reconfigure DTR using an NFSv4 volume as a storage backend:
|
||||
|
||||
|
|
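As a concrete sketch (the UCP address, admin account, and NFS export path below are placeholder values, not taken from this guide), such a reconfigure might look like:

```bash
# Placeholder values throughout; --nfs-storage-url points DTR at the NFS
# export, and --storage-migrated (DTR 2.6.4+) indicates the storage data
# has already been copied, so existing tag metadata is preserved.
docker run -it --rm {{ page.dtr_org }}/{{ page.dtr_repo }}:{{ page.dtr_version }} reconfigure \
  --ucp-url https://ucp.example.com \
  --ucp-username admin \
  --nfs-storage-url nfs://nfs.example.com/exports/dtr \
  --storage-migrated
```

The command prompts for any remaining required values, such as the UCP password.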
@@ -82,6 +82,6 @@ docker run -it --rm {{ page.dtr_org }}/{{ page.dtr_repo }}:{{ page.dtr_version }}
- [Restore from a backup](/ee/dtr/admin/disaster-recovery/restore-from-backup/)
- [Configure where images are stored](index.md)
- CLI reference pages
  - [docker/dtr install](/reference/dtr/{{ site.dtr_version }}/cli/install/)
  - [docker/dtr reconfigure](/reference/dtr/{{ site.dtr_version }}/cli/reconfigure/)
  - [docker/dtr restore](/reference/dtr/{{ site.dtr_version }}/cli/restore/)
@@ -133,11 +133,11 @@ DTR supports the following S3 regions:
## Update your S3 settings on the web interface

When running 2.5.x (with experimental garbage collection) or 2.6.0-2.6.4, there is an issue with [changing your S3 settings on the web interface](/ee/dtr/release-notes#version-26) which leads to erased metadata. Make sure to [back up your DTR metadata](/ee/dtr/admin/disaster-recovery/create-a-backup/#back-up-dtr-metadata) before you proceed.
## Restore DTR with S3

To [restore DTR using your previously configured S3 settings](https://success.docker.com/article/dtr-26-lost-tags-after-reconfiguring-storage#restoretocloudstorage), use `docker/dtr restore` with `--dtr-use-default-storage` to keep your metadata.
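As a hedged sketch (the UCP address, admin account, and backup file name are placeholders), such a restore could be invoked as follows, with the backup read from stdin:

```bash
# Placeholder values; --dtr-use-default-storage keeps the S3 settings
# recorded in the backup's metadata instead of requiring a new storage
# configuration.
docker run -i --rm {{ page.dtr_org }}/{{ page.dtr_repo }}:{{ page.dtr_version }} restore \
  --ucp-url https://ucp.example.com \
  --ucp-username admin \
  --dtr-use-default-storage < dtr-metadata-backup.tar
```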
## Where to go next
@@ -145,10 +145,6 @@ To [restore DTR using your previously configured S3 settings](https://success.do
- [Restore from a backup](/ee/dtr/admin/disaster-recovery/restore-from-backup/)
- [Configure where images are stored](index.md)
- CLI reference pages
  - [docker/dtr install](/reference/dtr/{{ site.dtr_version }}/cli/install/)
  - [docker/dtr reconfigure](/reference/dtr/{{ site.dtr_version }}/cli/reconfigure/)
  - [docker/dtr restore](/reference/dtr/{{ site.dtr_version }}/cli/restore/)
@@ -8,7 +8,7 @@ Starting in DTR 2.6, switching storage backends initializes a new metadata store
## DTR 2.6.4 and above

In DTR 2.6.4, a new flag, `--storage-migrated`, [has been added to `docker/dtr reconfigure`](/reference/dtr/{{ site.dtr_version }}/cli/reconfigure/) which lets you indicate the migration status of your storage data during a reconfigure. If you are not worried about losing your existing tags, you can skip the recommended steps below and [perform a reconfigure](/reference/dtr/{{ site.dtr_version }}/cli/reconfigure/).

### Best practice for data migration
@@ -18,11 +18,11 @@ Docker recommends the following steps for your storage backend and metadata migr
   {: .img-fluid .with-border}

2. [Back up your existing metadata](/ee/dtr/admin/disaster-recovery/create-a-backup/#back-up-dtr-metadata). See [docker/dtr backup](/reference/dtr/{{ site.dtr_version }}/cli/backup/) for CLI command description and options.

3. Migrate the contents of your current storage backend to the new one you are switching to. For example, upload your current storage data to your new NFS server.

4. [Restore DTR from your backup](/ee/dtr/admin/disaster-recovery/restore-from-backup/) and specify your new storage backend. See [docker/dtr destroy](/reference/dtr/2.7/cli/destroy/) and [docker/dtr restore](/reference/dtr/{{ site.dtr_version }}/cli/restore/) for CLI command descriptions and options.

5. With DTR restored from your backup and your storage data migrated to your new backend, garbage collect any dangling blobs using the following API request:
@@ -45,7 +45,7 @@ If you have a long maintenance window, you can skip some steps from above and do
2. Migrate the contents of your current storage backend to the new one you are switching to. For example, upload your current storage data to your new NFS server.

3. [Reconfigure DTR](/reference/dtr/{{ site.dtr_version }}/cli/reconfigure) while specifying the `--storage-migrated` flag to preserve your existing tags.

## DTR 2.6.0-2.6.4 and DTR 2.5 (with experimental garbage collection)
@@ -63,5 +63,5 @@ Upgrade to [DTR 2.6.4](#dtr-264-and-above) and follow [best practice for data mi
- [Use NFS](nfs.md)
- [Use S3](s3.md)
- CLI reference pages
  - [docker/dtr install](/reference/dtr/{{ site.dtr_version }}/cli/install/)
  - [docker/dtr reconfigure](/reference/dtr/{{ site.dtr_version }}/cli/reconfigure/)
@@ -22,6 +22,15 @@ to upgrade your installation to the latest release.
# Version 2.7

## 2.7.4

(2019-11-13)

### Bug fixes

* Fixed a bug where UCP pulling image vulnerability summaries from DTR caused excessive CPU load in UCP. (docker/dhe-deploy #10784)

### Security

* Bumped the Golang version for DTR to `1.12.12`. (docker/dhe-deploy #10769)

## 2.7.3

(2019-10-08)
@@ -106,7 +115,7 @@ Refer to [DTR image vulnerabilities](https://success.docker.com/article/dtr-imag
* **Web Interface**

  * Users can now filter events by object type. (docker/dhe-deploy #10231)
  * Docker artifacts such as apps, plugins, images, and multi-arch images are shown as distinct types with granular views into app details including metadata and scan results for an application's constituent images. [Learn more](https://docs.docker.com/app/working-with-app/).
  * Users can now import a client certificate and key to the browser in order to access the web interface without using their credentials.
  * The **Logout** menu item is hidden from the left navigation pane if client certificates are used for DTR authentication instead of user credentials. (docker/dhe-deploy#10147)
@@ -118,7 +127,7 @@ Refer to [DTR image vulnerabilities](https://success.docker.com/article/dtr-imag
* The Docker CLI now includes a `docker registry` management command which lets you interact with Docker Hub and trusted registries.
  * Features supported on both DTR and Hub include listing remote tags and inspecting image manifests.
  * Features supported on DTR alone include removing tags, listing repository events (such as image pushes and pulls), listing asynchronous jobs (such as mirroring pushes and pulls), and reviewing job logs. [Learn more](https://docs.docker.com/engine/reference/commandline/registry/).

* **Client Cert-based Authentication**
@@ -157,6 +166,16 @@ Refer to [DTR image vulnerabilities](https://success.docker.com/article/dtr-imag
# Version 2.6

## 2.6.11

(2019-11-13)

### Bug fixes

* DTR 2.6 will now refuse to accept Docker App pushes, as apps are only available in experimental mode from 2.7 onward. (docker/dhe-deploy #10775)
* Fixed a bug where UCP pulling image vulnerability summaries from DTR caused excessive CPU load in UCP. (docker/dhe-deploy #10784)

### Security

* Bumped the Golang version for DTR to `1.12.12`. (docker/dhe-deploy #10769)

## 2.6.10

(2019-10-08)
@@ -486,6 +505,15 @@ Refer to [DTR image vulnerabilities](https://success.docker.com/article/dtr-imag
>
> Upgrade path from 2.5.x to 2.6: Upgrade directly to 2.6.4.

## 2.5.15

(2019-11-13)

### Bug fixes

* DTR 2.5 will now refuse to accept Docker App pushes, as apps are only available in experimental mode from 2.7 onward. (docker/dhe-deploy #10775)

### Security

* Bumped the Golang version for DTR to `1.12.12`. (docker/dhe-deploy #10769)

## 2.5.14

(2019-09-03)
@@ -9,22 +9,22 @@ keywords: registry, tag pruning, tag limit, repo management
Tag pruning is the process of cleaning up unnecessary or unwanted repository tags. As of v2.6, you can configure the Docker Trusted Registry (DTR) to automatically perform tag pruning on repositories that you manage by:

* specifying a tag pruning policy, or alternatively,
* setting a tag limit

> Tag Pruning
>
> When run, tag pruning only deletes a tag and does not carry out any actual blob deletion. For actual blob deletions, see [Garbage Collection](../../admin/configure/garbage-collection.md).

> Known Issue
>
> While the tag limit field is disabled when you turn on immutability for a new repository, this is currently [not the case with **Repository Settings**](/ee/dtr/release-notes/#known-issues). As a workaround, turn off immutability when setting a tag limit via **Repository Settings > Pruning**.

The following section covers how to specify a tag pruning policy and set a tag limit on repositories that you manage. It does not cover modifying or deleting a tag pruning policy.

## Specify a tag pruning policy

As a repository administrator, you can now add tag pruning policies on each repository that you manage. To get started, navigate to `https://<dtr-url>` and log in with your credentials.

Select **Repositories** on the left navigation pane, and then click on the name of the repository
that you want to update. Note that you will have to click on the repository name following
the `/` after the specific namespace for your repository.
@@ -43,15 +43,15 @@ DTR allows you to set your pruning triggers based on the following image attribu
| Tag name | Whether the tag name equals, starts with, ends with, contains, is one of, or is not one of your specified string values | Tag name = `test` |
| Component name | Whether the image has a given component and the component name equals, starts with, ends with, contains, is one of, or is not one of your specified string values | Component name starts with `b` |
| Vulnerabilities | Whether the image has vulnerabilities – critical, major, minor, or all – and your selected vulnerability filter is greater than or equals, greater than, equals, not equals, less than or equals, or less than your specified number | Critical vulnerabilities = `3` |
| License | Whether the image uses an intellectual property license and is one of or not one of your specified words | License name = `docker` |
| Last updated at | Whether the last image update was before your specified number of hours, days, weeks, or months. For details on valid time units, see [Go's ParseDuration function](https://golang.org/pkg/time/#ParseDuration). | Last updated at: Hours = `12` |

Specify one or more image attributes to add to your pruning criteria, then choose:

- **Prune future tags** to save the policy and apply your selection to future tags. Only matching tags after the policy addition will be pruned during garbage collection.
- **Prune all tags** to save the policy, and evaluate both existing and future tags on your repository.

Upon selection, you will see a confirmation message and will be redirected to your newly updated **Pruning** tab.

{: .with-border}
@@ -69,8 +69,8 @@ In addition to pruning policies, you can also set tag limits on repositories tha
{: .with-border}

To set a tag limit, do the following:

1. Select the repository that you want to update and click the **Settings** tab.
2. Turn off immutability for the repository.
3. Specify a number in the **Pruning** section and click **Save**. The **Pruning** tab will now display your tag limit above the prune triggers list along with a link to modify this setting.

ee/index.md
@@ -136,8 +136,8 @@ Windows applications typically require Active Directory authentication in order
## Docker Enterprise and the CLI

Docker Enterprise exposes the standard Docker API, so you can continue using the tools
that you already know, [including the Docker CLI client](./ucp/user-access/cli/),
to deploy and manage your applications.

For example, you can use the `docker info` command to check the
status of a Swarm managed by Docker Enterprise:
@@ -166,8 +166,8 @@ Managers: 1
## Use the Kubernetes CLI

Docker Enterprise exposes the standard Kubernetes API, so you can use [kubectl
to manage your Kubernetes workloads](./ucp/user-access/cli/):

```bash
kubectl cluster-info
...
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
```
## Docker Context

A new Docker CLI plugin called `docker context` is available with client version 19.03.0. `docker context` helps manage connections to multiple environments so you do not have to remember and type out connection strings. [Read more](../engine/reference/commandline/context/) about `docker context`.
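For instance (the context name, user, and host below are made-up examples), you might point a context at a remote engine over SSH and switch to it:

```bash
# Create a context for a remote engine reachable over SSH, make it the
# default for subsequent docker commands, then list known contexts.
docker context create my-ucp --docker "host=ssh://admin@ucp.example.com"
docker context use my-ucp
docker context ls
```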
## Where to go next
@@ -4,41 +4,42 @@ description: Learn how to use SAML to link a UCP team with an Identity Provider
keywords: cluster, node, join
---

## SAML integration

Security Assertion Markup Language (SAML) is an open standard for exchanging authentication and authorization data between parties. The SAML integration process is described below.

1. Configure the Identity Provider (IdP).
2. Enable SAML and configure UCP as the Service Provider under **Admin Settings > Authentication and Authorization**.
3. Create (Edit) Teams to link with the Group memberships. This updates team membership information when a user signs in with SAML.

### Configure IdP

Service Provider metadata is available at `https://<SP Host>/enzi/v0/saml/metadata`
after SAML is enabled. The metadata link is also labeled as `entityID`.

> Note
>
> Only `POST` binding is supported for the 'Assertion Consumer Service', which is located
> at `https://<SP Host>/enzi/v0/saml/acs`.

### Enable SAML and configure UCP

After UCP sends an `AuthnRequest` to the IdP, the following `Assertion` is expected:
- `Subject` includes a `NameID` that is identified as the username for UCP. In `AuthnRequest`, `NameIDFormat` is set to `urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified`. This allows maximum compatibility for various Identity Providers.

```xml
<saml2:Subject>
    <saml2:NameID Format="urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified">mobywhale</saml2:NameID>
    <saml2:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer">
        <saml2:SubjectConfirmationData NotOnOrAfter="2018-09-10T20:04:48.001Z" Recipient="https://18.237.224.122/enzi/v0/saml/acs"/>
    </saml2:SubjectConfirmation>
</saml2:Subject>
```

- An optional `Attribute` named `fullname` is mapped to the **Full Name** field in the UCP account.

> Note
>
> UCP uses the value of the first occurrence of an `Attribute` with `Name="fullname"` as the **Full Name**.

```xml
<saml2:Attribute Name="fullname" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified">
    <saml2:AttributeValue>
        <!-- ... -->
    </saml2:AttributeValue>
</saml2:Attribute>
```
- An optional `Attribute` named `member-of` is linked to the UCP team. The values are set in the UCP interface.

> Note
>
> UCP uses all `AttributeStatements` and `Attributes` in the `Assertion` with `Name="member-of"`.

```xml
<saml2:Attribute Name="member-of" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified">
    <saml2:AttributeValue>
        <!-- ... -->
    </saml2:AttributeValue>
</saml2:Attribute>
```

- An optional `Attribute` named `is-admin` is used to identify whether the user is an administrator.

> Note
>
> When there is an `Attribute` with the name `is-admin`, the user is an administrator. The content in the `AttributeValue` is ignored.

```xml
<saml2:Attribute Name="is-admin" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified">
    <saml2:AttributeValue>
        <!-- ignored -->
    </saml2:AttributeValue>
</saml2:Attribute>
```
#### Okta configuration

The Okta configuration is shown in the following examples.

When two or more group names are expected to return with the Assertion, use the `regex` filter. For example, use the value `apple|orange` to return groups `apple` and `orange`.
### Service Provider configuration

Enter the Identity Provider's metadata URL to obtain its metadata. To access the URL, you may need to provide the CA certificate that can verify the remote server.

### Link Group memberships with users

Use the 'edit' or 'create' team dialog to associate SAML group assertion with the UCP team to synchronize user team membership when the user logs in.
@@ -226,7 +226,10 @@ components. Assigning these values overrides the settings in a container's
| `local_volume_collection_mapping` | no | Store data about collections for volumes in UCP's local KV store instead of on the volume labels. This is used for enforcing access control on volumes. |
| `manager_kube_reserved_resources` | no | Reserve resources for Docker UCP and Kubernetes components which are running on manager nodes. |
| `worker_kube_reserved_resources` | no | Reserve resources for Docker UCP and Kubernetes components which are running on worker nodes. |
| `kubelet_max_pods` | yes | Set the number of pods that can run on a node. Default is `110`. |
| `secure-overlay` | no | Set to `true` to enable IPSec network encryption in Kubernetes. Default is `false`. |
| `image_scan_aggregation_enabled` | no | Set to `true` to enable image scan result aggregation. This feature displays image vulnerabilities in shared resource/containers and shared resources/images pages. Default is `false`. |
| `swarm_polling_disabled` | no | Set to `true` to turn off auto-refresh (which defaults to 15 seconds) and only call the Swarm API once. Default is `false`. |

> Note
>
@@ -0,0 +1,173 @@
---
title: Custom Azure Roles
description: Learn how to create custom RBAC roles to run Docker Enterprise on Azure.
keywords: Universal Control Plane, UCP, install, Docker Enterprise, Azure, Swarm
---

## Overview

This document describes how to create Azure custom roles to deploy Docker Enterprise resources.
## Deploy a Docker Enterprise Cluster into a single resource group

A [resource group](https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-overview#resource-groups) is a container that holds resources for an Azure solution. These resources are the virtual machines (VMs), networks, and storage accounts associated with the swarm.

To create a custom, all-in-one role with permissions to deploy a Docker Enterprise cluster into a single resource group:

1. Create the role permissions JSON file, for example `all-in-one-role.json`.
   ```json
   {
     "Name": "Docker Platform All-in-One",
     "IsCustom": true,
     "Description": "Can install and manage Docker platform.",
     "Actions": [
       "Microsoft.Authorization/*/read",
       "Microsoft.Authorization/roleAssignments/write",
       "Microsoft.Compute/availabilitySets/read",
       "Microsoft.Compute/availabilitySets/write",
       "Microsoft.Compute/disks/read",
       "Microsoft.Compute/disks/write",
       "Microsoft.Compute/virtualMachines/extensions/read",
       "Microsoft.Compute/virtualMachines/extensions/write",
       "Microsoft.Compute/virtualMachines/read",
       "Microsoft.Compute/virtualMachines/write",
       "Microsoft.Network/loadBalancers/read",
       "Microsoft.Network/loadBalancers/write",
       "Microsoft.Network/loadBalancers/backendAddressPools/join/action",
       "Microsoft.Network/networkInterfaces/read",
       "Microsoft.Network/networkInterfaces/write",
       "Microsoft.Network/networkInterfaces/join/action",
       "Microsoft.Network/networkSecurityGroups/read",
       "Microsoft.Network/networkSecurityGroups/write",
       "Microsoft.Network/networkSecurityGroups/join/action",
       "Microsoft.Network/networkSecurityGroups/securityRules/read",
       "Microsoft.Network/networkSecurityGroups/securityRules/write",
       "Microsoft.Network/publicIPAddresses/read",
       "Microsoft.Network/publicIPAddresses/write",
       "Microsoft.Network/publicIPAddresses/join/action",
       "Microsoft.Network/virtualNetworks/read",
       "Microsoft.Network/virtualNetworks/write",
       "Microsoft.Network/virtualNetworks/subnets/read",
       "Microsoft.Network/virtualNetworks/subnets/write",
       "Microsoft.Network/virtualNetworks/subnets/join/action",
       "Microsoft.Resources/subscriptions/resourcegroups/read",
       "Microsoft.Resources/subscriptions/resourcegroups/write",
       "Microsoft.Security/advancedThreatProtectionSettings/read",
       "Microsoft.Security/advancedThreatProtectionSettings/write",
       "Microsoft.Storage/*/read",
       "Microsoft.Storage/storageAccounts/listKeys/action",
       "Microsoft.Storage/storageAccounts/write"
     ],
     "NotActions": [],
     "AssignableScopes": [
       "/subscriptions/6096d756-3192-4c1f-ac62-35f1c823085d"
     ]
   }
   ```
2. Create the Azure RBAC role.

   ```bash
   az role definition create --role-definition all-in-one-role.json
   ```
## Deploy Docker Enterprise compute resources

Compute resources act as servers for running containers.

To create a custom role to deploy Docker Enterprise compute resources only:

1. Create the role permissions JSON file, for example `platform-role.json`.
   ```json
   {
     "Name": "Docker Platform",
     "IsCustom": true,
     "Description": "Can install and run Docker platform.",
     "Actions": [
       "Microsoft.Authorization/*/read",
       "Microsoft.Authorization/roleAssignments/write",
       "Microsoft.Compute/availabilitySets/read",
       "Microsoft.Compute/availabilitySets/write",
       "Microsoft.Compute/disks/read",
       "Microsoft.Compute/disks/write",
       "Microsoft.Compute/virtualMachines/extensions/read",
       "Microsoft.Compute/virtualMachines/extensions/write",
       "Microsoft.Compute/virtualMachines/read",
       "Microsoft.Compute/virtualMachines/write",
       "Microsoft.Network/loadBalancers/read",
       "Microsoft.Network/loadBalancers/write",
       "Microsoft.Network/networkInterfaces/read",
       "Microsoft.Network/networkInterfaces/write",
       "Microsoft.Network/networkInterfaces/join/action",
       "Microsoft.Network/publicIPAddresses/read",
       "Microsoft.Network/virtualNetworks/read",
       "Microsoft.Network/virtualNetworks/subnets/read",
       "Microsoft.Network/virtualNetworks/subnets/join/action",
       "Microsoft.Resources/subscriptions/resourcegroups/read",
       "Microsoft.Resources/subscriptions/resourcegroups/write",
       "Microsoft.Security/advancedThreatProtectionSettings/read",
       "Microsoft.Security/advancedThreatProtectionSettings/write",
       "Microsoft.Storage/storageAccounts/read",
       "Microsoft.Storage/storageAccounts/listKeys/action",
       "Microsoft.Storage/storageAccounts/write"
     ],
     "NotActions": [],
     "AssignableScopes": [
       "/subscriptions/6096d756-3192-4c1f-ac62-35f1c823085d"
     ]
   }
   ```
2. Create the Docker Platform RBAC role.

   ```bash
   az role definition create --role-definition platform-role.json
   ```
## Deploy Docker Enterprise network resources

Network resources are services inside your cluster. These resources can include virtual networks, security groups, address pools, and gateways.

To create a custom role to deploy Docker Enterprise network resources only:

1. Create the role permissions JSON file, for example `networking-role.json`.
   ```json
   {
     "Name": "Docker Networking",
     "IsCustom": true,
     "Description": "Can install and manage Docker platform networking.",
     "Actions": [
       "Microsoft.Authorization/*/read",
       "Microsoft.Network/loadBalancers/read",
       "Microsoft.Network/loadBalancers/write",
       "Microsoft.Network/loadBalancers/backendAddressPools/join/action",
       "Microsoft.Network/networkInterfaces/read",
       "Microsoft.Network/networkInterfaces/write",
       "Microsoft.Network/networkInterfaces/join/action",
       "Microsoft.Network/networkSecurityGroups/read",
       "Microsoft.Network/networkSecurityGroups/write",
       "Microsoft.Network/networkSecurityGroups/join/action",
       "Microsoft.Network/networkSecurityGroups/securityRules/read",
       "Microsoft.Network/networkSecurityGroups/securityRules/write",
       "Microsoft.Network/publicIPAddresses/read",
       "Microsoft.Network/publicIPAddresses/write",
       "Microsoft.Network/publicIPAddresses/join/action",
       "Microsoft.Network/virtualNetworks/read",
       "Microsoft.Network/virtualNetworks/write",
       "Microsoft.Network/virtualNetworks/subnets/read",
       "Microsoft.Network/virtualNetworks/subnets/write",
       "Microsoft.Network/virtualNetworks/subnets/join/action",
       "Microsoft.Resources/subscriptions/resourcegroups/read",
       "Microsoft.Resources/subscriptions/resourcegroups/write"
     ],
     "NotActions": [],
     "AssignableScopes": [
       "/subscriptions/6096d756-3192-4c1f-ac62-35f1c823085d"
     ]
   }
   ```
2. Create the Docker Networking RBAC role.

   ```bash
   az role definition create --role-definition networking-role.json
   ```

## Where to go next

* [Azure Container Instances documentation](https://docs.microsoft.com/en-us/azure/container-instances/)
* [docker/ucp overview](https://docs.docker.com/reference/ucp/3.2/cli/)
* [Universal Control Plane overview](https://docs.docker.com/ee/ucp/)

---
title: Plan your installation
description: Learn about the Docker Universal Control Plane architecture, and the requirements to install it in production.
keywords: UCP, install, Docker EE
---
deploying Docker Universal Control Plane for production.

## System requirements

Before installing UCP, make sure that all nodes (physical or virtual
machines) that you'll manage with UCP:

* [Comply with the system requirements](system-requirements.md), and

## Static IP addresses

Docker UCP requires each node on the cluster to have a static IPv4 address.
Before installing UCP, ensure your network and nodes are configured to support
this.

## Avoid IP range conflicts

The following table indicates which subnet configurations can safely overlap **between** clusters and which can overlap **within** a cluster.

| Subnet                         | Can overlap between clusters | Can overlap within a cluster |
|--------------------------------|------------------------------|------------------------------|
| `default-address-pools`        | Yes                          | No                           |
| `fixed-cidr`                   | Yes                          | No                           |
| `bip`                          | Yes                          | No                           |
| `default-addr-pool`            | Yes                          | No                           |
| `pod-cidr`[^1]                 | Yes                          | No                           |
| `service-cluster-ip-range`[^1] | Yes                          | No                           |

The following list provides more information about the subnets described in the table.

* **`default-address-pools`:** This subnet is only accessible on the local node. It can be the same on all nodes in a cluster, and can be the same between clusters, even on the same infrastructure subnet, but it should **not** overlap with other subnets within a cluster.
* **`fixed-cidr` and `bip`:** `docker0` is a subset of `default-address-pools` and, for the purposes of avoiding subnet overlaps, is potentially redundant to `default-address-pools`. This is not a required configuration for subnet overlap avoidance. These subnets can be the same on all nodes in a cluster.
* **`default-addr-pool`:** This subnet is encapsulated within the Swarm VXLAN overlay and is only accessible within the cluster. It can be the same between clusters, even on the same infrastructure subnet, but it should **not** overlap with other subnets within a cluster.
* **`pod-cidr`:** This subnet is encapsulated in IP-IP (or VXLAN with the forthcoming Windows CNI) and is only accessible from within the cluster. It can be the same between clusters, even on the same infrastructure subnet, but it should **not** overlap with other subnets within a cluster.
* **`service-cluster-ip-range`:** This subnet is also encapsulated in IP-IP or VXLAN and is only accessible from within the cluster. It can be the same between clusters, even on the same infrastructure subnet, but it should **not** overlap with other subnets within a cluster.

[^1]: Azure without the Windows VXLAN CNI uses infrastructure routes for pod-to-pod traffic, so whether these subnets can overlap between clusters depends on the routing and security policies between the clusters.

The following table lists the subnets used by each component and their default ranges.

| Component  | Subnet                     | Description                                                        | Default range  |
|------------|----------------------------|--------------------------------------------------------------------|----------------|
| Engine     | `fixed-cidr`               | CIDR range for the `docker0` interface and local containers        | 172.17.0.0/16  |
| Engine     | `default-address-pools`    | CIDR range for the `docker_gwbridge` interface and bridge networks | 172.18.0.0/16  |
| Swarm      | `default-addr-pool`        | CIDR range for Swarm overlay networks                              | 10.0.0.0/8     |
| Kubernetes | `pod-cidr`                 | CIDR range for Kubernetes pods                                     | 192.168.0.0/16 |
| Kubernetes | `service-cluster-ip-range` | CIDR range for Kubernetes services                                 | 10.96.0.0/16   |

### Engine

The `docker_gwbridge` is a virtual bridge that connects the overlay networks (including the `ingress` network) to an individual Docker engine's physical network. Docker creates it automatically when you initialize a swarm or join a Docker host to a swarm, but it is not a Docker device. It exists in the kernel of the Docker host. The default subnet for `docker_gwbridge` is `172.18.0.0/16`.

> Note
>
> If you need to customize the `docker_gwbridge` settings, you must do so before joining the host to the swarm, or after temporarily removing the host from the swarm.

The recommended way to configure the `docker_gwbridge` settings is to use the `daemon.json` file. You can specify one or more of the following settings to configure the interface:
Swarm uses a default address pool of `10.0.0.0/8` for its overlay networks. If this conflicts with your current network implementation, use a custom IP address pool. To specify a custom IP address pool, use the `--default-addr-pool` command line option during [Swarm initialization](../../../../engine/swarm/swarm-mode.md).

> Note
>
> The Swarm `default-addr-pool` setting is separate from the Docker engine `default-address-pools` setting. They are two separate ranges that are used for different purposes.

> Note
>
> Currently, the UCP installation process does not support this flag. To deploy with a custom IP pool, Swarm must first be initialized using this flag and UCP must be installed on top of it.

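As an illustration, a minimal sketch of initializing Swarm with a custom address pool before installing UCP; the `10.20.0.0/16` range and the `/26` per-network subnet size are assumptions, so pick values that are free in your environment:

```bash
# Initialize the swarm with a custom overlay address pool and
# a /26 subnet size for each overlay network carved from it
docker swarm init \
  --default-addr-pool 10.20.0.0/16 \
  --default-addr-pool-mask-length 26
```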
### Kubernetes

There are two internal IP ranges used within Kubernetes that may overlap and
conflict with the underlying infrastructure:

* The Pod Network - Each Pod in Kubernetes is given an IP address from either
  the Calico or Azure IPAM services. In a default installation, Pods are given
  IP addresses on the `192.168.0.0/16` range. This can be customized at install time by passing the `--pod-cidr` flag to the
  [UCP install command](/reference/ucp/{{ site.ucp_version }}/cli/install/).
* The Services Network - When a user exposes a Service in Kubernetes, it is
  accessible via a VIP that comes from a Cluster IP Range. By default on UCP,
  this range is `10.96.0.0/16`. Beginning with UCP 3.1.8, this value can be
  changed at install time with the `--service-cluster-ip-range` flag.

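Both ranges can be overridden at install time. This hedged sketch shows the shape of such an install; the example ranges and the UCP image tag are assumptions, not recommendations:

```bash
docker container run --rm -it --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp:3.2.4 install \
  --pod-cidr 10.32.0.0/16 \
  --service-cluster-ip-range 10.200.0.0/16 \
  --interactive
```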
## Avoid firewall conflicts

For SUSE Linux Enterprise Server 12 SP2 (SLES12), the `FW_LO_NOTRACK` flag is turned on by default.

To turn off the FW_LO_NOTRACK option, edit the `/etc/sysconfig/SuSEfirewall2` file and set `FW_LO_NOTRACK="no"`. Save the file and restart the firewall or reboot.

For SUSE Linux Enterprise Server 12 SP3, the default value for `FW_LO_NOTRACK` was changed to `no`.

For Red Hat Enterprise Linux (RHEL) 8, if firewalld is running and `FirewallBackend=nftables` is set in `/etc/firewalld/firewalld.conf`, change this to `FirewallBackend=iptables`, or you can explicitly run the following commands to allow traffic to enter the default bridge (docker0) network:

```bash
firewall-cmd --permanent --zone=trusted --add-interface=docker0
firewall-cmd --reload
```

## Time synchronization

In distributed systems like Docker UCP, time synchronization is critical
to ensure proper operation. As a best practice to ensure consistency between
the engines in a UCP cluster, all engines should regularly synchronize time
with a Network Time Protocol (NTP) server. If a host node's clock is skewed,
the resulting unexpected behavior can cause poor performance or even failures.

## Load balancing strategy

DTR, your load balancer needs to distinguish traffic between the two by IP
address or port number.

* If you want to configure your load balancer to listen on port 443:
  * Use one load balancer for UCP and another for DTR.
  * Use the same load balancer with multiple virtual IPs.
  * Configure your load balancer to expose UCP or DTR on a port other than 443.

If you want to install UCP in a high-availability configuration that uses
a load balancer in front of your UCP controllers, include the appropriate IP
address and FQDN of the load balancer's VIP by using
one or more `--san` flags in the
[UCP install command](/reference/ucp/{{ site.ucp_version }}/cli/install/)
or when you're asked for additional SANs in interactive mode.
[Learn about high availability](../configure/set-up-high-availability.md).

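For instance, a hedged sketch of passing SANs at install time; the hostname, IP address, and UCP image tag below are placeholders:

```bash
docker container run --rm -it --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp:3.2.4 install \
  --san lb.example.com \
  --san 203.0.113.10 \
  --interactive
```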

## Where to go next

- [Plan your installation](plan-installation.md)
- [UCP architecture](../../ucp-architecture.md)

This runs the uninstall command in interactive mode, so that you are prompted
for any necessary configuration values.

> **Important**: If the `uninstall-ucp` command fails, you can run the following commands to manually uninstall UCP:

```bash
# Run the following command on one manager node to remove remaining UCP services
docker service rm $(docker service ls -f name=ucp- -q)
# Run the following command on each manager node to remove remaining UCP containers
docker container rm -f $(docker container ps -a -f name=ucp- -f name=k8s_ -q)
# Run the following command on each manager node to remove remaining UCP volumes
docker volume rm $(docker volume ls -f name=ucp -q)
```

The UCP configuration is kept in case you want to reinstall UCP with the same

Complete the following checks:

#### Operating system
- If the cluster nodes' OS is an older release (Ubuntu 14.x, RHEL 7.3, etc.), consider patching all relevant packages to the most recent versions (including the kernel).
- Perform a rolling restart of each node before the upgrade (to confirm in-memory settings are the same as startup scripts).
- Run `check-config.sh` on each cluster node (after the rolling restart) to check for any kernel compatibility issues. The latest version of the script can be found here: https://github.com/moby/moby/blob/master/contrib/check-config.sh

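One quick way to fetch and run the script, assuming the raw-file URL mirrors the GitHub path above:

```bash
curl -fsSL https://raw.githubusercontent.com/moby/moby/master/contrib/check-config.sh -o check-config.sh
bash check-config.sh
```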
#### Procedural
- Perform Swarm, UCP, and DTR backups before upgrading

- Upgrade failures
  - For worker nodes, an upgrade failure can be rolled back by changing the node label back
    to the previous target version. Rollback of manager nodes is not supported.
- [Kubernetes errors in node state messages after upgrading UCP](https://success.docker.com/article/how-to-resolve-kubernetes-errors-after-upgrading-ucp)
- The following information applies if you have upgraded to UCP 3.0.0 or newer:
  - After performing a UCP upgrade from 2.2.x to 3.x.x, you might see unhealthy nodes in your UCP
    dashboard with any of the following errors listed:

1. Source a [UCP Client Bundle](/ee/ucp/user-access/cli/) attached to a cluster with Cluster Ingress installed.
2. Download the sample Kubernetes manifest file.

   ```bash
   $ wget https://raw.githubusercontent.com/docker/docker.github.io/master/ee/ucp/kubernetes/cluster-ingress/yaml/ingress-weighted.yaml
   ```

3. Deploy the Kubernetes manifest file.


## Where to go next

- [Deploy the Sample Application with Sticky Sessions](./sticky/)

1. Source a [UCP Client Bundle](/ee/ucp/user-access/cli/) attached to a cluster with Cluster Ingress installed.
2. Download the sample Kubernetes manifest file.

   ```bash
   $ wget https://raw.githubusercontent.com/docker/docker.github.io/master/ee/ucp/kubernetes/cluster-ingress/yaml/ingress-sticky.yaml
   ```

3. Deploy the Kubernetes manifest file with the new DestinationRule. This file includes the `consistentHash` loadBalancer policy.

## Use the CLI to deploy Kubernetes objects

With Docker EE, you deploy your Kubernetes objects on the command line by using
`kubectl`. [Install and set up kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/).

Use a client bundle to configure your client tools, like Docker CLI and `kubectl`,
to communicate with UCP instead of the local deployments you might have running.

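As a sketch, once a client bundle has been downloaded and extracted (the directory name below is a placeholder), sourcing its `env.sh` points both the Docker CLI and `kubectl` at UCP:

```bash
cd ucp-bundle-admin     # placeholder path to the extracted client bundle
eval "$(<env.sh)"       # exports DOCKER_HOST, TLS certs, and kubeconfig settings
kubectl get nodes       # now talks to the UCP-managed cluster
```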
Docker Enterprise Edition provides data-plane level IPSec network encryption to secure application
traffic in a Kubernetes cluster. This protects application traffic within a cluster when running in untrusted
infrastructure or environments. It is an optional feature of UCP that is enabled by deploying the SecureOverlay
components on Kubernetes when using the default Calico driver for networking configured for IPIP tunneling
(the default configuration).

Kubernetes network encryption is enabled by two components in UCP: the SecureOverlay Agent and SecureOverlay

## Requirements

Kubernetes network encryption is supported for the following platforms:

* Docker Enterprise 2.1+ (UCP 3.1+)
* Kubernetes 1.11+
* On-premises, AWS, GCE

## Configuring MTUs

Before deploying the SecureOverlay components, ensure that Calico is configured so that the IPIP tunnel
maximum transmission unit (MTU), the largest packet length the interface allows, leaves sufficient headroom for the encryption overhead. Encryption adds 26 bytes of overhead, but every IPSec
packet size must be a multiple of 4 bytes. IPIP tunnels require 20 bytes of encapsulation overhead. The IPIP
tunnel interface MTU must be no more than "EXTMTU - 46 - ((EXTMTU - 46) modulo 4)", where EXTMTU is the minimum MTU
of the external interfaces. An IPIP MTU of 1452 should generally be safe for most deployments.
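The formula can be checked numerically. This small sketch computes the largest safe IPIP MTU for a given external MTU:

```bash
# 46 = 20 bytes of IPIP encapsulation + 26 bytes of IPSec overhead; then
# round down so the resulting IPSec packet size is a multiple of 4 bytes
ext_mtu=1500                       # minimum MTU of the external interfaces
usable=$((ext_mtu - 46))
ipip_mtu=$((usable - usable % 4))
echo "$ipip_mtu"                   # 1452 for a standard 1500-byte Ethernet MTU
```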

Changing UCP's MTU requires updating the UCP configuration. This process is described [here](/ee/ucp/admin/configure/ucp-configuration-file).

Update the following values to the new MTU:

    [cluster_config]
    ...

## Configuring SecureOverlay

SecureOverlay allows you to enable IPSec network encryption in Kubernetes. Once the cluster nodes' MTUs are properly configured, deploy the SecureOverlay components to UCP using the SecureOverlay YAML file.

Beginning with UCP 3.2.4, you can configure SecureOverlay in two ways:

* Using the UCP configuration file
* Using the SecureOverlay YAML file

### UCP configuration file

Add `secure-overlay` to the UCP configuration file. Set this option to `true` to enable IPSec network encryption. The default is `false`. See [cluster_config options](https://docs.docker.com/ee/ucp/admin/configure/ucp-configuration-file/#cluster_config-table-required) for more information.

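As a sketch, the relevant excerpt of the UCP configuration file (TOML) would look like the following; the surrounding keys are elided and assumed unchanged:

```toml
[cluster_config]
  # Enables IPSec network encryption for Kubernetes workloads
  secure-overlay = true
```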
### SecureOverlay YAML file

First, [download the SecureOverlay YAML file](ucp-secureoverlay.yml).

Next, issue the following command from any machine with a properly configured kubectl environment and the proper UCP bundle's credentials:

```bash
$ kubectl apply -f ucp-secureoverlay.yml
```

Run this command at cluster installation time before starting any workloads.

To remove the encryption from the system, issue the following command:

```bash
$ kubectl delete -f ucp-secureoverlay.yml
```

upgrade your installation to the latest release.

# Version 3.2

## 3.2.4
2019-11-14

### Known issues
* UCP currently turns on vulnerability information for images deployed within UCP by default for upgrades. This may cause clusters to fail due to performance issues. (ENGORC-2746)
* For Red Hat Enterprise Linux (RHEL) 8, if firewalld is running and `FirewallBackend=nftables` is set in `/etc/firewalld/firewalld.conf`, change this to `FirewallBackend=iptables`, or you can explicitly run the following commands to allow traffic to enter the default bridge (docker0) network:

  ```bash
  firewall-cmd --permanent --zone=trusted --add-interface=docker0
  firewall-cmd --reload
  ```

### Platforms
* RHEL 8.0 is now supported.

### Kubernetes
* Kubernetes has been upgraded to version 1.14.8, which fixes CVE-2019-11253.
* Added a feature that allows the user to enable SecureOverlay as an add-on on UCP via an install flag called `secure-overlay`. This flag enables IPSec network encryption in Kubernetes.

### Security
* Upgraded Golang to 1.12.12. (ENGORC-2762)
* Fixed an issue that allowed a user with a `Restricted Control` role to obtain Admin access to UCP. (ENGORC-2781)

### Bug fixes
* Fixed an issue where a UCP 3.2 backup performs an append instead of an overwrite when the `--file` switch is used. (FIELD-2043)
* Fixed an issue where the Calico/latest image was missing from the UCP offline bundle. (FIELD-1584)
* Image scan result aggregation is now disabled by default for new UCP installations. This feature can be configured by a new `ImageScanAggregationEnabled` setting in the UCP tuning config. (ENGORC-2746)
* Added authorization checks for the volumes referenced by the `VolumesFrom` Containers option. Previously, this field was ignored by the container create request parser, leading to a gap in permissions checks. (ENGORC-2781)

### Components

| Component             | Version |
| --------------------- | ------- |
| UCP                   | 3.2.4   |
| Kubernetes            | 1.14.8  |
| Calico                | 3.8.2   |
| Interlock             | 3.0.0   |
| Interlock NGINX proxy | 1.14.2  |

## 3.2.3
2019-10-21

# Version 3.1

## 3.1.12
2019-11-14

### Security
* Upgraded Golang to 1.12.12.

### Kubernetes
* Kubernetes has been upgraded to fix CVE-2019-11253.

### Bug fixes
* Added authorization checks for the volumes referenced by the `VolumesFrom` Containers option. Previously, this field was ignored by the container create request parser, leading to a gap in permissions checks. (ENGORC-2781)

### Components

| Component             | Version |
| --------------------- | ------- |
| UCP                   | 3.1.12  |
| Kubernetes            | 1.14.3  |
| Calico                | 3.5.7   |
| Interlock             | 2.4.0   |
| Interlock NGINX proxy | 1.14.2  |

## 3.1.11
2019-10-08

2019-09-03

### Kubernetes
* Kubernetes has been upgraded to version 1.11.10-docker-1. This version was built with Golang 1.12.9.
* Kubernetes DNS has been upgraded to 1.14.13 and is now deployed with more than one replica by default.

### Networking

The following features are deprecated in UCP 3.1.

# Version 3.0

## 3.0.16
2019-11-14

### Security
* Upgraded Golang to 1.12.12.

### Kubernetes
* Kubernetes has been upgraded to fix CVE-2019-11253.

### Bug fixes
* Added authorization checks for the volumes referenced by the `VolumesFrom` Containers option. Previously, this field was ignored by the container create request parser, leading to a gap in permissions checks. (ENGORC-2781)

### Components

| Component         | Version |
| ----------------- | ------- |
| UCP               | 3.0.16  |
| Kubernetes        | 1.11.2  |
| Calico            | 3.2.3   |
| Interlock (NGINX) | 1.13.12 |

## 3.0.15
2019-10-08

2019-09-03

### Kubernetes
* Kubernetes has been upgraded to version 1.8.15-docker-7. This version was built with Golang 1.12.9.
* Kubernetes DNS has been upgraded to 1.14.13.

### Networking

deprecated. Deploy your applications as Swarm services or Kubernetes workloads.

# Version 2.2

## Version 2.2.23
2019-11-14

### Security
* Upgraded Golang to 1.12.12.

### Bug fixes
* Added authorization checks for the volumes referenced by the `VolumesFrom` Containers option. Previously, this field was ignored by the container create request parser, leading to a gap in permissions checks. (ENGORC-2781)

## Version 2.2.22
2019-10-08

instead of the correct image for the worker architecture.

* Searching for images in the UCP images UI doesn't work.
* Removing a stack may leave orphaned volumes.
* Storage metrics are not available for Windows.
* You can't create a bridge network from the web interface. As a workaround, use
  `<node-name>/<network-name>`.

## Version 2.2.19

* Searching for images in the UCP images UI doesn't work.
* Removing a stack may leave orphaned volumes.
* Storage metrics are not available for Windows.
* You can't create a bridge network from the web interface. As a workaround, use
  `<node-name>/<network-name>`.

## Version 2.2.18

* Searching for images in the UCP images UI doesn't work.
* Removing a stack may leave orphaned volumes.
* Storage metrics are not available for Windows.
* You can't create a bridge network from the web interface. As a workaround, use
  `<node-name>/<network-name>`.

## Version 2.2.17

* Searching for images in the UCP images UI doesn't work.
* Removing a stack may leave orphaned volumes.
* Storage metrics are not available for Windows.
* You can't create a bridge network from the web interface. As a workaround, use
  `<node-name>/<network-name>`.

## Version 2.2.16

* Searching for images in the UCP images UI doesn't work.
* Removing a stack may leave orphaned volumes.
* Storage metrics are not available for Windows.
* You can't create a bridge network from the web interface. As a workaround, use
  `<node-name>/<network-name>`.

## Version 2.2.15

* Searching for images in the UCP images UI doesn't work.
* Removing a stack may leave orphaned volumes.
* Storage metrics are not available for Windows.
* You can't create a bridge network from the web interface. As a workaround, use
  `<node-name>/<network-name>`.

## Version 2.2.14

* Searching for images in the UCP images UI doesn't work.
* Removing a stack may leave orphaned volumes.
* Storage metrics are not available for Windows.
* You can't create a bridge network from the web interface. As a workaround, use
  `<node-name>/<network-name>`.

## Version 2.2.13

* Searching for images in the UCP images UI doesn't work.
* Removing a stack may leave orphaned volumes.
* Storage metrics are not available for Windows.
* You can't create a bridge network from the web interface. As a workaround, use
  `<node-name>/<network-name>`.

## Version 2.2.12

* Searching for images in the UCP images UI doesn't work.
* Removing a stack may leave orphaned volumes.
* Storage metrics are not available for Windows.
* You can't create a bridge network from the web interface. As a workaround, use
  `<node-name>/<network-name>`.

## Version 2.2.11

* Searching for images in the UCP images UI doesn't work.
* Removing a stack may leave orphaned volumes.
* Storage metrics are not available for Windows.
* You can't create a bridge network from the web interface. As a workaround, use
  `<node-name>/<network-name>`.

## Version 2.2.10

* Searching for images in the UCP images UI doesn't work.
* Removing a stack may leave orphaned volumes.
* Storage metrics are not available for Windows.
* You can't create a bridge network from the web interface. As a workaround, use
  `<node-name>/<network-name>`.

## Version 2.2.9

* Searching for images in the UCP images UI doesn't work.
* Removing a stack may leave orphaned volumes.
* Storage metrics are not available for Windows.
* You can't create a bridge network from the web interface. As a workaround, use
  `<node-name>/<network-name>`.

## Version 2.2.7

* Searching for images in the UCP images UI doesn't work.
* Removing a stack may leave orphaned volumes.
* Storage metrics are not available for Windows.
* You can't create a bridge network from the web interface. As a workaround, use
  `<node-name>/<network-name>`.

## Version 2.2.6

* Searching for images in the UCP images UI doesn't work.
* Removing a stack may leave orphaned volumes.
* Storage metrics are not available for Windows.
* You can't create a bridge network from the web interface. As a workaround, use
  `<node-name>/<network-name>`.

## Version 2.2.5

* Searching for images in the UCP images UI doesn't work.
* Removing a stack may leave orphaned volumes.
* Storage metrics are not available for Windows.
* You can't create a bridge network from the web interface. As a workaround, use
  `<node-name>/<network-name>`.

## Version 2.2.4

* Searching for images in the UCP images UI doesn't work.
* Removing a stack may leave orphaned volumes.
* Storage metrics are not available for Windows.
* You can't create a bridge network from the web interface. As a workaround, use
  `<node-name>/<network-name>`.

## Version 2.2.3

* Searching for images in the UCP images UI doesn't work.
* Removing a stack may leave orphaned volumes.
* Storage metrics are not available for Windows.
* You can't create a bridge network from the web interface. As a workaround, use
  `<node-name>/<network-name>`.

## Version 2.2.2

  session timeout](https://docs.docker.com/datacenter/ucp/2.2/guides/admin/configure/external-auth/enable-ldap-config-file/).
* docker/ucp
  * The `support` command does not currently produce a valid support dump. As a
    workaround, you can download a support dump from the web interface.
* Windows issues
  * Disk related metrics do not display for Windows worker nodes.
  * If upgrading from an existing deployment, ensure that HRM is using a non-encrypted

2. [Universal Control Plane (UCP)](/ee/ucp/admin/install/upgrade/).
3. [Docker Trusted Registry (DTR)](/ee/dtr/admin/upgrade/).

Because some components become temporarily unavailable during an upgrade, schedule upgrades to occur outside of
peak business hours to minimize impact to your business.

## Cluster upgrade best practices

Docker Engine - Enterprise upgrades in Swarm clusters should follow these guidelines in order to avoid IP address
space exhaustion and associated application downtime.

* New workloads should not be actively scheduled in the cluster during upgrades.

## Check the compatibility matrix

You should also check the [compatibility matrix](https://success.docker.com/Policies/Compatibility_Matrix)
to make sure all Docker Engine - Enterprise components are certified to work with one another.
You may also want to check the
[Docker Engine - Enterprise maintenance lifecycle](https://success.docker.com/Policies/Maintenance_Lifecycle)
to understand how long your version will be supported.

## Apply firewall rules

## IP address consumption in 18.09+

In Swarm overlay networks, each task connected to a network consumes an IP address on that network. Swarm networks have a
finite number of IPs based on the `--subnet` configured when the network is created. If no subnet is specified, Swarm
defaults to a `/24` network with 254 available IP addresses. When the IP space of a network is fully consumed, Swarm tasks
can no longer be scheduled on that network.

Starting with Docker Engine - Enterprise 18.09, each Swarm node consumes an IP address from every Swarm
network. This IP address is consumed by the Swarm internal load balancer on the network. Swarm networks running on Engine
versions 18.09 or greater must be configured to account for this increase in IP usage. Networks at or near full consumption
prior to Engine 18.09 risk reaching full utilization after the upgrade, which prevents tasks from being scheduled
on the network.

Maximum IP consumption per network at any given moment is given by the following formula:
```
Max IP Consumed per Network = Number of Tasks on a Swarm Network + 1 IP for each node where these tasks are scheduled
```
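As an illustrative sketch (not part of any official Docker tooling), the formula can be checked against a subnet's usable capacity with a few lines of Python; the task and node counts below are hypothetical:

```python
import ipaddress

def max_ips_consumed(num_tasks: int, num_nodes_with_tasks: int) -> int:
    # Formula above: one IP per task, plus one IP (the internal load
    # balancer) on every node where tasks of this network are scheduled.
    return num_tasks + num_nodes_with_tasks

def subnet_capacity(subnet: str) -> int:
    # Usable addresses: total minus the network and broadcast addresses.
    return ipaddress.ip_network(subnet).num_addresses - 2

# Hypothetical example: 240 tasks spread across 10 nodes on the default /24.
needed = max_ips_consumed(240, 10)          # 250
capacity = subnet_capacity("10.0.9.0/24")   # 254
print(needed <= capacity)                   # True: the network still fits
```

If `needed` exceeds `capacity`, recreate the network with a larger `--subnet` before upgrading.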

To prevent this from happening, overlay networks should have enough spare capacity prior to an upgrade to 18.09, so that the network still has free addresses after the upgrade. The instructions below offer tooling and steps to ensure capacity is measured before performing an upgrade.

> The above only applies to containers running on Swarm overlay networks. It does not impact bridge, macvlan, host, or third-party Docker networks.

## Upgrade Docker Engine - Enterprise

To avoid application downtime, you should be running Docker Engine - Enterprise in
Swarm mode and deploying your workloads as Docker services. That way you can
drain the nodes of any workloads before starting the upgrade.

### Determine if the network is in danger of exhaustion

Starting with a cluster with one or more services configured, determine whether some networks
may require updating the IP address space in order to function correctly after a Docker
Engine - Enterprise 18.09 upgrade.

1. SSH into a manager node on a cluster where your applications are running.

With an exhausted network, you can triage it using the following steps.

1. SSH into a manager node on a cluster where your applications are running.

2. Check the `docker service ls` output. It displays any service that is unable to completely fill all of its replicas, such as:

```
ID                  NAME                MODE                REPLICAS            IMAGE               PORTS
wn3x4lu9cnln        ex_service          replicated          19/24               nginx:latest
```

3. Check the `docker service ps` output for this service:

```
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
...
i64lee19ia6s         \_ ex_service.11   nginx:latest        tk1706-ubuntu-1    Shutdown            Rejected 7 minutes ago    "node is missing network attac…"
...
```
4. Examine the error using `docker inspect`. In this example, the `docker inspect i64lee19ia6s` output shows the error in the `Status.Err` field:

```
...
"Status": {
    "Timestamp": "2018-08-24T21:03:37.885405884Z",
    "State": "rejected",
...
```

5. Adjust your network subnet in the deployment manifest so that it has enough IPs for the application.

6. Redeploy the application.

7. Confirm the adjusted service deployed successfully.
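When resizing the subnet in step 5, you can estimate the smallest prefix that fits the post-upgrade formula. This is a hypothetical helper, not part of Docker; it assumes one load-balancer IP per node plus the network and broadcast addresses:

```python
import math

def smallest_prefix(num_tasks: int, num_nodes: int) -> int:
    # Addresses needed: one per task, one LB IP per node,
    # plus the network and broadcast addresses.
    needed = num_tasks + num_nodes + 2
    host_bits = max(2, math.ceil(math.log2(needed)))
    return 32 - host_bits

# 500 tasks over 20 nodes do not fit in the default /24 (254 usable IPs),
# but fit in a /22 (1022 usable IPs).
print(smallest_prefix(500, 20))  # 22
```

Leaving headroom beyond the minimum prefix is sensible so that scaling up later does not exhaust the network again.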

### Manager upgrades when moving to Docker Engine - Enterprise 18.09 and later

The following is a constraint introduced by architectural changes to the Swarm overlay networking when
upgrading to Docker Engine - Enterprise 18.09 or later. It only applies to this one-time upgrade and to workloads
that are using the Swarm overlay driver. Once upgraded to Docker Engine - Enterprise 18.09, this
constraint does not impact future upgrades.

When upgrading to Docker Engine - Enterprise 18.09, manager nodes cannot reschedule new workloads on the
managers until all managers have been upgraded to Docker Engine - Enterprise 18.09 or higher. During the
upgrade of the managers, any new workloads scheduled on the managers may fail to schedule until all of the
managers have been upgraded.

To avoid application downtime, reschedule any critical workloads onto Swarm worker nodes during the
upgrade of the managers. Worker nodes and their network functionality continue to operate independently
during any upgrades or outages on the managers. Note that this restriction only applies to managers and
not worker nodes.

### Drain the node

If you are running live applications on the cluster while upgrading, remove applications from nodes being upgraded
so as not to create unplanned outages.

Start by draining the node so that services get scheduled in another node and
continue running without downtime:

```
$ docker node update --availability drain <node>
```

To upgrade a node individually by operating system, please follow the instructions
listed below:

* [Windows Server](/install/windows/docker-ee/#update-docker-engine---enterprise)
* [Ubuntu](/install/linux/docker-ee/ubuntu/#upgrade-docker-engine---enterprise)
* [RHEL](/install/linux/docker-ee/rhel/#upgrade-from-the-repository)
* [CentOS](/install/linux/docker-ee/centos/#upgrade-from-the-repository)
* [Oracle Linux](/install/linux/docker-ee/oracle/#upgrade-from-the-repository)
* [SLES](/install/linux/docker-ee/suse/#upgrade-docker-engine---enterprise)

### Post-upgrade steps for Docker Engine - Enterprise

After all manager and worker nodes have been upgraded, the Swarm cluster can be used again to schedule new
workloads. If workloads were previously scheduled off of the managers, they can be rescheduled on them again.
If any worker nodes were drained, they can be made active again by setting `--availability active`.

## Upgrade UCP

Once you've upgraded the Docker Engine - Enterprise running on all the nodes,
[upgrade UCP](/ee/ucp/admin/install/upgrade.md).

## Upgrade DTR

---
title: Docker Context
description: Learn about Docker Context
keywords: engine, context, cli, kubernetes
---
uploads and downloads, similar to `git pull`, so new versions of a container
can be transferred by only sending diffs.

- *Component re-use.* Any container can be used as a [*parent image*](/glossary.md#parent_image) to
create more specialized components. This can be done manually or as part of an
automated build. For example, you can prepare the ideal Python environment, and
use it as a base for 10 different applications. Your ideal PostgreSQL setup can

from the open source. It also incorporates defect fixes for environments in
which new features cannot be adopted as quickly for consistency and
compatibility reasons.

> **Note:**
> New in 18.09 is an aligned release model for Docker Engine - Community and
> Docker Engine - Enterprise. The new versioning scheme is YY.MM.x where x is an
> incrementing patch version. The enterprise engine is a superset of the
> community engine. They will ship concurrently with the same x patch version
> based on the same code base.

> **Note:**
> The client and container runtime are now in separate packages from the daemon
> in Docker Engine 18.09. Users should install and update all three packages at
> the same time to get the latest patch releases. For example, on Ubuntu:

# Version 19.03

## 19.03.5
2019-11-14

### Builder

* builder-next: Added `entitlements` in builder config. [docker/engine#412](https://github.com/docker/engine/pull/412)
* Fix builder-next: permission errors on using build secrets or ssh forwarding with userns-remap. [docker/engine#420](https://github.com/docker/engine/pull/420)
* Fix builder-next: copying a symlink inside an already copied directory. [docker/engine#420](https://github.com/docker/engine/pull/420)

### Packaging

* Support RHEL 8 packages

### Runtime

* Bump Golang to 1.12.12. [docker/engine#418](https://github.com/docker/engine/pull/418)
* Update RootlessKit to v0.7.0 to harden slirp4netns with mount namespace and seccomp. [docker/engine#397](https://github.com/docker/engine/pull/397)
* Fix to propagate GetContainer error from event processor. [docker/engine#407](https://github.com/docker/engine/pull/407)
* Fix push of OCI image. [docker/engine#405](https://github.com/docker/engine/pull/405)

## 19.03.4
2019-10-17

# Version 18.09

## 18.09.11
2019-11-14

### Builder

* Fix builder-next: filter type in BuildKit GC config. [docker/engine#409](https://github.com/docker/engine/pull/409)

### Runtime

* Bump Golang to 1.12.12.

### Swarm

* Fix update out of sequence and increase max recv gRPC message size for nodes and secrets. [docker/swarmkit#2900](https://github.com/docker/swarmkit/pull/2900)
* Fix for specifying `--default-addr-pool` for `docker swarm init` not picked up by ingress network. [docker/swarmkit#2892](https://github.com/docker/swarmkit/pull/2892)

## 18.09.10
2019-10-08

# Older Docker Engine EE Release notes

## 18.03.1-ee-12
2019-11-14

### Client

* Fix potential out of memory in CLI when running `docker image prune`. [docker/cli#1423](https://github.com/docker/cli/pull/1423)

### Logging

* Fix jsonfile logger: follow logs stuck when `max-size` is set and `max-file=1`. [moby/moby#39969](https://github.com/moby/moby/pull/39969)

### Runtime

* Update to Go 1.12.12.
* Seccomp: add sigprocmask (used by x86 glibc) to default seccomp profile. [moby/moby#39824](https://github.com/moby/moby/pull/39824)

## 18.03.1-ee-11
2019-09-03

+ Support for `--chown` with `COPY` and `ADD` in `Dockerfile`.
+ Added functionality for the `docker logs` command to include the output of multiple logging drivers.

## 17.06.2-ee-25
2019-11-19

### Builder

* Fix for ENV in multi-stage builds not being isolated. [moby/moby#35456](https://github.com/moby/moby/pull/35456)

### Client

* Fix potential out of memory in CLI when running `docker image prune`. [docker/cli#1423](https://github.com/docker/cli/pull/1423)
* Fix compose file schema to prevent invalid properties in `deploy.resources`. [docker/cli#455](https://github.com/docker/cli/pull/455)

### Logging

* Fix jsonfile logger: follow logs stuck when `max-size` is set and `max-file=1`. [moby/moby#39969](https://github.com/moby/moby/pull/39969)

### Runtime

* Update to Go 1.12.12.
* Seccomp: add sigprocmask (used by x86 glibc) to default seccomp profile. [moby/moby#39824](https://github.com/moby/moby/pull/39824)
* Fix "device or resource busy" error on container removal with devicemapper. [moby/moby#34573](https://github.com/moby/moby/pull/34573)
* Fix `daemon.json` configuration `default-ulimits` not working. [moby/moby#32547](https://github.com/moby/moby/pull/32547)
* Fix denial of service with large numbers in `--cpuset-cpus` and `--cpuset-mems`. [moby/moby#37967](https://github.com/moby/moby/pull/37967)
* Fix for `docker start` creates host-directory for bind mount, but shouldn't. [moby/moby#35833](https://github.com/moby/moby/pull/35833)
* Fix OCI image media types. [moby/moby#37359](https://github.com/moby/moby/pull/37359)

### Windows

* Windows: bump RW layer size to 127GB. [moby/moby#35925](https://github.com/moby/moby/pull/35925)

## 17.06.2-ee-24
2019-09-03

* You can configure secure computing mode (Seccomp) policies to secure system calls in a container. For more information, see [Seccomp security profiles for Docker](seccomp.md).
* An AppArmor profile for Docker is installed with the official *.deb* packages. For information about this profile and overriding it, see [AppArmor security profiles for Docker](apparmor.md).
* You can map the root user in the containers to a non-root user. See [Isolate containers with a user namespace](userns-remap.md).
* You can also run the Docker daemon as a non-root user. See [Run the Docker daemon as a non-root user (Rootless mode)](rootless.md).
---
description: Run the Docker daemon as a non-root user (Rootless mode)
keywords: security, namespaces, rootless
title: Run the Docker daemon as a non-root user (Rootless mode)
---

Rootless mode allows running the Docker daemon and containers as a non-root
user, to mitigate potential vulnerabilities in the daemon and
the container runtime.

Rootless mode does not require root privileges even for installation of the
Docker daemon, as long as [the prerequisites](#prerequisites) are satisfied.

Rootless mode was introduced in Docker Engine 19.03.

> **Note**:
> Rootless mode is an experimental feature and has [limitations](#known-limitations).

## How it works

Rootless mode executes the Docker daemon and containers inside a user namespace.
This is very similar to [`userns-remap` mode](userns-remap.md), except that
with `userns-remap` mode, the daemon itself is running with root privileges, whereas in
rootless mode, both the daemon and the container are running without root privileges.

Rootless mode does not use binaries with SETUID bits or file capabilities,
except `newuidmap` and `newgidmap`, which are needed to allow multiple
UIDs/GIDs to be used in the user namespace.

## Prerequisites

- `newuidmap` and `newgidmap` need to be installed on the host. These commands
  are provided by the `uidmap` package on most distros.

- `/etc/subuid` and `/etc/subgid` should contain at least 65,536 subordinate
  UIDs/GIDs for the user. In the following example, the user `testuser` has
  65,536 subordinate UIDs/GIDs (231072-296607).

```console
$ id -u
1001
$ whoami
testuser
$ grep ^$(whoami): /etc/subuid
testuser:231072:65536
$ grep ^$(whoami): /etc/subgid
testuser:231072:65536
```
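The subordinate-ID check above can also be sketched programmatically. This is an illustrative helper, not part of the rootless installer; it parses the `user:start:count` format used by `/etc/subuid` and `/etc/subgid`:

```python
def subid_count(subid_text: str, user: str) -> int:
    # /etc/subuid and /etc/subgid use the format "user:start:count";
    # a user may have several ranges, which are summed here.
    total = 0
    for line in subid_text.splitlines():
        parts = line.strip().split(":")
        if len(parts) == 3 and parts[0] == user:
            total += int(parts[2])
    return total

sample = "testuser:231072:65536\n"
print(subid_count(sample, "testuser") >= 65536)  # True: enough sub-IDs
```

In practice you would read the real files, e.g. `subid_count(open("/etc/subuid").read(), "testuser")`.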

### Distribution-specific hint

> **Note**: Using the Ubuntu kernel is recommended.

#### Ubuntu
- No preparation is needed.

- `overlay2` storage driver is enabled by default
  ([Ubuntu-specific kernel patch](https://kernel.ubuntu.com/git/ubuntu/ubuntu-bionic.git/commit/fs/overlayfs?id=3b7da90f28fe1ed4b79ef2d994c81efbc58f1144)).

- Known to work on Ubuntu 16.04 and 18.04.

#### Debian GNU/Linux
- Add `kernel.unprivileged_userns_clone=1` to `/etc/sysctl.conf` (or
  `/etc/sysctl.d`) and run `sudo sysctl --system`.

- To use the `overlay2` storage driver (recommended), run
  `sudo modprobe overlay permit_mounts_in_userns=1`
  ([Debian-specific kernel patch, introduced in Debian 10](https://salsa.debian.org/kernel-team/linux/blob/283390e7feb21b47779b48e0c8eb0cc409d2c815/debian/patches/debian/overlayfs-permit-mounts-in-userns.patch)).
  Put the configuration in `/etc/modprobe.d` for persistence.

- Known to work on Debian 9 and 10.
  `overlay2` is only supported since Debian 10 and needs the `modprobe`
  configuration described above.

#### Arch Linux
- Add `kernel.unprivileged_userns_clone=1` to `/etc/sysctl.conf` (or
  `/etc/sysctl.d`) and run `sudo sysctl --system`.

#### openSUSE
- `sudo modprobe ip_tables iptable_mangle iptable_nat iptable_filter` is required.
  This might be required on other distros as well, depending on the configuration.

- Known to work on openSUSE 15.

#### Fedora 31 and later
- Fedora 31 uses cgroup v2 by default, which is not yet supported by the containerd runtime.
  Run `sudo grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=0"`
  to use cgroup v1.

#### Fedora 30
- No preparation is needed.

#### CentOS 8
- No preparation is needed.

#### CentOS 7
- Add `user.max_user_namespaces=28633` to `/etc/sysctl.conf` (or
  `/etc/sysctl.d`) and run `sudo sysctl --system`.

- `systemctl --user` does not work by default.
  Run the daemon directly without systemd:
  `dockerd-rootless.sh --experimental --storage-driver vfs`

- Known to work on CentOS 7.7. Older releases require extra configuration
  steps.

- CentOS 7.6 and older releases require [COPR package `vbatts/shadow-utils-newxidmap`](https://copr.fedorainfracloud.org/coprs/vbatts/shadow-utils-newxidmap/) to be installed.

- CentOS 7.5 and older releases require running
  `sudo grubby --update-kernel=ALL --args="user_namespace.enable=1"` and a reboot.

## Known limitations

- Only the `vfs` graphdriver is supported. However, on Ubuntu and Debian 10,
  `overlay2` and `overlay` are also supported.

- The following features are not supported:
  - Cgroups (including `docker top`, which depends on cgroups)
  - AppArmor
  - Checkpoint
  - Overlay network
  - Exposing SCTP ports

- To use the `ping` command, see [Routing ping packets](#routing-ping-packets).

- To expose privileged TCP/UDP ports (< 1024), see [Exposing privileged ports](#exposing-privileged-ports).

## Install

The installation script is available at [get.docker.com/rootless](https://get.docker.com/rootless).

```console
$ curl -fsSL https://get.docker.com/rootless | sh
```

Make sure to run the script as a non-root user.

The script shows the environment variables that need to be set:

```console
$ curl -fsSL https://get.docker.com/rootless | sh
...
# Docker binaries are installed in /home/testuser/bin
# WARN: dockerd is not in your current PATH or pointing to /home/testuser/bin/dockerd
# Make sure the following environment variables are set (or add them to ~/.bashrc):

export PATH=/home/testuser/bin:$PATH
export PATH=$PATH:/sbin
export DOCKER_HOST=unix:///run/user/1001/docker.sock

#
# To control docker service run:
# systemctl --user (start|stop|restart) docker
#
```

To install the binaries manually without using the installer, extract
`docker-rootless-extras-<version>.tar.gz` along with `docker-<version>.tar.gz` from
https://download.docker.com/linux/static/stable/x86_64/

## Usage

### Daemon

Use `systemctl --user` to manage the lifecycle of the daemon:

```console
$ systemctl --user start docker
```

To launch the daemon on system startup, enable systemd lingering:

```console
$ sudo loginctl enable-linger $(whoami)
```
To run the daemon directly without systemd, run
`dockerd-rootless.sh` instead of `dockerd`:

```console
$ dockerd-rootless.sh --experimental --storage-driver vfs
```

As Rootless mode is experimental, you currently always need to run
`dockerd-rootless.sh` with `--experimental`.
You also need `--storage-driver vfs` unless you are using an Ubuntu or Debian 10 kernel.
You don't need to care about these flags if you manage the daemon using systemd, as
they are automatically added to the systemd unit file.
Remarks about directory paths:

- The socket path is set to `$XDG_RUNTIME_DIR/docker.sock` by default.
  `$XDG_RUNTIME_DIR` is typically set to `/run/user/$UID`.
- The data dir is set to `~/.local/share/docker` by default.
- The exec dir is set to `$XDG_RUNTIME_DIR/docker` by default.
- The daemon config dir is set to `~/.config/docker` (not `~/.docker`, which is
  used by the client) by default.
Other remarks:

- The `dockerd-rootless.sh` script executes `dockerd` in its own user, mount,
  and network namespaces. You can enter the namespaces by running
  `nsenter -U --preserve-credentials -n -m -t $(cat $XDG_RUNTIME_DIR/docker.pid)`.
- `docker info` shows `rootless` in `SecurityOptions`.
- `docker info` shows `none` as `Cgroup Driver`.
### Client

You need to set the socket path explicitly:

```console
$ export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/docker.sock
$ docker run -d nginx
```
## Tips

### Rootless Docker in Docker

To run rootless Docker inside "rootful" Docker, use the `docker:<version>-dind-rootless`
image instead of the `docker:<version>-dind` image:

```console
$ docker run -d --name dind-rootless --privileged docker:19.03-dind-rootless --experimental
```

The `docker:<version>-dind-rootless` image runs as a non-root user (UID 1000).
However, `--privileged` is still required for disabling seccomp, AppArmor, and mount
masks.
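To check that the nested daemon is up, you can exec into the container; the container name `dind-rootless` follows the example above, and the daemon may need a moment to finish starting:

```console
$ docker exec dind-rootless docker info --format '{{.SecurityOptions}}'
```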
### Expose Docker API socket via TCP

To expose the Docker API socket via TCP, you need to launch `dockerd-rootless.sh`
with `DOCKERD_ROOTLESS_ROOTLESSKIT_FLAGS="-p 0.0.0.0:2376:2376/tcp"`:

```console
$ DOCKERD_ROOTLESS_ROOTLESSKIT_FLAGS="-p 0.0.0.0:2376:2376/tcp" \
  dockerd-rootless.sh --experimental \
  -H tcp://0.0.0.0:2376 \
  --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem
```
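A client can then connect over TCP using the matching certificates; `HOST` below is a placeholder for the daemon's address:

```console
$ docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem \
    -H tcp://HOST:2376 info
```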
### Routing ping packets

The `ping` command does not work by default.

Add `net.ipv4.ping_group_range = 0 2147483647` to `/etc/sysctl.conf` (or
`/etc/sysctl.d`) and run `sudo sysctl --system` to allow using `ping`.
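For example, using a drop-in file under `/etc/sysctl.d` (the file name here is an arbitrary choice):

```console
$ echo "net.ipv4.ping_group_range = 0 2147483647" | sudo tee /etc/sysctl.d/99-rootless.conf
$ sudo sysctl --system
```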
### Exposing privileged ports

To expose privileged ports (< 1024), set `CAP_NET_BIND_SERVICE` on the `rootlesskit` binary:

```console
$ sudo setcap cap_net_bind_service=ep $HOME/bin/rootlesskit
```

Or add `net.ipv4.ip_unprivileged_port_start=0` to `/etc/sysctl.conf` (or
`/etc/sysctl.d`) and run `sudo sysctl --system`.
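After either change (and a restart of the daemon in the `setcap` case), publishing a privileged port should work, for example:

```console
$ docker run -d -p 80:80 nginx
```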
### Limiting resources

Currently, rootless mode ignores cgroup-related `docker run` flags such as
`--cpus` and `--memory`.

However, traditional `ulimit` and [`cpulimit`](https://github.com/opsengine/cpulimit)
can still be used, though they work at process granularity rather than at container granularity.
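As a sketch, assuming `cpulimit` is installed and a container named `mycontainer` is running (both names are illustrative), you can throttle the container's main process by PID; this works because rootless containers share the host PID namespace and run as your user:

```console
$ PID=$(docker inspect --format '{{.State.Pid}}' mycontainer)
$ cpulimit --limit=50 --pid=$PID
```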
### Changing network stack

`dockerd-rootless.sh` uses [slirp4netns](https://github.com/rootless-containers/slirp4netns)
(if installed) or [VPNKit](https://github.com/moby/vpnkit) as the network stack
by default.

These network stacks run in userspace and might have performance overhead.
See the [RootlessKit documentation](https://github.com/rootless-containers/rootlesskit/tree/v0.7.0#network-drivers) for further information.

Optionally, you can use `lxc-user-nic` instead for the best performance.
To use `lxc-user-nic`, edit [`/etc/lxc/lxc-usernet`](https://github.com/rootless-containers/rootlesskit/tree/v0.7.0#--netlxc-user-nic-experimental)
and set `DOCKERD_ROOTLESS_ROOTLESSKIT_NET=lxc-user-nic`.
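For example, after configuring `/etc/lxc/lxc-usernet`, the daemon can be launched with (other flags such as `--storage-driver` may still be needed, as described above):

```console
$ DOCKERD_ROOTLESS_ROOTLESSKIT_NET=lxc-user-nic dockerd-rootless.sh --experimental
```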
@@ -79,8 +79,8 @@ started in 2006, and initially merged in kernel 2.6.24.

Running containers (and applications) with Docker implies running the
Docker daemon. This daemon requires `root` privileges unless you opt in
to [Rootless mode](rootless.md) (experimental), and you should therefore
be aware of some important details.

First of all, **only trusted users should be allowed to control your
Docker daemon**. This is a direct consequence of some powerful Docker
@@ -61,7 +61,7 @@ certificates from a custom root CA.

* **Rolling updates:** At rollout time you can apply service updates to nodes
  incrementally. The swarm manager lets you control the delay between service
  deployment to different sets of nodes. If anything goes wrong, you can
  roll back to a previous version of the service.

## What's next?
@@ -0,0 +1,39 @@

---
description: Home page for Get Docker
keywords: Docker, documentation, manual
landing: true
title: Get Docker
---

<div class="component-container">
  <!--start row-->
  <div class="row">
    <div class="col-sm-12 col-md-12 col-lg-4 block">
      <div class="component">
        <div class="component-icon">
          <a href="docker-for-mac/"> <img src="../images/apple_48.svg" alt="Docker Desktop for Mac"> </a>
        </div>
        <h3 id="docker-for-mac"><a href="docker-for-mac/">Docker Desktop for Mac</a></h3>
        <p>A native application using the macOS sandbox security model which delivers all Docker tools to your Mac.</p>
      </div>
    </div>
    <div class="col-sm-12 col-md-12 col-lg-4 block">
      <div class="component">
        <div class="component-icon">
          <a href="docker-for-windows/"> <img src="../images/windows_48.svg" alt="Docker Desktop for Windows"> </a>
        </div>
        <h3 id="docker-for-windows"><a href="docker-for-windows/">Docker Desktop for Windows</a></h3>
        <p>A native Windows application which delivers all Docker tools to your Windows computer.</p>
      </div>
    </div>
    <div class="col-sm-12 col-md-12 col-lg-4 block">
      <div class="component">
        <div class="component-icon">
          <a href="install/linux/ubuntu/"> <img src="../images/linux_48.svg" alt="Docker for Linux"> </a>
        </div>
        <h3 id="docker-for-linux"><a href="install/linux/ubuntu/">Docker for Linux</a></h3>
        <p>Install Docker on a computer which already has a Linux distribution installed.</p>
      </div>
    </div>
  </div>
</div>
@@ -1,5 +1,5 @@

---
title: "Orientation and setup"
keywords: get started, setup, orientation, quickstart, intro, concepts, containers, docker desktop
description: Get oriented on some basics of Docker and install Docker Desktop.
redirect_from:
@@ -1,7 +1,7 @@

<ul class="pagination">
  <li {% if include.selected=="1"%}class="active"{% endif %}><a href="part1">Orientation and setup</a></li>
  <li {% if include.selected=="2"%}class="active"{% endif %}><a href="part2">Containerizing an application</a></li>
  <li {% if include.selected=="3"%}class="active"{% endif %}><a href="part3">Deploying to Kubernetes</a></li>
  <li {% if include.selected=="4"%}class="active"{% endif %}><a href="part4">Deploying to Swarm</a></li>
  <li {% if include.selected=="5"%}class="active"{% endif %}><a href="part5">Sharing images on Docker Hub</a></li>
</ul>
@@ -1,5 +1,5 @@

---
title: "Containerizing an application"
keywords: containers, images, dockerfiles, node, code, coding, build, push, run
description: Learn how to create a Docker image by writing a Dockerfile, and use it to run a simple container.
---
@@ -25,7 +25,7 @@ In this stage of the tutorial, let's focus on step 1 of this workflow: creating

## Setting Up

1. Clone an example project from GitHub (if you don't have git installed, see the [install instructions](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) first):

   ```shell
   git clone -b v1 https://github.com/docker-training/node-bulletin-board
   ```
@@ -111,4 +111,4 @@ Further documentation for all CLI commands used in this article are available here:

- [docker image *](https://docs.docker.com/engine/reference/commandline/image/)
- [docker container *](https://docs.docker.com/engine/reference/commandline/container/)
- [Dockerfile reference](https://docs.docker.com/engine/reference/builder/)
@@ -1,5 +1,5 @@

---
title: "Deploying to Kubernetes"
keywords: kubernetes, pods, deployments, kubernetes services
description: Learn how to describe and deploy a simple application on Kubernetes.
---
@@ -24,7 +24,7 @@ In order to validate that our containerized application works well on Kubernetes

All containers in Kubernetes are scheduled as _pods_, which are groups of co-located containers that share some resources. Furthermore, in a realistic application we almost never create individual pods; instead, most of our workloads are scheduled as _deployments_, which are scalable groups of pods maintained automatically by Kubernetes. Lastly, all Kubernetes objects can and should be described in manifests called _Kubernetes YAML_ files; these YAML files describe all the components and configurations of your Kubernetes app, and can be used to easily create and destroy your app in any Kubernetes environment.

1. You already wrote a very basic Kubernetes YAML file in the first part of this tutorial; let's write a slightly more sophisticated one now, to run and manage our bulletin board. Place the following in a file called `bb.yaml`:

   ```yaml
   apiVersion: apps/v1
   ```
@@ -125,8 +125,6 @@ At this point, we have successfully used Docker Desktop to deploy our applicatio

In addition to deploying to Kubernetes, we have also described our application as a Kubernetes YAML file. This simple text file contains everything we need to create our application in a running state; we can check it into version control and share it with our colleagues, allowing us to distribute our applications to other clusters (like the testing and production clusters that probably come after our development environments) easily.

[On to Part 4 >>](part4.md){: class="button outline-btn" style="margin-bottom: 30px; margin-right: 100%"}

## Kubernetes References

Further documentation for all new Kubernetes objects used in this article is available here:
@@ -1,5 +1,5 @@

---
title: "Deploying to Swarm"
keywords: swarm, swarm services, stacks
description: Learn how to describe and deploy a simple application on Docker Swarm.
---
@@ -1,5 +1,5 @@

---
title: "Sharing images on Docker Hub"
keywords: docker hub, push, images
description: Learn how to share images on Docker Hub.
---
@@ -57,5 +57,3 @@ At this point, you've set up your Docker Hub account and have connected it to yo

Now that your image is available on Docker Hub, you'll be able to run it anywhere; if you try to use it on a new cluster that doesn't have it yet, Docker will automatically try and download it from Docker Hub. By moving images around in this way, we no longer need to install any dependencies except Docker and our orchestrator on the machines we want to run our software on; the dependencies of our containerized applications are completely encapsulated and isolated within our images, which we can share via Docker Hub in the manner above.

Another thing to keep in mind: at the moment, we've only pushed your image to Docker Hub; what about your Dockerfiles, Kube YAML, and stack files? A crucial best practice is to keep these in version control, perhaps alongside the source code for your application, and add a link or note in your Docker Hub repository description indicating where these files can be found, preserving the record not only of how your image was built, but how it's meant to be run as a full application.