Revert "Revert "Merge branch 'master' of github.com:docker/docs-private into test-branch-2""

This reverts commit 4c95d161ca.
_config.yml (25 changes)

@@ -23,7 +23,7 @@ latest_stable_docker_engine_api_version: "1.37"
 docker_ce_stable_version: "18.03"
 docker_ce_edge_version: "18.05"
 docker_ee_version: "17.06"
-compose_version: "1.21.2"
+compose_version: "1.22.0"
 machine_version: "0.14.0"
 distribution_version: "2.6"
 dtr_version: "2.5"
@@ -92,7 +92,7 @@ defaults:
   - scope:
       path: "install"
     values:
-      win_latest_build: "docker-17.06.2-ee-8"
+      win_latest_build: "docker-17.06.2-ee-16"
   - scope:
       path: "datacenter"
     values:
@@ -102,27 +102,27 @@ defaults:
     values:
       dtr_org: "docker"
       dtr_repo: "dtr"
-      dtr_version: "2.5.0"
+      dtr_version: "2.5.3"
   - scope:
       path: "datacenter/dtr/2.4"
     values:
       hide_from_sitemap: true
       dtr_org: "docker"
       dtr_repo: "dtr"
-      dtr_version: "2.4.3"
+      dtr_version: "2.4.6"
   - scope:
       path: "datacenter/dtr/2.3"
     values:
       hide_from_sitemap: true
       dtr_org: "docker"
       dtr_repo: "dtr"
-      dtr_version: "2.3.6"
+      dtr_version: "2.3.8"
   - scope:
       path: "datacenter/dtr/2.2"
     values:
       ucp_version: "2.1"
       dtr_version: "2.2"
-      docker_image: "docker/dtr:2.2.11"
+      docker_image: "docker/dtr:2.2.12"
   - scope:
       path: "datacenter/dtr/2.1"
     values:
@@ -138,38 +138,41 @@ defaults:
     values:
       ucp_org: "docker"
       ucp_repo: "ucp"
-      ucp_version: "3.0.0"
+      ucp_version: "3.0.4"
   - scope: # This is a bit of a hack for the get-support.md topic.
       path: "ee"
     values:
       ucp_org: "docker"
       ucp_repo: "ucp"
+      dtr_repo: "dtr"
-      ucp_version: "3.0.0"
+      ucp_version: "3.0.4"
+      dtr_version: "2.5.0"
-      dtr_latest_image: "docker/dtr:2.5.0"
+      dtr_latest_image: "docker/dtr:2.5.3"
   - scope:
       path: "datacenter/ucp/2.2"
     values:
       hide_from_sitemap: true
       ucp_org: "docker"
       ucp_repo: "ucp"
-      ucp_version: "2.2.9"
+      ucp_version: "2.2.12"
   - scope:
       path: "datacenter/ucp/2.1"
     values:
       hide_from_sitemap: true
       ucp_version: "2.1"
       dtr_version: "2.2"
       docker_image: "docker/ucp:2.1.8"
   - scope:
       path: "datacenter/ucp/2.0"
     values:
       hide_from_sitemap: true
       ucp_version: "2.0"
       dtr_version: "2.1"
-      docker_image: "docker/ucp:2.0.3"
+      docker_image: "docker/ucp:2.0.4"
   - scope:
       path: "datacenter/ucp/1.1"
     values:
       hide_from_sitemap: true
       ucp_version: "1.1"
       dtr_version: "2.0"
@@ -6,14 +6,62 @@
 - product: "ucp"
   version: "3.0"
   tar-files:
+    - description: "3.0.4 Linux"
+      url: https://packages.docker.com/caas/ucp_images_3.0.4.tar.gz
+    - description: "3.0.4 IBM Z"
+      url: https://packages.docker.com/caas/ucp_images_s390x_3.0.4.tar.gz
+    - description: "3.0.4 Windows Server 2016 LTSC"
+      url: https://packages.docker.com/caas/ucp_images_win_2016_3.0.4.tar.gz
+    - description: "3.0.4 Windows Server 1709"
+      url: https://packages.docker.com/caas/ucp_images_win_1709_3.0.4.tar.gz
+    - description: "3.0.4 Windows Server 1803"
+      url: https://packages.docker.com/caas/ucp_images_win_1803_3.0.4.tar.gz
+    - description: "3.0.3 Linux"
+      url: https://packages.docker.com/caas/ucp_images_3.0.3.tar.gz
+    - description: "3.0.3 IBM Z"
+      url: https://packages.docker.com/caas/ucp_images_s390x_3.0.3.tar.gz
+    - description: "3.0.3 Windows Server 2016 LTSC"
+      url: https://packages.docker.com/caas/ucp_images_win_2016_3.0.3.tar.gz
+    - description: "3.0.3 Windows Server 1709"
+      url: https://packages.docker.com/caas/ucp_images_win_1709_3.0.3.tar.gz
+    - description: "3.0.3 Windows Server 1803"
+      url: https://packages.docker.com/caas/ucp_images_win_1803_3.0.3.tar.gz
     - description: "3.0.2 Linux"
       url: https://packages.docker.com/caas/ucp_images_3.0.2.tar.gz
     - description: "3.0.2 Windows Server 2016 LTSC"
       url: https://packages.docker.com/caas/ucp_images_win_2016_3.0.2.tar.gz
     - description: "3.0.2 Windows Server 1709"
       url: https://packages.docker.com/caas/ucp_images_win_1709_3.0.2.tar.gz
     - description: "3.0.1 Linux"
       url: https://packages.docker.com/caas/ucp_images_3.0.1.tar.gz
     - description: "3.0.1 Windows Server 2016 LTSC"
       url: https://packages.docker.com/caas/ucp_images_win_3.0.1.tar.gz
     - description: "3.0.0 Linux"
       url: https://packages.docker.com/caas/ucp_images_3.0.0.tar.gz
-    - description: "3.0.0 Windows"
+    - description: "3.0.0 Windows Server 2016 LTSC"
       url: https://packages.docker.com/caas/ucp_images_win_3.0.0.tar.gz
 - product: "ucp"
   version: "2.2"
   tar-files:
-    - description: "2.2.9 Linux"
+    - description: "2.2.12 Linux"
+      url: https://packages.docker.com/caas/ucp_images_2.2.12.tar.gz
+    - description: "2.2.12 IBM Z"
+      url: https://packages.docker.com/caas/ucp_images_s390x_2.2.12.tar.gz
+    - description: "2.2.12 Windows"
+      url: https://packages.docker.com/caas/ucp_images_win_2.2.12.tar.gz
+    - description: "2.2.11 Linux"
+      url: https://packages.docker.com/caas/ucp_images_2.2.11.tar.gz
+    - description: "2.2.11 IBM Z"
+      url: https://packages.docker.com/caas/ucp_images_s390x_2.2.11.tar.gz
+    - description: "2.2.11 Windows"
+      url: https://packages.docker.com/caas/ucp_images_win_2.2.11.tar.gz
+    - description: "2.2.10 Linux"
+      url: https://packages.docker.com/caas/ucp_images_2.2.10.tar.gz
+    - description: "2.2.10 IBM Z"
+      url: https://packages.docker.com/caas/ucp_images_s390x_2.2.10.tar.gz
+    - description: "2.2.10 Windows"
+      url: https://packages.docker.com/caas/ucp_images_win_2.2.10.tar.gz
+    - description: "2.2.9 Linux"
       url: https://packages.docker.com/caas/ucp_images_2.2.9.tar.gz
     - description: "2.2.9 IBM Z"
       url: https://packages.docker.com/caas/ucp_images_s390x_2.2.9.tar.gz
@@ -64,13 +112,41 @@
 - product: "dtr"
   version: "2.5"
   tar-files:
+    - description: "DTR 2.5.4 Linux x86"
+      url: https://packages.docker.com/caas/dtr_images_2.5.4.tar.gz
+    - description: "DTR 2.5.4 IBM Z"
+      url: https://packages.docker.com/caas/dtr_images_s390x_2.5.4.tar.gz
+    - description: "DTR 2.5.3 Linux x86"
+      url: https://packages.docker.com/caas/dtr_images_2.5.3.tar.gz
+    - description: "DTR 2.5.3 IBM Z"
+      url: https://packages.docker.com/caas/dtr_images_s390x_2.5.3.tar.gz
+    - description: "DTR 2.5.2 Linux x86"
+      url: https://packages.docker.com/caas/dtr_images_2.5.2.tar.gz
+    - description: "DTR 2.5.2 IBM Z"
+      url: https://packages.docker.com/caas/dtr_images_s390x_2.5.2.tar.gz
+    - description: "DTR 2.5.1 Linux x86"
+      url: https://packages.docker.com/caas/dtr_images_2.5.1.tar.gz
+    - description: "DTR 2.5.1 IBM Z"
+      url: https://packages.docker.com/caas/dtr_images_s390x_2.5.1.tar.gz
     - description: "DTR 2.5.0 Linux x86"
       url: https://packages.docker.com/caas/dtr_images_2.5.0.tar.gz
-    - description: "DTR 2.4.3 IBM Z"
-      url: https://packages.docker.com/caas/dtr_images_s390x_2.4.3.tar.gz
+    - description: "DTR 2.5.0 IBM Z"
+      url: https://packages.docker.com/caas/dtr_images_s390x_2.5.0.tar.gz
 - product: "dtr"
   version: "2.4"
   tar-files:
+    - description: "DTR 2.4.6 Linux x86"
+      url: https://packages.docker.com/caas/dtr_images_2.4.6.tar.gz
+    - description: "DTR 2.4.6 IBM Z"
+      url: https://packages.docker.com/caas/dtr_images_s390x_2.4.6.tar.gz
+    - description: "DTR 2.4.5 Linux x86"
+      url: https://packages.docker.com/caas/dtr_images_2.4.5.tar.gz
+    - description: "DTR 2.4.5 IBM Z"
+      url: https://packages.docker.com/caas/dtr_images_s390x_2.4.5.tar.gz
+    - description: "DTR 2.4.4 Linux x86"
+      url: https://packages.docker.com/caas/dtr_images_2.4.4.tar.gz
+    - description: "DTR 2.4.4 IBM Z"
+      url: https://packages.docker.com/caas/dtr_images_s390x_2.4.4.tar.gz
     - description: "DTR 2.4.3 Linux x86"
       url: https://packages.docker.com/caas/dtr_images_2.4.3.tar.gz
     - description: "DTR 2.4.3 IBM Z"
@@ -90,6 +166,10 @@
 - product: "dtr"
   version: "2.3"
   tar-files:
+    - description: "DTR 2.3.8"
+      url: https://packages.docker.com/caas/dtr_images_2.3.8.tar.gz
+    - description: "DTR 2.3.7"
+      url: https://packages.docker.com/caas/dtr_images_2.3.7.tar.gz
     - description: "DTR 2.3.6"
       url: https://packages.docker.com/caas/dtr_images_2.3.6.tar.gz
     - description: "DTR 2.3.5"
@@ -28,7 +28,7 @@ options:
   swarm: false
 examples: |-
   ```bash
-  $ docker docker image ls
+  $ docker image ls

   REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
_data/toc.yaml (276 changes)

@@ -1643,6 +1643,8 @@ manuals:
         title: Isolate volumes
       - path: /ee/ucp/authorization/isolate-nodes/
         title: Isolate nodes
       - path: /ee/ucp/authorization/pull-images/
         title: Allow users to pull images
+      - path: /ee/ucp/authorization/migrate-kubernetes-roles/
+        title: Migrate Kubernetes roles to Docker EE authorization
       - path: /ee/ucp/authorization/ee-standard/
@@ -1716,7 +1718,7 @@ manuals:
       - title: Deploy a Compose-based app
         path: /ee/ucp/kubernetes/deploy-with-compose/
       - title: Deploy an ingress controller
-        path: /ee/ucp/kubernetes/deploy-ingress-controller/
+        path: /ee/ucp/kubernetes/layer-7-routing/
       - title: Create a service account for a Kubernetes app
         path: /ee/ucp/kubernetes/create-service-account/
       - title: Install a CNI plugin
@@ -1890,6 +1892,7 @@ manuals:
         title: API reference
       - path: /ee/ucp/release-notes/
         title: Release notes
+        nosync: true
       - path: /datacenter/ucp/2.2/guides/get-support/
         title: Get support
   - sectiontitle: Universal Control Plane 2.1
@@ -2546,6 +2549,7 @@ manuals:
         title: API reference
       - path: /ee/dtr/release-notes/
         title: Release notes
+        nosync: true
       - path: /datacenter/dtr/2.4/guides/support/
         title: Get support
   - sectiontitle: Docker Trusted Registry 2.3
@@ -3028,7 +3032,179 @@ manuals:
         title: Get support
+  - title: Get support
+    path: /ee/get-support/
+
+- sectiontitle: Docker Cloud
+  section:
+  - sectiontitle: Migration
+    section:
+    - path: /docker-cloud/migration/
+      title: Migration overview
+    - path: /docker-cloud/migration/cloud-to-swarm/
+      title: Migrate to Docker CE
+    - path: /docker-cloud/migration/cloud-to-kube-aks/
+      title: Migration to AKS
+    - path: /docker-cloud/migration/cloud-to-kube-gke/
+      title: Migrate to GKE
+    - path: /docker-cloud/migration/cloud-to-aws-ecs/
+      title: Migrate to Amazon ECS
+    - path: /docker-cloud/migration/deregister-swarms/
+      title: Deregister swarms
+    - path: /docker-cloud/migration/kube-primer/
+      title: Kubernetes primer
+  - path: /docker-cloud/
+    title: About Docker Cloud
+  - path: /docker-cloud/dockerid/
+    title: Docker Cloud settings and Docker ID
+  - path: /docker-cloud/orgs/
+    title: Organizations and teams
+  - sectiontitle: Manage builds and images
+    section:
+    - path: /docker-cloud/builds/
+      title: Builds and images overview
+    - path: /docker-cloud/builds/repos/
+      title: Docker Cloud repositories
+    - path: /docker-cloud/builds/link-source/
+      title: Link to a source code repository
+    - path: /docker-cloud/builds/push-images/
+      title: Push images to Docker Cloud
+    - path: /docker-cloud/builds/automated-build/
+      title: Automated builds
+    - path: /docker-cloud/builds/automated-testing/
+      title: Automated repository tests
+    - path: /docker-cloud/builds/advanced/
+      title: Advanced options for autobuild and autotest
+  - sectiontitle: Manage swarms (beta swarm mode)
+    section:
+    - path: /docker-cloud/cloud-swarm/
+      title: Overview
+    - path: /docker-cloud/cloud-swarm/using-swarm-mode/
+      title: Using Swarm mode
+    - path: /docker-cloud/cloud-swarm/register-swarms/
+      title: Register existing swarms
+    - path: /docker-cloud/cloud-swarm/create-cloud-swarm-aws/
+      title: Create a new swarm on Amazon Web Services in Docker Cloud
+    - path: /docker-cloud/cloud-swarm/create-cloud-swarm-azure/
+      title: Create a new swarm on Microsoft Azure in Docker Cloud
+    - path: /docker-cloud/cloud-swarm/connect-to-swarm/
+      title: Connect to a swarm through Docker Cloud
+    - path: /docker-cloud/cloud-swarm/link-aws-swarm/
+      title: Link Amazon Web Services to Docker Cloud
+    - path: /docker-cloud/cloud-swarm/link-azure-swarm/
+      title: Link Microsoft Azure Cloud Services to Docker Cloud
+    - path: /docker-cloud/cloud-swarm/ssh-key-setup/
+      title: Set up SSH keys
+  - sectiontitle: Manage Infrastructure (standard mode)
+    section:
+    - path: /docker-cloud/infrastructure/
+      title: Infrastructure overview
+    - path: /docker-cloud/infrastructure/deployment-strategies/
+      title: Container distribution strategies
+    - path: /docker-cloud/infrastructure/link-aws/
+      title: Link to Amazon Web Services hosts
+    - path: /docker-cloud/infrastructure/link-do/
+      title: Link to DigitalOcean hosts
+    - path: /docker-cloud/infrastructure/link-azure/
+      title: Link to Microsoft Azure hosts
+    - path: /docker-cloud/infrastructure/link-packet/
+      title: Link to Packet hosts
+    - path: /docker-cloud/infrastructure/link-softlayer/
+      title: Link to SoftLayer hosts
+    - path: /docker-cloud/infrastructure/ssh-into-a-node/
+      title: SSH into a Docker Cloud-managed node
+    - path: /docker-cloud/infrastructure/docker-upgrade/
+      title: Upgrade Docker on a node
+    - path: /docker-cloud/infrastructure/byoh/
+      title: Use the Docker Cloud agent
+    - path: /docker-cloud/infrastructure/cloud-on-packet.net-faq/
+      title: Use Docker Cloud and Packet.net
+    - path: /docker-cloud/infrastructure/cloud-on-aws-faq/
+      title: Use Docker Cloud on AWS
+  - sectiontitle: Manage nodes and apps (standard mode)
+    section:
+    - path: /docker-cloud/standard/
+      title: Overview
+    - sectiontitle: Getting started
+      section:
+      - path: /docker-cloud/getting-started/
+        title: Getting started with Docker Cloud
+      - path: /docker-cloud/getting-started/intro_cloud/
+        title: Introducing Docker Cloud
+      - path: /docker-cloud/getting-started/connect-infra/
+        title: Link to your infrastructure
+      - path: /docker-cloud/getting-started/your_first_node/
+        title: Deploy your first node
+      - path: /docker-cloud/getting-started/your_first_service/
+        title: Deploy your first service
+    - sectiontitle: Deploy an application
+      section:
+      - path: /docker-cloud/getting-started/deploy-app/1_introduction/
+        title: Introduction to deploying an app in Docker Cloud
+      - path: /docker-cloud/getting-started/deploy-app/2_set_up/
+        title: Set up your environment
+      - path: /docker-cloud/getting-started/deploy-app/3_prepare_the_app/
+        title: Prepare the application
+      - path: /docker-cloud/getting-started/deploy-app/4_push_to_cloud_registry/
+        title: Push the image to Docker Cloud's Registry
+      - path: /docker-cloud/getting-started/deploy-app/5_deploy_the_app_as_a_service/
+        title: Deploy the app as a Docker Cloud service
+      - path: /docker-cloud/getting-started/deploy-app/6_define_environment_variables/
+        title: Define environment variables
+      - path: /docker-cloud/getting-started/deploy-app/7_scale_the_service/
+        title: Scale the service
+      - path: /docker-cloud/getting-started/deploy-app/8_view_logs/
+        title: View service logs
+      - path: /docker-cloud/getting-started/deploy-app/9_load-balance_the_service/
+        title: Load-balance the service
+      - path: /docker-cloud/getting-started/deploy-app/10_provision_a_data_backend_for_your_service/
+        title: Provision a data backend for the service
+      - path: /docker-cloud/getting-started/deploy-app/11_service_stacks/
+        title: Stackfiles for your service
+      - path: /docker-cloud/getting-started/deploy-app/12_data_management_with_volumes/
+        title: Data management with volumes
+    - sectiontitle: Manage applications
+      section:
+      - path: /docker-cloud/apps/
+        title: Applications in Docker Cloud
+      - path: /docker-cloud/apps/deploy-to-cloud-btn/
+        title: Add a deploy to Docker Cloud button
+      - path: /docker-cloud/apps/auto-destroy/
+        title: Automatic container destroy
+      - path: /docker-cloud/apps/autorestart/
+        title: Automatic container restart
+      - path: /docker-cloud/apps/auto-redeploy/
+        title: Automatic service redeploy
+      - path: /docker-cloud/apps/load-balance-hello-world/
+        title: Create a proxy or load balancer
+      - path: /docker-cloud/apps/deploy-tags/
+        title: Deployment tags
+      - path: /docker-cloud/apps/stacks/
+        title: Manage service stacks
+      - path: /docker-cloud/apps/ports/
+        title: Publish and expose service or container ports
+      - path: /docker-cloud/apps/service-redeploy/
+        title: Redeploy running services
+      - path: /docker-cloud/apps/service-scaling/
+        title: Scale your service
+      - path: /docker-cloud/apps/api-roles/
+        title: Service API roles
+      - path: /docker-cloud/apps/service-links/
+        title: Service discovery and links
+      - path: /docker-cloud/apps/triggers/
+        title: Use triggers
+      - path: /docker-cloud/apps/volumes/
+        title: Work with data volumes
+      - path: /docker-cloud/apps/stack-yaml-reference/
+        title: Cloud stack file YAML reference
+  - path: /docker-cloud/slack-integration/
+    title: Docker Cloud notifications in Slack
+  - path: /apidocs/docker-cloud/
+    title: Docker Cloud API
+    nosync: true
+  - path: /docker-cloud/installing-cli/
+    title: The Docker Cloud CLI
+  - path: /docker-cloud/docker-errors-faq/
+    title: Known issues in Docker Cloud
+  - path: /docker-cloud/release-notes/
+    title: Release notes
 - sectiontitle: Docker Compose
   section:
   - path: /compose/overview/
@@ -3283,62 +3459,48 @@ manuals:
     title: Migrate from Boot2Docker to Machine
   - path: /release-notes/docker-machine/
     title: Docker Machine release notes

 - sectiontitle: Docker Store
   section:
   - path: /docker-store/
     title: About Docker Store
   - sectiontitle: Docker Store FAQs
     section:
     - path: /docker-store/customer_faq/
       title: Customer FAQs
     - path: /docker-store/publisher_faq/
       title: Publisher FAQs
   - sectiontitle: For Publishers
     section:
     - path: /docker-store/publish/
       title: Publish content on Docker Store
     - path: /docker-store/certify-images/
       title: Certify Docker images
     - path: /docker-store/certify-plugins-logging/
       title: Certify Docker logging plugins
     - path: /docker-store/trustchain/
       title: Docker Store trust chain
     - path: /docker-store/byol/
       title: Bring Your Own License (BYOL)
 - sectiontitle: Docker Hub
   section:
-  - title: Docker Hub overview
-    path: /docker-hub/
-  - title: Create Docker Hub account
-    path: /docker-hub/accounts/
-  - title: Run Docker CLI commands
-    path: /docker-hub/commandline/
-  - sectiontitle: Discover content
-    section:
-    - title: Content overview
-      path: /docker-hub/discover/
-    - title: Official repos
-      path: /docker-hub/discover/official-repos/
-  - sectiontitle: Manage repositories
-    section:
-    - title: Repository overview
-      path: /docker-hub/manage/
-    - title: Create and configure repos
-      path: /docker-hub/manage/repos/
-    - title: Create orgs and teams
-      path: /docker-hub/manage/orgs-teams/
-    - title: Push images
-      path: /docker-hub/manage/push-images/
-  - sectiontitle: Autobuild images
-    section:
-    - title: Autobuild Docker images
-      path: /docker-hub/build/
-    - title: Autotest repositories
-      path: /docker-hub/build/autotest/
-    - title: Advanced options
-      path: /docker-hub/build/advanced/
-    - title: Build from GitHub
-      path: /docker-hub/build/github/
-    - title: Build from Bitbucket
-      path: /docker-hub/build/bitbucket/
-    - title: Webhooks
-      path: /docker-hub/build/webhooks/
-  - sectiontitle: Publish content
-    section:
-    - title: Publish Docker images
-      path: /docker-hub/publish/
-    - title: Certify Docker images
-      path: /docker-hub/publish/certify-images/
-    - title: Certify Docker logging plugins
-      path: /docker-hub/publish/certify-plugins-logging/
-    - title: Docker Hub trust chain
-      path: /docker-hub/publish/trustchain/
-    - title: Bring Your Own License (BYOL)
-      path: /docker-hub/publish/byol/
-    - title: FAQs on publishing center
-      path: /docker-hub/publish/faq-publisher/
-    - title: Customer FAQs
-      path: /docker-hub/publish/faq-customer/
+  - path: /docker-hub/
+    title: Overview of Docker Hub
+  - path: /docker-hub/accounts/
+    title: Use Docker Hub with Docker ID
+  - path: /docker-hub/orgs/
+    title: Teams & organizations
+  - path: /docker-hub/repos/
+    title: Repositories on Docker Hub
+  - path: /docker-hub/builds/
+    title: Automated builds
+  - path: /docker-hub/webhooks/
+    title: Webhooks for automated builds
+  - path: /docker-hub/bitbucket/
+    title: Automated builds with Bitbucket
+  - path: /docker-hub/github/
+    title: Automated builds from GitHub
+  - path: /docker-hub/official_repos/
+    title: Official repositories on Docker Hub
 - sectiontitle: Open-source projects
   section:
   - sectiontitle: Docker Notary
@@ -3461,7 +3623,7 @@ manuals:
     title: Docker Compose
     nosync: true
   - path: /docker-cloud/release-notes/
-    title:
+    title: Docker Cloud
     nosync: true
   - path: /docker-for-aws/release-notes/
     title: Docker for AWS
@@ -2,6 +2,12 @@ It is possible to re-use configuration fragments using extension fields. Those
 special fields can be of any format as long as they are located at the root of
 your Compose file and their names start with the `x-` character sequence.

+> **Note**
+>
+> Starting with the 3.7 format (for the 3.x series) and 2.4 format
+> (for the 2.x series), extension fields are also allowed at the root
+> of service, volume, network, config and secret definitions.
+
 ```none
 version: '2.1'
 x-custom:
@@ -29,7 +35,7 @@ logging:
 You may write your Compose file as follows:

 ```none
-version: '2.1'
+version: '3.4'
 x-logging:
   &default-logging
   options:
@@ -50,7 +56,7 @@ It is also possible to partially override values in extension fields using
 the [YAML merge type](http://yaml.org/type/merge.html). For example:

 ```none
-version: '2.1'
+version: '3.4'
 x-volumes:
   &default-volume
   driver: foobar-storage
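The merge key being discussed here (`<<: *default-volume`) pulls the anchored mapping's entries into the current mapping and lets sibling keys override individual entries. A rough Python analogy of that resolution (illustrative only, not Compose or YAML-library code):

```python
# The anchored extension field, e.g. `x-volumes: &default-volume`.
default_volume = {"driver": "foobar-storage"}

# `<<: *default-volume` followed by `driver: default` in a volume definition
# resolves like unpacking the base mapping and overriding selected keys:
vol = {**default_volume, "driver": "default"}
print(vol)  # {'driver': 'default'}
```

Keys stated explicitly after the merge win over the merged-in defaults, which is exactly the "partially override" behavior described above.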
@@ -2,6 +2,7 @@ This table shows which Compose file versions support specific Docker releases.

 | **Compose file format** | **Docker Engine release** |
 | ----------------------- | ------------------------- |
+| 3.7                     | 18.06.0+                  |
 | 3.6                     | 18.02.0+                  |
 | 3.5                     | 17.12.0+                  |
 | 3.4                     | 17.09.0+                  |
@@ -19,7 +20,7 @@ This table shows which Compose file versions support specific Docker releases.

 In addition to Compose file format versions shown in the table, Compose
 itself is on a release schedule, as shown in [Compose
 releases](https://github.com/docker/compose/releases/), but file format versions
-do not necessairly increment with each release. For example, Compose file format
+do not necessarily increment with each release. For example, Compose file format
 3.0 was first introduced in [Compose release
 1.10.0](https://github.com/docker/compose/releases/tag/1.10.0), and versioned
 gradually in subsequent releases.
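The compatibility rows changed by this hunk reduce to a simple lookup. A hypothetical helper (the `MIN_ENGINE` table and function name are illustrative, not part of Compose) sketches how the new 3.x rows map a file format to its minimum Engine release:

```python
# Minimum Docker Engine release per Compose file format,
# taken from the 3.x rows of the table in this hunk.
MIN_ENGINE = {
    "3.7": "18.06.0",
    "3.6": "18.02.0",
    "3.5": "17.12.0",
    "3.4": "17.09.0",
}

def min_engine(fmt: str) -> str:
    """Return the minimum Engine release supporting a given file format."""
    return MIN_ENGINE[fmt]

print(min_engine("3.7"))  # 18.06.0
```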
@@ -1,71 +0,0 @@
-1. Open a terminal and log into Docker Hub with the Docker CLI:
-
-   ```
-   $ docker login
-
-   Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
-   Username: gordon
-   Password:
-   WARNING! Your password will be stored unencrypted in /home/gwendolynne/.docker/config.json.
-   Configure a credential helper to remove this warning. See
-   https://docs.docker.com/engine/reference/commandline/login/#credentials-store
-   ```
-
-2. Search for the `busybox` image:
-
-   ```
-   $ docker search busybox
-
-   NAME                       DESCRIPTION                       STARS   OFFICIAL   AUTOMATED
-   busybox                    Busybox base image.               1268    [OK]
-   progrium/busybox                                             66                 [OK]
-   hypriot/rpi-busybox-httpd  Raspberry Pi compatible …         41
-   radial/busyboxplus         Full-chain, Internet enabled, …   19                 [OK]
-   ...
-   ```
-
-   > Private repos are not returned at the commandline. Go to the Docker Hub UI
-   > to see your allowable repos.
-
-3. Pull the official busybox image to your machine and list it (to ensure it was
-   pulled):
-
-   ```
-   $ docker pull busybox
-
-   Using default tag: latest
-   latest: Pulling from library/busybox
-   07a152489297: Pull complete
-   Digest: sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47
-   Status: Downloaded newer image for busybox:latest
-
-   $ docker image ls
-
-   REPOSITORY   TAG      IMAGE ID       CREATED       SIZE
-   busybox      latest   8c811b4aec35   11 days ago   1.15MB
-   ```
-
-4. Tag the official image (to differentiate it), list it, and push it to your
-   personal repo:
-
-   ```
-   $ docker tag busybox <DOCKER ID>/busybox:test-tag
-
-   $ docker image ls
-
-   REPOSITORY       TAG      IMAGE ID       CREATED       SIZE
-   gordon/busybox   v1       8c811b4aec35   11 days ago   1.15MB
-   busybox          latest   8c811b4aec35   11 days ago   1.15MB
-
-   $ docker push <DOCKER ID>/busybox:test-tag
-   ```
-
-5. Log out from Docker Hub:
-
-   ```
-   $ docker logout
-   ```
-
-6. Log on to the [Docker Hub UI](https://hub.docker.com){: target="_blank" class="_"} and view the image you
-   pushed.
@@ -51,7 +51,7 @@ You only need to set up the repository once, after which you can install Docker
    $ sudo rm /etc/yum.repos.d/docker*.repo
    ```

-2. Temporarily store the URL (that you [copied above](#find-your-docker-ee-repo-url)) in an environment variable. Replace `<DOCKER-EE-URL>` with your URL in the following command. This variable assignment does not persist when the session ends.
+2. Temporarily store the URL (that you [copied above](#find-your-docker-ee-repo-url)) in an environment variable. Replace `<DOCKER-EE-URL>` with your URL in the following command. This variable assignment does not persist when the session ends:

    ```bash
    $ export DOCKERURL="<DOCKER-EE-URL>"
@@ -85,13 +85,13 @@ You only need to set up the repository once, after which you can install Docker

    The repository can differ per your architecture and cloud provider, so review the options in this step before running:

-   **For all architectures _except_ IBM Power PC:**
+   **For all architectures _except_ IBM Power:**

    ```bash
    $ sudo yum-config-manager --enable rhel-7-server-extras-rpms
    ```

-   **For IBM Power PC only (little endian):**
+   **For IBM Power only (little endian):**

    ```bash
    $ sudo yum-config-manager --enable extras
@@ -127,7 +127,20 @@ You only need to set up the repository once, after which you can install Docker

 {% elsif section == "install-using-yum-repo" %}

-1. Install the _latest version_ of Docker EE, or go to the next step to install a specific version:
+There are currently two versions of Docker EE Engine available:
+
+* 18.03 - Use this version if you're only running Docker EE Engine.
+* 17.06 - Use this version if you're using Docker Enterprise Edition 2.0 (Docker
+  Engine, UCP, and DTR).
+
+1. By default, Docker EE Engine 17.06 is installed. If you want to install the
+   18.03 version, run:
+
+   ```bash
+   sudo yum-config-manager --enable docker-ee-stable-18.03
+   ```
+
+2. Install the latest patch release, or go to the next step to install a specific version:

    ```bash
    $ sudo yum -y install docker-ee
@@ -135,7 +148,7 @@ You only need to set up the repository once, after which you can install Docker

    If prompted to accept the GPG key, verify that the fingerprint matches `{{ gpg-fingerprint }}`, and if so, accept it.

-2. To install a _specific version_ of Docker EE (recommended in production), list versions and install:
+3. To install a _specific version_ of Docker EE (recommended in production), list versions and install:

    a. List and sort the versions available in your repo. This example sorts results by version number, highest to lowest, and is truncated:
@@ -155,7 +168,7 @@ You only need to set up the repository once, after which you can install Docker

    Docker is installed but not started. The `docker` group is created, but no users are added to the group.

-3. Start Docker:
+4. Start Docker:

    > If using `devicemapper`, ensure it is properly configured before starting Docker, per the [storage guide](/storage/storagedriver/device-mapper-driver/){: target="_blank" class="_" }.
@@ -163,7 +176,7 @@ You only need to set up the repository once, after which you can install Docker
    $ sudo systemctl start docker
    ```

-4. Verify that Docker EE is installed correctly by running the `hello-world`
+5. Verify that Docker EE is installed correctly by running the `hello-world`
    image. This command downloads a test image, runs it in a container, prints
    an informational message, and exits:
@@ -201,7 +214,7 @@ To manually install Docker EE, download the `.{{ package-format | downcase }}` f

 {% if linux-dist == "centos" %}
 1. Go to the Docker EE repository URL associated with your trial or subscription
-   in your browser. Go to `{{ linux-dist-url-slug }}/7/x86_64/stable-{{ site.docker_ee_version }}/Packages`
+   in your browser. Go to `{{ linux-dist-url-slug }}/7/x86_64/stable-<VERSION>/Packages`
    and download the `.{{ package-format | downcase }}` file for the Docker version you want to install.
 {% endif %}
@@ -271,7 +284,14 @@ To manually install Docker EE, download the `.{{ package-format | downcase }}` f
    $ sudo rm -rf /var/lib/docker
    ```

-3. If desired, remove the `devicemapper` thin pool and reformat the block
+3. Delete other Docker related resources:
+
+   ```bash
+   $ sudo rm -rf /run/docker
+   $ sudo rm -rf /var/run/docker
+   $ sudo rm -rf /etc/docker
+   ```
+
+4. If desired, remove the `devicemapper` thin pool and reformat the block
    devices that were part of it.

    You must delete any edited configuration files manually.
@@ -49,7 +49,7 @@
 <li><a href="https://www.docker.com/docker">Learn</a></li>
 <li><a href="https://blog.docker.com" target="_blank">Blog</a></li>
 <li><a href="https://training.docker.com/" target="_blank">Training</a></li>
-<li><a href="https://www.docker.com/docker-support-services">Support</a></li>
+<li><a href="https://success.docker.com/support">Support</a></li>
 <li><a href="https://success.docker.com/kbase">Knowledge Base</a></li>
 <li><a href="https://www.docker.com/products/resources">Resources</a></li>
 </ul>

@@ -42,10 +42,6 @@
 </nav>
 </div>
 </div>
-<!-- DockerCon banner -->
-<div class="banner">
-  <a target="_blank" href="https://2018.dockercon.com/"><img src="/images/dockercon.svg" alt="DockerCon banner"></a>
-</div>
 <!-- hero banner text -->
 <div class="container-fluid">
 <div class="row">

@@ -38,7 +38,7 @@ Always examine scripts downloaded from the internet before
 {:.warning}

 ```bash
-$ curl -fsSL get.docker.com -o get-docker.sh
+$ curl -fsSL https://get.docker.com -o get-docker.sh
 $ sudo sh get-docker.sh

 <output truncated>

@@ -1,21 +0,0 @@
-When you register for a Docker ID, your Docker ID is your user namespace
-in Docker Hub and your username on the [Docker Forums](https://forums.docker.com/){: target="_blank" class="_"}.
-
-1. Go to [Docker Hub](https://hub.docker.com/){: target="_blank" class="_"}.
-
-2. Click **Create Docker ID** (top right).
-
-3. Fill out the required fields:
-
-   - **Docker ID** (or username): Must be 4 to 30 characters long, only numbers
-     and lowercase letters.
-
-   - **Email address**: Must be unique and valid.
-
-   - **Password**: Must be 6 to 128 characters long.
-
-4. Click **Sign Up**. Docker sends a verification email to the address you
-   provided.
-
-5. Go to your email and click the link to verify your address. You cannot log
-   in until you verify.

@@ -242,7 +242,7 @@
 <li style="visibility: hidden"><a href="{{ edit_url }}"><i class="fa fa-pencil-square-o" aria-hidden="true"></i> Edit this page</a></li>{% endif %}
 <li><a href="https://github.com/docker/docker.github.io/issues/new?body=File: [{{ page.path }}](https://docs.docker.com{{ page.url }})"
   class="nomunge"><i class="fa fa-check" aria-hidden="true"></i> Request docs changes</a></li>
-<li><a href="https://www.docker.com/docker-support-services"><i class="fa fa-question" aria-hidden="true"></i> Get support</a></li>
+<li><a href="https://success.docker.com/support"><i class="fa fa-question" aria-hidden="true"></i> Get support</a></li>
 <!-- toggle mode -->
 <li>
 <div class="toggle-mode">

@@ -168,7 +168,7 @@ configure this app to use our SQL Server database, and then create a
    $ docker-compose build
    ```

-1. Make sure you allocate at least 4GB of memory to Docker Engine. Here is how
+1. Make sure you allocate at least 2GB of memory to Docker Engine. Here is how
    to do it on
    [Docker for Mac](/docker-for-mac/#/advanced) and
    [Docker for Windows](/docker-for-windows/#/advanced).

@@ -86,7 +86,7 @@ Depending on what you typed on the command line so far, it completes:
 - service names that make sense in a given context, such as services with running or stopped instances or services based on images vs. services based on Dockerfiles. For `docker-compose scale`, completed service names automatically have "=" appended.
 - arguments for selected options. For example, `docker-compose kill -s` completes some signals like SIGHUP and SIGUSR1.

-Enjoy working with Compose faster and with less typos!
+Enjoy working with Compose faster and with fewer typos!

 ## Compose documentation

@@ -243,6 +243,8 @@ supported by **Compose 1.21.0+**.
 Introduces the following additional parameters:

 - [`platform`](compose-file-v2.md#platform) for service definitions
+- Support for extension fields at the root of service, network, and volume
+  definitions

 ### Version 3

@@ -301,6 +303,18 @@ Introduces the following additional parameters:

 - [`tmpfs` size](index.md#long-syntax-3) for `tmpfs`-type mounts

+### Version 3.7
+
+An upgrade of [version 3](#version-3) that introduces new parameters. It is
+only available with Docker Engine version **18.06.0** and higher.
+
+Introduces the following additional parameters:
+
+- [`init`](index.md#init) in service definitions
+- [`rollback_config`](index.md#rollback_config) in deploy configurations
+- Support for extension fields at the root of service, network, volume, secret
+  and config definitions
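
The new 3.7 parameters can appear together in one file. A minimal, hypothetical sketch (service name, image, and values are placeholders, not recommendations):

    version: "3.7"
    services:
      web:
        image: alpine:latest
        init: true
        deploy:
          rollback_config:
            parallelism: 2
            order: start-first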

 ## Upgrading

 ### Version 2.x to 3.x

@@ -759,6 +759,20 @@ services:
         window: 120s
 ```

+#### rollback_config
+
+> [Version 3.7 file format](compose-versioning.md#version-37) and up
+
+Configures how the service should be rolled back in case of a failing
+update.
+
+- `parallelism`: The number of containers to roll back at a time. If set to 0, all containers roll back simultaneously.
+- `delay`: The time to wait between each container group's rollback (default 0s).
+- `failure_action`: What to do if a rollback fails. One of `continue` or `pause` (default `pause`).
+- `monitor`: Duration after each task update to monitor for failure `(ns|us|ms|s|m|h)` (default 0s).
+- `max_failure_ratio`: Failure rate to tolerate during a rollback (default 0).
+- `order`: Order of operations during rollbacks. One of `stop-first` (the old task is stopped before starting the new one) or `start-first` (the new task is started first, and the running tasks briefly overlap) (default `stop-first`).
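
A hedged sketch showing these sub-options in context (values are illustrative only):

    version: "3.7"
    services:
      web:
        image: nginx:alpine
        deploy:
          replicas: 4
          rollback_config:
            parallelism: 2
            delay: 5s
            failure_action: pause
            monitor: 10s
            max_failure_ratio: 0
            order: stop-first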

 #### update_config

 Configures how the service should be updated. Useful for configuring rolling
@@ -792,7 +806,7 @@ services:

 #### Not supported for `docker stack deploy`

-The following sub-options (supported for `docker compose up` and `docker compose run`) are _not supported_ for `docker stack deploy` or the `deploy` key.
+The following sub-options (supported for `docker-compose up` and `docker-compose run`) are _not supported_ for `docker stack deploy` or the `deploy` key.

 - [build](#build)
 - [cgroup_parent](#cgroup_parent)

@@ -1118,6 +1132,27 @@ If the image does not exist, Compose attempts to pull it, unless you have also
 specified [build](#build), in which case it builds it using the specified
 options and tags it with the specified tag.

+### init
+
+> [Added in version 3.7 file format](compose-versioning.md#version-37).
+
+Run an init inside the container that forwards signals and reaps processes.
+Either set a boolean value to use the default `init`, or specify a path to
+a custom one.
+
+    version: '3.7'
+    services:
+      web:
+        image: alpine:latest
+        init: true
+
+    version: '2.2'
+    services:
+      web:
+        image: alpine:latest
+        init: /usr/libexec/docker-init
+
 ### isolation

 Specify a container’s isolation technology. On Linux, the only supported value

@@ -1246,7 +1281,7 @@ For a full list of supported logging drivers and their options, see

 ### network_mode

-Network mode. Use the same values as the docker client `--net` parameter, plus
+Network mode. Use the same values as the docker client `--network` parameter, plus
 the special form `service:[service name]`.

     network_mode: "bridge"

@@ -1986,7 +2021,7 @@ conflicting with those used by other software.
 > [Added in version 3.4 file format](compose-versioning.md#version-34)

 Set a custom name for this volume. The name field can be used to reference
-networks that contain special characters. The name is used as is
+volumes that contain special characters. The name is used as is
 and will **not** be scoped with the stack name.

     version: '3.4'

@@ -129,10 +129,11 @@ services:
 When you set the same environment variable in multiple files, here's the
 priority used by Compose to choose which value to use:

-1. Compose file,
-2. Environment file,
-3. Dockerfile,
-4. Variable is not defined.
+1. Compose file
+2. Shell environment variables
+3. Environment file
+4. Dockerfile
+5. Variable is not defined

 In the example below, we set the same environment variable on an Environment
 file, and the Compose file:

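The precedence order above is implemented inside Compose itself; as an illustration only, the same fallback chain can be mimicked with shell parameter expansion (all variable names here are hypothetical):

```shell
compose_file_value=""               # 1. value from the Compose file (unset here)
shell_env_value="from-shell"        # 2. value from the shell environment
env_file_value="from-env-file"      # 3. value from an environment file
dockerfile_value="from-dockerfile"  # 4. ENV default baked into the Dockerfile
# First non-empty value wins, mirroring Compose's priority list:
resolved=${compose_file_value:-${shell_env_value:-${env_file_value:-${dockerfile_value}}}}
echo "$resolved"
```

With the Compose-file value unset, the shell environment value wins, matching step 2 of the list above.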
@@ -119,7 +119,7 @@ the following:
     redis:
       image: "redis:alpine"

-This Compose file defines two services, `web` and `redis`. The web service:
+This Compose file defines two services, `web` and `redis`. The `web` service:

 * Uses an image that's built from the `Dockerfile` in the current directory.
 * Forwards the exposed port 5000 on the container to port 5000 on the host

(binary image updated: size 176 KiB before, 334 KiB after)

@@ -4,7 +4,7 @@ keywords: documentation, docs, docker, compose, orchestration, containers, netwo
 title: Networking in Compose
 ---

-> **Note**: This document only applies if you're using [version 2 or higher of the Compose file format](compose-file.md#versioning). Networking features are not supported for version 1 (legacy) Compose files.
+> This page applies to Compose file formats [version 2](compose-file/compose-file-v2.md) and [higher](compose-file/). Networking features are not supported for Compose file [version 1 (legacy)](compose-file/compose-file-v1.md).

 By default Compose sets up a single
 [network](/engine/reference/commandline/network_create/) for your app. Each

@@ -83,7 +83,7 @@ Links allow you to define extra aliases by which a service is reachable from ano
     db:
       image: postgres

-See the [links reference](compose-file.md#links) for more information.
+See the [links reference](compose-file/compose-file-v2.md#links) for more information.

 ## Multi-host networking

@@ -129,12 +129,20 @@ Here's an example Compose file defining two custom networks. The `proxy` service
         foo: "1"
         bar: "2"

-Networks can be configured with static IP addresses by setting the [ipv4_address and/or ipv6_address](compose-file.md#ipv4-address-ipv6-address) for each attached network.
+Networks can be configured with static IP addresses by setting the [ipv4_address and/or ipv6_address](compose-file/compose-file-v2.md#ipv4-address-ipv6-address) for each attached network.

+Networks can also be given a [custom name](compose-file/index.md#name-1) (since version 3.5):
+
+    version: "3.5"
+    networks:
+      frontend:
+        name: custom_frontend
+        driver: custom-driver-1
+
 For full details of the network configuration options available, see the following references:

-- [Top-level `networks` key](compose-file.md#network-configuration-reference)
-- [Service-level `networks` key](compose-file.md#networks)
+- [Top-level `networks` key](compose-file/compose-file-v2.md#network-configuration-reference)
+- [Service-level `networks` key](compose-file/compose-file-v2.md#networks)

 ## Configure the default network

@@ -157,7 +165,7 @@ Instead of (or as well as) specifying your own networks, you can also change the

 ## Use a pre-existing network

-If you want your containers to join a pre-existing network, use the [`external` option](compose-file.md#network-configuration-reference):
+If you want your containers to join a pre-existing network, use the [`external` option](compose-file/compose-file-v2.md#network-configuration-reference):

     networks:
       default:
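
The snippet above is cut off by the hunk boundary; a minimal hedged sketch of the full shape, with a placeholder network name:

    networks:
      default:
        external:
          name: my-pre-existing-network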
@@ -15,7 +15,7 @@ dependencies, define exactly what needs to be included in the
 container. This is done using a file called `Dockerfile`. To begin with, the
 Dockerfile consists of:

-    FROM ruby:2.3.3
+    FROM ruby:2.5
     RUN apt-get update -qq && apt-get install -y build-essential libpq-dev nodejs
     RUN mkdir /myapp
     WORKDIR /myapp

@@ -34,7 +34,7 @@ Next, create a bootstrap `Gemfile` which just loads Rails. It'll be overwritten
 in a moment by `rails new`.

     source 'https://rubygems.org'
-    gem 'rails', '5.0.0.1'
+    gem 'rails', '5.2.0'

 Create an empty `Gemfile.lock` to build our `Dockerfile`.

@@ -80,24 +80,26 @@ List the files.

 ```shell
 $ ls -l
-total 64
--rw-r--r--  1 vmb  staff   222 Jun  7 12:05 Dockerfile
--rw-r--r--  1 vmb  staff  1738 Jun  7 12:09 Gemfile
--rw-r--r--  1 vmb  staff  4297 Jun  7 12:09 Gemfile.lock
--rw-r--r--  1 vmb  staff   374 Jun  7 12:09 README.md
--rw-r--r--  1 vmb  staff   227 Jun  7 12:09 Rakefile
-drwxr-xr-x 10 vmb  staff   340 Jun  7 12:09 app
-drwxr-xr-x  8 vmb  staff   272 Jun  7 12:09 bin
-drwxr-xr-x 14 vmb  staff   476 Jun  7 12:09 config
--rw-r--r--  1 vmb  staff   130 Jun  7 12:09 config.ru
-drwxr-xr-x  3 vmb  staff   102 Jun  7 12:09 db
--rw-r--r--  1 vmb  staff   211 Jun  7 12:06 docker-compose.yml
-drwxr-xr-x  4 vmb  staff   136 Jun  7 12:09 lib
-drwxr-xr-x  3 vmb  staff   102 Jun  7 12:09 log
-drwxr-xr-x  9 vmb  staff   306 Jun  7 12:09 public
-drwxr-xr-x  9 vmb  staff   306 Jun  7 12:09 test
-drwxr-xr-x  4 vmb  staff   136 Jun  7 12:09 tmp
-drwxr-xr-x  3 vmb  staff   102 Jun  7 12:09 vendor
+total 72
+-rw-r--r--  1 vmb  staff   223  5 26 14:20 Dockerfile
+-rw-r--r--  1 vmb  staff  2223  5 26 14:24 Gemfile
+-rw-r--r--  1 vmb  staff  5300  5 26 14:25 Gemfile.lock
+-rw-r--r--  1 vmb  staff   374  5 26 14:24 README.md
+-rw-r--r--  1 vmb  staff   227  5 26 14:24 Rakefile
+drwxr-xr-x 10 vmb  staff   320  5 26 14:24 app
+drwxr-xr-x  9 vmb  staff   288  5 26 14:25 bin
+drwxr-xr-x 16 vmb  staff   512  5 26 14:24 config
+-rw-r--r--  1 vmb  staff   130  5 26 14:24 config.ru
+drwxr-xr-x  3 vmb  staff    96  5 26 14:24 db
+-rw-r--r--  1 vmb  staff   266  5 26 14:22 docker-compose.yml
+drwxr-xr-x  4 vmb  staff   128  5 26 14:24 lib
+drwxr-xr-x  3 vmb  staff    96  5 26 14:24 log
+-rw-r--r--  1 vmb  staff    63  5 26 14:24 package.json
+drwxr-xr-x  9 vmb  staff   288  5 26 14:24 public
+drwxr-xr-x  3 vmb  staff    96  5 26 14:24 storage
+drwxr-xr-x 11 vmb  staff   352  5 26 14:24 test
+drwxr-xr-x  6 vmb  staff   192  5 26 14:24 tmp
+drwxr-xr-x  3 vmb  staff    96  5 26 14:24 vendor
 ```

@@ -164,10 +166,10 @@ seconds — the familiar refrain:
 db_1   | LOG:  database system is ready to accept connections
 db_1   | LOG:  autovacuum launcher started
 web_1  | => Booting Puma
-web_1  | => Rails 5.0.0.1 application starting in development on http://0.0.0.0:3000
+web_1  | => Rails 5.2.0 application starting in development
 web_1  | => Run `rails server -h` for more startup options
 web_1  | Puma starting in single mode...
-web_1  | * Version 3.9.1 (ruby 2.3.3-p222), codename: Private Caller
+web_1  | * Version 3.11.4 (ruby 2.5.1-p57), codename: Love Song
 web_1  | * Min threads: 5, max threads: 5
 web_1  | * Environment: development
 web_1  | * Listening on tcp://0.0.0.0:3000

@@ -22,7 +22,7 @@ Services are built once and then tagged, by default as `project_service`. For
 example, `composetest_db`. If the Compose file specifies an
 [image](/compose/compose-file/index.md#image) name, the image is
 tagged with that name, substituting any variables beforehand. See [variable
-substitution](#variable-substitution).
+substitution](/compose/compose-file/#variable-substitution).

 If you change a service's Dockerfile or the contents of its
 build directory, run `docker-compose build` to rebuild it.

@@ -56,3 +56,9 @@ This opens an interactive PostgreSQL shell for the linked `db` container.
 If you do not want the `run` command to start linked containers, use the `--no-deps` flag:

     docker-compose run --no-deps web python manage.py shell
+
+If you want to remove the container after running while overriding the container's restart policy, use the `--rm` flag:
+
+    docker-compose run --rm web python manage.py db upgrade
+
+This runs a database upgrade script, and removes the container when finished running, even if a restart policy is specified in the service configuration.

@@ -6,7 +6,7 @@ notoc: true
 ---

 You can control the order of service startup with the
-[depends_on](compose-file.md#depends-on) option. Compose always starts
+[depends_on](compose-file.md#depends_on) option. Compose always starts
 containers in dependency order, where dependencies are determined by
 `depends_on`, `links`, `volumes_from`, and `network_mode: "service:..."`.

@@ -46,9 +46,7 @@ when the daemon becomes unavailable. **Only do one of the following**.

 ## Live restore during upgrades

-The live restore feature supports restoring containers to the daemon for
-upgrades from one minor release to the next, such as when upgrading from Docker
-1.12.1 to 1.12.2.
+Live restore supports keeping containers running across Docker daemon upgrades, though this is limited to patch releases and does not support minor or major daemon upgrades.

 If you skip releases during an upgrade, the daemon may not restore its
 connection to the containers. If the daemon can't restore the connection, it

@@ -37,9 +37,9 @@ In the first case, your logs are processed in other ways and you may choose not
 to use `docker logs`. In the second case, the official `nginx` image shows one
 workaround, and the official Apache `httpd` image shows another.

-The official `nginx` image creates a symbolic link from
-`/dev/stdout` to `/var/log/nginx/access.log`, and creates another symbolic link
-from `/dev/stderr` to `/var/log/nginx/error.log`, overwriting the log files and
+The official `nginx` image creates a symbolic link from `/var/log/nginx/access.log`
+to `/dev/stdout`, and creates another symbolic link
+from `/var/log/nginx/error.log` to `/dev/stderr`, overwriting the log files and
 causing logs to be sent to the relevant special device instead. See the
 [Dockerfile](https://github.com/nginxinc/docker-nginx/blob/8921999083def7ba43a06fabd5f80e4406651353/mainline/jessie/Dockerfile#L21-L23).
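
The symlink trick can be seen without running nginx at all; in this sketch (the file name is illustrative), the "log file" is really a link to the process's stdout:

```shell
workdir=$(mktemp -d)
cd "$workdir"
# Same idea as the nginx Dockerfile: point the log file name at stdout.
ln -sf /dev/stdout access.log
readlink access.log
# A write to access.log now goes to this shell's stdout instead of to disk:
echo '127.0.0.1 - "GET / HTTP/1.1" 200' > access.log
```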
@@ -118,6 +118,8 @@ Its setting can have complicated effects:
 - If `--memory-swap` is explicitly set to `-1`, the container is allowed to use
   unlimited swap, up to the amount available on the host system.

+- Inside the container, tools like `free` report the host's available swap, not what's available inside the container. Don't rely on the output of `free` or similar tools to determine whether swap is present.
+
 #### Prevent a container from using swap

 If `--memory` and `--memory-swap` are set to the same value, this prevents

@@ -180,7 +182,7 @@ the container's cgroup on the host machine.
 |:-----------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------|
 | `--cpus=<value>` | Specify how much of the available CPU resources a container can use. For instance, if the host machine has two CPUs and you set `--cpus="1.5"`, the container is guaranteed at most one and a half of the CPUs. This is the equivalent of setting `--cpu-period="100000"` and `--cpu-quota="150000"`. Available in Docker 1.13 and higher. |
 | `--cpu-period=<value>` | Specify the CPU CFS scheduler period, which is used alongside `--cpu-quota`. Defaults to 100 micro-seconds. Most users do not change this from the default. If you use Docker 1.13 or higher, use `--cpus` instead. |
-| `--cpu-quota=<value>` | Impose a CPU CFS quota on the container. The number of microseconds per `--cpu-period` that the container is guaranteed CPU access. In other words, `cpu-quota / cpu-period`. If you use Docker 1.13 or higher, use `--cpus` instead. |
+| `--cpu-quota=<value>` | Impose a CPU CFS quota on the container. The number of microseconds per `--cpu-period` that the container is limited to before being throttled, acting as an effective ceiling. If you use Docker 1.13 or higher, use `--cpus` instead. |
 | `--cpuset-cpus` | Limit the specific CPUs or cores a container can use. A comma-separated list or hyphen-separated range of CPUs a container can use, if you have more than one CPU. The first CPU is numbered 0. A valid value might be `0-3` (to use the first, second, third, and fourth CPU) or `1,3` (to use the second and fourth CPU). |
 | `--cpu-shares` | Set this flag to a value greater or less than the default of 1024 to increase or reduce the container's weight, and give it access to a greater or lesser proportion of the host machine's CPU cycles. This is only enforced when CPU cycles are constrained. When plenty of CPU cycles are available, all containers use as much CPU as they need. In that way, this is a soft limit. `--cpu-shares` does not prevent containers from being scheduled in swarm mode. It prioritizes container CPU resources for the available CPU cycles. It does not guarantee or reserve any specific CPU access. |
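
The `--cpus` equivalence stated in the table (`--cpus = cpu-quota / cpu-period`) can be checked with quick arithmetic, using the example values from the first row:

```shell
cpu_period=100000   # --cpu-period, in microseconds
cpu_quota=150000    # --cpu-quota, in microseconds
# Effective CPU limit is the quota divided by the scheduler period:
cpus=$(awk -v q="$cpu_quota" -v p="$cpu_period" 'BEGIN { printf "%.1f", q / p }')
echo "$cpus"
```

This prints `1.5`, matching the `--cpus="1.5"` example in the table.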
@@ -19,55 +19,71 @@ include examples of customizing the output format.
 `join` concatenates a list of strings to create a single string.
 It puts a separator between each element in the list.

-{% raw %}
-    $ docker inspect --format '{{join .Args " , "}}' container
-{% endraw %}
+{% raw %}
+```
+docker inspect --format '{{join .Args " , "}}' container
+```
+{% endraw %}

 ## json

 `json` encodes an element as a json string.

-{% raw %}
-    $ docker inspect --format '{{json .Mounts}}' container
-{% endraw %}
+{% raw %}
+```
+docker inspect --format '{{json .Mounts}}' container
+```
+{% endraw %}

 ## lower

 `lower` transforms a string into its lowercase representation.

-{% raw %}
-    $ docker inspect --format "{{lower .Name}}" container
-{% endraw %}
+{% raw %}
+```
+docker inspect --format "{{lower .Name}}" container
+```
+{% endraw %}

 ## split

 `split` slices a string into a list of strings separated by a separator.

-{% raw %}
-    $ docker inspect --format '{{split (join .Names "/") "/"}}' container
-{% endraw %}
+{% raw %}
+```
+docker inspect --format '{{split .Image ":"}}'
+```
+{% endraw %}

 ## title

 `title` capitalizes the first character of a string.

-{% raw %}
-    $ docker inspect --format "{{title .Name}}" container
-{% endraw %}
+{% raw %}
+```
+docker inspect --format "{{title .Name}}" container
+```
+{% endraw %}

 ## upper

 `upper` transforms a string into its uppercase representation.

-{% raw %}
-    $ docker inspect --format "{{upper .Name}}" container
-{% endraw %}
+{% raw %}
+```
+docker inspect --format "{{upper .Name}}" container
+```
+{% endraw %}

 ## println

 `println` prints each value on a new line.

-{% raw %}
-    $ docker inspect --format='{{range .NetworkSettings.Networks}}{{println .IPAddress}}{{end}}' container
-{% endraw %}
+{% raw %}
+```
+docker inspect --format='{{range .NetworkSettings.Networks}}{{println .IPAddress}}{{end}}' container
+```
+{% endraw %}

@@ -137,9 +137,11 @@ global
 defaults
         mode    tcp
         option  dontlognull
-        timeout connect 5000
-        timeout client  50000
-        timeout server  50000
+        timeout connect 5s
+        timeout client  50s
+        timeout server  50s
+        timeout tunnel  1h
+        timeout client-fin 50s
 ### frontends
 # Optional HAProxy Stats Page accessible at http://<host-ip>:8181/haproxy?stats
 frontend dtr_stats

@@ -8,39 +8,38 @@ This guide contains tips and tricks for troubleshooting DTR problems.

 ## Troubleshoot overlay networks

-High availability in DTR depends on having overlay networking working in UCP.
-One way to test if overlay networks are working correctly you can deploy
-containers in different nodes, that are attached to the same overlay network
-and see if they can ping one another.
+High availability in DTR depends on swarm overlay networking. One way to test
+if overlay networks are working correctly is to deploy containers to the same
+overlay network on different nodes and see if they can ping one another.

-Use SSH to log into a UCP node, and run:
+Use SSH to log into a node and run:

-```none
+```bash
 docker run -it --rm \
   --net dtr-ol --name overlay-test1 \
   --entrypoint sh docker/dtr
 ```

-Then use SSH to log into another UCP node and run:
+Then use SSH to log into another node and run:

-```none
+```bash
 docker run -it --rm \
   --net dtr-ol --name overlay-test2 \
   --entrypoint ping docker/dtr -c 3 overlay-test1
 ```

-If the second command succeeds, it means that overlay networking is working
-correctly.
+If the second command succeeds, it indicates overlay networking is working
+correctly between those nodes.

-You can run this test with any overlay network, and any Docker image that has
-`sh` and `ping`.
+You can run this test with any attachable overlay network and any Docker image
+that has `sh` and `ping`.

 ## Access RethinkDB directly

 DTR uses RethinkDB for persisting data and replicating it across replicas.
 It might be helpful to connect directly to the RethinkDB instance running on a
 DTR replica to check the DTR internal state.

 > **Warning**: Modifying RethinkDB directly is not supported and may cause
 > problems.

@@ -51,27 +50,44 @@ commands:

 {% raw %}
 ```bash
-# REPLICA_ID will be the replica ID for the current node.
-REPLICA_ID=$(docker ps -lf name='^/dtr-rethinkdb-.{12}$' --format '{{.Names}}' | cut -d- -f3)
+# DTR_REPLICA_ID will be the replica ID for the current node.
+DTR_REPLICA_ID=$(docker ps -lf name='^/dtr-rethinkdb-.{12}$' --format '{{.Names}}' | cut -d- -f3)
+# List problems in the cluster detected by the current node.
+echo 'r.db("rethinkdb").table("current_issues")' | \
+docker run -i --rm \
+  --net dtr-ol \
+  -e DTR_REPLICA_ID=${DTR_REPLICA_ID} \
+  -v dtr-ca-$DTR_REPLICA_ID:/ca \
+  dockerhubenterprise/rethinkcli:v2.2.0-ni non-interactive; \
+echo
+```
+{% endraw %}
+
+On a healthy cluster the output will be `[]`.
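
The `cut`-based replica-ID extraction above can be exercised without a running DTR cluster; the container name below is made up, but has the shape `docker ps` would print:

```shell
# docker ps would print a name like this for the local RethinkDB container:
sample_name="dtr-rethinkdb-a1b2c3d4e5f6"
# Same extraction as in the command above: take the third dash-separated field.
replica_id=$(echo "$sample_name" | cut -d- -f3)
echo "$replica_id"
```

This prints `a1b2c3d4e5f6`, the 12-character replica ID embedded in the container name.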

 RethinkDB stores data in different databases that contain multiple tables. This
 container can also be used to connect to the local DTR replica and
 interactively query the contents of the DB.

 {% raw %}
 ```bash
 # DTR_REPLICA_ID will be the replica ID for the current node.
 DTR_REPLICA_ID=$(docker ps -lf name='^/dtr-rethinkdb-.{12}$' --format '{{.Names}}' | cut -d- -f3)
 # This command will start a RethinkDB client attached to the database
 # on the current node.
 docker run -it --rm \
   --net dtr-ol \
-  -v dtr-ca-$REPLICA_ID:/ca dockerhubenterprise/rethinkcli:v2.2.0 \
-  $REPLICA_ID
+  -e DTR_REPLICA_ID=${DTR_REPLICA_ID} \
+  -v dtr-ca-$DTR_REPLICA_ID:/ca \
+  dockerhubenterprise/rethinkcli:v2.2.0-ni
 ```
 {% endraw %}

-This container connects to the local DTR replica and launches a RethinkDB client
-that can be used to inspect the contents of the DB. RethinkDB
-stores data in different databases that contain multiple tables. The `rethinkcli`
-tool launches an interactive prompt where you can run RethinkDB
-queries such as:
+The `rethinkcli` tool launches an interactive prompt where you can run RethinkDB
+queries such as:

 ```none
-# List problems detected within the rethinkdb cluster
+# List problems in the cluster detected by the current node.
 > r.db("rethinkdb").table("current_issues")
 ...
 []

 # List all the DBs in RethinkDB
 > r.dbList()

@@ -91,7 +107,7 @@ queries such as:
   'repositories',
   'repository_team_access',
   'tags' ]

 # List the entries in the repositories table
 > r.db('dtr2').table('repositories')
 [ { id: '19f1240a-08d8-4979-a898-6b0b5b2338d8',

@@ -102,7 +118,7 @@ queries such as:
 ...
 ```

-Indvidual DBs and tables are a private implementation detail and may change in DTR
+Individual DBs and tables are a private implementation detail and may change in DTR
 from version to version, but you can always use `dbList()` and `tableList()` to explore
 the contents and data structure.

@@ -140,9 +140,11 @@ global
 defaults
         mode    tcp
         option  dontlognull
-        timeout connect 5000
-        timeout client  50000
-        timeout server  50000
+        timeout connect 5s
+        timeout client  50s
+        timeout server  50s
+        timeout tunnel  1h
+        timeout client-fin 50s
 ### frontends
 # Optional HAProxy Stats Page accessible at http://<host-ip>:8181/haproxy?stats
 frontend dtr_stats

@ -8,39 +8,38 @@ This guide contains tips and tricks for troubleshooting DTR problems.
## Troubleshoot overlay networks

High availability in DTR depends on swarm overlay networking. One way to test
if overlay networks are working correctly is to deploy containers to the same
overlay network on different nodes and see if they can ping one another.

Use SSH to log into a node and run:

```bash
docker run -it --rm \
  --net dtr-ol --name overlay-test1 \
  --entrypoint sh {{ page.dtr_org }}/{{ page.dtr_repo }}
```

Then use SSH to log into another node and run:

```bash
docker run -it --rm \
  --net dtr-ol --name overlay-test2 \
  --entrypoint ping {{ page.dtr_org }}/{{ page.dtr_repo }} -c 3 overlay-test1
```

If the second command succeeds, it indicates overlay networking is working
correctly between those nodes.

You can run this test with any attachable overlay network and any Docker image
that has `sh` and `ping`.

## Access RethinkDB directly

DTR uses RethinkDB for persisting data and replicating it across replicas.
It might be helpful to connect directly to the RethinkDB instance running on a
DTR replica to check the DTR internal state.

> **Warning**: Modifying RethinkDB directly is not supported and may cause
> problems.
@ -51,27 +50,44 @@ commands:
{% raw %}
```bash
# DTR_REPLICA_ID will be the replica ID for the current node.
DTR_REPLICA_ID=$(docker ps -lf name='^/dtr-rethinkdb-.{12}$' --format '{{.Names}}' | cut -d- -f3)
# List problems in the cluster detected by the current node.
echo 'r.db("rethinkdb").table("current_issues")' | \
docker run -i --rm \
  --net dtr-ol \
  -e DTR_REPLICA_ID=${DTR_REPLICA_ID} \
  -v dtr-ca-$DTR_REPLICA_ID:/ca \
  dockerhubenterprise/rethinkcli:v2.2.0-ni non-interactive; \
echo
```
{% endraw %}

On a healthy cluster the output will be `[]`.

RethinkDB stores data in different databases that contain multiple tables. This
container can also be used to connect to the local DTR replica and
interactively query the contents of the DB.

{% raw %}
```bash
# DTR_REPLICA_ID will be the replica ID for the current node.
DTR_REPLICA_ID=$(docker ps -lf name='^/dtr-rethinkdb-.{12}$' --format '{{.Names}}' | cut -d- -f3)
# This command will start a RethinkDB client attached to the database
# on the current node.
docker run -it --rm \
  --net dtr-ol \
  -e DTR_REPLICA_ID=${DTR_REPLICA_ID} \
  -v dtr-ca-$DTR_REPLICA_ID:/ca \
  dockerhubenterprise/rethinkcli:v2.2.0-ni
```
{% endraw %}

This container connects to the local DTR replica and launches a RethinkDB client
that can be used to inspect the contents of the DB. The `rethinkcli`
tool launches an interactive prompt where you can run RethinkDB
queries such as:

```none
# List problems in the cluster detected by the current node.
> r.db("rethinkdb").table("current_issues")
...
[]

# List all the DBs in RethinkDB
> r.dbList()
@ -91,7 +107,7 @@ queries such as:
'repositories',
'repository_team_access',
'tags' ]

# List the entries in the repositories table
> r.db('dtr2').table('repositories')
[ { id: '19f1240a-08d8-4979-a898-6b0b5b2338d8',
@ -102,7 +118,7 @@ queries such as:
...
```

Individual DBs and tables are a private implementation detail and may change in DTR
from version to version, but you can always use `dbList()` and `tableList()` to explore
the contents and data structure.
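For example, combining `dbList()` and `tableList()`, an exploration session might look like the following. This is only a sketch: the exact table names returned depend on your DTR version, since the schema is a private implementation detail.

```none
# List the tables of the dtr2 database
> r.db('dtr2').tableList()
...
[ ...,
  'repositories',
  'repository_team_access',
  'tags' ]
```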
@ -11,6 +11,29 @@ known issues for each DTR version.
You can then use [the upgrade instructions](admin/upgrade.md)
to upgrade your installation to the latest release.

## Version 2.3.8

(26 July 2018)

### Bug Fixes

* Fixed a bug where the repository tag list UI was not loading after a tag migration.

## Version 2.3.7

(17 May 2018)

**New features**

* Headers added to all API and registry responses to improve security (enforce HSTS, XSS protection, prevent MIME sniffing).

**Bug fixes**

* Prevent OOM during garbage collection by reading less data into memory at a time.
* Remove a race condition in which repos deleted during tag migration were causing tag migration to fail.
* Reduce noise in the jobrunner logs by changing some of the more detailed messages to debug level.
* Postgres updated to 9.6.6-r0.
* Eliminate a race condition in which the webhook for license updates doesn't fire.

## Version 2.3.6

(13 February 2018)
@ -37,7 +37,7 @@ Vulnerability Database that is installed on your DTR instance. When
this database is updated, DTR reviews the indexed components for newly
discovered vulnerabilities.

DTR scans both Linux and Windows images, but by default Docker doesn't push
foreign image layers for Windows images, so DTR can't scan them. If
you want DTR to scan your Windows images, [configure Docker to always push image
layers](pull-and-push-images.md), and it will scan the non-foreign layers.
@ -140,9 +140,11 @@ global
defaults
  mode tcp
  option dontlognull
  timeout connect 5s
  timeout client 50s
  timeout server 50s
  timeout tunnel 1h
  timeout client-fin 50s
### frontends
# Optional HAProxy Stats Page accessible at http://<host-ip>:8181/haproxy?stats
frontend dtr_stats
@ -8,32 +8,31 @@ This guide contains tips and tricks for troubleshooting DTR problems.
## Troubleshoot overlay networks

High availability in DTR depends on swarm overlay networking. One way to test
if overlay networks are working correctly is to deploy containers to the same
overlay network on different nodes and see if they can ping one another.

Use SSH to log into a node and run:

```bash
docker run -it --rm \
  --net dtr-ol --name overlay-test1 \
  --entrypoint sh {{ page.dtr_org }}/{{ page.dtr_repo }}
```

Then use SSH to log into another node and run:

```bash
docker run -it --rm \
  --net dtr-ol --name overlay-test2 \
  --entrypoint ping {{ page.dtr_org }}/{{ page.dtr_repo }} -c 3 overlay-test1
```

If the second command succeeds, it indicates overlay networking is working
correctly between those nodes.

You can run this test with any attachable overlay network and any Docker image
that has `sh` and `ping`.

## Access RethinkDB directly
@ -51,20 +50,31 @@ commands:
{% raw %}
```bash
# List problems in the cluster detected by the current node.
echo 'r.db("rethinkdb").table("current_issues")' | \
docker exec -i \
  $(docker ps -q --filter name=dtr-rethinkdb) \
  rethinkcli non-interactive; \
echo
```
{% endraw %}

On a healthy cluster the output will be `[]`.

RethinkDB stores data in different databases that contain multiple tables. This
container can also be used to connect to the local DTR replica and
interactively query the contents of the DB.

{% raw %}
```bash
docker exec -it $(docker ps -q --filter name=dtr-rethinkdb) rethinkcli
```
{% endraw %}

The `rethinkcli` tool launches an interactive prompt where you can run RethinkDB
queries such as:

```none
# List problems in the cluster detected by the current node.
> r.db("rethinkdb").table("current_issues")
...
[]

# List all the DBs in RethinkDB
> r.dbList()
@ -95,7 +105,7 @@ queries such as:
...
```

Individual DBs and tables are a private implementation detail and may change in DTR
from version to version, but you can always use `dbList()` and `tableList()` to explore
the contents and data structure.
@ -28,13 +28,13 @@ support dump:
## From the CLI

To get the support dump from the CLI, use SSH to log into a node and run:

```none
docker run --rm \
  --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  {{ page.docker_image }} \
  support > \
  docker-support-${HOSTNAME}-$(date +%Y%m%d-%H_%M_%S).tgz
```
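The timestamped filename in the redirect above is built from ordinary shell expansions, so you can generate (or predict) it without running the support command. A minimal sketch; the `hostname` fallback is an assumption for shells where `HOSTNAME` is unset:

```shell
# Build the same timestamped filename used by the support-dump redirect above.
# Fall back to `hostname` (or a placeholder) when HOSTNAME is not set.
host="${HOSTNAME:-$(hostname 2>/dev/null || echo node)}"
fname="docker-support-${host}-$(date +%Y%m%d-%H_%M_%S).tgz"
echo "$fname"
```

Embedding the hostname and timestamp keeps dumps collected from multiple nodes from overwriting one another.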
@ -28,15 +28,15 @@ support dump:
## From the CLI

To get the support dump from the CLI, use SSH to log into a node and run:

```none
docker run --rm \
  --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  {{ page.docker_image }} \
  support > \
  docker-support-${HOSTNAME}-$(date +%Y%m%d-%H_%M_%S).tgz
```

This support dump only contains logs for the node where you're running the
@ -157,7 +157,7 @@ Click **Confirm** to add your LDAP domain.
| Filter | The LDAP search filter used to find users. If you leave this field empty, all directory entries in the search scope with valid username attributes are created as users. | |
| Search subtree instead of just one level | Whether to perform the LDAP search on a single level of the LDAP tree, or search through the full LDAP tree starting at the Base DN. | |
| Select Group Members | Whether to further filter users by selecting those who are also members of a specific group on the directory server. This feature is helpful if the LDAP server does not support `memberOf` search filters. | |
| Iterate through group members | If `Select Group Members` is selected, this option searches for users by first iterating over the target group's membership, making a separate LDAP query for each member, as opposed to first querying for all users which match the above search query and intersecting those with the set of group members. This option can be more efficient in situations where the number of members of the target group is significantly smaller than the number of users which would match the above search filter, or if your directory server does not support simple pagination of search results. | |
| Group DN | If `Select Group Members` is selected, this specifies the distinguished name of the group from which to select users. | |
| Group Member Attribute | If `Select Group Members` is selected, the value of this group attribute corresponds to the distinguished names of the members of the group. | |
@ -24,7 +24,8 @@ docker container run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  --log-driver none \
  {{ page.ucp_org }}/{{ page.ucp_repo }}:{{ page.ucp_version }}${_ARCH} \
  support > \
  docker-support-${HOSTNAME}-$(date +%Y%m%d-%H_%M_%S).tgz
```

In this example, the environment variable is named `_ARCH`, but you can use any
@ -31,8 +31,7 @@ support dump:
## From the CLI

To get the support dump from the CLI, use SSH to log into a node and run:

```none
docker container run --rm \
@ -40,7 +39,8 @@ docker container run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  --log-driver none \
  {{ page.ucp_org }}/{{ page.ucp_repo }}:{{ page.ucp_version }} \
  support > \
  docker-support-${HOSTNAME}-$(date +%Y%m%d-%H_%M_%S).tgz
```

This support dump only contains logs for the node where you're running the
@ -10,45 +10,67 @@ redirect_from:
title: Best practices for writing Dockerfiles
---

Docker builds images automatically by reading the instructions from a
`Dockerfile` -- a text file that contains all commands, in order, needed to
build a given image. A `Dockerfile` adheres to a specific format and set of
instructions which you can find at [Dockerfile reference](/engine/reference/builder/).

A Docker image consists of read-only layers, each of which represents a
Dockerfile instruction. The layers are stacked, and each one is a delta of the
changes from the previous layer. Consider this `Dockerfile`:

```conf
FROM ubuntu:15.04
COPY . /app
RUN make /app
CMD python /app/app.py
```

Each instruction creates one layer:

- `FROM` creates a layer from the `ubuntu:15.04` Docker image.
- `COPY` adds files from your Docker client's current directory.
- `RUN` builds your application with `make`.
- `CMD` specifies what command to run within the container.

When you run an image and generate a container, you add a new _writable layer_
(the "container layer") on top of the underlying layers. All changes made to
the running container, such as writing new files, modifying existing files, and
deleting files, are written to this thin writable container layer.

For more on image layers (and how Docker builds and stores images), see
[About storage drivers](/storage/storagedriver/).

## General guidelines and recommendations

### Create ephemeral containers

The image defined by your `Dockerfile` should generate containers that are as
ephemeral as possible. By “ephemeral,” we mean that the container can be stopped
and destroyed, then rebuilt and replaced with an absolute minimum of set-up and
configuration.

Refer to [Processes](https://12factor.net/processes) under _The Twelve-factor App_
methodology to get a feel for the motivations of running containers in such a
stateless fashion.

### Understand build context

When you issue a `docker build` command, the current working directory is called
the _build context_. By default, the Dockerfile is assumed to be located here,
but you can specify a different location with the file flag (`-f`). Regardless
of where the `Dockerfile` actually lives, all recursive contents of files and
directories in the current directory are sent to the Docker daemon as the build
context.

> Build context example
>
> Create a directory for the build context and `cd` into it. Write "hello" into
> a text file named `hello` and create a Dockerfile that runs `cat` on it. Build
> the image from within the build context (`.`):
>
> ```shell
> mkdir myproject && cd myproject
@ -57,7 +79,9 @@ build context.
> docker build -t helloapp:v1 .
> ```
>
> Move `Dockerfile` and `hello` into separate directories and build a second
> version of the image (without relying on cache from the last build). Use `-f`
> to point to the Dockerfile and specify the directory of the build context:
>
> ```shell
> mkdir -p dockerfiles context
@ -66,37 +90,68 @@ build context.
> ```

Inadvertently including files that are not necessary for building an image
results in a larger build context and larger image size. This can increase the
time to build the image, time to pull and push it, and the container runtime
size. To see how big your build context is, look for a message like this when
building your `Dockerfile`:

```none
Sending build context to Docker daemon 187.8MB
```
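Since the whole directory is sent to the daemon, it can be useful to check the context size before building. One rough approach (an over-estimate, since it does not apply `.dockerignore` exclusions):

```shell
# Approximate the build-context size: the total size of the current
# directory tree. This over-counts slightly, because exclusions from
# .dockerignore are not applied.
du -sh .
```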
### Pipe Dockerfile through `stdin`

Docker 17.05 added the ability to build images by piping a `Dockerfile` through
`stdin` with a _local or remote build context_. In earlier versions, building an
image with a `Dockerfile` from `stdin` did not send the build context.

**Docker 17.04 and lower**

```
docker build -t foo -<<EOF
FROM busybox
RUN echo "hello world"
EOF
```

**Docker 17.05 and higher (local build context)**

```
docker build -t foo . -f-<<EOF
FROM busybox
RUN echo "hello world"
COPY . /my-copied-files
EOF
```

**Docker 17.05 and higher (remote build context)**

```
docker build -t foo https://github.com/thajeztah/pgadmin4-docker.git -f-<<EOF
FROM busybox
COPY LICENSE config_local.py /usr/local/lib/python2.7/site-packages/pgadmin4/
EOF
```
### Exclude with .dockerignore

To exclude files not relevant to the build (without restructuring your source
repository), use a `.dockerignore` file. This file supports exclusion patterns
similar to `.gitignore` files. For information on creating one, see the
[.dockerignore file](/engine/reference/builder.md#dockerignore-file).
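As a sketch, a `.dockerignore` for a typical Node.js project might exclude version-control data and locally installed dependencies. The entries below are illustrative assumptions, not requirements:

```none
# Hypothetical .dockerignore entries
.git
node_modules
*.log
tmp/
```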
### Use multi-stage builds

[Multi-stage builds](multistage-build.md) (in [Docker 17.05](/release-notes/docker-ce/#17050-ce-2017-05-04) or higher)
allow you to drastically reduce the size of your final image, without struggling
to reduce the number of intermediate layers and files.

Because an image is built during the final stage of the build process, you can
minimize image layers by [leveraging build cache](#leverage-build-cache).

For example, if your build contains several layers, you can order them from the
less frequently changed (to ensure the build cache is reusable) to the more
frequently changed:

* Install tools you need to build your application
@ -104,25 +159,25 @@ to the more frequently changed for example:
* Generate your application

A Dockerfile for a Go application could look like:

```
FROM golang:1.9.2-alpine3.6 AS build

# Install tools required for the project
# Run `docker build --no-cache .` to update dependencies
RUN apk add --no-cache git
RUN go get github.com/golang/dep/cmd/dep

# List project dependencies with Gopkg.toml and Gopkg.lock
# These layers are only re-built when the Gopkg files are updated
COPY Gopkg.lock Gopkg.toml /go/src/project/
WORKDIR /go/src/project/
# Install library dependencies
RUN dep ensure -vendor-only

# Copy the entire project and build it
# This layer is rebuilt when a file changes in the project directory
COPY . /go/src/project/
RUN go build -o /bin/project
@ -133,54 +188,51 @@ ENTRYPOINT ["/bin/project"]
CMD ["--help"]
```

### Don't install unnecessary packages

To reduce complexity, dependencies, file sizes, and build times, avoid
installing extra or unnecessary packages just because they might be “nice to
have.” For example, you don’t need to include a text editor in a database image.

### Decouple applications

Each container should have only one concern. Decoupling applications into
multiple containers makes it easier to scale horizontally and reuse containers.
For instance, a web application stack might consist of three separate
containers, each with its own unique image, to manage the web application,
database, and an in-memory cache in a decoupled manner.

Limiting each container to one process is a good rule of thumb, but it is not a
hard and fast rule. For example, not only can containers be
[spawned with an init process](/engine/reference/run.md#specify-an-init-process),
some programs might spawn additional processes of their own accord. For
instance, [Celery](http://www.celeryproject.org/) can spawn multiple worker
processes, and [Apache](https://httpd.apache.org/) can create one process per
request.

Use your best judgment to keep containers as clean and modular as possible. If
containers depend on each other, you can use [Docker container networks](/engine/userguide/networking/)
to ensure that these containers can communicate.

### Minimize the number of layers

In older versions of Docker, it was important that you minimized the number of
layers in your images to ensure they were performant. The following features
were added to reduce this limitation:

- In Docker 1.10 and higher, only the `RUN`, `COPY`, and `ADD` instructions
  create layers. Other instructions create temporary intermediate images, and
  do not directly increase the size of the build.

- In Docker 17.05 and higher, you can do [multi-stage builds](multistage-build.md)
  and only copy the artifacts you need into the final image. This allows you to
  include tools and debug information in your intermediate build stages without
  increasing the size of the final image.
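On Docker versions older than 1.10, a common way to keep the layer count down was to chain related commands into a single `RUN` instruction; this remains useful for keeping package-manager caches out of the image. A sketch (the package names are illustrative):

```conf
# One RUN instruction creates one layer; cleaning up in the same
# instruction keeps the apt cache out of that layer.
RUN apt-get update && apt-get install -y \
    curl \
    git \
 && rm -rf /var/lib/apt/lists/*
```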
### Sort multi-line arguments

Whenever possible, ease later changes by sorting multi-line arguments
alphanumerically. This helps to avoid duplication of packages and makes the
list much easier to update. This also makes PRs a lot easier to read and
review. Adding a space before a backslash (`\`) helps as well.
@ -193,57 +245,56 @@ Here’s an example from the [`buildpack-deps` image](https://github.com/docker-
|
|||
mercurial \
|
||||
subversion
|
||||
|
||||
### Leverage build cache

When building an image, Docker steps through the instructions in your
`Dockerfile`, executing each in the order specified. As each instruction is
examined, Docker looks for an existing image in its cache that it can reuse,
rather than creating a new (duplicate) image.

If you do not want to use the cache at all, you can use the `--no-cache=true`
option on the `docker build` command. However, if you do let Docker use its
cache, it is important to understand when it can, and cannot, find a matching
image. The basic rules that Docker follows are outlined below:

- Starting with a parent image that is already in the cache, the next
  instruction is compared against all child images derived from that base
  image to see if one of them was built using the exact same instruction. If
  not, the cache is invalidated.

- In most cases, simply comparing the instruction in the `Dockerfile` with one
  of the child images is sufficient. However, certain instructions require more
  examination and explanation.

- For the `ADD` and `COPY` instructions, the contents of the file(s)
  in the image are examined and a checksum is calculated for each file.
  The last-modified and last-accessed times of the file(s) are not considered in
  these checksums. During the cache lookup, the checksum is compared against the
  checksum in the existing images. If anything has changed in the file(s), such
  as the contents and metadata, then the cache is invalidated.

- Aside from the `ADD` and `COPY` commands, cache checking does not look at the
  files in the container to determine a cache match. For example, when processing
  a `RUN apt-get -y update` command the files updated in the container
  are not examined to determine if a cache hit exists. In that case just
  the command string itself is used to find a match.

Once the cache is invalidated, all subsequent `Dockerfile` commands generate new
images and the cache is not used.

## Dockerfile instructions

These recommendations are designed to help you create an efficient and
maintainable `Dockerfile`.

### FROM

[Dockerfile reference for the FROM instruction](/engine/reference/builder.md#from)

Whenever possible, use current official repositories as the basis for your
images. We recommend the [Alpine image](https://hub.docker.com/_/alpine/) as it
is tightly controlled and small in size (currently under 5 MB), while still
being a full Linux distribution.

### LABEL

licensing information, to aid in automation, or for other reasons. For each
label, add a line beginning with `LABEL` and with one or more key-value pairs.
The following examples show the different acceptable formats. Explanatory
comments are included inline.

> Strings with spaces must be quoted **or** the spaces must be escaped. Inner
> quote characters (`"`) must also be escaped.

```conf
# Set one or more individual labels
LABEL com.example.version="0.0.1-beta"
LABEL vendor1="ACME Incorporated"
LABEL vendor2=ZENITH\ Incorporated
LABEL com.example.release-date="2015-02-12"
LABEL com.example.version.is-production=""
```

objects](/config/labels-custom-metadata.md#managing-labels-on-objects). See also

### RUN

[Dockerfile reference for the RUN instruction](/engine/reference/builder.md#run)

Split long or complex `RUN` statements on multiple lines separated with
backslashes to make your `Dockerfile` more readable, understandable, and
maintainable.

#### apt-get

Probably the most common use-case for `RUN` is an application of `apt-get`.
Because it installs packages, the `RUN apt-get` command has several gotchas to
look out for.

Avoid `RUN apt-get upgrade` and `dist-upgrade`, as many of the “essential”
packages from the parent images cannot upgrade inside an
[unprivileged container](/engine/reference/run.md#security-configuration). If a package
contained in the parent image is out-of-date, contact its maintainers. If you
know there is a particular package, `foo`, that needs to be updated, use
`apt-get install -y foo` to update automatically.

Always combine `RUN apt-get update` with `apt-get install` in the same `RUN`
statement. For example:

Using `apt-get update` alone in a `RUN` statement causes caching issues and
subsequent `apt-get install` instructions fail. For example, say you have a
Dockerfile:

    FROM ubuntu:14.04
    RUN apt-get update
modify `apt-get install` by adding an extra package:

    RUN apt-get install -y curl nginx

Docker sees the initial and modified instructions as identical and reuses the
cache from previous steps. As a result the `apt-get update` is _not_ executed
because the build uses the cached version. Because the `apt-get update` is not
run, your build can potentially get an outdated version of the `curl` and
`nginx` packages.

Using `RUN apt-get update && apt-get install -y` ensures your Dockerfile
installs the latest package versions with no further coding or manual
intervention. This technique is known as "cache busting". You can also achieve
cache-busting by specifying a package version. This is known as version pinning,
recommendations.

        s3cmd=1.1.* \
        && rm -rf /var/lib/apt/lists/*

The `s3cmd` argument specifies a version `1.1.*`. If the image previously
used an older version, specifying the new one causes a cache bust of `apt-get
update` and ensures the installation of the new version. Listing packages on
each line can also prevent mistakes in package duplication.
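Putting these pieces together, a minimal end-to-end sketch of cache busting via version pinning (the extra package and pinned version here are illustrative, not taken from the original example):

```Dockerfile
RUN apt-get update && apt-get install -y \
    curl \
    s3cmd=1.1.* \
 && rm -rf /var/lib/apt/lists/*
```

Changing the pinned version edits the `RUN` command string, so the cached layer is invalidated and `apt-get update` runs again.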

In addition, when you clean up the apt cache by removing `/var/lib/apt/lists` it
reduces the image size, since the apt cache is not stored in a layer. Since the
`RUN` statement starts with `apt-get update`, the package cache is always
refreshed prior to `apt-get install`.

> Official Debian and Ubuntu images [automatically run `apt-get clean`](https://github.com/moby/moby/blob/03e2923e42446dbb830c654d0eec323a0b4ef02a/contrib/mkimage/debootstrap#L82-L105),
> so explicit invocation is not required.

#### Using pipes

Some `RUN` commands depend on the ability to pipe the output of one command into
another:

```Dockerfile
RUN wget -O - https://some.site | wc -l > /number
```

Docker executes these commands using the `/bin/sh -c` interpreter, which only
evaluates the exit code of the last operation in the pipe to determine success.
In the example above this build step succeeds and produces a new image so long
as the `wc -l` command succeeds, even if the `wget` command fails.

If you want the command to fail due to an error at any stage in the pipe,
prepend `set -o pipefail &&` to ensure that an unexpected error prevents the
build from inadvertently succeeding. For example:

```Dockerfile
RUN set -o pipefail && wget -O - https://some.site | wc -l > /number
```

> Not all shells support the `-o pipefail` option.
>
> In such cases (such as the `dash` shell, which is the default shell on
> Debian-based images), consider using the _exec_ form of `RUN` to explicitly
> choose a shell that does support the `pipefail` option. For example:
>
> ```Dockerfile
> RUN ["/bin/bash", "-c", "set -o pipefail && wget -O - https://some.site | wc -l > /number"]
> ```
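The difference `pipefail` makes is easy to demonstrate outside of a build; this is a plain-shell sketch, not Dockerfile syntax:

```shell
# Default behavior: a pipeline's status is that of its LAST command, so the
# failing `false` is masked by the succeeding `wc -l`.
sh -c 'false | wc -l > /dev/null'; echo "without pipefail: $?"

# With pipefail, any failing stage fails the whole pipeline.
bash -c 'set -o pipefail; false | wc -l > /dev/null'; echo "with pipefail: $?"
```

This prints `without pipefail: 0` followed by `with pipefail: 1`.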

### CMD

[Dockerfile reference for the CMD instruction](/engine/reference/builder.md#cmd)

The `CMD` instruction should be used to run the software contained by your
image, along with any arguments. `CMD` should almost always be used in the form
of `CMD [“executable”, “param1”, “param2”…]`. Thus, if the image is for a
service, such as Apache and Rails, you would run something like
`CMD ["apache2","-DFOREGROUND"]`. Indeed, this form of the instruction is
recommended for any service-based image.

In most other cases, `CMD` should be given an interactive shell, such as bash,
python and perl. For example, `CMD ["perl", "-de0"]`, `CMD ["python"]`, or
`CMD [“php”, “-a”]`. Using this form means that when you execute something like
`docker run -it python`, you’ll get dropped into a usable shell, ready to go.
`CMD` should rarely be used in the manner of `CMD [“param”, “param”]` in
conjunction with [`ENTRYPOINT`](/engine/reference/builder.md#entrypoint), unless
auto-extraction into the image, as in `ADD rootfs.tar.xz /`.

If you have multiple `Dockerfile` steps that use different files from your
context, `COPY` them individually, rather than all at once. This ensures that
each step's build cache is only invalidated (forcing the step to be re-run) if
the specifically required files change.

For example:
    fi

    exec "$@"

> **Note**:
> Configure app as PID 1
>
> This script uses [the `exec` Bash command](http://wiki.bash-hackers.org/commands/builtin/exec)
> so that the final running application becomes the container's PID 1. This
> allows the application to receive any Unix signals sent to the container.
> For more, see the [`ENTRYPOINT` reference](/engine/reference/builder.md#entrypoint).

The helper script is copied into the container and run via `ENTRYPOINT` on
container start:
If a service can run without privileges, use `USER` to change to a non-root
user. Start by creating the user and group in the `Dockerfile` with something
like `RUN groupadd -r postgres && useradd --no-log-init -r -g postgres postgres`.

> Consider an explicit UID/GID
>
> Users and groups in an image are assigned a non-deterministic UID/GID in that
> the “next” UID/GID is assigned regardless of image rebuilds. So, if it’s
> critical, you should assign an explicit UID/GID.

> Due to an [unresolved bug](https://github.com/golang/go/issues/13548) in the
> Go archive/tar package's handling of sparse files, attempting to create a user
> with a significantly large UID inside a Docker container can lead to disk
> exhaustion because `/var/log/faillog` in the container layer is filled with
> NULL (\0) characters. A workaround is to pass the `--no-log-init` flag to
> useradd. The Debian/Ubuntu `adduser` wrapper does not support this flag.

Avoid installing or using `sudo` as it has unpredictable TTY and
signal-forwarding behavior that can cause problems. If you absolutely need
functionality similar to `sudo`, such as initializing the daemon as `root` but
running it as non-`root`, consider using [“gosu”](https://github.com/tianon/gosu).

Lastly, to reduce layers and complexity, avoid switching `USER` back and forth
frequently.

### WORKDIR

[Dockerfile reference for the WORKDIR instruction](/engine/reference/builder.md#workdir)

For clarity and reliability, you should always use absolute paths for your
`WORKDIR`. Also, you should use `WORKDIR` instead of proliferating instructions
like `RUN cd … && do-something`, which are hard to read, troubleshoot, and
maintain.

### ONBUILD

With multi-stage builds, you use multiple `FROM` statements in your Dockerfile.
Each `FROM` instruction can use a different base, and each of them begins a new
stage of the build. You can selectively copy artifacts from one stage to
another, leaving behind everything you don't want in the final image. To show
how this works, let's adapt the Dockerfile from the previous section to use
multi-stage builds.

**`Dockerfile`**:

    import (
        "github.com/docker/docker/client"
        "github.com/docker/docker/api/types"
        "github.com/docker/docker/api/types/container"
        "github.com/docker/docker/pkg/stdcopy"

        "golang.org/x/net/context"
    )
    func main() {
            panic(err)
        }

        stdcopy.StdCopy(os.Stdout, os.Stderr, out)
    }
file them with the library maintainers.

| HTML (Web Components) | [docker-elements](https://github.com/kapalhq/docker-elements) |
| Java | [docker-client](https://github.com/spotify/docker-client) |
| Java | [docker-java](https://github.com/docker-java/docker-java) |
| Java | [docker-java-api](https://github.com/amihaiemil/docker-java-api) |
| NodeJS | [dockerode](https://github.com/apocas/dockerode) |
| NodeJS | [harbor-master](https://github.com/arhea/harbor-master) |
| Perl | [Eixo::Docker](https://github.com/alambike/eixo-docker) |

---
description: API Roles
keywords: API, Services, roles
redirect_from:
- /docker-cloud/feature-reference/api-roles/
title: Service API roles
notoc: true
---

You can configure a service so that it can access the Docker Cloud API. When you
grant API access to a service, its containers receive a token through an
environment variable, which is used to query the Docker Cloud API.

Docker Cloud has a "full access" role which, when granted, allows any operation
to be performed on the API. You can enable this option on the **Environment
variables** screen of the Service wizard, or [specify it in your service's
stackfile](stack-yaml-reference.md#roles). When enabled, Docker Cloud generates
an authorization token for the service's containers which is stored in an
environment variable called `DOCKERCLOUD_AUTH`.

Use this variable to set the `Authorization` HTTP header when calling
Docker Cloud's API:

```bash
$ curl -H "Authorization: $DOCKERCLOUD_AUTH" -H "Accept: application/json" https://cloud.docker.com/api/app/v1/service/
```

You can use this feature with Docker Cloud's [automatic environment
variables](service-links.md) to let your application inside a container read and
perform operations using Docker Cloud's API.

```bash
$ curl -H "Authorization: $DOCKERCLOUD_AUTH" -H "Accept: application/json" $WEB_DOCKERCLOUD_API_URL
```

For example, you can use information retrieved using the API to read the linked
endpoints, and use them to reconfigure a proxy container.

See the [API documentation](/apidocs/docker-cloud.md) for more information on
the different API operations available.
---
description: Autodestroy
keywords: Autodestroy, service, terminate, container
redirect_from:
- /docker-cloud/feature-reference/auto-destroy/
title: Destroy containers automatically
---

When enabled on a service, **Autodestroy** automatically terminates containers
when they stop. **This destroys all data in the container on stop.** This is
useful for one-time actions that store their results in an external system.

The following Autodestroy options are available:

- `OFF`: the container remains in the **Stopped** state regardless of exit code, and is not destroyed.
- `ON_SUCCESS`: if the container stops with an exit code of 0 (normal shutdown), Docker Cloud automatically destroys it. If it stops with any other exit code, Docker Cloud leaves it in the **Stopped** state.
- `ALWAYS`: if the container stops, Docker Cloud automatically terminates it regardless of the exit code.

If **Autorestart** is activated, Docker Cloud evaluates whether to try restarting the container or not before evaluating **Autodestroy**.
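The evaluation order can be sketched as a small decision function. The policy strings are the real option values; the function itself is an illustration of the behavior described above, not Docker Cloud code:

```go
package main

import "fmt"

// onStop sketches what happens when a container stops: Autorestart is
// consulted first; only if the container is not restarted does Autodestroy
// apply.
func onStop(autorestart, autodestroy string, exitCode int) string {
	if autorestart == "ALWAYS" || (autorestart == "ON_FAILURE" && exitCode != 0) {
		return "restart"
	}
	if autodestroy == "ALWAYS" || (autodestroy == "ON_SUCCESS" && exitCode == 0) {
		return "destroy"
	}
	return "stopped"
}

func main() {
	fmt.Println(onStop("OFF", "ON_SUCCESS", 0)) // destroy
	fmt.Println(onStop("ON_FAILURE", "OFF", 1)) // restart
	fmt.Println(onStop("OFF", "OFF", 1))        // stopped
}
```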

## Launch a service with Autodestroy

You can enable **Autodestroy** on the **Service configuration** step of the
**Launch new service** wizard.

Autodestroy is set to `OFF` (deactivated) by default.

### Use the API or CLI

You can enable autodestroy when launching a service through the API or CLI.
If not provided, it has a default value of `OFF`. Check our
[API documentation](/apidocs/docker-cloud.md) for more information.

#### Launch with autodestroy using the API

```
POST /api/app/v1/service/ HTTP/1.1
{
  "autodestroy": "ALWAYS",
  [...]
}
```

#### Launch with autodestroy using the CLI

```
$ docker-cloud service run --autodestroy ALWAYS [...]
```

## Enable autodestroy on an already deployed service

You can also activate or deactivate the **Autodestroy** setting on a service
after it has been deployed, by editing the service.

1. Go to the service detail page.
2. Click **Edit**.
3. Select the new autodestroy setting.
4. Click **Save**.

### Use the API or CLI

You can set the **Autodestroy** option after the service has been deployed,
using the API or CLI. Check our [API documentation](/apidocs/docker-cloud.md)
for more information.

#### Enable autodestroy using the API

```
PATCH /api/app/v1/service/(uuid)/ HTTP/1.1
{
  "autodestroy": "ALWAYS"
}
```

#### Enable autodestroy using the CLI

```
$ docker-cloud service set --autodestroy ALWAYS (name or uuid)
```

---
description: Autoredeploy
keywords: Autoredeploy, image, store, service
redirect_from:
- /docker-cloud/feature-reference/auto-redeploy/
title: Redeploy services automatically
---

[](https://www.youtube.com/watch?v=I4depUwfbFc "Automated Deployments with Docker Cloud"){:target="_blank"}

Docker Cloud's **Autoredeploy** feature allows a service that uses an image
stored in Docker Hub to automatically redeploy whenever a new image is pushed or
built.

> **Notes**:
>
> * **Autoredeploy** works only for hub images with the _latest_ tag.
>
> * To enable **autoredeploy** on an image stored in a third party registry,
>   you need to use [redeploy triggers](triggers.md) instead.

## Launch a new service with autoredeploy

You can launch a service with **autoredeploy** enabled by enabling it from the
**general settings** section of the **Launch new service** wizard.

By default, autoredeploy is *deactivated*.

### Use the CLI or API

You can enable **autoredeploy** when launching a service using the CLI or API.
By default, autoredeploy is set to `false`. See the
[API documentation](/apidocs/docker-cloud.md) for more information.

#### Enable autoredeploy using the CLI

```
$ docker-cloud service run --autoredeploy [...]
```

#### Enable autoredeploy using the API

```
POST /api/app/v1/service/ HTTP/1.1
{
  "autoredeploy": true,
  [...]
}
```

## Enable autoredeploy on an already deployed service

You can activate or deactivate **autoredeploy** on a service after it has been
deployed.

1. Click into the service detail page.
2. Click **Edit**.
3. Change the **autoredeploy** setting on the form to `true`.
4. Click **Save changes**.

### Use the CLI or API

You can set the **autoredeploy** option after the service has been deployed,
using the CLI or API. Check our [API documentation](/apidocs/docker-cloud.md)
for more information.

#### Enable autoredeploy using the CLI

```bash
$ docker-cloud service set --autoredeploy (name or uuid)
```

#### Enable autoredeploy using the API

```
PATCH /api/app/v1/service/(uuid)/ HTTP/1.1
{
  "autoredeploy": true
}
```

---
description: Automatically restart a container in Docker Cloud
keywords: container, restart, automated
redirect_from:
- /docker-cloud/feature-reference/autorestart/
title: Restart a container automatically
---

**Autorestart** is a service-level setting that can automatically start your
containers if they stop or crash. You can use this setting as an automatic crash
recovery mechanism.

Autorestart uses Docker's `--restart` flag. When called, the Docker daemon
attempts to restart the container until it succeeds. If the first restart
attempt fails, the daemon continues to attempt a restart, but uses an
incremental back-off algorithm.

The following Autorestart options are available:

- `OFF`: the container does not restart, regardless of the exit code.
- `ON_FAILURE`: the container restarts *only* if it stops with an exit code other than 0. (0 is for normal shutdown.)
- `ALWAYS`: the container restarts automatically, regardless of the exit code.

> **Note**: If **Autorestart** is set to `ALWAYS`, **Autodestroy** must be set to `OFF`.

If the Docker daemon in a node restarts (because it was upgraded, or because the
underlying node was restarted), the daemon only restarts containers that
have **Autorestart** set to `ALWAYS`.

## Launching a Service with Autorestart

You can enable **Autorestart** on the **Service configuration** step of the
**Launch new service** wizard.

Autorestart is set to `OFF` by default, which means that autorestart is
*deactivated*.

### Using the API and CLI

You can set the **Autorestart** option when launching a service through the
API and through the CLI. Autorestart is set to `OFF` by default.

#### Set autorestart using the API

```
POST /api/app/v1/service/ HTTP/1.1
{
  "autorestart": "ON_FAILURE",
  [...]
}
```

#### Set autorestart using the CLI

```
$ docker-cloud service run --autorestart ON_FAILURE [...]
```

See our [API documentation](/apidocs/docker-cloud.md) for more information.

## Enabling autorestart on an already deployed service

You can activate or deactivate **Autorestart** on a service after it has been
deployed by editing the service.

1. Go to the service detail page.
2. Click **Edit**.
3. Choose the autorestart option to apply.
4. Click **Save**.

### Using the API and CLI

You can change the **Autorestart** setting after the service has been deployed
using the API or CLI.

#### Enable autorestart using the API

```
PATCH /api/app/v1/service/(uuid)/ HTTP/1.1
{
  "autorestart": "ALWAYS"
}
```

#### Enable autorestart using the CLI

```
$ docker-cloud service set --autorestart ALWAYS (name or uuid)
```

See the [API documentation](/apidocs/docker-cloud.md) for more information.
---
description: Deployment tags
keywords: Deployment, tags, services
redirect_from:
- /docker-cloud/feature-reference/deploy-tags/
title: Deployment tags
---

You can use **Deployment tags** to make sure certain services are deployed only
to specific nodes. Tagged services only deploy to nodes that match **all** of
the tags on that service. Docker Cloud shows an error if no nodes match all of
the service's deployment tags. A node might have extra tags that are not
specified on the service, but these do not prevent the service from deploying.

You can specify multiple tags on services, on individual nodes, and on node clusters. All nodes that are members of a node cluster inherit the tags specified on the cluster. See [Automatic deployment tags](deploy-tags.md#automatic-deployment-tags) to learn more.
#### Deployment tags example

In this example, we have five nodes. One is used for development and testing, and four are used for production. The production nodes are split between frontend and backend. The table below summarizes their names and tags:

| Node name | Tags |
| --------- | ---- |
| my-node-dev | `aws` `us-east-1` `development` `test` `frontend` `backend` |
| my-node-prod-1 | `aws` `us-east-1` `production` `frontend` |
| my-node-prod-2 | `aws` `us-east-2` `production` `frontend` |
| my-node-prod-3 | `aws` `us-east-1` `production` `backend` |
| my-node-prod-4 | `aws` `us-east-2` `production` `backend` |

Imagine that you deploy a service called **my-webapp-dev** with two tags:
`development` and `frontend`. All containers for the service would be deployed
to the node labeled **my-node-dev**, because that node is tagged with both
`development` *and* `frontend`.

Similarly, if you deploy a production service called **my-webapp-prod** with the
two tags `production` and `frontend`, all containers for that service
would be deployed to the two nodes **my-node-prod-1** and **my-node-prod-2**,
because those two nodes are tagged with both `production` *and* `frontend`.

> **Tip**: Containers are distributed between the two nodes based on the
> [deployment strategy](../infrastructure/deployment-strategies.md) selected.
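Deployment tags can also be set declaratively in a stackfile. The following is a hedged sketch (assuming the stack YAML `tags` key; the service name and tag values come from the example above), showing how **my-webapp-prod** could pin itself to production frontend nodes:

```yaml
# Sketch of a stackfile entry for the my-webapp-prod example above.
# The tags key constrains deployment to nodes carrying *all* listed tags.
my-webapp-prod:
  image: 'dockercloud/hello-world:latest'
  target_num_containers: 2
  tags:
    - production
    - frontend
```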
## Automatic deployment tags

When you launch a node cluster, four tags are automatically assigned to the
node cluster and all nodes in that cluster:

* Provider name (for example `digitalocean`, `aws`)
* "[Bring your own node](../infrastructure/byoh.md)" (BYON) status (for example `byon=false` or `byon=true`)
* Region name (for example `us-east-1`, `lon1`)
* Node cluster name (for example `my-node-cluster-dev-1`)
## Add tags to a node or node cluster at launch

A single node is considered a node cluster with a size of 1. Because of this, you create a node cluster even if you are only launching a single node.

1. Click **Node clusters** in the left navigation menu.
2. Click **Create**.
3. In the **Deploy tags** field, enter the tags to assign to the cluster and all
   of its member nodes.

   When the node cluster scales up, new nodes automatically inherit the
   node cluster's tags, including the [Automatic deployment tags](deploy-tags.md#automatic-deployment-tags) described above.

   You can see a node cluster's tags on the left side of the cluster's detail page.

4. Click **Launch node cluster**.
### Update or add tags on a node or node cluster

To change the tags on an existing node or node cluster:

1. Go to the node or node cluster's detail page.
2. Click the tags below the node or node cluster status line to edit them.

   If there are no tags assigned to the cluster, move your cursor under the deployment status line and click the tag icon that appears.

3. In the dialog that appears, add or remove tags.

   The individual nodes in a cluster inherit all tags from the cluster, including automatic tags. Each individual node can have extra tags in addition to the tags it inherits as a member of a node cluster.

4. Click **Save** to save your tag changes to the nodes.
## Add tags to a service at launch

To deploy a service to a specific node using tags, you must first specify one or more tags on the service. If you don't add any tags to a service, the service is deployed to all available nodes.

1. Use the **Create new service** wizard to start a new service.
2. Select tags from the **deployment constraints** list to add to this service. Only tags that already exist on your nodes appear in the list.

   Tags on a service define which nodes are used at deployment: only nodes that match *all* tags specified on the service are used for deployment.
### Update or add tags to a service

You can add or remove tags on a running service from the service's detail view.

1. From the service detail view, click **Edit**.
2. Select tags from the **deployment constraints** list to add to this service. Only tags that already exist on your nodes appear in the list.
3. Click **Save Changes**.

**If you update the tags on a service, you must redeploy the service for them to take effect.** To do this, you can terminate all containers and relaunch them, or you can scale
your service down to zero containers and then scale it back up. New containers are
deployed to the nodes that match the new tags.

## Using deployment tags in the API and CLI

See the [tags API and CLI documentation](/apidocs/docker-cloud.md#tags) for more information on how to use tags with our API and CLI.
---
description: Deploy to Docker Cloud
keywords: deploy, docker, cloud
redirect_from:
- /docker-cloud/feature-reference/deploy-to-cloud/
- /docker-cloud/tutorials/deploy-to-cloud/
title: Add a "Deploy to Docker Cloud" button
---

The **Deploy to Docker Cloud** button allows developers to deploy stacks to
Docker Cloud with one click, as long as they are logged in. The button is intended
to be added to `README.md` files in public GitHub repositories, although it can
be used anywhere else.

> **Note**: You must be _logged in_ to Docker Cloud for the button to work.
> Otherwise, the link results in a 404 error.

This is an example button to deploy our [python quickstart](https://github.com/docker/dockercloud-quickstart-python){: target="_blank" class="_"}:

<a href="https://cloud.docker.com/stack/deploy/?repo=https://github.com/docker/dockercloud-quickstart-python" target="_blank" class="_"><img src="https://files.cloud.docker.com/images/deploy-to-dockercloud.svg"></a>
The button redirects the user to the **Launch new Stack** wizard, with the stack
definition already filled with the contents of the first of the following files
found in the repository (taking into account branch and relative path):

* `docker-cloud.yml`
* `docker-compose.yml`
* `fig.yml`

The user can still modify the stack definition before deployment.
## Add the 'Deploy to Docker Cloud' button in GitHub

Add the following snippet to your `README.md` file:

```md
[![Deploy to Docker Cloud](https://files.cloud.docker.com/images/deploy-to-dockercloud.svg)](https://cloud.docker.com/stack/deploy/)
```

Docker Cloud detects the HTTP `Referer` header and deploys the stack file found in the repository, branch, and relative path where the source `README.md` file is stored.
## Add the 'Deploy to Docker Cloud' button in Docker Hub

If the button is displayed on Docker Hub, Docker Cloud cannot automatically detect the source GitHub repository, branch, and path. In this case, edit the repository description and add the following code:

```md
[![Deploy to Docker Cloud](https://files.cloud.docker.com/images/deploy-to-dockercloud.svg)](https://cloud.docker.com/stack/deploy/?repo=<repo_url>)
```

where `<repo_url>` is the path to your GitHub repository (see below).
## Add the 'Deploy to Docker Cloud' button anywhere else

If you want to use the button somewhere else, such as in external documentation or on a landing site, create a link to the following URL:

```
https://cloud.docker.com/stack/deploy/?repo=<repo_url>
```

where `<repo_url>` is the path to your GitHub repository. For example:

* `https://github.com/docker/dockercloud-quickstart-python`
* `https://github.com/docker/dockercloud-quickstart-python/tree/staging` to use branch `staging` instead of the default branch
* `https://github.com/docker/dockercloud-quickstart-python/tree/master/example` to use branch `master` and the relative path `/example` inside the repository
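The link and badge markup above are mechanical to construct. As an illustrative sketch (the helper names are ours, not part of any Docker API), a small script can generate both from a repository URL:

```python
# Badge image published by Docker Cloud (see the URL listed below).
IMG = "https://files.cloud.docker.com/images/deploy-to-dockercloud.svg"

def deploy_url(repo_url):
    # The wizard URL carries the GitHub repo (plus optional
    # /tree/<branch>/<path>) in the `repo` query parameter.
    return "https://cloud.docker.com/stack/deploy/?repo=" + repo_url

def deploy_badge(repo_url):
    # Markdown badge suitable for pages where the Referer header
    # cannot identify the source repository.
    return "[![Deploy to Docker Cloud]({})]({})".format(IMG, deploy_url(repo_url))
```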
You can use your own image for the link (or no image). Our **Deploy to Docker Cloud** image is available at the following URL:

* `https://files.cloud.docker.com/images/deploy-to-dockercloud.svg`
---
description: Manage your Docker Cloud Applications
keywords: applications, reference, Cloud
title: Applications in Docker Cloud
notoc: true
---

Applications in Docker Cloud are usually several services linked together using
the specifications from a [Stackfile](stacks.md) or a Compose file. You can also
create individual services using the Docker Cloud Services wizard, and you can
attach [Volumes](volumes.md) to use as long-lived storage for your services.

If you are using Docker Cloud's autobuild and autotest features, you can also
use [autoredeploy](auto-redeploy.md) to automatically redeploy the application
each time its underlying services are updated.

* [Deployment tags](deploy-tags.md)
* [Add a Deploy to Docker Cloud button](deploy-to-cloud-btn.md)
* [Manage service stacks](stacks.md)
* [Stack YAML reference](stack-yaml-reference.md)
* [Publish and expose service or container ports](ports.md)
* [Redeploy running services](service-redeploy.md)
* [Scale your service](service-scaling.md)
* [Service API Roles](api-roles.md)
* [Service discovery and links](service-links.md)
* [Work with data volumes](volumes.md)
* [Create a proxy or load balancer](load-balance-hello-world.md)

### Automate your applications

Use the following features to automate specific actions on your Docker Cloud applications.

* [Automatic container destroy](auto-destroy.md)
* [Automatic container restart](autorestart.md)
* [Autoredeploy](auto-redeploy.md)
* [Use triggers](triggers.md)
---
description: Create a proxy or load balancer
keywords: proxy, load, balancer
redirect_from:
- /docker-cloud/getting-started/intermediate/load-balance-hello-world/
- /docker-cloud/tutorials/load-balance-hello-world/
title: Create a proxy or load balancer
---

When you deploy a web service to multiple containers, you might want to load
balance between the containers using a proxy or load balancer.

In this tutorial, you use the **dockercloud/hello-world** image as a sample
web service and **dockercloud/haproxy** to load balance traffic to the service.
If you follow this tutorial exactly, your traffic is distributed evenly
across eight containers in a node cluster containing four nodes.
## Create a Node Cluster

First, deploy a node cluster of four nodes.

1. If you have not linked to a host or cloud services provider, do that now.

   You can find instructions on how to link to your own hosts, or to different providers, [here](../infrastructure/index.md).

2. Click **Node Clusters** in the left-hand navigation menu.

3. Click **Create**.

4. Enter a name for the node cluster, then select the **Provider**, **Region**, and **Type/Size**.

5. Add a **deployment tag** of `web`. (This is used to make sure the right services are deployed to the correct nodes.)

6. Drag or increment the **Number of nodes** slider to **4**.

7. Click **Launch node cluster**.

   This might take up to 10 minutes while the nodes are provisioned. This is a great time to grab a cup of coffee.

   Once the node cluster is deployed and all four nodes are running, we're
   ready to continue and launch our web service.
## Launch the web service

1. Click **Services** in the left hand menu, and click **Create**.

2. Click the **rocket icon** at the top of the page, and select the **dockercloud/hello-world** image.

3. On the **Service configuration** screen, configure the service using these values:

   * **image**: Set the tag to `latest` so you get the most recent build of the image.
   * **service name**: `web`. This is what we call the service internally.
   * **number of containers**: 8
   * **deployment strategy**: `high availability`. Deploy evenly to all nodes.
   * **deployment constraints**: `web`. Deploy only to nodes with this tag.

   > **Note**: For this tutorial, make sure you change the *deployment strategy* to **High Availability**, and add the *tag* **web** to ensure this service is deployed to the right nodes.

4. Finally, scroll down to the **Ports** section and make sure the **published** box is checked next to port 80.

   We're going to access these containers from the public internet, and
   publishing the port makes them available externally. Make sure you leave the
   `node port` field unset so that it stays dynamic.

5. Click **Create and deploy**.

   Docker Cloud switches to the **Service detail** view after you create the
   service.

6. Scroll up to the **Containers** section to see the containers as they deploy.

   The icons for each container change color to indicate what phase of deployment they're in. Once all containers are green (successfully started), continue to the next step.
## Test the web service

1. Once your containers are all green (running), scroll down to the
   **Endpoints** section.

   A list shows all the endpoints available for this service on the public internet.

2. Click an endpoint URL (it should look something like
   `http://web-1.username.cont.dockerapp.io:49154`) to open a new tab in your
   browser and view the **dockercloud/hello-world** web page. Note the hostname
   for the page that loads.

3. Click other endpoints and check the hostnames. You see different hostnames
   which match the container names (web-2, web-3, and so on).
## Launch the load balancer

We verified that the web service is working, so now we can set up the load balancer.

1. Click **Services** in the left navigation bar, and click **Create** again.

   This time we launch a load balancer that listens on port 80 and balances the traffic across the 8 containers that are running the `web` service.

2. Click the **rocket icon** if necessary, and find the **Proxies** section.

3. Click the **dockercloud/haproxy** image.

4. On the next screen, set the **service name** to `lb`.

   Leave the tag, deployment strategy, and number of containers at their default values.

5. Locate the **API Roles** field at the end of the **General settings** section.

6. Set the **API Role** to `Full access`.

   When you assign the service an API role, it passes a `DOCKERCLOUD_AUTH`
   environment variable to the service's containers, which allows them to query
   Docker Cloud's API on your behalf. You can [read more about API Roles here](../apps/api-roles.md).

   The **dockercloud/haproxy** image uses the API to check how many containers
   are in the `web` service we launched earlier. **HAProxy** then uses this
   information to update its configuration dynamically as the web service
   scales.

7. Next, scroll down to the **Ports** section.

8. Click the **Published** checkbox next to the container port 80.

9. Click the word *dynamic* next to port 80, and enter 80 to set the published
   port to also use port 80.

10. Scroll down to the **Links** section.

11. Select `web` from the drop-down list, and click the blue **plus sign** to
    add the link.

    This links the load balancing service `lb` with the web service `web`. The
    link appears in the table in the Links section.

    A new set of `WEB` environment variables appears in the service we're about
    to launch. You can read more about
    service link environment variables [here](../apps/service-links.md).

12. Click **Create and deploy** and confirm that the service launches.
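The wizard steps above can also be captured declaratively. The following is a hedged stackfile sketch (key names follow the Docker Cloud stack YAML reference as we understand it; values mirror this tutorial) for the `web` and `lb` pair:

```yaml
web:
  image: 'dockercloud/hello-world:latest'
  target_num_containers: 8
  deployment_strategy: high_availability
  tags:
    - web            # only deploy to nodes tagged "web"
  ports:
    - '80'           # published on a dynamic node port
lb:
  image: 'dockercloud/haproxy:latest'
  roles:
    - global         # API access so HAProxy can track the web containers
  links:
    - web
  ports:
    - '80:80'        # published on node port 80
```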
## Test the load-balanced web service

1. On the load balancer service detail page, scroll down to the **endpoints**
   section.

   Unlike on the web service, this time the HTTP URL for the load balancer is
   mapped to port 80.

2. Click the endpoint URL to open it in a new tab.

   The same hello-world webpage you saw earlier is shown. Make note of the
   hostname.

3. Refresh the web page.

   With each refresh, the hostname changes as the requests are load-balanced to
   different containers.

   Each container in the web service has a different hostname, which
   appears in the webpage as `container_name-#`. When you refresh the
   page, the load balancer routes the request to a new host and the displayed hostname changes.

   > **Tip**: If you don't see the hostname change, clear your browser's cache
   > or load the page from a different web browser.

Congratulations! You just deployed a load-balanced web service using Docker
Cloud!
## Further reading: load balancing the load balancer

What if you had so many `web` containers that you needed more than one `lb`
container?

Docker Cloud automatically assigns a DNS endpoint to all services. This endpoint
routes to all of the containers of that service. You can use the DNS endpoint to
load balance your load balancer. To learn more, read up on [service
links](service-links.md).
---
description: Publish and expose service or container ports
keywords: publish, expose, ports, containers, services
redirect_from:
- /docker-cloud/feature-reference/ports/
title: Publish and expose service or container ports
---

In Docker Cloud you can **publish** or **expose** ports in services and
containers, just like you can in Docker Engine (as documented
[here](/engine/reference/run.md#expose-incoming-ports)).

* **Exposed ports** are ports that a container or service uses either to
  provide a service, or to listen on. By default, exposed ports in Docker Cloud are
  only privately accessible. This means only other services that are linked to
  the service which is exposing the ports can communicate over the
  exposed port.

  *Exposed ports* cannot be accessed publicly over the internet.

* **Published ports** are exposed ports that are accessible publicly over the internet. Published ports are published on the public-facing network interface of the node (host) where the container is running.

  *Published ports* **can** be accessed publicly over the internet.
## Launch a Service with an exposed port

If the image that you are using for your service already exposes any ports, these appear in Docker Cloud in the **Launch new service** wizard.

1. From the **Launch new service** wizard, select the image to use.
2. Scroll down to the **Ports** section.

   The image in this example *exposes* port 80. Remember, this means
   that the port is only accessible to other services that link to this service. It
   is not accessible publicly over the internet.

   You can expose more ports from this screen by clicking **Add Port**.

### Using the API/CLI

See the API and CLI documentation [here](/apidocs/docker-cloud.md#service) for
information on how to launch a service with an exposed port.
## Launch a Service with a published port

If the image that you are using for your service already exposes any ports,
these appear in Docker Cloud in the **Launch new service** wizard. You can
choose to publish and map them from the wizard.

1. From the **Launch new service** wizard, select the image to use.
2. Scroll down to the **Ports** section.

   This section displays any ports configured in the image.

3. Click the **Published** checkbox.
4. Optionally, choose the port on the node where you want to make the exposed port available.

   By default, Docker Cloud assigns a published port dynamically. You can also
   choose a specific port. For example, you might choose to take a port that is
   exposed internally on port 80, and publish it externally on port 8080.

   To access the published port over the internet, connect to the port you
   specified in the "Node port" section. If you used the default **dynamic**
   option, find the published port on the service detail page.

### Using the API/CLI

See the API and CLI documentation [here](/apidocs/docker-cloud.md#service) on
how to launch a service with a published port.
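In stackfile terms, the publish/expose distinction looks roughly like this. A hedged sketch using Compose-style syntax (the service name and port values are illustrative; exact key behavior is defined by the stack YAML reference):

```yaml
web:
  image: 'dockercloud/hello-world:latest'
  ports:
    - '8080:80'   # published: node port 8080 -> container port 80
    - '80'        # published on a dynamically assigned node port
  expose:
    - '8000'      # exposed only: reachable by linked services, not the internet
```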
## Check which ports a service has published

The **Endpoints** section in the Service view lists the published ports for a service. Ports that are exposed internally are not listed in this section, but can be viewed by editing the service configuration.

* The **Service endpoints** list shows the endpoints that automatically round-robin route to the containers in a service.
* The **Container endpoints** list shows the endpoints for each individual container. Click the blue "link" icon to open the endpoint URL in a new tab.

<!-- DCUI-741
Ports that are exposed internally display with a closed (locked) padlock
icon and published ports (that are exposed to the internet) show an open
(unlocked) padlock icon.

* Exposed ports are listed as **container port/protocol**
* Published ports are listed as **node port**->**container port/protocol** -->
### Using the API/CLI

See the API and CLI documentation [here](/apidocs/docker-cloud.md#service) to learn how to list a service's exposed and published ports.
## Service and container DNS endpoints

The short word before `dockerapp.io` in an endpoint URL tells you what type of endpoint it is. The three available types are:

* `node` routes to a specific node or host
* `svc` routes round-robin style to the containers of a service
* `cont` routes to a specific container within a service, regardless of which host the container is deployed on

For example, you might see an endpoint such as `web.quickstart-python.0a0b0c0d.svc.dockerapp.io`. You would know that this is a service (`svc`) endpoint, for reaching the `web` service in the `quickstart-python` stack.
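This naming convention makes the endpoint type easy to check programmatically. A minimal sketch (the helper is illustrative, not part of any Docker tooling):

```python
def endpoint_type(hostname):
    # The label just before "dockerapp.io" identifies the endpoint:
    # "node" -> a specific host, "svc" -> round-robin over a service's
    # containers, "cont" -> one specific container.
    labels = hostname.split(".")
    if labels[-2:] != ["dockerapp", "io"]:
        raise ValueError("not a dockerapp.io endpoint: " + hostname)
    return {"node": "node", "svc": "service", "cont": "container"}[labels[-3]]
```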
### Container endpoints

Each container that has one or more published ports is automatically assigned a
DNS endpoint in the format
`container-name[.stack-name].shortuuid.cont.dockerapp.io`. This DNS endpoint
(a single A record) resolves to the public IP of the node where the container is
running. If the container is redeployed onto another node, the DNS updates
automatically and resolves to the new node or host.

You can see a list of container endpoints on the stack, service, or container
detail views, in the **Endpoints** tab.
### Service endpoints

Each service that has at least one port published with a fixed (not dynamic)
host port is assigned a DNS endpoint in the format
`service-name[.stack-name].shortuuid.svc.dockerapp.io`. This DNS endpoint
(multiple A records) resolves to the IPs of the nodes where the containers are
running, in a [round-robin
fashion](https://en.wikipedia.org/wiki/Round-robin_DNS).

You can see a list of service endpoints on the stack and service detail views, under the **Endpoints** tab.
---
description: Service discovery
keywords: service, discover, links
redirect_from:
- /docker-cloud/feature-reference/service-links/
title: Service discovery and links
---

Docker Cloud creates a per-user overlay network which connects all containers
across all of the user's hosts. This network connects all of your containers on
the `10.7.0.0/16` subnet, and gives every container a local IP. This IP persists
on each container even if the container is redeployed and ends up on a different
host. Every container can reach any other container on any port within the
subnet.

Docker Cloud gives your containers two ways to find other services:

* Using service and container names directly as **hostnames**

* Using **service links**, which are based on [Docker Compose links](/compose/compose-file/#links)
**Service and container hostnames** update automatically when a service scales
up or down or redeploys. As a user, you can configure service names, and Docker
Cloud uses these names to find the IP of the services and containers for you.
You can use hostnames in your code to provide abstraction that allows you to
easily swap service containers or components.

**Service links** create environment variables which allow containers to
communicate with each other within a stack, or with other services outside of a
stack. You can specify service links explicitly when you create a new service
or edit an existing one, or specify them in the stackfile for a service stack.

### Hostnames vs service links

When a service is scaled up, a new hostname is created and automatically
resolves to the new IP of the container, and the parent service hostname record
also updates to include the new container's IP. However, new service link
environment variables are not created, and existing ones are not removed, when a
service scales up or down.
## Using service and container names as hostnames

You can use hostnames to connect any container in your Docker Cloud account to
any other container on your account without having to create service links or
manage environment variables. This is the recommended service discovery method.

Hostnames always resolve to the correct IP for the service or container,
and update as the service scales up, scales down, or redeploys. The Docker
Cloud automatic DNS service resolves the service name to the correct IP on the
overlay network, even if the container has moved or is now on a different host.

### Discovering containers on the same service or stack

A container can always discover other containers on the same stack using just
the **container name** as hostname. This includes containers of the same
service. Similarly, a container can always discover other services on the same
stack using the **service name**.

For example, a container `webapp-1` in the service `webapp` can connect to the
container `db-1` in the service `db` by using `db-1` as the hostname. It can
also connect to a peer container, `webapp-2`, by using `webapp-2` as the
hostname.

A container `proxy-1` on the same stack could discover all `webapp` containers
by using the **service name** `webapp` as hostname. Connecting to the service
name resolves as an `A`
[round-robin](http://en.wikipedia.org/wiki/Round-robin_DNS) record, listing all
IPs of all containers on the service `webapp`.
### Discovering services or containers on another stack

To find a service or a container on another stack, append `.<stack_name>` to the
service or container name. For example, if `webapp-1` on the stack `production`
needs to access container `db-1` on the stack `common`, it could use the
hostname `db-1.common`, which Docker Cloud resolves to the appropriate IP.

### Discovering services or containers not included in a stack

To find a container or service that is not included in a stack, use the service
or container name as the hostname.

If the container making the query is part of a stack, and there is a local match
on the same stack, the local match takes precedence over the service or
container that is outside the stack.

> **Tip**: To work around this, you can rename the local match so that it has a
> more specific name. You might also put the external service or container in a
> dedicated stack so that you can specify the stack name as part of the namespace.
## Using service links for service discovery

Docker Cloud's service linking is modeled on [Docker Compose
links](/compose/compose-file/#links) to provide basic service discovery
functionality using directional links recorded in environment variables.

When you link a "client" service to a "server" service, Docker Cloud performs
the following actions on the "client" service:

1. Creates a group of environment variables that contain information about the exposed ports of the "server" service, including its IP address, port, and protocol.

2. Copies all of the "server" service environment variables to the "client" service with a `HOSTNAME_ENV_` prefix.

3. Adds a DNS hostname to the Docker Cloud DNS service that resolves to the "server" service IP address.

Some environment variables, such as the API endpoint, are updated when a service
scales up or down. Service links are only updated when a service is deployed or
redeployed; they are not updated during runtime. No new service link environment
variables are created when a service scales up or down.

>**Tip:** You can specify one of several [container distribution strategies](/docker-cloud/infrastructure/deployment-strategies.md) for
applications deployed to multiple nodes. These strategies enable automatic
deployment of containers to nodes, and sometimes auto-linking of containers.
If a service with the
[EVERY_NODE](/docker-cloud/infrastructure/deployment-strategies.md#every-node)
strategy is linked to another service with the EVERY_NODE strategy, containers are
linked one-to-one on each node.

### Service link example

To explain service linking, consider the following application.

Imagine that you are running a web service (`my-web-app`) with 2 containers
(`my-web-app-1` and `my-web-app-2`). You want to add a proxy service
(`my-proxy`) with one container (`my-proxy-1`) to balance HTTP traffic to
each of the containers in your `my-web-app` application, with a link name of
`web`.

### Service link environment variables

Several environment variables are set on each container at startup to provide
link details to other containers. The links created are directional. These are
similar to those used by Docker Compose.

For our example app above, the following environment variables are set in the
proxy containers to provide service links. The example proxy application can use
these environment variables to configure itself on startup, and start balancing
traffic between the two containers of `my-web-app`.

| Name                    | Value                 |
|:------------------------|:----------------------|
| WEB_1_PORT              | `tcp://172.16.0.5:80` |
| WEB_1_PORT_80_TCP       | `tcp://172.16.0.5:80` |
| WEB_1_PORT_80_TCP_ADDR  | `172.16.0.5`          |
| WEB_1_PORT_80_TCP_PORT  | `80`                  |
| WEB_1_PORT_80_TCP_PROTO | `tcp`                 |
| WEB_2_PORT              | `tcp://172.16.0.6:80` |
| WEB_2_PORT_80_TCP       | `tcp://172.16.0.6:80` |
| WEB_2_PORT_80_TCP_ADDR  | `172.16.0.6`          |
| WEB_2_PORT_80_TCP_PORT  | `80`                  |
| WEB_2_PORT_80_TCP_PROTO | `tcp`                 |

To create these service links, you would specify the following in your stackfile:

```yml
my-proxy:
  links:
    - my-web-app:web
```

This example snippet creates a directional link from `my-proxy` to `my-web-app`, and calls that link `web`.

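To make the mechanism concrete, here is a minimal shell sketch of how a proxy startup script might enumerate the link variables to build its backend list. The exported values are the assumed examples from the table above; inside a real `my-proxy` container, Docker Cloud sets them for you.

```bash
# Assumed example values; in a real my-proxy container these are already set
# by Docker Cloud, so the two export lines would not be needed.
export WEB_1_PORT_80_TCP_ADDR=172.16.0.5
export WEB_2_PORT_80_TCP_ADDR=172.16.0.6

# Collect every linked backend address by scanning the link variables.
backends=$(env | grep -E '^WEB_[0-9]+_PORT_80_TCP_ADDR=' | cut -d= -f2 | sort)
echo "$backends"
```

A startup script could feed this list into the proxy's configuration template before launching the proxy process.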
### DNS hostnames vs service links

> **Note**: Hostnames are updated during runtime if the service scales up or down. Environment variables are only set or updated at deploy or redeploy. If your services scale up or down frequently, use hostnames rather than service links.

In the example, the `my-proxy` containers can access the service links using the following hostnames:

| Hostname | Value                   |
|:---------|:------------------------|
| `web`    | `172.16.0.5 172.16.0.6` |
| `web-1`  | `172.16.0.5`            |
| `web-2`  | `172.16.0.6`            |

The best way for the `my-proxy` service to connect to the `my-web-app` service
containers is using the hostnames, because they are updated during runtime if
`my-web-app` scales up or down. If `my-web-app` scales up, the new hostname
`web-3` automatically resolves to the new IP of the container, and the hostname
`web` is updated to include the new IP in its round-robin record.

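To illustrate what the round-robin record means for a client, the following shell sketch spreads successive requests across the addresses behind `web`. The IPs are the assumed example values from the table above rather than a live DNS lookup:

```bash
# Simulated A-record answer for the service hostname "web"
# (assumed example IPs; a real client gets these from the DNS service).
ips="172.16.0.5 172.16.0.6"

# Pick a backend for the Nth request, round-robin style.
pick_backend() {
  request=$1
  set -- $ips
  idx=$(( request % $# + 1 ))
  eval "echo \${$idx}"
}

pick_backend 0
pick_backend 1
pick_backend 2
```

When `my-web-app` scales up, only the DNS answer grows; the client logic is unchanged, which is why hostnames cope with scaling where link environment variables do not.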
However, the service link environment variables are not added or updated until
the service is redeployed. If `my-web-app` scales up, no new service link
environment variables (such as `WEB_3_PORT`, `WEB_3_PORT_80_TCP`, and so on) are
added to the "client" container. This means the client does not know how to
contact the new "server" container.

### Service environment variables

Environment variables specified in the service definition are instantiated in
each individual container. This ensures that each container has a copy of the
service's defined environment variables, and also allows other connecting
containers to read them.

These environment variables are prefixed with `HOSTNAME_ENV_` in each
container.

In our example, if we launch our `my-web-app` service with an environment
variable of `WEBROOT=/login`, the following environment variables are set and
available in the proxy containers:

| Name              | Value    |
|:------------------|:---------|
| WEB_1_ENV_WEBROOT | `/login` |
| WEB_2_ENV_WEBROOT | `/login` |

In our example, this enables the "client" service (`my-proxy-1`) to read
configuration information, such as usernames and passwords or simple
configuration, from the "server" service containers (`my-web-app-1` and
`my-web-app-2`).

#### Docker Cloud specific environment variables

In addition to the standard Docker environment variables, Docker Cloud also sets
special environment variables that enable containers to self-configure. These
environment variables are updated on redeploy.

In the example above, the following environment variables are available in the `my-proxy` containers:

| Name                           | Value                                                                                  |
|:-------------------------------|:---------------------------------------------------------------------------------------|
| WEB_DOCKERCLOUD_API_URL        | `https://cloud.docker.com/api/app/v1/service/3b5fbc69-151c-4f08-9164-a4ff988689ff/`    |
| DOCKERCLOUD_SERVICE_API_URI    | `/api/v1/service/651b58c47-479a-4108-b044-aaa274ef6455/`                               |
| DOCKERCLOUD_SERVICE_API_URL    | `https://cloud.docker.com/api/app/v1/service/651b58c47-479a-4108-b044-aaa274ef6455/`   |
| DOCKERCLOUD_CONTAINER_API_URI  | `/api/v1/container/20ae2cff-44c0-4955-8fbe-ac5841d1286f/`                              |
| DOCKERCLOUD_CONTAINER_API_URL  | `https://cloud.docker.com/api/app/v1/container/20ae2cff-44c0-4955-8fbe-ac5841d1286f/`  |
| DOCKERCLOUD_NODE_API_URI       | `/api/v1/node/d804d973-c8b8-4f5b-a0a0-558151ffcf02/`                                   |
| DOCKERCLOUD_NODE_API_URL       | `https://cloud.docker.com/api/infra/v1/node/d804d973-c8b8-4f5b-a0a0-558151ffcf02/`     |
| DOCKERCLOUD_CONTAINER_FQDN     | `my-proxy-1.20ae2cff.cont.dockerapp.io`                                                |
| DOCKERCLOUD_CONTAINER_HOSTNAME | `my-proxy-1`                                                                           |
| DOCKERCLOUD_SERVICE_FQDN       | `my-proxy.651b58c47.svc.dockerapp.io`                                                  |
| DOCKERCLOUD_SERVICE_HOSTNAME   | `my-proxy`                                                                             |
| DOCKERCLOUD_NODE_FQDN          | `d804d973-c8b8-4f5b-a0a0-558151ffcf02.node.dockerapp.io`                               |
| DOCKERCLOUD_NODE_HOSTNAME      | `d804d973-c8b8-4f5b-a0a0-558151ffcf02`                                                 |

Where:

* `WEB_DOCKERCLOUD_API_URL` is the Docker Cloud API resource URL of the linked service. Because this is a link, the link name is the environment variable prefix.

* `DOCKERCLOUD_SERVICE_API_URI` and `DOCKERCLOUD_SERVICE_API_URL` are the Docker Cloud API resource URI and URL of the service running in the container.

* `DOCKERCLOUD_CONTAINER_API_URI` and `DOCKERCLOUD_CONTAINER_API_URL` are the Docker Cloud API resource URI and URL of the container itself.

* `DOCKERCLOUD_NODE_API_URI` and `DOCKERCLOUD_NODE_API_URL` are the Docker Cloud API resource URI and URL of the node where the container is running.

* `DOCKERCLOUD_CONTAINER_HOSTNAME` and `DOCKERCLOUD_CONTAINER_FQDN` are the external hostname and Fully Qualified Domain Name (FQDN) of the container itself.

* `DOCKERCLOUD_SERVICE_HOSTNAME` and `DOCKERCLOUD_SERVICE_FQDN` are the external hostname and FQDN of the service to which the container belongs.

* `DOCKERCLOUD_NODE_HOSTNAME` and `DOCKERCLOUD_NODE_FQDN` are the external hostname and FQDN of the node where the container is running.

These environment variables are also copied to linked containers with the `NAME_ENV_` prefix.

If you provide API access to your service, you can use the generated token
(stored in `DOCKERCLOUD_AUTH`) to access these API URLs to gather information or
automate operations, such as scaling.

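For example, a container that has been granted API access could assemble a request to its own service resource from these variables. This is a hedged sketch: the URI is the assumed example from the table above, the token is a placeholder, and the request is only printed rather than sent:

```bash
# Assumed example values; Docker Cloud injects the real ones into the container.
export DOCKERCLOUD_SERVICE_API_URI='/api/v1/service/651b58c47-479a-4108-b044-aaa274ef6455/'
export DOCKERCLOUD_AUTH='Basic aHlwb3RoZXRpY2Fs'   # placeholder, not a real token

# Build the request a self-configuring container could make (printed, not executed).
url="https://cloud.docker.com${DOCKERCLOUD_SERVICE_API_URI}"
echo "curl -s -H \"Authorization: ${DOCKERCLOUD_AUTH}\" ${url}"
```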
---
description: Redeploy running services
keywords: redeploy, running, services
redirect_from:
- /docker-cloud/feature-reference/service-redeploy/
title: Redeploy a running service
---

You can **redeploy** services in Docker Cloud while they are running to
regenerate a service's containers. You might do this when a new version of the
image is pushed to the registry, or to apply changes that you made to
the service's settings.

When you redeploy a service, Docker Cloud terminates the current service
containers. It then deploys new containers using the most recent service
definition, including service and deployment tags, deployment strategies, port
mappings, and so on.

> **Note**: Your containers might be redeployed to different nodes during redeployment.

#### Container hostnames

*Container* **hostnames** change on redeployment, and if your service uses
**dynamic published ports**, new ports might be used on redeployment.

Container hostnames appear in the following format:
`servicename-1.new-container-short-uuid.cont.dockerapp.io`

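Because the short UUID segment changes on every redeployment, scripts should derive it rather than hard-code it. A small shell sketch, using a hypothetical FQDN in the documented format:

```bash
# Hypothetical container FQDN following the documented format.
fqdn="my-web-app-1.20ae2cff.cont.dockerapp.io"

# The first label is the container name, the second is the container short UUID.
container=${fqdn%%.*}
rest=${fqdn#*.}
short_uuid=${rest%%.*}
echo "$container $short_uuid"
```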
However, containers keep their local IPs after redeployment, even if they end up
on different nodes. This means that linked services do not need to be
redeployed. To learn more, see [Service links](service-links.md).

#### Service hostnames

*Service* hostnames remain the same after redeployment. Service hostnames are only
available for ports that are bound to a specific port on the host. They are
_not_ available if the port is dynamically allocated.

Service hostnames appear in the following format:
`servicename.service-short-uuid.svc.dockerapp.io`

#### Redeploy with volumes

If your containers use volumes, the new containers can **reuse** the
existing volumes. If you choose to reuse the volumes, the containers redeploy to the same nodes to preserve their links to the volumes.

> **Note**: When you redeploy services with reused volumes, your redeployment can fail if the service's deployment tags no longer allow it to be deployed on the node where the volume resides. To learn more, see [Deployment tags](deploy-tags.md).

## Redeploy a service using the web interface

1. Click **Services** in the left menu to view a list of services.
2. Click the checkbox to the left of the service or services you want to redeploy.
3. From the **Actions** menu at the top right, choose **Redeploy**.

   The service begins redeploying immediately.

<!-- DCUI-732, DCUI-728
3. If the container uses volumes, choose whether to reuse them.
4. Click **OK** on the confirmation dialog to start the redeployment.-->

## Redeploy a service using the API and CLI

See the Docker Cloud [API and CLI documentation](/apidocs/docker-cloud.md#redeploy-a-service) for more information
on using our API and CLI to redeploy services.

## Autoredeploy on image push to Docker Hub

If your service uses an image stored in Docker Hub or Docker Cloud, you can
enable **Autoredeploy** on the service. Autoredeploy triggers a redeployment
whenever a new image is pushed. See the [Autoredeploy documentation](auto-redeploy.md) to learn more.

## Redeploy a service using webhooks

You can also use **triggers** to redeploy a service, for example when its image
is pushed or rebuilt in a third-party registry. See the [Triggers documentation](triggers.md) to learn more.

---
description: Scale your service, spawn new containers
keywords: spawn, container, service, deploy
redirect_from:
- /docker-cloud/feature-reference/service-scaling/
title: Scale your service
---

Docker Cloud makes it easy to spawn new containers of your service to handle
additional load. Two modes are available so that you can scale services with
different configuration requirements.

## Deployment and scaling modes

Any service that handles additional load by increasing the number of containers
of the service is considered "horizontally scalable".

There are two deployment modes when scaling a service:

- **Parallel mode** (default): all containers of a service are
  deployed at the same time, without any links between them. This is
  the fastest way to deploy.

- **Sequential mode**: each new container in the service is deployed one at a
  time. Each container is linked to all previous containers using service
  links. This makes complex configuration possible within the containers'
  startup logic. This mode is explained in detail in the following sections.

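As a sketch, the mode can be selected per service in a stackfile with the `sequential_deployment` key (documented in the stack file YAML reference); the service shown reuses the `web` example from the stacks documentation:

```yml
web:
  image: dockercloud/quickstart-python
  target_num_containers: 4
  sequential_deployment: true   # deploy the 4 containers one at a time, linking each to its predecessors
```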
## When should I use Parallel scaling?

When the containers in a service work independently of each other and do not
need to coordinate between themselves, they can be scaled up in parallel mode.

Examples include:

- Stateless web servers and proxies
- "Worker" instances that process jobs from a queue
- "Cron"-style instances that execute periodic tasks

The default scaling mode is parallel, so no additional configuration is
required to use this mode.

## When should I use Sequential scaling?

Some services require coordination between different containers to ensure that
the service functions correctly. Many databases, such as MySQL, require that
the containers know about each other at startup time so that traffic can be
routed to them appropriately. When this is the case, you should use
[sequential scaling](service-scaling.md#sequential-deployment-and-scaling).

To allow peer-aware container startup, you can enable sequential scaling mode. See [Sequential deployment and scaling](service-scaling.md#sequential-deployment-and-scaling) for more information.

## Set the initial number of containers

When you configure a service in Docker Cloud, you can specify an initial number of containers for the service before you launch.

Docker Cloud immediately launches as many containers as you specified.

### Set the initial containers using the API

You can specify the initial number of containers for a service when deploying it through the API:

```
POST /api/app/v1/service/ HTTP/1.1
{
  "target_num_containers": 2,
  [...]
}
```

If you don't specify the number of containers to deploy, this value defaults to `1`. See the [API documentation](/apidocs/docker-cloud.md) for more information.

### Set the initial containers using the CLI

You can also specify the initial number of containers for a service when deploying it using the CLI:

```bash
$ docker-cloud service run -t 2 [...]
```

If you don't specify the number of containers to deploy, the CLI uses the default value of `1`. See the [CLI documentation](/apidocs/docker-cloud.md) for more information.

## Scale an already running service

If you need to scale a service up or down while it is running, you can change the number of containers from the service detail page:

1. Click the slider at the top of the service detail page.
2. Drag the slider to the number of containers you want.
3. Click **Scale**.

The application starts scaling immediately, whether this means starting new containers or gracefully shutting down existing ones.

### Scale a running service using the API

You can scale an already running service through the API:

```
PATCH /api/app/v1/service/(uuid)/ HTTP/1.1
{
  "target_num_containers": 2
}
```

See the [scale a service API documentation](/apidocs/docker-cloud.md#scale-a-service).

### Scale a running service using the CLI

You can also scale an already running service using the CLI:

```bash
$ docker-cloud service scale (uuid or name) 2
```

See the [scale a service CLI documentation](/apidocs/docker-cloud.md#scale-a-service).

## Sequential deployment and scaling

When a service with more than one container is deployed using **sequential
deployment** mode, the second and subsequent containers are linked to all the
previous ones using [service links](service-links.md). These links are useful if
your service needs to know about other instances, for example to allow automatic
configuration on startup.

See the [Service links](service-links.md) topic for a list of environment variables that the links create in your containers.

You can set the **Sequential deployment** setting on the **Service configuration** step of the **Launch new service** wizard.

### Set the scaling mode using the API

You can also set the `sequential_deployment` option when deploying an
application through the API:

```
POST /api/app/v1/service/ HTTP/1.1
{
  "sequential_deployment": true,
  [...]
}
```

See [create a new service](/apidocs/docker-cloud.md#create-a-new-service) for
more information.

### Set the scaling mode using the CLI

You can also set the `sequential_deployment` option when deploying an
application through the CLI:

```bash
$ docker-cloud service run --sequential [...]
```

---
description: Stack YAML reference for Docker Cloud
keywords: YAML, stack, reference, docker cloud
redirect_from:
- /docker-cloud/feature-reference/stack-yaml-reference/
title: Docker Cloud stack file YAML reference
---

A stack is a collection of services that make up an application in a specific environment. Learn more about stacks for Docker Cloud [here](stacks.md). A **stack file** is a file in YAML format that defines one or more services, similar to a `docker-compose.yml` file for Docker Compose but with a few extensions. The default name for this file is `docker-cloud.yml`.

**Looking for information on stack files for Swarm?** A good place to start is the [Compose file reference](/compose/compose-file/index.md), particularly the section on the `deploy` key and its sub-options, and the reference on [Docker stacks](/compose/bundles.md). Also, the new [Getting Started tutorial](/get-started/index.md) demos use of a stack file to deploy an application to a swarm.

## Stack file example

Below is an example `docker-cloud.yml`:

```yml
lb:
  image: dockercloud/haproxy
  links:
    - web
  ports:
    - "80:80"
  roles:
    - global
web:
  image: dockercloud/quickstart-python
  links:
    - redis
  target_num_containers: 4
redis:
  image: redis
```

Each key defined in `docker-cloud.yml` creates a service with that name in Docker Cloud. In the example above, three services are created: `lb`, `web`, and `redis`. Each service is a dictionary whose possible keys are documented below.

The `image` key is mandatory. Other keys are optional and are analogous to their [Docker Cloud Service API](/apidocs/docker-cloud.md#create-a-new-service) counterparts.

## image (required)

The image used to deploy this service. This is the only mandatory key.

```yml
image: drupal
image: dockercloud/hello-world
image: my.registry.com/redis
```

## autodestroy

Whether the containers for this service should be terminated if they stop (default: `no`, possible values: `no`, `on-success`, `always`).

```yml
autodestroy: always
```

## autoredeploy

Whether to redeploy the containers of the service when its image is updated in the Docker Cloud registry (default: `false`).

```yml
autoredeploy: true
```

## cap_add, cap_drop

Add or drop container capabilities. See `man 7 capabilities` for a full list.

```yml
cap_add:
  - ALL
cap_drop:
  - NET_ADMIN
  - SYS_ADMIN
```

## cgroup_parent

Specify an optional parent cgroup for the container.

```yml
cgroup_parent: m-executor-abcd
```

## command

Override the default command in the image.

```yml
command: echo 'Hello World!'
```

## deployment_strategy

Container distribution among nodes (default: `emptiest_node`, possible values: `emptiest_node`, `high_availability`, `every_node`). Learn more [here](../infrastructure/deployment-strategies.md).

```yml
deployment_strategy: high_availability
```

## devices

List of device mappings. Uses the same format as the `--device` docker client create option.

```yml
devices:
  - "/dev/ttyUSB0:/dev/ttyUSB0"
```

## dns

Specify custom DNS servers. Can be a single value or a list.

```yml
dns: 8.8.8.8
dns:
  - 8.8.8.8
  - 9.9.9.9
```

## dns_search

Specify custom DNS search domains. Can be a single value or a list.

```yml
dns_search: example.com
dns_search:
  - dc1.example.com
  - dc2.example.com
```

## environment

A list of environment variables to add to the service's containers at launch. The environment variables specified here override any image-defined environment variables. You can use either an array or a dictionary format.

Dictionary format:

```yml
environment:
  PASSWORD: my_password
```

Array format:

```yml
environment:
  - PASSWORD=my_password
```

When you use the Docker Cloud CLI to create a stack, you can use the environment variables defined locally in your shell to define those in the stack. This is useful if you don't want to store passwords or other sensitive information in your stack file:

```yml
environment:
  - PASSWORD
```

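The following shell sketch simulates the substitution the CLI performs in this case: the bare variable name in the stack file is resolved from the local shell environment at stack-creation time (the password value here is, of course, made up):

```bash
# Value present only in the local shell, not in docker-cloud.yml.
export PASSWORD=my_secret_password

# What the CLI effectively does with a bare "- PASSWORD" entry:
entry="PASSWORD"
eval "resolved=\"\$${entry}\""
echo "${entry}=${resolved}"
```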
## expose

Expose ports without publishing them to the host machine; they are only accessible from your nodes in Docker Cloud. `udp` ports can be specified with a `/udp` suffix.

```yml
expose:
  - "80"
  - "90/udp"
```

## extra_hosts

Add hostname mappings. Uses the same values as the docker client `--add-host` parameter.

```yml
extra_hosts:
  - "somehost:162.242.195.82"
  - "otherhost:50.31.209.229"
```

## labels

Add metadata to containers using Docker Engine labels. You can use either an array or a dictionary.

We recommend using reverse-DNS notation to prevent your labels from conflicting with those used by other software.

```yml
labels:
  com.example.description: "Accounting webapp"
  com.example.department: "Finance"
  com.example.label-with-empty-value: ""

labels:
  - "com.example.description=Accounting webapp"
  - "com.example.department=Finance"
  - "com.example.label-with-empty-value"
```

## links

Link to another service.

Either specify both the service unique name and the link alias (`SERVICE:ALIAS`), or just the service unique name (which is also used for the alias). If a service you want to link to is part of a different stack, specify the external stack name too.

- If the target service belongs to *this* stack, its service unique name is its service name.
- If the target service does not belong to *any* stack (it is a standalone service), its service unique name is its service name.
- If the target service belongs to another stack, its service unique name is its service name plus the service stack name, separated by a period (`.`).

```yml
links:
  - mysql
  - redis:cache
  - amqp.staging:amqp
```

For each link, environment variables are created that Docker Cloud resolves to the container IPs of the linked service. More information [here](service-links.md).

## net

Networking mode. Only the "bridge" and "host" options are supported for now.

```yml
net: host
```

## pid

Sets the PID mode to the host PID mode. This turns on sharing between the container and the host operating system PID address space. Containers launched with this (optional) flag can access and be accessed by other containers in the namespace belonging to the host running the Docker daemon.

```yml
pid: "host"
```

## ports

Expose ports. Either specify both ports (`HOST:CONTAINER`), or just the container port (an ephemeral host port is chosen). `udp` ports can be specified with a `/udp` suffix.

```yml
ports:
  - "80"
  - "443:443"
  - "500/udp"
  - "4500:4500/udp"
  - "49022:22"
```

## privileged

Whether to start the containers with Docker Engine's privileged flag set (default: `false`).

```yml
privileged: true
```

## restart

Whether the containers for this service should be restarted if they stop (default: `no`, possible values: `no`, `on-failure`, `always`).

```yml
restart: always
```

## roles

A list of Docker Cloud API roles to grant the service. The only supported value is `global`, which creates an environment variable `DOCKERCLOUD_AUTH` used to authenticate against the Docker Cloud API. Learn more [here](api-roles.md).

```yml
roles:
  - global
```

## security_opt

Override the default labeling scheme for each container.

```yml
security_opt:
  - label:user:USER
  - label:role:ROLE
```

## sequential_deployment

Whether the containers should be launched and scaled in sequence (default: `false`). Learn more [here](service-scaling.md).

```yml
sequential_deployment: true
```

## tags

Indicates the [deploy tags](deploy-tags.md) used to select the nodes where containers are created.

```yml
tags:
  - staging
  - web
```

## target_num_containers

The number of containers to run for this service (default: `1`).

```yml
target_num_containers: 3
```

## volumes

Mount paths as volumes, optionally specifying a path on the host machine (`HOST:CONTAINER`), or an access mode (`HOST:CONTAINER:ro`).

```yml
volumes:
  - /etc/mysql
  - /sys:/sys
  - /etc:/etc:ro
```

## volumes_from

Mount all of the volumes from another service by specifying a service unique name.

- If the target service belongs to this stack, its service unique name is its service name.
- If the target service does not belong to any stack, its service unique name is its service name.
- If the target service belongs to another stack, its service unique name is its service name plus the service stack name, separated by a period (`.`). Learn more [here](volumes.md).

```yml
volumes_from:
  - database
  - mongodb.staging
```

## Single value keys analogous to a `docker run` counterpart

```
working_dir: /app
entrypoint: /app/entrypoint.sh
user: root
hostname: foo
domainname: foo.com
mac_address: 02:42:ac:11:65:43
cpu_shares: 512
cpuset: 0,1
mem_limit: 100000m
memswap_limit: 200000m
privileged: true
read_only: true
stdin_open: true
tty: true
```

## Unsupported Docker Compose keys

Stack files (`docker-cloud.yml`) were designed with `docker-compose.yml` in mind to maximize compatibility. However, the following keys used in Compose are not supported in Docker Cloud stack files:

```
build
external_links
env_file
```

@ -0,0 +1,128 @@
|
|||
---
|
||||
description: Manage service stacks
|
||||
keywords: service, stack, yaml
|
||||
redirect_from:
|
||||
- /docker-cloud/feature-reference/stacks/
|
||||
title: Manage service stacks
|
||||
---
|

A **stack** is a collection of services that make up an application in a specific environment. A **stack file** is a file in YAML format, similar to a `docker-compose.yml` file, that defines one or more services. See the [stack file YAML reference](stack-yaml-reference.md) for the full syntax.

Stacks are a convenient way to automatically deploy multiple services that are linked to each other, without needing to define each one separately.

Stack files define environment variables, deployment tags, the number of containers, and related environment-specific configuration. Because of this, use a separate stack file for development, staging, production, and other environments.

### Stack file example

Below is an example `docker-cloud.yml`:

```yml
lb:
  image: dockercloud/haproxy
  links:
    - web
  ports:
    - "80:80"
  roles:
    - global
web:
  image: dockercloud/quickstart-python
  links:
    - redis
  target_num_containers: 4
redis:
  image: redis
```

Each key defined in `docker-cloud.yml` creates a service with that name in Docker Cloud. In the example above, three services are created: `lb`, `web`, and `redis`. Each service is a dictionary, and its keys are described in the [stack file YAML reference](stack-yaml-reference.md).

Only the `image` key is mandatory. Other keys are optional and are analogous to their [Docker Cloud Service API](/apidocs/docker-cloud.md#create-a-new-service) counterparts.

## Create a stack

Docker Cloud allows you to create stacks from the web interface, as well as through the Docker Cloud API and the `docker-cloud` command-line client.

To create a stack from the Docker Cloud web interface:

1. Log in to Docker Cloud.
2. Click **Stacks**.
3. Click **Create**.
4. Enter a name for the stack.
5. Enter or paste the stack file in the **Stackfile** field, or drag a file to the field to upload it. (You can also click in the field to browse for and upload a file on your computer.)

    

6. Click **Create** or **Create and deploy**.

### Create a stack using the API

You can also create a new stack by uploading a stack file directly using the Docker Cloud API. When you use the API, the stack file is in **JSON** format, as in the following example:

```json
POST /api/v1/stack/ HTTP/1.1

{
  "name": "my-new-stack",
  "services": [
    {
      "name": "hello-world",
      "image": "dockercloud/hello-world",
      "target_num_containers": 2
    }
  ]
}
```

Check our [API documentation](/apidocs/docker-cloud.md#stacks) for more information.
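
As a sketch, the same request could be sent with `curl`. The full endpoint URL and the Basic-auth scheme shown here are assumptions (verify them against the API documentation linked above); `$DOCKER_ID_USER` and `$APIKEY` are placeholders for your username and API key:

```bash
# Hypothetical sketch -- endpoint URL and auth scheme are assumptions,
# check the Docker Cloud API docs before relying on them.
curl -X POST https://cloud.docker.com/api/app/v1/stack/ \
  -u "$DOCKER_ID_USER:$APIKEY" \
  -H "Content-Type: application/json" \
  -d '{"name": "my-new-stack",
       "services": [{"name": "hello-world",
                     "image": "dockercloud/hello-world",
                     "target_num_containers": 2}]}'
```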

### Create a stack using the CLI

You can create a stack from a YAML file by executing:

```bash
$ docker-cloud stack create -f docker-cloud.yml
```

Check our [CLI documentation](/apidocs/docker-cloud.md#stacks) for more information.

## Update an existing stack

You can specify an existing stack when you create a service; however, you might not always have the stack definition ready at that time, or you might later want to add a service to an existing stack.

To update a stack from the Docker Cloud web interface:

1. Navigate to the stack you want to update.
2. Click **Edit**.

    

3. Edit the stack file, or upload a new one from your computer.
4. Click **Save**.

### Update an existing stack using the API

You can also update a stack by uploading the new stack file directly using the Docker Cloud API. When you use the API, the stack file is in **JSON** format, as in the following example:

```json
PATCH /api/app/v1/stack/(uuid)/ HTTP/1.1

{
  "services": [
    {
      "name": "hello-world",
      "image": "dockercloud/hello-world",
      "target_num_containers": 2
    }
  ]
}
```

Check our [API documentation](/apidocs/docker-cloud.md#stacks) for more information.

### Update an existing stack using the CLI

You can update a stack from a YAML file by executing:

```bash
$ docker-cloud stack update -f docker-cloud.yml (uuid or name)
```

Check our [CLI documentation](/apidocs/docker-cloud.md#stacks) for more information.

---
description: Use triggers
keywords: API, triggers, endpoints
redirect_from:
- /docker-cloud/feature-reference/triggers/
title: Use triggers
---

## What are triggers?

**Triggers** are API endpoints that redeploy or scale a specific service whenever a `POST` HTTP request is sent to them. You can create one or more triggers per service.

Triggers do not require any authentication. This allows third-party services like Docker Hub to call them, but because of this it is important that you keep their URLs secret.

The body of the `POST` request is passed in to the new containers as an environment variable called `DOCKERCLOUD_TRIGGER_BODY`.

### Trigger types

Docker Cloud supports two types of triggers:

* **Redeploy** triggers, which redeploy the service when called
* **Scale up** triggers, which scale the service up by one or more containers when called

## Create a trigger

1. Click the name of the service you want to create a trigger for.
2. Go to the detail page and scroll down to the **Triggers** section.

    

3. In the **Trigger name** field, enter a name for the trigger.
4. Select a trigger type.
5. Click the **+** (plus sign) icon.

    

6. Use the POST request URL provided to configure the webhook in your application or third-party service.
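
For example, a CI job or script could fire a trigger with a plain `POST` request. The URL below is only an illustration of the shape of a trigger URL; substitute the exact URL Docker Cloud generates for you. As described above, the request body is made available to the new containers as `DOCKERCLOUD_TRIGGER_BODY`:

```bash
# Placeholder URL -- use the trigger URL shown in the Triggers section.
curl -X POST \
  -H "Content-Type: application/json" \
  -d '{"source": "ci", "build": "1234"}' \
  https://cloud.docker.com/api/app/v1/service/<service-uuid>/trigger/<trigger-uuid>/call/
```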

## Revoke triggers

To stop a trigger from automatically scaling or redeploying, you must revoke it.

1. Go to the detail page of the service.
2. Scroll down to the **Triggers** section.
3. Click the **trashcan** icon for the trigger you want to revoke.

    

Once the trigger is revoked, it stops accepting requests.

## Use triggers in the API and CLI

See our [API and CLI documentation](/apidocs/docker-cloud.md#triggers) to learn how to use triggers with our API and the CLI.

---
description: Work with data volumes
keywords: data, volumes, create, reuse
redirect_from:
- /docker-cloud/tutorials/download-volume-data/
- /docker-cloud/feature-reference/volumes/
title: Work with data volumes
---

In Docker Cloud, you can define one or more data volumes for a service. **Volumes** are directories that are stored outside of the container's filesystem and which hold reusable and shareable data that can persist even when containers are terminated. This data can be reused by the same service on redeployment, or shared with other services.

## Add a data volume to a service

Data volumes can be specified either in the image's `Dockerfile` using the [VOLUME instruction](/engine/reference/builder/#volume), or when creating a service.

To define a data volume in a service, specify the **container path** where it should be created in the **Volumes** step of the **Create new service** wizard. Each container of the service has its own volume. Data volumes are reused when the service is redeployed (data persists in this case), and deleted if the service is terminated.



If you don't define a **host path**, Docker Cloud creates a new empty volume. Otherwise, the specified **host path** is mounted on the **container path**. When you specify a host path, you can also specify whether to mount the volume read-only or read/write.



## Reuse data volumes from another service

You can reuse data volumes from another service. To do this when creating a service, go through the **Create new service** wizard, and continue to the **Volumes** step. From the **Volumes** page, choose a source service from the **Add volumes from** menu.



All reused data volumes are mounted on the same paths as in the source service. Containers must be on the same host to share volumes, so the containers of the new service deploy to the same nodes where the source service containers are deployed.

> **Note**: A service with data volumes cannot be terminated until all services that are using its volumes have also been terminated.

## Back up data volumes

You might find it helpful to download or back up the data from volumes that are attached to running containers.

1. Run an SSH service that mounts the volumes of the service you want to back up.

    In the example snippet below, replace `mysql` with the actual service name.

    ```
    $ docker-cloud service run -n downloader -p 2222:22 -e AUTHORIZED_KEYS="$(cat ~/.ssh/id_rsa.pub)" --volumes-from mysql tutum/ubuntu
    ```

2. Use `scp` (secure copy) to download the files to your local machine.

    In the example snippet below, replace `downloader-1.uuid.cont.dockerapp.io` with the container's Fully Qualified Domain Name (FQDN), and replace `/var/lib/mysql` with the path within the container from which you want to download the data. The data is downloaded to the current local folder.

    ```
    $ scp -r -P 2222 root@downloader-1.uuid.cont.dockerapp.io:/var/lib/mysql .
    ```

---
description: Automated builds
keywords: automated, build, images
title: Advanced options for Autobuild and Autotest
---

The following options allow you to customize your automated build and automated test processes.

## Environment variables for building and testing

Several utility environment variables are set by the build process, and are available during automated builds, automated tests, and while executing hooks.

> **Note**: These environment variables are only available to the build and test
> processes and do not affect your service's run environment.

* `SOURCE_BRANCH`: the name of the branch or the tag that is currently being tested.
* `SOURCE_COMMIT`: the SHA1 hash of the commit being tested.
* `COMMIT_MSG`: the message from the commit being tested and built.
* `DOCKER_REPO`: the name of the Docker repository being built.
* `DOCKERFILE_PATH`: the Dockerfile currently being built.
* `CACHE_TAG`: the Docker repository tag being built.
* `IMAGE_NAME`: the name and tag of the Docker repository being built. (This variable is a combination of `DOCKER_REPO`:`CACHE_TAG`.)
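
As a quick local illustration of how `IMAGE_NAME` combines `DOCKER_REPO` and `CACHE_TAG` (the values below are made-up examples, not output from a real build):

```bash
# Example values only -- in a real build these are set by the builder.
DOCKER_REPO="index.docker.io/myorg/myapp"
CACHE_TAG="latest"
# IMAGE_NAME is DOCKER_REPO and CACHE_TAG joined with a colon.
IMAGE_NAME="$DOCKER_REPO:$CACHE_TAG"
echo "$IMAGE_NAME"   # prints: index.docker.io/myorg/myapp:latest
```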

If you are using these build environment variables in a `docker-compose.test.yml` file for automated testing, declare them in your `sut` service's environment as shown below.

```none
sut:
  build: .
  command: run_tests.sh
  environment:
    - SOURCE_BRANCH
```

## Override build, test or push commands

Docker Cloud allows you to override and customize the `build`, `test` and `push` commands during automated build and test processes using hooks. For example, you might use a build hook to set build arguments used only during the build process. (You can also set up [custom build phase hooks](#custom-build-phase-hooks) to perform actions in between these commands.)

**Use these hooks with caution.** The contents of these hook files replace the basic `docker` commands, so you must include a similar build, test or push command in the hook or your automated process does not complete.

To override these phases, create a folder called `hooks` in your source code repository at the same directory level as your Dockerfile. Create a file called `hooks/build`, `hooks/test`, or `hooks/push` and include commands that the builder process can execute, such as `docker` and `bash` commands (prefixed appropriately with `#!/bin/bash`).
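
For example, a repository that overrides all three phases might be laid out like this (a sketch; only the hook files you actually need are required):

```none
.
├── Dockerfile
└── hooks/
    ├── build
    ├── test
    └── push
```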

## Custom build phase hooks

You can run custom commands between phases of the build process by creating hooks. Hooks allow you to provide extra instructions to the autobuild and autotest processes.

Create a folder called `hooks` in your source code repository at the same directory level as your Dockerfile. Place files that define the hooks in that folder. Hook files can include both `docker` commands, and `bash` commands as long as they are prefixed appropriately with `#!/bin/bash`. The builder executes the commands in the files before and after each step.

The following hooks are available:

* `hooks/post_checkout`
* `hooks/pre_build`
* `hooks/post_build`
* `hooks/pre_test`
* `hooks/post_test`
* `hooks/pre_push` (only used when executing a build rule or [automated build](automated-build.md))
* `hooks/post_push` (only used when executing a build rule or [automated build](automated-build.md))

### Build hook examples

#### Override the "build" phase to set variables

Docker Cloud allows you to define build environment variables either in the hook files, or from the automated build UI (which you can then reference in hooks).

In the following example, we define a build hook that uses `docker build` arguments to set the variable `CUSTOM` based on the value of a variable we defined using the Docker Cloud build settings. `$DOCKERFILE_PATH` is a variable that we provide with the name of the Dockerfile we wish to build, and `$IMAGE_NAME` is the name of the image being built.

```none
docker build --build-arg CUSTOM=$VAR -f $DOCKERFILE_PATH -t $IMAGE_NAME .
```

> **Caution**: A `hooks/build` file overrides the basic [docker build](/engine/reference/commandline/build.md) command
> used by the builder, so you must include a similar build command in the hook or
> the automated build fails.

To learn more about Docker build-time variables, see the [docker build documentation](/engine/reference/commandline/build/#set-build-time-variables-build-arg).

#### Two-phase build

If your build process requires a component that is not a dependency for your application, you can use a pre-build hook (the `hooks/pre_build` file) to collect and compile required components. In the example below, the hook uses a Docker container to compile a Golang binary that is required before the build.

```bash
#!/bin/bash
echo "=> Building the binary"
docker run --privileged \
  -v $(pwd):/src \
  -v /var/run/docker.sock:/var/run/docker.sock \
  centurylink/golang-builder
```

#### Push to multiple repos

By default the build process pushes the image only to the repository where the build settings are configured. If you need to push the same image to multiple repositories, you can set up a `post_push` hook to add additional tags and push to more repositories.

```none
docker tag $IMAGE_NAME $DOCKER_REPO:$SOURCE_COMMIT
docker push $DOCKER_REPO:$SOURCE_COMMIT
```

## Source Repository / Branch Clones

When Docker Cloud pulls a branch from a source code repository, it performs a shallow clone (only the tip of the specified branch). This has the advantage of minimizing the amount of data transfer necessary from the repository and speeding up the build because it pulls only the minimal code necessary.

Because of this, if you need to perform a custom action that relies on a different branch (such as a `post_push` hook), you can't check out that branch unless you do one of the following:

* You can get a shallow checkout of the target branch:

      git fetch origin branch:mytargetbranch --depth 1

* You can also "unshallow" the clone, which fetches the whole Git history (and potentially takes a long time / moves a lot of data), by using the `--unshallow` flag on the fetch:

      git fetch --unshallow origin

---
description: Automated builds
keywords: automated, build, images
redirect_from:
- /docker-cloud/feature-reference/automated-build/
title: Automated builds
---

[](https://youtu.be/sl2mfyjnkXk "Automated Builds with Docker Cloud"){:target="_blank" class="_"}

> **Note**: Docker Cloud's Build functionality is in BETA.

Docker Cloud can automatically build images from source code in an external repository and automatically push the built image to your Docker repositories.

When you set up automated builds (also called autobuilds), you create a list of branches and tags that you want to build into Docker images. When you push code to a source code branch (for example in GitHub) for one of those listed image tags, the push uses a webhook to trigger a new build, which produces a Docker image. The built image is then pushed to the Docker Cloud registry or to an external registry.

If you have automated tests configured, these run after building but before pushing to the registry. You can use these tests to create a continuous integration workflow where a build that fails its tests does not push the built image. Automated tests do not push images to the registry on their own. [Learn more about automated image testing here.](automated-testing.md)

You can also just use `docker push` to push pre-built images to these repositories, even if you have automatic builds set up.



## Configure automated build settings

You can configure repositories in Docker Cloud so that they automatically build an image each time you push new code to your source provider. If you have [automated tests](automated-testing.md) configured, the new image is only pushed when the tests succeed.

Before you set up automated builds you need to [create a repository](repos.md) to build, and [link to your source code provider](link-source.md).

1. From the **Repositories** section, click into a repository to view its details.

2. Click the **Builds** tab.

3. If you are setting up automated builds for the first time, select the code repository service where the image's source code is stored.

    Otherwise, if you are editing the build settings for an existing automated build, click **Configure automated builds**.

4. Select the **source repository** to build the Docker images from.

    You might need to specify an organization or user (the _namespace_) from the source code provider. Once you select a namespace, its source code repositories appear in the **Select repository** dropdown list.

5. Choose where to run your build processes.

    You can either run the process on your own infrastructure and optionally [set up specific nodes to build on](automated-build.md#set-up-builder-nodes), or select **Build on Docker Cloud's infrastructure** to use the hosted build service offered on Docker Cloud's infrastructure. If you use Docker's infrastructure, select a builder size to run the build process on. This hosted build service is free while it is in Beta.

    

6. If in the previous step you selected **Build on Docker Cloud's infrastructure**, you are given the option to select the **Docker Version** used to build this repository. You can choose between the **Stable** and **Edge** versions of Docker.

    Selecting **Edge** lets you take advantage of [multi-stage builds](/engine/userguide/eng-image/multistage-build/). For more information and examples, see the topic on how to [use multi-stage builds](/engine/userguide/eng-image/multistage-build/#use-multi-stage-builds).

    You can learn more about **stable** and **edge** channels in the [Install Docker overview](/install/) and the [Docker CE Edge](/edge/) topics.

7. Optionally, enable [autotests](automated-testing.md#enable-automated-tests-on-a-repository).

8. Review the default **Build Rules**, and optionally click the **plus sign** to add and configure more build rules.

    _Build rules_ control what Docker Cloud builds into images from the contents of the source code repository, and how the resulting images are tagged within the Docker repository.

    A default build rule is set up for you, which you can edit or delete. This default rule builds from the `master` branch in your source code repository, and creates a Docker image tagged with `latest`.

9. For each branch or tag, enable or disable the **Autobuild** toggle.

    Only branches or tags with autobuild enabled are built, tested, *and* have the resulting image pushed to the repository. Branches with autobuild disabled are built for test purposes (if enabled at the repository level), but the built Docker image is not pushed to the repository.

10. For each branch or tag, enable or disable the **Build Caching** toggle.

    [Build caching](/engine/userguide/eng-image/dockerfile_best-practices/#/build-cache) can save time if you are building a large image frequently or have many dependencies. You might want to leave build caching disabled to make sure all of your dependencies are resolved at build time, or if you have a large layer that is quicker to build locally.

11. Click **Save** to save the settings, or click **Save and build** to save and run an initial test.

    A webhook is automatically added to your source code repository to notify Docker Cloud on every push. Only pushes to branches that are listed as the source for one or more tags trigger a build.

### Set up build rules

By default when you set up autobuilds, a basic build rule is created for you. This default rule watches for changes to the `master` branch in your source code repository, and builds the `master` branch into a Docker image tagged with `latest`.

In the **Build Rules** section, enter one or more sources to build.

For each source:

* Select the **Source type** to build either a **tag** or a **branch**. This tells the build system what to look for in the source code repository.

* Enter the name of the **Source** branch or tag you want to build.

    The first time you configure automated builds, a default build rule is set up for you. This default rule builds from the `master` branch in your source code, and creates a Docker image tagged with `latest`.

    You can also use a regex to select which source branches or tags to build. To learn more, see [regexes](automated-build.md#regexes-and-automated-builds).

* Enter the tag to apply to Docker images built from this source.

    If you configured a regex to select the source, you can reference the capture groups and use its result as part of the tag. To learn more, see [regexes](automated-build.md#regexes-and-automated-builds).

* Specify the **Dockerfile location** as a path relative to the root of the source code repository. (If the Dockerfile is at the repository root, leave this path set to `/`.)

> **Note:** When Docker Cloud pulls a branch from a source code repository, it performs
> a shallow clone (only the tip of the specified branch). Refer to [Advanced options for Autobuild and Autotest](advanced.md)
> for more information.

### Environment variables for builds

You can set the values for environment variables used in your build processes when you configure an automated build. Add your build environment variables by clicking the plus sign next to the **Build environment variables** section, and then entering a variable name and the value.

When you set variable values from the Docker Cloud UI, they can be used by the commands you set in `hooks` files, but they are stored so that only users who have `admin` access to the Docker Cloud repository can see their values. This means you can use them to safely store access tokens or other information that should remain secret.

> **Note**: The variables set on the build configuration screen are used during
> the build processes _only_ and should not be confused with the environment
> values used by your service (for example to create service links).

## Check your active builds

A summary of a repository's builds appears both on the repository **General** tab, and in the **Builds** tab. The **Builds** tab also displays a color-coded bar chart of the build queue times and durations. Both views display the pending, in progress, successful, and failed builds for any tag of the repository.

From either location, you can click a build job to view its build report. The build report shows information about the build job, including the source repository and branch (or tag), the build duration, creation time and location, and the user namespace the build occurred in.



## Cancel or retry a build

While a build is queued or running, a **Cancel** icon appears next to its build report link on the General tab and on the Builds tab. You can also click the **Cancel** button from the build report page, or from the Timeline tab's logs display for the build.



If a build fails, a **Retry** icon appears next to the build report line on the General and Builds tabs, and the build report page and Timeline logs also display a **Retry** button.



> **Note**: If you are viewing the build details for a repository that belongs
> to an Organization, the Cancel and Retry buttons only appear if you have `Read & Write` access to the repository.

## Disable an automated build

Automated builds are enabled per branch or tag, and can be disabled and re-enabled easily. You might do this when you want to only build manually for a while, for example when you are doing major refactoring in your code. Disabling autobuilds does not disable [autotests](automated-testing.md).

To disable an automated build:

1. From the **Repositories** page, click into a repository, and click the **Builds** tab.

2. Click **Configure automated builds** to edit the repository's build settings.

3. In the **Build Rules** section, locate the branch or tag you no longer want to automatically build.

4. Click the **autobuild** toggle next to the configuration line.

    The toggle turns gray when disabled.

5. Click **Save** to save your changes.

## Advanced automated build options

At a minimum, you need a build rule composed of a source branch (or tag) and a destination Docker tag to set up an automated build. You can also change where the build looks for the Dockerfile, set a path to the files the build uses (the build context), set up multiple static tags or branches to build from, and use regular expressions (regexes) to dynamically select source code to build and create dynamic tags.

All of these options are available from the **Build configuration** screen for each repository. Click **Repositories** from the left navigation, click the name of the repository you want to edit, click the **Builds** tab, and click **Configure Automated builds**.

### Tag and branch builds

You can configure your automated builds so that pushes to specific branches or tags trigger a build.

1. In the **Build Rules** section, click the plus sign to add more sources to build.

2. Select the **Source type** to build: either a **tag** or a **branch**.

    This tells the build system what type of source to look for in the code repository.

3. Enter the name of the **Source** branch or tag you want to build.

    You can enter a name, or use a regex to match which source branch or tag names to build. To learn more, see [regexes](automated-build.md#regexes-and-automated-builds).

4. Enter the tag to apply to Docker images built from this source.

    If you configured a regex to select the source, you can reference the capture groups and use its result as part of the tag. To learn more, see [regexes](automated-build.md#regexes-and-automated-builds).

5. Repeat steps 2 through 4 for each new build rule you set up.

### Set the build context and Dockerfile location

Depending on how the files are arranged in your source code repository, the files required to build your images may not be at the repository root. If that's the case, you can specify a path where the build looks for the files.

The _build context_ is the path to the files needed for the build, relative to the root of the repository. Enter the path to these files in the **Build context** field. Enter `/` to set the build context as the root of the source code repository.

> **Note**: If you delete the default path `/` from the **Build context** field and leave it blank, the build system uses the path to the Dockerfile as the build context. However, to avoid confusion we recommend that you specify the complete path.

You can specify the **Dockerfile location** as a path relative to the build context. If the Dockerfile is at the root of the build context path, leave the Dockerfile path set to `/`. (If the build context field is blank, set the path to the Dockerfile from the root of the source repository.)

### Regexes and automated builds

You can specify a regular expression (regex) so that only matching branches or tags are built. You can also use the results of the regex to create the Docker tag that is applied to the built image.

You can use the variable `{sourceref}` to use the branch or tag name that matched the regex in the Docker tag applied to the resulting built image. (The variable includes the whole source name, not just the portion that matched the regex.) You can also use up to nine regular expression capture groups (expressions enclosed in parentheses) to select a source to build, and reference these in the Docker Tag field using `{\1}` through `{\9}`.

**Regex example: build from version number branch and tag with version number**

You might want to automatically build any branches that end with a number formatted like a version number, and tag their resulting Docker images using a name that incorporates that branch name.

To do this, specify a `branch` build with the regex `/[0-9.]+$/` in the **Source** field, and use the formula `version-{sourceref}` in the **Docker tag** field.
<!-- Capture groups Not a priority
|
||||
#### Regex example: build from version number branch and tag with version number
|
||||
|
||||
You could also use capture groups to build and label images that come from various sources. For example, you might have
|
||||
|
||||
`/(alice|bob)-v([0-9.]+)/` -->
|
||||
|
||||
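As a quick sanity check, the matching behavior of the example above can be sketched with `grep -E`; the branch names here are hypothetical:

```shell
# The build rule regex /[0-9.]+$/ matches branch names that end in a
# version-like number:
echo "release-1.2.3" | grep -E '[0-9.]+$'            # matches, so it is built
echo "feature-login" | grep -E '[0-9.]+$' || echo "no match, not built"

# With the Docker tag formula version-{sourceref}, the WHOLE branch name
# (not just the matched portion) is substituted into the tag:
branch="release-1.2.3"
echo "version-${branch}"
```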
### Create multiple Docker tags from a single build

By default, each build rule builds a source branch or tag into a Docker image,
and then tags that image with a single tag. However, you can also create several
tagged Docker images from a single build rule.

To create multiple tags from a single build rule, enter a comma-separated list
of tags in the **Docker tag** field in the build rule. If an image with that tag
already exists, Docker Cloud overwrites the image when the build completes
successfully. If you have automated tests configured, the build must pass these
tests as well before the image is overwritten. You can use both regex references
and plain text values in this field simultaneously.

For example, if you want to update the image tagged with `latest` at the same
time as you tag an image for a specific version, you could enter
`{sourceref},latest` in the **Docker tag** field.

If you need to update a tag _in another repository_, use [a post_build hook](advanced.md#push-to-multiple-repos) to push to a second repository.

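The expansion can be sketched in shell terms; the repository name and source ref are hypothetical examples, and this is only an illustration of how a comma-separated **Docker tag** value produces multiple tags:

```shell
tag_field="{sourceref},latest"   # value entered in the Docker tag field
sourceref="v1.2.3"               # tag or branch that matched the build rule

# Substitute the variable, then split the result on commas; each entry
# becomes one tag on the image pushed by the build:
expanded=$(printf '%s' "$tag_field" | sed "s/{sourceref}/$sourceref/")
for tag in $(printf '%s' "$expanded" | tr ',' ' '); do
  echo "example/repo:$tag"
done
```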
## Build repositories with linked private submodules

Docker Cloud sets up a deploy key in your source code repository that allows it
to clone the repository and build it; however, this key only works for a single,
specific code repository. If your source code repository uses private Git
submodules (or requires that you clone other private repositories to build),
Docker Cloud cannot access these additional repos, your build cannot complete,
and an error is logged in your build timeline.

To work around this, you can set up your automated build using the `SSH_PRIVATE` environment variable to override the deployment key and grant Docker Cloud's build system access to the repositories.

> **Note**: If you are using autobuild for teams, use [the process below](automated-build.md#service-users-for-team-autobuilds) instead, and configure a service user for your source code provider. You can also do this for an individual account to limit Docker Cloud's access to your source repositories.

1. Generate an SSH keypair that you use for builds only, and add the public key to your source code provider account.

   This step is optional, but allows you to revoke the build-only keypair without removing other access.

2. Copy the private half of the keypair to your clipboard.
3. In Docker Cloud, navigate to the build page for the repository that has linked private submodules. (If necessary, follow the steps [here](automated-build.md#configure-automated-build-settings) to configure the automated build.)
4. At the bottom of the screen, click the plus sign ( **+** ) next to **Build Environment variables**.
5. Enter `SSH_PRIVATE` as the name for the new environment variable.
6. Paste the private half of the keypair into the **Value** field.
7. Click **Save**, or **Save and Build** to validate that the build now completes.

> **Note**: You must configure your private Git submodules using git clone over SSH (`git@submodule.tld:some-submodule.git`) rather than HTTPS.

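Step 1 can be done from a terminal; this is a minimal sketch, and the key file name and comment are arbitrary examples:

```shell
# Generate a dedicated build-only keypair with no passphrase, so the
# build system can use it non-interactively:
ssh-keygen -t ed25519 -N "" -C "docker-cloud-autobuild" -f ./build_key

cat ./build_key.pub   # public half: add to your source code provider account
cat ./build_key       # private half: paste into the SSH_PRIVATE variable
```

Keeping this keypair separate from your personal keys means you can revoke it later without disturbing any other access.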
## Autobuild for Teams

When you create an automated build repository in your own account namespace, you can start, cancel, and retry builds, and edit and delete your own repositories.

These same actions are also available for team repositories from Docker Hub if
you are a member of the Organization's `Owners` team. If you are a member of a
team with `write` permissions, you can start, cancel, and retry builds in your
team's repositories, but you cannot edit the team repository settings or delete
the team repositories. If your user account has `read` permission, or if you're
a member of a team with `read` permission, you can view the build configuration,
including any testing settings.

| Action/Permission    | read | write | admin | owner |
| -------------------- | ---- | ----- | ----- | ----- |
| view build details   | x    | x     | x     | x     |
| start, cancel, retry |      | x     | x     | x     |
| edit build settings  |      |       | x     | x     |
| delete build         |      |       |       | x     |

### Service users for team autobuilds

> **Note**: Only members of the `Owners` team can set up automated builds for teams.

When you set up automated builds for teams, you grant Docker Cloud access to
your source code repositories using OAuth tied to a specific user account. This
means that Docker Cloud has access to everything that the linked source provider
account can access.

For organizations and teams, we recommend creating a dedicated service account
(or "machine user") to grant access to the source provider. This ensures that no
builds break as individual users' access permissions change, and that an
individual user's personal projects are not exposed to an entire organization.

This service account should have access to any repositories to be built,
and must have administrative access to the source code repositories so it can
manage deploy keys. If needed, you can limit this account to only the specific
set of repositories required for a build.

If you are building repositories with linked private submodules (private
dependencies), you also need to add an override `SSH_PRIVATE` environment
variable to automated builds associated with the account.

1. Create a service user account on your source provider, and generate SSH keys for it.
2. Create a "build" team in your organization.
3. Ensure that the new "build" team has access to each repository and submodule you need to build.

   Go to the repository's **Settings** page. On GitHub, add the new "build" team to the list of **Collaborators and Teams**. On Bitbucket, add the "build" team to the list of approved users on the **Access management** screen.

4. Add the service user to the "build" team on the source provider.

5. Log in to Docker Cloud as a member of the `Owners` team, switch to the organization, and follow the instructions to [link to source code repository](link-source.md) using the service account.

   > **Note**: You may need to log out of your individual account on the source code provider to create the link to the service account.

6. Optionally, use the SSH keys you generated to set up any builds with private submodules, using the service account and [the instructions above](automated-build.md#build-repositories-with-linked-private-submodules).

## What's Next?

### Customize your build process

Additional advanced options are available for customizing your automated builds,
including utility environment variables, hooks, and build phase overrides. To
learn more, see [Advanced options for Autobuild and Autotest](advanced.md).

### Set up builder nodes

If you are building on your own infrastructure, you can run the build process on
specific nodes by adding the `builder` label to them. If no builder nodes are
specified, the build containers are deployed using an "emptiest node" strategy.

You can also limit the number of concurrent builds (including `autotest` builds)
on a specific node by using a `builder=n` tag, where `n` is the number of
builds to allow. For example, a node tagged with `builder=5` allows up to
five concurrent builds or autotest builds at the same time.

### Autoredeploy services on successful build

You can configure your services to automatically redeploy once the build
succeeds. [Learn more about autoredeploy](../apps/auto-redeploy.md).

### Add automated tests

To test your code before the image is pushed, you can use
Docker Cloud's [Autotest](automated-testing.md) feature, which
integrates seamlessly with autobuild and autoredeploy.

> **Note**: While the Autotest feature builds an image for testing purposes, it
> does not push the resulting image to Docker Cloud or the external registry.

---
description: Automated tests
keywords: Automated, testing, repository
redirect_from:
- /docker-cloud/feature-reference/automated-testing/
title: Automated repository tests
---

Docker Cloud can automatically test changes to your source code repositories
using containers. You can enable `Autotest` on [any Docker Cloud repository](repos.md)
to run tests on each pull request to the source code repository to create a
continuous integration testing service.

[Automated Tests with Docker Cloud (video)](https://www.youtube.com/watch?v=KX6PD2MANRI "Automated Tests with Docker Cloud"){:target="_blank" class="_"}

Enabling `Autotest` builds an image for testing purposes, but does **not**
automatically push the built image to the Docker repository. If you want to push
built images to your Docker Cloud repository, enable [Automated Builds](automated-build.md).

## Set up automated test files

To set up your automated tests, create a `docker-compose.test.yml` file which
defines a `sut` service that lists the tests to be run. This file has a structure
similar to the [docker-cloud.yml](/docker-cloud/apps/stack-yaml-reference/).
The `docker-compose.test.yml` file should be located in the same directory that
contains the Dockerfile used to build the image.

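For example, a minimal `docker-compose.test.yml` might look like the following sketch; `run_tests.sh` is a hypothetical test script in your repository:

```yaml
sut:
  build: .
  command: run_tests.sh
```

The image is built from the Dockerfile in the same directory, and the tests pass if the `run_tests.sh` command exits with `0`.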
You can define any number of linked services in this file. The only requirement
is that `sut` is defined. Its return code determines if tests passed or not.
Tests **pass** if the `sut` service returns `0`, and **fail** otherwise.

> **Note**: Only the `sut` service and all other services listed in `depends_on`
> are started. For instance, if you have services that poll for changes in other
> services, be sure to include the polling services in the `depends_on` list to
> make sure all of your services start.

You can define more than one `docker-compose.test.yml` file if needed. Any file
that ends in `.test.yml` is used for testing, and the tests run sequentially.
You can also use [custom build
hooks](advanced.md#override-build-test-or-push-commands) to further customize
your test behavior.

> **Note**: If you enable Automated builds, they also run any tests defined
> in the `test.yml` files.

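The pass/fail rule is simply the `sut` exit status. This sketch shows the rule applied to an example exit code; the `docker-compose` invocation in the comment is an assumption about how to approximate a test run locally, not something the build system runs verbatim:

```shell
# Local approximation (requires Docker; not run here):
#   docker-compose -f docker-compose.test.yml up --build --exit-code-from sut
# The build system applies the same rule to the sut container's exit status:
sut_exit=0    # example exit status reported by the sut service
if [ "$sut_exit" -eq 0 ]; then
  echo "tests passed"
else
  echo "tests failed"
fi
```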
## Enable automated tests on a repository

To enable testing on a source code repository, you must first create an
associated build-repository in Docker Cloud. Your `Autotest` settings are
configured on the same page as [automated builds](automated-build.md); however,
you do not need to enable Autobuilds to use `Autotest`. Autobuild is enabled per
branch or tag, and you do not need to enable it at all.

Only branches that are configured to use **Autobuild** push images to the
Docker repository, regardless of the Autotest settings.

1. Log in to Docker Cloud and select **Repositories** in the left navigation.

2. Select the repository you want to enable `Autotest` on.

3. From the repository view, click the **Builds** tab.

4. Click **Configure automated builds**.

5. Configure the automated build settings as explained in [Automated Builds](automated-build.md).

   At minimum you must configure:

   * the build location
   * at least one build rule

6. Choose your **Autotest** option.

   The following options are available:

   pull requests to branches that match a build rule, including when the
   pull request originated in an external source repository.

   > **Note**: For security purposes, autotest on _external pull requests_ is
   > limited on public repositories. Private images are not pulled and
   > environment variables defined in Docker Cloud are not
   > available. Automated builds continue to work as usual.

7. Click **Save** to save the settings, or click **Save and build** to save and
   run an initial test.

## Check your test results
|