[WIP] Hub/Store/Cloud SaaS consolidation (#644)

This commit is contained in:
Gwendolynne Barr 2018-06-11 12:30:35 -07:00 committed by GitHub
parent 64d1bf5020
commit 07306b639d
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
343 changed files with 2564 additions and 11292 deletions


@@ -3028,177 +3028,7 @@ manuals:
title: Get support
- title: Get support
path: /ee/get-support/
- sectiontitle: Docker Cloud
section:
- sectiontitle: Migration
section:
- path: /docker-cloud/migration/
title: Migration overview
- path: /docker-cloud/migration/cloud-to-swarm/
title: Migrate to Docker CE
- path: /docker-cloud/migration/cloud-to-kube-aks/
title: Migration to AKS
- path: /docker-cloud/migration/cloud-to-kube-gke/
title: Migrate to GKE
- path: /docker-cloud/migration/deregister-swarms/
title: Deregister swarms
- path: /docker-cloud/migration/kube-primer/
title: Kubernetes primer
- path: /docker-cloud/
title: About Docker Cloud
- path: /docker-cloud/dockerid/
title: Docker Cloud settings and Docker ID
- path: /docker-cloud/orgs/
title: Organizations and teams
- sectiontitle: Manage builds and images
section:
- path: /docker-cloud/builds/
title: Builds and images overview
- path: /docker-cloud/builds/repos/
title: Docker Cloud repositories
- path: /docker-cloud/builds/link-source/
title: Link to a source code repository
- path: /docker-cloud/builds/push-images/
title: Push images to Docker Cloud
- path: /docker-cloud/builds/automated-build/
title: Automated builds
- path: /docker-cloud/builds/automated-testing/
title: Automated repository tests
- path: /docker-cloud/builds/advanced/
title: Advanced options for autobuild and autotest
- sectiontitle: Manage swarms (beta swarm mode)
section:
- path: /docker-cloud/cloud-swarm/
title: Overview
- path: /docker-cloud/cloud-swarm/using-swarm-mode/
title: Using Swarm mode
- path: /docker-cloud/cloud-swarm/register-swarms/
title: Register existing swarms
- path: /docker-cloud/cloud-swarm/create-cloud-swarm-aws/
title: Create a new swarm on Amazon Web Services in Docker Cloud
- path: /docker-cloud/cloud-swarm/create-cloud-swarm-azure/
title: Create a new swarm on Microsoft Azure in Docker Cloud
- path: /docker-cloud/cloud-swarm/connect-to-swarm/
title: Connect to a swarm through Docker Cloud
- path: /docker-cloud/cloud-swarm/link-aws-swarm/
title: Link Amazon Web Services to Docker Cloud
- path: /docker-cloud/cloud-swarm/link-azure-swarm/
title: Link Microsoft Azure Cloud Services to Docker Cloud
- path: /docker-cloud/cloud-swarm/ssh-key-setup/
title: Set up SSH keys
- sectiontitle: Manage Infrastructure (standard mode)
section:
- path: /docker-cloud/infrastructure/
title: Infrastructure overview
- path: /docker-cloud/infrastructure/deployment-strategies/
title: Container distribution strategies
- path: /docker-cloud/infrastructure/link-aws/
title: Link to Amazon Web Services hosts
- path: /docker-cloud/infrastructure/link-do/
title: Link to DigitalOcean hosts
- path: /docker-cloud/infrastructure/link-azure/
title: Link to Microsoft Azure hosts
- path: /docker-cloud/infrastructure/link-packet/
title: Link to Packet hosts
- path: /docker-cloud/infrastructure/link-softlayer/
title: Link to SoftLayer hosts
- path: /docker-cloud/infrastructure/ssh-into-a-node/
title: SSH into a Docker Cloud-managed node
- path: /docker-cloud/infrastructure/docker-upgrade/
title: Upgrade Docker on a node
- path: /docker-cloud/infrastructure/byoh/
title: Use the Docker Cloud agent
- path: /docker-cloud/infrastructure/cloud-on-packet.net-faq/
title: Use Docker Cloud and Packet.net
- path: /docker-cloud/infrastructure/cloud-on-aws-faq/
title: Use Docker Cloud on AWS
- sectiontitle: Manage nodes and apps (standard mode)
section:
- path: /docker-cloud/standard/
title: Overview
- sectiontitle: Getting started
section:
- path: /docker-cloud/getting-started/
title: Getting started with Docker Cloud
- path: /docker-cloud/getting-started/intro_cloud/
title: Introducing Docker Cloud
- path: /docker-cloud/getting-started/connect-infra/
title: Link to your infrastructure
- path: /docker-cloud/getting-started/your_first_node/
title: Deploy your first node
- path: /docker-cloud/getting-started/your_first_service/
title: Deploy your first service
- sectiontitle: Deploy an application
section:
- path: /docker-cloud/getting-started/deploy-app/1_introduction/
title: Introduction to deploying an app in Docker Cloud
- path: /docker-cloud/getting-started/deploy-app/2_set_up/
title: Set up your environment
- path: /docker-cloud/getting-started/deploy-app/3_prepare_the_app/
title: Prepare the application
- path: /docker-cloud/getting-started/deploy-app/4_push_to_cloud_registry/
title: Push the image to Docker Cloud's Registry
- path: /docker-cloud/getting-started/deploy-app/5_deploy_the_app_as_a_service/
title: Deploy the app as a Docker Cloud service
- path: /docker-cloud/getting-started/deploy-app/6_define_environment_variables/
title: Define environment variables
- path: /docker-cloud/getting-started/deploy-app/7_scale_the_service/
title: Scale the service
- path: /docker-cloud/getting-started/deploy-app/8_view_logs/
title: View service logs
- path: /docker-cloud/getting-started/deploy-app/9_load-balance_the_service/
title: Load-balance the service
- path: /docker-cloud/getting-started/deploy-app/10_provision_a_data_backend_for_your_service/
title: Provision a data backend for the service
- path: /docker-cloud/getting-started/deploy-app/11_service_stacks/
title: Stackfiles for your service
- path: /docker-cloud/getting-started/deploy-app/12_data_management_with_volumes/
title: Data management with volumes
- sectiontitle: Manage applications
section:
- path: /docker-cloud/apps/
title: Applications in Docker Cloud
- path: /docker-cloud/apps/deploy-to-cloud-btn/
title: Add a deploy to Docker Cloud button
- path: /docker-cloud/apps/auto-destroy/
title: Automatic container destroy
- path: /docker-cloud/apps/autorestart/
title: Automatic container restart
- path: /docker-cloud/apps/auto-redeploy/
title: Automatic service redeploy
- path: /docker-cloud/apps/load-balance-hello-world/
title: Create a proxy or load balancer
- path: /docker-cloud/apps/deploy-tags/
title: Deployment tags
- path: /docker-cloud/apps/stacks/
title: Manage service stacks
- path: /docker-cloud/apps/ports/
title: Publish and expose service or container ports
- path: /docker-cloud/apps/service-redeploy/
title: Redeploy running services
- path: /docker-cloud/apps/service-scaling/
title: Scale your service
- path: /docker-cloud/apps/api-roles/
title: Service API roles
- path: /docker-cloud/apps/service-links/
title: Service discovery and links
- path: /docker-cloud/apps/triggers/
title: Use triggers
- path: /docker-cloud/apps/volumes/
title: Work with data volumes
- path: /docker-cloud/apps/stack-yaml-reference/
title: Cloud stack file YAML reference
- path: /docker-cloud/slack-integration/
title: Docker Cloud notifications in Slack
- path: /apidocs/docker-cloud/
title: Docker Cloud API
nosync: true
- path: /docker-cloud/installing-cli/
title: The Docker Cloud CLI
- path: /docker-cloud/docker-errors-faq/
title: Known issues in Docker Cloud
- path: /docker-cloud/release-notes/
title: Release notes
- sectiontitle: Docker Compose
section:
- path: /compose/overview/
@@ -3453,48 +3283,62 @@ manuals:
title: Migrate from Boot2Docker to Machine
- path: /release-notes/docker-machine/
title: Docker Machine release notes
- sectiontitle: Docker Store
section:
- path: /docker-store/
title: About Docker Store
- sectiontitle: Docker Store FAQs
section:
- path: /docker-store/customer_faq/
title: Customer FAQs
- path: /docker-store/publisher_faq/
title: Publisher FAQs
- sectiontitle: For Publishers
section:
- path: /docker-store/publish/
title: Publish content on Docker Store
- path: /docker-store/certify-images/
title: Certify Docker images
- path: /docker-store/certify-plugins-logging/
title: Certify Docker logging plugins
- path: /docker-store/trustchain/
title: Docker Store trust chain
- path: /docker-store/byol/
title: Bring Your Own License (BYOL)
- sectiontitle: Docker Hub
section:
- path: /docker-hub/
title: Overview of Docker Hub
- path: /docker-hub/accounts/
title: Use Docker Hub with Docker ID
- path: /docker-hub/orgs/
title: Teams & organizations
- path: /docker-hub/repos/
title: Repositories on Docker Hub
- path: /docker-hub/builds/
title: Automated builds
- path: /docker-hub/webhooks/
title: Webhooks for automated builds
- path: /docker-hub/bitbucket/
title: Automated builds with Bitbucket
- path: /docker-hub/github/
title: Automated builds from GitHub
- path: /docker-hub/official_repos/
title: Official repositories on Docker Hub
- title: Docker Hub overview
path: /docker-hub/
- title: Create Docker Hub account
path: /docker-hub/accounts/
- title: Run Docker CLI commands
path: /docker-hub/commandline/
- sectiontitle: Discover content
section:
- title: Content overview
path: /docker-hub/discover/
- title: Official repos
path: /docker-hub/discover/official-repos/
- sectiontitle: Manage repositories
section:
- title: Repository overview
path: /docker-hub/manage/
- title: Create and configure repos
path: /docker-hub/manage/repos/
- title: Create orgs and teams
path: /docker-hub/manage/orgs-teams/
- title: Push images
path: /docker-hub/manage/push-images/
- sectiontitle: Autobuild images
section:
- title: Autobuild Docker images
path: /docker-hub/build/
- title: Autotest repositories
path: /docker-hub/build/autotest/
- title: Advanced options
path: /docker-hub/build/advanced/
- title: Build from GitHub
path: /docker-hub/build/github/
- title: Build from Bitbucket
path: /docker-hub/build/bitbucket/
- title: Webhooks
path: /docker-hub/build/webhooks/
- sectiontitle: Publish content
section:
- title: Publish Docker images
path: /docker-hub/publish/
- title: Certify Docker images
path: /docker-hub/publish/certify-images/
- title: Certify Docker logging plugins
path: /docker-hub/publish/certify-plugins-logging/
- title: Docker Hub trust chain
path: /docker-hub/publish/trustchain/
- title: Bring Your Own License (BYOL)
path: /docker-hub/publish/byol/
- title: FAQs on publishing center
path: /docker-hub/publish/faq-publisher/
- title: Customer FAQs
path: /docker-hub/publish/faq-customer/
- sectiontitle: Open-source projects
section:
- sectiontitle: Docker Notary
@@ -3617,7 +3461,7 @@ manuals:
title: Docker Compose
nosync: true
- path: /docker-cloud/release-notes/
title: Docker Cloud
title:
nosync: true
- path: /docker-for-aws/release-notes/
title: Docker for AWS


@@ -0,0 +1,71 @@
1. Open a terminal and log into Docker Hub with the Docker CLI:
```
$ docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username: gordon
Password:
WARNING! Your password will be stored unencrypted in /home/gwendolynne/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
```
2. Search for the `busybox` image:
```
$ docker search busybox
NAME DESCRIPTION STARS OFFICIAL AUTOMATED
busybox Busybox base image. 1268 [OK]
progrium/busybox 66 [OK]
hypriot/rpi-busybox-httpd Raspberry Pi compatible … 41
radial/busyboxplus Full-chain, Internet enabled, … 19 [OK]
...
```
> Private repos are not returned on the command line. Go to the Docker Hub UI
> to see the repos you can access.
3. Pull the official busybox image to your machine and list it (to ensure it was
pulled):
```
$ docker pull busybox
Using default tag: latest
latest: Pulling from library/busybox
07a152489297: Pull complete
Digest: sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47
Status: Downloaded newer image for busybox:latest
$ docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
busybox latest 8c811b4aec35 11 days ago 1.15MB
```
4. Tag the official image (to differentiate it), list it, and push it to your
personal repo:
```
$ docker tag busybox <DOCKER ID>/busybox:test-tag
$ docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
gordon/busybox test-tag 8c811b4aec35 11 days ago 1.15MB
busybox latest 8c811b4aec35 11 days ago 1.15MB
$ docker push <DOCKER ID>/busybox:test-tag
```
5. Log out from Docker Hub:
```
$ docker logout
```
6. Log on to the [Docker Hub UI](https://hub.docker.com){: target="_blank" class="_"} and view the image you
pushed.
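
The credential-store warning in step 1 can be silenced by configuring a credential helper. A minimal sketch of `~/.docker/config.json`, assuming the `osxkeychain` helper binary (`docker-credential-osxkeychain`) is installed on your `PATH` — pick the helper that matches your platform:
```json
{
  "credsStore": "osxkeychain"
}
```
With this in place, `docker login` stores the token in the OS keychain instead of writing it unencrypted to `config.json`.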


@@ -0,0 +1,21 @@
When you register for a Docker ID, it becomes your user namespace in Docker Hub
and your username on the [Docker Forums](https://forums.docker.com/){: target="_blank" class="_"}.
1. Go to [Docker Hub](https://hub.docker.com/){: target="_blank" class="_"}.
2. Click **Create Docker ID** (top right).
3. Fill out the required fields:
- **Docker ID** (or username): Must be 4 to 30 characters long, only numbers
and lowercase letters.
- **Email address**: Must be unique and valid.
- **Password**: Must be 6 to 128 characters long.
4. Click **Sign Up**. Docker sends a verification email to the address you
provided.
5. Go to your email and click the link to verify your address. You cannot log
in until you verify.
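
The Docker ID constraints above (4 to 30 characters, only numbers and lowercase letters) can be captured in a small validation check — a sketch only, covering just the rules listed here:

```shell
# Hypothetical validator for the Docker ID rules above:
# 4 to 30 characters, lowercase letters and digits only.
is_valid_docker_id() {
  printf '%s\n' "$1" | grep -Eq '^[a-z0-9]{4,30}$'
}

is_valid_docker_id "gordon" && echo "gordon is a valid Docker ID"
# prints "gordon is a valid Docker ID"
```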


@@ -1,35 +0,0 @@
---
description: API Roles
keywords: API, Services, roles
redirect_from:
- /docker-cloud/feature-reference/api-roles/
title: Service API roles
notoc: true
---
You can configure a service so that it can access the Docker Cloud API. When you
grant API access to a service, its containers receive a token through an
environment variable, which is used to query the Docker Cloud API.
Docker Cloud has a "full access" role which, when granted, allows any operation
to be performed through the API. You can enable this option on the **Environment variables** screen of the Service wizard, or [specify it in your service's stackfile](stack-yaml-reference.md#roles). When enabled, Docker Cloud generates an authorization token for the
service's containers, which is stored in an environment variable called
`DOCKERCLOUD_AUTH`.
Use this variable to set the `Authorization` HTTP header when calling
Docker Cloud's API:
```bash
$ curl -H "Authorization: $DOCKERCLOUD_AUTH" -H "Accept: application/json" https://cloud.docker.com/api/app/v1/service/
```
You can use this feature with Docker Cloud's [automatic environment variables](service-links.md), to let your application inside a container read and perform operations using Docker Cloud's API.
```bash
$ curl -H "Authorization: $DOCKERCLOUD_AUTH" -H "Accept: application/json" $WEB_DOCKERCLOUD_API_URL
```
For example, you can use information retrieved using the API to read the linked
endpoints, and use them to reconfigure a proxy container.
See the [API documentation](/apidocs/docker-cloud.md) for more information on the different API operations available.


@@ -1,77 +0,0 @@
---
description: Autodestroy
keywords: Autodestroy, service, terminate, container
redirect_from:
- /docker-cloud/feature-reference/auto-destroy/
title: Destroy containers automatically
---
When enabled on a service, **Autodestroy** automatically terminates containers
when they stop. **This destroys all data in the container on stop.** This is
useful for one-time actions that store their results in an external system.
The following Autodestroy options are available:
- `OFF`: the container remains in the **Stopped** state regardless of exit code, and is not destroyed.
- `ON_SUCCESS`: if the container stops with an exit code of 0 (normal shutdown), Docker Cloud automatically destroys it. If it stops with any other exit code, Docker Cloud leaves it in the **Stopped** state.
- `ALWAYS`: if the container stops, Docker Cloud automatically terminates it regardless of the exit code.
If **Autorestart** is activated, Docker Cloud first evaluates whether to restart the container before applying the **Autodestroy** setting.
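The three options above amount to a simple decision on the container's exit code. A sketch of that logic (illustrative only, not Docker Cloud's actual implementation):

```shell
# Given an Autodestroy setting and a container exit code, decide whether
# Docker Cloud would destroy the stopped container (sketch only).
should_destroy() {
  local setting="$1" exit_code="$2"
  case "$setting" in
    ALWAYS)     return 0 ;;
    ON_SUCCESS) [ "$exit_code" -eq 0 ] ;;
    *)          return 1 ;;  # OFF: container stays in the Stopped state
  esac
}

should_destroy ON_SUCCESS 0 && echo "destroyed"
# prints "destroyed"
```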
## Launch a service with Autodestroy
You can enable **Autodestroy** on the **Service configuration** step of the **Launch new service** wizard.
![](images/autodestroy.png)
Autodestroy is set to `OFF` (deactivated) by default.
### Use the API or CLI
You can enable autodestroy when launching a service through the API or CLI.
If not provided, it has a default value of `OFF`. Check our [API documentation](/apidocs/docker-cloud.md) for more information.
#### Launch with autodestroy using the API
```
POST /api/app/v1/service/ HTTP/1.1
{
"autodestroy": "ALWAYS",
[...]
}
```
#### Launch with autodestroy using the CLI
```
$ docker-cloud service run --autodestroy ALWAYS [...]
```
## Enable autodestroy on an already deployed service
You can also activate or deactivate the **Autodestroy** setting on a service
after it has been deployed, by editing the service.
1. Go to the service detail page.
2. Click **Edit**.
3. Select the new autodestroy setting.
4. Click **Save**.
### Use the API or CLI
You can set the **Autodestroy** option after the service has been
deployed, using the API or CLI.
Check our [API documentation](/apidocs/docker-cloud.md) for more information.
#### Enable autodestroy using the API
```
PATCH /api/app/v1/service/(uuid)/ HTTP/1.1
{
"autodestroy": "ALWAYS"
}
```
#### Enable autodestroy using the CLI
```
$ docker-cloud service set --autodestroy ALWAYS (name or uuid)
```


@@ -1,83 +0,0 @@
---
description: Autoredeploy
keywords: Autoredeploy, image, store, service
redirect_from:
- /docker-cloud/feature-reference/auto-redeploy/
title: Redeploy services automatically
---
[![Automated Deployments with Docker Cloud](images/video-auto-redeploy-docker-cloud.png)](https://www.youtube.com/watch?v=I4depUwfbFc "Automated Deployments with Docker Cloud"){:target="_blank"}
Docker Cloud's **Autoredeploy** feature allows a service that uses an image
stored in Docker Hub to automatically redeploy whenever a new image is pushed or
built.
> **Notes**:
>
>* **Autoredeploy** works only for Docker Hub images with the _latest_ tag.
>
>* To enable **autoredeploy** on an image stored in a third party registry,
> you need to use [redeploy triggers](triggers.md) instead.
## Launch a new service with autoredeploy
You can enable **autoredeploy** when launching a service, from the **general settings** section of the **Launch new service** wizard.
![](images/service-wizard-autoredeploy.png)
By default, autoredeploy is *deactivated*.
### Use the CLI or API
You can enable **autoredeploy** when launching a service using the CLI or API.
By default, autoredeploy is set to `false`. See the [API documentation](/apidocs/docker-cloud.md) for more information.
#### Enable autoredeploy using the CLI
```
$ docker-cloud service run --autoredeploy [...]
```
#### Enable autoredeploy using the API
```
POST /api/app/v1/service/ HTTP/1.1
{
"autoredeploy": true,
[...]
}
```
## Enable autoredeploy on an already deployed service
You can activate or deactivate **autoredeploy** on a service after it has been deployed.
1. Click into the service detail page.
2. Click **Edit**.
3. Change the **autoredeploy** setting on the form to `true`.
4. Click **Save changes**.
### Use the CLI or API
You can set the **autoredeploy** option after the service has been deployed,
using the CLI or API.
Check our [API documentation](/apidocs/docker-cloud.md) for more information.
#### Enable autoredeploy using the CLI
```bash
$ docker-cloud service set --autoredeploy (name or uuid)
```
#### Enable autoredeploy using the API
```
PATCH /api/app/v1/service/(uuid)/ HTTP/1.1
{
"autoredeploy": true
}
```


@@ -1,88 +0,0 @@
---
description: Automatically restart a container in Docker Cloud
keywords: container, restart, automated
redirect_from:
- /docker-cloud/feature-reference/autorestart/
title: Restart a container automatically
---
**Autorestart** is a service-level setting that can automatically start your
containers if they stop or crash. You can use this setting as an automatic crash
recovery mechanism.
Autorestart uses Docker's restart policies (the `--restart` flag). When
triggered, the Docker daemon attempts to restart the container until it
succeeds. If the first restart attempt fails, the daemon continues to retry,
using an incremental back-off algorithm between attempts.
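As a rough illustration of incremental back-off — the delay values below are made up, not the daemon's actual schedule:

```shell
# Illustrative back-off schedule: double the delay after each failed
# restart attempt (delays in milliseconds; values are hypothetical).
print_backoff_schedule() {
  local delay=100 attempt
  for attempt in 1 2 3 4 5; do
    echo "attempt ${attempt}: wait ${delay}ms"
    delay=$((delay * 2))
  done
}

print_backoff_schedule
# prints "attempt 1: wait 100ms" through "attempt 5: wait 1600ms"
```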
The following Autorestart options are available:
- `OFF`: the container does not restart, regardless of the exit code.
- `ON_FAILURE`: the container restarts *only* if it stops with an exit code other than 0. (0 is for normal shutdown.)
- `ALWAYS`: the container restarts automatically, regardless of the exit code.
> **Note**: If **Autorestart** is set to `ALWAYS`, **Autodestroy** must be set to `OFF`.
If the Docker daemon in a node restarts (because it was upgraded, or because the
underlying node was restarted), the daemon only restarts containers that
have **Autorestart** set to `ALWAYS`.
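As with Autodestroy, the options map directly onto the container's exit code. A minimal sketch of the rule (not the daemon's actual code):

```shell
# Given an Autorestart setting and a container exit code, decide whether
# the container would be restarted (sketch only).
should_restart() {
  local setting="$1" exit_code="$2"
  case "$setting" in
    ALWAYS)     return 0 ;;
    ON_FAILURE) [ "$exit_code" -ne 0 ] ;;
    *)          return 1 ;;  # OFF: never restart
  esac
}

should_restart ON_FAILURE 1 && echo "restarting"
# prints "restarting"
```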
## Launching a Service with Autorestart
You can enable **Autorestart** on the **Service configuration** step of the **Launch new service wizard**.
![](images/autorestart.png)
Autorestart is set to `OFF` by default, which means that autorestart is *deactivated*.
### Using the API and CLI
You can set the **Autorestart** option when launching a service through the
API and through the CLI. Autorestart is set to `OFF` by default.
#### Set autorestart using the API
```
POST /api/app/v1/service/ HTTP/1.1
{
"autorestart": "ON_FAILURE",
[...]
}
```
#### Set autorestart using the CLI
```
$ docker-cloud service run --autorestart ON_FAILURE [...]
```
See our [API documentation](/apidocs/docker-cloud.md) for more information.
## Enabling autorestart on an already deployed service
You can activate or deactivate **Autorestart** on a service after it has been deployed by editing the service.
1. Go to the service detail page.
2. Click **Edit**.
3. Choose the autorestart option to apply.
4. Click **Save**.
### Using the API and CLI
You can change the **Autorestart** setting after the service has been deployed using the API or CLI.
#### Enable autorestart using the API
```
PATCH /api/app/v1/service/(uuid)/ HTTP/1.1
{
"autorestart": "ALWAYS"
}
```
#### Enable autorestart using the CLI
```
$ docker-cloud service set --autorestart ALWAYS (name or uuid)
```
See the [API documentation](/apidocs/docker-cloud.md) for more information.


@@ -1,116 +0,0 @@
---
description: Deployment tags
keywords: Deployment, tags, services
redirect_from:
- /docker-cloud/feature-reference/deploy-tags/
title: Deployment tags
---
You can use **Deployment tags** to make sure certain services are deployed only
to specific nodes. Tagged services only deploy to nodes that match **all** of
the tags on that service. Docker Cloud shows an error if no nodes match all of
the service's deployment tags. A node might have extra tags that are not
specified on the service, but these do not prevent the service from deploying.
You can specify multiple tags on services, on individual nodes, and on node clusters. All nodes that are members of a node cluster inherit the tags specified on the cluster. See [Automatic deployment tags](deploy-tags.md#automatic-deployment-tags) to learn more.
#### Deployment tags example
In this example, we have five nodes. One is used for development and testing, and four are used for production. The production nodes are distributed between frontend and backend. The table below summarizes their names and tags:
| Node name | Tags |
| --------- | ---- |
| my-node-dev | `aws` `us-east-1` `development` `test` `frontend` `backend`|
| my-node-prod-1 | `aws` `us-east-1` `production` `frontend` |
| my-node-prod-2 | `aws` `us-east-2` `production` `frontend` |
| my-node-prod-3 | `aws` `us-east-1` `production` `backend` |
| my-node-prod-4 | `aws` `us-east-2` `production` `backend` |
Imagine that you deploy a service called **my-webapp-dev** with two tags:
`development` and `frontend`. All containers for the service would be deployed
to the node labeled **my-node-dev**, because the node is tagged with both
`development` *and* `frontend`.
Similarly, if you deploy a production service called **my-webapp-prod** with the
two tags `production` and `frontend`, all containers for that service
would be deployed to the two nodes **my-node-prod-1** and **my-node-prod-2**
because those two nodes are tagged with both `production` *and* `frontend`.
> **Tip**: Containers are distributed between the two nodes based on the
[deployment strategy](../infrastructure/deployment-strategies.md) selected.
## Automatic deployment tags
When you launch a node cluster, four tags are automatically assigned to the
node cluster and all nodes in that cluster:
* Provider name (for example `digitalocean`, `aws`)
* "[Bring your own node](../infrastructure/byoh.md)" (BYON) status (for example `byon=false` or `byon=true`)
* Region name (for example `us-east-1`, `lon1`)
* Node cluster name (for example `my-node-cluster-dev-1`)
## Add tags to a node or node cluster at launch
A single node is considered a node cluster with a size of 1. Because of this, you create a node cluster even if you are only launching a single node.
1. Click **Node clusters** in the left navigation menu.
2. Click **Create**.
3. In the **Deploy tags** field, enter the tags to assign to the cluster and all
of its member nodes.
![](images/nodecluster-wizard-tags.png)
When the node cluster scales up, new nodes automatically inherit the
node cluster's tags, including the [Automatic deployment tags](deploy-tags.md#automatic-deployment-tags) described above.
You can see a node cluster's tags on the left side of the cluster's detail page.
4. Click **Launch node cluster**.
### Update or add tags on a node or node cluster
To change the tags on an existing node or node cluster:
1. Go to the node or node cluster's detail page.
2. Click the tags below the node or node cluster status line to edit them.
![](images/node-detail-tags.png)
If there are no tags assigned to the cluster, move your cursor under the deployment status line and click the tag icon that appears.
3. In the dialog that appears, add or remove tags.
The individual nodes in a cluster inherit all tags from the cluster, including automatic tags. Each individual node can have extra tags in addition to the tags it inherits as a member of a node cluster.
4. Click **Save** to save your tag changes to the nodes.
## Add tags to a service at launch
To deploy a service to a specific node using tags, you must first specify one or more tags on the service. If you don't add any tags to a service, the service is deployed to all available nodes.
1. Use the **Create new service** wizard to start a new service.
![](images/service-wizard-tags.png)
2. Select tags from the **deployment constraints** list to add to this service. Only tags that already exist on your nodes appear in the list.
Tags in a service define which nodes are used on deployment: only nodes that match *all* tags specified in the service are used for deployment.
### Update or add tags to a service
You can add or remove tags on a running service from the service's detail view.
1. From the service detail view, click **Edit**.
2. Select tags from the **deployment constraints** list to add to this service. Only tags that already exist on your nodes appear in the list.
![](images/service-wizard-tags.png)
3. Click **Save Changes**.
**If you update the tags on a service, you must redeploy the service for them to
take effect.** To do this, you can terminate all containers and relaunch them, or
scale your service down to zero containers and then scale it back up. New
containers are deployed to the nodes that match the new tags.
## Using deployment tags in the API and CLI
See the [tags API and CLI documentation](/apidocs/docker-cloud.md#tags) for more information on how to use tags with our API and CLI.


@@ -1,71 +0,0 @@
---
description: Deploy to Docker Cloud
keywords: deploy, docker, cloud
redirect_from:
- /docker-cloud/feature-reference/deploy-to-cloud/
- /docker-cloud/tutorials/deploy-to-cloud/
title: Add a "Deploy to Docker Cloud" button
---
The **Deploy to Docker Cloud** button allows developers to deploy stacks with
one click in Docker Cloud as long as they are logged in. The button is intended
to be added to `README.md` files in public GitHub repositories, although it can
be used anywhere else.
> **Note**: You must be _logged in_ to Docker Cloud for the button to work.
> Otherwise, the link results in a 404 error.
This is an example button to deploy our [python quickstart](https://github.com/docker/dockercloud-quickstart-python){: target="_blank" class="_"}:
<a href="https://cloud.docker.com/stack/deploy/?repo=https://github.com/docker/dockercloud-quickstart-python" target="_blank" class="_"><img src="https://files.cloud.docker.com/images/deploy-to-dockercloud.svg"></a>
The button redirects the user to the **Launch new Stack** wizard, with the stack
definition already filled with the contents of any of the following files (which
are fetched in the order shown) from the repository (taking into account branch
and relative path):
* `docker-cloud.yml`
* `docker-compose.yml`
* `fig.yml`
The user can still modify the stack definition before deployment.
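
For illustration, a minimal stack definition such a repository might carry in its `docker-cloud.yml` (the service name is hypothetical; `dockercloud/hello-world` is used as a stand-in image):
```yaml
web:
  image: dockercloud/hello-world
  ports:
    - "80:80"
```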
## Add the 'Deploy to Docker Cloud' button in GitHub
You can simply add the following snippet to your `README.md` file:
```md
[![Deploy to Docker Cloud](https://files.cloud.docker.com/images/deploy-to-dockercloud.svg)](https://cloud.docker.com/stack/deploy/)
```
Docker Cloud detects the HTTP `Referer` header and deploys the stack file found in the repository, branch, and relative path where the source `README.md` file is stored.
## Add the 'Deploy to Docker Cloud' button in Docker Hub
If the button is displayed on Docker Hub, Docker Cloud cannot automatically detect the source GitHub repository, branch, and path. In this case, edit the repository description and add the following code:
```md
[![Deploy to Docker Cloud](https://files.cloud.docker.com/images/deploy-to-dockercloud.svg)](https://cloud.docker.com/stack/deploy/?repo=<repo_url>)
```
where `<repo_url>` is the path to your GitHub repository (see below).
## Add the 'Deploy to Docker Cloud' button anywhere else
If you want to use the button somewhere else, such as from external documentation or a landing site, you just need to create a link to the following URL:
```
https://cloud.docker.com/stack/deploy/?repo=<repo_url>
```
where `<repo_url>` is the path to your GitHub repository. For example:
* `https://github.com/docker/dockercloud-quickstart-python`
* `https://github.com/docker/dockercloud-quickstart-python/tree/staging` to use branch `staging` instead of the default branch
* `https://github.com/docker/dockercloud-quickstart-python/tree/master/example` to use branch `master` and the relative path `/example` inside the repository
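
Assembling the link is plain string concatenation; a small helper sketch (the function name is ours, not a Docker tool):

```shell
# Build a "Deploy to Docker Cloud" link for a given repository URL.
deploy_link() {
  echo "https://cloud.docker.com/stack/deploy/?repo=$1"
}

deploy_link "https://github.com/docker/dockercloud-quickstart-python/tree/staging"
# prints https://cloud.docker.com/stack/deploy/?repo=https://github.com/docker/dockercloud-quickstart-python/tree/staging
```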
You can use your own image for the link (or no image). Our **Deploy to Docker Cloud** image is available at the following URL:
* `https://files.cloud.docker.com/images/deploy-to-dockercloud.svg`


@ -1,36 +0,0 @@
---
description: Manage your Docker Cloud Applications
keywords: applications, reference, Cloud
title: Applications in Docker Cloud
notoc: true
---
Applications in Docker Cloud are usually several Services linked together using
the specifications from a [Stackfile](stacks.md) or a Compose file. You can also
create individual services using the Docker Cloud Services wizard, and you can
attach [Volumes](volumes.md) to use as long-lived storage for your services.
If you are using Docker Cloud's autobuild and autotest features, you can also
use [autoredeploy](auto-redeploy.md) to automatically redeploy the application
each time its underlying services are updated.
* [Deployment tags](deploy-tags.md)
* [Add a Deploy to Docker Cloud button](deploy-to-cloud-btn.md)
* [Manage service stacks](stacks.md)
* [Stack YAML reference](stack-yaml-reference.md)
* [Publish and expose service or container ports](ports.md)
* [Redeploy running services](service-redeploy.md)
* [Scale your service](service-scaling.md)
* [Service API Roles](api-roles.md)
* [Service discovery and links](service-links.md)
* [Work with data volumes](volumes.md)
* [Create a proxy or load balancer](load-balance-hello-world.md)
### Automate your applications
Use the following features to automate specific actions on your Docker Cloud applications.
* [Automatic container destroy](auto-destroy.md)
* [Automatic container restart](autorestart.md)
* [Autoredeploy](auto-redeploy.md)
* [Use triggers](triggers.md)
---
description: Create a proxy or load balancer
keywords: proxy, load, balancer
redirect_from:
- /docker-cloud/getting-started/intermediate/load-balance-hello-world/
- /docker-cloud/tutorials/load-balance-hello-world/
title: Create a proxy or load balancer
---
When you deploy a web service to multiple containers, you might want to load
balance between the containers using a proxy or load balancer.
In this tutorial, you use the **dockercloud/hello-world** image as a sample
web service and **dockercloud/haproxy** to load balance traffic to the service.
If you follow this tutorial exactly, your traffic is distributed evenly
between eight containers in a node cluster containing four nodes.
## Create a Node Cluster
First, deploy a node cluster of four nodes.
1. If you have not linked to a host or cloud services provider, do that now.
You can find instructions on how to link to your own hosts, or to different providers [here](../infrastructure/index.md).
2. Click **Node Clusters** in the left-hand navigation menu.
3. Click **Create**.
4. Enter a name for the node cluster, select the **Provider**, **Region**, and **Type/Size**.
5. Add a **deployment tag** of `web`. (This is used to make sure the right services are deployed to the correct nodes.)
6. Drag or increment the **Number of nodes** slider to **4**.
![](images/lbd-node-wizard.png)
7. Click **Launch node cluster**.
This might take up to 10 minutes while the nodes are provisioned. This is a great time to grab a cup of coffee.
Once the node cluster is deployed and all four nodes are running, we're
ready to continue and launch our web service.
![](images/lbd-four-nodes.png)
## Launch the web service
1. Click **Services** in the left-hand menu, and click **Create**.
2. Click the **rocket icon** at the top of the page, and select the **dockercloud/hello-world** image.
![](images/lbd-hello-world-jumpstart.png)
3. On the **Service configuration** screen, configure the service using these values:
* **image**: Set the tag to `latest` so you get the most recent build of the image.
* **service name**: `web`. This is what we call the service internally.
* **number of containers**: 8
* **deployment strategy**: `high availability`. Deploy evenly to all nodes.
* **deployment constraints**: `web`. Deploy only to nodes with this tag.
> **Note**: For this tutorial, make sure you change the *deployment strategy* to **High Availability**, and add the *tag* **web** to ensure this service is deployed to the right nodes.
![](images/lbd-web-conf.png)
4. Finally, scroll down to the **Ports** section and make sure the **Published** box is checked next to port 80.
We're going to access these containers from the public internet, and
publishing the port makes them available externally. Make sure you leave the
`node port` field unset so that it stays dynamic.
5. Click **Create and deploy**.
Docker Cloud switches to the **Service detail** view after you create the
service.
6. Scroll up to the **Containers** section to see the containers as they deploy.
The icons for each container change color to indicate what phase of deployment they're in. Once all containers are green (successfully started), continue to the next step.
![](images/lbd-containers-start.png)
## Test the web service
1. Once your containers are all green (running), scroll down to the
**Endpoints** section.
A list shows all the endpoints available for this service on the public internet.
![Available endpoints](images/lbd-endpoints.png)
2. Click an endpoint URL (it should look something like
`http://web-1.username.cont.dockerapp.io:49154`) to open a new tab in your
browser and view the **dockercloud/hello-world** web page. Note the hostname
for the page that loads.
![Endpoint URL details](images/lbd-hostname-1.png)
3. Click other endpoints and check the hostnames. You see different hostnames
which match the container name (web-2, web-3, and so on).
## Launch the load balancer
We verified that the web service is working, so now we can set up the load balancer.
1. Click **Services** in the left navigation bar, and click **Create** again.
This time we launch a load balancer that listens on port 80 and balances the traffic across the 8 containers that are running the `web` service. 
2. Click the **rocket icon** if necessary and find the **Proxies** section.
3. Click the **dockercloud/haproxy** image.
4. On the next screen, set the **service name** to `lb`.
Leave the tag, deployment strategy, and number of containers at their default values.
![](images/lbd-lb-conf.png)
5. Locate the **API Roles** field at the end of the **General settings** section.
6. Set the **API Role** to `Full access`.
When you assign the service an API role, it passes a `DOCKERCLOUD_AUTH`
environment variable to the service's containers, which allows them to query
Docker Cloud's API on your behalf. You can [read more about API Roles here](../apps/api-roles.md).
The **dockercloud/haproxy** image uses the API to check how many containers
are in the `web` service we launched earlier. **HAProxy** then uses this
information to update its configuration dynamically as the web service
scales.
7. Next, scroll down to the **Ports** section.
8. Click the **Published** checkbox next to the container port 80.
9. Click the word *dynamic* next to port 80, and enter 80 to set the published
    port to also use port 80.
![](images/lbd-lb-ports.png)
10. Scroll down to the **Links** section.
11. Select `web` from the drop-down list, and click the blue **plus sign** to
    add the link.
This links the load balancing service `lb` with the web service `web`. The
link appears in the table in the Links section.
![Links section](images/lbd-lb-envvar.png)
A new set of `WEB` environment variables appears in the service we're about
to launch. You can read more about
service link environment variables [here](../apps/service-links.md).
12. Click **Create and deploy** and confirm that the service launches.
## Test the load-balanced web service
1. On the load balancer service detail page, scroll down to the **endpoints**
section.
Unlike on the web service, this time the HTTP URL for the load balancer is
mapped to port 80. 
![Load balancer mapped to port 80](images/lbd-lb-endpoint.png)
2. Click the endpoint URL to open it in a new tab.
The same hello-world webpage you saw earlier is shown. Make note of the
hostname.
3. Refresh the web page.
With each refresh, the hostname changes as the requests are load-balanced to
different containers. 
![Changing hostname](images/lbd-reload.gif)
Each container in the web service has a different hostname, which
appears in the webpage as `container_name-#`. When you refresh the
page, the load balancer routes the request to a new host and the displayed hostname changes.
> **Tip**: If you don't see the hostname change, clear your browser's cache
or load the page from a different web browser. 
Congratulations! You just deployed a load balanced web service using Docker
Cloud!
## Further reading: load balancing the load balancer
What if you had so many `web` containers that you needed more than one `lb`
container?
Docker Cloud automatically assigns a DNS endpoint to all services. This endpoint
routes to all of the containers of that service. You can use the DNS endpoint to
load balance your load balancer. To learn more, read up on [service
links](service-links.md).
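
The same web-plus-load-balancer topology can also be described in a stackfile instead of being clicked together in the wizard. The following is a sketch using keys from the [Stack YAML reference](stack-yaml-reference.md); the tag, image names, and container counts match this tutorial, but treat it as a starting point rather than a drop-in file:

```yml
web:
  image: dockercloud/hello-world:latest
  target_num_containers: 8
  deployment_strategy: high_availability
  tags:
    - web
  ports:
    - "80"
lb:
  image: dockercloud/haproxy:latest
  links:
    - web
  ports:
    - "80:80"
  roles:
    - global
```

The `roles: global` entry corresponds to the **Full access** API role set in the wizard, which the haproxy image needs to watch the `web` service.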
---
description: Publish and expose service or container ports
keywords: publish, expose, ports, containers, services
redirect_from:
- /docker-cloud/feature-reference/ports/
title: Publish and expose service or container ports
---
In Docker Cloud you can **publish** or **expose** ports in services and
containers, just like you can in Docker Engine (as documented
[here](/engine/reference/run.md#expose-incoming-ports)).
* **Exposed ports** are ports that a container or service uses either to
provide a service, or to listen on. By default, exposed ports in Docker Cloud are
only privately accessible: only other services that are linked to
the service exposing the port can communicate over it.
*Exposed ports* cannot be accessed publicly over the internet.
* **Published ports** are exposed ports that are accessible publicly over the internet. Published ports are published on the public-facing network interface of the node (host) where the container is running.
*Published ports* **can** be accessed publicly over the internet.
## Launch a Service with an exposed port
If the image that you are using for your service already exposes any ports, these appear in Docker Cloud in the **Launch new service** wizard.
1. From the **Launch new service** wizard, select the image to use.
2. Scroll down to the **Ports** section.
![](images/exposing-port.png)
The image in this example screenshot *exposes* port 80. Remember, this means
that the port is only accessible to other services that link to this service. It
is not accessible publicly over the internet.
You can expose more ports from this screen by clicking **Add Port**.
### Using the API/CLI
See the API and CLI documentation [here](/apidocs/docker-cloud.md#service) for
information on how to launch a service with an exposed port.
## Launch a Service with a published port
If the image that you are using for your service already exposes any ports,
these appear in Docker Cloud in the **Launch new service** wizard. You can
choose to publish and map them from the wizard.
1. From the **Launch new service** wizard, select the image to use.
2. Scroll down to the **Ports** section.
This section displays any ports configured in the image.
3. Click the **Published** checkbox.
4. Optionally, choose the port on the node where you want to make the exposed port available.
By default, Docker Cloud assigns a published port dynamically. You can also
choose a specific port. For example, you might choose to take a port that is
exposed internally on port 80, and publish it externally on port 8080.
![](images/publishing-port.png)
To access the published port over the internet, connect to the port you
specified in the "Node port" section. If you used the default **dynamic**
option, find the published port on the service detail page.
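
In stackfile terms, this distinction maps to the `expose` and `ports` keys. A minimal sketch (the service and image names are illustrative):

```yml
web:
  image: dockercloud/hello-world:latest
  expose:
    - "80"        # private: reachable only by linked services
  ports:
    - "8080:80"   # published: node port 8080 routes to container port 80
```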
### Using the API/CLI
See the API and CLI documentation [here](/apidocs/docker-cloud.md#service) on
how to launch a service with a published port.
## Check which ports a service has published
The **Endpoints** section in the Service view lists the published ports for a service. Ports that are exposed internally are not listed in this section but can be viewed by editing the service configuration.
* The **Service endpoints** list shows the endpoints that automatically round-robin route to the containers in a service.
* The **Container endpoints** list shows the endpoints for each individual container. Click the blue "link" icon to open the endpoint URL in a new tab.
<!-- DCUI-741
Ports that are exposed internally display with a closed (locked) padlock
icon and published ports (that are exposed to the internet) show an open
(unlocked) padlock icon.
* Exposed ports are listed as **container port/protocol**
* Published ports are listed as **node port**->**container port/protocol** -->
![](images/ports-published.png)
### Using the API/CLI
See the API and CLI documentation [here](/apidocs/docker-cloud.md#service) to learn how to list a service's exposed and published ports.
## Service and container DNS endpoints
The short word before `dockerapp.io` in an endpoint URL tells you what type of endpoint it is. The three available types are:
* `node` routes to a specific node or host
* `svc` routes round-robin style to the containers of a service
* `cont` routes to a specific container within a service regardless of which host the container is deployed on
For example, you might see an endpoint such as `web.quickstart-python.0a0b0c0d.svc.dockerapp.io`. You would know that this is a `service` endpoint, for reaching the `web` service in the `quickstart-python` stack.
### Container endpoints
Each container that has one or more published ports is automatically assigned a
DNS endpoint in the format
`container-name[.stack-name].shortuuid.cont.dockerapp.io`. This DNS endpoint
(single A record) resolves to the public IP of the node where the container is
running. If the container is redeployed into another node, the DNS updates
automatically and resolves to the new node or host.
You can see a list of container endpoints on the stack, service or container
detail views, in the **Endpoints** tab.
### Service endpoints
Each service that has at least one port published with a fixed (not dynamic)
host port is assigned a DNS endpoint in the format
`service-name[.stack-name].shortuuid.svc.dockerapp.io`. This DNS endpoint
(multiple A record) resolves to the IPs of the nodes where the containers are
running, in a [round-robin
fashion](https://en.wikipedia.org/wiki/Round-robin_DNS).
You can see a list of service endpoints on the stack and service detail views, under the **Endpoints** tab.

---
description: Service discovery
keywords: service, discover, links
redirect_from:
- /docker-cloud/feature-reference/service-links/
title: Service discovery and links
---
Docker Cloud creates a per-user overlay network which connects all containers
across all of the user's hosts. This network connects all of your containers on
the `10.7.0.0/16` subnet, and gives every container a local IP. This IP persists
on each container even if the container is redeployed and ends up on a different
host. Every container can reach any other container on any port within the
subnet.
Docker Cloud gives your containers two ways to find other services:
* Using service and container names directly as **hostnames**
* Using **service links**, which are based on [Docker Compose links](/compose/compose-file/#links)
**Service and Container Hostnames** update automatically when a service scales
up or down or redeploys. As a user, you can configure service names, and Docker
Cloud uses these names to find the IP of the services and containers for you.
You can use hostnames in your code to provide abstraction that allows you to
easily swap service containers or components.
**Service links** create environment variables which allow containers to
communicate with each other within a stack, or with other services outside of a
stack. You can specify service links explicitly when you create a new service
or edit an existing one, or specify them in the stackfile for a service stack.
### Hostnames vs service links
When a service is scaled up, a new hostname is created and automatically
resolves to the new IP of the container, and the parent service hostname record
also updates to include the new container's IP. However, new service link
environment variables are not created, and existing ones are not removed, when a
service scales up or down.
## Using service and container names as hostnames
You can use hostnames to connect any container in your Docker Cloud account to
any other container on your account without having to create service links or
manage environment variables. This is the recommended service discovery method.
Hostnames always resolve to the correct IP for the service or container,
and update as the service scales up, scales down, or redeploys. The Docker
Cloud automatic DNS service resolves the service name to the correct IP on the
overlay network, even if the container has moved or is now on a different host.
### Discovering containers on the same service or stack
A container can always discover other containers on the same stack using just
the **container name** as hostname. This includes containers of the same
service. Similarly, a container can always discover other services on the same
stack using the **service name**.
For example, a container `webapp-1` in the service `webapp` can connect to the
container `db-1` in the service `db` by using `db-1` as the hostname. It can
also connect to a peer container, `webapp-2`, by using `webapp-2` as the
hostname.
A container `proxy-1` on the same stack could discover all `webapp` containers
by using the **service name** `webapp` as hostname. Connecting to the service
name resolves as an `A`
[round-robin](http://en.wikipedia.org/wiki/Round-robin_DNS) record, listing all
IPs of all containers on the service `webapp`.
### Discovering services or containers on another stack
To find a service or a container on another stack, append `.<stack_name>` to the
service or container name. For example, if `webapp-1` on the stack `production`
needs to access container `db-1` on the stack `common`, it could use the
hostname `db-1.common` which Docker Cloud resolves to the appropriate IP.
### Discovering services or containers not included in a stack
To find a container or service that is not included in a stack, use the service
or container name as the hostname.
If the container making the query is part of a stack, and there is a local match
on the same stack, the local match takes precedence over the service or
container that is outside the stack.
> **Tip**: To work around this, you can rename the local match so that it has a
more specific name. You might also put the external service or container in a
dedicated stack so that you can specify the stack name as part of the namespace.
## Using service links for service discovery
Docker Cloud's service linking is modeled on [Docker Compose
links](/compose/compose-file/#links) to provide a basic service discovery
functionality using directional links recorded in environment variables.
When you link a "client" service to a "server" service, Docker Cloud performs
the following actions on the "client" service:
1. Creates a group of environment variables that contain information about the exposed ports of the "server" service, including its IP address, port, and protocol.
2. Copies all of the "server" service environment variables to the "client" service with a `HOSTNAME_ENV_` prefix.
3. Adds a DNS hostname to the Docker Cloud DNS service that resolves to the "server" service IP address.
Some environment variables such as the API endpoint are updated when a service
scales up or down. Service links are only updated when a service is deployed or
redeployed, but are not updated during runtime. No new service link environment
variables are created when a service scales up or down.
>**Tip:** You can specify one of several [container distribution strategies](/docker-cloud/infrastructure/deployment-strategies.md) for
applications deployed to multiple nodes. These strategies enable automatic
deployments of containers to nodes, and sometimes auto-linking of containers.
If a service with
[EVERY_NODE](/docker-cloud/infrastructure/deployment-strategies.md#every-node)
strategy is linked to another service with EVERY_NODE strategy, containers are
linked one-to-one on each node.
### Service link example
For the explanation of service linking, consider the following application
diagram.
![](images/service-links-diagram.png)
Imagine that you are running a web service (`my-web-app`) with 2 containers
(`my-web-app-1` and `my-web-app-2`). You want to add a proxy service
(`my-proxy`) with one container (`my-proxy-1`) to balance HTTP traffic to
each of the containers in your `my-web-app` application, with a link name of
`web`.
### Service link environment variables
Several environment variables are set on each container at startup to provide
link details to other containers. The links created are directional. These are
similar to those used by Docker Compose.
For our example app above, the following environment variables are set in the
proxy containers to provide service links. The example proxy application can use
these environment variables to configure itself on startup, and start balancing
traffic between the two containers of `my-web-app`.
| Name | Value |
|:------------------------|:----------------------|
| WEB_1_PORT | `tcp://172.16.0.5:80` |
| WEB_1_PORT_80_TCP | `tcp://172.16.0.5:80` |
| WEB_1_PORT_80_TCP_ADDR | `172.16.0.5` |
| WEB_1_PORT_80_TCP_PORT | `80` |
| WEB_1_PORT_80_TCP_PROTO | `tcp` |
| WEB_2_PORT | `tcp://172.16.0.6:80` |
| WEB_2_PORT_80_TCP | `tcp://172.16.0.6:80` |
| WEB_2_PORT_80_TCP_ADDR | `172.16.0.6` |
| WEB_2_PORT_80_TCP_PORT | `80` |
| WEB_2_PORT_80_TCP_PROTO | `tcp` |
To create these service links, you would specify the following in your stackfile:
```yml
my-proxy:
links:
- my-web-app:web
```
This example snippet creates a directional link from `my-proxy` to `my-web-app`, and calls that link `web`.
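
As an illustration of how a "client" container might consume these variables at startup, here is a small, hypothetical Python helper. It is not part of Docker Cloud; it simply parses link variables shaped like the `WEB_*` entries in the table above:

```python
import os
import re

def discover_link_backends(link_name, environ=None):
    """Collect (address, port) pairs from service-link environment
    variables such as WEB_1_PORT_80_TCP_ADDR / WEB_1_PORT_80_TCP_PORT."""
    environ = os.environ if environ is None else environ
    pattern = re.compile(r"^%s_\d+_PORT_\d+_TCP_ADDR$" % re.escape(link_name))
    backends = []
    for key in sorted(environ):
        if pattern.match(key):
            # The matching ..._TCP_PORT variable holds the port number.
            port_key = key[:-4] + "PORT"
            backends.append((environ[key], int(environ[port_key])))
    return backends

# Using the WEB_* variables from the table above:
env = {
    "WEB_1_PORT_80_TCP_ADDR": "172.16.0.5",
    "WEB_1_PORT_80_TCP_PORT": "80",
    "WEB_2_PORT_80_TCP_ADDR": "172.16.0.6",
    "WEB_2_PORT_80_TCP_PORT": "80",
}
print(discover_link_backends("WEB", env))  # [('172.16.0.5', 80), ('172.16.0.6', 80)]
```

A real proxy would feed this backend list into its own configuration; remember that these variables are only refreshed on redeploy, not when the linked service scales.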
### DNS hostnames vs service links
> **Note**: Hostnames are updated during runtime if the service scales up or down. Environment variables are only set or updated at deploy or redeploy. If your services scale up or down frequently, you should use hostnames rather than service links.
In the example, the `my-proxy` containers can access the service links using the following hostnames:
| Hostname | Value |
|:---------|:--------------------------|
| `web` | `172.16.0.5 172.16.0.6` |
| `web-1` | `172.16.0.5` |
| `web-2` | `172.16.0.6` |
The best way for the `my-proxy` service to connect to the `my-web-app` service
containers is using the hostnames, because they are updated during runtime if
`my-web-app` scales up or down. If `my-web-app` scales up, the new hostname
`web-3` automatically resolves to the new IP of the container, and the hostname
`web` is updated to include the new IP in its round-robin record.
However, the service link environment variables are not added or updated until
the service is redeployed. If `my-web-app` scales up, no new service link
environment variables (such as `WEB_3_PORT`, `WEB_3_PORT_80_TCP`, etc) are added
to the "client" container. This means the client does not know how to contact
the new "server" container.
### Service environment variables
Environment variables specified in the service definition are instantiated in
each individual container. This ensures that each container has a copy of the
service's defined environment variables, and also allows other connecting
containers to read them.
These environment variables are prefixed with `HOSTNAME_ENV_` in each
container, where `HOSTNAME` is the linked container's hostname.
In our example, if we launch our `my-web-app` service with an environment
variable of `WEBROOT=/login`, the following environment variables are set and
available in the proxy containers:
| Name | Value |
|:------------------|:---------|
| WEB_1_ENV_WEBROOT | `/login` |
| WEB_2_ENV_WEBROOT | `/login` |
In our example, this enables the "client" service (`my-proxy-1`) to read
configuration information such as usernames and passwords, or simple
configuration, from the "server" service containers (`my-web-app-1` and
`my-web-app-2`).
#### Docker Cloud specific environment variables
In addition to the standard Docker environment variables, Docker Cloud also sets
special environment variables that enable containers to self-configure. These
environment variables are updated on redeploy.
In the example above, the following environment variables are available in the `my-proxy` containers:
| Name | Value |
|:-------------------------------|:--------------------------------------------------------------------------------------|
| WEB_DOCKERCLOUD_API_URL | `https://cloud.docker.com/api/app/v1/service/3b5fbc69-151c-4f08-9164-a4ff988689ff/` |
| DOCKERCLOUD_SERVICE_API_URI | `/api/v1/service/651b58c47-479a-4108-b044-aaa274ef6455/` |
| DOCKERCLOUD_SERVICE_API_URL | `https://cloud.docker.com/api/app/v1/service/651b58c47-479a-4108-b044-aaa274ef6455/` |
| DOCKERCLOUD_CONTAINER_API_URI | `/api/v1/container/20ae2cff-44c0-4955-8fbe-ac5841d1286f/` |
| DOCKERCLOUD_CONTAINER_API_URL | `https://cloud.docker.com/api/app/v1/container/20ae2cff-44c0-4955-8fbe-ac5841d1286f/` |
| DOCKERCLOUD_NODE_API_URI | `/api/v1/node/d804d973-c8b8-4f5b-a0a0-558151ffcf02/` |
| DOCKERCLOUD_NODE_API_URL | `https://cloud.docker.com/api/infra/v1/node/d804d973-c8b8-4f5b-a0a0-558151ffcf02/` |
| DOCKERCLOUD_CONTAINER_FQDN | `my-proxy-1.20ae2cff.cont.dockerapp.io` |
| DOCKERCLOUD_CONTAINER_HOSTNAME | `my-proxy-1` |
| DOCKERCLOUD_SERVICE_FQDN | `my-proxy.651b58c47.svc.dockerapp.io` |
| DOCKERCLOUD_SERVICE_HOSTNAME | `my-proxy` |
| DOCKERCLOUD_NODE_FQDN | `d804d973-c8b8-4f5b-a0a0-558151ffcf02.node.dockerapp.io` |
| DOCKERCLOUD_NODE_HOSTNAME | `d804d973-c8b8-4f5b-a0a0-558151ffcf02` |
Where:
* `WEB_DOCKERCLOUD_API_URL` is the Docker Cloud API resource URL of the linked service. Because this is a link, the link name is the environment variable prefix.
* `DOCKERCLOUD_SERVICE_API_URI` and `DOCKERCLOUD_SERVICE_API_URL` are the Docker Cloud API resource URI and URL of the service running in the container.
* `DOCKERCLOUD_CONTAINER_API_URI` and `DOCKERCLOUD_CONTAINER_API_URL` are the Docker Cloud API resource URI and URL of the container itself.
* `DOCKERCLOUD_NODE_API_URI` and `DOCKERCLOUD_NODE_API_URL` are the Docker Cloud API resource URI and URL of the node where the container is running.
* `DOCKERCLOUD_CONTAINER_HOSTNAME` and `DOCKERCLOUD_CONTAINER_FQDN` are the external hostname and Fully Qualified Domain Name (FQDN) of the container itself.
* `DOCKERCLOUD_SERVICE_HOSTNAME` and `DOCKERCLOUD_SERVICE_FQDN` are the external hostname and Fully Qualified Domain Name (FQDN) of the service to which the container belongs.
* `DOCKERCLOUD_NODE_HOSTNAME` and `DOCKERCLOUD_NODE_FQDN` are the external hostname and Fully Qualified Domain Name (FQDN) of the node where the container is running.
These environment variables are also copied to linked containers with the `NAME_ENV_` prefix.
If you provide API access to your service, you can use the generated token
(stored in `DOCKERCLOUD_AUTH`) to access these API URLs to gather information or
automate operations, such as scaling.

---
description: Redeploy running services
keywords: redeploy, running, services
redirect_from:
- /docker-cloud/feature-reference/service-redeploy/
title: Redeploy a running service
---
You can **redeploy** services in Docker Cloud while they are running to
regenerate a service's containers. You might do this when a new version of the
image is pushed to the registry, or to apply changes that you made to
the service's settings.
When you redeploy a service, Docker Cloud terminates the current service
containers. It then deploys new containers using the most recent service
definition, including service and deployment tags, deployment strategies, port
mappings, and so on.
> **Note**: Your containers might be redeployed to different nodes during redeployment.
#### Container hostnames
*Container* **hostnames** change on redeployment, and if your service uses
**dynamic published ports**, new ports might be used on redeployment.
Container hostnames appear in the following format:
`servicename-1.new-container-short-uuid.cont.dockerapp.io`
However, containers keep their local IPs after redeployment, even if they end up
in different nodes. This means that linked services do not need to be
redeployed. To learn more, see [Service Links](service-links.md).
#### Service hostnames
*Service* hostnames remain the same after redeployment. Service hostnames are only
available for ports that are bound to a specific port on the host. They are
_not_ available if the port is dynamically allocated.
Service hostnames appear in the following format:
`servicename.service-short-uuid.svc.dockerapp.io`
#### Redeploy with volumes
If your containers use volumes, the new containers can **reuse** the
existing volumes. If you chose to reuse the volumes, the containers redeploy to the same nodes to preserve their links to the volumes.
> **Note**: When you redeploy services with reused volumes, your redeployment can fail if the service's deployment tags no longer allow it to be deployed on the node that the volume resides on. To learn more, see [Deployment Tags](deploy-tags.md).
## Redeploy a service using the web interface
1. Click **Services** in the left menu to view a list of services.
2. Click the checkbox to the left of the service or services you want to redeploy.
3. From the **Actions** menu at the top right, choose **Redeploy**.
![](images/redeploy-service.png)
The service begins redeploying immediately.
<!-- DCUI-732, DCUI-728
3. If the container uses volumes, choose whether to reuse them.
4. Click **OK** on the confirmation dialog to start the redeployment.-->
## Redeploy a service using the API and CLI
See the Docker Cloud [API and CLI documentation](/apidocs/docker-cloud.md#redeploy-a-service) for more information
on using our API and CLI to redeploy services.
## Autoredeploy on image push to Docker Hub
If your service uses an image stored in Docker Hub or Docker Cloud, you can
enable **Autoredeploy** on the service. Autoredeploy triggers a redeployment
whenever a new image is pushed. See the [Autoredeploy documentation](auto-redeploy.md) to learn more.
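
In a stackfile, autoredeploy is a single flag on the service. A sketch, using the sample image from elsewhere in these docs:

```yml
web:
  image: dockercloud/hello-world:latest
  autoredeploy: true   # redeploy whenever a new image is pushed
```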
## Redeploy a service using webhooks
You can also use **triggers** to redeploy a service, for example when its image
is pushed or rebuilt in a third-party registry. See the [Triggers documentation](triggers.md) to learn more.

---
description: Scale your service, spawn new containers
keywords: spawn, container, service, deploy
redirect_from:
- /docker-cloud/feature-reference/service-scaling/
title: Scale your service
---
Docker Cloud makes it easy to spawn new containers of your service to handle
additional load. Two modes are available to allow you to scale services with
different configuration requirements.
## Deployment and scaling modes
Any service that handles additional load by increasing the number of containers
of the service is considered "horizontally scalable".
There are two deployment modes when scaling a service:
- **Parallel mode** (default): all containers of a service are
deployed at the same time without any links between them. This is
the fastest way to deploy, and is the default.
- **Sequential mode**: each new container of the service is deployed one at a
time. Each container is linked to all previous containers using service
links. This makes complex configuration possible within the containers'
startup logic. This mode is explained in detail in the following sections.
## When should I use Parallel scaling?
When the containers in a service work independently of each other and do not
need to coordinate between themselves, they can be scaled up in parallel mode.
Examples include:
- Stateless web servers and proxies
- “Worker” instances that process jobs from a queue
- “Cron”-style instances that execute periodic tasks
The default scaling mode is parallel, so no additional configuration is
required to use this mode.
## When should I use Sequential scaling?
Some services require coordination between different containers to ensure that
the service functions correctly. Many databases, such as MySQL for example,
require that the containers know about each other at startup time so that
traffic can be routed to them appropriately. When this is the case, enable
sequential scaling mode so that containers start up aware of their peers. See
[Sequential deployment and scaling](service-scaling.md#sequential-deployment-and-scaling) for more information.
## Set the initial number of containers
When you configure a service in Docker Cloud, you can specify an initial number of containers for the service before you launch.
![](images/service-wizard-scale.png)
Docker Cloud immediately launches as many containers as you specified.
### Set the initial containers using the API
You can specify the initial number of containers for a service when deploying it through the API:
```
POST /api/app/v1/service/ HTTP/1.1
{
"target_num_containers": 2,
[...]
}
```
If you don't specify the number of containers to deploy, this command defaults to `1`. See the [API documentation](/apidocs/docker-cloud.md) for more information.
### Set the initial containers using the CLI
You can also specify the initial number of containers for a service when deploying it using the CLI:
```bash
$ docker-cloud service run -t 2 [...]
```
If you don't specify the number of containers to deploy, the CLI uses the default value of `1`. See the [CLI documentation](/apidocs/docker-cloud.md) for more information.
## Scale an already running service
If you need to scale a service up or down while it is running, you can change the number of containers from the service detail page:
![](images/service-before-scaling.png)
1. Click the slider at the top of the service detail page.
2. Drag the slider to the number of containers you want.
3. Click **Scale**.
The application starts scaling immediately, whether this means starting new containers, or gracefully shutting down existing ones.
![](images/service-during-scaling.png)
### Scale a running service using the API
You can scale an already running service through the API:
```
PATCH /api/app/v1/service/(uuid)/ HTTP/1.1
{
"target_num_containers": 2
}
```
See the [scale a service API documentation](/apidocs/docker-cloud.md#scale-a-service).
### Scale a running service using the CLI
You can also scale an already running service using the CLI:
```bash
$ docker-cloud service scale (uuid or name) 2
```
See the [scale a service CLI documentation](/apidocs/docker-cloud.md#scale-a-service).
## Sequential deployment and scaling
When a service with more than one container is deployed using **sequential deployment** mode, the second and subsequent containers are linked to all the
previous ones using [service links](service-links.md). These links are useful if
your service needs to know about other instances, for example to allow automatic
configuration on startup.
See the [Service links](service-links.md) topic for a list of environment variables that the links create in your containers.
You can set the **Sequential deployment** setting on the **Service configuration** step of the **Launch new service** wizard:
![](images/service-wizard-sequential-deployment.png)
### Set the scaling mode using the API
You can also set the `sequential_deployment` option when deploying an
application through the API:
```
POST /api/app/v1/service/ HTTP/1.1
{
"sequential_deployment": true,
[...]
}
```
See [create a new service](/apidocs/docker-cloud.md#create-a-new-service) for
more information.
### Set the scaling mode using the CLI
You can also set the `sequential_deployment` option when deploying an
application through the CLI:
```bash
$ docker-cloud service run --sequential [...]
```

---
description: Stack YAML reference for Docker Cloud
keywords: YAML, stack, reference, docker cloud
redirect_from:
- /docker-cloud/feature-reference/stack-yaml-reference/
title: Docker Cloud stack file YAML reference
---
A stack is a collection of services that make up an application in a specific environment. Learn more about stacks for Docker Cloud [here](stacks.md). A **stack file** is a file in YAML format that defines one or more services, similar to a `docker-compose.yml` file for Docker Compose but with a few extensions. The default name for this file is `docker-cloud.yml`.
**Looking for information on stack files for Swarm?** A good place to start is the [Compose file reference](/compose/compose-file/index.md), particularly the section on the `deploy` key and its sub-options, and the reference on [Docker stacks](/compose/bundles.md). Also, the new [Getting Started tutorial](/get-started/index.md) demonstrates the use of a stack file to deploy an application to a swarm.
## Stack file example
Below is an example `docker-cloud.yml`:
```yml
lb:
image: dockercloud/haproxy
links:
- web
ports:
- "80:80"
roles:
- global
web:
image: dockercloud/quickstart-python
links:
- redis
target_num_containers: 4
redis:
image: redis
```
Each key defined in `docker-cloud.yml` creates a service with that name in Docker Cloud. In the example above, three services are created: `lb`, `web`, and `redis`. Each service is a dictionary whose possible keys are documented below.
The `image` key is mandatory. Other keys are optional and are analogous to their [Docker Cloud Service API](/apidocs/docker-cloud.md#create-a-new-service) counterparts.
## image (required)
The image used to deploy this service. This is the only mandatory key.
```yml
image: drupal
image: dockercloud/hello-world
image: my.registry.com/redis
```
## autodestroy
Whether the containers for this service should be terminated if they stop (default: `no`, possible values: `no`, `on-success`, `always`).
```yml
autodestroy: always
```
## autoredeploy
Whether to redeploy the containers of the service when its image is updated in Docker Cloud registry (default: `false`).
```yml
autoredeploy: true
```
## cap_add, cap_drop
Add or drop container capabilities. See `man 7 capabilities` for a full list.
```yml
cap_add:
- ALL
cap_drop:
- NET_ADMIN
- SYS_ADMIN
```
## cgroup_parent
Specify an optional parent cgroup for the container.
```yml
cgroup_parent: m-executor-abcd
```
## command
Override the default command in the image.
```yml
command: echo 'Hello World!'
```
## deployment_strategy
Container distribution among nodes (default: `emptiest_node`, possible values: `emptiest_node`, `high_availability`, `every_node`). Learn more [here](../infrastructure/deployment-strategies.md).
```yml
deployment_strategy: high_availability
```
## devices
List of device mappings. Uses the same format as the `--device` docker client create option.
```yml
devices:
- "/dev/ttyUSB0:/dev/ttyUSB0"
```
## dns
Specify custom DNS servers. Can be a single value or a list.
```yml
dns: 8.8.8.8
dns:
- 8.8.8.8
- 9.9.9.9
```
## dns_search
Specify custom DNS search domains. Can be a single value or a list.
```yml
dns_search: example.com
dns_search:
- dc1.example.com
- dc2.example.com
```
## environment
A list of environment variables to add in the service's containers at launch. The environment variables specified here override any image-defined environment variables. You can use either an array or a dictionary format.
Dictionary format:
```yml
environment:
PASSWORD: my_password
```
Array format:
```yml
environment:
- PASSWORD=my_password
```
When you use the Docker Cloud CLI to create a stack, you can use the environment variables defined locally in your shell to define those in the stack. This is useful if you don't want to store passwords or other sensitive information in your stack file:
```yml
environment:
- PASSWORD
```
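The passthrough behavior described above can be sketched as follows, with a hypothetical helper (not part of the Docker Cloud CLI) that resolves an `environment` list the same way:

```python
import os

def resolve_environment(entries, env=os.environ):
    """Resolve a stack-file 'environment' list: 'KEY=value' entries are
    used as-is; bare 'KEY' entries are filled from the local shell
    environment, mirroring how the Docker Cloud CLI treats them."""
    resolved = {}
    for entry in entries:
        if "=" in entry:
            key, _, value = entry.partition("=")
            resolved[key] = value
        elif entry in env:
            # Bare key: take the value from the surrounding shell, so the
            # secret never has to be written into the stack file itself.
            resolved[entry] = env[entry]
    return resolved
```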
## expose
Expose ports without publishing them to the host machine - they'll only be accessible from your nodes in Docker Cloud. `udp` ports can be specified with a `/udp` suffix.
```yml
expose:
- "80"
- "90/udp"
```
## extra_hosts
Add hostname mappings. Uses the same values as the docker client `--add-host` parameter.
```yml
extra_hosts:
- "somehost:162.242.195.82"
- "otherhost:50.31.209.229"
```
## labels
Add metadata to containers using Docker Engine labels. You can use either an array or a dictionary.
We recommend using reverse-DNS notation to prevent your labels from conflicting with those used by other software.
```yml
labels:
com.example.description: "Accounting webapp"
com.example.department: "Finance"
com.example.label-with-empty-value: ""
labels:
- "com.example.description=Accounting webapp"
- "com.example.department=Finance"
- "com.example.label-with-empty-value"
```
## links
Link to another service.
Either specify both the service unique name and the link alias (`SERVICE:ALIAS`), or just the service unique name (which is also used for the alias). If a service you want to link to is part of a different stack, specify the external stack name too.
- If the target service belongs to *this* stack, its service unique name is its service name.
- If the target service does not belong to *any* stacks (it is a standalone service), its service unique name is its service name.
- If the target service belongs to another stack, its service unique name is its service name plus the service stack name, separated by a period (`.`).
```yml
links:
- mysql
- redis:cache
- amqp.staging:amqp
```
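The naming rules above can be sketched as a small helper (hypothetical, for illustration only):

```python
def link_unique_name(service_name, target_stack=None, current_stack=None):
    """Return the unique name used to link to a service, following the
    rules above: stack-local and standalone services are addressed by
    their service name; services in another stack get 'service.stack'."""
    if target_stack and target_stack != current_stack:
        # Target lives in a different stack: qualify with the stack name.
        return "%s.%s" % (service_name, target_stack)
    return service_name
```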
For each link, Docker Cloud creates environment variables that resolve to the IPs of the linked service's containers. More information [here](service-links.md).
## net
Networking mode. Only `bridge` and `host` are currently supported.
```yml
net: host
```
## pid
Sets the PID mode to the host PID mode. This turns on sharing of the PID address space between the container and the host operating system. Containers launched with this (optional) flag can access and be accessed by other containers in the namespace belonging to the host running the Docker daemon.
```yml
pid: "host"
```
## ports
Expose ports. Either specify both ports (`HOST:CONTAINER`), or just the container port (an ephemeral host port is chosen). `udp` ports can be specified with a `/udp` suffix.
```yml
ports:
- "80"
- "443:443"
- "500/udp"
- "4500:4500/udp"
- "49022:22"
```
## privileged
Whether to start the containers with Docker Engine's privileged flag set or not (default: `false`).
```yml
privileged: true
```
## restart
Whether the containers for this service should be restarted if they stop (default: `no`, possible values: `no`, `on-failure`, `always`).
```yml
restart: always
```
## roles
A list of Docker Cloud API roles to grant the service. The only supported value is `global`, which creates an environment variable `DOCKERCLOUD_AUTH` used to authenticate against Docker Cloud API. Learn more [here](api-roles.md).
```yml
roles:
- global
```
## security_opt
Override the default labeling scheme for each container.
```yml
security_opt:
- label:user:USER
- label:role:ROLE
```
## sequential_deployment
Whether the containers should be launched and scaled in sequence (default: `false`). Learn more [here](service-scaling.md).
```yml
sequential_deployment: true
```
## tags
Indicates the [deploy tags](deploy-tags.md) to select the nodes where containers are created.
```yml
tags:
- staging
- web
```
## target_num_containers
The number of containers to run for this service (default: `1`).
```yml
target_num_containers: 3
```
## volumes
Mount paths as volumes, optionally specifying a path on the host machine (`HOST:CONTAINER`), or an access mode (`HOST:CONTAINER:ro`).
```yml
volumes:
- /etc/mysql
- /sys:/sys
- /etc:/etc:ro
```
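The three volume forms above can be parsed with a minimal sketch (a hypothetical helper, not part of any Docker tooling):

```python
def parse_volume(spec):
    """Parse a stack-file volume entry: CONTAINER, HOST:CONTAINER, or
    HOST:CONTAINER:ro, matching the forms shown above."""
    parts = spec.split(":")
    if len(parts) == 1:
        # Bare container path: Docker Cloud creates a new empty volume.
        return {"host_path": None, "container_path": parts[0], "read_only": False}
    if len(parts) == 2:
        return {"host_path": parts[0], "container_path": parts[1], "read_only": False}
    if len(parts) == 3 and parts[2] == "ro":
        return {"host_path": parts[0], "container_path": parts[1], "read_only": True}
    raise ValueError("unrecognized volume spec: %r" % spec)
```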
## volumes_from
Mount all of the volumes from another service by specifying a service unique name.
- If the target service belongs to this stack, its service unique name is its service name.
- If the target service does not belong to any stack, its service unique name is its service name.
- If the target service belongs to another stack, its service unique name is its service name plus the service stack name, separated by a period (`.`). Learn more [here](volumes.md).
```yml
volumes_from:
- database
- mongodb.staging
```
## Single-value keys analogous to their `docker run` counterparts
```yml
working_dir: /app
entrypoint: /app/entrypoint.sh
user: root
hostname: foo
domainname: foo.com
mac_address: 02:42:ac:11:65:43
cpu_shares: 512
cpuset: 0,1
mem_limit: 100000m
memswap_limit: 200000m
privileged: true
read_only: true
stdin_open: true
tty: true
```
## Unsupported Docker-compose keys
Stack files (`docker-cloud.yml`) were designed with `docker-compose.yml` in mind to maximize compatibility. However, the following keys used in Compose are not supported in Docker Cloud stack files:
```none
build
external_links
env_file
```
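As a sketch, a stack file already parsed into a dictionary could be pre-checked for these keys before deployment (a hypothetical helper, not part of the Docker Cloud CLI):

```python
# The Compose keys listed above that Docker Cloud stack files do not support.
UNSUPPORTED_KEYS = {"build", "external_links", "env_file"}

def find_unsupported_keys(stack):
    """Given a parsed stack file (dict of service name -> service dict),
    return {service: sorted list of unsupported keys} for offenders."""
    problems = {}
    for name, service in stack.items():
        bad = sorted(UNSUPPORTED_KEYS & set(service))
        if bad:
            problems[name] = bad
    return problems
```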

---
description: Manage service stacks
keywords: service, stack, yaml
redirect_from:
- /docker-cloud/feature-reference/stacks/
title: Manage service stacks
---
A **stack** is a collection of services that make up an application in a specific environment. A **stack file** is a file in YAML format, similar to a `docker-compose.yml` file, that defines one or more services. The YAML reference is documented [here](stack-yaml-reference.md).
Stacks are a convenient way to automatically deploy multiple services that are linked to each other, without needing to define each one separately.
Stack files define environment variables, deployment tags, the number of containers, and related environment-specific configuration. Because of this, you should use a separate stack file for development, staging, production, and other environments.
### Stack file example
Below is an example `docker-cloud.yml`:
```yml
lb:
image: dockercloud/haproxy
links:
- web
ports:
- "80:80"
roles:
- global
web:
image: dockercloud/quickstart-python
links:
- redis
target_num_containers: 4
redis:
image: redis
```
Each key defined in `docker-cloud.yml` creates a service with that name in Docker Cloud. In the example above, three services are created: `lb`, `web`, and `redis`. Each service is a dictionary whose possible keys are documented in the [stack file YAML reference](stack-yaml-reference.md).
Only the `image` key is mandatory. Other keys are optional and are analogous to their [Docker Cloud Service API](/apidocs/docker-cloud.md#create-a-new-service) counterparts.
## Create a stack
Docker Cloud allows you to create stacks from the web interface, as well as via the Docker Cloud API and the `docker-cloud` command line.
To create a stack from the Docker Cloud web interface:
1. Log in to Docker Cloud.
2. Click **Stacks**.
3. Click **Create**.
4. Enter a name for the stackfile.
5. Enter or paste the stack file in the **Stackfile** field, or drag a file to the field to upload it. (You can also click in the field to browse for and upload a file on your computer.)
![](images/stack-create.png)
6. Click **Create** or **Create and deploy**.
### Create a stack using the API
You can also create a new stack by uploading a stack file directly using the Docker Cloud API. When you use the API, the stack file is in **JSON** format, like the following example:
```json
POST /api/v1/stack/ HTTP/1.1
{
"name": "my-new-stack",
"services": [
{
            "name": "hello-world",
"image": "dockercloud/hello-world",
"target_num_containers": 2
}
]
}
```
Check our [API documentation](/apidocs/docker-cloud.md#stacks) for more information.
### Create a stack using the CLI
You can create a stack from a YAML file by executing:
```bash
$ docker-cloud stack create -f docker-cloud.yml
```
Check our [CLI documentation](/apidocs/docker-cloud.md#stacks) for more information.
## Update an existing stack
You can specify an existing stack when you create a service; however, you might not always have the stack definition ready at that time, or you might later want to add a service to an existing stack.
To update a stack from the Docker Cloud web interface:
1. Navigate to the stack you want to update.
2. Click **Edit**.
![](images/stack-edit.png)
3. Edit the stack file, or upload a new one from your computer.
4. Click **Save**.
### Update an existing stack using the API
You can also update a stack by uploading the new stack file directly using the Docker Cloud API. When you use the API, the stack file is in **JSON** format, like the following example:
```json
PATCH /api/app/v1/stack/(uuid)/ HTTP/1.1
{
"services": [
{
            "name": "hello-world",
"image": "dockercloud/hello-world",
"target_num_containers": 2
}
]
}
```
Check our [API documentation](/apidocs/docker-cloud.md#stacks) for more information.
### Update an existing stack using the CLI
You can update a stack from a YAML file by executing:
```bash
docker-cloud stack update -f docker-cloud.yml (uuid or name)
```
Check our [CLI documentation](/apidocs/docker-cloud.md#stacks) for more information.

---
description: Use triggers
keywords: API, triggers, endpoints
redirect_from:
- /docker-cloud/feature-reference/triggers/
title: Use triggers
---
## What are triggers?
**Triggers** are API endpoints that redeploy or scale a specific service
whenever a `POST` HTTP request is sent to them. You can create one or more
triggers per service.
Triggers do not require any authentication. This allows third-party services
such as Docker Hub to call them; because of this, it is important that you
keep their URLs secret.
The body of the `POST` request is passed in to the new containers as an
environment variable called `DOCKERCLOUD_TRIGGER_BODY`.
### Trigger types
Docker Cloud supports two types of triggers:
* **Redeploy** triggers, which redeploy the service when called
* **Scale up** triggers, which scale the service by one or more containers when called
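As a minimal sketch, firing a trigger is just an unauthenticated `POST`. The URL below is hypothetical, since the real one is generated per trigger by Docker Cloud and must be kept secret:

```python
import json
import urllib.request

# Hypothetical trigger URL for illustration only; use the URL Docker Cloud
# generates for your trigger, and keep it secret.
TRIGGER_URL = "https://cloud.docker.com/api/tgr/v1/example-trigger-uuid/call/"

# The request body reaches the new containers as the
# DOCKERCLOUD_TRIGGER_BODY environment variable.
body = json.dumps({"source": "ci", "build": 42}).encode()
request = urllib.request.Request(
    TRIGGER_URL, data=body, headers={"Content-Type": "application/json"})

# urllib.request.urlopen(request)  # uncomment to actually fire the trigger
```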
## Create a trigger
1. Click the name of the service you want to create a trigger for.
2. Go to the detail page and scroll down to the **Triggers** section.
![](images/triggers-tab-blank.png)
3. In the **Trigger name** field, enter a name for the trigger.
4. Select a trigger type.
5. Click the **+** (plus sign) icon.
![](images/new-trigger-created.png)
6. Use the POST request URL provided to configure the webhook in your
application or third party service.
## Revoke triggers
To stop a trigger from automatically scaling or redeploying, you must revoke it.
1. Go to the detail page of the service.
2. Scroll down to the **Triggers** section.
3. Click the **trashcan** icon for the trigger you want to revoke.
![](images/revoke-trigger.png)
Once the trigger is revoked, it stops accepting requests.
## Use triggers in the API and CLI
See our [API and CLI documentation](/apidocs/docker-cloud.md#triggers) to learn how to use triggers with our API and the CLI.

---
description: Work with data volumes
keywords: data, volumes, create, reuse
redirect_from:
- /docker-cloud/tutorials/download-volume-data/
- /docker-cloud/feature-reference/volumes/
title: Work with data volumes
---
In Docker Cloud, you can define one or more data volumes for a service.
**Volumes** are directories that are stored outside of the container's
filesystem and which hold reusable and shareable data that can persist even when
containers are terminated. This data can be reused by the same service on
redeployment, or shared with other services.
## Add a data volume to a service
Data volumes can be specified either in the image's `Dockerfile`, using the
[VOLUME instruction](/engine/reference/builder/#volume), or when
creating a service.
To define a data volume in a service, specify the **container path** where it
should be created in the **Volumes** step of the **Create new service** wizard.
Each container of the service has its own volume. Data volumes are reused
when the service is redeployed (data persists in this case), and deleted if the
service is terminated.
![](images/data-volumes-wizard.png)
If you don't define a **host path**, Docker Cloud creates a new empty volume.
Otherwise, the specified **host path** is mounted on the **container path**.
When you specify a host path, you can also specify whether to mount the volume
read-only, or read/write.
![](images/host-volumes-wizard.png)
## Reuse data volumes from another service
You can reuse data volumes from another service. To do this when creating a service, go through the **Create new service**, and continue to the **Volumes** step. From the **Volumes** page, choose a source service from the **Add volumes from** menu.
![](images/volumes-from-wizard.png)
All reused data volumes are mounted on the same paths as in the source service.
Containers must be on the same host to share volumes, so the containers
of the new service deploy to the same nodes where the source service
containers are deployed.
> **Note**: A service with data volumes cannot be terminated until all services that are using its volumes have also been terminated.
## Back up data volumes
You might find it helpful to download or back up the data from volumes that are attached to running containers.
1. Run an SSH service that mounts the volumes of the service you want to back up.
In the example snippet below, replace `mysql` with the actual service name.
```bash
$ docker-cloud service run -n downloader -p 22:2222 -e AUTHORIZED_KEYS="$(cat ~/.ssh/id_rsa.pub)" --volumes-from mysql tutum/ubuntu
```
2. Run a `scp` (secure-copy) to download the files to your local machine.
In the example snippet below, replace `downloader-1.uuid.cont.dockerapp.io` with the container's Fully Qualified Domain Name (FQDN), and replace `/var/lib/mysql` with the path within the container from which you want to download the data. The data is downloaded to the current local folder.
```bash
$ scp -r -P 2222 root@downloader-1.uuid.cont.dockerapp.io:/var/lib/mysql .
```

---
description: Automated builds
keywords: automated, build, images
title: Advanced options for Autobuild and Autotest
---
The following options allow you to customize your automated build and automated test processes.
## Environment variables for building and testing
Several utility environment variables are set by the build process, and are
available during automated builds, automated tests, and while executing
hooks.
> **Note**: These environment variables are only available to the build and test
processes and do not affect your service's run environment.
* `SOURCE_BRANCH`: the name of the branch or the tag that is currently being tested.
* `SOURCE_COMMIT`: the SHA1 hash of the commit being tested.
* `COMMIT_MSG`: the message from the commit being tested and built.
* `DOCKER_REPO`: the name of the Docker repository being built.
* `DOCKERFILE_PATH`: the path of the Dockerfile currently being built.
* `CACHE_TAG`: the Docker repository tag being built.
* `IMAGE_NAME`: the name and tag of the Docker repository being built. (This variable is a combination of `DOCKER_REPO`:`CACHE_TAG`.)
If you are using these build environment variables in a
`docker-compose.test.yml` file for automated testing, declare them in your `sut`
service's environment as shown below.
```none
sut:
build: .
command: run_tests.sh
environment:
- SOURCE_BRANCH
```
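The relationship between `DOCKER_REPO`, `CACHE_TAG`, and `IMAGE_NAME` noted above amounts to the following (a sketch for illustration):

```python
def image_name(docker_repo, cache_tag):
    """IMAGE_NAME is DOCKER_REPO and CACHE_TAG joined by a colon, as
    described for the build environment variables above."""
    return "%s:%s" % (docker_repo, cache_tag)
```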
## Override build, test or push commands
Docker Cloud allows you to override and customize the `build`, `test` and `push`
commands during automated build and test processes using hooks. For example, you
might use a build hook to set build arguments used only during the build
process. (You can also set up [custom build phase hooks](#custom-build-phase-hooks) to perform actions in between these commands.)
**Use these hooks with caution.** The contents of these hook files replace the
basic `docker` commands, so you must include a similar build, test, or push
command in the hook, or your automated process does not complete.
To override these phases, create a folder called `hooks` in your source code
repository at the same directory level as your Dockerfile. Create a file called
`hooks/build`, `hooks/test`, or `hooks/push` and include commands that the
builder process can execute, such as `docker` and `bash` commands (prefixed appropriately with `#!/bin/bash`).
## Custom build phase hooks
You can run custom commands between phases of the build process by creating
hooks. Hooks allow you to provide extra instructions to the autobuild and
autotest processes.
Create a folder called `hooks` in your source code repository at the same
directory level as your Dockerfile. Place files that define the hooks in that
folder. Hook files can include both `docker` commands, and `bash` commands as long as they are prefixed appropriately with `#!/bin/bash`. The builder executes the commands in the files before and after each step.
The following hooks are available:
* `hooks/post_checkout`
* `hooks/pre_build`
* `hooks/post_build`
* `hooks/pre_test`
* `hooks/post_test`
* `hooks/pre_push` (only used when executing a build rule or [automated build](automated-build.md) )
* `hooks/post_push` (only used when executing a build rule or [automated build](automated-build.md) )
### Build hook examples
#### Override the "build" phase to set variables
Docker Cloud allows you to define build environment variables either in the hook files, or from the automated build UI (which you can then reference in hooks).
In the following example, we define a build hook that uses `docker build` arguments to set the variable `CUSTOM` based on the value of a variable we defined using the Docker Cloud build settings. `$DOCKERFILE_PATH` is a variable that we provide with the name of the Dockerfile to build, and `$IMAGE_NAME` is the name of the image being built.
```none
docker build --build-arg CUSTOM=$VAR -f $DOCKERFILE_PATH -t $IMAGE_NAME .
```
> **Caution**: A `hooks/build` file overrides the basic [docker build](/engine/reference/commandline/build.md) command
used by the builder, so you must include a similar build command in the hook or
the automated build fails.
To learn more about Docker build-time variables, see the [docker build documentation](/engine/reference/commandline/build/#set-build-time-variables-build-arg).
#### Two-phase build
If your build process requires a component that is not a dependency of your application, you can use a pre-build hook (the `hooks/pre_build` file) to collect and compile required components. In the example below, the hook uses a Docker container to compile a Golang binary that is required before the build.
```bash
#!/bin/bash
echo "=> Building the binary"
docker run --privileged \
-v $(pwd):/src \
-v /var/run/docker.sock:/var/run/docker.sock \
centurylink/golang-builder
```
#### Push to multiple repos
By default the build process pushes the image only to the repository where the build settings are configured. If you need to push the same image to multiple repositories, you can set up a `post_push` hook to add additional tags and push to more repositories.
```none
docker tag $IMAGE_NAME $DOCKER_REPO:$SOURCE_COMMIT
docker push $DOCKER_REPO:$SOURCE_COMMIT
```
## Source Repository / Branch Clones
When Docker Cloud pulls a branch from a source code repository, it performs
a shallow clone (only the tip of the specified branch). This has the advantage
of minimizing the amount of data transfer necessary from the repository and
speeding up the build because it pulls only the minimal code necessary.
Because of this, if you need to perform a custom action that relies on a different
branch (such as a `post_push` hook), you can't check out that branch unless
you do one of the following:

* Get a shallow checkout of the target branch:

  ```bash
  git fetch origin branch:mytargetbranch --depth 1
  ```

* "Unshallow" the clone, which fetches the whole Git history (and potentially
  takes a long time / moves a lot of data), by using the `--unshallow` flag on the fetch:

  ```bash
  git fetch --unshallow origin
  ```

---
description: Automated builds
keywords: automated, build, images
redirect_from:
- /docker-cloud/feature-reference/automated-build/
title: Automated builds
---
[![Automated Builds with Docker Cloud](images/video-auto-builds-docker-cloud.png)](https://youtu.be/sl2mfyjnkXk "Automated Builds with Docker Cloud"){:target="_blank" class="_"}
> **Note**: Docker Cloud's Build functionality is in BETA.
Docker Cloud can automatically build images from source code in an external
repository and automatically push the built image to your Docker
repositories.
When you set up automated builds (also called autobuilds), you create a list of
branches and tags that you want to build into Docker images. When you push code
to a source code branch (for example in Github) for one of those listed image
tags, the push uses a webhook to trigger a new build, which produces a Docker
image. The built image is then pushed to the Docker Cloud registry or to an
external registry.
If you have automated tests configured, these run after building but before
pushing to the registry. You can use these tests to create a continuous
integration workflow where a build that fails its tests does not push the built
image. Automated tests do not push images to the registry on their own. [Learn more about automated image testing here.](automated-testing.md)
You can also just use `docker push` to push pre-built images to these
repositories, even if you have automatic builds set up.
![An automated build dashboard](images/build-dashboard.png)
## Configure automated build settings
You can configure repositories in Docker Cloud so that they automatically
build an image each time you push new code to your source provider. If you have
[automated tests](automated-testing.md) configured, the new image is only pushed
when the tests succeed.
Before you set up automated builds you need to [create a repository](repos.md) to build, and [link to your source code provider](link-source.md).
1. From the **Repositories** section, click into a repository to view its details.
2. Click the **Builds** tab.
3. If you are setting up automated builds for the first time, select
the code repository service where the image's source code is stored.
Otherwise, if you are editing the build settings for an existing automated
build, click **Configure automated builds**.
4. Select the **source repository** to build the Docker images from.
You might need to specify an organization or user (the _namespace_) from the
source code provider. Once you select a namespace, its source code
repositories appear in the **Select repository** dropdown list.
5. Choose where to run your build processes.
You can either run the process on your own infrastructure and optionally [set up specific nodes to build on](automated-build.md#set-up-builder-nodes), or select **Build on Docker Cloud's infrastructure** to use the hosted build service
offered on Docker Cloud's infrastructure. If you use
Docker's infrastructure, select a builder size to run the build
process on. This hosted build service is free while it is in Beta.
![Editing build configurations](images/edit-repository-builds.png)
6. If in the previous step you selected **Build on Docker
Cloud's infrastructure**, then you are given the option to select the
**Docker Version** used to build this repository. You can choose between
the **Stable** and **Edge** versions of Docker.
Selecting **Edge** lets you take advantage of [multi-stage builds](/engine/userguide/eng-image/multistage-build/). For more information and examples, see the topic on how to [use multi-stage builds](/engine/userguide/eng-image/multistage-build/#use-multi-stage-builds).
You can learn more about **stable** and **edge** channels in the
[Install Docker overview](/install/)
and the [Docker CE Edge](/edge/) topics.
7. Optionally, enable [autotests](automated-testing.md#enable-automated-tests-on-a-repository).
8. Review the default **Build Rules**, and optionally click the
**plus sign** to add and configure more build rules.
_Build rules_ control what Docker Cloud builds into images from the contents
of the source code repository, and how the resulting images are tagged
within the Docker repository.
A default build rule is set up for you, which you can edit or delete. This
default rule builds from the `Branch` in your source code repository called
`master`, and creates a Docker image tagged with `latest`.
9. For each branch or tag, enable or disable the **Autobuild** toggle.
Only branches or tags with autobuild enabled are built, tested, *and* have
the resulting image pushed to the repository. Branches with autobuild
disabled are built for test purposes (if enabled at the repository
level), but the built Docker image is not pushed to the repository.
10. For each branch or tag, enable or disable the **Build Caching** toggle.
[Build caching](/engine/userguide/eng-image/dockerfile_best-practices/#/build-cache) can save time if you are building a large image frequently or have
many dependencies. You might want to leave build caching disabled to
make sure all of your dependencies are resolved at build time, or if
you have a large layer that is quicker to build locally.
11. Click **Save** to save the settings, or click **Save and build** to save and
run an initial test.
A webhook is automatically added to your source code repository to notify
Docker Cloud on every push. Only pushes to branches that are listed as the
source for one or more tags trigger a build.
### Set up build rules
By default when you set up autobuilds, a basic build rule is created for you.
This default rule watches for changes to the `master` branch in your source code
repository, and builds the `master` branch into a Docker image tagged with
`latest`.
In the **Build Rules** section, enter one or more sources to build.
For each source:
* Select the **Source type** to build either a **tag** or a
**branch**. This tells the build system what to look for in the source code
repository.
* Enter the name of the **Source** branch or tag you want to build.
The first time you configure automated builds, a default build rule is set up
for you. This default rule builds from the `Branch` in your source code
repository called `master`, and creates a Docker image tagged with `latest`.
You can also use a regex to select which source branches or tags to build.
To learn more, see
[regexes](automated-build.md#regexes-and-automated-builds).
* Enter the tag to apply to Docker images built from this source.
If you configured a regex to select the source, you can reference the
capture groups and use its result as part of the tag. To learn more, see
[regexes](automated-build.md#regexes-and-automated-builds).
* Specify the **Dockerfile location** as a path relative to the root of the source code repository. (If the Dockerfile is at the repository root, leave this path set to `/`.)
> **Note:** When Docker Cloud pulls a branch from a source code repository, it performs
a shallow clone (only the tip of the specified branch). Refer to [Advanced options for Autobuild and Autotest](advanced.md)
for more information.
### Environment variables for builds
You can set the values for environment variables used in your build processes
when you configure an automated build. Add your build environment variables by
clicking the plus sign next to the **Build environment variables** section, and
then entering a variable name and the value.
When you set variable values from the Docker Cloud UI, they can be used by the
commands you set in `hooks` files, but they are stored so that only users who
have `admin` access to the Docker Cloud repository can see their values. This
means you can use them to safely store access tokens or other information that
should remain secret.
> **Note**: The variables set on the build configuration screen are used during
the build processes _only_ and should not be confused with the environment
values used by your service (for example to create service links).
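For example, a custom hook could read such a variable. The sketch below is a hypothetical `hooks/post_push` file; the variable name `DEPLOY_TOKEN` is an assumed example, not one Docker Cloud defines:

```shell
#!/bin/sh
# hooks/post_push -- a minimal sketch of a custom hook that consumes a
# build environment variable. DEPLOY_TOKEN is a hypothetical name that
# you would define yourself on the build configuration screen.
if [ -n "$DEPLOY_TOKEN" ]; then
  echo "DEPLOY_TOKEN is set; a deploy webhook call would go here"
else
  echo "DEPLOY_TOKEN is not set; skipping deploy notification"
fi
```

Because the value never appears in the build logs or the hook file itself, only repository admins who can open the build configuration screen can read it.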
## Check your active builds
A summary of a repository's builds appears both on the repository **General**
tab, and in the **Builds** tab. The **Builds** tab also displays a color coded
bar chart of the build queue times and durations. Both views display the
pending, in progress, successful, and failed builds for any tag of the
repository.
From either location, you can click a build job to view its build report. The
build report shows information about the build job including the source
repository and branch (or tag), the build duration, creation time and location,
and the user namespace the build occurred in.
![screen showing a build report](images/build-report.png)
## Cancel or retry a build
While a build is queued or running, a **Cancel** icon appears next to its build
report link on the General tab and on the Builds tab. You can also click the
**Cancel** button from the build report page, or from the Timeline tab's logs
display for the build.
![list of builds showing the cancel icon](images/build-cancelicon.png)
If a build fails, a **Retry** icon appears next to the build report line on the
General and Builds tabs, and the build report page and Timeline logs also
display a **Retry** button.
![Timeline view showing the retry build button](images/retry-build.png)
> **Note**: If you are viewing the build details for a repository that belongs
to an Organization, the Cancel and Retry buttons only appear if you have `Read & Write` access to the repository.
## Disable an automated build
Automated builds are enabled per branch or tag, and can be disabled and
re-enabled easily. You might do this when you want to build only manually for
a while, for example when you are doing major refactoring in your code. Disabling
autobuilds does not disable [autotests](automated-testing.md).
To disable an automated build:
1. From the **Repositories** page, click into a repository, and click the **Builds** tab.
2. Click **Configure automated builds** to edit the repository's build settings.
3. In the **Build Rules** section, locate the branch or tag you no longer want
to automatically build.
4. Click the **autobuild** toggle next to the configuration line.
The toggle turns gray when disabled.
5. Click **Save** to save your changes.
## Advanced automated build options
At the minimum you need a build rule composed of a source branch (or tag) and
destination Docker tag to set up an automated build. You can also change where
the build looks for the Dockerfile, set a path to the files the build uses
(the build context), set up multiple static tags or branches to build from, and
use regular expressions (regexes) to dynamically select source code to build and
create dynamic tags.
All of these options are available from the **Build configuration** screen for
each repository. Click **Repositories** from the left navigation, click the name
of the repository you want to edit, click the **Builds** tab, and click
**Configure Automated builds**.
### Tag and Branch builds
You can configure your automated builds so that pushes to specific branches or tags trigger a build.
1. In the **Build Rules** section, click the plus sign to add more sources to build.
2. Select the **Source type** to build: either a **tag** or a **branch**.
This tells the build system what type of source to look for in the code
repository.
3. Enter the name of the **Source** branch or tag you want to build.
You can enter a name, or use a regex to match which source branch or tag
names to build. To learn more, see
[regexes](automated-build.md#regexes-and-automated-builds).
4. Enter the tag to apply to Docker images built from this source.
If you configured a regex to select the source, you can reference the
capture groups and use its result as part of the tag. To learn more, see
[regexes](automated-build.md#regexes-and-automated-builds).
5. Repeat steps 2 through 4 for each new build rule you set up.
### Set the build context and Dockerfile location
Depending on how the files are arranged in your source code repository, the
files required to build your images may not be at the repository root. If that's
the case, you can specify a path where the build looks for the files.
The _build context_ is the path to the files needed for the build, relative to the root of the repository. Enter the path to these files in the **Build context** field. Enter `/` to set the build context as the root of the source code repository.
> **Note**: If you delete the default path `/` from the **Build context** field and leave it blank, the build system uses the path to the Dockerfile as the build context. However, to avoid confusion we recommend that you specify the complete path.
You can specify the **Dockerfile location** as a path relative to the build
context. If the Dockerfile is at the root of the build context path, leave the
Dockerfile path set to `/`. (If the build context field is blank, set the path
to the Dockerfile from the root of the source repository.)
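As an illustration, the commands below create a hypothetical repository layout (all names are examples) where the build files live in a `docker/` subdirectory rather than at the repository root, along with the settings that layout would call for:

```shell
# Create a hypothetical repository layout where the build files are not
# at the repository root:
mkdir -p my-repo/docker/app
cat > my-repo/docker/Dockerfile <<'EOF'
FROM alpine
COPY app/ /app/
EOF

# For this layout, on the build configuration screen you would set:
#   Build context:       /docker   (relative to the repository root)
#   Dockerfile location: /         (relative to the build context)
#
# The equivalent local command, run from the repository root, would be:
#   docker build -f docker/Dockerfile docker
```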
### Regexes and automated builds
You can specify a regular expression (regex) so that only matching branches or
tags are built. You can also use the results of the regex to create the Docker
tag that is applied to the built image.
You can use the variable `{sourceref}` to use the branch or tag name that
matched the regex in the Docker tag applied to the resulting built image. (The
variable includes the whole source name, not just the portion that matched the
regex.) You can also use up to nine regular expression capture groups
(expressions enclosed in parentheses) to select a source to build, and reference
these in the Docker Tag field using `{\1}` through `{\9}`.
**Regex example: build from version number branch and tag with version number**
You might want to automatically build any branches that end with a number
formatted like a version number, and tag their resulting Docker images using a
name that incorporates that branch name.
To do this, specify a `branch` build with the regex `/[0-9.]+$/` in the
**Source** field, and use the formula `version-{sourceref}` in the **Docker
tag** field.
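You can simulate this matching locally with `grep -E` to sanity-check the regex before saving the rule. The branch names below are hypothetical, and this is only a local approximation; Docker Cloud evaluates the regex itself:

```shell
# Preview which branch names the Source regex [0-9.]+$ would match, and
# the Docker tag the version-{sourceref} formula would produce for each.
for branch in release-1.2 hotfix master v0.9; do
  if echo "$branch" | grep -Eq '[0-9.]+$'; then
    # {sourceref} expands to the whole branch name, not just the match
    echo "$branch -> version-$branch"
  else
    echo "$branch -> (no build triggered)"
  fi
done
```

With these sample names, `release-1.2` and `v0.9` end in version-number text, so they would be built and tagged `version-release-1.2` and `version-v0.9`, while `hotfix` and `master` would not trigger a build.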
<!-- Capture groups Not a priority
#### Regex example: build from version number branch and tag with version number
You could also use capture groups to build and label images that come from various sources. For example, you might have
`/(alice|bob)-v([0-9.]+)/` -->
### Create multiple Docker tags from a single build
By default, each build rule builds a source branch or tag into a Docker image,
and then tags that image with a single tag. However, you can also create several
tagged Docker images from a single build rule.
To create multiple tags from a single build rule, enter a comma-separated list
of tags in the **Docker tag** field in the build rule. If an image with that tag
already exists, Docker Cloud overwrites the image when the build completes
successfully. If you have automated tests configured, the build must pass these
tests as well before the image is overwritten. You can use both regex references
and plain text values in this field simultaneously.
For example, if you want to update the image tagged with `latest` at the same
time as you tag an image for a specific version, you could enter
`{sourceref},latest` in the Docker Tag field.
If you need to update a tag _in another repository_, use [a post_build hook](advanced.md#push-to-multiple-repos) to push to a second repository.
## Build repositories with linked private submodules
Docker Cloud sets up a deploy key in your source code repository that allows it
to clone the repository and build it; however, this key only works for a single,
specific code repository. If your source code repository uses private Git
submodules (or requires that you clone other private repositories to build),
Docker Cloud cannot access these additional repos, your build cannot complete,
and an error is logged in your build timeline.
To work around this, you can set up your automated build using the `SSH_PRIVATE` environment variable to override the deployment key and grant Docker Cloud's build system access to the repositories.
> **Note**: If you are using autobuild for teams, use [the process below](automated-build.md#service-users-for-team-autobuilds) instead, and configure a service user for your source code provider. You can also do this for an individual account to limit Docker Cloud's access to your source repositories.
1. Generate an SSH keypair that you use for builds only, and add the public key to your source code provider account.
This step is optional, but allows you to revoke the build-only keypair without removing other access.
2. Copy the private half of the keypair to your clipboard.
3. In Docker Cloud, navigate to the build page for the repository that has linked private submodules. (If necessary, follow the steps [here](automated-build.md#configure-automated-build-settings) to configure the automated build.)
4. At the bottom of the screen, click the plus sign ( **+** ) next to **Build Environment variables**.
5. Enter `SSH_PRIVATE` as the name for the new environment variable.
6. Paste the private half of the keypair into the **Value** field.
7. Click **Save**, or **Save and Build** to validate that the build now completes.
> **Note**: You must configure your private git submodules using git clone over SSH (`git@submodule.tld:some-submodule.git`) rather than HTTPS.
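As a sketch of step 1, a dedicated build-only keypair can be generated with `ssh-keygen`; the file name `docker-cloud-build` is just an example:

```shell
# Generate a dedicated, build-only SSH keypair (the ed25519 type and the
# docker-cloud-build file name are example choices). A separate keypair
# can be revoked later without disturbing your other keys.
mkdir -p "$HOME/.ssh"
ssh-keygen -t ed25519 -N "" -C "docker-cloud-build" \
  -f "$HOME/.ssh/docker-cloud-build"

# The .pub half goes to your source code provider account; the private
# half is what you paste into the SSH_PRIVATE build environment variable.
cat "$HOME/.ssh/docker-cloud-build.pub"
```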
## Autobuild for Teams
When you create an automated build repository in your own account namespace, you can start, cancel, and retry builds, and edit and delete your own repositories.
These same actions are also available for team repositories from Docker Hub if
you are a member of the Organization's `Owners` team. If you are a member of a
team with `write` permissions you can start, cancel and retry builds in your
team's repositories, but you cannot edit the team repository settings or delete
the team repositories. If your user account has `read` permission, or if you're
a member of a team with `read` permission, you can view the build configuration
including any testing settings.
| Action/Permission | read | write | admin | owner |
| --------------------- | ---- | ----- | ----- | ----- |
| view build details | x | x | x | x |
| start, cancel, retry | | x | x | x |
| edit build settings | | | x | x |
| delete build | | | | x |
### Service users for team autobuilds
> **Note**: Only members of the `Owners` team can set up automated builds for teams.
When you set up automated builds for teams, you grant Docker Cloud access to
your source code repositories using OAuth tied to a specific user account. This
means that Docker Cloud has access to everything that the linked source provider
account can access.
For organizations and teams, we recommend creating a dedicated service account
(or "machine user") to grant access to the source provider. This ensures that no
builds break as individual users' access permissions change, and that an
individual user's personal projects are not exposed to an entire organization.
This service account should have access to any repositories to be built,
and must have administrative access to the source code repositories so it can
manage deploy keys. If needed, you can limit this account to only a specific
set of repositories required for a specific build.
If you are building repositories with linked private submodules (private
dependencies), you also need to add an override `SSH_PRIVATE` environment
variable to automated builds associated with the account.
1. Create a service user account on your source provider, and generate SSH keys for it.
2. Create a "build" team in your organization.
3. Ensure that the new "build" team has access to each repository and submodule you need to build.
Go to the repository's **Settings** page. On GitHub, add the new "build" team to the list of **Collaborators and Teams**. On Bitbucket, add the "build" team to the list of approved users on the **Access management** screen.
4. Add the service user to the "build" team on the source provider.
5. Log in to Docker Cloud as a member of the `Owners` team, switch to the organization, and follow the instructions to [link to source code repository](link-source.md) using the service account.
> **Note**: You may need to log out of your individual account on the source code provider to create the link to the service account.
6. Optionally, use the SSH keys you generated to set up any builds with private submodules, using the service account and [the instructions above](automated-build.md#build-repositories-with-linked-private-submodules).
## What's Next?
### Customize your build process
Additional advanced options are available for customizing your automated builds,
including utility environment variables, hooks, and build phase overrides. To
learn more see [Advanced options for Autobuild and Autotest](advanced.md).
### Set up builder nodes
If you are building on your own infrastructure, you can run the build process on
specific nodes by adding the `builder` label to them. If no builder nodes are
specified, the build containers are deployed using an "emptiest node" strategy.
You can also limit the number of concurrent builds (including `autotest` builds)
on a specific node by using a `builder=n` tag, where the `n` is the number of
builds to allow. For example, a node tagged with `builder=5` only allows up to
five concurrent builds or autotest-builds at the same time.
### Autoredeploy services on successful build
You can configure your services to automatically redeploy once the build
succeeds. [Learn more about autoredeploy](../apps/auto-redeploy.md).
### Add automated tests
To test your code before the image is pushed, you can use
Docker Cloud's [Autotest](automated-testing.md) feature which
integrates seamlessly with autobuild and autoredeploy.
> **Note**: While the Autotest feature builds an image for testing purposes, it
does not push the resulting image to Docker Cloud or the external registry.
---
description: Manage Builds and Images in Docker Cloud
keywords: builds, images, Cloud
title: Builds and images overview
notoc: true
---
Docker Cloud provides a hosted registry service where you can create
repositories to store your Docker images. You can choose to push images to the
repositories, or link to your source code and build them directly in Docker
Cloud.
You can build images manually, or set up automated builds to rebuild your Docker
image on each `git push` to the source code. You can also create automated
tests, and use autoredeploy to automatically update your running services when a
build passes its tests.
* [Repositories in Docker Cloud](repos.md)
* [Push images to Docker Cloud](push-images.md)
* [Link to a source code repository](link-source.md)
* [Automated builds](automated-build.md)
* [Automated repository tests](automated-testing.md)
* [Advanced options for Autobuild and Autotest](advanced.md)
![Docker Cloud repository General view](images/repo-general.png){:width="650px"}
---
description: Link to your source code repository
keywords: sourcecode, github, bitbucket, Cloud
redirect_from:
- /docker-cloud/tutorials/link-source/
title: Link Docker Cloud to a source code provider
---
To automate building and testing of your images, you link your hosted source
code service to Docker Cloud so that it can access your source code
repositories. You can configure this link for user accounts or
organizations.
If you only push pre-built images to Docker Cloud's registry, you do not
need to link your source code provider.
> **Note**: If you are linking a source code provider to create autobuilds for a team, follow the instructions to [create a service account](automated-build.md#service-users-for-team-autobuilds) for the team before linking the account as described below.
## Link to a GitHub user account
1. Click **Cloud settings** in the left navigation bar.
2. Click or scroll down to **Source providers**.
3. Click the plug icon for the source provider you want to link.
![Linking source providers](images/source-providers.png)
4. Review the settings for the **Docker Cloud Builder** OAuth application.
![Granting access to GitHub account](images/link-source-github-ind.png)
>**Note**: If you are the owner of any GitHub organizations, you might see
options to grant Docker Cloud access to them from this screen. You can also
individually edit an organization's Third-party access settings to grant or
revoke Docker Cloud's access. See [Grant access to a GitHub
organization](link-source.md#grant-access-to-a-github-organization) to learn more.
5. Click **Authorize application** to save the link.
You are now ready to create a new image!
### Unlink a GitHub user account
To revoke Docker Cloud's access to your GitHub account, you must unlink it both
from Docker Cloud, *and* from your GitHub account.
1. Click **Cloud settings** in the left navigation, and click or scroll to the
**Source providers** section.
2. Click the plug icon next to the source provider you want to remove.
The icon turns gray and has a slash through it when the account is disabled
but not revoked. You can use this to _temporarily_ disable a linked source
code provider account.
3. Go to your GitHub account's **Settings** page.
4. Click **OAuth applications** in the left navigation bar.
![Revoking access to GitHub account](images/link-source-github-ind-revoke.png)
5. Click **Revoke** next to the Docker Cloud Builder application.
> **Note**: Each repository that is configured as an automated build source
contains a webhook that notifies Docker Cloud of changes in the repository.
This webhook is not automatically removed when you revoke access to a source
code provider.
## Grant access to a GitHub organization
If you are the owner of a GitHub organization you can grant or revoke Docker
Cloud's access to the organization's repositories. Depending on the GitHub
organization's settings, you may need to be an organization owner.
If the organization has not had specific access granted or revoked before, you
can often grant access at the same time as you link your user account. In this
case, a **Grant access** button appears next to the organization name in the
link accounts screen, as shown below. If this button does not appear, you must
manually grant the application's access.
![Granting access to GitHub organization](images/link-source-github-org-lite.png)
To manually grant Docker Cloud access to a GitHub organization:
1. Link your user account using the instructions above.
2. From your GitHub Account settings, locate the **Organization settings**
section at the lower left.
3. Click the organization you want to give Docker Cloud access to.
4. From the Organization Profile menu, click **Third-party access**.
The page displays a list of third party applications and their access
status.
5. Click the pencil icon next to Docker Cloud Builder.
6. Click **Grant access** next to the organization.
![Granting access to GitHub organization manually](images/link-source-github-org.png)
### Revoke access to a GitHub organization
To revoke Docker Cloud's access to an organization's GitHub repositories:
1. From your GitHub Account settings, locate the **Organization settings** section at the lower left.
2. Click the organization you want to revoke Docker Cloud's access to.
3. From the Organization Profile menu, click **Third-party access**.
The page displays a list of third party applications and their access status.
4. Click the pencil icon next to Docker Cloud Builder.
![Revoking access to GitHub organization](images/link-source-github-org-revoke.png)
5. On the next page, click **Deny access**.
## Link to a Bitbucket user account
1. Log in to Docker Cloud using your Docker ID.
2. Click the gear icon in the left navigation to go to your **Cloud settings**.
3. Scroll to the **Source providers** section.
4. Click the plug icon for the source provider you want to link.
![Linking Bitbucket](images/source-providers.png)
5. If necessary, log in to Bitbucket.
6. On the page that appears, click **Grant access**.
### Unlink a Bitbucket user account
To permanently revoke Docker Cloud's access to your Bitbucket account, you must
unlink it both from Docker Cloud, *and* from your Bitbucket account.
1. From your **Cloud settings** page, click **Source providers**.
2. Click the plug icon next to the source provider you want to remove.
The icon turns gray and has a slash through it when the account is disabled,
however access may not have been revoked. You can use this to _temporarily_
disable a linked source code provider account.
3. Go to your Bitbucket account and click the user menu icon in the top right corner.
4. Click **Bitbucket settings**.
5. On the page that appears, click **OAuth**.
6. Click **Revoke** next to the Docker Cloud line.
> **Note**: Each repository that is configured as an automated build source
contains a webhook that notifies Docker Cloud of changes in the repository. This
webhook is not automatically removed when you revoke access to a source code
provider.
---
description: Push images to Docker Cloud
keywords: images, private, registry
redirect_from:
- /docker-cloud/getting-started/intermediate/pushing-images-to-dockercloud/
- /docker-cloud/tutorials/pushing-images-to-dockercloud/
title: Push images to Docker Cloud
notoc: true
---
Docker Cloud uses Docker Hub as its native registry for storing both public and
private repositories. Once you push your images to Docker Hub, they are
available in Docker Cloud.
If you don't have Swarm Mode enabled, images pushed to Docker Hub automatically appear for you on the **Services/Wizard** page on Docker Cloud.
> **Note**: You must use Docker Engine 1.6 or later to push to Docker Hub.
Follow the [official installation instructions](/install/index.md){: target="_blank" class="_" } depending on your system.
1. In a terminal window, set the environment variable **DOCKER_ID_USER** as *your username* in Docker Cloud.
This allows you to copy and paste the commands directly from this tutorial.
```
$ export DOCKER_ID_USER="username"
```
If you don't want to set this environment variable, change the examples in
this tutorial to replace `DOCKER_ID_USER` with your Docker Cloud username.
2. Log in to Docker Cloud using the `docker login` command.
```
$ docker login
```
This logs you in using your Docker ID, which is shared between both Docker Hub and Docker Cloud.
If you have never logged in to Docker Hub or Docker Cloud and do not have a Docker ID, running this command prompts you to create a Docker ID.
3. Tag your image using `docker tag`.
In the example below replace `my_image` with your image's name, and `DOCKER_ID_USER` with your Docker Cloud username if needed.
```
$ docker tag my_image $DOCKER_ID_USER/my_image
```
4. Push your image to Docker Hub using `docker push` (making the same replacements as in the previous step).
```
$ docker push $DOCKER_ID_USER/my_image
```
5. Check that the image you just pushed appears in Docker Cloud.
Go to Docker Cloud and navigate to the **Repositories** tab and confirm that your image appears in this list.
>**Note**: If you're a member of any organizations that are using Docker
> Cloud, you might need to switch to the organization account namespace using the
> account menu at the upper right to see other repositories.
---
description: Create and edit Docker Cloud repositories
keywords: Docker Cloud repositories, automated, build, images
title: Docker Cloud repositories
---
Repositories in Docker Cloud store your Docker images. You can create
repositories and manually [push images](push-images.md) using `docker push`, or
you can link to a source code provider and use [automated builds](automated-build.md) to build the images for you. These repositories
can be either public or private.
![Docker Cloud repository General view](images/repo-general.png)
Additionally, you can access your Docker Hub repositories and automated builds
from within Docker Cloud.
## Create a new repository in Docker Cloud
To store your images in Docker Cloud, you create a repository. All individual users can create one private repository for free, and can create unlimited public repositories.
1. Click **Repositories** in the left navigation.
2. Click **Create**.
3. Enter a **name** and an optional **description**.
4. Choose a visibility setting for the repository.
5. Optionally, click a linked source code provider to set up automated builds.
1. Select a namespace from that source code provider.
2. From that namespace, select a repository to build.
3. Optionally, expand the build settings section to set up build rules and enable or disable Autobuilds.
> **Note**: You do not need to set up automated builds right away, and you can change the build settings at any time after the repository is created. If you choose not to enable automated builds, you can still push images to the repository using the `docker` or `docker-cloud` CLI.
6. Click **Create**.
![Create repository page](images/create-repository.png)
### Repositories for Organizations
Only members of an organization's `Owners` team can create new repositories for
the organization. Members of `Owners` can also change the organization's billing
information, and link the organization to a source code provider to set up
automated builds.
A member of the `Owners` team must also set up the repository's access
permissions so that other teams within the organization can use it. To learn
more, see the [organizations and teams documentation](../orgs.md#set-team-permissions).
## Edit an existing repository in Docker Cloud
You can edit repositories in Docker Cloud to change the description and build configuration.
From the **General** page, you can edit the repository's short description, or click to edit the version of the ReadMe displayed on the repository page.
> **Note**: Edits to the Docker Cloud **ReadMe** are not reflected in the source code linked to a repository.
To run a build, or to set up or change automated build settings, click the **Builds** tab, and click **Configure Automated Builds**. See the documentation on [configuring automated build settings](automated-build.md#configure-automated-build-settings) for more
information.
## Change repository privacy settings
Repositories in Docker Cloud can be either public or private. Public
repositories are visible from the Docker Store's Community Content section, and
can also be searched for from Docker Cloud's **Create Service** wizard. Private
repositories are visible only to the user account that created them (unless they
belong to an organization; see below).
> **Note**: These _privacy_ settings are separate from the [repository _access_ permissions](../orgs.md#change-team-permissions-for-an-individual-repository) available for repositories shared among members of an [organization](../orgs.md).
If a private repository belongs to an [Organization](../orgs.md), members of the
`Owners` team configure access. Only members of the `Owners` team can change an
organization's repository privacy settings.
Each Docker Cloud account comes with one free private repository. Additional
private repositories are available for subscribers on paid plans.
To change a repository's privacy settings:
1. Navigate to the repository in Docker Cloud.
2. Click the **Settings** tab.
3. Click the **Make public** or **Make private** button.
4. In the dialog that appears, enter the name of the repository to confirm the change.
5. Click the button to save the change.
## Delete a repository
When you delete a repository in Docker Cloud, all of the images in that
repository are also deleted.
If automated builds are configured for the repository, the build rules and
settings are deleted along with any Docker Security Scan results. However, this
does not affect the code in the linked source code repository, and does not
remove the source code provider link.
If you are running a service from a deleted repository, the service continues
to run, but cannot be scaled up or redeployed. If any builds use the Docker
`FROM` directive and reference a deleted repository, those builds fail.
To delete a repository:
1. Navigate to the repository, and click the **Settings** tab.
2. Click **Delete**.
3. Enter the name of the repository to confirm deletion, and click **Delete**.
External (third-party) repositories cannot be deleted from within Docker Cloud.
However, you can remove the link to them using the same process as for deleting
a Docker Cloud repository. The link is removed, but the images in the external
repository are not deleted.
> **Note**: If the repository to be deleted or removed belongs to an [Organization](../orgs.md), only members of the `Owners` team can delete it.
## Link to a repository from a third party registry
You can link to repositories hosted on a third party registry. This allows you
to deploy images from the third party registry to nodes in Docker Cloud, and
also allows you to enable automated builds which push built images back to the
registry.
> **Note**: To link to a repository that you want to share with an organization, contact a member of the organization's `Owners` team. Only the Owners team can import new external registry repositories for an organization.
1. Click **Repositories** in the side menu.
2. Click the down arrow menu next to the **Create** button.
3. Select **Import**.
4. Enter the name of the repository that you want to add.
For example, `registry.com/namespace/reponame` where `registry.com` is the
hostname of the registry.
![Import repository popup](images/third-party-images-modal.png)
5. Enter credentials for the registry.
> **Note**: These credentials must have **push** permission to push
built images back to the repository. If you provide **read-only**
credentials, you can run automated tests and deploy from the
repository to your nodes, but you cannot push built images to
it.
6. Click **Import**.
7. Confirm that the repository on the third-party registry now appears in your **Repositories** dropdown list.
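The imported repository keeps the registry hostname as part of its image reference. A minimal sketch of how the pieces combine (the hostname, namespace, and repository names below are hypothetical):

```shell
# A third-party image reference prefixes the registry hostname:
# <registry-host>/<namespace>/<repository>:<tag>
REGISTRY="registry.example.com"   # hypothetical third-party registry
NAMESPACE="namespace"
REPO="reponame"

IMAGE="${REGISTRY}/${NAMESPACE}/${REPO}:latest"
echo "${IMAGE}"   # registry.example.com/namespace/reponame:latest

# To push built images back, you need credentials with push permission:
# docker login "${REGISTRY}"
# docker push "${IMAGE}"
```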
## What's next?
Once you create or link to a repository in Docker Cloud, you can set up [automated testing](automated-testing.md) and [automated builds](automated-build.md).

---
previewflag: cloud-swarm
description: how to register and unregister swarms in Docker Cloud
keywords: swarm mode, swarms, orchestration Cloud, fleet management
title: Connect to a swarm through Docker Cloud
---
Docker Cloud allows you to connect your local Docker Engine to any swarm you
have access to in Docker Cloud. There are a couple of different ways to do this,
depending on how you are running Docker on your local system:
- [Connect to a swarm with a Docker Cloud generated run command](#connect-to-a-swarm-with-a-docker-cloud-generated-run-command)
- [Use Docker for Mac or Docker for Windows (Edge) to connect to swarms](#use-docker-for-mac-or-windows-edge-to-connect-to-swarms)
## Connect to a swarm with a Docker Cloud generated run command
On platforms other than Docker for Mac or Docker for Windows (Edge channel), you
can connect to a swarm manually at the command line by running a proxy container
in your local Docker instance, which connects to a manager node on the target
swarm.
1. Log in to Docker Cloud in your web browser.
2. Click **Swarms** in the top navigation, and click the name of the swarm you want to connect to.
3. Copy the command provided in the dialog that appears.
![Connect to swarm popup](images/swarm-connect.png)
4. In a terminal window connected to your local Docker Engine, paste the command, and press **Enter**.
You are prompted for your Docker ID and password. The local Docker Engine then downloads a containerized Docker Cloud client tool and connects to the swarm.
```
$ docker run --rm -ti -v /var/run/docker.sock:/var/run/docker.sock -e DOCKER_HOST dockercloud/client orangesnap/vote-swarm
Use your Docker ID credentials to authenticate:
Username: orangesnap
Password:
=> You can now start using the swarm orangesnap/vote-swarm by executing:
export DOCKER_HOST=tcp://127.0.0.1:32770
```
5. To complete the connection process, run the `export DOCKER_HOST` command as provided in the output of the previous command. This connects your local shell to the client proxy.
Be sure to include the given client connection port in the URL. For our example, the command is: `export DOCKER_HOST=tcp://127.0.0.1:32770`.
(If this is the first swarm you have connected to, the port is likely to be `32768`, as in `export DOCKER_HOST=tcp://127.0.0.1:32768`.)
6. Now, you can run `docker node ls` to verify that the swarm is running.
Here is an example of `docker node ls` output for a swarm running one manager and two workers on **Amazon Web Services**.
```
$ docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
dhug6p7arwrm3a9j62zh0a0hf ip-172-31-23-167.us-west-1.compute.internal Ready Active
xmbxtffkrzaveqhyuouj0rxso ip-172-31-4-109.us-west-1.compute.internal Ready Active
yha4q9bleg80kvbn9tqgxd69g * ip-172-31-24-61.us-west-1.compute.internal Ready Active Leader
```
Here is an example of `docker node ls` output for a swarm running one manager and two workers on **Microsoft Azure Cloud Services**.
```
$ docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
6uotpiv8vyxsjzdtux13nkvj4 swarm-worker000001 Ready Active
qmvk4swo9rdv1viu9t88dw0t3 swarm-worker000000 Ready Active
w7kgzzdkka0k2svssz1dk1fzw * swarm-manager000000 Ready Active Leader
```
From this point on, you can use the
[CLI commands](/engine/swarm/index.md#swarm-mode-cli-commands)
to manage your cloud-hosted [swarm mode](/engine/swarm/) just as you
would a local swarm.
7. Now that your swarm is set up, try out the example to [deploy a service to the swarm](/engine/swarm/swarm-tutorial/deploy-service/),
and other subsequent tasks in the Swarm getting started tutorial.
### Switch between your swarm and Docker hosts in the same shell
To switch to Docker hosts:
* If you are running Docker for Mac or Docker for Windows, and want to
connect to the Docker Engine for those apps, run `docker-machine env -u`
as a preview, then run the unset command: `eval $(docker-machine env -u)`.
For example:
```
$ docker-machine env -u
unset DOCKER_TLS_VERIFY
unset DOCKER_HOST
unset DOCKER_CERT_PATH
unset DOCKER_MACHINE_NAME
# Run this command to configure your shell:
# eval $(docker-machine env -u)
```
* If you are using Docker Machine, and want to switch to one of your local VMs, be sure to unset `DOCKER_TLS_VERIFY`. Best practice is similar to the previous step. Run `docker-machine env -u` as a preview, then run the unset command: `eval $(docker-machine env -u)`. Follow this with `docker-machine ls` to view your current machines, then connect to the one you want with `docker-machine env my-local-machine` and run the given `eval` command. For example:
```
$ docker-machine env my-local-machine
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="/Users/victoriabialas/.docker/machine/machines/my-local-machine"
export DOCKER_MACHINE_NAME="my-local-machine"
# Run this command to configure your shell:
# eval $(docker-machine env my-local-machine)
```
To switch back to the deployed swarm, re-run the `export DOCKER_HOST` command with the connection port for the swarm you want to work with. (For example, `export DOCKER_HOST=tcp://127.0.0.1:32770`)
To learn more, see [Unset environment variables in the current shell](/machine/get-started/#unset-environment-variables-in-the-current-shell).
## Use Docker for Mac or Windows (Edge) to connect to swarms
On Docker for Mac and Docker for Windows current Edge releases,
you can access your Docker Cloud account and connect directly to your swarms through those Docker desktop application menus.
* See [Docker Cloud (Edge feature) in Docker for Mac topics](/docker-for-mac/#docker-cloud-edge-feature)
* See [Docker Cloud (Edge feature) in Docker for Windows topics](/docker-for-windows/#docker-cloud-edge-feature)
> **Tip**: This is different from using Docker for Mac or Windows with
Docker Machine as described in the previous examples. Here, we bypass
Docker Machine and use the desktop Moby VM directly, so there is no need
to manually set shell environment variables.
This works the same way on both Docker for Mac and Docker for Windows.
Here is an example, showing the Docker for Mac UI.
1. Make sure you are logged in to your Docker Cloud account on the desktop app.
![Docker for Mac Cloud login](images/d4mac-cloud-login.png)
2. Choose the swarm you want from the menu.
![Docker for Mac Cloud login](images/d4mac-swarm-connect.png)
3. A new terminal window opens and connects to the swarm you chose. The swarm name is shown at the prompt. For this example, we connected to `vote-swarm`.
```shell
[vote-swarm] ~
```
4. Now, you can run `docker node ls` to verify that the swarm is running.
```shell
[vote-swarm] ~ $ docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
7ex8inrg8xzgonaunwp35zxfl ip-172-31-6-204.us-west-1.compute.internal Ready Active
ec3kxibdxqhgw5aele7x853er * ip-172-31-0-178.us-west-1.compute.internal Ready Active Leader
z4ngrierv27wdm6oy0z3t9r1z ip-172-31-31-240.us-west-1.compute.internal Ready Active
```
## Reconnect a swarm
If you accidentally unregister a swarm from Docker Cloud, or decide that you
want to re-register the swarm after it has been removed, you can
[re-register it](register-swarms.md#register-a-swarm) using the same
process as a normal registration. If the swarm is registered to
an organization, its access permissions were deleted when it was
unregistered, and must be recreated.
> **Note**: You cannot register a new or different swarm under the name of a
swarm that was unregistered. To re-register a swarm, it must have the same swarm
ID as it did when previously registered.
## Where to go next
Learn how to [create a new swarm in Docker Cloud](create-cloud-swarm.md).

---
previewflag: cloud-swarm
description: Create new swarms on AWS with Docker Cloud
keywords: swarm mode, swarms, create swarm, Cloud, AWS
title: Create a new swarm on Amazon Web Services in Docker Cloud
---
{% include content/cloud-swarm-overview.md %}
## Link your service provider to Docker Cloud
To create a swarm, you need to give Docker Cloud permission to deploy swarm
nodes on your behalf in your cloud services provider account.
If you haven't yet linked Docker Cloud to AWS, follow the steps in [Link Amazon Web Services to Docker Cloud](link-aws-swarm.md). Once it's
linked, it shows up on the **Swarms -> Create** page as a connected service
provider.
![](images/aws-creds-cloud.png)
## Create a swarm
1. If necessary, log in to Docker Cloud and switch to Swarm Mode.
2. Click **Swarms** in the top navigation, then click **Create**.
Alternatively, you can select **+ -> Swarm** from the top navigation to get to the same page.
3. Enter a name for the new swarm.
Your Docker ID is pre-populated. In the example, our swarm name
is "vote-swarm".
![](images/aws-create-swarm-1-name.png)
>**Tip:** For Docker Cloud, use all lowercase letters for swarm names. No spaces, capitalized letters, or special characters other than `.`, `_`, or `-` are allowed. Note that AWS does not accept underscores (`_`) in the name.
4. Select Amazon Web Services as the service provider and select a channel (`Stable` or `Edge`) from the drop-down menu.
You can learn more about **stable** and **edge** channels in the [Install Docker overview](/install/) and the [Docker CE Edge](/edge/) topics.
In this example, we use the `Stable` channel.
![](images/aws-create-swarm-0.png)
5. Select a **Region** from the drop-down menu.
> **Tip:** The SSH keys available to you in the next steps are
filtered by the region you select here. Make sure that you have
appropriate SSH keys available on the region you select.
Optionally, click **Region Advanced Settings** to configure a
[Virtual Private Cloud(VPC)](http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Introduction.html) on which to run this swarm.
![](images/aws-create-swarm-3-region.png)
For guidance on setting up a VPC, see [Recommended VPC and subnet setup](/docker-for-aws/faqs/#can-i-use-my-existing-vpc) in the Docker for AWS topics.
6. Choose how many swarm managers and swarm worker nodes to deploy.
Here, we create one manager and two worker nodes. (This maps nicely to the [Swarm tutorial setup](/engine/swarm/swarm-tutorial/index.md) and the [voting app sample in Docker Labs](https://github.com/docker/labs/blob/master/beginner/chapters/votingapp.md).)
![](images/cloud-create-swarm-4-size.png)
7. Configure swarm properties.
![](images/aws-create-swarm-5-properties.png)
* Select a public SSH key for Docker Cloud to use to connect to the
nodes on AWS. Public keys from the [key pairs you configured on AWS](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html) are provided in the drop-down menu. Only keys associated with the
Region you selected (in step 5) are shown.
* Choose whether to provide daily resource cleanup.
Enabling this option helps to avoid charges for resources that you are no longer using. (See also, topics on [resource cleanup](http://docs.aws.amazon.com/AmazonECS/latest/developerguide/ECS_CleaningUp.html) in the AWS documentation.)
* Enable or disable Cloudwatch for container logging.
When enabled, Docker sends container logs to [Amazon Cloudwatch](https://aws.amazon.com/cloudwatch/), as described in the Docker for AWS topic on [Logging](/docker-for-aws/index.md#logging).
8. Select the instance sizes for the managers, and for the workers.
![](images/aws-create-swarm-6-manager-worker.png)
In general, the larger your swarm, the larger the instance sizes you should use. See the Docker for AWS topics for more on [resource configuration](/docker-for-aws/index.md#configuration).
9. Click **Create**.
Docker for AWS bootstraps all of the recommended infrastructure to
start using Docker on AWS automatically. You don't need to worry
about rolling your own instances, security groups, or load balancers
when using Docker for AWS. (To learn more, see
[Why Docker for AWS](/docker-for-aws/why.md).)
This takes a few minutes. When the swarm is ready, its indicator on the Swarms page shows steady green.
![](images/aws-create-swarm-7-list.png)
> **Note**: At this time, you cannot add nodes to a swarm from
within Docker Cloud. To add new nodes to an existing swarm,
log in to your AWS account, and add nodes manually. (You can
unregister or dissolve swarms directly from Docker Cloud.)
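As an aside, the swarm-name rule from step 3 can be sketched as a quick shell check. This assumes digits are also permitted, which the UI text does not state explicitly:

```shell
# Docker Cloud swarm names: lowercase letters (digits assumed allowed),
# with '.', '_', and '-' as the only permitted special characters.
valid_swarm_name() {
  printf '%s' "$1" | grep -Eq '^[a-z0-9._-]+$'
}

# AWS additionally rejects underscores in the name:
valid_aws_swarm_name() {
  printf '%s' "$1" | grep -Eq '^[a-z0-9.-]+$'
}

valid_aws_swarm_name "vote-swarm" && echo "accepted"    # accepted
valid_aws_swarm_name "vote_swarm" || echo "rejected"    # rejected
```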
## Where to go next
Learn how to [connect to a swarm through Docker Cloud](connect-to-swarm.md).
Learn how to [register existing swarms](register-swarms.md).
You can get an overview of topics on [swarms in Docker Cloud](index.md).
To find out more about Docker swarm in general, see the Docker engine
[Swarm Mode overview](/engine/swarm/).

---
previewflag: cloud-swarm
description: Create new swarms on Azure with Docker Cloud
keywords: swarm mode, swarms, create swarm, Cloud, Azure
title: Create a new swarm on Microsoft Azure in Docker Cloud
---
[![Deploying Swarms on Microsoft Azure with Docker Cloud](images/video-azure-docker-cloud.png)](https://www.youtube.com/watch?v=LlpyiGAVBVg "Deploying Swarms on Microsoft Azure with Docker Cloud"){:target="_blank" class="_"}
{% include content/cloud-swarm-overview.md %}
## Link Docker Cloud to your service provider
To create a swarm, you need to give Docker Cloud permission to deploy swarm
nodes on your behalf in your cloud services provider account.
If you haven't yet linked Docker Cloud to Azure, follow the steps in [Link Microsoft Azure Cloud Services to Docker Cloud](link-azure-swarm.md). Once it's
linked, it shows up on the **Swarms -> Create** page as a connected service
provider.
![](images/azure-creds-cloud.png)
> **Note:** If you are using a Microsoft Azure Visual Studio MSDN
subscription, you need to enable _programmatic deployments_ on the Docker CE
VM Azure Marketplace item. See the Microsoft Azure blog post on [Working with
Marketplace Images on Azure Resource
Manager](https://azure.microsoft.com/en-us/blog/working-with-marketplace-images-on-azure-resource-manager/){: target="_blank" class="_"} for instructions on how to do this.
## Create a swarm
1. If necessary, log in to Docker Cloud and switch to Swarm Mode.
2. Click **Swarms** in the top navigation, then click **Create**.
Alternatively, you can select **+ -> Swarm** from the top navigation to
get to the same page.
3. Enter a name for the new swarm.
Your Docker ID is pre-populated. In the example, our swarm name
is "vote_swarm".
![](images/azure-create-swarm-1-name.png)
>**Tip:** Use all lower case letters for swarm names. No spaces, capitalized letters, or special characters other than `.`, `_`, or `-` are allowed.
4. Select Microsoft Azure as the service provider, select a channel (`Stable` or `Edge`) from the drop-down menu, provide an App name, and select the Azure
Subscription you want to use.
You can learn more about **stable** and **edge** channels in the [Install Docker overview](/install/) and the [Docker CE Edge](/edge/) topics.
In this example, we use the `Stable` channel, our app name is "voting_app" and we've selected a Pay-As-You-Go subscription.
![](images/azure-create-swarm-0.png)
5. Make sure that **Create new resource group** is selected, provide a name for the group, and select a location from the drop-down menu.
Our example resource group is called `swarm_vote_resources`, and it is located in West US.
![](images/azure-create-swarm-3-resource-group.png)
>**Tip:** Be sure to create a new resource group for a swarm. If you choose to use an existing group, the swarm fails as Azure does not currently support this.
6. Choose how many swarm managers and worker nodes to deploy.
Here, we create one manager and two worker nodes. (This maps nicely to the [Swarm tutorial setup](/engine/swarm/swarm-tutorial/index.md) and the [voting app sample in Docker Labs](https://github.com/docker/labs/blob/master/beginner/chapters/votingapp.md).)
![](images/cloud-create-swarm-4-size.png)
7. Configure swarm properties: the SSH key and resource cleanup.
Copy-paste the public [SSH key](ssh-key-setup.md) you want to use to connect to the nodes. (Provide the one for which you have the private key locally.)
![](images/azure-create-swarm-5-properties.png)
* To list existing SSH keys: `ls -al ~/.ssh`
* To copy the public SSH key to your clipboard: `pbcopy < ~/.ssh/id_rsa.pub`
Choose whether to provide daily resource cleanup. (Enabling this
option helps avoid charges for resources that you are no longer
using.)
8. Select the machine sizes for the managers, and for the workers.
![](images/azure-create-swarm-6-manager-worker.png)
The larger your swarm, the larger the machine size you should use.
To learn more about resource setup, see [configuration options](/docker-for-azure/index.md#configuration) in the Docker
for Azure topics.
You can find Microsoft Azure Linux Virtual Machine pricing and options [here](https://azure.microsoft.com/en-us/pricing/details/virtual-machines/linux/).
9. Click **Create**.
Docker for Azure bootstraps all of the recommended infrastructure to start
using Docker on Azure automatically. You don't need to worry about rolling
your own instances, security groups, or load balancers when using Docker for
Azure. (To learn more, see [Why Docker for Azure](/docker-for-azure/why.md).)
This takes a few minutes. When the swarm is ready, its indicator on the Swarms page shows steady green.
![](images/azure-create-swarm-7-list.png)
> **Note**: At this time, you cannot add nodes to a swarm from
within Docker Cloud. To add new nodes to an existing swarm,
log in to your Azure account, and add nodes manually. (You can
unregister or dissolve swarms directly from Docker Cloud.)
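If you do not yet have a key pair to paste into the swarm properties step, you can generate one with OpenSSH. This is a sketch; the `KEYFILE` path is hypothetical, and `~/.ssh/id_rsa` is the conventional default:

```shell
# Generate an RSA key pair to paste into the swarm properties form.
# KEYFILE is a hypothetical path; adjust it if you keep multiple keys.
KEYFILE="$(mktemp -d)/id_rsa"
ssh-keygen -t rsa -b 4096 -f "$KEYFILE" -N "" -q

# The *public* half is what Docker Cloud needs:
cat "${KEYFILE}.pub"
```

Keep the private half (`$KEYFILE`) local; only the `.pub` contents go into the form.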
## Where to go next
Learn how to [connect to a swarm through Docker Cloud](connect-to-swarm.md).
Learn how to [register existing swarms](register-swarms.md).
You can get an overview of topics on [swarms in Docker Cloud](index.md).
To find out more about Docker swarm in general, see the Docker engine
[Swarm Mode overview](/engine/swarm/).
