* Manage and Monitor Users
* LDAP Settings material * Closes #651 * Combining work into single branch * Updating index * Fixing code examples * Adding in note from Johnny's feedabck * Menu positions, HA terms * Copy edit of users * Adding deploy an application * Updating the overview page to include more text * Updating with comments from review page * Updating the constraints and port * Layout check/fix Signed-off-by: Mary Anthony <mary@docker.com>
|
@ -1,6 +1,6 @@
|
|||
<!--[metadata]>
|
||||
+++
|
||||
title = "Deploy an application thru UCP"
|
||||
title = "Deploy an application"
|
||||
description = "Deploy an application thru UCP"
|
||||
keywords = ["tbd, tbd"]
|
||||
[menu.main]
|
||||
|
@ -10,37 +10,397 @@ weight=-80
|
|||
<![end-metadata]-->
|
||||
|
||||
|
||||
# Deploy an application thru UCP
|
||||
# Deploy an application onto UCP
|
||||
|
||||
Intro 1-2 paras, page purpose, intended user, list steps if page is tutorial.
|
||||
In this quickstart, you learn how to deploy multi-container applications onto UCP.
|
||||
While UCP is intended for deploying multi-container applications, the workflow
|
||||
for developing them begins outside of the UCP installation. This page explains
|
||||
the recommended workflow for developing applications. Then, it shows you
|
||||
step-by-step how to deploy the fully developed application.
|
||||
|
||||
The sample is written for a novice network administrator. You should have
|
||||
basic skills on Linux systems and `ssh` experience. Some knowledge of Git is
|
||||
also useful but not strictly required.
|
||||
|
||||
>**Note**: The command examples in this page were tested for a Mac OSX environment.
|
||||
If you are working in another environment, you may need to adjust the commands to use
|
||||
analogous commands for your environment.
|
||||
|
||||
## Understand the development workflow
|
||||
Diagram of expected workflow
|
||||
|
||||
## Step 1: Review the Compose file node requirements
|
||||
* images
|
||||
* nodes
|
||||
* volumes
|
||||
* networks
|
||||
UCP is at the end of the application development workflow. You should only
|
||||
deploy, or allow to be deployed, individual containers or multi-container
|
||||
applications that have been systematically developed and tested.
|
||||
|
||||
## Step 2: Add nodes for supporting application
|
||||
Your development team should develop in a local environment using the Docker
|
||||
open source software (OSS) components. These components include:
|
||||
|
||||
tbd
|
||||
* Docker Engine
|
||||
* Docker Machine (if development is on Mac or Windows)
|
||||
* Docker Swarm
|
||||
* Docker Compose
|
||||
* Docker Hub (for publicly available images)
|
||||
|
||||
## Step 3: Import Compose file into UCP
|
||||
Developing an application can include using public images from Docker Hub and
|
||||
developing new custom images. If there are multiple containers involved, the
|
||||
team should configure and test container port configurations. For applications
|
||||
that require them, the team may need to create Docker container volumes and
|
||||
ensure they are of sufficient size.
|
||||
|
||||
tbd
|
||||
Once the team has developed a microservice application, they should test it
|
||||
locally at scale on a Swarm cluster. The Swarm documentation includes detailed
|
||||
documentation about [troubleshooting a microservice
|
||||
application](https://docs.docker.com/swarm/scale/05-troubleshoot/).
|
||||
|
||||
## Step 4: Scaling an application
|
||||
* existing load balancer
|
||||
* adding a load balancer
|
||||
The output of application development should be a Docker Compose file and a set
|
||||
of images ready for deployment. These images can be stored in Docker Hub. If
|
||||
your company is using Docker Trusted Registry, the team may want to or be
|
||||
required to store their application images in the company registry. The team
|
||||
must ensure the images are stored in an accessible registry account.
|
||||
|
||||
## Step 4: Verify the deployment
|
||||
* Dashboard
|
||||
* Images
|
||||
|
||||
## Step 5: Review UCP logs
|
||||
## Step 1. Before you begin
|
||||
|
||||
tbd
|
||||
This example requires that you have an installed UCP deployment and that you have
|
||||
<a href="/ucp/networking/" target="_blank">enabled container networking</a>
|
||||
on that installation. Take a moment to check this requirement.
|
||||
|
||||
## Troubleshooting section
|
||||
When deploying an application to UCP, you work from a local environment using
|
||||
the UCP client bundle for your UCP user. You should never deploy from the
|
||||
command-line while directly logged into a UCP node. The deployment on this page
|
||||
requires that your local environment includes the following software:
|
||||
|
||||
* <a href="https://git-scm.com/" target="_blank">Git</a>
|
||||
* Docker Engine
|
||||
* Docker Compose
|
||||
|
||||
While not always the case, the expectation is your local environment is a
|
||||
Windows or Mac machine. If your personal machine is a Linux machine that Docker
|
||||
Engine supports, such a configuration works too.
|
||||
|
||||
### Windows or Mac prerequisites
|
||||
|
||||
Because Docker Engine and UCP both rely on Linux-specific features, they can't
|
||||
run natively on Mac or Windows. Instead, you must install the Docker Toolbox
|
||||
application. Docker Toolbox installs:
|
||||
|
||||
* Docker Machine for running `docker-machine` commands
|
||||
* Docker Engine for running the `docker` commands
|
||||
* Docker Compose for running the `docker-compose` commands
|
||||
* Kitematic, the Docker GUI
|
||||
* a Quickstart shell preconfigured for an Engine command-line environment
|
||||
* Oracle VirtualBox
|
||||
|
||||
These tools enable you to run Engine CLI commands from your Mac OS X or Windows
|
||||
shell.
|
||||
|
||||
Your Mac must be running OS X 10.8 "Mountain Lion" or higher to install Toolbox.
|
||||
To check your Mac OS X version, see <a
|
||||
href="https://docs.docker.com/mac/step_one/" target="_blank">the Docker Engine
|
||||
getting started on Mac</a>.
|
||||
|
||||
On Windows, your machine must have a 64-bit operating system running Windows 7 or
|
||||
higher. Additionally, you must make sure that virtualization is enabled on your
|
||||
machine. For information on how to check for virtualization, see <a
|
||||
href="https://docs.docker.com/windows/step_one/" target="_blank">the Docker
|
||||
Engine getting started on Windows</a>.
|
||||
|
||||
If you haven't already done so, make sure you have installed Docker Toolbox on your
|
||||
local <a href="https://docs.docker.com/engine/installation/mac/"
|
||||
target="_blank">Mac OS X</a> or <a
|
||||
href="https://docs.docker.com/engine/installation/windows/"
|
||||
target="_blank">Windows machine</a>. After a successful installation, continue
|
||||
to the next step.
|
||||
|
||||
### About a Linux environment
|
||||
|
||||
If your local environment is Linux, make sure you have installed the <a
|
||||
href="https://docs.docker.com/engine/installation" target="_blank">correct
|
||||
Docker Engine for your Linux OS</a>. Also, make sure you have installed <a
|
||||
href="http://docs-stage.docker.com/compose/install/" target="_blank">Docker
|
||||
Compose</a>.
|
||||
|
||||
## Step 2. Get the client bundle and configure a shell
|
||||
|
||||
In this step, you download the *client bundle*. To issue commands to a UCP node,
|
||||
your local shell environment must be configured with the same security
|
||||
certificates as the UCP application itself. The client bundle contains the
|
||||
certificates and a script to configure a shell environment.
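For reference, the `env.sh` script in the bundle exports standard Docker client
variables. The values below are only an illustrative sketch; the real settings
come from your own bundle and controller:

```
# Illustrative sketch only -- use the env.sh shipped in your bundle.
# It points the Docker client at the UCP controller and enables TLS.
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=$(pwd)                    # directory holding ca.pem, cert.pem, key.pem
export DOCKER_HOST=tcp://<ucp-controller-ip>:443  # your controller's address
```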
|
||||
|
||||
Download the bundle and configure your environment.
|
||||
|
||||
1. If you haven't already done so, log into UCP.
|
||||
|
||||
2. Choose **admin > Profile** from the right-hand menu.
|
||||
|
||||
Any user can download their certificates. So, if you were logged in under a
|
||||
user name such as `davey`, the path to download the bundle is **davey >
|
||||
Profile**. Since you are logged in as `admin`, the path is `admin`.
|
||||
|
||||
3. Click **Create Client Bundle**.
|
||||
|
||||
The browser downloads the `ucp-bundle-admin.zip` file.
|
||||
|
||||
4. Open a shell on your local terminal.
|
||||
|
||||
5. If you are on Mac or Windows, ensure your shell does not have an active Docker Machine VM.
|
||||
|
||||
$ docker-machine ls
|
||||
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
|
||||
moxie - virtualbox Stopped Unknown
|
||||
test - virtualbox Running tcp://192.168.99.100:2376 v1.10.1
|
||||
|
||||
While Machine has a stopped and running host, neither is active in the
|
||||
shell. You know this because neither host shows an * (asterisk) indicating
|
||||
the shell is configured.
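If a host did show an asterisk, you could clear its variables from your shell
before configuring it for UCP; a minimal sketch using Machine's unset option:

$ eval $(docker-machine env -u)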
|
||||
|
||||
6. Create a directory to hold the deploy information.
|
||||
|
||||
$ mkdir deploy-app
|
||||
|
||||
7. Create a directory inside `deploy-app` to hold your UCP bundle files.
|
||||
|
||||
$ mkdir deploy-app/bundle
|
||||
|
||||
8. Change into the `deploy-app/bundle` directory and move the downloaded bundle into it.
|
||||
|
||||
$ cd deploy-app/bundle
|
||||
$ mv ~/Downloads/ucp-bundle-admin.zip .
|
||||
|
||||
9. Unzip the client bundle.
|
||||
|
||||
$ unzip ucp-bundle-admin.zip
|
||||
Archive: ucp-bundle-admin.zip
|
||||
extracting: ca.pem
|
||||
extracting: cert.pem
|
||||
extracting: key.pem
|
||||
extracting: cert.pub
|
||||
extracting: env.sh
|
||||
|
||||
10. Change into the directory that was created when the bundle was unzipped
|
||||
|
||||
11. Execute the `env.sh` script to set the appropriate environment variables for your UCP deployment
|
||||
|
||||
$ source env.sh
|
||||
|
||||
12. Verify that you are connected to UCP by using the `docker info` command.
|
||||
|
||||
$ docker info
|
||||
Containers: 11
|
||||
Running: 11
|
||||
Paused: 0
|
||||
Stopped: 0
|
||||
Images: 22
|
||||
... <output snipped>
|
||||
Plugins:
|
||||
Volume:
|
||||
Network:
|
||||
Kernel Version: 4.2.0-23-generic
|
||||
Operating System: linux
|
||||
Architecture: amd64
|
||||
CPUs: 3
|
||||
Total Memory: 11.58 GiB
|
||||
Name: ucp-controller-ucpdemo-0
|
||||
ID: DYZQ:I5RM:VM6K:MUFZ:JXCU:H45Y:SFU4:CBPS:OMXC:LQ3S:L2HQ:VEWW
|
||||
Labels:
|
||||
com.docker.ucp.license_key=QMb9Ux2PKj-IshswTScxsd19n-c8LwtP-pQiDWy2nVtg
|
||||
com.docker.ucp.license_max_engines=10
|
||||
com.docker.ucp.license_expires=2016-05-03 19:52:02 +0000 UTC
|
||||
|
||||
|
||||
## Step 3: Learn about the application
|
||||
|
||||
The application you'll be deploying is a voting application. The voting
|
||||
application is a dockerized microservice application. It uses a parallel web
|
||||
frontend that sends jobs to asynchronous background workers. The application's
|
||||
design can accommodate arbitrarily large scale. The diagram below shows the high
|
||||
level architecture of the application.
|
||||
|
||||

|
||||
|
||||
The application is fully dockerized with all services running inside of
|
||||
containers.
|
||||
|
||||
The frontend consists of an Interlock load balancer with *n* frontend web
|
||||
servers and associated queues. The load balancer can handle an arbitrary number
|
||||
of web containers behind it (`frontend01`- `frontendN`). The web containers run
|
||||
a simple Python Flask application. Each web container accepts votes and queues
|
||||
them to a Redis container on the same node. Each web container and Redis queue
|
||||
pair operates independently.
|
||||
|
||||
The load balancer together with the independent pairs allows the entire
|
||||
application to scale to an arbitrary size as needed to meet demand.
|
||||
|
||||
Behind the frontend is a worker tier which runs on separate nodes. This tier:
|
||||
|
||||
* scans the Redis containers
|
||||
* dequeues votes
|
||||
* deduplicates votes to prevent double voting
|
||||
* commits the results to a Postgres container running on a separate node
|
||||
|
||||
Just like the front end, the worker tier can also scale arbitrarily.
|
||||
|
||||
When deploying in UCP, you won't need this exact architecture. For example, you
|
||||
won't need the Interlock load balancer. Part of the work of a UCP administrator
|
||||
may be to polish the application the team created, leaving only what's needed for UCP.
|
||||
|
||||
For example, the team fully <a
|
||||
href="https://github.com/docker/swarm-microservice-demo-v1" target="_blank">
|
||||
developed and tested through a local environment using the open source Docker
|
||||
ecosystem</a>. The Docker Compose file they created looks like this:
|
||||
|
||||
```
|
||||
#
|
||||
# Compose file to run the voting app and dependent services
|
||||
#
|
||||
version: '2'
|
||||
services:
|
||||
web-vote-app:
|
||||
build: web-vote-app
|
||||
environment:
|
||||
WEB_VOTE_NUMBER: "01"
|
||||
constraint:node: "=frontend01"
|
||||
vote-worker:
|
||||
build: vote-worker
|
||||
environment:
|
||||
FROM_REDIS_HOST: 1
|
||||
TO_REDIS_HOST: 1
|
||||
results-app:
|
||||
build: results-app
|
||||
redis01:
|
||||
image: redis:3
|
||||
store:
|
||||
image: postgres:9.5
|
||||
environment:
|
||||
- POSTGRES_USER=postgres
|
||||
- POSTGRES_PASSWORD=pg8675309
|
||||
```
|
||||
|
||||
This `docker-compose.yml` file includes a `build` instruction. You should never
|
||||
`build` an image against the UCP controller or its nodes. You can find out if
|
||||
the team built and stored the images described in the file, or you can build the
|
||||
images yourself and push them to a registry. After a little work you could come
|
||||
up with a `docker-compose.yml` that looks like this:
|
||||
|
||||
```
|
||||
version: "2"
|
||||
|
||||
services:
|
||||
voting-app:
|
||||
image: docker/example-voting-app-voting-app
|
||||
ports:
|
||||
- "80"
|
||||
networks:
|
||||
- voteapp
|
||||
result-app:
|
||||
image: docker/example-voting-app-result-app
|
||||
ports:
|
||||
- "80"
|
||||
networks:
|
||||
- voteapp
|
||||
worker:
|
||||
image: docker/example-voting-app-worker
|
||||
networks:
|
||||
- voteapp
|
||||
redis:
|
||||
image: redis
|
||||
ports:
|
||||
- "6379"
|
||||
networks:
|
||||
- voteapp
|
||||
container_name: redis
|
||||
db:
|
||||
image: postgres:9.4
|
||||
volumes:
|
||||
- "db-data:/var/lib/postgresql/data"
|
||||
networks:
|
||||
- voteapp
|
||||
container_name: db
|
||||
volumes:
|
||||
db-data:
|
||||
|
||||
networks:
|
||||
voteapp:
|
||||
```
|
||||
|
||||
This revised compose file uses a set of images stored in Docker Hub. They happen
|
||||
to be in Docker repositories because the sample application was built by a
|
||||
Docker team. Compose allows you to designate a network and it defaults to
|
||||
creating an `overlay` network. So, you can specify which networks in UCP to run
|
||||
on. In this case, you won't manually create the networks; you'll let Compose create
|
||||
the network for you.
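If the team did not already push these images, you could build and push them
from your local environment before deploying; never run `docker build` against
the UCP controller or its nodes. A rough sketch with placeholder names:

```
# Build from the application source on your local machine, then push to a
# registry the UCP nodes can reach (Docker Hub or your Trusted Registry).
# The account, repository, and path names here are placeholders.
docker build -t myaccount/example-voting-app-voting-app ./voting-app
docker push myaccount/example-voting-app-voting-app
```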
|
||||
|
||||
## Step 4. Deploy the application
|
||||
|
||||
In this step, you deploy the application in UCP.
|
||||
|
||||
1. Bring up the shell you configured in the [Step
|
||||
2](#step-2-get-the-client-bundle-and-configure-a-shell).
|
||||
|
||||
2. Clone the sample compose file onto your local machine.
|
||||
|
||||
$ git clone https://github.com/nicolaka/voteapp-base.git
|
||||
|
||||
The clone command creates a `voteapp-base` directory containing the Compose
|
||||
file.
|
||||
|
||||
3. Change into the `voteapp-base` directory.
|
||||
|
||||
$ cd voteapp-base
|
||||
|
||||
4. Deploy the application.
|
||||
|
||||
$ docker-compose up
|
||||
Creating network "voteappbase_voteapp" with the default driver
|
||||
Pulling db (postgres:9.4)...
|
||||
ucpdemo-0: Pulling postgres:9.4... : downloaded
|
||||
ucpdemo-2: Pulling postgres:9.4... : downloaded
|
||||
ucpdemo-1: Pulling postgres:9.4... : downloaded
|
||||
Creating db
|
||||
Pulling redis (redis:latest)...
|
||||
ucpdemo-0: Pulling redis:latest... : downloaded
|
||||
ucpdemo-2: Pulling redis:latest... : downloaded
|
||||
ucpdemo-1: Pulling redis:latest... : downloaded
|
||||
Creating redis
|
||||
Pulling worker (docker/example-voting-app-worker:latest)...
|
||||
|
||||
Compose creates the `voteappbase_voteapp` network and deploys the application.
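If you later want more than one web front end, Compose can scale a service
from this same directory (in a shell configured with your UCP bundle). A
sketch using the service name from the compose file above:

$ docker-compose scale voting-app=3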
|
||||
|
||||
5. In UCP, go to the **Applications** page.
|
||||
|
||||
Your new application should appear in the list.
|
||||
|
||||
6. Expand the app to see which nodes the application containers are running on.
|
||||
|
||||

|
||||
|
||||
## Step 5. Test the application
|
||||
|
||||
Now that the application is deployed and running, it's time to test it. To do
|
||||
this, you configure a DNS mapping on your local machine for the node that runs
|
||||
the `votingapp_web-vote-app_1` container. This maps the "votingapp.local"
|
||||
DNS name to the public IP address of the `votingapp_web-vote-app_1` node.
|
||||
|
||||
1. Configure the DNS name resolution on your local machine for browsing.
|
||||
|
||||
- On Windows machines this is done by adding `<votingapp_web-vote-app_1-public-ip> votingapp.local` to the `C:\Windows\System32\Drivers\etc\hosts` file. Modifying this file requires administrator privileges. To open the file with administrator privileges, right-click `C:\Windows\System32\notepad.exe` and select `Run as administrator`. Once Notepad is open, click `File` > `Open`, open the file, and make the edit.
|
||||
|
||||
- On OSX machines this is done by adding `<votingapp_web-vote-app_1-public-ip> votingapp.local` to `/private/etc/hosts`.
|
||||
|
||||
- On most Linux machines this is done by adding `<votingapp_web-vote-app_1-public-ip> votingapp.local` to `/etc/hosts`.
|
||||
|
||||
Be sure to replace `<votingapp_web-vote-app_1-public-ip>` with the public IP address of
|
||||
your `votingapp_web-vote-app_1` node. You can find the `votingapp_web-vote-app_1` node's Public IP by
|
||||
selecting the node from within the UCP dashboard.
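On OS X or Linux, for example, you could append the entry from a terminal.
This is only a sketch; substitute your node's public IP before running it:

$ echo "<votingapp_web-vote-app_1-public-ip> votingapp.local" | sudo tee -a /etc/hosts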
|
||||
|
||||
2. Verify the mapping worked with a `ping` command from your local machine.
|
||||
|
||||
ping votingapp.local
|
||||
Pinging votingapp.local [54.183.164.230] with 32 bytes of data:
|
||||
Reply from 54.183.164.230: bytes=32 time=164ms TTL=42
|
||||
Reply from 54.183.164.230: bytes=32 time=163ms TTL=42
|
||||
Reply from 54.183.164.230: bytes=32 time=169ms TTL=42
|
||||
|
||||
3. Point your web browser to [http://votingapp.local](http://votingapp.local)
|
||||
|
||||

|
||||
|
|
|
@ -0,0 +1,189 @@
|
|||
<!--[metadata]>
|
||||
+++
|
||||
title = "Integrate with Trusted Registry"
|
||||
description = "Integrate UCP with Docker Trusted Registry"
|
||||
keywords = ["trusted, registry, integrate, UCP, DTR"]
|
||||
[menu.main]
|
||||
parent="mn_ucp"
|
||||
+++
|
||||
<![end-metadata]-->
|
||||
|
||||
|
||||
# Integrate UCP with Docker Trusted Registry
|
||||
|
||||
This page explains how to integrate Universal Control Plane (UCP) with the
|
||||
Docker Trusted Registry (DTR). Trusted Registry is an image storage and
|
||||
management service that you can install within your company's private
|
||||
infrastructure.
|
||||
|
||||
## Prerequisites
|
||||
|
||||
You must have already installed DTR on your infrastructure before performing
|
||||
this procedure. The DTR server and the UCP controller must be able to
|
||||
communicate over your network infrastructure.
|
||||
|
||||
The Universal Control Plane and Trusted Registry are both part of the Docker
|
||||
Datacenter solution. This means the license you use for UCP works with DTR or,
|
||||
if you have a DTR license, it also works with UCP.
|
||||
|
||||
## Step 1: (Optional) Prepare a cert script
|
||||
|
||||
If you are using a self-signed or third-party CA, you need to prepare a
|
||||
`cert_create.sh` script. You'll use this script to install the self-signed cert
|
||||
on the nodes in your UCP cluster.
|
||||
|
||||
1. Create a file called `cert_create.sh` with your favorite editor.
|
||||
|
||||
2. Add the following content to the file.
|
||||
|
||||
DTR_HOST="<my-dtr-host-dns-name"
|
||||
mkdir -p ~/.docker/tls/${DTR_HOST}
|
||||
openssl s_client -host ${DTR_HOST} -port 443 </dev/null 2>/dev/null | openssl x509 -outform PEM | tee ~/.docker/tls/${DTR_HOST}/ca.crt
|
||||
|
||||
3. Replace the `<my-dtr-host-dns-name>` value with the fully qualified DNS
|
||||
value for your DTR instance.
|
||||
|
||||
4. Save and close the `cert_create.sh` file.
|
||||
|
||||
5. Set execute permission on the file.
|
||||
|
||||
$ chmod 755 cert_create.sh
|
||||
|
||||
## Step 2: Configure DTR and UCP
|
||||
|
||||
In this step, you configure DTR and UCP to communicate. To do this you need an admin level certificate bundle for UCP or terminal access to the UCP controller.
|
||||
|
||||
1. Log into or connect to the UCP primary controller.
|
||||
|
||||
2. Generate the UCP certificates using the `ucp dump-certs` command.
|
||||
|
||||
This command generates the certificates for the Swarm cluster.
|
||||
|
||||
$ docker run --rm -it --name ucp -v /var/run/docker.sock:/var/run/docker.sock docker/ucp dump-certs --cluster -ca > /tmp/cluster-root-chain.pem
|
||||
|
||||
3. Cat or edit the `cluster-root-chain.pem` file.
|
||||
|
||||
4. Copy the certificate.
|
||||
|
||||
This example illustrates what you should copy; your installation certificate
|
||||
will be different.
|
||||
|
||||
-----BEGIN CERTIFICATE-----
|
||||
MIIFGDCCAwCgAwIBAgIIIQjwMnZnj2gwDQYJKoZIhvcNAQENBQAwGDEWMBQGA1UE
|
||||
AxMNU3dhcm0gUm9vdCBDQTAeFw0xNjAyMTAxNzQzMDBaFw0yMTAyMDgxNzQzMDBa
|
||||
MBgxFjAUBgNVBAMTDVN3YXJtIFJvb3QgQ0EwggIiMA0GCSqGSIb3DQEBAQUAA4IC
|
||||
DwAwggIKAoICAQC5UtvO/xju7INdZkXA9TG7T6JYo1CIf5yZz9LZBDrexSAx7uPi
|
||||
7b5YmWGUA26VgBDvAFuLuQNRy/OlITNoFIEG0yovw6waLcqr597ox9d9jeaJ4ths
|
||||
...<output snip>...
|
||||
2wDuqlzByRVTO0NL4BX0QV1J6LFtrlWU92WxTcOV8T7Zc4mzQNMHfiIZcHH/p3+7
|
||||
cRA7HVdljltI8UETcrEvTKb/h1BiPlhzpIfIHwMdA2UScGgJlaH7wA0LpeJGWtUc
|
||||
AKrb2kTIXNQq7phH
|
||||
-----END CERTIFICATE-----
|
||||
|
||||
5. Log in to the Trusted Registry dashboard as a user.
|
||||
|
||||
6. Choose **Settings > General** page.
|
||||
|
||||
7. Locate the **Auth Bypass TLS Root CA** field.
|
||||
|
||||
8. Paste the certificate you copied into the field.
|
||||
|
||||
9. (Optional) If you are using a self-signed or third-party CA, do the following
|
||||
on each node in your UCP cluster:
|
||||
|
||||
a. Log into a UCP node using an account with `sudo` privileges.
|
||||
|
||||
b. Copy the `cert_create.sh` script to the node.
|
||||
|
||||
c. Run the `cert_create.sh` on the node.
|
||||
|
||||
$ sudo ./cert_create.sh
|
||||
|
||||
d. Verify the cert was created.
|
||||
|
||||
$ sudo cat ~/.docker/tls/${DTR_HOST}/ca.crt
|
||||
|
||||
|
||||
## Step 3: Confirm the integration
|
||||
|
||||
The best way to confirm the integration is to push and pull an image from a UCP node.
|
||||
|
||||
1. Open a terminal session on a UCP node.
|
||||
|
||||
2. Pull Docker's `hello-world` image.
|
||||
|
||||
$ docker pull hello-world
|
||||
Using default tag: latest
|
||||
latest: Pulling from library/hello-world
|
||||
03f4658f8b78: Pull complete
|
||||
a3ed95caeb02: Pull complete
|
||||
Digest: sha256:8be990ef2aeb16dbcb9271ddfe2610fa6658d13f6dfb8bc72074cc1ca36966a7
|
||||
Status: Downloaded newer image for hello-world:latest
|
||||
|
||||
3. Get the `IMAGE ID` of the `hello-world` image.
|
||||
|
||||
$ docker images
|
||||
REPOSITORY TAG IMAGE ID CREATED SIZE
|
||||
hello-world latest 690ed74de00f 4 months ago 960 B
|
||||
|
||||
4. Retag the `hello-world` image with a new tag.
|
||||
|
||||
The syntax for tagging an image is:
|
||||
|
||||
docker tag <ID> <username>/<image-name>:<tag>
|
||||
|
||||
Make sure to replace `<username>` with your actual username and `<ID>`
|
||||
with the ID of the `hello-world` image you pulled.
|
||||
|
||||
$ docker tag 690ed74de00f username/hello-world:test
|
||||
|
||||
5. Log in to the DTR instance from the command line.
|
||||
|
||||
The example below uses `mydtr.company.com` as the URL for the DTR instance.
|
||||
Yours will be different.
|
||||
|
||||
$ docker login mydtr.company.com
|
||||
|
||||
Provide your username and password when prompted.
|
||||
|
||||
6. Push your newly tagged image to the DTR instance.
|
||||
|
||||
The following is an example only, substitute your DTR URL and username when
|
||||
you run this command.
|
||||
|
||||
$ docker push mydtr.company.com/username/hello-world:test
|
||||
|
||||
|
||||
## Troubleshooting section
|
||||
|
||||
This section details common problems you can encounter when working with the DTR /
|
||||
UCP integration.
|
||||
|
||||
### Unknown authority error on push
|
||||
|
||||
Example:
|
||||
|
||||
```
|
||||
% docker push mydtr.acme.com/jdoe/myrepo:latest
|
||||
The push refers to a repository [mydtr.acme.com/jdoe/myrepo]
|
||||
unable to ping registry endpoint https://mydtr.acme.com/v0/
|
||||
v2 ping attempt failed with error: Get https://mydtr.acme.com/v2/: x509: certificate signed by unknown authority
|
||||
v1 ping attempt failed with error: Get https://mydtr.acme.com/v1/_ping: x509: certificate signed by unknown authority
|
||||
```
|
||||
|
||||
Review the trust settings in DTR and make sure they are correct. Try repasting
|
||||
the first PEM block from the `chain.pem` file.
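For example, to print just the first PEM block so you can repaste it, a sketch
assuming the chain file generated earlier in this page (adjust the path to
wherever you saved yours):

```
awk '/BEGIN CERTIFICATE/{found=1} found{print} /END CERTIFICATE/{exit}' /tmp/cluster-root-chain.pem
```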
|
||||
|
||||
### Authentication required
|
||||
|
||||
Example:
|
||||
|
||||
```
|
||||
% docker push mydtr.acme.com/jdoe/myrepo:latest
|
||||
The push refers to a repository [mydtr.acme.com/jdoe/myrepo]
|
||||
5f70bf18a086: Preparing
|
||||
2c84284818d1: Preparing
|
||||
unauthorized: authentication required
|
||||
```
|
||||
|
||||
You must log in before you can push to DTR.
|
|
@ -10,7 +10,7 @@ weight=-100
|
|||
<![end-metadata]-->
|
||||
|
||||
|
||||
# Evaluation installation
|
||||
# Evaluation installation and quickstart
|
||||
|
||||
This page helps you to learn about Docker Universal Control Plane (UCP) at a
|
||||
high-level through installing and running UCP in your local, sandbox
|
||||
|
@ -48,17 +48,21 @@ a Docker Swarm cluster. The UCP installation process by default secures the clus
|
|||

|
||||
|
||||
This example is intended as an introduction for non-technical users wanting to
|
||||
explore UCP for themselves. If you are a high technical user intending to act as
|
||||
explore UCP for themselves. If you are a highly technical user intending to act as
|
||||
a UCP administrator, you may prefer to skip this and go straight to
|
||||
[Plan a production installation](plan-production-install.md).
|
||||
|
||||
>**Note**: The command examples in this page were tested for a Mac OSX environment.
|
||||
If you are in another, you may need to adjust the commands to use analogous
|
||||
commands for your environment.
|
||||
|
||||
## Step 2. Verify the prerequisites
|
||||
|
||||
Because Docker Engine and UCP both rely on Linux-specific features, they can't
|
||||
run natively on Mac or Windows. Instead, you must install the Docker Toolbox
|
||||
application. The application installs a VirtualBox Virtual Machine (VM), the
|
||||
Docker Engine itself, and the Docker Toolbox management tool. These tools enable you to run Engine CLI commands from your Mac OS X or Windows shell.
|
||||
Docker Engine itself, and the Docker Toolbox management tool. These tools enable
|
||||
you to run Engine CLI commands from your Mac OS X or Windows shell.
|
||||
|
||||
Your Mac must be running OS X 10.8 "Mountain Lion" or higher to perform this
|
||||
procedure. To check your Mac OS X version, see <a href="https://docs.docker.com/mac/step_one/" target="_blank">the Docker Engine getting started on Mac</a>.
|
||||
|
@ -71,7 +75,7 @@ If you haven't already done so, make you have installed Docker Toolbox on your l
|
|||
|
||||
## Step 3. Provision hosts with Engine
|
||||
|
||||
In this step, you provision to VMs for your UCP installation. This step is
|
||||
In this step, you provision two VMs for your UCP installation. This step is
|
||||
purely to enable your evaluation. You would never run UCP in production on local
|
||||
VMs with the open source Engine.
|
||||
|
||||
|
@ -96,7 +100,9 @@ Set up the nodes for your evaluation:
|
|||
3.00 GB disk space. When you create your virtual host, you supply options to
|
||||
size it appropriately.
|
||||
|
||||
$ docker-machine create -d virtualbox --virtualbox-memory "3000" --virtualbox-disk-size "6000" node1
|
||||
$ docker-machine create -d virtualbox \
|
||||
--virtualbox-memory "2000" \
|
||||
--virtualbox-disk-size "5000" node1
|
||||
Running pre-create checks...
|
||||
Creating machine...
|
||||
(node1) Copying /Users/mary/.docker/machine/cache/boot2docker.iso to /Users/mary/.docker/machine/machines/node1/boot2docker.iso...
|
||||
|
@ -118,7 +124,8 @@ Set up the nodes for your evaluation:
|
|||
|
||||
4. Create a VM named `node2`.
|
||||
|
||||
$ docker-machine create -d virtualbox --virtualbox-memory "5000" node2
|
||||
$ docker-machine create -d virtualbox \
|
||||
--virtualbox-memory "2000" node2
|
||||
|
||||
5. Use the Machine `ls` command to list your hosts.
|
||||
|
||||
|
@ -174,10 +181,10 @@ UCP's high availability feature. High availability allows you to designate
|
|||
several nodes as controller replicas. In this way, if one controller fails
|
||||
a replica node is ready to take its place.
|
||||
|
||||
For this evaluation, you won't don't need that level of robustness. A single
|
||||
For this evaluation, you won't need that level of robustness. A single
|
||||
host for the controller suffices.
|
||||
|
||||
1. If you don't already have open, open a terminal on your computer.
|
||||
1. If you don't already have one, open a terminal on your computer.
|
||||
|
||||
2. Connect the terminal environment to the `node1` you created.
|
||||
|
||||
|
@ -195,11 +202,27 @@ host for the controller suffices.
|
|||
|
||||
$ eval $(docker-machine env node1)
|
||||
|
||||
c. Verify that `node1` has an active environment.
|
||||
|
||||
$ docker-machine ls
|
||||
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
|
||||
node1 * virtualbox Running tcp://192.168.99.104:2376 v1.10.0
|
||||
node2 - virtualbox Running tcp://192.168.99.102:2376 v1.10.0
|
||||
|
||||
An `*` (asterisk) in the `ACTIVE` field indicates that the `node1` environment is active.
|
||||
|
||||
The client will send the `docker` commands in the following steps to the Docker Engine on `node1`.
|
||||
|
||||
3. Start the `ucp` tool to install interactively.
|
||||
|
||||
$ docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock --name ucp docker/ucp install --swarm-port 3376 -i --host-address $(docker-machine ip node1)
|
||||
>**Note**: If you are on a Windows system, your shell can't resolve the
|
||||
`$(docker-machine ip node1)` variable. So, replace it with the actual IP
|
||||
address.
|
||||
|
||||
$ docker run --rm -it \
|
||||
-v /var/run/docker.sock:/var/run/docker.sock \
|
||||
--name ucp docker/ucp install -i\
|
||||
--swarm-port 3376 --host-address $(docker-machine ip node1)
|
||||
Unable to find image 'docker/ucp:latest' locally
|
||||
latest: Pulling from docker/ucp
|
||||
0198ad4008dc: Pull complete
|
||||
|
@ -279,17 +302,23 @@ In this step, you log into UCP, get a license, and install it. Docker allows you
|
|||
|
||||
5. Enter `admin` for the username along with the password you provided to the `install`.
|
||||
|
||||
After you enter the correct credentials, the UCP dashboard displays.
|
||||
After you enter the correct credentials, the UCP dashboard prompts for a
|
||||
license.
|
||||
|
||||

|
||||
|
||||
6. Press *Skip for now* to continue to the dashboard.
|
||||
|
||||

|
||||
|
||||
The dashboard shows a single node, your controller node. It also shows you a message saying that you need a license. Docker allows you to download a trial license.
|
||||
The dashboard shows a single node, your controller node. It also shows you a
|
||||
banner saying that you need a license.
|
||||
|
||||
6. Follow the link on the UCP **Dashboard** to the Docker website to get a trial license.
|
||||
|
||||
You must fill out a short form. After you complete the form, you are prompted with some **Installation Steps**.
|
||||
|
||||
7. Press *Next* until you reach the **Add License** step.
|
||||
7. Press **Next** until you reach the **Add License** step.
|
||||
|
||||

|
||||
|
||||
|
@ -322,7 +351,8 @@ controller. You'll know you've succeeded if you see this list:
|
|||

|
||||
|
||||
The containers reflect the architecture of UCP. The containers are running
|
||||
Swarm, a key-value store process, and some containers with certificate volumes. Explore the other resources
|
||||
Swarm, a key-value store process, and some containers with certificate volumes.
|
||||
Explore the other resources.
|
||||
|
||||
## Step 7. Join a node
|
||||
|
||||
|
@ -330,7 +360,7 @@ In this step, you join your UCP `node2` to the controller using the `ucp join`
|
|||
subcommand. In a UCP production installation, you'd do this step for each node
|
||||
you want to add.
|
||||
|
||||
1. If you don't already have open, open a terminal on your computer.
|
||||
1. If you don't already have one, open a terminal on your computer.
|
||||
|
||||
2. Connect the terminal environment to the `node2` you provisioned earlier.
|
||||
|
||||
|
@ -352,7 +382,14 @@ you want to add.
|
|||
|
||||
2. Run the `docker/ucp join` command.
|
||||
|
||||
$ docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock --name ucp docker/ucp join -i --host-address $(docker-machine ip node2)
|
||||
>**Note**: If you are on a Windows system, your shell can't resolve the
|
||||
`$(docker-machine ip node2)` variable. So, replace it with the actual IP
|
||||
address.
|
||||
|
||||
$ docker run --rm -it \
|
||||
-v /var/run/docker.sock:/var/run/docker.sock \
|
||||
--name ucp docker/ucp join -i \
|
||||
--host-address $(docker-machine ip node2)
|
||||
|
||||
The `join` pulls several images and prompts you for the URL of the UCP server.
|
||||
|
||||
|
@ -435,7 +472,7 @@ dashboard. The container will run an Nginx server, so you'll need to launch the
|
|||
4. Enter `nginx` for the image name.
|
||||
|
||||
An image is simply predefined software you want to run. The software might
|
||||
an actual standalone application or maybe some component software necessary
|
||||
be an actual standalone application or maybe some component software necessary
|
||||
to support a complex service.
|
||||
|
||||
5. Enter `nginx_server` for the container name.
|
||||
|
@ -450,7 +487,7 @@ dashboard. The container will run an Nginx server, so you'll need to launch the
|
|||
|
||||
8. Use the plus sign to add another **Port**.
|
||||
|
||||
9. For this port, enter `90` in the **Port** and **Host Port** field.
|
||||
9. For this port, enter `80` in the **Port** and **Host Port** field.
|
||||
|
||||
When you are done, your dialog looks like the following:
|
||||
|
||||
|
@ -499,78 +536,90 @@ Download the bundle and configure your environment.
|
|||
|
||||
The browser downloads the `ucp-bundle-admin.zip` file.
|
||||
|
||||
4. Open a new shell on your local machine.
|
||||
|
||||
5. Make sure your shell does not have an active Docker Machine host.
|
||||
|
||||
$ docker-machine ls
|
||||
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
|
||||
moxie - virtualbox Stopped Unknown
|
||||
test - virtualbox Running tcp://192.168.99.100:2376 v1.10.1
|
||||
|
||||
While Machine has a stopped and running host, neither is active in the shell. You know this because neither host shows an * (asterisk) indicating the shell is configured.
|
||||
|
||||
4. Create a directory to hold the deploy information.
|
||||
|
||||
$ mkdir deploy-app
|
||||
|
||||
4. Navigate to where the bundle was downloaded, and unzip the client bundle
|
||||
|
||||
$ unzip bundle.zip
|
||||
Archive: bundle.zip
|
||||
extracting: ca.pem
|
||||
extracting: cert.pem
|
||||
extracting: key.pem
|
||||
extracting: cert.pub
|
||||
extracting: env.sh
|
||||
$ unzip bundle.zip
|
||||
Archive: bundle.zip
|
||||
extracting: ca.pem
|
||||
extracting: cert.pem
|
||||
extracting: key.pem
|
||||
extracting: cert.pub
|
||||
extracting: env.sh
|
||||
|
||||
5. Change into the directory that was created when the bundle was unzipped
|
||||
|
||||
6. Execute the `env.sh` script to set the appropriate environment variables for your UCP deployment
|
||||
6. Execute the `env.sh` script to set the appropriate environment variables for your UCP deployment.
|
||||
|
||||
$ source env.sh
|
||||
$ source env.sh
|
||||
|
||||
7. Connect the terminal environment to the `node1`, your controller node.
|
||||
If you are on Windows, you may need to set the environment variables manually.
|
||||
|
||||
$ eval "$(docker-machine env node1)"
|
||||
7. Run `docker info` to examine the UCP deployment.
|
||||
|
||||
7. Run `docker info` to examine the Docker Swarm.
|
||||
Your output should show that you are managing UCP vs. a single node.
|
||||
|
||||
Your output should show that you are managing the swarm vs. a single node.
|
||||
|
||||
$ docker info
|
||||
$ docker info
|
||||
Containers: 12
|
||||
Running: 0
|
||||
Paused: 0
|
||||
Stopped: 0
|
||||
Images: 17
|
||||
Role: primary
|
||||
Strategy: spread
|
||||
Filters: health, port, dependency, affinity, constraint
|
||||
Nodes: 2
|
||||
node1: 192.168.99.106:12376
|
||||
└ Status: Healthy
|
||||
└ Containers: 9
|
||||
└ Reserved CPUs: 0 / 1
|
||||
└ Reserved Memory: 0 B / 3.01 GiB
|
||||
└ Labels: executiondriver=native-0.2, kernelversion=4.1.17-boot2docker, operatingsystem=Boot2Docker 1.10.0 (TCL 6.4.1); master : b09ed60 - Thu Feb 4 20:16:08 UTC 2016, provider=virtualbox, storagedriver=aufs
|
||||
└ Error: (none)
|
||||
└ UpdatedAt: 2016-02-09T12:03:16Z
|
||||
node2: 192.168.99.107:12376
|
||||
└ Status: Healthy
|
||||
└ Containers: 3
|
||||
└ Reserved CPUs: 0 / 1
|
||||
└ Reserved Memory: 0 B / 4.956 GiB
|
||||
└ Labels: executiondriver=native-0.2, kernelversion=4.1.17-boot2docker, operatingsystem=Boot2Docker 1.10.0 (TCL 6.4.1); master : b09ed60 - Thu Feb 4 20:16:08 UTC 2016, provider=virtualbox, storagedriver=aufs
|
||||
└ Error: (none)
|
||||
└ UpdatedAt: 2016-02-09T12:03:11Z
|
||||
Cluster Managers: 1
|
||||
192.168.99.106: Healthy
|
||||
└ Orca Controller: https://192.168.99.106:443
|
||||
└ Swarm Manager: tcp://192.168.99.106:3376
|
||||
└ KV: etcd://192.168.99.106:12379
|
||||
Plugins:
|
||||
Volume:
|
||||
Network:
|
||||
CPUs: 2
|
||||
Total Memory: 7.966 GiB
|
||||
Name: ucp-controller-node1
|
||||
ID: P5QI:ZFCX:ELZ6:RX2F:ADCT:SJ7X:LAMQ:AA4L:ZWGR:IA5V:CXDE:FTT2
|
||||
WARNING: No oom kill disable support
|
||||
WARNING: No cpu cfs quota support
|
||||
WARNING: No cpu cfs period support
|
||||
WARNING: No cpu shares support
|
||||
WARNING: No cpuset support
|
||||
Labels:
|
||||
com.docker.ucp.license_key=p3vPAznHhbitGG_KM36NvCWDiDDEU7aP_Y9z4i7V4DNb
|
||||
com.docker.ucp.license_max_engines=1
|
||||
com.docker.ucp.license_expires=2016-11-11 00:53:53 +0000 UTC
|
||||
$ docker info
|
||||
Containers: 12
|
||||
Running: 0
|
||||
Paused: 0
|
||||
Stopped: 0
|
||||
Images: 17
|
||||
Role: primary
|
||||
Strategy: spread
|
||||
Filters: health, port, dependency, affinity, constraint
|
||||
Nodes: 2
|
||||
node1: 192.168.99.106:12376
|
||||
└ Status: Healthy
|
||||
└ Containers: 9
|
||||
└ Reserved CPUs: 0 / 1
|
||||
└ Reserved Memory: 0 B / 3.01 GiB
|
||||
└ Labels: executiondriver=native-0.2, kernelversion=4.1.17-boot2docker, operatingsystem=Boot2Docker 1.10.0 (TCL 6.4.1); master : b09ed60 - Thu Feb 4 20:16:08 UTC 2016, provider=virtualbox, storagedriver=aufs
|
||||
└ Error: (none)
|
||||
└ UpdatedAt: 2016-02-09T12:03:16Z
|
||||
node2: 192.168.99.107:12376
|
||||
└ Status: Healthy
|
||||
└ Containers: 3
|
||||
└ Reserved CPUs: 0 / 1
|
||||
└ Reserved Memory: 0 B / 4.956 GiB
|
||||
└ Labels: executiondriver=native-0.2, kernelversion=4.1.17-boot2docker, operatingsystem=Boot2Docker 1.10.0 (TCL 6.4.1); master : b09ed60 - Thu Feb 4 20:16:08 UTC 2016, provider=virtualbox, storagedriver=aufs
|
||||
└ Error: (none)
|
||||
└ UpdatedAt: 2016-02-09T12:03:11Z
|
||||
Cluster Managers: 1
|
||||
192.168.99.106: Healthy
|
||||
└ Orca Controller: https://192.168.99.106:443
|
||||
└ Swarm Manager: tcp://192.168.99.106:3376
|
||||
└ KV: etcd://192.168.99.106:12379
|
||||
Plugins:
|
||||
Volume:
|
||||
Network:
|
||||
CPUs: 2
|
||||
Total Memory: 7.966 GiB
|
||||
Name: ucp-controller-node1
|
||||
ID: P5QI:ZFCX:ELZ6:RX2F:ADCT:SJ7X:LAMQ:AA4L:ZWGR:IA5V:CXDE:FTT2
|
||||
WARNING: No oom kill disable support
|
||||
WARNING: No cpu cfs quota support
|
||||
WARNING: No cpu cfs period support
|
||||
WARNING: No cpu shares support
|
||||
WARNING: No cpuset support
|
||||
Labels:
|
||||
com.docker.ucp.license_key=p3vPAznHhbitGG_KM36NvCWDiDDEU7aP_Y9z4i7V4DNb
|
||||
com.docker.ucp.license_max_engines=1
|
||||
com.docker.ucp.license_expires=2016-11-11 00:53:53 +0000 UTC
|
||||
|
||||
## Step 11. Deploy with the CLI
|
||||
|
||||
|
@ -610,11 +659,11 @@ In this exercise, you'll launch another Nginx container. Only this time, you'll
|
|||
|
||||
8. Scroll down to the ports section.
|
||||
|
||||
You'll see an IP address with port `80/tcp` for the server. This time, you'll
|
||||
find that the port mapped on this container than the one created yourself.
|
||||
That's because the command didn't explicitly map a port, so the Engine chose
|
||||
mapped the default Nginx port `80` inside the container to an arbitrary port
|
||||
on the node.
|
||||
You'll see an IP address with port `80/tcp` for the server. This time, you'll
|
||||
find that the port mapped on this container differs from the one you mapped
yourself earlier. That's because the command didn't explicitly map a port, so the
Engine mapped the default Nginx port `80` inside the container to an arbitrary
port on the node.
|
||||
|
||||
4. Copy the IP address and paste it into your browser.
|
||||
|
||||
|
|
|
@ -9,16 +9,62 @@ identifier="mn_ucp"
|
|||
|
||||
# Docker Universal Control Plane
|
||||
|
||||
Intro 1-2 paras, page purpose, intended user, list steps if page is tutorial.
|
||||
Docker Universal Control Plane (UCP) is an enterprise on-premises solution that
|
||||
enables IT operations teams to deploy and manage their Dockerized applications
|
||||
in production. It gives developers and administrators the agility and
|
||||
portability they need to manage Docker resources, all from within the enterprise
|
||||
firewall.
|
||||
|
||||
## Deploy, manage, and monitor Docker Engine resources
|
||||
|
||||
tbd
|
||||
Universal Control Plane can be deployed to any private infrastructure and public
|
||||
cloud including Microsoft Azure, Digital Ocean, Amazon Web Services, and
|
||||
SoftLayer.
|
||||
|
||||
Once deployed, UCP uses Docker Swarm to create and manage clusters, tested up to
|
||||
10,000 nodes deployed in any private data center or public cloud provider.
|
||||
|
||||
## Secure communications
|
||||
|
||||
tbd
|
||||
UCP has built-in security and integration with existing LDAP/AD for
|
||||
authentication and role-based access control. Optionally, you can use its native
|
||||
integration with Docker Trusted Registry. The integration with Docker Trusted
|
||||
Registry allows enterprises to leverage Docker Content Trust (Notary in the
|
||||
open source world), a built-in security tool for signing images.
|
||||
|
||||
|
||||
Universal Control Plane is the only tool on
|
||||
the market that comes with Docker Content Trust directly
|
||||
out of the box. With these integrations Universal Control Plane
|
||||
gives enterprise IT security teams the necessary control over their
|
||||
environment and application content.
|
||||
|
||||
## Control user access
|
||||
|
||||
tbd
|
||||
Security is top of mind for many enterprise IT operations teams. Docker UCP
|
||||
integrates with existing tools like LDAP/AD for user authentication and
|
||||
integrates with Docker Trusted Registry. This integration enables enterprises
|
||||
to control the entire build, ship and run workflow in a secure fashion.
|
||||
|
||||
Within Docker UCP, you can do a local set up for account information, or you can
|
||||
do centralized authentication by linking UCP with your LDAP or Active Directory.
|
||||
The integration with Docker Trusted Registry also means that you can use Docker
|
||||
Content Trust to sign your images and ensure that they are not altered in
|
||||
any way and are safe for use within your organization. Users can pull images from
|
||||
Docker Trusted Registry into Docker UCP and not have to worry about their
|
||||
security.
|
||||
|
||||
For RBAC, within UCP you can view the roles of existing accounts as well as see
|
||||
the roles that they have within UCP. The granular role based access control
|
||||
allows you to control who can access certain images. This drastically reduces
|
||||
organizational risk within enterprises.
|
||||
|
||||
## Where to go next
|
||||
|
||||
* If you are interested in evaluating UCP on your laptop, you can use the [evaluation installation and quickstart](evaluation-installation.md).
|
||||
|
||||
* Technical users and managers can get a detailed explanation of the UCP architecture and requirements from the [plan a production installation](plan-production-install.md) page. The step-by-step [production installation](plan-production-install.md) guide is also available.
|
||||
|
||||
* To learn more about controlling users in UCP, see [Manage and authorize UCP users](manage/monitor-manage-users.md).
|
||||
|
||||
* If you have used UCP in our BETA program, be sure to read the [release notes](release_notes.md).
|
||||
|
|
|
@ -10,7 +10,9 @@ parent="mn_ucp"
|
|||
# UCP Key/Value Store Backend
|
||||
|
||||
In this release, UCP leverages the [etcd](https://github.com/coreos/etcd/) KV
|
||||
store internally for node discovery and high availability. This use is specific to UCP. The services you deploy on UCP can use whichever key-store is appropriate for the service.
|
||||
store internally for node discovery and high availability. This use is specific
|
||||
to UCP. The services you deploy on UCP can use whichever key-store is
|
||||
appropriate for the service.
|
||||
|
||||
Under normal circumstances, you should not have to access the KV store
|
||||
directly. To mitigate unforeseen problems or change advanced settings,
|
||||
|
@ -43,68 +45,69 @@ at https://github.com/coreos/etcd/blob/master/Documentation/api.md
|
|||
|
||||
### Troubleshooting with etcdctl
|
||||
|
||||
The `ucp-kv` container(s) running on the primary controller (and
|
||||
replicas in an HA configuration) contain the `etcdctl` binary, which
|
||||
can be accessed using `docker exec`. The following list some examples
|
||||
(and their output) using the tool to perform various tasks on the
|
||||
etcd cluster. These commands assume you are running directly against
|
||||
the engine in question. If you are running these commands through UCP,
|
||||
you should specify the node specific container name.
|
||||
The `ucp-kv` container(s) running on the primary controller (and replicas in an
|
||||
HA configuration) contain the `etcdctl` binary, which can be accessed using
|
||||
`docker exec`. The following examples (and their output) use the tool to perform
|
||||
various tasks on the `etcd` cluster.
|
||||
|
||||
* Check the health of the etcd cluster (on failure it will exit with an error code, and no output)
|
||||
```
|
||||
These commands assume you are running directly against the Docker Engine in
|
||||
question. If you are running these commands through UCP, you should specify the
|
||||
node-specific container name.
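For instance, a sketch assuming a controller node named `node1` when going
through UCP:

```bash
docker exec -it node1/ucp-kv etcdctl \
        --endpoint https://127.0.0.1:2379 \
        --ca-file /etc/docker/ssl/ca.pem \
        --cert-file /etc/docker/ssl/cert.pem \
        --key-file /etc/docker/ssl/key.pem \
        cluster-health
```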
|
||||
|
||||
Check the health of the etcd cluster (on failure it will exit with an error code, and no output)
|
||||
|
||||
```bash
|
||||
docker exec -it ucp-kv etcdctl \
|
||||
--endpoint https://127.0.0.1:2379 \
|
||||
--ca-file /etc/docker/ssl/ca.pem \
|
||||
--cert-file /etc/docker/ssl/cert.pem \
|
||||
--key-file /etc/docker/ssl/key.pem \
|
||||
cluster-health
|
||||
```
|
||||
```
|
||||
|
||||
member 16c9ae1872e8b1f0 is healthy: got healthy result from https://192.168.122.64:12379
|
||||
member c5a24cfdb4263e72 is healthy: got healthy result from https://192.168.122.196:12379
|
||||
member ca3c1bb18f1b30bf is healthy: got healthy result from https://192.168.122.223:12379
|
||||
cluster is healthy
|
||||
```
|
||||
|
||||
* List the current members of the cluster
|
||||
```
|
||||
List the current members of the cluster
|
||||
|
||||
```bash
|
||||
docker exec -it ucp-kv etcdctl \
|
||||
--endpoint https://127.0.0.1:2379 \
|
||||
--ca-file /etc/docker/ssl/ca.pem \
|
||||
--cert-file /etc/docker/ssl/cert.pem \
|
||||
--key-file /etc/docker/ssl/key.pem \
|
||||
member list
|
||||
```
|
||||
```
|
||||
|
||||
16c9ae1872e8b1f0: name=orca-kv-192.168.122.64 peerURLs=https://192.168.122.64:12380 clientURLs=https://192.168.122.64:12379
|
||||
c5a24cfdb4263e72: name=orca-kv-192.168.122.196 peerURLs=https://192.168.122.196:12380 clientURLs=https://192.168.122.196:12379
|
||||
ca3c1bb18f1b30bf: name=orca-kv-192.168.122.223 peerURLs=https://192.168.122.223:12380 clientURLs=https://192.168.122.223:12379
|
||||
```
|
||||
|
||||
* Remove a failed member (use the list above first to get the ID)
|
||||
```
|
||||
Remove a failed member (use the list above first to get the ID)
|
||||
|
||||
```bash
|
||||
docker exec -it ucp-kv etcdctl \
|
||||
--endpoint https://127.0.0.1:2379 \
|
||||
--ca-file /etc/docker/ssl/ca.pem \
|
||||
--cert-file /etc/docker/ssl/cert.pem \
|
||||
--key-file /etc/docker/ssl/key.pem \
|
||||
member remove c5a24cfdb4263e72
|
||||
```
|
||||
```
|
||||
|
||||
Removed member c5a24cfdb4263e72 from cluster
|
||||
```
|
||||
|
||||
* Show the current value of a key
|
||||
```
|
||||
Show the current value of a key
|
||||
|
||||
```bash
|
||||
docker exec -it ucp-kv etcdctl \
|
||||
--endpoint https://127.0.0.1:2379 \
|
||||
--ca-file /etc/docker/ssl/ca.pem \
|
||||
--cert-file /etc/docker/ssl/cert.pem \
|
||||
--key-file /etc/docker/ssl/key.pem \
|
||||
ls /docker/swarm/nodes
|
||||
```
|
||||
```
|
||||
|
||||
/docker/swarm/nodes/192.168.122.196:12376
|
||||
/docker/swarm/nodes/192.168.122.64:12376
|
||||
/docker/swarm/nodes/192.168.122.223:12376
|
||||
|
|
|
@ -1,21 +1,16 @@
|
|||
<!--[metadata]>
|
||||
+++
|
||||
draft=true
|
||||
title = "Manage and monitor"
|
||||
description = "Manage, monitor, troubleshoot"
|
||||
keywords = ["Manage, monitor, troubleshoot"]
|
||||
[menu.main]
|
||||
identifier="mn_manage_ucp"
|
||||
parent="mn_ucp"
|
||||
weight=88
|
||||
+++
|
||||
<![end-metadata]-->
|
||||
|
||||
# UCP resource management and monitoring
|
||||
|
||||
* [Your UCP installation](monitor-ucp.md)
|
||||
* [Applications](monitor-manage-applications.md)
|
||||
* [Containers](monitor-manage-containers.md)
|
||||
* [Images](monitor-manage-images.md)
|
||||
* [Container networks](monitor-manage-networks.md)
|
||||
* [Users](monitor-manage-users.md)
|
||||
* [volumes](monitor-manage-volumes.md)
|
||||
|
|
|
@ -1,28 +1,193 @@
|
|||
<!--[metadata]>
|
||||
+++
|
||||
title = "Users"
|
||||
description = "Manage and troubleshoot UCP Users"
|
||||
keywords = ["tbd, tbd"]
|
||||
title = "Manage and authorize users"
|
||||
description = "Manage and authorize users"
|
||||
keywords = ["authorize, authentication, users, teams, UCP, Docker, objects"]
|
||||
[menu.main]
|
||||
parent="mn_ucp"
|
||||
parent="mn_manage_ucp"
|
||||
+++
|
||||
<![end-metadata]-->
|
||||
|
||||
# Monitor, manage, and troubleshoot Users
|
||||
# Manage and authorize UCP users
|
||||
|
||||
Intro 1-2 paras, page purpose, intended user, list steps if page is tutorial.
|
||||
This page explains how to manage and authorize users within UCP.
|
||||
Managing users requires that you understand how to create users and combine them
|
||||
into teams. Authorizing users requires that you understand how to apply roles
|
||||
and create permissions within UCP. On this page, you learn to do both. You also
|
||||
learn about the features and systems of UCP that support user management and
|
||||
authorization.
|
||||
|
||||
## Understand user authorization in UCP
|
||||
|
||||
## Understand UCP user roles
|
||||
Users in UCP have two levels of authorization. They may have authorization to
|
||||
manage UCP and they have authorization to access the Docker objects and
|
||||
resources that UCP manages. You can authorize a user to manage UCP by enabling
|
||||
the **IS A UCP ADMIN** option in a user's **Account Details**.
|
||||
|
||||
tbd
|
||||

|
||||
|
||||
## Importing users from LDAP
|
||||
Users that are UCP administrators have authorization to fully access all Docker
|
||||
objects in your production system. This authorization is granted
|
||||
whether access is through the GUI or the command line.
|
||||
|
||||
tbd
|
||||
Users within UCP have *permissions* assigned to them by default. This authorizes
|
||||
what a user can do to Docker resources such as volumes, networks, images, and
|
||||
containers. UCP allows you to define default permissions for a user when you create
|
||||
that user. In this release of UCP, more granular access to just one object, the
|
||||
container object, is possible through the use of teams.
|
||||
|
||||
## Manually creating users
|
||||
The possible permissions are:
|
||||
|
||||
tbd
|
||||
| Type | Description |
|
||||
|--------------------|--------------------------------------------------------------------------------------------------------------|
|
||||
| No Access | Cannot access any resources. |
|
||||
| View Only | Can view resources. This role grants the ability to view a container but not restart, kill, or remove it. |
|
||||
| Restricted Control | Can edit resources. This role grants the ability to create, restart, kill, and remove containers. |
|
||||
| Full Control | Can do anything possible to resources. This role grants full rights to all actions on containers. |
|
||||
|
||||
## Troubleshoot problems with Users
|
||||
For containers only, you can extend the default access permissions with more
|
||||
granular, role-based permissions. Docker Engine allows container creators to
|
||||
apply arbitrary, descriptive strings called *labels* to a container. If you
|
||||
define labels for use by container creators, you can leverage these
|
||||
labels with UCP teams to configure role-based access to containers.
|
||||
|
||||
The general process for configuring role-based access to containers is:
|
||||
|
||||
* Identify one or more labels to apply to containers.
|
||||
* Create one or more teams.
|
||||
* Define a permission by combining a pre-identified label with a role value.
|
||||
* Add users to the team.
|
||||
* Ensure container creators use the pre-defined labels.
|
||||
|
||||
Once you configure it, users have this access through UCP and through their
|
||||
interactions on the command line via the client bundle.
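For example, if a team permission were defined against a hypothetical label
such as `com.example.access=dev`, a container creator would apply that label at
run time; a minimal sketch:

```
# The label key and value are hypothetical -- use whatever label your UCP
# team permission defines.
docker run -d --label com.example.access=dev --name web nginx
```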
|
||||
|
||||
>**Note**: Users can bypass all UCP authorization controls by logging into a UCP node via
|
||||
standard SSH and addressing the Swarm cluster directly. For this reason, you
|
||||
must be sure to secure network access to a cluster's nodes.
|
||||
|
||||
## Understand Restricted Control
|
||||
|
||||
Containers run as services on your network. Without proper knowledge, users can
|
||||
launch a container with an insecure configuration. To reduce the risk of this
|
||||
happening, the **Restricted Control** role limits the options users can use when
|
||||
launching containers.
|
||||
|
||||
A user with **Restricted Control** can create, restart, kill, or remove a
|
||||
container. These users cannot `docker exec` into a container. Additionally,
|
||||
**Restricted Control** prevents users from running a container with these
|
||||
options:
|
||||
|
||||
| Prevented Option | Description |
|
||||
|----------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------|
|
||||
| `--privileged` | A “privileged” container is given access to all devices. |
|
||||
| `--cap-add` | The ability to expand the kernel-level capabilities a user or process has in a container. |
|
||||
| host mounted volumes | Mount a volume from the host where the container is running. |
|
||||
| `--ipc` | The ability to set a container's IPC (POSIX/SysV IPC) namespace mode. This provides separation of named shared memory segments, semaphores, and message queues. |
|
||||
| `--pid` | PID namespace provides separation of processes. The PID Namespace removes the view of the system processes, and allows process ids to be reused including pid 1. |
|
||||
|
||||
Users that attempt to create containers with these options receive an error message.
|
||||
|
||||
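For example, a user who holds only the **Restricted Control** role and is
working through their client bundle sees an error when attempting a run such as
the one below. The image is a placeholder and the exact error text can vary by
UCP version:

```bash
# Attempted by a Restricted Control user; UCP rejects the request
# because --privileged is one of the prevented options.
docker run --rm --privileged alpine:latest sh
```
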
## Creating users through UCP

UCP offers two ways to create user accounts. You can manually create accounts
one at a time, or you can import users as a group into a team via UCP's LDAP
integration. To create an individual user, do the following:

1. Click **Users & Teams** from the UCP dashboard.

2. Click **Create User**.

    ![Create user page](images/monitor_create-user.png)

3. Complete the fields for the user.

    The **DEFAULT PERMISSIONS** define the default access role a user has to all
    the Docker objects and resources in the system. You can refine and extend
    access on containers by adding a user to a **Team** later.

4. Click **Save** to create the user.

## Creating a team

UCP offers two ways to create teams. You can manually create teams one at a
time, or you can populate a team by importing multiple users via an LDAP or
Active Directory connection. The teams you populate one at a time are
**Managed** teams, meaning they contain only users managed by UCP.

Teams you create via an LDAP or Active Directory connection are known as
**Discovered** teams. To use LDAP or Active Directory, you must have already
[configured the AUTH settings in UCP](ldap-config.md). When you create a
**Discovered** team, the system imports the members and applies the default
authorization set in UCP's **AUTH** settings. The value appears in the **DEFAULT
PERMISSIONS FOR NEW DISCOVERED ACCOUNTS** field.

![Default permissions](images/default_permissions.png)

To create a **Discovered** team with LDAP or Active Directory, do the following:

1. Log in to UCP as a user with UCP ADMIN authorization.

2. Click **Users & Teams** from the UCP dashboard.

3. Click **Create a Team**.

    The system displays the **Create Team** page. At this point, you decide what
    **TYPE** of team you want to create. You can't change or convert the team
    **TYPE** later.

4. Choose **Discovered** from the **TYPE** dropdown.

    The system displays options for the **Discovered** team. Completing this
    dialog requires that you have a basic understanding of LDAP or access to
    someone who does.

5. Enter a **Name** for the team.

6. Enter an **LDAP DN** value.

    This value is a distinguished name (DN) identifying the group you want to
    import. A distinguished name describes a position in an LDAP directory
    information tree (DIT). The example after this procedure shows what these
    values might look like.

7. Enter an **LDAP MEMBER ATTRIBUTE** value.

    This identifies the attribute UCP uses to retrieve the group's member values.

    ![Create team page](images/monitor_create-team.png)

8. Save the team.

    After a moment, the system creates a team with the users matching your team
    specification.

    ![Team created](images/monitor_team-created.png)

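For reference, the values entered in steps 6 and 7 might look like the
following. These are hypothetical values for an `example.com` directory;
substitute the DN and member attribute your own directory actually uses:

```bash
# Hypothetical LDAP values for a Discovered team:
#   LDAP DN:               cn=devops,ou=groups,dc=example,dc=com
#   LDAP MEMBER ATTRIBUTE: member

# If the OpenLDAP client tools are installed, you can confirm the group
# exists and list its members before creating the team:
ldapsearch -x -H ldap://ldap.example.com \
  -b "cn=devops,ou=groups,dc=example,dc=com" member
```
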
## Add permissions to a team

You can use a team simply to organize **Managed** users or to import and
organize **Discovered** users. Optionally, you can also add permissions to the
team. Permissions are a combination of labels and roles you can apply to a team.
Permissions authorize users to act on containers with the matching labels
according to the roles you define.

>**Note**: For the permissions to apply correctly, you must ensure the labels
exist on containers deployed in UCP.

To add **Permissions** to a team, do the following:

1. Select the team.

2. Choose **PERMISSIONS**.

3. Click **Add Label**.

    ![Team permissions](images/team_permissions.png)

4. Click **Save**.

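To confirm that a deployed container actually carries the label referenced by
the team permission, you can inspect it from the client bundle. The container
name below is the hypothetical `crm-web` used in the earlier label example:

```bash
# Print the labels on a running container and check that one of them
# matches the label defined in the team permission.
docker inspect --format '{{json .Config.Labels}}' crm-web
```
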
## Related information

To learn how to apply labels, see [Apply custom
metadata](https://docs.docker.com/engine/userguide/labels-custom-metadata/) in
the Engine documentation.

@ -4,7 +4,7 @@ title = "Monitor and troubleshoot UCP"
description = "Monitor your Docker Universal Control Plane installation, and learn how to troubleshoot it."
keywords = ["Docker, UCP, troubleshoot"]
[menu.main]
parent="mn_ucp"
parent="mn_manage_ucp"
weight=-80
+++
<![end-metadata]-->

overview.md
@ -8,27 +8,27 @@ weight="-100"
+++
<![end-metadata]-->

<!--[metadata]>
NOTICE:
The current doc theme does not allow us to create a split menu: a menu whose label is both a link and a dropdown. Our theme redesign fixes this but it is in RFD, so earliest out is about March 2016. In the meantime:

* we allow the overview.md pages to show up in the menu.
* when a user clicks Overview, we will redirect to https://docs.docker.com/ucp/
* Later, after the redesign goes live, we will re-redirect in the unlikely event someone was able to bookmark https://docs.docker.com/ucp/overview
<![end-metadata]-->

# UCP table of contents

# UCP overview

Universal Control Plane is a Docker native solution designed to provision and
cluster Docker hosts and their resources. You can use UCP to deploy and manage
Dockerized applications. UCP has full support for the Docker API, so you can
move applications from development to test to production without code changes.

* [Overview](overview.md)

The UCP documentation includes the following topics:

* [UCP Overview](index.md)
* [Evaluation installation](evaluation-install.md)
* [Plan a production installation](plan-production-install.md)
* [Install UCP for production](production-install.md)
* [Upgrade a production installation](production-upgrade.md)
* [Manage, monitor, and troubleshoot UCP and its resources](manage/monitor-ucp.md)
* [Commands reference](reference/index.md)
* [Work with Docker Support](support.md)
* [UCP Release Notes](release_notes.md)
* [UCP Key/Value Store Backend](kv_store.md)
* [Set up container networking with UCP](networking.md)
* [Set up high availability](understand_ha.md)
* [Deploy an application thru UCP](deploy-application.md)
* [UCP Key/Value Store Backend](kv_store.md)
* [Manage, monitor, and troubleshoot UCP and its resources](manage/monitor-ucp.md)
* [Manage and authorize users](manage/monitor-manage-users.md)
* [The ucp tool reference](reference/index.md)
* [Work with Docker Support](support.md)
* [UCP Release Notes](release_notes.md)

@ -1,5 +1,6 @@
<!--[metadata]>
+++
draft="true"
title = "Upgrade production installation"
description = "Upgrade production installation"
keywords = ["tbd, tbd"]

@ -22,15 +22,17 @@ docker run --rm \
## Description

Dumps out the public certificates for the UCP controller running on the local
engine. By default, this command dumps both the CA and certificate. You can use
the output of this command to populate local certificate trust stores as
desired.

## Options

| Option                | Description                                                                           |
|-----------------------|-----------------------------------------------------------------------------------------|
| `--debug`, `-D`       | Enable debug.                                                                             |
| `--jsonlog`           | Produce JSON-formatted output for easier parsing.                                         |
| `--interactive`, `-i` | Enable interactive mode. You are prompted to enter all required information.              |
| `--ca`                | Dump only the contents of the `ca.pem` file (default is to dump both CA and cert).        |
| `--cluster`           | Dump the internal UCP Cluster Root CA and cert instead of the public server cert.         |

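For context, a full invocation of this command follows the `docker run --rm \`
pattern shown in the hunk header above. This sketch assumes the `docker/ucp`
bootstrapper image and the `dump-certs` subcommand name; adjust both to match
your installation:

```bash
# Dump the UCP controller's public certificates from the local engine
# and save them for a local trust store.
docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp \
  dump-certs --cluster > ucp-cluster-ca.pem
```
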
@ -29,11 +29,15 @@ Use one or more '--controller' arguments to specify *all* of the
UCP controllers in this cluster.

The '--host-address' argument specifies the public advertise address for the
particular node you are running the command on. This host-address is how other
nodes in UCP talk to this node. You may specify an IP or hostname, and the
command automatically detects and fills in the port number. If you omit the
address, the tool attempts to discover the node's address.

This command uses the exit status of 0 for success. An exit status of 1 is used
when run without the '--update' flag and when the configuration needs updating,
and 2 is used for any failures.

## Options

@ -41,6 +45,6 @@ address, the tool attempts to discover the node's address.

| Option                | Description                                                                                        |
|-----------------------|------------------------------------------------------------------------------------------------------|
| `--debug`, `-D`       | Enable debug.                                                                                          |
| `--jsonlog`           | Produce JSON-formatted output for easier parsing.                                                      |
| `--interactive`, `-i` | Enable interactive mode. You are prompted to enter all required information.                           |
| `--list`              | Display the Engine discovery configuration.                                                            |
| `--update`            | Apply engine discovery configuration changes.                                                          |
| `--controller [--controller option --controller option]` | Update discovery with one or more controllers' external IP address or hostname.    |
| `--host-address`      | Update the external IP address or hostname this node advertises itself as [`$UCP_HOST_ADDRESS`].       |

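For context, the following sketch shows how these flags are commonly combined.
It assumes the options above belong to the UCP tool's `engine-discovery`
subcommand and uses placeholder addresses; adjust the image, subcommand, and
addresses for your cluster:

```bash
# Display the current Engine discovery configuration on this node.
docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp \
  engine-discovery --list

# Point this node at all controllers and apply the change.
docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp \
  engine-discovery \
  --controller 192.0.2.10 --controller 192.0.2.11 --controller 192.0.2.12 \
  --host-address 192.0.2.20 \
  --update

# Exit status: 0 on success, 1 if run without --update and an update is
# needed, 2 on any failure.
echo $?
```
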
@ -4,7 +4,7 @@ description = "Run UCP commands"
[menu.main]
identifier = "ucp_ref"
parent = "mn_ucp"
weight=110
weight=90
+++

# ucp tool Reference

@ -5,7 +5,7 @@ description="Your Docker subscription gives you access to prioritized support. Y
keywords = ["Docker, support", "help"]
[menu.main]
parent="mn_ucp"
weight="89"
weight="92"
+++
<![end-metadata]-->

@ -23,43 +23,65 @@ This document summarizes UCP's high availability feature and the concepts that
support it. It also explains general guidelines for deploying a highly available
UCP in production.

## Concepts and terminology
## Understand high availability terms and containers

The **primary controller** is the first host you run the `ucp` tool's `install`
subcommand against. It runs the following containers/services:

| Name                  | Description                                                                                                                                                                                                                       |
|-----------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `ucp-kv`              | This `etcd` container runs the replicated KV store inside UCP. The services you deploy on UCP can use whichever key-value store is appropriate for the service.                                                                    |
| `ucp-swarm-manager`   | This Swarm manager uses the replicated KV store for leader election and cluster membership tracking.                                                                                                                               |
| `ucp-controller`      | This container runs the UCP server, using the replicated KV store for configuration state.                                                                                                                                         |
| `ucp-swarm-join`      | Runs the `swarm join` command to periodically publish this node's existence to the KV store. If the node goes down, this publishing stops, the registration times out, and the node is automatically dropped from the cluster.     |
| `ucp-proxy`           | Runs a local TLS proxy for the Docker socket to enable secure access to the local Docker daemon.                                                                                                                                   |
| `ucp-cluster-root-ca` | These **unreplicated** containers run the Swarm CA used for admin certificate bundles and for adding new nodes.                                                                                                                    |
| `ucp-client-root-ca`  | These **unreplicated** containers run the (optional) UCP CA used for signing user bundles.                                                                                                                                         |

A **replica node** is a node you `join` to the primary using the `--replica`
flag; it contributes to the availability of the cluster. A replica node runs
the following containers/services:

| Name                | Description                                                                                                                                                                                                                       |
|---------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `ucp-kv`            | This `etcd` container runs the replicated KV store.                                                                                                                                                                                  |
| `ucp-swarm-manager` | This Swarm manager uses the replicated KV store for leader election and cluster membership tracking.                                                                                                                                 |
| `ucp-controller`    | This container runs the UCP server, using the replicated KV store for configuration state.                                                                                                                                           |
| `ucp-swarm-join`    | Runs the `swarm join` command to periodically publish this node's existence to the KV store. If the node goes down, this publishing stops, the registration times out, and the node is automatically dropped from the cluster.       |
| `ucp-proxy`         | Runs a local TLS proxy for the Docker socket to enable secure access to the local Docker daemon.                                                                                                                                     |

The remaining **non-replica nodes** provide additional capacity but do not
enhance the availability of the UCP/Swarm infrastructure. These nodes run the
following containers/services:

| Name             | Description                                                                                                                                                                                                                       |
|------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `ucp-swarm-join` | Runs the `swarm join` command to periodically publish this node's existence to the KV store. If the node goes down, this publishing stops, the registration times out, and the node is automatically dropped from the cluster.       |
| `ucp-proxy`      | Runs a local TLS proxy for the Docker socket to enable secure access to the local Docker daemon.                                                                                                                                     |

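To relate these terms to commands: you create the primary controller with the
`ucp` tool's `install` subcommand and add replicas with `join` and the
`--replica` flag. A minimal sketch, assuming the `docker/ucp` bootstrapper image
and interactive mode (other flags vary by environment and are omitted):

```bash
# On the host that becomes the primary controller:
docker run --rm -it \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp \
  install -i

# On each host that becomes a replica controller:
docker run --rm -it \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp \
  join -i --replica
```
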
## Sizing your deployment

If you are planning an HA deployment, you should have a minimum of 3 controllers
configured: a primary and two replicas. Never run a cluster with only the
primary controller and a single replica. This results in an HA configuration of
"2 nodes" where quorum is also "2 nodes" (to prevent split-brain).

Currently, UCP supports a combination of 3, 5, or 7 controller nodes where one
node is primary and the others are replicas. Because quorum is a majority of the
controllers, a 3-controller cluster tolerates one controller failure, a
5-controller cluster tolerates two, and a 7-controller cluster tolerates three.

If either the primary or single replica were to fail, the cluster is unusable
until they are repaired. In fact, you actually have a higher failure probability
than if you just ran a non-HA setup with no replica.

## Load balancing UCP cluster-store

At present, UCP does not include a load balancer. You may configure one of your
own. If you do, you can load balance between the primary and replica nodes on
port `443` for web access to the system via a single IP/hostname.

If an external load balancer is not used, system administrators should note the
IP/hostname of the primary and all controller replicas. In this way, an
administrator can access them when needed.

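As an illustration only, a TCP passthrough load balancer in front of the
controllers could look like the sketch below. HAProxy, the addresses, and the
container-based setup are assumptions rather than part of UCP; any TCP load
balancer that passes TLS through to the controllers works the same way:

```bash
# Write a minimal TCP passthrough configuration and run HAProxy in a container.
# Controller addresses are placeholders; TLS still terminates at the controllers.
cat > haproxy.cfg <<'EOF'
defaults
    mode tcp
    timeout connect 5s
    timeout client  50s
    timeout server  50s

frontend ucp_443
    bind *:443
    default_backend ucp_controllers

backend ucp_controllers
    server primary  192.0.2.10:443 check
    server replica1 192.0.2.11:443 check
    server replica2 192.0.2.12:443 check
EOF

docker run -d --name ucp-lb -p 443:443 \
  -v "$(pwd)/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro" \
  haproxy:1.6
```
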
* Backups:
    * Users should always back up their volumes (see the other guides for a complete list of named volumes)