diff --git a/README.md b/README.md index d700366c81..bff722714d 100644 --- a/README.md +++ b/README.md @@ -107,7 +107,7 @@ of [https://docs.docker.com/](https://docs.docker.com/). ## Staging the docs -You have two options: +You have three options: 1. On your local machine, clone this repo and run our staging container: @@ -169,7 +169,17 @@ You have two options: running on http://localhost:4000/ by default. To stop it, use `CTRL+C`. You can continue working in a second terminal and Jekyll will rebuild the website incrementally. Refresh the browser to preview your changes. + +3. Build and run a Docker image for your working branch. + + ```bash + $ docker build -t docker build -t docs/docker.github.io: . + $ docker run --rm -it -p 4000:4000 docs/docker.github.io: + ``` + After the `docker run` command, copy the URL provided in the container build output in a browser, + http://0.0.0.0:4000, and verify your changes. + ## Read these docs offline To read the docs offline, you can use either a standalone container or a swarm service. diff --git a/_data/toc.yaml b/_data/toc.yaml index cc25549a3b..b3bf1af4cb 100644 --- a/_data/toc.yaml +++ b/_data/toc.yaml @@ -1230,8 +1230,6 @@ manuals: title: Join Windows worker nodes to your cluster - path: /ee/ucp/admin/configure/join-nodes/use-a-load-balancer/ title: Use a load balancer - - path: /ee/ucp/admin/configure/integrate-with-multiple-registries/ - title: Integrate with multiple registries - path: /ee/ucp/admin/configure/deploy-route-reflectors/ title: Improve network performance with Route Reflectors - sectiontitle: Monitor and troubleshoot @@ -1444,7 +1442,7 @@ manuals: section: - path: /datacenter/ucp/3.0/guides/admin/monitor-and-troubleshoot/ title: Monitor the cluster status - - path: /datacenter/ucp/3.0/admin/monitor-and-troubleshoot/troubleshoot-node-messages/ + - path: /datacenter/ucp/3.0/guides/admin/monitor-and-troubleshoot/troubleshoot-node-messages/ title: Troubleshoot node messages - path: /datacenter/ucp/3.0/guides/admin/monitor-and-troubleshoot/troubleshoot-with-logs/ title: Troubleshoot with logs @@ -1516,27 +1514,75 @@ manuals: title: Web-based access - path: /datacenter/ucp/3.0/guides/user/access-ucp/cli-based-access/ title: CLI-based access - - sectiontitle: Deploy an application + - path: /datacenter/ucp/3.0/guides/user/access-ucp/kubectl/ + title: Install the Kubernetes CLI + - sectiontitle: Deploy apps with Swarm section: - - path: /datacenter/ucp/3.0/guides/user/services/deploy-a-service/ - title: Deploy a service - - path: /datacenter/ucp/3.0/guides/user/services/use-domain-names-to-access-services/ - title: Use domain names to access services - - path: /datacenter/ucp/3.0/guides/user/services/ - title: Deploy an app from the UI - - path: /datacenter/ucp/3.0/guides/user/services/deploy-app-cli/ - title: Deploy an app from the CLI - - path: /datacenter/ucp/3.0/guides/user/services/deploy-stack-to-collection/ + - path: /datacenter/ucp/3.0/guides/user/swarm/ + title: Deploy a single service + - path: /datacenter/ucp/3.0/guides/user/swarm/deploy-multi-service-app/ + title: Deploy a multi-service app + - path: /datacenter/ucp/3.0/guides/user/swarm/deploy-to-collection/ title: Deploy application resources to a collection - - sectiontitle: Secrets + - path: /datacenter/ucp/3.0/guides/user/swarm/use-secrets/ + title: Use secrets in your services + - sectiontitle: Layer 7 routing + section: + - path: /datacenter/ucp/3.0/guides/user/interlock/ + title: Overview + - path: /datacenter/ucp/3.0/guides/user/interlock/architecture/ + title: 
Architecture + - sectiontitle: Deploy + section: + - title: Simple deployment + path: /datacenter/ucp/3.0/guides/user/interlock/deploy/ + - title: Configure your deployment + path: /datacenter/ucp/3.0/guides/user/interlock/deploy/configure/ + - title: Production deployment + path: /datacenter/ucp/3.0/guides/user/interlock/deploy/production/ + - title: Host mode networking + path: /datacenter/ucp/3.0/guides/user/interlock/deploy/host-mode-networking/ + - title: Configuration reference + path: /datacenter/ucp/3.0/guides/user/interlock/deploy/configuration-reference/ + - sectiontitle: Route traffic to services + section: + - title: Simple swarm service + path: /datacenter/ucp/3.0/guides/user/interlock/usage/ + - title: Set a default service + path: /datacenter/ucp/3.0/guides/user/interlock/usage/default-service/ + - title: Applications with TLS + path: /datacenter/ucp/3.0/guides/user/interlock/usage/tls/ + - title: Application redirects + path: /datacenter/ucp/3.0/guides/user/interlock/usage/redirects/ + - title: Persistent (sticky) sessions + path: /datacenter/ucp/3.0/guides/user/interlock/usage/sessions/ + - title: Websockets + path: /datacenter/ucp/3.0/guides/user/interlock/usage/websockets/ + - title: Canary application instances + path: /datacenter/ucp/3.0/guides/user/interlock/usage/canary/ + - title: Service clusters + path: /datacenter/ucp/3.0/guides/user/interlock/usage/service-clusters/ + - title: Context/Path based routing + path: /datacenter/ucp/3.0/guides/user/interlock/usage/context/ + - title: Service labels reference + path: /datacenter/ucp/3.0/guides/user/interlock/usage/labels-reference/ + - title: Layer 7 routing upgrade + path: /datacenter/ucp/3.0/guides/user/interlock/upgrade/ + - sectiontitle: Deploy apps with Kubernetes section: - - path: /datacenter/ucp/3.0/guides/user/secrets/ - title: Manage secrets - - path: /datacenter/ucp/3.0/guides/user/secrets/grant-revoke-access/ - title: Grant access to secrets + - title: Deploy a workload + path: /datacenter/ucp/3.0/guides/user/kubernetes/ + - title: Deploy a Compose-based app + path: /datacenter/ucp/3.0/guides/user/kubernetes/deploy-with-compose/ + - title: Deploy an ingress controller + path: /datacenter/ucp/3.0/guides/user/kubernetes/layer-7-routing/ + - title: Create a service account for a Kubernetes app + path: /datacenter/ucp/3.0/guides/user/kubernetes/create-service-account/ + - title: Install a CNI plugin + path: /datacenter/ucp/3.0/guides/user/kubernetes/install-cni-plugin/ - path: /datacenter/ucp/3.0/reference/api/ title: API reference - - path: /ee/ucp/release-notes/ + - path: /ee/ucp/release-notes/#version-30 title: Release notes nosync: true - path: /datacenter/ucp/3.0/guides/get-support/ @@ -1587,6 +1633,8 @@ manuals: title: Restrict services to worker nodes - path: /datacenter/ucp/2.2/guides/admin/configure/run-only-the-images-you-trust/ title: Run only the images you trust + - path: /datacenter/ucp/2.2/guides/admin/configure/use-trusted-images-for-ci/ + title: Use trusted images for continuous integration - path: /datacenter/ucp/2.2/guides/admin/configure/scale-your-cluster/ title: Scale your cluster - path: /datacenter/ucp/2.2/guides/admin/configure/set-session-timeout/ @@ -1701,7 +1749,7 @@ manuals: title: Grant access to secrets - path: /datacenter/ucp/2.2/reference/api/ title: API reference - - path: /ee/ucp/release-notes/ + - path: /ee/ucp/release-notes/#version-22 title: Release notes nosync: true - path: /datacenter/ucp/2.2/guides/get-support/ @@ -1752,6 +1800,8 @@ manuals: title: Use domain names to 
access services - path: /datacenter/ucp/2.1/guides/admin/configure/run-only-the-images-you-trust/ title: Run only the images you trust + - path: /datacenter/ucp/2.1/guides/admin/configure/use-trusted-images-for-ci/ + title: Use trusted images for continuous integration - path: /datacenter/ucp/2.1/guides/admin/configure/integrate-with-dtr/ title: Integrate with Docker Trusted Registry - path: /datacenter/ucp/2.1/guides/admin/configure/external-auth/ @@ -2207,10 +2257,8 @@ manuals: section: - path: /ee/dtr/user/manage-images/sign-images/ title: Sign an image - - path: /ee/dtr/user/manage-images/sign-images/delegate-image-signing/ - title: Delegate image signing - - path: /ee/dtr/user/manage-images/sign-images/manage-trusted-repositories/ - title: Manage trusted repositories + - path: /ee/dtr/user/manage-images/sign-images/trust-with-remote-ucp/ + title: Trust with a Remote UCP - sectiontitle: Promotion policies and mirroring section: - title: Overview diff --git a/_includes/footer.html b/_includes/footer.html index b47515ee47..36222b4b05 100644 --- a/_includes/footer.html +++ b/_includes/footer.html @@ -47,6 +47,7 @@
  • Documentation
  • Learn
  • Blog
  • + Engineering Blog
  • Training
  • Support
  • Knowledge Base
  • diff --git a/compose/compose-file/index.md b/compose/compose-file/index.md index 4a736aaaa2..862603619b 100644 --- a/compose/compose-file/index.md +++ b/compose/compose-file/index.md @@ -726,8 +726,8 @@ Each of these is a single value, analogous to its [docker service create](/engine/reference/commandline/service_create.md) counterpart. In this general example, the `redis` service is constrained to use no more than -50M of memory and `0.50` (50%) of available processing time (CPU), and has -`20M` of memory and `0.25` CPU time reserved (as always available to it). +50M of memory and `0.50` (50% of a single core) of available processing time (CPU), +and has `20M` of memory and `0.25` CPU time reserved (as always available to it). ```none version: '3' @@ -1888,7 +1888,7 @@ volume mounts (shared filesystems)](/docker-for-mac/osxfs-caching.md). ### domainname, hostname, ipc, mac\_address, privileged, read\_only, shm\_size, stdin\_open, tty, user, working\_dir Each of these is a single value, analogous to its -[docker run](/engine/reference/run.md) counterpart. +[docker run](/engine/reference/run.md) counterpart. Note that `mac_address` is a legacy option. user: postgresql working_dir: /code diff --git a/config/containers/logging/json-file.md b/config/containers/logging/json-file.md index 6e397885f6..913f08d305 100644 --- a/config/containers/logging/json-file.md +++ b/config/containers/logging/json-file.md @@ -13,6 +13,10 @@ and writes them in files using the JSON format. The JSON format annotates each l origin (`stdout` or `stderr`) and its timestamp. Each log file contains information about only one container. +```json +{"log":"Log line is here\n","stream":"stdout","time":"2019-01-01T11:11:11.111111111Z"} +``` + ## Usage To use the `json-file` driver as the default logging driver, set the `log-driver` diff --git a/datacenter/ucp/3.0/guides/admin/configure/use-trusted-images-for-ci.md b/datacenter/ucp/3.0/guides/admin/configure/use-trusted-images-for-ci.md deleted file mode 100644 index 0a563d339d..0000000000 --- a/datacenter/ucp/3.0/guides/admin/configure/use-trusted-images-for-ci.md +++ /dev/null @@ -1,149 +0,0 @@ ---- -description: Set up and configure content trust and signing policy for use with a continuous integration system -keywords: cup, trust, notary, security, continuous integration -title: Use trusted images for continuous integration ---- - -The document provides a minimal example on setting up Docker Content Trust (DCT) in -Universal Control Plane (UCP) for use with a Continuous Integration (CI) system. It -covers setting up the necessary accounts and trust delegations to restrict only those -images built by your CI system to be deployed to your UCP managed cluster. - -## Set up UCP accounts and teams - -The first step is to create a user account for your CI system. For the purposes of -this document we will assume you are using Jenkins as your CI system and will therefore -name the account "jenkins". As an admin user logged in to UCP, navigate to "User Management" -and select "Add User". Create a user with the name "jenkins" and set a strong password. - -Next, create a team called "CI" and add the "jenkins" user to this team. All signing -policy is team based, so if we want to grant only a single user the ability to sign images -destined to be deployed on the cluster, we must create a team for this one user. - -## Set up the signing policy - -While still logged in as an admin, navigate to "Admin Settings" and select the "Content Trust" -subsection. 
Select the checkbox to enable content trust and in the select box that appears, -select the "CI" team we have just created. Save the settings. - -This policy will require that every image that referenced in a `docker image pull`, -`docker container run`, or `docker service create` must be signed by a key corresponding -to a member of the "CI" team. In this case, the only member is the "jenkins" user. - -## Create keys for the Jenkins user - -The signing policy implementation uses the certificates issued in user client bundles -to connect a signature to a user. Using an incognito browser window (or otherwise), -log in to the "jenkins" user account you created earlier. Download a client bundle for -this user. It is also recommended to change the description associated with the public -key stored in UCP such that you can identify in the future which key is being used for -signing. - -Each time a user retrieves a new client bundle, a new keypair is generated. It is therefore -necessary to keep track of a specific bundle that a user chooses to designate as their signing bundle. - -Once you have decompressed the client bundle, the only two files you need for the purposes -of signing are `cert.pem` and `key.pem`. These represent the public and private parts of -the user's signing identity respectively. We will load the `key.pem` file onto the Jenkins -servers, and use `cert.pem` to create delegations for the "jenkins" user in our -Trusted Collection. - -## Prepare the Jenkins server - -### Load `key.pem` on Jenkins - -You will need to use the notary client to load keys onto your Jenkins server. Simply run -`notary -d /path/to/.docker/trust key import /path/to/key.pem`. You will be asked to set -a password to encrypt the key on disk. For automated signing, this password can be configured -into the environment under the variable name `DOCKER_CONTENT_TRUST_REPOSITORY_PASSPHRASE`. The `-d` -flag to the command specifies the path to the `trust` subdirectory within the server's `docker` -configuration directory. Typically this is found at `~/.docker/trust`. - -### Enable content trust - -There are two ways to enable content trust: globally, and per operation. To enabled content -trust globally, set the environment variable `DOCKER_CONTENT_TRUST=1`. To enable on a per -operation basis, wherever you run `docker image push` in your Jenkins scripts, add the flag -`--disable-content-trust=false`. You may wish to use this second option if you only want -to sign some images. - -The Jenkins server is now prepared to sign images, but we need to create delegations referencing -the key to give it the necessary permissions. - -## Initialize a repository - -Any commands displayed in this section should _not_ be run from the Jenkins server. You -will most likely want to run them from your local system. - -If this is a new repository, create it in Docker Trusted Registry (DTR) or Docker Hub, -depending on which you use to store your images, before proceeding further. - -We will now initialize the trust data and create the delegation that provides the Jenkins -key with permissions to sign content. The following commands initialize the trust data and -rotate snapshotting responsibilities to the server. This is necessary to ensure human involvement -is not required to publish new content. 
- -``` -notary -s https://my_notary_server.com -d ~/.docker/trust init my_repository -notary -s https://my_notary_server.com -d ~/.docker/trust key rotate my_repository snapshot -r -notary -s https://my_notary_server.com -d ~/.docker/trust publish my_repository -``` - -The `-s` flag specifies the server hosting a notary service. If you are operating against -Docker Hub, this will be `https://notary.docker.io`. If you are operating against your own DTR -instance, this will be the same hostname you use in image names when running docker commands preceded -by the `https://` scheme. For example, if you would run `docker image push my_dtr:4443/me/an_image` the value -of the `-s` flag would be expected to be `https://my_dtr:4443`. - -If you use DTR, the name of the repository should be identical to the full name you use -in a `docker image push` command. If you use Docker Hub, the name you use in a `docker image push` -must be preceded by `docker.io/`. For instance, if you ran `docker image push me/alpine`, you then -use `notary init docker.io/me/alpine`. - -For brevity, we will exclude the `-s` and `-d` flags from subsequent command, but be aware you -will still need to provide them for the commands to work correctly. - -Now that the repository is initialized, we need to create the delegations for Jenkins. Docker -Content Trust treats a delegation role called `targets/releases` specially. It considers this -delegation to contain the canonical list of published images for the repository. For this reason, -you should add all users to this delegation with the following command: - -``` -notary delegation add my_repository targets/releases --all-paths /path/to/cert.pem -``` - -This solves a number of prioritization problems that would result from the need to determine -which delegation should ultimately be trusted for a specific image. However, since any user -can sign the `targets/releases` role it is not trusted -in determining if a signing policy has been met. Therefore, you also need to create a -delegation specifically for Jenkins: - -``` -notary delegation add my_repository targets/jenkins --all-paths /path/to/cert.pem -``` - -We will then publish both these updates (remember to add the correct `-s` and `-d` flags): - -``` -notary publish my_repository -``` - -Informational (Advanced): If we included the `targets/releases` role in determining if a signing policy -had been met, we would run into the situation of images being opportunistically deployed when -an appropriate user signs. In the scenario we have described so far, only images signed by -the "CI" team (containing only the "jenkins" user) should be deployable. If a user "Moby" could -also sign images but was not part of the "CI" team, they might sign and publish a new `targets/releases` -that contained their image. UCP would refuse to deploy this image because it was not signed -by the "CI" team. However, the next time Jenkins published an image, it would update and sign -the `targets/releases` role as whole, enabling "Moby" to deploy their image. - -## Conclusion - -With the Trusted Collection initialized, and delegations created, the Jenkins server will -now use the key we imported to sign any images we push to this repository. - -Through either the Docker CLI, or the UCP browser interface, we will find that any images -that do not meet our signing policy cannot be used. 
The signing policy we set up requires -that the "CI" team must have signed any image we attempt to `docker image pull`, `docker container run`, -or `docker service create`, and the only member of that team is the "jenkins" user. This -restricts us to only running images that were published by our Jenkins CI system. diff --git a/datacenter/ucp/3.0/guides/admin/install/upgrade.md b/datacenter/ucp/3.0/guides/admin/install/upgrade.md index 47a6b2271d..a01974bb97 100644 --- a/datacenter/ucp/3.0/guides/admin/install/upgrade.md +++ b/datacenter/ucp/3.0/guides/admin/install/upgrade.md @@ -1,11 +1,11 @@ --- -title: Upgrade to UCP 2.2 +title: Upgrade to UCP 3.0 description: Learn how to upgrade Docker Universal Control Plane with minimal impact to your users. keywords: UCP, upgrade, update --- This page guides you in upgrading Docker Universal Control Plane (UCP) to -version 2.2. +version 3.0. Before upgrading to a new version of UCP, check the [release notes](../../release-notes/index.md) for this version for information @@ -37,8 +37,8 @@ This allows you to recover if something goes wrong during the upgrade process. > Upgrading and backup archives > > The backup archive is version-specific, so you can't use it during the -> upgrade process. For example, if you create a backup archive for a UCP 2.1 -> swarm, you can't use the archive file after you upgrade to UCP 2.2. +> upgrade process. For example, if you create a backup archive for a UCP 2.2 +> swarm, you can't use the archive file after you upgrade to UCP 3.0. ## Upgrade Docker Engine @@ -112,13 +112,13 @@ all the nodes managed by UCP are healthy. ## Recommended upgrade paths -If you're running a UCP version that's lower than 2.1, first upgrade to the -latest 2.1 version, then upgrade to 2.2. Use these rules for your upgrade -path to UCP 2.2: +If you're running a UCP version that's lower than 2.2, first upgrade to the +latest 2.2 version, then upgrade to 3.0. Use these rules for your upgrade +path to UCP 3.0: -- From UCP 1.1: UCP 1.1 -> UCP 2.1 -> UCP 2.2 -- From UCP 2.0: UCP 2.0 -> UCP 2.1 -> UCP 2.2 -- From UCP 2.1: UCP 2.1 -> UCP 2.2 +- From UCP 1.1: UCP 1.1 -> UCP 2.2 -> UCP 3.0 +- From UCP 2.0: UCP 2.0 -> UCP 2.2 -> UCP 3.0 +- From UCP 2.2: UCP 2.2 -> UCP 3.0 ## Where to go next diff --git a/datacenter/ucp/3.0/guides/architecture.md b/datacenter/ucp/3.0/guides/architecture.md index f74bbb9464..4afa6d38f9 100644 --- a/datacenter/ucp/3.0/guides/architecture.md +++ b/datacenter/ucp/3.0/guides/architecture.md @@ -5,13 +5,13 @@ keywords: ucp, architecture --- Universal Control Plane is a containerized application that runs on -[Docker Enterprise Edition](/ee/index.md) and extends its functionality -to make it easier to deploy, configure, and monitor your applications at scale. +[Docker Enterprise Edition](/ee/index.md), extending its functionality +to simplify the deployment, configuration, and monitoring of your applications at scale. UCP also secures Docker with role-based access control so that only authorized users can make changes and deploy applications to your Docker cluster. -![](images/architecture-1.svg) +![](images/ucp-architecture-1.svg){: .with-border} Once Universal Control Plane (UCP) instance is deployed, developers and IT operations no longer interact with Docker Engine directly, but interact with @@ -25,7 +25,7 @@ the Docker CLI client and Docker Compose. Docker UCP leverages the clustering and orchestration functionality provided by Docker. 
-![](images/architecture-2.svg) +![](images/ucp-architecture-2.svg){: .with-border} A swarm is a collection of nodes that are in the same Docker cluster. [Nodes](/engine/swarm/key-concepts.md) in a Docker swarm operate in one of two @@ -66,38 +66,89 @@ on a node depend on whether the node is a manager or a worker. > on Windows, the `ucp-agent` component is named `ucp-agent-win`. > [Learn about architecture-specific images](admin/install/architecture-specific-images.md). +Internally, UCP uses the following components: + +* Calico 3.0.1 +* Kubernetes 1.8.11 + ### UCP components in manager nodes Manager nodes run all UCP services, including the web UI and data stores that persist the state of UCP. These are the UCP services running on manager nodes: -| UCP component | Description | -|:--------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| ucp-agent | Monitors the node and ensures the right UCP services are running | -| ucp-reconcile | When ucp-agent detects that the node is not running the right UCP components, it starts the ucp-reconcile container to converge the node to its desired state. It is expected for the ucp-reconcile container to remain in an exited state when the node is healthy. | -| ucp-auth-api | The centralized service for identity and authentication used by UCP and DTR | -| ucp-auth-store | Stores authentication configurations and data for users, organizations, and teams | -| ucp-auth-worker | Performs scheduled LDAP synchronizations and cleans authentication and authorization data | -| ucp-client-root-ca | A certificate authority to sign client bundles | -| ucp-cluster-root-ca | A certificate authority used for TLS communication between UCP components | -| ucp-controller | The UCP web server | -| ucp-dsinfo | Docker system information collection script to assist with troubleshooting | -| ucp-kv | Used to store the UCP configurations. Don't use it in your applications, since it's for internal use only | -| ucp-metrics | Used to collect and process metrics for a node, like the disk space available | -| ucp-proxy | A TLS proxy. It allows secure access to the local Docker Engine to UCP components | -| ucp-swarm-manager | Used to provide backwards-compatibility with Docker Swarm | +| UCP component | Description | +|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| k8s_calico-kube-controllers | A cluster-scoped Kubernetes controller used to coordinate Calico networking. Runs on one manager node only. | +| k8s_calico-node | The Calico node agent, which coordinates networking fabric according to the cluster-wide Calico configuration. Part of the `calico-node` daemonset. Runs on all nodes. Configure the CNI plugin by using the `--cni-installer-url` flag. If this flag isn't set, UCP uses Calico as the default CNI plugin. | +| k8s_install-cni_calico-node | A container that's responsible for installing the Calico CNI plugin binaries and configuration on each host. Part of the `calico-node` daemonset. Runs on all nodes. 
| +| k8s_POD_calico-node | Pause container for the `calico-node` pod. | +| k8s_POD_calico-kube-controllers | Pause container for the `calico-kube-controllers` pod. | +| k8s_POD_compose | Pause container for the `compose` pod. | +| k8s_POD_kube-dns | Pause container for the `kube-dns` pod. | +| k8s_ucp-dnsmasq-nanny | A dnsmasq instance used in the Kubernetes DNS Service. Part of the `kube-dns` deployment. Runs on one manager node only. | +| k8s_ucp-kube-compose | A custom Kubernetes resource component that's responsible for translating Compose files into Kubernetes constructs. Part of the `compose` deployment. Runs on one manager node only. | +| k8s_ucp-kube-dns | The main Kubernetes DNS Service, used by pods to [resolve service names](https://v1-8.docs.kubernetes.io/docs/concepts/services-networking/dns-pod-service/). Part of the `kube-dns` deployment. Runs on one manager node only. Provides service discovery for Kubernetes services and pods. A set of three containers deployed via Kubernetes as a single pod. | +| k8s_ucp-kubedns-sidecar | Health checking and metrics daemon of the Kubernetes DNS Service. Part of the `kube-dns` deployment. Runs on one manager node only. | +| ucp-agent | Monitors the node and ensures the right UCP services are running. | +| ucp-auth-api | The centralized service for identity and authentication used by UCP and DTR. | +| ucp-auth-store | Stores authentication configurations and data for users, organizations, and teams. | +| ucp-auth-worker | Performs scheduled LDAP synchronizations and cleans authentication and authorization data. | +| ucp-client-root-ca | A certificate authority to sign client bundles. | +| ucp-cluster-root-ca | A certificate authority used for TLS communication between UCP components. | +| ucp-controller | The UCP web server. | +| ucp-dsinfo | Docker system information collection script to assist with troubleshooting. | +| ucp-interlock | Monitors swarm workloads configured to use Layer 7 routing. Only runs when you enable Layer 7 routing. | +| ucp-interlock-proxy | A service that provides load balancing and proxying for swarm workloads. Only runs when you enable Layer 7 routing. | +| ucp-kube-apiserver | A master component that serves the Kubernetes API. It persists its state in `etcd` directly, and all other components communicate with API server directly. | +| ucp-kube-controller-manager | A master component that manages the desired state of controllers and other Kubernetes objects. It monitors the API server and performs background tasks when needed. | +| ucp-kubelet | The Kubernetes node agent running on every node, which is responsible for running Kubernetes pods, reporting the health of the node, and monitoring resource usage. | +| ucp-kube-proxy | The networking proxy running on every node, which enables pods to contact Kubernetes services and other pods, via cluster IP addresses. | +| ucp-kube-scheduler | A master component that handles scheduling of pods. It communicates with the API server only to obtain workloads that need to be scheduled. | +| ucp-kv | Used to store the UCP configurations. Don't use it in your applications, since it's for internal use only. Also used by Kubernetes components. | +| ucp-metrics | Used to collect and process metrics for a node, like the disk space available. | +| ucp-proxy | A TLS proxy. It allows secure access to the local Docker Engine to UCP components. 
| +| ucp-reconcile | When ucp-agent detects that the node is not running the right UCP components, it starts the ucp-reconcile container to converge the node to its desired state. It is expected for the ucp-reconcile container to remain in an exited state when the node is healthy. | +| ucp-swarm-manager | Used to provide backwards-compatibility with Docker Swarm. | + ### UCP components in worker nodes Worker nodes are the ones where you run your applications. These are the UCP services running on worker nodes: -| UCP component | Description | -|:--------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| ucp-agent | Monitors the node and ensures the right UCP services are running | -| ucp-dsinfo | Docker system information collection script to assist with troubleshooting | -| ucp-reconcile | When ucp-agent detects that the node is not running the right UCP components, it starts the ucp-reconcile container to converge the node to its desired state. It is expected for the ucp-reconcile container to remain in an exited state when the node is healthy. | -| ucp-proxy | A TLS proxy. It allows secure access to the local Docker Engine to UCP components | +| UCP component | Description | +|:----------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| k8s_calico-node | The Calico node agent, which coordinates networking fabric according to the cluster-wide Calico configuration. Part of the `calico-node` daemonset. Runs on all nodes. | +| k8s_install-cni_calico-node | A container that's responsible for installing the Calico CNI plugin binaries and configuration on each host. Part of the `calico-node` daemonset. Runs on all nodes. | +| k8s_POD_calico-node | "Pause" container for the Calico-node pod. By default, this container is hidden, but you can see it by running `docker ps -a`. | +| ucp-agent | Monitors the node and ensures the right UCP services are running | +| ucp-interlock-extension | Helper service that reconfigures the ucp-interlock-proxy service based on the swarm workloads that are running. | +| ucp-interlock-proxy | A service that provides load balancing and proxying for swarm workloads. Only runs when you enable Layer 7 routing. | +| ucp-dsinfo | Docker system information collection script to assist with troubleshooting | +| ucp-kubelet | The kubernetes node agent running on every node, which is responsible for running Kubernetes pods, reporting the health of the node, and monitoring resource usage | +| ucp-kube-proxy | The networking proxy running on every node, which enables pods to contact Kubernetes services and other pods, via cluster IP addresses | +| ucp-reconcile | When ucp-agent detects that the node is not running the right UCP components, it starts the ucp-reconcile container to converge the node to its desired state. It is expected for the ucp-reconcile container to remain in an exited state when the node is healthy. | +| ucp-proxy | A TLS proxy. 
It allows secure access to the local Docker Engine to UCP components | + +## Pause containers + +Every pod in Kubernetes has a _pause_ container, which is an "empty" container +that bootstraps the pod to establish all of the namespaces. Pause containers +hold the cgroups, reservations, and namespaces of a pod before its individual +containers are created. The pause container's image is always present, so the +allocation of the pod's resources is instantaneous. + +By default, pause containers are hidden, but you can see them by running +`docker ps -a`. + +``` +docker ps -a | grep -I pause + +8c9707885bf6 dockereng/ucp-pause:3.0.0-6d332d3 "/pause" 47 hours ago Up 47 hours k8s_POD_calico-kube-controllers-559f6948dc-5c84l_kube-system_d00e5130-1bf4-11e8-b426-0242ac110011_0 +258da23abbf5 dockereng/ucp-pause:3.0.0-6d332d3 "/pause" 47 hours ago Up 47 hours k8s_POD_kube-dns-6d46d84946-tqpzr_kube-system_d63acec6-1bf4-11e8-b426-0242ac110011_0 +2e27b5d31a06 dockereng/ucp-pause:3.0.0-6d332d3 "/pause" 47 hours ago Up 47 hours k8s_POD_compose-698cf787f9-dxs29_kube-system_d5866b3c-1bf4-11e8-b426-0242ac110011_0 +5d96dff73458 dockereng/ucp-pause:3.0.0-6d332d3 "/pause" 47 hours ago Up 47 hours k8s_POD_calico-node-4fjgv_kube-system_d043a0ea-1bf4-11e8-b426-0242ac110011_0 +``` ## Volumes used by UCP @@ -129,6 +180,16 @@ driver. By default, the data for these volumes can be found at `/var/lib/docker/volumes//_data`. +## Configurations use by UCP + +| Configuration name | Description | +|:-------------------------------|:-------------------------------------------------------------------------------------------------| +| com.docker.interlock.extension | Configuration for the Interlock extension service that monitors and configures the proxy service | +| com.docker.interlock.proxy | Configuration for the service responsible for handling user requests and routing them | +| com.docker.license | The Docker EE license | +| com.docker.ucp.config | The UCP controller configuration. Most of the settings available on the UCP UI are stored here | +| com.docker.ucp.interlock.conf | Configuration for the core Interlock service | + ## How you interact with UCP There are two ways to interact with UCP: the web UI or the CLI. @@ -136,17 +197,16 @@ There are two ways to interact with UCP: the web UI or the CLI. You can use the UCP web UI to manage your swarm, grant and revoke user permissions, deploy, configure, manage, and monitor your applications. -![](images/architecture-3.svg) +![](images/ucp-architecture-3.svg){: .with-border} UCP also exposes the standard Docker API, so you can continue using existing tools like the Docker CLI client. Since UCP secures your cluster with role-based access control, you need to configure your Docker CLI client and other client tools to authenticate your requests using -[client certificates](user/access-ucp/index.md) that you can download +[client certificates](user-access/index.md) that you can download from your UCP profile page. 
- ## Where to go next -* [System requirements](admin/install/system-requirements.md) -* [Plan your installation](admin/install/system-requirements.md) +- [System requirements](admin/install/system-requirements.md) +- [Plan your installation](admin/install/plan-installation.md) diff --git a/datacenter/ucp/3.0/guides/images/change-orchestrator-for-node-1.png b/datacenter/ucp/3.0/guides/images/change-orchestrator-for-node-1.png new file mode 100644 index 0000000000..d625a5cd8e Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/change-orchestrator-for-node-1.png differ diff --git a/datacenter/ucp/3.0/guides/images/cli-based-access-2.png b/datacenter/ucp/3.0/guides/images/cli-based-access-2.png new file mode 100644 index 0000000000..c4067603d9 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/cli-based-access-2.png differ diff --git a/datacenter/ucp/3.0/guides/images/cli-based-access-3.png b/datacenter/ucp/3.0/guides/images/cli-based-access-3.png new file mode 100644 index 0000000000..5d274e7207 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/cli-based-access-3.png differ diff --git a/datacenter/ucp/3.0/guides/images/client-bundle.png b/datacenter/ucp/3.0/guides/images/client-bundle.png new file mode 100644 index 0000000000..e4a419ada3 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/client-bundle.png differ diff --git a/datacenter/ucp/3.0/guides/images/create-service-account-1.png b/datacenter/ucp/3.0/guides/images/create-service-account-1.png new file mode 100644 index 0000000000..e850b04384 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/create-service-account-1.png differ diff --git a/datacenter/ucp/3.0/guides/images/create-service-account-2.png b/datacenter/ucp/3.0/guides/images/create-service-account-2.png new file mode 100644 index 0000000000..278ed3da9b Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/create-service-account-2.png differ diff --git a/datacenter/ucp/3.0/guides/images/create-service-account-3.png b/datacenter/ucp/3.0/guides/images/create-service-account-3.png new file mode 100644 index 0000000000..f1bba1a46a Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/create-service-account-3.png differ diff --git a/datacenter/ucp/3.0/guides/images/custom-role-30.png b/datacenter/ucp/3.0/guides/images/custom-role-30.png new file mode 100644 index 0000000000..6143991782 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/custom-role-30.png differ diff --git a/datacenter/ucp/3.0/guides/images/deploy-a-service-5.png b/datacenter/ucp/3.0/guides/images/deploy-a-service-5.png new file mode 100644 index 0000000000..8e465aa42f Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/deploy-a-service-5.png differ diff --git a/datacenter/ucp/3.0/guides/images/deploy-compose-kubernetes-1.png b/datacenter/ucp/3.0/guides/images/deploy-compose-kubernetes-1.png new file mode 100644 index 0000000000..e2877a88be Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/deploy-compose-kubernetes-1.png differ diff --git a/datacenter/ucp/3.0/guides/images/deploy-compose-kubernetes-2.png b/datacenter/ucp/3.0/guides/images/deploy-compose-kubernetes-2.png new file mode 100644 index 0000000000..18454e3b28 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/deploy-compose-kubernetes-2.png differ diff --git a/datacenter/ucp/3.0/guides/images/deploy-compose-kubernetes-3.png b/datacenter/ucp/3.0/guides/images/deploy-compose-kubernetes-3.png new file mode 100644 index 0000000000..dfc731d7ed 
Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/deploy-compose-kubernetes-3.png differ diff --git a/datacenter/ucp/3.0/guides/images/deploy-ingress-controller-1.png b/datacenter/ucp/3.0/guides/images/deploy-ingress-controller-1.png new file mode 100644 index 0000000000..f9b13475bf Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/deploy-ingress-controller-1.png differ diff --git a/datacenter/ucp/3.0/guides/images/deploy-ingress-controller-2.png b/datacenter/ucp/3.0/guides/images/deploy-ingress-controller-2.png new file mode 100644 index 0000000000..ae4c2d5273 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/deploy-ingress-controller-2.png differ diff --git a/datacenter/ucp/3.0/guides/images/deploy-ingress-controller-3.png b/datacenter/ucp/3.0/guides/images/deploy-ingress-controller-3.png new file mode 100644 index 0000000000..6af93ab000 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/deploy-ingress-controller-3.png differ diff --git a/datacenter/ucp/3.0/guides/images/deploy-kubernetes-workload-1.png b/datacenter/ucp/3.0/guides/images/deploy-kubernetes-workload-1.png new file mode 100644 index 0000000000..31eb5a1cdd Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/deploy-kubernetes-workload-1.png differ diff --git a/datacenter/ucp/3.0/guides/images/deploy-kubernetes-workload-2.png b/datacenter/ucp/3.0/guides/images/deploy-kubernetes-workload-2.png new file mode 100644 index 0000000000..287ca51080 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/deploy-kubernetes-workload-2.png differ diff --git a/datacenter/ucp/3.0/guides/images/deploy-kubernetes-workload-3.png b/datacenter/ucp/3.0/guides/images/deploy-kubernetes-workload-3.png new file mode 100644 index 0000000000..4717b49611 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/deploy-kubernetes-workload-3.png differ diff --git a/datacenter/ucp/3.0/guides/images/deploy-kubernetes-workload-4.png b/datacenter/ucp/3.0/guides/images/deploy-kubernetes-workload-4.png new file mode 100644 index 0000000000..c729de596e Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/deploy-kubernetes-workload-4.png differ diff --git a/datacenter/ucp/3.0/guides/images/deploy-kubernetes-workload-5.png b/datacenter/ucp/3.0/guides/images/deploy-kubernetes-workload-5.png new file mode 100644 index 0000000000..ce7b501568 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/deploy-kubernetes-workload-5.png differ diff --git a/datacenter/ucp/3.0/guides/images/deploy-multi-service-app-1.png b/datacenter/ucp/3.0/guides/images/deploy-multi-service-app-1.png new file mode 100644 index 0000000000..c3e79b02d3 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/deploy-multi-service-app-1.png differ diff --git a/datacenter/ucp/3.0/guides/images/deploy-multi-service-app-2.png b/datacenter/ucp/3.0/guides/images/deploy-multi-service-app-2.png new file mode 100644 index 0000000000..ef6298e086 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/deploy-multi-service-app-2.png differ diff --git a/datacenter/ucp/3.0/guides/images/deploy-multi-service-app-3.png b/datacenter/ucp/3.0/guides/images/deploy-multi-service-app-3.png new file mode 100644 index 0000000000..6cd2861668 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/deploy-multi-service-app-3.png differ diff --git a/datacenter/ucp/3.0/guides/images/deploy-multi-service-app-4.png b/datacenter/ucp/3.0/guides/images/deploy-multi-service-app-4.png new file mode 100644 index 
0000000000..bd5ff0b29e Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/deploy-multi-service-app-4.png differ diff --git a/datacenter/ucp/3.0/guides/images/deploy-multi-service-app-5.png b/datacenter/ucp/3.0/guides/images/deploy-multi-service-app-5.png new file mode 100644 index 0000000000..e2b5b332ee Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/deploy-multi-service-app-5.png differ diff --git a/datacenter/ucp/3.0/guides/images/deploy-stack-to-collection-1.png b/datacenter/ucp/3.0/guides/images/deploy-stack-to-collection-1.png new file mode 100644 index 0000000000..06ee08c838 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/deploy-stack-to-collection-1.png differ diff --git a/datacenter/ucp/3.0/guides/images/deploy-stack-to-collection-2.png b/datacenter/ucp/3.0/guides/images/deploy-stack-to-collection-2.png new file mode 100644 index 0000000000..6741c4fd46 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/deploy-stack-to-collection-2.png differ diff --git a/datacenter/ucp/3.0/guides/images/interlock-architecture-1.svg b/datacenter/ucp/3.0/guides/images/interlock-architecture-1.svg new file mode 100644 index 0000000000..83e759938a --- /dev/null +++ b/datacenter/ucp/3.0/guides/images/interlock-architecture-1.svg @@ -0,0 +1,204 @@ + + + + interlock-architecture-1 + Created with Sketch. + + + + + + + + + + + + + Docker swarm managed with UCP + + + + + + + + UCP + + + + + + interlock-proxy:80 + + + + + + + worker node + + + + + + + + + + UCP + + + + + + interlock-proxy:80 + + + + + + + worker node + + + + + + + + + + UCP + + + + + + interlock-extension + + + + + + wordpress:8000 + + + + + + + worker node + + + + + + + + + + + + UCP + + + + + + ucp-interlock + + + + + + + manager node + + + + + + + + + + + + your load balancer + + + + + + + + + + + + + + + + + + + + + http://wordpress.example.org + + + + + + + wordpress-net + + + + + + + + + + + + + + + + + + + + + + + + + + + ucp-interlock + + + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/datacenter/ucp/3.0/guides/images/interlock-default-service-1.png b/datacenter/ucp/3.0/guides/images/interlock-default-service-1.png new file mode 100644 index 0000000000..5c63a95e94 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/interlock-default-service-1.png differ diff --git a/datacenter/ucp/3.0/guides/images/interlock-default-service-2.png b/datacenter/ucp/3.0/guides/images/interlock-default-service-2.png new file mode 100644 index 0000000000..b12883d062 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/interlock-default-service-2.png differ diff --git a/datacenter/ucp/3.0/guides/images/interlock-deploy-production-1.svg b/datacenter/ucp/3.0/guides/images/interlock-deploy-production-1.svg new file mode 100644 index 0000000000..48ccb3f7ca --- /dev/null +++ b/datacenter/ucp/3.0/guides/images/interlock-deploy-production-1.svg @@ -0,0 +1,207 @@ + + + + interlock-deploy-production-1 + Created with Sketch. 
+ + + + + + + + Docker swarm managed with UCP + + + + + + node-6 + + + + + UCP + + + + + + interlock-proxy:80 + + + + + + + worker node + + + + + + + + node-5 + + + + + UCP + + + + + + interlock-proxy:80 + + + + + interlock-proxy:80 + + + + + + + worker node + + + + + + + + node-4 + + + + + UCP + + + + + + interlock-extension + + + + + + wordpress:8000 + + + + + + + worker node + + + + + + + + + + node-3 + + + + + UCP + + + + + + + manager node + + + + + + + + node-2 + + + + + UCP + + + + + + + manager node + + + + + + + + node-1 + + + + + UCP + + + + + + ucp-interlock + + + + + + + manager node + + + + + + + + + + + + your load balancer + + + + + + + + + + + + + + + + + + + + + http://wordpress.example.org + + + + + + \ No newline at end of file diff --git a/datacenter/ucp/3.0/guides/images/interlock-install-1.svg b/datacenter/ucp/3.0/guides/images/interlock-install-1.svg new file mode 100644 index 0000000000..649439a15d --- /dev/null +++ b/datacenter/ucp/3.0/guides/images/interlock-install-1.svg @@ -0,0 +1,198 @@ + + + + use-domain-names-1 + Created with Sketch. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 192.168.99.104 + + + + + + worker node + + + + + + + UCP + + + + + + wordpress:8000 + + + + + + + 192.168.99.103 + + + + + + worker node + + + + + + + UCP + + + + + + + + + 192.168.99.102 + + + + + + manager node + + + + + + + UCP + + + + + + + 192.168.99.101 + + + + + + manager node + + + + + + + UCP + + + + + + + 192.168.99.100 + + + + + + manager node + + + + + + + UCP + + + + + + + + + + + swarm routing mesh + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 192.168.99.100:8000 + + + + + + \ No newline at end of file diff --git a/datacenter/ucp/3.0/guides/images/interlock-install-2.svg b/datacenter/ucp/3.0/guides/images/interlock-install-2.svg new file mode 100644 index 0000000000..070eeb9340 --- /dev/null +++ b/datacenter/ucp/3.0/guides/images/interlock-install-2.svg @@ -0,0 +1,198 @@ + + + + use-domain-names-2 + Created with Sketch. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 192.168.99.104 + + + + + + worker node + + + + + + + UCP + + + + + + wordpress:8000 + + + + + + + 192.168.99.103 + + + + + + worker node + + + + + + + UCP + + + + + + + + + 192.168.99.102 + + + + + + manager node + + + + + + + UCP + + + + + + + 192.168.99.101 + + + + + + manager node + + + + + + + UCP + + + + + + + 192.168.99.100 + + + + + + manager node + + + + + + + UCP + + + + + + + + + + + HTTP routing mesh + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + wordpress.example.org:80 + + + + + + \ No newline at end of file diff --git a/datacenter/ucp/3.0/guides/images/interlock-install-3.png b/datacenter/ucp/3.0/guides/images/interlock-install-3.png new file mode 100644 index 0000000000..9ecc24f6fc Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/interlock-install-3.png differ diff --git a/datacenter/ucp/3.0/guides/images/interlock-overview-1.svg b/datacenter/ucp/3.0/guides/images/interlock-overview-1.svg new file mode 100644 index 0000000000..20bbc751d1 --- /dev/null +++ b/datacenter/ucp/3.0/guides/images/interlock-overview-1.svg @@ -0,0 +1,180 @@ + + + + interlock-overview-1 + Created with Sketch. 
+ + + + + + + + + + Docker swarm managed with UCP + + + + + + node-5 + + + + + + worker node + + + + + + + UCP + + + + + + wordpress:8000 + + + + + + + node-4 + + + + + + worker node + + + + + + + UCP + + + + + + + + + node-3 + + + + + + manager node + + + + + + + UCP + + + + + + + node-2 + + + + + + manager node + + + + + + + UCP + + + + + + + node-1 + + + + + + manager node + + + + + + + UCP + + + + + + + + + + + swarm routing mesh + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + http://node-5:8000 + + + + + \ No newline at end of file diff --git a/datacenter/ucp/3.0/guides/images/interlock-overview-2.svg b/datacenter/ucp/3.0/guides/images/interlock-overview-2.svg new file mode 100644 index 0000000000..8f9b9ad0d7 --- /dev/null +++ b/datacenter/ucp/3.0/guides/images/interlock-overview-2.svg @@ -0,0 +1,186 @@ + + + + interlock-overview-2 + Created with Sketch. + + + + + + + + + + Docker swarm managed with UCP + + + + + + node-5 + + + + + + worker node + + + + + + + UCP + + + + + + wordpress:8000 + + + + + + + node-4 + + + + + + worker node + + + + + + + UCP + + + + + + + + + node-3 + + + + + + manager node + + + + + + + UCP + + + + + + + node-2 + + + + + + manager node + + + + + + + UCP + + + + + + + node-1 + + + + + + manager node + + + + + + + UCP + + + + + + + + + + + swarm routing mesh + + + + + + layer 7 routing + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + http://wordpress.example.org + + + + + \ No newline at end of file diff --git a/datacenter/ucp/3.0/guides/images/interlock-tls-1.png b/datacenter/ucp/3.0/guides/images/interlock-tls-1.png new file mode 100644 index 0000000000..d49625d287 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/interlock-tls-1.png differ diff --git a/datacenter/ucp/3.0/guides/images/interlock-tls-2.png b/datacenter/ucp/3.0/guides/images/interlock-tls-2.png new file mode 100644 index 0000000000..d906147e02 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/interlock-tls-2.png differ diff --git a/datacenter/ucp/3.0/guides/images/interlock-tls-3.png b/datacenter/ucp/3.0/guides/images/interlock-tls-3.png new file mode 100644 index 0000000000..151055ada7 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/interlock-tls-3.png differ diff --git a/datacenter/ucp/3.0/guides/images/isolate-nodes-10.png b/datacenter/ucp/3.0/guides/images/isolate-nodes-10.png new file mode 100644 index 0000000000..a997704510 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/isolate-nodes-10.png differ diff --git a/datacenter/ucp/3.0/guides/images/isolate-nodes-5.png b/datacenter/ucp/3.0/guides/images/isolate-nodes-5.png new file mode 100644 index 0000000000..59f74cf267 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/isolate-nodes-5.png differ diff --git a/datacenter/ucp/3.0/guides/images/isolate-nodes-6.png b/datacenter/ucp/3.0/guides/images/isolate-nodes-6.png new file mode 100644 index 0000000000..2674a02259 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/isolate-nodes-6.png differ diff --git a/datacenter/ucp/3.0/guides/images/isolate-nodes-7.png b/datacenter/ucp/3.0/guides/images/isolate-nodes-7.png new file mode 100644 index 0000000000..f6a4bedbe9 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/isolate-nodes-7.png differ diff --git a/datacenter/ucp/3.0/guides/images/isolate-nodes-8.png b/datacenter/ucp/3.0/guides/images/isolate-nodes-8.png new file mode 100644 index 0000000000..66c62569da Binary files /dev/null and 
b/datacenter/ucp/3.0/guides/images/isolate-nodes-8.png differ diff --git a/datacenter/ucp/3.0/guides/images/isolate-nodes-9.png b/datacenter/ucp/3.0/guides/images/isolate-nodes-9.png new file mode 100644 index 0000000000..c2bfd3ed83 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/isolate-nodes-9.png differ diff --git a/datacenter/ucp/3.0/guides/images/isolate-volumes-0.png b/datacenter/ucp/3.0/guides/images/isolate-volumes-0.png new file mode 100644 index 0000000000..70a8c16ff5 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/isolate-volumes-0.png differ diff --git a/datacenter/ucp/3.0/guides/images/isolate-volumes-0a.png b/datacenter/ucp/3.0/guides/images/isolate-volumes-0a.png new file mode 100644 index 0000000000..7116bb0ddb Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/isolate-volumes-0a.png differ diff --git a/datacenter/ucp/3.0/guides/images/join-nodes-to-cluster-1.png b/datacenter/ucp/3.0/guides/images/join-nodes-to-cluster-1.png new file mode 100644 index 0000000000..c522d4d64d Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/join-nodes-to-cluster-1.png differ diff --git a/datacenter/ucp/3.0/guides/images/join-nodes-to-cluster-2.png b/datacenter/ucp/3.0/guides/images/join-nodes-to-cluster-2.png new file mode 100644 index 0000000000..7e07794d2e Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/join-nodes-to-cluster-2.png differ diff --git a/datacenter/ucp/3.0/guides/images/join-nodes-to-cluster-3.png b/datacenter/ucp/3.0/guides/images/join-nodes-to-cluster-3.png new file mode 100644 index 0000000000..b2a475e2b5 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/join-nodes-to-cluster-3.png differ diff --git a/datacenter/ucp/3.0/guides/images/join-windows-nodes-to-cluster-1.png b/datacenter/ucp/3.0/guides/images/join-windows-nodes-to-cluster-1.png new file mode 100644 index 0000000000..3519ffb121 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/join-windows-nodes-to-cluster-1.png differ diff --git a/datacenter/ucp/3.0/guides/images/kube-create-role.png b/datacenter/ucp/3.0/guides/images/kube-create-role.png new file mode 100644 index 0000000000..a7c56e7e32 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/kube-create-role.png differ diff --git a/datacenter/ucp/3.0/guides/images/kube-grant-rolebinding.png b/datacenter/ucp/3.0/guides/images/kube-grant-rolebinding.png new file mode 100644 index 0000000000..e8c739273d Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/kube-grant-rolebinding.png differ diff --git a/datacenter/ucp/3.0/guides/images/kube-grant-roleselect.png b/datacenter/ucp/3.0/guides/images/kube-grant-roleselect.png new file mode 100644 index 0000000000..e72d915aad Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/kube-grant-roleselect.png differ diff --git a/datacenter/ucp/3.0/guides/images/kube-grant-wizard.png b/datacenter/ucp/3.0/guides/images/kube-grant-wizard.png new file mode 100644 index 0000000000..974b9f312e Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/kube-grant-wizard.png differ diff --git a/datacenter/ucp/3.0/guides/images/kube-rbac-grants.png b/datacenter/ucp/3.0/guides/images/kube-rbac-grants.png new file mode 100644 index 0000000000..9cb1bcfdc4 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/kube-rbac-grants.png differ diff --git a/datacenter/ucp/3.0/guides/images/kube-rbac-roles.png b/datacenter/ucp/3.0/guides/images/kube-rbac-roles.png new file mode 100644 index 0000000000..a6cb551bf0 Binary files 
/dev/null and b/datacenter/ucp/3.0/guides/images/kube-rbac-roles.png differ diff --git a/datacenter/ucp/3.0/guides/images/kube-role-create.png b/datacenter/ucp/3.0/guides/images/kube-role-create.png new file mode 100644 index 0000000000..0a189e293f Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/kube-role-create.png differ diff --git a/datacenter/ucp/3.0/guides/images/kubernetes-version.png b/datacenter/ucp/3.0/guides/images/kubernetes-version.png new file mode 100644 index 0000000000..60a248e849 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/kubernetes-version.png differ diff --git a/datacenter/ucp/3.0/guides/images/manage-and-deploy-private-images-1.png b/datacenter/ucp/3.0/guides/images/manage-and-deploy-private-images-1.png new file mode 100644 index 0000000000..66465741e5 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/manage-and-deploy-private-images-1.png differ diff --git a/datacenter/ucp/3.0/guides/images/manage-and-deploy-private-images-2.png b/datacenter/ucp/3.0/guides/images/manage-and-deploy-private-images-2.png new file mode 100644 index 0000000000..6954506496 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/manage-and-deploy-private-images-2.png differ diff --git a/datacenter/ucp/3.0/guides/images/manage-and-deploy-private-images-3.png b/datacenter/ucp/3.0/guides/images/manage-and-deploy-private-images-3.png new file mode 100644 index 0000000000..b39138c587 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/manage-and-deploy-private-images-3.png differ diff --git a/datacenter/ucp/3.0/guides/images/manage-and-deploy-private-images-4.png b/datacenter/ucp/3.0/guides/images/manage-and-deploy-private-images-4.png new file mode 100644 index 0000000000..26b91d3f4d Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/manage-and-deploy-private-images-4.png differ diff --git a/datacenter/ucp/3.0/guides/images/manage-secrets-4a.png b/datacenter/ucp/3.0/guides/images/manage-secrets-4a.png new file mode 100644 index 0000000000..adb5d85db2 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/manage-secrets-4a.png differ diff --git a/datacenter/ucp/3.0/guides/images/migrate-kubernetes-roles-1.png b/datacenter/ucp/3.0/guides/images/migrate-kubernetes-roles-1.png new file mode 100644 index 0000000000..3bb600c12f Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/migrate-kubernetes-roles-1.png differ diff --git a/datacenter/ucp/3.0/guides/images/migrate-kubernetes-roles-2.png b/datacenter/ucp/3.0/guides/images/migrate-kubernetes-roles-2.png new file mode 100644 index 0000000000..d609ab7f76 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/migrate-kubernetes-roles-2.png differ diff --git a/datacenter/ucp/3.0/guides/images/overview-1.png b/datacenter/ucp/3.0/guides/images/overview-1.png new file mode 100644 index 0000000000..7bb908139f Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/overview-1.png differ diff --git a/datacenter/ucp/3.0/guides/images/overview-2.png b/datacenter/ucp/3.0/guides/images/overview-2.png new file mode 100644 index 0000000000..22261dd985 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/overview-2.png differ diff --git a/datacenter/ucp/3.0/guides/images/rbac-pull-images-1.png b/datacenter/ucp/3.0/guides/images/rbac-pull-images-1.png new file mode 100644 index 0000000000..9802b4cc1b Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/rbac-pull-images-1.png differ diff --git 
a/datacenter/ucp/3.0/guides/images/rbac-pull-images-2.png b/datacenter/ucp/3.0/guides/images/rbac-pull-images-2.png new file mode 100644 index 0000000000..cea41ea5c3 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/rbac-pull-images-2.png differ diff --git a/datacenter/ucp/3.0/guides/images/rbac-roles.png b/datacenter/ucp/3.0/guides/images/rbac-roles.png new file mode 100644 index 0000000000..9a4902f2ba Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/rbac-roles.png differ diff --git a/datacenter/ucp/3.0/guides/images/route-simple-app-1.png b/datacenter/ucp/3.0/guides/images/route-simple-app-1.png new file mode 100644 index 0000000000..38a4402e41 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/route-simple-app-1.png differ diff --git a/datacenter/ucp/3.0/guides/images/saml_enabled.png b/datacenter/ucp/3.0/guides/images/saml_enabled.png new file mode 100644 index 0000000000..022c9e37fb Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/saml_enabled.png differ diff --git a/datacenter/ucp/3.0/guides/images/saml_settings.png b/datacenter/ucp/3.0/guides/images/saml_settings.png new file mode 100644 index 0000000000..89d1d437de Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/saml_settings.png differ diff --git a/datacenter/ucp/3.0/guides/images/ucp-architecture-1.svg b/datacenter/ucp/3.0/guides/images/ucp-architecture-1.svg new file mode 100644 index 0000000000..abd4a32d15 --- /dev/null +++ b/datacenter/ucp/3.0/guides/images/ucp-architecture-1.svg @@ -0,0 +1,71 @@ + + + + architecture-1 + Created with Sketch. + + + + + + + + + + cloud servers + + + + + + virtual servers + + + + + + physical servers + + + + + + + Docker EE Engine + + + + + + Universal Control Plane + + + + + + Docker Trusted Registry + + + + + + your applications + + + + + + + deploy and manage + + + + + + + + + + + + \ No newline at end of file diff --git a/datacenter/ucp/3.0/guides/images/ucp-architecture-2.svg b/datacenter/ucp/3.0/guides/images/ucp-architecture-2.svg new file mode 100644 index 0000000000..46e7833789 --- /dev/null +++ b/datacenter/ucp/3.0/guides/images/ucp-architecture-2.svg @@ -0,0 +1,166 @@ + + + + architecture-2 + Created with Sketch. + + + + + Docker swarm + + + + + + + + + worker node + + + + + + + Docker EE + + + + + + UCP agent + + + + + + UCP worker + + + + + + + + + + worker node + + + + + + + Docker EE + + + + + + UCP agent + + + + + + UCP worker + + + + + + + + + + + + manager node + + + + + + + Docker EE + + + + + + UCP agent + + + + + + UCP manager + + + + + + + + + + manager node + + + + + + + Docker EE + + + + + + UCP agent + + + + + + UCP manager + + + + + + + + + + manager node + + + + + + + Docker EE + + + + + + UCP agent + + + + + + UCP manager + + + + + + + + + + \ No newline at end of file diff --git a/datacenter/ucp/3.0/guides/images/ucp-architecture-3.svg b/datacenter/ucp/3.0/guides/images/ucp-architecture-3.svg new file mode 100644 index 0000000000..6a9c66a0a3 --- /dev/null +++ b/datacenter/ucp/3.0/guides/images/ucp-architecture-3.svg @@ -0,0 +1,233 @@ + + + + architecture-3 + Created with Sketch. 
+ + + + + + + + + + + + + + + + + + + + + + + Docker swarm + + + + + + + + your load balancer + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + worker node + + + + + + + Docker EE + + + + + + UCP agent + + + + + + UCP worker + + + + + + + + + + worker node + + + + + + + Docker EE + + + + + + UCP agent + + + + + + UCP worker + + + + + + + + + + + + manager node + + + + + + + Docker EE + + + + + + UCP agent + + + + + + UCP manager + + + + + + + + + + manager node + + + + + + + Docker EE + + + + + + UCP agent + + + + + + UCP manager + + + + + + + + + + manager node + + + + + + + Docker EE + + + + + + UCP agent + + + + + + UCP manager + + + + + + + + + + + + UI + + + + + + CLI + + + + + + + \ No newline at end of file diff --git a/datacenter/ucp/3.0/guides/images/ucp_usermgmt_users_create01.png b/datacenter/ucp/3.0/guides/images/ucp_usermgmt_users_create01.png new file mode 100644 index 0000000000..685c9d8c92 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/ucp_usermgmt_users_create01.png differ diff --git a/datacenter/ucp/3.0/guides/images/ucp_usermgmt_users_create02.png b/datacenter/ucp/3.0/guides/images/ucp_usermgmt_users_create02.png new file mode 100644 index 0000000000..936dae2e59 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/ucp_usermgmt_users_create02.png differ diff --git a/datacenter/ucp/3.0/guides/images/use-constraints-in-stack-deployment-2.png b/datacenter/ucp/3.0/guides/images/use-constraints-in-stack-deployment-2.png index 67b0e5d299..3d58cd0675 100644 Binary files a/datacenter/ucp/3.0/guides/images/use-constraints-in-stack-deployment-2.png and b/datacenter/ucp/3.0/guides/images/use-constraints-in-stack-deployment-2.png differ diff --git a/datacenter/ucp/3.0/guides/images/use-constraints-in-stack-deployment.png b/datacenter/ucp/3.0/guides/images/use-constraints-in-stack-deployment.png index 0c041d16c2..358d15996b 100644 Binary files a/datacenter/ucp/3.0/guides/images/use-constraints-in-stack-deployment.png and b/datacenter/ucp/3.0/guides/images/use-constraints-in-stack-deployment.png differ diff --git a/datacenter/ucp/3.0/guides/images/use-externally-signed-certs-2.png b/datacenter/ucp/3.0/guides/images/use-externally-signed-certs-2.png index 071cd1e10b..b08d65659b 100644 Binary files a/datacenter/ucp/3.0/guides/images/use-externally-signed-certs-2.png and b/datacenter/ucp/3.0/guides/images/use-externally-signed-certs-2.png differ diff --git a/datacenter/ucp/3.0/guides/images/use-nfs-volume-1.png b/datacenter/ucp/3.0/guides/images/use-nfs-volume-1.png new file mode 100644 index 0000000000..7e8b573ca9 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/use-nfs-volume-1.png differ diff --git a/datacenter/ucp/3.0/guides/images/use-nfs-volume-2.png b/datacenter/ucp/3.0/guides/images/use-nfs-volume-2.png new file mode 100644 index 0000000000..0f1f1824c0 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/use-nfs-volume-2.png differ diff --git a/datacenter/ucp/3.0/guides/images/use-nfs-volume-3.png b/datacenter/ucp/3.0/guides/images/use-nfs-volume-3.png new file mode 100644 index 0000000000..47fc63e364 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/use-nfs-volume-3.png differ diff --git a/datacenter/ucp/3.0/guides/images/use-nfs-volume-4.png b/datacenter/ucp/3.0/guides/images/use-nfs-volume-4.png new file mode 100644 index 0000000000..56cb6abb9b Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/use-nfs-volume-4.png differ diff --git a/datacenter/ucp/3.0/guides/images/use-nfs-volume-5.png 
b/datacenter/ucp/3.0/guides/images/use-nfs-volume-5.png new file mode 100644 index 0000000000..07073cc859 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/use-nfs-volume-5.png differ diff --git a/datacenter/ucp/3.0/guides/images/view-namespace-resources-1.png b/datacenter/ucp/3.0/guides/images/view-namespace-resources-1.png new file mode 100644 index 0000000000..9fb281cda3 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/view-namespace-resources-1.png differ diff --git a/datacenter/ucp/3.0/guides/images/view-namespace-resources-2.png b/datacenter/ucp/3.0/guides/images/view-namespace-resources-2.png new file mode 100644 index 0000000000..81f249d46e Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/view-namespace-resources-2.png differ diff --git a/datacenter/ucp/3.0/guides/images/view-namespace-resources-3.png b/datacenter/ucp/3.0/guides/images/view-namespace-resources-3.png new file mode 100644 index 0000000000..afca7bc7ea Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/view-namespace-resources-3.png differ diff --git a/datacenter/ucp/3.0/guides/images/view-namespace-resources-4.png b/datacenter/ucp/3.0/guides/images/view-namespace-resources-4.png new file mode 100644 index 0000000000..1a3e41f131 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/view-namespace-resources-4.png differ diff --git a/datacenter/ucp/3.0/guides/images/view-namespace-resources-5.png b/datacenter/ucp/3.0/guides/images/view-namespace-resources-5.png new file mode 100644 index 0000000000..19f5336bae Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/view-namespace-resources-5.png differ diff --git a/datacenter/ucp/3.0/guides/images/web-based-access-1.png b/datacenter/ucp/3.0/guides/images/web-based-access-1.png new file mode 100644 index 0000000000..fb7304147d Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/web-based-access-1.png differ diff --git a/datacenter/ucp/3.0/guides/images/web-based-access-2.png b/datacenter/ucp/3.0/guides/images/web-based-access-2.png index 65313e945a..00437d1c22 100644 Binary files a/datacenter/ucp/3.0/guides/images/web-based-access-2.png and b/datacenter/ucp/3.0/guides/images/web-based-access-2.png differ diff --git a/datacenter/ucp/3.0/guides/index.md b/datacenter/ucp/3.0/guides/index.md index a054b6794a..626dbe7c3c 100644 --- a/datacenter/ucp/3.0/guides/index.md +++ b/datacenter/ucp/3.0/guides/index.md @@ -1,41 +1,68 @@ --- title: Universal Control Plane overview -description: Learn about Docker Universal Control Plane, the enterprise-grade cluster - management solution from Docker. -keywords: ucp, overview, orchestration, clustering -redirect_from: -- /ucp/ +description: | + Learn about Docker Universal Control Plane, the enterprise-grade cluster management solution from Docker. +keywords: ucp, overview, orchestration, cluster --- Docker Universal Control Plane (UCP) is the enterprise-grade cluster management solution from Docker. You install it on-premises or in your virtual private -cloud, and it helps you manage your Docker swarm and applications through a +cloud, and it helps you manage your Docker cluster and applications through a single interface. 
-![](../../../images/ucp.png){: .with-border} +![](images/overview-1.png){: .with-border} -## Centralized swarm management +## Centralized cluster management With Docker, you can join up to thousands of physical or virtual machines -together to create a container cluster, or swarm, allowing you to deploy your +together to create a container cluster that allows you to deploy your applications at scale. Docker Universal Control Plane extends the -functionality provided by Docker to make it easier to manage your swarm +functionality provided by Docker to make it easier to manage your cluster from a centralized place. You can manage and monitor your container cluster using a graphical UI. -![](../../../images/try-ddc-2.png){: .with-border} +![](images/overview-2.png){: .with-border} -Since UCP exposes the standard Docker API, you can continue using the tools +## Deploy, manage, and monitor + +With Docker UCP, you can manage from a centralized place all of the computing +resources you have available, like nodes, volumes, and networks. + +You can also deploy and monitor your applications and services. + +## Built-in security and access control + +Docker UCP has its own built-in authentication mechanism and integrates with +LDAP services. It also has role-based access control (RBAC), so that you can +control who can access and make changes to your cluster and applications. +[Learn about role-based access control](authorization/index.md). + +![](images/overview-3.png){: .with-border} + +Docker UCP integrates with Docker Trusted Registry so that you can keep the +Docker images you use for your applications behind your firewall, where they +are safe and can't be tampered with. + +You can also enforce security policies and only allow running applications +that use Docker images you know and trust. + +## Use the Docker CLI client + +Because UCP exposes the standard Docker API, you can continue using the tools you already know, including the Docker CLI client, to deploy and manage your applications. -As an example, you can use the `docker info` command to check the -status of a Docker swarm managed by UCP: +For example, you can use the `docker info` command to check the status of a +cluster that's managed by UCP: -```none -$ docker info +```bash +docker info +``` +This command produces the output that you expect from the Docker EE Engine: + +```bash Containers: 38 Running: 23 Paused: 0 @@ -51,30 +78,7 @@ Managers: 1 … ``` -## Deploy, manage, and monitor - -With Docker UCP, you can manage from a centralized place all of the computing -resources you have available, like nodes, volumes, and networks. - -You can also deploy and monitor your applications and services. - -## Built-in security and access control - -Docker UCP has its own built-in authentication mechanism and integrates with -LDAP services. It also has role-based access control (RBAC), so that you can -control who can access and make changes to your swarm and applications. -[Learn about role-based access control](access-control/index.md). - -![](images/overview-3.png){: .with-border} - -Docker UCP integrates with Docker Trusted Registry so that you can keep the -Docker images you use for your applications behind your firewall, where they -are safe and can't be tampered with. - -You can also enforce security policies and only allow running applications -that use Docker images you know and trust. 
- ## Where to go next -* [UCP architecture](architecture.md) -* [Install UCP](admin/install/index.md) +- [Install UCP](admin/install/index.md) +- [Docker EE Platform 2.0 architecture](/ee/docker-ee-architecture.md) diff --git a/datacenter/ucp/3.0/guides/user/access-ucp/kubectl.md b/datacenter/ucp/3.0/guides/user/access-ucp/kubectl.md new file mode 100644 index 0000000000..f7d73a825f --- /dev/null +++ b/datacenter/ucp/3.0/guides/user/access-ucp/kubectl.md @@ -0,0 +1,104 @@ +--- +title: Install the Kubernetes CLI +description: Learn how to install kubectl, the Kubernetes command-line tool, on Docker Universal Control Plane. +keywords: ucp, cli, administration, kubectl, Kubernetes +--- + +Docker EE 2.0 and higher deploys Kubernetes as part of a UCP installation. +Deploy, manage, and monitor Kubernetes workloads from the UCP dashboard. Users can +also interact with the Kubernetes deployment through the Kubernetes +command-line tool named kubectl. + +To access the UCP cluster with kubectl, install the [UCP client bundle](cli.md). + +> Kubernetes on Docker for Mac and Docker for Windows +> +> Docker for Mac and Docker for Windows provide a standalone Kubernetes server that +> runs on your development machine, with kubectl installed by default. This installation is +> separate from the Kubernetes deployment on a UCP cluster. +> Learn how to [deploy to Kubernetes on Docker for Mac](/docker-for-mac/kubernetes.md). +{: .important} + +## Install the kubectl binary + +To use kubectl, install the binary on a workstation which has access to your UCP endpoint. + +> Must install compatible version +> +> Kubernetes only guarantees compatibility with kubectl versions that are +/-1 minor versions away from the Kubernetes version. +{: .important} + +First, find which version of Kubernetes is running in your cluster. This can be found +within the Universal Control Plane dashboard or at the UCP API endpoint [version](/reference/ucp/3.0/api/). + +From the UCP dashboard, click on **About Docker EE** within the **Admin** menu in the top left corner + of the dashboard. Then navigate to **Kubernetes**. + + ![Find Kubernetes version](../images/kubernetes-version.png){: .with-border} + +Once you have the Kubernetes version, install the kubectl client for the relevant +operating system. + + +
+**MacOS**
    +``` +# Set the Kubernetes version as found in the UCP Dashboard or API +k8sversion=v1.8.11 + +# Get the kubectl binary. +curl -LO https://storage.googleapis.com/kubernetes-release/release/$k8sversion/bin/darwin/amd64/kubectl + +# Make the kubectl binary executable. +chmod +x ./kubectl + +# Move the kubectl executable to /usr/local/bin. +sudo mv ./kubectl /usr/local/bin/kubectl +``` +
    +
+**Linux**
    +``` +# Set the Kubernetes version as found in the UCP Dashboard or API +k8sversion=v1.8.11 + +# Get the kubectl binary. +curl -LO https://storage.googleapis.com/kubernetes-release/release/$k8sversion/bin/linux/amd64/kubectl + +# Make the kubectl binary executable. +chmod +x ./kubectl + +# Move the kubectl executable to /usr/local/bin. +sudo mv ./kubectl /usr/local/bin/kubectl +``` +
    +
+**Windows**
+You can download the binary from this [link](https://storage.googleapis.com/kubernetes-release/release/v1.8.11/bin/windows/amd64/kubectl.exe) + +If you have curl installed on your system, you can use these commands in PowerShell. + +```cmd +$env:k8sversion = "v1.8.11" + +curl https://storage.googleapis.com/kubernetes-release/release/$env:k8sversion/bin/windows/amd64/kubectl.exe +```
    +
    +
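Whichever operating system you use, it is worth confirming that the binary is on your `PATH` before moving on. This quick check is not part of the original procedure, just a common sanity test:

```bash
# Print the version of the kubectl client you just installed.
# It should match the Kubernetes version shown in the UCP dashboard,
# v1.8.11 in the examples above.
kubectl version --client
```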
    + +## Using kubectl with a Docker EE cluster + +Docker Enterprise Edition provides users unique certificates and keys to authenticate against + the Docker and Kubernetes APIs. Instructions on how to download these certificates and how to + configure kubectl to use them can be found in [CLI-based access.](cli.md#download-client-certificates) + +## Where to go next + +- [Deploy a workload to a Kubernetes cluster](../kubernetes.md) +- [Deploy to Kubernetes on Docker for Mac](/docker-for-mac/kubernetes.md) + diff --git a/datacenter/ucp/3.0/guides/user/interlock/architecture.md b/datacenter/ucp/3.0/guides/user/interlock/architecture.md new file mode 100644 index 0000000000..3b29d88561 --- /dev/null +++ b/datacenter/ucp/3.0/guides/user/interlock/architecture.md @@ -0,0 +1,73 @@ +--- +title: Interlock architecture +description: Learn more about the architecture of the layer 7 routing solution + for Docker swarm services. +keywords: routing, proxy +--- + +The layer 7 routing solution for swarm workloads is known as Interlock, and has +three components: + +* **Interlock-proxy**: This is a proxy/load-balancing service that handles the +requests from the outside world. By default this service is a containerized +NGINX deployment. +* **Interlock-extension**: This is a helper service that generates the +configuration used by the proxy service. +* **Interlock**: This is the central piece of the layer 7 routing solution. +It uses the Docker API to monitor events, and manages the extension and +proxy services. + +This is what the default configuration looks like, once you enable layer 7 +routing in UCP: + +![](../images/interlock-architecture-1.svg) + +An Interlock service starts running on a manager node, an Interlock-extension +service starts running on a worker node, and two replicas of the +Interlock-proxy service run on worker nodes. + +If you don't have any worker nodes in your cluster, then all Interlock +components run on manager nodes. + +## Deployment lifecycle + +By default layer 7 routing is disabled, so an administrator first needs to +enable this service from the UCP web UI. + +Once that happens: + +1. UCP creates the `ucp-interlock` overlay network. +2. UCP deploys the `ucp-interlock` service and attaches it both to the Docker +socket and the overlay network that was created. This allows the Interlock +service to use the Docker API. That's also the reason why this service needs to +run on a manger node. +3. The `ucp-interlock` service starts the `ucp-interlock-extension` service +and attaches it to the `ucp-interlock` network. This allows both services +to communicate. +4. The `ucp-interlock-extension` generates a configuration to be used by +the proxy service. By default the proxy service is NGINX, so this service +generates a standard NGINX configuration. +5. The `ucp-interlock` service takes the proxy configuration and uses it to +start the `ucp-interlock-proxy` service. + +At this point everything is ready for you to start using the layer 7 routing +service with your swarm workloads. + +## Routing lifecycle + +Once the layer 7 routing service is enabled, you apply specific labels to +your swarm services. The labels define the hostnames that are routed to the +service, the ports used, and other routing configurations. + +Once you deploy or update a swarm service with those labels: + +1. The `ucp-interlock` service is monitoring the Docker API for events and +publishes the events to the `ucp-interlock-extension` service. +2. 
That service in turn generates a new configuration for the proxy service, +based on the labels you've added to your services. +3. The `ucp-interlock` service takes the new configuration and reconfigures the +`ucp-interlock-proxy` to start using it. + +This all happens in milliseconds and with rolling updates. Even though +services are being reconfigured, users won't notice it. + diff --git a/datacenter/ucp/3.0/guides/user/interlock/deploy/configuration-reference.md b/datacenter/ucp/3.0/guides/user/interlock/deploy/configuration-reference.md new file mode 100644 index 0000000000..daf93c97c3 --- /dev/null +++ b/datacenter/ucp/3.0/guides/user/interlock/deploy/configuration-reference.md @@ -0,0 +1,146 @@ +--- +title: Layer 7 routing configuration reference +description: Learn the configuration options for the UCP layer 7 routing solution +keywords: routing, proxy +--- + +Once you enable the layer 7 routing service, UCP creates the +`com.docker.ucp.interlock.conf-1` configuration and uses it to configure all +the internal components of this service. + +The configuration is managed as a TOML file. + +## Example configuration + +Here's an example of the default configuration used by UCP: + +```toml +ListenAddr = ":8080" +DockerURL = "unix:///var/run/docker.sock" +AllowInsecure = false +PollInterval = "3s" + +[Extensions] + [Extensions.default] + Image = "docker/ucp-interlock-extension:3.0.1" + ServiceName = "ucp-interlock-extension" + Args = [] + Constraints = ["node.labels.com.docker.ucp.orchestrator.swarm==true", "node.platform.os==linux"] + ProxyImage = "docker/ucp-interlock-proxy:3.0.1" + ProxyServiceName = "ucp-interlock-proxy" + ProxyConfigPath = "/etc/nginx/nginx.conf" + ProxyReplicas = 2 + ProxyStopSignal = "SIGQUIT" + ProxyStopGracePeriod = "5s" + ProxyConstraints = ["node.labels.com.docker.ucp.orchestrator.swarm==true", "node.platform.os==linux"] + PublishMode = "ingress" + PublishedPort = 80 + TargetPort = 80 + PublishedSSLPort = 8443 + TargetSSLPort = 443 + [Extensions.default.Labels] + "com.docker.ucp.InstanceID" = "fewho8k85kyc6iqypvvdh3ntm" + [Extensions.default.ContainerLabels] + "com.docker.ucp.InstanceID" = "fewho8k85kyc6iqypvvdh3ntm" + [Extensions.default.ProxyLabels] + "com.docker.ucp.InstanceID" = "fewho8k85kyc6iqypvvdh3ntm" + [Extensions.default.ProxyContainerLabels] + "com.docker.ucp.InstanceID" = "fewho8k85kyc6iqypvvdh3ntm" + [Extensions.default.Config] + Version = "" + User = "nginx" + PidPath = "/var/run/proxy.pid" + MaxConnections = 1024 + ConnectTimeout = 600 + SendTimeout = 600 + ReadTimeout = 600 + IPHash = false + AdminUser = "" + AdminPass = "" + SSLOpts = "" + SSLDefaultDHParam = 1024 + SSLDefaultDHParamPath = "" + SSLVerify = "required" + WorkerProcesses = 1 + RLimitNoFile = 65535 + SSLCiphers = "HIGH:!aNULL:!MD5" + SSLProtocols = "TLSv1.2" + AccessLogPath = "/dev/stdout" + ErrorLogPath = "/dev/stdout" + MainLogFormat = "'$remote_addr - $remote_user [$time_local] \"$request\" '\n\t\t '$status $body_bytes_sent \"$http_referer\" '\n\t\t '\"$http_user_agent\" \"$http_x_forwarded_for\"';" + TraceLogFormat = "'$remote_addr - $remote_user [$time_local] \"$request\" $status '\n\t\t '$body_bytes_sent \"$http_referer\" \"$http_user_agent\" '\n\t\t '\"$http_x_forwarded_for\" $request_id $msec $request_time '\n\t\t '$upstream_connect_time $upstream_header_time $upstream_response_time';" + KeepaliveTimeout = "75s" + ClientMaxBodySize = "32m" + ClientBodyBufferSize = "8k" + ClientHeaderBufferSize = "1k" + LargeClientHeaderBuffers = "4 8k" + ClientBodyTimeout = "60s" + 
UnderscoresInHeaders = false +``` + +## Core configurations + +These are the configurations used for the `ucp-interlock` service. The following +options are available: + +| Option | Type | Description | +|:-------------------|:------------|:-----------------------------------------------------------------------------------------------| +| `ListenAddr` | string | Address to serve the Interlock GRPC API. Defaults to `8080`. | +| `DockerURL` | string | Path to the socket or TCP address to the Docker API. Defaults to `unix:///var/run/docker.sock` | +| `TLSCACert` | string | Path to the CA certificate for connecting securely to the Docker API. | +| `TLSCert` | string | Path to the certificate for connecting securely to the Docker API. | +| `TLSKey` | string | Path to the key for connecting securely to the Docker API. | +| `AllowInsecure` | bool | Skip TLS verification when connecting to the Docker API via TLS. | +| `PollInterval` | string | Interval to poll the Docker API for changes. Defaults to `3s`. | +| `EndpointOverride` | string | Override the default GRPC API endpoint for extensions. The default is detected via Swarm. | +| `Extensions` | []Extension | Array of extensions as listed below. | + +## Extension configuration + +Interlock must contain at least one extension to service traffic. +The following options are available to configure the extensions: + +| Option | Type | Description | +|:-------------------|:------------------|:------------------------------------------------------------------------------| +| `Image` | string | Name of the Docker image to use for the extension service. | +| `Args` | []string | Arguments to be passed to the Docker extension service upon creation. | +| `Labels` | map[string]string | Labels to add to the extension service. | +| `ServiceName` | string | Name of the extension service. | +| `ProxyImage` | string | Name of the Docker image to use for the proxy service. | +| `ProxyArgs` | []string | Arguments to be passed to the proxy service upon creation. | +| `ProxyLabels` | map[string]string | Labels to add to the proxy service. | +| `ProxyServiceName` | string | Name of the proxy service. | +| `ProxyConfigPath` | string | Path in the service for the generated proxy configuration. | +| `ServiceCluster` | string | Name of the cluster this extension services. | +| `PublishMode` | string | Publish mode for the proxy service. Supported values are `ingress` or `host`. | +| `PublishedPort` | int | Port where the proxy service serves non-TLS traffic. | +| `PublishedSSLPort` | int | Port where the proxy service serves TLS traffic. | +| `Template` | string | Docker configuration object that is used as the extension template. | +| `Config` | Config | Proxy configuration used by the extensions as listed below. | + +## Proxy configuration + +By default NGINX is used as a proxy, so the following NGINX options are +available for the proxy service: + +| Option | Type | Description | +|:------------------------|:-------|:-----------------------------------------------------------------------------------------------------| +| `User` | string | User to be used in the proxy. | +| `PidPath` | string | Path to the pid file for the proxy service. | +| `MaxConnections` | int | Maximum number of connections for proxy service. | +| `ConnectTimeout` | int | Timeout in seconds for clients to connect. | +| `SendTimeout` | int | Timeout in seconds for the service to send a request to the proxied upstream. 
| +| `ReadTimeout` | int | Timeout in seconds for the service to read a response from the proxied upstream. | +| `IPHash` | bool | Specifies that requests are distributed between servers based on client IP addresses. | +| `SSLOpts` | string | Options to be passed when configuring SSL. | +| `SSLDefaultDHParam` | int | Size of DH parameters. | +| `SSLDefaultDHParamPath` | string | Path to DH parameters file. | +| `SSLVerify` | string | SSL client verification. | +| `WorkerProcesses` | string | Number of worker processes for the proxy service. | +| `RLimitNoFile` | int | Number of maxiumum open files for the proxy service. | +| `SSLCiphers` | string | SSL ciphers to use for the proxy service. | +| `SSLProtocols` | string | Enable the specified TLS protocols. | +| `AccessLogPath` | string | Path to use for access logs (default: `/dev/stdout`). | +| `ErrorLogPath` | string | Path to use for error logs (default: `/dev/stdout`). | +| `MainLogFormat` | string | [Format](http://nginx.org/en/docs/http/ngx_http_log_module.html#log_format) to use for main logger. | +| `TraceLogFormat` | string | [Format](http://nginx.org/en/docs/http/ngx_http_log_module.html#log_format) to use for trace logger. | diff --git a/datacenter/ucp/3.0/guides/user/interlock/deploy/configure.md b/datacenter/ucp/3.0/guides/user/interlock/deploy/configure.md new file mode 100644 index 0000000000..b0f9ef6b39 --- /dev/null +++ b/datacenter/ucp/3.0/guides/user/interlock/deploy/configure.md @@ -0,0 +1,64 @@ +--- +title: Configure the layer 7 routing service +description: Learn how to configure the layer 7 routing solution for UCP, that allows + you to route traffic to swarm services. +keywords: routing, proxy +--- + +[When enabling the layer 7 routing solution](index.md) from the UCP web UI, +you can configure the ports for incoming traffic. If you want to further +customize the layer 7 routing solution, you can do it by updating the +`ucp-interlock` service with a new Docker configuration. + +Here's how it works: + +1. Find out what configuration is currently being used for the `ucp-interlock` +service and save it to a file: + + {% raw %} + ```bash + CURRENT_CONFIG_NAME=$(docker service inspect --format '{{ (index .Spec.TaskTemplate.ContainerSpec.Configs 0).ConfigName }}' ucp-interlock) + docker config inspect --format '{{ printf "%s" .Spec.Data }}' $CURRENT_CONFIG_NAME > config.toml + ``` + {% endraw %} + +2. Make the necessary changes to the `config.toml` file. + [Learn about the configuration options available](configuration-reference.md). + +3. Create a new Docker configuration object from the file you've edited: + + ```bash + NEW_CONFIG_NAME="com.docker.ucp.interlock.conf-$(( $(cut -d '-' -f 2 <<< "$CURRENT_CONFIG_NAME") + 1 ))" + docker config create $NEW_CONFIG_NAME config.toml + ``` + +3. Update the `ucp-interlock` service to start using the new configuration: + + ```bash + docker service update \ + --config-rm $CURRENT_CONFIG_NAME \ + --config-add source=$NEW_CONFIG_NAME,target=/config.toml \ + ucp-interlock + ``` + +By default the `ucp-interlock` service is configured to pause if you provide an +invalid configuration. The service won't restart without a manual intervention. 
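If the service does pause after a configuration change, the task history usually shows why. A minimal sketch for checking this with standard Docker CLI commands (not part of the original procedure; run it from a client bundle with administrator permissions):

```bash
# Show a summary of the service, including the state of its last update
# (for example "paused" after a failed configuration change).
docker service inspect --pretty ucp-interlock

# List the service tasks, including the error message of any failed task.
docker service ps --no-trunc ucp-interlock
```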
+ +If you want the service to automatically roll back to a previous stable +configuration, you can update it with: + +```bash +docker service update \ + --update-failure-action rollback \ + ucp-interlock +``` + +Another thing to be aware of is that every time you enable the layer 7 routing +solution from the UCP UI, the `ucp-interlock` service is started using the +default configuration. + +If you've customized the configuration used by the `ucp-interlock` service, +you'll have to update it again to use the Docker configuration object +you've created. + + diff --git a/datacenter/ucp/3.0/guides/user/interlock/deploy/host-mode-networking.md b/datacenter/ucp/3.0/guides/user/interlock/deploy/host-mode-networking.md new file mode 100644 index 0000000000..ed7e922d20 --- /dev/null +++ b/datacenter/ucp/3.0/guides/user/interlock/deploy/host-mode-networking.md @@ -0,0 +1,100 @@ +--- +title: Host mode networking +description: Learn how to configure the UCP layer 7 routing solution with + host mode networking. +keywords: routing, proxy +redirect_from: + - /ee/ucp/interlock/usage/host-mode-networking/ +--- + +By default the layer 7 routing components communicate with one another using +overlay networks. You can customize the components to use host mode networking +instead. + +You can choose to: + +* Configure the `ucp-interlock` and `ucp-interlock-extension` services to +communicate using host mode networking. +* Configure the `ucp-interlock-proxy` and your swarm service to communicate +using host mode networking. +* Use host mode networking for all of the components. + +In this example we'll start with a production-grade deployment of the layer +7 routing solution and update it so that it uses host mode networking instead of +overlay networking. + +When using host mode networking you won't be able to use DNS service discovery, +since that functionality requires overlay networking. +For two services to communicate, each service needs to know the IP address of +the node where the other service is running. + +## Production-grade deployment + +If you haven't already, configure the +[layer 7 routing solution for production](production.md). + +Once you've done that, the `ucp-interlock-proxy` service replicas should be +running on their own dedicated nodes. + +## Update the ucp-interlock config + +[Update the ucp-interlock service configuration](configure.md) so that it uses +host mode networking. + +Update the `PublishMode` key to: + +```toml +PublishMode = "host" +``` + +When updating the `ucp-interlock` service to use the new Docker configuration, +make sure to update it so that it publishes its port on the host: + +```bash +docker service update \ + --config-rm $CURRENT_CONFIG_NAME \ + --config-add source=$NEW_CONFIG_NAME,target=/config.toml \ + --publish-add mode=host,target=8080 \ + ucp-interlock +``` + +The `ucp-interlock` and `ucp-interlock-extension` services are now communicating +using host mode networking. + +## Deploy your swarm services + +Now you can deploy your swarm services. In this example we'll deploy a demo +service that also uses host mode networking. +Set up your CLI client with a [UCP client bundle](../../user-access/cli.md), +and deploy the service: + +```bash +docker service create \ + --name demo \ + --detach=false \ + --label com.docker.lb.hosts=app.example.org \ + --label com.docker.lb.port=8080 \ + --publish mode=host,target=8080 \ + --env METADATA="demo" \ + ehazlett/docker-demo +``` + +Docker allocates a high random port on the host where the service can be reached.
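Because the port is chosen by Docker at runtime, you need to look it up before you can send a request. One way to do that, assuming the `demo` service from the example above (a sketch, not part of the original walkthrough):

```bash
# Find out which node is running the demo task.
docker service ps demo

# On that node, list the running container; the PORTS column shows the
# host port that Docker picked for the published target port 8080.
docker ps --filter "name=demo"
```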
+To test that everything is working you can run: + +```bash +curl --header "Host: app.example.org" \ + http://:/ping +``` + +Where: + +* `` is the domain name or IP address of a node where the proxy +service is running. +* `` is the [port you're using to route HTTP traffic](index.md). + +If everything is working correctly, you should get a JSON result like: + +```json +{"instance":"63b855978452", "version":"0.1", "request_id":"d641430be9496937f2669ce6963b67d6"} +``` diff --git a/datacenter/ucp/3.0/guides/user/interlock/deploy/index.md b/datacenter/ucp/3.0/guides/user/interlock/deploy/index.md new file mode 100644 index 0000000000..6cda7383c7 --- /dev/null +++ b/datacenter/ucp/3.0/guides/user/interlock/deploy/index.md @@ -0,0 +1,18 @@ +--- +title: Enable layer 7 routing +description: Learn how to enable the layer 7 routing solution for UCP, that allows + you to route traffic to swarm services. +keywords: routing, proxy +--- + +To enable support for layer 7 routing, also known as HTTP routing mesh, +log in to the UCP web UI as an administrator, navigate to the **Admin Settings** +page, and click the **Routing Mesh** option. Check the **Enable routing mesh** option. + +![http routing mesh](../../images/interlock-install-3.png){: .with-border} + +By default, the routing mesh service listens on port 80 for HTTP and port +8443 for HTTPS. Change the ports if you already have services that are using +them. + +Once you save, the layer 7 routing service can be used by your swarm services. diff --git a/datacenter/ucp/3.0/guides/user/interlock/deploy/production.md b/datacenter/ucp/3.0/guides/user/interlock/deploy/production.md new file mode 100644 index 0000000000..fb17de7a92 --- /dev/null +++ b/datacenter/ucp/3.0/guides/user/interlock/deploy/production.md @@ -0,0 +1,89 @@ +--- +title: Configure layer 7 routing for production +description: Learn how to configure the layer 7 routing solution for a production + environment. +keywords: routing, proxy +--- + +The layer 7 solution that ships out of the box with UCP is highly available +and fault tolerant. It is also designed to work independently of how many +nodes you're managing with UCP. + +![production deployment](../../images/interlock-deploy-production-1.svg) + +For a production-grade deployment, you should tune the default deployment to +have two nodes dedicated for running the two replicas of the +`ucp-interlock-proxy` service. This ensures: + +* The proxy services have dedicated resources to handle user requests. You +can configure these nodes with higher performance network interfaces. +* No application traffic can be routed to a manager node. This makes your +deployment secure. +* The proxy service is running on two nodes. If one node fails, layer 7 routing +continues working. + +To achieve this you need to: + +1. Enable layer 7 routing. [Learn how](index.md). +2. Pick two nodes that are going to be dedicated to run the proxy service. +3. Apply labels to those nodes, so that you can constrain the proxy service to +only run on nodes with those labels. +4. Update the `ucp-interlock` service to deploy proxies using that constraint. +5. Configure your load balancer to route traffic to the dedicated nodes only. + +## Apply labels to nodes + +In this example, we chose node-5 and node-6 to be dedicated just for running +the proxy service. 
To apply labels to those nodes run: + +```bash +docker node update --label-add nodetype=loadbalancer <node> +``` + +To make sure the label was successfully applied, run: + +{% raw %} +```bash +docker node inspect --format '{{ index .Spec.Labels "nodetype" }}' <node> +``` +{% endraw %} + +The command should print "loadbalancer". + +## Configure the ucp-interlock service + +Now that your nodes are labelled, you need to update the `ucp-interlock` +service configuration to deploy the proxy service with the correct constraints. + +Add another constraint to the `ProxyConstraints` array: + +```toml +[Extensions] + [Extensions.default] + ProxyConstraints = ["node.labels.com.docker.ucp.orchestrator.swarm==true", "node.platform.os==linux", "node.labels.nodetype==loadbalancer"] +``` + +[Learn how to configure ucp-interlock](configure.md). + +> Known issue +> +> In UCP 3.0.0 the `ucp-interlock` service won't redeploy the proxy replicas +> when you update the configuration. As a workaround, +> [deploy a demo service](../usage/index.md). Once you do that, the proxy +> services are redeployed and scheduled on the correct nodes. +{: .important} + +Once you reconfigure the `ucp-interlock` service, you can check if the proxy +service is running on the dedicated nodes: + +```bash +docker service ps ucp-interlock-proxy +``` + +## Configure your load balancer + +Once the proxy service is running on dedicated nodes, configure your upstream +load balancer with the domain names or IP addresses of those nodes. + +This makes sure all traffic is directed to these nodes. + diff --git a/datacenter/ucp/3.0/guides/user/interlock/index.md b/datacenter/ucp/3.0/guides/user/interlock/index.md new file mode 100644 index 0000000000..cd63d61bfe --- /dev/null +++ b/datacenter/ucp/3.0/guides/user/interlock/index.md @@ -0,0 +1,52 @@ +--- +title: Layer 7 routing overview +description: Learn how to route layer 7 traffic to your swarm services +keywords: routing, proxy +--- + +Docker Engine running in swarm mode has a routing mesh, which makes it easy +to expose your services to the outside world. Since all nodes participate +in the routing mesh, users can access your service by contacting any node. + +![swarm routing mesh](../images/interlock-overview-1.svg) + +In this example the WordPress service is listening on port 8000 of the routing +mesh. Even though the service is running on a single node, users can access +WordPress using the domain name or IP of any of the nodes that are part of +the swarm. + +UCP extends this one step further with layer 7 routing (also known as +application layer routing), allowing users to access Docker services using domain names +instead of IP addresses. + +This functionality is made available through the Interlock component. + +![layer 7 routing](../images/interlock-overview-2.svg) + +In this example, users can access the WordPress service using +`http://wordpress.example.org`. Interlock takes care of routing traffic to +the right place. + +Interlock is specific to the Swarm orchestrator. If you're trying to route +traffic to your Kubernetes applications, check +[layer 7 routing with Kubernetes.](../kubernetes/layer-7-routing.md) + +## Features and benefits + +Layer 7 routing in UCP supports: + +* **High availability**: All the components used for layer 7 routing leverage +Docker swarm for high availability, and handle failures gracefully. +* **Automatic configuration**: UCP monitors your services and automatically +reconfigures the proxy services so that everything is handled for you.
+* **Scalability**: You can customize and tune the proxy services that handle +user-facing requests to meet whatever demand your services have. +* **TLS**: You can leverage Docker secrets to securely manage TLS Certificates +and keys for your services. Both TLS termination and TCP passthrough are supported. +* **Context-based routing**: You can define where to route the request based on +context or path. +* **Host mode networking**: By default layer 7 routing leverages the Docker Swarm +routing mesh, but you don't have to. You can use host mode networking for maximum +performance. +* **Security**: The layer 7 routing components that are exposed to the outside +world run on worker nodes. Even if they get compromised, your cluster won't. diff --git a/datacenter/ucp/3.0/guides/user/interlock/upgrade.md b/datacenter/ucp/3.0/guides/user/interlock/upgrade.md new file mode 100644 index 0000000000..426b8e499b --- /dev/null +++ b/datacenter/ucp/3.0/guides/user/interlock/upgrade.md @@ -0,0 +1,129 @@ +--- +title: Layer 7 routing upgrade +description: Learn how to route layer 7 traffic to your swarm services +keywords: routing, proxy, hrm +--- + +The [HTTP routing mesh](/datacenter/ucp/2.2/guides/admin/configure/use-domain-names-to-access-services.md) +functionality was redesigned in UCP 3.0 for greater security and flexibility. +The functionality was also renamed to "layer 7 routing", to make it easier for +new users to get started. + +[Learn about the new layer 7 routing functionality](index.md). + +To route traffic to your service you apply specific labels to your swarm +services, describing the hostname for the service and other configurations. +Things work in the same way as they did with the HTTP routing mesh, with the +only difference being that you use different labels. + +You don't have to manually update your services. During the upgrade process to +3.0, UCP updates the services to start using new labels. + +This article describes the upgrade process for the routing component, so that +you can troubleshoot UCP and your services, in case something goes wrong with +the upgrade. + +# UCP upgrade process + +If you are using the HTTP routing mesh, and start an upgrade to UCP 3.0: + +1. UCP starts a reconciliation process to ensure all internal components are +deployed. As part of this, services using HRM labels are inspected. +2. UCP creates the `com.docker.ucp.interlock.conf-` based on HRM configurations. +3. The HRM service is removed. +4. The `ucp-interlock` service is deployed with the configuration created. +5. The `ucp-interlock` service deploys the `ucp-interlock-extension` and +`ucp-interlock-proxy-services`. + +The only way to rollback from an upgrade is by restoring from a backup taken +before the upgrade. If something goes wrong during the upgrade process, you +need to troubleshoot the interlock services and your services, since the HRM +service won't be running after the upgrade. + +[Learn more about the interlock services and architecture](architecture.md). + +## Check that routing works + +After upgrading to UCP 3.0, you should check if all swarm services are still +routable. + +For services using HTTP: + +```bash +curl -vs http://:/ -H "Host: " +``` + +For services using HTTPS: + +```bash +curl -vs https://: +``` + +After the upgrade, check that you can still use the same hostnames to access +the swarm services. 
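As a concrete illustration, if one of your services was published on the hostname `app.example.org`, and your load balancer sends HTTP traffic to a UCP node at `203.0.113.10` on port 80, the check could look like this (the hostname, address, and port here are made-up values for the example):

```bash
# Request the service through the layer 7 routing proxy, setting the Host
# header to the hostname the service was published on.
curl -vs http://203.0.113.10:80/ -H "Host: app.example.org"
```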
+ +## The ucp-interlock services are not running + +After the upgrade to UCP 3.0, the following services should be running: + +* `ucp-interlock`: monitors swarm workloads configured to use layer 7 routing. +* `ucp-interlock-extension`: Helper service that generates the configuration for +the `ucp-interlock-proxy` service. +* `ucp-interlock-proxy`: A service that provides load balancing and proxying for +swarm workloads. + +To check if these services are running, use a client bundle with administrator +permissions and run: + +```bash +docker ps --filter "name=ucp-interlock" +``` + +* If the `ucp-interlock` service doesn't exist or is not running, something went +wrong with the reconciliation step. +* If this still doesn't work, it's possible that UCP is having problems creating +the `com.docker.ucp.interlock.conf-1`, due to name conflicts. Make sure you +don't have any configuration with the same name by running: + ``` + docker config ls --filter "name=com.docker.ucp.interlock" + ``` +* If either the `ucp-interlock-extension` or `ucp-interlock-proxy` services are +not running, it's possible that there are port conflicts. +As a workaround re-enable the layer 7 routing configuration from the +[UCP settings page](deploy/index.md). Make sure the ports you choose are not +being used by other services. + +## Workarounds and clean-up + +If you have any of the problems above, disable and enable the layer 7 routing +setting on the [UCP settings page](deploy/index.md). This redeploys the +services with their default configuration. + +When doing that make sure you specify the same ports you were using for HRM, +and that no other services are listening on those ports. + +You should also check if the `ucp-hrm` service is running. If it is, you should +stop it since it can conflict with the `ucp-interlock-proxy` service. + +## Optionally remove labels + +As part of the upgrade process UCP adds the +[labels specific to the new layer 7 routing solution](usage/labels-reference.md). + +You can update your services to remove the old HRM labels, since they won't be +used anymore. + +## Optionally segregate control traffic + +Interlock is designed so that all the control traffic is kept separate from +the application traffic. + +If before upgrading you had all your applications attached to the `ucp-hrm` +network, after upgrading you can update your services to start using a +dedicated network for routing that's not shared with other services. +[Learn how to use a dedicated network](usage/index.md). + +If before upgrading you had a dedicate network to route traffic to each service, +Interlock will continue using those dedicated networks. However the +`ucp-interlock` will be attached to each of those networks. You can update +the `ucp-interlock` service so that it is only connected to the `ucp-hrm` network. diff --git a/datacenter/ucp/3.0/guides/user/interlock/usage/canary.md b/datacenter/ucp/3.0/guides/user/interlock/usage/canary.md new file mode 100644 index 0000000000..138dc5816b --- /dev/null +++ b/datacenter/ucp/3.0/guides/user/interlock/usage/canary.md @@ -0,0 +1,107 @@ +--- +title: Canary application instances +description: Learn how to do canary deployments for your Docker swarm services +keywords: routing, proxy +--- + +In this example we will publish a service and deploy an updated service as canary instances. 
+ +First we will create an overlay network so that service traffic is isolated and secure: + +```bash +$> docker network create -d overlay demo +1se1glh749q1i4pw0kf26mfx5 +``` + +Next we will create the initial service: + +```bash +$> docker service create \ + --name demo-v1 \ + --network demo \ + --detach=false \ + --replicas=4 \ + --label com.docker.lb.hosts=demo.local \ + --label com.docker.lb.port=8080 \ + --env METADATA="demo-version-1" \ + ehazlett/docker-demo +``` + +Interlock will detect once the service is available and publish it. Once the tasks are running +and the proxy service has been updated the application should be available via `http://demo.local`: + +```bash +$> curl -vs -H "Host: demo.local" http://127.0.0.1/ping +* Trying 127.0.0.1... +* TCP_NODELAY set +* Connected to demo.local (127.0.0.1) port 80 (#0) +> GET /ping HTTP/1.1 +> Host: demo.local +> User-Agent: curl/7.54.0 +> Accept: */* +> +< HTTP/1.1 200 OK +< Server: nginx/1.13.6 +< Date: Wed, 08 Nov 2017 20:28:26 GMT +< Content-Type: text/plain; charset=utf-8 +< Content-Length: 120 +< Connection: keep-alive +< Set-Cookie: session=1510172906715624280; Path=/; Expires=Thu, 09 Nov 2017 20:28:26 GMT; Max-Age=86400 +< x-request-id: f884cf37e8331612b8e7630ad0ee4e0d +< x-proxy-id: 5ad7c31f9f00 +< x-server-info: interlock/2.0.0-development (147ff2b1) linux/amd64 +< x-upstream-addr: 10.0.2.4:8080 +< x-upstream-response-time: 1510172906.714 +< +{"instance":"df20f55fc943","version":"0.1","metadata":"demo-version-1","request_id":"f884cf37e8331612b8e7630ad0ee4e0d"} +``` + +Notice the `metadata` with `demo-version-1`. + +Now we will deploy a "new" version: + +```bash +$> docker service create \ + --name demo-v2 \ + --network demo \ + --detach=false \ + --label com.docker.lb.hosts=demo.local \ + --label com.docker.lb.port=8080 \ + --env METADATA="demo-version-2" \ + --env VERSION="0.2" \ + ehazlett/docker-demo +``` + +Since this has a replica of one (1) and the initial version has four (4) replicas 20% of application traffic +will be sent to `demo-version-2`: + +```bash +$> curl -vs -H "Host: demo.local" http://127.0.0.1/ping +{"instance":"23d9a5ec47ef","version":"0.1","metadata":"demo-version-1","request_id":"060c609a3ab4b7d9462233488826791c"} +$> curl -vs -H "Host: demo.local" http://127.0.0.1/ping +{"instance":"f42f7f0a30f9","version":"0.1","metadata":"demo-version-1","request_id":"c848e978e10d4785ac8584347952b963"} +$> curl -vs -H "Host: demo.local" http://127.0.0.1/ping +{"instance":"c2a686ae5694","version":"0.1","metadata":"demo-version-1","request_id":"724c21d0fb9d7e265821b3c95ed08b61"} +$> curl -vs -H "Host: demo.local" http://127.0.0.1/ping +{"instance":"1b0d55ed3d2f","version":"0.2","metadata":"demo-version-2","request_id":"b86ff1476842e801bf20a1b5f96cf94e"} +$> curl -vs -H "Host: demo.local" http://127.0.0.1/ping +{"instance":"c2a686ae5694","version":"0.1","metadata":"demo-version-1","request_id":"724c21d0fb9d7e265821b3c95ed08b61"} +``` + +To increase traffic to the new version add more replicas with `docker scale`: + +```bash +$> docker service scale demo-v2=4 +demo-v2 +``` + +To complete the upgrade, scale the `demo-v1` service to zero (0): + +```bash +$> docker service scale demo-v1=0 +demo-v1 +``` + +This will route all application traffic to the new version. If you need to rollback, simply scale the v1 service +back up and v2 down. 
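For example, rolling back the services above can be done with a single command. This is just a sketch of the scaling step described in the previous paragraph, reusing the `demo-v1` and `demo-v2` service names:

```bash
# Send all application traffic back to the original version:
# scale v1 back up to four replicas and v2 down to zero.
docker service scale demo-v1=4 demo-v2=0
```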
+ diff --git a/datacenter/ucp/3.0/guides/user/interlock/usage/context.md b/datacenter/ucp/3.0/guides/user/interlock/usage/context.md new file mode 100644 index 0000000000..a8f4daa5ec --- /dev/null +++ b/datacenter/ucp/3.0/guides/user/interlock/usage/context.md @@ -0,0 +1,65 @@ +--- +title: Context/path based routing +description: Learn how to do route traffic to your Docker swarm services based + on a url path +keywords: routing, proxy +--- + +In this example we will publish a service using context or path based routing. + +First we will create an overlay network so that service traffic is isolated and secure: + +```bash +$> docker network create -d overlay demo +1se1glh749q1i4pw0kf26mfx5 +``` + +Next we will create the initial service: + +```bash +$> docker service create \ + --name demo \ + --network demo \ + --detach=false \ + --label com.docker.lb.hosts=demo.local \ + --label com.docker.lb.port=8080 \ + --label com.docker.lb.context_root=/app \ + --label com.docker.lb.context_root_rewrite=true \ + --env METADATA="demo-context-root" \ + ehazlett/docker-demo +``` + +> Only one path per host +> +> Interlock supports only one path per host per service cluster. Once a +> particular `com.docker.lb.hosts` label has been applied, it cannot be applied +> again in the same service cluster. +{: .important} + +Interlock will detect once the service is available and publish it. Once the tasks are running +and the proxy service has been updated the application should be available via `http://demo.local`: + +```bash +$> curl -vs -H "Host: demo.local" http://127.0.0.1/app/ +* Trying 127.0.0.1... +* TCP_NODELAY set +* Connected to 127.0.0.1 (127.0.0.1) port 80 (#0) +> GET /app/ HTTP/1.1 +> Host: demo.local +> User-Agent: curl/7.54.0 +> Accept: */* +> +< HTTP/1.1 200 OK +< Server: nginx/1.13.6 +< Date: Fri, 17 Nov 2017 14:25:17 GMT +< Content-Type: text/html; charset=utf-8 +< Transfer-Encoding: chunked +< Connection: keep-alive +< x-request-id: 077d18b67831519defca158e6f009f82 +< x-proxy-id: 77c0c37d2c46 +< x-server-info: interlock/2.0.0-dev (732c77e7) linux/amd64 +< x-upstream-addr: 10.0.1.3:8080 +< x-upstream-response-time: 1510928717.306 +... +``` + diff --git a/datacenter/ucp/3.0/guides/user/interlock/usage/default-service.md b/datacenter/ucp/3.0/guides/user/interlock/usage/default-service.md new file mode 100644 index 0000000000..0602d8c1c9 --- /dev/null +++ b/datacenter/ucp/3.0/guides/user/interlock/usage/default-service.md @@ -0,0 +1,50 @@ +--- +title: Set a default service +description: Learn about Interlock, an application routing and load balancing system + for Docker Swarm. +keywords: ucp, interlock, load balancing +--- + +The default proxy service used by UCP to provide layer 7 routing is NGINX, +so when users try to access a route that hasn't been configured, they will +see the default NGINX 404 page. + +![Default NGINX page](../../images/interlock-default-service-1.png){: .with-border} + +You can customize this by labelling a service with +`com.docker.lb.defaul_backend=true`. When users try to access a route that's +not configured, they are redirected to this service. 
+ +As an example, create a `docker-compose.yml` file with: + +```yaml +version: "3.2" + +services: + demo: + image: ehazlett/interlock-default-app + deploy: + replicas: 1 + labels: + com.docker.lb.default_backend: "true" + com.docker.lb.port: 80 + networks: + - demo-network + +networks: + demo-network: + driver: overlay +``` + +Set up your CLI client with a [UCP client bundle](../../user-access/cli.md), +and deploy the service: + +```bash +docker stack deploy --compose-file docker-compose.yml demo +``` + +Once users try to access a route that's not configured, they are directed +to this demo service. + +![Custom default page](../../images/interlock-default-service-2.png){: .with-border} + diff --git a/datacenter/ucp/3.0/guides/user/interlock/usage/index.md b/datacenter/ucp/3.0/guides/user/interlock/usage/index.md new file mode 100644 index 0000000000..4895b67160 --- /dev/null +++ b/datacenter/ucp/3.0/guides/user/interlock/usage/index.md @@ -0,0 +1,95 @@ +--- +title: Route traffic to a simple swarm service +description: Learn how to do canary deployments for your Docker swarm services +keywords: routing, proxy +--- + +Once the [layer 7 routing solution is enabled](../deploy/index.md), you can +start using it in your swarm services. + +In this example we'll deploy a simple service which: + +* Has a JSON endpoint that returns the ID of the task serving the request. +* Has a web UI that shows how many tasks the service is running. +* Can be reached at `http://app.example.org`. + +## Deploy the service + +Create a `docker-compose.yml` file with: + +```yaml +version: "3.2" + +services: + demo: + image: ehazlett/docker-demo + deploy: + replicas: 1 + labels: + com.docker.lb.hosts: app.example.org + com.docker.lb.network: demo-network + com.docker.lb.port: 8080 + networks: + - demo-network + +networks: + demo-network: + driver: overlay +``` + +Note that: + +* The `com.docker.lb.hosts` label defines the hostname for the service. When +the layer 7 routing solution gets a request containing `app.example.org` in +the host header, that request is forwarded to the demo service. +* The `com.docker.lb.network` defines which network the `ucp-interlock-proxy` +should attach to in order to be able to communicate with the demo service. +To use layer 7 routing, your services need to be attached to at least one network. +If your service is only attached to a single network, you don't need to add +a label to specify which network to use for routing. +* The `com.docker.lb.port` label specifies which port the `ucp-interlock-proxy` +service should use to communicate with this demo service. +* Your service doesn't need to expose a port in the swarm routing mesh. All +communications are done using the network you've specified. + +Set up your CLI client with a [UCP client bundle](../../user-access/cli.md), +and deploy the service: + +```bash +docker stack deploy --compose-file docker-compose.yml demo +``` + +The `ucp-interlock` service detects that your service is using these labels +and automatically reconfigures the `ucp-interlock-proxy` service. + +## Test using the CLI + +To test that requests are routed to the demo service, run: + +```bash +curl --header "Host: app.example.org" \ + http://:/ping +``` + +Where: + +* `` is the domain name or IP address of a UCP node. +* `` is the [port you're using to route HTTP traffic](../deploy/index.md). 
+ +If everything is working correctly, you should get a JSON result like: + +```json +{"instance":"63b855978452", "version":"0.1", "request_id":"d641430be9496937f2669ce6963b67d6"} +``` + +## Test using a browser + +Since the demo service exposes an HTTP endpoint, you can also use your browser +to validate that everything is working. + +Make sure the `/etc/hosts` file in your system has an entry mapping +`app.example.org` to the IP address of a UCP node. Once you do that, you'll be +able to start using the service from your browser. + +![browser](../../images/route-simple-app-1.png){: .with-border } + diff --git a/datacenter/ucp/3.0/guides/user/interlock/usage/interlock_service_clusters.png b/datacenter/ucp/3.0/guides/user/interlock/usage/interlock_service_clusters.png new file mode 100644 index 0000000000..84ad5f1898 Binary files /dev/null and b/datacenter/ucp/3.0/guides/user/interlock/usage/interlock_service_clusters.png differ diff --git a/datacenter/ucp/3.0/guides/user/interlock/usage/labels-reference.md b/datacenter/ucp/3.0/guides/user/interlock/usage/labels-reference.md new file mode 100644 index 0000000000..263c055286 --- /dev/null +++ b/datacenter/ucp/3.0/guides/user/interlock/usage/labels-reference.md @@ -0,0 +1,31 @@ +--- +title: Layer 7 routing labels reference +description: Learn about the labels you can use in your swarm services to route + layer 7 traffic to them. +keywords: routing, proxy +--- + +Once the layer 7 routing solution is enabled, you can +[start using it in your swarm services](index.md). + +The following labels are available for you to use in swarm services: + + +| Label | Description | Example | +|:---------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------| +| `com.docker.lb.hosts` | Comma separated list of the hosts that the service should serve. | `example.com,test.com` | +| `com.docker.lb.port` | Port to use for internal upstream communication. | `8080` | +| `com.docker.lb.network` | Name of network the proxy service should attach to for upstream connectivity. | `app-network-a` | +| `com.docker.lb.context_root` | Context or path to use for the application. | `/app` | +| `com.docker.lb.context_root_rewrite` | Boolean to enable rewrite for the context root. | `true` | +| `com.docker.lb.ssl_only` | Boolean to force SSL for application. | `true` | +| `com.docker.lb.ssl_cert` | Docker secret to use for the SSL certificate. | `example.com.cert` | +| `com.docker.lb.ssl_key` | Docker secret to use for the SSL key. | `example.com.key` | +| `com.docker.lb.websocket_endpoints` | Comma separated list of endpoints to configure to be upgraded for websockets. | `/ws,/foo` | +| `com.docker.lb.service_cluster` | Name of the service cluster to use for the application. | `us-east` | +| `com.docker.lb.ssl_backend` | Enable SSL communication to the upstreams. | `true` | +| `com.docker.lb.ssl_backend_tls_verify` | Verification mode for the upstream TLS. | `none` | +| `com.docker.lb.sticky_session_cookie` | Cookie to use for sticky sessions. | `none` | +| `com.docker.lb.redirects` | Semi-colon separated list of redirects to add in the format of `,`. Example: `http://old.example.com,http://new.example.com;` | `none` | +| `com.docker.lb.ssl_passthrough` | Enable SSL passthrough. 
| `false` | + diff --git a/datacenter/ucp/3.0/guides/user/interlock/usage/redirects.md b/datacenter/ucp/3.0/guides/user/interlock/usage/redirects.md new file mode 100644 index 0000000000..0f060b7a3c --- /dev/null +++ b/datacenter/ucp/3.0/guides/user/interlock/usage/redirects.md @@ -0,0 +1,69 @@ +--- +title: Application redirects +description: Learn how to implement redirects using swarm services and the + layer 7 routing solution for UCP. +keywords: routing, proxy, redirects +--- + +Once the [layer 7 routing solution is enabled](../deploy/index.md), you can +start using it in your swarm services. In this example we'll deploy a simple +service that can be reached at `app.example.org`. We'll also redirect +requests to `old.example.org` to that service. + +To do that, create a docker-compose.yml file with: + +```yaml +version: "3.2" + +services: + demo: + image: ehazlett/docker-demo + deploy: + replicas: 1 + labels: + com.docker.lb.hosts: app.example.org,old.example.org + com.docker.lb.network: demo-network + com.docker.lb.port: 8080 + com.docker.lb.redirects: http://old.example.org,http://app.example.org + networks: + - demo-network + +networks: + demo-network: + driver: overlay +``` + +Note that the demo service has labels to signal that traffic for both +`app.example.org` and `old.example.org` should be routed to this service. +There's also a label indicating that all traffic directed to `old.example.org` +should be redirected to `app.example.org`. + +Set up your CLI client with a [UCP client bundle](../../user-access/cli.md), +and deploy the service: + +```bash +docker stack deploy --compose-file docker-compose.yml demo +``` + +You can also use the CLI to test if the redirect is working, by running: + +```bash +curl --head --header "Host: old.example.org" http://: +``` + +You should see something like: + +```none +HTTP/1.1 302 Moved Temporarily +Server: nginx/1.13.8 +Date: Thu, 29 Mar 2018 23:16:46 GMT +Content-Type: text/html +Content-Length: 161 +Connection: keep-alive +Location: http://app.example.org/ +``` + +You can also test that the redirect works from your browser. For that, you +need to make sure you add entries for both `app.example.org` and +`old.example.org` to your `/etc/hosts` file, mapping them to the IP address +of a UCP node. diff --git a/datacenter/ucp/3.0/guides/user/interlock/usage/service-clusters.md b/datacenter/ucp/3.0/guides/user/interlock/usage/service-clusters.md new file mode 100644 index 0000000000..b5baf30a55 --- /dev/null +++ b/datacenter/ucp/3.0/guides/user/interlock/usage/service-clusters.md @@ -0,0 +1,200 @@ +--- +title: Service clusters +description: Learn about Interlock, an application routing and load balancing system + for Docker Swarm. +keywords: ucp, interlock, load balancing +--- + +In this example we will configure an eight (8) node Swarm cluster that uses service clusters +to route traffic to different proxies. There are three (3) managers +and five (5) workers. Two of the workers are configured with node labels to be dedicated +ingress cluster load balancer nodes. These will receive all application traffic. + +This example will not cover the actual deployment of infrastructure. +It assumes you have a vanilla Swarm cluster (`docker init` and `docker swarm join` from the nodes). +See the [Swarm](https://docs.docker.com/engine/swarm/) documentation if you need help +getting a Swarm cluster deployed. 
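If you're starting from scratch, a vanilla swarm is created with `docker swarm init`
on the first manager and `docker swarm join` on the remaining nodes. A rough sketch,
where the address is a placeholder and the join tokens come from the `join-token`
commands:

```bash
# On the first manager: create the swarm.
docker swarm init --advertise-addr <manager-ip>

# Print the join commands for additional managers and for workers.
docker swarm join-token manager
docker swarm join-token worker

# On every remaining node: run the join command printed above, for example:
docker swarm join --token <token> <manager-ip>:2377
```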
+ +![Interlock Service Clusters](interlock_service_clusters.png) + +We will configure the load balancer worker nodes (`lb-00` and `lb-01`) with node labels in order to pin the Interlock Proxy +service. Once you are logged into one of the Swarm managers run the following to add node labels +to the dedicated ingress workers: + +```bash +$> docker node update --label-add nodetype=loadbalancer --label-add region=us-east lb-00 +lb-00 +$> docker node update --label-add nodetype=loadbalancer --label-add region=us-west lb-01 +lb-01 +``` + +You can inspect each node to ensure the labels were successfully added: + +```bash +{% raw %} +$> docker node inspect -f '{{ .Spec.Labels }}' lb-00 +map[nodetype:loadbalancer region:us-east] +$> docker node inspect -f '{{ .Spec.Labels }}' lb-01 +map[nodetype:loadbalancer region:us-west] +{% endraw %} +``` + +Next, we will create a configuration object for Interlock that contains multiple extensions with varying service clusters: + +```bash +$> cat << EOF | docker config create service.interlock.conf - +ListenAddr = ":8080" +DockerURL = "unix:///var/run/docker.sock" +PollInterval = "3s" + +[Extensions] + [Extensions.us-east] + Image = "interlockpreview/interlock-extension-nginx:2.0.0-preview" + Args = ["-D"] + ServiceName = "interlock-ext-us-east" + ProxyImage = "nginx:alpine" + ProxyArgs = [] + ProxyServiceName = "interlock-proxy-us-east" + ProxyConfigPath = "/etc/nginx/nginx.conf" + ServiceCluster = "us-east" + PublishMode = "host" + PublishedPort = 80 + TargetPort = 80 + PublishedSSLPort = 443 + TargetSSLPort = 443 + [Extensions.us-east.Config] + User = "nginx" + PidPath = "/var/run/proxy.pid" + WorkerProcesses = 1 + RlimitNoFile = 65535 + MaxConnections = 2048 + [Extensions.us-east.Labels] + ext_region = "us-east" + [Extensions.us-east.ProxyLabels] + proxy_region = "us-east" + + [Extensions.us-west] + Image = "interlockpreview/interlock-extension-nginx:2.0.0-preview" + Args = ["-D"] + ServiceName = "interlock-ext-us-west" + ProxyImage = "nginx:alpine" + ProxyArgs = [] + ProxyServiceName = "interlock-proxy-us-west" + ProxyConfigPath = "/etc/nginx/nginx.conf" + ServiceCluster = "us-west" + PublishMode = "host" + PublishedPort = 80 + TargetPort = 80 + PublishedSSLPort = 443 + TargetSSLPort = 443 + [Extensions.us-west.Config] + User = "nginx" + PidPath = "/var/run/proxy.pid" + WorkerProcesses = 1 + RlimitNoFile = 65535 + MaxConnections = 2048 + [Extensions.us-west.Labels] + ext_region = "us-west" + [Extensions.us-west.ProxyLabels] + proxy_region = "us-west" +EOF +oqkvv1asncf6p2axhx41vylgt +``` +Note that we are using "host" mode networking in order to use the same ports (`80` and `443`) in the cluster. We cannot use ingress +networking as it reserves the port across all nodes. If you want to use ingress networking you will have to use different ports +for each service cluster. + +Next we will create a dedicated network for Interlock and the extensions: + +```bash +$> docker network create -d overlay interlock +``` + +Now we can create the Interlock service: + +```bash +$> docker service create \ + --name interlock \ + --mount src=/var/run/docker.sock,dst=/var/run/docker.sock,type=bind \ + --network interlock \ + --constraint node.role==manager \ + --config src=service.interlock.conf,target=/config.toml \ + interlockpreview/interlock:2.0.0-preview -D run -c /config.toml +sjpgq7h621exno6svdnsvpv9z +``` + +## Configure Proxy Services +Once we have the node labels we can re-configure the Interlock Proxy services to be constrained to the +workers for each region. 
Again, from a manager run the following to pin the proxy services to the ingress workers: + +```bash +$> docker service update \ + --constraint-add node.labels.nodetype==loadbalancer \ + --constraint-add node.labels.region==us-east \ + interlock-proxy-us-east +$> docker service update \ + --constraint-add node.labels.nodetype==loadbalancer \ + --constraint-add node.labels.region==us-west \ + interlock-proxy-us-west +``` + +We are now ready to deploy applications. First we will create individual networks for each application: + +```bash +$> docker network create -d overlay demo-east +$> docker network create -d overlay demo-west +``` + +Next we will deploy the application in the `us-east` service cluster: + +```bash +$> docker service create \ + --name demo-east \ + --network demo-east \ + --detach=true \ + --label com.docker.lb.hosts=demo-east.local \ + --label com.docker.lb.port=8080 \ + --label com.docker.lb.service_cluster=us-east \ + --env METADATA="us-east" \ + ehazlett/docker-demo +``` + +Now we deploy the application in the `us-west` service cluster: + +```bash +$> docker service create \ + --name demo-west \ + --network demo-west \ + --detach=true \ + --label com.docker.lb.hosts=demo-west.local \ + --label com.docker.lb.port=8080 \ + --label com.docker.lb.service_cluster=us-west \ + --env METADATA="us-west" \ + ehazlett/docker-demo +``` + +Only the service cluster that is designated will be configured for the applications. For example, the `us-east` service cluster +will not be configured to serve traffic for the `us-west` service cluster and vice versa. We can see this in action when we +send requests to each service cluster. + +When we send a request to the `us-east` service cluster it only knows about the `us-east` application (be sure to ssh to the `lb-00` node): + +```bash +{% raw %} +$> curl -H "Host: demo-east.local" http://$(docker node inspect -f '{{ .Status.Addr }}' lb-00)/ping +{"instance":"1b2d71619592","version":"0.1","metadata":"us-east","request_id":"3d57404cf90112eee861f9d7955d044b"} +$> curl -H "Host: demo-west.local" http://$(docker node inspect -f '{{ .Status.Addr }}' lb-00)/ping + +404 Not Found + +

404 Not Found
nginx/1.13.6
    + + +{% endraw %} +``` + +Application traffic is isolated to each service cluster. Interlock also ensures that a proxy will only be updated if it has corresponding updates +to its designated service cluster. So in this example, updates to the `us-east` cluster will not affect the `us-west` cluster. If there is a problem +the others will not be affected. + diff --git a/datacenter/ucp/3.0/guides/user/interlock/usage/sessions.md b/datacenter/ucp/3.0/guides/user/interlock/usage/sessions.md new file mode 100644 index 0000000000..f1104ec486 --- /dev/null +++ b/datacenter/ucp/3.0/guides/user/interlock/usage/sessions.md @@ -0,0 +1,131 @@ +--- +title: Persistent (sticky) sessions +description: Learn how to configure your swarm services with persistent sessions + using UCP. +keywords: routing, proxy +--- + +In this example we will publish a service and configure the proxy for persistent (sticky) sessions. + +# Cookies +In the following example we will show how to configure sticky sessions using cookies. + +First we will create an overlay network so that service traffic is isolated and secure: + +```bash +$> docker network create -d overlay demo +1se1glh749q1i4pw0kf26mfx5 +``` + +Next we will create the service with the cookie to use for sticky sessions: + +```bash +$> docker service create \ + --name demo \ + --network demo \ + --detach=false \ + --replicas=5 \ + --label com.docker.lb.hosts=demo.local \ + --label com.docker.lb.sticky_session_cookie=session \ + --label com.docker.lb.port=8080 \ + --env METADATA="demo-sticky" \ + ehazlett/docker-demo +``` + +Interlock will detect once the service is available and publish it. Once the tasks are running +and the proxy service has been updated the application should be available via `http://demo.local` +and configured to use sticky sessions: + +```bash +$> curl -vs -c cookie.txt -b cookie.txt -H "Host: demo.local" http://127.0.0.1/ping +* Trying 127.0.0.1... +* TCP_NODELAY set +* Connected to 127.0.0.1 (127.0.0.1) port 80 (#0) +> GET /ping HTTP/1.1 +> Host: demo.local +> User-Agent: curl/7.54.0 +> Accept: */* +> Cookie: session=1510171444496686286 +> +< HTTP/1.1 200 OK +< Server: nginx/1.13.6 +< Date: Wed, 08 Nov 2017 20:04:36 GMT +< Content-Type: text/plain; charset=utf-8 +< Content-Length: 117 +< Connection: keep-alive +* Replaced cookie session="1510171444496686286" for domain demo.local, path /, expire 0 +< Set-Cookie: session=1510171444496686286 +< x-request-id: 3014728b429320f786728401a83246b8 +< x-proxy-id: eae36bf0a3dc +< x-server-info: interlock/2.0.0-development (147ff2b1) linux/amd64 +< x-upstream-addr: 10.0.2.5:8080 +< x-upstream-response-time: 1510171476.948 +< +{"instance":"9c67a943ffce","version":"0.1","metadata":"demo-sticky","request_id":"3014728b429320f786728401a83246b8"} +``` + +Notice the `Set-Cookie` from the application. This is stored by the `curl` command and sent with subsequent requests +which are pinned to the same instance. If you make a few requests you will notice the same `x-upstream-addr`. + +# IP Hashing +In this example we show how to configure sticky sessions using client IP hashing. This is not as flexible or consistent +as cookies but enables workarounds for some applications that cannot use the other method. 
+ +First we will create an overlay network so that service traffic is isolated and secure: + +```bash +$> docker network create -d overlay demo +1se1glh749q1i4pw0kf26mfx5 +``` + +Next we will create the service with the cookie to use for sticky sessions using IP hashing: + +```bash +$> docker service create \ + --name demo \ + --network demo \ + --detach=false \ + --replicas=5 \ + --label com.docker.lb.hosts=demo.local \ + --label com.docker.lb.port=8080 \ + --label com.docker.lb.ip_hash=true \ + --env METADATA="demo-sticky" \ + ehazlett/docker-demo +``` + +Interlock will detect once the service is available and publish it. Once the tasks are running +and the proxy service has been updated the application should be available via `http://demo.local` +and configured to use sticky sessions: + +```bash +$> curl -vs -H "Host: demo.local" http://127.0.0.1/ping +* Trying 127.0.0.1... +* TCP_NODELAY set +* Connected to 127.0.0.1 (127.0.0.1) port 80 (#0) +> GET /ping HTTP/1.1 +> Host: demo.local +> User-Agent: curl/7.54.0 +> Accept: */* +> +< HTTP/1.1 200 OK +< Server: nginx/1.13.6 +< Date: Wed, 08 Nov 2017 20:04:36 GMT +< Content-Type: text/plain; charset=utf-8 +< Content-Length: 117 +< Connection: keep-alive +< x-request-id: 3014728b429320f786728401a83246b8 +< x-proxy-id: eae36bf0a3dc +< x-server-info: interlock/2.0.0-development (147ff2b1) linux/amd64 +< x-upstream-addr: 10.0.2.5:8080 +< x-upstream-response-time: 1510171476.948 +< +{"instance":"9c67a943ffce","version":"0.1","metadata":"demo-sticky","request_id":"3014728b429320f786728401a83246b8"} +``` + +You can use `docker service scale demo=10` to add some more replicas. Once scaled, you will notice that requests are pinned +to a specific backend. + +Note: due to the way the IP hashing works for extensions, you will notice a new upstream address when scaling replicas. This is +expected as internally the proxy uses the new set of replicas to decide on a backend on which to pin. Once the upstreams are +determined a new "sticky" backend will be chosen and that will be the dedicated upstream. + diff --git a/datacenter/ucp/3.0/guides/user/interlock/usage/tls.md b/datacenter/ucp/3.0/guides/user/interlock/usage/tls.md new file mode 100644 index 0000000000..32f7e9910e --- /dev/null +++ b/datacenter/ucp/3.0/guides/user/interlock/usage/tls.md @@ -0,0 +1,197 @@ +--- +title: Applications with SSL +description: Learn how to configure your swarm services with TLS using the layer + 7 routing solution for UCP. +keywords: routing, proxy, tls +redirect_from: + - /ee/ucp/interlock/usage/ssl/ +--- + +Once the [layer 7 routing solution is enabled](../deploy/index.md), you can +start using it in your swarm services. You have two options for securing your +services with TLS: + +* Let the proxy terminate the TLS connection. All traffic between end-users and +the proxy is encrypted, but the traffic going between the proxy and your swarm +service is not secured. +* Let your swarm service terminate the TLS connection. The end-to-end traffic +is encrypted and the proxy service allows TLS traffic to passthrough unchanged. + +In this example we'll deploy a service that can be reached at `app.example.org` +using these two options. + +No matter how you choose to secure your swarm services, there are two steps to +route traffic with TLS: + +1. Create [Docker secrets](/engine/swarm/secrets.md) to manage from a central +place the private key and certificate used for TLS. +2. Add labels to your swarm service for UCP to reconfigure the proxy service. 
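In the examples below, `docker stack deploy` creates the secrets for you from the
`secrets:` section of the Compose file. If you'd rather create them ahead of time
from the CLI, step 1 is a single command per file; the file and secret names here
are only examples and must match whatever your service configuration references:

```bash
# Create the secrets from local PEM files.
docker secret create app.example.org.cert app.example.org.cert
docker secret create app.example.org.key app.example.org.key
```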
+ + +## Let the proxy handle TLS + +In this example we'll deploy a swarm service and let the proxy service handle +the TLS connection. All traffic between the proxy and the swarm service is +not secured, so you should only use this option if you trust that no one can +monitor traffic inside services running on your datacenter. + +![TLS Termination](../../images/interlock-tls-1.png) + +Start by getting a private key and certificate for the TLS connection. Make +sure the Common Name in the certificate matches the name where your service +is going to be available. + +You can generate a self-signed certificate for `app.example.org` by running: + +```bash +openssl req \ + -new \ + -newkey rsa:4096 \ + -days 3650 \ + -nodes \ + -x509 \ + -subj "/C=US/ST=CA/L=SF/O=Docker-demo/CN=app.example.org" \ + -keyout app.example.org.key \ + -out app.example.org.cert +``` + +Then, create a docker-compose.yml file with the following content: + +```yml +version: "3.2" + +services: + demo: + image: ehazlett/docker-demo + deploy: + replicas: 1 + labels: + com.docker.lb.hosts: app.example.org + com.docker.lb.network: demo-network + com.docker.lb.port: 8080 + com.docker.lb.ssl_cert: demo_app.example.org.cert + com.docker.lb.ssl_key: demo_app.example.org.key + environment: + METADATA: proxy-handles-tls + networks: + - demo-network + +networks: + demo-network: + driver: overlay +secrets: + app.example.org.cert: + file: ./app.example.org.cert + app.example.org.key: + file: ./app.example.org.key +``` + +Notice that the demo service has labels describing that the proxy service should +route traffic to `app.example.org` to this service. All traffic between the +service and proxy takes place using the `demo-network` network. The service also +has labels describing the Docker secrets to use on the proxy service to terminate +the TLS connection. + +Since the private key and certificate are stored as Docker secrets, you can +easily scale the number of replicas used for running the proxy service. Docker +takes care of distributing the secrets to the replicas. + +Set up your CLI client with a [UCP client bundle](../../user-access/cli.md), +and deploy the service: + +```bash +docker stack deploy --compose-file docker-compose.yml demo +``` + +The service is now running. To test that everything is working correctly you +first need to update your `/etc/hosts` file to map `app.example.org` to the +IP address of a UCP node. + +In a production deployment, you'll have to create a DNS entry so that your +users can access the service using the domain name of your choice. +After doing that, you'll be able to access your service at: + +```bash +https://: +``` + +Where: +* `hostname` is the name you used with the `com.docker.lb.hosts` label. +* `https-port` is the port you've configured in the [UCP settings](../deploy/index.md). + +![Browser screenshot](../../images/interlock-tls-2.png){: .with-border} + +Since we're using self-sign certificates in this example, client tools like +browsers display a warning that the connection is insecure. + +You can also test from the CLI: + +```bash +curl --insecure \ + --resolve :: \ + https://:/ping +``` + +If everything is properly configured you should get a JSON payload: + +```json +{"instance":"f537436efb04","version":"0.1","request_id":"5a6a0488b20a73801aa89940b6f8c5d2"} +``` + +Since the proxy uses SNI to decide where to route traffic, make sure you're +using a version of curl that includes the SNI header with insecure requests. 
+If this doesn't happen, curl displays an error saying that the SSL handshake +was aborterd. + + +## Let your service handle TLS + +You can also encrypt the traffic from end-users to your swarm service. + +![End-to-end encryption](../../images/interlock-tls-3.png) + + +To do that, deploy your swarm service using the following docker-compose.yml file: + +```yml +version: "3.2" + +services: + demo: + image: ehazlett/docker-demo + command: --tls-cert=/run/secrets/cert.pem --tls-key=/run/secrets/key.pem + deploy: + replicas: 1 + labels: + com.docker.lb.hosts: app.example.org + com.docker.lb.network: demo-network + com.docker.lb.port: 8080 + com.docker.lb.ssl_passthrough: "true" + environment: + METADATA: end-to-end-TLS + networks: + - demo-network + secrets: + - source: app.example.org.cert + target: /run/secrets/cert.pem + - source: app.example.org.key + target: /run/secrets/key.pem + +networks: + demo-network: + driver: overlay +secrets: + app.example.org.cert: + file: ./app.example.org.cert + app.example.org.key: + file: ./app.example.org.key +``` + +Notice that we've update the service to start using the secrets with the +private key and certificate. The service is also labeled with +`com.docker.lb.ssl_passthrough: true`, signaling UCP to configure the proxy +service such that TLS traffic for `app.example.org` is passed to the service. + +Since the connection is fully encrypt from end-to-end, the proxy service +won't be able to add metadata such as version info or request ID to the +response headers. diff --git a/datacenter/ucp/3.0/guides/user/interlock/usage/websockets.md b/datacenter/ucp/3.0/guides/user/interlock/usage/websockets.md new file mode 100644 index 0000000000..ec2b1b46b5 --- /dev/null +++ b/datacenter/ucp/3.0/guides/user/interlock/usage/websockets.md @@ -0,0 +1,36 @@ +--- +title: Websockets +description: Learn how to use websocket in your swarm services when using the + layer 7 routing solution for UCP. +keywords: routing, proxy +--- + +In this example we will publish a service and configure support for websockets. + +First we will create an overlay network so that service traffic is isolated and secure: + +```bash +$> docker network create -d overlay demo +1se1glh749q1i4pw0kf26mfx5 +``` + +Next we will create the service with websocket endpoints: + +```bash +$> docker service create \ + --name demo \ + --network demo \ + --detach=false \ + --label com.docker.lb.hosts=demo.local \ + --label com.docker.lb.port=8080 \ + --label com.docker.lb.websocket_endpoints=/ws \ + ehazlett/websocket-chat +``` + +Note: for this to work you must have an entry for `demo.local` in your local hosts (i.e. `/etc/hosts`) file. +This uses the browser for websocket communication so you will need to have an entry or use a routable domain. + +Interlock will detect once the service is available and publish it. Once the tasks are running +and the proxy service has been updated the application should be available via `http://demo.local`. Open +two instances of your browser and you should see text on both instances as you type. + diff --git a/datacenter/ucp/3.0/guides/user/kubernetes/create-service-account.md b/datacenter/ucp/3.0/guides/user/kubernetes/create-service-account.md new file mode 100644 index 0000000000..3e7336fd62 --- /dev/null +++ b/datacenter/ucp/3.0/guides/user/kubernetes/create-service-account.md @@ -0,0 +1,89 @@ +--- +title: Create a service account for a Kubernetes app +description: Learn how to use a service account to give a Kubernetes workload access to cluster resources. 
+keywords: UCP, Docker EE, Kubernetes, authorization, access control, grant +--- + +Kubernetes enables access control for workloads by providing service accounts. +A service account represents an identity for processes that run in a pod. +When a process is authenticated through a service account, it can contact the +API server and access cluster resources. If a pod doesn't have an assigned +service account, it gets the `default` service account. +Learn about [managing service accounts](https://v1-8.docs.kubernetes.io/docs/admin/service-accounts-admin/). + +In Docker EE, you give a service account access to cluster resources by +creating a grant, the same way that you would give access to a user or a team. +Learn how to [grant access to cluster resources](../authorization/index.md). + +In this example, you create a service account and a grant that could be used +for an NGINX server. + +## Create the Kubernetes namespace + +A Kubernetes user account is global, but a service account is scoped to a +namespace, so you need to create a namespace before you create a service +account. + +1. Navigate to the **Namespaces** page and click **Create**. +2. In the **Object YAML** editor, append the following text. + ```yaml + metadata: + name: nginx + ``` +3. Click **Create**. +4. In the **nginx** namespace, click the **More options** icon, + and in the context menu, select **Set Context**, and click **Confirm**. + + ![](../images/create-service-account-1.png){: .with-border} + +5. Click the **Set context for all namespaces** toggle and click **Confirm**. + +## Create a service account + +Create a service account named `nginx-service-account` in the `nginx` +namespace. + +1. Navigate to the **Service Accounts** page and click **Create**. +2. In the **Namespace** dropdown, select **nginx**. +3. In the **Object YAML** editor, paste the following text. + ```yaml + apiVersion: v1 + kind: ServiceAccount + metadata: + name: nginx-service-account + ``` +3. Click **Create**. + + ![](../images/create-service-account-2.png){: .with-border} + +## Create a grant + +To give the service account access to cluster resources, create a grant with +`Restricted Control` permissions. + +1. Navigate to the **Grants** page and click **Create Grant**. +2. In the left pane, click **Resource Sets**, and in the **Type** section, + click **Namespaces**. +3. Select the **nginx** namespace. +4. In the left pane, click **Roles**. In the **Role** dropdown, select + **Restricted Control**. +5. In the left pane, click **Subjects**, and select **Service Account**. + + > Service account subject type + > + > The **Service Account** option in the **Subject Type** section appears only + > when a Kubernetes namespace is present. + {: .important} + +6. In the **Namespace** dropdown, select **nginx**, and in the + **Service Account** dropdown, select **nginx-service-account**. +7. Click **Create**. + + ![](../images/create-service-account-3.png){: .with-border} + +Now `nginx-service-account` has access to all cluster resources that are +assigned to the `nginx` namespace. 
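To put the new account to use, reference it by name in a workload that you deploy
to the `nginx` namespace. A minimal sketch, where the Deployment name and image are
only examples:

```yaml
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      # Run the pods under the service account created above instead of "default".
      serviceAccountName: nginx-service-account
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
```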
+ +## Where to go next + +- [Deploy an ingress controller for a Kubernetes app](deploy-ingress-controller.md) \ No newline at end of file diff --git a/datacenter/ucp/3.0/guides/user/kubernetes/deploy-with-compose.md b/datacenter/ucp/3.0/guides/user/kubernetes/deploy-with-compose.md new file mode 100644 index 0000000000..64172cc844 --- /dev/null +++ b/datacenter/ucp/3.0/guides/user/kubernetes/deploy-with-compose.md @@ -0,0 +1,92 @@ +--- +title: Deploy a Compose-based app to a Kubernetes cluster +description: Use Docker Enterprise Edition to deploy a Kubernetes workload from a Docker compose. +keywords: UCP, Docker EE, Kubernetes, Compose +redirect_from: + - /ee/ucp/user/services/deploy-compose-on-kubernetes/ +--- + +Docker Enterprise Edition enables deploying [Docker Compose](/compose/overview.md/) +files to Kubernetes clusters. Starting in Compile file version 3.3, you use the +same `docker-compose.yml` file that you use for Swarm deployments, but you +specify **Kubernetes workloads** when you deploy the stack. The result is a +true Kubernetes app. + +## Get access to a Kubernetes namespace + +To deploy a stack to Kubernetes, you need a namespace for the app's resources. +Contact your Docker EE administrator to get access to a namespace. In this +example, the namespace has the name `lab-words`. +[Learn to grant access to a Kubernetes namespace](../authorization/grant-permissions/#kubernetes-grants). + +## Create a Kubernetes app from a Compose file + +In this example, you create a simple app, named "lab-words", by using a Compose +file. The following yaml defines the stack: + +```yaml +version: '3.3' + +services: + web: + build: web + image: dockerdemos/lab-web + volumes: + - "./web/static:/static" + ports: + - "80:80" + + words: + build: words + image: dockerdemos/lab-words + deploy: + replicas: 5 + endpoint_mode: dnsrr + resources: + limits: + memory: 16M + reservations: + memory: 16M + + db: + build: db + image: dockerdemos/lab-db +``` + +1. Open the UCP web UI, and in the left pane, click **Shared resources**. +2. Click **Stacks**, and in the **Stacks** page, click **Create stack**. +3. In the **Name** textbox, type "lab-words". +4. In the **Mode** dropdown, select **Kubernetes workloads**. +5. In the **Namespace** drowdown, select **lab-words**. +6. In the **docker-compose.yml** editor, paste the previous YAML. +7. Click **Create** to deploy the stack. + +## Inspect the deployment + +After a few minutes have passed, all of the pods in the `lab-words` deployment +are running. + +1. In the left pane, click **Pods**. Confirm that there are seven pods and + that their status is **Running**. If any have a status of **Pending**, + wait until they're all running. +2. Click one of the pods that has a name starting with **words**, and in the + details pane, scroll down to the **Pod IP** to view the pod's internal IP + address. + + ![](../images/deploy-compose-kubernetes-1.png){: .with-border} + +3. In the left pane, click **Load balancers** and find the **web-published** service. +4. Click the **web-published** service, and in the details pane, scroll down to the + **Spec** section. +5. Under **Ports**, click the URL to open the web UI for the `lab-words` app. + + ![](../images/deploy-compose-kubernetes-2.png){: .with-border} + +6. Look at the IP addresses that are displayed in each tile. The IP address + of the pod you inspected previously may be listed. If it's not, refresh the + page until you see it. + + ![](../images/deploy-compose-kubernetes-3.png){: .with-border} + +7. 
Refresh the page to see how the load is balanced across the pods. + diff --git a/datacenter/ucp/3.0/guides/user/kubernetes/index.md b/datacenter/ucp/3.0/guides/user/kubernetes/index.md new file mode 100644 index 0000000000..3daebde71d --- /dev/null +++ b/datacenter/ucp/3.0/guides/user/kubernetes/index.md @@ -0,0 +1,258 @@ +--- +title: Deploy a workload to a Kubernetes cluster +description: Use Docker Enterprise Edition to deploy Kubernetes workloads from yaml files. +keywords: UCP, Docker EE, orchestration, Kubernetes, cluster +redirect_from: + - /ee/ucp/user/services/deploy-kubernetes-workload/ +--- + +The Docker EE web UI enables deploying your Kubernetes YAML files. In most +cases, no modifications are necessary to deploy on a cluster that's managed by +Docker EE. + +## Deploy an NGINX server + +In this example, a simple Kubernetes Deployment object for an NGINX server is +defined in YAML: + +```yaml +apiVersion: apps/v1beta2 +kind: Deployment +metadata: + name: nginx-deployment +spec: + selector: + matchLabels: + app: nginx + replicas: 2 + template: + metadata: + labels: + app: nginx + spec: + containers: + - name: nginx + image: nginx:1.7.9 + ports: + - containerPort: 80 +``` + +The YAML specifies an earlier version of NGINX, which will be updated in a +later section. + +1. Open the Docker EE web UI, and in the left pane, click **Kubernetes**. +2. Click **Create** to open the **Create Kubernetes Object** page. +3. In the **Namespace** dropdown, select **default**. +4. In the **Object YAML** editor, paste the previous YAML. +5. Click **Create**. + +![](../images/deploy-kubernetes-workload-1.png){: .with-border} + +## Inspect the deployment + +The Docker EE web UI shows the status of your deployment when you click the +links in the **Kubernetes** section of the left pane. + +1. In the left pane. click **Controllers** to see the resource controllers + that Docker EE created for the NGINX server. +2. Click the **nginx-deployment** controller, and in the details pane, scroll + to the **Template** section. This shows the values that Docker EE used to + create the deployment. +3. In the left pane, click **Pods** to see the pods that are provisioned for + the NGINX server. Click one of the pods, and in the details pane, scroll to + the **Status** section to see that pod's phase, IP address, and other + properties. + +![](../images/deploy-kubernetes-workload-2.png){: .with-border} + +## Expose the server + +The NGINX server is up and running, but it's not accessible from outside of the +cluster. Add a `NodePort` service to expose the server on a specified port: + +```yaml +apiVersion: v1 +kind: Service +metadata: + name: nginx + labels: + app: nginx +spec: + type: NodePort + ports: + - port: 80 + nodePort: 32768 + selector: + app: nginx +``` + +The service connects the cluster's internal port 80 to the external port +32768. + +1. Repeat the previous steps and copy-paste the YAML that defines the `nginx` + service into the **Object YAML** editor on the + **Create Kubernetes Object** page. When you click **Create**, the + **Load Balancers** page opens. +2. Click the **nginx** service, and in the details pane, find the **Ports** + section. + + ![](../images/deploy-kubernetes-workload-3.png){: .with-border} + +3. Click the link that's labeled **URL** to view the default NGINX page. + +The YAML definition connects the service to the NGINX server by using the +app label `nginx` and a corresponding label selector. 
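You can also verify the published port from a terminal. Any node in the cluster
should answer on the `nodePort` defined above; the address is a placeholder:

```bash
# Expect the headers of the default NGINX welcome page (HTTP/1.1 200 OK).
curl -I http://<node-address>:32768/
```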
+[Learn about using a service to expose your app](https://v1-8.docs.kubernetes.io/docs/tutorials/kubernetes-basics/expose-intro/). + +## Update the deployment + +Update an existing deployment by applying an updated YAML file. In this +example, the server is scaled up to four replicas and updated to a later +version of NGINX. + +```yaml +... +spec: + progressDeadlineSeconds: 600 + replicas: 4 + revisionHistoryLimit: 10 + selector: + matchLabels: + app: nginx + strategy: + rollingUpdate: + maxSurge: 25% + maxUnavailable: 25% + type: RollingUpdate + template: + metadata: + creationTimestamp: null + labels: + app: nginx + spec: + containers: + - image: nginx:1.8 +... +``` + +1. In the left pane, click **Controllers** and select **nginx-deployment**. +2. In the details pane, click **Configure**, and in the **Edit Deployment** + page, find the **replicas: 2** entry. +3. Change the number of replicas to 4, so the line reads **replicas: 4**. +4. Find the **image: nginx:1.7.9** entry and change it to **image: nginx:1.8**. + + ![](../images/deploy-kubernetes-workload-4.png){: .with-border} + +5. Click **Save** to update the deployment with the new YAML. +6. In the left pane, click **Pods** to view the newly created replicas. + + ![](../images/deploy-kubernetes-workload-5.png){: .with-border} + +## Use the CLI to deploy Kubernetes objects + +With Docker EE, you deploy your Kubernetes objects on the command line by using +`kubectl`. [Install and set up kubectl](https://v1-8.docs.kubernetes.io/docs/tasks/tools/install-kubectl/). + +Use a client bundle to configure your client tools, like Docker CLI and `kubectl` +to communicate with UCP instead of the local deployments you might have running. +[Get your client bundle by using the Docker EE web UI or the command line](../user-access/cli.md). + +When you have the client bundle set up, you can deploy a Kubernetes object +from YAML. + +```yaml +apiVersion: apps/v1beta2 +kind: Deployment +metadata: + name: nginx-deployment +spec: + selector: + matchLabels: + app: nginx + replicas: 2 + template: + metadata: + labels: + app: nginx + spec: + containers: + - name: nginx + image: nginx:1.7.9 + ports: + - containerPort: 80 +--- +apiVersion: v1 +kind: Service +metadata: + name: nginx + labels: + app: nginx +spec: + type: NodePort + ports: + - port: 80 + nodePort: 32768 + selector: + app: nginx +``` + +Save the previous YAML to a file named "deployment.yaml", and use the following +command to deploy the NGINX server: + +```bash +kubectl apply -f deployment.yaml +``` + +## Inspect the deployment + +Use the `describe deployment` option to inspect the deployment: + +```bash +kubectl describe deployment nginx-deployment +``` + +Also, you can use the Docker EE web UI to see the deployment's pods and +controllers. + +## Update the deployment + +Update an existing deployment by applying an updated YAML file. + +Edit deployment.yaml and change the following lines: + +- Increase the number of replicas to 4, so the line reads **replicas: 4**. +- Update the NGINX version by specifying **image: nginx:1.8**. 
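Alternatively, you can apply the same two changes with `kubectl` instead of editing
the file. A roughly equivalent sketch:

```bash
# Scale the deployment from 2 to 4 replicas.
kubectl scale deployment nginx-deployment --replicas=4

# Move the containers to the newer NGINX image.
kubectl set image deployment/nginx-deployment nginx=nginx:1.8
```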
+ +Save the edited YAML to a file named "update.yaml", and use the following +command to deploy the NGINX server: + +```bash +kubectl apply -f update.yaml +``` + +Check that the deployment was scaled out by listing the deployments in the +cluster: + +```bash + kubectl get deployments +``` + +You should see four pods in the deployment: + +```bash +NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE +nginx-deployment 4 4 4 4 2d +``` + +Check that the pods are running the updated image: + +```bash +kubectl describe deployment nginx-deployment | grep -i image +``` + +You should see the currently running image: + +```bash + Image: nginx:1.8 +``` + diff --git a/datacenter/ucp/3.0/guides/user/kubernetes/install-cni-plugin.md b/datacenter/ucp/3.0/guides/user/kubernetes/install-cni-plugin.md new file mode 100644 index 0000000000..b16cf194d8 --- /dev/null +++ b/datacenter/ucp/3.0/guides/user/kubernetes/install-cni-plugin.md @@ -0,0 +1,93 @@ +--- +title: Install a CNI plugin +description: Learn how to install a Container Networking Interface plugin on Docker Universal Control Plane. +keywords: ucp, cli, administration, kubectl, Kubernetes, cni, Container Networking Interface, flannel, weave, ipip, calico +--- + +With Docker Universal Control Plane, you can install a third-party Container +Networking Interface (CNI) plugin when you install UCP, by using the +`--cni-installer-url` option. By default, Docker EE installs the built-in +[Calico](https://github.com/projectcalico/cni-plugin) plugin, but you can +override the default and install a plugin of your choice, +like [Flannel](https://github.com/coreos/flannel) or +[Weave](https://www.weave.works/). + +# Install UCP with a custom CNI plugin + +Modify the [UCP install command-line](../admin/install/index.md#step-4-install-ucp) +to add the `--cni-installer-url` [option](/reference/ucp/3.0/cli/install.md), +providing a URL for the location of the CNI plugin's YAML file: + +```bash +docker container run --rm -it --name ucp \ + -v /var/run/docker.sock:/var/run/docker.sock \ + {{ page.ucp_org }}/{{ page.ucp_repo }}:{{ page.ucp_version }} install \ + --host-address \ + --cni-installer-url \ + --interactive +``` + +You must provide a correct YAML installation file for the CNI plugin, but most +of the default files work on Docker EE with no modification. + +## YAML files for CNI plugins + +Use the following commands to get the YAML files for popular CNI plugins. + +- [Flannel](https://github.com/coreos/flannel) + ```bash + # Get the URL for the Flannel CNI plugin. + CNI_URL="https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml" + ``` +- [Weave](https://www.weave.works/) + ```bash + # Get the URL for the Weave CNI plugin. 
+ CNI_URL="https://cloud.weave.works/k8s/net?k8s-version=Q2xpZW50IFZlcnNpb246IHZlcnNpb24uSW5mb3tNYWpvcjoiMSIsIE1pbm9yOiI5IiwgR2l0VmVyc2lvbjoidjEuOS4zIiwgR2l0Q29tbWl0OiJkMjgzNTQxNjU0NGYyOThjOTE5ZTJlYWQzYmUzZDA4NjRiNTIzMjNiIiwgR2l0VHJlZVN0YXRlOiJjbGVhbiIsIEJ1aWxkRGF0ZToiMjAxOC0wMi0wN1QxMjoyMjoyMVoiLCBHb1ZlcnNpb246ImdvMS45LjIiLCBDb21waWxlcjoiZ2MiLCBQbGF0Zm9ybToibGludXgvYW1kNjQifQpTZXJ2ZXIgVmVyc2lvbjogdmVyc2lvbi5JbmZve01ham9yOiIxIiwgTWlub3I6IjgrIiwgR2l0VmVyc2lvbjoidjEuOC4yLWRvY2tlci4xNDMrYWYwODAwNzk1OWUyY2UiLCBHaXRDb21taXQ6ImFmMDgwMDc5NTllMmNlYWUxMTZiMDk4ZWNhYTYyNGI0YjI0MjBkODgiLCBHaXRUcmVlU3RhdGU6ImNsZWFuIiwgQnVpbGREYXRlOiIyMDE4LTAyLTAxVDIzOjI2OjE3WiIsIEdvVmVyc2lvbjoiZ28xLjguMyIsIENvbXBpbGVyOiJnYyIsIFBsYXRmb3JtOiJsaW51eC9hbWQ2NCJ9Cg==" + ``` + If you have kubectl available, for example by using + [Docker for Mac](/docker-for-mac/kubernetes.md), you can use the following + command to get the URL for the [Weave](https://www.weave.works/) CNI plugin: + ```bash + # Get the URL for the Weave CNI plugin. + CNI_URL="https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')" + ``` +- [Romana](http://docs.romana.io/) + ```bash + # Get the URL for the Romana CNI plugin. + CNI_URL="https://raw.githubusercontent.com/romana/romana/master/docs/kubernetes/romana-kubeadm.yml" + ``` + +## Disable IP in IP overlay tunneling + +The Calico CNI plugin supports both overlay (IPIP) and underlay forwarding +technologies. By default, Docker UCP uses IPIP overlay tunneling. + +If you're used to managing applications at the network level through the +underlay visibility, or you want to reuse existing networking tools in the +underlay, you may want to disable the IPIP functionality. Run the following +commands on the Kubernetes master node to disable IPIP overlay tunneling. + +```bash +# Exec into the Calico Kubernetes controller container. +docker exec -it $(docker ps --filter name=k8s_calico-kube-controllers_calico-kube-controllers -q) sh + +# Download calicoctl +wget https://github.com/projectcalico/calicoctl/releases/download/v3.1.1/calicoctl + +# Get the IP pool configuration. +./calicoctl get ippool -o yaml > ippool.yaml + +# Edit the file: Disable IPIP in ippool.yaml by setting "ipipMode: Never". + +# Apply the edited file to the Calico plugin. +./calicoctl apply -f ippool.yaml + +``` + +These steps disable overlay tunneling, and Calico uses the underlay networking, +in environments where it's supported. + +## Where to go next + +- [Install UCP for production](../admin/install.md) +- [Deploy a workload to a Kubernetes cluster](../kubernetes.md) diff --git a/datacenter/ucp/3.0/guides/user/kubernetes/layer-7-routing.md b/datacenter/ucp/3.0/guides/user/kubernetes/layer-7-routing.md new file mode 100644 index 0000000000..c1d343e0b2 --- /dev/null +++ b/datacenter/ucp/3.0/guides/user/kubernetes/layer-7-routing.md @@ -0,0 +1,310 @@ +--- +title: Layer 7 routing +description: Learn how to route traffic to your Kubernetes workloads in + Docker Enterprise Edition. +keywords: UCP, Kubernetes, ingress, routing +redirect_from: + - /ee/ucp/kubernetes/deploy-ingress-controller/ +--- + +When you deploy a Kubernetes application, you may want to make it accessible +to users using hostnames instead of IP addresses. + +Kubernetes provides **ingress controllers** for this. This functionality is +specific to Kubernetes. If you're trying to route traffic to Swarm-based +applications, check [layer 7 routing with Swarm](../interlock/index.md). 
+ +Use an ingress controller when you want to: + +* Give your Kubernetes app an externally-reachable URL. +* Load-balance traffic to your app. + +Kubernetes provides an NGINX ingress controller that you can use in Docker EE +without modifications. +Learn about [ingress in Kubernetes](https://v1-8.docs.kubernetes.io/docs/concepts/services-networking/ingress/). + +## Create a dedicated namespace + +1. Navigate to the **Namespaces** page, and click **Create**. +2. In the **Object YAML** editor, append the following text. + ```yaml + metadata: + name: ingress-nginx + ``` + + The finished YAML should look like this. + + ```yaml + apiVersion: v1 + kind: Namespace + metadata: + name: ingress-nginx + ``` +3. Click **Create**. +4. In the **ingress-nginx** namespace, click the **More options** icon, + and in the context menu, select **Set Context**. + + ![](../images/deploy-ingress-controller-1.png){: .with-border} + +## Create a grant + +The default service account that's associated with the `ingress-nginx` +namespace needs access to Kubernetes resources, so create a grant with +`Restricted Control` permissions. + +1. From UCP, navigate to the **Grants** page, and click **Create Grant**. +2. Within the **Subject** pane, select **Service Account**. For the + **Namespace** select **ingress-nginx**, and select **default** for + the **Service Account**. Click **Next**. +3. Within the **Role** pane, select **Restricted Control**, and then click + **Next**. +4. Within the **Resource Set** pane, select the **Type** **Namespace**, and + select the **Apply grant to all existing and new namespaces** toggle. +5. Click **Create**. + +> Ingress and role-based access control +> +> Docker EE has an access control system that differs from Kubernetes RBAC. +> If your ingress controller has access control requirements, you need to +> create corresponding UCP grants. Learn to +> [migrate Kubernetes roles to Docker EE authorization](../authorization/migrate-kubernetes-roles.md). +{: .important} + +## Deploy NGINX ingress controller + +The cluster is ready for the ingress controller deployment, which has three +main components: + +- a simple HTTP server, named `default-http-backend`, +- an ingress controller, named `nginx-ingress-controller`, and +- a service that exposes the app, named `ingress-nginx`. + +Navigate to the **Create Kubernetes Object** page, and in the **Object YAML** +editor, paste the following YAML. + +```yaml +apiVersion: apps/v1beta2 +kind: Deployment +metadata: + name: default-http-backend + labels: + app: default-http-backend + namespace: ingress-nginx +spec: + replicas: 1 + selector: + matchLabels: + app: default-http-backend + template: + metadata: + labels: + app: default-http-backend + annotations: + seccomp.security.alpha.kubernetes.io/pod: docker/default + spec: + terminationGracePeriodSeconds: 60 + containers: + - name: default-http-backend + # Any image is permissable as long as: + # 1. It serves a 404 page at / + # 2. 
It serves 200 on a /healthz endpoint + image: gcr.io/google_containers/defaultbackend:1.4 + livenessProbe: + httpGet: + path: /healthz + port: 8080 + scheme: HTTP + initialDelaySeconds: 30 + timeoutSeconds: 5 + ports: + - containerPort: 8080 + resources: + limits: + cpu: 10m + memory: 20Mi + requests: + cpu: 10m + memory: 20Mi +--- +apiVersion: v1 +kind: Service +metadata: + name: default-http-backend + namespace: ingress-nginx + labels: + app: default-http-backend +spec: + ports: + - port: 80 + targetPort: 8080 + selector: + app: default-http-backend +--- +kind: ConfigMap +apiVersion: v1 +metadata: + name: nginx-configuration + namespace: ingress-nginx + labels: + app: ingress-nginx +--- +kind: ConfigMap +apiVersion: v1 +metadata: + name: tcp-services + namespace: ingress-nginx +--- +kind: ConfigMap +apiVersion: v1 +metadata: + name: udp-services + namespace: ingress-nginx +--- +apiVersion: apps/v1beta2 +kind: Deployment +metadata: + name: nginx-ingress-controller + namespace: ingress-nginx +spec: + replicas: 1 + selector: + matchLabels: + app: ingress-nginx + template: + metadata: + labels: + app: ingress-nginx + annotations: + prometheus.io/port: '10254' + prometheus.io/scrape: 'true' + seccomp.security.alpha.kubernetes.io/pod: docker/default + spec: + initContainers: + - command: + - sh + - -c + - sysctl -w net.core.somaxconn=32768; sysctl -w net.ipv4.ip_local_port_range="1024 65535" + image: alpine:3.6 + imagePullPolicy: IfNotPresent + name: sysctl + securityContext: + privileged: true + containers: + - name: nginx-ingress-controller + image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.17.1 + args: + - /nginx-ingress-controller + - --default-backend-service=$(POD_NAMESPACE)/default-http-backend + - --configmap=$(POD_NAMESPACE)/nginx-configuration + - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services + - --udp-services-configmap=$(POD_NAMESPACE)/udp-services + - --annotations-prefix=nginx.ingress.kubernetes.io + env: + - name: POD_NAME + valueFrom: + fieldRef: + fieldPath: metadata.name + - name: POD_NAMESPACE + valueFrom: + fieldRef: + fieldPath: metadata.namespace + ports: + - name: http + containerPort: 80 + - name: https + containerPort: 443 + livenessProbe: + failureThreshold: 3 + httpGet: + path: /healthz + port: 10254 + scheme: HTTP + initialDelaySeconds: 10 + periodSeconds: 10 + successThreshold: 1 + timeoutSeconds: 1 + readinessProbe: + failureThreshold: 3 + httpGet: + path: /healthz + port: 10254 + scheme: HTTP + periodSeconds: 10 + successThreshold: 1 + timeoutSeconds: 1 +--- +apiVersion: v1 +kind: Service +metadata: + name: ingress-nginx + namespace: ingress-nginx +spec: + type: NodePort + ports: + - name: http + port: 80 + targetPort: 80 + protocol: TCP + - name: https + port: 443 + targetPort: 443 + protocol: TCP + selector: + app: ingress-nginx +``` + +## Check your deployment + +The `default-http-backend` provides a simple service that serves a 404 page +at `/` and serves 200 on the `/healthz` endpoint. + +1. Navigate to the **Controllers** page and confirm that the + **default-http-backend** and **nginx-ingress-controller** objects are + scheduled. + + > Scheduling latency + > + > It may take several seconds for the HTTP backend and the ingress controller's + > `Deployment` and `ReplicaSet` objects to be scheduled. + {: .important} + + ![](../images/deploy-ingress-controller-2.png){: .with-border} + +2. When the workload is running, navigate to the **Load Balancers** page + and click the **ingress-nginx** service. 
+ + ![](../images/deploy-ingress-controller-3.png){: .with-border} + +3. In the details pane, click the first URL in the **Ports** section. + + A new page opens, displaying `default backend - 404`. + +## Check your deployment from the CLI + +From the command line, confirm that the deployment is running by using +`curl` with the URL that's shown on the details pane of the **ingress-nginx** +service. + +```bash +curl -I http://:/ +``` + +This command returns the following result. + +``` +HTTP/1.1 404 Not Found +Server: nginx/1.13.8 +``` + +Test the server's health ping service by appending `/healthz` to the URL. + +```bash +curl -I http://:/healthz +``` + +This command returns the following result. + +``` +HTTP/1.1 200 OK +Server: nginx/1.13.8 +``` diff --git a/datacenter/ucp/3.0/guides/user/swarm/deploy-multi-service-app.md b/datacenter/ucp/3.0/guides/user/swarm/deploy-multi-service-app.md new file mode 100644 index 0000000000..eb7462c80c --- /dev/null +++ b/datacenter/ucp/3.0/guides/user/swarm/deploy-multi-service-app.md @@ -0,0 +1,160 @@ +--- +title: Deploy a multi-service app +description: Learn how to deploy containerized applications on a cluster, with Docker Universal Control Plane. +keywords: ucp, deploy, application, stack, service, compose +redirect_from: + - /ee/ucp/user/services/ + - /ee/ucp/swarm/deploy-from-cli/ + - /ee/ucp/swarm/deploy-from-ui/ +--- + +Docker Universal Control Plane allows you to use the tools you already know, +like `docker stack deploy` to deploy multi-service applications. You can +also deploy your applications from the UCP web UI. + +In this example we'll deploy a multi-service application that allows users to +vote on whether they prefer cats or dogs. + +```yaml +version: "3" +services: + + # A Redis key-value store to serve as message queue + redis: + image: redis:alpine + ports: + - "6379" + networks: + - frontend + + # A PostgreSQL database for persistent storage + db: + image: postgres:9.4 + volumes: + - db-data:/var/lib/postgresql/data + networks: + - backend + + # Web UI for voting + vote: + image: dockersamples/examplevotingapp_vote:before + ports: + - 5000:80 + networks: + - frontend + depends_on: + - redis + + # Web UI to count voting results + result: + image: dockersamples/examplevotingapp_result:before + ports: + - 5001:80 + networks: + - backend + depends_on: + - db + + # Worker service to read from message queue + worker: + image: dockersamples/examplevotingapp_worker + networks: + - frontend + - backend + +networks: + frontend: + backend: + +volumes: + db-data: +``` + +## From the web UI + +To deploy your applications from the **UCP web UI**, on the left navigation bar +expand **Shared resources**, choose **Stacks**, and click **Create stack**. + +![Stack list](../../images/deploy-multi-service-app-1.png){: .with-border} + +Choose the name you want for your stack, and choose **Swarm services** as the +deployment mode. + +When you choose this option, UCP deploys your app using the +Docker swarm built-in orchestrator. If you choose 'Basic containers' as the +deployment mode, UCP deploys your app using the classic Swarm orchestrator. + +Then copy-paste the application definition in docker-compose.yml format. + +![Deploy stack](../../images/deploy-multi-service-app-2.png){: .with-border} + +Once you're done click **Create** to deploy the stack. + +## From the CLI + +To deploy the application from the CLI, start by configuring your Docker +CLI using a [UCP client bundle](../user-access/cli.md). 
Then, create a file named `docker-stack.yml` with the content of the YAML above,
and run one of the following commands.

If you're using the Docker CLI:

```
docker stack deploy --compose-file docker-stack.yml voting_app
```

If you're using Docker Compose:

```
docker-compose --file docker-stack.yml --project-name voting_app up -d
```
    + + +## Check your app + +Once the multi-service application is deployed, it shows up in the UCP web UI. +The 'Stacks' page shows that you've deployed the voting app. + +![Stack deployed](../../images/deploy-multi-service-app-3.png){: .with-border} + +You can also inspect the individual services of the app you deployed. For that, +click the **voting_app** to open the details pane, open **Inspect resources** and +choose **Services**, since this app was deployed with the built-in Docker swarm +orchestrator. + +![Service list](../../images/deploy-multi-service-app-4.png){: .with-border} + +You can also use the Docker CLI to check the status of your app: + +``` +docker stack ps voting_app +``` + +Great! The app is deployed so we can cast votes by accessing the service that's +listening on port 5000. +You don't need to know the ports a service listens to. You can +**click the voting_app_vote** service and click on the **Published endpoints** +link. + +![Voting app](../../images/deploy-multi-service-app-5.png){: .with-border} + +## Limitations + +When deploying applications from the web UI, you can't reference any external +files, no matter if you're using the built-in swarm orchestrator or classic +Swarm. For that reason, the following keywords are not supported: + +* build +* dockerfile +* env_file + +Also, UCP doesn't store the stack definition you've used to deploy the stack. +You can use a version control system for this. + diff --git a/datacenter/ucp/3.0/guides/user/swarm/deploy-to-collection.md b/datacenter/ucp/3.0/guides/user/swarm/deploy-to-collection.md new file mode 100644 index 0000000000..746f5b44f3 --- /dev/null +++ b/datacenter/ucp/3.0/guides/user/swarm/deploy-to-collection.md @@ -0,0 +1,103 @@ +--- +title: Deploy application resources to a collection +description: Learn how to manage user access to application resources by using collections. +keywords: UCP, authentication, user management, stack, collection, role, application, resources +redirect_from: + - /ee/ucp/user/services/deploy-stack-to-collection/ +--- + +Docker Universal Control Plane enforces role-based access control when you +deploy services. By default, you don't need to do anything, because UCP deploys +your services to a default collection, unless you specify another one. You can +customize the default collection in your UCP profile page. +[Learn more about access control and collections](../authorization/index.md). + +UCP defines a collection by its path. For example, a user's default collection +has the path `/Shared/Private/`. To deploy a service to a collection +that you specify, assign the collection's path to the *access label* of the +service. The access label is named `com.docker.ucp.access.label`. + +When UCP deploys a service, it doesn't automatically create the collections +that correspond with your access labels. An administrator must create these +collections and [grant users access to them](../authorization/grant-permissions.md). +Deployment fails if UCP can't find a specified collection or if the user +doesn't have access to it. + +## Deploy a service to a collection by using the CLI + +Here's an example of a `docker service create` command that deploys a service +to a `/Shared/database` collection: + +```bash +docker service create \ + --name redis_2 \ + --label com.docker.ucp.access.label="/Shared/database" + redis:3.0.6 +``` + +## Deploy services to a collection by using a Compose file + +You can also specify a target collection for a service in a Compose file. 
+In the service definition, add a `labels:` dictionary, and assign the +collection's path to the `com.docker.ucp.access.label` key. + +If you don't specify access labels in the Compose file, resources are placed in +the user's default collection when the stack is deployed. + +You can place a stack's resources into multiple collections, but most of the +time, you won't need to do this. + +Here's an example of a Compose file that specifies two services, WordPress and +MySQL, and gives them the access label `/Shared/wordpress`: + +```yaml +version: '3.1' + +services: + + wordpress: + image: wordpress + ports: + - 8080:80 + environment: + WORDPRESS_DB_PASSWORD: example + deploy: + labels: + com.docker.ucp.access.label: /Shared/wordpress + mysql: + image: mysql:5.7 + environment: + MYSQL_ROOT_PASSWORD: example + deploy: + labels: + com.docker.ucp.access.label: /Shared/wordpress +``` + +To deploy the application: + +1. In the UCP web UI, navigate to the **Stacks** page and click **Create Stack**. +2. Name the app "wordpress". +3. From the **Mode** dropdown, select **Swarm Services**. +4. Copy and paste the previous compose file into the **docker-compose.yml** editor. +5. Click **Create** to deploy the application, and click **Done** when the + deployment completes. + + ![](../../images/deploy-stack-to-collection-1.png){: .with-border} + +If the `/Shared/wordpress` collection doesn't exist, or if you don't have +a grant for accessing it, UCP reports an error. + +To confirm that the service deployed to the `/Shared/wordpress` collection: + +1. In the **Stacks** page, click **wordpress**. +2. In the details pane, click **Inspect Resource** and select **Services**. +3. On the **Services** page, click **wordpress_mysql**. In the details pane, + make sure that the **Collection** is `/Shared/wordpress`. + +![](../../images/deploy-stack-to-collection-2.png){: .with-border} + +## Where to go next + +- [Deploy a Compose-based app to a Kubernetes cluster](../kubernetes/deploy-with-compose.md) +- [Set metadata on a service (-l, –label)](/engine/reference/commandline/service_create/#set-metadata-on-a-service--l-label.md) +- [Docker object labels](/engine/userguide/labels-custom-metadata/.md) diff --git a/datacenter/ucp/3.0/guides/user/swarm/index.md b/datacenter/ucp/3.0/guides/user/swarm/index.md new file mode 100644 index 0000000000..76ff69bcaa --- /dev/null +++ b/datacenter/ucp/3.0/guides/user/swarm/index.md @@ -0,0 +1,67 @@ +--- +title: Deploy a single service +description: Learn how to deploy services to a cluster managed by Universal Control Plane. +keywords: ucp, deploy, service +redirect_from: + - /ee/ucp/user/services/deploy-a-service/ +--- + +You can deploy and monitor your services from the UCP web UI. In this example +we'll deploy an [NGINX](https://www.nginx.com/) web server and make it +accessible on port `8000`. + +In your browser, navigate to the UCP web UI and click **Services**. On the +**Create a Service** page, click **Create Service** to configure the +NGINX service. + +Fill in the following fields: + +| Field | Value | +|:-------------|:-------------| +| Service name | nginx | +| Image name | nginx:latest | + +![](../../images/deploy-a-service-1.png){: .with-border} + +In the left pane, click **Network**. 
In the **Ports** section, +click **Publish Port** and fill in the following fields: + +| Field | Value | +|:---------------|:--------| +| Target port | 80 | +| Protocol | tcp | +| Publish mode | Ingress | +| Published port | 8000 | + +![](../../images/deploy-a-service-2.png){: .with-border} + +Click **Confirm** to map the ports for the NGINX service. + +Once you've specified the service image and ports, click **Create** to +deploy the service into the UCP cluster. + +![](../../images/deploy-a-service-3.png){: .with-border} + +Once the service is up and running, you'll be able to see the default NGINX +page, by going to `http://:8000`. In the **Services** list, click the +**nginx** service, and in the details pane, click the link under +**Published Endpoints**. + +![](../../images/deploy-a-service-4.png){: .with-border} + +Clicking the link opens a new tab that shows the default NGINX home page. + +![](../../images/deploy-a-service-5.png){: .with-border} + +## Use the CLI to deploy the service + +You can also deploy the same service from the CLI. Once you've set up your +[UCP client bundle](../user-access/cli.md), run: + +```bash +docker service create --name nginx \ + --publish mode=ingress,target=80,published=8000 \ + --label com.docker.ucp.access.owner= \ + nginx +``` + diff --git a/datacenter/ucp/3.0/guides/user/swarm/use-secrets.md b/datacenter/ucp/3.0/guides/user/swarm/use-secrets.md new file mode 100644 index 0000000000..1fb05bc865 --- /dev/null +++ b/datacenter/ucp/3.0/guides/user/swarm/use-secrets.md @@ -0,0 +1,193 @@ +--- +title: Manage secrets +description: Learn how to manage your passwords, certificates, and other secrets in a secure way with Docker EE +keywords: UCP, secret, password, certificate, private key +redirect_from: + - /ee/ucp/user/secrets/ +--- + +When deploying and orchestrating services, you often need to configure them +with sensitive information like passwords, TLS certificates, or private keys. + +Universal Control Plane allows you to store this sensitive information, also +known as *secrets*, in a secure way. It also gives you role-based access control +so that you can control which users can use a secret in their services +and which ones can manage the secret. + +UCP extends the functionality provided by Docker Engine, so you can continue +using the same workflows and tools you already use, like the Docker CLI client. +[Learn how to use secrets with Docker](/engine/swarm/secrets/). + +In this example, we're going to deploy a WordPress application that's composed of +two services: + +* wordpress: The service that runs Apache, PHP, and WordPress +* wordpress-db: a MySQL database used for data persistence + +Instead of configuring our services to use a plain text password stored in an +environment variable, we're going to create a secret to store the password. +When we deploy those services, we'll attach the secret to them, which creates +a file with the password inside the container running the service. +Our services will be able to use that file, but no one else will be able +to see the plain text password. + +To make things simpler, we're not going to configure the database service to +persist data. When the service stops, the data is lost. + +## Create a secret + +In the UCP web UI, open the **Swarm** section and click **Secrets**. + +![](../../images/manage-secrets-1.png){: .with-border} + +Click **Create Secret** to create a new secret. Once you create the secret +you won't be able to edit it or see the secret data again. 
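You can also create the secret from the Docker CLI once your UCP client bundle is sourced. This is a minimal sketch, assuming the secret name used throughout this example and that the password value is piped in on standard input:

```bash
# Create the secret; the trailing "-" makes the CLI read the value from stdin
echo "my-wordpress-password" | docker secret create wordpress-password-v1 -
```

If you use permission labels, add the same label to the secret at creation time with `--label`, so that it matches the label on the services that will use it.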
+ +![](../../images/manage-secrets-2.png){: .with-border} + +Assign a unique name to the secret and set its value. You can optionally define +a permission label so that other users have permission to use this secret. Also +note that a service and secret must have the same permission label, or both +must have no permission label at all, in order to be used together. + +In this example, the secret is named `wordpress-password-v1`, to make it easier +to track which version of the password our services are using. + + +## Use secrets in your services + +Before creating the MySQL and WordPress services, we need to create the network +that they're going to use to communicate with one another. + +Navigate to the **Networks** page, and create the `wordpress-network` with the +default settings. + +![](../../images/manage-secrets-3.png){: .with-border} + +Now create the MySQL service: + +1. Navigate to the **Services** page and click **Create Service**. Name the + service "wordpress-db", and for the **Task Template**, use the "mysql:5.7" + image. +2. In the left pane, click **Network**. In the **Networks** section, click + **Attach Network**, and in the dropdown, select **wordpress-network**. +3. In the left pane, click **Environment**. The Environment page is where you + assign secrets, environment variables, and labels to the service. +4. In the **Secrets** section, click **Use Secret**, and in the **Secret Name** + dropdown, select **wordpress-password-v1**. Click **Confirm** to associate + the secret with the service. +5. In the **Environment Variable** section, click **Add Environment Variable** and enter + the string "MYSQL_ROOT_PASSWORD_FILE=/run/secrets/wordpress-password-v1" to + create an environment variable that holds the path to the password file in + the container. +6. If you specified a permission label on the secret, you must set the same + permission label on this service. If the secret doesn't have a permission + label, then this service also can't have a permission label. +7. Click **Create** to deploy the MySQL service. + +This creates a MySQL service that's attached to the `wordpress-network` network +and that uses the `wordpress-password-v1` secret. By default, this creates a file +with the same name at `/run/secrets/` inside the container running +the service. + +We also set the `MYSQL_ROOT_PASSWORD_FILE` environment variable to configure +MySQL to use the content of the `/run/secrets/wordpress-password-v1` file as +the root password. + +![](../../images/manage-secrets-4.png){: .with-border} + +Now that the MySQL service is running, we can deploy a WordPress service that +uses MySQL as a storage backend: + +1. Navigate to the **Services** page and click **Create Service**. Name the + service "wordpress", and for the **Task Template**, use the + "wordpress:latest" image. +2. In the left pane, click **Network**. In the **Networks** section, click + **Attach Network**, and in the dropdown, select **wordpress-network**. +3. In the left pane, click **Environment**. +4. In the **Secrets** section, click **Use Secret**, and in the **Secret Name** + dropdown, select **wordpress-password-v1**. Click **Confirm** to associate + the secret with the service. +5. In the **Environment Variable**, click **Add Environment Variable** and enter + the string "WORDPRESS_DB_PASSWORD_FILE=/run/secrets/wordpress-password-v1" to + create an environment variable that holds the path to the password file in + the container. +6. 
Add another environment variable and enter the string + "WORDPRESS_DB_HOST=wordpress-db:3306". +7. If you specified a permission label on the secret, you must set the same + permission label on this service. If the secret doesn't have a permission + label, then this service also can't have a permission label. +8. Click **Create** to deploy the WordPress service. + +![](../../images/manage-secrets-4a.png){: .with-border} + +This creates the WordPress service attached to the same network as the MySQL +service so that they can communicate, and maps the port 80 of the service to +port 8000 of the cluster routing mesh. + +![](../../images/manage-secrets-5.png){: .with-border} + +Once you deploy this service, you'll be able to access it using the +IP address of any node in your UCP cluster, on port 8000. + +![](../../images/manage-secrets-6.png){: .with-border} + +## Update a secret + +If the secret gets compromised, you'll need to rotate it so that your services +start using a new secret. In this case, we need to change the password we're +using and update the MySQL and WordPress services to use the new password. + +Since secrets are immutable in the sense that you can't change the data +they store after they are created, we can use the following process to achieve +this: + +1. Create a new secret with a different password. +2. Update all the services that are using the old secret to use the new one + instead. +3. Delete the old secret. + +Let's rotate the secret we've created. Navigate to the **Secrets** page +and create a new secret named `wordpress-password-v2`. + +![](../../images/manage-secrets-7.png){: .with-border} + +This example is simple, and we know which services we need to update, +but in the real world, this might not always be the case. + +Click the **wordpress-password-v1** secret. In the details pane, +click **Inspect Resource**, and in the dropdown, select **Services**. + +![](../../images/manage-secrets-8.png){: .with-border} + +Start by updating the `wordpress-db` service to stop using the secret +`wordpress-password-v1` and use the new version instead. + +The `MYSQL_ROOT_PASSWORD_FILE` environment variable is currently set to look for +a file at `/run/secrets/wordpress-password-v1` which won't exist after we +update the service. So we have two options: + +1. Update the environment variable to have the value +`/run/secrets/wordpress-password-v2`, or +2. Instead of mounting the secret file in `/run/secrets/wordpress-password-v2` +(the default), we can customize it to be mounted in`/run/secrets/wordpress-password-v1` +instead. This way we don't need to change the environment variable. This is +what we're going to do. + +When adding the secret to the services, instead of leaving the **Target Name** +field with the default value, set it with `wordpress-password-v1`. This will make +the file with the content of `wordpress-password-v2` be mounted in +`/run/secrets/wordpress-password-v1`. + +Delete the `wordpress-password-v1` secret, and click **Update**. + +![](../../images/manage-secrets-9.png){: .with-border} + +Then do the same thing for the WordPress service. After this is done, the +WordPress application is running and using the new password. + +## Managing secrets through the CLI + +You can find additional documentation on managing secrets through the CLI at [How Docker manages secrets](/engine/swarm/secrets/#read-more-about-docker-secret-commands). 
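As a concrete illustration, here is a CLI sketch of the rotation workflow described above, assuming the secret and service names used in this example and a sourced UCP client bundle:

```bash
# 1. Create the replacement secret
echo "my-new-wordpress-password" | docker secret create wordpress-password-v2 -

# 2. Swap the secret on each service. Mounting the new secret at the old
#    target path keeps the *_PASSWORD_FILE environment variables unchanged.
docker service update \
  --secret-rm wordpress-password-v1 \
  --secret-add source=wordpress-password-v2,target=wordpress-password-v1 \
  wordpress-db

docker service update \
  --secret-rm wordpress-password-v1 \
  --secret-add source=wordpress-password-v2,target=wordpress-password-v1 \
  wordpress

# 3. When no service references it anymore, delete the old secret
docker secret rm wordpress-password-v1
```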
+ + diff --git a/docker-for-aws/release-notes.md b/docker-for-aws/release-notes.md index 2ebd3dfb14..a3249e7f4b 100644 --- a/docker-for-aws/release-notes.md +++ b/docker-for-aws/release-notes.md @@ -6,33 +6,19 @@ title: Docker for AWS release notes {% include d4a_buttons.md %} -## Enterprise Edition -[Docker Enterprise Edition Lifecycle](https://success.docker.com/Policies/Maintenance_Lifecycle){: target="_blank" class="_"} - -[Deploy Docker Enterprise Edition (EE) for AWS](https://hub.docker.com/editions/enterprise/docker-ee-aws?tab=description){: target="_blank" class="button outline-btn blank_"} - -### 17.06 EE - -- Docker engine 17.06 EE -- For Std/Adv external logging has been removed, as it is now handled by [UCP](https://docs.docker.com/datacenter/ucp/2.0/guides/configuration/configure-logs/){: target="_blank" class="_"} -- UCP 2.2.3 -- DTR 2.3.3 - -### 17.03 EE - -- Docker engine 17.03 EE -- UCP 2.1.5 -- DTR 2.2.7 - - > **Note** Starting with 18.02.0-CE EFS encryption option has been removed to prevent the [recreation of the EFS volume](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks-update-behaviors.html){: target="_blank" class="_"}. ## Stable channel -### 18.06.1 CE - {{aws_blue_latest}} +### 18.09.2 +Release date: 2/24/2019 + +- Docker Engine upgraded to [Docker 18.09.2](https://github.com/docker/docker-ce/releases/tag/v18.09.2){: target="_blank" class="_"} + +### 18.06.1 CE + Release date: 8/24/2018 - Docker Engine upgraded to [Docker 18.06.1 CE](https://github.com/docker/docker-ce/releases/tag/v18.06.1-ce){: target="_blank" class="_"} @@ -139,3 +125,21 @@ Release date: 10/18/2017 ## Template archive If you are looking for templates from older releases, check out the [template archive](/docker-for-aws/archive.md). 
+ +## Enterprise Edition +[Docker Enterprise Edition Lifecycle](https://success.docker.com/Policies/Maintenance_Lifecycle){: target="_blank" class="_"} + +[Deploy Docker Enterprise Edition (EE) for AWS](https://hub.docker.com/editions/enterprise/docker-ee-aws?tab=description){: target="_blank" class="button outline-btn blank_"} + +### 17.06 EE + +- Docker engine 17.06 EE +- For Std/Adv external logging has been removed, as it is now handled by [UCP](https://docs.docker.com/datacenter/ucp/2.0/guides/configuration/configure-logs/){: target="_blank" class="_"} +- UCP 2.2.3 +- DTR 2.3.3 + +### 17.03 EE + +- Docker engine 17.03 EE +- UCP 2.1.5 +- DTR 2.2.7 diff --git a/docker-for-azure/release-notes.md b/docker-for-azure/release-notes.md index a6dbe43590..a151dcea35 100644 --- a/docker-for-azure/release-notes.md +++ b/docker-for-azure/release-notes.md @@ -9,6 +9,11 @@ title: Docker for Azure Release Notes ## Enterprise Edition [Docker Enterprise Edition Lifecycle](https://success.docker.com/Policies/Maintenance_Lifecycle){: target="_blank"} +### 17.06.2-ee-19 EE +- Docker engine 17.06.2-ee-19 EE +- UCP 2.2.16 +- DTR 2.3.10 + ### 17.06 EE - Docker engine 17.06 EE @@ -24,10 +29,15 @@ title: Docker for Azure Release Notes ## Stable channel -### 18.06.1 CE - {{azure_blue_latest}} +### 18.09.2 +Release date: 2/24/2019 + + - Docker Engine upgraded to [Docker 18.09.2](https://github.com/docker/docker-ce/releases/tag/v18.09.2){: target="_blank" class="_"} + +### 18.06.1 CE + Release date: 8/24/2018 - Docker Engine upgraded to [Docker 18.06.1 CE](https://github.com/docker/docker-ce/releases/tag/v18.06.1-ce){: target="_blank" class="_"} diff --git a/docker-for-windows/install.md b/docker-for-windows/install.md index b1fd76f684..b2b14c0147 100644 --- a/docker-for-windows/install.md +++ b/docker-for-windows/install.md @@ -49,6 +49,8 @@ Hub](https://hub.docker.com/editions/community/docker-ce-desktop-windows){: more information, see [Running Docker Desktop for Windows in nested virtualization scenarios](troubleshoot.md#running-docker-for-windows-in-nested-virtualization-scenarios) +**Note**: Refer to the [Docker compatibility matrix](https://success.docker.com/article/compatibility-matrix) for complete Docker compatibility information with Windows Server. + ### About Windows containers Looking for information on using Windows containers? diff --git a/docker-for-windows/troubleshoot.md b/docker-for-windows/troubleshoot.md index 7fb64faaf9..4a49fd257d 100644 --- a/docker-for-windows/troubleshoot.md +++ b/docker-for-windows/troubleshoot.md @@ -406,9 +406,9 @@ limitations with regard to networking due to the current implementation of Windows NAT (WinNAT). These limitations may potentially resolve as the Windows containers project evolves. -One thing you may encounter rather immediately is that published ports on -Windows containers do not do loopback to the local host. Instead, container -endpoints are only reachable from the host using the container's IP and port. +Windows containers work with published ports on localhost beginning with Windows 10 1809 using Docker Desktop for Windows as well as Windows Server 2019 / 1809 using Docker EE. + +If you are working with a version prior to `Windows 10 18.09`, published ports on Windows containers have an issue with loopback to the localhost. You can only reach container endpoints from the host using the container's IP and port. With `Windows 10 18.09`, containers work with published ports on localhost. 
So, in a scenario where you use Docker to pull an image and run a webserver with a command like this: diff --git a/ee/dtr/images/delegate-image-signing-1.svg b/ee/dtr/images/delegate-image-signing-1.svg deleted file mode 100644 index 73ffb9a892..0000000000 --- a/ee/dtr/images/delegate-image-signing-1.svg +++ /dev/null @@ -1,179 +0,0 @@ - - - - - -delegate-image-signing-1 -Created with Sketch. - - - - - - - - IT ops team - - - QA team - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - dev/node - - - - dev/java - - - - dev/nginx - - - - - - - diff --git a/ee/dtr/images/remoteucp-addregistry.png b/ee/dtr/images/remoteucp-addregistry.png new file mode 100644 index 0000000000..18566e06cc Binary files /dev/null and b/ee/dtr/images/remoteucp-addregistry.png differ diff --git a/ee/dtr/images/remoteucp-enablesigning.png b/ee/dtr/images/remoteucp-enablesigning.png new file mode 100644 index 0000000000..f06eb7f72b Binary files /dev/null and b/ee/dtr/images/remoteucp-enablesigning.png differ diff --git a/ee/dtr/images/remoteucp-graphic.png b/ee/dtr/images/remoteucp-graphic.png new file mode 100644 index 0000000000..fd91a5ff0a Binary files /dev/null and b/ee/dtr/images/remoteucp-graphic.png differ diff --git a/ee/dtr/images/remoteucp-signedimage.png b/ee/dtr/images/remoteucp-signedimage.png new file mode 100644 index 0000000000..381d548372 Binary files /dev/null and b/ee/dtr/images/remoteucp-signedimage.png differ diff --git a/ee/dtr/images/sign-an-image-3.png b/ee/dtr/images/sign-an-image-3.png index c8253b31a6..a5c2030356 100644 Binary files a/ee/dtr/images/sign-an-image-3.png and b/ee/dtr/images/sign-an-image-3.png differ diff --git a/ee/dtr/user/access-dtr/configure-your-notary-client.md b/ee/dtr/user/access-dtr/configure-your-notary-client.md deleted file mode 100644 index e2880b9cb3..0000000000 --- a/ee/dtr/user/access-dtr/configure-your-notary-client.md +++ /dev/null @@ -1,141 +0,0 @@ ---- -title: Configure your Notary client -description: Learn how to configure your Notary client to push and pull images from Docker Trusted Registry. -keywords: registry, notary, trust ---- - -The Docker CLI client makes it easy to sign images but to streamline that -process it generates a set of private and public keys that are not tied -to your UCP account. This means that you'll be able to push and sign images to -DTR, but UCP won't trust those images since it doesn't know anything about -the keys you're using. - -So before signing and pushing images to DTR you should: - -* Configure the Notary CLI client -* Import your UCP private keys to the Notary client - -This allows you to start signing images with the private keys in your UCP -client bundle, that UCP can trace back to your user account. - -## System requirements - -The version of Notary you install, depends on the version of the Docker CLI -you're using: - -* Docker CLI 17.08 or older, use Notary 0.4.3. -* Docker CLI 17.09 or newer, use Notary 0.6.0. - -## Download the Notary CLI client - -If you're using Docker Desktop for Mac or Docker Desktop for Windows, you already have the -`notary` command installed. - -If you're running Docker on a Linux distribution, you can [download -Notary from Github](https://github.com/docker/notary/releases). As an example: - -```bash -# Get the latest binary -curl -L -o notary - -# Make it executable -chmod +x notary - -# Move it to a location in your path. Use the -Z option if you're using SELinux. 
-sudo mv -Z notary /usr/bin/ -``` - -## Configure the Notary CLI client - -Before you use the Notary CLI client, you need to configure it to make it -talk with the Notary server that's part of DTR. - -There's two ways to do this, either by passing flags to the notary command, -or using a configuration file. - -### With flags - -Run the Notary command with: - -```bash -notary --server https:// --trustDir ~/.docker/trust --tlscacert --help -``` - -Here's what the flags mean: - -| Flag | Purpose | -|:--------------|:----------------------------------------------------------------------------------------------------------------------------------| -| `--server` | The Notary server to query | -| `--trustDir` | Path to the local directory where trust metadata will be stored | -| `--tlscacert` | Path to the DTR CA certificate. If you've configured your system to trust the DTR CA certificate, you don't need to use this flag | - -To avoid having to type all the flags when using the command, you can set an -alias: - - - -
    -
    -``` -alias notary="notary --server https:// --trustDir ~/.docker/trust --tlscacert " -``` -
    -
    -
    -``` -set-alias notary "notary --server https:// --trustDir ~/.docker/trust --tlscacert " -``` -
    -
    -
    - -### With a configuration file - -You can also configure Notary by creating a `~/.notary/config.json` file with -the following content: - -```json -{ - "trust_dir" : "~/.docker/trust", - "remote_server": { - "url": "https://:", - "root_ca": "" - } -} -``` - -To validate your configuration, try running the `notary list` command on a -DTR repository that already has signed images: - -```bash -notary list // -``` - -The command should print a list of digests for each signed image on the -repository. - -## Import your UCP key - -The last step in configuring the Notary CLI client is to import the private -key of your UCP client bundle. -[Get a new client bundle if you don't have one yet](/datacenter/ucp/2.2/guides/user/access-ucp/cli-based-access.md). - -Import the private key in your UCP bundle into the Notary CLI client: - -```bash -notary key import -``` - -The private key is copied to `~/.docker/trust`, and you'll be prompted for a -password to encrypt it. - -You can validate what keys Notary knows about by running: - -```bash -notary key list -``` - -The key you've imported should be listed with the role `delegation`. diff --git a/ee/dtr/user/manage-images/sign-images/delegate-image-signing.md b/ee/dtr/user/manage-images/sign-images/delegate-image-signing.md deleted file mode 100644 index ae4813fcfa..0000000000 --- a/ee/dtr/user/manage-images/sign-images/delegate-image-signing.md +++ /dev/null @@ -1,71 +0,0 @@ ---- -title: Delegate image signing -description: Learn how to grant permission for others to sign images in Docker Trusted Registry. -keywords: registry, sign, trust ---- - -Instead of signing all the images yourself, you can delegate that task -to other users. - -A typical workflow looks like this: - -1. A repository owner creates a repository in DTR, and initializes the trust -metadata for that repository -3. Team members download a UCP client bundle and share their public key -certificate with the repository owner -4. The repository owner delegates signing to the team members -5. Team members can sign images using the private keys in their UCP client -bundles - -In this example, the IT ops team creates and initializes trust for the -`dev/nginx`. Then they allow users in the QA team to push and sign images in -that repository. - -![teams](../../../images/delegate-image-signing-1.svg) - -## Create a repository and initialize trust - -A member of the IT ops team starts by configuring their -[Notary CLI client](../../access-dtr/configure-your-notary-client.md). - -Then they create the `dev/nginx` repository, -[initialize the trust metadata](index.md) for that repository, and grant -write access to members of the QA team, so that they can push images to that -repository. - -## Ask for the public key certificates - -The member of the IT ops team then asks the QA team for their public key -certificate files that are part of their UCP client bundle. - -If they don't have a UCP client bundle, -[they can download a new one](/ee/ucp/user-access/cli.md). - -## Delegate image signing - -When delegating trust, you associate a public key certificate with a role name. 
-UCP requires that you delegate trust to two different roles: - -* `targets/releases` -* `targets/`, where `` is the UCP team the user belongs to - -In this example we'll delegate trust to `targets/releases` and `targets/qa`: - -```bash -# Delegate trust, and add that public key with the role targets/releases -notary delegation add dtr.example.org/dev/nginx targets/releases \ - --all-paths --publish - -# Delegate trust, and add that public key with the role targets/admin -notary delegation add dtr.example.org/dev/nginx targets/qa \ - --all-paths --publish -``` - -Now members from the QA team just have to [configure their Notary CLI client -with UCP private keys](../../access-dtr/configure-your-notary-client.md) -to be able to [push and sign images](index.md) into the `dev/nginx` repository. - - -## Where to go next - -- [Manage trusted repositories](manage-trusted-repositories.md) diff --git a/ee/dtr/user/manage-images/sign-images/index.md b/ee/dtr/user/manage-images/sign-images/index.md index c0166a9e76..a26f42cc9d 100644 --- a/ee/dtr/user/manage-images/sign-images/index.md +++ b/ee/dtr/user/manage-images/sign-images/index.md @@ -2,175 +2,245 @@ title: Sign an image description: Learn how to sign the images you push to Docker Trusted Registry. keywords: registry, sign, trust +redirect_from: +- /ee/dtr/user/manage-images/sign-images/delegate-image-signing/ +- /ee/dtr/user/manage-images/sign-images/manage-trusted-repositories/ --- -By default, when you push an image to DTR, the Docker CLI client doesn't -sign the image. +2 Key components of the Docker Trusted Registry is the Notary Server and Notary +Signer. These 2 containers give us the required components to use Docker Content +Trust right out of the box. [Docker Content +Trust](/engine/security/trust/content_trust/) allows us to sign image tags, +therefore whoever pulls the image can validate that they are getting the image +you create, or a forged one. + +As part of Docker Trusted Registry both the Notary server and the Registry +server are accessed through a front end Proxy, with both components sharing the +UCP's RBAC Engine. Therefore no additional configuration of the Docker Client +is required to use trust. + +Docker Content Trust is integrated into the Docker CLI, allowing you to +configure repositories, add signers and sign images all through the `$ docker +trust` command. ![image without signature](../../../images/sign-an-image-1.svg) -You can configure the Docker CLI client to sign the images you push to DTR. -This allows whoever pulls your image to validate if they are getting the image -you created, or a forged one. - -To sign an image, you can run: - -```bash -export DOCKER_CONTENT_TRUST=1 -docker push //: -``` - -This pushes the image to DTR and creates trust metadata. It also creates -public and private key pairs to sign the trust metadata, and pushes that metadata -to the Notary Server internal to DTR. - -![image with signature](../../../images/sign-an-image-2.svg) - - ## Sign images that UCP can trust -With the command above you'll be able to sign your DTR images, but UCP won't -trust them because it can't tie the private key you're using to sign the images -to your UCP account. +UCP has a feature which will prevent [untrusted +images](/ee/ucp/admin/configure/run-only-the-images-you-trust/) from being +deployed on the cluster. To use this feature, we first need to upload and sign +images into DTR. To tie the signed images back to UCP, we will actually sign the +images with private keys of UCP users. 
Inside of a UCP Client bundle the +`key.pem` can be used a User's private key, with the `cert.pem` being a public +key within a x509 certificate. To sign images in a way that UCP trusts them, you need to: -* Configure your Notary client -* Initialize trust metadata for the repository -* Delegate signing to the keys in your UCP client bundle +1. Download a Client Bundle for a User you want to use to sign the images. +2. Load the private key of the User into your workstations trust store. +3. Initialize trust metadata for the repository. +4. Delegate signing for that repository to the UCP User. +5. Sign the Image. -In this example we're going to pull an NGINX image from Docker Hub, -re-tag it as `dtr.example.org/dev/nginx:1`, push the image to DTR and sign it -in a way that is trusted by UCP. If you manage multiple repositories, you'll -have to do the same procedure for every one of them. +In this example we're going to pull a nginx image from the Docker Hub, re-tag it +as `dtr.example.com/dev/nginx:1`, push the image to DTR and sign it in a way +that is trusted by UCP. If you manage multiple repositories, you'll have to do +the same procedure for each repository. -### Configure your Notary client +### Import a UCP User's Private Key -Start by [configuring your Notary client](../../access-dtr/configure-your-notary-client.md). -This ensures the Docker an Notary CLI clients know about your UCP private keys. - -### Initialize the trust metadata - -Then you need to initialize the trust metadata for the new repository, and -the easiest way to do it is by pushing an image to that repository. Navigate to -the **DTR web UI**, and create a repository for your image. -In this example we've created the `dev/nginx` repository. - -From the Docker CLI client, pull an NGINX image from Docker Hub, -re-tag it, sign and push it to DTR. +Once you have download and extracted a UCP User's client bundle into your local +directory, you need to load the Private key into the local Docker trust store +`(~/.docker/trust)`. The name used here is purely metadata to help keep track of +which keys you have imported. ```bash -# Pull NGINX from Docker Hub -docker pull nginx:latest - -# Re-tag NGINX -docker tag nginx:latest dtr.example.org/dev/nginx:1 - -# Log into DTR -docker login dtr.example.org - -# Sign and push the image to DTR -export DOCKER_CONTENT_TRUST=1 -docker push dtr.example.org/dev/nginx:1 +$ docker trust key load --name jeff key.pem +Loading key from "key.pem"... +Enter passphrase for new jeff key with ID a453196: +Repeat passphrase for new jeff key with ID a453196: +Successfully imported key from key.pem ``` -This pushes the image to DTR and initializes the trust metadata for that -repository. +### Initialize the trust metadata and add the Public Key + +Next, we need to initiate trust metadata for a DTR repository. If you have not +done so already, navigate to the **DTR web UI**, and create a repository for +your image. In this example we've created the `prod/nginx` repository. + +As part of initiating the repository, we will add the public key of the UCP User +as a signer. You will be asked for a number of passphrases to protect the keys. +Please keep note of these passphrases, and to learn more about managing keys +head to the Docker Content Trust documentation +[here](/engine/security/trust/trust_delegation/#managing-delegations-in-a-notary-server). + + +```bash +$ docker trust signer add --key cert.pem jeff dtr.example.com/prod/nginx +Adding signer "jeff" to dtr.example.com/prod/nginx... 
+Initializing signed repository for dtr.example.com/prod/nginx... +Enter passphrase for root key with ID 4a72d81: +Enter passphrase for new repository key with ID e0d15a2: +Repeat passphrase for new repository key with ID e0d15a2: +Successfully initialized "dtr.example.com/prod/nginx" +Successfully added signer: jeff to dtr.example.com/prod/nginx +``` + +We can inspect the trust metadata of the repository to make sure the User has +been added correctly. + +```bash +$ docker trust inspect --pretty dtr.example.com/prod/nginx + +No signatures for dtr.example.com/prod/nginx + +List of signers and their keys for dtr.example.com/prod/nginx + +SIGNER KEYS +jeff 927f30366699 + +Administrative keys for dtr.example.com/prod/nginx + + Repository Key: e0d15a24b741ab049470298734397afbea539400510cb30d3b996540b4a2506b + Root Key: b74854cb27cc25220ede4b08028967d1c6e297a759a6939dfef1ea72fbdd7b9a +``` + +### Sign the Image + +Finally, we will sign an image tag. These steps download the Image from the +Docker Hub, retag the Image to the DTR repository, push the image up to DTR, as +well as signing the tag with the UCP User's keys. + +```bash +$ docker pull nginx:latest + +$ docker tag nginx:latest dtr.example.com/prod/nginx:1 + +$ docker trust sign dtr.example.com/prod/nginx:1 +Signing and pushing trust data for local image dtr.example.com/prod/nginx:1, may overwrite remote trust data +The push refers to repository [dtr.example.com/prod/nginx] +6b5e2ed60418: Pushed +92c15149e23b: Pushed +0a07e81f5da3: Pushed +1: digest: sha256:5b49c8e2c890fbb0a35f6050ed3c5109c5bb47b9e774264f4f3aa85bb69e2033 size: 948 +Signing and pushing trust metadata +Enter passphrase for jeff key with ID 927f303: +Successfully signed dtr.example.com/prod/nginx:1 +``` + +We can inspect the trust metadata again to make sure the image tag has been +signed successfully. + +```bash +$ docker trust inspect --pretty dtr.example.com/prod/nginx:1 + +Signatures for dtr.example.com/prod/nginx:1 + +SIGNED TAG DIGEST SIGNERS +1 5b49c8e2c890fbb0a35f6050ed3c5109c5bb47b9e774264f4f3aa85bb69e2033 jeff + +List of signers and their keys for dtr.example.com/prod/nginx:1 + +SIGNER KEYS +jeff 927f30366699 + +Administrative keys for dtr.example.com/prod/nginx:1 + + Repository Key: e0d15a24b741ab049470298734397afbea539400510cb30d3b996540b4a2506b + Root Key: b74854cb27cc25220ede4b08028967d1c6e297a759a6939dfef1ea72fbdd7b9a +``` + +Or we can have a look at the signed image from within the **DTR UI**. ![DTR](../../../images/sign-an-image-3.png){: .with-border} -DTR shows that the image is signed, but UCP won't trust the image -because it doesn't have any information about the private keys used to sign -the image. +### Adding Additional Delegations -### Delegate trust to your UCP keys +If you wanted to sign this image with multiple UCP Users, maybe if you had a use +case where an image needed to be signed by a member of the `Security` team and a +member of the `Developers` team. Then you can add multiple signers to a +repository. -To sign images in a way that is trusted by UCP, you need to delegate trust, so -that you can sign images with the private keys in your UCP client bundle. - -When delegating trust you associate a public key certificate with a role name. 
-UCP requires that you delegate trust to two different roles: - -* `targets/releases` -* `targets/`, where `` is the UCP team the user belongs to - -In this example we'll delegate trust to `targets/releases` and `targets/admin`: +To do so, first load a private key from a UCP User of the Security Team's in to +the local Docker Trust Store. ```bash -# Delegate trust, and add that public key with the role targets/releases -notary delegation add --publish \ - dtr.example.org/dev/nginx \ - targets/releases \ - --all-paths - -# Delegate trust, and add that public key with the role targets/admin -notary delegation add --publish \ - dtr.example.org/dev/nginx \ - targets/admin \ - --all-paths +$ docker trust key load --name security key.pem +Loading key from "key.pem"... +Enter passphrase for new security key with ID 5ac7d9a: +Repeat passphrase for new security key with ID 5ac7d9a: +Successfully imported key from key.pem ``` -To push the new signing metadata to the Notary server, you'll have to push -the image again: +Upload the Public Key to the Notary Server and Sign the Image. You will be asked +for both the Developers passphrase, as well as the Security Users passphrase to +sign the tag. -```none -docker push dtr.example.org/dev/nginx:1 +```bash +$ docker trust signer add --key cert.pem security dtr.example.com/prod/nginx +Adding signer "security" to dtr.example.com/prod/nginx... +Enter passphrase for repository key with ID e0d15a2: +Successfully added signer: security to dtr.example.com/prod/nginx + +$ docker trust sign dtr.example.com/prod/nginx:1 +Signing and pushing trust metadata for dtr.example.com/prod/nginx:1 +Existing signatures for tag 1 digest 5b49c8e2c890fbb0a35f6050ed3c5109c5bb47b9e774264f4f3aa85bb69e2033 from: +jeff +Enter passphrase for jeff key with ID 927f303: +Enter passphrase for security key with ID 5ac7d9a: +Successfully signed dtr.example.com/prod/nginx:1 ``` -## Under the hood +Finally, we can check the tag again to make sure it is now signed by 2 +signatures. -Both Docker and Notary CLI clients interact with the Notary server to: +```bash +$ docker trust inspect --pretty dtr.example.com/prod/nginx:1 -* Keep track of the metadata of signed images -* Validate the signatures of the images you pull +Signatures for dtr.example.com/prod/nginx:1 -This metadata is also kept locally in `~/.docker/trust`. +SIGNED TAG DIGEST SIGNERS +1 5b49c8e2c890fbb0a35f6050ed3c5109c5bb47b9e774264f4f3aa85bb69e2033 jeff, security -```none -. -|-- private -| |-- root_keys -| | `-- 993ad247476da081e45fdb6c28edc4462f0310a55da4acf1e08404c551d94c14.key -| `-- tuf_keys -| `-- dtr.example.org -| `-- dev -| `-- nginx -| |-- 98a93b2e52c594de4d13d7268a4a5f28ade5fc1cb5f44cc3a4ab118572a86848.key -| `-- f7917aef77d0d4bf8204af78c0716dac6649346ebea1c4cde7a1bfa363c502ce.key -`-- tuf - `-- dtr.example.org - `-- dev - `-- nginx - |-- changelist - `-- metadata - |-- root.json - |-- snapshot.json - |-- targets.json - `-- timestamp.json +List of signers and their keys for dtr.example.com/prod/nginx:1 + +SIGNER KEYS +jeff 927f30366699 +security 5ac7d9af7222 + +Administrative keys for dtr.example.com/prod/nginx:1 + + Repository Key: e0d15a24b741ab049470298734397afbea539400510cb30d3b996540b4a2506b + Root Key: b74854cb27cc25220ede4b08028967d1c6e297a759a6939dfef1ea72fbdd7b9a ``` -The `private` directory contains the private keys the Docker CLI client uses -to sign the images. Make sure you create backups of this directory so that -you don't lose your signing keys. 
+For more advanced use cases like this, more information can be found [here](/engine/security/trust/trust_delegation/) -The Docker and Notary CLI clients integrate with Yubikey. If you have a Yubikey -plugged in when initializing trust for a repository, the root key is stored on -the Yubikey instead of in the trust directory. -When you run any command that needs the `root` key, Docker and Notary CLI -clients look on the Yubikey first, and use the trust directory as a fallback. +## Delete trust data -The `tuf` directory contains the trust metadata for the images you've -signed. For each repository there are four files. +If an Administrator wants to delete a DTR repository that contains Trust +metadata, they will be prompted to delete the trust metadata first before the +repository can be removed. -| File | Description | -|:-----------------|:--------------------------------------------------------------------------------------------------------------------------| -| `root.json` | Has data about other keys and their roles. This data is signed by the root key. | -| `targets.json` | Has data about the digest and size for an image. This data is signed by the target key. | -| `snapshot.json` | Has data about the version number of the root.json and targets.json files. This data is signed by the snapshot key. | -| `timestamp.json` | Has data about the digest, size, and version number for the snapshot.json file. This data is signed by the timestamp key. | +To delete trust metadata we need to use the Notary CLI. For information on how +to download and configure the Notary CLI head +[here](/engine/security/trust/trust_delegation/#configuring-the-notary-client) -[Learn more about trust metadata](/notary/service_architecture.md). + +```bash +$ notary delete dtr.example.com/prod/nginx --remote +Deleting trust data for repository dtr.example.com/prod/nginx +Enter username: admin +Enter password: +Successfully deleted local and remote trust data for repository dtr.example.com/prod/nginx +``` + +If you don't include the `--remote` flag, Notary deletes local cached content +but will not delete data from the Notary server. ## Where to go next -* [Delegate image signing](delegate-image-signing.md) +* [Automating Docker Content + Trust](/engine/security/trust/trust_automation/) +* [Using Docker Content Trust with a Remote UCP](./trust-with-remote-ucp.md) \ No newline at end of file diff --git a/ee/dtr/user/manage-images/sign-images/manage-trusted-repositories.md b/ee/dtr/user/manage-images/sign-images/manage-trusted-repositories.md deleted file mode 100644 index 2cee2d83d0..0000000000 --- a/ee/dtr/user/manage-images/sign-images/manage-trusted-repositories.md +++ /dev/null @@ -1,156 +0,0 @@ ---- -title: Manage trusted repositories -description: Learn how to use the Notary CLI client to manage trusted repositories -keywords: dtr, trust, notary, security ---- - -Once you -[configure the Notary CLI client](../../access-dtr/configure-your-notary-client.md), -you can use it to manage your private keys, list trust data from any repository -you have access to, authorize other team members to sign images, and rotate -keys if a private key has been compromised. 
- -## List trust data - -List the trust data for a repository by running: - -```bash -notary list // -``` - -You can get one of the following errors, or a list with the images that have -been signed: - -| Message | Description | -|:--------------------------------------------|:-----------------------------------------------------------------------------------------------------------------| -| `fatal: client is offline` | Either the repository server can't be reached, or your Notary CLI client is misconfigured | -| `fatal: does not have trust data` | There's no trust data for the repository. Either run `notary init` or sign and push an image to that repository. | -| `No targets present in this repository` | The repository has been initialized, but doesn't contain any signed images | - -## Initialize trust for a repository - -There's two ways to initialize trust data for a repository. You can either -sign and push an image to that repository: - -```bash -export DOCKER_CONTENT_TRUST=1 -docker push // -``` - -or - -``` -notary init // --publish -``` - -## Manage staged changes - -The Notary CLI client stages changes before publishing them to the server. -You can manage the changes that are staged by running: - -```bash -# Check what changes are staged -notary status // - -# Unstage a specific change -notary status // --unstage 0 - -# Alternatively, unstage all changes -notary status // --reset -``` - -When you're ready to publish your changes to the Notary server, run: - -```bash -notary publish // -``` - -## Delete trust data - -Administrator users can remove all signatures from a trusted repository by -running: - -```bash -notary delete // --remote -``` - -If you don't include the `--remote` flag, Notary deletes local cached content -but will not delete data from the Notary server. - - -## Change the passphrase for a key - -The Notary CLI client manages the keys used to sign the image metadata. To -list all the keys managed by the Notary CLI client, run: - -```bash -notary key list -``` - -To change the passphrase used to encrypt one of the keys, run: - -```bash -notary key passwd -``` - -## Rotate keys - -If one of the private keys is compromised you can rotate that key, so that -images that were signed with the key stop being trusted. - -For keys that are kept offline and managed by the Notary CLI client, such the -keys with the root, targets, and snapshot roles, you can rotate them with: - -```bash -notary key rotate // -``` - -The Notary CLI client generates a new key for the role you specified, and -prompts you for a passphrase to encrypt it. -Then you're prompted for the passphrase for the key you're rotating, and if it -is correct, the Notary CLI client contacts the Notary server to publish the -change. - -You can also rotate keys that are stored in the Notary server, such as the keys -with the snapshot or timestamp role. 
For that, run: - -```bash -notary key rotate // --server-managed -``` - -## Manage keys for delegation roles - -To delegate image signing to other UCP users, get the `cert.pem` file that's -included in their client bundle and run: - -```bash -notary delegation add \ - // targets/ user1.pem user2.pem \ - --all-paths --publish -``` - -You can also remove keys from a delegation role: - -```bash -# Remove the given keys from a delegation role -notary delegation remove \ - // targets/ \ - --publish - -# Alternatively, you can remove keys from all delegation roles -notary delegation purge // --key --key -``` - -## Troubleshooting - -Notary CLI has a `-D` flag that you can use to increase the logging level. You -can use this for troubleshooting. - -Usually most problems are fixed by ensuring you're communicating with the -correct Notary server, using the `-s` flag, and that you're using the correct -directory where your private keys are stored, with the `-d` flag. - -## Where to go next - -- [Learn more about Notary](/notary/advanced_usage.md) -- [Notary architecture](/notary/service_architecture.md) diff --git a/ee/dtr/user/manage-images/sign-images/trust-with-remote-ucp.md b/ee/dtr/user/manage-images/sign-images/trust-with-remote-ucp.md new file mode 100644 index 0000000000..e5eaf576bb --- /dev/null +++ b/ee/dtr/user/manage-images/sign-images/trust-with-remote-ucp.md @@ -0,0 +1,249 @@ +--- +title: Using Docker Content Trust with a Remote UCP Cluster +description: Learn how to use a single DTR's trust data with remote UCPs. +keywords: registry, sign, trust, notary +redirect_from: +- /ee/ucp/admin/configure/integrate-with-multiple-registries/ +--- + +For more advanced deployments, you may want to share one Docker Trusted Registry +across multiple Universal Control Planes. However, customers wanting to adopt +this model alongside the [Only Run Signed +Images](../.../../ucp/admin/configure/run-only-the-images-you-trust.md) UCP feature, run into problems as each UCP operates an independent set of users. + +Docker Content Trust (DCT) gets around this problem, since users from +a remote UCP are able to sign images in the central DTR and still apply runtime +enforcement. + +In the following example, we will connect DTR managed by UCP cluster 1 with a remote UCP cluster which we are calling UCP cluster 2, sign the +image with a user from UCP cluster 2, and provide runtime enforcement +within UCP cluster 2. This process could be repeated over and over, +integrating DTR with multiple remote UCP clusters, signing the image with users +from each environment, and then providing runtime enforcement in each remote UCP +cluster separately. + +![](../../../images/remoteucp-graphic.png) + +> Before attempting this guide, familiarize yourself with [Docker Content +> Trust](engine/security/trust/content_trust/#signing-images-with-docker-content-trust) +> and [Only Run Signed +> Images](../.../../ucp/admin/configure/run-only-the-images-you-trust.md) on a +> single UCP. Many of the concepts within this guide may be new without that +> background. + +## Prerequisites + +- Cluster 1, running UCP 3.0.x or higher, with a DTR 2.5.x or higher deployed + within the cluster. +- Cluster 2, running UCP 3.0.x or higher, with no DTR node. +- Nodes on Cluster 2 need to trust the Certificate Authority which signed DTR's + TLS Certificate. This can be tested by logging on to a cluster 2 virtual + machine and running `curl https://dtr.example.com`. 
+- The DTR TLS Certificate needs be properly configured, ensuring that the + **Loadbalancer/Public Address** field has been configured, with this address + included [within the + certificate](../../../admin/configure/use-your-own-tls-certificates/). +- A machine with the [Docker Client](/ee/ucp/user-access/cli/) (CE 17.12 / + EE 1803 or newer) installed, as this contains the relevant `$ docker trust` + commands. + +## Registering DTR with a remote Universal Control Plane + +As there is no registry running within cluster 2, by default UCP will not know +where to check for trust data. Therefore, the first thing we need to do is +register DTR within the remote UCP in cluster 2. When you normally +install DTR, this registration process happens by default to +a local UCP, or cluster 1. + +> The registration process allows the remote UCP to get signature data from DTR, +> however this will not provide Single Sign On (SSO). Users on cluster 2 will not be +> synced with cluster 1's UCP or DTR. Therefore when pulling images, registry +> authentication will still need to be passed as part of the service definition +> if the repository is private. See +> [Kubernetes](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#create-a-secret-in-the-cluster-that-holds-your-authorization-token) +> or [Docker +> Swarm](https://docs.docker.com/engine/swarm/services/#create-a-service-using-an-image-on-a-private-registry) examples. + +To add a new registry, retrieve the Certificate +Authority (CA) used to sign the DTR TLS Certificate through the DTR URL's +`/ca` endpoint. + +```bash +$ curl -ks https://dtr.example.com/ca > dtr.crt +``` + +Next, convert the DTR certificate into a JSON configuration file +for registration within the UCP for cluster 2. + +You can find a template of the `dtr-bundle.json` below. Replace the host address with your DTR URL, and enter the contents of the DTR CA certificate between the new line commands `\n and \n`. + +> ### JSON Formatting +> Ensure there are no line breaks between each line +> of the DTR CA certificate within the JSON file. Use your favorite JSON formatter for validation. + +```bash +$ cat dtr-bundle.json +{ + "hostAddress": "dtr.example.com", + "caBundle": "-----BEGIN CERTIFICATE-----\n\n-----END CERTIFICATE-----" +} +``` + +Now upload the configuration file to cluster 2's UCP +through the UCP API endpoint, `/api/config/trustedregistry_`. To authenticate +against the API of cluster 2's UCP, we have downloaded a [UCP client +bundle](/ee/ucp/user-access/cli/#download-client-certificates/), extracted it in +the current directory, and will reference the keys for authentication. + +```bash +$ curl --cacert ca.pem --cert cert.pem --key key.pem \ + -X POST \ + -H "Accept: application/json" \ + -H "Content-Type: application/json" \ + -d @dtr-bundle.json \ + https://cluster2.example.com/api/config/trustedregistry_ +``` + +Navigate to the UCP web interface to verify that the JSON file was imported successfully, as the UCP endpoint will not +output anything. Select **Admin > Admin Settings > Docker +Trusted Registry**. If the registry has been added successfully, you should see +the DTR listed. + +![](../../../images/remoteucp-addregistry.png){: .with-border} + + +Additionally, you can check the full [configuration +file](/ee/ucp/admin/configure/ucp-configuration-file/) within cluster 2's UCP. 
+Once downloaded, the `ucp-config.toml` file should now contain a section called +`[registries]` + +```bash +$ curl --cacert ca.pem --cert cert.pem --key key.pem https://cluster2.example.com/api/ucp/config-toml > ucp-config.toml +``` + +If the new registry isn't shown in the list, check the `ucp-controller` container logs on cluster 2. + +## Signing an image in DTR + +We will now sign an image and push this to DTR. To sign images we need a user's public private +key pair from cluster 2. It can be found in a client bundle, with +`key.pem` being a private key and `cert.pem` being the public key on an **X.509** +certificate. + +First, load the private key into the local Docker trust store +`(~/.docker/trust)`. The name used here is purely metadata to help keep track of +which keys you have imported. + +``` +$ docker trust key load --name cluster2admin key.pem +Loading key from "key.pem"... +Enter passphrase for new cluster2admin key with ID a453196: +Repeat passphrase for new cluster2admin key with ID a453196: +Successfully imported key from key.pem +``` + +Next initiate the repository, and add the public key of cluster 2's user +as a signer. You will be asked for a number of passphrases to protect the keys. +Keep note of these passphrases, and see [Docker Content Trust documentation] +(/engine/security/trust/trust_delegation/#managing-delegations-in-a-notary-server) to learn more about managing keys. + + +``` +$ docker trust signer add --key cert.pem cluster2admin dtr.example.com/admin/trustdemo +Adding signer "cluster2admin" to dtr.example.com/admin/trustdemo... +Initializing signed repository for dtr.example.com/admin/trustdemo... +Enter passphrase for root key with ID 4a72d81: +Enter passphrase for new repository key with ID dd4460f: +Repeat passphrase for new repository key with ID dd4460f: +Successfully initialized "dtr.example.com/admin/trustdemo" +Successfully added signer: cluster2admin to dtr.example.com/admin/trustdemo +``` + +Finally, sign the image tag. This pushes the image up to DTR, as well as +signs the tag with the user from cluster 2's keys. + +``` +$ docker trust sign dtr.example.com/admin/trustdemo:1 +Signing and pushing trust data for local image dtr.example.com/admin/trustdemo:1, may overwrite remote trust data +The push refers to repository [dtr.olly.dtcntr.net/admin/trustdemo] +27c0b07c1b33: Layer already exists +aa84c03b5202: Layer already exists +5f6acae4a5eb: Layer already exists +df64d3292fd6: Layer already exists +1: digest: sha256:37062e8984d3b8fde253eba1832bfb4367c51d9f05da8e581bd1296fc3fbf65f size: 1153 +Signing and pushing trust metadata +Enter passphrase for cluster2admin key with ID a453196: +Successfully signed dtr.example.com/admin/trustdemo:1 +``` + +Within the DTR web interface, you should now be able to see your newly pushed tag with the **Signed** text next to the size. + +![](../../../images/remoteucp-signedimage.png){: .with-border} + + +You could sign this image multiple times if required, whether it's multiple +teams from the same cluster wanting to sign the image, or you integrating DTR with more remote UCPs so users from clusters 1, +2, 3, or more can all sign the same image. + +## Enforce Signed Image Tags on the Remote UCP + +We can now enable **Only Run Signed Images** on the remote UCP. To do this, +login to cluster 2's UCP web interface as an admin. Select **Admin > Admin Settings > Docker Content +Trust**. 
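If you manage UCP settings through the configuration file rather than the web UI, the same enforcement can be expressed there. The fragment below is a sketch; the `[trust_configuration]` table and `require_content_trust` key are what UCP 3.x uses, but verify the exact names against the configuration file reference linked earlier in this guide before applying it to cluster 2:

```toml
# Fragment of cluster 2's ucp-config.toml
[trust_configuration]
  # Reject deployments that reference unsigned images
  require_content_trust = true
```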
+
+See [Run only the images you trust](/ee/ucp/admin/configure/run-only-the-images-you-trust/) for more information on only running signed images in UCP.
+
+![](../../../images/remoteucp-enablesigning.png){: .with-border}
+
+Finally, we can deploy a workload on cluster 2 using a signed image from the
+DTR running on cluster 1. This workload could be a simple `docker run`, a
+Swarm service, or a Kubernetes workload. As a simple test, source a client
+bundle and try running one of your signed images.
+
+```
+$ source env.sh
+
+$ docker service create dtr.example.com/admin/trustdemo:1
+nqsph0n6lv9uzod4lapx0gwok
+overall progress: 1 out of 1 tasks
+1/1: running   [==================================================>]
+verify: Service converged
+
+$ docker service ls
+ID              NAME              MODE          REPLICAS   IMAGE                                PORTS
+nqsph0n6lv9u    laughing_lamarr   replicated    1/1        dtr.example.com/admin/trustdemo:1
+```
+
+## Troubleshooting
+
+If the image is stored in a private repository within DTR, you need to pass
+credentials to the orchestrator, as there is no SSO between cluster 2 and DTR.
+See the relevant
+[Kubernetes](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#create-a-secret-in-the-cluster-that-holds-your-authorization-token)
+or [Docker Swarm](https://docs.docker.com/engine/swarm/services/#create-a-service-using-an-image-on-a-private-registry)
+documentation for more details.
+
+### Example errors
+
+```
+image or trust data does not exist for dtr.example.com/admin/trustdemo:1
+```
+
+This means something went wrong when initializing the repository or signing the
+image, as the tag contains no signing data.
+
+```
+Error response from daemon: image did not meet required signing policy
+
+dtr.example.com/admin/trustdemo:1: image did not meet required signing policy
+```
+
+This means that the image was signed correctly; however, the user who signed
+the image does not meet the signing policy in cluster 2. This could be because
+you signed the image with the wrong user's keys.
+
+## Where to go next
+
+- [Learn more about Notary](/notary/advanced_usage.md)
+- [Notary architecture](/notary/service_architecture.md)
diff --git a/ee/ucp/admin/configure/deploy-route-reflectors.md b/ee/ucp/admin/configure/deploy-route-reflectors.md
index ca8f6090ed..9db8544119 100644
--- a/ee/ucp/admin/configure/deploy-route-reflectors.md
+++ b/ee/ucp/admin/configure/deploy-route-reflectors.md
@@ -127,45 +127,41 @@ kubectl create -f calico-rr.yaml
 
 ## Configure calicoctl
 
 To reconfigure Calico to use Route Reflectors instead of a node-to-node mesh,
-you'll need to SSH into a UCP node and download the `calicoctl` tool.
-
-Log in to a UCP node using SSH, and run:
+you'll need to tell `calicoctl` where to find the etcd key-value store managed
+by UCP. From a CLI with a UCP client bundle, create a shell alias to start
+`calicoctl` using the `{{ page.ucp_org }}/ucp-dsinfo` image:
 
 ```
-sudo curl --location https://github.com/projectcalico/calicoctl/releases/download/v3.1.1/calicoctl \
-  --output /usr/bin/calicoctl
-sudo chmod +x /usr/bin/calicoctl
-```
-
-Now you need to configure `calicoctl` to communicate with the etcd key-value
-store managed by UCP.
Create a file named `/etc/calico/calicoctl.cfg` with -the following content: - -``` -apiVersion: projectcalico.org/v3 -kind: CalicoAPIConfig -metadata: -spec: - datastoreType: "etcdv3" - etcdEndpoints: "127.0.0.1:12378" - etcdKeyFile: "/var/lib/docker/volumes/ucp-node-certs/_data/key.pem" - etcdCertFile: "/var/lib/docker/volumes/ucp-node-certs/_data/cert.pem" - etcdCACertFile: "/var/lib/docker/volumes/ucp-node-certs/_data/ca.pem" +UCP_VERSION=$(docker version --format {% raw %}'{{index (split .Server.Version "/") 1}}'{% endraw %}) +alias calicoctl="\ +docker run -i --rm \ + --pid host \ + --net host \ + -e constraint:ostype==linux \ + -e ETCD_ENDPOINTS=127.0.0.1:12378 \ + -e ETCD_KEY_FILE=/ucp-node-certs/key.pem \ + -e ETCD_CA_CERT_FILE=/ucp-node-certs/ca.pem \ + -e ETCD_CERT_FILE=/ucp-node-certs/cert.pem \ + -v /var/run/calico:/var/run/calico \ + -v ucp-node-certs:/ucp-node-certs:ro \ + {{ page.ucp_org }}/ucp-dsinfo:${UCP_VERSION} \ + calicoctl \ +" ``` ## Disable node-to-node BGP mesh -Not that you've configured `calicoctl`, you can check the current Calico BGP +Now that you've configured `calicoctl`, you can check the current Calico BGP configuration: ``` -sudo calicoctl get bgpconfig +calicoctl get bgpconfig ``` If you don't see any configuration listed, create one by running: ``` -cat << EOF | sudo calicoctl create -f - +calicoctl create -f - < bgp.yaml +calicoctl get bgpconfig --output yaml > bgp.yaml ``` Edit the `bgp.yaml` file, updating `nodeToNodeMeshEnabled` to `false`. Then update Calico configuration by running: ``` -sudo calicoctl replace -f bgp.yaml +calicoctl replace -f - < bgp.yaml ``` ## Configure Calico to use Route Reflectors @@ -198,14 +194,14 @@ To configure Calico to use the Route Reflectors you need to know the AS number for your network first. For that, run: ``` -sudo calicoctl get nodes --output=wide +calicoctl get nodes --output=wide ``` Now that you have the AS number, you can create the Calico configuration. For each Route Reflector, customize and run the following snippet: ``` -sudo calicoctl create -f - << EOF +calicoctl create -f - << EOF apiVersion: projectcalico.org/v3 kind: BGPPeer metadata: @@ -233,19 +229,34 @@ Using your UCP client bundle, run: ``` # Find the Pod name -kubectl get pods -n kube-system -o wide | grep +kubectl -n kube-system \ + get pods --selector k8s-app=calico-node -o wide | \ + grep # Delete the Pod -kubectl delete pod -n kube-system +kubectl -n kube-system delete pod ``` ## Validate peers -Now you can check that other `calico-node` pods running on other nodes are -peering with the Route Reflector: +Now you can check that `calico-node` pods running on other nodes are peering +with the Route Reflector. 
Use a Swarm affinity filter to run `calicoctl node +status` on any node running `calico-node`: ``` -sudo calicoctl node status +UCP_VERSION=$(docker version --format {% raw %}'{{index (split .Server.Version "/") 1}}'{% endraw %}) +docker run -i --rm \ + --pid host \ + --net host \ + -e affinity:container=='k8s_calico-node.*' \ + -e ETCD_ENDPOINTS=127.0.0.1:12378 \ + -e ETCD_KEY_FILE=/ucp-node-certs/key.pem \ + -e ETCD_CA_CERT_FILE=/ucp-node-certs/ca.pem \ + -e ETCD_CERT_FILE=/ucp-node-certs/cert.pem \ + -v /var/run/calico:/var/run/calico \ + -v ucp-node-certs:/ucp-node-certs:ro \ + {{ page.ucp_org }}/ucp-dsinfo:${UCP_VERSION} \ + calicoctl node status ``` You should see something like: diff --git a/ee/ucp/admin/configure/integrate-with-multiple-registries.md b/ee/ucp/admin/configure/integrate-with-multiple-registries.md deleted file mode 100644 index fdf19a4281..0000000000 --- a/ee/ucp/admin/configure/integrate-with-multiple-registries.md +++ /dev/null @@ -1,73 +0,0 @@ ---- -title: Integrate with multiple registries -description: Integrate UCP with multiple registries -keywords: trust, registry, integrate, UCP, DTR -redirect_from: - - /datacenter/ucp/3.0/guides/admin/configure/integrate-with-multiple-registries/ ---- - -Universal Control Plane can pull and run images from any image registry, -including Docker Trusted Registry and Docker Hub. - -If your registry uses globally-trusted TLS certificates, everything works -out of the box, and you don't need to configure anything. But if your registries -use self-signed certificates or certificates issues by your own Certificate -Authority, you need to configure UCP to trust those registries. - -## Trust Docker Trusted Registry - -To configure UCP to trust a DTR deployment, you need to update the -[UCP system configuration](ucp-configuration-file.md) to include one entry for -each DTR deployment: - -``` -[[registries]] - host_address = "dtr.example.org" - ca_bundle = """ ------BEGIN CERTIFICATE----- -... ------END CERTIFICATE-----""" - -[[registries]] - host_address = "internal-dtr.example.org:444" - ca_bundle = """ ------BEGIN CERTIFICATE----- -... ------END CERTIFICATE-----""" -``` - -You only need to include the port section if your DTR deployment is running -on a port other than 443. - -You can customize and use the script below to generate a file named -`trust-dtr.toml` with the configuration needed for your DTR deployment. - -``` -# Replace this url by your DTR deployment url and port -DTR_URL=https://dtr.example.org -DTR_PORT=443 - -dtr_full_url=${DTR_URL}:${DTR_PORT} -dtr_ca_url=${dtr_full_url}/ca - -# Strip protocol and default https port -dtr_host_address=${dtr_full_url#"https://"} -dtr_host_address=${dtr_host_address%":443"} - -# Create the registry configuration and save it -cat < trust-dtr.toml - -[[registries]] - # host address should not contain protocol or port if using 443 - host_address = $dtr_host_address - ca_bundle = """ -$(curl -sk $dtr_ca_url)""" -EOL -``` - -You can then append the content of `trust-dtr.toml` to your current UCP -configuration to make UCP trust this DTR deployment. 
- -## Where to go next - -- [Integrate with LDAP by using a configuration file](external-auth/enable-ldap-config-file.md) diff --git a/ee/ucp/admin/configure/join-nodes/join-windows-nodes-to-cluster.md b/ee/ucp/admin/configure/join-nodes/join-windows-nodes-to-cluster.md index af5031695d..825ca2a6a5 100644 --- a/ee/ucp/admin/configure/join-nodes/join-windows-nodes-to-cluster.md +++ b/ee/ucp/admin/configure/join-nodes/join-windows-nodes-to-cluster.md @@ -16,6 +16,8 @@ Follow these steps to enable a worker node on Windows. 2. Configure the Windows node. 3. Join the Windows node to the cluster. +**Note**: Refer to the [Docker compatibility matrix](https://success.docker.com/article/compatibility-matrix) for complete Docker compatibility information with Windows Server. + ## Install Docker Engine - Enterprise on Windows Server [Install Docker Engine - Enterprise](/engine/installation/windows/docker-ee/#use-a-script-to-install-docker-ee) diff --git a/ee/ucp/admin/configure/use-trusted-images-for-ci.md b/ee/ucp/admin/configure/use-trusted-images-for-ci.md deleted file mode 100644 index 601a27fe07..0000000000 --- a/ee/ucp/admin/configure/use-trusted-images-for-ci.md +++ /dev/null @@ -1,149 +0,0 @@ ---- -title: Use trusted images for continuous integration -description: Set up and configure content trust and signing policy for use with a continuous integration system -keywords: cup, trust, notary, security, continuous integration ---- - -The document provides a minimal example on setting up Docker Content Trust (DCT) in -Universal Control Plane (UCP) for use with a Continuous Integration (CI) system. It -covers setting up the necessary accounts and trust delegations to restrict only those -images built by your CI system to be deployed to your UCP managed cluster. - -## Set up UCP accounts and teams - -The first step is to create a user account for your CI system. For the purposes of -this document we will assume you are using Jenkins as your CI system and will therefore -name the account "jenkins". As an admin user logged in to UCP, navigate to "User Management" -and select "Add User". Create a user with the name "jenkins" and set a strong password. - -Next, create a team called "CI" and add the "jenkins" user to this team. All signing -policy is team based, so if we want only a single user to be able to sign images -destined to be deployed on the cluster, we must create a team for this one user. - -## Set up the signing policy - -While still logged in as an admin, navigate to "Admin Settings" and select the "Content Trust" -subsection. Select the checkbox to enable content trust and in the select box that appears, -select the "CI" team we have just created. Save the settings. - -This policy will require that every image that referenced in a `docker image pull`, -`docker container run`, or `docker service create` must be signed by a key corresponding -to a member of the "CI" team. In this case, the only member is the "jenkins" user. - -## Create keys for the Jenkins user - -The signing policy implementation uses the certificates issued in user client bundles -to connect a signature to a user. Using an incognito browser window (or otherwise), -log in to the "jenkins" user account you created earlier. Download a client bundle for -this user. It is also recommended to change the description associated with the public -key stored in UCP such that you can identify in the future which key is being used for -signing. - -Each time a user retrieves a new client bundle, a new keypair is generated. 
It is therefore -necessary to keep track of a specific bundle that a user chooses to designate as their signing bundle. - -Once you have decompressed the client bundle, the only two files you need for the purposes -of signing are `cert.pem` and `key.pem`. These represent the public and private parts of -the user's signing identity respectively. We will load the `key.pem` file onto the Jenkins -servers, and use `cert.pem` to create delegations for the "jenkins" user in our -Trusted Collection. - -## Prepare the Jenkins server - -### Load `key.pem` on Jenkins - -You will need to use the notary client to load keys onto your Jenkins server. Simply run -`notary -d /path/to/.docker/trust key import /path/to/key.pem`. You will be asked to set -a password to encrypt the key on disk. For automated signing, this password can be configured -into the environment under the variable name `DOCKER_CONTENT_TRUST_REPOSITORY_PASSPHRASE`. The `-d` -flag to the command specifies the path to the `trust` subdirectory within the server's `docker` -configuration directory. Typically this is found at `~/.docker/trust`. - -### Enable content trust - -There are two ways to enable content trust: globally, and per operation. To enabled content -trust globally, set the environment variable `DOCKER_CONTENT_TRUST=1`. To enable on a per -operation basis, wherever you run `docker image push` in your Jenkins scripts, add the flag -`--disable-content-trust=false`. You may wish to use this second option if you only want -to sign some images. - -The Jenkins server is now prepared to sign images, but we need to create delegations referencing -the key to give it the necessary permissions. - -## Initialize a repository - -Any commands displayed in this section should _not_ be run from the Jenkins server. You -will most likely want to run them from your local system. - -If this is a new repository, create it in Docker Trusted Registry (DTR) or Docker Hub, -depending on which you use to store your images, before proceeding further. - -We will now initialize the trust data and create the delegation that provides the Jenkins -key with permissions to sign content. The following commands initialize the trust data and -rotate snapshotting responsibilities to the server. This is necessary to ensure human involvement -is not required to publish new content. - -``` -notary -s https://my_notary_server.com -d ~/.docker/trust init my_repository -notary -s https://my_notary_server.com -d ~/.docker/trust key rotate my_repository snapshot -r -notary -s https://my_notary_server.com -d ~/.docker/trust publish my_repository -``` - -The `-s` flag specifies the server hosting a notary service. If you are operating against -Docker Hub, this will be `https://notary.docker.io`. If you are operating against your own DTR -instance, this will be the same hostname you use in image names when running docker commands preceded -by the `https://` scheme. For example, if you would run `docker image push my_dtr:4443/me/an_image` the value -of the `-s` flag would be expected to be `https://my_dtr:4443`. - -If you are using DTR, the name of the repository should be identical to the full name you use -in a `docker image push` command. If however you use Docker Hub, the name you use in a `docker image push` -must be preceded by `docker.io/`. i.e. if you ran `docker image push me/alpine`, you would -`notary init docker.io/me/alpine`. 
- -For brevity, we will exclude the `-s` and `-d` flags from subsequent command, but be aware you -will still need to provide them for the commands to work correctly. - -Now that the repository is initialized, we need to create the delegations for Jenkins. Docker -Content Trust treats a delegation role called `targets/releases` specially. It considers this -delegation to contain the canonical list of published images for the repository. It is therefore -generally desirable to add all users to this delegation with the following command: - -``` -notary delegation add my_repository targets/releases --all-paths /path/to/cert.pem -``` - -This solves a number of prioritization problems that would result from needing to determine -which delegation should ultimately be trusted for a specific image. However, because it -is anticipated that any user will be able to sign the `targets/releases` role it is not trusted -in determining if a signing policy has been met. Therefore it is also necessary to create a -delegation specifically for Jenkins: - -``` -notary delegation add my_repository targets/jenkins --all-paths /path/to/cert.pem -``` - -We will then publish both these updates (remember to add the correct `-s` and `-d` flags): - -``` -notary publish my_repository -``` - -Informational (Advanced): If we included the `targets/releases` role in determining if a signing policy -had been met, we would run into the situation of images being opportunistically deployed when -an appropriate user signs. In the scenario we have described so far, only images signed by -the "CI" team (containing only the "jenkins" user) should be deployable. If a user "Moby" could -also sign images but was not part of the "CI" team, they might sign and publish a new `targets/releases` -that contained their image. UCP would refuse to deploy this image because it was not signed -by the "CI" team. However, the next time Jenkins published an image, it would update and sign -the `targets/releases` role as whole, enabling "Moby" to deploy their image. - -## Conclusion - -With the Trusted Collection initialized, and delegations created, the Jenkins server will -now use the key we imported to sign any images we push to this repository. - -Through either the Docker CLI, or the UCP browser interface, we will find that any images -that do not meet our signing policy cannot be used. The signing policy we set up requires -that the "CI" team must have signed any image we attempt to `docker image pull`, `docker container run`, -or `docker service create`, and the only member of that team is the "jenkins" user. This -restricts us to only running images that were published by our Jenkins CI system. diff --git a/ee/ucp/admin/install/index.md b/ee/ucp/admin/install/index.md index e754fea706..2f9340829b 100644 --- a/ee/ucp/admin/install/index.md +++ b/ee/ucp/admin/install/index.md @@ -2,8 +2,6 @@ title: Install UCP for production description: Learn how to install Docker Universal Control Plane on production. keywords: Universal Control Plane, UCP, install, Docker EE -redirect_from: - - /datacenter/ucp/3.0/guides/admin/install/ --- Docker Universal Control Plane (UCP) is a containerized application that you diff --git a/ee/ucp/authorization/_site/group-resources.html b/ee/ucp/authorization/_site/group-resources.html index 1ce779c9f8..fb9daa9272 100644 --- a/ee/ucp/authorization/_site/group-resources.html +++ b/ee/ucp/authorization/_site/group-resources.html @@ -18,7 +18,7 @@ and resource quotas for the namespace.

    Each Kubernetes resources can only be in one namespace, and namespaces cannot be nested inside one another.

    -

    Learn more about Kubernetes namespaces.

    +

    Learn more about Kubernetes namespaces.

    Swarm collections

    diff --git a/ee/ucp/authorization/_site/index.html b/ee/ucp/authorization/_site/index.html index bd8fc3f9c3..8503976690 100644 --- a/ee/ucp/authorization/_site/index.html +++ b/ee/ucp/authorization/_site/index.html @@ -68,7 +68,7 @@ networks, nodes, services, secrets, and volumes.

  • Kubernetes namespaces: A -namespace +namespace is a logical area for a Kubernetes cluster. Kubernetes comes with a default namespace for your cluster objects, plus two more namespaces for system and public resources. You can create custom namespaces, but unlike Swarm diff --git a/ee/ucp/authorization/_site/migrate-kubernetes-roles.html b/ee/ucp/authorization/_site/migrate-kubernetes-roles.html index 80f639d3c8..c64b429c29 100644 --- a/ee/ucp/authorization/_site/migrate-kubernetes-roles.html +++ b/ee/ucp/authorization/_site/migrate-kubernetes-roles.html @@ -1,6 +1,6 @@

    With Docker Enterprise Edition, you can create roles and grants that implement the permissions that are defined in your Kubernetes apps. -Learn about RBAC authorization in Kubernetes.

    +Learn about RBAC authorization in Kubernetes.

    Docker EE has its own implementation of role-based access control, so you can’t use Kubernetes RBAC objects directly. Instead, you create UCP roles diff --git a/ee/ucp/authorization/group-resources.md b/ee/ucp/authorization/group-resources.md index 67f3fdbad4..01722b334f 100644 --- a/ee/ucp/authorization/group-resources.md +++ b/ee/ucp/authorization/group-resources.md @@ -24,7 +24,7 @@ and resource quotas for the namespace. Each Kubernetes resources can only be in one namespace, and namespaces cannot be nested inside one another. -[Learn more about Kubernetes namespaces](https://v1-8.docs.kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/). +[Learn more about Kubernetes namespaces](https://v1-11.docs.kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/). ## Swarm collections diff --git a/ee/ucp/authorization/index.md b/ee/ucp/authorization/index.md index aaacb86ef6..eb239040dc 100644 --- a/ee/ucp/authorization/index.md +++ b/ee/ucp/authorization/index.md @@ -67,7 +67,7 @@ To control user access, cluster resources are grouped into Docker Swarm networks, nodes, services, secrets, and volumes. - **Kubernetes namespaces**: A -[namespace](https://v1-8.docs.kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/) +[namespace](https://v1-11.docs.kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/) is a logical area for a Kubernetes cluster. Kubernetes comes with a `default` namespace for your cluster objects, plus two more namespaces for system and public resources. You can create custom namespaces, but unlike Swarm diff --git a/ee/ucp/authorization/migrate-kubernetes-roles.md b/ee/ucp/authorization/migrate-kubernetes-roles.md index ec2e861983..de00cbf30c 100644 --- a/ee/ucp/authorization/migrate-kubernetes-roles.md +++ b/ee/ucp/authorization/migrate-kubernetes-roles.md @@ -6,7 +6,7 @@ keywords: authorization, authentication, authorize, authenticate, user, team, UC With Docker Enterprise Edition, you can create roles and grants that implement the permissions that are defined in your Kubernetes apps. -Learn about [RBAC authorization in Kubernetes](https://v1-8.docs.kubernetes.io/docs/admin/authorization/rbac/). +Learn about [RBAC authorization in Kubernetes](https://v1-11.docs.kubernetes.io/docs/admin/authorization/rbac/). Docker EE has its own implementation of role-based access control, so you can't use Kubernetes RBAC objects directly. 
Instead, you create UCP roles diff --git a/ee/ucp/images/ingress-deploy.png b/ee/ucp/images/ingress-deploy.png new file mode 100644 index 0000000000..cadbf33928 Binary files /dev/null and b/ee/ucp/images/ingress-deploy.png differ diff --git a/ee/ucp/images/kubernetes-version.png b/ee/ucp/images/kubernetes-version.png index 60a248e849..eaf80406fd 100644 Binary files a/ee/ucp/images/kubernetes-version.png and b/ee/ucp/images/kubernetes-version.png differ diff --git a/ee/ucp/index.md b/ee/ucp/index.md index ac171fe4ff..e8bc8a4625 100644 --- a/ee/ucp/index.md +++ b/ee/ucp/index.md @@ -5,7 +5,6 @@ description: | keywords: ucp, overview, orchestration, cluster redirect_from: - /ucp/ - - /datacenter/ucp/3.0/guides/ --- Docker Universal Control Plane (UCP) is the enterprise-grade cluster management diff --git a/ee/ucp/kubernetes/create-service-account.md b/ee/ucp/kubernetes/create-service-account.md index 3e7336fd62..ac2c5ffa17 100644 --- a/ee/ucp/kubernetes/create-service-account.md +++ b/ee/ucp/kubernetes/create-service-account.md @@ -9,7 +9,7 @@ A service account represents an identity for processes that run in a pod. When a process is authenticated through a service account, it can contact the API server and access cluster resources. If a pod doesn't have an assigned service account, it gets the `default` service account. -Learn about [managing service accounts](https://v1-8.docs.kubernetes.io/docs/admin/service-accounts-admin/). +Learn about [managing service accounts](https://v1-11.docs.kubernetes.io/docs/admin/service-accounts-admin/). In Docker EE, you give a service account access to cluster resources by creating a grant, the same way that you would give access to a user or a team. @@ -86,4 +86,4 @@ assigned to the `nginx` namespace. ## Where to go next -- [Deploy an ingress controller for a Kubernetes app](deploy-ingress-controller.md) \ No newline at end of file +- [Deploy an ingress controller for a Kubernetes app](deploy-ingress-controller.md) diff --git a/ee/ucp/kubernetes/index.md b/ee/ucp/kubernetes/index.md index d9081a790e..37c82117be 100644 --- a/ee/ucp/kubernetes/index.md +++ b/ee/ucp/kubernetes/index.md @@ -152,7 +152,7 @@ spec: ## Use the CLI to deploy Kubernetes objects With Docker EE, you deploy your Kubernetes objects on the command line by using -`kubectl`. [Install and set up kubectl](https://v1-8.docs.kubernetes.io/docs/tasks/tools/install-kubectl/). +`kubectl`. [Install and set up kubectl](https://v1-11.docs.kubernetes.io/docs/tasks/tools/install-kubectl/). Use a client bundle to configure your client tools, like Docker CLI and `kubectl` to communicate with UCP instead of the local deployments you might have running. diff --git a/ee/ucp/kubernetes/layer-7-routing.md b/ee/ucp/kubernetes/layer-7-routing.md index 2fbd0eca19..25808e6f5f 100644 --- a/ee/ucp/kubernetes/layer-7-routing.md +++ b/ee/ucp/kubernetes/layer-7-routing.md @@ -1,7 +1,6 @@ --- title: Layer 7 routing -description: Learn how to route traffic to your Kubernetes workloads in - Docker Enterprise Edition. +description: Learn how to route traffic to your Kubernetes workloads in Docker Enterprise Edition. keywords: UCP, Kubernetes, ingress, routing redirect_from: - /ee/ucp/kubernetes/deploy-ingress-controller/ @@ -24,4 +23,3 @@ A popular ingress controller within the Kubernetes Community is the [NGINX contr Learn about [ingress in Kubernetes](https://v1-11.docs.kubernetes.io/docs/concepts/services-networking/ingress/). For an example of a YAML NGINX kube ingress deployment, refer to . 
- diff --git a/ee/ucp/ucp-architecture.md b/ee/ucp/ucp-architecture.md index 110e4b649f..1821896523 100644 --- a/ee/ucp/ucp-architecture.md +++ b/ee/ucp/ucp-architecture.md @@ -87,7 +87,7 @@ persist the state of UCP. These are the UCP services running on manager nodes: | k8s_POD_kube-dns | Pause container for the `kube-dns` pod. | | k8s_ucp-dnsmasq-nanny | A dnsmasq instance used in the Kubernetes DNS Service. Part of the `kube-dns` deployment. Runs on one manager node only. | | k8s_ucp-kube-compose | A custom Kubernetes resource component that's responsible for translating Compose files into Kubernetes constructs. Part of the `compose` deployment. Runs on one manager node only. | -| k8s_ucp-kube-dns | The main Kubernetes DNS Service, used by pods to [resolve service names](https://v1-8.docs.kubernetes.io/docs/concepts/services-networking/dns-pod-service/). Part of the `kube-dns` deployment. Runs on one manager node only. Provides service discovery for Kubernetes services and pods. A set of three containers deployed via Kubernetes as a single pod. | +| k8s_ucp-kube-dns | The main Kubernetes DNS Service, used by pods to [resolve service names](https://v1-11.docs.kubernetes.io/docs/concepts/services-networking/dns-pod-service/). Part of the `kube-dns` deployment. Runs on one manager node only. Provides service discovery for Kubernetes services and pods. A set of three containers deployed via Kubernetes as a single pod. | | k8s_ucp-kubedns-sidecar | Health checking and metrics daemon of the Kubernetes DNS Service. Part of the `kube-dns` deployment. Runs on one manager node only. | | ucp-agent | Monitors the node and ensures the right UCP services are running. | | ucp-auth-api | The centralized service for identity and authentication used by UCP and DTR. | diff --git a/ee/ucp/user-access/cli.md b/ee/ucp/user-access/cli.md index 4d52ade24d..afcfcfc78f 100644 --- a/ee/ucp/user-access/cli.md +++ b/ee/ucp/user-access/cli.md @@ -3,7 +3,6 @@ title: CLI-based access description: Learn how to access Docker Universal Control Plane from the CLI. keywords: ucp, cli, administration redirect_from: - - /datacenter/ucp/3.0/guides/user/access-ucp/cli-based-access/ - /ee/ucp/user/access-ucp/cli-based-access/ --- diff --git a/ee/ucp/user-access/kubectl.md b/ee/ucp/user-access/kubectl.md index defebb992e..c2ff249d3d 100644 --- a/ee/ucp/user-access/kubectl.md +++ b/ee/ucp/user-access/kubectl.md @@ -83,7 +83,7 @@ You can download the binary from this [link](https://storage.googleapis.com/kube If you have curl installed on your system, you use these commands in Powershell. ```cmd -$env:k8sversion = "v1.8.11" +$env:k8sversion = "v1.11.5" curl https://storage.googleapis.com/kubernetes-release/release/$env:k8sversion/bin/windows/amd64/kubectl.exe ``` diff --git a/engine/release-notes.md b/engine/release-notes.md index cac26a4d50..c72a14dddf 100644 --- a/engine/release-notes.md +++ b/engine/release-notes.md @@ -246,7 +246,7 @@ Update your configuration if this command prints a non-empty value for `MountFla ### Deprecation Notice -As of EE 2.2, Docker will deprecate support for Device Mapper as a storage driver. It will continue to be supported at this +As of EE 2.1, Docker has deprecated support for Device Mapper as a storage driver. It will continue to be supported at this time, but support will be removed in a future release. Docker will continue to support Device Mapper for existing EE 2.0 and 2.1 customers. Please contact Sales for more information. 
diff --git a/engine/security/trust/trust_delegation.md b/engine/security/trust/trust_delegation.md
index 2aa6e46973..23fddeb322 100644
--- a/engine/security/trust/trust_delegation.md
+++ b/engine/security/trust/trust_delegation.md
@@ -2,6 +2,8 @@ description: Delegations for content trust
 keywords: trust, security, delegations, keys, repository
 title: Delegations for content trust
+redirect_from:
+- /ee/dtr/user/access-dtr/configure-your-notary-client/
 ---
 
 Delegations in Docker Content Trust (DCT) allow you to control who can and cannot sign
diff --git a/get-started/part2.md b/get-started/part2.md
index cf0407ebf3..162290edde 100644
--- a/get-started/part2.md
+++ b/get-started/part2.md
@@ -402,6 +402,7 @@ application by running this container in a **service**.
 
 [Continue to Part 3 >>](part3.md){: class="button outline-btn"}
 
+Or, learn how to [launch your container on a DigitalOcean server using Docker Machine](https://docs.docker.com/machine/examples/ocean/){: target="_blank" class="_" }.
 
 ## Recap and cheat sheet (optional)
diff --git a/get-started/part3.md b/get-started/part3.md
index 28ec86e5ca..180f066f26 100644
--- a/get-started/part3.md
+++ b/get-started/part3.md
@@ -96,8 +96,9 @@ This `docker-compose.yml` file tells Docker to do the following:
 
 - Pull [the image we uploaded in step 2](part2.md) from the registry.
 
 - Run 5 instances of that image as a service
-  called `web`, limiting each one to use, at most, 10% of the CPU (across all
-  cores), and 50MB of RAM.
+  called `web`, limiting each one to use, at most, 10% of a single core of
+  CPU time (this could also be, for example, "1.5" to mean one and a half
+  cores for each), and 50MB of RAM.
 
 - Immediately restart containers if one fails.
 
diff --git a/get-started/part4.md b/get-started/part4.md
index fbbea8ac25..58c9a48cc7 100644
--- a/get-started/part4.md
+++ b/get-started/part4.md
@@ -126,6 +126,8 @@ so they can connect to each other.
 
 Now, create a couple of VMs using our node management tool, `docker-machine`:
 
+> **Note**: You need to run the following commands as administrator; otherwise, you won't have permission to create Hyper-V VMs.
+
 ```shell
 docker-machine create -d hyperv --hyperv-virtual-switch "myswitch" myvm1
 docker-machine create -d hyperv --hyperv-virtual-switch "myswitch" myvm2
@@ -143,6 +145,8 @@ You now have two VMs created, named `myvm1` and `myvm2`.
 
 Use this command to list the machines and get their IP addresses.
 
+> **Note**: You need to run the following command as administrator; otherwise, you won't get any reasonable output (only "UNKNOWN").
+
 ```shell
 docker-machine ls
 ```
diff --git a/machine/examples/ocean.md b/machine/examples/ocean.md
index 1fd0f0663c..f4bb2cde80 100644
--- a/machine/examples/ocean.md
+++ b/machine/examples/ocean.md
@@ -143,4 +143,5 @@ provider console, Machine loses track of the server status.  Use the
 
 - [Understand Machine concepts](../concepts.md)
 - [Docker Machine driver reference](../drivers/index.md)
 - [Docker Machine subcommand reference](../reference/index.md)
+- [Create containers for your Docker Machine](../../get-started/part2.md)
 - [Provision a Docker Swarm cluster with Docker Machine](/swarm/provision-with-machine.md)