diff --git a/_data/toc.yaml b/_data/toc.yaml index 26b237c3a7..a3770641ba 100644 --- a/_data/toc.yaml +++ b/_data/toc.yaml @@ -1506,24 +1506,74 @@ manuals: title: Web-based access - path: /datacenter/ucp/3.0/guides/user/access-ucp/cli-based-access/ title: CLI-based access - - sectiontitle: Deploy an application + - path: /datacenter/ucp/3.0/guides/user/access-ucp/kubectl/ + title: Install the Kubernetes CLI + - sectiontitle: Deploy apps with Swarm section: - - path: /datacenter/ucp/3.0/guides/user/services/deploy-a-service/ - title: Deploy a service - - path: /datacenter/ucp/3.0/guides/user/services/use-domain-names-to-access-services/ - title: Use domain names to access services - - path: /datacenter/ucp/3.0/guides/user/services/ - title: Deploy an app from the UI - - path: /datacenter/ucp/3.0/guides/user/services/deploy-app-cli/ - title: Deploy an app from the CLI - - path: /datacenter/ucp/3.0/guides/user/services/deploy-stack-to-collection/ + - path: /datacenter/ucp/3.0/guides/user/swarm/ + title: Deploy a single service + - path: /datacenter/ucp/3.0/guides/user/swarm/deploy-multi-service-app/ + title: Deploy a multi-service app + - path: /datacenter/ucp/3.0/guides/user/swarm/deploy-to-collection/ title: Deploy application resources to a collection - - sectiontitle: Secrets + - path: /datacenter/ucp/3.0/guides/user/swarm/use-secrets/ + title: Use secrets in your services + - sectiontitle: Layer 7 routing + section: + - path: /datacenter/ucp/3.0/guides/user/interlock/ + title: Overview + - path: /datacenter/ucp/3.0/guides/user/interlock/architecture/ + title: Architecture + - sectiontitle: Deploy + section: + - title: Simple deployment + path: /datacenter/ucp/3.0/guides/user/interlock/deploy/ + - title: Configure your deployment + path: /datacenter/ucp/3.0/guides/user/interlock/deploy/configure/ + - title: Production deployment + path: /datacenter/ucp/3.0/guides/user/interlock/deploy/production/ + - title: Host mode networking + path: /datacenter/ucp/3.0/guides/user/interlock/deploy/host-mode-networking/ + - title: Configuration reference + path: /datacenter/ucp/3.0/guides/user/interlock/deploy/configuration-reference/ + - sectiontitle: Route traffic to services + section: + - title: Simple swarm service + path: /datacenter/ucp/3.0/guides/user/interlock/usage/ + - title: Set a default service + path: /datacenter/ucp/3.0/guides/user/interlock/usage/default-service/ + - title: Applications with TLS + path: /datacenter/ucp/3.0/guides/user/interlock/usage/tls/ + - title: Application redirects + path: /datacenter/ucp/3.0/guides/user/interlock/usage/redirects/ + - title: Persistent (sticky) sessions + path: /datacenter/ucp/3.0/guides/user/interlock/usage/sessions/ + - title: Websockets + path: /datacenter/ucp/3.0/guides/user/interlock/usage/websockets/ + - title: Canary application instances + path: /datacenter/ucp/3.0/guides/user/interlock/usage/canary/ + - title: Service clusters + path: /datacenter/ucp/3.0/guides/user/interlock/usage/service-clusters/ + - title: Context/Path based routing + path: /datacenter/ucp/3.0/guides/user/interlock/usage/context/ + - title: VIP backend mode + path: /datacenter/ucp/3.0/guides/user/interlock/usage/interlock-vip-mode/ + - title: Service labels reference + path: /datacenter/ucp/3.0/guides/user/interlock/usage/labels-reference/ + - title: Layer 7 routing upgrade + path: /datacenter/ucp/3.0/guides/user/interlock/upgrade/ + - sectiontitle: Deploy apps with Kubernetes section: - - path: /datacenter/ucp/3.0/guides/user/secrets/ - title: Manage secrets - - 
path: /datacenter/ucp/3.0/guides/user/secrets/grant-revoke-access/ - title: Grant access to secrets + - title: Deploy a workload + path: /datacenter/ucp/3.0/guides/user/kubernetes/ + - title: Deploy a Compose-based app + path: /datacenter/ucp/3.0/guides/user/kubernetes/deploy-with-compose/ + - title: Deploy an ingress controller + path: /datacenter/ucp/3.0/guides/user/kubernetes/layer-7-routing/ + - title: Create a service account for a Kubernetes app + path: /datacenter/ucp/3.0/guides/user/kubernetes/create-service-account/ + - title: Install a CNI plugin + path: /datacenter/ucp/3.0/guides/user/kubernetes/install-cni-plugin/ - path: /datacenter/ucp/3.0/reference/api/ title: API reference - path: /ee/ucp/release-notes/ diff --git a/datacenter/ucp/3.0/guides/architecture.md b/datacenter/ucp/3.0/guides/architecture.md index f74bbb9464..4afa6d38f9 100644 --- a/datacenter/ucp/3.0/guides/architecture.md +++ b/datacenter/ucp/3.0/guides/architecture.md @@ -5,13 +5,13 @@ keywords: ucp, architecture --- Universal Control Plane is a containerized application that runs on -[Docker Enterprise Edition](/ee/index.md) and extends its functionality -to make it easier to deploy, configure, and monitor your applications at scale. +[Docker Enterprise Edition](/ee/index.md), extending its functionality +to simplify the deployment, configuration, and monitoring of your applications at scale. UCP also secures Docker with role-based access control so that only authorized users can make changes and deploy applications to your Docker cluster. -![](images/architecture-1.svg) +![](images/ucp-architecture-1.svg){: .with-border} Once Universal Control Plane (UCP) instance is deployed, developers and IT operations no longer interact with Docker Engine directly, but interact with @@ -25,7 +25,7 @@ the Docker CLI client and Docker Compose. Docker UCP leverages the clustering and orchestration functionality provided by Docker. -![](images/architecture-2.svg) +![](images/ucp-architecture-2.svg){: .with-border} A swarm is a collection of nodes that are in the same Docker cluster. [Nodes](/engine/swarm/key-concepts.md) in a Docker swarm operate in one of two @@ -66,38 +66,89 @@ on a node depend on whether the node is a manager or a worker. > on Windows, the `ucp-agent` component is named `ucp-agent-win`. > [Learn about architecture-specific images](admin/install/architecture-specific-images.md). +Internally, UCP uses the following components: + +* Calico 3.0.1 +* Kubernetes 1.8.11 + ### UCP components in manager nodes Manager nodes run all UCP services, including the web UI and data stores that persist the state of UCP. These are the UCP services running on manager nodes: -| UCP component | Description | -|:--------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| ucp-agent | Monitors the node and ensures the right UCP services are running | -| ucp-reconcile | When ucp-agent detects that the node is not running the right UCP components, it starts the ucp-reconcile container to converge the node to its desired state. It is expected for the ucp-reconcile container to remain in an exited state when the node is healthy. 
| -| ucp-auth-api | The centralized service for identity and authentication used by UCP and DTR | -| ucp-auth-store | Stores authentication configurations and data for users, organizations, and teams | -| ucp-auth-worker | Performs scheduled LDAP synchronizations and cleans authentication and authorization data | -| ucp-client-root-ca | A certificate authority to sign client bundles | -| ucp-cluster-root-ca | A certificate authority used for TLS communication between UCP components | -| ucp-controller | The UCP web server | -| ucp-dsinfo | Docker system information collection script to assist with troubleshooting | -| ucp-kv | Used to store the UCP configurations. Don't use it in your applications, since it's for internal use only | -| ucp-metrics | Used to collect and process metrics for a node, like the disk space available | -| ucp-proxy | A TLS proxy. It allows secure access to the local Docker Engine to UCP components | -| ucp-swarm-manager | Used to provide backwards-compatibility with Docker Swarm | +| UCP component | Description | +|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| k8s_calico-kube-controllers | A cluster-scoped Kubernetes controller used to coordinate Calico networking. Runs on one manager node only. | +| k8s_calico-node | The Calico node agent, which coordinates networking fabric according to the cluster-wide Calico configuration. Part of the `calico-node` daemonset. Runs on all nodes. Configure the CNI plugin by using the `--cni-installer-url` flag. If this flag isn't set, UCP uses Calico as the default CNI plugin. | +| k8s_install-cni_calico-node | A container that's responsible for installing the Calico CNI plugin binaries and configuration on each host. Part of the `calico-node` daemonset. Runs on all nodes. | +| k8s_POD_calico-node | Pause container for the `calico-node` pod. | +| k8s_POD_calico-kube-controllers | Pause container for the `calico-kube-controllers` pod. | +| k8s_POD_compose | Pause container for the `compose` pod. | +| k8s_POD_kube-dns | Pause container for the `kube-dns` pod. | +| k8s_ucp-dnsmasq-nanny | A dnsmasq instance used in the Kubernetes DNS Service. Part of the `kube-dns` deployment. Runs on one manager node only. | +| k8s_ucp-kube-compose | A custom Kubernetes resource component that's responsible for translating Compose files into Kubernetes constructs. Part of the `compose` deployment. Runs on one manager node only. | +| k8s_ucp-kube-dns | The main Kubernetes DNS Service, used by pods to [resolve service names](https://v1-8.docs.kubernetes.io/docs/concepts/services-networking/dns-pod-service/). Part of the `kube-dns` deployment. Runs on one manager node only. Provides service discovery for Kubernetes services and pods. A set of three containers deployed via Kubernetes as a single pod. | +| k8s_ucp-kubedns-sidecar | Health checking and metrics daemon of the Kubernetes DNS Service. Part of the `kube-dns` deployment. Runs on one manager node only. | +| ucp-agent | Monitors the node and ensures the right UCP services are running. | +| ucp-auth-api | The centralized service for identity and authentication used by UCP and DTR. 
| +| ucp-auth-store | Stores authentication configurations and data for users, organizations, and teams. | +| ucp-auth-worker | Performs scheduled LDAP synchronizations and cleans authentication and authorization data. | +| ucp-client-root-ca | A certificate authority to sign client bundles. | +| ucp-cluster-root-ca | A certificate authority used for TLS communication between UCP components. | +| ucp-controller | The UCP web server. | +| ucp-dsinfo | Docker system information collection script to assist with troubleshooting. | +| ucp-interlock | Monitors swarm workloads configured to use Layer 7 routing. Only runs when you enable Layer 7 routing. | +| ucp-interlock-proxy | A service that provides load balancing and proxying for swarm workloads. Only runs when you enable Layer 7 routing. | +| ucp-kube-apiserver | A master component that serves the Kubernetes API. It persists its state in `etcd` directly, and all other components communicate with API server directly. | +| ucp-kube-controller-manager | A master component that manages the desired state of controllers and other Kubernetes objects. It monitors the API server and performs background tasks when needed. | +| ucp-kubelet | The Kubernetes node agent running on every node, which is responsible for running Kubernetes pods, reporting the health of the node, and monitoring resource usage. | +| ucp-kube-proxy | The networking proxy running on every node, which enables pods to contact Kubernetes services and other pods, via cluster IP addresses. | +| ucp-kube-scheduler | A master component that handles scheduling of pods. It communicates with the API server only to obtain workloads that need to be scheduled. | +| ucp-kv | Used to store the UCP configurations. Don't use it in your applications, since it's for internal use only. Also used by Kubernetes components. | +| ucp-metrics | Used to collect and process metrics for a node, like the disk space available. | +| ucp-proxy | A TLS proxy. It allows secure access to the local Docker Engine to UCP components. | +| ucp-reconcile | When ucp-agent detects that the node is not running the right UCP components, it starts the ucp-reconcile container to converge the node to its desired state. It is expected for the ucp-reconcile container to remain in an exited state when the node is healthy. | +| ucp-swarm-manager | Used to provide backwards-compatibility with Docker Swarm. | + ### UCP components in worker nodes Worker nodes are the ones where you run your applications. These are the UCP services running on worker nodes: -| UCP component | Description | -|:--------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| ucp-agent | Monitors the node and ensures the right UCP services are running | -| ucp-dsinfo | Docker system information collection script to assist with troubleshooting | -| ucp-reconcile | When ucp-agent detects that the node is not running the right UCP components, it starts the ucp-reconcile container to converge the node to its desired state. It is expected for the ucp-reconcile container to remain in an exited state when the node is healthy. | -| ucp-proxy | A TLS proxy. 
It allows secure access to the local Docker Engine to UCP components | +| UCP component | Description | +|:----------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| k8s_calico-node | The Calico node agent, which coordinates networking fabric according to the cluster-wide Calico configuration. Part of the `calico-node` daemonset. Runs on all nodes. | +| k8s_install-cni_calico-node | A container that's responsible for installing the Calico CNI plugin binaries and configuration on each host. Part of the `calico-node` daemonset. Runs on all nodes. | +| k8s_POD_calico-node | "Pause" container for the Calico-node pod. By default, this container is hidden, but you can see it by running `docker ps -a`. | +| ucp-agent | Monitors the node and ensures the right UCP services are running | +| ucp-interlock-extension | Helper service that reconfigures the ucp-interlock-proxy service based on the swarm workloads that are running. | +| ucp-interlock-proxy | A service that provides load balancing and proxying for swarm workloads. Only runs when you enable Layer 7 routing. | +| ucp-dsinfo | Docker system information collection script to assist with troubleshooting | +| ucp-kubelet | The kubernetes node agent running on every node, which is responsible for running Kubernetes pods, reporting the health of the node, and monitoring resource usage | +| ucp-kube-proxy | The networking proxy running on every node, which enables pods to contact Kubernetes services and other pods, via cluster IP addresses | +| ucp-reconcile | When ucp-agent detects that the node is not running the right UCP components, it starts the ucp-reconcile container to converge the node to its desired state. It is expected for the ucp-reconcile container to remain in an exited state when the node is healthy. | +| ucp-proxy | A TLS proxy. It allows secure access to the local Docker Engine to UCP components | + +## Pause containers + +Every pod in Kubernetes has a _pause_ container, which is an "empty" container +that bootstraps the pod to establish all of the namespaces. Pause containers +hold the cgroups, reservations, and namespaces of a pod before its individual +containers are created. The pause container's image is always present, so the +allocation of the pod's resources is instantaneous. + +By default, pause containers are hidden, but you can see them by running +`docker ps -a`. + +``` +docker ps -a | grep -I pause + +8c9707885bf6 dockereng/ucp-pause:3.0.0-6d332d3 "/pause" 47 hours ago Up 47 hours k8s_POD_calico-kube-controllers-559f6948dc-5c84l_kube-system_d00e5130-1bf4-11e8-b426-0242ac110011_0 +258da23abbf5 dockereng/ucp-pause:3.0.0-6d332d3 "/pause" 47 hours ago Up 47 hours k8s_POD_kube-dns-6d46d84946-tqpzr_kube-system_d63acec6-1bf4-11e8-b426-0242ac110011_0 +2e27b5d31a06 dockereng/ucp-pause:3.0.0-6d332d3 "/pause" 47 hours ago Up 47 hours k8s_POD_compose-698cf787f9-dxs29_kube-system_d5866b3c-1bf4-11e8-b426-0242ac110011_0 +5d96dff73458 dockereng/ucp-pause:3.0.0-6d332d3 "/pause" 47 hours ago Up 47 hours k8s_POD_calico-node-4fjgv_kube-system_d043a0ea-1bf4-11e8-b426-0242ac110011_0 +``` ## Volumes used by UCP @@ -129,6 +180,16 @@ driver. By default, the data for these volumes can be found at `/var/lib/docker/volumes//_data`. 
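For example, to confirm where a given volume is mounted on a node, you can list the UCP volumes and inspect one of them. This is a minimal sketch; the exact volume names you see depend on the UCP version and on whether the node is a manager or a worker, and `ucp-kv` below is only one common example:

```bash
# List the named volumes that UCP created on this node.
docker volume ls --filter name=ucp

# Print the host path where one of those volumes is mounted,
# typically /var/lib/docker/volumes/<volume-name>/_data.
docker volume inspect --format '{{ .Mountpoint }}' ucp-kv
```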
+## Configurations use by UCP + +| Configuration name | Description | +|:-------------------------------|:-------------------------------------------------------------------------------------------------| +| com.docker.interlock.extension | Configuration for the Interlock extension service that monitors and configures the proxy service | +| com.docker.interlock.proxy | Configuration for the service responsible for handling user requests and routing them | +| com.docker.license | The Docker EE license | +| com.docker.ucp.config | The UCP controller configuration. Most of the settings available on the UCP UI are stored here | +| com.docker.ucp.interlock.conf | Configuration for the core Interlock service | + ## How you interact with UCP There are two ways to interact with UCP: the web UI or the CLI. @@ -136,17 +197,16 @@ There are two ways to interact with UCP: the web UI or the CLI. You can use the UCP web UI to manage your swarm, grant and revoke user permissions, deploy, configure, manage, and monitor your applications. -![](images/architecture-3.svg) +![](images/ucp-architecture-3.svg){: .with-border} UCP also exposes the standard Docker API, so you can continue using existing tools like the Docker CLI client. Since UCP secures your cluster with role-based access control, you need to configure your Docker CLI client and other client tools to authenticate your requests using -[client certificates](user/access-ucp/index.md) that you can download +[client certificates](user-access/index.md) that you can download from your UCP profile page. - ## Where to go next -* [System requirements](admin/install/system-requirements.md) -* [Plan your installation](admin/install/system-requirements.md) +- [System requirements](admin/install/system-requirements.md) +- [Plan your installation](admin/install/plan-installation.md) diff --git a/datacenter/ucp/3.0/guides/images/change-orchestrator-for-node-1.png b/datacenter/ucp/3.0/guides/images/change-orchestrator-for-node-1.png new file mode 100644 index 0000000000..d625a5cd8e Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/change-orchestrator-for-node-1.png differ diff --git a/datacenter/ucp/3.0/guides/images/cli-based-access-2.png b/datacenter/ucp/3.0/guides/images/cli-based-access-2.png new file mode 100644 index 0000000000..c4067603d9 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/cli-based-access-2.png differ diff --git a/datacenter/ucp/3.0/guides/images/cli-based-access-3.png b/datacenter/ucp/3.0/guides/images/cli-based-access-3.png new file mode 100644 index 0000000000..5d274e7207 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/cli-based-access-3.png differ diff --git a/datacenter/ucp/3.0/guides/images/client-bundle.png b/datacenter/ucp/3.0/guides/images/client-bundle.png new file mode 100644 index 0000000000..e4a419ada3 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/client-bundle.png differ diff --git a/datacenter/ucp/3.0/guides/images/create-service-account-1.png b/datacenter/ucp/3.0/guides/images/create-service-account-1.png new file mode 100644 index 0000000000..e850b04384 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/create-service-account-1.png differ diff --git a/datacenter/ucp/3.0/guides/images/create-service-account-2.png b/datacenter/ucp/3.0/guides/images/create-service-account-2.png new file mode 100644 index 0000000000..278ed3da9b Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/create-service-account-2.png differ diff --git 
a/datacenter/ucp/3.0/guides/images/create-service-account-3.png b/datacenter/ucp/3.0/guides/images/create-service-account-3.png new file mode 100644 index 0000000000..f1bba1a46a Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/create-service-account-3.png differ diff --git a/datacenter/ucp/3.0/guides/images/custom-role-30.png b/datacenter/ucp/3.0/guides/images/custom-role-30.png new file mode 100644 index 0000000000..6143991782 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/custom-role-30.png differ diff --git a/datacenter/ucp/3.0/guides/images/deploy-a-service-5.png b/datacenter/ucp/3.0/guides/images/deploy-a-service-5.png new file mode 100644 index 0000000000..8e465aa42f Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/deploy-a-service-5.png differ diff --git a/datacenter/ucp/3.0/guides/images/deploy-compose-kubernetes-1.png b/datacenter/ucp/3.0/guides/images/deploy-compose-kubernetes-1.png new file mode 100644 index 0000000000..e2877a88be Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/deploy-compose-kubernetes-1.png differ diff --git a/datacenter/ucp/3.0/guides/images/deploy-compose-kubernetes-2.png b/datacenter/ucp/3.0/guides/images/deploy-compose-kubernetes-2.png new file mode 100644 index 0000000000..18454e3b28 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/deploy-compose-kubernetes-2.png differ diff --git a/datacenter/ucp/3.0/guides/images/deploy-compose-kubernetes-3.png b/datacenter/ucp/3.0/guides/images/deploy-compose-kubernetes-3.png new file mode 100644 index 0000000000..dfc731d7ed Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/deploy-compose-kubernetes-3.png differ diff --git a/datacenter/ucp/3.0/guides/images/deploy-ingress-controller-1.png b/datacenter/ucp/3.0/guides/images/deploy-ingress-controller-1.png new file mode 100644 index 0000000000..f9b13475bf Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/deploy-ingress-controller-1.png differ diff --git a/datacenter/ucp/3.0/guides/images/deploy-ingress-controller-2.png b/datacenter/ucp/3.0/guides/images/deploy-ingress-controller-2.png new file mode 100644 index 0000000000..ae4c2d5273 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/deploy-ingress-controller-2.png differ diff --git a/datacenter/ucp/3.0/guides/images/deploy-ingress-controller-3.png b/datacenter/ucp/3.0/guides/images/deploy-ingress-controller-3.png new file mode 100644 index 0000000000..6af93ab000 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/deploy-ingress-controller-3.png differ diff --git a/datacenter/ucp/3.0/guides/images/deploy-kubernetes-workload-1.png b/datacenter/ucp/3.0/guides/images/deploy-kubernetes-workload-1.png new file mode 100644 index 0000000000..31eb5a1cdd Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/deploy-kubernetes-workload-1.png differ diff --git a/datacenter/ucp/3.0/guides/images/deploy-kubernetes-workload-2.png b/datacenter/ucp/3.0/guides/images/deploy-kubernetes-workload-2.png new file mode 100644 index 0000000000..287ca51080 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/deploy-kubernetes-workload-2.png differ diff --git a/datacenter/ucp/3.0/guides/images/deploy-kubernetes-workload-3.png b/datacenter/ucp/3.0/guides/images/deploy-kubernetes-workload-3.png new file mode 100644 index 0000000000..4717b49611 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/deploy-kubernetes-workload-3.png differ diff --git a/datacenter/ucp/3.0/guides/images/deploy-kubernetes-workload-4.png 
b/datacenter/ucp/3.0/guides/images/deploy-kubernetes-workload-4.png new file mode 100644 index 0000000000..c729de596e Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/deploy-kubernetes-workload-4.png differ diff --git a/datacenter/ucp/3.0/guides/images/deploy-kubernetes-workload-5.png b/datacenter/ucp/3.0/guides/images/deploy-kubernetes-workload-5.png new file mode 100644 index 0000000000..ce7b501568 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/deploy-kubernetes-workload-5.png differ diff --git a/datacenter/ucp/3.0/guides/images/deploy-multi-service-app-1.png b/datacenter/ucp/3.0/guides/images/deploy-multi-service-app-1.png new file mode 100644 index 0000000000..c3e79b02d3 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/deploy-multi-service-app-1.png differ diff --git a/datacenter/ucp/3.0/guides/images/deploy-multi-service-app-2.png b/datacenter/ucp/3.0/guides/images/deploy-multi-service-app-2.png new file mode 100644 index 0000000000..ef6298e086 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/deploy-multi-service-app-2.png differ diff --git a/datacenter/ucp/3.0/guides/images/deploy-multi-service-app-3.png b/datacenter/ucp/3.0/guides/images/deploy-multi-service-app-3.png new file mode 100644 index 0000000000..6cd2861668 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/deploy-multi-service-app-3.png differ diff --git a/datacenter/ucp/3.0/guides/images/deploy-multi-service-app-4.png b/datacenter/ucp/3.0/guides/images/deploy-multi-service-app-4.png new file mode 100644 index 0000000000..bd5ff0b29e Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/deploy-multi-service-app-4.png differ diff --git a/datacenter/ucp/3.0/guides/images/deploy-multi-service-app-5.png b/datacenter/ucp/3.0/guides/images/deploy-multi-service-app-5.png new file mode 100644 index 0000000000..e2b5b332ee Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/deploy-multi-service-app-5.png differ diff --git a/datacenter/ucp/3.0/guides/images/deploy-stack-to-collection-1.png b/datacenter/ucp/3.0/guides/images/deploy-stack-to-collection-1.png new file mode 100644 index 0000000000..06ee08c838 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/deploy-stack-to-collection-1.png differ diff --git a/datacenter/ucp/3.0/guides/images/deploy-stack-to-collection-2.png b/datacenter/ucp/3.0/guides/images/deploy-stack-to-collection-2.png new file mode 100644 index 0000000000..6741c4fd46 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/deploy-stack-to-collection-2.png differ diff --git a/datacenter/ucp/3.0/guides/images/interlock-architecture-1.svg b/datacenter/ucp/3.0/guides/images/interlock-architecture-1.svg new file mode 100644 index 0000000000..83e759938a --- /dev/null +++ b/datacenter/ucp/3.0/guides/images/interlock-architecture-1.svg @@ -0,0 +1,204 @@ + + + + interlock-architecture-1 + Created with Sketch. 
+ + + + + + + + + + + + + Docker swarm managed with UCP + + + + + + + + UCP + + + + + + interlock-proxy:80 + + + + + + + worker node + + + + + + + + + + UCP + + + + + + interlock-proxy:80 + + + + + + + worker node + + + + + + + + + + UCP + + + + + + interlock-extension + + + + + + wordpress:8000 + + + + + + + worker node + + + + + + + + + + + + UCP + + + + + + ucp-interlock + + + + + + + manager node + + + + + + + + + + + + your load balancer + + + + + + + + + + + + + + + + + + + + + http://wordpress.example.org + + + + + + + wordpress-net + + + + + + + + + + + + + + + + + + + + + + + + + + + ucp-interlock + + + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/datacenter/ucp/3.0/guides/images/interlock-default-service-1.png b/datacenter/ucp/3.0/guides/images/interlock-default-service-1.png new file mode 100644 index 0000000000..5c63a95e94 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/interlock-default-service-1.png differ diff --git a/datacenter/ucp/3.0/guides/images/interlock-default-service-2.png b/datacenter/ucp/3.0/guides/images/interlock-default-service-2.png new file mode 100644 index 0000000000..b12883d062 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/interlock-default-service-2.png differ diff --git a/datacenter/ucp/3.0/guides/images/interlock-deploy-production-1.svg b/datacenter/ucp/3.0/guides/images/interlock-deploy-production-1.svg new file mode 100644 index 0000000000..48ccb3f7ca --- /dev/null +++ b/datacenter/ucp/3.0/guides/images/interlock-deploy-production-1.svg @@ -0,0 +1,207 @@ + + + + interlock-deploy-production-1 + Created with Sketch. + + + + + + + + Docker swarm managed with UCP + + + + + + node-6 + + + + + UCP + + + + + + interlock-proxy:80 + + + + + + + worker node + + + + + + + + node-5 + + + + + UCP + + + + + + interlock-proxy:80 + + + + + interlock-proxy:80 + + + + + + + worker node + + + + + + + + node-4 + + + + + UCP + + + + + + interlock-extension + + + + + + wordpress:8000 + + + + + + + worker node + + + + + + + + + + node-3 + + + + + UCP + + + + + + + manager node + + + + + + + + node-2 + + + + + UCP + + + + + + + manager node + + + + + + + + node-1 + + + + + UCP + + + + + + ucp-interlock + + + + + + + manager node + + + + + + + + + + + + your load balancer + + + + + + + + + + + + + + + + + + + + + http://wordpress.example.org + + + + + + \ No newline at end of file diff --git a/datacenter/ucp/3.0/guides/images/interlock-install-1.svg b/datacenter/ucp/3.0/guides/images/interlock-install-1.svg new file mode 100644 index 0000000000..649439a15d --- /dev/null +++ b/datacenter/ucp/3.0/guides/images/interlock-install-1.svg @@ -0,0 +1,198 @@ + + + + use-domain-names-1 + Created with Sketch. 
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 192.168.99.104 + + + + + + worker node + + + + + + + UCP + + + + + + wordpress:8000 + + + + + + + 192.168.99.103 + + + + + + worker node + + + + + + + UCP + + + + + + + + + 192.168.99.102 + + + + + + manager node + + + + + + + UCP + + + + + + + 192.168.99.101 + + + + + + manager node + + + + + + + UCP + + + + + + + 192.168.99.100 + + + + + + manager node + + + + + + + UCP + + + + + + + + + + + swarm routing mesh + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 192.168.99.100:8000 + + + + + + \ No newline at end of file diff --git a/datacenter/ucp/3.0/guides/images/interlock-install-2.svg b/datacenter/ucp/3.0/guides/images/interlock-install-2.svg new file mode 100644 index 0000000000..070eeb9340 --- /dev/null +++ b/datacenter/ucp/3.0/guides/images/interlock-install-2.svg @@ -0,0 +1,198 @@ + + + + use-domain-names-2 + Created with Sketch. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 192.168.99.104 + + + + + + worker node + + + + + + + UCP + + + + + + wordpress:8000 + + + + + + + 192.168.99.103 + + + + + + worker node + + + + + + + UCP + + + + + + + + + 192.168.99.102 + + + + + + manager node + + + + + + + UCP + + + + + + + 192.168.99.101 + + + + + + manager node + + + + + + + UCP + + + + + + + 192.168.99.100 + + + + + + manager node + + + + + + + UCP + + + + + + + + + + + HTTP routing mesh + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + wordpress.example.org:80 + + + + + + \ No newline at end of file diff --git a/datacenter/ucp/3.0/guides/images/interlock-install-3.png b/datacenter/ucp/3.0/guides/images/interlock-install-3.png new file mode 100644 index 0000000000..9ecc24f6fc Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/interlock-install-3.png differ diff --git a/datacenter/ucp/3.0/guides/images/interlock-overview-1.svg b/datacenter/ucp/3.0/guides/images/interlock-overview-1.svg new file mode 100644 index 0000000000..20bbc751d1 --- /dev/null +++ b/datacenter/ucp/3.0/guides/images/interlock-overview-1.svg @@ -0,0 +1,180 @@ + + + + interlock-overview-1 + Created with Sketch. + + + + + + + + + + Docker swarm managed with UCP + + + + + + node-5 + + + + + + worker node + + + + + + + UCP + + + + + + wordpress:8000 + + + + + + + node-4 + + + + + + worker node + + + + + + + UCP + + + + + + + + + node-3 + + + + + + manager node + + + + + + + UCP + + + + + + + node-2 + + + + + + manager node + + + + + + + UCP + + + + + + + node-1 + + + + + + manager node + + + + + + + UCP + + + + + + + + + + + swarm routing mesh + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + http://node-5:8000 + + + + + \ No newline at end of file diff --git a/datacenter/ucp/3.0/guides/images/interlock-overview-2.svg b/datacenter/ucp/3.0/guides/images/interlock-overview-2.svg new file mode 100644 index 0000000000..8f9b9ad0d7 --- /dev/null +++ b/datacenter/ucp/3.0/guides/images/interlock-overview-2.svg @@ -0,0 +1,186 @@ + + + + interlock-overview-2 + Created with Sketch. 
+ + + + + + + + + + Docker swarm managed with UCP + + + + + + node-5 + + + + + + worker node + + + + + + + UCP + + + + + + wordpress:8000 + + + + + + + node-4 + + + + + + worker node + + + + + + + UCP + + + + + + + + + node-3 + + + + + + manager node + + + + + + + UCP + + + + + + + node-2 + + + + + + manager node + + + + + + + UCP + + + + + + + node-1 + + + + + + manager node + + + + + + + UCP + + + + + + + + + + + swarm routing mesh + + + + + + layer 7 routing + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + http://wordpress.example.org + + + + + \ No newline at end of file diff --git a/datacenter/ucp/3.0/guides/images/interlock-tls-1.png b/datacenter/ucp/3.0/guides/images/interlock-tls-1.png new file mode 100644 index 0000000000..d49625d287 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/interlock-tls-1.png differ diff --git a/datacenter/ucp/3.0/guides/images/interlock-tls-2.png b/datacenter/ucp/3.0/guides/images/interlock-tls-2.png new file mode 100644 index 0000000000..d906147e02 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/interlock-tls-2.png differ diff --git a/datacenter/ucp/3.0/guides/images/interlock-tls-3.png b/datacenter/ucp/3.0/guides/images/interlock-tls-3.png new file mode 100644 index 0000000000..151055ada7 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/interlock-tls-3.png differ diff --git a/datacenter/ucp/3.0/guides/images/isolate-nodes-10.png b/datacenter/ucp/3.0/guides/images/isolate-nodes-10.png new file mode 100644 index 0000000000..a997704510 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/isolate-nodes-10.png differ diff --git a/datacenter/ucp/3.0/guides/images/isolate-nodes-5.png b/datacenter/ucp/3.0/guides/images/isolate-nodes-5.png new file mode 100644 index 0000000000..59f74cf267 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/isolate-nodes-5.png differ diff --git a/datacenter/ucp/3.0/guides/images/isolate-nodes-6.png b/datacenter/ucp/3.0/guides/images/isolate-nodes-6.png new file mode 100644 index 0000000000..2674a02259 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/isolate-nodes-6.png differ diff --git a/datacenter/ucp/3.0/guides/images/isolate-nodes-7.png b/datacenter/ucp/3.0/guides/images/isolate-nodes-7.png new file mode 100644 index 0000000000..f6a4bedbe9 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/isolate-nodes-7.png differ diff --git a/datacenter/ucp/3.0/guides/images/isolate-nodes-8.png b/datacenter/ucp/3.0/guides/images/isolate-nodes-8.png new file mode 100644 index 0000000000..66c62569da Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/isolate-nodes-8.png differ diff --git a/datacenter/ucp/3.0/guides/images/isolate-nodes-9.png b/datacenter/ucp/3.0/guides/images/isolate-nodes-9.png new file mode 100644 index 0000000000..c2bfd3ed83 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/isolate-nodes-9.png differ diff --git a/datacenter/ucp/3.0/guides/images/isolate-volumes-0.png b/datacenter/ucp/3.0/guides/images/isolate-volumes-0.png new file mode 100644 index 0000000000..70a8c16ff5 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/isolate-volumes-0.png differ diff --git a/datacenter/ucp/3.0/guides/images/isolate-volumes-0a.png b/datacenter/ucp/3.0/guides/images/isolate-volumes-0a.png new file mode 100644 index 0000000000..7116bb0ddb Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/isolate-volumes-0a.png differ diff --git 
a/datacenter/ucp/3.0/guides/images/join-nodes-to-cluster-1.png b/datacenter/ucp/3.0/guides/images/join-nodes-to-cluster-1.png new file mode 100644 index 0000000000..c522d4d64d Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/join-nodes-to-cluster-1.png differ diff --git a/datacenter/ucp/3.0/guides/images/join-nodes-to-cluster-2.png b/datacenter/ucp/3.0/guides/images/join-nodes-to-cluster-2.png new file mode 100644 index 0000000000..7e07794d2e Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/join-nodes-to-cluster-2.png differ diff --git a/datacenter/ucp/3.0/guides/images/join-nodes-to-cluster-3.png b/datacenter/ucp/3.0/guides/images/join-nodes-to-cluster-3.png new file mode 100644 index 0000000000..b2a475e2b5 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/join-nodes-to-cluster-3.png differ diff --git a/datacenter/ucp/3.0/guides/images/join-windows-nodes-to-cluster-1.png b/datacenter/ucp/3.0/guides/images/join-windows-nodes-to-cluster-1.png new file mode 100644 index 0000000000..3519ffb121 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/join-windows-nodes-to-cluster-1.png differ diff --git a/datacenter/ucp/3.0/guides/images/kube-create-role.png b/datacenter/ucp/3.0/guides/images/kube-create-role.png new file mode 100644 index 0000000000..a7c56e7e32 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/kube-create-role.png differ diff --git a/datacenter/ucp/3.0/guides/images/kube-grant-rolebinding.png b/datacenter/ucp/3.0/guides/images/kube-grant-rolebinding.png new file mode 100644 index 0000000000..e8c739273d Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/kube-grant-rolebinding.png differ diff --git a/datacenter/ucp/3.0/guides/images/kube-grant-roleselect.png b/datacenter/ucp/3.0/guides/images/kube-grant-roleselect.png new file mode 100644 index 0000000000..e72d915aad Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/kube-grant-roleselect.png differ diff --git a/datacenter/ucp/3.0/guides/images/kube-grant-wizard.png b/datacenter/ucp/3.0/guides/images/kube-grant-wizard.png new file mode 100644 index 0000000000..974b9f312e Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/kube-grant-wizard.png differ diff --git a/datacenter/ucp/3.0/guides/images/kube-rbac-grants.png b/datacenter/ucp/3.0/guides/images/kube-rbac-grants.png new file mode 100644 index 0000000000..9cb1bcfdc4 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/kube-rbac-grants.png differ diff --git a/datacenter/ucp/3.0/guides/images/kube-rbac-roles.png b/datacenter/ucp/3.0/guides/images/kube-rbac-roles.png new file mode 100644 index 0000000000..a6cb551bf0 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/kube-rbac-roles.png differ diff --git a/datacenter/ucp/3.0/guides/images/kube-role-create.png b/datacenter/ucp/3.0/guides/images/kube-role-create.png new file mode 100644 index 0000000000..0a189e293f Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/kube-role-create.png differ diff --git a/datacenter/ucp/3.0/guides/images/kubernetes-version.png b/datacenter/ucp/3.0/guides/images/kubernetes-version.png new file mode 100644 index 0000000000..60a248e849 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/kubernetes-version.png differ diff --git a/datacenter/ucp/3.0/guides/images/manage-and-deploy-private-images-1.png b/datacenter/ucp/3.0/guides/images/manage-and-deploy-private-images-1.png new file mode 100644 index 0000000000..66465741e5 Binary files /dev/null and 
b/datacenter/ucp/3.0/guides/images/manage-and-deploy-private-images-1.png differ diff --git a/datacenter/ucp/3.0/guides/images/manage-and-deploy-private-images-2.png b/datacenter/ucp/3.0/guides/images/manage-and-deploy-private-images-2.png new file mode 100644 index 0000000000..6954506496 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/manage-and-deploy-private-images-2.png differ diff --git a/datacenter/ucp/3.0/guides/images/manage-and-deploy-private-images-3.png b/datacenter/ucp/3.0/guides/images/manage-and-deploy-private-images-3.png new file mode 100644 index 0000000000..b39138c587 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/manage-and-deploy-private-images-3.png differ diff --git a/datacenter/ucp/3.0/guides/images/manage-and-deploy-private-images-4.png b/datacenter/ucp/3.0/guides/images/manage-and-deploy-private-images-4.png new file mode 100644 index 0000000000..26b91d3f4d Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/manage-and-deploy-private-images-4.png differ diff --git a/datacenter/ucp/3.0/guides/images/manage-secrets-4a.png b/datacenter/ucp/3.0/guides/images/manage-secrets-4a.png new file mode 100644 index 0000000000..adb5d85db2 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/manage-secrets-4a.png differ diff --git a/datacenter/ucp/3.0/guides/images/migrate-kubernetes-roles-1.png b/datacenter/ucp/3.0/guides/images/migrate-kubernetes-roles-1.png new file mode 100644 index 0000000000..3bb600c12f Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/migrate-kubernetes-roles-1.png differ diff --git a/datacenter/ucp/3.0/guides/images/migrate-kubernetes-roles-2.png b/datacenter/ucp/3.0/guides/images/migrate-kubernetes-roles-2.png new file mode 100644 index 0000000000..d609ab7f76 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/migrate-kubernetes-roles-2.png differ diff --git a/datacenter/ucp/3.0/guides/images/overview-1.png b/datacenter/ucp/3.0/guides/images/overview-1.png new file mode 100644 index 0000000000..7bb908139f Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/overview-1.png differ diff --git a/datacenter/ucp/3.0/guides/images/overview-2.png b/datacenter/ucp/3.0/guides/images/overview-2.png new file mode 100644 index 0000000000..22261dd985 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/overview-2.png differ diff --git a/datacenter/ucp/3.0/guides/images/rbac-pull-images-1.png b/datacenter/ucp/3.0/guides/images/rbac-pull-images-1.png new file mode 100644 index 0000000000..9802b4cc1b Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/rbac-pull-images-1.png differ diff --git a/datacenter/ucp/3.0/guides/images/rbac-pull-images-2.png b/datacenter/ucp/3.0/guides/images/rbac-pull-images-2.png new file mode 100644 index 0000000000..cea41ea5c3 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/rbac-pull-images-2.png differ diff --git a/datacenter/ucp/3.0/guides/images/rbac-roles.png b/datacenter/ucp/3.0/guides/images/rbac-roles.png new file mode 100644 index 0000000000..9a4902f2ba Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/rbac-roles.png differ diff --git a/datacenter/ucp/3.0/guides/images/route-simple-app-1.png b/datacenter/ucp/3.0/guides/images/route-simple-app-1.png new file mode 100644 index 0000000000..38a4402e41 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/route-simple-app-1.png differ diff --git a/datacenter/ucp/3.0/guides/images/saml_enabled.png b/datacenter/ucp/3.0/guides/images/saml_enabled.png new file mode 
100644 index 0000000000..022c9e37fb Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/saml_enabled.png differ diff --git a/datacenter/ucp/3.0/guides/images/saml_settings.png b/datacenter/ucp/3.0/guides/images/saml_settings.png new file mode 100644 index 0000000000..89d1d437de Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/saml_settings.png differ diff --git a/datacenter/ucp/3.0/guides/images/ucp-architecture-1.svg b/datacenter/ucp/3.0/guides/images/ucp-architecture-1.svg new file mode 100644 index 0000000000..abd4a32d15 --- /dev/null +++ b/datacenter/ucp/3.0/guides/images/ucp-architecture-1.svg @@ -0,0 +1,71 @@ + + + + architecture-1 + Created with Sketch. + + + + + + + + + + cloud servers + + + + + + virtual servers + + + + + + physical servers + + + + + + + Docker EE Engine + + + + + + Universal Control Plane + + + + + + Docker Trusted Registry + + + + + + your applications + + + + + + + deploy and manage + + + + + + + + + + + + \ No newline at end of file diff --git a/datacenter/ucp/3.0/guides/images/ucp-architecture-2.svg b/datacenter/ucp/3.0/guides/images/ucp-architecture-2.svg new file mode 100644 index 0000000000..46e7833789 --- /dev/null +++ b/datacenter/ucp/3.0/guides/images/ucp-architecture-2.svg @@ -0,0 +1,166 @@ + + + + architecture-2 + Created with Sketch. + + + + + Docker swarm + + + + + + + + + worker node + + + + + + + Docker EE + + + + + + UCP agent + + + + + + UCP worker + + + + + + + + + + worker node + + + + + + + Docker EE + + + + + + UCP agent + + + + + + UCP worker + + + + + + + + + + + + manager node + + + + + + + Docker EE + + + + + + UCP agent + + + + + + UCP manager + + + + + + + + + + manager node + + + + + + + Docker EE + + + + + + UCP agent + + + + + + UCP manager + + + + + + + + + + manager node + + + + + + + Docker EE + + + + + + UCP agent + + + + + + UCP manager + + + + + + + + + + \ No newline at end of file diff --git a/datacenter/ucp/3.0/guides/images/ucp-architecture-3.svg b/datacenter/ucp/3.0/guides/images/ucp-architecture-3.svg new file mode 100644 index 0000000000..6a9c66a0a3 --- /dev/null +++ b/datacenter/ucp/3.0/guides/images/ucp-architecture-3.svg @@ -0,0 +1,233 @@ + + + + architecture-3 + Created with Sketch. 
+ + + + + + + + + + + + + + + + + + + + + + + Docker swarm + + + + + + + + your load balancer + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + worker node + + + + + + + Docker EE + + + + + + UCP agent + + + + + + UCP worker + + + + + + + + + + worker node + + + + + + + Docker EE + + + + + + UCP agent + + + + + + UCP worker + + + + + + + + + + + + manager node + + + + + + + Docker EE + + + + + + UCP agent + + + + + + UCP manager + + + + + + + + + + manager node + + + + + + + Docker EE + + + + + + UCP agent + + + + + + UCP manager + + + + + + + + + + manager node + + + + + + + Docker EE + + + + + + UCP agent + + + + + + UCP manager + + + + + + + + + + + + UI + + + + + + CLI + + + + + + + \ No newline at end of file diff --git a/datacenter/ucp/3.0/guides/images/ucp_usermgmt_users_create01.png b/datacenter/ucp/3.0/guides/images/ucp_usermgmt_users_create01.png new file mode 100644 index 0000000000..685c9d8c92 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/ucp_usermgmt_users_create01.png differ diff --git a/datacenter/ucp/3.0/guides/images/ucp_usermgmt_users_create02.png b/datacenter/ucp/3.0/guides/images/ucp_usermgmt_users_create02.png new file mode 100644 index 0000000000..936dae2e59 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/ucp_usermgmt_users_create02.png differ diff --git a/datacenter/ucp/3.0/guides/images/use-constraints-in-stack-deployment-2.png b/datacenter/ucp/3.0/guides/images/use-constraints-in-stack-deployment-2.png index 67b0e5d299..3d58cd0675 100644 Binary files a/datacenter/ucp/3.0/guides/images/use-constraints-in-stack-deployment-2.png and b/datacenter/ucp/3.0/guides/images/use-constraints-in-stack-deployment-2.png differ diff --git a/datacenter/ucp/3.0/guides/images/use-constraints-in-stack-deployment.png b/datacenter/ucp/3.0/guides/images/use-constraints-in-stack-deployment.png index 0c041d16c2..358d15996b 100644 Binary files a/datacenter/ucp/3.0/guides/images/use-constraints-in-stack-deployment.png and b/datacenter/ucp/3.0/guides/images/use-constraints-in-stack-deployment.png differ diff --git a/datacenter/ucp/3.0/guides/images/use-externally-signed-certs-2.png b/datacenter/ucp/3.0/guides/images/use-externally-signed-certs-2.png index 071cd1e10b..b08d65659b 100644 Binary files a/datacenter/ucp/3.0/guides/images/use-externally-signed-certs-2.png and b/datacenter/ucp/3.0/guides/images/use-externally-signed-certs-2.png differ diff --git a/datacenter/ucp/3.0/guides/images/use-nfs-volume-1.png b/datacenter/ucp/3.0/guides/images/use-nfs-volume-1.png new file mode 100644 index 0000000000..7e8b573ca9 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/use-nfs-volume-1.png differ diff --git a/datacenter/ucp/3.0/guides/images/use-nfs-volume-2.png b/datacenter/ucp/3.0/guides/images/use-nfs-volume-2.png new file mode 100644 index 0000000000..0f1f1824c0 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/use-nfs-volume-2.png differ diff --git a/datacenter/ucp/3.0/guides/images/use-nfs-volume-3.png b/datacenter/ucp/3.0/guides/images/use-nfs-volume-3.png new file mode 100644 index 0000000000..47fc63e364 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/use-nfs-volume-3.png differ diff --git a/datacenter/ucp/3.0/guides/images/use-nfs-volume-4.png b/datacenter/ucp/3.0/guides/images/use-nfs-volume-4.png new file mode 100644 index 0000000000..56cb6abb9b Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/use-nfs-volume-4.png differ diff --git a/datacenter/ucp/3.0/guides/images/use-nfs-volume-5.png 
b/datacenter/ucp/3.0/guides/images/use-nfs-volume-5.png new file mode 100644 index 0000000000..07073cc859 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/use-nfs-volume-5.png differ diff --git a/datacenter/ucp/3.0/guides/images/view-namespace-resources-1.png b/datacenter/ucp/3.0/guides/images/view-namespace-resources-1.png new file mode 100644 index 0000000000..9fb281cda3 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/view-namespace-resources-1.png differ diff --git a/datacenter/ucp/3.0/guides/images/view-namespace-resources-2.png b/datacenter/ucp/3.0/guides/images/view-namespace-resources-2.png new file mode 100644 index 0000000000..81f249d46e Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/view-namespace-resources-2.png differ diff --git a/datacenter/ucp/3.0/guides/images/view-namespace-resources-3.png b/datacenter/ucp/3.0/guides/images/view-namespace-resources-3.png new file mode 100644 index 0000000000..afca7bc7ea Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/view-namespace-resources-3.png differ diff --git a/datacenter/ucp/3.0/guides/images/view-namespace-resources-4.png b/datacenter/ucp/3.0/guides/images/view-namespace-resources-4.png new file mode 100644 index 0000000000..1a3e41f131 Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/view-namespace-resources-4.png differ diff --git a/datacenter/ucp/3.0/guides/images/view-namespace-resources-5.png b/datacenter/ucp/3.0/guides/images/view-namespace-resources-5.png new file mode 100644 index 0000000000..19f5336bae Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/view-namespace-resources-5.png differ diff --git a/datacenter/ucp/3.0/guides/images/web-based-access-1.png b/datacenter/ucp/3.0/guides/images/web-based-access-1.png new file mode 100644 index 0000000000..fb7304147d Binary files /dev/null and b/datacenter/ucp/3.0/guides/images/web-based-access-1.png differ diff --git a/datacenter/ucp/3.0/guides/images/web-based-access-2.png b/datacenter/ucp/3.0/guides/images/web-based-access-2.png index 65313e945a..00437d1c22 100644 Binary files a/datacenter/ucp/3.0/guides/images/web-based-access-2.png and b/datacenter/ucp/3.0/guides/images/web-based-access-2.png differ diff --git a/datacenter/ucp/3.0/guides/index.md b/datacenter/ucp/3.0/guides/index.md index a054b6794a..ac171fe4ff 100644 --- a/datacenter/ucp/3.0/guides/index.md +++ b/datacenter/ucp/3.0/guides/index.md @@ -1,41 +1,71 @@ --- title: Universal Control Plane overview -description: Learn about Docker Universal Control Plane, the enterprise-grade cluster - management solution from Docker. -keywords: ucp, overview, orchestration, clustering +description: | + Learn about Docker Universal Control Plane, the enterprise-grade cluster management solution from Docker. +keywords: ucp, overview, orchestration, cluster redirect_from: -- /ucp/ + - /ucp/ + - /datacenter/ucp/3.0/guides/ --- Docker Universal Control Plane (UCP) is the enterprise-grade cluster management solution from Docker. You install it on-premises or in your virtual private -cloud, and it helps you manage your Docker swarm and applications through a +cloud, and it helps you manage your Docker cluster and applications through a single interface. 
-![](../../../images/ucp.png){: .with-border} +![](images/overview-1.png){: .with-border} -## Centralized swarm management +## Centralized cluster management With Docker, you can join up to thousands of physical or virtual machines -together to create a container cluster, or swarm, allowing you to deploy your +together to create a container cluster that allows you to deploy your applications at scale. Docker Universal Control Plane extends the -functionality provided by Docker to make it easier to manage your swarm +functionality provided by Docker to make it easier to manage your cluster from a centralized place. You can manage and monitor your container cluster using a graphical UI. -![](../../../images/try-ddc-2.png){: .with-border} +![](images/overview-2.png){: .with-border} -Since UCP exposes the standard Docker API, you can continue using the tools +## Deploy, manage, and monitor + +With Docker UCP, you can manage from a centralized place all of the computing +resources you have available, like nodes, volumes, and networks. + +You can also deploy and monitor your applications and services. + +## Built-in security and access control + +Docker UCP has its own built-in authentication mechanism and integrates with +LDAP services. It also has role-based access control (RBAC), so that you can +control who can access and make changes to your cluster and applications. +[Learn about role-based access control](authorization/index.md). + +![](images/overview-3.png){: .with-border} + +Docker UCP integrates with Docker Trusted Registry so that you can keep the +Docker images you use for your applications behind your firewall, where they +are safe and can't be tampered with. + +You can also enforce security policies and only allow running applications +that use Docker images you know and trust. + +## Use the Docker CLI client + +Because UCP exposes the standard Docker API, you can continue using the tools you already know, including the Docker CLI client, to deploy and manage your applications. -As an example, you can use the `docker info` command to check the -status of a Docker swarm managed by UCP: +For example, you can use the `docker info` command to check the status of a +cluster that's managed by UCP: -```none -$ docker info +```bash +docker info +``` +This command produces the output that you expect from the Docker EE Engine: + +```bash Containers: 38 Running: 23 Paused: 0 @@ -51,30 +81,7 @@ Managers: 1 … ``` -## Deploy, manage, and monitor - -With Docker UCP, you can manage from a centralized place all of the computing -resources you have available, like nodes, volumes, and networks. - -You can also deploy and monitor your applications and services. - -## Built-in security and access control - -Docker UCP has its own built-in authentication mechanism and integrates with -LDAP services. It also has role-based access control (RBAC), so that you can -control who can access and make changes to your swarm and applications. -[Learn about role-based access control](access-control/index.md). - -![](images/overview-3.png){: .with-border} - -Docker UCP integrates with Docker Trusted Registry so that you can keep the -Docker images you use for your applications behind your firewall, where they -are safe and can't be tampered with. - -You can also enforce security policies and only allow running applications -that use Docker images you know and trust. 
- ## Where to go next -* [UCP architecture](architecture.md) -* [Install UCP](admin/install/index.md) +- [Install UCP](admin/install/index.md) +- [Docker EE Platform 2.0 architecture](/ee/docker-ee-architecture.md) diff --git a/datacenter/ucp/3.0/guides/user/access-ucp/kubectl.md b/datacenter/ucp/3.0/guides/user/access-ucp/kubectl.md new file mode 100644 index 0000000000..f7d73a825f --- /dev/null +++ b/datacenter/ucp/3.0/guides/user/access-ucp/kubectl.md @@ -0,0 +1,104 @@ +--- +title: Install the Kubernetes CLI +description: Learn how to install kubectl, the Kubernetes command-line tool, on Docker Universal Control Plane. +keywords: ucp, cli, administration, kubectl, Kubernetes +--- + +Docker EE 2.0 and higher deploys Kubernetes as part of a UCP installation. +Deploy, manage, and monitor Kubernetes workloads from the UCP dashboard. Users can +also interact with the Kubernetes deployment through the Kubernetes +command-line tool named kubectl. + +To access the UCP cluster with kubectl, install the [UCP client bundle](cli.md). + +> Kubernetes on Docker for Mac and Docker for Windows +> +> Docker for Mac and Docker for Windows provide a standalone Kubernetes server that +> runs on your development machine, with kubectl installed by default. This installation is +> separate from the Kubernetes deployment on a UCP cluster. +> Learn how to [deploy to Kubernetes on Docker for Mac](/docker-for-mac/kubernetes.md). +{: .important} + +## Install the kubectl binary + +To use kubectl, install the binary on a workstation which has access to your UCP endpoint. + +> Must install compatible version +> +> Kubernetes only guarantees compatibility with kubectl versions that are +/-1 minor versions away from the Kubernetes version. +{: .important} + +First, find which version of Kubernetes is running in your cluster. This can be found +within the Universal Control Plane dashboard or at the UCP API endpoint [version](/reference/ucp/3.0/api/). + +From the UCP dashboard, click on **About Docker EE** within the **Admin** menu in the top left corner + of the dashboard. Then navigate to **Kubernetes**. + + ![Find Kubernetes version](../images/kubernetes-version.png){: .with-border} + +Once you have the Kubernetes version, install the kubectl client for the relevant +operating system. + + +
+
+``` +# Set the Kubernetes version as found in the UCP Dashboard or API +k8sversion=v1.8.11 + +# Get the kubectl binary. +curl -LO https://storage.googleapis.com/kubernetes-release/release/$k8sversion/bin/darwin/amd64/kubectl + +# Make the kubectl binary executable. +chmod +x ./kubectl + +# Move the kubectl executable to /usr/local/bin. +sudo mv ./kubectl /usr/local/bin/kubectl +``` +
+
+
+``` +# Set the Kubernetes version as found in the UCP Dashboard or API +k8sversion=v1.8.11 + +# Get the kubectl binary. +curl -LO https://storage.googleapis.com/kubernetes-release/release/$k8sversion/bin/linux/amd64/kubectl + +# Make the kubectl binary executable. +chmod +x ./kubectl + +# Move the kubectl executable to /usr/local/bin. +sudo mv ./kubectl /usr/local/bin/kubectl +``` +
+
+
+You can download the binary from this [link](https://storage.googleapis.com/kubernetes-release/release/v1.8.11/bin/windows/amd64/kubectl.exe).
+
+If you have curl installed on your system, you can use these commands in PowerShell:
+
+```cmd
+$env:k8sversion = "v1.8.11"
+
+curl https://storage.googleapis.com/kubernetes-release/release/$env:k8sversion/bin/windows/amd64/kubectl.exe
+```
+
+
+
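+
+After downloading the binary for your platform, you can confirm that it's installed
+correctly by asking it for its client version. This is just a sanity check and
+doesn't require a connection to the cluster:
+
+```bash
+# Print the version of the kubectl client you just installed.
+kubectl version --client
+```
+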
+ +## Using kubectl with a Docker EE cluster + +Docker Enterprise Edition provides users unique certificates and keys to authenticate against + the Docker and Kubernetes APIs. Instructions on how to download these certificates and how to + configure kubectl to use them can be found in [CLI-based access.](cli.md#download-client-certificates) + +## Where to go next + +- [Deploy a workload to a Kubernetes cluster](../kubernetes.md) +- [Deploy to Kubernetes on Docker for Mac](/docker-for-mac/kubernetes.md) + diff --git a/datacenter/ucp/3.0/guides/user/interlock/architecture.md b/datacenter/ucp/3.0/guides/user/interlock/architecture.md new file mode 100644 index 0000000000..3b29d88561 --- /dev/null +++ b/datacenter/ucp/3.0/guides/user/interlock/architecture.md @@ -0,0 +1,73 @@ +--- +title: Interlock architecture +description: Learn more about the architecture of the layer 7 routing solution + for Docker swarm services. +keywords: routing, proxy +--- + +The layer 7 routing solution for swarm workloads is known as Interlock, and has +three components: + +* **Interlock-proxy**: This is a proxy/load-balancing service that handles the +requests from the outside world. By default this service is a containerized +NGINX deployment. +* **Interlock-extension**: This is a helper service that generates the +configuration used by the proxy service. +* **Interlock**: This is the central piece of the layer 7 routing solution. +It uses the Docker API to monitor events, and manages the extension and +proxy services. + +This is what the default configuration looks like, once you enable layer 7 +routing in UCP: + +![](../images/interlock-architecture-1.svg) + +An Interlock service starts running on a manager node, an Interlock-extension +service starts running on a worker node, and two replicas of the +Interlock-proxy service run on worker nodes. + +If you don't have any worker nodes in your cluster, then all Interlock +components run on manager nodes. + +## Deployment lifecycle + +By default layer 7 routing is disabled, so an administrator first needs to +enable this service from the UCP web UI. + +Once that happens: + +1. UCP creates the `ucp-interlock` overlay network. +2. UCP deploys the `ucp-interlock` service and attaches it both to the Docker +socket and the overlay network that was created. This allows the Interlock +service to use the Docker API. That's also the reason why this service needs to +run on a manger node. +3. The `ucp-interlock` service starts the `ucp-interlock-extension` service +and attaches it to the `ucp-interlock` network. This allows both services +to communicate. +4. The `ucp-interlock-extension` generates a configuration to be used by +the proxy service. By default the proxy service is NGINX, so this service +generates a standard NGINX configuration. +5. The `ucp-interlock` service takes the proxy configuration and uses it to +start the `ucp-interlock-proxy` service. + +At this point everything is ready for you to start using the layer 7 routing +service with your swarm workloads. + +## Routing lifecycle + +Once the layer 7 routing service is enabled, you apply specific labels to +your swarm services. The labels define the hostnames that are routed to the +service, the ports used, and other routing configurations. + +Once you deploy or update a swarm service with those labels: + +1. The `ucp-interlock` service is monitoring the Docker API for events and +publishes the events to the `ucp-interlock-extension` service. +2. 
That service in turn generates a new configuration for the proxy service, +based on the labels you've added to your services. +3. The `ucp-interlock` service takes the new configuration and reconfigures the +`ucp-interlock-proxy` to start using it. + +This all happens in milliseconds and with rolling updates. Even though +services are being reconfigured, users won't notice it. + diff --git a/datacenter/ucp/3.0/guides/user/interlock/deploy/configuration-reference.md b/datacenter/ucp/3.0/guides/user/interlock/deploy/configuration-reference.md new file mode 100644 index 0000000000..daf93c97c3 --- /dev/null +++ b/datacenter/ucp/3.0/guides/user/interlock/deploy/configuration-reference.md @@ -0,0 +1,146 @@ +--- +title: Layer 7 routing configuration reference +description: Learn the configuration options for the UCP layer 7 routing solution +keywords: routing, proxy +--- + +Once you enable the layer 7 routing service, UCP creates the +`com.docker.ucp.interlock.conf-1` configuration and uses it to configure all +the internal components of this service. + +The configuration is managed as a TOML file. + +## Example configuration + +Here's an example of the default configuration used by UCP: + +```toml +ListenAddr = ":8080" +DockerURL = "unix:///var/run/docker.sock" +AllowInsecure = false +PollInterval = "3s" + +[Extensions] + [Extensions.default] + Image = "docker/ucp-interlock-extension:3.0.1" + ServiceName = "ucp-interlock-extension" + Args = [] + Constraints = ["node.labels.com.docker.ucp.orchestrator.swarm==true", "node.platform.os==linux"] + ProxyImage = "docker/ucp-interlock-proxy:3.0.1" + ProxyServiceName = "ucp-interlock-proxy" + ProxyConfigPath = "/etc/nginx/nginx.conf" + ProxyReplicas = 2 + ProxyStopSignal = "SIGQUIT" + ProxyStopGracePeriod = "5s" + ProxyConstraints = ["node.labels.com.docker.ucp.orchestrator.swarm==true", "node.platform.os==linux"] + PublishMode = "ingress" + PublishedPort = 80 + TargetPort = 80 + PublishedSSLPort = 8443 + TargetSSLPort = 443 + [Extensions.default.Labels] + "com.docker.ucp.InstanceID" = "fewho8k85kyc6iqypvvdh3ntm" + [Extensions.default.ContainerLabels] + "com.docker.ucp.InstanceID" = "fewho8k85kyc6iqypvvdh3ntm" + [Extensions.default.ProxyLabels] + "com.docker.ucp.InstanceID" = "fewho8k85kyc6iqypvvdh3ntm" + [Extensions.default.ProxyContainerLabels] + "com.docker.ucp.InstanceID" = "fewho8k85kyc6iqypvvdh3ntm" + [Extensions.default.Config] + Version = "" + User = "nginx" + PidPath = "/var/run/proxy.pid" + MaxConnections = 1024 + ConnectTimeout = 600 + SendTimeout = 600 + ReadTimeout = 600 + IPHash = false + AdminUser = "" + AdminPass = "" + SSLOpts = "" + SSLDefaultDHParam = 1024 + SSLDefaultDHParamPath = "" + SSLVerify = "required" + WorkerProcesses = 1 + RLimitNoFile = 65535 + SSLCiphers = "HIGH:!aNULL:!MD5" + SSLProtocols = "TLSv1.2" + AccessLogPath = "/dev/stdout" + ErrorLogPath = "/dev/stdout" + MainLogFormat = "'$remote_addr - $remote_user [$time_local] \"$request\" '\n\t\t '$status $body_bytes_sent \"$http_referer\" '\n\t\t '\"$http_user_agent\" \"$http_x_forwarded_for\"';" + TraceLogFormat = "'$remote_addr - $remote_user [$time_local] \"$request\" $status '\n\t\t '$body_bytes_sent \"$http_referer\" \"$http_user_agent\" '\n\t\t '\"$http_x_forwarded_for\" $request_id $msec $request_time '\n\t\t '$upstream_connect_time $upstream_header_time $upstream_response_time';" + KeepaliveTimeout = "75s" + ClientMaxBodySize = "32m" + ClientBodyBufferSize = "8k" + ClientHeaderBufferSize = "1k" + LargeClientHeaderBuffers = "4 8k" + ClientBodyTimeout = "60s" + 
UnderscoresInHeaders = false +``` + +## Core configurations + +These are the configurations used for the `ucp-interlock` service. The following +options are available: + +| Option | Type | Description | +|:-------------------|:------------|:-----------------------------------------------------------------------------------------------| +| `ListenAddr` | string | Address to serve the Interlock GRPC API. Defaults to `8080`. | +| `DockerURL` | string | Path to the socket or TCP address to the Docker API. Defaults to `unix:///var/run/docker.sock` | +| `TLSCACert` | string | Path to the CA certificate for connecting securely to the Docker API. | +| `TLSCert` | string | Path to the certificate for connecting securely to the Docker API. | +| `TLSKey` | string | Path to the key for connecting securely to the Docker API. | +| `AllowInsecure` | bool | Skip TLS verification when connecting to the Docker API via TLS. | +| `PollInterval` | string | Interval to poll the Docker API for changes. Defaults to `3s`. | +| `EndpointOverride` | string | Override the default GRPC API endpoint for extensions. The default is detected via Swarm. | +| `Extensions` | []Extension | Array of extensions as listed below. | + +## Extension configuration + +Interlock must contain at least one extension to service traffic. +The following options are available to configure the extensions: + +| Option | Type | Description | +|:-------------------|:------------------|:------------------------------------------------------------------------------| +| `Image` | string | Name of the Docker image to use for the extension service. | +| `Args` | []string | Arguments to be passed to the Docker extension service upon creation. | +| `Labels` | map[string]string | Labels to add to the extension service. | +| `ServiceName` | string | Name of the extension service. | +| `ProxyImage` | string | Name of the Docker image to use for the proxy service. | +| `ProxyArgs` | []string | Arguments to be passed to the proxy service upon creation. | +| `ProxyLabels` | map[string]string | Labels to add to the proxy service. | +| `ProxyServiceName` | string | Name of the proxy service. | +| `ProxyConfigPath` | string | Path in the service for the generated proxy configuration. | +| `ServiceCluster` | string | Name of the cluster this extension services. | +| `PublishMode` | string | Publish mode for the proxy service. Supported values are `ingress` or `host`. | +| `PublishedPort` | int | Port where the proxy service serves non-TLS traffic. | +| `PublishedSSLPort` | int | Port where the proxy service serves TLS traffic. | +| `Template` | string | Docker configuration object that is used as the extension template. | +| `Config` | Config | Proxy configuration used by the extensions as listed below. | + +## Proxy configuration + +By default NGINX is used as a proxy, so the following NGINX options are +available for the proxy service: + +| Option | Type | Description | +|:------------------------|:-------|:-----------------------------------------------------------------------------------------------------| +| `User` | string | User to be used in the proxy. | +| `PidPath` | string | Path to the pid file for the proxy service. | +| `MaxConnections` | int | Maximum number of connections for proxy service. | +| `ConnectTimeout` | int | Timeout in seconds for clients to connect. | +| `SendTimeout` | int | Timeout in seconds for the service to send a request to the proxied upstream. 
| +| `ReadTimeout` | int | Timeout in seconds for the service to read a response from the proxied upstream. | +| `IPHash` | bool | Specifies that requests are distributed between servers based on client IP addresses. | +| `SSLOpts` | string | Options to be passed when configuring SSL. | +| `SSLDefaultDHParam` | int | Size of DH parameters. | +| `SSLDefaultDHParamPath` | string | Path to DH parameters file. | +| `SSLVerify` | string | SSL client verification. | +| `WorkerProcesses` | string | Number of worker processes for the proxy service. | +| `RLimitNoFile` | int | Number of maxiumum open files for the proxy service. | +| `SSLCiphers` | string | SSL ciphers to use for the proxy service. | +| `SSLProtocols` | string | Enable the specified TLS protocols. | +| `AccessLogPath` | string | Path to use for access logs (default: `/dev/stdout`). | +| `ErrorLogPath` | string | Path to use for error logs (default: `/dev/stdout`). | +| `MainLogFormat` | string | [Format](http://nginx.org/en/docs/http/ngx_http_log_module.html#log_format) to use for main logger. | +| `TraceLogFormat` | string | [Format](http://nginx.org/en/docs/http/ngx_http_log_module.html#log_format) to use for trace logger. | diff --git a/datacenter/ucp/3.0/guides/user/interlock/deploy/configure.md b/datacenter/ucp/3.0/guides/user/interlock/deploy/configure.md new file mode 100644 index 0000000000..b0f9ef6b39 --- /dev/null +++ b/datacenter/ucp/3.0/guides/user/interlock/deploy/configure.md @@ -0,0 +1,64 @@ +--- +title: Configure the layer 7 routing service +description: Learn how to configure the layer 7 routing solution for UCP, that allows + you to route traffic to swarm services. +keywords: routing, proxy +--- + +[When enabling the layer 7 routing solution](index.md) from the UCP web UI, +you can configure the ports for incoming traffic. If you want to further +customize the layer 7 routing solution, you can do it by updating the +`ucp-interlock` service with a new Docker configuration. + +Here's how it works: + +1. Find out what configuration is currently being used for the `ucp-interlock` +service and save it to a file: + + {% raw %} + ```bash + CURRENT_CONFIG_NAME=$(docker service inspect --format '{{ (index .Spec.TaskTemplate.ContainerSpec.Configs 0).ConfigName }}' ucp-interlock) + docker config inspect --format '{{ printf "%s" .Spec.Data }}' $CURRENT_CONFIG_NAME > config.toml + ``` + {% endraw %} + +2. Make the necessary changes to the `config.toml` file. + [Learn about the configuration options available](configuration-reference.md). + +3. Create a new Docker configuration object from the file you've edited: + + ```bash + NEW_CONFIG_NAME="com.docker.ucp.interlock.conf-$(( $(cut -d '-' -f 2 <<< "$CURRENT_CONFIG_NAME") + 1 ))" + docker config create $NEW_CONFIG_NAME config.toml + ``` + +3. Update the `ucp-interlock` service to start using the new configuration: + + ```bash + docker service update \ + --config-rm $CURRENT_CONFIG_NAME \ + --config-add source=$NEW_CONFIG_NAME,target=/config.toml \ + ucp-interlock + ``` + +By default the `ucp-interlock` service is configured to pause if you provide an +invalid configuration. The service won't restart without a manual intervention. 
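+
+If you're not sure whether your last configuration change was applied or the
+service was paused, you can inspect the state of the most recent service update.
+This is an example check; the `UpdateStatus` section only exists after the
+service has been updated at least once:
+
+{% raw %}
+```bash
+# Show the state of the latest ucp-interlock update (for example, completed or paused).
+docker service inspect --format '{{ .UpdateStatus.State }}' ucp-interlock
+```
+{% endraw %}
+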
+
+If you want the service to automatically roll back to a previous stable
+configuration, you can update it with:
+
+```bash
+docker service update \
+  --update-failure-action rollback \
+  ucp-interlock
+```
+
+Another thing to be aware of is that every time you enable the layer 7 routing
+solution from the UCP UI, the `ucp-interlock` service is started using the
+default configuration.
+
+If you've customized the configuration used by the `ucp-interlock` service,
+you'll have to update it again to use the Docker configuration object
+you've created.
+
diff --git a/datacenter/ucp/3.0/guides/user/interlock/deploy/host-mode-networking.md b/datacenter/ucp/3.0/guides/user/interlock/deploy/host-mode-networking.md
new file mode 100644
index 0000000000..ed7e922d20
--- /dev/null
+++ b/datacenter/ucp/3.0/guides/user/interlock/deploy/host-mode-networking.md
@@ -0,0 +1,100 @@
+---
+title: Host mode networking
+description: Learn how to configure the UCP layer 7 routing solution with
+  host mode networking.
+keywords: routing, proxy
+redirect_from:
+  - /ee/ucp/interlock/usage/host-mode-networking/
+---
+
+By default the layer 7 routing components communicate with one another using
+overlay networks. You can customize the components to use host mode networking
+instead.
+
+You can choose to:
+
+* Configure the `ucp-interlock` and `ucp-interlock-extension` services to
+communicate using host mode networking.
+* Configure the `ucp-interlock-proxy` and your swarm service to communicate
+using host mode networking.
+* Use host mode networking for all of the components.
+
+In this example we'll start with a production-grade deployment of the layer
+7 routing solution and update it so that it uses host mode networking instead
+of overlay networking.
+
+When using host mode networking you won't be able to use DNS service discovery,
+since that functionality requires overlay networking.
+For two services to communicate, each service needs to know the IP address of
+the node where the other service is running.
+
+## Production-grade deployment
+
+If you haven't already, configure the
+[layer 7 routing solution for production](production.md).
+
+Once you've done that, the `ucp-interlock-proxy` service replicas should be
+running on their own dedicated nodes.
+
+## Update the ucp-interlock config
+
+[Update the ucp-interlock service configuration](configure.md) so that it uses
+host mode networking.
+
+Update the `PublishMode` key to:
+
+```toml
+PublishMode = "host"
+```
+
+When updating the `ucp-interlock` service to use the new Docker configuration,
+make sure to update it so that it starts publishing its port on the host:
+
+```bash
+docker service update \
+  --config-rm $CURRENT_CONFIG_NAME \
+  --config-add source=$NEW_CONFIG_NAME,target=/config.toml \
+  --publish-add mode=host,target=8080 \
+  ucp-interlock
+```
+
+The `ucp-interlock` and `ucp-interlock-extension` services are now communicating
+using host mode networking.
+
+## Deploy your swarm services
+
+Now you can deploy your swarm services. In this example we'll deploy a demo
+service that also uses host mode networking.
+Set up your CLI client with a [UCP client bundle](../../user-access/cli.md),
+and deploy the service:
+
+```bash
+docker service create \
+  --name demo \
+  --detach=false \
+  --label com.docker.lb.hosts=app.example.org \
+  --label com.docker.lb.port=8080 \
+  --publish mode=host,target=8080 \
+  --env METADATA="demo" \
+  ehazlett/docker-demo
+```
+
+Docker allocates a high random port on the host where the service can be reached.
+To test that everything is working you can run: + +```bash +curl --header "Host: app.example.org" \ + http://:/ping +``` + +Where: + +* `` is the domain name or IP address of a node where the proxy +service is running. +* `` is the [port you're using to route HTTP traffic](index.md). + +If everything is working correctly, you should get a JSON result like: + +```json +{"instance":"63b855978452", "version":"0.1", "request_id":"d641430be9496937f2669ce6963b67d6"} +``` diff --git a/datacenter/ucp/3.0/guides/user/interlock/deploy/index.md b/datacenter/ucp/3.0/guides/user/interlock/deploy/index.md new file mode 100644 index 0000000000..6cda7383c7 --- /dev/null +++ b/datacenter/ucp/3.0/guides/user/interlock/deploy/index.md @@ -0,0 +1,18 @@ +--- +title: Enable layer 7 routing +description: Learn how to enable the layer 7 routing solution for UCP, that allows + you to route traffic to swarm services. +keywords: routing, proxy +--- + +To enable support for layer 7 routing, also known as HTTP routing mesh, +log in to the UCP web UI as an administrator, navigate to the **Admin Settings** +page, and click the **Routing Mesh** option. Check the **Enable routing mesh** option. + +![http routing mesh](../../images/interlock-install-3.png){: .with-border} + +By default, the routing mesh service listens on port 80 for HTTP and port +8443 for HTTPS. Change the ports if you already have services that are using +them. + +Once you save, the layer 7 routing service can be used by your swarm services. diff --git a/datacenter/ucp/3.0/guides/user/interlock/deploy/production.md b/datacenter/ucp/3.0/guides/user/interlock/deploy/production.md new file mode 100644 index 0000000000..fb17de7a92 --- /dev/null +++ b/datacenter/ucp/3.0/guides/user/interlock/deploy/production.md @@ -0,0 +1,89 @@ +--- +title: Configure layer 7 routing for production +description: Learn how to configure the layer 7 routing solution for a production + environment. +keywords: routing, proxy +--- + +The layer 7 solution that ships out of the box with UCP is highly available +and fault tolerant. It is also designed to work independently of how many +nodes you're managing with UCP. + +![production deployment](../../images/interlock-deploy-production-1.svg) + +For a production-grade deployment, you should tune the default deployment to +have two nodes dedicated for running the two replicas of the +`ucp-interlock-proxy` service. This ensures: + +* The proxy services have dedicated resources to handle user requests. You +can configure these nodes with higher performance network interfaces. +* No application traffic can be routed to a manager node. This makes your +deployment secure. +* The proxy service is running on two nodes. If one node fails, layer 7 routing +continues working. + +To achieve this you need to: + +1. Enable layer 7 routing. [Learn how](index.md). +2. Pick two nodes that are going to be dedicated to run the proxy service. +3. Apply labels to those nodes, so that you can constrain the proxy service to +only run on nodes with those labels. +4. Update the `ucp-interlock` service to deploy proxies using that constraint. +5. Configure your load balancer to route traffic to the dedicated nodes only. + +## Apply labels to nodes + +In this example, we chose node-5 and node-6 to be dedicated just for running +the proxy service. 
To apply labels to those nodes, run:
+
+```bash
+docker node update --label-add nodetype=loadbalancer <node-id>
+```
+
+To make sure the label was successfully applied, run:
+
+{% raw %}
+```bash
+docker node inspect --format '{{ index .Spec.Labels "nodetype" }}' <node-id>
+```
+{% endraw %}
+
+The command should print "loadbalancer".
+
+## Configure the ucp-interlock service
+
+Now that your nodes are labelled, you need to update the `ucp-interlock`
+service configuration to deploy the proxy service with the correct constraints.
+
+Add another constraint to the `ProxyConstraints` array:
+
+```toml
+[Extensions]
+  [Extensions.default]
+    ProxyConstraints = ["node.labels.com.docker.ucp.orchestrator.swarm==true", "node.platform.os==linux", "node.labels.nodetype==loadbalancer"]
+```
+
+[Learn how to configure ucp-interlock](configure.md).
+
+> Known issue
+>
+> In UCP 3.0.0 the `ucp-interlock` service won't redeploy the proxy replicas
+> when you update the configuration. As a workaround,
+> [deploy a demo service](../usage/index.md). Once you do that, the proxy
+> services are redeployed and scheduled on the correct nodes.
{: .important}
+
+Once you reconfigure the `ucp-interlock` service, you can check if the proxy
+service is running on the dedicated nodes:
+
+```bash
+docker service ps ucp-interlock-proxy
+```
+
+## Configure your load balancer
+
+Once the proxy service is running on dedicated nodes, configure your upstream
+load balancer with the domain names or IP addresses of those nodes.
+
+This makes sure all traffic is directed to these nodes.
+
diff --git a/datacenter/ucp/3.0/guides/user/interlock/index.md b/datacenter/ucp/3.0/guides/user/interlock/index.md
new file mode 100644
index 0000000000..cd63d61bfe
--- /dev/null
+++ b/datacenter/ucp/3.0/guides/user/interlock/index.md
@@ -0,0 +1,52 @@
+---
+title: Layer 7 routing overview
+description: Learn how to route layer 7 traffic to your swarm services
+keywords: routing, proxy
+---
+
+Docker Engine running in swarm mode has a routing mesh, which makes it easy
+to expose your services to the outside world. Since all nodes participate
+in the routing mesh, users can access your service by contacting any node.
+
+![swarm routing mesh](../images/interlock-overview-1.svg)
+
+In this example the WordPress service is listening on port 8000 of the routing
+mesh. Even though the service is running on a single node, users can access
+WordPress using the domain name or IP of any of the nodes that are part of
+the swarm.
+
+UCP extends this one step further with layer 7 routing (also known as
+application layer routing), allowing users to access Docker services using
+domain names instead of IP addresses.
+
+This functionality is made available through the Interlock component.
+
+![layer 7 routing](../images/interlock-overview-2.svg)
+
+In this example, users can access the WordPress service using
+`http://wordpress.example.org`. Interlock takes care of routing traffic to
+the right place.
+
+Interlock is specific to the Swarm orchestrator. If you're trying to route
+traffic to your Kubernetes applications, check
+[layer 7 routing with Kubernetes](../kubernetes/layer-7-routing.md).
+
+## Features and benefits
+
+Layer 7 routing in UCP supports:
+
+* **High availability**: All the components used for layer 7 routing leverage
+Docker swarm for high availability, and handle failures gracefully.
+* **Automatic configuration**: UCP monitors your services and automatically
+reconfigures the proxy services so that everything is handled for you.
+
+* **Scalability**: You can customize and tune the proxy services that handle
+user-facing requests to meet whatever demand your services have.
+* **TLS**: You can leverage Docker secrets to securely manage TLS certificates
+and keys for your services. Both TLS termination and TCP passthrough are supported.
+* **Context-based routing**: You can define where to route the request based on
+context or path.
+* **Host mode networking**: By default layer 7 routing leverages the Docker Swarm
+routing mesh, but you don't have to. You can use host mode networking for maximum
+performance.
+* **Security**: The layer 7 routing components that are exposed to the outside
+world run on worker nodes. Even if they get compromised, your cluster won't.
diff --git a/datacenter/ucp/3.0/guides/user/interlock/upgrade.md b/datacenter/ucp/3.0/guides/user/interlock/upgrade.md
new file mode 100644
index 0000000000..426b8e499b
--- /dev/null
+++ b/datacenter/ucp/3.0/guides/user/interlock/upgrade.md
@@ -0,0 +1,129 @@
+---
+title: Layer 7 routing upgrade
+description: Learn how to route layer 7 traffic to your swarm services
+keywords: routing, proxy, hrm
+---
+
+The [HTTP routing mesh](/datacenter/ucp/2.2/guides/admin/configure/use-domain-names-to-access-services.md)
+functionality was redesigned in UCP 3.0 for greater security and flexibility.
+The functionality was also renamed to "layer 7 routing", to make it easier for
+new users to get started.
+
+[Learn about the new layer 7 routing functionality](index.md).
+
+To route traffic to your service you apply specific labels to your swarm
+services, describing the hostname for the service and other configurations.
+Things work in the same way as they did with the HTTP routing mesh, with the
+only difference being that you use different labels.
+
+You don't have to manually update your services. During the upgrade process to
+3.0, UCP updates the services to start using new labels.
+
+This article describes the upgrade process for the routing component, so that
+you can troubleshoot UCP and your services, in case something goes wrong with
+the upgrade.
+
+## UCP upgrade process
+
+If you are using the HTTP routing mesh, and start an upgrade to UCP 3.0:
+
+1. UCP starts a reconciliation process to ensure all internal components are
+deployed. As part of this, services using HRM labels are inspected.
+2. UCP creates the `com.docker.ucp.interlock.conf-1` configuration based on the
+HRM configuration.
+3. The HRM service is removed.
+4. The `ucp-interlock` service is deployed with the configuration created.
+5. The `ucp-interlock` service deploys the `ucp-interlock-extension` and
+`ucp-interlock-proxy` services.
+
+The only way to roll back from an upgrade is by restoring from a backup taken
+before the upgrade. If something goes wrong during the upgrade process, you
+need to troubleshoot the interlock services and your services, since the HRM
+service won't be running after the upgrade.
+
+[Learn more about the interlock services and architecture](architecture.md).
+
+## Check that routing works
+
+After upgrading to UCP 3.0, you should check if all swarm services are still
+routable.
+
+For services using HTTP:
+
+```bash
+curl -vs http://<ucp-node>:<http-port>/ -H "Host: <hostname>"
+```
+
+For services using HTTPS:
+
+```bash
+curl -vs https://<hostname>:<https-port>
+```
+
+After the upgrade, check that you can still use the same hostnames to access
+the swarm services.
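+
+If you don't remember which hostnames were configured for a service, you can
+read them back from the service labels. This example assumes the service was
+already migrated to the new `com.docker.lb.hosts` label during the upgrade:
+
+{% raw %}
+```bash
+# Print the hostnames that layer 7 routing uses for a given service.
+docker service inspect --format '{{ index .Spec.Labels "com.docker.lb.hosts" }}' <service-name>
+```
+{% endraw %}
+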
+ +## The ucp-interlock services are not running + +After the upgrade to UCP 3.0, the following services should be running: + +* `ucp-interlock`: monitors swarm workloads configured to use layer 7 routing. +* `ucp-interlock-extension`: Helper service that generates the configuration for +the `ucp-interlock-proxy` service. +* `ucp-interlock-proxy`: A service that provides load balancing and proxying for +swarm workloads. + +To check if these services are running, use a client bundle with administrator +permissions and run: + +```bash +docker ps --filter "name=ucp-interlock" +``` + +* If the `ucp-interlock` service doesn't exist or is not running, something went +wrong with the reconciliation step. +* If this still doesn't work, it's possible that UCP is having problems creating +the `com.docker.ucp.interlock.conf-1`, due to name conflicts. Make sure you +don't have any configuration with the same name by running: + ``` + docker config ls --filter "name=com.docker.ucp.interlock" + ``` +* If either the `ucp-interlock-extension` or `ucp-interlock-proxy` services are +not running, it's possible that there are port conflicts. +As a workaround re-enable the layer 7 routing configuration from the +[UCP settings page](deploy/index.md). Make sure the ports you choose are not +being used by other services. + +## Workarounds and clean-up + +If you have any of the problems above, disable and enable the layer 7 routing +setting on the [UCP settings page](deploy/index.md). This redeploys the +services with their default configuration. + +When doing that make sure you specify the same ports you were using for HRM, +and that no other services are listening on those ports. + +You should also check if the `ucp-hrm` service is running. If it is, you should +stop it since it can conflict with the `ucp-interlock-proxy` service. + +## Optionally remove labels + +As part of the upgrade process UCP adds the +[labels specific to the new layer 7 routing solution](usage/labels-reference.md). + +You can update your services to remove the old HRM labels, since they won't be +used anymore. + +## Optionally segregate control traffic + +Interlock is designed so that all the control traffic is kept separate from +the application traffic. + +If before upgrading you had all your applications attached to the `ucp-hrm` +network, after upgrading you can update your services to start using a +dedicated network for routing that's not shared with other services. +[Learn how to use a dedicated network](usage/index.md). + +If before upgrading you had a dedicate network to route traffic to each service, +Interlock will continue using those dedicated networks. However the +`ucp-interlock` will be attached to each of those networks. You can update +the `ucp-interlock` service so that it is only connected to the `ucp-hrm` network. diff --git a/datacenter/ucp/3.0/guides/user/interlock/usage/canary.md b/datacenter/ucp/3.0/guides/user/interlock/usage/canary.md new file mode 100644 index 0000000000..138dc5816b --- /dev/null +++ b/datacenter/ucp/3.0/guides/user/interlock/usage/canary.md @@ -0,0 +1,107 @@ +--- +title: Canary application instances +description: Learn how to do canary deployments for your Docker swarm services +keywords: routing, proxy +--- + +In this example we will publish a service and deploy an updated service as canary instances. 
+ +First we will create an overlay network so that service traffic is isolated and secure: + +```bash +$> docker network create -d overlay demo +1se1glh749q1i4pw0kf26mfx5 +``` + +Next we will create the initial service: + +```bash +$> docker service create \ + --name demo-v1 \ + --network demo \ + --detach=false \ + --replicas=4 \ + --label com.docker.lb.hosts=demo.local \ + --label com.docker.lb.port=8080 \ + --env METADATA="demo-version-1" \ + ehazlett/docker-demo +``` + +Interlock will detect once the service is available and publish it. Once the tasks are running +and the proxy service has been updated the application should be available via `http://demo.local`: + +```bash +$> curl -vs -H "Host: demo.local" http://127.0.0.1/ping +* Trying 127.0.0.1... +* TCP_NODELAY set +* Connected to demo.local (127.0.0.1) port 80 (#0) +> GET /ping HTTP/1.1 +> Host: demo.local +> User-Agent: curl/7.54.0 +> Accept: */* +> +< HTTP/1.1 200 OK +< Server: nginx/1.13.6 +< Date: Wed, 08 Nov 2017 20:28:26 GMT +< Content-Type: text/plain; charset=utf-8 +< Content-Length: 120 +< Connection: keep-alive +< Set-Cookie: session=1510172906715624280; Path=/; Expires=Thu, 09 Nov 2017 20:28:26 GMT; Max-Age=86400 +< x-request-id: f884cf37e8331612b8e7630ad0ee4e0d +< x-proxy-id: 5ad7c31f9f00 +< x-server-info: interlock/2.0.0-development (147ff2b1) linux/amd64 +< x-upstream-addr: 10.0.2.4:8080 +< x-upstream-response-time: 1510172906.714 +< +{"instance":"df20f55fc943","version":"0.1","metadata":"demo-version-1","request_id":"f884cf37e8331612b8e7630ad0ee4e0d"} +``` + +Notice the `metadata` with `demo-version-1`. + +Now we will deploy a "new" version: + +```bash +$> docker service create \ + --name demo-v2 \ + --network demo \ + --detach=false \ + --label com.docker.lb.hosts=demo.local \ + --label com.docker.lb.port=8080 \ + --env METADATA="demo-version-2" \ + --env VERSION="0.2" \ + ehazlett/docker-demo +``` + +Since this has a replica of one (1) and the initial version has four (4) replicas 20% of application traffic +will be sent to `demo-version-2`: + +```bash +$> curl -vs -H "Host: demo.local" http://127.0.0.1/ping +{"instance":"23d9a5ec47ef","version":"0.1","metadata":"demo-version-1","request_id":"060c609a3ab4b7d9462233488826791c"} +$> curl -vs -H "Host: demo.local" http://127.0.0.1/ping +{"instance":"f42f7f0a30f9","version":"0.1","metadata":"demo-version-1","request_id":"c848e978e10d4785ac8584347952b963"} +$> curl -vs -H "Host: demo.local" http://127.0.0.1/ping +{"instance":"c2a686ae5694","version":"0.1","metadata":"demo-version-1","request_id":"724c21d0fb9d7e265821b3c95ed08b61"} +$> curl -vs -H "Host: demo.local" http://127.0.0.1/ping +{"instance":"1b0d55ed3d2f","version":"0.2","metadata":"demo-version-2","request_id":"b86ff1476842e801bf20a1b5f96cf94e"} +$> curl -vs -H "Host: demo.local" http://127.0.0.1/ping +{"instance":"c2a686ae5694","version":"0.1","metadata":"demo-version-1","request_id":"724c21d0fb9d7e265821b3c95ed08b61"} +``` + +To increase traffic to the new version add more replicas with `docker scale`: + +```bash +$> docker service scale demo-v2=4 +demo-v2 +``` + +To complete the upgrade, scale the `demo-v1` service to zero (0): + +```bash +$> docker service scale demo-v1=0 +demo-v1 +``` + +This will route all application traffic to the new version. If you need to rollback, simply scale the v1 service +back up and v2 down. 
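+
+For example, a rollback could look like this (adjust the replica counts to match
+your original deployment):
+
+```bash
+# Scale the old version back up and the new version down.
+$> docker service scale demo-v1=4 demo-v2=0
+demo-v1
+demo-v2
+```
+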
+
diff --git a/datacenter/ucp/3.0/guides/user/interlock/usage/context.md b/datacenter/ucp/3.0/guides/user/interlock/usage/context.md
new file mode 100644
index 0000000000..a8f4daa5ec
--- /dev/null
+++ b/datacenter/ucp/3.0/guides/user/interlock/usage/context.md
@@ -0,0 +1,65 @@
+---
+title: Context/path based routing
+description: Learn how to route traffic to your Docker swarm services based
+  on a URL path
+keywords: routing, proxy
+---
+
+In this example we will publish a service using context or path based routing.
+
+First we will create an overlay network so that service traffic is isolated and secure:
+
+```bash
+$> docker network create -d overlay demo
+1se1glh749q1i4pw0kf26mfx5
+```
+
+Next we will create the initial service:
+
+```bash
+$> docker service create \
+    --name demo \
+    --network demo \
+    --detach=false \
+    --label com.docker.lb.hosts=demo.local \
+    --label com.docker.lb.port=8080 \
+    --label com.docker.lb.context_root=/app \
+    --label com.docker.lb.context_root_rewrite=true \
+    --env METADATA="demo-context-root" \
+    ehazlett/docker-demo
+```
+
+> Only one path per host
+>
+> Interlock supports only one path per host per service cluster. Once a
+> particular `com.docker.lb.hosts` label has been applied, it cannot be applied
+> again in the same service cluster.
{: .important}
+
+Interlock will detect once the service is available and publish it. Once the tasks are running
+and the proxy service has been updated the application should be available via `http://demo.local`:
+
+```bash
+$> curl -vs -H "Host: demo.local" http://127.0.0.1/app/
+* Trying 127.0.0.1...
+* TCP_NODELAY set
+* Connected to 127.0.0.1 (127.0.0.1) port 80 (#0)
+> GET /app/ HTTP/1.1
+> Host: demo.local
+> User-Agent: curl/7.54.0
+> Accept: */*
+>
+< HTTP/1.1 200 OK
+< Server: nginx/1.13.6
+< Date: Fri, 17 Nov 2017 14:25:17 GMT
+< Content-Type: text/html; charset=utf-8
+< Transfer-Encoding: chunked
+< Connection: keep-alive
+< x-request-id: 077d18b67831519defca158e6f009f82
+< x-proxy-id: 77c0c37d2c46
+< x-server-info: interlock/2.0.0-dev (732c77e7) linux/amd64
+< x-upstream-addr: 10.0.1.3:8080
+< x-upstream-response-time: 1510928717.306
+...
+```
+
diff --git a/datacenter/ucp/3.0/guides/user/interlock/usage/default-service.md b/datacenter/ucp/3.0/guides/user/interlock/usage/default-service.md
new file mode 100644
index 0000000000..0602d8c1c9
--- /dev/null
+++ b/datacenter/ucp/3.0/guides/user/interlock/usage/default-service.md
@@ -0,0 +1,50 @@
+---
+title: Set a default service
+description: Learn about Interlock, an application routing and load balancing system
+  for Docker Swarm.
+keywords: ucp, interlock, load balancing
+---
+
+The default proxy service used by UCP to provide layer 7 routing is NGINX,
+so when users try to access a route that hasn't been configured, they will
+see the default NGINX 404 page.
+
+![Default NGINX page](../../images/interlock-default-service-1.png){: .with-border}
+
+You can customize this by labelling a service with
+`com.docker.lb.default_backend=true`. When users try to access a route that's
+not configured, they are redirected to this service.
+ +As an example, create a `docker-compose.yml` file with: + +```yaml +version: "3.2" + +services: + demo: + image: ehazlett/interlock-default-app + deploy: + replicas: 1 + labels: + com.docker.lb.default_backend: "true" + com.docker.lb.port: 80 + networks: + - demo-network + +networks: + demo-network: + driver: overlay +``` + +Set up your CLI client with a [UCP client bundle](../../user-access/cli.md), +and deploy the service: + +```bash +docker stack deploy --compose-file docker-compose.yml demo +``` + +Once users try to access a route that's not configured, they are directed +to this demo service. + +![Custom default page](../../images/interlock-default-service-2.png){: .with-border} + diff --git a/datacenter/ucp/3.0/guides/user/interlock/usage/index.md b/datacenter/ucp/3.0/guides/user/interlock/usage/index.md new file mode 100644 index 0000000000..4895b67160 --- /dev/null +++ b/datacenter/ucp/3.0/guides/user/interlock/usage/index.md @@ -0,0 +1,95 @@ +--- +title: Route traffic to a simple swarm service +description: Learn how to do canary deployments for your Docker swarm services +keywords: routing, proxy +--- + +Once the [layer 7 routing solution is enabled](../deploy/index.md), you can +start using it in your swarm services. + +In this example we'll deploy a simple service which: + +* Has a JSON endpoint that returns the ID of the task serving the request. +* Has a web UI that shows how many tasks the service is running. +* Can be reached at `http://app.example.org`. + +## Deploy the service + +Create a `docker-compose.yml` file with: + +```yaml +version: "3.2" + +services: + demo: + image: ehazlett/docker-demo + deploy: + replicas: 1 + labels: + com.docker.lb.hosts: app.example.org + com.docker.lb.network: demo-network + com.docker.lb.port: 8080 + networks: + - demo-network + +networks: + demo-network: + driver: overlay +``` + +Note that: + +* The `com.docker.lb.hosts` label defines the hostname for the service. When +the layer 7 routing solution gets a request containing `app.example.org` in +the host header, that request is forwarded to the demo service. +* The `com.docker.lb.network` defines which network the `ucp-interlock-proxy` +should attach to in order to be able to communicate with the demo service. +To use layer 7 routing, your services need to be attached to at least one network. +If your service is only attached to a single network, you don't need to add +a label to specify which network to use for routing. +* The `com.docker.lb.port` label specifies which port the `ucp-interlock-proxy` +service should use to communicate with this demo service. +* Your service doesn't need to expose a port in the swarm routing mesh. All +communications are done using the network you've specified. + +Set up your CLI client with a [UCP client bundle](../../user-access/cli.md), +and deploy the service: + +```bash +docker stack deploy --compose-file docker-compose.yml demo +``` + +The `ucp-interlock` service detects that your service is using these labels +and automatically reconfigures the `ucp-interlock-proxy` service. + +## Test using the CLI + +To test that requests are routed to the demo service, run: + +```bash +curl --header "Host: app.example.org" \ + http://:/ping +``` + +Where: + +* `` is the domain name or IP address of a UCP node. +* `` is the [port you're using to route HTTP traffic](../deploy/index.md). 
+ +If everything is working correctly, you should get a JSON result like: + +```json +{"instance":"63b855978452", "version":"0.1", "request_id":"d641430be9496937f2669ce6963b67d6"} +``` + +## Test using a browser + +Since the demo service exposes an HTTP endpoint, you can also use your browser +to validate that everything is working. + +Make sure the `/etc/hosts` file in your system has an entry mapping +`app.example.org` to the IP address of a UCP node. Once you do that, you'll be +able to start using the service from your browser. + +![browser](../../images/route-simple-app-1.png){: .with-border } + diff --git a/datacenter/ucp/3.0/guides/user/interlock/usage/interlock_service_clusters.png b/datacenter/ucp/3.0/guides/user/interlock/usage/interlock_service_clusters.png new file mode 100644 index 0000000000..84ad5f1898 Binary files /dev/null and b/datacenter/ucp/3.0/guides/user/interlock/usage/interlock_service_clusters.png differ diff --git a/datacenter/ucp/3.0/guides/user/interlock/usage/labels-reference.md b/datacenter/ucp/3.0/guides/user/interlock/usage/labels-reference.md new file mode 100644 index 0000000000..263c055286 --- /dev/null +++ b/datacenter/ucp/3.0/guides/user/interlock/usage/labels-reference.md @@ -0,0 +1,31 @@ +--- +title: Layer 7 routing labels reference +description: Learn about the labels you can use in your swarm services to route + layer 7 traffic to them. +keywords: routing, proxy +--- + +Once the layer 7 routing solution is enabled, you can +[start using it in your swarm services](index.md). + +The following labels are available for you to use in swarm services: + + +| Label | Description | Example | +|:---------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------| +| `com.docker.lb.hosts` | Comma separated list of the hosts that the service should serve. | `example.com,test.com` | +| `com.docker.lb.port` | Port to use for internal upstream communication. | `8080` | +| `com.docker.lb.network` | Name of network the proxy service should attach to for upstream connectivity. | `app-network-a` | +| `com.docker.lb.context_root` | Context or path to use for the application. | `/app` | +| `com.docker.lb.context_root_rewrite` | Boolean to enable rewrite for the context root. | `true` | +| `com.docker.lb.ssl_only` | Boolean to force SSL for application. | `true` | +| `com.docker.lb.ssl_cert` | Docker secret to use for the SSL certificate. | `example.com.cert` | +| `com.docker.lb.ssl_key` | Docker secret to use for the SSL key. | `example.com.key` | +| `com.docker.lb.websocket_endpoints` | Comma separated list of endpoints to configure to be upgraded for websockets. | `/ws,/foo` | +| `com.docker.lb.service_cluster` | Name of the service cluster to use for the application. | `us-east` | +| `com.docker.lb.ssl_backend` | Enable SSL communication to the upstreams. | `true` | +| `com.docker.lb.ssl_backend_tls_verify` | Verification mode for the upstream TLS. | `none` | +| `com.docker.lb.sticky_session_cookie` | Cookie to use for sticky sessions. | `none` | +| `com.docker.lb.redirects` | Semi-colon separated list of redirects to add in the format of `,`. Example: `http://old.example.com,http://new.example.com;` | `none` | +| `com.docker.lb.ssl_passthrough` | Enable SSL passthrough. 
| `false` |
+
diff --git a/datacenter/ucp/3.0/guides/user/interlock/usage/redirects.md b/datacenter/ucp/3.0/guides/user/interlock/usage/redirects.md
new file mode 100644
index 0000000000..0f060b7a3c
--- /dev/null
+++ b/datacenter/ucp/3.0/guides/user/interlock/usage/redirects.md
@@ -0,0 +1,69 @@
+---
+title: Application redirects
+description: Learn how to implement redirects using swarm services and the
+  layer 7 routing solution for UCP.
+keywords: routing, proxy, redirects
+---
+
+Once the [layer 7 routing solution is enabled](../deploy/index.md), you can
+start using it in your swarm services. In this example we'll deploy a simple
+service that can be reached at `app.example.org`. We'll also redirect
+requests to `old.example.org` to that service.
+
+To do that, create a docker-compose.yml file with:
+
+```yaml
+version: "3.2"
+
+services:
+  demo:
+    image: ehazlett/docker-demo
+    deploy:
+      replicas: 1
+      labels:
+        com.docker.lb.hosts: app.example.org,old.example.org
+        com.docker.lb.network: demo-network
+        com.docker.lb.port: 8080
+        com.docker.lb.redirects: http://old.example.org,http://app.example.org
+    networks:
+      - demo-network
+
+networks:
+  demo-network:
+    driver: overlay
+```
+
+Note that the demo service has labels to signal that traffic for both
+`app.example.org` and `old.example.org` should be routed to this service.
+There's also a label indicating that all traffic directed to `old.example.org`
+should be redirected to `app.example.org`.
+
+Set up your CLI client with a [UCP client bundle](../../user-access/cli.md),
+and deploy the service:
+
+```bash
+docker stack deploy --compose-file docker-compose.yml demo
+```
+
+You can also use the CLI to test if the redirect is working, by running:
+
+```bash
+curl --head --header "Host: old.example.org" http://<ucp-node>:<http-port>
+```
+
+You should see something like:
+
+```none
+HTTP/1.1 302 Moved Temporarily
+Server: nginx/1.13.8
+Date: Thu, 29 Mar 2018 23:16:46 GMT
+Content-Type: text/html
+Content-Length: 161
+Connection: keep-alive
+Location: http://app.example.org/
+```
+
+You can also test that the redirect works from your browser. For that, you
+need to make sure you add entries for both `app.example.org` and
+`old.example.org` to your `/etc/hosts` file, mapping them to the IP address
+of a UCP node.
diff --git a/datacenter/ucp/3.0/guides/user/interlock/usage/service-clusters.md b/datacenter/ucp/3.0/guides/user/interlock/usage/service-clusters.md
new file mode 100644
index 0000000000..b5baf30a55
--- /dev/null
+++ b/datacenter/ucp/3.0/guides/user/interlock/usage/service-clusters.md
@@ -0,0 +1,200 @@
+---
+title: Service clusters
+description: Learn about Interlock, an application routing and load balancing system
+  for Docker Swarm.
+keywords: ucp, interlock, load balancing
+---
+
+In this example we will configure an eight (8) node Swarm cluster that uses service clusters
+to route traffic to different proxies. There are three (3) managers
+and five (5) workers. Two of the workers are configured with node labels to be dedicated
+ingress cluster load balancer nodes. These will receive all application traffic.
+
+This example will not cover the actual deployment of infrastructure.
+It assumes you have a vanilla Swarm cluster (`docker swarm init` and `docker swarm join` from the nodes).
+See the [Swarm](https://docs.docker.com/engine/swarm/) documentation if you need help
+getting a Swarm cluster deployed.
+ +![Interlock Service Clusters](interlock_service_clusters.png) + +We will configure the load balancer worker nodes (`lb-00` and `lb-01`) with node labels in order to pin the Interlock Proxy +service. Once you are logged into one of the Swarm managers run the following to add node labels +to the dedicated ingress workers: + +```bash +$> docker node update --label-add nodetype=loadbalancer --label-add region=us-east lb-00 +lb-00 +$> docker node update --label-add nodetype=loadbalancer --label-add region=us-west lb-01 +lb-01 +``` + +You can inspect each node to ensure the labels were successfully added: + +```bash +{% raw %} +$> docker node inspect -f '{{ .Spec.Labels }}' lb-00 +map[nodetype:loadbalancer region:us-east] +$> docker node inspect -f '{{ .Spec.Labels }}' lb-01 +map[nodetype:loadbalancer region:us-west] +{% endraw %} +``` + +Next, we will create a configuration object for Interlock that contains multiple extensions with varying service clusters: + +```bash +$> cat << EOF | docker config create service.interlock.conf - +ListenAddr = ":8080" +DockerURL = "unix:///var/run/docker.sock" +PollInterval = "3s" + +[Extensions] + [Extensions.us-east] + Image = "interlockpreview/interlock-extension-nginx:2.0.0-preview" + Args = ["-D"] + ServiceName = "interlock-ext-us-east" + ProxyImage = "nginx:alpine" + ProxyArgs = [] + ProxyServiceName = "interlock-proxy-us-east" + ProxyConfigPath = "/etc/nginx/nginx.conf" + ServiceCluster = "us-east" + PublishMode = "host" + PublishedPort = 80 + TargetPort = 80 + PublishedSSLPort = 443 + TargetSSLPort = 443 + [Extensions.us-east.Config] + User = "nginx" + PidPath = "/var/run/proxy.pid" + WorkerProcesses = 1 + RlimitNoFile = 65535 + MaxConnections = 2048 + [Extensions.us-east.Labels] + ext_region = "us-east" + [Extensions.us-east.ProxyLabels] + proxy_region = "us-east" + + [Extensions.us-west] + Image = "interlockpreview/interlock-extension-nginx:2.0.0-preview" + Args = ["-D"] + ServiceName = "interlock-ext-us-west" + ProxyImage = "nginx:alpine" + ProxyArgs = [] + ProxyServiceName = "interlock-proxy-us-west" + ProxyConfigPath = "/etc/nginx/nginx.conf" + ServiceCluster = "us-west" + PublishMode = "host" + PublishedPort = 80 + TargetPort = 80 + PublishedSSLPort = 443 + TargetSSLPort = 443 + [Extensions.us-west.Config] + User = "nginx" + PidPath = "/var/run/proxy.pid" + WorkerProcesses = 1 + RlimitNoFile = 65535 + MaxConnections = 2048 + [Extensions.us-west.Labels] + ext_region = "us-west" + [Extensions.us-west.ProxyLabels] + proxy_region = "us-west" +EOF +oqkvv1asncf6p2axhx41vylgt +``` +Note that we are using "host" mode networking in order to use the same ports (`80` and `443`) in the cluster. We cannot use ingress +networking as it reserves the port across all nodes. If you want to use ingress networking you will have to use different ports +for each service cluster. + +Next we will create a dedicated network for Interlock and the extensions: + +```bash +$> docker network create -d overlay interlock +``` + +Now we can create the Interlock service: + +```bash +$> docker service create \ + --name interlock \ + --mount src=/var/run/docker.sock,dst=/var/run/docker.sock,type=bind \ + --network interlock \ + --constraint node.role==manager \ + --config src=service.interlock.conf,target=/config.toml \ + interlockpreview/interlock:2.0.0-preview -D run -c /config.toml +sjpgq7h621exno6svdnsvpv9z +``` + +## Configure Proxy Services +Once we have the node labels we can re-configure the Interlock Proxy services to be constrained to the +workers for each region. 
Again, from a manager run the following to pin the proxy services to the ingress workers: + +```bash +$> docker service update \ + --constraint-add node.labels.nodetype==loadbalancer \ + --constraint-add node.labels.region==us-east \ + interlock-proxy-us-east +$> docker service update \ + --constraint-add node.labels.nodetype==loadbalancer \ + --constraint-add node.labels.region==us-west \ + interlock-proxy-us-west +``` + +We are now ready to deploy applications. First we will create individual networks for each application: + +```bash +$> docker network create -d overlay demo-east +$> docker network create -d overlay demo-west +``` + +Next we will deploy the application in the `us-east` service cluster: + +```bash +$> docker service create \ + --name demo-east \ + --network demo-east \ + --detach=true \ + --label com.docker.lb.hosts=demo-east.local \ + --label com.docker.lb.port=8080 \ + --label com.docker.lb.service_cluster=us-east \ + --env METADATA="us-east" \ + ehazlett/docker-demo +``` + +Now we deploy the application in the `us-west` service cluster: + +```bash +$> docker service create \ + --name demo-west \ + --network demo-west \ + --detach=true \ + --label com.docker.lb.hosts=demo-west.local \ + --label com.docker.lb.port=8080 \ + --label com.docker.lb.service_cluster=us-west \ + --env METADATA="us-west" \ + ehazlett/docker-demo +``` + +Only the service cluster that is designated will be configured for the applications. For example, the `us-east` service cluster +will not be configured to serve traffic for the `us-west` service cluster and vice versa. We can see this in action when we +send requests to each service cluster. + +When we send a request to the `us-east` service cluster it only knows about the `us-east` application (be sure to ssh to the `lb-00` node): + +```bash +{% raw %} +$> curl -H "Host: demo-east.local" http://$(docker node inspect -f '{{ .Status.Addr }}' lb-00)/ping +{"instance":"1b2d71619592","version":"0.1","metadata":"us-east","request_id":"3d57404cf90112eee861f9d7955d044b"} +$> curl -H "Host: demo-west.local" http://$(docker node inspect -f '{{ .Status.Addr }}' lb-00)/ping + +404 Not Found + +

+<body bgcolor="white">
+<center><h1>404 Not Found</h1></center>
+<hr><center>nginx/1.13.6</center>
+</body>
+</html>
+ + +{% endraw %} +``` + +Application traffic is isolated to each service cluster. Interlock also ensures that a proxy will only be updated if it has corresponding updates +to its designated service cluster. So in this example, updates to the `us-east` cluster will not affect the `us-west` cluster. If there is a problem +the others will not be affected. + diff --git a/datacenter/ucp/3.0/guides/user/interlock/usage/sessions.md b/datacenter/ucp/3.0/guides/user/interlock/usage/sessions.md new file mode 100644 index 0000000000..f1104ec486 --- /dev/null +++ b/datacenter/ucp/3.0/guides/user/interlock/usage/sessions.md @@ -0,0 +1,131 @@ +--- +title: Persistent (sticky) sessions +description: Learn how to configure your swarm services with persistent sessions + using UCP. +keywords: routing, proxy +--- + +In this example we will publish a service and configure the proxy for persistent (sticky) sessions. + +# Cookies +In the following example we will show how to configure sticky sessions using cookies. + +First we will create an overlay network so that service traffic is isolated and secure: + +```bash +$> docker network create -d overlay demo +1se1glh749q1i4pw0kf26mfx5 +``` + +Next we will create the service with the cookie to use for sticky sessions: + +```bash +$> docker service create \ + --name demo \ + --network demo \ + --detach=false \ + --replicas=5 \ + --label com.docker.lb.hosts=demo.local \ + --label com.docker.lb.sticky_session_cookie=session \ + --label com.docker.lb.port=8080 \ + --env METADATA="demo-sticky" \ + ehazlett/docker-demo +``` + +Interlock will detect once the service is available and publish it. Once the tasks are running +and the proxy service has been updated the application should be available via `http://demo.local` +and configured to use sticky sessions: + +```bash +$> curl -vs -c cookie.txt -b cookie.txt -H "Host: demo.local" http://127.0.0.1/ping +* Trying 127.0.0.1... +* TCP_NODELAY set +* Connected to 127.0.0.1 (127.0.0.1) port 80 (#0) +> GET /ping HTTP/1.1 +> Host: demo.local +> User-Agent: curl/7.54.0 +> Accept: */* +> Cookie: session=1510171444496686286 +> +< HTTP/1.1 200 OK +< Server: nginx/1.13.6 +< Date: Wed, 08 Nov 2017 20:04:36 GMT +< Content-Type: text/plain; charset=utf-8 +< Content-Length: 117 +< Connection: keep-alive +* Replaced cookie session="1510171444496686286" for domain demo.local, path /, expire 0 +< Set-Cookie: session=1510171444496686286 +< x-request-id: 3014728b429320f786728401a83246b8 +< x-proxy-id: eae36bf0a3dc +< x-server-info: interlock/2.0.0-development (147ff2b1) linux/amd64 +< x-upstream-addr: 10.0.2.5:8080 +< x-upstream-response-time: 1510171476.948 +< +{"instance":"9c67a943ffce","version":"0.1","metadata":"demo-sticky","request_id":"3014728b429320f786728401a83246b8"} +``` + +Notice the `Set-Cookie` from the application. This is stored by the `curl` command and sent with subsequent requests +which are pinned to the same instance. If you make a few requests you will notice the same `x-upstream-addr`. + +# IP Hashing +In this example we show how to configure sticky sessions using client IP hashing. This is not as flexible or consistent +as cookies but enables workarounds for some applications that cannot use the other method. 
+ +First we will create an overlay network so that service traffic is isolated and secure: + +```bash +$> docker network create -d overlay demo +1se1glh749q1i4pw0kf26mfx5 +``` + +Next we will create the service with the cookie to use for sticky sessions using IP hashing: + +```bash +$> docker service create \ + --name demo \ + --network demo \ + --detach=false \ + --replicas=5 \ + --label com.docker.lb.hosts=demo.local \ + --label com.docker.lb.port=8080 \ + --label com.docker.lb.ip_hash=true \ + --env METADATA="demo-sticky" \ + ehazlett/docker-demo +``` + +Interlock will detect once the service is available and publish it. Once the tasks are running +and the proxy service has been updated the application should be available via `http://demo.local` +and configured to use sticky sessions: + +```bash +$> curl -vs -H "Host: demo.local" http://127.0.0.1/ping +* Trying 127.0.0.1... +* TCP_NODELAY set +* Connected to 127.0.0.1 (127.0.0.1) port 80 (#0) +> GET /ping HTTP/1.1 +> Host: demo.local +> User-Agent: curl/7.54.0 +> Accept: */* +> +< HTTP/1.1 200 OK +< Server: nginx/1.13.6 +< Date: Wed, 08 Nov 2017 20:04:36 GMT +< Content-Type: text/plain; charset=utf-8 +< Content-Length: 117 +< Connection: keep-alive +< x-request-id: 3014728b429320f786728401a83246b8 +< x-proxy-id: eae36bf0a3dc +< x-server-info: interlock/2.0.0-development (147ff2b1) linux/amd64 +< x-upstream-addr: 10.0.2.5:8080 +< x-upstream-response-time: 1510171476.948 +< +{"instance":"9c67a943ffce","version":"0.1","metadata":"demo-sticky","request_id":"3014728b429320f786728401a83246b8"} +``` + +You can use `docker service scale demo=10` to add some more replicas. Once scaled, you will notice that requests are pinned +to a specific backend. + +Note: due to the way the IP hashing works for extensions, you will notice a new upstream address when scaling replicas. This is +expected as internally the proxy uses the new set of replicas to decide on a backend on which to pin. Once the upstreams are +determined a new "sticky" backend will be chosen and that will be the dedicated upstream. + diff --git a/datacenter/ucp/3.0/guides/user/interlock/usage/tls.md b/datacenter/ucp/3.0/guides/user/interlock/usage/tls.md new file mode 100644 index 0000000000..32f7e9910e --- /dev/null +++ b/datacenter/ucp/3.0/guides/user/interlock/usage/tls.md @@ -0,0 +1,197 @@ +--- +title: Applications with SSL +description: Learn how to configure your swarm services with TLS using the layer + 7 routing solution for UCP. +keywords: routing, proxy, tls +redirect_from: + - /ee/ucp/interlock/usage/ssl/ +--- + +Once the [layer 7 routing solution is enabled](../deploy/index.md), you can +start using it in your swarm services. You have two options for securing your +services with TLS: + +* Let the proxy terminate the TLS connection. All traffic between end-users and +the proxy is encrypted, but the traffic going between the proxy and your swarm +service is not secured. +* Let your swarm service terminate the TLS connection. The end-to-end traffic +is encrypted and the proxy service allows TLS traffic to passthrough unchanged. + +In this example we'll deploy a service that can be reached at `app.example.org` +using these two options. + +No matter how you choose to secure your swarm services, there are two steps to +route traffic with TLS: + +1. Create [Docker secrets](/engine/swarm/secrets.md) to manage from a central +place the private key and certificate used for TLS. +2. Add labels to your swarm service for UCP to reconfigure the proxy service. 
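+
+If you prefer to create the secrets ahead of time from the CLI, step 1 can be
+done with `docker secret create` (a minimal sketch, assuming the key and
+certificate files already exist in your working directory):
+
+```bash
+# Store the private key and certificate as Docker secrets.
+docker secret create app.example.org.key app.example.org.key
+docker secret create app.example.org.cert app.example.org.cert
+```
+
+The examples below take a different approach and declare the secrets in the
+stack file, so Docker creates them from local files when you deploy the stack.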
+ + +## Let the proxy handle TLS + +In this example we'll deploy a swarm service and let the proxy service handle +the TLS connection. All traffic between the proxy and the swarm service is +not secured, so you should only use this option if you trust that no one can +monitor traffic inside services running on your datacenter. + +![TLS Termination](../../images/interlock-tls-1.png) + +Start by getting a private key and certificate for the TLS connection. Make +sure the Common Name in the certificate matches the name where your service +is going to be available. + +You can generate a self-signed certificate for `app.example.org` by running: + +```bash +openssl req \ + -new \ + -newkey rsa:4096 \ + -days 3650 \ + -nodes \ + -x509 \ + -subj "/C=US/ST=CA/L=SF/O=Docker-demo/CN=app.example.org" \ + -keyout app.example.org.key \ + -out app.example.org.cert +``` + +Then, create a docker-compose.yml file with the following content: + +```yml +version: "3.2" + +services: + demo: + image: ehazlett/docker-demo + deploy: + replicas: 1 + labels: + com.docker.lb.hosts: app.example.org + com.docker.lb.network: demo-network + com.docker.lb.port: 8080 + com.docker.lb.ssl_cert: demo_app.example.org.cert + com.docker.lb.ssl_key: demo_app.example.org.key + environment: + METADATA: proxy-handles-tls + networks: + - demo-network + +networks: + demo-network: + driver: overlay +secrets: + app.example.org.cert: + file: ./app.example.org.cert + app.example.org.key: + file: ./app.example.org.key +``` + +Notice that the demo service has labels describing that the proxy service should +route traffic to `app.example.org` to this service. All traffic between the +service and proxy takes place using the `demo-network` network. The service also +has labels describing the Docker secrets to use on the proxy service to terminate +the TLS connection. + +Since the private key and certificate are stored as Docker secrets, you can +easily scale the number of replicas used for running the proxy service. Docker +takes care of distributing the secrets to the replicas. + +Set up your CLI client with a [UCP client bundle](../../user-access/cli.md), +and deploy the service: + +```bash +docker stack deploy --compose-file docker-compose.yml demo +``` + +The service is now running. To test that everything is working correctly you +first need to update your `/etc/hosts` file to map `app.example.org` to the +IP address of a UCP node. + +In a production deployment, you'll have to create a DNS entry so that your +users can access the service using the domain name of your choice. +After doing that, you'll be able to access your service at: + +```bash +https://: +``` + +Where: +* `hostname` is the name you used with the `com.docker.lb.hosts` label. +* `https-port` is the port you've configured in the [UCP settings](../deploy/index.md). + +![Browser screenshot](../../images/interlock-tls-2.png){: .with-border} + +Since we're using self-sign certificates in this example, client tools like +browsers display a warning that the connection is insecure. + +You can also test from the CLI: + +```bash +curl --insecure \ + --resolve :: \ + https://:/ping +``` + +If everything is properly configured you should get a JSON payload: + +```json +{"instance":"f537436efb04","version":"0.1","request_id":"5a6a0488b20a73801aa89940b6f8c5d2"} +``` + +Since the proxy uses SNI to decide where to route traffic, make sure you're +using a version of curl that includes the SNI header with insecure requests. 
+If this doesn't happen, curl displays an error saying that the SSL handshake +was aborterd. + + +## Let your service handle TLS + +You can also encrypt the traffic from end-users to your swarm service. + +![End-to-end encryption](../../images/interlock-tls-3.png) + + +To do that, deploy your swarm service using the following docker-compose.yml file: + +```yml +version: "3.2" + +services: + demo: + image: ehazlett/docker-demo + command: --tls-cert=/run/secrets/cert.pem --tls-key=/run/secrets/key.pem + deploy: + replicas: 1 + labels: + com.docker.lb.hosts: app.example.org + com.docker.lb.network: demo-network + com.docker.lb.port: 8080 + com.docker.lb.ssl_passthrough: "true" + environment: + METADATA: end-to-end-TLS + networks: + - demo-network + secrets: + - source: app.example.org.cert + target: /run/secrets/cert.pem + - source: app.example.org.key + target: /run/secrets/key.pem + +networks: + demo-network: + driver: overlay +secrets: + app.example.org.cert: + file: ./app.example.org.cert + app.example.org.key: + file: ./app.example.org.key +``` + +Notice that we've update the service to start using the secrets with the +private key and certificate. The service is also labeled with +`com.docker.lb.ssl_passthrough: true`, signaling UCP to configure the proxy +service such that TLS traffic for `app.example.org` is passed to the service. + +Since the connection is fully encrypt from end-to-end, the proxy service +won't be able to add metadata such as version info or request ID to the +response headers. diff --git a/datacenter/ucp/3.0/guides/user/interlock/usage/websockets.md b/datacenter/ucp/3.0/guides/user/interlock/usage/websockets.md new file mode 100644 index 0000000000..ec2b1b46b5 --- /dev/null +++ b/datacenter/ucp/3.0/guides/user/interlock/usage/websockets.md @@ -0,0 +1,36 @@ +--- +title: Websockets +description: Learn how to use websocket in your swarm services when using the + layer 7 routing solution for UCP. +keywords: routing, proxy +--- + +In this example we will publish a service and configure support for websockets. + +First we will create an overlay network so that service traffic is isolated and secure: + +```bash +$> docker network create -d overlay demo +1se1glh749q1i4pw0kf26mfx5 +``` + +Next we will create the service with websocket endpoints: + +```bash +$> docker service create \ + --name demo \ + --network demo \ + --detach=false \ + --label com.docker.lb.hosts=demo.local \ + --label com.docker.lb.port=8080 \ + --label com.docker.lb.websocket_endpoints=/ws \ + ehazlett/websocket-chat +``` + +Note: for this to work you must have an entry for `demo.local` in your local hosts (i.e. `/etc/hosts`) file. +This uses the browser for websocket communication so you will need to have an entry or use a routable domain. + +Interlock will detect once the service is available and publish it. Once the tasks are running +and the proxy service has been updated the application should be available via `http://demo.local`. Open +two instances of your browser and you should see text on both instances as you type. + diff --git a/datacenter/ucp/3.0/guides/user/kubernetes/create-service-account.md b/datacenter/ucp/3.0/guides/user/kubernetes/create-service-account.md new file mode 100644 index 0000000000..3e7336fd62 --- /dev/null +++ b/datacenter/ucp/3.0/guides/user/kubernetes/create-service-account.md @@ -0,0 +1,89 @@ +--- +title: Create a service account for a Kubernetes app +description: Learn how to use a service account to give a Kubernetes workload access to cluster resources. 
+keywords: UCP, Docker EE, Kubernetes, authorization, access control, grant +--- + +Kubernetes enables access control for workloads by providing service accounts. +A service account represents an identity for processes that run in a pod. +When a process is authenticated through a service account, it can contact the +API server and access cluster resources. If a pod doesn't have an assigned +service account, it gets the `default` service account. +Learn about [managing service accounts](https://v1-8.docs.kubernetes.io/docs/admin/service-accounts-admin/). + +In Docker EE, you give a service account access to cluster resources by +creating a grant, the same way that you would give access to a user or a team. +Learn how to [grant access to cluster resources](../authorization/index.md). + +In this example, you create a service account and a grant that could be used +for an NGINX server. + +## Create the Kubernetes namespace + +A Kubernetes user account is global, but a service account is scoped to a +namespace, so you need to create a namespace before you create a service +account. + +1. Navigate to the **Namespaces** page and click **Create**. +2. In the **Object YAML** editor, append the following text. + ```yaml + metadata: + name: nginx + ``` +3. Click **Create**. +4. In the **nginx** namespace, click the **More options** icon, + and in the context menu, select **Set Context**, and click **Confirm**. + + ![](../images/create-service-account-1.png){: .with-border} + +5. Click the **Set context for all namespaces** toggle and click **Confirm**. + +## Create a service account + +Create a service account named `nginx-service-account` in the `nginx` +namespace. + +1. Navigate to the **Service Accounts** page and click **Create**. +2. In the **Namespace** dropdown, select **nginx**. +3. In the **Object YAML** editor, paste the following text. + ```yaml + apiVersion: v1 + kind: ServiceAccount + metadata: + name: nginx-service-account + ``` +3. Click **Create**. + + ![](../images/create-service-account-2.png){: .with-border} + +## Create a grant + +To give the service account access to cluster resources, create a grant with +`Restricted Control` permissions. + +1. Navigate to the **Grants** page and click **Create Grant**. +2. In the left pane, click **Resource Sets**, and in the **Type** section, + click **Namespaces**. +3. Select the **nginx** namespace. +4. In the left pane, click **Roles**. In the **Role** dropdown, select + **Restricted Control**. +5. In the left pane, click **Subjects**, and select **Service Account**. + + > Service account subject type + > + > The **Service Account** option in the **Subject Type** section appears only + > when a Kubernetes namespace is present. + {: .important} + +6. In the **Namespace** dropdown, select **nginx**, and in the + **Service Account** dropdown, select **nginx-service-account**. +7. Click **Create**. + + ![](../images/create-service-account-3.png){: .with-border} + +Now `nginx-service-account` has access to all cluster resources that are +assigned to the `nginx` namespace. 
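+
+To run a workload under this identity, reference the service account in the
+pod spec of the object you deploy. The following Deployment is a minimal
+sketch (the object name and image are illustrative, not part of this example):
+
+```yaml
+apiVersion: apps/v1beta2
+kind: Deployment
+metadata:
+  name: nginx-deployment
+  namespace: nginx
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      app: nginx
+  template:
+    metadata:
+      labels:
+        app: nginx
+    spec:
+      # Processes in these pods authenticate as nginx-service-account.
+      serviceAccountName: nginx-service-account
+      containers:
+      - name: nginx
+        image: nginx:1.7.9
+```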
+ +## Where to go next + +- [Deploy an ingress controller for a Kubernetes app](deploy-ingress-controller.md) \ No newline at end of file diff --git a/datacenter/ucp/3.0/guides/user/kubernetes/deploy-with-compose.md b/datacenter/ucp/3.0/guides/user/kubernetes/deploy-with-compose.md new file mode 100644 index 0000000000..64172cc844 --- /dev/null +++ b/datacenter/ucp/3.0/guides/user/kubernetes/deploy-with-compose.md @@ -0,0 +1,92 @@ +--- +title: Deploy a Compose-based app to a Kubernetes cluster +description: Use Docker Enterprise Edition to deploy a Kubernetes workload from a Docker compose. +keywords: UCP, Docker EE, Kubernetes, Compose +redirect_from: + - /ee/ucp/user/services/deploy-compose-on-kubernetes/ +--- + +Docker Enterprise Edition enables deploying [Docker Compose](/compose/overview.md/) +files to Kubernetes clusters. Starting in Compile file version 3.3, you use the +same `docker-compose.yml` file that you use for Swarm deployments, but you +specify **Kubernetes workloads** when you deploy the stack. The result is a +true Kubernetes app. + +## Get access to a Kubernetes namespace + +To deploy a stack to Kubernetes, you need a namespace for the app's resources. +Contact your Docker EE administrator to get access to a namespace. In this +example, the namespace has the name `lab-words`. +[Learn to grant access to a Kubernetes namespace](../authorization/grant-permissions/#kubernetes-grants). + +## Create a Kubernetes app from a Compose file + +In this example, you create a simple app, named "lab-words", by using a Compose +file. The following yaml defines the stack: + +```yaml +version: '3.3' + +services: + web: + build: web + image: dockerdemos/lab-web + volumes: + - "./web/static:/static" + ports: + - "80:80" + + words: + build: words + image: dockerdemos/lab-words + deploy: + replicas: 5 + endpoint_mode: dnsrr + resources: + limits: + memory: 16M + reservations: + memory: 16M + + db: + build: db + image: dockerdemos/lab-db +``` + +1. Open the UCP web UI, and in the left pane, click **Shared resources**. +2. Click **Stacks**, and in the **Stacks** page, click **Create stack**. +3. In the **Name** textbox, type "lab-words". +4. In the **Mode** dropdown, select **Kubernetes workloads**. +5. In the **Namespace** drowdown, select **lab-words**. +6. In the **docker-compose.yml** editor, paste the previous YAML. +7. Click **Create** to deploy the stack. + +## Inspect the deployment + +After a few minutes have passed, all of the pods in the `lab-words` deployment +are running. + +1. In the left pane, click **Pods**. Confirm that there are seven pods and + that their status is **Running**. If any have a status of **Pending**, + wait until they're all running. +2. Click one of the pods that has a name starting with **words**, and in the + details pane, scroll down to the **Pod IP** to view the pod's internal IP + address. + + ![](../images/deploy-compose-kubernetes-1.png){: .with-border} + +3. In the left pane, click **Load balancers** and find the **web-published** service. +4. Click the **web-published** service, and in the details pane, scroll down to the + **Spec** section. +5. Under **Ports**, click the URL to open the web UI for the `lab-words` app. + + ![](../images/deploy-compose-kubernetes-2.png){: .with-border} + +6. Look at the IP addresses that are displayed in each tile. The IP address + of the pod you inspected previously may be listed. If it's not, refresh the + page until you see it. + + ![](../images/deploy-compose-kubernetes-3.png){: .with-border} + +7. 
Refresh the page to see how the load is balanced across the pods. + diff --git a/datacenter/ucp/3.0/guides/user/kubernetes/index.md b/datacenter/ucp/3.0/guides/user/kubernetes/index.md new file mode 100644 index 0000000000..3daebde71d --- /dev/null +++ b/datacenter/ucp/3.0/guides/user/kubernetes/index.md @@ -0,0 +1,258 @@ +--- +title: Deploy a workload to a Kubernetes cluster +description: Use Docker Enterprise Edition to deploy Kubernetes workloads from yaml files. +keywords: UCP, Docker EE, orchestration, Kubernetes, cluster +redirect_from: + - /ee/ucp/user/services/deploy-kubernetes-workload/ +--- + +The Docker EE web UI enables deploying your Kubernetes YAML files. In most +cases, no modifications are necessary to deploy on a cluster that's managed by +Docker EE. + +## Deploy an NGINX server + +In this example, a simple Kubernetes Deployment object for an NGINX server is +defined in YAML: + +```yaml +apiVersion: apps/v1beta2 +kind: Deployment +metadata: + name: nginx-deployment +spec: + selector: + matchLabels: + app: nginx + replicas: 2 + template: + metadata: + labels: + app: nginx + spec: + containers: + - name: nginx + image: nginx:1.7.9 + ports: + - containerPort: 80 +``` + +The YAML specifies an earlier version of NGINX, which will be updated in a +later section. + +1. Open the Docker EE web UI, and in the left pane, click **Kubernetes**. +2. Click **Create** to open the **Create Kubernetes Object** page. +3. In the **Namespace** dropdown, select **default**. +4. In the **Object YAML** editor, paste the previous YAML. +5. Click **Create**. + +![](../images/deploy-kubernetes-workload-1.png){: .with-border} + +## Inspect the deployment + +The Docker EE web UI shows the status of your deployment when you click the +links in the **Kubernetes** section of the left pane. + +1. In the left pane. click **Controllers** to see the resource controllers + that Docker EE created for the NGINX server. +2. Click the **nginx-deployment** controller, and in the details pane, scroll + to the **Template** section. This shows the values that Docker EE used to + create the deployment. +3. In the left pane, click **Pods** to see the pods that are provisioned for + the NGINX server. Click one of the pods, and in the details pane, scroll to + the **Status** section to see that pod's phase, IP address, and other + properties. + +![](../images/deploy-kubernetes-workload-2.png){: .with-border} + +## Expose the server + +The NGINX server is up and running, but it's not accessible from outside of the +cluster. Add a `NodePort` service to expose the server on a specified port: + +```yaml +apiVersion: v1 +kind: Service +metadata: + name: nginx + labels: + app: nginx +spec: + type: NodePort + ports: + - port: 80 + nodePort: 32768 + selector: + app: nginx +``` + +The service connects the cluster's internal port 80 to the external port +32768. + +1. Repeat the previous steps and copy-paste the YAML that defines the `nginx` + service into the **Object YAML** editor on the + **Create Kubernetes Object** page. When you click **Create**, the + **Load Balancers** page opens. +2. Click the **nginx** service, and in the details pane, find the **Ports** + section. + + ![](../images/deploy-kubernetes-workload-3.png){: .with-border} + +3. Click the link that's labeled **URL** to view the default NGINX page. + +The YAML definition connects the service to the NGINX server by using the +app label `nginx` and a corresponding label selector. 
+[Learn about using a service to expose your app](https://v1-8.docs.kubernetes.io/docs/tutorials/kubernetes-basics/expose-intro/). + +## Update the deployment + +Update an existing deployment by applying an updated YAML file. In this +example, the server is scaled up to four replicas and updated to a later +version of NGINX. + +```yaml +... +spec: + progressDeadlineSeconds: 600 + replicas: 4 + revisionHistoryLimit: 10 + selector: + matchLabels: + app: nginx + strategy: + rollingUpdate: + maxSurge: 25% + maxUnavailable: 25% + type: RollingUpdate + template: + metadata: + creationTimestamp: null + labels: + app: nginx + spec: + containers: + - image: nginx:1.8 +... +``` + +1. In the left pane, click **Controllers** and select **nginx-deployment**. +2. In the details pane, click **Configure**, and in the **Edit Deployment** + page, find the **replicas: 2** entry. +3. Change the number of replicas to 4, so the line reads **replicas: 4**. +4. Find the **image: nginx:1.7.9** entry and change it to **image: nginx:1.8**. + + ![](../images/deploy-kubernetes-workload-4.png){: .with-border} + +5. Click **Save** to update the deployment with the new YAML. +6. In the left pane, click **Pods** to view the newly created replicas. + + ![](../images/deploy-kubernetes-workload-5.png){: .with-border} + +## Use the CLI to deploy Kubernetes objects + +With Docker EE, you deploy your Kubernetes objects on the command line by using +`kubectl`. [Install and set up kubectl](https://v1-8.docs.kubernetes.io/docs/tasks/tools/install-kubectl/). + +Use a client bundle to configure your client tools, like Docker CLI and `kubectl` +to communicate with UCP instead of the local deployments you might have running. +[Get your client bundle by using the Docker EE web UI or the command line](../user-access/cli.md). + +When you have the client bundle set up, you can deploy a Kubernetes object +from YAML. + +```yaml +apiVersion: apps/v1beta2 +kind: Deployment +metadata: + name: nginx-deployment +spec: + selector: + matchLabels: + app: nginx + replicas: 2 + template: + metadata: + labels: + app: nginx + spec: + containers: + - name: nginx + image: nginx:1.7.9 + ports: + - containerPort: 80 +--- +apiVersion: v1 +kind: Service +metadata: + name: nginx + labels: + app: nginx +spec: + type: NodePort + ports: + - port: 80 + nodePort: 32768 + selector: + app: nginx +``` + +Save the previous YAML to a file named "deployment.yaml", and use the following +command to deploy the NGINX server: + +```bash +kubectl apply -f deployment.yaml +``` + +## Inspect the deployment + +Use the `describe deployment` option to inspect the deployment: + +```bash +kubectl describe deployment nginx-deployment +``` + +Also, you can use the Docker EE web UI to see the deployment's pods and +controllers. + +## Update the deployment + +Update an existing deployment by applying an updated YAML file. + +Edit deployment.yaml and change the following lines: + +- Increase the number of replicas to 4, so the line reads **replicas: 4**. +- Update the NGINX version by specifying **image: nginx:1.8**. 
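+
+After these changes, the Deployment section of the file reads as follows
+(a sketch based on the deployment.yaml above; the Service definition, if you
+keep it in the same file, stays unchanged):
+
+```yaml
+apiVersion: apps/v1beta2
+kind: Deployment
+metadata:
+  name: nginx-deployment
+spec:
+  selector:
+    matchLabels:
+      app: nginx
+  # Scaled up from 2 to 4 replicas.
+  replicas: 4
+  template:
+    metadata:
+      labels:
+        app: nginx
+    spec:
+      containers:
+      - name: nginx
+        # Updated from nginx:1.7.9.
+        image: nginx:1.8
+        ports:
+        - containerPort: 80
+```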
+ +Save the edited YAML to a file named "update.yaml", and use the following +command to deploy the NGINX server: + +```bash +kubectl apply -f update.yaml +``` + +Check that the deployment was scaled out by listing the deployments in the +cluster: + +```bash + kubectl get deployments +``` + +You should see four pods in the deployment: + +```bash +NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE +nginx-deployment 4 4 4 4 2d +``` + +Check that the pods are running the updated image: + +```bash +kubectl describe deployment nginx-deployment | grep -i image +``` + +You should see the currently running image: + +```bash + Image: nginx:1.8 +``` + diff --git a/datacenter/ucp/3.0/guides/user/kubernetes/install-cni-plugin.md b/datacenter/ucp/3.0/guides/user/kubernetes/install-cni-plugin.md new file mode 100644 index 0000000000..b16cf194d8 --- /dev/null +++ b/datacenter/ucp/3.0/guides/user/kubernetes/install-cni-plugin.md @@ -0,0 +1,93 @@ +--- +title: Install a CNI plugin +description: Learn how to install a Container Networking Interface plugin on Docker Universal Control Plane. +keywords: ucp, cli, administration, kubectl, Kubernetes, cni, Container Networking Interface, flannel, weave, ipip, calico +--- + +With Docker Universal Control Plane, you can install a third-party Container +Networking Interface (CNI) plugin when you install UCP, by using the +`--cni-installer-url` option. By default, Docker EE installs the built-in +[Calico](https://github.com/projectcalico/cni-plugin) plugin, but you can +override the default and install a plugin of your choice, +like [Flannel](https://github.com/coreos/flannel) or +[Weave](https://www.weave.works/). + +# Install UCP with a custom CNI plugin + +Modify the [UCP install command-line](../admin/install/index.md#step-4-install-ucp) +to add the `--cni-installer-url` [option](/reference/ucp/3.0/cli/install.md), +providing a URL for the location of the CNI plugin's YAML file: + +```bash +docker container run --rm -it --name ucp \ + -v /var/run/docker.sock:/var/run/docker.sock \ + {{ page.ucp_org }}/{{ page.ucp_repo }}:{{ page.ucp_version }} install \ + --host-address \ + --cni-installer-url \ + --interactive +``` + +You must provide a correct YAML installation file for the CNI plugin, but most +of the default files work on Docker EE with no modification. + +## YAML files for CNI plugins + +Use the following commands to get the YAML files for popular CNI plugins. + +- [Flannel](https://github.com/coreos/flannel) + ```bash + # Get the URL for the Flannel CNI plugin. + CNI_URL="https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml" + ``` +- [Weave](https://www.weave.works/) + ```bash + # Get the URL for the Weave CNI plugin. 
+ CNI_URL="https://cloud.weave.works/k8s/net?k8s-version=Q2xpZW50IFZlcnNpb246IHZlcnNpb24uSW5mb3tNYWpvcjoiMSIsIE1pbm9yOiI5IiwgR2l0VmVyc2lvbjoidjEuOS4zIiwgR2l0Q29tbWl0OiJkMjgzNTQxNjU0NGYyOThjOTE5ZTJlYWQzYmUzZDA4NjRiNTIzMjNiIiwgR2l0VHJlZVN0YXRlOiJjbGVhbiIsIEJ1aWxkRGF0ZToiMjAxOC0wMi0wN1QxMjoyMjoyMVoiLCBHb1ZlcnNpb246ImdvMS45LjIiLCBDb21waWxlcjoiZ2MiLCBQbGF0Zm9ybToibGludXgvYW1kNjQifQpTZXJ2ZXIgVmVyc2lvbjogdmVyc2lvbi5JbmZve01ham9yOiIxIiwgTWlub3I6IjgrIiwgR2l0VmVyc2lvbjoidjEuOC4yLWRvY2tlci4xNDMrYWYwODAwNzk1OWUyY2UiLCBHaXRDb21taXQ6ImFmMDgwMDc5NTllMmNlYWUxMTZiMDk4ZWNhYTYyNGI0YjI0MjBkODgiLCBHaXRUcmVlU3RhdGU6ImNsZWFuIiwgQnVpbGREYXRlOiIyMDE4LTAyLTAxVDIzOjI2OjE3WiIsIEdvVmVyc2lvbjoiZ28xLjguMyIsIENvbXBpbGVyOiJnYyIsIFBsYXRmb3JtOiJsaW51eC9hbWQ2NCJ9Cg==" + ``` + If you have kubectl available, for example by using + [Docker for Mac](/docker-for-mac/kubernetes.md), you can use the following + command to get the URL for the [Weave](https://www.weave.works/) CNI plugin: + ```bash + # Get the URL for the Weave CNI plugin. + CNI_URL="https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')" + ``` +- [Romana](http://docs.romana.io/) + ```bash + # Get the URL for the Romana CNI plugin. + CNI_URL="https://raw.githubusercontent.com/romana/romana/master/docs/kubernetes/romana-kubeadm.yml" + ``` + +## Disable IP in IP overlay tunneling + +The Calico CNI plugin supports both overlay (IPIP) and underlay forwarding +technologies. By default, Docker UCP uses IPIP overlay tunneling. + +If you're used to managing applications at the network level through the +underlay visibility, or you want to reuse existing networking tools in the +underlay, you may want to disable the IPIP functionality. Run the following +commands on the Kubernetes master node to disable IPIP overlay tunneling. + +```bash +# Exec into the Calico Kubernetes controller container. +docker exec -it $(docker ps --filter name=k8s_calico-kube-controllers_calico-kube-controllers -q) sh + +# Download calicoctl +wget https://github.com/projectcalico/calicoctl/releases/download/v3.1.1/calicoctl + +# Get the IP pool configuration. +./calicoctl get ippool -o yaml > ippool.yaml + +# Edit the file: Disable IPIP in ippool.yaml by setting "ipipMode: Never". + +# Apply the edited file to the Calico plugin. +./calicoctl apply -f ippool.yaml + +``` + +These steps disable overlay tunneling, and Calico uses the underlay networking, +in environments where it's supported. + +## Where to go next + +- [Install UCP for production](../admin/install.md) +- [Deploy a workload to a Kubernetes cluster](../kubernetes.md) diff --git a/datacenter/ucp/3.0/guides/user/kubernetes/layer-7-routing.md b/datacenter/ucp/3.0/guides/user/kubernetes/layer-7-routing.md new file mode 100644 index 0000000000..c1d343e0b2 --- /dev/null +++ b/datacenter/ucp/3.0/guides/user/kubernetes/layer-7-routing.md @@ -0,0 +1,310 @@ +--- +title: Layer 7 routing +description: Learn how to route traffic to your Kubernetes workloads in + Docker Enterprise Edition. +keywords: UCP, Kubernetes, ingress, routing +redirect_from: + - /ee/ucp/kubernetes/deploy-ingress-controller/ +--- + +When you deploy a Kubernetes application, you may want to make it accessible +to users using hostnames instead of IP addresses. + +Kubernetes provides **ingress controllers** for this. This functionality is +specific to Kubernetes. If you're trying to route traffic to Swarm-based +applications, check [layer 7 routing with Swarm](../interlock/index.md). 
+ +Use an ingress controller when you want to: + +* Give your Kubernetes app an externally-reachable URL. +* Load-balance traffic to your app. + +Kubernetes provides an NGINX ingress controller that you can use in Docker EE +without modifications. +Learn about [ingress in Kubernetes](https://v1-8.docs.kubernetes.io/docs/concepts/services-networking/ingress/). + +## Create a dedicated namespace + +1. Navigate to the **Namespaces** page, and click **Create**. +2. In the **Object YAML** editor, append the following text. + ```yaml + metadata: + name: ingress-nginx + ``` + + The finished YAML should look like this. + + ```yaml + apiVersion: v1 + kind: Namespace + metadata: + name: ingress-nginx + ``` +3. Click **Create**. +4. In the **ingress-nginx** namespace, click the **More options** icon, + and in the context menu, select **Set Context**. + + ![](../images/deploy-ingress-controller-1.png){: .with-border} + +## Create a grant + +The default service account that's associated with the `ingress-nginx` +namespace needs access to Kubernetes resources, so create a grant with +`Restricted Control` permissions. + +1. From UCP, navigate to the **Grants** page, and click **Create Grant**. +2. Within the **Subject** pane, select **Service Account**. For the + **Namespace** select **ingress-nginx**, and select **default** for + the **Service Account**. Click **Next**. +3. Within the **Role** pane, select **Restricted Control**, and then click + **Next**. +4. Within the **Resource Set** pane, select the **Type** **Namespace**, and + select the **Apply grant to all existing and new namespaces** toggle. +5. Click **Create**. + +> Ingress and role-based access control +> +> Docker EE has an access control system that differs from Kubernetes RBAC. +> If your ingress controller has access control requirements, you need to +> create corresponding UCP grants. Learn to +> [migrate Kubernetes roles to Docker EE authorization](../authorization/migrate-kubernetes-roles.md). +{: .important} + +## Deploy NGINX ingress controller + +The cluster is ready for the ingress controller deployment, which has three +main components: + +- a simple HTTP server, named `default-http-backend`, +- an ingress controller, named `nginx-ingress-controller`, and +- a service that exposes the app, named `ingress-nginx`. + +Navigate to the **Create Kubernetes Object** page, and in the **Object YAML** +editor, paste the following YAML. + +```yaml +apiVersion: apps/v1beta2 +kind: Deployment +metadata: + name: default-http-backend + labels: + app: default-http-backend + namespace: ingress-nginx +spec: + replicas: 1 + selector: + matchLabels: + app: default-http-backend + template: + metadata: + labels: + app: default-http-backend + annotations: + seccomp.security.alpha.kubernetes.io/pod: docker/default + spec: + terminationGracePeriodSeconds: 60 + containers: + - name: default-http-backend + # Any image is permissable as long as: + # 1. It serves a 404 page at / + # 2. 
It serves 200 on a /healthz endpoint + image: gcr.io/google_containers/defaultbackend:1.4 + livenessProbe: + httpGet: + path: /healthz + port: 8080 + scheme: HTTP + initialDelaySeconds: 30 + timeoutSeconds: 5 + ports: + - containerPort: 8080 + resources: + limits: + cpu: 10m + memory: 20Mi + requests: + cpu: 10m + memory: 20Mi +--- +apiVersion: v1 +kind: Service +metadata: + name: default-http-backend + namespace: ingress-nginx + labels: + app: default-http-backend +spec: + ports: + - port: 80 + targetPort: 8080 + selector: + app: default-http-backend +--- +kind: ConfigMap +apiVersion: v1 +metadata: + name: nginx-configuration + namespace: ingress-nginx + labels: + app: ingress-nginx +--- +kind: ConfigMap +apiVersion: v1 +metadata: + name: tcp-services + namespace: ingress-nginx +--- +kind: ConfigMap +apiVersion: v1 +metadata: + name: udp-services + namespace: ingress-nginx +--- +apiVersion: apps/v1beta2 +kind: Deployment +metadata: + name: nginx-ingress-controller + namespace: ingress-nginx +spec: + replicas: 1 + selector: + matchLabels: + app: ingress-nginx + template: + metadata: + labels: + app: ingress-nginx + annotations: + prometheus.io/port: '10254' + prometheus.io/scrape: 'true' + seccomp.security.alpha.kubernetes.io/pod: docker/default + spec: + initContainers: + - command: + - sh + - -c + - sysctl -w net.core.somaxconn=32768; sysctl -w net.ipv4.ip_local_port_range="1024 65535" + image: alpine:3.6 + imagePullPolicy: IfNotPresent + name: sysctl + securityContext: + privileged: true + containers: + - name: nginx-ingress-controller + image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.17.1 + args: + - /nginx-ingress-controller + - --default-backend-service=$(POD_NAMESPACE)/default-http-backend + - --configmap=$(POD_NAMESPACE)/nginx-configuration + - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services + - --udp-services-configmap=$(POD_NAMESPACE)/udp-services + - --annotations-prefix=nginx.ingress.kubernetes.io + env: + - name: POD_NAME + valueFrom: + fieldRef: + fieldPath: metadata.name + - name: POD_NAMESPACE + valueFrom: + fieldRef: + fieldPath: metadata.namespace + ports: + - name: http + containerPort: 80 + - name: https + containerPort: 443 + livenessProbe: + failureThreshold: 3 + httpGet: + path: /healthz + port: 10254 + scheme: HTTP + initialDelaySeconds: 10 + periodSeconds: 10 + successThreshold: 1 + timeoutSeconds: 1 + readinessProbe: + failureThreshold: 3 + httpGet: + path: /healthz + port: 10254 + scheme: HTTP + periodSeconds: 10 + successThreshold: 1 + timeoutSeconds: 1 +--- +apiVersion: v1 +kind: Service +metadata: + name: ingress-nginx + namespace: ingress-nginx +spec: + type: NodePort + ports: + - name: http + port: 80 + targetPort: 80 + protocol: TCP + - name: https + port: 443 + targetPort: 443 + protocol: TCP + selector: + app: ingress-nginx +``` + +## Check your deployment + +The `default-http-backend` provides a simple service that serves a 404 page +at `/` and serves 200 on the `/healthz` endpoint. + +1. Navigate to the **Controllers** page and confirm that the + **default-http-backend** and **nginx-ingress-controller** objects are + scheduled. + + > Scheduling latency + > + > It may take several seconds for the HTTP backend and the ingress controller's + > `Deployment` and `ReplicaSet` objects to be scheduled. + {: .important} + + ![](../images/deploy-ingress-controller-2.png){: .with-border} + +2. When the workload is running, navigate to the **Load Balancers** page + and click the **ingress-nginx** service. 
+ + ![](../images/deploy-ingress-controller-3.png){: .with-border} + +3. In the details pane, click the first URL in the **Ports** section. + + A new page opens, displaying `default backend - 404`. + +## Check your deployment from the CLI + +From the command line, confirm that the deployment is running by using +`curl` with the URL that's shown on the details pane of the **ingress-nginx** +service. + +```bash +curl -I http://:/ +``` + +This command returns the following result. + +``` +HTTP/1.1 404 Not Found +Server: nginx/1.13.8 +``` + +Test the server's health ping service by appending `/healthz` to the URL. + +```bash +curl -I http://:/healthz +``` + +This command returns the following result. + +``` +HTTP/1.1 200 OK +Server: nginx/1.13.8 +``` diff --git a/datacenter/ucp/3.0/guides/user/swarm/deploy-multi-service-app.md b/datacenter/ucp/3.0/guides/user/swarm/deploy-multi-service-app.md new file mode 100644 index 0000000000..eb7462c80c --- /dev/null +++ b/datacenter/ucp/3.0/guides/user/swarm/deploy-multi-service-app.md @@ -0,0 +1,160 @@ +--- +title: Deploy a multi-service app +description: Learn how to deploy containerized applications on a cluster, with Docker Universal Control Plane. +keywords: ucp, deploy, application, stack, service, compose +redirect_from: + - /ee/ucp/user/services/ + - /ee/ucp/swarm/deploy-from-cli/ + - /ee/ucp/swarm/deploy-from-ui/ +--- + +Docker Universal Control Plane allows you to use the tools you already know, +like `docker stack deploy` to deploy multi-service applications. You can +also deploy your applications from the UCP web UI. + +In this example we'll deploy a multi-service application that allows users to +vote on whether they prefer cats or dogs. + +```yaml +version: "3" +services: + + # A Redis key-value store to serve as message queue + redis: + image: redis:alpine + ports: + - "6379" + networks: + - frontend + + # A PostgreSQL database for persistent storage + db: + image: postgres:9.4 + volumes: + - db-data:/var/lib/postgresql/data + networks: + - backend + + # Web UI for voting + vote: + image: dockersamples/examplevotingapp_vote:before + ports: + - 5000:80 + networks: + - frontend + depends_on: + - redis + + # Web UI to count voting results + result: + image: dockersamples/examplevotingapp_result:before + ports: + - 5001:80 + networks: + - backend + depends_on: + - db + + # Worker service to read from message queue + worker: + image: dockersamples/examplevotingapp_worker + networks: + - frontend + - backend + +networks: + frontend: + backend: + +volumes: + db-data: +``` + +## From the web UI + +To deploy your applications from the **UCP web UI**, on the left navigation bar +expand **Shared resources**, choose **Stacks**, and click **Create stack**. + +![Stack list](../../images/deploy-multi-service-app-1.png){: .with-border} + +Choose the name you want for your stack, and choose **Swarm services** as the +deployment mode. + +When you choose this option, UCP deploys your app using the +Docker swarm built-in orchestrator. If you choose 'Basic containers' as the +deployment mode, UCP deploys your app using the classic Swarm orchestrator. + +Then copy-paste the application definition in docker-compose.yml format. + +![Deploy stack](../../images/deploy-multi-service-app-2.png){: .with-border} + +Once you're done click **Create** to deploy the stack. + +## From the CLI + +To deploy the application from the CLI, start by configuring your Docker +CLI using a [UCP client bundle](../user-access/cli.md). 
+
+Then, create a file named `docker-stack.yml` with the content of the yaml above,
+and run:
+
+```
+docker stack deploy --compose-file docker-stack.yml voting_app
+```
+
+Alternatively, you can deploy the same file with Docker Compose:
+
+```
+docker-compose --file docker-stack.yml --project-name voting_app up -d
+```
+ + +## Check your app + +Once the multi-service application is deployed, it shows up in the UCP web UI. +The 'Stacks' page shows that you've deployed the voting app. + +![Stack deployed](../../images/deploy-multi-service-app-3.png){: .with-border} + +You can also inspect the individual services of the app you deployed. For that, +click the **voting_app** to open the details pane, open **Inspect resources** and +choose **Services**, since this app was deployed with the built-in Docker swarm +orchestrator. + +![Service list](../../images/deploy-multi-service-app-4.png){: .with-border} + +You can also use the Docker CLI to check the status of your app: + +``` +docker stack ps voting_app +``` + +Great! The app is deployed so we can cast votes by accessing the service that's +listening on port 5000. +You don't need to know the ports a service listens to. You can +**click the voting_app_vote** service and click on the **Published endpoints** +link. + +![Voting app](../../images/deploy-multi-service-app-5.png){: .with-border} + +## Limitations + +When deploying applications from the web UI, you can't reference any external +files, no matter if you're using the built-in swarm orchestrator or classic +Swarm. For that reason, the following keywords are not supported: + +* build +* dockerfile +* env_file + +Also, UCP doesn't store the stack definition you've used to deploy the stack. +You can use a version control system for this. + diff --git a/datacenter/ucp/3.0/guides/user/swarm/deploy-to-collection.md b/datacenter/ucp/3.0/guides/user/swarm/deploy-to-collection.md new file mode 100644 index 0000000000..746f5b44f3 --- /dev/null +++ b/datacenter/ucp/3.0/guides/user/swarm/deploy-to-collection.md @@ -0,0 +1,103 @@ +--- +title: Deploy application resources to a collection +description: Learn how to manage user access to application resources by using collections. +keywords: UCP, authentication, user management, stack, collection, role, application, resources +redirect_from: + - /ee/ucp/user/services/deploy-stack-to-collection/ +--- + +Docker Universal Control Plane enforces role-based access control when you +deploy services. By default, you don't need to do anything, because UCP deploys +your services to a default collection, unless you specify another one. You can +customize the default collection in your UCP profile page. +[Learn more about access control and collections](../authorization/index.md). + +UCP defines a collection by its path. For example, a user's default collection +has the path `/Shared/Private/`. To deploy a service to a collection +that you specify, assign the collection's path to the *access label* of the +service. The access label is named `com.docker.ucp.access.label`. + +When UCP deploys a service, it doesn't automatically create the collections +that correspond with your access labels. An administrator must create these +collections and [grant users access to them](../authorization/grant-permissions.md). +Deployment fails if UCP can't find a specified collection or if the user +doesn't have access to it. + +## Deploy a service to a collection by using the CLI + +Here's an example of a `docker service create` command that deploys a service +to a `/Shared/database` collection: + +```bash +docker service create \ + --name redis_2 \ + --label com.docker.ucp.access.label="/Shared/database" + redis:3.0.6 +``` + +## Deploy services to a collection by using a Compose file + +You can also specify a target collection for a service in a Compose file. 
+In the service definition, add a `labels:` dictionary, and assign the +collection's path to the `com.docker.ucp.access.label` key. + +If you don't specify access labels in the Compose file, resources are placed in +the user's default collection when the stack is deployed. + +You can place a stack's resources into multiple collections, but most of the +time, you won't need to do this. + +Here's an example of a Compose file that specifies two services, WordPress and +MySQL, and gives them the access label `/Shared/wordpress`: + +```yaml +version: '3.1' + +services: + + wordpress: + image: wordpress + ports: + - 8080:80 + environment: + WORDPRESS_DB_PASSWORD: example + deploy: + labels: + com.docker.ucp.access.label: /Shared/wordpress + mysql: + image: mysql:5.7 + environment: + MYSQL_ROOT_PASSWORD: example + deploy: + labels: + com.docker.ucp.access.label: /Shared/wordpress +``` + +To deploy the application: + +1. In the UCP web UI, navigate to the **Stacks** page and click **Create Stack**. +2. Name the app "wordpress". +3. From the **Mode** dropdown, select **Swarm Services**. +4. Copy and paste the previous compose file into the **docker-compose.yml** editor. +5. Click **Create** to deploy the application, and click **Done** when the + deployment completes. + + ![](../../images/deploy-stack-to-collection-1.png){: .with-border} + +If the `/Shared/wordpress` collection doesn't exist, or if you don't have +a grant for accessing it, UCP reports an error. + +To confirm that the service deployed to the `/Shared/wordpress` collection: + +1. In the **Stacks** page, click **wordpress**. +2. In the details pane, click **Inspect Resource** and select **Services**. +3. On the **Services** page, click **wordpress_mysql**. In the details pane, + make sure that the **Collection** is `/Shared/wordpress`. + +![](../../images/deploy-stack-to-collection-2.png){: .with-border} + +## Where to go next + +- [Deploy a Compose-based app to a Kubernetes cluster](../kubernetes/deploy-with-compose.md) +- [Set metadata on a service (-l, –label)](/engine/reference/commandline/service_create/#set-metadata-on-a-service--l-label.md) +- [Docker object labels](/engine/userguide/labels-custom-metadata/.md) diff --git a/datacenter/ucp/3.0/guides/user/swarm/index.md b/datacenter/ucp/3.0/guides/user/swarm/index.md new file mode 100644 index 0000000000..76ff69bcaa --- /dev/null +++ b/datacenter/ucp/3.0/guides/user/swarm/index.md @@ -0,0 +1,67 @@ +--- +title: Deploy a single service +description: Learn how to deploy services to a cluster managed by Universal Control Plane. +keywords: ucp, deploy, service +redirect_from: + - /ee/ucp/user/services/deploy-a-service/ +--- + +You can deploy and monitor your services from the UCP web UI. In this example +we'll deploy an [NGINX](https://www.nginx.com/) web server and make it +accessible on port `8000`. + +In your browser, navigate to the UCP web UI and click **Services**. On the +**Create a Service** page, click **Create Service** to configure the +NGINX service. + +Fill in the following fields: + +| Field | Value | +|:-------------|:-------------| +| Service name | nginx | +| Image name | nginx:latest | + +![](../../images/deploy-a-service-1.png){: .with-border} + +In the left pane, click **Network**. 
In the **Ports** section, +click **Publish Port** and fill in the following fields: + +| Field | Value | +|:---------------|:--------| +| Target port | 80 | +| Protocol | tcp | +| Publish mode | Ingress | +| Published port | 8000 | + +![](../../images/deploy-a-service-2.png){: .with-border} + +Click **Confirm** to map the ports for the NGINX service. + +Once you've specified the service image and ports, click **Create** to +deploy the service into the UCP cluster. + +![](../../images/deploy-a-service-3.png){: .with-border} + +Once the service is up and running, you'll be able to see the default NGINX +page, by going to `http://:8000`. In the **Services** list, click the +**nginx** service, and in the details pane, click the link under +**Published Endpoints**. + +![](../../images/deploy-a-service-4.png){: .with-border} + +Clicking the link opens a new tab that shows the default NGINX home page. + +![](../../images/deploy-a-service-5.png){: .with-border} + +## Use the CLI to deploy the service + +You can also deploy the same service from the CLI. Once you've set up your +[UCP client bundle](../user-access/cli.md), run: + +```bash +docker service create --name nginx \ + --publish mode=ingress,target=80,published=8000 \ + --label com.docker.ucp.access.owner= \ + nginx +``` + diff --git a/datacenter/ucp/3.0/guides/user/swarm/use-secrets.md b/datacenter/ucp/3.0/guides/user/swarm/use-secrets.md new file mode 100644 index 0000000000..1fb05bc865 --- /dev/null +++ b/datacenter/ucp/3.0/guides/user/swarm/use-secrets.md @@ -0,0 +1,193 @@ +--- +title: Manage secrets +description: Learn how to manage your passwords, certificates, and other secrets in a secure way with Docker EE +keywords: UCP, secret, password, certificate, private key +redirect_from: + - /ee/ucp/user/secrets/ +--- + +When deploying and orchestrating services, you often need to configure them +with sensitive information like passwords, TLS certificates, or private keys. + +Universal Control Plane allows you to store this sensitive information, also +known as *secrets*, in a secure way. It also gives you role-based access control +so that you can control which users can use a secret in their services +and which ones can manage the secret. + +UCP extends the functionality provided by Docker Engine, so you can continue +using the same workflows and tools you already use, like the Docker CLI client. +[Learn how to use secrets with Docker](/engine/swarm/secrets/). + +In this example, we're going to deploy a WordPress application that's composed of +two services: + +* wordpress: The service that runs Apache, PHP, and WordPress +* wordpress-db: a MySQL database used for data persistence + +Instead of configuring our services to use a plain text password stored in an +environment variable, we're going to create a secret to store the password. +When we deploy those services, we'll attach the secret to them, which creates +a file with the password inside the container running the service. +Our services will be able to use that file, but no one else will be able +to see the plain text password. + +To make things simpler, we're not going to configure the database service to +persist data. When the service stops, the data is lost. + +## Create a secret + +In the UCP web UI, open the **Swarm** section and click **Secrets**. + +![](../../images/manage-secrets-1.png){: .with-border} + +Click **Create Secret** to create a new secret. Once you create the secret +you won't be able to edit it or see the secret data again. 
+ +![](../../images/manage-secrets-2.png){: .with-border} + +Assign a unique name to the secret and set its value. You can optionally define +a permission label so that other users have permission to use this secret. Also +note that a service and secret must have the same permission label, or both +must have no permission label at all, in order to be used together. + +In this example, the secret is named `wordpress-password-v1`, to make it easier +to track which version of the password our services are using. + + +## Use secrets in your services + +Before creating the MySQL and WordPress services, we need to create the network +that they're going to use to communicate with one another. + +Navigate to the **Networks** page, and create the `wordpress-network` with the +default settings. + +![](../../images/manage-secrets-3.png){: .with-border} + +Now create the MySQL service: + +1. Navigate to the **Services** page and click **Create Service**. Name the + service "wordpress-db", and for the **Task Template**, use the "mysql:5.7" + image. +2. In the left pane, click **Network**. In the **Networks** section, click + **Attach Network**, and in the dropdown, select **wordpress-network**. +3. In the left pane, click **Environment**. The Environment page is where you + assign secrets, environment variables, and labels to the service. +4. In the **Secrets** section, click **Use Secret**, and in the **Secret Name** + dropdown, select **wordpress-password-v1**. Click **Confirm** to associate + the secret with the service. +5. In the **Environment Variable** section, click **Add Environment Variable** and enter + the string "MYSQL_ROOT_PASSWORD_FILE=/run/secrets/wordpress-password-v1" to + create an environment variable that holds the path to the password file in + the container. +6. If you specified a permission label on the secret, you must set the same + permission label on this service. If the secret doesn't have a permission + label, then this service also can't have a permission label. +7. Click **Create** to deploy the MySQL service. + +This creates a MySQL service that's attached to the `wordpress-network` network +and that uses the `wordpress-password-v1` secret. By default, this creates a file +with the same name at `/run/secrets/` inside the container running +the service. + +We also set the `MYSQL_ROOT_PASSWORD_FILE` environment variable to configure +MySQL to use the content of the `/run/secrets/wordpress-password-v1` file as +the root password. + +![](../../images/manage-secrets-4.png){: .with-border} + +Now that the MySQL service is running, we can deploy a WordPress service that +uses MySQL as a storage backend: + +1. Navigate to the **Services** page and click **Create Service**. Name the + service "wordpress", and for the **Task Template**, use the + "wordpress:latest" image. +2. In the left pane, click **Network**. In the **Networks** section, click + **Attach Network**, and in the dropdown, select **wordpress-network**. +3. In the left pane, click **Environment**. +4. In the **Secrets** section, click **Use Secret**, and in the **Secret Name** + dropdown, select **wordpress-password-v1**. Click **Confirm** to associate + the secret with the service. +5. In the **Environment Variable**, click **Add Environment Variable** and enter + the string "WORDPRESS_DB_PASSWORD_FILE=/run/secrets/wordpress-password-v1" to + create an environment variable that holds the path to the password file in + the container. +6. 
Add another environment variable and enter the string + "WORDPRESS_DB_HOST=wordpress-db:3306". +7. If you specified a permission label on the secret, you must set the same + permission label on this service. If the secret doesn't have a permission + label, then this service also can't have a permission label. +8. Click **Create** to deploy the WordPress service. + +![](../../images/manage-secrets-4a.png){: .with-border} + +This creates the WordPress service attached to the same network as the MySQL +service so that they can communicate, and maps the port 80 of the service to +port 8000 of the cluster routing mesh. + +![](../../images/manage-secrets-5.png){: .with-border} + +Once you deploy this service, you'll be able to access it using the +IP address of any node in your UCP cluster, on port 8000. + +![](../../images/manage-secrets-6.png){: .with-border} + +## Update a secret + +If the secret gets compromised, you'll need to rotate it so that your services +start using a new secret. In this case, we need to change the password we're +using and update the MySQL and WordPress services to use the new password. + +Since secrets are immutable in the sense that you can't change the data +they store after they are created, we can use the following process to achieve +this: + +1. Create a new secret with a different password. +2. Update all the services that are using the old secret to use the new one + instead. +3. Delete the old secret. + +Let's rotate the secret we've created. Navigate to the **Secrets** page +and create a new secret named `wordpress-password-v2`. + +![](../../images/manage-secrets-7.png){: .with-border} + +This example is simple, and we know which services we need to update, +but in the real world, this might not always be the case. + +Click the **wordpress-password-v1** secret. In the details pane, +click **Inspect Resource**, and in the dropdown, select **Services**. + +![](../../images/manage-secrets-8.png){: .with-border} + +Start by updating the `wordpress-db` service to stop using the secret +`wordpress-password-v1` and use the new version instead. + +The `MYSQL_ROOT_PASSWORD_FILE` environment variable is currently set to look for +a file at `/run/secrets/wordpress-password-v1` which won't exist after we +update the service. So we have two options: + +1. Update the environment variable to have the value +`/run/secrets/wordpress-password-v2`, or +2. Instead of mounting the secret file in `/run/secrets/wordpress-password-v2` +(the default), we can customize it to be mounted in`/run/secrets/wordpress-password-v1` +instead. This way we don't need to change the environment variable. This is +what we're going to do. + +When adding the secret to the services, instead of leaving the **Target Name** +field with the default value, set it with `wordpress-password-v1`. This will make +the file with the content of `wordpress-password-v2` be mounted in +`/run/secrets/wordpress-password-v1`. + +Delete the `wordpress-password-v1` secret, and click **Update**. + +![](../../images/manage-secrets-9.png){: .with-border} + +Then do the same thing for the WordPress service. After this is done, the +WordPress application is running and using the new password. + +## Managing secrets through the CLI + +You can find additional documentation on managing secrets through the CLI at [How Docker manages secrets](/engine/swarm/secrets/#read-more-about-docker-secret-commands). + +
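+
+If you manage secrets from the CLI, the rotation described above can also be
+scripted. Here's a minimal sketch that mirrors those steps, using the service
+and secret names from this example (the password value is a placeholder):
+
+```bash
+# Create the new secret from stdin (printf avoids a trailing newline).
+printf 'my-new-password' | docker secret create wordpress-password-v2 -
+
+# Point both services at the new secret, keeping the original target name so
+# the *_PASSWORD_FILE environment variables don't need to change.
+docker service update \
+  --secret-rm wordpress-password-v1 \
+  --secret-add source=wordpress-password-v2,target=wordpress-password-v1 \
+  wordpress-db
+
+docker service update \
+  --secret-rm wordpress-password-v1 \
+  --secret-add source=wordpress-password-v2,target=wordpress-password-v1 \
+  wordpress
+
+# Remove the old secret once no service references it.
+docker secret rm wordpress-password-v1
+```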