Update docs (#6268)
|
@ -2649,13 +2649,26 @@ manuals:
|
|||
title: Get support
|
||||
- sectiontitle: Docker Cloud
|
||||
section:
|
||||
- sectiontitle: Migration
|
||||
section:
|
||||
- path: /docker-cloud/migration/
|
||||
title: Migration overview
|
||||
- path: /docker-cloud/migration/cloud-to-swarm/
|
||||
title: Migrate to Docker CE
|
||||
- path: /docker-cloud/migration/cloud-to-kube-aks/
|
||||
title: Migrate to AKS
|
||||
- path: /docker-cloud/migration/cloud-to-kube-gke/
|
||||
title: Migrate to GKE
|
||||
- path: /docker-cloud/migration/deregister-swarms/
|
||||
title: Deregister swarms
|
||||
- path: /docker-cloud/migration/kube-primer/
|
||||
title: Kubernetes primer
|
||||
- path: /docker-cloud/
|
||||
title: About Docker Cloud
|
||||
- path: /docker-cloud/dockerid/
|
||||
title: Docker Cloud settings and Docker ID
|
||||
- path: /docker-cloud/orgs/
|
||||
title: Organizations and teams
|
||||
|
||||
- sectiontitle: Manage builds and images
|
||||
section:
|
||||
- path: /docker-cloud/builds/
|
||||
|
|
|
@ -0,0 +1,789 @@
|
|||
---
|
||||
description: How to migrate apps from Docker Cloud to AKS
|
||||
keywords: cloud, migration, kubernetes, azure, aks
|
||||
title: Migrate Docker Cloud stacks to Azure Container Service
|
||||
---
|
||||
|
||||
## AKS Kubernetes
|
||||
|
||||
This page explains how to prepare your applications for migration from Docker Cloud to [Azure Container Service (AKS)](https://azure.microsoft.com/en-us/free/){: target="_blank" class="_"} clusters. AKS is a hosted Kubernetes service on Microsoft Azure. It exposes standard Kubernetes APIs so that standard Kubernetes tools and apps run on it without needing to be reconfigured.
|
||||
|
||||
At a high level, migrating your Docker Cloud applications requires that you:
|
||||
|
||||
- **Build** a target environment (Kubernetes cluster on AKS).
|
||||
- **Convert** your Docker Cloud YAML stackfiles.
|
||||
- **Test** the converted YAML stackfiles in the new environment.
|
||||
- **Point** your application CNAMES to new service endpoints.
|
||||
- **Migrate** your applications from Docker Cloud to the new environment.
|
||||
|
||||
To demonstrate, we **build** a target environment of AKS nodes, **convert** the Docker Cloud stackfile for [example-voting-app](https://github.com/dockersamples/example-voting-app){: target="_blank" class="_"} to a Kubernetes manifest, and **test** the manifest in the new environment to ensure that it is safe to migrate.
|
||||
|
||||
> The actual process of migrating -- switching customers from your Docker Cloud applications to AKS applications -- will vary by application and environment.
|
||||
|
||||
## Voting-app example
|
||||
|
||||
The Docker Cloud stack of our example voting application is defined in [dockercloud.yml](https://raw.githubusercontent.com/dockersamples/example-voting-app/master/dockercloud.yml){: target="_blank" class="_"}. This document explains how `dockercloud.yml` is converted to a Kubernetes YAML manifest file so that you have the tools to do the same for your applications.
|
||||
|
||||
In the [dockercloud.yml](https://raw.githubusercontent.com/dockersamples/example-voting-app/master/dockercloud.yml){: target="_blank" class="_"}, the voting app is defined as a stack of six microservices:
|
||||
|
||||
- **vote**: Web front-end that displays voting options
|
||||
- **redis**: In-memory k/v store that collects votes
|
||||
- **worker**: Stores votes in database
|
||||
- **db**: Persistent store for votes
|
||||
- **result**: Web server that pulls and displays results from database
|
||||
- **lb**: Container-based load balancer
|
||||
|
||||
Votes are accepted by the `vote` service and stored in a persistent backend database (`db`) with the help of the `redis`, `worker`, and `lb` services. The vote tally is displayed by the `result` service.
|
||||
|
||||
*(Diagram: the six voting app services and the flow of votes between them)*
|
||||
|
||||
## Migration prerequisites
|
||||
|
||||
To complete the migration from Docker Cloud to Kubernetes on AKS, you need:
|
||||
|
||||
- An active Azure subscription with billing enabled.
|
||||
|
||||
## Build target environment
|
||||
|
||||
Azure Container Service (AKS) is a managed Kubernetes service. Azure takes care of all of the Kubernetes control plane management (the master nodes) -- delivering the control plane APIs, managing control plane HA, managing control plane upgrades, etc. You only need to look after worker nodes -- how many, the size and spec, where to deploy them, etc.
|
||||
|
||||
High-level steps to build a working AKS cluster are:
|
||||
|
||||
1. Generate credentials to register AKS with Azure AD.
|
||||
2. Deploy an AKS cluster (and register with Azure AD).
|
||||
3. Connect to the AKS cluster.
|
||||
|
||||
### Generate AD registration credentials
|
||||
|
||||
Currently, AKS needs to be manually registered with Azure Active Directory (AD) so that it can receive security tokens and integrate with secure sign-on and authorization.
|
||||
|
||||
> _When you register an [Azure AD "application"](https://docs.microsoft.com/en-us/azure/active-directory/develop/active-directory-application-objects){: target="_blank" class="_"}_ _in the Azure portal, two objects are created in your Azure AD tenant: an application object, and a service principal object._
|
||||
|
||||
The following steps create the registration and output the credentials required to register AKS when deploying a cluster.
|
||||
|
||||
1. Log in to the [Azure portal](https://portal.azure.com){: target="_blank" class="_"}.
|
||||
2. Click **Azure Active Directory** > **App registrations** > **New application registration**.
|
||||
3. Assign a **Name**, select application type **Web app/API**, and enter a **Sign-on URL**. The sign-on URL needs to be a valid DNS name but does not need to be resolvable. An example might be `https://k8s-vote.com`.
|
||||
4. Click **Create**.
|
||||
5. Copy and save the **Application ID** (this is your **Service principal client ID**).
|
||||
6. Click **Settings** > **Keys** and set a description and duration.
|
||||
7. Click **Save**.
|
||||
8. Copy and save the **Value** (this is your **Service principal client secret**, and this is the only time you can see it, so don't lose it!).
|
||||
|
||||
You now have the credentials required to register AKS as part of the next section.
|
||||
|
||||
### Deploy an AKS cluster
|
||||
|
||||
In this section, we build a three-node cluster; your cluster should probably be based on the configuration of your Docker Cloud node cluster.
|
||||
|
||||
Whereas Docker Cloud deploys work to all nodes in a cluster (managers and workers), _Kubernetes only deploys work to worker nodes_. This affects how you should size your cluster. If your Docker Cloud node cluster was working well with three managers and two workers of a particular size, you should probably size your AKS cluster to have five nodes of a similar size.
|
||||
|
||||
> In Docker Cloud, to see the configuration of each of your clusters, select **Node Clusters** > _your_cluster_.
|
||||
|
||||
Before continuing, ensure you know:
|
||||
|
||||
- Your **Azure subscription credentials**
|
||||
- **Azure region** to which you want to deploy your AKS cluster
|
||||
- **SSH public key** to use when connecting to AKS nodes
|
||||
- **Service principal client ID** and **Service principal client secret** (from the previous section)
|
||||
- **Number, size, and spec** of the worker nodes you want.
|
||||
|
||||
To deploy a cluster of AKS nodes:
|
||||
|
||||
1. Select **+Create a resource** from the left-hand panel of the Azure portal dashboard.
|
||||
|
||||
2. Select **Containers** > **Azure Container Service - AKS (preview)**. _Do not select the other ACS option._
|
||||
|
||||
3. Fill out the required fields and click **OK**:
|
||||
|
||||
- **Cluster name**: Set any name for the cluster.
|
||||
- **Kubernetes version**: Select one of the 1.8.x versions.
|
||||
- **Subscription**: Select the subscription to pay for the cluster.
|
||||
- **Resource group**: Create a new resource group or choose one from your existing list.
|
||||
- **Location**: Select the Azure region to which to deploy the cluster. AKS may not be available in all Azure regions.
|
||||
|
||||
4. Configure additional AKS cluster parameters and click **OK**:
|
||||
|
||||
- **User name**: The default option should be fine.
|
||||
- **SSH public key**: The public key (certificate) of a key-pair that you own and that can be used for SSH. If you need to generate a new set, you can use tools such as `ssh-keygen` or PuTTY. The key should be a minimum of 2048 bits of type `ssh-rsa`.
|
||||
- **Service principal client ID**: The application ID that you copied in an earlier step.
|
||||
- **Service principal client secret**: The password value that you copied in a previous step.
|
||||
- **Node count**: The number of _worker_ nodes that you want in the cluster. It should probably match the _total_ number of nodes in your existing Docker Cloud node cluster (managers + workers).
|
||||
- **Node virtual machine size**: The size and specification of each AKS _worker_ node. It should probably match the configuration of your existing Docker Cloud node cluster.
|
||||
|
||||
5. Review the configuration on the Summary screen and click **OK** to deploy the cluster. It can take a few minutes.
|
||||
|
||||
### Connect to the AKS cluster
|
||||
|
||||
You can connect to your AKS cluster from the web-based [Azure Cloud Shell](https://docs.microsoft.com/en-us/azure/cloud-shell/overview){: target="_blank" class="_"}; but to do so from your laptop, or other local terminal, you must:
|
||||
|
||||
- Install the Azure CLI tool (`az`).
|
||||
- Install the Kubernetes CLI (`kubectl`)
|
||||
- Configure `kubectl` to connect to your AKS cluster.
|
||||
|
||||
To connect to your AKS cluster from a local terminal:
|
||||
|
||||
1. Download and install the [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest){: target="_blank" class="_"} for your Operating System.
|
||||
|
||||
2. With the Azure CLI, install the Kubernetes CLI, `kubectl`.
|
||||
|
||||
```
|
||||
> az aks install-cli
|
||||
Downloading client to C:\Program Files (x86)\kubectl.exe from...
|
||||
```
|
||||
|
||||
You can install `kubectl` with or without `az`. If you have `kubectl` already installed, ensure that the current context is correct:
|
||||
|
||||
```
|
||||
> kubectl config get-contexts
|
||||
> kubectl config use-context <my_aks_namespace>
|
||||
```
|
||||
|
||||
3. Start the Azure login process:
|
||||
|
||||
```
|
||||
> az login
|
||||
To sign in, use a web browser to open the page https://aka.ms/devicelogin and enter...
|
||||
```
|
||||
|
||||
4. Open the "devicelogin" page in a browser and paste the authentication code. When complete, the CLI returns some JSON.
|
||||
|
||||
5. Get the credentials and use them to configure `kubectl`:
|
||||
|
||||
The values for `--resource-group` and `--name` are the Resource Group and Cluster Name that you set in the previous steps. Replace the values below with those for your environment.
|
||||
|
||||
```
|
||||
> az aks get-credentials --resource-group=k8s-vote --name=k8s-vote
|
||||
Merged "k8s-vote" as current context in C:\Users\nigel\.kube\config
|
||||
```
|
||||
|
||||
6. Test that `kubectl` can connect to your cluster.
|
||||
|
||||
```
|
||||
> kubectl get nodes
|
||||
NAME STATUS ROLES AGE VERSION
|
||||
aks-agentpool-29046111-0 Ready agent 3m v1.8.1
|
||||
aks-agentpool-29046111-1 Ready agent 2m v1.8.1
|
||||
aks-agentpool-29046111-2 Ready agent 2m v1.8.1
|
||||
```
|
||||
|
||||
If the values returned match your AKS cluster (number of nodes, age, and version), then you have successfully configured `kubectl` to manage your AKS cluster.
|
||||
|
||||
You now have an AKS cluster and have configured `kubectl` to manage it. Let's look at how to convert your Docker Cloud app into a Kubernetes app.
|
||||
|
||||
## Convert Docker Cloud stackfile
|
||||
|
||||
**In the following sections, we discuss each service definition separately, but you should group them into one manifest file with the `.yml` extension, for example, [k8s-vote.yml](#combined-manifest-k8s-vote.yml){: target="_blank" class="_"}.**
|
||||
|
||||
To prepare your applications for migration from Docker Cloud to Kubernetes, you must recreate your Docker Cloud stackfiles as Kubernetes _manifests_. Once you have each application converted, you can test and deploy. Like Docker Cloud stackfiles, Kubernetes manifests are YAML files but usually longer and more complex.
|
||||
|
||||
> In Docker Cloud, to find the stackfiles for your existing applications, you can either: (1) Select **Stacks** > _your_stack_ > **Edit**, or (2) Select **Stacks** > _your_stack_ and scroll down.
|
||||
|
||||
In the Docker Cloud stackfile, the six Docker _services_ in our `example-voting-app` stack are defined as **top-level keys**:
|
||||
|
||||
```
|
||||
db:
|
||||
redis:
|
||||
result:
|
||||
lb:
|
||||
vote:
|
||||
worker:
|
||||
```
|
||||
|
||||
Kubernetes applications are built from objects (such as [Pods](https://kubernetes.io/docs/concepts/workloads/pods/pod/){: target="_blank" class="_"})
|
||||
and object abstractions (such as [Deployments](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/){: target="_blank" class="_"}
|
||||
and [Services](https://kubernetes.io/docs/concepts/services-networking/service/){: target="_blank" class="_"}). For each _Docker service_ in our voting app stack, we create one Kubernetes Deployment and one _Kubernetes Service_. Each Kubernetes Deployment spawns Pods. A Pod represents one or more containers (usually one) and is the smallest unit of work in Kubernetes.
|
||||
|
||||
> A [Docker service](https://docs.docker.com/engine/swarm/how-swarm-mode-works/services/){: target="_blank" class="_"} is one component of an application that is generated from one image.
|
||||
> A [Kubernetes service](https://kubernetes.io/docs/concepts/services-networking/service/){: target="_blank" class="_"} is a networking construct that load balances Pods behind a proxy.
|
||||
|
||||
A Kubernetes Deployment defines the application "service" -- which Docker image to use and the runtime instructions (which container ports to map and the container restart policy). The Deployment is also where you define rolling updates, rollbacks, and other advanced features.
|
||||
|
||||
A Kubernetes Service object is an abstraction that provides stable networking for a set of Pods. A Service is where you can register a cluster-wide DNS name and virtual IP (VIP) for accessing the Pods, and also create cloud-native load balancers.
|
||||
|
||||
This diagram shows four Pods deployed as part of a single Deployment. Each Pod is labeled as “app=vote”. The Deployment has a label selector, “app=vote”, and this combination of labels and label selector is what allows the Deployment object to manage Pods (create, terminate, scale, update, roll back, and so on). Likewise, the Service object selects Pods on the same label (“app=vote”), which allows the Service to provide a stable network abstraction (IP and DNS name) for the Pods.
|
||||
|
||||
*(Diagram: a Deployment and a Service both selecting the same Pods via the “app=vote” label)*
|
||||
|
||||
### db service
|
||||
|
||||
> Consider using a hosted database service for production databases. This is something that, ideally, should not change as part of your migration away from Docker Cloud stacks.
|
||||
|
||||
**Docker Cloud stackfile**: The Docker Cloud stackfile defines an image and a restart policy for the `db` service.
|
||||
|
||||
```
|
||||
db:
|
||||
image: 'postgres:9.4'
|
||||
restart: always
|
||||
```
|
||||
|
||||
**Kubernetes manifest**: The Kubernetes translation defines two object types or "kinds": a _Deployment_ and a _Service_ (separated by three dashes `---`). Each object includes an API version, metadata (labels and name), and a `spec` field for object configuration (that is, the Deployment Pods and the Service).
|
||||
|
||||
```
|
||||
apiVersion: apps/v1beta1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: db
|
||||
labels:
|
||||
app: db
|
||||
spec:
|
||||
selector:
|
||||
matchLabels:
|
||||
app: db
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: db
|
||||
spec:
|
||||
containers:
|
||||
- image: postgres:9.4
|
||||
name: db
|
||||
restartPolicy: Always
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: db
|
||||
spec:
|
||||
clusterIP: None
|
||||
ports:
|
||||
- port: 55555
|
||||
targetPort: 0
|
||||
selector:
|
||||
app: db
|
||||
```
|
||||
|
||||
About the Kubernetes fields in general:
|
||||
|
||||
- `apiVersion` sets the schema version for Kubernetes to use when managing the object. The versions set here are supported on AKS (1.7.7 and 1.8.1).
|
||||
- `kind` defines the object type. In this example, we only define Deployments and Services but there are many others.
|
||||
- `metadata` assigns a name and set of labels to the object.
|
||||
- `spec` is where we configure the object. In a Deployment, `spec` defines the Pods to deploy.
|
||||
|
||||
It is important that **Pod labels** (`Deployment.spec.template.metadata.labels`) match both the Deployment label selector (`Deployment.spec.selector.matchLabels`) and the Service label selector (`Service.spec.selector`). This is how the Deployment object knows which Pods to manage and how the Service object knows which Pods to provide networking for.
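Once the objects are deployed, one quick way to confirm that the labels line up is to list the Pods a selector matches and inspect the Service. The commands below are illustrative and assume the `db` objects from this section are already running:

```
> kubectl get pods --selector app=db
> kubectl describe service db
```

The `Endpoints` field in the `describe` output lists the Pods the Service has currently selected.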
|
||||
|
||||
> Deployment and Service label selectors have different fields in the YAML file because Deployments use [set-based selectors](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#set-based-requirement){: target="_blank" class="_"}
|
||||
and Services use [equality-based selectors](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#equality-based-requirement){: target="_blank" class="_"}.
|
||||
|
||||
For the `db` Deployment, we define a container called `db` based on the `postgres:9.4` Docker image, and define a restart policy. All Pods created by this Deployment have the label, `app=db` and the Deployment selects on them.
|
||||
|
||||
The `db` Service is a “headless” service (`clusterIP: None`). Headless services are useful when you want a stable DNS name but do not need the cluster-wide VIP. They create a stable DNS record, but instead of creating a VIP, they map the DNS name to multiple
|
||||
[A records](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#a-records){: target="_blank" class="_"} -- one for each Pod associated with the Service.
|
||||
|
||||
The Service’s label selector (`Service.spec.selector`) has the value, "app=db". This means the Service provides stable networking and load balancing for all Pods on the cluster labeled as “app=db”. Pods defined in the Deployment section are all labelled as "app=db". It is this mapping between the Service label selector and the Pod labels that tells the Service object which Pods to provide networking for.
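If you want to see the headless behavior for yourself after deploying, you can resolve the `db` name from a throwaway Pod inside the cluster. This is only an illustrative check; the Pod name `dnstest` and the `busybox` image are arbitrary choices:

```
> kubectl run -it --rm dnstest --image=busybox --restart=Never -- nslookup db
```

The lookup should return one A record per Pod backing the `db` Service rather than a single cluster IP.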
|
||||
|
||||
### redis service
|
||||
|
||||
**Docker Cloud stackfile**:
|
||||
|
||||
```
|
||||
redis:
|
||||
image: 'redis:latest'
|
||||
restart: always
|
||||
```
|
||||
|
||||
**Kubernetes manifest**:
|
||||
|
||||
```
|
||||
apiVersion: apps/v1beta1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
labels:
|
||||
app: redis
|
||||
name: redis
|
||||
spec:
|
||||
selector:
|
||||
matchLabels:
|
||||
app: redis
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: redis
|
||||
spec:
|
||||
containers:
|
||||
- image: redis:alpine
|
||||
name: redis
|
||||
ports:
|
||||
- containerPort: 6379
|
||||
restartPolicy: Always
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
labels:
|
||||
app: redis
|
||||
name: redis
|
||||
spec:
|
||||
ports:
|
||||
- port: 6379
|
||||
targetPort: 6379
|
||||
selector:
|
||||
app: redis
|
||||
```
|
||||
|
||||
Here, the Deployment object deploys a Pod from the `redis:alpine` image and sets the container port to `6379`. It also sets the `labels` for the Pods to the same value ("app=redis") as the Deployment’s label selector to tie the two together.
|
||||
|
||||
The Service object defines a cluster-wide DNS mapping for the name "redis" on port 6379. This means that traffic for `tcp://redis:6379` is routed to this Service and is load balanced across all Pods on the cluster with the "app=redis" label. The Service is accessed on the cluster-wide `port` and forwards to the Pods on the `targetPort`. Again, the label-selector for the Service and the labels for the Pods are what tie the two together.
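To see which Pods are currently behind the Service once it is deployed, you can list its endpoints; this is an optional check:

```
> kubectl get endpoints redis
```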
|
||||
|
||||
The diagram shows traffic intended for `tcp://redis:6379` being sent to the redis Service and then load balanced across all Pods that match the Service label selector.
|
||||
|
||||
*(Diagram: traffic for `tcp://redis:6379` arriving at the redis Service and being load balanced across the matching Pods)*
|
||||
|
||||
### lb service
|
||||
|
||||
The Docker Cloud stackfile defines an `lb` service to balance traffic to the `vote` service. On AKS, this is not necessary because Kubernetes lets you define a Service object with `type: LoadBalancer`, which creates a native Azure load balancer to do this job. We demonstrate this in the `vote` section.
|
||||
|
||||
### vote service
|
||||
|
||||
The Docker Cloud stackfile for the `vote` service defines an image, a restart policy, and a specific number of containers (`target_num_containers: 5`). It also enables the Docker Cloud `autoredeploy` feature. We can tell that it listens on port 80 because the Docker Cloud `lb` service forwards traffic to it on port 80; we can also inspect its image.
|
||||
|
||||
> **Autoredeploy options**: Autoredeploy is a Docker Cloud feature that automatically updates running applications every time you push an image. It is not native to Docker CE, AKS or GKE, but you may be able to regain it with Docker Cloud auto-builds, using web-hooks from the Docker Cloud repository for your image back to the CI/CD pipeline in your dev/staging/production environment.
|
||||
|
||||
**Docker Cloud stackfile**:
|
||||
|
||||
```
|
||||
vote:
|
||||
autoredeploy: true
|
||||
image: 'docker/example-voting-app-vote:latest'
|
||||
restart: always
|
||||
target_num_containers: 5
|
||||
```
|
||||
|
||||
**Kubernetes manifest**:
|
||||
|
||||
```
|
||||
apiVersion: apps/v1beta1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
labels:
|
||||
app: vote
|
||||
name: vote
|
||||
spec:
|
||||
selector:
|
||||
matchLabels:
|
||||
app: vote
|
||||
replicas: 5
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: vote
|
||||
spec:
|
||||
containers:
|
||||
- image: docker/example-voting-app-vote:latest
|
||||
name: vote
|
||||
ports:
|
||||
- containerPort: 80
|
||||
restartPolicy: Always
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
labels:
|
||||
app: vote
|
||||
name: vote
|
||||
spec:
|
||||
type: LoadBalancer
|
||||
ports:
|
||||
- port: 80
|
||||
selector:
|
||||
app: vote
|
||||
```
|
||||
|
||||
Again, we ensure that both Deployment and Service objects can find the Pods with matching labels ("app=vote"). We also set the number of Pod replicas to five (`Deployment.spec.replicas`) so that it matches the `target_num_containers` from the Docker Cloud stackfile.
|
||||
|
||||
We define the Service with `type: LoadBalancer`. This creates a native Azure load balancer with a stable, publicly routable IP for the service. It also maps port 80 so that traffic hitting port 80 is load balanced across all five Pod replicas in the cluster. (This is why the `lb` service from the Docker Cloud app is not needed.)
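The replica count lives in the manifest, so the usual way to change it is to edit the file and redeploy. For quick experiments you can also scale the Deployment directly; the value `10` below is only an example:

```
> kubectl scale deployment vote --replicas=10
```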
|
||||
|
||||
### worker service
|
||||
|
||||
Like the `vote` service, the `worker` service defines an image, a restart policy, and a specific number of containers (`target_num_containers: 3`). It also defines the Docker Cloud `autoredeploy` policy (which is not supported in AKS).
|
||||
|
||||
> **Autoredeploy options**: Autoredeploy is a Docker Cloud feature that automatically updates running applications every time you push an image. It is not native to Docker CE, AKS or GKE, but you may be able to regain it with Docker Cloud auto-builds, using web-hooks from the Docker Cloud repository for your image back to the CI/CD pipeline in your dev/staging/production environment.
|
||||
|
||||
**Docker Cloud stackfile**:
|
||||
|
||||
```
|
||||
worker:
|
||||
autoredeploy: true
|
||||
image: 'docker/example-voting-app-worker:latest'
|
||||
restart: always
|
||||
target_num_containers: 3
|
||||
```
|
||||
|
||||
**Kubernetes manifest**:
|
||||
|
||||
```
|
||||
apiVersion: apps/v1beta1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
labels:
|
||||
app: worker
|
||||
name: worker
|
||||
spec:
|
||||
selector:
|
||||
matchLabels:
|
||||
app: worker
|
||||
replicas: 3
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: worker
|
||||
spec:
|
||||
containers:
|
||||
- image: docker/example-voting-app-worker:latest
|
||||
name: worker
|
||||
restartPolicy: Always
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
labels:
|
||||
app: worker
|
||||
name: worker
|
||||
spec:
|
||||
clusterIP: None
|
||||
ports:
|
||||
- port: 55555
|
||||
targetPort: 0
|
||||
selector:
|
||||
app: worker
|
||||
```
|
||||
|
||||
Again, we ensure that both Deployment and Service objects can find the Pods with matching labels ("app=worker").
|
||||
|
||||
The `worker` Service (like `db`) is another ["headless" service](https://kubernetes.io/docs/concepts/services-networking/service/#headless-services){: target="_blank" class="_"} where a DNS name is created and mapped to individual
|
||||
[A records](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#a-records){: target="_blank" class="_"} for each Pod rather than a cluster-wide VIP.
|
||||
|
||||
### result service
|
||||
|
||||
**Docker Cloud stackfile**:
|
||||
|
||||
```
|
||||
result:
|
||||
autoredeploy: true
|
||||
image: 'docker/example-voting-app-result:latest'
|
||||
ports:
|
||||
- '80:80'
|
||||
restart: always
|
||||
```
|
||||
|
||||
**Kubernetes manifest**:
|
||||
|
||||
```
|
||||
apiVersion: apps/v1beta1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
labels:
|
||||
app: result
|
||||
name: result
|
||||
spec:
|
||||
selector:
|
||||
matchLabels:
|
||||
app: result
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: result
|
||||
spec:
|
||||
containers:
|
||||
- image: docker/example-voting-app-result:latest
|
||||
name: result
|
||||
ports:
|
||||
- containerPort: 80
|
||||
restartPolicy: Always
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
labels:
|
||||
app: result
|
||||
name: result
|
||||
spec:
|
||||
type: LoadBalancer
|
||||
ports:
|
||||
- port: 80
|
||||
selector:
|
||||
app: result
|
||||
```
|
||||
|
||||
The Deployment section defines the usual names, labels and container spec. The `result` Service (like the `vote` Service) defines a native Azure load balancer to distribute external traffic to the cluster on port 80.
|
||||
|
||||
### Combined manifest k8s-vote.yml
|
||||
|
||||
You can combine all Deployments and Services in a single YAML file, or have individual YAML files per Docker Cloud service. The choice is yours, but it's usually easier to deploy and manage one file.
|
||||
|
||||
> You should manage your Kubernetes manifest files the way you manage your application code -- checking them in and out of version control repositories etc.
|
||||
|
||||
Here, we combine all the Kubernetes definitions explained above into one YAML file that we call, `k8s-vote.yml`.
|
||||
|
||||
```
|
||||
apiVersion: apps/v1beta1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: db
|
||||
labels:
|
||||
app: db
|
||||
spec:
|
||||
selector:
|
||||
matchLabels:
|
||||
app: db
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: db
|
||||
spec:
|
||||
containers:
|
||||
- image: postgres:9.4
|
||||
name: db
|
||||
restartPolicy: Always
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: db
|
||||
spec:
|
||||
clusterIP: None
|
||||
ports:
|
||||
- port: 55555
|
||||
targetPort: 0
|
||||
selector:
|
||||
app: db
|
||||
---
|
||||
apiVersion: apps/v1beta1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
labels:
|
||||
app: redis
|
||||
name: redis
|
||||
spec:
|
||||
selector:
|
||||
matchLabels:
|
||||
app: redis
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: redis
|
||||
spec:
|
||||
containers:
|
||||
- image: redis:alpine
|
||||
name: redis
|
||||
ports:
|
||||
- containerPort: 6379
|
||||
restartPolicy: Always
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
labels:
|
||||
app: redis
|
||||
name: redis
|
||||
spec:
|
||||
ports:
|
||||
- port: 6379
|
||||
targetPort: 6379
|
||||
selector:
|
||||
app: redis
|
||||
---
|
||||
apiVersion: apps/v1beta1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
labels:
|
||||
app: vote
|
||||
name: vote
|
||||
spec:
|
||||
selector:
|
||||
matchLabels:
|
||||
app: vote
|
||||
replicas: 5
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: vote
|
||||
spec:
|
||||
containers:
|
||||
- image: docker/example-voting-app-vote:latest
|
||||
name: vote
|
||||
ports:
|
||||
- containerPort: 80
|
||||
restartPolicy: Always
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
labels:
|
||||
app: vote
|
||||
name: vote
|
||||
spec:
|
||||
type: LoadBalancer
|
||||
ports:
|
||||
- port: 80
|
||||
selector:
|
||||
app: vote
|
||||
---
|
||||
apiVersion: apps/v1beta1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
labels:
|
||||
app: worker
|
||||
name: worker
|
||||
spec:
|
||||
selector:
|
||||
matchLabels:
|
||||
app: worker
|
||||
replicas: 3
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: worker
|
||||
spec:
|
||||
containers:
|
||||
- image: docker/example-voting-app-worker:latest
|
||||
name: worker
|
||||
restartPolicy: Always
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
labels:
|
||||
app: worker
|
||||
name: worker
|
||||
spec:
|
||||
clusterIP: None
|
||||
ports:
|
||||
- port: 55555
|
||||
targetPort: 0
|
||||
selector:
|
||||
app: worker
|
||||
---
|
||||
apiVersion: apps/v1beta1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
labels:
|
||||
app: result
|
||||
name: result
|
||||
spec:
|
||||
selector:
|
||||
matchLabels:
|
||||
app: result
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: result
|
||||
spec:
|
||||
containers:
|
||||
- image: docker/example-voting-app-result:latest
|
||||
name: result
|
||||
ports:
|
||||
- containerPort: 80
|
||||
restartPolicy: Always
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
labels:
|
||||
app: result
|
||||
name: result
|
||||
spec:
|
||||
type: LoadBalancer
|
||||
ports:
|
||||
- port: 80
|
||||
selector:
|
||||
app: result
|
||||
```
|
||||
|
||||
Save the Kubernetes manifest file (as `k8s-vote.yml`) and check it into version control.
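Optionally, you can ask `kubectl` to parse the manifest without creating anything, which catches YAML and schema mistakes early. This is a client-side check only and assumes your version of `kubectl` supports the `--dry-run` flag:

```
> kubectl create -f k8s-vote.yml --dry-run
```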
|
||||
|
||||
## Test the app on AKS
|
||||
|
||||
Before migrating, you should thoroughly test each new Kubernetes manifest on an AKS cluster. Thorough testing includes _deploying_ the application with the new manifest file, performing _scaling_ operations, increasing _load_, running _failure_ scenarios, and doing _updates_ and _rollbacks_. These tests are specific to each of your applications. You should also manage your manifest files in a version control system.
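The exact tests depend on your application, but commands along these lines exercise the basic scaling, update, and rollback scenarios (the replica count and image tag are placeholders):

```
> kubectl scale deployment vote --replicas=10
> kubectl set image deployment/vote vote=docker/example-voting-app-vote:<new-tag>
> kubectl rollout status deployment/vote
> kubectl rollout undo deployment/vote
```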
|
||||
|
||||
The following steps explain how to deploy your app from the Kubernetes manifest file and verify that it is running. The steps are based on the sample application used throughout this guide, but the general commands should work for any app.
|
||||
|
||||
> Run from an [Azure Cloud Shell](https://shell.azure.com/){: target="_blank" class="_"} or local terminal with `kubectl` configured to talk to your AKS cluster.
|
||||
|
||||
1. Verify that your shell/terminal is configured to talk to your AKS cluster. The output should match your cluster.
|
||||
|
||||
```
|
||||
> kubectl get nodes
|
||||
NAME STATUS ROLES AGE VERSION
|
||||
aks-agentpool-29046111-0 Ready agent 6h v1.8.1
|
||||
aks-agentpool-29046111-1 Ready agent 6h v1.8.1
|
||||
aks-agentpool-29046111-2 Ready agent 6h v1.8.1
|
||||
```
|
||||
|
||||
2. Deploy your Kubernetes application to your cluster.
|
||||
|
||||
The Kubernetes manifest here is `k8s-vote.yml` and lives in the current working directory. To use a different manifest, replace `k8s-vote.yml` with the path to your manifest file.
|
||||
|
||||
```
|
||||
> kubectl create -f k8s-vote.yml
|
||||
|
||||
deployment "db" created
|
||||
service "db" created
|
||||
deployment "redis" created
|
||||
service "redis" created
|
||||
deployment "vote" created
|
||||
service "vote" created
|
||||
deployment "worker" created
|
||||
service "worker" created
|
||||
deployment "result" created
|
||||
service "result" created
|
||||
```
|
||||
|
||||
3. Check the status of the app (both Deployments and Services):
|
||||
|
||||
```
|
||||
> kubectl get deployments
|
||||
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
|
||||
db 1 1 1 1 43s
|
||||
redis 1 1 1 1 43s
|
||||
result 1 1 1 1 43s
|
||||
vote 5 5 5 5 43s
|
||||
worker 3 3 3 3 43s
|
||||
|
||||
> kubectl get services
|
||||
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
|
||||
db ClusterIP None <none> 55555/TCP 48s
|
||||
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 6h
|
||||
redis ClusterIP 10.0.168.188 <none> 6379/TCP 48s
|
||||
result LoadBalancer 10.0.76.157 <pending> 80:31033/TCP 47s
|
||||
vote LoadBalancer 10.0.244.254 <pending> 80:31330/TCP 48s
|
||||
worker ClusterIP None <none> 55555/TCP 48s
|
||||
```
|
||||
|
||||
Both `LoadBalancer` Services are `pending` because it takes a minute or two to provision an Azure load balancer. You can run `kubectl get svc --watch` to see when they are ready. Once provisioned, the output looks like this (with different external IPs):
|
||||
|
||||
```
|
||||
> kubectl get services
|
||||
<Snip>
|
||||
result LoadBalancer 10.0.76.157 52.174.195.232 80:31033/TCP 7m
|
||||
vote LoadBalancer 10.0.244.254 52.174.196.199 80:31330/TCP 8m
|
||||
```
|
||||
|
||||
4. Test that the application works in your new environment.
|
||||
|
||||
For example, the voting app exposes two web front-ends -- one for casting votes and the other for viewing results:
|
||||
|
||||
- Copy/paste the `EXTERNAL-IP` value for the `vote` service into a browser and cast a vote.
|
||||
- Copy/paste the `EXTERNAL-IP` value for the `result` service into a browser and ensure your vote registered.
|
||||
|
||||
If you had a CI/CD pipeline with automated tests and deployments for your Docker Cloud stacks, you should build, test, and implement one for each application on AKS.
|
||||
|
||||
> You can extend your Kubernetes manifest file with advanced features to perform rolling updates and simple rollbacks. But you should not do this until you have confirmed your application is working with the simple manifest file.
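As a sketch of what such an extension might look like, a Deployment can declare an explicit update strategy under its `spec`; the values below are illustrative only:

```
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one Pod below the desired count during an update
      maxSurge: 1         # at most one extra Pod above the desired count during an update
```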
|
||||
|
||||
## Migrate apps from Docker Cloud
|
||||
|
||||
> Remember to point your application CNAMES to new service endpoints.
|
||||
|
||||
How you migrate your applications is unique to your environment and applications.
|
||||
|
||||
- Plan with all developers and operations teams.
|
||||
- Plan with customers.
|
||||
- Plan with owners of other applications that interact with your Docker Cloud app.
|
||||
- Plan a rollback strategy if problems occur.
|
||||
|
||||
Once your migration is in progress, check that everything is working as expected. Ensure that users are hitting the new application on the AKS infrastructure and getting the expected results.
|
||||
|
||||
> Think before you terminate stacks and clusters
|
||||
>
|
||||
> Do not terminate your Docker Cloud stacks or node clusters until some time after the migration has been signed off as successful. If there are problems, you may need to roll back and try again.
|
||||
{: .warning}
|
|
@ -0,0 +1,787 @@
|
|||
---
|
||||
description: How to migrate apps from Docker Cloud to GKE
|
||||
keywords: cloud, migration, kubernetes, google, gke
|
||||
title: Migrate Docker Cloud stacks to Google Kubernetes Engine
|
||||
---
|
||||
|
||||
## GKE Kubernetes
|
||||
|
||||
This page explains how to prepare your applications for migration from Docker Cloud to [Google Kubernetes Engine (GKE)](https://cloud.google.com/free/){: target="_blank" class="_"} clusters. GKE is a hosted Kubernetes service on Google Cloud Platform (GCP). It exposes standard Kubernetes APIs so that standard Kubernetes tools and apps run on it without needing to be reconfigured.
|
||||
|
||||
At a high level, migrating your Docker Cloud applications requires that you:
|
||||
|
||||
- **Build** a target environment (Kubernetes cluster on GKE).
|
||||
- **Convert** your Docker Cloud YAML stackfiles.
|
||||
- **Test** the converted YAML stackfiles in the new environment.
|
||||
- **Point** your application CNAMES to new service endpoints.
|
||||
- **Migrate** your applications from Docker Cloud to the new environment.
|
||||
|
||||
To demonstrate, we **build** a target environment of GKE nodes, **convert** the Docker Cloud stackfile for [example-voting-app](https://github.com/dockersamples/example-voting-app){: target="_blank" class="_"} to a Kubernetes manifest, and **test** the manifest in the new environment to ensure that it is safe to migrate.
|
||||
|
||||
> The actual process of migrating -- switching customers from your Docker Cloud applications to GKE applications -- will vary by application and environment.
|
||||
|
||||
## Voting-app example
|
||||
|
||||
The Docker Cloud stack of our example voting application is defined in [dockercloud.yml](https://raw.githubusercontent.com/dockersamples/example-voting-app/master/dockercloud.yml){: target="_blank" class="_"}. This document explains how `dockercloud.yml` is converted to a Kubernetes YAML manifest file so that you have the tools to do the same for your applications.
|
||||
|
||||
In the [dockercloud.yml](https://raw.githubusercontent.com/dockersamples/example-voting-app/master/dockercloud.yml){: target="_blank" class="_"}, the voting app is defined as a stack of six microservices:
|
||||
|
||||
- **vote**: Web front-end that displays voting options
|
||||
- **redis**: In-memory k/v store that collects votes
|
||||
- **worker**: Stores votes in database
|
||||
- **db**: Persistent store for votes
|
||||
- **result**: Web server that pulls and displays results from database
|
||||
- **lb**: Container-based load balancer
|
||||
|
||||
Votes are accepted by the `vote` service and stored in a persistent backend database (`db`) with the help of the `redis`, `worker`, and `lb` services. The vote tally is displayed by the `result` service.
|
||||
|
||||
*(Diagram: the six voting app services and the flow of votes between them)*
|
||||
|
||||
## Migration prerequisites
|
||||
|
||||
To complete the migration from Docker Cloud to Kubernetes on GKE, you need:
|
||||
|
||||
- An active Google Cloud subscription with billing enabled.
|
||||
|
||||
## Build target environment
|
||||
|
||||
Google Kubernetes Engine (GKE) is a managed Kubernetes service on the Google Cloud Platform (GCP). It takes care of all of the Kubernetes control plane management (the master nodes) -- delivering the control plane APIs, managing control plane HA, managing control plane upgrades, etc. You only need to look after worker nodes -- how many, the size and spec, where to deploy them, etc.
|
||||
|
||||
High-level steps to build a working GKE cluster are:
|
||||
|
||||
1. Create a new GKE project.
|
||||
2. Create a GKE cluster.
|
||||
3. Connect to the GKE cluster.
|
||||
|
||||
### Create a new GKE project
|
||||
|
||||
Everything in the Google Cloud Platform has to sit inside of a _project_. Let's create one.
|
||||
|
||||
1. Log in to the [Google Cloud Platform Console](https://console.cloud.google.com){: target="_blank" class="_"}.
|
||||
2. Create a new project. Either:
|
||||
|
||||
- Select **Create an empty project** from the home screen, or ...
|
||||
- Open **Select a project** from the top of the screen and click **+**.
|
||||
|
||||
3. Name the project and click **Create**. It may take a minute.
|
||||
|
||||
> The examples in this document assume a project named, `proj-k8s-vote`.
|
||||
|
||||
### Create a GKE cluster
|
||||
|
||||
In this section, we build a three-node cluster; your cluster should probably be based on the configuration of your Docker Cloud node cluster.
|
||||
|
||||
Whereas Docker Cloud deploys work to all nodes in a cluster (managers and workers), _Kubernetes only deploys work to worker nodes_. This affects how you should size your cluster. If your Docker Cloud node cluster was working well with three managers and two workers of a particular size, you should probably size your GKE cluster to have five nodes of a similar size.
|
||||
|
||||
> In Docker Cloud, to see the configuration of each of your clusters, select **Node Clusters** > _your_cluster_.
|
||||
|
||||
Before continuing, ensure you know:
|
||||
|
||||
- **Region and zone** in which you want to deploy your GKE cluster
|
||||
- **Number, size, and spec** of the worker nodes you want.
|
||||
|
||||
To build:
|
||||
|
||||
1. Log into the [GCP Console](https://console.cloud.google.com){: target="_blank" class="_"}.
|
||||
|
||||
2. Select your project from **Select a project** at the top of the Console screen.
|
||||
|
||||
3. Click **Kubernetes Engine** from the left-hand menu. It may take a minute to start.
|
||||
|
||||
4. Click **Create Cluster**.
|
||||
|
||||
5. Configure the required cluster options:
|
||||
|
||||
- **Name:** An arbitrary name for the cluster.
|
||||
- **Description:** An arbitrary description for the cluster.
|
||||
- **Location:** Determines if the Kubernetes control plane nodes (masters) are in a single availability zone or spread across availability zones within a GCP Region.
|
||||
- **Zone/Region:** The zone or region in which to deploy the cluster.
|
||||
- **Cluster version:** The Kubernetes version. You should probably use a 1.8.x or 1.9.x version.
|
||||
- **Machine type:** The type of GKE VM for the worker nodes. This should probably match your Docker Cloud node cluster.
|
||||
- **Node image:** The OS to run on each Kubernetes worker node. Use Ubuntu if you require NFS, glusterfs, Sysdig, or Debian packages, otherwise use a [COS (container-optimized OS)](https://cloud.google.com/container-optimized-os/).
|
||||
- **Size:** The number of _worker_ nodes that you want in the GKE cluster. It should probably match the _total_ number of nodes in your existing Docker Cloud node cluster (managers + workers).
|
||||
|
||||
You should carefully consider the other configuration options, but most deployments should be OK with the default values.
|
||||
|
||||
6. Click **Create**. It takes a minute or two for the cluster to create.
|
||||
|
||||
Once the cluster is created, you can click its name to see more details.
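If you prefer the command line to the Console, a roughly equivalent cluster can be created with `gcloud`. The cluster name, zone, node count, machine type, and version below are examples only; adjust them to match your environment:

```
$ gcloud container clusters create clus-k8s-vote \
    --zone europe-west2-c \
    --num-nodes 3 \
    --machine-type n1-standard-1 \
    --cluster-version 1.9.2-gke.1
```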
|
||||
|
||||
### Connect to the GKE cluster
|
||||
|
||||
You can connect to your GKE cluster from the web-based [Google Cloud Shell](https://cloud.google.com/shell/){: target="_blank" class="_"}; but to do so from your laptop, or other local terminal, you must:
|
||||
|
||||
- Install and configure the `gcloud` CLI tool.
|
||||
- Install the Kubernetes CLI (`kubectl`)
|
||||
- Configure `kubectl` to connect to your cluster.
|
||||
|
||||
The `gcloud` tool is the command-line tool for interacting with the Google Cloud Platform. It is installed as part of the Google Cloud SDK.
|
||||
|
||||
1. Download and install the [Cloud SDK](https://cloud.google.com/sdk/){: target="_blank" class="_"} for your operating system.
|
||||
|
||||
2. Configure `gcloud` and follow all the prompts:
|
||||
|
||||
```
|
||||
$ gcloud init --console-only
|
||||
```
|
||||
|
||||
> Follow _all_ prompts, including the one to open a web browser and approve the requested authorizations. As part of the procedure you must copy and paste a code into the terminal window to authorize `gcloud`.
|
||||
|
||||
3. Install [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl):
|
||||
|
||||
```
|
||||
$ gcloud components list
|
||||
$ gcloud components install kubectl
|
||||
```
|
||||
|
||||
You can install `kubectl` with or without `gcloud`. If you have `kubectl` already installed, ensure that the current context is correct:
|
||||
|
||||
```
|
||||
$ kubectl config get-contexts
|
||||
$ kubectl config use-context <my_gke_namespace>
|
||||
```
|
||||
|
||||
4. Configure `kubectl` to talk to your GKE cluster.
|
||||
|
||||
- In GKE, click the **Connect** button at the end of the line representing your cluster.
|
||||
- Copy the long command and paste it into your local terminal window. Your command may differ.
|
||||
|
||||
```
|
||||
$ gcloud container clusters get-credentials clus-k8s-vote --zone europe-west2-c --project proj-k8s-vote
|
||||
|
||||
Fetching cluster endpoint and auth data.
|
||||
kubeconfig entry generated for clus-k8s-vote.
|
||||
```
|
||||
|
||||
5. Test the `kubectl` configuration:
|
||||
|
||||
```
|
||||
$ kubectl get nodes
|
||||
NAME STATUS ROLES AGE VERSION
|
||||
gke-clus-k8s-vote-default-pool-81bd226c-2jtp Ready <none> 1h v1.9.2-gke.1
|
||||
gke-clus-k8s-vote-default-pool-81bd226c-mn4k Ready <none> 1h v1.9.2-gke.1
|
||||
gke-clus-k8s-vote-default-pool-81bd226c-qjm2 Ready <none> 1h v1.9.2-gke.1
|
||||
```
|
||||
|
||||
If the values returned match your GKE cluster (number of nodes, age, and version), then you have successfully configured `kubectl` to manage your GKE cluster.
|
||||
|
||||
You now have a GKE cluster and have configured `kubectl` to manage it. Let's look at how to convert your Docker Cloud app into a Kubernetes app.
|
||||
|
||||
## Convert Docker Cloud stackfile
|
||||
|
||||
**In the following sections, we discuss each service definition separately, but you should group them into one manifest file with the `.yml` extension, for example, [k8s-vote.yml](#combined-manifest-k8s-vote.yml){: target="_blank" class="_"}.**
|
||||
|
||||
To prepare your applications for migration from Docker Cloud to Kubernetes, you must recreate your Docker Cloud stackfiles as Kubernetes _manifests_. Once you have each application converted, you can test and deploy. Like Docker Cloud stackfiles, Kubernetes manifests are YAML files but usually longer and more complex.
|
||||
|
||||
> In Docker Cloud, to find the stackfiles for your existing applications, you can either: (1) Select **Stacks** > _your_stack_ > **Edit**, or (2) Select **Stacks** > _your_stack_ and scroll down.
|
||||
|
||||
In the Docker Cloud stackfile, the six Docker _services_ in our `example-voting-app` stack are defined as **top-level keys**:
|
||||
|
||||
```
|
||||
db:
|
||||
redis:
|
||||
result:
|
||||
lb:
|
||||
vote:
|
||||
worker:
|
||||
```
|
||||
|
||||
Kubernetes applications are built from objects (such as [Pods](https://kubernetes.io/docs/concepts/workloads/pods/pod/){: target="_blank" class="_"})
|
||||
and object abstractions (such as [Deployments](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/){: target="_blank" class="_"}
|
||||
and [Services](https://kubernetes.io/docs/concepts/services-networking/service/){: target="_blank" class="_"}). For each _Docker service_ in our voting app stack, we create one Kubernetes Deployment and one _Kubernetes Service_. Each Kubernetes Deployment spawns Pods. A Pod represents one or more containers (usually one) and is the smallest unit of work in Kubernetes.
|
||||
|
||||
> A [Docker service](https://docs.docker.com/engine/swarm/how-swarm-mode-works/services/){: target="_blank" class="_"} is one component of an application that is generated from one image.
|
||||
> A [Kubernetes service](https://kubernetes.io/docs/concepts/services-networking/service/){: target="_blank" class="_"} is a networking construct that load balances Pods behind a proxy.
|
||||
|
||||
A Kubernetes Deployment defines the application "service" -- which Docker image to use and the runtime instructions (which container ports to map and the container restart policy). The Deployment is also where you define rolling updates, rollbacks, and other advanced features.
|
||||
|
||||
A Kubernetes Service object is an abstraction that provides stable networking for a set of Pods. A Service is where you can register a cluster-wide DNS name and virtual IP (VIP) for accessing the Pods, and also create cloud-native load balancers.
|
||||
|
||||
This diagram shows four Pods deployed as part of a single Deployment. Each Pod is labeled as “app=vote”. The Deployment has a label selector, “app=vote”, and this combination of labels and label selector is what allows the Deployment object to manage Pods (create, terminate, scale, update, roll back, and so on). Likewise, the Service object selects Pods on the same label (“app=vote”), which allows the Service to provide a stable network abstraction (IP and DNS name) for the Pods.
|
||||
|
||||
*(Diagram: a Deployment and a Service both selecting the same Pods via the “app=vote” label)*
|
||||
|
||||
### db service
|
||||
|
||||
> Consider using a hosted database service for production databases. This is something that, ideally, should not change as part of your migration away from Docker Cloud stacks.
|
||||
|
||||
**Docker Cloud stackfile**: The Docker Cloud stackfile defines an image and a restart policy for the `db` service.
|
||||
|
||||
```
|
||||
db:
|
||||
image: 'postgres:9.4'
|
||||
restart: always
|
||||
```
|
||||
|
||||
**Kubernetes manifest**: The Kubernetes translation defines two object types or "kinds": a _Deployment_ and a _Service_ (separated by three dashes `---`). Each object includes an API version, metadata (labels and name), and a `spec` field for object configuration (that is, the Deployment Pods and the Service).
|
||||
|
||||
```
|
||||
apiVersion: apps/v1beta1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: db
|
||||
labels:
|
||||
app: db
|
||||
spec:
|
||||
selector:
|
||||
matchLabels:
|
||||
app: db
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: db
|
||||
spec:
|
||||
containers:
|
||||
- image: postgres:9.4
|
||||
name: db
|
||||
restartPolicy: Always
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: db
|
||||
spec:
|
||||
clusterIP: None
|
||||
ports:
|
||||
- port: 55555
|
||||
targetPort: 0
|
||||
selector:
|
||||
app: db
|
||||
```
|
||||
|
||||
About the Kubernetes fields in general:
|
||||
|
||||
- `apiVersion` sets the schema version for Kubernetes to use when managing the object.
|
||||
- `kind` defines the object type. In this example, we only define Deployments and Services but there are many others.
|
||||
- `metadata` assigns a name and set of labels to the object.
|
||||
- `spec` is where we configure the object. In a Deployment, `spec` defines the Pods to deploy.
|
||||
|
||||
It is important that **Pod labels** (`Deployment.spec.template.metadata.labels`) match both the Deployment label selector (`Deployment.spec.selector.matchLabels`) and the Service label selector (`Service.spec.selector`). This is how the Deployment object knows which Pods to manage and how the Service object knows which Pods to provide networking for.
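As with any Kubernetes cluster, you can confirm the label plumbing after deploying by listing the Pods a selector matches and inspecting the Service. These commands are illustrative and assume the `db` objects from this section are running:

```
$ kubectl get pods --selector app=db
$ kubectl describe service db
```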
|
||||
|
||||
> Deployment and Service label selectors have different fields in the YAML file because Deployments use [set-based selectors](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#set-based-requirement){: target="_blank" class="_"}
|
||||
and Services use [equality-based selectors](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#equality-based-requirement){: target="_blank" class="_"}.
|
||||
|
||||
For the `db` Deployment, we define a container called `db` based on the `postgres:9.4` Docker image, and define a restart policy. All Pods created by this Deployment have the label, `app=db` and the Deployment selects on them.
|
||||
|
||||
The `db` Service is a “headless” service (`clusterIP: None`). Headless services are useful when you want a stable DNS name but do not need the cluster-wide VIP. They create a stable DNS record, but instead of creating a VIP, they map the DNS name to multiple
|
||||
[A records](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#a-records){: target="_blank" class="_"} -- one for each Pod associated with the Service.
|
||||
|
||||
The Service’s label selector (`Service.spec.selector`) has the value, "app=db". This means the Service provides stable networking and load balancing for all Pods on the cluster labeled as “app=db”. Pods defined in the Deployment section are all labelled as "app=db". It is this mapping between the Service label selector and the Pod labels that tells the Service object which Pods to provide networking for.
|
||||
|
||||
### redis service
|
||||
|
||||
**Docker Cloud stackfile**:
|
||||
|
||||
```
|
||||
redis:
|
||||
image: 'redis:latest'
|
||||
restart: always
|
||||
```
|
||||
|
||||
**Kubernetes manifest**:
|
||||
|
||||
```
|
||||
apiVersion: apps/v1beta1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
labels:
|
||||
app: redis
|
||||
name: redis
|
||||
spec:
|
||||
selector:
|
||||
matchLabels:
|
||||
app: redis
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: redis
|
||||
spec:
|
||||
containers:
|
||||
- image: redis:alpine
|
||||
name: redis
|
||||
ports:
|
||||
- containerPort: 6379
|
||||
restartPolicy: Always
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
labels:
|
||||
app: redis
|
||||
name: redis
|
||||
spec:
|
||||
ports:
|
||||
- port: 6379
|
||||
targetPort: 6379
|
||||
selector:
|
||||
app: redis
|
||||
```
|
||||
|
||||
Here, the Deployment object deploys a Pod from the `redis:alpine` image and sets the container port to `6379`. It also sets the `labels` for the Pods to the same value ("app=redis") as the Deployment’s label selector to tie the two together.
|
||||
|
||||
The Service object defines a cluster-wide DNS mapping for the name "redis" on port 6379. This means that traffic for `tcp://redis:6379` is routed to this Service and is load balanced across all Pods on the cluster with the "app=redis" label. The Service is accessed on the cluster-wide `port` and forwards to the Pods on the `targetPort`. Again, the label-selector for the Service and the labels for the Pods are what tie the two together.
|
||||
|
||||
The diagram shows traffic intended for `tcp://redis:6379` being sent to the redis Service and then load balanced across all Pods that match the Service label selector.
|
||||
|
||||
*(Diagram: traffic for `tcp://redis:6379` arriving at the redis Service and being load balanced across the matching Pods)*
|
||||
|
||||
### lb service
|
||||
|
||||
The Docker Cloud stackfile defines an `lb` service to balance traffic to the `vote` service. On GKE, this is not necessary because Kubernetes lets you define a Service object with `type: LoadBalancer`, which creates a native GCP load balancer to do this job. We demonstrate this in the `vote` section.
|
||||
|
||||
### vote service
|
||||
|
||||
The Docker Cloud stackfile for the `vote` service defines an image, a restart policy, and a specific number of containers (`target_num_containers: 5`). It also enables the Docker Cloud `autoredeploy` feature. We can tell that it listens on port 80 because the Docker Cloud `lb` service forwards traffic to it on port 80; we can also inspect its image.
|
||||
|
||||
> **Autoredeploy options**: Autoredeploy is a Docker Cloud feature that automatically updates running applications every time you push an image. It is not native to Docker CE, AKS or GKE, but you may be able to regain it with Docker Cloud auto-builds, using web-hooks from the Docker Cloud repository for your image back to the CI/CD pipeline in your dev/staging/production environment.
|
||||
|
||||
**Docker Cloud stackfile**:
|
||||
|
||||
```
|
||||
vote:
|
||||
autoredeploy: true
|
||||
image: 'docker/example-voting-app-vote:latest'
|
||||
restart: always
|
||||
target_num_containers: 5
|
||||
```
|
||||
|
||||
**Kubernetes manifest**:
|
||||
|
||||
```
|
||||
apiVersion: apps/v1beta1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
labels:
|
||||
app: vote
|
||||
name: vote
|
||||
spec:
|
||||
selector:
|
||||
matchLabels:
|
||||
app: vote
|
||||
replicas: 5
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: vote
|
||||
spec:
|
||||
containers:
|
||||
- image: docker/example-voting-app-vote:latest
|
||||
name: vote
|
||||
ports:
|
||||
- containerPort: 80
|
||||
restartPolicy: Always
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
labels:
|
||||
app: vote
|
||||
name: vote
|
||||
spec:
|
||||
type: LoadBalancer
|
||||
ports:
|
||||
- port: 80
|
||||
selector:
|
||||
app: vote
|
||||
```
|
||||
|
||||
Again, we ensure that both Deployment and Service objects can find the Pods with matching labels ("app=vote"). We also set the number of Pod replicas to five (`Deployment.spec.replicas`) so that it matches the `target_num_containers` from the Docker Cloud stackfile.
|
||||
|
||||
We define the Service with `type: LoadBalancer`. This creates a native GCP load balancer with a stable, publicly routable IP for the service. It also maps port 80 so that traffic hitting port 80 is load balanced across all five Pod replicas in the cluster. (This is why the `lb` service from the Docker Cloud app is not needed.)
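The replica count lives in the manifest, so the usual way to change it is to edit the file and redeploy. For quick experiments you can also scale the Deployment directly; the value `10` is only an example:

```
$ kubectl scale deployment vote --replicas=10
```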
|
||||
|
||||
### worker service
|
||||
|
||||
Like the `vote` service, the `worker` service defines an image, a restart policy, and a specific number of Pods (replicas: 3). It also defines the Docker Cloud `autoredeploy` policy (which is not supported in GKE).
|
||||
|
||||
> **Autoredeploy options**: Autoredeploy is a Docker Cloud feature that automatically updates running applications every time you push an image. It is not native to Docker CE, AKS or GKE, but you may be able to regain it with Docker Cloud auto-builds, using web-hooks from the Docker Cloud repository for your image back to the CI/CD pipeline in your dev/staging/production environment.
|
||||
|
||||
**Docker Cloud stackfile**:
|
||||
|
||||
```
|
||||
worker:
|
||||
autoredeploy: true
|
||||
image: 'docker/example-voting-app-worker:latest'
|
||||
restart: always
|
||||
target_num_containers: 3
|
||||
```
|
||||
|
||||
**Kubernetes manifest**:
|
||||
|
||||
```
|
||||
apiVersion: apps/v1beta1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
labels:
|
||||
app: worker
|
||||
name: worker
|
||||
spec:
|
||||
selector:
|
||||
matchLabels:
|
||||
app: worker
|
||||
replicas: 3
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: worker
|
||||
spec:
|
||||
containers:
|
||||
- image: docker/example-voting-app-worker:latest
|
||||
name: worker
|
||||
restartPolicy: Always
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
labels:
|
||||
app: worker
|
||||
name: worker
|
||||
spec:
|
||||
clusterIP: None
|
||||
ports:
|
||||
- port: 55555
|
||||
targetPort: 0
|
||||
selector:
|
||||
app: worker
|
||||
```
|
||||
|
||||
Again, we ensure that both Deployment and Service objects can find the Pods with matching labels ("app=worker").
|
||||
|
||||
The `worker` Service (like `db`) is another ["headless" service](https://kubernetes.io/docs/concepts/services-networking/service/#headless-services){: target="_blank" class="_"} where a DNS name is created and mapped to individual
|
||||
[A records](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#a-records){: target="_blank" class="_"} for each Pod rather than a cluster-wide VIP.
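If you want to see this behaviour for yourself, an optional quick check (a sketch only; it assumes a `busybox:1.28` image is pullable from your cluster) is to resolve the headless Service name from a throwaway Pod. Each A record returned maps to an individual Pod IP rather than a single VIP:

```
# Start a temporary Pod, resolve the headless "worker" Service, then clean up.
$ kubectl run dnscheck --rm -it --image=busybox:1.28 --restart=Never -- nslookup worker
```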
|
||||
|
||||
### result service
|
||||
|
||||
**Docker Cloud stackfile**:
|
||||
|
||||
```
|
||||
result:
|
||||
autoredeploy: true
|
||||
image: 'docker/example-voting-app-result:latest'
|
||||
ports:
|
||||
- '80:80'
|
||||
restart: always
|
||||
```
|
||||
|
||||
**Kubernetes manifest**:
|
||||
|
||||
```
|
||||
apiVersion: apps/v1beta1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
labels:
|
||||
app: result
|
||||
name: result
|
||||
spec:
|
||||
selector:
|
||||
matchLabels:
|
||||
app: result
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: result
|
||||
spec:
|
||||
containers:
|
||||
- image: docker/example-voting-app-result:latest
|
||||
name: result
|
||||
ports:
|
||||
- containerPort: 80
|
||||
restartPolicy: Always
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
labels:
|
||||
app: result
|
||||
name: result
|
||||
spec:
|
||||
type: LoadBalancer
|
||||
ports:
|
||||
- port: 80
|
||||
selector:
|
||||
app: result
|
||||
```
|
||||
|
||||
The Deployment section defines the usual names, labels and container spec. The `result` Service (like the `vote` Service) defines a GCP-native load balancer to distribute external traffic to the cluster on port 80.
|
||||
|
||||
### Combined manifest k8s-vote.yml
|
||||
|
||||
You can combine all Deployments and Services in a single YAML file, or have individual YAML files per Docker Cloud service. The choice is yours, but it's usually easier to deploy and manage one file.
|
||||
|
||||
> You should manage your Kubernetes manifest files the way you manage your application code -- checking them into and out of version control repositories, and so on.
|
||||
|
||||
Here, we combine all the Kubernetes definitions explained above into one YAML file that we call `k8s-vote.yml`.
|
||||
|
||||
```
|
||||
apiVersion: apps/v1beta1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: db
|
||||
labels:
|
||||
app: db
|
||||
spec:
|
||||
selector:
|
||||
matchLabels:
|
||||
app: db
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: db
|
||||
spec:
|
||||
containers:
|
||||
- image: postgres:9.4
|
||||
name: db
|
||||
restartPolicy: Always
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: db
|
||||
spec:
|
||||
clusterIP: None
|
||||
ports:
|
||||
- port: 55555
|
||||
targetPort: 0
|
||||
selector:
|
||||
app: db
|
||||
---
|
||||
apiVersion: apps/v1beta1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
labels:
|
||||
app: redis
|
||||
name: redis
|
||||
spec:
|
||||
selector:
|
||||
matchLabels:
|
||||
app: redis
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: redis
|
||||
spec:
|
||||
containers:
|
||||
- image: redis:alpine
|
||||
name: redis
|
||||
ports:
|
||||
- containerPort: 6379
|
||||
restartPolicy: Always
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
labels:
|
||||
app: redis
|
||||
name: redis
|
||||
spec:
|
||||
ports:
|
||||
- port: 6379
|
||||
targetPort: 6379
|
||||
selector:
|
||||
app: redis
|
||||
---
|
||||
apiVersion: apps/v1beta1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
labels:
|
||||
app: vote
|
||||
name: vote
|
||||
spec:
|
||||
selector:
|
||||
matchLabels:
|
||||
app: vote
|
||||
replicas: 5
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: vote
|
||||
spec:
|
||||
containers:
|
||||
- image: docker/example-voting-app-vote:latest
|
||||
name: vote
|
||||
ports:
|
||||
- containerPort: 80
|
||||
restartPolicy: Always
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
labels:
|
||||
app: vote
|
||||
name: vote
|
||||
spec:
|
||||
type: LoadBalancer
|
||||
ports:
|
||||
- port: 80
|
||||
selector:
|
||||
app: vote
|
||||
---
|
||||
apiVersion: apps/v1beta1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
labels:
|
||||
app: worker
|
||||
name: worker
|
||||
spec:
|
||||
selector:
|
||||
matchLabels:
|
||||
app: worker
|
||||
replicas: 3
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: worker
|
||||
spec:
|
||||
containers:
|
||||
- image: docker/example-voting-app-worker:latest
|
||||
name: worker
|
||||
restartPolicy: Always
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
labels:
|
||||
app: worker
|
||||
name: worker
|
||||
spec:
|
||||
clusterIP: None
|
||||
ports:
|
||||
- port: 55555
|
||||
targetPort: 0
|
||||
selector:
|
||||
app: worker
|
||||
---
|
||||
apiVersion: apps/v1beta1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
labels:
|
||||
app: result
|
||||
name: result
|
||||
spec:
|
||||
selector:
|
||||
matchLabels:
|
||||
app: result
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: result
|
||||
spec:
|
||||
containers:
|
||||
- image: docker/example-voting-app-result:latest
|
||||
name: result
|
||||
ports:
|
||||
- containerPort: 80
|
||||
restartPolicy: Always
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
labels:
|
||||
app: result
|
||||
name: result
|
||||
spec:
|
||||
type: LoadBalancer
|
||||
ports:
|
||||
- port: 80
|
||||
selector:
|
||||
app: result
|
||||
```
|
||||
|
||||
Save the Kubernetes manifest file (as `k8s-vote.yml`) and check it into version control.
|
||||
|
||||
## Test the app on GKE
|
||||
|
||||
Before migrating, you should thoroughly test each new Kubernetes manifest on a GKE cluster. Healthy testing includes _deploying_ the application with the new manifest file, performing _scaling_ operations, increasing _load_, running _failure_ scenarios, and doing _updates_ and _rollbacks_. These tests are specific to each of your applications. You should also manage your manifest files in a version control system.
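For the sample voting app, the following `kubectl` commands sketch the kinds of operations worth exercising; the `<new-tag>` value is a placeholder for a real image tag, not part of the original stack:

```
$ kubectl scale deployment vote --replicas=10                                        # scaling operation
$ kubectl delete pod -l app=vote                                                     # crude failure test; Pods are recreated
$ kubectl set image deployment/vote vote=docker/example-voting-app-vote:<new-tag>    # trigger a rolling update
$ kubectl rollout status deployment/vote                                             # watch the update complete
$ kubectl rollout undo deployment/vote                                               # roll back to the previous revision
```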
|
||||
|
||||
The following steps explain how to deploy your app from the Kubernetes manifest file and verify that it is running. The steps are based on the sample application used throughout this guide, but the general commands should work for any app.
|
||||
|
||||
> Run from a [Google Cloud Shell](https://cloud.google.com/shell/){: target="_blank" class="_"}
|
||||
or local terminal with `kubectl` configured to talk to your GKE cluster.
|
||||
|
||||
1. Verify that your shell/terminal is configured to talk to your GKE cluster. If the output matches your cluster, you're ready to proceed with the next steps.
|
||||
|
||||
```
|
||||
$ kubectl get nodes
|
||||
NAME STATUS ROLES AGE VERSION
|
||||
gke-clus-k8s-vote-default-pool-81bd226c-2jtp Ready <none> 1h v1.9.2-gke.1
|
||||
gke-clus-k8s-vote-default-pool-81bd226c-mn4k Ready <none> 1h v1.9.2-gke.1
|
||||
gke-clus-k8s-vote-default-pool-81bd226c-qjm2 Ready <none> 1h v1.9.2-gke.1
|
||||
|
||||
```
|
||||
|
||||
2. Deploy your Kubernetes application to your cluster.
|
||||
|
||||
The Kubernetes manifest here is `k8s-vote.yml` and lives in the current working directory. To use a different manifest, substitute `k8s-vote.yml` with the name of your manifest file.
|
||||
|
||||
```
|
||||
$ kubectl create -f k8s-vote.yml
|
||||
|
||||
deployment "db" created
|
||||
service "db" created
|
||||
deployment "redis" created
|
||||
service "redis" created
|
||||
deployment "vote" created
|
||||
service "vote" created
|
||||
deployment "worker" created
|
||||
service "worker" created
|
||||
deployment "result" created
|
||||
service "result" created
|
||||
```
|
||||
|
||||
3. Check the status of the app (both Deployments and Services):
|
||||
|
||||
```
|
||||
$ kubectl get deployments
|
||||
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
|
||||
db 1 1 1 1 43s
|
||||
redis 1 1 1 1 43s
|
||||
result 1 1 1 1 43s
|
||||
vote 5 5 5 5 43s
|
||||
worker 3 3 3 3 43s
|
||||
|
||||
$ kubectl get services
|
||||
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
|
||||
db ClusterIP None <none> 55555/TCP 48s
|
||||
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 6h
|
||||
redis ClusterIP 10.0.168.188 <none> 6379/TCP 48s
|
||||
result LoadBalancer 10.0.76.157 <pending> 80:31033/TCP 47s
|
||||
vote LoadBalancer 10.0.244.254 <pending> 80:31330/TCP 48s
|
||||
worker ClusterIP None <none> 55555/TCP 48s
|
||||
```
|
||||
|
||||
Both `LoadBalancer` Services are `pending` because it takes a minute or two to provision a GCP load balancer. You can run `kubectl get svc --watch` to see when they are ready. Once provisioned, the output looks like this (with different external IPs):
|
||||
|
||||
```
|
||||
$ kubectl get services
|
||||
<Snip>
|
||||
result LoadBalancer 10.0.76.157 52.174.195.232 80:31033/TCP 7m
|
||||
vote LoadBalancer 10.0.244.254 52.174.196.199 80:31330/TCP 8m
|
||||
```
|
||||
|
||||
4. Test that the application works in your new environment.
|
||||
|
||||
For example, the voting app exposes two web front-ends -- one for casting votes and the other for viewing results:
|
||||
|
||||
- Copy/paste the `EXTERNAL-IP` value for the `vote` service into a browser and cast a vote.
|
||||
- Copy/paste the `EXTERNAL-IP` value for the `result` service into a browser and ensure your vote registered.
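You can also do a quick check from the shell; the IP addresses below are just the example values shown in the earlier output, so substitute your own `EXTERNAL-IP` values:

```
$ curl -I http://52.174.196.199    # vote front-end (EXTERNAL-IP of the vote service)
$ curl -I http://52.174.195.232    # result front-end (EXTERNAL-IP of the result service)
```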
|
||||
|
||||
If you had a CI/CD pipeline with automated tests and deployments for your Docker Cloud stacks, you should build, test, and implement one for each application on GKE.
|
||||
|
||||
> You can extend your Kubernetes manifest file with advanced features to perform rolling updates and simple rollbacks. But you should not do this until you have confirmed your application is working with the simple manifest file.
|
||||
|
||||
## Migrate apps from Docker Cloud
|
||||
|
||||
> Remember to point your application CNAMES to new service endpoints.
|
||||
|
||||
How you migrate your applications is unique to your environment and applications.
|
||||
|
||||
- Plan with all developers and operations teams.
|
||||
- Plan with customers.
|
||||
- Plan with owners of other applications that interact with your Docker Cloud app.
|
||||
- Plan a rollback strategy if problems occur.
|
||||
|
||||
Once your migration is in progress, check that everything is working as expected. Ensure that users are hitting the new application on the GKE infrastructure and getting expected results.
|
||||
|
||||
> Think before you terminate stacks and clusters
|
||||
>
|
||||
> Do not terminate your Docker Cloud stacks or node clusters until some time after the migration has been signed off as successful. If there are problems, you may need to roll back and try again.
|
||||
{: .warning}
|
|
@ -0,0 +1,504 @@
|
|||
---
|
||||
description: How to migrate apps from Docker Cloud to Docker CE
|
||||
keywords: cloud, migration, swarm, community
|
||||
title: Migrate Docker Cloud stacks to Docker CE swarm
|
||||
---
|
||||
|
||||
## Docker CE in swarm mode
|
||||
|
||||
This page explains how to prepare your applications for migration from Docker Cloud to applications running as _service stacks_ on clusters of Docker Community Edition (CE) nodes in swarm mode. You can also use [Docker Enterprise Edition](https://www.docker.com/enterprise-edition){: target="_blank" class="_"} (Docker EE) for your target environment.
|
||||
|
||||
At a high level, migrating your Docker Cloud applications requires that you:
|
||||
|
||||
- **Build** a target environment (Docker CE in swarm mode).
|
||||
- **Convert** your Docker Cloud YAML stackfiles.
|
||||
- **Test** the converted YAML stackfiles in the new environment.
|
||||
- **Point** your application CNAMES to new service endpoints.
|
||||
- **Migrate** your applications from Docker Cloud to the new environment.
|
||||
|
||||
To demonstrate, we **build** a Docker CE swarm cluster, **convert** the Docker Cloud stackfile for [example-voting-app](https://github.com/dockersamples/example-voting-app){: target="_blank" class="_"} to a service stack format, and **test** the service stack file in swarm mode to ensure that it is safe to migrate.
|
||||
|
||||
> The actual process of migrating -- switching customers from your Docker Cloud applications to Docker CE applications -- will vary by application and environment.
|
||||
|
||||
## Voting-app example
|
||||
|
||||
The Docker Cloud stack of our example voting application is defined in [dockercloud.yml](https://raw.githubusercontent.com/dockersamples/example-voting-app/master/dockercloud.yml){: target="_blank" class="_"}. The Docker CE service stack (for our target environment) is defined in
|
||||
[docker-stack.yml](https://raw.githubusercontent.com/dockersamples/example-voting-app/master/docker-stack.yml){: target="_blank" class="_"}. This document explains how `dockercloud.yml` is converted to `docker-stack.yml` so that you have the tools to do the same for your applications.
|
||||
|
||||
In the [dockercloud.yml](https://raw.githubusercontent.com/dockersamples/example-voting-app/master/dockercloud.yml){: target="_blank" class="_"}, the voting app is defined as a stack of six microservices:
|
||||
|
||||
- **vote**: Web front-end that displays voting options
|
||||
- **redis**: In-memory k/v store that collects votes
|
||||
- **worker**: Stores votes in database
|
||||
- **db**: Persistent store for votes
|
||||
- **result**: Web server that pulls and displays results from database
|
||||
- **lb**: Container-based load balancer
|
||||
|
||||
Votes are accepted by the `vote` service and stored in a persistent backend database (`db`) with the help of the `redis`, `worker`, and `lb` services. The vote tally is displayed with the `result` service.
|
||||
|
||||
{:width="500px"}
|
||||
|
||||
## Migration prerequisites
|
||||
|
||||
To complete the migration from Docker Cloud to Docker CE in swarm mode, you need:
|
||||
|
||||
- **Docker CE nodes** (in a public cloud or on-premises) organized as a swarm cluster
|
||||
- **SSH access** to the nodes in the swarm cluster
|
||||
|
||||
You _may_ also need the following application-specific things:
|
||||
|
||||
- **Permanent public IP addresses and hostnames** for nodes
|
||||
- **External load balancers** configured to direct traffic to Docker CE nodes
|
||||
|
||||
## Build target environment
|
||||
|
||||
Our target environment is a cluster of Docker CE nodes configured in swarm mode. A swarm cluster comprises one or more manager and worker nodes.
|
||||
|
||||
To ensure high availability (HA) of the swarm control plane in production, you should include an odd number (3+) of manager nodes, usually no more than seven. They should be spread across availability zones and connected by high-speed reliable networks. For information on building a secure HA swarm cluster for production, see [Swarm mode overview](https://docs.docker.com/engine/swarm/){: target="_blank" class="_"}.
|
||||
|
||||
### Plan Docker CE nodes
|
||||
|
||||
How you plan and build your nodes will depend on your application requirements, but you should expect to:
|
||||
|
||||
- Choose a **platform** (cloud or on-premises) to host your Docker CE nodes.
|
||||
- Estimate **node size and spec** (your Docker Cloud nodes can be a guide).
|
||||
- Calculate the **number of nodes** for managers and workers (manager HA requires 3/5/7 managers).
|
||||
- Decide **node distribution** across availability zones for high availability (HA).
|
||||
- Ensure **nodes can communicate** over the network and have stable resolvable hostnames.
|
||||
- Configure **load balancers**.
|
||||
|
||||
Your swarm cluster of Docker CE nodes should probably resemble your existing Docker Cloud node cluster. For example, if you currently have nodes of a particular size and spec, in hosted availability zones, your target swarm cluster should probably match that.
|
||||
|
||||
> In Docker Cloud, to see the configuration of each of your clusters, select **Node Clusters** > _your_cluster_.
|
||||
|
||||
This diagram shows a six-node swarm cluster spread across two availability zones:
|
||||
|
||||
{:width="600px"}
|
||||
|
||||
### Configure swarm cluster
|
||||
|
||||
Configuring a swarm cluster of Docker CE nodes involves the following high-level steps:
|
||||
|
||||
1. Deploy nodes and install Docker CE.
|
||||
2. Initialize swarm mode (which creates one manager).
|
||||
3. _[optional] Add manager nodes (for HA)._
|
||||
4. Add worker nodes.
|
||||
|
||||
In this demo, we build a swarm cluster with six nodes (3 managers/3 workers), but you can use more (or fewer, for example, 1 manager/2 workers). For manager HA, create a minimum of three manager nodes. You can add as many workers as you like.
|
||||
|
||||
1. Deploy six nodes and install the latest version of [Docker CE](https://docs.docker.com/install/){: target="_blank" class="_"} on each.
|
||||
|
||||
2. Initialize a swarm cluster from one node (that automatically becomes the first manager in the swarm):
|
||||
|
||||
```
|
||||
$ docker swarm init
|
||||
```
|
||||
|
||||
> Our swarm cluster uses self-signed certificates. To use an [external CA](https://docs.docker.com/engine/reference/commandline/swarm_init/#--external-ca){: target="_blank" class="_"}, initialize with the option, `--external-ca`. You should also build your nodes in appropriate availability zones.
|
||||
|
||||
> You can use the flag, `--advertise-addr`, to define the IP and port that other nodes should use to connect to this manager. You can even specify an IP that does not exist on the node, such as one for a load balancer. See [docker swarm init](https://docs.docker.com/engine/reference/commandline/swarm_init/#--advertise-addr){: target="_blank" class="_"}.
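For example, a hedged sketch of initializing with an explicit advertise address (the IP address is illustrative only):

```
$ docker swarm init --advertise-addr 10.0.0.5:2377
```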
|
||||
|
||||
3. Extract and **safely store** the manager _join-token_ required to add manager nodes.
|
||||
|
||||
```
|
||||
$ docker swarm join-token manager
|
||||
```
|
||||
|
||||
4. Extract and **safely store** the worker _join-token_ required to add worker nodes.
|
||||
|
||||
```
|
||||
$ docker swarm join-token worker
|
||||
```
|
||||
|
||||
> Keep your join tokens safe and secure, as anyone who has them can join nodes -- including managers -- to your swarm!
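If you suspect a join token has leaked, you can rotate it at any time from a manager node. Existing nodes stay in the swarm, but new joins need the new token:

```
$ docker swarm join-token --rotate manager
$ docker swarm join-token --rotate worker
```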
|
||||
|
||||
5. **[optional]** If you deployed six nodes, you can add two manager nodes with the _manager_ join token. Run the command on each node designated as a manager. The join token and network details will differ in your environment.
|
||||
|
||||
```
|
||||
$ docker swarm join --token <insert-manager-join-token> <IP-and-port>
|
||||
```
|
||||
|
||||
6. Add two or more worker nodes with the _worker_ join token. Run the command on each node designated as a worker. The join token and network details will differ in your environment.
|
||||
|
||||
```
|
||||
$ docker swarm join --token <insert-worker-join-token> <IP-and-port>
|
||||
```
|
||||
|
||||
7. List the nodes from one of the managers (if you have more than one) to verify the status of the swarm. In the `MANAGER STATUS` column, manager nodes are either "Leader" or "Reachable". Worker nodes are blank.
|
||||
|
||||
```
|
||||
$ docker node ls
|
||||
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
|
||||
vrx...vr1 * node1 Ready Active Leader
|
||||
f4b...fbd node2 Ready Active Reachable
|
||||
f2v...sdo node3 Ready Active Reachable
|
||||
bvb...l55 node4 Ready Active
|
||||
hf2...kvc node5 Ready Active
|
||||
p49...aav node6 Ready Active
|
||||
```
|
||||
|
||||
With your target environment configured, let us look at the application and convert the Docker Cloud stackfile to a service stack.
|
||||
|
||||
## Convert Docker Cloud stackfile
|
||||
|
||||
**In the following sections, we discuss each service definition separately, but you should group them into one stackfile with the `.yml` extension, for example, [docker-stack.yml](https://raw.githubusercontent.com/dockersamples/example-voting-app/master/docker-stack.yml){: target="_blank" class="_"}.**
|
||||
|
||||
To prepare your applications for migration from Docker Cloud to Docker CE in swarm mode, you must recreate your Docker Cloud stackfiles (**source** files) as _service stack_ stackfiles (**target** files). Once you have each application defined as a service stack, you can test and deploy.
|
||||
|
||||
> In Docker Cloud, to find the stackfiles for your existing applications, you can either: (1) Select **Stacks** > _your_stack_ > **Edit**, or (2) Select **Stacks** > _your_stack_ and scroll down.
|
||||
|
||||
In the sections below, we step through each service in [example-voting-app](https://github.com/dockersamples/example-voting-app){: target="_blank" class="_"} and explain how the Docker Cloud source file
|
||||
([dockercloud.yml](https://raw.githubusercontent.com/dockersamples/example-voting-app/master/dockercloud.yml){: target="_blank" class="_"}) is converted to the service stack target file
|
||||
([docker-stack.yml](https://raw.githubusercontent.com/dockersamples/example-voting-app/master/docker-stack.yml){: target="_blank" class="_"}). We provide a simple version of each service definition (one that does a like-for-like conversion with no added bells and whistles), and an extended version that demonstrates more features in swarm mode.
|
||||
|
||||
- **Simple example:** Only includes the necessary features for _this_ migration to work.
|
||||
- **Extended example:** Includes some advanced features that improve application management.
|
||||
|
||||
> This is not a best practice guide
|
||||
>
|
||||
> This document shows you how to convert a Docker Cloud application to a Docker CE application and run it in a swarm. Along the way it introduces some of the advanced features offered by service stacks. It is not intended to be a best practice guide, but more of a "what's possible guide".
|
||||
|
||||
### Top- and sub-level keys
|
||||
|
||||
In the Docker Cloud stackfile, the six services are defined as top-level keys, whereas in the _service stack_ stackfile, they are sub-level keys.
|
||||
|
||||
**Cloud source**: Services are **top-level keys**:
|
||||
|
||||
```
|
||||
db:
|
||||
redis:
|
||||
result:
|
||||
lb:
|
||||
vote:
|
||||
worker:
|
||||
```
|
||||
|
||||
**Swarm target**: Services are **sub-level keys** (below the top-level key, `services`), and the Compose file format version is defined at the top (and is required).
|
||||
|
||||
```
|
||||
version: "3.5"
|
||||
services:
|
||||
db:
|
||||
redis:
|
||||
result:
|
||||
vote:
|
||||
worker:
|
||||
```
|
||||
|
||||
Notice that we removed the `lb` service -- this is because it is not needed in swarm mode. In Docker Cloud, the `lb` service accepts incoming traffic on port 80 and load balances across all replicas in the `vote` front-end service. In swarm mode, load balancing is built-in with a native transport-layer routing mesh called the [swarm mode service mesh](/../../engine/swarm/ingress/){: target="_blank" class="_"}.
|
||||
|
||||
### db service
|
||||
|
||||
> Consider using a hosted database service for production databases. This is something that, ideally, should not change as part of your migration away from Docker Cloud stacks.
|
||||
|
||||
**Cloud source**: The Docker Cloud `db` service defines an image and a restart policy:
|
||||
|
||||
```
|
||||
db:
|
||||
image: 'postgres:9.4'
|
||||
restart: always
|
||||
```
|
||||
|
||||
**Swarm target**: This can be translated into a service stack service as follows:
|
||||
|
||||
```
|
||||
db:
|
||||
image: postgres:9.4
|
||||
deploy:
|
||||
restart_policy:
|
||||
condition: any
|
||||
```
|
||||
|
||||
**Swarm target (extended)**: You can also add best practices, documentation, and advanced features, to improve application management:
|
||||
|
||||
```
|
||||
db:
|
||||
image: postgres:9.4
|
||||
volumes:
|
||||
- db-data:/var/lib/postgresql/data
|
||||
networks:
|
||||
- backend
|
||||
deploy:
|
||||
placement:
|
||||
constraints: [node.role == manager]
|
||||
restart_policy:
|
||||
condition: any
|
||||
```
|
||||
|
||||
Let's step through some fields:
|
||||
|
||||
- `volumes` places the Postgres database on a named volume called **db-data** and mounts it into the service replica at `/var/lib/postgresql/data`. This ensures that the data written by the application persists in the event that the Postgres container fails. (The named volume itself is declared at the top level of the stackfile -- see the sketch after this list.)
|
||||
- `networks` adds security by putting the service on a backend network.
|
||||
- `deploy.placement.constraints` forces the service to run on manager nodes. In a single-manager swarm, this ensures that the service always starts on the same node and has access to the same volume.
|
||||
- `deploy.restart_policy.condition` tells Docker to restart any service replica that has stopped (no matter the exit code).
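The named volume and the networks referenced in these extended examples must also be declared as top-level keys in the same stackfile. A minimal sketch:

```
networks:
  frontend:
  backend:

volumes:
  db-data:
```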
|
||||
|
||||
### redis service
|
||||
|
||||
**Cloud source**: The Docker Cloud `redis` service defines an image and a restart policy.
|
||||
|
||||
```
|
||||
redis:
|
||||
image: 'redis:latest'
|
||||
restart: always
|
||||
```
|
||||
|
||||
**Swarm target**: This can be translated into a service stack service as follows.
|
||||
|
||||
```
|
||||
redis:
|
||||
image: redis:latest
|
||||
deploy:
|
||||
restart_policy:
|
||||
condition: any
|
||||
```
|
||||
|
||||
**Swarm target (extended)**:
|
||||
|
||||
```
|
||||
redis:
|
||||
image: redis:alpine
|
||||
ports:
|
||||
- "6379"
|
||||
networks:
|
||||
- frontend
|
||||
deploy:
|
||||
replicas: 1
|
||||
restart_policy:
|
||||
condition: any
|
||||
```
|
||||
|
||||
Let's step through each field.
|
||||
|
||||
- `image` defines the exact same image as the Docker Cloud stackfile.
|
||||
- `ports` defines the network port that the service should operate on -- this can actually be omitted as it's the default port for redis.
|
||||
- `networks` deploys the service on a network called `frontend`.
|
||||
- `deploy.replicas` ensures there is always one instance (one replica) of the service running.
|
||||
- `deploy.restart_policy.condition` tells Docker to restart any service replica that has stopped (no matter the exit code).
|
||||
|
||||
### result service
|
||||
|
||||
**Cloud source**:
|
||||
|
||||
```
|
||||
result:
|
||||
autoredeploy: true
|
||||
image: 'docker/example-voting-app-result:latest'
|
||||
ports:
|
||||
- '80:80'
|
||||
restart: always
|
||||
```
|
||||
|
||||
**Swarm target**:
|
||||
|
||||
```
|
||||
result:
|
||||
image: docker/example-voting-app-result:latest
|
||||
ports:
|
||||
- 5001:80
|
||||
deploy:
|
||||
restart_policy:
|
||||
condition: any
|
||||
```
|
||||
|
||||
Notice the different port mappings in the two stackfiles. The Docker Cloud application makes two services available on port 80 (using different nodes). The `result` service is published directly on port 80, and the `vote` service is published indirectly on port 80 using the `lb` service.
|
||||
|
||||
In the _service stack_ stackfile, we publish these two services on different ports -- `vote` on port 5000 and `result` on port 5001. If this is a problem for your users or application, you may be able to:
|
||||
|
||||
- Publish this service on port 80 and any other service on a different port.
|
||||
- Use host mode and publish both services on port 80 by using placement constraints to run them on different nodes (see the sketch after this list).
|
||||
- Use a frontend service, such as HAProxy, and route the traffic based on a virtual host.
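A hedged sketch of the host-mode option, using the long-form port syntax (Compose file format 3.2 or later) and nested under the top-level `services:` key; the node name is illustrative only:

```
  result:
    image: dockersamples/examplevotingapp_result:latest
    ports:
      - target: 80
        published: 80
        protocol: tcp
        mode: host
    deploy:
      placement:
        constraints: [node.hostname == node4]
```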
|
||||
|
||||
**Swarm target (extended)**
|
||||
|
||||
```
|
||||
result:
|
||||
image: dockersamples/examplevotingapp_result:latest
|
||||
ports:
|
||||
- 5001:80
|
||||
networks:
|
||||
- backend
|
||||
depends_on:
|
||||
- db
|
||||
deploy:
|
||||
replicas: 1
|
||||
restart_policy:
|
||||
condition: any
|
||||
```
|
||||
|
||||
The extended version adds the following:
|
||||
|
||||
- `networks` places all service replicas on a network called `backend`.
|
||||
- `depends_on` tells Docker to start the `db` service before starting this one.
|
||||
- `deploy.replicas` tells Docker to create a single replica for this service.
|
||||
- `deploy.restart_policy.condition` tells Docker to restart any service replica that has stopped (no matter the exit code).
|
||||
|
||||
### lb service
|
||||
|
||||
In Docker Cloud, the `lb` service was used to proxy connections on port 80 to the `vote` service. We do not need to migrate the `lb` service because Docker CE in swarm mode has native load balancing built into its service mesh.
|
||||
|
||||
If your applications are running load balancers, such as `dockercloud/haproxy`, you _may_ no longer need them when migrating to stacks on Docker CE. Be sure to test your application and consult with your Docker technical account manager for further details.
|
||||
|
||||
### vote service
|
||||
|
||||
The Docker Cloud `vote` service defines an image, a restart policy, and a number of service replicas. It also defines an `autoredeploy` policy, which is not supported natively in service stacks.
|
||||
|
||||
> **Autoredeploy options**: Autoredeploy is a Docker Cloud feature that automatically updates running applications every time you push an image. It is not native to Docker CE, AKS or GKE, but you may be able to regain it with Docker Cloud auto-builds, using web-hooks from the Docker Cloud repository for your image back to the CI/CD pipeline in your dev/staging/production environment.
|
||||
|
||||
**Cloud source**:
|
||||
|
||||
```
|
||||
vote:
|
||||
autoredeploy: true
|
||||
image: 'docker/example-voting-app-vote:latest'
|
||||
restart: always
|
||||
target_num_containers: 5
|
||||
```
|
||||
|
||||
**Swarm target**:
|
||||
|
||||
```
|
||||
vote:
|
||||
image: dockersamples/examplevotingapp_vote:latest
|
||||
ports:
|
||||
- 5000:80
|
||||
deploy:
|
||||
replicas: 5
|
||||
restart_policy:
|
||||
condition: any
|
||||
```
|
||||
|
||||
Again, the Docker Cloud version of the voting application publishes both the `result` and `vote` services on port 80 (where the `vote` service is made available on port 80 with the `lb` service).
|
||||
|
||||
Docker Swarm only allows a single service to be published on a swarm-wide port (because in this example, we are in swarm mode and using the routing mesh option for network configuration). To get around this, we publish the `vote` service on port 5000 (as we did with the `result` service on port 5001).
|
||||
|
||||
> For the difference between swarm mode (with ingress networking) and host mode, see [Use swarm mode routing mesh](/../../engine/swarm/ingress/).
|
||||
|
||||
**Swarm target (extended)**:
|
||||
|
||||
```
|
||||
vote:
|
||||
image: dockersamples/examplevotingapp_vote:latest
|
||||
ports:
|
||||
- 5000:80
|
||||
networks:
|
||||
- frontend
|
||||
depends_on:
|
||||
- redis
|
||||
deploy:
|
||||
replicas: 5
|
||||
update_config:
|
||||
parallelism: 2
|
||||
restart_policy:
|
||||
condition: any
|
||||
```
|
||||
|
||||
Let's step through some fields:
|
||||
|
||||
- `networks` places all service replicas on a network called `frontend`.
|
||||
- `depends_on` tells Docker to start the `redis` service before starting the `vote` service.
|
||||
- `deploy.replicas` tells Docker to create 5 replicas for the `vote` service (enough for the `parallelism: 2` rolling-update setting to do useful work).
|
||||
- `deploy.update_config` tells Docker how to perform rolling updates on the service. While not strictly needed, `update_config` settings are extremely helpful when doing application updates. Here, `parallelism: 2` tells swarm to update two replicas of the service at a time; add an `update_config.delay` value if you also want a pause between each set of two.
|
||||
- `deploy.restart_policy.condition` tells Docker to restart any service replica that has stopped (no matter the exit code).
|
||||
|
||||
### worker service
|
||||
|
||||
**Cloud source**: The Docker Cloud `worker` service defines an image, a restart policy, and a number of service replicas. It also defines an `autoredeploy` policy which is not supported natively in service stacks.
|
||||
|
||||
```
|
||||
worker:
|
||||
autoredeploy: true
|
||||
image: 'docker/example-voting-app-worker:latest'
|
||||
restart: always
|
||||
target_num_containers: 3
|
||||
```
|
||||
|
||||
**Swarm target**:
|
||||
|
||||
```
|
||||
worker:
|
||||
image: dockersamples/examplevotingapp_worker
|
||||
deploy:
|
||||
replicas: 3
|
||||
restart_policy:
|
||||
condition: any
|
||||
```
|
||||
|
||||
**Swarm target (extended)**:
|
||||
|
||||
```
|
||||
worker:
|
||||
image: dockersamples/examplevotingapp_worker
|
||||
networks:
|
||||
- frontend
|
||||
- backend
|
||||
deploy:
|
||||
mode: replicated
|
||||
replicas: 3
|
||||
labels: [APP=VOTING]
|
||||
restart_policy:
|
||||
condition: any
|
||||
delay: 10s
|
||||
max_attempts: 3
|
||||
window: 120s
|
||||
placement:
|
||||
constraints: [node.role == manager]
|
||||
```
|
||||
|
||||
All of the settings mentioned here are application specific and may not be needed in your application.
|
||||
|
||||
- `networks` tells Docker to attach replicas to two networks (named "frontend" and "backend") allowing them to communicate with services on either one.
|
||||
- `deploy.placement.constraints` ensures that replicas for this service always start on a manager node.
|
||||
- `deploy.restart_policy.condition` tells Docker to restart any service replica that has stopped (no matter the exit code). With these settings, Docker waits 10 seconds between restart attempts, makes at most 3 attempts, and uses a 120-second window to decide whether a restart succeeded.
|
||||
|
||||
## Test converted stackfile
|
||||
|
||||
Before migrating, you should thoroughly test each new stackfile in a Docker CE cluster in swarm mode. Test the simple stackfile first -- that is, the stackfile that most literally mimics what you have in Docker Cloud. Once that works, start testing some of the more robust features in the extended examples.
|
||||
|
||||
Healthy testing includes _deploying_ the application with the new stackfile, performing _scaling_ operations, increasing _load_, running _failure_ scenarios, and doing _updates_ and _rollbacks_. These tests are specific to each of your applications. You should also manage your manifest files in a version control system.
|
||||
|
||||
The following steps explain how to deploy your app from the **target** Docker Swarm stackfile and verify that it is running. Perform the following from a manager node in your swarm cluster.
|
||||
|
||||
1. Deploy the app from the _service stack_ stackfile you created.
|
||||
|
||||
```
|
||||
$ docker stack deploy -c example-stack.yaml example-stack
|
||||
```
|
||||
|
||||
The format of the command is `docker stack deploy -c <name-of-stackfile> <name-of-stack>`, where the name of the stack is arbitrary but should probably be meaningful.
|
||||
|
||||
2. Test that the stack is running.
|
||||
|
||||
```
|
||||
$ docker stack ls
|
||||
NAME SERVICES
|
||||
example-stack 5
|
||||
```
|
||||
|
||||
3. Get more details about the stack and the services running as part of it.
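   For example (the stack name assumes it was deployed as `example-stack`):

   ```
   $ docker stack services example-stack    # one line per service, with replica counts
   $ docker stack ps example-stack          # one line per task, with its node and current state
   ```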
|
||||
|
||||
4. Test that the application works in your new environment.
|
||||
|
||||
For example, the voting app exposes two web front-ends -- one for casting votes and the other for viewing results. We exposed the `vote` service on port 5000, and the `result` service on port 5001. To connect to either of them, open a web browser and point it to the public IP or public hostname of any swarm node on the required port:
|
||||
|
||||
- Go to `<public-IP-or-hostname>:5000` and cast a vote.
|
||||
- Go to `<public-IP-or-hostname>:5001` and view the result of your vote.
|
||||
|
||||
If you had a CI/CD pipeline with automated tests and deployments for your Docker Cloud stacks, you should build, test, and implement one for each application on Docker CE.
|
||||
|
||||
## Migrate apps from Docker Cloud
|
||||
|
||||
> Remember to point your application CNAMES to new service endpoints.
|
||||
|
||||
How you migrate your applications is unique to your environment and applications.
|
||||
|
||||
- Plan with all developers and operations teams.
|
||||
- Plan with customers.
|
||||
- Plan with owners of other applications that interact with your Docker Cloud app.
|
||||
- Plan a rollback strategy if problems occur.
|
||||
|
||||
Once your migration is in progress, check that everything is working as expected. Ensure that users are hitting the new application on the Docker CE infrastructure and getting expected results.
|
||||
|
||||
> Think before you terminate stacks and clusters
|
||||
>
|
||||
> Do not terminate your Docker Cloud stacks or node clusters until some time after the migration has been signed off as successful. If there are problems, you may need to roll back and try again.
|
||||
{: .warning}
|
|
@ -0,0 +1,131 @@
|
|||
---
|
||||
description: How to deregister swarms on Docker Cloud
|
||||
keywords: cloud, swarm, migration
|
||||
title: Deregister Swarms on Docker Cloud
|
||||
---
|
||||
|
||||
## Introduction
|
||||
|
||||
This page explains how to deregister a Swarm cluster from Docker Cloud so that it can be managed independently. We explain how to deregister on both Amazon Web Services (AWS) and Microsoft Azure (because Docker Cloud swarms run on either AWS or Azure behind the scenes).
|
||||
|
||||
You do not need to migrate or reconfigure your applications as part of this procedure. The only thing that changes is that your Swarm cluster no longer integrates with Docker services (such as Docker Cloud, Docker for Mac, or Docker for Windows).
|
||||
|
||||
### Prerequisites
|
||||
|
||||
To complete this procedure you need:
|
||||
|
||||
- An AWS or Azure account that lets you inspect resources such as instances.
|
||||
|
||||
### High-level steps
|
||||
|
||||
- Verify that you can SSH to your Swarm nodes (on AWS and Azure).
|
||||
- Deregister your Swarm from Docker Cloud.
|
||||
- Clean up old Docker Cloud resources.
|
||||
|
||||
## SSH to your Swarm
|
||||
|
||||
It is vital that you can SSH to your Docker Cloud Swarm before you deregister it from Docker Cloud.
|
||||
|
||||
Your Docker Cloud Swarm runs on either AWS or Azure, so to SSH to your Swarm nodes, you must know the public IP addresses or public DNS names of your nodes. The simplest way to find this information is with the native AWS or Azure tools.
|
||||
|
||||
### How to SSH to AWS nodes
|
||||
|
||||
1. Log on to the AWS console and open the **EC2 Dashboard** for the **region** that hosts your Swarm nodes.
|
||||
|
||||
2. Locate your instances and note their DNS names and IPs.
|
||||
|
||||
By default, AWS labels your Swarm nodes as _swarm-name_-worker or _swarm-name_-manager. For example, a Swarm called "prod-equus" in Docker Cloud has manager and worker nodes in AWS labelled "prod-equus-manager" and "prod-equus-worker", respectively.
|
||||
|
||||
You will also have a load balancer (type=classic) that includes the name of the Swarm. It accepts Docker commands on port 2376 and balances them to the manager nodes in the Swarm (as the server proxy is only deployed on the managers).
|
||||
|
||||
3. Open an SSH session to each node in the cluster.
|
||||
|
||||
This example opens an SSH session to a Swarm node with:
|
||||
|
||||
- Private key = “awskey.pem”
|
||||
- Username = “docker”
|
||||
- Public DNS name = “ec2-34-244-56-42.eu-west-1.compute.amazonaws.com”
|
||||
|
||||
```
|
||||
$ ssh -i ./awskey.pem docker@ec2-34-244-56-42.eu-west-1.compute.amazonaws.com
|
||||
```
|
||||
|
||||
Once you are certain that you are able to SSH to _all nodes_ in your Swarm, you can [deregister from Docker Cloud](#deregister-swarm-from-docker-cloud).
|
||||
|
||||
> If you do not have the keys required to SSH on to your nodes, you can deploy new public keys to your nodes using [this procedure](https://github.com/docker/dockercloud-authorizedkeys/blob/master/README.md){: target="_blank" class="_"}. You should perform this operation before deregistering your Swarm from Docker Cloud.
|
||||
|
||||
### How to SSH to Azure nodes
|
||||
|
||||
In Azure, you can only SSH to manager nodes because worker nodes do not get public IPs and public DNS names. If you need to log on to worker nodes, you can use your manager nodes as jump hosts.
|
||||
|
||||
1. Log on to the Azure portal and click **Resource groups**.
|
||||
|
||||
2. Click on the resource group that contains your Swarm. The `DEPLOYMENT NAME` should match the name of your Swarm.
|
||||
|
||||
3. Click into the deployment with the name of your Swarm and verify the values. For example, the `DOCKERCLOUDCLUSTERNAME` value under **Inputs** should exactly match the name of your Swarm as shown in Docker Cloud.
|
||||
|
||||
4. Copy the value from `SSH TARGETS` under **Outputs** and paste it into a new browser tab.
|
||||
|
||||
This takes you to the inbound NAT Rules for the external load balancer that provides SSH access to your Swarm. It displays a list of all of the **Swarm managers** (not workers) including public IP address (`DESTINATION`) and port (`SERVICE`) that you can use to gain SSH access.
|
||||
|
||||
5. Open an SSH session to each manager in the cluster. Use public IP and port to connect.
|
||||
|
||||
This example creates an SSH session with user `docker` to a swarm manager at `51.140.229.154` on port `50000` with the `azkey.pem` private key in the current directory.
|
||||
|
||||
```
|
||||
ssh -i ./azkey.pem -p 50000 docker@51.140.229.154
|
||||
```
|
||||
|
||||
> If you do not know which private key to use, you can see the public key under `SSHPUBLICKEY` in the **Outputs** section of the Deployment. You can compare this value to the contents of public keys you have on file.
|
||||
|
||||
6. Log on to your worker nodes by using your manager nodes as jump hosts. With
|
||||
[SSH agent forwarding enabled](https://docs.docker.com/docker-for-azure/deploy/#connecting-to-your-linux-worker-nodes-using-ssh), SSH from the manager nodes to the worker nodes over the private network.
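   A hedged sketch of the jump-host approach, reusing the example manager above (the worker's private IP is a placeholder):

   ```
   $ ssh-add ./azkey.pem                        # add the key to your SSH agent
   $ ssh -A -p 50000 docker@51.140.229.154      # -A forwards the agent to the manager
   $ ssh docker@<worker-private-IP>             # from the manager, hop to a worker
   ```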
|
||||
|
||||
Once you are certain that you are able to SSH to the manager nodes in your Swarm you can [deregister from Docker Cloud](#deregister-swarm-from-docker-cloud).
|
||||
|
||||
> If you do not have the keys required to SSH on to your nodes, you can deploy new public keys to your nodes using [this procedure](https://github.com/docker/dockercloud-authorizedkeys/blob/master/README.md){: target="_blank" class="_"}. You should perform this operation before deregistering your Swarm from Docker Cloud.
|
||||
|
||||
## Deregister swarm from Docker Cloud
|
||||
|
||||
> Proceed with caution
|
||||
>
|
||||
> Only deregister if you know the details of your Swarm nodes (cloud provider, public DNS names, public IP address, etc.) and you have verified that you can SSH to each node with your private key.
|
||||
{: .warning}
|
||||
|
||||
1. Open the Docker Cloud web UI and click **Swarms**.
|
||||
|
||||
2. Click the three dots to the right of the Swarm you want to deregister and select **Unregister**.
|
||||
|
||||
3. Confirm the deregistration process.
|
||||
|
||||
The Swarm is now deregistered from the Docker Cloud web UI and is no longer visible in other products such as Docker for Mac and Docker for Windows.
|
||||
|
||||
## Clean up Docker Cloud resources
|
||||
|
||||
The final step is to clean up old Docker Cloud resources, such as the server-proxy service, network, and secret.
|
||||
|
||||
Docker Cloud deployed a service on your Swarm called `dockercloud-server-proxy` to proxy and load balance incoming Docker commands on port 2376 across all manager nodes. It has a network called `dockercloud-server-proxy-network` and a secret called `dockercloud-server-proxy-secret`.
|
||||
|
||||
All of these should be removed:
|
||||
|
||||
1. Open an SSH session to a Swarm manager _for the correct swarm!_
|
||||
|
||||
2. Remove the service:
|
||||
|
||||
```
|
||||
$ docker service rm dockercloud-server-proxy
|
||||
```
|
||||
|
||||
3. Remove the network:
|
||||
|
||||
```
|
||||
$ docker network rm dockercloud-server-proxy-network
|
||||
```
|
||||
|
||||
4. Remove the secret:
|
||||
|
||||
```
|
||||
$ docker secret rm dockercloud-server-proxy-secret
|
||||
```
|
||||
|
||||
Your Docker Swarm cluster is now deregistered from Docker Cloud and you can manage it independently.
|
After Width: | Height: | Size: 118 KiB |
After Width: | Height: | Size: 261 KiB |
After Width: | Height: | Size: 81 KiB |
After Width: | Height: | Size: 16 KiB |
After Width: | Height: | Size: 34 KiB |
After Width: | Height: | Size: 121 KiB |
After Width: | Height: | Size: 110 KiB |
|
@ -0,0 +1,34 @@
|
|||
---
|
||||
description: Migrating from Docker Cloud
|
||||
keywords: cloud, migration
|
||||
title: Migration overview
|
||||
---
|
||||
|
||||
## Introduction
|
||||
|
||||
<span class="badge badge-warning">Important</span> **Cluster and application management services in Docker Cloud are shutting down on May 21. You must migrate your applications from Docker Cloud to another platform and deregister your Swarms.**
|
||||
|
||||
The Docker Cloud runtime is being discontinued. This means that you will no longer be able to manage your nodes, swarm clusters, and the applications that run on them in Docker Cloud. To protect your applications, you must migrate them to another platform, and if applicable, deregister your Swarms from Docker Cloud. The documents in this section explain how.
|
||||
|
||||
- [Migrate Docker Cloud stacks to Docker CE swarm](cloud-to-swarm){: target="_blank" class="_"}
|
||||
- [Migrate Docker Cloud stacks to Azure Container Service](cloud-to-kube-aks){: target="_blank" class="_"}
|
||||
- [Migrate Docker Cloud stacks to Google Kubernetes Engine](cloud-to-kube-gke){: target="_blank" class="_"}
|
||||
- [Deregister Swarms on Docker Cloud](deregister-swarms){: target="_blank" class="_"}
|
||||
- [Kubernetes primer](kube-primer){: target="_blank" class="_"}
|
||||
|
||||
## What stays the same
|
||||
|
||||
**How users and external systems interact with your Docker applications**. Your Docker images, autobuilds, automated tests, and overall application functionality remain the same. For example, if your application uses a Docker image called `myorg/webfe:v3`, and publishes container port `80` to external port `80`, none of this changes.
|
||||
|
||||
Docker Cloud SaaS features stay! We are _not_ removing automated builds and registry storage services.
|
||||
|
||||
## What changes
|
||||
|
||||
**How you manage your applications**. We are removing cluster management and the ability to deploy and manage Docker Cloud stacks. As part of the migration, you will no longer be able to:
|
||||
|
||||
- Manage your nodes and clusters in Docker Cloud.
|
||||
- Deploy and manage applications from the Docker Cloud web UI.
|
||||
- Autoredeploy your applications.
|
||||
- Integrate users with other parts of the Docker platform with their Docker ID.
|
||||
|
||||
> **Autoredeploy options**: Autoredeploy is a Docker Cloud feature that automatically updates running applications every time you push an image. It is not native to Docker CE, AKS or GKE, but you may be able to regain it with Docker Cloud auto-builds, using web-hooks from the Docker Cloud repository for your image back to the CI/CD pipeline in your dev/staging/production environment.
|
|
@ -0,0 +1,120 @@
|
|||
---
|
||||
description: Kubernetes orchestration primer
|
||||
keywords: cloud, migration, kubernetes, primer
|
||||
title: Kubernetes primer
|
||||
---
|
||||
|
||||
## Introduction
|
||||
|
||||
Like Docker Cloud applications, Kubernetes applications are defined in YAML files and can run on public cloud infrastructure. Important Kubernetes concepts are:
|
||||
|
||||
- The Kubernetes cluster
|
||||
- The Kubernetes application
|
||||
|
||||
## Kubernetes cluster
|
||||
|
||||
A Kubernetes cluster is made up of _masters_ and _nodes_. These can be cloud instances or VMs in your data center.
|
||||
|
||||
The diagram below shows a Kubernetes cluster with three masters and three nodes.
|
||||
|
||||
{:width="400px"}
|
||||
|
||||
### Masters
|
||||
|
||||
**Masters** run the control plane services and also issue work to nodes. They are the equivalent to _managers_ in a Docker Cloud or Docker Swarm cluster. They handle:
|
||||
|
||||
- Exposing the main Kubernetes API
|
||||
- The cluster store
|
||||
- The scheduler
|
||||
- All of the _controllers_ (such as Deployments)
|
||||
- Assigning jobs to nodes
|
||||
|
||||
### Nodes
|
||||
|
||||
**Nodes** receive and execute work assigned by masters. They are equivalent to _workers_ in a Docker Cloud or Docker Swarm cluster.
|
||||
|
||||
You should run all of your work on nodes and _not_ on masters. This may differ from Docker Cloud where you may have run some work on manager nodes.
|
||||
|
||||
### Hosted services
|
||||
|
||||
You can run a Kubernetes cluster on-premises where you manage everything yourself -- masters (control plane) and nodes. But control plane high availability (HA) can be difficult to configure.
|
||||
|
||||
Cloud providers such as [Microsoft Azure](https://azure.microsoft.com/en-us/free/){: target="_blank" class="_"},
|
||||
[Google Cloud Platform (GCP)](https://cloud.google.com/free/){: target="_blank" class="_"}, and
|
||||
[Amazon Web Services (AWS)](https://aws.amazon.com/free/){: target="_blank" class="_"}, provide hosted Kubernetes services:
|
||||
|
||||
- Azure Container Service (AKS)
|
||||
- Google Kubernetes Engine (GKE)
|
||||
- Amazon Elastic Container Service for Kubernetes (EKS)
|
||||
|
||||
Each provides the Kubernetes control plane as a managed service, meaning the platform takes care of things such as control plane high availability (HA) and control plane upgrades. In fact, you have no access to the control plane (masters).
|
||||
|
||||
|
||||
> The managed control plane service is usually free but worker nodes are not.
|
||||
|
||||
## Kubernetes application
|
||||
|
||||
A Kubernetes app is any containerized application defined in a Kubernetes manifest file.
|
||||
|
||||
### Manifest
|
||||
|
||||
The manifest file (usually written in YAML) tells Kubernetes everything it needs to know about the application, as well as how to deploy and manage it. For example:
|
||||
|
||||
- Images and containers to run
|
||||
- Network ports to publish
|
||||
- How to scale the app (up or down as demand requires)
|
||||
- How to perform rolling updates
|
||||
- How to perform rollbacks
|
||||
|
||||
### Pods and Services
|
||||
|
||||
In the Docker world, the atomic unit of deployment is the _Docker container_. In the Kubernetes world, it is the _Pod_. If you already understand containers, you can think of a **[Pod](https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/){: target="_blank" class="_"}** as one or more related containers. For the most part, Pods have a single container and are almost analogous to a container.
|
||||
|
||||
A Kubernetes **[Service](https://kubernetes.io/docs/concepts/services-networking/service/){: target="_blank" class="_"}** is an object abstraction that sits in front of a set of Pods and provides a static virtual IP (VIP) address and DNS name. The main purpose of a Kubernetes Service is to provide stable networking for groups of Pods.
|
||||
|
||||
Kubernetes Services can also be used to provision cloud-native load balancers and provide load balancing of requests coming in to the cluster from external sources. Examples include integration with native load balancers on AWS, Azure, and GCP.
|
||||
|
||||
### Deployments
|
||||
|
||||
Docker has a higher level construct called a _Docker service_ (different from a Kubernetes Service) that wraps around a container and adds things such as scalability and rolling updates. Kubernetes also has a higher level construct called a _Deployment_. A Kubernetes **[Deployment](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/){: target="_blank" class="_"}** is a "controller" that wraps around a set of Pods and adds things such as scalability, rolling updates, and simple rollbacks.
|
||||
|
||||
The diagram below shows a Service object providing a DNS name and stable IP for a Deployment of 4 Pods.
|
||||
|
||||
{:width="500px"}
|
||||
|
||||
## Managing Kubernetes apps
|
||||
|
||||
Docker apps are usually managed with the `docker` command line utility. Docker Cloud apps can be managed with the Docker Cloud CLI. Kubernetes apps are managed with the `kubectl` command line utility.
|
||||
|
||||
### Common commands
|
||||
|
||||
This command deploys a Docker application, named `test-app`, from a YAML configuration file called `app1.yml`:
|
||||
|
||||
```
|
||||
$ docker stack deploy -c app1.yml test-app
|
||||
```
|
||||
|
||||
This command deploys a Kubernetes application from a YAML manifest file called `k8s-app.yml`:
|
||||
|
||||
```
|
||||
$ kubectl create -f k8s-app.yml
|
||||
```
|
||||
|
||||
Some other useful `kubectl` commands include:
|
||||
|
||||
- `kubectl get` prints a short description about an object. For Deployments, run: `kubectl get deploy`.
|
||||
- `kubectl describe` prints detailed information about an object. For a Deployment named "app1", run: `kubectl describe deploy app1`
|
||||
- `kubectl delete` deletes a resource on the cluster. To delete a Deployment created with the `app1.yml` manifest file, run: `kubectl delete -f app1.yml`.
|
||||
|
||||
### Sample manifest
|
||||
|
||||
Below is a simple Kubernetes manifest file containing a Deployment and a Service.
|
||||
|
||||
- The Deployment lists everything about the app, including how many Pod replicas to deploy, and the spec of the Pods to be deployed.
|
||||
- The Service defines an external load balancer that listens on port 80 and load-balances traffic across all Pods with the "app=vote" label.
|
||||
|
||||
Everything in Kubernetes is loosely connected with labels. The three blue boxes show the **[labels and label selectors](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/){: target="_blank" class="_"}** that connect the service to the Pods, and the Pods to the Deployment.
|
||||
|
||||
> Indentation is important in Kubernetes manifests, and you should indent with two spaces.
|
||||
|
||||
{:width="650px"}
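A plain-text version of the same kind of manifest -- modelled on the `vote` Deployment and Service used elsewhere in this migration guide -- looks like this:

```
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: vote
  labels:
    app: vote
spec:
  replicas: 5
  selector:
    matchLabels:
      app: vote
  template:
    metadata:
      labels:
        app: vote
    spec:
      containers:
      - name: vote
        image: docker/example-voting-app-vote:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: vote
  labels:
    app: vote
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: vote
```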
|