mirror of https://github.com/docker/docs.git
docs freshness for deployment and orchestration (#18114)
Co-authored-by: Allie Sadler <alliesadler@f693mt7fh6.home>
parent f1c324a560
commit 3785c66c30
@@ -17,9 +17,9 @@ The Docker Azure Integration enables developers to use native Docker commands to
 In addition, the integration between Docker and Microsoft developer technologies allows developers to use the Docker CLI to:

-- Easily log into Azure
+- Sign in to Azure
 - Set up an ACI context in one Docker command allowing you to switch from a local context to a cloud context and run applications quickly and easily
-- Simplify single container and multi-container application development using the Compose specification, allowing a developer to invoke fully Docker-compatible commands seamlessly for the first time natively within a cloud container service
+- Simplify single container and multi-container application development using the Compose Specification, allowing a developer to invoke fully Docker-compatible commands seamlessly for the first time natively within a cloud container service

 Also see the [full list of container features supported by ACI](aci-container-features.md) and [full list of Compose features supported by ACI](aci-compose-features.md).

@@ -31,6 +31,7 @@ To deploy Docker containers on Azure, you must meet the following requirements:

 - [Download for Mac](../desktop/install/mac-install.md)
 - [Download for Windows](../desktop/install/windows-install.md)
+- [Download for Linux](../desktop/install/linux-install.md)

 Alternatively, install the [Docker Compose CLI for Linux](#install-the-docker-compose-cli-on-linux).

@@ -43,35 +44,35 @@ Docker not only runs containers locally, but also enables developers to seamless
 The following sections contain instructions on how to deploy your Docker containers on ACI.
 Also see the [full list of container features supported by ACI](aci-container-features.md).

-### Log into Azure
+### Sign in to Azure

-Run the following commands to log into Azure:
+Run the following commands to sign in to Azure:

 ```console
 $ docker login azure
 ```

-This opens your web browser and prompts you to enter your Azure login credentials.
+This opens your web browser and prompts you to enter your Azure sign-in credentials.
 If the Docker CLI cannot open a browser, it falls back to the [Azure device code flow](https://docs.microsoft.com/en-us/azure/active-directory/develop/v2-oauth2-device-code) and lets you connect manually.
-Note that the [Azure command line](https://docs.microsoft.com/en-us/cli/azure/) login is separated from the Docker CLI Azure login.
+Note that the [Azure command line](https://docs.microsoft.com/en-us/cli/azure/) sign-in is separate from the Docker CLI Azure sign-in.

-Alternatively, you can log in without interaction (typically in
+Alternatively, you can sign in without interaction (typically in
 scripts or continuous integration scenarios), using an Azure Service
 Principal, with `docker login azure --client-id xx --client-secret yy --tenant-id zz`

 > **Note**
 >
-> Logging in through the Azure Service Provider obtains an access token valid
+> Signing in with an Azure Service Principal obtains an access token valid
 for a short period (typically 1h), but it does not allow you to automatically
-and transparently refresh this token. You must manually re-login
-when the access token has expired when logging in with a Service Provider.
+and transparently refresh this token. You must manually re-authenticate
+when the access token has expired if you signed in with a Service Principal.

 You can also use the `--tenant-id` option alone to specify a tenant, if
 you have several available in Azure.

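For instance, a non-interactive sign-in and a tenant-pinned sign-in might look like the following sketch (the IDs and the secret are placeholders, not real values):

```console
# Service Principal sign-in for scripts or CI (placeholder credentials)
$ docker login azure \
  --client-id 11111111-2222-3333-4444-555555555555 \
  --client-secret "<service-principal-secret>" \
  --tenant-id 66666666-7777-8888-9999-000000000000

# Interactive browser sign-in, pinned to a specific tenant
$ docker login azure --tenant-id 66666666-7777-8888-9999-000000000000
```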
 ### Create an ACI context

-After you have logged in, you need to create a Docker context associated with ACI to deploy containers in ACI.
+After you have signed in, you need to create a Docker context associated with ACI to deploy containers in ACI.
 Creating an ACI context requires an Azure subscription, a [resource group](https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/manage-resource-groups-portal), and a region.
 For example, let us create a new context called `myacicontext`:

@@ -79,7 +80,7 @@ For example, let us create a new context called `myacicontext`:
 $ docker context create aci myacicontext
 ```

-This command automatically uses your Azure login credentials to identify your subscription IDs and resource groups. You can then interactively select the subscription and group that you would like to use. If you prefer, you can specify these options in the CLI using the following flags: `--subscription-id`,
+This command automatically uses your Azure sign-in credentials to identify your subscription IDs and resource groups. You can then interactively select the subscription and group that you would like to use. If you prefer, you can specify these options in the CLI using the following flags: `--subscription-id`,
 `--resource-group`, and `--location`.

 If you don't have any existing resource groups in your Azure account, the `docker context create aci myacicontext` command creates one for you. You don’t have to specify any additional options to do this.
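If you would rather skip the interactive prompts, the same options can be passed up front. A minimal sketch (the subscription ID, resource group, and location values are placeholders):

```console
$ docker context create aci myacicontext \
  --subscription-id 00000000-0000-0000-0000-000000000000 \
  --resource-group myresourcegroup \
  --location eastus
```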
@@ -94,7 +95,7 @@ default * moby Current DOCKER_HOST based configuration

 ### Run a container

-Now that you've logged in and created an ACI context, you can start using Docker commands to deploy containers on ACI.
+Now you can start using Docker commands to deploy containers on ACI.

 There are two ways to use your new ACI context. You can use the `--context` flag with the Docker command to specify that you would like to run the command using your newly created ACI context.

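For instance, a run targeted at ACI through the `--context` flag might look like the following sketch (the `nginx` image and the port mapping are purely illustrative):

```console
$ docker --context myacicontext run -p 80:80 nginx
$ docker --context myacicontext ps
```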
@@ -144,7 +145,7 @@ You can also deploy and manage multi-container applications defined in Compose f
 All containers in the same Compose application are started in the same container group. Service discovery between the containers works using the service name specified in the Compose file.
 Name resolution between containers is achieved by writing service names in the `/etc/hosts` file that is shared automatically by all containers in the container group.

-Also see the [full list of compose features supported by ACI](aci-compose-features.md).
+Also see the [full list of Compose features supported by ACI](aci-compose-features.md).

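Assuming the ACI context is active (step 1 below) and a Compose file is present in the working directory, bringing the application up is then a single, familiar command; inside the container group, each service can reach the others by service name:

```console
$ docker compose up
```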
 1. Ensure you are using your ACI context. You can do this either by specifying the `--context myacicontext` flag or by setting the default context using the command `docker context use myacicontext`.

@@ -13,7 +13,7 @@ aliases:

 ## Overview

-The Docker Compose CLI enables developers to use native Docker commands to run applications in Amazon Elastic Container Service (ECS) when building cloud-native applications.
+The Docker Compose CLI lets developers use native Docker commands to run applications in Amazon Elastic Container Service (ECS) when building cloud-native applications.

 The integration between Docker and Amazon ECS allows developers to use the Docker Compose CLI to:

@@ -30,6 +30,7 @@ To deploy Docker containers on ECS, you must meet the following requirements:

 - [Download for Mac](../desktop/install/mac-install.md)
 - [Download for Windows](../desktop/install/windows-install.md)
+- [Download for Linux](../desktop/install/linux-install.md)

 Alternatively, install the [Docker Compose CLI for Linux](#install-the-docker-compose-cli-on-linux).

@@ -106,7 +107,7 @@ the setup command lets you select an existing AWS profile to connect to Amazon.
 Otherwise, you can create a new profile by passing an
 [AWS access key ID and a secret access key](https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html#access-keys-and-secret-access-keys).
 Finally, you can configure your ECS context to retrieve AWS credentials from `AWS_*` environment variables, which is a common way to integrate with
-third-party tools and single-sign-on providers.
+third-party tools and Single Sign-On providers.
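For the environment-variable route, a minimal sketch (every value is a placeholder and the context name is only an example; export the variables before creating the context so the CLI can pick them up):

```console
$ export AWS_ACCESS_KEY_ID="<access-key-id>"
$ export AWS_SECRET_ACCESS_KEY="<secret-access-key>"
$ export AWS_DEFAULT_REGION="us-east-1"
$ docker context create ecs myecscontext
```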

 ```console
 ? Create a Docker context using: [Use arrows to move, type to filter]
@@ -8,11 +8,8 @@ description: Learn how to describe and deploy a simple application on Kubernetes

 - Download and install Docker Desktop as described in [Get Docker](../get-docker.md).
 - Work through containerizing an application in [Part 2](02_our_app.md).
-- Make sure that Kubernetes is enabled on your Docker Desktop:
-  - **Mac**: Click the Docker icon in your menu bar, navigate to **Settings** and make sure there's a green light beside 'Kubernetes'.
-  - **Windows**: Click the Docker icon in the system tray and navigate to **Settings** and make sure there's a green light beside 'Kubernetes'.
-
-  If Kubernetes isn't running, follow the instructions in [Orchestration](orchestration.md) of this tutorial to finish setting it up.
+- Make sure that Kubernetes is turned on in Docker Desktop:
+  If Kubernetes isn't running, follow the instructions in [Orchestration](orchestration.md) to finish setting it up.

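One quick way to confirm that last prerequisite, as a sketch that assumes `kubectl` is on your PATH (Docker Desktop sets it up for you):

```console
$ kubectl get nodes
```

If Kubernetes is on, you should see a single node, typically named `docker-desktop`, in the `Ready` state.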
 ## Introduction

@@ -22,9 +19,9 @@ In order to validate that our containerized application works well on Kubernetes

 ## Describing apps using Kubernetes YAML

-All containers in Kubernetes are scheduled as _pods_, which are groups of co-located containers that share some resources. Furthermore, in a realistic application we almost never create individual pods; instead, most of our workloads are scheduled as _deployments_, which are scalable groups of pods maintained automatically by Kubernetes. Lastly, all Kubernetes objects can and should be described in manifests called _Kubernetes YAML_ files. These YAML files describe all the components and configurations of your Kubernetes app, and can be used to easily create and destroy your app in any Kubernetes environment.
+All containers in Kubernetes are scheduled as pods, which are groups of co-located containers that share some resources. Furthermore, in a realistic application we almost never create individual pods. Instead, most of our workloads are scheduled as deployments, which are scalable groups of pods maintained automatically by Kubernetes. Lastly, all Kubernetes objects can and should be described in manifests called Kubernetes YAML files. These YAML files describe all the components and configurations of your Kubernetes app, and can be used to easily create and destroy your app in any Kubernetes environment.

-1. You already wrote a very basic Kubernetes YAML file in the Orchestration overview part of this tutorial. Now, let's write a slightly more sophisticated YAML file to run and manage our Todo app, the container `getting-started` image created in [Part 2](02_our_app.md) of the Quickstart tutorial. Place the following in a file called `bb.yaml`:
+You already wrote a very basic Kubernetes YAML file in the Orchestration overview part of this tutorial. Now, let's write a slightly more sophisticated YAML file to run and manage our Todo app, the container `getting-started` image created in [Part 2](02_our_app.md) of the Quickstart tutorial. Place the following in a file called `bb.yaml`:

 ```yaml
 apiVersion: apps/v1
@@ -74,20 +71,20 @@ All containers in Kubernetes are scheduled as _pods_, which are groups of co-loc

 ## Deploy and check your application

 1. In a terminal, navigate to where you created `bb.yaml` and deploy your application to Kubernetes:

 ```console
 $ kubectl apply -f bb.yaml
 ```

-you should see output that looks like the following, indicating your Kubernetes objects were created successfully:
+You should see output that looks like the following, indicating your Kubernetes objects were created successfully:

 ```shell
 deployment.apps/bb-demo created
 service/bb-entrypoint created
 ```

 2. Make sure everything worked by listing your deployments:

 ```console
 $ kubectl get deployments
@@ -112,9 +109,9 @@ All containers in Kubernetes are scheduled as _pods_, which are groups of co-loc

 In addition to the default `kubernetes` service, we see our `bb-entrypoint` service, accepting traffic on port 30001/TCP.

-3. Open a browser and visit your Todo app at `localhost:30001`; you should see your Todo application, the same as when we ran it as a stand-alone container in [Part 2](02_our_app.md) of the Quickstart tutorial.
+3. Open a browser and visit your Todo app at `localhost:30001`. You should see your Todo application, the same as when we ran it as a stand-alone container in [Part 2](02_our_app.md) of the Quickstart tutorial.

 4. Once satisfied, tear down your application:

 ```console
 $ kubectl delete -f bb.yaml
@@ -122,7 +119,7 @@ All containers in Kubernetes are scheduled as _pods_, which are groups of co-loc

 ## Conclusion

-At this point, we have successfully used Docker Desktop to deploy our application to a fully-featured Kubernetes environment on our development machine. We haven't done much with Kubernetes yet, but the door is now open; you can begin adding other components to your app and taking advantage of all the features and power of Kubernetes, right on your own machine.
+At this point, we have successfully used Docker Desktop to deploy our application to a fully-featured Kubernetes environment on our development machine. You can now add other components to your app and take advantage of all the features and power of Kubernetes, right on your own machine.

 In addition to deploying to Kubernetes, we have also described our application as a Kubernetes YAML file. This simple text file contains everything we need to create our application in a running state. We can check it into version control and share it with our colleagues, allowing us to distribute our applications to other clusters (like the testing and production clusters that probably come after our development environments) easily.

@@ -22,20 +22,20 @@ The advanced modules teach you how to:
 1. [Set up and use a Kubernetes environment on your development machine](kube-deploy.md)
 2. [Set up and use a Swarm environment on your development machine](swarm-deploy.md)

-## Enable Kubernetes
+## Turn on Kubernetes

-Docker Desktop will set up Kubernetes for you quickly and easily. Follow the setup and validation instructions appropriate for your operating system:
+Docker Desktop sets up Kubernetes for you quickly and easily. Follow the setup and validation instructions appropriate for your operating system:

 {{< tabs group="os" >}}
-{{< tab name="Mac" >}}
+{{< tab name="Mac and Linux" >}}

 ### Mac

-1. After installing Docker Desktop, you should see a Docker icon in your menu bar. Click on it, and navigate to **Settings** > **Kubernetes**.
+1. From the Docker Dashboard, navigate to **Settings**, and select the **Kubernetes** tab.

-2. Check the checkbox labeled **Enable Kubernetes**, and click **Apply & Restart**. Docker Desktop will automatically set up Kubernetes for you. You'll know that Kubernetes has been successfully enabled when you see a green light beside 'Kubernetes _running_' in **Settings**.
+2. Select the checkbox labeled **Enable Kubernetes**, and select **Apply & Restart**. Docker Desktop automatically sets up Kubernetes for you. You'll know that Kubernetes has been successfully enabled when you see a green light beside 'Kubernetes _running_' in **Settings**.

-3. In order to confirm that Kubernetes is up and running, create a text file called `pod.yaml` with the following content:
+3. To confirm that Kubernetes is up and running, create a text file called `pod.yaml` with the following content:

 ```yaml
 apiVersion: v1
@@ -51,13 +51,13 @@ Docker Desktop will set up Kubernetes for you quickly and easily. Follow the set

 This describes a pod with a single container, isolating a simple ping to 8.8.8.8.

 4. In a terminal, navigate to where you created `pod.yaml` and create your pod:

 ```console
 $ kubectl apply -f pod.yaml
 ```

 5. Check that your pod is up and running:

 ```console
 $ kubectl get pods
@@ -70,7 +70,7 @@ Docker Desktop will set up Kubernetes for you quickly and easily. Follow the set
 demo 1/1 Running 0 4s
 ```

 6. Check that you get the logs you'd expect for a ping process:

 ```console
 $ kubectl logs demo
@@ -86,7 +86,7 @@ Docker Desktop will set up Kubernetes for you quickly and easily. Follow the set
 ...
 ```

 7. Finally, tear down your test pod:

 ```console
 $ kubectl delete -f pod.yaml
@@ -97,11 +97,11 @@ Docker Desktop will set up Kubernetes for you quickly and easily. Follow the set

 ### Windows

-1. After installing Docker Desktop, you should see a Docker icon in your system tray. Right-click on it, and navigate **Settings** > **Kubernetes**.
+1. From the Docker Dashboard, navigate to **Settings**, and select the **Kubernetes** tab.

-2. Check the checkbox labeled **Enable Kubernetes**, and click **Apply & Restart**. Docker Desktop will automatically set up Kubernetes for you. You'll know that Kubernetes has been successfully enabled when you see a green light beside 'Kubernetes _running_' in the **Settings** menu.
+2. Select the checkbox labeled **Enable Kubernetes**, and select **Apply & Restart**. Docker Desktop automatically sets up Kubernetes for you. You'll know that Kubernetes has been successfully enabled when you see a green light beside 'Kubernetes _running_' in the **Settings** menu.

-3. In order to confirm that Kubernetes is up and running, create a text file called `pod.yaml` with the following content:
+3. To confirm that Kubernetes is up and running, create a text file called `pod.yaml` with the following content:

 ```yaml
 apiVersion: v1
@@ -117,13 +117,13 @@ Docker Desktop will set up Kubernetes for you quickly and easily. Follow the set

 This describes a pod with a single container, isolating a simple ping to 8.8.8.8.

 4. In PowerShell, navigate to where you created `pod.yaml` and create your pod:

 ```console
 $ kubectl apply -f pod.yaml
 ```

 5. Check that your pod is up and running:

 ```console
 $ kubectl get pods
@@ -136,7 +136,7 @@ Docker Desktop will set up Kubernetes for you quickly and easily. Follow the set
 demo 1/1 Running 0 4s
 ```

 6. Check that you get the logs you'd expect for a ping process:

 ```console
 $ kubectl logs demo
@@ -152,7 +152,7 @@ Docker Desktop will set up Kubernetes for you quickly and easily. Follow the set
 ...
 ```

 7. Finally, tear down your test pod:

 ```console
 $ kubectl delete -f pod.yaml
@@ -170,7 +170,7 @@ Docker Desktop runs primarily on Docker Engine, which has everything you need to

 ### Mac

 1. Open a terminal, and initialize Docker Swarm mode:

 ```console
 $ docker swarm init
@@ -188,13 +188,13 @@ Docker Desktop runs primarily on Docker Engine, which has everything you need to
 To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
 ```

 2. Run a simple Docker service that uses an alpine-based filesystem, and isolates a ping to 8.8.8.8:

 ```console
 $ docker service create --name demo alpine:latest ping 8.8.8.8
 ```

 3. Check that your service created one running container:

 ```console
 $ docker service ps demo
@@ -207,7 +207,7 @@ Docker Desktop runs primarily on Docker Engine, which has everything you need to
 463j2s3y4b5o demo.1 alpine:latest docker-desktop Running Running 8 seconds ago
 ```

 4. Check that you get the logs you'd expect for a ping process:

 ```console
 $ docker service logs demo
@@ -223,7 +223,7 @@ Docker Desktop runs primarily on Docker Engine, which has everything you need to
 ...
 ```

 5. Finally, tear down your test service:

 ```console
 $ docker service rm demo
@@ -234,7 +234,7 @@ Docker Desktop runs primarily on Docker Engine, which has everything you need to

 ### Windows

-1. Open a powershell, and initialize Docker Swarm mode:
+1. Open PowerShell, and initialize Docker Swarm mode:

 ```console
 $ docker swarm init
@@ -252,13 +252,13 @@ Docker Desktop runs primarily on Docker Engine, which has everything you need to
 To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
 ```

 2. Run a simple Docker service that uses an alpine-based filesystem, and isolates a ping to 8.8.8.8:

 ```console
 $ docker service create --name demo alpine:latest ping 8.8.8.8
 ```

 3. Check that your service created one running container:

 ```console
 $ docker service ps demo
@@ -271,7 +271,7 @@ Docker Desktop runs primarily on Docker Engine, which has everything you need to
 463j2s3y4b5o demo.1 alpine:latest docker-desktop Running Running 8 seconds ago
 ```

 4. Check that you get the logs you'd expect for a ping process:

 ```console
 $ docker service logs demo
@@ -287,7 +287,7 @@ Docker Desktop runs primarily on Docker Engine, which has everything you need to
 ...
 ```

 5. Finally, tear down your test service:

 ```console
 $ docker service rm demo
@@ -298,7 +298,7 @@ Docker Desktop runs primarily on Docker Engine, which has everything you need to

 ## Conclusion

-At this point, you've confirmed that you can run simple containerized workloads in Kubernetes and Swarm. The next step will be to write a YAML file that describes how to run and manage these containers.
+At this point, you've confirmed that you can run simple containerized workloads in Kubernetes and Swarm. The next step is to write a YAML file that describes how to run and manage these containers.

 - [Deploy to Kubernetes](kube-deploy.md)
 - [Deploy to Swarm](swarm-deploy.md)

@@ -20,11 +20,11 @@ aliases:

 Now that we've demonstrated that the individual components of our application run as stand-alone containers and shown how to deploy it using Kubernetes, let's look at how to arrange for them to be managed by Docker Swarm. Swarm provides many tools for scaling, networking, securing and maintaining your containerized applications, above and beyond the abilities of containers themselves.

-In order to validate that our containerized application works well on Swarm, we'll use Docker Desktop's built in Swarm environment right on our development machine to deploy our application, before handing it off to run on a full Swarm cluster in production. The Swarm environment created by Docker Desktop is _fully featured_, meaning it has all the Swarm features your app will enjoy on a real cluster, accessible from the convenience of your development machine.
+In order to validate that our containerized application works well on Swarm, we'll use Docker Desktop's built-in Swarm environment right on our development machine to deploy our application, before handing it off to run on a full Swarm cluster in production. The Swarm environment created by Docker Desktop is fully featured, meaning it has all the Swarm features your app will enjoy on a real cluster, accessible from the convenience of your development machine.

 ## Describe apps using stack files

-Swarm never creates individual containers like we did in the previous step of this tutorial. Instead, all Swarm workloads are scheduled as _services_, which are scalable groups of containers with added networking features maintained automatically by Swarm. Furthermore, all Swarm objects can and should be described in manifests called _stack files_. These YAML files describe all the components and configurations of your Swarm app, and can be used to easily create and destroy your app in any Swarm environment.
+Swarm never creates individual containers like we did in the previous step of this tutorial. Instead, all Swarm workloads are scheduled as services, which are scalable groups of containers with added networking features maintained automatically by Swarm. Furthermore, all Swarm objects can and should be described in manifests called stack files. These YAML files describe all the components and configurations of your Swarm app, and can be used to easily create and destroy your app in any Swarm environment.

 Let's write a simple stack file to run and manage our Todo app, the container `getting-started` image created in [Part 2](02_our_app.md) of the Quickstart tutorial. Place the following in a file called `bb-stack.yaml`:

@@ -40,21 +40,21 @@ services:
 - "8000:3000"
 ```

-In this Swarm YAML file, we have just one object: a `service`, describing a scalable group of identical containers. In this case, you'll get just one container (the default), and that container will be based on your `getting-started` image created in [Part 2](02_our_app.md) of the Quickstart tutorial. In addition, We've asked Swarm to forward all traffic arriving at port 8000 on our development machine to port 3000 inside our getting-started container.
+In this Swarm YAML file, we have just one object, a `service`, describing a scalable group of identical containers. In this case, you'll get just one container (the default), and that container will be based on your `getting-started` image created in [Part 2](02_our_app.md) of the Quickstart tutorial. In addition, we've asked Swarm to forward all traffic arriving at port 8000 on our development machine to port 3000 inside our getting-started container.

-> **Kubernetes Services and Swarm Services are very different!**
+> **Kubernetes Services and Swarm Services are very different**
 >
 > Despite the similar name, the two orchestrators mean very different things by
-> the term 'service'. In Swarm, a service provides both scheduling _and_
+> the term 'service'. In Swarm, a service provides both scheduling and
 > networking facilities, creating containers and providing tools for routing
 > traffic to them. In Kubernetes, scheduling and networking are handled
-> separately: _deployments_ (or other controllers) handle the scheduling of
-> containers as pods, while _services_ are responsible only for adding
+> separately: deployments (or other controllers) handle the scheduling of
+> containers as pods, while services are responsible only for adding
 > networking features to those pods.

 ## Deploy and check your application

 1. Deploy your application to Swarm:

 ```console
 $ docker stack deploy -c bb-stack.yaml demo
@@ -69,7 +69,7 @@ In this Swarm YAML file, we have just one object: a `service`, describing a scal

 Notice that in addition to your service, Swarm also creates a Docker network by default to isolate the containers deployed as part of your stack.

 2. Make sure everything worked by listing your service:

 ```console
 $ docker service ls
@@ -84,9 +84,9 @@ In this Swarm YAML file, we have just one object: a `service`, describing a scal

 This indicates 1/1 containers you asked for as part of your services are up and running. Also, we see that port 8000 on your development machine is getting forwarded to port 3000 in your getting-started container.

 3. Open a browser and visit your Todo app at `localhost:8000`; you should see your Todo application, the same as when we ran it as a stand-alone container in Part 2 of the Quickstart tutorial.

 4. Once satisfied, tear down your application:

 ```console
 $ docker stack rm demo
@@ -94,7 +94,7 @@ In this Swarm YAML file, we have just one object: a `service`, describing a scal

 ## Conclusion

-At this point, we have successfully used Docker Desktop to deploy our application to a fully-featured Swarm environment on our development machine. We haven't done much with Swarm yet, but the door is now open: you can begin adding other components to your app and taking advantage of all the features and power of Swarm, right on your own machine.
+At this point, we have successfully used Docker Desktop to deploy our application to a fully-featured Swarm environment on our development machine. You can now add other components to your app and take advantage of all the features and power of Swarm, right on your own machine.

 In addition to deploying to Swarm, we have also described our application as a stack file. This simple text file contains everything we need to create our application in a running state; we can check it into version control and share it with our colleagues, allowing us to distribute our applications to other clusters (like the testing and production clusters that probably come after our development environments) easily.
