Fix some doubled words

Signed-off-by: Misty Stanley-Jones <misty@docker.com>
Misty Stanley-Jones 2016-11-28 11:32:17 -08:00
parent b1ce89daf2
commit 756f4d974c
39 changed files with 93 additions and 88 deletions


@ -112,7 +112,7 @@ In this step, you create a Django started project by building the image from the
If you are running Docker on Linux, the files `django-admin` created are owned
by root. This happens because the container runs as the root user. Change the
-ownership of the the new files.
+ownership of the new files.
sudo chown -R $USER:$USER .
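As an editor's aside, the ownership fix in the hunk above can be sketched locally without Docker; the directory and file names below are hypothetical, and `chown` to your own uid works without `sudo` only because the demo files are already yours (the docs' root-owned case needs the `sudo` form shown above).

```shell
# Hypothetical local sketch of the ownership fix from the hunk above.
# Real usage from the docs: sudo chown -R $USER:$USER .
mkdir -p /tmp/chown-demo && cd /tmp/chown-demo
touch manage.py settings.py
chown -R "$(id -u):$(id -g)" .   # no sudo needed: we already own these files
stat -c '%u' manage.py           # prints your numeric uid
```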


@ -91,7 +91,7 @@ First, Compose will build the image for the `web` service using the `Dockerfile`
If you are running Docker on Linux, the files `rails new` created are owned by
root. This happens because the container runs as the root user. Change the
-ownership of the the new files.
+ownership of the new files.
sudo chown -R $USER:$USER .


@ -14,7 +14,10 @@ Docker command-line client. If you're using `docker-machine`, then the `eval "$(
## COMPOSE\_PROJECT\_NAME
-Sets the project name. This value is prepended along with the service name to the container container on start up. For example, if you project name is `myapp` and it includes two services `db` and `web` then compose starts containers named `myapp_db_1` and `myapp_web_1` respectively.
+Sets the project name. This value is prepended along with the service name to
+the container on start up. For example, if you project name is `myapp` and it
+includes two services `db` and `web` then compose starts containers named
+`myapp_db_1` and `myapp_web_1` respectively.
Setting this is optional. If you do not set this, the `COMPOSE_PROJECT_NAME`
defaults to the `basename` of the project directory. See also the `-p`
@ -87,4 +90,4 @@ Users of Docker Machine and Docker Toolbox on Windows should always set this.
- [User guide](../index.md)
- [Installing Compose](../install.md)
- [Compose file reference](../compose-file.md)
-- [Environment file](../env-file.md)
+- [Environment file](../env-file.md)
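The container-naming behavior this hunk documents can be illustrated with a tiny shell sketch; `myapp`, `db`, and `web` are the example names from the text, and no Docker is needed:

```shell
# Sketch of Compose's naming scheme: <project>_<service>_<index>.
COMPOSE_PROJECT_NAME=myapp
for service in db web; do
  echo "${COMPOSE_PROJECT_NAME}_${service}_1"
done
# prints: myapp_db_1 and myapp_web_1
```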


@ -57,7 +57,7 @@ they have dedicated resources for them.
It also makes it easier to implement backup policies and disaster recovery
plans for UCP and DTR.
-To have have high-availability on UCP and DTR, you need a minimum of:
+To have high-availability on UCP and DTR, you need a minimum of:
* 3 dedicated nodes to install UCP with high availability,
* 3 dedicated nodes to install DTR with high availability,
@ -68,7 +68,7 @@ To have have high-availability on UCP and DTR, you need a minimum of:
## Load balancing
-DTR does not provide a load balancing service. You can use use an on-premises
+DTR does not provide a load balancing service. You can use an on-premises
or cloud-based load balancer to balance requests across multiple DTR replicas.
Make sure you configure your load balancer to:
@ -82,4 +82,4 @@ not.
## Where to go next
* [Backups and disaster recovery](backups-and-disaster-recovery.md)
-* [DTR architecture](../architecture.md)
+* [DTR architecture](../architecture.md)


@ -67,7 +67,7 @@ To install DTR:
3. Check that DTR is running.
-In your browser, navigate to the the Docker **Universal Control Plane**
+In your browser, navigate to the Docker **Universal Control Plane**
web UI, and navigate to the **Applications** screen. DTR should be listed
as an application.
@ -143,7 +143,7 @@ replicas:
3. Check that all replicas are running.
-In your browser, navigate to the the Docker **Universal Control Plane**
+In your browser, navigate to the Docker **Universal Control Plane**
web UI, and navigate to the **Applications** screen. All replicas should
be displayed.
@ -158,4 +158,4 @@ replicas:
## See also
* [Install DTR offline](install-dtr-offline.md)
-* [Upgrade DTR](upgrade/upgrade-major.md)
+* [Upgrade DTR](upgrade/upgrade-major.md)


@ -11,7 +11,7 @@ Before installing, be sure your infrastructure has these requirements.
## Software requirements
-To install DTR on a node, that node node must be part of a Docker Universal
+To install DTR on a node, that node must be part of a Docker Universal
Control Plane 1.1 cluster.
## Ports used
@ -45,4 +45,4 @@ Docker Datacenter is a software subscription that includes 3 products:
## Where to go next
* [DTR architecture](../architecture.md)
-* [Install DTR](index.md)
+* [Install DTR](index.md)


@ -63,7 +63,7 @@ To start the migration:
2. Use the docker/dtr migrate command.
When you run the docker/dtr migrate command, Docker pulls the necessary
-images from Docker Hub. If the the host where DTR 1.4.3 is not connected
+images from Docker Hub. If the host where DTR 1.4.3 is not connected
to the internet, you need to
[download the images to the host](../install-dtr-offline.md).
@ -183,7 +183,7 @@ replicas:
3. Check that all replicas are running.
-In your browser, navigate to the the Docker **Universal Control Plane**
+In your browser, navigate to the Docker **Universal Control Plane**
web UI, and navigate to the **Applications** screen. All replicas should
be displayed.
@ -204,4 +204,4 @@ containers.
## Where to go next
* [Upgrade to DTR 2.x](index.md)
-* [Monitor DTR](../../monitor-troubleshoot/index.md)
+* [Monitor DTR](../../monitor-troubleshoot/index.md)


@ -18,7 +18,7 @@ docker run -it --rm --net dtr-ol --name overlay-test1 --entrypoint sh docker/dtr
docker run -it --rm --net dtr-ol --name overlay-test2 --entrypoint ping docker/dtr -c 3 overlay-test1
```
-You can create new new overlay network for this test with `docker network create
+You can create new overlay network for this test with `docker network create
-d overaly network-name`. You can also use any images that contain `sh` and
`ping` for this test.
@ -65,4 +65,4 @@ via the following docker command:
```
docker run --rm -v dtr-ca-$REPLICA_ID:/ca --net dtr-br -it --entrypoint /etcdctl docker/dtr-etcd:v2.2.4 --endpoint https://dtr-etcd-$REPLICA_ID.dtr-br:2379 --ca-file /ca/etcd/cert.pem --key-file /ca/etcd-client/key.pem --cert-file /ca/etcd-client/cert.pem
-```
+```


@ -14,7 +14,7 @@ A team defines the permissions a set of users have for a set of repositories.
To create a new team, go to the **DTR web UI**, and navigate to the
**Organizations** page.
Then **click the organization** where you want to create the team. In this
-example, we'll create the 'billing' team team under the 'whale' organization.
+example, we'll create the 'billing' team under the 'whale' organization.
![](../images/create-and-manage-teams-1.png)
@ -54,4 +54,4 @@ There are three permission levels available:
## Where to go next
* [Create and manage users](create-and-manage-users.md)
-* [Create and manage organizations](create-and-manage-orgs.md)
+* [Create and manage organizations](create-and-manage-orgs.md)


@ -56,7 +56,7 @@ they have dedicated resources for them.
It also makes it easier to implement backup policies and disaster recovery
plans for UCP and DTR.
-To have have high-availability on UCP and DTR, you need a minimum of:
+To have high-availability on UCP and DTR, you need a minimum of:
* 3 dedicated nodes to install UCP with high availability,
* 3 dedicated nodes to install DTR with high availability,
@ -67,7 +67,7 @@ To have have high-availability on UCP and DTR, you need a minimum of:
## Load balancing
-DTR does not provide a load balancing service. You can use use an on-premises
+DTR does not provide a load balancing service. You can use an on-premises
or cloud-based load balancer to balance requests across multiple DTR replicas.
Make sure you configure your load balancer to:


@ -56,7 +56,7 @@ Check the [reference documentation to learn more](../../reference/cli/install.md
## Step 4. Check that DTR is running
-In your browser, navigate to the the Docker **Universal Control Plane**
+In your browser, navigate to the Docker **Universal Control Plane**
web UI, and navigate to the **Applications** screen. DTR should be listed
as an application.
@ -122,7 +122,7 @@ replicas:
4. Check that all replicas are running.
-In your browser, navigate to the the Docker **Universal Control Plane**
+In your browser, navigate to the Docker **Universal Control Plane**
web UI, and navigate to the **Applications** screen. All replicas should
be displayed.


@ -14,7 +14,7 @@ docker run -it --rm --net dtr-ol --name overlay-test1 --entrypoint sh docker/dtr
docker run -it --rm --net dtr-ol --name overlay-test2 --entrypoint ping docker/dtr -c 3 overlay-test1
```
-You can create new new overlay network for this test with `docker network create -d overaly network-name`.
+You can create new overlay network for this test with `docker network create -d overaly network-name`.
You can also use any images that contain `sh` and `ping` for this test.
If the second command succeeds, overlay networking is working.


@ -13,7 +13,7 @@ A team defines the permissions a set of users have for a set of repositories.
To create a new team, go to the **DTR web UI**, and navigate to the
**Organizations** page.
Then **click the organization** where you want to create the team. In this
-example, we'll create the 'billing' team team under the 'whale' organization.
+example, we'll create the 'billing' team under the 'whale' organization.
![](../images/create-and-manage-teams-1.png)


@ -136,7 +136,7 @@ To enable the networking feature, do the following.
INFO[0001] Successfully delivered signal to daemon
```
-The `host-address` value is the the external address of the node you're
+The `host-address` value is the external address of the node you're
operating against. This is the address other nodes when communicating with
each other across the communication network.
@ -275,4 +275,4 @@ Remember, you'll need to restart the daemon each time you change the start optio
## Where to go next
* [Integrate with DTR](dtr-integration.md)
-* [Set up high availability](../high-availability/set-up-high-availability.md)
+* [Set up high availability](../high-availability/set-up-high-availability.md)


@ -181,7 +181,7 @@ host for the controller works fine.
````
Running this `eval` command sends the `docker` commands in the following
-steps to the Docker Engine on on `node1`.
+steps to the Docker Engine on `node1`.
c. Verify that `node1` is the active environment.


@ -73,7 +73,7 @@ of specific teams
* Added an HTTP routing mesh for enabling hostname routing for services
(experimental)
* The UCP web UI now lets you know when a new version is available, and upgrades
-to the the new version with a single click
+to the new version with a single click
**Installer**


@ -23,7 +23,7 @@ docker run -it --rm \
This command uninstalls UCP from the swarm, but preserves the swarm so that
your applications can continue running.
-After UCP is uninstalled you can use the the 'docker swarm leave' and
+After UCP is uninstalled you can use the 'docker swarm leave' and
'docker node rm' commands to remove nodes from the swarm.
Once UCP is uninstalled, you won't be able to join nodes to the swarm unless


@ -66,7 +66,7 @@ option, find the published port on the service detail page.
### Using the API/CLI
See the API and CLI documentation [here](/apidocs/docker-cloud.md#service) on
-how to launch a service with a a published port.
+how to launch a service with a published port.
## Check which ports a service has published
@ -81,7 +81,7 @@ Ports that are exposed internally display with a closed (locked) padlock
icon and published ports (that are exposed to the internet) show an open
(unlocked) padlock icon.
-* Exposed ports are listed as as **container port/protocol**
+* Exposed ports are listed as **container port/protocol**
* Published ports are listed as **node port**->**container port/protocol** -->
![](images/ports-published.png)
@ -121,4 +121,4 @@ not dynamic) is assigned a DNS endpoint in the format
running, in a [round-robin
fashion](https://en.wikipedia.org/wiki/Round-robin_DNS).
-You can see a list of service endpoints on the stack and service detail views, under the **Endpoints** tab.
+You can see a list of service endpoints on the stack and service detail views, under the **Endpoints** tab.
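The round-robin DNS behavior described in this hunk's context can be sketched in plain shell; `web-1`/`web-2` mirror the docs' container names, and this toy loop only illustrates the dispatch pattern rather than querying real DNS:

```shell
# Toy illustration of round-robin dispatch across two containers.
i=0
for request in req1 req2 req3 req4; do
  if [ $((i % 2)) -eq 0 ]; then target=web-1; else target=web-2; fi
  echo "$request -> $target"   # alternates web-1, web-2, web-1, web-2
  i=$((i + 1))
done
```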


@ -118,7 +118,7 @@ Environment variables specified in the service definition are instantiated in ea
These environment variables are prefixed with the `HOSTNAME_ENV_` in each container.
-In our example, if we launch our `my-web-app` service with an environment variable of `WEBROOT=/login`, the following environment variables are set and available available in the proxy containers:
+In our example, if we launch our `my-web-app` service with an environment variable of `WEBROOT=/login`, the following environment variables are set and available in the proxy containers:
| Name | Value |
|:------------------|:---------|
@ -161,4 +161,4 @@ Where:
These environment variables are also copied to linked containers with the `NAME_ENV_` prefix.
-If you provide API access to your service, you can use the generated token (stored in `DOCKERCLOUD_AUTH`) to access these API URLs to gather information or automate operations, such as scaling.
+If you provide API access to your service, you can use the generated token (stored in `DOCKERCLOUD_AUTH`) to access these API URLs to gather information or automate operations, such as scaling.


@ -53,7 +53,7 @@ web-2 ab045c42 ▶ Running my-username/python-quickstart:late
Use either of the URLs from the `container ps` command to visit one of your service's containers, either using your browser or curl.
-In the example output above, the URL `web-1.my-username.cont.dockerapp.io:49162` reaches the web app on the first container, and `web-2.my-username.cont.dockerapp.io:49156` reaches the web app on the the second container.
+In the example output above, the URL `web-1.my-username.cont.dockerapp.io:49162` reaches the web app on the first container, and `web-2.my-username.cont.dockerapp.io:49156` reaches the web app on the second container.
If you use curl to visit the pages, you should see something like this:
@ -66,4 +66,4 @@ Hello Python Users!</br>Hostname: web-2</br>Counter: Redis Cache not found, coun
Congratulations! You now have *two* containers running in your **web** service.
-Next: [View service logs](8_view_logs.md)
+Next: [View service logs](8_view_logs.md)


@ -28,4 +28,4 @@ Once you log in, a message appears prompting you to confirm the link.
## What's next?
-You're ready to start using using DigitalOcean as the infrastructure provider for Docker Cloud! If you came here from the tutorial, click here to [continue the tutorial and deploy your first node](../getting-started/your_first_node.md).
+You're ready to start using DigitalOcean as the infrastructure provider for Docker Cloud! If you came here from the tutorial, click here to [continue the tutorial and deploy your first node](../getting-started/your_first_node.md).


@ -276,7 +276,9 @@ ln -s /Applications/Docker.app/Contents/Resources/etc/docker-compose.bash-comple
* Try out the [Getting Started with Docker](/engine/getstarted/index.md) tutorial.
-* Dig in deeper with [learn by example](/engine/tutorials/index.md) tutorials on on building images, running containers, networking, managing data, and storing images on Docker Hub.
+* Dig in deeper with [learn by example](/engine/tutorials/index.md) tutorials on
+building images, running containers, networking, managing data, and storing
+images on Docker Hub.
* See [Example Applications](examples.md) for example applications that include setting up services and databases in Docker Compose.


@ -163,21 +163,21 @@ GB/s. With large sequential IO operations, `osxfs` can achieve throughput of
around 250 MB/s which, while not native speed, will not be the bottleneck for
most applications which perform acceptably on HDDs.
-Latency is the time it takes for a file system system call to complete. For
-instance, the time between a thread issuing write in a container and resuming
-with the number of bytes written. With a classical block-based file system, this
-latency is typically under 10μs (microseconds). With `osxfs`, latency is
-presently around 200μs for most operations or 20x slower. For workloads which
-demand many sequential roundtrips, this results in significant observable
-slowdown. To reduce the latency, we need to shorten the data path from a Linux
-system call to OS X and back again. This requires tuning each component in the
-data path in turn -- some of which require significant engineering effort. Even
-if we achieve a huge latency reduction of 100μs/roundtrip, we will still "only"
-see a doubling of performance. This is typical of performance engineering, which
-requires significant effort to analyze slowdowns and develop optimized
-components. We know how we can likely halve the roundtrip time but we haven't
-implemented those improvements yet (more on this below in [What you can
-do](osxfs.md#what-you-can-do)).
+Latency is the time it takes for a file system call to complete. For instance,
+the time between a thread issuing write in a container and resuming with the
+number of bytes written. With a classical block-based file system, this latency
+is typically under 10μs (microseconds). With `osxfs`, latency is presently
+around 200μs for most operations or 20x slower. For workloads which demand many
+sequential roundtrips, this results in significant observable slowdown. To
+reduce the latency, we need to shorten the data path from a Linux system call to
+OS X and back again. This requires tuning each component in the data path in
+turn -- some of which require significant engineering effort. Even if we achieve
+a huge latency reduction of 100μs/roundtrip, we will still "only" see a doubling
+of performance. This is typical of performance engineering, which requires
+significant effort to analyze slowdowns and develop optimized components. We
+know how we can likely halve the roundtrip time but we haven't implemented those
+improvements yet (more on this below in
+[What you can do](osxfs.md#what-you-can-do)).
There is hope for significant performance improvement in the near term despite
these fundamental communication channel properties, which are difficult to


@ -250,7 +250,7 @@ know before you install](index.md#what-to-know-before-you-install).
* Restart your Mac to stop / discard any vestige of the daemon running from the previously installed version.
-* Run the the uninstall commands from the menu.
+* Run the uninstall commands from the menu.
<p></p>


@ -4,10 +4,10 @@ keywords: docker, opensource
title: Open source components and licensing
---
-Docker Desktop Editions are built using open source software software. For
+Docker Desktop Editions are built using open source software. For
details on the licensing, choose <img src="../images/whale-x.png">
-->&nbsp;**About** from within the application, then click **Acknowledgements**.
Docker Desktop Editions distribute some components that are licensed under the
GNU General Public License. You can download the source for these components
-[here](https://download.docker.com/opensource/License.tar.gz).
+[here](https://download.docker.com/opensource/License.tar.gz).


@ -49,7 +49,7 @@ can use in email or the forum to reference the upload.
### inotify on shared drives does not work
Currently, `inotify` does not work on Docker for Windows. This will become
-evident, for example, when when an application needs to read/write to a
+evident, for example, when an application needs to read/write to a
container across a mounted drive. This is a known issue that the team is working
on. Below is a temporary workaround, and a link to the issue.


@ -34,7 +34,7 @@ To get started, log in to Docker Hub and click the "Create &#x25BC;" menu item
at the top right of the screen. Then select [Create Automated
Build](https://hub.docker.com/add/automated-build/bitbucket/).
-Select the the linked Bitbucket account, and then choose a repository to set up
+Select the linked Bitbucket account, and then choose a repository to set up
an Automated Build for.
## The Bitbucket webhook
@ -46,4 +46,4 @@ You can also manually add a webhook from your repository's **Settings** page.
Set the URL to `https://registry.hub.docker.com/hooks/bitbucket`, to be
triggered for repository pushes.
-![bitbucket-hooks](images/bitbucket-hook.png)
+![bitbucket-hooks](images/bitbucket-hook.png)


@ -39,7 +39,7 @@ For Docker Cloud, Hub, and Store, log in using the web interface.
![Login using the web interface](images/login-cloud.png)
-You can also log in using the the `docker login` command. (You can read more about `docker login` [here](../engine/reference/commandline/login/).)
+You can also log in using the `docker login` command. (You can read more about `docker login` [here](../engine/reference/commandline/login/).)
> **Note:** When you use the `docker login` command, your credentials are stored
in your home directory in `.docker/config.json`. The password is hashed in this


@ -55,7 +55,7 @@ each month, and the charge will come from Docker, Inc. Your billing cycle is a
If your payment failed because the card expired or was canceled, you need to
update your credit card information or add an additional card.
-Click the user icon menu menu in the upper right corner, and click
+Click the user icon menu in the upper right corner, and click
**Billing**. Click the **Payment methods** tab to update your credit card and
contact information.
@ -77,4 +77,4 @@ You can view and download your all active licenses for an organization from the
Subscriptions page.
Click the user icon menu at the top right, choose **Subscriptions** and then
-select the organization from the **Accounts** drop down menu.
+select the organization from the **Accounts** drop down menu.


@ -219,7 +219,7 @@ compresses each log message. The accepted values are `gzip`, `zlib` and `none`.
The `gelf-compression-level` option can be used to change the level of
compression when `gzip` or `zlib` is selected as `gelf-compression-type`.
-Accepted value must be from from -1 to 9 (BestCompression). Higher levels
+Accepted value must be from -1 to 9 (BestCompression). Higher levels
typically run slower but compress more. Default value is 1 (BestSpeed).
## Fluentd options
@ -297,4 +297,4 @@ The Google Cloud Logging driver supports the following options:
```
For detailed information about working with this logging driver, see the
-[Google Cloud Logging driver](gcplogs.md). reference documentation.
+[Google Cloud Logging driver](gcplogs.md). reference documentation.


@ -70,10 +70,10 @@ to the Swarmkit API and overlay networking. The other nodes on the swarm must be
able to access the manager node on its advertise address IP address.
If you don't specify an advertise address, Docker checks if the system has a
-single IP address. If so, Docker uses the IP address with with the listening
-port `2377` by default. If the system has multiple IP addresses, you must
-specify the correct `--advertise-addr` to enable inter-manager communication
-and overlay networking:
+single IP address. If so, Docker uses the IP address with the listening port
+`2377` by default. If the system has multiple IP addresses, you must specify the
+correct `--advertise-addr` to enable inter-manager communication and overlay
+networking:
```bash
$ docker swarm init --advertise-addr <MANAGER-IP>
@ -135,7 +135,7 @@ SWMTKN-1-49nj1cmql0jkz5s954yi3oex3nedyz0fb0xx14ie39trti4wxv-8vxv8rssmk743ojnwacr
Be careful with the join tokens because they are the secrets necessary to join
the swarm. In particular, checking a secret into version control is a bad
-practice because it would allow anyone with access to the the application source
+practice because it would allow anyone with access to the application source
code to add new nodes to the swarm. Manager tokens are especially sensitive
because they allow a new manager node to join and gain control over the whole
swarm.
@ -169,4 +169,4 @@ To add a worker to this swarm, run the following command:
* [Join nodes to a swarm](join-nodes.md)
* `swarm init` [command line reference](../reference/commandline/swarm_init.md)
-* [Swarm mode tutorial](swarm-tutorial/index.md)
+* [Swarm mode tutorial](swarm-tutorial/index.md)


@ -11,7 +11,7 @@ is not a requirement to deploy a service.
1. Open a terminal and ssh into the machine where you run your manager node. For
example, the tutorial uses a machine named `manager1`.
-2. Run the the following command:
+2. Run the following command:
```bash
$ docker service create --replicas 1 --name helloworld alpine ping docker.com
@ -36,4 +36,4 @@ example, the tutorial uses a machine named `manager1`.
## What's next?
-Now you've deployed a service to the swarm, you're ready to [inspect the service](inspect-service.md).
+Now you've deployed a service to the swarm, you're ready to [inspect the service](inspect-service.md).


@ -120,7 +120,7 @@ you want to keep, `push` them Docker Hub or your private Docker Trusted
Registry before attempting this procedure.
Stop the Docker daemon. Then, ensure that you have a spare block device at
-`/dev/xvdb`. The device identifier may be be different in your environment and
+`/dev/xvdb`. The device identifier may be different in your environment and
you should substitute your own values throughout the procedure.
### Install Zfs on Ubuntu 16.04 LTS
@ -319,4 +319,4 @@ SSD.
performance. This is because they bypass the storage driver and do not incur
any of the potential overheads introduced by thin provisioning and
copy-on-write. For this reason, you should place heavy write workloads on data
-volumes.
+volumes.


@ -42,7 +42,7 @@ for an ordinary speaker of English with a basic university education. If your
prose is simple, clear, and straightforward it will translate readily.
One way to think about this is to assume Dockers users are generally university
-educated and read at at least a "16th" grade level (meaning they have a
+educated and read at least a "16th" grade level (meaning they have a
university degree). You can use a [readability
tester](https://readability-score.com/) to help guide your judgement. For
example, the readability score for the phrase "Containers should be ephemeral"
@ -273,4 +273,4 @@ call-outs is red.
Be sure to include descriptive alt-text for the graphic. This greatly helps
users with accessibility issues.
-Lastly, be sure you have permission to use any included graphics.
+Lastly, be sure you have permission to use any included graphics.


@ -99,7 +99,7 @@ you use the manager to install the `tar` and `xz` tools from the collection.
The system displays the available packages.
-8. Click on the the **msys-tar bin** package and choose **Mark for Installation**.
+8. Click on the **msys-tar bin** package and choose **Mark for Installation**.
9. Click on the **msys-xz bin** package and choose **Mark for Installation**.
@ -254,4 +254,4 @@ from GitHub.
## Where to go next
In the next section, you'll [learn how to set up and configure Git for
-contributing to Docker](set-up-git.md).
+contributing to Docker](set-up-git.md).


@ -79,7 +79,7 @@ authentication.
![](images/trust-diagram.jpg)
-The trusted third party in this diagram is the the Certificate Authority (CA)
+The trusted third party in this diagram is the Certificate Authority (CA)
server. Like the country in the passport example, a CA creates, signs, issues,
revokes certificates. Trust is established by installing the CA's root
certificate on the host running the Docker Engine daemon. The Docker Engine CLI then requests
@ -157,4 +157,4 @@ facing production workloads exposed to untrusted networks.
## Related information
* [Configure Docker Swarm for TLS](configure-tls.md)
-* [Docker security](/engine/security/security/)
+* [Docker security](/engine/security/security/)
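An editor's sketch of the CA concept in this hunk's context, assuming the `openssl` CLI is available: it mints a throwaway, self-signed root certificate (names and paths here are hypothetical) like the CA root the passage says gets installed on the daemon host.

```shell
# Create a short-lived self-signed CA root cert (demo names, not Docker's).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=demo-ca" -keyout /tmp/demo-ca-key.pem -out /tmp/demo-ca.pem 2>/dev/null
# Inspect the subject to confirm what was minted.
openssl x509 -in /tmp/demo-ca.pem -noout -subject
```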


@ -48,7 +48,7 @@ POST "/images/create" : "docker import" flow not implement
<code>GET "/containers/{name:.*}/json"</code>
</td>
<td>
-<code>HostIP</code> replaced by the the actual Node's IP if <code>HostIP</code> is <code>0.0.0.0</code>
+<code>HostIP</code> replaced by the actual Node's IP if <code>HostIP</code> is <code>0.0.0.0</code>
</td>
</tr>
<tr>
@ -64,7 +64,7 @@ POST "/images/create" : "docker import" flow not implement
<code>GET "/containers/json"</code>
</td>
<td>
-<code>HostIP</code> replaced by the the actual Node's IP if <code>HostIP</code> is <code>0.0.0.0</code>
+<code>HostIP</code> replaced by the actual Node's IP if <code>HostIP</code> is <code>0.0.0.0</code>
</td>
</tr>
<tr>
@ -178,4 +178,4 @@ $ docker run --rm -it yourprivateimage:latest
- [Docker Swarm overview](/swarm/)
- [Discovery options](/swarm/discovery/)
- [Scheduler strategies](/swarm/scheduler/strategy/)
-- [Scheduler filters](/swarm/scheduler/filter/)
+- [Scheduler filters](/swarm/scheduler/filter/)


@ -296,7 +296,7 @@ the containers at once. This extra credit
In general, Compose starts services in reverse order they appear in the file.
So, if you want a service to start before all the others, make it the last
-service in the file file. This application relies on a volume and a network,
+service in the file. This application relies on a volume and a network,
declare those at the bottom of the file.
3. Check your work against <a href="../docker-compose.yml" target="_blank">this
@ -417,4 +417,4 @@ Congratulations. You have successfully walked through manually deploying a
microservice-based application to a Swarm cluster. Of course, not every
deployment goes smoothly. Now that you've learned how to successfully deploy an
application at scale, you should learn [what to consider when troubleshooting
-large applications running on a Swarm cluster](troubleshoot.md).
+large applications running on a Swarm cluster](troubleshoot.md).


@ -22,7 +22,7 @@ While this example uses Docker Machine, this is only one example of an
infrastructure you can use. You can create the environment design on whatever
infrastructure you wish. For example, you could place the application on another
public cloud platform such as Azure or DigitalOcean, on premises in your data
-center, or even in in a test environment on your laptop.
+center, or even in a test environment on your laptop.
Finally, these instructions use some common `bash` command substitution techniques to
resolve some values, for example:
@ -430,4 +430,4 @@ commands below, notice the label you are applying to each node.
## Next Step
Your key-value store, load balancer, and Swarm cluster infrastructure are up. You are
-ready to [build and run the voting application](deploy-app.md) on it.
+ready to [build and run the voting application](deploy-app.md) on it.