diff --git a/compose/django.md b/compose/django.md index 60d401f208..b94e698eb5 100644 --- a/compose/django.md +++ b/compose/django.md @@ -112,7 +112,7 @@ In this step, you create a Django started project by building the image from the If you are running Docker on Linux, the files `django-admin` created are owned by root. This happens because the container runs as the root user. Change the - ownership of the the new files. + ownership of the new files. sudo chown -R $USER:$USER . diff --git a/compose/rails.md b/compose/rails.md index 8670bd2802..8061bbcac0 100644 --- a/compose/rails.md +++ b/compose/rails.md @@ -91,7 +91,7 @@ First, Compose will build the image for the `web` service using the `Dockerfile` If you are running Docker on Linux, the files `rails new` created are owned by root. This happens because the container runs as the root user. Change the -ownership of the the new files. +ownership of the new files. sudo chown -R $USER:$USER . diff --git a/compose/reference/envvars.md b/compose/reference/envvars.md index efb36b0121..2597861fb4 100644 --- a/compose/reference/envvars.md +++ b/compose/reference/envvars.md @@ -14,7 +14,10 @@ Docker command-line client. If you're using `docker-machine`, then the `eval "$( ## COMPOSE\_PROJECT\_NAME -Sets the project name. This value is prepended along with the service name to the container container on start up. For example, if you project name is `myapp` and it includes two services `db` and `web` then compose starts containers named `myapp_db_1` and `myapp_web_1` respectively. +Sets the project name. This value is prepended along with the service name to +the container on start up. For example, if you project name is `myapp` and it +includes two services `db` and `web` then compose starts containers named +`myapp_db_1` and `myapp_web_1` respectively. Setting this is optional. If you do not set this, the `COMPOSE_PROJECT_NAME` defaults to the `basename` of the project directory. See also the `-p` @@ -87,4 +90,4 @@ Users of Docker Machine and Docker Toolbox on Windows should always set this. - [User guide](../index.md) - [Installing Compose](../install.md) - [Compose file reference](../compose-file.md) -- [Environment file](../env-file.md) \ No newline at end of file +- [Environment file](../env-file.md) diff --git a/datacenter/dtr/2.0/high-availability/index.md b/datacenter/dtr/2.0/high-availability/index.md index b8f7ee2663..fc17e92873 100644 --- a/datacenter/dtr/2.0/high-availability/index.md +++ b/datacenter/dtr/2.0/high-availability/index.md @@ -57,7 +57,7 @@ they have dedicated resources for them. It also makes it easier to implement backup policies and disaster recovery plans for UCP and DTR. -To have have high-availability on UCP and DTR, you need a minimum of: +To have high-availability on UCP and DTR, you need a minimum of: * 3 dedicated nodes to install UCP with high availability, * 3 dedicated nodes to install DTR with high availability, @@ -68,7 +68,7 @@ To have have high-availability on UCP and DTR, you need a minimum of: ## Load balancing -DTR does not provide a load balancing service. You can use use an on-premises +DTR does not provide a load balancing service. You can use an on-premises or cloud-based load balancer to balance requests across multiple DTR replicas. Make sure you configure your load balancer to: @@ -82,4 +82,4 @@ not. 
## Where to go next * [Backups and disaster recovery](backups-and-disaster-recovery.md) -* [DTR architecture](../architecture.md) \ No newline at end of file +* [DTR architecture](../architecture.md) diff --git a/datacenter/dtr/2.0/install/index.md b/datacenter/dtr/2.0/install/index.md index 02a0eccc6a..affab7678b 100644 --- a/datacenter/dtr/2.0/install/index.md +++ b/datacenter/dtr/2.0/install/index.md @@ -67,7 +67,7 @@ To install DTR: 3. Check that DTR is running. - In your browser, navigate to the the Docker **Universal Control Plane** + In your browser, navigate to the Docker **Universal Control Plane** web UI, and navigate to the **Applications** screen. DTR should be listed as an application. @@ -143,7 +143,7 @@ replicas: 3. Check that all replicas are running. - In your browser, navigate to the the Docker **Universal Control Plane** + In your browser, navigate to the Docker **Universal Control Plane** web UI, and navigate to the **Applications** screen. All replicas should be displayed. @@ -158,4 +158,4 @@ replicas: ## See also * [Install DTR offline](install-dtr-offline.md) -* [Upgrade DTR](upgrade/upgrade-major.md) \ No newline at end of file +* [Upgrade DTR](upgrade/upgrade-major.md) diff --git a/datacenter/dtr/2.0/install/system-requirements.md b/datacenter/dtr/2.0/install/system-requirements.md index 24b197827c..d7e6a3513a 100644 --- a/datacenter/dtr/2.0/install/system-requirements.md +++ b/datacenter/dtr/2.0/install/system-requirements.md @@ -11,7 +11,7 @@ Before installing, be sure your infrastructure has these requirements. ## Software requirements -To install DTR on a node, that node node must be part of a Docker Universal +To install DTR on a node, that node must be part of a Docker Universal Control Plane 1.1 cluster. ## Ports used @@ -45,4 +45,4 @@ Docker Datacenter is a software subscription that includes 3 products: ## Where to go next * [DTR architecture](../architecture.md) -* [Install DTR](index.md) \ No newline at end of file +* [Install DTR](index.md) diff --git a/datacenter/dtr/2.0/install/upgrade/upgrade-major.md b/datacenter/dtr/2.0/install/upgrade/upgrade-major.md index c88bd00737..f06fde4092 100644 --- a/datacenter/dtr/2.0/install/upgrade/upgrade-major.md +++ b/datacenter/dtr/2.0/install/upgrade/upgrade-major.md @@ -63,7 +63,7 @@ To start the migration: 2. Use the docker/dtr migrate command. When you run the docker/dtr migrate command, Docker pulls the necessary - images from Docker Hub. If the the host where DTR 1.4.3 is not connected + images from Docker Hub. If the host where DTR 1.4.3 is not connected to the internet, you need to [download the images to the host](../install-dtr-offline.md). @@ -183,7 +183,7 @@ replicas: 3. Check that all replicas are running. - In your browser, navigate to the the Docker **Universal Control Plane** + In your browser, navigate to the Docker **Universal Control Plane** web UI, and navigate to the **Applications** screen. All replicas should be displayed. @@ -204,4 +204,4 @@ containers. 
## Where to go next * [Upgrade to DTR 2.x](index.md) -* [Monitor DTR](../../monitor-troubleshoot/index.md) \ No newline at end of file +* [Monitor DTR](../../monitor-troubleshoot/index.md) diff --git a/datacenter/dtr/2.0/monitor-troubleshoot/troubleshoot.md b/datacenter/dtr/2.0/monitor-troubleshoot/troubleshoot.md index c0d9c2fc4e..892088c9db 100644 --- a/datacenter/dtr/2.0/monitor-troubleshoot/troubleshoot.md +++ b/datacenter/dtr/2.0/monitor-troubleshoot/troubleshoot.md @@ -18,7 +18,7 @@ docker run -it --rm --net dtr-ol --name overlay-test1 --entrypoint sh docker/dtr docker run -it --rm --net dtr-ol --name overlay-test2 --entrypoint ping docker/dtr -c 3 overlay-test1 ``` -You can create new new overlay network for this test with `docker network create +You can create new overlay network for this test with `docker network create -d overaly network-name`. You can also use any images that contain `sh` and `ping` for this test. @@ -65,4 +65,4 @@ via the following docker command: ``` docker run --rm -v dtr-ca-$REPLICA_ID:/ca --net dtr-br -it --entrypoint /etcdctl docker/dtr-etcd:v2.2.4 --endpoint https://dtr-etcd-$REPLICA_ID.dtr-br:2379 --ca-file /ca/etcd/cert.pem --key-file /ca/etcd-client/key.pem --cert-file /ca/etcd-client/cert.pem -``` \ No newline at end of file +``` diff --git a/datacenter/dtr/2.0/user-management/create-and-manage-teams.md b/datacenter/dtr/2.0/user-management/create-and-manage-teams.md index 79f4027fa6..ceb7e9ee8c 100644 --- a/datacenter/dtr/2.0/user-management/create-and-manage-teams.md +++ b/datacenter/dtr/2.0/user-management/create-and-manage-teams.md @@ -14,7 +14,7 @@ A team defines the permissions a set of users have for a set of repositories. To create a new team, go to the **DTR web UI**, and navigate to the **Organizations** page. Then **click the organization** where you want to create the team. In this -example, we'll create the 'billing' team team under the 'whale' organization. +example, we'll create the 'billing' team under the 'whale' organization. ![](../images/create-and-manage-teams-1.png) @@ -54,4 +54,4 @@ There are three permission levels available: ## Where to go next * [Create and manage users](create-and-manage-users.md) -* [Create and manage organizations](create-and-manage-orgs.md) \ No newline at end of file +* [Create and manage organizations](create-and-manage-orgs.md) diff --git a/datacenter/dtr/2.1/guides/high-availability/index.md b/datacenter/dtr/2.1/guides/high-availability/index.md index 36a0f58374..486303000f 100644 --- a/datacenter/dtr/2.1/guides/high-availability/index.md +++ b/datacenter/dtr/2.1/guides/high-availability/index.md @@ -56,7 +56,7 @@ they have dedicated resources for them. It also makes it easier to implement backup policies and disaster recovery plans for UCP and DTR. -To have have high-availability on UCP and DTR, you need a minimum of: +To have high-availability on UCP and DTR, you need a minimum of: * 3 dedicated nodes to install UCP with high availability, * 3 dedicated nodes to install DTR with high availability, @@ -67,7 +67,7 @@ To have have high-availability on UCP and DTR, you need a minimum of: ## Load balancing -DTR does not provide a load balancing service. You can use use an on-premises +DTR does not provide a load balancing service. You can use an on-premises or cloud-based load balancer to balance requests across multiple DTR replicas. 
Make sure you configure your load balancer to: diff --git a/datacenter/dtr/2.1/guides/install/index.md b/datacenter/dtr/2.1/guides/install/index.md index 1cec50cf85..04a77d110e 100644 --- a/datacenter/dtr/2.1/guides/install/index.md +++ b/datacenter/dtr/2.1/guides/install/index.md @@ -56,7 +56,7 @@ Check the [reference documentation to learn more](../../reference/cli/install.md ## Step 4. Check that DTR is running -In your browser, navigate to the the Docker **Universal Control Plane** +In your browser, navigate to the Docker **Universal Control Plane** web UI, and navigate to the **Applications** screen. DTR should be listed as an application. @@ -122,7 +122,7 @@ replicas: 4. Check that all replicas are running. - In your browser, navigate to the the Docker **Universal Control Plane** + In your browser, navigate to the Docker **Universal Control Plane** web UI, and navigate to the **Applications** screen. All replicas should be displayed. diff --git a/datacenter/dtr/2.1/guides/monitor-troubleshoot/troubleshoot.md b/datacenter/dtr/2.1/guides/monitor-troubleshoot/troubleshoot.md index 3a1726603b..81ef8688b5 100644 --- a/datacenter/dtr/2.1/guides/monitor-troubleshoot/troubleshoot.md +++ b/datacenter/dtr/2.1/guides/monitor-troubleshoot/troubleshoot.md @@ -14,7 +14,7 @@ docker run -it --rm --net dtr-ol --name overlay-test1 --entrypoint sh docker/dtr docker run -it --rm --net dtr-ol --name overlay-test2 --entrypoint ping docker/dtr -c 3 overlay-test1 ``` -You can create new new overlay network for this test with `docker network create -d overaly network-name`. +You can create new overlay network for this test with `docker network create -d overaly network-name`. You can also use any images that contain `sh` and `ping` for this test. If the second command succeeds, overlay networking is working. diff --git a/datacenter/dtr/2.1/guides/user-management/create-and-manage-teams.md b/datacenter/dtr/2.1/guides/user-management/create-and-manage-teams.md index ec526fb17f..ae90f33be4 100644 --- a/datacenter/dtr/2.1/guides/user-management/create-and-manage-teams.md +++ b/datacenter/dtr/2.1/guides/user-management/create-and-manage-teams.md @@ -13,7 +13,7 @@ A team defines the permissions a set of users have for a set of repositories. To create a new team, go to the **DTR web UI**, and navigate to the **Organizations** page. Then **click the organization** where you want to create the team. In this -example, we'll create the 'billing' team team under the 'whale' organization. +example, we'll create the 'billing' team under the 'whale' organization. ![](../images/create-and-manage-teams-1.png) diff --git a/datacenter/ucp/1.1/configuration/multi-host-networking.md b/datacenter/ucp/1.1/configuration/multi-host-networking.md index c4c9289a78..cced7b3d3a 100644 --- a/datacenter/ucp/1.1/configuration/multi-host-networking.md +++ b/datacenter/ucp/1.1/configuration/multi-host-networking.md @@ -136,7 +136,7 @@ To enable the networking feature, do the following. INFO[0001] Successfully delivered signal to daemon ``` - The `host-address` value is the the external address of the node you're + The `host-address` value is the external address of the node you're operating against. This is the address other nodes when communicating with each other across the communication network. 
@@ -275,4 +275,4 @@ Remember, you'll need to restart the daemon each time you change the start optio ## Where to go next * [Integrate with DTR](dtr-integration.md) -* [Set up high availability](../high-availability/set-up-high-availability.md) \ No newline at end of file +* [Set up high availability](../high-availability/set-up-high-availability.md) diff --git a/datacenter/ucp/1.1/install-sandbox.md b/datacenter/ucp/1.1/install-sandbox.md index dd84a19f55..ec12c6af02 100644 --- a/datacenter/ucp/1.1/install-sandbox.md +++ b/datacenter/ucp/1.1/install-sandbox.md @@ -181,7 +181,7 @@ host for the controller works fine. ```` Running this `eval` command sends the `docker` commands in the following - steps to the Docker Engine on on `node1`. + steps to the Docker Engine on `node1`. c. Verify that `node1` is the active environment. diff --git a/datacenter/ucp/2.0/guides/release-notes.md b/datacenter/ucp/2.0/guides/release-notes.md index 531552b24b..87e6f3d565 100644 --- a/datacenter/ucp/2.0/guides/release-notes.md +++ b/datacenter/ucp/2.0/guides/release-notes.md @@ -73,7 +73,7 @@ of specific teams * Added an HTTP routing mesh for enabling hostname routing for services (experimental) * The UCP web UI now lets you know when a new version is available, and upgrades -to the the new version with a single click +to the new version with a single click **Installer** diff --git a/datacenter/ucp/2.0/reference/cli/uninstall-ucp.md b/datacenter/ucp/2.0/reference/cli/uninstall-ucp.md index a86c2e954d..0a3944dd4e 100644 --- a/datacenter/ucp/2.0/reference/cli/uninstall-ucp.md +++ b/datacenter/ucp/2.0/reference/cli/uninstall-ucp.md @@ -23,7 +23,7 @@ docker run -it --rm \ This command uninstalls UCP from the swarm, but preserves the swarm so that your applications can continue running. -After UCP is uninstalled you can use the the 'docker swarm leave' and +After UCP is uninstalled you can use the 'docker swarm leave' and 'docker node rm' commands to remove nodes from the swarm. Once UCP is uninstalled, you won't be able to join nodes to the swarm unless diff --git a/docker-cloud/apps/ports.md b/docker-cloud/apps/ports.md index a5b29d10bb..2f72455632 100644 --- a/docker-cloud/apps/ports.md +++ b/docker-cloud/apps/ports.md @@ -66,7 +66,7 @@ option, find the published port on the service detail page. ### Using the API/CLI See the API and CLI documentation [here](/apidocs/docker-cloud.md#service) on -how to launch a service with a a published port. +how to launch a service with a published port. ## Check which ports a service has published @@ -81,7 +81,7 @@ Ports that are exposed internally display with a closed (locked) padlock icon and published ports (that are exposed to the internet) show an open (unlocked) padlock icon. -* Exposed ports are listed as as **container port/protocol** +* Exposed ports are listed as **container port/protocol** * Published ports are listed as **node port**->**container port/protocol** --> ![](images/ports-published.png) @@ -121,4 +121,4 @@ not dynamic) is assigned a DNS endpoint in the format running, in a [round-robin fashion](https://en.wikipedia.org/wiki/Round-robin_DNS). -You can see a list of service endpoints on the stack and service detail views, under the **Endpoints** tab. \ No newline at end of file +You can see a list of service endpoints on the stack and service detail views, under the **Endpoints** tab. 
diff --git a/docker-cloud/apps/service-links.md b/docker-cloud/apps/service-links.md index 38f0958a21..8265ae1a84 100644 --- a/docker-cloud/apps/service-links.md +++ b/docker-cloud/apps/service-links.md @@ -118,7 +118,7 @@ Environment variables specified in the service definition are instantiated in ea These environment variables are prefixed with the `HOSTNAME_ENV_` in each container. -In our example, if we launch our `my-web-app` service with an environment variable of `WEBROOT=/login`, the following environment variables are set and available available in the proxy containers: +In our example, if we launch our `my-web-app` service with an environment variable of `WEBROOT=/login`, the following environment variables are set and available in the proxy containers: | Name | Value | |:------------------|:---------| @@ -161,4 +161,4 @@ Where: These environment variables are also copied to linked containers with the `NAME_ENV_` prefix. -If you provide API access to your service, you can use the generated token (stored in `DOCKERCLOUD_AUTH`) to access these API URLs to gather information or automate operations, such as scaling. \ No newline at end of file +If you provide API access to your service, you can use the generated token (stored in `DOCKERCLOUD_AUTH`) to access these API URLs to gather information or automate operations, such as scaling. diff --git a/docker-cloud/getting-started/deploy-app/7_scale_the_service.md b/docker-cloud/getting-started/deploy-app/7_scale_the_service.md index 6cbc58102b..528a29b98a 100644 --- a/docker-cloud/getting-started/deploy-app/7_scale_the_service.md +++ b/docker-cloud/getting-started/deploy-app/7_scale_the_service.md @@ -53,7 +53,7 @@ web-2 ab045c42 ▶ Running my-username/python-quickstart:late Use either of the URLs from the `container ps` command to visit one of your service's containers, either using your browser or curl. -In the example output above, the URL `web-1.my-username.cont.dockerapp.io:49162` reaches the web app on the first container, and `web-2.my-username.cont.dockerapp.io:49156` reaches the web app on the the second container. +In the example output above, the URL `web-1.my-username.cont.dockerapp.io:49162` reaches the web app on the first container, and `web-2.my-username.cont.dockerapp.io:49156` reaches the web app on the second container. If you use curl to visit the pages, you should see something like this: @@ -66,4 +66,4 @@ Hello Python Users!
Hostname: web-2
Counter: Redis Cache not found, coun Congratulations! You now have *two* containers running in your **web** service. -Next: [View service logs](8_view_logs.md) \ No newline at end of file +Next: [View service logs](8_view_logs.md) diff --git a/docker-cloud/infrastructure/link-do.md b/docker-cloud/infrastructure/link-do.md index 7cea8c0371..097fe6a582 100644 --- a/docker-cloud/infrastructure/link-do.md +++ b/docker-cloud/infrastructure/link-do.md @@ -28,4 +28,4 @@ Once you log in, a message appears prompting you to confirm the link. ## What's next? -You're ready to start using using DigitalOcean as the infrastructure provider for Docker Cloud! If you came here from the tutorial, click here to [continue the tutorial and deploy your first node](../getting-started/your_first_node.md). \ No newline at end of file +You're ready to start using DigitalOcean as the infrastructure provider for Docker Cloud! If you came here from the tutorial, click here to [continue the tutorial and deploy your first node](../getting-started/your_first_node.md). diff --git a/docker-for-mac/index.md b/docker-for-mac/index.md index fbae5f6aa8..8290392cb6 100644 --- a/docker-for-mac/index.md +++ b/docker-for-mac/index.md @@ -276,7 +276,9 @@ ln -s /Applications/Docker.app/Contents/Resources/etc/docker-compose.bash-comple * Try out the [Getting Started with Docker](/engine/getstarted/index.md) tutorial. -* Dig in deeper with [learn by example](/engine/tutorials/index.md) tutorials on on building images, running containers, networking, managing data, and storing images on Docker Hub. +* Dig in deeper with [learn by example](/engine/tutorials/index.md) tutorials on + building images, running containers, networking, managing data, and storing + images on Docker Hub. * See [Example Applications](examples.md) for example applications that include setting up services and databases in Docker Compose. diff --git a/docker-for-mac/osxfs.md b/docker-for-mac/osxfs.md index a1e7d88b5d..cf15e164ce 100644 --- a/docker-for-mac/osxfs.md +++ b/docker-for-mac/osxfs.md @@ -163,21 +163,21 @@ GB/s. With large sequential IO operations, `osxfs` can achieve throughput of around 250 MB/s which, while not native speed, will not be the bottleneck for most applications which perform acceptably on HDDs. -Latency is the time it takes for a file system system call to complete. For -instance, the time between a thread issuing write in a container and resuming -with the number of bytes written. With a classical block-based file system, this -latency is typically under 10μs (microseconds). With `osxfs`, latency is -presently around 200μs for most operations or 20x slower. For workloads which -demand many sequential roundtrips, this results in significant observable -slowdown. To reduce the latency, we need to shorten the data path from a Linux -system call to OS X and back again. This requires tuning each component in the -data path in turn -- some of which require significant engineering effort. Even -if we achieve a huge latency reduction of 100μs/roundtrip, we will still "only" -see a doubling of performance. This is typical of performance engineering, which -requires significant effort to analyze slowdowns and develop optimized -components. We know how we can likely halve the roundtrip time but we haven't -implemented those improvements yet (more on this below in [What you can -do](osxfs.md#what-you-can-do)). +Latency is the time it takes for a file system call to complete. 
For instance, +the time between a thread issuing write in a container and resuming with the +number of bytes written. With a classical block-based file system, this latency +is typically under 10μs (microseconds). With `osxfs`, latency is presently +around 200μs for most operations or 20x slower. For workloads which demand many +sequential roundtrips, this results in significant observable slowdown. To +reduce the latency, we need to shorten the data path from a Linux system call to +OS X and back again. This requires tuning each component in the data path in +turn -- some of which require significant engineering effort. Even if we achieve +a huge latency reduction of 100μs/roundtrip, we will still "only" see a doubling +of performance. This is typical of performance engineering, which requires +significant effort to analyze slowdowns and develop optimized components. We +know how we can likely halve the roundtrip time but we haven't implemented those +improvements yet (more on this below in +[What you can do](osxfs.md#what-you-can-do)). There is hope for significant performance improvement in the near term despite these fundamental communication channel properties, which are difficult to diff --git a/docker-for-mac/troubleshoot.md b/docker-for-mac/troubleshoot.md index e9500f1cd2..81995091d3 100644 --- a/docker-for-mac/troubleshoot.md +++ b/docker-for-mac/troubleshoot.md @@ -250,7 +250,7 @@ know before you install](index.md#what-to-know-before-you-install). * Restart your Mac to stop / discard any vestige of the daemon running from the previously installed version. - * Run the the uninstall commands from the menu. + * Run the uninstall commands from the menu.
diff --git a/docker-for-windows/opensource.md b/docker-for-windows/opensource.md index c35bb07d2c..ac5d22f1cf 100644 --- a/docker-for-windows/opensource.md +++ b/docker-for-windows/opensource.md @@ -4,10 +4,10 @@ keywords: docker, opensource title: Open source components and licensing --- -Docker Desktop Editions are built using open source software software. For +Docker Desktop Editions are built using open source software. For details on the licensing, choose --> **About** from within the application, then click **Acknowledgements**. Docker Desktop Editions distribute some components that are licensed under the GNU General Public License. You can download the source for these components -[here](https://download.docker.com/opensource/License.tar.gz). \ No newline at end of file +[here](https://download.docker.com/opensource/License.tar.gz). diff --git a/docker-for-windows/troubleshoot.md b/docker-for-windows/troubleshoot.md index e830f54c76..c2ed96cd00 100644 --- a/docker-for-windows/troubleshoot.md +++ b/docker-for-windows/troubleshoot.md @@ -49,7 +49,7 @@ can use in email or the forum to reference the upload. ### inotify on shared drives does not work Currently, `inotify` does not work on Docker for Windows. This will become -evident, for example, when when an application needs to read/write to a +evident, for example, when an application needs to read/write to a container across a mounted drive. This is a known issue that the team is working on. Below is a temporary workaround, and a link to the issue. diff --git a/docker-hub/bitbucket.md b/docker-hub/bitbucket.md index bf8ba8de3c..9182c151d5 100644 --- a/docker-hub/bitbucket.md +++ b/docker-hub/bitbucket.md @@ -34,7 +34,7 @@ To get started, log in to Docker Hub and click the "Create ▼" menu item at the top right of the screen. Then select [Create Automated Build](https://hub.docker.com/add/automated-build/bitbucket/). -Select the the linked Bitbucket account, and then choose a repository to set up +Select the linked Bitbucket account, and then choose a repository to set up an Automated Build for. ## The Bitbucket webhook @@ -46,4 +46,4 @@ You can also manually add a webhook from your repository's **Settings** page. Set the URL to `https://registry.hub.docker.com/hooks/bitbucket`, to be triggered for repository pushes. -![bitbucket-hooks](images/bitbucket-hook.png) \ No newline at end of file +![bitbucket-hooks](images/bitbucket-hook.png) diff --git a/docker-id/index.md b/docker-id/index.md index 8f43bbce5b..9901a8aff8 100644 --- a/docker-id/index.md +++ b/docker-id/index.md @@ -39,7 +39,7 @@ For Docker Cloud, Hub, and Store, log in using the web interface. ![Login using the web interface](images/login-cloud.png) -You can also log in using the the `docker login` command. (You can read more about `docker login` [here](../engine/reference/commandline/login/).) +You can also log in using the `docker login` command. (You can read more about `docker login` [here](../engine/reference/commandline/login/).) > **Note:** When you use the `docker login` command, your credentials are stored in your home directory in `.docker/config.json`. The password is hashed in this diff --git a/docker-store/faq.md b/docker-store/faq.md index 569cbf5b69..6abdb1f31e 100644 --- a/docker-store/faq.md +++ b/docker-store/faq.md @@ -55,7 +55,7 @@ each month, and the charge will come from Docker, Inc. Your billing cycle is a If your payment failed because the card expired or was canceled, you need to update your credit card information or add an additional card. 
-Click the user icon menu menu in the upper right corner, and click +Click the user icon menu in the upper right corner, and click **Billing**. Click the **Payment methods** tab to update your credit card and contact information. @@ -77,4 +77,4 @@ You can view and download your all active licenses for an organization from the Subscriptions page. Click the user icon menu at the top right, choose **Subscriptions** and then -select the organization from the **Accounts** drop down menu. \ No newline at end of file +select the organization from the **Accounts** drop down menu. diff --git a/engine/admin/logging/overview.md b/engine/admin/logging/overview.md index d2d008d5a9..4e3a08fc39 100644 --- a/engine/admin/logging/overview.md +++ b/engine/admin/logging/overview.md @@ -219,7 +219,7 @@ compresses each log message. The accepted values are `gzip`, `zlib` and `none`. The `gelf-compression-level` option can be used to change the level of compression when `gzip` or `zlib` is selected as `gelf-compression-type`. -Accepted value must be from from -1 to 9 (BestCompression). Higher levels +Accepted value must be from -1 to 9 (BestCompression). Higher levels typically run slower but compress more. Default value is 1 (BestSpeed). ## Fluentd options @@ -297,4 +297,4 @@ The Google Cloud Logging driver supports the following options: ``` For detailed information about working with this logging driver, see the -[Google Cloud Logging driver](gcplogs.md). reference documentation. \ No newline at end of file +[Google Cloud Logging driver](gcplogs.md). reference documentation. diff --git a/engine/swarm/swarm-mode.md b/engine/swarm/swarm-mode.md index 1817655d72..32d0e8c6aa 100644 --- a/engine/swarm/swarm-mode.md +++ b/engine/swarm/swarm-mode.md @@ -70,10 +70,10 @@ to the Swarmkit API and overlay networking. The other nodes on the swarm must be able to access the manager node on its advertise address IP address. If you don't specify an advertise address, Docker checks if the system has a -single IP address. If so, Docker uses the IP address with with the listening -port `2377` by default. If the system has multiple IP addresses, you must -specify the correct `--advertise-addr` to enable inter-manager communication -and overlay networking: +single IP address. If so, Docker uses the IP address with the listening port +`2377` by default. If the system has multiple IP addresses, you must specify the +correct `--advertise-addr` to enable inter-manager communication and overlay +networking: ```bash $ docker swarm init --advertise-addr @@ -135,7 +135,7 @@ SWMTKN-1-49nj1cmql0jkz5s954yi3oex3nedyz0fb0xx14ie39trti4wxv-8vxv8rssmk743ojnwacr Be careful with the join tokens because they are the secrets necessary to join the swarm. In particular, checking a secret into version control is a bad -practice because it would allow anyone with access to the the application source +practice because it would allow anyone with access to the application source code to add new nodes to the swarm. Manager tokens are especially sensitive because they allow a new manager node to join and gain control over the whole swarm. 
@@ -169,4 +169,4 @@ To add a worker to this swarm, run the following command: * [Join nodes to a swarm](join-nodes.md) * `swarm init` [command line reference](../reference/commandline/swarm_init.md) -* [Swarm mode tutorial](swarm-tutorial/index.md) \ No newline at end of file +* [Swarm mode tutorial](swarm-tutorial/index.md) diff --git a/engine/swarm/swarm-tutorial/deploy-service.md b/engine/swarm/swarm-tutorial/deploy-service.md index 540cc2d9eb..46474d6db4 100644 --- a/engine/swarm/swarm-tutorial/deploy-service.md +++ b/engine/swarm/swarm-tutorial/deploy-service.md @@ -11,7 +11,7 @@ is not a requirement to deploy a service. 1. Open a terminal and ssh into the machine where you run your manager node. For example, the tutorial uses a machine named `manager1`. -2. Run the the following command: +2. Run the following command: ```bash $ docker service create --replicas 1 --name helloworld alpine ping docker.com @@ -36,4 +36,4 @@ example, the tutorial uses a machine named `manager1`. ## What's next? -Now you've deployed a service to the swarm, you're ready to [inspect the service](inspect-service.md). \ No newline at end of file +Now you've deployed a service to the swarm, you're ready to [inspect the service](inspect-service.md). diff --git a/engine/userguide/storagedriver/zfs-driver.md b/engine/userguide/storagedriver/zfs-driver.md index 5f54a5fcd3..55febdf386 100644 --- a/engine/userguide/storagedriver/zfs-driver.md +++ b/engine/userguide/storagedriver/zfs-driver.md @@ -120,7 +120,7 @@ you want to keep, `push` them Docker Hub or your private Docker Trusted Registry before attempting this procedure. Stop the Docker daemon. Then, ensure that you have a spare block device at -`/dev/xvdb`. The device identifier may be be different in your environment and +`/dev/xvdb`. The device identifier may be different in your environment and you should substitute your own values throughout the procedure. ### Install Zfs on Ubuntu 16.04 LTS @@ -319,4 +319,4 @@ SSD. performance. This is because they bypass the storage driver and do not incur any of the potential overheads introduced by thin provisioning and copy-on-write. For this reason, you should place heavy write workloads on data -volumes. \ No newline at end of file +volumes. diff --git a/opensource/doc-style.md b/opensource/doc-style.md index 02ebaa2d5d..1642a2c63c 100644 --- a/opensource/doc-style.md +++ b/opensource/doc-style.md @@ -42,7 +42,7 @@ for an ordinary speaker of English with a basic university education. If your prose is simple, clear, and straightforward it will translate readily. One way to think about this is to assume Docker’s users are generally university -educated and read at at least a "16th" grade level (meaning they have a +educated and read at least a "16th" grade level (meaning they have a university degree). You can use a [readability tester](https://readability-score.com/) to help guide your judgement. For example, the readability score for the phrase "Containers should be ephemeral" @@ -273,4 +273,4 @@ call-outs is red. Be sure to include descriptive alt-text for the graphic. This greatly helps users with accessibility issues. -Lastly, be sure you have permission to use any included graphics. \ No newline at end of file +Lastly, be sure you have permission to use any included graphics. 
diff --git a/opensource/project/software-req-win.md b/opensource/project/software-req-win.md index 1c75f76161..4952f62cbc 100644 --- a/opensource/project/software-req-win.md +++ b/opensource/project/software-req-win.md @@ -99,7 +99,7 @@ you use the manager to install the `tar` and `xz` tools from the collection. The system displays the available packages. -8. Click on the the **msys-tar bin** package and choose **Mark for Installation**. +8. Click on the **msys-tar bin** package and choose **Mark for Installation**. 9. Click on the **msys-xz bin** package and choose **Mark for Installation**. @@ -254,4 +254,4 @@ from GitHub. ## Where to go next In the next section, you'll [learn how to set up and configure Git for -contributing to Docker](set-up-git.md). \ No newline at end of file +contributing to Docker](set-up-git.md). diff --git a/swarm/secure-swarm-tls.md b/swarm/secure-swarm-tls.md index f682475b50..2737ed07e2 100644 --- a/swarm/secure-swarm-tls.md +++ b/swarm/secure-swarm-tls.md @@ -79,7 +79,7 @@ authentication. ![](images/trust-diagram.jpg) -The trusted third party in this diagram is the the Certificate Authority (CA) +The trusted third party in this diagram is the Certificate Authority (CA) server. Like the country in the passport example, a CA creates, signs, issues, revokes certificates. Trust is established by installing the CA's root certificate on the host running the Docker Engine daemon. The Docker Engine CLI then requests @@ -157,4 +157,4 @@ facing production workloads exposed to untrusted networks. ## Related information * [Configure Docker Swarm for TLS](configure-tls.md) -* [Docker security](/engine/security/security/) \ No newline at end of file +* [Docker security](/engine/security/security/) diff --git a/swarm/swarm-api.md b/swarm/swarm-api.md index acd91b4f74..2ab53320ed 100644 --- a/swarm/swarm-api.md +++ b/swarm/swarm-api.md @@ -48,7 +48,7 @@ POST "/images/create" : "docker import" flow not implement GET "/containers/{name:.*}/json" - HostIP replaced by the the actual Node's IP if HostIP is 0.0.0.0 + HostIP replaced by the actual Node's IP if HostIP is 0.0.0.0 @@ -64,7 +64,7 @@ POST "/images/create" : "docker import" flow not implement GET "/containers/json" - HostIP replaced by the the actual Node's IP if HostIP is 0.0.0.0 + HostIP replaced by the actual Node's IP if HostIP is 0.0.0.0 @@ -178,4 +178,4 @@ $ docker run --rm -it yourprivateimage:latest - [Docker Swarm overview](/swarm/) - [Discovery options](/swarm/discovery/) - [Scheduler strategies](/swarm/scheduler/strategy/) -- [Scheduler filters](/swarm/scheduler/filter/) \ No newline at end of file +- [Scheduler filters](/swarm/scheduler/filter/) diff --git a/swarm/swarm_at_scale/deploy-app.md b/swarm/swarm_at_scale/deploy-app.md index 45e1d8f5ee..7ab2e0fd29 100644 --- a/swarm/swarm_at_scale/deploy-app.md +++ b/swarm/swarm_at_scale/deploy-app.md @@ -296,7 +296,7 @@ the containers at once. This extra credit In general, Compose starts services in reverse order they appear in the file. So, if you want a service to start before all the others, make it the last - service in the file file. This application relies on a volume and a network, + service in the file. This application relies on a volume and a network, declare those at the bottom of the file. 3. Check your work against this @@ -417,4 +417,4 @@ Congratulations. You have successfully walked through manually deploying a microservice-based application to a Swarm cluster. Of course, not every deployment goes smoothly. 
Now that you've learned how to successfully deploy an application at scale, you should learn [what to consider when troubleshooting -large applications running on a Swarm cluster](troubleshoot.md). \ No newline at end of file +large applications running on a Swarm cluster](troubleshoot.md). diff --git a/swarm/swarm_at_scale/deploy-infra.md b/swarm/swarm_at_scale/deploy-infra.md index ffcbf85511..ce7431dd3a 100644 --- a/swarm/swarm_at_scale/deploy-infra.md +++ b/swarm/swarm_at_scale/deploy-infra.md @@ -22,7 +22,7 @@ While this example uses Docker Machine, this is only one example of an infrastructure you can use. You can create the environment design on whatever infrastructure you wish. For example, you could place the application on another public cloud platform such as Azure or DigitalOcean, on premises in your data -center, or even in in a test environment on your laptop. +center, or even in a test environment on your laptop. Finally, these instructions use some common `bash` command substitution techniques to resolve some values, for example: @@ -430,4 +430,4 @@ commands below, notice the label you are applying to each node. ## Next Step Your key-value store, load balancer, and Swarm cluster infrastructure are up. You are -ready to [build and run the voting application](deploy-app.md) on it. \ No newline at end of file +ready to [build and run the voting application](deploy-app.md) on it.