diff --git a/compose/django.md b/compose/django.md index 60d401f208..b94e698eb5 100644 --- a/compose/django.md +++ b/compose/django.md @@ -112,7 +112,7 @@ In this step, you create a Django started project by building the image from the If you are running Docker on Linux, the files `django-admin` created are owned by root. This happens because the container runs as the root user. Change the - ownership of the the new files. + ownership of the new files. sudo chown -R $USER:$USER . diff --git a/compose/rails.md b/compose/rails.md index 8670bd2802..8061bbcac0 100644 --- a/compose/rails.md +++ b/compose/rails.md @@ -91,7 +91,7 @@ First, Compose will build the image for the `web` service using the `Dockerfile` If you are running Docker on Linux, the files `rails new` created are owned by root. This happens because the container runs as the root user. Change the -ownership of the the new files. +ownership of the new files. sudo chown -R $USER:$USER . diff --git a/compose/reference/envvars.md b/compose/reference/envvars.md index efb36b0121..2597861fb4 100644 --- a/compose/reference/envvars.md +++ b/compose/reference/envvars.md @@ -14,7 +14,10 @@ Docker command-line client. If you're using `docker-machine`, then the `eval "$( ## COMPOSE\_PROJECT\_NAME -Sets the project name. This value is prepended along with the service name to the container container on start up. For example, if you project name is `myapp` and it includes two services `db` and `web` then compose starts containers named `myapp_db_1` and `myapp_web_1` respectively. +Sets the project name. This value is prepended along with the service name to +the container on start up. For example, if your project name is `myapp` and it +includes two services `db` and `web` then compose starts containers named +`myapp_db_1` and `myapp_web_1` respectively. Setting this is optional. If you do not set this, the `COMPOSE_PROJECT_NAME` defaults to the `basename` of the project directory.
See also the `-p` @@ -87,4 +90,4 @@ Users of Docker Machine and Docker Toolbox on Windows should always set this. - [User guide](../index.md) - [Installing Compose](../install.md) - [Compose file reference](../compose-file.md) -- [Environment file](../env-file.md) \ No newline at end of file +- [Environment file](../env-file.md) diff --git a/datacenter/dtr/2.0/high-availability/index.md b/datacenter/dtr/2.0/high-availability/index.md index b8f7ee2663..fc17e92873 100644 --- a/datacenter/dtr/2.0/high-availability/index.md +++ b/datacenter/dtr/2.0/high-availability/index.md @@ -57,7 +57,7 @@ they have dedicated resources for them. It also makes it easier to implement backup policies and disaster recovery plans for UCP and DTR. -To have have high-availability on UCP and DTR, you need a minimum of: +To have high-availability on UCP and DTR, you need a minimum of: * 3 dedicated nodes to install UCP with high availability, * 3 dedicated nodes to install DTR with high availability, @@ -68,7 +68,7 @@ To have have high-availability on UCP and DTR, you need a minimum of: ## Load balancing -DTR does not provide a load balancing service. You can use use an on-premises +DTR does not provide a load balancing service. You can use an on-premises or cloud-based load balancer to balance requests across multiple DTR replicas. Make sure you configure your load balancer to: @@ -82,4 +82,4 @@ not. ## Where to go next * [Backups and disaster recovery](backups-and-disaster-recovery.md) -* [DTR architecture](../architecture.md) \ No newline at end of file +* [DTR architecture](../architecture.md) diff --git a/datacenter/dtr/2.0/install/index.md b/datacenter/dtr/2.0/install/index.md index 02a0eccc6a..affab7678b 100644 --- a/datacenter/dtr/2.0/install/index.md +++ b/datacenter/dtr/2.0/install/index.md @@ -67,7 +67,7 @@ To install DTR: 3. Check that DTR is running. 
- In your browser, navigate to the the Docker **Universal Control Plane** + In your browser, navigate to the Docker **Universal Control Plane** web UI, and navigate to the **Applications** screen. DTR should be listed as an application. @@ -143,7 +143,7 @@ replicas: 3. Check that all replicas are running. - In your browser, navigate to the the Docker **Universal Control Plane** + In your browser, navigate to the Docker **Universal Control Plane** web UI, and navigate to the **Applications** screen. All replicas should be displayed. @@ -158,4 +158,4 @@ replicas: ## See also * [Install DTR offline](install-dtr-offline.md) -* [Upgrade DTR](upgrade/upgrade-major.md) \ No newline at end of file +* [Upgrade DTR](upgrade/upgrade-major.md) diff --git a/datacenter/dtr/2.0/install/system-requirements.md b/datacenter/dtr/2.0/install/system-requirements.md index 24b197827c..d7e6a3513a 100644 --- a/datacenter/dtr/2.0/install/system-requirements.md +++ b/datacenter/dtr/2.0/install/system-requirements.md @@ -11,7 +11,7 @@ Before installing, be sure your infrastructure has these requirements. ## Software requirements -To install DTR on a node, that node node must be part of a Docker Universal +To install DTR on a node, that node must be part of a Docker Universal Control Plane 1.1 cluster. ## Ports used @@ -45,4 +45,4 @@ Docker Datacenter is a software subscription that includes 3 products: ## Where to go next * [DTR architecture](../architecture.md) -* [Install DTR](index.md) \ No newline at end of file +* [Install DTR](index.md) diff --git a/datacenter/dtr/2.0/install/upgrade/upgrade-major.md b/datacenter/dtr/2.0/install/upgrade/upgrade-major.md index c88bd00737..f06fde4092 100644 --- a/datacenter/dtr/2.0/install/upgrade/upgrade-major.md +++ b/datacenter/dtr/2.0/install/upgrade/upgrade-major.md @@ -63,7 +63,7 @@ To start the migration: 2. Use the docker/dtr migrate command. When you run the docker/dtr migrate command, Docker pulls the necessary - images from Docker Hub. 
If the the host where DTR 1.4.3 is not connected + images from Docker Hub. If the host where DTR 1.4.3 is installed is not connected to the internet, you need to [download the images to the host](../install-dtr-offline.md). @@ -183,7 +183,7 @@ replicas: 3. Check that all replicas are running. - In your browser, navigate to the the Docker **Universal Control Plane** + In your browser, navigate to the Docker **Universal Control Plane** web UI, and navigate to the **Applications** screen. All replicas should be displayed. @@ -204,4 +204,4 @@ containers. ## Where to go next * [Upgrade to DTR 2.x](index.md) -* [Monitor DTR](../../monitor-troubleshoot/index.md) \ No newline at end of file +* [Monitor DTR](../../monitor-troubleshoot/index.md) diff --git a/datacenter/dtr/2.0/monitor-troubleshoot/troubleshoot.md b/datacenter/dtr/2.0/monitor-troubleshoot/troubleshoot.md index c0d9c2fc4e..892088c9db 100644 --- a/datacenter/dtr/2.0/monitor-troubleshoot/troubleshoot.md +++ b/datacenter/dtr/2.0/monitor-troubleshoot/troubleshoot.md @@ -18,7 +18,7 @@ docker run -it --rm --net dtr-ol --name overlay-test1 --entrypoint sh docker/dtr docker run -it --rm --net dtr-ol --name overlay-test2 --entrypoint ping docker/dtr -c 3 overlay-test1 ``` -You can create new new overlay network for this test with `docker network create --d overaly network-name`. You can also use any images that contain `sh` and +You can create a new overlay network for this test with `docker network create +-d overlay network-name`. You can also use any images that contain `sh` and `ping` for this test.
@@ -65,4 +65,4 @@ via the following docker command: ``` docker run --rm -v dtr-ca-$REPLICA_ID:/ca --net dtr-br -it --entrypoint /etcdctl docker/dtr-etcd:v2.2.4 --endpoint https://dtr-etcd-$REPLICA_ID.dtr-br:2379 --ca-file /ca/etcd/cert.pem --key-file /ca/etcd-client/key.pem --cert-file /ca/etcd-client/cert.pem -``` \ No newline at end of file +``` diff --git a/datacenter/dtr/2.0/user-management/create-and-manage-teams.md b/datacenter/dtr/2.0/user-management/create-and-manage-teams.md index 79f4027fa6..ceb7e9ee8c 100644 --- a/datacenter/dtr/2.0/user-management/create-and-manage-teams.md +++ b/datacenter/dtr/2.0/user-management/create-and-manage-teams.md @@ -14,7 +14,7 @@ A team defines the permissions a set of users have for a set of repositories. To create a new team, go to the **DTR web UI**, and navigate to the **Organizations** page. Then **click the organization** where you want to create the team. In this -example, we'll create the 'billing' team team under the 'whale' organization. +example, we'll create the 'billing' team under the 'whale' organization.  @@ -54,4 +54,4 @@ There are three permission levels available: ## Where to go next * [Create and manage users](create-and-manage-users.md) -* [Create and manage organizations](create-and-manage-orgs.md) \ No newline at end of file +* [Create and manage organizations](create-and-manage-orgs.md) diff --git a/datacenter/dtr/2.1/guides/high-availability/index.md b/datacenter/dtr/2.1/guides/high-availability/index.md index 36a0f58374..486303000f 100644 --- a/datacenter/dtr/2.1/guides/high-availability/index.md +++ b/datacenter/dtr/2.1/guides/high-availability/index.md @@ -56,7 +56,7 @@ they have dedicated resources for them. It also makes it easier to implement backup policies and disaster recovery plans for UCP and DTR. 
-To have have high-availability on UCP and DTR, you need a minimum of: +To have high-availability on UCP and DTR, you need a minimum of: * 3 dedicated nodes to install UCP with high availability, * 3 dedicated nodes to install DTR with high availability, @@ -67,7 +67,7 @@ To have have high-availability on UCP and DTR, you need a minimum of: ## Load balancing -DTR does not provide a load balancing service. You can use use an on-premises +DTR does not provide a load balancing service. You can use an on-premises or cloud-based load balancer to balance requests across multiple DTR replicas. Make sure you configure your load balancer to: diff --git a/datacenter/dtr/2.1/guides/install/index.md b/datacenter/dtr/2.1/guides/install/index.md index 1cec50cf85..04a77d110e 100644 --- a/datacenter/dtr/2.1/guides/install/index.md +++ b/datacenter/dtr/2.1/guides/install/index.md @@ -56,7 +56,7 @@ Check the [reference documentation to learn more](../../reference/cli/install.md ## Step 4. Check that DTR is running -In your browser, navigate to the the Docker **Universal Control Plane** +In your browser, navigate to the Docker **Universal Control Plane** web UI, and navigate to the **Applications** screen. DTR should be listed as an application. @@ -122,7 +122,7 @@ replicas: 4. Check that all replicas are running. - In your browser, navigate to the the Docker **Universal Control Plane** + In your browser, navigate to the Docker **Universal Control Plane** web UI, and navigate to the **Applications** screen. All replicas should be displayed. 
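The `COMPOSE_PROJECT_NAME` hunk in `compose/reference/envvars.md` above describes how Compose combines the project name, the service name, and a replica index into container names. As a rough sketch of that scheme (assuming the classic `project_service_index` pattern the docs' `myapp_db_1` / `myapp_web_1` example implies):

```python
def container_names(project, services, scale=1):
    """Sketch of Compose v1-style container naming: <project>_<service>_<index>.

    `scale` is the number of replicas per service; indices start at 1,
    matching the `myapp_db_1` / `myapp_web_1` example in the docs.
    """
    return [
        f"{project}_{service}_{index}"
        for service in services
        for index in range(1, scale + 1)
    ]

print(container_names("myapp", ["db", "web"]))
# ['myapp_db_1', 'myapp_web_1']
```

This also shows why setting `COMPOSE_PROJECT_NAME` matters: with the default (`basename` of the project directory), two checkouts in same-named directories would collide on container names.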
diff --git a/datacenter/dtr/2.1/guides/monitor-troubleshoot/troubleshoot.md b/datacenter/dtr/2.1/guides/monitor-troubleshoot/troubleshoot.md index 3a1726603b..81ef8688b5 100644 --- a/datacenter/dtr/2.1/guides/monitor-troubleshoot/troubleshoot.md +++ b/datacenter/dtr/2.1/guides/monitor-troubleshoot/troubleshoot.md @@ -14,7 +14,7 @@ docker run -it --rm --net dtr-ol --name overlay-test1 --entrypoint sh docker/dtr docker run -it --rm --net dtr-ol --name overlay-test2 --entrypoint ping docker/dtr -c 3 overlay-test1 ``` -You can create new new overlay network for this test with `docker network create -d overaly network-name`. +You can create a new overlay network for this test with `docker network create -d overlay network-name`. You can also use any images that contain `sh` and `ping` for this test. If the second command succeeds, overlay networking is working. diff --git a/datacenter/dtr/2.1/guides/user-management/create-and-manage-teams.md b/datacenter/dtr/2.1/guides/user-management/create-and-manage-teams.md index ec526fb17f..ae90f33be4 100644 --- a/datacenter/dtr/2.1/guides/user-management/create-and-manage-teams.md +++ b/datacenter/dtr/2.1/guides/user-management/create-and-manage-teams.md @@ -13,7 +13,7 @@ A team defines the permissions a set of users have for a set of repositories. To create a new team, go to the **DTR web UI**, and navigate to the **Organizations** page. Then **click the organization** where you want to create the team. In this -example, we'll create the 'billing' team team under the 'whale' organization. +example, we'll create the 'billing' team under the 'whale' organization.  diff --git a/datacenter/ucp/1.1/configuration/multi-host-networking.md b/datacenter/ucp/1.1/configuration/multi-host-networking.md index c4c9289a78..cced7b3d3a 100644 --- a/datacenter/ucp/1.1/configuration/multi-host-networking.md +++ b/datacenter/ucp/1.1/configuration/multi-host-networking.md @@ -136,7 +136,7 @@ To enable the networking feature, do the following.
INFO[0001] Successfully delivered signal to daemon ``` - The `host-address` value is the the external address of the node you're + The `host-address` value is the external address of the node you're operating against. This is the address other nodes when communicating with each other across the communication network. @@ -275,4 +275,4 @@ Remember, you'll need to restart the daemon each time you change the start optio ## Where to go next * [Integrate with DTR](dtr-integration.md) -* [Set up high availability](../high-availability/set-up-high-availability.md) \ No newline at end of file +* [Set up high availability](../high-availability/set-up-high-availability.md) diff --git a/datacenter/ucp/1.1/install-sandbox.md b/datacenter/ucp/1.1/install-sandbox.md index dd84a19f55..ec12c6af02 100644 --- a/datacenter/ucp/1.1/install-sandbox.md +++ b/datacenter/ucp/1.1/install-sandbox.md @@ -181,7 +181,7 @@ host for the controller works fine. ```` Running this `eval` command sends the `docker` commands in the following - steps to the Docker Engine on on `node1`. + steps to the Docker Engine on `node1`. c. Verify that `node1` is the active environment. 
diff --git a/datacenter/ucp/2.0/guides/release-notes.md b/datacenter/ucp/2.0/guides/release-notes.md index 531552b24b..87e6f3d565 100644 --- a/datacenter/ucp/2.0/guides/release-notes.md +++ b/datacenter/ucp/2.0/guides/release-notes.md @@ -73,7 +73,7 @@ of specific teams * Added an HTTP routing mesh for enabling hostname routing for services (experimental) * The UCP web UI now lets you know when a new version is available, and upgrades -to the the new version with a single click +to the new version with a single click **Installer** diff --git a/datacenter/ucp/2.0/reference/cli/uninstall-ucp.md b/datacenter/ucp/2.0/reference/cli/uninstall-ucp.md index a86c2e954d..0a3944dd4e 100644 --- a/datacenter/ucp/2.0/reference/cli/uninstall-ucp.md +++ b/datacenter/ucp/2.0/reference/cli/uninstall-ucp.md @@ -23,7 +23,7 @@ docker run -it --rm \ This command uninstalls UCP from the swarm, but preserves the swarm so that your applications can continue running. -After UCP is uninstalled you can use the the 'docker swarm leave' and +After UCP is uninstalled you can use the 'docker swarm leave' and 'docker node rm' commands to remove nodes from the swarm. Once UCP is uninstalled, you won't be able to join nodes to the swarm unless diff --git a/docker-cloud/apps/ports.md b/docker-cloud/apps/ports.md index a5b29d10bb..2f72455632 100644 --- a/docker-cloud/apps/ports.md +++ b/docker-cloud/apps/ports.md @@ -66,7 +66,7 @@ option, find the published port on the service detail page. ### Using the API/CLI See the API and CLI documentation [here](/apidocs/docker-cloud.md#service) on -how to launch a service with a a published port. +how to launch a service with a published port. ## Check which ports a service has published @@ -81,7 +81,7 @@ Ports that are exposed internally display with a closed (locked) padlock icon and published ports (that are exposed to the internet) show an open (unlocked) padlock icon. 
-* Exposed ports are listed as as **container port/protocol** +* Exposed ports are listed as **container port/protocol** * Published ports are listed as **node port**->**container port/protocol** -->  @@ -121,4 +121,4 @@ not dynamic) is assigned a DNS endpoint in the format running, in a [round-robin fashion](https://en.wikipedia.org/wiki/Round-robin_DNS). -You can see a list of service endpoints on the stack and service detail views, under the **Endpoints** tab. \ No newline at end of file +You can see a list of service endpoints on the stack and service detail views, under the **Endpoints** tab. diff --git a/docker-cloud/apps/service-links.md b/docker-cloud/apps/service-links.md index 38f0958a21..8265ae1a84 100644 --- a/docker-cloud/apps/service-links.md +++ b/docker-cloud/apps/service-links.md @@ -118,7 +118,7 @@ Environment variables specified in the service definition are instantiated in ea These environment variables are prefixed with the `HOSTNAME_ENV_` in each container. -In our example, if we launch our `my-web-app` service with an environment variable of `WEBROOT=/login`, the following environment variables are set and available available in the proxy containers: +In our example, if we launch our `my-web-app` service with an environment variable of `WEBROOT=/login`, the following environment variables are set and available in the proxy containers: | Name | Value | |:------------------|:---------| @@ -161,4 +161,4 @@ Where: These environment variables are also copied to linked containers with the `NAME_ENV_` prefix. -If you provide API access to your service, you can use the generated token (stored in `DOCKERCLOUD_AUTH`) to access these API URLs to gather information or automate operations, such as scaling. \ No newline at end of file +If you provide API access to your service, you can use the generated token (stored in `DOCKERCLOUD_AUTH`) to access these API URLs to gather information or automate operations, such as scaling. 
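The `service-links.md` hunk above says a linked service's environment variables are copied into each container under a `HOSTNAME_ENV_` prefix. A minimal sketch of that prefixing, assuming the prefix is the hostname uppercased with dashes mapped to underscores (an assumption for illustration; the docs' example table is not shown here):

```python
def linked_env(hostname, env):
    """Copy a linked service's env vars under a HOSTNAME_ENV_ prefix.

    Assumes the prefix is the hostname uppercased with dashes mapped to
    underscores; this mirrors the docs' description, not a verified API.
    """
    prefix = hostname.upper().replace("-", "_")
    return {f"{prefix}_ENV_{key}": value for key, value in env.items()}

# my-web-app launched with WEBROOT=/login, as in the docs' example:
print(linked_env("my-web-app", {"WEBROOT": "/login"}))
# {'MY_WEB_APP_ENV_WEBROOT': '/login'}
```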
diff --git a/docker-cloud/getting-started/deploy-app/7_scale_the_service.md b/docker-cloud/getting-started/deploy-app/7_scale_the_service.md index 6cbc58102b..528a29b98a 100644 --- a/docker-cloud/getting-started/deploy-app/7_scale_the_service.md +++ b/docker-cloud/getting-started/deploy-app/7_scale_the_service.md @@ -53,7 +53,7 @@ web-2 ab045c42 ▶ Running my-username/python-quickstart:late Use either of the URLs from the `container ps` command to visit one of your service's containers, either using your browser or curl. -In the example output above, the URL `web-1.my-username.cont.dockerapp.io:49162` reaches the web app on the first container, and `web-2.my-username.cont.dockerapp.io:49156` reaches the web app on the the second container. +In the example output above, the URL `web-1.my-username.cont.dockerapp.io:49162` reaches the web app on the first container, and `web-2.my-username.cont.dockerapp.io:49156` reaches the web app on the second container. If you use curl to visit the pages, you should see something like this: @@ -66,4 +66,4 @@ Hello Python Users!Hostname: web-2Counter: Redis Cache not found, coun Congratulations! You now have *two* containers running in your **web** service. -Next: [View service logs](8_view_logs.md) \ No newline at end of file +Next: [View service logs](8_view_logs.md) diff --git a/docker-cloud/infrastructure/link-do.md b/docker-cloud/infrastructure/link-do.md index 7cea8c0371..097fe6a582 100644 --- a/docker-cloud/infrastructure/link-do.md +++ b/docker-cloud/infrastructure/link-do.md @@ -28,4 +28,4 @@ Once you log in, a message appears prompting you to confirm the link. ## What's next? -You're ready to start using using DigitalOcean as the infrastructure provider for Docker Cloud! If you came here from the tutorial, click here to [continue the tutorial and deploy your first node](../getting-started/your_first_node.md). 
\ No newline at end of file +You're ready to start using DigitalOcean as the infrastructure provider for Docker Cloud! If you came here from the tutorial, click here to [continue the tutorial and deploy your first node](../getting-started/your_first_node.md). diff --git a/docker-for-mac/index.md b/docker-for-mac/index.md index fbae5f6aa8..8290392cb6 100644 --- a/docker-for-mac/index.md +++ b/docker-for-mac/index.md @@ -276,7 +276,9 @@ ln -s /Applications/Docker.app/Contents/Resources/etc/docker-compose.bash-comple * Try out the [Getting Started with Docker](/engine/getstarted/index.md) tutorial. -* Dig in deeper with [learn by example](/engine/tutorials/index.md) tutorials on on building images, running containers, networking, managing data, and storing images on Docker Hub. +* Dig in deeper with [learn by example](/engine/tutorials/index.md) tutorials on + building images, running containers, networking, managing data, and storing + images on Docker Hub. * See [Example Applications](examples.md) for example applications that include setting up services and databases in Docker Compose. diff --git a/docker-for-mac/osxfs.md b/docker-for-mac/osxfs.md index a1e7d88b5d..cf15e164ce 100644 --- a/docker-for-mac/osxfs.md +++ b/docker-for-mac/osxfs.md @@ -163,21 +163,21 @@ GB/s. With large sequential IO operations, `osxfs` can achieve throughput of around 250 MB/s which, while not native speed, will not be the bottleneck for most applications which perform acceptably on HDDs. -Latency is the time it takes for a file system system call to complete. For -instance, the time between a thread issuing write in a container and resuming -with the number of bytes written. With a classical block-based file system, this -latency is typically under 10μs (microseconds). With `osxfs`, latency is -presently around 200μs for most operations or 20x slower. For workloads which -demand many sequential roundtrips, this results in significant observable -slowdown. 
To reduce the latency, we need to shorten the data path from a Linux -system call to OS X and back again. This requires tuning each component in the -data path in turn -- some of which require significant engineering effort. Even -if we achieve a huge latency reduction of 100μs/roundtrip, we will still "only" -see a doubling of performance. This is typical of performance engineering, which -requires significant effort to analyze slowdowns and develop optimized -components. We know how we can likely halve the roundtrip time but we haven't -implemented those improvements yet (more on this below in [What you can -do](osxfs.md#what-you-can-do)). +Latency is the time it takes for a file system call to complete. For instance, +the time between a thread issuing write in a container and resuming with the +number of bytes written. With a classical block-based file system, this latency +is typically under 10μs (microseconds). With `osxfs`, latency is presently +around 200μs for most operations or 20x slower. For workloads which demand many +sequential roundtrips, this results in significant observable slowdown. To +reduce the latency, we need to shorten the data path from a Linux system call to +OS X and back again. This requires tuning each component in the data path in +turn -- some of which require significant engineering effort. Even if we achieve +a huge latency reduction of 100μs/roundtrip, we will still "only" see a doubling +of performance. This is typical of performance engineering, which requires +significant effort to analyze slowdowns and develop optimized components. We +know how we can likely halve the roundtrip time but we haven't implemented those +improvements yet (more on this below in +[What you can do](osxfs.md#what-you-can-do)). 
There is hope for significant performance improvement in the near term despite these fundamental communication channel properties, which are difficult to diff --git a/docker-for-mac/troubleshoot.md b/docker-for-mac/troubleshoot.md index e9500f1cd2..81995091d3 100644 --- a/docker-for-mac/troubleshoot.md +++ b/docker-for-mac/troubleshoot.md @@ -250,7 +250,7 @@ know before you install](index.md#what-to-know-before-you-install). * Restart your Mac to stop / discard any vestige of the daemon running from the previously installed version. - * Run the the uninstall commands from the menu. + * Run the uninstall commands from the menu.
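The osxfs hunk above argues that shaving 100μs off a ~200μs roundtrip will "only" double performance for roundtrip-bound workloads. The arithmetic behind that claim, as a quick check:

```python
# Figures from the osxfs discussion above: ~200us per roundtrip today,
# and a hoped-for 100us/roundtrip reduction. For a workload dominated by
# sequential roundtrips, runtime is roughly latency * number of roundtrips.
ROUNDTRIPS = 10_000  # arbitrary workload size; cancels out of the ratio

def runtime_us(latency_us, roundtrips=ROUNDTRIPS):
    return latency_us * roundtrips

speedup = runtime_us(200) / runtime_us(200 - 100)
print(speedup)  # 2.0 -- halving the roundtrip time doubles throughput at best
```

Against the ~10μs latency of a classical block-based file system, even that doubling still leaves roughly a 10x gap, which is why the passage calls the remaining work significant engineering effort.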
diff --git a/docker-for-windows/opensource.md b/docker-for-windows/opensource.md index c35bb07d2c..ac5d22f1cf 100644 --- a/docker-for-windows/opensource.md +++ b/docker-for-windows/opensource.md @@ -4,10 +4,10 @@ keywords: docker, opensource title: Open source components and licensing --- -Docker Desktop Editions are built using open source software software. For +Docker Desktop Editions are built using open source software. For details on the licensing, choose
 GET "/containers/{name:.*}/json"
-HostIP replaced by the the actual Node's IP if HostIP is 0.0.0.0
+HostIP replaced by the actual Node's IP if HostIP is 0.0.0.0
 GET "/containers/json"
-HostIP replaced by the the actual Node's IP if HostIP is 0.0.0.0
+HostIP replaced by the actual Node's IP if HostIP is 0.0.0.0
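The API table fragment above describes Swarm rewriting `HostIP` to the node's actual IP when a port was published on `0.0.0.0` (all interfaces). A small sketch of that substitution (the dict shape and field casing are illustrative, not the exact Engine API schema):

```python
def rewrite_host_ip(binding, node_ip):
    """Return a copy of a port binding with its host IP rewritten.

    If the container published on 0.0.0.0 (all interfaces), Swarm reports
    the node's actual IP instead; other bindings pass through unchanged.
    """
    if binding.get("HostIp") == "0.0.0.0":
        return {**binding, "HostIp": node_ip}
    return binding

print(rewrite_host_ip({"HostIp": "0.0.0.0", "HostPort": "8080"}, "192.168.99.101"))
# {'HostIp': '192.168.99.101', 'HostPort': '8080'}
```

A binding already pinned to a specific interface (say `127.0.0.1`) is left alone, which matches the docs' "if HostIP is 0.0.0.0" condition.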