Merge pull request #7412 from ocaballeror/spelling

Spelling revision
Maria Bermudez committed via GitHub on 2018-10-04 12:28:51 -07:00 · commit 7fc7b463b2
94 changed files with 152 additions and 151 deletions


@@ -307,7 +307,7 @@ In order to keep the Git repository light, _please_ compress the images
 (losslessly). On Mac you may use (ImageOptim)[https://imageoptim.com] for
 instance. Be sure to compress the images *before* adding them to the
 repository, doing it afterwards actually worsens the impact on the Git repo (but
-still optimizes the bandwith during browsing).
+still optimizes the bandwidth during browsing).
 ## Building archives and the live published docs


@@ -303,7 +303,7 @@ memory | The memory limit of the container in MB (see [Runtime Constraints on CP
 memory_swap | Total memory limit (memory + swap) of the container in MB
 autorestart | Whether to restart the container automatically if it stops (see [Crash recovery](/docker-cloud/apps/autorestart/) for more information)
 autodestroy | Whether to terminate the container automatically if it stops (see [Autodestroy](/docker-cloud/apps/auto-destroy/) for more information)
-roles | List of Docker Cloud roles asigned to this container (see [API roles](/docker-cloud/apps/api-roles/) for more information))
+roles | List of Docker Cloud roles assigned to this container (see [API roles](/docker-cloud/apps/api-roles/) for more information))
 linked_to_container | List of IP addresses of the linked containers (see table `Container Link attributes` below and [Service links](/docker-cloud/apps/service-links/) for more information)
 link_variables | List of environment variables that would be exposed in any container that is linked to this one
 privileged | Whether the container has Docker's `privileged` flag set or not (see [Runtime privilege](/engine/reference/run/#runtime-privilege-linux-capabilities-and-lxc-configuration) for more information)


@@ -11,7 +11,7 @@
 * - highlight element tag and class names can be specified in options
 *
 * Usage:
-* // wrap every occurrance of text 'lorem' in content
+* // wrap every occurrence of text 'lorem' in content
 * // with <span class='highlight'> (default options)
 * $('#content').highlight('lorem');
 *
@@ -26,7 +26,7 @@
 * // don't ignore case during search of term 'lorem'
 * $('#content').highlight('lorem', { caseSensitive: true });
 *
-* // wrap every occurrance of term 'ipsum' in content
+* // wrap every occurrence of term 'ipsum' in content
 * // with <em class='important'>
 * $('#content').highlight('ipsum', { element: 'em', className: 'important' });
 *


@@ -858,7 +858,7 @@
 }
-// Maintains chainablity
+// Maintains chainability
 return self;
 },
@@ -911,7 +911,7 @@
 }
-// Maintains chainablity
+// Maintains chainability
 return self;
 },


@@ -1667,7 +1667,7 @@
 /**
 * lunr.trimmer is a pipeline function for trimming non word
-* characters from the begining and end of tokens before they
+* characters from the beginning and end of tokens before they
 * enter the index.
 *
 * This implementation may not work correctly for non latin
@@ -1891,7 +1891,7 @@
 } else if (typeof exports === 'object') {
 /**
 * Node. Does not work with strict CommonJS, but
-* only CommonJS-like enviroments that support module.exports,
+* only CommonJS-like environments that support module.exports,
 * like Node.
 */
 module.exports = factory()


@@ -62,7 +62,7 @@
 <p>An action represents an API call by a user. Details of the API call such as timestamp, origin IP address, and user agent are logged in the action object.</p>
-<p>Simple API calls that do not require asynchronous execution will return immediately with the appropiate HTTP error code and an action object will be created either in <code class="prettyprint">Success</code> or <code class="prettyprint">Failed</code> states. API calls that do require asynchronous execution will return HTTP code <code class="prettyprint">202 Accepted</code> immediately and create an action object in <code class="prettyprint">In progress</code> state, which will change to <code class="prettyprint">Success</code> or <code class="prettyprint">Failed</code> state depending on the outcome of the operation being performed. In both cases the response will include a <code class="prettyprint">X-DockerCloud-Action-URI</code> header with the resource URI of the created action.</p>
+<p>Simple API calls that do not require asynchronous execution will return immediately with the appropriate HTTP error code and an action object will be created either in <code class="prettyprint">Success</code> or <code class="prettyprint">Failed</code> states. API calls that do require asynchronous execution will return HTTP code <code class="prettyprint">202 Accepted</code> immediately and create an action object in <code class="prettyprint">In progress</code> state, which will change to <code class="prettyprint">Success</code> or <code class="prettyprint">Failed</code> state depending on the outcome of the operation being performed. In both cases the response will include a <code class="prettyprint">X-DockerCloud-Action-URI</code> header with the resource URI of the created action.</p>
 <h3 id="attributes">Attributes</h3>


@@ -470,7 +470,7 @@
 </tr>
 <tr>
 <td>roles</td>
-<td>List of Docker Cloud roles asigned to this container (see <a href="/docker-cloud/apps/api-roles/">API roles</a> for more information))</td>
+<td>List of Docker Cloud roles assigned to this container (see <a href="/docker-cloud/apps/api-roles/">API roles</a> for more information))</td>
 </tr>
 <tr>
 <td>linked_to_container</td>


@@ -81,7 +81,7 @@
 </tr>
 <tr>
 <td style="text-align: left">action</td>
-<td style="text-align: left">Type of action that was executed on the object. Posible values: <code class="prettyprint">create</code>, <code class="prettyprint">update</code> or <code class="prettyprint">delete</code></td>
+<td style="text-align: left">Type of action that was executed on the object. Possible values: <code class="prettyprint">create</code>, <code class="prettyprint">update</code> or <code class="prettyprint">delete</code></td>
 </tr>
 <tr>
 <td style="text-align: left">parents</td>


@@ -200,7 +200,7 @@
 <ul>
 <li><code class="prettyprint">id</code>: AWS VPC identifier of the target VPC where the nodes of the cluster will be deployed (required)</li>
-<li><code class="prettyprint">subnets</code>: a list of target subnet indentifiers inside selected VPC. If you specify more than one subnet, Docker Cloud will balance among all of them following a high-availability schema (optional)</li>
+<li><code class="prettyprint">subnets</code>: a list of target subnet identifiers inside selected VPC. If you specify more than one subnet, Docker Cloud will balance among all of them following a high-availability schema (optional)</li>
 <li><code class="prettyprint">security_groups</code>: the security group that will be applied to every node of the cluster (optional)</li>
 </ul></li>
 <li><code class="prettyprint">iam</code>: IAM-related options (optional)


@@ -82,7 +82,7 @@ e.TokenStore=function(){this.root={docs:{}},this.length=0},e.TokenStore.load=fun
 * - highlight element tag and class names can be specified in options
 *
 * Usage:
-* // wrap every occurrance of text 'lorem' in content
+* // wrap every occurrence of text 'lorem' in content
 * // with <span class='highlight'> (default options)
 * $('#content').highlight('lorem');
 *
@@ -97,7 +97,7 @@ e.TokenStore=function(){this.root={docs:{}},this.length=0},e.TokenStore.load=fun
 * // don't ignore case during search of term 'lorem'
 * $('#content').highlight('lorem', { caseSensitive: true });
 *
-* // wrap every occurrance of term 'ipsum' in content
+* // wrap every occurrence of term 'ipsum' in content
 * // with <em class='important'>
 * $('#content').highlight('ipsum', { element: 'em', className: 'important' });
 *


@@ -208,7 +208,7 @@ set this globally, or specify it before each CLI command. To learn more, see the
 <p>An action represents an API call by a user. Details of the API call such as timestamp, origin IP address, and user agent are logged in the action object.</p>
-<p>Simple API calls that do not require asynchronous execution will return immediately with the appropiate HTTP error code and an action object will be created either in <code class="prettyprint">Success</code> or <code class="prettyprint">Failed</code> states. API calls that do require asynchronous execution will return HTTP code <code class="prettyprint">202 Accepted</code> immediately and create an action object in <code class="prettyprint">In progress</code> state, which will change to <code class="prettyprint">Success</code> or <code class="prettyprint">Failed</code> state depending on the outcome of the operation being performed. In both cases the response will include a <code class="prettyprint">X-DockerCloud-Action-URI</code> header with the resource URI of the created action.</p>
+<p>Simple API calls that do not require asynchronous execution will return immediately with the appropriate HTTP error code and an action object will be created either in <code class="prettyprint">Success</code> or <code class="prettyprint">Failed</code> states. API calls that do require asynchronous execution will return HTTP code <code class="prettyprint">202 Accepted</code> immediately and create an action object in <code class="prettyprint">In progress</code> state, which will change to <code class="prettyprint">Success</code> or <code class="prettyprint">Failed</code> state depending on the outcome of the operation being performed. In both cases the response will include a <code class="prettyprint">X-DockerCloud-Action-URI</code> header with the resource URI of the created action.</p>
 <h3 id="attributes">Attributes</h3>
@@ -1450,7 +1450,7 @@ set this globally, or specify it before each CLI command. To learn more, see the
 <ul>
 <li><code class="prettyprint">id</code>: AWS VPC identifier of the target VPC where the nodes of the cluster will be deployed (required)</li>
-<li><code class="prettyprint">subnets</code>: a list of target subnet indentifiers inside selected VPC. If you specify more than one subnet, Docker Cloud will balance among all of them following a high-availability schema (optional)</li>
+<li><code class="prettyprint">subnets</code>: a list of target subnet identifiers inside selected VPC. If you specify more than one subnet, Docker Cloud will balance among all of them following a high-availability schema (optional)</li>
 <li><code class="prettyprint">security_groups</code>: the security group that will be applied to every node of the cluster (optional)</li>
 </ul></li>
 <li><code class="prettyprint">iam</code>: IAM-related options (optional)
@@ -5263,7 +5263,7 @@ docker-cloud tag <span class="nb">set</span> -t tag-2 7eaf7fff
 </tr>
 <tr>
 <td>roles</td>
-<td>List of Docker Cloud roles asigned to this container (see <a href="/docker-cloud/apps/api-roles/">API roles</a> for more information))</td>
+<td>List of Docker Cloud roles assigned to this container (see <a href="/docker-cloud/apps/api-roles/">API roles</a> for more information))</td>
 </tr>
 <tr>
 <td>linked_to_container</td>
@@ -6326,7 +6326,7 @@ container.execute("ls", handler=msg_handler)
 </tr>
 <tr>
 <td style="text-align: left">action</td>
-<td style="text-align: left">Type of action that was executed on the object. Posible values: <code class="prettyprint">create</code>, <code class="prettyprint">update</code> or <code class="prettyprint">delete</code></td>
+<td style="text-align: left">Type of action that was executed on the object. Possible values: <code class="prettyprint">create</code>, <code class="prettyprint">update</code> or <code class="prettyprint">delete</code></td>
 </tr>
 <tr>
 <td style="text-align: left">parents</td>


@@ -186,7 +186,7 @@ configure this app to use our SQL Server database, and then create a
 Go ahead and try out the website! This sample uses the SQL Server
 database image in the back-end for authentication.
-Ready! You now have a ASP.NET Core application running against SQL Server in
+Ready! You now have an ASP.NET Core application running against SQL Server in
 Docker Compose! This sample made use of some of the most popular Microsoft
 products for Linux. To learn more about Windows Containers, check out
 [Docker Labs for Windows Containers](https://github.com/docker/labs/tree/master/windows)


@@ -415,7 +415,7 @@ id.
 Sets the PID mode to the host PID mode. This turns on sharing between
 container and the host operating system the PID address space. Containers
 launched with this flag can access and manipulate other
-containers in the bare-metal machine's namespace and vise-versa.
+containers in the bare-metal machine's namespace and vice versa.
 ### ports


@@ -1006,7 +1006,7 @@ designated container or service.
 If set to "host", the service's PID mode is the host PID mode. This turns
 on sharing between container and the host operating system the PID address
 space. Containers launched with this flag can access and manipulate
-other containers in the bare-metal machine's namespace and vise-versa.
+other containers in the bare-metal machine's namespace and vice versa.
 > **Note**: the `service:` and `container:` forms require
 > [version 2.1](compose-versioning.md#version-21) or above
@@ -1483,7 +1483,7 @@ Set a custom name for this volume.
 data:
 name: my-app-data
-It can also be used in conjuction with the `external` property:
+It can also be used in conjunction with the `external` property:
 version: '2.1'
 volumes:
@@ -1641,7 +1641,7 @@ Set a custom name for this network.
 network1:
 name: my-app-net
-It can also be used in conjuction with the `external` property:
+It can also be used in conjunction with the `external` property:
 version: '2.1'
 networks:
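The `name`-plus-`external` combination that the corrected sentences above introduce can look like the following minimal sketch; only the `my-app-data` volume name comes from the docs being changed, while the `db` service and its image are illustrative placeholders:

```yaml
version: '2.1'

services:
  db:
    image: postgres:9.6             # placeholder service and image for illustration
    volumes:
      - data:/var/lib/postgresql/data

volumes:
  data:
    external: true                  # the volume is created outside this Compose file
    name: my-app-data               # ...and is looked up by this exact name
```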


@@ -1409,7 +1409,7 @@ networks:
 Sets the PID mode to the host PID mode. This turns on sharing between
 container and the host operating system the PID address space. Containers
 launched with this flag can access and manipulate other
-containers in the bare-metal machine's namespace and vise-versa.
+containers in the bare-metal machine's namespace and vice versa.
 ### ports
@@ -2029,7 +2029,7 @@ and will **not** be scoped with the stack name.
 data:
 name: my-app-data
-It can also be used in conjuction with the `external` property:
+It can also be used in conjunction with the `external` property:
 version: '3.4'
 volumes:
@@ -2257,7 +2257,7 @@ and will **not** be scoped with the stack name.
 network1:
 name: my-app-net
-It can also be used in conjuction with the `external` property:
+It can also be used in conjunction with the `external` property:
 version: '3.5'
 networks:
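For the `pid` setting touched above, a hedged sketch of a service sharing the host PID namespace; the `monitor` service, image, and command are illustrative assumptions, not taken from the changed files:

```yaml
version: '3.4'

services:
  monitor:
    image: alpine:3.8    # placeholder image for illustration
    pid: "host"          # share the host operating system's PID namespace
    command: ps aux      # the container now sees host processes, and vice versa
```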


@@ -230,7 +230,7 @@ web_1 | A server is already
 running. Check /myapp/tmp/pids/server.pid.
 ```
-To resolve this, delete the file `tmp/pids/server.pid`, and then re-start the
+To resolve this, delete the file `tmp/pids/server.pid`, and then restart the
 application with `docker-compose up`.
 ### Restart the application


@@ -49,7 +49,7 @@ The following properties let you configure the splunk logging driver.
 - To configure the `splunk` driver across the Docker environment, edit
 `daemon.json` with the key, `"log-opts": {"NAME": "VALUE", ...}`.
-- To configure the `splunk` driver for an indiviual container, use `docker run`
+- To configure the `splunk` driver for an individual container, use `docker run`
 with the flag, `--log-opt NAME=VALUE ...`.
 | Option | Required | Description |
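The per-container options described above can also be expressed through a Compose file's `logging` block rather than on `docker run`; a minimal sketch, assuming a Splunk HTTP Event Collector at a placeholder URL with a placeholder token:

```yaml
version: '3.4'

services:
  web:
    image: nginx:alpine                                        # placeholder image for illustration
    logging:
      driver: splunk                                           # same driver name as --log-driver=splunk
      options:
        splunk-url: "https://splunk.example.com:8088"          # placeholder HEC endpoint
        splunk-token: "00000000-0000-0000-0000-000000000000"   # placeholder HEC token
```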


@@ -11,7 +11,7 @@ is ensuring you're running DTR 2.0. If that's not the case, start by upgrading
 your installation to version 2.0.0, and then upgrade to the latest version
 available.
-There is no downtime when upgrading an highly-available DTR cluster. If your
+There is no downtime when upgrading a highly-available DTR cluster. If your
 DTR deployment has a single replica, schedule the upgrade to take place outside
 business peak hours to ensure the impact on your business is close to none.


@@ -9,7 +9,7 @@ is ensuring you're running DTR 2.0. If that's not the case, start by upgrading
 your installation to version 2.0.0, and then upgrade to the latest version
 available.
-There is no downtime when upgrading an highly-available DTR cluster. If your
+There is no downtime when upgrading a highly-available DTR cluster. If your
 DTR deployment has a single replica, schedule the upgrade to take place outside
 business peak hours to ensure the impact on your business is close to none.


@@ -2121,7 +2121,7 @@ SuperagentHttpClient.prototype.execute = function (obj) {
 } else if (res && obj.on && obj.on.response) {
 var possibleObj;
-// Already parsed by by superagent?
+// Already parsed by superagent?
 if(res.body && Object.keys(res.body).length > 0) {
 possibleObj = res.body;
 } else {
@@ -12442,7 +12442,7 @@ var iframe,
 elemdisplay = {};
 /**
-* Retrieve the actual display of a element
+* Retrieve the actual display of an element
 * @param {String} name nodeName of the element
 * @param {Object} doc Document object
 */
@@ -13862,7 +13862,7 @@ jQuery.fx.speeds = {
 };
-// Based off of the plugin by Clint Helfers, with permission.
+// Based on the plugin by Clint Helfers, with permission.
 // http://blindsignals.com/index.php/2009/07/jquery-delay/
 jQuery.fn.delay = function( time, type ) {
 time = jQuery.fx ? jQuery.fx.speeds[ time ] || time : time;
@@ -26068,7 +26068,7 @@ var baseCreate = require('./baseCreate'),
 * @private
 * @param {*} value The value to wrap.
 * @param {boolean} [chainAll] Enable chaining for all wrapper methods.
-* @param {Array} [actions=[]] Actions to peform to resolve the unwrapped value.
+* @param {Array} [actions=[]] Actions to perform to resolve the unwrapped value.
 */
 function LodashWrapper(value, chainAll, actions) {
 this.__wrapped__ = value;


@@ -24,7 +24,7 @@ Start by
 Then, as a best practice you should
 [create a new IAM user](http://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html)
 just for the DTR
-integration and apply a IAM policy that ensures the user has limited permissions.
+integration and apply an IAM policy that ensures the user has limited permissions.
 This user only needs permissions to access the bucket that you use to store
 images, and to read, write, and delete files.


@@ -18,7 +18,7 @@ pushes will fail
 The GC cron schedule is set to run in **UTC time**. Containers typically run in
 UTC time (unless the system time is mounted), therefore remember that the cron
-schedule will run based off of UTC time when configuring.
+schedule will run based on UTC time when configuring.
 GC puts DTR into read-only mode; pulls succeed while pushes fail. Pushing an
 image while GC runs may lead to undefined behavior and data loss, therefore


@@ -68,7 +68,7 @@ Jobs can be in one of the following status:
 ## Job capacity
-Each job runner has a limited capacity and doesn't claim jobs that require an
+Each job runner has a limited capacity and doesn't claim jobs that require a
 higher capacity. You can see the capacity of a job runner using the
 `GET /api/v0/workers` endpoint:


@@ -2121,7 +2121,7 @@ SuperagentHttpClient.prototype.execute = function (obj) {
 } else if (res && obj.on && obj.on.response) {
 var possibleObj;
-// Already parsed by by superagent?
+// Already parsed by superagent?
 if(res.body && Object.keys(res.body).length > 0) {
 possibleObj = res.body;
 } else {
@@ -12442,7 +12442,7 @@ var iframe,
 elemdisplay = {};
 /**
-* Retrieve the actual display of a element
+* Retrieve the actual display of an element
 * @param {String} name nodeName of the element
 * @param {Object} doc Document object
 */
@@ -13862,7 +13862,7 @@ jQuery.fx.speeds = {
 };
-// Based off of the plugin by Clint Helfers, with permission.
+// Based on the plugin by Clint Helfers, with permission.
 // http://blindsignals.com/index.php/2009/07/jquery-delay/
 jQuery.fn.delay = function( time, type ) {
 time = jQuery.fx ? jQuery.fx.speeds[ time ] || time : time;
@@ -26068,7 +26068,7 @@ var baseCreate = require('./baseCreate'),
 * @private
 * @param {*} value The value to wrap.
 * @param {boolean} [chainAll] Enable chaining for all wrapper methods.
-* @param {Array} [actions=[]] Actions to peform to resolve the unwrapped value.
+* @param {Array} [actions=[]] Actions to perform to resolve the unwrapped value.
 */
 function LodashWrapper(value, chainAll, actions) {
 this.__wrapped__ = value;


@@ -2121,7 +2121,7 @@ SuperagentHttpClient.prototype.execute = function (obj) {
 } else if (res && obj.on && obj.on.response) {
 var possibleObj;
-// Already parsed by by superagent?
+// Already parsed by superagent?
 if(res.body && Object.keys(res.body).length > 0) {
 possibleObj = res.body;
 } else {
@@ -12442,7 +12442,7 @@ var iframe,
 elemdisplay = {};
 /**
-* Retrieve the actual display of a element
+* Retrieve the actual display of an element
 * @param {String} name nodeName of the element
 * @param {Object} doc Document object
 */
@@ -13862,7 +13862,7 @@ jQuery.fx.speeds = {
 };
-// Based off of the plugin by Clint Helfers, with permission.
+// Based on the plugin by Clint Helfers, with permission.
 // http://blindsignals.com/index.php/2009/07/jquery-delay/
 jQuery.fn.delay = function( time, type ) {
 time = jQuery.fx ? jQuery.fx.speeds[ time ] || time : time;
@@ -26068,7 +26068,7 @@ var baseCreate = require('./baseCreate'),
 * @private
 * @param {*} value The value to wrap.
 * @param {boolean} [chainAll] Enable chaining for all wrapper methods.
-* @param {Array} [actions=[]] Actions to peform to resolve the unwrapped value.
+* @param {Array} [actions=[]] Actions to perform to resolve the unwrapped value.
 */
 function LodashWrapper(value, chainAll, actions) {
 this.__wrapped__ = value;


@@ -54,7 +54,7 @@ with more details on any one of these services:
 * Content trust (notary)
 This endpoint is for checking the health of a *single* replica. To get
-the health of every replica in a cluster, querying each replica individiually is
+the health of every replica in a cluster, querying each replica individually is
 the preferred way to do it in real time.
 The `/api/v0/meta/cluster_status`


@@ -69,7 +69,7 @@ Jobs can be in one of the following status:
 ## Job capacity
-Each job runner has a limited capacity and doesn't claim jobs that require an
+Each job runner has a limited capacity and doesn't claim jobs that require a
 higher capacity. You can see the capacity of a job runner using the
 `GET /api/v0/workers` endpoint:


@@ -2132,7 +2132,7 @@ SuperagentHttpClient.prototype.execute = function (obj) {
 } else if (res && obj.on && obj.on.response) {
 var possibleObj;
-// Already parsed by by superagent?
+// Already parsed by superagent?
 if(res.body && Object.keys(res.body).length > 0) {
 possibleObj = res.body;
 } else {
@@ -12457,7 +12457,7 @@ var iframe,
 elemdisplay = {};
 /**
-* Retrieve the actual display of a element
+* Retrieve the actual display of an element
 * @param {String} name nodeName of the element
 * @param {Object} doc Document object
 */
@@ -13877,7 +13877,7 @@ jQuery.fx.speeds = {
 };
-// Based off of the plugin by Clint Helfers, with permission.
+// Based on the plugin by Clint Helfers, with permission.
 // http://blindsignals.com/index.php/2009/07/jquery-delay/
 jQuery.fn.delay = function( time, type ) {
 time = jQuery.fx ? jQuery.fx.speeds[ time ] || time : time;
@@ -26083,7 +26083,7 @@ var baseCreate = require('./baseCreate'),
 * @private
 * @param {*} value The value to wrap.
 * @param {boolean} [chainAll] Enable chaining for all wrapper methods.
-* @param {Array} [actions=[]] Actions to peform to resolve the unwrapped value.
+* @param {Array} [actions=[]] Actions to perform to resolve the unwrapped value.
 */
 function LodashWrapper(value, chainAll, actions) {
 this.__wrapped__ = value;


@@ -2121,7 +2121,7 @@ SuperagentHttpClient.prototype.execute = function (obj) {
 } else if (res && obj.on && obj.on.response) {
 var possibleObj;
-// Already parsed by by superagent?
+// Already parsed by superagent?
 if(res.body && Object.keys(res.body).length > 0) {
 possibleObj = res.body;
 } else {
@@ -12442,7 +12442,7 @@ var iframe,
 elemdisplay = {};
 /**
-* Retrieve the actual display of a element
+* Retrieve the actual display of an element
 * @param {String} name nodeName of the element
 * @param {Object} doc Document object
 */
@@ -13862,7 +13862,7 @@ jQuery.fx.speeds = {
 };
-// Based off of the plugin by Clint Helfers, with permission.
+// Based on the plugin by Clint Helfers, with permission.
 // http://blindsignals.com/index.php/2009/07/jquery-delay/
 jQuery.fn.delay = function( time, type ) {
 time = jQuery.fx ? jQuery.fx.speeds[ time ] || time : time;
@@ -26068,7 +26068,7 @@ var baseCreate = require('./baseCreate'),
 * @private
 * @param {*} value The value to wrap.
 * @param {boolean} [chainAll] Enable chaining for all wrapper methods.
-* @param {Array} [actions=[]] Actions to peform to resolve the unwrapped value.
+* @param {Array} [actions=[]] Actions to perform to resolve the unwrapped value.
 */
 function LodashWrapper(value, chainAll, actions) {
 this.__wrapped__ = value;


@@ -49,9 +49,9 @@ Note: Use --ucp-ca "$(cat ca.pem)" instead of --ucp-insecure-tls for a productio
 | `--https-proxy` | $DTR_HTTPS_PROXY | The HTTPS proxy used for outgoing requests. |
 | `--log-host` | $LOG_HOST | Where to send logs to.The endpoint to send logs to. Use this flag if you set --log-protocol to tcp or udp. |
 | `--log-level` | $LOG_LEVEL | Log level for all container logs when logging to syslog. Default: INFO. |
-| `--log-protocol` | $LOG_PROTOCOL | The protocol for sending logs. Default is internal.This allows to define the protocol used to send container logs to an external system. The supported protocals are tcp, udp, or internal. Use this flag with --log-host. |
+| `--log-protocol` | $LOG_PROTOCOL | The protocol for sending logs. Default is internal.This allows to define the protocol used to send container logs to an external system. The supported protocols are tcp, udp, or internal. Use this flag with --log-host. |
 | `--nfs-storage-url` | $NFS_STORAGE_URL | NFS to store Docker images. Format nfs://<ip&#124;hostname>/<mountpoint>.By default DTR creates a volume to store the Docker images in the local filesystem of the node where DTR is running, without high-availability. Use this flag to specify an NFS mount for DTR to store images, using the format nfs://<ip&#124;hostname>/<mountpoint>. To use this flag, you need to install an NFS client library like nfs-common in the node where you're deploying DTR. You can test this by running showmount -e <nfs-server>. When you join new replicas, they will start using NFS so you don't need to use this flag. To reconfigure DTR to stop using NFS, leave this option empty. |
-| `--no-proxy` | $DTR_NO_PROXY | List of domains the proxy should not be used for.When using --http-proxy you can use this flag to specify a list of domains that you don't want to route throught the proxy. Format acme.com[, acme.org]. |
+| `--no-proxy` | $DTR_NO_PROXY | List of domains the proxy should not be used for.When using --http-proxy you can use this flag to specify a list of domains that you don't want to route through the proxy. Format acme.com[, acme.org]. |
 | `--overlay-subnet` | $DTR_OVERLAY_SUBNET | The subnet used by the dtr-ol overlay network. Example: 10.0.0.0/24.For high-availalibity, DTR creates an overlay network between UCP nodes. This flag allows you to choose the subnet for that network. Make sure the subnet you choose is not used on any machine where DTR replicas are deployed. |
 | `--replica-http-port` | $REPLICA_HTTP_PORT | The public HTTP port for the DTR replica. Default is 80.This allows you to customize the HTTP port where users can reach DTR. Once users access the HTTP port, they are redirected to use an HTTPS connection, using the port specified with --replica-https-port. This port can also be used for unencrypted health checks. |
 | `--replica-https-port` | $REPLICA_HTTPS_PORT | The public HTTPS port for the DTR replica. Default is 443.This allows you to customize the HTTPS port where users can reach DTR. Each replica can use a different port. |


@@ -42,9 +42,9 @@ time, configure your DTR for high-availability.
 | `--https-proxy` | $DTR_HTTPS_PROXY | The HTTPS proxy used for outgoing requests. |
 | `--log-host` | $LOG_HOST | Where to send logs to. The endpoint to send logs to. Use this flag if you set `--log-protocol` to tcp or udp. |
 | `--log-level` | $LOG_LEVEL | Log level for all container logs when logging to syslog. Default: INFO. |
-| `--log-protocol` | $LOG_PROTOCOL | The protocol for sending logs. Default is internal. This allows to define the protocol used to send container logs to an external system. The supported protocals are tcp, udp, or internal. Use this flag with `--log-host`. |
+| `--log-protocol` | $LOG_PROTOCOL | The protocol for sending logs. Default is internal. This allows to define the protocol used to send container logs to an external system. The supported protocols are tcp, udp, or internal. Use this flag with `--log-host`. |
 | `--nfs-storage-url` | $NFS_STORAGE_URL | NFS to store Docker images. Format nfs://<ip&#124;hostname>/<mountpoint>. By default DTR creates a volume to store the Docker images in the local filesystem of the node where DTR is running, without high-availability. Use this flag to specify an NFS mount for DTR to store images, using the format nfs://<ip&#124;hostname>/<mountpoint>. To use this flag, you need to install an NFS client library like nfs-common in the node where you're deploying DTR. You can test this by running showmount -e <nfs-server>. When you join new replicas, they will start using NFS so you don't need to use this flag. To reconfigure DTR to stop using NFS, leave this option empty. |
-| `--no-proxy` | $DTR_NO_PROXY | List of domains the proxy should not be used for. When using `--http-proxy` you can use this flag to specify a list of domains that you don't want to route throught the proxy. Format acme.com[, acme.org]. |
+| `--no-proxy` | $DTR_NO_PROXY | List of domains the proxy should not be used for. When using `--http-proxy` you can use this flag to specify a list of domains that you don't want to route through the proxy. Format acme.com[, acme.org]. |
 | `--replica-http-port` | $REPLICA_HTTP_PORT | The public HTTP port for the DTR replica. Default is 80. This allows you to customize the HTTP port where users can reach DTR. Once users access the HTTP port, they are redirected to use an HTTPS connection, using the port specified with `--replica-https-port`. This port can also be used for unencrypted health checks. |
 | `--replica-https-port` | $REPLICA_HTTPS_PORT | The public HTTPS port for the DTR replica. Default is 443. This allows you to customize the HTTPS port where users can reach DTR. Each replica can use a different port. |
 | `--ucp-ca` | $UCP_CA | Use a PEM-encoded TLS CA certificate for UCP. Download the UCP TLS CA certificate from https://<ucp-url>/ca, and use --ucp-ca "$(cat ca.pem)". |


@@ -24,7 +24,7 @@ restore procedure for the Docker images stored in your registry, taking in
 consideration whether your DTR installation is configured to store images on
 the local filesystem or using a cloud provider.
-After restoring, you can add more DTR replicas by using the the 'join' command.
+After restoring, you can add more DTR replicas by using the 'join' command.
 ## Options
@@ -46,9 +46,9 @@ After restoring, you can add more DTR replicas by using the the 'join' command.
 | `--https-proxy` | $DTR_HTTPS_PROXY | The HTTPS proxy used for outgoing requests. |
 | `--log-host` | $LOG_HOST | Where to send logs to.The endpoint to send logs to. Use this flag if you set --log-protocol to tcp or udp. |
 | `--log-level` | $LOG_LEVEL | Log level for all container logs when logging to syslog. Default: INFO. |
-| `--log-protocol` | $LOG_PROTOCOL | The protocol for sending logs. Default is internal.This allows to define the protocol used to send container logs to an external system. The supported protocals are tcp, udp, or internal. Use this flag with --log-host. |
+| `--log-protocol` | $LOG_PROTOCOL | The protocol for sending logs. Default is internal.This allows to define the protocol used to send container logs to an external system. The supported protocols are tcp, udp, or internal. Use this flag with --log-host. |
 | `--nfs-storage-url` | $NFS_STORAGE_URL | NFS to store Docker images. Format nfs://<ip&#124;hostname>/<mountpoint>.By default DTR creates a volume to store the Docker images in the local filesystem of the node where DTR is running, without high-availability. Use this flag to specify an NFS mount for DTR to store images, using the format nfs://<ip&#124;hostname>/<mountpoint>. To use this flag, you need to install an NFS client library like nfs-common in the node where you're deploying DTR. You can test this by running showmount -e <nfs-server>. When you join new replicas, they will start using NFS so you don't need to use this flag. To reconfigure DTR to stop using NFS, leave this option empty. |
-| `--no-proxy` | $DTR_NO_PROXY | List of domains the proxy should not be used for.When using --http-proxy you can use this flag to specify a list of domains that you don't want to route throught the proxy. Format acme.com[, acme.org]. |
+| `--no-proxy` | $DTR_NO_PROXY | List of domains the proxy should not be used for.When using --http-proxy you can use this flag to specify a list of domains that you don't want to route through the proxy. Format acme.com[, acme.org]. |
 | `--replica-http-port` | $REPLICA_HTTP_PORT | The public HTTP port for the DTR replica. Default is 80.This allows you to customize the HTTP port where users can reach DTR. Once users access the HTTP port, they are redirected to use an HTTPS connection, using the port specified with --replica-https-port. This port can also be used for unencrypted health checks. |
 | `--replica-https-port` | $REPLICA_HTTPS_PORT | The public HTTPS port for the DTR replica. Default is 443.This allows you to customize the HTTPS port where users can reach DTR. Each replica can use a different port. |
 | `--replica-id` | $DTR_INSTALL_REPLICA_ID | Assign an ID to the DTR replica. Random by default. |


@@ -54,7 +54,7 @@ with more details on any one of these services:
 * Content trust (notary)
 This endpoint is for checking the health of a *single* replica. To get
-the health of every replica in a cluster, querying each replica individiually is
+the health of every replica in a cluster, querying each replica individually is
 the preferred way to do it in real time.
 The `/api/v0/meta/cluster_status`


@@ -69,8 +69,8 @@ Jobs can be in one of the following status:
 ## Job capacity
-Each job runner has a limited capacity and doesn't claim jobs that require an
+Each job runner has a limited capacity and doesn't claim jobs that require a
 higher capacity. You can see the capacity of a job runner using the
 `GET /api/v0/workers` endpoint:
 ```json


@@ -46,9 +46,9 @@ Note: Use --ucp-ca "$(cat ca.pem)" instead of --ucp-insecure-tls for a productio
 | `--https-proxy` | $DTR_HTTPS_PROXY | The HTTPS proxy used for outgoing requests. |
 | `--log-host` | $LOG_HOST | Where to send logs to.The endpoint to send logs to. Use this flag if you set --log-protocol to tcp or udp. |
 | `--log-level` | $LOG_LEVEL | Log level for all container logs when logging to syslog. Default: INFO. |
-| `--log-protocol` | $LOG_PROTOCOL | The protocol for sending logs. Default is internal.This allows to define the protocol used to send container logs to an external system. The supported protocals are tcp, udp, or internal. Use this flag with --log-host. |
+| `--log-protocol` | $LOG_PROTOCOL | The protocol for sending logs. Default is internal.This allows to define the protocol used to send container logs to an external system. The supported protocols are tcp, udp, or internal. Use this flag with --log-host. |
 | `--nfs-storage-url` | $NFS_STORAGE_URL | NFS to store Docker images. Format nfs://<ip&#124;hostname>/<mountpoint>.By default DTR creates a volume to store the Docker images in the local filesystem of the node where DTR is running, without high-availability. Use this flag to specify an NFS mount for DTR to store images, using the format nfs://<ip&#124;hostname>/<mountpoint>. To use this flag, you need to install an NFS client library like nfs-common in the node where you're deploying DTR. You can test this by running showmount -e <nfs-server>. When you join new replicas, they will start using NFS so you don't need to use this flag. To reconfigure DTR to stop using NFS, leave this option empty. |
-| `--no-proxy` | $DTR_NO_PROXY | List of domains the proxy should not be used for.When using --http-proxy you can use this flag to specify a list of domains that you don't want to route throught the proxy. Format acme.com[, acme.org]. |
+| `--no-proxy` | $DTR_NO_PROXY | List of domains the proxy should not be used for.When using --http-proxy you can use this flag to specify a list of domains that you don't want to route through the proxy. Format acme.com[, acme.org]. |
 | `--overlay-subnet` | $DTR_OVERLAY_SUBNET | The subnet used by the dtr-ol overlay network. Example: 10.0.0.0/24.For high-availalibity, DTR creates an overlay network between UCP nodes. This flag allows you to choose the subnet for that network. Make sure the subnet you choose is not used on any machine where DTR replicas are deployed. |
 | `--replica-http-port` | $REPLICA_HTTP_PORT | The public HTTP port for the DTR replica. Default is 80.This allows you to customize the HTTP port where users can reach DTR. Once users access the HTTP port, they are redirected to use an HTTPS connection, using the port specified with --replica-https-port. This port can also be used for unencrypted health checks. |
 | `--replica-https-port` | $REPLICA_HTTPS_PORT | The public HTTPS port for the DTR replica. Default is 443.This allows you to customize the HTTPS port where users can reach DTR. Each replica can use a different port. |


@ -39,9 +39,9 @@ time, configure your DTR for high-availability.
| `--https-proxy` | $DTR_HTTPS_PROXY | The HTTPS proxy used for outgoing requests. | | `--https-proxy` | $DTR_HTTPS_PROXY | The HTTPS proxy used for outgoing requests. |
| `--log-host` | $LOG_HOST | Where to send logs to.The endpoint to send logs to. Use this flag if you set --log-protocol to tcp or udp. | | `--log-host` | $LOG_HOST | Where to send logs to.The endpoint to send logs to. Use this flag if you set --log-protocol to tcp or udp. |
| `--log-level` | $LOG_LEVEL | Log level for all container logs when logging to syslog. Default: INFO. | | `--log-level` | $LOG_LEVEL | Log level for all container logs when logging to syslog. Default: INFO. |
| `--log-protocol` | $LOG_PROTOCOL | The protocol for sending logs. Default is internal.This allows to define the protocol used to send container logs to an external system. The supported protocals are tcp, udp, or internal. Use this flag with --log-host. | | `--log-protocol` | $LOG_PROTOCOL | The protocol for sending logs. Default is internal.This allows to define the protocol used to send container logs to an external system. The supported protocols are tcp, udp, or internal. Use this flag with --log-host. |
| `--nfs-storage-url` | $NFS_STORAGE_URL | NFS to store Docker images. Format nfs://<ip&#124;hostname>/<mountpoint>.By default DTR creates a volume to store the Docker images in the local filesystem of the node where DTR is running, without high-availability. Use this flag to specify an NFS mount for DTR to store images, using the format nfs://<ip&#124;hostname>/<mountpoint>. To use this flag, you need to install an NFS client library like nfs-common in the node where you're deploying DTR. You can test this by running showmount -e <nfs-server>. When you join new replicas, they will start using NFS so you don't need to use this flag. To reconfigure DTR to stop using NFS, leave this option empty. | | `--nfs-storage-url` | $NFS_STORAGE_URL | NFS to store Docker images. Format nfs://<ip&#124;hostname>/<mountpoint>.By default DTR creates a volume to store the Docker images in the local filesystem of the node where DTR is running, without high-availability. Use this flag to specify an NFS mount for DTR to store images, using the format nfs://<ip&#124;hostname>/<mountpoint>. To use this flag, you need to install an NFS client library like nfs-common in the node where you're deploying DTR. You can test this by running showmount -e <nfs-server>. When you join new replicas, they will start using NFS so you don't need to use this flag. To reconfigure DTR to stop using NFS, leave this option empty. |
| `--no-proxy` | $DTR_NO_PROXY | List of domains the proxy should not be used for.When using --http-proxy you can use this flag to specify a list of domains that you don't want to route throught the proxy. Format acme.com[, acme.org]. | | `--no-proxy` | $DTR_NO_PROXY | List of domains the proxy should not be used for.When using --http-proxy you can use this flag to specify a list of domains that you don't want to route through the proxy. Format acme.com[, acme.org]. |
| `--replica-http-port` | $REPLICA_HTTP_PORT | The public HTTP port for the DTR replica. Default is 80.This allows you to customize the HTTP port where users can reach DTR. Once users access the HTTP port, they are redirected to use an HTTPS connection, using the port specified with --replica-https-port. This port can also be used for unencrypted health checks. | | `--replica-http-port` | $REPLICA_HTTP_PORT | The public HTTP port for the DTR replica. Default is 80.This allows you to customize the HTTP port where users can reach DTR. Once users access the HTTP port, they are redirected to use an HTTPS connection, using the port specified with --replica-https-port. This port can also be used for unencrypted health checks. |
| `--replica-https-port` | $REPLICA_HTTPS_PORT | The public HTTPS port for the DTR replica. Default is 443.This allows you to customize the HTTPS port where users can reach DTR. Each replica can use a different port. | | `--replica-https-port` | $REPLICA_HTTPS_PORT | The public HTTPS port for the DTR replica. Default is 443.This allows you to customize the HTTPS port where users can reach DTR. Each replica can use a different port. |
| `--ucp-ca` | $UCP_CA | Use a PEM-encoded TLS CA certificate for UCP.Download the UCP TLS CA certificate from https://<ucp-url>/ca, and use --ucp-ca "$(cat ca.pem)". | | `--ucp-ca` | $UCP_CA | Use a PEM-encoded TLS CA certificate for UCP.Download the UCP TLS CA certificate from https://<ucp-url>/ca, and use --ucp-ca "$(cat ca.pem)". |
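As a quick sanity check before passing `--nfs-storage-url`, you can confirm that the node actually sees the NFS export; the server name and export path below are placeholders.

```bash
# Sketch: verify the NFS export is reachable from the node that will run DTR.
showmount -e nfs.example.com
# If the export is listed, it would then be passed to DTR as, for example:
#   --nfs-storage-url nfs://nfs.example.com/exports/dtr
```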

View File

@ -24,7 +24,7 @@ restore procedure for the Docker images stored in your registry, taking in
consideration whether your DTR installation is configured to store images on consideration whether your DTR installation is configured to store images on
the local filesystem or using a cloud provider. the local filesystem or using a cloud provider.
After restoring, you can add more DTR replicas by using the the 'join' command. After restoring, you can add more DTR replicas by using the 'join' command.
## Options ## Options
@ -43,9 +43,9 @@ After restoring, you can add more DTR replicas by using the the 'join' command.
| `--https-proxy` | $DTR_HTTPS_PROXY | The HTTPS proxy used for outgoing requests. | | `--https-proxy` | $DTR_HTTPS_PROXY | The HTTPS proxy used for outgoing requests. |
| `--log-host` | $LOG_HOST | Where to send logs to.The endpoint to send logs to. Use this flag if you set --log-protocol to tcp or udp. | | `--log-host` | $LOG_HOST | Where to send logs to.The endpoint to send logs to. Use this flag if you set --log-protocol to tcp or udp. |
| `--log-level` | $LOG_LEVEL | Log level for all container logs when logging to syslog. Default: INFO. | | `--log-level` | $LOG_LEVEL | Log level for all container logs when logging to syslog. Default: INFO. |
| `--log-protocol` | $LOG_PROTOCOL | The protocol for sending logs. Default is internal.This allows to define the protocol used to send container logs to an external system. The supported protocals are tcp, udp, or internal. Use this flag with --log-host. | | `--log-protocol` | $LOG_PROTOCOL | The protocol for sending logs. Default is internal.This allows to define the protocol used to send container logs to an external system. The supported protocols are tcp, udp, or internal. Use this flag with --log-host. |
| `--nfs-storage-url` | $NFS_STORAGE_URL | NFS to store Docker images. Format nfs://<ip&#124;hostname>/<mountpoint>.By default DTR creates a volume to store the Docker images in the local filesystem of the node where DTR is running, without high-availability. Use this flag to specify an NFS mount for DTR to store images, using the format nfs://<ip&#124;hostname>/<mountpoint>. To use this flag, you need to install an NFS client library like nfs-common in the node where you're deploying DTR. You can test this by running showmount -e <nfs-server>. When you join new replicas, they will start using NFS so you don't need to use this flag. To reconfigure DTR to stop using NFS, leave this option empty. | | `--nfs-storage-url` | $NFS_STORAGE_URL | NFS to store Docker images. Format nfs://<ip&#124;hostname>/<mountpoint>.By default DTR creates a volume to store the Docker images in the local filesystem of the node where DTR is running, without high-availability. Use this flag to specify an NFS mount for DTR to store images, using the format nfs://<ip&#124;hostname>/<mountpoint>. To use this flag, you need to install an NFS client library like nfs-common in the node where you're deploying DTR. You can test this by running showmount -e <nfs-server>. When you join new replicas, they will start using NFS so you don't need to use this flag. To reconfigure DTR to stop using NFS, leave this option empty. |
| `--no-proxy` | $DTR_NO_PROXY | List of domains the proxy should not be used for.When using --http-proxy you can use this flag to specify a list of domains that you don't want to route throught the proxy. Format acme.com[, acme.org]. | | `--no-proxy` | $DTR_NO_PROXY | List of domains the proxy should not be used for.When using --http-proxy you can use this flag to specify a list of domains that you don't want to route through the proxy. Format acme.com[, acme.org]. |
| `--replica-http-port` | $REPLICA_HTTP_PORT | The public HTTP port for the DTR replica. Default is 80.This allows you to customize the HTTP port where users can reach DTR. Once users access the HTTP port, they are redirected to use an HTTPS connection, using the port specified with --replica-https-port. This port can also be used for unencrypted health checks. | | `--replica-http-port` | $REPLICA_HTTP_PORT | The public HTTP port for the DTR replica. Default is 80.This allows you to customize the HTTP port where users can reach DTR. Once users access the HTTP port, they are redirected to use an HTTPS connection, using the port specified with --replica-https-port. This port can also be used for unencrypted health checks. |
| `--replica-https-port` | $REPLICA_HTTPS_PORT | The public HTTPS port for the DTR replica. Default is 443.This allows you to customize the HTTPS port where users can reach DTR. Each replica can use a different port. | | `--replica-https-port` | $REPLICA_HTTPS_PORT | The public HTTPS port for the DTR replica. Default is 443.This allows you to customize the HTTPS port where users can reach DTR. Each replica can use a different port. |
| `--replica-id` | $DTR_INSTALL_REPLICA_ID | Assign an ID to the DTR replica. Random by default. | | `--replica-id` | $DTR_INSTALL_REPLICA_ID | Assign an ID to the DTR replica. Random by default. |
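For reference, joining an additional replica after a restore might look roughly like the sketch below; the UCP details, node name, replica ID, and image tag are all placeholders.

```bash
# Sketch: join a new DTR replica to an existing deployment after restoring.
docker run -it --rm docker/dtr:2.5.0 join \
  --ucp-url https://ucp.example.com \
  --ucp-username admin \
  --ucp-node worker-02 \
  --ucp-ca "$(cat ucp-ca.pem)" \
  --existing-replica-id 5a0037f6b77d
```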

View File

@ -10,7 +10,7 @@ title: Integrate with Docker Trusted Registry
You can integrate UCP with Docker Trusted Registry (DTR). This allows you to You can integrate UCP with Docker Trusted Registry (DTR). This allows you to
securely store and manage the Docker images that are used in your UCP cluster. securely store and manage the Docker images that are used in your UCP cluster.
At an high-level, there are three steps to integrate UCP with DTR: At a high level, there are three steps to integrate UCP with DTR:
* Configure UCP to know about DTR, * Configure UCP to know about DTR,
* Configure DTR to trust UCP, * Configure DTR to trust UCP,

View File

@ -150,7 +150,7 @@ To enable the networking feature, do the following.
5. Restart the Engine `daemon`. 5. Restart the Engine `daemon`.
The Engine `daemon` is a OS service process running on each node in your The Engine `daemon` is an OS service process running on each node in your
cluster. How you restart a service is operating-system dependent. Some cluster. How you restart a service is operating-system dependent. Some
examples appear below but keep in mind that on your system, the restart examples appear below but keep in mind that on your system, the restart
operation may differ. Check with your system administrator if you are not operation may differ. Check with your system administrator if you are not

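On a systemd-based Linux node, for example, the restart usually looks like the following; older init systems use `service` instead.

```bash
# Sketch: restart the Docker Engine daemon on a systemd-based node.
sudo systemctl restart docker
# On older init systems the equivalent is typically:
#   sudo service docker restart
```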
View File

@ -145,7 +145,7 @@ user certificates:
$ notary delegation add -p <dtr_url>/<account>/<repository> targets/releases --all-paths user1.pem user2.pem $ notary delegation add -p <dtr_url>/<account>/<repository> targets/releases --all-paths user1.pem user2.pem
``` ```
The above command adds the the `targets/releases` delegation role to a trusted The above command adds the `targets/releases` delegation role to a trusted
repository. repository.
This role is treated as an actual release branch for Docker Content Trust, This role is treated as an actual release branch for Docker Content Trust,
since `docker pull` commands with trust enabled will pull directly from this since `docker pull` commands with trust enabled will pull directly from this

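As a rough illustration, you could verify the delegation and then push a signed tag as follows; the DTR address, repository, and the Notary server and trust-directory flags are placeholders based on the alias conventions used elsewhere in these docs.

```bash
# Sketch: list delegations for the repository, then push with content trust
# enabled so the tag is signed into targets/releases (names are placeholders).
notary -s https://dtr.example.com -d ~/.docker/trust \
  delegation list dtr.example.com/engineering/webapp
export DOCKER_CONTENT_TRUST=1
docker push dtr.example.com/engineering/webapp:1.0
```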
View File

@ -16,7 +16,7 @@ you use the [docker swarm join](/engine/swarm/swarm-tutorial/add-nodes.md)
command to add more nodes to your cluster. When joining new nodes, the UCP command to add more nodes to your cluster. When joining new nodes, the UCP
services automatically start running in that node. services automatically start running in that node.
When joining a node a a cluster you can specify its role: manager or worker. When joining a node to a cluster, you can specify its role: manager or worker.
* **Manager nodes** * **Manager nodes**

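As a reminder of what that looks like in practice, you can print the join command on a manager and run it on the new node; the token and address below are placeholders.

```bash
# Sketch: generate the worker join command on a manager, then run the
# printed command on the node you want to add.
docker swarm join-token worker
docker swarm join --token SWMTKN-1-<token> 192.0.2.10:2377
```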
View File

@ -11,7 +11,7 @@ services with sensitive information like passwords, TLS certificates, or
private keys. private keys.
Universal Control Plane allows you to store this sensitive information, also Universal Control Plane allows you to store this sensitive information, also
know as secrets, in a secure way. It also gives you role-based access control known as secrets, in a secure way. It also gives you role-based access control
so that you can control which users can use a secret in their services so that you can control which users can use a secret in their services
and which ones can manage the secret. and which ones can manage the secret.
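At the CLI level, the same idea might look like this minimal sketch; the secret name, value, and service image are illustrative.

```bash
# Sketch: store a password as a secret and grant a single service access to it.
printf 'S3cr3tPassw0rd' | docker secret create db_password -
docker service create --name api --secret db_password nginx:alpine
```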

View File

@ -139,7 +139,7 @@ Settings for syncing users.
## auth.ldap.admin_sync_opts (optional) ## auth.ldap.admin_sync_opts (optional)
Settings for syncing system admininistrator users. Settings for syncing system administrator users.
| Parameter | Required | Description | | Parameter | Required | Description |
|:-----------------------|:---------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| |:-----------------------|:---------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|

View File

@ -223,5 +223,5 @@ you can create an overlay network that contains the `com.docker.mesh.http` label
docker network create -d overlay --label com.docker.ucp.mesh.http=true new-hrm-network docker network create -d overlay --label com.docker.ucp.mesh.http=true new-hrm-network
``` ```
If you're creating a a new HRM network you need to disable the HRM service first, or disable If you're creating a new HRM network you need to disable the HRM service first, or disable
and enable the HRM service after you create the network; otherwise, HRM will not be available on the new network. and enable the HRM service after you create the network; otherwise, HRM will not be available on the new network.

View File

@ -2121,7 +2121,7 @@ SuperagentHttpClient.prototype.execute = function (obj) {
} else if (res && obj.on && obj.on.response) { } else if (res && obj.on && obj.on.response) {
var possibleObj; var possibleObj;
// Already parsed by by superagent? // Already parsed by superagent?
if(res.body && Object.keys(res.body).length > 0) { if(res.body && Object.keys(res.body).length > 0) {
possibleObj = res.body; possibleObj = res.body;
} else { } else {
@ -12442,7 +12442,7 @@ var iframe,
elemdisplay = {}; elemdisplay = {};
/** /**
* Retrieve the actual display of a element * Retrieve the actual display of an element
* @param {String} name nodeName of the element * @param {String} name nodeName of the element
* @param {Object} doc Document object * @param {Object} doc Document object
*/ */
@ -13862,7 +13862,7 @@ jQuery.fx.speeds = {
}; };
// Based off of the plugin by Clint Helfers, with permission. // Based on the plugin by Clint Helfers, with permission.
// http://blindsignals.com/index.php/2009/07/jquery-delay/ // http://blindsignals.com/index.php/2009/07/jquery-delay/
jQuery.fn.delay = function( time, type ) { jQuery.fn.delay = function( time, type ) {
time = jQuery.fx ? jQuery.fx.speeds[ time ] || time : time; time = jQuery.fx ? jQuery.fx.speeds[ time ] || time : time;
@ -26068,7 +26068,7 @@ var baseCreate = require('./baseCreate'),
* @private * @private
* @param {*} value The value to wrap. * @param {*} value The value to wrap.
* @param {boolean} [chainAll] Enable chaining for all wrapper methods. * @param {boolean} [chainAll] Enable chaining for all wrapper methods.
* @param {Array} [actions=[]] Actions to peform to resolve the unwrapped value. * @param {Array} [actions=[]] Actions to perform to resolve the unwrapped value.
*/ */
function LodashWrapper(value, chainAll, actions) { function LodashWrapper(value, chainAll, actions) {
this.__wrapped__ = value; this.__wrapped__ = value;

View File

@ -2121,7 +2121,7 @@ SuperagentHttpClient.prototype.execute = function (obj) {
} else if (res && obj.on && obj.on.response) { } else if (res && obj.on && obj.on.response) {
var possibleObj; var possibleObj;
// Already parsed by by superagent? // Already parsed by superagent?
if(res.body && Object.keys(res.body).length > 0) { if(res.body && Object.keys(res.body).length > 0) {
possibleObj = res.body; possibleObj = res.body;
} else { } else {
@ -12442,7 +12442,7 @@ var iframe,
elemdisplay = {}; elemdisplay = {};
/** /**
* Retrieve the actual display of a element * Retrieve the actual display of an element
* @param {String} name nodeName of the element * @param {String} name nodeName of the element
* @param {Object} doc Document object * @param {Object} doc Document object
*/ */
@ -13862,7 +13862,7 @@ jQuery.fx.speeds = {
}; };
// Based off of the plugin by Clint Helfers, with permission. // Based on the plugin by Clint Helfers, with permission.
// http://blindsignals.com/index.php/2009/07/jquery-delay/ // http://blindsignals.com/index.php/2009/07/jquery-delay/
jQuery.fn.delay = function( time, type ) { jQuery.fn.delay = function( time, type ) {
time = jQuery.fx ? jQuery.fx.speeds[ time ] || time : time; time = jQuery.fx ? jQuery.fx.speeds[ time ] || time : time;
@ -26068,7 +26068,7 @@ var baseCreate = require('./baseCreate'),
* @private * @private
* @param {*} value The value to wrap. * @param {*} value The value to wrap.
* @param {boolean} [chainAll] Enable chaining for all wrapper methods. * @param {boolean} [chainAll] Enable chaining for all wrapper methods.
* @param {Array} [actions=[]] Actions to peform to resolve the unwrapped value. * @param {Array} [actions=[]] Actions to perform to resolve the unwrapped value.
*/ */
function LodashWrapper(value, chainAll, actions) { function LodashWrapper(value, chainAll, actions) {
this.__wrapped__ = value; this.__wrapped__ = value;

View File

@ -202,7 +202,7 @@ to an Organization, the Cancel and Retry buttons only appear if you have `Read &
Automated builds are enabled per branch or tag, and can be disabled and Automated builds are enabled per branch or tag, and can be disabled and
re-enabled easily. You might do this when you want to only build manually for re-enabled easily. You might do this when you want to only build manually for
awhile, for example when you are doing major refactoring in your code. Disabling a while, for example when you are doing major refactoring in your code. Disabling
autobuilds does not disable [autotests](automated-testing.md). autobuilds does not disable [autotests](automated-testing.md).
To disable an automated build: To disable an automated build:

View File

@ -107,7 +107,7 @@ Learn how to [connect to a swarm through Docker Cloud](connect-to-swarm.md).
Learn how to [register existing swarms](register-swarms.md). Learn how to [register existing swarms](register-swarms.md).
You can get an overivew of topics on [swarms in Docker Cloud](index.md). You can get an overview of topics on [swarms in Docker Cloud](index.md).
To find out more about Docker swarm in general, see the Docker engine To find out more about Docker swarm in general, see the Docker engine
[Swarm Mode overview](/engine/swarm/). [Swarm Mode overview](/engine/swarm/).

View File

@ -113,7 +113,7 @@ Learn how to [connect to a swarm through Docker Cloud](connect-to-swarm.md).
Learn how to [register existing swarms](register-swarms.md). Learn how to [register existing swarms](register-swarms.md).
You can get an overivew of topics on [swarms in Docker Cloud](index.md). You can get an overview of topics on [swarms in Docker Cloud](index.md).
To find out more about Docker swarm in general, see the Docker engine To find out more about Docker swarm in general, see the Docker engine
[Swarm Mode overview](/engine/swarm/). [Swarm Mode overview](/engine/swarm/).

View File

@ -112,7 +112,7 @@ You need an SSH key to provide to Docker Cloud during the swarm create
process. If you haven't done so yet, check out how to [Set up SSH process. If you haven't done so yet, check out how to [Set up SSH
keys](ssh-key-setup.md). keys](ssh-key-setup.md).
You can get an overivew of topics on [swarms in Docker Cloud](index.md). You can get an overview of topics on [swarms in Docker Cloud](index.md).
**Using Standard Mode to manage Docker nodes on Azure?** If you are **Using Standard Mode to manage Docker nodes on Azure?** If you are
setting up nodes on Azure in [Standard Mode](/docker-cloud/standard/), setting up nodes on Azure in [Standard Mode](/docker-cloud/standard/),

View File

@ -157,7 +157,7 @@ EOF
For a complete description of the parameters in an ECS task definition, please refer to the [documentation](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html).  For a complete description of the parameters in an ECS task definition, please refer to the [documentation](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html). 
If you've already written a Docker compose file for your service, you can import it as an ECS task definition using the `ecs-cli`, a purpose-built CLI for interacting with the ECS APIs.  It's also possible to create a ECS service directly from a Docker compose file.  The following is an example of a docker compose file for the `db` service. If you've already written a Docker compose file for your service, you can import it as an ECS task definition using the `ecs-cli`, a purpose-built CLI for interacting with the ECS APIs.  It's also possible to create an ECS service directly from a Docker compose file.  The following is an example of a docker compose file for the `db` service.
``` ```
version: '2' version: '2'
@ -207,7 +207,7 @@ When you're ready to register the task definition, execute the following command
``` ```
aws ecs register-task-definition --cli-input-json file://db-taskdef.json   aws ecs register-task-definition --cli-input-json file://db-taskdef.json  
``` ```
Now that we've created a task definition for the Postgres database, we need to create a ECS service.  When you create a service, the tasks are automatically monitored by the ECS scheduler which will restart tasks when they fail in order to maintain your desired state.  With ECS, you can also associate a name with your service in Route 53 so other services can discover it by querying DNS.  For this service, you're going to create an A record.  Now that we've created a task definition for the Postgres database, we need to create an ECS service.  When you create a service, the tasks are automatically monitored by the ECS scheduler which will restart tasks when they fail in order to maintain your desired state.  With ECS, you can also associate a name with your service in Route 53 so other services can discover it by querying DNS.  For this service, you're going to create an A record. 
The first step involves creating a namespace for our `db` service, for example, `corp.local`.  The following command creates a private hosted zone in Route 53 that will be used for our namespace.   The first step involves creating a namespace for our `db` service, for example, `corp.local`.  The following command creates a private hosted zone in Route 53 that will be used for our namespace.  
@ -313,7 +313,7 @@ Register the task definition.
aws ecs register-task-definition --region <region> --cli-input-json file://redis-taskdef.json aws ecs register-task-definition --region <region> --cli-input-json file://redis-taskdef.json
``` ```
This task definition will create a ECS task that runs a `redis:alpine` container that listens on port 6379.  This task definition will create an ECS task that runs a `redis:alpine` container that listens on port 6379. 
Register the `redis:alpine` service with the service discovery service.  This will create an A record in a Route 53 private hosted zone that other services will use for service discovery. Register the `redis:alpine` service with the service discovery service.  This will create an A record in a Route 53 private hosted zone that other services will use for service discovery.
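If you prefer to start from the Compose file mentioned earlier, one possible way to register it as a task definition is with `ecs-cli`; this sketch assumes `ecs-cli` has already been configured for your cluster and region, and the project name is a placeholder.

```bash
# Sketch: register the Compose file as an ECS task definition using ecs-cli
# (assumes ecs-cli configure has already been run for your cluster/region).
ecs-cli compose --project-name db --file docker-compose.yml create
```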

View File

@ -694,7 +694,7 @@ Save the Kubernetes manifest file (as `k8s-vote.yml`) and check it into version
## Test the app on AKS ## Test the app on AKS
Before migrating, you should thoroughly test each new Kubernetes manifest on a AKS cluster. Healthy testing includes _deploying_ the application with the new manifest file, performing _scaling_ operations, increasing _load_, running _failure_ scenarios, and doing _updates_ and _rollbacks_. These tests are specific to each of your applications. You should also manage your manifest files in a version control system. Before migrating, you should thoroughly test each new Kubernetes manifest on an AKS cluster. Healthy testing includes _deploying_ the application with the new manifest file, performing _scaling_ operations, increasing _load_, running _failure_ scenarios, and doing _updates_ and _rollbacks_. These tests are specific to each of your applications. You should also manage your manifest files in a version control system.
The following steps explain how to deploy your app from the Kubernetes manifest file and verify that it is running. The steps are based on the sample application used throughout this guide, but the general commands should work for any app. The following steps explain how to deploy your app from the Kubernetes manifest file and verify that it is running. The steps are based on the sample application used throughout this guide, but the general commands should work for any app.
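A minimal first pass at that testing, assuming `kubectl` is already pointed at your AKS cluster, might be:

```bash
# Sketch: deploy the manifest to the AKS cluster and confirm the objects come up.
kubectl apply -f k8s-vote.yml
kubectl get deployments,pods,services
```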

View File

@ -123,7 +123,7 @@ for Mac](install.md#download-docker-for-mac).
- [Notary 0.6.1](https://github.com/docker/notary/releases/tag/v0.6.1) - [Notary 0.6.1](https://github.com/docker/notary/releases/tag/v0.6.1)
* New * New
- Re-enable raw as the the default disk format for users running macOS 10.13.4 and higher. Note this change only takes effect after a "reset to factory defaults" or "remove all data" (from the Whale menu -> Preferences -> Reset). Related to [docker/for-mac#2625](https://github.com/docker/for-mac/issues/2625) - Re-enable raw as the default disk format for users running macOS 10.13.4 and higher. Note this change only takes effect after a "reset to factory defaults" or "remove all data" (from the Whale menu -> Preferences -> Reset). Related to [docker/for-mac#2625](https://github.com/docker/for-mac/issues/2625)
* Bug fixes and minor changes * Bug fixes and minor changes
- Fix Docker for Mac not starting due to socket file paths being too long (typically HOME folder path being too long). Fixes [docker/for-mac#2727](https://github.com/docker/for-mac/issues/2727), [docker/for-mac#2731](https://github.com/docker/for-mac/issues/2731). - Fix Docker for Mac not starting due to socket file paths being too long (typically HOME folder path being too long). Fixes [docker/for-mac#2727](https://github.com/docker/for-mac/issues/2727), [docker/for-mac#2731](https://github.com/docker/for-mac/issues/2731).

View File

@ -44,7 +44,7 @@ for Mac](install.md#download-docker-for-mac).
* New * New
- Kubernetes Support. You can now run a single-node Kubernetes cluster from the "Kubernetes" Pane in Docker For Mac Preferences and use kubectl commands as well as docker commands. See https://docs.docker.com/docker-for-mac/kubernetes/ - Kubernetes Support. You can now run a single-node Kubernetes cluster from the "Kubernetes" Pane in Docker For Mac Preferences and use kubectl commands as well as docker commands. See https://docs.docker.com/docker-for-mac/kubernetes/
- Add an experimental SOCKS server to allow access to container networks, see [docker/for-mac#2670](https://github.com/docker/for-mac/issues/2670#issuecomment-372365274). Also see [docker/for-mac#2721](https://github.com/docker/for-mac/issues/2721) - Add an experimental SOCKS server to allow access to container networks, see [docker/for-mac#2670](https://github.com/docker/for-mac/issues/2670#issuecomment-372365274). Also see [docker/for-mac#2721](https://github.com/docker/for-mac/issues/2721)
- Re-enable raw as the the default disk format for users running macOS 10.13.4 and higher. Note this change only takes effect after a "reset to factory defaults" or "remove all data" (from the Whale menu -> Preferences -> Reset). Related to [docker/for-mac#2625](https://github.com/docker/for-mac/issues/2625) - Re-enable raw as the default disk format for users running macOS 10.13.4 and higher. Note this change only takes effect after a "reset to factory defaults" or "remove all data" (from the Whale menu -> Preferences -> Reset). Related to [docker/for-mac#2625](https://github.com/docker/for-mac/issues/2625)
* Bug fixes and minor changes * Bug fixes and minor changes
- AUFS storage driver is deprecated in Docker Desktop and AUFS support will be removed in the next major release. You can continue with AUFS in Docker Desktop 18.06.x, but you will need to reset disk image (in Preferences > Reset menu) before updating to the next major update. You can check documentation to [save images](https://docs.docker.com/engine/reference/commandline/save/#examples) and [backup volumes](https://docs.docker.com/storage/volumes/#backup-restore-or-migrate-data-volumes) - AUFS storage driver is deprecated in Docker Desktop and AUFS support will be removed in the next major release. You can continue with AUFS in Docker Desktop 18.06.x, but you will need to reset disk image (in Preferences > Reset menu) before updating to the next major update. You can check documentation to [save images](https://docs.docker.com/engine/reference/commandline/save/#examples) and [backup volumes](https://docs.docker.com/storage/volumes/#backup-restore-or-migrate-data-volumes)

View File

@ -274,7 +274,7 @@ know before you install](install.md#what-to-know-before-you-install).
* IPv6 is not (yet) supported on Docker for Mac. * IPv6 is not (yet) supported on Docker for Mac.
A workaround is provided that auto-filters out the IPv6 addresses in DNS A workaround is provided that auto-filters out the IPv6 addresses in DNS
server lists and enables successful network accesss. For example, server lists and enables successful network access. For example,
`2001:4860:4860::8888` would become `8.8.8.8`. To learn more, see these `2001:4860:4860::8888` would become `8.8.8.8`. To learn more, see these
issues on GitHub and Docker for Mac forums: issues on GitHub and Docker for Mac forums:

View File

@ -244,7 +244,7 @@ it for you.
See [these instructions](https://msdn.microsoft.com/en-us/virtualization/hyperv_on_windows/quick_start/walkthrough_install) to install Hyper-V manually. A reboot is *required*. If you install Hyper-V without the reboot, Docker for Windows does not work correctly. On some systems, Virtualization needs to be enabled in the BIOS. The steps to do so are Vendor specific, but typically the BIOS option is called `Virtualization Technology (VTx)` or similar. See [these instructions](https://msdn.microsoft.com/en-us/virtualization/hyperv_on_windows/quick_start/walkthrough_install) to install Hyper-V manually. A reboot is *required*. If you install Hyper-V without the reboot, Docker for Windows does not work correctly. On some systems, Virtualization needs to be enabled in the BIOS. The steps to do so are Vendor specific, but typically the BIOS option is called `Virtualization Technology (VTx)` or similar.
From the start menu, type in "Turn Windows features on or off" and hit enter. In the subequent screen, verify Hyper-V is enabled and has a checkmark: From the start menu, type in "Turn Windows features on or off" and hit enter. In the subsequent screen, verify Hyper-V is enabled and has a checkmark:
![Hyper-V on Windows features](images/hyperv-enabled.png){:width="600px"} ![Hyper-V on Windows features](images/hyperv-enabled.png){:width="600px"}

View File

@ -100,7 +100,7 @@ Now you can push this repository to the registry designated by its name or tag.
$ docker push <hub-user>/<repo-name>:<tag> $ docker push <hub-user>/<repo-name>:<tag>
The image is then uploaded and available for use by your team-mates and/or The image is then uploaded and available for use by your teammates and/or
the community. the community.
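For example, tagging a locally built image into your namespace and pushing it might look like this; the user, repository, and tag are placeholders.

```bash
# Sketch: tag a local image with your Docker Hub namespace and push it.
docker tag my-app:latest jdoe/my-app:1.0
docker push jdoe/my-app:1.0
```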

View File

@ -552,7 +552,7 @@ if [[ $? -ne 0 ]]; then
fi fi
####################################################################################################################################### #######################################################################################################################################
# Run a alpine container with the plugin and send data to it # Run an alpine container with the plugin and send data to it
####################################################################################################################################### #######################################################################################################################################
docker container run \ docker container run \
--rm \ --rm \

View File

@ -94,7 +94,7 @@ We aim to have product listings published with the concept of versions, allowing
*Documentation* maps to *Documentation Link* in the publish process. *Documentation* maps to *Documentation Link* in the publish process.
*Feedback* is provided via customer reviews. https://store.docker.com/images/node?tab=reviews is an example. *Feedback* is provided via customer reviews. https://store.docker.com/images/node?tab=reviews is an example.
*Tier Description* is what you see once users get entitled to a plan. For instance, in https://store.docker.com/images/openmaptiles-openstreetmap-maps/plans/f1fc533a-76f0-493a-80a1-4e0a2b38a563?tab=instructions `A detailed street map of any place on a planet. Evaluation and non-production use. Production use license available separately` is what this publisher entered in the Tier description *Tier Description* is what you see once users get entitled to a plan. For instance, in https://store.docker.com/images/openmaptiles-openstreetmap-maps/plans/f1fc533a-76f0-493a-80a1-4e0a2b38a563?tab=instructions `A detailed street map of any place on a planet. Evaluation and non-production use. Production use license available separately` is what this publisher entered in the Tier description
*Installation instructions* is documentation on installing your software. In this case the documentation is just `Just launch the container and the map is going to be available on port 80 - ready-to-use - with instructions and list of available styles.` (We recommend more details for any content thats a certification candidate). *Installation instructions* is documentation on installing your software. In this case the documentation is just `Just launch the container and the map is going to be available on port 80 - ready-to-use - with instructions and list of available styles.` (We recommend more details for any content that's a certification candidate).
### How can I remove a submission? I don't want to have this image published at the moment, as it is missing some information. ### How can I remove a submission? I don't want to have this image published at the moment, as it is missing some information.

View File

@ -76,7 +76,7 @@ Jobs can be in one of the following status:
## Job capacity ## Job capacity
Each job runner has a limited capacity and won't claim jobs that require an Each job runner has a limited capacity and won't claim jobs that require a
higher capacity. You can see the capacity of a job runner using the higher capacity. You can see the capacity of a job runner using the
`GET /api/v0/workers` endpoint: `GET /api/v0/workers` endpoint:
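For instance, a quick way to query that endpoint is with `curl`; the DTR address and credentials are placeholders, and `-k` is only appropriate for test setups with self-signed certificates.

```bash
# Sketch: list job runners and their capacity (host and credentials are placeholders).
curl -ks -u admin:$DTR_TOKEN https://dtr.example.com/api/v0/workers
```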

View File

@ -14,7 +14,7 @@ In the **DTR web UI**, navigate to the repository that has been scanned.
![Tag list](../../images/override-vulnerability-1.png){: .with-border} ![Tag list](../../images/override-vulnerability-1.png){: .with-border}
Click **View details** for the image whose scan results you want to see, and Click **View details** for the image whose scan results you want to see, and
and choose **Components** to see the vulnerabilities for each component packaged choose **Components** to see the vulnerabilities for each component packaged
in the image. in the image.
Select the component with the vulnerability you want to ignore, navigate to the Select the component with the vulnerability you want to ignore, navigate to the

View File

@ -43,7 +43,7 @@ the public key certificate for that certificate authority.
You can get it by accessing `https://<dtr-domain>/ca`. You can get it by accessing `https://<dtr-domain>/ca`.
Click **execute** and make sure you got an HTTP 201 response, signaling that the Click **execute** and make sure you got an HTTP 201 response, signaling that the
the repository is polling the source repository every couple of minutes repository is polling the source repository every couple of minutes
## Where to go next ## Where to go next

View File

@ -54,7 +54,7 @@ dtr_ca_url=${dtr_full_url}/ca
dtr_host_address=${dtr_full_url#"https://"} dtr_host_address=${dtr_full_url#"https://"}
dtr_host_address=${dtr_host_address%":443"} dtr_host_address=${dtr_host_address%":443"}
# Create the registry configuration and save it it # Create the registry configuration and save it
cat <<EOL > trust-dtr.toml cat <<EOL > trust-dtr.toml
[[registries]] [[registries]]

View File

@ -39,7 +39,7 @@ To push images to DTR, you need CLI access to a licensed installation of
Docker EE. Docker EE.
- [License your installation](license-your-installation.md). - [License your installation](license-your-installation.md).
- [Set up your Docker CLI](../../user-acccess/cli.md). - [Set up your Docker CLI](../../user-access/cli.md).
When you're set up for CLI-based access to a licensed Docker EE instance, When you're set up for CLI-based access to a licensed Docker EE instance,
you can push images to DTR. you can push images to DTR.
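Once the CLI is set up, a typical push might look like the following sketch; the DTR address, organization, repository, and tag are placeholders.

```bash
# Sketch: authenticate against DTR, then tag and push a local image.
docker login dtr.example.com
docker tag my-app:1.0 dtr.example.com/engineering/my-app:1.0
docker push dtr.example.com/engineering/my-app:1.0
```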

View File

@ -138,7 +138,7 @@ Settings for syncing users.
### auth.ldap.admin_sync_opts (optional) ### auth.ldap.admin_sync_opts (optional)
Settings for syncing system admininistrator users. Settings for syncing system administrator users.
| Parameter | Required | Description | | Parameter | Required | Description |
|:-----------------------|:---------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| |:-----------------------|:---------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|

View File

@ -105,7 +105,7 @@ spec:
terminationGracePeriodSeconds: 60 terminationGracePeriodSeconds: 60
containers: containers:
- name: default-http-backend - name: default-http-backend
# Any image is permissable as long as: # Any image is permissible as long as:
# 1. It serves a 404 page at / # 1. It serves a 404 page at /
# 2. It serves 200 on a /healthz endpoint # 2. It serves 200 on a /healthz endpoint
image: gcr.io/google_containers/defaultbackend:1.4 image: gcr.io/google_containers/defaultbackend:1.4

View File

@ -325,7 +325,7 @@ deprecated. Deploy your applications as Swarm services or Kubernetes workloads.
* Fixed an issue where removing a worker node from the cluster would cause an etcd member to be removed on a manager node. * Fixed an issue where removing a worker node from the cluster would cause an etcd member to be removed on a manager node.
* Upgraded `etcd` version to 2.3.8. * Upgraded `etcd` version to 2.3.8.
* Fixed an issue that causes classic Swarm to provide outdated data. * Fixed an issue that causes classic Swarm to provide outdated data.
* Fixed an issue that raises `ucp-kv` collection error with un-named volumes. * Fixed an issue that raises a `ucp-kv` collection error with unnamed volumes.
* UI * UI
* Fixed an issue that causes UI to not parse volume options correctly. * Fixed an issue that causes UI to not parse volume options correctly.

View File

@ -80,7 +80,7 @@ cd client-bundle; Import-Module .\env.ps1
</div> </div>
</div> </div>
The client bundle utility scripts update the the environment variables The client bundle utility scripts update the environment variables
`DOCKER_HOST` to make your client tools communicate with your UCP deployment, `DOCKER_HOST` to make your client tools communicate with your UCP deployment,
and the `DOCKER_CERT_PATH` environment variable to use the client certificates and the `DOCKER_CERT_PATH` environment variable to use the client certificates
that are included in the client bundle you downloaded. The utility scripts also that are included in the client bundle you downloaded. The utility scripts also

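On Linux or macOS, loading the bundle and confirming the connection might look like this sketch; the bundle directory name is a placeholder.

```bash
# Sketch: load the client bundle environment and verify the CLI now talks to UCP.
cd ucp-bundle-admin
eval "$(<env.sh)"
docker version --format '{{.Server.Version}}'
```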
View File

@ -66,7 +66,7 @@ signed by the old root CA anymore.
Run `docker swarm ca --rotate` to generate a new CA certificate and key. If you Run `docker swarm ca --rotate` to generate a new CA certificate and key. If you
prefer, you can pass the `--ca-cert` and `--external-ca` flags to specify the prefer, you can pass the `--ca-cert` and `--external-ca` flags to specify the
root certificate and and to use a root CA external to the swarm. Alternately, root certificate and to use a root CA external to the swarm. Alternately,
you can pass the `--ca-cert` and `--ca-key` flags to specify the exact you can pass the `--ca-cert` and `--ca-key` flags to specify the exact
certificate and key you would like the swarm to use. certificate and key you would like the swarm to use.
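In practice the two variants look roughly like this; the certificate and key file names are placeholders.

```bash
# Sketch: rotate the swarm root CA, letting the swarm generate new material,
# or supply your own root certificate and key.
docker swarm ca --rotate
docker swarm ca --rotate --ca-cert new-root-ca.crt --ca-key new-root-ca.key
```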

View File

@ -696,7 +696,7 @@ $ docker service create \
> proportion to any of the other groups identified by a specific label > proportion to any of the other groups identified by a specific label
> value. In a sense, a missing label is the same as having the label with > value. In a sense, a missing label is the same as having the label with
> a null value attached to it. If the service should **only** run on > a null value attached to it. If the service should **only** run on
> nodes with the label being used for the the spread preference, the > nodes with the label being used for the spread preference, the
> preference should be combined with a constraint. > preference should be combined with a constraint.
You can specify multiple placement preferences, and they are processed in the You can specify multiple placement preferences, and they are processed in the

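A minimal sketch of such a spread preference, using an illustrative `datacenter` node label, is:

```bash
# Sketch: spread the service's tasks evenly across the values of a node label.
docker service create \
  --name app \
  --replicas 9 \
  --placement-pref 'spread=node.labels.datacenter' \
  nginx:alpine
```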
View File

@ -243,7 +243,7 @@ from the repository.
b. Install a specific version by its fully qualified package name, which is b. Install a specific version by its fully qualified package name, which is
the package name (`docker-ce`) plus the version string (2nd column) up to the package name (`docker-ce`) plus the version string (2nd column) up to
the first hyphen, separated by a an equals sign (`=`), for example, the first hyphen, separated by an equals sign (`=`), for example,
`docker-ce=18.03.0.ce`. `docker-ce=18.03.0.ce`.
```bash ```bash

View File

@ -92,7 +92,7 @@ $(document).on('click', 'a[href*="#"]:not(.noanchor , .find_a_partner_section .c
// find the target of the clicked anchor tag // find the target of the clicked anchor tag
var targetBSR = $(this).find('a')[0].hash; var targetBSR = $(this).find('a')[0].hash;
var parentBSR = $(this); var parentBSR = $(this);
// hide detail containers, not the the current target // hide detail containers, not the current target
$('.bsr-item-detail').not(targetBSR).hide(); $('.bsr-item-detail').not(targetBSR).hide();
// toggle current target detail container // toggle current target detail container
$(targetBSR).slideToggle(); $(targetBSR).slideToggle();

View File

@ -35,7 +35,7 @@ The size of the VM's disk can be configured this way:
- `--virtualbox-hostonly-no-dhcp`: Disable the Host Only DHCP Server - `--virtualbox-hostonly-no-dhcp`: Disable the Host Only DHCP Server
- `--virtualbox-import-boot2docker-vm`: The name of a Boot2Docker VM to import. - `--virtualbox-import-boot2docker-vm`: The name of a Boot2Docker VM to import.
- `--virtualbox-memory`: Size of memory for the host in MB. - `--virtualbox-memory`: Size of memory for the host in MB.
- `--virtualbox-nat-nictype`: Specify the NAT Network Adapter Type. Possible values are are '82540EM' (Intel PRO/1000), 'Am79C973' (PCnet-FAST III) and 'virtio' Paravirtualized network adapter. - `--virtualbox-nat-nictype`: Specify the NAT Network Adapter Type. Possible values are '82540EM' (Intel PRO/1000), 'Am79C973' (PCnet-FAST III) and 'virtio' Paravirtualized network adapter.
- `--virtualbox-no-dns-proxy`: Disable proxying all DNS requests to the host (Boolean value, default to false) - `--virtualbox-no-dns-proxy`: Disable proxying all DNS requests to the host (Boolean value, default to false)
- `--virtualbox-no-share`: Disable the mount of your home directory - `--virtualbox-no-share`: Disable the mount of your home directory
- `--virtualbox-no-vtx-check`: Disable checking for the availability of hardware virtualization before the vm is started - `--virtualbox-no-vtx-check`: Disable checking for the availability of hardware virtualization before the vm is started

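For example, a few of these flags might be combined as follows when creating a machine; the machine name and sizes are placeholders.

```bash
# Sketch: create a VirtualBox-backed machine with custom memory, disk, and NIC type.
docker-machine create -d virtualbox \
  --virtualbox-memory 2048 \
  --virtualbox-disk-size 20000 \
  --virtualbox-nat-nictype virtio \
  dev
```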
View File

@ -18,7 +18,7 @@ from those Docker desktop applications. See Docker Cloud (Edge feature) on
[Windows](/docker-for-windows/index.md#docker-cloud-edge-feature). [Windows](/docker-for-windows/index.md#docker-cloud-edge-feature).
> >
> Docker Machine still works as described here, but Docker Cloud > Docker Machine still works as described here, but Docker Cloud
supercedes Machine for this purpose. supersedes Machine for this purpose.
{: .important} {: .important}
Follow along with this example to create a Dockerized [Amazon Web Services (AWS)](https://aws.amazon.com/) EC2 instance. Follow along with this example to create a Dockerized [Amazon Web Services (AWS)](https://aws.amazon.com/) EC2 instance.

View File

@ -17,7 +17,7 @@ Docker desktop applications. See Docker Cloud (Edge feature) on
[Windows](/docker-for-windows/index.md#docker-cloud-edge-feature). [Windows](/docker-for-windows/index.md#docker-cloud-edge-feature).
> >
> Docker Machine still works as described here, but Docker Cloud > Docker Machine still works as described here, but Docker Cloud
supercedes Machine for this purpose. supersedes Machine for this purpose.
{: .important} {: .important}
- [Digital Ocean Example](ocean.md) - [Digital Ocean Example](ocean.md)

View File

@ -17,7 +17,7 @@ those Docker desktop applications. See Docker Cloud (Edge feature) on
[Windows](/docker-for-windows/index.md#docker-cloud-edge-feature). [Windows](/docker-for-windows/index.md#docker-cloud-edge-feature).
> >
> Docker Machine still works as described below, but Docker Cloud > Docker Machine still works as described below, but Docker Cloud
supercedes Machine for this purpose. supersedes Machine for this purpose.
{: .important} {: .important}
Follow along with this example to create a Dockerized [Digital Ocean](https://digitalocean.com) Droplet (cloud host). Follow along with this example to create a Dockerized [Digital Ocean](https://digitalocean.com) Droplet (cloud host).

View File

@ -16,7 +16,7 @@ Docker desktop applications. See Docker Cloud (Edge feature) on
[Mac](/docker-for-mac/index.md#docker-cloud-edge-feature) or [Mac](/docker-for-mac/index.md#docker-cloud-edge-feature) or
[Windows](/docker-for-windows/index.md#docker-cloud-edge-feature). [Windows](/docker-for-windows/index.md#docker-cloud-edge-feature).
> >
> Docker Machine still works as described here, but Docker Cloud supercedes Machine for this purpose. > Docker Machine still works as described here, but Docker Cloud supersedes Machine for this purpose.
{: .important} {: .important}
Docker Machine driver plugins are available for many cloud platforms, so you can Docker Machine driver plugins are available for many cloud platforms, so you can
@ -115,11 +115,11 @@ You can register an already existing docker host by passing the daemon url. With
## Use Machine to provision Docker Swarm clusters ## Use Machine to provision Docker Swarm clusters
> Swarm mode supercedes Docker Machine provisioning of swarm clusters > Swarm mode supersedes Docker Machine provisioning of swarm clusters
> >
> In previous releases, Docker Machine was used to provision swarm > In previous releases, Docker Machine was used to provision swarm
clusters, but this is legacy. [Swarm mode](/engine/swarm/index.md), built clusters, but this is legacy. [Swarm mode](/engine/swarm/index.md), built
into Docker Engine, supercedes Machine provisioning of swarm clusters. The into Docker Engine, supersedes Machine provisioning of swarm clusters. The
topics below show you how to get started with the new swarm mode. topics below show you how to get started with the new swarm mode.
{: .important} {: .important}

View File

@ -157,7 +157,7 @@ Example:
<td valign="top">yes if not <code>memory</code></td> <td valign="top">yes if not <code>memory</code></td>
<td valign="top">The <a href="https://github.com/go-sql-driver/mysql"> <td valign="top">The <a href="https://github.com/go-sql-driver/mysql">
the Data Source Name used to access the DB.</a> the Data Source Name used to access the DB.</a>
(include <code>parseTime=true</code> as part of the the DSN)</td> (include <code>parseTime=true</code> as part of the DSN)</td>
</tr> </tr>
<tr> <tr>
<td valign="top"><code>default_alias</code></td> <td valign="top"><code>default_alias</code></td>

View File

@ -205,7 +205,7 @@ and using them in a production deployment is highly insecure.
Notary is a user/client-based system, and it searches for certificates in the Notary is a user/client-based system, and it searches for certificates in the
user's home directory, at `~/.docker/trust`. To streamline using Notary from user's home directory, at `~/.docker/trust`. To streamline using Notary from
the command line, create an alias that maps the user's `trust` directory to the command line, create an alias that maps the user's `trust` directory to
the the system's `ca-certificates` directory. the system's `ca-certificates` directory.
```bash ```bash
$ alias notary="notary -s https://<dtr-url> -d ~/.docker/trust --tlscacert /usr/local/share/ca-certificates/<dtr-url>.crt" $ alias notary="notary -s https://<dtr-url> -d ~/.docker/trust --tlscacert /usr/local/share/ca-certificates/<dtr-url>.crt"

View File

@ -48,7 +48,7 @@ Note: Use --ucp-ca "$(cat ca.pem)" instead of --ucp-insecure-tls for a productio
| `--log-level` | $LOG_LEVEL | Log level for all container logs when logging to syslog. Default: INFO.The supported log levels are debug, info, warn, error, or fatal.. | | `--log-level` | $LOG_LEVEL | Log level for all container logs when logging to syslog. Default: INFO.The supported log levels are debug, info, warn, error, or fatal.. |
| `--log-protocol` | $LOG_PROTOCOL | The protocol for sending logs. Default is internal.By default, DTR internal components log information using the logger specified in the Docker daemon in the node where the DTR replica is deployed. Use this option to send DTR logs to an external syslog system. The supported values are tcp, udp, and internal. Internal is the default option, stopping DTR from sending logs to an external system. Use this flag with --log-host. | | `--log-protocol` | $LOG_PROTOCOL | The protocol for sending logs. Default is internal.By default, DTR internal components log information using the logger specified in the Docker daemon in the node where the DTR replica is deployed. Use this option to send DTR logs to an external syslog system. The supported values are tcp, udp, and internal. Internal is the default option, stopping DTR from sending logs to an external system. Use this flag with --log-host. |
| `--nfs-storage-url` | $NFS_STORAGE_URL | NFS to store Docker images. Format nfs://<ip&#124;hostname>/<mountpoint>.By default DTR creates a volume to store the Docker images in the local filesystem of the node where DTR is running, without high-availability. Use this flag to specify an NFS mount for DTR to store images, using the format nfs://<ip&#124;hostname>/<mountpoint>. To use this flag, you need to install an NFS client library like nfs-common in the node where you're deploying DTR. You can test this by running showmount -e <nfs-server>. When you join new replicas, they will start using NFS so you don't need to use this flag. To reconfigure DTR to stop using NFS, leave this option empty. | | `--nfs-storage-url` | $NFS_STORAGE_URL | NFS to store Docker images. Format nfs://<ip&#124;hostname>/<mountpoint>.By default DTR creates a volume to store the Docker images in the local filesystem of the node where DTR is running, without high-availability. Use this flag to specify an NFS mount for DTR to store images, using the format nfs://<ip&#124;hostname>/<mountpoint>. To use this flag, you need to install an NFS client library like nfs-common in the node where you're deploying DTR. You can test this by running showmount -e <nfs-server>. When you join new replicas, they will start using NFS so you don't need to use this flag. To reconfigure DTR to stop using NFS, leave this option empty. |
| `--no-proxy` | $DTR_NO_PROXY | List of domains the proxy should not be used for.When using --http-proxy you can use this flag to specify a list of domains that you don't want to route throught the proxy. Format acme.com[, acme.org]. | | `--no-proxy` | $DTR_NO_PROXY | List of domains the proxy should not be used for.When using --http-proxy you can use this flag to specify a list of domains that you don't want to route through the proxy. Format acme.com[, acme.org]. |
| `--overlay-subnet` | $DTR_OVERLAY_SUBNET | The subnet used by the dtr-ol overlay network. Example: 10.0.0.0/24.For high-availability, DTR creates an overlay network between UCP nodes. This flag allows you to choose the subnet for that network. Make sure the subnet you choose is not used on any machine where DTR replicas are deployed. | | `--overlay-subnet` | $DTR_OVERLAY_SUBNET | The subnet used by the dtr-ol overlay network. Example: 10.0.0.0/24.For high-availability, DTR creates an overlay network between UCP nodes. This flag allows you to choose the subnet for that network. Make sure the subnet you choose is not used on any machine where DTR replicas are deployed. |
| `--replica-http-port` | $REPLICA_HTTP_PORT | The public HTTP port for the DTR replica. Default is 80.This allows you to customize the HTTP port where users can reach DTR. Once users access the HTTP port, they are redirected to use an HTTPS connection, using the port specified with --replica-https-port. This port can also be used for unencrypted health checks. | | `--replica-http-port` | $REPLICA_HTTP_PORT | The public HTTP port for the DTR replica. Default is 80.This allows you to customize the HTTP port where users can reach DTR. Once users access the HTTP port, they are redirected to use an HTTPS connection, using the port specified with --replica-https-port. This port can also be used for unencrypted health checks. |
| `--replica-https-port` | $REPLICA_HTTPS_PORT | The public HTTPS port for the DTR replica. Default is 443.This allows you to customize the HTTPS port where users can reach DTR. Each replica can use a different port. | | `--replica-https-port` | $REPLICA_HTTPS_PORT | The public HTTPS port for the DTR replica. Default is 443.This allows you to customize the HTTPS port where users can reach DTR. Each replica can use a different port. |

View File

@ -41,7 +41,7 @@ time, configure your DTR for high-availability.
| `--log-level` | $LOG_LEVEL | Log level for all container logs when logging to syslog. Default: INFO.The supported log levels are debug, info, warn, error, or fatal.. | | `--log-level` | $LOG_LEVEL | Log level for all container logs when logging to syslog. Default: INFO.The supported log levels are debug, info, warn, error, or fatal.. |
| `--log-protocol` | $LOG_PROTOCOL | The protocol for sending logs. Default is internal.By default, DTR internal components log information using the logger specified in the Docker daemon in the node where the DTR replica is deployed. Use this option to send DTR logs to an external syslog system. The supported values are tcp, udp, and internal. Internal is the default option, stopping DTR from sending logs to an external system. Use this flag with --log-host. | | `--log-protocol` | $LOG_PROTOCOL | The protocol for sending logs. Default is internal.By default, DTR internal components log information using the logger specified in the Docker daemon in the node where the DTR replica is deployed. Use this option to send DTR logs to an external syslog system. The supported values are tcp, udp, and internal. Internal is the default option, stopping DTR from sending logs to an external system. Use this flag with --log-host. |
| `--nfs-storage-url` | $NFS_STORAGE_URL | NFS to store Docker images. Format nfs://<ip&#124;hostname>/<mountpoint>.By default DTR creates a volume to store the Docker images in the local filesystem of the node where DTR is running, without high-availability. Use this flag to specify an NFS mount for DTR to store images, using the format nfs://<ip&#124;hostname>/<mountpoint>. To use this flag, you need to install an NFS client library like nfs-common in the node where you're deploying DTR. You can test this by running showmount -e <nfs-server>. When you join new replicas, they will start using NFS so you don't need to use this flag. To reconfigure DTR to stop using NFS, leave this option empty. | | `--nfs-storage-url` | $NFS_STORAGE_URL | NFS to store Docker images. Format nfs://<ip&#124;hostname>/<mountpoint>.By default DTR creates a volume to store the Docker images in the local filesystem of the node where DTR is running, without high-availability. Use this flag to specify an NFS mount for DTR to store images, using the format nfs://<ip&#124;hostname>/<mountpoint>. To use this flag, you need to install an NFS client library like nfs-common in the node where you're deploying DTR. You can test this by running showmount -e <nfs-server>. When you join new replicas, they will start using NFS so you don't need to use this flag. To reconfigure DTR to stop using NFS, leave this option empty. |
| `--no-proxy` | $DTR_NO_PROXY | List of domains the proxy should not be used for.When using --http-proxy you can use this flag to specify a list of domains that you don't want to route throught the proxy. Format acme.com[, acme.org]. | | `--no-proxy` | $DTR_NO_PROXY | List of domains the proxy should not be used for.When using --http-proxy you can use this flag to specify a list of domains that you don't want to route through the proxy. Format acme.com[, acme.org]. |
| `--replica-http-port` | $REPLICA_HTTP_PORT | The public HTTP port for the DTR replica. Default is 80.This allows you to customize the HTTP port where users can reach DTR. Once users access the HTTP port, they are redirected to use an HTTPS connection, using the port specified with --replica-https-port. This port can also be used for unencrypted health checks. | | `--replica-http-port` | $REPLICA_HTTP_PORT | The public HTTP port for the DTR replica. Default is 80.This allows you to customize the HTTP port where users can reach DTR. Once users access the HTTP port, they are redirected to use an HTTPS connection, using the port specified with --replica-https-port. This port can also be used for unencrypted health checks. |
| `--replica-https-port` | $REPLICA_HTTPS_PORT | The public HTTPS port for the DTR replica. Default is 443.This allows you to customize the HTTPS port where users can reach DTR. Each replica can use a different port. | | `--replica-https-port` | $REPLICA_HTTPS_PORT | The public HTTPS port for the DTR replica. Default is 443.This allows you to customize the HTTPS port where users can reach DTR. Each replica can use a different port. |
| `--replica-rethinkdb-cache-mb` | $RETHINKDB_CACHE_MB | The maximum amount of space for rethinkdb in-memory cache use for the given replica in MB. | `--replica-rethinkdb-cache-mb` | $RETHINKDB_CACHE_MB | The maximum amount of space for rethinkdb in-memory cache use for the given replica in MB.
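For orientation, here is a minimal sketch of how the flags documented above might be combined in a single DTR bootstrapper invocation. The `reconfigure` subcommand, image tag, NFS server, and proxy-exempt domains are assumptions for illustration, not values taken from this page.

```bash
# Hypothetical DTR bootstrapper run using flags from the table above.
# docker/dtr:2.5.0, nfs.example.com, and the domain list are placeholders.
docker run -it --rm docker/dtr:2.5.0 reconfigure \
  --nfs-storage-url nfs://nfs.example.com/dtr \
  --no-proxy "acme.com,acme.org" \
  --replica-http-port 80 \
  --replica-https-port 443
```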
View File
@ -55,7 +55,7 @@ DTR replicas for high availability.
| `--log-level` | $LOG_LEVEL | Log level for all container logs when logging to syslog. Default: INFO.The supported log levels are debug, info, warn, error, or fatal.. | | `--log-level` | $LOG_LEVEL | Log level for all container logs when logging to syslog. Default: INFO.The supported log levels are debug, info, warn, error, or fatal.. |
| `--log-protocol` | $LOG_PROTOCOL | The protocol for sending logs. Default is internal.By default, DTR internal components log information using the logger specified in the Docker daemon in the node where the DTR replica is deployed. Use this option to send DTR logs to an external syslog system. The supported values are tcp, udp, and internal. Internal is the default option, stopping DTR from sending logs to an external system. Use this flag with --log-host. | | `--log-protocol` | $LOG_PROTOCOL | The protocol for sending logs. Default is internal.By default, DTR internal components log information using the logger specified in the Docker daemon in the node where the DTR replica is deployed. Use this option to send DTR logs to an external syslog system. The supported values are tcp, udp, and internal. Internal is the default option, stopping DTR from sending logs to an external system. Use this flag with --log-host. |
| `--nfs-storage-url` | $NFS_STORAGE_URL | NFS to store Docker images. Format nfs://<ip&#124;hostname>/<mountpoint>.By default DTR creates a volume to store the Docker images in the local filesystem of the node where DTR is running, without high-availability. Use this flag to specify an NFS mount for DTR to store images, using the format nfs://<ip&#124;hostname>/<mountpoint>. To use this flag, you need to install an NFS client library like nfs-common in the node where you're deploying DTR. You can test this by running showmount -e <nfs-server>. When you join new replicas, they will start using NFS so you don't need to use this flag. To reconfigure DTR to stop using NFS, leave this option empty. | | `--nfs-storage-url` | $NFS_STORAGE_URL | NFS to store Docker images. Format nfs://<ip&#124;hostname>/<mountpoint>.By default DTR creates a volume to store the Docker images in the local filesystem of the node where DTR is running, without high-availability. Use this flag to specify an NFS mount for DTR to store images, using the format nfs://<ip&#124;hostname>/<mountpoint>. To use this flag, you need to install an NFS client library like nfs-common in the node where you're deploying DTR. You can test this by running showmount -e <nfs-server>. When you join new replicas, they will start using NFS so you don't need to use this flag. To reconfigure DTR to stop using NFS, leave this option empty. |
| `--no-proxy` | $DTR_NO_PROXY | List of domains the proxy should not be used for.When using --http-proxy you can use this flag to specify a list of domains that you don't want to route throught the proxy. Format acme.com[, acme.org]. | | `--no-proxy` | $DTR_NO_PROXY | List of domains the proxy should not be used for.When using --http-proxy you can use this flag to specify a list of domains that you don't want to route through the proxy. Format acme.com[, acme.org]. |
| `--replica-http-port` | $REPLICA_HTTP_PORT | The public HTTP port for the DTR replica. Default is 80.This allows you to customize the HTTP port where users can reach DTR. Once users access the HTTP port, they are redirected to use an HTTPS connection, using the port specified with --replica-https-port. This port can also be used for unencrypted health checks. | | `--replica-http-port` | $REPLICA_HTTP_PORT | The public HTTP port for the DTR replica. Default is 80.This allows you to customize the HTTP port where users can reach DTR. Once users access the HTTP port, they are redirected to use an HTTPS connection, using the port specified with --replica-https-port. This port can also be used for unencrypted health checks. |
| `--replica-https-port` | $REPLICA_HTTPS_PORT | The public HTTPS port for the DTR replica. Default is 443.This allows you to customize the HTTPS port where users can reach DTR. Each replica can use a different port. | | `--replica-https-port` | $REPLICA_HTTPS_PORT | The public HTTPS port for the DTR replica. Default is 443.This allows you to customize the HTTPS port where users can reach DTR. Each replica can use a different port. |
| `--replica-id` | $DTR_INSTALL_REPLICA_ID | Assign a 12-character hexadecimal ID to the DTR replica. Random by default. | | `--replica-id` | $DTR_INSTALL_REPLICA_ID | Assign a 12-character hexadecimal ID to the DTR replica. Random by default. |
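The `--nfs-storage-url` description above mentions installing an NFS client library such as `nfs-common` and testing the export with `showmount -e <nfs-server>`. A minimal pre-flight sketch, assuming a Debian/Ubuntu node and a placeholder NFS server name:

```bash
# Assumes a Debian/Ubuntu node; nfs.example.com is a placeholder server name.
sudo apt-get install -y nfs-common   # NFS client library named in the flag description
showmount -e nfs.example.com         # the export used in --nfs-storage-url should be listed
```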
View File
@ -2121,7 +2121,7 @@ SuperagentHttpClient.prototype.execute = function (obj) {
} else if (res && obj.on && obj.on.response) { } else if (res && obj.on && obj.on.response) {
var possibleObj; var possibleObj;
// Already parsed by by superagent? // Already parsed by superagent?
if(res.body && Object.keys(res.body).length > 0) { if(res.body && Object.keys(res.body).length > 0) {
possibleObj = res.body; possibleObj = res.body;
} else { } else {
@ -12442,7 +12442,7 @@ var iframe,
elemdisplay = {}; elemdisplay = {};
/** /**
* Retrieve the actual display of a element * Retrieve the actual display of an element
* @param {String} name nodeName of the element * @param {String} name nodeName of the element
* @param {Object} doc Document object * @param {Object} doc Document object
*/ */
@ -13862,7 +13862,7 @@ jQuery.fx.speeds = {
}; };
// Based off of the plugin by Clint Helfers, with permission. // Based on the plugin by Clint Helfers, with permission.
// http://blindsignals.com/index.php/2009/07/jquery-delay/ // http://blindsignals.com/index.php/2009/07/jquery-delay/
jQuery.fn.delay = function( time, type ) { jQuery.fn.delay = function( time, type ) {
time = jQuery.fx ? jQuery.fx.speeds[ time ] || time : time; time = jQuery.fx ? jQuery.fx.speeds[ time ] || time : time;
@ -26068,7 +26068,7 @@ var baseCreate = require('./baseCreate'),
* @private * @private
* @param {*} value The value to wrap. * @param {*} value The value to wrap.
* @param {boolean} [chainAll] Enable chaining for all wrapper methods. * @param {boolean} [chainAll] Enable chaining for all wrapper methods.
* @param {Array} [actions=[]] Actions to peform to resolve the unwrapped value. * @param {Array} [actions=[]] Actions to perform to resolve the unwrapped value.
*/ */
function LodashWrapper(value, chainAll, actions) { function LodashWrapper(value, chainAll, actions) {
this.__wrapped__ = value; this.__wrapped__ = value;
View File
@ -2121,7 +2121,7 @@ SuperagentHttpClient.prototype.execute = function (obj) {
} else if (res && obj.on && obj.on.response) { } else if (res && obj.on && obj.on.response) {
var possibleObj; var possibleObj;
// Already parsed by by superagent? // Already parsed by superagent?
if(res.body && Object.keys(res.body).length > 0) { if(res.body && Object.keys(res.body).length > 0) {
possibleObj = res.body; possibleObj = res.body;
} else { } else {
@ -12442,7 +12442,7 @@ var iframe,
elemdisplay = {}; elemdisplay = {};
/** /**
* Retrieve the actual display of a element * Retrieve the actual display of an element
* @param {String} name nodeName of the element * @param {String} name nodeName of the element
* @param {Object} doc Document object * @param {Object} doc Document object
*/ */
@ -13862,7 +13862,7 @@ jQuery.fx.speeds = {
}; };
// Based off of the plugin by Clint Helfers, with permission. // Based on the plugin by Clint Helfers, with permission.
// http://blindsignals.com/index.php/2009/07/jquery-delay/ // http://blindsignals.com/index.php/2009/07/jquery-delay/
jQuery.fn.delay = function( time, type ) { jQuery.fn.delay = function( time, type ) {
time = jQuery.fx ? jQuery.fx.speeds[ time ] || time : time; time = jQuery.fx ? jQuery.fx.speeds[ time ] || time : time;
@ -26068,7 +26068,7 @@ var baseCreate = require('./baseCreate'),
* @private * @private
* @param {*} value The value to wrap. * @param {*} value The value to wrap.
* @param {boolean} [chainAll] Enable chaining for all wrapper methods. * @param {boolean} [chainAll] Enable chaining for all wrapper methods.
* @param {Array} [actions=[]] Actions to peform to resolve the unwrapped value. * @param {Array} [actions=[]] Actions to perform to resolve the unwrapped value.
*/ */
function LodashWrapper(value, chainAll, actions) { function LodashWrapper(value, chainAll, actions) {
this.__wrapped__ = value; this.__wrapped__ = value;
View File
@ -31,7 +31,7 @@ validation of the `storagedriver.StorageDriver` interface.
## Driver selection and configuration ## Driver selection and configuration
The preferred method of selecting a storage driver is using the `StorageDriverFactory` interface in the `storagedriver/factory` package. These factories provide a common interface for constructing storage drivers with a parameters map. The factory model is based off of the [Register](http://golang.org/pkg/database/sql/#Register) and [Open](http://golang.org/pkg/database/sql/#Open) methods in the builtin [database/sql](http://golang.org/pkg/database/sql) package. The preferred method of selecting a storage driver is using the `StorageDriverFactory` interface in the `storagedriver/factory` package. These factories provide a common interface for constructing storage drivers with a parameters map. The factory model is based on the [Register](http://golang.org/pkg/database/sql/#Register) and [Open](http://golang.org/pkg/database/sql/#Open) methods in the builtin [database/sql](http://golang.org/pkg/database/sql) package.
Storage driver factories may be registered by name using the Storage driver factories may be registered by name using the
`factory.Register` method, and then later invoked by calling `factory.Create` `factory.Register` method, and then later invoked by calling `factory.Create`
View File
@ -404,7 +404,7 @@ Release notes for stable versions are listed first. You can
- Update runc to fix hang during start and exec [moby/moby#36097](https://github.com/moby/moby/pull/36097) - Update runc to fix hang during start and exec [moby/moby#36097](https://github.com/moby/moby/pull/36097)
- Windows: Vendor of Microsoft/hcsshim @v.0.6.8 partial fix for import layer failing [moby/moby#35924](https://github.com/moby/moby/pull/35924) - Windows: Vendor of Microsoft/hcsshim @v.0.6.8 partial fix for import layer failing [moby/moby#35924](https://github.com/moby/moby/pull/35924)
* Do not make graphdriver homes private mounts [moby/moby#36047](https://github.com/moby/moby/pull/36047) * Do not make graphdriver homes private mounts [moby/moby#36047](https://github.com/moby/moby/pull/36047)
* Use rslave propogation for mounts from daemon root [moby/moby#36055](https://github.com/moby/moby/pull/36055) * Use rslave propagation for mounts from daemon root [moby/moby#36055](https://github.com/moby/moby/pull/36055)
* Set daemon root to use shared mount propagation [moby/moby#36096](https://github.com/moby/moby/pull/36096) * Set daemon root to use shared mount propagation [moby/moby#36096](https://github.com/moby/moby/pull/36096)
* Validate that mounted paths exist when container is started, not just during creation [moby/moby#35833](https://github.com/moby/moby/pull/35833) * Validate that mounted paths exist when container is started, not just during creation [moby/moby#35833](https://github.com/moby/moby/pull/35833)
* Add `REMOVE` and `ORPHANED` to TaskState [moby/moby#36146](https://github.com/moby/moby/pull/36146) * Add `REMOVE` and `ORPHANED` to TaskState [moby/moby#36146](https://github.com/moby/moby/pull/36146)
@ -477,7 +477,7 @@ Release notes for stable versions are listed first. You can
* `/dev` should not be readonly with `--readonly` flag [moby/moby#35344](https://github.com/moby/moby/pull/35344) * `/dev` should not be readonly with `--readonly` flag [moby/moby#35344](https://github.com/moby/moby/pull/35344)
+ Add custom build-time Graphdrivers priority list [moby/moby#35522](https://github.com/moby/moby/pull/35522) + Add custom build-time Graphdrivers priority list [moby/moby#35522](https://github.com/moby/moby/pull/35522)
* LCOW: CLI changes to add platform flag - pull, run, create and build [docker/cli#474](https://github.com/docker/cli/pull/474) * LCOW: CLI changes to add platform flag - pull, run, create and build [docker/cli#474](https://github.com/docker/cli/pull/474)
* Fix width/height on Windoes for `docker exec` [moby/moby#35631](https://github.com/moby/moby/pull/35631) * Fix width/height on Windows for `docker exec` [moby/moby#35631](https://github.com/moby/moby/pull/35631)
* Detect overlay2 support on pre-4.0 kernels [moby/moby#35527](https://github.com/moby/moby/pull/35527) * Detect overlay2 support on pre-4.0 kernels [moby/moby#35527](https://github.com/moby/moby/pull/35527)
* Devicemapper: remove container rootfs mountPath after umount [moby/moby#34573](https://github.com/moby/moby/pull/34573) * Devicemapper: remove container rootfs mountPath after umount [moby/moby#34573](https://github.com/moby/moby/pull/34573)
* Disallow overlay/overlay2 on top of NFS [moby/moby#35483](https://github.com/moby/moby/pull/35483) * Disallow overlay/overlay2 on top of NFS [moby/moby#35483](https://github.com/moby/moby/pull/35483)
@ -745,7 +745,7 @@ Release notes for stable versions are listed first. You can
+ Add Support swarm-mode services with node-local networks such as macvlan, ipvlan, bridge, host [#32981](https://github.com/moby/moby/pull/32981) + Add Support swarm-mode services with node-local networks such as macvlan, ipvlan, bridge, host [#32981](https://github.com/moby/moby/pull/32981)
+ Pass driver-options to network drivers on service creation [#32981](https://github.com/moby/moby/pull/33130) + Pass driver-options to network drivers on service creation [#32981](https://github.com/moby/moby/pull/33130)
+ Isolate Swarm Control-plane traffic from Application data traffic using --data-path-addr [#32717](https://github.com/moby/moby/pull/32717) + Isolate Swarm Control-plane traffic from Application data traffic using --data-path-addr [#32717](https://github.com/moby/moby/pull/32717)
* Several improvments to Service Discovery [#docker/libnetwork/1796](https://github.com/docker/libnetwork/pull/1796) * Several improvements to Service Discovery [#docker/libnetwork/1796](https://github.com/docker/libnetwork/pull/1796)
### Packaging ### Packaging
View File
@ -62,7 +62,7 @@ toc_max: 2
### Bugfixes ### Bugfixes
- Fixed a bug where the ip_range attirbute in IPAM configs was prevented - Fixed a bug where the ip_range attribute in IPAM configs was prevented
from passing validation from passing validation
## 1.21.1 (2018-04-27) ## 1.21.1 (2018-04-27)
@ -275,7 +275,7 @@ toc_max: 2
preventing Compose from recovering volume data from previous containers for preventing Compose from recovering volume data from previous containers for
anonymous volumes anonymous volumes
- Added limit for number of simulatenous parallel operations, which should - Added limit for number of simultaneous parallel operations, which should
prevent accidental resource exhaustion of the server. Default is 64 and prevent accidental resource exhaustion of the server. Default is 64 and
can be configured using the `COMPOSE_PARALLEL_LIMIT` environment variable can be configured using the `COMPOSE_PARALLEL_LIMIT` environment variable
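As a quick illustration of the limit described above, the variable can be set per invocation; the value chosen here is hypothetical.

```bash
# Hypothetical: lower the parallel-operation cap from the default of 64 for one run.
COMPOSE_PARALLEL_LIMIT=16 docker-compose up -d
```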
@ -554,7 +554,7 @@ toc_max: 2
### Bugfixes ### Bugfixes
- Volumes specified through the `--volume` flag of `docker-compose run` now - Volumes specified through the `--volume` flag of `docker-compose run` now
complement volumes declared in the service's defintion instead of replacing complement volumes declared in the service's definition instead of replacing
them them
- Fixed a bug where using multiple Compose files would unset the scale value - Fixed a bug where using multiple Compose files would unset the scale value
View File
@ -842,7 +842,7 @@ installing docker, make sure to update them accordingly.
+ Add security options to `docker info` output [#21172](https://github.com/docker/docker/pull/21172) [#23520](https://github.com/docker/docker/pull/23520) + Add security options to `docker info` output [#21172](https://github.com/docker/docker/pull/21172) [#23520](https://github.com/docker/docker/pull/23520)
+ Add insecure registries to `docker info` output [#20410](https://github.com/docker/docker/pull/20410) + Add insecure registries to `docker info` output [#20410](https://github.com/docker/docker/pull/20410)
+ Extend Docker authorization with TLS user information [#21556](https://github.com/docker/docker/pull/21556) + Extend Docker authorization with TLS user information [#21556](https://github.com/docker/docker/pull/21556)
+ devicemapper: expose Mininum Thin Pool Free Space through `docker info` [#21945](https://github.com/docker/docker/pull/21945) + devicemapper: expose Minimum Thin Pool Free Space through `docker info` [#21945](https://github.com/docker/docker/pull/21945)
* API now returns a JSON object when an error occurs making it more consistent [#22880](https://github.com/docker/docker/pull/22880) * API now returns a JSON object when an error occurs making it more consistent [#22880](https://github.com/docker/docker/pull/22880)
- Prevent `docker run -i --restart` from hanging on exit [#22777](https://github.com/docker/docker/pull/22777) - Prevent `docker run -i --restart` from hanging on exit [#22777](https://github.com/docker/docker/pull/22777)
- Fix API/CLI discrepancy on hostname validation [#21641](https://github.com/docker/docker/pull/21641) - Fix API/CLI discrepancy on hostname validation [#21641](https://github.com/docker/docker/pull/21641)
View File
@ -70,6 +70,7 @@ storage driver is configured, Docker uses it by default.
AUFS is a *union filesystem*, which means that it layers multiple directories on AUFS is a *union filesystem*, which means that it layers multiple directories on
a single Linux host and presents them as a single directory. These directories a single Linux host and presents them as a single directory. These directories
are called _branches_ in AUFS terminology, and _layers_ in Docker terminology. are called _branches_ in AUFS terminology, and _layers_ in Docker terminology.
The unification process is referred to as a _union mount_. The unification process is referred to as a _union mount_.
The diagram below shows a Docker container based on the `ubuntu:latest` image. The diagram below shows a Docker container based on the `ubuntu:latest` image.
View File
@ -27,7 +27,7 @@ use unless you have substantial experience with ZFS on Linux.
## Prerequisites ## Prerequisites
- ZFS requires one or more dedicated block devices, preferrably solid-state - ZFS requires one or more dedicated block devices, preferably solid-state
drives (SSDs). drives (SSDs).
- ZFS is only supported on Docker CE with Ubuntu 14.04 or higher, with the `zfs` - ZFS is only supported on Docker CE with Ubuntu 14.04 or higher, with the `zfs`
package (16.04 and higher) or `zfs-native` and `ubuntu-zfs` packages (14.04) package (16.04 and higher) or `zfs-native` and `ubuntu-zfs` packages (14.04)
View File
@ -262,8 +262,8 @@ In this step, you install the keys on the relevant servers in the
infrastructure. Each server needs three files: infrastructure. Each server needs three files:
- A copy of the Certificate Authority's public key (`ca.pem`) - A copy of the Certificate Authority's public key (`ca.pem`)
- It's own private key - Its own private key
- It's own public key (cert) - Its own public key (cert)
The procedure below shows you how to copy these files from the CA server to each The procedure below shows you how to copy these files from the CA server to each
server using `scp`. As part of the copy procedure, rename each file as server using `scp`. As part of the copy procedure, rename each file as
View File
@ -242,7 +242,7 @@ in Step 4.
-D run -c /etc/config.toml -D run -c /etc/config.toml
``` ```
This command relies on the `config.toml` file being in the current directory. After running the command, confirm the image is runing: This command relies on the `config.toml` file being in the current directory. After running the command, confirm the image is running:
```bash ```bash
$ docker ps $ docker ps
@ -250,7 +250,7 @@ in Step 4.
d846b801a978 ehazlett/interlock:1.0.1 "/bin/interlock -D ru" 2 minutes ago Up 2 minutes 0.0.0.0:32770->8080/tcp interlock d846b801a978 ehazlett/interlock:1.0.1 "/bin/interlock -D ru" 2 minutes ago Up 2 minutes 0.0.0.0:32770->8080/tcp interlock
``` ```
If you don't see the image runing, use `docker ps -a` to list all images to make sure the system attempted to start the image. Then, get the logs to see why the container failed to start. If you don't see the image running, use `docker ps -a` to list all images to make sure the system attempted to start the image. Then, get the logs to see why the container failed to start.
```bash ```bash
$ docker logs interlock $ docker logs interlock
View File
@ -126,7 +126,7 @@ https://github.com/docker/docker.github.io/tree/master/docker-cloud/images
#### Using a custom target ID #### Using a custom target ID
This topic has a custom target ID above its heading that can be used to link to This topic has a custom target ID above its heading that can be used to link to
it, in addtion to, or instead of, the default concatenated heading style. The it, in addition to, or instead of, the default concatenated heading style. The
format of this ID is `{: id="custom-target-id"}`. format of this ID is `{: id="custom-target-id"}`.
You can use custom targets to link to headings or even paragraphs. You link to You can use custom targets to link to headings or even paragraphs. You link to
@ -667,7 +667,7 @@ we use often.
### Raw, no highlighting ### Raw, no highlighting
The raw markup is needed to keep Liquid from interperting the things with double The raw markup is needed to keep Liquid from interpreting the things with double
braces as templating language. braces as templating language.
{% raw %} {% raw %}
View File
@ -65,7 +65,7 @@ func TestFrontMatterKeywords(t *testing.T) {
}) })
} }
// testFrontMatterKeywords tests if if keywords are present and correctly // testFrontMatterKeywords tests if keywords are present and correctly
// formatted in given markdown file bytes // formatted in given markdown file bytes
func testFrontMatterKeywords(mdBytes []byte) error { func testFrontMatterKeywords(mdBytes []byte) error {
fm, _, err := frontparser.ParseFrontmatterAndContent(mdBytes) fm, _, err := frontparser.ParseFrontmatterAndContent(mdBytes)