Publish amberjack content to beta.docs.docker.com (#1089)

* Raw content addition

* Merge default-backend info here

* Moved to interlock-vip info

* Incorporate Euan's changes

Add examples for sticky_session_cookie and redirects

* Fix indentation issue

* 1013: Move desktop ent content to docs-private

* fix yaml spacing error

* 1013 - Fix ToC indentation, missing images

* 1010, 1011 - Update user instructions, add new screenshot

* update Jenkinsfile

* update jenkinsfile with very important protections

So we're lucky we're not using the master branch to update our swarm services here, because if someone had pushed to it, it would have triggered a docs.docker.com build. This is because this Jenkinsfile, which was merged in from the docker.github.io project, contains the configuration for updating docs.docker.com, not beta.docs.docker.com. Maria and I have worked out a potential solution to this problem and I hope to implement it today.

* Fix the DDE Overview ToC

* make Jenkinsfile serve private and public docs

After a couple of Jenkins-based mix-ups it became obvious we needed a Jenkinsfile that could serve both public and private projects and that we could move between repos without worry. This Jenkinsfile knows which images to build and push, and which swarm services to update, based on git_url and branch conditions.

* make jenkinsfile serve private and public docs

After a couple of Jenkins-based mix-ups it became obvious we needed a Jenkinsfile that could serve both public and private projects and that we could move between repos without worry. This Jenkinsfile knows which images to build and push, and which swarm services to update, based on git_url and branch conditions.

* Address review comments from Ben and GuillaumeT

* fix image path

* Fix review comments from Mathieu and Guillaume

* fix pending review comments

* Add documentation for --service-cluster-ip-range flag

https://github.com/docker/orca/pull/16417 adds support for making the service cluster IP range subnet configurable at UCP install time via the --service-cluster-ip-range flag
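
For illustration only, a minimal sketch of how the new flag might be passed at install time; the UCP image tag, host address, and subnet below are placeholder assumptions, not values taken from this change:

```bash
# Hypothetical example: supply a custom service cluster IP range during install.
docker container run --rm -it --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp:3.1.7 install \
  --host-address 192.168.1.10 \
  --service-cluster-ip-range 10.96.0.0/16 \
  --interactive
```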

* Added a period.

* Add documentation for UCP install page

Signed-off-by: Arko Dasgupta <arko.dasgupta@docker.com>

* Add OS support statement

* Add Assemble docs

* Update ToC to include Assemble topics

* Remove version pack install section

* Adding APP CLI guide for customer beta2

Signed-off-by: Nigel Poulton <nigelpoulton@hotmail.com>

* Fix broken cross-refs

* fix the navigation

* Update version packs

The default version pack is now 3.0

We don't publicly advertise the Community version pack as its usage is for internal testing only.

Signed-off-by: Mathieu Champlon <mathieu.champlon@docker.com>

* bumped headings by one level + minor updates

* 1006 - Adding Docker Template content

* Update ToC to add Docker template entry

* Adding the CLI reference topic and an updated toc

* Added CLI reference, updated toc, fixed broken links

* replaced hardcoded names with 'username'

* Add registry-cli plugin reference

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>

* Update docker_registry docs

* Add docker template reference docs

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>

* Raw content addition

* Moved to interlock-vip info

* Fix indentation issue

* 1013: Move desktop ent content to docs-private

* fix yaml spacing error

* 1013 - Fix ToC indentation, missing images

* 1010, 1011 - Update user instructions, add new screenshot

* Fix the DDE Overview ToC

* Sync forked amberjack branch with docs-private (#1068)

* Service labels info

* Tuning info

* Update info

* New deploy landing page info

* Offline install info

* New production info

* New upgrade info

* New landing page info

* Canary info

* Context info

* Landing page info

* Interlock VIP mode info

* Labels reference info

* Redirects info

* Service clusters info

* Sessions info

* SSL info

* TLS info

* Websockets info

* Incorporated latest change from Netlify site

* Images

* Moved to images directory

* Moved info

* Moved info

* Moved info

* Moved info

* Moved info

* Changed default port based on github.io update

* Add HideInfoHeaders based on github.io update

* HideInfoHeaders in code sample

* Wording and tag updates

* Tag and link updates

* Fix some minor issues in vfs storage-driver section

- Fix mention of `storage-drivers` instead of `storage-opts`
- Repeat the selected driver in the second `daemon.json` example
- Remove mention of `CE` as this driver can be used
  on Docker EE (although it's mainly intended for
  debugging, so not a "supported" driver)

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>

* Wording cleanup

* Intra-doc links

* Link titles

* Wording and link changes

* Remove site URL from link path

* Removed Kube GC Known issue from UCP 3.1.4

* Update release-notes.md

DTR info

Edits on 2.5.10 and 2.6.4 entries

Add upgrade warning information

Updated engine info per Andrew's input

Added Component table info per Mark

* Update DTR release notes

* Fixed dates

* Fixed formatting issues

* Temporary - review later

* Remove stage compose file for docs-private

* Update compose-version to 1.24.0

https://github.com/docker/compose/releases/tag/1.24.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>

* re-add removed Jenkinsfile

* Added moby#36951 to 18.09.4 release notes

* Wording and link updates

* Updated Offline Bundles for March Patch

* Update release notes for 1.23.2 and 1.24.0

Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>

* Link to client bundle instructions

* Minor edits

- Moved dates to be consistent with other release notes
- Made grammar a little more consistent

* Update index.md : #### host or none - network (#8425)

* Update index.md : #### host or none - network

Choose a specific network for a build instead of [network_mode]. network_mode doesn't work when providing a network for a particular build; the block is skipped and the build moves on to the next service, so use network instead.

* Minor syntax updates

* Update index.md

These changes were the result of a merge conflict that I tried to resolve.

* add slack webhook to Jenkinsfile

* add slack webhook to Jenkinsfile

* Update release-notes.md

* add slack webhook to Jenkinsfile

* Fix labels-reference link

* Add pip dependencies to compose doc for alpine (#8554)

* Add pip dependencies to compose doc for alpine

Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>

* Minor edit

* Audit branch (#8564)

* Update trust-with-remote-ucp.md

* Fix link texts

* Addresses 8446

* Update trust_delegation.md

* - Addresses 8446
- Cleans up broken links
- Fixes vague link texts

Addresses 8446

Update trust_delegation.md

* Update running_ssh_service.md

* Update running_ssh_service.md

Fixed formatting and wording. Also moved note above the code.

* Update running_ssh_service.md

Fixed typo.

* Compose: Update build docs, Add --quiet flag

* Fix destroy reference page link

Relates to https://github.com/docker/docker.github.io/pull/8441

* Rephrase Ubuntu 14.04 note

* Revert "Compose: Update build docs, Add --quiet flag"

* # This is a combination of 4 commits.

- Addresses 8446
- Cleans up broken links
- Fixes vague link texts

Addresses 8446

Update trust_delegation.md

- Addresses 8446
- Cleans up broken links
- Fixes vague link texts

Addresses 8446

Update trust_delegation.md

Update trust-with-remote-ucp.md

- Addresses 8446
- Cleans up broken links
- Fixes vague link texts

Fix destroy reference page link

Relates to https://github.com/docker/docker.github.io/pull/8441

* - Addresses 8446
- Cleans up broken links
- Fixes vague link texts

* Addresses 8446 with text and link cleanup.

* Update syntax language from none to bash

* Update index.md

* Remove merge conflict

* Include Ubuntu version in Dockerfile

more recent versions of Ubuntu don't work with the given Dockerfile

* Adding Azure note (#8566)

* Adding Azure note

* Rephrase additional line and update link

* Fix typo

* Update configs.md

* Adding Azure note (#8566)

* Adding Azure note

* Rephrase additional line and update link

* Final edit

* Updated the 3.1.4 release notes to include Centos 7.6 support

* update jenkinsfile with very important protections

So we're lucky we're not using the master branch to update our swarm services here, because if someone had pushed to it, it would have triggered a docs.docker.com build. This is because this Jenkinsfile, which was merged in from the docker.github.io project, contains the configuration for updating docs.docker.com, not beta.docs.docker.com. Maria and I have worked out a potential solution to this problem and I hope to implement it today.

* add protection to Jenkinsfile

* fix git url protection in jenkinsfile

* typo fix

friendlyname -> friendlyhello

* Storage backend data migration updates

Fix incorrect API command, add backup updates

Update incorrect commands

* --unmanaged-cni is not a valid option for upgrade

* Update to UCP known issues

* Update UCP release notes

* Update release-notes.md

* make jenkinsfile serve private and public docs

After a couple of Jenkins-based mix-ups it became obvious we needed a Jenkinsfile that could serve both public and private projects and that we could move between repos without worry. This Jenkinsfile knows which images to build and push, and which swarm services to update, based on git_url and branch conditions.

* make jenkinsfile serve private and public docs

After a couple of Jenkins-based mix-ups it became obvious we needed a Jenkinsfile that could serve both public and private projects and that we could move between repos without worry. This Jenkinsfile knows which images to build and push, and which swarm services to update, based on git_url and branch conditions.

* make jenkinsfile serve private and public docs

After a couple of Jenkins-based mix-ups it became obvious we needed a Jenkinsfile that could serve both public and private projects and that we could move between repos without worry. This Jenkinsfile knows which images to build and push, and which swarm services to update, based on git_url and branch conditions.

* Add HSTS warning for specifying --dtr-external-url

* Typo on logging driver name

* Addressed engineering feedback

* Netlify redirects interlock (#8595)

* Added netlify redirect

* Remove redundant "be"

* Update the "role-based access control" link

On page "https://docs.docker.com/ee/ucp/user-access/", update the hyperlink "role-based access control" to point to "https://docs.docker.com/ee/ucp/authorization/" instead of "https://docs.docker.com/ee/access-control".

* Add UCP user password limitation

* Revert "Updated the UCP 3.1.4 release notes to include Centos 7.6 support"

* Adding emphasis on Static IP requirement (#7276)

* Adding emphasis on Static IP requirement

We had a customer (00056641) who changed IPs like this all at once, and they ended up in a messy state. We should make it clear that static IPs are absolutely required.

```
***-ucp-0-dw original="10.15.89.6" updated="10.15.89.7"
***-ucp-1-dw original="10.15.89.5" updated="10.15.89.6"
***-ucp-2-dw original="10.15.89.7" updated="10.15.89.5"
```

* Link to prod requirement of static IP addresses

* Adding warning about layer7 config (#8617)

* Adding warning about layer7 config

Adding warning about layer7 config not being included in the backup

* Text edit

* Sync published with master (#8619)

* Update install.md

add note: 8 character password minimum length

* Include Ubuntu version in Dockerfile

more recent versions of Ubuntu don't work with the given Dockerfile

* Updated the 3.1.4 release notes to include Centos 7.6 support

* Remove redundant "be"

* Update the "role-based access control" link

On page "https://docs.docker.com/ee/ucp/user-access/", update the hyperlink "role-based access control" to point to "https://docs.docker.com/ee/ucp/authorization/" instead of "https://docs.docker.com/ee/access-control".

* Add UCP user password limitation

* Revert "Updated the UCP 3.1.4 release notes to include Centos 7.6 support"

* Adding emphasis on Static IP requirement (#7276)

* Adding emphasis on Static IP requirement

We had a customer (00056641) who changed IPs like this all at once, and they ended up in a messy state. We should make it clear that static IPs are absolutely required.

```
***-ucp-0-dw original="10.15.89.6" updated="10.15.89.7"
***-ucp-1-dw original="10.15.89.5" updated="10.15.89.6"
***-ucp-2-dw original="10.15.89.7" updated="10.15.89.5"
```

* Link to prod requirement of static IP addresses

* Adding warning about layer7 config (#8617)

* Adding warning about layer7 config

Adding warning about layer7 config not being included in the backup

* Text edit

* Add the 'Install on Azure' page back to the TOC for UCP 3.0 (#8623)

* Add the Install on Azure page back to the UCP 3.0 TOC

* Fix the copy / paste error on Install on UCP

* Fix Liquid syntax error in "reset user password"

```
Liquid Warning: Liquid syntax error (line 33): Expected end_of_string but found number in "{{ index .Spec.TaskTemplate.ContainerSpec.Args 0 }}" in ee/ucp/authorization/reset-user-password.md
```

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>

* Fix link text

* Patch release notes 04 2019 (#8633)

* Add version update for Engine/UCP

* Add DTR version updates

* Added April Offline Bundles

* Engine release notes update

* Update release-notes.md

* Update release-notes.md

* Minor edit

* Minor edit

* Add 2.4.11 DTR info

* Remove statement about supporting CNI plugin (#8594)

* Remove statement about supporting CNI plugin

* Update install-cni-plugin.md

* Removing internal JIRA links

* Use site parameter to use latest compose file versions in examples (#8630)

* Use site parameter to use latest compose file versions in examples

Make sure that examples use the latest version of the compose file
format, to encourage using the latest version, and to prevent
users from running into "not supported by this version" problems
when copy/pasting, and combining examples that use different
versions.

Also add a note about `version: x` not being equivalent to
`version: x.latest`.

Note that there are still some examples using fixed versions
in the UCP sections; we need to evaluate those to make sure
the right (and supported) versions are used for UCP (which may
be different than "latest").

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>

* Address some v3/v2 issues, and YAML syntax error

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>

* Minor edit

* Final updates

- Added note around v2 and v3 versioning
- Updated note for v3 to match the v2 update

* compose-file: remove reference to custom init path (#8628)

* compose-file: remove reference to custom init path

This option was never functional, and was not intended
to be added to the "container create" API, so let's
remove it, because it has been removed in Docker 17.05,
and was broken in versions before that; see

- docker/docker-py#2309 Remove init_path from create
- moby/moby#32355 --init-path does not seem to work
- moby/moby#32470 remove --init-path from client

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>

* Update index.md

* Remove extra which

Change below line

From

AUFS, which can suffer noticeable latencies when searching for files in images with many layers

To

AUFS can suffer noticeable latencies when searching for files in images with many layers

* Fix a broken link

* Add documentation for --service-cluster-ip-range flag

https://github.com/docker/orca/pull/16417 adds support for making the service cluster IP range subnet configurable at UCP install time via the --service-cluster-ip-range flag

* Added a period.

* Add documentation for UCP install page

Signed-off-by: Arko Dasgupta <arko.dasgupta@docker.com>

* Redirect to current version of page, since it's reached EOL

* Revert "Netlify redirects interlock (#8595)"

This reverts commit a7793edc74.

* UCP Install on Azure Patch (#8522)

* Fix grammar on the 2nd pre-req, and did markdown formatting on the rest :)

* Correct Pod-CIDR Warning

* Content cleanup

Please check that I haven't changed the meaning of the updated prerequisites.

* Create a new section on configuring the IP Count value, also responded to feedback from Follis, Steve R and Xinfeng.

* Incorporated Steven F's feedback and Issue 8551

* Provide a warning when setting a small IP Count variable

* Final edits

* Update install-on-azure.md

* Following feedback, I have expanded on the 0644 azure.json file permissions and added the --existing-config flag to the UCP install command

* Removed Orchestrator Tag Pre Req from Azure Docs

* Clarifying need for 0644 permissions

* Improved backup commands (#8597)

* Improved backup commands

DTR image backup command improvements:

1. Local and NFS mount image backup commands were invalid (incorrectly used -C flag). Replaced them with commands that work.
2. The new commands automatically populate the correct replica ID and add a datestamp to the backup filename.

DTR Metadata backup command improvements:

DTR metadata backups are more difficult than they need to be and generate many support tickets. I updated the DTR command to avoid common user pitfalls:

1. The prior metadata backup command was subject to user error. Improved the command to automatically collect the DTR version and select a replica.
2. Improved security of the command by automatically collecting UCP CA certificate for verification rather than using --ucp-insecure-tls flag.
3. Improved the backup filename by adding the backed-up version information and date of backup. Knowledge of the version information is required for restoring a backup.
4. Described these improvements for the user.

Image backup commands were tested with local and NFS image storage. The metadata backup command was tested by running it directly on a DTR node and through a UCP client bundle with multiple replicas.
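
As a rough illustration of the kind of command these improvements describe (replica-ID autodetection, UCP CA collection, and a datestamped, versioned filename), here is a minimal sketch; the DTR image tag, UCP URL, container-name filter, and field parsing are assumptions, not the exact text added to the docs:

```bash
# Hypothetical sketch: detect a replica ID from a running DTR container and
# write a datestamped metadata backup, verifying UCP with its CA certificate.
REPLICA_ID=$(docker ps --format '{{ .Names }}' --filter name=dtr-rethink | head -n1 | awk -F- '{print $NF}')

# The bootstrapper prompts for the UCP password when --ucp-password is not passed.
docker run --rm -i \
  docker/dtr:2.6.5 backup \
  --ucp-url https://ucp.example.com \
  --ucp-username admin \
  --ucp-ca "$(curl -s -k https://ucp.example.com/ca)" \
  --existing-replica-id "$REPLICA_ID" \
  > "dtr-metadata-backup-2.6.5-$(date +%Y%m%d-%H%M%S).tar"
```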

* Technical and editorial review

* More edits

* line 8; remove unnecessary a (#8672)

* line 8; remove unnecessary a

* Minor edit

* Updated the UCP Logging page to include UCP 3.1 screenshots (#8646)

* Added examples (#8599)

* Added examples

Added examples with more detail and automation to help customers backup DTR without creating support tickets.

* Linked to explanation of example command

@omegamormegil I removed the example with prepopulated fields, as I think it doesn't add much, and will only add confusion. Users who need this much detail can run the basic command and follow the terminal prompts.

We can re-add in a follow-up PR, if you think that example is crucial to this page.

* Remove deadlink in the Interlock ToC (#8668)

* Found a deadlink in the Interlock ToC

* Added Redirect

* Sync published with master (#8673)

* Revert "Netlify redirects interlock (#8595)"

This reverts commit a7793edc74.

* UCP Install on Azure Patch (#8522)

* Fix grammar on the 2nd pre-req, and did markdown formatting on the rest :)

* Correct Pod-CIDR Warning

* Content cleanup

Please check that I haven't changed the meaning of the updated prerequisites.

* Create a new section on configuring the IP Count value, also responded to feedback from Follis, Steve R and Xinfeng.

* Incorporated Steven F's feedback and Issue 8551

* Provide a warning when setting a small IP Count variable

* Final edits

* Update install-on-azure.md

* Following feedback, I have expanded on the 0644 azure.json file permissions and added the --existing-config flag to the UCP install command

* Removed Orchestrator Tag Pre Req from Azure Docs

* Clarifying need for 0644 permissions

* Improved backup commands (#8597)

* Improved backup commands

DTR image backup command improvements:

1. Local and NFS mount image backup commands were invalid (incorrectly used -C flag). Replaced them with commands that work.
2. The new commands automatically populate the correct replica ID and add a datestamp to the backup filename.

DTR Metadata backup command improvements:

DTR metadata backups are more difficult than they need to be and generate many support tickets. I updated the DTR command to avoid common user pitfalls:

1. The prior metadata backup command was subject to user error. Improved the command to automatically collect the DTR version and select a replica.
2. Improved security of the command by automatically collecting UCP CA certificate for verification rather than using --ucp-insecure-tls flag.
3. Improved the backup filename by adding the backed-up version information and date of backup. Knowledge of the version information is required for restoring a backup.
4. Described these improvements for the user.

Image backup commands were tested with local and NFS image storage. The metadata backup command was tested by running it directly on a DTR node and through a UCP client bundle with multiple replicas.

* Technical and editorial review

* More edits

* line 8; remove unnecessary a (#8672)

* line 8; remove unnecessary a

* Minor edit

* Updated the UCP Logging page to include UCP 3.1 screenshots (#8646)

* Added examples (#8599)

* Added examples

Added examples with more detail and automation to help customers backup DTR without creating support tickets.

* Linked to explanation of example command

@omegamormegil I removed the example with prepopulated fields, as I think it doesn't add much, and will only add confusion. Users who need this much detail can run the basic command and follow the terminal prompts.

We can re-add in a follow-up PR, if you think that example is crucial to this page.

* Remove deadlink in the Interlock ToC (#8668)

* Found a deadlink in the Interlock ToC

* Added Redirect

* Published (#8674)

* add slack webhook to Jenkinsfile

* make jenkinsfile serve private and public docs

After a couple of Jenkins-based mix-ups it became obvious we needed a Jenkinsfile that could serve both public and private projects and that we could move between repos without worry. This Jenkinsfile knows which images to build and push, and which swarm services to update, based on git_url and branch conditions.

* Sync published with master (#8619)

* Update install.md

add note: 8 character password minimum length

* Include Ubuntu version in Dockerfile

more recent versions of Ubuntu don't work with the given Dockerfile

* Updated the 3.1.4 release notes to include Centos 7.6 support

* Remove redundant "be"

* Update the "role-based access control" link

On page "https://docs.docker.com/ee/ucp/user-access/", update the hyperlink "role-based access control" to point to "https://docs.docker.com/ee/ucp/authorization/" instead of "https://docs.docker.com/ee/access-control".

* Add UCP user password limitation

* Revert "Updated the UCP 3.1.4 release notes to include Centos 7.6 support"

* Adding emphasis on Static IP requirement (#7276)

* Adding emphasis on Static IP requirement

We had a customer (00056641) who changed IPs like this all at once, and they ended up in a messy state. We should make it clear that static IPs are absolutely required.

```
***-ucp-0-dw original="10.15.89.6" updated="10.15.89.7"
***-ucp-1-dw original="10.15.89.5" updated="10.15.89.6"
***-ucp-2-dw original="10.15.89.7" updated="10.15.89.5"
```

* Link to prod requirement of static IP addresses

* Adding warning about layer7 config (#8617)

* Adding warning about layer7 config

Adding warning about layer7 config not being included in the backup

* Text edit

* Sync published with master (#8673)

* Revert "Netlify redirects interlock (#8595)"

This reverts commit a7793edc74.

* UCP Install on Azure Patch (#8522)

* Fix grammar on the 2nd pre-req, and did markdown formatting on the rest :)

* Correct Pod-CIDR Warning

* Content cleanup

Please check that I haven't changed the meaning of the updated prerequisites.

* Create a new section on configuring the IP Count value, also responded to feedback from Follis, Steve R and Xinfeng.

* Incorporated Steven F's feedback and Issue 8551

* Provide a warning when setting a small IP Count variable

* Final edits

* Update install-on-azure.md

* Following feedback, I have expanded on the 0644 azure.json file permissions and added the --existing-config flag to the UCP install command

* Removed Orchestrator Tag Pre Req from Azure Docs

* Clarifying need for 0644 permissions

* Improved backup commands (#8597)

* Improved backup commands

DTR image backup command improvements:

1. Local and NFS mount image backup commands were invalid (incorrectly used -C flag). Replaced them with commands that work.
2. The new commands automatically populate the correct replica ID and add a datestamp to the backup filename.

DTR Metadata backup command improvements:

DTR metadata backups are more difficult than they need to be and generate many support tickets. I updated the DTR command to avoid common user pitfalls:

1. The prior metadata backup command was subject to user error. Improved the command to automatically collect the DTR version and select a replica.
2. Improved security of the command by automatically collecting UCP CA certificate for verification rather than using --ucp-insecure-tls flag.
3. Improved the backup filename by adding the backed-up version information and date of backup. Knowledge of the version information is required for restoring a backup.
4. Described these improvements for the user.

Image backup commands were tested with local and NFS image storage. The metadata backup command was tested by running it directly on a DTR node and through a UCP client bundle with multiple replicas.

* Technical and editorial review

* More edits

* line 8; remove unnecessary a (#8672)

* line 8; remove unnecessary a

* Minor edit

* Updated the UCP Logging page to include UCP 3.1 screenshots (#8646)

* Added examples (#8599)

* Added examples

Added examples with more detail and automation to help customers backup DTR without creating support tickets.

* Linked to explanation of example command

@omegamormegil I removed the example with prepopulated fields, as I think it doesn't add much, and will only add confusion. Users who need this much detail can run the basic command and follow the terminal prompts.

We can re-add in a follow-up PR, if you think that example is crucial to this page.

* Remove deadlink in the Interlock ToC (#8668)

* Found a deadlink in the Interlock ToC

* Added Redirect

* Trying to fix command rendering of '--format "{{ .Names }}"' (#8678)

* Trying to fix command rendering of '--format "{{ .Names }}"'

--format "{{ .Names }}" is showing up in the markup but is rendering as --format "" in the published version. Added {% raw %} tags to try to fix.

* Fixed heading inconsistency

* Trying to fix command rendering of '--format "{{ .Names }}"' (#8677)

* Trying to fix command rendering of '--format "{{ .Names }}"'

--format "{{ .Names }}" is showing up in the markup but is rendering as --format "" in the published version. Added {% raw %} tags to try to fix.

* Update concatenated to chained

* Minor fix

* interlock --> ucp-interlock (#8675)

* interlock --> ucp-interlock

* Fixed code samples

- Use the latest UCP version and the latest ucp-interlock image
- Leverage ucp page version Jekyll variable

* Typo

* Final syntax fix

* Update backup.md

* Sync published with master (#8685)

* Revert "Netlify redirects interlock (#8595)"

This reverts commit a7793edc74.

* UCP Install on Azure Patch (#8522)

* Fix grammar on the 2nd pre-req, and did markdown formatting on the rest :)

* Correct Pod-CIDR Warning

* Content cleanup

Please check that I haven't changed the meaning of the updated prerequisites.

* Create a new section on configuring the IP Count value, also responded to feedback from Follis, Steve R and Xinfeng.

* Incorporated Steven F's feedback and Issue 8551

* Provide a warning when setting a small IP Count variable

* Final edits

* Update install-on-azure.md

* Following feedback, I have expanded on the 0644 azure.json file permissions and added the --existing-config flag to the UCP install command

* Removed Orchestrator Tag Pre Req from Azure Docs

* Clarifying need for 0644 permissions

* Improved backup commands (#8597)

* Improved backup commands

DTR image backup command improvements:

1. Local and NFS mount image backup commands were invalid (incorrectly used -C flag). Replaced them with commands that work.
2. The new commands automatically populate the correct replica ID and add a datestamp to the backup filename.

DTR Metadata backup command improvements:

DTR metadata backups are more difficult than they need to be and generate many support tickets. I updated the DTR command to avoid common user pitfalls:

1. The prior metadata backup command was subject to user error. Improved the command to automatically collect the DTR version and select a replica.
2. Improved security of the command by automatically collecting UCP CA certificate for verification rather than using --ucp-insecure-tls flag.
3. Improved the backup filename by adding the backed-up version information and date of backup. Knowledge of the version information is required for restoring a backup.
4. Described these improvements for the user.

Image backup commands were tested with local and NFS image storage. The metadata backup command was tested by running it directly on a DTR node and through a UCP client bundle with multiple replicas.

* Technical and editorial review

* More edits

* line 8; remove unnecessary a (#8672)

* line 8; remove unnecessary a

* Minor edit

* Updated the UCP Logging page to include UCP 3.1 screenshots (#8646)

* Added examples (#8599)

* Added examples

Added examples with more detail and automation to help customers backup DTR without creating support tickets.

* Linked to explanation of example command

@omegamormegil I removed the example with prepopulated fields, as I think it doesn't add much, and will only add confusion. Users who need this much detail can run the basic command and follow the terminal prompts.

We can re-add in a follow-up PR, if you think that example is crucial to this page.

* Remove deadlink in the Interlock ToC (#8668)

* Found a deadlink in the Interlock ToC

* Added Redirect

* Published (#8674)

* add slack webhook to Jenkinsfile

* make jenkinsfile serve private and public docs

After a couple of Jenkins-based mix-ups it became obvious we needed a Jenkinsfile that could serve both public and private projects and that we could move between repos without worry. This Jenkinsfile knows which images to build and push, and which swarm services to update, based on git_url and branch conditions.

* Sync published with master (#8619)

* Update install.md

add note: 8 character password minimum length

* Include Ubuntu version in Dockerfile

more recent versions of Ubuntu don't work with the given Dockerfile

* Updated the 3.1.4 release notes to include Centos 7.6 support

* Remove redundant "be"

* Update the "role-based access control" link

On page "https://docs.docker.com/ee/ucp/user-access/", update the hyperlink "role-based access control" to point to "https://docs.docker.com/ee/ucp/authorization/" instead of "https://docs.docker.com/ee/access-control".

* Add UCP user password limitation

* Revert "Updated the UCP 3.1.4 release notes to include Centos 7.6 support"

* Adding emphasis on Static IP requirement (#7276)

* Adding emphasis on Static IP requirement

We had a customer (00056641) who changed IPs like this all at once, and they ended up in a messy state. We should make it clear that static IPs are absolutely required.

```
***-ucp-0-dw original="10.15.89.6" updated="10.15.89.7"
***-ucp-1-dw original="10.15.89.5" updated="10.15.89.6"
***-ucp-2-dw original="10.15.89.7" updated="10.15.89.5"
```

* Link to prod requirement of static IP addresses

* Adding warning about layer7 config (#8617)

* Adding warning about layer7 config

Adding warning about layer7 config not being included in the backup

* Text edit

* Sync published with master (#8673)

* Revert "Netlify redirects interlock (#8595)"

This reverts commit a7793edc74.

* UCP Install on Azure Patch (#8522)

* Fix grammar on the 2nd pre-req, and did markdown formatting on the rest :)

* Correct Pod-CIDR Warning

* Content cleanup

Please check that I haven't changed the meaning of the updated prerequisites.

* Create a new section on configuring the IP Count value, also responded to feedback from Follis, Steve R and Xinfeng.

* Incorporated Steven F's feedback and Issue 8551

* Provide a warning when setting a small IP Count variable

* Final edits

* Update install-on-azure.md

* Following feedback, I have expanded on the 0644 azure.json file permissions and added the --existing-config flag to the UCP install command

* Removed Orchestrator Tag Pre Req from Azure Docs

* Clarifying need for 0644 permissions

* Improved backup commands (#8597)

* Improved backup commands

DTR image backup command improvements:

1. Local and NFS mount image backup commands were invalid (incorrectly used -C flag). Replaced them with commands that work.
2. The new commands automatically populate the correct replica ID and add a datestamp to the backup filename.

DTR Metadata backup command improvements:

DTR metadata backups are more difficult than they need to be and generate many support tickets. I updated the DTR command to avoid common user pitfalls:

1. The prior metadata backup command was subject to user error. Improved the command to automatically collect the DTR version and select a replica.
2. Improved security of the command by automatically collecting UCP CA certificate for verification rather than using --ucp-insecure-tls flag.
3. Improved the backup filename by adding the backed-up version information and date of backup. Knowledge of the version information is required for restoring a backup.
4. Described these improvements for the user.

Image backup commands were tested with local and NFS image storage. The metadata backup command was tested by running it directly on a DTR node and through a UCP client bundle with multiple replicas.

* Technical and editorial review

* More edits

* line 8; remove unnecessary a (#8672)

* line 8; remove unnecessary a

* Minor edit

* Updated the UCP Logging page to include UCP 3.1 screenshots (#8646)

* Added examples (#8599)

* Added examples

Added examples with more detail and automation to help customers backup DTR without creating support tickets.

* Linked to explanation of example command

@omegamormegil I removed the example with prepopulated fields, as I think it doesn't add much, and will only add confusion. Users who need this much detail can run the basic command and follow the terminal prompts.

We can re-add in a follow-up PR, if you think that example is crucial to this page.

* Remove deadlink in the Interlock ToC (#8668)

* Found a deadlink in the Interlock ToC

* Added Redirect

* Trying to fix command rendering of '--format "{{ .Names }}"' (#8678)

* Trying to fix command rendering of '--format "{{ .Names }}"'

--format "{{ .Names }}" is showing up in the markup but is rendering as --format "" in the published version. Added {% raw %} tags to try to fix.

* Fixed heading inconsistency

* Trying to fix command rendering of '--format "{{ .Names }}"' (#8677)

* Trying to fix command rendering of '--format "{{ .Names }}"'

--format "{{ .Names }}" is showing up in the markup but is rendering as --format "" in the published version. Added {% raw %} tags to try to fix.

* Update concatenated to chained

* Minor fix

* interlock --> ucp-interlock (#8675)

* interlock --> ucp-interlock

* Fixed code samples

- Use the latest UCP version and the latest ucp-interlock image
- Leverage ucp page version Jekyll variable

* Typo

* Final syntax fix

* Update backup.md

* Removed Reference to Interlock Preview Image, and added relevant UCP Image Org and Tag

* Fix syntax error which caused the master build to fail

* Preview page.ucp_org output

* Sync published with master (#8693) (#8694)

* Adding Azure note (#8566)

* Adding Azure note

* Rephrase additional line and update link

* Revert "Netlify redirects interlock (#8595)"

This reverts commit a7793edc74.

* UCP Install on Azure Patch (#8522)

* Fix grammar on the 2nd pre-req, and did markdown formatting on the rest :)

* Correct Pod-CIDR Warning

* Content cleanup

Please check that I haven't changed the meaning of the updated prerequisites.

* Create a new section on configuring the IP Count value, also responded to feedback from Follis, Steve R and Xinfeng.

* Incorporated Steven F's feedback and Issue 8551

* Provide a warning when setting a small IP Count variable

* Final edits

* Update install-on-azure.md

* Following feedback, I have expanded on the 0644 azure.json file permissions and added the --existing-config flag to the UCP install command

* Removed Orchestrator Tag Pre Req from Azure Docs

* Clarifying need for 0644 permissions

* Improved backup commands (#8597)

* Improved backup commands

DTR image backup command improvements:

1. Local and NFS mount image backup commands were invalid (incorrectly used -C flag). Replaced them with commands that work.
2. The new commands automatically populate the correct replica ID and add a datestamp to the backup filename.

DTR Metadata backup command improvements:

DTR metadata backups are more difficult than they need to be and generate many support tickets. I updated the DTR command to avoid common user pitfalls:

1. The prior metadata backup command was subject to user error. Improved the command to automatically collect the DTR version and select a replica.
2. Improved security of the command by automatically collecting UCP CA certificate for verification rather than using --ucp-insecure-tls flag.
3. Improved the backup filename by adding the backed-up version information and date of backup. Knowledge of the version information is required for restoring a backup.
4. Described these improvements for the user.

Image backup commands were tested with local and NFS image storage. The metadata backup command was tested by running it directly on a DTR node and through a UCP client bundle with multiple replicas.

* Technical and editorial review

* More edits

* line 8; remove unnecessary a (#8672)

* line 8; remove unnecessary a

* Minor edit

* Updated the UCP Logging page to include UCP 3.1 screenshots (#8646)

* Added examples (#8599)

* Added examples

Added examples with more detail and automation to help customers backup DTR without creating support tickets.

* Linked to explanation of example command

@omegamormegil I removed the example with prepopulated fields, as I think it doesn't add much, and will only add confusion. Users who need this much detail can run the basic command and follow the terminal prompts.

We can re-add in a follow-up PR, if you think that example is crucial to this page.

* Remove deadlink in the Interlock ToC (#8668)

* Found a deadlink in the Interlock ToC

* Added Redirect

* Published (#8674)

* add slack webhook to Jenkinsfile

* make jenkinsfile serve private and public docs

After a couple of Jenkins-based mix-ups it became obvious we needed a Jenkinsfile that could serve both public and private projects and that we could move between repos without worry. This Jenkinsfile knows which images to build and push, and which swarm services to update, based on git_url and branch conditions.

* Sync published with master (#8619)

* Update install.md

add note: 8 character password minimum length

* Include Ubuntu version in Dockerfile

more recent versions of Ubuntu don't work with the given Dockerfile

* Updated the 3.1.4 release notes to include Centos 7.6 support

* Remove redundant "be"

* Update the "role-based access control" link

On page "https://docs.docker.com/ee/ucp/user-access/", update the hyperlink "role-based access control" to point to "https://docs.docker.com/ee/ucp/authorization/" instead of "https://docs.docker.com/ee/access-control".

* Add UCP user password limitation

* Revert "Updated the UCP 3.1.4 release notes to include Centos 7.6 support"

* Adding emphasis on Static IP requirement (#7276)

* Adding emphasis on Static IP requirement

We had a customer (00056641) who changed IPs like this all at once, and they ended up in a messy state. We should make it clear that static IPs are absolutely required.

```
***-ucp-0-dw original="10.15.89.6" updated="10.15.89.7"
***-ucp-1-dw original="10.15.89.5" updated="10.15.89.6"
***-ucp-2-dw original="10.15.89.7" updated="10.15.89.5"
```

* Link to prod requirement of static IP addresses

* Adding warning about layer7 config (#8617)

* Adding warning about layer7 config

Adding warning about layer7 config not being included in the backup

* Text edit

* Sync published with master (#8673)

* Revert "Netlify redirects interlock (#8595)"

This reverts commit a7793edc74.

* UCP Install on Azure Patch (#8522)

* Fix grammar on the 2nd pre-req, and did markdown formatting on the rest :)

* Correct Pod-CIDR Warning

* Content cleanup

Please check that I haven't changed the meaning of the updated prerequisites.

* Create a new section on configuring the IP Count value, also responded to feedback from Follis, Steve R and Xinfeng.

* Incorporated Steven F's feedback and Issue 8551

* Provide a warning when setting a small IP Count variable

* Final edits

* Update install-on-azure.md

* Following feedback, I have expanded on the 0644 azure.json file permissions and added the --existing-config flag to the UCP install command

* Removed Orchestrator Tag Pre Req from Azure Docs

* Clarifying need for 0644 permissions

* Improved backup commands (#8597)

* Improved backup commands

DTR image backup command improvements:

1. Local and NFS mount image backup commands were invalid (incorrectly used -C flag). Replaced them with commands that work.
2. The new commands automatically populate the correct replica ID and add a datestamp to the backup filename.

DTR Metadata backup command improvements:

DTR metadata backups are more difficult than they need to be and generate many support tickets. I updated the DTR command to avoid common user pitfalls:

1. The prior metadata backup command was subject to user error. Improved the command to automatically collect the DTR version and select a replica.
2. Improved security of the command by automatically collecting UCP CA certificate for verification rather than using --ucp-insecure-tls flag.
3. Improved the backup filename by adding the backed-up version information and date of backup. Knowledge of the version information is required for restoring a backup.
4. Described these improvements for the user.

Image backup commands were tested with local and NFS image storage. The metadata backup command was tested by running it directly on a DTR node and through a UCP client bundle with multiple replicas.

* Technical and editorial review

* More edits

* line 8; remove unnecessary a (#8672)

* line 8; remove unnecessary a

* Minor edit

* Updated the UCP Logging page to include UCP 3.1 screenshots (#8646)

* Added examples (#8599)

* Added examples

Added examples with more detail and automation to help customers backup DTR without creating support tickets.

* Linked to explanation of example command

@omegamormegil I removed the example with prepopulated fields, as I think it doesn't add much, and will only add confusion. Users who need this much detail can run the basic command and follow the terminal prompts.

We can re-add in a follow-up PR, if you think that example is crucial to this page.

* Remove deadlink in the Interlock ToC (#8668)

* Found a deadlink in the Interlock ToC

* Added Redirect

* Trying to fix command rendering of '--format "{{ .Names }}"' (#8678)

* Trying to fix command rendering of '--format "{{ .Names }}"'

--format "{{ .Names }}" is showing up in the markup but is rendering as --format "" in the published version. Added {% raw %} tags to try to fix.

* Fixed heading inconsistency

* Trying to fix command rendering of '--format "{{ .Names }}"' (#8677)

* Trying to fix command rendering of '--format "{{ .Names }}"'

--format "{{ .Names }}" is showing up in the markup but is rendering as --format "" in the published version. Added {% raw %} tags to try to fix.

* Update concatenated to chained

* Minor fix

* interlock --> ucp-interlock (#8675)

* interlock --> ucp-interlock

* Fixed code samples

- Use the latest UCP version and the latest ucp-interlock image
- Leverage ucp page version Jekyll variable

* Typo

* Final syntax fix

* Update backup.md

* Removed Reference to Interlock Preview Image, and added relevant UCP Image Org and Tag

* Fix syntax error which caused the master build to fail

* docs: fix typo in removal of named volumes (#8686)

* Updated the ToC for Upgrading Interlock

* Removed the Previous Interlock SSL Page

* Moved Redirect to latest page

* Update index.md (#8690)

Fix typo - missing word.

* Update bind-mounts.md (#8696)

* Minor edits (#8708)

* Minor edits

- Standardized setting of replica ID as per @caervs
- Fix broken link

* Consistency edits

- Standardized setting of replica ID
- Added note that this command only works on Linux

* Standardize replica setting

- Update commands for creating tar files for local and NFS-mounted images

* Fixed broken 'important changes' link (#8721)

* Interlock fix - remove haproxy and custom template files (#8722)

* Removed haproxy and custom template info

* Delete file

* Delete file

* Render DTR version (#8726)

* Release notes for 2.0.4.0 win (Edge)

Signed-off-by: Mathieu Champlon <mathieu.champlon@docker.com>

* Release notes for 2.0.4.0 mac (Edge)

Signed-off-by: Mathieu Champlon <mathieu.champlon@docker.com>

* Update-edge-release-notes.md

Minor updates to the proposed content. Looks good otherwise.

* Updated edge-release-notes (Windows)

Minor edits

* Added Docker-Compose awslogs example (#8638)

* Added docker compose aws logs information

* Fixed formatting and text

- Signed off by @bermudezmt

* Fix: duplicate paragraph `depends_on` (#8539)

* Fix: duplicate paragraph `depends_on`

Amend duplicate paragraph `depends_on` in Compose file reference doc.

* Fix: add missing blank line

* Updated Engine/DTR/UCP version info (#8744)

* Updated Engine/DTR/UCP version info

* Fixed version

* Updates for May patch

* Release notes update (May) (#8763)

* Latest info including known issues

* Updates for 2.6.6, 2.5.11, 2.4.12

* Added 18.09.6 updates

* Added link

* Fixed link error

* Syntax error

* 2.6.6 info cleanup

* Added Hub info

* Added Hub info for 2.6.6

* Added Hub info for 3.1.7

* Link fix

* Update line items for DTR 2.6.6

* Add line break after Known Issues

- Affects 2.5.11.

* Edit line items

Minor edits and formatting fixes

* Remove outdated links/fix links (#8760)

* Fix dates

* Fix dates

* Fix dates

* Fixed syntax error (#8732)

* Fixed syntax error

Last edit to the REPLICA_ID command introduced a syntax error by adding an extra ')'. Removed it.

* Fix replica ID setting examples

- Accept suggestion from @thajeztah based on product testing
- Apply change to page examples
- Remove NFS backup example based on the following errors:
tar: /var/lib/docker/volumes/dtr-registry-nfs-36e6bf87816d: Cannot stat: No such file or directory
tar: Exiting with failure status due to previous errors
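
For reference, a minimal sketch of the standardized replica-ID setting and a local image-storage tar backup along the lines described above; the container-name filter, parsing, and volume path are assumptions:

```bash
# Hypothetical sketch: standardize the replica ID, then tar the local
# registry volume into a datestamped archive.
REPLICA_ID=$(docker ps --format '{{ .Names }}' --filter name=dtr-rethink | head -n1 | awk -F- '{print $NF}')
sudo tar -cvf "dtr-image-backup-$(date +%Y%m%d).tar" "/var/lib/docker/volumes/dtr-registry-$REPLICA_ID"
```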

* Update header for example tar

* Fixed link title

* Fixed link title

* Added new example and deprecation info (#8773)

* Updated multi-stage build doc (#8769)

Changed the 'as' keyword to 'AS' to match the Dockerfile reference docs here: https://docs.docker.com/engine/reference/builder/#from

* Fix typo (#8766)

* Fixed a sentence (#8728)

* Fixed a sentence

* Minor edit

* Update configure-tls.md (#8719)

* Update upgrade.md (#8718)

* Update index.md (#8717)

* Update configure-tls.md (#8716)

* Add TOC entry for Hub page title change (#8777)

* Update upgrade.md

* Fix left navigation TOC

* Update get-started.md (#8713)

* Update tmpfs.md (#8711)

* Add an indentation in compose-gettingstarted.md (#8487)

* Add an indentation

* Fix messaging on service dependencies

* Sync master with published (#8779)

* Sync published with master (#8693)

* Adding Azure note (#8566)

* Adding Azure note

* Rephrase additional line and update link

* Revert "Netlify redirects interlock (#8595)"

This reverts commit a7793edc74.

* UCP Install on Azure Patch (#8522)

* Fix grammar on the 2nd pre-req, and did markdown formatting on the rest :)

* Correct Pod-CIDR Warning

* Content cleanup

Please check that I haven't changed the meaning of the updated prerequisites.

* Create a new section on configuring the IP Count value, also responded to feedback from Follis, Steve R and Xinfeng.

* Incorporated Steven F's feedback and Issue 8551

* Provide a warning when setting a small IP Count variable

* Final edits

* Update install-on-azure.md

* Following feedback, I have expanded on the 0644 azure.json file permissions and added the --existing-config flag to the UCP install command

* Removed Orchestrator Tag Pre Req from Azure Docs

* Clarifying need for 0644 permissions

* Improved backup commands (#8597)

* Improved backup commands

DTR image backup command improvements:

1. Local and NFS mount image backup commands were invalid (incorrectly used -C flag). Replaced them with commands that work.
2. The new commands automatically populate the correct replica ID and add a datestamp to the backup filename.

DTR Metadata backup command improvements:

DTR metadata backups are more difficult than they need to be and generate many support tickets. I updated the DTR command to avoid common user pitfalls:

1. The prior metadata backup command was subject to user error. Improved the command to automatically collect the DTR version and select a replica.
2. Improved security of the command by automatically collecting UCP CA certificate for verification rather than using --ucp-insecure-tls flag.
3. Improved the backup filename by adding the backed-up version information and date of backup. Knowledge of the version information is required for restoring a backup.
4. Described these improvements for the user.

Image backup commands were tested with local and NFS image storage. The metadata backup command was tested by running it directly on a DTR node and through a UCP client bundle with multiple replicas.

* Technical and editorial review

* More edits

* line 8; remove unnecessary a (#8672)

* line 8; remove unnecessary a

* Minor edit

* Updated the UCP Logging page to include UCP 3.1 screenshots (#8646)

* Added examples (#8599)

* Added examples

Added examples with more detail and automation to help customers backup DTR without creating support tickets.

* Linked to explanation of example command

@omegamormegil I removed the example with prepopulated fields, as I think it doesn't add much, and will only add confusion. Users who need this much detail can run the basic command and follow the terminal prompts.

We can re-add in a follow-up PR, if you think that example is crucial to this page.

* Remove deadlink in the Interlock ToC (#8668)

* Found a deadlink in the Interlock ToC

* Added Redirect

* Published (#8674)

* add slack webhook to Jenkinsfile

* make jenkinsfile serve private and public docs

After a couple of Jenkins-based mix-ups it became obvious we needed a Jenkinsfile that could serve both public and private projects and that we could move between repos without worry. This Jenkinsfile knows which images to build and push, and which swarm services to update, based on git_url and branch conditions.

* Sync published with master (#8619)

* Update install.md

add note: 8 character password minimum length

* Include Ubuntu version in Dockerfile

more recent versions of Ubuntu don't work with the given Dockerfile

* Updated the 3.1.4 release notes to include Centos 7.6 support

* Remove redundant "be"

* Update the "role-based access control" link

On page "https://docs.docker.com/ee/ucp/user-access/", update the hyperlink "role-based access control" to point to "https://docs.docker.com/ee/ucp/authorization/" instead of "https://docs.docker.com/ee/access-control".

* Add UCP user password limitation

* Revert "Updated the UCP 3.1.4 release notes to include Centos 7.6 support"

* Adding emphasis on Static IP requirement (#7276)

* Adding emphasis on Static IP requirement

We had a customer (00056641) who changed IPs like this all at once, and they ended up in a messy state. We should make it clear that static IPs are absolutely required.

```
***-ucp-0-dw original="10.15.89.6" updated="10.15.89.7"
***-ucp-1-dw original="10.15.89.5" updated="10.15.89.6"
***-ucp-2-dw original="10.15.89.7" updated="10.15.89.5"
```

* Link to prod requirement of static IP addresses

* Adding warning about layer7 config (#8617)

* Adding warning about layer7 config

Adding warning about layer7 config not being included in the backup

* Text edit

* Sync published with master (#8673)

* Revert "Netlify redirects interlock (#8595)"

This reverts commit a7793edc74.

* UCP Install on Azure Patch (#8522)

* Fix grammar on the 2nd pre-req, and did markdown formatting on the rest :)

* Correct Pod-CIDR Warning

* Content cleanup

Please check that I haven't changed the meaning of the updated prerequisites.

* Create a new section on configuring the IP Count value, also responded to feedback from Follis, Steve R and Xinfeng.

* Incorporated Steven F's feedback and Issue 8551

* Provide a warning when setting a small IP Count variable

* Final edits

* Update install-on-azure.md

* Following feedback, I have expanded on the 0644 azure.json file permissions and added the --existing-config flag to the UCP install command

* Removed Orchestrator Tag Pre Req from Azure Docs

* Clarifying need for 0644 permissions

* Improved backup commands (#8597)

* Improved backup commands

DTR image backup command improvements:

1. Local and NFS mount image backup commands were invalid (incorrectly used -C flag). Replaced them with commands that work.
2. The new commands automatically populate the correct replica ID and add a datestamp to the backup filename.

DTR Metadata backup command improvements:

DTR metadata backups are more difficult than they need to be and generate many support tickets. I updated the DTR command to avoid common user pitfalls:

1. The prior metadata backup command was subject to user error. Improved the command to automatically collect the DTR version and select a replica.
2. Improved security of the command by automatically collecting UCP CA certificate for verification rather than using --ucp-insecure-tls flag.
3. Improved the backup filename by adding the backed-up version information and date of backup. Knowledge of the version information is required for restoring a backup.
4. Described these improvements for the user.

Image backup commands were tested with local and NFS image storage. The metadata backup command was tested by running it directly on a DTR node and through a UCP client bundle with multiple replicas.

* Technical and editorial review

* More edits

* line 8; remove unnecessary a (#8672)

* line 8; remove unnecessary a

* Minor edit

* Updated the UCP Logging page to include UCP 3.1 screenshots (#8646)

* Added examples (#8599)

* Added examples

Added examples with more detail and automation to help customers backup DTR without creating support tickets.

* Linked to explanation of example command

@omegamormegil I removed the example with prepopulated fields, as I think it doesn't add much, and will only add confusion. Users who need this much detail can run the basic command and follow the terminal prompts.

We can re-add in a follow-up PR, if you think that example is crucial to this page.

* Remove deadlink in the Interlock ToC (#8668)

* Found a deadlink in the Interlock ToC

* Added Redirect

* Trying to fix command rendering of '--format "{{ .Names }}"' (#8678)

* Trying to fix command rendering of '--format "{{ .Names }}"'

--format "{{ .Names }}" is showing up in the markup but is rendering as --format "" in the published version. Added {% raw %} tags to try to fix.

* Fixed heading inconsistency

* Trying to fix command rendering of '--format "{{ .Names }}"' (#8677)

* Trying to fix command rendering of '--format "{{ .Names }}"'

--format "{{ .Names }}" is showing up in the markup but is rendering as --format "" in the published version. Added {% raw %} tags to try to fix.

* Update concatenated to chained

* Minor fix

* interlock --> ucp-interlock (#8675)

* interlock --> ucp-interlock

* Fixed code samples

- Use the latest UCP version and the latest ucp-interlock image
- Leverage ucp page version Jekyll variable

* Typo

* Final syntax fix

* Update backup.md

* Removed Reference to Interlock Preview Image, and added relevant UCP Image Org and Tag

* Fix syntax error which caused the master build to fail

* Sync published with master (#8695)

* Sync published with master (#8693) (#8694)

* Adding Azure note (#8566)

* Adding Azure note

* Rephrase additional line and update link

* Revert "Netlify redirects interlock (#8595)"

This reverts commit a7793edc74.

* UCP Install on Azure Patch (#8522)

* Fixed grammar on the 2nd pre-req and did markdown formatting on the rest :)

* Correct Pod-CIDR Warning

* Content cleanup

Please check that I haven't changed the meaning of the updated prerequisites.

* Create a new section on configuring the IP Count value, also responded to feedback from Follis, Steve R and Xinfeng.

* Incorporated Steven F's feedback and Issue 8551

* Provide a warning when setting a small IP Count variable

* Final edits

* Update install-on-azure.md

* Following feedback, I have expanded on the 0644 azure.json file permissions and added the --existing-config option to the UCP install command

* Removed Orchestrator Tag Pre Req from Azure Docs

* Clarifying need for 0644 permissions

* Improved backup commands (#8597)

* Improved backup commands

DTR image backup command improvements:

1. Local and NFS mount image backup commands were invalid (incorrectly used -C flag). Replaced them with commands that work.
2. The new commands automatically populate the correct replica ID and add a datestamp to the backup filename.

DTR Metadata backup command improvements:

DTR metadata backups are more difficult than they need to be and generate many support tickets. I updated the DTR command to avoid common user pitfalls:

1. The prior metadata backup command was subject to user error. Improved the command to automatically collect the DTR version and select a replica.
2. Improved security of the command by automatically collecting the UCP CA certificate for verification rather than using the --ucp-insecure-tls flag.
3. Improved the backup filename by adding the backed-up version information and date of backup. Knowledge of the version information is required for restoring a backup.
4. Described these improvements for the user.

Image backup commands were tested with local and NFS image storage. The metadata backup command was tested by running it directly on a DTR node and through a UCP client bundle with multiple replicas.

* Technical and editorial review

* More edits

* line 8; remove unnecessary a (#8672)

* line 8; remove unnecessary a

* Minor edit

* Updated the UCP Logging page to include UCP 3.1 screenshots (#8646)

* Added examples (#8599)

* Added examples

Added examples with more detail and automation to help customers back up DTR without creating support tickets.

* Linked to explanation of example command

@omegamormegil I removed the example with prepopulated fields, as I think it doesn't add much, and will only add confusion. Users who need this much detail can run the basic command and follow the terminal prompts.

We can re-add in a follow-up PR, if you think that example is crucial to this page.

* Remove deadlink in the Interlock ToC (#8668)

* Found a deadlink in the Interlock ToC

* Added Redirect

* Published (#8674)

* add slack webhook to Jenkinsfile

* make jenkinsfile serve private and public docs

After a couple of Jenkins-based mix-ups it became obvious we needed a Jenkinsfile that would serve both public and private projects, that we could move between repos without worry. This Jenkinsfile knows which images to build and push and which swarm services to update because of the use of git_url and branch conditions.

* Sync published with master (#8619)

* Update install.md

add note: 8 character password minimum length

* Include Ubuntu version in Dockerfile

more recent versions of Ubuntu don't work with the given Dockerfile
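
A minimal sketch of the kind of pin described here, assuming the example Dockerfile previously used a floating ubuntu base image (the specific release is an assumption):

```dockerfile
# Pin a known-good Ubuntu release rather than relying on whatever "ubuntu" currently resolves to.
FROM ubuntu:16.04
```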

* Updated the 3.1.4 release notes to include Centos 7.6 support

* Remove redundant "be"

* Update the "role-based access control" link

On page "https://docs.docker.com/ee/ucp/user-access/", update the hyperlink "role-based access control" to point to "https://docs.docker.com/ee/ucp/authorization/" instead of "https://docs.docker.com/ee/access-control".

* Add UCP user password limitation

* Revert "Updated the UCP 3.1.4 release notes to include Centos 7.6 support"

* Adding emphasis on Static IP requirement (#7276)

* Adding emphasis on Static IP requirement

We had a customer (00056641) who changed IPs like this all at once, and their cluster is now in a messy state. We should make it clear that static IPs are absolutely required.

```
***-ucp-0-dw original="10.15.89.6" updated="10.15.89.7"
***-ucp-1-dw original="10.15.89.5" updated="10.15.89.6"
***-ucp-2-dw original="10.15.89.7" updated="10.15.89.5"
```

* Link to prod requirement of static IP addresses

* Adding warning about layer7 config (#8617)

* Adding warning about layer7 config

Adding warning about layer7 config not being included in the backup

* Text edit

* Sync published with master (#8673)

* Revert "Netlify redirects interlock (#8595)"

This reverts commit a7793edc74.

* UCP Install on Azure Patch (#8522)

* Fixed grammar on the 2nd pre-req and did markdown formatting on the rest :)

* Correct Pod-CIDR Warning

* Content cleanup

Please check that I haven't changed the meaning of the updated prerequisites.

* Create a new section on configuring the IP Count value, also responded to feedback from Follis, Steve R and Xinfeng.

* Incorporated Steven F's feedback and Issue 8551

* Provide a warning when setting a small IP Count variable

* Final edits

* Update install-on-azure.md

* Following feedback, I have expanded on the 0644 azure.json file permissions and added the --existing-config option to the UCP install command

* Removed Orchestrator Tag Pre Req from Azure Docs

* Clarifying need for 0644 permissions

* Improved backup commands (#8597)

* Improved backup commands

DTR image backup command improvements:

1. Local and NFS mount image backup commands were invalid (incorrectly used -C flag). Replaced them with commands that work.
2. The new commands automatically populate the correct replica ID and add a datestamp to the backup filename.

DTR Metadata backup command improvements:

DTR metadata backups are more difficult than they need to be and generate many support tickets. I updated the DTR command to avoid common user pitfalls:

1. The prior metadata backup command was subject to user error. Improved the command to automatically collect the DTR version and select a replica.
2. Improved security of the command by automatically collecting the UCP CA certificate for verification rather than using the --ucp-insecure-tls flag.
3. Improved the backup filename by adding the backed-up version information and date of backup. Knowledge of the version information is required for restoring a backup.
4. Described these improvements for the user.

Image backup commands were tested with local and NFS image storage. The metadata backup command was tested by running it directly on a DTR node and through a UCP client bundle with multiple replicas.

* Technical and editorial review

* More edits

* line 8; remove unnecessary a (#8672)

* line 8; remove unnecessary a

* Minor edit

* Updated the UCP Logging page to include UCP 3.1 screenshots (#8646)

* Added examples (#8599)

* Added examples

Added examples with more detail and automation to help customers back up DTR without creating support tickets.

* Linked to explanation of example command

@omegamormegil I removed the example with prepopulated fields, as I think it doesn't add much, and will only add confusion. Users who need this much detail can run the basic command and follow the terminal prompts.

We can re-add in a follow-up PR, if you think that example is crucial to this page.

* Remove deadlink in the Interlock ToC (#8668)

* Found a deadlink in the Interlock ToC

* Added Redirect

* Trying to fix command rendering of '--format "{{ .Names }}"' (#8678)

* Trying to fix command rendering of '--format "{{ .Names }}"'

--format "{{ .Names }}" is showing up in the markup but is rendering as --format "" in the published version. Added {% raw %} tags to try to fix.

* Fixed heading inconsistency

* Trying to fix command rendering of '--format "{{ .Names }}"' (#8677)

* Trying to fix command rendering of '--format "{{ .Names }}"'

--format "{{ .Names }}" is showing up in the markup but is rendering as --format "" in the published version. Added {% raw %} tags to try to fix.

* Update concatenated to chained

* Minor fix

* interlock --> ucp-interlock (#8675)

* interlock --> ucp-interlock

* Fixed code samples

- Use the latest UCP version and the latest ucp-interlock image
- Leverage ucp page version Jekyll variable

* Typo

* Final syntax fix

* Update backup.md

* Removed Reference to Interlock Preview Image, and added relevant UCP Image Org and Tag

* Fix syntax error which caused the master build to fail

* docs: fix typo in removal of named volumes (#8686)

* Sync published with master (#8709)

* Sync published with master (#8693) (#8694)

* Adding Azure note (#8566)

* Rephrase additional line and update link

* Revert "Netlify redirects interlock (#8595)"

This reverts commit a7793edc74.

* UCP Install on Azure Patch (#8522)

* Improved backup commands (#8597)

* Improved backup commands

DTR image backup command improvements:

1. Local and NFS mount image ba…

* Follow-up cleanup (#1069)

* Delete interlock_service_clusters.png~HEAD

* Delete interlock_service_clusters.png~Raw content addition

* Clean up interlock files for Amberjack

* Remove merge markers in toc.yml

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>

* Add correct UCP interlock TOC entries

Fingers crossed on this one - did it from the browser. :D

* added api reference, fixed tech review comments

* Added patch release changelogs

* Update docker cli reference for 19.03

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>

* SAML SCIM update (#1073)

* Added SCIM entry

* SCIM content

* Updates per Ryan's feedback

* Removed delete

* Update per Ryan's feedback

* Minor wording changes

* Additional endpoints added

* Update per Ryan's feedback

* Metadata updates

* Anchor links added

* Updates per Maria

* Adding links to Docker for Mac and Windows Community content

* OSCAL TOC entry (#1083)

* Added Docker Desktop Enterprise 2.0.0.4-ent changelogs

Signed-off-by: Ulrich VACHON <ulrich.vachon@docker.com>

* minor updates to the public beta release notes

* gMSA info (#1074)

* Added gMSA note.

* Added gMSA bullet

* Added gMSA info

* Changes per Drew's feedback

* Updates per Drew's feedback

* Moved content per feedback

* Moved content per feedback

* Updates per Drew's feedback

* Update per feedback

* Update release-notes.md

* Update release notes

Public beta

* iSCSI info (#1075)

* Added raw content

* Added iscsi options

* Added iSCSI entry

* Images

* Clean up

* Updates per feedback

* Updates per Anusha

* Update to iscsi parameter

* Added updates per Deep's feedback

* Updates per Deep's feedback

* Updated iSCSI parameter description

* Update page versions for UCP and DTR
Commit ffe8ffd1e8 (parent c793295bc8), authored and committed by Maria Bermudez, 2019-05-16 11:39:36 -07:00.
182 changed files with 7302 additions and 454 deletions.

Jenkinsfile vendored
View File

@ -246,4 +246,4 @@ pipeline {
}
}
}
}
}

View File

@ -106,7 +106,7 @@ defaults:
values:
dtr_org: "docker"
dtr_repo: "dtr"
dtr_version: "2.6.6"
dtr_version: "2.7.0-beta4"
- scope:
path: "datacenter/dtr/2.5"
values:
@ -149,15 +149,15 @@ defaults:
values:
ucp_org: "docker"
ucp_repo: "ucp"
ucp_version: "3.1.7"
ucp_version: "3.2.0-beta4"
- scope: # This is a bit of a hack for the get-support.md topic.
path: "ee"
values:
ucp_org: "docker"
ucp_repo: "ucp"
dtr_repo: "dtr"
ucp_version: "3.1.7"
dtr_version: "2.6.6"
ucp_version: "3.2.0-beta4"
dtr_version: "2.7.0-beta4"
- scope:
path: "datacenter/ucp/3.0"
values:

View File

@ -0,0 +1,23 @@
command: docker template
short: Use templates to quickly create new services
long: Use templates to quickly create new services
pname: docker
plink: docker.yaml
cname:
- docker template config
- docker template inspect
- docker template list
- docker template scaffold
- docker template version
clink:
- docker_template_config.yaml
- docker_template_inspect.yaml
- docker_template_list.yaml
- docker_template_scaffold.yaml
- docker_template_version.yaml
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false

View File

@ -0,0 +1,17 @@
command: docker template config
short: Modify docker template configuration
long: Modify docker template configuration
pname: docker template
plink: docker_template.yaml
cname:
- docker template config set
- docker template config view
clink:
- docker_template_config_set.yaml
- docker_template_config_view.yaml
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false

View File

@ -0,0 +1,48 @@
command: docker template config set
short: set default values for docker template
long: set default values for docker template
usage: docker template config set
pname: docker template config
plink: docker_template_config.yaml
options:
- option: feedback
value_type: bool
default_value: "false"
description: |
Send anonymous feedback about usage (performance, failure status, os, version)
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: no-feedback
value_type: bool
default_value: "false"
description: Don't send anonymous feedback
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: org
value_type: string
description: Set default organization / docker hub user
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: server
value_type: string
description: Set default registry server (host[:port])
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false

View File

@ -0,0 +1,22 @@
command: docker template config view
short: view default values for docker template
long: view default values for docker template
usage: docker template config view
pname: docker template config
plink: docker_template_config.yaml
options:
- option: format
value_type: string
default_value: yaml
description: Configure the output format (json|yaml)
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false

View File

@ -0,0 +1,22 @@
command: docker template inspect
short: Inspect service templates or application templates
long: Inspect service templates or application templates
usage: docker template inspect <service or application>
pname: docker template
plink: docker_template.yaml
options:
- option: format
value_type: string
default_value: pretty
description: Configure the output format (pretty|json|yaml)
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false

View File

@ -0,0 +1,32 @@
command: docker template list
aliases: ls
short: List available templates with their informations
long: List available templates with their informations
usage: docker template list
pname: docker template
plink: docker_template.yaml
options:
- option: format
value_type: string
default_value: pretty
description: Configure the output format (pretty|json|yaml)
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: type
value_type: string
default_value: all
description: Filter by type (application|service|all)
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false

View File

@ -0,0 +1,70 @@
command: docker template scaffold
short: Choose an application template or service template(s) and scaffold a new project
long: Choose an application template or service template(s) and scaffold a new project
usage: docker template scaffold application [<alias=service>...] OR scaffold [alias=]service
[<[alias=]service>...]
pname: docker template
plink: docker_template.yaml
options:
- option: name
value_type: string
description: Application name
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: org
value_type: string
description: |
Deploy to a specific organization / docker hub user (if not specified, it will use your current hub login)
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: path
value_type: string
description: Deploy to a specific path
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: platform
value_type: string
default_value: linux
description: Target platform (linux|windows)
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: server
value_type: string
description: Deploy to a specific registry server (host[:port])
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: set
shorthand: s
value_type: stringArray
default_value: '[]'
description: Override parameters values (service.name=value)
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
examples: "docker template scaffold react-java-mysql -s back.java=10 -s front.externalPort=80
\ndocker template scaffold react-java-mysql java=back reactjs=front -s reactjs.externalPort=80
\ndocker template scaffold back=spring front=react -s back.externalPort=9000 \ndocker
template scaffold react-java-mysql --server=myregistry:5000 --org=myorg"
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false

View File

@ -0,0 +1,12 @@
command: docker template version
short: Print version information
long: Print version information
usage: docker template version
pname: docker template
plink: docker_template.yaml
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false

View File

@ -7,6 +7,7 @@ cname:
- docker commit
- docker config
- docker container
- docker context
- docker cp
- docker create
- docker deploy
@ -66,6 +67,7 @@ clink:
- docker_commit.yaml
- docker_config.yaml
- docker_container.yaml
- docker_context.yaml
- docker_cp.yaml
- docker_create.yaml
- docker_deploy.yaml

View File

@ -16,8 +16,8 @@ long: |-
To stop a container, use `CTRL-c`. This key sequence sends `SIGKILL` to the
container. If `--sig-proxy` is true (the default),`CTRL-c` sends a `SIGINT` to
the container. You can detach from a container and leave it running using the
`CTRL-p CTRL-q` key sequence.
the container. If the container was run with `-i` and `-t`, you can detach from
a container and leave it running using the `CTRL-p CTRL-q` key sequence.
> **Note:**
> A process running as PID 1 inside a container is treated specially by

View File

@ -284,6 +284,17 @@ options:
experimentalcli: false
kubernetes: false
swarm: false
- option: output
shorthand: o
value_type: stringArray
default_value: '[]'
description: 'Output destination (format: type=local,dest=path)'
deprecated: false
min_api_version: "1.40"
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: platform
value_type: string
description: Set platform if server is multi-platform capable
@ -551,13 +562,13 @@ examples: "### Build with PATH\n\n```bash\n$ docker build .\n\nUploading context
is preserved with this method.\n\nThe `--squash` option is an experimental feature,
and should not be considered\nstable.\n\n\nSquashing layers can be beneficial if
your Dockerfile produces multiple layers\nmodifying the same files, for example,
file that are created in one step, and\nremoved in another step. For other use-cases,
files that are created in one step, and\nremoved in another step. For other use-cases,
squashing images may actually have\na negative impact on performance; when pulling
an image consisting of multiple\nlayers, layers can be pulled in parallel, and allows
sharing layers between\nimages (saving space).\n\nFor most use cases, multi-stage
are a better alternative, as they give more\nfine-grained control over your build,
and can take advantage of future\noptimizations in the builder. Refer to the [use
multi-stage builds](https://docs.docker.com/develop/develop-images/multistage-build/)\nsection
builds are a better alternative, as they give more\nfine-grained control over your
build, and can take advantage of future\noptimizations in the builder. Refer to
the [use multi-stage builds](https://docs.docker.com/develop/develop-images/multistage-build/)\nsection
in the userguide for more information.\n\n\n#### Known limitations\n\nThe `--squash`
option has a number of known limitations:\n\n- When squashing layers, the resulting
image cannot take advantage of layer\n sharing with other images, and may use significantly
@ -568,7 +579,7 @@ examples: "### Build with PATH\n\n```bash\n$ docker build .\n\nUploading context
\ impact on performance, as a single layer takes longer to extract, and\n downloading
a single layer cannot be parallelized.\n- When attempting to squash an image that
does not make changes to the\n filesystem (for example, the Dockerfile only contains
`ENV` instructions),\n the squash step will fail (see [issue #33823](https://github.com/moby/moby/issues/33823)\n\n####
`ENV` instructions),\n the squash step will fail (see [issue #33823](https://github.com/moby/moby/issues/33823)).\n\n####
Prerequisites\n\nThe example on this page is using experimental mode in Docker 1.13.\n\nExperimental
mode can be enabled by using the `--experimental` flag when starting the Docker
daemon or setting `experimental: true` in the `daemon.json` configuration file.\n\nBy

View File

@ -5,10 +5,13 @@ usage: docker builder
pname: docker
plink: docker.yaml
cname:
- docker builder build
- docker builder prune
clink:
- docker_builder_build.yaml
- docker_builder_prune.yaml
deprecated: false
min_api_version: "1.31"
experimental: false
experimentalcli: false
kubernetes: false

View File

@ -0,0 +1,335 @@
command: docker builder build
short: Build an image from a Dockerfile
long: Build an image from a Dockerfile
usage: docker builder build [OPTIONS] PATH | URL | -
pname: docker builder
plink: docker_builder.yaml
options:
- option: add-host
value_type: list
description: Add a custom host-to-IP mapping (host:ip)
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: build-arg
value_type: list
description: Set build-time variables
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: cache-from
value_type: stringSlice
default_value: '[]'
description: Images to consider as cache sources
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: cgroup-parent
value_type: string
description: Optional parent cgroup for the container
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: compress
value_type: bool
default_value: "false"
description: Compress the build context using gzip
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: cpu-period
value_type: int64
default_value: "0"
description: Limit the CPU CFS (Completely Fair Scheduler) period
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: cpu-quota
value_type: int64
default_value: "0"
description: Limit the CPU CFS (Completely Fair Scheduler) quota
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: cpu-shares
shorthand: c
value_type: int64
default_value: "0"
description: CPU shares (relative weight)
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: cpuset-cpus
value_type: string
description: CPUs in which to allow execution (0-3, 0,1)
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: cpuset-mems
value_type: string
description: MEMs in which to allow execution (0-3, 0,1)
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: disable-content-trust
value_type: bool
default_value: "true"
description: Skip image verification
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: file
shorthand: f
value_type: string
description: Name of the Dockerfile (Default is 'PATH/Dockerfile')
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: force-rm
value_type: bool
default_value: "false"
description: Always remove intermediate containers
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: iidfile
value_type: string
description: Write the image ID to the file
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: isolation
value_type: string
description: Container isolation technology
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: label
value_type: list
description: Set metadata for an image
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: memory
shorthand: m
value_type: bytes
default_value: "0"
description: Memory limit
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: memory-swap
value_type: bytes
default_value: "0"
description: |
Swap limit equal to memory plus swap: '-1' to enable unlimited swap
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: network
value_type: string
default_value: default
description: |
Set the networking mode for the RUN instructions during build
deprecated: false
min_api_version: "1.25"
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: no-cache
value_type: bool
default_value: "false"
description: Do not use cache when building the image
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: output
shorthand: o
value_type: stringArray
default_value: '[]'
description: 'Output destination (format: type=local,dest=path)'
deprecated: false
min_api_version: "1.40"
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: platform
value_type: string
description: Set platform if server is multi-platform capable
deprecated: false
min_api_version: "1.32"
experimental: true
experimentalcli: false
kubernetes: false
swarm: false
- option: progress
value_type: string
default_value: auto
description: |
Set type of progress output (auto, plain, tty). Use plain to show container output
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: pull
value_type: bool
default_value: "false"
description: Always attempt to pull a newer version of the image
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: quiet
shorthand: q
value_type: bool
default_value: "false"
description: Suppress the build output and print image ID on success
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: rm
value_type: bool
default_value: "true"
description: Remove intermediate containers after a successful build
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: secret
value_type: stringArray
default_value: '[]'
description: |
Secret file to expose to the build (only if BuildKit enabled): id=mysecret,src=/local/secret
deprecated: false
min_api_version: "1.39"
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: security-opt
value_type: stringSlice
default_value: '[]'
description: Security options
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: shm-size
value_type: bytes
default_value: "0"
description: Size of /dev/shm
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: squash
value_type: bool
default_value: "false"
description: Squash newly built layers into a single new layer
deprecated: false
min_api_version: "1.25"
experimental: true
experimentalcli: false
kubernetes: false
swarm: false
- option: ssh
value_type: stringArray
default_value: '[]'
description: |
SSH agent socket or keys to expose to the build (only if BuildKit enabled) (format: default|<id>[=<socket>|<key>[,<key>]])
deprecated: false
min_api_version: "1.39"
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: stream
value_type: bool
default_value: "false"
description: Stream attaches to server to negotiate build context
deprecated: false
min_api_version: "1.31"
experimental: true
experimentalcli: false
kubernetes: false
swarm: false
- option: tag
shorthand: t
value_type: list
description: Name and optionally a tag in the 'name:tag' format
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: target
value_type: string
description: Set the target build stage to build.
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: ulimit
value_type: ulimit
default_value: '[]'
description: Ulimit options
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
deprecated: false
min_api_version: "1.31"
experimental: false
experimentalcli: false
kubernetes: false
swarm: false

View File

@ -18,6 +18,7 @@ options:
value_type: string
description: Template driver
deprecated: false
min_api_version: "1.37"
experimental: false
experimentalcli: false
kubernetes: false

View File

@ -259,6 +259,14 @@ options:
experimentalcli: false
kubernetes: false
swarm: false
- option: domainname
value_type: string
description: Container NIS domain name
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: entrypoint
value_type: string
description: Overwrite the default ENTRYPOINT of the image
@ -292,6 +300,15 @@ options:
experimentalcli: false
kubernetes: false
swarm: false
- option: gpus
value_type: gpu-request
description: GPU devices to add to the container ('all' to pass all GPUs)
deprecated: false
min_api_version: "1.40"
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: group-add
value_type: list
description: Add additional groups to join
@ -560,8 +577,7 @@ options:
kubernetes: false
swarm: false
- option: net
value_type: string
default_value: default
value_type: network
description: Connect a container to a network
deprecated: false
experimental: false
@ -577,8 +593,7 @@ options:
kubernetes: false
swarm: false
- option: network
value_type: string
default_value: default
value_type: network
description: Connect a container to a network
deprecated: false
experimental: false

View File

@ -277,6 +277,14 @@ options:
experimentalcli: false
kubernetes: false
swarm: false
- option: domainname
value_type: string
description: Container NIS domain name
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: entrypoint
value_type: string
description: Overwrite the default ENTRYPOINT of the image
@ -310,6 +318,15 @@ options:
experimentalcli: false
kubernetes: false
swarm: false
- option: gpus
value_type: gpu-request
description: GPU devices to add to the container ('all' to pass all GPUs)
deprecated: false
min_api_version: "1.40"
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: group-add
value_type: list
description: Add additional groups to join
@ -578,8 +595,7 @@ options:
kubernetes: false
swarm: false
- option: net
value_type: string
default_value: default
value_type: network
description: Connect a container to a network
deprecated: false
experimental: false
@ -595,8 +611,7 @@ options:
kubernetes: false
swarm: false
- option: network
value_type: string
default_value: default
value_type: network
description: Connect a container to a network
deprecated: false
experimental: false

View File

@ -126,6 +126,16 @@ options:
experimentalcli: false
kubernetes: false
swarm: false
- option: pids-limit
value_type: int64
default_value: "0"
description: Tune container pids limit (set -1 for unlimited)
deprecated: false
min_api_version: "1.40"
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: restart
value_type: string
description: Restart policy to apply when a container exits

View File

@ -0,0 +1,30 @@
command: docker context
short: Manage contexts
long: Manage contexts
usage: docker context
pname: docker
plink: docker.yaml
cname:
- docker context create
- docker context export
- docker context import
- docker context inspect
- docker context ls
- docker context rm
- docker context update
- docker context use
clink:
- docker_context_create.yaml
- docker_context_export.yaml
- docker_context_import.yaml
- docker_context_inspect.yaml
- docker_context_ls.yaml
- docker_context_rm.yaml
- docker_context_update.yaml
- docker_context_use.yaml
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false

View File

@ -0,0 +1,116 @@
command: docker context create
short: Create a context
long: |-
Creates a new `context`. This allows you to quickly switch the cli
configuration to connect to different clusters or single nodes.
To create a context from scratch provide the docker and, if required,
kubernetes options. The example below creates the context `my-context`
with a docker endpoint of `/var/run/docker.sock` and a kubernetes configuration
sourced from the file `/home/me/my-kube-config`:
```bash
$ docker context create my-context \
--docker host=/var/run/docker.sock \
--kubernetes config-file=/home/me/my-kube-config
```
Use the `--from=<context-name>` option to create a new context from
an existing context. The example below creates a new context named `my-context`
from the existing context `existing-context`:
```bash
$ docker context create my-context --from existing-context
```
If the `--from` option is not set, the `context` is created from the current context:
```bash
$ docker context create my-context
```
This can be used to create a context out of an existing `DOCKER_HOST` based script:
```bash
$ source my-setup-script.sh
$ docker context create my-context
```
To source only the `docker` endpoint configuration from an existing context
use the `--docker from=<context-name>` option. The example below creates a
new context named `my-context` using the docker endpoint configuration from
the existing context `existing-context` and a kubernetes configuration sourced
from the file `/home/me/my-kube-config`:
```bash
$ docker context create my-context \
--docker from=existing-context \
--kubernetes config-file=/home/me/my-kube-config
```
To source only the `kubernetes` configuration from an existing context use the
`--kubernetes from=<context-name>` option. The example below creates a new
context named `my-context` using the kuberentes configuration from the existing
context `existing-context` and a docker endpoint of `/var/run/docker.sock`:
```bash
$ docker context create my-context \
--docker host=/var/run/docker.sock \
--kubernetes from=existing-context
```
Docker and Kubernetes endpoints configurations, as well as default stack
orchestrator and description can be modified with `docker context update`
usage: docker context create [OPTIONS] CONTEXT
pname: docker context
plink: docker_context.yaml
options:
- option: default-stack-orchestrator
value_type: string
description: |
Default orchestrator for stack operations to use with this context (swarm|kubernetes|all)
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: description
value_type: string
description: Description of the context
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: docker
value_type: stringToString
default_value: '[]'
description: set the docker endpoint
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: from
value_type: string
description: create context from a named context
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: kubernetes
value_type: stringToString
default_value: '[]'
description: set the kubernetes endpoint
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false

View File

@ -0,0 +1,25 @@
command: docker context export
short: Export a context to a tar or kubeconfig file
long: |-
Exports a context in a file that can then be used with `docker context import` (or with `kubectl` if `--kubeconfig` is set).
Default output filename is `<CONTEXT>.dockercontext`, or `<CONTEXT>.kubeconfig` if `--kubeconfig` is set.
To export to `STDOUT`, you can run `docker context export my-context -`.
usage: docker context export [OPTIONS] CONTEXT [FILE|-]
pname: docker context
plink: docker_context.yaml
options:
- option: kubeconfig
value_type: bool
default_value: "false"
description: Export as a kubeconfig file
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false

View File

@ -0,0 +1,13 @@
command: docker context import
short: Import a context from a tar file
long: Imports a context previously exported with `docker context export`. To import
from stdin, use a hyphen (`-`) as filename.
usage: docker context import CONTEXT FILE|-
pname: docker context
plink: docker_context.yaml
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false

View File

@ -0,0 +1,60 @@
command: docker context inspect
short: Display detailed information on one or more contexts
long: Inspects one or more contexts.
usage: docker context inspect [OPTIONS] [CONTEXT] [CONTEXT...]
pname: docker context
plink: docker_context.yaml
options:
- option: format
shorthand: f
value_type: string
description: Format the output using the given Go template
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
examples: |-
### Inspect a context by name
```bash
$ docker context inspect "local+aks"
[
{
"Name": "local+aks",
"Metadata": {
"Description": "Local Docker Engine + Azure AKS endpoint",
"StackOrchestrator": "kubernetes"
},
"Endpoints": {
"docker": {
"Host": "npipe:////./pipe/docker_engine",
"SkipTLSVerify": false
},
"kubernetes": {
"Host": "https://simon-aks-***.hcp.uksouth.azmk8s.io:443",
"SkipTLSVerify": false,
"DefaultNamespace": "default"
}
},
"TLSMaterial": {
"kubernetes": [
"ca.pem",
"cert.pem",
"key.pem"
]
},
"Storage": {
"MetadataPath": "C:\\Users\\simon\\.docker\\contexts\\meta\\cb6d08c0a1bfa5fe6f012e61a442788c00bed93f509141daff05f620fc54ddee",
"TLSPath": "C:\\Users\\simon\\.docker\\contexts\\tls\\cb6d08c0a1bfa5fe6f012e61a442788c00bed93f509141daff05f620fc54ddee"
}
}
]
```
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false

View File

@ -0,0 +1,32 @@
command: docker context ls
aliases: list
short: List contexts
long: List contexts
usage: docker context ls [OPTIONS]
pname: docker context
plink: docker_context.yaml
options:
- option: format
value_type: string
description: Pretty-print contexts using a Go template
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: quiet
shorthand: q
value_type: bool
default_value: "false"
description: Only show context names
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false

View File

@ -0,0 +1,24 @@
command: docker context rm
aliases: remove
short: Remove one or more contexts
long: Remove one or more contexts
usage: docker context rm CONTEXT [CONTEXT...]
pname: docker context
plink: docker_context.yaml
options:
- option: force
shorthand: f
value_type: bool
default_value: "false"
description: Force the removal of a context in use
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false

View File

@ -0,0 +1,50 @@
command: docker context update
short: Update a context
long: |-
Updates an existing `context`.
See [context create](context_create.md)
usage: docker context update [OPTIONS] CONTEXT
pname: docker context
plink: docker_context.yaml
options:
- option: default-stack-orchestrator
value_type: string
description: |
Default orchestrator for stack operations to use with this context (swarm|kubernetes|all)
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: description
value_type: string
description: Description of the context
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: docker
value_type: stringToString
default_value: '[]'
description: set the docker endpoint
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: kubernetes
value_type: stringToString
default_value: '[]'
description: set the kubernetes endpoint
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false

View File

@ -0,0 +1,14 @@
command: docker context use
short: Set the current docker context
long: |-
Set the default context to use, when `DOCKER_HOST`, `DOCKER_CONTEXT` environment variables and `--host`, `--context` global options are not set.
To disable usage of contexts, you can use the special `default` context.
usage: docker context use CONTEXT
pname: docker context
plink: docker_context.yaml
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false

View File

@ -270,6 +270,14 @@ options:
experimentalcli: false
kubernetes: false
swarm: false
- option: domainname
value_type: string
description: Container NIS domain name
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: entrypoint
value_type: string
description: Overwrite the default ENTRYPOINT of the image
@ -303,6 +311,15 @@ options:
experimentalcli: false
kubernetes: false
swarm: false
- option: gpus
value_type: gpu-request
description: GPU devices to add to the container ('all' to pass all GPUs)
deprecated: false
min_api_version: "1.40"
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: group-add
value_type: list
description: Add additional groups to join
@ -571,8 +588,7 @@ options:
kubernetes: false
swarm: false
- option: net
value_type: string
default_value: default
value_type: network
description: Connect a container to a network
deprecated: false
experimental: false
@ -588,8 +604,7 @@ options:
kubernetes: false
swarm: false
- option: network
value_type: string
default_value: default
value_type: network
description: Connect a container to a network
deprecated: false
experimental: false

View File

@ -182,6 +182,17 @@ options:
experimentalcli: false
kubernetes: false
swarm: false
- option: output
shorthand: o
value_type: stringArray
default_value: '[]'
description: 'Output destination (format: type=local,dest=path)'
deprecated: false
min_api_version: "1.40"
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: platform
value_type: string
description: Set platform if server is multi-platform capable

View File

@ -33,6 +33,16 @@ options:
experimentalcli: false
kubernetes: false
swarm: false
- option: quiet
shorthand: q
value_type: bool
default_value: "false"
description: Suppress verbose output
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
deprecated: false
experimental: false
experimentalcli: false

View File

@ -307,6 +307,16 @@ examples: |-
busybox glibc 21c16b6787c6 5 weeks ago 4.19 MB
```
Filtering with multiple `reference` would give, either match A or B:
```bash
$ docker images --filter=reference='busy*:uclibc' --filter=reference='busy*:glibc'
REPOSITORY TAG IMAGE ID CREATED SIZE
busybox uclibc e02e811dd08f 5 weeks ago 1.09 MB
busybox glibc 21c16b6787c6 5 weeks ago 4.19 MB
```
### Format the output
The formatting option (`--format`) will pretty print container output

View File

@ -31,203 +31,66 @@ options:
experimentalcli: false
kubernetes: false
swarm: false
examples: |-
### Show output
The example below shows the output for a daemon running on Red Hat Enterprise Linux,
using the `devicemapper` storage driver. As can be seen in the output, additional
information about the `devicemapper` storage driver is shown:
```bash
$ docker info
Containers: 14
Running: 3
Paused: 1
Stopped: 10
Images: 52
Server Version: 1.10.3
Storage Driver: devicemapper
Pool Name: docker-202:2-25583803-pool
Pool Blocksize: 65.54 kB
Base Device Size: 10.74 GB
Backing Filesystem: xfs
Data file: /dev/loop0
Metadata file: /dev/loop1
Data Space Used: 1.68 GB
Data Space Total: 107.4 GB
Data Space Available: 7.548 GB
Metadata Space Used: 2.322 MB
Metadata Space Total: 2.147 GB
Metadata Space Available: 2.145 GB
Udev Sync Supported: true
Deferred Removal Enabled: false
Deferred Deletion Enabled: false
Deferred Deleted Device Count: 0
Data loop file: /var/lib/docker/devicemapper/devicemapper/data
Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
Library Version: 1.02.107-RHEL7 (2015-12-01)
Execution Driver: native-0.2
Logging Driver: json-file
Plugins:
Volume: local
Network: null host bridge
Kernel Version: 3.10.0-327.el7.x86_64
Operating System: Red Hat Enterprise Linux Server 7.2 (Maipo)
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 991.7 MiB
Name: ip-172-30-0-91.ec2.internal
ID: I54V:OLXT:HVMM:TPKO:JPHQ:CQCD:JNLC:O3BZ:4ZVJ:43XJ:PFHZ:6N2S
Docker Root Dir: /var/lib/docker
Debug mode (client): false
Debug mode (server): false
Username: gordontheturtle
Registry: https://index.docker.io/v1/
Insecure registries:
myinsecurehost:5000
127.0.0.0/8
```
### Show debugging output
Here is a sample output for a daemon running on Ubuntu, using the overlay2
storage driver and a node that is part of a 2-node swarm:
```bash
$ docker -D info
Containers: 14
Running: 3
Paused: 1
Stopped: 10
Images: 52
Server Version: 1.13.0
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: false
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Swarm: active
NodeID: rdjq45w1op418waxlairloqbm
Is Manager: true
ClusterID: te8kdyw33n36fqiz74bfjeixd
Managers: 1
Nodes: 2
Orchestration:
Task History Retention Limit: 5
Raft:
Snapshot Interval: 10000
Number of Old Snapshots to Retain: 0
Heartbeat Tick: 1
Election Tick: 3
Dispatcher:
Heartbeat Period: 5 seconds
CA Configuration:
Expiry Duration: 3 months
Root Rotation In Progress: false
Node Address: 172.16.66.128 172.16.66.129
Manager Addresses:
172.16.66.128:2477
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 8517738ba4b82aff5662c97ca4627e7e4d03b531
runc version: ac031b5bf1cc92239461125f4c1ffb760522bbf2
init version: N/A (expected: v0.13.0)
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 4.4.0-31-generic
Operating System: Ubuntu 16.04.1 LTS
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 1.937 GiB
Name: ubuntu
ID: H52R:7ZR6:EIIA:76JG:ORIY:BVKF:GSFU:HNPG:B5MK:APSC:SZ3Q:N326
Docker Root Dir: /var/lib/docker
Debug Mode (client): true
Debug Mode (server): true
File Descriptors: 30
Goroutines: 123
System Time: 2016-11-12T17:24:37.955404361-08:00
EventsListeners: 0
Http Proxy: http://test:test@proxy.example.com:8080
Https Proxy: https://test:test@proxy.example.com:8080
No Proxy: localhost,127.0.0.1,docker-registry.somecorporation.com
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
Labels:
storage=ssd
staging=true
Experimental: false
Insecure Registries:
127.0.0.0/8
Registry Mirrors:
http://192.168.1.2/
http://registry-mirror.example.com:5000/
Live Restore Enabled: false
```
The global `-D` option causes all `docker` commands to output debug information.
### Format the output
You can also specify the output format:
```bash
$ docker info --format '{{json .}}'
{"ID":"I54V:OLXT:HVMM:TPKO:JPHQ:CQCD:JNLC:O3BZ:4ZVJ:43XJ:PFHZ:6N2S","Containers":14, ...}
```
### Run `docker info` on Windows
Here is a sample output for a daemon running on Windows Server 2016:
```none
E:\docker>docker info
Containers: 1
Running: 0
Paused: 0
Stopped: 1
Images: 17
Server Version: 1.13.0
Storage Driver: windowsfilter
Windows:
Logging Driver: json-file
Plugins:
Volume: local
Network: nat null overlay
Swarm: inactive
Default Isolation: process
Kernel Version: 10.0 14393 (14393.206.amd64fre.rs1_release.160912-1937)
Operating System: Windows Server 2016 Datacenter
OSType: windows
Architecture: x86_64
CPUs: 8
Total Memory: 3.999 GiB
Name: WIN-V0V70C0LU5P
ID: NYMS:B5VK:UMSL:FVDZ:EWB5:FKVK:LPFL:FJMQ:H6FT:BZJ6:L2TD:XH62
Docker Root Dir: C:\control
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Insecure Registries:
127.0.0.0/8
Registry Mirrors:
http://192.168.1.2/
http://registry-mirror.example.com:5000/
Live Restore Enabled: false
```
examples: "### Show output\n\nThe example below shows the output for a daemon running
on Red Hat Enterprise Linux,\nusing the `devicemapper` storage driver. As can be
seen in the output, additional\ninformation about the `devicemapper` storage driver
is shown:\n\n```bash\n$ docker info\nClient:\n Debug Mode: false\n\nServer:\n Containers:
14\n Running: 3\n Paused: 1\n Stopped: 10\n Images: 52\n Server Version: 1.10.3\n
Storage Driver: devicemapper\n Pool Name: docker-202:2-25583803-pool\n Pool Blocksize:
65.54 kB\n Base Device Size: 10.74 GB\n Backing Filesystem: xfs\n Data file:
/dev/loop0\n Metadata file: /dev/loop1\n Data Space Used: 1.68 GB\n Data Space
Total: 107.4 GB\n Data Space Available: 7.548 GB\n Metadata Space Used: 2.322
MB\n Metadata Space Total: 2.147 GB\n Metadata Space Available: 2.145 GB\n Udev
Sync Supported: true\n Deferred Removal Enabled: false\n Deferred Deletion Enabled:
false\n Deferred Deleted Device Count: 0\n Data loop file: /var/lib/docker/devicemapper/devicemapper/data\n
\ Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata\n Library
Version: 1.02.107-RHEL7 (2015-12-01)\n Execution Driver: native-0.2\n Logging Driver:
json-file\n Plugins:\n Volume: local\n Network: null host bridge\n Kernel Version:
3.10.0-327.el7.x86_64\n Operating System: Red Hat Enterprise Linux Server 7.2 (Maipo)\n
OSType: linux\n Architecture: x86_64\n CPUs: 1\n Total Memory: 991.7 MiB\n Name:
ip-172-30-0-91.ec2.internal\n ID: I54V:OLXT:HVMM:TPKO:JPHQ:CQCD:JNLC:O3BZ:4ZVJ:43XJ:PFHZ:6N2S\n
Docker Root Dir: /var/lib/docker\n Debug Mode: false\n Username: gordontheturtle\n
Registry: https://index.docker.io/v1/\n Insecure registries:\n myinsecurehost:5000\n
\ 127.0.0.0/8\n```\n \n### Show debugging output\n\nHere is a sample output for
a daemon running on Ubuntu, using the overlay2\nstorage driver and a node that is
part of a 2-node swarm:\n\n```bash\n$ docker -D info\nClient:\n Debug Mode: true\n\nServer:\n
Containers: 14\n Running: 3\n Paused: 1\n Stopped: 10\n Images: 52\n Server Version:
1.13.0\n Storage Driver: overlay2\n Backing Filesystem: extfs\n Supports d_type:
true\n Native Overlay Diff: false\n Logging Driver: json-file\n Cgroup Driver:
cgroupfs\n Plugins:\n Volume: local\n Network: bridge host macvlan null overlay\n
Swarm: active\n NodeID: rdjq45w1op418waxlairloqbm\n Is Manager: true\n ClusterID:
te8kdyw33n36fqiz74bfjeixd\n Managers: 1\n Nodes: 2\n Orchestration:\n Task
History Retention Limit: 5\n Raft:\n Snapshot Interval: 10000\n Number of Old
Snapshots to Retain: 0\n Heartbeat Tick: 1\n Election Tick: 3\n Dispatcher:\n
\ Heartbeat Period: 5 seconds\n CA Configuration:\n Expiry Duration: 3 months\n
\ Root Rotation In Progress: false\n Node Address: 172.16.66.128 172.16.66.129\n
\ Manager Addresses:\n 172.16.66.128:2477\n Runtimes: runc\n Default Runtime:
runc\n Init Binary: docker-init\n containerd version: 8517738ba4b82aff5662c97ca4627e7e4d03b531\n
runc version: ac031b5bf1cc92239461125f4c1ffb760522bbf2\n init version: N/A (expected:
v0.13.0)\n Security Options:\n apparmor\n seccomp\n Profile: default\n Kernel
Version: 4.4.0-31-generic\n Operating System: Ubuntu 16.04.1 LTS\n OSType: linux\n
Architecture: x86_64\n CPUs: 2\n Total Memory: 1.937 GiB\n Name: ubuntu\n ID: H52R:7ZR6:EIIA:76JG:ORIY:BVKF:GSFU:HNPG:B5MK:APSC:SZ3Q:N326\n
Docker Root Dir: /var/lib/docker\n Debug Mode: true\n File Descriptors: 30\n Goroutines:
123\n System Time: 2016-11-12T17:24:37.955404361-08:00\n EventsListeners: 0\n
Http Proxy: http://test:test@proxy.example.com:8080\n Https Proxy: https://test:test@proxy.example.com:8080\n
No Proxy: localhost,127.0.0.1,docker-registry.somecorporation.com\n Registry: https://index.docker.io/v1/\n
WARNING: No swap limit support\n Labels:\n storage=ssd\n staging=true\n Experimental:
false\n Insecure Registries:\n 127.0.0.0/8\n Registry Mirrors:\n http://192.168.1.2/\n
\ http://registry-mirror.example.com:5000/\n Live Restore Enabled: false\n```\n\nThe
global `-D` option causes all `docker` commands to output debug information.\n\n###
Format the output\n\nYou can also specify the output format:\n\n```bash\n$ docker
info --format '{{json .}}'\n\n{\"ID\":\"I54V:OLXT:HVMM:TPKO:JPHQ:CQCD:JNLC:O3BZ:4ZVJ:43XJ:PFHZ:6N2S\",\"Containers\":14,
...}\n```\n\n### Run `docker info` on Windows\n\nHere is a sample output for a daemon
running on Windows Server 2016:\n\n```none\nE:\\docker>docker info\nClient:\n Debug
Mode: false\n\nServer:\n Containers: 1\n Running: 0\n Paused: 0\n Stopped: 1\n
Images: 17\n Server Version: 1.13.0\n Storage Driver: windowsfilter\n Windows:\n
Logging Driver: json-file\n Plugins:\n Volume: local\n Network: nat null overlay\n
Swarm: inactive\n Default Isolation: process\n Kernel Version: 10.0 14393 (14393.206.amd64fre.rs1_release.160912-1937)\n
Operating System: Windows Server 2016 Datacenter\n OSType: windows\n Architecture:
x86_64\n CPUs: 8\n Total Memory: 3.999 GiB\n Name: WIN-V0V70C0LU5P\n ID: NYMS:B5VK:UMSL:FVDZ:EWB5:FKVK:LPFL:FJMQ:H6FT:BZJ6:L2TD:XH62\n
Docker Root Dir: C:\\control\n Debug Mode: false\n Registry: https://index.docker.io/v1/\n
Insecure Registries:\n 127.0.0.0/8\n Registry Mirrors:\n http://192.168.1.2/\n
\ http://registry-mirror.example.com:5000/\n Live Restore Enabled: false\n```"
deprecated: false
experimental: false
experimentalcli: false

View File

@ -83,7 +83,7 @@ examples: "### Inspect an image's manifest object\n \n```bash\n$ docker manifest
IP and port.\nThis is similar to tagging an image and pushing it to a foreign registry.\n\nAfter
you have created your local copy of the manifest list, you may optionally\n`annotate`
it. Annotations allowed are the architecture and operating system (overriding the
image's current values),\nos features, and an archictecure variant. \n\nFinally,
image's current values),\nos features, and an architecture variant. \n\nFinally,
you need to `push` your manifest list to the desired registry. Below are descriptions
of these three commands,\nand an example putting them all together.\n\n```bash\n$
docker manifest create 45.55.81.106:5000/coolapp:v1 \\\n 45.55.81.106:5000/coolapp-ppc64le-linux:v1
@ -122,7 +122,7 @@ examples: "### Inspect an image's manifest object\n \n```bash\n$ docker manifest
docker manifest push --insecure myprivateregistry.mycompany.com/repo/image:tag\n```\n\nNote
that the `--insecure` flag is not required to annotate a manifest list, since annotations
are to a locally-stored copy of a manifest list. You may also skip the `--insecure`
flag if you are performaing a `docker manifest inspect` on a locally-stored manifest
flag if you are performing a `docker manifest inspect` on a locally-stored manifest
list. Be sure to keep in mind that locally-stored manifest lists are never used
by the engine on a `docker pull`."
deprecated: false
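A hedged sketch of the annotate step described above, reusing the registry address from the example; the `coolapp-arm-linux` image name and the arm/v7 values are hypothetical:

```bash
# Annotate the arm/linux entry of the manifest list with an explicit variant
$ docker manifest annotate 45.55.81.106:5000/coolapp:v1 \
    45.55.81.106:5000/coolapp-arm-linux:v1 \
    --os linux --arch arm --variant v7
```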

View File

@ -23,6 +23,7 @@ clink:
- docker_network_prune.yaml
- docker_network_rm.yaml
deprecated: false
min_api_version: "1.21"
experimental: false
experimentalcli: false
kubernetes: false

View File

@ -17,6 +17,15 @@ options:
experimentalcli: false
kubernetes: false
swarm: false
- option: driver-opt
value_type: stringSlice
default_value: '[]'
description: driver options for the network
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: ip
value_type: string
description: IPv4 address (e.g., 172.30.100.104)
@ -119,6 +128,7 @@ examples: |-
You can connect a container to one or more networks. The networks need not be the same type. For example, you can connect a single container to both bridge and overlay networks.
deprecated: false
min_api_version: "1.21"
experimental: false
experimentalcli: false
kubernetes: false
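A minimal sketch combining the `--ip` option shown above with the network and container names used elsewhere in these examples:

```bash
# Attach an existing container to the network with a fixed IPv4 address
$ docker network connect --ip 172.30.100.104 multi-host-network container1
```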

View File

@ -336,6 +336,7 @@ examples: |-
my-ingress-network
```
deprecated: false
min_api_version: "1.21"
experimental: false
experimentalcli: false
kubernetes: false

View File

@ -22,6 +22,7 @@ examples: |-
$ docker network disconnect multi-host-network container1
```
deprecated: false
min_api_version: "1.21"
experimental: false
experimentalcli: false
kubernetes: false

View File

@ -27,6 +27,7 @@ options:
kubernetes: false
swarm: false
deprecated: false
min_api_version: "1.21"
experimental: false
experimentalcli: false
kubernetes: false

View File

@ -125,6 +125,7 @@ examples: "### List all networks\n\n```bash\n$ sudo docker network ls\nNETWORK I
by a colon for all networks:\n\n```bash\n$ docker network ls --format \"{{.ID}}:
{{.Driver}}\"\nafaaab448eb2: bridge\nd1584f8dc718: host\n391df270dc66: null\n```"
deprecated: false
min_api_version: "1.21"
experimental: false
experimentalcli: false
kubernetes: false

View File

@ -31,6 +31,7 @@ examples: |-
list and tries to delete that. The command reports success or failure for each
deletion.
deprecated: false
min_api_version: "1.21"
experimental: false
experimentalcli: false
kubernetes: false

View File

@ -1,8 +1,6 @@
command: docker node promote
short: Promote one or more nodes to manager in the swarm
long: |-
Promotes a node to manager. This command targets a docker engine that is a
manager in the swarm.
long: Promotes a node to manager. This command can only be executed on a manager node.
usage: docker node promote NODE [NODE...]
pname: docker node
plink: docker_node.yaml
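A minimal usage sketch, run from a manager node; the node name `worker1` is hypothetical:

```bash
$ docker node promote worker1
```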

View File

@ -133,6 +133,7 @@ examples: |-
Placeholder | Description
----------------|------------------------------------------------------------------------------------------
`.ID` | Task ID
`.Name` | Task name
`.Image` | Task image
`.Node` | Node ID

View File

@ -57,6 +57,16 @@ options:
experimentalcli: false
kubernetes: false
swarm: false
- option: quiet
shorthand: q
value_type: bool
default_value: "false"
description: Suppress verbose output
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
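A short sketch of the `--quiet` flag added above; the image name is just an example:

```bash
# Pull without the per-layer progress output
$ docker pull --quiet alpine:latest
```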
examples: |-
### Pull an image from Docker Hub

View File

@ -1,6 +1,14 @@
command: docker rmi
short: Remove one or more images
long: Remove one or more images
long: |-
Removes (and un-tags) one or more images from the host node. If an image has
multiple tags, using this command with the tag as a parameter only removes the
tag. If the tag is the only one for the image, both the image and the tag are
removed.
This does not remove images from a registry. You cannot remove an image of a
running container unless you use the `-f` option. To see all images on a host
use the [`docker image ls`](images.md) command.
usage: docker rmi [OPTIONS] IMAGE [IMAGE...]
pname: docker
plink: docker.yaml
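A hedged sketch of the tag-removal behaviour described above; the image and tag names are hypothetical:

```bash
# If myimage:1.0 is also tagged myimage:latest, this removes only the 1.0 tag
$ docker rmi myimage:1.0
```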

View File

@ -288,6 +288,14 @@ options:
experimentalcli: false
kubernetes: false
swarm: false
- option: domainname
value_type: string
description: Container NIS domain name
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: entrypoint
value_type: string
description: Overwrite the default ENTRYPOINT of the image
@ -321,6 +329,15 @@ options:
experimentalcli: false
kubernetes: false
swarm: false
- option: gpus
value_type: gpu-request
description: GPU devices to add to the container ('all' to pass all GPUs)
deprecated: false
min_api_version: "1.40"
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: group-add
value_type: list
description: Add additional groups to join
@ -589,8 +606,7 @@ options:
kubernetes: false
swarm: false
- option: net
value_type: string
default_value: default
value_type: network
description: Connect a container to a network
deprecated: false
experimental: false
@ -606,8 +622,7 @@ options:
kubernetes: false
swarm: false
- option: network
value_type: string
default_value: default
value_type: network
description: Connect a container to a network
deprecated: false
experimental: false
@ -1306,6 +1321,28 @@ examples: |-
> that may be removed should not be added to untrusted containers with
> `--device`.
For Windows, the format of the string passed to the `--device` option is in
the form of `--device=<IdType>/<Id>`. Beginning with Windows Server 2019
and Windows 10 October 2018 Update, Windows only supports an IdType of
`class` and the Id as a [device interface class
GUID](https://docs.microsoft.com/en-us/windows-hardware/drivers/install/overview-of-device-interface-classes).
Refer to the table defined in the [Windows container
docs](https://docs.microsoft.com/en-us/virtualization/windowscontainers/deploy-containers/hardware-devices-in-containers)
for a list of container-supported device interface class GUIDs.
If this option is specified for a process-isolated Windows container, _all_
devices that implement the requested device interface class GUID are made
available in the container. For example, the command below makes all COM
ports on the host visible in the container.
```powershell
PS C:\> docker run --device=class/86E0D1E0-8089-11D0-9CE4-08003E301F73 mcr.microsoft.com/windows/servercore:ltsc2019
```
> **Note**: the `--device` option is only supported on process-isolated
> Windows containers. This option fails if the container isolation is `hyperv`
> or when running Linux Containers on Windows (LCOW).
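The options hunk earlier in this file also adds a `--gpus` flag; a minimal sketch, assuming a host with the NVIDIA container toolkit installed and using the public `nvidia/cuda` image as an example:

```bash
# Expose all host GPUs to the container and run nvidia-smi
$ docker run --rm --gpus all nvidia/cuda:9.0-base nvidia-smi
```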
### Restart policies (--restart)
Use Docker's `--restart` to specify a container's *restart policy*. A restart
@ -1441,15 +1478,15 @@ examples: |-
On Windows, `--isolation` can take one of these values:
| Value | Description |
|:----------|:-------------------------------------------------------------------------------------------|
| `default` | Use the value specified by the Docker daemon's `--exec-opt` or system default (see below). |
| `process` | Shared-kernel namespace isolation (not supported on Windows client operating systems). |
| `hyperv` | Hyper-V hypervisor partition-based isolation. |
| Value | Description |
|:----------|:------------------------------------------------------------------------------------------------------------------|
| `default` | Use the value specified by the Docker daemon's `--exec-opt` or system default (see below). |
| `process` | Shared-kernel namespace isolation (not supported on Windows client operating systems older than Windows 10 1809). |
| `hyperv` | Hyper-V hypervisor partition-based isolation. |
The default isolation on Windows server operating systems is `process`. The default (and only supported)
The default isolation on Windows server operating systems is `process`. The default
isolation on Windows client operating systems is `hyperv`. An attempt to start a container on a client
operating system with `--isolation process` will fail.
operating system older than Windows 10 1809 with `--isolation process` will fail.
On Windows server, assuming the default configuration, these commands are equivalent
and result in `process` isolation:

View File

@ -38,6 +38,14 @@ examples: |-
$ docker save -o fedora-latest.tar fedora:latest
```
### Save an image to a tar.gz file using gzip
You can use gzip to save the image file and make the backup smaller.
```bash
docker save myimage:latest | gzip > myimage_latest.tar.gz
```
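To restore the compressed archive, `docker load` can read it back directly; it detects the gzip compression:

```bash
$ docker load -i myimage_latest.tar.gz
```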
### Cherry-pick particular tags
You can even cherry-pick particular tags of an image repository.

View File

@ -13,7 +13,7 @@ options:
value_type: string
description: Secret driver
deprecated: false
min_api_version: "1.37"
min_api_version: "1.31"
experimental: false
experimentalcli: false
kubernetes: false
@ -31,6 +31,7 @@ options:
value_type: string
description: Template driver
deprecated: false
min_api_version: "1.37"
experimental: false
experimentalcli: false
kubernetes: false

View File

@ -358,6 +358,16 @@ options:
experimentalcli: false
kubernetes: false
swarm: false
- option: replicas-max-per-node
value_type: uint64
default_value: "0"
description: Maximum number of tasks per node (default 0 = unlimited)
deprecated: false
min_api_version: "1.40"
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: reserve-cpu
value_type: decimal
description: Reserve CPUs
@ -497,6 +507,15 @@ options:
experimentalcli: false
kubernetes: false
swarm: false
- option: sysctl
value_type: list
description: Sysctl options
deprecated: false
min_api_version: "1.40"
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
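A hedged sketch of the new `--sysctl` option, assuming it takes `key=value` pairs like its `docker run` counterpart; the service name and sysctl key are only examples:

```bash
$ docker service create \
  --name redis_3 \
  --sysctl net.core.somaxconn=1024 \
  redis:3.0.6
```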
- option: tty
shorthand: t
value_type: bool
@ -636,8 +655,8 @@ examples: "### Create a service\n\n```bash\n$ docker service create --name redis
\ --update-parallelism 2 \\\n redis:3.0.6\n```\n\nWhen you run a [service update](service_update.md),
the scheduler updates a\nmaximum of 2 tasks at a time, with `10s` between updates.
For more information,\nrefer to the [rolling updates\ntutorial](https://docs.docker.com/engine/swarm/swarm-tutorial/rolling-update/).\n\n###
Set environment variables (-e, --env)\n\nThis sets an environmental variable for
all tasks in a service. For example:\n\n```bash\n$ docker service create \\\n --name
Set environment variables (-e, --env)\n\nThis sets an environment variable for all
tasks in a service. For example:\n\n```bash\n$ docker service create \\\n --name
redis_2 \\\n --replicas 5 \\\n --env MYVAR=foo \\\n redis:3.0.6\n```\n\nTo specify
multiple environment variables, specify multiple `--env` flags, each\nwith a separate
key-value pair.\n\n```bash\n$ docker service create \\\n --name redis_2 \\\n --replicas
@ -652,46 +671,49 @@ examples: "### Create a service\n\n```bash\n$ docker service create --name redis
Add bind mounts, volumes or memory filesystems\n\nDocker supports three different
kinds of mounts, which allow containers to read\nfrom or write to files or directories,
either on the host operating system, or\non memory filesystems. These types are
_data volumes_ (often referred to simply\nas volumes), _bind mounts_, and _tmpfs_.\n\nA
**bind mount** makes a file or directory on the host available to the\ncontainer
it is mounted within. A bind mount may be either read-only or\nread-write. For example,
a container might share its host's DNS information by\nmeans of a bind mount of
the host's `/etc/resolv.conf` or a container might\nwrite logs to its host's `/var/log/myContainerLogs`
directory. If you use\nbind mounts and your host and containers have different notions
of permissions,\naccess controls, or other such details, you will run into portability
issues.\n\nA **named volume** is a mechanism for decoupling persistent data needed
by your\ncontainer from the image used to create the container and from the host
machine.\nNamed volumes are created and managed by Docker, and a named volume persists\neven
when no container is currently using it. Data in named volumes can be\nshared between
a container and the host machine, as well as between multiple\ncontainers. Docker
uses a _volume driver_ to create, manage, and mount volumes.\nYou can back up or
restore volumes using Docker commands.\n\nA **tmpfs** mounts a tmpfs inside a container
for volatile data.\n\nConsider a situation where your image starts a lightweight
web server. You could\nuse that image as a base image, copy in your website's HTML
files, and package\nthat into another image. Each time your website changed, you'd
need to update\nthe new image and redeploy all of the containers serving your website.
A better\nsolution is to store the website in a named volume which is attached to
each of\nyour web server containers when they start. To update the website, you
just\nupdate the named volume.\n\nFor more information about named volumes, see\n[Data
Volumes](https://docs.docker.com/engine/tutorials/dockervolumes/).\n\nThe following
table describes options which apply to both bind mounts and named\nvolumes in a
service:\n\n<table>\n <tr>\n <th>Option</th>\n <th>Required</th>\n <th>Description</th>\n
\ </tr>\n <tr>\n <td><b>types</b></td>\n <td></td>\n <td>\n <p>The
type of mount, can be either <tt>volume</tt>, <tt>bind</tt>, or <tt>tmpfs</tt>.
Defaults to <tt>volume</tt> if no type is specified.\n <ul>\n <li><tt>volume</tt>:
mounts a <a href=\"https://docs.docker.com/engine/reference/commandline/volume_create/\">managed
_data volumes_ (often referred to simply\nas volumes), _bind mounts_, _tmpfs_, and
_named pipes_.\n\nA **bind mount** makes a file or directory on the host available
to the\ncontainer it is mounted within. A bind mount may be either read-only or\nread-write.
For example, a container might share its host's DNS information by\nmeans of a bind
mount of the host's `/etc/resolv.conf` or a container might\nwrite logs to its host's
`/var/log/myContainerLogs` directory. If you use\nbind mounts and your host and
containers have different notions of permissions,\naccess controls, or other such
details, you will run into portability issues.\n\nA **named volume** is a mechanism
for decoupling persistent data needed by your\ncontainer from the image used to
create the container and from the host machine.\nNamed volumes are created and managed
by Docker, and a named volume persists\neven when no container is currently using
it. Data in named volumes can be\nshared between a container and the host machine,
as well as between multiple\ncontainers. Docker uses a _volume driver_ to create,
manage, and mount volumes.\nYou can back up or restore volumes using Docker commands.\n\nA
**tmpfs** mounts a tmpfs inside a container for volatile data.\n\nA **npipe** mounts
a named pipe from the host into the container.\n\nConsider a situation where your
image starts a lightweight web server. You could\nuse that image as a base image,
copy in your website's HTML files, and package\nthat into another image. Each time
your website changed, you'd need to update\nthe new image and redeploy all of the
containers serving your website. A better\nsolution is to store the website in a
named volume which is attached to each of\nyour web server containers when they
start. To update the website, you just\nupdate the named volume.\n\nFor more information
about named volumes, see\n[Data Volumes](https://docs.docker.com/engine/tutorials/dockervolumes/).\n\nThe
following table describes options which apply to both bind mounts and named\nvolumes
in a service:\n\n<table>\n <tr>\n <th>Option</th>\n <th>Required</th>\n <th>Description</th>\n
\ </tr>\n <tr>\n <td><b>type</b></td>\n <td></td>\n <td>\n <p>The
type of mount, can be either <tt>volume</tt>, <tt>bind</tt>, <tt>tmpfs</tt>, or
<tt>npipe</tt>. Defaults to <tt>volume</tt> if no type is specified.\n <ul>\n
\ <li><tt>volume</tt>: mounts a <a href=\"https://docs.docker.com/engine/reference/commandline/volume_create/\">managed
volume</a>\n into the container.</li> <li><tt>bind</tt>:\n bind-mounts
a directory or file from the host into the container.</li>\n <li><tt>tmpfs</tt>:
mount a tmpfs in the container</li>\n </ul></p>\n </td>\n </tr>\n <tr>\n
\ <td><b>src</b> or <b>source</b></td>\n <td>for <tt>type=bind</tt> only></td>\n
\ <td>\n <ul>\n <li>\n <tt>type=volume</tt>: <tt>src</tt>
is an optional way to specify the name of the volume (for example, <tt>src=my-volume</tt>).\n
\ If the named volume does not exist, it is automatically created. If no
<tt>src</tt> is specified, the volume is\n assigned a random name which
is guaranteed to be unique on the host, but may not be unique cluster-wide.\n A
randomly-named volume has the same lifecycle as its container and is destroyed when
the <i>container</i>\n is destroyed (which is upon <tt>service update</tt>,
or when scaling or re-balancing the service)\n </li>\n <li>\n <tt>type=bind</tt>:
mount a tmpfs in the container</li>\n <li><tt>npipe</tt>: mounts named pipe
from the host into the container (Windows containers only).</li>\n </ul></p>\n
\ </td>\n </tr>\n <tr>\n <td><b>src</b> or <b>source</b></td>\n <td>for
<tt>type=bind</tt> and <tt>type=npipe</tt></td>\n <td>\n <ul>\n <li>\n
\ <tt>type=volume</tt>: <tt>src</tt> is an optional way to specify the name
of the volume (for example, <tt>src=my-volume</tt>).\n If the named volume
does not exist, it is automatically created. If no <tt>src</tt> is specified, the
volume is\n assigned a random name which is guaranteed to be unique on
the host, but may not be unique cluster-wide.\n A randomly-named volume
has the same lifecycle as its container and is destroyed when the <i>container</i>\n
\ is destroyed (which is upon <tt>service update</tt>, or when scaling or
re-balancing the service)\n </li>\n <li>\n <tt>type=bind</tt>:
<tt>src</tt> is required, and specifies an absolute path to the file or directory
to bind-mount\n (for example, <tt>src=/path/on/host/</tt>). An error is
produced if the file or directory does not exist.\n </li>\n <li>\n
@ -703,10 +725,16 @@ examples: "### Create a service\n\n```bash\n$ docker service create --name redis
mounting the volume or bind mount.</p>\n </td>\n </tr>\n <tr>\n <td><p><b>readonly</b>
or <b>ro</b></p></td>\n <td></td>\n <td>\n <p>The Engine mounts binds
and volumes <tt>read-write</tt> unless <tt>readonly</tt> option\n is given
when mounting the bind or volume.\n <ul>\n <li><tt>true</tt> or <tt>1</tt>
when mounting the bind or volume. Note that setting <tt>readonly</tt> for a\n bind-mount
does not make its submounts <tt>readonly</tt> on the current Linux implementation.
See also <tt>bind-nonrecursive</tt>.\n <ul>\n <li><tt>true</tt> or <tt>1</tt>
or no value: Mounts the bind or volume read-only.</li>\n <li><tt>false</tt>
or <tt>0</tt>: Mounts the bind or volume read-write.</li>\n </ul></p>\n </td>\n
\ </tr>\n <tr>\n <td><b>consistency</b></td>\n <td></td>\n <td>\n <p>The
\ </tr>\n</table>\n\n#### Options for Bind Mounts\n\nThe following options can only
be used for bind mounts (`type=bind`):\n\n\n<table>\n <tr>\n <th>Option</th>\n
\ <th>Description</th>\n </tr>\n <tr>\n <td><b>bind-propagation</b></td>\n
\ <td>\n <p>See the <a href=\"#bind-propagation\">bind propagation section</a>.</p>\n
\ </td>\n </tr>\n <tr>\n <td><b>consistency</b></td>\n <td>\n <p>The
consistency requirements for the mount; one of\n <ul>\n <li><tt>default</tt>:
Equivalent to <tt>consistent</tt>.</li>\n <li><tt>consistent</tt>: Full
consistency. The container runtime and the host maintain an identical view of the
@ -715,33 +743,41 @@ examples: "### Create a service\n\n```bash\n$ docker service create --name redis
visible within a container.</li>\n <li><tt>delegated</tt>: The container
runtime's view of the mount is authoritative. There may be delays before updates
made in a container are visible on the host.</li>\n </ul>\n </p>\n </td>\n
\ </tr>\n</table>\n\n#### Bind Propagation\n\nBind propagation refers to whether
or not mounts created within a given\nbind mount or named volume can be propagated
to replicas of that mount. Consider\na mount point `/mnt`, which is also mounted
on `/tmp`. The propation settings\ncontrol whether a mount on `/tmp/a` would also
be available on `/mnt/a`. Each\npropagation setting has a recursive counterpoint.
In the case of recursion,\nconsider that `/tmp/a` is also mounted as `/foo`. The
propagation settings\ncontrol whether `/mnt/a` and/or `/tmp/a` would exist.\n\nThe
`bind-propagation` option defaults to `rprivate` for both bind mounts and\nvolume
mounts, and is only configurable for bind mounts. In other words, named\nvolumes
do not support bind propagation.\n\n- **`shared`**: Sub-mounts of the original mount
are exposed to replica mounts,\n and sub-mounts of replica mounts
are also propagated to the\n original mount.\n- **`slave`**: similar
to a shared mount, but only in one direction. If the\n original mount
exposes a sub-mount, the replica mount can see it.\n However, if the
replica mount exposes a sub-mount, the original\n mount cannot see
it.\n- **`private`**: The mount is private. Sub-mounts within it are not exposed
to\n replica mounts, and sub-mounts of replica mounts are not\n
\ exposed to the original mount.\n- **`rshared`**: The same as shared,
but the propagation also extends to and from\n mount points nested
within any of the original or replica mount\n points.\n- **`rslave`**:
The same as `slave`, but the propagation also extends to and from\n mount
\ </tr>\n <tr>\n <td><b>bind-nonrecursive</b></td>\n <td>\n By default,
submounts are recursively bind-mounted as well. However, this behavior can be confusing
when a\n bind mount is configured with <tt>readonly</tt> option, because submounts
are not mounted as read-only.\n Set <tt>bind-nonrecursive</tt> to disable recursive
bind-mount.<br />\n <br />\n A value is optional:<br />\n <br />\n
\ <ul>\n <li><tt>true</tt> or <tt>1</tt>: Disables recursive bind-mount.</li>\n
\ <li><tt>false</tt> or <tt>0</tt>: Default if you do not provide a value.
Enables recursive bind-mount.</li>\n </ul>\n </td>\n </tr>\n</table>\n\n#####
Bind propagation\n\nBind propagation refers to whether or not mounts created within
a given\nbind mount or named volume can be propagated to replicas of that mount.
Consider\na mount point `/mnt`, which is also mounted on `/tmp`. The propagation settings\ncontrol
whether a mount on `/tmp/a` would also be available on `/mnt/a`. Each\npropagation
setting has a recursive counterpart. In the case of recursion,\nconsider that `/tmp/a`
is also mounted as `/foo`. The propagation settings\ncontrol whether `/mnt/a` and/or
`/tmp/a` would exist.\n\nThe `bind-propagation` option defaults to `rprivate` for
both bind mounts and\nvolume mounts, and is only configurable for bind mounts. In
other words, named\nvolumes do not support bind propagation.\n\n- **`shared`**:
Sub-mounts of the original mount are exposed to replica mounts,\n and
sub-mounts of replica mounts are also propagated to the\n original
mount.\n- **`slave`**: similar to a shared mount, but only in one direction. If
the\n original mount exposes a sub-mount, the replica mount can see
it.\n However, if the replica mount exposes a sub-mount, the original\n
\ mount cannot see it.\n- **`private`**: The mount is private. Sub-mounts
within it are not exposed to\n replica mounts, and sub-mounts of
replica mounts are not\n exposed to the original mount.\n- **`rshared`**:
The same as shared, but the propagation also extends to and from\n mount
points nested within any of the original or replica mount\n points.\n-
**`rprivate`**: The default. The same as `private`, meaning that no mount points\n
\ anywhere within the original or replica mount points propagate\n
\ in either direction.\n\nFor more information about bind propagation,
see the\n[Linux kernel documentation for shared subtree](https://www.kernel.org/doc/Documentation/filesystems/sharedsubtree.txt).\n\n####
Options for Named Volumes\n\nThe following options can only be used for named volumes
**`rslave`**: The same as `slave`, but the propagation also extends to and from\n
\ mount points nested within any of the original or replica mount\n
\ points.\n- **`rprivate`**: The default. The same as `private`,
meaning that no mount points\n anywhere within the original or
replica mount points propagate\n in either direction.\n\nFor more
information about bind propagation, see the\n[Linux kernel documentation for shared
subtree](https://www.kernel.org/doc/Documentation/filesystems/sharedsubtree.txt).\n\n####
Options for named volumes\n\nThe following options can only be used for named volumes
(`type=volume`):\n\n\n<table>\n <tr>\n <th>Option</th>\n <th>Description</th>\n
\ </tr>\n <tr>\n <td><b>volume-driver</b></td>\n <td>\n <p>Name of the
volume-driver plugin to use for the volume. Defaults to\n <tt>\"local\"</tt>,
@ -756,8 +792,8 @@ examples: "### Create a service\n\n```bash\n$ docker service create --name redis
\ the Engine copies those files and directories into the volume, allowing\n
\ the host to access them. Set <tt>volume-nocopy</tt> to disable copying files\n
\ from the container's filesystem to the volume and mount the empty volume.<br
/>\n\n A value is optional:\n\n <ul>\n <li><tt>true</tt> or <tt>1</tt>:
Default if you do not provide a value. Disables copying.</li>\n <li><tt>false</tt>
/>\n <br />\n A value is optional:<br />\n <br />\n <ul>\n <li><tt>true</tt>
or <tt>1</tt>: Default if you do not provide a value. Disables copying.</li>\n <li><tt>false</tt>
or <tt>0</tt>: Enables copying.</li>\n </ul>\n </td>\n </tr>\n <tr>\n
\ <td><b>volume-opt</b></td>\n <td>\n Options specific to a given volume
driver, which will be passed to the\n driver when creating the volume. Options
@ -868,14 +904,23 @@ examples: "### Create a service\n\n```bash\n$ docker service create --name redis
\ --placement-pref 'spread=node.labels.rack' \\\n redis:3.0.6\n```\n\nWhen updating
a service with `docker service update`, `--placement-pref-add`\nappends a new placement
preference after all existing placement preferences.\n`--placement-pref-rm` removes
an existing placement preference that matches the\nargument.\n\n### Attach a service
to an existing network (--network)\n\nYou can use overlay networks to connect one
or more services within the swarm.\n\nFirst, create an overlay network on a manager
node the docker network create\ncommand:\n\n```bash\n$ docker network create --driver
overlay my-network\n\netjpu59cykrptrgw0z0hk5snf\n```\n\nAfter you create an overlay
network in swarm mode, all manager nodes have\naccess to the network.\n\nWhen you
create a service and pass the `--network` flag to attach the service to\nthe overlay
network:\n\n```bash\n$ docker service create \\\n --replicas 3 \\\n --network
an existing placement preference that matches the\nargument.\n\n### Specify maximum
replicas per node (--replicas-max-per-node)\n\nUse the `--replicas-max-per-node`
flag to set the maximum number of replica tasks that can run on a node.\nThe following
command creates an nginx service with 2 replica tasks but only one replica task per
node.\n\nOne example where this can be useful is to balance tasks over a set of
data centers together with `--placement-pref`\nand let the `--replicas-max-per-node`
setting ensure that replicas are not migrated to another datacenter during\nmaintenance
or datacenter failure.\n\nThe example below illustrates this:\n\n```bash\n$ docker
service create \\\n --name nginx \\\n --replicas 2 \\\n --replicas-max-per-node
1 \\\n --placement-pref 'spread=node.labels.datacenter' \\\n nginx\n```\n\n###
Attach a service to an existing network (--network)\n\nYou can use overlay networks
to connect one or more services within the swarm.\n\nFirst, create an overlay network
on a manager node the docker network create\ncommand:\n\n```bash\n$ docker network
create --driver overlay my-network\n\netjpu59cykrptrgw0z0hk5snf\n```\n\nAfter you
create an overlay network in swarm mode, all manager nodes have\naccess to the network.\n\nWhen
you create a service and pass the `--network` flag to attach the service to\nthe
overlay network:\n\n```bash\n$ docker service create \\\n --replicas 3 \\\n --network
my-network \\\n --name my-web \\\n nginx\n\n716thylsndqma81j6kkkb5aus\n```\n\nThe
swarm extends my-network to each node running the service.\n\nContainers on the
same network can access each other using\n[service discovery](https://docs.docker.com/engine/swarm/networking/#use-swarm-mode-service-discovery).\n\nLong

View File

@ -492,6 +492,16 @@ options:
experimentalcli: false
kubernetes: false
swarm: false
- option: replicas-max-per-node
value_type: uint64
default_value: "0"
description: Maximum number of tasks per node (default 0 = unlimited)
deprecated: false
min_api_version: "1.40"
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: reserve-cpu
value_type: decimal
description: Reserve CPUs
@ -647,6 +657,24 @@ options:
experimentalcli: false
kubernetes: false
swarm: false
- option: sysctl-add
value_type: list
description: Add or update a Sysctl option
deprecated: false
min_api_version: "1.40"
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: sysctl-rm
value_type: list
description: Remove a Sysctl option
deprecated: false
min_api_version: "1.40"
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
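A minimal sketch of the new flags; the service name follows the `redis_2` example used elsewhere in these reference pages, and it assumes `--sysctl-rm` takes just the key, as other `*-rm` flags do:

```bash
# Add a sysctl to a running service, then remove it again
$ docker service update --sysctl-add net.core.somaxconn=1024 redis_2
$ docker service update --sysctl-rm net.core.somaxconn redis_2
```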
- option: tty
shorthand: t
value_type: bool

View File

@ -180,8 +180,8 @@ examples: |-
"table {{.ID}}\t{{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}\t{{.NetIO}}\t{{.BlockIO}}"
> **Note**: On Docker 17.09 and older, the `{{.Container}}` column was used, in
> stead of `{{.ID}}\t{{.Name}}`.
> **Note**: On Docker 17.09 and older, the `{{.Container}}` column was used,
> instead of `{{.ID}}\t{{.Name}}`.
deprecated: false
experimental: false
experimentalcli: false

View File

@ -53,6 +53,17 @@ options:
experimentalcli: false
kubernetes: false
swarm: false
- option: data-path-port
value_type: uint32
default_value: "0"
description: |
Port number to use for data path traffic (1024 - 49151). If no value is set or is set to 0, the default port (4789) is used.
deprecated: false
min_api_version: "1.40"
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: default-addr-pool
value_type: ipNetSlice
default_value: '[]'
@ -137,127 +148,77 @@ options:
experimentalcli: false
kubernetes: false
swarm: false
examples: |-
```bash
$ docker swarm init --advertise-addr 192.168.99.121
Swarm initialized: current node (bvz81updecsj6wjz393c09vti) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join \
--token SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-1awxwuwd3z9j1z3puu7rcgdbx \
172.17.0.2:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
```
`docker swarm init` generates two random tokens, a worker token and a manager token. When you join
a new node to the swarm, the node joins as a worker or manager node based upon the token you pass
to [swarm join](swarm_join.md).
After you create the swarm, you can display or rotate the token using
[swarm join-token](swarm_join_token.md).
### `--autolock`
This flag enables automatic locking of managers with an encryption key. The
private keys and data stored by all managers will be protected by the
encryption key printed in the output, and will not be accessible without it.
Thus, it is very important to store this key in order to activate a manager
after it restarts. The key can be passed to `docker swarm unlock` to reactivate
the manager. Autolock can be disabled by running
`docker swarm update --autolock=false`. After disabling it, the encryption key
is no longer required to start the manager, and it will start up on its own
without user intervention.
### `--cert-expiry`
This flag sets the validity period for node certificates.
### `--dispatcher-heartbeat`
This flag sets the frequency with which nodes are told to use as a
period to report their health.
### `--external-ca`
This flag sets up the swarm to use an external CA to issue node certificates. The value takes
the form `protocol=X,url=Y`. The value for `protocol` specifies what protocol should be used
to send signing requests to the external CA. Currently, the only supported value is `cfssl`.
The URL specifies the endpoint where signing requests should be submitted.
### `--force-new-cluster`
This flag forces an existing node that was part of a quorum that was lost to restart as a single node Manager without losing its data.
### `--listen-addr`
The node listens for inbound swarm manager traffic on this address. The default is to listen on
0.0.0.0:2377. It is also possible to specify a network interface to listen on that interface's
address; for example `--listen-addr eth0:2377`.
Specifying a port is optional. If the value is a bare IP address or interface
name, the default port 2377 will be used.
### `--advertise-addr`
This flag specifies the address that will be advertised to other members of the
swarm for API access and overlay networking. If unspecified, Docker will check
if the system has a single IP address, and use that IP address with the
listening port (see `--listen-addr`). If the system has multiple IP addresses,
`--advertise-addr` must be specified so that the correct address is chosen for
inter-manager communication and overlay networking.
It is also possible to specify a network interface to advertise that interface's address;
for example `--advertise-addr eth0:2377`.
Specifying a port is optional. If the value is a bare IP address or interface
name, the default port 2377 will be used.
### `--data-path-addr`
This flag specifies the address that global scope network drivers will publish towards
other nodes in order to reach the containers running on this node.
Using this parameter it is then possible to separate the container's data traffic from the
management traffic of the cluster.
If unspecified, Docker will use the same IP address or interface that is used for the
advertise address.
### `--default-addr-pool`
This flag specifies default subnet pools for global scope networks.
Format example is `--default-addr-pool 30.30.0.0/16 --default-addr-pool 40.40.0.0/16`
### `--default-addr-pool-mask-length`
This flag specifies default subnet pools mask length for default-addr-pool.
Format example is `--default-addr-pool-mask-length 24`
### `--task-history-limit`
This flag sets up task history retention limit.
### `--max-snapshots`
This flag sets the number of old Raft snapshots to retain in addition to the
current Raft snapshots. By default, no old snapshots are retained. This option
may be used for debugging, or to store old snapshots of the swarm state for
disaster recovery purposes.
### `--snapshot-interval`
This flag specifies how many log entries to allow in between Raft snapshots.
Setting this to a higher number will trigger snapshots less frequently.
Snapshots compact the Raft log and allow for more efficient transfer of the
state to new managers. However, there is a performance cost to taking snapshots
frequently.
### `--availability`
This flag specifies the availability of the node at the time the node joins a master.
Possible availability values are `active`, `pause`, or `drain`.
This flag is useful in certain situations. For example, a cluster may want to have
dedicated manager nodes that are not served as worker nodes. This could be achieved
by passing `--availability=drain` to `docker swarm init`.
examples: "```bash\n$ docker swarm init --advertise-addr 192.168.99.121\nSwarm initialized:
current node (bvz81updecsj6wjz393c09vti) is now a manager.\n\nTo add a worker to
this swarm, run the following command:\n\n docker swarm join \\\n --token
SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-1awxwuwd3z9j1z3puu7rcgdbx
\\\n 172.17.0.2:2377\n\nTo add a manager to this swarm, run 'docker swarm join-token
manager' and follow the instructions.\n```\n\n`docker swarm init` generates two
random tokens, a worker token and a manager token. When you join\na new node to
the swarm, the node joins as a worker or manager node based upon the token you pass\nto
[swarm join](swarm_join.md).\n\nAfter you create the swarm, you can display or rotate
the token using\n[swarm join-token](swarm_join_token.md).\n\n### `--autolock`\n\nThis
flag enables automatic locking of managers with an encryption key. The\nprivate
keys and data stored by all managers will be protected by the\nencryption key printed
in the output, and will not be accessible without it.\nThus, it is very important
to store this key in order to activate a manager\nafter it restarts. The key can
be passed to `docker swarm unlock` to reactivate\nthe manager. Autolock can be disabled
by running\n`docker swarm update --autolock=false`. After disabling it, the encryption
key\nis no longer required to start the manager, and it will start up on its own\nwithout
user intervention.\n\n### `--cert-expiry`\n\nThis flag sets the validity period
for node certificates.\n\n### `--dispatcher-heartbeat`\n\nThis flag sets the heartbeat
period that nodes are told to use when\nreporting their health.\n\n### `--external-ca`\n\nThis
flag sets up the swarm to use an external CA to issue node certificates. The value
takes\nthe form `protocol=X,url=Y`. The value for `protocol` specifies what protocol
should be used\nto send signing requests to the external CA. Currently, the only
supported value is `cfssl`.\nThe URL specifies the endpoint where signing requests
should be submitted.\n\n### `--force-new-cluster`\n\nThis flag forces an existing
node that was part of a quorum that was lost to restart as a single node Manager
without losing its data.\n\n### `--listen-addr`\n\nThe node listens for inbound
swarm manager traffic on this address. The default is to listen on\n0.0.0.0:2377.
It is also possible to specify a network interface to listen on that interface's\naddress;
for example `--listen-addr eth0:2377`.\n\nSpecifying a port is optional. If the
value is a bare IP address or interface\nname, the default port 2377 will be used.\n\n###
`--advertise-addr`\n\nThis flag specifies the address that will be advertised to
other members of the\nswarm for API access and overlay networking. If unspecified,
Docker will check\nif the system has a single IP address, and use that IP address
with the\nlistening port (see `--listen-addr`). If the system has multiple IP addresses,\n`--advertise-addr`
must be specified so that the correct address is chosen for\ninter-manager communication
and overlay networking.\n\nIt is also possible to specify a network interface to
advertise that interface's address;\nfor example `--advertise-addr eth0:2377`.\n\nSpecifying
a port is optional. If the value is a bare IP address or interface\nname, the default
port 2377 will be used.\n\n### `--data-path-addr`\n\nThis flag specifies the address
that global scope network drivers will publish towards\nother nodes in order to
reach the containers running on this node.\nUsing this parameter it is then possible
to separate the container's data traffic from the\nmanagement traffic of the cluster.\nIf
unspecified, Docker will use the same IP address or interface that is used for the\nadvertise
address.\n\n### `--data-path-port`\n\nThis flag allows you to configure the UDP
port number to use for data path\ntraffic. The provided port number must be within
the 1024 - 49151 range. If\nthis flag is not set or is set to 0, the default port
number 4789 is used.\nThe data path port can only be configured when initializing
the swarm, and\napplies to all nodes that join the swarm.\nThe following example
initializes a new swarm and configures the data path\nport to UDP port 7777:\n\n```bash\ndocker
swarm init --data-path-port=7777\n```\nAfter the swarm is initialized, use the `docker
info` command to verify that\nthe port is configured:\n\n```bash\ndocker info\n\t...\n\tClusterID:
9vs5ygs0gguyyec4iqf2314c0\n\tManagers: 1\n\tNodes: 1\n\tData Path Port: 7777\n\t...\n```\n\n###
`--default-addr-pool`\nThis flag specifies default subnet pools for global scope
networks.\nFormat example is `--default-addr-pool 30.30.0.0/16 --default-addr-pool
40.40.0.0/16`\n\n### `--default-addr-pool-mask-length`\nThis flag specifies default
subnet pools mask length for default-addr-pool.\nFormat example is `--default-addr-pool-mask-length
24`\n\n### `--task-history-limit`\n\nThis flag sets up task history retention limit.\n\n###
`--max-snapshots`\n\nThis flag sets the number of old Raft snapshots to retain in
addition to the\ncurrent Raft snapshots. By default, no old snapshots are retained.
This option\nmay be used for debugging, or to store old snapshots of the swarm state
for\ndisaster recovery purposes.\n\n### `--snapshot-interval`\n\nThis flag specifies
how many log entries to allow in between Raft snapshots.\nSetting this to a higher
number will trigger snapshots less frequently.\nSnapshots compact the Raft log and
allow for more efficient transfer of the\nstate to new managers. However, there
is a performance cost to taking snapshots\nfrequently.\n\n### `--availability`\n\nThis
flag specifies the availability of the node at the time it joins the swarm.\nPossible
availability values are `active`, `pause`, or `drain`.\n\nThis flag is useful in
certain situations. For example, a cluster may want to have\ndedicated manager nodes
that do not serve as worker nodes. This could be achieved\nby passing `--availability=drain`
to `docker swarm init`."
deprecated: false
min_api_version: "1.24"
experimental: false
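Combining the two address-pool flags described above into a single hedged invocation:

```bash
$ docker swarm init \
  --default-addr-pool 30.30.0.0/16 \
  --default-addr-pool 40.40.0.0/16 \
  --default-addr-pool-mask-length 24
```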

View File

@ -35,7 +35,6 @@ examples: |-
Images 5 2 16.43 MB 11.63 MB (70%)
Containers 2 0 212 B 212 B (100%)
Local Volumes 2 1 36 B 0 B (0%)
Build Cache 0 0 0B 0B
```
A more detailed view can be requested using the `-v, --verbose` flag:
@ -63,14 +62,6 @@ examples: |-
NAME LINKS SIZE
07c7bdf3e34ab76d921894c2b834f073721fccfbbcba792aa7648e3a7a664c2e 2 36 B
my-named-vol 0 0 B
Build cache usage: 0B
CACHE ID CACHE TYPE SIZE CREATED LAST USED USAGE SHARED
0d8ab63ff30d regular 4.34MB 7 days ago 0 true
189876ac9226 regular 11.5MB 7 days ago 0 true
```
* `SHARED SIZE` is the amount of space that an image shares with another one (i.e. their common data)

View File

@ -140,6 +140,16 @@ options:
experimentalcli: false
kubernetes: false
swarm: false
- option: pids-limit
value_type: int64
default_value: "0"
description: Tune container pids limit (set -1 for unlimited)
deprecated: false
min_api_version: "1.40"
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
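Assuming this hunk belongs to the `docker update` reference (the neighbouring `restart` option suggests it does), a hedged sketch with a hypothetical container name:

```bash
# Raise the container's pids limit without recreating it
$ docker update --pids-limit 200 my-container
```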
- option: restart
value_type: string
description: Restart policy to apply when a container exits

View File

@ -0,0 +1,30 @@
command: docker registry
short: Manage Docker registries
long: Manage Docker registries
usage: docker registry
pname: docker
plink: docker.yaml
cname:
- docker registry events
- docker registry history
- docker registry info
- docker registry inspect
- docker registry joblogs
- docker registry jobs
- docker registry ls
- docker registry rmi
clink:
- docker_registry_events.yaml
- docker_registry_history.yaml
- docker_registry_info.yaml
- docker_registry_inspect.yaml
- docker_registry_joblogs.yaml
- docker_registry_jobs.yaml
- docker_registry_ls.yaml
- docker_registry_rmi.yaml
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false

View File

@ -0,0 +1,57 @@
command: docker registry events
short: List registry events (DTR Only)
long: List registry events (Only supported by Docker Trusted Registry)
usage: docker registry events HOST | REPOSITORY [OPTIONS]
pname: docker registry
plink: docker_registry.yaml
options:
- option: format
value_type: string
description: Pretty-print output using a Go template
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: limit
value_type: int64
default_value: "50"
description: Specify the number of event results
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: no-trunc
value_type: bool
default_value: "false"
description: Don't truncate output
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: object-type
value_type: string
description: |
Specify the type of Event target object [REPOSITORY | TAG | BLOB | MANIFEST | WEBHOOK | URI | PROMOTION | PUSH_MIRRORING | POLL_MIRRORING]
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: type
value_type: string
description: |
Specify the type of Event [CREATE | GET | DELETE | UPDATE | SEND | FAIL]
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
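A hedged usage sketch built from the usage line and options above; the DTR hostname is hypothetical:

```bash
# List the ten most recent tag-deletion events on a DTR instance
$ docker registry events dtr.example.com --type DELETE --object-type TAG --limit 10
```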

View File

@ -0,0 +1,40 @@
command: docker registry history
short: Inspect registry image history (DTR Only)
long: Inspect registry image history (DTR Only)
usage: docker registry history IMAGE [OPTIONS]
pname: docker registry
plink: docker_registry.yaml
options:
- option: format
value_type: string
description: Pretty-print history using a Go template
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: human
shorthand: H
value_type: bool
default_value: "true"
description: Print sizes and dates in human readable format
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: no-trunc
value_type: bool
default_value: "false"
description: Don't truncate output
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false

View File

@ -0,0 +1,22 @@
command: docker registry info
short: Display information about a registry (DTR Only)
long: Display information about a registry (Only supported by Docker Trusted Registry
and must be authenticated as an admin user)
usage: docker registry info HOST [OPTIONS]
pname: docker registry
plink: docker_registry.yaml
options:
- option: format
value_type: string
description: Pretty-print output using a Go template
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false

View File

@ -0,0 +1,21 @@
command: docker registry inspect
short: Inspect registry image
long: Inspect registry image
usage: docker registry inspect IMAGE [OPTIONS]
pname: docker registry
plink: docker_registry.yaml
options:
- option: format
value_type: string
description: Pretty-print output using a Go template
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false

View File

@ -0,0 +1,21 @@
command: docker registry joblogs
short: List registry job logs (DTR Only)
long: List registry job logs (DTR Only)
usage: docker registry joblogs HOST JOB_ID [OPTIONS]
pname: docker registry
plink: docker_registry.yaml
options:
- option: format
value_type: string
description: Pretty-print output using a Go template
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false

View File

@ -0,0 +1,49 @@
command: docker registry jobs
short: List registry jobs (DTR Only)
long: List registry jobs (Only supported by Docker Trusted Registry and must be authenticated
as an admin user)
usage: docker registry jobs HOST [OPTIONS]
pname: docker registry
plink: docker_registry.yaml
options:
- option: action
value_type: string
description: |
Specify the type of Job action [onlinegc | onlinegc_metadata | onlinegc_joblogs | onlinegc_events | license_update | scan_check | scan_check_single | scan_check_all | update_vuln_db | nautilus_update_db | push_mirror_tag | poll_mirror | tag_prune]
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: format
value_type: string
description: Pretty-print output using a Go template
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: limit
value_type: int64
default_value: "50"
description: Specify the number of job results
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: no-trunc
value_type: bool
default_value: "false"
description: Don't truncate output
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false

View File

@ -0,0 +1,49 @@
command: docker registry ls
short: List registry images
long: List registry images
usage: docker registry ls REPOSITORY[:TAG] [OPTIONS]
pname: docker registry
plink: docker_registry.yaml
options:
- option: digests
value_type: bool
default_value: "false"
description: Show digests
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: format
value_type: string
description: Pretty-print output using a Go template
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: quiet
shorthand: q
value_type: bool
default_value: "false"
description: Only display image names
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
- option: verbose
value_type: bool
default_value: "false"
description: Display verbose image information
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false
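A hedged usage sketch following the `REPOSITORY[:TAG]` form above; the host and repository are hypothetical:

```bash
# List images in a DTR repository, including their digests
$ docker registry ls dtr.example.com/admin/wordpress --digests
```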

View File

@ -0,0 +1,12 @@
command: docker registry rmi
short: Remove a registry image (DTR Only)
long: Remove a registry image (DTR Only)
usage: docker registry rmi REPOSITORY:TAG [OPTIONS]
pname: docker registry
plink: docker_registry.yaml
deprecated: false
experimental: false
experimentalcli: false
kubernetes: false
swarm: false

View File

@ -494,6 +494,10 @@ guides:
title: NISTIR 8176
- path: /compliance/nist/itl_october2017/
title: NIST ITL Bulletin October 2017
- sectiontitle: OSCAL
section:
- path: /compliance/oscal
title: OSCAL compliance guidance
- sectiontitle: CIS Benchmarks
section:
- path: /compliance/cis/docker_ee/
@ -548,6 +552,8 @@ reference:
section:
- path: /engine/reference/commandline/builder/
title: docker builder
- path: /engine/reference/commandline/builder_build/
title: docker builder build
- path: /engine/reference/commandline/builder_prune/
title: docker builder prune
- sectiontitle: docker checkpoint *
@ -628,6 +634,26 @@ reference:
title: docker container update
- path: /engine/reference/commandline/container_wait/
title: docker container wait
- sectiontitle: docker context *
section:
- path: /engine/reference/commandline/context/
title: docker context
- path: /engine/reference/commandline/context_create/
title: docker context create
- path: /engine/reference/commandline/context_export/
title: docker context export
- path: /engine/reference/commandline/context_import/
title: docker context import
- path: /engine/reference/commandline/context_inspect/
title: docker context inspect
- path: /engine/reference/commandline/context_ls/
title: docker context ls
- path: /engine/reference/commandline/context_rm/
title: docker context rm
- path: /engine/reference/commandline/context_update/
title: docker context update
- path: /engine/reference/commandline/context_use/
title: docker context use
- path: /engine/reference/commandline/cp/
title: docker cp
- path: /engine/reference/commandline/create/
@ -780,6 +806,24 @@ reference:
title: docker pull
- path: /engine/reference/commandline/push/
title: docker push
- sectiontitle: docker registry *
section:
- path: /engine/reference/commandline/registry/
title: docker registry
- path: /engine/reference/commandline/registry_events/
title: docker registry events
- path: /engine/reference/commandline/registry_history/
title: docker registry history
- path: /engine/reference/commandline/registry_info/
title: docker registry info
- path: /engine/reference/commandline/registry_inspect/
title: docker registry inspect
- path: /engine/reference/commandline/registry_joblogs/
title: docker registry joblogs
- path: /engine/reference/commandline/registry_ls/
title: docker registry ls
- path: /engine/reference/commandline/registry_rmi/
title: docker registry rmi
- path: /engine/reference/commandline/rename/
title: docker rename
- path: /engine/reference/commandline/restart/
@ -880,6 +924,24 @@ reference:
title: docker system prune
- path: /engine/reference/commandline/tag/
title: docker tag
- sectiontitle: docker template *
section:
- path: /engine/reference/commandline/template/
title: docker template
- path: /engine/reference/commandline/template_config/
title: docker template config
- path: /engine/reference/commandline/template_config_set/
title: docker template config set
- path: /engine/reference/commandline/template_config_view/
title: docker template config view
- path: /engine/reference/commandline/template_inspect/
title: docker template inspect
- path: /engine/reference/commandline/template_list/
title: docker template list
- path: /engine/reference/commandline/template_scaffold/
title: docker template scaffold
- path: /engine/reference/commandline/template_version/
title: docker template version
- path: /engine/reference/commandline/top/
title: docker top
- sectiontitle: docker trust *
@ -1160,6 +1222,40 @@ manuals:
nosync: true
- title: Release notes
path: /engine/release-notes/
- sectiontitle: Docker Desktop Enterprise
section:
- path: /ee/desktop/
title: Overview
- path: /ee/desktop/release-notes/
title: Release notes
- sectiontitle: Admin
section:
- sectiontitle: Install
section:
- path: /ee/desktop/admin/install/mac/
title: Install DDE on Mac
- path: /ee/desktop/admin/install/windows/
title: Install DDE on Windows
- sectiontitle: Configure
section:
- path: /ee/desktop/admin/configure/mac-admin/
title: Configure DDE on Mac
- path: /ee/desktop/admin/configure/windows-admin/
title: Configure DDE on Windows
- sectiontitle: User
section:
- path: /ee/desktop/user/mac-user/
title: Use DDE on Mac
- path: /ee/desktop/user/windows-user/
title: Use DDE on Windows
- path: /ee/desktop/app-designer/
title: Application designer
- sectiontitle: Troubleshoot
section:
- path: /ee/desktop/troubleshoot/mac-issues/
title: Troubleshoot DDE issues on Mac
- path: /ee/desktop/troubleshoot/windows-issues/
title: Troubleshoot DDE issues on Windows
- sectiontitle: Universal Control Plane
section:
- path: /ee/ucp/
@ -1204,6 +1300,8 @@ manuals:
title: Create UCP audit logs
- path: /ee/ucp/admin/configure/enable-saml-authentication/
title: Enable SAML authentication
- path: /ee/ucp/admin/configure/integrate-scim/
title: SCIM integration
- path: /ee/ucp/admin/configure/enable-helm-tiller/
title: Enable Helm and Tiller with UCP
- path: /ee/ucp/admin/configure/external-auth/
@ -1349,6 +1447,12 @@ manuals:
path: /ee/ucp/interlock/usage/interlock-vip-mode/
- title: Using routing labels
path: /ee/ucp/interlock/usage/labels-reference/
- title: Publishing a default host service
path: /ee/ucp/interlock/usage/default-backend/
- title: Specifying a routing mode
path: /ee/ucp/interlock/usage/interlock-vip-mode/
- title: Using routing labels
path: /ee/ucp/interlock/usage/labels-reference/
- title: Implementing redirects
path: /ee/ucp/interlock/usage/redirects/
- title: Implementing a service cluster
@ -1357,6 +1461,10 @@ manuals:
path: /ee/ucp/interlock/usage/sessions/
- title: Securing services with TLS
path: /ee/ucp/interlock/usage/tls/
- title: Configuring websockets
path: /ee/ucp/interlock/usage/websockets/
- title: Securing services with TLS
path: /ee/ucp/interlock/usage/tls/
- title: Configuring websockets
path: /ee/ucp/interlock/usage/websockets/
- sectiontitle: Deploy apps with Kubernetes
@ -1384,7 +1492,9 @@ manuals:
- title: Use Azure Files Storage
path: /ee/ucp/kubernetes/storage/use-azure-files/
- title: Use AWS EBS Storage
path: /ee/ucp/kubernetes/storage/configure-aws-storage/
path: /ee/ucp/kubernetes/storage/configure-aws-storage/
- title: Configure iSCSI
path: /ee/ucp/kubernetes/storage/use-iscsi/
- title: API reference
path: /reference/ucp/3.1/api/
nosync: true
@ -3084,6 +3194,32 @@ manuals:
title: Get support
- title: Get support
path: /ee/get-support/
- sectiontitle: Docker Assemble
section:
- path: /assemble/install/
title: Install
- path: /assemble/spring-boot/
title: Build a Spring Boot project
- path: /assemble/dot-net/
title: Build a C# ASP.NET Core project
- path: /assemble/configure/
title: Configure
- path: /assemble/images/
title: Images
- path: /assemble/adv-backend-manage/
title: Advanced Backend Management
- path: /assemble/cli-reference/
title: CLI reference
- sectiontitle: Docker App
section:
- path: /app/working-with-app/
title: Working with Docker App
- sectiontitle: Docker Template
section:
- path: /app-template/working-with-template/
title: Working with Docker Template
- path: /app-template/cli-reference/
title: CLI reference
- sectiontitle: Docker Compose
section:
- path: /compose/overview/


@ -0,0 +1,161 @@
---
title: Docker Template API reference
description: Docker Template API reference
keywords: application, template, API, definition
---
This page contains information about the Docker Template API reference.
## Service template definition
The following section provides information about the valid parameters that you can use when you create a service template definition.
```
apiVersion: v1alpha1
kind: ServiceTemplate
metadata:
  name: angular
  platforms:
  - linux
spec:
  title: Angular
  description: Angular service
  icon: https://cdn.worldvectorlogo.com/logos/angular-icon-1.svg
  source:
    image: docker.io/myorg/myservice:version
  parameters:
  - name: node
    description: Node version
    type: enum
    defaultValue: "9"
    values:
    - value: "10"
      description: "10"
    - value: "9"
      description: "9"
    - value: "8"
      description: "8"
  - name: externalPort
    description: External port
    defaultValue: "8080"
    type: hostPort
```
### root
| Parameter |Required? | Description |
| :----------------------|:----------------------|:----------------------------------------|
| apiVersion | yes | The API format version. The current version is `v1alpha1`. |
| kind | yes | The kind of object. Must be `ServiceTemplate` for service templates. |
### metadata
| Parameter |Required? | Description |
| :----------------------|:----------------------|:----------------------------------------|
|name |yes | The identifier for this service. Must be unique within a given library. |
| platforms | yes | A list of allowed target platforms. Possible options are `windows` and `linux`. |
### spec
| Parameter |Required? | Description |
| :----------------------|:----------------------|:----------------------------------------|
| title | yes | The label for this service, as displayed when listed in `docker template` commands or in the Application Designer. |
| description | no | A short description of this service. |
| icon | no | An icon representing the service. Only used in the Application Designer. |
### spec/source
| Parameter |Required? | Description |
| :----------------------|:----------------------|:----------------------------------------|
| image |yes| The name of the image associated with this service template. Must be in full `repo/org/service:version` format|
### spec/parameters
The parameters section allows you to specify the input parameters that are used by the service.
| Parameter |Required? | Description |
| :----------------------|:----------------------|:----------------------------------------|
|name |yes| The identifier for this parameter. Must be unique within the service parameters. |
| description | no | A short description of the parameter. It is used as the label in the Application Designer. |
| type | yes | The type of the parameter. Possible options are:<ul><li>`string` - The default type, with no validation or specific features.</li><li>`enum` - Allows the user to choose a value from a specific list of options. Must specify the `values` parameter.</li><li>`hostPort` - Specifies that this parameter is a port that is going to be exposed. Uses port format regexp validation, and avoids duplicate ports within an application.</li></ul> |
| defaultValue | yes | The default value for this parameter. For the `enum` type, it must be a valid value from the `values` list. |
| values | no | For the `enum` type, specifies a list of value/description tuples. |
## Application template definition
The following section provides information about the valid parameters that you can use when you create an application template definition.
```
apiVersion: v1alpha1
kind: ApplicationTemplate
metadata:
  name: nginx-flask-mysql
  platforms:
  - linux
spec:
  title: Flask / NGINX / MySQL application
  description: Sample Python/Flask application with an Nginx proxy and a MySQL database
  services:
  - name: back
    serviceId: flask
    parameters:
      externalPort: "80"
  - name: db
    serviceId: mysql
  - name: proxy
    serviceId: nginx
```
### root
| Parameter |Required? | Description |
| :----------------------|:----------------------|:----------------------------------------|
| apiVersion | yes | The API format version. The current version is `v1alpha1`. |
| kind | yes | The kind of object. Must be `ApplicationTemplate` for application templates. |
### metadata
| Parameter |Required? | Description |
| :----------------------|:----------------------|:----------------------------------------|
|name |yes | The identifier for this application template. Must be unique within a given library.|
| platforms | yes | A list of allowed target platforms. Possible options are `windows` and `linux`. |
### spec
| Parameter |Required? | Description |
| :----------------------|:----------------------|:----------------------------------------|
| title | yes | The label for this application template, as displayed when listed in `docker template` commands or in the Application Designer. |
| description | no | A short description of this application template. |
### spec/services
This section lists the service templates used in the application.
| Parameter |Required? | Description |
| :----------------------|:----------------------|:----------------------------------------|
| name | yes | The name of the service. It is used as the image name and as the name of the subfolder within the application structure. |
| serviceId | yes | The ID of the service to use (equivalent to the `metadata/name` field of the service). |
| parameters |no|A map (string to string) that can be used to override the default values of the service parameters.|
## Service configuration file
The file is mounted at `/run/configuration` in every service template container and contains the template context in a JSON format.
| Parameter |Description |
| :----------------------|:----------------------|
| serviceId | The service ID. |
| name | The name of the service, as specified by the application template or overridden by the user. |
| parameters | A map (string to string) containing the service's parameter values. |
| targetPath | The destination folder for the application on the host machine. |
| namespace | The namespace (org or user) of the service images. |
|services |A list containing all the services of the application (see below)|
### Attributes
The items in the `services` list contain the following attributes:
| Parameter |Description |
| :----------------------|:----------------------|
| serviceId | The service ID. |
| name | The name of the service, as specified by the application template or overridden by the user. |
| parameters | A map (string to string) containing the service's parameter values. |
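As an illustration, a `/run/configuration` file for one service of a two-service application might look like the following sketch. All names and values here are hypothetical; treat it only as a shape reference for the fields described above.
```
{
  "serviceId": "express",
  "name": "back",
  "parameters": {
    "externalPort": "9000",
    "node": "10"
  },
  "targetPath": "/path/to/my/project",
  "namespace": "myorg",
  "services": [
    {
      "serviceId": "express",
      "name": "back",
      "parameters": {
        "externalPort": "9000",
        "node": "10"
      }
    },
    {
      "serviceId": "mysql",
      "name": "db"
    }
  ]
}
```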


@ -0,0 +1,197 @@
---
title: Docker Template CLI reference
description: Docker Template CLI reference
keywords: Docker, application template, CLI, Application Designer,
---
This page provides information about the `docker template` command.
## Overview
Docker Template is a CLI plugin that introduces a top-level `docker template` command that allows users to create new Docker applications using a library of templates. With `docker template`, you can scaffold a full project structure for a chosen technical stack or a set of technical stacks, using best practices pre-configured in a generated Dockerfile and Docker Compose file.
For more information about Docker Template, see [Working with Docker Template](/ee/docker-template/working-with-template).
## `docker template` commands
To view the commands and sub-commands available in `docker template`, run:
`docker template --help`
```
Usage: docker template COMMAND
Use templates to quickly create new services
Commands:
inspect Inspect service templates or application templates
list List available templates with their information
scaffold Choose an application template or service template(s) and scaffold a new project
version Print version information
Run 'docker template COMMAND --help' for more information on a command.
```
### inspect
The `docker template inspect` command allows you to view the details of the template such as service parameters, default values, and for application templates, the list of services included in the application.
```
Usage: docker template inspect <service or application>
Inspect service templates or application templates
Options:
--format string Configure the output format (pretty|json|yaml)
(default "pretty")
```
For example:
```
docker template inspect react-java-mysql
NAME: react-java-mysql
TITLE: React / Spring / MySQL application
DESCRIPTION: Sample React application with a Spring backend and a MySQL database
SERVICES:
* PARAMETERS FOR SERVICE: front (react)
NAME DESCRIPTION TYPE DEFAULT VALUE VALUES
node Node version enum 9 10, 9, 8
externalPort External port hostPort 8080
* PARAMETERS FOR SERVICE: back (spring)
NAME DESCRIPTION TYPE DEFAULT VALUE VALUES
java Java version enum 9 10, 9, 8
groupId Group Id string com.company
artifactId Artifact Id string project
appName Application name string New App
appDescription Application description string My new SpringBoot app
externalPort External port hostPort 8080
* PARAMETERS FOR SERVICE: db (mysql)
NAME DESCRIPTION TYPE DEFAULT VALUE VALUES
version Version enum 5.7 5.7
```
### list
The `docker template list` command lists the available service and application templates.
```
Usage: docker template list
List available templates with their information
Aliases:
list, ls
Options:
--format string Configure the output format (pretty|json|yaml)
(default "pretty")
--type string Filter by type (application|service|all) (default
"all")
```
For example:
`docker template list`
```
NAME TYPE DESCRIPTION
aspnet-mssql application Sample asp.net core application with mssql database
nginx-flask-mysql application Sample Python/Flask application with an Nginx proxy and a MySQL database
nginx-golang-mysql application Sample Golang application with an Nginx proxy and a MySQL database
nginx-golang-postgres application Sample Golang application with an Nginx proxy and a PostgreSQL database
react-java-mysql application Sample React application with an Spring backend and a MySQL database
react-express-mysql application Sample React application with a NodeJS backend and a MySQL database
sparkjava-mysql application Java application and a MySQL database
spring-postgres application Sample Java application with Spring framework and a Postgres database
angular service Angular service
aspnetcore service A lean and composable framework for building web and cloud applications
consul service A highly available and distributed service discovery and KV store
django service A high-level Python Web framework
express service NodeJS web application with Express server
flask service A microframework for Python based on Werkzeug, Jinja 2 and good intentions
golang service A powerful URL router and dispatcher for golang
gwt service GWT (Google Web Toolkit) / Java service
jsf service JavaServer Faces technology establishes the standard for building server-side user interfaces.
mssql service Microsoft SQL Server for Docker Engine
mysql service Official MySQL image
nginx service An HTTP and reverse proxy server
postgres service Official PostgreSQL image
rails service A web-application framework that includes everything needed to create database-backed web applications
react service React/Redux service with Webpack hot reload
sparkjava service A micro framework for creating web applications in Java 8 with minimal effort
spring service Customizable Java/Spring template
vuejs service VueJS service
```
### scaffold
The `docker template scaffold` command allows you to generate a project structure for a template.
```
Usage: docker template scaffold application [<alias=service>...] OR scaffold [alias=]service [<[alias=]service>...]
Choose an application template or service template(s) and scaffold a new project
Examples:
docker template scaffold react-java-mysql -s back.java=10 -s front.externalPort=80
docker template scaffold react-java-mysql java=back reactjs=front -s reactjs.externalPort=80
docker template scaffold back=spring front=react -s back.externalPort=9000
docker template scaffold react-java-mysql --server=myregistry:5000 --org=myorg
Options:
--build Run docker-compose build after deploy
--name string Application name
--org string Deploy to a specific organization / docker hub
user (if not specified, it will use your
current hub login)
--path string Deploy to a specific path
--platform string Target platform (linux|windows) (default "linux")
--server string Deploy to a specific registry server (host[:port])
-s, --set stringArray Override parameters values (service.name=value)
```
For example:
`docker template scaffold react-java-mysql`
If you want to change some of the parameter values (exposed port, specific version, and so on), you can pass additional parameters and reference the service they apply to with `--set` or `-s`.
For example:
`docker template scaffold react-java-mysql -s back.java=10 -s front.externalPort=80`
By default, the `docker template scaffold` command generates the project structure in the current folder. However, you can specify another folder using the `--path` parameter.
For example:
`docker template scaffold react-java-mysql --path /xxx`
You can also change service names by providing aliases when scaffolding either an application template or a list of service templates.
For example:
`docker template scaffold react-java-mysql java=back reactjs=front -s reactjs.externalPort=80`
### version
The `docker template version` command displays the Docker Template version number.
```
Usage: docker template version
Print version information
```
For example:
`docker template version`
```
Version: d6c11e577c592aad69d34db6d4dc740d65291e36
Git Commit: 96ea0063b0c9aaa0cc5b5ff811b51a6e2e752be9
```


@ -0,0 +1,445 @@
---
title: Working with Docker Template
description: Working with Docker Application Template
keywords: Docker, application template, Application Designer,
---
## Overview
Docker Template is a CLI plugin that introduces a top-level `docker template` command that allows users to create new Docker applications by using a library of templates. There are two types of templates — service templates and application templates.
A _service template_ is a container image that generates code and contains the metadata associated with the image.
- The container image takes the file mounted at `/run/configuration` as input to generate assets such as code, a Dockerfile, and a `docker-compose.yaml` file for a given service, and writes the output to the mounted `/project` folder.
- The metadata file that describes the service template is called the service definition. It contains the name of the service, description, and available parameters such as ports, volumes, etc. For a complete list of parameters that are allowed, see [Docker Template API reference](/ee/app-template/api-reference).
An _application template_ is a collection of one or more service templates.
## Create a custom service template
Docker Template contains a predefined set of service and application templates. To create a custom template based on your requirements, you must complete the following steps:
1. Create a service container image
2. Create the service template definition
3. Add the service template to the library
4. Share the service template
### Create a service container image
A service template provides the description required by Docker Template to scaffold a project. A service template runs inside a container with two bind mounts:
1. `/run/configuration`, a JSON file which contains all settings such as parameters, image name, etc. For example:
```
{
  "parameters": {
    "externalPort": "80",
    "artifactId": "com.company.app"
  },
  ...
}
```
2. `/project`, the output folder to which the container image writes the generated assets.
#### Basic service template
To create a basic service template, you need to create two files, a Dockerfile and a Docker Compose file, in a new folder. For example, to create a new MySQL service template, create the following files in a folder called `my-service`:
`docker-compose.yaml`
```
version: "3.6"
services:
  mysql:
    image: mysql
```
`Dockerfile`
```
FROM alpine
COPY docker-compose.yaml .
CMD cp docker-compose.yaml /project/
```
This adds a MySQL service to your application.
#### Create a service with code
Services that generate a template using code must contain the following valid files:
- A *Dockerfile* located at the root of the `my-service` folder. This is the Dockerfile that is used for the service when running the application.
- A *docker-compose.yaml* file located at the root of the `my-service` folder. The `docker-compose.yaml` file must contain the service declaration and any optional volumes or secrets.
Here's an example of a simple NodeJS service:
```
my-service
├── Dockerfile              # The Dockerfile of the service template
└── assets
    ├── Dockerfile          # The Dockerfile of the generated service
    └── docker-compose.yaml # The service declaration
```
The NodeJS service contains the following files:
`my-service/Dockerfile`
```
FROM alpine
COPY assets /assets
CMD ["cp", "/assets", "/project"]
FROM dockertemplate/interpolator:v0.0.8 as interpolator
COPY assets /assets
```
`my-service/assets/docker-compose.yaml`
```
version: "3.6"
services:
  {{ .Name }}:
    build: {{ .Name }}
    ports:
      - {{ .Parameters.externalPort }}:3000
```
`my-service/assets/Dockerfile`
```
FROM node:9
WORKDIR /app
COPY package.json .
RUN yarn install
COPY . .
CMD ["yarn", "run", "start"]
```
> **Note:** After scaffolding the template, you can add the default files your template contains to the `assets` folder.
The next step is to build and push the service template image to a remote repository by running the following command:
```
cd [...]/my-service
docker build -t org/my-service .
docker push org/my-service
```
To build and push the image to an instance of Docker Trusted Registry (DTR), or to an external registry, specify the name of the repository:
```
cd [...]/my-service
docker build -t myrepo:5000/my-service .
docker push myrepo:5000/my-service
```
### Create the service template definition
The service definition contains metadata that describes a service template. It contains the name of the service, description, and available parameters such as ports, volumes, etc.
After creating the service definition, you can proceed to [Add templates to Docker Template](#add-templates-to-docker-template) to add the service definition to the Docker Template repository.
Docker Template has access to a single catalog of service and application definitions, referred to as the repository. It uses the catalog content to display service and application templates to the end user.
Here is an example of the Express service definition:
```
- apiVersion: v1alpha1 # constant
  kind: ServiceTemplate # constant
  metadata:
    name: Express # the name of the service
  spec:
    title: Express # The title/label of the service
    icon: https://docker-application-template.s3.amazonaws.com/assets/express.png # url for an icon
    description: NodeJS web application with Express server
    source:
      image: org/my-service:latest
```
The most important section here is `image: org/my-service:latest`. This is the image associated with this service template. You can use this line to point to any image. For example, you can use an Express image directly from the hub `docker.io/dockertemplate/express:latest` or from the DTR private repository `myrepo:5000/my-service:latest`. The other properties in the service definition are mostly metadata for display and indexation purposes.
#### Adding parameters to the service
Now that you have created a simple express service, you can customize it based on your requirements. For example, you can choose the version of NodeJS to use when running the service.
To customize a service, you need to complete the following tasks:
1. Declare the parameters in the service definition. This tells Docker Template whether or not the CLI can accept the parameters, and allows the [Application Designer](/ee/desktop/app-designer) to be aware of the new options.
2. Use the parameters during service construction.
#### Declare the parameters
Add the parameters available to the application. The following example adds the NodeJS version and the external port:
```
- [...]
  spec:
    [...]
    parameters:
    - name: node
      defaultValue: "9"
      description: Node version
      type: enum
      values:
      - value: "10"
        description: "10"
      - value: "9"
        description: "9"
      - value: "8"
        description: "8"
    - defaultValue: "3000"
      description: External port
      name: externalPort
      type: hostPort
  [...]
```
#### Use the parameters during service construction
When you run the service template container, a volume is mounted making the service parameters available at `/run/configuration`.
The file matches the following go struct:
```
type TemplateContext struct {
	ServiceID  string              `json:"serviceId,omitempty"`
	Name       string              `json:"name,omitempty"`
	Parameters map[string]string   `json:"parameters,omitempty"`
	TargetPath string              `json:"targetPath,omitempty"`
	Namespace  string              `json:"namespace,omitempty"`
	Services   []ConfiguredService `json:"services,omitempty"`
}
```
Where `ConfiguredService` is:
```
type ConfiguredService struct {
	ID         string            `json:"serviceId,omitempty"`
	Name       string            `json:"name,omitempty"`
	Parameters map[string]string `json:"parameters,omitempty"`
}
```
You can then use the file to obtain values for the parameters and use this information based on your requirements. However, in most cases, the JSON file is used to interpolate the variables. Therefore, we provide a utility called `interpolator` that expands variables in templates. For more information, see [Interpolator](#interpolator).
To use the `interpolator` image, update `my-service/Dockerfile` to use the following Dockerfile:
```
FROM dockertemplate/interpolator:v0.0.3-beta1
COPY assets .
```
> **Note:** The interpolator tag must match the version used in Docker Template. Verify this using the `docker template version` command.
This places the assets in the interpolator image's `/assets` folder, and the interpolator copies the interpolated files to the target `/project` folder. If you prefer to set the working directory and command manually, use a Dockerfile like the following instead:
```
# Base image is assumed here for illustration: it provides the /interpolator binary used below
FROM dockertemplate/interpolator:v0.0.3-beta1
WORKDIR /assets
CMD ["/interpolator", "-config", "/run/configuration", "-source", "/assets", "-destination", "/project"]
```
When this is complete, use the newly added node option in `my-service/assets/Dockerfile`, by replacing the line:
`FROM node:9`
with
`FROM node:{{ .Parameters.node }}`
Now, build and push the image to your repository.
### Add service template to the library
You must add the service to a repository file in order to see it when you run the `docker template ls` command, or to make the service available in the Application Designer.
#### Create the repository file
Create a local repository file called `library.yaml` anywhere on your local drive and add the newly created service definitions and application definitions to it.
`library.yaml`
```
apiVersion: v1alpha1
generated: "2018-06-13T09:24:07.392654524Z"
kind: RepositoryContent
services: # List of service templates available
- apiVersion: v1alpha1 # here is the service definition for our service template.
  kind: ServiceTemplate
  name: express
  spec:
    title: Express
    [...]
```
#### Add the local repository to docker-template settings
> **Note:** You can also use the instructions in this section to add templates to the [Application Designer](/ee/desktop/app-designer).
Now that you have created a local repository and added service definitions to it, you must make Docker Template aware of these. To do this:
1. Edit `~/.docker/dockertemplate/preferences.yaml` as follows:
```
apiVersion: v1alpha1
channel: master
kind: Preferences
repositories:
- name: library-master
  url: https://docker-application-template.s3.amazonaws.com/master/library.yaml
```
2. Add your local repository:
```
apiVersion: v1alpha1
channel: master
kind: Preferences
repositories:
- name: custom-services # here
  url: file://path/to/my/library.yaml
- name: library-master
  url: https://docker-application-template.s3.amazonaws.com/master/library.yaml
```
After updating the `preferences.yaml` file, run `docker template ls` or restart the Application Designer and select **Custom application**. The new service should now be visible in the list of available services.
### Share custom service templates
To share a custom service template, you must complete the following steps:
1. Push the image to an available endpoint (for example, Docker Hub)
2. Share the service definition (for example, GitHub)
3. Ensure the receiver has modified their `preferences.yaml` file to point to the service definition that you have shared (see the sketch after this list), and is permitted to accept remote images.
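For example, assuming the shared service definition is published as a library file at a hypothetical URL, the receiver's `~/.docker/dockertemplate/preferences.yaml` might look like this sketch:
```
apiVersion: v1alpha1
channel: master
kind: Preferences
repositories:
- name: shared-services # the shared library (hypothetical URL)
  url: https://example.com/my-org/library.yaml
- name: library-master
  url: https://docker-application-template.s3.amazonaws.com/master/library.yaml
```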
## Create a custom application template
An application template is a collection of one or more service templates. You must complete the following steps to create a custom application template:
1. Create an application template definition
2. Add the application template to the library
3. Share your custom application template
### Create the application definition
An application template definition contains metadata that describes an application template. It contains information such as the name and description of the template, the services it contains, and the parameters for each of the services.
Before you create an application template definition, you must create a repository that contains the services you are planning to include in the template. For more information, see [Create the repository file](#create-the-repository-file).
For example, to create an Express and MySQL application, the application definition must be similar to the following yaml file:
```
apiVersion: v1alpha1 #constant
kind: ApplicationTemplate #constant
metadata:
  name: express-mysql #the name of the application
spec:
  description: Sample application with a NodeJS backend and a MySQL database
  services: # list of the services
  - name: back
    serviceId: express # service name
    parameters: # (optional) define the default application parameters
      externalPort: 9000
  - name: db
    serviceId: mysql
  title: Express / MySQL application
```
### Add the template to the library
Create a local repository file called `library.yaml` anywhere on your local drive. If you have already created the `library.yaml` file, add the application definitions to it.
`library.yaml`
```
apiVersion: v1alpha1
generated: "2018-06-13T09:24:07.392654524Z"
kind: RepositoryContent
services: # List of service templates available
- apiVersion: v1alpha1 # here is the service definition for our service template.
  kind: ServiceTemplate
  name: express
  spec:
    title: Express
    [...]
templates: # List of application templates available
- apiVersion: v1alpha1 #constant
  kind: ApplicationTemplate # here is the application definition for our application template
  metadata:
    name: express-mysql
  spec:
```
### Add the local repository to `docker-template` settings
Now that you have created a local repository and added application definitions, you must make Docker Template aware of these. To do this:
1. Edit `~/.docker/dockertemplate/preferences.yaml` as follows:
```
apiVersion: v1alpha1
channel: master
kind: Preferences
repositories:
- name: library-master
  url: https://docker-application-template.s3.amazonaws.com/master/library.yaml
```
2. Add your local repository:
```
apiVersion: v1alpha1
channel: master
kind: Preferences
repositories:
- name: custom-services # here
  url: file://path/to/my/library.yaml
- name: library-master
  url: https://docker-application-template.s3.amazonaws.com/master/library.yaml
```
After updating the `preferences.yaml` file, run `docker template ls` or restart the Application Designer and select **Custom application**. The new template should now be visible in the list of available templates.
### Share the custom application template
To share a custom application template, you must complete the following steps:
1. Push the image to an available endpoint (for example, Docker Hub)
2. Share the application definition (for example, GitHub)
3. Ensure the receiver has modified their `preferences.yaml` file to point to the application definition that you have shared, and is permitted to accept remote images.
## Interpolator
The `interpolator` utility is an image containing a binary that:
- takes a folder (assets folder) and the service parameter file as input,
- replaces variables in the input folder using the parameters specified by the user (for example, the service name, external port, etc), and
- writes the interpolated files to the destination folder.
The interpolator implementation uses [Golang template](https://golang.org/pkg/text/template/) to aggregate the services to create the final application. If your service template uses the `interpolator` image by default, it expects all the asset files to be located in the `/assets` folder:
`/interpolator -source /assets -destination /project`
However, you can create your own scaffolding script that performs calls to the `interpolator`.
> **Note:** It is not mandatory to use the `interpolator` utility. You can use a utility of your choice to handle parameter replacement and file copying to achieve the same result.
The following table lists the `interpolator` binary options:
| Parameter | Default value | Description |
| :----------------------|:---------------------------|:----------------------------------------|
| `-source` | none | Source file or folder to interpolate from|
| `-destination` | none | Destination file or folder to copy the interpolated files to|
| `-config` | `/run/configuration` | The path to the json configuration file |
| `-skip-template` | false | If set to `true`, it copies assets without any transformation |

app/working-with-app.md

@ -0,0 +1,346 @@
---
title: Working with Docker App
description: Learn about Docker App
keywords: Docker App, applications, compose, orchestration
---
## Overview
Docker App is a CLI plug-in that introduces a top-level `docker app` command that brings the _container experience_ to applications. The following table compares Docker containers with Docker applications.
| Object | Config file | Build with | Execute with |
| ------------- |---------------| -------------------|-----------------------|
| Container | Dockerfile | docker image build | docker container run |
| App | bundle.json | docker app bundle | docker app install |
With Docker App, entire applications can now be managed as easily as images and containers. For example, Docker App lets you _build_, _validate_ and _deploy_ applications with the `docker app` command. You can even leverage secure supply-chain features such as signed `push` and `pull` operations.
This guide will walk you through two scenarios:
1. Initialize and deploy a new Docker App project from scratch
2. Convert an existing Compose app into a Docker App project (Added later in the beta process)
The first scenario will familiarize you with the basic components of a Docker App and get you comfortable with the tools and workflow.
## Initialize and deploy a new Docker App project from scratch
In this section, we'll walk through the process of creating a new Docker App project. By the end, you'll be familiar with the workflow and the most important commands.
We'll complete the following steps:
1. Pre-requisites
2. Initialize an empty new project
3. Populate the project
4. Validate the app
5. Deploy the app
6. Push the app to Docker Hub
7. Install the app directly from Docker Hub
### Pre-requisites
In order to follow along, you'll need at least one Docker node operating in Swarm mode. You will also need the latest build of the Docker CLI with the APP CLI plugin included.
Depending on your Linux distribution and your security context, you may need to prepend commands with `sudo`.
### Initialize a new empty project
The `docker app init` command is used to initialize a new Docker application project. If you run it on its own, it initializes a new empty project. If you point it to an existing `docker-compose.yml` file, it initializes a new project based on the Compose file.
Use the following command to initialize a new empty project called "hello-world".
```
$ docker app init --single-file hello-world
Created "hello-world.dockerapp"
```
The command produces a single file in your current directory called `hello-world.dockerapp`. The file name is the project name with `.dockerapp` appended.
```
$ ls
hello-world.dockerapp
```
If you run `docker app init` without the `--single-file` flag, you get a new directory containing three YAML files. The name of the directory is the name of the project with `.dockerapp` appended, and the three YAML files are:
- `docker-compose.yml`
- `metadata.yml`
- `parameters.yml`
However, the `--single-file` option merges the three YAML files into a single YAML file with three sections. Each of these sections relates to one of the three YAML files mentioned above: `docker-compose.yml`, `metadata.yml`, and `parameters.yml`. The `--single-file` option is useful for sharing your application as a single configuration file.
Inspect the YAML with the following command.
```
$ cat hello-world.dockerapp
# Application metadata - equivalent to metadata.yml.
version: 0.1.0
name: hello-world
description:
---
# Application services - equivalent to docker-compose.yml.
version: "3.6"
services: {}
---
# Default application parameters - equivalent to parameters.yml.
```
Your file may be more verbose.
Notice that each of the three sections is separated by a set of three dashes ("---"). Let's quickly describe each section.
The first section of the file is where you specify identification metadata such as name, version, and description. It accepts key-value pairs. This part of the file can be a separate file called `metadata.yml`.
The second section of the file describes the application. It can be a separate file called `docker-compose.yml`.
The final section is where default values for application parameters can be expressed. It can be a separate file called `parameters.yml`.
### Populate the project
In this section, we'll edit the project YAML file so that it runs a simple web app.
Use your preferred editor to edit the `hello-world.dockerapp` YAML file and update the application section to the following:
```
version: "3.6"
services:
  hello:
    image: hashicorp/http-echo
    command: ["-text", "${text}"]
    ports:
      - ${port}:5678
```
Update the Parameters section to the following:
```
port: 8080
text: Hello world!
```
The sections of the YAML file are currently order-based. This means it's important they remain in the order we've explained, with the _metadata_ section being first, the _app_ section being second, and the _parameters_ section being last. This may change to name-based sections in future releases.
Save the changes.
The application has been updated to run a single-container application based on the `hashicorp/http-echo` web server image. The image executes a single command that displays some text and exposes the service on a network port.
Following best practices, the configuration of the application has been decoupled from the application itself using variables. In this case, the text displayed by the app, and the port it is published on, are controlled by two variables defined in the Parameters section of the file.
Docker App provides the `inspect` sub-command to provide a prettified summary of the application configuration. It's important to note that the application is not running at this point, and that the `inspect` operation inspects the configuration file(s).
```
$ docker app inspect hello-world.dockerapp
hello-world 0.1.0
Service (1) Replicas Ports Image
----------- -------- ----- -----
hello 1 8080 hashicorp/http-echo
Parameters (2) Value
-------------- -----
port 8080
text Hello world!
```
`docker app inspect` operations will fail if the parameters section doesn't specify a default value for every parameter expressed in the app section.
The application is ready to be validated and rendered.
### Validate the app
Docker App provides the `validate` sub-command to check syntax and other aspects of the configuration. If validation passes, the command confirms the file and returns no errors.
```
$ docker app validate hello-world.dockerapp
Validated "hello-world.dockerapp"
```
`docker app validate` operations will fail if the parameters section doesn't specify a default value for every parameter expressed in the app section.
As the `validate` operation has returned no problems, the app is ready to be deployed.
### Deploy the app
There are several options for deploying a Docker App project.
1. Deploy as a native Docker App application
2. Deploy as a Compose app application
3. Deploy as a Docker Stack application
We'll look at all three options, starting with deploying as a native Docker App application.
#### Deploy as a native Docker App
The process for deploying as a native Docker app is as follows.
1. Use `docker app install` to deploy the application
Use the following command to deploy (install) the application.
```
$ docker app install hello-world.dockerapp --name my-app
Creating network my-app_default
Creating service my-app_hello
```
The app will be deployed using the stack orchestrator. This means you can inspect it with regular `docker stack` commands.
```
$ docker stack ls
NAME SERVICES ORCHESTRATOR
my-app 1 Swarm
```
You can also check the status of the app with the `docker app status <app-name>` command.
```
$ docker app status my-app
ID NAME MODE REPLICAS IMAGE PORTS
miqdk1v7j3zk my-app_hello replicated 1/1 hashicorp/http-echo:latest *:8080->5678/tcp
```
Now that the app is running, you can point a web browser at the DNS name or public IP of the Docker node on port 8080 and see the app in all its glory. You will need to ensure traffic to port 8080 is allowed on the connection from your browser to your Docker host.
You can uninstall the app with `docker app uninstall my-app`.
#### Deploy as a Docker Compose app
The process for deploying as a Compose app comprises two major steps:
1. Render the Docker app project as a `docker-compose.yml` file.
2. Deploy the app using `docker-compose up`.
You will need a recent version of Docker Compose to complete these steps.
Rendering is the process of reading the entire application configuration and outputting it as a single `docker-compose.yml` file. This will create a Compose file with hard-coded values wherever a parameter was specified as a variable.
Use the following command to render the app to a Compose file called `docker-compose.yml` in the current directory.
```
$ docker app render --output docker-compose.yml hello-world.dockerapp
```
Check the contents of the resulting `docker-compose.yml` file.
```
$ cat docker-compose.yml
version: "3.6"
services:
hello:
command:
- -text
- Hello world!
image: hashicorp/http-echo
ports:
- mode: ingress
target: 5678
published: 8080
protocol: tcp
```
Notice that the file contains hard-coded values that were expanded based on the contents of the Parameters section of the project's YAML file. For example, ${text} has been expanded to "Hello world!".
Use `docker-compose up` to deploy the app.
```
$ docker-compose up --detach
WARNING: The Docker Engine you're using is running in swarm mode.
<Snip>
```
The application is now running as a Docker Compose app and should be reachable on port `8080` on your Docker host. You will need to ensure traffic to port 8080 is allowed on the connection from your browser to your Docker host.
You can use `docker-compose down` to stop and remove the application.
#### Deploy as a Docker Stack
Deploying the app as a Docker stack is a two-step process, very similar to deploying it as a Docker Compose app.
1. Render the Docker app project as a `docker-compose.yml` file.
2. Deploy the app using `docker stack deploy`.
We'll assume that you've followed the steps to render the Docker app project as a compose file (shown in the previous section) and that you're ready to deploy it as a Docker Stack. Your Docker host will need to be in Swarm mode.
```
$ docker stack deploy hello-world-app -c docker-compose.yml
Creating network hello-world-app_default
Creating service hello-world-app_hello
```
The app is now deployed as a Docker stack and can be reached on port `8080` on your Docker host. You will need to ensure traffic to port 8080 is allowed on the connection from your browser to your Docker host.
Use the `docker stack rm hello-world-app` command to stop and remove the stack.
### Push the app to Docker Hub
As mentioned in the intro, `docker app` lets you manage entire applications the same way that we currently manage container images. For example, you can push and pull entire applications from registries like Docker Hub with `docker app push` and `docker app pull`. Other `docker app` commands, such as `install`, `upgrade`, and `render` can be performed directly on applications while they are stored in a registry.
Let's see some examples.
Push the application to Docker Hub. To complete this step, you'll need a valid Docker ID and you'll need to be logged in to the registry you are pushing the app to.
Be sure to replace the registry ID in the example below with your own.
```
$ docker app push my-app --tag nigelpoulton/app-test:0.1.0
docker app push hello-world.dockerapp --tag nigelpoulton/app-test:0.1.0
docker.io/nigelpoulton/app-test:0.1.0-invoc
hashicorp/http-echo
application/vnd.docker.distribution.manifest.v2+json [2/2] (sha256:ba27d460...)
<Snip>
```
The app is now stored in the container registry.
### Install the app directly from Docker Hub
Now that the app is pushed to the registry, try an `inspect` and `install` command against it. The location of your app will be different to the one shown in the examples.
```
$ docker app inspect nigelpoulton/app-test:0.1.0
hello-world 0.1.0
Service (1) Replicas Ports Image
----------- -------- ----- -----
hello 1 8080 nigelpoulton/app-test@sha256:ba27d460cd1f22a1a4331bdf74f4fccbc025552357e8a3249c40ae216275de96
Parameters (2) Value
-------------- -----
port 8080
text Hello world!
```
This action was performed directly against the app in the registry.
Now install it as a native Docker App by referencing the app in the registry.
```
$ docker app install nigelpoulton/app-test:0.1.0
Creating network hello-world_default
Creating service hello-world_hello
```
Test that the app is working.
The app used in these examples is a simple web server that displays the text "Hello world!" on port 8080; your app may be different.
```
$ curl http://localhost:8080
Hello world!
```
Uninstall the app.
```
$ docker app uninstall hello-world
Removing service hello-world_hello
Removing network hello-world_default
```
You can see the name of your Docker App with the `docker stack ls` command.
## Convert an existing Compose app into a Docker App project
Content TBA


@ -0,0 +1,90 @@
---
title: Advanced backend management
description: Advanced backend management for Docker Assemble
keywords: Backend, Assemble, Docker Enterprise, plugin, Spring Boot, .NET, c#, F#
---
## Backend access to host ports
Docker Assemble requires its own buildkit instance to be running in a Docker container on the local system. You can start and manage the backend using the `backend` subcommand of `docker assemble`. For more information, see [Install Docker Assemble](/assemble/install/).
As the backend runs in a container with its own network namespace, it cannot access host resources directly. This is most noticeable when trying to push to a local registry as `localhost:5000`.
The backend supports a sidecar container which proxies ports from within the backend container to the container's gateway (which is in effect a host IP). This is sufficient to allow access to host ports which have been bound to `0.0.0.0` (or to the gateway specifically), but not ones which are bound to `127.0.0.1`.
By default, port 5000 is proxied in this way because that is the most common port used for a local registry, allowing access to a registry on `localhost:5000` (the most common setup). You can proxy other ports using the `--allow-host-port` option to `docker assemble backend start`.
For example, to expose port `6000` instead of port `5000`, run:
```
$ docker assemble backend start --allow-host-port 6000
```
> **Notes:**
>
> - You can repeat the `--allow-host-port` option or give it a comma separated list of ports.
> - Passing `--allow-host-port 0` disables the default and no ports are exposed. For example:
>
> `$ docker assemble backend start --allow-host-port 0`
> - On Docker Desktop, this functionality allows the backend to access ports on the Docker Desktop VM host, rather than the Windows or macOS host. To access a Windows or macOS host port, you can use `host.docker.internal` as usual.
## Backend sub-commands
### Info
The info sub-command describes the backend:
```
~$ docker assemble backend info
ID: 2f03e7d288e6bea770a2acba4c8c918732aefcd1946c94c918e8a54792e4540f (running)
Image: docker/assemble-backend@sha256:«…»
Sidecar containers:
- 0f339c0cc8d7 docker-assemble-backend-username-proxy-port-5000 (running)
Found 1 worker(s):
- 70it95b8x171u5g9jbixkscz9
Platforms:
- linux/amd64
Labels:
- com.docker.assemble.commit: «…»
- org.mobyproject.buildkit.worker.executor: oci
- org.mobyproject.buildkit.worker.hostname: 2f03e7d288e6
- org.mobyproject.buildkit.worker.snapshotter: overlayfs
Build cache contains 54 entries, total size 3.65GB (0B currently in use)
```
### Stop
The stop sub-command destroys the backend container.
```
~$ docker assemble backend stop
```
### Logs
The logs sub-command displays the backend logs.
```
~$ docker assemble backend logs
```
### Cache
The build cache gets lost when the backend is stopped. To avoid this, you can create a volume named `docker-assemble-backend-cache-«username»` and it will automatically be used as the build cache.
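For example, assuming your username is `username` (hypothetical), the following sketch creates a cache volume with the expected name so the backend picks it up automatically the next time it starts:
```
~$ docker volume create docker-assemble-backend-cache-username
docker-assemble-backend-cache-username
```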
Alternatively you can specify a named docker volume to use for the cache. For example:
```
~$ docker volume create $USER-assemble-cache
username-assemble-cache
~$ docker assemble backend start --cache-volume=username-assemble-cache
Pulling image «…»: Success
Started container "docker-assemble-backend-username" (74476d3fdea7)
```
For information regarding the current cache contents, run the command `docker assemble backend cache`.
To clean the cache, run `docker assemble backend cache purge`.

assemble/cli-reference.md

@ -0,0 +1,134 @@
---
title: Docker Assemble CLI reference
description: Docker Assemble CLI reference
keywords: Docker, assemble, Spring Boot, ASP .NET, backend
---
This page provides information about the `docker assemble` command.
## Overview
Docker Assemble (`docker assemble`) is a CLI plugin which provides a language and framework-aware tool that enables users to build an application into an optimized Docker container.
For more information about Docker Assemble, see [Docker Assemble](/assemble/install/).
## `docker assemble` commands
To view the commands and sub-commands available in `docker assemble`, run:
`docker assemble --help`
```
Usage: docker assemble [OPTIONS] COMMAND
assemble is a high-level build tool
Options:
--addr string backend address (default
"docker-container://docker-assemble-backend-Usha-Mandya")
Management Commands:
backend Manage build backend service
Commands:
build Build a project into a container
version Print the version number of docker assemble
Run 'docker assemble COMMAND --help' for more information on a command.
```
### backend
The `docker assemble backend` command allows you to manage the build backend service. Docker Assemble requires its own buildkit instance to be running in a Docker container on the local system.
```
Usage: docker assemble backend [OPTIONS] COMMAND
Manage build backend service
Options:
--addr string backend address (default
"docker-container://docker-assemble-backend-username")
Management Commands:
cache Manage build cache
Commands:
info Print information about build backend service
logs Show logs for build backend service
start Start build backend service
stop Stop build backend service
Run 'docker assemble backend COMMAND --help' for more information on a command.
```
For example:
```
docker assemble backend start
Pulling image «…»: Success
Started backend container "docker-assemble-backend-username" (3e627bb365a4)
```
For more information about `backend`, see [Advanced backend management](/assemble/adv-backend-manage).
### build
The `docker assemble build` command enables you to build a project into a container.
```
Usage: docker assemble build [PATH]
Build a project into a container
Options:
--addr string backend address (default
"docker-container://docker-assemble-backend-username")
--label KEY=VALUE label to write into the image as KEY=VALUE
--name NAME build image with repository NAME (default
taken from project metadata)
--namespace NAMESPACE build image within repository NAMESPACE
(default no namespace)
-o, --option OPTION=VALUE set an option as OPTION=VALUE
--port stringArray port to expose from container
--progress string set type of progress (auto, plain, tty).
Use plain to show container output (default
"auto")
--push push result to registry, not local image store
--push-insecure push result to insecure (http) registry,
not local image store
--tag TAG tag image with TAG (default taken from
project metadata or "latest")
```
For example:
```
~$ docker assemble build docker-springframework
«…»
Successfully built: docker.io/library/hello-boot:1
```
### version
The `docker assemble version` command displays the version number of Docker Assemble.
```
Usage: docker assemble version
Print the version number of docker assemble
Options:
--addr string backend address (default
"docker-container://docker-assemble-backend-username")
```
For example:
```
> docker assemble version
docker assemble v0.31.0
commit: d089e2be00b0f7d7f565aeba11cb8bc6dd56a40b
buildkit: 2bd8e6cb2b42
os/arch: windows/amd64
```

assemble/configure.md

@ -0,0 +1,81 @@
---
title: Configure Docker Assemble
description: Installing Docker Assemble
keywords: Assemble, Docker Enterprise, plugin, Spring Boot, .NET, c#, F#
---
Although you don't need to configure anything to build a project using Docker Assemble, you may wish to override the defaults and, in some cases, add fields that weren't automatically detected from the project file. To support this, Docker Assemble allows you to add a file `docker-assemble.yaml` to the root of your project. The settings you provide in the `docker-assemble.yaml` file override any auto-detection and can themselves be overridden by command-line arguments.
The `docker-assemble.yaml` file is in YAML syntax and has the following informal schema:
- `version`: (_string_) mandatory, must contain `0.2.0`.
- `image`: (_map_) contains options related to the output image.
  - `platforms`: (_list of strings_) lists the possible platforms which can be built (for example, `linux/amd64`, `windows/amd64`). The default is determined automatically from the project type and content. Note that by default Docker Assemble will build only for `linux/amd64` unless `--push` is used. See [Building Multiplatform images](/assemble/images/#multi-platform-images).
  - `ports`: (_list of strings_) contains ports to expose from a container running the image, for example `80/tcp` or `8080`. The default is to automatically determine the set of ports to expose where possible. To disable this and expose no ports, specify a list containing precisely one element, `none`.
  - `labels`: (_map_) contains labels to write into the image as `key`-`value` (_string_) pairs.
  - `repository-namespace`: (_string_) the registry and path component of the desired output image, for example `docker.io/library` or `docker.io/user`.
  - `repository-name`: (_string_) the name of the specific image within `repository-namespace`. Overrides any name derived from the build system specific configuration.
  - `tag`: (_string_) the default tag to use. Overrides any version/tag derived from the build system specific configuration.
- `healthcheck`: (_map_) describes how to check that a container running the image is healthy.
  - `kind`: (_string_) sets the type of healthcheck to perform. Valid values are `none`, `simple-tcpport-open` and `springboot`. See [Health checks](/assemble/images/#health-checks).
  - `interval`: (_duration_) the time to wait between checks.
  - `timeout`: (_duration_) the time to wait before considering the check to have hung.
  - `start-period`: (_duration_) period for the container to initialize before the retries start to count down.
  - `retries`: (_integer_) number of consecutive failures needed to consider a container as unhealthy.
- `springboot`: (_map_) if this is a Spring Boot project, contains related configuration options.
  - `enabled`: (_boolean_) true if this is a Spring Boot project.
  - `java-version`: (_string_) configures the Java version to use. Valid options are `8` and `10`.
  - `build-image`: (_string_) sets a custom base build image.
  - `runtime-images`: (_map_) sets a custom base runtime image by platform. For valid keys, refer to the **Spring Boot** section in [Custom base images](/assemble/images/#custom-base-images).
- `aspnetcore`: (_map_) if this is an ASP.NET Core project, contains related configuration options.
  - `enabled`: (_boolean_) true if this is an ASP.NET Core project.
  - `version`: (_string_) configures the ASP.NET Core version to use. Valid options are `1.0`, `1.1`, `2.0` and `2.1`.
  - `build-image`: (_string_) sets a custom base build image.
  - `runtime-images`: (_map_) sets a custom base runtime image by platform. For valid keys, refer to the **ASP.NET Core** section in [Custom base images](/assemble/images/#custom-base-images).
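For illustration, here is a sketch of a `docker-assemble.yaml` that combines several of these fields. The names and values are hypothetical and assume the nesting described in the list above:
```
version: "0.2.0"
image:
  repository-namespace: "docker.io/myorg"
  repository-name: "my-service"
  tag: "1.0.0"
springboot:
  enabled: true
  java-version: "10"
```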
> **Notes:**
>
> - The only mandatory field in `docker-assemble.yaml` is `version`. All other parameters are optional.
>
> - At most one of `springboot` or `aspnetcore` can be present in the yaml file.
>
> - Fields of type duration are integers with nanosecond granularity. However the following units of time are supported: `ns`, `us` (or `µs`), `ms`, `s`, `m`, `h`. For example, `25s`.
Each setting in the configuration file has a command-line equivalent, which can be used with the `-o/--option` argument. This argument takes a `KEY=VALUE` string, where `KEY` is constructed by joining each element of the YAML hierarchy with a period (`.`).
For example, the `image → repository-namespace` key in the YAML becomes `-o image.repository-namespace=NAME` on the command line, and `springboot → enabled` becomes `-o springboot.enabled=BOOLEAN`.
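For instance, the overrides for the hypothetical project sketched earlier could be passed on the command line like this (illustrative values, not output from a real project):
```
~$ docker assemble build -o image.repository-namespace=docker.io/myorg -o image.repository-name=my-service -o image.tag=1.0.0 /path/to/my/project
```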
The following convenience aliases take precedence over the `-o/--option` equivalents:
- `--namespace` is an alias for `image.repository-namespace`;
- `--name` corresponds to `image.repository-name`;
- `--tag` corresponds to `image.tag`;
- `--label` corresponds to `image.labels` (can be used multiple times);
- `--port` corresponds to `image.ports` (can be used multiple times)
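The same hypothetical overrides expressed with the convenience aliases would look like this:
```
~$ docker assemble build --namespace docker.io/myorg --name my-service --tag 1.0.0 /path/to/my/project
```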

assemble/dot-net.md

@ -0,0 +1,65 @@
---
title: Build a C# ASP.NET Core project
description: Building a C# ASP.NET Core project using Docker Assemble
keywords: Assemble, Docker Enterprise, Spring Boot, container image
---
Ensure you are running the `backend` before you build any projects using Docker Assemble. For instructions on running the backend, see [Install Docker Assemble](/assemble/install).
Clone the git repository you would like to use. The following example uses the `dotnetdemo` repository.
```
~$ git clone https://github.com/mbentley/dotnetdemo
Cloning into 'dotnetdemo'...
«…»
```
Build the project using the `docker assemble build` command by passing it the path to the source repository (or to a subdirectory, as in the following example):
```
~$ docker assemble build dotnetdemo/dotnetdemo
«…»
Successfully built: docker.io/library/dotnetdemo:latest
```
The resulting image is exported to the local Docker image store using a name and a tag which are automatically determined by the project metadata.
```
~$ docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
dotnetdemo latest a055e61e3a9e 24 seconds ago 349MB
```
An image name consists of `«namespace»/«name»:«tag»`, where `«namespace»/` is optional and omitted by default. If the project metadata does not contain a tag (or a version), then `latest` is used. If the project metadata does not contain a name and a name was not provided on the command line, then a fatal error occurs.
Use the `--namespace`, `--name` and `--tag` options to override each element of the image name:
```
~$ docker assemble build --name testing --tag latest dotnetdemo/
«…»
INFO[0007] Successfully built "testing:latest"
~$ docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
testing latest d7f41384814f 32 seconds ago 97.4MB
hello-boot 1 0dbc2c425cff 5 minutes ago 97.4MB
```
Run the container:
```
~$ docker run -d --rm -p 8080:80 dotnetdemo:latest
e1c54291e96967dad402a81c4217978a544e4d7b0fdd3c0a2e2cca384c3b4adb
~$ docker ps
CONTAINER ID IMAGE COMMAND «…» PORTS NAMES
e1c54291e969 dotnetdemo:latest "dotnet dotnetdemo.d…" «…» 0.0.0.0:8080->80/tcp lucid_murdock
~$ docker logs e1c54291e969
warn: Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager[35]
No XML encryptor configured. Key {11bba23a-71ad-4191-b583-4f974e296033} may be persisted to storage in unencrypted form.
Hosting environment: Production
Content root path: /app
Now listening on: http://[::]:80
Application started. Press Ctrl+C to shut down.
~$ curl -s localhost:8080 | grep '<h4>'
<h4>This environment is </h4>
<h4>served from e1c54291e969 at 11/22/2018 16:00:23</h4>
~$ docker rm -f e1c54291e969
```

assemble/images.md (new file)

@@ -0,0 +1,114 @@
---
title: Docker Assemble images
description: Building Docker Assemble images
keywords: Assemble, Docker Enterprise, plugin, Spring Boot, .NET, c#, F#
---
## Multi-platform images
By default, Docker Assemble builds images for the `linux/amd64` platform and exports them to the local Docker image store. This is also true when running Docker Assemble on Windows or macOS. For some application frameworks, Docker Assemble can build multi-platform images to support running on several host platforms. For example, `linux/amd64` and `windows/amd64`.
To support multi-platform images, images must be pushed to a registry instead of the local image store. This is because the local image store can only import uni-platform images which match its platform.
To enable the multi-platform mode, use the `--push` option. For example:
```
docker assemble build --push /path/to/my/project
```
To push to an insecure (unencrypted) registry, use `--push-insecure` instead of `--push`.
## Custom base images
Docker Assemble allows you to override the base images used for building and running your project. For example, the following `docker-assemble.yaml` file defines `maven:3-ibmjava-8-alpine` as the base build image and `openjdk:8-jre-alpine` as the base runtime image (for the `linux/amd64` platform).
```
version: "0.2.0"
springboot:
  enabled: true
  build-image: "maven:3-ibmjava-8-alpine"
  runtime-images:
    linux/amd64: "openjdk:8-jre-alpine"
```
Linux-based images must be Debian, Red Hat, or Alpine-based and have a standard environment with:
- `find`
- `xargs`
- `grep`
- `true`
- a standard POSIX shell (located at `/bin/sh`)
These tools are required for internal inspection that Docker Assemble performs on the images. Depending on the type of your project and your configuration, the base images must meet other requirements as described in the following sections.
### Spring Boot
Install the Java JDK and Maven on the base build image and ensure they are available in `$PATH`. Install a Maven settings file as `/usr/share/maven/ref/settings-docker.xml` (irrespective of the install location of Maven).
Ensure the base runtime image has a Java JRE installed and available in `$PATH`. The build and runtime images must have the same version of Java installed.
Supported build platform:
- `linux/amd64`
Supported runtime platforms:
- `linux/amd64`
- `windows/amd64`
### ASP.NET Core
Install .NET Core SDK on the base build image and ensure it includes the [.NET Core command-line interface tools](https://docs.microsoft.com/en-us/dotnet/core/tools/?tabs=netcore2x).
Install [.NET Core command-line interface tools](https://docs.microsoft.com/en-us/dotnet/core/tools/?tabs=netcore2x) on the base runtime image.
Supported build platform:
- `linux/amd64`
Supported runtime platforms:
- `linux/amd64`
- `windows/amd64`
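As with Spring Boot, you can point an ASP.NET Core project at custom runtime images in `docker-assemble.yaml`. The following is a minimal sketch; the image names and tags are illustrative assumptions, and the platform keys are the supported runtime platforms listed above:

```
version: "0.2.0"
dotnet:
  runtime-images:
    linux/amd64: "mcr.microsoft.com/dotnet/core/aspnet:2.1"
    windows/amd64: "mcr.microsoft.com/dotnet/core/aspnet:2.1-nanoserver-1809"
```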
## Bill of lading
Docker Assemble generates a bill of lading when building an image. This contains information about the tools, base images, libraries, and packages used by Assemble to build the image and that are included in the runtime image. The bill of lading has two parts: one for build and one for runtime.
The build part includes:
- The base image used
- A map of packages installed and their versions
- A map of libraries used for the build and their versions
- A map of build tools and their corresponding versions
The runtime part includes:
- The base image used
- A map of packages installed and their versions
- A map of runtime tools and their versions
You can find the bill of lading by inspecting the resulting image. It is stored using the label `com.docker.assemble.bill-of-lading`:
```
docker image inspect --format '{{ index .Config.Labels "com.docker.assemble.bill-of-lading" }}' <image>
```
> **Note:** The bill of lading is only supported on the `linux/amd64` platform and only for images which are based on Alpine (`apk`), Red Hat (`rpm`) or Debian (`dpkg-query`).
## Health checks
Docker Assemble only supports health checks on `linux/amd64`-based runtime images, and they require certain additional commands to be present depending on the value of `image.healthcheck.kind`:
- `simple-tcpport-open:` requires the `nc` command
- `springboot:` requires the `curl` and `jq` commands
On Alpine (`apk`) and Debian (`dpkg`) based images, these dependencies are installed automatically. For other base images, you must ensure they are present in the images you specify.
If your base runtime image lacks the necessary commands, you may need to set `image.healthcheck.kind` to `none` in your `docker-assemble.yaml` file.
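For example, a minimal sketch of disabling the health check in `docker-assemble.yaml` (following the `image → healthcheck → kind` hierarchy implied by the `image.healthcheck.kind` option; equivalently, pass `-o image.healthcheck.kind=none` on the command line):

```
version: "0.2.0"
image:
  healthcheck:
    kind: "none"
```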

assemble/install.md (new file)

@@ -0,0 +1,37 @@
---
title: Install Docker Assemble
description: Installing Docker Assemble
keywords: Assemble, Docker Enterprise, plugin, Spring Boot, .NET, c#, F#
---
## Overview
Docker Assemble (`docker assemble`) is a plugin which provides a language and framework-aware tool that enables users to build an application into an optimized Docker container. With Docker Assemble, users can quickly build Docker images without providing configuration information (like Dockerfile) by auto-detecting the required information from existing framework configuration.
Docker Assemble supports the following application frameworks:
- [Spring Boot](https://spring.io/projects/spring-boot) when using the [Maven](https://maven.apache.org/) build system
- [ASP.NET Core](https://docs.microsoft.com/en-us/aspnet/core) (with C# and F#)
## System requirements
Docker Assemble requires Linux, Windows, or macOS Mojave, with the Docker Engine installed.
## Install
Docker Assemble requires its own buildkit instance to be running in a Docker container on the local system. You can start and manage the backend using the `backend` subcommand of `docker assemble`.
To start the backend, run:
```
~$ docker assemble backend start
Pulling image «…»: Success
Started backend container "docker-assemble-backend-username" (3e627bb365a4)
```
When the backend is running, it can be used for multiple builds and does not need to be restarted.
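To confirm that the backend container is up, you can list it with the standard Docker CLI (a quick check; the container name follows the `docker-assemble-backend-«username»` pattern shown above):

```
~$ docker ps --filter "name=docker-assemble-backend"
«…»
```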
> **Note:** For instructions on running a remote backend, accessing logs, saving the build cache in a named volume, accessing a host port, and for information about the buildkit instance, see `--help`.
For advanced backend user information, see [Advanced Backend Management](/assemble/adv-backend-manage/).

assemble/spring-boot.md (new file)

@@ -0,0 +1,70 @@
---
title: Build a Spring Boot project
description: Building a Spring Boot project using Docker Assemble
keywords: Assemble, Docker Enterprise, Spring Boot, container image
---
Ensure you are running the `backend` before you build any projects using Docker Assemble. For instructions on running the backend, see [Install Docker Assemble](/assemble/install).
Clone the git repository you would like to use. The following example uses the `docker-springframework` repository.
```
~$ git clone https://github.com/anokun7/docker-springframework
Cloning into 'docker-springframework'...
«…»
```
When you build a Spring Boot project, Docker Assemble automatically detects the information it requires from the `pom.xml` project file.
Build the project using the `docker assemble build` command by passing it the path to the source repository:
```
~$ docker assemble build docker-springframework
«…»
Successfully built: docker.io/library/hello-boot:1
```
The resulting image is exported to the local Docker image store using a name and a tag which are automatically determined by the project metadata.
```
~$ docker image ls | head -n 2
REPOSITORY TAG IMAGE ID CREATED SIZE
hello-boot 1 00b0fbcf3c40 About a minute ago 97.4MB
```
An image name consists of `«namespace»/«name»:«tag»`, where `«namespace»/` is optional and defaults to none. If the project metadata does not contain a tag (or a version), then `latest` is used. If the project metadata does not contain a name and one was not provided on the command line, a fatal error occurs.
Use the `--namespace`, `--name` and `--tag` options to override each element of the image name:
```
~$ docker assemble build --name testing --tag latest docker-springframework/
«…»
INFO[0007] Successfully built "testing:latest"
~$ docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
testing latest d7f41384814f 32 seconds ago 97.4MB
hello-boot 1 0dbc2c425cff 5 minutes ago 97.4MB
```
Run the container:
```
~$ docker run -d --rm -p 8080:8080 hello-boot:1
b2c88bdc35761ba2b99f85ce1f3e3ce9ed98931767b139a0429865cadb46ce13
~$ docker ps
CONTAINER ID IMAGE COMMAND «…» PORTS NAMES
b2c88bdc3576 hello-boot:1 "java -Djava.securit…" «…» 0.0.0.0:8080->8080/tcp silly_villani
~$ docker logs b2c88bdc3576
. ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v1.5.2.RELEASE)
«…» : Starting Application v1 on b2c88bdc3576 with PID 1 (/hello-boot-1.jar started by root in /)
«…»
~$ curl -s localhost:8080
Hello from b2c88bdc3576
~$ docker rm -f b2c88bdc3576
```

@@ -529,7 +529,7 @@ an error.
### credential_spec
> **Note**: this option was added in v3.3.
> **Note**: This option was added in v3.3. Using group Managed Service Account (gMSA) configurations with compose files is supported in Compose version 3.8.
Configure the credential spec for managed service account. This option is only
used for services using Windows containers. The `credential_spec` must be in the
@@ -558,6 +558,23 @@ credential_spec:
registry: my-credential-spec
```
#### Example gMSA configuration
When configuring a gMSA credential spec for a service, you only need
to specify a credential spec with `config`, as shown in the following example:
```
version: "3.8"
services:
myservice:
image: myimage:latest
credential_spec:
config: my_credential_spec
configs:
my_credentials_spec:
file: ./my-credential-spec.json|
```
### depends_on
Express dependency between services. Service dependencies cause the following

@@ -13,6 +13,10 @@ and writes them in files using the JSON format. The JSON format annotates each line with its
origin (`stdout` or `stderr`) and its timestamp. Each log file contains information about
only one container.
```json
{"log":"Log line is here\n","stream":"stdout","time":"2019-01-01T11:11:11.111111111Z"}
```
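For reference, a minimal `daemon.json` sketch that sets `json-file` as the default logging driver, with typical rotation options (the key names are the standard Engine daemon options; the values are illustrative):

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```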
## Usage
To use the `json-file` driver as the default logging driver, set the `log-driver`

@@ -68,4 +68,4 @@ files no larger than 10 megabytes each.
```bash
$ docker run -it --log-opt max-size=10m --log-opt max-file=3 alpine ash
```
```

@@ -0,0 +1,131 @@
---
title: Configure Docker Desktop Enterprise on Mac
description: Learn about Docker Desktop Enterprise
keywords: Docker EE, Windows, Mac, Docker Desktop, Enterprise
---
This page describes how system administrators can configure Docker Desktop Enterprise (DDE) settings, and how to specify and lock configuration parameters to create a standardized development environment on Mac operating systems.
## Environment configuration (administrators only)
The administrator configuration file allows you to customize and standardize your Docker Desktop environment across the organization.
When you install Docker Desktop Enterprise, a configuration file with default values is installed at the following location. Do not change the location of the `admin-settings.json` file.
`/Library/Application Support/Docker/DockerDesktop/admin-settings.json`
To edit `admin-settings.json`, you must have sudo access privileges.
### Syntax for `admin-settings.json`
1. `configurationFileVersion`: This must be the first parameter listed in `admin-settings.json`. It specifies the version of the configuration file format and must not be changed.
2. A nested list of configuration parameters, each of which contains a minimum of the following two settings:
- `locked`: If set to `true`, users without elevated access privileges are not able to edit this setting from the UI or by directly editing the `settings.json` file (the `settings.json` file stores the user's preferences). If set to `false`, users without elevated access privileges can change this setting from the UI or by directly editing
`settings.json`. If this setting is omitted, the default value is `false`.
- `value`: Specifies the value of the parameter. Docker Desktop Enterprise uses the value when first started and after a reset to factory defaults. If this setting is omitted, a default value that is built into the application is used.
### Parameters and settings
The following `admin-settings.json` code and table provide the required syntax and descriptions for parameters and values:
```json
{
  "configurationFileVersion": 1,
  "analyticsEnabled": {
    "locked": false,
    "value": false
  },
  "dockerCliOptions": {
    "stackOrchestrator": {
      "locked": false,
      "value": "swarm"
    }
  },
  "proxy": {
    "locked": false,
    "value": {
      "http": "http://proxy.docker.com:8080",
      "https": "https://proxy.docker.com:8080",
      "exclude": "docker.com,github.com"
    }
  },
  "linuxVM": {
    "cpus": {
      "locked": false,
      "value": 2
    },
    "memoryMiB": {
      "locked": false,
      "value": 2048
    },
    "swapMiB": {
      "locked": false,
      "value": 1024
    },
    "diskSizeMiB": {
      "locked": false,
      "value": 65536
    },
    "dataFolder": {
      "value": "/Users/...",
      "locked": false
    },
    "filesharingDirectories": {
      "locked": false,
      "value": ["/Users", "..."]
    },
    "dockerDaemonOptions": {
      "experimental": {
        "locked": false,
        "value": true
      }
    }
  },
  "kubernetes": {
    "enabled": {
      "locked": false,
      "value": false
    },
    "showSystemContainers": {
      "locked": false,
      "value": false
    },
    "podNetworkCIDR": {
      "locked": false,
      "value": null
    },
    "serviceCIDR": {
      "locked": false,
      "value": null
    }
  }
}
```
Parameter values and descriptions for environment configuration on Mac:
| Parameter | Description |
| :--------------------------------- | :--------------------------------- |
| `configurationFileVersion` | Specifies the version of the configuration file format. |
| `analyticsEnabled` | If `value` is set to `true`, Docker Desktop Enterprise is allowed to send diagnostics, crash reports, and usage data. This information helps Docker improve and troubleshoot the application. |
| `dockerCliOptions` | Specifies key-value pairs in the user's `~/.docker/config.json` file. In the sample code provided, the orchestration for docker stack commands is set to `swarm` rather than `kubernetes`. |
| `proxy` | The `http` setting specifies the HTTP proxy setting. The `https` setting specifies the HTTPS proxy setting. The `exclude` setting specifies a comma-separated list of hosts and domains to bypass the proxy. **Warning:** This parameter should be locked after it has been set (`"locked": true`). |
| `linuxVM` | Parameters and settings related to the Linux VM - grouped together in this example for convenience. |
| `cpus` | Specifies the default number of virtual CPUs for the VM. If the physical machine has only 1 core, the default value is set to 1. |
| `memoryMiB` | Specifies the amount of memory in MiB (1 MiB = 1048576 bytes) allocated for the VM.|
| `swapMiB` | Specifies the amount of memory in MiB (1 MiB = 1048576 bytes) allocated for the swap file. |
| `dataFolder` | Specifies the directory containing the VM disk files. |
| `diskSizeMiB` | Specifies the amount of disk storage in MiB (1 MiB = 1048576 bytes) allocated for images and containers. |
| `filesharingDirectories` | The host folders that users can bind-mount in containers. |
| `dockerDaemonOptions` | Overrides the options in the linux daemon config file. For more information, see [Docker engine reference](https://docs.docker.com/engine/reference/commandline/dockerd/#daemon-configuration-file). |
| (End of `linuxVM` section.) | |
| `kubernetes` | Parameters and settings related to kubernetes options - grouped together here for convenience. |
| `enabled` | If `locked` is set to `true`, the Kubernetes cluster starts when Docker Desktop Enterprise is started. |
| `showSystemContainers` | If true, displays Kubernetes internal containers when running docker commands such as `docker ps`. |
| `podNetworkCIDR` | This is currently unimplemented. `locked` must be set to true. |
| `serviceCIDR` | This is currently unimplemented. `locked` must be set to true. |
| (End of `kubernetes` section.) | |
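As noted in the table, the `proxy` parameter should be locked once it has been set. A minimal sketch of the locked form, reusing the values from the sample configuration above:

```json
"proxy": {
  "locked": true,
  "value": {
    "http": "http://proxy.docker.com:8080",
    "https": "https://proxy.docker.com:8080",
    "exclude": "docker.com,github.com"
  }
}
```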

@@ -0,0 +1,182 @@
---
title: Configure Docker Desktop Enterprise on Windows
description: Learn about Docker Desktop Enterprise configuration
keywords: Docker Admin, Windows, Docker Desktop, Enterprise
---
This page describes how system administrators can configure Docker Desktop Enterprise (DDE) settings, and how to specify and lock configuration parameters to create a standardized development environment on Windows operating systems.
## Environment configuration (administrators only)
The administrator configuration file allows you to customize and standardize your Docker Desktop environment across the organization.
When you install Docker Desktop Enterprise, a configuration file with default values is installed at the following location. Do not change the location of the `admin-settings.json` file.
`%ProgramData%\DockerDesktop\admin-settings.json`
which defaults to:
`C:\ProgramData\DockerDesktop\admin-settings.json`
You must have administrator access privileges to edit `admin-settings.json`.
### Syntax for `admin-settings.json`
1. `configurationFileVersion`: This must be the first parameter listed in `admin-settings.json`. It specifies the version of the configuration file format and must not be changed.
2. A nested list of configuration parameters, each of which contains a minimum of
the following two settings:
- `locked`: If set to `true`, users without elevated access privileges are not able to edit this setting
from the UI or by directly editing the `settings.json` file (the `settings.json` file stores the user's preferences). If set to `false`, users without elevated access privileges can change this setting from the UI or by directly editing
`settings.json`. If this setting is omitted, the default value is `false`.
- `value`: Specifies the value of the parameter. Docker Desktop Enterprise uses the value when first started and after a reset to factory defaults. If this setting is omitted, a default value that is built into the application is used.
### Parameters and settings
The following `admin-settings.json` code and table provide the required syntax and descriptions for parameters and values:
```json
{
  "configurationFileVersion": 1,
  "engine": {
    "locked": false,
    "value": "linux"
  },
  "analyticsEnabled": {
    "locked": false,
    "value": false
  },
  "exposeDockerAPIOnTCP2375": {
    "locked": false,
    "value": false
  },
  "dockerCliOptions": {
    "stackOrchestrator": {
      "locked": false,
      "value": "swarm"
    }
  },
  "proxy": {
    "locked": false,
    "value": {
      "http": "http://proxy.docker.com:8080",
      "https": "https://proxy.docker.com:8080",
      "exclude": "docker.com,github.com"
    }
  },
  "linuxVM": {
    "cpus": {
      "locked": false,
      "value": 2
    },
    "memoryMiB": {
      "locked": false,
      "value": 2048
    },
    "swapMiB": {
      "locked": false,
      "value": 1024
    },
    "dataFolder": {
      "locked": false,
      "value": null
    },
    "diskSizeMiB": {
      "locked": false,
      "value": 65536
    },
    "hypervCIDR": {
      "locked": false,
      "value": "10.0.75.0/28"
    },
    "vpnkitCIDR": {
      "locked": false,
      "value": "192.168.65.0/28"
    },
    "useDnsForwarder": {
      "locked": false,
      "value": true
    },
    "dns": {
      "locked": false,
      "value": "8.8.8.8"
    },
    "dockerDaemonOptions": {
      "experimental": {
        "locked": false,
        "value": true
      }
    }
  },
  "windows": {
    "dockerDaemonOptions": {
      "experimental": {
        "locked": false,
        "value": true
      }
    }
  },
  "kubernetes": {
    "enabled": {
      "locked": false,
      "value": false
    },
    "showSystemContainers": {
      "locked": false,
      "value": false
    },
    "podNetworkCIDR": {
      "locked": false,
      "value": null
    },
    "serviceCIDR": {
      "locked": false,
      "value": null
    }
  },
  "sharedDrives": {
    "locked": true,
    "value": []
  },
  "sharedFolders": ["%USERPROFILE%"]
}
```
Parameter values and descriptions for environment configuration on Windows:
| Parameter | Description |
| :--------------------------------- | :--------------------------------- |
| `configurationFileVersion` | Specifies the version of the configuration file format. |
| `engine` | Specifies the default Docker engine to be used. `linux` specifies the Linux engine. `windows` specifies the Windows engine. |
| `analyticsEnabled` | If `value` is set to `true`, Docker Desktop Enterprise is allowed to send diagnostics, crash reports, and usage data. This information helps Docker improve and troubleshoot the application. |
| `exposeDockerAPIOnTCP2375` | If `value` is set to `true`, the Docker API is exposed on port 2375. **Warning:** This is unauthenticated and should only be enabled if protected by suitable firewall rules. |
| `dockerCliOptions` | Specifies key-value pairs in the user's `%HOME%\.docker\config.json` file. In the sample code provided, the orchestration for docker stack commands is set to `swarm` rather than `kubernetes`. |
| `proxy` | The `http` setting specifies the HTTP proxy setting. The `https` setting specifies the HTTPS proxy setting. The `exclude` setting specifies a comma-separated list of hosts and domains to bypass the proxy. **Warning:** This parameter should be locked after it has been set (`"locked": true`). |
| `linuxVM` | Parameters and settings related to the Linux VM - grouped together in this example for convenience. |
| `cpus` | Specifies the default number of virtual CPUs for the VM. If the physical machine has only 1 core, the default value is set to 1. |
| `memoryMiB` | Specifies the amount of memory in MiB (1 MiB = 1048576 bytes) allocated for the VM. |
| `swapMiB` | Specifies the amount of memory in MiB (1 MiB = 1048576 bytes) allocated for the swap file. |
| `dataFolder` | Specifies the root folder where Docker Desktop should put VM disk files. |
| `diskSizeMiB` | Specifies the amount of disk storage in MiB (1 MiB = 1048576 bytes) allocated for images and containers. |
| `hypervCIDR` | Specifies the subnet used for Hyper-V networking. The chosen subnet must not conflict with other resources on your network. |
| `vpnkitCIDR` | Specifies the subnet used for VPNKit networking and drive sharing. The chosen subnet must not conflict with other resources on your network. |
| `useDnsForwarder` | If `value` is set to `true`, this automatically determines the upstream DNS servers based on the host's network adapters. |
| `dns` | If `value` for `useDnsForwarder` is set to `false`, the Linux VM uses the server information in this `value` setting for DNS resolution. |
| `dockerDaemonOptions` | Overrides the options in the Linux daemon config file. For more information, see [Docker engine reference](https://docs.docker.com/engine/reference/commandline/dockerd/#daemon-configuration-file). |
| (End of `linuxVM` section.) | |
| `windows` | Parameters and settings related to the Windows daemon-related options - grouped together in this example for convenience. |
| `dockerDaemonOptions` | Overrides the options in the Windows daemon config file. For more information, see [Docker engine reference](https://docs.docker.com/engine/reference/commandline/dockerd/#daemon-configuration-file). |
| (End of `windows` section.) | |
| `kubernetes` | Parameters and settings related to kubernetes options - grouped together here for convenience. |
| `enabled` | If `locked` is set to `true`, the Kubernetes cluster starts when Docker Desktop Enterprise is started. |
| `showSystemContainers` | If true, displays Kubernetes internal containers when running docker commands such as `docker ps`. |
| `podNetworkCIDR` | This is currently unimplemented. `locked` must be set to true. |
| `serviceCIDR` | This is currently unimplemented. `locked` must be set to true. |
| (End of `kubernetes` section.) | |
| `sharedDrives` | If `locked` is set to `true`, this locks the drives users are allowed to share (for example, `["C", "D"]`), but does not actually share drives by default (sharing a drive prompts the user for a password). `value` is a whitelist of drives that can be shared. **Warning:** When updating this value, if you remove drives that have been shared, you must also `net share /delete` those drives. |
| `sharedFolders` | If specified, restrict the folders the user is allowed to share with Windows containers. |
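For example, to lock drive sharing to a fixed whitelist, the `sharedDrives` fragment of `admin-settings.json` might look like the following (the drive letters are illustrative, reusing the example values from the table above):

```json
"sharedDrives": {
  "locked": true,
  "value": ["C", "D"]
}
```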

@@ -0,0 +1,105 @@
---
title: Install Docker Desktop Enterprise on Mac
description: Learn about Docker Desktop Enterprise
keywords: Docker EE, Mac, Docker Desktop, Enterprise
---
This page contains information about the system requirements and specific instructions that help you install Docker Desktop Enterprise (DDE) on Mac.
> **Warning:** If you are using the Community version of Docker Desktop, you must uninstall Docker Desktop Community in order to install Docker Desktop Enterprise.
## System requirements
- Mac hardware must be a 2010 or newer model, with Intel's hardware support for memory management unit (MMU) virtualization, including Extended Page Tables (EPT) and Unrestricted Mode. You can check to see if your machine has this support by running the following command in a terminal: `sysctl kern.hv_support`
- macOS 10.12 and newer macOS releases are supported. We recommend upgrading to the latest version of macOS.
- At least 4GB of RAM
- VirtualBox prior to version 4.3.30 must NOT be installed (it is incompatible with Docker for Mac). If you have a newer version of VirtualBox installed, it's fine.
> **Note:** Docker supports Docker Desktop Enterprise on the most recent versions of macOS. That is, the current release of macOS and the previous two releases. As new major versions of macOS are made generally available, Docker will stop supporting the oldest version and support the newest version of macOS (in addition to the previous two releases).
## Installation
Download Docker Desktop Enterprise for [**Mac**](https://download.docker.com/mac/enterprise/Docker.pkg). The DDE installer includes Docker Engine, Docker CLI client, and Docker Compose.
Double-click the `.pkg` file to begin the installation and follow the on-screen instructions. When the installation is complete, click the Launchpad icon in the Dock and then **Docker** to start Docker Desktop.
Mac administrators can use the command line option `sudo installer -pkg Docker.pkg -target /` for fine tuning and mass installation. After running this command, you can start Docker Desktop from the Applications folder on each machine.
Administrators can configure additional settings by modifying the administrator configuration file. For more information, see [Configure Desktop Enterprise for Mac](/ee/desktop/admin/configure/mac-admin).
## License file
Install the Docker Desktop Enterprise license file at the following location:
`/Library/Group Containers/group.com.docker/docker_subscription.lic`
You must create the path if it doesn't already exist. If the license file is missing, you will be asked to provide it when you try to run Docker Desktop Enterprise. Contact your system administrator to obtain the license file.
## Firewall exceptions
Docker Desktop Enterprise requires the following firewall exceptions. If you do not have firewall access, or are unsure about how to set firewall exceptions, contact your system administrator.
- The process `com.docker.vpnkit` proxies all outgoing container TCP and
UDP traffic. This includes Docker image downloading but not DNS
resolution, which is performed over a Unix domain socket connected
to the `mDNSResponder` system service.
- The process `com.docker.vpnkit` binds external ports on behalf of
containers. For example, `docker run -p 80:80 nginx` binds port 80 on all
interfaces.
- If using Kubernetes, the API server is exposed with TLS on
`127.0.0.1:6443` by `com.docker.vpnkit`.
## Version packs
Docker Desktop Enterprise is bundled with default version pack [Enterprise 3.0 (Docker Engine 19.03 / Kubernetes 1.14.1)](https://download.docker.com/mac/enterprise/enterprise-3.0.ddvp). System administrators can install version packs using a command line tool to use a different version of the Docker Engine and Kubernetes for development work:
- [Docker Enterprise 2.0 (17.06/Kubernetes 1.8.11)](https://download.docker.com/mac/enterprise/enterprise-2.0.ddvp)
- [Docker Enterprise 2.1 (18.09/Kubernetes 1.11.5)](https://download.docker.com/mac/enterprise/enterprise-2.1.ddvp)
For information on using the CLI tool for version pack installation, see [Command line installation](#command-line-installation).
> **Note:** It is not possible to install the version packs using the Docker Desktop user interface or by double-clicking the `.ddvp` file.
Available version packs are listed within the **Version Selection** option in the Docker Desktop menu. If more than one version pack is installed, you can select the corresponding entry to work with a different version pack. After you select a different version pack, Docker Desktop restarts and the selected Docker Engine and Kubernetes versions are used.
## Command line installation
System administrators can use a command line executable to install and uninstall Docker Desktop Enterprise and version packs.
When you install Docker Desktop Enterprise, the command line tool is installed at the following location:
[ApplicationPath]/Contents/Resources/bin/dockerdesktop-admin
>**Note:** Command line installation is supported for administrators only. You must have `sudo` access privilege to run the CLI commands.
### Version-pack install
Run the following command to install or upgrade a version pack to the version contained in the specified `.ddvp` archive:
dockerdesktop-admin version-pack install [path-to-archive]
>**Note:** You must stop Docker Desktop before installing a version pack.
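For example, assuming the version pack archive was downloaded to the current user's `Downloads` folder (the path is illustrative), an administrator could run:

```
~$ sudo /Applications/Docker.app/Contents/Resources/bin/dockerdesktop-admin version-pack install ~/Downloads/enterprise-2.1.ddvp
```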
### Version-pack uninstall
Run the following command to uninstall the specified version pack:
dockerdesktop-admin version-pack uninstall [version-pack-name]
>**Note:** You must stop Docker Desktop before uninstalling a version pack.
### Application uninstall
Run the following command to uninstall the application:
sudo /Applications/Docker.app/Contents/Resources/bin/dockerdesktop-admin app uninstall
Running the uninstall command with `sudo` also removes files, such as version packs installed by an administrator, that are not accessible to individual users.

@@ -0,0 +1,122 @@
---
title: Install Docker Desktop Enterprise on Windows
description: Learn about Docker Desktop Enterprise
keywords: Docker EE, Windows, Docker Desktop, Enterprise
---
This page contains information about the system requirements and specific instructions that help you install Docker Desktop Enterprise (DDE) on Windows.
> **Warning:** If you are using the Community version of Docker Desktop, you must uninstall Docker Desktop Community in order to install Docker Desktop Enterprise.
## System requirements
- Windows 10 Pro or Enterprise version 15063 or later.
- Hyper-V and Containers Windows features must be enabled.
- The following hardware prerequisites are required to successfully run Client
Hyper-V on Windows 10:
- 64 bit processor with [Second Level Address Translation (SLAT)](http://en.wikipedia.org/wiki/Second_Level_Address_Translation)
- 4GB system RAM
- BIOS-level hardware virtualization support must be enabled in the
BIOS settings:
![Virtualization Technology (VTx) must be enabled in BIOS settings](../../images/windows-prereq.png "BIOS setting information for hardware virtualization support")
> **Note:** Docker supports Docker Desktop Enterprise on Windows based on Microsoft's support lifecycle for Windows 10 operating system. For more information, see the [Windows lifecycle fact sheet](https://support.microsoft.com/en-us/help/13853/windows-lifecycle-fact-sheet).
## Installation
Download Docker Desktop Enterprise for [**Windows**](https://download.docker.com/win/enterprise/DockerDesktop.msi).
The Docker Desktop Enterprise installer includes Docker Engine, Docker CLI client, and Docker Compose.
Double-click the `.msi` file to begin the installation and follow the on-screen instructions. When the installation is complete, select **Docker Desktop** from the Start menu to start Docker Desktop.
For information about installing DDE using the command line, see [Command line installation](#command-line-installation).
## License file
Install the Docker Desktop Enterprise license file at the following location:
%ProgramData%\DockerDesktop\docker_subscription.lic
You must create the path if it doesn't already exist. If the license file is missing, you will be asked to provide it when you try to run Docker Desktop Enterprise. Contact your system administrator to obtain the license file.
## Firewall exceptions
Docker Desktop Enterprise requires the following firewall exceptions. If you do not have firewall access, or are unsure about how to set firewall exceptions, contact your system administrator.
- The process `com.docker.vpnkit` proxies all outgoing container TCP and
UDP traffic. This includes Docker image downloading but not DNS
resolution, which is performed over a loopback TCP and UDP connection
to the main application.
- The process `com.docker.vpnkit` binds external ports on behalf of
containers. For example, `docker run -p 80:80 nginx` binds port 80 on all
interfaces.
- If using Kubernetes, the API server is exposed with TLS on `127.0.0.1:6445` by `com.docker.vpnkit`.
## Version packs
Docker Desktop Enterprise is bundled with default version pack [Enterprise 3.0 (Docker Engine 19.03 / Kubernetes 1.14.1)](https://download.docker.com/win/enterprise/enterprise-3.0.ddvp). System administrators can install version packs using a command line tool to use a different version of the Docker Engine and Kubernetes for development work:
- [Docker Enterprise 2.0 (17.06/Kubernetes 1.8.11)](https://download.docker.com/win/enterprise/enterprise-2.0.ddvp)
- [Docker Enterprise 2.1 (18.09/Kubernetes 1.11.5)](https://download.docker.com/win/enterprise/enterprise-2.1.ddvp)
For information on using the CLI tool for version pack installation, see [Command line installation](#command-line-installation).
Available version packs are listed within the **Version Selection** option in the Docker Desktop menu. If more than one version pack is installed, you can select the corresponding entry to work with a different version pack. After you select a different version pack, Docker Desktop restarts and the selected Docker Engine and Kubernetes versions are used.
## Command line installation
>**Note:** Command line installation is supported for administrators only. You must have `administrator` access to run the CLI commands.
System administrators can use the command line for mass installation and fine tuning the Docker Desktop Enterprise deployment. Run the following command as an administrator to perform a silent installation:
msiexec /i DockerDesktop.msi /quiet
You can also set the following properties:
- `INSTALLDIR [string]:` configures the folder to install Docker Desktop to (default is C:\Program Files\Docker\Docker)
- `STARTMENUSHORTCUT [yes|no]:` specifies whether to create an entry in the Start menu for Docker Desktop (default is yes)
- `DESKTOPSHORTCUT [yes|no]:` specifies whether to create a shortcut on the desktop for Docker Desktop (default is yes)
For example:
msiexec /i DockerDesktop.msi /quiet AUTOSTART=no STARTMENUSHORTCUT=no INSTALLDIR="D:\Docker Desktop"
Docker Desktop Enterprise includes a command line executable to install and uninstall version packs. When you install DDE, the command line tool is installed at the following location:
[ApplicationPath]\dockerdesktop-admin.exe
### Version-pack install
Run the following command to install or upgrade a version pack to the version contained in the specified `.ddvp` archive:
dockerdesktop-admin.exe -InstallVersionPack=['path-to-archive']
>**Note:** You must stop Docker Desktop before installing a version pack.
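For example, assuming the version pack archive was downloaded to `C:\Downloads` (the path is illustrative), an administrator could run:

```
dockerdesktop-admin.exe -InstallVersionPack='C:\Downloads\enterprise-2.1.ddvp'
```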
### Version-pack uninstall
Run the following command to uninstall the specified version pack:
dockerdesktop-admin.exe -UninstallVersionPack=[version-pack-name|'path-to-archive']
>**Note:** You must stop Docker Desktop before uninstalling a version pack.
### Application uninstall
To uninstall the application:
1. Open the **Add or remove programs** dialog.
1. Select **Docker Desktop** from the **Apps & features** list.
1. Click **Uninstall**.

@@ -0,0 +1,42 @@
---
title: Application Designer
description: Docker Desktop Enterprise Application Designer
keywords: Docker EE, Windows, Mac, Docker Desktop, Enterprise, templates, designer
---
## Overview
The Application Designer helps Docker developers quickly create new
Docker apps using a library of templates. To start the Application
Designer, select the **Design new application** menu entry.
![The Application Designer lets you choose an existing template or create a custom application.](./images/app-design-start.png "Application Designer")
The list of available templates is provided:
![You can tab through the available application templates. A description of each template is provided.](./images/app-design-choose.png "Available templates for application creation")
After selecting a template, you can customize your application. For example, if you select **Flask / NGINX / MySQL**, you can then:
- select a different version of Python or MySQL; and
- choose different external ports:
![You can customize your application, which includes specifying database, proxy, and other details.](./images/app-design-custom.png "Customizing your application")
You can then name your application and customize the disk location:
![You can also customize the name and location of your application.](./images/app-design-custom2.png "Naming and specifying a location for your application")
When you select **Assemble**, your application is created.
![When you assemble your application, a status screen is displayed.](./images/app-design-test.png "Assembling your application")
Once assembled, the following screen allows you to run the application. Select **Run application** to pull the images and start the containers:
![When you run your application, the terminal displays output from the application.](./images/app-design-run.png "Running your application")
Use the corresponding buttons to start and stop your application. Select **Open in Finder** on Mac or **Open in Explorer** on Windows to
view application files on disk. Select **Open in Visual Studio Code** to open files with an editor. Note that debug logs from the application are displayed in the lower part of the Application Designer
window.

(Binary image files added in this change are not shown.)