mirror of https://github.com/docker/docs.git
Compare commits
No commits in common. "archive/publish-tools" and "main" have entirely different histories.

@@ -1,6 +1,14 @@
-.dockerignore
-Dockerfile
-.git
+.DS_Store
 .github
-tests
-_site
+.gitignore
+.idea
+.hugo_build.lock
+_releaser
+CONTRIBUTING.md
+Dockerfile
+compose.yml
+docker-bake.hcl
+public
+node_modules
+resources
+tmp

@@ -0,0 +1,5 @@
# Auto-detect text files, ensure they use LF.
* text=auto eol=lf

# Fine-tune GitHub's language detection
content/**/*.md linguist-detectable

@@ -0,0 +1,44 @@
# Each line is a file pattern followed by one or more owners.
# Owners will be requested for review when someone opens a pull request.

# For more details, see https://docs.github.com/en/repositories/managing-your-repositorys-settings-and-features/customizing-your-repository/about-code-owners

/content/manuals/build/ @crazy-max @ArthurFlag

/content/manuals/build-cloud/ @crazy-max @craig-osterhout

/content/manuals/compose/ @aevesdocker

/content/manuals/desktop/ @aevesdocker

/content/manuals/extensions/ @aevesdocker

/content/manuals/extensions-sdk/ @aevesdocker

/content/manuals/scout/ @craig-osterhout

/content/manuals/docker-hub/ @craig-osterhout

/content/manuals/engine/ @thaJeztah @ArthurFlag

/content/reference/api/engine/ @thaJeztah @ArthurFlag

/content/reference/cli/ @thaJeztah @ArthurFlag

/content/manuals/subscription/ @sarahsanders-docker

/content/manuals/security/ @aevesdocker @sarahsanders-docker

/content/manuals/admin/ @sarahsanders-docker

/content/manuals/billing/ @sarahsanders-docker

/content/manuals/accounts/ @sarahsanders-docker

/content/manuals/ai/ @ArthurFlag

/_vendor @sarahsanders-docker @ArthurFlag

/content/manuals/cloud/ @craig-osterhout

/content/manuals/dhi/ @craig-osterhout

@@ -0,0 +1,32 @@
# https://docs.github.com/en/communities/using-templates-to-encourage-useful-issues-and-pull-requests/syntax-for-githubs-form-schema
name: Broken link
description: Four-oh-four!
title: '[404]: <link text>'
labels:
  - status/triage

body:
  - type: input
    id: location
    attributes:
      label: Location
      description: Where did you find the broken link?
      placeholder: https://docs.docker.com/
    validations:
      required: true
  - type: input
    id: target
    attributes:
      label: Broken link
      description: Where does the broken link point to?
      placeholder: https://docs.docker.com/
    validations:
      required: true
  - type: textarea
    id: comment
    attributes:
      label: Comment
      description: Do you have any additional information to share?
      placeholder: "I think this points to the wrong page..."
    validations:
      required: false

@@ -0,0 +1,23 @@
blank_issues_enabled: false
contact_links:
  - name: Slack
    url: https://dockr.ly/comm-slack
    about: Ask questions in the Docker Community Slack
  - name: Moby
    url: https://github.com/moby/moby/issues
    about: Bug reports for Docker Engine
  - name: Docker Desktop for Windows
    url: https://github.com/docker/for-win/issues
    about: Bug reports for Docker Desktop for Windows
  - name: Docker Desktop for Mac
    url: https://github.com/docker/for-mac/issues
    about: Bug reports for Docker Desktop for Mac
  - name: Docker Desktop for Linux
    url: https://github.com/docker/for-linux/issues
    about: Bug reports for Docker Desktop for Linux
  - name: Docker Compose
    url: https://github.com/docker/compose/issues
    about: Bug reports for Docker Compose
  - name: Docker Buildx
    url: https://github.com/docker/buildx/issues
    about: Bug reports for Docker Buildx

@@ -0,0 +1,48 @@
# https://docs.github.com/en/communities/using-templates-to-encourage-useful-issues-and-pull-requests/syntax-for-githubs-form-schema
name: Docs issue
description: Report incorrect or missing content in docs, or a website issue
labels:
  - status/triage

body:
  - type: checkboxes
    attributes:
      label: Is this a docs issue?
      description: |
        Use this issue for reporting issues related to Docker documentation.
        For product issues, refer to the corresponding product repository.
      options:
        - label: My issue is about the documentation content or website
          required: true
  - type: dropdown
    attributes:
      label: Type of issue
      description: What type of problem are you reporting?
      multiple: false
      options:
        - Information is incorrect
        - I can't find what I'm looking for
        - There's a problem with the website
        - Other
    validations:
      required: true
  - type: textarea
    attributes:
      label: Description
      description: |
        Briefly describe the problem that you found.
    validations:
      required: true
  - type: input
    id: location
    attributes:
      label: Location
      description: Where did you find the problem?
      placeholder: "https://docs.docker.com/"
    validations:
      required: true
  - type: textarea
    attributes:
      label: Suggestion
      description: >
        Let us know if you have specific ideas on how we can fix the issue.

@@ -0,0 +1,25 @@
# https://docs.github.com/en/communities/using-templates-to-encourage-useful-issues-and-pull-requests/syntax-for-githubs-form-schema
name: New guide
description: Propose a new guide for Docker docs
labels:
  - area/guides
  - kind/proposal

body:
  - type: textarea
    attributes:
      label: Description
      description: |
        Briefly describe the topic that you would like us to cover.
    validations:
      required: true
  - type: checkboxes
    attributes:
      label: Would you like to contribute this guide?
      description: |
        If you select this checkbox, you indicate that you're willing to
        contribute this guide. If not, we will treat this issue as a request,
        and someone (a Docker employee, Docker captain, or community member)
        may pick it up and start working on it.
      options:
        - label: "Yes"

@@ -0,0 +1,7 @@
version: 2
updates:
  - package-ecosystem: "github-actions"
    open-pull-requests-limit: 10
    directory: "/"
    schedule:
      interval: "daily"

@@ -0,0 +1,111 @@
---
applyTo: '**/*.md'
---
# Documentation Writing Instructions

These are our documentation writing style guidelines.

## General style tips

* Get to the point fast.
* Talk like a person.
* Simpler is better.
* Be brief. Give customers just enough information to make decisions confidently. Prune every excess word.
* We use Hugo to generate our docs.

## Grammar

* Use present tense verbs (is, open) instead of past tense (was, opened).
* Write factual statements and direct commands. Avoid hypotheticals like "could" or "would".
* Use active voice where the subject performs the action.
* Write in second person (you) to speak directly to readers.
* Use gender-neutral language.
* Avoid multiple -ing words that can create ambiguity.
* Keep prepositional phrases simple and clear.
* Place modifiers close to what they modify.

## Capitalization

* Use sentence-style capitalization for everything except proper nouns.
* Always capitalize proper nouns.
* Don't capitalize the spelled-out form of an acronym unless it's a proper noun.
* In programming languages, follow the traditional capitalization of keywords and other special terms.
* Don't use all uppercase for emphasis.

## Numbers

* Spell out numbers for zero through nine, unless space is limited. Use numerals for 10 and above.
* Spell out numbers at the beginning of a sentence.
* Spell out ordinal numbers such as first, second, and third. Don't add -ly to form adverbs from ordinal numbers.

## Punctuation

* Use short, simple sentences.
* End all sentences with a period.
* Use one space after punctuation marks.
* After a colon, capitalize only proper nouns.
* Avoid semicolons - use separate sentences instead.
* Use question marks sparingly.
* Don't use slashes (/) - use "or" instead.

## Text formatting

* UI elements, like menu items, dialog names, and names of text boxes, should be in bold text.
* Use code style for:
  * Code elements, like method names, property names, and language keywords.
  * SQL commands.
  * Command-line commands.
  * Database table and column names.
  * Resource names (like virtual machine names) that shouldn't be localized.
  * URLs that you don't want to be selectable.
* For code placeholders, if you want users to replace part of an input string with their own values, use angle brackets (less than < and greater than > characters) on that placeholder text.
* Don't apply an inline style like italic, bold, or inline code style to headings.
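
For example, a sentence that follows these formatting rules might be marked up like this (an illustrative sketch; the UI labels are placeholders, not taken from an existing page):

```markdown
Select **Settings**, then enter a name in the **Container name** text box.
Run `docker tag <SOURCE_IMAGE> <TARGET_IMAGE>` to give the image a new name.
```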

## Alerts

* Alerts are a Markdown extension to create block quotes that render with colors and icons that indicate the significance of the content. The following alert types are supported:

  * `[!NOTE]` Information the user should notice even if skimming.
  * `[!TIP]` Optional information to help a user be more successful.
  * `[!IMPORTANT]` Essential information required for user success.
  * `[!CAUTION]` Negative potential consequences of an action.
  * `[!WARNING]` Dangerous certain consequences of an action.
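
For example, a note alert is written as a block quote with the alert type on its first line (a minimal sketch using the same syntax as the admonition snippets in this repository):

```markdown
> [!NOTE]
> This feature requires Docker Desktop version 4.24 or later.
```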

## Links

* Links to other documentation articles should be relative, not absolute. Include the `.md` suffix.
* Links to bookmarks within the same article should be relative and start with `#`.
* Link descriptions should be descriptive and make sense on their own. Don't use "click here" or "this link" or "here".
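
For example (the link targets here are hypothetical and only illustrate the pattern):

```markdown
Learn more in [Build your first image](../get-started/build-your-first-image.md), or skip ahead to [Next steps](#next-steps).
```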

## Images

* Use images only when they add value.
* Images have a descriptive and meaningful alt text that starts with "Screenshot showing" and ends with ".".
* Videos have a descriptive and meaningful alt text or title that starts with "Video showing" and ends with ".".
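
For example, an image reference that follows this rule might look like the following sketch (the file path is a placeholder):

```markdown
![Screenshot showing the Containers view in Docker Desktop.](images/desktop-containers.webp)
```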

## Numbered steps

* Write complete sentences with capitalization and periods.
* Use imperative verbs.
* Clearly indicate where actions take place (UI location).
* For single steps, use a bullet instead of a number.
* When allowed, use angle brackets for menu sequences (File > Open).
* When writing ordered lists, only use 1's.
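
For example, a short procedure written this way might look like the following sketch (the steps are illustrative); because every item uses `1.`, Markdown still renders them as 1, 2, 3:

```markdown
1. Open the **Settings** menu.
1. Select **Resources** and adjust the memory limit.
1. Select **Apply & restart**.
```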

## Terminology

* Use "Select" instead of "Click" for UI elements like buttons, menu items, links, dropdowns, and checkboxes.
* Use "might" instead of "may" for conditional statements.
* Avoid Latin abbreviations like "e.g.". Use "for example" instead.
* Use the verb "to enable" instead of "to allow" unless you're referring to permissions.
* Follow the terms and capitalization guidelines in #fetch [VS Code docs wiki](https://github.com/microsoft/vscode-docs/wiki/VS-Code-glossary)

## Complete style guide

Find all the details of the style guide in these files:

- `./content/contribute/style/grammar.md` – Grammar rules
- `./content/contribute/style/formatting.md` – Formatting rules
- `./content/contribute/style/recommended-words.md` – Approved words and phrasing
- `./content/contribute/style/voice-tone.md` – Voice and tone guidance

@@ -0,0 +1,202 @@
area/ai:
  - changed-files:
      - any-glob-to-any-file:
          - content/manuals/ai/**
          - content/reference/cli/model/**

area/release:
  - changed-files:
      - any-glob-to-any-file:
          - .github/**
          - hack/releaser/**
          - netlify.toml

area/config:
  - changed-files:
      - any-glob-to-any-file:
          - Dockerfile
          - Makefile
          - compose.yaml
          - docker-bake.hcl
          - hugo.yaml
          - pagefind.yml
          - hack/vendor

area/contrib:
  - changed-files:
      - any-glob-to-any-file:
          - content/contribute/**
          - CONTRIBUTING.md

area/tests:
  - changed-files:
      - any-glob-to-any-file:
          - .htmltest.yml
          - .markdownlint.json
          - .vale.ini
          - _vale/**
          - hack/test/*

area/build:
  - changed-files:
      - any-glob-to-any-file:
          - content/manuals/build/**
          - _vendor/github.com/moby/buildkit/**
          - _vendor/github.com/docker/buildx/**
          - content/reference/cli/docker/buildx/**

area/build-cloud:
  - changed-files:
      - any-glob-to-any-file:
          - content/manuals/build-cloud/**

area/cloud:
  - changed-files:
      - any-glob-to-any-file:
          - content/manuals/cloud/**

area/compose:
  - changed-files:
      - any-glob-to-any-file:
          - content/manuals/compose/**
          - content/reference/compose-file/**
          - _vendor/github.com/docker/compose/**

area/desktop:
  - changed-files:
      - any-glob-to-any-file:
          - content/manuals/desktop/**

area/dhi:
  - changed-files:
      - any-glob-to-any-file:
          - content/manuals/dhi/**

area/engine:
  - changed-files:
      - any-glob-to-any-file:
          - content/manuals/engine/**
          - content/reference/api/engine/**

area/install:
  - changed-files:
      - any-glob-to-any-file:
          - content/manuals/desktop/install/**
          - content/manuals/engine/install/**

area/swarm:
  - changed-files:
      - any-glob-to-any-file:
          - content/manuals/engine/swarm/**

area/security:
  - changed-files:
      - any-glob-to-any-file:
          - content/manuals/security/**
          - content/manuals/engine/security/**

area/get-started:
  - changed-files:
      - any-glob-to-any-file:
          - content/get-started/**

area/guides:
  - changed-files:
      - any-glob-to-any-file:
          - content/guides/**
          - content/learning-paths/**

area/networking:
  - changed-files:
      - any-glob-to-any-file:
          - content/manuals/engine/network/**
          - content/manuals/engine/daemon/ipv6.md

area/hub:
  - changed-files:
      - any-glob-to-any-file:
          - content/manuals/docker-hub/**

area/cli:
  - changed-files:
      - any-glob-to-any-file:
          - content/reference/cli/**
          - _vendor/github.com/docker/cli/**
          - _vendor/github.com/docker/scout-cli/**
          - data/engine-cli/**
          - data/buildx-cli/**
          - data/debug-cli/**
          - data/init-cli/**

area/api:
  - changed-files:
      - any-glob-to-any-file:
          - content/reference/api/**
          - _vendor/github.com/moby/moby/docs/api/*

area/scout:
  - changed-files:
      - any-glob-to-any-file:
          - content/manuals/scout/**
          - _vendor/github.com/docker/scout-cli/**

area/billing:
  - changed-files:
      - any-glob-to-any-file:
          - content/manuals/billing/**

area/subscription:
  - changed-files:
      - any-glob-to-any-file:
          - content/manuals/subscription/**

area/admin:
  - changed-files:
      - any-glob-to-any-file:
          - content/manuals/admin/**

area/extensions:
  - changed-files:
      - any-glob-to-any-file:
          - content/manuals/extensions/**
          - content/reference/api/extensions-sdk/**

area/samples:
  - changed-files:
      - any-glob-to-any-file:
          - content/samples/**

area/storage:
  - changed-files:
      - any-glob-to-any-file:
          - content/manuals/engine/storage/**

area/accounts:
  - changed-files:
      - any-glob-to-any-file:
          - content/manuals/accounts/**

area/copilot:
  - changed-files:
      - any-glob-to-any-file:
          - content/manuals/copilot/**

hugo:
  - changed-files:
      - any-glob-to-any-file:
          - assets/**
          - hugo.yaml
          - hugo_stats.json
          - i18n/**
          - layouts/**
          - static/**
          - tailwind.config.js

dependencies:
  - changed-files:
      - any-glob-to-any-file:
          - go.mod
          - go.sum
          - package*.json
          - _vendor/**
          - hack/vendor

@@ -0,0 +1,16 @@
---
mode: 'edit'
---

Imagine you're an experienced technical writer. You need to review content for
how fresh and up to date it is. Apply the following:

1. Fix spelling errors and typos.
2. Verify whether the markdown structure conforms to common markdown standards.
3. Ensure the content follows our [style guide file](../instructions/styleguide-instructions.md) as a guide.
4. Make sure the titles on the page provide better context about the content (for an improved search experience).
5. Ensure all the components are formatted correctly.
6. Improve the SEO keywords.
7. If you find numbered lists, make sure their numbering only uses 1's.

Do your best and don't be lazy.

@@ -0,0 +1,22 @@
---
mode: 'edit'
---

Imagine you're an experienced technical writer. You need to review content for
how fresh and up to date it is. Apply the following:

1. Improve the presentational layer - components, splitting up the page into smaller pages.
   Consider the following:

   1. Can you use tabs to display multiple variants of the same steps?
   2. Can you make a key item of information stand out with a call-out?
   3. Can you reduce a large amount of text to a series of bullet points?
   4. Are there other code components you could use?
2. Check if any operating systems or package versions mentioned are still current and supported.
3. Check the accuracy of the content.
4. If appropriate, follow the document from start to finish to see if the steps make sense in sequence.
5. Try to add some helpful next steps to the end of the document, but only if there isn't already a *Next steps* or *Related pages* section.
6. Try to clarify, shorten, or improve the efficiency of some sentences.
7. Check for LLM readability.

Do your best and don't be lazy.

@@ -0,0 +1,7 @@
---
mode: edit
description: You are a technical writer reviewing an article for clarity, conciseness, and adherence to the documentation writing style guidelines.
---
Review the article for clarity, conciseness, and adherence to our documentation [style guidelines](../instructions/styleguide-instructions.md).

Provide concrete and practical suggestions for improvement.

@@ -0,0 +1,18 @@
<!-- Delete sections as needed -->

## Description

<!-- Tell us what you did and why -->

## Related issues or tickets

<!-- Related issues, pull requests, or Jira tickets -->

## Reviews

<!-- Notes for reviewers here -->
<!-- List applicable reviews (optionally @tag reviewers) -->

- [ ] Technical review
- [ ] Editorial review
- [ ] Product review

@@ -0,0 +1,107 @@
name: build

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

on:
  push:
    # needs push event on default branch otherwise cache is evicted when pull request is merged
    branches:
      - main
  pull_request:

env:
  # Use edge release of buildx (latest RC, fallback to latest stable)
  SETUP_BUILDX_VERSION: edge
  SETUP_BUILDKIT_IMAGE: "moby/buildkit:latest"

permissions:
  contents: read # to fetch code (actions/checkout)

jobs:
  releaser:
    runs-on: ubuntu-24.04
    steps:
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
        with:
          version: ${{ env.SETUP_BUILDX_VERSION }}
          driver-opts: image=${{ env.SETUP_BUILDKIT_IMAGE }}
      -
        name: Build
        uses: docker/bake-action@v6
        with:
          files: |
            docker-bake.hcl
          targets: releaser-build

  build:
    runs-on: ubuntu-24.04
    needs:
      - releaser
    steps:
      -
        name: Checkout
        uses: actions/checkout@v4
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      -
        name: Build
        uses: docker/bake-action@v6
        with:
          source: .
          files: |
            docker-bake.hcl
          targets: release
      -
        name: Check Cloudfront config
        uses: docker/bake-action@v6
        with:
          source: .
          targets: aws-cloudfront-update
        env:
          DRY_RUN: true
          AWS_REGION: us-east-1
          AWS_CLOUDFRONT_ID: 0123456789ABCD
          AWS_LAMBDA_FUNCTION: DockerDocsRedirectFunction-dummy

  vale:
    if: ${{ github.event_name == 'pull_request' }}
    runs-on: ubuntu-24.04
    steps:
      - uses: actions/checkout@v4
      - uses: errata-ai/vale-action@reviewdog
        env:
          PIP_BREAK_SYSTEM_PACKAGES: 1
        with:
          files: content

  validate:
    runs-on: ubuntu-24.04
    strategy:
      fail-fast: false
      matrix:
        target:
          - lint
          - test
          - unused-media
          - test-go-redirects
          - dockerfile-lint
          - path-warnings
          - validate-vendor
    steps:
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      -
        name: Validate
        uses: docker/bake-action@v6
        with:
          files: |
            docker-bake.hcl
          targets: ${{ matrix.target }}
          set: |
            *.args.BUILDKIT_CONTEXT_KEEP_GIT_DIR=1

@@ -0,0 +1,163 @@
name: deploy

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

on:
  workflow_dispatch:
  push:
    branches:
      - lab
      - main
      - published

env:
  # Use edge release of buildx (latest RC, fallback to latest stable)
  SETUP_BUILDX_VERSION: edge
  SETUP_BUILDKIT_IMAGE: "moby/buildkit:latest"

# these permissions are needed to interact with GitHub's OIDC Token endpoint.
permissions:
  id-token: write
  contents: read

jobs:
  publish:
    runs-on: ubuntu-24.04
    if: github.repository_owner == 'docker'
    steps:
      -
        name: Prepare
        run: |
          HUGO_ENV=development
          DOCS_AWS_REGION=us-east-1
          if [ "${{ github.ref }}" = "refs/heads/main" ]; then
            HUGO_ENV=staging
            DOCS_URL="https://docs-stage.docker.com"
            DOCS_AWS_IAM_ROLE="arn:aws:iam::710015040892:role/stage-docs-docs.docker.com-20220818202135984800000001"
            DOCS_S3_BUCKET="stage-docs-docs.docker.com"
            DOCS_S3_CONFIG="s3-config.json"
            DOCS_CLOUDFRONT_ID="E1R7CSW3F0X4H8"
            DOCS_LAMBDA_FUNCTION_REDIRECTS="DockerDocsRedirectFunction-stage"
            DOCS_SLACK_MSG="Successfully deployed docs-stage from main branch. $DOCS_URL"
          elif [ "${{ github.ref }}" = "refs/heads/published" ]; then
            HUGO_ENV=production
            DOCS_URL="https://docs.docker.com"
            DOCS_AWS_IAM_ROLE="arn:aws:iam::710015040892:role/prod-docs-docs.docker.com-20220818202218674300000001"
            DOCS_S3_BUCKET="prod-docs-docs.docker.com"
            DOCS_S3_CONFIG="s3-config.json"
            DOCS_CLOUDFRONT_ID="E228TTN20HNU8F"
            DOCS_LAMBDA_FUNCTION_REDIRECTS="DockerDocsRedirectFunction-prod"
            DOCS_SLACK_MSG="Successfully deployed docs from published branch. $DOCS_URL"
          elif [ "${{ github.ref }}" = "refs/heads/lab" ]; then
            HUGO_ENV=lab
            DOCS_URL="https://docs-labs.docker.com"
            DOCS_AWS_IAM_ROLE="arn:aws:iam::710015040892:role/labs-docs-docs.docker.com-20220818202218402500000001"
            DOCS_S3_BUCKET="labs-docs-docs.docker.com"
            DOCS_S3_CONFIG="s3-config.json"
            DOCS_CLOUDFRONT_ID="E1MYDYF65FW3HG"
            DOCS_LAMBDA_FUNCTION_REDIRECTS="DockerDocsRedirectFunction-labs"
          else
            echo >&2 "ERROR: unknown branch ${{ github.ref }}"
            exit 1
          fi
          SEND_SLACK_MSG="true"
          if [ -z "$DOCS_AWS_IAM_ROLE" ] || [ -z "$DOCS_S3_BUCKET" ] || [ -z "$DOCS_CLOUDFRONT_ID" ] || [ -z "$DOCS_SLACK_MSG" ]; then
            SEND_SLACK_MSG="false"
          fi
          echo "BRANCH_NAME=${GITHUB_REF#refs/heads/}" >> $GITHUB_ENV
          echo "HUGO_ENV=$HUGO_ENV" >> $GITHUB_ENV
          echo "DOCS_URL=$DOCS_URL" >> $GITHUB_ENV
          echo "DOCS_AWS_REGION=$DOCS_AWS_REGION" >> $GITHUB_ENV
          echo "DOCS_AWS_IAM_ROLE=$DOCS_AWS_IAM_ROLE" >> $GITHUB_ENV
          echo "DOCS_S3_BUCKET=$DOCS_S3_BUCKET" >> $GITHUB_ENV
          echo "DOCS_S3_CONFIG=$DOCS_S3_CONFIG" >> $GITHUB_ENV
          echo "DOCS_CLOUDFRONT_ID=$DOCS_CLOUDFRONT_ID" >> $GITHUB_ENV
          echo "DOCS_LAMBDA_FUNCTION_REDIRECTS=$DOCS_LAMBDA_FUNCTION_REDIRECTS" >> $GITHUB_ENV
          echo "DOCS_SLACK_MSG=$DOCS_SLACK_MSG" >> $GITHUB_ENV
          echo "SEND_SLACK_MSG=$SEND_SLACK_MSG" >> $GITHUB_ENV
      -
        name: Checkout
        uses: actions/checkout@v4
        with:
          fetch-depth: 0
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
        with:
          version: ${{ env.SETUP_BUILDX_VERSION }}
          driver-opts: image=${{ env.SETUP_BUILDKIT_IMAGE }}
      -
        name: Build website
        uses: docker/bake-action@v6
        with:
          source: .
          files: |
            docker-bake.hcl
          targets: release
          provenance: false
      -
        name: Configure AWS Credentials
        if: ${{ env.DOCS_AWS_IAM_ROLE != '' }}
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ env.DOCS_AWS_IAM_ROLE }}
          aws-region: ${{ env.DOCS_AWS_REGION }}
      -
        name: Upload files to S3 bucket
        if: ${{ env.DOCS_S3_BUCKET != '' }}
        run: |
          aws --region ${{ env.DOCS_AWS_REGION }} s3 sync \
            --acl public-read \
            --delete \
            --exclude "*" \
            --include "*.webp" \
            --metadata-directive="REPLACE" \
            --no-guess-mime-type \
            --content-type="image/webp" \
            public s3://${{ env.DOCS_S3_BUCKET }}/
          aws --region ${{ env.DOCS_AWS_REGION }} s3 sync \
            --acl public-read \
            --delete \
            --exclude "*.webp" \
            public s3://${{ env.DOCS_S3_BUCKET }}/
      -
        name: Update S3 config
        if: ${{ env.DOCS_S3_BUCKET != '' && env.DOCS_S3_CONFIG != '' }}
        uses: docker/bake-action@v6
        with:
          source: .
          files: |
            docker-bake.hcl
          targets: aws-s3-update-config
        env:
          AWS_REGION: ${{ env.DOCS_AWS_REGION }}
          AWS_S3_BUCKET: ${{ env.DOCS_S3_BUCKET }}
          AWS_S3_CONFIG: ${{ env.DOCS_S3_CONFIG }}
      -
        name: Update Cloudfront config
        if: ${{ env.DOCS_CLOUDFRONT_ID != '' }}
        uses: docker/bake-action@v6
        with:
          source: .
          files: |
            docker-bake.hcl
          targets: aws-cloudfront-update
        env:
          AWS_REGION: us-east-1 # cloudfront and lambda edge functions are only available in us-east-1 region
          AWS_CLOUDFRONT_ID: ${{ env.DOCS_CLOUDFRONT_ID }}
          AWS_LAMBDA_FUNCTION: ${{ env.DOCS_LAMBDA_FUNCTION_REDIRECTS }}
      -
        name: Invalidate Cloudfront cache
        if: ${{ env.DOCS_CLOUDFRONT_ID != '' }}
        run: |
          aws cloudfront create-invalidation --distribution-id ${{ env.DOCS_CLOUDFRONT_ID }} --paths "/*"
        env:
          AWS_REGION: us-east-1 # cloudfront is only available in us-east-1 region
          AWS_MAX_ATTEMPTS: 5
      -
        name: Send Slack notification
        if: ${{ env.SEND_SLACK_MSG == 'true' }}
        run: |
          curl -X POST -H 'Content-type: application/json' --data '{"text":"${{ env.DOCS_SLACK_MSG }}"}' ${{ secrets.SLACK_WEBHOOK }}

@@ -0,0 +1,19 @@
name: labeler

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

on:
  pull_request_target:

jobs:
  labeler:
    runs-on: ubuntu-24.04
    permissions:
      contents: read
      pull-requests: write
    steps:
      -
        name: Run
        uses: actions/labeler@8558fd74291d67161a8a78ce36a881fa63b766a9 # v5.0.0

@@ -0,0 +1,35 @@
name: merge

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

# open or update publishing PR when there is a push to main
on:
  workflow_dispatch:
  push:
    branches:
      - main

jobs:
  main-to-published:
    runs-on: ubuntu-24.04
    if: github.repository_owner == 'docker'
    steps:
      - uses: actions/checkout@v4
        with:
          ref: published
      - name: Reset published branch
        run: |
          git fetch origin main:main
          git reset --hard main
      - name: Create Pull Request
        uses: peter-evans/create-pull-request@271a8d0340265f705b14b6d32b9829c1cb33d45e
        with:
          delete-branch: false
          branch: published-update
          commit-message: publish updates from main
          labels: area/release
          title: publish updates from main
          body: |
            Automated pull request for publishing docs updates.

@@ -0,0 +1,103 @@
# reusable workflow to validate docs from upstream repository for which pages are remotely fetched
# - module-name: the name of the module, without github.com prefix (e.g., docker/buildx)
# - data-files-id: id of the artifact (using actions/upload-artifact) containing the YAML data files to validate (optional)
# - data-files-folder: folder in _data containing the files to download and copy to (e.g., buildx)
# if changes are made in this workflow, please keep commit sha updated on downstream workflows:
# - https://github.com/docker/buildx/blob/master/.github/workflows/docs-upstream.yml
# - https://github.com/docker/compose/blob/main/.github/workflows/docs-upstream.yml
name: validate-upstream

on:
  workflow_call:
    inputs:
      module-name:
        required: true
        type: string
      data-files-id:
        required: false
        type: string
      data-files-folder:
        required: false
        type: string
      create-placeholder-stubs:
        type: boolean
        required: false

env:
  # Use edge release of buildx (latest RC, fallback to latest stable)
  SETUP_BUILDX_VERSION: edge
  SETUP_BUILDKIT_IMAGE: "moby/buildkit:latest"

jobs:
  run:
    runs-on: ubuntu-24.04
    steps:
      -
        name: Checkout
        uses: actions/checkout@v4
        with:
          repository: docker/docs
      -
        name: Download data files
        uses: actions/download-artifact@v4
        if: ${{ inputs.data-files-id != '' && inputs.data-files-folder != '' }}
        with:
          name: ${{ inputs.data-files-id }}
          path: /tmp/data/${{ inputs.data-files-folder }}
      -
        # Copy data files from /tmp/data/${{ inputs.data-files-folder }} to
        # data/${{ inputs.data-files-folder }}. If create-placeholder-stubs
        # is set to true, then check if a placeholder file exists for each data file in
        # that folder. If not, create a placeholder stub file for the data file.
        name: Copy data files
        if: ${{ inputs.data-files-id != '' && inputs.data-files-folder != '' }}
        uses: actions/github-script@v7
        with:
          script: |
            const fs = require('fs');
            const path = require('path');
            const globber = await glob.create(`/tmp/data/${{ inputs.data-files-folder }}/*.yaml`);
            for await (const yamlSrcPath of globber.globGenerator()) {
              const yamlSrcFilename = path.basename(yamlSrcPath);
              const yamlSrcNoExt = yamlSrcPath.replace(".yaml", "");
              const hasSubCommands = (await (await glob.create(yamlSrcNoExt)).glob()).length > 1;
              const yamlDestPath = path.join('data', `${{ inputs.data-files-folder }}`, yamlSrcFilename);
              let placeholderPath = path.join("content/reference/cli", yamlSrcFilename.replace('_', '/').replace(/\.yaml$/, '.md'));
              if (hasSubCommands) {
                placeholderPath = placeholderPath.replace('.md', '/_index.md');
              };
              if (`${{ inputs.create-placeholder-stubs }}` && !fs.existsSync(placeholderPath)) {
                fs.mkdirSync(path.dirname(placeholderPath), { recursive: true });
                const placeholderContent = `---
            datafolder: ${{ inputs.data-files-folder }}
            datafile: ${yamlSrcFilename.replace(/\.[^/.]+$/, '')}
            title: ${yamlSrcFilename.replace(/\.[^/.]+$/, "").replaceAll('_', ' ')}
            layout: cli
            ---`;
                await core.group(`creating ${placeholderPath}`, async () => {
                  core.info(placeholderContent);
                });
                await fs.writeFileSync(placeholderPath, placeholderContent);
              }
              core.info(`${yamlSrcPath} => ${yamlDestPath}`);
              await fs.copyFileSync(yamlSrcPath, yamlDestPath);
            }
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
        with:
          version: ${{ env.SETUP_BUILDX_VERSION }}
          driver-opts: image=${{ env.SETUP_BUILDKIT_IMAGE }}
      -
        name: Validate
        uses: docker/bake-action@v6
        with:
          source: .
          files: |
            docker-bake.hcl
          targets: validate-upstream
          provenance: false
        env:
          UPSTREAM_MODULE_NAME: ${{ inputs.module-name }}
          UPSTREAM_REPO: ${{ github.repository }}
          UPSTREAM_COMMIT: ${{ github.sha }}

@@ -1,10 +1,12 @@
+.hugo_build.lock
+.idea/
+.vscode/mcp.json
+.vscode/settings.json
+.vscode/tasks.json
 **/.DS_Store
 **/desktop.ini
-.bundle/**
-.jekyll-metadata
-_site/**
-.sass-cache/**
-CNAME
-Gemfile.lock
-_samples/library/**
-_kbase/**
+node_modules
+public
+resources
+static/pagefind
+tmp

@@ -0,0 +1,15 @@
DirectoryPath: "public"
EnforceHTTPS: false
CheckDoctype: false
CheckExternal: false
IgnoreAltMissing: true
IgnoreAltEmpty: true
IgnoreEmptyHref: true
IgnoreDirectoryMissingTrailingSlash: true
IgnoreURLs:
  - "^/reference/api/hub/.*$"
  - "^/reference/api/engine/v.+/#.*$"
IgnoreDirs:
  - "registry/configuration"
  - "compose/compose-file" # temporarily ignore until upstream is fixed
CacheExpires: "6h"

@@ -0,0 +1,24 @@
{
  "default": false,
  "blanks-around-headings": true,
  "hr-style": true,
  "heading-start-left": true,
  "single-h1": true,
  "no-trailing-punctuation": true,
  "no-missing-space-atx": true,
  "no-multiple-space-atx": true,
  "no-missing-space-closed-atx": true,
  "no-multiple-space-closed-atx": true,
  "no-space-in-emphasis": true,
  "no-space-in-code": true,
  "no-space-in-links": true,
  "no-empty-links": true,
  "ol-prefix": {"style": "one_or_ordered"},
  "no-reversed-links": true,
  "reference-links-images": {
    "shortcut_syntax": false
  },
  "fenced-code-language": true,
  "table-pipe-style": true,
  "table-column-count": true
}

@@ -0,0 +1,16 @@
{
  "plugins": [
    "prettier-plugin-go-template",
    "prettier-plugin-tailwindcss"
  ],
  "overrides": [
    {
      "files": [
        "*.html"
      ],
      "options": {
        "parser": "go-template"
      }
    }
  ]
}

@@ -0,0 +1,24 @@
StylesPath = _vale
MinAlertLevel = suggestion

Vocab = Docker

[*.md]
BasedOnStyles = Vale, Docker
# Exclude `{{< ... >}}`, `{{% ... %}}`, [Who]({{< ... >}})
TokenIgnores = ({{[%<] .* [%>]}}.*?{{[%<] ?/.* [%>]}}), \
               (\[.+\]\({{< .+ >}}\)), \
               [^\S\r\n]({{[%<] \w+ .+ [%>]}})\s, \
               [^\S\r\n]({{[%<](?:/\*) .* (?:\*/)[%>]}})\s, \
               (?sm)({{[%<] .*?\s[%>]}})

# Exclude `{{< myshortcode `This is some <b>HTML</b>, ... >}}`
BlockIgnores = (?sm)^({{[%<] \w+ [^{]*?\s[%>]}})\n$, \
               (?s) *({{< highlight [^>]* ?>}}.*?{{< ?/ ?highlight >}})

# Disable rules for generated content
# Content is checked upstream
[**/{model-cli/docs/reference,content/reference/cli/docker/model}/**.md]
BasedOnStyles = Vale
Vale.Spelling = NO
Vale.Terms = NO

@@ -0,0 +1,57 @@
{
  "Insert Hugo Note Admonition": {
    "prefix": ["admonition", "note"],
    "body": ["> [!NOTE]", "> $1"],
    "description": "Insert a Hugo note admonition",
  },
  "Insert Hugo Important Admonition": {
    "prefix": ["admonition", "important"],
    "body": ["> [!IMPORTANT]", "> $1"],
    "description": "Insert a Hugo important admonition",
  },
  "Insert Hugo Warning Admonition": {
    "prefix": ["admonition", "warning"],
    "body": ["> [!WARNING]", "> $1"],
    "description": "Insert a Hugo warning admonition",
  },
  "Insert Hugo Tip Admonition": {
    "prefix": ["admonition", "tip"],
    "body": ["> [!TIP]", "> $1"],
    "description": "Insert a Hugo tip admonition",
  },
  "Insert Hugo Tabs": {
    "prefix": ["admonition", "tabs"],
    "body": [
      "",
      "{{< tabs group=\"$1\" >}}",
      "{{< tab name=\"$2\">}}",
      "",
      "$3",
      "",
      "{{< /tab >}}",
      "{{< tab name=\"$4\">}}",
      "",
      "$5",
      "",
      "{{< /tab >}}",
      "{{</tabs >}}",
      "",
    ],
    "description": "Insert a Hugo tabs block with two tabs and snippet stops for names and content",
  },
  "Insert Hugo code block (no title)": {
    "prefix": ["codeblock", "block"],
    "body": ["```${1:json}", "$2", "```", ""],
    "description": "Insert a Hugo code block with an optional title",
  },
  "Insert Hugo code block (with title)": {
    "prefix": ["codeblock", "codettl", "block"],
    "body": ["```${1:json} {title=\"$2\"}", "$3", "```", ""],
    "description": "Insert a Hugo code block with an optional title",
  },
  "Insert a Button": {
    "prefix": ["button"],
    "body": ["{{< button url=\"$1\" text=\"$2\" >}}"],
    "description": "Insert a Hugo button",
  },
}

@@ -0,0 +1,127 @@
# Contributing to Docker Documentation

We value documentation contributions from the Docker community. We'd like to
make it as easy as possible for you to work in this repository.

Our style guide and instructions on using our page templates and components are
available in the [contribution section](https://docs.docker.com/contribute/) on
the website.

The following guidelines describe the ways in which you can contribute to the
Docker documentation at <https://docs.docker.com/>, and how to get started.

## Reporting issues

If you encounter a problem with the content, or the site in general, feel free
to [submit an issue](https://github.com/docker/docs/issues/new/choose) in our
[GitHub issue tracker](https://github.com/docker/docs/issues). You can also use
the issue tracker to request improvements, or to suggest new content that you
think is missing or that you would like to see.

## Editing content

The website is built using [Hugo](https://gohugo.io/). The content is primarily
Markdown files in the `/content` directory of this repository (with a few
exceptions, see [Content not edited here](#content-not-edited-here)).

The structure of the sidebar navigation on the site is defined by the site's
section hierarchy in the `content` directory. The titles of the pages are
defined in the front matter of the Markdown files. You can use `title` and
`linkTitle` to define the title of the page. `title` is used for the page
title, and `linkTitle` is used for the sidebar title. If `linkTitle` is not
defined, the `title` is used for both.
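
For example, a page's front matter might look like this (illustrative values, not copied from an actual page):

```markdown
---
title: Install Docker Desktop on Linux
linkTitle: Linux
---
```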

You must fork this repository to create a pull request to propose changes. For more details, see [Local setup](#local-setup).

### General guidelines

Help make reviewing easier by following these guidelines:

- Try not to touch a large number of files in a single PR if possible.
- Don't change whitespace or line wrapping in parts of a file you aren't
  editing for other reasons. Make sure your text editor isn't configured to
  automatically reformat the whole file when saving.
- We use GitHub Actions for testing and creating preview deployments for each
  pull request. The URL of the preview deployment is added as a comment on the
  pull request. Check the staging site to verify how your changes look and fix
  issues, if necessary.

### Local setup

You can use Docker (surprise) to build and serve the files locally.

> [!IMPORTANT]
> This requires Docker Desktop version **4.24** or later, or Docker Engine with Docker
> Compose version [**2.22**](https://docs.docker.com/compose/how-tos/file-watch/) or later.

1. [Fork the docker/docs repository.](https://github.com/docker/docs/fork)

2. Clone your forked docs repository:

   ```console
   $ git clone https://github.com/<your-username>/docs
   $ cd docs
   ```

3. Configure Git to sync your docs fork with the upstream docker/docs
   repository and prevent accidental pushes to the upstream repository:

   ```console
   $ git remote add upstream https://github.com/docker/docs.git
   $ git remote set-url --push upstream no_pushing
   ```

4. Check out a branch:

   ```console
   $ git checkout -b <branch>
   ```

5. Start the local development server:

   ```console
   $ docker compose watch
   ```

   The site will be served for local preview at <http://localhost:1313>. The
   development server watches for changes and automatically rebuilds your site.

To stop the development server:

1. In your terminal, press `<Ctrl+C>` to exit the file watch mode of Compose.
2. Stop the Compose service with the `docker compose down` command.

### Testing

Before you push your changes and open a pull request, we recommend that you
test your site locally first. Local tests check for broken links, incorrectly
formatted markup, and other things. To run the tests:

```console
$ docker buildx bake validate
```

If this command doesn't result in any errors, you're good to go!

## Content not edited here

CLI reference documentation is maintained in upstream repositories. It's
partially generated from code, and is only vendored here for publishing. To
update the CLI reference docs, refer to the corresponding repository:

- [docker/cli](https://github.com/docker/cli)
- [docker/buildx](https://github.com/docker/buildx)
- [docker/compose](https://github.com/docker/compose)

Feel free to raise an issue on this repository if you're not sure how to
proceed, and we'll help out.

Other content that appears on the site, but that's not edited here, includes:

- Dockerfile reference
- Docker Engine API reference
- Compose specification
- Buildx Bake reference

If you spot an issue in any of these pages, feel free to raise an issue here
and we'll make sure it gets fixed in the upstream source.
@ -0,0 +1,168 @@
|
||||||
|
# syntax=docker/dockerfile:1
|
||||||
|
# check=skip=InvalidBaseImagePlatform
|
||||||
|
|
||||||
|
ARG ALPINE_VERSION=3.21
|
||||||
|
ARG GO_VERSION=1.24
|
||||||
|
ARG HTMLTEST_VERSION=0.17.0
|
||||||
|
ARG HUGO_VERSION=0.141.0
|
||||||
|
ARG NODE_VERSION=22
|
||||||
|
ARG PAGEFIND_VERSION=1.3.0
|
||||||
|
|
||||||
|
# base defines the generic base stage
|
||||||
|
FROM golang:${GO_VERSION}-alpine${ALPINE_VERSION} AS base
|
||||||
|
RUN apk add --no-cache \
|
||||||
|
git \
|
||||||
|
nodejs \
|
||||||
|
npm \
|
||||||
|
gcompat \
|
||||||
|
rsync
|
||||||
|
|
||||||
|
# npm downloads Node.js dependencies
|
||||||
|
FROM base AS npm
|
||||||
|
ENV NODE_ENV="production"
|
||||||
|
WORKDIR /out
|
||||||
|
RUN --mount=source=package.json,target=package.json \
|
||||||
|
--mount=source=package-lock.json,target=package-lock.json \
|
||||||
|
--mount=type=cache,target=/root/.npm \
|
||||||
|
npm ci
|
||||||
|
|
||||||
|
# hugo downloads the Hugo binary
|
||||||
|
FROM base AS hugo
|
||||||
|
ARG TARGETARCH
|
||||||
|
ARG HUGO_VERSION
|
||||||
|
WORKDIR /out
|
||||||
|
ADD https://github.com/gohugoio/hugo/releases/download/v${HUGO_VERSION}/hugo_extended_${HUGO_VERSION}_linux-${TARGETARCH}.tar.gz .
|
||||||
|
RUN tar xvf hugo_extended_${HUGO_VERSION}_linux-${TARGETARCH}.tar.gz
|
||||||
|
|
||||||
|
# build-base is the base stage used for building the site
|
||||||
|
FROM base AS build-base
|
||||||
|
WORKDIR /project
|
||||||
|
COPY --from=hugo /out/hugo /bin/hugo
|
||||||
|
COPY --from=npm /out/node_modules node_modules
|
||||||
|
COPY . .
|
||||||
|
|
||||||
|
# build creates production builds with Hugo
|
||||||
|
FROM build-base AS build
|
||||||
|
# HUGO_ENV sets the hugo.Environment (production, development, preview)
|
||||||
|
ARG HUGO_ENV="development"
|
||||||
|
# DOCS_URL sets the base URL for the site
|
||||||
|
ARG DOCS_URL="https://docs.docker.com"
|
||||||
|
ENV HUGO_CACHEDIR="/tmp/hugo_cache"
|
||||||
|
RUN --mount=type=cache,target=/tmp/hugo_cache \
|
||||||
|
hugo --gc --minify -e $HUGO_ENV -b $DOCS_URL
|
||||||
|
|
||||||
|
# lint lints markdown files
|
||||||
|
FROM davidanson/markdownlint-cli2:v0.14.0 AS lint
|
||||||
|
USER root
|
||||||
|
RUN --mount=type=bind,target=. \
|
||||||
|
/usr/local/bin/markdownlint-cli2 \
|
||||||
|
"content/**/*.md" \
|
||||||
|
"#content/manuals/engine/release-notes/*.md" \
|
||||||
|
"#content/manuals/desktop/previous-versions/*.md"
|
||||||
|
|
||||||
|
# test validates HTML output and checks for broken links
|
||||||
|
FROM wjdp/htmltest:v${HTMLTEST_VERSION} AS test
|
||||||
|
WORKDIR /test
|
||||||
|
COPY --from=build /project/public ./public
|
||||||
|
ADD .htmltest.yml .htmltest.yml
|
||||||
|
RUN htmltest
|
||||||
|
|
||||||
|
# update-modules downloads and vendors Hugo modules
|
||||||
|
FROM build-base AS update-modules
# MODULE is the Go module path and version of the module to update
ARG MODULE
RUN <<"EOT"
set -ex
if [ -n "$MODULE" ]; then
  hugo mod get ${MODULE}
  RESOLVED=$(cat go.mod | grep -m 1 "${MODULE/@*/}" | awk '{print $1 "@" $2}')
  go mod edit -replace "${MODULE/@*/}=${RESOLVED}";
else
  echo "no module set";
fi
EOT
RUN hugo mod vendor

# vendor is an empty stage with only vendored Hugo modules
FROM scratch AS vendor
COPY --from=update-modules /project/_vendor /_vendor
COPY --from=update-modules /project/go.* /

FROM base AS validate-vendor
RUN --mount=target=/context \
  --mount=type=bind,from=vendor,target=/out \
  --mount=target=.,type=tmpfs <<EOT
set -e
rsync -a /context/. .
git add -A
rm -rf _vendor
cp -rf /out/* .
if [ -n "$(git status --porcelain -- go.mod go.sum _vendor)" ]; then
  echo >&2 'ERROR: Vendor result differs. Please vendor your package with "make vendor"'
  git status --porcelain -- go.mod go.sum _vendor
  exit 1
fi
EOT

# build-upstream builds an upstream project with a replacement module
FROM build-base AS build-upstream
# UPSTREAM_MODULE_NAME is the canonical upstream repository name and namespace (e.g. moby/buildkit)
ARG UPSTREAM_MODULE_NAME
# UPSTREAM_REPO is the repository of the project to validate (e.g. dvdksn/buildkit)
ARG UPSTREAM_REPO
# UPSTREAM_COMMIT is the commit hash of the upstream project to validate
ARG UPSTREAM_COMMIT
# HUGO_MODULE_REPLACEMENTS is the replacement module for the upstream project
ENV HUGO_MODULE_REPLACEMENTS="github.com/${UPSTREAM_MODULE_NAME} -> github.com/${UPSTREAM_REPO} ${UPSTREAM_COMMIT}"
RUN hugo --ignoreVendorPaths "github.com/${UPSTREAM_MODULE_NAME}"

# validate-upstream validates HTML output for upstream builds
FROM wjdp/htmltest:v${HTMLTEST_VERSION} AS validate-upstream
WORKDIR /test
COPY --from=build-upstream /project/public ./public
ADD .htmltest.yml .htmltest.yml
RUN htmltest

# unused-media checks for unused graphics and other media
FROM alpine:${ALPINE_VERSION} AS unused-media
RUN apk add --no-cache fd ripgrep
WORKDIR /test
RUN --mount=type=bind,target=. ./hack/test/unused_media

# path-warnings checks for duplicate target paths
FROM build-base AS path-warnings
RUN hugo --printPathWarnings > ./path-warnings.txt
RUN <<EOT
DUPLICATE_TARGETS=$(grep "Duplicate target paths" ./path-warnings.txt)
if [ ! -z "$DUPLICATE_TARGETS" ]; then
  echo "$DUPLICATE_TARGETS"
  echo "You probably have a duplicate alias defined. Please check your aliases."
  exit 1
fi
EOT

# pagefind installs the Pagefind runtime
FROM base AS pagefind
ARG PAGEFIND_VERSION
COPY --from=build /project/public ./public
RUN --mount=type=bind,src=pagefind.yml,target=pagefind.yml \
  npx pagefind@v${PAGEFIND_VERSION} --output-path "/pagefind"

# index generates a Pagefind index
FROM scratch AS index
COPY --from=pagefind /pagefind .

# test-go-redirects checks that the /go/ redirects are valid
FROM alpine:${ALPINE_VERSION} AS test-go-redirects
WORKDIR /work
RUN apk add yq
COPY --from=build /project/public ./public
RUN --mount=type=bind,target=. <<"EOT"
set -ex
./hack/test/go_redirects
EOT

# release is an empty scratch image with only compiled assets
FROM scratch AS release
COPY --from=build /project/public /
COPY --from=pagefind /pagefind /pagefind
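A rough usage sketch for these stages (assuming Docker Buildx is available and that the build args such as ALPINE_VERSION, HTMLTEST_VERSION, and PAGEFIND_VERSION have defaults; the stage names come from the Dockerfile above):

```console
$ docker buildx build --target release -o ./public .          # export the built site to ./public
$ docker buildx build --target vendor -o type=local,dest=. .  # export refreshed go.mod/go.sum and _vendor
$ docker buildx build --target path-warnings .                # fails if duplicate target paths are found
```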
@ -1,18 +0,0 @@
# Build minifier utility
FROM golang:1.9-alpine AS minifier
RUN apk add --no-cache git
RUN go get -d github.com/tdewolff/minify/cmd/minify \
    && go build -v -o /minify github.com/tdewolff/minify/cmd/minify

# Set the version of Github Pages to use for each docs archive
FROM starefossen/github-pages:177

# Get some utilities we need for post-build steps
RUN apk add --no-cache bash wget subversion gzip

# Copy scripts used for static HTML post-processing.
COPY scripts /scripts
COPY --from=minifier /minify /scripts/minify

# Print out a message if someone tries to run this image on its own
CMD echo 'This image is only meant to be used as a base image for building docs.'
@ -1,16 +0,0 @@
# Get Jekyll build env
FROM docs/docker.github.io:docs-builder AS builder

# Make the version accessible to this build-stage
ONBUILD ARG VER

# Build the docs from this branch
ONBUILD COPY . /source
ONBUILD RUN JEKYLL_ENV="/${VER}" jekyll build --source /source --destination /site/${VER}

# Do post-processing on archive
ONBUILD RUN /scripts/fix-archives.sh /site/ ${VER}

# Make an index.html and 404.html which will redirect / to /${VER}/
ONBUILD RUN echo "<html><head><title>Redirect for ${VER}</title><meta http-equiv=\"refresh\" content=\"0;url='/${VER}/'\" /></head><body><p>If you are not redirected automatically, click <a href=\"/${VER}/\">here</a>.</p></body></html>" > /site/index.html
ONBUILD RUN echo "<html><head><title>Redirect for ${VER}</title><meta http-equiv=\"refresh\" content=\"0;url='/${VER}/'\" /></head><body><p>If you are not redirected automatically, click <a href=\"/${VER}/\">here</a>.</p></body></html>" > /site/404.html
@ -1,19 +0,0 @@
# Base image to use for building documentation archives
# this image uses "ONBUILD" to perform all required steps in the archives
# and relies upon its parent image having a layer called `builder`.

FROM nginx:alpine

# Make the version accessible to this build-stage, and copy it to an ENV so that it persists in the final image
ONBUILD ARG VER
ONBUILD ENV VER=$VER

# Clean out any existing HTML files, and copy the HTML from the builder stage to the default location for Nginx
ONBUILD RUN rm -rf /usr/share/nginx/html/*
ONBUILD COPY --from=builder /site /usr/share/nginx/html

# Copy the Nginx config
COPY nginx-overrides.conf /etc/nginx/conf.d/default.conf

# Start Nginx to serve the archive at / (which will redirect to the version-specific dir)
CMD echo -e "Docker docs are viewable at:\nhttp://0.0.0.0:4000"; exec nginx -g 'daemon off;'
@ -0,0 +1,201 @@
|
||||||
|
Apache License
|
||||||
|
Version 2.0, January 2004
|
||||||
|
http://www.apache.org/licenses/
|
||||||
|
|
||||||
|
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
|
||||||
|
|
||||||
|
1. Definitions.
|
||||||
|
|
||||||
|
"License" shall mean the terms and conditions for use, reproduction,
|
||||||
|
and distribution as defined by Sections 1 through 9 of this document.
|
||||||
|
|
||||||
|
"Licensor" shall mean the copyright owner or entity authorized by
|
||||||
|
the copyright owner that is granting the License.
|
||||||
|
|
||||||
|
"Legal Entity" shall mean the union of the acting entity and all
|
||||||
|
other entities that control, are controlled by, or are under common
|
||||||
|
control with that entity. For the purposes of this definition,
|
||||||
|
"control" means (i) the power, direct or indirect, to cause the
|
||||||
|
direction or management of such entity, whether by contract or
|
||||||
|
otherwise, or (ii) ownership of fifty percent (50%) or more of the
|
||||||
|
outstanding shares, or (iii) beneficial ownership of such entity.
|
||||||
|
|
||||||
|
"You" (or "Your") shall mean an individual or Legal Entity
|
||||||
|
exercising permissions granted by this License.
|
||||||
|
|
||||||
|
"Source" form shall mean the preferred form for making modifications,
|
||||||
|
including but not limited to software source code, documentation
|
||||||
|
source, and configuration files.
|
||||||
|
|
||||||
|
"Object" form shall mean any form resulting from mechanical
|
||||||
|
transformation or translation of a Source form, including but
|
||||||
|
not limited to compiled object code, generated documentation,
|
||||||
|
and conversions to other media types.
|
||||||
|
|
||||||
|
"Work" shall mean the work of authorship, whether in Source or
|
||||||
|
Object form, made available under the License, as indicated by a
|
||||||
|
copyright notice that is included in or attached to the work
|
||||||
|
(an example is provided in the Appendix below).
|
||||||
|
|
||||||
|
"Derivative Works" shall mean any work, whether in Source or Object
|
||||||
|
form, that is based on (or derived from) the Work and for which the
|
||||||
|
editorial revisions, annotations, elaborations, or other modifications
|
||||||
|
represent, as a whole, an original work of authorship. For the purposes
|
||||||
|
of this License, Derivative Works shall not include works that remain
|
||||||
|
separable from, or merely link (or bind by name) to the interfaces of,
|
||||||
|
the Work and Derivative Works thereof.
|
||||||
|
|
||||||
|
"Contribution" shall mean any work of authorship, including
|
||||||
|
the original version of the Work and any modifications or additions
|
||||||
|
to that Work or Derivative Works thereof, that is intentionally
|
||||||
|
submitted to Licensor for inclusion in the Work by the copyright owner
|
||||||
|
or by an individual or Legal Entity authorized to submit on behalf of
|
||||||
|
the copyright owner. For the purposes of this definition, "submitted"
|
||||||
|
means any form of electronic, verbal, or written communication sent
|
||||||
|
to the Licensor or its representatives, including but not limited to
|
||||||
|
communication on electronic mailing lists, source code control systems,
|
||||||
|
and issue tracking systems that are managed by, or on behalf of, the
|
||||||
|
Licensor for the purpose of discussing and improving the Work, but
|
||||||
|
excluding communication that is conspicuously marked or otherwise
|
||||||
|
designated in writing by the copyright owner as "Not a Contribution."
|
||||||
|
|
||||||
|
"Contributor" shall mean Licensor and any individual or Legal Entity
|
||||||
|
on behalf of whom a Contribution has been received by Licensor and
|
||||||
|
subsequently incorporated within the Work.
|
||||||
|
|
||||||
|
2. Grant of Copyright License. Subject to the terms and conditions of
|
||||||
|
this License, each Contributor hereby grants to You a perpetual,
|
||||||
|
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
|
||||||
|
copyright license to reproduce, prepare Derivative Works of,
|
||||||
|
publicly display, publicly perform, sublicense, and distribute the
|
||||||
|
Work and such Derivative Works in Source or Object form.
|
||||||
|
|
||||||
|
3. Grant of Patent License. Subject to the terms and conditions of
|
||||||
|
this License, each Contributor hereby grants to You a perpetual,
|
||||||
|
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
|
||||||
|
(except as stated in this section) patent license to make, have made,
|
||||||
|
use, offer to sell, sell, import, and otherwise transfer the Work,
|
||||||
|
where such license applies only to those patent claims licensable
|
||||||
|
by such Contributor that are necessarily infringed by their
|
||||||
|
Contribution(s) alone or by combination of their Contribution(s)
|
||||||
|
with the Work to which such Contribution(s) was submitted. If You
|
||||||
|
institute patent litigation against any entity (including a
|
||||||
|
cross-claim or counterclaim in a lawsuit) alleging that the Work
|
||||||
|
or a Contribution incorporated within the Work constitutes direct
|
||||||
|
or contributory patent infringement, then any patent licenses
|
||||||
|
granted to You under this License for that Work shall terminate
|
||||||
|
as of the date such litigation is filed.
|
||||||
|
|
||||||
|
4. Redistribution. You may reproduce and distribute copies of the
|
||||||
|
Work or Derivative Works thereof in any medium, with or without
|
||||||
|
modifications, and in Source or Object form, provided that You
|
||||||
|
meet the following conditions:
|
||||||
|
|
||||||
|
(a) You must give any other recipients of the Work or
|
||||||
|
Derivative Works a copy of this License; and
|
||||||
|
|
||||||
|
(b) You must cause any modified files to carry prominent notices
|
||||||
|
stating that You changed the files; and
|
||||||
|
|
||||||
|
(c) You must retain, in the Source form of any Derivative Works
|
||||||
|
that You distribute, all copyright, patent, trademark, and
|
||||||
|
attribution notices from the Source form of the Work,
|
||||||
|
excluding those notices that do not pertain to any part of
|
||||||
|
the Derivative Works; and
|
||||||
|
|
||||||
|
(d) If the Work includes a "NOTICE" text file as part of its
|
||||||
|
distribution, then any Derivative Works that You distribute must
|
||||||
|
include a readable copy of the attribution notices contained
|
||||||
|
within such NOTICE file, excluding those notices that do not
|
||||||
|
pertain to any part of the Derivative Works, in at least one
|
||||||
|
of the following places: within a NOTICE text file distributed
|
||||||
|
as part of the Derivative Works; within the Source form or
|
||||||
|
documentation, if provided along with the Derivative Works; or,
|
||||||
|
within a display generated by the Derivative Works, if and
|
||||||
|
wherever such third-party notices normally appear. The contents
|
||||||
|
of the NOTICE file are for informational purposes only and
|
||||||
|
do not modify the License. You may add Your own attribution
|
||||||
|
notices within Derivative Works that You distribute, alongside
|
||||||
|
or as an addendum to the NOTICE text from the Work, provided
|
||||||
|
that such additional attribution notices cannot be construed
|
||||||
|
as modifying the License.
|
||||||
|
|
||||||
|
You may add Your own copyright statement to Your modifications and
|
||||||
|
may provide additional or different license terms and conditions
|
||||||
|
for use, reproduction, or distribution of Your modifications, or
|
||||||
|
for any such Derivative Works as a whole, provided Your use,
|
||||||
|
reproduction, and distribution of the Work otherwise complies with
|
||||||
|
the conditions stated in this License.
|
||||||
|
|
||||||
|
5. Submission of Contributions. Unless You explicitly state otherwise,
|
||||||
|
any Contribution intentionally submitted for inclusion in the Work
|
||||||
|
by You to the Licensor shall be under the terms and conditions of
|
||||||
|
this License, without any additional terms or conditions.
|
||||||
|
Notwithstanding the above, nothing herein shall supersede or modify
|
||||||
|
the terms of any separate license agreement you may have executed
|
||||||
|
with Licensor regarding such Contributions.
|
||||||
|
|
||||||
|
6. Trademarks. This License does not grant permission to use the trade
|
||||||
|
names, trademarks, service marks, or product names of the Licensor,
|
||||||
|
except as required for reasonable and customary use in describing the
|
||||||
|
origin of the Work and reproducing the content of the NOTICE file.
|
||||||
|
|
||||||
|
7. Disclaimer of Warranty. Unless required by applicable law or
|
||||||
|
agreed to in writing, Licensor provides the Work (and each
|
||||||
|
Contributor provides its Contributions) on an "AS IS" BASIS,
|
||||||
|
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
|
||||||
|
implied, including, without limitation, any warranties or conditions
|
||||||
|
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
|
||||||
|
PARTICULAR PURPOSE. You are solely responsible for determining the
|
||||||
|
appropriateness of using or redistributing the Work and assume any
|
||||||
|
risks associated with Your exercise of permissions under this License.
|
||||||
|
|
||||||
|
8. Limitation of Liability. In no event and under no legal theory,
|
||||||
|
whether in tort (including negligence), contract, or otherwise,
|
||||||
|
unless required by applicable law (such as deliberate and grossly
|
||||||
|
negligent acts) or agreed to in writing, shall any Contributor be
|
||||||
|
liable to You for damages, including any direct, indirect, special,
|
||||||
|
incidental, or consequential damages of any character arising as a
|
||||||
|
result of this License or out of the use or inability to use the
|
||||||
|
Work (including but not limited to damages for loss of goodwill,
|
||||||
|
work stoppage, computer failure or malfunction, or any and all
|
||||||
|
other commercial damages or losses), even if such Contributor
|
||||||
|
has been advised of the possibility of such damages.
|
||||||
|
|
||||||
|
9. Accepting Warranty or Additional Liability. While redistributing
|
||||||
|
the Work or Derivative Works thereof, You may choose to offer,
|
||||||
|
and charge a fee for, acceptance of support, warranty, indemnity,
|
||||||
|
or other liability obligations and/or rights consistent with this
|
||||||
|
License. However, in accepting such obligations, You may act only
|
||||||
|
on Your own behalf and on Your sole responsibility, not on behalf
|
||||||
|
of any other Contributor, and only if You agree to indemnify,
|
||||||
|
defend, and hold each Contributor harmless for any liability
|
||||||
|
incurred by, or claims asserted against, such Contributor by reason
|
||||||
|
of your accepting any such warranty or additional liability.
|
||||||
|
|
||||||
|
END OF TERMS AND CONDITIONS
|
||||||
|
|
||||||
|
APPENDIX: How to apply the Apache License to your work.
|
||||||
|
|
||||||
|
To apply the Apache License to your work, attach the following
|
||||||
|
boilerplate notice, with the fields enclosed by brackets "{}"
|
||||||
|
replaced with your own identifying information. (Don't include
|
||||||
|
the brackets!) The text should be enclosed in the appropriate
|
||||||
|
comment syntax for the file format. We also recommend that a
|
||||||
|
file or class name and description of purpose be included on the
|
||||||
|
same "printed page" as the copyright notice for easier
|
||||||
|
identification within third-party archives.
|
||||||
|
|
||||||
|
Copyright 2016 Docker, Inc.
|
||||||
|
|
||||||
|
Licensed under the Apache License, Version 2.0 (the "License");
|
||||||
|
you may not use this file except in compliance with the License.
|
||||||
|
You may obtain a copy of the License at
|
||||||
|
|
||||||
|
http://www.apache.org/licenses/LICENSE-2.0
|
||||||
|
|
||||||
|
Unless required by applicable law or agreed to in writing, software
|
||||||
|
distributed under the License is distributed on an "AS IS" BASIS,
|
||||||
|
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||||
|
See the License for the specific language governing permissions and
|
||||||
|
limitations under the License.
|
|
@ -0,0 +1,3 @@
.PHONY: vendor
vendor: ## vendor hugo modules
	./hack/vendor
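A usage sketch (assuming GNU Make and that `hack/vendor` is executable in the repository root):

```console
$ make vendor   # runs ./hack/vendor to refresh the vendored Hugo modules
```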
README.md
|
@ -1,105 +1,52 @@
|
||||||
This branch contains Dockerfiles and configuration files which create base
|
# Docs @ Docker
|
||||||
images used by the Docker docs publication process.
|
|
||||||
|
|
||||||
|
<img src="static/assets/images/docker-docs.png" alt="Welcome to Docker Documentation" style="max-width: 50%;">
|
||||||
|
|
||||||
> **Warning**: Each time a change is pushed to this branch, all the images built
|
Welcome to the Docker Documentation repository. This is the source for
|
||||||
from this branch will be automatically rebuilt on Docker Cloud. This will in
|
[https://docs.docker.com/](https://docs.docker.com/).
|
||||||
turn cause all the docs archives to be rebuilt.
|
|
||||||
|
|
||||||
## Overview of creating an archive image
|
Feel free to send us pull requests and file issues. Our docs are completely
|
||||||
|
open source, and we deeply appreciate contributions from the Docker community!
|
||||||
|
|
||||||
1. The archive's `Dockerfile` is invoked.
|
## Provide feedback
|
||||||
|
|
||||||
2. It is based on the `docker.github.io/docs:docs-builder` image (built by the
|
We’d love to hear your feedback. Please file documentation issues only in the
|
||||||
[Dockerfile.builder](Dockerfile.builder) Dockerfile in the `publish-tools`
|
Docs GitHub repository. You can file a new issue to suggest improvements or if
|
||||||
branch). That image in turn invokes the
|
you see any errors in the existing documentation.
|
||||||
`docker.github.io/docs:docs-builder-onbuild` image (built by the
|
|
||||||
[Dockerfile.builder.onbuild](Dockerfile.builder.onbuild) Dockerfile in the
|
|
||||||
`publish-tools` branch). Post-processing scripts included in this image.
|
|
||||||
|
|
||||||
At the end of step 2, all the static HTML has been built and post-processing
|
Before submitting a new issue, check whether the issue has already been
|
||||||
has been done on it.
|
reported. You can join the discussion using an emoji, or by adding a comment to
|
||||||
|
an existing issue. If possible, we recommend that you suggest a fix to the issue
|
||||||
|
by creating a pull request.
|
||||||
|
|
||||||
3. The archive's `Dockerfile` resets to the
|
You can ask general questions and get community support through the [Docker
|
||||||
`docker.github.io/docs:nginx-onbuild` image (built by the
|
Community Slack](https://dockr.ly/comm-slack). Personalized support is available
|
||||||
[Dockerfile.nginx](Dockerfile.nginx.onbuild) Dockerfile in the `publish-tools`
|
through the Docker Pro, Team, and Business subscriptions. See [Docker
|
||||||
branch). This image contains a Nginx environment, our custom Nginx
|
Pricing](https://www.docker.com/pricing) for details.
|
||||||
configuration file, and some (tiny) scripts we use for post-processing HTML.
|
|
||||||
|
|
||||||
At the end of step 3, the static HTML from step 2 has been copied into the
|
If you have an idea for a new feature or behavior change in a specific aspect of
|
||||||
much smaller layer created by the `docker.github.io/docs:nginx-onbuild`
|
Docker or have found a product bug, file that issue in the project's code
|
||||||
image, along with the Nginx configuration. The static HTML for the archive
|
repository.
|
||||||
is now self-browseable.
|
|
||||||
|
|
||||||
The result of these three steps is the archive Dockerfile, which is tagged as
|
We've made it easy for you to file new issues.
|
||||||
`docker.github.io/docs:v<VER>` as set in the Dockerfile in step 1. This image
|
|
||||||
has two uses:
|
|
||||||
|
|
||||||
- It can be deployed as a standalone docs archive for that version.
|
- Click **[New issue](https://github.com/docker/docs/issues/new)** on the docs repository and fill in the details, or
|
||||||
- It is also incorporated into the process which builds the
|
- Click **Request docs changes** in the right column of every page on
|
||||||
[`docker.github.io/docs:docs-base`](https://github.com/docker/docker.github.io/tree/docs-base)
|
[docs.docker.com](https://docs.docker.com/) and add the details, or
|
||||||
image. That image holds all of the archives, one per directory, and is the base
|
|
||||||
image for the documentation published on https://docs.docker.com/).
|
|
||||||
|
|
||||||
## Build all of the required images locally
|

|
||||||
|
|
||||||
All of the images are built using the auto-builder function of Docker Cloud.
|
- Click the **Give feedback** link on the side of every page in the docs.
|
||||||
To test the entire process end-to-end on your local system, you need to build
|
|
||||||
each of the required images locally and tag it appropriately:
|
|
||||||
|
|
||||||
1. Locally build and tag all tooling images:
|

|
||||||
|
|
||||||
```bash
|
## Contribute to Docker docs
|
||||||
$ git checkout publish-tools
|
|
||||||
$ docker build -t docs/docker.github.io:docs-builder -f Dockerfile.builder .
|
|
||||||
$ docker build -t docs/docker.github.io:docs-builder-onbuild -f Dockerfile.builder.onbuild .
|
|
||||||
$ docker build -t docs/docker.github.io:nginx-onbuild -f Dockerfile.nginx.onbuild .
|
|
||||||
```
|
|
||||||
|
|
||||||
2. For each archive branch (`v1.4` through whatever is the newest archive
|
We value your contribution. We want to make it as easy as possible to submit
|
||||||
(currently `v17.09`)), build that archive branch's image. This example does
|
your contributions to the Docker docs repository. Changes to the docs are
|
||||||
that for the `v1.4` archive branch:
|
handled through pull requests against the `main` branch. To learn how to
|
||||||
|
contribute, see [CONTRIBUTING.md](CONTRIBUTING.md).
|
||||||
```bash
|
|
||||||
$ git checkout v1.4
|
|
||||||
$ docker build -t docs/docker.github.io:v1.4 .
|
|
||||||
```
|
|
||||||
|
|
||||||
> **Note**: The archive Dockerfile looks like this (comments have been
|
|
||||||
> removed). Each of the two `FROM` lines will use the `VER` build-time
|
|
||||||
> argument as a parameter.
|
|
||||||
>
|
|
||||||
> ```Dockerfile
|
|
||||||
> ARG VER=v1.4
|
|
||||||
> FROM docs/docker.github.io:docs-builder-onbuild AS builder
|
|
||||||
> FROM docs/docker.github.io:nginx-onbuild
|
|
||||||
> ```
|
|
||||||
|
|
||||||
3. After repeating step 2 for each archive branch, build the image for `master`:
|
|
||||||
|
|
||||||
```bash
|
|
||||||
$ git checkout master
|
|
||||||
$ docker build -t docs/docker.github.io:latest -t docker.github.io/docs:livedocs .
|
|
||||||
```
|
|
||||||
|
|
||||||
The resulting image has the static HTML for each archive and for the
|
|
||||||
contents of `master`. To test it:
|
|
||||||
|
|
||||||
```bash
|
|
||||||
$ docker run --rm -it -p 4000:4000 docs/docker.github.io:latest
|
|
||||||
```
|
|
||||||
|
|
||||||
## When to change each file in this branch
|
|
||||||
|
|
||||||
- `Dockerfile.builder`: to update the version of Jekyll or to add or modify
|
|
||||||
tools needed by the Jekyll environment.
|
|
||||||
- `Dockerfile.builder.onbuild`: to change the logic for building archives using
|
|
||||||
Jekyll or post-processing the static HTML.
|
|
||||||
- contents of the `scripts` directory: To change the behavior of any of the
|
|
||||||
individual post-processing scripts which run against the static HTML.
|
|
||||||
- `Dockerfile.nginx.onbuild`: To change the base Nginx image or to change the
|
|
||||||
command that starts Nginx for an archive.
|
|
||||||
- `nginx-overrides.conf`: To change the Nginx configuration used by all of the
|
|
||||||
images which serve static HTML.
|
|
||||||
|
|
||||||
|
## Copyright and license
|
||||||
|
|
||||||
|
Copyright 2013-2025 Docker, Inc., released under the <a href="https://github.com/docker/docs/blob/main/LICENSE">Apache 2.0 license</a> .
|
||||||
|
|
|
@ -0,0 +1,169 @@
|
||||||
|
extends: conditional
|
||||||
|
message: "'%s' has no definition."
|
||||||
|
link: https://docs.docker.com/contribute/style/grammar/#acronyms-and-initialisms
|
||||||
|
level: warning
|
||||||
|
ignorecase: false
|
||||||
|
# Ensures that the existence of 'first' implies the existence of 'second'.
|
||||||
|
first: '\b([A-Z]{2,5})\b'
|
||||||
|
second: '(?:\b[A-Z][a-z]+ )+\(([A-Z]{2,5})s?\)'
|
||||||
|
# ... with the exception of these:
|
||||||
|
exceptions:
|
||||||
|
- ACH
|
||||||
|
- AGPL
|
||||||
|
- AI
|
||||||
|
- API
|
||||||
|
- ARM
|
||||||
|
- ARP
|
||||||
|
- ASP
|
||||||
|
- AUFS
|
||||||
|
- AWS
|
||||||
|
- BGP # Border Gateway Protocol
|
||||||
|
- BIOS
|
||||||
|
- BPF
|
||||||
|
- BSD
|
||||||
|
- CDI
|
||||||
|
- CFS
|
||||||
|
- CI
|
||||||
|
- CIDR
|
||||||
|
- CISA
|
||||||
|
- CLI
|
||||||
|
- CNCF
|
||||||
|
- CORS
|
||||||
|
- CPU
|
||||||
|
- CSI
|
||||||
|
- CSS
|
||||||
|
- CSV
|
||||||
|
- CUDA
|
||||||
|
- CVE
|
||||||
|
- DAD
|
||||||
|
- DCT
|
||||||
|
- DEBUG
|
||||||
|
- DHCP
|
||||||
|
- DMR
|
||||||
|
- DNS
|
||||||
|
- DOM
|
||||||
|
- DPI
|
||||||
|
- DSOS
|
||||||
|
- DVP
|
||||||
|
- ECI
|
||||||
|
- ELK
|
||||||
|
- FAQ
|
||||||
|
- FPM
|
||||||
|
- FUSE
|
||||||
|
- GB
|
||||||
|
- GCC
|
||||||
|
- GDB
|
||||||
|
- GET
|
||||||
|
- GHSA
|
||||||
|
- GNOME
|
||||||
|
- GNU
|
||||||
|
- GPG
|
||||||
|
- GPL
|
||||||
|
- GPU
|
||||||
|
- GRUB
|
||||||
|
- GTK
|
||||||
|
- GUI
|
||||||
|
- GUID
|
||||||
|
- HEAD
|
||||||
|
- HTML
|
||||||
|
- HTTP
|
||||||
|
- HTTPS
|
||||||
|
- IAM
|
||||||
|
- IBM
|
||||||
|
- ID
|
||||||
|
- IDE
|
||||||
|
- IP
|
||||||
|
- IPAM
|
||||||
|
- IPC
|
||||||
|
- IT
|
||||||
|
- JAR
|
||||||
|
- JIT
|
||||||
|
- JSON
|
||||||
|
- JSX
|
||||||
|
- KDE
|
||||||
|
- LESS
|
||||||
|
- LLDB
|
||||||
|
- LLM
|
||||||
|
- LTS
|
||||||
|
- MAC
|
||||||
|
- MATE
|
||||||
|
- mcp
|
||||||
|
- MCP
|
||||||
|
- MDM
|
||||||
|
- MDN
|
||||||
|
- MSI
|
||||||
|
- NAT
|
||||||
|
- NET
|
||||||
|
- NFS
|
||||||
|
- NOTE
|
||||||
|
- NTFS
|
||||||
|
- NTLM
|
||||||
|
- NUMA
|
||||||
|
- NVDA
|
||||||
|
- OCI
|
||||||
|
- OS
|
||||||
|
- OSI
|
||||||
|
- OSS
|
||||||
|
- PATH
|
||||||
|
- PDF
|
||||||
|
- PEM
|
||||||
|
- PHP
|
||||||
|
- PID
|
||||||
|
- POSIX
|
||||||
|
- POST
|
||||||
|
- QA
|
||||||
|
- QEMU
|
||||||
|
- RAM
|
||||||
|
- REPL
|
||||||
|
- REST
|
||||||
|
- RFC
|
||||||
|
- RHEL
|
||||||
|
- RPM
|
||||||
|
- RSA
|
||||||
|
- SAML
|
||||||
|
- SARIF
|
||||||
|
- SBOM
|
||||||
|
- SCIM
|
||||||
|
- SCM
|
||||||
|
- SCSS
|
||||||
|
- SCTP
|
||||||
|
- SDK
|
||||||
|
- SLES
|
||||||
|
- SLSA
|
||||||
|
- SOCKS
|
||||||
|
- SPDX
|
||||||
|
- SQL
|
||||||
|
- SSD
|
||||||
|
- SSH
|
||||||
|
- SSL
|
||||||
|
- SSO
|
||||||
|
- SVG
|
||||||
|
- TBD
|
||||||
|
- TCP
|
||||||
|
- TIP
|
||||||
|
- TLS
|
||||||
|
- TODO
|
||||||
|
- TTY
|
||||||
|
- TXT
|
||||||
|
- UDP
|
||||||
|
- UI
|
||||||
|
- URI
|
||||||
|
- URL
|
||||||
|
- USB
|
||||||
|
- USD
|
||||||
|
- UTF
|
||||||
|
- UTS
|
||||||
|
- UUID
|
||||||
|
- VAT
|
||||||
|
- VDI
|
||||||
|
- VIP
|
||||||
|
- VLAN
|
||||||
|
- VM
|
||||||
|
- VPN
|
||||||
|
- WSL
|
||||||
|
- XML
|
||||||
|
- XSS
|
||||||
|
- YAML
|
||||||
|
- ZFS
|
||||||
|
- ZIP
|
|
@ -0,0 +1,8 @@
|
||||||
|
extends: existence
|
||||||
|
message: "Consider removing '%s'."
|
||||||
|
ignorecase: true
|
||||||
|
level: warning
|
||||||
|
tokens:
|
||||||
|
- please
|
||||||
|
- very
|
||||||
|
- really
|
|
@ -0,0 +1,10 @@
|
||||||
|
extends: existence
|
||||||
|
message: "Please capitalize Docker."
|
||||||
|
level: error
|
||||||
|
ignorecase: false
|
||||||
|
action:
|
||||||
|
name: replace
|
||||||
|
params:
|
||||||
|
- Docker
|
||||||
|
tokens:
|
||||||
|
- '[^\[/]docker[^/]'
|
|
@ -0,0 +1,11 @@
|
||||||
|
extends: existence
|
||||||
|
message: "Don't use exclamation points in text."
|
||||||
|
nonword: true
|
||||||
|
level: error
|
||||||
|
action:
|
||||||
|
name: edit
|
||||||
|
params:
|
||||||
|
- trim_right
|
||||||
|
- "!"
|
||||||
|
tokens:
|
||||||
|
- '\w+!(?:\s|$)'
|
|
@ -0,0 +1,6 @@
|
||||||
|
extends: substitution
|
||||||
|
message: "Use '%s' instead of '%s'."
|
||||||
|
level: error
|
||||||
|
ignorecase: false
|
||||||
|
swap:
|
||||||
|
Docker CE: Docker Engine
|
|
@ -0,0 +1,8 @@
|
||||||
|
extends: existence
|
||||||
|
message: "Avoid generic calls to action: '%s'"
|
||||||
|
link: https://docs.docker.com/contribute/style/formatting/#links
|
||||||
|
level: warning
|
||||||
|
scope: raw
|
||||||
|
ignorecase: true
|
||||||
|
raw:
|
||||||
|
- \[(click here|(find out|learn) more)\]
|
|
@ -0,0 +1,7 @@
|
||||||
|
extends: occurrence
|
||||||
|
message: "Try to keep headings short (< 8 words)."
|
||||||
|
link: https://docs.docker.com/contribute/style/formatting/#headings-and-subheadings
|
||||||
|
scope: heading
|
||||||
|
level: suggestion
|
||||||
|
max: 8
|
||||||
|
token: \b(\w+)\b
|
|
@ -0,0 +1,12 @@
|
||||||
|
extends: existence
|
||||||
|
message: "Don't put a period at the end of a heading."
|
||||||
|
nonword: true
|
||||||
|
level: warning
|
||||||
|
scope: heading
|
||||||
|
action:
|
||||||
|
name: edit
|
||||||
|
params:
|
||||||
|
- trim_right
|
||||||
|
- "."
|
||||||
|
tokens:
|
||||||
|
- '[a-z0-9][.]\s*$'
|
|
@ -0,0 +1,8 @@
|
||||||
|
extends: capitalization
|
||||||
|
message: "Use sentence case for headings: '%s'."
|
||||||
|
level: warning
|
||||||
|
scope: heading
|
||||||
|
match: $sentence
|
||||||
|
threshold: 0.4
|
||||||
|
indicators:
|
||||||
|
- ":"
|
|
@ -0,0 +1,6 @@
|
||||||
|
extends: existence
|
||||||
|
message: Don’t add commas (,) or semicolons (;) to the ends of list items.
|
||||||
|
link: https://docs.docker.com/contribute/style/grammar/#lists
|
||||||
|
level: warning
|
||||||
|
scope: list
|
||||||
|
raw: '[,;]$'
|
|
@ -0,0 +1,7 @@
|
||||||
|
extends: existence
|
||||||
|
message: "Use the Oxford comma in '%s'."
|
||||||
|
scope: sentence
|
||||||
|
level: warning
|
||||||
|
nonword: true
|
||||||
|
tokens:
|
||||||
|
- '(?:[^\s,]+,){1,}\s\w+\s(?:and|or)\s\w+[.?!]'
|
|
@ -0,0 +1,43 @@
|
||||||
|
extends: substitution
|
||||||
|
message: "Consider using '%s' instead of '%s'"
|
||||||
|
link: https://docs.docker.com/contribute/style/recommended-words/
|
||||||
|
ignorecase: true
|
||||||
|
level: suggestion
|
||||||
|
action:
|
||||||
|
name: replace
|
||||||
|
swap:
|
||||||
|
'\b(?:eg|e\.g\.)[\s,]': for example
|
||||||
|
'\b(?:ie|i\.e\.)[\s,]': that is
|
||||||
|
(?:account name|accountname|user name): username
|
||||||
|
(?:drop down|dropdown): drop-down
|
||||||
|
(?:log out|logout): sign out
|
||||||
|
(?:sign on|log on|log in|logon|login): sign in
|
||||||
|
above: previous
|
||||||
|
adaptor: adapter
|
||||||
|
admin(?! console): administrator
|
||||||
|
administrate: administer
|
||||||
|
afterwards: afterward
|
||||||
|
allow: let
|
||||||
|
allows: lets
|
||||||
|
alphabetic: alphabetical
|
||||||
|
alphanumerical: alphanumeric
|
||||||
|
anti-aliasing: antialiasing
|
||||||
|
anti-malware: antimalware
|
||||||
|
anti-spyware: antispyware
|
||||||
|
anti-virus: antivirus
|
||||||
|
appendixes: appendices
|
||||||
|
assembler: assembly
|
||||||
|
below: following
|
||||||
|
check box: checkbox
|
||||||
|
check boxes: checkboxes
|
||||||
|
click: select
|
||||||
|
distro: distribution
|
||||||
|
ergo: therefore
|
||||||
|
file name: filename
|
||||||
|
keypress: keystroke
|
||||||
|
mutices: mutexes
|
||||||
|
repo: repository
|
||||||
|
scroll: navigate
|
||||||
|
url: URL
|
||||||
|
vs: versus
|
||||||
|
wish: want
|
|
@ -0,0 +1,7 @@
|
||||||
|
extends: occurrence
|
||||||
|
message: "Write short, concise sentences. (<=40 words)"
|
||||||
|
scope: sentence
|
||||||
|
link: https://docs.docker.com/contribute/checklist/
|
||||||
|
level: warning
|
||||||
|
max: 40
|
||||||
|
token: \b(\w+)\b
|
|
@ -0,0 +1,10 @@
|
||||||
|
extends: existence
|
||||||
|
message: "'%s' should have one space."
|
||||||
|
level: error
|
||||||
|
scope:
|
||||||
|
- list
|
||||||
|
- heading
|
||||||
|
- paragraph
|
||||||
|
nonword: true
|
||||||
|
tokens:
|
||||||
|
- " {2,}"
|
|
@ -0,0 +1,9 @@
|
||||||
|
extends: substitution
|
||||||
|
message: "Use '%s' instead of '%s'."
|
||||||
|
ignorecase: true
|
||||||
|
level: warning
|
||||||
|
action:
|
||||||
|
name: replace
|
||||||
|
swap:
|
||||||
|
URL for: URL of
|
||||||
|
an URL: a URL
|
|
@ -0,0 +1,10 @@
|
||||||
|
extends: substitution
|
||||||
|
message: "Use '%s' instead of '%s'"
|
||||||
|
link: https://docs.docker.com/contribute/style/recommended-words/
|
||||||
|
level: error
|
||||||
|
swap:
|
||||||
|
(?:kilobytes?|KB): kB
|
||||||
|
gigabytes?: GB
|
||||||
|
megabytes?: MB
|
||||||
|
petabytes?: PB
|
||||||
|
terabytes?: TB
|
|
@ -0,0 +1,12 @@
|
||||||
|
extends: existence
|
||||||
|
message: Use later when talking about version numbers.
|
||||||
|
link: https://docs.docker.com/contribute/style/recommended-words/#later
|
||||||
|
scope: raw
|
||||||
|
raw:
|
||||||
|
- '\bv?'
|
||||||
|
- '(?P<major>0|[1-9]\d*)\.?'
|
||||||
|
- '(?P<minor>0|[1-9]\d*)?\.?'
|
||||||
|
- '(?P<patch>0|[1-9]\d*)?'
|
||||||
|
- '(?:-(?P<prerelease>(?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*)(?:\.(?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*))*))?'
|
||||||
|
- '(?:\+(?P<buildmetadata>[0-9a-zA-Z-]+(?:\.[0-9a-zA-Z-]+)*))?'
|
||||||
|
- '\b (and|or) (higher|above)'
|
|
@ -0,0 +1,10 @@
|
||||||
|
extends: existence
|
||||||
|
message: "Avoid using first-person plural like '%s'."
|
||||||
|
level: warning
|
||||||
|
ignorecase: true
|
||||||
|
tokens:
|
||||||
|
- we
|
||||||
|
- we'(?:ve|re)
|
||||||
|
- ours?
|
||||||
|
- us
|
||||||
|
- let's
|
|
@ -0,0 +1,232 @@
|
||||||
|
(?i)[A-Z]{2,}'?s
|
||||||
|
Adreno
|
||||||
|
Aleksandrov
|
||||||
|
Amazon
|
||||||
|
Anchore
|
||||||
|
Apple
|
||||||
|
Artifactory
|
||||||
|
Azure
|
||||||
|
bootup
|
||||||
|
Btrfs
|
||||||
|
BuildKit
|
||||||
|
BusyBox
|
||||||
|
CentOS
|
||||||
|
Ceph
|
||||||
|
cgroup
|
||||||
|
Chrome
|
||||||
|
Chrome DevTools
|
||||||
|
Citrix
|
||||||
|
CloudFront
|
||||||
|
Codefresh
|
||||||
|
Codespaces
|
||||||
|
config
|
||||||
|
containerd
|
||||||
|
Couchbase
|
||||||
|
CouchDB
|
||||||
|
datacenter
|
||||||
|
Datadog
|
||||||
|
Ddosify
|
||||||
|
Debootstrap
|
||||||
|
deprovisioning
|
||||||
|
deserialization
|
||||||
|
deserialize
|
||||||
|
Dev
|
||||||
|
Dev Environments?
|
||||||
|
Dex
|
||||||
|
displayName
|
||||||
|
Django
|
||||||
|
DMR
|
||||||
|
Docker Build Cloud
|
||||||
|
Docker Business
|
||||||
|
Docker Dashboard
|
||||||
|
Docker Desktop
|
||||||
|
Docker Engine
|
||||||
|
Docker Extension
|
||||||
|
Docker Hub
|
||||||
|
Docker Scout
|
||||||
|
Docker Team
|
||||||
|
Docker-Sponsored Open Source
|
||||||
|
Docker's
|
||||||
|
Dockerfile
|
||||||
|
dockerignore
|
||||||
|
Dockerize
|
||||||
|
Dockerizing
|
||||||
|
Entra
|
||||||
|
EPERM
|
||||||
|
Ethernet
|
||||||
|
Fargate
|
||||||
|
Fedora
|
||||||
|
firewalld
|
||||||
|
Flink
|
||||||
|
fluentd
|
||||||
|
g?libc
|
||||||
|
GeoNetwork
|
||||||
|
GGUF
|
||||||
|
Git
|
||||||
|
GitHub( Actions)?
|
||||||
|
Google
|
||||||
|
Grafana
|
||||||
|
Gravatar
|
||||||
|
gRPC
|
||||||
|
HyperKit
|
||||||
|
inferencing
|
||||||
|
inotify
|
||||||
|
Intel
|
||||||
|
Intune
|
||||||
|
iptables
|
||||||
|
IPv[46]
|
||||||
|
IPvlan
|
||||||
|
isort
|
||||||
|
Jamf
|
||||||
|
JetBrains
|
||||||
|
JFrog
|
||||||
|
JUnit
|
||||||
|
Kerberos
|
||||||
|
Kitematic
|
||||||
|
Kubeadm
|
||||||
|
kubectl
|
||||||
|
kubefwd
|
||||||
|
kubelet
|
||||||
|
Kubernetes
|
||||||
|
Laradock
|
||||||
|
Laravel
|
||||||
|
libseccomp
|
||||||
|
Linux
|
||||||
|
LinuxKit
|
||||||
|
Logstash
|
||||||
|
lookup
|
||||||
|
Mac
|
||||||
|
macOS
|
||||||
|
macvlan
|
||||||
|
Mail(chimp|gun)
|
||||||
|
mfsymlinks
|
||||||
|
Microsoft
|
||||||
|
minikube
|
||||||
|
monorepos?
|
||||||
|
musl
|
||||||
|
MySQL
|
||||||
|
nameserver
|
||||||
|
namespace
|
||||||
|
namespacing
|
||||||
|
netfilter
|
||||||
|
netlabel
|
||||||
|
Netplan
|
||||||
|
NFSv\d
|
||||||
|
Nginx
|
||||||
|
npm
|
||||||
|
Nutanix
|
||||||
|
Nuxeo
|
||||||
|
NVIDIA
|
||||||
|
OAuth
|
||||||
|
Okta
|
||||||
|
Ollama
|
||||||
|
osquery
|
||||||
|
osxfs
|
||||||
|
OTel
|
||||||
|
Paketo
|
||||||
|
pgAdmin
|
||||||
|
PKG
|
||||||
|
Postgres
|
||||||
|
PowerShell
|
||||||
|
Python
|
||||||
|
Qualcomm
|
||||||
|
rollback
|
||||||
|
rootful
|
||||||
|
runc
|
||||||
|
Ryuk
|
||||||
|
S3
|
||||||
|
scrollable
|
||||||
|
Slack
|
||||||
|
snapshotters?
|
||||||
|
Snyk
|
||||||
|
Solr
|
||||||
|
SonarQube
|
||||||
|
SQLite
|
||||||
|
stdin
|
||||||
|
stdout
|
||||||
|
subfolder
|
||||||
|
Syft
|
||||||
|
syntaxes
|
||||||
|
Sysbox
|
||||||
|
sysctls
|
||||||
|
Sysdig
|
||||||
|
systemd
|
||||||
|
Testcontainers
|
||||||
|
tmpfs
|
||||||
|
Traefik
|
||||||
|
Trixie
|
||||||
|
Ubuntu
|
||||||
|
ufw
|
||||||
|
uid
|
||||||
|
umask
|
||||||
|
Unix
|
||||||
|
unmanaged
|
||||||
|
VMware
|
||||||
|
vpnkit
|
||||||
|
vSphere
|
||||||
|
VSCode
|
||||||
|
Wasm
|
||||||
|
Windows
|
||||||
|
windowsfilter
|
||||||
|
WireMock
|
||||||
|
Xdebug
|
||||||
|
Zscaler
|
||||||
|
Zsh
|
||||||
|
[Aa]nonymized?
|
||||||
|
[Aa]utobuild
|
||||||
|
[Aa]llowlist
|
||||||
|
[Aa]utobuilds?
|
||||||
|
[Aa]utotests?
|
||||||
|
[Bb]uildx
|
||||||
|
[Bb]uildpack(s)?
|
||||||
|
[Cc]odenames?
|
||||||
|
[Cc]ompose
|
||||||
|
[Cc]onfigs
|
||||||
|
[Dd]istroless
|
||||||
|
[Ff]ilepaths?
|
||||||
|
[Ff]iletypes?
|
||||||
|
[GgCc]oroutine
|
||||||
|
[Hh]ealthcheck
|
||||||
|
[Hh]ostname
|
||||||
|
[Ii]nfosec
|
||||||
|
[Ii]nline
|
||||||
|
[Kk]eyrings?
|
||||||
|
[Ll]oopback
|
||||||
|
[Mm]emcached
|
||||||
|
[Mm]oby
|
||||||
|
[Mm]ountpoint
|
||||||
|
[Nn]amespace
|
||||||
|
[Oo]nboarding
|
||||||
|
[Pp]aravirtualization
|
||||||
|
[Pp]repend
|
||||||
|
[Pp]rocfs
|
||||||
|
[Pp]roxied
|
||||||
|
[Pp]roxying
|
||||||
|
[pP]yright
|
||||||
|
[Rr]eal-time
|
||||||
|
[Rr]egex(es)?
|
||||||
|
[Rr]untimes?
|
||||||
|
[Ss]andbox(ed)?
|
||||||
|
[Ss]eccomp
|
||||||
|
[Ss]ubmounts?
|
||||||
|
[Ss]ubnet
|
||||||
|
[Ss]ubpaths?
|
||||||
|
[Ss]ubtrees?
|
||||||
|
[Ss]wappable
|
||||||
|
[Ss]warm
|
||||||
|
[Ss]yscalls?
|
||||||
|
[Ss]ysfs
|
||||||
|
[Tt]eardown
|
||||||
|
[Tt]oolchains?
|
||||||
|
[Uu]narchived?
|
||||||
|
[Uu]ngated
|
||||||
|
[Uu]ntrusted
|
||||||
|
[Uu]serland
|
||||||
|
[Uu]serspace
|
||||||
|
[Vv]irtiofs
|
||||||
|
[Vv]irtualize
|
||||||
|
[Ww]alkthrough
|
||||||
|
|
File diff suppressed because it is too large
File diff suppressed because it is too large
|
@ -0,0 +1,165 @@
|
||||||
|
---
|
||||||
|
description: Volume plugin for Amazon EBS
|
||||||
|
keywords: "API, Usage, plugins, documentation, developer, amazon, ebs, rexray, volume"
|
||||||
|
---
|
||||||
|
|
||||||
|
<!-- This file is maintained within the docker/cli GitHub
|
||||||
|
repository at https://github.com/docker/cli/. Make all
|
||||||
|
pull requests against that repo. If you see this file in
|
||||||
|
another repository, consider it read-only there, as it will
|
||||||
|
periodically be overwritten by the definitive file. Pull
|
||||||
|
requests which include edits to this file in other repositories
|
||||||
|
will be rejected.
|
||||||
|
-->
|
||||||
|
|
||||||
|
# Volume plugin for Amazon EBS
|
||||||
|
|
||||||
|
## A proof-of-concept Rexray plugin
|
||||||
|
|
||||||
|
In this example, a simple Rexray plugin will be created for the purposes of using
|
||||||
|
it on an Amazon EC2 instance with EBS. It is not meant to be a complete Rexray plugin.
|
||||||
|
|
||||||
|
The example source is available at [https://github.com/tiborvass/rexray-plugin](https://github.com/tiborvass/rexray-plugin).
|
||||||
|
|
||||||
|
To learn more about Rexray: [https://github.com/codedellemc/rexray](https://github.com/codedellemc/rexray)
|
||||||
|
|
||||||
|
## 1. Make a Docker image
|
||||||
|
|
||||||
|
The following is the Dockerfile used to containerize rexray.
|
||||||
|
|
||||||
|
```dockerfile
|
||||||
|
FROM debian:jessie
|
||||||
|
RUN apt-get update && apt-get install -y --no-install-recommends wget ca-certificates
|
||||||
|
RUN wget https://dl.bintray.com/emccode/rexray/stable/0.6.4/rexray-Linux-x86_64-0.6.4.tar.gz -O rexray.tar.gz && tar -xvzf rexray.tar.gz -C /usr/bin && rm rexray.tar.gz
|
||||||
|
RUN mkdir -p /run/docker/plugins /var/lib/libstorage/volumes
|
||||||
|
ENTRYPOINT ["rexray"]
|
||||||
|
CMD ["--help"]
|
||||||
|
```
|
||||||
|
|
||||||
|
To build it you can run `image=$(cat Dockerfile | docker build -q -)` and `$image`
|
||||||
|
will reference the containerized rexray image.
|
||||||
|
|
||||||
|
## 2. Extract rootfs
|
||||||
|
|
||||||
|
```sh
|
||||||
|
$ TMPDIR=/tmp/rexray # for the purpose of this example
|
||||||
|
$ # create container without running it, to extract the rootfs from image
|
||||||
|
$ docker create --name rexray "$image"
|
||||||
|
$ # save the rootfs to a tar archive
|
||||||
|
$ docker export -o $TMPDIR/rexray.tar rexray
|
||||||
|
$ # extract rootfs from tar archive to a rootfs folder
|
||||||
|
$ ( mkdir -p $TMPDIR/rootfs; cd $TMPDIR/rootfs; tar xf ../rexray.tar )
|
||||||
|
```
|
||||||
|
|
||||||
|
## 3. Add plugin configuration
|
||||||
|
|
||||||
|
We have to put the following JSON in `$TMPDIR/config.json`:
|
||||||
|
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"Args": {
|
||||||
|
"Description": "",
|
||||||
|
"Name": "",
|
||||||
|
"Settable": null,
|
||||||
|
"Value": null
|
||||||
|
},
|
||||||
|
"Description": "A proof-of-concept EBS plugin (using rexray) for Docker",
|
||||||
|
"Documentation": "https://github.com/tiborvass/rexray-plugin",
|
||||||
|
"Entrypoint": [
|
||||||
|
"/usr/bin/rexray", "service", "start", "-f"
|
||||||
|
],
|
||||||
|
"Env": [
|
||||||
|
{
|
||||||
|
"Description": "",
|
||||||
|
"Name": "REXRAY_SERVICE",
|
||||||
|
"Settable": [
|
||||||
|
"value"
|
||||||
|
],
|
||||||
|
"Value": "ebs"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"Description": "",
|
||||||
|
"Name": "EBS_ACCESSKEY",
|
||||||
|
"Settable": [
|
||||||
|
"value"
|
||||||
|
],
|
||||||
|
"Value": ""
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"Description": "",
|
||||||
|
"Name": "EBS_SECRETKEY",
|
||||||
|
"Settable": [
|
||||||
|
"value"
|
||||||
|
],
|
||||||
|
"Value": ""
|
||||||
|
}
|
||||||
|
],
|
||||||
|
"Interface": {
|
||||||
|
"Socket": "rexray.sock",
|
||||||
|
"Types": [
|
||||||
|
"docker.volumedriver/1.0"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
"Linux": {
|
||||||
|
"AllowAllDevices": true,
|
||||||
|
"Capabilities": ["CAP_SYS_ADMIN"],
|
||||||
|
"Devices": null
|
||||||
|
},
|
||||||
|
"Mounts": [
|
||||||
|
{
|
||||||
|
"Source": "/dev",
|
||||||
|
"Destination": "/dev",
|
||||||
|
"Type": "bind",
|
||||||
|
"Options": ["rbind"]
|
||||||
|
}
|
||||||
|
],
|
||||||
|
"Network": {
|
||||||
|
"Type": "host"
|
||||||
|
},
|
||||||
|
"PropagatedMount": "/var/lib/libstorage/volumes",
|
||||||
|
"User": {},
|
||||||
|
"WorkDir": ""
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
Note a few points:
|
||||||
|
- `PropagatedMount` is needed so that the docker daemon can see mounts done by the
|
||||||
|
rexray plugin from within the container, otherwise the docker daemon is not able
|
||||||
|
to mount a docker volume.
|
||||||
|
- The rexray plugin needs dynamic access to host devices. For that reason, we
|
||||||
|
have to give it access to all devices under `/dev` and set `AllowAllDevices` to
|
||||||
|
true for proper access.
|
||||||
|
- The user of this simple plugin can change only 3 settings: `REXRAY_SERVICE`,
|
||||||
|
`EBS_ACCESSKEY` and `EBS_SECRETKEY`. This is because of the reduced scope of this
|
||||||
|
plugin. Ideally other rexray parameters could also be set.
|
||||||
|
|
||||||
|
## 4. Create plugin
|
||||||
|
|
||||||
|
`docker plugin create tiborvass/rexray-plugin "$TMPDIR"` will create the plugin.
|
||||||
|
|
||||||
|
```sh
|
||||||
|
$ docker plugin ls
|
||||||
|
ID NAME DESCRIPTION ENABLED
|
||||||
|
2475a4bd0ca5 tiborvass/rexray-plugin:latest A rexray volume plugin for Docker false
|
||||||
|
```
|
||||||
|
|
||||||
|
## 5. Test plugin
|
||||||
|
|
||||||
|
```sh
|
||||||
|
$ docker plugin set tiborvass/rexray-plugin EBS_ACCESSKEY=$AWS_ACCESSKEY EBS_SECRETKEY=$AWS_SECRETKEY
|
||||||
|
$ docker plugin enable tiborvass/rexray-plugin
|
||||||
|
$ docker volume create -d tiborvass/rexray-plugin my-ebs-volume
|
||||||
|
$ docker volume ls
|
||||||
|
DRIVER VOLUME NAME
|
||||||
|
tiborvass/rexray-plugin:latest my-ebs-volume
|
||||||
|
$ docker run --rm -v my-ebs-volume:/volume busybox sh -c 'echo bye > /volume/hi'
|
||||||
|
$ docker run --rm -v my-ebs-volume:/volume busybox cat /volume/hi
|
||||||
|
bye
|
||||||
|
```
|
||||||
|
|
||||||
|
## 6. Push plugin
|
||||||
|
|
||||||
|
First, ensure you are logged in with `docker login`. Then you can run:
|
||||||
|
`docker plugin push tiborvass/rexray-plugin` to push it like a regular docker
|
||||||
|
image to a registry, to make it available for others to install via
|
||||||
|
`docker plugin install tiborvass/rexray-plugin EBS_ACCESSKEY=$AWS_ACCESSKEY EBS_SECRETKEY=$AWS_SECRETKEY`.
|
|
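A minimal sketch of those two steps in sequence (same plugin name and credentials flow as used earlier in this example):

```console
$ docker login
$ docker plugin push tiborvass/rexray-plugin
```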
@ -0,0 +1,271 @@
|
||||||
|
---
|
||||||
|
title: Docker Engine managed plugin system
|
||||||
|
linkTitle: Docker Engine plugins
|
||||||
|
description: Develop and use a plugin with the managed plugin system
|
||||||
|
keywords: "API, Usage, plugins, documentation, developer"
|
||||||
|
aliases:
|
||||||
|
- "/engine/extend/plugins_graphdriver/"
|
||||||
|
---
|
||||||
|
|
||||||
|
- [Installing and using a plugin](index.md#installing-and-using-a-plugin)
|
||||||
|
- [Developing a plugin](index.md#developing-a-plugin)
|
||||||
|
- [Debugging plugins](index.md#debugging-plugins)
|
||||||
|
|
||||||
|
Docker Engine's plugin system lets you install, start, stop, and remove
|
||||||
|
plugins using Docker Engine.
|
||||||
|
|
||||||
|
For information about legacy (non-managed) plugins, refer to
|
||||||
|
[Understand legacy Docker Engine plugins](legacy_plugins.md).
|
||||||
|
|
||||||
|
> [!NOTE]
|
||||||
|
> Docker Engine managed plugins are currently not supported on Windows daemons.
|
||||||
|
|
||||||
|
## Installing and using a plugin
|
||||||
|
|
||||||
|
Plugins are distributed as Docker images and can be hosted on Docker Hub or on
|
||||||
|
a private registry.
|
||||||
|
|
||||||
|
To install a plugin, use the `docker plugin install` command, which pulls the
|
||||||
|
plugin from Docker Hub or your private registry, prompts you to grant
|
||||||
|
permissions or capabilities if necessary, and enables the plugin.
|
||||||
|
|
||||||
|
To check the status of installed plugins, use the `docker plugin ls` command.
|
||||||
|
Plugins that start successfully are listed as enabled in the output.
|
||||||
|
|
||||||
|
After a plugin is installed, you can use it as an option for another Docker
|
||||||
|
operation, such as creating a volume.
|
||||||
|
|
||||||
|
In the following example, you install the [`rclone` plugin](https://rclone.org/docker/), verify that it is
|
||||||
|
enabled, and use it to create a volume.
|
||||||
|
|
||||||
|
> [!NOTE]
|
||||||
|
> This example is intended for instructional purposes only.
|
||||||
|
|
||||||
|
1. Set up the pre-requisite directories. By default they must exist on the host at the following locations:
|
||||||
|
|
||||||
|
- `/var/lib/docker-plugins/rclone/config`. Reserved for the `rclone.conf` config file and must exist even if it's empty and the config file is not present.
|
||||||
|
- `/var/lib/docker-plugins/rclone/cache`. Holds the plugin state file as well as optional VFS caches.
|
||||||
|
|
||||||
|
2. Install the `rclone` plugin.
|
||||||
|
|
||||||
|
```console
|
||||||
|
$ docker plugin install rclone/docker-volume-rclone --alias rclone
|
||||||
|
|
||||||
|
Plugin "rclone/docker-volume-rclone" is requesting the following privileges:
|
||||||
|
- network: [host]
|
||||||
|
- mount: [/var/lib/docker-plugins/rclone/config]
|
||||||
|
- mount: [/var/lib/docker-plugins/rclone/cache]
|
||||||
|
- device: [/dev/fuse]
|
||||||
|
- capabilities: [CAP_SYS_ADMIN]
|
||||||
|
Do you grant the above permissions? [y/N]
|
||||||
|
```
|
||||||
|
|
||||||
|
The plugin requests 5 privileges:
|
||||||
|
|
||||||
|
- It needs access to the `host` network.
|
||||||
|
- Access to pre-requisite directories to mount to store:
|
||||||
|
- Your Rclone config files
|
||||||
|
- Temporary cache data
|
||||||
|
- Gives access to the FUSE (Filesystem in Userspace) device. This is required because Rclone uses FUSE to mount remote storage as if it were a local filesystem.
|
||||||
|
- It needs the `CAP_SYS_ADMIN` capability, which allows the plugin to run
|
||||||
|
the `mount` command.
|
||||||
|
|
||||||
|
3. Check that the plugin is enabled in the output of `docker plugin ls`.
|
||||||
|
|
||||||
|
```console
|
||||||
|
$ docker plugin ls
|
||||||
|
|
||||||
|
ID NAME DESCRIPTION ENABLED
|
||||||
|
aede66158353 rclone:latest Rclone volume plugin for Docker true
|
||||||
|
```
|
||||||
|
|
||||||
|
4. Create a volume using the plugin.
|
||||||
|
This example mounts the `/remote` directory on host `1.2.3.4` into a
|
||||||
|
volume named `rclonevolume`.
|
||||||
|
|
||||||
|
This volume can now be mounted into containers.
|
||||||
|
|
||||||
|
```console
|
||||||
|
$ docker volume create \
|
||||||
|
-d rclone \
|
||||||
|
--name rclonevolume \
|
||||||
|
-o type=sftp \
|
||||||
|
-o path=remote \
|
||||||
|
-o sftp-host=1.2.3.4 \
|
||||||
|
-o sftp-user=user \
|
||||||
|
-o "sftp-password=$(cat file_containing_password_for_remote_host)"
|
||||||
|
```
|
||||||
|
|
||||||
|
5. Verify that the volume was created successfully.
|
||||||
|
|
||||||
|
```console
|
||||||
|
$ docker volume ls
|
||||||
|
|
||||||
|
DRIVER NAME
|
||||||
|
rclone rclonevolume
|
||||||
|
```
|
||||||
|
|
||||||
|
6. Start a container that uses the volume `rclonevolume`.
|
||||||
|
|
||||||
|
```console
|
||||||
|
$ docker run --rm -v rclonevolume:/data busybox ls /data
|
||||||
|
|
||||||
|
<content of /remote on machine 1.2.3.4>
|
||||||
|
```
|
||||||
|
|
||||||
|
7. Remove the volume `rclonevolume`.
|
||||||
|
|
||||||
|
```console
|
||||||
|
$ docker volume rm rclonevolume
|
||||||
|
|
||||||
|
rclonevolume
|
||||||
|
```
|
||||||
|
|
||||||
|
To disable a plugin, use the `docker plugin disable` command. To completely
|
||||||
|
remove it, use the `docker plugin remove` command. For other available
|
||||||
|
commands and options, see the
|
||||||
|
[command line reference](https://docs.docker.com/reference/cli/docker/).
|
||||||
|
|
||||||
|
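For instance, a minimal sketch using the `rclone` alias installed earlier (output omitted):

```console
$ docker plugin disable rclone
$ docker plugin rm rclone
```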
## Developing a plugin
|
||||||
|
|
||||||
|
#### The rootfs directory
|
||||||
|
|
||||||
|
The `rootfs` directory represents the root filesystem of the plugin. In this
|
||||||
|
example, it was created from a Dockerfile:
|
||||||
|
|
||||||
|
> [!NOTE]
|
||||||
|
> The `/run/docker/plugins` directory is mandatory inside of the
|
||||||
|
> plugin's filesystem for Docker to communicate with the plugin.
|
||||||
|
|
||||||
|
```console
|
||||||
|
$ git clone https://github.com/vieux/docker-volume-sshfs
|
||||||
|
$ cd docker-volume-sshfs
|
||||||
|
$ docker build -t rootfsimage .
|
||||||
|
$ id=$(docker create rootfsimage true) # id was cd851ce43a403 when the image was created
|
||||||
|
$ sudo mkdir -p myplugin/rootfs
|
||||||
|
$ sudo docker export "$id" | sudo tar -x -C myplugin/rootfs
|
||||||
|
$ docker rm -vf "$id"
|
||||||
|
$ docker rmi rootfsimage
|
||||||
|
```
|
||||||
|
|
||||||
|
#### The config.json file
|
||||||
|
|
||||||
|
The `config.json` file describes the plugin. See the [plugins config reference](config.md).
|
||||||
|
|
||||||
|
Consider the following `config.json` file.
|
||||||
|
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"description": "sshFS plugin for Docker",
|
||||||
|
"documentation": "https://docs.docker.com/engine/extend/plugins/",
|
||||||
|
"entrypoint": ["/docker-volume-sshfs"],
|
||||||
|
"network": {
|
||||||
|
"type": "host"
|
||||||
|
},
|
||||||
|
"interface": {
|
||||||
|
"types": ["docker.volumedriver/1.0"],
|
||||||
|
"socket": "sshfs.sock"
|
||||||
|
},
|
||||||
|
"linux": {
|
||||||
|
"capabilities": ["CAP_SYS_ADMIN"]
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
This plugin is a volume driver. It requires a `host` network and the
|
||||||
|
`CAP_SYS_ADMIN` capability. It depends upon the `/docker-volume-sshfs`
|
||||||
|
entrypoint and uses the `/run/docker/plugins/sshfs.sock` socket to communicate
|
||||||
|
with Docker Engine. This plugin has no runtime parameters.
|
||||||
|
|
||||||
|
#### Creating the plugin
|
||||||
|
|
||||||
|
A new plugin can be created by running
|
||||||
|
`docker plugin create <plugin-name> ./path/to/plugin/data` where the plugin
|
||||||
|
data contains a plugin configuration file `config.json` and a root filesystem
|
||||||
|
in subdirectory `rootfs`.
|
||||||
|
|
||||||
|
After that the plugin `<plugin-name>` will show up in `docker plugin ls`.
|
||||||
|
Plugins can be pushed to remote registries with
|
||||||
|
`docker plugin push <plugin-name>`.
|
||||||
|
|
||||||
|
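As a minimal sketch, reusing the `myplugin` directory assembled in the rootfs step (the plugin name is illustrative):

```console
$ docker plugin create vieux/sshfs ./myplugin   # ./myplugin contains config.json and rootfs/
$ docker plugin ls
$ docker plugin push vieux/sshfs
```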
## Debugging plugins
|
||||||
|
|
||||||
|
Stdout of a plugin is redirected to dockerd logs. Such entries have a
|
||||||
|
`plugin=<ID>` suffix. Here are a few examples of commands for pluginID
|
||||||
|
`f52a3df433b9aceee436eaada0752f5797aab1de47e5485f1690a073b860ff62` and their
|
||||||
|
corresponding log entries in the docker daemon logs.
|
||||||
|
|
||||||
|
```console
|
||||||
|
$ docker plugin install tiborvass/sample-volume-plugin
|
||||||
|
|
||||||
|
INFO[0036] Starting... Found 0 volumes on startup plugin=f52a3df433b9aceee436eaada0752f5797aab1de47e5485f1690a073b860ff62
|
||||||
|
```
|
||||||
|
|
||||||
|
```console
|
||||||
|
$ docker volume create -d tiborvass/sample-volume-plugin samplevol
|
||||||
|
|
||||||
|
INFO[0193] Create Called... Ensuring directory /data/samplevol exists on host... plugin=f52a3df433b9aceee436eaada0752f5797aab1de47e5485f1690a073b860ff62
|
||||||
|
INFO[0193] open /var/lib/docker/plugin-data/local-persist.json: no such file or directory plugin=f52a3df433b9aceee436eaada0752f5797aab1de47e5485f1690a073b860ff62
|
||||||
|
INFO[0193] Created volume samplevol with mountpoint /data/samplevol plugin=f52a3df433b9aceee436eaada0752f5797aab1de47e5485f1690a073b860ff62
|
||||||
|
INFO[0193] Path Called... Returned path /data/samplevol plugin=f52a3df433b9aceee436eaada0752f5797aab1de47e5485f1690a073b860ff62
|
||||||
|
```
|
||||||
|
|
||||||
|
```console
|
||||||
|
$ docker run -v samplevol:/tmp busybox sh
|
||||||
|
|
||||||
|
INFO[0421] Get Called... Found samplevol plugin=f52a3df433b9aceee436eaada0752f5797aab1de47e5485f1690a073b860ff62
|
||||||
|
INFO[0421] Mount Called... Mounted samplevol plugin=f52a3df433b9aceee436eaada0752f5797aab1de47e5485f1690a073b860ff62
|
||||||
|
INFO[0421] Path Called... Returned path /data/samplevol plugin=f52a3df433b9aceee436eaada0752f5797aab1de47e5485f1690a073b860ff62
|
||||||
|
INFO[0421] Unmount Called... Unmounted samplevol plugin=f52a3df433b9aceee436eaada0752f5797aab1de47e5485f1690a073b860ff62
|
||||||
|
```
|
||||||
|
|
||||||
|
#### Using runc to obtain logfiles and shell into the plugin.
|
||||||
|
|
||||||
|
Use `runc`, the default docker container runtime, for debugging plugins by
|
||||||
|
collecting plugin logs redirected to a file.
|
||||||
|
|
||||||
|
```console
|
||||||
|
$ sudo runc --root /run/docker/runtime-runc/plugins.moby list
|
||||||
|
|
||||||
|
ID PID STATUS BUNDLE CREATED OWNER
|
||||||
|
93f1e7dbfe11c938782c2993628c895cf28e2274072c4a346a6002446c949b25 15806 running /run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby-plugins/93f1e7dbfe11c938782c2993628c895cf28e2274072c4a346a6002446c949b25 2018-02-08T21:40:08.621358213Z root
|
||||||
|
9b4606d84e06b56df84fadf054a21374b247941c94ce405b0a261499d689d9c9 14992 running /run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby-plugins/9b4606d84e06b56df84fadf054a21374b247941c94ce405b0a261499d689d9c9 2018-02-08T21:35:12.321325872Z root
|
||||||
|
c5bb4b90941efcaccca999439ed06d6a6affdde7081bb34dc84126b57b3e793d 14984 running /run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby-plugins/c5bb4b90941efcaccca999439ed06d6a6affdde7081bb34dc84126b57b3e793d 2018-02-08T21:35:12.321288966Z root
|
||||||
|
```
|
||||||
|
|
||||||
|
```console
|
||||||
|
$ sudo runc --root /run/docker/runtime-runc/plugins.moby exec 93f1e7dbfe11c938782c2993628c895cf28e2274072c4a346a6002446c949b25 cat /var/log/plugin.log
|
||||||
|
```
|
||||||
|
|
||||||
|
If the plugin has a built-in shell, then exec into the plugin can be done as
|
||||||
|
follows:
|
||||||
|
|
||||||
|
```console
|
||||||
|
$ sudo runc --root /run/docker/runtime-runc/plugins.moby exec -t 93f1e7dbfe11c938782c2993628c895cf28e2274072c4a346a6002446c949b25 sh
|
||||||
|
```
|
||||||
|
|
||||||
|
#### Using curl to debug plugin socket issues
|
||||||
|
|
||||||
|
To verify that the plugin API socket that the Docker daemon communicates with
|
||||||
|
is responsive, use curl. In this example, we will make API calls from the
|
||||||
|
Docker host to volume and network plugins using curl 7.47.0 to ensure that
|
||||||
|
the plugin is listening on that socket. For a well-functioning plugin,
|
||||||
|
these basic requests should work. Note that plugin sockets are available on the host under `/var/run/docker/plugins/<pluginID>`.
|
||||||
|
|
||||||
|
```console
|
||||||
|
$ curl -H "Content-Type: application/json" -XPOST -d '{}' --unix-socket /var/run/docker/plugins/e8a37ba56fc879c991f7d7921901723c64df6b42b87e6a0b055771ecf8477a6d/plugin.sock http:/VolumeDriver.List
|
||||||
|
|
||||||
|
{"Mountpoint":"","Err":"","Volumes":[{"Name":"myvol1","Mountpoint":"/data/myvol1"},{"Name":"myvol2","Mountpoint":"/data/myvol2"}],"Volume":null}
|
||||||
|
```
|
||||||
|
|
||||||
|
```console
|
||||||
|
$ curl -H "Content-Type: application/json" -XPOST -d '{}' --unix-socket /var/run/docker/plugins/45e00a7ce6185d6e365904c8bcf62eb724b1fe307e0d4e7ecc9f6c1eb7bcdb70/plugin.sock http:/NetworkDriver.GetCapabilities
|
||||||
|
|
||||||
|
{"Scope":"local"}
|
||||||
|
```
|
||||||
|
|
||||||
|
When using curl 7.5 and above, the URL should be of the form
|
||||||
|
`http://hostname/APICall`, where `hostname` is the valid hostname where the
|
||||||
|
plugin is installed and `APICall` is the call to the plugin API.
|
||||||
|
|
||||||
|
For example, `http://localhost/VolumeDriver.List`
|
|
@ -0,0 +1,227 @@
|
||||||
|
---
|
||||||
|
description: "How to develop and use a plugin with the managed plugin system"
|
||||||
|
keywords: "API, Usage, plugins, documentation, developer"
|
||||||
|
title: Plugin Config Version 1 of Plugin V2
|
||||||
|
---
|
||||||
|
|
||||||
|
This document outlines the format of the V0 plugin configuration.
|
||||||
|
|
||||||
|
Plugin configs describe the various constituents of a Docker engine plugin.
|
||||||
|
Plugin configs can be serialized to JSON format with the following media types:
|
||||||
|
|
||||||
|
| Config Type | Media Type |
|
||||||
|
|-------------|-----------------------------------------|
|
||||||
|
| config | `application/vnd.docker.plugin.v1+json` |
|
||||||
|
|
||||||
|
## Config Field Descriptions
|
||||||
|
|
||||||
|
Config provides the base accessible fields for working with V0 plugin format in
|
||||||
|
the registry.
|
||||||
|
|
||||||
|
- `description` string
|
||||||
|
|
||||||
|
Description of the plugin
|
||||||
|
|
||||||
|
- `documentation` string
|
||||||
|
|
||||||
|
Link to the documentation about the plugin
|
||||||
|
|
||||||
|
- `interface` PluginInterface
|
||||||
|
|
||||||
|
Interface implemented by the plugins, struct consisting of the following fields:
|
||||||
|
|
||||||
|
- `types` string array
|
||||||
|
|
||||||
|
Types indicate what interface(s) the plugin currently implements.
|
||||||
|
|
||||||
|
Supported types:
|
||||||
|
|
||||||
|
- `docker.volumedriver/1.0`
|
||||||
|
|
||||||
|
- `docker.networkdriver/1.0`
|
||||||
|
|
||||||
|
- `docker.ipamdriver/1.0`
|
||||||
|
|
||||||
|
- `docker.authz/1.0`
|
||||||
|
|
||||||
|
- `docker.logdriver/1.0`
|
||||||
|
|
||||||
|
- `docker.metricscollector/1.0`
|
||||||
|
|
||||||
|
- `socket` string
|
||||||
|
|
||||||
|
Socket is the name of the socket the engine should use to communicate with the plugins.
|
||||||
|
The socket will be created in `/run/docker/plugins`.
|
||||||
|
|
||||||
|
- `entrypoint` string array
|
||||||
|
|
||||||
|
Entrypoint of the plugin, see [`ENTRYPOINT`](https://docs.docker.com/reference/dockerfile/#entrypoint)
|
||||||
|
|
||||||
|
- `workdir` string
|
||||||
|
|
||||||
|
Working directory of the plugin, see [`WORKDIR`](https://docs.docker.com/reference/dockerfile/#workdir)
|
||||||
|
|
||||||
|
- `network` PluginNetwork
|
||||||
|
|
||||||
|
Network of the plugin, struct consisting of the following fields:
|
||||||
|
|
||||||
|
- `type` string
|
||||||
|
|
||||||
|
Network type.
|
||||||
|
|
||||||
|
Supported types:
|
||||||
|
|
||||||
|
- `bridge`
|
||||||
|
- `host`
|
||||||
|
- `none`
|
||||||
|
|
||||||
|
- `mounts` PluginMount array
|
||||||
|
|
||||||
|
Mount of the plugin, struct consisting of the following fields.
|
||||||
|
See [`MOUNTS`](https://github.com/opencontainers/runtime-spec/blob/master/config.md#mounts).
|
||||||
|
|
||||||
|
- `name` string
|
||||||
|
|
||||||
|
Name of the mount.
|
||||||
|
|
||||||
|
- `description` string
|
||||||
|
|
||||||
|
Description of the mount.
|
||||||
|
|
||||||
|
- `source` string
|
||||||
|
|
||||||
|
Source of the mount.
|
||||||
|
|
||||||
|
- `destination` string
|
||||||
|
|
||||||
|
Destination of the mount.
|
||||||
|
|
||||||
|
- `type` string
|
||||||
|
|
||||||
|
Mount type.
|
||||||
|
|
||||||
|
- `options` string array
|
||||||
|
|
||||||
|
Options of the mount.
|
||||||
|
|
||||||
|
- `ipchost` Boolean
|
||||||
|
|
||||||
|
Access to host ipc namespace.
|
||||||
|
|
||||||
|
- `pidhost` Boolean
|
||||||
|
|
||||||
|
Access to host PID namespace.
|
||||||
|
|
||||||
|
- `propagatedMount` string
|
||||||
|
|
||||||
|
Path to be mounted as rshared, so that mounts under that path are visible to
|
||||||
|
Docker. This is useful for volume plugins. This path will be bind-mounted
|
||||||
|
outside of the plugin rootfs so its contents are preserved on upgrade.
|
||||||
|
|
||||||
|
- `env` PluginEnv array
|
||||||
|
|
||||||
|
Environment variables of the plugin, struct consisting of the following fields:
|
||||||
|
|
||||||
|
- `name` string
|
||||||
|
|
||||||
|
Name of the environment variable.
|
||||||
|
|
||||||
|
- `description` string
|
||||||
|
|
||||||
|
Description of the environment variable.
|
||||||
|
|
||||||
|
- `value` string
|
||||||
|
|
||||||
|
Value of the environment variable.
|
||||||
|
|
||||||
|
- `args` PluginArgs
|
||||||
|
|
||||||
|
Arguments of the plugin, struct consisting of the following fields:
|
||||||
|
|
||||||
|
- `name` string
|
||||||
|
|
||||||
|
Name of the arguments.
|
||||||
|
|
||||||
|
- `description` string
|
||||||
|
|
||||||
|
Description of the arguments.
|
||||||
|
|
||||||
|
- `value` string array
|
||||||
|
|
||||||
|
Values of the arguments.
|
||||||
|
|
||||||
|
- `linux` PluginLinux
|
||||||
|
|
||||||
|
- `capabilities` string array
|
||||||
|
|
||||||
|
Capabilities of the plugin (Linux only), see list [`here`](https://github.com/opencontainers/runc/blob/master/libcontainer/SPEC.md#security)
|
||||||
|
|
||||||
|
- `allowAllDevices` Boolean
|
||||||
|
|
||||||
|
If `/dev` is bind mounted from the host, and allowAllDevices is set to true, the plugin will have `rwm` access to all devices on the host.
|
||||||
|
|
||||||
|
- `devices` PluginDevice array
|
||||||
|
|
||||||
|
Device of the plugin, (Linux only), struct consisting of the following fields.
|
||||||
|
See [`DEVICES`](https://github.com/opencontainers/runtime-spec/blob/master/config-linux.md#devices).
|
||||||
|
|
||||||
|
- `name` string
|
||||||
|
|
||||||
|
Name of the device.
|
||||||
|
|
||||||
|
- `description` string
|
||||||
|
|
||||||
|
Description of the device.
|
||||||
|
|
||||||
|
- `path` string
|
||||||
|
|
||||||
|
Path of the device.
|
||||||
|
|
||||||
|
## Example Config
|
||||||
|
|
||||||
|
The following example shows the `tiborvass/sample-volume-plugin` plugin config.
|
||||||
|
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"Args": {
|
||||||
|
"Description": "",
|
||||||
|
"Name": "",
|
||||||
|
"Settable": null,
|
||||||
|
"Value": null
|
||||||
|
},
|
||||||
|
"Description": "A sample volume plugin for Docker",
|
||||||
|
"Documentation": "https://docs.docker.com/engine/extend/plugins/",
|
||||||
|
"Entrypoint": [
|
||||||
|
"/usr/bin/sample-volume-plugin",
|
||||||
|
"/data"
|
||||||
|
],
|
||||||
|
"Env": [
|
||||||
|
{
|
||||||
|
"Description": "",
|
||||||
|
"Name": "DEBUG",
|
||||||
|
"Settable": [
|
||||||
|
"value"
|
||||||
|
],
|
||||||
|
"Value": "0"
|
||||||
|
}
|
||||||
|
],
|
||||||
|
"Interface": {
|
||||||
|
"Socket": "plugin.sock",
|
||||||
|
"Types": [
|
||||||
|
"docker.volumedriver/1.0"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
"Linux": {
|
||||||
|
"Capabilities": null,
|
||||||
|
"AllowAllDevices": false,
|
||||||
|
"Devices": null
|
||||||
|
},
|
||||||
|
"Mounts": null,
|
||||||
|
"Network": {
|
||||||
|
"Type": ""
|
||||||
|
},
|
||||||
|
"PropagatedMount": "/data",
|
||||||
|
"User": {},
|
||||||
|
"Workdir": ""
|
||||||
|
}
|
||||||
|
```
|
|
@ -0,0 +1,91 @@
|
||||||
|
---
|
||||||
|
title: Use Docker Engine plugins
|
||||||
|
aliases:
|
||||||
|
- "/engine/extend/plugins/"
|
||||||
|
description: "How to add additional functionality to Docker with plugins extensions"
|
||||||
|
keywords: "Examples, Usage, plugins, docker, documentation, user guide"
|
||||||
|
---
|
||||||
|
|
||||||
|
This document describes the Docker Engine plugins generally available in Docker
|
||||||
|
Engine. To view information on plugins managed by Docker,
|
||||||
|
refer to [Docker Engine plugin system](_index.md).
|
||||||
|
|
||||||
|
You can extend the capabilities of the Docker Engine by loading third-party
|
||||||
|
plugins. This page explains the types of plugins and provides links to several
|
||||||
|
volume and network plugins for Docker.
|
||||||
|
|
||||||
|
## Types of plugins
|
||||||
|
|
||||||
|
Plugins extend Docker's functionality. They come in specific types. For
|
||||||
|
example, a [volume plugin](plugins_volume.md) might enable Docker
|
||||||
|
volumes to persist across multiple Docker hosts and a
|
||||||
|
[network plugin](plugins_network.md) might provide network plumbing.
|
||||||
|
|
||||||
|
Currently, Docker supports authorization, volume, and network driver plugins. In the future, it
|
||||||
|
will support additional plugin types.
|
||||||
|
|
||||||
|
## Installing a plugin
|
||||||
|
|
||||||
|
Follow the instructions in the plugin's documentation.
|
||||||
|
|
||||||
|
## Finding a plugin
|
||||||
|
|
||||||
|
The sections below provide an overview of available third-party plugins.
|
||||||
|
|
||||||
|
### Network plugins
|
||||||
|
|
||||||
|
| Plugin | Description |
|
||||||
|
| :--------------------------------------------------------------------------------- | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
|
||||||
|
| [Contiv Networking](https://github.com/contiv/netplugin) | An open source network plugin to provide infrastructure and security policies for a multi-tenant micro services deployment, while providing an integration to physical network for non-container workload. Contiv Networking implements the remote driver and IPAM APIs available in Docker 1.9 onwards. |
|
||||||
|
| [Kuryr Network Plugin](https://github.com/openstack/kuryr) | A network plugin is developed as part of the OpenStack Kuryr project and implements the Docker networking (libnetwork) remote driver API by utilizing Neutron, the OpenStack networking service. It includes an IPAM driver as well. |
|
||||||
|
| [Kathará Network Plugin](https://github.com/KatharaFramework/NetworkPlugin) | Docker Network Plugin used by Kathará, an open source container-based network emulation system for showing interactive demos/lessons, testing production networks in a sandbox environment, or developing new network protocols. |
|
||||||
|
|
||||||
|
### Volume plugins
|
||||||
|
|
||||||
|
| Plugin | Description |
|
||||||
|
|:---------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
|
||||||
|
| [Azure File Storage plugin](https://github.com/Azure/azurefile-dockervolumedriver) | Lets you mount Microsoft [Azure File Storage](https://azure.microsoft.com/blog/azure-file-storage-now-generally-available/) shares to Docker containers as volumes using the SMB 3.0 protocol. [Learn more](https://azure.microsoft.com/blog/persistent-docker-volumes-with-azure-file-storage/). |
|
||||||
|
| [BeeGFS Volume Plugin](https://github.com/RedCoolBeans/docker-volume-beegfs) | An open source volume plugin to create persistent volumes in a BeeGFS parallel file system. |
|
||||||
|
| [Blockbridge plugin](https://github.com/blockbridge/blockbridge-docker-volume) | A volume plugin that provides access to an extensible set of container-based persistent storage options. It supports single and multi-host Docker environments with features that include tenant isolation, automated provisioning, encryption, secure deletion, snapshots and QoS. |
|
||||||
|
| [Contiv Volume Plugin](https://github.com/contiv/volplugin) | An open source volume plugin that provides multi-tenant, persistent, distributed storage with intent based consumption. It has support for Ceph and NFS. |
|
||||||
|
| [Convoy plugin](https://github.com/rancher/convoy) | A volume plugin for a variety of storage back-ends including device mapper and NFS. It's a simple standalone executable written in Go and provides the framework to support vendor-specific extensions such as snapshots, backups and restore. |
|
||||||
|
| [DigitalOcean Block Storage plugin](https://github.com/omallo/docker-volume-plugin-dostorage) | Integrates DigitalOcean's [block storage solution](https://www.digitalocean.com/products/storage/) into the Docker ecosystem by automatically attaching a given block storage volume to a DigitalOcean droplet and making the contents of the volume available to Docker containers running on that droplet. |
|
||||||
|
| [DRBD plugin](https://www.drbd.org/en/supported-projects/docker) | A volume plugin that provides highly available storage replicated by [DRBD](https://www.drbd.org). Data written to the docker volume is replicated in a cluster of DRBD nodes. |
|
||||||
|
| [Flocker plugin](https://github.com/ScatterHQ/flocker) | A volume plugin that provides multi-host portable volumes for Docker, enabling you to run databases and other stateful containers and move them around across a cluster of machines. |
|
||||||
|
| [Fuxi Volume Plugin](https://github.com/openstack/fuxi) | A volume plugin that is developed as part of the OpenStack Kuryr project and implements the Docker volume plugin API by utilizing Cinder, the OpenStack block storage service. |
|
||||||
|
| [gce-docker plugin](https://github.com/mcuadros/gce-docker) | A volume plugin able to attach, format and mount Google Compute [persistent-disks](https://cloud.google.com/compute/docs/disks/persistent-disks). |
|
||||||
|
| [GlusterFS plugin](https://github.com/calavera/docker-volume-glusterfs) | A volume plugin that provides multi-host volumes management for Docker using GlusterFS. |
|
||||||
|
| [Horcrux Volume Plugin](https://github.com/muthu-r/horcrux) | A volume plugin that allows on-demand, version controlled access to your data. Horcrux is an open-source plugin, written in Go, and supports SCP, [Minio](https://www.minio.io) and Amazon S3. |
|
||||||
|
| [HPE 3Par Volume Plugin](https://github.com/hpe-storage/python-hpedockerplugin/) | A volume plugin that supports HPE 3Par and StoreVirtual iSCSI storage arrays. |
|
||||||
|
| [Infinit volume plugin](https://infinit.sh/documentation/docker/volume-plugin) | A volume plugin that makes it easy to mount and manage Infinit volumes using Docker. |
|
||||||
|
| [IPFS Volume Plugin](https://github.com/vdemeester/docker-volume-ipfs) | An open source volume plugin that allows using an [ipfs](https://ipfs.io/) filesystem as a volume. |
|
||||||
|
| [Keywhiz plugin](https://github.com/calavera/docker-volume-keywhiz) | A plugin that provides credentials and secret management using Keywhiz as a central repository. |
|
||||||
|
| [Linode Volume Plugin](https://github.com/linode/docker-volume-linode) | A plugin that adds the ability to manage Linode Block Storage as Docker Volumes from within a Linode. |
|
||||||
|
| [Local Persist Plugin](https://github.com/CWSpear/local-persist) | A volume plugin that extends the default `local` driver's functionality by allowing you specify a mountpoint anywhere on the host, which enables the files to *always persist*, even if the volume is removed via `docker volume rm`. |
|
||||||
|
| [NetApp Plugin](https://github.com/NetApp/netappdvp) (nDVP) | A volume plugin that provides direct integration with the Docker ecosystem for the NetApp storage portfolio. The nDVP package supports the provisioning and management of storage resources from the storage platform to Docker hosts, with a robust framework for adding additional platforms in the future. |
|
||||||
|
| [Netshare plugin](https://github.com/ContainX/docker-volume-netshare) | A volume plugin that provides volume management for NFS 3/4, AWS EFS and CIFS file systems. |
|
||||||
|
| [Nimble Storage Volume Plugin](https://scod.hpedev.io/docker_volume_plugins/hpe_nimble_storage/index.html) | A volume plug-in that integrates with Nimble Storage Unified Flash Fabric arrays. The plug-in abstracts array volume capabilities to the Docker administrator to allow self-provisioning of secure multi-tenant volumes and clones. |
|
||||||
|
| [OpenStorage Plugin](https://github.com/libopenstorage/openstorage) | A cluster-aware volume plugin that provides volume management for file and block storage solutions. It implements a vendor neutral specification for implementing extensions such as CoS, encryption, and snapshots. It has example drivers based on FUSE, NFS, NBD and EBS to name a few. |
|
||||||
|
| [Portworx Volume Plugin](https://github.com/portworx/px-dev) | A volume plugin that turns any server into a scale-out converged compute/storage node, providing container granular storage and highly available volumes across any node, using a shared-nothing storage backend that works with any docker scheduler. |
|
||||||
|
| [Quobyte Volume Plugin](https://github.com/quobyte/docker-volume) | A volume plugin that connects Docker to [Quobyte](https://www.quobyte.com/containers)'s data center file system, a general-purpose scalable and fault-tolerant storage platform. |
|
||||||
|
| [REX-Ray plugin](https://github.com/emccode/rexray) | A volume plugin which is written in Go and provides advanced storage functionality for many platforms including VirtualBox, EC2, Google Compute Engine, OpenStack, and EMC. |
|
||||||
|
| [Virtuozzo Storage and Ploop plugin](https://github.com/virtuozzo/docker-volume-ploop) | A volume plugin with support for Virtuozzo Storage distributed cloud file system as well as ploop devices. |
|
||||||
|
| [VMware vSphere Storage Plugin](https://github.com/vmware/docker-volume-vsphere) | Docker Volume Driver for vSphere enables customers to address persistent storage requirements for Docker containers in vSphere environments. |
|
||||||
|
|
||||||
|
### Authorization plugins
|
||||||
|
|
||||||
|
| Plugin | Description |
|
||||||
|
|:---------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
|
||||||
|
| [Casbin AuthZ Plugin](https://github.com/casbin/casbin-authz-plugin) | An authorization plugin based on [Casbin](https://github.com/casbin/casbin), which supports access control models like ACL, RBAC, ABAC. The access control model can be customized. The policy can be persisted into file or DB. |
|
||||||
|
| [HBM plugin](https://github.com/kassisol/hbm) | An authorization plugin that prevents from executing commands with certains parameters. |
|
||||||
|
| [Twistlock AuthZ Broker](https://github.com/twistlock/authz) | A basic extendable authorization plugin that runs directly on the host or inside a container. This plugin allows you to define user policies that it evaluates during authorization. Basic authorization is provided if Docker daemon is started with the --tlsverify flag (username is extracted from the certificate common name). |
|
||||||
|
|
||||||
|
## Troubleshooting a plugin
|
||||||
|
|
||||||
|
If you are having problems with Docker after loading a plugin, ask the authors
|
||||||
|
of the plugin for help. The Docker team may not be able to assist you.
|
||||||
|
|
||||||
|
## Writing a plugin
|
||||||
|
|
||||||
|
If you are interested in writing a plugin for Docker, or seeing how they work
|
||||||
|
under the hood, see the [Docker plugins reference](plugin_api.md).
|
|
@ -0,0 +1,186 @@
|
||||||
|
---
|
||||||
|
title: Docker Plugin API
|
||||||
|
description: "How to write Docker plugins extensions "
|
||||||
|
keywords: "API, Usage, plugins, documentation, developer"
|
||||||
|
---
|
||||||
|
|
||||||
|
Docker plugins are out-of-process extensions which add capabilities to the
|
||||||
|
Docker Engine.
|
||||||
|
|
||||||
|
This document describes the Docker Engine plugin API. To view information on
|
||||||
|
plugins managed by Docker Engine, refer to [Docker Engine plugin system](_index.md).
|
||||||
|
|
||||||
|
This page is intended for people who want to develop their own Docker plugin.
|
||||||
|
If you just want to learn about or use Docker plugins, look
|
||||||
|
[here](legacy_plugins.md).
|
||||||
|
|
||||||
|
## What plugins are
|
||||||
|
|
||||||
|
A plugin is a process running on the same or a different host as the Docker daemon,
|
||||||
|
which registers itself by placing a file on the daemon host in one of the plugin
|
||||||
|
directories described in [Plugin discovery](#plugin-discovery).
|
||||||
|
|
||||||
|
Plugins have human-readable names, which are short, lowercase strings. For
|
||||||
|
example, `flocker` or `weave`.
|
||||||
|
|
||||||
|
Plugins can run inside or outside containers. Currently running them outside
|
||||||
|
containers is recommended.
|
||||||
|
|
||||||
|
## Plugin discovery
|
||||||
|
|
||||||
|
Docker discovers plugins by looking for them in the plugin directory whenever a
|
||||||
|
user or container tries to use one by name.
|
||||||
|
|
||||||
|
There are three types of files which can be put in the plugin directory.
|
||||||
|
|
||||||
|
* `.sock` files are Unix domain sockets.
|
||||||
|
* `.spec` files are text files containing a URL, such as `unix:///other.sock` or `tcp://localhost:8080`.
|
||||||
|
* `.json` files are text files containing a full json specification for the plugin.
|
||||||
|
|
||||||
|
Plugins with Unix domain socket files must run on the same host as the Docker daemon.
|
||||||
|
Plugins with `.spec` or `.json` files can run on a different host if you specify a remote URL.
|
||||||
|
|
||||||
|
Unix domain socket files must be located under `/run/docker/plugins`, whereas
|
||||||
|
spec files can be located either under `/etc/docker/plugins` or `/usr/lib/docker/plugins`.
|
||||||
|
|
||||||
|
The name of the file (excluding the extension) determines the plugin name.
|
||||||
|
|
||||||
|
For example, the `flocker` plugin might create a Unix socket at
|
||||||
|
`/run/docker/plugins/flocker.sock`.
|
||||||
|
|
||||||
|
You can define each plugin in a separate subdirectory if you want to isolate definitions from each other.
|
||||||
|
For example, you can create the `flocker` socket under `/run/docker/plugins/flocker/flocker.sock` and only
|
||||||
|
mount `/run/docker/plugins/flocker` inside the `flocker` container.
|
||||||
|
|
||||||
|
Docker always searches for Unix sockets in `/run/docker/plugins` first. It checks for spec or json files under
|
||||||
|
`/etc/docker/plugins` and `/usr/lib/docker/plugins` if the socket doesn't exist. The directory scan stops as
|
||||||
|
soon as it finds the first plugin definition with the given name.
|
||||||
|
|
||||||
|
### JSON specification
|
||||||
|
|
||||||
|
This is the JSON format for a plugin:
|
||||||
|
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"Name": "plugin-example",
|
||||||
|
"Addr": "https://example.com/docker/plugin",
|
||||||
|
"TLSConfig": {
|
||||||
|
"InsecureSkipVerify": false,
|
||||||
|
"CAFile": "/usr/shared/docker/certs/example-ca.pem",
|
||||||
|
"CertFile": "/usr/shared/docker/certs/example-cert.pem",
|
||||||
|
"KeyFile": "/usr/shared/docker/certs/example-key.pem"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
The `TLSConfig` field is optional and TLS will only be verified if this configuration is present.
|
||||||
|
|
||||||
|
## Plugin lifecycle
|
||||||
|
|
||||||
|
Plugins should be started before Docker, and stopped after Docker. For
|
||||||
|
example, when packaging a plugin for a platform which supports `systemd`, you
|
||||||
|
might use [`systemd` dependencies](
|
||||||
|
https://www.freedesktop.org/software/systemd/man/systemd.unit.html#Before=) to
|
||||||
|
manage startup and shutdown order.
|
||||||
|
|
||||||
|
When upgrading a plugin, you should first stop the Docker daemon, upgrade the
|
||||||
|
plugin, then start Docker again.
|
||||||
|
|
||||||
|
## Plugin activation
|
||||||
|
|
||||||
|
When a plugin is first referred to -- either by a user referring to it by name
|
||||||
|
(e.g. `docker run --volume-driver=foo`) or a container already configured to
|
||||||
|
use a plugin being started -- Docker looks for the named plugin in the plugin
|
||||||
|
directory and activates it with a handshake. See Handshake API below.
|
||||||
|
|
||||||
|
Plugins are not activated automatically at Docker daemon startup. Rather,
|
||||||
|
they are activated only lazily, or on-demand, when they are needed.
|
||||||
|
|
||||||
|
## Systemd socket activation
|
||||||
|
|
||||||
|
Plugins may also be socket-activated by `systemd`. The official [Plugins helpers](https://github.com/docker/go-plugins-helpers)
|
||||||
|
natively support socket activation. In order for a plugin to be socket-activated, it needs
|
||||||
|
a `service` file and a `socket` file.
|
||||||
|
|
||||||
|
The `service` file (for example `/lib/systemd/system/your-plugin.service`):
|
||||||
|
|
||||||
|
```systemd
|
||||||
|
[Unit]
|
||||||
|
Description=Your plugin
|
||||||
|
Before=docker.service
|
||||||
|
After=network.target your-plugin.socket
|
||||||
|
Requires=your-plugin.socket docker.service
|
||||||
|
|
||||||
|
[Service]
|
||||||
|
ExecStart=/usr/lib/docker/your-plugin
|
||||||
|
|
||||||
|
[Install]
|
||||||
|
WantedBy=multi-user.target
|
||||||
|
```
|
||||||
|
|
||||||
|
The `socket` file (for example `/lib/systemd/system/your-plugin.socket`):
|
||||||
|
|
||||||
|
```systemd
|
||||||
|
[Unit]
|
||||||
|
Description=Your plugin
|
||||||
|
|
||||||
|
[Socket]
|
||||||
|
ListenStream=/run/docker/plugins/your-plugin.sock
|
||||||
|
|
||||||
|
[Install]
|
||||||
|
WantedBy=sockets.target
|
||||||
|
```
|
||||||
|
|
||||||
|
This will allow plugins to be actually started when the Docker daemon connects to
|
||||||
|
the sockets they're listening on (for instance the first time the daemon uses them
|
||||||
|
or if one of the plugins goes down accidentally).
|
||||||
|
|
||||||
|
## API design
|
||||||
|
|
||||||
|
The Plugin API is RPC-style JSON over HTTP, much like webhooks.
|
||||||
|
|
||||||
|
Requests flow from the Docker daemon to the plugin. The plugin needs to
|
||||||
|
implement an HTTP server and bind this to the Unix socket mentioned in the
|
||||||
|
"plugin discovery" section.
|
||||||
|
|
||||||
|
All requests are HTTP `POST` requests.
|
||||||
|
|
||||||
|
The API is versioned via an Accept header, which currently is always set to
|
||||||
|
`application/vnd.docker.plugins.v1+json`.
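
For illustration only, here is a minimal sketch in Go of a plugin server wired up this way. It answers the activation handshake described in the next section; the plugin name `myplugin` and the advertised `VolumeDriver` subsystem are assumptions for the example, not part of the specification:

```go
package main

import (
	"log"
	"net"
	"net/http"
)

func main() {
	mux := http.NewServeMux()

	// Every plugin call is an HTTP POST with a JSON body; the handshake
	// simply reports which subsystems this plugin implements.
	mux.HandleFunc("/Plugin.Activate", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/vnd.docker.plugins.v1+json")
		w.Write([]byte(`{"Implements": ["VolumeDriver"]}`))
	})

	// The socket file name (minus the extension) becomes the plugin name.
	l, err := net.Listen("unix", "/run/docker/plugins/myplugin.sock")
	if err != nil {
		log.Fatal(err)
	}
	log.Fatal(http.Serve(l, mux))
}
```

A real volume plugin would register additional handlers (for example `/VolumeDriver.Create`) on the same mux; the subsystem pages linked under the Handshake API below document the full set of calls.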
|
||||||
|
|
||||||
|
## Handshake API
|
||||||
|
|
||||||
|
Plugins are activated via the following "handshake" API call.
|
||||||
|
|
||||||
|
### /Plugin.Activate
|
||||||
|
|
||||||
|
Request: empty body
|
||||||
|
|
||||||
|
Response:
|
||||||
|
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"Implements": ["VolumeDriver"]
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
Responds with a list of Docker subsystems which this plugin implements.
|
||||||
|
After activation, the plugin will then be sent events from this subsystem.
|
||||||
|
|
||||||
|
Possible values are:
|
||||||
|
|
||||||
|
* [`authz`](plugins_authorization.md)
|
||||||
|
* [`NetworkDriver`](plugins_network.md)
|
||||||
|
* [`VolumeDriver`](plugins_volume.md)
|
||||||
|
|
||||||
|
## Plugin retries
|
||||||
|
|
||||||
|
Attempts to call a method on a plugin are retried with an exponential backoff
|
||||||
|
for up to 30 seconds. This may help when packaging plugins as containers, since
|
||||||
|
it gives plugin containers a chance to start up before failing any user
|
||||||
|
containers which depend on them.
|
||||||
|
|
||||||
|
## Plugins helpers
|
||||||
|
|
||||||
|
To ease plugin development, we provide an SDK for each kind of plugin
|
||||||
|
currently supported by Docker at [docker/go-plugins-helpers](https://github.com/docker/go-plugins-helpers).
|
|
@ -0,0 +1,248 @@
|
||||||
|
---
|
||||||
|
title: Access authorization plugin
|
||||||
|
description: "How to create authorization plugins to manage access control to your Docker daemon."
|
||||||
|
keywords: "security, authorization, authentication, docker, documentation, plugin, extend"
|
||||||
|
aliases:
|
||||||
|
- "/engine/extend/authorization/"
|
||||||
|
---
|
||||||
|
|
||||||
|
This document describes the Docker Engine plugins available in Docker
|
||||||
|
Engine. To view information on plugins managed by Docker Engine,
|
||||||
|
refer to [Docker Engine plugin system](_index.md).
|
||||||
|
|
||||||
|
Docker's out-of-the-box authorization model is all or nothing. Any user with
|
||||||
|
permission to access the Docker daemon can run any Docker client command. The
|
||||||
|
same is true for callers using Docker's Engine API to contact the daemon. If you
|
||||||
|
require greater access control, you can create authorization plugins and add
|
||||||
|
them to your Docker daemon configuration. Using an authorization plugin, a
|
||||||
|
Docker administrator can configure granular access policies for managing access
|
||||||
|
to the Docker daemon.
|
||||||
|
|
||||||
|
Anyone with the appropriate skills can develop an authorization plugin. These
|
||||||
|
skills, at their most basic, are knowledge of Docker, understanding of REST, and
|
||||||
|
sound programming knowledge. This document describes the architecture, state,
|
||||||
|
and methods information available to an authorization plugin developer.
|
||||||
|
|
||||||
|
## Basic principles
|
||||||
|
|
||||||
|
Docker's [plugin infrastructure](plugin_api.md) enables
|
||||||
|
extending Docker by loading, removing and communicating with
|
||||||
|
third-party components using a generic API. The access authorization subsystem
|
||||||
|
was built using this mechanism.
|
||||||
|
|
||||||
|
Using this subsystem, you don't need to rebuild the Docker daemon to add an
|
||||||
|
authorization plugin. You can add a plugin to an installed Docker daemon. You do
|
||||||
|
need to restart the Docker daemon to add a new plugin.
|
||||||
|
|
||||||
|
An authorization plugin approves or denies requests to the Docker daemon based
|
||||||
|
on both the current authentication context and the command context. The
|
||||||
|
authentication context contains all user details and the authentication method.
|
||||||
|
The command context contains all the relevant request data.
|
||||||
|
|
||||||
|
Authorization plugins must follow the rules described in [Docker Plugin API](plugin_api.md).
|
||||||
|
Each plugin must reside within directories described under the
|
||||||
|
[Plugin discovery](plugin_api.md#plugin-discovery) section.
|
||||||
|
|
||||||
|
> [!NOTE]
|
||||||
|
> The abbreviations `AuthZ` and `AuthN` mean authorization and authentication
|
||||||
|
> respectively.
|
||||||
|
|
||||||
|
## Default user authorization mechanism
|
||||||
|
|
||||||
|
If TLS is enabled in the [Docker daemon](https://docs.docker.com/engine/security/https/), the default user authorization flow extracts the user details from the certificate subject name.
|
||||||
|
That is, the `User` field is set to the client certificate subject common name, and the `AuthenticationMethod` field is set to `TLS`.
|
||||||
|
|
||||||
|
## Basic architecture
|
||||||
|
|
||||||
|
You are responsible for registering your plugin as part of the Docker daemon
|
||||||
|
startup. You can install multiple plugins and chain them together. This chain
|
||||||
|
can be ordered. Each request to the daemon passes in order through the chain.
|
||||||
|
Access is granted only when all the plugins grant access to the resource.
|
||||||
|
|
||||||
|
When an HTTP request is made to the Docker daemon through the CLI or via the
|
||||||
|
Engine API, the authentication subsystem passes the request to the installed
|
||||||
|
authentication plugin(s). The request contains the user (caller) and command
|
||||||
|
context. The plugin is responsible for deciding whether to allow or deny the
|
||||||
|
request.
|
||||||
|
|
||||||
|
The sequence diagrams below depict an allow and deny authorization flow:
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
Each request sent to the plugin includes the authenticated user, the HTTP
|
||||||
|
headers, and the request/response body. Only the user name and the
|
||||||
|
authentication method used are passed to the plugin. Most importantly, no user
|
||||||
|
credentials or tokens are passed. Finally, not all request/response bodies
|
||||||
|
are sent to the authorization plugin. Only those request/response bodies where
|
||||||
|
the `Content-Type` is either `text/*` or `application/json` are sent.
|
||||||
|
|
||||||
|
For commands that can potentially hijack the HTTP connection (`HTTP
|
||||||
|
Upgrade`), such as `exec`, the authorization plugin is only called for the
|
||||||
|
initial HTTP requests. Once the plugin approves the command, authorization is
|
||||||
|
not applied to the rest of the flow. Specifically, the streaming data is not
|
||||||
|
passed to the authorization plugins. For commands that return chunked HTTP
|
||||||
|
response, such as `logs` and `events`, only the HTTP request is sent to the
|
||||||
|
authorization plugins.
|
||||||
|
|
||||||
|
During request/response processing, some authorization flows might
|
||||||
|
need to do additional queries to the Docker daemon. To complete such flows,
|
||||||
|
plugins can call the daemon API similar to a regular user. To enable these
|
||||||
|
additional queries, the plugin must provide the means for an administrator to
|
||||||
|
configure proper authentication and security policies.
|
||||||
|
|
||||||
|
## Docker client flows
|
||||||
|
|
||||||
|
To enable and configure the authorization plugin, the plugin developer must
|
||||||
|
support the Docker client interactions detailed in this section.
|
||||||
|
|
||||||
|
### Setting up Docker daemon
|
||||||
|
|
||||||
|
Enable the authorization plugin with a dedicated command line flag in the
|
||||||
|
`--authorization-plugin=PLUGIN_ID` format. The flag supplies a `PLUGIN_ID`
|
||||||
|
value. This value can be the plugin’s socket or a path to a specification file.
|
||||||
|
Authorization plugins can be loaded without restarting the daemon. Refer
|
||||||
|
to the [`dockerd` documentation](https://docs.docker.com/reference/cli/dockerd/#configuration-reload-behavior) for more information.
|
||||||
|
|
||||||
|
```console
|
||||||
|
$ dockerd --authorization-plugin=plugin1 --authorization-plugin=plugin2,...
|
||||||
|
```
|
||||||
|
|
||||||
|
Docker's authorization subsystem supports multiple `--authorization-plugin` parameters.
|
||||||
|
|
||||||
|
### Calling authorized command (allow)
|
||||||
|
|
||||||
|
```console
|
||||||
|
$ docker pull centos
|
||||||
|
<...>
|
||||||
|
f1b10cd84249: Pull complete
|
||||||
|
<...>
|
||||||
|
```
|
||||||
|
|
||||||
|
### Calling unauthorized command (deny)
|
||||||
|
|
||||||
|
```console
|
||||||
|
$ docker pull centos
|
||||||
|
<...>
|
||||||
|
docker: Error response from daemon: authorization denied by plugin PLUGIN_NAME: volumes are not allowed.
|
||||||
|
```
|
||||||
|
|
||||||
|
### Error from plugins
|
||||||
|
|
||||||
|
```console
|
||||||
|
$ docker pull centos
|
||||||
|
<...>
|
||||||
|
docker: Error response from daemon: plugin PLUGIN_NAME failed with error: AuthZPlugin.AuthZReq: Cannot connect to the Docker daemon. Is the docker daemon running on this host?.
|
||||||
|
```
|
||||||
|
|
||||||
|
## API schema and implementation
|
||||||
|
|
||||||
|
In addition to Docker's standard plugin registration method, each plugin
|
||||||
|
should implement the following two methods:
|
||||||
|
|
||||||
|
* `/AuthZPlugin.AuthZReq` This authorize request method is called before the Docker daemon processes the client request.
|
||||||
|
|
||||||
|
* `/AuthZPlugin.AuthZRes` This authorize response method is called before the response is returned from Docker daemon to the client.
|
||||||
|
|
||||||
|
#### /AuthZPlugin.AuthZReq
|
||||||
|
|
||||||
|
Request
|
||||||
|
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"User": "The user identification",
|
||||||
|
"UserAuthNMethod": "The authentication method used",
|
||||||
|
"RequestMethod": "The HTTP method",
|
||||||
|
"RequestURI": "The HTTP request URI",
|
||||||
|
"RequestBody": "Byte array containing the raw HTTP request body",
|
||||||
|
"RequestHeader": "Byte array containing the raw HTTP request header as a map[string][]string "
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
Response
|
||||||
|
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"Allow": "Determined whether the user is allowed or not",
|
||||||
|
"Msg": "The authorization message",
|
||||||
|
"Err": "The error message if things go wrong"
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
#### /AuthZPlugin.AuthZRes
|
||||||
|
|
||||||
|
Request:
|
||||||
|
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"User": "The user identification",
|
||||||
|
"UserAuthNMethod": "The authentication method used",
|
||||||
|
"RequestMethod": "The HTTP method",
|
||||||
|
"RequestURI": "The HTTP request URI",
|
||||||
|
"RequestBody": "Byte array containing the raw HTTP request body",
|
||||||
|
"RequestHeader": "Byte array containing the raw HTTP request header as a map[string][]string",
|
||||||
|
"ResponseBody": "Byte array containing the raw HTTP response body",
|
||||||
|
"ResponseHeader": "Byte array containing the raw HTTP response header as a map[string][]string",
|
||||||
|
"ResponseStatusCode":"Response status code"
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
Response:
|
||||||
|
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"Allow": "Determined whether the user is allowed or not",
|
||||||
|
"Msg": "The authorization message",
|
||||||
|
"Err": "The error message if things go wrong"
|
||||||
|
}
|
||||||
|
```
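
As a hedged sketch of how these two endpoints might be wired together, the Go program below registers both handlers on a plugin socket and applies a trivial example policy (deny `DELETE` calls against containers, allow everything else). The struct fields mirror the request and response bodies above; the plugin name `sample-authz` and the policy itself are purely illustrative and not part of any official SDK:

```go
package main

import (
	"encoding/json"
	"log"
	"net"
	"net/http"
	"strings"
)

// Only the fields this sketch inspects are declared; field names follow the
// request/response bodies documented above.
type authZRequest struct {
	User          string
	RequestMethod string
	RequestURI    string
}

type authZResponse struct {
	Allow bool
	Msg   string
	Err   string
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/Plugin.Activate", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte(`{"Implements": ["authz"]}`))
	})
	mux.HandleFunc("/AuthZPlugin.AuthZReq", func(w http.ResponseWriter, r *http.Request) {
		var req authZRequest
		resp := authZResponse{Allow: true}
		if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
			resp = authZResponse{Err: err.Error()}
		} else if req.RequestMethod == "DELETE" && strings.Contains(req.RequestURI, "/containers/") {
			// Example policy: deny container removal for everyone.
			resp = authZResponse{Allow: false, Msg: "container removal is not allowed"}
		}
		json.NewEncoder(w).Encode(resp)
	})
	mux.HandleFunc("/AuthZPlugin.AuthZRes", func(w http.ResponseWriter, r *http.Request) {
		// This sketch does not filter responses.
		json.NewEncoder(w).Encode(authZResponse{Allow: true})
	})

	// The socket file name determines the plugin name ("sample-authz" here).
	l, err := net.Listen("unix", "/run/docker/plugins/sample-authz.sock")
	if err != nil {
		log.Fatal(err)
	}
	log.Fatal(http.Serve(l, mux))
}
```

Started this way, the daemon would reference the plugin with `--authorization-plugin=sample-authz` to add it to the chain.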
|
||||||
|
|
||||||
|
### Request authorization
|
||||||
|
|
||||||
|
Each plugin must support two request authorization message formats, one from the daemon to the plugin and one from the plugin to the daemon. The tables below detail the content expected in each message.
|
||||||
|
|
||||||
|
#### Daemon -> Plugin
|
||||||
|
|
||||||
|
Name | Type | Description
|
||||||
|
-----------------------|-------------------|-------------------------------------------------------
|
||||||
|
User | string | The user identification
|
||||||
|
Authentication method | string | The authentication method used
|
||||||
|
Request method | enum | The HTTP method (GET/DELETE/POST)
|
||||||
|
Request URI | string | The HTTP request URI including API version (e.g., v1.17/containers/json)
|
||||||
|
Request headers | map[string]string | Request headers as key value pairs (without the authorization header)
|
||||||
|
Request body | []byte | Raw request body
|
||||||
|
|
||||||
|
#### Plugin -> Daemon
|
||||||
|
|
||||||
|
Name | Type | Description
|
||||||
|
--------|--------|----------------------------------------------------------------------------------
|
||||||
|
Allow | bool | Boolean value indicating whether the request is allowed or denied
|
||||||
|
Msg | string | Authorization message (will be returned to the client in case the access is denied)
|
||||||
|
Err | string | Error message (will be returned to the client in case the plugin encounters an error. The string value supplied may appear in logs, so should not include confidential information)
|
||||||
|
|
||||||
|
### Response authorization
|
||||||
|
|
||||||
|
The plugin must support two response authorization message formats, one from the daemon to the plugin and one from the plugin to the daemon. The tables below detail the content expected in each message.
|
||||||
|
|
||||||
|
#### Daemon -> Plugin
|
||||||
|
|
||||||
|
Name | Type | Description
|
||||||
|
----------------------- |------------------ |----------------------------------------------------
|
||||||
|
User | string | The user identification
|
||||||
|
Authentication method | string | The authentication method used
|
||||||
|
Request method | string | The HTTP method (GET/DELETE/POST)
|
||||||
|
Request URI | string | The HTTP request URI including API version (e.g., v1.17/containers/json)
|
||||||
|
Request headers | map[string]string | Request headers as key value pairs (without the authorization header)
|
||||||
|
Request body | []byte | Raw request body
|
||||||
|
Response status code | int | Status code from the Docker daemon
|
||||||
|
Response headers | map[string]string | Response headers as key value pairs
|
||||||
|
Response body | []byte | Raw Docker daemon response body
|
||||||
|
|
||||||
|
#### Plugin -> Daemon
|
||||||
|
|
||||||
|
Name | Type | Description
|
||||||
|
--------|--------|----------------------------------------------------------------------------------
|
||||||
|
Allow | bool | Boolean value indicating whether the response is allowed or denied
|
||||||
|
Msg | string | Authorization message (will be returned to the client in case the access is denied)
|
||||||
|
Err | string | Error message (will be returned to the client in case the plugin encounters an error. The string value supplied may appear in logs, so should not include confidential information)
|
|
@ -0,0 +1,215 @@
|
||||||
|
---
|
||||||
|
title: Docker log driver plugins
|
||||||
|
description: "Log driver plugins."
|
||||||
|
keywords: "Examples, Usage, plugins, docker, documentation, user guide, logging"
|
||||||
|
---
|
||||||
|
|
||||||
|
This document describes logging driver plugins for Docker.
|
||||||
|
|
||||||
|
Logging drivers enable users to forward container logs to another service for
|
||||||
|
processing. Docker includes several logging drivers as built-ins, but it can
|
||||||
|
never hope to support all use-cases with built-in drivers. Plugins allow Docker
|
||||||
|
to support a wide range of logging services without having to embed client
|
||||||
|
libraries for these services in the main Docker codebase. See the
|
||||||
|
[plugin documentation](legacy_plugins.md) for more information.
|
||||||
|
|
||||||
|
## Create a logging plugin
|
||||||
|
|
||||||
|
The main interface for logging plugins uses the same JSON+HTTP RPC protocol used
|
||||||
|
by other plugin types. See the
|
||||||
|
[example](https://github.com/cpuguy83/docker-log-driver-test) plugin for a
|
||||||
|
reference implementation of a logging plugin. The example wraps the built-in
|
||||||
|
`jsonfilelog` log driver.
|
||||||
|
|
||||||
|
## LogDriver protocol
|
||||||
|
|
||||||
|
Logging plugins must register as a `LogDriver` during plugin activation. Once
|
||||||
|
activated, users can specify the plugin as a log driver.
|
||||||
|
|
||||||
|
There are two HTTP endpoints that logging plugins must implement:
|
||||||
|
|
||||||
|
### `/LogDriver.StartLogging`
|
||||||
|
|
||||||
|
Signals to the plugin that a container is starting and that the plugin should start
|
||||||
|
receiving logs for it.
|
||||||
|
|
||||||
|
Logs will be streamed over the defined file in the request. On Linux this file
|
||||||
|
is a FIFO. Logging plugins are not currently supported on Windows.
|
||||||
|
|
||||||
|
Request:
|
||||||
|
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"File": "/path/to/file/stream",
|
||||||
|
"Info": {
|
||||||
|
"ContainerID": "123456"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
`File` is the path to the log stream that needs to be consumed. Each call to
|
||||||
|
`StartLogging` should provide a different file path, even if it's a container
|
||||||
|
that the plugin has already received logs for. The file is created by
|
||||||
|
Docker with a randomly generated name.
|
||||||
|
|
||||||
|
`Info` contains details about the container that's being logged. This is fairly
|
||||||
|
free-form, but is defined by the following struct definition:
|
||||||
|
|
||||||
|
```go
|
||||||
|
type Info struct {
|
||||||
|
Config map[string]string
|
||||||
|
ContainerID string
|
||||||
|
ContainerName string
|
||||||
|
ContainerEntrypoint string
|
||||||
|
ContainerArgs []string
|
||||||
|
ContainerImageID string
|
||||||
|
ContainerImageName string
|
||||||
|
ContainerCreated time.Time
|
||||||
|
ContainerEnv []string
|
||||||
|
ContainerLabels map[string]string
|
||||||
|
LogPath string
|
||||||
|
DaemonName string
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
`ContainerID` will always be supplied with this struct, but other fields may be
|
||||||
|
empty or missing.
|
||||||
|
|
||||||
|
Response:
|
||||||
|
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"Err": ""
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
If an error occurs during this request, add an error message to the `Err` field
|
||||||
|
in the response. If there is no error, you can either send an empty response (`{}`)
|
||||||
|
or an empty value for the `Err` field.
|
||||||
|
|
||||||
|
The driver should at this point be consuming log messages from the passed in file.
|
||||||
|
If messages are not consumed, the container may block while trying to
|
||||||
|
write to its stdio streams.
|
||||||
|
|
||||||
|
Log stream messages are encoded as protocol buffers. The protobuf definitions are
|
||||||
|
in the
|
||||||
|
[moby repository](https://github.com/moby/moby/blob/master/api/types/plugins/logdriver/entry.proto).
|
||||||
|
|
||||||
|
Since protocol buffers are not self-delimited you must decode them from the stream
|
||||||
|
using the following stream format:
|
||||||
|
|
||||||
|
```text
|
||||||
|
[size][message]
|
||||||
|
```
|
||||||
|
|
||||||
|
Where `size` is a 4-byte big endian binary encoded uint32. `size` in this case
|
||||||
|
defines the size of the next message. `message` is the actual log entry.
|
||||||
|
|
||||||
|
A reference Go implementation of a stream encoder/decoder can be found
|
||||||
|
[here](https://github.com/docker/docker/blob/master/api/types/plugins/logdriver/io.go).
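
As a sketch of the framing only (assuming the FIFO path received in the `StartLogging` request), a consumer loop in Go might look like this; protobuf unmarshalling of each entry is intentionally left out:

```go
package main

import (
	"encoding/binary"
	"fmt"
	"io"
	"log"
	"os"
)

// readFrames reads [size][message] frames: a 4-byte big-endian length
// followed by a protobuf-encoded log entry.
func readFrames(r io.Reader) error {
	sizeBuf := make([]byte, 4)
	for {
		if _, err := io.ReadFull(r, sizeBuf); err != nil {
			if err == io.EOF {
				return nil // clean end of stream
			}
			return err
		}
		size := binary.BigEndian.Uint32(sizeBuf)

		msg := make([]byte, size)
		if _, err := io.ReadFull(r, msg); err != nil {
			return err
		}
		// msg now holds one protobuf-encoded log entry; unmarshal it using
		// the definitions from the moby repository linked above.
		fmt.Printf("received %d-byte log entry\n", size)
	}
}

func main() {
	// The FIFO path comes from the StartLogging request ("File" field).
	f, err := os.Open("/path/to/file/stream")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	if err := readFrames(f); err != nil {
		log.Fatal(err)
	}
}
```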
|
||||||
|
|
||||||
|
### `/LogDriver.StopLogging`
|
||||||
|
|
||||||
|
Signals to the plugin to stop collecting logs from the defined file.
|
||||||
|
Once a response is received, the file will be removed by Docker. You must make
|
||||||
|
sure to collect all logs on the stream before responding to this request or risk
|
||||||
|
losing log data.
|
||||||
|
|
||||||
|
A request on this endpoint does not mean that the container has been removed,
|
||||||
|
only that it has stopped.
|
||||||
|
|
||||||
|
Request:
|
||||||
|
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"File": "/path/to/file/stream"
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
Response:
|
||||||
|
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"Err": ""
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
If an error occurs during this request, add an error message to the `Err` field
|
||||||
|
in the response. If there is no error, you can either send an empty response (`{}`)
|
||||||
|
or an empty value for the `Err` field.
|
||||||
|
|
||||||
|
## Optional endpoints
|
||||||
|
|
||||||
|
Logging plugins can implement two extra logging endpoints:
|
||||||
|
|
||||||
|
### `/LogDriver.Capabilities`
|
||||||
|
|
||||||
|
Defines the capabilities of the log driver. You must implement this endpoint for
|
||||||
|
Docker to be able to take advantage of any of the defined capabilities.
|
||||||
|
|
||||||
|
Request:
|
||||||
|
|
||||||
|
```json
|
||||||
|
{}
|
||||||
|
```
|
||||||
|
|
||||||
|
Response:
|
||||||
|
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"ReadLogs": true
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
Supported capabilities:
|
||||||
|
|
||||||
|
- `ReadLogs` - this tells Docker that the plugin is capable of reading back logs
|
||||||
|
to clients. Plugins that report that they support `ReadLogs` must implement the
|
||||||
|
`/LogDriver.ReadLogs` endpoint.
|
||||||
|
|
||||||
|
### `/LogDriver.ReadLogs`
|
||||||
|
|
||||||
|
Reads back logs to the client. This is used when `docker logs <container>` is
|
||||||
|
called.
|
||||||
|
|
||||||
|
In order for Docker to use this endpoint, the plugin must specify as much when
|
||||||
|
`/LogDriver.Capabilities` is called.
|
||||||
|
|
||||||
|
Request:
|
||||||
|
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"ReadConfig": {},
|
||||||
|
"Info": {
|
||||||
|
"ContainerID": "123456"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
`ReadConfig` is the list of options for reading. It is defined by the following
|
||||||
|
Go struct:
|
||||||
|
|
||||||
|
```go
|
||||||
|
type ReadConfig struct {
|
||||||
|
Since time.Time
|
||||||
|
Tail int
|
||||||
|
Follow bool
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
- `Since` defines the oldest log that should be sent.
|
||||||
|
- `Tail` defines the number of lines to read (e.g. like the command `tail -n 10`)
|
||||||
|
- `Follow` signals that the client wants to stay attached to receive new log messages
|
||||||
|
as they come in once the existing logs have been read.
|
||||||
|
|
||||||
|
`Info` is the same type defined in `/LogDriver.StartLogging`. It should be used
|
||||||
|
to determine what set of logs to read.
|
||||||
|
|
||||||
|
Response:
|
||||||
|
|
||||||
|
```text
|
||||||
|
{{ log stream }}
|
||||||
|
```
|
||||||
|
|
||||||
|
The response should be the encoded log messages, using the same format as the
|
||||||
|
messages that the plugin consumed from Docker.
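
For the write side, a sketch of the framing is equally small. The helper below (a hypothetical `writeFrame`, not part of any official SDK) length-prefixes an already protobuf-encoded entry before writing it to the `/LogDriver.ReadLogs` response body:

```go
package logdriver

import (
	"encoding/binary"
	"io"
)

// writeFrame prefixes an already protobuf-encoded log entry with its 4-byte
// big-endian length and writes the frame to w (for example, the HTTP
// response writer of the /LogDriver.ReadLogs handler).
func writeFrame(w io.Writer, encodedEntry []byte) error {
	sizeBuf := make([]byte, 4)
	binary.BigEndian.PutUint32(sizeBuf, uint32(len(encodedEntry)))
	if _, err := w.Write(sizeBuf); err != nil {
		return err
	}
	_, err := w.Write(encodedEntry)
	return err
}
```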
|
|
@ -0,0 +1,79 @@
|
||||||
|
---
|
||||||
|
title: Docker metrics collector plugins
|
||||||
|
description: "Metrics plugins."
|
||||||
|
keywords: "Examples, Usage, plugins, docker, documentation, user guide, metrics"
|
||||||
|
---
|
||||||
|
|
||||||
|
Docker exposes internal metrics based on the Prometheus format. Metrics plugins
|
||||||
|
enable accessing these metrics in a consistent way by providing a Unix
|
||||||
|
socket at a predefined path where the plugin can scrape the metrics.
|
||||||
|
|
||||||
|
> [!NOTE]
|
||||||
|
> While the plugin interface for metrics is non-experimental, the naming of the
|
||||||
|
> metrics and metric labels is still considered experimental and may change in a
|
||||||
|
> future version.
|
||||||
|
|
||||||
|
## Creating a metrics plugin
|
||||||
|
|
||||||
|
You must currently set `PropagatedMount` in the plugin `config.json` to
|
||||||
|
`/run/docker`. This allows the plugin to receive updated mounts
|
||||||
|
(the bind-mounted socket) from Docker after the plugin is already configured.
|
||||||
|
|
||||||
|
## MetricsCollector protocol
|
||||||
|
|
||||||
|
Metrics plugins must register as implementing the `MetricsCollector` interface
|
||||||
|
in `config.json`.
|
||||||
|
|
||||||
|
On Unix platforms, the socket is located at `/run/docker/metrics.sock` in the
|
||||||
|
plugin's rootfs.
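
As an illustrative sketch, a metrics plugin might scrape that socket with a plain HTTP client that dials the Unix socket directly. The `/metrics` path and the dummy host name in the URL are assumptions for this example:

```go
package main

import (
	"context"
	"io"
	"log"
	"net"
	"net/http"
	"os"
)

func main() {
	// Dial the metrics socket directly; the host name in the URL is ignored.
	client := &http.Client{
		Transport: &http.Transport{
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				return (&net.Dialer{}).DialContext(ctx, "unix", "/run/docker/metrics.sock")
			},
		},
	}

	resp, err := client.Get("http://docker/metrics")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// Prometheus-format metrics text; forward or parse as needed.
	io.Copy(os.Stdout, resp.Body)
}
```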
|
||||||
|
|
||||||
|
`MetricsCollector` must implement two endpoints:
|
||||||
|
|
||||||
|
### `MetricsCollector.StartMetrics`
|
||||||
|
|
||||||
|
Signals to the plugin that the metrics socket is now available for scraping.
|
||||||
|
|
||||||
|
Request:
|
||||||
|
|
||||||
|
```json
|
||||||
|
{}
|
||||||
|
```
|
||||||
|
|
||||||
|
The request has no payload.
|
||||||
|
|
||||||
|
Response:
|
||||||
|
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"Err": ""
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
If an error occurs during this request, add an error message to the `Err` field
|
||||||
|
in the response. If there is no error, you can either send an empty response (`{}`)
|
||||||
|
or an empty value for the `Err` field. Errors will only be logged.
|
||||||
|
|
||||||
|
### `MetricsCollector.StopMetrics`
|
||||||
|
|
||||||
|
Signals to the plugin that the metrics socket is no longer available.
|
||||||
|
This may happen when the daemon is shutting down.
|
||||||
|
|
||||||
|
Request:
|
||||||
|
|
||||||
|
```json
|
||||||
|
{}
|
||||||
|
```
|
||||||
|
|
||||||
|
The request has no payload.
|
||||||
|
|
||||||
|
Response:
|
||||||
|
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"Err": ""
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
If an error occurs during this request, add an error message to the `Err` field
|
||||||
|
in the response. If there is no error, you can either send an empty response (`{}`)
|
||||||
|
or an empty value for the `Err` field. Errors will only be logged.
|
|
@ -0,0 +1,71 @@
|
||||||
|
---
|
||||||
|
title: Docker network driver plugins
|
||||||
|
description: "Network driver plugins."
|
||||||
|
keywords: "Examples, Usage, plugins, docker, documentation, user guide"
|
||||||
|
---
|
||||||
|
|
||||||
|
This document describes Docker Engine network driver plugins generally
|
||||||
|
available in Docker Engine. To view information on plugins
|
||||||
|
managed by Docker Engine, refer to [Docker Engine plugin system](_index.md).
|
||||||
|
|
||||||
|
Docker Engine network plugins enable Engine deployments to be extended to
|
||||||
|
support a wide range of networking technologies, such as VXLAN, IPVLAN, MACVLAN
|
||||||
|
or something completely different. Network driver plugins are supported via the
|
||||||
|
LibNetwork project. Each plugin is implemented as a "remote driver" for
|
||||||
|
LibNetwork, which shares plugin infrastructure with Engine. Effectively, network
|
||||||
|
driver plugins are activated in the same way as other plugins, and use the same
|
||||||
|
kind of protocol.
|
||||||
|
|
||||||
|
## Network plugins and Swarm mode
|
||||||
|
|
||||||
|
[Legacy plugins](legacy_plugins.md) do not work in Swarm mode. However,
|
||||||
|
plugins written using the [v2 plugin system](_index.md) do work in Swarm mode, as
|
||||||
|
long as they are installed on each Swarm worker node.
|
||||||
|
|
||||||
|
## Use network driver plugins
|
||||||
|
|
||||||
|
The means of installing and running a network driver plugin depend on the
|
||||||
|
particular plugin. So, be sure to install your plugin according to the
|
||||||
|
instructions obtained from the plugin developer.
|
||||||
|
|
||||||
|
Once running however, network driver plugins are used just like the built-in
|
||||||
|
network drivers: by being mentioned as a driver in network-oriented Docker
|
||||||
|
commands. For example,
|
||||||
|
|
||||||
|
```console
|
||||||
|
$ docker network create --driver weave mynet
|
||||||
|
```
|
||||||
|
|
||||||
|
Some network driver plugins are listed in [plugins](legacy_plugins.md).
|
||||||
|
|
||||||
|
The `mynet` network is now owned by `weave`, so subsequent commands
|
||||||
|
referring to that network will be sent to the plugin:
|
||||||
|
|
||||||
|
```console
|
||||||
|
$ docker run --network=mynet busybox top
|
||||||
|
```
|
||||||
|
|
||||||
|
## Find network plugins
|
||||||
|
|
||||||
|
Network plugins are written by third parties, and are published by those
|
||||||
|
third parties, either on
|
||||||
|
[Docker Hub](https://hub.docker.com/search?q=&type=plugin)
|
||||||
|
or on the third party's site.

## Write a network plugin

Network plugins implement the [Docker plugin API](plugin_api.md) and the network
plugin protocol.

## Network plugin protocol

The network driver protocol, in addition to the plugin activation call, is
documented as part of libnetwork:
[https://github.com/moby/moby/blob/master/libnetwork/docs/remote.md](https://github.com/moby/moby/blob/master/libnetwork/docs/remote.md).
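
The activation call itself is the generic plugin handshake: the daemon POSTs to `/Plugin.Activate` on the plugin's socket, and the plugin declares which subsystems it implements. A minimal sketch, assuming a v1 plugin whose socket lives at `/run/docker/plugins/myplugin.sock` (the name and path are placeholders):

```console
$ curl -s -XPOST --unix-socket /run/docker/plugins/myplugin.sock \
    http://localhost/Plugin.Activate
{"Implements": ["NetworkDriver"]}
```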

## Related Information

To interact with the Docker maintainers and other interested users, see the IRC channel `#docker-network`.

- [Docker networks feature overview](https://docs.docker.com/engine/userguide/networking/)
- The [LibNetwork](https://github.com/docker/libnetwork) project
@ -0,0 +1,184 @@
---
keywords: "API, Usage, plugins, documentation, developer"
title: Plugins and Services
---

<!-- This file is maintained within the docker/cli GitHub
repository at https://github.com/docker/cli/. Make all
pull requests against that repo. If you see this file in
another repository, consider it read-only there, as it will
periodically be overwritten by the definitive file. Pull
requests which include edits to this file in other repositories
will be rejected.
-->

# Using Volume and Network plugins in Docker services

In swarm mode, it is possible to create a service that attaches
to networks or mounts volumes that are backed by plugins. Swarm schedules
services based on plugin availability on a node.

### Volume plugins

In this example, a volume plugin is installed on a swarm worker and a volume
is created using the plugin. On the manager, a service is created with the
relevant mount options. You can observe that the service is scheduled to
run on the worker node that has the volume plugin and volume. Note that
node1 is the manager and node2 is the worker.

1. Prepare the manager. On node1:

   ```console
   $ docker swarm init
   Swarm initialized: current node (dxn1zf6l61qsb1josjja83ngz) is now a manager.
   ```

2. Join the swarm, then install the plugin and create a volume on the worker. On node2:

   ```console
   $ docker swarm join \
     --token SWMTKN-1-49nj1cmql0jkz5s954yi3oex3nedyz0fb0xx14ie39trti4wxv-8vxv8rssmk743ojnwacrr2e7c \
     192.168.99.100:2377
   ```

   ```console
   $ docker plugin install tiborvass/sample-volume-plugin
   latest: Pulling from tiborvass/sample-volume-plugin
   eb9c16fbdc53: Download complete
   Digest: sha256:00b42de88f3a3e0342e7b35fa62394b0a9ceb54d37f4c50be5d3167899994639
   Status: Downloaded newer image for tiborvass/sample-volume-plugin:latest
   Installed plugin tiborvass/sample-volume-plugin
   ```

   ```console
   $ docker volume create -d tiborvass/sample-volume-plugin --name pluginVol
   ```

3. Create a service using the plugin and volume. On node1:

   ```console
   $ docker service create --name my-service --mount type=volume,volume-driver=tiborvass/sample-volume-plugin,source=pluginVol,destination=/tmp busybox top

   $ docker service ls
   z1sj8bb8jnfn my-service replicated 1/1 busybox:latest
   ```

   `docker service ls` shows that one instance of the service is running.

4. Observe the task getting scheduled on node2:

   ```console
   $ docker ps --format '{{.ID}}\t {{.Status}} {{.Names}} {{.Command}}'
   83fc1e842599 Up 2 days my-service.1.9jn59qzn7nbc3m0zt1hij12xs "top"
   ```

### Network plugins

In this example, a global scope network plugin is installed on both the
swarm manager and worker. A service is created with replicated instances
using the installed plugin. We will observe how the availability of the
plugin determines network creation and container scheduling.

Note that node1 is the manager and node2 is the worker.

1. Install a global scoped network plugin on both manager and worker. On node1
   and node2:

   ```console
   $ docker plugin install bboreham/weave2
   Plugin "bboreham/weave2" is requesting the following privileges:
   - network: [host]
   - capabilities: [CAP_SYS_ADMIN CAP_NET_ADMIN]
   Do you grant the above permissions? [y/N] y
   latest: Pulling from bboreham/weave2
   7718f575adf7: Download complete
   Digest: sha256:2780330cc15644b60809637ee8bd68b4c85c893d973cb17f2981aabfadfb6d72
   Status: Downloaded newer image for bboreham/weave2:latest
   Installed plugin bboreham/weave2
   ```

2. Create a network using the plugin on the manager. On node1:

   ```console
   $ docker network create --driver=bboreham/weave2:latest globalnet

   $ docker network ls
   NETWORK ID NAME DRIVER SCOPE
   qlj7ueteg6ly globalnet bboreham/weave2:latest swarm
   ```

3. Create a service on the manager with replicas set to 8. Observe that
   containers get scheduled on both manager and worker.

   On node1:

   ```console
   $ docker service create --network globalnet --name myservice --replicas=8 mrjana/simpleweb simpleweb
   w90drnfzw85nygbie9kb89vpa
   ```

   ```console
   $ docker ps
   CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
   87520965206a mrjana/simpleweb@sha256:317d7f221d68c86d503119b0ea12c29de42af0a22ca087d522646ad1069a47a4 "simpleweb" 5 seconds ago Up 4 seconds myservice.4.ytdzpktmwor82zjxkh118uf1v
   15e24de0f7aa mrjana/simpleweb@sha256:317d7f221d68c86d503119b0ea12c29de42af0a22ca087d522646ad1069a47a4 "simpleweb" 5 seconds ago Up 4 seconds myservice.2.kh7a9n3iauq759q9mtxyfs9hp
   c8c8f0144cdc mrjana/simpleweb@sha256:317d7f221d68c86d503119b0ea12c29de42af0a22ca087d522646ad1069a47a4 "simpleweb" 5 seconds ago Up 4 seconds myservice.6.sjhpj5gr3xt33e3u2jycoj195
   2e8e4b2c5c08 mrjana/simpleweb@sha256:317d7f221d68c86d503119b0ea12c29de42af0a22ca087d522646ad1069a47a4 "simpleweb" 5 seconds ago Up 4 seconds myservice.8.2z29zowsghx66u2velublwmrh
   ```

   On node2:

   ```console
   $ docker ps
   CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
   53c0ae7c1dae mrjana/simpleweb@sha256:317d7f221d68c86d503119b0ea12c29de42af0a22ca087d522646ad1069a47a4 "simpleweb" 2 seconds ago Up Less than a second myservice.7.x44tvvdm3iwkt9kif35f7ykz1
   9b56c627fee0 mrjana/simpleweb@sha256:317d7f221d68c86d503119b0ea12c29de42af0a22ca087d522646ad1069a47a4 "simpleweb" 2 seconds ago Up Less than a second myservice.1.x7n1rm6lltw5gja3ueikze57q
   d4f5927ba52c mrjana/simpleweb@sha256:317d7f221d68c86d503119b0ea12c29de42af0a22ca087d522646ad1069a47a4 "simpleweb" 2 seconds ago Up 1 second myservice.5.i97bfo9uc6oe42lymafs9rz6k
   478c0d395bd7 mrjana/simpleweb@sha256:317d7f221d68c86d503119b0ea12c29de42af0a22ca087d522646ad1069a47a4 "simpleweb" 2 seconds ago Up Less than a second myservice.3.yr7nkffa48lff1vrl2r1m1ucs
   ```

4. Scale down the number of instances. On node1:

   ```console
   $ docker service scale myservice=0
   myservice scaled to 0
   ```

5. Disable and uninstall the plugin on the worker. On node2:

   ```console
   $ docker plugin rm -f bboreham/weave2
   bboreham/weave2
   ```

6. Scale up the number of instances again. Observe that all containers are
   scheduled on the manager and not on the worker, because the plugin is no longer available on the worker.

   On node1:

   ```console
   $ docker service scale myservice=8
   myservice scaled to 8
   ```

   ```console
   $ docker ps
   CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
   cf4b0ec2415e mrjana/simpleweb@sha256:317d7f221d68c86d503119b0ea12c29de42af0a22ca087d522646ad1069a47a4 "simpleweb" 39 seconds ago Up 36 seconds myservice.3.r7p5o208jmlzpcbm2ytl3q6n1
   57c64a6a2b88 mrjana/simpleweb@sha256:317d7f221d68c86d503119b0ea12c29de42af0a22ca087d522646ad1069a47a4 "simpleweb" 39 seconds ago Up 36 seconds myservice.4.dwoezsbb02ccstkhlqjy2xe7h
   3ac68cc4e7b8 mrjana/simpleweb@sha256:317d7f221d68c86d503119b0ea12c29de42af0a22ca087d522646ad1069a47a4 "simpleweb" 39 seconds ago Up 35 seconds myservice.5.zx4ezdrm2nwxzkrwnxthv0284
   006c3cb318fc mrjana/simpleweb@sha256:317d7f221d68c86d503119b0ea12c29de42af0a22ca087d522646ad1069a47a4 "simpleweb" 39 seconds ago Up 36 seconds myservice.8.q0e3umt19y3h3gzo1ty336k5r
   dd2ffebde435 mrjana/simpleweb@sha256:317d7f221d68c86d503119b0ea12c29de42af0a22ca087d522646ad1069a47a4 "simpleweb" 39 seconds ago Up 36 seconds myservice.7.a77y3u22prjipnrjg7vzpv3ba
   a86c74d8b84b mrjana/simpleweb@sha256:317d7f221d68c86d503119b0ea12c29de42af0a22ca087d522646ad1069a47a4 "simpleweb" 39 seconds ago Up 36 seconds myservice.6.z9nbn14bagitwol1biveeygl7
   2846a7850ba0 mrjana/simpleweb@sha256:317d7f221d68c86d503119b0ea12c29de42af0a22ca087d522646ad1069a47a4 "simpleweb" 39 seconds ago Up 37 seconds myservice.2.ypufz2eh9fyhppgb89g8wtj76
   e2ec01efcd8a mrjana/simpleweb@sha256:317d7f221d68c86d503119b0ea12c29de42af0a22ca087d522646ad1069a47a4 "simpleweb" 39 seconds ago Up 38 seconds myservice.1.8w7c4ttzr6zcb9sjsqyhwp3yl
   ```

   On node2:

   ```console
   $ docker ps
   CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
   ```
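
To double-check placement from the manager, you can also list the service's tasks with `docker service ps`, which reports the node each replica was scheduled on. The output below is an illustrative sketch only; task names, timings, and node hostnames are placeholders.

```console
$ docker service ps --format '{{.Name}}  {{.Node}}  {{.CurrentState}}' myservice
myservice.1  node1  Running 40 seconds ago
myservice.2  node1  Running 40 seconds ago
```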

@ -0,0 +1,358 @@
---
title: Docker volume plugins
description: "How to manage data with external volume plugins"
keywords: "Examples, Usage, volume, docker, data, volumes, plugin, api"
---

Docker Engine volume plugins enable Engine deployments to be integrated with
external storage systems such as Amazon EBS, and enable data volumes to persist
beyond the lifetime of a single Docker host. See the
[plugin documentation](legacy_plugins.md) for more information.

## Changelog

### 1.13.0

- If used as part of the v2 plugin architecture, mountpoints that are part of
  paths returned by the plugin must be mounted under the directory specified by
  `PropagatedMount` in the plugin configuration
  ([#26398](https://github.com/docker/docker/pull/26398))

### 1.12.0

- Add `Status` field to `VolumeDriver.Get` response
  ([#21006](https://github.com/docker/docker/pull/21006#))
- Add `VolumeDriver.Capabilities` to get capabilities of the volume driver
  ([#22077](https://github.com/docker/docker/pull/22077))

### 1.10.0

- Add `VolumeDriver.Get` which gets the details about the volume
  ([#16534](https://github.com/docker/docker/pull/16534))
- Add `VolumeDriver.List` which lists all volumes owned by the driver
  ([#16534](https://github.com/docker/docker/pull/16534))

### 1.8.0

- Initial support for volume driver plugins
  ([#14659](https://github.com/docker/docker/pull/14659))

## Command-line changes

To give a container access to a volume, use the `--volume` and `--volume-driver`
flags on the `docker container run` command. The `--volume` (or `-v`) flag
accepts a volume name and path on the host, and the `--volume-driver` flag
accepts a driver type.

```console
$ docker volume create --driver=flocker volumename

$ docker container run -it --volume volumename:/data busybox sh
```

### `--volume`

The `--volume` (or `-v`) flag takes a value that is in the format
`<volume_name>:<mountpoint>`. The two parts of the value are
separated by a colon (`:`) character.

- The volume name is a human-readable name for the volume, and cannot begin with
  a `/` character. It is referred to as `volume_name` in the rest of this topic.
- The `Mountpoint` is the path on the host (v1) or in the plugin (v2) where the
  volume has been made available.

### `volumedriver`

Specifying a `volumedriver` in conjunction with a `volumename` allows you to
use plugins such as [Flocker](https://github.com/ScatterHQ/flocker) to manage
volumes external to a single host, such as those on EBS.

## Create a VolumeDriver

The container creation endpoint (`/containers/create`) accepts a `VolumeDriver`
field of type `string` that allows you to specify the name of the driver. If not
specified, it defaults to `"local"` (the default driver for local volumes).
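
For illustration, the same choice expressed directly against the Engine API looks roughly like the request body below. This is a sketch only: the image name and bind are placeholders, and in recent API versions the `VolumeDriver` field sits under `HostConfig`.

```json
{
  "Image": "busybox",
  "HostConfig": {
    "Binds": ["volumename:/data"],
    "VolumeDriver": "flocker"
  }
}
```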

## Volume plugin protocol

If a plugin registers itself as a `VolumeDriver` when activated, it must
provide the Docker daemon with writeable paths on the host filesystem. The Docker
daemon provides these paths to containers to consume. The Docker daemon makes
the volumes available by bind-mounting the provided paths into the containers.

> [!NOTE]
> Volume plugins should *not* write data to the `/var/lib/docker/` directory,
> including `/var/lib/docker/volumes`. The `/var/lib/docker/` directory is
> reserved for Docker.
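
As a rough wire-level sketch of this protocol, the daemon's calls are plain HTTP POSTs with JSON bodies against the plugin's socket. The commands below assume a v1 plugin socket at `/run/docker/plugins/myvolumedriver.sock`; the socket path and volume name are placeholders, and the responses shown are abbreviated.

```console
$ curl -s -XPOST --unix-socket /run/docker/plugins/myvolumedriver.sock \
    http://localhost/Plugin.Activate
{"Implements": ["VolumeDriver"]}

$ curl -s -XPOST --unix-socket /run/docker/plugins/myvolumedriver.sock \
    -H "Content-Type: application/json" \
    -d '{"Name": "volume_name", "Opts": {}}' \
    http://localhost/VolumeDriver.Create
{"Err": ""}
```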

### `/VolumeDriver.Create`

Request:

```json
{
  "Name": "volume_name",
  "Opts": {}
}
```

Instruct the plugin that the user wants to create a volume, given a user
specified volume name. The plugin does not need to actually manifest the
volume on the filesystem yet (until `Mount` is called).
`Opts` is a map of driver specific options passed through from the user request.

Response:

```json
{
  "Err": ""
}
```

Respond with a string error if an error occurred.

### `/VolumeDriver.Remove`

Request:

```json
{
  "Name": "volume_name"
}
```

Delete the specified volume from disk. This request is issued when a user
invokes `docker rm -v` to remove volumes associated with a container.

Response:

```json
{
  "Err": ""
}
```

Respond with a string error if an error occurred.

### `/VolumeDriver.Mount`

Request:

```json
{
  "Name": "volume_name",
  "ID": "b87d7442095999a92b65b3d9691e697b61713829cc0ffd1bb72e4ccd51aa4d6c"
}
```

Docker requires the plugin to provide a volume, given a user specified volume
name. `Mount` is called once per container start. If the same `volume_name` is requested
more than once, the plugin may need to keep track of each new mount request and provision
at the first mount request and deprovision at the last corresponding unmount request.

`ID` is a unique ID for the caller that is requesting the mount.

Response:

- v1

  ```json
  {
    "Mountpoint": "/path/to/directory/on/host",
    "Err": ""
  }
  ```

- v2

  ```json
  {
    "Mountpoint": "/path/under/PropagatedMount",
    "Err": ""
  }
  ```

`Mountpoint` is the path on the host (v1) or in the plugin (v2) where the volume
has been made available.

`Err` is either empty or contains an error string.
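
Because `Mount` and `Unmount` are paired once per container start and stop, a plugin that is asked to mount the same volume for several callers typically keeps a reference count keyed on the `ID` values it has seen, provisioning on the first mount and deprovisioning on the last unmount. A wire-level sketch of one such pair, with a placeholder socket path and caller ID:

```console
$ curl -s -XPOST --unix-socket /run/docker/plugins/myvolumedriver.sock \
    -d '{"Name": "volume_name", "ID": "caller-1"}' \
    http://localhost/VolumeDriver.Mount
{"Mountpoint": "/path/to/directory/on/host", "Err": ""}

$ curl -s -XPOST --unix-socket /run/docker/plugins/myvolumedriver.sock \
    -d '{"Name": "volume_name", "ID": "caller-1"}' \
    http://localhost/VolumeDriver.Unmount
{"Err": ""}
```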

### `/VolumeDriver.Path`

Request:

```json
{
  "Name": "volume_name"
}
```

Request the path to the volume with the given `volume_name`.

Response:

- v1

  ```json
  {
    "Mountpoint": "/path/to/directory/on/host",
    "Err": ""
  }
  ```

- v2

  ```json
  {
    "Mountpoint": "/path/under/PropagatedMount",
    "Err": ""
  }
  ```

Respond with the path on the host (v1) or inside the plugin (v2) where the
volume has been made available, and/or a string error if an error occurred.

`Mountpoint` is optional. However, the plugin may be queried again later if one
is not provided.

### `/VolumeDriver.Unmount`

Request:

```json
{
  "Name": "volume_name",
  "ID": "b87d7442095999a92b65b3d9691e697b61713829cc0ffd1bb72e4ccd51aa4d6c"
}
```

Docker is no longer using the named volume. `Unmount` is called once per
container stop. The plugin may deduce that it is safe to deprovision the volume at
this point.

`ID` is a unique ID for the caller that is requesting the mount.

Response:

```json
{
  "Err": ""
}
```

Respond with a string error if an error occurred.

### `/VolumeDriver.Get`

Request:

```json
{
  "Name": "volume_name"
}
```

Get info about `volume_name`.

Response:

- v1

  ```json
  {
    "Volume": {
      "Name": "volume_name",
      "Mountpoint": "/path/to/directory/on/host",
      "Status": {}
    },
    "Err": ""
  }
  ```

- v2

  ```json
  {
    "Volume": {
      "Name": "volume_name",
      "Mountpoint": "/path/under/PropagatedMount",
      "Status": {}
    },
    "Err": ""
  }
  ```

Respond with a string error if an error occurred. `Mountpoint` and `Status` are
optional.

### `/VolumeDriver.List`

Request:

```json
{}
```

Get the list of volumes registered with the plugin.

Response:

- v1

  ```json
  {
    "Volumes": [
      {
        "Name": "volume_name",
        "Mountpoint": "/path/to/directory/on/host"
      }
    ],
    "Err": ""
  }
  ```

- v2

  ```json
  {
    "Volumes": [
      {
        "Name": "volume_name",
        "Mountpoint": "/path/under/PropagatedMount"
      }
    ],
    "Err": ""
  }
  ```

Respond with a string error if an error occurred. `Mountpoint` is optional.

### `/VolumeDriver.Capabilities`

Request:

```json
{}
```

Get the list of capabilities the driver supports.

The driver is not required to implement `Capabilities`. If it is not
implemented, the default values are used.

Response:

```json
{
  "Capabilities": {
    "Scope": "global"
  }
}
```

Supported scopes are `global` and `local`. Any other value in `Scope` will be
ignored, and `local` is used. `Scope` allows cluster managers to handle the
volume in different ways. For instance, a scope of `global` signals to the
cluster manager that it only needs to create the volume once instead of on each
Docker host. More capabilities may be added in the future.

@ -0,0 +1,212 @@
# docker compose

```text
docker compose [-f <arg>...] [options] [COMMAND] [ARGS...]
```

<!---MARKER_GEN_START-->
Define and run multi-container applications with Docker

### Subcommands

| Name | Description |
|:---|:---|
| [`attach`](compose_attach.md) | Attach local standard input, output, and error streams to a service's running container |
| [`bridge`](compose_bridge.md) | Convert compose files into another model |
| [`build`](compose_build.md) | Build or rebuild services |
| [`commit`](compose_commit.md) | Create a new image from a service container's changes |
| [`config`](compose_config.md) | Parse, resolve and render compose file in canonical format |
| [`cp`](compose_cp.md) | Copy files/folders between a service container and the local filesystem |
| [`create`](compose_create.md) | Creates containers for a service |
| [`down`](compose_down.md) | Stop and remove containers, networks |
| [`events`](compose_events.md) | Receive real time events from containers |
| [`exec`](compose_exec.md) | Execute a command in a running container |
| [`export`](compose_export.md) | Export a service container's filesystem as a tar archive |
| [`images`](compose_images.md) | List images used by the created containers |
| [`kill`](compose_kill.md) | Force stop service containers |
| [`logs`](compose_logs.md) | View output from containers |
| [`ls`](compose_ls.md) | List running compose projects |
| [`pause`](compose_pause.md) | Pause services |
| [`port`](compose_port.md) | Print the public port for a port binding |
| [`ps`](compose_ps.md) | List containers |
| [`publish`](compose_publish.md) | Publish compose application |
| [`pull`](compose_pull.md) | Pull service images |
| [`push`](compose_push.md) | Push service images |
| [`restart`](compose_restart.md) | Restart service containers |
| [`rm`](compose_rm.md) | Removes stopped service containers |
| [`run`](compose_run.md) | Run a one-off command on a service |
| [`scale`](compose_scale.md) | Scale services |
| [`start`](compose_start.md) | Start services |
| [`stats`](compose_stats.md) | Display a live stream of container(s) resource usage statistics |
| [`stop`](compose_stop.md) | Stop services |
| [`top`](compose_top.md) | Display the running processes |
| [`unpause`](compose_unpause.md) | Unpause services |
| [`up`](compose_up.md) | Create and start containers |
| [`version`](compose_version.md) | Show the Docker Compose version information |
| [`volumes`](compose_volumes.md) | List volumes |
| [`wait`](compose_wait.md) | Block until containers of all (or specified) services stop. |
| [`watch`](compose_watch.md) | Watch build context for service and rebuild/refresh containers when files are updated |

### Options

| Name | Type | Default | Description |
|:---|:---|:---|:---|
| `--all-resources` | `bool` | | Include all resources, even those not used by services |
| `--ansi` | `string` | `auto` | Control when to print ANSI control characters ("never"\|"always"\|"auto") |
| `--compatibility` | `bool` | | Run compose in backward compatibility mode |
| `--dry-run` | `bool` | | Execute command in dry run mode |
| `--env-file` | `stringArray` | | Specify an alternate environment file |
| `-f`, `--file` | `stringArray` | | Compose configuration files |
| `--parallel` | `int` | `-1` | Control max parallelism, -1 for unlimited |
| `--profile` | `stringArray` | | Specify a profile to enable |
| `--progress` | `string` | | Set type of progress output (auto, tty, plain, json, quiet) |
| `--project-directory` | `string` | | Specify an alternate working directory<br>(default: the path of the first specified Compose file) |
| `-p`, `--project-name` | `string` | | Project name |

<!---MARKER_GEN_END-->

## Examples

### Use `-f` to specify the name and path of one or more Compose files

Use the `-f` flag to specify the location of a Compose [configuration file](/reference/compose-file/).

#### Specifying multiple Compose files

You can supply multiple `-f` configuration files. When you supply multiple files, Compose combines them into a single
configuration. Compose builds the configuration in the order you supply the files. Subsequent files override and add
to their predecessors.

For example, consider this command line:

```console
$ docker compose -f compose.yaml -f compose.admin.yaml run backup_db
```

The `compose.yaml` file might specify a `webapp` service.

```yaml
services:
  webapp:
    image: examples/web
    ports:
      - "8000:8000"
    volumes:
      - "/data"
```

If the `compose.admin.yaml` also specifies this same service, any matching fields override the previous file.
New values add to the `webapp` service configuration; the sketch after the next example shows the combined result.

```yaml
services:
  webapp:
    build: .
    environment:
      - DEBUG=1
```
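
Under those two files, the merged `webapp` definition ends up with both sets of keys, roughly as sketched below. This is an illustration of the merge behaviour, not literal `docker compose config` output:

```yaml
services:
  webapp:
    build: .
    image: examples/web
    environment:
      - DEBUG=1
    ports:
      - "8000:8000"
    volumes:
      - "/data"
```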

When you use multiple Compose files, all paths in the files are relative to the first configuration file specified
with `-f`. You can use the `--project-directory` option to override this base path.

Use a `-f` with `-` (dash) as the filename to read the configuration from stdin. When stdin is used, all paths in the
configuration are relative to the current working directory.

The `-f` flag is optional. If you don’t provide this flag on the command line, Compose traverses the working directory
and its parent directories looking for a `compose.yaml` or `docker-compose.yaml` file.

#### Specifying a path to a single Compose file

You can use the `-f` flag to specify a path to a Compose file that is not located in the current directory, either
from the command line or by setting up a `COMPOSE_FILE` environment variable in your shell or in an environment file.

For an example of using the `-f` option at the command line, suppose you are running the Compose Rails sample, and
have a `compose.yaml` file in a directory called `sandbox/rails`. You can use a command like `docker compose pull` to
get the postgres image for the db service from anywhere by using the `-f` flag as follows:

```console
$ docker compose -f ~/sandbox/rails/compose.yaml pull db
```

### Use `-p` to specify a project name

Each configuration has a project name. Compose sets the project name using
the following mechanisms, in order of precedence:

- The `-p` command line flag
- The `COMPOSE_PROJECT_NAME` environment variable
- The top-level `name:` variable from the config file (or the last `name:`
  from a series of config files specified using `-f`)
- The `basename` of the project directory containing the config file (or
  containing the first config file specified using `-f`)
- The `basename` of the current directory if no config file is specified

Project names must contain only lowercase letters, decimal digits, dashes,
and underscores, and must begin with a lowercase letter or decimal digit. If
the `basename` of the project directory or current directory violates this
constraint, you must use one of the other mechanisms.

```console
$ docker compose -p my_project ps -a
NAME                 SERVICE    STATUS     PORTS
my_project_demo_1    demo       running

$ docker compose -p my_project logs
demo_1  | PING localhost (127.0.0.1): 56 data bytes
demo_1  | 64 bytes from 127.0.0.1: seq=0 ttl=64 time=0.095 ms
```

### Use profiles to enable optional services

Use `--profile` to specify one or more active profiles.
Calling `docker compose --profile frontend up` starts the services with the profile `frontend` as well as the services
without any specified profiles.
You can also enable multiple profiles; for example, with `docker compose --profile frontend --profile debug up` the profiles `frontend` and `debug` are enabled.

Profiles can also be set by the `COMPOSE_PROFILES` environment variable.
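
As a sketch of how profiles look in a Compose file (service and profile names here are illustrative), running `docker compose --profile frontend up` would start `web` and `db` below, but not `debugger`:

```yaml
services:
  web:
    image: examples/web
    profiles:
      - frontend
  debugger:
    image: examples/debugger
    profiles:
      - debug
  db:
    image: postgres
```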

### Configuring parallelism

Use `--parallel` to specify the maximum level of parallelism for concurrent engine calls.
Calling `docker compose --parallel 1 pull` pulls the pullable images defined in the Compose file
one at a time. This can also be used to control build concurrency.

Parallelism can also be set by the `COMPOSE_PARALLEL_LIMIT` environment variable.

### Set up environment variables

You can set environment variables for various docker compose options, including the `-f`, `-p` and `--profiles` flags.

Setting the `COMPOSE_FILE` environment variable is equivalent to passing the `-f` flag,
the `COMPOSE_PROJECT_NAME` environment variable does the same as the `-p` flag,
the `COMPOSE_PROFILES` environment variable is equivalent to the `--profiles` flag,
and `COMPOSE_PARALLEL_LIMIT` does the same as the `--parallel` flag.

If flags are explicitly set on the command line, the associated environment variable is ignored.

Setting the `COMPOSE_IGNORE_ORPHANS` environment variable to `true` stops docker compose from detecting orphaned
containers for the project.

Setting the `COMPOSE_MENU` environment variable to `false` disables the helper menu when running `docker compose up`
in attached mode. Alternatively, you can also run `docker compose up --menu=false` to disable the helper menu.
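
A quick sketch of the equivalence, using placeholder file and profile names (on Linux and macOS the `COMPOSE_FILE` list separator is `:`):

```console
$ COMPOSE_FILE=compose.yaml:compose.admin.yaml COMPOSE_PROFILES=frontend docker compose up -d

$ docker compose -f compose.yaml -f compose.admin.yaml --profile frontend up -d
```

Both commands above resolve to the same configuration.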

### Use Dry Run mode to test your command

Use the `--dry-run` flag to test a command without changing your application stack state.
Dry Run mode shows you all the steps Compose applies when executing a command, for example:

```console
$ docker compose --dry-run up --build -d
[+] Pulling 1/1
 ✔ DRY-RUN MODE - db Pulled 0.9s
[+] Running 10/8
 ✔ DRY-RUN MODE - build service backend 0.0s
 ✔ DRY-RUN MODE - ==> ==> writing image dryRun-754a08ddf8bcb1cf22f310f09206dd783d42f7dd 0.0s
 ✔ DRY-RUN MODE - ==> ==> naming to nginx-golang-mysql-backend 0.0s
 ✔ DRY-RUN MODE - Network nginx-golang-mysql_default Created 0.0s
 ✔ DRY-RUN MODE - Container nginx-golang-mysql-db-1 Created 0.0s
 ✔ DRY-RUN MODE - Container nginx-golang-mysql-backend-1 Created 0.0s
 ✔ DRY-RUN MODE - Container nginx-golang-mysql-proxy-1 Created 0.0s
 ✔ DRY-RUN MODE - Container nginx-golang-mysql-db-1 Healthy 0.5s
 ✔ DRY-RUN MODE - Container nginx-golang-mysql-backend-1 Started 0.0s
 ✔ DRY-RUN MODE - Container nginx-golang-mysql-proxy-1 Started
```

From the example above, you can see that the first step is to pull the image defined by the `db` service, then build the `backend` service.
Next, the containers are created. The `db` service is started, and the `backend` and `proxy` services wait until the `db` service is healthy before starting.

Dry Run mode works with almost all commands. You cannot use Dry Run mode with commands that don't change the state of a Compose stack, such as `ps`, `ls`, or `logs`.

@ -0,0 +1,22 @@
# docker compose alpha

<!---MARKER_GEN_START-->
Experimental commands

### Subcommands

| Name | Description |
|:---|:---|
| [`viz`](compose_alpha_viz.md) | EXPERIMENTAL - Generate a graphviz graph from your compose file |
| [`watch`](compose_alpha_watch.md) | EXPERIMENTAL - Watch build context for service and rebuild/refresh containers when files are updated |

### Options

| Name | Type | Default | Description |
|:---|:---|:---|:---|
| `--dry-run` | | | Execute command in dry run mode |

<!---MARKER_GEN_END-->
_vendor/github.com/docker/compose/v2/docs/reference/compose_alpha_dry-run.md (generated, 8 lines)

@ -0,0 +1,8 @@
# docker compose alpha dry-run

<!---MARKER_GEN_START-->
Dry run command allows you to test a command without applying changes

<!---MARKER_GEN_END-->
_vendor/github.com/docker/compose/v2/docs/reference/compose_alpha_generate.md (generated, 17 lines)

@ -0,0 +1,17 @@
# docker compose alpha generate

<!---MARKER_GEN_START-->
EXPERIMENTAL - Generate a Compose file from existing containers

### Options

| Name | Type | Default | Description |
|:---|:---|:---|:---|
| `--dry-run` | `bool` | | Execute command in dry run mode |
| `--format` | `string` | `yaml` | Format the output. Values: [yaml \| json] |
| `--name` | `string` | | Project name to set in the Compose file |
| `--project-dir` | `string` | | Directory to use for the project |

<!---MARKER_GEN_END-->
_vendor/github.com/docker/compose/v2/docs/reference/compose_alpha_publish.md (generated, 18 lines)
|
@ -0,0 +1,18 @@
|
||||||
|
# docker compose alpha publish
|
||||||
|
|
||||||
|
<!---MARKER_GEN_START-->
|
||||||
|
Publish compose application
|
||||||
|
|
||||||
|
### Options
|
||||||
|
|
||||||
|
| Name | Type | Default | Description |
|
||||||
|
|:--------------------------|:---------|:--------|:-------------------------------------------------------------------------------|
|
||||||
|
| `--dry-run` | `bool` | | Execute command in dry run mode |
|
||||||
|
| `--oci-version` | `string` | | OCI image/artifact specification version (automatically determined by default) |
|
||||||
|
| `--resolve-image-digests` | `bool` | | Pin image tags to digests |
|
||||||
|
| `--with-env` | `bool` | | Include environment variables in the published OCI artifact |
|
||||||
|
| `-y`, `--yes` | `bool` | | Assume "yes" as answer to all prompts |
|
||||||
|
|
||||||
|
|
||||||
|
<!---MARKER_GEN_END-->
|
||||||
|
|
|
@ -0,0 +1,15 @@
|
||||||
|
# docker compose alpha scale
|
||||||
|
|
||||||
|
<!---MARKER_GEN_START-->
|
||||||
|
Scale services
|
||||||
|
|
||||||
|
### Options
|
||||||
|
|
||||||
|
| Name | Type | Default | Description |
|
||||||
|
|:------------|:-----|:--------|:--------------------------------|
|
||||||
|
| `--dry-run` | | | Execute command in dry run mode |
|
||||||
|
| `--no-deps` | | | Don't start linked services |
|
||||||
|
|
||||||
|
|
||||||
|
<!---MARKER_GEN_END-->
|
||||||
|
|
|
@ -0,0 +1,19 @@
|
||||||
|
# docker compose alpha viz
|
||||||
|
|
||||||
|
<!---MARKER_GEN_START-->
|
||||||
|
EXPERIMENTAL - Generate a graphviz graph from your compose file
|
||||||
|
|
||||||
|
### Options
|
||||||
|
|
||||||
|
| Name | Type | Default | Description |
|
||||||
|
|:---------------------|:-------|:--------|:---------------------------------------------------------------------------------------------------|
|
||||||
|
| `--dry-run` | `bool` | | Execute command in dry run mode |
|
||||||
|
| `--image` | `bool` | | Include service's image name in output graph |
|
||||||
|
| `--indentation-size` | `int` | `1` | Number of tabs or spaces to use for indentation |
|
||||||
|
| `--networks` | `bool` | | Include service's attached networks in output graph |
|
||||||
|
| `--ports` | `bool` | | Include service's exposed ports in output graph |
|
||||||
|
| `--spaces` | `bool` | | If given, space character ' ' will be used to indent,<br>otherwise tab character '\t' will be used |
|
||||||
|
|
||||||
|
|
||||||
|
<!---MARKER_GEN_END-->
|
||||||
|
|
|
@ -0,0 +1,16 @@
|
||||||
|
# docker compose alpha watch
|
||||||
|
|
||||||
|
<!---MARKER_GEN_START-->
|
||||||
|
Watch build context for service and rebuild/refresh containers when files are updated
|
||||||
|
|
||||||
|
### Options
|
||||||
|
|
||||||
|
| Name | Type | Default | Description |
|
||||||
|
|:------------|:-----|:--------|:----------------------------------------------|
|
||||||
|
| `--dry-run` | | | Execute command in dry run mode |
|
||||||
|
| `--no-up` | | | Do not build & start services before watching |
|
||||||
|
| `--quiet`   |      |         | Hide build output                              |
|
||||||
|
|
||||||
|
|
||||||
|
<!---MARKER_GEN_END-->
|
||||||
|
|
|
@ -0,0 +1,17 @@
|
||||||
|
# docker compose attach
|
||||||
|
|
||||||
|
<!---MARKER_GEN_START-->
|
||||||
|
Attach local standard input, output, and error streams to a service's running container
|
||||||
|
|
||||||
|
### Options
|
||||||
|
|
||||||
|
| Name | Type | Default | Description |
|
||||||
|
|:----------------|:---------|:--------|:----------------------------------------------------------|
|
||||||
|
| `--detach-keys` | `string` | | Override the key sequence for detaching from a container. |
|
||||||
|
| `--dry-run` | `bool` | | Execute command in dry run mode |
|
||||||
|
| `--index` | `int` | `0` | index of the container if service has multiple replicas. |
|
||||||
|
| `--no-stdin` | `bool` | | Do not attach STDIN |
|
||||||
|
| `--sig-proxy` | `bool` | `true` | Proxy all received signals to the process |
|
||||||
|
|
||||||
|
|
||||||
|
<!---MARKER_GEN_END-->
|
|
@ -0,0 +1,22 @@
|
||||||
|
# docker compose bridge
|
||||||
|
|
||||||
|
<!---MARKER_GEN_START-->
|
||||||
|
Convert compose files into another model
|
||||||
|
|
||||||
|
### Subcommands
|
||||||
|
|
||||||
|
| Name | Description |
|
||||||
|
|:-------------------------------------------------------|:-----------------------------------------------------------------------------|
|
||||||
|
| [`convert`](compose_bridge_convert.md) | Convert compose files to Kubernetes manifests, Helm charts, or another model |
|
||||||
|
| [`transformations`](compose_bridge_transformations.md) | Manage transformation images |
|
||||||
|
|
||||||
|
|
||||||
|
### Options
|
||||||
|
|
||||||
|
| Name | Type | Default | Description |
|
||||||
|
|:------------|:-------|:--------|:--------------------------------|
|
||||||
|
| `--dry-run` | `bool` | | Execute command in dry run mode |
|
||||||
|
|
||||||
|
|
||||||
|
<!---MARKER_GEN_END-->
|
||||||
|
|
_vendor/github.com/docker/compose/v2/docs/reference/compose_bridge_convert.md (generated, 17 lines)
|
@ -0,0 +1,17 @@
|
||||||
|
# docker compose bridge convert
|
||||||
|
|
||||||
|
<!---MARKER_GEN_START-->
|
||||||
|
Convert compose files to Kubernetes manifests, Helm charts, or another model
|
||||||
|
|
||||||
|
### Options
|
||||||
|
|
||||||
|
| Name | Type | Default | Description |
|
||||||
|
|:-------------------------|:--------------|:--------|:-------------------------------------------------------------------------------------|
|
||||||
|
| `--dry-run` | `bool` | | Execute command in dry run mode |
|
||||||
|
| `-o`, `--output` | `string` | `out` | The output directory for the Kubernetes resources |
|
||||||
|
| `--templates` | `string` | | Directory containing transformation templates |
|
||||||
|
| `-t`, `--transformation` | `stringArray` | | Transformation to apply to compose model (default: docker/compose-bridge-kubernetes) |
|
||||||
|
|
||||||
|
|
||||||
|
<!---MARKER_GEN_END-->
|
||||||
|
|
_vendor/github.com/docker/compose/v2/docs/reference/compose_bridge_transformations.md (generated, 22 lines)
|
@ -0,0 +1,22 @@
|
||||||
|
# docker compose bridge transformations
|
||||||
|
|
||||||
|
<!---MARKER_GEN_START-->
|
||||||
|
Manage transformation images
|
||||||
|
|
||||||
|
### Subcommands
|
||||||
|
|
||||||
|
| Name | Description |
|
||||||
|
|:-----------------------------------------------------|:-------------------------------|
|
||||||
|
| [`create`](compose_bridge_transformations_create.md) | Create a new transformation |
|
||||||
|
| [`list`](compose_bridge_transformations_list.md) | List available transformations |
|
||||||
|
|
||||||
|
|
||||||
|
### Options
|
||||||
|
|
||||||
|
| Name | Type | Default | Description |
|
||||||
|
|:------------|:-------|:--------|:--------------------------------|
|
||||||
|
| `--dry-run` | `bool` | | Execute command in dry run mode |
|
||||||
|
|
||||||
|
|
||||||
|
<!---MARKER_GEN_END-->
|
||||||
|
|
_vendor/github.com/docker/compose/v2/docs/reference/compose_bridge_transformations_create.md (generated, 15 lines)
|
@ -0,0 +1,15 @@
|
||||||
|
# docker compose bridge transformations create
|
||||||
|
|
||||||
|
<!---MARKER_GEN_START-->
|
||||||
|
Create a new transformation
|
||||||
|
|
||||||
|
### Options
|
||||||
|
|
||||||
|
| Name | Type | Default | Description |
|
||||||
|
|:---------------|:---------|:--------|:----------------------------------------------------------------------------|
|
||||||
|
| `--dry-run` | `bool` | | Execute command in dry run mode |
|
||||||
|
| `-f`, `--from` | `string` | | Existing transformation to copy (default: docker/compose-bridge-kubernetes) |
|
||||||
|
|
||||||
|
|
||||||
|
<!---MARKER_GEN_END-->
|
||||||
|
|
_vendor/github.com/docker/compose/v2/docs/reference/compose_bridge_transformations_list.md (generated, 20 lines)
|
@ -0,0 +1,20 @@
|
||||||
|
# docker compose bridge transformations list
|
||||||
|
|
||||||
|
<!---MARKER_GEN_START-->
|
||||||
|
List available transformations
|
||||||
|
|
||||||
|
### Aliases
|
||||||
|
|
||||||
|
`docker compose bridge transformations list`, `docker compose bridge transformations ls`
|
||||||
|
|
||||||
|
### Options
|
||||||
|
|
||||||
|
| Name | Type | Default | Description |
|
||||||
|
|:----------------|:---------|:--------|:-------------------------------------------|
|
||||||
|
| `--dry-run` | `bool` | | Execute command in dry run mode |
|
||||||
|
| `--format` | `string` | `table` | Format the output. Values: [table \| json] |
|
||||||
|
| `-q`, `--quiet` | `bool` | | Only display transformer names |
|
||||||
|
|
||||||
|
|
||||||
|
<!---MARKER_GEN_END-->
|
||||||
|
|
|
@ -0,0 +1,44 @@
|
||||||
|
# docker compose build
|
||||||
|
|
||||||
|
<!---MARKER_GEN_START-->
|
||||||
|
Services are built once and then tagged, by default as `project-service`.
|
||||||
|
|
||||||
|
If the Compose file specifies an
|
||||||
|
[image](https://github.com/compose-spec/compose-spec/blob/main/spec.md#image) name,
|
||||||
|
the image is tagged with that name, substituting any variables beforehand. See
|
||||||
|
[variable interpolation](https://github.com/compose-spec/compose-spec/blob/main/spec.md#interpolation).
|
||||||
|
|
||||||
|
If you change a service's `Dockerfile` or the contents of its build directory,
|
||||||
|
run `docker compose build` to rebuild it.
|
||||||
|
|
||||||
|
### Options
|
||||||
|
|
||||||
|
| Name | Type | Default | Description |
|
||||||
|
|:----------------------|:--------------|:--------|:------------------------------------------------------------------------------------------------------------|
|
||||||
|
| `--build-arg` | `stringArray` | | Set build-time variables for services |
|
||||||
|
| `--builder` | `string` | | Set builder to use |
|
||||||
|
| `--check` | `bool` | | Check build configuration |
|
||||||
|
| `--dry-run` | `bool` | | Execute command in dry run mode |
|
||||||
|
| `-m`, `--memory` | `bytes` | `0` | Set memory limit for the build container. Not supported by BuildKit. |
|
||||||
|
| `--no-cache` | `bool` | | Do not use cache when building the image |
|
||||||
|
| `--print` | `bool` | | Print equivalent bake file |
|
||||||
|
| `--pull` | `bool` | | Always attempt to pull a newer version of the image |
|
||||||
|
| `--push` | `bool` | | Push service images |
|
||||||
|
| `-q`, `--quiet` | `bool` | | Don't print anything to STDOUT |
|
||||||
|
| `--ssh` | `string` | | Set SSH authentications used when building service images. (use 'default' for using your default SSH Agent) |
|
||||||
|
| `--with-dependencies` | `bool` | | Also build dependencies (transitively) |
|
||||||
|
|
||||||
|
|
||||||
|
<!---MARKER_GEN_END-->
|
||||||
|
|
||||||
|
## Description
|
||||||
|
|
||||||
|
Services are built once and then tagged, by default as `project-service`.
|
||||||
|
|
||||||
|
If the Compose file specifies an
|
||||||
|
[image](https://github.com/compose-spec/compose-spec/blob/main/spec.md#image) name,
|
||||||
|
the image is tagged with that name, substituting any variables beforehand. See
|
||||||
|
[variable interpolation](https://github.com/compose-spec/compose-spec/blob/main/spec.md#interpolation).
|
||||||
|
|
||||||
|
If you change a service's `Dockerfile` or the contents of its build directory,
|
||||||
|
run `docker compose build` to rebuild it.
|
|
@ -0,0 +1,19 @@
|
||||||
|
# docker compose commit
|
||||||
|
|
||||||
|
<!---MARKER_GEN_START-->
|
||||||
|
Create a new image from a service container's changes
|
||||||
|
|
||||||
|
### Options
|
||||||
|
|
||||||
|
| Name | Type | Default | Description |
|
||||||
|
|:------------------|:---------|:--------|:-----------------------------------------------------------|
|
||||||
|
| `-a`, `--author` | `string` | | Author (e.g., "John Hannibal Smith <hannibal@a-team.com>") |
|
||||||
|
| `-c`, `--change` | `list` | | Apply Dockerfile instruction to the created image |
|
||||||
|
| `--dry-run` | `bool` | | Execute command in dry run mode |
|
||||||
|
| `--index` | `int` | `0` | index of the container if service has multiple replicas. |
|
||||||
|
| `-m`, `--message` | `string` | | Commit message |
|
||||||
|
| `-p`, `--pause` | `bool` | `true` | Pause container during commit |
|
||||||
|
|
||||||
|
|
||||||
|
<!---MARKER_GEN_END-->
|
||||||
|
|
|
@ -0,0 +1,39 @@
|
||||||
|
# docker compose convert
|
||||||
|
|
||||||
|
<!---MARKER_GEN_START-->
|
||||||
|
`docker compose config` renders the actual data model to be applied on the Docker Engine.
|
||||||
|
It merges the Compose files set by `-f` flags, resolves variables in the Compose file, and expands short-notation into
|
||||||
|
the canonical format.
|
||||||
|
|
||||||
|
### Options
|
||||||
|
|
||||||
|
| Name | Type | Default | Description |
|
||||||
|
|:--------------------------|:---------|:--------|:----------------------------------------------------------------------------|
|
||||||
|
| `--dry-run` | `bool` | | Execute command in dry run mode |
|
||||||
|
| `--environment` | `bool` | | Print environment used for interpolation. |
|
||||||
|
| `--format` | `string` | | Format the output. Values: [yaml \| json] |
|
||||||
|
| `--hash` | `string` | | Print the service config hash, one per line. |
|
||||||
|
| `--images` | `bool` | | Print the image names, one per line. |
|
||||||
|
| `--lock-image-digests` | `bool` | | Produces an override file with image digests |
|
||||||
|
| `--networks` | `bool` | | Print the network names, one per line. |
|
||||||
|
| `--no-consistency` | `bool` | | Don't check model consistency - warning: may produce invalid Compose output |
|
||||||
|
| `--no-env-resolution` | `bool` | | Don't resolve service env files |
|
||||||
|
| `--no-interpolate` | `bool` | | Don't interpolate environment variables |
|
||||||
|
| `--no-normalize` | `bool` | | Don't normalize compose model |
|
||||||
|
| `--no-path-resolution` | `bool` | | Don't resolve file paths |
|
||||||
|
| `-o`, `--output` | `string` | | Save to file (default to stdout) |
|
||||||
|
| `--profiles` | `bool` | | Print the profile names, one per line. |
|
||||||
|
| `-q`, `--quiet` | `bool` | | Only validate the configuration, don't print anything |
|
||||||
|
| `--resolve-image-digests` | `bool` | | Pin image tags to digests |
|
||||||
|
| `--services` | `bool` | | Print the service names, one per line. |
|
||||||
|
| `--variables` | `bool` | | Print model variables and default values. |
|
||||||
|
| `--volumes` | `bool` | | Print the volume names, one per line. |
|
||||||
|
|
||||||
|
|
||||||
|
<!---MARKER_GEN_END-->
|
||||||
|
|
||||||
|
## Description
|
||||||
|
|
||||||
|
`docker compose config` renders the actual data model to be applied on the Docker Engine.
|
||||||
|
It merges the Compose files set by `-f` flags, resolves variables in the Compose file, and expands short-notation into
|
||||||
|
the canonical format.
|
|
@ -0,0 +1,18 @@
|
||||||
|
# docker compose cp
|
||||||
|
|
||||||
|
<!---MARKER_GEN_START-->
|
||||||
|
Copy files/folders between a service container and the local filesystem
|
||||||
|
|
||||||
|
### Options
|
||||||
|
|
||||||
|
| Name | Type | Default | Description |
|
||||||
|
|:----------------------|:-------|:--------|:--------------------------------------------------------|
|
||||||
|
| `--all` | `bool` | | Include containers created by the run command |
|
||||||
|
| `-a`, `--archive` | `bool` | | Archive mode (copy all uid/gid information) |
|
||||||
|
| `--dry-run` | `bool` | | Execute command in dry run mode |
|
||||||
|
| `-L`, `--follow-link` | `bool` | | Always follow symbol link in SRC_PATH |
|
||||||
|
| `--index` | `int` | `0` | Index of the container if service has multiple replicas |
|
||||||
|
|
||||||
|
|
||||||
|
<!---MARKER_GEN_END-->
|
||||||
|
|
|
@ -0,0 +1,23 @@
|
||||||
|
# docker compose create
|
||||||
|
|
||||||
|
<!---MARKER_GEN_START-->
|
||||||
|
Creates containers for a service
|
||||||
|
|
||||||
|
### Options
|
||||||
|
|
||||||
|
| Name | Type | Default | Description |
|
||||||
|
|:-------------------|:--------------|:---------|:----------------------------------------------------------------------------------------------|
|
||||||
|
| `--build` | `bool` | | Build images before starting containers |
|
||||||
|
| `--dry-run` | `bool` | | Execute command in dry run mode |
|
||||||
|
| `--force-recreate` | `bool` | | Recreate containers even if their configuration and image haven't changed |
|
||||||
|
| `--no-build` | `bool` | | Don't build an image, even if it's policy |
|
||||||
|
| `--no-recreate` | `bool` | | If containers already exist, don't recreate them. Incompatible with --force-recreate. |
|
||||||
|
| `--pull` | `string` | `policy` | Pull image before running ("always"\|"missing"\|"never"\|"build") |
|
||||||
|
| `--quiet-pull` | `bool` | | Pull without printing progress information |
|
||||||
|
| `--remove-orphans` | `bool` | | Remove containers for services not defined in the Compose file |
|
||||||
|
| `--scale` | `stringArray` | | Scale SERVICE to NUM instances. Overrides the `scale` setting in the Compose file if present. |
|
||||||
|
| `-y`, `--yes` | `bool` | | Assume "yes" as answer to all prompts and run non-interactively |
|
||||||
|
|
||||||
|
|
||||||
|
<!---MARKER_GEN_END-->
|
||||||
|
|
|
@@ -0,0 +1,45 @@
# docker compose down

<!---MARKER_GEN_START-->
Stops containers and removes containers, networks, volumes, and images created by `up`.

By default, the only things removed are:

- Containers for services defined in the Compose file.
- Networks defined in the networks section of the Compose file.
- The default network, if one is used.

Networks and volumes defined as external are never removed.

Anonymous volumes are not removed by default. However, as they don’t have a stable name, they are not automatically
mounted by a subsequent `up`. For data that needs to persist between updates, use explicit paths as bind mounts or
named volumes.

### Options

| Name               | Type     | Default | Description                                                                                                              |
|:-------------------|:---------|:--------|:----------------------------------------------------------------------------------------------------------------------------|
| `--dry-run`        | `bool`   |         | Execute command in dry run mode                                                                                          |
| `--remove-orphans` | `bool`   |         | Remove containers for services not defined in the Compose file                                                          |
| `--rmi`            | `string` |         | Remove images used by services. "local" remove only images that don't have a custom tag ("local"\|"all")                |
| `-t`, `--timeout`  | `int`    | `0`     | Specify a shutdown timeout in seconds                                                                                   |
| `-v`, `--volumes`  | `bool`   |         | Remove named volumes declared in the "volumes" section of the Compose file and anonymous volumes attached to containers |


<!---MARKER_GEN_END-->

## Description

Stops containers and removes containers, networks, volumes, and images created by `up`.

By default, the only things removed are:

- Containers for services defined in the Compose file.
- Networks defined in the networks section of the Compose file.
- The default network, if one is used.

Networks and volumes defined as external are never removed.

Anonymous volumes are not removed by default. However, as they don’t have a stable name, they are not automatically
mounted by a subsequent `up`. For data that needs to persist between updates, use explicit paths as bind mounts or
named volumes.
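As an illustrative sketch of the cleanup flags above, the following tears the project down and also removes its volumes, locally built images, and orphaned containers:

```console
$ docker compose down --volumes --rmi local --remove-orphans
```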
@@ -0,0 +1,54 @@
# docker compose events

<!---MARKER_GEN_START-->
Stream container events for every container in the project.

With the `--json` flag, a json object is printed one per line with the format:

```json
{
    "time": "2015-11-20T18:01:03.615550",
    "type": "container",
    "action": "create",
    "id": "213cf7...5fc39a",
    "service": "web",
    "attributes": {
        "name": "application_web_1",
        "image": "alpine:edge"
    }
}
```

The events that can be received using this can be seen [here](/reference/cli/docker/system/events/#object-types).

### Options

| Name        | Type   | Default | Description                                |
|:------------|:-------|:--------|:--------------------------------------------|
| `--dry-run` | `bool` |         | Execute command in dry run mode            |
| `--json`    | `bool` |         | Output events as a stream of json objects  |


<!---MARKER_GEN_END-->

## Description

Stream container events for every container in the project.

With the `--json` flag, a json object is printed one per line with the format:

```json
{
    "time": "2015-11-20T18:01:03.615550",
    "type": "container",
    "action": "create",
    "id": "213cf7...5fc39a",
    "service": "web",
    "attributes": {
        "name": "application_web_1",
        "image": "alpine:edge"
    }
}
```

The events that can be received using this can be seen [here](https://docs.docker.com/reference/cli/docker/system/events/#object-types).
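A minimal sketch of streaming events for a single service in the JSON format described above (the `web` service name is hypothetical):

```console
$ docker compose events --json web
```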
@@ -0,0 +1,30 @@
# docker compose exec

<!---MARKER_GEN_START-->
This is the equivalent of `docker exec` targeting a Compose service.

With this subcommand, you can run arbitrary commands in your services. Commands allocate a TTY by default, so
you can use a command such as `docker compose exec web sh` to get an interactive prompt.

### Options

| Name              | Type          | Default | Description                                                                        |
|:------------------|:--------------|:--------|:-------------------------------------------------------------------------------------|
| `-d`, `--detach`  | `bool`        |         | Detached mode: Run command in the background                                      |
| `--dry-run`       | `bool`        |         | Execute command in dry run mode                                                   |
| `-e`, `--env`     | `stringArray` |         | Set environment variables                                                         |
| `--index`         | `int`         | `0`     | Index of the container if service has multiple replicas                           |
| `-T`, `--no-TTY`  | `bool`        | `true`  | Disable pseudo-TTY allocation. By default `docker compose exec` allocates a TTY.  |
| `--privileged`    | `bool`        |         | Give extended privileges to the process                                            |
| `-u`, `--user`    | `string`      |         | Run the command as this user                                                      |
| `-w`, `--workdir` | `string`      |         | Path to workdir directory for this command                                        |


<!---MARKER_GEN_END-->

## Description

This is the equivalent of `docker exec` targeting a Compose service.

With this subcommand, you can run arbitrary commands in your services. Commands allocate a TTY by default, so
you can use a command such as `docker compose exec web sh` to get an interactive prompt.
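As a sketch combining several of the flags above (the `web` service, environment variable, and command are hypothetical):

```console
$ docker compose exec -e DEBUG=1 --workdir /app web env
```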
@@ -0,0 +1,16 @@
# docker compose export

<!---MARKER_GEN_START-->
Export a service container's filesystem as a tar archive

### Options

| Name             | Type     | Default | Description                                               |
|:-----------------|:---------|:--------|:------------------------------------------------------------|
| `--dry-run`      | `bool`   |         | Execute command in dry run mode                           |
| `--index`        | `int`    | `0`     | index of the container if service has multiple replicas.  |
| `-o`, `--output` | `string` |         | Write to a file, instead of STDOUT                        |


<!---MARKER_GEN_END-->
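A minimal sketch, assuming a service named `web`, that writes the archive to a file instead of STDOUT:

```console
$ docker compose export -o web.tar web
```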
@@ -0,0 +1,16 @@
# docker compose images

<!---MARKER_GEN_START-->
List images used by the created containers

### Options

| Name            | Type     | Default | Description                                 |
|:----------------|:---------|:--------|:----------------------------------------------|
| `--dry-run`     | `bool`   |         | Execute command in dry run mode             |
| `--format`      | `string` | `table` | Format the output. Values: [table \| json]  |
| `-q`, `--quiet` | `bool`   |         | Only display IDs                            |


<!---MARKER_GEN_END-->
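For example, listing the images in machine-readable form with the `--format` flag from the table above:

```console
$ docker compose images --format json
```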
@@ -0,0 +1,27 @@
# docker compose kill

<!---MARKER_GEN_START-->
Forces running containers to stop by sending a `SIGKILL` signal. Optionally the signal can be passed, for example:

```console
$ docker compose kill -s SIGINT
```

### Options

| Name               | Type     | Default   | Description                                                     |
|:-------------------|:---------|:----------|:------------------------------------------------------------------|
| `--dry-run`        | `bool`   |           | Execute command in dry run mode                                 |
| `--remove-orphans` | `bool`   |           | Remove containers for services not defined in the Compose file  |
| `-s`, `--signal`   | `string` | `SIGKILL` | SIGNAL to send to the container                                 |


<!---MARKER_GEN_END-->

## Description

Forces running containers to stop by sending a `SIGKILL` signal. Optionally the signal can be passed, for example:

```console
$ docker compose kill -s SIGINT
```
@@ -0,0 +1,25 @@
# docker compose logs

<!---MARKER_GEN_START-->
Displays log output from services

### Options

| Name                 | Type     | Default | Description                                                                                      |
|:---------------------|:---------|:--------|:------------------------------------------------------------------------------------------------------|
| `--dry-run`          | `bool`   |         | Execute command in dry run mode                                                                  |
| `-f`, `--follow`     | `bool`   |         | Follow log output                                                                                |
| `--index`            | `int`    | `0`     | index of the container if service has multiple replicas                                          |
| `--no-color`         | `bool`   |         | Produce monochrome output                                                                        |
| `--no-log-prefix`    | `bool`   |         | Don't print prefix in logs                                                                       |
| `--since`            | `string` |         | Show logs since timestamp (e.g. 2013-01-02T13:23:37Z) or relative (e.g. 42m for 42 minutes)      |
| `-n`, `--tail`       | `string` | `all`   | Number of lines to show from the end of the logs for each container                              |
| `-t`, `--timestamps` | `bool`   |         | Show timestamps                                                                                  |
| `--until`            | `string` |         | Show logs before a timestamp (e.g. 2013-01-02T13:23:37Z) or relative (e.g. 42m for 42 minutes)   |


<!---MARKER_GEN_END-->

## Description

Displays log output from services
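A sketch combining the follow, tail, and timestamp options above (the `web` service name is hypothetical):

```console
$ docker compose logs --follow --tail 100 --timestamps web
```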
@@ -0,0 +1,21 @@
# docker compose ls

<!---MARKER_GEN_START-->
Lists running Compose projects

### Options

| Name            | Type     | Default | Description                                 |
|:----------------|:---------|:--------|:----------------------------------------------|
| `-a`, `--all`   | `bool`   |         | Show all stopped Compose projects           |
| `--dry-run`     | `bool`   |         | Execute command in dry run mode             |
| `--filter`      | `filter` |         | Filter output based on conditions provided  |
| `--format`      | `string` | `table` | Format the output. Values: [table \| json]  |
| `-q`, `--quiet` | `bool`   |         | Only display project names                  |


<!---MARKER_GEN_END-->

## Description

Lists running Compose projects
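For example, listing every project, including stopped ones, as JSON:

```console
$ docker compose ls --all --format json
```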
@@ -0,0 +1,17 @@
# docker compose pause

<!---MARKER_GEN_START-->
Pauses running containers of a service. They can be unpaused with `docker compose unpause`.

### Options

| Name        | Type   | Default | Description                      |
|:------------|:-------|:--------|:-----------------------------------|
| `--dry-run` | `bool` |         | Execute command in dry run mode  |


<!---MARKER_GEN_END-->

## Description

Pauses running containers of a service. They can be unpaused with `docker compose unpause`.
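A sketch of pausing and later resuming a single service (the `web` name is hypothetical):

```console
$ docker compose pause web
$ docker compose unpause web
```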