mirror of https://github.com/docker/docs.git
Merge content from public and private master branches (#1120)
This commit is contained in:
parent b22efe3567
commit ae9c87e741
@ -1340,7 +1340,7 @@ manuals:
title: Troubleshoot DDE issues on Mac
- path: /ee/desktop/troubleshoot/windows-issues/
title: Troubleshoot DDE issues on Windows
- sectiontitle: Universal Control Plane
section:
- path: /ee/ucp/
title: Universal Control Plane overview

@ -1557,7 +1557,7 @@ manuals:
path: /ee/ucp/kubernetes/layer-7-routing/
- title: Create a service account for a Kubernetes app
path: /ee/ucp/kubernetes/create-service-account/
- title: Install a CNI plugin
- title: Install an unmanaged CNI plugin
path: /ee/ucp/kubernetes/install-cni-plugin/
- title: Kubernetes network encryption
path: /ee/ucp/kubernetes/kubernetes-network-encryption/

@ -1565,8 +1565,12 @@ manuals:
path: /ee/ucp/kubernetes/use-csi/
- sectiontitle: Persistent Storage
section:
- title: Use NFS storage
- title: Use NFS Storage
path: /ee/ucp/kubernetes/storage/use-nfs-volumes/
- title: Use Azure Disk Storage
path: /ee/ucp/kubernetes/storage/use-azure-disk/
- title: Use Azure Files Storage
path: /ee/ucp/kubernetes/storage/use-azure-files/
- title: Use AWS EBS Storage
path: /ee/ucp/kubernetes/storage/configure-aws-storage/
- title: Configure iSCSI

@ -2402,6 +2406,14 @@ manuals:
title: Create and manage organizations
- path: /ee/dtr/admin/manage-users/permission-levels/
title: Permission levels
- sectiontitle: Manage webhooks
section:
- title: Create and manage webhooks
path: /ee/dtr/admin/manage-webhooks/
- title: Use the web interface
path: /ee/dtr/admin/manage-webhooks/use-the-web-ui/
- title: Use the API
path: /ee/dtr/admin/manage-webhooks/use-the-api/
- sectiontitle: Manage jobs
section:
- path: /ee/dtr/admin/manage-jobs/job-queue/

@ -2485,8 +2497,6 @@ manuals:
path: /ee/dtr/user/audit-repository-events/
- title: Auto-delete repository events
path: /ee/dtr/admin/configure/auto-delete-repo-events/
- path: /ee/dtr/user/create-and-manage-webhooks/
title: Create and manage webhooks
- title: Manage access tokens
path: /ee/dtr/user/access-tokens/
- title: Tag pruning
@ -1,7 +1,7 @@
---
title: Working with Docker Template
description: Working with Docker Application Template
keywords: Docker, application template, Application Designer,
keywords: Docker, application template, Application Designer
---

## Overview

@ -31,7 +31,6 @@ Define the application dependencies.
import redis
from flask import Flask

app = Flask(__name__)
cache = redis.Redis(host='redis', port=6379)

@ -53,9 +52,6 @@ Define the application dependencies.
    count = get_hit_count()
    return 'Hello World! I have been seen {} times.\n'.format(count)

if __name__ == "__main__":
    app.run(host="0.0.0.0", debug=True)

In this example, `redis` is the hostname of the redis container on the
application's network. We use the default port for Redis, `6379`.

@ -86,19 +82,25 @@ itself.
In your project directory, create a file named `Dockerfile` and paste the
following:

FROM python:3.4-alpine
ADD . /code
FROM python:3.7-alpine
WORKDIR /code
ENV FLASK_APP app.py
ENV FLASK_RUN_HOST 0.0.0.0
RUN apk add --no-cache gcc musl-dev linux-headers
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
CMD ["python", "app.py"]
COPY . .
CMD ["flask", "run"]

This tells Docker to:

* Build an image starting with the Python 3.4 image.
* Add the current directory `.` into the path `/code` in the image.
* Build an image starting with the Python 3.7 image.
* Set the working directory to `/code`.
* Install the Python dependencies.
* Set the default command for the container to `python app.py`.
* Set environment variables used by the `flask` command.
* Install gcc so Python packages such as MarkupSafe and SQLAlchemy can compile speedups.
* Copy `requirements.txt` and install the Python dependencies.
* Copy the current directory `.` in the project to the workdir `.` in the image.
* Set the default command for the container to `flask run`.

For more information on how to write Dockerfiles, see the [Docker user
guide](/engine/tutorials/dockerimages.md#building-an-image-from-a-dockerfile)

@ -115,7 +117,7 @@ the following:
web:
  build: .
  ports:
    - "5000:5000"
redis:
  image: "redis:alpine"

@ -161,13 +163,13 @@ image pulled from the Docker Hub registry.
Compose pulls a Redis image, builds an image for your code, and starts the
services you defined. In this case, the code is statically copied into the image at build time.

2. Enter `http://0.0.0.0:5000/` in a browser to see the application running.
2. Enter http://localhost:5000/ in a browser to see the application running.

If you're using Docker natively on Linux, Docker Desktop for Mac, or Docker Desktop for
Windows, then the web app should now be listening on port 5000 on your
Docker daemon host. Point your web browser to `http://localhost:5000` to
Docker daemon host. Point your web browser to http://localhost:5000 to
find the `Hello World` message. If this doesn't resolve, you can also try
`http://0.0.0.0:5000`.
http://127.0.0.1:5000.
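
If you prefer the command line, the same check works from a terminal (a minimal sketch, assuming `curl` is installed on the host and this is the first hit):

```bash
$ curl http://localhost:5000/
Hello World! I have been seen 1 times.
```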
If you're using Docker Machine on a Mac or Windows, use `docker-machine ip
MACHINE_VM` to get the IP address of your Docker host. Then, open

@ -219,15 +221,19 @@ Edit `docker-compose.yml` in your project directory to add a [bind mount](/engin
web:
  build: .
  ports:
    - "5000:5000"
  volumes:
    - .:/code
  environment:
    FLASK_ENV: development
redis:
  image: "redis:alpine"

The new `volumes` key mounts the project directory (current directory) on the
host to `/code` inside the container, allowing you to modify the code on the
fly, without having to rebuild the image.
fly, without having to rebuild the image. The `environment` key sets the
`FLASK_ENV` environment variable, which tells `flask run` to run in development
mode and reload the code on change. This mode should only be used in development.

## Step 6: Re-build and run the app with Compose

@ -1,7 +1,7 @@
---
description: Learn how to read container logs locally when using a third party logging solution.
keywords: docker, logging, driver
title: Using docker logs to read container logs for remote logging drivers
title: Use docker logs to read container logs for remote logging drivers
---

## Overview

@ -27,7 +27,8 @@ Starting with Docker Engine Enterprise 18.03.1-ee-1, you can use `docker logs` t
logs regardless of the configured logging driver or plugin. This capability, sometimes referred to
as dual logging, allows you to use `docker logs` to read container logs locally in a consistent format,
regardless of the remote log driver used, because the engine is configured to log information to the "local"
logging driver. Refer to [Configure the default logging driver](/configure) for additional information.
logging driver. Refer to [Configure the default logging driver](/config/containers/logging/configure) for additional information.
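
As a minimal sketch of what this enables (assuming an Enterprise engine whose daemon is configured with a remote driver such as `syslog`, with a hypothetical collector address), `docker logs` keeps working locally:

```bash
# /etc/docker/daemon.json (hypothetical collector address):
# {
#   "log-driver": "syslog",
#   "log-opts": { "syslog-address": "udp://logs.example.com:514" }
# }

docker run -d --name webserver nginx

# Logs are shipped to syslog, but dual logging keeps a local copy readable with:
docker logs webserver
```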
## Prerequisites

@ -102,7 +102,7 @@ keep image size small:

There are limitations around sharing data amongst nodes of a swarm service.
If you use [Docker for AWS](/docker-for-aws/persistent-data-volumes.md) or
[Docker for Azure](docker-for-azure/persistent-data-volumes.md), you can use the
[Docker for Azure](/docker-for-azure/persistent-data-volumes.md), you can use the
Cloudstor plugin to share data amongst your swarm service nodes. You can also
write your application data into a separate database which supports simultaneous
updates.

@ -106,7 +106,7 @@ is about older releases of Docker for Mac.

If, after installing Docker for Mac, you [change the name of your macOS user
account and home folder](https://support.apple.com/en-us/HT201548), Docker for
Mac fails to start. [Reset to Factory Defaults](index.md#reset) is the simplest
Mac fails to start. [Reset to Factory Defaults](/docker-for-mac/index/#reset) is the simplest
fix, but you'll lose all your settings, containers, images, etc.

To preserve them, open the `~/Library/Group

@ -246,7 +246,7 @@ Starting with Docker for Mac Beta 27 and Stable 1.12.3, all trusted certificate
authorities (CAs) (root or intermediate) are supported.

For full information on adding server and client side certs, see
[Add TLS certificates](index.md#add-tls-certificates) in the Getting Started topic.
[Add TLS certificates](/docker-for-mac/index/#add-tls-certificates) in the Getting Started topic.

### How do I add client certificates?

@ -256,7 +256,7 @@ in `~/.docker/certs.d/<MyRegistry>:<Port>/client.cert` and
`~/.docker/certs.d/<MyRegistry>:<Port>/client.key`.
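
As an illustration (a sketch assuming a hypothetical registry `myregistry.example.com:5000` and certificate files already issued), placing the client certs looks like this:

```bash
# Create the per-registry directory Docker looks in for client certificates.
mkdir -p ~/.docker/certs.d/myregistry.example.com:5000

# Copy the client certificate and key into place (file names are hypothetical).
cp client.cert client.key ~/.docker/certs.d/myregistry.example.com:5000/
```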
For full information on adding server and client side certs, see
[Add TLS certificates](index.md#add-tls-certificates) in the Getting Started topic.
[Add TLS certificates](/docker-for-mac/index/#add-tls-certificates) in the Getting Started topic.

### Can I pass through a USB device to a container?

@ -1,6 +1,5 @@
---
title: Use Docker Desktop Enterprise on Mac
description: Exploring the Mac user interface
keywords: Docker EE, Windows, Mac, Docker Desktop, Enterprise
---

@ -10,24 +10,24 @@ Docker Trusted Registry has a global setting for repository event auto-deletion.

## Steps

1. In your browser, navigate to `https://<dtr-url>` and log in with your UCP credentials.
1. In your browser, navigate to `https://<dtr-url>` and log in with your admin credentials.

2. Select **System** on the left navigation pane which will display the **Settings** page by default.
2. Select **System** from the left navigation pane which displays the **Settings** page by default.

3. Scroll down to **Repository Events** and turn on ***Auto-Deletion***.

   {: .img-fluid .with-border}

4. Specify the conditions under which event auto-deletion is triggered.

   {: .img-fluid .with-border}

   DTR allows you to set your auto-deletion conditions based on the following optional repository event attributes:

   | Name                 | Description                                                                               | Example    |
   |:---------------------|:------------------------------------------------------------------------------------------|:-----------|
   | Age                  | Lets you remove events older than your specified number of hours, days, weeks, or months  | `2 months` |
   | Max number of events | Lets you specify the maximum number of events allowed in the repositories.                 | `6000`     |

If you check and specify both, events in your repositories will be removed during garbage collection if either condition is met. You should see a confirmation message right away.

@ -35,7 +35,7 @@ If you check and specify both, events in your repositories will be removed durin

6. Navigate to **System > Job Logs** to confirm that `onlinegc` has happened.

   {: .img-fluid .with-border}

## Where to go next

@ -0,0 +1,36 @@
---
title: Manage webhooks
description: Learn how to create, configure, and test webhooks in Docker Trusted Registry.
keywords: registry, webhooks
redirect_from:
  - /datacenter/dtr/2.5/guides/user/create-and-manage-webhooks/
  - /ee/dtr/user/create-and-manage-webhooks/
---

You can configure DTR to automatically post event notifications to a webhook URL of your choosing. This lets you build complex CI and CD pipelines with your Docker images. The following is a complete list of event types you can trigger webhook notifications for via the [web interface](use-the-web-ui) or the [API](use-the-api).

## Webhook types

| Event Type | Scope | Access Level | Availability |
| --------------------------------------- | ----------------------- | ---------------- | ------------ |
| Tag pushed to repository (`TAG_PUSH`) | Individual repositories | Repository admin | Web UI & API |
| Tag pulled from repository (`TAG_PULL`) | Individual repositories | Repository admin | Web UI & API |
| Tag deleted from repository (`TAG_DELETE`) | Individual repositories | Repository admin | Web UI & API |
| Manifest pushed to repository (`MANIFEST_PUSH`) | Individual repositories | Repository admin | Web UI & API |
| Manifest pulled from repository (`MANIFEST_PULL`) | Individual repositories | Repository admin | Web UI & API |
| Manifest deleted from repository (`MANIFEST_DELETE`) | Individual repositories | Repository admin | Web UI & API |
| Security scan completed (`SCAN_COMPLETED`) | Individual repositories | Repository admin | Web UI & API |
| Security scan failed (`SCAN_FAILED`) | Individual repositories | Repository admin | Web UI & API |
| Image promoted from repository (`PROMOTION`) | Individual repositories | Repository admin | Web UI & API |
| Image mirrored from repository (`PUSH_MIRRORING`) | Individual repositories | Repository admin | Web UI & API |
| Image mirrored from remote repository (`POLL_MIRRORING`) | Individual repositories | Repository admin | Web UI & API |
| Repository created, updated, or deleted (`REPO_CREATED`, `REPO_UPDATED`, and `REPO_DELETED`) | Namespaces / Organizations | Namespace / Org owners | API only |
| Security scanner update completed (`SCANNER_UPDATE_COMPLETED`) | Global | DTR admin | API only |

You must have admin privileges to a repository or namespace in order to
subscribe to its webhook events. For example, a user must be an admin of repository "foo/bar" to subscribe to its tag push events. A DTR admin can subscribe to any event.

## Where to go next

- [Manage webhooks via the web interface](use-the-web-ui)
- [Manage webhooks via the API](use-the-api)

@ -0,0 +1,311 @@
---
title: Manage webhooks via the API
description: Learn how to create, configure, and test webhooks for DTR using the API.
keywords: dtr, webhooks, api, registry
---

## Prerequisite

See [Webhook types](/ee/dtr/admin/manage-webhooks/index.md#webhook-types) for a list of events you can trigger notifications for via the API.

## API Base URL

Your DTR hostname serves as the base URL for your API requests.

## Swagger API explorer

From the DTR web interface, click **API** on the bottom left navigation pane to explore the API resources and endpoints. Click **Execute** to send your API request.

## API requests via curl

You can use [curl](https://curl.haxx.se/docs/manpage.html) to send HTTP or HTTPS API requests. Note that you will have to specify `skipTLSVerification: true` on your request in order to test the webhook endpoint over HTTP.

### Example curl request

```bash
curl -u test-user:$TOKEN -X POST "https://dtr-example.com/api/v0/webhooks" -H "accept: application/json" -H "content-type: application/json" -d "{ \"endpoint\": \"https://webhook.site/441b1584-949d-4608-a7f3-f240bdd31019\", \"key\": \"maria-testorg/lab-words\", \"skipTLSVerification\": true, \"type\": \"TAG_PULL\"}"
```

### Example JSON response

```json
{
  "id": "b7bf702c31601efb4796da59900ddc1b7c72eb8ca80fdfb1b9fecdbad5418155",
  "type": "TAG_PULL",
  "key": "maria-testorg/lab-words",
  "endpoint": "https://webhook.site/441b1584-949d-4608-a7f3-f240bdd31019",
  "authorID": "194efd8e-9ee6-4d43-a34b-eefd9ce39087",
  "createdAt": "2019-05-22T01:55:20.471286995Z",
  "lastSuccessfulAt": "0001-01-01T00:00:00Z",
  "inactive": false,
  "tlsCert": "",
  "skipTLSVerification": true
}
```

## Subscribe to events

To subscribe to events, send a `POST` request to
`/api/v0/webhooks` with the following JSON payload:

### Example usage

```
{
  "type": "TAG_PUSH",
  "key": "foo/bar",
  "endpoint": "https://example.com"
}
```

The keys in the payload are:

- `type`: The event type to subscribe to.
- `key`: The namespace/organization or repo to subscribe to. For example, "foo/bar" to subscribe to
pushes to the "bar" repository within the namespace/organization "foo".
- `endpoint`: The URL to send the JSON payload to.

Normal users **must** supply a "key" to scope a particular webhook event to
a repository or a namespace/organization. DTR admins can choose to omit this,
meaning a POST event notification of your specified type will be sent for all DTR repositories and namespaces.

### Receive a payload

Whenever your specified event type occurs, DTR will send a POST request to the given
endpoint with a JSON-encoded payload. The payload will always have the
following wrapper:

```
{
  "type": "...",
  "createdAt": "2012-04-23T18:25:43.511Z",
  "contents": {...}
}
```

- `type` refers to the event type received at the specified subscription endpoint.
- `contents` refers to the payload of the event itself. Each event is different, therefore the
structure of the JSON object in `contents` will change depending on the event
type. See [Content structure](#content-structure) for more details.

### Test payload subscriptions

Before subscribing to an event, you can view and test your endpoints using
fake data. To send a test payload, send a `POST` request to
`/api/v0/webhooks/test` with the following payload:

```
{
  "type": "...",
  "endpoint": "https://www.example.com/"
}
```

Change `type` to the event type that you want to receive. DTR will then send
an example payload to your specified endpoint. The example
payload sent is always the same.

## Content structure

Comments (following `//`) are for informational purposes only, and the example payloads have been clipped for brevity.

### Repository event content structure

**Tag push**

```
{
  "namespace": "",    // (string) namespace/organization for the repository
  "repository": "",   // (string) repository name
  "tag": "",          // (string) the name of the tag just pushed
  "digest": "",       // (string) sha256 digest of the manifest the tag points to (e.g. "sha256:0afb...")
  "imageName": "",    // (string) the fully-qualified image name including DTR host used to pull the image (e.g. 10.10.10.1/foo/bar:tag)
  "os": "",           // (string) the OS for the tag's manifest
  "architecture": "", // (string) the architecture for the tag's manifest
  "author": "",       // (string) the username of the person who pushed the tag
  "pushedAt": "",     // (string) JSON-encoded timestamp of when the push occurred
  ...
}
```

**Tag delete**

```
{
  "namespace": "",    // (string) namespace/organization for the repository
  "repository": "",   // (string) repository name
  "tag": "",          // (string) the name of the tag just deleted
  "digest": "",       // (string) sha256 digest of the manifest the tag points to (e.g. "sha256:0afb...")
  "imageName": "",    // (string) the fully-qualified image name including DTR host used to pull the image (e.g. 10.10.10.1/foo/bar:tag)
  "os": "",           // (string) the OS for the tag's manifest
  "architecture": "", // (string) the architecture for the tag's manifest
  "author": "",       // (string) the username of the person who deleted the tag
  "deletedAt": "",    // (string) JSON-encoded timestamp of when the delete occurred
  ...
}
```

**Manifest push**

```
{
  "namespace": "",    // (string) namespace/organization for the repository
  "repository": "",   // (string) repository name
  "digest": "",       // (string) sha256 digest of the manifest (e.g. "sha256:0afb...")
  "imageName": "",    // (string) the fully-qualified image name including DTR host used to pull the image (e.g. 10.10.10.1/foo/bar@sha256:0afb...)
  "os": "",           // (string) the OS for the manifest
  "architecture": "", // (string) the architecture for the manifest
  "author": "",       // (string) the username of the person who pushed the manifest
  ...
}
```

**Manifest delete**

```
{
  "namespace": "",    // (string) namespace/organization for the repository
  "repository": "",   // (string) repository name
  "digest": "",       // (string) sha256 digest of the manifest (e.g. "sha256:0afb...")
  "imageName": "",    // (string) the fully-qualified image name including DTR host used to pull the image (e.g. 10.10.10.1/foo/bar@sha256:0afb...)
  "os": "",           // (string) the OS for the manifest
  "architecture": "", // (string) the architecture for the manifest
  "author": "",       // (string) the username of the person who deleted the manifest
  "deletedAt": "",    // (string) JSON-encoded timestamp of when the delete occurred
  ...
}
```

**Security scan completed**

```
{
  "namespace": "",  // (string) namespace/organization for the repository
  "repository": "", // (string) repository name
  "tag": "",        // (string) the name of the tag scanned
  "imageName": "",  // (string) the fully-qualified image name including DTR host used to pull the image (e.g. 10.10.10.1/foo/bar:tag)
  "scanSummary": {
    "namespace": "",          // (string) repository's namespace/organization name
    "repository": "",         // (string) repository name
    "tag": "",                // (string) the name of the tag just pushed
    "critical": 0,            // (int) number of critical issues, where CVSS >= 7.0
    "major": 0,               // (int) number of major issues, where CVSS >= 4.0 && CVSS < 7.0
    "minor": 0,               // (int) number of minor issues, where CVSS > 0 && CVSS < 4.0
    "last_scan_status": 0,    // (int) enum; see scan status section
    "check_completed_at": "", // (string) JSON-encoded timestamp of when the scan completed
    ...
  }
}
```

**Security scan failed**

```
{
  "namespace": "",  // (string) namespace/organization for the repository
  "repository": "", // (string) repository name
  "tag": "",        // (string) the name of the tag scanned
  "imageName": "",  // (string) the fully-qualified image name including DTR host used to pull the image (e.g. 10.10.10.1/foo/bar@sha256:0afb...)
  "error": "",      // (string) the error that occurred while scanning
  ...
}
```

### Namespace-specific event structure

**Repository event (created/updated/deleted)**

```
{
  "namespace": "",  // (string) repository's namespace/organization name
  "repository": "", // (string) repository name
  "event": "",      // (string) enum: "REPO_CREATED", "REPO_DELETED" or "REPO_UPDATED"
  "author": "",     // (string) the name of the user responsible for the event
  "data": {}        // (object) when updating or creating a repo this follows the same format as an API response from /api/v0/repositories/{namespace}/{repository}
}
```

### Global event structure

**Security scanner update complete**

```
{
  "scanner_version": "",
  "scanner_updated_at": "", // (string) JSON-encoded timestamp of when the scanner updated
  "db_version": 0,          // (int) newly updated database version
  "db_updated_at": "",      // (string) JSON-encoded timestamp of when the database updated
  "success": <true|false>,  // (bool) whether the update was successful
  "replicas": {             // (object) a map keyed by replica ID containing update information for each replica
    "replica_id": {
      "db_updated_at": "",  // (string) JSON-encoded time of when the replica updated
      "version": "",        // (string) version updated to
      "replica_id": ""      // (string) replica ID
    },
    ...
  }
}
```

### Security scan status codes

- 0: **Failed**. An error occurred checking an image's layer.
- 1: **Unscanned**. The image has not yet been scanned.
- 2: **Scanning**. Scanning in progress.
- 3: **Pending**. The image will be scanned when a worker is available.
- 4: **Scanned**. The image has been scanned but vulnerabilities have not yet been checked.
- 5: **Checking**. The image is being checked for vulnerabilities.
- 6: **Completed**. The image has been fully security scanned.

## View and manage existing subscriptions

### View all subscriptions

To view existing subscriptions, send a `GET` request to `/api/v0/webhooks`. As
a normal user (i.e. not a DTR admin), this will show all of your
current subscriptions across every namespace/organization and repository. As a DTR
admin, this will show **every** webhook configured for your DTR.

The API response will be in the following format:

```
[
  {
    "id": "",       // (string): UUID of the webhook subscription
    "type": "",     // (string): webhook event type
    "key": "",      // (string): the individual resource this subscription is scoped to
    "endpoint": "", // (string): the endpoint to send POST event notifications to
    "authorID": "", // (string): the user ID responsible for creating the subscription
    "createdAt": "" // (string): JSON-encoded datetime when the subscription was created
  },
  ...
]
```

For more information, [view the API documentation](/reference/dtr/{{site.dtr_version}}/api/).
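
As a quick illustration (same hypothetical host and auth convention as the earlier examples):

```bash
curl -u test-user:$TOKEN -X GET "https://dtr-example.com/api/v0/webhooks" \
  -H "accept: application/json"
```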

### View subscriptions for a particular resource

You can also view subscriptions for a given resource that you are an
admin of. For example, if you have admin rights to the repository
"foo/bar", you can view all subscriptions (even other people's) from a
particular API endpoint. These endpoints are:

- `GET /api/v0/repositories/{namespace}/{repository}/webhooks`: View all
webhook subscriptions for a repository
- `GET /api/v0/repositories/{namespace}/webhooks`: View all webhook subscriptions for a
namespace/organization

### Delete a subscription

To delete a webhook subscription, send a `DELETE` request to
`/api/v0/webhooks/{id}`, replacing `{id}` with the webhook subscription ID
which you would like to delete.

Only a DTR admin or an admin for the resource with the event subscription can delete a subscription. As a normal user, you can only
delete subscriptions for repositories which you manage.
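
For example (using the hypothetical subscription ID returned by the earlier create request):

```bash
curl -u test-user:$TOKEN -X DELETE \
  "https://dtr-example.com/api/v0/webhooks/b7bf702c31601efb4796da59900ddc1b7c72eb8ca80fdfb1b9fecdbad5418155"
```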

## Where to go next

- [Manage jobs](/ee/dtr/admin/manage-jobs/job-queue/)

@ -0,0 +1,54 @@
---
title: Manage repository webhooks via the web interface
description: Learn how to create, configure, and test repository webhooks for DTR using the web interface.
keywords: dtr, webhooks, ui, web interface, registry
---

## Prerequisites

- You must have admin privileges to the repository in order to create a webhook.
- See [Webhook types](/ee/dtr/admin/manage-webhooks/index.md#webhook-types) for a list of events you can trigger notifications for using the web interface.

## Create a webhook for your repository

1. In your browser, navigate to `https://<dtr-url>` and log in with your credentials.

2. Select **Repositories** from the left navigation pane, and then click on the name of the repository that you want to view. Note that you will have to click on the repository name following the `/` after the specific namespace for your repository.

3. Select the **Webhooks** tab, and click **New Webhook**.

   {: .with-border}

4. From the drop-down list, select the event that will trigger the webhook.

5. Set the URL which will receive the JSON payload. Click **Test** next to the **Webhook URL** field, so that you can validate that the integration is working. At your specified URL, you should receive a JSON payload for your chosen event type notification.

   ```json
   {
     "type": "TAG_PUSH",
     "createdAt": "2019-05-15T19:39:40.607337713Z",
     "contents": {
       "namespace": "foo",
       "repository": "bar",
       "tag": "latest",
       "digest": "sha256:b5bb9d8014a0f9b1d61e21e796d78dccdf1352f23cd32812f4850b878ae4944c",
       "imageName": "foo/bar:latest",
       "os": "linux",
       "architecture": "amd64",
       "author": "",
       "pushedAt": "2015-01-02T15:04:05Z"
     },
     "location": "/repositories/foo/bar/tags/latest"
   }
   ```

6. Expand "Show advanced settings" to paste the TLS certificate associated with your webhook URL. For testing purposes, you can test over HTTP instead of HTTPS.

7. Click **Create** to save. Once saved, your webhook is active and starts sending POST notifications whenever your chosen event type is triggered.

   {: .with-border}

As a repository admin, you can add or delete a webhook at any point. Additionally, you can create, view, and delete webhooks for your organization or trusted registry [using the API](use-the-api).

## Where to go next

- [Manage webhooks via the API](use-the-api)

Binary file not shown.
Before: 266 KiB | After: 45 KiB
Binary file not shown.
Before: 267 KiB | After: 34 KiB

@ -13,9 +13,9 @@ In the following section, we will show you how to view and audit the list of eve
## View List of Events

As of DTR 2.3, admins were able to view a list of DTR events [using the API](/datacenter/dtr/2.3/reference/api/#!/events/GetEvents). DTR 2.6 enhances that feature by showing a permission-based events list for each repository page on the web interface. To view the list of events within a repository, do the following:
1. Navigate to `https://<dtr-url>` and log in with your UCP credentials.
1. Navigate to `https://<dtr-url>` and log in with your DTR credentials.

2. Select **Repositories** on the left navigation pane, and then click on the name of the repository that you want to view. Note that you will have to click on the repository name following the `/` after the specific namespace for your repository.
2. Select **Repositories** from the left navigation pane, and then click on the name of the repository that you want to view. Note that you will have to click on the repository name following the `/` after the specific namespace for your repository.

3. Select the **Activity** tab. You should see a paginated list of the latest events based on your repository permission level. By default, **Activity** shows the latest `10` events and excludes pull events, which are only visible to repository and DTR admins.
   * If you're a repository or a DTR admin, uncheck "Exclude pull" to view pull events. This should give you a better understanding of who is consuming your images.

@ -1,50 +0,0 @@
---
title: Manage webhooks
description: Learn how to create, configure, and test webhooks in Docker Trusted Registry.
keywords: registry, webhooks
redirect_from:
  - /datacenter/dtr/2.5/guides/user/create-and-manage-webhooks/
---

DTR has webhooks so that you can run custom logic when an event happens. This
lets you build complex CI and CD pipelines with your Docker images.

## Create a webhook

To create a webhook, navigate to the **repository details** page, choose
the **Webhooks** tab, and click **New Webhook**.

{: .with-border}

Select the event that will trigger the webhook, and set the URL to send
information about the event. Once everything is set up, click **Test** for
DTR to send a JSON payload to the URL you set up, so that you can validate
that the integration is working. You'll get an event that looks like this:

```json
{
  "contents": {
    "architecture": "amd64",
    "author": "",
    "digest": "sha256:b5bb9d8014a0f9b1d61e21e796d78dccdf1352f23cd32812f4850b878ae4944c",
    "imageName": "example.com/foo/bar:latest",
    "namespace": "foo",
    "os": "linux",
    "pushedAt": "2015-01-02T15:04:05Z",
    "repository": "bar",
    "tag": "latest"
  },
  "createdAt": "2017-06-20T01:29:53.046620425Z",
  "location": "/repositories/foo/bar/tags/latest",
  "type": "TAG_PUSH"
}
```

Once you save, your webhook is active and starts sending notifications when
the event is triggered.

{: .with-border}

## Where to go next

* [Create promotion policies](promotion-policies/index.md)

@ -37,7 +37,7 @@ Vulnerability Database that is installed on your DTR instance. When
this database is updated, DTR reviews the indexed components for newly
discovered vulnerabilities.

DTR scans both Linux and Windows images, but but by default Docker doesn't push
DTR scans both Linux and Windows images, but by default Docker doesn't push
foreign image layers for Windows images so DTR won't be able to scan them. If
you want DTR to scan your Windows images, [configure Docker to always push image
layers](pull-and-push-images.md), and it will scan the non-foreign layers.
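
One way this is commonly configured (a hedged sketch; the linked page is the authoritative reference, and the DTR hostname here is hypothetical) is via the daemon's `allow-nondistributable-artifacts` setting, which makes the engine push foreign layers to the registries you list:

```bash
# /etc/docker/daemon.json (hypothetical DTR hostname):
# {
#   "allow-nondistributable-artifacts": ["dtr.example.com"]
# }

# Restart the daemon, then push the Windows image so DTR can scan all layers.
sudo systemctl restart docker
docker push dtr.example.com/foo/windows-app:latest
```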

@ -7,7 +7,7 @@ redirect_from:
- /ee/dtr/user/manage-images/sign-images/manage-trusted-repositories/
---

2 Key components of the Docker Trusted Registry is the Notary Server and Notary
2 Key components of the Docker Trusted Registry are the Notary Server and Notary
Signer. These 2 containers give us the required components to use Docker Content
Trust right out of the box. [Docker Content
Trust](/engine/security/trust/content_trust/) allows us to sign image tags,

@ -36,17 +36,41 @@ Name: `is-admin`, Filter: (user defined) for identifying if the user is an admin

### ADFS integration values

ADFS integration requires these values:
ADFS integration requires the following steps:

- Service provider metadata URI. This value is the URL for UCP, qualified with `/enzi/v0/saml/metadata`. For example, `https://111.111.111.111/enzi/v0/saml/metadata`.
- Attribute Store: Active Directory.
- Add LDAP Attribute = Email Address; Outgoing Claim Type: Email Address
- Add LDAP Attribute = Display-Name; Outgoing Claim Type: Common Name
- Claim using Custom Rule. For example, `c:[Type == "http://schemas.xmlsoap.org/claims/CommonName"]
=> issue(Type = "fullname", Issuer = c.Issuer, OriginalIssuer = c.OriginalIssuer, Value = c.Value, ValueType = c.ValueType);`
- Outgoing claim type: Name ID
- Outgoing name ID format: Transient Identifier
- Pass through all claim values

1. Add a relying party trust. For example, see [Create a relying party trust](https://docs.microsoft.com/en-us/windows-server/identity/ad-fs/operations/create-a-relying-party-trust).

2. Obtain the service provider metadata URI. This value is the URL for UCP, qualified with `/enzi/v0/saml/metadata`. For example, `https://111.111.111.111/enzi/v0/saml/metadata`.

3. Add claim rules:

   * Convert values from AD to SAML
     - Display-name : Common Name
     - E-Mail-Addresses : E-Mail Address
     - SAM-Account-Name : Name ID
   * Create full name for UCP (custom rule):
     ```
     c:[Type == "http://schemas.xmlsoap.org/claims/CommonName"]
     => issue(Type = "fullname", Issuer = c.Issuer, OriginalIssuer = c.OriginalIssuer, Value = c.Value,
     ValueType = c.ValueType);
     ```
   * Transform account name to Name ID:
     - Incoming type: Name ID
     - Incoming format: Unspecified
     - Outgoing claim type: Name ID
     - Outgoing format: Transient ID
   * Pass admin value to allow admin access based on AD group (send group membership as claim):
     - Users group : Your admin group
     - Outgoing claim type: is-admin
     - Outgoing claim value: 1
   * Configure group membership (for more complex organizations with multiple groups to manage access)
     - Send LDAP attributes as claims
     - Attribute store: Active Directory
     - Add two rows with the following information:
       - LDAP attribute = email address; outgoing claim type: email address
       - LDAP attribute = Display-Name; outgoing claim type: common name
     - Mapping:
       - Token-Groups - Unqualified Names : member-of

## Configure the SAML integration

@ -28,14 +28,15 @@ your cluster.

For production-grade deployments, follow these rules of thumb:

* For high availability with minimal
network overhead, the recommended number of manager nodes is 3. The recommended maximum number of manager
nodes is 5. Adding too many manager nodes to the cluster can lead to performance degradation,
because changes to configurations must be replicated across all manager nodes.
* When a manager node fails, the number of failures tolerated by your cluster
decreases. Don't leave that node offline for too long.
* You should distribute your manager nodes across different availability
zones. This way your cluster can continue working even if an entire
availability zone goes down.
* Adding many manager nodes to the cluster might lead to performance
degradation, as changes to configurations need to be replicated across all
manager nodes. The maximum advisable is seven manager nodes (a quick way to
list your current managers is shown below).
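
A minimal check (a sketch, assuming you run it on a manager node; the filter is a standard `docker node ls` option):

```bash
# List only the manager nodes in the swarm.
docker node ls --filter "role=manager"
```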

## Where to go next

@ -52,17 +52,6 @@ Make sure you follow the [UCP System requirements](system-requirements.md)
for opening networking ports. Ensure that your hardware or software firewalls
are open appropriately or disabled.

> Ubuntu 14.04 mounts
>
> For UCP to install correctly on Ubuntu 14.04, `/mnt` and other mounts
> must be shared:
> ```
> sudo mount --make-shared /mnt
> sudo mount --make-shared /
> sudo mount --make-shared /run
> sudo mount --make-shared /dev
> ```

To install UCP:

1. Use ssh to log in to the host where you want to install UCP.

@ -86,12 +75,12 @@ To install UCP:
To find what other options are available in the install command, check the
[reference documentation](/reference/ucp/3.1/cli/install.md).

> Custom CNI plugins
> Custom Container Networking Interface (CNI) plugins
>
> If you want to use a third-party Container Networking Interface (CNI) plugin,
> like Flannel or Weave, modify the previous command line to include the
> `--cni-installer-url` option. Learn how to
> [install a CNI plugin](../../kubernetes/install-cni-plugin.md).
> UCP will install [Project Calico](https://docs.projectcalico.org/v3.7/introduction/)
> for container-to-container communication for Kubernetes. A platform operator may
> choose to install an alternative CNI plugin, such as Weave or Flannel. Please see
> [Install an unmanaged CNI plugin](/ee/ucp/kubernetes/install-cni-plugin/).
{: .important}

## Step 5: License your installation

@ -23,7 +23,7 @@ For an example, see [Deploy stateless app with RBAC](deploy-stateless-app.md).

## Subjects

A subject represents a user, team, organization, or service account. A subject
A subject represents a user, team, organization, or a service account. A subject
can be granted a role that defines permitted operations against one or more
resource sets.

@ -34,19 +34,19 @@ resource sets.
- **Organization**: A group of teams that share a specific set of permissions,
defined by the roles of the organization.
- **Service account**: A Kubernetes object that enables a workload to access
cluster resources that are assigned to a namespace.
cluster resources which are assigned to a namespace.

Learn to [create and configure users and teams](create-users-and-teams-manually.md).

## Roles

Roles define what operations can be done by whom. A role is a set of permitted
operations against a type of resource, like a container or volume, that's
assigned to a user or team with a grant.
operations against a type of resource, like a container or volume, which is
assigned to a user or a team with a grant.

For example, the built-in role, **Restricted Control**, includes permission to
For example, the built-in role, **Restricted Control**, includes permissions to
view and schedule nodes but not to update nodes. A custom **DBA** role might
include permissions to `r-w-x` volumes and secrets.
include permissions to `r-w-x` (read, write, and execute) volumes and secrets.

Most organizations use multiple roles to fine-tune the appropriate access. A
given team or user may have different roles provided to them depending on what

@ -71,7 +71,7 @@ To control user access, cluster resources are grouped into Docker Swarm
is a logical area for a Kubernetes cluster. Kubernetes comes with a `default`
namespace for your cluster objects, plus two more namespaces for system and
public resources. You can create custom namespaces, but unlike Swarm
collections, namespaces _can't be nested_. Resource types that users can
collections, namespaces _cannot be nested_. Resource types that users can
access in a Kubernetes namespace include pods, deployments, network policies,
nodes, services, secrets, and many more.

@ -80,11 +80,12 @@ Together, collections and namespaces are named *resource sets*. Learn to

## Grants

A grant is made up of *subject*, *role*, and *resource set*.
A grant is made up of a *subject*, a *role*, and a *resource set*.

Grants define which users can access what resources in what way. Grants are
effectively Access Control Lists (ACLs), and when grouped together, they
provide comprehensive access policies for an entire organization.
effectively **Access Control Lists** (ACLs) which
provide comprehensive access policies for an entire organization when grouped
together.

Only an administrator can manage grants, subjects, roles, and access to
resources.

@ -96,6 +97,37 @@ resources.
> and applies grants to users and teams.
{: .important}

## Secure Kubernetes defaults

For cluster security, only users and service accounts granted the `cluster-admin` ClusterRole for
all Kubernetes namespaces via a ClusterRoleBinding can deploy pods with privileged options. This prevents a
platform user from being able to bypass the Universal Control Plane Security Model.

These privileged options include:

- `PodSpec.hostIPC` - Prevents a user from deploying a pod in the host's IPC
Namespace.
- `PodSpec.hostNetwork` - Prevents a user from deploying a pod in the host's
Network Namespace.
- `PodSpec.hostPID` - Prevents a user from deploying a pod in the host's PID
Namespace.
- `SecurityContext.allowPrivilegeEscalation` - Prevents a child process
of a container from gaining more privileges than its parent.
- `SecurityContext.capabilities` - Prevents additional [Linux
Capabilities](https://docs.docker.com/engine/security/security/#linux-kernel-capabilities)
from being added to a pod.
- `SecurityContext.privileged` - Prevents a user from deploying a [Privileged
Container](https://docs.docker.com/engine/reference/run/#runtime-privilege-and-linux-capabilities).
- `Volume.hostPath` - Prevents a user from mounting a path from the host into
the container. This could be a file, a directory, or even the Docker Socket.

If a user without a cluster admin role tries to deploy a pod with any of these
privileged options, an error similar to the following example is displayed:

```bash
Error from server (Forbidden): error when creating "pod.yaml": pods "mypod" is forbidden: user "<user-id>" is not an admin and does not have permissions to use privileged mode for resource
```
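
As an illustration (a minimal sketch; the pod name and image are arbitrary), a non-admin applying a spec with `privileged: true` triggers exactly this error:

```bash
# Attempt to create a privileged pod; without cluster-admin this is rejected.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: web
    image: nginx
    securityContext:
      privileged: true
EOF
```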
|
||||
|
||||
## Where to go next
|
||||
|
||||
- [Create and configure users and teams](create-users-and-teams-manually.md)
|
||||
|
|
Binary file not shown.
After Width: | Height: | Size: 82 KiB |
Binary file not shown.
After Width: | Height: | Size: 22 KiB |
|
@ -8,19 +8,14 @@ redirect_from:
|
|||
|
||||
This topic covers deploying a layer 7 routing solution into a Docker Swarm to route traffic to Swarm services. Layer 7 routing is also referred to as an HTTP routing mesh.
|
||||
|
||||
1. [Prerequisites](#prerequisites)
|
||||
2. [Enable layer 7 routing](#enable-layer-7-routing)
|
||||
3. [Work with the core service configuration file](#work-with-the-core-service-configuration-file)
|
||||
4. [Create a dedicated network for Interlock and extensions](#create-a-dedicated-network-for-Interlock-and-extensions)
|
||||
5. [Create the Interlock service](#create-the-interlock-service)
|
||||
|
||||
## Prerequisites
|
||||
|
||||
- [Docker](https://www.docker.com) version 17.06 or later
|
||||
- Docker must be running in [Swarm mode](/engine/swarm/)
|
||||
- Internet access (see [Offline installation](./offline-install.md) for installing without internet access)
|
||||
|
||||
## Enable layer 7 routing
|
||||
## Enable layer 7 routing via UCP
|
||||
|
||||
By default, layer 7 routing is disabled, so you must first
|
||||
enable this service from the UCP web UI.
|
||||
|
||||
|
@ -28,7 +23,7 @@ enable this service from the UCP web UI.
|
|||
2. Navigate to **Admin Settings**
|
||||
3. Select **Layer 7 Routing** and then select **Enable Layer 7 Routing**
|
||||
|
||||
{: .with-border}
|
||||
{: .with-border}
|
||||
|
||||
By default, the routing mesh service listens on port 8080 for HTTP and port
|
||||
8443 for HTTPS. Change the ports if you already have services that are using
|
||||
|
@ -46,8 +41,7 @@ and attaches it to the `ucp-interlock` network. This allows both services
|
|||
to communicate.
|
||||
4. The `ucp-interlock-extension` generates a configuration to be used by
|
||||
the proxy service. By default the proxy service is NGINX, so this service
|
||||
generates a standard NGINX configuration.
|
||||
( Is this valid here????) UCP creates the `com.docker.ucp.interlock.conf-1` configuration file and uses it to configure all
|
||||
generates a standard NGINX configuration. UCP creates the `com.docker.ucp.interlock.conf-1` configuration file and uses it to configure all
|
||||
the internal components of this service.
|
||||
5. The `ucp-interlock` service takes the proxy configuration and uses it to
|
||||
start the `ucp-interlock-proxy` service.
|
||||
|
@ -55,8 +49,7 @@ start the `ucp-interlock-proxy` service.
|
|||
At this point everything is ready for you to start using the layer 7 routing
|
||||
service with your swarm workloads.
|
||||
|
||||
|
||||
The following code sample provides a default UCP configuration:
|
||||
The following code sample provides a default UCP configuration (this will be created automatically when enabling Interlock as per section [Enable layer 7 routing](#enable-layer-7-routing)):
|
||||
|
||||
```toml
|
||||
ListenAddr = ":8080"
|
||||
|
@ -78,7 +71,7 @@ PollInterval = "3s"
|
|||
ProxyStopGracePeriod = "5s"
|
||||
ProxyConstraints = ["node.labels.com.docker.ucp.orchestrator.swarm==true", "node.platform.os==linux"]
|
||||
PublishMode = "ingress"
|
||||
PublishedPort = 80
|
||||
PublishedPort = 8080
|
||||
TargetPort = 80
|
||||
PublishedSSLPort = 8443
|
||||
TargetSSLPort = 443
|
||||
|
@ -123,7 +116,12 @@ PollInterval = "3s"
|
|||
HideInfoHeaders = false
|
||||
```
|
||||
|
||||
## Enable layer 7 routing manually
|
||||
|
||||
Interlock can also be enabled from the command line by following the below sections.
|
||||
|
||||
### Work with the core service configuration file
|
||||
|
||||
Interlock uses the TOML file for the core service configuration. The following example utilizes Swarm deployment and recovery features by creating a Docker Config object:
|
||||
|
||||
```bash
|
||||
|
@ -143,9 +141,9 @@ PollInterval = "3s"
|
|||
ProxyStopGracePeriod = "3s"
|
||||
ServiceCluster = ""
|
||||
PublishMode = "ingress"
|
||||
PublishedPort = 80
|
||||
PublishedPort = 8080
|
||||
TargetPort = 80
|
||||
PublishedSSLPort = 443
|
||||
PublishedSSLPort = 8443
|
||||
TargetSSLPort = 443
|
||||
[Extensions.default.Config]
|
||||
User = "nginx"
|
||||
|
@ -166,6 +164,7 @@ $> docker network create -d overlay interlock
|
|||
```
|
||||
|
||||
### Create the Interlock service
|
||||
|
||||
Now you can create the Interlock service. Note the requirement to constrain to a manager. The
|
||||
Interlock core service must have access to a Swarm manager, however the extension and proxy services
|
||||
are recommended to run on workers. See the [Production](./production.md) section for more information
|
||||
|
|
|
@ -1,98 +1,119 @@
|
|||
---
|
||||
title: Install a CNI plugin
|
||||
description: Learn how to install a Container Networking Interface plugin on Docker Universal Control Plane.
|
||||
keywords: ucp, cli, administration, kubectl, Kubernetes, cni, Container Networking Interface, flannel, weave, ipip, calico
|
||||
title: Install an unmanaged CNI plugin
|
||||
description: Learn how to install a Container Networking Interface (CNI) plugin on Docker Universal Control Plane.
|
||||
keywords: ucp, kubernetes, cni, container networking interface, flannel, weave, calico
|
||||
---
|
||||
|
||||
For Docker Universal Control Plane, [Project Calico](https://docs.projectcalico.org/v3.0/introduction/)
|
||||
provides the secure networking functionality for the container communication with Kubernetes.
|
||||
For Docker Universal Control Plane (UCP), [Calico](https://docs.projectcalico.org/v3.7/introduction/)
|
||||
provides the secure networking functionality for container-to-container communication within
|
||||
Kubernetes. UCP handles the lifecycle of Calico and packages it with UCP
|
||||
installation and upgrade. Additionally, the Calico deployment included with
|
||||
UCP is fully supported with Docker providing guidance on the
|
||||
[CNI components](https://github.com/projectcalico/cni-plugin).
|
||||
|
||||
Docker EE supports Calico and installs the
|
||||
built-in [Calico](https://github.com/projectcalico/cni-plugin) plugin, but you can override that and
|
||||
install a Docker certified plugin.
|
||||
At install time, UCP can be configured to install an alternative CNI plugin
|
||||
to support alternative use cases. The alternative CNI plugin is certified by
|
||||
Docker and its partners, and published on Docker Hub. UCP components are still
|
||||
fully supported by Docker and respective partners. Docker will provide
|
||||
pointers to basic configuration, however for additional guidance on managing third party
|
||||
CNI components, the platform operator will need to refer to the partner documentation
|
||||
or contact that third party.
|
||||
|
||||
> **Note**: The `--cni-installer-url` option is deprecated as of UCP 3.1. It is replaced by the `--unmanaged-cni` option.
|
||||
## Install an unmanaged CNI plugin on Docker UCP
|
||||
|
||||
# Install UCP with a custom CNI plugin
|
||||
Once a platform operator has complied with [UCP system
|
||||
requirements](/ee/ucp/admin/install/system-requirements/) and
|
||||
taken into consideration any requirements for the custom CNI plugin, you can
|
||||
[run the UCP install command](/reference/ucp/3.1/cli/install/) with the `--unmanaged-cni` flag
|
||||
to bring up the platform.
|
||||
|
||||
Modify the [UCP install command-line](../admin/install/index.md#step-4-install-ucp)
|
||||
to add the `--cni-installer-url` [option](/reference/ucp/3.0/cli/install.md),
|
||||
providing a URL for the location of the CNI plugin's YAML file:
|
||||
This command will install UCP, and bring up components
|
||||
like the user interface and the RBAC engine. UCP components that
|
||||
require Kubernetes Networking, such as Metrics, will not start and will stay in
|
||||
a `Container Creating` state in Kubernetes, until a CNI is installed.
|
||||
|
||||

### Install UCP without a CNI plugin

Once connected to a manager node with the Docker Enterprise Engine installed,
you are ready to install UCP with the `--unmanaged-cni` flag.

```bash
docker container run --rm -it --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  {{ page.ucp_org }}/{{ page.ucp_repo }}:{{ page.ucp_version }} install \
  --host-address <node-ip-address> \
  --unmanaged-cni \
  --interactive
```

> **Note**: The `--unmanaged-cni` flag installs UCP without a managed CNI plugin.
> UCP and the Kubernetes components will be running, but pod-to-pod networking will
> not function until a CNI plugin is manually installed. Some UCP functionality is
> unavailable until a CNI plugin is running.

Once the installation is complete, you can access UCP in the browser.
Note that the manager node will be unhealthy, as the kubelet
reports `NetworkPluginNotReady`. Additionally, the metrics in the UCP dashboard
are unavailable, because the metrics component runs in a Kubernetes pod.

### Configure CLI access to UCP

Next, a platform operator should log in to UCP, download a UCP client bundle, and
configure the Kubernetes CLI tool, `kubectl`. See [CLI Based
Access](/ee/ucp/user-access/cli/#download-client-certificates) for more details.
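
If you prefer to script this step, the client bundle can also be downloaded with
the UCP API. The following is a minimal sketch; the UCP hostname and credentials
are placeholders, and `jq` is assumed to be available on your workstation:

```bash
# Authenticate to UCP and capture an auth token (placeholder credentials).
AUTHTOKEN=$(curl -sk -d '{"username":"admin","password":"<password>"}' \
  https://<ucp-host>/auth/login | jq -r .auth_token)

# Download and unpack the client bundle, which includes certs and env scripts.
curl -sk -H "Authorization: Bearer $AUTHTOKEN" \
  https://<ucp-host>/api/clientbundle -o bundle.zip
unzip bundle.zip

# Load the bundle's environment so kubectl and docker target the cluster.
eval "$(<env.sh)"
```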

With `kubectl`, you can see that the UCP components running on
Kubernetes are still pending, waiting for a CNI driver before becoming
available.

```bash
$ kubectl get nodes
NAME         STATUS     ROLES     AGE       VERSION
manager-01   NotReady   master    10m       v1.11.9-docker-1

$ kubectl get pods -n kube-system -o wide
NAME                           READY     STATUS              RESTARTS   AGE       IP        NODE         NOMINATED NODE
compose-565f7cf9ff-gq2gv       0/1       Pending             0          10m       <none>    <none>       <none>
compose-api-574d64f46f-r4c5g   0/1       Pending             0          10m       <none>    <none>       <none>
kube-dns-6d96c4d9c6-8jzv7      0/3       Pending             0          10m       <none>    <none>       <none>
ucp-metrics-nwt2z              0/3       ContainerCreating   0          10m       <none>    manager-01   <none>
```

### Install an unmanaged CNI plugin

You can use `kubectl` to install a custom CNI plugin on UCP.
Alternative CNI plugins include Weave, Flannel, Canal, Romana, and many more.
Platform operators have complete flexibility on what to install, but Docker
does not support the CNI plugin itself.

The steps for installing a CNI plugin typically include:

- Downloading the relevant upstream CNI binaries from
  https://github.com/containernetworking/cni/releases
- Placing them in `/opt/cni/bin`
- Downloading the relevant CNI plugin's Kubernetes manifest YAML
- Running `$ kubectl apply -f <your-custom-cni-plugin>.yaml`, as shown in the
  sketch after the plugin list below

Follow the CNI plugin documentation for specific installation
instructions.

> While troubleshooting a custom CNI plugin, you may wish to access logs
> within the kubelet. Connect to a UCP manager node and run
> `$ docker logs ucp-kubelet`.

### YAML files for CNI plugins

You must provide a correct YAML installation file for the CNI plugin, but most
of the default files work on Docker EE with no modification. Use the following
commands to get the YAML files for popular CNI plugins.

- [Flannel](https://github.com/coreos/flannel)

  ```bash
  # Get the URL for the Flannel CNI plugin.
  CNI_URL="https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml"
  ```

- [Weave](https://www.weave.works/)

  ```bash
  # Get the URL for the Weave CNI plugin.
  CNI_URL="https://cloud.weave.works/k8s/net?k8s-version=Q2xpZW50IFZlcnNpb246IHZlcnNpb24uSW5mb3tNYWpvcjoiMSIsIE1pbm9yOiI5IiwgR2l0VmVyc2lvbjoidjEuOS4zIiwgR2l0Q29tbWl0OiJkMjgzNTQxNjU0NGYyOThjOTE5ZTJlYWQzYmUzZDA4NjRiNTIzMjNiIiwgR2l0VHJlZVN0YXRlOiJjbGVhbiIsIEJ1aWxkRGF0ZToiMjAxOC0wMi0wN1QxMjoyMjoyMVoiLCBHb1ZlcnNpb246ImdvMS45LjIiLCBDb21waWxlcjoiZ2MiLCBQbGF0Zm9ybToibGludXgvYW1kNjQifQpTZXJ2ZXIgVmVyc2lvbjogdmVyc2lvbi5JbmZve01ham9yOiIxIiwgTWlub3I6IjgrIiwgR2l0VmVyc2lvbjoidjEuOC4yLWRvY2tlci4xNDMrYWYwODAwNzk1OWUyY2UiLCBHaXRDb21taXQ6ImFmMDgwMDc5NTllMmNlYWUxMTZiMDk4ZWNhYTYyNGI0YjI0MjBkODgiLCBHaXRUcmVlU3RhdGU6ImNsZWFuIiwgQnVpbGREYXRlOiIyMDE4LTAyLTAxVDIzOjI2OjE3WiIsIEdvVmVyc2lvbjoiZ28xLjguMyIsIENvbXBpbGVyOiJnYyIsIFBsYXRmb3JtOiJsaW51eC9hbWQ2NCJ9Cg=="
  ```

  If you have `kubectl` available, for example by using
  [Docker Desktop for Mac](/docker-for-mac/kubernetes.md), you can use the following
  command to get the URL for the [Weave](https://www.weave.works/) CNI plugin:

  ```bash
  # Get the URL for the Weave CNI plugin.
  CNI_URL="https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
  ```

- [Romana](http://docs.romana.io/)

  ```bash
  # Get the URL for the Romana CNI plugin.
  CNI_URL="https://raw.githubusercontent.com/romana/romana/master/docs/kubernetes/romana-kubeadm.yml"
  ```
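
With a `CNI_URL` from the list above set in your shell, applying the plugin is a
single command. A minimal sketch, assuming your UCP client bundle is loaded so
that `kubectl` talks to the cluster:

```bash
# Deploy the chosen CNI plugin into the cluster.
kubectl apply -f "$CNI_URL"

# Check that the CNI pods come up in the kube-system namespace.
kubectl get pods -n kube-system -o wide
```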

### Verify the UCP installation

Upon successful installation of the CNI plugin, the related UCP components should have
a `Running` status as pods start to become available.

```
$ kubectl get pods -n kube-system -o wide
NAME                           READY     STATUS    RESTARTS   AGE       IP            NODE         NOMINATED NODE
compose-565f7cf9ff-gq2gv       1/1       Running   0          21m       10.32.0.2     manager-01   <none>
compose-api-574d64f46f-r4c5g   1/1       Running   0          21m       10.32.0.3     manager-01   <none>
kube-dns-6d96c4d9c6-8jzv7      3/3       Running   0          22m       10.32.0.5     manager-01   <none>
ucp-metrics-nwt2z              3/3       Running   0          22m       10.32.0.4     manager-01   <none>
weave-net-wgvcd                2/2       Running   0          8m        172.31.6.95   manager-01   <none>
```

> **Note**: The above example deployment uses Weave. If you are using an alternative
> CNI plugin, look for the relevant name and review its status.

## Disable IP in IP overlay tunneling

The Calico CNI plugin bundled with UCP supports both overlay (IPIP) and underlay
forwarding technologies. By default, Docker UCP uses IPIP overlay tunneling.

If you're used to managing applications at the network level through the
underlay visibility, or you want to reuse existing networking tools in the
underlay, you may want to disable the IPIP functionality. Run the following
commands on the Kubernetes master node to disable IPIP overlay tunneling.

```bash
# Exec into the Calico Kubernetes controller container.
docker exec -it $(docker ps --filter name=k8s_calico-kube-controllers_calico-kube-controllers -q) sh

# Download calicoctl.
wget https://github.com/projectcalico/calicoctl/releases/download/v3.1.1/calicoctl && chmod +x calicoctl

# Get the IP pool configuration.
./calicoctl get ippool -o yaml > ippool.yaml

# Edit the file: disable IPIP in ippool.yaml by setting "ipipMode: Never".

# Apply the edited file to the Calico plugin.
./calicoctl apply -f ippool.yaml
```

These steps disable overlay tunneling, and Calico uses the underlay networking
in environments where it's supported.
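
For reference, the edited `ippool.yaml` might look like the following sketch. The
pool name and CIDR are assumptions based on UCP's default Calico configuration;
keep the values that `calicoctl get ippool` returned and change only `ipipMode`:

```yaml
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: default-ipv4-ippool
spec:
  cidr: 192.168.0.0/16
  # Disable IP-in-IP encapsulation so Calico forwards over the underlay.
  ipipMode: Never
  natOutgoing: true
```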

## Where to go next

- [Install UCP for production](../admin/install.md)
- [Deploy a workload to a Kubernetes cluster](../kubernetes.md)
- [Make your cluster highly available](https://docs.docker.com/ee/ucp/admin/install/#step-6-join-manager-nodes)
- [Install an Ingress Controller on Kubernetes](/ee/ucp/kubernetes/layer-7-routing/)

@ -0,0 +1,239 @@
---
title: Configuring Azure Disk Storage for Kubernetes
description: Learn how to add persistent storage to your Docker Enterprise clusters running on Azure with Azure Disk.
keywords: Universal Control Plane, UCP, Docker EE, Kubernetes, storage, volume
redirect_from:
---

Platform operators can provide persistent storage for workloads running on
Docker Enterprise and Microsoft Azure by using Azure Disk. Platform
operators can either pre-provision Azure Disks to be consumed by Kubernetes
Pods, or use the Azure Kubernetes integration to dynamically provision Azure
Disks on demand.

## Prerequisites

This guide assumes you have already provisioned a UCP environment on
Microsoft Azure. The cluster must be provisioned after meeting all of the
prerequisites listed in [Install UCP on
Azure](/ee/ucp/admin/install/install-on-azure.md).

Additionally, this guide uses the Kubernetes command-line tool
`kubectl` to provision Kubernetes objects within a UCP cluster. Therefore, this
tool must be downloaded, along with a UCP client bundle. For more
information on configuring CLI access for UCP, see [CLI Based
Access](/ee/ucp/user-access/cli/).

## Manually provision Azure Disks

An operator can use existing Azure Disks or manually provision new ones to
provide persistent storage for Kubernetes Pods. Azure Disks can be manually
provisioned in the Azure Portal, using ARM Templates, or using the Azure CLI. The
following example uses the Azure CLI to manually provision an Azure
Disk.

```bash
$ RG=myresourcegroup

$ az disk create \
  --resource-group $RG \
  --name k8s_volume_1  \
  --size-gb 20 \
  --query id \
  --output tsv
```

The Azure CLI command in the previous example returns the Azure ID of the Azure Disk
object. If you are provisioning Azure resources using an alternative method,
make sure you retrieve the Azure ID of the Azure Disk, because it is needed in a later step.

```
/subscriptions/<subscriptionID>/resourceGroups/<resourcegroup>/providers/Microsoft.Compute/disks/<diskname>
```

You can now create Kubernetes objects that refer to this Azure Disk. The following
example uses a Kubernetes Pod. However, the same Azure Disk syntax can be
used for DaemonSets, Deployments, and StatefulSets. In the following example, the
Azure Disk name and ID reflect the manually created Azure Disk.

```bash
$ cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: mypod-azuredisk
spec:
  containers:
  - image: nginx
    name: mypod
    volumeMounts:
      - name: mystorage
        mountPath: /data
  volumes:
      - name: mystorage
        azureDisk:
          kind: Managed
          diskName: k8s_volume_1
          diskURI: /subscriptions/<subscriptionID>/resourceGroups/<resourcegroup>/providers/Microsoft.Compute/disks/<diskname>
EOF
```
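
Once the Pod is running, you can confirm that the Azure Disk is mounted and
writable. A minimal verification sketch, reusing the Pod name and mount path
from the example above:

```bash
# Check the volume mounted at /data inside the Pod.
$ kubectl exec mypod-azuredisk -- df -h /data

# Write a test file to the Azure Disk.
$ kubectl exec mypod-azuredisk -- touch /data/hello-world
```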

## Dynamically provision Azure Disks

### Define the Azure Disk Storage Class

Kubernetes can dynamically provision Azure Disks using the Azure Kubernetes
integration, which was configured when UCP was installed. For Kubernetes
to determine which APIs to use when provisioning storage, you must
create Kubernetes Storage Classes specific to each storage backend. For more
information on Kubernetes Storage Classes, see [Storage
Classes](https://kubernetes.io/docs/concepts/storage/storage-classes/).

In Azure, there are two different Azure Disk types that can be consumed by
Kubernetes: Azure Disk Standard Volumes and Azure Disk Premium Volumes. For more
information on their differences, see [Azure
Disks](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/disks-types).

Depending on your use case, you can deploy one or both of the Azure Disk Storage
Classes (Standard and Premium).

To create a Standard Storage Class:

```bash
$ cat <<EOF | kubectl create -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: standard
provisioner: kubernetes.io/azure-disk
parameters:
  storageaccounttype: Standard_LRS
  kind: Managed
EOF
```

To create a Premium Storage Class:

```bash
$ cat <<EOF | kubectl create -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: premium
provisioner: kubernetes.io/azure-disk
parameters:
  storageaccounttype: Premium_LRS
  kind: Managed
EOF
```

To determine which Storage Classes have been provisioned:

```bash
$ kubectl get storageclasses
NAME       PROVISIONER                AGE
premium    kubernetes.io/azure-disk   1m
standard   kubernetes.io/azure-disk   1m
```

### Create an Azure Disk with a Persistent Volume Claim

After you create a Storage Class, you can use Kubernetes
objects to dynamically provision Azure Disks. This is done using Kubernetes
Persistent Volume Claims. For more information on Kubernetes Persistent Volume
Claims, see
[PVCs](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#introduction).

The following example uses the standard Storage Class and creates a 5 GiB Azure Disk. Alter these values to fit your use case.

```bash
$ cat <<EOF | kubectl create -f -
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: azure-disk-pvc
spec:
  storageClassName: standard
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
EOF
```

At this point, you should see a new Persistent Volume Claim and Persistent Volume
inside of Kubernetes. You should also see a new Azure Disk created in the Azure
Portal.

```bash
$ kubectl get persistentvolumeclaim
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
azure-disk-pvc   Bound    pvc-587deeb6-6ad6-11e9-9509-0242ac11000b   5Gi        RWO            standard       1m

$ kubectl get persistentvolume
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                    STORAGECLASS   REASON   AGE
pvc-587deeb6-6ad6-11e9-9509-0242ac11000b   5Gi        RWO            Delete           Bound    default/azure-disk-pvc   standard                3m
```

### Attach the new Azure Disk to a Kubernetes pod

Now that a Kubernetes Persistent Volume has been created, you can mount this into
a Kubernetes Pod. The disk can be consumed by any Kubernetes object type, including
a Deployment, DaemonSet, or StatefulSet. However, the following example just mounts
the Persistent Volume into a standalone Pod.

```bash
$ cat <<EOF | kubectl create -f -
kind: Pod
apiVersion: v1
metadata:
  name: mypod-dynamic-azuredisk
spec:
  containers:
    - name: mypod
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: storage
  volumes:
    - name: storage
      persistentVolumeClaim:
        claimName: azure-disk-pvc
EOF
```

### Azure Virtual Machine data disk capacity

In Azure, there are limits to the number of data disks that can be attached to
each Virtual Machine. This data is shown in [Azure Virtual Machine
Sizes](https://docs.microsoft.com/en-us/azure/virtual-machines/linux/sizes-general).
Kubernetes is aware of these restrictions, and prevents Pods from
being scheduled on nodes that have reached their maximum Azure Disk capacity.

You can see this if a Pod is stuck in the `ContainerCreating` stage:

```bash
$ kubectl get pods
NAME               READY     STATUS              RESTARTS   AGE
mypod-azure-disk   0/1       ContainerCreating   0          4m
```

Describing the Pod displays troubleshooting logs, showing that the node has
reached its capacity:

```bash
$ kubectl describe pods mypod-azure-disk
<...>
  Warning  FailedAttachVolume  7s (x11 over 6m)  attachdetach-controller  AttachVolume.Attach failed for volume "pvc-6b09dae3-6ad6-11e9-9509-0242ac11000b" : Attach volume "kubernetes-dynamic-pvc-6b09dae3-6ad6-11e9-9509-0242ac11000b" to instance "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Compute/virtualMachines/worker-03" failed with compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=409 -- Original Error: failed request: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="The maximum number of data disks allowed to be attached to a VM of this size is 4." Target="dataDisks"
```
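
To see how many data disks are currently attached to the Virtual Machine backing a
node, you can query the VM's storage profile with the Azure CLI. This is a sketch;
the resource group and VM name are placeholders taken from the error above:

```bash
# List the data disks attached to the worker's Virtual Machine.
$ az vm show \
  --resource-group myresourcegroup \
  --name worker-03 \
  --query "storageProfile.dataDisks[].name" \
  --output tsv
```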

## Where to go next

- [Deploy an Ingress Controller on Kubernetes](/ee/ucp/kubernetes/layer-7-routing/)
- [Discover Network Encryption on Kubernetes](/ee/ucp/kubernetes/kubernetes-network-encryption/)

@ -0,0 +1,246 @@
---
title: Configuring Azure Files Storage for Kubernetes
description: Learn how to add persistent storage to your Docker Enterprise clusters running on Azure with Azure Files.
keywords: Universal Control Plane, UCP, Docker EE, Kubernetes, storage, volume
redirect_from:
---

Platform operators can provide persistent storage for workloads running on
Docker Enterprise and Microsoft Azure by using Azure Files. You can either
pre-provision Azure Files Shares to be consumed by
Kubernetes Pods, or use the Azure Kubernetes integration to dynamically
provision Azure Files Shares on demand.

## Prerequisites

This guide assumes you have already provisioned a UCP environment on
Microsoft Azure. The cluster must be provisioned after meeting all
prerequisites listed in [Install UCP on
Azure](/ee/ucp/admin/install/install-on-azure.md).

Additionally, this guide uses the Kubernetes command-line tool
`kubectl` to provision Kubernetes objects within a UCP cluster. Therefore, you must download
this tool along with a UCP client bundle. For more
information on configuring CLI access to UCP, see [CLI Based
Access](/ee/ucp/user-access/cli/).

## Manually Provisioning Azure Files

You can use existing Azure Files Shares or manually provision new ones to
provide persistent storage for Kubernetes Pods. Azure Files Shares can be
manually provisioned in the Azure Portal, using ARM Templates, or using the Azure
CLI. The following example uses the Azure CLI to manually provision
an Azure Files Share.

### Creating an Azure Storage Account

When manually creating an Azure Files Share, first create an Azure
Storage Account for the file shares. If you have already provisioned
a Storage Account, you can skip to [Creating an Azure Files
Share](#creating-an-azure-files-share).

> **Note**: The Azure Kubernetes Driver does not support Azure Storage Accounts
> created using Azure Premium Storage.

```bash
$ REGION=ukwest
$ SA=mystorageaccount
$ RG=myresourcegroup

$ az storage account create \
  --name $SA \
  --resource-group $RG \
  --location $REGION \
  --sku Standard_LRS
```

### Creating an Azure Files Share

Next, provision an Azure Files Share. The size of this share can be
adjusted to fit the end user's requirements. If you have already created an
Azure Files Share, you can skip to [Configuring a Kubernetes
Secret](#configuring-a-kubernetes-secret).

```bash
$ SA=mystorageaccount
$ RG=myresourcegroup
$ FS=myfileshare
$ SIZE=5

# This Azure Connection String can also be found in the Azure Portal
$ export AZURE_STORAGE_CONNECTION_STRING=`az storage account show-connection-string --name $SA --resource-group $RG -o tsv`

$ az storage share create \
  --name $FS \
  --quota $SIZE \
  --connection-string $AZURE_STORAGE_CONNECTION_STRING
```

### Configuring a Kubernetes Secret

After a File Share has been created, you must load the Azure Storage
Account access key into UCP as a Kubernetes Secret. This provides access to
the file share when Kubernetes attempts to mount the share into a Pod. This key
can be found in the Azure Portal or retrieved with the Azure CLI, as shown in the following example:

```bash
$ SA=mystorageaccount
$ RG=myresourcegroup
$ FS=myfileshare

# The Azure Storage Account Access Key can also be found in the Azure Portal
$ STORAGE_KEY=$(az storage account keys list --resource-group $RG --account-name $SA --query "[0].value" -o tsv)

$ kubectl create secret generic azure-secret \
  --from-literal=azurestorageaccountname=$SA \
  --from-literal=azurestorageaccountkey=$STORAGE_KEY
```

### Mount the Azure Files Share into a Kubernetes Pod

The final step is to mount the Azure Files Share into a Kubernetes Pod, using
the Kubernetes Secret. The following code creates a standalone Kubernetes Pod, but you
can also use alternative Kubernetes objects such as Deployments, DaemonSets, or
StatefulSets, with the existing Azure Files Share.

```bash
$ FS=myfileshare

$ cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: mypod-azurefile
spec:
  containers:
  - image: nginx
    name: mypod
    volumeMounts:
      - name: mystorage
        mountPath: /data
  volumes:
      - name: mystorage
        azureFile:
          secretName: azure-secret
          shareName: $FS
          readOnly: false
EOF
```
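
To confirm that the share mounted read-write, you can create a file from inside
the Pod. A minimal sketch, reusing the Pod name from the example above:

```bash
# Write a test file to the Azure Files Share and list the mount.
$ kubectl exec mypod-azurefile -- touch /data/hello-world
$ kubectl exec mypod-azurefile -- ls /data
```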

## Dynamically Provisioning Azure Files Shares

### Defining the Azure Files Storage Class

Kubernetes can dynamically provision Azure Files Shares using the Azure
Kubernetes integration, which was configured when UCP was installed. For
Kubernetes to know which APIs to use when provisioning storage, you must
create Kubernetes Storage Classes specific to each storage
backend. For more information on Kubernetes Storage Classes, see [Storage
Classes](https://kubernetes.io/docs/concepts/storage/storage-classes/).

> Today, only the Standard Storage Class is supported when using the Azure
> Kubernetes Plugin. File shares using the Premium Storage Class will fail to
> mount.

```bash
$ cat <<EOF | kubectl create -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: standard
provisioner: kubernetes.io/azure-file
mountOptions:
  - dir_mode=0777
  - file_mode=0777
  - uid=1000
  - gid=1000
parameters:
  skuName: Standard_LRS
EOF
```

To see which Storage Classes have been provisioned:

```bash
$ kubectl get storageclasses
NAME       PROVISIONER                AGE
standard   kubernetes.io/azure-file   1m
```

### Creating an Azure Files Share using a Persistent Volume Claim

After you create a Storage Class, you can use Kubernetes
objects to dynamically provision Azure Files Shares. This is done using Kubernetes
[Persistent Volume Claims](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#introduction).
Kubernetes uses an existing Azure Storage Account if one exists inside of the
Azure Resource Group. If an Azure Storage Account does not exist,
Kubernetes creates one.

The following example uses the standard Storage Class and creates a 5 GiB Azure
Files Share. Alter these values to fit your use case.

```bash
$ cat <<EOF | kubectl create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azure-file-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: standard
  resources:
    requests:
      storage: 5Gi
EOF
```

At this point, you should see a newly created Persistent Volume Claim and Persistent Volume:

```bash
$ kubectl get pvc
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
azure-file-pvc   Bound    pvc-f7ccebf0-70e0-11e9-8d0a-0242ac110007   5Gi        RWX            standard       22s

$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                    STORAGECLASS   REASON   AGE
pvc-f7ccebf0-70e0-11e9-8d0a-0242ac110007   5Gi        RWX            Delete           Bound    default/azure-file-pvc   standard                2m
```

### Attach the new Azure Files Share to a Kubernetes Pod

Now that a Kubernetes Persistent Volume has been created, mount this into
a Kubernetes Pod. The file share can be consumed by any Kubernetes object type,
such as a Deployment, DaemonSet, or StatefulSet. However, the following
example just mounts the Persistent Volume into a standalone Pod.

```bash
$ cat <<EOF | kubectl create -f -
kind: Pod
apiVersion: v1
metadata:
  name: mypod
spec:
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: storage
  volumes:
    - name: storage
      persistentVolumeClaim:
        claimName: azure-file-pvc
EOF
```

## Where to go next

- [Deploy an Ingress Controller on Kubernetes](/ee/ucp/kubernetes/layer-7-routing/)
- [Discover Network Encryption on Kubernetes](/ee/ucp/kubernetes/kubernetes-network-encryption/)

@ -120,7 +120,7 @@ is available in [17.12 Edge (mac45)](/docker-for-mac/edge-release-notes/#docker-

[17.12 Stable (mac46)](/docker-for-mac/release-notes/#docker-community-edition-17120-ce-mac46-2018-01-09){: target="_blank" class="_"} and higher.
> - [Kubernetes on Docker Desktop for Windows](/docker-for-windows/kubernetes/){: target="_blank" class="_"}
is available in
[18.06.0 CE (win70)](/docker-for-windows/release-notes/){: target="_blank" class="_"} and higher as well as edge channels.

[Install Docker](/engine/installation/index.md){: class="button outline-btn"}
<div style="clear:left"></div>

@ -47,7 +47,7 @@ Older versions of Docker were called `docker` or `docker-engine`. In addition,

if you are upgrading from Docker CE to Docker EE, remove the Docker CE package.

```bash
$ sudo apt-get remove docker docker-engine docker-ce docker-ce-cli docker.io
```

It's OK if `apt-get` reports that none of these packages are installed.

@ -1,5 +1,5 @@

---
title: Use macvlan networks
description: All about using macvlan to make your containers appear like physical machines on the network
keywords: network, macvlan, standalone
redirect_from:

@ -13,25 +13,27 @@ this type of situation, you can use the `macvlan` network driver to assign a MAC

address to each container's virtual network interface, making it appear to be
a physical network interface directly connected to the physical network. In this
case, you need to designate a physical interface on your Docker host to use for
the `macvlan`, as well as the subnet and gateway of the `macvlan`. You can even
isolate your `macvlan` networks using different physical network interfaces.
Keep the following things in mind:

- It is very easy to unintentionally damage your network due to IP address
exhaustion or to "VLAN spread", a situation in which you have an
inappropriately large number of unique MAC addresses in your network.

- Your networking equipment needs to be able to handle "promiscuous mode",
where one physical interface can be assigned multiple MAC addresses.

- If your application can work using a bridge (on a single Docker host) or
overlay (to communicate across multiple Docker hosts), these solutions may be
better in the long term.

## Create a macvlan network

When you create a `macvlan` network, it can either be in bridge mode or 802.1q
trunk bridge mode.

- In bridge mode, `macvlan` traffic goes through a physical device on the host.

- In 802.1q trunk bridge mode, traffic goes through an 802.1q sub-interface
which Docker creates on the fly. This allows you to control routing and

@ -39,7 +41,7 @@ trunk bridge mode.

### Bridge mode

To create a `macvlan` network which bridges with a given physical network
interface, use `--driver macvlan` with the `docker network create` command. You
also need to specify the `parent`, which is the interface the traffic will
physically go through on the Docker host.

@ -47,18 +49,18 @@ physically go through on the Docker host.

```bash
$ docker network create -d macvlan \
  --subnet=172.16.86.0/24 \
  --gateway=172.16.86.1 \
  -o parent=eth0 pub_net
```

If you need to exclude IP addresses from being used in the `macvlan` network, such
as when a given IP address is already in use, use `--aux-addresses`:

```bash
$ docker network create -d macvlan \
  --subnet=192.168.32.0/24 \
  --ip-range=192.168.32.128/25 \
  --gateway=192.168.32.254 \
  --aux-address="my-router=192.168.32.129" \
  -o parent=eth0 macnet32
```
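
Once a `macvlan` network exists, you can attach a container to it and optionally
assign it a specific IP address from the subnet. A minimal sketch using the
`pub_net` network created above; the `alpine` image and the static IP are
assumptions for illustration:

```bash
# Run a container on the macvlan network with a fixed IP address.
$ docker run --rm -dit \
  --network pub_net \
  --ip 172.16.86.42 \
  --name my-macvlan-test \
  alpine:latest ash

# Inspect the network to confirm the container's attachment.
$ docker network inspect pub_net
```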

@ -70,7 +72,7 @@ Docker interprets that as a sub-interface of `eth0` and creates the sub-interfac

automatically.

```bash
$ docker network create -d macvlan \
  --subnet=192.168.50.0/24 \
  --gateway=192.168.50.1 \
  -o parent=eth0.50 macvlan50
```

@ -85,26 +87,25 @@ instead, and get an L2 bridge. Specify `-o ipvlan_mode=l2`.

$ docker network create -d ipvlan \
  --subnet=192.168.210.0/24 \
  --subnet=192.168.212.0/24 \
  --gateway=192.168.210.254 \
  --gateway=192.168.212.254 \
  -o ipvlan_mode=l2 ipvlan210
```

## Use IPv6

If you have [configured the Docker daemon to allow IPv6](/config/daemon/ipv6.md),
you can use dual-stack IPv4/IPv6 `macvlan` networks.

```bash
$ docker network create -d macvlan \
  --subnet=192.168.216.0/24 --subnet=192.168.218.0/24 \
  --gateway=192.168.216.1 --gateway=192.168.218.1 \
  --subnet=2001:db8:abc8::/64 --gateway=2001:db8:abc8::10 \
  -o parent=eth0.218 \
  -o macvlan_mode=bridge macvlan216
```

## Next steps

- Go through the [macvlan networking tutorial](/network/network-tutorial-macvlan.md)

@ -37,10 +37,11 @@ Note:

## Options

| Option                 | Description                                                             |
|:-----------------------|:------------------------------------------------------------------------|
| `--debug, -D`          | Enable debug mode                                                       |
| `--jsonlog`            | Produce json formatted output for easier parsing                        |
| `--interactive, -i`    | Run in interactive mode and prompt for configuration values             |
| `--id` *value*         | The ID of the UCP instance to back up                                   |
| `--no-passphrase`      | Opt out of encrypting the tar file with a passphrase (not recommended)  |
| `--passphrase` *value* | Encrypt the tar file with a passphrase                                  |
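
As a usage sketch, the following streams an encrypted backup of a manager node to
a local tar file. The UCP version tag, instance ID, and passphrase are
placeholders:

```bash
# Create an encrypted backup of this UCP manager node.
$ docker container run --rm -i --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp:3.1.0 backup \
  --id <ucp-instance-id> \
  --passphrase "secret-passphrase" > /tmp/backup.tar
```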

@ -27,9 +27,9 @@ to configure DTR.

## Options

| Option       | Description                                                                      |
|:-------------|:----------------------------------------------------------------------------------|
| `--debug, D` | Enable debug mode                                                                |
| `--jsonlog`  | Produce json formatted output for easier parsing                                 |
| `--ca`       | Only print the contents of the ca.pem file                                       |
| `--cluster`  | Print the internal UCP swarm root CA and cert instead of the public server cert  |
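
A common use is exporting the cluster root CA so that other tools can trust UCP.
A sketch, with the version tag as a placeholder:

```bash
# Print the UCP cluster root CA certificate and save it locally.
$ docker container run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp:3.1.0 dump-certs --cluster --ca > ucp-cluster-ca.pem
```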

@ -23,3 +23,9 @@ a client bundle.

This ID is used by other commands as confirmation.

## Options

| Option       | Description                                       |
|:-------------|:---------------------------------------------------|
| `--debug, D` | Enable debug mode                                 |
| `--jsonlog`  | Produce json formatted output for easier parsing  |
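
A usage sketch, with the version tag as a placeholder:

```bash
# Print the ID of the UCP instance running on this node.
$ docker container run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp:3.1.0 id
```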

@ -24,11 +24,11 @@ the ones that are missing.

## Options

| Option                         | Description                                            |
|:-------------------------------|:---------------------------------------------------------|
| `--debug, D`                   | Enable debug mode                                      |
| `--jsonlog`                    | Produce json formatted output for easier parsing       |
| `--list`                       | List all images used by UCP but don't pull them        |
| `--pull` *value*               | Pull UCP images: `always`, when `missing`, or `never`  |
| `--registry-password` *value*  | Password to use when pulling images                    |
| `--registry-username` *value*  | Username to use when pulling images                    |
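
For example, to list the images this UCP version needs without pulling them, a
sketch (version tag is a placeholder):

```bash
# List all images used by UCP on this node, without pulling.
$ docker container run --rm -it --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp:3.1.0 images --list
```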

@ -31,15 +31,16 @@ docker container run -it --rm \

| Option           | Description                                                |
|:-----------------|:------------------------------------------------------------|
| `backup`         | Create a backup of a UCP manager node                      |
| `dump-certs`     | Print the public certificates used by this UCP web server  |
| `example-config` | Display an example configuration file for UCP              |
| `help`           | Shows a list of commands or help for one command           |
| `id`             | Print the ID of UCP running on this node                   |
| `images`         | Verify the UCP images on this node                         |
| `install`        | Install UCP on this node                                   |
| `restart`        | Start or restart UCP components running on this node       |
| `restore`        | Restore a UCP cluster from a backup                        |
| `stop`           | Stop UCP components running on this node                   |
| `support`        | Create a support dump for this UCP node                    |
| `uninstall-ucp`  | Uninstall UCP from this swarm                              |
| `upgrade`        | Upgrade the UCP cluster                                    |

@ -42,45 +42,46 @@ If you are installing on Azure, see [Install UCP on Azure](/ee/ucp/admin/install

## Options

| Option                           | Description |
|:---------------------------------|:--------------|
| `--debug, -D`                    | Enable debug mode |
| `--jsonlog`                      | Produce json formatted output for easier parsing |
| `--interactive, -i`              | Run in interactive mode and prompt for configuration values |
| `--admin-password` *value*       | The UCP administrator password [$UCP_ADMIN_PASSWORD] |
| `--admin-username` *value*       | The UCP administrator username [$UCP_ADMIN_USER] |
| `--binpack`                      | Set the Docker Swarm scheduler to binpack mode. Used for backwards compatibility |
| `--cloud-provider` *value*       | The cloud provider for the cluster |
| `--cni-installer-url` *value*    | Deprecated as of UCP 3.1; use `--unmanaged-cni` instead. A URL pointing to a Kubernetes YAML file to be used as an installer for the CNI plugin of the cluster. If specified, the default CNI plugin is not installed. If the URL uses the HTTPS scheme, no certificate verification is performed |
| `--controller-port` *value*      | Port for the web UI and API (default: 443) |
| `--data-path-addr` *value*       | Address or interface to use for data path traffic. Format: IP address or network interface name [$UCP_DATA_PATH_ADDR] |
| `--disable-tracking`             | Disable anonymous tracking and analytics |
| `--disable-usage`                | Disable anonymous usage reporting |
| `--dns-opt` *value*              | Set DNS options for the UCP containers [$DNS_OPT] |
| `--dns-search` *value*           | Set custom DNS search domains for the UCP containers [$DNS_SEARCH] |
| `--dns` *value*                  | Set custom DNS servers for the UCP containers [$DNS] |
| `--enable-profiling`             | Enable performance profiling |
| `--existing-config`              | Use the latest existing UCP config during this installation. The install fails if a config is not found |
| `--external-server-cert`         | Customize the certificates used by the UCP web server |
| `--external-service-lb` *value*  | Set the IP address of the load balancer that published services are expected to be reachable on |
| `--force-insecure-tcp`           | Force install to continue even with unauthenticated Docker Engine ports |
| `--force-minimums`               | Force the install/upgrade even if the system does not meet the minimum requirements |
| `--host-address` *value*         | The network address to advertise to other nodes. Format: IP address or network interface name [$UCP_HOST_ADDRESS] |
| `--kube-apiserver-port` *value*  | Port for the Kubernetes API server (default: 6443) |
| `--kv-snapshot-count` *value*    | Number of changes between key-value store snapshots (default: 20000) [$KV_SNAPSHOT_COUNT] |
| `--kv-timeout` *value*           | Timeout in milliseconds for the key-value store (default: 5000) [$KV_TIMEOUT] |
| `--license` *value*              | Add a license: e.g. --license "$(cat license.lic)" [$UCP_LICENSE] |
| `--nodeport-range` *value*       | Allowed port range for Kubernetes services of type NodePort (default: "32768-35535") |
| `--pod-cidr` *value*             | Kubernetes cluster IP pool for the pods to allocate IPs from (default: "192.168.0.0/16") |
| `--preserve-certs`               | Don't generate certificates if they already exist |
| `--pull` *value*                 | Pull UCP images: `always`, when `missing`, or `never` (default: "missing") |
| `--random`                       | Set the Docker Swarm scheduler to random mode. Used for backwards compatibility |
| `--registry-password` *value*    | Password to use when pulling images [$REGISTRY_PASSWORD] |
| `--registry-username` *value*    | Username to use when pulling images [$REGISTRY_USERNAME] |
| `--san` *value*                  | Add subject alternative names to certificates (e.g. --san www1.acme.com --san www2.acme.com) [$UCP_HOSTNAMES] |
| `--skip-cloud-provider-check`    | Disables checks which rely on detecting which (if any) cloud provider the cluster is currently running on |
| `--swarm-experimental`           | Enable Docker Swarm experimental features. Used for backwards compatibility |
| `--swarm-grpc-port` *value*      | Port for communication between nodes (default: 2377) |
| `--swarm-port` *value*           | Port for the Docker Swarm manager. Used for backwards compatibility (default: 2376) |
| `--unlock-key` *value*           | The unlock key for this swarm-mode cluster, if one exists [$UNLOCK_KEY] |
| `--unmanaged-cni`                | When set, UCP does not deploy or manage the lifecycle of the default CNI plugin (Calico is the default CNI provider). Kubernetes networking does not function until a CNI plugin is manually installed |
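
As a sketch, a non-interactive install might combine several of these options.
The version tag, credentials, and SAN are placeholders:

```bash
# Install UCP with an explicit admin user, an extra SAN, and a custom pod CIDR.
$ docker container run --rm -it --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp:3.1.0 install \
  --host-address <node-ip-address> \
  --admin-username admin \
  --admin-password <password> \
  --san ucp.example.com \
  --pod-cidr 192.168.0.0/16
```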

@ -18,7 +18,7 @@ docker container run --rm -it \

## Options

| Option       | Description                                       |
|:-------------|:---------------------------------------------------|
| `--debug, D` | Enable debug mode                                 |
| `--jsonlog`  | Produce json formatted output for easier parsing  |

@ -58,13 +58,13 @@ Notes:

## Options

| Option                      | Description                                                                                    |
|:----------------------------|:-------------------------------------------------------------------------------------------------|
| `--debug, D`                | Enable debug mode                                                                              |
| `--jsonlog`                 | Produce json formatted output for easier parsing                                               |
| `--interactive, i`          | Run in interactive mode and prompt for configuration values                                    |
| `--data-path-addr` *value*  | Address or interface to use for data path traffic                                              |
| `--host-address` *value*    | The network address to advertise to other nodes. Format: IP address or network interface name  |
| `--passphrase` *value*      | Decrypt the backup tar file with the provided passphrase                                       |
| `--san` *value*             | Add subject alternative names to certificates (e.g. --san www1.acme.com --san www2.acme.com)   |
| `--unlock-key` *value*      | The unlock key for this swarm-mode cluster, if one exists                                      |
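
A sketch of restoring from an encrypted backup taken earlier; the version tag and
passphrase are placeholders, and the backup file is read from stdin:

```bash
# Restore UCP from an encrypted backup tar file.
$ docker container run --rm -i --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp:3.1.0 restore --passphrase "secret-passphrase" < /tmp/backup.tar
```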

@ -18,7 +18,7 @@ docker container run --rm -it \

## Options

| Option       | Description                                       |
|:-------------|:---------------------------------------------------|
| `--debug, D` | Enable debug mode                                 |
| `--jsonlog`  | Produce json formatted output for easier parsing  |

@ -22,8 +22,7 @@ This command creates a support dump file for the specified node(s), and prints i

## Options

| Option       | Description                                       |
|:-------------|:---------------------------------------------------|
| `--debug, D` | Enable debug mode                                 |
| `--jsonlog`  | Produce json formatted output for easier parsing  |
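
A usage sketch that writes the support dump to a local archive; the version tag
is a placeholder:

```bash
# Create a support dump for this node and save it locally.
$ docker container run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp:3.1.0 support > docker-support.tgz
```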

@ -30,13 +30,13 @@ UCP is installed again.

## Options

| Option                         | Description                                                  |
|:-------------------------------|:---------------------------------------------------------------|
| `--debug, D`                   | Enable debug mode                                            |
| `--jsonlog`                    | Produce json formatted output for easier parsing             |
| `--interactive, i`             | Run in interactive mode and prompt for configuration values  |
| `--id` *value*                 | The ID of the UCP instance to uninstall                      |
| `--pull` *value*               | Pull UCP images: `always`, when `missing`, or `never`        |
| `--purge-config`               | Remove UCP configs during uninstallation                     |
| `--registry-password` *value*  | Password to use when pulling images                          |
| `--registry-username` *value*  | Username to use when pulling images                          |

@ -29,19 +29,16 @@ healthy and that all nodes have been upgraded successfully.

## Options

| Option                         | Description                                                                           |
|:-------------------------------|:-----------------------------------------------------------------------------------------|
| `--debug, D`                   | Enable debug mode                                                                     |
| `--jsonlog`                    | Produce json formatted output for easier parsing                                      |
| `--interactive, i`             | Run in interactive mode and prompt for configuration values                           |
| `--admin-password` *value*     | The UCP administrator password                                                        |
| `--admin-username` *value*     | The UCP administrator username                                                        |
| `--force-minimums`             | Force the install/upgrade even if the system does not meet the minimum requirements   |
| `--host-address` *value*       | Override the previously configured host address with this IP or network interface     |
| `--id` *value*                 | The ID of the UCP instance to upgrade                                                 |
| `--pull` *value*               | Pull UCP images: `always`, when `missing`, or `never`                                 |
| `--registry-password` *value*  | Password to use when pulling images                                                   |
| `--registry-username` *value*  | Username to use when pulling images                                                   |

@ -27,14 +27,14 @@ instruction in the image's Dockerfile. Each layer except the very last one is

read-only. Consider the following Dockerfile:

```conf
FROM ubuntu:18.04
COPY . /app
RUN make /app
CMD python /app/app.py
```

This Dockerfile contains four commands, each of which creates a layer. The
`FROM` statement starts out by creating a layer from the `ubuntu:18.04` image.
The `COPY` command adds some files from your Docker client's current directory.
The `RUN` command builds your application using the `make` command. Finally,
the last layer specifies what command to run within the container.

@ -45,7 +45,7 @@ writable layer on top of the underlying layers. This layer is often called the

"container layer". All changes made to the running container, such as writing
new files, modifying existing files, and deleting files, are written to this thin
writable container layer. The diagram below shows a container based on the Ubuntu
18.04 image.



@ -63,7 +63,7 @@ deleted. The underlying image remains unchanged.

Because each container has its own writable container layer, and all changes are
stored in this container layer, multiple containers can share access to the same
underlying image and yet have their own data state. The diagram below shows
multiple containers sharing the same Ubuntu 18.04 image.



@ -130,28 +130,28 @@ usually `/var/lib/docker/` on Linux hosts. You can see these layers being pulled

in this example:

```bash
$ docker pull ubuntu:18.04
18.04: Pulling from library/ubuntu
f476d66f5408: Pull complete
8882c27f669e: Pull complete
d9af21273955: Pull complete
f5029279ec12: Pull complete
Digest: sha256:ab6cb8de3ad7bb33e2534677f865008535427390b117d7939193f8d1a6613e34
Status: Downloaded newer image for ubuntu:18.04
```

Each of these layers is stored in its own directory inside the Docker host's
local storage area. To examine the layers on the filesystem, list the contents
of `/var/lib/docker/<storage-driver>`. This example uses the `overlay2`
storage driver:

```bash
$ ls /var/lib/docker/overlay2
16802227a96c24dcbeab5b37821e2b67a9f921749cd9a2e386d5a6d5bc6fc6d3
377d73dbb466e0bc7c9ee23166771b35ebdbe02ef17753d79fd3571d4ce659d7
3f02d96212b03e3383160d31d7c6aeca750d2d8a1879965b89fe8146594c453d
ec1ec45792908e90484f7e629330666e7eee599f08729c93890a7205a6ba35f5
l
```

The directory names do not correspond to the layer IDs (this has been true since
@ -161,7 +161,7 @@ Now imagine that you have two different Dockerfiles. You use the first one to
create an image called `acme/my-base-image:1.0`.

```conf
FROM ubuntu:16.10
FROM ubuntu:18.04
COPY . /app
```

@ -209,10 +209,9 @@ layers are the same.

```bash
$ docker build -t acme/my-base-image:1.0 -f Dockerfile.base .

Sending build context to Docker daemon  4.096kB
Step 1/2 : FROM ubuntu:16.10
 ---> 31005225a745
Sending build context to Docker daemon  812.4MB
Step 1/2 : FROM ubuntu:18.04
 ---> d131e0fa2585
Step 2/2 : COPY . /app
 ---> Using cache
 ---> bd09118bcef6
|
|||
$ docker history bd09118bcef6
|
||||
IMAGE CREATED CREATED BY SIZE COMMENT
|
||||
bd09118bcef6 4 minutes ago /bin/sh -c #(nop) COPY dir:35a7eb158c1504e... 100B
|
||||
31005225a745 3 months ago /bin/sh -c #(nop) CMD ["/bin/bash"] 0B
|
||||
d131e0fa2585 3 months ago /bin/sh -c #(nop) CMD ["/bin/bash"] 0B
|
||||
<missing> 3 months ago /bin/sh -c mkdir -p /run/systemd && echo '... 7B
|
||||
<missing> 3 months ago /bin/sh -c sed -i 's/^#\s*\(deb.*universe\... 2.78kB
|
||||
<missing> 3 months ago /bin/sh -c rm -rf /var/lib/apt/lists/* 0B
|
||||
|
@ -266,7 +265,7 @@ layers are the same.
IMAGE               CREATED             CREATED BY                                      SIZE      COMMENT
dbf995fc07ff        3 minutes ago       /bin/sh -c #(nop) CMD ["/bin/sh" "-c" "/a...    0B
bd09118bcef6        5 minutes ago       /bin/sh -c #(nop) COPY dir:35a7eb158c1504e...   100B
31005225a745        3 months ago        /bin/sh -c #(nop) CMD ["/bin/bash"]             0B
d131e0fa2585        3 months ago        /bin/sh -c #(nop) CMD ["/bin/bash"]             0B
<missing>           3 months ago        /bin/sh -c mkdir -p /run/systemd && echo '...   7B
<missing>           3 months ago        /bin/sh -c sed -i 's/^#\s*\(deb.*universe\...   2.78kB
<missing>           3 months ago        /bin/sh -c rm -rf /var/lib/apt/lists/*          0B
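A quick way to confirm the resulting disk savings, sketched here with the standard CLI, is `docker system df` in verbose mode, which reports a per-image `SHARED SIZE` column:

```bash
# SHARED SIZE counts layers reused by more than one image,
# such as the common ubuntu:18.04 base layers above.
$ docker system df -v
```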
@ -263,7 +263,7 @@ There are several factors that influence the performance of Docker using the
filesystems like ZFS. ZFS mitigates this by using a small block size of 128k.
The ZFS intent log (ZIL) and the coalescing of writes (delayed writes) also
help to reduce fragmentation. You can monitor fragmentation using
`zfs status`. However, there is no way to defragment ZFS without reformatting
`zpool status`. However, there is no way to defragment ZFS without reformatting
and restoring the filesystem.

- **Use the native ZFS driver for Linux**: The ZFS FUSE implementation is not
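As a sketch of the monitoring mentioned above (the pool name `zroot` is a placeholder, and the `FRAG` column assumes a reasonably recent ZFS on Linux release):

```bash
# Overall pool health and device status:
$ zpool status zroot
# Free-space fragmentation is reported in the FRAG column:
$ zpool list zroot
```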
@ -464,7 +464,7 @@ $ docker service create -d \
docker service create -d \
  --name nfs-service \
  --mount 'type=volume,source=nfsvolume,target=/app,volume-driver=local,volume-opt=type=nfs,volume-opt=device=:/,"volume-opt=o=10.0.0.10,rw,nfsvers=4,async"' \
  nginx:latest`
  nginx:latest
```

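For comparison, the same NFS-backed volume could be created ahead of time with `docker volume create` and then referenced by name; a sketch, with the NFS server address and export path as placeholders:

```bash
# Pre-create the volume using the local driver's NFS options, then
# pass source=nfsvolume to docker service create or docker run.
$ docker volume create \
    --driver local \
    --opt type=nfs \
    --opt o=addr=10.0.0.10,rw,nfsvers=4 \
    --opt device=:/ \
    nfsvolume
```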
## Backup, restore, or migrate data volumes
@ -168,6 +168,11 @@ Or with node discovery:

## Docker Hub as a hosted discovery service

> ### Deprecation Notice
>
> The Docker Hub Hosted Discovery Service will be removed on June 19th, 2019. Please switch to one of the other discovery mechanisms. Several brownouts of the service will take place in the weeks leading up to the removal, so that remaining users can find where it is still in use and have time to prepare.
{:.info}

> **Warning**:
> The Docker Hub Hosted Discovery Service **is not recommended** for production use. It is intended for testing and development. See the discovery backends for production use.
{:.warning}