Merge branch 'master' of github.com:docker/docs-private into CNI-Calico-enhancements-563

@@ -309,6 +309,14 @@ instance. Be sure to compress the images *before* adding them to the
repository, doing it afterwards actually worsens the impact on the Git repo (but
still optimizes the bandwidth during browsing).

## Beta content disclaimer

```bash
> BETA DISCLAIMER
>
> This is beta content. It is not yet complete and should be considered a work in progress. This content is subject to change without notice.
```

## Building archives and the live published docs

All the images described below are automatically built using Docker Cloud. To

@@ -2345,6 +2345,16 @@ manuals:
        title: Create and manage organizations
      - path: /ee/dtr/admin/manage-users/permission-levels/
        title: Permission levels
      - sectiontitle: Manage jobs
        section:
          - path: /ee/dtr/admin/manage-jobs/job-queue/
            title: Job Queue
          - path: /ee/dtr/admin/manage-jobs/audit-jobs-via-ui/
            title: Audit Jobs with the Web Interface
          - path: /ee/dtr/admin/manage-jobs/audit-jobs-via-api/
            title: Audit Jobs with the API
          - path: /ee/dtr/admin/manage-jobs/auto-delete-job-logs/
            title: Enable Auto-Deletion of Job Logs
      - sectiontitle: Monitor and troubleshoot
        section:
          - path: /ee/dtr/admin/monitor-and-troubleshoot/

@@ -2353,8 +2363,6 @@ manuals:
            title: Check Notary audit logs
          - path: /ee/dtr/admin/monitor-and-troubleshoot/troubleshoot-with-logs/
            title: Troubleshoot with logs
          - path: /ee/dtr/admin/monitor-and-troubleshoot/troubleshoot-batch-jobs/
            title: Troubleshoot batch jobs
      - sectiontitle: Disaster recovery
        section:
          - title: Overview

@@ -0,0 +1,182 @@
---
title: Audit Jobs via the API
description: Learn how Docker Trusted Registry runs batch jobs for job-related troubleshooting.
keywords: dtr, troubleshoot, audit, job logs, jobs, api
redirect_from: /ee/dtr/admin/monitor-and-troubleshoot/troubleshoot-batch-jobs/
---

> BETA DISCLAIMER
>
> This is beta content. It is not yet complete and should be considered a work in progress. This content is subject to change without notice.

## Overview

This topic covers troubleshooting batch jobs via the API, a capability introduced in DTR 2.2. Starting in DTR 2.6, admins also have the ability to [audit jobs](audit-jobs-via-ui.md) using the web interface.

## Prerequisite
* [Job Queue](job-queue.md)

### Job capacity

Each job runner has a limited capacity and will not claim jobs that require a
higher capacity. You can see the capacity of a job runner via the
`GET /api/v0/workers` endpoint:

```json
{
  "workers": [
    {
      "id": "000000000000",
      "status": "running",
      "capacityMap": {
        "scan": 1,
        "scanCheck": 1
      },
      "heartbeatExpiration": "2017-02-18T00:51:02Z"
    }
  ]
}
```
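
For example, here is a minimal sketch of querying that endpoint with `curl`; `<dtr-url>`, `<username>`, and `<token>` are placeholders for your own values:

```bash
# List all job runners and their capacity (sketch; fill in the placeholders)
# Add -k if your DTR uses a self-signed certificate.
curl -s -u <username>:<token> \
  "https://<dtr-url>/api/v0/workers"
```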

This means that the worker with replica ID `000000000000` has a capacity of 1
`scan` and 1 `scanCheck`. Next, review the list of available jobs:

```json
{
  "jobs": [
    {
      "id": "0",
      "workerID": "",
      "status": "waiting",
      "capacityMap": {
        "scan": 1
      }
    },
    {
      "id": "1",
      "workerID": "",
      "status": "waiting",
      "capacityMap": {
        "scan": 1
      }
    },
    {
      "id": "2",
      "workerID": "",
      "status": "waiting",
      "capacityMap": {
        "scanCheck": 1
      }
    }
  ]
}
```

If worker `000000000000` notices the jobs
in `waiting` state above, then it will be able to pick up jobs `0` and `2` since it has the capacity
for both. Job `1` will have to wait until the previous scan job, `0`, is completed. The job queue will then look like:

```json
{
  "jobs": [
    {
      "id": "0",
      "workerID": "000000000000",
      "status": "running",
      "capacityMap": {
        "scan": 1
      }
    },
    {
      "id": "1",
      "workerID": "",
      "status": "waiting",
      "capacityMap": {
        "scan": 1
      }
    },
    {
      "id": "2",
      "workerID": "000000000000",
      "status": "running",
      "capacityMap": {
        "scanCheck": 1
      }
    }
  ]
}
```

You can get a list of jobs via the `GET /api/v0/jobs/` endpoint. Each job
looks like:

```json
{
  "id": "1fcf4c0f-ff3b-471a-8839-5dcb631b2f7b",
  "retryFromID": "1fcf4c0f-ff3b-471a-8839-5dcb631b2f7b",
  "workerID": "000000000000",
  "status": "done",
  "scheduledAt": "2017-02-17T01:09:47.771Z",
  "lastUpdated": "2017-02-17T01:10:14.117Z",
  "action": "scan_check_single",
  "retriesLeft": 0,
  "retriesTotal": 0,
  "capacityMap": {
    "scan": 1
  },
  "parameters": {
    "SHA256SUM": "1bacd3c8ccb1f15609a10bd4a403831d0ec0b354438ddbf644c95c5d54f8eb13"
  },
  "deadline": "",
  "stopTimeout": ""
}
```

The JSON fields of interest here are:

* `id`: The ID of the job
* `workerID`: The ID of the worker in a DTR replica that is running this job
* `status`: The current state of the job
* `action`: The type of job the worker will actually perform
* `capacityMap`: The available capacity a worker needs for this job to run
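
A sketch of pulling just those fields for every job, with the same placeholder credentials as above and assuming `jq` is available on your workstation:

```bash
# Summarize every job as "<id> <action> <status>" (sketch)
curl -s -u <username>:<token> "https://<dtr-url>/api/v0/jobs/" \
  | jq -r '.jobs[] | "\(.id) \(.action) \(.status)"'
```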

### Cron jobs

Several of the jobs performed by DTR run on a recurring schedule. You can
see those jobs using the `GET /api/v0/crons` endpoint:

```json
{
  "crons": [
    {
      "id": "48875b1b-5006-48f5-9f3c-af9fbdd82255",
      "action": "license_update",
      "schedule": "57 54 3 * * *",
      "retries": 2,
      "capacityMap": null,
      "parameters": null,
      "deadline": "",
      "stopTimeout": "",
      "nextRun": "2017-02-22T03:54:57Z"
    },
    {
      "id": "b1c1e61e-1e74-4677-8e4a-2a7dacefffdc",
      "action": "update_db",
      "schedule": "0 0 3 * * *",
      "retries": 0,
      "capacityMap": null,
      "parameters": null,
      "deadline": "",
      "stopTimeout": "",
      "nextRun": "2017-02-22T03:00:00Z"
    }
  ]
}
```

The `schedule` field uses a cron expression following the `(seconds) (minutes) (hours) (day of month) (month) (day of week)` format. For example, the cron with ID `48875b1b-5006-48f5-9f3c-af9fbdd82255` and schedule `57 54 3 * * *` runs at `03:54:57` every day, which matches the `nextRun` value of `2017-02-22T03:54:57Z` in the example JSON response above.
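
A sketch of listing each cron's action, schedule, and next run time from the command line, again with placeholder credentials:

```bash
# Print one line per cron entry: action, schedule, next run (sketch)
curl -s -u <username>:<token> "https://<dtr-url>/api/v0/crons" \
  | jq -r '.crons[] | "\(.action)\t\(.schedule)\t\(.nextRun)"'
```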

## Where to go next

- [Enable auto-deletion of job logs](./auto-delete-job-logs.md)

@@ -0,0 +1,71 @@
---
title: Audit Jobs via the Web Interface
description: View a list of jobs happening within DTR and review the detailed logs for each job.
keywords: dtr, troubleshoot, audit, job logs, jobs, ui
---

> BETA DISCLAIMER
>
> This is beta content. It is not yet complete and should be considered a work in progress. This content is subject to change without notice.

Since DTR 2.2, admins have been able to [view and audit jobs within DTR](audit-jobs-via-api.md) using the API. DTR 2.6 enhances those capabilities by adding a **Job Logs** tab under **System** settings on the user interface. The tab displays a sortable and paginated list of jobs along with links to associated job logs.

## Prerequisite
* [Job Queue](job-queue.md)

## View Jobs List

To view the list of jobs within DTR, do the following:

1. Navigate to `https://<dtr-url>` and log in with your UCP credentials.

2. Select **System** from the left navigation pane, and then click **Job Logs**. You should see a paginated list of past, running, and queued jobs. By default, **Job Logs** shows the latest `10` jobs on the first page.

   {: .img-fluid .with-border}

3. Specify a filtering option. **Job Logs** lets you filter by:

   * Action: See [Job Queue: Job Types](job-queue/#job-types) for an explanation of the different actions or job types.

   * Worker ID: The ID of the worker in a DTR replica that is responsible for running the job.

   {: .img-fluid .with-border}

4. Optional: Click **Edit Settings** on the right of the filtering options to update your **Job Logs** settings. See [Enable auto-deletion of job logs](auto-delete-job-logs) for more details.

### Job Details

The following is an explanation of the job-related fields displayed in **Job Logs**, using the filtered `onlinegc` action from above.

| Job Detail   | Description                                       | Example |
|:-------------|:--------------------------------------------------|:--------|
| Action       | The type of action or job being performed. See [Job Types](./job-queue/#job-types) for a full list of job types. | `onlinegc` |
| ID           | The ID of the job. | `ccc05646-569a-4ac4-b8e1-113111f63fb9` |
| Worker       | The ID of the worker node responsible for running the job. | `8f553c8b697c` |
| Status       | Current status of the action or job. See [Job Status](./job-queue/#job-status) for more details. | `done` |
| Start Time   | Time when the job started. | `9/23/2018 7:04 PM` |
| Last Updated | Time when the job was last updated. | `9/23/2018 7:04 PM` |
| View Logs    | Links to the full logs for the job. | `[View Logs]` |

## View Job-specific Logs

To view the log details for a specific job, do the following:

1. Click **View Logs** next to the job's **Last Updated** value. You will be redirected to the log detail page of your selected job.

   {: .img-fluid .with-border}

   Notice how the job `ID` is reflected in the URL while the `Action` and the abbreviated form of the job `ID` are reflected in the heading. Also, the JSON lines displayed are job-specific [DTR container logs](https://success.docker.com/article/how-to-check-the-docker-trusted-registry-dtr-logs). See [DTR Internal Components](../../architecture/#dtr-internal-components) for more details.

2. Enter or select a different line count to truncate the number of lines displayed. Lines are cut off from the end of the logs.

   {: .img-fluid .with-border}

## Where to go next

- [Enable auto-deletion of job logs](./auto-delete-job-logs.md)

@@ -0,0 +1,54 @@
---
title: Enable Auto-Deletion of Job Logs
description: Enable auto-deletion of old or unnecessary job logs for maintenance.
keywords: dtr, jobs, log, job logs, system
---

> BETA DISCLAIMER
>
> This is beta content. It is not yet complete and should be considered a work in progress. This content is subject to change without notice.

## Overview

Docker Trusted Registry has a global setting for auto-deletion of job logs which allows them to be removed as part of [garbage collection](../configure/garbage-collection.md). Starting in DTR 2.6, admins can enable auto-deletion of job logs based on specified conditions, which are covered below.

## Steps

1. In your browser, navigate to `https://<dtr-url>` and log in with your UCP credentials.

2. Select **System** from the left navigation pane, which displays the **Settings** page by default.

3. Scroll down to **Job Logs** and turn on **Auto-Deletion**.

   {: .img-fluid .with-border}

4. Specify the conditions that trigger job log auto-deletion.

   DTR allows you to set your auto-deletion conditions based on the following optional job log attributes:

   | Name                 | Description                                         | Example |
   |:---------------------|:----------------------------------------------------|:--------|
   | Age                  | Lets you remove job logs which are older than your specified number of hours, days, weeks, or months. | `2 months` |
   | Max number of events | Lets you specify the maximum number of job logs allowed within DTR. | `100` |

   {: .img-fluid .with-border}

   If you check and specify both, job logs will be removed from DTR during garbage collection if either condition is met. You should see a confirmation message right away.

5. Click **Start Deletion** if you're ready. Read more about [garbage collection](../configure/garbage-collection/#under-the-hood) if you're unsure about this operation.

6. Navigate to **System > Job Logs** to confirm that [**onlinegc_joblogs**](job-queue/#job-types) has started. For a detailed breakdown of individual job logs, see [View Job-specific Logs](audit-jobs-via-ui/#view-job-specific-logs) in "Audit Jobs via the Web Interface."

   {: .img-fluid .with-border}

> Job Log Deletion
>
> When you enable auto-deletion of job logs, the logs will be permanently deleted during garbage collection. See [Configure logging drivers](../../../../config/containers/logging/configure/) for a list of supported logging drivers and plugins.

## Where to go next

- [Monitor Docker Trusted Registry](monitor-and-troubleshoot.md)

@@ -0,0 +1,80 @@
---
title: Job Queue
description: Learn how Docker Trusted Registry runs batch jobs for troubleshooting job-related issues.
keywords: dtr, job queue, job management
---

Docker Trusted Registry (DTR) uses a job queue to schedule batch jobs. Jobs are added to a cluster-wide job queue, and then consumed and executed by a job runner within DTR.

![batch jobs diagram](/ee/dtr/images/troubleshoot-batch-jobs-1.svg)

All DTR replicas have access to the job queue, and have a job runner component
that can get and execute work.

## How it works

When a job is created, it is added to a cluster-wide job queue and enters the `waiting` state.
When one of the DTR replicas is ready to claim the job, it waits a random time of up
to `3` seconds to give every replica the opportunity to claim the task.

A replica claims a job by adding its replica ID to the job. That way, other
replicas will know the job has been claimed. Once a replica claims a job, it adds
that job to an internal queue, which in turn sorts the jobs by their `scheduledAt` time.
Once that happens, the replica updates the job status to `running`, and
starts executing it.

The job runner component of each DTR replica keeps a `heartbeatExpiration`
entry on the database that is shared by all replicas. If a replica becomes
unhealthy, other replicas notice the change and update the status of the failing worker to `dead`.
Also, all the jobs that were claimed by the unhealthy replica enter the `worker_dead` state,
so that other replicas can claim the job.
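
To watch these state transitions for yourself, here is a minimal sketch that polls the jobs endpoint described in [Audit Jobs via the API](audit-jobs-via-api.md); `<dtr-url>`, `<username>`, and `<token>` are placeholders:

```bash
# Poll the job queue every 5 seconds and print each job's status and claimed worker (sketch)
while true; do
  curl -s -u <username>:<token> "https://<dtr-url>/api/v0/jobs/" \
    | jq -r '.jobs[] | "\(.id)\t\(.status)\tworker=\(.workerID)"'
  sleep 5
done
```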

## Job Types

DTR runs periodic and long-running jobs. The following is a complete list of jobs you can filter for via [the user interface](audit-jobs-via-ui.md) or [the API](audit-jobs-via-api.md).

| Job                | Description |
|:-------------------|:------------|
| gc                 | A garbage collection job that deletes layers associated with deleted images. |
| onlinegc           | A garbage collection job that deletes layers associated with deleted images without putting the registry in read-only mode. |
| onlinegc_metadata  | A garbage collection job that deletes metadata associated with deleted images. |
| onlinegc_joblogs   | A garbage collection job that deletes job logs based on a configured job history setting. |
| metadatastoremigration | A necessary migration that enables the `onlinegc` feature. |
| sleep              | Used for testing the correctness of the jobrunner. It sleeps for 60 seconds. |
| false              | Used for testing the correctness of the jobrunner. It runs the `false` command and immediately fails. |
| tagmigration       | Used for synchronizing tag and manifest information between the DTR database and the storage backend. |
| bloblinkmigration  | A DTR 2.1 to 2.2 upgrade process that adds references for blobs to repositories in the database. |
| license_update     | Checks for license expiration extensions if online license updates are enabled. |
| scan_check         | An image security scanning job. This job does not perform the actual scanning; rather, it spawns `scan_check_single` jobs (one for each layer in the image). Once all of the `scan_check_single` jobs are complete, this job will terminate. |
| scan_check_single  | A security scanning job for a particular layer given by the `SHA256SUM` parameter. This job breaks up the layer into components and checks each component for vulnerabilities. |
| scan_check_all     | A security scanning job that updates all of the currently scanned images to display the latest vulnerabilities. |
| update_vuln_db     | A job that is created to update DTR's vulnerability database. It uses an Internet connection to check for database updates through `https://dss-cve-updates.docker.com/` and updates the `dtr-scanningstore` container if there is a new update available. |
| scannedlayermigration | A DTR 2.4 to 2.5 upgrade process that restructures scanned image data. |
| push_mirror_tag    | A job that pushes a tag to another registry after a push mirror policy has been evaluated. |
| poll_mirror        | A global cron that evaluates poll mirroring policies. |
| webhook            | A job that is used to dispatch a webhook payload to a single endpoint. |
| nautilus_update_db | The old name for the `update_vuln_db` job. This may be visible on old log files. |
| ro_registry        | A user-initiated job for manually switching DTR into read-only mode. |
| tag_pruning        | A job for cleaning up unnecessary or unwanted repository tags which can be configured by repository admins. For configuration options, see [Tag Pruning](../../user/tag-pruning). |

## Job Status

Jobs can have one of the following status values:

| Status          | Description |
|:----------------|:------------|
| waiting         | Unclaimed job waiting to be picked up by a worker. |
| running         | The job is currently being run by the specified `workerID`. |
| done            | The job has successfully completed. |
| error           | The job has completed with errors. |
| cancel_request  | The status of a job is monitored by the worker in the database. If the job status changes to `cancel_request`, the job is canceled by the worker. |
| cancel          | The job has been canceled and was not fully executed. |
| deleted         | The job and its logs have been removed. |
| worker_dead     | The worker for this job has been declared `dead` and the job will not continue. |
| worker_shutdown | The worker that was running this job has been gracefully stopped. |
| worker_resurrection | The worker for this job has reconnected to the database and will cancel this job. |

## Where to go next

- [Audit Jobs via the Web Interface](audit-jobs-via-ui.md)
- [Audit Jobs via the API](audit-jobs-via-api.md)

@@ -7,33 +7,32 @@ redirect_from:
- /datacenter/dtr/2.5/guides/user/access-tokens/
---

Docker Trusted Registry allows you to create and distribute access tokens to enable programmatic access to DTR. Access tokens are linked to a particular user account and duplicate whatever permissions that account has at time of use. If the account changes permissions, so will the token.

Access tokens are useful in cases such as building integrations, since you can issue multiple tokens, one for each integration, and revoke them at any time.

## Create an access token

1. To create an access token for the first time, log in to `https://<dtr-url>` with your UCP credentials.

2. Expand your **Profile** from the left navigation pane and select **Profile > Access Tokens**.

   {: .with-border}

3. Click **New access token** and add a description for your token. Specify something which indicates where the token is going to be used, or set a purpose for the token. Administrators can also create tokens for other users.

   {: .with-border}

## Modify an access token

Once the token is created, you will not be able to see it again. You do have the option to rename, deactivate, or delete the token as needed. You can delete the token by selecting it and clicking **Delete**, or you can click **View Details**:

{: .with-border}

## Use the access token

You can use an access token anywhere that requires your DTR password.
As an example, you can pass your access token to the `--password` or `-p` option when logging in from your Docker CLI client:

```bash
docker login dtr.example.org --username <username> --password <token>
```
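
Because a token stands in for your password everywhere, it also works for DTR's API over basic auth. A sketch, assuming the repositories listing endpoint is `GET /api/v0/repositories` (check your DTR API reference):

```bash
# Authenticate an API call with an access token instead of a password (sketch)
curl -s -u <username>:<token> "https://<dtr-url>/api/v0/repositories"
```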

@@ -9,36 +9,48 @@ redirect_from:
Since DTR is secure by default, you need to create the image repository before
being able to push the image to DTR.

In this example, we'll create the `wordpress` repository in DTR.

## Create a repository

1. To create an image repository for the first time, log in to `https://<dtr-url>` with your UCP credentials.

2. Select **Repositories** from the left navigation pane and click **New repository** on the upper right corner of the Repositories page.

   {: .with-border}

3. Select your namespace and enter a name for your repository. You can optionally add a description.

4. Choose whether your repository is `public` or `private`:

   * Public repositories are visible to all users, but can only be changed by
   users with write permissions to them.
   * Private repositories can only be seen by users that have been granted
   permissions to that repository.

   {: .with-border}

5. Click **Create** to create the repository.

   When creating a repository in DTR, the full name of the repository becomes
   `<dtr-domain-name>/<user-or-org>/<repository-name>`. In this example, the full
   name of our repository will be `dtr-example.com/test-user-1/wordpress`.

6. Optional: Click **Show advanced settings** to make your tags immutable or set your image scanning trigger.

   {: .with-border}

> Immutable Tags and Tag Limit
>
> Starting in DTR 2.6, repository admins can enable tag pruning by [setting a tag limit](tag-pruning/#set-a-tag-limit). This can only be set if you turn off **Immutability** and allow your repository tags to be overwritten.

> Image name size for DTR
>
> When creating an image name for use with DTR, ensure that the organization and repository name has fewer than 56 characters and that the entire image name, which includes domain, organization, and repository name, does not exceed 255 characters.
>
> The 56-character `<user-or-org/repository-name>` limit in DTR is due to an underlying limitation in how the image name information is stored within DTR metadata in RethinkDB. RethinkDB currently has a Primary Key length limit of 127 characters.
>
> When DTR stores the above data it appends a sha256sum consisting of 72 characters to the end of the value to ensure uniqueness within the database. If the `<user-or-org/repository-name>` exceeds 56 characters, it will then exceed the 127-character limit in RethinkDB (72+56=128).
{: .important}
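
To double-check a candidate name against that limit before creating the repository, a small sketch (the `name` value is just an illustration):

```bash
# Verify that <user-or-org/repository-name> stays within DTR's 56-character limit
name="test-user-1/wordpress"
if [ "${#name}" -le 56 ]; then
  echo "ok: ${#name} characters"
else
  echo "too long: ${#name} characters (limit is 56)"
fi
```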

@@ -62,13 +62,19 @@ To enable SAML authentication:

   ![Enabling SAML in UCP](../../images/saml_enabled.png)

4. In the **SAML Enabled** section, select **Yes** to display the required settings. The settings are grouped by those needed by the identity provider server and by those needed by UCP as a SAML service provider.

   ![Enabling SAML in UCP](../../images/saml_enabled_settings.png)

5. In **IdP Metadata URL** enter the URL for the identity provider's metadata.
6. If the metadata URL is publicly certified, you can leave **Skip TLS Verification** unchecked and **Root Certificates Bundle** blank, which is the default. Skipping TLS verification is not recommended in production environments. If the metadata URL cannot be certified by the default certificate authority store, you must provide the certificates from the identity provider in the **Root Certificates Bundle** field.
7. In **UCP Host** enter the URL that includes the IP address or domain of your UCP installation. The port number is optional. The current IP address or domain appears by default.

   ![SAML settings](../../images/saml_settings.png)

8. To customize the text of the sign-in button, enter your button text in the **Customize Sign In Button Text** field. The default text is 'Sign in with SAML'.
9. The **Service Provider Metadata URL** and **Assertion Consumer Service (ACS) URL** appear in shaded boxes. Select the copy icon at the right side of each box to copy that URL to the clipboard for pasting in the identity provider workflow.
10. Select **Save** to complete the integration.

## Security considerations

@@ -15,13 +15,12 @@ known issues for the latest UCP version.
You can then use [the upgrade instructions](admin/install/upgrade.md) to
upgrade your installation to the latest release.

* [Version 3.1](#version-31)
* [Version 3.0](#version-30)
* [Version 2.2](#version-22)

# Version 3.1

## Beta 1 (2018-09-11)

**New Features**
* Default address pool for Swarm is now user-configurable
* UCP now supports Kubernetes Network Encryption using IPSec

@@ -4,7 +4,11 @@ keywords: guide, swarm mode, node
title: Join nodes to a swarm
---

> BETA DISCLAIMER
>
> This is beta content. It is not yet complete and should be considered a work in progress. This content is subject to change without notice.

When you first create a swarm, you place a single Docker Engine into
swarm mode. To take full advantage of swarm mode you can add nodes to the swarm:

* Adding worker nodes increases capacity. When you deploy a service to a swarm,

@@ -26,6 +30,10 @@ the `docker swarm join` command. The node only uses the token at join time. If
you subsequently rotate the token, it doesn't affect existing swarm nodes. Refer
to [Run Docker Engine in swarm mode](swarm-mode.md#view-the-join-command-or-update-a-swarm-join-token).

**NOTE:** Docker Engine allows a non-FIPS node to join a FIPS-enabled swarm cluster.

While a mixed FIPS environment makes upgrading or changing status easier, Docker recommends not running a mixed FIPS environment in production.

## Join as a worker node

To retrieve the join command including the join token for worker nodes, run the

@@ -9,6 +9,10 @@ redirect_from:
title: Get Docker EE for Red Hat Enterprise Linux
---

> BETA DISCLAIMER
>
> This is beta content. It is not yet complete and should be considered a work in progress. This content is subject to change without notice.

{% assign linux-dist = "rhel" %}
{% assign linux-dist-cap = "RHEL" %}
{% assign linux-dist-url-slug = "rhel" %}

@@ -44,6 +48,61 @@ On {{ linux-dist-long }}, Docker EE supports storage drivers, `overlay2` and `de

- [Device Mapper](/storage/storagedriver/device-mapper-driver/){: target="_blank" class="_" }: On production systems using `devicemapper`, you must use `direct-lvm` mode, which requires one or more dedicated block devices. Fast storage such as solid-state media (SSD) is recommended. Do not start Docker until properly configured per the [storage guide](/storage/storagedriver/device-mapper-driver/){: target="_blank" class="_" }.

### FIPS 140-2 cryptographic module support

[Federal Information Processing Standards (FIPS) Publication 140-2](https://csrc.nist.gov/csrc/media/publications/fips/140/2/final/documents/fips1402.pdf) is a United States Federal security requirement for cryptographic modules.

With Docker EE Basic license for versions 18.03 and later, Docker provides FIPS 140-2 support in RHEL 7.3, 7.4 and 7.5. This includes a FIPS supported cryptographic module. If the RHEL implementation already has FIPS support enabled, FIPS is automatically enabled in the Docker engine.

To verify the FIPS 140-2 module is enabled in the Linux kernel, confirm that the file `/proc/sys/crypto/fips_enabled` contains `1`.

```
$ cat /proc/sys/crypto/fips_enabled
1
```

**NOTE:** FIPS is only supported in the Docker EE engine. UCP and DTR currently do not have support for FIPS 140-2.

To enable FIPS 140-2 compliance on a system that is not in FIPS 140-2 mode, do the following:

Create a file called `/etc/systemd/system/docker.service.d/fips-module.conf` with the following contents:

```
[Service]
Environment="DOCKER_FIPS=1"
```

Reload the Docker configuration to systemd.

`$ sudo systemctl daemon-reload`

Restart the Docker service as root.

`$ sudo systemctl restart docker`

To confirm Docker is running with FIPS 140-2 enabled, run the `docker info` command:

```
$ docker info --format '{{ .SecurityOptions }}'
[name=selinux name=fips]
```

### Disabling FIPS 140-2

If the system has the FIPS 140-2 cryptographic module installed on the operating system,
it is possible to disable FIPS 140-2 compliance.

To disable FIPS 140-2 in Docker but not the operating system, set the value `DOCKER_FIPS=0`
in `/etc/systemd/system/docker.service.d/fips-module.conf`.

Reload the Docker configuration to systemd.

`$ sudo systemctl daemon-reload`

Restart the Docker service as root.

`$ sudo systemctl restart docker`

### Find your Docker EE repo URL

{% include ee-linux-install-reuse.md section="find-ee-repo-url" %}

@@ -7,13 +7,13 @@ redirect_from:
- /engine/installation/windows/docker-ee/
---

> BETA DISCLAIMER
>
> This is beta content. It is not yet complete and should be considered a work in progress. This content is subject to change without notice.

{% capture filename %}{{ page.win_latest_build }}.zip{% endcapture %} {% capture download_url %}https://download.docker.com/components/engine/windows-server/{{ site.docker_ee_version }}/{{ filename }}{% endcapture %}

Docker Enterprise Edition for Windows Server (*Docker EE*) enables native Docker containers on Windows Server. Windows Server 2016 and later versions are supported. The Docker EE installation package includes everything you need to run Docker on Windows Server. This topic describes pre-install considerations, and how to download and install Docker EE.

> Release notes
>

@@ -73,6 +73,37 @@ sconfig

Select option `6) Download and Install Updates`.

### FIPS 140-2 cryptographic module support

[Federal Information Processing Standards (FIPS) Publication 140-2](https://csrc.nist.gov/csrc/media/publications/fips/140/2/final/documents/fips1402.pdf) is a United States Federal security requirement for cryptographic modules.

With Docker EE Basic license for versions 18.09 and later, Docker provides FIPS 140-2 support in Windows Server 2016. This includes a FIPS supported cryptographic module. If the Windows implementation already has FIPS support enabled, FIPS is automatically enabled in the Docker engine.

**NOTE:** FIPS 140-2 is only supported in the Docker EE engine. UCP and DTR currently do not have support for FIPS 140-2.

To enable FIPS 140-2 compliance on a system that is not in FIPS 140-2 mode, do the following in PowerShell:

```
[System.Environment]::SetEnvironmentVariable("DOCKER_FIPS", "1", "Machine")
```

Restart the Docker service by running the following commands.

```
net stop docker
net start docker
```

To confirm Docker is running with FIPS 140-2 enabled, run the `docker info` command and check its output for the FIPS label:

```
Labels:
 com.docker.security.fips=enabled
```

**NOTE:** If the system has the FIPS 140-2 cryptographic module installed on the operating system, it is possible to disable FIPS 140-2 compliance. To disable FIPS 140-2 in Docker but not the operating system, set the `DOCKER_FIPS` machine environment variable to `"0"` with `[System.Environment]::SetEnvironmentVariable("DOCKER_FIPS", "0", "Machine")` and restart the Docker service.

## Use a script to install Docker EE

Use the following steps when you want to install manually, script automated