Sync published with master (#8848)

* update python and flask usage in dockerfile

* Uses the modern Python 3.7 image, as 3.4 is EOL.
* Separates copying and installing requirements from copying
  project, to make rebuilds more efficient.
* Uses the recommended `flask run` command. This is especially
  needed on Windows, where `app.py` incorrectly looks like an
  executable file when copying into Docker.
* Uses the `FLASK_ENV` env var to control development mode.

* remove unused `app.run()` call

This is not needed when using the recommended `flask run`
command to run the development server.

* remove 0.0.0.0 url

* add gcc so markupsafe compiles speedups

* Published into master (#8824)

* Sync published with master (#8822)

* Updated Windows Release that supports Kubernetes

Changed from the old, outdated Edge release to reflect use of a stable release. The Kubernetes page actually reflects this version as well (so it's an error on this page only).

* Interlock link fixes (#8798)

* Remove outdated links/fix links

* Next steps link fix

* Next steps link fixes

* Logging driver 920 (#8625)

* Logging driver port from vnext-engine

* Update json-file.md

* Update json-file.md

* Port changes from vnext-engine

* Updates based on feedback

* Added note back in

* Added note back in

* Added limitations per Anusha

* New dual logging info

* Added link to new topic

Needs verification.

* Changes per feedback.

* Updates per feedback

* Updates per feedback

* Updated 20m

* Added CE version

* Added missing comma

* Updates per feedback

* Add raw tag
Add TOC entry - subject to change

* Add entry for local logging driver

* Update config/containers/logging/configure.md

Co-Authored-By: Sebastiaan van Stijn <thaJeztah@users.noreply.github.com>

* Update config/containers/logging/configure.md

Co-Authored-By: Sebastiaan van Stijn <thaJeztah@users.noreply.github.com>

* Update config/containers/logging/configure.md

Co-Authored-By: Sebastiaan van Stijn <thaJeztah@users.noreply.github.com>

* Update config/containers/logging/configure.md

Co-Authored-By: Sebastiaan van Stijn <thaJeztah@users.noreply.github.com>

* Updates per feedback

* Updates per feedback

* Update zfs-driver.md (#8735)

* Update zfs-driver.md

* Add suggested correction

* Removed HA Proxy Link

* Added Azure Disk and Azure File Storage for UCP Workloads (#8774)

* Added Azure Disk and Azure File

I have added Azure Disk and Azure File documentation for use with UCP
3.0 or newer.

* Added the Azure Disk Content
* Added the Azure File Content
* Updated the Toc to include Azure Disk and Azure File

Signed-off-by: Olly Pomeroy <olly@docker.com>

* Responding to feedback, inc changing Azure File to Azure Files

Following on from Steven and Deep's feedback, this commit addresses those
nits, including changing `Operators` to `Platform Operators`, switching
`Azure File` to `Azure Files`, and many small formatting changes.

Signed-off-by: Olly Pomeroy <olly@docker.com>

* Minor style updates

* Minor style updates

* Final edits

* Removed Ubuntu 14.04 warnings from Docker UCP install Page (#8804)

We dropped support for Ubuntu 14.04 in Enterprise 2.1 / UCP 3.1, however
the installation instructions still carry 14.04 warnings.

Signed-off-by: Olly Pomeroy <olly@docker.com>

* Fix broken link (#8801)

* ubuntu.md: remove old docker-ce-cli (#8665)

I hit the following error when "upgrading" docker-ce 18.09 to docker-ee 17.06:

> dpkg: error processing archive /var/cache/apt/archives/docker-ee_3%3a17.06.2~ee~19~3-0~ubuntu_amd64.deb (--unpack):
> trying to overwrite '/usr/share/fish/vendor_completions.d/docker.fish', which is also in package docker-ce-cli 5:18.09.4~2.1.rc1-0~ubuntu-xenial

This commit adds `docker-ce-cli` to the list in "uninstall old packages" to fix this.

* Updated UCP CLI Reference to 3.1.7 (#8805)

- Updated all of the UCP 3.1.7 references.
- Alphabetised each reference.
- Added whether a value is expected or not after each variable.

Signed-off-by: Olly Pomeroy <olly@docker.com>

* Fix numbering issue

* Fix formatting

* Added UCP Kubernetes Secure RBAC Defaults (#8810)

* Added Kubernetes Secure RBAC Defaults

* Style updates

* Final edits

* Sync published with master (#8823)


* Add deprecation notice for Hub discovery (#8828)

* Hub Swarm discovery service deprecation

Document the deprecation

Add warning graphic

Fix formatting

* Updated the Layer 7 UI image to have the correct HTTP port, and updated the deployment steps to clarify what can be done via the UI versus the alternative manual approach

Signed-off-by: Steve Richards <steve.james.richards@gmail.com>

* Propose 3 as the number of manager nodes (#8827)

* Propose 3 as the number of manager nodes

Propose 3 managers as the default number of manager nodes.

* Minor style updates

* Typo in document and small update to image (#8837)

* Fix typos (#8650)

* remove extra 'but' on line 40 (#8626)

* Removed redundant TOC entries at top

* Update index.md

* SAML updates for ADFS (#8832)

* Updates for ADFS

* Syntax fix

* Updates per feedback

* Update enable-saml-authentication.md

* Improve webhook management docs for DTR (#8794)

* Improve webhook management docs

UI and API updates

Final updates

Fix link to event types

Standardize word usage

Remove old page

Add clarification of webhook scope

* Incorporate feedback
This commit is contained in:
Maria Bermudez 2019-05-24 16:55:33 -07:00 committed by GitHub
parent 52e081c767
commit d08b77897d
12 changed files with 472 additions and 94 deletions


```diff
@@ -2299,8 +2299,14 @@ manuals:
         path: /ee/dtr/user/audit-repository-events/
       - title: Auto-delete repository events
         path: /ee/dtr/admin/configure/auto-delete-repo-events/
-      - path: /ee/dtr/user/create-and-manage-webhooks/
-        title: Create and manage webhooks
+      - sectiontitle: Manage webhooks
+        section:
+          - title: Create and manage webhooks
+            path: /ee/dtr/admin/manage-webhooks/
+          - title: Use the web interface
+            path: /ee/dtr/admin/manage-webhooks/use-the-web-ui
+          - title: Use the API
+            path: /ee/dtr/admin/manage-webhooks/use-the-api
       - title: Manage access tokens
         path: /ee/dtr/user/access-tokens/
       - title: Tag pruning
```


```diff
@@ -31,7 +31,6 @@ Define the application dependencies.
 import redis
 from flask import Flask
 app = Flask(__name__)
 cache = redis.Redis(host='redis', port=6379)
@@ -53,9 +52,6 @@ Define the application dependencies.
 count = get_hit_count()
 return 'Hello World! I have been seen {} times.\n'.format(count)
-if __name__ == "__main__":
-    app.run(host="0.0.0.0", debug=True)
 In this example, `redis` is the hostname of the redis container on the
 application's network. We use the default port for Redis, `6379`.
@@ -86,19 +82,25 @@ itself.
 In your project directory, create a file named `Dockerfile` and paste the
 following:
-FROM python:3.4-alpine
-ADD . /code
+FROM python:3.7-alpine
 WORKDIR /code
+ENV FLASK_APP app.py
+ENV FLASK_RUN_HOST 0.0.0.0
+RUN apk add --no-cache gcc musl-dev linux-headers
+COPY requirements.txt requirements.txt
 RUN pip install -r requirements.txt
-CMD ["python", "app.py"]
+COPY . .
+CMD ["flask", "run"]
 This tells Docker to:
-* Build an image starting with the Python 3.4 image.
-* Add the current directory `.` into the path `/code` in the image.
+* Build an image starting with the Python 3.7 image.
 * Set the working directory to `/code`.
-* Install the Python dependencies.
-* Set the default command for the container to `python app.py`.
+* Set environment variables used by the `flask` command.
+* Install gcc so Python packages such as MarkupSafe and SQLAlchemy can compile speedups.
+* Copy `requirements.txt` and install the Python dependencies.
+* Copy the current directory `.` in the project to the workdir `.` in the image.
+* Set the default command for the container to `flask run`.
 For more information on how to write Dockerfiles, see the [Docker user
 guide](/engine/tutorials/dockerimages.md#building-an-image-from-a-dockerfile)
@@ -161,13 +163,13 @@ image pulled from the Docker Hub registry.
 Compose pulls a Redis image, builds an image for your code, and starts the
 services you defined. In this case, the code is statically copied into the image at build time.
-2. Enter `http://0.0.0.0:5000/` in a browser to see the application running.
+2. Enter http://localhost:5000/ in a browser to see the application running.
 If you're using Docker natively on Linux, Docker Desktop for Mac, or Docker Desktop for
 Windows, then the web app should now be listening on port 5000 on your
-Docker daemon host. Point your web browser to `http://localhost:5000` to
+Docker daemon host. Point your web browser to http://localhost:5000 to
 find the `Hello World` message. If this doesn't resolve, you can also try
-`http://0.0.0.0:5000`.
+http://127.0.0.1:5000.
 If you're using Docker Machine on a Mac or Windows, use `docker-machine ip
 MACHINE_VM` to get the IP address of your Docker host. Then, open
@@ -222,12 +224,16 @@ Edit `docker-compose.yml` in your project directory to add a [bind mount](/engin
       - "5000:5000"
     volumes:
       - .:/code
+    environment:
+      FLASK_ENV: development
   redis:
     image: "redis:alpine"
 The new `volumes` key mounts the project directory (current directory) on the
 host to `/code` inside the container, allowing you to modify the code on the
-fly, without having to rebuild the image.
+fly, without having to rebuild the image. The `environment` key sets the
+`FLASK_ENV` environment variable, which tells `flask run` to run in development
+mode and reload the code on change. This mode should only be used in development.
 ## Step 6: Re-build and run the app with Compose
```


```diff
@@ -10,9 +10,9 @@ Docker Trusted Registry has a global setting for repository event auto-deletion.
 ## Steps
-1. In your browser, navigate to `https://<dtr-url>` and log in with your UCP credentials.
-2. Select **System** on the left navigation pane which will display the **Settings** page by default.
+1. In your browser, navigate to `https://<dtr-url>` and log in with your admin credentials.
+2. Select **System** from the left navigation pane which displays the **Settings** page by default.
 3. Scroll down to **Repository Events** and turn on ***Auto-Deletion***.
```


@@ -0,0 +1,36 @@
---
title: Manage webhooks
description: Learn how to create, configure, and test webhooks in Docker Trusted Registry.
keywords: registry, webhooks
redirect_from:
- /datacenter/dtr/2.5/guides/user/create-and-manage-webhooks/
- /ee/dtr/user/create-and-manage-webhooks/
---
You can configure DTR to automatically post event notifications to a webhook URL of your choosing. This lets you build complex CI and CD pipelines with your Docker images. The following is a complete list of event types you can trigger webhook notifications for via the [web interface](use-the-web-ui) or the [API](use-the-api).
## Webhook types
| Event Type | Scope | Access Level | Availability |
| --------------------------------------- | ----------------------- | ---------------- | ------------ |
| Tag pushed to repository (`TAG_PUSH`) | Individual repositories | Repository admin | Web UI & API |
| Tag pulled from repository (`TAG_PULL`) | Individual repositories | Repository admin | Web UI & API |
| Tag deleted from repository (`TAG_DELETE`) | Individual repositories | Repository admin | Web UI & API |
| Manifest pushed to repository (`MANIFEST_PUSH`) | Individual repositories | Repository admin | Web UI & API |
| Manifest pulled from repository (`MANIFEST_PULL`) | Individual repositories | Repository admin | Web UI & API |
| Manifest deleted from repository (`MANIFEST_DELETE`) | Individual repositories | Repository admin | Web UI & API |
| Security scan completed (`SCAN_COMPLETED`) | Individual repositories | Repository admin | Web UI & API |
| Security scan failed (`SCAN_FAILED`) | Individual repositories | Repository admin | Web UI & API |
| Image promoted from repository (`PROMOTION`) | Individual repositories | Repository admin | Web UI & API |
| Image mirrored from repository (`PUSH_MIRRORING`) | Individual repositories | Repository admin | Web UI & API |
| Image mirrored from remote repository (`POLL_MIRRORING`) | Individual repositories | Repository admin | Web UI & API |
| Repository created, updated, or deleted (`REPO_CREATED`, `REPO_UPDATED`, and `REPO_DELETED`) | Namespaces / Organizations | Namespace / Org owners | API Only |
| Security scanner update completed (`SCANNER_UPDATE_COMPLETED`) | Global | DTR admin | API only |
You must have admin privileges to a repository or namespace in order to
subscribe to its webhook events. For example, a user must be an admin of repository "foo/bar" to subscribe to its tag push events. A DTR admin can subscribe to any event.
## Where to go next
- [Manage webhooks via the web interface](use-the-web-ui)
- [Manage webhooks via the API](use-the-api)


@@ -0,0 +1,311 @@
---
title: Manage webhooks via the API
description: Learn how to create, configure, and test webhooks for DTR using the API.
keywords: dtr, webhooks, api, registry
---
## Prerequisite
See [Event types for webhooks](/ee/dtr/admin/manage-webhooks/index.md#webhook-types) for a complete list of event types you can trigger notifications for via the API.
## API Base URL
Your DTR hostname serves as the base URL for your API requests.
## Swagger API explorer
From the DTR web interface, click **API** on the bottom left navigation pane to explore the API resources and endpoints. Click **Execute** to send your API request.
## API requests via curl
You can use [curl](https://curl.haxx.se/docs/manpage.html) to send HTTP or HTTPS API requests. Note that you will have to specify `skipTLSVerification: true` on your request in order to test the webhook endpoint over HTTP.
### Example curl request
```bash
curl -u test-user:$TOKEN -X POST "https://dtr-example.com/api/v0/webhooks" -H "accept: application/json" -H "content-type: application/json" -d "{ \"endpoint\": \"https://webhook.site/441b1584-949d-4608-a7f3-f240bdd31019\", \"key\": \"maria-testorg/lab-words\", \"skipTLSVerification\": true, \"type\": \"TAG_PULL\"}"
```
### Example JSON response
```json
{
"id": "b7bf702c31601efb4796da59900ddc1b7c72eb8ca80fdfb1b9fecdbad5418155",
"type": "TAG_PULL",
"key": "maria-testorg/lab-words",
"endpoint": "https://webhook.site/441b1584-949d-4608-a7f3-f240bdd31019",
"authorID": "194efd8e-9ee6-4d43-a34b-eefd9ce39087",
"createdAt": "2019-05-22T01:55:20.471286995Z",
"lastSuccessfulAt": "0001-01-01T00:00:00Z",
"inactive": false,
"tlsCert": "",
"skipTLSVerification": true
}
```
## Subscribe to events
To subscribe to events, send a `POST` request to
`/api/v0/webhooks` with the following JSON payload:
### Example usage
```
{
"type": "TAG_PUSH",
"key": "foo/bar",
"endpoint": "https://example.com"
}
```
The keys in the payload are:
- `type`: The event type to subscribe to.
- `key`: The namespace/organization or repo to subscribe to. For example, "foo/bar" to subscribe to
pushes to the "bar" repository within the namespace/organization "foo".
- `endpoint`: The URL to send the JSON payload to.
Normal users **must** supply a "key" to scope a particular webhook event to
a repository or a namespace/organization. DTR admins can choose to omit this,
meaning a POST event notification of your specified type will be sent for all DTR repositories and namespaces.
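The rules above can be enforced client-side before sending; a minimal Python sketch (`build_subscription`, the `is_admin` flag, and the validation are illustrative, not part of the DTR API — the event-type names come from the table on the index page):

```python
import json

# Webhook event types listed in the "Webhook types" table.
VALID_TYPES = {
    "TAG_PUSH", "TAG_PULL", "TAG_DELETE",
    "MANIFEST_PUSH", "MANIFEST_PULL", "MANIFEST_DELETE",
    "SCAN_COMPLETED", "SCAN_FAILED", "PROMOTION",
    "PUSH_MIRRORING", "POLL_MIRRORING",
    "REPO_CREATED", "REPO_UPDATED", "REPO_DELETED",
    "SCANNER_UPDATE_COMPLETED",
}

def build_subscription(event_type, endpoint, key=None, is_admin=False):
    """Build the JSON body for POST /api/v0/webhooks.

    `key` scopes the webhook to a repository ("foo/bar") or a
    namespace/organization ("foo"); only DTR admins may omit it.
    """
    if event_type not in VALID_TYPES:
        raise ValueError("unknown event type: %s" % event_type)
    if key is None and not is_admin:
        raise ValueError("normal users must supply a key")
    payload = {"type": event_type, "endpoint": endpoint}
    if key is not None:
        payload["key"] = key
    return json.dumps(payload)

body = build_subscription("TAG_PUSH", "https://example.com", key="foo/bar")
```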
### Receive a payload
Whenever your specified event type occurs, DTR will send a POST request to the given
endpoint with a JSON-encoded payload. The payload will always have the
following wrapper:
```
{
"type": "...",
"createdAt": "2012-04-23T18:25:43.511Z",
"contents": {...}
}
```
- `type` refers to the event type received at the specified subscription endpoint.
- `contents` refers to the payload of the event itself. Each event is different, therefore the
structure of the JSON object in `contents` will change depending on the event
type. See [Content structure](#content-structure) for more details.
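A receiver can dispatch on the wrapper's `type` field; a minimal sketch, assuming only the wrapper shape above (the handler map and sample contents are made up):

```python
import json

def handle_event(raw_body, handlers):
    """Parse a DTR webhook payload and dispatch on its event type.

    `handlers` maps event type strings (e.g. "TAG_PUSH") to callables
    that receive the `contents` object; unknown types are ignored.
    """
    event = json.loads(raw_body)
    handler = handlers.get(event["type"])
    if handler is None:
        return None
    return handler(event["contents"])

# A wrapper shaped like the one above, with made-up contents.
payload = json.dumps({
    "type": "TAG_PUSH",
    "createdAt": "2012-04-23T18:25:43.511Z",
    "contents": {"namespace": "foo", "repository": "bar", "tag": "latest"},
})
tag = handle_event(payload, {"TAG_PUSH": lambda contents: contents["tag"]})
```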
### Test payload subscriptions
Before subscribing to an event, you can view and test your endpoints using
fake data. To send a test payload, send a `POST` request to
`/api/v0/webhooks/test` with the following payload:
```
{
"type": "...",
"endpoint": "https://www.example.com/"
}
```
Change `type` to the event type that you want to receive. DTR will then send
an example payload to your specified endpoint. The example
payload sent is always the same.
## Content structure
Comments (following `//`) are for informational purposes only, and the example payloads have been clipped for brevity.
### Repository event content structure
**Tag push**
```
{
"namespace": "", // (string) namespace/organization for the repository
"repository": "", // (string) repository name
"tag": "", // (string) the name of the tag just pushed
"digest": "", // (string) sha256 digest of the manifest the tag points to (eg. "sha256:0afb...")
"imageName": "", // (string) the fully-qualified image name including DTR host used to pull the image (eg. 10.10.10.1/foo/bar:tag)
"os": "", // (string) the OS for the tag's manifest
"architecture": "", // (string) the architecture for the tag's manifest
"author": "", // (string) the username of the person who pushed the tag
"pushedAt": "", // (string) JSON-encoded timestamp of when the push occurred
...
}
```
**Tag delete**
```
{
"namespace": "", // (string) namespace/organization for the repository
"repository": "", // (string) repository name
"tag": "", // (string) the name of the tag just deleted
"digest": "", // (string) sha256 digest of the manifest the tag points to (eg. "sha256:0afb...")
"imageName": "", // (string) the fully-qualified image name including DTR host used to pull the image (eg. 10.10.10.1/foo/bar:tag)
"os": "", // (string) the OS for the tag's manifest
"architecture": "", // (string) the architecture for the tag's manifest
"author": "", // (string) the username of the person who deleted the tag
"deletedAt": "", // (string) JSON-encoded timestamp of when the delete occurred
...
}
```
**Manifest push**
```
{
"namespace": "", // (string) namespace/organization for the repository
"repository": "", // (string) repository name
"digest": "", // (string) sha256 digest of the manifest (eg. "sha256:0afb...")
"imageName": "", // (string) the fully-qualified image name including DTR host used to pull the image (eg. 10.10.10.1/foo/bar@sha256:0afb...)
"os": "", // (string) the OS for the manifest
"architecture": "", // (string) the architecture for the manifest
"author": "", // (string) the username of the person who pushed the manifest
...
}
```
**Manifest delete**
```
{
"namespace": "", // (string) namespace/organization for the repository
"repository": "", // (string) repository name
"digest": "", // (string) sha256 digest of the manifest (eg. "sha256:0afb...")
"imageName": "", // (string) the fully-qualified image name including DTR host used to pull the image (eg. 10.10.10.1/foo/bar@sha256:0afb...)
"os": "", // (string) the OS for the manifest
"architecture": "", // (string) the architecture for the manifest
"author": "", // (string) the username of the person who deleted the manifest
"deletedAt": "", // (string) JSON-encoded timestamp of when the delete occurred
...
}
```
**Security scan completed**
```
{
"namespace": "", // (string) namespace/organization for the repository
"repository": "", // (string) repository name
"tag": "", // (string) the name of the tag scanned
"imageName": "", // (string) the fully-qualified image name including DTR host used to pull the image (eg. 10.10.10.1/foo/bar:tag)
"scanSummary": {
"namespace": "", // (string) repository's namespace/organization name
"repository": "", // (string) repository name
"tag": "", // (string) the name of the tag just pushed
"critical": 0, // (int) number of critical issues, where CVSS >= 7.0
"major": 0, // (int) number of major issues, where CVSS >= 4.0 && CVSS < 7
"minor": 0, // (int) number of minor issues, where CVSS > 0 && CVSS < 4.0
"last_scan_status": 0, // (int) enum; see scan status section
"check_completed_at": "", // (string) JSON-encoded timestamp of when the scan completed
...
}
}
```
**Security scan failed**
```
{
"namespace": "", // (string) namespace/organization for the repository
"repository": "", // (string) repository name
"tag": "", // (string) the name of the tag scanned
"imageName": "", // (string) the fully-qualified image name including DTR host used to pull the image (eg. 10.10.10.1/foo/bar@sha256:0afb...)
"error": "", // (string) the error that occurred while scanning
...
}
```
### Namespace-specific event structure
**Repository event (created/updated/deleted)**
```
{
"namespace": "", // (string) repository's namespace/organization name
"repository": "", // (string) repository name
"event": "", // (string) enum: "REPO_CREATED", "REPO_DELETED" or "REPO_UPDATED"
"author": "", // (string) the name of the user responsible for the event
"data": {} // (object) when updating or creating a repo this follows the same format as an API response from /api/v0/repositories/{namespace}/{repository}
}
```
### Global event structure
**Security scanner update complete**
```
{
"scanner_version": "",
"scanner_updated_at": "", // (string) JSON-encoded timestamp of when the scanner updated
"db_version": 0, // (int) newly updated database version
"db_updated_at": "", // (string) JSON-encoded timestamp of when the database updated
"success": <true|false> // (bool) whether the update was successful
"replicas": { // (object) a map keyed by replica ID containing update information for each replica
"replica_id": {
"db_updated_at": "", // (string) JSON-encoded time of when the replica updated
"version": "", // (string) version updated to
"replica_id": "" // (string) replica ID
},
...
}
}
```
### Security scan status codes
- 0: **Failed**. An error occurred checking an image's layer.
- 1: **Unscanned**. The image has not yet been scanned.
- 2: **Scanning**. Scanning in progress.
- 3: **Pending**. The image will be scanned when a worker is available.
- 4: **Scanned**. The image has been scanned but vulnerabilities have not yet been checked.
- 5: **Checking**. The image is being checked for vulnerabilities.
- 6: **Completed**. The image has been fully security scanned.
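Consumers of `last_scan_status` can map these codes onto an enum; a sketch (the `ScanStatus` and `is_fully_scanned` names are ours, not part of the API):

```python
from enum import IntEnum

class ScanStatus(IntEnum):
    """Values of `last_scan_status` in security scan payloads."""
    FAILED = 0      # an error occurred checking an image's layer
    UNSCANNED = 1   # the image has not yet been scanned
    SCANNING = 2    # scanning in progress
    PENDING = 3     # waiting for a worker
    SCANNED = 4     # scanned, vulnerabilities not yet checked
    CHECKING = 5    # being checked for vulnerabilities
    COMPLETED = 6   # fully security scanned

def is_fully_scanned(code):
    # Only COMPLETED means the image was scanned *and* checked.
    return ScanStatus(code) is ScanStatus.COMPLETED
```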
## View and manage existing subscriptions
### View all subscriptions
To view existing subscriptions, send a `GET` request to `/api/v0/webhooks`. As
a normal user (i.e. not a DTR admin), this will show all of your
current subscriptions across every namespace/organization and repository. As a DTR
admin, this will show **every** webhook configured for your DTR.
The API response will be in the following format:
```
[
{
"id": "", // (string): UUID of the webhook subscription
"type": "", // (string): webhook event type
"key": "", // (string): the individual resource this subscription is scoped to
"endpoint": "", // (string): the endpoint to send POST event notifications to
"authorID": "", // (string): the user ID resposible for creating the subscription
"createdAt": "", // (string): JSON-encoded datetime when the subscription was created
},
...
]
```
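Once fetched, the returned array can be filtered client-side by scope or event type; a sketch over the response shape above (the sample data is made up):

```python
def subscriptions_for(webhooks, key=None, event_type=None):
    """Filter a GET /api/v0/webhooks response list by scope and/or event type."""
    matches = []
    for hook in webhooks:
        if key is not None and hook.get("key") != key:
            continue
        if event_type is not None and hook.get("type") != event_type:
            continue
        matches.append(hook)
    return matches

# Made-up sample shaped like the response format above.
sample = [
    {"id": "1", "type": "TAG_PUSH", "key": "foo/bar", "endpoint": "https://a.example"},
    {"id": "2", "type": "TAG_PULL", "key": "foo/bar", "endpoint": "https://b.example"},
    {"id": "3", "type": "TAG_PUSH", "key": "baz", "endpoint": "https://c.example"},
]
pushes = subscriptions_for(sample, event_type="TAG_PUSH")
```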
For more information, [view the API documentation](/reference/dtr/{{site.dtr_version}}/api/).
### View subscriptions for a particular resource
You can also view subscriptions for a given resource that you are an
admin of. For example, if you have admin rights to the repository
"foo/bar", you can view all subscriptions (even other people's) from a
particular API endpoint. These endpoints are:
- `GET /api/v0/repositories/{namespace}/{repository}/webhooks`: View all
webhook subscriptions for a repository
- `GET /api/v0/repositories/{namespace}/webhooks`: View all webhook subscriptions for a
namespace/organization
### Delete a subscription
To delete a webhook subscription, send a `DELETE` request to
`/api/v0/webhooks/{id}`, replacing `{id}` with the webhook subscription ID
which you would like to delete.
Only a DTR admin or an admin for the resource with the event subscription can delete a subscription. As a normal user, you can only
delete subscriptions for repositories which you manage.
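A sketch of building that request in Python, using the same basic-auth style as the earlier curl example (nothing is sent here; the host, webhook ID, and credentials are placeholders):

```python
import base64
import urllib.request

def delete_request(dtr_host, webhook_id, user, token):
    """Build (but do not send) the DELETE request for one webhook subscription."""
    req = urllib.request.Request(
        url="https://%s/api/v0/webhooks/%s" % (dtr_host, webhook_id),
        method="DELETE",
    )
    # Same basic-auth scheme as `curl -u user:$TOKEN` in the earlier example.
    credentials = base64.b64encode(("%s:%s" % (user, token)).encode()).decode()
    req.add_header("Authorization", "Basic " + credentials)
    req.add_header("Accept", "application/json")
    return req

# Sending is left to the caller via urllib.request.urlopen(req).
req = delete_request("dtr-example.com", "b7bf702c3160", "test-user", "secret")
```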
## Where to go next
- [Create promotion policies](promotion-policies/index.md)


@@ -0,0 +1,54 @@
---
title: Manage repository webhooks via the web interface
description: Learn how to create, configure, and test repository webhooks for DTR using the web interface.
keywords: dtr, webhooks, ui, web interface, registry
---
## Prerequisites
- You must have admin privileges to the repository in order to create a webhook.
- See [Event types](/ee/dtr/admin/manage-webhooks/index.md#webhook-types) for a complete list of event types you can trigger notifications for using the web interface.
## Create a webhook for your repository
1. In your browser, navigate to `https://<dtr-url>` and log in with your credentials.
2. Select **Repositories** from the left navigation pane, and then click on the name of the repository that you want to view. Note that you will have to click on the repository name following the `/` after the specific namespace for your repository.
3. Select the **Webhooks** tab, and click **New Webhook**.
![](/ee/dtr/images/manage-webhooks-1.png){: .with-border}
4. From the drop-down list, select the event that will trigger the webhook.
5. Set the URL which will receive the JSON payload. Click **Test** next to the **Webhook URL** field, so that you can validate that the integration is working. At your specified URL, you should receive a JSON payload for your chosen event type notification.
```json
{
"type": "TAG_PUSH",
"createdAt": "2019-05-15T19:39:40.607337713Z",
"contents": {
"namespace": "foo",
"repository": "bar",
"tag": "latest",
"digest": "sha256:b5bb9d8014a0f9b1d61e21e796d78dccdf1352f23cd32812f4850b878ae4944c",
"imageName": "foo/bar:latest",
"os": "linux",
"architecture": "amd64",
"author": "",
"pushedAt": "2015-01-02T15:04:05Z"
},
"location": "/repositories/foo/bar/tags/latest"
}
```
6. Expand "Show advanced settings" to paste the TLS certificate associated with your webhook URL. For testing purposes, you can test over HTTP instead of HTTPS.
7. Click **Create** to save. Once saved, your webhook is active and starts sending POST notifications whenever your chosen event type is triggered.
![](/ee/dtr/images/manage-webhooks-2.png){: .with-border}
As a repository admin, you can add or delete a webhook at any point. Additionally, you can create, view, and delete webhooks for your organization or trusted registry [using the API](use-the-api).
## Where to go next
- [Manage webhooks via the API](use-the-api)

Two binary image files changed (not shown): 266 KiB before / 45 KiB after, and 267 KiB before / 34 KiB after.


@ -13,9 +13,9 @@ In the following section, we will show you how to view and audit the list of eve
## View List of Events
As of DTR 2.3, admins were able to view a list of DTR events [using the API](/datacenter/dtr/2.3/reference/api/#!/events/GetEvents). DTR 2.6 enhances that feature by showing a permission-based events list for each repository page on the web interface. To view the list of events within a repository, do the following:
1. Navigate to `https://<dtr-url>` and log in with your DTR credentials.
2. Select **Repositories** from the left navigation pane, and then click the name of the repository that you want to view. Note that you must click the repository name that follows the `/` after your namespace.
3. Select the **Activity** tab. You should see a paginated list of the latest events based on your repository permission level. By default, **Activity** shows the latest `10` events and excludes pull events, which are only visible to repository and DTR admins.
   * If you're a repository or a DTR admin, uncheck **Exclude pull** to view pull events. This should give you a better understanding of who is consuming your images.


Name: `is-admin`, Filter: (user defined) for identifying if the user is an admin.
### ADFS integration values
ADFS integration requires the following steps:
1. Add a relying party trust. For example, see [Create a relying party trust](https://docs.microsoft.com/en-us/windows-server/identity/ad-fs/operations/create-a-relying-party-trust).
2. Obtain the service provider metadata URI. This value is the URL for UCP, qualified with `/enzi/v0/saml/metadata`. For example, `https://111.111.111.111/enzi/v0/saml/metadata`.
3. Add claim rules:
    * Convert values from AD to SAML:
        - Display-name : Common Name
        - E-Mail-Addresses : E-Mail Address
        - SAM-Account-Name : Name ID
    * Create a full name for UCP (custom rule):

        ```
        c:[Type == "http://schemas.xmlsoap.org/claims/CommonName"]
        => issue(Type = "fullname", Issuer = c.Issuer, OriginalIssuer = c.OriginalIssuer, Value = c.Value,
        ValueType = c.ValueType);
        ```

    * Transform the account name to Name ID:
        - Incoming type: Name ID
        - Incoming format: Unspecified
        - Outgoing claim type: Name ID
        - Outgoing format: Transient ID
    * Pass the admin value to allow admin access based on AD group (send group membership as a claim):
        - Users group: your admin group
        - Outgoing claim type: is-admin
        - Outgoing claim value: 1
    * Configure group membership (for more complex organizations with multiple groups to manage access):
        - Send LDAP attributes as claims
        - Attribute store: Active Directory
        - Add two rows with the following information:
            - LDAP attribute = email address; outgoing claim type: email address
            - LDAP attribute = Display-Name; outgoing claim type: common name
        - Mapping:
            - Token-Groups - Unqualified Names : member-of
## Configure the SAML integration

This topic covers deploying a layer 7 routing solution into a Docker Swarm to route traffic to Swarm services. Layer 7 routing is also referred to as an HTTP routing mesh.
## Prerequisites
- [Docker](https://www.docker.com) version 17.06 or later
and attaches it to the `ucp-interlock` network. This allows both services
to communicate.
4. The `ucp-interlock-extension` generates a configuration to be used by
the proxy service. By default the proxy service is NGINX, so this service
generates a standard NGINX configuration. UCP creates the `com.docker.ucp.interlock.conf-1` configuration file and uses it to configure all
the internal components of this service.
5. The `ucp-interlock` service takes the proxy configuration and uses it to
start the `ucp-interlock-proxy` service.
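To see these components on a running cluster, you can list the Interlock services and dump the generated configuration from a UCP manager node. The commands below are a sketch; the service and config names assume a default UCP installation:

```shell
# List the Interlock services described in the steps above
docker service ls --filter name=ucp-interlock

# Print the generated configuration stored in the
# com.docker.ucp.interlock.conf-1 Swarm config object
docker config inspect --format '{{ printf "%s" .Spec.Data }}' \
    com.docker.ucp.interlock.conf-1
```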