mirror of https://github.com/docker/docs.git
Merge pull request #20022 from dvdksn/scout-metrics-exporter-datadog
scout: metrics exporter, datadog example
This commit is contained in:
commit 04252ecf22
@@ -27,6 +27,7 @@ DHCP
DNS
DSOS
DVP
Datadog
Ddosify
Dev Environments?
Django
Binary image files not shown (5 added, 1 removed).
@@ -7,9 +7,9 @@ keywords: scout, exporter, prometheus, grafana, metrics, dashboard, api, compose

---

Docker Scout exposes a metrics HTTP endpoint that lets you scrape vulnerability
and policy data from Docker Scout, using Prometheus. With this you can create
your own, self-hosted Docker Scout dashboards for visualizing supply chain
metrics.
and policy data from Docker Scout, using Prometheus or Datadog. With this you
can create your own, self-hosted Docker Scout dashboards for visualizing supply
chain metrics.

## Metrics
@@ -30,7 +30,7 @@ The metrics endpoint exposes the following metrics:

>
> Streams is mostly an internal concept in Docker Scout,
> with the exception of the data exposed through this metrics endpoint.
{ .tip }
{ .tip #stream }

## Creating an access token
@@ -43,7 +43,9 @@ To create a PAT, follow the steps in [Create an access token](/security/for-deve

Once you have created the PAT, store it in a secure location.
You will need to provide this token to the exporter when scraping metrics.
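
Before wiring up a scraper, you can sanity-check the token by querying the exporter endpoint directly. The following is only a sketch: it assumes the endpoint URL format shown in the Datadog configuration later on this page, a `DOCKER_PAT` environment variable holding the token, and `<ORG>` replaced with your organization namespace.

```console
$ curl -sSf -H "Authorization: Bearer $DOCKER_PAT" \
    "https://api.scout.docker.com/v1/exporter/org/<ORG>" | head
```

If the token and namespace are valid, this should return metrics in OpenMetrics text format.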

## Configure Prometheus
## Prometheus

This section describes how to scrape the metrics endpoint using Prometheus.

### Add a job for your organization
@@ -64,16 +66,6 @@ scrape_configs:

The address in the `targets` field is set to the domain name of the Docker Scout API, `api.scout.docker.com`.
Make sure that there's no firewall rule in place preventing the server from communicating with this endpoint.
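
For reference, a minimal sketch of such a scrape job is shown below. The `metrics_path` value here is an assumption based on the exporter URL used in the Datadog section of this page; use the exact path from the sample project's `prometheus.yml`.

```yaml
scrape_configs:
  - job_name: scout-metrics-exporter
    scheme: https
    metrics_path: /v1/exporter/org/<ORG> # assumed path; replace <ORG> with your organization namespace
    static_configs:
      - targets: ["api.scout.docker.com"]
```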

### Scrape interval

By default, Prometheus scrapes the metrics every 15 seconds.
You can change the scrape interval by setting the `scrape_interval` field in the Prometheus configuration file at the global or job level.
A scraping interval of 60 minutes or higher is recommended.

Because of the nature of vulnerability data, the metrics exposed through this API are unlikely to change at a high frequency.
For this reason, the metrics endpoint has a 60-minute cache by default.
If you set the scrape interval to less than 60 minutes, you will see the same data in the metrics for multiple scrapes during that time window.

### Add bearer token authentication

To scrape metrics from the Docker Scout Exporter endpoint using Prometheus, you need to configure Prometheus to use the PAT as a bearer token.
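
A minimal sketch of that configuration is shown below, assuming the PAT is stored in a file mounted at `/etc/prometheus/token` (the path is an assumption; point `credentials_file` at wherever your token file lives):

```yaml
scrape_configs:
  - job_name: scout-metrics-exporter
    authorization:
      type: Bearer
      credentials_file: /etc/prometheus/token
```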
@@ -100,7 +92,7 @@ If you are running Prometheus in a Docker container or Kubernetes pod, mount the

Finally, restart Prometheus to apply the changes.

## Sample project
### Prometheus sample project

If you don't have a Prometheus server set up, you can run a [sample project](https://github.com/dockersamples/scout-metrics-exporter) using Docker Compose.
The sample includes a Prometheus server that scrapes metrics for a Docker organization enrolled in Docker Scout,
@@ -111,16 +103,17 @@ alongside Grafana with a pre-configured dashboard to visualize the vulnerability

```console
$ git clone git@github.com:dockersamples/scout-metrics-exporter.git
$ cd scout-metrics-exporter/prometheus
```

2. [Create a Docker access token](/security/for-developers/access-tokens/#create-an-access-token)
and store it in a plain text file at `prometheus/token` under the template directory.
and store it in a plain text file at `/prometheus/prometheus/token` under the template directory.

```plaintext {title=token}
$ echo $DOCKER_PAT > ./prometheus/token
```

3. In the Prometheus configuration file at `prometheus/prometheus.yml`,
3. In the Prometheus configuration file at `/prometheus/prometheus/prometheus.yml`,
replace `ORG` in the `metrics_path` property on line 6 with the namespace of your Docker organization.

```yaml {title="prometheus/prometheus.yml",hl_lines="6",linenos=1}
@@ -170,11 +163,187 @@ Prometheus UI at <http://localhost:9090/targets>.

To view the Grafana dashboards, go to <http://localhost:3000/dashboards>,
and sign in using the credentials defined in the Docker Compose file (username: `admin`, password: `grafana`).




The dashboards are pre-configured to visualize the vulnerability and policy metrics scraped by Prometheus.

### Revoke an access token

## Datadog

This section describes how to scrape the metrics endpoint using Datadog.
Datadog pulls data for monitoring by running a customizable
[agent](https://docs.datadoghq.com/agent/?tab=Linux) that scrapes available
endpoints for any exposed metrics. The OpenMetrics and Prometheus checks are
included in the agent, so you don’t need to install anything else on your
containers or hosts.

This guide assumes you have a Datadog account and a Datadog API key. Refer to
the [Datadog documentation](https://docs.datadoghq.com/agent) to get started.

### Configure the Datadog agent

To start collecting the metrics, you will need to edit the agent’s
configuration file for the OpenMetrics check. If you're running the agent as a
container, the file must be mounted at
`/etc/datadog-agent/conf.d/openmetrics.d/conf.yaml`.

The following example shows a Datadog configuration that:

- Specifies the OpenMetrics endpoint targeting the `dockerscoutpolicy` Docker organization
- Sets a `namespace` that all collected metrics will be prefixed with
- Lists the [`metrics`](#metrics) you want the agent to scrape (`scout_*`)
- Defines an `auth_token` section for the Datadog agent to authenticate to the metrics
  endpoint, using a Docker PAT as a Bearer token

```yaml
instances:
  - openmetrics_endpoint: "https://api.scout.docker.com/v1/exporter/org/dockerscoutpolicy"
    namespace: "scout-metrics-exporter"
    metrics:
      - scout_*
    auth_token:
      reader:
        type: file
        path: /var/run/secrets/scout-metrics-exporter/token
      writer:
        type: header
        name: Authorization
        value: Bearer <TOKEN>
```
> **Important**
>
> Do not replace the `<TOKEN>` placeholder in the previous configuration
> example. It must stay as it is. Just make sure the Docker PAT is correctly
> mounted into the Datadog agent at the specified filesystem path. Save the
> file as `conf.yaml` and restart the agent.
{ .important }

When creating a Datadog agent configuration of your own, make sure to edit the
`openmetrics_endpoint` property to target your organization by replacing
`dockerscoutpolicy` with the namespace of your Docker organization.

### Datadog sample project

If you don't have a Datadog server set up, you can run a [sample project](https://github.com/dockersamples/scout-metrics-exporter)
using Docker Compose. The sample includes a Datadog agent, running as a
container, that scrapes metrics for a Docker organization enrolled in Docker
Scout. This sample project assumes that you have a Datadog account, an API key,
and a Datadog site.
1. Clone the starter template for bootstrapping a Datadog Compose service for
scraping the Docker Scout metrics endpoint:

```console
$ git clone git@github.com:dockersamples/scout-metrics-exporter.git
$ cd scout-metrics-exporter/datadog
```
2. [Create a Docker access token](/security/for-developers/access-tokens/#create-an-access-token)
and store it in a plain text file at `/datadog/token` under the template directory.

```plaintext {title=token}
$ echo $DOCKER_PAT > ./token
```
3. In the `/datadog/compose.yaml` file, update the `DD_API_KEY` and `DD_SITE` environment variables
with the values for your Datadog deployment.

```yaml {hl_lines="5-6"}
datadog-agent:
  container_name: datadog-agent
  image: gcr.io/datadoghq/agent:7
  environment:
    - DD_API_KEY=${DD_API_KEY} # e.g. 1b6b3a42...
    - DD_SITE=${DD_SITE} # e.g. datadoghq.com
    - DD_DOGSTATSD_NON_LOCAL_TRAFFIC=true
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock:ro
    - ./conf.yaml:/etc/datadog-agent/conf.d/openmetrics.d/conf.yaml:ro
    - ./token:/var/run/secrets/scout-metrics-exporter/token:ro
```
The `volumes` section mounts the Docker socket from the host to the
container. This is required to obtain an accurate hostname when running as a
container ([more details here](https://docs.datadoghq.com/agent/troubleshooting/hostname_containers/)).

It also mounts the agent's config file and the Docker access token.
4. Edit the `/datadog/config.yaml` file by replacing the placeholder `<ORG>` in
the `openmetrics_endpoint` property with the namespace of the Docker
organization that you want to collect metrics for.
```yaml {hl_lines=2}
instances:
  - openmetrics_endpoint: "https://api.scout.docker.com/v1/exporter/org/<ORG>"
    namespace: "scout-metrics-exporter"
    # ...
```
5. Start the Compose services.

```console
$ docker compose up -d
```
If everything is configured properly, you should see the OpenMetrics check listed
under Running Checks when you run the agent’s status command.
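
With the Compose setup from this sample, one way to run the status command is through `docker exec`, assuming the `datadog-agent` container name defined in step 3:

```console
$ docker exec -it datadog-agent agent status
```

The output should look similar to this: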
```text
openmetrics (4.2.0)
-------------------
Instance ID: openmetrics:scout-prometheus-exporter:6393910f4d92f7c2 [OK]
Configuration Source: file:/etc/datadog-agent/conf.d/openmetrics.d/conf.yaml
Total Runs: 1
Metric Samples: Last Run: 236, Total: 236
Events: Last Run: 0, Total: 0
Service Checks: Last Run: 1, Total: 1
Average Execution Time : 2.537s
Last Execution Date : 2024-05-08 10:41:07 UTC (1715164867000)
Last Successful Execution Date : 2024-05-08 10:41:07 UTC (1715164867000)
```

For a comprehensive list of options, take a look at this [example config file](https://github.com/DataDog/integrations-core/blob/master/openmetrics/datadog_checks/openmetrics/data/conf.yaml.example) for the generic OpenMetrics check.

### Visualizing your data

Once the agent is configured to collect Prometheus metrics, you can use them to build comprehensive Datadog graphs, dashboards, and alerts.

Go into your [Metric summary page](https://app.datadoghq.com/metric/summary?filter=scout_prometheus_exporter)
to see the metrics collected from this example. This configuration will collect
all exposed metrics starting with `scout_` under the namespace
`scout_metrics_exporter`.



The following screenshots show examples of a Datadog dashboard containing
graphs about vulnerability and policy compliance for a specific [stream](#stream).



> The lines in the graphs look flat because vulnerability data doesn't change
> very often, and because of the short time interval selected in the date picker.

## Scrape interval

By default, Prometheus and Datadog scrape metrics at a 15-second interval.
Because of the nature of vulnerability data, the metrics exposed through this API are unlikely to change at a high frequency.
For this reason, the metrics endpoint has a 60-minute cache by default,
which means a scraping interval of 60 minutes or higher is recommended.
If you set the scrape interval to less than 60 minutes, you will see the same data in the metrics for multiple scrapes during that time window.

To change the scrape interval:

- Prometheus: set the `scrape_interval` field in the Prometheus configuration
  file at the global or job level.
- Datadog: set the `min_collection_interval` property in the Datadog agent
  configuration file, as sketched below; see also the [Datadog documentation](https://docs.datadoghq.com/developers/custom_checks/write_agent_check/#updating-the-collection-interval).
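
The following sketches show what those settings can look like; the file layout and surrounding keys are assumptions based on the sample projects above:

```yaml
# prometheus.yml: scrape the Scout exporter once per hour (global level)
global:
  scrape_interval: 60m
```

```yaml
# conf.yaml for the Datadog OpenMetrics check: per-instance interval in seconds
instances:
  - openmetrics_endpoint: "https://api.scout.docker.com/v1/exporter/org/<ORG>"
    min_collection_interval: 3600
```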

## Revoke an access token

If you suspect that your PAT has been compromised or is no longer needed, you can revoke it at any time.
To revoke a PAT, follow the steps in [Create and manage access tokens](/security/for-developers/access-tokens/#modify-existing-tokens).