Update UCP configure screenshots (#10074)

* Update screenshots

* Edit audit logging

* Update orch type topic
Traci Morrison 2020-01-02 13:32:18 -05:00 committed by GitHub
parent 2f514979b4
commit 059d2baac1
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
5 changed files with 41 additions and 40 deletions

View File

@ -6,7 +6,7 @@ keywords: cluster, node, label, swarm, metadata
With Docker UCP, you can add labels to your nodes. Labels are metadata that
describe the node, like its role (development, QA, production), its region
(US, EU, APAC), or the kind of disk (hdd, ssd). Once you have labeled your
(US, EU, APAC), or the kind of disk (HDD, SSD). Once you have labeled your
nodes, you can add deployment constraints to your services, to ensure they
are scheduled on a node with a specific label.
@ -22,7 +22,7 @@ to organize access to your cluster.
## Apply labels to a node
In this example we'll apply the `ssd` label to a node. Then we'll deploy
In this example, we'll apply the `ssd` label to a node. Next, we'll deploy
a service with a deployment constraint to make sure the service is always
scheduled to run on a node that has the `ssd` label.
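
For comparison, the same outcome can be sketched from the CLI with standard swarm commands. This is a rough sketch, not part of the UI walkthrough: it assumes the label is stored as key `disk` with value `ssd` (matching the constraint example later in this topic), and `<node-id>` and the image are placeholders.

```
# Label the node, then create a service constrained to nodes carrying that label.
docker node update --label-add disk=ssd <node-id>
docker service create --name db \
  --constraint 'node.labels.disk == ssd' \
  postgres:latest
```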
@ -31,7 +31,7 @@ scheduled to run on a node that has the `ssd` label.
3. In the nodes list, select the node to which you want to apply labels.
4. In the details pane, select the edit node icon in the upper-right corner to edit the node.
![](../../images/add-labels-to-cluster-nodes-3.png)
![](../../images/v32-edit-node.png)
5. In the **Edit Node** page, scroll down to the **Labels** section.
6. Select **Add Label**.
@ -113,13 +113,13 @@ click **Done**.
6. Navigate to the **Nodes** page, and click the node that has the
`disk` label. In the details pane, click the **Inspect Resource**
dropdown and select **Containers**.
drop-down menu and select **Containers**.
![](../../images/use-constraints-in-stack-deployment-2.png)
Dismiss the filter and navigate to the **Nodes** page. Click a node that
doesn't have the `disk` label. In the details pane, click the
**Inspect Resource** dropdown and select **Containers**. There are no
**Inspect Resource** drop-down menu and select **Containers**. There are no
WordPress containers scheduled on the node. Dismiss the filter.
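
For reference, here is a hedged CLI sketch of the same flow: a minimal stack whose WordPress service carries a placement constraint on the `disk` label, followed by a check of where its tasks were scheduled. The file name, stack name, and the `ssd` value are assumptions for illustration.

```
# Write a minimal stack file with a placement constraint, deploy it, and
# check task placement. Names and values are placeholders.
cat > wordpress-stack.yml <<'EOF'
version: "3.7"
services:
  wordpress:
    image: wordpress:latest
    deploy:
      placement:
        constraints:
          - node.labels.disk == ssd
EOF

docker stack deploy -c wordpress-stack.yml wordpress
docker stack ps wordpress   # tasks should appear only on nodes carrying the disk label
```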
## Add a constraint to a service by using the UCP web UI

View File

@ -10,10 +10,10 @@ individual users, administrators, or software components that have affected the
system. They focus on external user/agent actions and security, rather than on the
internal state or events of the system itself.
Audit Logs capture all HTTP actions (GET, PUT, POST, PATCH, DELETE) to all UCP
Audit logs capture all HTTP actions (GET, PUT, POST, PATCH, DELETE) to all UCP
API, Swarm API, and Kubernetes API endpoints that are invoked (except for those
on the ignored list), and send them to the Docker Engine via stdout. Audit logging is a UCP
component that integrates with Swarm, K8s, and UCP APIs.
component that integrates with Swarm, Kubernetes, and UCP APIs.
## Logging levels
@ -35,6 +35,8 @@ logging levels are provided:
- **Request**: includes all fields from the Metadata level as well as the
request payload.
> Note
>
> Once UCP audit logging has been enabled, audit logs can be found within the
> container logs of the `ucp-controller` container on each UCP manager node.
> Please ensure you have a
@ -46,10 +48,10 @@ request payload.
You can use audit logs to help with the following use cases:
- **Historical Troubleshooting** - Audit logs are helpful in determining a
sequence of past events that explain why an issue occured.
- **Historical troubleshooting** - Audit logs are helpful in determining a
sequence of past events that explain why an issue occurred.
- **Security Analysis and Auditing** - Security is one of the primary uses for
- **Security analysis and auditing** - Security is one of the primary uses for
audit logs. A full record of all user interactions with the container
infrastructure gives your security team full visibility into questionable or
attempted unauthorized access.
@ -61,27 +63,24 @@ generate chargeback information.
created by the event, alerting features can be built on top of event tools that
generate alerts for ops teams (PagerDuty, OpsGenie, Slack, or custom solutions).
## Enabling UCP Audit Logging
## Enabling UCP audit logging
UCP audit logging can be enabled via the UCP web user interface, the UCP API, or
the UCP configuration file.
### Enabling UCP Audit Logging via UI
### Enabling UCP audit logging using the web UI
1) Log in to the **UCP** Web User Interface
2) Navigate to **Admin Settings**
3) Select **Audit Logs**
4) In the **Configure Audit Log Level** section, select the relevant logging
1. Log in to the UCP web user interface.
2. Navigate to **Admin Settings**.
3. Select **Audit Logs**.
4. In the **Configure Audit Log Level** section, select the relevant logging
level.
![Enabling Audit Logging in UCP](../../images/auditlogging.png){: .with-border}
5) Click **Save**
5. Click **Save**.
### Enabling UCP Audit Logging via API
### Enabling UCP audit logging using the API
1. Download the UCP client bundle. For instructions, see [Download client bundle from the command line](https://success.docker.com/article/download-client-bundle-from-the-cli).
@ -108,11 +107,10 @@ level.
curl --cert ${DOCKER_CERT_PATH}/cert.pem --key ${DOCKER_CERT_PATH}/key.pem --cacert ${DOCKER_CERT_PATH}/ca.pem -k -H "Content-Type: application/json" -X PUT --data "$(cat auditlog.json)" https://ucp-domain/api/ucp/config/logging
```
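
The curl command above uploads the contents of `auditlog.json`, which is created in a step not shown in this diff. As a rough sketch only, assuming the endpoint accepts an `auditLevel` field (verify the exact schema against the UCP API reference for your version):

```
# Sketch only: auditLevel mirrors the logging levels described above
# ("", "metadata", or "request"); the field name is an assumption.
cat > auditlog.json <<'EOF'
{
    "auditLevel": "metadata"
}
EOF
```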
### Enabling UCP Audit Logging via Config File
### Enabling UCP audit logging using the configuration file
Enabling UCP audit logging via the UCP Configuration file can be done before
or after a UCP installation. Following the UCP Configuration file documentation
[here](./ucp-configuration-file/).
Enabling UCP audit logging via the UCP configuration file can be done before
or after a UCP installation. Refer to the [UCP configuration file](./ucp-configuration-file/) topic for more information.
The section of the UCP configuration file that controls UCP audit logging is:
@ -124,20 +122,23 @@ The section of the UCP configuration file that controls UCP auditing logging is:
The supported values for `level` are `""`, `"metadata"`, or `"request"`.
> Important: The `support_dump_include_audit_logs` flag specifies whether user identification information from the ucp-controller container logs is included in the support dump. To prevent this information from being sent with the support dump, make sure that `support_dump_include_audit_logs` is set to `false`. When disabled, the support dump collection tool filters out any lines from the `ucp-controller` container logs that contain the substring `auditID`.
> Note
>
> The `support_dump_include_audit_logs` flag specifies whether user identification information from the ucp-controller container logs is included in the support dump. To prevent this information from being sent with the support dump, make sure that `support_dump_include_audit_logs` is set to `false`. When disabled, the support dump collection tool filters out any lines from the `ucp-controller` container logs that contain the substring `auditID`.
{: .important}
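
The configuration block itself is elided from this diff. As a rough sketch, assuming the `[audit_log_configuration]` table name from the UCP configuration file reference, the two settings discussed above can be appended to a config file like this (verify the table and key names for your UCP version):

```
# Hypothetical fragment of the UCP configuration file (TOML), written from a shell.
cat >> ucp-config.toml <<'EOF'
[audit_log_configuration]
  # "" disables audit logging; "metadata" and "request" enable it at that level
  level = "metadata"
  # keep user identification data out of support dumps
  support_dump_include_audit_logs = false
EOF
```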
## Accessing Audit Logs
## Accessing audit logs
The audit logs are exposed today through the `ucp-controller` logs. You can
access these logs locally through the Docker cli or through an external
access these logs locally through the Docker CLI or through an external
container logging solution, such as [ELK](https://success.docker.com/article/elasticsearch-logstash-kibana-logging).
### Accessing Audit Logs via the Docker Cli
### Accessing audit logs using the Docker CLI
1) Source a UCP Client Bundle
To access audit logs using the Docker CLI:
2) Run `docker logs` to obtain audit logs. In the following example,
1. Source a UCP client bundle.
2. Run `docker logs` to obtain audit logs. In the following example,
we tail the output to show only the last log entry.
```
@ -208,7 +209,7 @@ Information for the following API endpoints is redacted from the audit logs for
- `/swarm/join` (POST)
- `/swarm/update` (POST)
- `/auth/login` (POST)
- Kube secrete create/update endpoints
- Kubernetes secret create/update endpoints
## Where to go next

View File

@ -24,14 +24,14 @@ both on the same node. Although you can choose to mix orchestrator types on the
same node, this isn't recommended for production deployments because of the
likelihood of resource contention.
To change a node's orchestrator type from the **Edit node** page:
To change a node's orchestrator type from the Edit Node page:
1. Log in to the Docker Enterprise web UI with an administrator account.
2. Navigate to the **Nodes** page, and click the node that you want to assign
to a different orchestrator.
3. In the details pane, click **Configure** and select **Details** to open
the **Edit node** page.
4. In the **Orchestrator properties** section, click the orchestrator type
the Edit Node page.
4. In the **Orchestrator Properties** section, click the orchestrator type
for the node.
5. Click **Save** to assign the node to the selected orchestrator.
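
For reference, the same switch can typically be made from the CLI by updating UCP's orchestrator node labels. This is a sketch under the assumption that the `com.docker.ucp.orchestrator.*` label names apply to your UCP version; run it against a sourced UCP client bundle, with `<node-id>` as a placeholder.

```
# Assumed label names; verify against the UCP reference for your release.
docker node update --label-add com.docker.ucp.orchestrator.kubernetes=true <node-id>
docker node update --label-rm  com.docker.ucp.orchestrator.swarm <node-id>
```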
@ -73,7 +73,7 @@ you'll be in the same situation as if you were running a `Mixed` node.
> committed to another workload that was scheduled by the other orchestrator.
> When this happens, the node could run out of memory or other resources.
>
> For this reason, we recommend against mixing orchestrators on a production
> For this reason, we recommend not mixing orchestrators on a production
> node.
{: .warning}
@ -86,10 +86,10 @@ To set the orchestrator for new nodes:
1. Log in to the Docker Enterprise web UI with an administrator account.
2. Open the **Admin Settings** page, and in the left pane, click **Scheduler**.
3. Under **Set orchestrator type for new nodes** click **Swarm** or **Kubernetes**.
3. Under **Set Orchestrator Type for New Nodes**, click **Swarm** or **Kubernetes**.
4. Click **Save**.
![](../../images/join-nodes-to-cluster-1.png){: .with-border}
![](../../images/v32scheduler.png){: .with-border}
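
The default orchestrator for new nodes can also be set in the UCP configuration file. A rough sketch, assuming the `default_node_orchestrator` option under the `[scheduling_configuration]` table from the UCP configuration file reference (valid values `"swarm"` or `"kubernetes"`):

```
# Hypothetical fragment of the UCP configuration file (TOML).
cat >> ucp-config.toml <<'EOF'
[scheduling_configuration]
  default_node_orchestrator = "kubernetes"
EOF
```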
From now on, when you join a node to the cluster, new workloads on the node
are scheduled by the specified orchestrator type. Existing nodes in the cluster

Binary file not shown. (new image; 243 KiB)

Binary file not shown. (new image; 187 KiB)