Merge pull request #1334 from docker/master

pull private master into sept patch release branch
Dawn W 2019-09-24 13:58:09 -07:00 committed by GitHub
commit ad4cf7ec1e
8 changed files with 122 additions and 144 deletions

View File

@@ -30,7 +30,7 @@ You must perform a manual backup on each manager node, because logs contain node
[Lock your swarm to protect its encryption key](/engine/swarm/swarm_manager_locking.md).
2. Because you must stop the engine of the manager node before performing the backup, having three manager
nodes is recommended for high availability (HA). For a cluster to be operational, a majority of managers
must be online. If fewer than 3 managers exist, the cluster is unavailable during the backup.
> **Note**: During the time that a manager is shut down, your swarm is more vulnerable to
@@ -76,7 +76,7 @@ You must perform a manual backup on each manager node, because logs contain node
```
systemctl start docker
```
7. Except for step 1, repeat the previous steps for each manager node.
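For orientation, the per-manager backup steps described in this procedure boil down to a short shell sequence. This is a minimal sketch, assuming a default installation where Swarm state lives under `/var/lib/docker/swarm`; the archive path and name are placeholders:
```
# Minimal per-manager backup sketch (archive path and name are illustrative)
systemctl stop docker                              # stop the engine before copying Raft state
tar -czf /tmp/swarm-backup-$(hostname).tar.gz \
    -C /var/lib/docker swarm                       # default location of swarm manager state
systemctl start docker                             # bring the manager back online
```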
### Where to go next

View File

@@ -30,13 +30,14 @@ scheduled to run on a node that has the `ssd` label.
2. Select **Nodes** in the left-hand navigation menu.
3. In the nodes list, select the node to which you want to apply labels.
4. In the details pane, select the edit node icon in the upper-right corner to edit the node.
![](../../images/add-labels-to-cluster-nodes-3.png)
5. In the **Edit Node** page, scroll down to the **Labels** section.
6. Select **Add Label**.
7. Add a label with the key `disk` and a value of `ssd`.
![](../../images/add-labels-to-cluster-nodes-2.png){: .with-border}
8. Click **Save** then dismiss the **Edit Node** page.
9. In the node's details pane, select **Labels** to view the labels that are applied to the node.
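The same label can also be applied from a terminal with the Docker CLI; a brief sketch, where `<node-name>` is a placeholder for the node selected in the UI:
```
# Add the disk=ssd label to a swarm node (node name/ID is a placeholder)
docker node update --label-add disk=ssd <node-name>

# Confirm the label is present
docker node inspect --format '{{ .Spec.Labels }}' <node-name>
```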
@@ -60,67 +61,66 @@ service to be scheduled only on nodes that have SSD storage:
1. Navigate to the **Stacks** page.
2. Name the new stack "wordpress".
3. Under **Orchestrator Mode**, select **Swarm Services**.
4. In the **docker-compose.yml** editor, paste the following stack file.
```
version: "3.1"
services:
  db:
    image: mysql:5.7
    deploy:
      placement:
        constraints:
          - node.labels.disk == ssd
      restart_policy:
        condition: on-failure
    networks:
      - wordpress-net
    environment:
      MYSQL_ROOT_PASSWORD: wordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress
  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.labels.disk == ssd
      restart_policy:
        condition: on-failure
        max_attempts: 3
    networks:
      - wordpress-net
    ports:
      - "8000:80"
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_PASSWORD: wordpress
networks:
  wordpress-net:
```
5. Click **Create** to deploy the stack, and when the stack deploys,
click **Done**.
![](../../images/use-constraints-in-stack-deployment.png)
6. Navigate to the **Nodes** page, and click the node that has the
`disk` label. In the details pane, click the **Inspect Resource**
dropdown and select **Containers**.
![](../../images/use-constraints-in-stack-deployment-2.png)
Dismiss the filter and navigate to the **Nodes** page. Click a node that
doesn't have the `disk` label. In the details pane, click the
**Inspect Resource** dropdown and select **Containers**. There are no
WordPress containers scheduled on the node. Dismiss the filter.
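Placement can also be spot-checked from a terminal; a hedged sketch using the stack name created above, with `<node-name>` as a placeholder:
```
# List the wordpress stack's tasks and the nodes they were scheduled on
docker stack ps wordpress --format 'table {{.Name}}\t{{.Node}}\t{{.CurrentState}}'

# Confirm a listed node carries the disk=ssd label
docker node inspect --format '{{ .Spec.Labels }}' <node-name>
```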
## Add a constraint to a service by using the UCP web UI

View File

@@ -77,7 +77,7 @@ with a new worker node. The type of upgrade you perform depends on what is needed
- [Automated, in-place cluster upgrade](#automated-in-place-cluster-upgrade): Performed on any
manager node. Automatically upgrades the entire cluster.
- Manual cluster upgrade: Performed using the CLI. Automatically upgrades manager
nodes and allows you to control the upgrade order of worker nodes. This type of upgrade is more
advanced than the automated, in-place cluster upgrade.
- [Upgrade existing nodes in place](#phased-in-place-cluster-upgrade): Performed using the CLI.
@@ -125,7 +125,7 @@ of manager node upgrades.
- [Automated, in-place cluster upgrade](#automated-in-place-cluster-upgrade): Performed on any
manager node. Automatically upgrades the entire cluster.
- Manual cluster upgrade: Performed using the CLI. Automatically upgrades manager
nodes and allows you to control the upgrade order of worker nodes. This type of upgrade is more
advanced than the automated, in-place cluster upgrade.
- [Upgrade existing nodes in place](#phased-in-place-cluster-upgrade): Performed using the CLI.
@@ -138,28 +138,6 @@ advanced than the automated, in-place cluster upgrade.
in batches of multiple nodes rather than one at a time, and shut down servers to
remove worker nodes. This type of upgrade is the most advanced.
### Use the web interface to perform an upgrade
> **Note**: If you plan to add nodes to the UCP cluster, use the [CLI](#use-the-cli-to-perform-an-upgrade) for the upgrade.
When an upgrade is available for a UCP installation, a banner appears.
![](../../images/upgrade-ucp-1.png){: .with-border}
Clicking this message takes an admin user directly to the upgrade process.
It can be found under the **Upgrade** tab of the **Admin Settings** section.
![](../../images/upgrade-ucp-2.png){: .with-border}
In the **Available Versions** drop-down, select the version you want to upgrade to.
Copy and paste the CLI command provided into a terminal on a manager node to
perform the upgrade.
During the upgrade, the web interface will be unavailable, and you should wait
until completion before continuing to interact with it. When the upgrade
completes, you'll see a notification that a newer version of the web interface
is available and a browser refresh is required to see it.
### Use the CLI to perform an upgrade
There are two different ways to upgrade a UCP Cluster via the CLI. The first is
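The excerpt is cut off at this point. For orientation only, a manual CLI upgrade is typically launched from a manager node with the `docker/ucp` image; the version tag below is a placeholder, not a value taken from this page:
```
# Run the UCP upgrade from a manager node (image tag is illustrative)
docker container run --rm -it \
  --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp:3.2.1 \
  upgrade --interactive
```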
@@ -348,7 +326,7 @@ nodes in the cluster at one time.
Kubelet is unhealthy: Kubelet stopped posting node status
```
- Alternatively, you may see other port errors such as the one below in the ucp-controller
container logs:
```
http: proxy error: dial tcp 10.14.101.141:12388: connect: no route to host
```
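As a quick, hedged follow-up to the error above, reachability of the address and port in the log line can be checked from the manager node (assuming `nc` is installed; the IP and port are taken from the message shown):
```
# Check whether the address/port from the proxy error is reachable from this manager
nc -zv 10.14.101.141 12388
```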

Binary file not shown (removed image, 110 KiB).

Binary file not shown (removed image, 68 KiB).

View File

@@ -91,10 +91,10 @@ istio-pilot ClusterIP 10.96.199.152 <none> 15010/TCP,15011
5) Test the Ingress Deployment
To test that the Envoy proxy is working correctly in the Istio Gateway pods,
there is a status port configured on an internal port 15020. From the above
output we can see that port 15020 is exposed as a Kubernetes NodePort; in the
output above the NodePort is 34300, but this could be different in each
environment.
To check the Envoy proxy's status, there is a health endpoint at
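The excerpt cuts off here. As an illustrative probe only: `/healthz/ready` is the usual Istio readiness path on the status port, and the node address and NodePort below are placeholders for this environment:
```
# Probe the Envoy status port through the NodePort shown above
# (node address and NodePort value are placeholders for your environment)
curl -s -o /dev/null -w "%{http_code}\n" http://<node-address>:34300/healthz/ready
# A 200 response indicates the gateway's Envoy proxy is ready
```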

View File

@@ -111,7 +111,7 @@ To confirm that your client tools are now communicating with UCP, run:
{% raw %}
docker version --format '{{.Server.Version}}'
{% endraw %}
{{ page.ucp_repo }}/{{ page.ucp_version }}
```
<hr>
</div>

View File

@@ -68,5 +68,5 @@ Notes:
| `--host-address` *value* | The network address to advertise to other nodes. Format: IP address or network interface name |
| `--passphrase` *value* | Decrypt the backup tar file with the provided passphrase |
| `--san` *value* | Add subject alternative names to certificates (e.g. --san www1.acme.com --san www2.acme.com) |
| `--swarm-grpc-port` *value* | Port for communication between nodes (default: 2377) |
| `--unlock-key` *value* | The unlock key for this swarm-mode cluster, if one exists. |
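For illustration of how these options combine on a command line: the enclosing command is assumed (from the flag descriptions) to be UCP's restore operation, which is not shown in this excerpt; the image tag, passphrase, hostname, and backup path are all placeholders:
```
# Hedged sketch only: combining the options above (command, tag, and values are assumptions)
docker container run --rm -i --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp:3.2.1 restore \
  --passphrase "my-backup-passphrase" \
  --san ucp.example.com \
  --swarm-grpc-port 2377 < /tmp/ucp-backup.tar
```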