mirror of https://github.com/docker/docs.git
Merge pull request #1334 from docker/master
pull private master into sept patch release branch
This commit is contained in: commit ad4cf7ec1e
@@ -30,7 +30,7 @@ You must perform a manual backup on each manager node, because logs contain node
 [Lock your swarm to protect its encryption key](/engine/swarm/swarm_manager_locking.md).

 2. Because you must stop the engine of the manager node before performing the backup, having three manager
-nodes is recommended for high availability [HA]). For a cluster to be operational, a majority of managers
+nodes is recommended for high availability (HA). For a cluster to be operational, a majority of managers
 must be online. If fewer than 3 managers exist, the cluster is unavailable during the backup.

 > **Note**: During the time that a manager is shut down, your swarm is more vulnerable to
@@ -76,7 +76,7 @@ You must perform a manual backup on each manager node, because logs contain node
 ```
 systemctl start docker
 ```

-7. Except for step 1, repeat the previous steps for each node.
+7. Except for step 1, repeat the previous steps for each manager node.

 ### Where to go next
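Read as a whole, this file walks through stopping the engine on one manager at a time, archiving the swarm state, restarting the engine, and repeating on the next manager. A minimal sketch of that sequence, assuming the default swarm state directory `/var/lib/docker/swarm`, a systemd-managed engine, and an illustrative backup path:

```
# Run on one manager at a time so a majority of managers stays online.
systemctl stop docker                        # stop the engine so swarm state is not modified mid-backup
tar -czf /tmp/swarm-manager-backup.tar.gz \
    -C /var/lib/docker swarm                 # archive the swarm state directory (default location)
systemctl start docker                       # restart the engine; the manager rejoins the cluster
```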
@@ -30,13 +30,14 @@ scheduled to run on a node that has the `ssd` label.
 2. Select **Nodes** in the left-hand navigation menu.
 3. In the nodes list, select the node to which you want to apply labels.
 4. In the details pane, select the edit node icon in the upper-right corner to edit the node.

 

 5. In the **Edit Node** page, scroll down to the **Labels** section.
 6. Select **Add Label**.
 7. Add a label with the key `disk` and a value of `ssd`.

 {: .with-border}

 8. Click **Save** then dismiss the **Edit Node** page.
 9. In the node's details pane, select **Labels** to view the labels that are applied to the node.
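The steps above add the `disk=ssd` label through the UCP web UI. The same node label can also be set from the CLI on a manager node with `docker node update`; this is a sketch, with `<node-id-or-hostname>` standing in for the node selected in the UI:

```
# Attach the label that the placement constraint will later match.
docker node update --label-add disk=ssd <node-id-or-hostname>

# Confirm the label is now on the node.
docker node inspect <node-id-or-hostname> --format '{{ .Spec.Labels }}'
```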
@@ -60,13 +61,12 @@ service to be scheduled only on nodes that have SSD storage:
 1. Navigate to the **Stacks** page.
 2. Name the new stack "wordpress".
 3. Under **Orchestrator Mode**, select **Swarm Services**.

 4. In the **docker-compose.yml** editor, paste the following stack file.

 ```
 version: "3.1"
-```

 services:
   db:
     image: mysql:5.7
     deploy:
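The stack file above is truncated at the `deploy:` key in this hunk; the piece that actually restricts scheduling is a placement constraint on the `disk` node label. A short sketch of how that section of a compose file typically looks, reusing the names from this example (the full file in the commit is not shown here, so treat this as illustrative):

```
version: "3.1"

services:
  db:
    image: mysql:5.7
    deploy:
      placement:
        constraints:
          # only schedule on nodes carrying the disk=ssd label
          - node.labels.disk == ssd
```

The same `constraints` block under each service's `deploy:` key is what ties that service to the labeled nodes.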
@@ -102,25 +102,25 @@ services:
       WORDPRESS_DB_HOST: db:3306
       WORDPRESS_DB_PASSWORD: wordpress

 networks:
   wordpress-net:
 ```

 5. Click **Create** to deploy the stack, and when the stack deploys,
 click **Done**.

 

 6. Navigate to the **Nodes** page, and click the node that has the
 `disk` label. In the details pane, click the **Inspect Resource**
 dropdown and select **Containers**.

 

 Dismiss the filter and navigate to the **Nodes** page. Click a node that
 doesn't have the `disk` label. In the details pane, click the
 **Inspect Resource** dropdown and select **Containers**. There are no
 WordPress containers scheduled on the node. Dismiss the filter.

 ## Add a constraint to a service by using the UCP web UI
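This hunk verifies placement through the UCP UI by inspecting containers per node. For a quick cross-check from a manager node's CLI, using the stack name `wordpress` from this example (format strings and the node placeholder are illustrative):

```
# Show each task of the stack and the node it was scheduled on.
docker stack ps wordpress --format 'table {{.Name}}\t{{.Node}}\t{{.CurrentState}}'

# Or list the tasks running on one specific node.
docker node ps <node-id-or-hostname>
```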
@@ -77,7 +77,7 @@ with a new worker node. The type of upgrade you perform depends on what is neede

 - [Automated, in-place cluster upgrade](#automated-in-place-cluster-upgrade): Performed on any
 manager node. Automatically upgrades the entire cluster.
-- Manual cluster upgrade: Performed using the CLI or the UCP UI. Automatically upgrades manager
+- Manual cluster upgrade: Performed using the CLI. Automatically upgrades manager
 nodes and allows you to control the upgrade order of worker nodes. This type of upgrade is more
 advanced than the automated, in-place cluster upgrade.
 - [Upgrade existing nodes in place](#phased-in-place-cluster-upgrade): Performed using the CLI.
@@ -125,7 +125,7 @@ of manager node upgrades.

 - [Automated, in-place cluster upgrade](#automated-in-place-cluster-upgrade): Performed on any
 manager node. Automatically upgrades the entire cluster.
-- Manual cluster upgrade: Performed using the CLI or the UCP UI. Automatically upgrades manager
+- Manual cluster upgrade: Performed using the CLI. Automatically upgrades manager
 nodes and allows you to control the upgrade order of worker nodes. This type of upgrade is more
 advanced than the automated, in-place cluster upgrade.
 - [Upgrade existing nodes in place](#phased-in-place-cluster-upgrade): Performed using the CLI.
@@ -138,28 +138,6 @@ advanced than the automated, in-place cluster upgrade.
 in batches of multiple nodes rather than one at a time, and shut down servers to
 remove worker nodes. This type of upgrade is the most advanced.

-### Use the web interface to perform an upgrade
-
-> **Note**: If you plan to add nodes to the UCP cluster, use the [CLI](#use-the-cli-to-perform-an-upgrade) for the upgrade.
-
-When an upgrade is available for a UCP installation, a banner appears.
-
-{: .with-border}
-
-Clicking this message takes an admin user directly to the upgrade process.
-It can be found under the **Upgrade** tab of the **Admin Settings** section.
-
-{: .with-border}
-
-In the **Available Versions** drop down, select the version you want to update.
-Copy and paste the CLI command provided into a terminal on a manager node to
-perform the upgrade.
-
-During the upgrade, the web interface will be unavailable, and you should wait
-until completion before continuing to interact with it. When the upgrade
-completes, you'll see a notification that a newer version of the web interface
-is available and a browser refresh is required to see it.
-
 ### Use the CLI to perform an upgrade

 There are two different ways to upgrade a UCP Cluster via the CLI. The first is
Binary file not shown. Before: 110 KiB
Binary file not shown. Before: 68 KiB
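The section removed above pointed readers at the CLI upgrade path instead. For reference, the CLI upgrade is run from a manager node through the `docker/ucp` image's `upgrade` command; a hedged sketch, with the target version as a placeholder:

```
# Run on a manager node; <version> is the UCP release you are upgrading to.
docker container run --rm -it \
  --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp:<version> \
  upgrade --interactive
```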
@@ -91,10 +91,10 @@ istio-pilot ClusterIP 10.96.199.152 <none> 15010/TCP,15011

 5) Test the Ingress Deployment

-To test that the Envory proxy is working correclty in the Isitio Gateway pods,
+To test that the Envoy proxy is working correctly in the Istio Gateway pods,
 there is a status port configured on an internal port 15020. From the above
 output we can see that port 15020 is exposed as a Kubernetes NodePort, in the
-output above the NodePort is 34300 put this could be different in each
+output above the NodePort is 34300, but this could be different in each
 environment.

 To check the envoy proxy's status, there is a health endpoint at
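The hunk is cut off right where the health endpoint is named. As a hedged illustration of the check being described: Istio's status port (15020) conventionally serves a readiness path at `/healthz/ready`, and the node address plus the NodePort 34300 from the example output are placeholders that will differ per environment:

```
# Query the gateway pod's proxy health endpoint through the exposed NodePort.
# Expect an HTTP 200 when the Envoy proxy is ready.
curl -s -o /dev/null -w '%{http_code}\n' http://<node-address>:34300/healthz/ready
```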
@@ -111,7 +111,7 @@ To confirm that your client tools are now communicating with UCP, run:
 {% raw %}
 docker version --format '{{.Server.Version}}'
 {% endraw %}
 {{ page.ucp_repo }}/{{ page.ucp_version }}

 ```
 <hr>
 </div>
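The check above assumes the client bundle environment is already loaded. A brief sketch of the usual sequence, assuming you have downloaded and unzipped a UCP client certificate bundle into the current directory:

```
# Load the client bundle so the docker CLI talks to UCP instead of the local engine.
eval "$(<env.sh)"

# The server version reported should now be the UCP version rather than the plain engine version.
docker version --format '{{.Server.Version}}'
```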
@@ -68,5 +68,5 @@ Notes:
 | `--host-address` *value* | The network address to advertise to other nodes. Format: IP address or network interface name |
 | `--passphrase` *value* | Decrypt the backup tar file with the provided passphrase |
 | `--san` *value* | Add subject alternative names to certificates (e.g. --san www1.acme.com --san www2.acme.com) |
-| `--swarm-grpc-port *value* | Port for communication between nodes (default: 2377) |
+| `--swarm-grpc-port` *value* | Port for communication between nodes (default: 2377) |
 | `--unlock-key` *value* | The unlock key for this swarm-mode cluster, if one exists. |
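The table lists options for one of the `docker/ucp` subcommands; the `--passphrase` description ("decrypt the backup tar file") suggests a restore-style operation, though the subcommand itself is outside this hunk. A hedged sketch of how such flags are typically passed to the `docker/ucp` image, with the version, passphrase, and backup path as placeholders:

```
# Flags from the table are appended after the subcommand of the docker/ucp image.
docker container run --rm -i \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp:<version> \
  restore --passphrase '<passphrase>' --swarm-grpc-port 2377 \
  < /tmp/backup.tar
```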