mirror of https://github.com/docker/docs.git
Closes #430
Few tweaks on check. Update with comments from Dan. Last comments; fix some build breaks. Tighten language. Add reconfigure info.

Signed-off-by: Mary Anthony <mary@docker.com>
This commit is contained in:
parent 843915cec1
commit d4b9f505d1
(binary image files changed; not shown)
networking.md
@@ -7,34 +7,189 @@ parent="mn_ucp"

+++
<![end-metadata]-->

# Enable container networking with UCP
Along with host and bridge networks, Docker Engine lets you create container
overlay networks. These networks span multiple hosts running Docker Engine.
Launching a container on one host makes the container available to all hosts in
that container network. Another name for this capability is multi-host
networking.
This page explains how to use the `engine-discovery` command to enable
multi-host networks on your UCP installation. You'll do a complete configuration
on all nodes within your UCP deployment.
## About container networks and UCP

You create a multi-host container network using the Docker Engine client or the
UCP administration console. Container networks are custom networks you create
using the `overlay` network plugin driver. You must configure container
networking explicitly on UCP. Once you have your UCP installation running, but
before you start using it, you enable container networks.
Enabling container networking is a multi-step process. First, you run the
`engine-discovery` subcommand on the node. This subcommand configures the Engine
daemon options (`DOCKER_OPTS`) for the cluster key-value store. The options
include the IP address of each UCP controller and replica. Once you have run the
subcommand, you restart the node's Engine daemon for the changes to take effect.
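The values `engine-discovery` writes are cluster-store settings for the daemon.
As a rough illustration only (the addresses are placeholders, and the exact
flags can vary by UCP and Engine version), the resulting `DOCKER_OPTS` looks
something like this:

```shell
# Illustrative only: engine-discovery manages these values for you.
# 192.168.99.106 is a placeholder controller address.
DOCKER_OPTS="--cluster-advertise 192.168.99.106:12376 \
  --cluster-store etcd://192.168.99.106:12379 \
  --cluster-store-opt kv.cacertfile=/var/lib/docker/discovery_certs/ca.pem \
  --cluster-store-opt kv.certfile=/var/lib/docker/discovery_certs/cert.pem \
  --cluster-store-opt kv.keyfile=/var/lib/docker/discovery_certs/key.pem"
```

You should not need to edit these values by hand; the subcommand manages them
for you.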
Because the Engine daemon options rely on you already having the IP addresses of
the controller and replicas, you run `engine-discovery` after you have installed
these key nodes. Enable networking on the controller first, then on the
replicas. Once these are configured, run the subcommand on each worker node.
Once you've enabled container networks, you can create one through the UCP
application or the Engine CLI. To create a network using the Engine CLI, open a
command line on any UCP node and run:

```
$ docker network create -d overlay my-custom-network
```
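A container started on any node can then attach to this network. For example
(illustrative; `busybox` is just a convenient test image and `test-c1` a
hypothetical name):

```
$ docker run -itd --net=my-custom-network --name test-c1 busybox
```

Containers on other UCP nodes that join `my-custom-network` should be able to
reach `test-c1` over the overlay.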
## Get each node host IP address

To continue with this procedure, you need to know the host address values you
used on each controller or node. This is the address used with the `install` or
`join` subcommands to identify a node. Host addresses are used among the UCP
nodes for network communication.

1. Log into your UCP dashboard as a user with `admin` privileges.

2. Click **Nodes** on the dashboard.

    A page listing the installed UCP nodes appears.

    ![Nodes page](../images/nodes.png)

3. Use the **ADDRESS** field to record the host IP address for each node.

    Make sure you do not include the port number, just the IP address.
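If a value you recorded does accidentally include a port (for example
`192.168.99.106:12376`), a quick shell sketch for stripping it (the address is a
placeholder):

```shell
ADDR="192.168.99.106:12376"   # example recorded value
HOST_ADDR="${ADDR%%:*}"       # drop everything from the first colon onward
echo "$HOST_ADDR"
```

This prints just the bare IP address, which is the form the procedure expects.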
## Enable the networking feature

If you followed the prerequisites, you should have a list of the host-address
values you used with `install` to create the controller and the replicas, and
with `join` to add each node. In this step, you enable the networking feature on
your controller node, your replica nodes (if you are using high availability),
and the worker nodes.

Do this procedure on one node at a time:

* begin with the controller
* continue with all the replicas
* finish with the worker nodes
Work on one node at a time because restarting all the controller daemons at the
same time can increase the startup delay: `etcd` has to come up and establish
quorum before the daemons can fully recover.

To enable the networking feature, do the following.
1. Log into the host running the UCP controller. Leave the UCP processes
running.

2. Review the `engine-discovery` help.

        $ docker run --rm docker/ucp engine-discovery --help

3. Run the `engine-discovery` command.

    The command syntax is:

        docker run --rm -it --name ucp \
          -v /var/run/docker.sock:/var/run/docker.sock \
          docker/ucp:0.8.0 engine-discovery \
          --controller <private IP> [--controller <private IP>] \
          --host-address [<private IP>]
    If you are using high availability, you must provide the controller and all
    the replicas by passing multiple `--controller` flags. The following example
    configures discovery on a UCP installation with two controllers (a primary
    and a replica):

        $ docker run --rm -it --name ucp -v /var/run/docker.sock:/var/run/docker.sock docker/ucp:0.8.0 engine-discovery --controller 192.168.99.106 --controller 192.168.99.116 --host-address 192.168.99.106
        INFO[0000] New configuration established. Signaling the daemon to load it...
        INFO[0001] Successfully delivered signal to daemon
    The `host-address` value is the external address of the node you're
    operating against. This is the address other nodes use when communicating
    with each other across the container network.

    If you specify the `--host-address` flag without an IP, the command attempts
    to discover the address of the current node. If the command cannot discover
    the address, it fails and prompts you to supply it:

        FATA[0000] flag needs an argument: -host-address
4. Restart the Engine daemon.

    The Engine daemon is an OS service process running on each node in your
    cluster. How you restart a service is operating-system dependent. Some
    examples appear below, but keep in mind that on your system the restart
    operation may differ. Check with your system administrator if you are not
    sure how to restart a daemon.
    **Ubuntu**:

        $ sudo service docker restart

    **Centos/RedHat**:

        $ sudo systemctl daemon-reload
        $ sudo systemctl restart docker.service
5. Review the Docker logs to check the restart.

    The logging facility for the Engine daemon is installation dependent. Some
    examples appear below; keep in mind that your installation may differ:

    **Ubuntu**:

        $ sudo tail -f /var/log/upstart/docker.log

    **Centos/RedHat**:

        $ sudo journalctl -fu docker.service
6. Verify that you can create and remove a custom network.

        $ docker network create -d overlay my-custom-network
        $ docker network ls
        $ docker network rm my-custom-network

7. Repeat steps 2-6 on the replica nodes in your cluster.

8. After enabling networking on the controllers and replicas, repeat steps 2-6
on the remaining nodes in the cluster.
## Adding new nodes and replicas

Once your UCP installation is up and running, you may need to add a new worker
node or a new replica node. If you add a new worker node, you must run
`engine-discovery` on the node after you `join` it to the cluster. If you need
to add a replica, you must:

1. Re-run the network configuration process on the controller to add the
replica.

2. Run the network configuration process on the new replica.

3. Run the network configuration process again on all your other nodes.

This updates the Engine daemon configuration to include the new replica. Keep in
mind that this process can add downtime to your UCP production installation. You
should plan accordingly.
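When you re-run the configuration, every node gets the same `--controller` list.
A sketch (all addresses are placeholders) that builds the shared flag list once
and prints the command to run on each node:

```shell
# Controller plus replicas, including the newly added one (placeholders).
CONTROLLERS="192.168.99.106 192.168.99.116 192.168.99.120"

# Build the --controller flag list shared by every node's invocation.
FLAGS=""
for c in $CONTROLLERS; do
  FLAGS="$FLAGS --controller $c"
done

echo "docker run --rm -it --name ucp" \
     "-v /var/run/docker.sock:/var/run/docker.sock" \
     "docker/ucp engine-discovery$FLAGS --host-address <node IP>"
```

Substitute each node's own address for `<node IP>` before running the printed
command on that node.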
## Troubleshoot container networking

This section lists errors you can encounter when working with container networks
and UCP.

### Create: failed to parse pool request for address

```
$ docker network create -d overlay my-custom-network
```

@@ -46,208 +201,20 @@ the same error.

![Overlay network error](../images/error_overlay.png)
If you have not configured multi-host networking using the `engine-discovery`
command, the Docker client returns these errors. Check the Engine daemon
configuration and make sure you have properly configured it.

### Daemon configuration errors

The `engine-discovery` command works by modifying the start configuration for
the Docker daemon. If you have trouble, try these troubleshooting measures:
* Review the daemon logs to ensure the daemon was started.
* Add the `-D` (debug) option to the Docker daemon start options.
* Check your Docker daemon configuration to ensure that `--cluster-advertise` is
set properly.
* Check that the `--cluster-store` option in your daemon configuration points to
the key-store `etcd://CONTROLLER_PUBLIC_IP_OR_DOMAIN:PORT` on the UCP
controller.
* Make sure the controller is accessible over the network, for example `ping
CONTROLLER_PUBLIC_IP_OR_DOMAIN`. A ping requires that inbound ICMP requests are
allowed on the controller.
* Stop the daemon and start it manually from the command line.
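It can also help to confirm that the running daemon actually picked up the
cluster store settings. On Engine 1.9 and later, `docker info` reports them when
they are set (a sketch; the addresses are placeholders and the exact output
varies by version):

```
$ docker info | grep -i cluster
Cluster store: etcd://192.168.99.106:12379
Cluster advertise: 192.168.99.106:12376
```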
@@ -32,5 +32,4 @@ The current doc theme does not allow us to create a split menu: a menu whose lab

* [UCP Key/Value Store Backend](kv_store.md)
* [Set up container networking with UCP](networking.md)
* [Set up high availability](understand_ha.md)
* [Deploy an application thru UCP](deploy-application.md)
@@ -0,0 +1,46 @@

+++
title = "engine-discovery"
description = "description"
[menu.main]
parent = "ucp_ref"
+++

# engine-discovery

Manage the Engine discovery configuration on a node.

## Usage

```
docker run --rm -it \
  --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp \
  engine-discovery [options]
```
## Description

Use this command to display and update the Engine discovery configuration on a
node. The discovery configuration is used by Engine for cluster membership and
multi-host networking.

Use one or more `--controller` arguments to specify *all* of the UCP controllers
in this cluster.

The `--host-address` argument specifies the public advertise address for the
particular node you are running the command on. This host-address is how other
nodes in UCP talk to this node. You may specify an IP or hostname, and the
command automatically detects and fills in the port number. If you omit the
address, the tool attempts to discover the node's address.
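For example, to inspect the current configuration on a node without changing it,
you might combine the usage above with the `--list` option (a sketch; the output
format depends on your UCP version):

```
docker run --rm -it --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp \
  engine-discovery --list
```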
## Options

| Option                                                   | Description                                                                                      |
|----------------------------------------------------------|--------------------------------------------------------------------------------------------------|
| `--debug`, `-D`                                          | Enable debug mode.                                                                               |
| `--jsonlog`                                              | Produce JSON-formatted output for easier parsing.                                                |
| `--interactive`, `-i`                                    | Enable interactive mode. You are prompted to enter all required information.                     |
| `--list`                                                 | Display the Engine discovery configuration.                                                      |
| `--controller [--controller option --controller option]` | Update discovery with one or more controllers' external IP address or hostname.                  |
| `--host-address`                                         | Update the external IP address or hostname this node advertises itself as [`$UCP_HOST_ADDRESS`]. |
@@ -66,7 +66,7 @@ Additional help is available for each command with the '--help' option.

| Command                  | Description                                |
|--------------------------|--------------------------------------------|
| [`install`](install.md)  | Install UCP on this engine.                |
| [`join`](join.md)        | Join this engine to an existing UCP.       |
| [`upgrade`](upgrade.md)  | Upgrade the UCP components on this Engine. |
| [`images`](images.md)    | Verify the UCP images on this Engine.      |