From ffbe7588f8f791c650e9bb7cb416fbe13810ac98 Mon Sep 17 00:00:00 2001 From: ddeyo Date: Wed, 10 Oct 2018 10:05:01 -0700 Subject: [PATCH 01/27] begun edits of existing topic --- .../admin/configure/ucp-configuration-file.md | 17 +---------------- 1 file changed, 1 insertion(+), 16 deletions(-) diff --git a/ee/ucp/admin/configure/ucp-configuration-file.md b/ee/ucp/admin/configure/ucp-configuration-file.md index 12c07ef8ff..9a6783cc4d 100644 --- a/ee/ucp/admin/configure/ucp-configuration-file.md +++ b/ee/ucp/admin/configure/ucp-configuration-file.md @@ -87,21 +87,6 @@ docker container run --rm {{ page.ucp_org }}/{{ page.ucp_repo }}:{{ page.ucp_ver | `renewal_threshold_minutes` | no | The length of time, in minutes, before the expiration of a session where, if used, a session will be extended by the current configured lifetime from then. A zero value disables session extension. The default is 1440, which is 24 hours. | | `per_user_limit` | no | The maximum number of sessions that a user can have active simultaneously. If creating a new session would put a user over this limit, the least recently used session will be deleted. A value of zero disables limiting the number of sessions that users may have. The default is 5. | -### auth.ldap (optional) - -| Parameter | Required | Description | -|:------------------------|:---------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| `server_url` | no | The URL of the LDAP server. | -| `no_simple_pagination` | no | Set to `true` if the LDAP server doesn't support the Simple Paged Results control extension (RFC 2696). The default is `false`. | -| `start_tls` | no | Set to `true` to use StartTLS to secure the connection to the server, ignored if the server URL scheme is 'ldaps://'. The default is `false`. 
| -| `root_certs` | no | A root certificate PEM bundle to use when establishing a TLS connection to the server. | -| `tls_skip_verify` | no | Set to `true` to skip verifying the server's certificate when establishing a TLS connection, which isn't recommended unless testing on a secure network. The default is `false`. | -| `reader_dn` | no | The distinguished name the system uses to bind to the LDAP server when performing searches. | -| `reader_password` | no | The password that the system uses to bind to the LDAP server when performing searches. | -| `sync_schedule` | no | The scheduled time for automatic LDAP sync jobs, in CRON format. Needs to have the seconds field set to zero. The default is @hourly if empty or omitted. | -| `jit_user_provisioning` | no | Whether to only create user accounts upon first login (recommended). The default is `true`. | - - ### auth.ldap.additional_domains array (optional) A list of additional LDAP domains and corresponding server configs from which to sync users and team members. This is an advanced feature which most environments don't need. @@ -202,7 +187,7 @@ Configures the logging options for UCP components. ### license_configuration table (optional) -Specifies whether the your UCP license is automatically renewed. +Specifies whether your UCP license is automatically renewed.
| Parameter | Required | Description | |:---------------|:---------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| From a0e66af60fc565f94ac170f3be70f3216ec3096c Mon Sep 17 00:00:00 2001 From: ddeyo Date: Fri, 12 Oct 2018 14:28:56 -0700 Subject: [PATCH 02/27] latest --- .../admin/configure/ucp-configuration-file.md | 49 ------------------- 1 file changed, 49 deletions(-) diff --git a/ee/ucp/admin/configure/ucp-configuration-file.md b/ee/ucp/admin/configure/ucp-configuration-file.md index 9a6783cc4d..4475e5bcd6 100644 --- a/ee/ucp/admin/configure/ucp-configuration-file.md +++ b/ee/ucp/admin/configure/ucp-configuration-file.md @@ -87,55 +87,6 @@ docker container run --rm {{ page.ucp_org }}/{{ page.ucp_repo }}:{{ page.ucp_ver | `renewal_threshold_minutes` | no | The length of time, in minutes, before the expiration of a session where, if used, a session will be extended by the current configured lifetime from then. A zero value disables session extension. The default is 1440, which is 24 hours. | | `per_user_limit` | no | The maximum number of sessions that a user can have active simultaneously. If creating a new session would put a user over this limit, the least recently used session will be deleted. A value of zero disables limiting the number of sessions that users may have. The default is 5. | -### auth.ldap.additional_domains array (optional) - -A list of additional LDAP domains and corresponding server configs from which -to sync users and team members. This is an advanced feature which most -environments don't need. 
- -| Parameter | Required | Description | -|:-----------------------|:---------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| `domain` | no | The root domain component of this server, for example, `dc=example,dc=com`. A longest-suffix match of the base DN for LDAP searches is used to select which LDAP server to use for search requests. If no matching domain is found, the default LDAP server config is used. | -| `server_url` | no | The URL of the LDAP server for the current additional domain. | -| `no_simple_pagination` | no | Set to true if the LDAP server for this additional domain does not support the Simple Paged Results control extension (RFC 2696). The default is `false`. | -| `server_url` | no | The URL of the LDAP server. | -| `start_tls` | no | Whether to use StartTLS to secure the connection to the server, ignored if the server URL scheme is 'ldaps://'. | -| `root_certs` | no | A root certificate PEM bundle to use when establishing a TLS connection to the server for the current additional domain. | -| `tls_skip_verify` | no | Whether to skip verifying the additional domain server's certificate when establishing a TLS connection, not recommended unless testing on a secure network. The default is `true`. | -| `reader_dn` | no | The distinguished name the system uses to bind to the LDAP server when performing searches under the additional domain. | -| `reader_password` | no | The password that the system uses to bind to the LDAP server when performing searches under the additional domain. | - -### auth.ldap.user_search_configs array (optional) - -Settings for syncing users. 
- -| Parameter | Required | Description | -|:--------------------------|:---------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| `base_dn` | no | The distinguished name of the element from which the LDAP server will search for users, for example, `ou=people,dc=example,dc=com`. | -| `scope_subtree` | no | Set to `true` to search for users in the entire subtree of the base DN. Set to `false` to search only one level under the base DN. The default is `false`. | -| `username_attr` | no | The name of the attribute of the LDAP user element which should be selected as the username. The default is `uid`. | -| `full_name_attr` | no | The name of the attribute of the LDAP user element which should be selected as the full name of the user. The default is `cn`. | -| `filter` | no | The LDAP search filter used to select user elements, for example, `(&(objectClass=person)(objectClass=user))`. May be left blank. | -| `match_group` | no | Whether to additionally filter users to those who are direct members of a group. The default is `true`. | -| `match_group_dn` | no | The distinguished name of the LDAP group, for example, `cn=ddc-users,ou=groups,dc=example,dc=com`. Required if `matchGroup` is `true`. | -| `match_group_member_attr` | no | The name of the LDAP group entry attribute which corresponds to distinguished names of members. Required if `matchGroup` is `true`. The default is `member`. | -| `match_group_iterate` | no | Set to `true` to get all of the user attributes by iterating through the group members and performing a lookup for each one separately. Use this instead of searching users first, then applying the group selection filter. Ignored if `matchGroup` is `false`. The default is `false`. 
| - -### auth.ldap.admin_sync_opts (optional) - -Settings for syncing system admininistrator users. - -| Parameter | Required | Description | -|:-----------------------|:---------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| `enable_sync` | no | Set to `true` to enable syncing admins. If `false`, all other fields in this table are ignored. The default is `true`. | -| `select_group_members` | no | Set to `true` to sync using a group DN and member attribute selection. Set to `false` to use a search filter. The default is `true`. | -| `group_dn` | no | The distinguished name of the LDAP group, for example, `cn=ddc-admins,ou=groups,dc=example,dc=com`. Required if `select_group_members` is `true`. | -| `group_member_attr` | no | The name of the LDAP group entry attribute which corresponds to distinguished names of members. Required if `select_group_members` is `true`. The default is `member`. | -| `search_base_dn` | no | The distinguished name of the element from which the LDAP server will search for users, for example, `ou=people,dc=example,dc=com`. Required if `select_group_members` is `false`. | -| `search_scope_subtree` | no | Set to `true` to search for users in the entire subtree of the base DN. Set to `false` to search only one level under the base DN. The default is `false`. Required if `select_group_members` is `false`. | -| `search_filter` | no | The LDAP search filter used to select users if `select_group_members` is `false`, for example, `(memberOf=cn=ddc-admins,ou=groups,dc=example,dc=com)`. May be left blank. | - - ### registries array (optional) An array of tables that specifies the DTR instances that the current UCP instance manages. 
From f3f67096b55898e79e65e4e84b59907e34e3b879 Mon Sep 17 00:00:00 2001 From: ddeyo Date: Fri, 12 Oct 2018 14:34:41 -0700 Subject: [PATCH 03/27] remove local build files --- .../enable-ldap-config-file.html | 68 ++++ .../configure/_site/external-auth/index.html | 351 ++++++++++++++++++ .../configure/_site/join-nodes/index.html | 59 +++ .../join-linux-nodes-to-cluster.html | 143 +++++++ .../join-windows-nodes-to-cluster.html | 235 ++++++++++++ .../_site/join-nodes/use-a-load-balancer.html | 220 +++++++++++ 6 files changed, 1076 insertions(+) create mode 100644 ee/ucp/admin/configure/_site/external-auth/enable-ldap-config-file.html create mode 100644 ee/ucp/admin/configure/_site/external-auth/index.html create mode 100644 ee/ucp/admin/configure/_site/join-nodes/index.html create mode 100644 ee/ucp/admin/configure/_site/join-nodes/join-linux-nodes-to-cluster.html create mode 100644 ee/ucp/admin/configure/_site/join-nodes/join-windows-nodes-to-cluster.html create mode 100644 ee/ucp/admin/configure/_site/join-nodes/use-a-load-balancer.html diff --git a/ee/ucp/admin/configure/_site/external-auth/enable-ldap-config-file.html b/ee/ucp/admin/configure/_site/external-auth/enable-ldap-config-file.html new file mode 100644 index 0000000000..bf1cc07c30 --- /dev/null +++ b/ee/ucp/admin/configure/_site/external-auth/enable-ldap-config-file.html @@ -0,0 +1,68 @@ +

Docker UCP integrates with LDAP directory services, so that you can manage +users and groups from your organization’s directory and automatically +propagate this information to UCP and DTR. You can set up your cluster’s LDAP +configuration by using the UCP web UI, or you can use a +UCP configuration file.

+ +

To see an example TOML config file that shows how to configure UCP settings, +run UCP with the example-config option. +Learn about UCP configuration files.

+ +
docker container run --rm <ucp-org>/<ucp-repo>:<version> example-config
+
+ +
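As a rough illustration, the `[auth]` portion of an edited config.toml might look like the following. The key names come from the auth and auth.ldap tables in the UCP configuration-file reference; all values here are placeholders, not recommendations:

```toml
[auth]
  backend = "ldap"

[auth.ldap]
  server_url = "ldaps://ldap.example.com"
  start_tls = false
  reader_dn = "cn=readonly,dc=example,dc=com"
  reader_password = "examplepassword"
  jit_user_provisioning = true
```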

Set up LDAP by using a configuration file

+ +
1.  Use the following command to extract the name of the currently active
    configuration from the ucp-agent service.

        CURRENT_CONFIG_NAME=$(docker service inspect --format '{{ range $config := .Spec.TaskTemplate.ContainerSpec.Configs }}{{ $config.ConfigName }}{{ "\n" }}{{ end }}' ucp-agent | grep 'com.docker.ucp.config-')

2.  Get the current configuration and save it to a TOML file.

        docker config inspect --format '{{ printf "%s" .Spec.Data }}' $CURRENT_CONFIG_NAME > config.toml

3.  Use the output of the example-config command as a guide to edit your
    config.toml file. Under the [auth] section, set backend = "ldap", and use
    the [auth.ldap] section to configure the LDAP integration the way you
    want.

4.  Once you've finished editing your config.toml file, create a new Docker
    Config object by using the following commands.

        NEW_CONFIG_NAME="com.docker.ucp.config-$(( $(cut -d '-' -f 2 <<< "$CURRENT_CONFIG_NAME") + 1 ))"
        docker config create $NEW_CONFIG_NAME config.toml

5.  Update the ucp-agent service to remove the reference to the old config
    and add a reference to the new config.

        docker service update --config-rm "$CURRENT_CONFIG_NAME" --config-add "source=${NEW_CONFIG_NAME},target=/etc/ucp/ucp.toml" ucp-agent

6.  Wait a few moments for the ucp-agent service tasks to update across
    your cluster. If you set jit_user_provisioning = true in the LDAP
    configuration, users matching any of your specified search queries will
    have their accounts created when they log in with their username and
    LDAP password.
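The naming scheme in step 4 simply increments the numeric suffix of the active config name. Here is a portable restatement of that arithmetic, runnable without a cluster (the current name below is made up):

```shell
# Hypothetical active config name, as the inspect/grep step would report it
CURRENT_CONFIG_NAME="com.docker.ucp.config-7"

# Same derivation as step 4: take the numeric suffix and add one
suffix=$(echo "$CURRENT_CONFIG_NAME" | cut -d '-' -f 2)
NEW_CONFIG_NAME="com.docker.ucp.config-$(( suffix + 1 ))"

echo "$NEW_CONFIG_NAME"   # com.docker.ucp.config-8
```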

Where to go next

+ + diff --git a/ee/ucp/admin/configure/_site/external-auth/index.html b/ee/ucp/admin/configure/_site/external-auth/index.html new file mode 100644 index 0000000000..1ea0bb307a --- /dev/null +++ b/ee/ucp/admin/configure/_site/external-auth/index.html @@ -0,0 +1,351 @@ +

Docker UCP integrates with LDAP directory services, so that you can manage +users and groups from your organization’s directory and it will automatically +propagate that information to UCP and DTR.

+ +

If you enable LDAP, UCP uses a remote directory server to create users +automatically, and all logins are forwarded to the directory server.

+ +

When you switch from built-in authentication to LDAP authentication, +all manually created users whose usernames don’t match any LDAP search results +are still available.

+ +

When you enable LDAP authentication, you can choose whether UCP creates user +accounts only when users log in for the first time. Select the +Just-In-Time User Provisioning option to ensure that the only LDAP +accounts that exist in UCP are those that have had a user log in to UCP.

+ +

How UCP integrates with LDAP

+ +

You control how UCP integrates with LDAP by creating searches for users. +You can specify multiple search configurations, and you can specify multiple +LDAP servers to integrate with. Searches start with the Base DN, which is +the distinguished name of the node in the LDAP directory tree where the +search starts looking for users.

+ +

Access LDAP settings by navigating to the Authentication & Authorization +page in the UCP web UI. There are two sections for controlling LDAP searches +and servers.

+ +
  • LDAP user search configurations: This is the section of the
    Authentication & Authorization page where you specify search parameters,
    like Base DN, scope, filter, the username attribute, and the full name
    attribute. These searches are stored in a list, and the ordering may be
    important, depending on your search configuration.
  • LDAP server: This is the section where you specify the URL of an LDAP
    server, TLS configuration, and credentials for doing the search requests.
    You also provide a domain for all servers but the first one. The first
    server is considered the default domain server; any others are associated
    with the domain that you specify on the page.

Here’s what happens when UCP synchronizes with LDAP:

+ +
  1. UCP creates a set of search results by iterating over each of the user
     search configs, in the order that you specify.
  2. UCP chooses an LDAP server from the list of domain servers by
     considering the Base DN from the user search config and selecting the
     domain server that has the longest domain suffix match.
  3. If no domain server has a domain suffix that matches the Base DN from
     the search config, UCP uses the default domain server.
  4. UCP combines the search results into a list of users and creates UCP
     accounts for them. If the Just-In-Time User Provisioning option is set,
     user accounts are created only when users first log in.

The domain server to use is determined by the Base DN in each search config. +UCP doesn’t perform search requests against each of the domain servers, only +the one which has the longest matching domain suffix, or the default if there’s +no match.

+ +

Here’s an example. Let’s say we have three LDAP domain servers:

+ + + + + + + + + + + + + + + + + + + + + + +
DomainServer URL
defaultldaps://ldap.example.com
dc=subsidiary1,dc=comldaps://ldap.subsidiary1.com
dc=subsidiary2,dc=subsidiary1,dc=comldaps://ldap.subsidiary2.com
+ +

Here are three user search configs with the following Base DNs:

+ +
  • baseDN=ou=people,dc=subsidiary1,dc=com

    For this search config, dc=subsidiary1,dc=com is the only server with a
    domain which is a suffix, so UCP uses the server
    ldaps://ldap.subsidiary1.com for the search request.

  • baseDN=ou=product,dc=subsidiary2,dc=subsidiary1,dc=com

    For this search config, two of the domain servers have a domain which is
    a suffix of this base DN, but dc=subsidiary2,dc=subsidiary1,dc=com is the
    longer of the two, so UCP uses the server ldaps://ldap.subsidiary2.com
    for the search request.

  • baseDN=ou=eng,dc=example,dc=com

    For this search config, there is no server with a domain specified which
    is a suffix of this base DN, so UCP uses the default server,
    ldaps://ldap.example.com, for the search request.
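The longest-suffix selection described above can be sketched as a small shell function. This is a simplified stand-in for UCP's internal logic, not its actual implementation, using the example servers from the table (the helper name is made up):

```shell
# Pick the LDAP server for a base DN: the longest matching domain suffix
# wins; fall back to the default server when nothing matches.
pick_server() {
  base_dn="$1"
  best_url="ldaps://ldap.example.com"   # default domain server
  best_len=0
  while IFS='|' read -r domain url; do
    case "$base_dn" in
      *"$domain")
        if [ "${#domain}" -gt "$best_len" ]; then
          best_url="$url"
          best_len="${#domain}"
        fi ;;
    esac
  done <<'EOF'
dc=subsidiary1,dc=com|ldaps://ldap.subsidiary1.com
dc=subsidiary2,dc=subsidiary1,dc=com|ldaps://ldap.subsidiary2.com
EOF
  echo "$best_url"
}

pick_server "ou=people,dc=subsidiary1,dc=com"                  # ldaps://ldap.subsidiary1.com
pick_server "ou=product,dc=subsidiary2,dc=subsidiary1,dc=com"  # ldaps://ldap.subsidiary2.com
pick_server "ou=eng,dc=example,dc=com"                         # ldaps://ldap.example.com
```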

If there are username collisions for the search results between domains, UCP uses only the first search result, so the ordering of the user search configs may be important. For example, if both the first and third user search configs result in a record with the username jane.doe, the first has higher precedence and the third is ignored. For this reason, it's important to choose a username attribute that's unique for your users across all domains.

+ +

Because names may collide, it’s a good idea to use something unique to the +subsidiary, like the email address for each person. Users can log in with the +email address, for example, jane.doe@subsidiary1.com.

+ +

Configure the LDAP integration

+ +

To configure UCP to create and authenticate users by using an LDAP directory, +go to the UCP web UI, navigate to the Admin Settings page and click +Authentication & Authorization to select the method used to create and +authenticate users.

+ +

+ +

In the LDAP Enabled section, click Yes. The LDAP settings appear. Now configure your LDAP directory integration.

+ +

Default role for all private collections

+ +

Use this setting to change the default permissions of new users.

+ +

Click the dropdown to select the permission level that UCP assigns by default +to the private collections of new users. For example, if you change the value +to View Only, all users who log in for the first time after the setting is +changed have View Only access to their private collections, but permissions +remain unchanged for all existing users. +Learn more about permission levels.

+ +

LDAP enabled

+ +

Click Yes to enable integrating UCP users and teams with LDAP servers.

+ +

LDAP server

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
FieldDescription
LDAP server URLThe URL where the LDAP server can be reached.
Reader DNThe distinguished name of the LDAP account used for searching entries in the LDAP server. As a best practice, this should be an LDAP read-only user.
Reader passwordThe password of the account used for searching entries in the LDAP server.
Use Start TLSWhether to authenticate/encrypt the connection after connecting to the LDAP server over TCP. If you set the LDAP Server URL field with ldaps://, this field is ignored.
Skip TLS verificationWhether to verify the LDAP server certificate when using TLS. The connection is still encrypted but vulnerable to man-in-the-middle attacks.
No simple paginationSelect this option if your LDAP server doesn't support the Simple Paged Results extension (RFC 2696).
Just-In-Time User ProvisioningWhether to create user accounts only when users log in for the first time. The default value of true is recommended. If you upgraded from UCP 2.0.x, the default is false.
+ +

+ +

Click Confirm to add your LDAP domain.

+ +

To integrate with more LDAP servers, click Add LDAP Domain.

+ +

LDAP user search configurations

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
FieldDescription 
Base DNThe distinguished name of the node in the directory tree where the search should start looking for users. 
Username attributeThe LDAP attribute to use as username on UCP. Only user entries with a valid username will be created. A valid username is no longer than 100 characters and does not contain any unprintable characters, whitespace characters, or any of the following characters: / \ [ ] : ; | = , + * ? < > ' ". 
Full name attributeThe LDAP attribute to use as the user’s full name for display purposes. If left empty, UCP will not create new users with a full name value. 
FilterThe LDAP search filter used to find users. If you leave this field empty, all directory entries in the search scope with valid username attributes are created as users. 
Search subtree instead of just one levelWhether to perform the LDAP search on a single level of the LDAP tree, or search through the full LDAP tree starting at the Base DN. 
Match Group MembersWhether to further filter users by selecting those who are also members of a specific group on the directory server. This feature is helpful if the LDAP server does not support memberOf search filters. 
Iterate through group membersIf Select Group Members is selected, this option searches for users by first iterating over the target group’s membership, making a separate LDAP query for each member, as opposed to first querying for all users which match the above search query and intersecting those with the set of group members. This option can be more efficient in situations where the number of members of the target group is significantly smaller than the number of users which would match the above search filter, or if your directory server does not support simple pagination of search results. 
Group DNIf Select Group Members is selected, this specifies the distinguished name of the group from which to select users. 
Group Member AttributeIf Select Group Members is selected, the value of this group attribute corresponds to the distinguished names of the members of the group. 
+ +

+ +

To configure more user search queries, click Add LDAP User Search Configuration +again. This is useful in cases where users may be found in multiple distinct +subtrees of your organization’s directory. Any user entry which matches at +least one of the search configurations will be synced as a user.

+ +

LDAP test login

+ + + + + + + + + + + + + + + + + + +
FieldDescription
UsernameAn LDAP username for testing authentication to this application. This value corresponds with the Username Attribute specified in the LDAP user search configurations section.
PasswordThe user’s password used to authenticate (BIND) to the directory server.
+ +

Before you save the configuration changes, you should test that the integration +is correctly configured. You can do this by providing the credentials of an +LDAP user, and clicking the Test button.

+ +

LDAP sync configuration

+ + + + + + + + + + + + + + + + + + +
FieldDescription
Sync intervalThe interval, in hours, to synchronize users between UCP and the LDAP server. When the synchronization job runs, new users found in the LDAP server are created in UCP with the default permission level. UCP users that don’t exist in the LDAP server become inactive.
Enable sync of admin usersThis option specifies that system admins should be synced directly with members of a group in your organization’s LDAP directory. The admins will be synced to match the membership of the group. The configured recovery admin user will also remain a system admin.
+ +

Once you’ve configured the LDAP integration, UCP synchronizes users based on +the interval you’ve defined starting at the top of the hour. When the +synchronization runs, UCP stores logs that can help you troubleshoot when +something goes wrong.

+ +

You can also manually synchronize users by clicking Sync Now.

+ +

Revoke user access

+ +

When a user is removed from LDAP, the effect on the user’s UCP account depends +on the Just-In-Time User Provisioning setting:

+ +
  • Just-In-Time User Provisioning is false: Users deleted from LDAP become
    inactive in UCP after the next LDAP synchronization runs.
  • Just-In-Time User Provisioning is true: Users deleted from LDAP can't
    authenticate, but their UCP accounts remain active. This means that they
    can use their client bundles to run commands. To prevent this, deactivate
    their UCP user accounts.

Data synced from your organization’s LDAP directory

+ +

UCP saves a minimum amount of user data required to operate. This includes +the value of the username and full name attributes that you have specified in +the configuration as well as the distinguished name of each synced user. +UCP does not store any additional data from the directory server.

+ +

Sync teams

+ +

UCP enables syncing teams with a search query or group in your organization’s +LDAP directory. +Sync team members with your organization’s LDAP directory.

+ +

Where to go next

+ + diff --git a/ee/ucp/admin/configure/_site/join-nodes/index.html b/ee/ucp/admin/configure/_site/join-nodes/index.html new file mode 100644 index 0000000000..c7e5e7ae54 --- /dev/null +++ b/ee/ucp/admin/configure/_site/join-nodes/index.html @@ -0,0 +1,59 @@ +

Docker Universal Control Plane is designed for high availability (HA). You can +join multiple manager nodes to the cluster, so that if one manager node fails, +another can automatically take its place without impact to the cluster.

+ +

Having multiple manager nodes in your cluster allows you to:

+ +
  • Handle manager node failures
  • Load-balance user requests across all manager nodes

Size your deployment

+ +

To make the cluster tolerant to more failures, add additional replica nodes to +your cluster.

+ + + + + + + + + + + + + + + + + + + + + + +
Manager nodesFailures tolerated
10
31
52
+ +

For production-grade deployments, follow these rules of thumb:

+ +
  • When a manager node fails, the number of failures tolerated by your
    cluster decreases. Don't leave that node offline for too long.
  • You should distribute your manager nodes across different availability
    zones. This way your cluster can continue working even if an entire
    availability zone goes down.
  • Adding many manager nodes to the cluster might lead to performance
    degradation, as changes to configurations need to be replicated across
    all manager nodes. The maximum advisable is seven manager nodes.

Where to go next

+ + diff --git a/ee/ucp/admin/configure/_site/join-nodes/join-linux-nodes-to-cluster.html b/ee/ucp/admin/configure/_site/join-nodes/join-linux-nodes-to-cluster.html new file mode 100644 index 0000000000..d0836ae5c0 --- /dev/null +++ b/ee/ucp/admin/configure/_site/join-nodes/join-linux-nodes-to-cluster.html @@ -0,0 +1,143 @@ +

Docker EE is designed for scaling horizontally as your applications grow in +size and usage. You can add or remove nodes from the cluster to scale it +to your needs. You can join Windows Server 2016, IBM z System, and Linux nodes +to the cluster.

+ +

Because Docker EE leverages the clustering functionality provided by Docker +Engine, you use the docker swarm join +command to add more nodes to your cluster. When you join a new node, Docker EE +services start running on the node automatically.

+ +

Node roles

+ +

When you join a node to a cluster, you specify its role: manager or worker.

+ +
  • Manager: Manager nodes are responsible for cluster management
    functionality and dispatching tasks to worker nodes. Having multiple
    manager nodes allows your swarm to be highly available and tolerant of
    node failures.

    Manager nodes also run all Docker EE components in a replicated way, so
    by adding additional manager nodes, you're also making the cluster highly
    available. Learn more about the Docker EE architecture.

  • Worker: Worker nodes receive and execute your services and applications.
    Having multiple worker nodes allows you to scale the computing capacity
    of your cluster.

    When deploying Docker Trusted Registry in your cluster, you deploy it to
    a worker node.

Join a node to the cluster

+ +

You can join Windows Server 2016, IBM z System, and Linux nodes to the cluster, +but only Linux nodes can be managers.

+ +

To join nodes to the cluster, go to the Docker EE web UI and navigate to the +Nodes page.

+ +
  1. Click Add Node to add a new node.
  2. Select the type of node to add, Windows or Linux.
  3. Click Manager if you want to add the node as a manager.
  4. Check the Use a custom listen address option to specify the address
     and port where the new node listens for inbound cluster management
     traffic.
  5. Check the Use a custom advertise address option to specify the IP
     address that's advertised to all members of the cluster for API access.

+ +

Copy the displayed command, use SSH to log in to the host that you want to +join to the cluster, and run the docker swarm join command on the host.

+ +

To add a Windows node, click Windows and follow the instructions in +Join Windows worker nodes to a cluster.

+ +

After you run the join command in the node, the node is displayed on the +Nodes page in the Docker EE web UI. From there, you can change the node’s +cluster configuration, including its assigned orchestrator type. +Learn how to change the orchestrator for a node.

+ +

Pause or drain a node

+ +

Once a node is part of the cluster, you can configure the node’s availability +so that it is:

+ +
  • Active: the node can receive and execute tasks.
  • Paused: the node continues running existing tasks, but doesn't receive
    new tasks.
  • Drained: the node won't receive new tasks. Existing tasks are stopped
    and replica tasks are launched in active nodes.

Pause or drain a node from the Edit Node page:

+ +
  1. In the Docker EE web UI, browse to the Nodes page and select the node.
  2. In the details pane, click Configure and select Details to open the
     Edit Node page.
  3. In the Availability section, click Active, Pause, or Drain.
  4. Click Save to change the availability of the node.

+ +

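If you prefer the CLI, the same availability change can be sketched with docker node update, assuming your client is configured with a UCP client bundle and a node named worker-01 (a hypothetical name):

```shell
# Drain: stop scheduling new tasks and reschedule existing ones onto active nodes
docker node update --availability drain worker-01

# Pause: keep existing tasks running, but accept no new tasks
docker node update --availability pause worker-01

# Return the node to service
docker node update --availability active worker-01
```

These commands must run against a manager (directly or through the client bundle); workers can't modify cluster state.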
Promote or demote a node
You can promote worker nodes to managers to make UCP fault tolerant. You can also demote a manager node into a worker.
To promote or demote a node:
  1. Navigate to the Nodes page, and click the node that you want to promote or demote.
  2. In the details pane, click Configure and select Details to open the Edit Node page.
  3. In the Role section, click Manager or Worker.
  4. Click Save and wait until the operation completes.
  5. Navigate to the Nodes page, and confirm that the node role has changed.
If you're load-balancing user requests to Docker EE across multiple manager nodes, don't forget to remove these nodes from your load-balancing pool when you demote them to workers.
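The equivalent role changes are also available from the CLI. A sketch, using hypothetical node names and assuming a UCP client bundle is loaded:

```shell
# Promote a worker to manager
docker node promote worker-01

# Demote a manager back to worker
docker node demote manager-02

# Verify: managers show Leader or Reachable in the MANAGER STATUS column
docker node ls
```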
Remove a node from the cluster
You can remove worker nodes from the cluster at any time:
  1. Navigate to the Nodes page and select the node.
  2. In the details pane, click Actions and select Remove.
  3. Click Confirm when you're prompted.
Since manager nodes are important to the overall health of the cluster, be careful when removing one from the cluster.
To remove a manager node:
  1. Make sure all nodes in the cluster are healthy. Don't remove manager nodes if that's not the case.
  2. Demote the manager node into a worker.
  3. Now you can remove that node from the cluster.
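From the CLI, the removal can be sketched as follows, with a hypothetical node name and a UCP client bundle loaded for the manager-side commands:

```shell
# On the node that's leaving, detach it from the swarm first:
docker swarm leave

# Then, from a manager, delete the (now Down) node entry.
# Add --force only if the node can't leave gracefully:
docker node rm worker-03
```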
Use the CLI to manage your nodes
You can use the Docker CLI client to manage your nodes from the CLI. To do this, configure your Docker CLI client with a UCP client bundle.
Once you do that, you can start managing your UCP nodes:
docker node ls
diff --git a/ee/ucp/admin/configure/_site/join-nodes/join-windows-nodes-to-cluster.html b/ee/ucp/admin/configure/_site/join-nodes/join-windows-nodes-to-cluster.html
Docker Enterprise Edition supports worker nodes that run on Windows Server 2016 or 1709. Only worker nodes are supported on Windows, and all manager nodes in the cluster must run on Linux.
Follow these steps to enable a worker node on Windows.
  1. Install Docker EE Engine on Windows Server 2016.
  2. Configure the Windows node.
  3. Join the Windows node to the cluster.
Install Docker EE Engine on Windows Server 2016 or 1709
Install Docker EE Engine on a Windows Server 2016 or 1709 instance to enable joining a cluster that's managed by Docker Enterprise Edition.
Configure the Windows node
Follow these steps to configure the docker daemon and the Windows environment.
  1. Add a label to the node.
  2. Pull the Windows-specific image of ucp-agent, which is named ucp-agent-win.
  3. Run the Windows worker setup script provided with ucp-agent-win.
  4. Join the cluster with the token provided by the Docker EE web UI or CLI.
Add a label to the node
Configure the Docker Engine running on the node to have a label. This makes it easier to deploy applications on nodes with this label.
Create the file C:\ProgramData\docker\config\daemon.json with the following content:

{
  "labels": ["os=windows"]
}
Restart Docker for the changes to take effect:

Restart-Service docker
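Once the engine label is in place, it can be used to steer workloads onto Windows nodes. A minimal sketch, with a hypothetical service name and image, run from a manager or through a UCP client bundle:

```shell
# Constrain a service to engines whose daemon.json sets the os=windows label
docker service create \
  --name win-svc \
  --constraint 'engine.labels.os == windows' \
  microsoft/iis:windowsservercore
```

Note the distinction: labels set in daemon.json are matched with engine.labels, while labels added via docker node update --label-add are matched with node.labels.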
Pull the Windows-specific images
On a manager node, run the following command to list the images that are required on Windows nodes.
docker container run --rm {{ page.ucp_org }}/{{ page.ucp_repo }}:{{ page.ucp_version }} images --list --enable-windows
{{ page.ucp_org }}/ucp-agent-win:{{ page.ucp_version }}
{{ page.ucp_org }}/ucp-dsinfo-win:{{ page.ucp_version }}
On Windows Server 2016, in a PowerShell terminal running as Administrator, log in to Docker Hub with the docker login command and pull the listed images.
docker image pull {{ page.ucp_org }}/ucp-agent-win:{{ page.ucp_version }}
docker image pull {{ page.ucp_org }}/ucp-dsinfo-win:{{ page.ucp_version }}
Run the Windows node setup script
You need to open ports 2376 and 12376, and create certificates for the Docker daemon to communicate securely. Use this command to run the Windows node setup script:
$script = [ScriptBlock]::Create((docker run --rm {{ page.ucp_org }}/ucp-agent-win:{{ page.ucp_version }} windows-script | Out-String))

Invoke-Command $script
Docker daemon restart

When you run windows-script, the Docker service is temporarily unavailable.
The Windows node is ready to join the cluster. Run the setup script on each instance of Windows Server that will be a worker node.
Compatibility with daemon.json
The script may be incompatible with installations that use a config file at C:\ProgramData\docker\config\daemon.json. If you use such a file, make sure that the daemon runs on port 2376 and that it uses certificates located in C:\ProgramData\docker\daemoncerts. If certificates don't exist in this directory, run ucp-agent-win generate-certs, as shown in Step 2 of the procedure in Set up certs for the dockerd service.
In the daemon.json file, set the tlscacert, tlscert, and tlskey options to the corresponding files in C:\ProgramData\docker\daemoncerts:
{
...
    "debug":     true,
    "tls":       true,
    "tlscacert": "C:\\ProgramData\\docker\\daemoncerts\\ca.pem",
    "tlscert":   "C:\\ProgramData\\docker\\daemoncerts\\cert.pem",
    "tlskey":    "C:\\ProgramData\\docker\\daemoncerts\\key.pem",
    "tlsverify": true,
...
}

Note that backslashes in JSON strings must be escaped as \\.
Join the Windows node to the cluster
Now you can join the cluster by using the docker swarm join command that's provided by the Docker EE web UI and CLI.
  1. Log in to the Docker EE web UI with an administrator account.
  2. Navigate to the Nodes page.
  3. Click Add Node to add a new node.
  4. In the Node Type section, click Windows.
  5. In the Step 2 section, click the checkbox for "I'm ready to join my windows node."
  6. Check the Use a custom listen address option to specify the address and port where the new node listens for inbound cluster management traffic.
  7. Check the Use a custom advertise address option to specify the IP address that's advertised to all members of the cluster for API access.
Copy the displayed command. It looks similar to the following:
docker swarm join --token <token> <ucp-manager-ip>
You can also use the command line to get the join token. Using your UCP client bundle, run:
docker swarm join-token worker
Run the docker swarm join command on each instance of Windows Server that will be a worker node.
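As a scripted sketch, assuming a UCP client bundle is loaded and a reachable manager address (hypothetical here), you can fetch just the token with the -q flag and assemble the join command yourself:

```shell
# Print only the worker join token
TOKEN=$(docker swarm join-token -q worker)

# Run the equivalent of this on each Windows node (in PowerShell):
echo "docker swarm join --token $TOKEN <ucp-manager-ip>:2377"
```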
Configure a Windows worker node manually
The following sections describe how to run the commands in the setup script manually to configure the dockerd service and the Windows environment. The script opens ports in the firewall and sets up certificates for dockerd.
To see the script, you can run the windows-script command without piping to the Invoke-Expression cmdlet.
docker container run --rm {{ page.ucp_org }}/ucp-agent-win:{{ page.ucp_version }} windows-script
Open ports in the Windows firewall
Docker EE requires that ports 2376 and 12376 are open for inbound TCP traffic.
In a PowerShell terminal running as Administrator, run these commands to add rules to the Windows firewall.
netsh advfirewall firewall add rule name="docker_local" dir=in action=allow protocol=TCP localport=2376
netsh advfirewall firewall add rule name="docker_proxy" dir=in action=allow protocol=TCP localport=12376
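To confirm the rules took effect, you can list them back by name (a sketch; the rule names match the ones added above):

```shell
netsh advfirewall firewall show rule name="docker_local"
netsh advfirewall firewall show rule name="docker_proxy"
```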
Set up certs for the dockerd service
  1. Create the directory C:\ProgramData\docker\daemoncerts.

  2. In a PowerShell terminal running as Administrator, run the following command to generate certificates.

     docker container run --rm -v C:\ProgramData\docker\daemoncerts:C:\certs {{ page.ucp_org }}/ucp-agent-win:{{ page.ucp_version }} generate-certs

  3. To set up the certificates, run the following commands to stop and unregister the dockerd service, register the service with the certificates, and restart the service.

     Stop-Service docker
     dockerd --unregister-service
     dockerd -H npipe:// -H 0.0.0.0:2376 --tlsverify --tlscacert=C:\ProgramData\docker\daemoncerts\ca.pem --tlscert=C:\ProgramData\docker\daemoncerts\cert.pem --tlskey=C:\ProgramData\docker\daemoncerts\key.pem --register-service
     Start-Service docker
The dockerd service and the Windows environment are now configured to join a Docker EE cluster.
TLS certificate setup
If the TLS certificates aren't set up correctly, the Docker EE web UI shows the following warning.
Node WIN-NOOQV2PJGTE is a Windows node that cannot connect to its local Docker daemon.
Windows nodes limitations
Some features are not yet supported on Windows nodes:
  • Networking
    • The cluster mode routing mesh can't be used on Windows nodes. You can expose a port for your service on the host where it is running, and use the HTTP routing mesh to make your service accessible using a domain name.
    • Encrypted networks are not supported. If you've upgraded from a previous version, you'll also need to recreate the ucp-hrm network to make it unencrypted.
  • Secrets
    • When using secrets with Windows services, Windows stores temporary secret files on disk. You can use BitLocker on the volume containing the Docker root directory to encrypt the secret data at rest.
    • When creating a service which uses Windows containers, the options to specify UID, GID, and mode are not supported for secrets. Secrets are currently only accessible by administrators and users with system access within the container.
  • Mounts
    • On Windows, Docker can't listen on a Unix socket. Use TCP or a named pipe instead.
diff --git a/ee/ucp/admin/configure/_site/join-nodes/use-a-load-balancer.html b/ee/ucp/admin/configure/_site/join-nodes/use-a-load-balancer.html
Once you've joined multiple manager nodes for high availability, you can configure your own load balancer to balance user requests across all manager nodes.
This allows users to access UCP using a centralized domain name. If a manager node goes down, the load balancer can detect that and stop forwarding requests to that node, so that the failure goes unnoticed by users.
Load-balancing on UCP
Since Docker UCP uses mutual TLS, make sure you configure your load balancer to:
  • Load-balance TCP traffic on ports 443 and 6443.
  • Not terminate HTTPS connections.
  • Use the /_ping endpoint on each manager node to check whether the node is healthy and whether it should remain in the load-balancing pool.
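A quick way to exercise the same health check your load balancer performs is to query /_ping directly. A sketch, with a hypothetical manager address (-k skips certificate verification, for test environments only):

```shell
# A healthy manager returns HTTP 200
curl -k -s -o /dev/null -w "%{http_code}\n" https://<ucp-manager-ip>/_ping
```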
Load balancing UCP and DTR
By default, both UCP and DTR use port 443. If you plan on deploying UCP and DTR, your load balancer needs to distinguish traffic between the two by IP address or port number.
  • If you want to configure your load balancer to listen on port 443:
    • Use one load balancer for UCP and another for DTR, or
    • Use the same load balancer with multiple virtual IPs.
  • Configure your load balancer to expose UCP or DTR on a port other than 443.
Additional requirements
In addition to configuring your load balancer to distinguish between UCP and DTR, configuring a load balancer for DTR has additional requirements.
Configuration examples
Use the following examples to configure your load balancer for UCP.
user  nginx;
worker_processes  1;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

stream {
    upstream ucp_443 {
        server <UCP_MANAGER_1_IP>:443 max_fails=2 fail_timeout=30s;
        server <UCP_MANAGER_2_IP>:443 max_fails=2 fail_timeout=30s;
        server <UCP_MANAGER_N_IP>:443 max_fails=2 fail_timeout=30s;
    }
    server {
        listen 443;
        proxy_pass ucp_443;
    }
}
global
    log /dev/log    local0
    log /dev/log    local1 notice

defaults
        mode    tcp
        option  dontlognull
        timeout connect     5s
        timeout client      50s
        timeout server      50s
        timeout tunnel      1h
        timeout client-fin  50s
### frontends
# Optional HAProxy Stats Page accessible at http://<host-ip>:8181/haproxy?stats
frontend ucp_stats
        mode http
        bind 0.0.0.0:8181
        default_backend ucp_stats
frontend ucp_443
        mode tcp
        bind 0.0.0.0:443
        default_backend ucp_upstream_servers_443
### backends
backend ucp_stats
        mode http
        option httplog
        stats enable
        stats admin if TRUE
        stats refresh 5m
backend ucp_upstream_servers_443
        mode tcp
        option httpchk GET /_ping HTTP/1.1\r\nHost:\ <UCP_FQDN>
        server node01 <UCP_MANAGER_1_IP>:443 weight 100 check check-ssl verify none
        server node02 <UCP_MANAGER_2_IP>:443 weight 100 check check-ssl verify none
        server node03 <UCP_MANAGER_N_IP>:443 weight 100 check check-ssl verify none
{
    "Subnets": [
        "subnet-XXXXXXXX",
        "subnet-YYYYYYYY",
        "subnet-ZZZZZZZZ"
    ],
    "CanonicalHostedZoneNameID": "XXXXXXXXXXX",
    "CanonicalHostedZoneName": "XXXXXXXXX.us-west-XXX.elb.amazonaws.com",
    "ListenerDescriptions": [
        {
            "Listener": {
                "InstancePort": 443,
                "LoadBalancerPort": 443,
                "Protocol": "TCP",
                "InstanceProtocol": "TCP"
            },
            "PolicyNames": []
        }
    ],
    "HealthCheck": {
        "HealthyThreshold": 2,
        "Interval": 10,
        "Target": "HTTPS:443/_ping",
        "Timeout": 2,
        "UnhealthyThreshold": 4
    },
    "VPCId": "vpc-XXXXXX",
    "BackendServerDescriptions": [],
    "Instances": [
        {
            "InstanceId": "i-XXXXXXXXX"
        },
        {
            "InstanceId": "i-XXXXXXXXX"
        },
        {
            "InstanceId": "i-XXXXXXXXX"
        }
    ],
    "DNSName": "XXXXXXXXXXXX.us-west-2.elb.amazonaws.com",
    "SecurityGroups": [
        "sg-XXXXXXXXX"
    ],
    "Policies": {
        "LBCookieStickinessPolicies": [],
        "AppCookieStickinessPolicies": [],
        "OtherPolicies": []
    },
    "LoadBalancerName": "ELB-UCP",
    "CreatedTime": "2017-02-13T21:40:15.400Z",
    "AvailabilityZones": [
        "us-west-2c",
        "us-west-2a",
        "us-west-2b"
    ],
    "Scheme": "internet-facing",
    "SourceSecurityGroup": {
        "OwnerAlias": "XXXXXXXXXXXX",
        "GroupName": "XXXXXXXXXXXX"
    }
}
You can deploy your load balancer using:
# Create the nginx.conf file, then
# deploy the load balancer

docker run --detach \
  --name ucp-lb \
  --restart=unless-stopped \
  --publish 443:443 \
  --volume ${PWD}/nginx.conf:/etc/nginx/nginx.conf:ro \
  nginx:stable-alpine
+
# Create the haproxy.cfg file, then
+# deploy the load balancer
+
+docker run --detach \
+  --name ucp-lb \
+  --publish 443:443 \
+  --publish 8181:8181 \
+  --restart=unless-stopped \
+  --volume ${PWD}/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro \
+  haproxy:1.7-alpine haproxy -d -f /usr/local/etc/haproxy/haproxy.cfg
+
+
+
+ +
Where to go next
+ + From 9df2f34b75ab4cd4e3a220ca1dd35e17ee2b0e23 Mon Sep 17 00:00:00 2001 From: ddeyo Date: Mon, 15 Oct 2018 13:28:28 -0700 Subject: [PATCH 04/27] edits per PR --- ee/ucp/admin/configure/set-session-timeout.md | 6 +- .../admin/configure/ucp-configuration-file.md | 73 ++++++++++--------- 2 files changed, 40 insertions(+), 39 deletions(-) diff --git a/ee/ucp/admin/configure/set-session-timeout.md b/ee/ucp/admin/configure/set-session-timeout.md index 19d6cb9375..366126ef16 100644 --- a/ee/ucp/admin/configure/set-session-timeout.md +++ b/ee/ucp/admin/configure/set-session-timeout.md @@ -16,6 +16,6 @@ To configure UCP login sessions, go to the UCP web UI, navigate to the | Field | Description | | :---------------------- | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Lifetime Minutes | The initial lifetime of a login session, from the time UCP generates it. When this time expires, UCP invalidates the session, and the user must authenticate again to establish a new session. The default is 4320 minutes, which is 72 hours. | -| Renewal Threshold Minutes | The time before session expiration when UCP extends an active session. UCP extends the session by the number of hours specified in **Lifetime Hours**. The threshold value can't be greater than **Lifetime Hours**. The default is 1440 minutes, which is 24 hours. To specify that sessions are extended with every use, set the threshold equal to the lifetime. To specify that sessions are never extended, set the threshold to zero. 
This may cause users to be logged out unexpectedly while using the UCP web UI. | -| Per User Limit | The maximum number of simultaneous logins for a user. If creating a new session exceeds this limit, UCP deletes the least recently used session. To disable the limit, set the value to zero. | +| Lifetime Minutes | The initial lifetime of a login session, starting from the time UCP generates the session. When this time expires, UCP invalidates the session. To establish a new session,the u ser must authenticate again. The default is 60 minutes. The minimum is 10 minutes. | +| Renewal Threshold Minutes | The time by which UCP extends an active session before session expiration. UCP extends the session by the number of minutes specified in **Lifetime Minutes**. The threshold value can't be greater than **Lifetime Minutes**. The default is 20 minutes. To specify that sessions are never extended, set the threshold value to zero. This may cause users to be logged out unexpectedly while using the UCP web interface. The maximum threshold is 5 minutes less than **Lifetime Minutes**. | +| Per User Limit | The maximum number of simultaneous logins for a user. If creating a new session exceeds this limit, UCP deletes the least recently used session. To disable this limit, set the value to zero. The default limit is 10 sessions. | diff --git a/ee/ucp/admin/configure/ucp-configuration-file.md b/ee/ucp/admin/configure/ucp-configuration-file.md index 4475e5bcd6..a6791f8558 100644 --- a/ee/ucp/admin/configure/ucp-configuration-file.md +++ b/ee/ucp/admin/configure/ucp-configuration-file.md @@ -4,62 +4,55 @@ description: Set up UCP deployments by using a configuration file. keywords: Docker EE, UCP, configuration, config --- -You have two options to configure UCP: through the web UI, or using a Docker -config object. In most cases, the web UI is a front-end for changing the -configuration file. 
+There are two ways to configure UCP: +- through the web interface, or +- importing and exporting the UCP config in a TOML file. For more information about TOML, see [here on GitHub](https://github.com/toml-lang/toml/blob/master/README.md)) -You can customize how UCP is installed by creating a configuration file upfront. -During the installation UCP detects and starts using the configuration. +You can customize the UCP installation by creating a configuration file at the +time of installation. During the installation, UCP detects and starts using the +configuration specified in this file. ## UCP configuration file -The `ucp-agent` service uses a configuration file to set up UCP. You can use the configuration file in different ways to set up your UCP cluster. -- Install one cluster and use the UCP web UI to configure it as desired, - extract the configuration file, edit it as needed, and use the edited - config file to make copies to multiple other cluster. -- Install a UCP cluster, extract and edit the configuration file, and use the - CLI to apply the new configuration to the same cluster. +- Install one cluster and use the UCP web interface to configure it as desired, + export the configuration file, edit it as needed, and then import the edited + configuration file into multiple other clusters. +- Install a UCP cluster, export and edit the configuration file, and then use the + API to import the new configuration into the same cluster. - Run the `example-config` command, edit the example configuration file, and - apply the file at install time or after installation. + set the configuration at install time or import after installation. Specify your configuration settings in a TOML file. -[Learn about Tom's Obvious, Minimal Language](https://github.com/toml-lang/toml/blob/master/README.md). -The configuration has a versioned naming convention, with a trailing decimal -number that increases with each version, like `com.docker.ucp.config-1`. 
The -`ucp-agent` service maps the configuration to the file at `/etc/ucp/ucp.toml`. +## Export and modify existing configuration -## Inspect and modify existing configuration - -Use the `docker config inspect` command to view the current settings and emit -them to a file. +Use the `Get Config TOML` API to export the current settings and emit them to a file. From within the directory of a UCP admin user's [client certificate bundle](../../user-access/cli.md), +and with the UCP hostname `UCP_HOST`, the following `curl` command will export +the current UCP configuration to a file named `ucp-config.toml`: ```bash -{% raw %} -# CURRENT_CONFIG_NAME will be the name of the currently active UCP configuration -CURRENT_CONFIG_NAME=$(docker service inspect ucp-agent --format '{{range .Spec.TaskTemplate.ContainerSpec.Configs}}{{if eq "/etc/ucp/ucp.toml" .File.Name}}{{.ConfigName}}{{end}}{{end}}') -# Collect the current config with `docker config inspect` -docker config inspect --format '{{ printf "%s" .Spec.Data }}' $CURRENT_CONFIG_NAME > ucp-config.toml -{% endraw %} +curl --cacert ca.pem --cert cert.pem --key key.pem https://UCP_HOST/api/ucp/config-toml > ucp-config.toml ``` -Edit the file, then use the `docker config create` and `docker service update` -commands to create and apply the configuration from the file. +Edit this file, then use the following `curl` command to import it back into +UCP and apply your configuration changes: ```bash -# NEXT_CONFIG_NAME will be the name of the new UCP configuration -NEXT_CONFIG_NAME=${CURRENT_CONFIG_NAME%%-*}-$((${CURRENT_CONFIG_NAME##*-}+1)) -# Create the new cluster configuration from the file ucp-config.toml -docker config create $NEXT_CONFIG_NAME ucp-config.toml -# Use the `docker service update` command to remove the current configuration -# and apply the new configuration to the `ucp-agent` service. 
-docker service update --config-rm $CURRENT_CONFIG_NAME --config-add source=$NEXT_CONFIG_NAME,target=/etc/ucp/ucp.toml ucp-agent +curl --cacert ca.pem --cert cert.pem --key key.pem --upload-file ucp-config.toml https://UCP_HOST/api/ucp/config-toml ``` +## Apply existing configuration file at install time. +You can configure UCP to import an existing configuration file at install time. To do this using the Configs feature of Docker Swarm, follow these steps. + +1. Create a Docker Swarm Config object with a name of `com.docker.ucp.config` and having a value of your UCP config TOML file contents. +2. When installing UCP on that cluster, be sure to specify the `--existing-config` flag to instruct the installer to +use that object as its initial configuration. +3. After installation, remove the `com.docker.ucp.config` Config object. + ## Example configuration file You can see an example TOML config file that shows how to configure UCP @@ -97,6 +90,14 @@ An array of tables that specifies the DTR instances that the current UCP instanc | `service_id` | yes | The DTR instance's OpenID Connect Client ID, as registered with the Docker authentication provider. | | `ca_bundle` | no | If you're using a custom certificate authority (CA), the `ca_bundle` setting specifies the root CA bundle for the DTR instance. The value is a string with the contents of a `ca.pem` file. | +### audit_log_configuration table (optional) + Configures audit logging options for UCP components. + | Parameter | Required | Description | +|:---------------|:---------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `level` | no | Specify the audit logging level. Leave empty for disabling audit logs (default). Other legal values are "metadata" and "request". 
| +| `support_dump_include_audit_logs` | no | When set to true, support dumps will include audit logs in the logs of the ucp-controller container of each manager node. The default is `false`. | + + ### scheduling_configuration table (optional) Specifies scheduling options and the default orchestrator for new nodes. @@ -165,7 +166,7 @@ components. Assigning these values overrides the settings in a container's | `profiling_enabled` | no | Set to `true` to enable specialized debugging endpoints for profiling UCP performance. The default is `false`. | | `kv_timeout` | no | Sets the key-value store timeout setting, in milliseconds. The default is `5000`. | | `kv_snapshot_count` | no | Sets the key-value store snapshot count setting. The default is `20000`. | -| `external_service_lb` | no | Specifies an optional external load balancer for default links to services with exposed ports in the web UI. | +| `external_service_lb` | no | Specifies an optional external load balancer for default links to services with exposed ports in the web interface. | | `cni_installer_url` | no | Specifies the URL of a Kubernetes YAML file to be used for installing a CNI plugin. Applies only during initial installation. If empty, the default CNI plugin is used. | | `metrics_retention_time` | no | Adjusts the metrics retention time. | | `metrics_scrape_interval` | no | Sets the interval for how frequently managers gather metrics from nodes in the cluster. 
| From 3980d971a43b81b29944d764d884055f4cb4159e Mon Sep 17 00:00:00 2001 From: David Deyo Date: Mon, 15 Oct 2018 13:47:59 -0700 Subject: [PATCH 05/27] Delete index.html --- .../configure/_site/join-nodes/index.html | 59 ------------------- 1 file changed, 59 deletions(-) delete mode 100644 ee/ucp/admin/configure/_site/join-nodes/index.html diff --git a/ee/ucp/admin/configure/_site/join-nodes/index.html b/ee/ucp/admin/configure/_site/join-nodes/index.html deleted file mode 100644 index c7e5e7ae54..0000000000 --- a/ee/ucp/admin/configure/_site/join-nodes/index.html +++ /dev/null @@ -1,59 +0,0 @@ -
Docker Universal Control Plane is designed for high availability (HA). You can join multiple manager nodes to the cluster, so that if one manager node fails, another can automatically take its place without impact to the cluster.
Having multiple manager nodes in your cluster allows you to:
  • Handle manager node failures.
  • Load-balance user requests across all manager nodes.
Size your deployment
To make the cluster tolerant to more failures, add additional replica nodes to your cluster.

Manager nodes | Failures tolerated
------------- | ------------------
1             | 0
3             | 1
5             | 2
For production-grade deployments, follow these rules of thumb:
  • When a manager node fails, the number of failures tolerated by your cluster decreases. Don't leave that node offline for too long.
  • You should distribute your manager nodes across different availability zones. This way your cluster can continue working even if an entire availability zone goes down.
  • Adding many manager nodes to the cluster might lead to performance degradation, as changes to configurations need to be replicated across all manager nodes. The maximum advisable is seven manager nodes.
Where to go next

From e4201ef28c8fb797aed3fecb75e042e4b8b5f52e Mon Sep 17 00:00:00 2001
From: David Deyo
Date: Mon, 15 Oct 2018 13:48:09 -0700
Subject: [PATCH 06/27] Delete enable-ldap-config-file.html

diff --git a/ee/ucp/admin/configure/_site/external-auth/enable-ldap-config-file.html b/ee/ucp/admin/configure/_site/external-auth/enable-ldap-config-file.html
Docker UCP integrates with LDAP directory services, so that you can manage users and groups from your organization's directory and automatically propagate this information to UCP and DTR. You can set up your cluster's LDAP configuration by using the UCP web UI, or you can use a UCP configuration file.
To see an example TOML config file that shows how to configure UCP settings, run UCP with the example-config option. Learn about UCP configuration files.
docker container run --rm {{ page.ucp_org }}/{{ page.ucp_repo }}:{{ page.ucp_version }} example-config
Set up LDAP by using a configuration file
  1. Use the following command to extract the name of the currently active configuration from the ucp-agent service.

     $ CURRENT_CONFIG_NAME=$(docker service inspect --format '{{ range $config := .Spec.TaskTemplate.ContainerSpec.Configs }}{{ $config.ConfigName }}{{ "\n" }}{{ end }}' ucp-agent | grep 'com.docker.ucp.config-')

  2. Get the current configuration and save it to a TOML file.

     docker config inspect --format '{{ printf "%s" .Spec.Data }}' $CURRENT_CONFIG_NAME > config.toml

  3. Use the output of the example-config command as a guide to edit your config.toml file. Under the [auth] sections, set backend = "ldap" and [auth.ldap] to configure LDAP integration the way you want.

  4. Once you've finished editing your config.toml file, create a new Docker Config object by using the following command.

     NEW_CONFIG_NAME="com.docker.ucp.config-$(( $(cut -d '-' -f 2 <<< "$CURRENT_CONFIG_NAME") + 1 ))"
     docker config create $NEW_CONFIG_NAME config.toml

  5. Update the ucp-agent service to remove the reference to the old config and add a reference to the new config.

     docker service update --config-rm "$CURRENT_CONFIG_NAME" --config-add "source=${NEW_CONFIG_NAME},target=/etc/ucp/ucp.toml" ucp-agent

  6. Wait a few moments for the ucp-agent service tasks to update across your cluster. If you set jit_user_provisioning = true in the LDAP configuration, users matching any of your specified search queries will have their accounts created when they log in with their username and LDAP password.
Where to go next

From 73edd2acb81b45f8fd2372c94818eeffa69670aa Mon Sep 17 00:00:00 2001
From: David Deyo
Date: Mon, 15 Oct 2018 13:48:19 -0700
Subject: [PATCH 07/27] Delete index.html

diff --git a/ee/ucp/admin/configure/_site/external-auth/index.html b/ee/ucp/admin/configure/_site/external-auth/index.html
Docker UCP integrates with LDAP directory services, so that you can manage users and groups from your organization’s directory, and it will automatically propagate that information to UCP and DTR.

If you enable LDAP, UCP uses a remote directory server to create users automatically, and all logins are forwarded to the directory server.

When you switch from built-in authentication to LDAP authentication, all manually created users whose usernames don’t match any LDAP search results are still available.

When you enable LDAP authentication, you can choose whether UCP creates user accounts only when users log in for the first time. Select the Just-In-Time User Provisioning option to ensure that the only LDAP accounts that exist in UCP are those that have had a user log in to UCP.

How UCP integrates with LDAP

You control how UCP integrates with LDAP by creating searches for users. You can specify multiple search configurations, and you can specify multiple LDAP servers to integrate with. Searches start with the Base DN, which is the distinguished name of the node in the LDAP directory tree where the search starts looking for users.

Access LDAP settings by navigating to the Authentication & Authorization page in the UCP web UI. There are two sections for controlling LDAP searches and servers.

- LDAP user search configurations: This is the section of the Authentication & Authorization page where you specify search parameters, like Base DN, scope, filter, the username attribute, and the full name attribute. These searches are stored in a list, and the ordering may be important, depending on your search configuration.
- LDAP server: This is the section where you specify the URL of an LDAP server, TLS configuration, and credentials for doing the search requests. Also, you provide a domain for all servers but the first one. The first server is considered the default domain server. Any others are associated with the domain that you specify in the page.

Here’s what happens when UCP synchronizes with LDAP:

1. UCP creates a set of search results by iterating over each of the user search configs, in the order that you specify.
2. UCP chooses an LDAP server from the list of domain servers by considering the Base DN from the user search config and selecting the domain server that has the longest domain suffix match.
3. If no domain server has a domain suffix that matches the Base DN from the search config, UCP uses the default domain server.
4. UCP combines the search results into a list of users and creates UCP accounts for them. If the Just-In-Time User Provisioning option is set, user accounts are created only when users first log in.

The domain server to use is determined by the Base DN in each search config. UCP doesn’t perform search requests against each of the domain servers, only the one which has the longest matching domain suffix, or the default if there’s no match.

Here’s an example. Let’s say we have three LDAP domain servers:

| Domain                               | Server URL                   |
|:-------------------------------------|:-----------------------------|
| default                              | ldaps://ldap.example.com     |
| dc=subsidiary1,dc=com                | ldaps://ldap.subsidiary1.com |
| dc=subsidiary2,dc=subsidiary1,dc=com | ldaps://ldap.subsidiary2.com |

Here are three user search configs with the following Base DNs:

- `baseDN=ou=people,dc=subsidiary1,dc=com`

  For this search config, dc=subsidiary1,dc=com is the only server with a domain which is a suffix, so UCP uses the server ldaps://ldap.subsidiary1.com for the search request.

- `baseDN=ou=product,dc=subsidiary2,dc=subsidiary1,dc=com`

  For this search config, two of the domain servers have a domain which is a suffix of this base DN, but dc=subsidiary2,dc=subsidiary1,dc=com is the longer of the two, so UCP uses the server ldaps://ldap.subsidiary2.com for the search request.

- `baseDN=ou=eng,dc=example,dc=com`

  For this search config, there is no server with a domain specified which is a suffix of this base DN, so UCP uses the default server, ldaps://ldap.example.com, for the search request.
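The longest-suffix selection rule described above can be sketched as a small shell function. This is an illustrative sketch only, not UCP's implementation; the domains and URLs mirror the example table.

```shell
# Pick the LDAP server for a Base DN: the domain server whose domain is the
# longest suffix of the Base DN wins; otherwise fall back to the default server.
pick_server() {
  base_dn=$1
  best_url="ldaps://ldap.example.com"   # default domain server
  best_len=0
  while IFS='|' read -r domain url; do
    case "$base_dn" in
      *"$domain")
        if [ "${#domain}" -gt "$best_len" ]; then
          best_url=$url
          best_len=${#domain}
        fi ;;
    esac
  done <<'EOF'
dc=subsidiary1,dc=com|ldaps://ldap.subsidiary1.com
dc=subsidiary2,dc=subsidiary1,dc=com|ldaps://ldap.subsidiary2.com
EOF
  echo "$best_url"
}

pick_server "ou=people,dc=subsidiary1,dc=com"                  # prints ldaps://ldap.subsidiary1.com
pick_server "ou=product,dc=subsidiary2,dc=subsidiary1,dc=com"  # prints ldaps://ldap.subsidiary2.com
pick_server "ou=eng,dc=example,dc=com"                         # prints ldaps://ldap.example.com
```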

If there are username collisions for the search results between domains, UCP uses only the first search result, so the ordering of the user search configs may be important. For example, if both the first and third user search configs result in a record with the username jane.doe, the first has higher precedence and the third is ignored. For this reason, it’s important to choose a username attribute that’s unique for your users across all domains.
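The first-match-wins behavior can be illustrated with a short sketch; the usernames and domains below are made up, and the function is illustrative rather than UCP's actual merge logic:

```shell
# Merge search results in search-config order: a username seen earlier is
# kept, and a later record with the same username is ignored.
merge_results() {
  seen=" "
  while read -r user domain; do
    case "$seen" in
      *" $user "*) echo "ignore $user from $domain (duplicate)" ;;
      *) seen="$seen$user "; echo "sync $user from $domain" ;;
    esac
  done
}

printf '%s\n' \
  'jane.doe dc=subsidiary1,dc=com' \
  'john.q dc=subsidiary1,dc=com' \
  'jane.doe dc=example,dc=com' | merge_results
# prints:
#   sync jane.doe from dc=subsidiary1,dc=com
#   sync john.q from dc=subsidiary1,dc=com
#   ignore jane.doe from dc=example,dc=com (duplicate)
```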

Because names may collide, it’s a good idea to use something unique to the subsidiary, like the email address for each person. Users can then log in with the email address, for example, jane.doe@subsidiary1.com.

Configure the LDAP integration

To configure UCP to create and authenticate users by using an LDAP directory, go to the UCP web UI, navigate to the Admin Settings page, and click Authentication & Authorization to select the method used to create and authenticate users.

In the LDAP Enabled section, click Yes. The LDAP settings appear. Now configure your LDAP directory integration.

Default role for all private collections

Use this setting to change the default permissions of new users.

Click the dropdown to select the permission level that UCP assigns by default to the private collections of new users. For example, if you change the value to View Only, all users who log in for the first time after the setting is changed have View Only access to their private collections, but permissions remain unchanged for all existing users. Learn more about permission levels.

LDAP enabled

Click Yes to enable integrating UCP users and teams with LDAP servers.

LDAP server

| Field | Description |
|:------|:------------|
| LDAP server URL | The URL where the LDAP server can be reached. |
| Reader DN | The distinguished name of the LDAP account used for searching entries in the LDAP server. As a best practice, this should be an LDAP read-only user. |
| Reader password | The password of the account used for searching entries in the LDAP server. |
| Use Start TLS | Whether to authenticate/encrypt the connection after connecting to the LDAP server over TCP. If you set the LDAP server URL field with `ldaps://`, this field is ignored. |
| Skip TLS verification | Whether to skip verifying the LDAP server certificate when using TLS. The connection is still encrypted but vulnerable to man-in-the-middle attacks. |
| No simple pagination | Select this option if your LDAP server doesn’t support pagination. |
| Just-In-Time User Provisioning | Whether to create user accounts only when users log in for the first time. The default value of `true` is recommended. If you upgraded from UCP 2.0.x, the default is `false`. |

Click Confirm to add your LDAP domain.

To integrate with more LDAP servers, click Add LDAP Domain.

LDAP user search configurations

| Field | Description |
|:------|:------------|
| Base DN | The distinguished name of the node in the directory tree where the search should start looking for users. |
| Username attribute | The LDAP attribute to use as username on UCP. Only user entries with a valid username will be created. A valid username is no longer than 100 characters and does not contain any unprintable characters, whitespace characters, or any of the following characters: `/ \ [ ] : ; \| = , + * ? < > ' "`. |
| Full name attribute | The LDAP attribute to use as the user’s full name for display purposes. If left empty, UCP will not create new users with a full name value. |
| Filter | The LDAP search filter used to find users. If you leave this field empty, all directory entries in the search scope with valid username attributes are created as users. |
| Search subtree instead of just one level | Whether to perform the LDAP search on a single level of the LDAP tree, or search through the full LDAP tree starting at the Base DN. |
| Match Group Members | Whether to further filter users by selecting those who are also members of a specific group on the directory server. This feature is helpful if the LDAP server does not support `memberOf` search filters. |
| Iterate through group members | If Match Group Members is selected, this option searches for users by first iterating over the target group’s membership, making a separate LDAP query for each member, as opposed to first querying for all users which match the above search query and intersecting those with the set of group members. This option can be more efficient in situations where the number of members of the target group is significantly smaller than the number of users which would match the above search filter, or if your directory server does not support simple pagination of search results. |
| Group DN | If Match Group Members is selected, this specifies the distinguished name of the group from which to select users. |
| Group Member Attribute | If Match Group Members is selected, the value of this group attribute corresponds to the distinguished names of the members of the group. |
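A validity check matching the username rules above can be sketched in shell. The function name and the exact character class are illustrative assumptions, not UCP's actual validation code:

```shell
# Reject usernames longer than 100 characters or containing whitespace,
# control characters, or any of: / \ [ ] : ; | = , + * ? < > ' "
valid_username() {
  u=$1
  [ "${#u}" -le 100 ] || return 1
  printf '%s' "$u" | LC_ALL=C grep -Eq "^[^][/\\:;|=,+*?<>'\" [:space:][:cntrl:]]+\$"
}

valid_username 'jane.doe' && echo accepted   # prints accepted
valid_username 'jane doe' || echo rejected   # prints rejected (whitespace)
valid_username 'jane|doe' || echo rejected   # prints rejected (forbidden character)
```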

To configure more user search queries, click Add LDAP User Search Configuration again. This is useful in cases where users may be found in multiple distinct subtrees of your organization’s directory. Any user entry which matches at least one of the search configurations will be synced as a user.

LDAP test login

| Field | Description |
|:------|:------------|
| Username | An LDAP username for testing authentication to this application. This value corresponds with the Username attribute specified in the LDAP user search configurations section. |
| Password | The user’s password used to authenticate (BIND) to the directory server. |

Before you save the configuration changes, you should test that the integration is correctly configured. You can do this by providing the credentials of an LDAP user and clicking the Test button.

LDAP sync configuration

| Field | Description |
|:------|:------------|
| Sync interval | The interval, in hours, to synchronize users between UCP and the LDAP server. When the synchronization job runs, new users found in the LDAP server are created in UCP with the default permission level. UCP users that don’t exist in the LDAP server become inactive. |
| Enable sync of admin users | This option specifies that system admins should be synced directly with members of a group in your organization’s LDAP directory. The admins will be synced to match the membership of the group. The configured recovery admin user will also remain a system admin. |

Once you’ve configured the LDAP integration, UCP synchronizes users based on the interval you’ve defined, starting at the top of the hour. When the synchronization runs, UCP stores logs that can help you troubleshoot when something goes wrong.

You can also manually synchronize users by clicking Sync Now.


Revoke user access

When a user is removed from LDAP, the effect on the user’s UCP account depends on the Just-In-Time User Provisioning setting:

- Just-In-Time User Provisioning is false: Users deleted from LDAP become inactive in UCP after the next LDAP synchronization runs.
- Just-In-Time User Provisioning is true: Users deleted from LDAP can’t authenticate, but their UCP accounts remain active. This means that they can use their client bundles to run commands. To prevent this, deactivate their UCP user accounts.

Data synced from your organization’s LDAP directory

UCP saves the minimum amount of user data required to operate. This includes the value of the username and full name attributes that you have specified in the configuration, as well as the distinguished name of each synced user. UCP does not store any additional data from the directory server.

Sync teams

UCP enables syncing teams with a search query or group in your organization’s LDAP directory. Sync team members with your organization’s LDAP directory.

Where to go next

- - From 35aae466a7c2830f27ea15203c254718dfc59154 Mon Sep 17 00:00:00 2001 From: David Deyo Date: Mon, 15 Oct 2018 13:48:27 -0700 Subject: [PATCH 08/27] Delete join-linux-nodes-to-cluster.html --- .../join-linux-nodes-to-cluster.html | 143 ------------------ 1 file changed, 143 deletions(-) delete mode 100644 ee/ucp/admin/configure/_site/join-nodes/join-linux-nodes-to-cluster.html diff --git a/ee/ucp/admin/configure/_site/join-nodes/join-linux-nodes-to-cluster.html b/ee/ucp/admin/configure/_site/join-nodes/join-linux-nodes-to-cluster.html deleted file mode 100644 index d0836ae5c0..0000000000 --- a/ee/ucp/admin/configure/_site/join-nodes/join-linux-nodes-to-cluster.html +++ /dev/null @@ -1,143 +0,0 @@ -

Docker EE is designed for scaling horizontally as your applications grow in size and usage. You can add or remove nodes from the cluster to scale it to your needs. You can join Windows Server 2016, IBM z System, and Linux nodes to the cluster.

Because Docker EE leverages the clustering functionality provided by Docker Engine, you use the docker swarm join command to add more nodes to your cluster. When you join a new node, Docker EE services start running on the node automatically.

Node roles

When you join a node to a cluster, you specify its role: manager or worker.

- Manager: Manager nodes are responsible for cluster management functionality and dispatching tasks to worker nodes. Having multiple manager nodes allows your swarm to be highly available and tolerant of node failures.

  Manager nodes also run all Docker EE components in a replicated way, so by adding additional manager nodes, you’re also making the cluster highly available. Learn more about the Docker EE architecture.

- Worker: Worker nodes receive and execute your services and applications. Having multiple worker nodes allows you to scale the computing capacity of your cluster.

  When deploying Docker Trusted Registry in your cluster, you deploy it to a worker node.

Join a node to the cluster

You can join Windows Server 2016, IBM z System, and Linux nodes to the cluster, but only Linux nodes can be managers.

To join nodes to the cluster, go to the Docker EE web UI and navigate to the Nodes page.

1. Click Add Node to add a new node.
2. Select the type of node to add, Windows or Linux.
3. Click Manager if you want to add the node as a manager.
4. Check the Use a custom listen address option to specify the address and port where the new node listens for inbound cluster management traffic.
5. Check the Use a custom advertise address option to specify the IP address that’s advertised to all members of the cluster for API access.

Copy the displayed command, use SSH to log in to the host that you want to join to the cluster, and run the docker swarm join command on the host.

To add a Windows node, click Windows and follow the instructions in Join Windows worker nodes to a cluster.

After you run the join command on the node, the node is displayed on the Nodes page in the Docker EE web UI. From there, you can change the node’s cluster configuration, including its assigned orchestrator type. Learn how to change the orchestrator for a node.

Pause or drain a node

Once a node is part of the cluster, you can configure the node’s availability so that it is:

- Active: the node can receive and execute tasks.
- Paused: the node continues running existing tasks, but doesn’t receive new tasks.
- Drained: the node won’t receive new tasks. Existing tasks are stopped and replica tasks are launched on active nodes.

Pause or drain a node from the Edit Node page:

1. In the Docker EE web UI, browse to the Nodes page and select the node.
2. In the details pane, click Configure and select Details to open the Edit Node page.
3. In the Availability section, click Active, Pause, or Drain.
4. Click Save to change the availability of the node.
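With a UCP client bundle loaded, the same availability states can be set from the CLI using `docker node update`; the node name below is a placeholder:

```shell
# These commands assume a Docker CLI configured against the cluster
# (for example, via a UCP client bundle). "worker-01" is an example node name.
docker node update --availability drain  worker-01   # stop tasks and reschedule them elsewhere
docker node update --availability pause  worker-01   # keep running tasks, accept no new ones
docker node update --availability active worker-01   # return the node to service
```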


Promote or demote a node

You can promote worker nodes to managers to make UCP fault tolerant. You can also demote a manager node into a worker.

To promote or demote a manager node:

1. Navigate to the Nodes page, and click the node that you want to promote or demote.
2. In the details pane, click Configure and select Details to open the Edit Node page.
3. In the Role section, click Manager or Worker.
4. Click Save and wait until the operation completes.
5. Navigate to the Nodes page, and confirm that the node role has changed.
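The same role changes are also available from the CLI with a client bundle; the node names here are placeholders:

```shell
# Promote or demote nodes from the CLI (assumes a UCP client bundle is loaded).
docker node promote worker-01    # worker  -> manager
docker node demote  manager-03   # manager -> worker
docker node ls                   # confirm the MANAGER STATUS column changed
```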

If you’re load-balancing user requests to Docker EE across multiple manager nodes, don’t forget to remove these nodes from your load-balancing pool when you demote them to workers.

Remove a node from the cluster

You can remove worker nodes from the cluster at any time:

1. Navigate to the Nodes page and select the node.
2. In the details pane, click Actions and select Remove.
3. Click Confirm when you’re prompted.

Since manager nodes are important to the overall health of the cluster, you need to be careful when removing one from the cluster.

To remove a manager node:

1. Make sure all nodes in the cluster are healthy. Don’t remove manager nodes if that’s not the case.
2. Demote the manager node into a worker.
3. Now you can remove that node from the cluster.

Use the CLI to manage your nodes

You can use the Docker CLI client to manage your nodes from the CLI. To do this, configure your Docker CLI client with a UCP client bundle.

Once you do that, you can start managing your UCP nodes:

docker node ls
From 1185eae1d9781242d38d7c63261ae128092ba00e Mon Sep 17 00:00:00 2001 From: David Deyo Date: Mon, 15 Oct 2018 13:48:42 -0700 Subject: [PATCH 09/27] Delete join-windows-nodes-to-cluster.html --- .../join-windows-nodes-to-cluster.html | 235 ------------------ 1 file changed, 235 deletions(-) delete mode 100644 ee/ucp/admin/configure/_site/join-nodes/join-windows-nodes-to-cluster.html diff --git a/ee/ucp/admin/configure/_site/join-nodes/join-windows-nodes-to-cluster.html b/ee/ucp/admin/configure/_site/join-nodes/join-windows-nodes-to-cluster.html deleted file mode 100644 index 07939769d0..0000000000 --- a/ee/ucp/admin/configure/_site/join-nodes/join-windows-nodes-to-cluster.html +++ /dev/null @@ -1,235 +0,0 @@ -

Docker Enterprise Edition supports worker nodes that run on Windows Server 2016 or 1709. Only worker nodes are supported on Windows, and all manager nodes in the cluster must run on Linux.

Follow these steps to enable a worker node on Windows.

1. Install Docker EE Engine on Windows Server 2016.
2. Configure the Windows node.
3. Join the Windows node to the cluster.

Install Docker EE Engine on Windows Server 2016 or 1709

Install Docker EE Engine on a Windows Server 2016 or 1709 instance to enable joining a cluster that’s managed by Docker Enterprise Edition.

Configure the Windows node

Follow these steps to configure the Docker daemon and the Windows environment.

1. Add a label to the node.
2. Pull the Windows-specific image of ucp-agent, which is named ucp-agent-win.
3. Run the Windows worker setup script provided with ucp-agent-win.
4. Join the cluster with the token provided by the Docker EE web UI or CLI.

Add a label to the node

Configure the Docker Engine running on the node to have a label. This makes it easier to deploy applications on nodes with this label.

Create the file C:\ProgramData\docker\config\daemon.json with the following content:
{
  "labels": ["os=windows"]
}

Restart Docker for the changes to take effect:

Restart-Service docker

Pull the Windows-specific images

On a manager node, run the following command to list the images that are required on Windows nodes.
docker container run --rm <ucp-org>/<ucp-repo>:<ucp-version> images --list --enable-windows
<ucp-org>/ucp-agent-win:<ucp-version>
<ucp-org>/ucp-dsinfo-win:<ucp-version>

On Windows Server 2016, in a PowerShell terminal running as Administrator, log in to Docker Hub with the docker login command and pull the listed images.
docker image pull <ucp-org>/ucp-agent-win:<ucp-version>
docker image pull <ucp-org>/ucp-dsinfo-win:<ucp-version>

Run the Windows node setup script

You need to open ports 2376 and 12376, and create certificates for the Docker daemon to communicate securely. Use this command to run the Windows node setup script:
$script = [ScriptBlock]::Create((docker run --rm <ucp-org>/ucp-agent-win:<ucp-version> windows-script | Out-String))

Invoke-Command $script
Note: When you run windows-script, the Docker service is temporarily unavailable while the Docker daemon restarts.

The Windows node is ready to join the cluster. Run the setup script on each instance of Windows Server that will be a worker node.

Compatibility with daemon.json

The script may be incompatible with installations that use a config file at C:\ProgramData\docker\config\daemon.json. If you use such a file, make sure that the daemon runs on port 2376 and that it uses certificates located in C:\ProgramData\docker\daemoncerts. If certificates don’t exist in this directory, run ucp-agent-win generate-certs, as shown in Step 2 of the procedure in Set up certs for the dockerd service.

In the daemon.json file, set the tlscacert, tlscert, and tlskey options to the corresponding files in C:\ProgramData\docker\daemoncerts:
{
...
		"debug":     true,
		"tls":       true,
		"tlscacert": "C:\\ProgramData\\docker\\daemoncerts\\ca.pem",
		"tlscert":   "C:\\ProgramData\\docker\\daemoncerts\\cert.pem",
		"tlskey":    "C:\\ProgramData\\docker\\daemoncerts\\key.pem",
		"tlsverify": true,
...
}

Join the Windows node to the cluster

Now you can join the cluster by using the docker swarm join command that’s provided by the Docker EE web UI and CLI.
1. Log in to the Docker EE web UI with an administrator account.
2. Navigate to the Nodes page.
3. Click Add Node to add a new node.
4. In the Node Type section, click Windows.
5. In the Step 2 section, click the checkbox for “I’m ready to join my windows node.”
6. Check the Use a custom listen address option to specify the address and port where the new node listens for inbound cluster management traffic.
7. Check the Use a custom advertise address option to specify the IP address that’s advertised to all members of the cluster for API access.

Copy the displayed command. It looks similar to the following:
docker swarm join --token <token> <ucp-manager-ip>

You can also use the command line to get the join token. Using your UCP client bundle, run:
docker swarm join-token worker

Run the docker swarm join command on each instance of Windows Server that will be a worker node.

Configure a Windows worker node manually

The following sections describe how to run the commands in the setup script manually to configure the dockerd service and the Windows environment. The script opens ports in the firewall and sets up certificates for dockerd.

To see the script, you can run the windows-script command without piping to the Invoke-Expression cmdlet.
docker container run --rm <ucp-org>/ucp-agent-win:<ucp-version> windows-script

Open ports in the Windows firewall

Docker EE requires that ports 2376 and 12376 are open for inbound TCP traffic.

In a PowerShell terminal running as Administrator, run these commands to add rules to the Windows firewall.
netsh advfirewall firewall add rule name="docker_local" dir=in action=allow protocol=TCP localport=2376
netsh advfirewall firewall add rule name="docker_proxy" dir=in action=allow protocol=TCP localport=12376

Set up certs for the dockerd service

1. Create the directory C:\ProgramData\docker\daemoncerts.

2. In a PowerShell terminal running as Administrator, run the following command to generate certificates.

   docker container run --rm -v C:\ProgramData\docker\daemoncerts:C:\certs <ucp-org>/ucp-agent-win:<ucp-version> generate-certs

3. To set up certificates, run the following commands to stop and unregister the dockerd service, register the service with the certificates, and restart the service.

   Stop-Service docker
   dockerd --unregister-service
   dockerd -H npipe:// -H 0.0.0.0:2376 --tlsverify --tlscacert=C:\ProgramData\docker\daemoncerts\ca.pem --tlscert=C:\ProgramData\docker\daemoncerts\cert.pem --tlskey=C:\ProgramData\docker\daemoncerts\key.pem --register-service
   Start-Service docker

The dockerd service and the Windows environment are now configured to join a Docker EE cluster.
Note: If the TLS certificates aren’t set up correctly, the Docker EE web UI shows the following warning:

Node WIN-NOOQV2PJGTE is a Windows node that cannot connect to its local Docker daemon.

Windows node limitations

Some features are not yet supported on Windows nodes:

- Networking
  - The cluster mode routing mesh can’t be used on Windows nodes. You can expose a port for your service on the host where it is running, and use the HTTP routing mesh to make your service accessible using a domain name.
  - Encrypted networks are not supported. If you’ve upgraded from a previous version, you’ll also need to recreate the ucp-hrm network to make it unencrypted.
- Secrets
  - When using secrets with Windows services, Windows stores temporary secret files on disk. You can use BitLocker on the volume containing the Docker root directory to encrypt the secret data at rest.
  - When creating a service which uses Windows containers, the options to specify UID, GID, and mode are not supported for secrets. Secrets are currently only accessible by administrators and users with system access within the container.
- Mounts
  - On Windows, Docker can’t listen on a Unix socket. Use TCP or a named pipe instead.
From eb1542dd3df51cd77fc8513b5dcefc9471507595 Mon Sep 17 00:00:00 2001 From: David Deyo Date: Mon, 15 Oct 2018 13:48:51 -0700 Subject: [PATCH 10/27] Delete use-a-load-balancer.html --- .../_site/join-nodes/use-a-load-balancer.html | 220 ------------------ 1 file changed, 220 deletions(-) delete mode 100644 ee/ucp/admin/configure/_site/join-nodes/use-a-load-balancer.html diff --git a/ee/ucp/admin/configure/_site/join-nodes/use-a-load-balancer.html b/ee/ucp/admin/configure/_site/join-nodes/use-a-load-balancer.html deleted file mode 100644 index f1161c5a8d..0000000000 --- a/ee/ucp/admin/configure/_site/join-nodes/use-a-load-balancer.html +++ /dev/null @@ -1,220 +0,0 @@ -

Once you’ve joined multiple manager nodes for high availability, you can configure your own load balancer to balance user requests across all manager nodes.

This allows users to access UCP using a centralized domain name. If a manager node goes down, the load balancer can detect that and stop forwarding requests to that node, so that the failure goes unnoticed by users.

Load-balancing on UCP

Since Docker UCP uses mutual TLS, make sure you configure your load balancer to:

- Load-balance TCP traffic on ports 443 and 6443.
- Not terminate HTTPS connections.
- Use the /_ping endpoint on each manager node to check whether the node is healthy and whether it should remain in the load-balancing pool.
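A health probe equivalent to what the load balancer performs can be run by hand; the manager hostname below is an example, not a real address:

```shell
# Probe a manager's /_ping endpoint the way the load balancer would.
# -k skips certificate verification (UCP often runs with self-signed certs);
# -f makes curl exit non-zero on HTTP errors; the hostname is a placeholder.
if curl -kfs "https://ucp-manager-1.example.com/_ping" > /dev/null; then
  echo "healthy: keep the node in the pool"
else
  echo "unhealthy: take the node out of the pool"
fi
```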

Load balancing UCP and DTR

By default, both UCP and DTR use port 443. If you plan on deploying UCP and DTR, your load balancer needs to distinguish traffic between the two by IP address or port number.

- If you want to configure your load balancer to listen on port 443:
  - Use one load balancer for UCP and another for DTR, or
  - Use the same load balancer with multiple virtual IPs.
- Alternatively, configure your load balancer to expose UCP or DTR on a port other than 443.
Note: In addition to configuring your load balancer to distinguish between UCP and DTR, configuring a load balancer for DTR has additional requirements.

Configuration examples

Use the following examples to configure your load balancer for UCP.

NGINX (nginx.conf):

user  nginx;
worker_processes  1;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

stream {
    upstream ucp_443 {
        server <UCP_MANAGER_1_IP>:443 max_fails=2 fail_timeout=30s;
        server <UCP_MANAGER_2_IP>:443 max_fails=2 fail_timeout=30s;
        server <UCP_MANAGER_N_IP>:443 max_fails=2 fail_timeout=30s;
    }
    server {
        listen 443;
        proxy_pass ucp_443;
    }
}
HAProxy (haproxy.cfg):

global
    log /dev/log    local0
    log /dev/log    local1 notice

defaults
        mode    tcp
        option  dontlognull
        timeout connect     5s
        timeout client      50s
        timeout server      50s
        timeout tunnel      1h
        timeout client-fin  50s
### frontends
# Optional HAProxy Stats Page accessible at http://<host-ip>:8181/haproxy?stats
frontend ucp_stats
        mode http
        bind 0.0.0.0:8181
        default_backend ucp_stats
frontend ucp_443
        mode tcp
        bind 0.0.0.0:443
        default_backend ucp_upstream_servers_443
### backends
backend ucp_stats
        mode http
        option httplog
        stats enable
        stats admin if TRUE
        stats refresh 5m
backend ucp_upstream_servers_443
        mode tcp
        option httpchk GET /_ping HTTP/1.1\r\nHost:\ <UCP_FQDN>
        server node01 <UCP_MANAGER_1_IP>:443 weight 100 check check-ssl verify none
        server node02 <UCP_MANAGER_2_IP>:443 weight 100 check check-ssl verify none
        server node03 <UCP_MANAGER_N_IP>:443 weight 100 check check-ssl verify none
{
-    "Subnets": [
-        "subnet-XXXXXXXX",
-        "subnet-YYYYYYYY",
-        "subnet-ZZZZZZZZ"
-    ],
-    "CanonicalHostedZoneNameID": "XXXXXXXXXXX",
-    "CanonicalHostedZoneName": "XXXXXXXXX.us-west-XXX.elb.amazonaws.com",
-    "ListenerDescriptions": [
-        {
-            "Listener": {
-                "InstancePort": 443,
-                "LoadBalancerPort": 443,
-                "Protocol": "TCP",
-                "InstanceProtocol": "TCP"
-            },
-            "PolicyNames": []
-        }
-    ],
-    "HealthCheck": {
-        "HealthyThreshold": 2,
-        "Interval": 10,
-        "Target": "HTTPS:443/_ping",
-        "Timeout": 2,
-        "UnhealthyThreshold": 4
-    },
-    "VPCId": "vpc-XXXXXX",
-    "BackendServerDescriptions": [],
-    "Instances": [
-        {
-            "InstanceId": "i-XXXXXXXXX"
-        },
-        {
-            "InstanceId": "i-XXXXXXXXX"
-        },
-        {
-            "InstanceId": "i-XXXXXXXXX"
-        }
-    ],
-    "DNSName": "XXXXXXXXXXXX.us-west-2.elb.amazonaws.com",
-    "SecurityGroups": [
-        "sg-XXXXXXXXX"
-    ],
-    "Policies": {
-        "LBCookieStickinessPolicies": [],
-        "AppCookieStickinessPolicies": [],
-        "OtherPolicies": []
-    },
-    "LoadBalancerName": "ELB-UCP",
-    "CreatedTime": "2017-02-13T21:40:15.400Z",
-    "AvailabilityZones": [
-        "us-west-2c",
-        "us-west-2a",
-        "us-west-2b"
-    ],
-    "Scheme": "internet-facing",
-    "SourceSecurityGroup": {
-        "OwnerAlias": "XXXXXXXXXXXX",
-        "GroupName":  "XXXXXXXXXXXX"
-    }
-}
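The JSON above matches the output shape of the classic ELB describe call. A hedged sketch of how such output is retrieved (the load balancer name is illustrative, and the command is printed rather than executed here):

```shell
# Hypothetical load balancer name; prints the AWS CLI command you would run
# instead of calling AWS.
LB_NAME=ELB-UCP
echo "aws elb describe-load-balancers --load-balancer-names $LB_NAME"
```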
-
-
-

You can deploy your load balancer using NGINX or HAProxy:

-# Create the nginx.conf file, then
-# deploy the load balancer
-
-docker run --detach \
-  --name ucp-lb \
-  --restart=unless-stopped \
-  --publish 443:443 \
-  --volume ${PWD}/nginx.conf:/etc/nginx/nginx.conf:ro \
-  nginx:stable-alpine
-
-
-
-
-# Create the haproxy.cfg file, then
-# deploy the load balancer
-
-docker run --detach \
-  --name ucp-lb \
-  --publish 443:443 \
-  --publish 8181:8181 \
-  --restart=unless-stopped \
-  --volume ${PWD}/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro \
-  haproxy:1.7-alpine haproxy -d -f /usr/local/etc/haproxy/haproxy.cfg
-
-
-

Where to go next

From c63d0e786ecdf1bcc0d02e7a05c67bcb96d87d6e Mon Sep 17 00:00:00 2001
From: ddeyo
Date: Mon, 15 Oct 2018 13:54:09 -0700
Subject: [PATCH 11/27] updates

---
 .../enable-ldap-config-file.html              |  68 ----
 .../configure/_site/external-auth/index.html  | 351 ------------------
 .../configure/_site/join-nodes/index.html     |  59 ---
 .../join-linux-nodes-to-cluster.html          | 143 -------
 .../join-windows-nodes-to-cluster.html        | 235 ------------
 .../_site/join-nodes/use-a-load-balancer.html | 220 -----------
 ee/ucp/admin/configure/set-session-timeout.md |   2 +-
 7 files changed, 1 insertion(+), 1077 deletions(-)
 delete mode 100644 ee/ucp/admin/configure/_site/external-auth/enable-ldap-config-file.html
 delete mode 100644 ee/ucp/admin/configure/_site/external-auth/index.html
 delete mode 100644 ee/ucp/admin/configure/_site/join-nodes/index.html
 delete mode 100644 ee/ucp/admin/configure/_site/join-nodes/join-linux-nodes-to-cluster.html
 delete mode 100644 ee/ucp/admin/configure/_site/join-nodes/join-windows-nodes-to-cluster.html
 delete mode 100644 ee/ucp/admin/configure/_site/join-nodes/use-a-load-balancer.html

diff --git a/ee/ucp/admin/configure/_site/external-auth/enable-ldap-config-file.html b/ee/ucp/admin/configure/_site/external-auth/enable-ldap-config-file.html
deleted file mode 100644
index bf1cc07c30..0000000000
--- a/ee/ucp/admin/configure/_site/external-auth/enable-ldap-config-file.html
+++ /dev/null
@@ -1,68 +0,0 @@

Docker UCP integrates with LDAP directory services, so that you can manage -users and groups from your organization’s directory and automatically -propagate this information to UCP and DTR. You can set up your cluster’s LDAP -configuration by using the UCP web UI, or you can use a -UCP configuration file.


To see an example TOML config file that shows how to configure UCP settings, -run UCP with the example-config option. -Learn about UCP configuration files.

docker container run --rm <ucp-org>/<ucp-repo>:<ucp-version> example-config
-

Set up LDAP by using a configuration file

1.  Use the following command to extract the name of the currently active
    configuration from the ucp-agent service.

        $ CURRENT_CONFIG_NAME=$(docker service inspect --format '{{ range $config := .Spec.TaskTemplate.ContainerSpec.Configs }}{{ $config.ConfigName }}{{ "\n" }}{{ end }}' ucp-agent | grep 'com.docker.ucp.config-')

2.  Get the current configuration and save it to a TOML file.

        docker config inspect --format '{{ printf "%s" .Spec.Data }}' $CURRENT_CONFIG_NAME > config.toml

3.  Use the output of the example-config command as a guide to edit your
    config.toml file. Under the [auth] section, set backend = "ldap", and
    configure the [auth.ldap] section the way you want.

4.  Once you've finished editing your config.toml file, create a new Docker
    Config object by using the following command.

        NEW_CONFIG_NAME="com.docker.ucp.config-$(( $(cut -d '-' -f 2 <<< "$CURRENT_CONFIG_NAME") + 1 ))"
        docker config create $NEW_CONFIG_NAME config.toml

5.  Update the ucp-agent service to remove the reference to the old config
    and add a reference to the new config.

        docker service update --config-rm "$CURRENT_CONFIG_NAME" --config-add "source=${NEW_CONFIG_NAME},target=/etc/ucp/ucp.toml" ucp-agent

6.  Wait a few moments for the ucp-agent service tasks to update across
    your cluster. If you set jit_user_provisioning = true in the LDAP
    configuration, users matching any of your specified search queries will
    have their accounts created when they log in with their username and LDAP
    password.
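For reference, a minimal sketch of the resulting [auth] and [auth.ldap] sections. All values here are hypothetical; the key names follow the auth.ldap options documented for the UCP configuration file:

```toml
[auth]
  backend = "ldap"

[auth.ldap]
  # Hypothetical server and reader credentials.
  server_url = "ldaps://ldap.example.com"
  reader_dn = "cn=reader,dc=example,dc=com"
  reader_password = "examplePassword"
  start_tls = false
  tls_skip_verify = false
  no_simple_pagination = false
  jit_user_provisioning = true
  # CRON format with the seconds field set to zero; this runs hourly.
  sync_schedule = "0 0 * * * *"
```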

Where to go next

diff --git a/ee/ucp/admin/configure/_site/external-auth/index.html b/ee/ucp/admin/configure/_site/external-auth/index.html
deleted file mode 100644
index 1ea0bb307a..0000000000
--- a/ee/ucp/admin/configure/_site/external-auth/index.html
+++ /dev/null
@@ -1,351 +0,0 @@

Docker UCP integrates with LDAP directory services, so that you can manage -users and groups from your organization’s directory and it will automatically -propagate that information to UCP and DTR.

- -

If you enable LDAP, UCP uses a remote directory server to create users -automatically, and all logins are forwarded to the directory server.

- -

When you switch from built-in authentication to LDAP authentication, -all manually created users whose usernames don’t match any LDAP search results -are still available.

- -

When you enable LDAP authentication, you can choose whether UCP creates user -accounts only when users log in for the first time. Select the -Just-In-Time User Provisioning option to ensure that the only LDAP -accounts that exist in UCP are those that have had a user log in to UCP.

- -

How UCP integrates with LDAP

- -

You control how UCP integrates with LDAP by creating searches for users. -You can specify multiple search configurations, and you can specify multiple -LDAP servers to integrate with. Searches start with the Base DN, which is -the distinguished name of the node in the LDAP directory tree where the -search starts looking for users.

- -

Access LDAP settings by navigating to the Authentication & Authorization -page in the UCP web UI. There are two sections for controlling LDAP searches -and servers.

  • LDAP user search configurations: This is the section of the
    Authentication & Authorization page where you specify search
    parameters, like Base DN, scope, filter, the username attribute,
    and the full name attribute. These searches are stored in a list, and
    the ordering may be important, depending on your search configuration.
  • LDAP server: This is the section where you specify the URL of an LDAP
    server, TLS configuration, and credentials for doing the search requests.
    Also, you provide a domain for all servers but the first one. The first
    server is considered the default domain server. Any others are associated
    with the domain that you specify in the page.

Here’s what happens when UCP synchronizes with LDAP:

  1. UCP creates a set of search results by iterating over each of the user
     search configs, in the order that you specify.
  2. UCP chooses an LDAP server from the list of domain servers by considering
     the Base DN from the user search config and selecting the domain server
     that has the longest domain suffix match.
  3. If no domain server has a domain suffix that matches the Base DN from the
     search config, UCP uses the default domain server.
  4. UCP combines the search results into a list of users and creates UCP
     accounts for them. If the Just-In-Time User Provisioning option is set,
     user accounts are created only when users first log in.

The domain server to use is determined by the Base DN in each search config. -UCP doesn’t perform search requests against each of the domain servers, only -the one which has the longest matching domain suffix, or the default if there’s -no match.

- -

Here’s an example. Let’s say we have three LDAP domain servers:

| Domain                               | Server URL                   |
|:-------------------------------------|:-----------------------------|
| default                              | ldaps://ldap.example.com     |
| dc=subsidiary1,dc=com                | ldaps://ldap.subsidiary1.com |
| dc=subsidiary2,dc=subsidiary1,dc=com | ldaps://ldap.subsidiary2.com |

Here are three user search configs with the following Base DNs:

  • baseDN=ou=people,dc=subsidiary1,dc=com

    For this search config, dc=subsidiary1,dc=com is the only server with a
    domain which is a suffix, so UCP uses the server
    ldaps://ldap.subsidiary1.com for the search request.

  • baseDN=ou=product,dc=subsidiary2,dc=subsidiary1,dc=com

    For this search config, two of the domain servers have a domain which is a
    suffix of this base DN, but dc=subsidiary2,dc=subsidiary1,dc=com is the
    longer of the two, so UCP uses the server ldaps://ldap.subsidiary2.com
    for the search request.

  • baseDN=ou=eng,dc=example,dc=com

    For this search config, there is no server with a domain specified which is
    a suffix of this base DN, so UCP uses the default server,
    ldaps://ldap.example.com, for the search request.
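The longest-suffix selection rule above can be sketched in a few lines of shell. This is an illustration only, not UCP's implementation; the domains and URLs are the ones from the example tables:

```shell
# Pick the server whose configured domain is the longest suffix of the
# Base DN; fall back to the default domain server when nothing matches.
pick_server() {
  base_dn=$1
  best_url="ldaps://ldap.example.com"   # default domain server
  best_len=0
  while IFS='|' read -r domain url; do
    case $base_dn in
      *"$domain")
        # This domain is a suffix of the Base DN; keep the longest match.
        if [ "${#domain}" -gt "$best_len" ]; then
          best_url=$url
          best_len=${#domain}
        fi ;;
    esac
  done <<EOF
dc=subsidiary1,dc=com|ldaps://ldap.subsidiary1.com
dc=subsidiary2,dc=subsidiary1,dc=com|ldaps://ldap.subsidiary2.com
EOF
  echo "$best_url"
}

pick_server "ou=people,dc=subsidiary1,dc=com"   # ldaps://ldap.subsidiary1.com
```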

If there are username collisions for the search results between domains, UCP -uses only the first search result, so the ordering of the user search configs -may be important. For example, if both the first and third user search configs -result in a record with the username jane.doe, the first has higher -precedence and the second is ignored. For this reason, it’s important to choose -a username attribute that’s unique for your users across all domains.

- -

Because names may collide, it’s a good idea to use something unique to the -subsidiary, like the email address for each person. Users can log in with the -email address, for example, jane.doe@subsidiary1.com.

- -

Configure the LDAP integration

- -

To configure UCP to create and authenticate users by using an LDAP directory, -go to the UCP web UI, navigate to the Admin Settings page and click -Authentication & Authorization to select the method used to create and -authenticate users.

- -

- -

In the LDAP Enabled section, click Yes. The LDAP settings appear. Now configure your LDAP directory integration.

- -

Default role for all private collections

- -

Use this setting to change the default permissions of new users.

- -

Click the dropdown to select the permission level that UCP assigns by default -to the private collections of new users. For example, if you change the value -to View Only, all users who log in for the first time after the setting is -changed have View Only access to their private collections, but permissions -remain unchanged for all existing users. -Learn more about permission levels.

- -

LDAP enabled

- -

Click Yes to enable integrating UCP users and teams with LDAP servers.

- -

LDAP server

| Field | Description |
|:------|:------------|
| LDAP server URL | The URL where the LDAP server can be reached. |
| Reader DN | The distinguished name of the LDAP account used for searching entries in the LDAP server. As a best practice, this should be an LDAP read-only user. |
| Reader password | The password of the account used for searching entries in the LDAP server. |
| Use Start TLS | Whether to authenticate/encrypt the connection after connecting to the LDAP server over TCP. If you set the LDAP Server URL field with ldaps://, this field is ignored. |
| Skip TLS verification | Whether to verify the LDAP server certificate when using TLS. The connection is still encrypted but vulnerable to man-in-the-middle attacks. |
| No simple pagination | Select this option if your LDAP server doesn't support pagination. |
| Just-In-Time User Provisioning | Whether to create user accounts only when users log in for the first time. The default value of true is recommended. If you upgraded from UCP 2.0.x, the default is false. |

- -

Click Confirm to add your LDAP domain.

- -

To integrate with more LDAP servers, click Add LDAP Domain.

- -

LDAP user search configurations

| Field | Description |
|:------|:------------|
| Base DN | The distinguished name of the node in the directory tree where the search should start looking for users. |
| Username attribute | The LDAP attribute to use as username on UCP. Only user entries with a valid username will be created. A valid username is no longer than 100 characters and does not contain any unprintable characters, whitespace characters, or any of the following characters: `/ \ [ ] : ; \| = , + * ? < > ' "`. |
| Full name attribute | The LDAP attribute to use as the user's full name for display purposes. If left empty, UCP will not create new users with a full name value. |
| Filter | The LDAP search filter used to find users. If you leave this field empty, all directory entries in the search scope with valid username attributes are created as users. |
| Search subtree instead of just one level | Whether to perform the LDAP search on a single level of the LDAP tree, or search through the full LDAP tree starting at the Base DN. |
| Match Group Members | Whether to further filter users by selecting those who are also members of a specific group on the directory server. This feature is helpful if the LDAP server does not support memberOf search filters. |
| Iterate through group members | If Select Group Members is selected, this option searches for users by first iterating over the target group's membership, making a separate LDAP query for each member, as opposed to first querying for all users which match the above search query and intersecting those with the set of group members. This option can be more efficient in situations where the number of members of the target group is significantly smaller than the number of users which would match the above search filter, or if your directory server does not support simple pagination of search results. |
| Group DN | If Select Group Members is selected, this specifies the distinguished name of the group from which to select users. |
| Group Member Attribute | If Select Group Members is selected, the value of this group attribute corresponds to the distinguished names of the members of the group. |

- -

To configure more user search queries, click Add LDAP User Search Configuration -again. This is useful in cases where users may be found in multiple distinct -subtrees of your organization’s directory. Any user entry which matches at -least one of the search configurations will be synced as a user.

- -

LDAP test login

| Field | Description |
|:------|:------------|
| Username | An LDAP username for testing authentication to this application. This value corresponds with the Username Attribute specified in the LDAP user search configurations section. |
| Password | The user's password used to authenticate (BIND) to the directory server. |

Before you save the configuration changes, you should test that the integration -is correctly configured. You can do this by providing the credentials of an -LDAP user, and clicking the Test button.

- -
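Outside the UI, you can sanity-check the same bind-and-search manually with the OpenLDAP tools. All values here are hypothetical, and the command is printed rather than executed:

```shell
# Hypothetical server, reader DN, and Base DN; binds as the reader account
# and searches for the test user's entry.
SERVER=ldaps://ldap.example.com
READER_DN='cn=reader,dc=example,dc=com'
BASE_DN='ou=people,dc=example,dc=com'
echo "ldapsearch -H $SERVER -D $READER_DN -w <password> -b $BASE_DN '(uid=jane.doe)'"
```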

LDAP sync configuration

| Field | Description |
|:------|:------------|
| Sync interval | The interval, in hours, to synchronize users between UCP and the LDAP server. When the synchronization job runs, new users found in the LDAP server are created in UCP with the default permission level. UCP users that don't exist in the LDAP server become inactive. |
| Enable sync of admin users | This option specifies that system admins should be synced directly with members of a group in your organization's LDAP directory. The admins will be synced to match the membership of the group. The configured recovery admin user will also remain a system admin. |

Once you’ve configured the LDAP integration, UCP synchronizes users based on -the interval you’ve defined starting at the top of the hour. When the -synchronization runs, UCP stores logs that can help you troubleshoot when -something goes wrong.

- -

You can also manually synchronize users by clicking Sync Now.

- -

Revoke user access

- -

When a user is removed from LDAP, the effect on the user’s UCP account depends -on the Just-In-Time User Provisioning setting:

  • Just-In-Time User Provisioning is false: Users deleted from LDAP become
    inactive in UCP after the next LDAP synchronization runs.
  • Just-In-Time User Provisioning is true: Users deleted from LDAP can't
    authenticate, but their UCP accounts remain active. This means that they
    can use their client bundles to run commands. To prevent this, deactivate
    their UCP user accounts.

Data synced from your organization’s LDAP directory

- -

UCP saves a minimum amount of user data required to operate. This includes -the value of the username and full name attributes that you have specified in -the configuration as well as the distinguished name of each synced user. -UCP does not store any additional data from the directory server.

- -

Sync teams

- -

UCP enables syncing teams with a search query or group in your organization’s -LDAP directory. -Sync team members with your organization’s LDAP directory.

- -

Where to go next

diff --git a/ee/ucp/admin/configure/_site/join-nodes/index.html b/ee/ucp/admin/configure/_site/join-nodes/index.html
deleted file mode 100644
index c7e5e7ae54..0000000000
--- a/ee/ucp/admin/configure/_site/join-nodes/index.html
+++ /dev/null
@@ -1,59 +0,0 @@

Docker Universal Control Plane is designed for high availability (HA). You can -join multiple manager nodes to the cluster, so that if one manager node fails, -another can automatically take its place without impact to the cluster.

- -

Having multiple manager nodes in your cluster allows you to:

  • Handle manager node failures.
  • Load-balance user requests across all manager nodes.

Size your deployment

- -

To make the cluster tolerant to more failures, add additional manager nodes to -your cluster.

| Manager nodes | Failures tolerated |
|:--------------|:-------------------|
| 1             | 0                  |
| 3             | 1                  |
| 5             | 2                  |
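These tolerances follow the swarm Raft quorum rule: with N managers, a majority must stay reachable, so floor((N - 1) / 2) failures are tolerated. A quick arithmetic sketch:

```shell
# Raft fault tolerance: a majority (quorum) of managers must stay up,
# so a cluster of n managers tolerates (n - 1) / 2 failures (integer math).
for n in 1 3 5 7; do
  echo "$n managers tolerate $(( (n - 1) / 2 )) failure(s)"
done
```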

For production-grade deployments, follow these rules of thumb:

  • When a manager node fails, the number of failures tolerated by your cluster
    decreases. Don't leave that node offline for too long.
  • You should distribute your manager nodes across different availability
    zones. This way your cluster can continue working even if an entire
    availability zone goes down.
  • Adding many manager nodes to the cluster might lead to performance
    degradation, as changes to configurations need to be replicated across all
    manager nodes. The maximum advisable is seven manager nodes.

Where to go next

diff --git a/ee/ucp/admin/configure/_site/join-nodes/join-linux-nodes-to-cluster.html b/ee/ucp/admin/configure/_site/join-nodes/join-linux-nodes-to-cluster.html
deleted file mode 100644
index d0836ae5c0..0000000000
--- a/ee/ucp/admin/configure/_site/join-nodes/join-linux-nodes-to-cluster.html
+++ /dev/null
@@ -1,143 +0,0 @@

Docker EE is designed for scaling horizontally as your applications grow in -size and usage. You can add or remove nodes from the cluster to scale it -to your needs. You can join Windows Server 2016, IBM z System, and Linux nodes -to the cluster.

- -

Because Docker EE leverages the clustering functionality provided by Docker -Engine, you use the docker swarm join -command to add more nodes to your cluster. When you join a new node, Docker EE -services start running on the node automatically.

- -

Node roles

- -

When you join a node to a cluster, you specify its role: manager or worker.

  • Manager: Manager nodes are responsible for cluster management
    functionality and dispatching tasks to worker nodes. Having multiple
    manager nodes allows your swarm to be highly available and tolerant of
    node failures.

    Manager nodes also run all Docker EE components in a replicated way, so
    by adding additional manager nodes, you're also making the cluster highly
    available. Learn more about the Docker EE architecture.

  • Worker: Worker nodes receive and execute your services and applications.
    Having multiple worker nodes allows you to scale the computing capacity of
    your cluster.

    When deploying Docker Trusted Registry in your cluster, you deploy it to a
    worker node.

Join a node to the cluster

- -

You can join Windows Server 2016, IBM z System, and Linux nodes to the cluster, -but only Linux nodes can be managers.

- -

To join nodes to the cluster, go to the Docker EE web UI and navigate to the -Nodes page.

  1. Click Add Node to add a new node.
  2. Select the type of node to add, Windows or Linux.
  3. Click Manager if you want to add the node as a manager.
  4. Check the Use a custom listen address option to specify the address
     and port where the new node listens for inbound cluster management
     traffic.
  5. Check the Use a custom advertise address option to specify the
     IP address that's advertised to all members of the cluster for API access.

- -

Copy the displayed command, use SSH to log in to the host that you want to -join to the cluster, and run the docker swarm join command on the host.

- -

To add a Windows node, click Windows and follow the instructions in -Join Windows worker nodes to a cluster.

- -

After you run the join command in the node, the node is displayed on the -Nodes page in the Docker EE web UI. From there, you can change the node’s -cluster configuration, including its assigned orchestrator type. -Learn how to change the orchestrator for a node.

- -

Pause or drain a node

- -

Once a node is part of the cluster, you can configure the node’s availability -so that it is:

  • Active: the node can receive and execute tasks.
  • Paused: the node continues running existing tasks, but doesn't receive
    new tasks.
  • Drained: the node won't receive new tasks. Existing tasks are stopped and
    replica tasks are launched in active nodes.

Pause or drain a node from the Edit Node page:

  1. In the Docker EE web UI, browse to the Nodes page and select the node.
  2. In the details pane, click Configure and select Details to open
     the Edit Node page.
  3. In the Availability section, click Active, Pause, or Drain.
  4. Click Save to change the availability of the node.
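The same three availability states can also be set with `docker node update`. The node name below is hypothetical, and the sketch prints each command instead of calling a Docker daemon:

```shell
# Hypothetical node name; one command per availability state described above.
NODE=node02
for state in active pause drain; do
  echo "docker node update --availability $state $NODE"
done
```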

- -

Promote or demote a node

- -

You can promote worker nodes to managers to make UCP fault tolerant. You can -also demote a manager node into a worker.

- -

To promote or demote a manager node:

  1. Navigate to the Nodes page, and click the node that you want to promote
     or demote.
  2. In the details pane, click Configure and select Details to open
     the Edit Node page.
  3. In the Role section, click Manager or Worker.
  4. Click Save and wait until the operation completes.
  5. Navigate to the Nodes page, and confirm that the node role has changed.

If you’re load-balancing user requests to Docker EE across multiple manager -nodes, don’t forget to remove these nodes from your load-balancing pool when -you demote them to workers.

- -
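The same role change is available from the CLI via `docker node promote` and `docker node demote`. The node name is hypothetical, and the sketch prints the commands instead of calling a Docker daemon:

```shell
# Hypothetical node name; CLI equivalents of the Role setting above.
NODE=node02
echo "docker node promote $NODE"   # worker -> manager
echo "docker node demote $NODE"    # manager -> worker
```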

Remove a node from the cluster

- -

You can remove worker nodes from the cluster at any time:

  1. Navigate to the Nodes page and select the node.
  2. In the details pane, click Actions and select Remove.
  3. Click Confirm when you're prompted.

Since manager nodes are important to the cluster overall health, you need to -be careful when removing one from the cluster.

- -

To remove a manager node:

  1. Make sure all nodes in the cluster are healthy. Don't remove manager
     nodes if that's not the case.
  2. Demote the manager node into a worker.
  3. Now you can remove that node from the cluster.

Use the CLI to manage your nodes

- -

You can use the Docker CLI client to manage your nodes from the CLI. To do -this, configure your Docker CLI client with a UCP client bundle.

- -

Once you do that, you can start managing your UCP nodes:

- -
docker node ls
-
diff --git a/ee/ucp/admin/configure/_site/join-nodes/join-windows-nodes-to-cluster.html b/ee/ucp/admin/configure/_site/join-nodes/join-windows-nodes-to-cluster.html
deleted file mode 100644
index 07939769d0..0000000000
--- a/ee/ucp/admin/configure/_site/join-nodes/join-windows-nodes-to-cluster.html
+++ /dev/null
@@ -1,235 +0,0 @@

Docker Enterprise Edition supports worker nodes that run on Windows Server 2016 or 1709. -Only worker nodes are supported on Windows, and all manager nodes in the cluster -must run on Linux.

- -

Follow these steps to enable a worker node on Windows.

  1. Install Docker EE Engine on Windows Server 2016.
  2. Configure the Windows node.
  3. Join the Windows node to the cluster.

Install Docker EE Engine on Windows Server 2016 or 1709

- -

Install Docker EE Engine -on a Windows Server 2016 or 1709 instance to enable joining a cluster that’s managed by -Docker Enterprise Edition.

- -

Configure the Windows node

- -

Follow these steps to configure the docker daemon and the Windows environment.

  1. Add a label to the node.
  2. Pull the Windows-specific image of ucp-agent, which is named
     ucp-agent-win.
  3. Run the Windows worker setup script provided with ucp-agent-win.
  4. Join the cluster with the token provided by the Docker EE web UI or CLI.

Add a label to the node

- -

Configure the Docker Engine running on the node to have a label. This makes -it easier to deploy applications on nodes with this label.

- -

Create the file C:\ProgramData\docker\config\daemon.json with the following -content:

- -
{
-  "labels": ["os=windows"]
-}
-
- -

Restart Docker for the changes to take effect:

- -
Restart-Service docker
-
- -

Pull the Windows-specific images

- -

On a manager node, run the following command to list the images that are required -on Windows nodes.

- -
docker container run --rm <ucp-org>/<ucp-repo>:<ucp-version> images --list --enable-windows
-<ucp-org>/ucp-agent-win:<ucp-version>
-<ucp-org>/ucp-dsinfo-win:<ucp-version>
-
- -

On Windows Server 2016, in a PowerShell terminal running as Administrator, -log in to Docker Hub with the docker login command and pull the listed images.

- -
docker image pull <ucp-org>/ucp-agent-win:<ucp-version>
-docker image pull <ucp-org>/ucp-dsinfo-win:<ucp-version>
-
- -

Run the Windows node setup script

- -

You need to open ports 2376 and 12376, and create certificates -for the Docker daemon to communicate securely. Use this command to run -the Windows node setup script:

- -
$script = [ScriptBlock]::Create((docker run --rm <ucp-org>/ucp-agent-win:<ucp-version> windows-script | Out-String))
-
-Invoke-Command $script
-
- -
-

Docker daemon restart

- -

When you run windows-script, the Docker service is unavailable temporarily.

-
- -

The Windows node is ready to join the cluster. Run the setup script on each -instance of Windows Server that will be a worker node.

- -

Compatibility with daemon.json

- -

The script may be incompatible with installations that use a config file at -C:\ProgramData\docker\config\daemon.json. If you use such a file, make sure -that the daemon runs on port 2376 and that it uses certificates located in -C:\ProgramData\docker\daemoncerts. If certificates don’t exist in this -directory, run ucp-agent-win generate-certs, as shown in Step 2 of the -procedure in Set up certs for the dockerd service.

- -

In the daemon.json file, set the tlscacert, tlscert, and tlskey options -to the corresponding files in C:\ProgramData\docker\daemoncerts:

- -
{
...
    "debug":     true,
    "tls":       true,
    "tlscacert": "C:\\ProgramData\\docker\\daemoncerts\\ca.pem",
    "tlscert":   "C:\\ProgramData\\docker\\daemoncerts\\cert.pem",
    "tlskey":    "C:\\ProgramData\\docker\\daemoncerts\\key.pem",
    "tlsverify": true,
...
}
- -

Join the Windows node to the cluster

- -

Now you can join the cluster by using the docker swarm join command that’s -provided by the Docker EE web UI and CLI.

  1. Log in to the Docker EE web UI with an administrator account.
  2. Navigate to the Nodes page.
  3. Click Add Node to add a new node.
  4. In the Node Type section, click Windows.
  5. In the Step 2 section, click the checkbox for
     "I'm ready to join my windows node."
  6. Check the Use a custom listen address option to specify the address
     and port where the new node listens for inbound cluster management
     traffic.
  7. Check the Use a custom advertise address option to specify the
     IP address that's advertised to all members of the cluster for API access.

Copy the displayed command. It looks similar to the following:

- -
docker swarm join --token <token> <ucp-manager-ip>
-
- -

You can also use the command line to get the join token. Using your -UCP client bundle, run:

- -
docker swarm join-token worker
-
- -

Run the docker swarm join command on each instance of Windows Server that -will be a worker node.

- -

Configure a Windows worker node manually

- -

The following sections describe how to run the commands in the setup script -manually to configure the dockerd service and the Windows environment. -The script opens ports in the firewall and sets up certificates for dockerd.

- -

To see the script, you can run the windows-script command without piping -to the Invoke-Expression cmdlet.

- -
docker container run --rm <ucp-org>/ucp-agent-win:<ucp-version> windows-script
-
- -

Open ports in the Windows firewall

- -

Docker EE requires that ports 2376 and 12376 are open for inbound TCP traffic.

- -

In a PowerShell terminal running as Administrator, run these commands -to add rules to the Windows firewall.

- -
netsh advfirewall firewall add rule name="docker_local" dir=in action=allow protocol=TCP localport=2376
-netsh advfirewall firewall add rule name="docker_proxy" dir=in action=allow protocol=TCP localport=12376
-
- -

Set up certs for the dockerd service


1.  Create the directory C:\ProgramData\docker\daemoncerts.

2.  In a PowerShell terminal running as Administrator, run the following
    command to generate certificates.

        docker container run --rm -v C:\ProgramData\docker\daemoncerts:C:\certs <ucp-org>/ucp-agent-win:<ucp-version> generate-certs

3.  To set up certificates, run the following commands to stop and unregister
    the dockerd service, register the service with the certificates, and
    restart the service.

        Stop-Service docker
        dockerd --unregister-service
        dockerd -H npipe:// -H 0.0.0.0:2376 --tlsverify --tlscacert=C:\ProgramData\docker\daemoncerts\ca.pem --tlscert=C:\ProgramData\docker\daemoncerts\cert.pem --tlskey=C:\ProgramData\docker\daemoncerts\key.pem --register-service
        Start-Service docker

The dockerd service and the Windows environment are now configured to join a Docker EE cluster.

- -
-

TLS certificate setup

- -

If the TLS certificates aren’t set up correctly, the Docker EE web UI shows the -following warning.

- -
Node WIN-NOOQV2PJGTE is a Windows node that cannot connect to its local Docker daemon.
-
-
- -

Windows nodes limitations

- -

Some features are not yet supported on Windows nodes:

  • Networking
    • The cluster mode routing mesh can't be used on Windows nodes. You can
      expose a port for your service in the host where it is running, and use
      the HTTP routing mesh to make your service accessible using a domain
      name.
    • Encrypted networks are not supported. If you've upgraded from a previous
      version, you'll also need to recreate the ucp-hrm network to make it
      unencrypted.
  • Secrets
    • When using secrets with Windows services, Windows stores temporary
      secret files on disk. You can use BitLocker on the volume containing the
      Docker root directory to encrypt the secret data at rest.
    • When creating a service which uses Windows containers, the options to
      specify UID, GID, and mode are not supported for secrets. Secrets are
      currently only accessible by administrators and users with system access
      within the container.
  • Mounts
    • On Windows, Docker can't listen on a Unix socket. Use TCP or a named
      pipe instead.
diff --git a/ee/ucp/admin/configure/_site/join-nodes/use-a-load-balancer.html b/ee/ucp/admin/configure/_site/join-nodes/use-a-load-balancer.html deleted file mode 100644 index f1161c5a8d..0000000000 --- a/ee/ucp/admin/configure/_site/join-nodes/use-a-load-balancer.html +++ /dev/null @@ -1,220 +0,0 @@ -

Once you’ve joined multiple manager nodes for high availability, you can configure your own load balancer to balance user requests across all manager nodes.


This allows users to access UCP using a centralized domain name. If a manager node goes down, the load balancer can detect that and stop forwarding requests to that node, so that the failure goes unnoticed by users.


Load-balancing on UCP


Since Docker UCP uses mutual TLS, make sure you configure your load balancer to:

  • Load-balance TCP traffic on ports 443 and 6443.
  • Not terminate HTTPS connections.
  • Use the /_ping endpoint on each manager node to check whether the node is healthy and whether it should remain in the load-balancing pool.
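The /_ping probe the load balancer relies on can be sketched as follows. This is an illustrative Python sketch, not part of UCP: it stands up a stub "manager" locally so the probe logic is self-contained, whereas a real load balancer would target https://<UCP_MANAGER_IP>/_ping over TLS.

```python
# Sketch of the load balancer's health check: GET /_ping should return
# HTTP 200 when the manager node is healthy. The stub server below only
# exists to make the example runnable.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class StubManager(BaseHTTPRequestHandler):
    def do_GET(self):
        # A healthy manager answers /_ping with 200.
        status = 200 if self.path == "/_ping" else 404
        self.send_response(status)
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the example quiet

def is_healthy(base_url):
    """Return True if the node answers /_ping with HTTP 200, else False."""
    try:
        with urllib.request.urlopen(base_url + "/_ping", timeout=2) as resp:
            return resp.status == 200
    except OSError:
        return False  # connection refused, timeout, or non-2xx status

server = HTTPServer(("127.0.0.1", 0), StubManager)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = "http://127.0.0.1:%d" % server.server_address[1]
print(is_healthy(base))   # True: keep this node in the pool
server.shutdown()
```

A node for which `is_healthy` returns False should be removed from the pool until the check succeeds again.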

Load balancing UCP and DTR


By default, both UCP and DTR use port 443. If you plan on deploying UCP and DTR, your load balancer needs to distinguish traffic between the two by IP address or port number.

  • If you want to configure your load balancer to listen on port 443:
    • Use one load balancer for UCP, and another for DTR.
    • Use the same load balancer with multiple virtual IPs.
  • Configure your load balancer to expose UCP or DTR on a port other than 443.
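For example, the alternate-port option can be sketched with an NGINX stream configuration that keeps UCP on 443 and exposes DTR on another port. This fragment is illustrative only: the upstream names, the <DTR_REPLICA_1_IP> placeholder, and port 4443 are assumptions, not required values.

```nginx
stream {
    upstream ucp_443 {
        server <UCP_MANAGER_1_IP>:443 max_fails=2 fail_timeout=30s;
    }
    upstream dtr_443 {
        server <DTR_REPLICA_1_IP>:443 max_fails=2 fail_timeout=30s;
    }
    server {
        listen 443;            # UCP keeps the default port
        proxy_pass ucp_443;
    }
    server {
        listen 4443;           # DTR exposed on a different port
        proxy_pass dtr_443;
    }
}
```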

Additional requirements


In addition to distinguishing between UCP and DTR traffic, a load balancer for DTR has further requirements of its own.


Configuration examples


Use the following examples to configure your load balancer for UCP.

NGINX configuration:

user  nginx;
worker_processes  1;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

stream {
    upstream ucp_443 {
        server <UCP_MANAGER_1_IP>:443 max_fails=2 fail_timeout=30s;
        server <UCP_MANAGER_2_IP>:443 max_fails=2 fail_timeout=30s;
        server <UCP_MANAGER_N_IP>:443 max_fails=2 fail_timeout=30s;
    }
    server {
        listen 443;
        proxy_pass ucp_443;
    }
}
HAProxy configuration:

global
    log /dev/log    local0
    log /dev/log    local1 notice

defaults
        mode    tcp
        option  dontlognull
        timeout connect     5s
        timeout client      50s
        timeout server      50s
        timeout tunnel      1h
        timeout client-fin  50s
### frontends
# Optional HAProxy Stats Page accessible at http://<host-ip>:8181/haproxy?stats
frontend ucp_stats
        mode http
        bind 0.0.0.0:8181
        default_backend ucp_stats
frontend ucp_443
        mode tcp
        bind 0.0.0.0:443
        default_backend ucp_upstream_servers_443
### backends
backend ucp_stats
        mode http
        option httplog
        stats enable
        stats admin if TRUE
        stats refresh 5m
backend ucp_upstream_servers_443
        mode tcp
        option httpchk GET /_ping HTTP/1.1\r\nHost:\ <UCP_FQDN>
        server node01 <UCP_MANAGER_1_IP>:443 weight 100 check check-ssl verify none
        server node02 <UCP_MANAGER_2_IP>:443 weight 100 check check-ssl verify none
        server node03 <UCP_MANAGER_N_IP>:443 weight 100 check check-ssl verify none
AWS Elastic Load Balancer description:

{
    "Subnets": [
        "subnet-XXXXXXXX",
        "subnet-YYYYYYYY",
        "subnet-ZZZZZZZZ"
    ],
    "CanonicalHostedZoneNameID": "XXXXXXXXXXX",
    "CanonicalHostedZoneName": "XXXXXXXXX.us-west-XXX.elb.amazonaws.com",
    "ListenerDescriptions": [
        {
            "Listener": {
                "InstancePort": 443,
                "LoadBalancerPort": 443,
                "Protocol": "TCP",
                "InstanceProtocol": "TCP"
            },
            "PolicyNames": []
        }
    ],
    "HealthCheck": {
        "HealthyThreshold": 2,
        "Interval": 10,
        "Target": "HTTPS:443/_ping",
        "Timeout": 2,
        "UnhealthyThreshold": 4
    },
    "VPCId": "vpc-XXXXXX",
    "BackendServerDescriptions": [],
    "Instances": [
        {
            "InstanceId": "i-XXXXXXXXX"
        },
        {
            "InstanceId": "i-XXXXXXXXX"
        },
        {
            "InstanceId": "i-XXXXXXXXX"
        }
    ],
    "DNSName": "XXXXXXXXXXXX.us-west-2.elb.amazonaws.com",
    "SecurityGroups": [
        "sg-XXXXXXXXX"
    ],
    "Policies": {
        "LBCookieStickinessPolicies": [],
        "AppCookieStickinessPolicies": [],
        "OtherPolicies": []
    },
    "LoadBalancerName": "ELB-UCP",
    "CreatedTime": "2017-02-13T21:40:15.400Z",
    "AvailabilityZones": [
        "us-west-2c",
        "us-west-2a",
        "us-west-2b"
    ],
    "Scheme": "internet-facing",
    "SourceSecurityGroup": {
        "OwnerAlias": "XXXXXXXXXXXX",
        "GroupName":  "XXXXXXXXXXXX"
    }
}
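When reviewing such an ELB description (for example, the output of `aws elb describe-load-balancers`), the two settings that matter most for UCP are the TCP listener on port 443 and the HTTPS health check against /_ping. The following Python sketch is illustrative only — the helper name and sample dict are assumptions for the example, not an official tool:

```python
# Sanity-check a classic ELB description for the settings UCP needs:
# a TCP listener on port 443 and an HTTPS health check against /_ping.
def check_elb_for_ucp(desc):
    issues = []
    listeners = [d["Listener"] for d in desc.get("ListenerDescriptions", [])]
    if not any(l.get("Protocol") == "TCP" and l.get("LoadBalancerPort") == 443
               for l in listeners):
        issues.append("no TCP listener on port 443")
    target = desc.get("HealthCheck", {}).get("Target", "")
    if target != "HTTPS:443/_ping":
        issues.append("health check target is %r, expected 'HTTPS:443/_ping'"
                      % target)
    return issues

# Trimmed-down sample matching the description above.
sample = {
    "ListenerDescriptions": [
        {"Listener": {"Protocol": "TCP", "LoadBalancerPort": 443,
                      "InstanceProtocol": "TCP", "InstancePort": 443}}
    ],
    "HealthCheck": {"Target": "HTTPS:443/_ping"},
}
print(check_elb_for_ucp(sample))   # []  (no issues)
```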

You can deploy your load balancer using:

# Create the nginx.conf file, then
# deploy the load balancer

docker run --detach \
  --name ucp-lb \
  --restart=unless-stopped \
  --publish 443:443 \
  --volume ${PWD}/nginx.conf:/etc/nginx/nginx.conf:ro \
  nginx:stable-alpine
# Create the haproxy.cfg file, then
# deploy the load balancer

docker run --detach \
  --name ucp-lb \
  --publish 443:443 \
  --publish 8181:8181 \
  --restart=unless-stopped \
  --volume ${PWD}/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro \
  haproxy:1.7-alpine haproxy -d -f /usr/local/etc/haproxy/haproxy.cfg

Where to go next

- - diff --git a/ee/ucp/admin/configure/set-session-timeout.md b/ee/ucp/admin/configure/set-session-timeout.md index 366126ef16..c2cc0594f3 100644 --- a/ee/ucp/admin/configure/set-session-timeout.md +++ b/ee/ucp/admin/configure/set-session-timeout.md @@ -16,6 +16,6 @@ To configure UCP login sessions, go to the UCP web UI, navigate to the | Field | Description | | :---------------------- | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Lifetime Minutes | The initial lifetime of a login session, starting from the time UCP generates the session. When this time expires, UCP invalidates the session. To establish a new session,the u ser must authenticate again. The default is 60 minutes. The minimum is 10 minutes. | +| Lifetime Minutes | The initial lifetime of a login session, starting from the time UCP generates the session. When this time expires, UCP invalidates the session. To establish a new session, the user must authenticate again. The default is 60 minutes with a minimum of 10 minutes. | | Renewal Threshold Minutes | The time by which UCP extends an active session before session expiration. UCP extends the session by the number of minutes specified in **Lifetime Minutes**. The threshold value can't be greater than **Lifetime Minutes**. The default is 20 minutes. To specify that sessions are never extended, set the threshold value to zero. This may cause users to be logged out unexpectedly while using the UCP web interface. The maximum threshold is 5 minutes less than **Lifetime Minutes**. 
| | Per User Limit | The maximum number of simultaneous logins for a user. If creating a new session exceeds this limit, UCP deletes the least recently used session. To disable this limit, set the value to zero. The default limit is 10 sessions. | From ae1bcefd0a84d08142ea5d2a8c56169c7dd62490 Mon Sep 17 00:00:00 2001 From: ddeyo Date: Mon, 15 Oct 2018 14:11:08 -0700 Subject: [PATCH 12/27] review feedback incorporated --- ee/ucp/admin/configure/set-session-timeout.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/ee/ucp/admin/configure/set-session-timeout.md b/ee/ucp/admin/configure/set-session-timeout.md index c2cc0594f3..c4e8175ee2 100644 --- a/ee/ucp/admin/configure/set-session-timeout.md +++ b/ee/ucp/admin/configure/set-session-timeout.md @@ -16,6 +16,6 @@ To configure UCP login sessions, go to the UCP web UI, navigate to the | Field | Description | | :---------------------- | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Lifetime Minutes | The initial lifetime of a login session, starting from the time UCP generates the session. When this time expires, UCP invalidates the session. To establish a new session, the user must authenticate again. The default is 60 minutes with a minimum of 10 minutes. | -| Renewal Threshold Minutes | The time by which UCP extends an active session before session expiration. UCP extends the session by the number of minutes specified in **Lifetime Minutes**. The threshold value can't be greater than **Lifetime Minutes**. The default is 20 minutes. 
To specify that sessions are never extended, set the threshold value to zero. This may cause users to be logged out unexpectedly while using the UCP web interface. The maximum threshold is 5 minutes less than **Lifetime Minutes**. | -| Per User Limit | The maximum number of simultaneous logins for a user. If creating a new session exceeds this limit, UCP deletes the least recently used session. To disable this limit, set the value to zero. The default limit is 10 sessions. | +| Lifetime Minutes | The initial lifetime of a login session, starting from the time UCP generates the session. When this time expires, UCP invalidates the active session. To establish a new session, the user must authenticate again. The default is 60 minutes with a minimum of 10 minutes. | +| Renewal Threshold Minutes | The time by which UCP extends an active session before session expiration. UCP extends the session by the number of minutes specified in **Lifetime Minutes**. The threshold value can't be greater than **Lifetime Minutes**. The default extension is 20 minutes. To specify that no sessions are extended, set the threshold value to zero. This may cause users to be logged out unexpectedly while using the UCP web interface. The maximum threshold is 5 minutes less than **Lifetime Minutes**. | +| Per User Limit | The maximum number of simultaneous logins for a user. If creating a new session exceeds this limit, UCP deletes the least recently used session. Every time you use a session token, the server marks it with the current time (`lastUsed` metadata). When you create a new session that would put you over the per user limit, the session with the oldest `lastUsed` time will be deleted. This is not necessarily the oldest session. To disable this limit, set the value to zero. The default limit is 10 sessions. 
| From e3d70dbfb7b4d82663fccf5aaf7c55b2fd180428 Mon Sep 17 00:00:00 2001 From: ddeyo Date: Mon, 15 Oct 2018 14:23:42 -0700 Subject: [PATCH 13/27] more review edits --- ee/ucp/admin/configure/set-session-timeout.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/ee/ucp/admin/configure/set-session-timeout.md b/ee/ucp/admin/configure/set-session-timeout.md index c4e8175ee2..15924643fd 100644 --- a/ee/ucp/admin/configure/set-session-timeout.md +++ b/ee/ucp/admin/configure/set-session-timeout.md @@ -18,4 +18,4 @@ To configure UCP login sessions, go to the UCP web UI, navigate to the | :---------------------- | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | Lifetime Minutes | The initial lifetime of a login session, starting from the time UCP generates the session. When this time expires, UCP invalidates the active session. To establish a new session, the user must authenticate again. The default is 60 minutes with a minimum of 10 minutes. | | Renewal Threshold Minutes | The time by which UCP extends an active session before session expiration. UCP extends the session by the number of minutes specified in **Lifetime Minutes**. The threshold value can't be greater than **Lifetime Minutes**. The default extension is 20 minutes. To specify that no sessions are extended, set the threshold value to zero. This may cause users to be logged out unexpectedly while using the UCP web interface. The maximum threshold is 5 minutes less than **Lifetime Minutes**. 
| -| Per User Limit | The maximum number of simultaneous logins for a user. If creating a new session exceeds this limit, UCP deletes the least recently used session. Every time you use a session token, the server marks it with the current time (`lastUsed` metadata). When you create a new session that would put you over the per user limit, the session with the oldest `lastUsed` time will be deleted. This is not necessarily the oldest session. To disable this limit, set the value to zero. The default limit is 10 sessions. | +| Per User Limit | The maximum number of simultaneous logins for a user. If creating a new session exceeds this limit, UCP deletes the least recently used session. Every time you use a session token, the server marks it with the current time (`lastUsed` metadata). When you create a new session that would put you over the per user limit, the session with the oldest `lastUsed` time is deleted. This is not necessarily the oldest session. To disable this limit, set the value to zero. The default limit is 10 sessions. | From 8bfe3abcc11fd4267cf2208741c7eb18dd8cc922 Mon Sep 17 00:00:00 2001 From: ddeyo Date: Mon, 15 Oct 2018 14:32:27 -0700 Subject: [PATCH 14/27] make it stop --- ee/ucp/admin/configure/ucp-configuration-file.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/ee/ucp/admin/configure/ucp-configuration-file.md b/ee/ucp/admin/configure/ucp-configuration-file.md index a6791f8558..de5ace0caa 100644 --- a/ee/ucp/admin/configure/ucp-configuration-file.md +++ b/ee/ucp/admin/configure/ucp-configuration-file.md @@ -6,7 +6,7 @@ keywords: Docker EE, UCP, configuration, config There are two ways to configure UCP: - through the web interface, or -- importing and exporting the UCP config in a TOML file. For more information about TOML, see [here on GitHub](https://github.com/toml-lang/toml/blob/master/README.md)) +- by importing and exporting the UCP config in a TOML file. 
For more information about TOML, see [the TOML README on GitHub](https://github.com/toml-lang/toml/blob/master/README.md)) You can customize the UCP installation by creating a configuration file at the time of installation. During the installation, UCP detects and starts using the @@ -45,10 +45,10 @@ UCP and apply your configuration changes: curl --cacert ca.pem --cert cert.pem --key key.pem --upload-file ucp-config.toml https://UCP_HOST/api/ucp/config-toml ``` -## Apply existing configuration file at install time. -You can configure UCP to import an existing configuration file at install time. To do this using the Configs feature of Docker Swarm, follow these steps. +## Apply existing configuration file at install time +You can configure UCP to import an existing configuration file at install time. To do this using the **Configs** feature of Docker Swarm, follow these steps. -1. Create a Docker Swarm Config object with a name of `com.docker.ucp.config` and having a value of your UCP config TOML file contents. +1. Create a **Docker Swarm Config** object with a name of `com.docker.ucp.config` and having a value of your UCP config TOML file contents. 2. When installing UCP on that cluster, be sure to specify the `--existing-config` flag to instruct the installer to use that object as its initial configuration. 3. After installation, remove the `com.docker.ucp.config` Config object. 
From ad3bf1c5556e61ef98909214afc665cec8f1d31a Mon Sep 17 00:00:00 2001 From: ddeyo Date: Mon, 15 Oct 2018 16:36:50 -0700 Subject: [PATCH 15/27] fix table --- ee/ucp/admin/configure/ucp-configuration-file.md | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-) diff --git a/ee/ucp/admin/configure/ucp-configuration-file.md b/ee/ucp/admin/configure/ucp-configuration-file.md index de5ace0caa..302008adc9 100644 --- a/ee/ucp/admin/configure/ucp-configuration-file.md +++ b/ee/ucp/admin/configure/ucp-configuration-file.md @@ -91,9 +91,10 @@ An array of tables that specifies the DTR instances that the current UCP instanc | `ca_bundle` | no | If you're using a custom certificate authority (CA), the `ca_bundle` setting specifies the root CA bundle for the DTR instance. The value is a string with the contents of a `ca.pem` file. | ### audit_log_configuration table (optional) - Configures audit logging options for UCP components. - | Parameter | Required | Description | -|:---------------|:---------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +Configures audit logging options for UCP components. + +| Parameter | Required | Description | +|:---------------|:---------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | `level` | no | Specify the audit logging level. Leave empty for disabling audit logs (default). Other legal values are "metadata" and "request". | | `support_dump_include_audit_logs` | no | When set to true, support dumps will include audit logs in the logs of the ucp-controller container of each manager node. The default is `false`. 
| From 495dc5ce9f6d726fd8a3d6ebfddebc29c94c873b Mon Sep 17 00:00:00 2001 From: ddeyo Date: Mon, 15 Oct 2018 16:40:30 -0700 Subject: [PATCH 16/27] update table --- ee/ucp/admin/configure/ucp-configuration-file.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/ee/ucp/admin/configure/ucp-configuration-file.md b/ee/ucp/admin/configure/ucp-configuration-file.md index 302008adc9..41e8635c83 100644 --- a/ee/ucp/admin/configure/ucp-configuration-file.md +++ b/ee/ucp/admin/configure/ucp-configuration-file.md @@ -88,15 +88,15 @@ An array of tables that specifies the DTR instances that the current UCP instanc |:---------------|:---------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | `host_address` | yes | The address for connecting to the DTR instance tied to this UCP cluster. | | `service_id` | yes | The DTR instance's OpenID Connect Client ID, as registered with the Docker authentication provider. | -| `ca_bundle` | no | If you're using a custom certificate authority (CA), the `ca_bundle` setting specifies the root CA bundle for the DTR instance. The value is a string with the contents of a `ca.pem` file. | +| `ca_bundle` | no | If you're using a custom certificate authority (CA), `ca_bundle` specifies the root CA bundle for the DTR instance. The value is a string with the contents of a `ca.pem` file. | ### audit_log_configuration table (optional) Configures audit logging options for UCP components. | Parameter | Required | Description | |:---------------|:---------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| `level` | no | Specify the audit logging level. Leave empty for disabling audit logs (default). Other legal values are "metadata" and "request". 
| -| `support_dump_include_audit_logs` | no | When set to true, support dumps will include audit logs in the logs of the ucp-controller container of each manager node. The default is `false`. | +| `level` | no | Specify the audit logging level. Leave empty for disabling audit logs (default). Other legal values are `metadata` and `request`. | +| `support_dump_include_audit_logs` | no | When set to true, support dumps will include audit logs in the logs of the `ucp-controller` container of each manager node. The default is `false`. | ### scheduling_configuration table (optional) From adf25d3835bd29712272007d2f8d747b6e3a4a5f Mon Sep 17 00:00:00 2001 From: ddeyo Date: Mon, 15 Oct 2018 16:45:07 -0700 Subject: [PATCH 17/27] updates --- ee/ucp/admin/configure/ucp-configuration-file.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/ee/ucp/admin/configure/ucp-configuration-file.md b/ee/ucp/admin/configure/ucp-configuration-file.md index 41e8635c83..971f51e45d 100644 --- a/ee/ucp/admin/configure/ucp-configuration-file.md +++ b/ee/ucp/admin/configure/ucp-configuration-file.md @@ -6,7 +6,7 @@ keywords: Docker EE, UCP, configuration, config There are two ways to configure UCP: - through the web interface, or -- by importing and exporting the UCP config in a TOML file. For more information about TOML, see [the TOML README on GitHub](https://github.com/toml-lang/toml/blob/master/README.md)) +- by importing and exporting the UCP config in a TOML file. For more information about TOML, see the [TOML README on GitHub](https://github.com/toml-lang/toml/blob/master/README.md). You can customize the UCP installation by creating a configuration file at the time of installation. 
During the installation, UCP detects and starts using the From 514d3ddac8389a69e84c80b52af9f69493e2c381 Mon Sep 17 00:00:00 2001 From: ddeyo Date: Mon, 15 Oct 2018 17:08:26 -0700 Subject: [PATCH 18/27] config-toml API name provided --- ee/ucp/admin/configure/ucp-configuration-file.md | 15 +++++++-------- 1 file changed, 7 insertions(+), 8 deletions(-) diff --git a/ee/ucp/admin/configure/ucp-configuration-file.md b/ee/ucp/admin/configure/ucp-configuration-file.md index 971f51e45d..093556d426 100644 --- a/ee/ucp/admin/configure/ucp-configuration-file.md +++ b/ee/ucp/admin/configure/ucp-configuration-file.md @@ -12,7 +12,7 @@ You can customize the UCP installation by creating a configuration file at the time of installation. During the installation, UCP detects and starts using the configuration specified in this file. -## UCP configuration file +## The UCP configuration file You can use the configuration file in different ways to set up your UCP cluster. @@ -27,10 +27,10 @@ cluster. Specify your configuration settings in a TOML file. -## Export and modify existing configuration +## Export and modify an existing configuration -Use the `Get Config TOML` API to export the current settings and emit them to a file. From within the directory of a UCP admin user's [client certificate bundle](../../user-access/cli.md), -and with the UCP hostname `UCP_HOST`, the following `curl` command will export +Use the `config-toml` API to export the current settings and emit them to a file. 
From within the directory of a UCP admin user's [client certificate bundle](../../user-access/cli.md), +and with the UCP hostname `UCP_HOST`, the following `curl` command exports the current UCP configuration to a file named `ucp-config.toml`: ```bash @@ -45,13 +45,12 @@ UCP and apply your configuration changes: curl --cacert ca.pem --cert cert.pem --key key.pem --upload-file ucp-config.toml https://UCP_HOST/api/ucp/config-toml ``` -## Apply existing configuration file at install time +## Apply an existing configuration file at install time You can configure UCP to import an existing configuration file at install time. To do this using the **Configs** feature of Docker Swarm, follow these steps. 1. Create a **Docker Swarm Config** object with a name of `com.docker.ucp.config` and having a value of your UCP config TOML file contents. -2. When installing UCP on that cluster, be sure to specify the `--existing-config` flag to instruct the installer to -use that object as its initial configuration. -3. After installation, remove the `com.docker.ucp.config` Config object. +2. When installing UCP on that cluster, specify the `--existing-config` flag to have the installer use that object for its initial configuration. +3. After installation, delete the `com.docker.ucp.config` object. 
## Example configuration file From 4e6c855e7a129fd3846d306b347915923188eda5 Mon Sep 17 00:00:00 2001 From: ddeyo Date: Tue, 16 Oct 2018 16:39:16 -0700 Subject: [PATCH 19/27] delete obsolete topic from TOC --- _data/toc.yaml | 2 -- 1 file changed, 2 deletions(-) diff --git a/_data/toc.yaml b/_data/toc.yaml index 3066f47919..2c5b000c09 100644 --- a/_data/toc.yaml +++ b/_data/toc.yaml @@ -1572,8 +1572,6 @@ manuals: title: Enable SAML authentication - path: /ee/ucp/admin/configure/external-auth/ title: Integrate with LDAP - - path: /ee/ucp/admin/configure/external-auth/enable-ldap-config-file/ - title: Integrate with LDAP by using a configuration file - path: /ee/ucp/admin/configure/license-your-installation/ title: License your installation - path: /ee/ucp/admin/configure/restrict-services-to-worker-nodes/ From 4c484b551a83bbf690f71668e89f9534cd0a523d Mon Sep 17 00:00:00 2001 From: ddeyo Date: Tue, 16 Oct 2018 22:25:57 -0700 Subject: [PATCH 20/27] removed ldap and config file --- ee/ucp/admin/configure/set-session-timeout.md | 2 +- ee/ucp/ucp-architecture.md | 3 +-- 2 files changed, 2 insertions(+), 3 deletions(-) diff --git a/ee/ucp/admin/configure/set-session-timeout.md b/ee/ucp/admin/configure/set-session-timeout.md index 15924643fd..c4bdf6d113 100644 --- a/ee/ucp/admin/configure/set-session-timeout.md +++ b/ee/ucp/admin/configure/set-session-timeout.md @@ -7,7 +7,7 @@ keywords: UCP, authorization, authentication, security, session, timeout Docker Universal Control Plane enables setting properties of user sessions, like session timeout and number of concurrent sessions. -To configure UCP login sessions, go to the UCP web UI, navigate to the +To configure UCP login sessions, go to the UCP web interface, navigate to the **Admin Settings** page and click **Authentication & Authorization**. 
![](../../images/authentication-authorization.png) diff --git a/ee/ucp/ucp-architecture.md b/ee/ucp/ucp-architecture.md index 2a0bec559c..17bbfd53df 100644 --- a/ee/ucp/ucp-architecture.md +++ b/ee/ucp/ucp-architecture.md @@ -180,14 +180,13 @@ driver. By default, the data for these volumes can be found at `/var/lib/docker/volumes//_data`. -## Configurations use by UCP +## Configurations used by UCP | Configuration name | Description | |:-------------------------------|:-------------------------------------------------------------------------------------------------| | com.docker.interlock.extension | Configuration for the Interlock extension service that monitors and configures the proxy service | | com.docker.interlock.proxy | Configuration for the service responsible for handling user requests and routing them | | com.docker.license | The Docker EE license | -| com.docker.ucp.config | The UCP controller configuration. Most of the settings available on the UCP UI are stored here | | com.docker.ucp.interlock.conf | Configuration for the core Interlock service | ## How you interact with UCP From 2f5eebd3b6e25bd871712129c55a58fcbb98fb56 Mon Sep 17 00:00:00 2001 From: ddeyo Date: Wed, 17 Oct 2018 09:28:50 -0700 Subject: [PATCH 21/27] deleting obsolete topic --- .../external-auth/enable-ldap-config-file.md | 68 ------------------- 1 file changed, 68 deletions(-) delete mode 100644 ee/ucp/admin/configure/external-auth/enable-ldap-config-file.md diff --git a/ee/ucp/admin/configure/external-auth/enable-ldap-config-file.md b/ee/ucp/admin/configure/external-auth/enable-ldap-config-file.md deleted file mode 100644 index 973a8c278e..0000000000 --- a/ee/ucp/admin/configure/external-auth/enable-ldap-config-file.md +++ /dev/null @@ -1,68 +0,0 @@ ---- -title: Integrate with LDAP by using a configuration file -description: Set up LDAP authentication by using a configuration file. 
-keywords: UCP, LDAP, config ---- - -Docker UCP integrates with LDAP directory services, so that you can manage -users and groups from your organization's directory and automatically -propagate this information to UCP and DTR. You can set up your cluster's LDAP -configuration by using the UCP web UI, or you can use a -[UCP configuration file](../ucp-configuration-file.md). - -To see an example TOML config file that shows how to configure UCP settings, -run UCP with the `example-config` option. -[Learn about UCP configuration files](../ucp-configuration-file.md). - -```bash -docker container run --rm {{ page.ucp_org }}/{{ page.ucp_repo }}:{{ page.ucp_version }} example-config -``` - -## Set up LDAP by using a configuration file - -1. Use the following command to extract the name of the currently active - configuration from the `ucp-agent` service. - - ```bash - {% raw %} - $ CURRENT_CONFIG_NAME=$(docker service inspect --format '{{ range $config := .Spec.TaskTemplate.ContainerSpec.Configs }}{{ $config.ConfigName }}{{ "\n" }}{{ end }}' ucp-agent | grep 'com.docker.ucp.config-') - {% endraw %} - ``` - -2. Get the current configuration and save it to a TOML file. - - ```bash - {% raw %} - docker config inspect --format '{{ printf "%s" .Spec.Data }}' $CURRENT_CONFIG_NAME > config.toml - {% endraw %} - ``` - -3. Use the output of the `example-config` command as a guide to edit your - `config.toml` file. Under the `[auth]` sections, set `backend = "ldap"` - and `[auth.ldap]` to configure LDAP integration the way you want. - -4. Once you've finished editing your `config.toml` file, create a new Docker - Config object by using the following command. - - ```bash - NEW_CONFIG_NAME="com.docker.ucp.config-$(( $(cut -d '-' -f 2 <<< "$CURRENT_CONFIG_NAME") + 1 ))" - docker config create $NEW_CONFIG_NAME config.toml - ``` - -5. Update the `ucp-agent` service to remove the reference to the old config - and add a reference to the new config. 
-
-    ```bash
-    docker service update --config-rm "$CURRENT_CONFIG_NAME" --config-add "source=${NEW_CONFIG_NAME},target=/etc/ucp/ucp.toml" ucp-agent
-    ```
-
-6. Wait a few moments for the `ucp-agent` service tasks to update across
-   your cluster. If you set `jit_user_provisioning = true` in the LDAP
-   configuration, users matching any of your specified search queries will
-   have their accounts created when they log in with their username and LDAP
-   password.
-
-## Where to go next
-
-- [Create users and teams manually](../../../authorization/create-users-and-teams-manually.md)
-- [Create teams with LDAP](../../../authorization/create-teams-with-ldap.md)

From ac6a4cf4062d94bb3b70c5a0c6c08197bebd8716 Mon Sep 17 00:00:00 2001
From: ddeyo
Date: Wed, 17 Oct 2018 09:52:13 -0700
Subject: [PATCH 22/27] revisions per review

---
 ee/ucp/admin/configure/ucp-configuration-file.md | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/ee/ucp/admin/configure/ucp-configuration-file.md b/ee/ucp/admin/configure/ucp-configuration-file.md
index 093556d426..4cf1a62589 100644
--- a/ee/ucp/admin/configure/ucp-configuration-file.md
+++ b/ee/ucp/admin/configure/ucp-configuration-file.md
@@ -29,9 +29,7 @@ Specify your configuration settings in a TOML file.

## Export and modify an existing configuration

-Use the `config-toml` API to export the current settings and emit them to a file. From within the directory of a UCP admin user's [client certificate bundle](../../user-access/cli.md),
-and with the UCP hostname `UCP_HOST`, the following `curl` command exports
-the current UCP configuration to a file named `ucp-config.toml`:
+Use the `config-toml` API to export the current settings and write them to a file. Within the directory of a UCP admin user's [client certificate bundle](../../user-access/cli.md), the following command exports the current configuration for the UCP hostname `UCP_HOST` to a file named `ucp-config.toml`:

```bash
curl --cacert ca.pem --cert cert.pem --key key.pem https://UCP_HOST/api/ucp/config-toml > ucp-config.toml
```

From b5e5a076cfae70bf245231bb6916b425329a9b96 Mon Sep 17 00:00:00 2001
From: ddeyo
Date: Wed, 17 Oct 2018 09:56:12 -0700
Subject: [PATCH 23/27] more revisions per review

---
 ee/ucp/admin/configure/ucp-configuration-file.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/ee/ucp/admin/configure/ucp-configuration-file.md b/ee/ucp/admin/configure/ucp-configuration-file.md
index 4cf1a62589..64d2d9dab0 100644
--- a/ee/ucp/admin/configure/ucp-configuration-file.md
+++ b/ee/ucp/admin/configure/ucp-configuration-file.md
@@ -35,7 +35,7 @@ Use the `config-toml` API to export the current settings and write them to a fil
curl --cacert ca.pem --cert cert.pem --key key.pem https://UCP_HOST/api/ucp/config-toml > ucp-config.toml
```

-Edit this file, then use the following `curl` command to import it back into
+Edit `ucp-config.toml`, then use the following `curl` command to import it back into
UCP and apply your configuration changes:


@@ -46,7 +46,7 @@ curl --cacert ca.pem --cert cert.pem --key key.pem --upload-file ucp-config.toml
## Apply an existing configuration file at install time
You can configure UCP to import an existing configuration file at install time. To do this using the **Configs** feature of Docker Swarm, follow these steps.
-1. Create a **Docker Swarm Config** object with a name of `com.docker.ucp.config` and having a value of your UCP config TOML file contents.
+1. Create a **Docker Swarm Config** object with a name of `com.docker.ucp.config` and the TOML value of your UCP configuration file contents.
2. When installing UCP on that cluster, specify the `--existing-config` flag to have the installer use that object for its initial configuration.
3. After installation, delete the `com.docker.ucp.config` object.

From 8afaefc661f07edfb208390ed728b4f3bc3bdbf0 Mon Sep 17 00:00:00 2001
From: ddeyo
Date: Wed, 17 Oct 2018 11:27:12 -0700
Subject: [PATCH 24/27] optional headers added

---
 ee/ucp/admin/configure/ucp-configuration-file.md | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/ee/ucp/admin/configure/ucp-configuration-file.md b/ee/ucp/admin/configure/ucp-configuration-file.md
index 64d2d9dab0..1a86604504 100644
--- a/ee/ucp/admin/configure/ucp-configuration-file.md
+++ b/ee/ucp/admin/configure/ucp-configuration-file.md
@@ -87,6 +87,18 @@ An array of tables that specifies the DTR instances that the current UCP instanc
| `service_id` | yes | The DTR instance's OpenID Connect Client ID, as registered with the Docker authentication provider. |
| `ca_bundle` | no | If you're using a custom certificate authority (CA), `ca_bundle` specifies the root CA bundle for the DTR instance. The value is a string with the contents of a `ca.pem` file. |

+### custom headers (optional)
+
+Included when you need to set custom API headers. You can repeat this section multiple times to specify multiple separate headers. If you include custom headers, you must specify both `name` and `value`.
+
+[custom_api_server_headers]
+
+| Item | Description | |
+|:---------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `name` | Set to specify the name of the custom header with `name` = "*X-Custom-Header-Name*". |
+| `value` | Set to specify the value of the custom header with `value` = "*Custom Header Value*". |
+
+
### audit_log_configuration table (optional)

Configures audit logging options for UCP components.
From fd9fe3bca6349bee012712924867136c7770b1b5 Mon Sep 17 00:00:00 2001
From: ddeyo
Date: Wed, 17 Oct 2018 13:26:06 -0700
Subject: [PATCH 25/27] table fix

---
 ee/ucp/admin/configure/ucp-configuration-file.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/ee/ucp/admin/configure/ucp-configuration-file.md b/ee/ucp/admin/configure/ucp-configuration-file.md
index 1a86604504..50c26344d4 100644
--- a/ee/ucp/admin/configure/ucp-configuration-file.md
+++ b/ee/ucp/admin/configure/ucp-configuration-file.md
@@ -94,7 +94,7 @@ Included when you need to set custom API headers. You can repeat this section mu
[custom_api_server_headers]

| Item | Description | |
-|:---------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+|:---------------|:-------------------------------------------------------------------------------------------|
| `name` | Set to specify the name of the custom header with `name` = "*X-Custom-Header-Name*". |
| `value` | Set to specify the value of the custom header with `value` = "*Custom Header Value*". |

From 5018fcb48634ceb887ee4bcbb76f87cadc53efd9 Mon Sep 17 00:00:00 2001
From: ddeyo
Date: Wed, 17 Oct 2018 14:07:42 -0700
Subject: [PATCH 26/27] fix 2

---
 ee/ucp/admin/configure/ucp-configuration-file.md | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/ee/ucp/admin/configure/ucp-configuration-file.md b/ee/ucp/admin/configure/ucp-configuration-file.md
index 50c26344d4..b07bd8e5ae 100644
--- a/ee/ucp/admin/configure/ucp-configuration-file.md
+++ b/ee/ucp/admin/configure/ucp-configuration-file.md
@@ -93,8 +93,9 @@ Included when you need to set custom API headers. You can repeat this section mu

[custom_api_server_headers]

-| Item | Description | |
-|:---------------|:-------------------------------------------------------------------------------------------|
+| Item | Description |
+| ----------- | ----------- |
+| Header | Title |
| `name` | Set to specify the name of the custom header with `name` = "*X-Custom-Header-Name*". |
| `value` | Set to specify the value of the custom header with `value` = "*Custom Header Value*". |

From d5020cb46833c182b6bfbf11db7a51ada774f8dd Mon Sep 17 00:00:00 2001
From: ddeyo
Date: Wed, 17 Oct 2018 14:08:39 -0700
Subject: [PATCH 27/27] fix 3

---
 ee/ucp/admin/configure/ucp-configuration-file.md | 1 -
 1 file changed, 1 deletion(-)

diff --git a/ee/ucp/admin/configure/ucp-configuration-file.md b/ee/ucp/admin/configure/ucp-configuration-file.md
index b07bd8e5ae..da80635884 100644
--- a/ee/ucp/admin/configure/ucp-configuration-file.md
+++ b/ee/ucp/admin/configure/ucp-configuration-file.md
@@ -95,7 +95,6 @@ Included when you need to set custom API headers. You can repeat this section mu

| Item | Description |
| ----------- | ----------- |
-| Header | Title |
| `name` | Set to specify the name of the custom header with `name` = "*X-Custom-Header-Name*". |
| `value` | Set to specify the value of the custom header with `value` = "*Custom Header Value*". |
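Editor's aside: the setup steps removed in PATCH 01 derive the next Docker config object name by incrementing the numeric suffix of the current one. That shell expression can be exercised on its own; this is a minimal sketch, where the starting name `com.docker.ucp.config-3` is a hypothetical example (real names come from `docker service inspect` on the `ucp-agent` service):

```shell
#!/usr/bin/env bash
# Hypothetical current config object name; in practice this is extracted
# from the ucp-agent service as shown in the removed steps.
CURRENT_CONFIG_NAME="com.docker.ucp.config-3"

# Same arithmetic as the docs: take the '-'-separated field after the
# prefix (the numeric suffix) and add one to get the next object name.
NEW_CONFIG_NAME="com.docker.ucp.config-$(( $(cut -d '-' -f 2 <<< "$CURRENT_CONFIG_NAME") + 1 ))"

echo "$NEW_CONFIG_NAME"   # -> com.docker.ucp.config-4
```

The new name is then registered with `docker config create "$NEW_CONFIG_NAME" config.toml`, as in the original steps.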