Docker UCP integrates with LDAP directory services, so that you can manage users and groups from your organization’s directory and automatically propagate this information to UCP and DTR. You can set up your cluster’s LDAP configuration by using the UCP web UI, or you can use a UCP configuration file.
To see an example TOML config file that shows how to configure UCP settings, run UCP with the `example-config` option, where `<ucp-org>/<ucp-repo>:<ucp-version>` is the UCP image and tag for your deployment. Learn about UCP configuration files.

```bash
docker container run --rm <ucp-org>/<ucp-repo>:<ucp-version> example-config
```
Use the following command to extract the name of the currently active configuration from the `ucp-agent` service.

```bash
CURRENT_CONFIG_NAME=$(docker service inspect --format '{{ range $config := .Spec.TaskTemplate.ContainerSpec.Configs }}{{ $config.ConfigName }}{{ "\n" }}{{ end }}' ucp-agent | grep 'com.docker.ucp.config-')
```
Get the current configuration and save it to a TOML file.

```bash
docker config inspect --format '{{ printf "%s" .Spec.Data }}' $CURRENT_CONFIG_NAME > config.toml
```
Use the output of the `example-config` command as a guide to edit your `config.toml` file. In the `[auth]` section, set `backend = "ldap"`, and use the `[auth.ldap]` section to configure the LDAP integration the way you want.
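For example, the relevant part of `config.toml` might look like the following. This is a minimal sketch; the exact key names and defaults come from your own `example-config` output, so treat the field names below as assumptions to verify against it:

```toml
[auth]
  # Switch from the built-in authentication backend to LDAP.
  backend = "ldap"

[auth.ldap]
  # Assumed key names -- confirm against your example-config output.
  server_url = "ldaps://ldap.example.com"
  reader_dn = "cn=readonly,dc=example,dc=com"
  reader_password = "secret"
  jit_user_provisioning = true
```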
Once you’ve finished editing your `config.toml` file, create a new Docker Config object by using the following commands.

```bash
NEW_CONFIG_NAME="com.docker.ucp.config-$(( $(cut -d '-' -f 2 <<< "$CURRENT_CONFIG_NAME") + 1 ))"
docker config create $NEW_CONFIG_NAME config.toml
```
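Before updating the service, you can confirm that the new Config object exists. A quick check, assuming the naming scheme above:

```bash
# List UCP config objects; the new name should appear alongside the old one.
docker config ls --filter name=com.docker.ucp.config
```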
Update the `ucp-agent` service to remove the reference to the old config and add a reference to the new one.

```bash
docker service update --config-rm "$CURRENT_CONFIG_NAME" --config-add "source=${NEW_CONFIG_NAME},target=/etc/ucp/ucp.toml" ucp-agent
```
Wait a few moments for the `ucp-agent` service tasks to update across your cluster. If you set `jit_user_provisioning = true` in the LDAP configuration, users matching any of your specified search queries will have their accounts created when they log in with their username and LDAP password.
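To confirm that the rollout picked up the new object, you can re-run the inspection command from earlier; once the update completes it should print the new config name:

```bash
# Should now print the value of $NEW_CONFIG_NAME.
docker service inspect --format '{{ range $config := .Spec.TaskTemplate.ContainerSpec.Configs }}{{ $config.ConfigName }}{{ "\n" }}{{ end }}' ucp-agent | grep 'com.docker.ucp.config-'
```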
If you enable LDAP, UCP uses a remote directory server to create users automatically, and all logins are forwarded to the directory server.

When you switch from built-in authentication to LDAP authentication, all manually created users whose usernames don’t match any LDAP search results are still available.

When you enable LDAP authentication, you can choose whether UCP creates user accounts only when users log in for the first time. Select the **Just-In-Time User Provisioning** option to ensure that the only LDAP accounts that exist in UCP are those that have had a user log in to UCP.

You control how UCP integrates with LDAP by creating searches for users. You can specify multiple search configurations, and you can specify multiple LDAP servers to integrate with. Searches start with the `Base DN`, which is the distinguished name of the node in the LDAP directory tree where the search starts looking for users.
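Conceptually, each search config maps onto an ordinary LDAP query. A sketch with the standard `ldapsearch` tool, using hypothetical DNs and attributes, shows how the pieces fit together:

```bash
# Base DN:    where the search starts (-b)
# Scope:      one level or full subtree (-s)
# Filter:     which entries qualify as users
# Attributes: what UCP reads as username and full name
ldapsearch -H ldaps://ldap.example.com \
  -D "cn=readonly,dc=example,dc=com" -W \
  -b "ou=people,dc=example,dc=com" -s sub \
  "(objectClass=person)" uid cn
```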
Access LDAP settings by navigating to the **Authentication & Authorization** page in the UCP web UI. There are two sections for controlling LDAP searches and servers.

Each user search config specifies a `Base DN`, a `scope`, a `filter`, the `username` attribute, and the `full name` attribute. These searches are stored in a list, and the ordering may be important, depending on your search configuration.

Here’s what happens when UCP synchronizes with LDAP:

- For each search config, UCP selects a domain server by taking the `Base DN` from the user search config and choosing the domain server that has the longest domain suffix match.
- If no domain server has a domain suffix that matches the `Base DN` from the search config, UCP uses the default domain server.

The domain server to use is determined by the `Base DN` in each search config. UCP doesn’t perform search requests against each of the domain servers, only the one which has the longest matching domain suffix, or the default if there’s no match.
Here’s an example. Let’s say we have three LDAP domain servers:

| Domain | Server URL |
|---|---|
| default | `ldaps://ldap.example.com` |
| `dc=subsidiary1,dc=com` | `ldaps://ldap.subsidiary1.com` |
| `dc=subsidiary2,dc=subsidiary1,dc=com` | `ldaps://ldap.subsidiary2.com` |
:
baseDN=ou=people,dc=subsidiary1,dc=com
For this search config, dc=subsidiary1,dc=com
is the only server with a
-domain which is a suffix, so UCP uses the server ldaps://ldap.subsidiary1.com
-for the search request.
baseDN=ou=product,dc=subsidiary2,dc=subsidiary1,dc=com
For this search config, two of the domain servers have a domain which is a
-suffix of this base DN, but dc=subsidiary2,dc=subsidiary1,dc=com
is the
-longer of the two, so UCP uses the server ldaps://ldap.subsidiary2.com
-for the search request.
baseDN=ou=eng,dc=example,dc=com
For this search config, there is no server with a domain specified which is
-a suffix of this base DN, so UCP uses the default server, ldaps://ldap.example.com
,
-for the search request.
If there are username
collisions for the search results between domains, UCP
-uses only the first search result, so the ordering of the user search configs
-may be important. For example, if both the first and third user search configs
-result in a record with the username jane.doe
, the first has higher
-precedence and the second is ignored. For this reason, it’s important to choose
-a username
attribute that’s unique for your users across all domains.
Because names may collide, it’s a good idea to use something unique to the
-subsidiary, like the email address for each person. Users can log in with the
-email address, for example, jane.doe@subsidiary1.com
.
To configure UCP to create and authenticate users by using an LDAP directory, -go to the UCP web UI, navigate to the Admin Settings page and click -Authentication & Authorization to select the method used to create and -authenticate users.
- -In the LDAP Enabled section, click Yes to The LDAP settings appear. -Now configure your LDAP directory integration.
- -Use this setting to change the default permissions of new users.
- -Click the dropdown to select the permission level that UCP assigns by default
-to the private collections of new users. For example, if you change the value
-to View Only
, all users who log in for the first time after the setting is
-changed have View Only
access to their private collections, but permissions
-remain unchanged for all existing users.
-Learn more about permission levels.
Click Yes to enable integrating UCP users and teams with LDAP servers.
- -Field | -Description | -
---|---|
LDAP server URL | -The URL where the LDAP server can be reached. | -
Reader DN | -The distinguished name of the LDAP account used for searching entries in the LDAP server. As a best practice, this should be an LDAP read-only user. | -
Reader password | -The password of the account used for searching entries in the LDAP server. | -
Use Start TLS | -Whether to authenticate/encrypt the connection after connecting to the LDAP server over TCP. If you set the LDAP Server URL field with ldaps:// , this field is ignored. |
-
Skip TLS verification | -Whether to verify the LDAP server certificate when using TLS. The connection is still encrypted but vulnerable to man-in-the-middle attacks. | -
No simple pagination | -If your LDAP server doesn’t support pagination. | -
Just-In-Time User Provisioning | -Whether to create user accounts only when users log in for the first time. The default value of true is recommended. If you upgraded from UCP 2.0.x, the default is false . |
-
Click Confirm to add your LDAP domain.
- -To integrate with more LDAP servers, click Add LDAP Domain.
To integrate with more LDAP servers, click **Add LDAP Domain**.

| Field | Description |
|---|---|
| Base DN | The distinguished name of the node in the directory tree where the search should start looking for users. |
| Username attribute | The LDAP attribute to use as username on UCP. Only user entries with a valid username will be created. A valid username is no longer than 100 characters and does not contain any unprintable characters, whitespace characters, or any of the following characters: / \ [ ] : ; \| = , + * ? < > ' " |
| Full name attribute | The LDAP attribute to use as the user’s full name for display purposes. If left empty, UCP will not create new users with a full name value. |
| Filter | The LDAP search filter used to find users. If you leave this field empty, all directory entries in the search scope with valid username attributes are created as users. |
| Search subtree instead of just one level | Whether to perform the LDAP search on a single level of the LDAP tree, or search through the full LDAP tree starting at the Base DN. |
| Match Group Members | Whether to further filter users by selecting those who are also members of a specific group on the directory server. This feature is helpful if the LDAP server does not support `memberOf` search filters. |
| Iterate through group members | If Match Group Members is selected, this option searches for users by first iterating over the target group’s membership, making a separate LDAP query for each member, as opposed to first querying for all users which match the above search query and intersecting those with the set of group members. This option can be more efficient in situations where the number of members of the target group is significantly smaller than the number of users which would match the above search filter, or if your directory server does not support simple pagination of search results. |
| Group DN | If Match Group Members is selected, this specifies the distinguished name of the group from which to select users. |
| Group Member Attribute | If Match Group Members is selected, the value of this group attribute corresponds to the distinguished names of the members of the group. |
- -Field | -Description | -
| Field | Description |
|---|---|
| Username | An LDAP username for testing authentication to this application. This value corresponds with the Username attribute specified in the LDAP user search configurations section. |
| Password | The user’s password used to authenticate (BIND) to the directory server. |
- -Field | -Description | -
---|---|
Sync interval | -The interval, in hours, to synchronize users between UCP and the LDAP server. When the synchronization job runs, new users found in the LDAP server are created in UCP with the default permission level. UCP users that don’t exist in the LDAP server become inactive. | -
Enable sync of admin users | -This option specifies that system admins should be synced directly with members of a group in your organization’s LDAP directory. The admins will be synced to match the membership of the group. The configured recovery admin user will also remain a system admin. | -
Once you’ve configured the LDAP integration, UCP synchronizes users based on -the interval you’ve defined starting at the top of the hour. When the -synchronization runs, UCP stores logs that can help you troubleshoot when -something goes wrong.
- -You can also manually synchronize users by clicking Sync Now.
- -When a user is removed from LDAP, the effect on the user’s UCP account depends -on the Just-In-Time User Provisioning setting:
- -false
: Users deleted from LDAP become
-inactive in UCP after the next LDAP synchronization runs.true
: Users deleted from LDAP can’t
-authenticate, but their UCP accounts remain active. This means that they can
-use their client bundles to run commands. To prevent this, deactivate their
-UCP user accounts.UCP saves a minimum amount of user data required to operate. This includes -the value of the username and full name attributes that you have specified in -the configuration as well as the distinguished name of each synced user. -UCP does not store any additional data from the directory server.
- -UCP enables syncing teams with a search query or group in your organization’s -LDAP directory. -Sync team members with your organization’s LDAP directory.
- -Docker Universal Control Plane is designed for high availability (HA). You can -join multiple manager nodes to the cluster, so that if one manager node fails, -another can automatically take its place without impact to the cluster.
- -Having multiple manager nodes in your cluster allows you to:
- -To make the cluster tolerant to more failures, add additional replica nodes to -your cluster.
- -Manager nodes | -Failures tolerated | -
| Manager nodes | Failures tolerated |
|---|---|
| 1 | 0 |
| 3 | 1 |
| 5 | 2 |

As a rule, a cluster with `n` managers tolerates the loss of at most `(n - 1) / 2` managers (rounded down), which is why odd numbers of managers are recommended.
- -Docker EE is designed for scaling horizontally as your applications grow in -size and usage. You can add or remove nodes from the cluster to scale it -to your needs. You can join Windows Server 2016, IBM z System, and Linux nodes -to the cluster.
- -Because Docker EE leverages the clustering functionality provided by Docker -Engine, you use the docker swarm join -command to add more nodes to your cluster. When you join a new node, Docker EE -services start running on the node automatically.
- -When you join a node to a cluster, you specify its role: manager or worker.
- -Manager: Manager nodes are responsible for cluster management -functionality and dispatching tasks to worker nodes. Having multiple -manager nodes allows your swarm to be highly available and tolerant of -node failures.
- -Manager nodes also run all Docker EE components in a replicated way, so -by adding additional manager nodes, you’re also making the cluster highly -available. -Learn more about the Docker EE architecture.
-Worker: Worker nodes receive and execute your services and applications. -Having multiple worker nodes allows you to scale the computing capacity of -your cluster.
- -When deploying Docker Trusted Registry in your cluster, you deploy it to a -worker node.
-You can join Windows Server 2016, IBM z System, and Linux nodes to the cluster, -but only Linux nodes can be managers.
- -To join nodes to the cluster, go to the Docker EE web UI and navigate to the -Nodes page.
- -Copy the displayed command, use SSH to log in to the host that you want to
-join to the cluster, and run the docker swarm join
command on the host.
To add a Windows node, click Windows and follow the instructions in -Join Windows worker nodes to a cluster.
- -After you run the join command in the node, the node is displayed on the -Nodes page in the Docker EE web UI. From there, you can change the node’s -cluster configuration, including its assigned orchestrator type. -Learn how to change the orchestrator for a node.
- -Once a node is part of the cluster, you can configure the node’s availability -so that it is:
Once a node is part of the cluster, you can configure the node’s availability so that it is:

- **Active**: the node can receive and execute tasks.
- **Paused**: the node continues running existing tasks, but doesn’t receive new ones.
- **Drained**: the node doesn’t receive new tasks, and existing tasks are stopped and rescheduled to active nodes.

You can pause or drain a node from the **Edit Node** page.
You can promote worker nodes to managers to make UCP fault tolerant. You can also demote a manager node into a worker.

To promote or demote a manager node, change its role from the node’s details page in the UCP web UI.

If you’re load-balancing user requests to Docker EE across multiple manager nodes, don’t forget to remove these nodes from your load-balancing pool when you demote them to workers.

You can remove worker nodes from the cluster at any time.

Since manager nodes are important to the cluster’s overall health, you need to be careful when removing one from the cluster. To remove a manager node, first demote it to a worker, and then remove it as you would any worker node.
You can use the Docker CLI client to manage your nodes from the CLI. To do this, configure your Docker CLI client with a UCP client bundle.

Once you do that, you can start managing your UCP nodes:

```bash
docker node ls
```
- -Follow these steps to enable a worker node on Windows.
- -Install Docker EE Engine -on a Windows Server 2016 or 1709 instance to enable joining a cluster that’s managed by -Docker Enterprise Edition.
- -Follow these steps to configure the docker daemon and the Windows environment.
- -ucp-agent
, which is named ucp-agent-win
.ucp-agent-win
.Configure the Docker Engine running on the node to have a label. This makes -it easier to deploy applications on nodes with this label.
- -Create the file C:\ProgramData\docker\config\daemon.json
with the following
-content:
{
- "labels": ["os=windows"]
-}
-
Restart Docker for the changes to take effect:
- -Restart-Service docker
-
On a manager node, run the following command to list the images that are required -on Windows nodes.
- -docker container run --rm /: images --list --enable-windows
-/ucp-agent-win:
-/ucp-dsinfo-win:
-
On Windows Server 2016, in a PowerShell terminal running as Administrator,
-log in to Docker Hub with the docker login
command and pull the listed images.
docker image pull /ucp-agent-win:
-docker image pull /ucp-dsinfo-win:
-
You need to open ports 2376 and 12376, and create certificates -for the Docker daemon to communicate securely. Use this command to run -the Windows node setup script:
- -$script = [ScriptBlock]::Create((docker run --rm /ucp-agent-win: windows-script | Out-String))
-
-Invoke-Command $script
-
-- -Docker daemon restart
- -When you run
-windows-script
, the Docker service is unavailable temporarily.
The Windows node is ready to join the cluster. Run the setup script on each -instance of Windows Server that will be a worker node.
- -The script may be incompatible with installations that use a config file at
-C:\ProgramData\docker\config\daemon.json
. If you use such a file, make sure
-that the daemon runs on port 2376 and that it uses certificates located in
-C:\ProgramData\docker\daemoncerts
. If certificates don’t exist in this
-directory, run ucp-agent-win generate-certs
, as shown in Step 2 of the
-procedure in Set up certs for the dockerd service.
In the daemon.json file, set the tlscacert
, tlscert
, and tlskey
options
-to the corresponding files in C:\ProgramData\docker\daemoncerts
:
{
-...
- "debug": true,
- "tls": true,
- "tlscacert": "C:\ProgramData\docker\daemoncerts\ca.pem",
- "tlscert": "C:\ProgramData\docker\daemoncerts\cert.pem",
- "tlskey": "C:\ProgramData\docker\daemoncerts\key.pem",
- "tlsverify": true,
-...
-}
-
Now you can join the cluster by using the docker swarm join
command that’s
-provided by the Docker EE web UI and CLI.
Check the Use a custom listen address option to specify the -IP address that’s advertised to all members of the cluster for API access.
- -Copy the displayed command. It looks similar to the following:
- -docker swarm join --token <token> <ucp-manager-ip>
-
You can also use the command line to get the join token. Using your -UCP client bundle, run:
- -docker swarm join-token worker
-
Run the docker swarm join
command on each instance of Windows Server that
-will be a worker node.
The following sections describe how to run the commands in the setup script
-manually to configure the dockerd
service and the Windows environment.
-The script opens ports in the firewall and sets up certificates for dockerd
.
To see the script, you can run the windows-script
command without piping
-to the Invoke-Expression
cmdlet.
docker container run --rm /ucp-agent-win: windows-script
-
Docker EE requires that ports 2376 and 12376 are open for inbound TCP traffic.
- -In a PowerShell terminal running as Administrator, run these commands -to add rules to the Windows firewall.
- -netsh advfirewall firewall add rule name="docker_local" dir=in action=allow protocol=TCP localport=2376
-netsh advfirewall firewall add rule name="docker_proxy" dir=in action=allow protocol=TCP localport=12376
-
C:\ProgramData\docker\daemoncerts
.In a PowerShell terminal running as Administrator, run the following command -to generate certificates.
- -docker container run --rm -v C:\ProgramData\docker\daemoncerts:C:\certs /ucp-agent-win: generate-certs
-
To set up certificates, run the following commands to stop and unregister the
-dockerd
service, register the service with the certificates, and restart the service.
Stop-Service docker
-dockerd --unregister-service
-dockerd -H npipe:// -H 0.0.0.0:2376 --tlsverify --tlscacert=C:\ProgramData\docker\daemoncerts\ca.pem --tlscert=C:\ProgramData\docker\daemoncerts\cert.pem --tlskey=C:\ProgramData\docker\daemoncerts\key.pem --register-service
-Start-Service docker
-
The dockerd
service and the Windows environment are now configured to join a Docker EE cluster.
-- -TLS certificate setup
- -If the TLS certificates aren’t set up correctly, the Docker EE web UI shows the -following warning.
- --Node WIN-NOOQV2PJGTE is a Windows node that cannot connect to its local Docker daemon. -
Some features are not yet supported on Windows nodes:
- -ucp-hrm
network to make it
-unencrypted.Once you’ve joined multiple manager nodes for high-availability, you can -configure your own load balancer to balance user requests across all -manager nodes.
- -This allows users to access UCP using a centralized domain name. If -a manager node goes down, the load balancer can detect that and stop forwarding -requests to that node, so that the failure goes unnoticed by users.
- -Since Docker UCP uses mutual TLS, make sure you configure your load balancer to:
- -443
and 6443
./_ping
endpoint on each manager node, to check if the node
-is healthy and if it should remain on the load balancing pool or not.By default, both UCP and DTR use port 443. If you plan on deploying UCP and DTR, -your load balancer needs to distinguish traffic between the two by IP address -or port number.
- --- -Additional requirements
- -In addition to configuring your load balancer to distinguish between UCP and DTR, configuring a load balancer for DTR has additional requirements.
-
Use the following examples to configure your load balancer for UCP.
- - -user nginx;
-worker_processes 1;
-
-error_log /var/log/nginx/error.log warn;
-pid /var/run/nginx.pid;
-
-events {
- worker_connections 1024;
-}
-
-stream {
- upstream ucp_443 {
- server <UCP_MANAGER_1_IP>:443 max_fails=2 fail_timeout=30s;
- server <UCP_MANAGER_2_IP>:443 max_fails=2 fail_timeout=30s;
- server <UCP_MANAGER_N_IP>:443 max_fails=2 fail_timeout=30s;
- }
- server {
- listen 443;
- proxy_pass ucp_443;
- }
-}
-
global
- log /dev/log local0
- log /dev/log local1 notice
-
-defaults
- mode tcp
- option dontlognull
- timeout connect 5s
- timeout client 50s
- timeout server 50s
- timeout tunnel 1h
- timeout client-fin 50s
-### frontends
-# Optional HAProxy Stats Page accessible at http://<host-ip>:8181/haproxy?stats
-frontend ucp_stats
- mode http
- bind 0.0.0.0:8181
- default_backend ucp_stats
-frontend ucp_443
- mode tcp
- bind 0.0.0.0:443
- default_backend ucp_upstream_servers_443
-### backends
-backend ucp_stats
- mode http
- option httplog
- stats enable
- stats admin if TRUE
- stats refresh 5m
-backend ucp_upstream_servers_443
- mode tcp
- option httpchk GET /_ping HTTP/1.1\r\nHost:\ <UCP_FQDN>
- server node01 <UCP_MANAGER_1_IP>:443 weight 100 check check-ssl verify none
- server node02 <UCP_MANAGER_2_IP>:443 weight 100 check check-ssl verify none
- server node03 <UCP_MANAGER_N_IP>:443 weight 100 check check-ssl verify none
-
**AWS Elastic Load Balancer**:

```json
{
  "Subnets": [
    "subnet-XXXXXXXX",
    "subnet-YYYYYYYY",
    "subnet-ZZZZZZZZ"
  ],
  "CanonicalHostedZoneNameID": "XXXXXXXXXXX",
  "CanonicalHostedZoneName": "XXXXXXXXX.us-west-XXX.elb.amazonaws.com",
  "ListenerDescriptions": [
    {
      "Listener": {
        "InstancePort": 443,
        "LoadBalancerPort": 443,
        "Protocol": "TCP",
        "InstanceProtocol": "TCP"
      },
      "PolicyNames": []
    }
  ],
  "HealthCheck": {
    "HealthyThreshold": 2,
    "Interval": 10,
    "Target": "HTTPS:443/_ping",
    "Timeout": 2,
    "UnhealthyThreshold": 4
  },
  "VPCId": "vpc-XXXXXX",
  "BackendServerDescriptions": [],
  "Instances": [
    {
      "InstanceId": "i-XXXXXXXXX"
    },
    {
      "InstanceId": "i-XXXXXXXXX"
    },
    {
      "InstanceId": "i-XXXXXXXXX"
    }
  ],
  "DNSName": "XXXXXXXXXXXX.us-west-2.elb.amazonaws.com",
  "SecurityGroups": [
    "sg-XXXXXXXXX"
  ],
  "Policies": {
    "LBCookieStickinessPolicies": [],
    "AppCookieStickinessPolicies": [],
    "OtherPolicies": []
  },
  "LoadBalancerName": "ELB-UCP",
  "CreatedTime": "2017-02-13T21:40:15.400Z",
  "AvailabilityZones": [
    "us-west-2c",
    "us-west-2a",
    "us-west-2b"
  ],
  "Scheme": "internet-facing",
  "SourceSecurityGroup": {
    "OwnerAlias": "XXXXXXXXXXXX",
    "GroupName": "XXXXXXXXXXXX"
  }
}
```
- - -# Create the nginx.conf file, then
-# deploy the load balancer
-
-docker run --detach \
- --name ucp-lb \
- --restart=unless-stopped \
- --publish 443:443 \
- --volume ${PWD}/nginx.conf:/etc/nginx/nginx.conf:ro \
- nginx:stable-alpine
-
```bash
# Create the haproxy.cfg file, then
# deploy the load balancer

docker run --detach \
  --name ucp-lb \
  --publish 443:443 \
  --publish 8181:8181 \
  --restart=unless-stopped \
  --volume ${PWD}/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro \
  haproxy:1.7-alpine haproxy -d -f /usr/local/etc/haproxy/haproxy.cfg
```
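Once the load balancer is running, you can check that it routes to a healthy manager. The `/_ping` endpoint returns `OK` on success; `--insecure` skips verification of UCP’s (possibly self-signed) certificate:

```bash
curl --insecure https://<load-balancer-ip>/_ping
```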