diff --git a/ee/ucp/admin/configure/_site/add-labels-to-cluster-nodes.html b/ee/ucp/admin/configure/_site/add-labels-to-cluster-nodes.html new file mode 100644 index 0000000000..15ec34f080 --- /dev/null +++ b/ee/ucp/admin/configure/_site/add-labels-to-cluster-nodes.html @@ -0,0 +1,137 @@ +

With Docker UCP, you can add labels to your nodes. Labels are metadata that +describe the node, like its role (development, QA, production), its region +(US, EU, APAC), or the kind of disk (hdd, ssd). Once you have labeled your +nodes, you can add deployment constraints to your services, to ensure they +are scheduled on a node with a specific label.

+ +

For example, you can apply labels to nodes based on their role in the development +lifecycle, or the hardware resources they have.

+ +

+ +

Don’t create labels for authorization and permissions to resources. +Instead, use resource sets, either UCP collections or Kubernetes namespaces, +to organize access to your cluster. +Learn about managing access with resource sets.

+ +

Apply labels to a node

+ +

In this example we’ll apply a disk label with the value ssd to a node. Then we’ll deploy +a service with a deployment constraint to make sure the service is always +scheduled to run on a node that has that label.

+ +

Log in with administrator credentials in the UCP web UI, navigate to the +Nodes page, and choose the node you want to apply labels to. In the +details pane, click Configure.

+ +

In the Edit Node page, scroll down to the Labels section.

+ +

Click Add Label, and add a label with the key disk and a value of ssd.

+ +

+ +

Click Save and dismiss the Edit Node page. In the node’s details +pane, click Labels to view the labels that are applied to the node.

+ +

You can also do this from the CLI by running:

+ +
docker node update --label-add <key>=<value> <node-id>
+
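For the disk example in this topic, the concrete command and a quick verification might look like this, with <node-id> standing for the node's name or ID, as above:

# Apply the disk=ssd label to the node
docker node update --label-add disk=ssd <node-id>

# Confirm that the label is present
docker node inspect --format '{{ .Spec.Labels }}' <node-id>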
+ +

Deploy a service with constraints

+ +

When deploying a service, you can specify constraints, so that the service gets +scheduled only on a node that has a label that fulfills all of the constraints +you specify.

+ +

In this example, when users deploy a service, they can add a constraint for the +service to be scheduled only on nodes that have SSD storage: +node.labels.disk == ssd.

+ +
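The same constraint also works when creating a service directly from the CLI. Here's a minimal sketch, with the service name and image chosen only for illustration:

docker service create \
  --name nginx-on-ssd \
  --constraint 'node.labels.disk == ssd' \
  nginx:alpine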

Navigate to the Stacks page. Name the new stack “wordpress”, and in the +Mode dropdown, check Swarm Services.

+ +

In the docker-compose.yml editor, paste the following stack file.

+ +
version: "3.1"
+
+services:
+  db:
+    image: mysql:5.7
+    deploy:
+      placement:
+        constraints:
+          - node.labels.disk == ssd
+      restart_policy:
+        condition: on-failure
+    networks:
+      - wordpress-net
+    environment:
+      MYSQL_ROOT_PASSWORD: wordpress
+      MYSQL_DATABASE: wordpress
+      MYSQL_USER: wordpress
+      MYSQL_PASSWORD: wordpress
+  wordpress:
+    depends_on:
+      - db
+    image: wordpress:latest
+    deploy:
+      replicas: 1
+      placement:
+        constraints:
+          - node.labels.disk == ssd
+      restart_policy:
+        condition: on-failure
+        max_attempts: 3
+    networks:
+      - wordpress-net
+    ports:
+      - "8000:80"
+    environment:
+      WORDPRESS_DB_HOST: db:3306
+      WORDPRESS_DB_PASSWORD: wordpress
+
+networks:
+  wordpress-net:
+
+ +

Click Create to deploy the stack, and when the stack deploys, +click Done.

+ +

+ +

Navigate to the Nodes page, and click the node that has the +disk label. In the details pane, click the Inspect Resource +dropdown and select Containers.

+ +

+ +

Dismiss the filter and navigate to the Nodes page. Click a node that +doesn’t have the disk label. In the details pane, click the +Inspect Resource dropdown and select Containers. There are no +WordPress containers scheduled on the node. Dismiss the filter.

+ +
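You can run the same check from the CLI with a UCP client bundle, since docker stack ps lists the node each task was scheduled on:

# The NODE column shows where each wordpress task is running
docker stack ps wordpress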

Add a constraint to a service by using the UCP web UI

+ +

You can declare the deployment constraints in your docker-compose.yml file or +when you’re creating a stack. Also, you can apply them when you’re creating +a service.

+ +

To check if a service has deployment constraints, navigate to the +Services page and choose the service that you want to check. +In the details pane, click Constraints to list the constraint labels.

+ +

To edit the constraints on the service, click Configure and select +Details to open the Update Service page. Click Scheduling to +view the constraints.

+ +

+ +

You can add or remove deployment constraints on this page.

+ +

Where to go next

+ + + diff --git a/ee/ucp/admin/configure/_site/add-sans-to-cluster.html b/ee/ucp/admin/configure/_site/add-sans-to-cluster.html new file mode 100644 index 0000000000..4fa2536494 --- /dev/null +++ b/ee/ucp/admin/configure/_site/add-sans-to-cluster.html @@ -0,0 +1,51 @@ +

UCP always runs with HTTPS enabled. When you connect to UCP, you need to make +sure that the hostname that you use to connect is recognized by UCP’s +certificates. If, for instance, you put UCP behind a load balancer that +forwards its traffic to your UCP instance, your requests will be for the load +balancer’s hostname or IP address, not UCP’s. UCP will reject these requests +unless you include the load balancer’s address as a Subject Alternative Name +(or SAN) in UCP’s certificates.

+ +
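If you're not sure which names UCP's current certificate covers, one way to inspect it is with openssl from any machine that can reach UCP; the hostname below is a placeholder:

echo | openssl s_client -connect ucp.example.com:443 2>/dev/null \
  | openssl x509 -noout -text \
  | grep -A1 'Subject Alternative Name'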

If you use your own TLS certificates, make sure that they have the correct SAN +values. +Learn about using your own TLS certificates.

+ +

If you want to use the self-signed certificate that UCP has out of the box, you +can set up the SANs when you install UCP with the --san argument. You can +also add them after installation.

+ +
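For reference, a fresh installation can pass one --san flag per extra name. This is only a sketch; the image tag and addresses are placeholders that you need to adjust for your release and environment:

docker container run --rm -it --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp:3.0.0 install \
  --host-address <node-ip> \
  --san lb.example.com \
  --san ucp.example.com \
  --interactive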

Add new SANs to UCP

+ +
    +
  1. In the UCP web UI, log in with administrator credentials and navigate to +the Nodes page.
  2. +
  3. Click on a manager node, and in the details pane, click Configure and +select Details.
  4. +
  5. In the SANs section, click Add SAN, and enter one or more SANs +for the cluster. +
  6. +
  7. Once you’re done, click Save.
  8. +
+ +

You will have to do this on every existing manager node in the cluster, +but once you have done so, the SANs are applied automatically to any new +manager nodes that join the cluster.

+ +

You can also do this from the CLI by first running:

+ +

+docker node inspect --format '{{ index .Spec.Labels "com.docker.ucp.SANs" }}' <node-id>
+default-cs,127.0.0.1,172.17.0.1
+
+
+ +

This will get the current set of SANs for the given manager node. Append your +desired SAN to this list, for example default-cs,127.0.0.1,172.17.0.1,example.com, +and then run:

+ +
docker node update --label-add com.docker.ucp.SANs=<SANs-list> <node-id>
+
+ +

<SANs-list> is the list of SANs with your new SAN appended at the end. As in +the web UI, you must do this for every manager node.
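Because every manager needs the same update, you can also script the two commands above across all managers from a machine configured with a client bundle. A rough sketch, assuming you only need to append a single new SAN:

NEW_SAN=example.com
for node in $(docker node ls --filter role=manager -q); do
  current=$(docker node inspect --format '{{ index .Spec.Labels "com.docker.ucp.SANs" }}' "$node")
  docker node update --label-add "com.docker.ucp.SANs=${current},${NEW_SAN}" "$node"
done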

+ diff --git a/ee/ucp/admin/configure/_site/deploy-route-reflectors.html b/ee/ucp/admin/configure/_site/deploy-route-reflectors.html new file mode 100644 index 0000000000..b81a50fb6f --- /dev/null +++ b/ee/ucp/admin/configure/_site/deploy-route-reflectors.html @@ -0,0 +1,243 @@ +

UCP uses Calico as the default Kubernetes networking solution. Calico is +configured to create a BGP mesh between all nodes in the cluster.

+ +

As you add more nodes to the cluster, networking performance starts decreasing. +If your cluster has more than 100 nodes, you should reconfigure Calico to use +Route Reflectors instead of a node-to-node mesh.

+ +

This article guides you in deploying Calico Route Reflectors in a UCP cluster. +UCP running on Microsoft Azure uses Azure SDN instead of Calico for +multi-host networking. +If your UCP deployment is running on Azure, you don’t need to configure it this +way.

+ +

Before you begin

+ +

For production-grade systems, you should deploy at least two Route Reflectors, +each running on a dedicated node. These nodes should not be running any other +workloads.

+ +

If Route Reflectors are running on the same node as other workloads, swarm ingress +and NodePorts might not work for those workloads.

+ +

Choose dedicated nodes

+ +

Start by tainting the nodes, so that no other workload runs there. Configure +your CLI with a UCP client bundle, and for each dedicated node, run:

+ +
kubectl taint node <node-name> \
+  com.docker.ucp.kubernetes.calico/route-reflector=true:NoSchedule
+
+ +

Then add labels to those nodes, so that you can target them when deploying the +Route Reflectors. For each dedicated node, run:

+ +
kubectl label nodes <node-name> \
+  com.docker.ucp.kubernetes.calico/route-reflector=true
+
+ +

Deploy the Route Reflectors

+ +

Create a calico-rr.yaml file with the following content:

+ +
kind: DaemonSet
+apiVersion: extensions/v1beta1
+metadata:
+  name: calico-rr
+  namespace: kube-system
+  labels:
+    app: calico-rr
+spec:
+  updateStrategy:
+    type: RollingUpdate
+  selector:
+    matchLabels:
+      k8s-app: calico-rr
+  template:
+    metadata:
+      labels:
+        k8s-app: calico-rr
+      annotations:
+        scheduler.alpha.kubernetes.io/critical-pod: ''
+    spec:
+      tolerations:
+        - key: com.docker.ucp.kubernetes.calico/route-reflector
+          value: "true"
+          effect: NoSchedule
+      hostNetwork: true
+      containers:
+        - name: calico-rr
+          image: calico/routereflector:v0.6.1
+          env:
+            - name: ETCD_ENDPOINTS
+              valueFrom:
+                configMapKeyRef:
+                  name: calico-config
+                  key: etcd_endpoints
+            - name: ETCD_CA_CERT_FILE
+              valueFrom:
+                configMapKeyRef:
+                  name: calico-config
+                  key: etcd_ca
+            # Location of the client key for etcd.
+            - name: ETCD_KEY_FILE
+              valueFrom:
+                configMapKeyRef:
+                  name: calico-config
+                  key: etcd_key # Location of the client certificate for etcd.
+            - name: ETCD_CERT_FILE
+              valueFrom:
+                configMapKeyRef:
+                  name: calico-config
+                  key: etcd_cert
+            - name: IP
+              valueFrom:
+                fieldRef:
+                  fieldPath: status.podIP
+          volumeMounts:
+            - mountPath: /calico-secrets
+              name: etcd-certs
+          securityContext:
+            privileged: true
+      nodeSelector:
+        com.docker.ucp.kubernetes.calico/route-reflector: "true"
+      volumes:
+      # Mount in the etcd TLS secrets.
+        - name: etcd-certs
+          secret:
+            secretName: calico-etcd-secrets
+
+ +

Then, deploy the DaemonSet using:

+ +
kubectl create -f calico-rr.yaml
+
+ +
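You can confirm that a calico-rr pod is running on each dedicated node by filtering on the label defined in the DaemonSet:

kubectl -n kube-system get pods -l k8s-app=calico-rr -o wide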

Configure calicoctl

+ +

To reconfigure Calico to use Route Reflectors instead of a node-to-node mesh, +you’ll need to SSH into a UCP node and download the calicoctl tool.

+ +

Log in to a UCP node using SSH, and run:

+ +
sudo curl --location https://github.com/projectcalico/calicoctl/releases/download/v3.1.1/calicoctl \
+  --output /usr/bin/calicoctl
+sudo chmod +x /usr/bin/calicoctl
+
+ +

Now you need to configure calicoctl to communicate with the etcd key-value +store managed by UCP. Create a file named /etc/calico/calicoctl.cfg with +the following content:

+ +
apiVersion: projectcalico.org/v3
+kind: CalicoAPIConfig
+metadata:
+spec:
+  datastoreType: "etcdv3"
+  etcdEndpoints: "127.0.0.1:12378"
+  etcdKeyFile: "/var/lib/docker/volumes/ucp-node-certs/_data/key.pem"
+  etcdCertFile: "/var/lib/docker/volumes/ucp-node-certs/_data/cert.pem"
+  etcdCACertFile: "/var/lib/docker/volumes/ucp-node-certs/_data/ca.pem"
+
+ +

Disable node-to-node BGP mesh

+ +

Now that you’ve configured calicoctl, you can check the current Calico BGP +configuration:

+ +
sudo calicoctl get bgpconfig
+
+ +

If you don’t see any configuration listed, create one by running:

+ +
cat << EOF | sudo calicoctl create -f -
+apiVersion: projectcalico.org/v3
+kind: BGPConfiguration
+metadata:
+  name: default
+spec:
+  logSeverityScreen: Info
+  nodeToNodeMeshEnabled: false
+  asNumber: 63400
+EOF
+
+ +

This creates a new configuration with the node-to-node BGP mesh disabled. +If a configuration already exists and nodeToNodeMeshEnabled is set to true, update your +configuration:

+ +
sudo calicoctl get bgpconfig --output yaml > bgp.yaml
+
+ +

Edit the bgp.yaml file, updating nodeToNodeMeshEnabled to false. Then +update Calico configuration by running:

+ +
sudo calicoctl replace -f bgp.yaml
+
+ +

Configure Calico to use Route Reflectors

+ +

To configure Calico to use the Route Reflectors, you first need to know the AS number +for your network. To find it, run:

+ +
sudo calicoctl get nodes --output=wide
+
+ +

Now that you have the AS number, you can create the Calico configuration. +For each Route Reflector, customize and run the following snippet:

+ +
sudo calicoctl create -f - << EOF
+apiVersion: projectcalico.org/v3
+kind: BGPPeer
+metadata:
+  name: bgppeer-global
+spec:
+  peerIP: <IP_RR>
+  asNumber: <AS_NUMBER>
+EOF
+
+ +

Where <IP_RR> is the IP address of the node running the Route Reflector and <AS_NUMBER> is the AS number obtained in the previous step.

+ + +

You can learn more about this configuration in the +Calico documentation.

+ +

Stop calico-node pods

+ +

If you have calico-node pods running on the nodes dedicated for running the +Route Reflector, manually delete them. This ensures that you don’t have them +both running on the same node.

+ +

Using your UCP client bundle, run:

+ +
# Find the Pod name
+kubectl get pods -n kube-system -o wide | grep <node-name>
+
+# Delete the Pod
+kubectl delete pod -n kube-system <pod-name>
+
+ +

Validate peers

+ +

Now you can check that the calico-node pods running on the other nodes are +peering with the Route Reflector:

+ +
sudo calicoctl node status
+
+ +

You should see something like:

+ +
IPv4 BGP status
++--------------+-----------+-------+----------+-------------+
+| PEER ADDRESS | PEER TYPE | STATE |  SINCE   |    INFO     |
++--------------+-----------+-------+----------+-------------+
+| 172.31.24.86 | global    | up    | 23:10:04 | Established |
++--------------+-----------+-------+----------+-------------+
+
+IPv6 BGP status
+No IPv6 peers found.
+
diff --git a/ee/ucp/admin/configure/_site/enable-saml-authentication.html b/ee/ucp/admin/configure/_site/enable-saml-authentication.html new file mode 100644 index 0000000000..128a062055 --- /dev/null +++ b/ee/ucp/admin/configure/_site/enable-saml-authentication.html @@ -0,0 +1,107 @@ +
+

Beta disclaimer

+ +

This is beta content. It is not yet complete and should be considered a work in progress. This content is subject to change without notice.

+
+ +

SAML is commonly supported by enterprise authentication systems. SAML-based single sign-on (SSO) gives you access to UCP through a SAML 2.0-compliant identity provider.

+ +

UCP supports SAML for authentication as a service provider integrated with your identity provider.

+ +

For more information about SAML, see the SAML XML website.

+ +

UCP supports these identity providers:

+ + + +

Configure identity provider integration

+ +

There are values your identity provider needs for successful integration with UCP, as follows. These values can vary between identity providers. Consult your identity provider documentation for instructions on providing these values as part of their integration process.

+ +

Okta integration values

+ +

Okta integration requires these values:

+ + + +

ADFS integration values

+ +

ADFS integration requires these values:

+ + + +

Configure the SAML integration

+ +

To enable SAML authentication:

+ +
    +
  1. Go to the UCP web interface.
  2. +
  3. Navigate to the Admin Settings.
  4. +
  5. +

    Select Authentication & Authorization.

    + +

    Enabling SAML in UCP

    +
  6. +
  7. +

    In the SAML Enabled section, select Yes to display the required settings. The settings are grouped by those needed by the identity provider server and by those needed by UCP as a SAML service provider.

    + +

    Configuring IdP values for SAML in UCP

    +
  8. +
  9. In IdP Metadata URL enter the URL for the identity provider’s metadata.
  10. +
  11. If the metadata URL is publicly certified, you can leave Skip TLS Verification unchecked and Root Certificates Bundle blank, which is the default. If the metadata URL is NOT certified, you must provide the certificates from the identity provider in the Root Certificates Bundle field whether or not you check Skip TLS Verification.
  12. +
  13. +

    In UCP Host enter the URL that includes the IP address or domain of your UCP web interface. The current IP address appears by default.

    + +

    Configuring service provider values for SAML in UCP

    +
  14. +
  15. To customize the text of the sign-in button, enter your button text in the Customize Sign In Button Text field. The default text is ‘Sign in with SAML’.
  16. +
  17. The Service Provider Metadata URL and Assertion Consumer Service (ACS) URL appear in shaded boxes. Select the copy icon at the right side of each box to copy that URL to the clipboard for pasting in the identity provider workflow.
  18. +
  19. Select Save to complete the integration.
  20. +
+ +

Security considerations

+ +

You can download a client bundle to access UCP. A client bundle is a group of certificates downloadable directly from UCP web interface that enables command line as well as API access to UCP. It lets you authorize a remote Docker engine to access specific user accounts managed in Docker EE, absorbing all associated RBAC controls in the process. You can now execute docker swarm commands from your remote machine that take effect on the remote cluster. You can download the client bundle in the Admin Settings under My Profile.

+ +

Downloading UCP Client Profile

+ +
+

Caution

+ +

Users who have been previously authorized using a Client Bundle will continue to be able to access UCP regardless of the newly configured SAML access controls. To ensure that access from the client bundle is synced with the identity provider, we recommend the following steps. Otherwise, a previously-authorized user could get access to UCP through their existing client bundle.

+ + +
diff --git a/ee/ucp/admin/configure/_site/external-auth/enable-ldap-config-file.html b/ee/ucp/admin/configure/_site/external-auth/enable-ldap-config-file.html new file mode 100644 index 0000000000..bf1cc07c30 --- /dev/null +++ b/ee/ucp/admin/configure/_site/external-auth/enable-ldap-config-file.html @@ -0,0 +1,68 @@ +

Docker UCP integrates with LDAP directory services, so that you can manage +users and groups from your organization’s directory and automatically +propagate this information to UCP and DTR. You can set up your cluster’s LDAP +configuration by using the UCP web UI, or you can use a +UCP configuration file.

+ +

To see an example TOML config file that shows how to configure UCP settings, +run UCP with the example-config option. +Learn about UCP configuration files.

+ +
docker container run --rm /: example-config
+
+ +

Set up LDAP by using a configuration file

+ +
    +
  1. +

    Use the following command to extract the name of the currently active +configuration from the ucp-agent service.

    + +
        
    +$ CURRENT_CONFIG_NAME=$(docker service inspect --format '{{ range $config := .Spec.TaskTemplate.ContainerSpec.Configs }}{{ $config.ConfigName }}{{ "\n" }}{{ end }}' ucp-agent | grep 'com.docker.ucp.config-')
    +    
    +
    +
  2. +
  3. +

    Get the current configuration and save it to a TOML file.

    + +
        
    +docker config inspect --format '{{ printf "%s" .Spec.Data }}' $CURRENT_CONFIG_NAME > config.toml
    +    
    +
    +
  4. +
  5. +

    Use the output of the example-config command as a guide to edit your +config.toml file. In the [auth] section, set backend = "ldap", +and use the [auth.ldap] section to configure the LDAP integration the way you want.

    +
  6. +
  7. +

    Once you’ve finished editing your config.toml file, create a new Docker +Config object by using the following command.

    + +
    NEW_CONFIG_NAME="com.docker.ucp.config-$(( $(cut -d '-' -f 2 <<< "$CURRENT_CONFIG_NAME") + 1 ))"
    +docker config create $NEW_CONFIG_NAME config.toml
    +
    +
  8. +
  9. +

    Update the ucp-agent service to remove the reference to the old config +and add a reference to the new config.

    + +
    docker service update --config-rm "$CURRENT_CONFIG_NAME" --config-add "source=${NEW_CONFIG_NAME},target=/etc/ucp/ucp.toml" ucp-agent
    +
    +
  10. +
  11. +

    Wait a few moments for the ucp-agent service tasks to update across +your cluster. If you set jit_user_provisioning = true in the LDAP +configuration, users matching any of your specified search queries will +have their accounts created when they log in with their username and LDAP +password.

    +
  12. +
+ +

Where to go next

+ + diff --git a/ee/ucp/admin/configure/_site/external-auth/index.html b/ee/ucp/admin/configure/_site/external-auth/index.html new file mode 100644 index 0000000000..1ea0bb307a --- /dev/null +++ b/ee/ucp/admin/configure/_site/external-auth/index.html @@ -0,0 +1,351 @@ +

Docker UCP integrates with LDAP directory services, so that you can manage +users and groups from your organization’s directory and it will automatically +propagate that information to UCP and DTR.

+ +

If you enable LDAP, UCP uses a remote directory server to create users +automatically, and all logins are forwarded to the directory server.

+ +

When you switch from built-in authentication to LDAP authentication, +all manually created users whose usernames don’t match any LDAP search results +are still available.

+ +

When you enable LDAP authentication, you can choose whether UCP creates user +accounts only when users log in for the first time. Select the +Just-In-Time User Provisioning option to ensure that the only LDAP +accounts that exist in UCP are those that have had a user log in to UCP.

+ +

How UCP integrates with LDAP

+ +

You control how UCP integrates with LDAP by creating searches for users. +You can specify multiple search configurations, and you can specify multiple +LDAP servers to integrate with. Searches start with the Base DN, which is +the distinguished name of the node in the LDAP directory tree where the +search starts looking for users.

+ +

Access LDAP settings by navigating to the Authentication & Authorization +page in the UCP web UI. There are two sections for controlling LDAP searches +and servers.

+ + + +

Here’s what happens when UCP synchronizes with LDAP:

+ +
    +
  1. UCP creates a set of search results by iterating over each of the user +search configs, in the order that you specify.
  2. +
  3. UCP chooses an LDAP server from the list of domain servers by considering the +Base DN from the user search config and selecting the domain server that +has the longest domain suffix match.
  4. +
  5. If no domain server has a domain suffix that matches the Base DN from the +search config, UCP uses the default domain server.
  6. +
  7. UCP combines the search results into a list of users and creates UCP +accounts for them. If the Just-In-Time User Provisioning option is set, +user accounts are created only when users first log in.
  8. +
+ +

The domain server to use is determined by the Base DN in each search config. +UCP doesn’t perform search requests against each of the domain servers, only +the one which has the longest matching domain suffix, or the default if there’s +no match.

+ +

Here’s an example. Let’s say we have three LDAP domain servers:

+ + + + + + + + + + + + + + + + + + + + + + +
DomainServer URL
defaultldaps://ldap.example.com
dc=subsidiary1,dc=comldaps://ldap.subsidiary1.com
dc=subsidiary2,dc=subsidiary1,dc=comldaps://ldap.subsidiary2.com
+ +

Here are three user search configs with the following Base DNs:

+ + + +

If there are username collisions for the search results between domains, UCP +uses only the first search result, so the ordering of the user search configs +may be important. For example, if both the first and third user search configs +result in a record with the username jane.doe, the first has higher +precedence and the third is ignored. For this reason, it’s important to choose +a username attribute that’s unique for your users across all domains.

+ +

Because names may collide, it’s a good idea to use something unique to the +subsidiary, like the email address for each person. Users can log in with the +email address, for example, jane.doe@subsidiary1.com.

+ +

Configure the LDAP integration

+ +

To configure UCP to create and authenticate users by using an LDAP directory, +go to the UCP web UI, navigate to the Admin Settings page and click +Authentication & Authorization to select the method used to create and +authenticate users.

+ +

+ +

In the LDAP Enabled section, click Yes to display the LDAP settings. +Now configure your LDAP directory integration.

+ +

Default role for all private collections

+ +

Use this setting to change the default permissions of new users.

+ +

Click the dropdown to select the permission level that UCP assigns by default +to the private collections of new users. For example, if you change the value +to View Only, all users who log in for the first time after the setting is +changed have View Only access to their private collections, but permissions +remain unchanged for all existing users. +Learn more about permission levels.

+ +

LDAP enabled

+ +

Click Yes to enable integrating UCP users and teams with LDAP servers.

+ +

LDAP server

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
FieldDescription
LDAP server URLThe URL where the LDAP server can be reached.
Reader DNThe distinguished name of the LDAP account used for searching entries in the LDAP server. As a best practice, this should be an LDAP read-only user.
Reader passwordThe password of the account used for searching entries in the LDAP server.
Use Start TLSWhether to authenticate/encrypt the connection after connecting to the LDAP server over TCP. If you set the LDAP Server URL field with ldaps://, this field is ignored.
Skip TLS verificationWhether to verify the LDAP server certificate when using TLS. The connection is still encrypted but vulnerable to man-in-the-middle attacks.
No simple paginationIf your LDAP server doesn’t support pagination.
Just-In-Time User ProvisioningWhether to create user accounts only when users log in for the first time. The default value of true is recommended. If you upgraded from UCP 2.0.x, the default is false.
+ +

+ +

Click Confirm to add your LDAP domain.

+ +

To integrate with more LDAP servers, click Add LDAP Domain.

+ +

LDAP user search configurations

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
FieldDescription 
Base DNThe distinguished name of the node in the directory tree where the search should start looking for users. 
Username attributeThe LDAP attribute to use as username on UCP. Only user entries with a valid username will be created. A valid username is no longer than 100 characters and does not contain any unprintable characters, whitespace characters, or any of the following characters: / \ [ ] : ; | = , + * ? < > ' ". 
Full name attributeThe LDAP attribute to use as the user’s full name for display purposes. If left empty, UCP will not create new users with a full name value. 
FilterThe LDAP search filter used to find users. If you leave this field empty, all directory entries in the search scope with valid username attributes are created as users. 
Search subtree instead of just one levelWhether to perform the LDAP search on a single level of the LDAP tree, or search through the full LDAP tree starting at the Base DN. 
Match Group MembersWhether to further filter users by selecting those who are also members of a specific group on the directory server. This feature is helpful if the LDAP server does not support memberOf search filters. 
Iterate through group membersIf Select Group Members is selected, this option searches for users by first iterating over the target group’s membership, making a separate LDAP query for each member, as opposed to first querying for all users which match the above search query and intersecting those with the set of group members. This option can be more efficient in situations where the number of members of the target group is significantly smaller than the number of users which would match the above search filter, or if your directory server does not support simple pagination of search results. 
Group DNIf Select Group Members is selected, this specifies the distinguished name of the group from which to select users. 
Group Member AttributeIf Select Group Members is selected, the value of this group attribute corresponds to the distinguished names of the members of the group. 
+ +

+ +

To configure more user search queries, click Add LDAP User Search Configuration +again. This is useful in cases where users may be found in multiple distinct +subtrees of your organization’s directory. Any user entry which matches at +least one of the search configurations will be synced as a user.

+ +

LDAP test login

+ + + + + + + + + + + + + + + + + + +
FieldDescription
UsernameAn LDAP username for testing authentication to this application. This value corresponds with the Username Attribute specified in the LDAP user search configurations section.
PasswordThe user’s password used to authenticate (BIND) to the directory server.
+ +

Before you save the configuration changes, you should test that the integration +is correctly configured. You can do this by providing the credentials of an +LDAP user, and clicking the Test button.

+ +

LDAP sync configuration

+ + + + + + + + + + + + + + + + + + +
FieldDescription
Sync intervalThe interval, in hours, to synchronize users between UCP and the LDAP server. When the synchronization job runs, new users found in the LDAP server are created in UCP with the default permission level. UCP users that don’t exist in the LDAP server become inactive.
Enable sync of admin usersThis option specifies that system admins should be synced directly with members of a group in your organization’s LDAP directory. The admins will be synced to match the membership of the group. The configured recovery admin user will also remain a system admin.
+ +

Once you’ve configured the LDAP integration, UCP synchronizes users based on +the interval you’ve defined starting at the top of the hour. When the +synchronization runs, UCP stores logs that can help you troubleshoot when +something goes wrong.

+ +

You can also manually synchronize users by clicking Sync Now.

+ +

Revoke user access

+ +

When a user is removed from LDAP, the effect on the user’s UCP account depends +on the Just-In-Time User Provisioning setting:

+ + + +

Data synced from your organization’s LDAP directory

+ +

UCP saves a minimum amount of user data required to operate. This includes +the value of the username and full name attributes that you have specified in +the configuration as well as the distinguished name of each synced user. +UCP does not store any additional data from the directory server.

+ +

Sync teams

+ +

UCP enables syncing teams with a search query or group in your organization’s +LDAP directory. +Sync team members with your organization’s LDAP directory.

+ +

Where to go next

+ + diff --git a/ee/ucp/admin/configure/_site/integrate-with-multiple-registries.html b/ee/ucp/admin/configure/_site/integrate-with-multiple-registries.html new file mode 100644 index 0000000000..a2feb69d45 --- /dev/null +++ b/ee/ucp/admin/configure/_site/integrate-with-multiple-registries.html @@ -0,0 +1,65 @@ +

Universal Control Plane can pull and run images from any image registry, +including Docker Trusted Registry and Docker Store.

+ +

If your registry uses globally-trusted TLS certificates, everything works +out of the box, and you don’t need to configure anything. But if your registries +use self-signed certificates or certificates issued by your own Certificate +Authority, you need to configure UCP to trust those registries.

+ +

Trust Docker Trusted Registry

+ +

To configure UCP to trust a DTR deployment, you need to update the +UCP system configuration to include one entry for +each DTR deployment:

+ +
[[registries]]
+  host_address = "dtr.example.org"
+  ca_bundle = """
+-----BEGIN CERTIFICATE-----
+...
+-----END CERTIFICATE-----"""
+
+[[registries]]
+  host_address = "internal-dtr.example.org:444"
+  ca_bundle = """
+-----BEGIN CERTIFICATE-----
+...
+-----END CERTIFICATE-----"""
+
+ +

You only need to include the port section if your DTR deployment is running +on a port other than 443.

+ +

You can customize and use the script below to generate a file named +trust-dtr.toml with the configuration needed for your DTR deployment.

+ +
# Replace this url by your DTR deployment url and port
+DTR_URL=https://dtr.example.org
+DTR_PORT=443
+
+dtr_full_url=${DTR_URL}:${DTR_PORT}
+dtr_ca_url=${dtr_full_url}/ca
+
+# Strip protocol and default https port
+dtr_host_address=${dtr_full_url#"https://"}
+dtr_host_address=${dtr_host_address%":443"}
+
+# Create the registry configuration and save it
+cat <<EOL > trust-dtr.toml
+
+[[registries]]
+  # host address should not contain protocol or port if using 443
+  host_address = "$dtr_host_address"
+  ca_bundle = """
+$(curl -sk $dtr_ca_url)"""
+EOL
+
+ +

You can then append the content of trust-dtr.toml to your current UCP +configuration to make UCP trust this DTR deployment.

+ +
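One way to apply it is the same config-object procedure used for LDAP configuration files elsewhere in these docs; the commands below reuse that procedure and assume the ucp-agent service reads its configuration from /etc/ucp/ucp.toml:

# Find the currently active UCP configuration
CURRENT_CONFIG_NAME=$(docker service inspect --format '{{ range $config := .Spec.TaskTemplate.ContainerSpec.Configs }}{{ $config.ConfigName }}{{ "\n" }}{{ end }}' ucp-agent | grep 'com.docker.ucp.config-')

# Export it, append the registry entry, and create a new config object
docker config inspect --format '{{ printf "%s" .Spec.Data }}' $CURRENT_CONFIG_NAME > config.toml
cat trust-dtr.toml >> config.toml
NEW_CONFIG_NAME="com.docker.ucp.config-$(( $(cut -d '-' -f 2 <<< "$CURRENT_CONFIG_NAME") + 1 ))"
docker config create $NEW_CONFIG_NAME config.toml

# Point the ucp-agent service at the new configuration
docker service update --config-rm "$CURRENT_CONFIG_NAME" --config-add "source=${NEW_CONFIG_NAME},target=/etc/ucp/ucp.toml" ucp-agent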

Where to go next

+ + diff --git a/ee/ucp/admin/configure/_site/join-nodes/index.html b/ee/ucp/admin/configure/_site/join-nodes/index.html new file mode 100644 index 0000000000..c7e5e7ae54 --- /dev/null +++ b/ee/ucp/admin/configure/_site/join-nodes/index.html @@ -0,0 +1,59 @@ +

Docker Universal Control Plane is designed for high availability (HA). You can +join multiple manager nodes to the cluster, so that if one manager node fails, +another can automatically take its place without impact to the cluster.

+ +

Having multiple manager nodes in your cluster allows you to:

+ + + +

Size your deployment

+ +

To make the cluster tolerant to more failures, add additional manager nodes to +your cluster.

+ + + + + + + + + + + + + + + + + + + + + + +
Manager nodesFailures tolerated
10
31
52
+ +

For production-grade deployments, follow these rules of thumb:

+ + + +

Where to go next

+ + diff --git a/ee/ucp/admin/configure/_site/join-nodes/join-linux-nodes-to-cluster.html b/ee/ucp/admin/configure/_site/join-nodes/join-linux-nodes-to-cluster.html new file mode 100644 index 0000000000..d0836ae5c0 --- /dev/null +++ b/ee/ucp/admin/configure/_site/join-nodes/join-linux-nodes-to-cluster.html @@ -0,0 +1,143 @@ +

Docker EE is designed for scaling horizontally as your applications grow in +size and usage. You can add or remove nodes from the cluster to scale it +to your needs. You can join Windows Server 2016, IBM z System, and Linux nodes +to the cluster.

+ +

Because Docker EE leverages the clustering functionality provided by Docker +Engine, you use the docker swarm join +command to add more nodes to your cluster. When you join a new node, Docker EE +services start running on the node automatically.

+ +

Node roles

+ +

When you join a node to a cluster, you specify its role: manager or worker.

+ + + +

Join a node to the cluster

+ +

You can join Windows Server 2016, IBM z System, and Linux nodes to the cluster, +but only Linux nodes can be managers.

+ +

To join nodes to the cluster, go to the Docker EE web UI and navigate to the +Nodes page.

+ +
    +
  1. Click Add Node to add a new node.
  2. +
  3. Select the type of node to add, Windows or Linux.
  4. +
  5. Click Manager if you want to add the node as a manager.
  6. +
  7. Check the Use a custom listen address option to specify the address +and port where new node listens for inbound cluster management traffic.
  8. +
  9. Check the Use a custom advertise address option to specify the +IP address that’s advertised to all members of the cluster for API access.
  10. +
+ +

+ +

Copy the displayed command, use SSH to log in to the host that you want to +join to the cluster, and run the docker swarm join command on the host.

+ +
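If you need the join command again later, you can also retrieve it from any manager node, or through a UCP client bundle, instead of going back to the web UI:

# Print the join command for a worker node
docker swarm join-token worker

# Print the join command for a manager node
docker swarm join-token manager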

To add a Windows node, click Windows and follow the instructions in +Join Windows worker nodes to a cluster.

+ +

After you run the join command in the node, the node is displayed on the +Nodes page in the Docker EE web UI. From there, you can change the node’s +cluster configuration, including its assigned orchestrator type. +Learn how to change the orchestrator for a node.

+ +

Pause or drain a node

+ +

Once a node is part of the cluster, you can configure the node’s availability +so that it is:

+ + + +

Pause or drain a node from the Edit Node page:

+ +
    +
  1. In the Docker EE web UI, browse to the Nodes page and select the node.
  2. +
  3. In the details pane, click Configure and select Details to open +the Edit Node page.
  4. +
  5. In the Availability section, click Active, Pause, or Drain.
  6. +
  7. Click Save to change the availability of the node.
  8. +
+ +

+ +

Promote or demote a node

+ +

You can promote worker nodes to managers to make UCP fault tolerant. You can +also demote a manager node into a worker.

+ +

To promote or demote a manager node:

+ +
    +
  1. Navigate to the Nodes page, and click the node that you want to demote.
  2. +
  3. In the details pane, click Configure and select Details to open +the Edit Node page.
  4. +
  5. In the Role section, click Manager or Worker.
  6. +
  7. Click Save and wait until the operation completes.
  8. +
  9. Navigate to the Nodes page, and confirm that the node role has changed.
  10. +
+ +

If you’re load-balancing user requests to Docker EE across multiple manager +nodes, don’t forget to remove these nodes from your load-balancing pool when +you demote them to workers.

+ +

Remove a node from the cluster

+ +

You can remove worker nodes from the cluster at any time:

+ +
    +
  1. Navigate to the Nodes page and select the node.
  2. +
  3. In the details pane, click Actions and select Remove.
  4. +
  5. Click Confirm when you’re prompted.
  6. +
+ +

Since manager nodes are important to the cluster’s overall health, you need to +be careful when removing one from the cluster.

+ +

To remove a manager node:

+ +
    +
  1. Make sure all nodes in the cluster are healthy. Don’t remove manager nodes +if that’s not the case.
  2. +
  3. Demote the manager node into a worker.
  4. +
  5. Now you can remove that node from the cluster.
  6. +
+ +

Use the CLI to manage your nodes

+ +

You can use the Docker CLI client to manage your nodes from the CLI. To do +this, configure your Docker CLI client with a UCP client bundle.

+ +

Once you do that, you can start managing your UCP nodes:

+ +
docker node ls
+
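For example, the pause/drain and promote/demote operations described above have direct CLI equivalents; the node IDs below are placeholders taken from the docker node ls output:

# Drain a node so no new tasks are scheduled on it, then reactivate it
docker node update --availability drain <node-id>
docker node update --availability active <node-id>

# Promote a worker to manager, or demote a manager to worker
docker node promote <node-id>
docker node demote <node-id>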
diff --git a/ee/ucp/admin/configure/_site/join-nodes/join-windows-nodes-to-cluster.html b/ee/ucp/admin/configure/_site/join-nodes/join-windows-nodes-to-cluster.html new file mode 100644 index 0000000000..07939769d0 --- /dev/null +++ b/ee/ucp/admin/configure/_site/join-nodes/join-windows-nodes-to-cluster.html @@ -0,0 +1,235 @@ +

Docker Enterprise Edition supports worker nodes that run on Windows Server 2016 or 1709. +Only worker nodes are supported on Windows, and all manager nodes in the cluster +must run on Linux.

+ +

Follow these steps to enable a worker node on Windows.

+ +
    +
  1. Install Docker EE Engine on Windows Server 2016.
  2. +
  3. Configure the Windows node.
  4. +
  5. Join the Windows node to the cluster.
  6. +
+ +

Install Docker EE Engine on Windows Server 2016 or 1709

+ +

Install Docker EE Engine +on a Windows Server 2016 or 1709 instance to enable joining a cluster that’s managed by +Docker Enterprise Edition.

+ +

Configure the Windows node

+ +

Follow these steps to configure the docker daemon and the Windows environment.

+ +
    +
  1. Add a label to the node.
  2. +
  3. Pull the Windows-specific image of ucp-agent, which is named ucp-agent-win.
  4. +
  5. Run the Windows worker setup script provided with ucp-agent-win.
  6. +
  7. Join the cluster with the token provided by the Docker EE web UI or CLI.
  8. +
+ +

Add a label to the node

+ +

Configure the Docker Engine running on the node to have a label. This makes +it easier to deploy applications on nodes with this label.

+ +

Create the file C:\ProgramData\docker\config\daemon.json with the following +content:

+ +
{
+  "labels": ["os=windows"]
+}
+
+ +

Restart Docker for the changes to take effect:

+ +
Restart-Service docker
+
+ +

Pull the Windows-specific images

+ +

On a manager node, run the following command to list the images that are required +on Windows nodes.

+ +
docker container run --rm /: images --list --enable-windows
+/ucp-agent-win:
+/ucp-dsinfo-win:
+
+ +

On Windows Server 2016, in a PowerShell terminal running as Administrator, +log in to Docker Hub with the docker login command and pull the listed images.

+ +
docker image pull /ucp-agent-win:
+docker image pull /ucp-dsinfo-win:
+
+ +

Run the Windows node setup script

+ +

You need to open ports 2376 and 12376, and create certificates +for the Docker daemon to communicate securely. Use this command to run +the Windows node setup script:

+ +
$script = [ScriptBlock]::Create((docker run --rm /ucp-agent-win: windows-script | Out-String))
+
+Invoke-Command $script
+
+ +
+

Docker daemon restart

+ +

When you run windows-script, the Docker service is unavailable temporarily.

+
+ +

The Windows node is ready to join the cluster. Run the setup script on each +instance of Windows Server that will be a worker node.

+ +

Compatibility with daemon.json

+ +

The script may be incompatible with installations that use a config file at +C:\ProgramData\docker\config\daemon.json. If you use such a file, make sure +that the daemon runs on port 2376 and that it uses certificates located in +C:\ProgramData\docker\daemoncerts. If certificates don’t exist in this +directory, run ucp-agent-win generate-certs, as shown in Step 2 of the +procedure in Set up certs for the dockerd service.

+ +

In the daemon.json file, set the tlscacert, tlscert, and tlskey options +to the corresponding files in C:\ProgramData\docker\daemoncerts:

+ +
{
+...
+		"debug":     true,
+		"tls":       true,
+		"tlscacert": "C:\\ProgramData\\docker\\daemoncerts\\ca.pem",
+		"tlscert":   "C:\\ProgramData\\docker\\daemoncerts\\cert.pem",
+		"tlskey":    "C:\\ProgramData\\docker\\daemoncerts\\key.pem",
+		"tlsverify": true,
+...
+}
+
+ +

Join the Windows node to the cluster

+ +

Now you can join the cluster by using the docker swarm join command that’s +provided by the Docker EE web UI and CLI.

+ +
    +
  1. Log in to the Docker EE web UI with an administrator account.
  2. +
  3. Navigate to the Nodes page.
  4. +
  5. Click Add Node to add a new node.
  6. +
  7. In the Node Type section, click Windows.
  8. +
  9. In the Step 2 section, click the checkbox for +“I’m ready to join my windows node.”
  10. +
  11. Check the Use a custom listen address option to specify the address +and port where new node listens for inbound cluster management traffic.
  12. +
  13. +

    Check the Use a custom advertise address option to specify the +IP address that’s advertised to all members of the cluster for API access.

    + +

    +
  14. +
+ +

Copy the displayed command. It looks similar to the following:

+ +
docker swarm join --token <token> <ucp-manager-ip>
+
+ +

You can also use the command line to get the join token. Using your +UCP client bundle, run:

+ +
docker swarm join-token worker
+
+ +

Run the docker swarm join command on each instance of Windows Server that +will be a worker node.

+ +

Configure a Windows worker node manually

+ +

The following sections describe how to run the commands in the setup script +manually to configure the dockerd service and the Windows environment. +The script opens ports in the firewall and sets up certificates for dockerd.

+ +

To see the script, you can run the windows-script command on its own, without +executing its output.

+ +
docker container run --rm /ucp-agent-win: windows-script
+
+ +

Open ports in the Windows firewall

+ +

Docker EE requires that ports 2376 and 12376 are open for inbound TCP traffic.

+ +

In a PowerShell terminal running as Administrator, run these commands +to add rules to the Windows firewall.

+ +
netsh advfirewall firewall add rule name="docker_local" dir=in action=allow protocol=TCP localport=2376
+netsh advfirewall firewall add rule name="docker_proxy" dir=in action=allow protocol=TCP localport=12376
+
+ +

Set up certs for the dockerd service

+ +
    +
  1. Create the directory C:\ProgramData\docker\daemoncerts.
  2. +
  3. +

    In a PowerShell terminal running as Administrator, run the following command +to generate certificates.

    + +
    docker container run --rm -v C:\ProgramData\docker\daemoncerts:C:\certs /ucp-agent-win: generate-certs
    +
    +
  4. +
  5. +

    To set up certificates, run the following commands to stop and unregister the +dockerd service, register the service with the certificates, and restart the service.

    + +
    Stop-Service docker
    +dockerd --unregister-service
    +dockerd -H npipe:// -H 0.0.0.0:2376 --tlsverify --tlscacert=C:\ProgramData\docker\daemoncerts\ca.pem --tlscert=C:\ProgramData\docker\daemoncerts\cert.pem --tlskey=C:\ProgramData\docker\daemoncerts\key.pem --register-service
    +Start-Service docker
    +
    +
  6. +
+ +

The dockerd service and the Windows environment are now configured to join a Docker EE cluster.

+ +
+

TLS certificate setup

+ +

If the TLS certificates aren’t set up correctly, the Docker EE web UI shows the +following warning.

+ +
Node WIN-NOOQV2PJGTE is a Windows node that cannot connect to its local Docker daemon.
+
+
+ +

Windows nodes limitations

+ +

Some features are not yet supported on Windows nodes:

+ + diff --git a/ee/ucp/admin/configure/_site/join-nodes/use-a-load-balancer.html b/ee/ucp/admin/configure/_site/join-nodes/use-a-load-balancer.html new file mode 100644 index 0000000000..f1161c5a8d --- /dev/null +++ b/ee/ucp/admin/configure/_site/join-nodes/use-a-load-balancer.html @@ -0,0 +1,220 @@ +

Once you’ve joined multiple manager nodes for high-availability, you can +configure your own load balancer to balance user requests across all +manager nodes.

+ +

+ +

This allows users to access UCP using a centralized domain name. If +a manager node goes down, the load balancer can detect that and stop forwarding +requests to that node, so that the failure goes unnoticed by users.

+ +

Load-balancing on UCP

+ +

Since Docker UCP uses mutual TLS, make sure you configure your load balancer to:

+ + + +

Load balancing UCP and DTR

+ +

By default, both UCP and DTR use port 443. If you plan on deploying UCP and DTR, +your load balancer needs to distinguish traffic between the two by IP address +or port number.

+ + + +
+

Additional requirements

+ +

In addition to configuring your load balancer to distinguish between UCP and DTR, configuring a load balancer for DTR has additional requirements.

+
+ +

Configuration examples

+ +

Use the following examples to configure your load balancer for UCP.

+ + +
+
+
user  nginx;
+worker_processes  1;
+
+error_log  /var/log/nginx/error.log warn;
+pid        /var/run/nginx.pid;
+
+events {
+    worker_connections  1024;
+}
+
+stream {
+    upstream ucp_443 {
+        server <UCP_MANAGER_1_IP>:443 max_fails=2 fail_timeout=30s;
+        server <UCP_MANAGER_2_IP>:443 max_fails=2 fail_timeout=30s;
+        server <UCP_MANAGER_N_IP>:443  max_fails=2 fail_timeout=30s;
+    }
+    server {
+        listen 443;
+        proxy_pass ucp_443;
+    }
+}
+
+
+
+
global
+    log /dev/log    local0
+    log /dev/log    local1 notice
+
+defaults
+        mode    tcp
+        option  dontlognull
+        timeout connect     5s
+        timeout client      50s
+        timeout server      50s
+        timeout tunnel      1h
+        timeout client-fin  50s
+### frontends
+# Optional HAProxy Stats Page accessible at http://<host-ip>:8181/haproxy?stats
+frontend ucp_stats
+        mode http
+        bind 0.0.0.0:8181
+        default_backend ucp_stats
+frontend ucp_443
+        mode tcp
+        bind 0.0.0.0:443
+        default_backend ucp_upstream_servers_443
+### backends
+backend ucp_stats
+        mode http
+        option httplog
+        stats enable
+        stats admin if TRUE
+        stats refresh 5m
+backend ucp_upstream_servers_443
+        mode tcp
+        option httpchk GET /_ping HTTP/1.1\r\nHost:\ <UCP_FQDN>
+        server node01 <UCP_MANAGER_1_IP>:443 weight 100 check check-ssl verify none
+        server node02 <UCP_MANAGER_2_IP>:443 weight 100 check check-ssl verify none
+        server node03 <UCP_MANAGER_N_IP>:443 weight 100 check check-ssl verify none
+
+
+
+
{
+    "Subnets": [
+        "subnet-XXXXXXXX",
+        "subnet-YYYYYYYY",
+        "subnet-ZZZZZZZZ"
+    ],
+    "CanonicalHostedZoneNameID": "XXXXXXXXXXX",
+    "CanonicalHostedZoneName": "XXXXXXXXX.us-west-XXX.elb.amazonaws.com",
+    "ListenerDescriptions": [
+        {
+            "Listener": {
+                "InstancePort": 443,
+                "LoadBalancerPort": 443,
+                "Protocol": "TCP",
+                "InstanceProtocol": "TCP"
+            },
+            "PolicyNames": []
+        }
+    ],
+    "HealthCheck": {
+        "HealthyThreshold": 2,
+        "Interval": 10,
+        "Target": "HTTPS:443/_ping",
+        "Timeout": 2,
+        "UnhealthyThreshold": 4
+    },
+    "VPCId": "vpc-XXXXXX",
+    "BackendServerDescriptions": [],
+    "Instances": [
+        {
+            "InstanceId": "i-XXXXXXXXX"
+        },
+        {
+            "InstanceId": "i-XXXXXXXXX"
+        },
+        {
+            "InstanceId": "i-XXXXXXXXX"
+        }
+    ],
+    "DNSName": "XXXXXXXXXXXX.us-west-2.elb.amazonaws.com",
+    "SecurityGroups": [
+        "sg-XXXXXXXXX"
+    ],
+    "Policies": {
+        "LBCookieStickinessPolicies": [],
+        "AppCookieStickinessPolicies": [],
+        "OtherPolicies": []
+    },
+    "LoadBalancerName": "ELB-UCP",
+    "CreatedTime": "2017-02-13T21:40:15.400Z",
+    "AvailabilityZones": [
+        "us-west-2c",
+        "us-west-2a",
+        "us-west-2b"
+    ],
+    "Scheme": "internet-facing",
+    "SourceSecurityGroup": {
+        "OwnerAlias": "XXXXXXXXXXXX",
+        "GroupName":  "XXXXXXXXXXXX"
+    }
+}
+
+
+
+ +

You can deploy your load balancer using:

+ + +
+
+
# Create the nginx.conf file, then
+# deploy the load balancer
+
+docker run --detach \
+  --name ucp-lb \
+  --restart=unless-stopped \
+  --publish 443:443 \
+  --volume ${PWD}/nginx.conf:/etc/nginx/nginx.conf:ro \
+  nginx:stable-alpine
+
+
+
+
# Create the haproxy.cfg file, then
+# deploy the load balancer
+
+docker run --detach \
+  --name ucp-lb \
+  --publish 443:443 \
+  --publish 8181:8181 \
+  --restart=unless-stopped \
+  --volume ${PWD}/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro \
+  haproxy:1.7-alpine haproxy -d -f /usr/local/etc/haproxy/haproxy.cfg
+
+
+
+ +

Where to go next

+ + diff --git a/ee/ucp/admin/configure/_site/license-your-installation.html b/ee/ucp/admin/configure/_site/license-your-installation.html new file mode 100644 index 0000000000..df202a7c58 --- /dev/null +++ b/ee/ucp/admin/configure/_site/license-your-installation.html @@ -0,0 +1,29 @@ +

After installing Docker Universal Control Plane, you need to license your +installation. Here’s how to do it.

+ +

Download your license

+ +

Go to Docker Store and +download your UCP license, or get a free trial license.

+ +

+ +

License your installation

+ +

Once you’ve downloaded the license file, you can apply it to your UCP +installation.

+ +

In the UCP web UI, log in with administrator credentials and +navigate to the Admin Settings page.

+ +

In the left pane, click License and click Upload License. The +license refreshes immediately, and you don’t need to click Save.

+ +

+ +

Where to go next

+ + diff --git a/ee/ucp/admin/configure/_site/manage-and-deploy-private-images.html b/ee/ucp/admin/configure/_site/manage-and-deploy-private-images.html new file mode 100644 index 0000000000..25e47947cc --- /dev/null +++ b/ee/ucp/admin/configure/_site/manage-and-deploy-private-images.html @@ -0,0 +1,150 @@ +

Docker Enterprise Edition (EE) has its own image registry (DTR) so that +you can store and manage the images that you deploy to your cluster. +In this topic, you push an image to DTR and later deploy it to your cluster, +using the Kubernetes orchestrator.

+ +

Open the DTR web UI

+ +
    +
  1. In the Docker EE web UI, click Admin Settings.
  2. +
  3. In the left pane, click Docker Trusted Registry.
  4. +
  5. +

    In the Installed DTRs section, note the URL of your cluster’s DTR +instance.

    + +

    +
  6. +
  7. In a new browser tab, enter the URL to open the DTR web UI.
  8. +
+ +

Create an image repository

+ +
    +
  1. In the DTR web UI, click Repositories.
  2. +
  3. Click New Repository, and in the Repository Name field, enter +“wordpress”.
  4. +
  5. +

    Click Save to create the repository.

    + +

    +
  6. +
+ +

Push an image to DTR

+ +

Instead of building an image from scratch, we’ll pull the official WordPress +image from Docker Hub, tag it, and push it to DTR. Once that WordPress version +is in DTR, only authorized users can change it.

+ +

To push images to DTR, you need CLI access to a licensed installation of +Docker EE.

+ + + +

When you’re set up for CLI-based access to a licensed Docker EE instance, +you can push images to DTR.
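You also typically need to authenticate against DTR with an account that can push to the repository before running the push, for example:

docker login <dtr-url>:<port>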

+ +
    +
  1. +

    Pull the public WordPress image from Docker Hub:

    + +
    docker pull wordpress
    +
    +
  2. +
  3. +

    Tag the image, using the IP address or DNS name of your DTR instance:

    + +
    docker tag wordpress:latest <dtr-url>:<port>/admin/wordpress:latest
    +
    +
  4. +
  5. Log in to a Docker EE manager node.
  6. +
  7. +

    Push the tagged image to DTR:

    + +
    docker image push <dtr-url>:<port>/admin/wordpress:latest
    +
    +
  8. +
+ +

Confirm the image push

+ +

In the DTR web UI, confirm that the wordpress:latest image is stored in your +DTR instance.

+ +
    +
  1. In the DTR web UI, click Repositories.
  2. +
  3. Click wordpress to open the repo.
  4. +
  5. Click Images to view the stored images.
  6. +
  7. +

    Confirm that the latest tag is present.

    + +

    +
  8. +
+ +

You’re ready to deploy the wordpress:latest image into production.

+ +

Deploy the private image to UCP

+ +

With the WordPress image stored in DTR, Docker EE can deploy the image to a +Kubernetes cluster with a simple Deployment object:

+ +
apiVersion: apps/v1beta2
+kind: Deployment
+metadata:
+  name: wordpress-deployment
+spec:
+  selector:
+    matchLabels:
+      app: wordpress
+  replicas: 2
+  template:
+    metadata:
+      labels:
+        app: wordpress
+    spec:
+      containers:
+      - name: wordpress
+        image: <dtr-url>:<port>/admin/wordpress:latest
+        ports:
+        - containerPort: 80
+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: wordpress-service
+  labels:
+    app: wordpress
+spec:
+  type: NodePort
+  ports:
+    - port: 80
+      nodePort: 30081
+  selector:
+    app: wordpress
+
+ +

The Deployment object’s YAML specifies your DTR image in the pod template spec: +image: <dtr-url>:<port>/admin/wordpress:latest. Also, the YAML file defines +a NodePort service that exposes the WordPress application, so it’s accessible +from outside the cluster.

+ +
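If you prefer the CLI, the same YAML can be applied with kubectl from a machine configured with a UCP client bundle; this sketch assumes you saved the YAML above as wordpress.yaml:

kubectl create -f wordpress.yaml

# Confirm the Deployment and the NodePort service were created
kubectl get deployment wordpress-deployment
kubectl get service wordpress-service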
    +
  1. Open the Docker EE web UI, and in the left pane, click Kubernetes.
  2. +
  3. Click Create to open the Create Kubernetes Object page.
  4. +
  5. In the Namespace dropdown, select default.
  6. +
  7. In the Object YAML editor, paste the Deployment object’s YAML.
  8. +
  9. Click Create. When the Kubernetes objects are created, +the Load Balancers page opens.
  10. +
  11. Click wordpress-service, and in the details pane, find the Ports +section.
  12. +
  13. +

    Click the URL to open the default WordPress home page.

    + +

    +
  14. +
+ diff --git a/ee/ucp/admin/configure/_site/restrict-services-to-worker-nodes.html b/ee/ucp/admin/configure/_site/restrict-services-to-worker-nodes.html new file mode 100644 index 0000000000..b0efcd6cb3 --- /dev/null +++ b/ee/ucp/admin/configure/_site/restrict-services-to-worker-nodes.html @@ -0,0 +1,20 @@ +

You can configure UCP to allow users to deploy and run services only in +worker nodes. This ensures all cluster management functionality stays +performant, and makes the cluster more secure.

+ +

If a user deploys a malicious service that can affect the node where it +is running, it won’t be able to affect other nodes in the cluster, or +any cluster management functionality.

+ +

To restrict users from deploying to manager nodes, log in with administrator +credentials to the UCP web UI, navigate to the Admin Settings +page, and choose Scheduler.

+ +

+ +

You can then choose if user services should be allowed to run on manager nodes +or not.

+ +

Having a grant with the Scheduler role against the / collection takes +precedence over any other grants with Node Schedule on subcollections.

+ diff --git a/ee/ucp/admin/configure/_site/run-only-the-images-you-trust.html b/ee/ucp/admin/configure/_site/run-only-the-images-you-trust.html new file mode 100644 index 0000000000..336b9f4041 --- /dev/null +++ b/ee/ucp/admin/configure/_site/run-only-the-images-you-trust.html @@ -0,0 +1,57 @@ +

With Docker Universal Control Plane you can enforce that applications use only Docker images signed by UCP users you trust. When a user tries to deploy an application to the cluster, UCP checks whether the application uses a Docker image that is not trusted, and won't continue with the deployment if that's the case.

+ +

Enforce image signing

+ +

By signing and verifying the Docker images, you ensure that the images being +used in your cluster are the ones you trust and haven’t been altered either in +the image registry or on their way from the image registry to your UCP cluster.

+ +

Example workflow

+ +

Here’s an example of a typical workflow:

+ +
1. A developer makes changes to a service and pushes their changes to a version control system.
2. A CI system creates a build, runs tests, and pushes an image to DTR with the new changes.
3. The quality engineering team pulls the image and runs more tests. If everything looks good they sign and push the image.
4. The IT operations team deploys a service. If the image used for the service was signed by the QA team, UCP deploys it. Otherwise UCP refuses to deploy.
+ +
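As a rough sketch of step 3, the QA team member can sign the image while pushing by enabling Docker Content Trust in their shell; the registry URL and repository name are placeholders:

export DOCKER_CONTENT_TRUST=1
docker image push <dtr-url>/<account>/<repository>:<tag>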

Configure UCP

+ +

To configure UCP to only allow running services that use Docker images you +trust, go to the UCP web UI, navigate to the Admin Settings page, and in +the left pane, click Docker Content Trust.

+ +

Select the Run Only Signed Images option to only allow deploying +applications if they use images you trust.

+ +

UCP settings

+ +

With this setting, UCP allows deploying any image as long as the image has +been signed. It doesn’t matter who signed the image.

+ +

To enforce that the image needs to be signed by specific teams, click Add Team +and select those teams from the list.

+ +

UCP settings

+ +

If you specify multiple teams, the image needs to be signed by a member of each +team, or someone that is a member of all those teams.

+ +

Click Save for UCP to start enforcing the policy. From now on, existing +services will continue running and can be restarted if needed, but UCP will only +allow deploying new services that use a trusted image.

+ +

Where to go next

+ + diff --git a/ee/ucp/admin/configure/_site/scale-your-cluster.html b/ee/ucp/admin/configure/_site/scale-your-cluster.html new file mode 100644 index 0000000000..694d7fb706 --- /dev/null +++ b/ee/ucp/admin/configure/_site/scale-your-cluster.html @@ -0,0 +1,164 @@ +

Docker UCP is designed for scaling horizontally as your applications grow in +size and usage. You can add or remove nodes from the UCP cluster to make it +scale to your needs.

+ +

+ +

Since UCP leverages the clustering functionality provided by Docker Engine, +you use the docker swarm join +command to add more nodes to your cluster. When joining new nodes, the UCP +services automatically start running in that node.

+ +

When joining a node to a cluster you can specify its role: manager or worker.

+ + + +

Join nodes to the cluster

+ +

To join nodes to the cluster, go to the UCP web UI and navigate to the Nodes +page.

+ +

+ +

Click Add Node to add a new node.

+ +

+ + + +

Copy the displayed command, use ssh to log into the host that you want to +join to the cluster, and run the docker swarm join command on the host.

+ +

To add a Windows node, click Windows and follow the instructions in +Join Windows worker nodes to a cluster.

+ +

After you run the join command in the node, you can view the node in the +UCP web UI.

+ +

Remove nodes from the cluster

+ +
1. If the target node is a manager, you will need to first demote the node into a worker before proceeding with the removal (see the command sketch after this list).

2. If the status of the worker node is Ready, you'll need to manually force the node to leave the cluster. To do this, connect to the target node through SSH and run docker swarm leave --force directly against the local docker engine.

   Loss of quorum

   Do not perform this step if the node is still a manager, as this may cause loss of quorum.

3. Now that the status of the node is reported as Down, you may remove the node (see the command sketch after this list).
+ +
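A rough command sequence for the removal flow above, run from a manager node; the node name node2 is a placeholder:

docker node demote node2    # only needed if node2 is currently a manager
# if node2 still reports a Ready status, run "docker swarm leave --force" on node2 itself
docker node ls              # wait until node2 reports Down
docker node rm node2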

Pause and drain nodes

+ +

Once a node is part of the cluster you can change its role, making a manager node into a worker and vice versa. You can also configure the node availability so that it is Active (the node can receive new tasks), Paused (the node doesn't receive new tasks, but existing tasks keep running), or Drained (the node doesn't receive new tasks and existing tasks are rescheduled to other nodes).

+ + + +

In the UCP web UI, browse to the Nodes page and select the node. In the details pane, click Configure to open the Edit Node page.

+ +

+ +

If you’re load-balancing user requests to UCP across multiple manager nodes, +when demoting those nodes into workers, don’t forget to remove them from your +load-balancing pool.

+ +

Use the CLI to scale your cluster

+ +

You can also use the command line to do all of the above operations. To get the +join token, run the following command on a manager node:

+ +
docker swarm join-token worker
+
+ +

If you want to add a new manager node instead of a worker node, use +docker swarm join-token manager instead. If you want to use a custom listen +address, add the --listen-addr arg:

+ +
docker swarm join \
+    --token SWMTKN-1-2o5ra9t7022neymg4u15f3jjfh0qh3yof817nunoioxa9i7lsp-dkmt01ebwp2m0wce1u31h6lmj \
+    --listen-addr 234.234.234.234 \
+    192.168.99.100:2377
+
+ +

Once your node is added, you can see it by running docker node ls on a manager:

+ +
docker node ls
+
+ +

To change the node’s availability, use:

+ +
docker node update --availability drain node2
+
+ +

You can set the availability to active, pause, or drain.

+ +

To remove the node, use:

+ +
docker node rm <node-hostname>
+
+ +

Where to go next

+ + diff --git a/ee/ucp/admin/configure/_site/set-orchestrator-type.html b/ee/ucp/admin/configure/_site/set-orchestrator-type.html new file mode 100644 index 0000000000..bfa20e0bf8 --- /dev/null +++ b/ee/ucp/admin/configure/_site/set-orchestrator-type.html @@ -0,0 +1,217 @@ +

When you add a node to the cluster, the node’s workloads are managed by a +default orchestrator, either Docker Swarm or Kubernetes. When you install +Docker EE, new nodes are managed by Docker Swarm, but you can change the +default orchestrator to Kubernetes in the administrator settings.

+ +

Changing the default orchestrator doesn’t affect existing nodes in the cluster. +You can change the orchestrator type for individual nodes in the cluster +by navigating to the node’s configuration page in the Docker EE web UI.

+ +

Change the orchestrator for a node

+ +

You can change the current orchestrator for any node that’s joined to a +Docker EE cluster. The available orchestrator types are Kubernetes, +Swarm, and Mixed.

+ +

The Mixed type enables workloads to be scheduled by both Kubernetes and Swarm on the same node. Although you can choose to mix orchestrator types on the same node, this isn't recommended for production deployments because of the likelihood of resource contention.

+ +

Change a node’s orchestrator type on the Edit node page:

+ +
1. Log in to the Docker EE web UI with an administrator account.
2. Navigate to the Nodes page, and click the node that you want to assign to a different orchestrator.
3. In the details pane, click Configure and select Details to open the Edit node page.
4. In the Orchestrator properties section, click the orchestrator type for the node.
5. Click Save to assign the node to the selected orchestrator.
+ +

What happens when you change a node’s orchestrator

+ +

When you change the orchestrator type for a node, existing workloads are +evicted, and they’re not migrated to the new orchestrator automatically. +If you want the workloads to be scheduled by the new orchestrator, you must +migrate them manually. For example, if you deploy WordPress on a Swarm +node, and you change the node’s orchestrator type to Kubernetes, Docker EE +doesn’t migrate the workload, and WordPress continues running on Swarm. In +this case, you must migrate your WordPress deployment to Kubernetes manually.

+ +

The following table summarizes the results of changing a node’s orchestrator.

+ + + + + + + + + + + + + + + + + + + + + + + + + + +
Workload | On orchestrator change
Containers | Container continues running in node
Docker service | Node is drained, and tasks are rescheduled to another node
Pods and other imperative resources | Continue running in node
Deployments and other declarative resources | Might change, but for now, continue running in node
+ +

If a node is running containers, and you change the node to Kubernetes, these containers will continue running, and Kubernetes won't be aware of them, so you'll be in the same situation as if you were running in Mixed mode.

+ +
+

Be careful when mixing orchestrators on a node.

+ +

When you change a node’s orchestrator, you can choose to run the node in a +mixed mode, with both Kubernetes and Swarm workloads. The Mixed type +is not intended for production use, and it may impact existing workloads +on the node.

+ +

This is because the two orchestrator types have different views of the node’s +resources, and they don’t know about each other’s workloads. One orchestrator +can schedule a workload without knowing that the node’s resources are already +committed to another workload that was scheduled by the other orchestrator. +When this happens, the node could run out of memory or other resources.

+ +

For this reason, we recommend against mixing orchestrators on a production +node.

+
+ +

Set the default orchestrator type for new nodes

+ +

You can set the default orchestrator for new nodes to Kubernetes or +Swarm.

+ +

To set the orchestrator for new nodes:

+ +
1. Log in to the Docker EE web UI with an administrator account.
2. Open the Admin Settings page, and in the left pane, click Scheduler.
3. Under Set orchestrator type for new nodes click Swarm or Kubernetes.
4. Click Save.
+ +

From now on, when you join a node to the cluster, new workloads on the node +are scheduled by the specified orchestrator type. Existing nodes in the cluster +aren’t affected.

+ +

Once a node is joined to the cluster, you can +change the orchestrator that schedules its +workloads.

+ +

Choosing the orchestrator type

+ +

The workloads on your cluster can be scheduled by Kubernetes or by Swarm, or +the cluster can be mixed, running both orchestrator types. If you choose to +run a mixed cluster, be aware that the different orchestrators aren’t aware of +each other, and there’s no coordination between them.

+ +

We recommend that you make the decision about orchestration when you set up the +cluster initially. Commit to Kubernetes or Swarm on all nodes, or assign each +node individually to a specific orchestrator. Once you start deploying workloads, +avoid changing the orchestrator setting. If you do change the orchestrator for a +node, your workloads are evicted, and you must deploy them again through the +new orchestrator.

+ +
+

Node demotion and orchestrator type

+ +

When you promote a worker node to be a manager, its orchestrator type +automatically changes to Mixed. If you demote the same node to be a worker, +its orchestrator type remains as Mixed.

+
+ +

Use the CLI to set the orchestrator type

+ +

Set the orchestrator on a node by assigning the orchestrator labels, +com.docker.ucp.orchestrator.swarm or com.docker.ucp.orchestrator.kubernetes, +to true.

+ +

To schedule Swarm workloads on a node:

+ +
docker node update --label-add com.docker.ucp.orchestrator.swarm=true <node-id>
+
+ +

To schedule Kubernetes workloads on a node:

+ +
docker node update --label-add com.docker.ucp.orchestrator.kubernetes=true <node-id>
+
+ +

To schedule Kubernetes and Swarm workloads on a node:

+ +
docker node update --label-add com.docker.ucp.orchestrator.swarm=true <node-id>
+docker node update --label-add com.docker.ucp.orchestrator.kubernetes=true <node-id>
+
+ +
+

Mixed nodes

+ +

Scheduling both Kubernetes and Swarm workloads on a node is not recommended +for production deployments, because of the likelihood of resource contention.

+
+ +

To change the orchestrator type for a node from Swarm to Kubernetes:

+ +
docker node update --label-add com.docker.ucp.orchestrator.kubernetes=true <node-id>
+docker node update --label-rm com.docker.ucp.orchestrator.swarm <node-id>
+
+ +

UCP detects the node label change and updates the Kubernetes node accordingly.

+ +

Check the value of the orchestrator label by inspecting the node:

+ +
docker node inspect <node-id> | grep -i orchestrator
+
+ +

The docker node inspect command returns the node’s configuration, including +the orchestrator:

+ +
"com.docker.ucp.orchestrator.kubernetes": "true"
+
+ +
+

Orchestrator label

+ +

The com.docker.ucp.orchestrator label isn’t displayed in the Labels +list for a node in the Docker EE web UI.

+
+ +

Set the default orchestrator type for new nodes

+ +

The default orchestrator for new nodes is a setting in the Docker EE +configuration file:

+ +
default_node_orchestrator = "swarm"
+
+ +

The value can be swarm or kubernetes.

+ +
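For reference, the same default can be set in the UCP configuration file described later in this document, under the scheduling_configuration table; a minimal sketch:

[scheduling_configuration]
  default_node_orchestrator = "kubernetes"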

Where to go next

+ + diff --git a/ee/ucp/admin/configure/_site/set-session-timeout.html b/ee/ucp/admin/configure/_site/set-session-timeout.html new file mode 100644 index 0000000000..fb2281fa73 --- /dev/null +++ b/ee/ucp/admin/configure/_site/set-session-timeout.html @@ -0,0 +1,32 @@ +

Docker Universal Control Plane enables setting properties of user sessions, +like session timeout and number of concurrent sessions.

+ +

To configure UCP login sessions, go to the UCP web UI, navigate to the +Admin Settings page and click Authentication & Authorization.

+ +

+ +

Login session controls

+ + + + + + + + + + + + + + + + + + + + + + +
Field | Description
Lifetime Minutes | The initial lifetime of a login session, from the time UCP generates it. When this time expires, UCP invalidates the session, and the user must authenticate again to establish a new session. The default is 4320 minutes, which is 72 hours.
Renewal Threshold Minutes | The time before session expiration when UCP extends an active session. UCP extends the session by the number of minutes specified in Lifetime Minutes. The threshold value can't be greater than Lifetime Minutes. The default is 1440 minutes, which is 24 hours. To specify that sessions are extended with every use, set the threshold equal to the lifetime. To specify that sessions are never extended, set the threshold to zero. This may cause users to be logged out unexpectedly while using the UCP web UI.
Per User Limit | The maximum number of simultaneous logins for a user. If creating a new session exceeds this limit, UCP deletes the least recently used session. To disable the limit, set the value to zero.
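These fields map to the auth.sessions table of the UCP configuration file covered later in this document; a minimal sketch using the default values:

[auth.sessions]
  lifetime_minutes = 4320
  renewal_threshold_minutes = 1440
  per_user_limit = 5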
diff --git a/ee/ucp/admin/configure/_site/store-logs-in-an-external-system.html b/ee/ucp/admin/configure/_site/store-logs-in-an-external-system.html new file mode 100644 index 0000000000..04d1bf896b --- /dev/null +++ b/ee/ucp/admin/configure/_site/store-logs-in-an-external-system.html @@ -0,0 +1,61 @@ +

You can configure UCP to send logs to a remote logging service:

+ +
1. Log in to UCP with an administrator account.
2. Navigate to the Admin Settings page.
3. Set the information about your logging server, and click Save.
+ +
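The same settings can also be expressed in the log_configuration table of the UCP configuration file covered later in this document; a rough sketch, where the host value is a placeholder for your own syslog or Logstash endpoint:

[log_configuration]
  protocol = "tcp"
  host = "<syslog-host>:<port>"
  level = "info"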

+ +
+

External system for logs

+ +

Administrators should configure Docker EE to store logs using an external +system. By default, the Docker daemon doesn’t delete logs, which means that +in a production system with intense usage, your logs can consume a +significant amount of disk space.

+
+ +

Example: Setting up an ELK stack

+ +

One popular logging stack is composed of Elasticsearch, Logstash, and Kibana. The following example demonstrates how to set up a deployment that can be used for logging.

+ +
docker volume create --name orca-elasticsearch-data
+
+docker container run -d \
+    --name elasticsearch \
+    -v orca-elasticsearch-data:/usr/share/elasticsearch/data \
+    elasticsearch elasticsearch -Enetwork.host=0.0.0.0
+
+docker container run -d \
+    -p 514:514 \
+    --name logstash \
+    --link elasticsearch:es \
+    logstash \
+    sh -c "logstash -e 'input { syslog { } } output { stdout { } elasticsearch { hosts => [ \"es\" ] } } filter { json { source => \"message\" } }'"
+
+docker container run -d \
+    --name kibana \
+    --link elasticsearch:elasticsearch \
+    -p 5601:5601 \
+    kibana
+
+ +

Once you have these containers running, configure UCP to send logs to +the IP of the Logstash container. You can then browse to port 5601 on the system +running Kibana and browse log/event entries. You should specify the “time” +field for indexing.

+ +

When deployed in a production environment, you should secure your ELK +stack. UCP does not do this itself, but there are a number of 3rd party +options that can accomplish this, like the Shield plug-in for Kibana.

+ +

Where to go next

+ + diff --git a/ee/ucp/admin/configure/_site/ucp-configuration-file.html b/ee/ucp/admin/configure/_site/ucp-configuration-file.html new file mode 100644 index 0000000000..1366d0019c --- /dev/null +++ b/ee/ucp/admin/configure/_site/ucp-configuration-file.html @@ -0,0 +1,673 @@ +

You have two options to configure UCP: through the web UI, or using a Docker +config object. In most cases, the web UI is a front-end for changing the +configuration file.

+ +

You can customize how UCP is installed by creating a configuration file upfront. +During the installation UCP detects and starts using the configuration.

+ +

UCP configuration file

+ +

The ucp-agent service uses a configuration file to set up UCP. +You can use the configuration file in different ways to set up your UCP +cluster.

+ + + +

Specify your configuration settings in a TOML file. +Learn about Tom’s Obvious, Minimal Language.

+ +

The configuration has a versioned naming convention, with a trailing decimal +number that increases with each version, like com.docker.ucp.config-1. The +ucp-agent service maps the configuration to the file at /etc/ucp/ucp.toml.

+ +

Inspect and modify existing configuration

+ +

Use the docker config inspect command to view the current settings and emit +them to a file.

+ +

+# CURRENT_CONFIG_NAME will be the name of the currently active UCP configuration
+CURRENT_CONFIG_NAME=$(docker service inspect ucp-agent --format '{{range .Spec.TaskTemplate.ContainerSpec.Configs}}{{if eq "/etc/ucp/ucp.toml" .File.Name}}{{.ConfigName}}{{end}}{{end}}')
+# Collect the current config with `docker config inspect`
+docker config inspect --format '{{ printf "%s" .Spec.Data }}' $CURRENT_CONFIG_NAME > ucp-config.toml
+
+
+ +

Edit the file, then use the docker config create and docker service update +commands to create and apply the configuration from the file.

+ +
# NEXT_CONFIG_NAME will be the name of the new UCP configuration
+NEXT_CONFIG_NAME=${CURRENT_CONFIG_NAME%%-*}-$((${CURRENT_CONFIG_NAME##*-}+1))
+# Create the new cluster configuration from the file ucp-config.toml
+docker config create $NEXT_CONFIG_NAME  ucp-config.toml
+# Use the `docker service update` command to remove the current configuration
+# and apply the new configuration to the `ucp-agent` service.
+docker service update --config-rm $CURRENT_CONFIG_NAME --config-add source=$NEXT_CONFIG_NAME,target=/etc/ucp/ucp.toml ucp-agent
+
+ +

Example configuration file

+ +

You can see an example TOML config file that shows how to configure UCP +settings. From the command line, run UCP with the example-config option:

+ +
docker container run --rm docker/ucp:<version> example-config
+
+ +

Configuration options

+ +

auth table

+ + + + + + + + + + + + + + + + + + + + + +
Parameter | Required | Description
backend | no | The name of the authorization backend to use, either managed or ldap. The default is managed.
default_new_user_role | no | The role that new users get for their private resource sets. Values are admin, viewonly, scheduler, restrictedcontrol, or fullcontrol. The default is restrictedcontrol.
+ +

auth.sessions

+ + + + + + + + + + + + + + + + + + + + + + + + + + +
Parameter | Required | Description
lifetime_minutes | no | The initial session lifetime, in minutes. The default is 4320, which is 72 hours.
renewal_threshold_minutes | no | The length of time, in minutes, before the expiration of a session where, if used, a session will be extended by the current configured lifetime from then. A zero value disables session extension. The default is 1440, which is 24 hours.
per_user_limit | no | The maximum number of sessions that a user can have active simultaneously. If creating a new session would put a user over this limit, the least recently used session will be deleted. A value of zero disables limiting the number of sessions that users may have. The default is 5.
+ +

auth.ldap (optional)

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Parameter | Required | Description
server_url | no | The URL of the LDAP server.
no_simple_pagination | no | Set to true if the LDAP server doesn't support the Simple Paged Results control extension (RFC 2696). The default is false.
start_tls | no | Set to true to use StartTLS to secure the connection to the server, ignored if the server URL scheme is 'ldaps://'. The default is false.
root_certs | no | A root certificate PEM bundle to use when establishing a TLS connection to the server.
tls_skip_verify | no | Set to true to skip verifying the server's certificate when establishing a TLS connection, which isn't recommended unless testing on a secure network. The default is false.
reader_dn | no | The distinguished name the system uses to bind to the LDAP server when performing searches.
reader_password | no | The password that the system uses to bind to the LDAP server when performing searches.
sync_schedule | no | The scheduled time for automatic LDAP sync jobs, in CRON format. Needs to have the seconds field set to zero. The default is @hourly if empty or omitted.
jit_user_provisioning | no | Whether to only create user accounts upon first login (recommended). The default is true.
+ +

auth.ldap.additional_domains array (optional)

+ +

A list of additional LDAP domains and corresponding server configs from which +to sync users and team members. This is an advanced feature which most +environments don’t need.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Parameter | Required | Description
domain | no | The root domain component of this server, for example, dc=example,dc=com. A longest-suffix match of the base DN for LDAP searches is used to select which LDAP server to use for search requests. If no matching domain is found, the default LDAP server config is used.
server_url | no | The URL of the LDAP server for the current additional domain.
no_simple_pagination | no | Set to true if the LDAP server for this additional domain does not support the Simple Paged Results control extension (RFC 2696). The default is false.
start_tls | no | Whether to use StartTLS to secure the connection to the server, ignored if the server URL scheme is 'ldaps://'.
root_certs | no | A root certificate PEM bundle to use when establishing a TLS connection to the server for the current additional domain.
tls_skip_verify | no | Whether to skip verifying the additional domain server's certificate when establishing a TLS connection, not recommended unless testing on a secure network. The default is true.
reader_dn | no | The distinguished name the system uses to bind to the LDAP server when performing searches under the additional domain.
reader_password | no | The password that the system uses to bind to the LDAP server when performing searches under the additional domain.
+ +

auth.ldap.user_search_configs array (optional)

+ +

Settings for syncing users.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Parameter | Required | Description
base_dn | no | The distinguished name of the element from which the LDAP server will search for users, for example, ou=people,dc=example,dc=com.
scope_subtree | no | Set to true to search for users in the entire subtree of the base DN. Set to false to search only one level under the base DN. The default is false.
username_attr | no | The name of the attribute of the LDAP user element which should be selected as the username. The default is uid.
full_name_attr | no | The name of the attribute of the LDAP user element which should be selected as the full name of the user. The default is cn.
filter | no | The LDAP search filter used to select user elements, for example, (&(objectClass=person)(objectClass=user)). May be left blank.
match_group | no | Whether to additionally filter users to those who are direct members of a group. The default is true.
match_group_dn | no | The distinguished name of the LDAP group, for example, cn=ddc-users,ou=groups,dc=example,dc=com. Required if match_group is true.
match_group_member_attr | no | The name of the LDAP group entry attribute which corresponds to distinguished names of members. Required if match_group is true. The default is member.
match_group_iterate | no | Set to true to get all of the user attributes by iterating through the group members and performing a lookup for each one separately. Use this instead of searching users first, then applying the group selection filter. Ignored if match_group is false. The default is false.
+ +

auth.ldap.admin_sync_opts (optional)

+ +

Settings for syncing system administrator users.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Parameter | Required | Description
enable_sync | no | Set to true to enable syncing admins. If false, all other fields in this table are ignored. The default is true.
select_group_members | no | Set to true to sync using a group DN and member attribute selection. Set to false to use a search filter. The default is true.
group_dn | no | The distinguished name of the LDAP group, for example, cn=ddc-admins,ou=groups,dc=example,dc=com. Required if select_group_members is true.
group_member_attr | no | The name of the LDAP group entry attribute which corresponds to distinguished names of members. Required if select_group_members is true. The default is member.
search_base_dn | no | The distinguished name of the element from which the LDAP server will search for users, for example, ou=people,dc=example,dc=com. Required if select_group_members is false.
search_scope_subtree | no | Set to true to search for users in the entire subtree of the base DN. Set to false to search only one level under the base DN. The default is false. Required if select_group_members is false.
search_filter | no | The LDAP search filter used to select users if select_group_members is false, for example, (memberOf=cn=ddc-admins,ou=groups,dc=example,dc=com). May be left blank.
+ +

registries array (optional)

+ +

An array of tables that specifies the DTR instances that the current UCP instance manages.

+ + + + + + + + + + + + + + + + + + + + + + + + + + +
Parameter | Required | Description
host_address | yes | The address for connecting to the DTR instance tied to this UCP cluster.
service_id | yes | The DTR instance's OpenID Connect Client ID, as registered with the Docker authentication provider.
ca_bundle | no | If you're using a custom certificate authority (CA), the ca_bundle setting specifies the root CA bundle for the DTR instance. The value is a string with the contents of a ca.pem file.
+ +

scheduling_configuration table (optional)

+ +

Specifies scheduling options and the default orchestrator for new nodes.

+ + + + + + + + + + + + + + + + + + + + + +
Parameter | Required | Description
enable_admin_ucp_scheduling | no | Set to true to allow admins to schedule containers on manager nodes. The default is false.
default_node_orchestrator | no | Sets the type of orchestrator to use for new nodes that are joined to the cluster. Can be swarm or kubernetes. The default is swarm.
+ +

tracking_configuration table (optional)

+ +

Specifies the analytics data that UCP collects.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Parameter | Required | Description
disable_usageinfo | no | Set to true to disable analytics of usage information. The default is false.
disable_tracking | no | Set to true to disable analytics of API call information. The default is false.
anonymize_tracking | no | Anonymize analytic data. Set to true to hide your license ID. The default is false.
cluster_label | no | Set a label to be included with analytics.
+ +

trust_configuration table (optional)

+ +

Specifies whether DTR images require signing.

+ + + + + + + + + + + + + + + + + + + + + +
Parameter | Required | Description
require_content_trust | no | Set to true to require images be signed by content trust. The default is false.
require_signature_from | no | A string array that specifies users or teams which must sign images.
+ +
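As an illustration, enforcing signatures from a team named ci (a placeholder name) would look roughly like this:

[trust_configuration]
  require_content_trust = true
  require_signature_from = ["ci"]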

log_configuration table (optional)

+ +

Configures the logging options for UCP components.

+ + + + + + + + + + + + + + + + + + + + + + + + + + +
Parameter | Required | Description
protocol | no | The protocol to use for remote logging. Values are tcp and udp. The default is tcp.
host | no | Specifies a remote syslog server to send UCP controller logs to. If omitted, controller logs are sent through the default docker daemon logging driver from the ucp-controller container.
level | no | The logging level for UCP components. Values are syslog priority levels: debug, info, notice, warning, err, crit, alert, and emerg.
+ +

license_configuration table (optional)

+ +

Specifies whether your UCP license is automatically renewed.

+ + + + + + + + + + + + + + + + +
Parameter | Required | Description
auto_refresh | no | Set to true to enable attempted automatic license renewal when the license nears expiration. If disabled, you must manually upload a renewed license after expiration. The default is true.
+ +

cluster_config table (required)

+ +

Configures the cluster that the current UCP instance manages.

+ +

The dns, dns_opt, and dns_search settings configure the DNS settings for UCP +components. Assigning these values overrides the settings in a container’s +/etc/resolv.conf file. For more info, see +Configure container DNS.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Parameter | Required | Description
controller_port | yes | Configures the port that the ucp-controller listens to. The default is 443.
kube_apiserver_port | yes | Configures the port the Kubernetes API server listens to.
swarm_port | yes | Configures the port that the ucp-swarm-manager listens to. The default is 2376.
swarm_strategy | no | Configures placement strategy for container scheduling. This doesn't affect swarm-mode services. Values are spread, binpack, and random.
dns | yes | Array of IP addresses to add as nameservers.
dns_opt | yes | Array of options used by DNS resolvers.
dns_search | yes | Array of domain names to search when a bare unqualified hostname is used inside of a container.
profiling_enabled | no | Set to true to enable specialized debugging endpoints for profiling UCP performance. The default is false.
kv_timeout | no | Sets the key-value store timeout setting, in milliseconds. The default is 5000.
kv_snapshot_count | no | Sets the key-value store snapshot count setting. The default is 20000.
external_service_lb | no | Specifies an optional external load balancer for default links to services with exposed ports in the web UI.
cni_installer_url | no | Specifies the URL of a Kubernetes YAML file to be used for installing a CNI plugin. Applies only during initial installation. If empty, the default CNI plugin is used.
metrics_retention_time | no | Adjusts the metrics retention time.
metrics_scrape_interval | no | Sets the interval for how frequently managers gather metrics from nodes in the cluster.
metrics_disk_usage_interval | no | Sets the interval for how frequently storage metrics are gathered. This operation can be expensive when large volumes are present.
rethinkdb_cache_size | no | Sets the size of the cache used by UCP's RethinkDB servers. The default is 512MB, but leaving this field empty or specifying auto instructs RethinkDB to determine a cache size automatically.
cloud_provider | no | Sets the cloud provider for the Kubernetes cluster.
pod_cidr | yes | Sets the subnet pool from which the IP for the Pod should be allocated from the CNI ipam plugin. The default is 192.168.0.0/16.
nodeport_range | yes | Sets the port range in which Kubernetes services of type NodePort can be exposed. The default is 32768-35535.
custom_kube_api_server_flags | no | Sets the configuration options for the Kubernetes API server.
custom_kube_controller_manager_flags | no | Sets the configuration options for the Kubernetes controller manager.
custom_kubelet_flags | no | Sets the configuration options for kubelets.
custom_kube_scheduler_flags | no | Sets the configuration options for the Kubernetes scheduler.
local_volume_collection_mapping | no | Store data about collections for volumes in UCP's local KV store instead of on the volume labels. This is used for enforcing access control on volumes.
manager_kube_reserved_resources | no | Reserve resources for Docker UCP and Kubernetes components which are running on manager nodes.
worker_kube_reserved_resources | no | Reserve resources for Docker UCP and Kubernetes components which are running on worker nodes.
diff --git a/ee/ucp/admin/configure/_site/use-nfs-volumes.html b/ee/ucp/admin/configure/_site/use-nfs-volumes.html new file mode 100644 index 0000000000..b8fc8b5226 --- /dev/null +++ b/ee/ucp/admin/configure/_site/use-nfs-volumes.html @@ -0,0 +1,422 @@ +

Docker UCP supports Network File System (NFS) persistent volumes for +Kubernetes. To enable this feature on a UCP cluster, you need to set up +an NFS storage volume provisioner.

+ +
+

Kubernetes storage drivers

+ +

Currently, NFS is the only Kubernetes storage driver that UCP supports.

+
+ +

Enable NFS volume provisioning

+ +

The following steps enable NFS volume provisioning on a UCP cluster:

+ +
1. Create an NFS server pod.
2. Create a default storage class.
3. Create persistent volumes that use the default storage class.
4. Deploy your persistent volume claims and applications.
+ +

The following procedure shows you how to deploy WordPress and a MySQL backend +that use NFS volume provisioning.

+ +

Install the Kubernetes CLI to complete the +procedure for enabling NFS provisioning.

+ +

Create the NFS Server

+ +

To enable NFS volume provisioning on a UCP cluster, you need to install +an NFS server. Google provides an image for this purpose.

+ +

On any node in the cluster with a UCP client bundle, +copy the following yaml to a file named nfs-server.yaml.

+ +
apiVersion: v1
+kind: Pod
+metadata:
+  name: nfs-server
+  namespace: default
+  labels:
+    role: nfs-server
+spec:
+  tolerations:
+  - key: node-role.kubernetes.io/master
+    effect: NoSchedule
+  nodeSelector:
+    node-role.kubernetes.io/master: ""
+  containers:
+  - name: nfs-server
+    image: gcr.io/google_containers/volume-nfs:0.8
+    securityContext:
+      privileged: true
+    ports:
+      - name: nfs-0
+        containerPort: 2049
+        protocol: TCP
+  restartPolicy: Always
+
+ +

Run the following command to create the NFS server pod.

+ +
kubectl create -f nfs-server.yaml
+
+ +

The default storage class needs the IP address of the NFS server pod. +Run the following command to get the pod’s IP address.

+ +
kubectl describe pod nfs-server | grep IP:
+
+ +

The result looks like this:

+ +
IP:           192.168.106.67
+
+ +

Create the default storage class

+ +

To enable NFS provisioning, create a storage class that has the +storageclass.kubernetes.io/is-default-class annotation set to true. +Also, provide the IP address of the NFS server pod as a parameter.

+ +

Copy the following yaml to a file named default-storage.yaml. Replace +<nfs-server-pod-ip-address> with the IP address from the previous step.

+ +
kind: StorageClass
+apiVersion: storage.k8s.io/v1beta1
+metadata:
+  namespace: default
+  name: default-storage
+  annotations:
+    storageclass.kubernetes.io/is-default-class: "true"
+  labels:
+    kubernetes.io/cluster-service: "true"
+provisioner: kubernetes.io/nfs
+parameters:
+  path: /
+  server: <nfs-server-pod-ip-address>
+
+ +

Run the following command to create the default storage class.

+ +
kubectl create -f default-storage.yaml
+
+ +

Confirm that the storage class was created and that it’s assigned as the +default for the cluster.

+ +
kubectl get storageclass
+
+ +

It should look like this:

+ +
NAME                        PROVISIONER         AGE
+default-storage (default)   kubernetes.io/nfs   58s
+
+ +

Create persistent volumes

+ +

Create two persistent volumes based on the default-storage storage class. +One volume is for the MySQL database, and the other is for WordPress.

+ +

To create an NFS volume, specify storageClassName: default-storage in the +persistent volume spec.

+ +

Copy the following yaml to a file named local-volumes.yaml.

+ +
apiVersion: v1
+kind: PersistentVolume
+metadata:
+  name: local-pv-1
+  labels:
+    type: local
+spec:
+  storageClassName: default-storage
+  capacity:
+    storage: 20Gi
+  accessModes:
+    - ReadWriteOnce
+  hostPath:
+    path: /tmp/data/pv-1
+---
+apiVersion: v1
+kind: PersistentVolume
+metadata:
+  name: local-pv-2
+  labels:
+    type: local
+spec:
+  storageClassName: default-storage
+  capacity:
+    storage: 20Gi
+  accessModes:
+    - ReadWriteOnce
+  hostPath:
+    path: /tmp/data/pv-2
+
+ +

Run this command to create the persistent volumes.

+ +
kubectl create -f local-volumes.yaml
+
+ +

Inspect the volumes:

+ +
kubectl get persistentvolumes
+
+ +

They should look like this:

+ +
NAME         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM     STORAGECLASS      REASON    AGE
+local-pv-1   20Gi       RWO            Retain           Available             default-storage             1m
+local-pv-2   20Gi       RWO            Retain           Available             default-storage             1m
+
+ +

Create a secret for the MySQL password

+ +

Create a secret for the password that you want to use for accessing the MySQL +database. Use this command to create the secret object:

+ +
kubectl create secret generic mysql-pass --from-literal=password=<mysql-password>
+
+ +

Deploy persistent volume claims and applications

+ +

You have two persistent volumes that are available for claims. The MySQL +deployment uses one volume, and WordPress uses the other.

+ +

Copy the following yaml to a file named wordpress-deployment.yaml. +The claims in this file make no reference to a particular storage class, so +they bind to any available volumes that can satisfy the storage request. +In this example, both claims request 20Gi of storage.

+ +
+

Use specific persistent volume

+ +

If you want to use a specific persistent volume instead of letting Kubernetes choose one at random, ensure that the storageClassName key is populated in the persistent volume claim itself.

+
+ +
apiVersion: v1
+kind: Service
+metadata:
+  name: wordpress-mysql
+  labels:
+    app: wordpress
+spec:
+  ports:
+    - port: 3306
+  selector:
+    app: wordpress
+    tier: mysql
+  clusterIP: None
+---
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: mysql-pv-claim
+  labels:
+    app: wordpress
+spec:
+  accessModes:
+    - ReadWriteOnce
+  resources:
+    requests:
+      storage: 20Gi
+---
+apiVersion: apps/v1beta2
+kind: Deployment
+metadata:
+  name: wordpress-mysql
+  labels:
+    app: wordpress
+spec:
+  selector:
+    matchLabels:
+      app: wordpress
+      tier: mysql
+  strategy:
+    type: Recreate
+  template:
+    metadata:
+      labels:
+        app: wordpress
+        tier: mysql
+    spec:
+      containers:
+      - image: mysql:5.6
+        name: mysql
+        env:
+        - name: MYSQL_ROOT_PASSWORD
+          valueFrom:
+            secretKeyRef:
+              name: mysql-pass
+              key: password
+        ports:
+        - containerPort: 3306
+          name: mysql
+        volumeMounts:
+        - name: mysql-persistent-storage
+          mountPath: /var/lib/mysql
+      volumes:
+      - name: mysql-persistent-storage
+        persistentVolumeClaim:
+          claimName: mysql-pv-claim
+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: wordpress
+  labels:
+    app: wordpress
+spec:
+  ports:
+    - port: 80
+  selector:
+    app: wordpress
+    tier: frontend
+  type: LoadBalancer
+---
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: wp-pv-claim
+  labels:
+    app: wordpress
+spec:
+  accessModes:
+    - ReadWriteOnce
+  resources:
+    requests:
+      storage: 20Gi
+---
+apiVersion: apps/v1beta2
+kind: Deployment
+metadata:
+  name: wordpress
+  labels:
+    app: wordpress
+spec:
+  selector:
+    matchLabels:
+      app: wordpress
+      tier: frontend
+  strategy:
+    type: Recreate
+  template:
+    metadata:
+      labels:
+        app: wordpress
+        tier: frontend
+    spec:
+      containers:
+      - image: wordpress:4.8-apache
+        name: wordpress
+        env:
+        - name: WORDPRESS_DB_HOST
+          value: wordpress-mysql
+        - name: WORDPRESS_DB_PASSWORD
+          valueFrom:
+            secretKeyRef:
+              name: mysql-pass
+              key: password
+        ports:
+        - containerPort: 80
+          name: wordpress
+        volumeMounts:
+        - name: wordpress-persistent-storage
+          mountPath: /var/www/html
+      volumes:
+      - name: wordpress-persistent-storage
+        persistentVolumeClaim:
+          claimName: wp-pv-claim
+
+ +

Run the following command to deploy the MySQL and WordPress images.

+ +
kubectl create -f wordpress-deployment.yaml
+
+ +

Confirm that the pods are up and running.

+ +
kubectl get pods
+
+ +

You should see something like this:

+ +
NAME                               READY     STATUS    RESTARTS   AGE
+nfs-server                         1/1       Running   0          2h
+wordpress-f4dcfdf45-4rkgs          1/1       Running   0          1m
+wordpress-mysql-7bdd6d857c-fvgqx   1/1       Running   0          1m
+
+ +

It may take a few minutes for both pods to enter the Running state.

+ +
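Optionally, confirm that both claims are bound to the persistent volumes created earlier; a quick check whose exact output will differ in your environment:

kubectl get persistentvolumeclaims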

Inspect the deployment

+ +

The WordPress deployment is ready to go. You can see it in action by opening +a web browser on the URL of the WordPress service. The easiest way to get the +URL is to open the UCP web UI, navigate to the Kubernetes Load Balancers +page, and click the wordpress service. In the details pane, the URL is +listed in the Ports section.

+ +

+ +

Also, you can get the URL by using the command line.

+ +

On any node in the cluster, run the following command to get the IP addresses +that are assigned to the current node.

+ +

+docker node inspect --format '{{ index .Spec.Labels "com.docker.ucp.SANs" }}' <node-id>
+
+
+ +

You should see a list of IP addresses, like this:

+ +
172.31.36.167,jg-latest-ubuntu-0,127.0.0.1,172.17.0.1,54.213.225.17
+
+ +

One of these corresponds with the external node IP address. Look for an address +that’s not in the 192.*, 127.*, and 172.* ranges. In the current example, +the IP address is 54.213.225.17.

+ +

The WordPress web UI is served through a NodePort, which you get with this +command:

+ +
kubectl describe svc wordpress | grep NodePort 
+
+ +

Which returns something like this:

+ +
NodePort:                 <unset>  34746/TCP
+
+ +

Put the two together to get the URL for the WordPress service: +http://<node-ip>:<node-port>.

+ +

For this example, the URL is http://54.213.225.17:34746.

+ +
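If you prefer to script this step, a rough sketch that reads the NodePort directly with kubectl; the <node-ip> placeholder is the external address found above:

NODE_PORT=$(kubectl get svc wordpress -o jsonpath='{.spec.ports[0].nodePort}')
echo "http://<node-ip>:${NODE_PORT}"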

+ +

Write a blog post to use the storage

+ +

Open the URL for the WordPress service and follow the instructions for +installing WordPress. In this example, the blog is named “NFS Volumes”.

+ +

+ +

Create a new blog post and publish it.

+ +

+ +

Click the permalink to view the site.

+ +

+ +

Where to go next

+ + diff --git a/ee/ucp/admin/configure/_site/use-node-local-network-in-swarm.html b/ee/ucp/admin/configure/_site/use-node-local-network-in-swarm.html new file mode 100644 index 0000000000..46399f9087 --- /dev/null +++ b/ee/ucp/admin/configure/_site/use-node-local-network-in-swarm.html @@ -0,0 +1,47 @@ +

Docker Universal Control Plane can use your local networking drivers to orchestrate your cluster. You can create a config network with a driver like MAC VLAN and use it like any other named network in UCP. If it's set up as attachable, you can attach containers to it.

+ +
+

Security

+ +

Encrypting communication between containers on different nodes works only on +overlay networks.

+
+ +

Use UCP to create node-specific networks

+ +

Always use UCP to create node-specific networks. You can use the UCP web UI +or the CLI (with an admin bundle). If you create the networks without UCP, +the networks won’t have the right access labels and won’t be available in UCP.

+ +

Create a MAC VLAN network

+ +
1. Log in as an administrator.
2. Navigate to Networks and click Create Network.
3. Name the network "macvlan".
4. In the Driver dropdown, select Macvlan.
5. In the Macvlan Configure section, select the configuration option. Create all of the config-only networks before you create the config-from network (see the CLI sketch after this list).
6. Click Create to create the network.
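For reference, a rough CLI equivalent using an admin client bundle; the subnet, gateway, and parent interface are placeholders for your own network, and one config-only network is created on each node before the swarm-scoped network that consumes it:

# on each node: a config-only network holding that node's local VLAN settings
docker network create --config-only \
  --subnet 192.168.20.0/24 --gateway 192.168.20.1 \
  -o parent=eth0 macvlan-config
# then: the swarm-scoped macvlan network that reads its settings from the config
docker network create -d macvlan --scope swarm \
  --config-from macvlan-config --attachable macvlan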
+ diff --git a/ee/ucp/admin/configure/_site/use-trusted-images-for-ci.html b/ee/ucp/admin/configure/_site/use-trusted-images-for-ci.html new file mode 100644 index 0000000000..5abcb16fa4 --- /dev/null +++ b/ee/ucp/admin/configure/_site/use-trusted-images-for-ci.html @@ -0,0 +1,139 @@ +

This document provides a minimal example of setting up Docker Content Trust (DCT) in Universal Control Plane (UCP) for use with a Continuous Integration (CI) system. It covers setting up the necessary accounts and trust delegations so that only images built and signed by your CI system can be deployed to your UCP-managed cluster.

+ +

Set up UCP accounts and teams

+ +

The first step is to create a user account for your CI system. For the purposes of +this document we will assume you are using Jenkins as your CI system and will therefore +name the account “jenkins”. As an admin user logged in to UCP, navigate to “User Management” +and select “Add User”. Create a user with the name “jenkins” and set a strong password.

+ +

Next, create a team called “CI” and add the “jenkins” user to this team. All signing +policy is team based, so if we want only a single user to be able to sign images +destined to be deployed on the cluster, we must create a team for this one user.

+ +

Set up the signing policy

+ +

While still logged in as an admin, navigate to “Admin Settings” and select the “Content Trust” +subsection. Select the checkbox to enable content trust and in the select box that appears, +select the “CI” team we have just created. Save the settings.

+ +

This policy requires that every image referenced in a docker image pull, docker container run, or docker service create is signed by a key corresponding to a member of the "CI" team. In this case, the only member is the "jenkins" user.

+ +

Create keys for the Jenkins user

+ +

The signing policy implementation uses the certificates issued in user client bundles +to connect a signature to a user. Using an incognito browser window (or otherwise), +log in to the “jenkins” user account you created earlier. Download a client bundle for +this user. It is also recommended to change the description associated with the public +key stored in UCP such that you can identify in the future which key is being used for +signing.

+ +

Each time a user retrieves a new client bundle, a new keypair is generated. It is therefore +necessary to keep track of a specific bundle that a user chooses to designate as their signing bundle.

+ +

Once you have decompressed the client bundle, the only two files you need for the purposes +of signing are cert.pem and key.pem. These represent the public and private parts of +the user’s signing identity respectively. We will load the key.pem file onto the Jenkins +servers, and use cert.pem to create delegations for the “jenkins” user in our +Trusted Collection.

+ +

Prepare the Jenkins server

+ +

Load key.pem on Jenkins

+ +

You will need to use the notary client to load keys onto your Jenkins server. Simply run +notary -d /path/to/.docker/trust key import /path/to/key.pem. You will be asked to set +a password to encrypt the key on disk. For automated signing, this password can be configured +into the environment under the variable name DOCKER_CONTENT_TRUST_REPOSITORY_PASSPHRASE. The -d +flag to the command specifies the path to the trust subdirectory within the server’s docker +configuration directory. Typically this is found at ~/.docker/trust.

+ +

Enable content trust

+ +

There are two ways to enable content trust: globally, and per operation. To enable content trust globally, set the environment variable DOCKER_CONTENT_TRUST=1. To enable it on a per-operation basis, wherever you run docker image push in your Jenkins scripts, add the flag --disable-content-trust=false. You may wish to use this second option if you only want to sign some images.

+ +
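For example, a per-push invocation in a Jenkins script might look like this sketch; the registry URL and repository name are placeholders:

docker image push --disable-content-trust=false <dtr-url>/<account>/<repository>:<tag>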

The Jenkins server is now prepared to sign images, but we need to create delegations referencing +the key to give it the necessary permissions.

+ +

Initialize a repository

+ +

Any commands displayed in this section should not be run from the Jenkins server. You +will most likely want to run them from your local system.

+ +

If this is a new repository, create it in Docker Trusted Registry (DTR) or Docker Hub, +depending on which you use to store your images, before proceeding further.

+ +

We will now initialize the trust data and create the delegation that provides the Jenkins +key with permissions to sign content. The following commands initialize the trust data and +rotate snapshotting responsibilities to the server. This is necessary to ensure human involvement +is not required to publish new content.

+ +
notary -s https://my_notary_server.com -d ~/.docker/trust init my_repository
+notary -s https://my_notary_server.com -d ~/.docker/trust key rotate my_repository snapshot -r
+notary -s https://my_notary_server.com -d ~/.docker/trust publish my_repository
+
+ +

The -s flag specifies the server hosting a notary service. If you are operating against +Docker Hub, this will be https://notary.docker.io. If you are operating against your own DTR +instance, this will be the same hostname you use in image names when running docker commands preceded +by the https:// scheme. For example, if you would run docker image push my_dtr:4443/me/an_image the value +of the -s flag would be expected to be https://my_dtr:4443.

+ +

If you are using DTR, the name of the repository should be identical to the full name you use +in a docker image push command. If however you use Docker Hub, the name you use in a docker image push +must be preceded by docker.io/. i.e. if you ran docker image push me/alpine, you would +notary init docker.io/me/alpine.

+ +

For brevity, we will exclude the -s and -d flags from subsequent commands, but be aware you will still need to provide them for the commands to work correctly.

+ +

Now that the repository is initialized, we need to create the delegations for Jenkins. Docker +Content Trust treats a delegation role called targets/releases specially. It considers this +delegation to contain the canonical list of published images for the repository. It is therefore +generally desirable to add all users to this delegation with the following command:

+ +
notary delegation add my_repository targets/releases --all-paths /path/to/cert.pem
+
+ +

This solves a number of prioritization problems that would result from needing to determine +which delegation should ultimately be trusted for a specific image. However, because it +is anticipated that any user will be able to sign the targets/releases role it is not trusted +in determining if a signing policy has been met. Therefore it is also necessary to create a +delegation specifically for Jenkins:

+ +
notary delegation add my_repository targets/jenkins --all-paths /path/to/cert.pem
+
+ +

We will then publish both these updates (remember to add the correct -s and -d flags):

+ +
notary publish my_repository
+
+ +

Informational (Advanced): If we included the targets/releases role in determining if a signing policy +had been met, we would run into the situation of images being opportunistically deployed when +an appropriate user signs. In the scenario we have described so far, only images signed by +the “CI” team (containing only the “jenkins” user) should be deployable. If a user “Moby” could +also sign images but was not part of the “CI” team, they might sign and publish a new targets/releases +that contained their image. UCP would refuse to deploy this image because it was not signed +by the “CI” team. However, the next time Jenkins published an image, it would update and sign +the targets/releases role as whole, enabling “Moby” to deploy their image.

+ +

Conclusion

+ +

With the Trusted Collection initialized, and delegations created, the Jenkins server will +now use the key we imported to sign any images we push to this repository.

+ +

Through either the Docker CLI, or the UCP browser interface, we will find that any images +that do not meet our signing policy cannot be used. The signing policy we set up requires +that the “CI” team must have signed any image we attempt to docker image pull, docker container run, +or docker service create, and the only member of that team is the “jenkins” user. This +restricts us to only running images that were published by our Jenkins CI system.

diff --git a/ee/ucp/admin/configure/_site/use-your-own-tls-certificates.html b/ee/ucp/admin/configure/_site/use-your-own-tls-certificates.html new file mode 100644 index 0000000000..46363a41cf --- /dev/null +++ b/ee/ucp/admin/configure/_site/use-your-own-tls-certificates.html @@ -0,0 +1,57 @@ +

All UCP services are exposed using HTTPS, to ensure all communications between +clients and UCP are encrypted. By default, this is done using self-signed TLS +certificates that are not trusted by client tools like web browsers. So when +you try to access UCP, your browser warns that it doesn’t trust UCP or that +UCP has an invalid certificate.

+ +

invalid certificate

+ +

The same happens with other client tools.

+ +
$ curl https://ucp.example.org
+
+SSL certificate problem: Invalid certificate chain
+
+ +

You can configure UCP to use your own TLS certificates, so that it is +automatically trusted by your browser and client tools.

+ +

To ensure minimal impact to your business, you should plan for this change to +happen outside business peak hours. Your applications will continue running +normally, but existing UCP client certificates will become invalid, so users +will have to download new ones to access UCP from the CLI.

+ +

Configure UCP to use your own TLS certificates and keys

+ +

In the UCP web UI, log in with administrator credentials and +navigate to the Admin Settings page.

+ +

In the left pane, click Certificates.

+ +

+ +

Upload your certificates and keys: the server certificate for your UCP domain (including any intermediate certificates), the corresponding private key, and, if you use a custom certificate authority, its root CA certificate.

+ + + +

Finally, click Save for the changes to take effect.

+ +

After replacing the TLS certificates, your users won’t be able to authenticate +with their old client certificate bundles. Ask your users to go to the UCP +web UI and get new client certificate bundles.

+ +

If you deployed Docker Trusted Registry, you’ll also need to reconfigure it +to trust the new UCP TLS certificates. +Learn how to configure DTR.

+ +

Where to go next

+ + diff --git a/ee/ucp/admin/configure/_site/view-namespace-resources.html b/ee/ucp/admin/configure/_site/view-namespace-resources.html new file mode 100644 index 0000000000..16c34a3091 --- /dev/null +++ b/ee/ucp/admin/configure/_site/view-namespace-resources.html @@ -0,0 +1,121 @@ +

With Docker Enterprise Edition, administrators can filter the view of +Kubernetes objects by the namespace the objects are assigned to. You can +specify a single namespace, or you can specify all available namespaces.

+ +

Create two namespaces

+ +

In this example, you create two Kubernetes namespaces and deploy a separate service to each of them.

+ +
  1. Log in to the UCP web UI with an administrator account.
  2. In the left pane, click Kubernetes.
  3. Click Create to open the Create Kubernetes Object page.
  4. In the Object YAML editor, paste the following YAML.

     apiVersion: v1
     kind: Namespace
     metadata:
       name: blue
     ---
     apiVersion: v1
     kind: Namespace
     metadata:
       name: green

  5. Click Create to create the blue and green namespaces. (A kubectl equivalent is sketched after these steps.)
+ +
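The same two namespaces can also be created from the command line. This is a minimal kubectl sketch, assuming your UCP client bundle is loaded so that kubectl points at the cluster:

# Create the blue and green namespaces.
kubectl create namespace blue
kubectl create namespace green

# Confirm that both namespaces now exist.
kubectl get namespaces blue green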

+ +

Deploy services

+ +

Create a NodePort service in the blue namespace.

+ +
  1. Navigate to the Create Kubernetes Object page.
  2. In the Namespace dropdown, select blue.
  3. In the Object YAML editor, paste the following YAML.

     apiVersion: v1
     kind: Service
     metadata:
       name: app-service-blue
       labels:
         app: app-blue
     spec:
       type: NodePort
       ports:
         - port: 80
           nodePort: 32768
       selector:
         app: app-blue

  4. Click Create to deploy the service in the blue namespace.
  5. Repeat the previous steps with the following YAML, but this time, select green from the Namespace dropdown. (You can verify the result from the CLI, as sketched after these steps.)

     apiVersion: v1
     kind: Service
     metadata:
       name: app-service-green
       labels:
         app: app-green
     spec:
       type: NodePort
       ports:
         - port: 80
           nodePort: 32769
       selector:
         app: app-green
+ +
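To check the result from the command line, list the services in each namespace. Again, this sketch assumes a loaded UCP client bundle:

# app-service-blue should appear only in the blue namespace,
# and app-service-green only in the green namespace.
kubectl get services --namespace blue
kubectl get services --namespace green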

View services

+ +

Currently, the Namespaces view is set to the default namespace, so the +Load Balancers page doesn’t show your services.

+ +
  1. In the left pane, click Namespaces to open the list of namespaces.
  2. In the upper-right corner, click the Set context for all namespaces toggle and click Confirm. The indicator in the left pane changes to All Namespaces.
  3. Click Load Balancers to view your services. (The equivalent CLI query is sketched after these steps.)
+ +
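If you prefer the CLI, the rough equivalent of the Set context for all namespaces toggle is to query every namespace at once, for example:

# List services across every namespace, mirroring the
# "Set context for all namespaces" view in the web UI.
kubectl get services --all-namespaces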

+ +

Filter the view by namespace

+ +

With the Set context for all namespaces toggle set, you see all of the +Kubernetes objects in every namespace. Now filter the view to show only +objects in one namespace.

+ +
  1. In the left pane, click Namespaces to open the list of namespaces.
  2. In the green namespace, click the More options icon and, in the context menu, select Set Context.
  3. Click Confirm to set the context to the green namespace. The indicator in the left pane changes to green.
  4. Click Load Balancers to view your app-service-green service. The app-service-blue service doesn’t appear.
+ +

+ +

To view the app-service-blue service, repeat the previous steps, but this +time, select Set Context on the blue namespace.

+ +

diff --git a/ee/ucp/admin/configure/enable-saml-authentication.md b/ee/ucp/admin/configure/enable-saml-authentication.md index ce2910849f..d2284647f3 100644 --- a/ee/ucp/admin/configure/enable-saml-authentication.md +++ b/ee/ucp/admin/configure/enable-saml-authentication.md @@ -67,8 +67,8 @@ To enable SAML authentication: ![Configuring IdP values for SAML in UCP](../../images/saml_settings.png) 5. In **IdP Metadata URL** enter the URL for the identity provider's metadata. -6. If the metadata URL is publicly certified, you can leave **Skip TLS Verification** unchecked and **Root Certificates Bundle** blank, which is the default. If the metadata URL is NOT certified, you must provide the certificates from the identity provider in the **Root Certificates Bundle** field whether or not you check **Skip TLS Verification**. -7. In **UCP Host** enter the URL that includes the IP address or domain of your UCP web interface. The current IP address appears by default. +6. If the metadata URL is publicly certified, you can leave **Skip TLS Verification** unchecked and **Root Certificates Bundle** blank, which is the default. Skipping TLS verification is not recommended in production environments. If the metadata URL cannot be certified by the default certificate authority store, you must provide the certificates from the identity provider in the **Root Certificates Bundle** field whether or not you check **Skip TLS Verification**. +7. In **UCP Host** enter the URL that includes the IP address or domain of your UCP installation. The port number is optional. The current IP address or domain appears by default. ![Configuring service provider values for SAML in UCP](../../images/saml_settings_2.png)