final changes

This commit is contained in:
ddeyo 2018-10-09 13:17:34 -07:00
parent 048ad1dd80
commit a1058b0504
26 changed files with 3870 additions and 2 deletions


@ -0,0 +1,137 @@
<p>With Docker UCP, you can add labels to your nodes. Labels are metadata that
describe the node, like its role (development, QA, production), its region
(US, EU, APAC), or the kind of disk (hdd, ssd). Once you have labeled your
nodes, you can add deployment constraints to your services, to ensure they
are scheduled on a node with a specific label.</p>
<p>For example, you can apply labels based on their role in the development
lifecycle, or the hardware resources they have.</p>
<p><img src="../../images/add-labels-to-cluster-nodes-1.svg" alt="" /></p>
<p>Don't create labels for authorization and permissions to resources.
Instead, use resource sets, either UCP collections or Kubernetes namespaces,
to organize access to your cluster.
<a href="../../authorization/group-resources.md">Learn about managing access with resource sets</a>.</p>
<h2 id="apply-labels-to-a-node">Apply labels to a node</h2>
<p>In this example we'll apply the <code class="highlighter-rouge">ssd</code> label to a node. Then we'll deploy
a service with a deployment constraint to make sure the service is always
scheduled to run on a node that has the <code class="highlighter-rouge">ssd</code> label.</p>
<p>Log in with administrator credentials in the UCP web UI, navigate to the
<strong>Nodes</strong> page, and choose the node you want to apply labels to. In the
details pane, click <strong>Configure</strong>.</p>
<p>In the <strong>Edit Node</strong> page, scroll down to the <strong>Labels</strong> section.</p>
<p>Click <strong>Add Label</strong>, and add a label with the key <code class="highlighter-rouge">disk</code> and a value of <code class="highlighter-rouge">ssd</code>.</p>
<p><img src="../../images/add-labels-to-cluster-nodes-2.png" alt="" class="with-border" /></p>
<p>Click <strong>Save</strong> and dismiss the <strong>Edit Node</strong> page. In the nodes details
pane, click <strong>Labels</strong> to view the labels that are applied to the node.</p>
<p>You can also do this from the CLI by running:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker node update <span class="nt">--label-add</span> &lt;key&gt;<span class="o">=</span>&lt;value&gt; &lt;node-id&gt;
</code></pre></div></div>
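<p>For example, a minimal sketch that applies the label used in this tutorial and
then confirms it was set (replace the node ID placeholder with your own):</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker node update --label-add disk=ssd &lt;node-id&gt;
docker node inspect --format '{{ .Spec.Labels }}' &lt;node-id&gt;
</code></pre></div></div>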
<h2 id="deploy-a-service-with-constraints">Deploy a service with constraints</h2>
<p>When deploying a service, you can specify constraints, so that the service gets
scheduled only on a node that has a label that fulfills all of the constraints
you specify.</p>
<p>In this example, when users deploy a service, they can add a constraint for the
service to be scheduled only on nodes that have SSD storage:
<code class="highlighter-rouge">node.labels.disk == ssd</code>.</p>
<p>Navigate to the <strong>Stacks</strong> page. Name the new stack “wordpress”, and in the
<strong>Mode</strong> dropdown, check <strong>Swarm Services</strong>.</p>
<p>In the <strong>docker-compose.yml</strong> editor, paste the following stack file.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>version: "3.1"
services:
db:
image: mysql:5.7
deploy:
placement:
constraints:
- node.labels.disk == ssd
restart_policy:
condition: on-failure
networks:
- wordpress-net
environment:
MYSQL_ROOT_PASSWORD: wordpress
MYSQL_DATABASE: wordpress
MYSQL_USER: wordpress
MYSQL_PASSWORD: wordpress
wordpress:
depends_on:
- db
image: wordpress:latest
deploy:
replicas: 1
placement:
constraints:
- node.labels.disk == ssd
restart_policy:
condition: on-failure
max_attempts: 3
networks:
- wordpress-net
ports:
- "8000:80"
environment:
WORDPRESS_DB_HOST: db:3306
WORDPRESS_DB_PASSWORD: wordpress
networks:
wordpress-net:
</code></pre></div></div>
<p>Click <strong>Create</strong> to deploy the stack, and when the stack deploys,
click <strong>Done</strong>.</p>
<p><img src="../../images/use-constraints-in-stack-deployment.png" alt="" /></p>
<p>Navigate to the <strong>Nodes</strong> page, and click the node that has the
<code class="highlighter-rouge">disk</code> label. In the details pane, click the <strong>Inspect Resource</strong>
dropdown and select <strong>Containers</strong>.</p>
<p><img src="../../images/use-constraints-in-stack-deployment-2.png" alt="" /></p>
<p>Dismiss the filter and navigate to the <strong>Nodes</strong> page. Click a node that
doesn't have the <code class="highlighter-rouge">disk</code> label. In the details pane, click the
<strong>Inspect Resource</strong> dropdown and select <strong>Containers</strong>. There are no
WordPress containers scheduled on the node. Dismiss the filter.</p>
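<p>From the CLI, you can also list the node where each of the stack's tasks was
scheduled, to confirm that all of them landed on nodes with the <code class="highlighter-rouge">ssd</code> label:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker stack ps wordpress --format '{{ .Name }}: {{ .Node }}'
</code></pre></div></div>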
<h2 id="add-a-constraint-to-a-service-by-using-the-ucp-web-ui">Add a constraint to a service by using the UCP web UI</h2>
<p>You can declare deployment constraints in your docker-compose.yml file or
when you create a stack. You can also apply them when you create a service.</p>
<p>To check if a service has deployment constraints, navigate to the
<strong>Services</strong> page and choose the service that you want to check.
In the details pane, click <strong>Constraints</strong> to list the constraint labels.</p>
<p>To edit the constraints on the service, click <strong>Configure</strong> and select
<strong>Details</strong> to open the <strong>Update Service</strong> page. Click <strong>Scheduling</strong> to
view the constraints.</p>
<p><img src="../../images/add-constraint-to-service.png" alt="" /></p>
<p>You can add or remove deployment constraints on this page.</p>
<h2 id="where-to-go-next">Where to go next</h2>
<ul>
<li><a href="store-logs-in-an-external-system.md">Store logs in an external system</a></li>
</ul>


@ -0,0 +1,51 @@
<p>UCP always runs with HTTPS enabled. When you connect to UCP, you need to make
sure that the hostname that you use to connect is recognized by UCP's
certificates. If, for instance, you put UCP behind a load balancer that
forwards its traffic to your UCP instance, your requests will be for the load
balancer's hostname or IP address, not UCP's. UCP will reject these requests
unless you include the load balancer's address as a Subject Alternative Name
(or SAN) in UCP's certificates.</p>
<p>If you use your own TLS certificates, make sure that they have the correct SAN
values.
<a href="use-your-own-tls-certificates.md">Learn about using your own TLS certificates</a>.</p>
<p>If you want to use the self-signed certificate that UCP has out of the box, you
can set up the SANs when you install UCP with the <code class="highlighter-rouge">--san</code> argument. You can
also add them after installation.</p>
<h2 id="add-new-sans-to-ucp">Add new SANs to UCP</h2>
<ol>
<li>In the UCP web UI, log in with administrator credentials and navigate to
the <strong>Nodes</strong> page.</li>
<li>Click on a manager node, and in the details pane, click <strong>Configure</strong> and
select <strong>Details</strong>.</li>
<li>In the <strong>SANs</strong> section, click <strong>Add SAN</strong>, and enter one or more SANs
for the cluster.
<img src="../../images/add-sans-to-cluster-1.png" alt="" class="with-border" /></li>
<li>Once you're done, click <strong>Save</strong>.</li>
</ol>
<p>You will have to do this on every existing manager node in the cluster,
but once you have done so, the SANs are applied automatically to any new
manager nodes that join the cluster.</p>
<p>You can also do this from the CLI by first running:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>
docker node inspect <span class="nt">--format</span> <span class="s1">'{{ index .Spec.Labels "com.docker.ucp.SANs" }}'</span> &lt;node-id&gt;
default-cs,127.0.0.1,172.17.0.1
</code></pre></div></div>
<p>This will get the current set of SANs for the given manager node. Append your
desired SAN to this list, for example <code class="highlighter-rouge">default-cs,127.0.0.1,172.17.0.1,example.com</code>,
and then run:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker node update <span class="nt">--label-add</span> com.docker.ucp.SANs<span class="o">=</span>&lt;SANs-list&gt; &lt;node-id&gt;
</code></pre></div></div>
<p><code class="highlighter-rouge">&lt;SANs-list&gt;</code> is the list of SANs with your new SAN appended at the end. As in
the web UI, you must do this for every manager node.</p>
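<p>Putting the two commands together, a minimal sketch that appends a new SAN
(here the placeholder <code class="highlighter-rouge">example.com</code>) to a manager node's existing list:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>CURRENT_SANS=$(docker node inspect --format '{{ index .Spec.Labels "com.docker.ucp.SANs" }}' &lt;node-id&gt;)
docker node update --label-add com.docker.ucp.SANs="${CURRENT_SANS},example.com" &lt;node-id&gt;
</code></pre></div></div>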


@ -0,0 +1,243 @@
<p>UCP uses Calico as the default Kubernetes networking solution. Calico is
configured to create a BGP mesh between all nodes in the cluster.</p>
<p>As you add more nodes to the cluster, networking performance starts decreasing.
If your cluster has more than 100 nodes, you should reconfigure Calico to use
Route Reflectors instead of a node-to-node mesh.</p>
<p>This article guides you in deploying Calico Route Reflectors in a UCP cluster.
UCP running on Microsoft Azure uses Azure SDN instead of Calico for
multi-host networking.
If your UCP deployment is running on Azure, you don't need to configure it this
way.</p>
<h2 id="before-you-begin">Before you begin</h2>
<p>For production-grade systems, you should deploy at least two Route Reflectors,
each running on a dedicated node. These nodes should not be running any other
workloads.</p>
<p>If Route Reflectors run on the same node as other workloads, swarm ingress
and NodePorts might not work for those workloads.</p>
<h2 id="choose-dedicated-notes">Choose dedicated notes</h2>
<p>Start by tainting the nodes, so that no other workload runs there. Configure
your CLI with a UCP client bundle, and for each dedicated node, run:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>kubectl taint node &lt;node-name&gt; \
com.docker.ucp.kubernetes.calico/route-reflector=true:NoSchedule
</code></pre></div></div>
<p>Then add labels to those nodes, so that you can target them when deploying the
Route Reflectors. For each dedicated node, run:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>kubectl label nodes &lt;node-name&gt; \
com.docker.ucp.kubernetes.calico/route-reflector=true
</code></pre></div></div>
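<p>To confirm that the taint and label were applied, you can query the nodes:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>kubectl describe node &lt;node-name&gt; | grep Taints
kubectl get nodes -l com.docker.ucp.kubernetes.calico/route-reflector=true
</code></pre></div></div>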
<h2 id="deploy-the-route-reflectors">Deploy the Route Reflectors</h2>
<p>Create a <code class="highlighter-rouge">calico-rr.yaml</code> file with the following content:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
name: calico-rr
namespace: kube-system
labels:
app: calico-rr
spec:
updateStrategy:
type: RollingUpdate
selector:
matchLabels:
k8s-app: calico-rr
template:
metadata:
labels:
k8s-app: calico-rr
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ''
spec:
tolerations:
- key: com.docker.ucp.kubernetes.calico/route-reflector
value: "true"
effect: NoSchedule
hostNetwork: true
containers:
- name: calico-rr
image: calico/routereflector:v0.6.1
env:
- name: ETCD_ENDPOINTS
valueFrom:
configMapKeyRef:
name: calico-config
key: etcd_endpoints
- name: ETCD_CA_CERT_FILE
valueFrom:
configMapKeyRef:
name: calico-config
key: etcd_ca
# Location of the client key for etcd.
- name: ETCD_KEY_FILE
valueFrom:
configMapKeyRef:
name: calico-config
key: etcd_key # Location of the client certificate for etcd.
- name: ETCD_CERT_FILE
valueFrom:
configMapKeyRef:
name: calico-config
key: etcd_cert
- name: IP
valueFrom:
fieldRef:
fieldPath: status.podIP
volumeMounts:
- mountPath: /calico-secrets
name: etcd-certs
securityContext:
privileged: true
nodeSelector:
com.docker.ucp.kubernetes.calico/route-reflector: "true"
volumes:
# Mount in the etcd TLS secrets.
- name: etcd-certs
secret:
secretName: calico-etcd-secrets
</code></pre></div></div>
<p>Then, deploy the DaemonSet using:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>kubectl create -f calico-rr.yaml
</code></pre></div></div>
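<p>You can verify that a Route Reflector pod is running on each dedicated node:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>kubectl get pods -n kube-system -l k8s-app=calico-rr -o wide
</code></pre></div></div>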
<h2 id="configure-calicoctl">Configure calicoctl</h2>
<p>To reconfigure Calico to use Route Reflectors instead of a node-to-node mesh,
you'll need to SSH into a UCP node and download the <code class="highlighter-rouge">calicoctl</code> tool.</p>
<p>Log in to a UCP node using SSH, and run:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo curl --location https://github.com/projectcalico/calicoctl/releases/download/v3.1.1/calicoctl \
--output /usr/bin/calicoctl
sudo chmod +x /usr/bin/calicoctl
</code></pre></div></div>
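<p>You can confirm the download by checking the tool's version, which should
report v3.1.1:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>calicoctl version
</code></pre></div></div>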
<p>Now you need to configure <code class="highlighter-rouge">calicoctl</code> to communicate with the etcd key-value
store managed by UCP. Create a file named <code class="highlighter-rouge">/etc/calico/calicoctl.cfg</code> with
the following content:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>apiVersion: projectcalico.org/v3
kind: CalicoAPIConfig
metadata:
spec:
datastoreType: "etcdv3"
etcdEndpoints: "127.0.0.1:12378"
etcdKeyFile: "/var/lib/docker/volumes/ucp-node-certs/_data/key.pem"
etcdCertFile: "/var/lib/docker/volumes/ucp-node-certs/_data/cert.pem"
etcdCACertFile: "/var/lib/docker/volumes/ucp-node-certs/_data/ca.pem"
</code></pre></div></div>
<h2 id="disable-node-to-node-bgp-mesh">Disable node-to-node BGP mesh</h2>
<p>Now that you've configured <code class="highlighter-rouge">calicoctl</code>, you can check the current Calico BGP
configuration:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo calicoctl get bgpconfig
</code></pre></div></div>
<p>If you don't see any configuration listed, create one by running:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>cat &lt;&lt; EOF | sudo calicoctl create -f -
apiVersion: projectcalico.org/v3
kind: BGPConfiguration
metadata:
name: default
spec:
logSeverityScreen: Info
nodeToNodeMeshEnabled: false
asNumber: 63400
EOF
</code></pre></div></div>
<p>This creates a new configuration with the node-to-node BGP mesh disabled.
If you already have a configuration and <code class="highlighter-rouge">nodeToNodeMeshEnabled</code> is set to
<code class="highlighter-rouge">true</code>, update your configuration:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo calicoctl get bgpconfig --output yaml &gt; bgp.yaml
</code></pre></div></div>
<p>Edit the <code class="highlighter-rouge">bgp.yaml</code> file, updating <code class="highlighter-rouge">nodeToNodeMeshEnabled</code> to <code class="highlighter-rouge">false</code>. Then
update the Calico configuration by running:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo calicoctl replace -f bgp.yaml
</code></pre></div></div>
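<p>To confirm that the change took effect, fetch the configuration again and check
that <code class="highlighter-rouge">nodeToNodeMeshEnabled</code> is now <code class="highlighter-rouge">false</code>:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo calicoctl get bgpconfig default --output yaml
</code></pre></div></div>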
<h2 id="configure-calico-to-use-route-reflectors">Configure Calico to use Route Reflectors</h2>
<p>To configure Calico to use the Route Reflectors, you first need to know the
AS number for your network. To find it, run:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo calicoctl get nodes --output=wide
</code></pre></div></div>
<p>Now that you have the AS number, you can create the Calico configuration.
For each Route Reflector, customize and run the following snippet:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo calicoctl create -f - &lt;&lt; EOF
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
name: bgppeer-global
spec:
peerIP: &lt;IP_RR&gt;
asNumber: &lt;AS_NUMBER&gt;
EOF
</code></pre></div></div>
<p>Where:</p>
<ul>
<li><code class="highlighter-rouge">IP_RR</code> is the IP of the node where the Route Reflector pod is deployed.</li>
<li><code class="highlighter-rouge">AS_NUMBER</code> is the same <code class="highlighter-rouge">AS number</code> for your nodes.</li>
</ul>
<p>You can learn more about this configuration in the
<a href="https://docs.projectcalico.org/v3.1/usage/routereflector/calico-routereflector">Calico documentation</a>.</p>
<h2 id="stop-calico-node-pods">Stop calico-node pods</h2>
<p>If you have <code class="highlighter-rouge">calico-node</code> pods running on the nodes dedicated to the
Route Reflector, delete them manually. This ensures that you don't have both
running on the same node.</p>
<p>Using your UCP client bundle, run:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Find the Pod name
kubectl get pods -n kube-system -o wide | grep &lt;node-name&gt;
# Delete the Pod
kubectl delete pod -n kube-system &lt;pod-name&gt;
</code></pre></div></div>
<h2 id="validate-peers">Validate peers</h2>
<p>Now you can confirm that the <code class="highlighter-rouge">calico-node</code> pods running on the other nodes
are peering with the Route Reflector:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo calicoctl node status
</code></pre></div></div>
<p>You should see something like:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>IPv4 BGP status
+--------------+-----------+-------+----------+-------------+
| PEER ADDRESS | PEER TYPE | STATE | SINCE | INFO |
+--------------+-----------+-------+----------+-------------+
| 172.31.24.86 | global | up | 23:10:04 | Established |
+--------------+-----------+-------+----------+-------------+
IPv6 BGP status
No IPv6 peers found.
</code></pre></div></div>


@ -0,0 +1,107 @@
<blockquote>
<p>Beta disclaimer</p>
<p>This is beta content. It is not yet complete and should be considered a work in progress. This content is subject to change without notice.</p>
</blockquote>
<p>SAML is commonly supported by enterprise authentication systems. SAML-based
single sign-on (SSO) gives you access to UCP through a SAML 2.0-compliant
identity provider. UCP supports SAML for authentication as a service provider
integrated with your identity provider.</p>
<p>For more information about SAML, see the <a href="http://saml.xml.org/">SAML XML website</a>.</p>
<p>UCP supports these identity providers:</p>
<ul>
<li><a href="https://www.okta.com/">Okta</a></li>
<li><a href="https://docs.microsoft.com/en-us/windows-server/identity/active-directory-federation-services">ADFS</a></li>
</ul>
<h2 id="configure-identity-provider-integration">Configure identity provider integration</h2>
<p>Your identity provider needs certain values for a successful integration with UCP. These values vary between identity providers; consult your identity provider's documentation for instructions on supplying them as part of its integration process.</p>
<h3 id="okta-integration-values">Okta integration values</h3>
<p>Okta integration requires these values:</p>
<ul>
<li>URL for single sign-on (SSO). This value is the URL for UCP, qualified with <code class="highlighter-rouge">/enzi/v0/saml/acs</code>. For example, <code class="highlighter-rouge">https://111.111.111.111/enzi/v0/saml/acs</code>.</li>
<li>Service provider audience URI. This value is the URL for UCP, qualified with <code class="highlighter-rouge">/enzi/v0/saml/metadata</code>. For example, <code class="highlighter-rouge">https://111.111.111.111/enzi/v0/saml/metadata</code>.</li>
<li>NameID format. Select Unspecified.</li>
<li>Application username. Email. For example, the custom expression <code class="highlighter-rouge">${f:substringBefore(user.email, "@")}</code> specifies the username portion of the email address.</li>
<li>Attribute Statements:
<ul>
<li>Name: <code class="highlighter-rouge">fullname</code>, Value: <code class="highlighter-rouge">user.displayName</code>.</li>
<li>Group Attribute Statement:
Name: <code class="highlighter-rouge">member-of</code>, Filter: (user-defined) for associating group membership. The group name is returned with the assertion.
Name: <code class="highlighter-rouge">is-admin</code>, Filter: (user-defined) for identifying whether the user is an admin.</li>
</ul>
</li>
</ul>
<h3 id="adfs-integration-values">ADFS integration values</h3>
<p>ADFS integration requires these values:</p>
<ul>
<li>Service provider metadata URI. This value is the URL for UCP, qualified with <code class="highlighter-rouge">/enzi/v0/saml/metadata</code>. For example, <code class="highlighter-rouge">https://111.111.111.111/enzi/v0/saml/metadata</code>.</li>
<li>Attribute Store: Active Directory.
<ul>
<li>Add LDAP Attribute = Email Address; Outgoing Claim Type: Email Address</li>
<li>Add LDAP Attribute = Display-Name; Outgoing Claim Type: Common Name</li>
</ul>
</li>
<li>Claim using Custom Rule. For example, <code class="highlighter-rouge">c:[Type == "http://schemas.xmlsoap.org/claims/CommonName"]
=&gt; issue(Type = "fullname", Issuer = c.Issuer, OriginalIssuer = c.OriginalIssuer, Value = c.Value, ValueType = c.ValueType);</code></li>
<li>Outgoing claim type: Name ID</li>
<li>Outgoing name ID format: Transient Identifier</li>
<li>Pass through all claim values</li>
</ul>
<h2 id="configure-the-saml-integration">Configure the SAML integration</h2>
<p>To enable SAML authentication:</p>
<ol>
<li>Go to the UCP web interface.</li>
<li>Navigate to the <strong>Admin Settings</strong>.</li>
<li>
<p>Select <strong>Authentication &amp; Authorization</strong>.</p>
<p><img src="../../images/saml_enabled.png" alt="Enabling SAML in UCP" /></p>
</li>
<li>
<p>In the <strong>SAML Enabled</strong> section, select <strong>Yes</strong> to display the required settings. The settings are grouped by those needed by the identity provider server and by those needed by UCP as a SAML service provider.</p>
<p><img src="../../images/saml_settings.png" alt="Configuring IdP values for SAML in UCP" /></p>
</li>
<li>In <strong>IdP Metadata URL</strong> enter the URL for the identity provider's metadata.</li>
<li>If the metadata URL is publicly certified, you can leave <strong>Skip TLS Verification</strong> unchecked and <strong>Root Certificates Bundle</strong> blank, which is the default. If the metadata URL is NOT certified, you must provide the certificates from the identity provider in the <strong>Root Certificates Bundle</strong> field whether or not you check <strong>Skip TLS Verification</strong>.</li>
<li>
<p>In <strong>UCP Host</strong> enter the URL that includes the IP address or domain of your UCP web interface. The current IP address appears by default.</p>
<p><img src="../../images/saml_settings_2.png" alt="Configuring service provider values for SAML in UCP" /></p>
</li>
<li>To customize the text of the sign-in button, enter your button text in the <strong>Customize Sign In Button Text</strong> field. The default text is Sign in with SAML.</li>
<li>The <strong>Service Provider Metadata URL</strong> and <strong>Assertion Consumer Service (ACS) URL</strong> appear in shaded boxes. Select the copy icon at the right side of each box to copy that URL to the clipboard for pasting in the identity provider workflow.</li>
<li>Select <strong>Save</strong> to complete the integration.</li>
</ol>
<h2 id="security-considerations">Security considerations</h2>
<p>You can download a client bundle to access UCP. A client bundle is a group of certificates, downloadable directly from the UCP web interface, that enables command-line as well as API access to UCP. It lets you authorize a remote Docker engine to access specific user accounts managed in Docker EE, absorbing all associated RBAC controls in the process. You can then run docker swarm commands from your remote machine that take effect on the remote cluster. You can download the client bundle in the <strong>Admin Settings</strong> under <strong>My Profile</strong>.</p>
<p><img src="../../images/client-bundle.png" alt="Downloading UCP Client Profile" /></p>
<blockquote>
<p>Caution</p>
<p>Users who have been previously authorized using a Client Bundle will continue to be able to access UCP regardless of the newly configured SAML access controls. To ensure that access from the client bundle is synced with the identity provider, we recommend the following steps. Otherwise, a previously-authorized user could get access to UCP through their existing client bundle.</p>
<ul>
<li>Remove the user account from UCP that grants the client bundle access.</li>
<li>If group membership in the identity provider changes, replicate this change in UCP.</li>
<li>Continue to use LDAP to sync group membership.</li>
</ul>
</blockquote>


@ -0,0 +1,68 @@
<p>Docker UCP integrates with LDAP directory services, so that you can manage
users and groups from your organization's directory and automatically
propagate this information to UCP and DTR. You can set up your cluster's LDAP
configuration by using the UCP web UI, or you can use a
<a href="../ucp-configuration-file.md">UCP configuration file</a>.</p>
<p>To see an example TOML config file that shows how to configure UCP settings,
run UCP with the <code class="highlighter-rouge">example-config</code> option.
<a href="../ucp-configuration-file.md">Learn about UCP configuration files</a>.</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker container run <span class="nt">--rm</span> /: example-config
</code></pre></div></div>
<h2 id="set-up-ldap-by-using-a-configuration-file">Set up LDAP by using a configuration file</h2>
<ol>
<li>
<p>Use the following command to extract the name of the currently active
configuration from the <code class="highlighter-rouge">ucp-agent</code> service.</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>
<span class="nv">$ CURRENT_CONFIG_NAME</span><span class="o">=</span><span class="k">$(</span>docker service inspect <span class="nt">--format</span> <span class="s1">'{{ range $config := .Spec.TaskTemplate.ContainerSpec.Configs }}{{ $config.ConfigName }}{{ "\n" }}{{ end }}'</span> ucp-agent | <span class="nb">grep</span> <span class="s1">'com.docker.ucp.config-'</span><span class="k">)</span>
</code></pre></div> </div>
</li>
<li>
<p>Get the current configuration and save it to a TOML file.</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>
docker config inspect <span class="nt">--format</span> <span class="s1">'{{ printf "%s" .Spec.Data }}'</span> <span class="nv">$CURRENT_CONFIG_NAME</span> <span class="o">&gt;</span> config.toml
</code></pre></div> </div>
</li>
<li>
<p>Use the output of the <code class="highlighter-rouge">example-config</code> command as a guide to edit your
<code class="highlighter-rouge">config.toml</code> file. Under the <code class="highlighter-rouge">[auth]</code> sections, set <code class="highlighter-rouge">backend = "ldap"</code>
and <code class="highlighter-rouge">[auth.ldap]</code> to configure LDAP integration the way you want.</p>
</li>
<li>
<p>Once you've finished editing your <code class="highlighter-rouge">config.toml</code> file, create a new Docker
Config object by using the following command.</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">NEW_CONFIG_NAME</span><span class="o">=</span><span class="s2">"com.docker.ucp.config-</span><span class="k">$((</span> <span class="k">$(</span>cut <span class="nt">-d</span> <span class="s1">'-'</span> <span class="nt">-f</span> 2 <span class="o">&lt;&lt;&lt;</span> <span class="s2">"</span><span class="nv">$CURRENT_CONFIG_NAME</span><span class="s2">"</span><span class="k">)</span> <span class="o">+</span> <span class="m">1</span> <span class="k">))</span><span class="s2">"</span>
docker config create <span class="nv">$NEW_CONFIG_NAME</span> config.toml
</code></pre></div> </div>
</li>
<li>
<p>Update the <code class="highlighter-rouge">ucp-agent</code> service to remove the reference to the old config
and add a reference to the new config.</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker service update <span class="nt">--config-rm</span> <span class="s2">"</span><span class="nv">$CURRENT_CONFIG_NAME</span><span class="s2">"</span> <span class="nt">--config-add</span> <span class="s2">"source=</span><span class="k">${</span><span class="nv">NEW_CONFIG_NAME</span><span class="k">}</span><span class="s2">,target=/etc/ucp/ucp.toml"</span> ucp-agent
</code></pre></div> </div>
</li>
<li>
<p>Wait a few moments for the <code class="highlighter-rouge">ucp-agent</code> service tasks to update across
your cluster. If you set <code class="highlighter-rouge">jit_user_provisioning = true</code> in the LDAP
configuration, users matching any of your specified search queries will
have their accounts created when they log in with their username and LDAP
password.</p>
</li>
</ol>
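<p>To confirm that the service picked up the new configuration, list the configs
referenced by <code class="highlighter-rouge">ucp-agent</code> again; the output should show the new config name:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker service inspect --format '{{ range $config := .Spec.TaskTemplate.ContainerSpec.Configs }}{{ $config.ConfigName }}{{ "\n" }}{{ end }}' ucp-agent
</code></pre></div></div>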
<h2 id="where-to-go-next">Where to go next</h2>
<ul>
<li><a href="../../../authorization/create-users-and-teams-manually.md">Create users and teams manually</a></li>
<li><a href="../../../authorization/create-teams-with-ldap.md">Create teams with LDAP</a></li>
</ul>


@ -0,0 +1,351 @@
<p>Docker UCP integrates with LDAP directory services, so that you can manage
users and groups from your organization's directory and automatically
propagate that information to UCP and DTR.</p>
<p>If you enable LDAP, UCP uses a remote directory server to create users
automatically, and all logins are forwarded to the directory server.</p>
<p>When you switch from built-in authentication to LDAP authentication,
all manually created users whose usernames don't match any LDAP search results
are still available.</p>
<p>When you enable LDAP authentication, you can choose whether UCP creates user
accounts only when users log in for the first time. Select the
<strong>Just-In-Time User Provisioning</strong> option to ensure that the only LDAP
accounts that exist in UCP are those that have had a user log in to UCP.</p>
<h2 id="how-ucp-integrates-with-ldap">How UCP integrates with LDAP</h2>
<p>You control how UCP integrates with LDAP by creating searches for users.
You can specify multiple search configurations, and you can specify multiple
LDAP servers to integrate with. Searches start with the <code class="highlighter-rouge">Base DN</code>, which is
the <em>distinguished name</em> of the node in the LDAP directory tree where the
search starts looking for users.</p>
<p>Access LDAP settings by navigating to the <strong>Authentication &amp; Authorization</strong>
page in the UCP web UI. There are two sections for controlling LDAP searches
and servers.</p>
<ul>
<li><strong>LDAP user search configurations:</strong> This is the section of the
<strong>Authentication &amp; Authorization</strong> page where you specify search
parameters, like <code class="highlighter-rouge">Base DN</code>, <code class="highlighter-rouge">scope</code>, <code class="highlighter-rouge">filter</code>, the <code class="highlighter-rouge">username</code> attribute,
and the <code class="highlighter-rouge">full name</code> attribute. These searches are stored in a list, and
the ordering may be important, depending on your search configuration.</li>
<li><strong>LDAP server:</strong> This is the section where you specify the URL of an LDAP
server, TLS configuration, and credentials for doing the search requests.
Also, you provide a domain for all servers but the first one. The first
server is considered the default domain server. Any others are associated
with the domain that you specify in the page.</li>
</ul>
<p>Here's what happens when UCP synchronizes with LDAP:</p>
<ol>
<li>UCP creates a set of search results by iterating over each of the user
search configs, in the order that you specify.</li>
<li>UCP chooses an LDAP server from the list of domain servers by considering the
<code class="highlighter-rouge">Base DN</code> from the user search config and selecting the domain server that
has the longest domain suffix match.</li>
<li>If no domain server has a domain suffix that matches the <code class="highlighter-rouge">Base DN</code> from the
search config, UCP uses the default domain server.</li>
<li>UCP combines the search results into a list of users and creates UCP
accounts for them. If the <strong>Just-In-Time User Provisioning</strong> option is set,
user accounts are created only when users first log in.</li>
</ol>
<p>The domain server to use is determined by the <code class="highlighter-rouge">Base DN</code> in each search config.
UCP doesn't perform search requests against each of the domain servers, only
the one which has the longest matching domain suffix, or the default if there's
no match.</p>
<p>Here's an example. Let's say we have three LDAP domain servers:</p>
<table>
<thead>
<tr>
<th>Domain</th>
<th>Server URL</th>
</tr>
</thead>
<tbody>
<tr>
<td><em>default</em></td>
<td>ldaps://ldap.example.com</td>
</tr>
<tr>
<td><code class="highlighter-rouge">dc=subsidiary1,dc=com</code></td>
<td>ldaps://ldap.subsidiary1.com</td>
</tr>
<tr>
<td><code class="highlighter-rouge">dc=subsidiary2,dc=subsidiary1,dc=com</code></td>
<td>ldaps://ldap.subsidiary2.com</td>
</tr>
</tbody>
</table>
<p>Here are three user search configs with the following <code class="highlighter-rouge">Base DNs</code>:</p>
<ul>
<li>
<p>baseDN=<code class="highlighter-rouge">ou=people,dc=subsidiary1,dc=com</code></p>
<p>For this search config, <code class="highlighter-rouge">dc=subsidiary1,dc=com</code> is the only server with a
domain which is a suffix, so UCP uses the server <code class="highlighter-rouge">ldaps://ldap.subsidiary1.com</code>
for the search request.</p>
</li>
<li>
<p>baseDN=<code class="highlighter-rouge">ou=product,dc=subsidiary2,dc=subsidiary1,dc=com</code></p>
<p>For this search config, two of the domain servers have a domain which is a
suffix of this base DN, but <code class="highlighter-rouge">dc=subsidiary2,dc=subsidiary1,dc=com</code> is the
longer of the two, so UCP uses the server <code class="highlighter-rouge">ldaps://ldap.subsidiary2.com</code>
for the search request.</p>
</li>
<li>
<p>baseDN=<code class="highlighter-rouge">ou=eng,dc=example,dc=com</code></p>
<p>For this search config, there is no server with a domain specified which is
a suffix of this base DN, so UCP uses the default server, <code class="highlighter-rouge">ldaps://ldap.example.com</code>,
for the search request.</p>
</li>
</ul>
<p>If there are <code class="highlighter-rouge">username</code> collisions in the search results between domains, UCP
uses only the first search result, so the ordering of the user search configs
may be important. For example, if both the first and third user search configs
result in a record with the username <code class="highlighter-rouge">jane.doe</code>, the record from the first
takes precedence and the one from the third is ignored. For this reason, it's
important to choose a <code class="highlighter-rouge">username</code> attribute that's unique for your users across all domains.</p>
<p>Because names may collide, it's a good idea to use something unique to the
subsidiary, like the email address for each person. Users can log in with the
email address, for example, <code class="highlighter-rouge">jane.doe@subsidiary1.com</code>.</p>
<h2 id="configure-the-ldap-integration">Configure the LDAP integration</h2>
<p>To configure UCP to create and authenticate users by using an LDAP directory,
go to the UCP web UI, navigate to the <strong>Admin Settings</strong> page and click
<strong>Authentication &amp; Authorization</strong> to select the method used to create and
authenticate users.</p>
<p><img src="../../../images/authentication-authorization.png" alt="" /></p>
<p>In the <strong>LDAP Enabled</strong> section, click <strong>Yes</strong>. The LDAP settings appear.
Now configure your LDAP directory integration.</p>
<h2 id="default-role-for-all-private-collections">Default role for all private collections</h2>
<p>Use this setting to change the default permissions of new users.</p>
<p>Click the dropdown to select the permission level that UCP assigns by default
to the private collections of new users. For example, if you change the value
to <code class="highlighter-rouge">View Only</code>, all users who log in for the first time after the setting is
changed have <code class="highlighter-rouge">View Only</code> access to their private collections, but permissions
remain unchanged for all existing users.
<a href="../../../authorization/define-roles.md">Learn more about permission levels</a>.</p>
<h2 id="ldap-enabled">LDAP enabled</h2>
<p>Click <strong>Yes</strong> to enable integrating UCP users and teams with LDAP servers.</p>
<h2 id="ldap-server">LDAP server</h2>
<table>
<thead>
<tr>
<th style="text-align: left">Field</th>
<th style="text-align: left">Description</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left">LDAP server URL</td>
<td style="text-align: left">The URL where the LDAP server can be reached.</td>
</tr>
<tr>
<td style="text-align: left">Reader DN</td>
<td style="text-align: left">The distinguished name of the LDAP account used for searching entries in the LDAP server. As a best practice, this should be an LDAP read-only user.</td>
</tr>
<tr>
<td style="text-align: left">Reader password</td>
<td style="text-align: left">The password of the account used for searching entries in the LDAP server.</td>
</tr>
<tr>
<td style="text-align: left">Use Start TLS</td>
<td style="text-align: left">Whether to authenticate/encrypt the connection after connecting to the LDAP server over TCP. If you set the LDAP Server URL field with <code class="highlighter-rouge">ldaps://</code>, this field is ignored.</td>
</tr>
<tr>
<td style="text-align: left">Skip TLS verification</td>
<td style="text-align: left">Whether to verify the LDAP server certificate when using TLS. The connection is still encrypted but vulnerable to man-in-the-middle attacks.</td>
</tr>
<tr>
<td style="text-align: left">No simple pagination</td>
<td style="text-align: left">If your LDAP server doesnt support pagination.</td>
</tr>
<tr>
<td style="text-align: left">Just-In-Time User Provisioning</td>
<td style="text-align: left">Whether to create user accounts only when users log in for the first time. The default value of <code class="highlighter-rouge">true</code> is recommended. If you upgraded from UCP 2.0.x, the default is <code class="highlighter-rouge">false</code>.</td>
</tr>
</tbody>
</table>
<p><img src="../../../images/ldap-integration-1.png" alt="" class="with-border" /></p>
<p>Click <strong>Confirm</strong> to add your LDAP domain.</p>
<p>To integrate with more LDAP servers, click <strong>Add LDAP Domain</strong>.</p>
<h2 id="ldap-user-search-configurations">LDAP user search configurations</h2>
<table>
<thead>
<tr>
<th style="text-align: left">Field</th>
<th style="text-align: left">Description</th>
<th> </th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left">Base DN</td>
<td style="text-align: left">The distinguished name of the node in the directory tree where the search should start looking for users.</td>
<td> </td>
</tr>
<tr>
<td style="text-align: left">Username attribute</td>
<td style="text-align: left">The LDAP attribute to use as username on UCP. Only user entries with a valid username will be created. A valid username is no longer than 100 characters and does not contain any unprintable characters, whitespace characters, or any of the following characters: <code class="highlighter-rouge">/</code> <code class="highlighter-rouge">\</code> <code class="highlighter-rouge">[</code> <code class="highlighter-rouge">]</code> <code class="highlighter-rouge">:</code> <code class="highlighter-rouge">;</code> <code class="highlighter-rouge">|</code> <code class="highlighter-rouge">=</code> <code class="highlighter-rouge">,</code> <code class="highlighter-rouge">+</code> <code class="highlighter-rouge">*</code> <code class="highlighter-rouge">?</code> <code class="highlighter-rouge">&lt;</code> <code class="highlighter-rouge">&gt;</code> <code class="highlighter-rouge">'</code> <code class="highlighter-rouge">"</code>.</td>
<td> </td>
</tr>
<tr>
<td style="text-align: left">Full name attribute</td>
<td style="text-align: left">The LDAP attribute to use as the users full name for display purposes. If left empty, UCP will not create new users with a full name value.</td>
<td> </td>
</tr>
<tr>
<td style="text-align: left">Filter</td>
<td style="text-align: left">The LDAP search filter used to find users. If you leave this field empty, all directory entries in the search scope with valid username attributes are created as users.</td>
<td> </td>
</tr>
<tr>
<td style="text-align: left">Search subtree instead of just one level</td>
<td style="text-align: left">Whether to perform the LDAP search on a single level of the LDAP tree, or search through the full LDAP tree starting at the Base DN.</td>
<td> </td>
</tr>
<tr>
<td style="text-align: left">Match Group Members</td>
<td style="text-align: left">Whether to further filter users by selecting those who are also members of a specific group on the directory server. This feature is helpful if the LDAP server does not support <code class="highlighter-rouge">memberOf</code> search filters.</td>
<td> </td>
</tr>
<tr>
<td style="text-align: left">Iterate through group members</td>
<td style="text-align: left">If <code class="highlighter-rouge">Select Group Members</code> is selected, this option searches for users by first iterating over the target groups membership, making a separate LDAP query for each member, as opposed to first querying for all users which match the above search query and intersecting those with the set of group members. This option can be more efficient in situations where the number of members of the target group is significantly smaller than the number of users which would match the above search filter, or if your directory server does not support simple pagination of search results.</td>
<td> </td>
</tr>
<tr>
<td style="text-align: left">Group DN</td>
<td style="text-align: left">If <code class="highlighter-rouge">Select Group Members</code> is selected, this specifies the distinguished name of the group from which to select users.</td>
<td> </td>
</tr>
<tr>
<td style="text-align: left">Group Member Attribute</td>
<td style="text-align: left">If <code class="highlighter-rouge">Select Group Members</code> is selected, the value of this group attribute corresponds to the distinguished names of the members of the group.</td>
<td> </td>
</tr>
</tbody>
</table>
<p><img src="../../../images/ldap-integration-2.png" alt="" class="with-border" /></p>
<p>To configure more user search queries, click <strong>Add LDAP User Search Configuration</strong>
again. This is useful in cases where users may be found in multiple distinct
subtrees of your organizations directory. Any user entry which matches at
least one of the search configurations will be synced as a user.</p>
<h2 id="ldap-test-login">LDAP test login</h2>
<table>
<thead>
<tr>
<th style="text-align: left">Field</th>
<th style="text-align: left">Description</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left">Username</td>
<td style="text-align: left">An LDAP username for testing authentication to this application. This value corresponds with the <strong>Username Attribute</strong> specified in the <strong>LDAP user search configurations</strong> section.</td>
</tr>
<tr>
<td style="text-align: left">Password</td>
<td style="text-align: left">The users password used to authenticate (BIND) to the directory server.</td>
</tr>
</tbody>
</table>
<p>Before you save the configuration changes, you should test that the integration
is correctly configured. You can do this by providing the credentials of an
LDAP user, and clicking the <strong>Test</strong> button.</p>
<h2 id="ldap-sync-configuration">LDAP sync configuration</h2>
<table>
<thead>
<tr>
<th style="text-align: left">Field</th>
<th style="text-align: left">Description</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left">Sync interval</td>
<td style="text-align: left">The interval, in hours, to synchronize users between UCP and the LDAP server. When the synchronization job runs, new users found in the LDAP server are created in UCP with the default permission level. UCP users that dont exist in the LDAP server become inactive.</td>
</tr>
<tr>
<td style="text-align: left">Enable sync of admin users</td>
<td style="text-align: left">This option specifies that system admins should be synced directly with members of a group in your organizations LDAP directory. The admins will be synced to match the membership of the group. The configured recovery admin user will also remain a system admin.</td>
</tr>
</tbody>
</table>
<p>Once you've configured the LDAP integration, UCP synchronizes users at
the interval you've defined, starting at the top of the hour. When the
synchronization runs, UCP stores logs that can help you troubleshoot when
something goes wrong.</p>
<p>You can also manually synchronize users by clicking <strong>Sync Now</strong>.</p>
<h2 id="revoke-user-access">Revoke user access</h2>
<p>When a user is removed from LDAP, the effect on the user's UCP account depends
on the <strong>Just-In-Time User Provisioning</strong> setting:</p>
<ul>
<li><strong>Just-In-Time User Provisioning</strong> is <code class="highlighter-rouge">false</code>: Users deleted from LDAP become
inactive in UCP after the next LDAP synchronization runs.</li>
<li><strong>Just-In-Time User Provisioning</strong> is <code class="highlighter-rouge">true</code>: Users deleted from LDAP cant
authenticate, but their UCP accounts remain active. This means that they can
use their client bundles to run commands. To prevent this, deactivate their
UCP user accounts.</li>
</ul>
<h2 id="data-synced-from-your-organizations-ldap-directory">Data synced from your organizations LDAP directory</h2>
<p>UCP saves a minimum amount of user data required to operate. This includes
the value of the username and full name attributes that you have specified in
the configuration as well as the distinguished name of each synced user.
UCP does not store any additional data from the directory server.</p>
<h2 id="sync-teams">Sync teams</h2>
<p>UCP enables syncing teams with a search query or group in your organization's
LDAP directory.
<a href="../../../authorization/create-teams-with-ldap.md">Sync team members with your organization's LDAP directory</a>.</p>
<h2 id="where-to-go-next">Where to go next</h2>
<ul>
<li><a href="../../../authorization/create-users-and-teams-manually.md">Create users and teams manually</a></li>
<li><a href="../../../authorization/create-teams-with-ldap.md">Create teams with LDAP</a></li>
<li><a href="enable-ldap-config-file.md">Enable LDAP integration by using a configuration file</a></li>
</ul>


@ -0,0 +1,65 @@
<p>Universal Control Plane can pull and run images from any image registry,
including Docker Trusted Registry and Docker Store.</p>
<p>If your registry uses globally-trusted TLS certificates, everything works
out of the box, and you don't need to configure anything. But if your registries
use self-signed certificates or certificates issued by your own Certificate
Authority, you need to configure UCP to trust those registries.</p>
<h2 id="trust-docker-trusted-registry">Trust Docker Trusted Registry</h2>
<p>To configure UCP to trust a DTR deployment, you need to update the
<a href="ucp-configuration-file.md">UCP system configuration</a> to include one entry for
each DTR deployment:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>[[registries]]
host_address = "dtr.example.org"
ca_bundle = """
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----"""
[[registries]]
host_address = "internal-dtr.example.org:444"
ca_bundle = """
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----"""
</code></pre></div></div>
<p>You only need to include the port section if your DTR deployment is running
on a port other than 443.</p>
<p>You can customize and use the script below to generate a file named
<code class="highlighter-rouge">trust-dtr.toml</code> with the configuration needed for your DTR deployment.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Replace this url by your DTR deployment url and port
DTR_URL=https://dtr.example.org
DTR_PORT=443
dtr_full_url=${DTR_URL}:${DTR_PORT}
dtr_ca_url=${dtr_full_url}/ca
# Strip protocol and default https port
dtr_host_address=${dtr_full_url#"https://"}
dtr_host_address=${dtr_host_address%":443"}
# Create the registry configuration and save it
cat &lt;&lt;EOL &gt; trust-dtr.toml
[[registries]]
# The host address must not contain the protocol, or the port if using 443
host_address = "$dtr_host_address"
ca_bundle = """
$(curl -sk $dtr_ca_url)"""
EOL
</code></pre></div></div>
<p>You can then append the content of <code class="highlighter-rouge">trust-dtr.toml</code> to your current UCP
configuration to make UCP trust this DTR deployment.</p>
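<p>As a sketch, assuming you exported the current configuration to a
<code class="highlighter-rouge">config.toml</code> file as described in the LDAP configuration how-to, you can
append the generated entry before uploading the result:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>cat trust-dtr.toml &gt;&gt; config.toml
</code></pre></div></div>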
<h2 id="where-to-go-next">Where to go next</h2>
<ul>
<li><a href="external-auth/enable-ldap-config-file.md">Integrate with LDAP by using a configuration file</a></li>
</ul>


@ -0,0 +1,59 @@
<p>Docker Universal Control Plane is designed for high availability (HA). You can
join multiple manager nodes to the cluster, so that if one manager node fails,
another can automatically take its place without impact to the cluster.</p>
<p>Having multiple manager nodes in your cluster allows you to:</p>
<ul>
<li>Handle manager node failures,</li>
<li>Load-balance user requests across all manager nodes.</li>
</ul>
<h2 id="size-your-deployment">Size your deployment</h2>
<p>To make the cluster tolerant to more failures, add additional manager nodes to
your cluster.</p>
<table>
<thead>
<tr>
<th style="text-align: center">Manager nodes</th>
<th style="text-align: center">Failures tolerated</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center">1</td>
<td style="text-align: center">0</td>
</tr>
<tr>
<td style="text-align: center">3</td>
<td style="text-align: center">1</td>
</tr>
<tr>
<td style="text-align: center">5</td>
<td style="text-align: center">2</td>
</tr>
</tbody>
</table>
<p>For production-grade deployments, follow these rules of thumb:</p>
<ul>
<li>When a manager node fails, the number of failures tolerated by your cluster
decreases. Don't leave that node offline for too long.</li>
<li>You should distribute your manager nodes across different availability
zones. This way your cluster can continue working even if an entire
availability zone goes down.</li>
<li>Adding many manager nodes to the cluster might lead to performance
degradation, as changes to configurations need to be replicated across all
manager nodes. The maximum advisable is seven manager nodes.</li>
</ul>
<h2 id="where-to-go-next">Where to go next</h2>
<ul>
<li><a href="join-linux-nodes-to-cluster.md">Join nodes to your cluster</a></li>
<li><a href="join-windows-nodes-to-cluster.md">Join Windows worker nodes to your cluster</a></li>
<li><a href="use-a-load-balancer.md">Use a load balancer</a></li>
</ul>


@ -0,0 +1,143 @@
<p>Docker EE is designed for scaling horizontally as your applications grow in
size and usage. You can add or remove nodes from the cluster to scale it
to your needs. You can join Windows Server 2016, IBM z System, and Linux nodes
to the cluster.</p>
<p>Because Docker EE leverages the clustering functionality provided by Docker
Engine, you use the <a href="/engine/swarm/swarm-tutorial/add-nodes.md">docker swarm join</a>
command to add more nodes to your cluster. When you join a new node, Docker EE
services start running on the node automatically.</p>
<h2 id="node-roles">Node roles</h2>
<p>When you join a node to a cluster, you specify its role: manager or worker.</p>
<ul>
<li>
<p><strong>Manager</strong>: Manager nodes are responsible for cluster management
functionality and dispatching tasks to worker nodes. Having multiple
manager nodes allows your swarm to be highly available and tolerant of
node failures.</p>
<p>Manager nodes also run all Docker EE components in a replicated way, so
by adding additional manager nodes, you're also making the cluster highly
available.
<a href="/enterprise/docker-ee-architecture.md">Learn more about the Docker EE architecture.</a></p>
</li>
<li>
<p><strong>Worker</strong>: Worker nodes receive and execute your services and applications.
Having multiple worker nodes allows you to scale the computing capacity of
your cluster.</p>
<p>When deploying Docker Trusted Registry in your cluster, you deploy it to a
worker node.</p>
</li>
</ul>
<h2 id="join-a-node-to-the-cluster">Join a node to the cluster</h2>
<p>You can join Windows Server 2016, IBM z System, and Linux nodes to the cluster,
but only Linux nodes can be managers.</p>
<p>To join nodes to the cluster, go to the Docker EE web UI and navigate to the
<strong>Nodes</strong> page.</p>
<ol>
<li>Click <strong>Add Node</strong> to add a new node.</li>
<li>Select the type of node to add, <strong>Windows</strong> or <strong>Linux</strong>.</li>
<li>Click <strong>Manager</strong> if you want to add the node as a manager.</li>
<li>Check the <strong>Use a custom listen address</strong> option to specify the address
and port where the new node listens for inbound cluster management traffic.</li>
<li>Check the <strong>Use a custom advertise address</strong> option to specify the
IP address that's advertised to all members of the cluster for API access.</li>
</ol>
<p><img src="../../../images/join-nodes-to-cluster-2.png" alt="" class="with-border" /></p>
<p>Copy the displayed command, use SSH to log in to the host that you want to
join to the cluster, and run the <code class="highlighter-rouge">docker swarm join</code> command on the host.</p>
<p>To add a Windows node, click <strong>Windows</strong> and follow the instructions in
<a href="join-windows-nodes-to-cluster.md">Join Windows worker nodes to a cluster</a>.</p>
<p>After you run the join command in the node, the node is displayed on the
<strong>Nodes</strong> page in the Docker EE web UI. From there, you can change the node's
cluster configuration, including its assigned orchestrator type.
<a href="../set-orchestrator-type.md">Learn how to change the orchestrator for a node</a>.</p>
<h2 id="pause-or-drain-a-node">Pause or drain a node</h2>
<p>Once a node is part of the cluster, you can configure the node's availability
so that it is:</p>
<ul>
<li><strong>Active</strong>: the node can receive and execute tasks.</li>
<li><strong>Paused</strong>: the node continues running existing tasks, but doesn't receive
new tasks.</li>
<li><strong>Drained</strong>: the node won't receive new tasks. Existing tasks are stopped and
replica tasks are launched on active nodes.</li>
</ul>
<p>Pause or drain a node from the <strong>Edit Node</strong> page:</p>
<ol>
<li>In the Docker EE web UI, browse to the <strong>Nodes</strong> page and select the node.</li>
<li>In the details pane, click <strong>Configure</strong> and select <strong>Details</strong> to open
the <strong>Edit Node</strong> page.</li>
<li>In the <strong>Availability</strong> section, click <strong>Active</strong>, <strong>Pause</strong>, or <strong>Drain</strong>.</li>
<li>Click <strong>Save</strong> to change the availability of the node.</li>
</ol>
<p><img src="../../../images/join-nodes-to-cluster-3.png" alt="" class="with-border" /></p>
<h2 id="promote-or-demote-a-node">Promote or demote a node</h2>
<p>You can promote worker nodes to managers to make UCP fault tolerant. You can
also demote a manager node into a worker.</p>
<p>To promote a worker node or demote a manager node:</p>
<ol>
<li>Navigate to the <strong>Nodes</strong> page, and click the node that you want to demote.</li>
<li>In the details pane, click <strong>Configure</strong> and select <strong>Details</strong> to open
the <strong>Edit Node</strong> page.</li>
<li>In the <strong>Role</strong> section, click <strong>Manager</strong> or <strong>Worker</strong>.</li>
<li>Click <strong>Save</strong> and wait until the operation completes.</li>
<li>Navigate to the <strong>Nodes</strong> page, and confirm that the node role has changed.</li>
</ol>
<p>If you're load-balancing user requests to Docker EE across multiple manager
nodes, don't forget to remove these nodes from your load-balancing pool when
you demote them to workers.</p>
<h2 id="remove-a-node-from-the-cluster">Remove a node from the cluster</h2>
<p>You can remove worker nodes from the cluster at any time:</p>
<ol>
<li>Navigate to the <strong>Nodes</strong> page and select the node.</li>
<li>In the details pane, click <strong>Actions</strong> and select <strong>Remove</strong>.</li>
<li>Click <strong>Confirm</strong> when you're prompted.</li>
</ol>
<p>Since manager nodes are important to the cluster's overall health, you need to
be careful when removing one from the cluster.</p>
<p>To remove a manager node:</p>
<ol>
<li>Make sure all nodes in the cluster are healthy. Don't remove manager nodes
if that's not the case.</li>
<li>Demote the manager node into a worker.</li>
<li>Now you can remove that node from the cluster.</li>
</ol>
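<p>The same sequence from the CLI might look like the following sketch, assuming
a UCP client bundle and <code class="highlighter-rouge">&lt;node-id&gt;</code> placeholders:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code># On a manager, demote the target node
docker node demote &lt;node-id&gt;
# On the target node itself, leave the cluster once demotion completes
docker swarm leave
# Back on a manager, remove the node once it reports as Down
docker node rm &lt;node-id&gt;
</code></pre></div></div>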
<h2 id="use-the-cli-to-manage-your-nodes">Use the CLI to manage your nodes</h2>
<p>You can use the Docker CLI client to manage your nodes from the CLI. To do
this, configure your Docker CLI client with a <a href="../../../user-access/cli.md">UCP client bundle</a>.</p>
<p>Once you do that, you can start managing your UCP nodes:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker node <span class="nb">ls</span>
</code></pre></div></div>
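<p>From there, the usual <code class="highlighter-rouge">docker node</code> subcommands apply. For example, as a
sketch with <code class="highlighter-rouge">&lt;node-id&gt;</code> as a placeholder:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Show details for a single node
docker node inspect --pretty &lt;node-id&gt;
# List the tasks running on a node
docker node ps &lt;node-id&gt;
</code></pre></div></div>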

View File

@ -0,0 +1,235 @@
<p>Docker Enterprise Edition supports worker nodes that run on Windows Server 2016 or 1709.
Only worker nodes are supported on Windows, and all manager nodes in the cluster
must run on Linux.</p>
<p>Follow these steps to enable a worker node on Windows.</p>
<ol>
<li>Install Docker EE Engine on Windows Server 2016 or 1709.</li>
<li>Configure the Windows node.</li>
<li>Join the Windows node to the cluster.</li>
</ol>
<h2 id="install-docker-ee-engine-on-windows-server-2016-or-1709">Install Docker EE Engine on Windows Server 2016 or 1709</h2>
<p><a href="/engine/installation/windows/docker-ee/#use-a-script-to-install-docker-ee">Install Docker EE Engine</a>
on a Windows Server 2016 or 1709 instance to enable joining a cluster that's managed by
Docker Enterprise Edition.</p>
<h2 id="configure-the-windows-node">Configure the Windows node</h2>
<p>Follow these steps to configure the Docker daemon and the Windows environment.</p>
<ol>
<li>Add a label to the node.</li>
<li>Pull the Windows-specific image of <code class="highlighter-rouge">ucp-agent</code>, which is named <code class="highlighter-rouge">ucp-agent-win</code>.</li>
<li>Run the Windows worker setup script provided with <code class="highlighter-rouge">ucp-agent-win</code>.</li>
<li>Join the cluster with the token provided by the Docker EE web UI or CLI.</li>
</ol>
<h3 id="add-a-label-to-the-node">Add a label to the node</h3>
<p>Configure the Docker Engine running on the node to have a label. This makes
it easier to deploy applications on nodes with this label.</p>
<p>Create the file <code class="highlighter-rouge">C:\ProgramData\docker\config\daemon.json</code> with the following
content:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>{
"labels": ["os=windows"]
}
</code></pre></div></div>
<p>Restart Docker for the changes to take effect:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Restart-Service docker
</code></pre></div></div>
<h3 id="pull-the-windows-specific-images">Pull the Windows-specific images</h3>
<p>On a manager node, run the following command to list the images that are required
on Windows nodes, replacing <code class="highlighter-rouge">&lt;version&gt;</code> with your UCP version.</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker container run <span class="nt">--rm</span> /: images <span class="nt">--list</span> <span class="nt">--enable-windows</span>
/ucp-agent-win:
/ucp-dsinfo-win:
</code></pre></div></div>
<p>On Windows Server 2016, in a PowerShell terminal running as Administrator,
log in to Docker Hub with the <code class="highlighter-rouge">docker login</code> command and pull the listed images.</p>
<div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker image pull /ucp-agent-win:
docker image pull /ucp-dsinfo-win:
</code></pre></div></div>
<h3 id="run-the-windows-node-setup-script">Run the Windows node setup script</h3>
<p>You need to open ports 2376 and 12376, and create certificates
for the Docker daemon to communicate securely. Use this command to run
the Windows node setup script:</p>
<div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$script</span> <span class="o">=</span> <span class="o">[</span>ScriptBlock]::Create<span class="o">((</span>docker run --rm /ucp-agent-win: windows-script | <span class="nb">Out-String</span><span class="o">))</span>
Invoke-Command <span class="nv">$script</span>
</code></pre></div></div>
<blockquote>
<p>Docker daemon restart</p>
<p>When you run <code class="highlighter-rouge">windows-script</code>, the Docker service is temporarily unavailable.</p>
</blockquote>
<p>The Windows node is ready to join the cluster. Run the setup script on each
instance of Windows Server that will be a worker node.</p>
<h3 id="compatibility-with-daemonjson">Compatibility with daemon.json</h3>
<p>The script may be incompatible with installations that use a config file at
<code class="highlighter-rouge">C:\ProgramData\docker\config\daemon.json</code>. If you use such a file, make sure
that the daemon runs on port 2376 and that it uses certificates located in
<code class="highlighter-rouge">C:\ProgramData\docker\daemoncerts</code>. If certificates dont exist in this
directory, run <code class="highlighter-rouge">ucp-agent-win generate-certs</code>, as shown in Step 2 of the
procedure in <a href="#set-up-certs-for-the-dockerd-service">Set up certs for the dockerd service</a>.</p>
<p>In the daemon.json file, set the <code class="highlighter-rouge">tlscacert</code>, <code class="highlighter-rouge">tlscert</code>, and <code class="highlighter-rouge">tlskey</code> options
to the corresponding files in <code class="highlighter-rouge">C:\ProgramData\docker\daemoncerts</code>:</p>
<div class="language-json highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">{</span><span class="w">
</span><span class="err">...</span><span class="w">
</span><span class="s2">"debug"</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="p">,</span><span class="w">
</span><span class="s2">"tls"</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="p">,</span><span class="w">
</span><span class="s2">"tlscacert"</span><span class="p">:</span><span class="w"> </span><span class="s2">"C:</span><span class="se">\P</span><span class="s2">rogramData</span><span class="se">\d</span><span class="s2">ocker</span><span class="se">\d</span><span class="s2">aemoncerts</span><span class="se">\c</span><span class="s2">a.pem"</span><span class="p">,</span><span class="w">
</span><span class="s2">"tlscert"</span><span class="p">:</span><span class="w"> </span><span class="s2">"C:</span><span class="se">\P</span><span class="s2">rogramData</span><span class="se">\d</span><span class="s2">ocker</span><span class="se">\d</span><span class="s2">aemoncerts</span><span class="se">\c</span><span class="s2">ert.pem"</span><span class="p">,</span><span class="w">
</span><span class="s2">"tlskey"</span><span class="p">:</span><span class="w"> </span><span class="s2">"C:</span><span class="se">\P</span><span class="s2">rogramData</span><span class="se">\d</span><span class="s2">ocker</span><span class="se">\d</span><span class="s2">aemoncerts</span><span class="se">\k</span><span class="s2">ey.pem"</span><span class="p">,</span><span class="w">
</span><span class="s2">"tlsverify"</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="p">,</span><span class="w">
</span><span class="err">...</span><span class="w">
</span><span class="p">}</span><span class="w">
</span></code></pre></div></div>
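<p>After updating <code class="highlighter-rouge">daemon.json</code>, restart the Docker service so the TLS settings
take effect:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Restart-Service docker
</code></pre></div></div>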
<h2 id="join-the-windows-node-to-the-cluster">Join the Windows node to the cluster</h2>
<p>Now you can join the cluster by using the <code class="highlighter-rouge">docker swarm join</code> command that's
provided by the Docker EE web UI and CLI.</p>
<ol>
<li>Log in to the Docker EE web UI with an administrator account.</li>
<li>Navigate to the <strong>Nodes</strong> page.</li>
<li>Click <strong>Add Node</strong> to add a new node.</li>
<li>In the <strong>Node Type</strong> section, click <strong>Windows</strong>.</li>
<li>In the <strong>Step 2</strong> section, click the checkbox for
“I'm ready to join my windows node.”</li>
<li>Check the <strong>Use a custom listen address</strong> option to specify the address
and port where the new node listens for inbound cluster management traffic.</li>
<li>
<p>Check the <strong>Use a custom advertise address</strong> option to specify the
IP address that's advertised to all members of the cluster for API access.</p>
<p><img src="../../../images/join-windows-nodes-to-cluster-1.png" alt="" class="with-border" /></p>
</li>
</ol>
<p>Copy the displayed command. It looks similar to the following:</p>
<div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker swarm join --token &lt;token&gt; &lt;ucp-manager-ip&gt;
</code></pre></div></div>
<p>You can also use the command line to get the join token. Using your
<a href="../../../user-access/cli.md">UCP client bundle</a>, run:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker swarm join-token worker
</code></pre></div></div>
<p>Run the <code class="highlighter-rouge">docker swarm join</code> command on each instance of Windows Server that
will be a worker node.</p>
<h2 id="configure-a-windows-worker-node-manually">Configure a Windows worker node manually</h2>
<p>The following sections describe how to run the commands in the setup script
manually to configure the <code class="highlighter-rouge">dockerd</code> service and the Windows environment.
The script opens ports in the firewall and sets up certificates for <code class="highlighter-rouge">dockerd</code>.</p>
<p>To see the script, you can run the <code class="highlighter-rouge">windows-script</code> command without executing
it through the <code class="highlighter-rouge">Invoke-Command</code> cmdlet.</p>
<div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker container run --rm /ucp-agent-win: windows-script
</code></pre></div></div>
<h3 id="open-ports-in-the-windows-firewall">Open ports in the Windows firewall</h3>
<p>Docker EE requires that ports 2376 and 12376 are open for inbound TCP traffic.</p>
<p>In a PowerShell terminal running as Administrator, run these commands
to add rules to the Windows firewall.</p>
<div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code>netsh advfirewall firewall add rule <span class="nv">name</span><span class="o">=</span><span class="s2">"docker_local"</span> <span class="nb">dir</span><span class="o">=</span><span class="k">in </span><span class="nv">action</span><span class="o">=</span>allow <span class="nv">protocol</span><span class="o">=</span>TCP <span class="nv">localport</span><span class="o">=</span>2376
netsh advfirewall firewall add rule <span class="nv">name</span><span class="o">=</span><span class="s2">"docker_proxy"</span> <span class="nb">dir</span><span class="o">=</span><span class="k">in </span><span class="nv">action</span><span class="o">=</span>allow <span class="nv">protocol</span><span class="o">=</span>TCP <span class="nv">localport</span><span class="o">=</span>12376
</code></pre></div></div>
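<p>To confirm the rules were added, you can list them by name. This is a quick
check, not part of the original setup script:</p>
<div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code>netsh advfirewall firewall show rule name="docker_local"
netsh advfirewall firewall show rule name="docker_proxy"
</code></pre></div></div>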
<h3 id="set-up-certs-for-the-dockerd-service">Set up certs for the dockerd service</h3>
<ol>
<li>Create the directory <code class="highlighter-rouge">C:\ProgramData\docker\daemoncerts</code>.</li>
<li>
<p>In a PowerShell terminal running as Administrator, run the following command
to generate certificates.</p>
<div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker container run --rm -v C:\ProgramData\docker\daemoncerts:C:\certs /ucp-agent-win: generate-certs
</code></pre></div> </div>
</li>
<li>
<p>To set up certificates, run the following commands to stop and unregister the
<code class="highlighter-rouge">dockerd</code> service, register the service with the certificates, and restart the service.</p>
<div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">Stop-Service </span>docker
dockerd --unregister-service
dockerd -H npipe:// -H 0.0.0.0:2376 --tlsverify --tlscacert<span class="o">=</span>C:\ProgramData\docker\daemoncerts\ca.pem --tlscert<span class="o">=</span>C:\ProgramData\docker\daemoncerts\cert.pem --tlskey<span class="o">=</span>C:\ProgramData\docker\daemoncerts\key.pem --register-service
<span class="nb">Start-Service </span>docker
</code></pre></div> </div>
</li>
</ol>
<p>The <code class="highlighter-rouge">dockerd</code> service and the Windows environment are now configured to join a Docker EE cluster.</p>
<blockquote>
<p>TLS certificate setup</p>
<p>If the TLS certificates aren't set up correctly, the Docker EE web UI shows the
following warning.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Node WIN-NOOQV2PJGTE is a Windows node that cannot connect to its local Docker daemon.
</code></pre></div> </div>
</blockquote>
<h2 id="windows-nodes-limitations">Windows nodes limitations</h2>
<p>Some features are not yet supported on Windows nodes:</p>
<ul>
<li>Networking
<ul>
<li>The cluster mode routing mesh can't be used on Windows nodes. You can expose
a port for your service on the host where it is running, and use the HTTP
routing mesh to make your service accessible using a domain name.</li>
<li>Encrypted networks are not supported. If you've upgraded from a previous
version, you'll also need to recreate the <code class="highlighter-rouge">ucp-hrm</code> network to make it
unencrypted.</li>
</ul>
</li>
<li>Secrets
<ul>
<li>When using secrets with Windows services, Windows stores temporary secret
files on disk. You can use BitLocker on the volume containing the Docker
root directory to encrypt the secret data at rest.</li>
<li>When creating a service which uses Windows containers, the options to
specify UID, GID, and mode are not supported for secrets. Secrets are
currently only accessible by administrators and users with system access
within the container.</li>
</ul>
</li>
<li>Mounts
<ul>
<li>On Windows, Docker can't listen on a Unix socket. Use TCP or a named pipe
instead.</li>
</ul>
</li>
</ul>

View File

@ -0,0 +1,220 @@
<p>Once you've joined multiple manager nodes for high availability, you can
configure your own load balancer to balance user requests across all
manager nodes.</p>
<p><img src="../../../images/use-a-load-balancer-1.svg" alt="" /></p>
<p>This allows users to access UCP using a centralized domain name. If
a manager node goes down, the load balancer can detect that and stop forwarding
requests to that node, so that the failure goes unnoticed by users.</p>
<h2 id="load-balancing-on-ucp">Load-balancing on UCP</h2>
<p>Since Docker UCP uses mutual TLS, make sure you configure your load balancer to:</p>
<ul>
<li>Load-balance TCP traffic on ports <code class="highlighter-rouge">443</code> and <code class="highlighter-rouge">6443</code>.</li>
<li>Not terminate HTTPS connections.</li>
<li>Use the <code class="highlighter-rouge">/_ping</code> endpoint on each manager node, to check if the node
is healthy and if it should remain on the load balancing pool or not.</li>
</ul>
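<p>For example, a simple probe against a manager node might look like the
following, where <code class="highlighter-rouge">&lt;ucp-manager-ip&gt;</code> is a placeholder. A healthy node answers
with an HTTP 200 status:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>curl --insecure https://&lt;ucp-manager-ip&gt;/_ping
</code></pre></div></div>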
<h2 id="load-balancing-ucp-and-dtr">Load balancing UCP and DTR</h2>
<p>By default, both UCP and DTR use port 443. If you plan on deploying UCP and DTR,
your load balancer needs to distinguish traffic between the two by IP address
or port number.</p>
<ul>
<li>If you want to configure your load balancer to listen on port 443:
<ul>
<li>Use one load balancer for UCP and another for DTR, or</li>
<li>Use the same load balancer with multiple virtual IPs.</li>
</ul>
</li>
<li>Configure your load balancer to expose UCP or DTR on a port other than 443.</li>
</ul>
<blockquote class="important">
<p>Additional requirements</p>
<p>In addition to distinguishing between UCP and DTR traffic, a load balancer for DTR has <a href="https://docs.docker.com/ee/dtr/admin/configure/use-a-load-balancer/#load-balance-dtr">additional requirements</a>.</p>
</blockquote>
<h2 id="configuration-examples">Configuration examples</h2>
<p>Use the following examples to configure your load balancer for UCP.</p>
<ul class="nav nav-tabs">
<li class="active"><a data-toggle="tab" data-target="#nginx" data-group="nginx">NGINX</a></li>
<li><a data-toggle="tab" data-target="#haproxy" data-group="haproxy">HAProxy</a></li>
<li><a data-toggle="tab" data-target="#aws">AWS LB</a></li>
</ul>
<div class="tab-content">
<div id="nginx" class="tab-pane fade in active">
<div class="language-conf highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">user</span> <span class="n">nginx</span>;
<span class="n">worker_processes</span> <span class="m">1</span>;
<span class="n">error_log</span> /<span class="n">var</span>/<span class="n">log</span>/<span class="n">nginx</span>/<span class="n">error</span>.<span class="n">log</span> <span class="n">warn</span>;
<span class="n">pid</span> /<span class="n">var</span>/<span class="n">run</span>/<span class="n">nginx</span>.<span class="n">pid</span>;
<span class="n">events</span> {
<span class="n">worker_connections</span> <span class="m">1024</span>;
}
<span class="n">stream</span> {
<span class="n">upstream</span> <span class="n">ucp_443</span> {
<span class="n">server</span> &lt;<span class="n">UCP_MANAGER_1_IP</span>&gt;:<span class="m">443</span> <span class="n">max_fails</span>=<span class="m">2</span> <span class="n">fail_timeout</span>=<span class="m">30</span><span class="n">s</span>;
<span class="n">server</span> &lt;<span class="n">UCP_MANAGER_2_IP</span>&gt;:<span class="m">443</span> <span class="n">max_fails</span>=<span class="m">2</span> <span class="n">fail_timeout</span>=<span class="m">30</span><span class="n">s</span>;
<span class="n">server</span> &lt;<span class="n">UCP_MANAGER_N_IP</span>&gt;:<span class="m">443</span> <span class="n">max_fails</span>=<span class="m">2</span> <span class="n">fail_timeout</span>=<span class="m">30</span><span class="n">s</span>;
}
<span class="n">server</span> {
<span class="n">listen</span> <span class="m">443</span>;
<span class="n">proxy_pass</span> <span class="n">ucp_443</span>;
}
}
</code></pre></div> </div>
</div>
<div id="haproxy" class="tab-pane fade">
<div class="language-conf highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">global</span>
<span class="n">log</span> /<span class="n">dev</span>/<span class="n">log</span> <span class="n">local0</span>
<span class="n">log</span> /<span class="n">dev</span>/<span class="n">log</span> <span class="n">local1</span> <span class="n">notice</span>
<span class="n">defaults</span>
<span class="n">mode</span> <span class="n">tcp</span>
<span class="n">option</span> <span class="n">dontlognull</span>
<span class="n">timeout</span> <span class="n">connect</span> <span class="m">5</span><span class="n">s</span>
<span class="n">timeout</span> <span class="n">client</span> <span class="m">50</span><span class="n">s</span>
<span class="n">timeout</span> <span class="n">server</span> <span class="m">50</span><span class="n">s</span>
<span class="n">timeout</span> <span class="n">tunnel</span> <span class="m">1</span><span class="n">h</span>
<span class="n">timeout</span> <span class="n">client</span>-<span class="n">fin</span> <span class="m">50</span><span class="n">s</span>
<span class="c">### frontends
# Optional HAProxy Stats Page accessible at http://&lt;host-ip&gt;:8181/haproxy?stats
</span><span class="n">frontend</span> <span class="n">ucp_stats</span>
<span class="n">mode</span> <span class="n">http</span>
<span class="n">bind</span> <span class="m">0</span>.<span class="m">0</span>.<span class="m">0</span>.<span class="m">0</span>:<span class="m">8181</span>
<span class="n">default_backend</span> <span class="n">ucp_stats</span>
<span class="n">frontend</span> <span class="n">ucp_443</span>
<span class="n">mode</span> <span class="n">tcp</span>
<span class="n">bind</span> <span class="m">0</span>.<span class="m">0</span>.<span class="m">0</span>.<span class="m">0</span>:<span class="m">443</span>
<span class="n">default_backend</span> <span class="n">ucp_upstream_servers_443</span>
<span class="c">### backends
</span><span class="n">backend</span> <span class="n">ucp_stats</span>
<span class="n">mode</span> <span class="n">http</span>
<span class="n">option</span> <span class="n">httplog</span>
<span class="n">stats</span> <span class="n">enable</span>
<span class="n">stats</span> <span class="n">admin</span> <span class="n">if</span> <span class="n">TRUE</span>
<span class="n">stats</span> <span class="n">refresh</span> <span class="m">5</span><span class="n">m</span>
<span class="n">backend</span> <span class="n">ucp_upstream_servers_443</span>
<span class="n">mode</span> <span class="n">tcp</span>
<span class="n">option</span> <span class="n">httpchk</span> <span class="n">GET</span> /<span class="err">_</span><span class="n">ping</span> <span class="n">HTTP</span>/<span class="m">1</span>.<span class="m">1</span>\<span class="n">r</span>\<span class="n">nHost</span>:\ &lt;<span class="n">UCP_FQDN</span>&gt;
<span class="n">server</span> <span class="n">node01</span> &lt;<span class="n">UCP_MANAGER_1_IP</span>&gt;:<span class="m">443</span> <span class="n">weight</span> <span class="m">100</span> <span class="n">check</span> <span class="n">check</span>-<span class="n">ssl</span> <span class="n">verify</span> <span class="n">none</span>
<span class="n">server</span> <span class="n">node02</span> &lt;<span class="n">UCP_MANAGER_2_IP</span>&gt;:<span class="m">443</span> <span class="n">weight</span> <span class="m">100</span> <span class="n">check</span> <span class="n">check</span>-<span class="n">ssl</span> <span class="n">verify</span> <span class="n">none</span>
<span class="n">server</span> <span class="n">node03</span> &lt;<span class="n">UCP_MANAGER_N_IP</span>&gt;:<span class="m">443</span> <span class="n">weight</span> <span class="m">100</span> <span class="n">check</span> <span class="n">check</span>-<span class="n">ssl</span> <span class="n">verify</span> <span class="n">none</span>
</code></pre></div> </div>
</div>
<div id="aws" class="tab-pane fade">
<div class="language-json highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">{</span><span class="w">
</span><span class="s2">"Subnets"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="w">
</span><span class="s2">"subnet-XXXXXXXX"</span><span class="p">,</span><span class="w">
</span><span class="s2">"subnet-YYYYYYYY"</span><span class="p">,</span><span class="w">
</span><span class="s2">"subnet-ZZZZZZZZ"</span><span class="w">
</span><span class="p">],</span><span class="w">
</span><span class="s2">"CanonicalHostedZoneNameID"</span><span class="p">:</span><span class="w"> </span><span class="s2">"XXXXXXXXXXX"</span><span class="p">,</span><span class="w">
</span><span class="s2">"CanonicalHostedZoneName"</span><span class="p">:</span><span class="w"> </span><span class="s2">"XXXXXXXXX.us-west-XXX.elb.amazonaws.com"</span><span class="p">,</span><span class="w">
</span><span class="s2">"ListenerDescriptions"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="w">
</span><span class="p">{</span><span class="w">
</span><span class="s2">"Listener"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
</span><span class="s2">"InstancePort"</span><span class="p">:</span><span class="w"> </span><span class="mi">443</span><span class="p">,</span><span class="w">
</span><span class="s2">"LoadBalancerPort"</span><span class="p">:</span><span class="w"> </span><span class="mi">443</span><span class="p">,</span><span class="w">
</span><span class="s2">"Protocol"</span><span class="p">:</span><span class="w"> </span><span class="s2">"TCP"</span><span class="p">,</span><span class="w">
</span><span class="s2">"InstanceProtocol"</span><span class="p">:</span><span class="w"> </span><span class="s2">"TCP"</span><span class="w">
</span><span class="p">},</span><span class="w">
</span><span class="s2">"PolicyNames"</span><span class="p">:</span><span class="w"> </span><span class="p">[]</span><span class="w">
</span><span class="p">}</span><span class="w">
</span><span class="p">],</span><span class="w">
</span><span class="s2">"HealthCheck"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
</span><span class="s2">"HealthyThreshold"</span><span class="p">:</span><span class="w"> </span><span class="mi">2</span><span class="p">,</span><span class="w">
</span><span class="s2">"Interval"</span><span class="p">:</span><span class="w"> </span><span class="mi">10</span><span class="p">,</span><span class="w">
</span><span class="s2">"Target"</span><span class="p">:</span><span class="w"> </span><span class="s2">"HTTPS:443/_ping"</span><span class="p">,</span><span class="w">
</span><span class="s2">"Timeout"</span><span class="p">:</span><span class="w"> </span><span class="mi">2</span><span class="p">,</span><span class="w">
</span><span class="s2">"UnhealthyThreshold"</span><span class="p">:</span><span class="w"> </span><span class="mi">4</span><span class="w">
</span><span class="p">},</span><span class="w">
</span><span class="s2">"VPCId"</span><span class="p">:</span><span class="w"> </span><span class="s2">"vpc-XXXXXX"</span><span class="p">,</span><span class="w">
</span><span class="s2">"BackendServerDescriptions"</span><span class="p">:</span><span class="w"> </span><span class="p">[],</span><span class="w">
</span><span class="s2">"Instances"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="w">
</span><span class="p">{</span><span class="w">
</span><span class="s2">"InstanceId"</span><span class="p">:</span><span class="w"> </span><span class="s2">"i-XXXXXXXXX"</span><span class="w">
</span><span class="p">},</span><span class="w">
</span><span class="p">{</span><span class="w">
</span><span class="s2">"InstanceId"</span><span class="p">:</span><span class="w"> </span><span class="s2">"i-XXXXXXXXX"</span><span class="w">
</span><span class="p">},</span><span class="w">
</span><span class="p">{</span><span class="w">
</span><span class="s2">"InstanceId"</span><span class="p">:</span><span class="w"> </span><span class="s2">"i-XXXXXXXXX"</span><span class="w">
</span><span class="p">}</span><span class="w">
</span><span class="p">],</span><span class="w">
</span><span class="s2">"DNSName"</span><span class="p">:</span><span class="w"> </span><span class="s2">"XXXXXXXXXXXX.us-west-2.elb.amazonaws.com"</span><span class="p">,</span><span class="w">
</span><span class="s2">"SecurityGroups"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="w">
</span><span class="s2">"sg-XXXXXXXXX"</span><span class="w">
</span><span class="p">],</span><span class="w">
</span><span class="s2">"Policies"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
</span><span class="s2">"LBCookieStickinessPolicies"</span><span class="p">:</span><span class="w"> </span><span class="p">[],</span><span class="w">
</span><span class="s2">"AppCookieStickinessPolicies"</span><span class="p">:</span><span class="w"> </span><span class="p">[],</span><span class="w">
</span><span class="s2">"OtherPolicies"</span><span class="p">:</span><span class="w"> </span><span class="p">[]</span><span class="w">
</span><span class="p">},</span><span class="w">
</span><span class="s2">"LoadBalancerName"</span><span class="p">:</span><span class="w"> </span><span class="s2">"ELB-UCP"</span><span class="p">,</span><span class="w">
</span><span class="s2">"CreatedTime"</span><span class="p">:</span><span class="w"> </span><span class="s2">"2017-02-13T21:40:15.400Z"</span><span class="p">,</span><span class="w">
</span><span class="s2">"AvailabilityZones"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="w">
</span><span class="s2">"us-west-2c"</span><span class="p">,</span><span class="w">
</span><span class="s2">"us-west-2a"</span><span class="p">,</span><span class="w">
</span><span class="s2">"us-west-2b"</span><span class="w">
</span><span class="p">],</span><span class="w">
</span><span class="s2">"Scheme"</span><span class="p">:</span><span class="w"> </span><span class="s2">"internet-facing"</span><span class="p">,</span><span class="w">
</span><span class="s2">"SourceSecurityGroup"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
</span><span class="s2">"OwnerAlias"</span><span class="p">:</span><span class="w"> </span><span class="s2">"XXXXXXXXXXXX"</span><span class="p">,</span><span class="w">
</span><span class="s2">"GroupName"</span><span class="p">:</span><span class="w"> </span><span class="s2">"XXXXXXXXXXXX"</span><span class="w">
</span><span class="p">}</span><span class="w">
</span><span class="p">}</span><span class="w">
</span></code></pre></div> </div>
</div>
</div>
<p>You can deploy your load balancer using:</p>
<ul class="nav nav-tabs">
<li class="active"><a data-toggle="tab" data-target="#nginx-2" data-group="nginx">NGINX</a></li>
<li><a data-toggle="tab" data-target="#haproxy-2" data-group="haproxy">HAProxy</a></li>
</ul>
<div class="tab-content">
<div id="nginx-2" class="tab-pane fade in active">
<div class="language-conf highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c"># Create the nginx.conf file, then
# deploy the load balancer
</span>
<span class="n">docker</span> <span class="n">run</span> --<span class="n">detach</span> \
--<span class="n">name</span> <span class="n">ucp</span>-<span class="n">lb</span> \
--<span class="n">restart</span>=<span class="n">unless</span>-<span class="n">stopped</span> \
--<span class="n">publish</span> <span class="m">443</span>:<span class="m">443</span> \
--<span class="n">volume</span> ${<span class="n">PWD</span>}/<span class="n">nginx</span>.<span class="n">conf</span>:/<span class="n">etc</span>/<span class="n">nginx</span>/<span class="n">nginx</span>.<span class="n">conf</span>:<span class="n">ro</span> \
<span class="n">nginx</span>:<span class="n">stable</span>-<span class="n">alpine</span>
</code></pre></div> </div>
</div>
<div id="haproxy-2" class="tab-pane fade">
<div class="language-conf highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c"># Create the haproxy.cfg file, then
# deploy the load balancer
</span>
<span class="n">docker</span> <span class="n">run</span> --<span class="n">detach</span> \
--<span class="n">name</span> <span class="n">ucp</span>-<span class="n">lb</span> \
--<span class="n">publish</span> <span class="m">443</span>:<span class="m">443</span> \
--<span class="n">publish</span> <span class="m">8181</span>:<span class="m">8181</span> \
--<span class="n">restart</span>=<span class="n">unless</span>-<span class="n">stopped</span> \
--<span class="n">volume</span> ${<span class="n">PWD</span>}/<span class="n">haproxy</span>.<span class="n">cfg</span>:/<span class="n">usr</span>/<span class="n">local</span>/<span class="n">etc</span>/<span class="n">haproxy</span>/<span class="n">haproxy</span>.<span class="n">cfg</span>:<span class="n">ro</span> \
<span class="n">haproxy</span>:<span class="m">1</span>.<span class="m">7</span>-<span class="n">alpine</span> <span class="n">haproxy</span> -<span class="n">d</span> -<span class="n">f</span> /<span class="n">usr</span>/<span class="n">local</span>/<span class="n">etc</span>/<span class="n">haproxy</span>/<span class="n">haproxy</span>.<span class="n">cfg</span>
</code></pre></div> </div>
</div>
</div>
<h2 id="where-to-go-next">Where to go next</h2>
<ul>
<li><a href="../add-labels-to-cluster-nodes.md">Add labels to cluster nodes</a></li>
</ul>

View File

@ -0,0 +1,29 @@
<p>After installing Docker Universal Control Plane, you need to license your
installation. Here's how to do it.</p>
<h2 id="download-your-license">Download your license</h2>
<p>Go to <a href="https://www.docker.com/enterprise-edition">Docker Store</a> and
download your UCP license, or get a free trial license.</p>
<p><img src="../../images/license-ucp-1.png" alt="" class="with-border" /></p>
<h2 id="license-your-installation">License your installation</h2>
<p>Once you've downloaded the license file, you can apply it to your UCP
installation.</p>
<p>In the UCP web UI, log in with administrator credentials and
navigate to the <strong>Admin Settings</strong> page.</p>
<p>In the left pane, click <strong>License</strong> and click <strong>Upload License</strong>. The
license refreshes immediately, and you don't need to click <strong>Save</strong>.</p>
<p><img src="../../images/license-ucp-2.png" alt="" class="with-border" /></p>
<h2 id="where-to-go-next">Where to go next</h2>
<ul>
<li><a href="../install.md">Install UCP</a></li>
<li><a href="../install/install-offline.md">Install UCP offline</a></li>
</ul>

View File

@ -0,0 +1,150 @@
<p>Docker Enterprise Edition (EE) has its own image registry (DTR) so that
you can store and manage the images that you deploy to your cluster.
In this topic, you push an image to DTR and later deploy it to your cluster,
using the Kubernetes orchestrator.</p>
<h2 id="open-the-dtr-web-ui">Open the DTR web UI</h2>
<ol>
<li>In the Docker EE web UI, click <strong>Admin Settings</strong>.</li>
<li>In the left pane, click <strong>Docker Trusted Registry</strong>.</li>
<li>
<p>In the <strong>Installed DTRs</strong> section, note the URL of your cluster's DTR
instance.</p>
<p><img src="../../images/manage-and-deploy-private-images-1.png" alt="" class="with-border" /></p>
</li>
<li>In a new browser tab, enter the URL to open the DTR web UI.</li>
</ol>
<h2 id="create-an-image-repository">Create an image repository</h2>
<ol>
<li>In the DTR web UI, click <strong>Repositories</strong>.</li>
<li>Click <strong>New Repository</strong>, and in the <strong>Repository Name</strong> field, enter
“wordpress”.</li>
<li>
<p>Click <strong>Save</strong> to create the repository.</p>
<p><img src="../../images/manage-and-deploy-private-images-2.png" alt="" class="with-border" /></p>
</li>
</ol>
<h2 id="push-an-image-to-dtr">Push an image to DTR</h2>
<p>Instead of building an image from scratch, we'll pull the official WordPress
image from Docker Hub, tag it, and push it to DTR. Once that WordPress version
is in DTR, only authorized users can change it.</p>
<p>To push images to DTR, you need CLI access to a licensed installation of
Docker EE.</p>
<ul>
<li><a href="license-your-installation.md">License your installation</a>.</li>
<li><a href="../../user-acccess/cli.md">Set up your Docker CLI</a>.</li>
</ul>
<p>When you're set up for CLI-based access to a licensed Docker EE instance,
you can push images to DTR.</p>
<ol>
<li>
<p>Pull the public WordPress image from Docker Hub:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker pull wordpress
</code></pre></div> </div>
</li>
<li>
<p>Tag the image, using the IP address or DNS name of your DTR instance:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker tag wordpress:latest &lt;dtr-url&gt;:&lt;port&gt;/admin/wordpress:latest
</code></pre></div> </div>
</li>
<li>Log in to a Docker EE manager node.</li>
<li>
<p>Push the tagged image to DTR:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker image push &lt;dtr-url&gt;:&lt;port&gt;/admin/wordpress:latest
</code></pre></div> </div>
</li>
</ol>
<h2 id="confirm-the-image-push">Confirm the image push</h2>
<p>In the DTR web UI, confirm that the <code class="highlighter-rouge">wordpress:latest</code> image is stored in your
DTR instance.</p>
<ol>
<li>In the DTR web UI, click <strong>Repositories</strong>.</li>
<li>Click <strong>wordpress</strong> to open the repo.</li>
<li>Click <strong>Images</strong> to view the stored images.</li>
<li>
<p>Confirm that the <code class="highlighter-rouge">latest</code> tag is present.</p>
<p><img src="../../images/manage-and-deploy-private-images-3.png" alt="" class="with-border" /></p>
</li>
</ol>
<p>You're ready to deploy the <code class="highlighter-rouge">wordpress:latest</code> image into production.</p>
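<p>As a quick check, any authorized Docker client should now be able to pull the
image back from DTR, using the same placeholders as above:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker pull &lt;dtr-url&gt;:&lt;port&gt;/admin/wordpress:latest
</code></pre></div></div>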
<h2 id="deploy-the-private-image-to-ucp">Deploy the private image to UCP</h2>
<p>With the WordPress image stored in DTR, Docker EE can deploy the image to a
Kubernetes cluster with a simple Deployment object:</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="na">apiVersion</span><span class="pi">:</span> <span class="s">apps/v1beta2</span>
<span class="na">kind</span><span class="pi">:</span> <span class="s">Deployment</span>
<span class="na">metadata</span><span class="pi">:</span>
<span class="na">name</span><span class="pi">:</span> <span class="s">wordpress-deployment</span>
<span class="na">spec</span><span class="pi">:</span>
<span class="na">selector</span><span class="pi">:</span>
<span class="na">matchLabels</span><span class="pi">:</span>
<span class="na">app</span><span class="pi">:</span> <span class="s">wordpress</span>
<span class="na">replicas</span><span class="pi">:</span> <span class="s">2</span>
<span class="na">template</span><span class="pi">:</span>
<span class="na">metadata</span><span class="pi">:</span>
<span class="na">labels</span><span class="pi">:</span>
<span class="na">app</span><span class="pi">:</span> <span class="s">wordpress</span>
<span class="na">spec</span><span class="pi">:</span>
<span class="na">containers</span><span class="pi">:</span>
<span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">wordpress</span>
<span class="na">image</span><span class="pi">:</span> <span class="s">&lt;dtr-url&gt;:&lt;port&gt;/admin/wordpress:latest</span>
<span class="na">ports</span><span class="pi">:</span>
<span class="pi">-</span> <span class="na">containerPort</span><span class="pi">:</span> <span class="s">80</span>
<span class="nn">---</span>
<span class="na">apiVersion</span><span class="pi">:</span> <span class="s">v1</span>
<span class="na">kind</span><span class="pi">:</span> <span class="s">Service</span>
<span class="na">metadata</span><span class="pi">:</span>
<span class="na">name</span><span class="pi">:</span> <span class="s">wordpress-service</span>
<span class="na">labels</span><span class="pi">:</span>
<span class="na">app</span><span class="pi">:</span> <span class="s">wordpress</span>
<span class="na">spec</span><span class="pi">:</span>
<span class="na">type</span><span class="pi">:</span> <span class="s">NodePort</span>
<span class="na">ports</span><span class="pi">:</span>
<span class="pi">-</span> <span class="na">port</span><span class="pi">:</span> <span class="s">80</span>
<span class="na">nodePort</span><span class="pi">:</span> <span class="s">30081</span>
<span class="na">selector</span><span class="pi">:</span>
<span class="na">app</span><span class="pi">:</span> <span class="s">wordpress</span>
</code></pre></div></div>
<p>The Deployment object's YAML specifies your DTR image in the pod template spec:
<code class="highlighter-rouge">image: &lt;dtr-url&gt;:&lt;port&gt;/admin/wordpress:latest</code>. Also, the YAML file defines
a <code class="highlighter-rouge">NodePort</code> service that exposes the WordPress application, so it's accessible
from outside the cluster.</p>
<ol>
<li>Open the Docker EE web UI, and in the left pane, click <strong>Kubernetes</strong>.</li>
<li>Click <strong>Create</strong> to open the <strong>Create Kubernetes Object</strong> page.</li>
<li>In the <strong>Namespace</strong> dropdown, select <strong>default</strong>.</li>
<li>In the <strong>Object YAML</strong> editor, paste the Deployment object's YAML.</li>
<li>Click <strong>Create</strong>. When the Kubernetes objects are created,
the <strong>Load Balancers</strong> page opens.</li>
<li>Click <strong>wordpress-service</strong>, and in the details pane, find the <strong>Ports</strong>
section.</li>
<li>
<p>Click the URL to open the default WordPress home page.</p>
<p><img src="../../images/manage-and-deploy-private-images-4.png" alt="" class="with-border" /></p>
</li>
</ol>
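<p>If your UCP client bundle is set up for <code class="highlighter-rouge">kubectl</code>, you can also verify the
deployment from the CLI. A sketch, assuming the objects were created in the
<code class="highlighter-rouge">default</code> namespace:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Check the Deployment and the NodePort service
kubectl get deployment wordpress-deployment
kubectl get service wordpress-service
</code></pre></div></div>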

View File

@ -0,0 +1,20 @@
<p>You can configure UCP to allow users to deploy and run services only in
worker nodes. This ensures all cluster management functionality stays
performant, and makes the cluster more secure.</p>
<p>If a user deploys a malicious service that can affect the node where it
is running, it won't be able to affect other nodes in the cluster, or
any cluster management functionality.</p>
<p>To restrict users from deploying to manager nodes, log in with administrator
credentials to the UCP web UI, navigate to the <strong>Admin Settings</strong>
page, and choose <strong>Scheduler</strong>.</p>
<p><img src="../../images/restrict-services-to-worker-nodes-1.png" alt="" class="with-border" /></p>
<p>You can then choose if user services should be allowed to run on manager nodes
or not.</p>
<p>Having a grant with the <code class="highlighter-rouge">Scheduler</code> role against the <code class="highlighter-rouge">/</code> collection takes
precedence over any other grants with <code class="highlighter-rouge">Node Schedule</code> on subcollections.</p>

View File

@ -0,0 +1,57 @@
<p>With Docker Universal Control Plane you can require applications to use only
Docker images signed by UCP users you trust. When a user tries to deploy an
application to the cluster, UCP checks if the application uses a Docker image
that is not trusted, and won't continue with the deployment if that's the case.</p>
<p><img src="../../images/run-only-the-images-you-trust-1.svg" alt="Enforce image signing" /></p>
<p>By signing and verifying the Docker images, you ensure that the images being
used in your cluster are the ones you trust and haven't been altered either in
the image registry or on their way from the image registry to your UCP cluster.</p>
<h2 id="example-workflow">Example workflow</h2>
<p>Here's an example of a typical workflow:</p>
<ol>
<li>A developer makes changes to a service and pushes their changes to a version
control system.</li>
<li>A CI system creates a build, runs tests, and pushes an image to DTR with the
new changes.</li>
<li>The quality engineering team pulls the image and runs more tests. If
everything looks good, they sign and push the image.</li>
<li>The IT operations team deploys a service. If the image used for the service
was signed by the QA team, UCP deploys it. Otherwise UCP refuses to deploy.</li>
</ol>
<h2 id="configure-ucp">Configure UCP</h2>
<p>To configure UCP to only allow running services that use Docker images you
trust, go to the UCP web UI, navigate to the <strong>Admin Settings</strong> page, and in
the left pane, click <strong>Docker Content Trust</strong>.</p>
<p>Select the <strong>Run Only Signed Images</strong> option to only allow deploying
applications if they use images you trust.</p>
<p><img src="../../images/run-only-the-images-you-trust-2.png" alt="UCP settings" class="with-border" /></p>
<p>With this setting, UCP allows deploying any image as long as the image has
been signed. It doesn't matter who signed the image.</p>
<p>To enforce that the image needs to be signed by specific teams, click <strong>Add Team</strong>
and select those teams from the list.</p>
<p><img src="../../images/run-only-the-images-you-trust-3.png" alt="UCP settings" class="with-border" /></p>
<p>If you specify multiple teams, the image needs to be signed by a member of each
team, or someone that is a member of all those teams.</p>
<p>Click <strong>Save</strong> for UCP to start enforcing the policy. From now on, existing
services will continue running and can be restarted if needed, but UCP will only
allow deploying new services that use a trusted image.</p>
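<p>One way to produce a signed image is to enable Docker Content Trust in your
shell before pushing. This sketch reuses the placeholders from the rest of
this guide:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Sign the image as part of the push
export DOCKER_CONTENT_TRUST=1
docker push &lt;dtr-url&gt;:&lt;port&gt;/admin/wordpress:latest
</code></pre></div></div>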
<h2 id="where-to-go-next">Where to go next</h2>
<ul>
<li><a href="/ee/dtr/user/manage-images/sign-images.md">Sign and push images to DTR</a></li>
</ul>

View File

@ -0,0 +1,164 @@
<p>Docker UCP is designed for scaling horizontally as your applications grow in
size and usage. You can add or remove nodes from the UCP cluster to make it
scale to your needs.</p>
<p><img src="../../images/scale-your-cluster-0.svg" alt="" /></p>
<p>Since UCP leverages the clustering functionality provided by Docker Engine,
you use the <a href="/engine/swarm/swarm-tutorial/add-nodes.md">docker swarm join</a>
command to add more nodes to your cluster. When joining new nodes, the UCP
services automatically start running on that node.</p>
<p>When joining a node to a cluster you can specify its role: manager or worker.</p>
<ul>
<li>
<p><strong>Manager nodes</strong></p>
<p>Manager nodes are responsible for cluster management functionality and
dispatching tasks to worker nodes. Having multiple manager nodes allows
your cluster to be highly available and tolerate node failures.</p>
<p>Manager nodes also run all UCP components in a replicated way, so by adding
additional manager nodes, you're also making UCP highly available.
<a href="../../ucp-architecture.md">Learn more about the UCP architecture.</a></p>
</li>
<li>
<p><strong>Worker nodes</strong></p>
<p>Worker nodes receive and execute your services and applications. Having
multiple worker nodes allows you to scale the computing capacity of your
cluster.</p>
<p>When deploying Docker Trusted Registry in your cluster, you deploy it to a
worker node.</p>
</li>
</ul>
<h2 id="join-nodes-to-the-cluster">Join nodes to the cluster</h2>
<p>To join nodes to the cluster, go to the UCP web UI and navigate to the <strong>Nodes</strong>
page.</p>
<p><img src="../../images/scale-your-cluster-1.png" alt="" class="with-border" /></p>
<p>Click <strong>Add Node</strong> to add a new node.</p>
<p><img src="../../../images/try-ddc-3.png" alt="" class="with-border" /></p>
<ul>
<li>Click <strong>Manager</strong> if you want to add the node as a manager.</li>
<li>Check the <strong>Use a custom listen address</strong> option to specify the
IP address of the host that you'll be joining to the cluster.</li>
<li>Check the <strong>Use a custom advertise address</strong> option to specify the
IP address that's advertised to all members of the cluster for API access.</li>
</ul>
<p>Copy the displayed command, use SSH to log in to the host that you want to
join to the cluster, and run the <code class="highlighter-rouge">docker swarm join</code> command on the host.</p>
<p>To add a Windows node, click <strong>Windows</strong> and follow the instructions in
<a href="join-nodes/join-windows-nodes-to-cluster.md">Join Windows worker nodes to a cluster</a>.</p>
<p>After you run the join command on the node, you can view the node in the
UCP web UI.</p>
<h2 id="remove-nodes-from-the-cluster">Remove nodes from the cluster</h2>
<ol>
<li>If the target node is a manager, you will need to first demote the node into
a worker before proceeding with the removal:
<ul>
<li>From the UCP web UI, navigate to the <strong>Nodes</strong> page. Select the node you
wish to remove and switch its role to <strong>Worker</strong>, wait until the operation
completes, and confirm that the node is no longer a manager.</li>
<li>From the CLI, perform <code class="highlighter-rouge">docker node ls</code> and identify the nodeID or hostname
of the target node. Then, run <code class="highlighter-rouge">docker node demote &lt;nodeID or hostname&gt;</code>.</li>
</ul>
</li>
<li>
<p>If the status of the worker node is <code class="highlighter-rouge">Ready</code>, you'll need to manually force
the node to leave the cluster. To do this, connect to the target node through
SSH and run <code class="highlighter-rouge">docker swarm leave --force</code> directly against the local Docker
engine.</p>
<blockquote>
<p>Loss of quorum</p>
<p>Do not perform this step if the node is still a manager, as
this may cause loss of quorum.</p>
</blockquote>
</li>
<li>Now that the status of the node is reported as <code class="highlighter-rouge">Down</code>, you may remove the
node:
<ul>
<li>From the UCP web UI, browse to the <strong>Nodes</strong> page and select the node.
In the details pane, click <strong>Actions</strong> and select <strong>Remove</strong>.
Click <strong>Confirm</strong> when you're prompted.</li>
<li>From the CLI, perform <code class="highlighter-rouge">docker node rm &lt;nodeID or hostname&gt;</code>.</li>
</ul>
</li>
</ol>
<h2 id="pause-and-drain-nodes">Pause and drain nodes</h2>
<p>Once a node is part of the cluster, you can change its role, turning a manager
node into a worker and vice versa. You can also configure the node availability
so that it is:</p>
<ul>
<li>Active: the node can receive and execute tasks.</li>
<li>Paused: the node continues running existing tasks, but doesn't receive new ones.</li>
<li>Drained: the node won't receive new tasks. Existing tasks are stopped and
replica tasks are launched on active nodes.</li>
</ul>
<p>In the UCP web UI, browse to the <strong>Nodes</strong> page and select the node. In the details pane, click <strong>Configure</strong> to open the <strong>Edit Node</strong> page.</p>
<p><img src="../../images/scale-your-cluster-3.png" alt="" class="with-border" /></p>
<p>If you're load-balancing user requests to UCP across multiple manager nodes,
when demoting those nodes into workers, don't forget to remove them from your
load-balancing pool.</p>
<h2 id="use-the-cli-to-scale-your-cluster">Use the CLI to scale your cluster</h2>
<p>You can also use the command line to do all of the above operations. To get the
join token, run the following command on a manager node:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker swarm join-token worker
</code></pre></div></div>
<p>If you want to add a new manager node instead of a worker node, use
<code class="highlighter-rouge">docker swarm join-token manager</code> instead. If you want to use a custom listen
address, add the <code class="highlighter-rouge">--listen-addr</code> arg:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker swarm join <span class="se">\</span>
<span class="nt">--token</span> SWMTKN-1-2o5ra9t7022neymg4u15f3jjfh0qh3yof817nunoioxa9i7lsp-dkmt01ebwp2m0wce1u31h6lmj <span class="se">\</span>
<span class="nt">--listen-addr</span> 234.234.234.234 <span class="se">\</span>
192.168.99.100:2377
</code></pre></div></div>
<p>Once your node is added, you can see it by running <code class="highlighter-rouge">docker node ls</code> on a manager:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker node <span class="nb">ls</span>
</code></pre></div></div>
<p>To change the node's availability, use:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker node update <span class="nt">--availability</span> drain node2
</code></pre></div></div>
<p>You can set the availability to <code class="highlighter-rouge">active</code>, <code class="highlighter-rouge">pause</code>, or <code class="highlighter-rouge">drain</code>.</p>
<p>To remove the node, use:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker node rm &lt;node-hostname&gt;
</code></pre></div></div>
<h2 id="where-to-go-next">Where to go next</h2>
<ul>
<li><a href="use-your-own-tls-certificates.md">Use your own TLS certificates</a></li>
<li><a href="join-nodes/index.md">Set up high availability</a></li>
</ul>

View File

@ -0,0 +1,217 @@
<p>When you add a node to the cluster, the node's workloads are managed by a
default orchestrator, either Docker Swarm or Kubernetes. When you install
Docker EE, new nodes are managed by Docker Swarm, but you can change the
default orchestrator to Kubernetes in the administrator settings.</p>
<p>Changing the default orchestrator doesn't affect existing nodes in the cluster.
You can change the orchestrator type for individual nodes in the cluster
by navigating to the node's configuration page in the Docker EE web UI.</p>
<h2 id="change-the-orchestrator-for-a-node">Change the orchestrator for a node</h2>
<p>You can change the current orchestrator for any node that's joined to a
Docker EE cluster. The available orchestrator types are <strong>Kubernetes</strong>,
<strong>Swarm</strong>, and <strong>Mixed</strong>.</p>
<p>The <strong>Mixed</strong> type enables both Kubernetes and Swarm to schedule workloads
on the same node. Although you can choose to mix orchestrator types on the
same node, this isn't recommended for production deployments because of the
likelihood of resource contention.</p>
<p>Change a node's orchestrator type on the <strong>Edit node</strong> page:</p>
<ol>
<li>Log in to the Docker EE web UI with an administrator account.</li>
<li>Navigate to the <strong>Nodes</strong> page, and click the node that you want to assign
to a different orchestrator.</li>
<li>In the details pane, click <strong>Configure</strong> and select <strong>Details</strong> to open
the <strong>Edit node</strong> page.</li>
<li>In the <strong>Orchestrator properties</strong> section, click the orchestrator type
for the node.</li>
<li>
<p>Click <strong>Save</strong> to assign the node to the selected orchestrator.</p>
<p><img src="../../images/change-orchestrator-for-node-1.png" alt="" class="with-border" /></p>
</li>
</ol>
<h2 id="what-happens-when-you-change-a-nodes-orchestrator">What happens when you change a nodes orchestrator</h2>
<p>When you change the orchestrator type for a node, existing workloads are
evicted, and they're not migrated to the new orchestrator automatically.
If you want the workloads to be scheduled by the new orchestrator, you must
migrate them manually. For example, if you deploy WordPress on a Swarm
node, and you change the node's orchestrator type to Kubernetes, Docker EE
doesn't migrate the workload, and WordPress continues running on Swarm. In
this case, you must migrate your WordPress deployment to Kubernetes manually.</p>
<p>The following table summarizes the results of changing a node's orchestrator.</p>
<table>
<thead>
<tr>
<th>Workload</th>
<th>On orchestrator change</th>
</tr>
</thead>
<tbody>
<tr>
<td>Containers</td>
<td>Containers continue running on the node</td>
</tr>
<tr>
<td>Docker service</td>
<td>Node is drained, and tasks are rescheduled to another node</td>
</tr>
<tr>
<td>Pods and other imperative resources</td>
<td>Continue running on the node</td>
</tr>
<tr>
<td>Deployments and other declarative resources</td>
<td>Might change, but for now, continue running on the node</td>
</tr>
</tbody>
</table>
<p>If a node is running containers, and you change the node to Kubernetes, these
containers will continue running, and Kubernetes won't be aware of them, so
you'll be in the same situation as if you were running in <code class="highlighter-rouge">Mixed</code> mode.</p>
<blockquote class="warning">
<p>Be careful when mixing orchestrators on a node.</p>
<p>When you change a node's orchestrator, you can choose to run the node in a
mixed mode, with both Kubernetes and Swarm workloads. The <code class="highlighter-rouge">Mixed</code> type
is not intended for production use, and it may impact existing workloads
on the node.</p>
<p>This is because the two orchestrator types have different views of the node's
resources, and they don't know about each other's workloads. One orchestrator
can schedule a workload without knowing that the node's resources are already
committed to another workload that was scheduled by the other orchestrator.
When this happens, the node could run out of memory or other resources.</p>
<p>For this reason, we recommend against mixing orchestrators on a production
node.</p>
</blockquote>
<h2 id="set-the-default-orchestrator-type-for-new-nodes">Set the default orchestrator type for new nodes</h2>
<p>You can set the default orchestrator for new nodes to <strong>Kubernetes</strong> or
<strong>Swarm</strong>.</p>
<p>To set the orchestrator for new nodes:</p>
<ol>
<li>Log in to the Docker EE web UI with an administrator account.</li>
<li>Open the <strong>Admin Settings</strong> page, and in the left pane, click <strong>Scheduler</strong>.</li>
<li>Under <strong>Set orchestrator type for new nodes</strong> click <strong>Swarm</strong>
or <strong>Kubernetes</strong>.</li>
<li>
<p>Click <strong>Save</strong>.</p>
<p><img src="../../images/join-nodes-to-cluster-1.png" alt="" class="with-border" /></p>
</li>
</ol>
<p>From now on, when you join a node to the cluster, new workloads on the node
are scheduled by the specified orchestrator type. Existing nodes in the cluster
aren't affected.</p>
<p>Once a node is joined to the cluster, you can
<a href="#change-the-orchestrator-for-a-node">change the orchestrator</a> that schedules its
workloads.</p>
<h2 id="choosing-the-orchestrator-type">Choosing the orchestrator type</h2>
<p>The workloads on your cluster can be scheduled by Kubernetes or by Swarm, or
the cluster can be mixed, running both orchestrator types. If you choose to
run a mixed cluster, be aware that the different orchestrators aren't aware of
each other, and there's no coordination between them.</p>
<p>We recommend that you make the decision about orchestration when you set up the
cluster initially. Commit to Kubernetes or Swarm on all nodes, or assign each
node individually to a specific orchestrator. Once you start deploying workloads,
avoid changing the orchestrator setting. If you do change the orchestrator for a
node, your workloads are evicted, and you must deploy them again through the
new orchestrator.</p>
<blockquote>
<p>Node demotion and orchestrator type</p>
<p>When you promote a worker node to be a manager, its orchestrator type
automatically changes to <code class="highlighter-rouge">Mixed</code>. If you demote the same node to be a worker,
its orchestrator type remains as <code class="highlighter-rouge">Mixed</code>.</p>
</blockquote>
<h2 id="use-the-cli-to-set-the-orchestrator-type">Use the CLI to set the orchestrator type</h2>
<p>Set the orchestrator on a node by assigning the orchestrator labels,
<code class="highlighter-rouge">com.docker.ucp.orchestrator.swarm</code> or <code class="highlighter-rouge">com.docker.ucp.orchestrator.kubernetes</code>,
to <code class="highlighter-rouge">true</code>.</p>
<p>To schedule Swarm workloads on a node:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker node update <span class="nt">--label-add</span> com.docker.ucp.orchestrator.swarm<span class="o">=</span><span class="nb">true</span> &lt;node-id&gt;
</code></pre></div></div>
<p>To schedule Kubernetes workloads on a node:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker node update <span class="nt">--label-add</span> com.docker.ucp.orchestrator.kubernetes<span class="o">=</span><span class="nb">true</span> &lt;node-id&gt;
</code></pre></div></div>
<p>To schedule Kubernetes and Swarm workloads on a node:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker node update <span class="nt">--label-add</span> com.docker.ucp.orchestrator.swarm<span class="o">=</span><span class="nb">true</span> &lt;node-id&gt;
docker node update <span class="nt">--label-add</span> com.docker.ucp.orchestrator.kubernetes<span class="o">=</span><span class="nb">true</span> &lt;node-id&gt;
</code></pre></div></div>
<blockquote class="warning">
<p>Mixed nodes</p>
<p>Scheduling both Kubernetes and Swarm workloads on a node is not recommended
for production deployments, because of the likelihood of resource contention.</p>
</blockquote>
<p>To change the orchestrator type for a node from Swarm to Kubernetes:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker node update <span class="nt">--label-add</span> com.docker.ucp.orchestrator.kubernetes<span class="o">=</span><span class="nb">true</span> &lt;node-id&gt;
docker node update <span class="nt">--label-rm</span> com.docker.ucp.orchestrator.swarm &lt;node-id&gt;
</code></pre></div></div>
<p>UCP detects the node label change and updates the Kubernetes node accordingly.</p>
<p>Check the value of the orchestrator label by inspecting the node:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker node inspect &lt;node-id&gt; | <span class="nb">grep</span> <span class="nt">-i</span> orchestrator
</code></pre></div></div>
<p>The <code class="highlighter-rouge">docker node inspect</code> command returns the node's configuration, including
the orchestrator:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="s2">"com.docker.ucp.orchestrator.kubernetes"</span>: <span class="s2">"true"</span>
</code></pre></div></div>
<blockquote class="important">
<p>Orchestrator label</p>
<p>The <code class="highlighter-rouge">com.docker.ucp.orchestrator</code> label isn't displayed in the <strong>Labels</strong>
list for a node in the Docker EE web UI.</p>
</blockquote>
<h2 id="set-the-default-orchestrator-type-for-new-nodes-1">Set the default orchestrator type for new nodes</h2>
<p>The default orchestrator for new nodes is a setting in the Docker EE
configuration file:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>default_node_orchestrator = "swarm"
</code></pre></div></div>
<p>The value can be <code class="highlighter-rouge">swarm</code> or <code class="highlighter-rouge">kubernetes</code>.</p>
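<p>For example, to make Kubernetes the default for new nodes, the relevant
fragment of the configuration file looks like the following sketch. The setting
belongs to the <code class="highlighter-rouge">scheduling_configuration</code> table, which is described in the
<a href="ucp-configuration-file.md">configuration file reference</a>:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>[scheduling_configuration]
  default_node_orchestrator = "kubernetes"
</code></pre></div></div>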
<h2 id="where-to-go-next">Where to go next</h2>
<ul>
<li><a href="ucp-configuration-file.md">Set up Docker EE by using a config file</a></li>
</ul>

<p>Docker Universal Control Plane enables setting properties of user sessions,
like session timeout and number of concurrent sessions.</p>
<p>To configure UCP login sessions, go to the UCP web UI, navigate to the
<strong>Admin Settings</strong> page and click <strong>Authentication &amp; Authorization</strong>.</p>
<p><img src="../../images/authentication-authorization.png" alt="" /></p>
<h2 id="login-session-controls">Login session controls</h2>
<table>
<thead>
<tr>
<th style="text-align: left">Field</th>
<th style="text-align: left">Description</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left">Lifetime Minutes</td>
<td style="text-align: left">The initial lifetime of a login session, from the time UCP generates it. When this time expires, UCP invalidates the session, and the user must authenticate again to establish a new session. The default is 4320 minutes, which is 72 hours.</td>
</tr>
<tr>
<td style="text-align: left">Renewal Threshold Minutes</td>
<td style="text-align: left">The time before session expiration when UCP extends an active session. UCP extends the session by the number of hours specified in <strong>Lifetime Hours</strong>. The threshold value cant be greater than <strong>Lifetime Hours</strong>. The default is 1440 minutes, which is 24 hours. To specify that sessions are extended with every use, set the threshold equal to the lifetime. To specify that sessions are never extended, set the threshold to zero. This may cause users to be logged out unexpectedly while using the UCP web UI.</td>
</tr>
<tr>
<td style="text-align: left">Per User Limit</td>
<td style="text-align: left">The maximum number of simultaneous logins for a user. If creating a new session exceeds this limit, UCP deletes the least recently used session. To disable the limit, set the value to zero.</td>
</tr>
</tbody>
</table>
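<p>The same session controls are exposed in the UCP configuration file through
the <code class="highlighter-rouge">auth.sessions</code> table. A minimal sketch using the default values described
above:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>[auth.sessions]
  # Initial session lifetime: 72 hours
  lifetime_minutes = 4320
  # Extend active sessions when they are within 24 hours of expiring
  renewal_threshold_minutes = 1440
  # Maximum simultaneous sessions per user
  per_user_limit = 5
</code></pre></div></div>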

<p>You can configure UCP to send logs to a remote logging service:</p>
<ol>
<li>Log in to UCP with an administrator account.</li>
<li>Navigate to the <strong>Admin Settings</strong> page.</li>
<li>Set the information about your logging server, and click
<strong>Save</strong>.</li>
</ol>
<p><img src="../../images/configure-logs-1.png" alt="" class="with-border" /></p>
<blockquote class="important">
<p>External system for logs</p>
<p>Administrators should configure Docker EE to store logs using an external
system. By default, the Docker daemon doesn't delete logs, which means that
in a production system with intense usage, your logs can consume a
significant amount of disk space.</p>
</blockquote>
<h2 id="example-setting-up-an-elk-stack">Example: Setting up an ELK stack</h2>
<p>One popular logging stack is composed of Elasticsearch, Logstash, and
Kibana. The following example demonstrates how to set up a deployment
that can be used for logging.</p>
<pre><code class="language-none">docker volume create --name orca-elasticsearch-data
docker container run -d \
--name elasticsearch \
-v orca-elasticsearch-data:/usr/share/elasticsearch/data \
elasticsearch elasticsearch -Enetwork.host=0.0.0.0
docker container run -d \
-p 514:514 \
--name logstash \
--link elasticsearch:es \
logstash \
sh -c "logstash -e 'input { syslog { } } output { stdout { } elasticsearch { hosts =&gt; [ \"es\" ] } } filter { json { source =&gt; \"message\" } }'"
docker container run -d \
--name kibana \
--link elasticsearch:elasticsearch \
-p 5601:5601 \
kibana
</code></pre>
<p>Once you have these containers running, configure UCP to send logs to
the IP address of the Logstash container. You can then browse to port 5601 on
the system running Kibana to view log and event entries. Specify the “time”
field for indexing.</p>
<p>When deployed in a production environment, you should secure your ELK
stack. UCP does not do this itself, but there are a number of third-party
options that can accomplish this, like the Shield plug-in for Kibana.</p>
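<p>If you manage UCP with a configuration file, the remote logging settings
live in the <code class="highlighter-rouge">log_configuration</code> table. A minimal sketch, assuming the
Logstash container from the example above is reachable at the hypothetical
address <code class="highlighter-rouge">10.0.0.10</code> and that the <code class="highlighter-rouge">host</code> value accepts a <code class="highlighter-rouge">host:port</code> form:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>[log_configuration]
  protocol = "tcp"
  # Hypothetical address of the Logstash syslog input
  host = "10.0.0.10:514"
  level = "info"
</code></pre></div></div>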
<h2 id="where-to-go-next">Where to go next</h2>
<ul>
<li><a href="restrict-services-to-worker-nodes.md">Restrict services to worker nodes</a></li>
</ul>

<p>You have two options to configure UCP: through the web UI, or using a Docker
config object. In most cases, the web UI is a front-end for changing the
configuration file.</p>
<p>You can customize how UCP is installed by creating a configuration file
upfront. During the installation, UCP detects and starts using the configuration.</p>
<h2 id="ucp-configuration-file">UCP configuration file</h2>
<p>The <code class="highlighter-rouge">ucp-agent</code> service uses a configuration file to set up UCP.
You can use the configuration file in different ways to set up your UCP
cluster.</p>
<ul>
<li>Install one cluster and use the UCP web UI to configure it as desired,
extract the configuration file, edit it as needed, and use the edited
config file to set up multiple other clusters.</li>
<li>Install a UCP cluster, extract and edit the configuration file, and use the
CLI to apply the new configuration to the same cluster.</li>
<li>Run the <code class="highlighter-rouge">example-config</code> command, edit the example configuration file, and
apply the file at install time or after installation.</li>
</ul>
<p>Specify your configuration settings in a TOML file.
<a href="https://github.com/toml-lang/toml/blob/master/README.md">Learn about Toms Obvious, Minimal Language</a>.</p>
<p>The configuration has a versioned naming convention, with a trailing decimal
number that increases with each version, like <code class="highlighter-rouge">com.docker.ucp.config-1</code>. The
<code class="highlighter-rouge">ucp-agent</code> service maps the configuration to the file at <code class="highlighter-rouge">/etc/ucp/ucp.toml</code>.</p>
<h2 id="inspect-and-modify-existing-configuration">Inspect and modify existing configuration</h2>
<p>Use the <code class="highlighter-rouge">docker config inspect</code> command to view the current settings and emit
them to a file.</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>
<span class="c"># CURRENT_CONFIG_NAME will be the name of the currently active UCP configuration</span>
<span class="nv">CURRENT_CONFIG_NAME</span><span class="o">=</span><span class="k">$(</span>docker service inspect ucp-agent <span class="nt">--format</span> <span class="s1">'{{range .Spec.TaskTemplate.ContainerSpec.Configs}}{{if eq "/etc/ucp/ucp.toml" .File.Name}}{{.ConfigName}}{{end}}{{end}}'</span><span class="k">)</span>
<span class="c"># Collect the current config with `docker config inspect`</span>
docker config inspect <span class="nt">--format</span> <span class="s1">'{{ printf "%s" .Spec.Data }}'</span> <span class="nv">$CURRENT_CONFIG_NAME</span> <span class="o">&gt;</span> ucp-config.toml
</code></pre></div></div>
<p>Edit the file, then use the <code class="highlighter-rouge">docker config create</code> and <code class="highlighter-rouge">docker service update</code>
commands to create and apply the configuration from the file.</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c"># NEXT_CONFIG_NAME will be the name of the new UCP configuration</span>
<span class="nv">NEXT_CONFIG_NAME</span><span class="o">=</span><span class="k">${</span><span class="nv">CURRENT_CONFIG_NAME</span><span class="p">%%-*</span><span class="k">}</span>-<span class="k">$((${</span><span class="nv">CURRENT_CONFIG_NAME</span><span class="p">##*-</span><span class="k">}</span><span class="o">+</span><span class="m">1</span><span class="k">))</span>
<span class="c"># Create the new cluster configuration from the file ucp-config.toml</span>
docker config create <span class="nv">$NEXT_CONFIG_NAME</span> ucp-config.toml
<span class="c"># Use the `docker service update` command to remove the current configuration</span>
<span class="c"># and apply the new configuration to the `ucp-agent` service.</span>
docker service update <span class="nt">--config-rm</span> <span class="nv">$CURRENT_CONFIG_NAME</span> <span class="nt">--config-add</span> <span class="nb">source</span><span class="o">=</span><span class="nv">$NEXT_CONFIG_NAME</span>,target<span class="o">=</span>/etc/ucp/ucp.toml ucp-agent
</code></pre></div></div>
<h2 id="example-configuration-file">Example configuration file</h2>
<p>You can see an example TOML config file that shows how to configure UCP
settings. From the command line, run UCP with the <code class="highlighter-rouge">example-config</code> option,
replacing <code class="highlighter-rouge">&lt;version&gt;</code> with the UCP version you're running:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker container run <span class="nt">--rm</span> docker/ucp:&lt;version&gt; example-config
</code></pre></div></div>
<h2 id="configuration-options">Configuration options</h2>
<h3 id="auth-table">auth table</h3>
<table>
<thead>
<tr>
<th style="text-align: left">Parameter</th>
<th style="text-align: left">Required</th>
<th style="text-align: left">Description</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">backend</code></td>
<td style="text-align: left">no</td>
<td style="text-align: left">The name of the authorization backend to use, either <code class="highlighter-rouge">managed</code> or <code class="highlighter-rouge">ldap</code>. The default is <code class="highlighter-rouge">managed</code>.</td>
</tr>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">default_new_user_role</code></td>
<td style="text-align: left">no</td>
<td style="text-align: left">The role that new users get for their private resource sets. Values are <code class="highlighter-rouge">admin</code>, <code class="highlighter-rouge">viewonly</code>, <code class="highlighter-rouge">scheduler</code>, <code class="highlighter-rouge">restrictedcontrol</code>, or <code class="highlighter-rouge">fullcontrol</code>. The default is <code class="highlighter-rouge">restrictedcontrol</code>.</td>
</tr>
</tbody>
</table>
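<p>In the TOML file, these settings form the <code class="highlighter-rouge">auth</code> table. A minimal sketch
using the default values:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>[auth]
  backend = "managed"
  default_new_user_role = "restrictedcontrol"
</code></pre></div></div>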
<h3 id="authsessions">auth.sessions</h3>
<table>
<thead>
<tr>
<th style="text-align: left">Parameter</th>
<th style="text-align: left">Required</th>
<th style="text-align: left">Description</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">lifetime_minutes</code></td>
<td style="text-align: left">no</td>
<td style="text-align: left">The initial session lifetime, in minutes. The default is 4320, which is 72 hours.</td>
</tr>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">renewal_threshold_minutes</code></td>
<td style="text-align: left">no</td>
<td style="text-align: left">The length of time, in minutes, before the expiration of a session where, if used, a session will be extended by the current configured lifetime from then. A zero value disables session extension. The default is 1440, which is 24 hours.</td>
</tr>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">per_user_limit</code></td>
<td style="text-align: left">no</td>
<td style="text-align: left">The maximum number of sessions that a user can have active simultaneously. If creating a new session would put a user over this limit, the least recently used session will be deleted. A value of zero disables limiting the number of sessions that users may have. The default is 5.</td>
</tr>
</tbody>
</table>
<h3 id="authldap-optional">auth.ldap (optional)</h3>
<table>
<thead>
<tr>
<th style="text-align: left">Parameter</th>
<th style="text-align: left">Required</th>
<th style="text-align: left">Description</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">server_url</code></td>
<td style="text-align: left">no</td>
<td style="text-align: left">The URL of the LDAP server.</td>
</tr>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">no_simple_pagination</code></td>
<td style="text-align: left">no</td>
<td style="text-align: left">Set to <code class="highlighter-rouge">true</code> if the LDAP server doesnt support the Simple Paged Results control extension (RFC 2696). The default is <code class="highlighter-rouge">false</code>.</td>
</tr>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">start_tls</code></td>
<td style="text-align: left">no</td>
<td style="text-align: left">Set to <code class="highlighter-rouge">true</code> to use StartTLS to secure the connection to the server, ignored if the server URL scheme is ldaps://. The default is <code class="highlighter-rouge">false</code>.</td>
</tr>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">root_certs</code></td>
<td style="text-align: left">no</td>
<td style="text-align: left">A root certificate PEM bundle to use when establishing a TLS connection to the server.</td>
</tr>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">tls_skip_verify</code></td>
<td style="text-align: left">no</td>
<td style="text-align: left">Set to <code class="highlighter-rouge">true</code> to skip verifying the servers certificate when establishing a TLS connection, which isnt recommended unless testing on a secure network. The default is <code class="highlighter-rouge">false</code>.</td>
</tr>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">reader_dn</code></td>
<td style="text-align: left">no</td>
<td style="text-align: left">The distinguished name the system uses to bind to the LDAP server when performing searches.</td>
</tr>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">reader_password</code></td>
<td style="text-align: left">no</td>
<td style="text-align: left">The password that the system uses to bind to the LDAP server when performing searches.</td>
</tr>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">sync_schedule</code></td>
<td style="text-align: left">no</td>
<td style="text-align: left">The scheduled time for automatic LDAP sync jobs, in CRON format. Needs to have the seconds field set to zero. The default is @hourly if empty or omitted.</td>
</tr>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">jit_user_provisioning</code></td>
<td style="text-align: left">no</td>
<td style="text-align: left">Whether to only create user accounts upon first login (recommended). The default is <code class="highlighter-rouge">true</code>.</td>
</tr>
</tbody>
</table>
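<p>A sketch of an <code class="highlighter-rouge">auth.ldap</code> table. The server URL, reader DN, and password
are hypothetical placeholders for your environment:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>[auth.ldap]
  server_url = "ldaps://ldap.example.com"
  reader_dn = "cn=reader,dc=example,dc=com"
  reader_password = "&lt;password&gt;"
  jit_user_provisioning = true
</code></pre></div></div>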
<h3 id="authldapadditional_domains-array-optional">auth.ldap.additional_domains array (optional)</h3>
<p>A list of additional LDAP domains and corresponding server configs from which
to sync users and team members. This is an advanced feature which most
environments don't need.</p>
<table>
<thead>
<tr>
<th style="text-align: left">Parameter</th>
<th style="text-align: left">Required</th>
<th style="text-align: left">Description</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">domain</code></td>
<td style="text-align: left">no</td>
<td style="text-align: left">The root domain component of this server, for example, <code class="highlighter-rouge">dc=example,dc=com</code>. A longest-suffix match of the base DN for LDAP searches is used to select which LDAP server to use for search requests. If no matching domain is found, the default LDAP server config is used.</td>
</tr>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">server_url</code></td>
<td style="text-align: left">no</td>
<td style="text-align: left">The URL of the LDAP server for the current additional domain.</td>
</tr>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">no_simple_pagination</code></td>
<td style="text-align: left">no</td>
<td style="text-align: left">Set to true if the LDAP server for this additional domain does not support the Simple Paged Results control extension (RFC 2696). The default is <code class="highlighter-rouge">false</code>.</td>
</tr>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">start_tls</code></td>
<td style="text-align: left">no</td>
<td style="text-align: left">Whether to use StartTLS to secure the connection to the server, ignored if the server URL scheme is ldaps://.</td>
</tr>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">root_certs</code></td>
<td style="text-align: left">no</td>
<td style="text-align: left">A root certificate PEM bundle to use when establishing a TLS connection to the server for the current additional domain.</td>
</tr>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">tls_skip_verify</code></td>
<td style="text-align: left">no</td>
<td style="text-align: left">Whether to skip verifying the additional domain servers certificate when establishing a TLS connection, not recommended unless testing on a secure network. The default is <code class="highlighter-rouge">true</code>.</td>
</tr>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">reader_dn</code></td>
<td style="text-align: left">no</td>
<td style="text-align: left">The distinguished name the system uses to bind to the LDAP server when performing searches under the additional domain.</td>
</tr>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">reader_password</code></td>
<td style="text-align: left">no</td>
<td style="text-align: left">The password that the system uses to bind to the LDAP server when performing searches under the additional domain.</td>
</tr>
</tbody>
</table>
<h3 id="authldapuser_search_configs-array-optional">auth.ldap.user_search_configs array (optional)</h3>
<p>Settings for syncing users.</p>
<table>
<thead>
<tr>
<th style="text-align: left">Parameter</th>
<th style="text-align: left">Required</th>
<th style="text-align: left">Description</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">base_dn</code></td>
<td style="text-align: left">no</td>
<td style="text-align: left">The distinguished name of the element from which the LDAP server will search for users, for example, <code class="highlighter-rouge">ou=people,dc=example,dc=com</code>.</td>
</tr>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">scope_subtree</code></td>
<td style="text-align: left">no</td>
<td style="text-align: left">Set to <code class="highlighter-rouge">true</code> to search for users in the entire subtree of the base DN. Set to <code class="highlighter-rouge">false</code> to search only one level under the base DN. The default is <code class="highlighter-rouge">false</code>.</td>
</tr>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">username_attr</code></td>
<td style="text-align: left">no</td>
<td style="text-align: left">The name of the attribute of the LDAP user element which should be selected as the username. The default is <code class="highlighter-rouge">uid</code>.</td>
</tr>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">full_name_attr</code></td>
<td style="text-align: left">no</td>
<td style="text-align: left">The name of the attribute of the LDAP user element which should be selected as the full name of the user. The default is <code class="highlighter-rouge">cn</code>.</td>
</tr>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">filter</code></td>
<td style="text-align: left">no</td>
<td style="text-align: left">The LDAP search filter used to select user elements, for example, <code class="highlighter-rouge">(&amp;(objectClass=person)(objectClass=user))</code>. May be left blank.</td>
</tr>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">match_group</code></td>
<td style="text-align: left">no</td>
<td style="text-align: left">Whether to additionally filter users to those who are direct members of a group. The default is <code class="highlighter-rouge">true</code>.</td>
</tr>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">match_group_dn</code></td>
<td style="text-align: left">no</td>
<td style="text-align: left">The distinguished name of the LDAP group, for example, <code class="highlighter-rouge">cn=ddc-users,ou=groups,dc=example,dc=com</code>. Required if <code class="highlighter-rouge">matchGroup</code> is <code class="highlighter-rouge">true</code>.</td>
</tr>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">match_group_member_attr</code></td>
<td style="text-align: left">no</td>
<td style="text-align: left">The name of the LDAP group entry attribute which corresponds to distinguished names of members. Required if <code class="highlighter-rouge">matchGroup</code> is <code class="highlighter-rouge">true</code>. The default is <code class="highlighter-rouge">member</code>.</td>
</tr>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">match_group_iterate</code></td>
<td style="text-align: left">no</td>
<td style="text-align: left">Set to <code class="highlighter-rouge">true</code> to get all of the user attributes by iterating through the group members and performing a lookup for each one separately. Use this instead of searching users first, then applying the group selection filter. Ignored if <code class="highlighter-rouge">matchGroup</code> is <code class="highlighter-rouge">false</code>. The default is <code class="highlighter-rouge">false</code>.</td>
</tr>
</tbody>
</table>
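<p>Because this is a TOML array of tables, each user search config is introduced
with a double-bracket header. A sketch with hypothetical values:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>[[auth.ldap.user_search_configs]]
  base_dn = "ou=people,dc=example,dc=com"
  username_attr = "uid"
  full_name_attr = "cn"
  match_group = false
</code></pre></div></div>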
<h3 id="authldapadmin_sync_opts-optional">auth.ldap.admin_sync_opts (optional)</h3>
<p>Settings for syncing system administrator users.</p>
<table>
<thead>
<tr>
<th style="text-align: left">Parameter</th>
<th style="text-align: left">Required</th>
<th style="text-align: left">Description</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">enable_sync</code></td>
<td style="text-align: left">no</td>
<td style="text-align: left">Set to <code class="highlighter-rouge">true</code> to enable syncing admins. If <code class="highlighter-rouge">false</code>, all other fields in this table are ignored. The default is <code class="highlighter-rouge">true</code>.</td>
</tr>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">select_group_members</code></td>
<td style="text-align: left">no</td>
<td style="text-align: left">Set to <code class="highlighter-rouge">true</code> to sync using a group DN and member attribute selection. Set to <code class="highlighter-rouge">false</code> to use a search filter. The default is <code class="highlighter-rouge">true</code>.</td>
</tr>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">group_dn</code></td>
<td style="text-align: left">no</td>
<td style="text-align: left">The distinguished name of the LDAP group, for example, <code class="highlighter-rouge">cn=ddc-admins,ou=groups,dc=example,dc=com</code>. Required if <code class="highlighter-rouge">select_group_members</code> is <code class="highlighter-rouge">true</code>.</td>
</tr>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">group_member_attr</code></td>
<td style="text-align: left">no</td>
<td style="text-align: left">The name of the LDAP group entry attribute which corresponds to distinguished names of members. Required if <code class="highlighter-rouge">select_group_members</code> is <code class="highlighter-rouge">true</code>. The default is <code class="highlighter-rouge">member</code>.</td>
</tr>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">search_base_dn</code></td>
<td style="text-align: left">no</td>
<td style="text-align: left">The distinguished name of the element from which the LDAP server will search for users, for example, <code class="highlighter-rouge">ou=people,dc=example,dc=com</code>. Required if <code class="highlighter-rouge">select_group_members</code> is <code class="highlighter-rouge">false</code>.</td>
</tr>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">search_scope_subtree</code></td>
<td style="text-align: left">no</td>
<td style="text-align: left">Set to <code class="highlighter-rouge">true</code> to search for users in the entire subtree of the base DN. Set to <code class="highlighter-rouge">false</code> to search only one level under the base DN. The default is <code class="highlighter-rouge">false</code>. Required if <code class="highlighter-rouge">select_group_members</code> is <code class="highlighter-rouge">false</code>.</td>
</tr>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">search_filter</code></td>
<td style="text-align: left">no</td>
<td style="text-align: left">The LDAP search filter used to select users if <code class="highlighter-rouge">select_group_members</code> is <code class="highlighter-rouge">false</code>, for example, <code class="highlighter-rouge">(memberOf=cn=ddc-admins,ou=groups,dc=example,dc=com)</code>. May be left blank.</td>
</tr>
</tbody>
</table>
<h3 id="registries-array-optional">registries array (optional)</h3>
<p>An array of tables that specifies the DTR instances that the current UCP instance manages.</p>
<table>
<thead>
<tr>
<th style="text-align: left">Parameter</th>
<th style="text-align: left">Required</th>
<th style="text-align: left">Description</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">host_address</code></td>
<td style="text-align: left">yes</td>
<td style="text-align: left">The address for connecting to the DTR instance tied to this UCP cluster.</td>
</tr>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">service_id</code></td>
<td style="text-align: left">yes</td>
<td style="text-align: left">The DTR instances OpenID Connect Client ID, as registered with the Docker authentication provider.</td>
</tr>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">ca_bundle</code></td>
<td style="text-align: left">no</td>
<td style="text-align: left">If youre using a custom certificate authority (CA), the <code class="highlighter-rouge">ca_bundle</code> setting specifies the root CA bundle for the DTR instance. The value is a string with the contents of a <code class="highlighter-rouge">ca.pem</code> file.</td>
</tr>
</tbody>
</table>
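<p>Because <code class="highlighter-rouge">registries</code> is an array of tables, each DTR instance gets its own
double-bracket entry. A sketch with a hypothetical DTR address and service ID:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>[[registries]]
  host_address = "dtr.example.com"
  service_id = "&lt;dtr-openid-client-id&gt;"
</code></pre></div></div>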
<h3 id="scheduling_configuration-table-optional">scheduling_configuration table (optional)</h3>
<p>Specifies scheduling options and the default orchestrator for new nodes.</p>
<table>
<thead>
<tr>
<th style="text-align: left">Parameter</th>
<th style="text-align: left">Required</th>
<th style="text-align: left">Description</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">enable_admin_ucp_scheduling</code></td>
<td style="text-align: left">no</td>
<td style="text-align: left">Set to <code class="highlighter-rouge">true</code> to allow admins to schedule on containers on manager nodes. The default is <code class="highlighter-rouge">false</code>.</td>
</tr>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">default_node_orchestrator</code></td>
<td style="text-align: left">no</td>
<td style="text-align: left">Sets the type of orchestrator to use for new nodes that are joined to the cluster. Can be <code class="highlighter-rouge">swarm</code> or <code class="highlighter-rouge">kubernetes</code>. The default is <code class="highlighter-rouge">swarm</code>.</td>
</tr>
</tbody>
</table>
<h3 id="tracking_configuration-table-optional">tracking_configuration table (optional)</h3>
<p>Specifies the analytics data that UCP collects.</p>
<table>
<thead>
<tr>
<th style="text-align: left">Parameter</th>
<th style="text-align: left">Required</th>
<th style="text-align: left">Description</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">disable_usageinfo</code></td>
<td style="text-align: left">no</td>
<td style="text-align: left">Set to <code class="highlighter-rouge">true</code> to disable analytics of usage information. The default is <code class="highlighter-rouge">false</code>.</td>
</tr>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">disable_tracking</code></td>
<td style="text-align: left">no</td>
<td style="text-align: left">Set to <code class="highlighter-rouge">true</code> to disable analytics of API call information. The default is <code class="highlighter-rouge">false</code>.</td>
</tr>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">anonymize_tracking</code></td>
<td style="text-align: left">no</td>
<td style="text-align: left">Anonymize analytic data. Set to <code class="highlighter-rouge">true</code> to hide your license ID. The default is <code class="highlighter-rouge">false</code>.</td>
</tr>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">cluster_label</code></td>
<td style="text-align: left">no</td>
<td style="text-align: left">Set a label to be included with analytics/</td>
</tr>
</tbody>
</table>
<h3 id="trust_configuration-table-optional">trust_configuration table (optional)</h3>
<p>Specifies whether DTR images require signing.</p>
<table>
<thead>
<tr>
<th style="text-align: left">Parameter</th>
<th style="text-align: left">Required</th>
<th style="text-align: left">Description</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">require_content_trust</code></td>
<td style="text-align: left">no</td>
<td style="text-align: left">Set to <code class="highlighter-rouge">true</code> to require images be signed by content trust. The default is <code class="highlighter-rouge">false</code>.</td>
</tr>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">require_signature_from</code></td>
<td style="text-align: left">no</td>
<td style="text-align: left">A string array that specifies users or teams which must sign images.</td>
</tr>
</tbody>
</table>
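<p>A sketch that turns signing enforcement on. The list of required signers is
hypothetical; use the users or teams defined in your own environment:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>[trust_configuration]
  require_content_trust = true
  require_signature_from = ["security-team"]
</code></pre></div></div>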
<h3 id="log_configuration-table-optional">log_configuration table (optional)</h3>
<p>Configures the logging options for UCP components.</p>
<table>
<thead>
<tr>
<th style="text-align: left">Parameter</th>
<th style="text-align: left">Required</th>
<th style="text-align: left">Description</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">protocol</code></td>
<td style="text-align: left">no</td>
<td style="text-align: left">The protocol to use for remote logging. Values are <code class="highlighter-rouge">tcp</code> and <code class="highlighter-rouge">udp</code>. The default is <code class="highlighter-rouge">tcp</code>.</td>
</tr>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">host</code></td>
<td style="text-align: left">no</td>
<td style="text-align: left">Specifies a remote syslog server to send UCP controller logs to. If omitted, controller logs are sent through the default docker daemon logging driver from the <code class="highlighter-rouge">ucp-controller</code> container.</td>
</tr>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">level</code></td>
<td style="text-align: left">no</td>
<td style="text-align: left">The logging level for UCP components. Values are <a href="https://linux.die.net/man/5/syslog.conf">syslog priority levels</a>: <code class="highlighter-rouge">debug</code>, <code class="highlighter-rouge">info</code>, <code class="highlighter-rouge">notice</code>, <code class="highlighter-rouge">warning</code>, <code class="highlighter-rouge">err</code>, <code class="highlighter-rouge">crit</code>, <code class="highlighter-rouge">alert</code>, and <code class="highlighter-rouge">emerg</code>.</td>
</tr>
</tbody>
</table>
<h3 id="license_configuration-table-optional">license_configuration table (optional)</h3>
<p>Specifies whether your UCP license is automatically renewed.</p>
<table>
<thead>
<tr>
<th style="text-align: left">Parameter</th>
<th style="text-align: left">Required</th>
<th style="text-align: left">Description</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">auto_refresh</code></td>
<td style="text-align: left">no</td>
<td style="text-align: left">Set to <code class="highlighter-rouge">true</code> to enable attempted automatic license renewal when the license nears expiration. If disabled, you must manually upload renewed license after expiration. The default is <code class="highlighter-rouge">true</code>.</td>
</tr>
</tbody>
</table>
<h3 id="cluster_config-table-required">cluster_config table (required)</h3>
<p>Configures the cluster that the current UCP instance manages.</p>
<p>The <code class="highlighter-rouge">dns</code>, <code class="highlighter-rouge">dns_opt</code>, and <code class="highlighter-rouge">dns_search</code> settings configure the DNS settings for UCP
components. Assigning these values overrides the settings in a containers
<code class="highlighter-rouge">/etc/resolv.conf</code> file. For more info, see
<a href="/engine/userguide/networking/default_network/configure-dns/">Configure container DNS</a>.</p>
<table>
<thead>
<tr>
<th style="text-align: left">Parameter</th>
<th style="text-align: left">Required</th>
<th style="text-align: left">Description</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">controller_port</code></td>
<td style="text-align: left">yes</td>
<td style="text-align: left">Configures the port that the <code class="highlighter-rouge">ucp-controller</code> listens to. The default is <code class="highlighter-rouge">443</code>.</td>
</tr>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">kube_apiserver_port</code></td>
<td style="text-align: left">yes</td>
<td style="text-align: left">Configures the port the Kubernetes API server listens to.</td>
</tr>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">swarm_port</code></td>
<td style="text-align: left">yes</td>
<td style="text-align: left">Configures the port that the <code class="highlighter-rouge">ucp-swarm-manager</code> listens to. The default is <code class="highlighter-rouge">2376</code>.</td>
</tr>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">swarm_strategy</code></td>
<td style="text-align: left">no</td>
<td style="text-align: left">Configures placement strategy for container scheduling. This doesnt affect swarm-mode services. Values are <code class="highlighter-rouge">spread</code>, <code class="highlighter-rouge">binpack</code>, and <code class="highlighter-rouge">random</code>.</td>
</tr>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">dns</code></td>
<td style="text-align: left">yes</td>
<td style="text-align: left">Array of IP addresses to add as nameservers.</td>
</tr>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">dns_opt</code></td>
<td style="text-align: left">yes</td>
<td style="text-align: left">Array of options used by DNS resolvers.</td>
</tr>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">dns_search</code></td>
<td style="text-align: left">yes</td>
<td style="text-align: left">Array of domain names to search when a bare unqualified hostname is used inside of a container.</td>
</tr>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">profiling_enabled</code></td>
<td style="text-align: left">no</td>
<td style="text-align: left">Set to <code class="highlighter-rouge">true</code> to enable specialized debugging endpoints for profiling UCP performance. The default is <code class="highlighter-rouge">false</code>.</td>
</tr>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">kv_timeout</code></td>
<td style="text-align: left">no</td>
<td style="text-align: left">Sets the key-value store timeout setting, in milliseconds. The default is <code class="highlighter-rouge">5000</code>.</td>
</tr>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">kv_snapshot_count</code></td>
<td style="text-align: left">no</td>
<td style="text-align: left">Sets the key-value store snapshot count setting. The default is <code class="highlighter-rouge">20000</code>.</td>
</tr>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">external_service_lb</code></td>
<td style="text-align: left">no</td>
<td style="text-align: left">Specifies an optional external load balancer for default links to services with exposed ports in the web UI.</td>
</tr>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">cni_installer_url</code></td>
<td style="text-align: left">no</td>
<td style="text-align: left">Specifies the URL of a Kubernetes YAML file to be used for installing a CNI plugin. Applies only during initial installation. If empty, the default CNI plugin is used.</td>
</tr>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">metrics_retention_time</code></td>
<td style="text-align: left">no</td>
<td style="text-align: left">Adjusts the metrics retention time.</td>
</tr>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">metrics_scrape_interval</code></td>
<td style="text-align: left">no</td>
<td style="text-align: left">Sets the interval for how frequently managers gather metrics from nodes in the cluster.</td>
</tr>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">metrics_disk_usage_interval</code></td>
<td style="text-align: left">no</td>
<td style="text-align: left">Sets the interval for how frequently storage metrics are gathered. This operation can be expensive when large volumes are present.</td>
</tr>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">rethinkdb_cache_size</code></td>
<td style="text-align: left">no</td>
<td style="text-align: left">Sets the size of the cache used by UCPs RethinkDB servers. The default is 512MB, but leaving this field empty or specifying <code class="highlighter-rouge">auto</code> instructs RethinkDB to determine a cache size automatically.</td>
</tr>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">cloud_provider</code></td>
<td style="text-align: left">no</td>
<td style="text-align: left">Set the cloud provider for the kubernetes cluster.</td>
</tr>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">pod_cidr</code></td>
<td style="text-align: left">yes</td>
<td style="text-align: left">Sets the subnet pool from which the IP for the Pod should be allocated from the CNI ipam plugin. Default is <code class="highlighter-rouge">192.168.0.0/16</code>.</td>
</tr>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">nodeport_range</code></td>
<td style="text-align: left">yes</td>
<td style="text-align: left">Set the port range that for Kubernetes services of type NodePort can be exposed in. Default is <code class="highlighter-rouge">32768-35535</code>.</td>
</tr>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">custom_kube_api_server_flags</code></td>
<td style="text-align: left">no</td>
<td style="text-align: left">Set the configuration options for the Kubernetes API server.</td>
</tr>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">custom_kube_controller_manager_flags</code></td>
<td style="text-align: left">no</td>
<td style="text-align: left">Set the configuration options for the Kubernetes controller manager</td>
</tr>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">custom_kubelet_flags</code></td>
<td style="text-align: left">no</td>
<td style="text-align: left">Set the configuration options for Kubelets</td>
</tr>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">custom_kube_scheduler_flags</code></td>
<td style="text-align: left">no</td>
<td style="text-align: left">Set the configuration options for the Kubernetes scheduler</td>
</tr>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">local_volume_collection_mapping</code></td>
<td style="text-align: left">no</td>
<td style="text-align: left">Store data about collections for volumes in UCPs local KV store instead of on the volume labels. This is used for enforcing access control on volumes.</td>
</tr>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">manager_kube_reserved_resources</code></td>
<td style="text-align: left">no</td>
<td style="text-align: left">Reserve resources for Docker UCP and Kubernetes components which are running on manager nodes.</td>
</tr>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">worker_kube_reserved_resources</code></td>
<td style="text-align: left">no</td>
<td style="text-align: left">Reserve resources for Docker UCP and Kubernetes components which are running on worker nodes.</td>
</tr>
</tbody>
</table>
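<p>A sketch of a minimal <code class="highlighter-rouge">cluster_config</code> table covering the required
parameters. The DNS arrays and the API server port are hypothetical values;
the others use the defaults from the table above:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>[cluster_config]
  controller_port = 443
  kube_apiserver_port = 6443
  swarm_port = 2376
  dns = ["8.8.8.8"]
  dns_opt = []
  dns_search = []
  pod_cidr = "192.168.0.0/16"
  nodeport_range = "32768-35535"
</code></pre></div></div>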

<p>Docker UCP supports Network File System (NFS) persistent volumes for
Kubernetes. To enable this feature on a UCP cluster, you need to set up
an NFS storage volume provisioner.</p>
<blockquote>
<p>Kubernetes storage drivers</p>
<p>Currently, NFS is the only Kubernetes storage driver that UCP supports.</p>
</blockquote>
<h2 id="enable-nfs-volume-provisioning">Enable NFS volume provisioning</h2>
<p>The following steps enable NFS volume provisioning on a UCP cluster:</p>
<ol>
<li>Create an NFS server pod.</li>
<li>Create a default storage class.</li>
<li>Create persistent volumes that use the default storage class.</li>
<li>Deploy your persistent volume claims and applications.</li>
</ol>
<p>The following procedure shows you how to deploy WordPress and a MySQL backend
that use NFS volume provisioning.</p>
<p><a href="../../user-access/kubectl.md">Install the Kubernetes CLI</a> to complete the
procedure for enabling NFS provisioning.</p>
<h2 id="create-the-nfs-server">Create the NFS Server</h2>
<p>To enable NFS volume provisioning on a UCP cluster, you need to install
an NFS server. Google provides an image for this purpose.</p>
<p>On any node in the cluster with a <a href="../../user-access/cli.md">UCP client bundle</a>,
copy the following YAML to a file named <code class="highlighter-rouge">nfs-server.yaml</code>.</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="na">apiVersion</span><span class="pi">:</span> <span class="s">v1</span>
<span class="na">kind</span><span class="pi">:</span> <span class="s">Pod</span>
<span class="na">metadata</span><span class="pi">:</span>
<span class="na">name</span><span class="pi">:</span> <span class="s">nfs-server</span>
<span class="na">namespace</span><span class="pi">:</span> <span class="s">default</span>
<span class="na">labels</span><span class="pi">:</span>
<span class="na">role</span><span class="pi">:</span> <span class="s">nfs-server</span>
<span class="na">spec</span><span class="pi">:</span>
<span class="na">tolerations</span><span class="pi">:</span>
<span class="pi">-</span> <span class="na">key</span><span class="pi">:</span> <span class="s">node-role.kubernetes.io/master</span>
<span class="na">effect</span><span class="pi">:</span> <span class="s">NoSchedule</span>
<span class="na">nodeSelector</span><span class="pi">:</span>
<span class="s">node-role.kubernetes.io/master</span><span class="pi">:</span> <span class="s2">"</span><span class="s">"</span>
<span class="na">containers</span><span class="pi">:</span>
<span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">nfs-server</span>
<span class="na">image</span><span class="pi">:</span> <span class="s">gcr.io/google_containers/volume-nfs:0.8</span>
<span class="na">securityContext</span><span class="pi">:</span>
<span class="na">privileged</span><span class="pi">:</span> <span class="no">true</span>
<span class="na">ports</span><span class="pi">:</span>
<span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">nfs-0</span>
<span class="na">containerPort</span><span class="pi">:</span> <span class="s">2049</span>
<span class="na">protocol</span><span class="pi">:</span> <span class="s">TCP</span>
<span class="na">restartPolicy</span><span class="pi">:</span> <span class="s">Always</span>
</code></pre></div></div>
<p>Run the following command to create the NFS server pod.</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>kubectl create <span class="nt">-f</span> nfs-server.yaml
</code></pre></div></div>
<p>The default storage class needs the IP address of the NFS server pod.
Run the following command to get the pod's IP address.</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>kubectl describe pod nfs-server | <span class="nb">grep </span>IP:
</code></pre></div></div>
<p>The result looks like this:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>IP: 192.168.106.67
</code></pre></div></div>
<h2 id="create-the-default-storage-class">Create the default storage class</h2>
<p>To enable NFS provisioning, create a storage class that has the
<code class="highlighter-rouge">storageclass.kubernetes.io/is-default-class</code> annotation set to <code class="highlighter-rouge">true</code>.
Also, provide the IP address of the NFS server pod as a parameter.</p>
<p>Copy the following YAML to a file named <code class="highlighter-rouge">default-storage.yaml</code>. Replace
<code class="highlighter-rouge">&lt;nfs-server-pod-ip-address&gt;</code> with the IP address from the previous step.</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="na">kind</span><span class="pi">:</span> <span class="s">StorageClass</span>
<span class="na">apiVersion</span><span class="pi">:</span> <span class="s">storage.k8s.io/v1beta1</span>
<span class="na">metadata</span><span class="pi">:</span>
<span class="na">namespace</span><span class="pi">:</span> <span class="s">default</span>
<span class="na">name</span><span class="pi">:</span> <span class="s">default-storage</span>
<span class="na">annotations</span><span class="pi">:</span>
<span class="s">storageclass.kubernetes.io/is-default-class</span><span class="pi">:</span> <span class="s2">"</span><span class="s">true"</span>
<span class="na">labels</span><span class="pi">:</span>
<span class="s">kubernetes.io/cluster-service</span><span class="pi">:</span> <span class="s2">"</span><span class="s">true"</span>
<span class="na">provisioner</span><span class="pi">:</span> <span class="s">kubernetes.io/nfs</span>
<span class="na">parameters</span><span class="pi">:</span>
<span class="na">path</span><span class="pi">:</span> <span class="s">/</span>
<span class="na">server</span><span class="pi">:</span> <span class="s">&lt;nfs-server-pod-ip-address&gt;</span>
</code></pre></div></div>
<p>Run the following command to create the default storage class.</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>kubectl create <span class="nt">-f</span> default-storage.yaml
</code></pre></div></div>
<p>Confirm that the storage class was created and that it's assigned as the
default for the cluster.</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>kubectl get storageclass
</code></pre></div></div>
<p>It should look like this:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>NAME PROVISIONER AGE
default-storage (default) kubernetes.io/nfs 58s
</code></pre></div></div>
<h2 id="create-persistent-volumes">Create persistent volumes</h2>
<p>Create two persistent volumes based on the <code class="highlighter-rouge">default-storage</code> storage class.
One volume is for the MySQL database, and the other is for WordPress.</p>
<p>To create an NFS volume, specify <code class="highlighter-rouge">storageClassName: default-storage</code> in the
persistent volume spec.</p>
<p>Copy the following YAML to a file named <code class="highlighter-rouge">local-volumes.yaml</code>.</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="na">apiVersion</span><span class="pi">:</span> <span class="s">v1</span>
<span class="na">kind</span><span class="pi">:</span> <span class="s">PersistentVolume</span>
<span class="na">metadata</span><span class="pi">:</span>
<span class="na">name</span><span class="pi">:</span> <span class="s">local-pv-1</span>
<span class="na">labels</span><span class="pi">:</span>
<span class="na">type</span><span class="pi">:</span> <span class="s">local</span>
<span class="na">spec</span><span class="pi">:</span>
<span class="na">storageClassName</span><span class="pi">:</span> <span class="s">default-storage</span>
<span class="na">capacity</span><span class="pi">:</span>
<span class="na">storage</span><span class="pi">:</span> <span class="s">20Gi</span>
<span class="na">accessModes</span><span class="pi">:</span>
<span class="pi">-</span> <span class="s">ReadWriteOnce</span>
<span class="na">hostPath</span><span class="pi">:</span>
<span class="na">path</span><span class="pi">:</span> <span class="s">/tmp/data/pv-1</span>
<span class="nn">---</span>
<span class="na">apiVersion</span><span class="pi">:</span> <span class="s">v1</span>
<span class="na">kind</span><span class="pi">:</span> <span class="s">PersistentVolume</span>
<span class="na">metadata</span><span class="pi">:</span>
<span class="na">name</span><span class="pi">:</span> <span class="s">local-pv-2</span>
<span class="na">labels</span><span class="pi">:</span>
<span class="na">type</span><span class="pi">:</span> <span class="s">local</span>
<span class="na">spec</span><span class="pi">:</span>
<span class="na">storageClassName</span><span class="pi">:</span> <span class="s">default-storage</span>
<span class="na">capacity</span><span class="pi">:</span>
<span class="na">storage</span><span class="pi">:</span> <span class="s">20Gi</span>
<span class="na">accessModes</span><span class="pi">:</span>
<span class="pi">-</span> <span class="s">ReadWriteOnce</span>
<span class="na">hostPath</span><span class="pi">:</span>
<span class="na">path</span><span class="pi">:</span> <span class="s">/tmp/data/pv-2</span>
</code></pre></div></div>
<p>Run this command to create the persistent volumes.</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>kubectl create <span class="nt">-f</span> local-volumes.yaml
</code></pre></div></div>
<p>Inspect the volumes:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>kubectl get persistentvolumes
</code></pre></div></div>
<p>They should look like this:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
local-pv-1 20Gi RWO Retain Available default-storage 1m
local-pv-2 20Gi RWO Retain Available default-storage 1m
</code></pre></div></div>
<h2 id="create-a-secret-for-the-mysql-password">Create a secret for the MySQL password</h2>
<p>Create a secret for the password that you want to use for accessing the MySQL
database. Use this command to create the secret object:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>kubectl create secret generic mysql-pass <span class="nt">--from-literal</span><span class="o">=</span><span class="nv">password</span><span class="o">=</span>&lt;mysql-password&gt;
</code></pre></div></div>
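<p>You can confirm that the secret exists before deploying:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>kubectl get secret mysql-pass
</code></pre></div></div>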
<h2 id="deploy-persistent-volume-claims-and-applications">Deploy persistent volume claims and applications</h2>
<p>You have two persistent volumes that are available for claims. The MySQL
deployment uses one volume, and WordPress uses the other.</p>
<p>Copy the following yaml to a file named <code class="highlighter-rouge">wordpress-deployment.yaml</code>.
The claims in this file make no reference to a particular storage class, so
they bind to any available volumes that can satisfy the storage request.
In this example, both claims request <code class="highlighter-rouge">20Gi</code> of storage.</p>
<blockquote>
<p>Use a specific persistent volume</p>
<p>If you want to bind a claim to a specific persistent volume, rather than letting Kubernetes choose one at random, make sure the <code class="highlighter-rouge">storageClassName</code> key is set in the persistent volume claim itself.</p>
</blockquote>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="na">apiVersion</span><span class="pi">:</span> <span class="s">v1</span>
<span class="na">kind</span><span class="pi">:</span> <span class="s">Service</span>
<span class="na">metadata</span><span class="pi">:</span>
<span class="na">name</span><span class="pi">:</span> <span class="s">wordpress-mysql</span>
<span class="na">labels</span><span class="pi">:</span>
<span class="na">app</span><span class="pi">:</span> <span class="s">wordpress</span>
<span class="na">spec</span><span class="pi">:</span>
<span class="na">ports</span><span class="pi">:</span>
<span class="pi">-</span> <span class="na">port</span><span class="pi">:</span> <span class="s">3306</span>
<span class="na">selector</span><span class="pi">:</span>
<span class="na">app</span><span class="pi">:</span> <span class="s">wordpress</span>
<span class="na">tier</span><span class="pi">:</span> <span class="s">mysql</span>
<span class="na">clusterIP</span><span class="pi">:</span> <span class="s">None</span>
<span class="nn">---</span>
<span class="na">apiVersion</span><span class="pi">:</span> <span class="s">v1</span>
<span class="na">kind</span><span class="pi">:</span> <span class="s">PersistentVolumeClaim</span>
<span class="na">metadata</span><span class="pi">:</span>
<span class="na">name</span><span class="pi">:</span> <span class="s">mysql-pv-claim</span>
<span class="na">labels</span><span class="pi">:</span>
<span class="na">app</span><span class="pi">:</span> <span class="s">wordpress</span>
<span class="na">spec</span><span class="pi">:</span>
<span class="na">accessModes</span><span class="pi">:</span>
<span class="pi">-</span> <span class="s">ReadWriteOnce</span>
<span class="na">resources</span><span class="pi">:</span>
<span class="na">requests</span><span class="pi">:</span>
<span class="na">storage</span><span class="pi">:</span> <span class="s">20Gi</span>
<span class="nn">---</span>
<span class="na">apiVersion</span><span class="pi">:</span> <span class="s">apps/v1beta2</span>
<span class="na">kind</span><span class="pi">:</span> <span class="s">Deployment</span>
<span class="na">metadata</span><span class="pi">:</span>
<span class="na">name</span><span class="pi">:</span> <span class="s">wordpress-mysql</span>
<span class="na">labels</span><span class="pi">:</span>
<span class="na">app</span><span class="pi">:</span> <span class="s">wordpress</span>
<span class="na">spec</span><span class="pi">:</span>
<span class="na">selector</span><span class="pi">:</span>
<span class="na">matchLabels</span><span class="pi">:</span>
<span class="na">app</span><span class="pi">:</span> <span class="s">wordpress</span>
<span class="na">tier</span><span class="pi">:</span> <span class="s">mysql</span>
<span class="na">strategy</span><span class="pi">:</span>
<span class="na">type</span><span class="pi">:</span> <span class="s">Recreate</span>
<span class="na">template</span><span class="pi">:</span>
<span class="na">metadata</span><span class="pi">:</span>
<span class="na">labels</span><span class="pi">:</span>
<span class="na">app</span><span class="pi">:</span> <span class="s">wordpress</span>
<span class="na">tier</span><span class="pi">:</span> <span class="s">mysql</span>
<span class="na">spec</span><span class="pi">:</span>
<span class="na">containers</span><span class="pi">:</span>
<span class="pi">-</span> <span class="na">image</span><span class="pi">:</span> <span class="s">mysql:5.6</span>
<span class="na">name</span><span class="pi">:</span> <span class="s">mysql</span>
<span class="na">env</span><span class="pi">:</span>
<span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">MYSQL_ROOT_PASSWORD</span>
<span class="na">valueFrom</span><span class="pi">:</span>
<span class="na">secretKeyRef</span><span class="pi">:</span>
<span class="na">name</span><span class="pi">:</span> <span class="s">mysql-pass</span>
<span class="na">key</span><span class="pi">:</span> <span class="s">password</span>
<span class="na">ports</span><span class="pi">:</span>
<span class="pi">-</span> <span class="na">containerPort</span><span class="pi">:</span> <span class="s">3306</span>
<span class="na">name</span><span class="pi">:</span> <span class="s">mysql</span>
<span class="na">volumeMounts</span><span class="pi">:</span>
<span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">mysql-persistent-storage</span>
<span class="na">mountPath</span><span class="pi">:</span> <span class="s">/var/lib/mysql</span>
<span class="na">volumes</span><span class="pi">:</span>
<span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">mysql-persistent-storage</span>
<span class="na">persistentVolumeClaim</span><span class="pi">:</span>
<span class="na">claimName</span><span class="pi">:</span> <span class="s">mysql-pv-claim</span>
<span class="nn">---</span>
<span class="na">apiVersion</span><span class="pi">:</span> <span class="s">v1</span>
<span class="na">kind</span><span class="pi">:</span> <span class="s">Service</span>
<span class="na">metadata</span><span class="pi">:</span>
<span class="na">name</span><span class="pi">:</span> <span class="s">wordpress</span>
<span class="na">labels</span><span class="pi">:</span>
<span class="na">app</span><span class="pi">:</span> <span class="s">wordpress</span>
<span class="na">spec</span><span class="pi">:</span>
<span class="na">ports</span><span class="pi">:</span>
<span class="pi">-</span> <span class="na">port</span><span class="pi">:</span> <span class="s">80</span>
<span class="na">selector</span><span class="pi">:</span>
<span class="na">app</span><span class="pi">:</span> <span class="s">wordpress</span>
<span class="na">tier</span><span class="pi">:</span> <span class="s">frontend</span>
<span class="na">type</span><span class="pi">:</span> <span class="s">LoadBalancer</span>
<span class="nn">---</span>
<span class="na">apiVersion</span><span class="pi">:</span> <span class="s">v1</span>
<span class="na">kind</span><span class="pi">:</span> <span class="s">PersistentVolumeClaim</span>
<span class="na">metadata</span><span class="pi">:</span>
<span class="na">name</span><span class="pi">:</span> <span class="s">wp-pv-claim</span>
<span class="na">labels</span><span class="pi">:</span>
<span class="na">app</span><span class="pi">:</span> <span class="s">wordpress</span>
<span class="na">spec</span><span class="pi">:</span>
<span class="na">accessModes</span><span class="pi">:</span>
<span class="pi">-</span> <span class="s">ReadWriteOnce</span>
<span class="na">resources</span><span class="pi">:</span>
<span class="na">requests</span><span class="pi">:</span>
<span class="na">storage</span><span class="pi">:</span> <span class="s">20Gi</span>
<span class="nn">---</span>
<span class="na">apiVersion</span><span class="pi">:</span> <span class="s">apps/v1beta2</span>
<span class="na">kind</span><span class="pi">:</span> <span class="s">Deployment</span>
<span class="na">metadata</span><span class="pi">:</span>
<span class="na">name</span><span class="pi">:</span> <span class="s">wordpress</span>
<span class="na">labels</span><span class="pi">:</span>
<span class="na">app</span><span class="pi">:</span> <span class="s">wordpress</span>
<span class="na">spec</span><span class="pi">:</span>
<span class="na">selector</span><span class="pi">:</span>
<span class="na">matchLabels</span><span class="pi">:</span>
<span class="na">app</span><span class="pi">:</span> <span class="s">wordpress</span>
<span class="na">tier</span><span class="pi">:</span> <span class="s">frontend</span>
<span class="na">strategy</span><span class="pi">:</span>
<span class="na">type</span><span class="pi">:</span> <span class="s">Recreate</span>
<span class="na">template</span><span class="pi">:</span>
<span class="na">metadata</span><span class="pi">:</span>
<span class="na">labels</span><span class="pi">:</span>
<span class="na">app</span><span class="pi">:</span> <span class="s">wordpress</span>
<span class="na">tier</span><span class="pi">:</span> <span class="s">frontend</span>
<span class="na">spec</span><span class="pi">:</span>
<span class="na">containers</span><span class="pi">:</span>
<span class="pi">-</span> <span class="na">image</span><span class="pi">:</span> <span class="s">wordpress:4.8-apache</span>
<span class="na">name</span><span class="pi">:</span> <span class="s">wordpress</span>
<span class="na">env</span><span class="pi">:</span>
<span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">WORDPRESS_DB_HOST</span>
<span class="na">value</span><span class="pi">:</span> <span class="s">wordpress-mysql</span>
<span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">WORDPRESS_DB_PASSWORD</span>
<span class="na">valueFrom</span><span class="pi">:</span>
<span class="na">secretKeyRef</span><span class="pi">:</span>
<span class="na">name</span><span class="pi">:</span> <span class="s">mysql-pass</span>
<span class="na">key</span><span class="pi">:</span> <span class="s">password</span>
<span class="na">ports</span><span class="pi">:</span>
<span class="pi">-</span> <span class="na">containerPort</span><span class="pi">:</span> <span class="s">80</span>
<span class="na">name</span><span class="pi">:</span> <span class="s">wordpress</span>
<span class="na">volumeMounts</span><span class="pi">:</span>
<span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">wordpress-persistent-storage</span>
<span class="na">mountPath</span><span class="pi">:</span> <span class="s">/var/www/html</span>
<span class="na">volumes</span><span class="pi">:</span>
<span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">wordpress-persistent-storage</span>
<span class="na">persistentVolumeClaim</span><span class="pi">:</span>
<span class="na">claimName</span><span class="pi">:</span> <span class="s">wp-pv-claim</span>
</code></pre></div></div>
<p>Run the following command to deploy the MySQL and WordPress images.</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>kubectl create <span class="nt">-f</span> wordpress-deployment.yaml
</code></pre></div></div>
<p>Confirm that the pods are up and running.</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>kubectl get pods
</code></pre></div></div>
<p>You should see something like this:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>NAME READY STATUS RESTARTS AGE
nfs-server 1/1 Running 0 2h
wordpress-f4dcfdf45-4rkgs 1/1 Running 0 1m
wordpress-mysql-7bdd6d857c-fvgqx 1/1 Running 0 1m
</code></pre></div></div>
<p>It may take a few minutes for both pods to enter the <code class="highlighter-rouge">Running</code> state.</p>
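<p>If you want to block until the deployments finish rolling out, <code class="highlighter-rouge">kubectl rollout status</code> is a convenient alternative to polling <code class="highlighter-rouge">kubectl get pods</code>:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>kubectl rollout status deployment/wordpress-mysql
kubectl rollout status deployment/wordpress
</code></pre></div></div>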
<h2 id="inspect-the-deployment">Inspect the deployment</h2>
<p>The WordPress deployment is ready to go. You can see it in action by opening
a web browser on the URL of the WordPress service. The easiest way to get the
URL is to open the UCP web UI, navigate to the Kubernetes <strong>Load Balancers</strong>
page, and click the <strong>wordpress</strong> service. In the details pane, the URL is
listed in the <strong>Ports</strong> section.</p>
<p><img src="../../images/use-nfs-volume-1.png" alt="" class="with-border" /></p>
<p>You can also get the URL from the command line.</p>
<p>On any node in the cluster, run the following command to get the IP addresses
that are assigned to the current node.</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>
docker node inspect <span class="nt">--format</span> <span class="s1">'{{ index .Spec.Labels "com.docker.ucp.SANs" }}'</span> &lt;node-id&gt;
</code></pre></div></div>
<p>You should see a list of IP addresses, like this:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>172.31.36.167,jg-latest-ubuntu-0,127.0.0.1,172.17.0.1,54.213.225.17
</code></pre></div></div>
<p>One of these corresponds to the external node IP address. Look for an address
that’s not in the <code class="highlighter-rouge">192.*</code>, <code class="highlighter-rouge">127.*</code>, or <code class="highlighter-rouge">172.*</code> ranges. In the current example,
the IP address is <code class="highlighter-rouge">54.213.225.17</code>.</p>
<p>The WordPress web UI is served through a <code class="highlighter-rouge">NodePort</code>, which you get with this
command:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>kubectl describe svc wordpress | <span class="nb">grep </span>NodePort
</code></pre></div></div>
<p>This returns something like the following:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>NodePort: &lt;unset&gt; 34746/TCP
</code></pre></div></div>
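<p>Alternatively, you can extract just the port number with a JSONPath query, which is easier to use in scripts:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>kubectl get svc wordpress -o jsonpath='{.spec.ports[0].nodePort}'
</code></pre></div></div>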
<p>Put the two together to get the URL for the WordPress service:
<code class="highlighter-rouge">http://&lt;node-ip&gt;:&lt;node-port&gt;</code>.</p>
<p>For this example, the URL is <code class="highlighter-rouge">http://54.213.225.17:34746</code>.</p>
<p><img src="../../images/use-nfs-volume-2.png" alt="" class="with-border" /></p>
<h2 id="write-a-blog-post-to-use-the-storage">Write a blog post to use the storage</h2>
<p>Open the URL for the WordPress service and follow the instructions for
installing WordPress. In this example, the blog is named “NFS Volumes”.</p>
<p><img src="../../images/use-nfs-volume-3.png" alt="" class="with-border" /></p>
<p>Create a new blog post and publish it.</p>
<p><img src="../../images/use-nfs-volume-4.png" alt="" class="with-border" /></p>
<p>Click the <strong>permalink</strong> to view the site.</p>
<p><img src="../../images/use-nfs-volume-5.png" alt="" class="with-border" /></p>
<h2 id="where-to-go-next">Where to go next</h2>
<ul>
<li><a href="https://github.com/kubernetes/examples/tree/master/staging/volumes/nfs#nfs-server-part">Example of NFS based persistent volume</a></li>
<li><a href="https://v1-8.docs.kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/">Example: Deploying WordPress and MySQL with Persistent Volumes</a></li>
</ul>


@ -0,0 +1,47 @@
<p>Docker Universal Control Plane can use your local networking drivers to
orchestrate your cluster. You can create a <em>config</em> network with a driver like
MAC VLAN, and use it like any other named network in UCP. If it’s set up
as attachable, you can attach containers to it.</p>
<blockquote>
<p>Security</p>
<p>Encrypting communication between containers on different nodes works only on
overlay networks.</p>
</blockquote>
<h2 id="use-ucp-to-create-node-specific-networks">Use UCP to create node-specific networks</h2>
<p>Always use UCP to create node-specific networks. You can use the UCP web UI
or the CLI (with an admin bundle). If you create the networks without UCP,
the networks wont have the right access labels and wont be available in UCP.</p>
<h2 id="create-a-mac-vlan-network">Create a MAC VLAN network</h2>
<ol>
<li>Log in as an administrator.</li>
<li>Navigate to <strong>Networks</strong> and click <strong>Create Network</strong>.</li>
<li>Name the network “macvlan”.</li>
<li>In the <strong>Driver</strong> dropdown, select <strong>Macvlan</strong>.</li>
<li>
<p>In the <strong>Macvlan Configure</strong> section, select the configuration option you
want to use. Create all of the config-only networks before you create the config-from
network.</p>
<ul>
<li><strong>Config Only</strong>: Prefix the <code class="highlighter-rouge">config-only</code> network name with a node hostname
prefix, like <code class="highlighter-rouge">node1/my-cfg-network</code>, <code class="highlighter-rouge">node2/my-cfg-network</code>, <em>etc</em>. This is
necessary to ensure that the access labels are applied consistently to all of
the back-end config-only networks. UCP routes the config-only network creation
to the appropriate node based on the node hostname prefix. All config-only
networks with the same name must belong to the same collection, or UCP returns
an error. Leaving the access label empty puts the network in the admin’s default
collection, which is <code class="highlighter-rouge">/</code> in a new UCP installation.</li>
<li><strong>Config From</strong>: Create the network from a Docker config. Dont set up an
access label for the config-from network. The labels of the network and its
collection placement are inherited from the related config-only networks.</li>
</ul>
</li>
<li>Click <strong>Create</strong> to create the network.</li>
</ol>
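<p>For reference, the equivalent CLI workflow looks like the following sketch. The subnet, parent interface, and network names are examples only; run the commands with an admin client bundle loaded:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Create one config-only network per node, prefixed with the node hostname
docker network create --config-only --subnet 192.168.10.0/24 -o parent=eth0 node1/my-cfg-network
docker network create --config-only --subnet 192.168.10.0/24 -o parent=eth0 node2/my-cfg-network

# Create the config-from network that references the config-only networks
docker network create -d macvlan --scope swarm --config-from my-cfg-network my-macvlan-net
</code></pre></div></div>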


@ -0,0 +1,139 @@
<p>This document provides a minimal example of setting up Docker Content Trust (DCT) in
Universal Control Plane (UCP) for use with a Continuous Integration (CI) system. It
covers setting up the necessary accounts and trust delegations so that only images
built by your CI system can be deployed to your UCP-managed cluster.</p>
<h2 id="set-up-ucp-accounts-and-teams">Set up UCP accounts and teams</h2>
<p>The first step is to create a user account for your CI system. For the purposes of
this document we will assume you are using Jenkins as your CI system and will therefore
name the account “jenkins”. As an admin user logged in to UCP, navigate to “User Management”
and select “Add User”. Create a user with the name “jenkins” and set a strong password.</p>
<p>Next, create a team called “CI” and add the “jenkins” user to this team. All signing
policy is team based, so if we want only a single user to be able to sign images
destined to be deployed on the cluster, we must create a team for this one user.</p>
<h2 id="set-up-the-signing-policy">Set up the signing policy</h2>
<p>While still logged in as an admin, navigate to “Admin Settings” and select the “Content Trust”
subsection. Select the checkbox to enable content trust and in the select box that appears,
select the “CI” team we have just created. Save the settings.</p>
<p>This policy requires that every image referenced in a <code class="highlighter-rouge">docker image pull</code>,
<code class="highlighter-rouge">docker container run</code>, or <code class="highlighter-rouge">docker service create</code> be signed by a key corresponding
to a member of the “CI” team. In this case, the only member is the “jenkins” user.</p>
<h2 id="create-keys-for-the-jenkins-user">Create keys for the Jenkins user</h2>
<p>The signing policy implementation uses the certificates issued in user client bundles
to connect a signature to a user. Using an incognito browser window (or otherwise),
log in to the “jenkins” user account you created earlier. Download a client bundle for
this user. It is also recommended to change the description associated with the public
key stored in UCP so that you can later identify which key is being used for
signing.</p>
<p>Each time a user retrieves a new client bundle, a new keypair is generated. It is therefore
necessary to keep track of a specific bundle that a user chooses to designate as their signing bundle.</p>
<p>Once you have decompressed the client bundle, the only two files you need for the purposes
of signing are <code class="highlighter-rouge">cert.pem</code> and <code class="highlighter-rouge">key.pem</code>. These represent the public and private parts of
the users signing identity respectively. We will load the <code class="highlighter-rouge">key.pem</code> file onto the Jenkins
servers, and use <code class="highlighter-rouge">cert.pem</code> to create delegations for the “jenkins” user in our
Trusted Collection.</p>
<h2 id="prepare-the-jenkins-server">Prepare the Jenkins server</h2>
<h3 id="load-keypem-on-jenkins">Load <code class="highlighter-rouge">key.pem</code> on Jenkins</h3>
<p>You will need to use the notary client to load keys onto your Jenkins server. Simply run
<code class="highlighter-rouge">notary -d /path/to/.docker/trust key import /path/to/key.pem</code>. You will be asked to set
a password to encrypt the key on disk. For automated signing, this password can be configured
into the environment under the variable name <code class="highlighter-rouge">DOCKER_CONTENT_TRUST_REPOSITORY_PASSPHRASE</code>. The <code class="highlighter-rouge">-d</code>
flag to the command specifies the path to the <code class="highlighter-rouge">trust</code> subdirectory within the servers <code class="highlighter-rouge">docker</code>
configuration directory. Typically this is found at <code class="highlighter-rouge">~/.docker/trust</code>.</p>
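<p>For example, on the Jenkins host (the paths and passphrase are placeholders):</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>notary -d ~/.docker/trust key import /path/to/key.pem
export DOCKER_CONTENT_TRUST_REPOSITORY_PASSPHRASE='your-passphrase'
</code></pre></div></div>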
<h3 id="enable-content-trust">Enable content trust</h3>
<p>There are two ways to enable content trust: globally, and per operation. To enable content
trust globally, set the environment variable <code class="highlighter-rouge">DOCKER_CONTENT_TRUST=1</code>. To enable it on a
per-operation basis, wherever you run <code class="highlighter-rouge">docker image push</code> in your Jenkins scripts, add the flag
<code class="highlighter-rouge">--disable-content-trust=false</code>. You may wish to use this second option if you only want
to sign some images.</p>
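<p>As a sketch, both options look like this (the image name is a placeholder):</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Globally, for every command in this shell
export DOCKER_CONTENT_TRUST=1

# Per operation
docker image push --disable-content-trust=false my_dtr:4443/ci/my_image:latest
</code></pre></div></div>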
<p>The Jenkins server is now prepared to sign images, but we need to create delegations referencing
the key to give it the necessary permissions.</p>
<h2 id="initialize-a-repository">Initialize a repository</h2>
<p>Any commands displayed in this section should <em>not</em> be run from the Jenkins server. You
will most likely want to run them from your local system.</p>
<p>If this is a new repository, create it in Docker Trusted Registry (DTR) or Docker Hub,
depending on which you use to store your images, before proceeding further.</p>
<p>We will now initialize the trust data and create the delegation that provides the Jenkins
key with permissions to sign content. The following commands initialize the trust data and
rotate snapshotting responsibilities to the server. This is necessary to ensure human involvement
is not required to publish new content.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>notary -s https://my_notary_server.com -d ~/.docker/trust init my_repository
notary -s https://my_notary_server.com -d ~/.docker/trust key rotate my_repository snapshot -r
notary -s https://my_notary_server.com -d ~/.docker/trust publish my_repository
</code></pre></div></div>
<p>The <code class="highlighter-rouge">-s</code> flag specifies the server hosting a notary service. If you are operating against
Docker Hub, this is <code class="highlighter-rouge">https://notary.docker.io</code>. If you are operating against your own DTR
instance, this is the hostname you use in image names when running docker commands, preceded
by the <code class="highlighter-rouge">https://</code> scheme. For example, if you would run <code class="highlighter-rouge">docker image push my_dtr:4443/me/an_image</code>, the value
of the <code class="highlighter-rouge">-s</code> flag would be <code class="highlighter-rouge">https://my_dtr:4443</code>.</p>
<p>If you are using DTR, the name of the repository should be identical to the full name you use
in a <code class="highlighter-rouge">docker image push</code> command. If, however, you use Docker Hub, the name you use in a <code class="highlighter-rouge">docker image push</code>
must be preceded by <code class="highlighter-rouge">docker.io/</code>. For example, if you run <code class="highlighter-rouge">docker image push me/alpine</code>, you would run
<code class="highlighter-rouge">notary init docker.io/me/alpine</code>.</p>
<p>For brevity, we will exclude the <code class="highlighter-rouge">-s</code> and <code class="highlighter-rouge">-d</code> flags from subsequent commands, but be aware that you
will still need to provide them for the commands to work correctly.</p>
<p>Now that the repository is initialized, we need to create the delegations for Jenkins. Docker
Content Trust treats a delegation role called <code class="highlighter-rouge">targets/releases</code> specially. It considers this
delegation to contain the canonical list of published images for the repository. It is therefore
generally desirable to add all users to this delegation with the following command:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>notary delegation add my_repository targets/releases --all-paths /path/to/cert.pem
</code></pre></div></div>
<p>This solves a number of prioritization problems that would result from needing to determine
which delegation should ultimately be trusted for a specific image. However, because it
is anticipated that any user will be able to sign the <code class="highlighter-rouge">targets/releases</code> role, it is not trusted
when determining whether a signing policy has been met. Therefore it is also necessary to create a
delegation specifically for Jenkins:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>notary delegation add my_repository targets/jenkins --all-paths /path/to/cert.pem
</code></pre></div></div>
<p>We will then publish both these updates (remember to add the correct <code class="highlighter-rouge">-s</code> and <code class="highlighter-rouge">-d</code> flags):</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>notary publish my_repository
</code></pre></div></div>
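<p>To verify that both delegations exist, you can list them (again with the correct <code class="highlighter-rouge">-s</code> and <code class="highlighter-rouge">-d</code> flags):</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>notary delegation list my_repository
</code></pre></div></div>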
<p>Informational (Advanced): If we included the <code class="highlighter-rouge">targets/releases</code> role in determining if a signing policy
had been met, we would run into the situation of images being opportunistically deployed when
an appropriate user signs. In the scenario we have described so far, only images signed by
the “CI” team (containing only the “jenkins” user) should be deployable. If a user “Moby” could
also sign images but was not part of the “CI” team, they might sign and publish a new <code class="highlighter-rouge">targets/releases</code>
that contained their image. UCP would refuse to deploy this image because it was not signed
by the “CI” team. However, the next time Jenkins published an image, it would update and sign
the <code class="highlighter-rouge">targets/releases</code> role as a whole, enabling “Moby” to deploy their image.</p>
<h2 id="conclusion">Conclusion</h2>
<p>With the Trusted Collection initialized, and delegations created, the Jenkins server will
now use the key we imported to sign any images we push to this repository.</p>
<p>Through either the Docker CLI, or the UCP browser interface, we will find that any images
that do not meet our signing policy cannot be used. The signing policy we set up requires
that the “CI” team must have signed any image we attempt to <code class="highlighter-rouge">docker image pull</code>, <code class="highlighter-rouge">docker container run</code>,
or <code class="highlighter-rouge">docker service create</code>, and the only member of that team is the “jenkins” user. This
restricts us to only running images that were published by our Jenkins CI system.</p>


@ -0,0 +1,57 @@
<p>All UCP services are exposed using HTTPS, to ensure all communications between
clients and UCP are encrypted. By default, this is done using self-signed TLS
certificates that are not trusted by client tools like web browsers. So when
you try to access UCP, your browser warns that it doesnt trust UCP or that
UCP has an invalid certificate.</p>
<p><img src="../../images/use-externally-signed-certs-1.png" alt="invalid certificate" /></p>
<p>The same happens with other client tools.</p>
<pre><code class="language-none">$ curl https://ucp.example.org
SSL certificate problem: Invalid certificate chain
</code></pre>
<p>You can configure UCP to use your own TLS certificates, so that it is
automatically trusted by your browser and client tools.</p>
<p>To ensure minimal impact to your business, you should plan for this change to
happen outside peak business hours. Your applications will continue running
normally, but existing UCP client certificates will become invalid, so users
will have to download new ones to <a href="../../user-access/cli.md">access UCP from the CLI</a>.</p>
<h2 id="configure-ucp-to-use-your-own-tls-certificates-and-keys">Configure UCP to use your own TLS certificates and keys</h2>
<p>In the UCP web UI, log in with administrator credentials and
navigate to the <strong>Admin Settings</strong> page.</p>
<p>In the left pane, click <strong>Certificates</strong>.</p>
<p><img src="../../images/use-externally-signed-certs-2.png" alt="" /></p>
<p>Upload your certificates and keys:</p>
<ul>
<li>A <code class="highlighter-rouge">ca.pem</code> file with the root CA public certificate.</li>
<li>A <code class="highlighter-rouge">cert.pem</code> file with the TLS certificate for your domain and any intermediate public
certificates, in this order.</li>
<li>A <code class="highlighter-rouge">key.pem</code> file with the TLS private key. Make sure it is not encrypted with a password
(encrypted keys have <code class="highlighter-rouge">ENCRYPTED</code> in the first line).</li>
</ul>
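<p>Before uploading, you can sanity-check the files with <code class="highlighter-rouge">openssl</code>. The following sketch assumes an RSA key; if the two modulus digests match, the certificate and key belong together:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code># The two digests must be identical
openssl x509 -noout -modulus -in cert.pem | openssl md5
openssl rsa -noout -modulus -in key.pem | openssl md5

# The first line must not contain the word ENCRYPTED
head -1 key.pem
</code></pre></div></div>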
<p>Finally, click <strong>Save</strong> for the changes to take effect.</p>
<p>After replacing the TLS certificates, your users wont be able to authenticate
with their old client certificate bundles. Ask your users to go to the UCP
web UI and <a href="../../user-access/cli.md">get new client certificate bundles</a>.</p>
<p>If you deployed Docker Trusted Registry, youll also need to reconfigure it
to trust the new UCP TLS certificates.
<a href="/reference/dtr/2.5/cli/reconfigure.md">Learn how to configure DTR</a>.</p>
<h2 id="where-to-go-next">Where to go next</h2>
<ul>
<li><a href="../../user-access/cli.md">Access UCP from the CLI</a></li>
</ul>


@ -0,0 +1,121 @@
<p>With Docker Enterprise Edition, administrators can filter the view of
Kubernetes objects by the namespace the objects are assigned to. You can
specify a single namespace, or you can specify all available namespaces.</p>
<h2 id="create-two-namespaces">Create two namespaces</h2>
<p>In this example, you create two Kubernetes namespaces and deploy a service
to both of them.</p>
<ol>
<li>Log in to the UCP web UI with an administrator account.</li>
<li>In the left pane, click <strong>Kubernetes</strong>.</li>
<li>Click <strong>Create</strong> to open the <strong>Create Kubernetes Object</strong> page.</li>
<li>
<p>In the <strong>Object YAML</strong> editor, paste the following YAML.</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="na">apiVersion</span><span class="pi">:</span> <span class="s">v1</span>
<span class="na">kind</span><span class="pi">:</span> <span class="s">Namespace</span>
<span class="na">metadata</span><span class="pi">:</span>
<span class="na">name</span><span class="pi">:</span> <span class="s">blue</span>
<span class="nn">---</span>
<span class="na">apiVersion</span><span class="pi">:</span> <span class="s">v1</span>
<span class="na">kind</span><span class="pi">:</span> <span class="s">Namespace</span>
<span class="na">metadata</span><span class="pi">:</span>
<span class="na">name</span><span class="pi">:</span> <span class="s">green</span>
</code></pre></div> </div>
</li>
<li>Click <strong>Create</strong> to create the <code class="highlighter-rouge">blue</code> and <code class="highlighter-rouge">green</code> namespaces.</li>
</ol>
<p><img src="../../images/view-namespace-resources-1.png" alt="" class="with-border" /></p>
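<p>With a UCP client bundle loaded, you can create the same namespaces from the command line:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>kubectl create namespace blue
kubectl create namespace green
</code></pre></div></div>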
<h2 id="deploy-services">Deploy services</h2>
<p>Create a <code class="highlighter-rouge">NodePort</code> service in the <code class="highlighter-rouge">blue</code> namespace.</p>
<ol>
<li>Navigate to the <strong>Create Kubernetes Object</strong> page.</li>
<li>In the <strong>Namespace</strong> dropdown, select <strong>blue</strong>.</li>
<li>
<p>In the <strong>Object YAML</strong> editor, paste the following YAML.</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="na">apiVersion</span><span class="pi">:</span> <span class="s">v1</span>
<span class="na">kind</span><span class="pi">:</span> <span class="s">Service</span>
<span class="na">metadata</span><span class="pi">:</span>
<span class="na">name</span><span class="pi">:</span> <span class="s">app-service-blue</span>
<span class="na">labels</span><span class="pi">:</span>
<span class="na">app</span><span class="pi">:</span> <span class="s">app-blue</span>
<span class="na">spec</span><span class="pi">:</span>
<span class="na">type</span><span class="pi">:</span> <span class="s">NodePort</span>
<span class="na">ports</span><span class="pi">:</span>
<span class="pi">-</span> <span class="na">port</span><span class="pi">:</span> <span class="s">80</span>
<span class="na">nodePort</span><span class="pi">:</span> <span class="s">32768</span>
<span class="na">selector</span><span class="pi">:</span>
<span class="na">app</span><span class="pi">:</span> <span class="s">app-blue</span>
</code></pre></div> </div>
</li>
<li>
<p>Click <strong>Create</strong> to deploy the service in the <code class="highlighter-rouge">blue</code> namespace.</p>
</li>
<li>
<p>Repeat the previous steps with the following YAML, but this time,
select <code class="highlighter-rouge">green</code> from the <strong>Namespace</strong> dropdown.</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="na">apiVersion</span><span class="pi">:</span> <span class="s">v1</span>
<span class="na">kind</span><span class="pi">:</span> <span class="s">Service</span>
<span class="na">metadata</span><span class="pi">:</span>
<span class="na">name</span><span class="pi">:</span> <span class="s">app-service-green</span>
<span class="na">labels</span><span class="pi">:</span>
<span class="na">app</span><span class="pi">:</span> <span class="s">app-green</span>
<span class="na">spec</span><span class="pi">:</span>
<span class="na">type</span><span class="pi">:</span> <span class="s">NodePort</span>
<span class="na">ports</span><span class="pi">:</span>
<span class="pi">-</span> <span class="na">port</span><span class="pi">:</span> <span class="s">80</span>
<span class="na">nodePort</span><span class="pi">:</span> <span class="s">32769</span>
<span class="na">selector</span><span class="pi">:</span>
<span class="na">app</span><span class="pi">:</span> <span class="s">app-green</span>
</code></pre></div> </div>
</li>
</ol>
<h2 id="view-services">View services</h2>
<p>Currently, the <strong>Namespaces</strong> view is set to the <strong>default</strong> namespace, so the
<strong>Load Balancers</strong> page doesnt show your services.</p>
<ol>
<li>In the left pane, click <strong>Namespaces</strong> to open the list of namespaces.</li>
<li>In the upper-right corner, click the <strong>Set context for all namespaces</strong>
toggle and click <strong>Confirm</strong>. The indicator in the left pane changes to <strong>All Namespaces</strong>.</li>
<li>Click <strong>Load Balancers</strong> to view your services.</li>
</ol>
<p><img src="../../images/view-namespace-resources-2.png" alt="" class="with-border" /></p>
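<p>From the command line, the equivalent of setting the context is scoping <code class="highlighter-rouge">kubectl</code> with a namespace flag:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Services in a single namespace
kubectl get services -n blue
kubectl get services -n green

# Services across all namespaces
kubectl get services --all-namespaces
</code></pre></div></div>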
<h2 id="filter-the-view-by-namespace">Filter the view by namespace</h2>
<p>With the <strong>Set context for all namespaces</strong> toggle set, you see all of the
Kubernetes objects in every namespace. Now filter the view to show only
objects in one namespace.</p>
<ol>
<li>In the left pane, click <strong>Namespaces</strong> to open the list of namespaces.</li>
<li>
<p>In the <strong>green</strong> namespace, click the <strong>More options</strong> icon and in the
context menu, select <strong>Set Context</strong>.</p>
<p><img src="../../images/view-namespace-resources-3.png" alt="" class="with-border" /></p>
</li>
<li>Click <strong>Confirm</strong> to set the context to the <code class="highlighter-rouge">green</code> namespace.
The indicator in the left pane changes to <strong>green</strong>.</li>
<li>Click <strong>Load Balancers</strong> to view your <code class="highlighter-rouge">app-service-green</code> service.
The <code class="highlighter-rouge">app-service-blue</code> service doesnt appear.</li>
</ol>
<p><img src="../../images/view-namespace-resources-4.png" alt="" class="with-border" /></p>
<p>To view the <code class="highlighter-rouge">app-service-blue</code> service, repeat the previous steps, but this
time, select <strong>Set Context</strong> on the <strong>blue</strong> namespace.</p>
<p><img src="../../images/view-namespace-resources-5.png" alt="" class="with-border" /></p>


@ -67,8 +67,8 @@ To enable SAML authentication:
![Configuring IdP values for SAML in UCP](../../images/saml_settings.png)
5. In **IdP Metadata URL** enter the URL for the identity provider's metadata.
-6. If the metadata URL is publicly certified, you can leave **Skip TLS Verification** unchecked and **Root Certificates Bundle** blank, which is the default. If the metadata URL is NOT certified, you must provide the certificates from the identity provider in the **Root Certificates Bundle** field whether or not you check **Skip TLS Verification**.
-7. In **UCP Host** enter the URL that includes the IP address or domain of your UCP web interface. The current IP address appears by default.
+6. If the metadata URL is publicly certified, you can leave **Skip TLS Verification** unchecked and **Root Certificates Bundle** blank, which is the default. Skipping TLS verification is not recommended in production environments. If the metadata URL cannot be certified by the default certificate authority store, you must provide the certificates from the identity provider in the **Root Certificates Bundle** field whether or not you check **Skip TLS Verification**.
+7. In **UCP Host** enter the URL that includes the IP address or domain of your UCP installation. The port number is optional. The current IP address or domain appears by default.
![Configuring service provider values for SAML in UCP](../../images/saml_settings_2.png)