Merge branch 'master' of github.com:docker/docs-private into dtr-job-logs-681
@@ -1564,7 +1564,7 @@ manuals:
        title: Add SANs to cluster certificates
      - path: /ee/ucp/admin/configure/collect-cluster-metrics/
        title: Collect UCP cluster metrics with Prometheus
      - path: /ee/ucp/admin/configure/configure-rbac-kube/
        title: Configure native Kubernetes role-based access control
      - path: /ee/ucp/admin/configure/create-audit-logs/
        title: Create UCP audit logs

@@ -1733,6 +1733,8 @@ manuals:
        path: /ee/ucp/kubernetes/create-service-account/
      - title: Install a CNI plugin
        path: /ee/ucp/kubernetes/install-cni-plugin/
      - title: Kubernetes network encryption
        path: /ee/ucp/kubernetes/kubernetes-network-encryption/
      - title: API reference
        path: /reference/ucp/3.0/api/
        nosync: true

@@ -25,32 +25,43 @@ Events, logs, and metrics are sources of data that provide observability of your

The Docker EE platform provides a base set of metrics that gets you running and into production without having to rely on external or third-party tools. Docker strongly encourages the use of additional monitoring to provide more comprehensive visibility into your specific Docker environment, but recognizes the need for a basic set of metrics built into the product. The following are examples of these metrics:

## Business metrics ##

These are high-level aggregate metrics that typically combine technical, financial, and organizational data to create metrics for business leaders of the IT infrastructure. Some examples of business metrics might be:

- Company or division-level application downtime
- Aggregate resource utilization
- Application resource demand growth

## Application metrics ##

These metrics are the domain of APM tools like AppDynamics or DynaTrace and describe the state or performance of the application itself.

- Service state metrics
- Container platform metrics
- Host infrastructure metrics

Docker EE 2.1 does not collect or expose application-level metrics. The following are metrics Docker EE 2.1 collects, aggregates, and exposes:

## Service state metrics ##

These are metrics about the state of services running on the container platform. These types of metrics have very low cardinality, meaning the values are typically from a small fixed set of possibilities, commonly binary.

- Application health
- Convergence of Kubernetes deployments and Swarm services
- Cluster load by number of services, containers, or pods

## Host infrastructure metrics ##

These are metrics taken from the software and hardware infrastructure.

- CPU - Container-level CPU utilization, node-level load average
- Memory - RSS, swap
- Network I/O - bandwidth, packets, drops
- Storage I/O - disk I/O, IOPS, capacity
- Operating System - file descriptors, open network connections, number of processes/threads

## Container infrastructure system metrics ##

These are application-level metrics derived from the container platform itself.

- Infrastructure quorum leader - Swarm Raft, etcd, RethinkDB
- UCP component health - Healthy / Unhealthy

@@ -63,34 +74,34 @@ To deploy Prometheus on worker nodes in a cluster:

2. Verify that ucp-metrics pods are running on all managers.

```
$ kubectl -n kube-system get pods -l k8s-app=ucp-metrics -o wide
NAME                READY     STATUS    RESTARTS   AGE       IP              NODE
ucp-metrics-hvkr7   3/3       Running   0          4h        192.168.80.66   3a724a-0
```

3. Add a Kubernetes node label to one or more workers. Here we add a label with the key "ucp-metrics" and an empty value ("").

```
$ kubectl label node 3a724a-1 ucp-metrics=
node "test-3a724a-1" labeled
```
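
Optionally, confirm the label is in place before patching the DaemonSet. This is a quick sanity check using standard kubectl:

```
$ kubectl get node 3a724a-1 --show-labels
```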

4. Patch the ucp-metrics DaemonSet's nodeSelector using the same key and value used for the node label. This example shows the key "ucp-metrics" and the value "".

```
$ kubectl -n kube-system patch daemonset ucp-metrics --type json -p '[{"op": "replace", "path": "/spec/template/spec/nodeSelector", "value": {"ucp-metrics": ""}}]'
daemonset "ucp-metrics" patched
```
5. Observe that ucp-metrics pods are running only on the labeled workers.

```
$ kubectl -n kube-system get pods -l k8s-app=ucp-metrics -o wide
NAME                READY     STATUS        RESTARTS   AGE       IP              NODE
ucp-metrics-88lzx   3/3       Running       0          12s       192.168.83.1    3a724a-1
ucp-metrics-hvkr7   3/3       Terminating   0          4h        192.168.80.66   3a724a-0
```

## Configure external Prometheus to scrape metrics from UCP

@@ -100,117 +111,117 @@ To configure your external Prometheus server to scrape metrics from Prometheus i

2. Create a Kubernetes secret containing your bundle's TLS material.

```
(cd $DOCKER_CERT_PATH && kubectl create secret generic prometheus --from-file=ca.pem --from-file=cert.pem --from-file=key.pem)
```

3. Create a Prometheus deployment and ClusterIP service using YAML as follows.

On AWS with the Kubernetes cloud provider configured, you can replace `ClusterIP` with `LoadBalancer` in the service YAML and then access the service through the load balancer. If running Prometheus external to UCP, change the domain for the inventory container in the Prometheus deployment from `ucp-controller.kube-system.svc.cluster.local` to an external domain, so that UCP is reachable from the Prometheus node.

```
kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus
data:
  prometheus.yaml: |
    global:
      scrape_interval: 10s
    scrape_configs:
    - job_name: 'ucp'
      tls_config:
        ca_file: /bundle/ca.pem
        cert_file: /bundle/cert.pem
        key_file: /bundle/key.pem
        server_name: proxy.local
      scheme: https
      file_sd_configs:
      - files:
        - /inventory/inventory.json
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus
spec:
  replicas: 2
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      containers:
      - name: inventory
        image: alpine
        command: ["sh", "-c"]
        args:
        - apk add --no-cache curl &&
          while :; do
          curl -Ss --cacert /bundle/ca.pem --cert /bundle/cert.pem --key /bundle/key.pem --output /inventory/inventory.json https://ucp-controller.kube-system.svc.cluster.local/metricsdiscovery;
          sleep 15;
          done
        volumeMounts:
        - name: bundle
          mountPath: /bundle
        - name: inventory
          mountPath: /inventory
      - name: prometheus
        image: prom/prometheus
        command: ["/bin/prometheus"]
        args:
        - --config.file=/config/prometheus.yaml
        - --storage.tsdb.path=/prometheus
        - --web.console.libraries=/etc/prometheus/console_libraries
        - --web.console.templates=/etc/prometheus/consoles
        volumeMounts:
        - name: bundle
          mountPath: /bundle
        - name: config
          mountPath: /config
        - name: inventory
          mountPath: /inventory
      volumes:
      - name: bundle
        secret:
          secretName: prometheus
      - name: config
        configMap:
          name: prometheus
      - name: inventory
        emptyDir:
          medium: Memory
---
apiVersion: v1
kind: Service
metadata:
  name: prometheus
spec:
  ports:
  - port: 9090
    targetPort: 9090
  selector:
    app: prometheus
  sessionAffinity: ClientIP
EOF
```

4. Determine the service ClusterIP.

```
$ kubectl get service prometheus
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
prometheus   ClusterIP   10.96.254.107   <none>        9090/TCP   1h
```

5. Forward port 9090 on the local host to the ClusterIP. The tunnel created does not need to be kept alive and is only intended to expose the Prometheus API.

```
ssh -L 9090:10.96.254.107:9090 ANY_NODE
```

6. Visit `http://127.0.0.1:9090` to explore the UCP metrics being collected by Prometheus.
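
You can also query the collected metrics through Prometheus's standard HTTP API over the same tunnel. For example, the generic `up` query (plain Prometheus, not UCP-specific) lists each scrape target and whether its last scrape succeeded:

```
$ curl 'http://127.0.0.1:9090/api/v1/query?query=up'
```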

@@ -21,8 +21,10 @@ You create Kubernetes roles either through the CLI using `kubectl` or through the

To create a Kubernetes role in the UCP web interface:

1. Go to the UCP web interface.

2. Navigate to **Access Control**.

3. In the left-hand menu, select **Roles**.

![](../images/rbac-kube-1.png){: .with-border}

@@ -22,16 +22,16 @@ Docker UCP requires Docker Enterprise Edition. Before installing Docker EE on
your cluster nodes, you should plan for a common hostname strategy.

Decide if you want to use short hostnames, like `engine01`, or Fully Qualified
Domain Names (FQDN), like `node01.company.example.com`. Whichever you choose,
confirm your naming strategy is consistent across the cluster, because
Docker Engine and UCP use hostnames.

For example, if your cluster has three hosts, you can name them:

```none
node1.company.example.com
node2.company.example.com
node3.company.example.com
```

## Static IP addresses

@@ -42,7 +42,11 @@ this.

## Avoid IP range conflicts

Swarm uses a default address pool of `10.0.0.0/16` for its overlay networks. If this conflicts with your current network implementation, use a custom IP address pool. To specify a custom IP address pool, use the `--default-address-pool` command line option during [Swarm initialization](../../../../engine/swarm/swarm-mode.md).

**NOTE:** Currently, the UCP installation process does not support this flag. To deploy with a custom IP pool, Swarm must first be installed using this flag and UCP must be installed on top of it.

Kubernetes uses a default cluster IP pool for pods of `192.168.0.0/16`. If it conflicts with your current networks, use a custom IP pool by specifying `--pod-cidr` during UCP installation.
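
For illustration, a sketch of both options together. The `10.85.0.0/16` and `172.31.0.0/16` subnets and the `3.0.0` image tag are placeholders; substitute ranges that are actually free in your environment:

```
# Initialize Swarm with a custom overlay address pool.
docker swarm init --default-address-pool 10.85.0.0/16

# Then install UCP on top of it, with a custom Kubernetes pod CIDR.
docker container run --rm -it --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp:3.0.0 install --pod-cidr 172.31.0.0/16 --interactive
```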

## Time synchronization

@@ -0,0 +1,55 @@

<p>To enable LDAP in UCP and sync to your LDAP directory:</p>

<ol>
<li>Click <strong>Admin Settings</strong> under your username drop-down.</li>
<li>Click <strong>Authentication &amp; Authorization</strong>.</li>
<li>Scroll down and click <code class="highlighter-rouge">Yes</code> next to <strong>LDAP Enabled</strong>. A list of LDAP settings displays.</li>
<li>Input values to match your LDAP server installation.</li>
<li>Test your configuration in UCP.</li>
<li>Manually create teams in UCP to mirror those in LDAP.</li>
<li>Click <strong>Sync Now</strong>.</li>
</ol>

<p>If Docker EE is configured to sync users with your organization’s LDAP directory
server, you can enable syncing the new team’s members when creating a new team
or when modifying settings of an existing team.</p>

<p>For more, see: <a href="../admin/configure/external-auth/index.md">Integrate with an LDAP Directory</a>.</p>

<p><img src="../images/create-and-manage-teams-5.png" alt="" class="with-border" /></p>

<h2 id="binding-to-the-ldap-server">Binding to the LDAP server</h2>

<p>There are two methods for matching group members from an LDAP directory: direct
bind and search bind.</p>

<p>Select <strong>Immediately Sync Team Members</strong> to run an LDAP sync operation
immediately after saving the configuration for the team. It may take a moment
before the members of the team are fully synced.</p>
<h3 id="match-group-members-direct-bind">Match Group Members (Direct Bind)</h3>

<p>This option specifies that team members should be synced directly with members
of a group in your organization’s LDAP directory. The team’s membership will be
synced to match the membership of the group.</p>

<ul>
<li><strong>Group DN</strong>: The distinguished name of the group from which to select users.</li>
<li><strong>Group Member Attribute</strong>: The value of this group attribute corresponds to
the distinguished names of the members of the group.</li>
</ul>

<h3 id="match-search-results-search-bind">Match Search Results (Search Bind)</h3>

<p>This option specifies that team members should be synced using a search query
against your organization’s LDAP directory. The team’s membership will be
synced to match the users in the search results.</p>

<ul>
<li><strong>Search Base DN</strong>: Distinguished name of the node in the directory tree where
the search should start looking for users.</li>
<li><strong>Search Filter</strong>: Filter to find users. If null, existing users in the search
scope are added as members of the team.</li>
<li><strong>Search subtree</strong>: Searches through the full LDAP tree, not just one
level, starting at the Base DN.</li>
</ul>
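
<p>For example, a search filter that selects person entries in a hypothetical <code class="highlighter-rouge">ops</code> group might look like the following. The DN is a placeholder, and <code class="highlighter-rouge">memberOf</code> is an Active Directory-style attribute; adjust both for your directory schema:</p>

<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>(&amp;(objectClass=person)(memberOf=cn=ops,ou=groups,dc=example,dc=com))
</code></pre></div></div>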

@@ -0,0 +1,106 @@

<p>Users, teams, and organizations are referred to as subjects in Docker EE.</p>

<p>Individual users can belong to one or more teams but each team can only be in
one organization. At the fictional startup, Acme Company, all teams in the
organization are necessarily unique but the user, Alex, is on two teams:</p>

<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>acme-datacenter
├── dba
│   └── Alex*
├── dev
│   └── Bett
└── ops
    ├── Alex*
    └── Chad
</code></pre></div></div>

<h2 id="authentication">Authentication</h2>

<p>All users are authenticated on the backend. Docker EE provides built-in
authentication and also integrates with LDAP directory services.</p>

<p>To use Docker EE’s built-in authentication, you must <a href="#create-users-manually">create users manually</a>.</p>

<blockquote>
<p>To enable LDAP and authenticate and synchronize UCP users and teams with your
organization’s LDAP directory, see:</p>
<ul>
<li><a href="create-teams-with-ldap.md">Synchronize users and teams with LDAP in the UI</a></li>
<li><a href="../admin/configure/external-auth/index.md">Integrate with an LDAP Directory</a>.</li>
</ul>
</blockquote>

<h2 id="build-an-organization-architecture">Build an organization architecture</h2>

<p>The general flow of designing an organization with teams in UCP is:</p>

<ol>
<li>Create an organization.</li>
<li>Add users or enable LDAP (for syncing users).</li>
<li>Create teams under the organization.</li>
<li>Add users to teams manually or sync with LDAP.</li>
</ol>

<h3 id="create-an-organization-with-teams">Create an organization with teams</h3>

<p>To create an organization in UCP:</p>

<ol>
<li>Click <strong>Organization &amp; Teams</strong> under <strong>User Management</strong>.</li>
<li>Click <strong>Create Organization</strong>.</li>
<li>Input the organization name.</li>
<li>Click <strong>Create</strong>.</li>
</ol>

<p>To create teams in the organization:</p>

<ol>
<li>Click the organization name.</li>
<li>Click <strong>Create Team</strong>.</li>
<li>Input a team name (and optionally a description).</li>
<li>Click <strong>Create</strong>.</li>
<li>Add existing users to the team. To sync LDAP users, see: <a href="../admin/configure/external-auth/index.md">Integrate with an LDAP Directory</a>.
<ul>
<li>Click the team name and select <strong>Actions</strong> &gt; <strong>Add Users</strong>.</li>
<li>Check the users to include and click <strong>Add Users</strong>.</li>
</ul>
</li>
</ol>

<blockquote>
<p><strong>Note</strong>: To sync teams with groups in an LDAP server, see <a href="create-teams-with-ldap.md">Sync Teams with LDAP</a>.</p>
</blockquote>

<h3 id="create-users-manually">Create users manually</h3>

<p>New users are assigned a default permission level so that they can access the
cluster. To extend a user’s default permissions, add them to a team and <a href="grant-permissions.md">create grants</a>. You can optionally grant them Docker EE
administrator permissions.</p>

<p>To manually create users in UCP:</p>

<ol>
<li>Click <strong>Users</strong> under <strong>User Management</strong>.</li>
<li>Click <strong>Create User</strong>.</li>
<li>Input username, password, and full name.</li>
<li>Click <strong>Create</strong>.</li>
<li>Optionally, check “Is a Docker EE Admin” to give the user administrator
privileges.</li>
</ol>

<blockquote>
<p>A <code class="highlighter-rouge">Docker EE Admin</code> can grant users permission to change the cluster
configuration and manage grants, roles, and resource sets.</p>
</blockquote>

<p><img src="../images/ucp_usermgmt_users_create01.png" alt="" class="with-border" />
<img src="../images/ucp_usermgmt_users_create02.png" alt="" class="with-border" /></p>

<h2 id="where-to-go-next">Where to go next</h2>

<ul>
<li><a href="create-teams-with-ldap.md">Synchronize teams with LDAP</a></li>
<li><a href="define-roles.md">Define roles with authorized API operations</a></li>
<li><a href="group-resources.md">Group and isolate cluster resources</a></li>
<li><a href="grant-permissions.md">Grant role-access to cluster resources</a></li>
</ul>

@@ -0,0 +1,77 @@

<p>A role defines a set of API operations permitted against a resource set.
You apply roles to users and teams by creating grants.</p>

<p><img src="../images/permissions-ucp.svg" alt="Diagram showing UCP permission levels" /></p>

<h2 id="default-roles">Default roles</h2>

<p>You can define custom roles or use the following built-in roles:</p>

<table>
<thead>
<tr>
<th style="text-align: left">Built-in role</th>
<th style="text-align: left">Description</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">None</code></td>
<td style="text-align: left">Users have no access to Swarm or Kubernetes resources. Maps to <code class="highlighter-rouge">No Access</code> role in UCP 2.1.x.</td>
</tr>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">View Only</code></td>
<td style="text-align: left">Users can view resources but can’t create them.</td>
</tr>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">Restricted Control</code></td>
<td style="text-align: left">Users can view and edit resources but can’t run a service or container in a way that affects the node where it’s running. Users <em>cannot</em> mount a node directory, <code class="highlighter-rouge">exec</code> into containers, or run containers in privileged mode or with additional kernel capabilities.</td>
</tr>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">Scheduler</code></td>
<td style="text-align: left">Users can view nodes (worker and manager) and schedule (not view) workloads on these nodes. By default, all users are granted the <code class="highlighter-rouge">Scheduler</code> role against the <code class="highlighter-rouge">/Shared</code> collection. (To view workloads, users need permissions such as <code class="highlighter-rouge">Container View</code>).</td>
</tr>
<tr>
<td style="text-align: left"><code class="highlighter-rouge">Full Control</code></td>
<td style="text-align: left">Users can view and edit all granted resources. They can create containers without any restriction, but can’t see the containers of other users.</td>
</tr>
</tbody>
</table>

<h2 id="create-a-custom-role">Create a custom role</h2>

<p>The <strong>Roles</strong> page lists all default and custom roles applicable in the
organization.</p>

<p>You can give a role a global name, such as “Remove Images”, which might enable the
<strong>Remove</strong> and <strong>Force Remove</strong> operations for images. You can apply a role with
the same name to different resource sets.</p>

<ol>
<li>Click <strong>Roles</strong> under <strong>User Management</strong>.</li>
<li>Click <strong>Create Role</strong>.</li>
<li>Input the role name on the <strong>Details</strong> page.</li>
<li>Click <strong>Operations</strong>. All available API operations are displayed.</li>
<li>Select the permitted operations per resource type.</li>
<li>Click <strong>Create</strong>.</li>
</ol>

<p><img src="../images/custom-role-30.png" alt="" class="with-border" /></p>

<blockquote>
<p><strong>Some important rules regarding roles</strong>:</p>
<ul>
<li>Roles are always enabled.</li>
<li>Roles can’t be edited. To edit a role, you must delete and recreate it.</li>
<li>Roles used within a grant can be deleted only after first deleting the grant.</li>
<li>Only administrators can create and delete roles.</li>
</ul>
</blockquote>

<h2 id="where-to-go-next">Where to go next</h2>

<ul>
<li><a href="create-users-and-teams-manually.md">Create and configure users and teams</a></li>
<li><a href="group-resources.md">Group and isolate cluster resources</a></li>
<li><a href="grant-permissions.md">Grant role-access to cluster resources</a></li>
</ul>

@@ -0,0 +1,191 @@

<p>This tutorial explains how to deploy an NGINX web server and limit access to one
team with role-based access control (RBAC).</p>

<h2 id="scenario">Scenario</h2>

<p>You are the Docker EE system administrator at Acme Company and need to configure
permissions to company resources. The best way to do this is to:</p>

<ul>
<li>Build the organization with teams and users.</li>
<li>Define roles with allowable operations per resource types, like
permission to run containers.</li>
<li>Create collections or namespaces for accessing actual resources.</li>
<li>Create grants that join team + role + resource set.</li>
</ul>

<h2 id="build-the-organization">Build the organization</h2>

<p>Add the organization, <code class="highlighter-rouge">acme-datacenter</code>, and create three teams according to the
following structure:</p>

<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>acme-datacenter
├── dba
│   └── Alex Alutin
├── dev
│   └── Bett Bhatia
└── ops
    └── Chad Chavez
</code></pre></div></div>

<p>Learn to <a href="create-users-and-teams-manually.md">create and configure users and teams</a>.</p>

<h2 id="kubernetes-deployment">Kubernetes deployment</h2>

<p>In this section, we deploy NGINX with Kubernetes. See <a href="#swarm-stack">Swarm stack</a>
for the same exercise with Swarm.</p>

<h3 id="create-namespace">Create namespace</h3>

<p>Create a namespace to logically store the NGINX application:</p>

<ol>
<li>Click <strong>Kubernetes</strong> &gt; <strong>Namespaces</strong>.</li>
<li>Paste the following manifest in the terminal window and click <strong>Create</strong>.</li>
</ol>

<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>apiVersion: v1
kind: Namespace
metadata:
  name: nginx-namespace
</code></pre></div></div>
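
<p>Equivalently, if you prefer the CLI and have a client bundle loaded, the same namespace can be created with one standard kubectl command:</p>

<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>kubectl create namespace nginx-namespace
</code></pre></div></div>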

<h3 id="define-roles">Define roles</h3>

<p>You can use the built-in roles or define your own. For this exercise, create a
simple role for the ops team:</p>

<ol>
<li>Click <strong>Roles</strong> under <strong>User Management</strong>.</li>
<li>Click <strong>Create Role</strong>.</li>
<li>On the <strong>Details</strong> tab, name the role <code class="highlighter-rouge">Kube Deploy</code>.</li>
<li>On the <strong>Operations</strong> tab, check all <strong>Kubernetes Deployment Operations</strong>.</li>
<li>Click <strong>Create</strong>.</li>
</ol>

<p>Learn to <a href="create-users-and-teams-manually.md">create and configure users and teams</a>.</p>

<h3 id="grant-access">Grant access</h3>

<p>Grant the ops team (and only the ops team) access to nginx-namespace with the
custom role, <strong>Kube Deploy</strong>.</p>

<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>acme-datacenter/ops + Kube Deploy + nginx-namespace
</code></pre></div></div>

<h3 id="deploy-nginx">Deploy NGINX</h3>

<p>You’ve configured Docker EE. The <code class="highlighter-rouge">ops</code> team can now deploy <code class="highlighter-rouge">nginx</code>.</p>

<ol>
<li>Log on to UCP as “chad” (on the <code class="highlighter-rouge">ops</code> team).</li>
<li>Click <strong>Kubernetes</strong> &gt; <strong>Namespaces</strong>.</li>
<li>Paste the following manifest in the terminal window and click <strong>Create</strong>.</li>
</ol>

<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code>apiVersion: apps/v1beta2 # Use apps/v1beta1 for versions &lt; 1.8.0
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
</code></pre></div></div>

<ol>
<li>Log on to UCP as each user and ensure that:
<ul>
<li><code class="highlighter-rouge">dba</code> (alex) can’t see <code class="highlighter-rouge">nginx-namespace</code>.</li>
<li><code class="highlighter-rouge">dev</code> (bett) can’t see <code class="highlighter-rouge">nginx-namespace</code>.</li>
</ul>
</li>
</ol>

<h2 id="swarm-stack">Swarm stack</h2>

<p>In this section, we deploy <code class="highlighter-rouge">nginx</code> as a Swarm service. See <a href="#kubernetes-deployment">Kubernetes Deployment</a>
for the same exercise with Kubernetes.</p>

<h3 id="create-collection-paths">Create collection paths</h3>

<p>Create a collection for NGINX resources, nested under the <code class="highlighter-rouge">/Shared</code> collection:</p>

<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>/
├── System
└── Shared
    └── nginx-collection
</code></pre></div></div>

<blockquote>
<p><strong>Tip</strong>: To drill into a collection, click <strong>View Children</strong>.</p>
</blockquote>

<p>Learn to <a href="group-resources.md">group and isolate cluster resources</a>.</p>

<h3 id="define-roles-1">Define roles</h3>

<p>You can use the built-in roles or define your own. For this exercise, create a
simple role for the ops team:</p>

<ol>
<li>Click <strong>Roles</strong> under <strong>User Management</strong>.</li>
<li>Click <strong>Create Role</strong>.</li>
<li>On the <strong>Details</strong> tab, name the role <code class="highlighter-rouge">Swarm Deploy</code>.</li>
<li>On the <strong>Operations</strong> tab, check all <strong>Service Operations</strong>.</li>
<li>Click <strong>Create</strong>.</li>
</ol>

<p>Learn to <a href="define-roles.md">define roles with authorized API operations</a>.</p>

<h3 id="grant-access-1">Grant access</h3>

<p>Grant the ops team (and only the ops team) access to <code class="highlighter-rouge">nginx-collection</code> with
the custom role, <strong>Swarm Deploy</strong>.</p>

<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>acme-datacenter/ops + Swarm Deploy + /Shared/nginx-collection
</code></pre></div></div>

<p>Learn to <a href="grant-permissions.md">grant role-access to cluster resources</a>.</p>

<h3 id="deploy-nginx-1">Deploy NGINX</h3>

<p>You’ve configured Docker EE. The <code class="highlighter-rouge">ops</code> team can now deploy an <code class="highlighter-rouge">nginx</code> Swarm
service.</p>

<ol>
<li>Log on to UCP as chad (on the <code class="highlighter-rouge">ops</code> team).</li>
<li>Click <strong>Swarm</strong> &gt; <strong>Services</strong>.</li>
<li>Click <strong>Create Stack</strong>.</li>
<li>On the Details tab, enter:
<ul>
<li>Name: <code class="highlighter-rouge">nginx-service</code></li>
<li>Image: <code class="highlighter-rouge">nginx:latest</code></li>
</ul>
</li>
<li>On the Collections tab:
<ul>
<li>Click <code class="highlighter-rouge">/Shared</code> in the breadcrumbs.</li>
<li>Select <code class="highlighter-rouge">nginx-collection</code>.</li>
</ul>
</li>
<li>Click <strong>Create</strong>.</li>
<li>Log on to UCP as each user and ensure that:
<ul>
<li><code class="highlighter-rouge">dba</code> (alex) cannot see <code class="highlighter-rouge">nginx-collection</code>.</li>
<li><code class="highlighter-rouge">dev</code> (bett) cannot see <code class="highlighter-rouge">nginx-collection</code>.</li>
</ul>
</li>
</ol>
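
<p>From the CLI, an equivalent service can be placed into the collection with a label. This is a sketch, assuming UCP’s <code class="highlighter-rouge">com.docker.ucp.access.label</code> convention for collection placement:</p>

<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker service create --name nginx-service \
  --label com.docker.ucp.access.label="/Shared/nginx-collection" \
  nginx:latest
</code></pre></div></div>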

@@ -0,0 +1,137 @@

<p>Go through the <a href="ee-standard.md">Docker Enterprise Standard tutorial</a>
before continuing here with Docker Enterprise Advanced.</p>

<p>In the first tutorial, the fictional company, OrcaBank, designed an architecture
with role-based access control (RBAC) to meet their organization’s security
needs. They assigned multiple grants to fine-tune access to resources across
collection boundaries on a single platform.</p>

<p>In this tutorial, OrcaBank implements new and more stringent security
requirements for production applications:</p>

<p>First, OrcaBank adds a staging zone to their deployment model. They will no longer
move developed applications directly into production. Instead, they will deploy
apps from their dev cluster to staging for testing, and then to production.</p>

<p>Second, production applications are no longer permitted to share any physical
infrastructure with non-production infrastructure. OrcaBank segments the
scheduling and access of applications with <a href="isolate-nodes.md">Node Access Control</a>.</p>

<blockquote>
<p><a href="isolate-nodes.md">Node Access Control</a> is a feature of Docker EE
Advanced and provides secure multi-tenancy with node-based isolation. Nodes
can be placed in different collections so that resources can be scheduled and
isolated on disparate physical or virtual hardware resources.</p>
</blockquote>

<h2 id="team-access-requirements">Team access requirements</h2>

<p>OrcaBank still has three application teams, <code class="highlighter-rouge">payments</code>, <code class="highlighter-rouge">mobile</code>, and <code class="highlighter-rouge">db</code>, with
varying levels of segmentation between them.</p>

<p>Their RBAC redesign is going to organize their UCP cluster into two top-level
collections, staging and production, which are completely separate security
zones on separate physical infrastructure.</p>

<p>OrcaBank’s four teams now have different needs in production and staging:</p>

<ul>
<li><code class="highlighter-rouge">security</code> should have view-only access to all applications in production (but
not staging).</li>
<li><code class="highlighter-rouge">db</code> should have full access to all database applications and resources in
production (but not staging). See <a href="#db-team">DB Team</a>.</li>
<li><code class="highlighter-rouge">mobile</code> should have full access to their Mobile applications in both
production and staging and limited access to shared <code class="highlighter-rouge">db</code> services. See
<a href="#mobile-team">Mobile Team</a>.</li>
<li><code class="highlighter-rouge">payments</code> should have full access to their Payments applications in both
production and staging and limited access to shared <code class="highlighter-rouge">db</code> services.</li>
</ul>

<h2 id="role-composition">Role composition</h2>

<p>OrcaBank has decided to replace their custom <code class="highlighter-rouge">Ops</code> role with the built-in
<code class="highlighter-rouge">Full Control</code> role.</p>

<ul>
<li><code class="highlighter-rouge">View Only</code> (default role) allows users to see but not edit all cluster
resources.</li>
<li><code class="highlighter-rouge">Full Control</code> (default role) allows users complete control of all collections
granted to them. They can also create containers without restriction but
cannot see the containers of other users.</li>
<li><code class="highlighter-rouge">View &amp; Use Networks + Secrets</code> (custom role) enables users to view/connect
to networks and view/use secrets used by <code class="highlighter-rouge">db</code> containers, but prevents them
from seeing or impacting the <code class="highlighter-rouge">db</code> applications themselves.</li>
</ul>

<p><img src="../images/design-access-control-adv-0.png" alt="image" class="with-border" /></p>

<h2 id="collection-architecture">Collection architecture</h2>

<p>In the previous tutorial, OrcaBank created separate collections for each
application team and nested them all under <code class="highlighter-rouge">/Shared</code>.</p>

<p>To meet their new security requirements for production, OrcaBank is redesigning
collections in two ways:</p>

<ul>
<li>Adding collections for both the production and staging zones, and nesting a
set of application collections under each.</li>
<li>Segmenting nodes. Both the production and staging zones will have dedicated
nodes; and in production, each application will be on a dedicated node.</li>
</ul>

<p>The collection architecture now has the following tree representation:</p>

<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>/
├── System
├── Shared
├── prod
│   ├── mobile
│   ├── payments
│   └── db
│       ├── mobile
│       └── payments
│
└── staging
    ├── mobile
    └── payments
</code></pre></div></div>

<h2 id="grant-composition">Grant composition</h2>

<p>OrcaBank must now diversify their grants further to ensure the proper division
of access.</p>

<p>The <code class="highlighter-rouge">payments</code> and <code class="highlighter-rouge">mobile</code> application teams will have three grants each: one
for deploying to production, one for deploying to staging, and the same grant to
access shared <code class="highlighter-rouge">db</code> networks and secrets.</p>

<p><img src="../images/design-access-control-adv-grant-composition.png" alt="image" class="with-border" /></p>

<h2 id="orcabank-access-architecture">OrcaBank access architecture</h2>

<p>The resulting access architecture, designed with Docker EE Advanced, provides
physical segmentation between production and staging using node access control.</p>

<p>Applications are scheduled only on UCP worker nodes in the dedicated application
collection. And applications use shared resources across collection boundaries
to access the databases in the <code class="highlighter-rouge">/prod/db</code> collection.</p>

<p><img src="../images/design-access-control-adv-architecture.png" alt="image" class="with-border" /></p>

<h3 id="db-team">DB team</h3>

<p>The OrcaBank <code class="highlighter-rouge">db</code> team is responsible for deploying and managing the full
lifecycle of the databases that are in production. They have the full set of
operations against all database resources.</p>

<p><img src="../images/design-access-control-adv-db.png" alt="image" class="with-border" /></p>

<h3 id="mobile-team">Mobile team</h3>

<p>The <code class="highlighter-rouge">mobile</code> team is responsible for deploying their full application stack in
staging. In production they deploy their own applications but use the databases
that are provided by the <code class="highlighter-rouge">db</code> team.</p>

<p><img src="../images/design-access-control-adv-mobile.png" alt="image" class="with-border" /></p>
@@ -0,0 +1,137 @@

<p><a href="index.md">Collections and grants</a> are strong tools that can be used to control
access and visibility to resources in UCP.</p>

<p>This tutorial describes a fictitious company named OrcaBank that needs to
configure an architecture in UCP with role-based access control (RBAC) for
their application engineering group.</p>

<h2 id="team-access-requirements">Team access requirements</h2>

<p>OrcaBank reorganized their application teams by product with each team providing
shared services as necessary. Developers at OrcaBank do their own DevOps and
deploy and manage the lifecycle of their applications.</p>

<p>OrcaBank has four teams with the following resource needs:</p>

<ul>
<li><code class="highlighter-rouge">security</code> should have view-only access to all applications in the cluster.</li>
<li><code class="highlighter-rouge">db</code> should have full access to all database applications and resources. See
<a href="#db-team">DB Team</a>.</li>
<li><code class="highlighter-rouge">mobile</code> should have full access to their mobile applications and limited
access to shared <code class="highlighter-rouge">db</code> services. See <a href="#mobile-team">Mobile Team</a>.</li>
<li><code class="highlighter-rouge">payments</code> should have full access to their payments applications and limited
access to shared <code class="highlighter-rouge">db</code> services.</li>
</ul>

<h2 id="role-composition">Role composition</h2>

<p>To assign the proper access, OrcaBank is employing a combination of default
and custom roles:</p>

<ul>
<li><code class="highlighter-rouge">View Only</code> (default role) allows users to see all resources (but not edit or use).</li>
<li><code class="highlighter-rouge">Ops</code> (custom role) allows users to perform all operations against configs,
containers, images, networks, nodes, secrets, services, and volumes.</li>
<li><code class="highlighter-rouge">View &amp; Use Networks + Secrets</code> (custom role) enables users to view/connect to
networks and view/use secrets used by <code class="highlighter-rouge">db</code> containers, but prevents them from
seeing or impacting the <code class="highlighter-rouge">db</code> applications themselves.</li>
</ul>

<p><img src="../images/design-access-control-adv-0.png" alt="image" class="with-border" /></p>

<h2 id="collection-architecture">Collection architecture</h2>

<p>OrcaBank is also creating collections of resources to mirror their team
structure.</p>

<p>Currently, all OrcaBank applications share the same physical resources, so all
nodes and applications are being configured in collections that nest under the
built-in collection, <code class="highlighter-rouge">/Shared</code>.</p>

<p>Other collections are also being created to enable shared <code class="highlighter-rouge">db</code> applications.</p>

<blockquote>
<p><strong>Note:</strong> For increased security with node-based isolation, use Docker
Enterprise Advanced.</p>
</blockquote>

<ul>
<li><code class="highlighter-rouge">/Shared/mobile</code> hosts all Mobile applications and resources.</li>
<li><code class="highlighter-rouge">/Shared/payments</code> hosts all Payments applications and resources.</li>
<li><code class="highlighter-rouge">/Shared/db</code> is a top-level collection for all <code class="highlighter-rouge">db</code> resources.</li>
<li><code class="highlighter-rouge">/Shared/db/payments</code> is a collection of <code class="highlighter-rouge">db</code> resources for Payments applications.</li>
<li><code class="highlighter-rouge">/Shared/db/mobile</code> is a collection of <code class="highlighter-rouge">db</code> resources for Mobile applications.</li>
</ul>

<p>The collection architecture has the following tree representation:</p>

<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>/
├── System
└── Shared
    ├── mobile
    ├── payments
    └── db
        ├── mobile
        └── payments
</code></pre></div></div>

<p>OrcaBank’s <a href="#grant-composition">Grant composition</a> ensures that their collection
architecture gives the <code class="highlighter-rouge">db</code> team access to <em>all</em> <code class="highlighter-rouge">db</code> resources and restricts
app teams to <em>shared</em> <code class="highlighter-rouge">db</code> resources.</p>

<h2 id="ldapad-integration">LDAP/AD integration</h2>

<p>OrcaBank has standardized on LDAP for centralized authentication to help their
identity team scale across all the platforms they manage.</p>

<p>To implement LDAP authentication in UCP, OrcaBank is using UCP’s native LDAP/AD
integration to map LDAP groups directly to UCP teams. Users can be added to or
removed from UCP teams via LDAP, which can be managed centrally by OrcaBank’s
identity team.</p>

<p>The following grant composition shows how LDAP groups are mapped to UCP teams.</p>

<h2 id="grant-composition">Grant composition</h2>

<p>OrcaBank is taking advantage of the flexibility in UCP’s grant model by applying
two grants to each application team. One grant allows each team to fully
manage the apps in their own collection, and the second grant gives them the
(limited) access they need to networks and secrets within the <code class="highlighter-rouge">db</code> collection.</p>

<p><img src="../images/design-access-control-adv-1.png" alt="image" class="with-border" /></p>

<h2 id="orcabank-access-architecture">OrcaBank access architecture</h2>

<p>OrcaBank’s resulting access architecture shows applications connecting across
collection boundaries. By assigning multiple grants per team, the Mobile and
Payments applications teams can connect to dedicated Database resources through
a secure and controlled interface, leveraging Database networks and secrets.</p>

<blockquote>
<p><strong>Note:</strong> In Docker Enterprise Standard, all resources are deployed across the
same group of UCP worker nodes. Node segmentation is provided in Docker
Enterprise Advanced and discussed in the <a href="ee-advanced.md">next tutorial</a>.</p>
</blockquote>

<p><img src="../images/design-access-control-adv-2.png" alt="image" class="with-border" /></p>

<h3 id="db-team">DB team</h3>

<p>The <code class="highlighter-rouge">db</code> team is responsible for deploying and managing the full lifecycle
of the databases used by the application teams. They can execute the full set of
operations against all database resources.</p>

<p><img src="../images/design-access-control-adv-3.png" alt="image" class="with-border" /></p>

<h3 id="mobile-team">Mobile team</h3>

<p>The <code class="highlighter-rouge">mobile</code> team is responsible for deploying their own application stack,
minus the database tier that is managed by the <code class="highlighter-rouge">db</code> team.</p>

<p><img src="../images/design-access-control-adv-4.png" alt="image" class="with-border" /></p>

<h2 id="where-to-go-next">Where to go next</h2>

<ul>
<li><a href="ee-advanced.md">Access control design with Docker EE Advanced</a></li>
</ul>

@@ -0,0 +1,77 @@

<p>Docker EE administrators can create <em>grants</em> to control how users and
organizations access <a href="group-resources.md">resource sets</a>.</p>

<p>A grant defines <em>who</em> has <em>how much</em> access to <em>what</em> resources. Each grant is a
1:1:1 mapping of <em>subject</em>, <em>role</em>, and <em>resource set</em>. For example, you can
grant the “Prod Team” “Restricted Control” over services in the “/Production”
collection.</p>

<p>A common workflow for creating grants has four steps:</p>

<ul>
<li>Add and configure <strong>subjects</strong> (users, teams, and service accounts).</li>
<li>Define custom <strong>roles</strong> (or use defaults) by adding permitted API operations
per type of resource.</li>
<li>Group cluster <strong>resources</strong> into Swarm collections or Kubernetes namespaces.</li>
<li>Create <strong>grants</strong> by combining subject + role + resource set.</li>
</ul>

<h2 id="kubernetes-grants">Kubernetes grants</h2>

<p>With Kubernetes orchestration, a grant is made up of <em>subject</em>, <em>role</em>, and
<em>namespace</em>.</p>

<blockquote class="important">
<p>This section assumes that you have created objects for the grant: subject, role,
namespace.</p>
</blockquote>

<p>To create a Kubernetes grant in UCP:</p>

<ol>
<li>Click <strong>Grants</strong> under <strong>User Management</strong>.</li>
<li>Click <strong>Create Grant</strong>.</li>
<li>Click <strong>Namespaces</strong> under <strong>Kubernetes</strong>.</li>
<li>Find the desired namespace and click <strong>Select Namespace</strong>.</li>
<li>On the <strong>Roles</strong> tab, select a role.</li>
<li>On the <strong>Subjects</strong> tab, select a user, team, organization, or service
account to authorize.</li>
<li>Click <strong>Create</strong>.</li>
</ol>
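
<p>Conceptually, a Kubernetes grant ties a subject and a role together within a namespace, much like a native RoleBinding. A rough kubectl analogue, with hypothetical role, user, and namespace names, would be:</p>

<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>kubectl -n nginx-namespace create rolebinding ops-kube-deploy \
  --role=kube-deploy --user=chad
</code></pre></div></div>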

<h2 id="swarm-grants">Swarm grants</h2>

<p>With Swarm orchestration, a grant is made up of <em>subject</em>, <em>role</em>, and
<em>collection</em>.</p>

<blockquote>
<p>This section assumes that you have created objects to grant: teams/users,
roles (built-in or custom), and a collection.</p>
</blockquote>

<p><img src="../images/ucp-grant-model-0.svg" alt="" class="with-border" />
<img src="../images/ucp-grant-model.svg" alt="" class="with-border" /></p>

<p>To create a grant in UCP:</p>

<ol>
<li>Click <strong>Grants</strong> under <strong>User Management</strong>.</li>
<li>Click <strong>Create Grant</strong>.</li>
<li>On the Collections tab, click <strong>Collections</strong> (for Swarm).</li>
<li>Click <strong>View Children</strong> until you get to the desired collection and click <strong>Select</strong>.</li>
<li>On the <strong>Roles</strong> tab, select a role.</li>
<li>On the <strong>Subjects</strong> tab, select a user, team, or organization to authorize.</li>
<li>Click <strong>Create</strong>.</li>
</ol>

<blockquote class="important">
<p>By default, all new users are placed in the <code class="highlighter-rouge">docker-datacenter</code> organization.
To apply permissions to all Docker EE users, create a grant with the
<code class="highlighter-rouge">docker-datacenter</code> org as a subject.</p>
</blockquote>

<h2 id="where-to-go-next">Where to go next</h2>

<ul>
<li><a href="deploy-stateless-app.md">Deploy a simple stateless app with RBAC</a></li>
</ul>
||||
|
|
@ -0,0 +1,136 @@
|
|||
<p>Docker EE enables access control to cluster resources by grouping resources
|
||||
into <strong>resource sets</strong>. Combine resource sets with <a href="grant-permissions.md">grants</a>
|
||||
to give users permission to access specific cluster resources.</p>
|
||||
|
||||
<p>A resource set can be:</p>
|
||||
|
||||
<ul>
|
||||
<li>A <strong>Kubernetes namespace</strong> for Kubernetes workloads.</li>
|
||||
<li>A <strong>UCP collection</strong> for Swarm workloads.</li>
|
||||
</ul>
|
||||
|
||||
<h2 id="kubernetes-namespaces">Kubernetes namespaces</h2>
|
||||
|
||||
<p>A namespace allows you to group resources like Pods, Deployments, Services, or
|
||||
any other Kubernetes-specific resources. You can then enforce RBAC policies
|
||||
and resource quotas for the namespace.</p>
|
||||
|
||||
<p>Each Kubernetes resource can be in only one namespace, and namespaces cannot
|
||||
be nested inside one another.</p>
|
||||
|
||||
<p><a href="https://v1-8.docs.kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/">Learn more about Kubernetes namespaces</a>.</p>
|
||||
|
||||
<h2 id="swarm-collections">Swarm collections</h2>
|
||||
|
||||
<p>A Swarm collection is a directory of cluster resources like nodes, services,
|
||||
volumes, or other Swarm-specific resources.</p>
|
||||
|
||||
<p><img src="../images/collections-and-resources.svg" alt="" class="with-border" /></p>
|
||||
|
||||
<p>Each Swarm resource can only be in one collection at a time, but collections
|
||||
can be nested inside one another to create hierarchies.</p>
|
||||
|
||||
<h3 id="nested-collections">Nested collections</h3>
|
||||
|
||||
<p>You can nest collections inside one another. If a user is granted permissions
|
||||
for one collection, they’ll have permissions for its child collections,
|
||||
much like a directory structure.</p>
|
||||
|
||||
<p>For a child collection, or for a user who belongs to more than one team, the
|
||||
system combines permissions from multiple roles into an “effective role” for
|
||||
the user, which specifies the operations that are allowed against the target.</p>
|
||||
|
||||
<h3 id="built-in-collections">Built-in collections</h3>
|
||||
|
||||
<p>Docker EE provides a number of built-in collections.</p>
|
||||
|
||||
<p><img src="../images/collections-diagram.svg" alt="" class="with-border" /></p>
|
||||
|
||||
<table>
|
||||
<thead>
|
||||
<tr>
|
||||
<th style="text-align: left">Default collection</th>
|
||||
<th style="text-align: left">Description</th>
|
||||
</tr>
|
||||
</thead>
|
||||
<tbody>
|
||||
<tr>
|
||||
<td style="text-align: left"><code class="highlighter-rouge">/</code></td>
|
||||
<td style="text-align: left">Path to all resources in the Swarm cluster. Resources not in a collection are put here.</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td style="text-align: left"><code class="highlighter-rouge">/System</code></td>
|
||||
<td style="text-align: left">Path to UCP managers, DTR nodes, and UCP/DTR system services. By default, only admins have access, but this is configurable.</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td style="text-align: left"><code class="highlighter-rouge">/Shared</code></td>
|
||||
<td style="text-align: left">Default path to all worker nodes for scheduling. In Docker EE Standard, all worker nodes are located here. In <a href="https://www.docker.com/enterprise-edition">Docker EE Advanced</a>, worker nodes can be moved and <a href="isolate-nodes.md">isolated</a>.</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td style="text-align: left"><code class="highlighter-rouge">/Shared/Private/</code></td>
|
||||
<td style="text-align: left">Path to a user’s private collection.</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td style="text-align: left"><code class="highlighter-rouge">/Shared/Legacy</code></td>
|
||||
<td style="text-align: left">Path to the access control labels of legacy versions (UCP 2.1 and lower).</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
<h3 id="default-collections">Default collections</h3>
|
||||
|
||||
<p>Each user has a default collection, which can be changed in UCP preferences.</p>
|
||||
|
||||
<p>Users can’t deploy a resource without a collection. When a user deploys a
|
||||
resource without an access label, Docker EE automatically places the resource in
|
||||
the user’s default collection. <a href="../admin/configure/add-labels-to-cluster-nodes.md">Learn how to add labels to nodes</a>.</p>
|
||||
|
||||
<p>With Docker Compose, the system applies default collection labels across all
|
||||
resources in the stack unless <code class="highlighter-rouge">com.docker.ucp.access.label</code> has been explicitly
|
||||
set.</p>
|
||||
|
||||
<blockquote>
|
||||
<p>Default collections and collection labels</p>
|
||||
|
||||
<p>Default collections are good for users who work only on a well-defined slice of
|
||||
the system, as well as users who deploy stacks and don’t want to edit the
|
||||
contents of their compose files. A user with more versatile roles in the
|
||||
system, such as an administrator, might find it better to set custom labels for
|
||||
each resource.</p>
|
||||
</blockquote>
|
||||
|
||||
<h3 id="collections-and-labels">Collections and labels</h3>
|
||||
|
||||
<p>Resources are marked as being in a collection by using labels. Some resource
|
||||
types don’t have editable labels, so you can’t move them across collections.</p>
|
||||
|
||||
<blockquote>
|
||||
<p>Can edit labels: services, nodes, secrets, and configs
|
||||
Cannot edit labels: containers, networks, and volumes</p>
|
||||
</blockquote>
|
||||
|
||||
<p>For editable resources, you can change the <code class="highlighter-rouge">com.docker.ucp.access.label</code> to move
|
||||
resources to different collections. For example, you may need to deploy resources
|
||||
to a collection other than your default collection.</p>
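<p>As a sketch, you could move an existing service from the CLI by updating its access label (the service and collection names here are illustrative):</p>

<pre><code class="language-bash"># Move an existing service to the /Prod/Webserver collection
# by updating its UCP access label.
docker service update \
  --label-add com.docker.ucp.access.label=/Prod/Webserver \
  nginx
</code></pre>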
|
||||
|
||||
<p>The system uses the additional labels, <code class="highlighter-rouge">com.docker.ucp.collection.*</code>, to enable
|
||||
efficient resource lookups. By default, nodes have the
|
||||
<code class="highlighter-rouge">com.docker.ucp.collection.root</code>, <code class="highlighter-rouge">com.docker.ucp.collection.shared</code>, and
|
||||
<code class="highlighter-rouge">com.docker.ucp.collection.swarm</code> labels set to <code class="highlighter-rouge">true</code>. UCP
|
||||
automatically controls these labels, and you don’t need to manage them.</p>
|
||||
|
||||
<p>Collections get generic default names, but you can give them meaningful names,
|
||||
like “Dev”, “Test”, and “Prod”.</p>
|
||||
|
||||
<p>A <em>stack</em> is a group of resources identified by a label. You can place the
|
||||
stack’s resources in multiple collections. Resources are placed in the user’s
|
||||
default collection unless you specify an explicit <code class="highlighter-rouge">com.docker.ucp.access.label</code>
|
||||
within the stack/compose file.</p>
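<p>For example, a minimal compose sketch that pins a service to a collection might look like the following (the collection path is illustrative, and this sketch assumes the access label is set as a service-level label under <code class="highlighter-rouge">deploy</code>):</p>

<pre><code class="language-yaml">version: "3.1"
services:
  web:
    image: nginx
    deploy:
      labels:
        # Place this service's resources in /Prod/Webserver
        # instead of the user's default collection.
        com.docker.ucp.access.label: /Prod/Webserver
</code></pre>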
|
||||
|
||||
<h2 id="where-to-go-next">Where to go next</h2>
|
||||
|
||||
<ul>
|
||||
<li><a href="create-users-and-teams-manually.md">Create and configure users and teams</a></li>
|
||||
<li><a href="define-roles.md">Define roles with authorized API operations</a></li>
|
||||
<li><a href="grant-permissions.md">Grant role-access to cluster resources</a></li>
|
||||
</ul>
|
||||
|
|
@ -0,0 +1,110 @@
|
|||
<p><a href="../index.md">Docker Universal Control Plane (UCP)</a>,
|
||||
the UI for <a href="https://www.docker.com/enterprise-edition">Docker EE</a>, lets you
|
||||
authorize users to view, edit, and use cluster resources by granting role-based
|
||||
permissions against resource sets.</p>
|
||||
|
||||
<p>To authorize access to cluster resources across your organization, UCP
|
||||
administrators might take the following high-level steps:</p>
|
||||
|
||||
<ul>
|
||||
<li>Add and configure <strong>subjects</strong> (users, teams, and service accounts).</li>
|
||||
<li>Define custom <strong>roles</strong> (or use defaults) by adding permitted operations per
|
||||
type of resource.</li>
|
||||
<li>Group cluster <strong>resources</strong> into resource sets of Swarm collections or
|
||||
Kubernetes namespaces.</li>
|
||||
<li>Create <strong>grants</strong> by combining subject + role + resource set.</li>
|
||||
</ul>
|
||||
|
||||
<p>For an example, see <a href="deploy-stateless-app.md">Deploy stateless app with RBAC</a>.</p>
|
||||
|
||||
<h2 id="subjects">Subjects</h2>
|
||||
|
||||
<p>A subject represents a user, team, organization, or service account. A subject
|
||||
can be granted a role that defines permitted operations against one or more
|
||||
resource sets.</p>
|
||||
|
||||
<ul>
|
||||
<li><strong>User</strong>: A person authenticated by the authentication backend. Users can
|
||||
belong to one or more teams and one or more organizations.</li>
|
||||
<li><strong>Team</strong>: A group of users that share permissions defined at the team level. A
|
||||
team can be in one organization only.</li>
|
||||
<li><strong>Organization</strong>: A group of teams that share a specific set of permissions,
|
||||
defined by the roles of the organization.</li>
|
||||
<li><strong>Service account</strong>: A Kubernetes object that enables a workload to access
|
||||
cluster resources that are assigned to a namespace.</li>
|
||||
</ul>
|
||||
|
||||
<p>Learn to <a href="create-users-and-teams-manually.md">create and configure users and teams</a>.</p>
|
||||
|
||||
<h2 id="roles">Roles</h2>
|
||||
|
||||
<p>Roles define what operations can be done by whom. A role is a set of permitted
|
||||
operations against a type of resource, like a container or volume, that’s
|
||||
assigned to a user or team with a grant.</p>
|
||||
|
||||
<p>For example, the built-in role, <strong>Restricted Control</strong>, includes permission to
|
||||
view and schedule nodes but not to update nodes. A custom <strong>DBA</strong> role might
|
||||
include permissions to <code class="highlighter-rouge">r-w-x</code> volumes and secrets.</p>
|
||||
|
||||
<p>Most organizations use multiple roles to fine-tune the appropriate access. A
|
||||
given team or user may have different roles provided to them depending on what
|
||||
resource they are accessing.</p>
|
||||
|
||||
<p>Learn to <a href="define-roles.md">define roles with authorized API operations</a>.</p>
|
||||
|
||||
<h2 id="resource-sets">Resource sets</h2>
|
||||
|
||||
<p>To control user access, cluster resources are grouped into Docker Swarm
|
||||
<em>collections</em> or Kubernetes <em>namespaces</em>.</p>
|
||||
|
||||
<ul>
|
||||
<li>
|
||||
<p><strong>Swarm collections</strong>: A collection has a directory-like structure that holds
|
||||
Swarm resources. You can create collections in UCP by defining a directory path
|
||||
and moving resources into it. Alternatively, you can create the path in UCP and use
|
||||
<em>labels</em> in your YAML file to assign application resources to the path.
|
||||
Resource types that users can access in a Swarm collection include containers,
|
||||
networks, nodes, services, secrets, and volumes.</p>
|
||||
</li>
|
||||
<li>
|
||||
<p><strong>Kubernetes namespaces</strong>: A
|
||||
<a href="https://v1-8.docs.kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/">namespace</a>
|
||||
is a logical area for a Kubernetes cluster. Kubernetes comes with a <code class="highlighter-rouge">default</code>
|
||||
namespace for your cluster objects, plus two more namespaces for system and
|
||||
public resources. You can create custom namespaces, but unlike Swarm
|
||||
collections, namespaces <em>can’t be nested</em>. Resource types that users can
|
||||
access in a Kubernetes namespace include pods, deployments, network policies,
|
||||
nodes, services, secrets, and many more.</p>
|
||||
</li>
|
||||
</ul>
|
||||
|
||||
<p>Together, collections and namespaces are called <em>resource sets</em>. Learn to
|
||||
<a href="group-resources.md">group and isolate cluster resources</a>.</p>
|
||||
|
||||
<h2 id="grants">Grants</h2>
|
||||
|
||||
<p>A grant is made up of <em>subject</em>, <em>role</em>, and <em>resource set</em>.</p>
|
||||
|
||||
<p>Grants define which users can access what resources in what way. Grants are
|
||||
effectively Access Control Lists (ACLs), and when grouped together, they
|
||||
provide comprehensive access policies for an entire organization.</p>
|
||||
|
||||
<p>Only an administrator can manage grants, subjects, roles, and access to
|
||||
resources.</p>
|
||||
|
||||
<blockquote class="important">
|
||||
<p>About administrators</p>
|
||||
|
||||
<p>An administrator is a user who creates subjects, groups resources by moving them
|
||||
into collections or namespaces, defines roles by selecting allowable operations,
|
||||
and applies grants to users and teams.</p>
|
||||
</blockquote>
|
||||
|
||||
<h2 id="where-to-go-next">Where to go next</h2>
|
||||
|
||||
<ul>
|
||||
<li><a href="create-users-and-teams-manually.md">Create and configure users and teams</a></li>
|
||||
<li><a href="define-roles.md">Define roles with authorized API operations</a></li>
|
||||
<li><a href="grant-permissions.md">Grant role-access to cluster resources</a></li>
|
||||
<li><a href="group-resources.md">Group and isolate cluster resources</a></li>
|
||||
</ul>
|
||||
|
|
@ -0,0 +1,315 @@
|
|||
<p>With Docker EE Advanced, you can enable physical isolation of resources
|
||||
by organizing nodes into collections and granting <code class="highlighter-rouge">Scheduler</code> access for
|
||||
different users. To control access to nodes, move them to dedicated collections
|
||||
where you can grant access to specific users, teams, and organizations.</p>
|
||||
|
||||
<p><img src="../images/containers-and-nodes-diagram.svg" alt="" /></p>
|
||||
|
||||
<p>In this example, a team gets access to a node collection and a resource
|
||||
collection, and UCP access control ensures that the team members can’t view
|
||||
or use swarm resources that aren’t in their collection.</p>
|
||||
|
||||
<p>You need a Docker EE Advanced license and at least two worker nodes to
|
||||
complete this example.</p>
|
||||
|
||||
<ol>
|
||||
<li>Create an <code class="highlighter-rouge">Ops</code> team and assign a user to it.</li>
|
||||
<li>Create a <code class="highlighter-rouge">/Prod</code> collection for the team’s node.</li>
|
||||
<li>Assign a worker node to the <code class="highlighter-rouge">/Prod</code> collection.</li>
|
||||
<li>Grant the <code class="highlighter-rouge">Ops</code> team access to its collection.</li>
|
||||
</ol>
|
||||
|
||||
<p><img src="../images/isolate-nodes-diagram.svg" alt="" class="with-border" /></p>
|
||||
|
||||
<h2 id="create-a-team">Create a team</h2>
|
||||
|
||||
<p>In the web UI, navigate to the <strong>Organizations & Teams</strong> page to create a team
|
||||
named “Ops” in your organization. Add a user who isn’t a UCP administrator to
|
||||
the team.
|
||||
<a href="create-users-and-teams-manually.md">Learn to create and manage teams</a>.</p>
|
||||
|
||||
<h2 id="create-a-node-collection-and-a-resource-collection">Create a node collection and a resource collection</h2>
|
||||
|
||||
<p>In this example, the Ops team uses an assigned group of nodes, which it
|
||||
accesses through a collection. Also, the team has a separate collection
|
||||
for its resources.</p>
|
||||
|
||||
<p>Create two collections: one for the team’s worker nodes and another for the
|
||||
team’s resources.</p>
|
||||
|
||||
<ol>
|
||||
<li>Navigate to the <strong>Collections</strong> page to view all of the resource
|
||||
collections in the swarm.</li>
|
||||
<li>Click <strong>Create collection</strong> and name the new collection “Prod”.</li>
|
||||
<li>Click <strong>Create</strong> to create the collection.</li>
|
||||
<li>Find <strong>Prod</strong> in the list, and click <strong>View children</strong>.</li>
|
||||
<li>Click <strong>Create collection</strong>, and name the child collection
|
||||
“Webserver”. This creates a sub-collection for access control.</li>
|
||||
</ol>
|
||||
|
||||
<p>You’ve created two new collections. The <code class="highlighter-rouge">/Prod</code> collection is for the worker
|
||||
nodes, and the <code class="highlighter-rouge">/Prod/Webserver</code> sub-collection is for access control to
|
||||
an application that you’ll deploy on the corresponding worker nodes.</p>
|
||||
|
||||
<h2 id="move-a-worker-node-to-a-collection">Move a worker node to a collection</h2>
|
||||
|
||||
<p>By default, worker nodes are located in the <code class="highlighter-rouge">/Shared</code> collection.
|
||||
Worker nodes that are running DTR are assigned to the <code class="highlighter-rouge">/System</code> collection.
|
||||
To control access to the team’s nodes, move them to a dedicated collection.</p>
|
||||
|
||||
<p>Move a worker node by changing the value of its access label key,
|
||||
<code class="highlighter-rouge">com.docker.ucp.access.label</code>, to a different collection.</p>
|
||||
|
||||
<ol>
|
||||
<li>Navigate to the <strong>Nodes</strong> page to view all of the nodes in the swarm.</li>
|
||||
<li>Click a worker node, and in the details pane, find its <strong>Collection</strong>.
|
||||
If it’s in the <code class="highlighter-rouge">/System</code> collection, click another worker node,
|
||||
because you can’t move nodes that are in the <code class="highlighter-rouge">/System</code> collection. By
|
||||
default, worker nodes are assigned to the <code class="highlighter-rouge">/Shared</code> collection.</li>
|
||||
<li>When you’ve found an available node, in the details pane, click
|
||||
<strong>Configure</strong>.</li>
|
||||
<li>In the <strong>Labels</strong> section, find <code class="highlighter-rouge">com.docker.ucp.access.label</code> and change
|
||||
its value from <code class="highlighter-rouge">/Shared</code> to <code class="highlighter-rouge">/Prod</code>.</li>
|
||||
<li>Click <strong>Save</strong> to move the node to the <code class="highlighter-rouge">/Prod</code> collection.</li>
|
||||
</ol>
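<p>If you prefer the CLI, a sketch of the same change from a UCP client bundle (the node name is illustrative):</p>

<pre><code class="language-bash"># Move a worker node from /Shared to /Prod by updating
# its UCP access label.
docker node update \
  --label-add com.docker.ucp.access.label=/Prod \
  worker-node-1
</code></pre>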
|
||||
|
||||
<blockquote>
|
||||
<p>Docker EE Advanced required</p>
|
||||
|
||||
<p>If you don’t have a Docker EE Advanced license, you’ll get the following
|
||||
error message when you try to change the access label:
|
||||
<strong>Nodes must be in either the shared or system collection without an advanced license.</strong>
|
||||
<a href="https://www.docker.com/pricing">Get a Docker EE Advanced license</a>.</p>
|
||||
</blockquote>
|
||||
|
||||
<p><img src="../images/isolate-nodes-1.png" alt="" class="with-border" /></p>
|
||||
|
||||
<h2 id="grant-access-for-a-team">Grant access for a team</h2>
|
||||
|
||||
<p>You need two grants to control access to nodes and container resources:</p>
|
||||
|
||||
<ul>
|
||||
<li>Grant the <code class="highlighter-rouge">Ops</code> team the <code class="highlighter-rouge">Restricted Control</code> role for the <code class="highlighter-rouge">/Prod/Webserver</code>
|
||||
resources.</li>
|
||||
<li>Grant the <code class="highlighter-rouge">Ops</code> team the <code class="highlighter-rouge">Scheduler</code> role against the nodes in the <code class="highlighter-rouge">/Prod</code>
|
||||
collection.</li>
|
||||
</ul>
|
||||
|
||||
<p>Create two grants for team access to the two collections:</p>
|
||||
|
||||
<ol>
|
||||
<li>Navigate to the <strong>Grants</strong> page and click <strong>Create Grant</strong>.</li>
|
||||
<li>In the left pane, click <strong>Resource Sets</strong>, and in the <strong>Swarm</strong> collection,
|
||||
click <strong>View Children</strong>.</li>
|
||||
<li>In the <strong>Prod</strong> collection, click <strong>View Children</strong>.</li>
|
||||
<li>In the <strong>Webserver</strong> collection, click <strong>Select Collection</strong>.</li>
|
||||
<li>In the left pane, click <strong>Roles</strong>, and select <strong>Restricted Control</strong>
|
||||
in the dropdown.</li>
|
||||
<li>Click <strong>Subjects</strong>, and under <strong>Select subject type</strong>, click <strong>Organizations</strong>.</li>
|
||||
<li>Select your organization, and in the <strong>Team</strong> dropdown, select <strong>Ops</strong>.</li>
|
||||
<li>Click <strong>Create</strong> to grant the Ops team access to the <code class="highlighter-rouge">/Prod/Webserver</code>
|
||||
collection.</li>
|
||||
</ol>
|
||||
|
||||
<p>The same steps apply for the nodes in the <code class="highlighter-rouge">/Prod</code> collection.</p>
|
||||
|
||||
<ol>
|
||||
<li>Navigate to the <strong>Grants</strong> page and click <strong>Create Grant</strong>.</li>
|
||||
<li>In the left pane, click <strong>Collections</strong>, and in the <strong>Swarm</strong> collection,
|
||||
click <strong>View Children</strong>.</li>
|
||||
<li>In the <strong>Prod</strong> collection, click <strong>Select Collection</strong>.</li>
|
||||
<li>In the left pane, click <strong>Roles</strong>, and in the dropdown, select <strong>Scheduler</strong>.</li>
|
||||
<li>In the left pane, click <strong>Subjects</strong>, and under <strong>Select subject type</strong>, click
|
||||
<strong>Organizations</strong>.</li>
|
||||
<li>Select your organization, and in the <strong>Team</strong> dropdown, select <strong>Ops</strong>.</li>
|
||||
<li>Click <strong>Create</strong> to grant the Ops team <code class="highlighter-rouge">Scheduler</code> access to the nodes in the
|
||||
<code class="highlighter-rouge">/Prod</code> collection.</li>
|
||||
</ol>
|
||||
|
||||
<p><img src="../images/isolate-nodes-2.png" alt="" class="with-border" /></p>
|
||||
|
||||
<p>The cluster is set up for node isolation. Users with access to nodes in the
|
||||
<code class="highlighter-rouge">/Prod</code> collection can deploy <a href="#deploy-a-swarm-service-as-a-team-member">Swarm services</a>
|
||||
and <a href="#deploy-a-kubernetes-application">Kubernetes apps</a>, and their workloads
|
||||
won’t be scheduled on nodes that aren’t in the collection.</p>
|
||||
|
||||
<h2 id="deploy-a-swarm-service-as-a-team-member">Deploy a Swarm service as a team member</h2>
|
||||
|
||||
<p>When a user deploys a Swarm service, UCP assigns its resources to the user’s
|
||||
default collection.</p>
|
||||
|
||||
<p>From the target collection of a resource, UCP walks up the ancestor collections
|
||||
until it finds the highest ancestor that the user has <code class="highlighter-rouge">Scheduler</code> access to.
|
||||
Tasks are scheduled on any nodes in the tree below this ancestor. In this example,
|
||||
UCP assigns the user’s service to the <code class="highlighter-rouge">/Prod/Webserver</code> collection and schedules
|
||||
tasks on nodes in the <code class="highlighter-rouge">/Prod</code> collection.</p>
|
||||
|
||||
<p>As a user on the <code class="highlighter-rouge">Ops</code> team, set your default collection to <code class="highlighter-rouge">/Prod/Webserver</code>.</p>
|
||||
|
||||
<ol>
|
||||
<li>Log in as a user on the <code class="highlighter-rouge">Ops</code> team.</li>
|
||||
<li>Navigate to the <strong>Collections</strong> page, and in the <strong>Prod</strong> collection,
|
||||
click <strong>View Children</strong>.</li>
|
||||
<li>In the <strong>Webserver</strong> collection, click the <strong>More Options</strong> icon and
|
||||
select <strong>Set to default</strong>.</li>
|
||||
</ol>
|
||||
|
||||
<p>Next, deploy a service, which is automatically scheduled on worker nodes in the <code class="highlighter-rouge">/Prod</code> collection.
|
||||
All resources are deployed under the user’s default collection,
|
||||
<code class="highlighter-rouge">/Prod/Webserver</code>, and the containers are scheduled only on the nodes under
|
||||
<code class="highlighter-rouge">/Prod</code>.</p>
|
||||
|
||||
<ol>
|
||||
<li>Navigate to the <strong>Services</strong> page, and click <strong>Create Service</strong>.</li>
|
||||
<li>Name the service “NGINX”, use the “nginx:latest” image, and click
|
||||
<strong>Create</strong>.</li>
|
||||
<li>When the <strong>nginx</strong> service status is green, click the service. In the
|
||||
details view, click <strong>Inspect Resource</strong>, and in the dropdown, select
|
||||
<strong>Containers</strong>.</li>
|
||||
<li>
|
||||
<p>Click the <strong>NGINX</strong> container, and in the details pane, confirm that its
|
||||
<strong>Collection</strong> is <strong>/Prod/Webserver</strong>.</p>
|
||||
|
||||
<p><img src="../images/isolate-nodes-3.png" alt="" class="with-border" /></p>
|
||||
</li>
|
||||
<li>Click <strong>Inspect Resource</strong>, and in the dropdown, select <strong>Nodes</strong>.</li>
|
||||
<li>
|
||||
<p>Click the node, and in the details pane, confirm that its <strong>Collection</strong>
|
||||
is <strong>/Prod</strong>.</p>
|
||||
|
||||
<p><img src="../images/isolate-nodes-4.png" alt="" class="with-border" /></p>
|
||||
</li>
|
||||
</ol>
|
||||
|
||||
<h3 id="alternative-use-a-grant-instead-of-the-default-collection">Alternative: Use a grant instead of the default collection</h3>
|
||||
|
||||
<p>Another approach is to use a grant instead of changing the user’s default
|
||||
collection. An administrator can create a grant for a role that has the
|
||||
<code class="highlighter-rouge">Service Create</code> permission against the <code class="highlighter-rouge">/Prod/Webserver</code> collection or a child
|
||||
collection. In this case, the user sets the value of the service’s access label,
|
||||
<code class="highlighter-rouge">com.docker.ucp.access.label</code>, to the new collection or one of its children
|
||||
that has a <code class="highlighter-rouge">Service Create</code> grant for the user.</p>
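<p>A sketch of such a deployment from the CLI (names are illustrative):</p>

<pre><code class="language-bash"># Deploy a service directly into /Prod/Webserver rather than
# relying on the user's default collection.
docker service create \
  --name nginx \
  --label com.docker.ucp.access.label=/Prod/Webserver \
  nginx:latest
</code></pre>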
|
||||
|
||||
<h2 id="deploy-a-kubernetes-application">Deploy a Kubernetes application</h2>
|
||||
|
||||
<p>Starting in Docker Enterprise Edition 2.0, you can deploy a Kubernetes workload
|
||||
to worker nodes, based on a Kubernetes namespace.</p>
|
||||
|
||||
<ol>
|
||||
<li>Convert a node to use the Kubernetes orchestrator.</li>
|
||||
<li>Create a Kubernetes namespace.</li>
|
||||
<li>Create a grant for the namespace.</li>
|
||||
<li>Link the namespace to a node collection.</li>
|
||||
<li>Deploy a Kubernetes workload.</li>
|
||||
</ol>
|
||||
|
||||
<h3 id="convert-a-node-to-kubernetes">Convert a node to Kubernetes</h3>
|
||||
|
||||
<p>To deploy Kubernetes workloads, an administrator must convert a worker node to
|
||||
use the Kubernetes orchestrator.
|
||||
<a href="../admin/configure/set-orchestrator-type.md">Learn how to set the orchestrator type</a>
|
||||
for your nodes in the <code class="highlighter-rouge">/Prod</code> collection.</p>
|
||||
|
||||
<h3 id="create-a-kubernetes-namespace">Create a Kubernetes namespace</h3>
|
||||
|
||||
<p>An administrator must create a Kubernetes namespace to enable node isolation
|
||||
for Kubernetes workloads.</p>
|
||||
|
||||
<ol>
|
||||
<li>In the left pane, click <strong>Kubernetes</strong>.</li>
|
||||
<li>Click <strong>Create</strong> to open the <strong>Create Kubernetes Object</strong> page.</li>
|
||||
<li>
|
||||
<p>In the <strong>Object YAML</strong> editor, paste the following YAML.</p>
|
||||
|
||||
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="na">apiVersion</span><span class="pi">:</span> <span class="s">v1</span>
|
||||
<span class="na">kind</span><span class="pi">:</span> <span class="s">Namespace</span>
|
||||
<span class="na">metadata</span><span class="pi">:</span>
|
||||
<span class="na">Name</span><span class="pi">:</span> <span class="s">ops-nodes</span>
|
||||
</code></pre></div> </div>
|
||||
</li>
|
||||
<li>Click <strong>Create</strong> to create the <code class="highlighter-rouge">ops-nodes</code> namespace.</li>
|
||||
</ol>
|
||||
|
||||
<h3 id="grant-access-to-the-kubernetes-namespace">Grant access to the Kubernetes namespace</h3>
|
||||
|
||||
<p>Create a grant to the <code class="highlighter-rouge">ops-nodes</code> namespace for the <code class="highlighter-rouge">Ops</code> team by following the
|
||||
same steps that you used to grant access to the <code class="highlighter-rouge">/Prod</code> collection, only this
|
||||
time, on the <strong>Create Grant</strong> page, pick <strong>Namespaces</strong>, instead of
|
||||
<strong>Collections</strong>.</p>
|
||||
|
||||
<p><img src="../images/isolate-nodes-5.png" alt="" class="with-border" /></p>
|
||||
|
||||
<p>Select the <strong>ops-nodes</strong> namespace, and create a <code class="highlighter-rouge">Full Control</code> grant for the
|
||||
<code class="highlighter-rouge">Ops</code> team.</p>
|
||||
|
||||
<p><img src="../images/isolate-nodes-6.png" alt="" class="with-border" /></p>
|
||||
|
||||
<h3 id="link-the-namespace-to-a-node-collection">Link the namespace to a node collection</h3>
|
||||
|
||||
<p>The last step is to link the Kubernetes namespace to the <code class="highlighter-rouge">/Prod</code> collection.</p>
|
||||
|
||||
<ol>
|
||||
<li>Navigate to the <strong>Namespaces</strong> page, and find the <strong>ops-nodes</strong> namespace
|
||||
in the list.</li>
|
||||
<li>
|
||||
<p>Click the <strong>More options</strong> icon and select <strong>Link nodes in collection</strong>.</p>
|
||||
|
||||
<p><img src="../images/isolate-nodes-7.png" alt="" class="with-border" /></p>
|
||||
</li>
|
||||
<li>In the <strong>Choose collection</strong> section, click <strong>View children</strong> on the
|
||||
<strong>Swarm</strong> collection to navigate to the <strong>Prod</strong> collection.</li>
|
||||
<li>On the <strong>Prod</strong> collection, click <strong>Select collection</strong>.</li>
|
||||
<li>
|
||||
<p>Click <strong>Confirm</strong> to link the namespace to the collection.</p>
|
||||
|
||||
<p><img src="../images/isolate-nodes-8.png" alt="" class="with-border" /></p>
|
||||
</li>
|
||||
</ol>
|
||||
|
||||
<h3 id="deploy-a-kubernetes-workload-to-the-node-collection">Deploy a Kubernetes workload to the node collection</h3>
|
||||
|
||||
<ol>
|
||||
<li>Log in as a non-admin who’s on the <code class="highlighter-rouge">Ops</code> team.</li>
|
||||
<li>In the left pane, open the <strong>Kubernetes</strong> section.</li>
|
||||
<li>Confirm that <strong>ops-nodes</strong> is displayed under <strong>Namespaces</strong>.</li>
|
||||
<li>
|
||||
<p>Click <strong>Create</strong>, and in the <strong>Object YAML</strong> editor, paste the following
|
||||
YAML definition for an NGINX server.</p>
|
||||
|
||||
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="na">apiVersion</span><span class="pi">:</span> <span class="s">v1</span>
|
||||
<span class="na">kind</span><span class="pi">:</span> <span class="s">ReplicationController</span>
|
||||
<span class="na">metadata</span><span class="pi">:</span>
|
||||
<span class="na">name</span><span class="pi">:</span> <span class="s">nginx</span>
|
||||
<span class="na">spec</span><span class="pi">:</span>
|
||||
<span class="na">replicas</span><span class="pi">:</span> <span class="s">1</span>
|
||||
<span class="na">selector</span><span class="pi">:</span>
|
||||
<span class="na">app</span><span class="pi">:</span> <span class="s">nginx</span>
|
||||
<span class="na">template</span><span class="pi">:</span>
|
||||
<span class="na">metadata</span><span class="pi">:</span>
|
||||
<span class="na">name</span><span class="pi">:</span> <span class="s">nginx</span>
|
||||
<span class="na">labels</span><span class="pi">:</span>
|
||||
<span class="na">app</span><span class="pi">:</span> <span class="s">nginx</span>
|
||||
<span class="na">spec</span><span class="pi">:</span>
|
||||
<span class="na">containers</span><span class="pi">:</span>
|
||||
<span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">nginx</span>
|
||||
<span class="na">image</span><span class="pi">:</span> <span class="s">nginx</span>
|
||||
<span class="na">ports</span><span class="pi">:</span>
|
||||
<span class="pi">-</span> <span class="na">containerPort</span><span class="pi">:</span> <span class="s">80</span>
|
||||
</code></pre></div> </div>
|
||||
|
||||
<p><img src="../images/isolate-nodes-9.png" alt="" class="with-border" /></p>
|
||||
</li>
|
||||
<li>Click <strong>Create</strong> to deploy the workload.</li>
|
||||
<li>
|
||||
<p>In the left pane, click <strong>Pods</strong> and confirm that the workload is running
|
||||
on pods in the <code class="highlighter-rouge">ops-nodes</code> namespace.</p>
|
||||
|
||||
<p><img src="../images/isolate-nodes-10.png" alt="" class="with-border" /></p>
|
||||
</li>
|
||||
</ol>
|
||||
|
||||
<h2 id="where-to-go-next">Where to go next</h2>
|
||||
|
||||
<ul>
|
||||
<li><a href="isolate-volumes.md">Isolate volumes</a></li>
|
||||
</ul>
|
||||
|
|
@ -0,0 +1,100 @@
|
|||
<p>In this example, two teams are granted access to volumes in two different
|
||||
resource collections. UCP access control prevents the teams from viewing and
|
||||
accessing each other’s volumes, even though they may be located in the same
|
||||
nodes.</p>
|
||||
|
||||
<ol>
|
||||
<li>Create two teams.</li>
|
||||
<li>Create two collections, one for each team.</li>
|
||||
<li>Create grants to manage access to the collections.</li>
|
||||
<li>Team members create volumes that are specific to their team.</li>
|
||||
</ol>
|
||||
|
||||
<p><img src="../images/isolate-volumes-diagram.svg" alt="" class="with-border" /></p>
|
||||
|
||||
<h2 id="create-two-teams">Create two teams</h2>
|
||||
|
||||
<p>Navigate to the <strong>Organizations & Teams</strong> page to create two teams in the
|
||||
“engineering” organization, named “Dev” and “Prod”. Add a user who’s not a UCP administrator to the Dev team, and add another non-admin user to the Prod team. <a href="create-users-and-teams-manually.md">Learn how to create and manage teams</a>.</p>
|
||||
|
||||
<p><img src="../images/isolate-volumes-0.png" alt="" class="with-border" /></p>
|
||||
|
||||
<h2 id="create-resource-collections">Create resource collections</h2>
|
||||
|
||||
<p>In this example, the Dev and Prod teams use two different volumes, which they
|
||||
access through two corresponding resource collections. The collections are
|
||||
placed under the <code class="highlighter-rouge">/Shared</code> collection.</p>
|
||||
|
||||
<ol>
|
||||
<li>In the left pane, click <strong>Collections</strong> to show all of the resource
|
||||
collections in the swarm.</li>
|
||||
<li>Find the <strong>/Shared</strong> collection and click <strong>View children</strong>.</li>
|
||||
<li>Click <strong>Create collection</strong> and name the new collection “dev-volumes”.</li>
|
||||
<li>Click <strong>Create</strong> to create the collection.</li>
|
||||
<li>Click <strong>Create collection</strong> again, name the new collection “prod-volumes”,
|
||||
and click <strong>Create</strong>.</li>
|
||||
</ol>
|
||||
|
||||
<p><img src="../images/isolate-volumes-0a.png" alt="" class="with-border" /></p>
|
||||
|
||||
<h2 id="create-grants-for-controlling-access-to-the-new-volumes">Create grants for controlling access to the new volumes</h2>
|
||||
|
||||
<p>In this example, the Dev team gets access to its volumes from a grant that
|
||||
associates the team with the <code class="highlighter-rouge">/Shared/dev-volumes</code> collection, and the Prod
|
||||
team gets access to its volumes from another grant that associates the team
|
||||
with the <code class="highlighter-rouge">/Shared/prod-volumes</code> collection.</p>
|
||||
|
||||
<ol>
|
||||
<li>Navigate to the <strong>Grants</strong> page and click <strong>Create Grant</strong>.</li>
|
||||
<li>In the left pane, click <strong>Collections</strong>, and in the <strong>Swarm</strong> collection,
|
||||
click <strong>View Children</strong>.</li>
|
||||
<li>In the <strong>Shared</strong> collection, click <strong>View Children</strong>.</li>
|
||||
<li>In the list, find <strong>/Shared/dev-volumes</strong> and click <strong>Select Collection</strong>.</li>
|
||||
<li>Click <strong>Roles</strong>, and in the dropdown, select <strong>Restricted Control</strong>.</li>
|
||||
<li>Click <strong>Subjects</strong>, and under <strong>Select subject type</strong>, click <strong>Organizations</strong>.
|
||||
In the dropdown, pick the <strong>engineering</strong> organization, and in the
|
||||
<strong>Team</strong> dropdown, select <strong>Dev</strong>.</li>
|
||||
<li>Click <strong>Create</strong> to grant permissions to the Dev team.</li>
|
||||
<li>Click <strong>Create Grant</strong> and repeat the previous steps for the <strong>/Shared/prod-volumes</strong>
|
||||
collection and the Prod team.</li>
|
||||
</ol>
|
||||
|
||||
<p><img src="../images/isolate-volumes-1.png" alt="" class="with-border" /></p>
|
||||
|
||||
<p>With the collections and grants in place, users can sign in and create volumes
|
||||
in their assigned collections.</p>
|
||||
|
||||
<h2 id="create-a-volume-as-a-team-member">Create a volume as a team member</h2>
|
||||
|
||||
<p>Team members have permission to create volumes in their assigned collection.</p>
|
||||
|
||||
<ol>
|
||||
<li>Log in as one of the users on the Dev team.</li>
|
||||
<li>Navigate to the <strong>Volumes</strong> page to view all of the volumes in the
|
||||
swarm that the user can access.</li>
|
||||
<li>Click <strong>Create volume</strong> and name the new volume “dev-data”.</li>
|
||||
<li>In the left pane, click <strong>Collections</strong>. The default collection
|
||||
appears. At the top of the page, click <strong>Shared</strong>, find the <strong>dev-volumes</strong> collection in the list, and click <strong>Select Collection</strong>.</li>
|
||||
<li>Click <strong>Create</strong> to add the “dev-data” volume to the collection.</li>
|
||||
<li>Log in as one of the users on the Prod team, and repeat the
|
||||
previous steps to create a “prod-data” volume assigned to the <code class="highlighter-rouge">/Shared/prod-volumes</code> collection.</li>
|
||||
</ol>
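<p>From a UCP client bundle, a sketch of the same operation on the CLI (using the names from this example):</p>

<pre><code class="language-bash"># Create a volume in the Dev team's collection by setting
# its UCP access label at creation time.
docker volume create \
  --label com.docker.ucp.access.label=/Shared/dev-volumes \
  dev-data
</code></pre>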
|
||||
|
||||
<p><img src="../images/isolate-volumes-2.png" alt="" class="with-border" /></p>
|
||||
|
||||
<p>Now you can see role-based access control in action for volumes. The user on
|
||||
the Prod team can’t see the Dev team’s volumes, and if you log in again as a
|
||||
user on the Dev team, you won’t see the Prod team’s volumes.</p>
|
||||
|
||||
<p><img src="../images/isolate-volumes-3.png" alt="" class="with-border" /></p>
|
||||
|
||||
<p>Sign in with a UCP administrator account, and you see all of the volumes
|
||||
created by the Dev and Prod users.</p>
|
||||
|
||||
<p><img src="../images/isolate-volumes-4.png" alt="" class="with-border" /></p>
|
||||
|
||||
<h2 id="where-to-go-next">Where to go next</h2>
|
||||
|
||||
<ul>
|
||||
<li><a href="isolate-nodes.md">Isolate Swarm nodes in Docker Advanced</a></li>
|
||||
</ul>
|
||||
|
|
@ -0,0 +1,122 @@
|
|||
<p>With Docker Enterprise Edition, you can create roles and grants
|
||||
that implement the permissions that are defined in your Kubernetes apps.
|
||||
Learn about <a href="https://v1-8.docs.kubernetes.io/docs/admin/authorization/rbac/">RBAC authorization in Kubernetes</a>.</p>
|
||||
|
||||
<p>Docker EE has its own implementation of role-based access control, so you
|
||||
can’t use Kubernetes RBAC objects directly. Instead, you create UCP roles
|
||||
and grants that correspond with the role objects and bindings in your
|
||||
Kubernetes app.</p>
|
||||
|
||||
<ul>
|
||||
<li>Kubernetes <code class="highlighter-rouge">Role</code> and <code class="highlighter-rouge">ClusterRole</code> objects become UCP roles.</li>
|
||||
<li>Kubernetes <code class="highlighter-rouge">RoleBinding</code> and <code class="highlighter-rouge">ClusterRoleBinding</code> objects become UCP grants.</li>
|
||||
</ul>
|
||||
|
||||
<p>Learn about <a href="grant-permissions.md">UCP roles and grants</a>.</p>
|
||||
|
||||
<blockquote class="important">
|
||||
<p>Kubernetes yaml in UCP</p>
|
||||
|
||||
<p>Docker EE has its own RBAC system that’s distinct from the Kubernetes
|
||||
system, so you can’t create any objects that are returned by the
|
||||
<code class="highlighter-rouge">/apis/rbac.authorization.k8s.io</code> endpoints. If the yaml for your Kubernetes
|
||||
app contains definitions for <code class="highlighter-rouge">Role</code>, <code class="highlighter-rouge">ClusterRole</code>, <code class="highlighter-rouge">RoleBinding</code> or
|
||||
<code class="highlighter-rouge">ClusterRoleBinding</code> objects, UCP returns an error.</p>
|
||||
</blockquote>
|
||||
|
||||
<h2 id="migrate-a-kubernetes-role-to-a-custom-ucp-role">Migrate a Kubernetes Role to a custom UCP role</h2>
|
||||
|
||||
<p>If you have <code class="highlighter-rouge">Role</code> and <code class="highlighter-rouge">ClusterRole</code> objects defined in the yaml for your
|
||||
Kubernetes app, you can realize the same authorization model by creating
|
||||
custom roles by using the UCP web UI.</p>
|
||||
|
||||
<p>The following Kubernetes yaml defines a <code class="highlighter-rouge">pod-reader</code> role, which gives users
|
||||
read-only access to the <code class="highlighter-rouge">pods</code> resource through the <code class="highlighter-rouge">get</code>, <code class="highlighter-rouge">watch</code>, and <code class="highlighter-rouge">list</code> API operations.</p>
|
||||
|
||||
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="na">kind</span><span class="pi">:</span> <span class="s">Role</span>
|
||||
<span class="na">apiVersion</span><span class="pi">:</span> <span class="s">rbac.authorization.k8s.io/v1</span>
|
||||
<span class="na">metadata</span><span class="pi">:</span>
|
||||
<span class="na">namespace</span><span class="pi">:</span> <span class="s">default</span>
|
||||
<span class="na">name</span><span class="pi">:</span> <span class="s">pod-reader</span>
|
||||
<span class="na">rules</span><span class="pi">:</span>
|
||||
<span class="pi">-</span> <span class="na">apiGroups</span><span class="pi">:</span> <span class="pi">[</span><span class="s2">"</span><span class="s">"</span><span class="pi">]</span>
|
||||
<span class="na">resources</span><span class="pi">:</span> <span class="pi">[</span><span class="s2">"</span><span class="s">pods"</span><span class="pi">]</span>
|
||||
<span class="na">verbs</span><span class="pi">:</span> <span class="pi">[</span><span class="s2">"</span><span class="s">get"</span><span class="pi">,</span> <span class="s2">"</span><span class="s">watch"</span><span class="pi">,</span> <span class="s2">"</span><span class="s">list"</span><span class="pi">]</span>
|
||||
</code></pre></div></div>
|
||||
|
||||
<p>Create a corresponding custom role by using the <strong>Create Role</strong> page in the
|
||||
UCP web UI.</p>
|
||||
|
||||
<ol>
|
||||
<li>Log in to the UCP web UI with an administrator account.</li>
|
||||
<li>Click <strong>Roles</strong> under <strong>User Management</strong>.</li>
|
||||
<li>Click <strong>Create Role</strong>.</li>
|
||||
<li>In the <strong>Role Details</strong> section, name the role “pod-reader”.</li>
|
||||
<li>In the left pane, click <strong>Operations</strong>.</li>
|
||||
<li>Scroll to the <strong>Kubernetes pod operations</strong> section and expand the
|
||||
<strong>All Kubernetes Pod operations</strong> dropdown.</li>
|
||||
<li>Select the <strong>Pod Get</strong>, <strong>Pod List</strong>, and <strong>Pod Watch</strong> operations.
|
||||
<img src="../images/migrate-kubernetes-roles-1.png" alt="" class="with-border" /></li>
|
||||
<li>Click <strong>Create</strong>.</li>
|
||||
</ol>
|
||||
|
||||
<p>The <code class="highlighter-rouge">pod-reader</code> role is ready to use in grants that control access to
|
||||
cluster resources.</p>
|
||||
|
||||
<h2 id="migrate-a-kubernetes-rolebinding-to-a-ucp-grant">Migrate a Kubernetes RoleBinding to a UCP grant</h2>
|
||||
|
||||
<p>If your Kubernetes app defines <code class="highlighter-rouge">RoleBinding</code> or <code class="highlighter-rouge">ClusterRoleBinding</code>
|
||||
objects for specific users, create corresponding grants by using the UCP web UI.</p>
|
||||
|
||||
<p>The following Kubernetes yaml defines a <code class="highlighter-rouge">RoleBinding</code> that grants user “jane”
|
||||
read-only access to pods in the <code class="highlighter-rouge">default</code> namespace.</p>
|
||||
|
||||
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="na">kind</span><span class="pi">:</span> <span class="s">RoleBinding</span>
|
||||
<span class="na">apiVersion</span><span class="pi">:</span> <span class="s">rbac.authorization.k8s.io/v1</span>
|
||||
<span class="na">metadata</span><span class="pi">:</span>
|
||||
<span class="na">name</span><span class="pi">:</span> <span class="s">read-pods</span>
|
||||
<span class="na">namespace</span><span class="pi">:</span> <span class="s">default</span>
|
||||
<span class="na">subjects</span><span class="pi">:</span>
|
||||
<span class="pi">-</span> <span class="na">kind</span><span class="pi">:</span> <span class="s">User</span>
|
||||
<span class="na">name</span><span class="pi">:</span> <span class="s">jane</span>
|
||||
<span class="na">apiGroup</span><span class="pi">:</span> <span class="s">rbac.authorization.k8s.io</span>
|
||||
<span class="na">roleRef</span><span class="pi">:</span>
|
||||
<span class="na">kind</span><span class="pi">:</span> <span class="s">Role</span>
|
||||
<span class="na">name</span><span class="pi">:</span> <span class="s">pod-reader</span>
|
||||
<span class="na">apiGroup</span><span class="pi">:</span> <span class="s">rbac.authorization.k8s.io</span>
|
||||
</code></pre></div></div>
|
||||
|
||||
<p>Create a corresponding grant by using the <strong>Create Grant</strong> page in the
|
||||
UCP web UI.</p>
|
||||
|
||||
<ol>
|
||||
<li>Create a non-admin user named “jane”. <a href="create-users-and-teams-manually.md">Learn to create users and teams</a>.</li>
|
||||
<li>Click <strong>Grants</strong> under <strong>User Management</strong>.</li>
|
||||
<li>Click <strong>Create Grant</strong>.</li>
|
||||
<li>In the <strong>Type</strong> section, click <strong>Namespaces</strong> and ensure that <strong>default</strong> is selected.</li>
|
||||
<li>In the left pane, click <strong>Roles</strong>, and in the <strong>Role</strong> dropdown, select <strong>pod-reader</strong>.</li>
|
||||
<li>In the left pane, click <strong>Subjects</strong>, and click <strong>All Users</strong>.</li>
|
||||
<li>In the <strong>User</strong> dropdown, select <strong>jane</strong>.</li>
|
||||
<li>Click <strong>Create</strong>.</li>
|
||||
</ol>
|
||||
|
||||
<p><img src="../images/migrate-kubernetes-roles-2.png" alt="" class="with-border" /></p>
|
||||
|
||||
<p>User “jane” now has access to inspect pods in the <code class="highlighter-rouge">default</code> namespace.</p>
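<p>As a quick sketch of a sanity check, and assuming UCP’s Kubernetes API answers access reviews for the logged-in user, “jane” could verify the grant from a client bundle:</p>

<pre><code class="language-bash"># Expected to print "yes" for the granted read operations
kubectl auth can-i list pods --namespace default
</code></pre>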
|
||||
|
||||
<h2 id="kubernetes-limitations">Kubernetes limitations</h2>
|
||||
|
||||
<p>There are a few limitations that you should be aware of when creating
|
||||
Kubernetes workloads:</p>
|
||||
|
||||
<ul>
|
||||
<li>Docker EE has its own RBAC system, so it’s not possible to create
|
||||
<code class="highlighter-rouge">ClusterRole</code> objects, <code class="highlighter-rouge">ClusterRoleBinding</code> objects, or any other object that is
|
||||
created by using the <code class="highlighter-rouge">/apis/rbac.authorization.k8s.io</code> endpoints.</li>
|
||||
<li>To make sure your cluster is secure, only users and service accounts that have been
|
||||
granted “Full Control” of all Kubernetes namespaces can deploy pods with privileged
|
||||
options. This includes: <code class="highlighter-rouge">PodSpec.hostIPC</code>, <code class="highlighter-rouge">PodSpec.hostNetwork</code>,
|
||||
<code class="highlighter-rouge">PodSpec.hostPID</code>, <code class="highlighter-rouge">SecurityContext.allowPrivilegeEscalation</code>,
|
||||
<code class="highlighter-rouge">SecurityContext.capabilities</code>, <code class="highlighter-rouge">SecurityContext.privileged</code>, and
|
||||
<code class="highlighter-rouge">Volume.hostPath</code>.</li>
|
||||
</ul>
|
||||
|
|
@ -0,0 +1,30 @@
|
|||
<p>By default, only admin users can pull images into a cluster managed by UCP.</p>
|
||||
|
||||
<p>Images are a shared resource; as such, they are always in the <code class="highlighter-rouge">swarm</code> collection.
|
||||
To allow users to pull images, grant them the <code class="highlighter-rouge">image load</code>
|
||||
permission for the <code class="highlighter-rouge">swarm</code> collection.</p>
|
||||
|
||||
<p>As an admin user, go to the <strong>UCP web UI</strong>, navigate to the <strong>Roles</strong> page,
|
||||
and create a <strong>new role</strong> named <code class="highlighter-rouge">Pull images</code>.</p>
|
||||
|
||||
<p><img src="../images/rbac-pull-images-1.png" alt="" class="with-border" /></p>
|
||||
|
||||
<p>Then go to the <strong>Grants</strong> page, and create a new grant with:</p>
|
||||
|
||||
<ul>
|
||||
<li>Subject: the user you want to be able to pull images.</li>
|
||||
<li>Roles: the “Pull images” role you created.</li>
|
||||
<li>Resource set: the <code class="highlighter-rouge">swarm</code> collection.</li>
|
||||
</ul>
|
||||
|
||||
<p><img src="../images/rbac-pull-images-2.png" alt="" class="with-border" /></p>
|
||||
|
||||
<p>Once you click <strong>Create</strong>, the user can pull images from the UCP web UI
|
||||
or the CLI.</p>
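<p>For example, with a UCP client bundle loaded, the user can now run an ordinary pull (the image name is illustrative):</p>

<pre><code class="language-bash"># Pull an image into the cluster through UCP
docker pull nginx:latest
</code></pre>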
|
||||
|
||||
<h2 id="where-to-go-next">Where to go next</h2>
|
||||
|
||||
<ul>
|
||||
<li><a href="ee-standard.md">Docker EE Standard use case</a></li>
|
||||
<li><a href="ee-advanced.md">Docker EE Advanced use case</a></li>
|
||||
</ul>
|
||||
|
|
@ -0,0 +1,24 @@
|
|||
<p>Docker EE administrators can reset user passwords managed in UCP:</p>
|
||||
|
||||
<ol>
|
||||
<li>Log in to UCP with administrator credentials.</li>
|
||||
<li>Click <strong>Users</strong> under <strong>User Management</strong>.</li>
|
||||
<li>Select the user whose password you want to change.</li>
|
||||
<li>Select <strong>Configure</strong> and select <strong>Security</strong>.</li>
|
||||
<li>Enter the new password, confirm, and click <strong>Update Password</strong>.</li>
|
||||
</ol>
|
||||
|
||||
<p>Passwords for users managed by an LDAP service must be changed on the LDAP server.</p>
|
||||
|
||||
<p><img src="../images/recover-a-user-password-1.png" alt="" class="with-border" /></p>
|
||||
|
||||
<h2 id="change-administrator-passwords">Change administrator passwords</h2>
|
||||
|
||||
<p>Administrators who need a password change can ask another administrator for help
|
||||
or use <strong>ssh</strong> to log in to a manager node managed by Docker EE and run:</p>
|
||||
|
||||
<pre><code class="language-none">
|
||||
docker run --net=host -v ucp-auth-api-certs:/tls -it \
  "$(docker inspect --format '{{ .Spec.TaskTemplate.ContainerSpec.Image }}' ucp-auth-api)" \
  "$(docker inspect --format '{{ index .Spec.TaskTemplate.ContainerSpec.Args 0 }}' ucp-auth-api)" \
  passwd -i
|
||||
|
||||
</code></pre>
|
||||
|
||||
|
Before Width: | Height: | Size: 56 KiB After Width: | Height: | Size: 41 KiB |
|
Before Width: | Height: | Size: 63 KiB After Width: | Height: | Size: 47 KiB |
|
Before Width: | Height: | Size: 62 KiB After Width: | Height: | Size: 47 KiB |
|
Before Width: | Height: | Size: 162 KiB After Width: | Height: | Size: 120 KiB |
|
Before Width: | Height: | Size: 140 KiB After Width: | Height: | Size: 102 KiB |
|
Before Width: | Height: | Size: 38 KiB After Width: | Height: | Size: 30 KiB |
|
|
@ -0,0 +1,74 @@
|
|||
---
|
||||
title: Kubernetes Network Encryption
|
||||
description: Learn how to configure network encryption in Kubernetes
|
||||
keywords: ucp, cli, administration, kubectl, Kubernetes, security, network, ipsec, ipip, esp, calico
|
||||
---
|
||||
|
||||
Docker Enterprise Edition provides data-plane IPSec network encryption to protect application
|
||||
traffic in a Kubernetes cluster. This secures application traffic within a cluster when running in untrusted
|
||||
infrastructure or environments. It is an optional feature of UCP that is enabled by deploying the Secure Overlay
|
||||
components on Kubernetes when using the default Calico driver for networking configured for IPIP tunneling
|
||||
(the default configuration).
|
||||
|
||||
Kubernetes network encryption is enabled by two components in UCP: the SecureOverlay Agent and SecureOverlay
|
||||
Master. The agent is deployed as a per-node service that manages the encryption state of the data plane. The
|
||||
agent controls the IPSec encryption on Calico’s IPIP tunnel traffic between different nodes in the Kubernetes
|
||||
cluster. The master, deployed on a UCP manager node, acts as the key management
|
||||
process that configures and periodically rotates the encryption keys.
|
||||
|
||||
Kubernetes network encryption uses AES Galois Counter Mode (AES-GCM) with 128-bit keys by default. Encryption
|
||||
is not enabled by default and requires the SecureOverlay Agent and Master to be deployed on UCP to begin
|
||||
encrypting traffic within the cluster. It can be enabled or disabled at any time during the cluster lifecycle.
|
||||
However, note that enabling or disabling encryption can cause temporary traffic outages between pods during
|
||||
the first few minutes after the change. When enabled, Kubernetes pod traffic between hosts is encrypted at the IPIP tunnel
|
||||
interface in the UCP host.
|
||||
|
||||

|
||||
|
||||
## Requirements
|
||||
|
||||
Kubernetes Network Encryption is supported for the following platforms:
|
||||
* Docker Enterprise 2.1+ (UCP 3.1+)
|
||||
* Kubernetes 1.11+
|
||||
* On-premise, AWS, GCE
|
||||
* Azure is not supported for network encryption, as encryption uses Calico’s IPIP tunnel
|
||||
* Only supported when using UCP’s default Calico CNI plugin
|
||||
* Supported on all Docker Enterprise supported Linux OSes
|
||||
|
||||
## Configuring MTUs
|
||||
|
||||
Before deploying the SecureOverlay components, make sure that Calico is configured so that the IPIP tunnel
|
||||
MTU leaves sufficient headroom for the encryption overhead. Encryption adds 26 bytes of overhead, but every IPSec
|
||||
packet size must be a multiple of 4 bytes. IPIP tunnels require 20 bytes of encapsulation overhead. So the IPIP
|
||||
tunnel interface MTU must be no more than `EXTMTU - 46 - ((EXTMTU - 46) mod 4)`, where `EXTMTU` is the minimum MTU
|
||||
of the external interfaces. An IPIP MTU of 1452 should generally be safe for most deployments.
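For example, with a standard 1500-byte external MTU, the formula works out as follows (a quick sketch you can evaluate in a shell):

```
$ EXTMTU=1500
$ echo $(( EXTMTU - 46 - ((EXTMTU - 46) % 4) ))
1452
```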
|
||||
|
||||
Changing UCP's MTU requires updating the UCP configuration. This process is described in the [UCP configuration file](/ee/ucp/admin/configure/ucp-configuration-file) documentation.
|
||||
|
||||
Update the following values in the configuration file to the new MTU:
|
||||
|
||||
[cluster_config]
|
||||
...
|
||||
calico_mtu = "1452"
|
||||
ipip_mtu = "1452"
|
||||
...
|
||||
|
||||
## Configuring SecureOverlay
|
||||
|
||||
Once the cluster nodes’ MTUs are properly configured, deploy the SecureOverlay components to UCP using the Secure Overlay YAML file.
|
||||
|
||||
[Download the Secure Overlay YAML file here.](ucp-secureoverlay.yml)
|
||||
|
||||
After downloading the YAML file, run the following command from any machine with a properly configured kubectl environment and credentials from a UCP client bundle:
|
||||
|
||||
```
|
||||
$ kubectl apply -f ucp-secureoverlay.yml
|
||||
```
|
||||
|
||||
Run this command at cluster installation time before starting any workloads.
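To confirm that the components are running, you can check for the agent DaemonSet and the manager Deployment in the `kube-system` namespace (the object names are taken from the YAML file below):

```
$ kubectl -n kube-system get daemonset ucp-secureoverlay-agent
$ kubectl -n kube-system get deployment ucp-secureoverlay-mgr
```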
|
||||
|
||||
To remove the encryption from the system, issue the command:
|
||||
|
||||
```
|
||||
$ kubectl delete -f ucp-secureoverlay.yml
|
||||
```

@ -0,0 +1,165 @@
######################
# Cluster role for key management jobs
######################
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: ucp-secureoverlay-mgr
rules:
- apiGroups: [""]
  resources:
  - secrets
  verbs:
  - get
  - update
---
######################
# Cluster role binding for key management jobs
######################
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: ucp-secureoverlay-mgr
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ucp-secureoverlay-mgr
subjects:
- kind: ServiceAccount
  name: ucp-secureoverlay-mgr
  namespace: kube-system
---
######################
# Service account for key management jobs
######################
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ucp-secureoverlay-mgr
  namespace: kube-system
---
######################
# Cluster role for secure overlay per-node agent
######################
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: ucp-secureoverlay-agent
rules:
- apiGroups: [""]
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
---
######################
# Cluster role binding for secure overlay per-node agent
######################
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: ucp-secureoverlay-agent
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ucp-secureoverlay-agent
subjects:
- kind: ServiceAccount
  name: ucp-secureoverlay-agent
  namespace: kube-system
---
######################
# Service account for secure overlay per-node agent
######################
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ucp-secureoverlay-agent
  namespace: kube-system
---
######################
# K8s secret of current key configuration
######################
apiVersion: v1
kind: Secret
metadata:
  name: ucp-secureoverlay
  namespace: kube-system
type: Opaque
data:
  keys: ""
---
######################
# DaemonSet for secure overlay per-node agent
######################
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ucp-secureoverlay-agent
  namespace: kube-system
  labels:
    k8s-app: ucp-secureoverlay-agent
spec:
  selector:
    matchLabels:
      k8s-app: ucp-secureoverlay-agent
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        k8s-app: ucp-secureoverlay-agent
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      hostNetwork: true
      priorityClassName: system-node-critical
      terminationGracePeriodSeconds: 10
      serviceAccountName: ucp-secureoverlay-agent
      containers:
      - name: ucp-secureoverlay-agent
        image: ucp-secureoverlay-agent:3.1.0
        securityContext:
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: MY_NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        volumeMounts:
        - name: ucp-secureoverlay
          mountPath: /etc/secureoverlay/
          readOnly: true
      volumes:
      - name: ucp-secureoverlay
        secret:
          secretName: ucp-secureoverlay
---
######################
# Deployment for manager of the whole cluster (to rotate keys)
######################
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ucp-secureoverlay-mgr
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: ucp-secureoverlay-mgr
  replicas: 1
  template:
    metadata:
      name: ucp-secureoverlay-mgr
      namespace: kube-system
      labels:
        app: ucp-secureoverlay-mgr
    spec:
      serviceAccountName: ucp-secureoverlay-mgr
      restartPolicy: Always
      containers:
      - name: ucp-secureoverlay-mgr
        image: ucp-secureoverlay-mgr:3.1.0

@ -192,16 +192,32 @@ $ docker network create \
  my-network
```

##### Using custom default address pools

To customize subnet allocation for your Swarm networks, you can [optionally configure them](./swarm-mode.md) during `swarm init`.

For example, the following command configures custom pools when initializing Swarm:

```bash
$ docker swarm init --default-addr-pool 10.20.0.0/16 --default-addr-pool-mask-length 26
```

Whenever a user creates a network but does not use the `--subnet` command line option, the subnet for this network is allocated sequentially from the next available subnet in the pool. If that subnet is already allocated, it is skipped and not used for the Swarm network.

Multiple pools can be configured if discontiguous address space is required. However, allocation from specific pools is not supported. Network subnets are allocated sequentially from the IP pool space, and subnets are reused as they are deallocated from networks that are deleted.

The default mask length can be configured and is the same for all networks. It is set to `/24` by default. To change the default subnet mask length, use the `--default-addr-pool-mask-length` command line option.

**NOTE:** Default address pools can only be configured on `swarm init` and cannot be altered after cluster creation.
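
As an illustrative sketch, you can confirm sequential allocation from the pool configured above by creating two overlay networks (the names here are hypothetical) and inspecting their subnets. Exact values depend on what has already been allocated from the pool, such as the `ingress` network:

```bash
# Run on a manager node. Each network gets the next free /26 from 10.20.0.0/16.
$ docker network create -d overlay net1
$ docker network create -d overlay net2
$ docker network inspect --format '{{(index .IPAM.Config 0).Subnet}}' net1
10.20.0.0/26
$ docker network inspect --format '{{(index .IPAM.Config 0).Subnet}}' net2
10.20.0.64/26
```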

##### Overlay network size limitations

When you create networks using the default VIP-based endpoint mode, you should create overlay networks with `/24` blocks (the default), which limits you to 256 IP addresses. This recommendation addresses [limitations with swarm mode](https://github.com/moby/moby/issues/30820). If you need more than 256 IP addresses, do not increase the IP block size. You can either use `dnsrr` endpoint mode with an external load balancer, or use multiple smaller overlay networks. See [Configure service discovery](#configure-service-discovery) for more information about different endpoint modes.
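
For example, a service that needs more endpoints than a `/24` allows could run in `dnsrr` endpoint mode behind an external load balancer. A minimal sketch, assuming an existing overlay network (the service and network names are hypothetical):

```bash
# dnsrr mode skips the VIP; DNS returns the IPs of all replicas directly.
$ docker service create --name my-web --endpoint-mode dnsrr \
    --replicas 3 --network my-overlay nginx
```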

#### Configure encryption of application data

@ -65,30 +65,36 @@ To add a manager to this swarm, run 'docker swarm join-token manager' and follow
```

### Configuring default address pools

By default, Docker Swarm uses the default address pool `10.0.0.0/8` for global scope (overlay) networks. Every network that does not have a subnet specified will have a subnet sequentially allocated from this pool. In some circumstances it may be desirable to use a different default IP address pool for networks.

For example, if the default `10.0.0.0/8` range conflicts with already allocated address space in your network, then it is desirable to ensure that networks use a different range without requiring Swarm users to specify each subnet with the `--subnet` command line option.

To configure custom default address pools, you must define pools at Swarm initialization using the `--default-addr-pool` command line option. The pool is defined in CIDR notation. You must define at least one default address pool and, optionally, a default subnet mask length for the networks allocated from it; the mask length is given as a plain number. For example, to allocate `/27` subnets such as `10.0.0.0/27`, use the value `27`. The `--default-addr-pool` option may occur multiple times, with each occurrence providing additional address ranges from which Docker allocates overlay subnets.

Docker allocates subnet addresses from the address ranges specified by the `--default-addr-pool` command line option. For example, `--default-addr-pool 10.10.0.0/16` indicates that Docker will allocate subnets from that `/16` address range. If `--default-addr-pool-mask-length` were unspecified or set explicitly to `24`, this would result in 256 `/24` networks of the form `10.10.X.0/24`.
The format of the command is:

```
$ docker swarm init --default-addr-pool <IP range in CIDR> [--default-addr-pool <IP range in CIDR> --default-addr-pool-mask-length <mask length>]
```

To create a default IP address pool with a `/16` (class B) range for the `10.20.0.0` network:

```
$ docker swarm init --default-addr-pool 10.20.0.0/16
```

To create a default IP address pool with a `/16` (class B) range for the `10.20.0.0` and `10.30.0.0` networks, and to create a subnet mask of `/26` for each network:

```
$ docker swarm init --default-addr-pool 10.20.0.0/16 --default-addr-pool 10.30.0.0/16 --default-addr-pool-mask-length 26
```

In this example, `docker network create -d overlay net1` produces `10.20.0.0/26` as the allocated subnet for `net1`, and `docker network create -d overlay net2` produces `10.20.0.64/26` as the allocated subnet for `net2`. This continues until all the subnets are exhausted.

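On Docker Engine 18.09 and later, you should be able to confirm the configured pools from a manager node; a sketch, assuming the `DefaultAddrPool` and `SubnetSize` fields of the 18.09 `/info` API:

```bash
# Cluster details are only reported on manager nodes.
$ docker info --format 'Pools: {{.Swarm.Cluster.DefaultAddrPool}} SubnetSize: {{.Swarm.Cluster.SubnetSize}}'
Pools: [10.20.0.0/16 10.30.0.0/16] SubnetSize: 26
```
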
Refer to the following pages for more information: