ee/ucp/authorization/create-teams-with-ldap.html
To enable LDAP in UCP and sync to your LDAP directory:

Click Yes by LDAP Enabled. A list of LDAP settings displays.

If Docker EE is configured to sync users with your organization’s LDAP directory
server, you can enable syncing the new team’s members when creating a new team
or when modifying settings of an existing team.

For more, see: Integrate with an LDAP Directory.
There are two methods for matching group members from an LDAP directory, direct +bind and search bind.
+ +Select Immediately Sync Team Members to run an LDAP sync operation +immediately after saving the configuration for the team. It may take a moment +before the members of the team are fully synced.
This option specifies that team members should be synced directly with members
of a group in your organization’s LDAP directory. The team’s membership will be
synced to match the membership of the group.
+ +This option specifies that team members should be synced using a search query +against your organization’s LDAP directory. The team’s membership will be +synced to match the users in the search results.
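As a sketch of the difference, the two methods map to different fields in the team’s LDAP sync settings. The field names below mirror the UCP team-sync form and the DNs are hypothetical; this is illustrative notation, not a literal configuration file:

```yaml
# Illustrative only: values you would enter in the team's LDAP sync settings.
direct_bind:
  group_dn: "cn=ddc-ops,ou=groups,dc=example,dc=com"   # hypothetical group DN
  group_member_attribute: "member"                     # attribute listing members
search_bind:
  search_base_dn: "ou=people,dc=example,dc=com"        # hypothetical base DN
  search_filter: "(&(objectClass=person)(memberOf=cn=ddc-ops,ou=groups,dc=example,dc=com))"
  search_subtree: true    # search the whole subtree under the base DN
```

Direct bind tracks one group’s membership verbatim; search bind lets you match users with an arbitrary LDAP filter.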
+ +Users, teams, and organizations are referred to as subjects in Docker EE.
+ +Individual users can belong to one or more teams but each team can only be in +one organization. At the fictional startup, Acme Company, all teams in the +organization are necessarily unique but the user, Alex, is on two teams:
+ +acme-datacenter
+├── dba
+│ └── Alex*
+├── dev
+│ └── Bett
+└── ops
+ ├── Alex*
+ └── Chad
+All users are authenticated on the backend. Docker EE provides built-in +authentication and also integrates with LDAP directory services.
+ +To use Docker EE’s built-in authentication, you must create users manually.
> To enable LDAP and authenticate and synchronize UCP users and teams with your
> organization’s LDAP directory, see Integrate with an LDAP Directory.
The general flow of designing an organization with teams in UCP is:
+ +To create an organization in UCP:
+ +To create teams in the organization:
> Note: To sync teams with groups in an LDAP server, see Sync Teams with LDAP.
New users are assigned a default permission level so that they can access the +cluster. To extend a user’s default permissions, add them to a team and create grants. You can optionally grant them Docker EE +administrator permissions.
+ +To manually create users in UCP:
> A Docker EE Admin can grant users permission to change the cluster
> configuration and manage grants, roles, and resource sets.
A role defines a set of API operations permitted against a resource set. +You apply roles to users and teams by creating grants.
+ +You can define custom roles or use the following built-in roles:
| Built-in role | Description |
|---|---|
| None | Users have no access to Swarm or Kubernetes resources. Maps to No Access role in UCP 2.1.x. |
| View Only | Users can view resources but can’t create them. |
| Restricted Control | Users can view and edit resources but can’t run a service or container in a way that affects the node where it’s running. Users cannot mount a node directory, exec into containers, or run containers in privileged mode or with additional kernel capabilities. |
| Scheduler | Users can view nodes (worker and manager) and schedule (not view) workloads on these nodes. By default, all users are granted the Scheduler role against the /Shared collection. (To view workloads, users need permissions such as Container View.) |
| Full Control | Users can view and edit all granted resources. They can create containers without any restriction, but can’t see the containers of other users. |
The Roles page lists all default and custom roles applicable in the +organization.
+ +You can give a role a global name, such as “Remove Images”, which might enable the +Remove and Force Remove operations for images. You can apply a role with +the same name to different resource sets.
+ +
> Some important rules regarding roles:
>
> - Roles are always enabled.
> - Roles can’t be edited. To edit a role, you must delete and recreate it.
> - Roles used within a grant can be deleted only after first deleting the grant.
> - Only administrators can create and delete roles.
This tutorial explains how to deploy an NGINX web server and limit access to one
team with role-based access control (RBAC).
+ +You are the Docker EE system administrator at Acme Company and need to configure +permissions to company resources. The best way to do this is to:
+ +Add the organization, acme-datacenter, and create three teams according to the
+following structure:
acme-datacenter
+├── dba
+│ └── Alex Alutin
+├── dev
+│ └── Bett Bhatia
+└── ops
+ └── Chad Chavez
+Learn to create and configure users and teams.
+ +In this section, we deploy NGINX with Kubernetes. See Swarm stack +for the same exercise with Swarm.
+ +Create a namespace to logically store the NGINX application:
+ +apiVersion: v1
+kind: Namespace
+metadata:
+ name: nginx-namespace
+You can use the built-in roles or define your own. For this exercise, create a +simple role for the ops team:
Name the role Kube Deploy.

Learn to create and configure users and teams.
+ +Grant the ops team (and only the ops team) access to nginx-namespace with the +custom role, Kube Deploy.
+ +acme-datacenter/ops + Kube Deploy + nginx-namespace
+You’ve configured Docker EE. The ops team can now deploy nginx.
Log on to UCP as a member of the ops team.

apiVersion: apps/v1beta2 # Use apps/v1beta1 for versions < 1.8.0
+kind: Deployment
+metadata:
+ name: nginx-deployment
+spec:
+ selector:
+ matchLabels:
+ app: nginx
+ replicas: 2
+ template:
+ metadata:
+ labels:
+ app: nginx
+ spec:
+ containers:
+ - name: nginx
+ image: nginx:latest
+ ports:
+ - containerPort: 80
- dba (alex) can’t see nginx-namespace.
- dev (bett) can’t see nginx-namespace.

In this section, we deploy nginx as a Swarm service. See Kubernetes Deployment
for the same exercise with Kubernetes.
Create a collection for NGINX resources, nested under the /Shared collection:
/
+├── System
+└── Shared
+ └── nginx-collection
> Tip: To drill into a collection, click View Children.
Learn to group and isolate cluster resources.
+ +You can use the built-in roles or define your own. For this exercise, create a +simple role for the ops team:
Name the role Swarm Deploy.

Learn to create and configure users and teams.
+ +Grant the ops team (and only the ops team) access to nginx-collection with
+the built-in role, Swarm Deploy.
acme-datacenter/ops + Swarm Deploy + /Shared/nginx-collection
+Learn to grant role-access to cluster resources.
+ +You’ve configured Docker EE. The ops team can now deploy an nginx Swarm
+service.
Log on to UCP as a member of the ops team. Create a service named
nginx-service, and in the breadcrumbs select /Shared, then
nginx-collection.

- dba (alex) cannot see nginx-collection.
- dev (bett) cannot see nginx-collection.

Go through the Docker Enterprise Standard tutorial,
before continuing here with Docker Enterprise Advanced.
+ +In the first tutorial, the fictional company, OrcaBank, designed an architecture +with role-based access control (RBAC) to meet their organization’s security +needs. They assigned multiple grants to fine-tune access to resources across +collection boundaries on a single platform.
+ +In this tutorial, OrcaBank implements new and more stringent security +requirements for production applications:
First, OrcaBank adds a staging zone to their deployment model. They will no longer
move developed applications directly into production. Instead, they will deploy
apps from their dev cluster to staging for testing, and then to production.
+ +Second, production applications are no longer permitted to share any physical +infrastructure with non-production infrastructure. OrcaBank segments the +scheduling and access of applications with Node Access Control.
> Node Access Control is a feature of Docker EE
> Advanced and provides secure multi-tenancy with node-based isolation. Nodes
> can be placed in different collections so that resources can be scheduled and
> isolated on disparate physical or virtual hardware resources.
OrcaBank still has three application teams, payments, mobile, and db with
+varying levels of segmentation between them.
Their RBAC redesign is going to organize their UCP cluster into two top-level +collections, staging and production, which are completely separate security +zones on separate physical infrastructure.
OrcaBank’s four teams now have different needs in production and staging:

- security should have view-only access to all applications in production (but not staging).
- db should have full access to all database applications and resources in production (but not staging). See DB Team.
- mobile should have full access to their Mobile applications in both production and staging and limited access to shared db services. See Mobile Team.
- payments should have full access to their Payments applications in both production and staging and limited access to shared db services.
+Full Control role.
View Only (default role) allows users to see but not edit all cluster
+resources.Full Control (default role) allows users complete control of all collections
+granted to them. They can also create containers without restriction but
+cannot see the containers of other users.View & Use Networks + Secrets (custom role) enables users to view/connect
+to networks and view/use secrets used by db containers, but prevents them
+from seeing or impacting the db applications themselves.
In the previous tutorial, OrcaBank created separate collections for each
+application team and nested them all under /Shared.
To meet their new security requirements for production, OrcaBank is redesigning +collections in two ways:
+ +The collection architecture now has the following tree representation:
+ +/
+├── System
+├── Shared
+├── prod
+│ ├── mobile
+│ ├── payments
+│ └── db
+│ ├── mobile
+│ └── payments
+|
+└── staging
+ ├── mobile
+ └── payments
+OrcaBank must now diversify their grants further to ensure the proper division +of access.
The payments and mobile application teams will have three grants each: one
for deploying to production, one for deploying to staging, and a third for
access to shared db networks and secrets.
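Using the same subject + role + resource set notation shown earlier in this guide, and assuming the organization is named orcabank (a hypothetical name), the payments team’s three grants might look like this sketch, with the collection paths taken from the tree above:

```
orcabank/payments + Full Control + /prod/payments
orcabank/payments + Full Control + /staging/payments
orcabank/payments + View & Use Networks + Secrets + /prod/db/payments
```

The mobile team’s grants would be analogous, targeting the mobile collections.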

The resulting access architecture, designed with Docker EE Advanced, provides +physical segmentation between production and staging using node access control.
+ +Applications are scheduled only on UCP worker nodes in the dedicated application
+collection. And applications use shared resources across collection boundaries
+to access the databases in the /prod/db collection.

The OrcaBank db team is responsible for deploying and managing the full
+lifecycle of the databases that are in production. They have the full set of
+operations against all database resources.

The mobile team is responsible for deploying their full application stack in
+staging. In production they deploy their own applications but use the databases
+that are provided by the db team.

Collections and grants are strong tools that can be used to control +access and visibility to resources in UCP.
+ +This tutorial describes a fictitious company named OrcaBank that needs to +configure an architecture in UCP with role-based access control (RBAC) for +their application engineering group.
+ +OrcaBank reorganized their application teams by product with each team providing +shared services as necessary. Developers at OrcaBank do their own DevOps and +deploy and manage the lifecycle of their applications.
+ +OrcaBank has four teams with the following resource needs:
+ +security should have view-only access to all applications in the cluster.db should have full access to all database applications and resources. See
+DB Team.mobile should have full access to their mobile applications and limited
+access to shared db services. See Mobile Team.payments should have full access to their payments applications and limited
+access to shared db services.To assign the proper access, OrcaBank is employing a combination of default +and custom roles:
+ +View Only (default role) allows users to see all resources (but not edit or use).Ops (custom role) allows users to perform all operations against configs,
+containers, images, networks, nodes, secrets, services, and volumes.View & Use Networks + Secrets (custom role) enables users to view/connect to
+networks and view/use secrets used by db containers, but prevents them from
+seeing or impacting the db applications themselves.
OrcaBank is also creating collections of resources to mirror their team +structure.
+ +Currently, all OrcaBank applications share the same physical resources, so all
+nodes and applications are being configured in collections that nest under the
+built-in collection, /Shared.
Other collections are also being created to enable shared db applications.
++ +Note: For increased security with node-based isolation, use Docker +Enterprise Advanced.
+
- /Shared/mobile hosts all Mobile applications and resources.
- /Shared/payments hosts all Payments applications and resources.
- /Shared/db is a top-level collection for all db resources.
- /Shared/db/payments is a collection of db resources for Payments applications.
- /Shared/db/mobile is a collection of db resources for Mobile applications.

The collection architecture has the following tree representation:
+ +/
+├── System
+└── Shared
+ ├── mobile
+ ├── payments
+ └── db
+ ├── mobile
+ └── payments
+OrcaBank’s Grant composition ensures that their collection
+architecture gives the db team access to all db resources and restricts
+app teams to shared db resources.
OrcaBank has standardized on LDAP for centralized authentication to help their +identity team scale across all the platforms they manage.
+ +To implement LDAP authentication in UCP, OrcaBank is using UCP’s native LDAP/AD +integration to map LDAP groups directly to UCP teams. Users can be added to or +removed from UCP teams via LDAP which can be managed centrally by OrcaBank’s +identity team.
+ +The following grant composition shows how LDAP groups are mapped to UCP teams.
+ +OrcaBank is taking advantage of the flexibility in UCP’s grant model by applying
+two grants to each application team. One grant allows each team to fully
+manage the apps in their own collection, and the second grant gives them the
+(limited) access they need to networks and secrets within the db collection.

OrcaBank’s resulting access architecture shows applications connecting across +collection boundaries. By assigning multiple grants per team, the Mobile and +Payments applications teams can connect to dedicated Database resources through +a secure and controlled interface, leveraging Database networks and secrets.
> Note: In Docker Enterprise Standard, all resources are deployed across the
> same group of UCP worker nodes. Node segmentation is provided in Docker
> Enterprise Advanced and discussed in the next tutorial.

The db team is responsible for deploying and managing the full lifecycle
+of the databases used by the application teams. They can execute the full set of
+operations against all database resources.

The mobile team is responsible for deploying their own application stack,
+minus the database tier that is managed by the db team.

Docker EE administrators can create grants to control how users and +organizations access resource sets.
+ +A grant defines who has how much access to what resources. Each grant is a +1:1:1 mapping of subject, role, and resource set. For example, you can +grant the “Prod Team” “Restricted Control” over services in the “/Production” +collection.
+ +A common workflow for creating grants has four steps:
+ +With Kubernetes orchestration, a grant is made up of subject, role, and +namespace.
+ +++ +This section assumes that you have created objects for the grant: subject, role, +namespace.
+
To create a Kubernetes grant in UCP:
+ +With Swarm orchestration, a grant is made up of subject, role, and +collection.
+ +++ +This section assumes that you have created objects to grant: teams/users, +roles (built-in or custom), and a collection.
+
+
To create a grant in UCP:
+ +++ +By default, all new users are placed in the
+docker-datacenterorganization. +To apply permissions to all Docker EE users, create a grant with the +docker-datacenterorg as a subject.
Docker EE enables access control to cluster resources by grouping resources +into resource sets. Combine resource sets with grants +to give users permission to access specific cluster resources.
+ +A resource set can be:
+ +A namespace allows you to group resources like Pods, Deployments, Services, or +any other Kubernetes-specific resources. You can then enforce RBAC policies +and resource quotas for the namespace.
+ +Each Kubernetes resources can only be in one namespace, and namespaces cannot +be nested inside one another.
+ +Learn more about Kubernetes namespaces.
+ +A Swarm collection is a directory of cluster resources like nodes, services, +volumes, or other Swarm-specific resources.
+ +Each Swarm resource can only be in one collection at a time, but collections +can be nested inside one another, to create hierarchies.
+ +You can nest collections inside one another. If a user is granted permissions +for one collection, they’ll have permissions for its child collections, +pretty much like a directory structure..
+ +For a child collection, or for a user who belongs to more than one team, the +system concatenates permissions from multiple roles into an “effective role” for +the user, which specifies the operations that are allowed against the target.
+ +Docker EE provides a number of built-in collections.
+ +| Default collection | +Description | +
|---|---|
/ |
+ Path to all resources in the Swarm cluster. Resources not in a collection are put here. | +
/System |
+ Path to UCP managers, DTR nodes, and UCP/DTR system services. By default, only admins have access, but this is configurable. | +
/Shared |
+ Default path to all worker nodes for scheduling. In Docker EE Standard, all worker nodes are located here. In Docker EE Advanced, worker nodes can be moved and isolated. | +
/Shared/Private/ |
+ Path to a user’s private collection. | +
/Shared/Legacy |
+ Path to the access control labels of legacy versions (UCP 2.1 and lower). | +
Each user has a default collection which can be changed in UCP preferences.
+ +Users can’t deploy a resource without a collection. When a user deploys a +resource without an access label, Docker EE automatically places the resource in +the user’s default collection. Learn how to add labels to nodes.
+ +With Docker Compose, the system applies default collection labels across all
+resources in the stack unless com.docker.ucp.access.label has been explicitly
+set.
++ +Default collections and collection labels
+ +Default collections are good for users who work only on a well-defined slice of +the system, as well as users who deploy stacks and don’t want to edit the +contents of their compose files. A user with more versatile roles in the +system, such as an administrator, might find it better to set custom labels for +each resource.
+
Resources are marked as being in a collection by using labels. Some resource +types don’t have editable labels, so you can’t move them across collections.
> Can edit labels: services, nodes, secrets, and configs.
> Cannot edit labels: containers, networks, and volumes.
For editable resources, you can change the com.docker.ucp.access.label to move
resources to different collections. For example, you may need to deploy resources
+to a collection other than your default collection.
The system uses the additional labels, com.docker.ucp.collection.*, to enable
+efficient resource lookups. By default, nodes have the
+com.docker.ucp.collection.root, com.docker.ucp.collection.shared, and
+com.docker.ucp.collection.swarm labels set to true. UCP
+automatically controls these labels, and you don’t need to manage them.
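Based on the labels listed above, a node in the default /Shared collection carries label values along these lines. This is a sketch of the label set only, not literal inspect output:

```yaml
# UCP-managed collection labels on a default worker node.
# UCP sets and maintains these; do not edit them yourself.
com.docker.ucp.collection.root: "true"
com.docker.ucp.collection.shared: "true"
com.docker.ucp.collection.swarm: "true"
```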
Collections get generic default names, but you can give them meaningful names, +like “Dev”, “Test”, and “Prod”.
+ +A stack is a group of resources identified by a label. You can place the
+stack’s resources in multiple collections. Resources are placed in the user’s
+default collection unless you specify an explicit com.docker.ucp.access.label
+within the stack/compose file.
Docker Universal Control Plane (UCP), +the UI for Docker EE, lets you +authorize users to view, edit, and use cluster resources by granting role-based +permissions against resource sets.
+ +To authorize access to cluster resources across your organization, UCP +administrators might take the following high-level steps:
+ +For an example, see Deploy stateless app with RBAC.
+ +A subject represents a user, team, organization, or service account. A subject +can be granted a role that defines permitted operations against one or more +resource sets.
+ +Learn to create and configure users and teams.
+ +Roles define what operations can be done by whom. A role is a set of permitted +operations against a type of resource, like a container or volume, that’s +assigned to a user or team with a grant.
+ +For example, the built-in role, Restricted Control, includes permission to
+view and schedule nodes but not to update nodes. A custom DBA role might
+include permissions to r-w-x volumes and secrets.
Most organizations use multiple roles to fine-tune the appropriate access. A +given team or user may have different roles provided to them depending on what +resource they are accessing.
+ +Learn to define roles with authorized API operations.
+ +To control user access, cluster resources are grouped into Docker Swarm +collections or Kubernetes namespaces.
+ +Swarm collections: A collection has a directory-like structure that holds +Swarm resources. You can create collections in UCP by defining a directory path +and moving resources into it. Also, you can create the path in UCP and use +labels in your YAML file to assign application resources to the path. +Resource types that users can access in a Swarm collection include containers, +networks, nodes, services, secrets, and volumes.
+Kubernetes namespaces: A
+namespace
+is a logical area for a Kubernetes cluster. Kubernetes comes with a default
+namespace for your cluster objects, plus two more namespaces for system and
+public resources. You can create custom namespaces, but unlike Swarm
+collections, namespaces can’t be nested. Resource types that users can
+access in a Kubernetes namespace include pods, deployments, network policies,
+nodes, services, secrets, and many more.
Together, collections and namespaces are named resource sets. Learn to +group and isolate cluster resources.
+ +A grant is made up of subject, role, and resource set.
+ +Grants define which users can access what resources in what way. Grants are +effectively Access Control Lists (ACLs), and when grouped together, they +provide comprehensive access policies for an entire organization.
+ +Only an administrator can manage grants, subjects, roles, and access to +resources.
+ +++ +About administrators
+ +An administrator is a user who creates subjects, groups resources by moving them +into collections or namespaces, defines roles by selecting allowable operations, +and applies grants to users and teams.
+
With Docker EE Advanced, you can enable physical isolation of resources
+by organizing nodes into collections and granting Scheduler access for
+different users. To control access to nodes, move them to dedicated collections
+where you can grant access to specific users, teams, and organizations.
In this example, a team gets access to a node collection and a resource +collection, and UCP access control ensures that the team members can’t view +or use swarm resources that aren’t in their collection.
+ +You need a Docker EE Advanced license and at least two worker nodes to +complete this example.
+ +Ops team and assign a user to it./Prod collection for the team’s node./Prod collection.Ops teams access to its collection.In the web UI, navigate to the Organizations & Teams page to create a team +named “Ops” in your organization. Add a user who isn’t a UCP administrator to +the team. +Learn to create and manage teams.
+ +In this example, the Ops team uses an assigned group of nodes, which it +accesses through a collection. Also, the team has a separate collection +for its resources.
+ +Create two collections: one for the team’s worker nodes and another for the +team’s resources.
+ +You’ve created two new collections. The /Prod collection is for the worker
+nodes, and the /Prod/Webserver sub-collection is for access control to
+an application that you’ll deploy on the corresponding worker nodes.
By default, worker nodes are located in the /Shared collection.
+Worker nodes that are running DTR are assigned to the /System collection.
+To control access to the team’s nodes, move them to a dedicated collection.
Move a worker node by changing the value of its access label key,
+com.docker.ucp.access.label, to a different collection.
/System collection, click another worker node,
+because you can’t move nodes that are in the /System collection. By
+default, worker nodes are assigned to the /Shared collection.com.docker.ucp.access.label and change
+its value from /Shared to /Prod./Prod collection.++ +Docker EE Advanced required
+ +If you don’t have a Docker EE Advanced license, you’ll get the following +error message when you try to change the access label: +Nodes must be in either the shared or system collection without an advanced license. +Get a Docker EE Advanced license.
+

You need two grants to control access to nodes and container resources:

- Grant the Ops team the Restricted Control role for the /Prod/Webserver
  resources.
- Grant the Ops team the Scheduler role against the nodes in the /Prod
  collection.

Create two grants for team access to the two collections:

First, create the grant for the Ops team against the /Prod/Webserver
collection. The same steps apply for the nodes in the /Prod collection.
The Ops team now has Scheduler access to the nodes in the
/Prod collection.
The cluster is set up for node isolation. Users with access to nodes in the
+/Prod collection can deploy Swarm services
+and Kubernetes apps, and their workloads
+won’t be scheduled on nodes that aren’t in the collection.
When a user deploys a Swarm service, UCP assigns its resources to the user’s +default collection.
+ +From the target collection of a resource, UCP walks up the ancestor collections
+until it finds the highest ancestor that the user has Scheduler access to.
+Tasks are scheduled on any nodes in the tree below this ancestor. In this example,
+UCP assigns the user’s service to the /Prod/Webserver collection and schedules
+tasks on nodes in the /Prod collection.
As a user on the Ops team, set your default collection to /Prod/Webserver.
Log in as a user on the Ops team.

Deploy a service automatically to worker nodes in the /Prod collection.
+All resources are deployed under the user’s default collection,
+/Prod/Webserver, and the containers are scheduled only on the nodes under
+/Prod.
Click the NGINX container, and in the details pane, confirm that its +Collection is /Prod/Webserver.
+ +
Click the node, and in the details pane, confirm that its Collection +is /Prod.
+ +
Another approach is to use a grant instead of changing the user’s default
+collection. An administrator can create a grant for a role that has the
+Service Create permission against the /Prod/Webserver collection or a child
+collection. In this case, the user sets the value of the service’s access label,
+com.docker.ucp.access.label, to the new collection or one of its children
+that has a Service Create grant for the user.
Starting in Docker Enterprise Edition 2.0, you can deploy a Kubernetes workload +to worker nodes, based on a Kubernetes namespace.
+ +To deploy Kubernetes workloads, an administrator must convert a worker node to
+use the Kubernetes orchestrator.
+Learn how to set the orchestrator type
+for your nodes in the /Prod collection.
An administrator must create a Kubernetes namespace to enable node isolation +for Kubernetes workloads.
+ +In the Object YAML editor, paste the following YAML.
+ +apiVersion: v1
+kind: Namespace
+metadata:
+ name: ops-nodes
Click Create to create the ops-nodes namespace.

Create a grant to the ops-nodes namespace for the Ops team by following the
+same steps that you used to grant access to the /Prod collection, only this
+time, on the Create Grant page, pick Namespaces, instead of
+Collections.

Select the ops-nodes namespace, and create a Full Control grant for the
+Ops team.

The last step is to link the Kubernetes namespace to the /Prod collection.
Click the More options icon and select Link nodes in collection.
+ +
Click Confirm to link the namespace to the collection.
+ +
Log on to UCP as a member of the Ops team.

Click Create, and in the Object YAML editor, paste the following
YAML definition for an NGINX server.
+ +apiVersion: v1
+kind: ReplicationController
+metadata:
+ name: nginx
+spec:
+ replicas: 1
+ selector:
+ app: nginx
+ template:
+ metadata:
+ name: nginx
+ labels:
+ app: nginx
+ spec:
+ containers:
+ - name: nginx
+ image: nginx
+ ports:
+ - containerPort: 80
+
In the left pane, click Pods and confirm that the workload is running
+on pods in the ops-nodes namespace.

In this example, two teams are granted access to volumes in two different +resource collections. UCP access control prevents the teams from viewing and +accessing each other’s volumes, even though they may be located in the same +nodes.
+ +Navigate to the Organizations & Teams page to create two teams in the +“engineering” organization, named “Dev” and “Prod”. Add a user who’s not a UCP administrator to the Dev team, and add another non-admin user to the Prod team. Learn how to create and manage teams.
+ +
In this example, the Dev and Prod teams use two different volumes, which they
+access through two corresponding resource collections. The collections are
+placed under the /Shared collection.

In this example, the Dev team gets access to its volumes from a grant that
+associates the team with the /Shared/dev-volumes collection, and the Prod
+team gets access to its volumes from another grant that associates the team
+with the /Shared/prod-volumes collection.

With the collections and grants in place, users can sign in and create volumes +in their assigned collections.
+ +Team members have permission to create volumes in their assigned collection.
For example, a user on the Prod team creates a volume in the
/Shared/prod-volumes collection.
Now you can see role-based access control in action for volumes. The user on +the Prod team can’t see the Dev team’s volumes, and if you log in again as a +user on the Dev team, you won’t see the Prod team’s volumes.
+ +
Sign in with a UCP administrator account, and you see all of the volumes +created by the Dev and Prod users.
+ +
With Docker Enterprise Edition, you can create roles and grants +that implement the permissions that are defined in your Kubernetes apps. +Learn about RBAC authorization in Kubernetes.
+ +Docker EE has its own implementation of role-based access control, so you +can’t use Kubernetes RBAC objects directly. Instead, you create UCP roles +and grants that correspond with the role objects and bindings in your +Kubernetes app.
+ +Role and ClusterRole objects become UCP roles.RoleBinding and ClusterRoleBinding objects become UCP grants.Learn about UCP roles and grants.
+ +++ +Kubernetes yaml in UCP
+ +Docker EE has its own RBAC system that’s distinct from the Kubernetes +system, so you can’t create any objects that are returned by the +
+/apis/rbac.authorization.k8s.ioendpoints. If the yaml for your Kubernetes +app contains definitions forRole,ClusterRole,RoleBindingor +ClusterRoleBindingobjects, UCP returns an error.
If you have Role and ClusterRole objects defined in the yaml for your
+Kubernetes app, you can realize the same authorization model by creating
+custom roles by using the UCP web UI.
The following Kubernetes yaml defines a pod-reader role, which gives users
+access to the read-only pods resource APIs, get, watch, and list.
kind: Role
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ namespace: default
+ name: pod-reader
+rules:
+- apiGroups: [""]
+ resources: ["pods"]
+ verbs: ["get", "watch", "list"]
+Create a corresponding custom role by using the Create Role page in the +UCP web UI.
+ +
The pod-reader role is ready to use in grants that control access to
+cluster resources.
If your Kubernetes app defines RoleBinding or ClusterRoleBinding
+objects for specific users, create corresponding grants by using the UCP web UI.
The following Kubernetes yaml defines a RoleBinding that grants user “jane”
+read-only access to pods in the default namespace.
kind: RoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: read-pods
+ namespace: default
+subjects:
+- kind: User
+ name: jane
+ apiGroup: rbac.authorization.k8s.io
+roleRef:
+ kind: Role
+ name: pod-reader
+ apiGroup: rbac.authorization.k8s.io
+Create a corresponding grant by using the Create Grant page in the +UCP web UI.
+ +
User “jane” has access to inspect pods in the default namespace.
There are a few limitations that you should be aware of when creating +Kubernetes workloads:
- ClusterRole objects, ClusterRoleBinding objects, or any other object that is
  created by using the /apis/rbac.authorization.k8s.io endpoints can’t be
  created.
- Only admin users can deploy pods that use privileged options: PodSpec.hostIPC,
  PodSpec.hostNetwork, PodSpec.hostPID, SecurityContext.allowPrivilegeEscalation,
  SecurityContext.capabilities, SecurityContext.privileged, and
  Volume.hostPath.
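As a sketch, a pod spec like the following uses several of the restricted fields, so deploying it as a non-admin user would be rejected; the manifest itself is illustrative and the names are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: privileged-example     # hypothetical
spec:
  hostNetwork: true            # restricted: PodSpec.hostNetwork
  containers:
  - name: app
    image: nginx:latest
    securityContext:
      privileged: true         # restricted: SecurityContext.privileged
    volumeMounts:
    - name: host-dir
      mountPath: /host
  volumes:
  - name: host-dir
    hostPath:                  # restricted: Volume.hostPath
      path: /var/log
```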
+ +Images are a shared resource, as such they are always in the swarm collection.
+To allow users access to pull images, you need to grant them the image load
+permission for the swarm collection.
As an admin user, go to the UCP web UI, navigate to the Roles page,
+and create a new role named Pull images.

Then go to the Grants page, and create a new grant with:
- Role: the Pull images role you created.
- Subject: the user or team that needs to pull images.
- Resource set: the swarm collection.
Once you click Create the user is able to pull images from the UCP web UI +or the CLI.
+ +Docker EE administrators can reset user passwords managed in UCP:
+ +Users passwords managed with an LDAP service must be changed on the LDAP server.
+ +
Administrators who need a password change can ask another administrator for help +or use ssh to log in to a manager node managed by Docker EE and run:
+ +
+docker run --net=host -v ucp-auth-api-certs:/tls -it "$(docker inspect --format '{{ .Spec.TaskTemplate.ContainerSpec.Image }}' ucp-auth-api)" "$(docker inspect --format '{{ index .Spec.TaskTemplate.ContainerSpec.Args 0 }}' ucp-auth-api)" passwd -i
+
+
+