Update UCP RBAC to use simple layout

This commit is contained in:
Joao Fernandes 2017-12-05 14:01:49 -08:00 committed by Jim Galasyn
parent 288db22c35
commit 4290e3a2cd
12 changed files with 64 additions and 1184 deletions

View File

@ -2,17 +2,8 @@
title: Reset a user password
description: Learn how to recover your Docker Datacenter credentials.
keywords: ucp, authentication
redirect_from:
- /ucp/
ui_tabs:
- version: ucp-3.0
orhigher: true
- version: ucp-2.2
orlower: true
---
{% if include.ui %}
## User passwords
Docker EE administrators can reset user passwords managed in UCP:
@ -39,5 +30,3 @@ docker exec -it ucp-auth-api enzi \
passwd -i
{% endraw %}
```
{% endif %}

View File

@ -2,19 +2,8 @@
title: Synchronize users and teams with LDAP
description: Learn how to enable LDAP and sync users and teams in Docker Universal Control Plane.
keywords: authorize, authentication, users, teams, UCP, Docker, LDAP
redirect_from:
- /ucp/
ui_tabs:
- version: ucp-3.0
orhigher: true
- version: ucp-2.2
orlower: true
---
{% if include.ui %}
{% if include.version=="ucp-3.0" %}
To enable LDAP in UCP and sync to your LDAP directory:
1. Click **Admin Settings** under your username drop down.
@ -29,7 +18,7 @@ If Docker EE is configured to sync users with your organization's LDAP directory
server, you can enable syncing the new team's members when creating a new team
or when modifying settings of an existing team.
For more, see: [Integrate with an LDAP Directory](../../datacenter/ucp/2.2/guides/admin/configure/external-auth/index.md).
For more, see: [Integrate with an LDAP Directory](../admin/configure/external-auth/index.md).
![](../images/create-and-manage-teams-5.png){: .with-border}
@ -65,58 +54,3 @@ synced to match the users in the search results.
scope are added as members of the team.
- **Search subtree**: Defines search through the full LDAP tree, not just one
level, starting at the Base DN.
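The difference between the two search scopes (one level below the Base DN versus the full subtree under it) can be sketched as follows. This is an illustrative model only, not UCP's implementation; the helper name `in_search_scope` is hypothetical:

```python
# Illustrative sketch (not UCP code): how the Base DN and the
# "Search subtree" option determine which user DNs are in scope.

def in_search_scope(user_dn, base_dn, subtree=False):
    """Return True if user_dn falls under base_dn for the given scope."""
    if not user_dn.endswith("," + base_dn):
        return False
    # Relative part of the DN, e.g. "uid=molly" or "uid=jim,ou=eng"
    relative = user_dn[: -(len(base_dn) + 1)]
    if subtree:
        return True                      # any depth below the Base DN
    return relative.count(",") == 0      # exactly one level below

base = "ou=people,dc=example,dc=com"
print(in_search_scope("uid=molly,ou=people,dc=example,dc=com", base))                   # one level: True
print(in_search_scope("uid=jim,ou=eng,ou=people,dc=example,dc=com", base))              # two levels: False
print(in_search_scope("uid=jim,ou=eng,ou=people,dc=example,dc=com", base, subtree=True))  # subtree: True
```

With **Search subtree** enabled, any user anywhere below the Base DN matches; without it, only direct children do.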
{% elsif include.version=="ucp-2.2" %}
To enable LDAP in UCP and sync team members with your LDAP
directory:
1. Click **Admin Settings** under your username drop down.
2. Click **Authentication & Authorization**.
3. Scroll down and click `Yes` by **LDAP Enabled**. A list of LDAP settings displays.
4. Input values to match your LDAP server installation.
5. Test your configuration in UCP.
6. Manually create teams in UCP to mirror those in LDAP.
7. Click **Sync Now**.
If Docker EE is configured to sync users with your organization's LDAP directory
server, you can enable syncing the new team's members when creating a new team
or when modifying settings of an existing team.
[Learn how to sync with LDAP at the backend](../configure/external-auth/index.md).
![](../images/create-and-manage-teams-5.png){: .with-border}
There are two methods for matching group members from an LDAP directory:
**Match Group Members**
This option specifies that team members should be synced directly with members
of a group in your organization's LDAP directory. The team's membership will be
synced to match the membership of the group.
| Field | Description |
|:-----------------------|:------------------------------------------------------------------------------------------------------|
| Group DN | The distinguished name of the group from which to select users. |
| Group Member Attribute | The value of this group attribute corresponds to the distinguished names of the members of the group. |
**Match Search Results**
This option specifies that team members should be synced using a search query
against your organization's LDAP directory. The team's membership will be
synced to match the users in the search results.
| Field | Description |
| :--------------------------------------- | :----------------------------------------------------------------------------------------------------------------------------------------------------- |
| Search Base DN | The distinguished name of the node in the directory tree where the search should start looking for users. |
| Search Filter | The LDAP search filter used to find users. If you leave this field empty, all existing users in the search scope will be added as members of the team. |
| Search subtree instead of just one level | Whether to perform the LDAP search on a single level of the LDAP tree, or search through the full LDAP tree starting at the Base DN. |
**Immediately Sync Team Members**
Select this option to run an LDAP sync operation immediately after saving the
configuration for the team. It may take a moment before the members of the team
are fully synced.
{% endif %}
{% endif %}

View File

@ -2,29 +2,9 @@
title: Access control model
description: Manage access to resources with role-based access control.
keywords: ucp, grant, role, permission, authentication, authorization
redirect_from:
- /ucp/
ui_tabs:
- version: ucp-3.0
orhigher: true
- version: ucp-2.2
orlower: true
next_steps:
- path: /deploy/rbac/rbac-basics-create-subjects/
title: Create and configure users and teams
- path: /deploy/rbac/rbac-basics-define-roles/
title: Define roles with authorized API operations
- path: /deploy/rbac/rbac-basics-group-resources/
title: Group and isolate cluster resources
- path: /deploy/rbac/rbac-basics-grant-permissions/
title: Grant role-access to cluster resources
---
{% if include.ui %}
{% if include.version=="ucp-3.0" %}
[Docker Universal Control Plane (UCP)](https://docs.docker.com/datacenter/ucp/3.0/guides/),
[Docker Universal Control Plane (UCP)](../index.md),
the UI for [Docker EE](https://www.docker.com/enterprise-edition), lets you
authorize users to view, edit, and use cluster resources by granting role-based
permissions against resource types.
@ -38,7 +18,7 @@ administrators might take the following high-level steps:
- Group cluster **resources** into Swarm collections or Kubernetes namespaces.
- Create **grants** by marrying subject + role + resource group.
For an example, see [Deploy stateless app with RBAC](./deploy/rbac/rbac-howto-deploy-stateless-app).
For an example, see [Deploy stateless app with RBAC](rbac-howto-deploy-stateless-app.md).
## Subjects
@ -52,7 +32,7 @@ role that defines permitted operations against one or more resource types.
- **Organization**: A group of teams that share a specific set of permissions,
defined by the roles of the organization.
For more, see: [Create and configure users and teams](./rbac-basics-create-subjects.md)
For more, see: [Create and configure users and teams](rbac-basics-create-subjects.md)
## Roles
@ -68,7 +48,7 @@ Most organizations use multiple roles to fine-tune the appropriate access. A
given team or user may have different roles provided to them depending on what
resource they are accessing.
For more, see: [Define roles with authorized API operations](./rbac-basics-define-roles.md)
For more, see: [Define roles with authorized API operations](rbac-basics-define-roles.md)
## Resources
@ -92,7 +72,7 @@ namespaces _cannot be nested_.
> Resource types that can be placed into a Kubernetes namespace include: Pods,
> Deployments, NetworkPolicies, Nodes, Services, Secrets, and many more.
For more, see: [Group and isolate cluster resources](./rbac-basics-group-resources.md).
For more, see: [Group and isolate cluster resources](rbac-basics-group-resources.md).
## Grants
@ -108,94 +88,11 @@ Only an administrator can manage grants, subjects, roles, and resources.
> into directories or namespaces, define roles by selecting allowable operations,
> and apply grants to users and teams.
For more, see: [Grant access to cluster resources](./rbac-basics-grant-permissions.md).
For more, see: [Grant access to cluster resources](rbac-basics-grant-permissions.md).
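The subject + role + resource-group model above can be sketched in a few lines. This is a hypothetical illustration, not UCP's authorization code; the role-to-operation sets are an invented subset:

```python
# Illustrative sketch (not UCP code): a grant is a 1:1:1 mapping of
# subject, role, and resource group, consulted at request time.

ROLES = {  # hypothetical subset of role -> permitted operations
    "View Only": {"view"},
    "Restricted Control": {"view", "edit"},
    "Full Control": {"view", "edit", "create", "delete"},
}

grants = [
    # (subject, role, resource group)
    ("prod-team", "Restricted Control", "/Production"),
    ("molly", "Full Control", "/Shared/Private/molly"),
]

def allowed(subject, operation, resource_group):
    return any(
        s == subject and g == resource_group and operation in ROLES[r]
        for s, r, g in grants
    )

print(allowed("prod-team", "edit", "/Production"))    # True
print(allowed("prod-team", "delete", "/Production"))  # False
```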
## Next steps
{% elsif include.version=="ucp-2.2" %}
[Docker Universal Control Plane (UCP)](https://docs.docker.com/datacenter/ucp/2.2/guides/),
the UI for [Docker EE](https://www.docker.com/enterprise-edition), lets you
authorize users to view, edit, and use cluster resources by granting role-based
permissions against resource types.
To authorize access to cluster resources across your organization, UCP
administrators might take the following high-level steps:
- Add and configure **subjects** (users and teams).
- Define custom **roles** (or use defaults) by adding permitted operations per
resource types.
- Group cluster **resources** into Swarm collections.
- Create **grants** by marrying subject + role + resource group.
For an example, see [Deploy stateless app with RBAC](./deploy/rbac/rbac-howto-deploy-stateless-app).
## Subjects
A subject represents a user, team, or organization. A subject is granted a
role that defines permitted operations against one or more resource types.
- **User**: A person authenticated by the authentication backend. Users can
belong to one or more teams and one or more organizations.
- **Team**: A group of users that share permissions defined at the team level. A
team can be in one organization only.
- **Organization**: A group of teams that share a specific set of permissions,
defined by the roles of the organization.
For more, see: [Create and configure users and teams](./rbac-basics-create-subjects.md)
## Roles
Roles define what operations can be done by whom. A role is a set of permitted
operations against a *resource type* (such as an image, container, volume) that
is assigned to a user or team with a grant.
For example, the built-in role, **Restricted Control**, includes permission to
view and schedule nodes but not to update nodes. A custom **DBA** role might
include permissions to r-w-x volumes and secrets.
Most organizations use different roles to fine-tune the appropriate access. A
given team or user may have different roles provided to them depending on what
resource they are accessing.
For more, see: [Define roles with authorized API operations](./rbac-basics-define-roles.md)
## Resources
Cluster resources are grouped into Swarm collections.
A collection is a directory that holds Swarm resources. You can create
collections in UCP by both defining a directory path and moving resources into
it. Or you can create the path in UCP and use *labels* in your YAML file to
assign application resources to that path.
> Resource types that can be placed into a Swarm collection include: Containers,
> Networks, Nodes, Services, Secrets, and Volumes.
For more, see: [Group and isolate cluster resources](./rbac-basics-group-resources.md).
## Grants
A grant is made up of a *subject*, *resource group*, and *role*.
Grants define which users can access what resources in what way. Grants are
effectively Access Control Lists (ACLs), and when grouped together, can
provide comprehensive access policies for an entire organization.
Only an administrator can manage grants, subjects, roles, and resources.
> Administrators are users who create subjects, group resources by moving them
> into directories or namespaces, define roles by selecting allowable operations,
> and apply grants to users and teams.
For more, see: [Grant access to cluster resources](./rbac-basics-grant-permissions.md).
## Transition from UCP 2.1 access control
- Access labels & permissions are migrated automatically when upgrading from UCP 2.1.x.
- Unlabeled user-owned resources are migrated into `/Shared/Private/<username>`.
- Old access control labels are migrated into `/Shared/Legacy/<labelname>`.
- When deploying a resource, choose a collection instead of an access label.
- Use grants for access control, instead of unlabeled permissions.
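The migration rules above can be summarized as a small sketch. This is illustrative only, not UCP's migration code; the function name is hypothetical:

```python
# Illustrative sketch (not UCP code): where a UCP 2.1 resource lands in the
# 2.2 collection hierarchy when upgrading, per the rules above.

def migrated_collection(owner, access_label=None):
    if access_label:                          # old access control label
        return "/Shared/Legacy/" + access_label
    return "/Shared/Private/" + owner         # unlabeled user-owned resource

print(migrated_collection("molly"))          # /Shared/Private/molly
print(migrated_collection("molly", "crm"))   # /Shared/Legacy/crm
```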
{% endif %}
{% endif %}
* [Create and configure users and teams](rbac-basics-create-subjects.md)
* [Define roles with authorized API operations](rbac-basics-define-roles.md)
* [Group and isolate cluster resources](rbac-basics-group-resources.md)
* [Grant role-access to cluster resources](rbac-basics-grant-permissions.md)

View File

@ -2,26 +2,8 @@
title: Create and configure users and teams
description: Learn how to add users and define teams in Docker Universal Control Plane.
keywords: rbac, authorize, authentication, users, teams, UCP, Docker
redirect_from:
- /ucp/
ui_tabs:
- version: ucp-3.0
orhigher: true
- version: ucp-2.2
orlower: true
next_steps:
- path: /deploy/rbac/admin-sync-with-ldap/
title: Synchronize teams with LDAP
- path: /deploy/rbac/rbac-basics-define-roles/
title: Define roles with authorized API operations
- path: /deploy/rbac/rbac-basics-group-resources/
title: Group and isolate cluster resources
- path: /deploy/rbac/rbac-basics-grant-permissions/
title: Grant role-access to cluster resources
---
{% if include.ui %}
Users, teams, and organizations are referred to as subjects in Docker EE.
Individual users can belong to one or more teams but each team can only be in
@ -49,7 +31,7 @@ To use Docker EE's built-in authentication, you must [create users manually](#cr
> To enable LDAP and authenticate and synchronize UCP users and teams with your
> organization's LDAP directory, see:
> - [Synchronize users and teams with LDAP in the UI](admin-sync-with-ldap.md)
> - [Integrate with an LDAP Directory](/datacenter/ucp/2.2/guides/admin/configure/external-auth/index.md).
> - [Integrate with an LDAP Directory](../admin/configure/external-auth/index.md).
## Build an organization architecture
@ -75,19 +57,16 @@ To create teams in the organization:
2. Click **Create Team**.
3. Input a team name (and description).
4. Click **Create**.
5. Add existing users to the team. To sync LDAP users, see: [Integrate with an LDAP Directory](../../datacenter/ucp/2.2/guides/admin/configure/external-auth/index.md).
5. Add existing users to the team. To sync LDAP users, see: [Integrate with an LDAP Directory](../admin/configure/external-auth/index.md).
- Click the team name and select **Actions** > **Add Users**.
- Check the users to include and click **Add Users**.
> **Note**: To sync teams with groups in an LDAP server, see [Sync Teams with LDAP](./admin-sync-with-ldap).
{% if include.version=="ucp-3.0" %}
> **Note**: To sync teams with groups in an LDAP server, see [Sync Teams with LDAP](admin-sync-with-ldap).
### Create users manually
New users are assigned a default permission level so that they can access the
cluster. To extend a user's default permissions, add them to a team and [create grants](./rbac-basics-grant-permissions/). You can optionally grant them Docker EE
cluster. To extend a user's default permissions, add them to a team and [create grants](rbac-basics-grant-permissions.md). You can optionally grant them Docker EE
administrator permissions.
To manually create users in UCP:
@ -105,27 +84,9 @@ To manually create users in UCP:
![](../images/ucp_usermgmt_users_create02.png){: .with-border}
{% elsif include.version=="ucp-2.2" %}
## Next steps
### Create users manually
New users are assigned a default permission level so that they can access the
cluster. To extend a user's default permissions, add them to a team and [create grants](/deploy/rbac/rbac-basics-grant-permissions/). You can optionally grant them Docker EE
administrator permissions.
To manually create users in UCP:
1. Navigate to the **Users** page.
2. Click **Create User**.
3. Input username, password, and full name.
4. Click **Create**.
5. (Optional) Check **Is a UCP admin?**
> A `UCP Admin` can grant users permission to change the cluster configuration
> and manage grants, roles, and collections.
![](../images/create-users-1.png){: .with-border}
![](../images/create-users-2.png){: .with-border}
{% endif %}
{% endif %}
* [Synchronize teams with LDAP](admin-sync-with-ldap.md)
* [Define roles with authorized API operations](rbac-basics-define-roles.md)
* [Group and isolate cluster resources](rbac-basics-group-resources.md)
* [Grant role-access to cluster resources](rbac-basics-grant-permissions.md)

View File

@ -2,25 +2,8 @@
title: Define roles with authorized API operations
description: Learn how to create roles and set permissions in Docker Universal Control Plane.
keywords: rbac, authorization, authentication, users, teams, UCP
redirect_from:
- /ucp/
ui_tabs:
- version: ucp-3.0
orhigher: true
- version: ucp-2.2
orlower: true
next_steps:
- path: /deploy/rbac/rbac-basics-create-subjects/
title: Create and configure users and teams
- path: /deploy/rbac/rbac-basics-group-resources/
title: Group and isolate cluster resources
- path: /deploy/rbac/rbac-basics-grant-permissions/
title: Grant role-access to cluster resources
---
{% if include.ui %}
{% if include.version=="ucp-3.0" %}
A role defines a set of API operations permitted against a group of resources.
Roles are applied to users and teams with grants.
@ -30,13 +13,13 @@ Roles are applied to users and teams with grants.
You can define custom roles or use the following built-in roles:
| Built-in role | Description |
| ---------------------| ------------------------------------------------------------------------------- |
| `None` | Users have no access to Swarm or Kubernetes resources. Maps to `No Access` role in UCP 2.1.x. |
| `View Only` | Users can view resources but can't create them. |
| Built-in role | Description |
|:---------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `None` | Users have no access to Swarm or Kubernetes resources. Maps to `No Access` role in UCP 2.1.x. |
| `View Only` | Users can view resources but can't create them. |
| `Restricted Control` | Users can view and edit resources but can't run a service or container in a way that affects the node where it's running. Users _cannot_ mount a node directory, `exec` into containers, or run containers in privileged mode or with additional kernel capabilities. |
| `Scheduler` | Users can view nodes (worker and manager) and schedule (not view) workloads on these nodes. By default, all users are granted the `Scheduler` role against the `/Shared` collection. (To view workloads, users need permissions such as `Container View`). |
| `Full Control` | Users can view and edit all granted resources. They can create containers without any restriction, but can't see the containers of other users. |
| `Scheduler` | Users can view nodes (worker and manager) and schedule (not view) workloads on these nodes. By default, all users are granted the `Scheduler` role against the `/Shared` collection. (To view workloads, users need permissions such as `Container View`). |
| `Full Control` | Users can view and edit all granted resources. They can create containers without any restriction, but can't see the containers of other users. |
## Create a custom role
@ -63,50 +46,8 @@ the same name to different collections or namespaces.
> - Roles used within a grant can only be deleted after first deleting the grant.
> - Only administrators can create and delete roles.
## Next steps
{% elsif include.version=="ucp-2.2" %}
A role defines a set of API operations permitted against a group of resources.
Roles are applied to users and teams with grants.
![Diagram showing UCP permission levels](../images/permissions-ucp.svg)
## Default roles
You can define custom roles or use the following built-in roles:
| Built-in role | Description |
| ---------------------| ------------------------------------------------------------------------------- |
| `None` | Users have no access to Swarm resources. Maps to `No Access` role in UCP 2.1.x. |
| `View Only` | Users can view resources but can't create them. |
| `Restricted Control` | Users can view and edit resources but can't run a service or container in a way that affects the node where it's running. Users _cannot_ mount a node directory, `exec` into containers, or run containers in privileged mode or with additional kernel capabilities. |
| `Scheduler` | Users can view nodes (worker and manager) and schedule (not view) workloads on these nodes. By default, all users are granted the `Scheduler` role against the `/Shared` collection. (To view workloads, users need permissions such as `Container View`). |
| `Full Control` | Users can view and edit all granted resources. They can create containers without any restriction, but can't see the containers of other users. |
## Create a custom role
The **Roles** page lists all default and custom roles applicable in the
organization.
You can give a role a global name, such as "Remove Images", which might enable the
**Remove** and **Force Remove** operations for images. You can apply a role with
the same name to different collections.
1. Click **Roles** under **User Management**.
2. Click **Create Role**.
3. Input the role name on the **Details** page.
4. Click **Operations**. All available API operations are displayed.
5. Select the permitted operations per resource type.
6. Click **Create**.
![](../images/custom-role.png){: .with-border}
> **Some important rules regarding roles**:
> - Roles are always enabled.
> - Roles cannot be edited; they must be deleted and recreated.
> - Roles used within a grant can only be deleted after first deleting the grant.
> - Only administrators can create and delete roles.
{% endif %}
{% endif %}
* [Create and configure users and teams](rbac-basics-create-subjects.md)
* [Group and isolate cluster resources](rbac-basics-group-resources.md)
* [Grant role-access to cluster resources](rbac-basics-grant-permissions.md)

View File

@ -2,22 +2,8 @@
title: Grant role-access to cluster resources
description: Learn how to grant users and teams access to cluster resources with role-based access control.
keywords: rbac, ucp, grant, role, permission, authentication, authorization
redirect_from:
- /ucp/
ui_tabs:
- version: ucp-3.0
orhigher: true
- version: ucp-2.2
orlower: true
next_steps:
- path: /deploy/rbac/rbac-howto-deploy-stateless-app/
title: Deploy a simple stateless app with RBAC
---
{% if include.ui %}
{% if include.version=="ucp-3.0" %}
Docker EE administrators can create _grants_ to control how users and
organizations access resources.
@ -81,49 +67,6 @@ To create a grant in UCP:
> To apply permissions to all Docker EE users, create a grant with the
> `docker-datacenter` org as a subject.
## Next steps
{% elsif include.version=="ucp-2.2" %}
Docker EE administrators can create _grants_ to control how users and
organizations access resources.
A grant defines _who_ has _how much_ access to _what_ resources. Each grant is a
1:1:1 mapping of _subject_, _role_, and _resource group_. For example, you can
grant the "Prod Team" "Restricted Control" of services in the "/Production"
collection.
A common workflow for creating grants has four steps:
- Add and configure **subjects** (users and teams).
- Define custom **roles** (or use defaults) by adding permitted API operations
per resource type.
- Group cluster **resources** into Swarm collections.
- Create **grants** by marrying subject + role + resource group.
## Swarm grants
With Swarm orchestration, a grant is made up of *subject*, *role*, and
*collection*.
> This section assumes that you have created objects to grant: teams/users,
> roles (built-in or custom), and a collection.
![](../images/ucp-grant-model-0.svg){: .with-border}
![](../images/ucp-grant-model.svg){: .with-border}
To create a grant in UCP:
1. Click **Grants** under **User Management**.
2. Click **Create Grant**.
3. On the Collections tab, click **Collections** (for Swarm).
4. Click **View Children** until you get to the desired resource group and **Select**.
5. On the Roles tab, select a role.
6. On the Subjects tab, select a user, team, or organization to authorize.
7. Click **Create**.
> By default, all new users are placed in the `docker-datacenter` organization.
> To apply permissions to all Docker EE users, create a grant with the
> `docker-datacenter` org as a subject.
{% endif %}
{% endif %}
* [Deploy a simple stateless app with RBAC](rbac-howto-deploy-stateless-app.md)

View File

@ -2,26 +2,8 @@
title: Group and isolate cluster resources
description: Learn how to group resources into collections or namespaces to control access.
keywords: rbac, ucp, grant, role, permission, authentication, resource collection
redirect_from:
- /ucp/
ui_tabs:
- version: ucp-3.0
orhigher: true
- version: ucp-2.2
orlower: true
next_steps:
- path: /deploy/rbac/rbac-basics-create-subjects/
title: Create and configure users and teams
- path: /deploy/rbac/rbac-basics-define-roles/
title: Define roles with authorized API operations
- path: /deploy/rbac/rbac-basics-grant-permissions/
title: Grant role-access to cluster resources
---
{% if include.ui %}
{% if include.version=="ucp-3.0" %}
## Kubernetes namespace
A
@ -54,7 +36,7 @@ For example, each user has a private collection with the path,
the access label: `com.docker.ucp.access.label = /Shared/Private/molly`.
To deploy applications into a custom collection, you must define the collection
first. For an example, see [Deploy stateless app with RBAC](./deploy/rbac/rbac-howto-deploy-stateless-app).
first. For an example, see [Deploy stateless app with RBAC](rbac-howto-deploy-stateless-app.md).
When a user deploys a resource without an access label, Docker EE automatically
places the resource in the user's default collection.
@ -71,13 +53,13 @@ the user, which specifies the operations that are allowed against the target.
Docker EE provides a number of built-in collections.
| Default collection | Description |
| ------------------ | --------------------------------------------------------------------------------------- |
| `/` | Path to all resources in the Swarm cluster. Resources not in a collection are put here. |
| `/System` | Path to UCP managers, DTR nodes, and UCP/DTR system services. By default, only admins have access, but this is configurable. |
| Default collection | Description |
|:-------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `/` | Path to all resources in the Swarm cluster. Resources not in a collection are put here. |
| `/System` | Path to UCP managers, DTR nodes, and UCP/DTR system services. By default, only admins have access, but this is configurable. |
| `/Shared` | Default path to all worker nodes for scheduling. In Docker EE Standard, all worker nodes are located here. In [Docker EE Advanced](https://www.docker.com/enterprise-edition), worker nodes can be moved and [isolated](./rbac-howto-isolate-nodes/). |
| `/Shared/Private/` | Path to a user's private collection. |
| `/Shared/Legacy` | Path to the access control labels of legacy versions (UCP 2.1 and lower). |
| `/Shared/Private/` | Path to a user's private collection. |
| `/Shared/Legacy` | Path to the access control labels of legacy versions (UCP 2.1 and lower). |
This diagram shows the `/System` and `/Shared` collections created by Docker EE.
User private collections are children of the `/Shared/private` collection. Here,
@ -92,7 +74,7 @@ Each user has a default collection which can be changed in UCP preferences.
Users can't deploy a resource without a collection. When a user deploys a
resource without an access label, Docker EE automatically places the resource in
the user's default collection. [Learn how to add labels to nodes](../../datacenter/ucp/2.2/guides/admin/configure/add-labels-to-cluster-nodes/).
the user's default collection. [Learn how to add labels to nodes](../admin/configure/add-labels-to-cluster-nodes.md).
With Docker Compose, the system applies default collection labels across all
resources in the stack unless `com.docker.ucp.access.label` has been explicitly
@ -161,136 +143,10 @@ one of the nodes under `/Shared`.
If you want to isolate nodes against other teams, place these nodes in new
collections, and assign the `Scheduler` role, which contains the `Node Schedule`
permission, to the team. [Isolate swarm nodes to a specific team](./rbac-howto-isolate-nodes.md).
permission, to the team. [Isolate swarm nodes to a specific team](rbac-howto-isolate-nodes.md).
## Next steps
{% elsif include.version=="ucp-2.2" %}
## Swarm collection
A collection is a directory of grouped resources, such as services, containers,
volumes, networks, and secrets. To authorize access, administrators create
grants against directory branches.
![](../images/collections-and-resources.svg){: .with-border}
### Access label
Access to a collection is granted with a path defined in an access label.
For example, each user has a private collection with the path,
`/Shared/Private/<username>`. The private collection for user "molly" would have
the access label: `com.docker.ucp.access.label = /Shared/Private/molly`.
To deploy applications into a custom collection, you must define the collection
first. For an example, see [Deploy stateless app with RBAC](./deploy/rbac/rbac-howto-deploy-stateless-app/#swarm-stack). When a user
deploys a resource without an access label, Docker EE automatically places the
resource in the user's default collection.
### Nested collections
You can nest collections. If a user has a grant against a collection, the grant
applies to all of its child collections.
For a child collection, or for a user who belongs to more than one team, the
system concatenates permissions from multiple roles into an "effective role" for
the user, which specifies the operations that are allowed against the target.
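Both rules (grants apply to child collections, and overlapping roles merge into an effective role) can be sketched together. This is a hypothetical model, not UCP code, and the role contents are invented:

```python
# Illustrative sketch (not UCP code): a grant on a parent collection applies
# to its children, and multiple matching roles merge into an "effective role".

ROLES = {  # hypothetical subset of role -> permitted operations
    "View Only": {"view"},
    "Scheduler": {"view-node", "schedule"},
}

grants = [
    ("qa-team", "View Only", "/prod"),
    ("qa-team", "Scheduler", "/prod/webserver"),
]

def effective_ops(subject, collection):
    ops = set()
    for s, role, granted in grants:
        # a grant covers the granted collection and everything below it
        if s == subject and (collection == granted or collection.startswith(granted + "/")):
            ops |= ROLES[role]
    return ops

print(sorted(effective_ops("qa-team", "/prod/webserver")))
# both grants match, so their operations are combined
```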
### Built-in collections
Docker EE provides a number of built-in collections.
| Default collection | Description |
| ------------------ | --------------------------------------------------------------------------------------- |
| `/` | Path to all resources in the Swarm cluster. Resources not in a collection are put here. |
| `/System` | Path to UCP managers, DTR nodes, and UCP/DTR system services. By default, only admins have access, but this is configurable. |
| `/Shared` | Default path to all worker nodes for scheduling. In Docker EE Standard, all worker nodes are located here. In [Docker EE Advanced](https://www.docker.com/enterprise-edition), worker nodes can be moved and [isolated](./rbac-howto-isolate-nodes/). |
| `/Shared/Private/` | Path to a user's private collection. |
| `/Shared/Legacy` | Path to the access control labels of legacy versions (UCP 2.1 and lower). |
This diagram shows the `/System` and `/Shared` collections created by Docker EE.
User private collections are children of the `/Shared/private` collection. Here,
the Docker EE administrator user created a `/prod` collection and a child
collection, `/webserver`.
![](../images/collections-diagram.svg){: .with-border}
### Default collections
Each user has a default collection which can be changed in UCP preferences.
Users can't deploy a resource without a collection. When a user deploys a
resource without an access label, Docker EE automatically places the resource in
the user's default collection. [Learn how to add labels to nodes](../../datacenter/ucp/2.2/guides/admin/configure/add-labels-to-cluster-nodes/).
With Docker Compose, the system applies default collection labels across all
resources in the stack unless `com.docker.ucp.access.label` has been explicitly
set.
> Default collections and collection labels
>
> Default collections are good for users who only work on a well-defined slice of
> the system, as well as users who deploy stacks and don't want to edit the
> contents of their compose files. A user with more versatile roles in the
> system, such as an administrator, might find it better to set custom labels for
> each resource.
### Collections and labels
Resources are marked as being in a collection by using labels. Some resource
types don't have editable labels, so you can't move them across collections.
> Can edit labels: services, nodes, secrets, and configs
> Cannot edit labels: containers, networks, and volumes
For editable resources, you can change the `com.docker.ucp.access.label` to move
resources to different collections. For example, you may need deploy resources
to a collection other than your default collection.
The system uses the additional labels, `com.docker.ucp.collection.*`, to enable
efficient resource lookups. By default, nodes have the
`com.docker.ucp.collection.root`, `com.docker.ucp.collection.shared`, and
`com.docker.ucp.collection.swarm` labels set to `true`. UCP
automatically controls these labels, and you don't need to manage them.
Collections get generic default names, but you can give them meaningful names,
like "Dev", "Test", and "Prod".
A *stack* is a group of resources identified by a label. You can place the
stack's resources in multiple collections. Resources are placed in the user's
default collection unless you specify an explicit `com.docker.ucp.access.label`
within the stack/compose file.
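A minimal compose-file sketch, assuming a UCP client bundle and an existing collection (service names and paths are illustrative): the `web` service is pinned to a collection explicitly, while `cache` falls back to the user's default collection.

```shell
cat > docker-compose.yml <<'EOF'
version: "3.1"
services:
  web:
    image: nginx:latest
    deploy:
      labels:
        # Explicit access label: overrides the default collection.
        com.docker.ucp.access.label: /Shared/nginx-collection
  cache:
    image: redis:latest
    # No access label: placed in the user's default collection.
EOF

# Deploy the stack against UCP.
docker stack deploy -c docker-compose.yml example-stack
```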
### Control access to nodes
The Docker EE Advanced license enables access control on worker nodes. Admin
users can move worker nodes from the default `/Shared` collection into other
collections and create corresponding grants for scheduling tasks.
In this example, an administrator has moved worker nodes to a `/prod`
collection:
![](../images/containers-and-nodes-diagram.svg)
When you deploy a resource with a collection, Docker EE sets a constraint
implicitly based on what nodes the collection, and any ancestor collections, can
access. The `Scheduler` role allows users to deploy resources on a node. By
default, all users have the `Scheduler` role against the `/Shared` collection.
When deploying a resource that isn't global, like local volumes, bridge
networks, containers, and services, the system identifies a set of "schedulable
nodes" for the user. The system identifies the target collection of the
resource, like `/Shared/Private/molly`, and it tries to find the parent that's
closest to the root that the user has the `Node Schedule` permission on.
For example, when a user with a default configuration runs `docker container run
nginx`, the system interprets this to mean, "Create an NGINX container under the
user's default collection, which is at `/Shared/Private/molly`, and deploy it on
one of the nodes under `/Shared`."
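By contrast, the user can target a different collection explicitly (a sketch; the path is illustrative and requires a grant on that collection):

```shell
# Run the same container, but place it in a named collection
# instead of the user's default one.
docker container run --detach \
  --label com.docker.ucp.access.label="/Shared/prod" \
  nginx
```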
If you want to isolate nodes from other teams, place those nodes in new
collections, and assign the `Scheduler` role, which contains the `Node Schedule`
permission, to the team. See [Isolate swarm nodes to a specific team](./rbac-howto-isolate-nodes.md).
{% endif %}
{% endif %}
* [Create and configure users and teams](rbac-basics-create-subjects.md)
* [Define roles with authorized API operations](rbac-basics-define-roles.md)
* [Grant role-access to cluster resources](rbac-basics-grant-permissions.md)


@ -2,18 +2,8 @@
title: Deploy a simple stateless app with RBAC
description: Learn how to deploy a simple application and customize access to resources.
keywords: rbac, authorize, authentication, users, teams, UCP, Docker
redirect_from:
- /ucp/
ui_tabs:
- version: ucp-3.0
orhigher: true
- version: ucp-2.2
orlower: true
---
{% if include.ui %}
{% if include.version=="ucp-3.0" %}
This tutorial explains how to deploy an NGINX web server and limit access to one
team with role-based access control (RBAC).
@ -42,7 +32,7 @@ acme-datacenter
  └── Chad Chavez
```
See: [Create and configure users and teams](./rbac-basics-create-subjects.md).
See: [Create and configure users and teams](rbac-basics-create-subjects.md).
## Kubernetes deployment
@ -74,7 +64,7 @@ simple role for the ops team:
4. On the **Operations** tab, check all **Kubernetes Deployment Operations**.
5. Click **Create**.
See: [Create and configure users and teams](./rbac-basics-define-roles.md).
See: [Create and configure users and teams](rbac-basics-define-roles.md).
### Grant access
@ -137,7 +127,7 @@ Create a collection for nginx resources, nested under the `/Shared` collection:
> **Tip**: To drill into a collection, click **View Children**.
See: [Group and isolate cluster resources](./rbac-basics-group-resources.md).
See: [Group and isolate cluster resources](rbac-basics-group-resources.md).
### Define roles
@ -150,7 +140,7 @@ simple role for the ops team:
4. On the **Operations** tab, check all **Service Operations**.
5. Click **Create**.
See: [Create and configure users and teams](./rbac-basics-define-roles.md).
See: [Create and configure users and teams](rbac-basics-define-roles.md).
### Grant access
@ -161,7 +151,7 @@ built-in role, **Swarm Deploy**.
acme-datacenter/ops + Swarm Deploy + /Shared/nginx-collection
```
See: [Grant role-access to cluster resources](./rbac-basics-grant-permissions.md).
See: [Grant role-access to cluster resources](rbac-basics-grant-permissions.md).
### Deploy Nginx
@ -181,101 +171,3 @@ service.
7. Log on to UCP as each user and ensure that:
- `dba` (alex) cannot see `nginx-collection`.
- `dev` (bett) cannot see `nginx-collection`.
{% elsif include.version=="ucp-2.2" %}
This tutorial explains how to deploy an NGINX web server and limit access to one
team with role-based access control (RBAC).
## Scenario
You are the Docker EE system administrator at Acme Company and need to configure
permissions to company resources. The best way to do this is to:
- Build the organization with teams and users.
- Define roles with allowable operations per resource types (can run containers, etc.).
- Create collections or namespaces for storing actual resources.
- Create grants that join team + role + resources.
## Build the organization
Add the organization, `acme-datacenter`, and create three teams according to the
following structure:
```
acme-datacenter
├── dba
│   └── Alex Alutin
├── dev
│   └── Bett Bhatia
└── ops
    └── Chad Chavez
```
See: [Create and configure users and teams](./rbac-basics-create-subjects.md).
## Swarm Stack
In this section, we deploy `nginx` as a Swarm service.
### Create collection paths
Create a collection for nginx resources, nested under the `/Shared` collection:
```
/
├── System
└── Shared
    └── nginx-collection
```
> **Tip**: To drill into a collection, click **View Children**.
See: [Group and isolate cluster resources](./rbac-basics-group-resources.md).
### Define roles
You can use the built-in roles or define your own. For this exercise, create a
simple role for the ops team:
1. Click **Roles** under **User Management**.
2. Click **Create Role**.
3. On the **Details** tab, name the role `Swarm Deploy`.
4. On the **Operations** tab, check all **Service Operations**.
5. Click **Create**.
See: [Create and configure users and teams](./rbac-basics-define-roles.md).
### Grant access
Grant the ops team (and only the ops team) access to nginx-collection with the
built-in role, **Swarm Deploy**.
```
acme-datacenter/ops + Swarm Deploy + /Shared/nginx-collection
```
See: [Grant role-access to cluster resources](./rbac-basics-grant-permissions.md).
### Deploy Nginx
You've configured Docker EE. The `ops` team can now deploy an `nginx` Swarm
service.
1. Log on to UCP as chad (on the `ops` team).
2. Click **Swarm** > **Services**.
3. Click **Create Stack**.
4. On the Details tab, enter:
- Name: `nginx-service`
- Image: `nginx:latest`
5. On the Collections tab:
- Click `/Shared` in the breadcrumbs.
- Select `nginx-collection`.
6. Click **Create**.
7. Log on to UCP as each user and ensure that:
- `dba` (alex) cannot see `nginx-collection`.
- `dev` (bett) cannot see `nginx-collection`.
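The UI steps above can also be sketched from the CLI with a UCP client bundle for an `ops` user (hedged; the label value mirrors the collection created earlier):

```shell
# Create the service and place it in the team's collection.
docker service create --name nginx-service \
  --label com.docker.ucp.access.label="/Shared/nginx-collection" \
  nginx:latest
```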
{% endif %}
{% endif %}


@ -2,22 +2,8 @@
title: Isolate Swarm nodes in Docker Advanced
description: Create grants that limit access to nodes to specific teams.
keywords: ucp, grant, role, permission, authentication
redirect_from:
- /ucp/
ui_tabs:
- version: ucp-3.0
orhigher: true
- version: ucp-2.2
orlower: true
next_steps:
- path: /deploy/rbac/rbac-howto-isolate-volumes/
title: Isolate Volumes
---
{% if include.ui %}
{% if include.version=="ucp-3.0" %}
With Docker EE Advanced, you can enable physical isolation of resources
by organizing nodes into collections and granting `Scheduler` access for
different users. To control access to nodes, move them to dedicated collections
@ -42,7 +28,7 @@ complete this example.
In the web UI, navigate to the **Organizations & Teams** page to create a team
named "Ops" in your organization. Add a user who isn't a UCP administrator to
the team.
[Learn to create and manage teams](create-and-manage-teams.md).
[Learn to create and manage teams](rbac-basics-create-subjects.md).
## Create a node collection and a resource collection
@ -180,170 +166,6 @@ collection. In this case, the user sets the value of the service's access label,
`com.docker.ucp.access.label`, to the new collection or one of its children
that has a `Service Create` grant for the user.
## Next steps
{% elsif include.version=="ucp-2.2" %}
With Docker EE Advanced, you can enable physical isolation of resources
by organizing nodes into collections and granting `Scheduler` access for
different users. To control access to nodes, move them to dedicated collections
where you can grant access to specific users, teams, and organizations.
In this example, a team gets access to a node collection and a resource
collection, and UCP access control ensures that the team members can't view
or use swarm resources that aren't in their collection.
You need a Docker EE Advanced license and at least two worker nodes to
complete this example.
1. Create an `Ops` team and assign a user to it.
2. Create a `/Prod` collection for the team's node.
3. Assign a worker node to the `/Prod` collection.
4. Grant the `Ops` teams access to its collection.
![](../images/isolate-nodes-diagram.svg){: .with-border}
## Create a team
In the web UI, navigate to the **Organizations & Teams** page to create a team
named "Ops" in your organization. Add a user who isn't a UCP administrator to
the team.
[Learn to create and manage teams](create-and-manage-teams.md).
## Create a node collection and a resource collection
In this example, the Ops team uses an assigned group of nodes, which it
accesses through a collection. Also, the team has a separate collection
for its resources.
Create two collections: one for the team's worker nodes and another for the
team's resources.
1. Navigate to the **Collections** page to view all of the resource
collections in the swarm.
2. Click **Create collection** and name the new collection "Prod".
3. Click **Create** to create the collection.
4. Find **Prod** in the list, and click **View children**.
5. Click **Create collection**, and name the child collection
"Webserver". This creates a sub-collection for access control.
You've created two new collections. The `/Prod` collection is for the worker
nodes, and the `/Prod/Webserver` sub-collection is for access control to
an application that you'll deploy on the corresponding worker nodes.
## Move a worker node to a collection
By default, worker nodes are located in the `/Shared` collection.
Worker nodes that are running DTR are assigned to the `/System` collection.
To control access to the team's nodes, move them to a dedicated collection.
Move a worker node by changing the value of its access label key,
`com.docker.ucp.access.label`, to a different collection.
1. Navigate to the **Nodes** page to view all of the nodes in the swarm.
2. Click a worker node, and in the details pane, find its **Collection**.
If it's in the `/System` collection, click another worker node,
because you can't move nodes that are in the `/System` collection. By
default, worker nodes are assigned to the `/Shared` collection.
3. When you've found an available node, in the details pane, click
   **Configure**.
4. In the **Labels** section, find `com.docker.ucp.access.label` and change
   its value from `/Shared` to `/Prod`.
5. Click **Save** to move the node to the `/Prod` collection.
> Docker EE Advanced required
>
> If you don't have a Docker EE Advanced license, you'll get the following
> error message when you try to change the access label:
> **Nodes must be in either the shared or system collection without an advanced license.**
> [Get a Docker EE Advanced license](https://www.docker.com/pricing).
![](../images/isolate-nodes-1.png){: .with-border}
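As a hedged CLI sketch of the same move (assumes an admin client bundle and a Docker EE Advanced license; `<node-name>` is a placeholder):

```shell
# Move a worker node into the /Prod collection by changing its
# access label. Nodes in the /System collection can't be moved.
docker node update \
  --label-add com.docker.ucp.access.label="/Prod" \
  <node-name>
```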
## Grant access for a team
You'll need two grants to control access to nodes and container resources:
- Grant the `Ops` team the `Restricted Control` role for the `/Prod/Webserver`
resources.
- Grant the `Ops` team the `Scheduler` role against the nodes in the `/Prod`
collection.
Create two grants for team access to the two collections:
1. Navigate to the **Grants** page and click **Create Grant**.
2. In the left pane, click **Collections**, and in the **Swarm** collection,
click **View Children**.
3. In the **Prod** collection, click **View Children**.
4. In the **Webserver** collection, click **Select Collection**.
5. In the left pane, click **Roles**, and select **Restricted Control**
in the dropdown.
6. Click **Subjects**, and under **Select subject type**, click **Organizations**.
7. Select your organization, and in the **Team** dropdown, select **Ops**.
8. Click **Create** to grant the Ops team access to the `/Prod/Webserver`
collection.
The same steps apply for the nodes in the `/Prod` collection.
1. Navigate to the **Grants** page and click **Create Grant**.
2. In the left pane, click **Collections**, and in the **Swarm** collection,
click **View Children**.
3. In the **Prod** collection, click **Select Collection**.
4. In the left pane, click **Roles**, and in the dropdown, select **Scheduler**.
5. In the left pane, click **Subjects**, and under **Select subject type**, click
**Organizations**.
6. Select your organization, and in the **Team** dropdown, select **Ops**.
7. Click **Create** to grant the Ops team `Scheduler` access to the nodes in the
`/Prod` collection.
![](../images/isolate-nodes-2.png){: .with-border}
## Deploy a service as a team member
Your swarm is ready to show role-based access control in action. When a user
deploys a service, UCP assigns its resources to the user's default collection.
From the target collection of a resource, UCP walks up the ancestor collections
until it finds nodes that the user has `Scheduler` access to. In this example,
UCP assigns the user's service to the `/Prod/Webserver` collection and schedules
tasks on nodes in the `/Prod` collection.
As a user on the Ops team, set your default collection to `/Prod/Webserver`.
1. Log in as a user on the Ops team.
2. Navigate to the **Collections** page, and in the **Prod** collection,
click **View Children**.
3. In the **Webserver** collection, click the **More Options** icon and
select **Set to default**.
Next, deploy a service. All of its resources are deployed under the user's
default collection, `/Prod/Webserver`, and its containers are scheduled only on
the nodes under `/Prod`.
1. Navigate to the **Services** page, and click **Create Service**.
2. Name the service "NGINX", use the "nginx:latest" image, and click
**Create**.
3. When the **nginx** service status is green, click the service. In the
details view, click **Inspect Resource**, and in the dropdown, select
**Containers**.
4. Click the **NGINX** container, and in the details pane, confirm that its
**Collection** is **/Prod/Webserver**.
![](../images/isolate-nodes-3.png){: .with-border}
5. Click **Inspect Resource**, and in the dropdown, select **Nodes**.
6. Click the node, and in the details pane, confirm that its **Collection**
is **/Prod**.
![](../images/isolate-nodes-4.png){: .with-border}
## Alternative: Use a grant instead of the default collection
Another approach is to use a grant instead of changing the user's default
collection. An administrator can create a grant for a role that has the
`Service Create` permission against the `/Prod/Webserver` collection or a child
collection. In this case, the user sets the value of the service's access label,
`com.docker.ucp.access.label`, to the new collection or one of its children
that has a `Service Create` grant for the user.
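Under that alternative, the deployment can be sketched as follows (hedged; assumes a grant with `Service Create` on `/Prod/Webserver`):

```shell
# Target the granted collection directly instead of relying on the
# user's default collection.
docker service create --name NGINX \
  --label com.docker.ucp.access.label="/Prod/Webserver" \
  nginx:latest
```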
{% endif %}
{% endif %}
* [Isolate volumes](rbac-howto-isolate-volumes.md)


@ -2,22 +2,8 @@
title: Isolate volumes to a specific team
description: Create grants that limit access to volumes to specific teams.
keywords: ucp, grant, role, permission, authentication
redirect_from:
- /ucp/
ui_tabs:
- version: ucp-3.0
orhigher: true
- version: ucp-2.2
orlower: true
next_steps:
- path: /deploy/rbac/rbac-howto-isolate-nodes/
title: Isolate Swarm nodes in Docker Advanced
---
{% if include.ui %}
{% if include.version=="ucp-3.0" %}
In this example, two teams are granted access to volumes in two different
resource collections. UCP access control prevents the teams from viewing and
accessing each other's volumes, even though they may be located in the same
@ -35,7 +21,7 @@ nodes.
Navigate to the **Organizations & Teams** page to create two teams in your
organization, named "Dev" and "Prod". Add a user who's not a UCP administrator
to the Dev team, and add another non-admin user to the Prod team.
[Learn how to create and manage teams](./rbac-basics-create-subjects.md).
[Learn how to create and manage teams](rbac-basics-create-subjects.md).
## Create resource collections
@ -105,95 +91,6 @@ created by the Dev and Prod users.
![](../images/isolate-volumes-4.png){: .with-border}
## Next steps
{% elsif include.version=="ucp-2.2" %}
In this example, two teams are granted access to volumes in two different
resource collections. UCP access control prevents the teams from viewing and
accessing each other's volumes, even though they may be located in the same
nodes.
1. Create two teams.
2. Create two collections, one for each team.
3. Create grants to manage access to the collections.
4. Team members create volumes that are specific to their team.
![](../images/isolate-volumes-diagram.svg){: .with-border}
## Create two teams
Navigate to the **Organizations & Teams** page to create two teams in your
organization, named "Dev" and "Prod". Add a user who's not a UCP administrator
to the Dev team, and add another non-admin user to the Prod team.
[Learn how to create and manage teams](./rbac-basics-create-subjects.md).
## Create resource collections
In this example, the Dev and Prod teams use two different volumes, which they
access through two corresponding resource collections. The collections are
placed under the `/Shared` collection.
1. In the left pane, click **Collections** to show all of the resource
   collections in the swarm.
2. Find the **/Shared** collection and click **View children**.
3. Click **Create collection** and name the new collection "dev-volumes".
4. Click **Create** to create the collection.
5. Click **Create collection** again, name the new collection "prod-volumes",
   and click **Create**.
## Create grants for controlling access to the new volumes
In this example, the Dev team gets access to its volumes from a grant that
associates the team with the `/Shared/dev-volumes` collection, and the Prod
team gets access to its volumes from another grant that associates the team
with the `/Shared/prod-volumes` collection.
1. Navigate to the **Grants** page and click **Create Grant**.
2. In the left pane, click **Collections**, and in the **Swarm** collection,
   click **View Children**.
3. In the **Shared** collection, click **View Children**.
4. In the list, find **/Shared/dev-volumes** and click **Select Collection**.
5. Click **Roles**, and in the dropdown, select **Restricted Control**.
6. Click **Subjects**, and under **Select subject type**, click **Organizations**.
   In the dropdown, pick your organization, and in the **Team** dropdown,
   select **Dev**.
7. Click **Create** to grant permissions to the Dev team.
8. Click **Create Grant** and repeat the previous steps for the **/Shared/prod-volumes**
   collection and the Prod team.
![](../images/isolate-volumes-1.png){: .with-border}
With the collections and grants in place, users can sign in and create volumes
in their assigned collections.
## Create a volume as a team member
Team members have permission to create volumes in their assigned collection.
1. Log in as one of the users on the Dev team.
2. Navigate to the **Volumes** page to view all of the volumes in the swarm
   that the user can access.
3. Click **Create volume** and name the new volume "dev-data".
4. In the left pane, click **Collections**. The default collection appears.
   At the top of the page, click **Shared**, find the **dev-volumes**
   collection in the list, and click **Select Collection**.
5. Click **Create** to add the "dev-data" volume to the collection.
6. Log in as one of the users on the Prod team, and repeat the previous steps
   to create a "prod-data" volume assigned to the `/Shared/prod-volumes`
   collection.
![](../images/isolate-volumes-2.png){: .with-border}
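The same volumes can be sketched from the CLI (hedged; assumes a client bundle for a Dev-team user covered by the grant above):

```shell
# Create the team volume directly in the team's collection.
docker volume create \
  --label com.docker.ucp.access.label="/Shared/dev-volumes" \
  dev-data
```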
Now you can see role-based access control in action for volumes. The user on
the Prod team can't see the Dev team's volumes, and if you log in again as a
user on the Dev team, you won't see the Prod team's volumes.
![](../images/isolate-volumes-3.png){: .with-border}
Sign in with a UCP administrator account, and you see all of the volumes
created by the Dev and Prod users.
![](../images/isolate-volumes-4.png){: .with-border}
{% endif %}
{% endif %}
* [Isolate Swarm nodes in Docker Advanced](rbac-howto-isolate-nodes.md)


@ -2,19 +2,8 @@
title: Access control design with Docker EE Standard
description: Learn how to architect multitenancy by using Docker Enterprise Edition Advanced.
keywords: authorize, authentication, users, teams, groups, sync, UCP, role, access control
redirect_from:
- /ucp/
ui_tabs:
- version: ucp-3.0
orhigher: true
- version: ucp-2.2
orlower: true
next_steps:
- path: /deploy/rbac/rbac-howto-orcabank1-advanced/
title: Access control design with Docker EE Advanced
---
{% if include.ui %}
[Collections and grants](index.md) are strong tools that can be used to control
access and visibility to resources in UCP.
@ -22,8 +11,6 @@ This tutorial describes a fictitious company named OrcaBank that needs to
configure an architecture in UCP with role-based access control (RBAC) for
their application engineering group.
{% if include.version=="ucp-3.0" %}
## Team access requirements
OrcaBank reorganized their application teams by product with each team providing
@ -141,124 +128,6 @@ minus the database tier that is managed by the `db` team.
![image](../images/design-access-control-adv-4.png){: .with-border}
## Next steps
{% elsif include.version=="ucp-2.2" %}
## Team access requirements
OrcaBank reorganized their application teams by product with each team providing
shared services as necessary. Developers at OrcaBank do their own DevOps and
deploy and manage the lifecycle of their applications.
OrcaBank has four teams with the following resource needs:
- `security` should have view-only access to all applications in the cluster.
- `db` should have full access to all database applications and resources. See
[DB Team](#db-team).
- `mobile` should have full access to their mobile applications and limited
access to shared `db` services. See [Mobile Team](#mobile-team).
- `payments` should have full access to their payments applications and limited
access to shared `db` services.
## Role composition
To assign the proper access, OrcaBank is employing a combination of default
and custom roles:
- `View Only` (default role) allows users to see all resources (but not edit or use).
- `Ops` (custom role) allows users to perform all operations against configs,
containers, images, networks, nodes, secrets, services, and volumes.
- `View & Use Networks + Secrets` (custom role) enables users to view/connect to
networks and view/use secrets used by `db` containers, but prevents them from
seeing or impacting the `db` applications themselves.
![image](../images/design-access-control-adv-0.png){: .with-border}
## Collection architecture
OrcaBank is also creating collections of resources to mirror their team
structure.
Currently, all OrcaBank applications share the same physical resources, so all
nodes and applications are being configured in collections that nest under the
built-in collection, `/Shared`.
Other collections are also being created to enable shared `db` applications.
> **Note:** For increased security with node-based isolation, use Docker
> Enterprise Advanced.
- `/Shared/mobile` hosts all Mobile applications and resources.
- `/Shared/payments` hosts all Payments applications and resources.
- `/Shared/db` is a top-level collection for all `db` resources.
- `/Shared/db/payments` is a collection of `db` resources for Payments applications.
- `/Shared/db/mobile` is a collection of `db` resources for Mobile applications.
The collection architecture has the following tree representation:
```
/
├── System
└── Shared
    ├── mobile
    ├── payments
    └── db
        ├── mobile
        └── payments
```
OrcaBank's [Grant composition](#grant-composition) ensures that their collection
architecture gives the `db` team access to _all_ `db` resources and restricts
app teams to _shared_ `db` resources.
## LDAP/AD integration
OrcaBank has standardized on LDAP for centralized authentication to help their
identity team scale across all the platforms they manage.
To implement LDAP authentication in UCP, OrcaBank is using UCP's native LDAP/AD
integration to map LDAP groups directly to UCP teams. Users can be added to or
removed from UCP teams via LDAP, which can be managed centrally by OrcaBank's
identity team.
The following grant composition shows how LDAP groups are mapped to UCP teams.
## Grant composition
OrcaBank is taking advantage of the flexibility in UCP's grant model by applying
two grants to each application team. One grant allows each team to fully
manage the apps in their own collection, and the second grant gives them the
(limited) access they need to networks and secrets within the `db` collection.
![image](../images/design-access-control-adv-1.png){: .with-border}
## OrcaBank access architecture
OrcaBank's resulting access architecture shows applications connecting across
collection boundaries. By assigning multiple grants per team, the Mobile and
Payments applications teams can connect to dedicated Database resources through
a secure and controlled interface, leveraging Database networks and secrets.
> **Note:** In Docker Enterprise Standard, all resources are deployed across the
> same group of UCP worker nodes. Node segmentation is provided in Docker
> Enterprise Advanced and discussed in the [next tutorial](rbac-howto-orcabank1-advanced.md).
![image](../images/design-access-control-adv-2.png){: .with-border}
### DB team
The `db` team is responsible for deploying and managing the full lifecycle
of the databases used by the application teams. They can execute the full set of
operations against all database resources.
![image](../images/design-access-control-adv-3.png){: .with-border}
### Mobile team
The `mobile` team is responsible for deploying their own application stack,
minus the database tier that is managed by the `db` team.
![image](../images/design-access-control-adv-4.png){: .with-border}
{% endif %}
{% endif %}
* [Access control design with Docker EE Advanced](rbac-howto-orcabank1-advanced.md)


@ -2,20 +2,9 @@
title: Access control design with Docker EE Advanced
description: Learn how to architect multitenancy with Docker Enterprise Edition Advanced.
keywords: authorize, authentication, users, teams, groups, sync, UCP, role, access control
redirect_from:
- /ucp/
ui_tabs:
- version: ucp-3.0
orhigher: true
- version: ucp-2.2
orlower: true
next_steps:
- path: /deploy/rbac/rbac-howto-orcabank1-standard/
title: Access control design with Docker EE Standard
---
{% if include.ui %}
Go through the [Docker Enterprise Standard tutorial](access-control-design-ee-standard.md),
Go through the [Docker Enterprise Standard tutorial](rbac-howto-orcabank1-standard.md),
before continuing here with Docker Enterprise Advanced.
In the first tutorial, the fictional company, OrcaBank, designed an architecture
@ -32,16 +21,13 @@ apps from their dev cluster to staging for testing, and then to production.
Second, production applications are no longer permitted to share any physical
infrastructure with non-production infrastructure. OrcaBank segments the
scheduling and access of applications with [Node Access Control](access-control-node.md).
scheduling and access of applications with [Node Access Control](rbac-howto-isolate-nodes.md).
> [Node Access Control](access-control-node.md) is a feature of Docker EE
> [Node Access Control](rbac-howto-isolate-nodes.md) is a feature of Docker EE
> Advanced and provides secure multi-tenancy with node-based isolation. Nodes
> can be placed in different collections so that resources can be scheduled and
> isolated on disparate physical or virtual hardware resources.
{% if include.version=="ucp-3.0" %}
## Team access requirements
OrcaBank still has three application teams, `payments`, `mobile`, and `db` with
@ -148,113 +134,6 @@ that are provided by the `db` team.
![image](../images/design-access-control-adv-mobile.png){: .with-border}
## Next steps
{% elsif include.version=="ucp-2.2" %}
## Team access requirements
OrcaBank still has three application teams, `payments`, `mobile`, and `db` with
varying levels of segmentation between them.
Their RBAC redesign is going to organize their UCP cluster into two top-level
collections, staging and production, which are completely separate security
zones on separate physical infrastructure.
OrcaBank's four teams now have different needs in production and staging:
- `security` should have view-only access to all applications in production (but
not staging).
- `db` should have full access to all database applications and resources in
production (but not staging). See [DB Team](#db-team).
- `mobile` should have full access to their Mobile applications in both
production and staging and limited access to shared `db` services. See
[Mobile Team](#mobile-team).
- `payments` should have full access to their Payments applications in both
production and staging and limited access to shared `db` services.
## Role composition
OrcaBank has decided to replace their custom `Ops` role with the built-in
`Full Control` role.
- `View Only` (default role) allows users to see but not edit all Swarm
resources.
- `Full Control` (default role) allows users complete control of all collections
granted to them. They can also create containers without restriction but
cannot see the containers of other users.
- `View & Use Networks + Secrets` (custom role) enables users to view/connect
to networks and view/use secrets used by `db` containers, but prevents them
from seeing or impacting the `db` applications themselves.
![image](../images/design-access-control-adv-0.png){: .with-border}
## Collection architecture
In the previous tutorial, OrcaBank created separate collections for each
application team and nested them all under `/Shared`.
To meet their new security requirements for production, OrcaBank is redesigning
collections in two ways:
- Adding collections for both the production and staging zones, and nesting a
set of application collections under each.
- Segmenting nodes. Both the production and staging zones will have dedicated
nodes; and in production, each application will be on a dedicated node.
The collection architecture now has the following tree representation:
```
/
├── System
├── Shared
├── prod
│   ├── mobile
│   ├── payments
│   └── db
│       ├── mobile
│       └── payments
└── staging
    ├── mobile
    └── payments
```
## Grant composition
OrcaBank must now diversify their grants further to ensure the proper division
of access.
The `payments` and `mobile` application teams will have three grants each: one
for deploying to production, one for deploying to staging, and the same grant to
access shared `db` networks and secrets.
![image](../images/design-access-control-adv-grant-composition.png){: .with-border}
## OrcaBank access architecture
The resulting access architecture, designed with Docker EE Advanced, provides
physical segmentation between production and staging using node access control.
Applications are scheduled only on UCP worker nodes in the dedicated application
collection. And applications use shared resources across collection boundaries
to access the databases in the `/prod/db` collection.
![image](../images/design-access-control-adv-architecture.png){: .with-border}
### DB team
The OrcaBank `db` team is responsible for deploying and managing the full
lifecycle of the databases that are in production. They have the full set of
operations against all database resources.
![image](../images/design-access-control-adv-db.png){: .with-border}
### Mobile team
The `mobile` team is responsible for deploying their full application stack in
staging. In production they deploy their own applications but use the databases
that are provided by the `db` team.
![image](../images/design-access-control-adv-mobile.png){: .with-border}
{% endif %}
{% endif %}
* [Access control design with Docker EE Standard](rbac-howto-orcabank1-standard.md)