Gwendolynne Barr 2017-11-30 21:12:58 -08:00 committed by Jim Galasyn
parent d09719ebd4
commit 9421b2d7cb
26 changed files with 197 additions and 2797 deletions


@ -315,36 +315,34 @@ guides:
section:
- path: /deploy/rbac/
title: Access control model overview
- sectiontitle: User Management
- sectiontitle: The basics
section:
- path: /deploy/rbac/usermgmt-create-subjects/
- path: /deploy/rbac/basics-create-subjects/
title: Create users and teams
- path: /deploy/rbac/usermgmt-sync-with-ldap/
title: Synchronize teams with LDAP
- path: /deploy/rbac/usermgmt-define-roles/
- path: /deploy/rbac/basics-define-roles/
title: Define roles with permissions
- path: /deploy/rbac/usermgmt-grant-permissions/
- path: /deploy/rbac/basics-group-resources/
title: Group cluster resources
- path: /deploy/rbac/basics-grant-permissions/
title: Grant access to resources
- path: /deploy/rbac/usermgmt-recover-password/
title: Reset user passwords
- sectiontitle: Shared Resources
- sectiontitle: Tutorials and use cases
section:
- path: /deploy/rbac/resources-group-resources/
title: Group resources as collections or namespaces
- path: /deploy/rbac/resources-isolate-nodes/
title: Isolate nodes
- path: /deploy/rbac/resources-isolate-volumes/
- path: /deploy/rbac/howto-deploy-stateless-app/
title: Deploy stateless app with RBAC
- path: /deploy/rbac/basics-isolate-volumes/
title: Isolate volumes
- sectiontitle: RBAC Tutorials
section:
- path: /deploy/rbac/howto-wordpress-multitier/
title: WordPress and MySQL tutorial
- path: /deploy/rbac/howto-wordpress-view-only/
title: Deploy service with view-only access
- path: /deploy/rbac/basics-isolate-nodes/
title: Isolate nodes
- path: /deploy/rbac/howto-orcabank1-standard/
title: Orcabank example with Docker EE Standard
- path: /deploy/rbac/howto-orcabank2-advanced/
title: Orcabank example with Docker EE Advanced
- sectiontitle: User admin
section:
- path: /deploy/rbac/admin-sync-with-ldap/
title: Synchronize teams with LDAP
- path: /deploy/rbac/admin-recover-password/
title: Reset user passwords
- sectiontitle: Deploy your app in production
section:


@ -1,255 +0,0 @@
---
title: Access control design with Docker EE Advanced
description: Learn how to architect multitenancy with Docker Enterprise Edition Advanced.
keywords: authorize, authentication, users, teams, groups, sync, UCP, role, access control
redirect_from:
- /ucp/
ui_tabs:
- version: ucp-3.0
orhigher: true
- version: ucp-2.2
orlower: true
---
{% if include.ui %}
Go through the [Docker Enterprise Standard tutorial](access-control-design-ee-standard.md)
before continuing here with Docker Enterprise Advanced.
In the first tutorial, the fictional company, OrcaBank, designed an architecture
with role-based access control (RBAC) to meet their organization's security
needs. They assigned multiple grants to fine-tune access to resources across
collection boundaries on a single platform.
In this tutorial, OrcaBank implements new and more stringent security
requirements for production applications:
First, OrcaBank adds a staging zone to their deployment model. They no longer
move developed applications directly into production. Instead, they deploy apps
from their dev cluster to staging for testing, and then to production.
Second, production applications are no longer permitted to share any physical
infrastructure with non-production infrastructure. OrcaBank segments the
scheduling and access of applications with [Node Access Control](access-control-node.md).
> [Node Access Control](access-control-node.md) is a feature of Docker EE
> Advanced and provides secure multi-tenancy with node-based isolation. Nodes
> can be placed in different collections so that resources can be scheduled and
> isolated on disparate physical or virtual hardware resources.
{% if include.version=="ucp-3.0" %}
## Team access requirements
OrcaBank still has three application teams, `payments`, `mobile`, and `db` with
varying levels of segmentation between them.
Their RBAC redesign is going to organize their UCP cluster into two top-level
collections, staging and production, which are completely separate security
zones on separate physical infrastructure.
OrcaBank's four teams now have different needs in production and staging:
- `security` should have view-only access to all applications in production (but
not staging).
- `db` should have full access to all database applications and resources in
production (but not staging). See [DB Team](#db-team).
- `mobile` should have full access to their Mobile applications in both
production and staging and limited access to shared `db` services. See
[Mobile Team](#mobile-team).
- `payments` should have full access to their Payments applications in both
production and staging and limited access to shared `db` services.
## Role composition
OrcaBank has decided to replace their custom `Ops` role with the built-in
`Full Control` role.
- `View Only` (default role) allows users to see but not edit all Swarm
resources.
- `Full Control` (default role) allows users complete control of all collections
granted to them. They can also create containers without restriction but
cannot see the containers of other users.
- `View & Use Networks + Secrets` (custom role) enables users to view/connect
to networks and view/use secrets used by `db` containers, but prevents them
from seeing or impacting the `db` applications themselves.
![image](../images/design-access-control-adv-0.png){: .with-border}
## Collection architecture
In the previous tutorial, OrcaBank created separate collections for each
application team and nested them all under `/Shared`.
To meet their new security requirements for production, OrcaBank is redesigning
collections in two ways:
- Adding collections for both the production and staging zones, and nesting a
set of application collections under each.
- Segmenting nodes. Both the production and staging zones will have dedicated
nodes; and in production, each application will be on a dedicated node.
The collection architecture now has the following tree representation:
```
/
├── System
├── Shared
├── prod
│   ├── mobile
│   ├── payments
│   └── db
│       ├── mobile
│       └── payments
│
└── staging
    ├── mobile
    └── payments
```
## Grant composition
OrcaBank must now diversify their grants further to ensure the proper division
of access.
The `payments` and `mobile` application teams will each have three grants: one
for deploying to production, one for deploying to staging, and one for access
to the shared `db` networks and secrets.
![image](../images/design-access-control-adv-grant-composition.png){: .with-border}
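As a purely illustrative sketch (not UCP code), the grant composition above can be modeled as a small lookup: each grant ties a team and a role to a collection, and a grant on a collection applies to everything nested beneath it. All names and the path-prefix rule are assumptions made for illustration.

```python
# Toy model of OrcaBank's grants; illustrative only, not the UCP implementation.
GRANTS = [
    # (team, role, collection)
    ("payments", "Full Control", "/prod/payments"),
    ("payments", "Full Control", "/staging/payments"),
    ("payments", "View & Use Networks + Secrets", "/prod/db/payments"),
    ("mobile", "Full Control", "/prod/mobile"),
    ("mobile", "Full Control", "/staging/mobile"),
    ("mobile", "View & Use Networks + Secrets", "/prod/db/mobile"),
    ("db", "Full Control", "/prod/db"),
    ("security", "View Only", "/prod"),
]

def roles_for(team, resource_path):
    """Return the roles a team holds on a resource, via its collection path."""
    return {
        role
        for t, role, collection in GRANTS
        if t == team
        and (resource_path == collection
             or resource_path.startswith(collection + "/"))
    }
```

For example, `roles_for("security", "/prod/payments/web")` yields only `View Only`, while `roles_for("mobile", "/staging/payments")` yields nothing at all, matching the segmentation described above.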
## OrcaBank access architecture
The resulting access architecture, designed with Docker EE Advanced, provides
physical segmentation between production and staging using node access control.
Applications are scheduled only on UCP worker nodes in the dedicated application
collection. And applications use shared resources across collection boundaries
to access the databases in the `/prod/db` collection.
![image](../images/design-access-control-adv-architecture.png){: .with-border}
### DB team
The OrcaBank `db` team is responsible for deploying and managing the full
lifecycle of the databases that are in production. They have the full set of
operations against all database resources.
![image](../images/design-access-control-adv-db.png){: .with-border}
### Mobile team
The `mobile` team is responsible for deploying their full application stack in
staging. In production they deploy their own applications but use the databases
that are provided by the `db` team.
![image](../images/design-access-control-adv-mobile.png){: .with-border}
{% elsif include.version=="ucp-2.2" %}
## Team access requirements
OrcaBank still has three application teams, `payments`, `mobile`, and `db` with
varying levels of segmentation between them.
Their RBAC redesign is going to organize their UCP cluster into two top-level
collections, staging and production, which are completely separate security
zones on separate physical infrastructure.
OrcaBank's four teams now have different needs in production and staging:
- `security` should have view-only access to all applications in production (but
not staging).
- `db` should have full access to all database applications and resources in
production (but not staging). See [DB Team](#db-team).
- `mobile` should have full access to their Mobile applications in both
production and staging and limited access to shared `db` services. See
[Mobile Team](#mobile-team).
- `payments` should have full access to their Payments applications in both
production and staging and limited access to shared `db` services.
## Role composition
OrcaBank has decided to replace their custom `Ops` role with the built-in
`Full Control` role.
- `View Only` (default role) allows users to see but not edit all Swarm
resources.
- `Full Control` (default role) allows users complete control of all collections
granted to them. They can also create containers without restriction but
cannot see the containers of other users.
- `View & Use Networks + Secrets` (custom role) enables users to view/connect
to networks and view/use secrets used by `db` containers, but prevents them
from seeing or impacting the `db` applications themselves.
![image](../images/design-access-control-adv-0.png){: .with-border}
## Collection architecture
In the previous tutorial, OrcaBank created separate collections for each
application team and nested them all under `/Shared`.
To meet their new security requirements for production, OrcaBank is redesigning
collections in two ways:
- Adding collections for both the production and staging zones, and nesting a
set of application collections under each.
- Segmenting nodes. Both the production and staging zones will have dedicated
nodes; and in production, each application will be on a dedicated node.
The collection architecture now has the following tree representation:
```
/
├── System
├── Shared
├── prod
│   ├── mobile
│   ├── payments
│   └── db
│       ├── mobile
│       └── payments
│
└── staging
    ├── mobile
    └── payments
```
## Grant composition
OrcaBank must now diversify their grants further to ensure the proper division
of access.
The `payments` and `mobile` application teams will each have three grants: one
for deploying to production, one for deploying to staging, and one for access
to the shared `db` networks and secrets.
![image](../images/design-access-control-adv-grant-composition.png){: .with-border}
## OrcaBank access architecture
The resulting access architecture, designed with Docker EE Advanced, provides
physical segmentation between production and staging using node access control.
Applications are scheduled only on UCP worker nodes in the dedicated application
collection. And applications use shared resources across collection boundaries
to access the databases in the `/prod/db` collection.
![image](../images/design-access-control-adv-architecture.png){: .with-border}
### DB team
The OrcaBank `db` team is responsible for deploying and managing the full
lifecycle of the databases that are in production. They have the full set of
operations against all database resources.
![image](../images/design-access-control-adv-db.png){: .with-border}
### Mobile team
The `mobile` team is responsible for deploying their full application stack in
staging. In production they deploy their own applications but use the databases
that are provided by the `db` team.
![image](../images/design-access-control-adv-mobile.png){: .with-border}
{% endif %}
{% endif %}


@ -1,265 +0,0 @@
---
title: Access control design with Docker EE Standard
description: Learn how to architect multitenancy by using Docker Enterprise Edition Standard.
keywords: authorize, authentication, users, teams, groups, sync, UCP, role, access control
redirect_from:
- /ucp/
ui_tabs:
- version: ucp-3.0
orhigher: true
- version: ucp-2.2
orlower: true
---
{% if include.ui %}
[Collections and grants](index.md) are strong tools that can be used to control
access and visibility to resources in UCP.
This tutorial describes a fictitious company named OrcaBank that needs to
configure an architecture in UCP with role-based access control (RBAC) for
their application engineering group.
{% if include.version=="ucp-3.0" %}
## Team access requirements
OrcaBank reorganized their application teams by product with each team providing
shared services as necessary. Developers at OrcaBank do their own DevOps and
deploy and manage the lifecycle of their applications.
OrcaBank has four teams with the following resource needs:
- `security` should have view-only access to all applications in the cluster.
- `db` should have full access to all database applications and resources. See
[DB Team](#db-team).
- `mobile` should have full access to their mobile applications and limited
access to shared `db` services. See [Mobile Team](#mobile-team).
- `payments` should have full access to their payments applications and limited
access to shared `db` services.
## Role composition
To assign the proper access, OrcaBank is employing a combination of default
and custom roles:
- `View Only` (default role) allows users to see all resources (but not edit or use).
- `Ops` (custom role) allows users to perform all operations against configs,
containers, images, networks, nodes, secrets, services, and volumes.
- `View & Use Networks + Secrets` (custom role) enables users to view/connect to
networks and view/use secrets used by `db` containers, but prevents them from
seeing or impacting the `db` applications themselves.
![image](../images/design-access-control-adv-0.png){: .with-border}
## Collection architecture
OrcaBank is also creating collections of resources to mirror their team
structure.
Currently, all OrcaBank applications share the same physical resources, so all
nodes and applications are being configured in collections that nest under the
built-in collection, `/Shared`.
Other collections are also being created to enable shared `db` applications.
> **Note:** For increased security with node-based isolation, use Docker
> Enterprise Advanced.
- `/Shared/mobile` hosts all Mobile applications and resources.
- `/Shared/payments` hosts all Payments applications and resources.
- `/Shared/db` is a top-level collection for all `db` resources.
- `/Shared/db/payments` is a collection of `db` resources for Payments applications.
- `/Shared/db/mobile` is a collection of `db` resources for Mobile applications.
The collection architecture has the following tree representation:
```
/
├── System
└── Shared
    ├── mobile
    ├── payments
    └── db
        ├── mobile
        └── payments
```
OrcaBank's [Grant composition](#grant-composition) ensures that their collection
architecture gives the `db` team access to _all_ `db` resources and restricts
app teams to _shared_ `db` resources.
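The nesting rule this relies on, that a grant on a collection also applies to every resource beneath it, can be sketched as a simple path check. This is an illustration of the behavior described in the text, not UCP's actual implementation:

```python
def is_within(resource_path: str, collection: str) -> bool:
    """True if the resource lives in the collection or any child collection."""
    return (resource_path == collection
            or resource_path.startswith(collection + "/"))

# The db team's grant on /Shared/db covers both nested db collections,
# while an app team granted only /Shared/db/mobile cannot reach
# /Shared/db/payments.
```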
## LDAP/AD integration
OrcaBank has standardized on LDAP for centralized authentication to help their
identity team scale across all the platforms they manage.
To implement LDAP authentication in UCP, OrcaBank is using UCP's native LDAP/AD
integration to map LDAP groups directly to UCP teams. Users can be added to or
removed from UCP teams via LDAP, which OrcaBank's identity team manages
centrally.
The following grant composition shows how LDAP groups are mapped to UCP teams.
## Grant composition
OrcaBank is taking advantage of the flexibility in UCP's grant model by applying
two grants to each application team. One grant allows each team to fully
manage the apps in their own collection, and the second grant gives them the
(limited) access they need to networks and secrets within the `db` collection.
![image](../images/design-access-control-adv-1.png){: .with-border}
## OrcaBank access architecture
OrcaBank's resulting access architecture shows applications connecting across
collection boundaries. By assigning multiple grants per team, the Mobile and
Payments applications teams can connect to dedicated Database resources through
a secure and controlled interface, leveraging Database networks and secrets.
> **Note:** In Docker Enterprise Standard, all resources are deployed across the
> same group of UCP worker nodes. Node segmentation is provided in Docker
> Enterprise Advanced and discussed in the [next tutorial](#).
![image](../images/design-access-control-adv-2.png){: .with-border}
### DB team
The `db` team is responsible for deploying and managing the full lifecycle
of the databases used by the application teams. They can execute the full set of
operations against all database resources.
![image](../images/design-access-control-adv-3.png){: .with-border}
### Mobile team
The `mobile` team is responsible for deploying their own application stack,
minus the database tier that is managed by the `db` team.
![image](../images/design-access-control-adv-4.png){: .with-border}
{% elsif include.version=="ucp-2.2" %}
## Team access requirements
OrcaBank reorganized their application teams by product with each team providing
shared services as necessary. Developers at OrcaBank do their own DevOps and
deploy and manage the lifecycle of their applications.
OrcaBank has four teams with the following resource needs:
- `security` should have view-only access to all applications in the cluster.
- `db` should have full access to all database applications and resources. See
[DB Team](#db-team).
- `mobile` should have full access to their mobile applications and limited
access to shared `db` services. See [Mobile Team](#mobile-team).
- `payments` should have full access to their payments applications and limited
access to shared `db` services.
## Role composition
To assign the proper access, OrcaBank is employing a combination of default
and custom roles:
- `View Only` (default role) allows users to see all resources (but not edit or use).
- `Ops` (custom role) allows users to perform all operations against configs,
containers, images, networks, nodes, secrets, services, and volumes.
- `View & Use Networks + Secrets` (custom role) enables users to view/connect to
networks and view/use secrets used by `db` containers, but prevents them from
seeing or impacting the `db` applications themselves.
![image](../images/design-access-control-adv-0.png){: .with-border}
## Collection architecture
OrcaBank is also creating collections of resources to mirror their team
structure.
Currently, all OrcaBank applications share the same physical resources, so all
nodes and applications are being configured in collections that nest under the
built-in collection, `/Shared`.
Other collections are also being created to enable shared `db` applications.
> **Note:** For increased security with node-based isolation, use Docker
> Enterprise Advanced.
- `/Shared/mobile` hosts all Mobile applications and resources.
- `/Shared/payments` hosts all Payments applications and resources.
- `/Shared/db` is a top-level collection for all `db` resources.
- `/Shared/db/payments` is a collection of `db` resources for Payments applications.
- `/Shared/db/mobile` is a collection of `db` resources for Mobile applications.
The collection architecture has the following tree representation:
```
/
├── System
└── Shared
    ├── mobile
    ├── payments
    └── db
        ├── mobile
        └── payments
```
OrcaBank's [Grant composition](#grant-composition) ensures that their collection
architecture gives the `db` team access to _all_ `db` resources and restricts
app teams to _shared_ `db` resources.
## LDAP/AD integration
OrcaBank has standardized on LDAP for centralized authentication to help their
identity team scale across all the platforms they manage.
To implement LDAP authentication in UCP, OrcaBank is using UCP's native LDAP/AD
integration to map LDAP groups directly to UCP teams. Users can be added to or
removed from UCP teams via LDAP, which OrcaBank's identity team manages
centrally.
The following grant composition shows how LDAP groups are mapped to UCP teams.
## Grant composition
OrcaBank is taking advantage of the flexibility in UCP's grant model by applying
two grants to each application team. One grant allows each team to fully
manage the apps in their own collection, and the second grant gives them the
(limited) access they need to networks and secrets within the `db` collection.
![image](../images/design-access-control-adv-1.png){: .with-border}
## OrcaBank access architecture
OrcaBank's resulting access architecture shows applications connecting across
collection boundaries. By assigning multiple grants per team, the Mobile and
Payments applications teams can connect to dedicated Database resources through
a secure and controlled interface, leveraging Database networks and secrets.
> **Note:** In Docker Enterprise Standard, all resources are deployed across the
> same group of UCP worker nodes. Node segmentation is provided in Docker
> Enterprise Advanced and discussed in the [next tutorial](#).
![image](../images/design-access-control-adv-2.png){: .with-border}
### DB team
The `db` team is responsible for deploying and managing the full lifecycle
of the databases used by the application teams. They can execute the full set of
operations against all database resources.
![image](../images/design-access-control-adv-3.png){: .with-border}
### Mobile team
The `mobile` team is responsible for deploying their own application stack,
minus the database tier that is managed by the `db` team.
![image](../images/design-access-control-adv-4.png){: .with-border}
{% endif %}
## Where to go next
- [Access control design with Docker EE Advanced](access-control-design-ee-advanced.md)
{% endif %}


@ -1,69 +0,0 @@
---
title: Node access control in Docker EE Advanced
description: Learn how to architect node access with Docker Enterprise Edition Advanced.
keywords: authorize, authentication, node, UCP, role, access control
redirect_from:
- /ucp/
ui_tabs:
- version: ucp-3.0
orhigher: true
- version: ucp-2.2
orlower: true
---
{% if include.ui %}
{% if include.version=="ucp-3.0" %}
*Node access control* lets you segment scheduling and visibility by node. Node access control is available in [Docker EE Advanced](https://www.docker.com/pricing) with Swarm orchestration only.
{% elsif include.version=="ucp-2.2" %}
*Node access control* lets you segment scheduling and visibility by node. Node access control is available in [Docker EE Advanced](https://www.docker.com/pricing).
{% endif %}
By default, non-infrastructure nodes (nodes that aren't running UCP or DTR) belong to a built-in
collection called `/Shared`. All application workloads in the cluster are
scheduled on nodes in the `/Shared` collection. This includes those deployed in
private collections (`/Shared/Private/`) or any other collection under
`/Shared`.
This setting is enabled by a built-in grant that assigns every UCP user the
`scheduler` capability against the `/Shared` collection.
Node Access Control works by placing nodes in custom collections (outside of
`/Shared`). If a user or team is granted a role with the `scheduler` capability
against a collection, then they can schedule containers and services on these
nodes.
In the following example, users with `scheduler` capability against
`/collection1` can schedule applications on those nodes.
Because these collections lie outside of the `/Shared` collection, users have
no access to them unless it is explicitly granted. Users without such a grant
can deploy applications only on nodes in the built-in `/Shared` collection.
![image](../images/design-access-control-adv-custom-grant.png){: .with-border}
The tree representation of this collection structure looks like this:
```
/
├── Shared
├── System
├── collection1
└── collection2
    ├── sub-collection1
    └── sub-collection2
```
With this use of collections, users, teams, and organizations can be
constrained to the nodes and physical infrastructure on which they can
deploy.
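A toy model of scheduling eligibility, assuming the built-in grant described above and simple path-prefix matching. The names and structure here are hypothetical, chosen only to illustrate the behavior:

```python
# Built-in grant: every UCP user holds the `scheduler` capability on /Shared.
BUILTIN_GRANTS = {("*", "scheduler", "/Shared")}

def can_schedule(user, node_collection, extra_grants=()):
    """Can `user` schedule workloads on nodes in `node_collection`?"""
    grants = BUILTIN_GRANTS | set(extra_grants)
    return any(
        subject in ("*", user)
        and capability == "scheduler"
        and (node_collection == collection
             or node_collection.startswith(collection + "/"))
        for subject, capability, collection in grants
    )
```

Without an extra grant, a user can schedule only on `/Shared` nodes; an explicit grant with the `scheduler` capability against `/collection1` opens up those nodes as well.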
## Where to go next
- [Isolate swarm nodes to a specific team](isolate-nodes-between-teams.md)
{% endif %}


@ -1,225 +0,0 @@
---
title: Deploy a service and restrict access with RBAC
description: Create a grant to control access to a service.
keywords: ucp, grant, role, permission, authentication
redirect_from:
- /ucp/
ui_tabs:
- version: ucp-3.0
orhigher: true
- version: ucp-2.2
orlower: true
---
{% if include.ui %}
{% if include.version=="ucp-3.0" %}
## Deploy Kubernetes workload and restrict access
This section is under construction.
## Deploy Swarm service and restrict access
In this example, your organization is granted access to a new resource
collection that contains one Swarm service.
1. Create an organization and a team.
2. Create a collection for the view-only service.
3. Deploy a Swarm service.
4. Create a grant to manage user access to the collection.
![](../images/view-only-access-diagram.svg)
### Create an organization
Create an organization with one team, and add one user who isn't an administrator
to the team.
1. Log in to UCP as an administrator.
2. Navigate to the **Organizations & Teams** page and click
**Create Organization**. Name the new organization `engineering` and
click **Create**.
3. Click **Create Team**, name the new team `Dev`, and click **Create**.
4. Add a non-admin user to the Dev team.
For more details, see [Learn how to create users and teams](usermgmt-create-subjects.md).
### Create a collection for the service
1. Navigate to the **Collections** page to view all of the resource
collections in the swarm.
2. Find the **Shared** collection and click **View children**.
3. Click **Create collection** and name the collection `View-only services`.
4. Click **Create** to create the collection.
![](../images/deploy-view-only-service-1.png)
The `/Shared/View-only services` collection is ready to use for access
control.
### Deploy a service
Currently, the new collection has no resources assigned to it. To access
resources through this collection, deploy a new service and add it to the
collection.
1. Navigate to the **Services** page and create a new service, named
`WordPress`.
2. In the **Image** textbox, enter `wordpress:latest`. This identifies the
most recent WordPress image in the Docker Store.
3. In the left pane, click **Collection**. The **Swarm** collection appears.
4. Click **View children** to list all of the collections. In **Shared**,
click **View children**, find the **View-only services** collection, and
select it.
5. Click **Create** to add the "WordPress" service to the collection and
deploy it.
![](../images/deploy-view-only-service-3.png)
You're ready to create a grant for controlling access to the "WordPress" service.
### Create a grant
Currently, users who aren't administrators can't access the
`/Shared/View-only services` collection. Create a grant to give the
`engineering` organization view-only access.
> A grant is made up of a *subject*, a *role*, and a *resource collection*.
1. Navigate to the **Grants** page and click **Create Grant**.
2. In the left pane, click **Collections**, navigate to **/Shared/View-only services**,
and click **Select Collection**.
3. Click **Roles**, and in the dropdown, select **View Only**.
4. Click **Subjects**, and under **Select subject type**, click **Organizations**.
In the dropdown, select **engineering**.
5. Click **Create** to grant permissions to the organization.
![](../images/deploy-view-only-service-4.png)
Everything is in place to show role-based access control in action.
### Verify the user's permissions
Users in the `engineering` organization have view-only access to the
`/Shared/View-only services` collection. You can confirm this by logging in
as a non-admin user in the organization and trying to delete the service.
1. Log in as the user who you assigned to the Dev team.
2. Navigate to the **Services** page and click **WordPress**.
3. In the details pane, confirm that the service's collection is
**/Shared/View-only services**.
![](../images/deploy-view-only-service-2.png)
4. Click the checkbox next to the **WordPress** service, click **Actions**,
and select **Remove**. You get an error message, because the user
doesn't have `Service Delete` access to the collection.
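The failure in the last step comes down to the role attached to the grant: `View Only` simply does not include the delete operation. The sketch below is hypothetical; the operation names follow this page's wording, and the mapping is illustrative rather than UCP's internal permission table:

```python
# Illustrative role-to-operations mapping; not UCP's actual permission table.
ROLE_OPERATIONS = {
    "View Only": {"Service View"},
    "Full Control": {"Service View", "Service Create", "Service Delete"},
}

def allowed(role, operation):
    """Does `role` permit `operation` on resources in the granted collection?"""
    return operation in ROLE_OPERATIONS.get(role, set())
```

Under this model, a Dev team member holding `View Only` on `/Shared/View-only services` can view the WordPress service but is refused when attempting removal.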
{% elsif include.version=="ucp-2.2" %}
## Deploy Swarm service and restrict access
In this example, your organization is granted access to a new resource
collection that contains one Swarm service.
1. Create an organization and a team.
2. Create a collection for the view-only service.
3. Deploy a Swarm service.
4. Create a grant to manage user access to the collection.
![](../images/view-only-access-diagram.svg)
### Create an organization
Create an organization with one team, and add one user who isn't an administrator
to the team.
1. Log in to UCP as an administrator.
2. Navigate to the **Organizations & Teams** page and click
**Create Organization**. Name the new organization `engineering` and
click **Create**.
3. Click **Create Team**, name the new team `Dev`, and click **Create**.
4. Add a non-admin user to the Dev team.
For more details, see [Learn how to create users and teams](usermgmt-create-subjects.md).
### Create a collection for the service
1. Navigate to the **Collections** page to view all of the resource
collections in the swarm.
2. Find the **Shared** collection and click **View children**.
3. Click **Create collection** and name the collection `View-only services`.
4. Click **Create** to create the collection.
![](../images/deploy-view-only-service-1.png)
The `/Shared/View-only services` collection is ready to use for access
control.
### Deploy a service
Currently, the new collection has no resources assigned to it. To access
resources through this collection, deploy a new service and add it to the
collection.
1. Navigate to the **Services** page and create a new service, named
`WordPress`.
2. In the **Image** textbox, enter `wordpress:latest`. This identifies the
most recent WordPress image in the Docker Store.
3. In the left pane, click **Collection**. The **Swarm** collection appears.
4. Click **View children** to list all of the collections. In **Shared**,
click **View children**, find the **View-only services** collection, and
select it.
5. Click **Create** to add the "WordPress" service to the collection and
deploy it.
![](../images/deploy-view-only-service-3.png)
You're ready to create a grant for controlling access to the "WordPress" service.
### Create a grant
Currently, users who aren't administrators can't access the
`/Shared/View-only services` collection. Create a grant to give the
`engineering` organization view-only access.
> A grant is made up of a *subject*, a *role*, and a *resource collection*.
1. Navigate to the **Grants** page and click **Create Grant**.
2. In the left pane, click **Collections**, navigate to **/Shared/View-only services**,
and click **Select Collection**.
3. Click **Roles**, and in the dropdown, select **View Only**.
4. Click **Subjects**, and under **Select subject type**, click **Organizations**.
In the dropdown, select **engineering**.
5. Click **Create** to grant permissions to the organization.
![](../images/deploy-view-only-service-4.png)
Everything is in place to show role-based access control in action.
### Verify the user's permissions
Users in the `engineering` organization have view-only access to the
`/Shared/View-only services` collection. You can confirm this by logging in
as a non-admin user in the organization and trying to delete the service.
1. Log in as the user who you assigned to the Dev team.
2. Navigate to the **Services** page and click **WordPress**.
3. In the details pane, confirm that the service's collection is
**/Shared/View-only services**.
![](../images/deploy-view-only-service-2.png)
4. Click the checkbox next to the **WordPress** service, click **Actions**,
and select **Remove**. You get an error message, because the user
doesn't have `Service Delete` access to the collection.
{% endif %}
## Where to go next
- [Isolate volumes between two different teams](isolate-volumes-between-teams.md)
{% endif %}

---
title: Deploy a simple WordPress application with RBAC
description: Learn how to deploy a simple application and customize access to resources.
keywords: rbac, authorize, authentication, users, teams, UCP, Docker
redirect_from:
- /ucp/
ui_tabs:
- version: ucp-3.0
orhigher: true
- version: ucp-2.2
orlower: true
---
{% if include.ui %}
{% if include.version=="ucp-3.0" %}
{% elsif include.version=="ucp-2.2" %}
{% endif %}
This tutorial explains how to create a simple application with two services,
WordPress and MySQL, and use role-based access control (RBAC) to authorize access
across the organization.
## Build the organization
Acme company wants to start a blog to better communicate with its users.
```
Acme Datacenter
├── Dev
│   └── Alex Alutin
└── DBA
  └── Brad Bhatia
```
## Deploy WordPress with MySQL
```
version: "3.1"
services:
db:
image: mysql:5.7
deploy:
replicas: 1
labels:
com.docker.ucp.access.label: "/Shared/acme-blog/mysql-collection"
restart_policy:
condition: on-failure
max_attempts: 3
volumes:
- db_data:/var/lib/mysql
networks:
- wordpress-net
environment:
MYSQL_ROOT_PASSWORD: wordpress
MYSQL_DATABASE: wordpress
MYSQL_USER: wordpress
MYSQL_PASSWORD: wordpress
wordpress:
depends_on:
- db
image: wordpress:latest
deploy:
replicas: 1
labels:
com.docker.ucp.access.label: "/Shared/acme-blog/wordpress-collection"
restart_policy:
condition: on-failure
max_attempts: 3
volumes:
- wordpress_data:/var/www/html
networks:
- wordpress-net
ports:
- "8000:80"
environment:
WORDPRESS_DB_HOST: db:3306
WORDPRESS_DB_PASSWORD: wordpress
volumes:
db_data:
wordpress_data:
networks:
wordpress-net:
labels:
com.docker.ucp.access.label: "/Shared/acme-blog"
```
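In the stack above, each service's `com.docker.ucp.access.label` decides which collection its resources land in, so the Dev and DBA teams can be granted access separately. One possible grant layout for this example, as an illustrative Python sketch (the role choice and helper are hypothetical, not UCP's API):

```python
# Hypothetical grant layout for the acme-blog stack (illustrative sketch,
# not UCP's API): each team is granted a role on the collection that its
# service's com.docker.ucp.access.label points at.

GRANTS = {
    # team: (role, collection)
    "Dev": ("Restricted Control", "/Shared/acme-blog/wordpress-collection"),
    "DBA": ("Restricted Control", "/Shared/acme-blog/mysql-collection"),
}

def can_manage(team, collection):
    """True if the team holds a grant on this exact collection."""
    grant = GRANTS.get(team)
    return grant is not None and grant[1] == collection

# Brad (DBA) can manage the MySQL collection but not the WordPress one.
print(can_manage("DBA", "/Shared/acme-blog/mysql-collection"))
print(can_manage("DBA", "/Shared/acme-blog/wordpress-collection"))
```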
{% endif %}

---
title: Access control model
description: Manage access to resources with role-based access control.
keywords: ucp, grant, role, permission, authentication, authorization
redirect_from:
- /ucp/
ui_tabs:
- version: ucp-3.0
orhigher: true
- version: ucp-2.2
orlower: true
next_steps:
- path: /deploy/access-control/
title: Create and manage users
- path: /deploy/access-control/
title: Create and manage teams
- path: /deploy/access-control/
title: Deploy a service with view-only access across an organization
- path: /deploy/access-control/
title: Isolate volumes between two different teams
- path: /deploy/access-control/
title: Isolate swarm nodes between two different teams
---
{% if include.ui %}
{% if include.version=="ucp-3.0" %}
With Docker Universal Control Plane (UCP), you can authorize how users view,
edit, and use cluster resources by granting role-based permissions. Resources
can be grouped according to an organization's needs and users can be granted
more than one role.
To authorize access to cluster resources across your organization, Docker
administrators might take the following high-level steps:
- Add and configure subjects (users and teams)
- Define custom roles (or use defaults) by adding permissions to resource types
- Group cluster resources into Swarm collections or Kubernetes namespaces
- Create grants by combining subject + role + resource
## Subjects
A subject represents a user, team, or organization. A subject is granted a
role that defines permitted operations against one or more resources.
- **User**: A person authenticated by the authentication backend. Users can
belong to one or more teams and one or more organizations.
- **Team**: A group of users that share permissions defined at the team level. A
team can be in one organization only.
- **Organization**: A group of teams that share a specific set of permissions,
defined by the roles of the organization.
For more, see:
- [Create users and teams in UCP](./usermgmt-create-subjects.md)
- [Synchronize users and teams with LDAP](./usermgmt-sync-with-ldap.md)
## Roles
Roles define what operations can be done by whom. A role is a set of permitted
operations (view, edit, use, etc.) against a *resource type* (such as an image,
container, volume, etc.) that is assigned to a user or team with a grant.
For example, the built-in role, **Restricted Control**, includes permission to
view and schedule a node but not update it. A custom **DBA** role might include
permissions to create, attach, view, and remove volumes.
Most organizations use different roles to fine-tune the appropriate access. A
given team or user may hold different roles depending on which resource they
are accessing.
For more, see: [Create roles that define user access to resources](./usermgmt-define-roles.md)
## Resources
Cluster resources are grouped into Swarm collections or Kubernetes namespaces.
A collection is a directory that holds Swarm resources. You can define and build
collections in UCP by assigning resources to a collection. Or you can create the
path in UCP and use *labels* in your YAML file to assign application resources to
that path.
> Swarm resource types that can be placed into a collection include: Containers,
> Networks, Nodes, Services, Secrets, and Volumes.
A [namespace](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/)
holds Kubernetes resources.
> Kubernetes resource types that can be placed into a namespace include: Pods,
> Deployments, NetworkPolicies, Nodes, Services, Secrets, and many more.
For more, see: [Group resources into collections or namespaces](resources-group-resources.md).
## Grants
A grant is made up of a *subject*, a *role*, and a *resource collection* or namespace.
Grants define which users can access what resources in what way. Grants are
effectively Access Control Lists (ACLs), and when grouped together, can
provide comprehensive access policies for an entire organization.
Only an administrator can manage grants, subjects, roles, and resources.
> Administrators are users who create subjects, group resources by moving them
> into directories or namespaces, define roles by selecting allowable operations,
> and apply grants to users and teams.
For more, see: [Create grants and authorize access to users and teams](usermgmt-grant-permissions.md).
{% elsif include.version=="ucp-2.2" %}
With [Docker Universal Control Plane (UCP)](https://docs.docker.com/datacenter/ucp/2.2/guides/),
you can authorize how users view, edit, and use cluster resources by granting
role-based permissions. Resources can be grouped according to an organization's
needs and users can be granted more than one role.
To authorize access to cluster resources across your organization, Docker
administrators might take the following high-level steps:
- Add and configure subjects (users and teams)
- Define custom roles (or use defaults) by adding permissions to resource types
- Group cluster resources into Swarm collections
- Create grants by combining subject + role + resource
## Subjects
A subject represents a user, team, or organization. A subject is granted a
role that defines permitted operations against one or more resources.
- **User**: A person authenticated by the authentication backend. Users can
belong to one or more teams and one or more organizations.
- **Team**: A group of users that share permissions defined at the team level. A
team can be in one organization only.
- **Organization**: A group of teams that share a specific set of permissions,
defined by the roles of the organization.
For more, see:
- [Create users and teams in UCP](./usermgmt-create-subjects.md)
- [Synchronize users and teams with LDAP](./usermgmt-sync-with-ldap.md)
## Roles
Roles define what operations can be done by whom. A role is a set of permitted
operations (view, edit, use, etc.) against a *resource type* (such as an image,
container, volume, etc.) that is assigned to a user or team with a grant.
For example, the built-in role, **Restricted Control**, includes permission to
view and schedule a node but not update it. A custom **DBA** role might include
permissions to create, attach, view, and remove volumes.
Most organizations use different roles to fine-tune the appropriate access. A
given team or user may hold different roles depending on which resource they
are accessing.
For more, see: [Create roles that define user access to resources](./usermgmt-define-roles.md)
## Resources
Cluster resources are grouped into collections.
A collection is a directory that holds Swarm resources. You can define and build
collections in UCP by assigning resources to a collection. Or you can create the
path in UCP and use *labels* in your YAML file to assign application resources to
that path.
> Swarm resource types that can be placed into a collection include: Containers,
> Networks, Nodes (physical or virtual), Services, Secrets, and Volumes.
For more, see: [Group resources into collections](resources-group-resources.md).
## Grants
A grant is made up of a *subject*, *resource collection*, and *role*.
Grants define which users can access what resources in what way. Grants are
effectively Access Control Lists (ACLs), which, when grouped together, can
provide comprehensive access policies for an entire organization.
Only an administrator can manage grants, subjects, roles, and resources.
> Administrators are users who create subjects, group resources by moving them
> into directories or namespaces, define roles by selecting allowable operations,
> and apply grants to users and teams.
For more, see: [Create grants and authorize access to users and teams](usermgmt-grant-permissions.md).
## Transition from UCP 2.1 access control
- Your existing access labels and permissions are migrated automatically during
an upgrade from UCP 2.1.x.
- Unlabeled "user-owned" resources are migrated into the user's private
collection, in `/Shared/Private/<username>`.
- Old access control labels are migrated into `/Shared/Legacy/<labelname>`.
- When deploying a resource, choose a collection instead of an access label.
- Use grants for access control, instead of unlabeled permissions.
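The migration rules above can be sketched as a small function. This is an illustrative model of the documented behavior, not the actual upgrade code:

```python
# Sketch of the UCP 2.1 access-control migration rules described above
# (illustrative model only; not the real upgrade logic).

def migrated_collection(owner, legacy_label=None):
    """Where a pre-upgrade resource lands after migration."""
    if legacy_label:
        # Old access control labels are migrated under /Shared/Legacy.
        return "/Shared/Legacy/{}".format(legacy_label)
    # Unlabeled "user-owned" resources move to the owner's private collection.
    return "/Shared/Private/{}".format(owner)

print(migrated_collection("hans"))                      # /Shared/Private/hans
print(migrated_collection("hans", legacy_label="crm"))  # /Shared/Legacy/crm
```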
[See a deeper tutorial on how to design access control architectures.](access-control-design-ee-standard.md)
{% endif %}
{% endif %}

---
title: Isolate swarm nodes to a specific team
description: Create grants that limit access to nodes to specific teams.
keywords: ucp, grant, role, permission, authentication
redirect_from:
- /ucp/
ui_tabs:
- version: ucp-3.0
orhigher: true
- version: ucp-2.2
orlower: true
---
{% if include.ui %}
{% if include.version=="ucp-3.0" %}
With Docker EE Advanced, you can enable physical isolation of resources
by organizing nodes into collections and granting `Scheduler` access for
different users. To control access to nodes, move them to dedicated collections
where you can grant access to specific users, teams, and organizations.
In this example, a team gets access to a node collection and a resource
collection, and UCP access control ensures that the team members can't view
or use swarm resources that aren't in their collection.
You need a Docker EE Advanced license and at least two worker nodes to
complete this example.
1. Create an `Ops` team and assign a user to it.
2. Create a `/Prod` collection for the team's node.
3. Assign a worker node to the `/Prod` collection.
4. Grant the `Ops` team access to its collection.
![](../images/isolate-nodes-diagram.svg){: .with-border}
## Create a team
In the web UI, navigate to the **Organizations & Teams** page to create a team
named "Ops" in your organization. Add a user who isn't a UCP administrator to
the team.
[Learn to create and manage teams](create-and-manage-teams.md).
## Create a node collection and a resource collection
In this example, the Ops team uses an assigned group of nodes, which it
accesses through a collection. Also, the team has a separate collection
for its resources.
Create two collections: one for the team's worker nodes and another for the
team's resources.
1. Navigate to the **Collections** page to view all of the resource
collections in the swarm.
2. Click **Create collection** and name the new collection "Prod".
3. Click **Create** to create the collection.
4. Find **Prod** in the list, and click **View children**.
5. Click **Create collection**, and name the child collection
"Webserver". This creates a sub-collection for access control.
You've created two new collections. The `/Prod` collection is for the worker
nodes, and the `/Prod/Webserver` sub-collection is for access control to
an application that you'll deploy on the corresponding worker nodes.
## Move a worker node to a collection
By default, worker nodes are located in the `/Shared` collection.
Worker nodes that are running DTR are assigned to the `/System` collection.
To control access to the team's nodes, move them to a dedicated collection.
Move a worker node by changing the value of its access label key,
`com.docker.ucp.access.label`, to a different collection.
1. Navigate to the **Nodes** page to view all of the nodes in the swarm.
2. Click a worker node, and in the details pane, find its **Collection**.
If it's in the `/System` collection, click another worker node,
because you can't move nodes that are in the `/System` collection. By
default, worker nodes are assigned to the `/Shared` collection.
3. When you've found an available node, in the details pane, click
**Configure**.
4. In the **Labels** section, find `com.docker.ucp.access.label` and change
its value from `/Shared` to `/Prod`.
5. Click **Save** to move the node to the `/Prod` collection.
> Docker EE Advanced required
>
> If you don't have a Docker EE Advanced license, you'll get the following
> error message when you try to change the access label:
> **Nodes must be in either the shared or system collection without an advanced license.**
> [Get a Docker EE Advanced license](https://www.docker.com/pricing).
![](../images/isolate-nodes-1.png){: .with-border}
## Grant access for a team
You'll need two grants to control access to nodes and container resources:
- Grant the `Ops` team the `Restricted Control` role for the `/Prod/Webserver`
resources.
- Grant the `Ops` team the `Scheduler` role against the nodes in the `/Prod`
collection.
Create two grants for team access to the two collections:
1. Navigate to the **Grants** page and click **Create Grant**.
2. In the left pane, click **Collections**, and in the **Swarm** collection,
click **View Children**.
3. In the **Prod** collection, click **View Children**.
4. In the **Webserver** collection, click **Select Collection**.
5. In the left pane, click **Roles**, and select **Restricted Control**
in the dropdown.
6. Click **Subjects**, and under **Select subject type**, click **Organizations**.
7. Select your organization, and in the **Team** dropdown, select **Ops**.
8. Click **Create** to grant the Ops team access to the `/Prod/Webserver`
collection.
The same steps apply for the nodes in the `/Prod` collection.
1. Navigate to the **Grants** page and click **Create Grant**.
2. In the left pane, click **Collections**, and in the **Swarm** collection,
click **View Children**.
3. In the **Prod** collection, click **Select Collection**.
4. In the left pane, click **Roles**, and in the dropdown, select **Scheduler**.
5. In the left pane, click **Subjects**, and under **Select subject type**, click
**Organizations**.
6. Select your organization, and in the **Team** dropdown, select **Ops**.
7. Click **Create** to grant the Ops team `Scheduler` access to the nodes in the
`/Prod` collection.
![](../images/isolate-nodes-2.png){: .with-border}
## Deploy a service as a team member
Your swarm is ready to show role-based access control in action. When a user
deploys a service, UCP assigns its resources to the user's default collection.
From the target collection of a resource, UCP walks up the ancestor collections
until it finds nodes that the user has `Scheduler` access to. In this example,
UCP assigns the user's service to the `/Prod/Webserver` collection and schedules
tasks on nodes in the `/Prod` collection.
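The walk described above can be sketched in a few lines. This is an illustrative model of the documented behavior, not UCP code; the grant data matches this example:

```python
# Model of how UCP finds schedulable nodes: starting at the resource's
# target collection, walk up the ancestor collections until one is found
# where the user holds Scheduler access (illustrative sketch only).

SCHEDULER_GRANTS = {
    # subject -> collections where it holds a Scheduler grant
    "Ops": {"/Prod"},
}

def ancestors(path):
    """Yield the path and each of its ancestors: /Prod/Webserver, /Prod, /."""
    while path:
        yield path
        path = path.rsplit("/", 1)[0]
    yield "/"

def scheduling_collection(subject, target):
    for collection in ancestors(target):
        if collection in SCHEDULER_GRANTS.get(subject, set()):
            return collection
    return None

# A service in /Prod/Webserver schedules onto nodes in /Prod for the Ops team.
print(scheduling_collection("Ops", "/Prod/Webserver"))  # /Prod
```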
As a user on the Ops team, set your default collection to `/Prod/Webserver`.
1. Log in as a user on the Ops team.
2. Navigate to the **Collections** page, and in the **Prod** collection,
click **View Children**.
3. In the **Webserver** collection, click the **More Options** icon and
select **Set to default**.
Deploy a service automatically to worker nodes in the `/Prod` collection.
All resources are deployed under the user's default collection,
`/Prod/Webserver`, and the containers are scheduled only on the nodes under
`/Prod`.
1. Navigate to the **Services** page, and click **Create Service**.
2. Name the service "NGINX", use the "nginx:latest" image, and click
**Create**.
3. When the **nginx** service status is green, click the service. In the
details view, click **Inspect Resource**, and in the dropdown, select
**Containers**.
4. Click the **NGINX** container, and in the details pane, confirm that its
**Collection** is **/Prod/Webserver**.
![](../../images/isolate-nodes-3.png){: .with-border}
5. Click **Inspect Resource**, and in the dropdown, select **Nodes**.
6. Click the node, and in the details pane, confirm that its **Collection**
is **/Prod**.
![](../images/isolate-nodes-4.png){: .with-border}
## Alternative: Use a grant instead of the default collection
Another approach is to use a grant instead of changing the user's default
collection. An administrator can create a grant for a role that has the
`Service Create` permission against the `/Prod/Webserver` collection or a child
collection. In this case, the user sets the value of the service's access label,
`com.docker.ucp.access.label`, to the new collection or one of its children
that has a `Service Create` grant for the user.
## Where to go next
- [Node access control in Docker EE Advanced](access-control-node.md)
- [Isolate volumes between two different teams](isolate-volumes-between-teams.md)
- [Deploy a service with view-only access across an organization](deploy-view-only-service.md)
{% elsif include.version=="ucp-2.2" %}
With Docker EE Advanced, you can enable physical isolation of resources
by organizing nodes into collections and granting `Scheduler` access for
different users. To control access to nodes, move them to dedicated collections
where you can grant access to specific users, teams, and organizations.
In this example, a team gets access to a node collection and a resource
collection, and UCP access control ensures that the team members can't view
or use swarm resources that aren't in their collection.
You need a Docker EE Advanced license and at least two worker nodes to
complete this example.
1. Create an `Ops` team and assign a user to it.
2. Create a `/Prod` collection for the team's node.
3. Assign a worker node to the `/Prod` collection.
4. Grant the `Ops` team access to its collection.
![](../images/isolate-nodes-diagram.svg){: .with-border}
## Create a team
In the web UI, navigate to the **Organizations & Teams** page to create a team
named "Ops" in your organization. Add a user who isn't a UCP administrator to
the team.
[Learn to create and manage teams](create-and-manage-teams.md).
## Create a node collection and a resource collection
In this example, the Ops team uses an assigned group of nodes, which it
accesses through a collection. Also, the team has a separate collection
for its resources.
Create two collections: one for the team's worker nodes and another for the
team's resources.
1. Navigate to the **Collections** page to view all of the resource
collections in the swarm.
2. Click **Create collection** and name the new collection "Prod".
3. Click **Create** to create the collection.
4. Find **Prod** in the list, and click **View children**.
5. Click **Create collection**, and name the child collection
"Webserver". This creates a sub-collection for access control.
You've created two new collections. The `/Prod` collection is for the worker
nodes, and the `/Prod/Webserver` sub-collection is for access control to
an application that you'll deploy on the corresponding worker nodes.
## Move a worker node to a collection
By default, worker nodes are located in the `/Shared` collection.
Worker nodes that are running DTR are assigned to the `/System` collection.
To control access to the team's nodes, move them to a dedicated collection.
Move a worker node by changing the value of its access label key,
`com.docker.ucp.access.label`, to a different collection.
1. Navigate to the **Nodes** page to view all of the nodes in the swarm.
2. Click a worker node, and in the details pane, find its **Collection**.
If it's in the `/System` collection, click another worker node,
because you can't move nodes that are in the `/System` collection. By
default, worker nodes are assigned to the `/Shared` collection.
3. When you've found an available node, in the details pane, click
**Configure**.
4. In the **Labels** section, find `com.docker.ucp.access.label` and change
its value from `/Shared` to `/Prod`.
5. Click **Save** to move the node to the `/Prod` collection.
> Docker EE Advanced required
>
> If you don't have a Docker EE Advanced license, you'll get the following
> error message when you try to change the access label:
> **Nodes must be in either the shared or system collection without an advanced license.**
> [Get a Docker EE Advanced license](https://www.docker.com/pricing).
![](../images/isolate-nodes-1.png){: .with-border}
## Grant access for a team
You'll need two grants to control access to nodes and container resources:
- Grant the `Ops` team the `Restricted Control` role for the `/Prod/Webserver`
resources.
- Grant the `Ops` team the `Scheduler` role against the nodes in the `/Prod`
collection.
Create two grants for team access to the two collections:
1. Navigate to the **Grants** page and click **Create Grant**.
2. In the left pane, click **Collections**, and in the **Swarm** collection,
click **View Children**.
3. In the **Prod** collection, click **View Children**.
4. In the **Webserver** collection, click **Select Collection**.
5. In the left pane, click **Roles**, and select **Restricted Control**
in the dropdown.
6. Click **Subjects**, and under **Select subject type**, click **Organizations**.
7. Select your organization, and in the **Team** dropdown, select **Ops**.
8. Click **Create** to grant the Ops team access to the `/Prod/Webserver`
collection.
The same steps apply for the nodes in the `/Prod` collection.
1. Navigate to the **Grants** page and click **Create Grant**.
2. In the left pane, click **Collections**, and in the **Swarm** collection,
click **View Children**.
3. In the **Prod** collection, click **Select Collection**.
4. In the left pane, click **Roles**, and in the dropdown, select **Scheduler**.
5. In the left pane, click **Subjects**, and under **Select subject type**, click
**Organizations**.
6. Select your organization, and in the **Team** dropdown, select **Ops**.
7. Click **Create** to grant the Ops team `Scheduler` access to the nodes in the
`/Prod` collection.
![](../images/isolate-nodes-2.png){: .with-border}
## Deploy a service as a team member
Your swarm is ready to show role-based access control in action. When a user
deploys a service, UCP assigns its resources to the user's default collection.
From the target collection of a resource, UCP walks up the ancestor collections
until it finds nodes that the user has `Scheduler` access to. In this example,
UCP assigns the user's service to the `/Prod/Webserver` collection and schedules
tasks on nodes in the `/Prod` collection.
As a user on the Ops team, set your default collection to `/Prod/Webserver`.
1. Log in as a user on the Ops team.
2. Navigate to the **Collections** page, and in the **Prod** collection,
click **View Children**.
3. In the **Webserver** collection, click the **More Options** icon and
select **Set to default**.
Deploy a service automatically to worker nodes in the `/Prod` collection.
All resources are deployed under the user's default collection,
`/Prod/Webserver`, and the containers are scheduled only on the nodes under
`/Prod`.
1. Navigate to the **Services** page, and click **Create Service**.
2. Name the service "NGINX", use the "nginx:latest" image, and click
**Create**.
3. When the **nginx** service status is green, click the service. In the
details view, click **Inspect Resource**, and in the dropdown, select
**Containers**.
4. Click the **NGINX** container, and in the details pane, confirm that its
**Collection** is **/Prod/Webserver**.
![](../../images/isolate-nodes-3.png){: .with-border}
5. Click **Inspect Resource**, and in the dropdown, select **Nodes**.
6. Click the node, and in the details pane, confirm that its **Collection**
is **/Prod**.
![](../images/isolate-nodes-4.png){: .with-border}
## Alternative: Use a grant instead of the default collection
Another approach is to use a grant instead of changing the user's default
collection. An administrator can create a grant for a role that has the
`Service Create` permission against the `/Prod/Webserver` collection or a child
collection. In this case, the user sets the value of the service's access label,
`com.docker.ucp.access.label`, to the new collection or one of its children
that has a `Service Create` grant for the user.
## Where to go next
- [Node access control in Docker EE Advanced](access-control-node.md)
- [Isolate volumes between two different teams](isolate-volumes-between-teams.md)
- [Deploy a service with view-only access across an organization](deploy-view-only-service.md)
{% endif %}
{% endif %}

---
title: Isolate volumes between two different teams
description: Create grants that limit access to volumes to specific teams.
keywords: ucp, grant, role, permission, authentication
redirect_from:
- /ucp/
ui_tabs:
- version: ucp-3.0
orhigher: true
- version: ucp-2.2
orlower: true
---
{% if include.ui %}
{% if include.version=="ucp-3.0" %}
This topic is under construction.
{% elsif include.version=="ucp-2.2" %}
In this example, two teams are granted access to volumes in two different
resource collections. UCP access control prevents the teams from viewing and
accessing each other's volumes, even though they may be located in the same
nodes.
1. Create two teams.
2. Create two collections, one for each team.
3. Create grants to manage access to the collections.
4. Team members create volumes that are specific to their team.
![](../images/isolate-volumes-diagram.svg){: .with-border}
## Create two teams
Navigate to the **Organizations & Teams** page to create two teams in your
organization, named "Dev" and "Prod". Add a user who's not a UCP administrator
to the Dev team, and add another non-admin user to the Prod team.
[Learn how to create and manage teams](create-and-manage-teams.md).
## Create resource collections
In this example, the Dev and Prod teams use two different volumes, which they
access through two corresponding resource collections. The collections are
placed under the `/Shared` collection.
1. In the left pane, click **Collections** to show all of the resource
collections in the swarm.
2. Find the **/Shared** collection and click **View children**.
3. Click **Create collection** and name the new collection "dev-volumes".
4. Click **Create** to create the collection.
5. Click **Create collection** again, name the new collection "prod-volumes",
and click **Create**.
## Create grants for controlling access to the new volumes
In this example, the Dev team gets access to its volumes from a grant that
associates the team with the `/Shared/dev-volumes` collection, and the Prod
team gets access to its volumes from another grant that associates the team
with the `/Shared/prod-volumes` collection.
1. Navigate to the **Grants** page and click **Create Grant**.
2. In the left pane, click **Collections**, and in the **Swarm** collection,
click **View Children**.
3. In the **Shared** collection, click **View Children**.
4. In the list, find **/Shared/dev-volumes** and click **Select Collection**.
5. Click **Roles**, and in the dropdown, select **Restricted Control**.
6. Click **Subjects**, and under **Select subject type**, click **Organizations**.
In the dropdown, pick your organization, and in the **Team** dropdown,
select **Dev**.
7. Click **Create** to grant permissions to the Dev team.
8. Click **Create Grant** and repeat the previous steps for the **/Shared/prod-volumes**
collection and the Prod team.
![](../images/isolate-volumes-1.png){: .with-border}
With the collections and grants in place, users can sign in and create volumes
in their assigned collections.
## Create a volume as a team member
Team members have permission to create volumes in their assigned collection.
1. Log in as one of the users on the Dev team.
2. Navigate to the **Volumes** page to view all of the volumes in the swarm
that the user can access.
3. Click **Create volume** and name the new volume "dev-data".
4. In the left pane, click **Collections**. The default collection appears.
At the top of the page, click **Shared**, find the **dev-volumes**
collection in the list, and click **Select Collection**.
5. Click **Create** to add the "dev-data" volume to the collection.
6. Log in as one of the users on the Prod team, and repeat the previous steps
to create a "prod-data" volume assigned to the `/Shared/prod-volumes`
collection.
![](../images/isolate-volumes-2.png){: .with-border}
Now you can see role-based access control in action for volumes. The user on
the Prod team can't see the Dev team's volumes, and if you log in again as a
user on the Dev team, you won't see the Prod team's volumes.
![](../images/isolate-volumes-3.png){: .with-border}
Sign in with a UCP administrator account, and you see all of the volumes
created by the Dev and Prod users.
![](../images/isolate-volumes-4.png){: .with-border}
## Where to go next
- [Isolate swarm nodes to a specific team](isolate-nodes-between-teams.md)
{% endif %}
{% endif %}

---
title: Group cluster resources
description: Learn how to group resources into collections or namespaces to control access.
keywords: rbac, ucp, grant, role, permission, authentication, resource collection
redirect_from:
- /ucp/
ui_tabs:
- version: ucp-3.0
orhigher: true
- version: ucp-2.2
orlower: true
---
{% if include.ui %}
{% if include.version=="ucp-3.0" %}
Docker EE lets you control access to container resources by using
*collections*. A collection is a group of swarm resources, like services,
containers, volumes, networks, and secrets.
![](../images/collections-and-resources.svg){: .with-border}
Access to collections goes through a directory structure that arranges a swarm's
resources. To assign permissions, administrators create grants against directory
branches.
## Directory paths define access to collections
Access to collections is based on a directory-like structure. For example, the
path to a user's default collection is `/Shared/Private/<username>`. Every user
has a private collection that has the default permission specified by the UCP
administrator.
Each collection has an access label that identifies its path. For example, the
private collection for user "hans" has a label that looks like this:
```
com.docker.ucp.access.label = /Shared/Private/hans
```
You can nest collections. If a user has a grant against a collection, the grant
applies to all of its child collections.
For a child collection, or for a user who belongs to more than one team, the
system concatenates permissions from multiple roles into an "effective role" for
the user, which specifies the operations that are allowed against the target.
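Both rules above — a grant on a collection applies to its child collections, and a user's roles combine into an effective role — can be sketched together. This is an illustrative model, not UCP's implementation, and the operation names are hypothetical:

```python
# Sketch of grant inheritance and the "effective role" (illustrative only).
# A grant on a collection applies to every child collection, and a user who
# holds several roles gets the union of their permitted operations.

GRANTS = [
    # (subject, collection, permitted operations)
    ("hans", "/Shared/Private/hans", {"View", "Edit"}),
    ("hans", "/Shared", {"View"}),
]

def effective_operations(subject, target):
    """Union of operations from every grant on the target or an ancestor of it."""
    ops = set()
    for grant_subject, collection, operations in GRANTS:
        if grant_subject != subject:
            continue
        # The grant covers the collection itself and all of its children.
        if target == collection or target.startswith(collection + "/"):
            ops |= operations
    return ops

# Grants on /Shared and /Shared/Private/hans both apply to a child path,
# so the effective role there is the union of both operation sets.
print(sorted(effective_operations("hans", "/Shared/Private/hans/app")))
```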
## Built-in collections
UCP provides a number of built-in collections.
- `/` - The path to the `Swarm` collection. All resources in the
cluster are here. Resources that aren't in a collection are assigned
to the `/` directory.
- `/System` - The system collection, which contains UCP managers, DTR nodes,
and UCP/DTR system services. By default, only admins have access to the
system collection, but you can change this.
- `/Shared` - All worker nodes are here by default, for scheduling.
In a system with a standard-tier license, all worker nodes are under
the `/Shared` collection. With the EE Advanced license, administrators
can move worker nodes to other collections and apply role-based access.
- `/Shared/Private` - User private collections are stored here.
- `/Shared/Legacy` - After updating from UCP 2.1, all legacy access control
labels are stored here.
![](../images/collections-diagram.svg){: .with-border}
This diagram shows the `/System` and `/Shared` collections that are created
by UCP. User private collections are children of the `/Shared/Private`
collection. Also, an admin user has created a `/prod` collection and its
`/webserver` child collection.
## Default collections
A user always has a default collection. The user can select the default
in UI preferences. When a user deploys a resource in the web UI, the
preselected option is the default collection, but this can be changed.
Users can't deploy a resource without a collection. When a user deploys a
resource from the CLI without an access label, UCP automatically places the
resource in the user's default collection.
[Learn how to add labels to cluster nodes](../../datacenter/ucp/2.2/guides/admin/configure/add-labels-to-cluster-nodes/).
When using Docker Compose, the system applies default collection labels
across all resources in the stack, unless the `com.docker.ucp.access.label`
has been set explicitly.
> Default collections and collection labels
>
> Setting a default collection is most helpful for users who deploy stacks
> and don't want to edit the contents of their compose files. Also, setting
> a default collection is useful for users who work only on a well-defined
> slice of the system. On the other hand, setting the collection label for
> every resource works best for users who have versatile roles in the system,
> like administrators.
## Collections and labels
Resources are marked as being in a collection by using labels.
Some resource types don't have editable labels, so those resources can't be
moved across collections. Containers, networks, and volumes stay in the
collection where they were created, but you can update the labels of
services, nodes, secrets, and configs.
For editable resources, like services, secrets, nodes, and configs,
you can change the `com.docker.ucp.access.label` to move resources to
different collections. With the CLI, you can use this label to deploy
resources to a collection other than your default collection. Omitting the
label deploys the resource in your default collection.
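For example, to deploy a service into a hypothetical `/prod` collection instead of your default collection, you might set the label explicitly (this assumes the `/prod` collection exists and you have a grant against it):

```
docker service create \
  --name webserver \
  --label com.docker.ucp.access.label="/prod" \
  nginx
```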
The system uses the additional labels, `com.docker.ucp.collection.*`, to enable
efficient resource lookups. By default, nodes have the
`com.docker.ucp.collection.root`, `com.docker.ucp.collection.shared`, and
`com.docker.ucp.collection.swarm` labels set to `true`. UCP automatically
controls these labels, and you don't need to manage them.
Collections get generic default names, but you can give them meaningful names,
like "Dev", "Test", and "Prod".
A *stack* is a group of resources identified by a label. You can place the
stack's resources in multiple collections. Resources are placed in the user's
default collection unless you specify an explicit `com.docker.ucp.access.label`
within the stack/compose file.
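As a sketch, a stack file might pin a service to a hypothetical `/prod` collection with a deploy-time label (the service name and collection path are examples, not defaults):

```
version: "3.3"
services:
  webserver:
    image: nginx
    deploy:
      labels:
        com.docker.ucp.access.label: "/prod"
```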
## Control access to nodes
The Docker EE Advanced license enables access control on worker nodes. Admin
users can move worker nodes from the default `/Shared` collection into other
collections and create corresponding grants for scheduling tasks.
In this example, an administrator has moved worker nodes to a `/prod`
collection:
![](../images/containers-and-nodes-diagram.svg)
When you deploy a resource to a collection, UCP implicitly sets a scheduling
constraint based on the nodes that the collection, and any ancestor
collections, can access.
The `Scheduler` role allows users to deploy resources on a node.
By default, all users have the `Scheduler` role against the `/Shared`
collection.
When deploying a resource that isn't global, like local volumes, bridge
networks, containers, and services, the system identifies a set of
"schedulable nodes" for the user. The system identifies the target collection
of the resource, like `/Shared/Private/hans`, and walks up the hierarchy to
find the collection closest to the root on which the user has the
`Node Schedule` permission.
For example, when a user with a default configuration runs `docker container run nginx`,
the system interprets this to mean, "Create an NGINX container under the
user's default collection, which is at `/Shared/Private/hans`, and deploy it
on one of the nodes under `/Shared`."
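Sketched as commands, the default and an explicit target collection look like this (the `/prod` path assumes a matching collection and grant exist):

```
# Placed in the user's default collection, such as /Shared/Private/hans:
docker container run --detach nginx

# Placed in an explicit collection instead:
docker container run --detach \
  --label com.docker.ucp.access.label="/prod" nginx
```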
If you want to isolate nodes from other teams, place these nodes in
new collections, and assign the `Scheduler` role, which contains the
`Node Schedule` permission, to the team.
[Isolate swarm nodes to a specific team](isolate-nodes-between-teams.md).
{% elsif include.version=="ucp-2.2" %}
Docker EE enables controlling access to container resources by using
*collections*. A collection is a group of swarm resources, like services,
containers, volumes, networks, and secrets.
![](../images/collections-and-resources.svg){: .with-border}
Access to collections goes through a directory structure that arranges a swarm's
resources. To assign permissions, administrators create grants against directory
branches.
## Directory paths define access to collections
Access to collections is based on a directory-like structure. For example, the
path to a user's default collection is `/Shared/Private/<username>`. Every user
has a private collection that has the default permission specified by the UCP
administrator.
Each collection has an access label that identifies its path. For example, the
private collection for user "hans" has a label that looks like this:
```
com.docker.ucp.access.label = /Shared/Private/hans
```
You can nest collections. If a user has a grant against a collection, the grant
applies to all of its child collections.
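For example, a grant against a `/prod` collection also covers a nested `/prod/webserver` collection (both paths here are illustrative, not built-in):

```
/prod              <- grant applies here
/prod/webserver    <- child collection inherits the grant
```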
For a child collection, or for a user who belongs to more than one team, the
system combines permissions from multiple roles into an "effective role" for
the user, which specifies the operations that are allowed against the target.
## Built-in collections
UCP provides a number of built-in collections.
- `/` - The path to the `Swarm` collection. All resources in the
cluster are here. Resources that aren't in a collection are assigned
to the `/` directory.
- `/System` - The system collection, which contains UCP managers, DTR nodes,
and UCP/DTR system services. By default, only admins have access to the
system collection, but you can change this.
- `/Shared` - All worker nodes are here by default, for scheduling.
In a system with a standard-tier license, all worker nodes are under
the `/Shared` collection. With the EE Advanced license, administrators
can move worker nodes to other collections and apply role-based access.
- `/Shared/Private` - User private collections are stored here.
- `/Shared/Legacy` - After updating from UCP 2.1, all legacy access control
labels are stored here.
![](../images/collections-diagram.svg){: .with-border}
This diagram shows the `/System` and `/Shared` collections that are created
by UCP. User private collections are children of the `/Shared/Private`
collection. Also, an admin user has created a `/prod` collection and its
`/webserver` child collection.
## Default collections
A user always has a default collection. The user can select the default
in UI preferences. When a user deploys a resource in the web UI, the
preselected option is the default collection, but this can be changed.
Users can't deploy a resource without a collection. When a user deploys a
resource from the CLI without an access label, UCP automatically places the
resource in the user's default collection.
[Learn how to add labels to cluster nodes](../../datacenter/ucp/2.2/guides/admin/configure/add-labels-to-cluster-nodes/).
When using Docker Compose, the system applies default collection labels
across all resources in the stack, unless the `com.docker.ucp.access.label`
has been set explicitly.
> Default collections and collection labels
>
> Setting a default collection is most helpful for users who deploy stacks
> and don't want to edit the contents of their compose files. Also, setting
> a default collection is useful for users who work only on a well-defined
> slice of the system. On the other hand, setting the collection label for
> every resource works best for users who have versatile roles in the system,
> like administrators.
## Collections and labels
Resources are marked as being in a collection by using labels.
Some resource types don't have editable labels, so those resources can't be
moved across collections. Containers, networks, and volumes stay in the
collection where they were created, but you can update the labels of
services, nodes, secrets, and configs.
For editable resources, like services, secrets, nodes, and configs,
you can change the `com.docker.ucp.access.label` to move resources to
different collections. With the CLI, you can use this label to deploy
resources to a collection other than your default collection. Omitting the
label deploys the resource in your default collection.
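For example, to deploy a service into a hypothetical `/prod` collection instead of your default collection, you might set the label explicitly (this assumes the `/prod` collection exists and you have a grant against it):

```
docker service create \
  --name webserver \
  --label com.docker.ucp.access.label="/prod" \
  nginx
```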
The system uses the additional labels, `com.docker.ucp.collection.*`, to enable
efficient resource lookups. By default, nodes have the
`com.docker.ucp.collection.root`, `com.docker.ucp.collection.shared`, and
`com.docker.ucp.collection.swarm` labels set to `true`. UCP automatically
controls these labels, and you don't need to manage them.
Collections get generic default names, but you can give them meaningful names,
like "Dev", "Test", and "Prod".
A *stack* is a group of resources identified by a label. You can place the
stack's resources in multiple collections. Resources are placed in the user's
default collection unless you specify an explicit `com.docker.ucp.access.label`
within the stack/compose file.
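As a sketch, a stack file might pin a service to a hypothetical `/prod` collection with a deploy-time label (the service name and collection path are examples, not defaults):

```
version: "3.3"
services:
  webserver:
    image: nginx
    deploy:
      labels:
        com.docker.ucp.access.label: "/prod"
```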
## Control access to nodes
The Docker EE Advanced license enables access control on worker nodes. Admin
users can move worker nodes from the default `/Shared` collection into other
collections and create corresponding grants for scheduling tasks.
In this example, an administrator has moved worker nodes to a `/prod`
collection:
![](../images/containers-and-nodes-diagram.svg)
When you deploy a resource to a collection, UCP implicitly sets a scheduling
constraint based on the nodes that the collection, and any ancestor
collections, can access.
The `Scheduler` role allows users to deploy resources on a node.
By default, all users have the `Scheduler` role against the `/Shared`
collection.
When deploying a resource that isn't global, like local volumes, bridge
networks, containers, and services, the system identifies a set of
"schedulable nodes" for the user. The system identifies the target collection
of the resource, like `/Shared/Private/hans`, and walks up the hierarchy to
find the collection closest to the root on which the user has the
`Node Schedule` permission.
For example, when a user with a default configuration runs `docker container run nginx`,
the system interprets this to mean, "Create an NGINX container under the
user's default collection, which is at `/Shared/Private/hans`, and deploy it
on one of the nodes under `/Shared`."
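Sketched as commands, the default and an explicit target collection look like this (the `/prod` path assumes a matching collection and grant exist):

```
# Placed in the user's default collection, such as /Shared/Private/hans:
docker container run --detach nginx

# Placed in an explicit collection instead:
docker container run --detach \
  --label com.docker.ucp.access.label="/prod" nginx
```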
If you want to isolate nodes from other teams, place these nodes in
new collections, and assign the `Scheduler` role, which contains the
`Node Schedule` permission, to the team.
[Isolate swarm nodes to a specific team](isolate-nodes-between-teams.md).
{% endif %}
{% endif %}


@ -1,133 +0,0 @@
---
title: Create and manage users and teams
description: Learn how to add users and define teams in Docker Universal Control Plane.
keywords: rbac, authorize, authentication, users, teams, UCP, Docker
redirect_from:
- /ucp/
ui_tabs:
- version: ucp-3.0
orhigher: true
- version: ucp-2.2
orlower: true
---
{% if include.ui %}
Users, teams, and organizations are referred to as subjects in Docker Universal
Control Plane (UCP).
Individual users can belong to one or more teams, but each team can belong to
only one organization. At the fictional startup, Acme Company, some users are
on multiple teams, but every team belongs to the single Acme organization:
```
Acme Organization
├── DevOps Team
│   ├── *User Alex*
│   └── User Bart
├── Infra Team
│ ├── *User Alex*
│  └── User Chen
└── Apps Team
   ├── User Deva
   └── User Emma
```
## Authentication
All users are authenticated on the backend. UCP provides built-in authentication
and also integrates with LDAP directory services.
> To enable LDAP and authenticate and synchronize UCP users and teams with your
> organization's LDAP directory, see:
> - [Synchronize users and teams with LDAP in the UI](usermgmt-sync-with-ldap.md)
> - [Integrate with an LDAP Directory](../../datacenter/ucp/2.2/guides/admin/configure/external-auth/index.md).
To use UCP built-in authentication, you must manually create users.
## Build an organization architecture
The general flow of designing an organization of teams in UCP is:
1. Create an organization
2. Add users or configure UCP to sync with LDAP (in the UI or programmatically)
3. Create teams under the organization
4. Add users to teams manually or sync with LDAP
### Create an organization with teams
To create an organization in UCP:
1. Click **Organization & Teams** under **User Management**.
2. Click **Create Organization**.
3. Input the organization name.
4. Click **Create**.
To create teams in the organization:
1. Click through the organization name.
2. Click **Create Team**.
3. Input a team name (and description).
4. Click **Create**.
5. Add existing users to the team. If they don't exist, see [Create users manually](#Create-users-manually).
- Click the team name and select **Actions** > **Add Users**.
- Check the users to include and click **Add Users**.
> **Note**: To sync teams with groups in an LDAP server, see [Sync Teams with LDAP](./usermgmt-sync-with-ldap).
{% if include.version=="ucp-3.0" %}
### Create users manually
New users are assigned a default permission level so that they can access the
cluster. You can optionally grant them UCP administrator permissions.
You can extend the user's default permissions by granting them fine-grained
permissions over resources. You do this by adding the user to a team.
To manually create users in UCP:
1. Click **Users** under **User Management**.
2. Click **Create User**.
3. Input username, password, and full name.
4. Click **Create**.
5. [optional] Check "Is a Docker EE Admin".
> A `Docker EE Admin` can grant users permission to change the cluster
> configuration and manage grants, roles, and collections.
![](../images/ucp_usermgmt_users_create01.png){: .with-border}
![](../images/ucp_usermgmt_users_create02.png){: .with-border}
{% elsif include.version=="ucp-2.2" %}
### Create users manually
New users are assigned a default permission level so that they can access the
cluster. You can optionally grant them UCP administrator permissions.
You can extend the user's default permissions by granting them fine-grained
permissions over resources. You do this by adding the user to a team.
To manually create users in UCP:
1. Navigate to the **Users** page.
2. Click **Create User**.
3. Input username, password, and full name.
4. Click **Create**.
5. [optional] Check "Is a UCP admin?".
> A `UCP Admin` can grant users permission to change the cluster configuration
> and manage grants, roles, and collections.
![](../images/create-users-1.png){: .with-border}
![](../images/create-users-2.png){: .with-border}
{% endif %}
## Where to go next
- [Synchronize users and teams with LDAP](./usermgmt-sync-with-ldap.md)
- [Create grants and authorize access to users and teams](usermgmt-grant-permissions.md).
{% endif %}


@ -1,193 +0,0 @@
---
title: Create roles and authorize operations
description: Learn how to create roles and set permissions in Docker Universal Control Plane.
keywords: rbac, authorization, authentication, users, teams, UCP
redirect_from:
- /ucp/
ui_tabs:
- version: ucp-3.0
orhigher: true
- version: ucp-2.2
orlower: true
---
{% if include.ui %}
{% if include.version=="ucp-3.0" %}
Docker Universal Control Plane has two types of users: administrators and
regular users. Administrators can make changes to the UCP cluster, while regular
users have permissions that range from no access to full control over resources
such as volumes, networks, images, and containers.
Users are grouped into teams and organizations.
![Diagram showing UCP permission levels](../images/role-diagram.svg)
Administrators apply *grants* to users, teams, and organizations to give
permissions to swarm resources.
## Administrator users
In Docker UCP, only users with administrator privileges can make changes to
cluster settings. This includes:
* Managing user permissions by creating grants.
* Managing cluster configurations, like adding and removing nodes.
## Roles
A role is a set of permitted API operations on a collection that you
can assign to a specific user, team, or organization by using a grant.
UCP administrators view and manage roles by navigating to the **Roles** page.
The system provides the following default roles:
- **None**: Users have no access to swarm resources. This role maps to the
`No Access` role in UCP 2.1.x.
- **View Only**: Users can view resources but can't create them.
- **Restricted Control**: Users can view and edit resources but can't run a
service or container in a way that affects the node where it's running. Users
_cannot_: mount a node directory, `exec` into containers, or run containers in
privileged mode or with additional kernel capabilities.
- **Scheduler**: Users can view nodes and schedule workloads on them. Both worker
  and manager nodes are affected by `Scheduler` grants. By default, all users get
  a grant with the `Scheduler` role against the `/Shared` collection. Having
  `Scheduler` access doesn't allow users to view workloads on these nodes; they
  need the appropriate resource permissions, such as `Container View`.
- **Full Control**: Users can view and edit all granted resources. They can create
containers without any restriction, but can't see the containers of other users.
![Diagram showing UCP permission levels](../images/permissions-ucp.svg)
Administrators can create a custom role that has Docker API permissions
that specify the API actions that a subject may perform.
The **Roles** page lists the available roles, including the default roles
and any custom roles that administrators have created. In the **Roles**
list, click a role to see the API operations that it uses. For example, the
`Scheduler` role has two of the node operations, `Schedule` and `View`.
## Create a custom role
Click **Create role** to create a custom role and define the API operations
that it uses. When you create a custom role, all of the APIs that you can use
are listed on the **Create Role** page. For example, you can create a custom
role that uses the node operations, `Schedule`, `Update`, and `View`, and you
might give it a name like "Node Operator".
1. Click **Roles** under **User Management**.
2. Click **Create Role**.
3. Input the role name on the **Details** page.
4. Click **Operations**.
5. Select the permitted operations per resource type.
6. Click **Create**.
![](../images/custom-role.png){: .with-border}
You can give a role a global name, like "Remove Images", which might enable
the **Remove** and **Force Remove** operations for images. You can apply a
role with the same name to different collections.
Only an administrator can create and remove roles. Roles are always enabled.
Roles can't be edited, so to change a role's API operations, you must delete it
and create it again.
You can't delete a custom role if it's used in a grant. You must first delete
the grants that use the role.
{% elsif include.version=="ucp-2.2" %}
Docker Universal Control Plane has two types of users: administrators and
regular users. Administrators can make changes to the UCP cluster, while regular
users have permissions that range from no access to full control over resources
such as volumes, networks, images, and containers.
Users are grouped into teams and organizations.
![Diagram showing UCP permission levels](../images/role-diagram.svg)
Administrators apply *grants* to users, teams, and organizations to give
permissions to swarm resources.
## Administrator users
In Docker UCP, only users with administrator privileges can make changes to
cluster settings. This includes:
* Managing user permissions by creating grants.
* Managing cluster configurations, like adding and removing nodes.
## Roles
A role is a set of permitted API operations on a collection that you
can assign to a specific user, team, or organization by using a grant.
UCP administrators view and manage roles by navigating to the **Roles** page.
The system provides the following default roles:
- **None**: Users have no access to swarm resources. This role maps to the
`No Access` role in UCP 2.1.x.
- **View Only**: Users can view resources but can't create them.
- **Restricted Control**: Users can view and edit resources but can't run a
service or container in a way that affects the node where it's running. Users
_cannot_: mount a node directory, `exec` into containers, or run containers in
privileged mode or with additional kernel capabilities.
- **Scheduler**: Users can view nodes and schedule workloads on them. Both worker
  and manager nodes are affected by `Scheduler` grants. By default, all users get
  a grant with the `Scheduler` role against the `/Shared` collection. Having
  `Scheduler` access doesn't allow users to view workloads on these nodes; they
  need the appropriate resource permissions, such as `Container View`.
- **Full Control**: Users can view and edit all granted resources. They can create
containers without any restriction, but can't see the containers of other users.
![Diagram showing UCP permission levels](../images/permissions-ucp.svg)
Administrators can create a custom role that has Docker API permissions
that specify the API actions that a subject may perform.
The **Roles** page lists the available roles, including the default roles
and any custom roles that administrators have created. In the **Roles**
list, click a role to see the API operations that it uses. For example, the
`Scheduler` role has two of the node operations, `Schedule` and `View`.
## Create a custom role
Click **Create role** to create a custom role and define the API operations
that it uses. When you create a custom role, all of the APIs that you can use
are listed on the **Create Role** page. For example, you can create a custom
role that uses the node operations, `Schedule`, `Update`, and `View`, and you
might give it a name like "Node Operator".
1. Click **Roles** under **User Management**.
2. Click **Create Role**.
3. Input the role name on the **Details** page.
4. Click **Operations**.
5. Select the permitted operations per resource type.
6. Click **Create**.
![](../images/custom-role.png){: .with-border}
You can give a role a global name, like "Remove Images", which might enable
the **Remove** and **Force Remove** operations for images. You can apply a
role with the same name to different collections.
Only an administrator can create and remove roles. Roles are always enabled.
Roles can't be edited, so to change a role's API operations, you must delete it
and create it again.
You can't delete a custom role if it's used in a grant. You must first delete
the grants that use the role.
{% endif %}
## Where to go next
* [Create and manage users](create-and-manage-users.md)
* [Create and manage teams](create-and-manage-teams.md)
* [Docker Reference Architecture: Securing Docker EE and Security Best Practices](https://success.docker.com/Architecture/Docker_Reference_Architecture%3A_Securing_Docker_EE_and_Security_Best_Practices)
{% endif %}


@ -1,71 +0,0 @@
---
title: Create grants to authorize access to resources
description: Learn how to grant users and teams access to cluster resources with role-based access control.
keywords: rbac, ucp, grant, role, permission, authentication, authorization
redirect_from:
- /ucp/
ui_tabs:
- version: ucp-3.0
orhigher: true
- version: ucp-2.2
orlower: true
---
{% if include.ui %}
{% if include.version=="ucp-3.0" %}
UCP administrators can create *grants* to control how users and organizations
access resources.
## Kubernetes grants
With Kubernetes orchestration, a grant is made up of a *subject*, a *role*, and a
*namespace*.
This section is under construction.
{% elsif include.version=="ucp-2.2" %}
{% endif %}
## Swarm grants
With Swarm orchestration, a grant is made up of a *subject*, a *role*, and a
*resource collection*.
![](../images/ucp-grant-model-0.svg){: .with-border}
A grant defines who (subject) has how much access (role) to a set of resources
(collection). Each grant is a 1:1:1 mapping of subject, role, collection. For
example, you can grant the "Prod Team" "Restricted Control" permissions for the
"/Production" collection.
The usual workflow for creating grants has four steps.
1. Set up your users and teams. For example, you might want three teams,
Dev, QA, and Prod.
2. Organize swarm resources into separate collections that each team uses.
3. Optionally, create custom roles for specific permissions to the Docker API.
4. Grant role-based access to collections for your teams.
![](../images/ucp-grant-model.svg){: .with-border}
### Create a Swarm grant
When you have your users, collections, and roles set up, you can create grants.
Administrators create grants on the **Manage Grants** page.
1. Click **Create Grant**. All of the collections in the system are listed.
2. Click **Select** on the collection you want to grant access to.
3. In the left pane, click **Roles** and select a role from the dropdown list.
4. In the left pane, click **Subjects**. Click **All Users** to create a grant
for a specific user, or click **Organizations** to create a grant for an
organization or a team.
5. Select a user, team, or organization and click **Create**.
By default, all new users are placed in the `docker-datacenter` organization. If
you want to apply a grant to all UCP users, create a grant with the
`docker-datacenter` org as a subject.
## Where to go next
- [Isolate volumes between two different teams](isolate-volumes-between-teams.md)
{% endif %}


@ -1,48 +0,0 @@
---
title: Reset a user password
description: Learn how to recover your Docker Datacenter credentials.
keywords: ucp, authentication
redirect_from:
- /ucp/
ui_tabs:
- version: ucp-3.0
orhigher: true
- version: ucp-2.2
orlower: true
---
{% if include.ui %}
{% if include.version=="ucp-3.0" %}
This topic is under construction.
{% elsif include.version=="ucp-2.2" %}
If you have administrator credentials to UCP, you can reset the password of
other users.
If that user is being managed using an LDAP service, you need to change the
user password on that system. If the user account is managed using UCP,
log in with administrator credentials to the UCP web UI, navigate to
the **Users** page, and choose the user whose password you want to change.
In the details pane, click **Configure** and select **Security** from the
dropdown.
![](../images/recover-user-password-1.png){: .with-border}
Update the user's password and click **Save**.
If you're an administrator and forgot your password, you can ask other users
with administrator credentials to change your password.
If you're the only administrator, use **ssh** to log in to a manager
node managed by UCP, and run:
```none
{% raw %}
docker exec -it ucp-auth-api enzi \
"$(docker inspect --format '{{ index .Args 0 }}' ucp-auth-api)" \
passwd -i
{% endraw %}
```
{% endif %}
{% endif %}


@ -1,119 +0,0 @@
---
title: Synchronize users and teams with LDAP
description: Learn how to enable LDAP and sync users and teams in Docker Universal Control Plane.
keywords: authorize, authentication, users, teams, UCP, Docker, LDAP
redirect_from:
- /ucp/
ui_tabs:
- version: ucp-3.0
orhigher: true
- version: ucp-2.2
orlower: true
---
{% if include.ui %}
{% if include.version=="ucp-3.0" %}
To enable LDAP in UCP and sync team members with your LDAP directory:
1. Click **Admin Settings** under your username drop down.
2. Click **Authentication & Authorization**.
3. Scroll down and click `Yes` by **LDAP Enabled**. A list of LDAP settings displays.
4. Input values to match your LDAP server installation.
5. Test your configuration in UCP.
6. Manually create teams in UCP to mirror those in LDAP.
7. Click **Sync Now**.
If UCP is configured to sync users with your organization's LDAP directory
server, you can enable syncing the new team's members when
creating a new team or when modifying settings of an existing team.
[Learn how to sync with LDAP at the backend](../configure/external-auth/index.md).
![](../images/create-and-manage-teams-5.png){: .with-border}
There are two methods for matching group members from an LDAP directory:
**Match Group Members**
This option specifies that team members should be synced directly with members
of a group in your organization's LDAP directory. The team's membership will be
synced to match the membership of the group.
| Field | Description |
|:-----------------------|:------------------------------------------------------------------------------------------------------|
| Group DN | The distinguished name of the group from which to select users. |
| Group Member Attribute | The value of this group attribute corresponds to the distinguished names of the members of the group. |
**Match Search Results**
This option specifies that team members should be synced using a search query
against your organization's LDAP directory. The team's membership will be
synced to match the users in the search results.
| Field | Description |
| :--------------------- | :---------------------------------------------------------------------------------------------------- |
| Search Base DN | Distinguished name of the node in the directory tree where the search should start looking for users. |
| Search Filter | Filter to find users. If null, existing users in the search scope are added as members of the team. |
| Search subtree | Defines search through the full LDAP tree, not just one level, starting at the Base DN. |
**Immediately Sync Team Members**
Select this option to run an LDAP sync operation immediately after saving the
configuration for the team. It may take a moment before the members of the team
are fully synced.
{% elsif include.version=="ucp-2.2" %}
To enable LDAP in UCP and sync team members with your LDAP directory:
1. Click **Admin Settings** under your username drop down.
2. Click **Authentication & Authorization**.
3. Scroll down and click `Yes` by **LDAP Enabled**. A list of LDAP settings displays.
4. Input values to match your LDAP server installation.
5. Test your configuration in UCP.
6. Manually create teams in UCP to mirror those in LDAP.
7. Click **Sync Now**.
If UCP is configured to sync users with your organization's LDAP directory
server, you can enable syncing the new team's members when
creating a new team or when modifying settings of an existing team.
[Learn how to sync with LDAP at the backend](../configure/external-auth/index.md).
![](../images/create-and-manage-teams-5.png){: .with-border}
There are two methods for matching group members from an LDAP directory:
**Match Group Members**
This option specifies that team members should be synced directly with members
of a group in your organization's LDAP directory. The team's membership will be
synced to match the membership of the group.
| Field | Description |
|:-----------------------|:------------------------------------------------------------------------------------------------------|
| Group DN | The distinguished name of the group from which to select users. |
| Group Member Attribute | The value of this group attribute corresponds to the distinguished names of the members of the group. |
**Match Search Results**
This option specifies that team members should be synced using a search query
against your organization's LDAP directory. The team's membership will be
synced to match the users in the search results.
| Field | Description |
| :--------------------------------------- | :----------------------------------------------------------------------------------------------------------------------------------------------------- |
| Search Base DN | The distinguished name of the node in the directory tree where the search should start looking for users. |
| Search Filter | The LDAP search filter used to find users. If you leave this field empty, all existing users in the search scope will be added as members of the team. |
| Search subtree instead of just one level | Whether to perform the LDAP search on a single level of the LDAP tree, or search through the full LDAP tree starting at the Base DN. |
**Immediately Sync Team Members**
Select this option to run an LDAP sync operation immediately after saving the
configuration for the team. It may take a moment before the members of the team
are fully synced.
{% endif %}
{% endif %}

View File

@ -15,9 +15,9 @@ ui_tabs:
## User passwords
Docker EE administrators can reset user passwords managed in the Docker EE UI:
Docker EE administrators can reset user passwords managed in UCP:
1. Log in with administrator credentials to the Docker EE UI.
1. Log in to UCP with administrator credentials.
2. Click **Users** under **User Management**.
3. Select the user whose password you want to change.
4. Select **Configure > Security**.

View File

@ -12,16 +12,17 @@ ui_tabs:
---
{% if include.ui %}
{% if include.version=="ucp-3.0" %}
To enable LDAP in the Docker EE UI and sync to your LDAP directory:
To enable LDAP in UCP and sync to your LDAP directory:
1. Click **Admin Settings** under your username drop down.
2. Click **Authentication & Authorization**.
3. Scroll down and click `Yes` by **LDAP Enabled**. A list of LDAP settings displays.
4. Input values to match your LDAP server installation.
5. Test your configuration in the Docker EE UI.
6. Manually create teams in the Docker EE UI to mirror those in LDAP.
5. Test your configuration in UCP.
6. Manually create teams in UCP to mirror those in LDAP.
7. Click **Sync Now**.
If Docker EE is configured to sync users with your organization's LDAP directory
@ -67,15 +68,15 @@ synced to match the users in the search results.
{% elsif include.version=="ucp-2.2" %}
To enable LDAP in the Docker EE UI and sync team members with your LDAP
To enable LDAP in UCP and sync team members with your LDAP
directory:
1. Click **Admin Settings** under your username drop down.
2. Click **Authentication & Authorization**.
3. Scroll down and click `Yes` by **LDAP Enabled**. A list of LDAP settings displays.
4. Input values to match your LDAP server installation.
5. Test your configuration in the Docker EE UI.
6. Manually create teams in the Docker EE UI to mirror those in LDAP.
5. Test your configuration in UCP.
6. Manually create teams in UCP to mirror those in LDAP.
7. Click **Sync Now**.
If Docker EE is configured to sync users with your organization's LDAP directory

View File

@ -10,20 +10,21 @@ ui_tabs:
- version: ucp-2.2
orlower: true
next_steps:
- path: /deploy/rbac/usermgmt-sync-with-ldap/
- path: /deploy/rbac/basics-sync-with-ldap/
title: Synchronize teams with LDAP
- path: /datacenter/ucp/2.2/guides/admin/configure/external-auth/
title: Integrate with an LDAP Directory
- path: /deploy/rbac/usermgmt-define-roles/
title: Create roles to authorize access
- path: /deploy/rbac/usermgmt-grant-permissions/
- path: /deploy/rbac/basics-define-roles/
title: Define roles with authorized API operations
- path: /deploy/rbac/basics-group-resources/
title: Group and isolate cluster resources
- path: /deploy/rbac/basics-grant-permissions/
title: Grant access to cluster resources
---
{% if include.ui %}
Users, teams, and organizations are referred to as subjects in Docker Enterprise
Edition.
Users, teams, and organizations are referred to as subjects in Docker EE.
Individual users can belong to one or more teams but each team can only be in
one organization. At the fictional startup, Acme Company, all teams in the
@ -45,16 +46,16 @@ acme-datacenter
All users are authenticated on the backend. Docker EE provides built-in
authentication and also integrates with LDAP directory services.
To use Docker EE's built-in authentication, you must [create users manually](#Create-users-manually).
To use Docker EE's built-in authentication, you must [create users manually](#create-users-manually).
> To enable LDAP and authenticate and synchronize UCP users and teams with your
> organization's LDAP directory, see:
> - [Synchronize users and teams with LDAP in the UI](usermgmt-sync-with-ldap.md)
> - [Synchronize users and teams with LDAP in the UI](basics-sync-with-ldap.md)
> - [Integrate with an LDAP Directory](/datacenter/ucp/2.2/guides/admin/configure/external-auth/index.md).
## Build an organization architecture
The general flow of designing an organization with teams in the Docker EE UI is:
The general flow of designing an organization with teams in UCP is:
1. Create an organization.
2. Add users or enable LDAP.
@ -63,7 +64,7 @@ The general flow of designing an organization with teams in the Docker EE UI is:
### Create an organization with teams
To create an organization in the Docker EE UI:
To create an organization in UCP:
1. Click **Organization & Teams** under **User Management**.
2. Click **Create Organization**.
@ -80,7 +81,7 @@ To create teams in the organization:
- Click the team name and select **Actions** > **Add Users**.
- Check the users to include and click **Add Users**.
> **Note**: To sync teams with groups in an LDAP server, see [Sync Teams with LDAP](./usermgmt-sync-with-ldap).
> **Note**: To sync teams with groups in an LDAP server, see [Sync Teams with LDAP](./basics-sync-with-ldap).
{% if include.version=="ucp-3.0" %}
@ -88,10 +89,10 @@ To create teams in the organization:
### Create users manually
New users are assigned a default permission level so that they can access the
cluster. To extend a user's default permissions, add them to a team and [create grants](./usermgmt-grant-permissions/). You can optionally grant them Docker EE
cluster. To extend a user's default permissions, add them to a team and [create grants](./basics-grant-permissions/). You can optionally grant them Docker EE
administrator permissions.
To manually create users in the Docker EE UI:
To manually create users in UCP:
1. Click **Users** under **User Management**.
2. Click **Create User**.
@ -111,10 +112,10 @@ To manally create users in the Docker EE UI:
### Create users manually
New users are assigned a default permission level so that they can access the
cluster. To extend a user's default permissions, add them to a team and [create grants](/deploy/rbac/usermgmt-grant-permissions/). You can optionally grant them Docker EE
cluster. To extend a user's default permissions, add them to a team and [create grants](/deploy/rbac/basics-grant-permissions/). You can optionally grant them Docker EE
administrator permissions.
To manually create users in the Docker EE UI:
To manually create users in UCP:
1. Navigate to the **Users** page.
2. Click **Create User**.

View File

@ -10,9 +10,9 @@ ui_tabs:
- version: ucp-2.2
orlower: true
next_steps:
- path: /deploy/rbac/usermgmt-create-subjects/
- path: /deploy/rbac/basics-create-subjects/
title: Create and configure users and teams
- path: /deploy/rbac/usermgmt-grant-permissions/
- path: /deploy/rbac/basics-grant-permissions/
title: Grant access to cluster resources
---

View File

@ -63,7 +63,7 @@ A common workflow for creating grants has four steps:
You can create grants after creating users, collections, and roles (if using
custom roles).
To create a grant in the Docker EE UI:
To create a grant in UCP:
1. Click **Grants** under **User Management**.
2. Click **Create Grant**.
@ -106,7 +106,7 @@ A common workflow for creating grants has four steps:
You can create grants after creating users, collections, and roles (if using custom roles).
To create a grant in the Docker EE UI:
To create a grant in UCP:
1. Click **Grants** under **User Management**.
2. Click **Create Grant**.

View File

@ -10,19 +10,30 @@ ui_tabs:
- version: ucp-2.2
orlower: true
next_steps:
- path: /deploy/rbac/usermgmt-create-subjects/
- path: /deploy/rbac/basics-create-subjects/
title: Create and configure users and teams
- path: /deploy/rbac/usermgmt-define-roles/
- path: /deploy/rbac/basics-define-roles/
title: Create roles to authorize access
- path: /deploy/rbac/usermgmt-grant-permissions/
- path: /deploy/rbac/basics-grant-permissions/
title: Grant access to cluster resources
- path: /deploy/rbac/resources-isolate-volumes/
title: Isolate volumes
---
{% if include.ui %}
{% if include.version=="ucp-3.0" %}
## Kubernetes namespace
A
[namespace](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/)
is a logical area for a Kubernetes cluster. Kubernetes comes with a "default"
namespace for your cluster objects (plus two more for system and public
resources). You can create custom namespaces, but unlike Swarm collections,
namespaces _cannot be nested_.
> Resource types that can be placed into a Kubernetes namespace include: Pods,
> Deployments, NetworkPolicies, Nodes, Services, Secrets, and many more.
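A namespace is created declaratively with a small manifest — a minimal sketch (the namespace name `blog-namespace` is hypothetical):

```
apiVersion: v1
kind: Namespace
metadata:
  name: blog-namespace
```

Saving this as a file and running `kubectl apply -f` on it creates the namespace in the cluster.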
## Swarm collection
A collection is a directory of grouped resources, such as services, containers,
@ -63,7 +74,7 @@ Docker EE provides a number of built-in collections.
| ------------------ | --------------------------------------------------------------------------------------- |
| `/` | Path to all resources in the Swarm cluster. Resources not in a collection are put here. |
| `/System` | Path to UCP managers, DTR nodes, and UCP/DTR system services. By default, only admins have access, but this is configurable. |
| `/Shared` | Default path to all worker nodes for scheduling. In Docker EE Standard, all worker nodes are located here. In [Docker EE Advanced](https://www.docker.com/enterprise-edition), worker nodes can be moved and [isolated](./resources-isolate-nodes/). |
| `/Shared` | Default path to all worker nodes for scheduling. In Docker EE Standard, all worker nodes are located here. In [Docker EE Advanced](https://www.docker.com/enterprise-edition), worker nodes can be moved and [isolated](./howto-isolate-nodes/). |
| `/Shared/Private/` | Path to a user's private collection. |
| `/Shared/Legacy` | Path to the access control labels of legacy versions (UCP 2.1 and lower). |
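From the CLI, a resource can be placed into a collection with an access label — a minimal sketch (the service name and the `/Shared/webserver` collection path are hypothetical; the collection must already exist):

```
docker service create \
  --name web \
  --label com.docker.ucp.access.label="/Shared/webserver" \
  nginx:latest
```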
@ -77,7 +88,7 @@ collection, `/webserver`.
## Default collections
Each user has a default collection which can be changed and in the Docker EE UI preferences.
Each user has a default collection which can be changed in UCP preferences.
Users can't deploy a resource without a collection. When a user deploys a
resource in the CLI without an access label, Docker EE automatically places the
@ -114,7 +125,7 @@ deploys a resource on the user's default resource collection.
The system uses the additional labels, `com.docker.ucp.collection.*`, to enable
efficient resource lookups. By default, nodes have the
`com.docker.ucp.collection.root`, `com.docker.ucp.collection.shared`, and
`com.docker.ucp.collection.swarm` labels set to `true`. The Docker EE UI
`com.docker.ucp.collection.swarm` labels set to `true`. UCP
automatically controls these labels, and you don't need to manage them.
Collections get generic default names, but you can give them meaningful names,
@ -154,7 +165,7 @@ one of the nodes under `/Shared`.
If you want to isolate nodes against other teams, place these nodes in new
collections, and assign the `Scheduler` role, which contains the `Node Schedule`
permission, to the team. [Isolate swarm nodes to a specific team](isolate-nodes-between-teams.md).
permission, to the team. [Isolate swarm nodes to a specific team](howto-isolate-nodes.md).
{% elsif include.version=="ucp-2.2" %}
@ -199,7 +210,7 @@ Docker EE provides a number of built-in collections.
| ------------------ | --------------------------------------------------------------------------------------- |
| `/` | Path to all resources in the Swarm cluster. Resources not in a collection are put here. |
| `/System` | Path to UCP managers, DTR nodes, and UCP/DTR system services. By default, only admins have access, but this is configurable. |
| `/Shared` | Default path to all worker nodes for scheduling. In Docker EE Standard, all worker nodes are located here. In [Docker EE Advanced](https://www.docker.com/enterprise-edition), worker nodes can be moved and [isolated](./resources-isolate-nodes/). |
| `/Shared` | Default path to all worker nodes for scheduling. In Docker EE Standard, all worker nodes are located here. In [Docker EE Advanced](https://www.docker.com/enterprise-edition), worker nodes can be moved and [isolated](./howto-isolate-nodes/). |
| `/Shared/Private/` | Path to a user's private collection. |
| `/Shared/Legacy` | Path to the access control labels of legacy versions (UCP 2.1 and lower). |
@ -213,7 +224,7 @@ collection, `/webserver`.
## Default collections
Each user has a default collection which can be changed and in the Docker EE UI preferences.
Each user has a default collection which can be changed in UCP preferences.
Users can't deploy a resource without a collection. When a user deploys a
resource in the CLI without an access label, Docker EE automatically places the
@ -250,7 +261,7 @@ deploys a resource on the user's default resource collection.
The system uses the additional labels, `com.docker.ucp.collection.*`, to enable
efficient resource lookups. By default, nodes have the
`com.docker.ucp.collection.root`, `com.docker.ucp.collection.shared`, and
`com.docker.ucp.collection.swarm` labels set to `true`. The Docker EE UI
`com.docker.ucp.collection.swarm` labels set to `true`. UCP
automatically controls these labels, and you don't need to manage them.
Collections get generic default names, but you can give them meaningful names,
@ -290,7 +301,7 @@ one of the nodes under `/Shared`.
If you want to isolate nodes against other teams, place these nodes in new
collections, and assign the `Scheduler` role, which contains the `Node Schedule`
permission, to the team. [Isolate swarm nodes to a specific team](isolate-nodes-between-teams.md).
permission, to the team. [Isolate swarm nodes to a specific team](howto-isolate-nodes.md).
{% endif %}
{% endif %}

View File

@ -1,5 +1,5 @@
---
title: Deploy a multi-tier application with RBAC
title: Deploy a simple stateless app with RBAC
description: Learn how to deploy a simple application and customize access to resources.
keywords: rbac, authorize, authentication, users, teams, UCP, Docker
redirect_from:
@ -13,29 +13,21 @@ ui_tabs:
{% if include.ui %}
This tutorial explains how to create a simple web application with multiple
components and how to separate access to each component with role-based access
control (RBAC).
This tutorial explains how to create an nginx web server and limit access to one
team with role-based access control (RBAC).
{% if include.version=="ucp-3.0" %}
## Swarm Stack
In this section, we deploy `acme-blog` as a Swarm stack of two services. See
[Kubernetes Deployment](#kubernetes-deployment) for the same exercise with
Kubernetes.
## Scenario
You are the Docker EE admin at Acme Company and need to secure access to
`acme-blog` and its component services, Wordpress and MySQL. The best way to do
this is to:
- Add teams and users
- Build the organization with teams and users
- Create collections (directories) for storing the resources of each component.
- Create grants that specify which team can do what operations on which collection.
- Give the all-clear for the ops team to deploy the blog.
### Build the organization
## Build the organization
Add the organization, `acme-datacenter`, and create three teams according to the
following structure:
@ -51,6 +43,80 @@ acme-datacenter
```
See: [Create and configure users and teams](./basics-create-subjects.md).
{% if include.version=="ucp-3.0" %}
## Kubernetes deployment
In this section, we deploy `acme-blog` with Kubernetes. See [Swarm stack](#swarm-stack)
for the same exercise with Swarm.
### Generate Kubernetes secret
For the Kubernetes part of the tutorial, we need to generate a Kubernetes secret
with kubectl. Download the client bundle in UCP to get it running.
1. In the Docker EE UI:
a. Go to **My Profile**.
b. Click **New Client Bundle** > **Generate Client Bundle**.
2. On your localhost:
a. Open a new terminal and navigate to the bundle.
b. `mkdir bundle && cd bundle`
c. `unzip ucp-bundle-admin.zip`
d. Source the UCP environment: `eval "$(<env.sh)"`
3. Generate the secret:
```
echo -n "admin" > ./username.txt
echo -n "1f2d1e2e67df" > ./password.txt
kubectl create secret generic db-user-pass --from-file=./username.txt --from-file=./password.txt
```
4. Ensure the secret was generated: `kubectl get secrets`
> To undo the eval setting, close the terminal.
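Once created, the secret can be consumed by a pod — a minimal sketch (the pod name and image are hypothetical; the secret name and key names come from the files generated above, since `--from-file` uses each filename as the key):

```
apiVersion: v1
kind: Pod
metadata:
  name: db-client
spec:
  containers:
  - name: mysql
    image: mysql:5.7
    env:
    - name: MYSQL_USER
      valueFrom:
        secretKeyRef:
          name: db-user-pass
          key: username.txt
    - name: MYSQL_PASSWORD
      valueFrom:
        secretKeyRef:
          name: db-user-pass
          key: password.txt
```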
### Create namespaces
_Under construction_
```
apiVersion: v1
kind: Namespace
metadata:
name: mysql-namespace
```
Then create a namespace for WordPress:
```
apiVersion: v1
kind: Namespace
metadata:
name: wordpress-namespace
```
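Assuming the two manifests above are saved as `mysql-namespace.yaml` and `wordpress-namespace.yaml` (hypothetical filenames), apply them with kubectl and verify:

```
kubectl apply -f mysql-namespace.yaml
kubectl apply -f wordpress-namespace.yaml
kubectl get namespaces
```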
### Grant roles
_Under construction_
### Deploy Wordpress and MySQL with Kubernetes
_Under construction_
### Test access
_Under construction_
## Swarm Stack
In this section, we deploy `acme-blog` as a Swarm stack of two services. See
[Kubernetes Deployment](#kubernetes-deployment) for the same exercise with
Kubernetes.
### Create collection paths
Create three nested Swarm collections. First, create a collection for
@ -82,7 +148,7 @@ Create three grants with built-in roles:
See: [Grant access to cluster resources](./basics-grant-permissions.md).
### Deploy Swarm stack with Wordpress and MySQL
### Deploy Wordpress and MySQL with Swarm
You've configured Docker EE. The `ops` team can now deploy `acme-blog`:
@ -157,22 +223,14 @@ Log on to the Docker EE UI as each user and ensure that
![image](../images/rbac-howto-wpress-mysql-dba-30.png){: .with-border}
## Kubernetes Deployment
In this section, we deploy `acme-blog` with Kubernetes.
...
{% elsif include.version=="ucp-2.2" %}
## Swarm Stack
In this section, we deploy `acme-blog` as a Swarm stack of two services.
You are the Docker EE admin at Acme Company and need to secure access to
`acme-blog` and its component services, Wordpress and MySQL. The best way to do
this is to:
You are the UCP admin at Acme Company and need to secure access to `acme-blog`
and its component services, Wordpress and MySQL. The best way to do this is to:
- Add teams and users
- Create collections (directories) for storing the resources of each component.
@ -226,9 +284,9 @@ Create three grants with built-in roles:
See: [Grant access to cluster resources](./basics-grant-permissions.md).
### Deploy Swarm stack with Wordpress and MySQL
### Deploy Wordpress and MySQL with Swarm
You've configured Docker EE. The `ops` team can now deploy `acme-blog`:
You've configured UCP. The `ops` team can now deploy `acme-blog`:
1. Click **Shared Resources** > **Stacks**.
2. Click **Create Stack**.
@ -291,7 +349,7 @@ networks:
### Test access
Log on to the Docker EE UI as each user and ensure that:
Log on to UCP as each user and ensure that:
- `dba` (alex) can only see and access `mysql-collection`
- `dev` (bett) can only see and access `wordpress-collection`
- `ops` (chad) can see and access both.

View File

@ -12,7 +12,9 @@ ui_tabs:
---
{% if include.ui %}
{% if include.version=="ucp-3.0" %}
With Docker EE Advanced, you can enable physical isolation of resources
by organizing nodes into collections and granting `Scheduler` access for
different users. To control access to nodes, move them to dedicated collections

View File

@ -1,5 +1,5 @@
---
title: Isolate volumes between two different teams
title: Isolate volumes to a specific team
description: Create grants that limit access to volumes to specific teams.
keywords: ucp, grant, role, permission, authentication
redirect_from:
@ -106,7 +106,7 @@ created by the Dev and Prod users.
## Where to go next
- [Isolate swarm nodes to a specific team](isolate-nodes-between-teams.md)
- [Isolate swarm nodes to a specific team](howto-isolate-nodes.md)
{% endif %}
{% endif %}

View File

@ -1,226 +0,0 @@
---
title: Deploy a simple Wordpress service with RBAC
description: Create a grant to control access to a service.
keywords: ucp, grant, role, permission, authentication
redirect_from:
- /ucp/
ui_tabs:
- version: ucp-3.0
orhigher: true
- version: ucp-2.2
orlower: true
---
{% if include.ui %}
{% if include.version=="ucp-3.0" %}
## Deploy Kubernetes workload and restrict access
This section is under construction.
## Deploy Swarm service and restrict access
In this example, your organization is granted access to a new resource
collection that contains one Swarm service.
1. Create an organization and a team.
2. Create a collection for the view-only service.
3. Deploy a Swarm serivce.
4. Create a grant to manage user access to the collection.
![](../images/view-only-access-diagram.svg)
### Create an organization
Create an organization with one team, and add one user who isn't an
administrator to the team.
1. Log in to the Docker EE UI as an administrator.
2. Navigate to the **Organizations & Teams** page and click
**Create Organization**. Name the new organization `engineering` and
click **Create**.
3. Click **Create Team**, name the new team `Dev`, and click **Create**.
3. Add a non-admin user to the Dev team.
For more, see: [Learn how to create users and teams](usermgmt-create-subjects.md).
### Create a collection for the service
1. Navigate to the **Collections** page to view all of the resource
collections in the swarm.
2. Find the **Shared** collection and click **View children**.
3. Click **Create collection** and name the collection `View-only services`.
4. Click **Create** to create the collection.
![](../images/deploy-view-only-service-1.png)
The `/Shared/View-only services` collection is ready to use for access
control.
### Deploy a service
Currently, the new collection has no resources assigned to it. To access
resources through this collection, deploy a new service and add it to the
collection.
1. Navigate to the **Services** page and create a new service, named
`WordPress`.
2. In the **Image** textbox, enter `wordpress:latest`. This identifies the
most recent WordPress image in the Docker Store.
3. In the left pane, click **Collection**. The **Swarm** collection appears.
4. Click **View children** to list all of the collections. In **Shared**,
Click **View children**, find the **View-only services** collection and
select it.
5. Click **Create** to add the "WordPress" service to the collection and
deploy it.
![](../images/deploy-view-only-service-3.png)
You're ready to create a grant for controlling access to the "WordPress" service.
### Create a grant
Currently, users who aren't administrators can't access the
`/Shared/View-only services` collection. Create a grant to give the
`engineering` organization view-only access.
> A grant is made up of a *subject*, a *role*, and a *resource collection*.
1. Navigate to the **Grants** page and click **Create Grant**.
2. In the left pane, click **Collections**, navigate to **/Shared/View-only services**,
and click **Select Collection**.
3. Click **Roles**, and in the dropdown, select **View Only**.
4. Click **Subjects**, and under **Select subject type**, click **Organizations**.
In the dropdown, select **engineering**.
5. Click **Create** to grant permissions to the organization.
![](../images/deploy-view-only-service-4.png)
Everything is in place to show role-based access control in action.
### Verify the user's permissions
Users in the `engineering` organization have view-only access to the
`/Shared/View-only services` collection. You can confirm this by logging in
as a non-admin user in the organization and trying to delete the service.
1. Log in as the user who you assigned to the Dev team.
2. Navigate to the **Services** page and click **WordPress**.
3. In the details pane, confirm that the service's collection is
**/Shared/View-only services**.
![](../images/deploy-view-only-service-2.png)
4. Click the checkbox next to the **WordPress** service, click **Actions**,
and select **Remove**. You get an error message, because the user
doesn't have `Service Delete` access to the collection.
{% elsif include.version=="ucp-2.2" %}
## Deploy Swarm service and restrict access
In this example, your organization is granted access to a new resource
collection that contains one Swarm service.
1. Create an organization and a team.
2. Create a collection for the view-only service.
3. Deploy a Swarm serivce.
4. Create a grant to manage user access to the collection.
![](../images/view-only-access-diagram.svg)
### Create an organization
Create an organization with one team, and add one user who isn't an
administrator to the team.
1. Log in to the Docker EE UI as an administrator.
2. Navigate to the **Organizations & Teams** page and click
**Create Organization**. Name the new organization `engineering` and
click **Create**.
3. Click **Create Team**, name the new team `Dev`, and click **Create**.
3. Add a non-admin user to the Dev team.
For more, see: [Learn how to create users and teams](usermgmt-create-subjects.md).
### Create a collection for the service
1. Navigate to the **Collections** page to view all of the resource
collections in the swarm.
2. Find the **Shared** collection and click **View children**.
3. Click **Create collection** and name the collection `View-only services`.
4. Click **Create** to create the collection.
![](../images/deploy-view-only-service-1.png)
The `/Shared/View-only services` collection is ready to use for access
control.
### Deploy a service
Currently, the new collection has no resources assigned to it. To access
resources through this collection, deploy a new service and add it to the
collection.
1. Navigate to the **Services** page and create a new service, named
`WordPress`.
2. In the **Image** textbox, enter `wordpress:latest`. This identifies the
most recent WordPress image in the Docker Store.
3. In the left pane, click **Collection**. The **Swarm** collection appears.
4. Click **View children** to list all of the collections. In **Shared**,
Click **View children**, find the **View-only services** collection and
select it.
5. Click **Create** to add the "WordPress" service to the collection and
deploy it.
![](../images/deploy-view-only-service-3.png)
You're ready to create a grant for controlling access to the "WordPress" service.
### Create a grant
Currently, users who aren't administrators can't access the
`/Shared/View-only services` collection. Create a grant to give the
`engineering` organization view-only access.
> A grant is made up of a *subject*, a *role*, and a *resource collection*.
1. Navigate to the **Grants** page and click **Create Grant**.
2. In the left pane, click **Collections**, navigate to **/Shared/View-only services**,
and click **Select Collection**.
3. Click **Roles**, and in the dropdown, select **View Only**.
4. Click **Subjects**, and under **Select subject type**, click **Organizations**.
In the dropdown, select **engineering**.
5. Click **Create** to grant permissions to the organization.
![](../images/deploy-view-only-service-4.png)
Everything is in place to show role-based access control in action.
### Verify the user's permissions
Users in the `engineering` organization have view-only access to the
`/Shared/View-only services` collection. You can confirm this by logging in
as a non-admin user in the organization and trying to delete the service.
1. Log in as the user who you assigned to the Dev team.
2. Navigate to the **Services** page and click **WordPress**.
3. In the details pane, confirm that the service's collection is
**/Shared/View-only services**.
![](../images/deploy-view-only-service-2.png)
4. Click the checkbox next to the **WordPress** service, click **Actions**,
and select **Remove**. You get an error message, because the user
doesn't have `Service Delete` access to the collection.
{% endif %}
### Where to go next
- [Isolate volumes between two different teams](isolate-volumes-between-teams.md)
{% endif %}

View File

@ -10,27 +10,25 @@ ui_tabs:
- version: ucp-2.2
orlower: true
next_steps:
- path: /deploy/rbac/usermgmt-create-subjects/
- path: /deploy/rbac/basics-create-subjects/
title: Create and configure users and teams
- path: /deploy/rbac/usermgmt-define-roles/
- path: /deploy/rbac/basics-define-roles/
title: Define roles with authorized API operations
- path: /deploy/rbac/resources-group-resources/
- path: /deploy/rbac/basics-group-resources/
title: Group and isolate cluster resources
- path: /deploy/rbac/usermgmt-grant-permissions/
- path: /deploy/rbac/basics-grant-permissions/
title: Grant access to cluster resources
---
{% if include.ui %}
The Docker Enterprise Edition UI, or the Docker Universal Control Plane (UCP),
Docker Universal Control Plane (UCP), the UI for [Docker EE](https://www.docker.com/enterprise-edition),
lets you authorize users to view, edit, and use cluster resources by granting
role-based permissions. Resources can be grouped and isolated inline with an
organization's needs and users can be granted more than one role.
role-based permissions against resource types.
{% if include.version=="ucp-3.0" %}
To authorize access to cluster resources across your organization, Docker EE
To authorize access to cluster resources across your organization, UCP
administrators might take the following high-level steps:
- Add and configure **subjects** (users and teams).
@ -39,7 +37,7 @@ administrators might take the following high-level steps:
- Group cluster **resources** into Swarm collections or Kubernetes namespaces.
- Create **grants** by marrying subject + role + resource.
For a simple example, see [Deploy a multi-tier application with RBAC](./howto-wordpress-multitier).
For a simple example, see _todo_
## Subjects
@ -53,9 +51,7 @@ role that defines permitted operations against one or more resource types.
- **Organization**: A group of teams that share a specific set of permissions,
defined by the roles of the organization.
For more, see:
- [Create and configure users and teams](./usermgmt-create-subjects.md)
- [Synchronize users and teams with LDAP](./usermgmt-sync-with-ldap.md)
For more, see: [Create and configure users and teams](./basics-create-subjects.md)
## Roles
Most organizations use multiple roles to fine-tune the appropriate access. A
given team or user may have different roles provided to them depending on what
resource they are accessing.
For more, see: [Define roles with authorized API operations](./usermgmt-define-roles.md)
For more, see: [Define roles with authorized API operations](./basics-define-roles.md)
## Resources
Cluster resources are grouped into Swarm collections or Kubernetes namespaces.
A collection is a directory that holds Swarm resources. You can create
collections in the Docker EE UI by defining a path and moving resources. Or you
can create the path in the Docker EE UI and use *labels* in your YAML file to
assign application resources to that path. For an example, see [Deploy a multi-tier service with multiple roles](/deploy/rbac/howto-wordpress-multitier/).
collections in UCP by both defining a directory path and moving resources into
it. Or you can create the path in UCP and use *labels* in your YAML file to
assign application resources to that path.
For an example, see _todo_
> Resource types that can be placed into a Swarm collection include: Containers,
> Networks, Nodes, Services, Secrets, and Volumes.
@ -89,12 +87,13 @@ A
[namespace](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/)
is a logical area for a Kubernetes cluster. Kubernetes comes with a "default"
namespace for your cluster objects (plus two more for system and public
resources).
resources). You can create custom namespaces, but unlike Swarm collections,
namespaces _cannot be nested_.
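
A minimal sketch of a custom namespace and a pod assigned to it (the `payments` namespace and `web` pod names are hypothetical):

```yaml
# Namespaces are flat; there is no parent/child path as with collections.
apiVersion: v1
kind: Namespace
metadata:
  name: payments
---
# Place a resource in the namespace via metadata.namespace.
apiVersion: v1
kind: Pod
metadata:
  name: web
  namespace: payments
spec:
  containers:
    - name: nginx
      image: nginx:latest
```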
> Resource types that can be placed into a Kubernetes namespace include: Pods,
> Deployments, NetworkPolicies, Nodes, Services, Secrets, and many more.
For more, see: [Group and isolate cluster resources](./resources-group-resources.md).
For more, see: [Group and isolate cluster resources](./basics-group-resources.md).
## Grants
@@ -110,19 +109,19 @@ Only an administrator can manage grants, subjects, roles, and resources.
> into directories or namespaces, define roles by selecting allowable operations,
> and apply grants to users and teams.
For more, see: [Grant access to cluster resources](./usermgmt-grant-permissions.md).
For more, see: [Grant access to cluster resources](./basics-grant-permissions.md).
{% elsif include.version=="ucp-2.2" %}
To authorize access to cluster resources across your organization, Docker EE
To authorize access to cluster resources across your organization, UCP
administrators might take the following high-level steps:
- Add and configure **subjects** (users and teams)
- Define custom **roles** (or use defaults) by adding permissions to resource
types
- Group cluster **resources** into Swarm collections
- Create **grants** by marrying subject + role + resource
- Add and configure **subjects** (users and teams).
- Define custom **roles** (or use defaults) by adding permitted operations per
  resource type.
- Group cluster **resources** into Swarm collections.
- Create **grants** by marrying subject + role + resource.
## Subjects
@@ -136,9 +135,7 @@ role that defines permitted operations against one or more resource types.
- **Organization**: A group of teams that share a specific set of permissions,
defined by the roles of the organization.
For more, see:
- [Create and configure users and teams](./usermgmt-create-subjects.md)
- [Synchronize users and teams with LDAP](./usermgmt-sync-with-ldap.md)
For more, see: [Create and configure users and teams](./basics-create-subjects.md)
## Roles
@@ -154,21 +151,23 @@ Most organizations use different roles to fine-tune the appropriate access. A
given team or user may have different roles provided to them depending on what
resource they are accessing.
For more, see: [Define roles with authorized API operations](./usermgmt-define-roles.md)
For more, see: [Define roles with authorized API operations](./basics-define-roles.md)
## Resources
Cluster resources are grouped into Swarm collections.
A collection is a directory that holds Swarm resources. You can create
collections in the Docker EE UI by defining a path and moving resources. Or you
can create the path in the Docker EE UI and use *labels* in your YAML file to
assign application resources to that path. For an example, see [Deploy a multi-tier service with multiple roles](./deploy/rbac/howto-wordpress-multitier/).
collections in UCP by both defining a directory path and moving resources into
it. Or you can create the path in UCP and use *labels* in your YAML file to
assign application resources to that path.
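
As a sketch under the same assumptions noted elsewhere in this guide (the `com.docker.ucp.access.label` key follows UCP 2.x conventions, and the `/Shared/example-app` path is a hypothetical collection that must already exist in UCP):

```yaml
version: "3.3"
services:
  web:
    image: nginx:latest
    deploy:
      labels:
        # Service label placing this service into a Swarm collection.
        com.docker.ucp.access.label: /Shared/example-app
```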
For an example, see _todo_
> Resource types that can be placed into a Swarm collection include: Containers,
> Networks, Nodes, Services, Secrets, and Volumes.
For more, see: [Group and isolate cluster resources](./resources-group-resources.md).
For more, see: [Group and isolate cluster resources](./basics-group-resources.md).
## Grants
@@ -184,21 +183,15 @@ Only an administrator can manage grants, subjects, roles, and resources.
> into directories or namespaces, define roles by selecting allowable operations,
> and apply grants to users and teams.
For more, see: [Grant access to cluster resources](./usermgmt-grant-permissions.md).
For more, see: [Grant access to cluster resources](./basics-grant-permissions.md).
## Transition from UCP 2.1 access control
- Your existing access labels and permissions are migrated automatically during
an upgrade from UCP 2.1.x.
- Unlabeled "user-owned" resources are migrated into each user's private
collection, in `/Shared/Private/<username>`.
- Access labels and permissions are migrated automatically when upgrading from UCP 2.1.x.
- Unlabeled user-owned resources are migrated into `/Shared/Private/<username>`.
- Old access control labels are migrated into `/Shared/Legacy/<labelname>`.
- When deploying a resource, choose a collection instead of an access label.
- Use grants for access control, instead of unlabeled permissions.
For hands-on tutorials, see:
- [Deploy a simple Wordpress service with RBAC](./howto-wordpress-view-only.md)
- [Deploy a multi-tier application with RBAC](./howto-wordpress-multitier.md)
{% endif %}
{% endif %}