Update tutorial, Test format (#296)

This commit is contained in:
Gwendolynne Barr 2017-11-17 13:32:32 -08:00 committed by Jim Galasyn
parent 30b9f68396
commit a21e7af29a
12 changed files with 495 additions and 234 deletions


@ -311,7 +311,7 @@ guides:
- path: /storage/storagedriver/vfs-driver/
title: Use the VFS storage driver
- sectiontitle: Configure user permissions
section:
- path: /deploy/access-control/
title: Access control model overview
@ -339,13 +339,13 @@ guides:
title: Isolate volumes to specific teams
- path: /deploy/access-control/deploy-view-only-service
title: Deploy a service with view-only access
- sectiontitle: OrcaBank Tutorial
section:
- path: /deploy/access-control/access-control-design-ee-standard
title: OrcaBank tutorial 1
- path: /deploy/access-control/access-control-design-ee-advanced
title: OrcaBank tutorial 2
- sectiontitle: Deploy your app in production
section:
- path: /deploy/


@ -12,31 +12,23 @@ ui_tabs:
---
{% if include.ui %}
Go through the [Docker Enterprise Standard tutorial](access-control-design-ee-standard.md)
before continuing here with Docker Enterprise Advanced.
In the first tutorial, the fictional company, OrcaBank, designed an architecture
with role-based access control (RBAC) to meet their organization's security
needs. They assigned multiple grants to fine-tune access to resources across
collection boundaries on a single platform.
{% if include.version=="ucp-3.0" %}
In this tutorial, OrcaBank implements new and more stringent security
requirements for production applications:
{% elsif include.version=="ucp-2.2" %}
In this tutorial, OrcaBank implements their new stringent security requirements
for production applications:
First, OrcaBank adds a staging zone to their deployment model. They no longer
move developed applications directly into production. Instead, they deploy apps
from their dev cluster to staging for testing, and then to production.
Second, production applications are no longer permitted to share any physical
infrastructure with non-production infrastructure. OrcaBank segments the
scheduling and access of applications with [Node Access Control](access-control-node.md).
> [Node Access Control](access-control-node.md) is a feature of Docker EE
@ -44,14 +36,15 @@ scheduling and access of applications with [Node Access Control](access-control-
> can be placed in different collections so that resources can be scheduled and
> isolated on disparate physical or virtual hardware resources.
{% if include.version=="ucp-3.0" %}
## Team access requirements
OrcaBank still has three application teams, `payments`, `mobile`, and `db` with
varying levels of segmentation between them.
Their RBAC redesign organizes their UCP cluster into two top-level collections,
staging and production, which are completely separate security zones on separate
physical infrastructure.
OrcaBank's four teams now have different needs in production and staging:
@ -72,12 +65,12 @@ OrcaBank has decided to replace their custom `Ops` role with the built-in
- `View Only` (default role) allows users to see but not edit all Swarm
resources.
- `Full Control` (default role) allows users complete control of all collections
granted to them. They can also create containers without restriction but
cannot see the containers of other users.
- `View & Use Networks + Secrets` (custom role) enables users to view/connect
to networks and view/use secrets used by `db` containers, but prevents them
from seeing or impacting the `db` applications themselves.
![image](../images/design-access-control-adv-0.png){: .with-border}
@ -117,8 +110,116 @@ The collection architecture now has the following tree representation:
OrcaBank must now diversify their grants further to ensure the proper division
of access.
The `payments` and `mobile` application teams will have three grants each--one
for deploying to production, one for deploying to staging, and the same grant to
access shared `db` networks and secrets.
![image](../images/design-access-control-adv-grant-composition.png){: .with-border}
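A stack file sketch can make this grant composition concrete. Everything here is
illustrative: the image, network, and secret names are invented, and the
`com.docker.ucp.access.label` label is an assumption about how your UCP version
assigns workloads to collections.

```
# Hypothetical stack fragment: a payments service in /prod/payments that
# attaches to a network and secret owned by the db team in /prod/db.
version: "3.1"
services:
  payments:
    image: orcabank/payments:latest   # invented image name
    networks:
      - db-shared
    secrets:
      - db-password
    deploy:
      labels:
        com.docker.ucp.access.label: /prod/payments
networks:
  db-shared:
    external: true    # created by the db team in the /prod/db collection
secrets:
  db-password:
    external: true    # likewise owned by /prod/db
```

The `payments` team's grant on `/prod/payments` lets them deploy the service,
while their `View & Use Networks + Secrets` grant on the shared `db` collection
lets the service attach to `db-shared` and read `db-password` without exposing
the `db` applications themselves.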
## OrcaBank access architecture
The resulting access architecture, designed with Docker EE Advanced, provides
physical segmentation between production and staging using node access control.
Applications are scheduled only on UCP worker nodes in the dedicated application
collection, and they use shared resources across collection boundaries to access
the databases in the `/prod/db` collection.
![image](../images/design-access-control-adv-architecture.png){: .with-border}
### DB team
The OrcaBank `db` team is responsible for deploying and managing the full
lifecycle of the databases that are in production. They have the full set of
operations against all database resources.
![image](../images/design-access-control-adv-db.png){: .with-border}
### Mobile team
The `mobile` team is responsible for deploying their full application stack in
staging. In production they deploy their own applications but use the databases
that are provided by the `db` team.
![image](../images/design-access-control-adv-mobile.png){: .with-border}
{% elsif include.version=="ucp-2.2" %}
## Team access requirements
OrcaBank still has three application teams, `payments`, `mobile`, and `db` with
varying levels of segmentation between them.
Their RBAC redesign organizes their UCP cluster into two top-level collections,
staging and production, which are completely separate security zones on separate
physical infrastructure.
OrcaBank's four teams now have different needs in production and staging:
- `security` should have view-only access to all applications in production (but
not staging).
- `db` should have full access to all database applications and resources in
production (but not staging). See [DB Team](#db-team).
- `mobile` should have full access to their Mobile applications in both
production and staging and limited access to shared `db` services. See
[Mobile Team](#mobile-team).
- `payments` should have full access to their Payments applications in both
production and staging and limited access to shared `db` services.
## Role composition
OrcaBank has decided to replace their custom `Ops` role with the built-in
`Full Control` role.
- `View Only` (default role) allows users to see but not edit all Swarm
resources.
- `Full Control` (default role) allows users complete control of all collections
granted to them. They can also create containers without restriction but
cannot see the containers of other users.
- `View & Use Networks + Secrets` (custom role) enables users to view/connect
to networks and view/use secrets used by `db` containers, but prevents them
from seeing or impacting the `db` applications themselves.
![image](../images/design-access-control-adv-0.png){: .with-border}
## Collection architecture
In the previous tutorial, OrcaBank created separate collections for each
application team and nested them all under `/Shared`.
To meet their new security requirements for production, OrcaBank is redesigning
collections in two ways:
- Adding collections for both the production and staging zones, and nesting a
set of application collections under each.
- Segmenting nodes. Both the production and staging zones will have dedicated
nodes; and in production, each application will be on a dedicated node.
The collection architecture now has the following tree representation:
```
/
├── System
├── Shared
├── prod
│   ├── mobile
│   ├── payments
│   └── db
│       ├── mobile
│       └── payments
└── staging
    ├── mobile
    └── payments
```
## Grant composition
OrcaBank must now diversify their grants further to ensure the proper division
of access.
The `payments` and `mobile` application teams will have three grants each--one
for deploying to production, one for deploying to staging, and the same grant to
access shared `db` networks and secrets.
![image](../images/design-access-control-adv-grant-composition.png){: .with-border}


@ -12,65 +12,62 @@ ui_tabs:
---
{% if include.ui %}
{% if include.version=="ucp-3.0" %}
This topic is under construction.
{% elsif include.version=="ucp-2.2" %}
[Collections and grants](index.md) are strong tools that can be used to control
access and visibility to resources in UCP.
This tutorial describes a fictitious company named OrcaBank that is designing an
architecture with role-based access control (RBAC) for their application
engineering group.
{% if include.version=="ucp-3.0" %}
## Team access requirements
OrcaBank reorganized their application teams by product with each team providing
shared services as necessary. Developers at OrcaBank do their own DevOps and
deploy and manage the lifecycle of their applications.
OrcaBank has four teams with the following resource needs:
- `security` should have view-only access to all applications in the cluster.
- `db` should have full access to all database applications and resources. See
[DB Team](#db-team).
- `mobile` should have full access to their mobile applications and limited
access to shared `db` services. See [Mobile Team](#mobile-team).
- `payments` should have full access to their payments applications and limited
access to shared `db` services.
## Role composition
To assign the proper access, OrcaBank is employing a combination of default
and custom roles:
- `View Only` (default role) allows users to see all resources (but not edit or use).
- `Ops` (custom role) allows users to perform all operations against configs,
containers, images, networks, nodes, secrets, services, and volumes.
- `View & Use Networks + Secrets` (custom role) enables users to view/connect to
networks and view/use secrets used by `db` containers, but prevents them from
seeing or impacting the `db` applications themselves.
![image](../images/design-access-control-adv-0.png){: .with-border}
## Collection architecture
OrcaBank is also creating collections of resources to mirror their team
structure.
Currently, all OrcaBank applications share the same physical resources, so all
nodes and applications are being configured in collections that nest under the
built-in collection, `/Shared`.
Other collections are also being created to enable shared `db` applications.
> **Note:** For increased security with node-based isolation, use Docker
> Enterprise Advanced.
- `/Shared/mobile` hosts all Mobile applications and resources.
- `/Shared/payments` hosts all Payments applications and resources.
- `/Shared/db` is a top-level collection for all `db` resources.
- `/Shared/db/payments` is a collection of `db` resources for Payments applications.
- `/Shared/db/mobile` is a collection of `db` resources for Mobile applications.
@ -88,7 +85,6 @@ The collection architecture has the following tree representation:
   └── payments
```
OrcaBank's [Grant composition](#grant-composition) ensures that their collection
architecture gives the `db` team access to _all_ `db` resources and restricts
app teams to _shared_ `db` resources.
@ -121,9 +117,9 @@ collection boundaries. By assigning multiple grants per team, the Mobile and
Payments applications teams can connect to dedicated Database resources through
a secure and controlled interface, leveraging Database networks and secrets.
> **Note:** In Docker Enterprise Standard, all resources are deployed across the
> same group of UCP worker nodes. Node segmentation is provided in Docker
> Enterprise Advanced and discussed in the [next tutorial](#).
![image](../images/design-access-control-adv-2.png){: .with-border}
@ -142,9 +138,128 @@ minus the database tier that is managed by the `db` team.
![image](../images/design-access-control-adv-4.png){: .with-border}
{% elsif include.version=="ucp-2.2" %}
## Team access requirements
OrcaBank reorganized their application teams by product with each team providing
shared services as necessary. Developers at OrcaBank do their own DevOps and
deploy and manage the lifecycle of their applications.
OrcaBank has four teams with the following resource needs:
- `security` should have view-only access to all applications in the cluster.
- `db` should have full access to all database applications and resources. See
[DB Team](#db-team).
- `mobile` should have full access to their mobile applications and limited
access to shared `db` services. See [Mobile Team](#mobile-team).
- `payments` should have full access to their payments applications and limited
access to shared `db` services.
## Role composition
To assign the proper access, OrcaBank is employing a combination of default
and custom roles:
- `View Only` (default role) allows users to see all resources (but not edit or use).
- `Ops` (custom role) allows users to perform all operations against configs,
containers, images, networks, nodes, secrets, services, and volumes.
- `View & Use Networks + Secrets` (custom role) enables users to view/connect to
networks and view/use secrets used by `db` containers, but prevents them from
seeing or impacting the `db` applications themselves.
![image](../images/design-access-control-adv-0.png){: .with-border}
## Collection architecture
OrcaBank is also creating collections of resources to mirror their team
structure.
Currently, all OrcaBank applications share the same physical resources, so all
nodes and applications are being configured in collections that nest under the
built-in collection, `/Shared`.
Other collections are also being created to enable shared `db` applications.
> **Note:** For increased security with node-based isolation, use Docker
> Enterprise Advanced.
- `/Shared/mobile` hosts all Mobile applications and resources.
- `/Shared/payments` hosts all Payments applications and resources.
- `/Shared/db` is a top-level collection for all `db` resources.
- `/Shared/db/payments` is a collection of `db` resources for Payments applications.
- `/Shared/db/mobile` is a collection of `db` resources for Mobile applications.
The collection architecture has the following tree representation:
```
/
├── System
└── Shared
    ├── mobile
    ├── payments
    └── db
        ├── mobile
        └── payments
```
OrcaBank's [Grant composition](#grant-composition) ensures that their collection
architecture gives the `db` team access to _all_ `db` resources and restricts
app teams to _shared_ `db` resources.
## LDAP/AD integration
OrcaBank has standardized on LDAP for centralized authentication to help their
identity team scale across all the platforms they manage.
To implement LDAP authentication in UCP, OrcaBank is using UCP's native LDAP/AD
integration to map LDAP groups directly to UCP teams. Users can be added to or
removed from UCP teams via LDAP, which OrcaBank's identity team manages
centrally.
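For illustration, a group-to-team mapping might be configured along these lines
(the directory layout and group names are invented):

```
UCP team "db"        <-  base DN: ou=groups,dc=orcabank,dc=com
                         filter:  (&(objectClass=group)(cn=db))
UCP team "mobile"    <-  filter:  (&(objectClass=group)(cn=mobile))
UCP team "payments"  <-  filter:  (&(objectClass=group)(cn=payments))
```

Membership then syncs from the directory: adding a user to the `cn=db` group in
LDAP adds them to the UCP `db` team on the next sync.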
The following grant composition shows how LDAP groups are mapped to UCP teams.
## Grant composition
OrcaBank is taking advantage of the flexibility in UCP's grant model by applying
two grants to each application team. One grant allows each team to fully
manage the apps in their own collection, and the second grant gives them the
(limited) access they need to networks and secrets within the `db` collection.
![image](../images/design-access-control-adv-1.png){: .with-border}
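Spelled out, the `mobile` team's two grants look like this (the `payments`
team's pair is symmetric):

```
Grant 1: subject = mobile   role = Ops                             collection = /Shared/mobile
Grant 2: subject = mobile   role = View & Use Networks + Secrets   collection = /Shared/db/mobile
```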
## OrcaBank access architecture
OrcaBank's resulting access architecture shows applications connecting across
collection boundaries. By assigning multiple grants per team, the Mobile and
Payments applications teams can connect to dedicated Database resources through
a secure and controlled interface, leveraging Database networks and secrets.
> **Note:** In Docker Enterprise Standard, all resources are deployed across the
> same group of UCP worker nodes. Node segmentation is provided in Docker
> Enterprise Advanced and discussed in the [next tutorial](#).
![image](../images/design-access-control-adv-2.png){: .with-border}
### DB team
The `db` team is responsible for deploying and managing the full lifecycle
of the databases used by the application teams. They can execute the full set of
operations against all database resources.
![image](../images/design-access-control-adv-3.png){: .with-border}
### Mobile team
The `mobile` team is responsible for deploying their own application stack,
minus the database tier that is managed by the `db` team.
![image](../images/design-access-control-adv-4.png){: .with-border}
{% endif %}
## Where to go next
- [Access control design with Docker EE Advanced](access-control-design-ee-advanced.md)
{% endif %}
{% endif %}


@ -13,31 +13,36 @@ ui_tabs:
{% if include.ui %}
{% if include.version=="ucp-3.0" %}
*Node access control* lets you segment scheduling and visibility by node. Node access control is available in [Docker EE Advanced](https://www.docker.com/pricing) with Swarm orchestration only.
{% elsif include.version=="ucp-2.2" %}
*Node access control* lets you segment scheduling and visibility by node. Node access control is available in [Docker EE Advanced](https://www.docker.com/pricing).
{% endif %}
By default, non-infrastructure nodes (non-UCP & DTR nodes) belong to a built-in
collection called `/Shared`. All application workloads in the cluster are
scheduled on nodes in the `/Shared` collection. This includes those deployed in
private collections (`/Shared/Private/`) or any other collection under
`/Shared`.
This setting is enabled by a built-in grant that assigns every UCP user the
`scheduler` capability against the `/Shared` collection.
Node Access Control works by placing nodes in custom collections (outside of
`/Shared`). If a user or team is granted a role with the `scheduler` capability
against a collection, then they can schedule containers and services on these
nodes.
In the following example, users with `scheduler` capability against
`/collection1` can schedule applications on those nodes.
Again, these collections lie outside of the `/Shared` collection so users
without grants do not have access to these collections unless it is explicitly
granted. These users can only deploy applications on the built-in `/Shared`
collection nodes.
![image](../images/design-access-control-adv-custom-grant.png){: .with-border}
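As a sketch, placing a node into such a collection could be done by labelling
it. The label name below is an assumption, and in practice nodes are usually
moved between collections in the UCP web UI:

```
# Hypothetical: move worker-node-1 into /collection1 so that only users with
# the scheduler capability against /collection1 can schedule work on it.
docker node update --label-add com.docker.ucp.access.label=/collection1 worker-node-1
```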
@ -57,9 +62,8 @@ With the use of default collections, users, teams, and organizations can be
constrained to the nodes and physical infrastructure on which they can deploy.
## Where to go next
- [Isolate swarm nodes to a specific team](isolate-nodes-between-teams.md)
{% endif %}
{% endif %}


@ -12,10 +12,6 @@ ui_tabs:
---
{% if include.ui %}
{% if include.version=="ucp-3.0" %}
This topic is under construction.
{% elsif include.version=="ucp-2.2" %}
You can extend the user's default permissions by granting them fine-grained
permissions over resources. You do this by adding the user to a team.
@ -127,5 +123,8 @@ the `Data Center Resources` collection.
- [Isolate volumes between two different teams](isolate-volumes-between-teams.md)
- [Isolate swarm nodes between two different teams](isolate-nodes-between-teams.md)
{% if include.version=="ucp-3.0" %}
{% elsif include.version=="ucp-2.2" %}
{% endif %}
{% endif %}


@ -17,7 +17,7 @@ integrates with LDAP directory services.
> To enable LDAP and manage users and groups from your organization's directory,
> go to Admin > Admin Settings > Authentication and Authorization.
> [Learn how to integrate with an LDAP directory](../configure/external-auth/index.md).
{% if include.version=="ucp-3.0" %}
To use UCP built-in authentication, you must manually create users. New users
@ -31,12 +31,12 @@ To create a new user in the UCP web UI:
4. Click **Create**.
5. [optional] Check "Is a Docker EE Admin".
> Check `Is a Docker EE Admin` to grant a user permission to change the cluster
> configuration and manage grants, roles, and collections.
![](../images/ucp_usermgmt_users_create01.png){: .with-border}
![](../images/ucp_usermgmt_users_create02.png){: .with-border}
{% elsif include.version=="ucp-2.2" %}
To use UCP built-in authentication, you must manually create users. New users


@ -1,5 +1,5 @@
---
title: Deploy a service and restrict access with RBAC
description: Create a grant to control access to a service.
keywords: ucp, grant, role, permission, authentication
redirect_from:
@ -13,38 +13,47 @@ ui_tabs:
{% if include.ui %}
{% if include.version=="ucp-3.0" %}
## Deploy Kubernetes workload and restrict access
This section is under construction.
{% elsif include.version=="ucp-2.2" %}
{% endif %}
## Deploy Swarm service and restrict access
In this example, your organization is granted access to a new resource
collection that contains one Swarm service.
1. Create an organization and a team.
2. Create a collection for the view-only service.
3. Deploy a Swarm service.
4. Create a grant to manage user access to the collection.
![](../images/view-only-access-diagram.svg)
### Create an organization
Create an organization with one team, and add one user who isn't an administrator
to the team.
1. Log in to UCP as an administrator.
2. Navigate to the **Organizations & Teams** page and click
**Create Organization**. Name the new organization `engineering` and
click **Create**.
3. Click **Create Team**, name the new team `Dev`, and click **Create**.
4. Add a non-admin user to the `Dev` team.
For more, see:
- [Learn how to create and manage users](create-and-manage-users.md).
- [Learn how to create and manage teams](create-and-manage-teams.md).
### Create a collection for the service
1. Navigate to the **Collections** page to view all of the resource
collections in the swarm.
2. Find the **Shared** collection and click **View children**.
3. Click **Create collection** and name the collection `View-only services`.
4. Click **Create** to create the collection.
![](../images/deploy-view-only-service-1.png)
@ -52,15 +61,15 @@ who isn't an administrator to the team.
The `/Shared/View-only services` collection is ready to use for access
control.
### Deploy a service
Currently, the new collection has no resources assigned to it. To access
resources through this collection, deploy a new service and add it to the
collection.
1. Navigate to the **Services** page and create a new service, named
`WordPress`.
2. In the **Image** textbox, enter `wordpress:latest`. This identifies the
most recent WordPress image in the Docker Store.
3. In the left pane, click **Collection**. The **Swarm** collection appears.
4. Click **View children** to list all of the collections. In **Shared**,
@ -73,12 +82,14 @@ collection.
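The UI steps above have a rough CLI equivalent. This is a sketch only, assuming
the `com.docker.ucp.access.label` label is how your UCP version maps a service
to a collection:

```
# Deploy the WordPress service directly into the collection created earlier.
docker service create \
  --name WordPress \
  --label com.docker.ucp.access.label="/Shared/View-only services" \
  wordpress:latest
```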
You're ready to create a grant for controlling access to the "WordPress" service.
### Create a grant
Currently, users who aren't administrators can't access the
`/Shared/View-only services` collection. Create a grant to give the
`engineering` organization view-only access.
> A grant is made up of a *subject*, a *role*, and a *resource collection*.
1. Navigate to the **Grants** page and click **Create Grant**.
2. In the left pane, click **Collections**, navigate to **/Shared/View-only services**,
and click **Select Collection**.
@ -91,7 +102,7 @@ Currently, users who aren't administrators can't access the
Everything is in place to show role-based access control in action.
### Verify the user's permissions
Users in the `engineering` organization have view-only access to the
`/Shared/View-only services` collection. You can confirm this by logging in
@ -108,9 +119,7 @@ as a non-admin user in the organization and trying to delete the service.
and select **Remove**. You get an error message, because the user
doesn't have `Service Delete` access to the collection.
### Where to go next
- [Isolate volumes between two different teams](isolate-volumes-between-teams.md)
{% endif %}
{% endif %}


@ -12,21 +12,29 @@ ui_tabs:
---
{% if include.ui %}
UCP administrators can create *grants* to control how users and organizations
access resources.
{% if include.version=="ucp-3.0" %}
## Kubernetes grants
With Kubernetes orchestration, a grant is made up of a *subject*, a *role*, and a
*namespace*.
This section is under construction.
{% elsif include.version=="ucp-2.2" %}
{% endif %}
## Swarm grants
With Swarm orchestration, a grant is made up of a *subject*, a *role*, and a
*resource collection*.
![](../images/ucp-grant-model-0.svg){: .with-border}
A grant defines who (subject) has how much access (role) to a set of resources
(collection). Each grant is a 1:1:1 mapping of subject, role, collection. For
example, you can grant the "Prod Team" "Restricted Control" permissions for the
"/Production" collection.
The usual workflow for creating grants has four steps.
@ -38,10 +46,10 @@ The usual workflow for creating grants has four steps.
![](../images/ucp-grant-model.svg){: .with-border}
### Create a Swarm grant
When you have your users, collections, and roles set up, you can create grants.
Administrators create grants on the **Manage Grants** page.
1. Click **Create Grant**. All of the collections in the system are listed.
2. Click **Select** on the collection you want to grant access to.
@ -51,8 +59,8 @@ grants. Administrators create grants on the **Manage Grants** page.
organization or a team.
5. Select a user, team, or organization and click **Create**.
By default, all new users are placed in the `docker-datacenter` organization. If
you want to apply a grant to all UCP users, create a grant with the
`docker-datacenter` org as a subject.
## Where to go next
@ -60,4 +68,3 @@ If you want to apply a grant to all UCP users, create a grant with the
- [Isolate volumes between two different teams](isolate-volumes-between-teams.md)
{% endif %}
{% endif %}
View File
---
{% if include.ui %}
{% if include.version=="ucp-3.0" %}
With Docker Universal Control Plane, you can control who creates and edits
resources, such as nodes, services, images, networks, and volumes. You can grant
and manage permissions to enforce fine-grained access control as needed.
## Grant access to Swarm resources
UCP administrators can control who views, edits, and uses Swarm and Kubernetes
resources. They can grant and manage permissions to enforce fine-grained access
control as needed.
## Grant access to Kubernetes resources
This topic is under construction.
## Grants

Grants define which users can access what resources. Grants are effectively
Access Control Lists (ACLs), which, when grouped together, can provide
comprehensive access policies for an entire organization.

A grant is made up of a *subject*, a *namespace*, and a *role*.

Administrators are users who create subjects, define namespaces by labeling
resources, define roles by selecting allowable operations, and apply grants to
users and teams.

> Only an administrator can manage grants, subjects, roles, and resources.

## Subjects

A subject represents a user, team, or organization. A subject is granted a
role that defines permitted operations against one or more resources.

- **User**: A person authenticated by the authentication backend. Users can
  belong to one or more teams and one or more organizations.
- **Team**: A group of users that share permissions defined at the team level.
  A team exists only as part of an organization, and all of its members
  must be members of the organization. Team members share organization
  permissions. A team can be in one organization only.
- **Organization**: A group of teams that share a specific set of permissions,
  defined by the roles of the organization.

## Namespaces

A namespace is ...

## Roles

Roles define what operations can be done by whom against which cluster
resources. A role is a set of permitted operations against a resource that is
assigned to a user or team with a grant.

For example, the built-in role, **Restricted Control**, includes permission to
view and schedule a node (in a granted namespace) but not update it.

Most organizations use different roles to assign the right kind of access. A
given team or user may have different roles provided to them depending on what
resource they are accessing.

You can build custom roles to meet your organizational needs or use the
following built-in roles:

- **View Only** - Users can see all cluster resources but not edit or use them.
- **Restricted Control** - Users can view containers and run a shell inside a
  container process (with `docker exec`) but not view or edit other resources.
- **Full Control** - Users can perform all operations against granted resources.
- **Scheduler** - Users can view and schedule nodes.

[Learn more about roles and permissions](permission-levels.md).

## Transition from UCP 2.2 access control

This topic is under construction.
{% elsif include.version=="ucp-2.2" %}
## Grant access to Swarm resources
UCP administrators control how subjects (users, teams, organizations) access
resources (collections) by assigning role-based permissions with *grants*.
Administrators can control who views, edits, and uses resources such as nodes,
services, images, networks, and volumes, and they can manage permissions to
enforce fine-grained access control as needed.
## Grants
Grants define which users can access what resources. Grants are effectively
Access Control Lists (ACLs), which, when grouped together, can provide
comprehensive access policies for an entire organization.
A grant is made up of a *subject*, *resource collection*, and *role*.
Administrators are users who create subjects, define collections by labeling
resources, define roles by selecting allowable operations, and apply grants to
users and teams.
## Subjects
A subject represents a user, team, or organization. A subject is granted a
role that defines permitted operations against one or more resources.
- **User**: A person authenticated by the authentication backend. Users can
belong to one or more teams and one or more organizations.
- **Team**: A group of users that share permissions defined at the team level.
  A team exists only as part of an organization, and all of its members
  must be members of the organization. Team members share organization
  permissions. A team can be in one organization only.
- **Organization**: A group of teams that share a specific set of permissions,
defined by the roles of the organization.
## Roles
Roles define what operations can be done by whom against which cluster
resources. A role is a set of permitted API operations against a resource that
is assigned to a user or team with a grant. UCP administrators view and manage
roles by navigating to the **Roles** page.
For example, the built-in role, **Restricted Control**, includes permission to view
and schedule a node (in a granted collection) but not update it.
Most organizations use different roles to assign the right kind of access. A
given team or user may have different roles provided to them depending on what
resource they are accessing.
You can build custom roles to meet your organizational needs or use the following
built-in roles:
- **View Only** - Users can see all cluster resources but not edit or use them.
- **Restricted Control** - Users can view containers and run a shell inside a
container process (with `docker exec`) but not view or edit other resources.
- **Full Control** - Users can perform all operations against granted resources.
- **Scheduler** - Users can view and schedule nodes.
[Learn more about roles and permissions](permission-levels.md).
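Conceptually, each role is just a named set of permitted operations. The
sketch below is illustrative only — the operation names are invented, and the
real UCP roles cover many more API operations than shown:

```python
# Illustrative operation sets; not UCP's real permission lists.
ROLES = {
    "View Only":          {"view"},
    "Scheduler":          {"view", "schedule"},
    "Restricted Control": {"view", "exec", "schedule"},
    "Full Control":       {"view", "exec", "schedule", "update", "remove"},
}

def permitted(role: str, operation: str) -> bool:
    """Check whether a role includes a given operation."""
    return operation in ROLES.get(role, set())

print(permitted("Restricted Control", "update"))  # False
```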
## Resource collections
Docker EE allows you to control access to cluster resources with *collections*.
A collection is a group of resources that you access by specifying a
directory-like path.
Resources that can be placed into a collection include:
- Physical or virtual nodes
- Containers
- Services
- Networks
- Volumes
- Secrets
- Application configs
## Collection architecture
Before grants can be implemented, collections must group resources in a way that
makes sense for an organization.
For example, consider an organization with two application teams, Mobile and
Payments, which share cluster hardware resources but segregate access to their
applications.
> A subject that has access to any level in a collection hierarchy has the
> same access to any collections below it.
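The parent-to-child inheritance described in the note above can be expressed
as a simple path-prefix check. This is an illustrative sketch, not UCP's
implementation:

```python
from pathlib import PurePosixPath

def covers(granted: str, target: str) -> bool:
    """A grant on a collection also covers every collection below it."""
    g = PurePosixPath(granted).parts
    return PurePosixPath(target).parts[:len(g)] == g

print(covers("/prod", "/prod/mobile"))  # True
print(covers("/prod/mobile", "/prod"))  # False
```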
## Role composition
Roles define what operations can be done against cluster resources. Most
organizations use different roles to assign the right kind of access. A given
team or user may have different roles provided to them depending on what
resource they are accessing.
UCP provides default roles and lets you build custom roles. For example, here,
three different roles are used:
- **Full Control** (default role) - Allows users to perform all operations
against cluster resources.
- **View Only** (default role) - Allows users to see all cluster resources but
not edit or delete them.
- **Dev** (custom role) - Allows users to view containers and run a shell inside
a container process (with `docker exec`) but not to view or edit any other
cluster resources.
## Grant composition
The following four grants define the access policy for the entire organization
for this cluster. They tie together the collections that were created, the
default and custom roles, and also teams of users that are in UCP.
![image](../images/access-control-grant-composition.png){: .with-border}
## Access architecture
The resulting access architecture defined by these grants is depicted below.
![image](../images/access-control-collection-architecture.png){: .with-border}
Four teams are given access to cluster resources:
- The `ops` team has `Full Control` against the entire `/prod` collection. It
can deploy, view, edit, and remove applications and application resources.
- The `security` team has the `View Only` role. They can see, but not edit, all
resources in the `/prod` collection.
- The `mobile` team has the `Dev` role against the `/prod/mobile` collection
only. This team can see and `exec` into their own applications, but not the
`payments` applications.
- The `payments` team has the `Dev` role for the `/prod/payments` collection
only.
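Under the stated assumptions (the custom `Dev` role and the `/prod`
collections above), these four grants can be sketched as a small lookup
table. This is an illustrative model, not UCP's API:

```python
# The four grants above, as (subject, role, collection) rows.
GRANTS = [
    ("ops",      "Full Control", "/prod"),
    ("security", "View Only",    "/prod"),
    ("mobile",   "Dev",          "/prod/mobile"),
    ("payments", "Dev",          "/prod/payments"),
]

def roles_for(team: str, collection: str) -> set:
    """Roles a team holds on a collection, honoring parent-path inheritance."""
    return {role for subject, role, path in GRANTS
            if subject == team and (collection + "/").startswith(path + "/")}

print(roles_for("ops", "/prod/mobile"))      # {'Full Control'}
print(roles_for("mobile", "/prod/payments")) # set()
```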
## Transition from UCP 2.1 access control
View File
This topic is under construction.
{% elsif include.version=="ucp-2.2" %}
Docker EE enables controlling access to container resources by using
*collections*. A collection is a group of swarm resources,
like services, containers, volumes, networks, and secrets.
![](../images/collections-and-resources.svg){: .with-border}
Access to collections goes through a directory structure that arranges a
swarm's resources. To assign permissions, administrators create grants
against directory branches.
## Directory paths define access to collections
Access to collections is based on a directory-like structure.
For example, the path to a user's default collection is
`/Shared/Private/<username>`. Every user has a private collection that
has the default permission specified by the UCP administrator.
Each collection has an access label that identifies its path.
For example, the private collection for user "hans" has a label that looks
like this:
```
com.docker.ucp.access.label = /Shared/Private/hans
```
You can nest collections. If a user has a grant against a collection,
the grant applies to all of its child collections.
For a child collection, or for a user who belongs to more than one team,
the system concatenates permissions from multiple roles into an
"effective role" for the user, which specifies the operations that are
allowed against the target.
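The concatenation into an "effective role" amounts to a union of the
operation sets from every role the user holds on the target. An illustrative
sketch with hypothetical operation names:

```python
def effective_role(role_ops):
    """Union the permitted operations from every role held on the target."""
    ops = set()
    for operations in role_ops:
        ops |= operations
    return ops

# A user on two teams, one with a view-only role and one with an exec role:
print(sorted(effective_role([{"view"}, {"view", "exec"}])))  # ['exec', 'view']
```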
## Built-in collections
