Move RBAC articles back to /ucp (#316)

* Move RBAC articles back to /ucp

* Fix TOC for RBAC
Joao Fernandes 2017-12-05 10:57:06 -08:00 committed by Jim Galasyn
parent 2cfe2b8a83
commit 288db22c35
55 changed files with 37 additions and 1355 deletions

View File

@@ -310,40 +310,6 @@ guides:
title: Use the ZFS storage driver
- path: /storage/storagedriver/vfs-driver/
title: Use the VFS storage driver
- sectiontitle: Authorize role-based access
section:
- path: /deploy/rbac/
title: Access control model overview
- sectiontitle: The basics
section:
- path: /deploy/rbac/rbac-basics-create-subjects/
title: Create users and teams
- path: /deploy/rbac/rbac-basics-define-roles/
title: Define roles with permissions
- path: /deploy/rbac/rbac-basics-group-resources/
title: Group cluster resources
- path: /deploy/rbac/rbac-basics-grant-permissions/
title: Grant role-access to resources
- sectiontitle: Tutorials and use cases
section:
- path: /deploy/rbac/rbac-howto-deploy-stateless-app/
title: Deploy stateless app with RBAC
- path: /deploy/rbac/rbac-howto-isolate-volumes/
title: Isolate volumes
- path: /deploy/rbac/rbac-howto-isolate-nodes/
title: Isolate nodes
- path: /deploy/rbac/rbac-howto-orcabank1-standard/
title: Docker EE Standard Use Case
- path: /deploy/rbac/rbac-howto-orcabank2-advanced/
title: Docker EE Advanced Use Case
- sectiontitle: User admin
section:
- path: /deploy/rbac/admin-sync-with-ldap/
title: Synchronize teams with LDAP
- path: /deploy/rbac/admin-recover-password/
title: Reset user passwords
- sectiontitle: Deploy your app in production
section:
- path: /deploy/
@@ -1678,34 +1644,42 @@ manuals:
title: uninstall-ucp
- path: /datacenter/ucp/3.0/reference/cli/upgrade/
title: upgrade
- sectiontitle: Access control
- sectiontitle: Authorize role-based access
section:
- path: /datacenter/ucp/3.0/guides/access-control/
title: Access control model
- path: /datacenter/ucp/3.0/guides/access-control/create-and-manage-users/
title: Create and manage users
- path: /datacenter/ucp/3.0/guides/access-control/create-and-manage-teams/
title: Create and manage teams
- path: /datacenter/ucp/3.0/guides/access-control/deploy-view-only-service/
title: Deploy a service with view-only access across an organization
- path: /datacenter/ucp/3.0/guides/access-control/grant-permissions/
title: Grant permissions to users based on roles
- path: /datacenter/ucp/3.0/guides/access-control/isolate-nodes-between-teams/
title: Isolate swarm nodes to a specific team
- path: /datacenter/ucp/3.0/guides/access-control/isolate-volumes-between-teams/
title: Isolate volumes between two different teams
- path: /datacenter/ucp/3.0/guides/access-control/manage-access-with-collections/
title: Manage access to resources by using collections
- path: /datacenter/ucp/3.0/guides/access-control/access-control-node/
title: Node access control
- path: /datacenter/ucp/3.0/guides/access-control/permission-levels/
title: Permission levels
- path: /datacenter/ucp/3.0/guides/access-control/access-control-design-ee-standard/
title: Access control design with Docker EE Standard
- path: /datacenter/ucp/3.0/guides/access-control/access-control-design-ee-advanced/
title: Access control design with Docker EE Advanced
- path: /datacenter/ucp/3.0/guides/access-control/recover-a-user-password/
title: Recover a user password
- path: /datacenter/ucp/3.0/guides/authorization/
title: Access control model overview
- sectiontitle: The basics
section:
- path: /datacenter/ucp/3.0/guides/authorization/rbac-basics-create-subjects/
title: Create users and teams
- path: /datacenter/ucp/3.0/guides/authorization/rbac-basics-define-roles/
title: Define roles with permissions
- path: /datacenter/ucp/3.0/guides/authorization/rbac-basics-group-resources/
title: Group cluster resources
- path: /datacenter/ucp/3.0/guides/authorization/rbac-basics-grant-permissions/
title: Grant role-access to resources
- sectiontitle: Tutorials and use cases
section:
- path: /datacenter/ucp/3.0/guides/authorization/rbac-howto-deploy-stateless-app/
title: Deploy stateless app with RBAC
- path: /datacenter/ucp/3.0/guides/authorization/rbac-howto-isolate-volumes/
title: Isolate volumes
- path: /datacenter/ucp/3.0/guides/authorization/rbac-howto-isolate-nodes/
title: Isolate nodes
- path: /datacenter/ucp/3.0/guides/authorization/rbac-howto-orcabank1-standard/
title: Docker EE Standard Use Case
- path: /datacenter/ucp/3.0/guides/authorization/rbac-howto-orcabank2-advanced/
title: Docker EE Advanced Use Case
- sectiontitle: User admin
section:
- path: /datacenter/ucp/3.0/guides/authorization/admin-sync-with-ldap/
title: Synchronize teams with LDAP
- path: /datacenter/ucp/3.0/guides/authorization/admin-recover-password/
title: Reset user passwords
- sectiontitle: User guides
section:
- sectiontitle: Access UCP

View File

@@ -1,127 +0,0 @@
---
title: Access control design with Docker EE Advanced
description: Learn how to architect multitenancy by using Docker Enterprise Edition Advanced.
keywords: authorize, authentication, users, teams, groups, sync, UCP, role, access control
---
[Collections and grants](index.md) are strong tools that can be used to control
access and visibility to resources in UCP. The previous tutorial,
[Access Control Design with Docker EE Standard](access-control-design-ee-standard.md),
describes a fictional company called OrcaBank that has designed a resource
access architecture that fits the specific security needs of their organization.
Be sure to go through that tutorial first if you have not already.
In this tutorial, OrcaBank's deployment model becomes more advanced.
Instead of moving developed applications directly into production,
OrcaBank will now deploy apps from their dev cluster to a staging zone of
their production cluster. After applications have passed staging, they will
be moved to production. OrcaBank has very stringent security requirements for
production applications. Its security team recently read a blog post about
DevSecOps and is excited to implement some changes. Production applications
aren't permitted to share any physical infrastructure with non-Production
infrastructure.
In this tutorial, OrcaBank will use Docker EE Advanced features to segment the
scheduling and access control of applications across disparate physical
infrastructure. [Node Access Control](access-control-node.md) with EE Advanced
licensing allows nodes to be placed in different collections so that resources
can be scheduled and isolated on disparate physical or virtual hardware
resources.
## Team access requirements
As in the [Introductory Multitenancy Tutorial](access-control-design-ee-standard.md),
OrcaBank still has three application teams, `payments`, `mobile`, and `db`, that
need to have varying levels of segmentation between them. Their upcoming Access
Control redesign will organize their UCP cluster into two top-level collections,
Staging and Production, which will be completely separate security zones on
separate physical infrastructure.
- `security` should have visibility-only access across all
applications that are in Production. The security team is not
concerned with Staging applications and thus will not have
access to Staging.
- `db` should have the full set of operations against all database
applications that are in Production. `db` does not manage the
databases that are in Staging, which are managed directly by the
application teams.
- `payments` should have the full set of operations to deploy Payments
apps in both Production and Staging and also access some of the shared
services provided by the `db` team.
- `mobile` has the same rights as the `payments` team, with respect to the
Mobile applications.
## Role composition
OrcaBank will use the same roles as in the Introductory Tutorial. An `ops` role
will provide them with the ability to deploy, destroy, and view any kind of
resource. `View Only` will be used by the security team to only view resources
with no edit rights. `View & Use Networks + Secrets` will be used to access
shared resources across collection boundaries, such as the `db` services that
are offered by the `db` collection to the other app teams.
![image](../images/design-access-control-adv-0.png){: .with-border}
## Collection architecture
The previous tutorial had separate collections for each application team.
In this Access Control redesign there will be collections for each zone,
Staging and Production, and also collections within each zone for the
individual applications. Another major change is that Docker nodes will be
segmented themselves so that nodes in Staging are separate from Production
nodes. Within the Production zone every application will also have their own
dedicated nodes.
The resulting collection architecture takes the following tree representation:
```
/
├── System
├── Shared
├── prod
│   ├── db
│   ├── mobile
│   └── payments
└── staging
    ├── mobile
    └── payments
```
## Grant composition
OrcaBank will now grant teams different roles against different collections.
Multiple grants per team are required to provide this kind of access. The
Payments and Mobile teams will each have three grants: one to deploy in
their production zone, one to deploy in their staging zone, and one to
share some resources with the `db` collection.
![image](../images/design-access-control-adv-grant-composition.png){: .with-border}
## OrcaBank access architecture
The resulting access architecture provides the appropriate physical segmentation
between Production and Staging. Applications will be scheduled only on the UCP
Worker nodes in the collection where the application is placed. The production
Mobile and Payments applications use shared resources across collection
boundaries to access the databases in the `/prod/db` collection.
![image](../images/design-access-control-adv-architecture.png){: .with-border}
### DB team
The OrcaBank `db` team is responsible for deploying and managing the full
lifecycle of the databases that are in Production. They have the full set of
operations against all database resources.
![image](../images/design-access-control-adv-db.png){: .with-border}
### Mobile team
The `mobile` team is responsible for deploying their full application stack in
staging. In production they deploy their own applications but utilize the
databases that are provided by the `db` team.
![image](../images/design-access-control-adv-mobile.png){: .with-border}

View File

@@ -1,121 +0,0 @@
---
title: Access control design with Docker EE Standard
description: Learn how to architect multitenancy by using Docker Enterprise Edition Standard.
keywords: authorize, authentication, users, teams, groups, sync, UCP, role, access control
---
[Collections and grants](index.md) are strong tools that can be used to control
access and visibility to resources in UCP. This tutorial describes a fictitious
company named OrcaBank that is designing the access architecture for its two
application teams, Payments and Mobile.
This tutorial introduces many concepts, including collections, grants, centralized
LDAP/AD, and the ability to share resources between different teams
and across collections.
## Team access requirements
OrcaBank has organized their application teams to specialize more and provide
shared services to other applications. A `db` team was created just to manage
the databases that other applications will utilize. Additionally, OrcaBank
recently read a book about DevOps. They have decided that developers should be
able to deploy and manage the lifecycle of their own applications.
- `security` should have visibility-only access across all applications in the
swarm.
- `db` should have the full set of capabilities against all database
applications and their respective resources.
- `payments` should have the full set of capabilities to deploy Payments apps
and also access some of the shared services provided by the `db` team.
- `mobile` has the same rights as the `payments` team, with respect to the
Mobile applications.
## Role composition
OrcaBank will use a combination of default roles and custom roles created
specifically for their use case. They are using the default
`View Only` role to give the security team the ability to see, but not edit, resources.
They created an `ops` role that can perform almost all operations
against all types of resources. They also created the
`View & Use Networks + Secrets` role. This role enables application
DevOps teams to use shared resources provided by other teams. It enables
applications to connect to networks and use secrets that are also used by
`db` containers, but not to see or impact the `db` applications
themselves.
![image](../images/design-access-control-adv-0.png){: .with-border}
## Collection architecture
OrcaBank will also create some collections that fit the organizational structure
of the company. Since all applications will share the same physical resources,
all nodes and applications are placed into collections underneath the `/Shared`
built-in collection.
- `/Shared/payments` hosts all applications and resources for the Payments
applications.
- `/Shared/mobile` hosts all applications and resources for the Mobile
applications.
Some other collections will be created to enable the shared `db` applications.
- `/Shared/db` will be a top-level collection for all `db` resources.
- `/Shared/db/payments` will be specifically for `db` resources providing
service to the Payments applications.
- `/Shared/db/mobile` will do the same for the Mobile applications.
The following grant composition will show that this collection architecture
allows an app team to access shared `db` resources without providing access
to _all_ `db` resources. At the same time _all_ `db` resources will be managed
by a single `db` team.
## LDAP/AD integration
OrcaBank has standardized on LDAP for centralized authentication to help their
identity team scale across all the platforms they manage. LDAP
groups will be mapped directly to UCP teams using UCP's native LDAP/AD
integration, so users can be added to or removed from UCP teams via
LDAP, managed centrally by OrcaBank's identity team. The following
grant composition shows how LDAP groups are mapped to UCP teams.
## Grant composition
Two grants are applied for each application team, allowing each team to fully
manage their own apps in their collection, but also have limited access against
networks and secrets within the `db` collection. This kind of grant composition
provides flexibility to have different roles against different groups of
resources.
![image](../images/design-access-control-adv-1.png){: .with-border}
## OrcaBank access architecture
The resulting access architecture shows applications connecting across
collection boundaries. Multiple grants per team allow Mobile applications and
Databases to connect to the same networks and use the same secrets, so they can
connect with each other through a secure and controlled interface.
Note that these resources are still deployed across the same group of UCP
worker nodes. Node segmentation is discussed in the
[next tutorial](access-control-design-ee-advanced.md).
![image](../images/design-access-control-adv-2.png){: .with-border}
### DB team
The `db` team is responsible for deploying and managing the full lifecycle
of the databases used by the application teams. They have the full set of
operations against all database resources.
![image](../images/design-access-control-adv-3.png){: .with-border}
### Mobile team
The `mobile` team is responsible for deploying their own application stack,
minus the database tier which is managed by the `db` team.
![image](../images/design-access-control-adv-4.png){: .with-border}
## Where to go next
- [Access control design with Docker EE Advanced](access-control-design-ee-advanced.md)

View File

@@ -1,49 +0,0 @@
---
title: Node access control in Docker EE Advanced
description: Learn how to architect node access by using Docker Enterprise Edition Standard.
keywords: authorize, authentication, node, UCP, role, access control
---
The ability to segment scheduling and visibility by node is called
*node access control* and is a feature of Docker EE Advanced. By default,
all nodes that aren't infrastructure nodes (UCP & DTR nodes) belong to a
built-in collection called `/Shared`. By default, all application workloads
in the cluster will get scheduled on nodes in the `/Shared` collection. This
includes users that are deploying in their private collections
(`/Shared/Private/`) and in any other collections under `/Shared`. This is
enabled by a built-in grant that grants every UCP user the `scheduler`
capability against the `/Shared` collection.
Node Access Control works by placing nodes into custom collections outside of
`/Shared`. If the `scheduler` capability is granted via a role to a user or
group of users against a collection, then they will be able to schedule
containers and services on the nodes in that collection. In the following
example, users with the `scheduler` capability against `/collection1` will be
able to schedule applications on the nodes in that collection.
Note that in the collection hierarchy these collections lie outside of the
`/Shared` collection, so users will not have access to them unless explicitly
granted it. Such users will only be able to deploy
applications on the nodes in the built-in `/Shared` collection.
![image](../images/design-access-control-adv-custom-grant.png){: .with-border}
The tree representation of this collection structure looks like this:
```
/
├── Shared
├── System
├── collection1
└── collection2
    ├── sub-collection1
    └── sub-collection2
```
By using collections this way, users, teams, and organizations can be
constrained to the nodes and physical infrastructure that they are able
to deploy on.
## Where to go next
- [Isolate swarm nodes to a specific team](isolate-nodes-between-teams.md)

View File

@@ -1,118 +0,0 @@
---
title: Create and manage teams
description: Learn how to create and manage user permissions, using teams in
your Docker Universal Control Plane cluster.
keywords: authorize, authentication, users, teams, groups, sync, UCP, Docker
---
You can extend the user's default permissions by granting them fine-grained
permissions over resources. You do this by adding the user to a team.
To create a new team, go to the UCP web UI, and navigate to the
**Organizations** page.
![](../images/create-and-manage-teams-1.png){: .with-border}
If you want to put the team in a new organization, click
**Create Organization** and give the new organization a name, like
"engineering". Click **Create** to create it.
In the list, click the organization where you want to create the new team.
Name the team, give it an optional description, and click **Create** to
create a new team.
![](../images/create-and-manage-teams-2.png){: .with-border}
## Add users to a team
You can now add and remove users from the team. In the current organization's
teams list, click the new team, and in the details pane, click **Add Users**.
Choose the users that you want to add to the team, and when you're done, click
**Add Users**.
![](../images/create-and-manage-teams-3.png){: .with-border}
## Enable Sync Team Members
If UCP is configured to sync users with your organization's LDAP directory
server, you have the option to enable syncing the team's members when
creating a new team or when modifying the settings of an existing team.
To sync the team with your organization's LDAP directory, click **Yes**.
[Learn how to configure integration with an LDAP directory](../admin/configure/external-auth/index.md).
Enabling this option expands the form with additional fields for configuring
the sync of team members.
![](../images/create-and-manage-teams-5.png){: .with-border}
There are two methods for matching group members from an LDAP directory:
**Match Group Members**
This option specifies that team members should be synced directly with members
of a group in your organization's LDAP directory. The team's membership will be
synced to match the membership of the group.
| Field | Description |
|:-----------------------|:------------------------------------------------------------------------------------------------------|
| Group DN | This specifies the distinguished name of the group from which to select users. |
| Group Member Attribute | The value of this group attribute corresponds to the distinguished names of the members of the group. |
**Match Search Results**
This option specifies that team members should be synced using a search query
against your organization's LDAP directory. The team's membership will be
synced to match the users in the search results.
| Field | Description |
| :--------------------------------------- | :----------------------------------------------------------------------------------------------------------------------------------------------------- |
| Search Base DN | The distinguished name of the node in the directory tree where the search should start looking for users. |
| Search Filter | The LDAP search filter used to find users. If you leave this field empty, all existing users in the search scope will be added as members of the team. |
| Search subtree instead of just one level | Whether to perform the LDAP search on a single level of the LDAP tree, or search through the full LDAP tree starting at the Base DN. |
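For example, in directories that expose a `memberOf` attribute, such as Active Directory, a search filter like the following matches only the members of a single group. The DN values here are hypothetical and depend entirely on your directory layout:
```
(&(objectClass=person)(memberOf=cn=dev-team,ou=groups,dc=example,dc=com))
```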
**Immediately Sync Team Members**
Select this option to run an LDAP sync operation immediately after saving the
configuration for the team. It may take a moment before the members of the team
are fully synced.
## Manage team permissions
Create a grant to manage the team's permissions.
[Learn how to grant permissions to users based on roles](grant-permissions.md).
In this example, you'll create a collection for the "Data Center" team,
where they can deploy services and resources, and you'll create a grant that
gives the team permission to access the collection.
![](../images/team-grant-diagram.svg){: .with-border}
1. Navigate to the **Organizations & Teams** page.
2. Select **docker-datacenter**, and click **Create Team**. Name the team
"Data Center", and click **Create**.
3. Navigate to the **Collections** page.
4. Click **Create Collection**, name the collection "Data Center Resources",
and click **Create**.
5. Navigate to the **Grants** page, and click **Create Grant**.
6. Find **Swarm** in the collections list, and click **View Children**.
7. Find **Data Center Resources**, and click **Select Collection**.
8. In the left pane, click **Roles** and in the **Role** dropdown, select
**Restricted Control**.
9. In the left pane, click **Subjects** and select the **Organizations**
subject type.
10. In the **Organization** dropdown, select **docker-datacenter**, and in the
**Teams** dropdown, select **Data Center**.
11. Click **Create** to create the grant.
![](../images/create-and-manage-teams-4.png){: .with-border}
In this example, you gave members of the `Data Center` team
`Restricted Control` permissions to create and edit resources in
the `Data Center Resources` collection.
## Where to go next
- [UCP permission levels](permission-levels.md)
- [Isolate volumes between two different teams](isolate-volumes-between-teams.md)
- [Isolate swarm nodes between two different teams](isolate-nodes-between-teams.md)

View File

@@ -1,37 +0,0 @@
---
title: Create and manage users
description: Learn how to create and manage users in your Docker Universal Control
Plane cluster.
keywords: authorize, authentication, users, teams, UCP, Docker
---
Docker Universal Control Plane provides built-in authentication and also
integrates with LDAP directory services. If you want to manage
users and groups from your organization's directory, choose LDAP.
[Learn to integrate with an LDAP directory](../configure/external-auth/index.md).
When using the UCP built-in authentication, you need to create users and
optionally grant them UCP administrator permissions.
Each new user gets a default permission level so that they can access the
swarm.
To create a new user, go to the UCP web UI, and navigate to the
**Users** page.
![](../images/create-users-1.png){: .with-border}
Click the **Create user** button, and fill in the user information.
![](../images/create-users-2.png){: .with-border}
Check the `Is a UCP admin?` option if you want to grant the user permission to
change the swarm configuration and manage grants, roles, and
collections.
Finally, click the **Create** button to create the user.
## Where to go next
* [Create and manage teams](create-and-manage-teams.md)
* [UCP permission levels](permission-levels.md)

View File

@@ -1,100 +0,0 @@
---
title: Deploy a service with view-only access across an organization
description: Create a grant to control access to a service.
keywords: ucp, grant, role, permission, authentication
---
In this example, your organization is granted access to a new resource
collection that contains one service.
1. Create an organization and a team.
2. Create a collection for the view-only service.
3. Create a grant to manage user access to the collection.
![](../images/view-only-access-diagram.svg)
## Create an organization
In this example, you create an organization and a team, and you add one user
who isn't an administrator to the team.
[Learn how to create and manage teams](create-and-manage-teams.md).
1. Log in to UCP as an administrator.
2. Navigate to the **Organizations & Teams** page and click
**Create Organization**. Name the new organization "engineering" and
click **Create**.
3. Click **Create Team**, name the new team "Dev", and click **Create**.
4. Add a non-admin user to the Dev team.
## Create a collection for the service
1. Navigate to the **Collections** page to view all of the resource
collections in the swarm.
2. Find the **Shared** collection and click **View children**.
3. Click **Create collection** and name the collection "View-only services".
4. Click **Create** to create the collection.
![](../images/deploy-view-only-service-1.png)
The `/Shared/View-only services` collection is ready to use for access
control.
## Deploy a service
Currently, the new collection has no resources assigned to it. To access
resources through this collection, deploy a new service and add it to the
collection.
1. Navigate to the **Services** page and create a new service, named
"WordPress".
2. In the **Image** textbox, enter "wordpress:latest". This identifies the
most recent WordPress image in the Docker Store.
3. In the left pane, click **Collection**. The **Swarm** collection appears.
4. Click **View children** to list all of the collections. In **Shared**,
click **View children**, find the **View-only services** collection, and
select it.
5. Click **Create** to add the "WordPress" service to the collection and
deploy it.
![](../images/deploy-view-only-service-3.png)
You're ready to create a grant for controlling access to the "WordPress" service.
## Create a grant
Currently, users who aren't administrators can't access the
`/Shared/View-only services` collection. Create a grant to give the
`engineering` organization view-only access.
1. Navigate to the **Grants** page and click **Create Grant**.
2. In the left pane, click **Collections**, navigate to **/Shared/View-only services**,
and click **Select Collection**.
3. Click **Roles**, and in the dropdown, select **View Only**.
4. Click **Subjects**, and under **Select subject type**, click **Organizations**.
In the dropdown, select **engineering**.
5. Click **Create** to grant permissions to the organization.
![](../images/deploy-view-only-service-4.png)
Everything is in place to show role-based access control in action.
## Verify the user's permissions
Users in the `engineering` organization have view-only access to the
`/Shared/View-only services` collection. You can confirm this by logging in
as a non-admin user in the organization and trying to delete the service.
1. Log in as the user who you assigned to the Dev team.
2. Navigate to the **Services** page and click **WordPress**.
3. In the details pane, confirm that the service's collection is
**/Shared/View-only services**.
![](../images/deploy-view-only-service-2.png)
4. Click the checkbox next to the **WordPress** service, click **Actions**,
and select **Remove**. You get an error message, because the user
doesn't have `Service Delete` access to the collection.
## Where to go next
- [Isolate volumes between two different teams](isolate-volumes-between-teams.md)

View File

@ -1,47 +0,0 @@
---
title: Grant permissions to users based on roles
description: Grant access to swarm resources by using role-based access control.
keywords: ucp, grant, role, permission, authentication, authorization
---
If you're a UCP administrator, you can create *grants* to control how users
and organizations access swarm resources.
![](../images/ucp-grant-model-0.svg){: .with-border}
A grant is made up of a *subject*, a *role*, and a *resource collection*.
A grant defines who (subject) has how much access (role)
to a set of resources (collection). Each grant is a 1:1:1 mapping of
subject, role, collection. For example, you can grant the "Prod Team"
"Restricted Control" permissions for the "/Production" collection.
The usual workflow for creating grants has four steps.
1. Set up your users and teams. For example, you might want three teams,
Dev, QA, and Prod.
2. Organize swarm resources into separate collections that each team uses.
3. Optionally, create custom roles for specific permissions to the Docker API.
4. Grant role-based access to collections for your teams.
![](../images/ucp-grant-model.svg){: .with-border}
## Create a grant
When you have your users, collections, and roles set up, you can create
grants. Administrators create grants on the **Manage Grants** page.
1. Click **Create Grant**. All of the collections in the system are listed.
2. Click **Select** on the collection you want to grant access to.
3. In the left pane, click **Roles** and select a role from the dropdown list.
4. In the left pane, click **Subjects**. Click **All Users** to create a grant
for a specific user, or click **Organizations** to create a grant for an
organization or a team.
5. Select a user, team, or organization and click **Create**.
By default, all new users are placed in the `docker-datacenter` organization.
If you want to apply a grant to all UCP users, create a grant with the
`docker-datacenter` org as a subject.
## Where to go next
- [Isolate volumes between two different teams](isolate-volumes-between-teams.md)

View File

@@ -1,156 +0,0 @@
---
title: Access control model
description: Manage access to containers, services, volumes, and networks by using role-based access control.
keywords: ucp, grant, role, permission, authentication, authorization
---
With Docker Universal Control Plane, you get to control who can create and
edit container resources in your swarm, like services, images, networks,
and volumes. You can grant and manage permissions to enforce fine-grained
access control as needed.
## Grant access to swarm resources
If you're a UCP administrator, you can create *grants* to control how users
and organizations access swarm resources.
A grant is made up of a *subject*, a *role*, and a *resource collection*.
A grant defines who (subject) has how much access (role)
to a set of resources (collection).
[Learn how to grant permissions to users based on roles](grant-permissions.md).
![](../images/ucp-grant-model.svg)
An administrator is a user who can manage grants, subjects, roles, and
collections. An administrator identifies which operations can be performed
against specific resources and who can perform these actions. An administrator
can create and manage role assignments against subjects in the system.
Only an administrator can manage subjects, grants, roles, and collections.
## Subjects
A subject represents a user, team, or organization. A subject is granted a
role for a collection of resources.
- **User**: A person that the authentication backend validates. You can
assign users to one or more teams and one or more organizations.
- **Organization**: A group of users that share a specific set of
permissions, defined by the roles of the organization.
- **Team**: A group of users that share a set of permissions defined in the
team itself. A team exists only as part of an organization, and all of its
members must be members of the organization. Team members share
organization permissions. A team can be in one organization only.
## Roles
A role is a set of permitted API operations that you can assign to a specific
subject and collection by using a grant. UCP administrators view and manage
roles by navigating to the **Roles** page.
[Learn more about roles and permissions](permission-levels.md).
## Resource collections
Docker EE enables controlling access to swarm resources by using
*collections*. A collection is a grouping of swarm cluster resources that you
access by specifying a directory-like path.
Swarm resources that can be placed into a collection include:
- Physical or virtual nodes
- Containers
- Services
- Networks
- Volumes
- Secrets
- Application configs
## Collection architecture
Grants tie together who has which kind of access to what resources. Grants
are effectively ACLs, which, grouped together, can provide comprehensive
access policies for an entire organization. However, before grants can be
implemented, collections need to be designed to group resources in a way that
makes sense for an organization.
The following example shows a potential access policy of an organization.
Consider an organization with two application teams, Mobile and Payments, that
will share cluster hardware resources, but still need to segregate access to the
applications. Collections should be designed to map to the organizational
structure desired, in this case the two application teams. Their collection
architecture for a production UCP cluster might look something like this:
```
prod
├── mobile
└── payments
```
> A subject that has access to any level in a collection hierarchy will have
> that same access to any collections below it.
## Role composition
Roles define what operations can be done against cluster resources. An
organization will likely use several different kinds of roles to give the
right kind of access. A given team or user may have different roles provided
to them depending on what resource they are accessing. There are default roles
provided by UCP and also the ability to build custom roles. In this example
three different roles are used:
- Full Control - This is a default role that provides the full list of
operations against cluster resources.
- View Only - This is also a default role that allows a user to see resources,
but not to edit or delete them.
- Dev - This is not a default role, but a potential custom role. In this
example "Dev" includes the ability to view containers and also `docker exec`.
This allows developers to run a shell inside their container process but not
see or change any other cluster resources.
## Grant composition
The following four grants define the access policy for the entire organization
for this cluster. They tie together the collections that were created, the
default and custom roles, and also teams of users that are in UCP.
![image](../images/access-control-grant-composition.png){: .with-border}
## Access architecture
The resulting access architecture defined by these grants is depicted below.
![image](../images/access-control-collection-architecture.png){: .with-border}
There are four teams that are given access to cluster resources:
- `security` can see, but not edit, all resources shown, as it has `View Only`
access to the entire `/prod` collection.
- `ops` has `Full Control` against the entire `/prod` collection, giving it the
capability to deploy, view, edit, and remove applications and application
resources.
- `mobile` has the `Dev` role against the `/prod/mobile` collection. This team
is able to see and `exec` into their own applications, but will not see any
of the `payments` applications.
- `payments` has the same type of access but for the `/prod/payments` collection.
[See a deeper tutorial on how to design access control architectures.](access-control-design-ee-standard.md)
[Manage access to resources by using collections](manage-access-with-collections.md).
## Transition from UCP 2.1 access control
- Your existing access labels and permissions are migrated automatically
during an upgrade from UCP 2.1.x.
- Unlabeled "user-owned" resources are migrated into the user's private
collection, in `/Shared/Private/<username>`.
- Old access control labels are migrated into `/Shared/Legacy/<labelname>`.
- When deploying a resource, choose a collection instead of an access label.
- Use grants for access control, instead of unlabeled permissions.
## Where to go next
- [Create and manage users](create-and-manage-users.md)
- [Create and manage teams](create-and-manage-teams.md)
- [Deploy a service with view-only access across an organization](deploy-view-only-service.md)
- [Isolate volumes between two different teams](isolate-volumes-between-teams.md)
- [Isolate swarm nodes between two different teams](isolate-nodes-between-teams.md)

View File

@@ -1,174 +0,0 @@
---
title: Isolate swarm nodes to a specific team
description: Create grants that limit access to nodes to specific teams.
keywords: ucp, grant, role, permission, authentication
---
With Docker EE Advanced, you can enable physical isolation of resources
by organizing nodes into collections and granting `Scheduler` access for
different users. To control access to nodes, move them to dedicated collections
where you can grant access to specific users, teams, and organizations.
In this example, a team gets access to a node collection and a resource
collection, and UCP access control ensures that the team members can't view
or use swarm resources that aren't in their collection.
You need a Docker EE Advanced license and at least two worker nodes to
complete this example.
1. Create an `Ops` team and assign a user to it.
2. Create a `/Prod` collection for the team's node.
3. Assign a worker node to the `/Prod` collection.
4. Grant the `Ops` team access to its collection.
![](../images/isolate-nodes-diagram.svg){: .with-border}
## Create a team
In the web UI, navigate to the **Organizations & Teams** page to create a team
named "Ops" in your organization. Add a user who isn't a UCP administrator to
the team.
[Learn to create and manage teams](create-and-manage-teams.md).
## Create a node collection and a resource collection
In this example, the Ops team uses an assigned group of nodes, which it
accesses through a collection. Also, the team has a separate collection
for its resources.
Create two collections: one for the team's worker nodes and another for the
team's resources.
1. Navigate to the **Collections** page to view all of the resource
collections in the swarm.
2. Click **Create collection** and name the new collection "Prod".
3. Click **Create** to create the collection.
4. Find **Prod** in the list, and click **View children**.
5. Click **Create collection**, and name the child collection
"Webserver". This creates a sub-collection for access control.
You've created two new collections. The `/Prod` collection is for the worker
nodes, and the `/Prod/Webserver` sub-collection is for access control to
an application that you'll deploy on the corresponding worker nodes.
## Move a worker node to a collection
By default, worker nodes are located in the `/Shared` collection.
Worker nodes that are running DTR are assigned to the `/System` collection.
To control access to the team's nodes, move them to a dedicated collection.
Move a worker node by changing the value of its access label key,
`com.docker.ucp.access.label`, to a different collection.
1. Navigate to the **Nodes** page to view all of the nodes in the swarm.
2. Click a worker node, and in the details pane, find its **Collection**.
If it's in the `/System` collection, click another worker node,
because you can't move nodes that are in the `/System` collection. By
default, worker nodes are assigned to the `/Shared` collection.
3. When you've found an available node, in the details pane, click
**Configure**.
4. In the **Labels** section, find `com.docker.ucp.access.label` and change
its value from `/Shared` to `/Prod`.
5. Click **Save** to move the node to the `/Prod` collection.
> Docker EE Advanced required
>
> If you don't have a Docker EE Advanced license, you'll get the following
> error message when you try to change the access label:
> **Nodes must be in either the shared or system collection without an advanced license.**
> [Get a Docker EE Advanced license](https://www.docker.com/pricing).
![](../images/isolate-nodes-1.png){: .with-border}
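If you work from the CLI instead, the same move can be sketched with a UCP client bundle and the `docker node update` command. This is a sketch rather than the documented UI flow; it assumes `<node-id>` is the worker you chose above:
```bash
# List the nodes in the swarm to find the worker's ID.
docker node ls
# Move the worker into the /Prod collection by updating its access label.
docker node update --label-add com.docker.ucp.access.label=/Prod <node-id>
```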
## Grant access for a team
You'll need two grants to control access to nodes and container resources:
- Grant the `Ops` team the `Restricted Control` role for the `/Prod/Webserver`
resources.
- Grant the `Ops` team the `Scheduler` role against the nodes in the `/Prod`
collection.
Create two grants for team access to the two collections:
1. Navigate to the **Grants** page and click **Create Grant**.
2. In the left pane, click **Collections**, and in the **Swarm** collection,
click **View Children**.
3. In the **Prod** collection, click **View Children**.
4. In the **Webserver** collection, click **Select Collection**.
5. In the left pane, click **Roles**, and select **Restricted Control**
in the dropdown.
6. Click **Subjects**, and under **Select subject type**, click **Organizations**.
7. Select your organization, and in the **Team** dropdown, select **Ops**.
8. Click **Create** to grant the Ops team access to the `/Prod/Webserver`
collection.
The same steps apply for the nodes in the `/Prod` collection.
1. Navigate to the **Grants** page and click **Create Grant**.
2. In the left pane, click **Collections**, and in the **Swarm** collection,
click **View Children**.
3. In the **Prod** collection, click **Select Collection**.
4. In the left pane, click **Roles**, and in the dropdown, select **Scheduler**.
5. In the left pane, click **Subjects**, and under **Select subject type**, click
**Organizations**.
6. Select your organization, and in the **Team** dropdown, select **Ops**.
7. Click **Create** to grant the Ops team `Scheduler` access to the nodes in the
`/Prod` collection.
![](../images/isolate-nodes-2.png){: .with-border}
## Deploy a service as a team member
Your swarm is ready to show role-based access control in action. When a user
deploys a service, UCP assigns its resources to the user's default collection.
From the target collection of a resource, UCP walks up the ancestor collections
until it finds nodes that the user has `Scheduler` access to. In this example,
UCP assigns the user's service to the `/Prod/Webserver` collection and schedules
tasks on nodes in the `/Prod` collection.
As a user on the Ops team, set your default collection to `/Prod/Webserver`.
1. Log in as a user on the Ops team.
2. Navigate to the **Collections** page, and in the **Prod** collection,
click **View Children**.
3. In the **Webserver** collection, click the **More Options** icon and
select **Set to default**.
Deploy a service automatically to worker nodes in the `/Prod` collection.
All resources are deployed under the user's default collection,
`/Prod/Webserver`, and the containers are scheduled only on the nodes under
`/Prod`.
1. Navigate to the **Services** page, and click **Create Service**.
2. Name the service "NGINX", use the "nginx:latest" image, and click
**Create**.
3. When the **nginx** service status is green, click the service. In the
details view, click **Inspect Resource**, and in the dropdown, select
**Containers**.
4. Click the **NGINX** container, and in the details pane, confirm that its
**Collection** is **/Prod/Webserver**.
![](../images/isolate-nodes-3.png){: .with-border}
5. Click **Inspect Resource**, and in the dropdown, select **Nodes**.
6. Click the node, and in the details pane, confirm that its **Collection**
is **/Prod**.
![](../images/isolate-nodes-4.png){: .with-border}
## Alternative: Use a grant instead of the default collection
Another approach is to use a grant instead of changing the user's default
collection. An administrator can create a grant for a role that has the
`Service Create` permission against the `/Prod/Webserver` collection or a child
collection. In this case, the user sets the value of the service's access label,
`com.docker.ucp.access.label`, to the new collection or one of its children
that has a `Service Create` grant for the user.
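As a minimal sketch of this alternative, assuming the user has a UCP client bundle and a grant with `Service Create` against `/Prod/Webserver`, and with an illustrative service name and image:
```bash
docker service create \
  --name nginx \
  --label com.docker.ucp.access.label=/Prod/Webserver \
  nginx:latest
```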
## Where to go next
- [Node access control in Docker EE Advanced](access-control-node.md)
- [Isolate volumes between two different teams](isolate-volumes-between-teams.md)
- [Deploy a service with view-only access across an organization](deploy-view-only-service.md)

View File

@@ -1,97 +0,0 @@
---
title: Isolate volumes between two different teams
description: Create grants that limit access to volumes to specific teams.
keywords: ucp, grant, role, permission, authentication
---
In this example, two teams are granted access to volumes in two different
resource collections. UCP access control prevents the teams from viewing and
accessing each other's volumes, even though they may be located on the same
nodes.
1. Create two teams.
2. Create two collections, one for each team.
3. Create grants to manage access to the collections.
4. Team members create volumes that are specific to their team.
![](../images/isolate-volumes-diagram.svg){: .with-border}
## Create two teams
Navigate to the **Organizations & Teams** page to create two teams in your
organization, named "Dev" and "Prod". Add a user who's not a UCP administrator
to the Dev team, and add another non-admin user to the Prod team.
[Learn how to create and manage teams](create-and-manage-teams.md).
## Create resource collections
In this example, the Dev and Prod teams use two different volumes, which they
access through two corresponding resource collections. The collections are
placed under the `/Shared` collection.
1. In the left pane, click **Collections** to show all of the resource
collections in the swarm.
2. Find the **/Shared** collection and click **View children**.
3. Click **Create collection** and name the new collection "dev-volumes".
4. Click **Create** to create the collection.
5. Click **Create collection** again, name the new collection "prod-volumes",
and click **Create**.
## Create grants for controlling access to the new volumes
In this example, the Dev team gets access to its volumes from a grant that
associates the team with the `/Shared/dev-volumes` collection, and the Prod
team gets access to its volumes from another grant that associates the team
with the `/Shared/prod-volumes` collection.
1. Navigate to the **Grants** page and click **Create Grant**.
2. In the left pane, click **Collections**, and in the **Swarm** collection,
click **View Children**.
3. In the **Shared** collection, click **View Children**.
4. In the list, find **/Shared/dev-volumes** and click **Select Collection**.
5. Click **Roles**, and in the dropdown, select **Restricted Control**.
6. Click **Subjects**, and under **Select subject type**, click **Organizations**.
In the dropdown, pick your organization, and in the **Team** dropdown,
select **Dev**.
7. Click **Create** to grant permissions to the Dev team.
8. Click **Create Grant** and repeat the previous steps for the **/Shared/prod-volumes**
collection and the Prod team.
![](../images/isolate-volumes-1.png){: .with-border}
With the collections and grants in place, users can sign in and create volumes
in their assigned collections.
## Create a volume as a team member
Team members have permission to create volumes in their assigned collection.
1. Log in as one of the users on the Dev team.
2. Navigate to the **Volumes** page to view all of the volumes in the swarm
that the user can access.
3. Click **Create volume** and name the new volume "dev-data".
4. In the left pane, click **Collections**. The default collection appears.
At the top of the page, click **Shared**, find the **dev-volumes**
collection in the list, and click **Select Collection**.
5. Click **Create** to add the "dev-data" volume to the collection.
6. Log in as one of the users on the Prod team, and repeat the previous steps
to create a "prod-data" volume assigned to the `/Shared/prod-volumes`
collection.
![](../images/isolate-volumes-2.png){: .with-border}
Now you can see role-based access control in action for volumes. The user on
the Prod team can't see the Dev team's volumes, and if you log in again as a
user on the Dev team, you won't see the Prod team's volumes.
![](../images/isolate-volumes-3.png){: .with-border}
Sign in with a UCP administrator account, and you see all of the volumes
created by the Dev and Prod users.
![](../images/isolate-volumes-4.png){: .with-border}
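The volumes can also be created from the CLI. A minimal sketch, assuming each user runs the command through their own UCP client bundle; the volume names match the ones used above:
```bash
# As a member of the Dev team:
docker volume create --label com.docker.ucp.access.label=/Shared/dev-volumes dev-data
# As a member of the Prod team:
docker volume create --label com.docker.ucp.access.label=/Shared/prod-volumes prod-data
```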
## Where to go next
- [Isolate swarm nodes to a specific team](isolate-nodes-between-teams.md)

View File

@ -1,148 +0,0 @@
---
title: Manage access to resources by using collections
description: Use collections to enable access control for worker nodes and container resources.
keywords: ucp, grant, role, permission, authentication, resource collection
---
Docker EE enables controlling access to container resources by using
*collections*. A collection is a group of swarm resources,
like services, containers, volumes, networks, and secrets.
![](../images/collections-and-resources.svg){: .with-border}
Access to collections goes through a directory structure that arranges a
swarm's resources. To assign permissions, administrators create grants
against directory branches.
## Directory paths define access to collections
Access to collections is based on a directory-like structure.
For example, the path to a user's default collection is
`/Shared/Private/<username>`. Every user has a private collection that
has the default permission specified by the UCP administrator.
Each collection has an access label that identifies its path.
For example, the private collection for user "hans" has a label that looks
like this:
```
com.docker.ucp.access.label = /Shared/Private/hans
```
You can nest collections. If a user has a grant against a collection,
the grant applies to all of its child collections.
For a child collection, or for a user who belongs to more than one team,
the system combines permissions from multiple roles into an
"effective role" for the user, which specifies the operations that are
allowed against the target.
## Built-in collections
UCP provides a number of built-in collections.
- `/` - The path to the `Swarm` collection. All resources in the
cluster are here. Resources that aren't in a collection are assigned
to the `/` directory.
- `/System` - The system collection, which contains UCP managers, DTR nodes,
and UCP/DTR system services. By default, only admins have access to the
system collection, but you can change this.
- `/Shared` - All worker nodes are here by default, for scheduling.
In a system with a standard-tier license, all worker nodes are under
the `/Shared` collection. With the EE Advanced license, administrators
can move worker nodes to other collections and apply role-based access.
- `/Shared/Private` - User private collections are stored here.
- `/Shared/Legacy` - After updating from UCP 2.1, all legacy access control
labels are stored here.
![](../images/collections-diagram.svg){: .with-border}
This diagram shows the `/System` and `/Shared` collections that are created
by UCP. User private collections are children of the `/Shared/Private`
collection. Also, an admin user has created a `/prod` collection and its
`/webserver` child collection.
## Default collections
A user always has a default collection. The user can select the default
in UI preferences. When a user deploys a resource in the web UI, the
preselected option is the default collection, but this can be changed.
Users can't deploy a resource without a collection. When deploying a
resource in the CLI without an access label, UCP automatically places the
resource in the user's default collection.
[Learn how to add labels to cluster nodes](../admin/configure/add-labels-to-cluster-nodes/).
When using Docker Compose, the system applies default collection labels
across all resources in the stack, unless the `com.docker.ucp.access.label`
has been set explicitly.
> Default collections and collection labels
>
> Setting a default collection is most helpful for users who deploy stacks
> and don't want to edit the contents of their compose files. Also, setting
> a default collection is useful for users who work only on a well-defined
> slice of the system. On the other hand, setting the collection label for
> every resource works best for users who have versatile roles in the system,
> like administrators.
## Collections and labels
Resources are marked as being in a collection by using labels.
Some resource types don't have editable labels, so you can't move such
resources across collections. You can't modify collections after
resource creation for containers, networks, and volumes, but you can
update labels for services, nodes, secrets, and configs.
For editable resources, like services, secrets, nodes, and configs,
you can change the `com.docker.ucp.access.label` to move resources to
different collections. With the CLI, you can use this label to deploy
resources to a collection other than your default collection. Omitting this
label on the CLI deploys a resource on the user's default resource collection.
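For example, here is a sketch of deploying a network into an explicit collection rather than the default one. The `/prod/mobile` collection path and the network name are assumptions for illustration:
```bash
docker network create \
  --label com.docker.ucp.access.label=/prod/mobile \
  mobile-backend
```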
The system uses the additional labels, `com.docker.ucp.collection.*`, to enable
efficient resource lookups. By default, nodes have the
`com.docker.ucp.collection.root`, `com.docker.ucp.collection.shared`, and
`com.docker.ucp.collection.swarm` labels set to `true`. UCP automatically
controls these labels, and you don't need to manage them.
Collections get generic default names, but you can give them meaningful names,
like "Dev", "Test", and "Prod".
A *stack* is a group of resources identified by a label. You can place the
stack's resources in multiple collections. Resources are placed in the user's
default collection unless you specify an explicit `com.docker.ucp.access.label`
within the stack/compose file.
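As an illustration, a compose file for a stack might pin one service to an explicit collection while another falls back to the deployer's default collection. This is a sketch; the service names and collection path are assumptions:
```yaml
version: "3.3"
services:
  web:
    image: nginx:latest
    deploy:
      labels:
        # Place this service in an explicit collection.
        com.docker.ucp.access.label: /Shared/mobile
  cache:
    image: redis:latest
    # No access label: this service lands in the deployer's default collection.
```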
## Control access to nodes
The Docker EE Advanced license enables access control on worker nodes. Admin
users can move worker nodes from the default `/Shared` collection into other
collections and create corresponding grants for scheduling tasks.
In this example, an administrator has moved worker nodes to a `/prod`
collection:
![](../images/containers-and-nodes-diagram.svg)
When you deploy a resource with a collection, UCP sets a constraint implicitly
based on what nodes the collection, and any ancestor collections, can access.
The `Scheduler` role allows users to deploy resources on a node.
By default, all users have the `Scheduler` role against the `/Shared`
collection.
When deploying a resource that isn't global, like local volumes, bridge
networks, containers, and services, the system identifies a set of
"schedulable nodes" for the user. The system identifies the target collection
of the resource, like `/Shared/Private/hans`, and it tries to find the parent
that's closest to the root that the user has the `Node Schedule` permission on.
For example, when a user with a default configuration runs `docker container run nginx`,
the system interprets this to mean, "Create an NGINX container under the
user's default collection, which is at `/Shared/Private/hans`, and deploy it
on one of the nodes under `/Shared`."
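To target a different collection at creation time, the user can set the access label explicitly, provided they hold the `Node Schedule` permission somewhere on that path. A sketch, with `/prod/mobile` as an assumed collection:
```bash
docker container run -d \
  --label com.docker.ucp.access.label=/prod/mobile \
  nginx
```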
If you want to isolate nodes against other teams, place these nodes in
new collections, and assign the `Scheduler` role, which contains the
`Node Schedule` permission, to the team.
[Isolate swarm nodes to a specific team](isolate-nodes-between-teams.md).

View File

@ -1,79 +0,0 @@
---
description: Learn about the permission levels available in Docker Universal
Control Plane.
keywords: authorization, authentication, users, teams, UCP
title: Roles and permission levels
---
Docker Universal Control Plane has two types of users: administrators and
regular users. Administrators can make changes to the UCP swarm, while
regular users have permissions that range from no access to full control over
resources like volumes, networks, images, and containers. Users are
grouped into teams and organizations.
![Diagram showing UCP permission levels](../images/role-diagram.svg)
Administrators create *grants* to users, teams, and organizations to give
permissions to swarm resources.
## Administrator users
In Docker UCP, only users with administrator privileges can make changes to
swarm settings. This includes:
* Managing user permissions by creating grants.
* Managing swarm configurations, like adding and removing nodes.
## Roles
A role is a set of permitted API operations on a collection that you
can assign to a specific user, team, or organization by using a grant.
UCP administrators view and manage roles by navigating to the **Roles** page.
The system provides the following default roles:
| Built-in role | Description |
|----------------------|-------------|
| `None` | The user has no access to swarm resources. This maps to the `No Access` role in UCP 2.1.x. |
| `View Only` | The user can view resources like services, volumes, and networks but can't create them. |
| `Restricted Control` | The user can view and edit volumes, networks, and images but can't run a service or container in a way that might affect the node where it's running. The user can't mount a node directory and can't `exec` into containers. Also, the user can't run containers in privileged mode or with additional kernel capabilities. |
| `Scheduler` | The user can view nodes and schedule workloads on them. Worker nodes and manager nodes are affected by `Scheduler` grants. Having `Scheduler` access doesn't allow the user to view workloads on these nodes. They need the appropriate resource permissions, like `Container View`. By default, all users get a grant with the `Scheduler` role against the `/Shared` collection. |
| `Full Control` | The user can view and edit volumes, networks, and images. They can create containers without any restriction but can't see other users' containers. |
![Diagram showing UCP permission levels](../images/permissions-ucp.svg)
Administrators can create a custom role with Docker API permissions
that specify exactly which API actions a subject may perform.
The **Roles** page lists the available roles, including the default roles
and any custom roles that administrators have created. In the **Roles**
list, click a role to see the API operations that it uses. For example, the
`Scheduler` role has two of the node operations, `Schedule` and `View`.
## Create a custom role
Click **Create role** to create a custom role and define the API operations
that it uses. When you create a custom role, all of the APIs that you can use
are listed on the **Create Role** page. For example, you can create a custom
role that uses the node operations, `Schedule`, `Update`, and `View`, and you
might give it a name like "Node Operator".
![](../images/custom-role.png){: .with-border}
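Role creation is a web UI workflow, but the same idea could, in principle,
be scripted against the UCP HTTP API. The endpoint path and payload shape
below are assumptions for illustration, not documented guarantees:

```none
# Hypothetical sketch: create a "Node Operator" role via the UCP API.
# The /roles endpoint and JSON shape are assumptions; $AUTHTOKEN is a
# session token obtained separately.
curl -sk -X POST "https://ucp.example.com/roles" \
  -H "Authorization: Bearer $AUTHTOKEN" \
  -H "Content-Type: application/json" \
  -d '{"name": "Node Operator",
       "operations": {"Node": ["Schedule", "Update", "View"]}}'
```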
You can give a role a global name, like "Remove Images", which might enable
the **Remove** and **Force Remove** operations for images. You can apply a
role with the same name to different collections.
Only an administrator can create and remove roles. Roles are always enabled.
Roles can't be edited, so to change a role's API operations, you must delete it
and create it again.
You can't delete a custom role if it's used in a grant. You must first delete
the grants that use the role.
## Where to go next
* [Create and manage users](create-and-manage-users.md)
* [Create and manage teams](create-and-manage-teams.md)
* [Docker Reference Architecture: Securing Docker EE and Security Best Practices](https://success.docker.com/Architecture/Docker_Reference_Architecture%3A_Securing_Docker_EE_and_Security_Best_Practices)
@@ -1,32 +0,0 @@
---
title: Reset a user password
description: Learn how to recover your Docker Datacenter credentials
keywords: ucp, authentication
---
If you have administrator credentials to UCP, you can reset the password of
other users.
If the user is managed through an LDAP service, you must change the
password on that system. If the user account is managed by UCP,
log in with administrator credentials to the UCP web UI, navigate to
the **Users** page, and choose the user whose password you want to change.
In the details pane, click **Configure** and select **Security** from the
dropdown.
![](../images/recover-a-user-password-1.png){: .with-border}
Update the user's password and click **Save**.
If you're an administrator and forgot your password, you can ask another
user with administrator credentials to change it for you.
If you're the only administrator, use **ssh** to log in to a manager
node managed by UCP, and run:
```none
{% raw %}
# Run the enzi tool inside the ucp-auth-api container. The embedded
# docker inspect extracts the container's --db-addr=... argument so
# that enzi can reach the authentication database; passwd -i then
# sets the password interactively.
docker exec -it ucp-auth-api enzi \
$(docker inspect --format '{{range .Args}}{{if eq "--db-addr=" (printf "%.10s" .)}}{{.}}{{end}}{{end}}' ucp-auth-api) \
passwd -i
{% endraw %}
```
@@ -163,7 +163,7 @@ All resources are deployed under the user's default collection,
4. Click the **NGINX** container, and in the details pane, confirm that its
**Collection** is **/Prod/Webserver**.
![](../../images/isolate-nodes-3.png){: .with-border}
![](../images/isolate-nodes-3.png){: .with-border}
5. Click **Inspect Resource**, and in the dropdown, select **Nodes**.
6. Click the node, and in the details pane, confirm that its **Collection**
@@ -328,7 +328,7 @@ All resources are deployed under the user's default collection,
4. Click the **NGINX** container, and in the details pane, confirm that its
**Collection** is **/Prod/Webserver**.
![](../../images/isolate-nodes-3.png){: .with-border}
![](../images/isolate-nodes-3.png){: .with-border}
5. Click **Inspect Resource**, and in the dropdown, select **Nodes**.
6. Click the node, and in the details pane, confirm that its **Collection**