From 439643a2dc0439006c6f6481c7b5b79399e8ed5b Mon Sep 17 00:00:00 2001 From: ddeyo Date: Tue, 18 Sep 2018 13:48:34 -0700 Subject: [PATCH] authorization changes from jekyll --- .../_site/create-teams-with-ldap.html | 55 +++ .../create-users-and-teams-manually.html | 106 ++++++ ee/ucp/authorization/_site/define-roles.html | 77 +++++ .../_site/deploy-stateless-app.html | 191 +++++++++++ ee/ucp/authorization/_site/ee-advanced.html | 137 ++++++++ ee/ucp/authorization/_site/ee-standard.html | 137 ++++++++ .../_site/grant-permissions.html | 77 +++++ .../authorization/_site/group-resources.html | 136 ++++++++ ee/ucp/authorization/_site/index.html | 110 ++++++ ee/ucp/authorization/_site/isolate-nodes.html | 315 ++++++++++++++++++ .../authorization/_site/isolate-volumes.html | 100 ++++++ .../_site/migrate-kubernetes-roles.html | 122 +++++++ ee/ucp/authorization/_site/pull-images.html | 30 ++ .../_site/reset-user-password.html | 24 ++ 14 files changed, 1617 insertions(+) create mode 100644 ee/ucp/authorization/_site/create-teams-with-ldap.html create mode 100644 ee/ucp/authorization/_site/create-users-and-teams-manually.html create mode 100644 ee/ucp/authorization/_site/define-roles.html create mode 100644 ee/ucp/authorization/_site/deploy-stateless-app.html create mode 100644 ee/ucp/authorization/_site/ee-advanced.html create mode 100644 ee/ucp/authorization/_site/ee-standard.html create mode 100644 ee/ucp/authorization/_site/grant-permissions.html create mode 100644 ee/ucp/authorization/_site/group-resources.html create mode 100644 ee/ucp/authorization/_site/index.html create mode 100644 ee/ucp/authorization/_site/isolate-nodes.html create mode 100644 ee/ucp/authorization/_site/isolate-volumes.html create mode 100644 ee/ucp/authorization/_site/migrate-kubernetes-roles.html create mode 100644 ee/ucp/authorization/_site/pull-images.html create mode 100644 ee/ucp/authorization/_site/reset-user-password.html diff --git a/ee/ucp/authorization/_site/create-teams-with-ldap.html 
b/ee/ucp/authorization/_site/create-teams-with-ldap.html new file mode 100644 index 0000000000..2868eee2d1 --- /dev/null +++ b/ee/ucp/authorization/_site/create-teams-with-ldap.html @@ -0,0 +1,55 @@ +

To enable LDAP in UCP and sync to your LDAP directory:

+ +
  1. Click Admin Settings under your username drop-down.
  2. Click Authentication & Authorization.
  3. Scroll down and click Yes by LDAP Enabled. A list of LDAP settings displays.
  4. Input values to match your LDAP server installation.
  5. Test your configuration in UCP.
  6. Manually create teams in UCP to mirror those in LDAP.
  7. Click Sync Now.
+ +

If Docker EE is configured to sync users with your organization’s LDAP directory +server, you can enable syncing the new team’s members when creating a new team +or when modifying settings of an existing team.

+ +

For more, see: Integrate with an LDAP Directory.

+ +

+ +

Binding to the LDAP server

+ +

There are two methods for matching group members from an LDAP directory, direct +bind and search bind.

+ +

Select Immediately Sync Team Members to run an LDAP sync operation +immediately after saving the configuration for the team. It may take a moment +before the members of the team are fully synced.

+ +

Match Group Members (Direct Bind)

+ +

This option specifies that team members should be synced directly with members of a group in your organization’s LDAP directory. The team’s membership will be synced to match the membership of the group.

+ + + +

Match Search Results (Search Bind)

+ +

This option specifies that team members should be synced using a search query +against your organization’s LDAP directory. The team’s membership will be +synced to match the users in the search results.

+ + diff --git a/ee/ucp/authorization/_site/create-users-and-teams-manually.html b/ee/ucp/authorization/_site/create-users-and-teams-manually.html new file mode 100644 index 0000000000..34e4790ce4 --- /dev/null +++ b/ee/ucp/authorization/_site/create-users-and-teams-manually.html @@ -0,0 +1,106 @@ +

Users, teams, and organizations are referred to as subjects in Docker EE.

+ +

Individual users can belong to one or more teams but each team can only be in +one organization. At the fictional startup, Acme Company, all teams in the +organization are necessarily unique but the user, Alex, is on two teams:

+ +
acme-datacenter
+├── dba
+│   └── Alex*
+├── dev
+│   └── Bett
+└── ops
+    ├── Alex*
+    └── Chad
+
+ +

Authentication

+ +

All users are authenticated on the backend. Docker EE provides built-in +authentication and also integrates with LDAP directory services.

+ +

To use Docker EE’s built-in authentication, you must create users manually.

+ +
+

To enable LDAP and authenticate and synchronize UCP users and teams with your +organization’s LDAP directory, see:

+ +
+ +

Build an organization architecture

+ +

The general flow of designing an organization with teams in UCP is:

+ +
  1. Create an organization.
  2. Add users or enable LDAP (for syncing users).
  3. Create teams under the organization.
  4. Add users to teams manually or sync with LDAP.
+ +

Create an organization with teams

+ +

To create an organization in UCP:

+ +
  1. Click Organization & Teams under User Management.
  2. Click Create Organization.
  3. Input the organization name.
  4. Click Create.
+ +

To create teams in the organization:

+ +
  1. Click the organization name.
  2. Click Create Team.
  3. Input a team name (and description).
  4. Click Create.
  5. Add existing users to the team. To sync LDAP users, see: Integrate with an LDAP Directory.
    • Click the team name and select Actions > Add Users.
    • Check the users to include and click Add Users.
+ +
+

Note: To sync teams with groups in an LDAP server, see Sync Teams with LDAP.

+
+ +

Create users manually

+ +

New users are assigned a default permission level so that they can access the +cluster. To extend a user’s default permissions, add them to a team and create grants. You can optionally grant them Docker EE +administrator permissions.

+ +

To manually create users in UCP:

+ +
  1. Click Users under User Management.
  2. Click Create User.
  3. Input username, password, and full name.
  4. Click Create.
  5. Optionally, check “Is a Docker EE Admin” to give the user administrator privileges.
+ +
+

A Docker EE Admin can grant users permission to change the cluster +configuration and manage grants, roles, and resource sets.

+
+ +

+

+ +

Where to go next

+ + diff --git a/ee/ucp/authorization/_site/define-roles.html b/ee/ucp/authorization/_site/define-roles.html new file mode 100644 index 0000000000..7fe6660603 --- /dev/null +++ b/ee/ucp/authorization/_site/define-roles.html @@ -0,0 +1,77 @@ +

A role defines a set of API operations permitted against a resource set. +You apply roles to users and teams by creating grants.

+ +

Diagram showing UCP permission levels

+ +

Default roles

+ +

You can define custom roles or use the following built-in roles:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Built-in roleDescription
NoneUsers have no access to Swarm or Kubernetes resources. Maps to No Access role in UCP 2.1.x.
View OnlyUsers can view resources but can’t create them.
Restricted ControlUsers can view and edit resources but can’t run a service or container in a way that affects the node where it’s running. Users cannot mount a node directory, exec into containers, or run containers in privileged mode or with additional kernel capabilities.
SchedulerUsers can view nodes (worker and manager) and schedule (not view) workloads on these nodes. By default, all users are granted the Scheduler role against the /Shared collection. (To view workloads, users need permissions such as Container View).
Full ControlUsers can view and edit all granted resources. They can create containers without any restriction, but can’t see the containers of other users.
+ +

Create a custom role

+ +

The Roles page lists all default and custom roles applicable in the +organization.

+ +

You can give a role a global name, such as “Remove Images”, which might enable the +Remove and Force Remove operations for images. You can apply a role with +the same name to different resource sets.

+ +
  1. Click Roles under User Management.
  2. Click Create Role.
  3. Input the role name on the Details page.
  4. Click Operations. All available API operations are displayed.
  5. Select the permitted operations per resource type.
  6. Click Create.
+ +

+ +
+

Some important rules regarding roles:

+ +
+ +

Where to go next

+ + diff --git a/ee/ucp/authorization/_site/deploy-stateless-app.html b/ee/ucp/authorization/_site/deploy-stateless-app.html new file mode 100644 index 0000000000..eaafe2dcfa --- /dev/null +++ b/ee/ucp/authorization/_site/deploy-stateless-app.html @@ -0,0 +1,191 @@ +

This tutorial explains how to deploy an NGINX web server and limit access to one team with role-based access control (RBAC).

+ +

Scenario

+ +

You are the Docker EE system administrator at Acme Company and need to configure +permissions to company resources. The best way to do this is to:

+ + + +

Build the organization

+ +

Add the organization, acme-datacenter, and create three teams according to the +following structure:

+ +
acme-datacenter
+├── dba
+│   └── Alex Alutin
+├── dev
+│   └── Bett Bhatia
+└── ops
+    └── Chad Chavez
+
+ +

Learn to create and configure users and teams.

+ +

Kubernetes deployment

+ +

In this section, we deploy NGINX with Kubernetes. See Swarm stack +for the same exercise with Swarm.

+ +

Create namespace

+ +

Create a namespace to logically store the NGINX application:

+ +
  1. Click Kubernetes > Namespaces.
  2. Paste the following manifest in the terminal window and click Create.
+ +
apiVersion: v1
+kind: Namespace
+metadata:
+  name: nginx-namespace
+
+ +

Define roles

+ +

You can use the built-in roles or define your own. For this exercise, create a +simple role for the ops team:

+ +
  1. Click Roles under User Management.
  2. Click Create Role.
  3. On the Details tab, name the role Kube Deploy.
  4. On the Operations tab, check all Kubernetes Deployment Operations.
  5. Click Create.
+ +

Learn to create and configure users and teams.

+ +

Grant access

+ +

Grant the ops team (and only the ops team) access to nginx-namespace with the +custom role, Kube Deploy.

+ +
acme-datacenter/ops + Kube Deploy + nginx-namespace
+
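
UCP manages this grant through its own UI, but conceptually it resembles a plain Kubernetes RoleBinding scoped to the namespace. This is a hypothetical sketch for intuition only; the binding and subject names are illustrative, and UCP generates its own access-control objects internally:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kube-deploy-ops        # hypothetical name
  namespace: nginx-namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kube-deploy            # a Role permitting Deployment operations
subjects:
- kind: Group
  name: "acme-datacenter:ops"  # illustrative mapping of the ops team
  apiGroup: rbac.authorization.k8s.io
```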
+ +

Deploy NGINX

+ +

You’ve configured Docker EE. The ops team can now deploy nginx.

+ +
  1. Log on to UCP as “chad” (on the ops team).
  2. Click Kubernetes > Namespaces.
  3. Paste the following manifest in the terminal window and click Create.
+ +
apiVersion: apps/v1beta2  # Use apps/v1beta1 for versions < 1.8.0
+kind: Deployment
+metadata:
+  name: nginx-deployment
+spec:
+  selector:
+    matchLabels:
+      app: nginx
+  replicas: 2
+  template:
+    metadata:
+      labels:
+        app: nginx
+    spec:
+      containers:
+      - name: nginx
+        image: nginx:latest
+        ports:
+        - containerPort: 80
+
+ +
  1. Log on to UCP as each user and ensure that:
    • dba (alex) can’t see nginx-namespace.
    • dev (bett) can’t see nginx-namespace.
+ +

Swarm stack

+ +

In this section, we deploy nginx as a Swarm service. See Kubernetes Deployment +for the same exercise with Kubernetes.

+ +

Create collection paths

+ +

Create a collection for NGINX resources, nested under the /Shared collection:

+ +
/
+├── System
+└── Shared
+    └── nginx-collection
+
+ +
+

Tip: To drill into a collection, click View Children.

+
+ +

Learn to group and isolate cluster resources.

+ +

Define roles

+ +

You can use the built-in roles or define your own. For this exercise, create a +simple role for the ops team:

+ +
  1. Click Roles under User Management.
  2. Click Create Role.
  3. On the Details tab, name the role Swarm Deploy.
  4. On the Operations tab, check all Service Operations.
  5. Click Create.
+ +

Learn to create and configure users and teams.

+ +

Grant access

+ +

Grant the ops team (and only the ops team) access to nginx-collection with +the built-in role, Swarm Deploy.

+ +
acme-datacenter/ops + Swarm Deploy + /Shared/nginx-collection
+
+ +

Learn to grant role-access to cluster resources.

+ +

Deploy NGINX

+ +

You’ve configured Docker EE. The ops team can now deploy an nginx Swarm +service.

+ +
  1. Log on to UCP as chad (on the ops team).
  2. Click Swarm > Services.
  3. Click Create Stack.
  4. On the Details tab, enter:
    • Name: nginx-service
    • Image: nginx:latest
  5. On the Collections tab:
    • Click /Shared in the breadcrumbs.
    • Select nginx-collection.
  6. Click Create.
  7. Log on to UCP as each user and ensure that:
    • dba (alex) cannot see nginx-collection.
    • dev (bett) cannot see nginx-collection.
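
The same stack can be sketched as a compose file. This example is not part of the guide; it assumes the access-control label key described elsewhere in these docs (com.docker.ucp.access.label) is applied as a service deploy label to place the service in the collection:

```yaml
version: "3.3"
services:
  nginx-service:
    image: nginx:latest
    deploy:
      replicas: 2                # replica count is illustrative
      labels:
        com.docker.ucp.access.label: /Shared/nginx-collection
```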
+ diff --git a/ee/ucp/authorization/_site/ee-advanced.html b/ee/ucp/authorization/_site/ee-advanced.html new file mode 100644 index 0000000000..44cb8d7cf6 --- /dev/null +++ b/ee/ucp/authorization/_site/ee-advanced.html @@ -0,0 +1,137 @@ +

Go through the Docker Enterprise Standard tutorial, +before continuing here with Docker Enterprise Advanced.

+ +

In the first tutorial, the fictional company, OrcaBank, designed an architecture +with role-based access control (RBAC) to meet their organization’s security +needs. They assigned multiple grants to fine-tune access to resources across +collection boundaries on a single platform.

+ +

In this tutorial, OrcaBank implements new and more stringent security +requirements for production applications:

+ +

First, OrcaBank adds a staging zone to their deployment model. They will no longer move developed applications directly into production. Instead, they will deploy apps from their dev cluster to staging for testing, and then to production.

+ +

Second, production applications are no longer permitted to share any physical +infrastructure with non-production infrastructure. OrcaBank segments the +scheduling and access of applications with Node Access Control.

+ +
+

Node Access Control is a feature of Docker EE +Advanced and provides secure multi-tenancy with node-based isolation. Nodes +can be placed in different collections so that resources can be scheduled and +isolated on disparate physical or virtual hardware resources.

+
+ +

Team access requirements

+ +

OrcaBank still has three application teams: payments, mobile, and db, with varying levels of segmentation between them.

+ +

Their RBAC redesign is going to organize their UCP cluster into two top-level +collections, staging and production, which are completely separate security +zones on separate physical infrastructure.

+ +

OrcaBank’s three teams now have different needs in production and staging:

+ + + +

Role composition

+ +

OrcaBank has decided to replace their custom Ops role with the built-in +Full Control role.

+ + + +

image

+ +

Collection architecture

+ +

In the previous tutorial, OrcaBank created separate collections for each +application team and nested them all under /Shared.

+ +

To meet their new security requirements for production, OrcaBank is redesigning +collections in two ways:

+ + + +

The collection architecture now has the following tree representation:

+ +
/
+├── System
+├── Shared
+├── prod
+│   ├── mobile
+│   ├── payments
+│   └── db
+│       ├── mobile
+│       └── payments
+|
+└── staging
+    ├── mobile
+    └── payments
+
+ +

Grant composition

+ +

OrcaBank must now diversify their grants further to ensure the proper division +of access.

+ +

The payments and mobile application teams will have three grants each: one for deploying to production, one for deploying to staging, and the same grant as before to access shared db networks and secrets.

+ +

image

+ +

OrcaBank access architecture

+ +

The resulting access architecture, designed with Docker EE Advanced, provides +physical segmentation between production and staging using node access control.

+ +

Applications are scheduled only on UCP worker nodes in the dedicated application +collection. And applications use shared resources across collection boundaries +to access the databases in the /prod/db collection.

+ +

image

+ +

DB team

+ +

The OrcaBank db team is responsible for deploying and managing the full +lifecycle of the databases that are in production. They have the full set of +operations against all database resources.

+ +

image

+ +

Mobile team

+ +

The mobile team is responsible for deploying their full application stack in +staging. In production they deploy their own applications but use the databases +that are provided by the db team.

+ +

image

+ diff --git a/ee/ucp/authorization/_site/ee-standard.html b/ee/ucp/authorization/_site/ee-standard.html new file mode 100644 index 0000000000..791cd3271b --- /dev/null +++ b/ee/ucp/authorization/_site/ee-standard.html @@ -0,0 +1,137 @@ +

Collections and grants are strong tools that can be used to control +access and visibility to resources in UCP.

+ +

This tutorial describes a fictitious company named OrcaBank that needs to +configure an architecture in UCP with role-based access control (RBAC) for +their application engineering group.

+ +

Team access requirements

+ +

OrcaBank reorganized their application teams by product with each team providing +shared services as necessary. Developers at OrcaBank do their own DevOps and +deploy and manage the lifecycle of their applications.

+ +

OrcaBank has four teams with the following resource needs:

+ + + +

Role composition

+ +

To assign the proper access, OrcaBank is employing a combination of default +and custom roles:

+ + + +

image

+ +

Collection architecture

+ +

OrcaBank is also creating collections of resources to mirror their team +structure.

+ +

Currently, all OrcaBank applications share the same physical resources, so all +nodes and applications are being configured in collections that nest under the +built-in collection, /Shared.

+ +

Other collections are also being created to enable shared db applications.

+ +
+

Note: For increased security with node-based isolation, use Docker +Enterprise Advanced.

+
+ + + +

The collection architecture has the following tree representation:

+ +
/
+├── System
+└── Shared
+    ├── mobile
+    ├── payments
+    └── db
+        ├── mobile
+        └── payments
+
+ +

OrcaBank’s Grant composition ensures that their collection +architecture gives the db team access to all db resources and restricts +app teams to shared db resources.

+ +

LDAP/AD integration

+ +

OrcaBank has standardized on LDAP for centralized authentication to help their +identity team scale across all the platforms they manage.

+ +

To implement LDAP authentication in UCP, OrcaBank is using UCP’s native LDAP/AD +integration to map LDAP groups directly to UCP teams. Users can be added to or +removed from UCP teams via LDAP which can be managed centrally by OrcaBank’s +identity team.

+ +

The following grant composition shows how LDAP groups are mapped to UCP teams.

+ +

Grant composition

+ +

OrcaBank is taking advantage of the flexibility in UCP’s grant model by applying +two grants to each application team. One grant allows each team to fully +manage the apps in their own collection, and the second grant gives them the +(limited) access they need to networks and secrets within the db collection.

+ +

image

+ +

OrcaBank access architecture

+ +

OrcaBank’s resulting access architecture shows applications connecting across +collection boundaries. By assigning multiple grants per team, the Mobile and +Payments applications teams can connect to dedicated Database resources through +a secure and controlled interface, leveraging Database networks and secrets.

+ +
+

Note: In Docker Enterprise Standard, all resources are deployed across the +same group of UCP worker nodes. Node segmentation is provided in Docker +Enterprise Advanced and discussed in the next tutorial.

+
+ +

image

+ +

DB team

+ +

The db team is responsible for deploying and managing the full lifecycle +of the databases used by the application teams. They can execute the full set of +operations against all database resources.

+ +

image

+ +

Mobile team

+ +

The mobile team is responsible for deploying their own application stack, +minus the database tier that is managed by the db team.

+ +

image

+ +

Where to go next

+ + diff --git a/ee/ucp/authorization/_site/grant-permissions.html b/ee/ucp/authorization/_site/grant-permissions.html new file mode 100644 index 0000000000..17bec8115c --- /dev/null +++ b/ee/ucp/authorization/_site/grant-permissions.html @@ -0,0 +1,77 @@ +

Docker EE administrators can create grants to control how users and +organizations access resource sets.

+ +

A grant defines who has how much access to what resources. Each grant is a +1:1:1 mapping of subject, role, and resource set. For example, you can +grant the “Prod Team” “Restricted Control” over services in the “/Production” +collection.

+ +

A common workflow for creating grants has four steps:

+ + + +

Kubernetes grants

+ +

With Kubernetes orchestration, a grant is made up of subject, role, and +namespace.

+ +
+

This section assumes that you have created objects for the grant: subject, role, +namespace.

+
+ +

To create a Kubernetes grant in UCP:

+ +
  1. Click Grants under User Management.
  2. Click Create Grant.
  3. Click Namespaces under Kubernetes.
  4. Find the desired namespace and click Select Namespace.
  5. On the Roles tab, select a role.
  6. On the Subjects tab, select a user, team, organization, or service account to authorize.
  7. Click Create.
+ +

Swarm grants

+ +

With Swarm orchestration, a grant is made up of subject, role, and +collection.

+ +
+

This section assumes that you have created objects to grant: teams/users, +roles (built-in or custom), and a collection.

+
+ +

+

+ +

To create a grant in UCP:

+ +
  1. Click Grants under User Management.
  2. Click Create Grant.
  3. On the Collections tab, click Collections (for Swarm).
  4. Click View Children until you get to the desired collection, and click Select.
  5. On the Roles tab, select a role.
  6. On the Subjects tab, select a user, team, or organization to authorize.
  7. Click Create.
+ +
+

By default, all new users are placed in the docker-datacenter organization. +To apply permissions to all Docker EE users, create a grant with the +docker-datacenter org as a subject.

+
+ +

Where to go next

+ + diff --git a/ee/ucp/authorization/_site/group-resources.html b/ee/ucp/authorization/_site/group-resources.html new file mode 100644 index 0000000000..1ce779c9f8 --- /dev/null +++ b/ee/ucp/authorization/_site/group-resources.html @@ -0,0 +1,136 @@ +

Docker EE enables access control to cluster resources by grouping resources +into resource sets. Combine resource sets with grants +to give users permission to access specific cluster resources.

+ +

A resource set can be:

+ + + +

Kubernetes namespaces

+ +

A namespace allows you to group resources like Pods, Deployments, Services, or +any other Kubernetes-specific resources. You can then enforce RBAC policies +and resource quotas for the namespace.

+ +

Each Kubernetes resource can only be in one namespace, and namespaces cannot be nested inside one another.

+ +

Learn more about Kubernetes namespaces.

+ +

Swarm collections

+ +

A Swarm collection is a directory of cluster resources like nodes, services, +volumes, or other Swarm-specific resources.

+ +

+ +

Each Swarm resource can only be in one collection at a time, but collections +can be nested inside one another, to create hierarchies.

+ +

Nested collections

+ +

You can nest collections inside one another. If a user is granted permissions for one collection, they’ll have permissions for its child collections, much like a directory structure.

+ +

For a child collection, or for a user who belongs to more than one team, the +system concatenates permissions from multiple roles into an “effective role” for +the user, which specifies the operations that are allowed against the target.

+ +

Built-in collections

+ +

Docker EE provides a number of built-in collections.

+ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Default collectionDescription
/Path to all resources in the Swarm cluster. Resources not in a collection are put here.
/SystemPath to UCP managers, DTR nodes, and UCP/DTR system services. By default, only admins have access, but this is configurable.
/SharedDefault path to all worker nodes for scheduling. In Docker EE Standard, all worker nodes are located here. In Docker EE Advanced, worker nodes can be moved and isolated.
/Shared/Private/Path to a user’s private collection.
/Shared/LegacyPath to the access control labels of legacy versions (UCP 2.1 and lower).
+ +

Default collections

+ +

Each user has a default collection which can be changed in UCP preferences.

+ +

Users can’t deploy a resource without a collection. When a user deploys a +resource without an access label, Docker EE automatically places the resource in +the user’s default collection. Learn how to add labels to nodes.

+ +

With Docker Compose, the system applies default collection labels across all +resources in the stack unless com.docker.ucp.access.label has been explicitly +set.

+ +
+

Default collections and collection labels

+ +

Default collections are good for users who work only on a well-defined slice of +the system, as well as users who deploy stacks and don’t want to edit the +contents of their compose files. A user with more versatile roles in the +system, such as an administrator, might find it better to set custom labels for +each resource.

+
+ +

Collections and labels

+ +

Resources are marked as being in a collection by using labels. Some resource +types don’t have editable labels, so you can’t move them across collections.

+ +
+

Can edit labels: services, nodes, secrets, and configs. Cannot edit labels: containers, networks, and volumes.

+
+ +

For editable resources, you can change the com.docker.ucp.access.label to move resources to different collections. For example, you may need to deploy resources to a collection other than your default collection.

+ +

The system uses the additional labels, com.docker.ucp.collection.*, to enable +efficient resource lookups. By default, nodes have the +com.docker.ucp.collection.root, com.docker.ucp.collection.shared, and +com.docker.ucp.collection.swarm labels set to true. UCP +automatically controls these labels, and you don’t need to manage them.

+ +

Collections get generic default names, but you can give them meaningful names, +like “Dev”, “Test”, and “Prod”.

+ +

A stack is a group of resources identified by a label. You can place the +stack’s resources in multiple collections. Resources are placed in the user’s +default collection unless you specify an explicit com.docker.ucp.access.label +within the stack/compose file.

+ +

Where to go next

+ + diff --git a/ee/ucp/authorization/_site/index.html b/ee/ucp/authorization/_site/index.html new file mode 100644 index 0000000000..bd8fc3f9c3 --- /dev/null +++ b/ee/ucp/authorization/_site/index.html @@ -0,0 +1,110 @@ +

Docker Universal Control Plane (UCP), +the UI for Docker EE, lets you +authorize users to view, edit, and use cluster resources by granting role-based +permissions against resource sets.

+ +

To authorize access to cluster resources across your organization, UCP +administrators might take the following high-level steps:

+ + + +

For an example, see Deploy stateless app with RBAC.

+ +

Subjects

+ +

A subject represents a user, team, organization, or service account. A subject +can be granted a role that defines permitted operations against one or more +resource sets.

+ + + +

Learn to create and configure users and teams.

+ +

Roles

+ +

Roles define what operations can be done by whom. A role is a set of permitted +operations against a type of resource, like a container or volume, that’s +assigned to a user or team with a grant.

+ +

For example, the built-in role, Restricted Control, includes permission to +view and schedule nodes but not to update nodes. A custom DBA role might +include permissions to r-w-x volumes and secrets.

+ +

Most organizations use multiple roles to fine-tune the appropriate access. A +given team or user may have different roles provided to them depending on what +resource they are accessing.

+ +

Learn to define roles with authorized API operations.

+ +

Resource sets

+ +

To control user access, cluster resources are grouped into Docker Swarm +collections or Kubernetes namespaces.

+ + + +

Together, collections and namespaces are named resource sets. Learn to +group and isolate cluster resources.

+ +

Grants

+ +

A grant is made up of subject, role, and resource set.

+ +

Grants define which users can access what resources in what way. Grants are +effectively Access Control Lists (ACLs), and when grouped together, they +provide comprehensive access policies for an entire organization.

+ +

Only an administrator can manage grants, subjects, roles, and access to +resources.

+ +
+

About administrators

+ +

An administrator is a user who creates subjects, groups resources by moving them +into collections or namespaces, defines roles by selecting allowable operations, +and applies grants to users and teams.

+
+ +

Where to go next

+ + diff --git a/ee/ucp/authorization/_site/isolate-nodes.html b/ee/ucp/authorization/_site/isolate-nodes.html new file mode 100644 index 0000000000..c307ebc382 --- /dev/null +++ b/ee/ucp/authorization/_site/isolate-nodes.html @@ -0,0 +1,315 @@ +

With Docker EE Advanced, you can enable physical isolation of resources +by organizing nodes into collections and granting Scheduler access for +different users. To control access to nodes, move them to dedicated collections +where you can grant access to specific users, teams, and organizations.

+ +

+ +

In this example, a team gets access to a node collection and a resource +collection, and UCP access control ensures that the team members can’t view +or use swarm resources that aren’t in their collection.

+ +

You need a Docker EE Advanced license and at least two worker nodes to +complete this example.

+ +
  1. Create an Ops team and assign a user to it.
  2. Create a /Prod collection for the team’s node.
  3. Assign a worker node to the /Prod collection.
  4. Grant the Ops team access to its collection.
+ +

+ +

Create a team

+ +

In the web UI, navigate to the Organizations & Teams page to create a team +named “Ops” in your organization. Add a user who isn’t a UCP administrator to +the team. +Learn to create and manage teams.

+ +

Create a node collection and a resource collection

+ +

In this example, the Ops team uses an assigned group of nodes, which it +accesses through a collection. Also, the team has a separate collection +for its resources.

+ +

Create two collections: one for the team’s worker nodes and another for the +team’s resources.

+ +
  1. Navigate to the Collections page to view all of the resource collections in the swarm.
  2. Click Create collection and name the new collection “Prod”.
  3. Click Create to create the collection.
  4. Find Prod in the list, and click View children.
  5. Click Create collection, and name the child collection “Webserver”. This creates a sub-collection for access control.
+ +

You’ve created two new collections. The /Prod collection is for the worker +nodes, and the /Prod/Webserver sub-collection is for access control to +an application that you’ll deploy on the corresponding worker nodes.

+ +

Move a worker node to a collection

+ +

By default, worker nodes are located in the /Shared collection. +Worker nodes that are running DTR are assigned to the /System collection. +To control access to the team’s nodes, move them to a dedicated collection.

+ +

Move a worker node by changing the value of its access label key, +com.docker.ucp.access.label, to a different collection.

+ +
1. Navigate to the Nodes page to view all of the nodes in the swarm.
2. Click a worker node, and in the details pane, find its Collection. If it’s in the /System collection, click another worker node, because you can’t move nodes that are in the /System collection. By default, worker nodes are assigned to the /Shared collection.
3. When you’ve found an available node, in the details pane, click Configure.
4. In the Labels section, find com.docker.ucp.access.label and change its value from /Shared to /Prod.
5. Click Save to move the node to the /Prod collection.
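If you prefer the command line, the same move can be sketched from an admin client bundle. This is a sketch, not part of the UI steps above: the node name worker-1 is a placeholder, and you should verify the label-based flow against your UCP version.

```shell
# Move a worker node into the /Prod collection by updating its access label.
# "worker-1" is a hypothetical node name.
docker node update --label-add com.docker.ucp.access.label=/Prod worker-1

# Confirm the label is set on the node.
docker node inspect worker-1 \
  --format '{{ index .Spec.Labels "com.docker.ucp.access.label" }}'
```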
+ +
+

Docker EE Advanced required

+ +

If you don’t have a Docker EE Advanced license, you’ll get the following +error message when you try to change the access label: +Nodes must be in either the shared or system collection without an advanced license. +Get a Docker EE Advanced license.

+
+ +

+ +

Grant access for a team

+ +

You need two grants to control access to nodes and container resources: a Scheduler grant against the /Prod node collection, and a resource-role grant (Restricted Control in this example) against the /Prod/Webserver collection.

+ + + +

Create two grants for team access to the two collections:

+ +
1. Navigate to the Grants page and click Create Grant.
2. In the left pane, click Resource Sets, and in the Swarm collection, click View Children.
3. In the Prod collection, click View Children.
4. In the Webserver collection, click Select Collection.
5. In the left pane, click Roles, and select Restricted Control in the dropdown.
6. Click Subjects, and under Select subject type, click Organizations.
7. Select your organization, and in the Team dropdown, select Ops.
8. Click Create to grant the Ops team access to the /Prod/Webserver collection.
+ +

The same steps apply for the nodes in the /Prod collection.

+ +
1. Navigate to the Grants page and click Create Grant.
2. In the left pane, click Collections, and in the Swarm collection, click View Children.
3. In the Prod collection, click Select Collection.
4. In the left pane, click Roles, and in the dropdown, select Scheduler.
5. In the left pane, click Subjects, and under Select subject type, click Organizations.
6. Select your organization, and in the Team dropdown, select Ops.
7. Click Create to grant the Ops team Scheduler access to the nodes in the /Prod collection.
+ +

+ +

The cluster is set up for node isolation. Users with access to nodes in the +/Prod collection can deploy Swarm services +and Kubernetes apps, and their workloads +won’t be scheduled on nodes that aren’t in the collection.

+ +

Deploy a Swarm service as a team member

+ +

When a user deploys a Swarm service, UCP assigns its resources to the user’s +default collection.

+ +

From the target collection of a resource, UCP walks up the ancestor collections +until it finds the highest ancestor that the user has Scheduler access to. +Tasks are scheduled on any nodes in the tree below this ancestor. In this example, +UCP assigns the user’s service to the /Prod/Webserver collection and schedules +tasks on nodes in the /Prod collection.

+ +
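The walk-up rule can be sketched as a small shell function. This is illustrative pseudologic under stated assumptions (collection paths as slash-separated strings, Scheduler grants passed as arguments), not UCP source code.

```shell
# Walk up from the resource's target collection and keep the highest
# ancestor that appears in the user's Scheduler grants.
highest_schedulable_ancestor() {
  target="$1"; shift          # e.g. /Prod/Webserver
  best=""
  node="$target"
  while [ -n "$node" ]; do
    for granted in "$@"; do   # collections with a Scheduler grant
      if [ "$node" = "$granted" ]; then best="$node"; fi
    done
    if [ "$node" = "${node%/*}" ]; then
      node=""                 # no slash left; stop at the root
    else
      node="${node%/*}"       # parent collection path
    fi
  done
  echo "$best"
}

# A user with Scheduler on /Prod deploying into /Prod/Webserver:
highest_schedulable_ancestor /Prod/Webserver /Prod   # prints /Prod
```

Tasks are then scheduled on any node in the tree below the printed collection, matching the example above.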

As a user on the Ops team, set your default collection to /Prod/Webserver.

+ +
1. Log in as a user on the Ops team.
2. Navigate to the Collections page, and in the Prod collection, click View Children.
3. In the Webserver collection, click the More Options icon and select Set to default.
+ +

Next, deploy a service. It lands automatically on worker nodes in the /Prod collection: all resources are deployed under the user’s default collection, /Prod/Webserver, and the containers are scheduled only on the nodes under /Prod.

+ +
1. Navigate to the Services page, and click Create Service.
2. Name the service “NGINX”, use the “nginx:latest” image, and click Create.
3. When the nginx service status is green, click the service. In the details view, click Inspect Resource, and in the dropdown, select Containers.
4. Click the NGINX container, and in the details pane, confirm that its Collection is /Prod/Webserver.
5. Click Inspect Resource, and in the dropdown, select Nodes.
6. Click the node, and in the details pane, confirm that its Collection is /Prod.
+ +

Alternative: Use a grant instead of the default collection

+ +

Another approach is to use a grant instead of changing the user’s default +collection. An administrator can create a grant for a role that has the +Service Create permission against the /Prod/Webserver collection or a child +collection. In this case, the user sets the value of the service’s access label, +com.docker.ucp.access.label, to the new collection or one of its children +that has a Service Create grant for the user.

+ +
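That alternative can be sketched from the CLI. The label value and service name are illustrative, and whether a service-level label alone is sufficient may vary by UCP version, so treat this as a sketch to verify:

```shell
# Deploy a service pinned to a collection explicitly, instead of relying
# on the user's default collection. Hypothetical example.
docker service create \
  --name nginx \
  --label com.docker.ucp.access.label=/Prod/Webserver \
  nginx:latest
```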

Deploy a Kubernetes application

+ +

Starting in Docker Enterprise Edition 2.0, you can deploy a Kubernetes workload to worker nodes based on a Kubernetes namespace.

+ +
1. Convert a node to use the Kubernetes orchestrator.
2. Create a Kubernetes namespace.
3. Create a grant for the namespace.
4. Link the namespace to a node collection.
5. Deploy a Kubernetes workload.
+ +

Convert a node to Kubernetes

+ +

To deploy Kubernetes workloads, an administrator must convert a worker node to +use the Kubernetes orchestrator. +Learn how to set the orchestrator type +for your nodes in the /Prod collection.

+ +

Create a Kubernetes namespace

+ +

An administrator must create a Kubernetes namespace to enable node isolation +for Kubernetes workloads.

+ +
1. In the left pane, click Kubernetes.
2. Click Create to open the Create Kubernetes Object page.
3. In the Object YAML editor, paste the following YAML:

       apiVersion: v1
       kind: Namespace
       metadata:
         name: ops-nodes

4. Click Create to create the ops-nodes namespace.
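If kubectl is configured from an admin client bundle, the namespace can also be created from the CLI; this is an equivalent sketch of the step above:

```shell
# Create the ops-nodes namespace without the Object YAML editor.
kubectl create namespace ops-nodes
```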
+ +

Grant access to the Kubernetes namespace

+ +

Create a grant to the ops-nodes namespace for the Ops team by following the +same steps that you used to grant access to the /Prod collection, only this +time, on the Create Grant page, pick Namespaces, instead of +Collections.

+ +

+ +

Select the ops-nodes namespace, and create a Full Control grant for the +Ops team.

+ +

+ + + +

The last step is to link the Kubernetes namespace to the /Prod collection.

+ +
1. Navigate to the Namespaces page, and find the ops-nodes namespace in the list.
2. Click the More options icon and select Link nodes in collection.
3. In the Choose collection section, click View children on the Swarm collection to navigate to the Prod collection.
4. On the Prod collection, click Select collection.
5. Click Confirm to link the namespace to the collection.
+ +

Deploy a Kubernetes workload to the node collection

+ +
1. Log in as a non-admin user who’s on the Ops team.
2. In the left pane, open the Kubernetes section.
3. Confirm that ops-nodes is displayed under Namespaces.
4. Click Create, and in the Object YAML editor, paste the following YAML definition for an NGINX server:

       apiVersion: v1
       kind: ReplicationController
       metadata:
         name: nginx
       spec:
         replicas: 1
         selector:
           app: nginx
         template:
           metadata:
             name: nginx
             labels:
               app: nginx
           spec:
             containers:
             - name: nginx
               image: nginx
               ports:
               - containerPort: 80

5. Click Create to deploy the workload.
6. In the left pane, click Pods and confirm that the workload is running on pods in the ops-nodes namespace.
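The same deployment can be sketched with kubectl from the Ops user’s client bundle. The filename nginx-rc.yaml is hypothetical; it would contain the ReplicationController YAML above:

```shell
# Apply the workload into the ops-nodes namespace, then confirm placement.
kubectl apply -f nginx-rc.yaml --namespace ops-nodes
kubectl get pods --namespace ops-nodes
```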
+ +

Where to go next

+ + diff --git a/ee/ucp/authorization/_site/isolate-volumes.html b/ee/ucp/authorization/_site/isolate-volumes.html new file mode 100644 index 0000000000..ddb22fc23a --- /dev/null +++ b/ee/ucp/authorization/_site/isolate-volumes.html @@ -0,0 +1,100 @@ +

In this example, two teams are granted access to volumes in two different resource collections. UCP access control prevents the teams from viewing and accessing each other’s volumes, even though the volumes may be located on the same nodes.

+ +
1. Create two teams.
2. Create two collections, one for each team.
3. Create grants to manage access to the collections.
4. Team members create volumes that are specific to their team.
+ +

+ +

Create two teams

+ +

Navigate to the Organizations & Teams page to create two teams in the +“engineering” organization, named “Dev” and “Prod”. Add a user who’s not a UCP administrator to the Dev team, and add another non-admin user to the Prod team. Learn how to create and manage teams.

+ +

+ +

Create resource collections

+ +

In this example, the Dev and Prod teams use two different volumes, which they +access through two corresponding resource collections. The collections are +placed under the /Shared collection.

+ +
1. In the left pane, click Collections to show all of the resource collections in the swarm.
2. Find the /Shared collection and click View children.
3. Click Create collection and name the new collection “dev-volumes”.
4. Click Create to create the collection.
5. Click Create collection again, name the new collection “prod-volumes”, and click Create.
+ +

+ +

Create grants for controlling access to the new volumes

+ +

In this example, the Dev team gets access to its volumes from a grant that +associates the team with the /Shared/dev-volumes collection, and the Prod +team gets access to its volumes from another grant that associates the team +with the /Shared/prod-volumes collection.

+ +
1. Navigate to the Grants page and click Create Grant.
2. In the left pane, click Collections, and in the Swarm collection, click View Children.
3. In the Shared collection, click View Children.
4. In the list, find /Shared/dev-volumes and click Select Collection.
5. Click Roles, and in the dropdown, select Restricted Control.
6. Click Subjects, and under Select subject type, click Organizations. In the dropdown, pick the engineering organization, and in the Team dropdown, select Dev.
7. Click Create to grant permissions to the Dev team.
8. Click Create Grant and repeat the previous steps for the /Shared/prod-volumes collection and the Prod team.
+ +

+ +

With the collections and grants in place, users can sign in and create volumes +in their assigned collections.

+ +

Create a volume as a team member

+ +

Team members have permission to create volumes in their assigned collection.

+ +
1. Log in as one of the users on the Dev team.
2. Navigate to the Volumes page to view all of the volumes in the swarm that the user can access.
3. Click Create volume and name the new volume “dev-data”.
4. In the left pane, click Collections. The default collection appears. At the top of the page, click Shared, find the dev-volumes collection in the list, and click Select Collection.
5. Click Create to add the “dev-data” volume to the collection.
6. Log in as one of the users on the Prod team, and repeat the previous steps to create a “prod-data” volume assigned to the /Shared/prod-volumes collection.
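With a Dev-team client bundle loaded, the volume can also be created from the CLI. The label-based collection placement shown here is an assumption to verify against your UCP version:

```shell
# Create the Dev team's volume directly in its collection (illustrative).
docker volume create \
  --label com.docker.ucp.access.label=/Shared/dev-volumes \
  dev-data
```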
+ +

+ +

Now you can see role-based access control in action for volumes. The user on +the Prod team can’t see the Dev team’s volumes, and if you log in again as a +user on the Dev team, you won’t see the Prod team’s volumes.

+ +

+ +

Sign in with a UCP administrator account, and you see all of the volumes +created by the Dev and Prod users.

+ +

+ +

Where to go next

+ + diff --git a/ee/ucp/authorization/_site/migrate-kubernetes-roles.html b/ee/ucp/authorization/_site/migrate-kubernetes-roles.html new file mode 100644 index 0000000000..80f639d3c8 --- /dev/null +++ b/ee/ucp/authorization/_site/migrate-kubernetes-roles.html @@ -0,0 +1,122 @@ +

With Docker Enterprise Edition, you can create roles and grants +that implement the permissions that are defined in your Kubernetes apps. +Learn about RBAC authorization in Kubernetes.

+ +

Docker EE has its own implementation of role-based access control, so you +can’t use Kubernetes RBAC objects directly. Instead, you create UCP roles +and grants that correspond with the role objects and bindings in your +Kubernetes app.

+ + + +

Learn about UCP roles and grants.

+ +
+

Kubernetes yaml in UCP

+ +

Docker EE has its own RBAC system that’s distinct from the Kubernetes +system, so you can’t create any objects that are returned by the +/apis/rbac.authorization.k8s.io endpoints. If the yaml for your Kubernetes +app contains definitions for Role, ClusterRole, RoleBinding or +ClusterRoleBinding objects, UCP returns an error.

+
+ +

Migrate a Kubernetes Role to a custom UCP role

+ +

If you have Role and ClusterRole objects defined in the yaml for your +Kubernetes app, you can realize the same authorization model by creating +custom roles by using the UCP web UI.

+ +

The following Kubernetes yaml defines a pod-reader role, which gives users read-only access to pods through the get, watch, and list API verbs.

+ +
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
+ +

Create a corresponding custom role by using the Create Role page in the +UCP web UI.

+ +
1. Log in to the UCP web UI with an administrator account.
2. Click Roles under User Management.
3. Click Create Role.
4. In the Role Details section, name the role “pod-reader”.
5. In the left pane, click Operations.
6. Scroll to the Kubernetes pod operations section and expand the All Kubernetes Pod operations dropdown.
7. Select the Pod Get, Pod List, and Pod Watch operations.
8. Click Create.
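The correspondence between the Kubernetes verbs and the UCP operations selected above can be written down explicitly. The operation names here are taken from the steps in this guide; treat the mapping as a sketch:

```shell
# Map a Kubernetes pod verb to the UCP operation selected for the role.
verb_to_ucp_operation() {
  case "$1" in
    get)   echo "Pod Get" ;;
    list)  echo "Pod List" ;;
    watch) echo "Pod Watch" ;;
    *)     echo "unmapped" ;;   # verbs outside the pod-reader role
  esac
}

verb_to_ucp_operation get     # prints Pod Get
verb_to_ucp_operation delete  # prints unmapped
```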
+ +

The pod-reader role is ready to use in grants that control access to +cluster resources.

+ +

Migrate a Kubernetes RoleBinding to a UCP grant

+ +

If your Kubernetes app defines RoleBinding or ClusterRoleBinding +objects for specific users, create corresponding grants by using the UCP web UI.

+ +

The following Kubernetes yaml defines a RoleBinding that grants user “jane” +read-only access to pods in the default namespace.

+ +
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: jane
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
+ +

Create a corresponding grant by using the Create Grant page in the +UCP web UI.

+ +
1. Create a non-admin user named “jane”. Learn to create users and teams.
2. Click Grants under User Management.
3. Click Create Grant.
4. In the Type section, click Namespaces and ensure that default is selected.
5. In the left pane, click Roles, and in the Role dropdown, select pod-reader.
6. In the left pane, click Subjects, and click All Users.
7. In the User dropdown, select jane.
8. Click Create.
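To spot-check the grant, kubectl’s built-in permission query helps. This sketch assumes kubectl is configured from jane’s UCP client bundle:

```shell
# Expect "yes" for verbs in pod-reader and "no" for anything outside it.
kubectl auth can-i list pods --namespace default
kubectl auth can-i delete pods --namespace default
```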
+ +

+ +

User “jane” has access to inspect pods in the default namespace.

+ +

Kubernetes limitations

+ +

There are a few limitations that you should be aware of when creating +Kubernetes workloads:

+ + diff --git a/ee/ucp/authorization/_site/pull-images.html b/ee/ucp/authorization/_site/pull-images.html new file mode 100644 index 0000000000..af82bb922a --- /dev/null +++ b/ee/ucp/authorization/_site/pull-images.html @@ -0,0 +1,30 @@ +

By default, only admin users can pull images into a cluster managed by UCP.

+ +

Images are a shared resource; as such, they are always in the swarm collection. To allow users to pull images, you need to grant them the image load permission for the swarm collection.

+ +

As an admin user, go to the UCP web UI, navigate to the Roles page, +and create a new role named Pull images.

+ +

+ +

Then go to the Grants page, and create a new grant with the Pull images role, the Swarm collection as the resource set, and the user as the subject.

+ + + +

+ +

Once you click Create, the user is able to pull images from the UCP web UI or the CLI.

+ +

Where to go next

+ + diff --git a/ee/ucp/authorization/_site/reset-user-password.html b/ee/ucp/authorization/_site/reset-user-password.html new file mode 100644 index 0000000000..68cf08e3ae --- /dev/null +++ b/ee/ucp/authorization/_site/reset-user-password.html @@ -0,0 +1,24 @@ +

Docker EE administrators can reset user passwords managed in UCP:

+ +
1. Log in to UCP with administrator credentials.
2. Click Users under User Management.
3. Select the user whose password you want to change.
4. Select Configure and select Security.
5. Enter the new password, confirm it, and click Update Password.
+ +

Passwords for users managed with an LDAP service must be changed on the LDAP server.

+ +

+ +

Change administrator passwords

+ +

Administrators who need a password change can ask another administrator for help or use SSH to log in to a manager node managed by Docker EE and run:

+ +

docker run --net=host -v ucp-auth-api-certs:/tls -it \
  "$(docker inspect --format '{{ .Spec.TaskTemplate.ContainerSpec.Image }}' ucp-auth-api)" \
  "$(docker inspect --format '{{ index .Spec.TaskTemplate.ContainerSpec.Args 0 }}' ucp-auth-api)" \
  passwd -i
+