From dbda110c4355d4e293334c3483fa2bf3d2aa63cb Mon Sep 17 00:00:00 2001
From: ollypom
Date: Tue, 19 Mar 2019 18:36:27 +0000
Subject: [PATCH 01/13] Make the pod-cidr warning more obvious in the docs

---
 ee/ucp/admin/install/install-on-azure.md | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/ee/ucp/admin/install/install-on-azure.md b/ee/ucp/admin/install/install-on-azure.md
index 809965a77c..3e266b5b53 100644
--- a/ee/ucp/admin/install/install-on-azure.md
+++ b/ee/ucp/admin/install/install-on-azure.md
@@ -207,6 +207,9 @@ The `--pod-cidr` option maps to the IP address range that you configured for
 the subnets in the previous sections, and the `--host-address` maps to the IP
 address of the master node.
 
+> Note: The `pod-cidr` range must be within an Azure subnet attached to the
+> host.
+
 ```bash
 docker container run --rm -it \
   --name ucp \
@@ -216,8 +219,4 @@ docker container run --rm -it \
   --pod-cidr \
   --cloud-provider Azure \
   --interactive
-```
-
-#### Additional Notes
-
-- The Kubernetes `pod-cidr` must match the Azure Vnet of the hosts.
+```
\ No newline at end of file

From 8d583cc562705b44888c9a1702e2f07a2d47b31e Mon Sep 17 00:00:00 2001
From: Wang Jie
Date: Wed, 20 Mar 2019 14:51:33 +0800
Subject: [PATCH 02/13] Update overview.md

---
 machine/overview.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/machine/overview.md b/machine/overview.md
index 9606519418..8c50c4919e 100644
--- a/machine/overview.md
+++ b/machine/overview.md
@@ -1,7 +1,7 @@
 ---
 description: Introduction and Overview of Machine
 keywords: docker, machine, amazonec2, azure, digitalocean, google, openstack, rackspace, softlayer, virtualbox, vmwarefusion, vmwarevcloudair, vmwarevsphere, exoscale
-title: Docker Machine Overview
+title: Docker Machine overview
 ---
 
 You can use Docker Machine to:

From 46bc26536fb512d351319f225969573e7cffc6e2 Mon Sep 17 00:00:00 2001
From: Wang Jie
Date: Wed, 20 Mar 2019 15:23:52 +0800
Subject: [PATCH 03/13] Update index.md

---
 docker-hub/index.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docker-hub/index.md b/docker-hub/index.md
index ce1d4491da..531ef56221 100644
--- a/docker-hub/index.md
+++ b/docker-hub/index.md
@@ -1,7 +1,7 @@
 ---
 description: Docker Hub Quickstart
 keywords: Docker, docker, registry, accounts, plans, Dockerfile, Docker Hub, docs, documentation, accounts, organizations, repositories, groups, teams
-title: Docker Hub Quickstart
+title: Docker Hub quickstart
 redirect_from:
 - /docker-hub/overview/
 - /apidocs/docker-cloud/
@@ -141,7 +141,7 @@ Congratulations! You've successfully:
 - Built a Docker container image on your computer
 - Pushed it to Docker Hub
 
-### Next Steps
+### Next steps
 
 - Create an [Organization](orgs.md) to use Docker Hub with your team.
 - Automatically build container images from code through [Builds](builds/index.md).
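A brief illustration before the next patch: the note in patch 01 says the `pod-cidr` range must sit inside an Azure subnet attached to the host, and patch 04 below adds the matching prerequisite that the Vnet and Subnet be sized for Pod addressing. A minimal sketch of an install invocation that satisfies both follows; the subnet (`10.0.0.0/24`), manager address (`10.0.0.4`), and image tag (`3.1.4`) are assumed values, and the socket mount and image line are filled in from standard UCP install usage, since the hunks above elide them.

```bash
# Sketch only -- assumed values, not taken from the patches: the manager node
# has IP 10.0.0.4 inside an Azure subnet 10.0.0.0/24 attached to the host,
# and docker/ucp:3.1.4 is the UCP image in use.
docker container run --rm -it \
  --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp:3.1.4 install \
  --host-address 10.0.0.4 \
  --pod-cidr 10.0.0.0/24 \
  --cloud-provider Azure \
  --interactive
```

Because every Kubernetes Pod draws an address from the `--pod-cidr` pool, an undersized subnet exhausts quickly; that is the failure mode the prerequisite in patch 04 guards against.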
From f56f262899abfaec289f8968d45de52303a71c05 Mon Sep 17 00:00:00 2001
From: ollypom
Date: Wed, 20 Mar 2019 21:46:16 +0000
Subject: [PATCH 04/13] Incorporating dockerpac's feedback, to call out
 appropriately sized vnets in the Pre-reqs section

---
 ee/ucp/admin/install/install-on-azure.md | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/ee/ucp/admin/install/install-on-azure.md b/ee/ucp/admin/install/install-on-azure.md
index 3e266b5b53..94ac0a0784 100644
--- a/ee/ucp/admin/install/install-on-azure.md
+++ b/ee/ucp/admin/install/install-on-azure.md
@@ -42,6 +42,10 @@ to successfully deploy Docker UCP on Azure
 - All UCP Nodes (Managers and Workers) need to be deployed into the same
   Azure Resource Group. The Azure Networking (Vnets, Subnets, Security Groups)
   components could be deployed in a second Azure Resource Group.
+- The Azure Vnet and Subnet need to be appropriately sized for your
+  environment, because addresses from this pool are consumed by Kubernetes
+  Pods. See [Considerations for IPAM
+  Configuration](#considerations-for-ipam-configuration) for more information.
 - All UCP Nodes (Managers and Workers) need to be attached to the same
   Azure Subnet.
 - All UCP (Managers and Workers) need to be tagged in Azure with the

From a49b24f892c2ebdd32fc878ab70dd3bde82979f2 Mon Sep 17 00:00:00 2001
From: ollypom
Date: Thu, 21 Mar 2019 08:35:40 +0000
Subject: [PATCH 05/13] Removed reference to Migrating Kubernetes RBAC Roles to
 UCP RBAC Roles

---
 _data/toc.yaml                                |   2 -
 .../authorization/migrate-kubernetes-roles.md | 121 ------------------
 2 files changed, 123 deletions(-)
 delete mode 100644 ee/ucp/authorization/migrate-kubernetes-roles.md

diff --git a/_data/toc.yaml b/_data/toc.yaml
index 3999e8238e..0eb2e767c6 100644
--- a/_data/toc.yaml
+++ b/_data/toc.yaml
@@ -1279,8 +1279,6 @@ manuals:
         title: Isolate nodes
       - path: /ee/ucp/authorization/pull-images/
         title: Allow users to pull images
-      - path: /ee/ucp/authorization/migrate-kubernetes-roles/
-        title: Migrate Kubernetes roles to Docker EE authorization
       - path: /ee/ucp/authorization/ee-standard/
         title: Docker EE Standard use case
       - path: /ee/ucp/authorization/ee-advanced/
         title: Docker EE Advanced use case

diff --git a/ee/ucp/authorization/migrate-kubernetes-roles.md b/ee/ucp/authorization/migrate-kubernetes-roles.md
deleted file mode 100644
index de00cbf30c..0000000000
--- a/ee/ucp/authorization/migrate-kubernetes-roles.md
+++ /dev/null
@@ -1,121 +0,0 @@
----
-title: Migrate Kubernetes roles to Docker EE authorization
-description: Learn how to transfer Kubernetes Role and RoleBinding objects to UCP roles and grants.
-keywords: authorization, authentication, authorize, authenticate, user, team, UCP, Kubernetes, role, grant
----
-
-With Docker Enterprise Edition, you can create roles and grants
-that implement the permissions that are defined in your Kubernetes apps.
-Learn about [RBAC authorization in Kubernetes](https://v1-11.docs.kubernetes.io/docs/admin/authorization/rbac/).
-
-Docker EE has its own implementation of role-based access control, so you
-can't use Kubernetes RBAC objects directly. Instead, you create UCP roles
-and grants that correspond with the role objects and bindings in your
-Kubernetes app.
-
-- Kubernetes `Role` and `ClusterRole` objects become UCP roles.
-- Kubernetes `RoleBinding` and `ClusterRoleBinding` objects become UCP grants.
-
-Learn about [UCP roles and grants](grant-permissions.md).
-
-> Kubernetes yaml in UCP
->
-> Docker EE has its own RBAC system that's distinct from the Kubernetes
-> system, so you can't create any objects that are returned by the
-> `/apis/rbac.authorization.k8s.io` endpoints. If the yaml for your Kubernetes
-> app contains definitions for `Role`, `ClusterRole`, `RoleBinding` or
-> `ClusterRoleBinding` objects, UCP returns an error.
-{: .important}
-
-## Migrate a Kubernetes Role to a custom UCP role
-
-If you have `Role` and `ClusterRole` objects defined in the yaml for your
-Kubernetes app, you can realize the same authorization model by creating
-custom roles by using the UCP web UI.
-
-The following Kubernetes yaml defines a `pod-reader` role, which gives users
-access to the read-only `pods` resource APIs, `get`, `watch`, and `list`.
-
-```yaml
-kind: Role
-apiVersion: rbac.authorization.k8s.io/v1
-metadata:
-  namespace: default
-  name: pod-reader
-rules:
-- apiGroups: [""]
-  resources: ["pods"]
-  verbs: ["get", "watch", "list"]
-```
-
-Create a corresponding custom role by using the **Create Role** page in the
-UCP web UI.
-
-1. Log in to the UCP web UI with an administrator account.
-2. Click **Roles** under **User Management**.
-3. Click **Create Role**.
-4. In the **Role Details** section, name the role "pod-reader".
-5. In the left pane, click **Operations**.
-6. Scroll to the **Kubernetes pod operations** section and expand the
-   **All Kubernetes Pod operations** dropdown.
-7. Select the **Pod Get**, **Pod List**, and **Pod Watch** operations.
-   ![](../images/migrate-kubernetes-roles-1.png){: .with-border}
-8. Click **Create**.
-
-The `pod-reader` role is ready to use in grants that control access to
-cluster resources.
-
-## Migrate a Kubernetes RoleBinding to a UCP grant
-
-If your Kubernetes app defines `RoleBinding` or `ClusterRoleBinding`
-objects for specific users, create corresponding grants by using the UCP web UI.
-
-The following Kubernetes yaml defines a `RoleBinding` that grants user "jane"
-read-only access to pods in the `default` namespace.
-
-```yaml
-kind: RoleBinding
-apiVersion: rbac.authorization.k8s.io/v1
-metadata:
-  name: read-pods
-  namespace: default
-subjects:
-- kind: User
-  name: jane
-  apiGroup: rbac.authorization.k8s.io
-roleRef:
-  kind: Role
-  name: pod-reader
-  apiGroup: rbac.authorization.k8s.io
-```
-
-Create a corresponding grant by using the **Create Grant** page in the
-UCP web UI.
-
-1. Create a non-admin user named "jane". [Learn to create users and teams](create-users-and-teams-manually.md).
-1. Click **Grants** under **User Management**.
-2. Click **Create Grant**.
-3. In the **Type** section, click **Namespaces** and ensure that **default** is selected.
-4. In the left pane, click **Roles**, and in the **Role** dropdown, select **pod-reader**.
-5. In the left pane, click **Subjects**, and click **All Users**.
-6. In the **User** dropdown, select **jane**.
-7. Click **Create**.
-
-![](../images/migrate-kubernetes-roles-2.png){: .with-border}
-
-User "jane" has access to inspect pods in the `default` namespace.
-
-## Kubernetes limitations
-
-There are a few limitations that you should be aware of when creating
-Kubernetes workloads:
-
-* Docker EE has its own RBAC system, so it's not possible to create
-  `ClusterRole` objects, `ClusterRoleBinding` objects, or any other object that is
-  created by using the `/apis/rbac.authorization.k8s.io` endpoints.
-* To make sure your cluster is secure, only users and service accounts that have been
-  granted "Full Control" of all Kubernetes namespaces can deploy pods with privileged
-  options. This includes: `PodSpec.hostIPC`, `PodSpec.hostNetwork`,
-  `PodSpec.hostPID`, `SecurityContext.allowPrivilegeEscalation`,
-  `SecurityContext.capabilities`, `SecurityContext.privileged`, and
-  `Volume.hostPath`.

From fb291b1ea2cca0332e644ecc0a4746f2d37a2762 Mon Sep 17 00:00:00 2001
From: ollypom
Date: Thu, 21 Mar 2019 08:51:30 +0000
Subject: [PATCH 06/13] Remove stale _site html

---
 .../_site/create-teams-with-ldap.html         |  55 ---
 .../create-users-and-teams-manually.html      | 106 ------
 ee/ucp/authorization/_site/define-roles.html  |  77 -----
 .../_site/deploy-stateless-app.html           | 191 -----------
 ee/ucp/authorization/_site/ee-advanced.html   | 137 --------
 ee/ucp/authorization/_site/ee-standard.html   | 137 --------
 .../_site/grant-permissions.html              |  77 -----
 .../authorization/_site/group-resources.html  | 136 --------
 ee/ucp/authorization/_site/index.html         | 110 ------
 ee/ucp/authorization/_site/isolate-nodes.html | 315 ------------------
 .../authorization/_site/isolate-volumes.html  | 100 ------
 .../_site/migrate-kubernetes-roles.html       | 122 -------
 ee/ucp/authorization/_site/pull-images.html   |  30 --
 .../_site/reset-user-password.html            |  24 --
 14 files changed, 1617 deletions(-)
 delete mode 100644 ee/ucp/authorization/_site/create-teams-with-ldap.html
 delete mode 100644 ee/ucp/authorization/_site/create-users-and-teams-manually.html
 delete mode 100644 ee/ucp/authorization/_site/define-roles.html
 delete mode 100644 ee/ucp/authorization/_site/deploy-stateless-app.html
 delete mode 100644 ee/ucp/authorization/_site/ee-advanced.html
 delete mode 100644 ee/ucp/authorization/_site/ee-standard.html
 delete mode 100644 ee/ucp/authorization/_site/grant-permissions.html
 delete mode 100644 ee/ucp/authorization/_site/group-resources.html
 delete mode 100644 ee/ucp/authorization/_site/index.html
 delete mode 100644 ee/ucp/authorization/_site/isolate-nodes.html
 delete mode 100644 ee/ucp/authorization/_site/isolate-volumes.html
 delete mode 100644 ee/ucp/authorization/_site/migrate-kubernetes-roles.html
 delete mode 100644 ee/ucp/authorization/_site/pull-images.html
 delete mode 100644 ee/ucp/authorization/_site/reset-user-password.html
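For context on patch 06: `_site` is Jekyll's build-output directory, so the deleted files are generated copies of the markdown sources, which is why they are "stale". A sketch of how such a cleanup is typically produced follows; these are standard git and Jekyll conventions, not commands recorded in the patch itself.

```bash
# Sketch, assuming a Jekyll site whose build output was committed by mistake:
# delete the generated files and keep the output directory out of future commits.
git rm -r ee/ucp/authorization/_site
echo "_site/" >> .gitignore
git add .gitignore
git commit -m "Remove stale _site html"
```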

To enable LDAP in UCP and sync to your LDAP directory:

- -
    -
  1. Click Admin Settings under your username drop down.
  2. -
  3. Click Authentication & Authorization.
  4. -
  5. Scroll down and click Yes by LDAP Enabled. A list of LDAP settings displays.
  6. -
  7. Input values to match your LDAP server installation.
  8. -
  9. Test your configuration in UCP.
  10. -
  11. Manually create teams in UCP to mirror those in LDAP.
  12. -
  13. Click Sync Now.
  14. -
- -

If Docker EE is configured to sync users with your organization’s LDAP directory -server, you can enable syncing the new team’s members when creating a new team -or when modifying settings of an existing team.

- -

For more, see: Integrate with an LDAP Directory.

- -

- -

Binding to the LDAP server

- -

There are two methods for matching group members from an LDAP directory, direct -bind and search bind.

- -

Select Immediately Sync Team Members to run an LDAP sync operation -immediately after saving the configuration for the team. It may take a moment -before the members of the team are fully synced.

- -

Match Group Members (Direct Bind)

- -

This option specifies that team members should be synced directly with members -of a group in your organization’s LDAP directory. The team’s membership will by -synced to match the membership of the group.

- -
    -
  • Group DN: The distinguished name of the group from which to select users.
  • -
  • Group Member Attribute: The value of this group attribute corresponds to -the distinguished names of the members of the group.
  • -
- -

Match Search Results (Search Bind)

- -

This option specifies that team members should be synced using a search query -against your organization’s LDAP directory. The team’s membership will be -synced to match the users in the search results.

- -
    -
  • Search Base DN: Distinguished name of the node in the directory tree where -the search should start looking for users.
  • -
  • Search Filter: Filter to find users. If null, existing users in the search -scope are added as members of the team.
  • -
  • Search subtree: Defines search through the full LDAP tree, not just one -level, starting at the Base DN.
  • -
diff --git a/ee/ucp/authorization/_site/create-users-and-teams-manually.html b/ee/ucp/authorization/_site/create-users-and-teams-manually.html deleted file mode 100644 index 34e4790ce4..0000000000 --- a/ee/ucp/authorization/_site/create-users-and-teams-manually.html +++ /dev/null @@ -1,106 +0,0 @@ -

Users, teams, and organizations are referred to as subjects in Docker EE.

- -

Individual users can belong to one or more teams but each team can only be in -one organization. At the fictional startup, Acme Company, all teams in the -organization are necessarily unique but the user, Alex, is on two teams:

- -
acme-datacenter
-├── dba
-│   └── Alex*
-├── dev
-│   └── Bett
-└── ops
-    ├── Alex*
-    └── Chad
-
- -

Authentication

- -

All users are authenticated on the backend. Docker EE provides built-in -authentication and also integrates with LDAP directory services.

- -

To use Docker EE’s built-in authentication, you must create users manually.

- -
-

To enable LDAP and authenticate and synchronize UCP users and teams with your -organization’s LDAP directory, see:

- -
- -

Build an organization architecture

- -

The general flow of designing an organization with teams in UCP is:

- -
    -
  1. Create an organization.
  2. -
  3. Add users or enable LDAD (for syncing users).
  4. -
  5. Create teams under the organization.
  6. -
  7. Add users to teams manually or sync with LDAP.
  8. -
- -

Create an organization with teams

- -

To create an organization in UCP:

- -
    -
  1. Click Organization & Teams under User Management.
  2. -
  3. Click Create Organization.
  4. -
  5. Input the organization name.
  6. -
  7. Click Create.
  8. -
- -

To create teams in the organization:

- -
    -
  1. Click through the organization name.
  2. -
  3. Click Create Team.
  4. -
  5. Input a team name (and description).
  6. -
  7. Click Create.
  8. -
  9. Add existing users to the team. To sync LDAP users, see: Integrate with an LDAP Directory. -
      -
    • Click the team name and select Actions > Add Users.
    • -
    • Check the users to include and click Add Users.
    • -
    -
  10. -
- -
-

Note: To sync teams with groups in an LDAP server, see Sync Teams with LDAP.

-
- -

Create users manually

- -

New users are assigned a default permission level so that they can access the -cluster. To extend a user’s default permissions, add them to a team and create grants. You can optionally grant them Docker EE -administrator permissions.

- -

To manually create users in UCP:

- -
    -
  1. Click Users under User Management.
  2. -
  3. Click Create User.
  4. -
  5. Input username, password, and full name.
  6. -
  7. Click Create.
  8. -
  9. Optionally, check “Is a Docker EE Admin” to give the user administrator -privileges.
  10. -
- -
-

A Docker EE Admin can grant users permission to change the cluster -configuration and manage grants, roles, and resource sets.

-
- -

-

- -

Where to go next

- - diff --git a/ee/ucp/authorization/_site/define-roles.html b/ee/ucp/authorization/_site/define-roles.html deleted file mode 100644 index 7fe6660603..0000000000 --- a/ee/ucp/authorization/_site/define-roles.html +++ /dev/null @@ -1,77 +0,0 @@ -

A role defines a set of API operations permitted against a resource set. -You apply roles to users and teams by creating grants.

- -

Diagram showing UCP permission levels

- -

Default roles

- -

You can define custom roles or use the following built-in roles:

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Built-in roleDescription
NoneUsers have no access to Swarm or Kubernetes resources. Maps to No Access role in UCP 2.1.x.
View OnlyUsers can view resources but can’t create them.
Restricted ControlUsers can view and edit resources but can’t run a service or container in a way that affects the node where it’s running. Users cannot mount a node directory, exec into containers, or run containers in privileged mode or with additional kernel capabilities.
SchedulerUsers can view nodes (worker and manager) and schedule (not view) workloads on these nodes. By default, all users are granted the Scheduler role against the /Shared collection. (To view workloads, users need permissions such as Container View).
Full ControlUsers can view and edit all granted resources. They can create containers without any restriction, but can’t see the containers of other users.
- -

Create a custom role

- -

The Roles page lists all default and custom roles applicable in the -organization.

- -

You can give a role a global name, such as “Remove Images”, which might enable the -Remove and Force Remove operations for images. You can apply a role with -the same name to different resource sets.

- -
    -
  1. Click Roles under User Management.
  2. -
  3. Click Create Role.
  4. -
  5. Input the role name on the Details page.
  6. -
  7. Click Operations. All available API operations are displayed.
  8. -
  9. Select the permitted operations per resource type.
  10. -
  11. Click Create.
  12. -
- -

- -
-

Some important rules regarding roles:

-
    -
  • Roles are always enabled.
  • -
  • Roles can’t be edited. To edit a role, you must delete and recreate it.
  • -
  • Roles used within a grant can be deleted only after first deleting the grant.
  • -
  • Only administrators can create and delete roles.
  • -
-
- -

Where to go next

- - diff --git a/ee/ucp/authorization/_site/deploy-stateless-app.html b/ee/ucp/authorization/_site/deploy-stateless-app.html deleted file mode 100644 index eaafe2dcfa..0000000000 --- a/ee/ucp/authorization/_site/deploy-stateless-app.html +++ /dev/null @@ -1,191 +0,0 @@ -

This tutorial explains how to deploy a NGINX web server and limit access to one -team with role-based access control (RBAC).

- -

Scenario

- -

You are the Docker EE system administrator at Acme Company and need to configure -permissions to company resources. The best way to do this is to:

- -
    -
  • Build the organization with teams and users.
  • -
  • Define roles with allowable operations per resource types, like -permission to run containers.
  • -
  • Create collections or namespaces for accessing actual resources.
  • -
  • Create grants that join team + role + resource set.
  • -
- -

Build the organization

- -

Add the organization, acme-datacenter, and create three teams according to the -following structure:

- -
acme-datacenter
-├── dba
-│   └── Alex Alutin
-├── dev
-│   └── Bett Bhatia
-└── ops
-    └── Chad Chavez
-
- -

Learn to create and configure users and teams.

- -

Kubernetes deployment

- -

In this section, we deploy NGINX with Kubernetes. See Swarm stack -for the same exercise with Swarm.

- -

Create namespace

- -

Create a namespace to logically store the NGINX application:

- -
    -
  1. Click Kubernetes > Namespaces.
  2. -
  3. Paste the following manifest in the terminal window and click Create.
  4. -
- -
apiVersion: v1
-kind: Namespace
-metadata:
-  name: nginx-namespace
-
- -

Define roles

- -

You can use the built-in roles or define your own. For this exercise, create a -simple role for the ops team:

- -
    -
  1. Click Roles under User Management.
  2. -
  3. Click Create Role.
  4. -
  5. On the Details tab, name the role Kube Deploy.
  6. -
  7. On the Operations tab, check all Kubernetes Deployment Operations.
  8. -
  9. Click Create.
  10. -
- -

Learn to create and configure users and teams.

- -

Grant access

- -

Grant the ops team (and only the ops team) access to nginx-namespace with the -custom role, Kube Deploy.

- -
acme-datacenter/ops + Kube Deploy + nginx-namespace
-
- -

Deploy NGINX

- -

You’ve configured Docker EE. The ops team can now deploy nginx.

- -
    -
  1. Log on to UCP as “chad” (on the opsteam).
  2. -
  3. Click Kubernetes > Namespaces.
  4. -
  5. Paste the following manifest in the terminal window and click Create.
  6. -
- -
apiVersion: apps/v1beta2  # Use apps/v1beta1 for versions < 1.8.0
-kind: Deployment
-metadata:
-  name: nginx-deployment
-spec:
-  selector:
-    matchLabels:
-      app: nginx
-  replicas: 2
-  template:
-    metadata:
-      labels:
-        app: nginx
-    spec:
-      containers:
-      - name: nginx
-        image: nginx:latest
-        ports:
-        - containerPort: 80
-
- -
    -
  1. Log on to UCP as each user and ensure that: -
      -
    • dba (alex) can’t see nginx-namespace.
    • -
    • dev (bett) can’t see nginx-namespace.
    • -
    -
  2. -
- -

Swarm stack

- -

In this section, we deploy nginx as a Swarm service. See Kubernetes Deployment -for the same exercise with Kubernetes.

- -

Create collection paths

- -

Create a collection for NGINX resources, nested under the /Shared collection:

- -
/
-├── System
-└── Shared
-    └── nginx-collection
-
- -
-

Tip: To drill into a collection, click View Children.

-
- -

Learn to group and isolate cluster resources.

- -

Define roles

- -

You can use the built-in roles or define your own. For this exercise, create a -simple role for the ops team:

- -
    -
  1. Click Roles under User Management.
  2. -
  3. Click Create Role.
  4. -
  5. On the Details tab, name the role Swarm Deploy.
  6. -
  7. On the Operations tab, check all Service Operations.
  8. -
  9. Click Create.
  10. -
- -

Learn to create and configure users and teams.

- -

Grant access

- -

Grant the ops team (and only the ops team) access to nginx-collection with -the built-in role, Swarm Deploy.

- -
acme-datacenter/ops + Swarm Deploy + /Shared/nginx-collection
-
- -

Learn to grant role-access to cluster resources.

- -

Deploy NGINX

- -

You’ve configured Docker EE. The ops team can now deploy an nginx Swarm -service.

- -
    -
  1. Log on to UCP as chad (on the opsteam).
  2. -
  3. Click Swarm > Services.
  4. -
  5. Click Create Stack.
  6. -
  7. On the Details tab, enter: -
      -
    • Name: nginx-service
    • -
    • Image: nginx:latest
    • -
    -
  8. -
  9. On the Collections tab: -
      -
    • Click /Shared in the breadcrumbs.
    • -
    • Select nginx-collection.
    • -
    -
  10. -
  11. Click Create.
  12. -
  13. Log on to UCP as each user and ensure that: -
      -
    • dba (alex) cannot see nginx-collection.
    • -
    • dev (bett) cannot see nginx-collection.
    • -
    -
  14. -
- diff --git a/ee/ucp/authorization/_site/ee-advanced.html b/ee/ucp/authorization/_site/ee-advanced.html deleted file mode 100644 index 44cb8d7cf6..0000000000 --- a/ee/ucp/authorization/_site/ee-advanced.html +++ /dev/null @@ -1,137 +0,0 @@ -

Go through the Docker Enterprise Standard tutorial, -before continuing here with Docker Enterprise Advanced.

- -

In the first tutorial, the fictional company, OrcaBank, designed an architecture -with role-based access control (RBAC) to meet their organization’s security -needs. They assigned multiple grants to fine-tune access to resources across -collection boundaries on a single platform.

- -

In this tutorial, OrcaBank implements new and more stringent security -requirements for production applications:

- -

First, OrcaBank adds staging zone to their deployment model. They will no longer -move developed appliciatons directly in to production. Instead, they will deploy -apps from their dev cluster to staging for testing, and then to production.

- -

Second, production applications are no longer permitted to share any physical -infrastructure with non-production infrastructure. OrcaBank segments the -scheduling and access of applications with Node Access Control.

- -
-

Node Access Control is a feature of Docker EE -Advanced and provides secure multi-tenancy with node-based isolation. Nodes -can be placed in different collections so that resources can be scheduled and -isolated on disparate physical or virtual hardware resources.

-
- -

Team access requirements

- -

OrcaBank still has three application teams, payments, mobile, and db with -varying levels of segmentation between them.

- -

Their RBAC redesign is going to organize their UCP cluster into two top-level -collections, staging and production, which are completely separate security -zones on separate physical infrastructure.

- -

OrcaBank’s four teams now have different needs in production and staging:

- -
    -
  • security should have view-only access to all applications in production (but -not staging).
  • -
  • db should have full access to all database applications and resources in -production (but not staging). See DB Team.
  • -
  • mobile should have full access to their Mobile applications in both -production and staging and limited access to shared db services. See -Mobile Team.
  • -
  • payments should have full access to their Payments applications in both -production and staging and limited access to shared db services.
  • -
- -

Role composition

- -

OrcaBank has decided to replace their custom Ops role with the built-in -Full Control role.

- -
    -
  • View Only (default role) allows users to see but not edit all cluster -resources.
  • -
  • Full Control (default role) allows users complete control of all collections -granted to them. They can also create containers without restriction but -cannot see the containers of other users.
  • -
  • View & Use Networks + Secrets (custom role) enables users to view/connect -to networks and view/use secrets used by db containers, but prevents them -from seeing or impacting the db applications themselves.
  • -
- -

image

- -

Collection architecture

- -

In the previous tutorial, OrcaBank created separate collections for each -application team and nested them all under /Shared.

- -

To meet their new security requirements for production, OrcaBank is redesigning -collections in two ways:

- -
    -
  • Adding collections for both the production and staging zones, and nesting a -set of application collections under each.
  • -
  • Segmenting nodes. Both the production and staging zones will have dedicated -nodes; and in production, each application will be on a dedicated node.
  • -
- -

The collection architecture now has the following tree representation:

- -
/
-├── System
-├── Shared
-├── prod
-│   ├── mobile
-│   ├── payments
-│   └── db
-│       ├── mobile
-│       └── payments
-|
-└── staging
-    ├── mobile
-    └── payments
-
- -

Grant composition

- -

OrcaBank must now diversify their grants further to ensure the proper division -of access.

- -

The payments and mobile application teams will have three grants each–one -for deploying to production, one for deploying to staging, and the same grant to -access shared db networks and secrets.

- -

image

- -

OrcaBank access architecture

- -

The resulting access architecture, designed with Docker EE Advanced, provides -physical segmentation between production and staging using node access control.

- -

Applications are scheduled only on UCP worker nodes in the dedicated application -collection. And applications use shared resources across collection boundaries -to access the databases in the /prod/db collection.

- -

image

- -

DB team

- -

The OrcaBank db team is responsible for deploying and managing the full -lifecycle of the databases that are in production. They have the full set of -operations against all database resources.

- -

image

- -

Mobile team

- -

The mobile team is responsible for deploying their full application stack in -staging. In production they deploy their own applications but use the databases -that are provided by the db team.

- -

image

- diff --git a/ee/ucp/authorization/_site/ee-standard.html b/ee/ucp/authorization/_site/ee-standard.html deleted file mode 100644 index 791cd3271b..0000000000 --- a/ee/ucp/authorization/_site/ee-standard.html +++ /dev/null @@ -1,137 +0,0 @@ -

Collections and grants are strong tools that can be used to control -access and visibility to resources in UCP.

- -

This tutorial describes a fictitious company named OrcaBank that needs to -configure an architecture in UCP with role-based access control (RBAC) for -their application engineering group.

- -

Team access requirements

- -

OrcaBank reorganized their application teams by product with each team providing -shared services as necessary. Developers at OrcaBank do their own DevOps and -deploy and manage the lifecycle of their applications.

- -

OrcaBank has four teams with the following resource needs:

- -
    -
  • security should have view-only access to all applications in the cluster.
  • -
  • db should have full access to all database applications and resources. See -DB Team.
  • -
  • mobile should have full access to their mobile applications and limited -access to shared db services. See Mobile Team.
  • -
  • payments should have full access to their payments applications and limited -access to shared db services.
  • -
- -

Role composition

- -

To assign the proper access, OrcaBank is employing a combination of default -and custom roles:

- -
    -
  • View Only (default role) allows users to see all resources (but not edit or use).
  • -
  • Ops (custom role) allows users to perform all operations against configs, -containers, images, networks, nodes, secrets, services, and volumes.
  • -
  • View & Use Networks + Secrets (custom role) enables users to view/connect to -networks and view/use secrets used by db containers, but prevents them from -seeing or impacting the db applications themselves.
  • -
- -

image

- -

Collection architecture

- -

OrcaBank is also creating collections of resources to mirror their team -structure.

- -

Currently, all OrcaBank applications share the same physical resources, so all -nodes and applications are being configured in collections that nest under the -built-in collection, /Shared.

- -

Other collections are also being created to enable shared db applications.

- -
-

Note: For increased security with node-based isolation, use Docker -Enterprise Advanced.

-
- -
    -
  • /Shared/mobile hosts all Mobile applications and resources.
  • -
  • /Shared/payments hosts all Payments applications and resources.
  • -
  • /Shared/db is a top-level collection for all db resources.
  • -
  • /Shared/db/payments is a collection of db resources for Payments applications.
  • -
  • /Shared/db/mobile is a collection of db resources for Mobile applications.
  • -
- -

The collection architecture has the following tree representation:

- -
/
-├── System
-└── Shared
-    ├── mobile
-    ├── payments
-    └── db
-        ├── mobile
-        └── payments
-
- -

OrcaBank’s Grant composition ensures that their collection -architecture gives the db team access to all db resources and restricts -app teams to shared db resources.

- -

LDAP/AD integration

- -

OrcaBank has standardized on LDAP for centralized authentication to help their -identity team scale across all the platforms they manage.

- -

To implement LDAP authentication in UCP, OrcaBank is using UCP’s native LDAP/AD -integration to map LDAP groups directly to UCP teams. Users can be added to or -removed from UCP teams via LDAP which can be managed centrally by OrcaBank’s -identity team.

- -

The following grant composition shows how LDAP groups are mapped to UCP teams.

- -

Grant composition

- -

OrcaBank is taking advantage of the flexibility in UCP’s grant model by applying -two grants to each application team. One grant allows each team to fully -manage the apps in their own collection, and the second grant gives them the -(limited) access they need to networks and secrets within the db collection.

- -

image

- -

OrcaBank access architecture

- -

OrcaBank’s resulting access architecture shows applications connecting across -collection boundaries. By assigning multiple grants per team, the Mobile and -Payments applications teams can connect to dedicated Database resources through -a secure and controlled interface, leveraging Database networks and secrets.

- -
-

Note: In Docker Enterprise Standard, all resources are deployed across the -same group of UCP worker nodes. Node segmentation is provided in Docker -Enterprise Advanced and discussed in the next tutorial.

-
- -

image

- -

DB team

- -

The db team is responsible for deploying and managing the full lifecycle -of the databases used by the application teams. They can execute the full set of -operations against all database resources.

- -

image

- -

Mobile team

- -

The mobile team is responsible for deploying their own application stack, -minus the database tier that is managed by the db team.

- -

image

- -

Where to go next

- - diff --git a/ee/ucp/authorization/_site/grant-permissions.html b/ee/ucp/authorization/_site/grant-permissions.html deleted file mode 100644 index 17bec8115c..0000000000 --- a/ee/ucp/authorization/_site/grant-permissions.html +++ /dev/null @@ -1,77 +0,0 @@ -

Docker EE administrators can create grants to control how users and -organizations access resource sets.

- -

A grant defines who has how much access to what resources. Each grant is a -1:1:1 mapping of subject, role, and resource set. For example, you can -grant the “Prod Team” “Restricted Control” over services in the “/Production” -collection.

- -

A common workflow for creating grants has four steps:

- -
    -
  • Add and configure subjects (users, teams, and service accounts).
  • -
  • Define custom roles (or use defaults) by adding permitted API operations -per type of resource.
  • -
  • Group cluster resources into Swarm collections or Kubernetes namespaces.
  • -
  • Create grants by combining subject + role + resource set.
  • -
- -

Kubernetes grants

- -

With Kubernetes orchestration, a grant is made up of subject, role, and -namespace.

- -
-

This section assumes that you have created objects for the grant: subject, role, -namespace.

-
- -

To create a Kubernetes grant in UCP:

- -
    -
  1. Click Grants under User Management.
  2. -
  3. Click Create Grant.
  4. -
  5. Click Namespaces under Kubernetes.
  6. -
  7. Find the desired namespace and click Select Namespace.
  8. -
  9. On the Roles tab, select a role.
  10. -
  11. On the Subjects tab, select a user, team, organization, or service -account to authorize.
  12. -
  13. Click Create.
  14. -
- -

Swarm grants

- -

With Swarm orchestration, a grant is made up of subject, role, and -collection.

- -
-

This section assumes that you have created objects to grant: teams/users, -roles (built-in or custom), and a collection.

-
- -

-

- -

To create a grant in UCP:

- -
    -
  1. Click Grants under User Management.
  2. -
  3. Click Create Grant.
  4. -
  5. On the Collections tab, click Collections (for Swarm).
  6. -
  7. Click View Children until you get to the desired collection and Select.
  8. -
  9. On the Roles tab, select a role.
  10. -
  11. On the Subjects tab, select a user, team, or organization to authorize.
  12. -
  13. Click Create.
  14. -
- -
-

By default, all new users are placed in the docker-datacenter organization. -To apply permissions to all Docker EE users, create a grant with the -docker-datacenter org as a subject.

-
- -

Where to go next

- - diff --git a/ee/ucp/authorization/_site/group-resources.html b/ee/ucp/authorization/_site/group-resources.html deleted file mode 100644 index fb9daa9272..0000000000 --- a/ee/ucp/authorization/_site/group-resources.html +++ /dev/null @@ -1,136 +0,0 @@ -

Docker EE enables access control to cluster resources by grouping resources -into resource sets. Combine resource sets with grants -to give users permission to access specific cluster resources.

- -

A resource set can be:

- -
    -
  • A Kubernetes namespace for Kubernetes workloads.
  • -
  • A UCP collection for Swarm workloads.
  • -
- -

Kubernetes namespaces

- -

A namespace allows you to group resources like Pods, Deployments, Services, or -any other Kubernetes-specific resources. You can then enforce RBAC policies -and resource quotas for the namespace.

- -

Each Kubernetes resources can only be in one namespace, and namespaces cannot -be nested inside one another.

- -

Learn more about Kubernetes namespaces.

- -

Swarm collections

- -

A Swarm collection is a directory of cluster resources like nodes, services, -volumes, or other Swarm-specific resources.

- -

- -

Each Swarm resource can only be in one collection at a time, but collections -can be nested inside one another, to create hierarchies.

- -

Nested collections

- -

You can nest collections inside one another. If a user is granted permissions -for one collection, they’ll have permissions for its child collections, -pretty much like a directory structure..

- -

For a child collection, or for a user who belongs to more than one team, the -system concatenates permissions from multiple roles into an “effective role” for -the user, which specifies the operations that are allowed against the target.

- -

Built-in collections

- -

Docker EE provides a number of built-in collections.

- -

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Default collectionDescription
/Path to all resources in the Swarm cluster. Resources not in a collection are put here.
/SystemPath to UCP managers, DTR nodes, and UCP/DTR system services. By default, only admins have access, but this is configurable.
/SharedDefault path to all worker nodes for scheduling. In Docker EE Standard, all worker nodes are located here. In Docker EE Advanced, worker nodes can be moved and isolated.
/Shared/Private/Path to a user’s private collection.
/Shared/LegacyPath to the access control labels of legacy versions (UCP 2.1 and lower).
- -

Default collections

- -

Each user has a default collection which can be changed in UCP preferences.

- -

Users can’t deploy a resource without a collection. When a user deploys a -resource without an access label, Docker EE automatically places the resource in -the user’s default collection. Learn how to add labels to nodes.

- -

With Docker Compose, the system applies default collection labels across all -resources in the stack unless com.docker.ucp.access.label has been explicitly -set.

- -
-

Default collections and collection labels

- -

Default collections are good for users who work only on a well-defined slice of -the system, as well as users who deploy stacks and don’t want to edit the -contents of their compose files. A user with more versatile roles in the -system, such as an administrator, might find it better to set custom labels for -each resource.

-
- -

Collections and labels

- -

Resources are marked as being in a collection by using labels. Some resource -types don’t have editable labels, so you can’t move them across collections.

- -
-

Can edit labels: services, nodes, secrets, and configs -Cannot edit labels: containers, networks, and volumes

-
- -

For editable resources, you can change the com.docker.ucp.access.label to move -resources to different collections. For example, you may need deploy resources -to a collection other than your default collection.

- -

The system uses the additional labels, com.docker.ucp.collection.*, to enable -efficient resource lookups. By default, nodes have the -com.docker.ucp.collection.root, com.docker.ucp.collection.shared, and -com.docker.ucp.collection.swarm labels set to true. UCP -automatically controls these labels, and you don’t need to manage them.

- -

Collections get generic default names, but you can give them meaningful names, -like “Dev”, “Test”, and “Prod”.

- -

A stack is a group of resources identified by a label. You can place the -stack’s resources in multiple collections. Resources are placed in the user’s -default collection unless you specify an explicit com.docker.ucp.access.label -within the stack/compose file.

- -

Where to go next

- - diff --git a/ee/ucp/authorization/_site/index.html b/ee/ucp/authorization/_site/index.html deleted file mode 100644 index 8503976690..0000000000 --- a/ee/ucp/authorization/_site/index.html +++ /dev/null @@ -1,110 +0,0 @@ -

Docker Universal Control Plane (UCP), -the UI for Docker EE, lets you -authorize users to view, edit, and use cluster resources by granting role-based -permissions against resource sets.

- -

To authorize access to cluster resources across your organization, UCP -administrators might take the following high-level steps:

- -
    -
  • Add and configure subjects (users, teams, and service accounts).
  • -
  • Define custom roles (or use defaults) by adding permitted operations per -type of resource.
  • -
  • Group cluster resources into resource sets of Swarm collections or -Kubernetes namespaces.
  • -
  • Create grants by combining subject + role + resource set.
  • -
- -

For an example, see Deploy stateless app with RBAC.

- -

Subjects

- -

A subject represents a user, team, organization, or service account. A subject -can be granted a role that defines permitted operations against one or more -resource sets.

- -
    -
  • User: A person authenticated by the authentication backend. Users can -belong to one or more teams and one or more organizations.
  • -
  • Team: A group of users that share permissions defined at the team level. A -team can be in one organization only.
  • -
  • Organization: A group of teams that share a specific set of permissions, -defined by the roles of the organization.
  • -
  • Service account: A Kubernetes object that enables a workload to access -cluster resources that are assigned to a namespace.
  • -
- -

Learn to create and configure users and teams.

- -

Roles

- -

Roles define what operations can be done by whom. A role is a set of permitted -operations against a type of resource, like a container or volume, that’s -assigned to a user or team with a grant.

- -

For example, the built-in role, Restricted Control, includes permission to -view and schedule nodes but not to update nodes. A custom DBA role might -include permissions to r-w-x volumes and secrets.

- -

Most organizations use multiple roles to fine-tune the appropriate access. A -given team or user may have different roles provided to them depending on what -resource they are accessing.

- -

Learn to define roles with authorized API operations.

- -

Resource sets

- -

To control user access, cluster resources are grouped into Docker Swarm -collections or Kubernetes namespaces.

- -
    -
  • -

    Swarm collections: A collection has a directory-like structure that holds -Swarm resources. You can create collections in UCP by defining a directory path -and moving resources into it. Also, you can create the path in UCP and use -labels in your YAML file to assign application resources to the path. -Resource types that users can access in a Swarm collection include containers, -networks, nodes, services, secrets, and volumes.

    -
  • -
  • -

    Kubernetes namespaces: A -namespace -is a logical area for a Kubernetes cluster. Kubernetes comes with a default -namespace for your cluster objects, plus two more namespaces for system and -public resources. You can create custom namespaces, but unlike Swarm -collections, namespaces can’t be nested. Resource types that users can -access in a Kubernetes namespace include pods, deployments, network policies, -nodes, services, secrets, and many more.

    -
  • -
- -

Together, collections and namespaces are named resource sets. Learn to -group and isolate cluster resources.

- -

Grants

- -

A grant is made up of subject, role, and resource set.

- -

Grants define which users can access what resources in what way. Grants are -effectively Access Control Lists (ACLs), and when grouped together, they -provide comprehensive access policies for an entire organization.

- -

Only an administrator can manage grants, subjects, roles, and access to -resources.

- -
-

About administrators

- -

An administrator is a user who creates subjects, groups resources by moving them -into collections or namespaces, defines roles by selecting allowable operations, -and applies grants to users and teams.

-
- -

Where to go next

- - diff --git a/ee/ucp/authorization/_site/isolate-nodes.html b/ee/ucp/authorization/_site/isolate-nodes.html deleted file mode 100644 index c307ebc382..0000000000 --- a/ee/ucp/authorization/_site/isolate-nodes.html +++ /dev/null @@ -1,315 +0,0 @@ -

With Docker EE Advanced, you can enable physical isolation of resources -by organizing nodes into collections and granting Scheduler access for -different users. To control access to nodes, move them to dedicated collections -where you can grant access to specific users, teams, and organizations.

- -

- -

In this example, a team gets access to a node collection and a resource -collection, and UCP access control ensures that the team members can’t view -or use swarm resources that aren’t in their collection.

- -

You need a Docker EE Advanced license and at least two worker nodes to -complete this example.

- -
    -
  1. Create an Ops team and assign a user to it.
  2. -
  3. Create a /Prod collection for the team’s node.
  4. -
  5. Assign a worker node to the /Prod collection.
  6. -
  7. Grant the Ops teams access to its collection.
  8. -
- -

- -

Create a team

- -

In the web UI, navigate to the Organizations & Teams page to create a team -named “Ops” in your organization. Add a user who isn’t a UCP administrator to -the team. -Learn to create and manage teams.

- -

Create a node collection and a resource collection

- -

In this example, the Ops team uses an assigned group of nodes, which it -accesses through a collection. Also, the team has a separate collection -for its resources.

- -

Create two collections: one for the team’s worker nodes and another for the -team’s resources.

- -
    -
  1. Navigate to the Collections page to view all of the resource -collections in the swarm.
  2. -
  3. Click Create collection and name the new collection “Prod”.
  4. -
  5. Click Create to create the collection.
  6. -
  7. Find Prod in the list, and click View children.
  8. -
  9. Click Create collection, and name the child collection -“Webserver”. This creates a sub-collection for access control.
  10. -
- -

You’ve created two new collections. The /Prod collection is for the worker -nodes, and the /Prod/Webserver sub-collection is for access control to -an application that you’ll deploy on the corresponding worker nodes.

- -

Move a worker node to a collection

- -

By default, worker nodes are located in the /Shared collection. -Worker nodes that are running DTR are assigned to the /System collection. -To control access to the team’s nodes, move them to a dedicated collection.

- -

Move a worker node by changing the value of its access label key, -com.docker.ucp.access.label, to a different collection.

- -
    -
  1. Navigate to the Nodes page to view all of the nodes in the swarm.
  2. -
  3. Click a worker node, and in the details pane, find its Collection. -If it’s in the /System collection, click another worker node, -because you can’t move nodes that are in the /System collection. By -default, worker nodes are assigned to the /Shared collection.
  4. -
  5. When you’ve found an available node, in the details pane, click -Configure.
  6. -
  7. In the Labels section, find com.docker.ucp.access.label and change -its value from /Shared to /Prod.
  8. -
  9. Click Save to move the node to the /Prod collection.
  10. -
- -
-

Docker EE Advanced required

- -

If you don’t have a Docker EE Advanced license, you’ll get the following -error message when you try to change the access label: -Nodes must be in either the shared or system collection without an advanced license. -Get a Docker EE Advanced license.

-
- -

- -

Grant access for a team

- -

You need two grants to control access to nodes and container resources:

- -
    -
  • Grant the Ops team the Restricted Control role for the /Prod/Webserver -resources.
  • -
  • Grant the Ops team the Scheduler role against the nodes in the /Prod -collection.
  • -
- -

Create two grants for team access to the two collections:

- -
    -
  1. Navigate to the Grants page and click Create Grant.
  2. -
  3. In the left pane, click Resource Sets, and in the Swarm collection, -click View Children.
  4. -
  5. In the Prod collection, click View Children.
  6. -
  7. In the Webserver collection, click Select Collection.
  8. -
  9. In the left pane, click Roles, and select Restricted Control -in the dropdown.
  10. -
  11. Click Subjects, and under Select subject type, click Organizations.
  12. -
  13. Select your organization, and in the Team dropdown, select Ops.
  14. -
  15. Click Create to grant the Ops team access to the /Prod/Webserver -collection.
  16. -
- -

The same steps apply for the nodes in the /Prod collection.

- -
    -
  1. Navigate to the Grants page and click Create Grant.
  2. -
  3. In the left pane, click Collections, and in the Swarm collection, -click View Children.
  4. -
  5. In the Prod collection, click Select Collection.
  6. -
  7. In the left pane, click Roles, and in the dropdown, select Scheduler.
  8. -
  9. In the left pane, click Subjects, and under Select subject type, click -Organizations.
  10. -
  11. Select your organization, and in the Team dropdown, select Ops .
  12. -
  13. Click Create to grant the Ops team Scheduler access to the nodes in the -/Prod collection.
  14. -
- -

- -

The cluster is set up for node isolation. Users with access to nodes in the -/Prod collection can deploy Swarm services -and Kubernetes apps, and their workloads -won’t be scheduled on nodes that aren’t in the collection.

- -

Deploy a Swarm service as a team member

- -

When a user deploys a Swarm service, UCP assigns its resources to the user’s -default collection.

- -

From the target collection of a resource, UCP walks up the ancestor collections -until it finds the highest ancestor that the user has Scheduler access to. -Tasks are scheduled on any nodes in the tree below this ancestor. In this example, -UCP assigns the user’s service to the /Prod/Webserver collection and schedules -tasks on nodes in the /Prod collection.

- -

As a user on the Ops team, set your default collection to /Prod/Webserver.

- -
    -
  1. Log in as a user on the Ops team.
  2. -
  3. Navigate to the Collections page, and in the Prod collection, -click View Children.
  4. -
  5. In the Webserver collection, click the More Options icon and -select Set to default.
  6. -
- -

Deploy a service automatically to worker nodes in the /Prod collection. -All resources are deployed under the user’s default collection, -/Prod/Webserver, and the containers are scheduled only on the nodes under -/Prod.

- -
    -
  1. Navigate to the Services page, and click Create Service.
  2. -
  3. Name the service “NGINX”, use the “nginx:latest” image, and click -Create.
  4. -
  5. When the nginx service status is green, click the service. In the -details view, click Inspect Resource, and in the dropdown, select -Containers.
  6. -
  7. -

    Click the NGINX container, and in the details pane, confirm that its -Collection is /Prod/Webserver.

    - -

    -
  8. -
  9. Click Inspect Resource, and in the dropdown, select Nodes.
  10. -
  11. -

    Click the node, and in the details pane, confirm that its Collection -is /Prod.

    - -

    -
  12. -
- -

Alternative: Use a grant instead of the default collection

- -

Another approach is to use a grant instead of changing the user’s default -collection. An administrator can create a grant for a role that has the -Service Create permission against the /Prod/Webserver collection or a child -collection. In this case, the user sets the value of the service’s access label, -com.docker.ucp.access.label, to the new collection or one of its children -that has a Service Create grant for the user.

- -

Deploy a Kubernetes application

- -

Starting in Docker Enterprise Edition 2.0, you can deploy a Kubernetes workload -to worker nodes, based on a Kubernetes namespace.

- -
    -
  1. Convert a node to use the Kubernetes orchestrator.
  2. -
  3. Create a Kubernetes namespace.
  4. -
  5. Create a grant for the namespace.
  6. -
  7. Link the namespace to a node collection.
  8. -
  9. Deploy a Kubernetes workload.
  10. -
- -

Convert a node to Kubernetes

- -

To deploy Kubernetes workloads, an administrator must convert a worker node to -use the Kubernetes orchestrator. -Learn how to set the orchestrator type -for your nodes in the /Prod collection.

- -

Create a Kubernetes namespace

- -

An administrator must create a Kubernetes namespace to enable node isolation -for Kubernetes workloads.

- -
    -
  1. In the left pane, click Kubernetes.
  2. -
  3. Click Create to open the Create Kubernetes Object page.
  4. -
  5. -

    In the Object YAML editor, paste the following YAML.

    - -
    apiVersion: v1
    -kind: Namespace
    -metadata:
    -  Name: ops-nodes
    -
    -
  6. -
  7. Click Create to create the ops-nodes namespace.
  8. -
- -

Grant access to the Kubernetes namespace

- -

Create a grant to the ops-nodes namespace for the Ops team by following the -same steps that you used to grant access to the /Prod collection, only this -time, on the Create Grant page, pick Namespaces, instead of -Collections.

- -

- -

Select the ops-nodes namespace, and create a Full Control grant for the -Ops team.

- -

- - - -

The last step is to link the Kubernetes namespace the /Prod collection.

- -
    -
  1. Navigate to the Namespaces page, and find the ops-nodes namespace -in the list.
  2. -
  3. -

    Click the More options icon and select Link nodes in collection.

    - -

    -
  4. -
  5. In the Choose collection section, click View children on the -Swarm collection to navigate to the Prod collection.
  6. -
  7. On the Prod collection, click Select collection.
  8. -
  9. -

    Click Confirm to link the namespace to the collection.

    - -

    -
  10. -
- -

### Deploy a Kubernetes workload to the node collection

1. Log in as a non-admin user who's on the Ops team.
2. In the left pane, open the **Kubernetes** section.
3. Confirm that **ops-nodes** is displayed under **Namespaces**.
4. Click **Create**, and in the **Object YAML** editor, paste the following
   YAML definition for an NGINX server:

    ```yaml
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: nginx
    spec:
      replicas: 1
      selector:
        app: nginx
      template:
        metadata:
          name: nginx
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx
            ports:
            - containerPort: 80
    ```

5. Click **Create** to deploy the workload.
6. In the left pane, click **Pods** and confirm that the workload is running
   on pods in the `ops-nodes` namespace.
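You can also verify the deployment from the CLI with a client bundle loaded;
a quick sketch:

```bash
# List the pods created by the nginx ReplicationController.
kubectl get pods --namespace ops-nodes
# Show which node each pod landed on, to confirm node isolation.
kubectl get pods --namespace ops-nodes -o wide
```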

## Where to go next

diff --git a/ee/ucp/authorization/_site/isolate-volumes.html b/ee/ucp/authorization/_site/isolate-volumes.html
deleted file mode 100644
index ddb22fc23a..0000000000
--- a/ee/ucp/authorization/_site/isolate-volumes.html
+++ /dev/null
@@ -1,100 +0,0 @@

In this example, two teams are granted access to volumes in two different
resource collections. UCP access control prevents the teams from viewing and
accessing each other's volumes, even though they may be located on the same
nodes.

1. Create two teams.
2. Create two collections, one for each team.
3. Create grants to manage access to the collections.
4. Team members create volumes that are specific to their team.

## Create two teams

Navigate to the **Organizations & Teams** page to create two teams in the
"engineering" organization, named "Dev" and "Prod". Add a user who's not a
UCP administrator to the Dev team, and add another non-admin user to the
Prod team. Learn how to create and manage teams.

## Create resource collections

In this example, the Dev and Prod teams use two different volumes, which
they access through two corresponding resource collections. The collections
are placed under the `/Shared` collection.

1. In the left pane, click **Collections** to show all of the resource
   collections in the swarm.
2. Find the **/Shared** collection and click **View children**.
3. Click **Create collection** and name the new collection "dev-volumes".
4. Click **Create** to create the collection.
5. Click **Create collection** again, name the new collection
   "prod-volumes", and click **Create**.

## Create grants for controlling access to the new volumes

In this example, the Dev team gets access to its volumes from a grant that
associates the team with the `/Shared/dev-volumes` collection, and the Prod
team gets access to its volumes from another grant that associates the team
with the `/Shared/prod-volumes` collection.

1. Navigate to the **Grants** page and click **Create Grant**.
2. In the left pane, click **Collections**, and in the **Swarm** collection,
   click **View Children**.
3. In the **Shared** collection, click **View Children**.
4. In the list, find **/Shared/dev-volumes** and click **Select Collection**.
5. Click **Roles**, and in the dropdown, select **Restricted Control**.
6. Click **Subjects**, and under **Select subject type**, click
   **Organizations**. In the dropdown, pick the **engineering**
   organization, and in the **Team** dropdown, select **Dev**.
7. Click **Create** to grant permissions to the Dev team.
8. Click **Create Grant** and repeat the previous steps for the
   `/Shared/prod-volumes` collection and the Prod team.

With the collections and grants in place, users can sign in and create
volumes in their assigned collections.

## Create a volume as a team member

Team members have permission to create volumes in their assigned collection.

1. Log in as one of the users on the Dev team.
2. Navigate to the **Volumes** page to view all of the volumes in the swarm
   that the user can access.
3. Click **Create volume** and name the new volume "dev-data".
4. In the left pane, click **Collections**. The default collection appears.
   At the top of the page, click **Shared**, find the **dev-volumes**
   collection in the list, and click **Select Collection**.
5. Click **Create** to add the "dev-data" volume to the collection.
6. Log in as one of the users on the Prod team, and repeat the previous
   steps to create a "prod-data" volume assigned to the
   `/Shared/prod-volumes` collection.
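The same flow works from the CLI with a client bundle loaded; a sketch,
assuming the UCP access label is used to place the volume in the team's
collection at creation time:

```bash
# Create the Dev team's volume directly in its collection.
docker volume create \
  --label com.docker.ucp.access.label="/Shared/dev-volumes" \
  dev-data
```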

Now you can see role-based access control in action for volumes. The user on
the Prod team can't see the Dev team's volumes, and if you log in again as a
user on the Dev team, you won't see the Prod team's volumes.

Sign in with a UCP administrator account, and you see all of the volumes
created by the Dev and Prod users.

## Where to go next

diff --git a/ee/ucp/authorization/_site/migrate-kubernetes-roles.html b/ee/ucp/authorization/_site/migrate-kubernetes-roles.html
deleted file mode 100644
index c64b429c29..0000000000
--- a/ee/ucp/authorization/_site/migrate-kubernetes-roles.html
+++ /dev/null
@@ -1,122 +0,0 @@

With Docker Enterprise Edition, you can create roles and grants that
implement the permissions that are defined in your Kubernetes apps. Learn
about RBAC authorization in Kubernetes.

Docker EE has its own implementation of role-based access control, so you
can't use Kubernetes RBAC objects directly. Instead, you create UCP roles
and grants that correspond with the role objects and bindings in your
Kubernetes app.

- Kubernetes `Role` and `ClusterRole` objects become UCP roles.
- Kubernetes `RoleBinding` and `ClusterRoleBinding` objects become UCP grants.

Learn about UCP roles and grants.

> **Kubernetes yaml in UCP**
>
> Docker EE has its own RBAC system that's distinct from the Kubernetes
> system, so you can't create any objects that are returned by the
> `/apis/rbac.authorization.k8s.io` endpoints. If the yaml for your
> Kubernetes app contains definitions for `Role`, `ClusterRole`,
> `RoleBinding`, or `ClusterRoleBinding` objects, UCP returns an error.

## Migrate a Kubernetes Role to a custom UCP role

If you have `Role` and `ClusterRole` objects defined in the yaml for your
Kubernetes app, you can realize the same authorization model by creating
custom roles by using the UCP web UI.

The following Kubernetes yaml defines a `pod-reader` role, which gives users
access to the read-only pods resource APIs: `get`, `watch`, and `list`.

```yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
```

Create a corresponding custom role by using the **Create Role** page in the
UCP web UI.

1. Log in to the UCP web UI with an administrator account.
2. Click **Roles** under **User Management**.
3. Click **Create Role**.
4. In the **Role Details** section, name the role "pod-reader".
5. In the left pane, click **Operations**.
6. Scroll to the **Kubernetes pod operations** section and expand the
   **All Kubernetes Pod operations** dropdown.
7. Select the **Pod Get**, **Pod List**, and **Pod Watch** operations.
8. Click **Create**.

The `pod-reader` role is ready to use in grants that control access to
cluster resources.

## Migrate a Kubernetes RoleBinding to a UCP grant

If your Kubernetes app defines `RoleBinding` or `ClusterRoleBinding` objects
for specific users, create corresponding grants by using the UCP web UI.

The following Kubernetes yaml defines a `RoleBinding` that grants user
"jane" read-only access to pods in the `default` namespace.

```yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: jane
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Create a corresponding grant by using the **Create Grant** page in the UCP
web UI.

1. Create a non-admin user named "jane". Learn to create users and teams.
2. Click **Grants** under **User Management**.
3. Click **Create Grant**.
4. In the **Type** section, click **Namespaces** and ensure that **default**
   is selected.
5. In the left pane, click **Roles**, and in the **Role** dropdown, select
   **pod-reader**.
6. In the left pane, click **Subjects**, and click **All Users**.
7. In the **User** dropdown, select **jane**.
8. Click **Create**.

User "jane" has access to inspect pods in the `default` namespace.
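As a quick sanity check, "jane" could probe her effective permissions from
the CLI with her own client bundle loaded — a sketch, assuming UCP's RBAC
layer answers `kubectl auth can-i` (SelfSubjectAccessReview) queries:

```bash
# Expect "yes" for verbs covered by the pod-reader role...
kubectl auth can-i list pods --namespace default
# ...and "no" for verbs outside it.
kubectl auth can-i delete pods --namespace default
```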

## Kubernetes limitations

There are a few limitations that you should be aware of when creating
Kubernetes workloads:

- Docker EE has its own RBAC system, so it's not possible to create
  `ClusterRole` objects, `ClusterRoleBinding` objects, or any other object
  that is created by using the `/apis/rbac.authorization.k8s.io` endpoints.
- To make sure your cluster is secure, only users and service accounts that
  have been granted **Full Control** of all Kubernetes namespaces can deploy
  pods with privileged options. This includes `PodSpec.hostIPC`,
  `PodSpec.hostNetwork`, `PodSpec.hostPID`,
  `SecurityContext.allowPrivilegeEscalation`,
  `SecurityContext.capabilities`, `SecurityContext.privileged`, and
  `Volume.hostPath`.

diff --git a/ee/ucp/authorization/_site/pull-images.html b/ee/ucp/authorization/_site/pull-images.html
deleted file mode 100644
index af82bb922a..0000000000
--- a/ee/ucp/authorization/_site/pull-images.html
+++ /dev/null
@@ -1,30 +0,0 @@

By default, only admin users can pull images into a cluster managed by UCP.

Images are a shared resource, so they are always in the `swarm` collection.
To allow users to pull images, you need to grant them the image load
permission for the `swarm` collection.

As an admin user, go to the UCP web UI, navigate to the **Roles** page, and
create a new role named "Pull images".

Then go to the **Grants** page, and create a new grant with:

- **Subject**: the user you want to be able to pull images.
- **Roles**: the "Pull images" role you created.
- **Resource set**: the `swarm` collection.

Once you click **Create**, the user is able to pull images from the UCP web
UI or the CLI.
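From the CLI, the check is straightforward once the user's UCP client bundle
is loaded — a minimal sketch, with the image name purely illustrative:

```bash
# Pulls the image into the UCP-managed cluster under the user's identity.
docker pull nginx:latest
```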

## Where to go next

diff --git a/ee/ucp/authorization/_site/reset-user-password.html b/ee/ucp/authorization/_site/reset-user-password.html
deleted file mode 100644
index 68cf08e3ae..0000000000
--- a/ee/ucp/authorization/_site/reset-user-password.html
+++ /dev/null
@@ -1,24 +0,0 @@

Docker EE administrators can reset user passwords managed in UCP:

1. Log in to UCP with administrator credentials.
2. Click **Users** under **User Management**.
3. Select the user whose password you want to change.
4. Select **Configure** and select **Security**.
5. Enter the new password, confirm it, and click **Update Password**.

User passwords managed with an LDAP service must be changed on the LDAP
server.

## Change administrator passwords

Administrators who need a password change can ask another administrator for
help, or use SSH to log in to a manager node managed by Docker EE and run:

```bash
# Look up the image and first argument of the running ucp-auth-api service,
# then run its "passwd" command interactively to set a new password.
docker run --net=host -v ucp-auth-api-certs:/tls -it \
  "$(docker inspect --format '{{ .Spec.TaskTemplate.ContainerSpec.Image }}' ucp-auth-api)" \
  "$(docker inspect --format '{{ index .Spec.TaskTemplate.ContainerSpec.Args 0 }}' ucp-auth-api)" \
  passwd -i
```
- From 968300a50630bf6dbdef2f5d968b87a4a007e75a Mon Sep 17 00:00:00 2001 From: dnadares Date: Thu, 21 Mar 2019 10:36:03 -0300 Subject: [PATCH 07/13] Update index.md Open parenthesis in the wrong place --- get-started/index.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/get-started/index.md b/get-started/index.md index e172e7d4cd..2e58ef4f14 100644 --- a/get-started/index.md +++ b/get-started/index.md @@ -135,7 +135,7 @@ is available in Docker version 17.12.0-ce, build c97c6d6 ``` -2. Run `docker info` or (`docker version` without `--`) to view even more details about your docker installation: +2. Run `docker info` (or `docker version` without `--`) to view even more details about your docker installation: ```shell docker info From e6983fd73e405121aa0071240291764e0631a63b Mon Sep 17 00:00:00 2001 From: Maria Bermudez Date: Thu, 21 Mar 2019 10:31:10 -0700 Subject: [PATCH 08/13] Revert "Update index.md" --- docker-hub/index.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docker-hub/index.md b/docker-hub/index.md index 531ef56221..ce1d4491da 100644 --- a/docker-hub/index.md +++ b/docker-hub/index.md @@ -1,7 +1,7 @@ --- description: Docker Hub Quickstart keywords: Docker, docker, registry, accounts, plans, Dockerfile, Docker Hub, docs, documentation, accounts, organizations, repositories, groups, teams -title: Docker Hub quickstart +title: Docker Hub Quickstart redirect_from: - /docker-hub/overview/ - /apidocs/docker-cloud/ @@ -141,7 +141,7 @@ Congratulations! You've successfully: - Built a Docker container image on your computer - Pushed it to Docker Hub -### Next steps +### Next Steps - Create an [Organization](orgs.md) to use Docker Hub with your team. - Automatically build container images from code through [Builds](builds/index.md). From a62197261ff6ec0c9d8fff4055b14e7f7ea7f2fe Mon Sep 17 00:00:00 2001 From: Shubh <36189251+zeniri@users.noreply.github.com> Date: Thu, 21 Mar 2019 23:26:47 +0530 Subject: [PATCH 09/13] Revert "Update overview.md" --- machine/overview.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/machine/overview.md b/machine/overview.md index 8c50c4919e..9606519418 100644 --- a/machine/overview.md +++ b/machine/overview.md @@ -1,7 +1,7 @@ --- description: Introduction and Overview of Machine keywords: docker, machine, amazonec2, azure, digitalocean, google, openstack, rackspace, softlayer, virtualbox, vmwarefusion, vmwarevcloudair, vmwarevsphere, exoscale -title: Docker Machine overview +title: Docker Machine Overview --- You can use Docker Machine to: From 7720a2f9d8c86a38d8b5134cbd9704837aabb145 Mon Sep 17 00:00:00 2001 From: Shubh <36189251+zeniri@users.noreply.github.com> Date: Fri, 22 Mar 2019 00:01:26 +0530 Subject: [PATCH 10/13] Update index.md [Sample applications table] Providing entry to dockerize .Net Core application from the left navigation panel into the "Sample applications" table --- samples/index.md | 1 + 1 file changed, 1 insertion(+) diff --git a/samples/index.md b/samples/index.md index e61e83be3a..37f9044ad7 100644 --- a/samples/index.md +++ b/samples/index.md @@ -39,6 +39,7 @@ Run popular software using Docker. | Sample | Description | | ------ | ----------- | | [apt-cacher-ng](/engine/examples/apt-cacher-ng) | Run a Dockerized apt-cacher-ng instance. | +| [.Net Core application](/engine/examples/dotnetcore) | Run a Dockerized ASP.NET Core application. | | [ASP.NET Core + SQL Server on Linux](/compose/aspnet-mssql-compose) | Run a Dockerized ASP.NET Core + SQL Server environment. 
| | [CouchDB](/engine/examples/couchdb_data_volumes) | Run a Dockerized CouchDB instance. | | [Django + PostgreSQL](/compose/django/) | Run a Dockerized Django + PostgreSQL environment. | From 0e6a7a16b9891228258c45fd81b9a9611a2dc486 Mon Sep 17 00:00:00 2001 From: Maria Bermudez Date: Thu, 21 Mar 2019 11:45:56 -0700 Subject: [PATCH 11/13] Minor fixes --- ee/ucp/kubernetes/deploy-with-compose.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/ee/ucp/kubernetes/deploy-with-compose.md b/ee/ucp/kubernetes/deploy-with-compose.md index 39838560fb..e75aa8e656 100644 --- a/ee/ucp/kubernetes/deploy-with-compose.md +++ b/ee/ucp/kubernetes/deploy-with-compose.md @@ -45,7 +45,7 @@ services: image: dockersamples/k8s-wordsmith-db ``` -1. Log in to `https://`, and navigate to **Shared Resources > Stacks**. +1. In your browser, log in to `https://`. Navigate to **Shared Resources > Stacks**. 2. Click **Create Stack** to open up the "Create Application" page. 3. Under "Configure Application", type "lab-words" for the application name. 4. Select **Kubernetes Workloads** for **Orchestrator Mode**. @@ -62,8 +62,8 @@ After a few minutes have passed, all of the pods in the `lab-words` deployment are running. 1. Navigate to **Kubernetes > Pods**. Confirm that there are seven pods and - that their status is **Running**. If any have a status of **Pending**, - wait until they're all running. + that their status is **Running**. If any pod has a status of **Pending**, + wait until every pod is running. 2. Next, select **Kubernetes > Load balancers** and find the **web-published** service. 4. Click the **web-published** service, and scroll down to the **Ports** section. @@ -71,6 +71,6 @@ are running. ![](../images/deploy-compose-kubernetes-2.png){: .with-border} -6. In the browser, enter your cloud instance public IP Address and append `:NodePort` from the previous step. For example, to find the public IP address of an EC2 instance, refer to [Amazon EC2 Instance IP Addressing]((https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/using-instance-addressing.html#concepts-public-addresses). The app is displayed. +6. In a new tab or window, enter your cloud instance public IP Address and append `:` from the previous step. For example, to find the public IP address of an EC2 instance, refer to [Amazon EC2 Instance IP Addressing](https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/using-instance-addressing.html#concepts-public-addresses). The app is displayed. ![](../images/deploy-compose-kubernetes-3.png){: .with-border} From f9f993a73fc06c10ac5b9d249f31ab784efae700 Mon Sep 17 00:00:00 2001 From: Maria Bermudez Date: Thu, 21 Mar 2019 16:24:24 -0700 Subject: [PATCH 12/13] Capitalize product name --- get-started/index.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/get-started/index.md b/get-started/index.md index 2e58ef4f14..736a5a4226 100644 --- a/get-started/index.md +++ b/get-started/index.md @@ -135,7 +135,7 @@ is available in Docker version 17.12.0-ce, build c97c6d6 ``` -2. Run `docker info` (or `docker version` without `--`) to view even more details about your docker installation: +2. 
Run `docker info` (or `docker version` without `--`) to view even more details about your Docker installation: ```shell docker info From 127331a0858687444c2e425c723dcdb6166a459b Mon Sep 17 00:00:00 2001 From: paigehargrave Date: Thu, 21 Mar 2019 19:25:39 -0400 Subject: [PATCH 13/13] Update install-on-azure.md --- ee/ucp/admin/install/install-on-azure.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/ee/ucp/admin/install/install-on-azure.md b/ee/ucp/admin/install/install-on-azure.md index 94ac0a0784..badd4874d5 100644 --- a/ee/ucp/admin/install/install-on-azure.md +++ b/ee/ucp/admin/install/install-on-azure.md @@ -42,10 +42,10 @@ to successfully deploy Docker UCP on Azure - All UCP Nodes (Managers and Workers) need to be deployed into the same Azure Resource Group. The Azure Networking (Vnets, Subnets, Security Groups) components could be deployed in a second Azure Resource Group. -- The Azure Vnet and Subnet needs to be appropriately sized for your - environment, addresses from this pool will be consumed by Kubernetes Pods, see +- The Azure Vnet and Subnet must be appropriately sized for your + environment, and addresses from this pool are consumed by Kubernetes Pods. For more information, see [Considerations for IPAM - Configuration](#considerations-for-ipam-configuration) for more information. + Configuration](#considerations-for-ipam-configuration). - All UCP Nodes (Managers and Workers) need to be attached to the same Azure Subnet. - All UCP (Managers and Workers) need to be tagged in Azure with the @@ -223,4 +223,4 @@ docker container run --rm -it \ --pod-cidr \ --cloud-provider Azure \ --interactive -``` \ No newline at end of file +```