Tighten OrcaBank tutorial; Update image (#294)

This commit is contained in:
Gwendolynne Barr 2017-11-16 08:35:05 -08:00 committed by Jim Galasyn
parent 9a3c1f26d8
commit fcd96e73dc
3 changed files with 160 additions and 133 deletions


This topic is under construction.
{% elsif include.version=="ucp-2.2" %}
[Collections and grants](index.md) are strong tools that can be used to control
access and visibility to resources in UCP.
The previous tutorial, [Access Control Design with Docker EE
Standard](access-control-design-ee-standard.md), explains how the fictional
company, OrcaBank, designed an architecture with role-based access control
(RBAC) to fit the specific security needs of their organization. OrcaBank used
the ability to assign multiple grants to control access to resources across
collection boundaries.
Go through the [first OrcaBank tutorial](access-control-design-ee-standard.md)
before continuing.
In this tutorial, OrcaBank implements their new stringent security requirements
for production applications:
First, OrcaBank adds a staging zone to their deployment model. They no longer
move developed applications directly into production. Instead, they deploy apps
from their dev cluster to staging for testing, and then to production.
Second, production applications are no longer permitted to share any physical
infrastructure with non-production infrastructure. Here, OrcaBank segments the
scheduling and access of applications with [Node Access Control](access-control-node.md).
> [Node Access Control](access-control-node.md) is a feature of Docker EE
> Advanced and provides secure multi-tenancy with node-based isolation. Nodes
> can be placed in different collections so that resources can be scheduled and
> isolated on disparate physical or virtual hardware resources.
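The node-based isolation described above can be pictured with a small model. This is only an illustrative sketch with made-up names, not UCP's actual scheduler or API: a workload may be placed only on nodes whose collection lies on the workload's collection path.

```python
# Toy model of node-based isolation: a workload is schedulable only on
# nodes whose collection is the workload's collection or a parent of it.
# All names here are illustrative, not UCP's actual API.

def is_schedulable(workload_collection: str, node_collection: str) -> bool:
    """True if the node's collection matches the workload's collection
    or is an ancestor of it on the collection path."""
    return (workload_collection == node_collection
            or workload_collection.startswith(node_collection.rstrip("/") + "/"))

nodes = {
    "node-1": "/prod/payments",
    "node-2": "/prod/mobile",
    "node-3": "/staging",
}

# A production Payments workload can never land on staging hardware.
eligible = sorted(n for n, c in nodes.items()
                  if is_schedulable("/prod/payments", c))
print(eligible)  # ['node-1']
```

Under this model, production and non-production workloads can never share a node, which is exactly the isolation guarantee OrcaBank's security requirements call for.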
## Team access requirements
As in the [Introductory Multitenancy Tutorial](access-control-design-ee-standard.md),
OrcaBank still has three application teams, `payments`, `mobile`, and `db`, with
varying levels of segmentation between them.
Their RBAC redesign organizes their UCP cluster into two top-level collections,
staging and production, which are completely separate security zones on separate
physical infrastructure.
OrcaBank's four teams now have different needs in production and staging:
- `security` should have view-only access to all applications in production (but
not staging).
- `db` should have full access to all database applications and resources in
production (but not staging). See [DB Team](#db-team).
- `mobile` should have full access to their Mobile applications in both
production and staging and limited access to shared `db` services. See
[Mobile Team](#mobile-team).
- `payments` should have full access to their Payments applications in both
production and staging and limited access to shared `db` services.
## Role composition
OrcaBank has decided to replace their custom `Ops` role with the built-in
`Full Control` role.
- `View Only` (default role) allows users to see but not edit all Swarm
resources.
- `View & Use Networks + Secrets` (custom role) enables users to connect to
networks and use secrets used by `db` containers, but prevents them from
seeing or impacting the `db` applications themselves.
- `Full Control` (default role) allows users to view and edit volumes, networks,
and images. They can also create containers without restriction but cannot see
the containers of other users.
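The three roles can be modeled as permission sets. This is a rough sketch: the operation names are illustrative, not UCP's actual permission vocabulary.

```python
# Illustrative permission sets for the three roles OrcaBank uses.
# Operation names are made up for the sketch, not UCP's real permission list.
ROLES = {
    "View Only": {"view"},
    "View & Use Networks + Secrets": {"use-network", "use-secret"},
    "Full Control": {"view", "create", "edit", "delete"},
}

def allows(role: str, operation: str) -> bool:
    """Check whether a role permits a given operation."""
    return operation in ROLES[role]

print(allows("View Only", "edit"))       # False: security can look but not touch
print(allows("Full Control", "delete"))  # True: full lifecycle management
```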
![image](../images/design-access-control-adv-0.png){: .with-border}
## Collection architecture
In the previous tutorial, OrcaBank created separate collections for each
application team and nested them all under `/Shared`.
To meet their new security requirements for production, OrcaBank is redesigning
collections in two ways:
- Adding collections for both the production and staging zones, and nesting a
set of application collections under each.
- Segmenting nodes. Both the production and staging zones will have dedicated
nodes; and in production, each application will be on a dedicated node.
The collection architecture now has the following tree representation:
```
/
├── System
├── Shared
├── prod
│   ├── db
│   ├── mobile
│   └── payments
└── staging
    ├── mobile
    └── payments
```
## Grant composition
OrcaBank must now diversify their grants further to ensure the proper division
of access.
The payments and mobile application teams will have three grants each: one for
deploying to production, one for deploying to staging, and one for accessing
shared `db` networks and secrets.
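The grant composition above can be sketched as a list of (team, role, collection) triples. The structure and collection paths here are illustrative of UCP's grant model, not its API:

```python
# Sketch of grants as (team, role, collection) triples: three grants per
# application team. Names and paths are illustrative, not UCP's API.
grants = [
    ("payments", "Full Control", "/prod/payments"),
    ("payments", "Full Control", "/staging/payments"),
    ("payments", "View & Use Networks + Secrets", "/prod/db"),
    ("mobile",   "Full Control", "/prod/mobile"),
    ("mobile",   "Full Control", "/staging/mobile"),
    ("mobile",   "View & Use Networks + Secrets", "/prod/db"),
]

def roles_for(team: str, collection: str) -> list:
    """All roles a team holds against a given collection."""
    return [role for t, role, c in grants if t == team and c == collection]

# Full control over their own production apps, but only limited,
# shared-resource access against the db collection.
print(roles_for("payments", "/prod/payments"))  # ['Full Control']
print(roles_for("payments", "/prod/db"))        # ['View & Use Networks + Secrets']
```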
![image](../images/design-access-control-adv-grant-composition.png){: .with-border}
## OrcaBank access architecture
The resulting access architecture, designed with Docker EE Advanced, provides
physical segmentation between production and staging using node access control.
Applications are scheduled only on UCP worker nodes in the dedicated application
collection. Applications also use shared resources across collection boundaries
to access the databases in the `/prod/db` collection.
![image](../images/design-access-control-adv-architecture.png){: .with-border}
### DB team
The OrcaBank `db` team is responsible for deploying and managing the full
lifecycle of the databases that are in production. They have the full set of
operations against all database resources.
![image](../images/design-access-control-adv-db.png){: .with-border}
### Mobile team
The `mobile` team is responsible for deploying their full application stack in
staging. In production they deploy their own applications but use the databases
that are provided by the `db` team.
![image](../images/design-access-control-adv-mobile.png){: .with-border}


This topic is under construction.
{% elsif include.version=="ucp-2.2" %}
[Collections and grants](index.md) are strong tools that can be used to control
access and visibility to resources in UCP.
This tutorial describes a fictitious company named OrcaBank that is designing an
architecture with role-based access control (RBAC) for their application
engineering group.
## Team access requirements
OrcaBank has organized their application teams by specialty with each team
providing shared services to other applications. Developers do their own DevOps
and deploy and manage the lifecycle of their applications.
OrcaBank has four teams with the following needs:
- `security` should have view-only access to all applications in the swarm.
- `db` should have full access to all database applications and resources. See
[DB Team](#db-team).
- `mobile` should have full access to their Mobile applications and limited
access to shared `db` services. See [Mobile Team](#mobile-team).
- `payments` should have full access to their Payments applications and limited
access to shared `db` services.
## Role composition
To assign the proper access, OrcaBank is configuring a combination of default
and custom roles:
- `View Only` (default role) allows users to see but not edit all Swarm
resources.
- `View & Use Networks + Secrets` (custom role) enables users to connect to
networks and use secrets used by `db` containers, but prevents them from
seeing or impacting the `db` applications themselves.
- `Ops` (custom role) allows users to do almost all operations against all types
of resources.
![image](../images/design-access-control-adv-0.png){: .with-border}
## Collection architecture
OrcaBank is also creating collections that fit their team structure.
In their case, all applications share the same physical resources, so all nodes
and applications are being put into collections that nest under the built-in
collection, `/Shared`.
- `/Shared/mobile` hosts all Mobile applications and resources.
- `/Shared/payments` hosts all Payments applications and resources.
Other collections were also created to enable shared `db` applications.
- `/Shared/db` is a top-level collection for all `db` resources.
- `/Shared/db/payments` is a collection of `db` resources for Payments applications.
- `/Shared/db/mobile` is a collection of `db` resources for Mobile applications.
The collection architecture has the following tree representation:
```
/
├── System
└── Shared
    ├── mobile
    ├── payments
    └── db
        ├── mobile
        └── payments
```
OrcaBank's [Grant composition](#grant-composition) ensures that their collection
architecture gives the `db` team access to _all_ `db` resources and restricts
app teams to _shared_ `db` resources.
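The way a single grant on `/Shared/db` reaches every nested `db` collection can be sketched with a simple path-prefix check. This is an illustrative model only, not UCP's implementation:

```python
# Illustrative check for how a grant on a collection also covers the
# collections nested beneath it (not UCP's actual implementation).
def grant_covers(grant_collection: str, resource_collection: str) -> bool:
    """True if a grant on grant_collection applies to resource_collection,
    i.e. the resource is the collection itself or nested beneath it."""
    prefix = grant_collection.rstrip("/") + "/"
    return (resource_collection == grant_collection
            or resource_collection.startswith(prefix))

# One grant on /Shared/db reaches every nested db collection...
print(grant_covers("/Shared/db", "/Shared/db/payments"))  # True
print(grant_covers("/Shared/db", "/Shared/db/mobile"))    # True
# ...while an app team's grant stays scoped to its own slice.
print(grant_covers("/Shared/db/payments", "/Shared/db/mobile"))  # False
```

This is exactly the division described above: the `db` team holds one broad grant, while each app team's grant touches only its own shared slice.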
## LDAP/AD integration
OrcaBank has standardized on LDAP for centralized authentication to help their
identity team scale across all the platforms they manage.
To implement LDAP authentication in UCP, OrcaBank is using UCP's native LDAP/AD
integration to map LDAP groups directly to UCP teams. Users can be added to or
removed from UCP teams via LDAP, which can be managed centrally by OrcaBank's
identity team.
The following grant composition shows how LDAP groups are mapped to UCP teams.
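The one-way flow from LDAP groups to UCP teams can be pictured as a small sync sketch. The group DNs and user names below are made up, and UCP's native integration performs this mapping server-side:

```python
# Toy one-way sync from LDAP groups to UCP teams. Group DNs and users
# are hypothetical; UCP's native LDAP/AD integration does this itself.
ldap_groups = {
    "cn=db,ou=groups,dc=orcabank,dc=com": {"alice", "bob"},
    "cn=security,ou=groups,dc=orcabank,dc=com": {"carol"},
}

group_to_team = {
    "cn=db,ou=groups,dc=orcabank,dc=com": "db",
    "cn=security,ou=groups,dc=orcabank,dc=com": "security",
}

def sync_teams() -> dict:
    """Rebuild UCP team membership from the directory on each sync, so
    membership is managed centrally by the identity team, not in UCP."""
    return {group_to_team[dn]: set(members)
            for dn, members in ldap_groups.items()}

teams = sync_teams()
print(sorted(teams["db"]))  # ['alice', 'bob']
```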
## Grant composition
OrcaBank is taking advantage of the flexibility in UCP's grant model by applying
two grants to each application team. One grant allows each team to fully
manage the apps in their own collection, and the second grant gives them the
(limited) access they need to networks and secrets within the `db` collection.
![image](../images/design-access-control-adv-1.png){: .with-border}
## OrcaBank access architecture
OrcaBank's resulting access architecture shows applications connecting across
collection boundaries. By assigning multiple grants per team, the Mobile and
Payments applications teams can connect to dedicated Database resources through
a secure and controlled interface, leveraging Database networks and secrets.
> Note: Because this is Docker Enterprise Standard, all resources are deployed
> across the same group of UCP worker nodes. Node segmentation is provided in
> Docker Enterprise Advanced and discussed in the [next tutorial](#).
![image](../images/design-access-control-adv-2.png){: .with-border}
### DB team
The `db` team is responsible for deploying and managing the full lifecycle
of the databases used by the application teams. They can execute the full set of
operations against all database resources.
![image](../images/design-access-control-adv-3.png){: .with-border}
@ -123,7 +138,7 @@ operations against all database resources.
### Mobile team
The `mobile` team is responsible for deploying their own application stack,
minus the database tier that is managed by the `db` team.
![image](../images/design-access-control-adv-4.png){: .with-border}

Binary file not shown.
