From c049efc98ef760cec61b7563d271179c30462df7 Mon Sep 17 00:00:00 2001
From: paigehargrave
Date: Fri, 10 Jan 2020 12:05:23 -0500
Subject: [PATCH 1/3] RBAC instruction update for isolate-nodes.md (#8309)

* Update isolate-nodes.md

* Update isolate-nodes.md

* Add changes from peer review

Co-authored-by: Traci Morrison <52976526+traci-morrison@users.noreply.github.com>
---
 ee/ucp/authorization/isolate-nodes.md | 98 ++++++++------------------
 1 file changed, 29 insertions(+), 69 deletions(-)

diff --git a/ee/ucp/authorization/isolate-nodes.md b/ee/ucp/authorization/isolate-nodes.md
index c9e013fa71..cbbe600315 100644
--- a/ee/ucp/authorization/isolate-nodes.md
+++ b/ee/ucp/authorization/isolate-nodes.md
@@ -181,23 +181,15 @@ collection. In this case, the user sets the value of the service's access
label, `com.docker.ucp.access.label`, to the new collection or one of its
children that has a `Service Create` grant for the user.

-## Deploy a Kubernetes application
+## Isolating nodes to Kubernetes namespaces

Starting in Docker Enterprise Edition 2.0, you can deploy a Kubernetes workload to worker nodes, based on a Kubernetes namespace.

-1. Convert a node to use the Kubernetes orchestrator.
-2. Create a Kubernetes namespace.
-3. Create a grant for the namespace.
-4. Link the namespace to a node collection.
-5. Deploy a Kubernetes workload.
-
-### Convert a node to Kubernetes
-
-To deploy Kubernetes workloads, an administrator must convert a worker node to
-use the Kubernetes orchestrator.
-[Learn how to set the orchestrator type](../admin/configure/set-orchestrator-type.md)
-for your nodes in the `/Prod` collection.
+1. Create a Kubernetes namespace.
+2. Create a grant for the namespace.
+3. Associate nodes with the namespace.
+4. Deploy a Kubernetes workload.

### Create a Kubernetes namespace

@@ -212,78 +204,46 @@ for Kubernetes workloads.
   apiVersion: v1
   kind: Namespace
   metadata:
-     Name: ops-nodes
+     name: namespace-name
   ```
-4. 
Click **Create** to create the `ops-nodes` namespace.
+4. Click **Create** to create the `namespace-name` namespace.

### Grant access to the Kubernetes namespace

-Create a grant to the `ops-nodes` namespace for the `Ops` team by following the
-same steps that you used to grant access to the `/Prod` collection, only this
-time, on the **Create Grant** page, pick **Namespaces**, instead of
-**Collections**.
+Create a grant for the `namespace-name` namespace:

-![](../images/isolate-nodes-5.png){: .with-border}
+1. On the **Create Grant** page, select **Namespaces**.

-Select the **ops-nodes** namespace, and create a `Full Control` grant for the
-`Ops` team.
+   ![](../images/isolate-nodes-5.png){: .with-border}

-![](../images/isolate-nodes-6.png){: .with-border}
+2. Select the **namespace-name** namespace, and create a `Full Control` grant.

-### Link the namespace to a node collection
+   ![](../images/isolate-nodes-6.png){: .with-border}

-The last step is to link the Kubernetes namespace the `/Prod` collection.
+### Associate nodes with the namespace

-1. Navigate to the **Namespaces** page, and find the **ops-nodes** namespace
-   in the list.
-2. Click the **More options** icon and select **Link nodes in collection**.
+Namespaces can be associated with a node collection in either of the following ways:
+ - Define an annotation key during namespace creation. This is described in the following paragraphs.
+ - [Provide the namespace definition information in a configuration file](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#configuration-file-format-1).

-   ![](../images/isolate-nodes-7.png){: .with-border}
+#### Annotation key
+The `scheduler.alpha.kubernetes.io/node-selector` annotation key assigns node selectors to namespaces.
If you define a `scheduler.alpha.kubernetes.io/node-selector: name-of-node-selector` annotation key when creating a namespace, all applications deployed in that namespace are pinned to the nodes with the node selector specified.

-3. In the **Choose collection** section, click **View children** on the
-   **Swarm** collection to navigate to the **Prod** collection.
-4. On the **Prod** collection, click **Select collection**.
-5. Click **Confirm** to link the namespace to the collection.
+The following example pins all applications deployed in the `ops-nodes` namespace to nodes labeled `zone=example-zone`:

-   ![](../images/isolate-nodes-8.png){: .with-border}
-### Deploy a Kubernetes workload to the node collection

+1. Label the nodes with `zone=example-zone`.
+2. Add a scheduler node selector annotation as part of the namespace definition.

-1. Log in in as a non-admin who's on the `Ops` team.
-2. In the left pane, open the **Kubernetes** section.
-3. Confirm that **ops-nodes** is displayed under **Namespaces**.
-4. Click **Create**, and in the **Object YAML** editor, paste the following
-   YAML definition for an NGINX server.
-
-   ```yaml
-   apiVersion: v1
-   kind: ReplicationController
-   metadata:
-     name: nginx
-   spec:
-     replicas: 1
-     selector:
-       app: nginx
-     template:
-       metadata:
-         name: nginx
-         labels:
-           app: nginx
-       spec:
-         containers:
-         - name: nginx
-           image: nginx
-           ports:
-           - containerPort: 80
   ```
-
-   ![](../images/isolate-nodes-9.png){: .with-border}
-
-5. Click **Create** to deploy the workload.
-6. In the left pane, click **Pods** and confirm that the workload is running
-   on pods in the `ops-nodes` namespace.
-
-   ![](../images/isolate-nodes-10.png){: .with-border}
+   apiVersion: v1
+   kind: Namespace
+   metadata:
+     annotations:
+       scheduler.alpha.kubernetes.io/node-selector: zone=example-zone
+     name: ops-nodes
+   ```

## Where to go next

From 834a977126b74aef64d9b30b08a569eb6081dd23 Mon Sep 17 00:00:00 2001
From: Olly P
Date: Fri, 10 Jan 2020 18:06:45 +0000
Subject: [PATCH 2/3] Removed Interlock Dead Link (#10136)

---
 _data/toc.yaml | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/_data/toc.yaml b/_data/toc.yaml
index 34dc03bc6d..2178e27a33 100644
--- a/_data/toc.yaml
+++ b/_data/toc.yaml
@@ -1822,8 +1822,6 @@ manuals:
        path: /ee/ucp/interlock/usage/canary/
      - title: Using context or path-based routing
        path: /ee/ucp/interlock/usage/context/
-     - title: Publishing a default host service
-       path: /ee/ucp/interlock/usage/default-backend/
      - title: Specifying a routing mode
        path: /ee/ucp/interlock/usage/interlock-vip-mode/
      - title: Using routing labels
@@ -4156,4 +4154,3 @@ manuals:

-
\ No newline at end of file

From 858a0b1edeb739020d5ba6a49034518095ad0420 Mon Sep 17 00:00:00 2001
From: Olly P
Date: Fri, 10 Jan 2020 18:07:03 +0000
Subject: [PATCH 3/3] Correct Typo in Secure Overlay (#10135)

---
 ee/ucp/admin/configure/ucp-configuration-file.md | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/ee/ucp/admin/configure/ucp-configuration-file.md b/ee/ucp/admin/configure/ucp-configuration-file.md
index 70a5b3c6ed..9de97087d3 100644
--- a/ee/ucp/admin/configure/ucp-configuration-file.md
+++ b/ee/ucp/admin/configure/ucp-configuration-file.md
@@ -111,7 +111,7 @@ Configures audit logging options for UCP components.
Specifies scheduling options and the default orchestrator for new nodes.

> Note
>
> If you run the `kubectl` command, such as `kubectl describe nodes`, to view scheduling rules on Kubernetes nodes, it does not reflect what is configured in UCP Admin settings.
> UCP uses taints to control container scheduling on nodes and is unrelated to kubectl's `Unschedulable` boolean flag.

| Parameter | Required | Description |

@@ -142,7 +142,7 @@ Specifies whether DTR images require signing.
### log_configuration table (optional)

> Note
>
> This feature has been deprecated. Refer to the [Deprecation notice](https://docs.docker.com/ee/ucp/release-notes/#deprecation-notice) for additional information.

Configures the logging options for UCP components.

@@ -228,13 +228,13 @@ components. Assigning these values overrides the settings in a container's
| `local_volume_collection_mapping` | no | Store data about collections for volumes in UCP's local KV store instead of on the volume labels. This is used for enforcing access control on volumes. |
| `manager_kube_reserved_resources` | no | Reserve resources for Docker UCP and Kubernetes components which are running on manager nodes. |
| `worker_kube_reserved_resources` | no | Reserve resources for Docker UCP and Kubernetes components which are running on worker nodes. |
-| `kubelet_max_pods` | yes | Set Number of Pods that can run on a node. Default is `110`.|
-| `secure-overlay` | no | Set to `true` to enable IPSec network encryption in Kubernetes. Default is `false`. |
-| `image_scan_aggregation_enabled` | no | Set to `true` to enable image scan result aggregation. This feature displays image vulnerabilities in shared resource/containers and shared resources/images pages. Default is `false`.|
-|`swarm_polling_disabled` | no | Set to `true` to turn off auto-refresh (which defaults to 15 seconds) and only call the Swarm API once. Default is `false`. |
+| `kubelet_max_pods` | yes | Set the number of Pods that can run on a node. Default is `110`. |
+| `secure_overlay` | no | Set to `true` to enable IPSec network encryption in Kubernetes. Default is `false`. |
+| `image_scan_aggregation_enabled` | no | Set to `true` to enable image scan result aggregation. 
This feature displays image vulnerabilities in shared resources/containers and shared resources/images pages. Default is `false`. |
+| `swarm_polling_disabled` | no | Set to `true` to turn off auto-refresh (which defaults to 15 seconds) and only call the Swarm API once. Default is `false`. |

> Note
>
> dev indicates that the functionality is only for development and testing. Arbitrary Kubernetes configuration parameters are not tested and supported under the Docker Enterprise Software Support Agreement.

### iSCSI (optional)
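The `cluster_config` parameters corrected above live in the UCP configuration file, which is TOML. As a minimal sketch of how the renamed `secure_overlay` key sits alongside its neighbors (the values shown are illustrative assumptions, not recommendations), the relevant fragment might look like:

```toml
[cluster_config]
  # Parameter name uses an underscore: secure_overlay, not secure-overlay.
  # Set to true to enable IPSec network encryption in Kubernetes (default: false).
  secure_overlay = true

  # Maximum number of Pods that can run on a node (default: 110).
  kubelet_max_pods = 110

  # Set to true to turn off Swarm API auto-refresh polling (default: false).
  swarm_polling_disabled = false
```

Because this is a configuration fragment rather than a complete UCP configuration file, only the keys named in the table above are shown; other `cluster_config` options keep their defaults when omitted.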