mirror of https://github.com/docker/docs.git
Update UCP install screenshots (#10056)
* Update screenshots
* Update RBAC screenshots
* Change add node image
* Edits
* Update uninstall topic
This commit is contained in: parent 059d2baac1, commit 3a1f8a6635
@ -13,31 +13,23 @@ UCP 3.0 used its own role-based access control (RBAC) for Kubernetes clusters. N

Kubernetes RBAC is turned on by default for Kubernetes clusters when customers upgrade to UCP 3.1. See [Using RBAC Authorization](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) in the v1.11 documentation for more information about Kubernetes role-based access control.

Starting with UCP 3.1, Kubernetes and Swarm roles have separate views. You can view all the roles for a particular cluster under **Access Control**, then **Roles**. Select Kubernetes or Swarm to view the specific roles for each.
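
To confirm from a client bundle that Kubernetes RBAC is active, one quick check (a sketch assuming `kubectl` is already configured against the cluster):

```bash
# List the API groups served by the cluster; the rbac.authorization.k8s.io
# group appears when Kubernetes RBAC is enabled.
kubectl api-versions | grep rbac.authorization.k8s.io
```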
## Creating roles

You create Kubernetes roles either through the CLI using `kubectl` or through the UCP web interface.
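
For the `kubectl` path, a role is an ordinary Kubernetes RBAC manifest applied to the cluster. The following is a minimal sketch; the `pod-reader` role and its rules are illustrative, not taken from this guide:

```bash
# Write an illustrative Role manifest granting read-only access to pods
# in the "default" namespace, then apply it with kubectl.
cat <<'EOF' > pod-reader-role.yml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
EOF

kubectl apply -f pod-reader-role.yml
```

Using `kind: ClusterRole` and omitting the namespace yields a cluster-wide role, matching the all-namespaces behavior described in step 5 below.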

To create a Kubernetes role in the UCP web interface:

1. From the UCP UI, select **Access Control**.
2. From the left navigation menu, select **Roles**.

![Kubernetes Create Role](../images/kube-role-01.png)

3. Select the **Kubernetes** tab at the top of the window.
4. Select **Create** to create a Kubernetes role object in the following dialog:

![Kubernetes Create Role](../images/kube-role-03.png)

5. Select a namespace from the **Namespace** drop-down list. Selecting a specific namespace creates a role for use in that namespace, but selecting all namespaces creates a `ClusterRole` where you can create rules for cluster-scoped Kubernetes resources as well as namespaced resources.
6. Provide the YAML for the role, either by entering it in the **Object YAML** editor or by selecting **Click to upload a .yml file** to choose and upload a .yml file instead.
7. When you have finished specifying the YAML, select **Create** to complete role creation.

## Creating role grants
@ -46,31 +38,22 @@ Kubernetes provides 2 types of role grants:

- `ClusterRoleBinding` which applies to all namespaces
- `RoleBinding` which applies to a specific namespace

To create a grant for a Kubernetes role in the UCP web interface:

1. From the UCP UI, select **Access Control**.
2. From the left navigation menu, select **Grants**.

![Kubernetes Grants](../images/kube-grant-01.png)

3. Select the **Kubernetes** tab at the top of the window. All grants to Kubernetes roles can be viewed in the Kubernetes tab.
4. Select **Create New Grant** to start the Create Role Binding wizard and create a new grant for a given user, team or service.

![Kubernetes Grants](../images/kube-grant-02.png)

5. Select the subject type. Your choices are:
   - **All Users**
   - **Organizations**
   - **Service account**
6. To create a user role binding, select a username from the **Users** drop-down list, then select **Next**.
7. Select a resource set for the subject. The **default** namespace is automatically selected. To use a different namespace, select the **Select Namespace** button next to the desired namespace. For `Cluster Role Binding`, slide the **Apply Role Binding to all namespaces** selector to the right.

![Kubernetes Grants](../images/kube-grant-04.png)

8. Select **Next** to continue.
9. Select the **Cluster Role** from the drop-down list. If you create a `ClusterRoleBinding` (by selecting **Apply Role Binding to all namespaces**) then you may only select ClusterRoles. If you select a specific namespace, you can choose any role from that namespace or any ClusterRole.

![Kubernetes Grants](../images/kube-grant-05.png)

10. Select **Create** to complete creating the grant.
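
The same grant can also be expressed as a `RoleBinding` manifest and applied with `kubectl`. A minimal sketch with illustrative names:

```bash
# Illustrative RoleBinding: grants the "pod-reader" Role to user "jane" in
# the "default" namespace. A ClusterRoleBinding has the same shape but no
# namespace, and its roleRef must name a ClusterRole.
cat <<'EOF' | kubectl apply -f -
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: jane
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
EOF
```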

@ -16,8 +16,7 @@ Calico / Azure integration.

## Docker UCP Networking

Docker UCP configures the Azure IPAM module for Kubernetes to allocate IP addresses for Kubernetes pods. The Azure IPAM module requires each Azure VM which is part of the Kubernetes cluster to be configured with a pool of IP addresses.

There are two options for provisioning IPs for the Kubernetes cluster on Azure:

@ -81,7 +80,7 @@ objects are being deployed.

For UCP to integrate with Microsoft Azure, all Linux UCP Manager and Linux UCP
Worker nodes in your cluster need an identical Azure configuration file,
`azure.json`. Place this file within `/etc/kubernetes` on each host. Since the
configuration file is owned by `root`, set its permissions to `0644` to ensure
the container user has read access.

The following is an example template for `azure.json`. Replace `***` with real values, and leave the other parameters as is.
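
As a sketch of its shape, assuming the field names from the upstream Kubernetes Azure cloud provider configuration linked below:

```bash
# Illustrative azure.json skeleton; replace *** with real values. The field
# names follow the upstream Kubernetes Azure cloud provider configuration.
cat <<'EOF' > /etc/kubernetes/azure.json
{
    "cloud": "AzurePublicCloud",
    "tenantId": "***",
    "subscriptionId": "***",
    "aadClientId": "***",
    "aadClientSecret": "***",
    "resourceGroup": "***",
    "location": "***",
    "subnetName": "***",
    "securityGroupName": "***",
    "vnetName": "***",
    "useInstanceMetadata": true
}
EOF

# The configuration file must be readable by the container user.
chmod 0644 /etc/kubernetes/azure.json
```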

@ -107,11 +106,11 @@ There are some optional parameters for Azure deployments:

- `primaryAvailabilitySetName` - The Worker Nodes availability set.
- `vnetResourceGroup` - The Virtual Network Resource group, if your Azure Network objects live in a
  separate resource group.
- `routeTableName` - If you have defined multiple Route tables within
  an Azure subnet.

See the [Kubernetes Azure Cloud Provider Configuration](https://github.com/kubernetes/cloud-provider-azure/blob/master/docs/cloud-provider-config.md) for more details on this configuration file.

## Guidelines for IPAM Configuration

@ -122,21 +121,26 @@ See the [Kubernetes Azure Cloud Provider Config](https://github.com/kubernetes/c

> installation process.

The subnet and the virtual network associated with the primary interface of the
Azure VMs need to be configured with a large enough address prefix/range. The
number of required IP addresses depends on the workload and the number of
nodes in the cluster.

For example, in a cluster of 256 nodes, make sure that the address space of the
subnet and the virtual network can allocate at least 128 * 256 IP addresses, in
order to run a maximum of 128 pods concurrently on a node. This would be ***in
addition to*** initial IP allocations to VM network interface cards (NICs)
during Azure resource creation.

Accounting for IP addresses that are allocated to NICs during VM bring-up, set
the address space of the subnet and virtual network to `10.0.0.0/16`. This
ensures that the network can dynamically allocate at least 32768 addresses,
plus a buffer for initial allocations for primary IP addresses.

> Note
>
> The Azure IPAM module queries an Azure VM's metadata to obtain
> a list of IP addresses which are assigned to the VM's NICs. The
> IPAM module allocates these IP addresses to Kubernetes pods. You configure the
> IP addresses as `ipConfigurations` in the NICs associated with a VM
> or scale set member, so that Azure IPAM can provide them to Kubernetes when
> requested.
{: .important}

## Manually provision IP address pools as part of an Azure VM scale set

@ -206,8 +210,7 @@ for each VM in the VM scale set.

During a UCP installation, a user can alter the number of Azure IP addresses
UCP will automatically provision for pods. By default, UCP will provision 128
addresses, from the same Azure Subnet as the hosts, for each VM in the
cluster. However, if you have manually attached additional IP addresses
to the VMs (via an ARM Template, Azure CLI or Azure Portal) or you
are deploying into a small Azure subnet (less than /16), an `--azure-ip-count`
flag can be used at install time.

@ -215,8 +218,7 @@ flag can be used at install time.

> Note
>
> Do not set the `--azure-ip-count` variable to a value of less than 6 if
> you have not manually provisioned additional IP addresses for each VM. The
> UCP installation will need at least 6 IP addresses to allocate
> to the core UCP components that run as Kubernetes pods. This is in addition
> to the VM's private IP address.

@ -225,8 +227,7 @@ to be defined.

**Scenario 1 - Manually Provisioned Addresses**

If you have manually provisioned additional IP addresses for each VM, and
want to disable UCP from dynamically provisioning more IP addresses for you,
then you would pass `--azure-ip-count 0` into the UCP installation command.

@ -236,7 +237,7 @@ If you want to reduce the number of IP addresses dynamically allocated from 128

addresses to a custom value due to:

- Primarily using the Swarm Orchestrator
- Deploying UCP on a small Azure subnet (for example, /24)
- Planning to run a small number of Kubernetes pods on each node

For example, if you wanted to provision 16 addresses per VM, then you would pass `--azure-ip-count 16` into the UCP installation command.
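
As a sketch, the install invocation would look like the following; the UCP image tag and host address are placeholders for your environment:

```bash
# Illustrative UCP install with a custom per-VM IP allocation; the image tag
# and host address are placeholders.
docker container run --rm -it --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp:3.2.0 install \
  --host-address <node-ip-address> \
  --azure-ip-count 16 \
  --interactive
```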

@ -14,12 +14,12 @@ of the [requirements UCP needs to run](system-requirements.md).

Also, you need to ensure that all nodes, physical and virtual, are running
the same version of Docker Enterprise.

> Note
>
> If you are installing UCP on a public cloud platform, refer to the cloud-specific UCP
> installation documentation. For [Microsoft
> Azure](./cloudproviders/install-on-azure/), this is **mandatory**. For
> [AWS](./cloudproviders/install-on-aws/), this is optional.
{: .important}

## Step 2: Install Docker Enterprise on all nodes

@ -84,12 +84,12 @@ To install UCP:

with SELinux enabled, check the [reference
documentation](/reference/ucp/3.2/cli/install.md).

> Note
>
> UCP will install [Project Calico](https://docs.projectcalico.org/v3.7/introduction/)
> for container-to-container communication for Kubernetes. A platform operator may
> choose to install an alternative CNI plugin, such as Weave or Flannel. Please see
> [Install an unmanaged CNI plugin](/ee/ucp/kubernetes/install-cni-plugin/) for more information.
{: .important}

## Step 5: License your installation

@ -123,7 +123,7 @@ To join manager nodes to the swarm,

1. In the UCP web UI, navigate to the **Nodes** page, and click the
   **Add Node** button to add a new node.

   ![](../../images/nodes-page-add-node.png){: .with-border}

2. In the **Add Node** page, check **Add node as a manager** to turn this node
   into a manager and replicate UCP for high-availability.

@ -141,12 +141,11 @@ To join manager nodes to the swarm,

   contact UCP. The joining node should be able to contact itself at this
   address. The format is `interface:port` or `ip:port`.

5. Click the copy icon to copy the `docker swarm join` command that nodes use to join the swarm.

   ![](../../images/join-node-to-swarm.png){: .with-border}

6. For each manager node that you want to join to the swarm, log in using
   ssh and run the join command that you copied. After the join command
   completes, the node appears on the **Nodes** page in the UCP web UI.
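
The copied command has the standard swarm shape; a sketch with placeholder values (the real token and manager address come from the command copied in step 5):

```bash
# Placeholder token and manager address; use the values from the copied command.
docker swarm join --token SWMTKN-1-<token> <manager-ip>:2377
```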

@ -1,26 +1,20 @@

---
title: Uninstall UCP
description: Learn how to uninstall a Docker Universal Control Plane.
keywords: UCP, uninstall, install, Docker EE
---

Docker Universal Control Plane (UCP) is designed to scale as your applications grow in size and usage. You can [add and remove nodes](../configure/scale-your-cluster.md) from the cluster to make it scale to your needs.

You can also uninstall UCP from your cluster. In this case, the UCP services are stopped and removed, but your Docker Engines will continue running in swarm mode. Your applications will continue running normally.

If you wish to remove a single node from the UCP cluster, you should instead
[remove that node from the cluster](../configure/scale-your-cluster.md).

After you uninstall UCP from the cluster, you'll no longer be able to enforce
role-based access control (RBAC) to the cluster, or have a centralized way to monitor
and manage the cluster. After uninstalling UCP from the cluster, you will no longer be able to join new nodes using `docker swarm join`, unless you reinstall UCP.

To uninstall UCP, log in to a manager node using ssh, and run the following
command:
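
A sketch of the full invocation, with the UCP image tag as a placeholder:

```bash
# Illustrative uninstall invocation; the UCP image tag is a placeholder.
docker container run --rm -it \
  -v /var/run/docker.sock:/var/run/docker.sock \
  --name ucp \
  docker/ucp:3.2.0 uninstall-ucp --interactive
```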

@ -35,12 +29,15 @@ docker container run --rm -it \

This runs the uninstall command in interactive mode, so that you are prompted
for any necessary configuration values.

If the `uninstall-ucp` command fails, you can run the following commands to manually uninstall UCP:

```bash
# Run the following command on one manager node to remove remaining UCP services
docker service rm $(docker service ls -f name=ucp- -q)

# Run the following command on each manager node to remove remaining UCP containers
docker container rm -f $(docker container ps -a -f name=ucp- -f name=k8s_ -q)

# Run the following command on each manager node to remove remaining UCP volumes
docker volume rm $(docker volume ls -f name=ucp -q)
```

@ -49,8 +46,7 @@ The UCP configuration is kept in case you want to reinstall UCP with the same

configuration. If you want to also delete the configuration, run the uninstall
command with the `--purge-config` option.

Refer to the [reference documentation](/reference/ucp/3.0/cli/index.md) to learn the options available.

Once the uninstall command finishes, UCP is completely removed from all the
nodes in the cluster. You don't need to run the command again from other nodes.

@ -12,37 +12,35 @@ solution from Docker. You install it on-premises or in your virtual private

cloud, and it helps you manage your Docker cluster and applications through a
single interface.

![](images/ucp-dashboard.png){: .with-border}

## Centralized cluster management

With Docker, you can join up to thousands of physical or virtual machines
together to create a container cluster that allows you to deploy your
applications at scale. UCP extends the functionality provided by Docker to make it easier to manage your cluster from a centralized place.

You can manage and monitor your container cluster using a graphical UI.

![](images/node-metrics.png){: .with-border}

## Deploy, manage, and monitor

With UCP, you can manage from a centralized place all of the computing
resources you have available, like nodes, volumes, and networks.

You can also deploy and monitor your applications and services.

## Built-in security and access control

UCP has its own built-in authentication mechanism and integrates with
LDAP services. It also has role-based access control (RBAC), so that you can
control who can access and make changes to your cluster and applications.
[Learn about role-based access control](authorization/index.md).

![](images/rbac-grants.png){: .with-border}

UCP integrates with Docker Trusted Registry (DTR) so that you can keep the
Docker images you use for your applications behind your firewall, where they
are safe and can't be tampered with.

@ -62,7 +60,7 @@ cluster that's managed by UCP:

docker info
```

This command produces the output that you expect from Docker Enterprise:

```bash
Containers: 38

@ -83,4 +81,4 @@ Managers: 1

## Where to go next

- [Install UCP](admin/install/index.md)
- [Docker Enterprise architecture](/ee/docker-ee-architecture.md)