[WIP] Join nodes topics, with orchestration settings (#295)

* Refactor join-node topics

* Incorporate feedback; add CLI steps
Jim Galasyn 2017-11-17 09:46:13 -08:00
parent fcd96e73dc
commit 30b9f68396
6 changed files with 348 additions and 135 deletions

View File

@@ -366,8 +366,14 @@ guides:

      title: Deploy a workload to a Kubernetes cluster
    - path: /deploy/deploy-workloads/manage-and-deploy-private-images/
      title: Manage and deploy private images
  - sectiontitle: Install and configure
    section:
    - path: /deploy/install-and-configure/join-nodes-to-cluster/
      title: Join nodes to your cluster
    - path: /deploy/install-and-configure/join-windows-nodes-to-cluster/
      title: Join Windows worker nodes to your cluster
    - path: /deploy/install-and-configure/set-orchestrator-type/
      title: Set the orchestrator type for a node
  - sectiontitle: Run your app in production
    section:

Binary image file added (46 KiB)

Binary image file added (45 KiB)

View File

@@ -6,82 +6,54 @@ keywords: Docker EE, UCP, cluster, scale, worker, manager

ui_tabs:
- version: ucp-3.0
  orhigher: true
- version: ucp-2.2
  orlower: true
cli_tabs:
- version: docker-cli-linux
next_steps:
- path: /deploy/install-and-configure/join-windows-nodes-to-cluster
  title: Join Windows worker nodes to a cluster
- path: /deploy/install-and-configure/set-orchestrator-type
  title: Change the orchestrator for a node
---
{% if include.ui %}
{% if include.version=="ucp-3.0" %}

Docker EE is designed for scaling horizontally as your applications grow in
size and usage. You can add or remove nodes from the cluster to scale it
to your needs. You can join Windows Server 2016, IBM z System, and Linux nodes
to the cluster.

Because Docker EE leverages the clustering functionality provided by Docker
Engine, you use the [docker swarm join](/engine/swarm/swarm-tutorial/add-nodes.md)
command to add more nodes to your cluster. When you join a new node, Docker EE
services start running on the node automatically.
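The join command that a manager generates looks like the following sketch;
the token and manager address here are placeholders, not real values:

```bash
# Run on the node you're adding. Get the real token and address from
# a manager, for example with: docker swarm join-token worker
docker swarm join \
  --token SWMTKN-1-<worker-token> \
  <manager-ip>:2377
```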
## Node roles

When you join a node to a cluster, you specify its role: manager or worker.

- **Manager**: Manager nodes are responsible for cluster management
  functionality and dispatching tasks to worker nodes. Having multiple
  manager nodes allows your swarm to be highly available and tolerant of
  node failures.

  Manager nodes also run all Docker EE components in a replicated way, so
  by adding additional manager nodes, you're also making the cluster highly
  available.

  [Learn more about the Docker EE architecture.](../architecture/how-docker-ee-delivers-ha.md)

- **Worker**: Worker nodes receive and execute your services and applications.
  Having multiple worker nodes allows you to scale the computing capacity of
  your cluster.

  When deploying Docker Trusted Registry in your cluster, you deploy it to a
  worker node.
## Join a node to the cluster

You can join Windows Server 2016, IBM z System, and Linux nodes to the cluster,
but only Linux nodes can be managers.

To join nodes to the cluster, go to the Docker EE web UI and navigate to the
**Nodes** page.

@@ -102,27 +74,68 @@ join to the cluster, and run the `docker swarm join` command on the host.

To add a Windows node, click **Windows** and follow the instructions in
[Join Windows worker nodes to a cluster](join-windows-nodes-to-cluster.md).

After you run the join command on the node, the node is displayed on the
**Nodes** page in the Docker EE web UI. From there, you can change the node's
cluster configuration, including its assigned orchestrator type.
[Learn how to change the orchestrator for a node](set-orchestrator-type.md).
## Pause or drain a node

Once a node is part of the cluster, you can configure the node's availability
so that it is:

- **Active**: the node can receive and execute tasks.
- **Paused**: the node continues running existing tasks, but doesn't receive
  new tasks.
- **Drained**: the node won't receive new tasks. Existing tasks are stopped and
  replica tasks are launched in active nodes.

Pause or drain a node from the **Edit Node** page:

1. In the Docker EE web UI, browse to the **Nodes** page and select the node.
2. In the details pane, click **Configure** and select **Details** to open
   the **Edit Node** page.
3. In the **Availability** section, click **Active**, **Pause**, or **Drain**.
4. Click **Save** to change the availability of the node.

![](../images/join-nodes-to-cluster-3.png){: .with-border}
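You can make the same availability change from the CLI; a minimal sketch,
run on a manager node, where `<node-id>` is the ID or hostname reported by
`docker node ls`:

```bash
# Pause the node: existing tasks keep running, no new tasks arrive.
docker node update --availability pause <node-id>

# Return the node to service. Valid values: active, pause, drain.
docker node update --availability active <node-id>
```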
## Promote or demote a node

As your cluster architecture changes, you may need to promote worker nodes to
managers or demote manager nodes to workers. Change the current role of a node
on the **Edit Node** page.

Always demote a manager node before removing it from the cluster.

> Load balancing
>
> If you're load-balancing user requests to Docker EE across multiple manager
> nodes, don't forget to remove these nodes from your load-balancing pool when
> you demote them to workers.
{: .important}

To promote or demote a node:

1. Navigate to the **Nodes** page, and click the node whose role you want to
   change.
2. In the details pane, click **Configure** and select **Details** to open
   the **Edit Node** page.
3. In the **Role** section, click **Manager** or **Worker**.
4. Click **Save** and wait until the operation completes.
5. Navigate to the **Nodes** page, and confirm that the node role has changed.
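The CLI equivalents are the standard swarm role commands, run from an
existing manager:

```bash
# Run on a manager node.
docker node promote <node-id>   # worker -> manager
docker node demote <node-id>    # manager -> worker
```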
## Remove a node from the cluster

Before you remove a node from the cluster, ensure that it's not a manager node.
If it is, [demote it to a worker](#promote-or-demote-a-node) before you remove
it from the cluster.

To remove a node:

1. Navigate to the **Nodes** page and select the node.
2. In the details pane, click **Actions** and select **Remove**.
3. Click **Confirm** when you're prompted.

If the status of the worker node is `Ready`, you need to force the node to leave
the cluster manually. To do this, connect to the target node through SSH and
run `docker swarm leave --force` directly against the local Docker EE Engine.

@@ -130,54 +143,32 @@ run `docker swarm leave --force` directly against the local Docker EE Engine.

> Loss of quorum
>
> Don't perform this step if the node is still a manager, as this may cause
> loss of quorum.
{: .important}

When the status of the node is reported as `Down`, you can remove the node from
the cluster.

If you want to join the removed node to the cluster again, you need to force
the node to leave the cluster manually. To do this, connect to the target node
through SSH and run `docker swarm leave --force` directly against the local
Docker EE Engine.
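A minimal sketch of that manual cleanup, with placeholder node names:

```bash
# On the node that's being removed: leave the swarm even if the
# managers haven't acknowledged the departure yet.
docker swarm leave --force

# On a manager, once the node shows as Down: remove it from the list.
docker node rm <node-id>
```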
{% elsif include.version=="ucp-2.2" %}

[Learn how to scale your cluster](/datacenter/ucp/2.2/guides/admin/configure/scale-your-cluster.md).

{% endif %}
{% endif %}
{% if include.cli %}

You can use the command line to join a node to a Docker EE cluster.
To get the join token, run the following command on a manager node:

{% if include.version=="docker-cli-linux" %}

```bash
docker swarm join-token worker
```

@@ -196,21 +187,33 @@ $ docker swarm join \

Once your node is added, you can see it by running `docker node ls` on a manager:

```bash
docker node ls
```

To change the node's availability, use:

```bash
docker node update --availability drain node2
```

You can set the availability to `active`, `pause`, or `drain`.
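To confirm the change, you can read the setting back; this check mirrors the
`grep` pattern used elsewhere in these topics:

```bash
# The Spec section of the output shows "Availability": "drain".
docker node inspect node2 | grep -i availability
```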
## Remove nodes from the cluster

If the target node is a manager, you need to demote the node to a worker
before proceeding with the removal.

1. Log in to a manager node, other than the one you'll be demoting, by using
   SSH.
2. Run `docker node ls` and identify the `nodeID` or `hostname` of the target
   node.
3. Run `docker node demote <nodeID or hostname>`.
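After demoting the node, you can watch its status from the manager before
removing it; for example:

```bash
# STATUS shows Ready until the node leaves the swarm, then Down.
docker node ls --filter name=<hostname>
```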
When the status of the node is reported as `Down`, you can remove the node from
the cluster.
```bash ```bash
$ docker node rm <node-hostname> docker node rm <nodeID or hostname>
``` ```
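To verify the removal, list the nodes again and confirm that the entry is gone:

```bash
docker node ls
```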
{% endif %}

View File

@@ -1,31 +1,27 @@

---
title: Join Windows worker nodes to your cluster
description: Join worker nodes that are running on Windows Server 2016 to a Docker EE cluster.
keywords: Docker EE, UCP, cluster, scale, worker, Windows
next_steps:
- path: /deploy/install-and-configure/set-orchestrator-type
  title: Change the orchestrator for a node
---

Docker Enterprise Edition supports worker nodes that run on Windows Server 2016.
Only worker nodes are supported on Windows, and all manager nodes in the cluster
must run on Linux.

Follow these steps to enable a worker node on Windows.

1. Install Docker EE Engine on Windows Server 2016.
2. Configure the Windows node.
3. Join the Windows node to the cluster.

## Install Docker EE Engine on Windows Server 2016

[Install Docker EE Engine](/docker-ee-for-windows/install/#using-a-script-to-install-docker-ee)
on a Windows Server 2016 instance to enable joining a cluster that's managed by
Docker Enterprise Edition.
## Configure the Windows node
@@ -33,7 +29,7 @@ Follow these steps to configure the docker daemon and the Windows environment.

1. Pull the Windows-specific image of `ucp-agent`, which is named `ucp-agent-win`.
2. Run the Windows worker setup script provided with `ucp-agent-win`.
3. Join the cluster with the token provided by the Docker EE web UI or CLI.
### Pull the Windows-specific images

@@ -57,7 +53,8 @@ docker image pull {{ page.ucp_org }}/ucp-dsinfo-win:{{ page.ucp_version }}

### Run the Windows node setup script

You need to open ports 2376 and 12376, and create certificates
for the Docker daemon to communicate securely. Use this command to run
the Windows node setup script:

```powershell
docker container run --rm {{ page.ucp_org }}/ucp-agent-win:{{ page.ucp_version }} windows-script | powershell -noprofile -noninteractive -command 'Invoke-Expression -Command $input'
```
@@ -77,8 +74,7 @@ The script may be incompatible with installations that use a config file at

that the daemon runs on port 2376 and that it uses certificates located in
`C:\ProgramData\docker\daemoncerts`. If certificates don't exist in this
directory, run `ucp-agent-win generate-certs`, as shown in Step 2 of the
procedure in [Set up certs for the dockerd service](#set-up-certs-for-the-dockerd-service).

In the `daemon.json` file, set the `tlscacert`, `tlscert`, and `tlskey` options
to the corresponding files in `C:\ProgramData\docker\daemoncerts`:
@@ -99,7 +95,7 @@ to the corresponding files in `C:\ProgramData\docker\daemoncerts`:

## Join the Windows node to the cluster

Now you can join the cluster by using the `docker swarm join` command that's
provided by the Docker EE web UI and CLI.

1. Log in to the Docker EE web UI with an administrator account.
2. Navigate to the **Nodes** page.

@@ -120,6 +116,13 @@ Copy the displayed command. It looks similar to the following:

```bash
docker swarm join --token <token> <ucp-manager-ip>
```
You can also use the command line to get the join token. On a manager node,
run the following command:
```bash
docker swarm join-token worker
```
Run the `docker swarm join` command on each instance of Windows Server that
will be a worker node.
@@ -136,7 +139,6 @@ to the `Invoke-Expression` cmdlet.

```powershell
docker container run --rm {{ page.ucp_org }}/ucp-agent-win:{{ page.ucp_version }} windows-script
```

### Open ports in the Windows firewall

Docker EE requires that ports 2376 and 12376 are open for inbound TCP traffic.

@@ -171,12 +173,14 @@ netsh advfirewall firewall add rule name="docker_proxy" dir=in action=allow prot

The `dockerd` service and the Windows environment are now configured to join a Docker EE cluster.

> TLS certificate setup
>
> If the TLS certificates aren't set up correctly, the Docker EE web UI shows the
> following warning.
>
> ```
> Node WIN-NOOQV2PJGTE is a Windows node that cannot connect to its local Docker daemon.
> ```
## Windows nodes limitations

View File

@@ -0,0 +1,200 @@
---
title: Set the orchestrator type for a node
description: |
  Learn how to specify the orchestrator for nodes in a Docker Enterprise Edition cluster.
keywords: Docker EE, UCP, cluster, orchestrator
ui_tabs:
- version: ucp-3.0
  orhigher: true
cli_tabs:
- version: docker-cli-linux
next_steps:
- path: /deploy/install-and-configure/join-nodes-to-cluster
  title: Join nodes to your cluster
- path: /deploy/install-and-configure/set-orchestrator-type
  title: Change the orchestrator for a node
---
{% if include.ui %}
{% if include.version=="ucp-3.0" %}
When you add a node to the cluster, the node's workloads are managed by a
default orchestrator, either Docker Swarm or Kubernetes. When you install
Docker EE, new nodes are managed by Docker Swarm, but you can change the
default orchestrator to Kubernetes in the administrator settings.
Changing the default orchestrator doesn't affect existing nodes in the cluster.
You can change the orchestrator type for individual nodes in the cluster
by navigating to the node's configuration page in the Docker EE web UI.
## Change the orchestrator for a node
You can change the current orchestrator for any node that's joined to a
Docker EE cluster. The available orchestrator types are **Kubernetes**,
**Swarm**, and **Mixed**.
The **Mixed** type enables workloads to be scheduled by both Kubernetes and
Swarm on the same node. Although you can mix orchestrator types on the same
node, this isn't recommended for production deployments because of the
likelihood of resource contention.

Change a node's orchestrator type on the **Edit Node** page:
1. Log in to the Docker EE web UI with an administrator account.
2. Navigate to the **Nodes** page, and click the node that you want to assign
to a different orchestrator.
3. In the details pane, click **Configure** to open the **Edit Node** page.
4. In the **Orchestrator properties** section, click the orchestrator type
for the node.
5. Click **Save** to assign the node to the selected orchestrator.
![](../images/change-orchestrator-for-node-1.png){: .with-border}
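You can make the same assignment from the CLI by using the orchestrator node
labels, which are covered in detail in the CLI tab of this page; for example,
to schedule Kubernetes workloads on a node:

```bash
# Run on a manager node.
docker node update --label-add com.docker.ucp.orchestrator.kubernetes=true <node-id>
```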
## What happens when you change a node's orchestrator
When you change the orchestrator type for a node, existing workloads are
evicted, and they're not migrated to the new orchestrator automatically.
If you want the workloads to be scheduled by the new orchestrator, you must
migrate them manually. For example, if you deploy WordPress on a Swarm
node, and you change the node's orchestrator type to Kubernetes, Docker EE
doesn't migrate the workload, and WordPress continues running on Swarm. In
this case, you must migrate your WordPress deployment to Kubernetes manually.
The following table summarizes the results of changing a node's orchestrator.
| Workload | On orchestrator change |
| ------------------------------------------- | ---------------------------------------------------------- |
| Containers | Container continues running in node |
| Docker service | Node is drained, and tasks are rescheduled to another node |
| Pods and other imperative resources | Continue running in node |
| Deployments and other declarative resources | Might change, but for now, continue running in node |
If a node is running containers and you change the node's orchestrator to
Kubernetes, these containers continue running, but Kubernetes isn't aware of
them, so you're in the same situation as if you were running the node in
`Mixed` mode.
> Be careful when mixing orchestrators on a node.
>
> When you change a node's orchestrator, you can choose to run the node in a
> mixed mode, with both Kubernetes and Swarm workloads. The `Mixed` type
> is not intended for production use, and it may impact existing workloads
> on the node.
>
> This is because the two orchestrator types have different views of the node's
> resources, and they don't know about each other's workloads. One orchestrator
> can schedule a workload without knowing that the node's resources are already
> committed to another workload that was scheduled by the other orchestrator.
> When this happens, the node could run out of memory or other resources.
>
> For this reason, we recommend against mixing orchestrators on a production
> node.
{: .warning}
## Set the default orchestrator type for new nodes
You can set the default orchestrator for new nodes to **Kubernetes** or
**Swarm**.
To set the orchestrator for new nodes:
1. Log in to the Docker EE web UI with an administrator account.
2. Open the **Admin Settings** page, and in the left pane, click **Scheduler**.
3. Under **Set orchestrator type for new nodes**, click **Swarm**
or **Kubernetes**.
4. Click **Save**.
![](../images/join-nodes-to-cluster-1.png){: .with-border}
From now on, when you join a node to the cluster, new workloads on the node
are scheduled by the specified orchestrator type. Existing nodes in the cluster
aren't affected.
Once a node is joined to the cluster, you can
[change the orchestrator](#change-the-orchestrator-for-a-node) that schedules its
workloads.
## Choosing the orchestrator type
The workloads on your cluster can be scheduled by Kubernetes or by Swarm, or
the cluster can be mixed, running both orchestrator types. If you choose to
run a mixed cluster, keep in mind that the orchestrators don't know about
each other, and there's no coordination between them.
We recommend that you make the decision about orchestration when you set up the
cluster initially. Commit to Kubernetes or Swarm on all nodes, or assign each
node individually to a specific orchestrator. Once you start deploying workloads,
avoid changing the orchestrator setting. If you do change the orchestrator for a
node, your workloads are evicted, and you must deploy them again through the
new orchestrator.
{% endif %}
{% endif %}
{% if include.cli %}
Set the orchestrator on a node by assigning the orchestrator labels,
`com.docker.ucp.orchestrator.swarm` or `com.docker.ucp.orchestrator.kubernetes`,
to `true`.
{% if include.version=="docker-cli-linux" %}
To schedule Swarm workloads on a node:
```bash
docker node update --label-add com.docker.ucp.orchestrator.swarm=true <node-id>
```
To schedule Kubernetes workloads on a node:
```bash
docker node update --label-add com.docker.ucp.orchestrator.kubernetes=true <node-id>
```
To schedule Kubernetes and Swarm workloads on a node:
```bash
docker node update --label-add com.docker.ucp.orchestrator.swarm=true <node-id>
docker node update --label-add com.docker.ucp.orchestrator.kubernetes=true <node-id>
```
> Mixed nodes
>
> Scheduling both Kubernetes and Swarm workloads on a node is not recommended
> for production deployments, because of the likelihood of resource contention.
{: .warning}
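To take a node back out of an orchestrator, the standard `--label-rm` flag
removes the corresponding label; this sketch assumes UCP treats the label's
absence the same as `false`:

```bash
# Stop scheduling Swarm workloads on the node. Kubernetes workloads
# continue to be scheduled if its label is still true.
docker node update --label-rm com.docker.ucp.orchestrator.swarm <node-id>
```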
Check the value of the orchestrator label by inspecting the node:
```bash
docker node inspect <node-id> | grep -i orchestrator
```
The `docker node inspect` command returns the node's configuration, including
the orchestrator:
```bash
"com.docker.ucp.orchestrator.kubernetes": "true"
```
> Orchestrator label
>
> The `com.docker.ucp.orchestrator` label isn't displayed in the **Labels**
> list for a node in the Docker EE web UI.
{: .important}
## Set the default orchestrator type for new nodes
The default orchestrator for new nodes is a setting in the Docker EE
configuration file:
```
default_node_orchestrator = "swarm"
```
The value can be `swarm` or `kubernetes`.
[Learn to set up Docker EE by using a config file](UCP configuration file.md).
{% endif %}
{% endif %}