mirror of https://github.com/docker/docs.git
Delete index.html
This commit is contained in: parent 9df2f34b75, commit 3980d971a4
@@ -1,59 +0,0 @@
<p>Docker Universal Control Plane is designed for high availability (HA). You can
join multiple manager nodes to the cluster, so that if one manager node fails,
another can automatically take its place without impact to the cluster.</p>

<p>Having multiple manager nodes in your cluster allows you to:</p>

<ul>
  <li>Handle manager node failures.</li>
  <li>Load-balance user requests across all manager nodes.</li>
</ul>
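
<p>With more than one manager joined, you can check the health and role of each
node from any manager using the standard Docker CLI. A minimal sketch, assuming
you have a terminal session on a manager node (the exact output columns vary by
Docker version):</p>

<pre><code># List every node in the cluster. The MANAGER STATUS column shows the
# manager role (for example "Leader" or "Reachable"); worker nodes show
# an empty value in that column.
docker node ls
</code></pre>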

<h2 id="size-your-deployment">Size your deployment</h2>

<p>To make the cluster tolerant of more failures, add additional manager nodes
(replicas) to your cluster.</p>

<table>
  <thead>
    <tr>
      <th style="text-align: center">Manager nodes</th>
      <th style="text-align: center">Failures tolerated</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td style="text-align: center">1</td>
      <td style="text-align: center">0</td>
    </tr>
    <tr>
      <td style="text-align: center">3</td>
      <td style="text-align: center">1</td>
    </tr>
    <tr>
      <td style="text-align: center">5</td>
      <td style="text-align: center">2</td>
    </tr>
  </tbody>
</table>
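
<p>The numbers in this table follow the usual majority-quorum rule used by the
managers' consensus protocol: with N managers, a majority of (N / 2) + 1 must
stay reachable, so at most (N - 1) / 2 of them (rounded down) can fail. A small
shell sketch of that arithmetic (the loop and variable are purely illustrative,
not Docker tooling):</p>

<pre><code># Majority-quorum arithmetic: with N managers, a majority of
# (N / 2) + 1 must stay reachable, so up to (N - 1) / 2 can fail.
for N in 1 3 5 7; do
  echo "managers=$N  quorum=$(( N / 2 + 1 ))  failures_tolerated=$(( (N - 1) / 2 ))"
done
</code></pre>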

<p>For production-grade deployments, follow these rules of thumb:</p>

<ul>
  <li>When a manager node fails, the number of failures your cluster can tolerate
  decreases. Don’t leave that node offline for too long.</li>
  <li>Distribute your manager nodes across different availability zones, so your
  cluster can continue working even if an entire availability zone goes down
  (see the sketch after this list).</li>
  <li>Adding many manager nodes to the cluster can degrade performance, because
  configuration changes need to be replicated across all manager nodes. Seven
  manager nodes is the advisable maximum.</li>
</ul>
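
<p>If you want to keep track of which availability zone each node runs in, one
option is to tag nodes with a label through the standard Docker CLI and use it
later in scheduling constraints. This is a sketch rather than a UCP requirement;
the label key, zone name, and node name below are made up for illustration:</p>

<pre><code># Record the availability zone of a manager node as a node label.
# "zone", "us-east-1a", and "manager-node-1" are illustrative values.
docker node update --label-add zone=us-east-1a manager-node-1

# Verify that the label was applied.
docker node inspect --format '{{ .Spec.Labels }}' manager-node-1
</code></pre>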

<h2 id="where-to-go-next">Where to go next</h2>

<ul>
  <li><a href="join-linux-nodes-to-cluster.md">Join nodes to your cluster</a></li>
  <li><a href="join-windows-nodes-to-cluster.md">Join Windows worker nodes to your cluster</a></li>
  <li><a href="use-a-load-balancer.md">Use a load balancer</a></li>
</ul>