mirror of https://github.com/docker/docs.git
Move externally hosted images to the repo, and update image links
Parent: 4380320118
Commit: 0abb3de031
Three binary image files added (not shown): 141 KiB, 130 KiB, 93 KiB.
@@ -138,7 +138,7 @@ For example, if your cluster is running in the Ireland Region of Amazon Web
 Services (eu-west-1) and you configure three swarm managers (1 x primary, 2 x
 secondary), you should place one in each availability zone as shown below.
 
-![]()
+![]()
 
 In this configuration, the swarm cluster can survive the loss of any two
 availability zones. For your applications to survive such failures, they must be
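The hunk above describes placing three standalone Swarm managers (1 x primary, 2 x secondary), one per availability zone. As a hedged sketch only, this is roughly how such managers were started in the legacy standalone Swarm with replication enabled against a shared Consul discovery backend; the addresses (`<manager_a_ip>`, `<consul_ip>`) are placeholders, not values from this commit:

```shell
# Sketch: legacy (standalone) Docker Swarm, one manager per availability zone.
# All addresses are placeholders; consul://<consul_ip>:8500 is an assumed
# shared discovery backend.

# Manager in eu-west-1a (primary vs. secondary is decided by leader election)
docker run -d -p 4000:4000 swarm manage -H :4000 \
  --replication --advertise <manager_a_ip>:4000 \
  consul://<consul_ip>:8500

# Repeat on one host in eu-west-1b and one in eu-west-1c, each with its own
# --advertise address; the three managers elect 1 primary and 2 secondaries.
```

Because each manager sits in a different availability zone, losing a zone removes at most one manager and the remaining pair re-elects a primary.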
@@ -186,7 +186,7 @@ domains (availability zones). It also has swarm nodes balanced across all three
 failure domains. The loss of two availability zones in the configuration shown
 below does not cause the swarm cluster to go down.
 
-![]()
+![]()
 
 It is possible to share the same Consul, etcd, or Zookeeper containers between
 the swarm discovery and Engine container networks. However, for best
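The hunk above notes that the same Consul (or etcd/Zookeeper) containers can back both swarm discovery and Engine container networks. A minimal configuration sketch, assuming a single Consul instance serves both roles; `<consul_ip>` and `eth0` are placeholders for your environment:

```shell
# Sketch: point each Docker Engine at the same Consul instance used for
# swarm discovery, so multi-host (overlay) networking and discovery share
# one key-value store. Placeholders: <consul_ip>, eth0.
dockerd \
  --cluster-store=consul://<consul_ip>:8500 \
  --cluster-advertise=eth0:2376
```

As the surrounding text cautions, sharing one KV store this way trades isolation for simplicity; for best results the stores can be kept separate.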
@@ -199,7 +199,7 @@ You can architect and build swarm clusters that stretch across multiple cloud
 providers, and even across public cloud and on premises infrastructures. The
 diagram below shows an example swarm cluster stretched across AWS and Azure.
 
-![]()
+![]()
 
 While such architectures may appear to provide the ultimate in availability,
 there are several factors to consider. Network latency can be problematic, as