Sync published with master (#8839)

* updated Layer 7 UI image to have the correct HTTP port; updated deployment steps to make it clear what can be done via the UI and the alternative manual approach

Signed-off-by: Steve Richards <steve.james.richards@gmail.com>

* Propose 3 as the number of manager nodes (#8827)

* Propose 3 as the number of manager nodes

Propose 3 managers as the default number of manager nodes.

* Minor style updates

* typo in document and small update to image (#8837)

* Fix typos (#8650)

* remove extra 'but' on line 40 (#8626)
Maria Bermudez 2019-05-22 18:38:13 -07:00 committed by GitHub
parent e678ad3f1b
commit 52e081c767
5 changed files with 44 additions and 34 deletions


@@ -37,7 +37,7 @@ Vulnerability Database that is installed on your DTR instance. When
this database is updated, DTR reviews the indexed components for newly
discovered vulnerabilities.
DTR scans both Linux and Windows images, but by default Docker doesn't push
foreign image layers for Windows images so DTR won't be able to scan them. If
you want DTR to scan your Windows images, [configure Docker to always push image
layers](pull-and-push-images.md), and it will scan the non-foreign layers.
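A hedged sketch of that setting, assuming your DTR registry is reachable at the example address `dtr.example.org:443`: list it under `allow-nondistributable-artifacts` in the daemon's `daemon.json` and restart Docker, after which pushes include the foreign Windows layers.

```bash
# Example /etc/docker/daemon.json; the registry address is a placeholder.
$ cat /etc/docker/daemon.json
{
  "allow-nondistributable-artifacts": ["dtr.example.org:443"]
}
```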


@@ -28,14 +28,15 @@ your cluster.
For production-grade deployments, follow these rules of thumb:
* For high availability with minimal network overhead, the recommended number
of manager nodes is 3, and the recommended maximum is 5. Adding too many
manager nodes to the cluster can lead to performance degradation, because
changes to configurations must be replicated across all manager nodes. (A
quick way to check your current manager count is shown after this list.)
* When a manager node fails, the number of failures tolerated by your cluster
decreases. Don't leave that node offline for too long.
* You should distribute your manager nodes across different availability
zones. This way your cluster can continue working even if an entire
availability zone goes down.
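As a quick check of your current topology, list the swarm's nodes from a
manager and, if you are below the recommended three managers, promote workers
(the node name below is an example):

```bash
# The MANAGER STATUS column marks managers as Leader or Reachable.
$ docker node ls

# Promote an existing worker to a manager (example node name).
$ docker node promote worker-2
```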
## Where to go next

(Binary file not shown: updated screenshot of the Layer 7 Routing settings, 82 KiB.)


@@ -8,11 +8,13 @@ redirect_from:
This topic covers deploying a layer 7 routing solution into a Docker Swarm to route traffic to Swarm services. Layer 7 routing is also referred to as an HTTP routing mesh.
- [Prerequisites](#prerequisites)
- [Enable layer 7 routing via UCP](#enable-layer-7-routing-via-ucp)
- [Enable layer 7 routing manually](#enable-layer-7-routing-manually)
- [Work with the core service configuration file](#work-with-the-core-service-configuration-file)
- [Create a dedicated network for Interlock and extensions](#create-a-dedicated-network-for-interlock-and-extensions)
- [Create the Interlock service](#create-the-interlock-service)
- [Next steps](#next-steps)
## Prerequisites
@@ -20,7 +22,8 @@ This topic covers deploying a layer 7 routing solution into a Docker Swarm to ro
- Docker must be running in [Swarm mode](/engine/swarm/)
- Internet access (see [Offline installation](./offline-install.md) for installing without internet access)
## Enable layer 7 routing via UCP
By default, layer 7 routing is disabled, so you must first
enable this service from the UCP web UI.
@@ -28,7 +31,7 @@ enable this service from the UCP web UI.
2. Navigate to **Admin Settings**
3. Select **Layer 7 Routing** and then select **Enable Layer 7 Routing**
![http routing mesh](../../images/interlock-install-4.png){: .with-border}
By default, the routing mesh service listens on port 8080 for HTTP and port
8443 for HTTPS. Change the ports if you already have services that are using these ports.
@@ -55,8 +58,7 @@ start the `ucp-interlock-proxy` service.
At this point everything is ready for you to start using the layer 7 routing
service with your swarm workloads.
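As a quick sanity check (assuming you are connected to a manager node and the
services use the default `ucp-interlock` name prefix), you can confirm that the
Interlock services were created:

```bash
# List the Interlock services started by UCP; names assume the defaults.
$ docker service ls --filter name=ucp-interlock
```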
The following code sample provides a default UCP configuration (created automatically when you enable Interlock as described in [Enable layer 7 routing via UCP](#enable-layer-7-routing-via-ucp)):
```toml
ListenAddr = ":8080"
@@ -78,7 +80,7 @@ PollInterval = "3s"
ProxyStopGracePeriod = "5s"
ProxyConstraints = ["node.labels.com.docker.ucp.orchestrator.swarm==true", "node.platform.os==linux"]
PublishMode = "ingress"
PublishedPort = 8080
TargetPort = 80
PublishedSSLPort = 8443
TargetSSLPort = 443
@@ -123,7 +125,12 @@ PollInterval = "3s"
HideInfoHeaders = false
```
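To see the configuration your own deployment is using, inspect the Docker
config object that UCP created. The object name below,
`com.docker.ucp.interlock.conf-1`, is the usual default; check
`docker config ls` if yours differs:

```bash
# Find the Interlock config object and print its contents.
$ docker config ls --filter name=com.docker.ucp.interlock
$ docker config inspect --pretty com.docker.ucp.interlock.conf-1
```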
## Enable layer 7 routing manually
Interlock can also be enabled from the command line, by following the sections below.
### Work with the core service configuration file
Interlock uses a TOML file for the core service configuration. The following example takes advantage of Swarm deployment and recovery features by creating a Docker config object:
```bash
@@ -143,9 +150,9 @@ PollInterval = "3s"
ProxyStopGracePeriod = "3s"
ServiceCluster = ""
PublishMode = "ingress"
PublishedPort = 8080
TargetPort = 80
PublishedSSLPort = 8443
TargetSSLPort = 443
[Extensions.default.Config]
User = "nginx"
@@ -166,6 +173,7 @@ $> docker network create -d overlay interlock
```
### Create the Interlock service
Now you can create the Interlock service. Note the requirement to constrain it to a
manager node: the Interlock core service must have access to a Swarm manager, but the
extension and proxy services are recommended to run on workers. See the
[Production](./production.md) section for more information.
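As a rough sketch of what that looks like (the image tag, service name, and
config object name below are examples; adjust them to your environment and UCP
version):

```bash
# Create the core service, pinned to a manager node via --constraint.
$> docker service create \
    --name interlock \
    --mount src=/var/run/docker.sock,dst=/var/run/docker.sock,type=bind \
    --constraint node.role==manager \
    --config src=service.interlock.conf,dst=/config.toml \
    docker/ucp-interlock:3.0.2 -D run -c /config.toml
```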


@@ -1,5 +1,5 @@
---
title: Use macvlan networks
description: All about using macvlan to make your containers appear like physical machines on the network
keywords: network, macvlan, standalone
redirect_from:
@@ -13,25 +13,27 @@ this type of situation, you can use the `macvlan` network driver to assign a MAC
address to each container's virtual network interface, making it appear to be
a physical network interface directly connected to the physical network. In this
case, you need to designate a physical interface on your Docker host to use for
the `macvlan`, as well as the subnet and gateway of the `macvlan`. You can even
isolate your `macvlan` networks using different physical network interfaces.
Keep the following things in mind:
- It is very easy to unintentionally damage your network due to IP address
exhaustion or to "VLAN spread", which is a situation in which you have an
inappropriately large number of unique MAC addresses in your network.
- Your networking equipment needs to be able to handle "promiscuous mode",
where one physical interface can be assigned multiple MAC addresses.
- If your application can work using a bridge (on a single Docker host) or
overlay (to communicate across multiple Docker hosts), these solutions may be
better in the long term.
## Create a macvlan network
When you create a `macvlan` network, it can either be in bridge mode or 802.1q
trunk bridge mode.
- In bridge mode, `macvlan` traffic goes through a physical device on the host.
- In 802.1q trunk bridge mode, traffic goes through an 802.1q sub-interface
which Docker creates on the fly. This allows you to control routing and
@@ -39,7 +41,7 @@ trunk bridge mode.
### Bridge mode
To create a `macvlan` network which bridges with a given physical network
interface, use `--driver macvlan` with the `docker network create` command. You
also need to specify the `parent`, which is the interface the traffic will
physically go through on the Docker host.
@@ -47,18 +49,18 @@ physically go through on the Docker host.
```bash
$ docker network create -d macvlan \
--subnet=172.16.86.0/24 \
--gateway=172.16.86.1 \
-o parent=eth0 pub_net
```
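Once the network exists, you can attach a container to it and confirm that it
picks up an address from the `macvlan` subnet (the container name and image
below are examples):

```bash
# Start a test container on pub_net and read back its IP address.
$ docker run --rm -dit --network pub_net --name macvlan-test alpine ash
$ docker inspect macvlan-test \
  --format '{{.NetworkSettings.Networks.pub_net.IPAddress}}'
```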
If you need to exclude IP addresses from being used in the `macvlan` network, such
as when a given IP address is already in use, use `--aux-addresses`:
```bash
$ docker network create -d macvlan \
--subnet=192.168.32.0/24 \
--ip-range=192.168.32.128/25 \
--gateway=192.168.32.254 \
--aux-address="my-router=192.168.32.129" \
-o parent=eth0 macnet32
```
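You can verify the subnet, IP range, and reserved auxiliary address with
`docker network inspect`:

```bash
# Show only the IPAM configuration of the new network.
$ docker network inspect macnet32 --format '{{json .IPAM.Config}}'
```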
@@ -70,7 +72,7 @@ Docker interprets that as a sub-interface of `eth0` and creates the sub-interface
automatically.
```bash
$ docker network create -d macvlan \
--subnet=192.168.50.0/24 \
--gateway=192.168.50.1 \
-o parent=eth0.50 macvlan50
@@ -85,26 +87,25 @@ instead, and get an L2 bridge. Specify `-o ipvlan_mode=l2`.
$ docker network create -d ipvlan \
--subnet=192.168.210.0/24 \
--subnet=192.168.212.0/24 \
--gateway=192.168.210.254 \
--gateway=192.168.212.254 \
-o ipvlan_mode=l2 ipvlan210
```
## Use IPv6
If you have [configured the Docker daemon to allow IPv6](/config/daemon/ipv6.md),
you can use dual-stack IPv4/IPv6 `macvlan` networks.
```bash
$ docker network create -d macvlan \
--subnet=192.168.216.0/24 --subnet=192.168.218.0/24 \
--gateway=192.168.216.1 --gateway=192.168.218.1 \
--subnet=2001:db8:abc8::/64 --gateway=2001:db8:abc8::10 \
-o parent=eth0.218 \
-o macvlan_mode=bridge macvlan216
```
## Next steps
- Go through the [macvlan networking tutorial](/network/network-tutorial-macvlan.md)