Merge pull request #8460 from bermudezmt/bermudezmt-ucp-backport

Fix broken image links
paigehargrave 2019-03-13 17:57:23 -07:00 committed by GitHub
commit ede90a2b05
GPG Key ID: 4AEE18F83AFDEB23
8 changed files with 12 additions and 12 deletions

View File

@@ -34,7 +34,7 @@ within the Universal Control Plane dashboard or at the UCP API endpoint [version
From the UCP dashboard, click on **About Docker EE** within the **Admin** menu in the top left corner
of the dashboard. Then navigate to **Kubernetes**.
-![Find Kubernetes version](../images/kubernetes-version.png){: .with-border}
+![Find Kubernetes version](../../images/kubernetes-version.png){: .with-border}
Once you have the Kubernetes version, install the kubectl client for the relevant
operating system.
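For reference, a minimal sketch of installing a matching kubectl client on a Linux amd64 machine, assuming UCP reports Kubernetes v1.11.9 (substitute the version you found above):

```bash
# Example only: replace v1.11.9 with the Kubernetes version shown by UCP.
K8S_VERSION=v1.11.9
curl -LO "https://storage.googleapis.com/kubernetes-release/release/${K8S_VERSION}/bin/linux/amd64/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin/kubectl
kubectl version --client
```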

View File

@@ -20,7 +20,7 @@ proxy services.
This is what the default configuration looks like, once you enable layer 7
routing in UCP:
-![](../images/interlock-architecture-1.svg)
+![](../../images/interlock-architecture-1.svg)
An Interlock service starts running on a manager node, an Interlock-extension
service starts running on a worker node, and two replicas of the

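Once layer 7 routing is enabled, these components can be inspected from a manager node; the name filter below assumes UCP's usual `ucp-interlock*` service naming:

```bash
# List the layer 7 routing services created by UCP (assumes the
# ucp-interlock / ucp-interlock-extension / ucp-interlock-proxy naming).
docker service ls --filter name=ucp-interlock
```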
View File

@@ -9,7 +9,7 @@ To enable support for layer 7 routing, also known as HTTP routing mesh,
log in to the UCP web UI as an administrator, navigate to the **Admin Settings**
page, and click the **Routing Mesh** option. Check the **Enable routing mesh** option.
-![http routing mesh](../../images/interlock-install-3.png){: .with-border}
+![http routing mesh](../../../images/interlock-install-3.png){: .with-border}
By default, the routing mesh service listens on port 80 for HTTP and port
8443 for HTTPS. Change the ports if you already have services that are using

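Before enabling the feature, it can be worth confirming that nothing on the nodes already listens on those ports; this is a generic host-level check, not a UCP command:

```bash
# Report anything already bound to port 80 or 8443 on this node.
sudo ss -tln | grep -E ':(80|8443)([^0-9]|$)' || echo "ports 80 and 8443 appear free"
```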
View File

@@ -9,7 +9,7 @@ The layer 7 solution that ships out of the box with UCP is highly available
and fault tolerant. It is also designed to work independently of how many
nodes you're managing with UCP.
-![production deployment](../../images/interlock-deploy-production-1.svg)
+![production deployment](../../../images/interlock-deploy-production-1.svg)
For a production-grade deployment, you should tune the default deployment to
have two nodes dedicated to running the two replicas of the

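A common way to achieve that is to label the two chosen nodes and then point the proxy's scheduling constraints at that label in the layer 7 routing configuration; the node names and the `nodetype=loadbalancer` label below are illustrative values:

```bash
# Mark two worker nodes as dedicated proxy nodes (example names and label).
docker node update --label-add nodetype=loadbalancer lb-00
docker node update --label-add nodetype=loadbalancer lb-01
```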
View File

@@ -8,7 +8,7 @@ Docker Engine running in swarm mode has a routing mesh, which makes it easy
to expose your services to the outside world. Since all nodes participate
in the routing mesh, users can access your service by contacting any node.
-![swarm routing mesh](../images/interlock-overview-1.svg)
+![swarm routing mesh](../../images/interlock-overview-1.svg)
In this example the WordPress service is listening on port 8000 of the routing
mesh. Even though the service is running on a single node, users can access
@@ -21,7 +21,7 @@ instead of IP addresses.
This functionality is made available through the Interlock component.
-![layer 7 routing](../images/interlock-overview-2.svg)
+![layer 7 routing](../../images/interlock-overview-2.svg)
In this example, users can access the WordPress service using
`http://wordpress.example.org`. Interlock takes care of routing traffic to

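A rough sketch of how a service might opt in to this routing, assuming the standard `com.docker.lb.*` service labels (the network name, service name, and image are example values):

```bash
# Create an overlay network for the application, then publish WordPress
# under http://wordpress.example.org through the layer 7 proxy.
docker network create -d overlay wordpress-net

# com.docker.lb.port points at the port the container listens on
# (80 for the official WordPress image).
docker service create \
  --name wordpress-demo \
  --network wordpress-net \
  --label com.docker.lb.hosts=wordpress.example.org \
  --label com.docker.lb.network=wordpress-net \
  --label com.docker.lb.port=80 \
  wordpress:latest
```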
View File

@@ -9,7 +9,7 @@ The default proxy service used by UCP to provide layer 7 routing is NGINX,
so when users try to access a route that hasn't been configured, they will
see the default NGINX 404 page.
-![Default NGINX page](../../images/interlock-default-service-1.png){: .with-border}
+![Default NGINX page](../../../images/interlock-default-service-1.png){: .with-border}
You can customize this by labelling a service with
`com.docker.lb.default_backend=true`. When users try to access a route that's
@@ -46,5 +46,5 @@ docker stack deploy --compose-file docker-compose.yml demo
Once users try to access a route that's not configured, they are directed
to this demo service.
-![Custom default page](../../images/interlock-default-service-2.png){: .with-border}
+![Custom default page](../../../images/interlock-default-service-2.png){: .with-border}
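A minimal sketch of a `docker-compose.yml` that could back the `demo` stack above, using an NGINX image as a stand-in for the custom page:

```yaml
version: "3.2"

services:
  demo:
    # Example image; in practice this serves your custom "not found" page.
    image: nginx:alpine
    deploy:
      replicas: 1
      labels:
        com.docker.lb.default_backend: "true"
        com.docker.lb.port: "80"
    networks:
      - demo-net

networks:
  demo-net:
    driver: overlay
```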

View File

@@ -91,5 +91,5 @@ Make sure the `/etc/hosts` file in your system has an entry mapping
`app.example.org` to the IP address of a UCP node. Once you do that, you'll be
able to start using the service from your browser.
-![browser](../../images/route-simple-app-1.png){: .with-border }
+![browser](../../../images/route-simple-app-1.png){: .with-border }
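For illustration, adding that entry could look like this (192.0.2.10 is a placeholder for the real node IP):

```bash
# Map app.example.org to one of the UCP nodes (replace the IP with a real one).
echo "192.0.2.10 app.example.org" | sudo tee -a /etc/hosts
```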

View File

@@ -35,7 +35,7 @@ the TLS connection. All traffic between the proxy and the swarm service is
not secured, so you should only use this option if you trust that no one can
monitor traffic inside services running on your datacenter.
-![TLS Termination](../../images/interlock-tls-1.png)
+![TLS Termination](../../../images/interlock-tls-1.png)
Start by getting a private key and certificate for the TLS connection. Make
sure the Common Name in the certificate matches the name where your service
@@ -119,7 +119,7 @@ Where:
* `hostname` is the name you used with the `com.docker.lb.hosts` label.
* `https-port` is the port you've configured in the [UCP settings](../deploy/index.md).
-![Browser screenshot](../../images/interlock-tls-2.png){: .with-border}
+![Browser screenshot](../../../images/interlock-tls-2.png){: .with-border}
Since we're using self-signed certificates in this example, client tools like
browsers display a warning that the connection is insecure.
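For a quick test, a self-signed certificate with the expected Common Name can be produced with OpenSSL; `app.example.org` is assumed here to be the hostname used in the `com.docker.lb.hosts` label:

```bash
# Generate a throwaway key and self-signed certificate whose CN matches
# the hostname served by the proxy.
openssl req -x509 -newkey rsa:4096 -sha256 -days 365 -nodes \
  -keyout app.example.org.key -out app.example.org.cert \
  -subj "/CN=app.example.org"
```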
@@ -148,7 +148,7 @@ was aborted.
You can also encrypt the traffic from end-users to your swarm service.
-![End-to-end encryption](../../images/interlock-tls-3.png)
+![End-to-end encryption](../../../images/interlock-tls-3.png)
To do that, deploy your swarm service using the following docker-compose.yml file:
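As a rough sketch, and assuming a hypothetical `my-org/tls-app` image that terminates TLS itself on port 8443 plus an Interlock SSL passthrough label (the `com.docker.lb.ssl_passthrough` label and all names here are assumptions), such a file could look like this:

```yaml
version: "3.2"

services:
  demo:
    # Hypothetical image that terminates TLS itself using the mounted secrets.
    image: my-org/tls-app:latest
    deploy:
      replicas: 1
      labels:
        com.docker.lb.hosts: app.example.org
        com.docker.lb.port: "8443"
        # Assumed label: ask the proxy to pass TLS through to the service.
        com.docker.lb.ssl_passthrough: "true"
    networks:
      - demo-net
    secrets:
      - source: app.example.org.cert
        target: cert.pem
      - source: app.example.org.key
        target: key.pem

networks:
  demo-net:
    driver: overlay

secrets:
  app.example.org.cert:
    file: ./app.example.org.cert
  app.example.org.key:
    file: ./app.example.org.key
```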