diff --git a/datacenter/ucp/3.0/guides/user/access-ucp/kubectl.md b/datacenter/ucp/3.0/guides/user/access-ucp/kubectl.md
index f7d73a825f..e64e464f2b 100644
--- a/datacenter/ucp/3.0/guides/user/access-ucp/kubectl.md
+++ b/datacenter/ucp/3.0/guides/user/access-ucp/kubectl.md
@@ -34,7 +34,7 @@ within the Universal Control Plane dashboard or at the UCP API endpoint [version
 From the UCP dashboard, click on **About Docker EE** within the **Admin** menu
 in the top left corner of the dashboard. Then navigate to **Kubernetes**.
 
-![Find Kubernetes version](../images/kubernetes-version.png){: .with-border}
+![Find Kubernetes version](../../images/kubernetes-version.png){: .with-border}
 
 Once you have the Kubernetes version, install the kubectl client for the
 relevant operating system.
diff --git a/datacenter/ucp/3.0/guides/user/interlock/architecture.md b/datacenter/ucp/3.0/guides/user/interlock/architecture.md
index 3b29d88561..801fe38188 100644
--- a/datacenter/ucp/3.0/guides/user/interlock/architecture.md
+++ b/datacenter/ucp/3.0/guides/user/interlock/architecture.md
@@ -20,7 +20,7 @@ proxy services.
 This is what the default configuration looks like, once you enable layer 7
 routing in UCP:
 
-![](../images/interlock-architecture-1.svg)
+![](../../images/interlock-architecture-1.svg)
 
 An Interlock service starts running on a manager node, an Interlock-extension
 service starts running on a worker node, and two replicas of the
diff --git a/datacenter/ucp/3.0/guides/user/interlock/deploy/index.md b/datacenter/ucp/3.0/guides/user/interlock/deploy/index.md
index 6cda7383c7..720d15fc67 100644
--- a/datacenter/ucp/3.0/guides/user/interlock/deploy/index.md
+++ b/datacenter/ucp/3.0/guides/user/interlock/deploy/index.md
@@ -9,7 +9,7 @@ To enable support for layer 7 routing, also known as HTTP routing mesh, log in
 to the UCP web UI as an administrator, navigate to the **Admin Settings** page,
 and click the **Routing Mesh** option. Check the **Enable routing mesh** option.
 
-![http routing mesh](../../images/interlock-install-3.png){: .with-border}
+![http routing mesh](../../../images/interlock-install-3.png){: .with-border}
 
 By default, the routing mesh service listens on port 80 for HTTP and port
 8443 for HTTPS. Change the ports if you already have services that are using
diff --git a/datacenter/ucp/3.0/guides/user/interlock/deploy/production.md b/datacenter/ucp/3.0/guides/user/interlock/deploy/production.md
index fb17de7a92..24d8a0ef10 100644
--- a/datacenter/ucp/3.0/guides/user/interlock/deploy/production.md
+++ b/datacenter/ucp/3.0/guides/user/interlock/deploy/production.md
@@ -9,7 +9,7 @@ The layer 7 solution that ships out of the box with UCP is highly available
 and fault tolerant. It is also designed to work independently of how many
 nodes you're managing with UCP.
 
-![production deployment](../../images/interlock-deploy-production-1.svg)
+![production deployment](../../../images/interlock-deploy-production-1.svg)
 
 For a production-grade deployment, you should tune the default deployment
 to have two nodes dedicated for running the two replicas of the
diff --git a/datacenter/ucp/3.0/guides/user/interlock/index.md b/datacenter/ucp/3.0/guides/user/interlock/index.md
index cd63d61bfe..2deef44542 100644
--- a/datacenter/ucp/3.0/guides/user/interlock/index.md
+++ b/datacenter/ucp/3.0/guides/user/interlock/index.md
@@ -8,7 +8,7 @@ Docker Engine running in swarm mode has a routing mesh, which makes it easy
 to expose your services to the outside world. Since all nodes participate in
 the routing mesh, users can access your service by contacting any node.
 
-![swarm routing mess](../images/interlock-overview-1.svg)
+![swarm routing mesh](../../images/interlock-overview-1.svg)
 
 In this example the WordPress service is listening on port 8000 of the routing
 mesh. Even though the service is running on a single node, users can access
@@ -21,7 +21,7 @@ instead of IP addresses. This functionality is made available through the
 Interlock component.
 
-![layer 7 routing](../images/interlock-overview-2.svg)
+![layer 7 routing](../../images/interlock-overview-2.svg)
 
 In this example, users can access the WordPress service using
 `http://wordpress.example.org`. Interlock takes care of routing traffic to
diff --git a/datacenter/ucp/3.0/guides/user/interlock/usage/default-service.md b/datacenter/ucp/3.0/guides/user/interlock/usage/default-service.md
index 0602d8c1c9..fffe86cd8a 100644
--- a/datacenter/ucp/3.0/guides/user/interlock/usage/default-service.md
+++ b/datacenter/ucp/3.0/guides/user/interlock/usage/default-service.md
@@ -9,7 +9,7 @@ The default proxy service used by UCP to provide layer 7 routing is NGINX,
 so when users try to access a route that hasn't been configured, they will
 see the default NGINX 404 page.
 
-![Default NGINX page](../../images/interlock-default-service-1.png){: .with-border}
+![Default NGINX page](../../../images/interlock-default-service-1.png){: .with-border}
 
 You can customize this by labelling a service with
 `com.docker.lb.defaul_backend=true`. When users try to access a route that's
@@ -46,5 +46,5 @@ docker stack deploy --compose-file docker-compose.yml demo
 
 Once users try to access a route that's not configured, they are directed to
 this demo service.
 
-![Custom default page](../../images/interlock-default-service-2.png){: .with-border}
+![Custom default page](../../../images/interlock-default-service-2.png){: .with-border}
diff --git a/datacenter/ucp/3.0/guides/user/interlock/usage/index.md b/datacenter/ucp/3.0/guides/user/interlock/usage/index.md
index 4895b67160..82f0922cf3 100644
--- a/datacenter/ucp/3.0/guides/user/interlock/usage/index.md
+++ b/datacenter/ucp/3.0/guides/user/interlock/usage/index.md
@@ -91,5 +91,5 @@ Make sure the `/etc/hosts` file in your system has an entry mapping
 `app.example.org` to the IP address of a UCP node. Once you do that, you'll
 be able to start using the service from your browser.
 
-![browser](../../images/route-simple-app-1.png){: .with-border }
+![browser](../../../images/route-simple-app-1.png){: .with-border }
 
diff --git a/datacenter/ucp/3.0/guides/user/interlock/usage/tls.md b/datacenter/ucp/3.0/guides/user/interlock/usage/tls.md
index 32f7e9910e..5e23c44ddc 100644
--- a/datacenter/ucp/3.0/guides/user/interlock/usage/tls.md
+++ b/datacenter/ucp/3.0/guides/user/interlock/usage/tls.md
@@ -35,7 +35,7 @@ the TLS connection. All traffic between the proxy and the swarm service is
 not secured, so you should only use this option if you trust that no one can
 monitor traffic inside services running on your datacenter.
 
-![TLS Termination](../../images/interlock-tls-1.png)
+![TLS Termination](../../../images/interlock-tls-1.png)
 
 Start by getting a private key and certificate for the TLS connection. Make
 sure the Common Name in the certificate matches the name where your service
@@ -119,7 +119,7 @@ Where:
 * `hostname` is the name you used with the `com.docker.lb.hosts` label.
 * `https-port` is the port you've configured in the [UCP settings](../deploy/index.md).
 
-![Browser screenshot](../../images/interlock-tls-2.png){: .with-border}
+![Browser screenshot](../../../images/interlock-tls-2.png){: .with-border}
 
 Since we're using self-sign certificates in this example, client tools like
 browsers display a warning that the connection is insecure.
@@ -148,7 +148,7 @@ was aborted.
 
 You can also encrypt the traffic from end-users to your swarm service.
 
-![End-to-end encryption](../../images/interlock-tls-3.png)
+![End-to-end encryption](../../../images/interlock-tls-3.png)
 
 To do that, deploy your swarm service using the following docker-compose.yml
 file: