Moving images for docs to /images/docs and using absolute paths site-wide for images

John Mulhausen 2016-02-17 03:11:57 -08:00
parent 25d4daff37
commit 889b8d82f2
49 changed files with 49 additions and 62 deletions
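
A path rewrite like this can be sketched as a repo-wide find-and-replace. The snippet below is a hypothetical illustration of the two substitutions involved (relative `../images/` URLs in CSS becoming absolute `/images/`, and bare image names in Markdown gaining a `/images/docs/` prefix); the demo directory, file names, and sed patterns are assumptions for demonstration, not the commands actually used in this commit.

```shell
set -eu

# Scratch directory with one CSS file and one Markdown file
# containing the old-style relative image references.
mkdir -p /tmp/docs-demo && cd /tmp/docs-demo
printf '#hero { background-image: url(../images/texture.png); }\n' > style.css
printf '![Git workflow](git_workflow.png)\n' > page.md

# CSS: rewrite relative ../images/ URLs to absolute /images/ URLs.
sed -i 's|\.\./images/|/images/|g' style.css

# Markdown: prefix bare .png image targets with /images/docs/.
sed -i 's|](\([a-z_-]*\.png\))|](/images/docs/\1)|g' page.md

cat style.css page.md
```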

@@ -540,7 +540,7 @@ header {
background-color: #303030; }
#hero {
-background-image: url(../images/texture.png);
+background-image: url(/images/texture.png);
background-color: #303030;
text-align: center;
padding-left: 0;
@@ -558,7 +558,7 @@ header {
footer {
width: 100%;
-background-image: url(../images/texture.png);
+background-image: url(/images/texture.png);
background-color: #303030; }
footer main {
padding: 40px 0; }
@@ -600,7 +600,7 @@ footer {
.social a {
display: inline-block;
-background-image: url(../images/social_sprite.png);
+background-image: url(/images/social_sprite.png);
background-repeat: no-repeat;
background-size: auto;
width: 50px;
@@ -821,7 +821,7 @@ section {
white-space: nowrap;
text-indent: 50px;
overflow: hidden;
-background: #3371e3 url(../images/pencil.png) no-repeat;
+background: #3371e3 url(/images/pencil.png) no-repeat;
background-position: 1px 1px;
background-size: auto; }
#docsContent #pageTOC ul, #docsContent #pageTOC li {

[26 binary image files moved to their new locations; dimensions and file sizes unchanged (18 KiB–522 KiB)]
@@ -27,7 +27,7 @@ The steps involved are as follows:
* [Setting up master-elected Kubernetes scheduler and controller-manager daemons](#master-elected-components)
Here's what the system should look like when it's finished:
-![High availability Kubernetes diagram](high-availability/ha.png)
+![High availability Kubernetes diagram](/images/docs/ha.png)
Ready? Let's get started.

@@ -4,7 +4,7 @@ title: "Kubernetes OpenVSwitch GRE/VxLAN networking"
This document describes how OpenVSwitch is used to setup networking between pods across nodes.
The tunnel type could be GRE or VxLAN. VxLAN is preferable when large scale isolation needs to be performed within the network.
-![ovs-networking](ovs-networking.png "OVS Networking")
+![OVS Networking](/images/docs/ovs-networking.png)
The vagrant setup in Kubernetes does the following:

@@ -15,7 +15,7 @@ Below, we outline one of the more common git workflows that core developers use.
### Visual overview
-![Git workflow](git_workflow.png)
+![Git workflow](/images/docs/git_workflow.png)
### Fork the main repository

@@ -237,7 +237,7 @@ others around it will either have `v0.4-dev` or `v0.5-dev`.
The diagram below illustrates it.
-![Diagram of git commits involved in the release](releasing.png)
+![Diagram of git commits involved in the release](/images/docs/releasing.png)
After working on `v0.4-dev` and merging PR 99 we decide it is time to release
`v0.5`. So we start a new branch, create one commit to update

@@ -1,13 +1,8 @@
---
title: "Kubernetes on Azure with CoreOS and Weave"
---
## Introduction
In this guide I will demonstrate how to deploy a Kubernetes cluster to Azure cloud. You will be using CoreOS with Weave, which implements simple and secure networking, in a transparent, yet robust way. The purpose of this guide is to provide an out-of-the-box implementation that can ultimately be taken into production with little change. It will demonstrate how to provision a dedicated Kubernetes master and etcd nodes, and show how to scale the cluster with ease.
{% include pagetoc.html %}
### Prerequisites
@@ -37,7 +32,7 @@ Now, all you need to do is:
```
This script will provision a cluster suitable for production use, where there is a ring of 3 dedicated etcd nodes: 1 kubernetes master and 2 kubernetes nodes. The `kube-00` VM will be the master, your work loads are only to be deployed on the nodes, `kube-01` and `kube-02`. Initially, all VMs are single-core, to ensure a user of the free tier can reproduce it without paying extra. I will show how to add more bigger VMs later.
-![VMs in Azure](initial_cluster.png)
+![VMs in Azure](/images/docs/initial_cluster.png)
Once the creation of Azure VMs has finished, you should see the following:

[binary image removed from its old location (286 KiB)]

@@ -1,24 +1,12 @@
---
title: "Kubernetes on Azure with CoreOS and Weave"
---
{% include pagetoc.html %}
- [Introduction](#introduction)
- [Prerequisites](#prerequisites)
- [Let's go!](#lets-go)
- [Deploying the workload](#deploying-the-workload)
- [Scaling](#scaling)
- [Exposing the app to the outside world](#exposing-the-app-to-the-outside-world)
- [Next steps](#next-steps)
- [Tear down...](#tear-down)
## Introduction
In this guide I will demonstrate how to deploy a Kubernetes cluster to Azure cloud. You will be using CoreOS with [Weave](http://weave.works),
which implements simple and secure networking, in a transparent, yet robust way. The purpose of this guide is to provide an out-of-the-box
implementation that can ultimately be taken into production with little change. It will demonstrate how to provision a dedicated Kubernetes
master and etcd nodes, and show how to scale the cluster with ease.
{% include pagetoc.html %}
### Prerequisites
@@ -52,7 +40,7 @@ This script will provision a cluster suitable for production use, where there is
The `kube-00` VM will be the master, your work loads are only to be deployed on the nodes, `kube-01` and `kube-02`. Initially, all VMs are single-core, to
ensure a user of the free tier can reproduce it without paying extra. I will show how to add more bigger VMs later.
-![VMs in Azure](initial_cluster.png)
+![VMs in Azure](/images/docs/initial_cluster.png)
Once the creation of Azure VMs has finished, you should see the following:

@@ -10,8 +10,6 @@ _Note_:
There is a [bug](https://github.com/docker/docker/issues/14106) in Docker 1.7.0 that prevents this from working correctly.
Please install Docker 1.6.2 or Docker 1.7.1.
{% include pagetoc.html %}
## Prerequisites
@@ -25,7 +23,7 @@ and a _worker_ node which receives work from the master. You can repeat the pro
times to create larger clusters.
Here's a diagram of what the final result will look like:
-![Kubernetes Single Node on Docker](k8s-docker.png)
+![Kubernetes Single Node on Docker](/images/docs/k8s-docker.png)
### Bootstrap Docker

@@ -1,14 +1,11 @@
---
title: "Running Kubernetes locally via Docker"
---
## Overview
The following instructions show you how to set up a simple, single node Kubernetes cluster using Docker.
Here's a diagram of what the final result will look like:
-![Kubernetes Single Node on Docker](/{{page.version}}/docs/getting-started-guides/k8s-singlenode-docker.png)
+![Kubernetes Single Node on Docker](/images/docs/k8s-singlenode-docker.png)
{% include pagetoc.html %}

@@ -123,7 +123,7 @@ Use the username `admin` and provide the basic auth password reported by `kubect
cluster you are trying to connect to. Connecting to the Elasticsearch URL should then give the
status page for Elasticsearch.
-![Elasticsearch Status](es-browser.png)
+![Elasticsearch Status](/images/docs/es-browser.png)
You can now type Elasticsearch queries directly into the browser. Alternatively you can query Elasticsearch
from your local machine using `curl` but first you need to know what your bearer token is:
@@ -227,7 +227,7 @@ timeseries values and select `@timestamp`. On the following page select the `Dis
should be able to see the ingested logs. You can set the refresh interval to 5 seconds to have the logs
regulary refreshed. Here is a typical view of ingested logs from the Kibana viewer.
-![Kibana logs](kibana-logs.png)
+![Kibana logs](/images/docs/kibana-logs.png)
Another way to access Elasticsearch and Kibana in the cluster is to use `kubectl proxy` which will serve
a local proxy to the remote master:

@@ -168,11 +168,11 @@ This pod specification maps the directory on the host containing the Docker log
We can click on the Logs item under the Monitoring section of the Google Developer Console and select the logs for the counter container, which will be called kubernetes.counter_default_count. This identifies the name of the pod (counter), the namespace (default) and the name of the container (count) for which the log collection occurred. Using this name we can select just the logs for our counter container from the drop down menu:
-![Cloud Logging Console](cloud-logging-console.png)
+![Cloud Logging Console](/images/docs/cloud-logging-console.png)
When we view the logs in the Developer Console we observe the logs for both invocations of the container.
-![Both Logs](all-lines.png)
+![Both Logs](/images/docs/all-lines.png)
Note the first container counted to 108 and then it was terminated. When the next container image restarted the counting process resumed from 0. Similarly if we deleted the pod and restarted it we would capture the logs for all instances of the containers in the pod whenever the pod was running.
@@ -189,7 +189,7 @@ SELECT metadata.timestamp, structPayload.log
Here is some sample output:
-![BigQuery](bigquery-logging.png)
+![BigQuery](/images/docs/bigquery-logging.png)
We could also fetch the logs from Google Cloud Storage buckets to our desktop or laptop and then search them locally. The following command fetches logs for the counter pod running in a cluster which is itself in a Compute Engine project called `myproject`. Only logs for the date 2015-06-11 are fetched.

[two binary images removed from their old locations (87 KiB, 43 KiB)]

@@ -10,7 +10,7 @@ environment metadata available to running containers inside the
Kubernetes cluster. The documentation for the Kubernetes environment
is [here](/{{page.version}}/docs/user-guide/container-environment).
-![Diagram](diagram.png)
+![Diagram](/images/docs/diagram.png)
## Prerequisites

@@ -10,7 +10,7 @@ environment metadata available to running containers inside the
Kubernetes cluster. The documentation for the Kubernetes environment
is [here](/{{page.version}}/docs/user-guide/container-environment).
-![Diagram](diagram.png)
+![Diagram](/images/docs/diagram.png)
## Prerequisites

@@ -17,7 +17,7 @@ to match the observed average CPU utilization to the target specified by user.
## How does Horizontal Pod Autoscaler work?
-![Horizontal Pod Autoscaler diagram](horizontal-pod-autoscaler.png)
+![Horizontal Pod Autoscaler diagram](/images/docs/horizontal-pod-autoscaler.png)
The autoscaler is implemented as a control loop.
It periodically queries CPU utilization for the pods it targets.

[two binary images removed from their old locations (81 KiB, 87 KiB)]

@@ -7,7 +7,7 @@ Understanding how an application behaves when deployed is crucial to scaling the
Heapster is a cluster-wide aggregator of monitoring and event data. It currently supports Kubernetes natively and works on all Kubernetes setups. Heapster runs as a pod in the cluster, similar to how any Kubernetes application would run. The Heapster pod discovers all nodes in the cluster and queries usage information from the nodes' [Kubelet](https://releases.k8s.io/release-1.1/DESIGN.md#kubelet)s, the on-machine Kubernetes agent. The Kubelet itself fetches the data from [cAdvisor](https://github.com/google/cadvisor). Heapster groups the information by pod along with the relevant labels. This data is then pushed to a configurable backend for storage and visualization. Currently supported backends include [InfluxDB](http://influxdb.com/) (with [Grafana](http://grafana.org/) for visualization) and [Google Cloud Monitoring](https://cloud.google.com/monitoring/). The overall architecture of the service can be seen below:
-![overall monitoring architecture](monitoring-architecture.png)
+![overall monitoring architecture](/images/docs/monitoring-architecture.png)
Let's look at some of the other components in more detail.
@@ -17,7 +17,7 @@ cAdvisor is an open source container resource usage and performance analysis age
On most Kubernetes clusters, cAdvisor exposes a simple UI for on-machine containers on port 4194. Here is a snapshot of part of cAdvisor's UI that shows the overall machine usage:
-![cAdvisor](cadvisor.png)
+![cAdvisor](/images/docs/cadvisor.png)
### Kubelet
@@ -37,7 +37,7 @@ Here is a video showing how to monitor a Kubernetes cluster using heapster, Infl
Here is a snapshot of the default Kubernetes Grafana dashboard that shows the CPU and Memory usage of the entire cluster, individual pods and containers:
-![snapshot of the default Kubernetes Grafana dashboard](influx.png)
+![snapshot of the default Kubernetes Grafana dashboard](/images/docs/influx.png)
### Google Cloud Monitoring
@@ -49,7 +49,7 @@ Here is a video showing how to setup and run a Google Cloud Monitoring backed He
Here is a snapshot of the a Google Cloud Monitoring dashboard showing cluster-wide resource usage.
-![Google Cloud Monitoring dashboard](gcm.png)
+![Google Cloud Monitoring dashboard](/images/docs/gcm.png)
## Try it out!

@@ -160,7 +160,7 @@ The net result is that any traffic bound for the `Service` is proxied to an
appropriate backend without the clients knowing anything about Kubernetes or
`Services` or `Pods`.
-![Services overview diagram](services-overview.png)
+![Services overview diagram](/images/docs/services-overview.png)
By default, the choice of backend is round robin. Client-IP based session affinity
can be selected by setting `service.spec.sessionAffinity` to `"ClientIP"` (the
@@ -523,7 +523,7 @@ This means that `Service` owners can choose any port they want without risk of
collision. Clients can simply connect to an IP and port, without being aware
of which `Pods` they are actually accessing.
-![Services detailed diagram](services-detail.png)
+![Services detailed diagram](/images/docs/services-detail.png)
## API Object

@@ -21,8 +21,9 @@ The Kubernetes UI can be used to introspect your current cluster, such as checki
### Node Resource Usage
-After accessing Kubernetes UI, you'll see a homepage dynamically listing out all nodes in your current cluster, with related information including internal IP addresses, CPU usage, memory usage, and file systems usage.
-![Kubernetes UI home page](k8s-ui-overview.png)
+After accessing Kubernetes UI, you'll see a homepage dynamically listing out all nodes in your current cluster, with related information including internal IP addresses, CPU usage, memory usage, and file systems usage.
+![Kubernetes UI home page](/images/docs/k8s-ui-overview.png)
### Dashboard Views
@@ -30,19 +31,27 @@ Click on the "Views" button in the top-right of the page to see other views avai
#### Explore View
-The "Explore" view allows your to see the pods, replication controllers, and services in current cluster easily.
-![Kubernetes UI Explore View](k8s-ui-explore.png)
-The "Group by" dropdown list allows you to group these resources by a number of factors, such as type, name, host, etc.
-![Kubernetes UI Explore View - Group by](k8s-ui-explore-groupby.png)
-You can also create filters by clicking on the down triangle of any listed resource instances and choose which filters you want to add.
-![Kubernetes UI Explore View - Filter](k8s-ui-explore-filter.png)
-To see more details of each resource instance, simply click on it.
-![Kubernetes UI - Pod](k8s-ui-explore-poddetail.png)
+The "Explore" view allows your to see the pods, replication controllers, and services in current cluster easily.
+![Kubernetes UI Explore View](/images/docs/k8s-ui-explore.png)
+The "Group by" dropdown list allows you to group these resources by a number of factors, such as type, name, host, etc.
+![Kubernetes UI Explore View - Group by](/images/docs/k8s-ui-explore-groupby.png)
+You can also create filters by clicking on the down triangle of any listed resource instances and choose which filters you want to add.
+![Kubernetes UI Explore View - Filter](/images/docs/k8s-ui-explore-filter.png)
+To see more details of each resource instance, simply click on it.
+![Kubernetes UI - Pod](/images/docs/k8s-ui-explore-poddetail.png)
### Other Views
-Other views (Pods, Nodes, Replication Controllers, Services, and Events) simply list information about each type of resource. You can also click on any instance for more details.
-![Kubernetes UI - Nodes](k8s-ui-nodes.png)
+Other views (Pods, Nodes, Replication Controllers, Services, and Events) simply list information about each type of resource. You can also click on any instance for more details.
+![Kubernetes UI - Nodes](/images/docs/k8s-ui-nodes.png)
## More Information

[binary image removed from its old location (2.3 KiB)]