Moving images for docs to /images/docs and using absolute paths site-wide for images
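The rewrite itself is mechanical: `url(../images/...)` references in the stylesheets become root-absolute `url(/images/...)`, and relative image links in the docs Markdown become `/images/docs/...`. For illustration only, a rewrite of this kind could be scripted roughly as in the sketch below; the file globs, regexes, and the assumption that the image files have already been moved are mine, not part of this change.

```python
#!/usr/bin/env python3
"""Illustrative sketch of the image-path rewrite (not part of this change).

Assumptions: the repo root is the working directory, doc images have already
been moved under images/docs/, and only *.scss/*.css and *.md files need it.
"""
import re
from pathlib import Path

# url(../images/foo.png) -> url(/images/foo.png) in stylesheets
CSS_URL = re.compile(r"url\(\.\./images/")

# Relative Markdown image links, e.g. ; absolute and external links are skipped
MD_IMG = re.compile(r"!\[([^\]]*)\]\((?!https?://|/)([^)]+)\)")


def rewrite(path: Path) -> None:
    text = path.read_text(encoding="utf-8")
    if path.suffix in {".scss", ".css"}:
        new = CSS_URL.sub("url(/images/", text)
    else:
        # point relative doc images at /images/docs/<basename>
        new = MD_IMG.sub(lambda m: f"![{m.group(1)}](/images/docs/{Path(m.group(2)).name})", text)
    if new != text:
        path.write_text(new, encoding="utf-8")
        print(f"rewrote {path}")


if __name__ == "__main__":
    for pattern in ("**/*.scss", "**/*.css", "**/*.md"):
        for p in Path(".").glob(pattern):
            rewrite(p)
```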
@@ -540,7 +540,7 @@ header {
   background-color: #303030; }

 #hero {
-  background-image: url(../images/texture.png);
+  background-image: url(/images/texture.png);
   background-color: #303030;
   text-align: center;
   padding-left: 0;
@@ -558,7 +558,7 @@ header {

 footer {
   width: 100%;
-  background-image: url(../images/texture.png);
+  background-image: url(/images/texture.png);
   background-color: #303030; }
 footer main {
   padding: 40px 0; }
@@ -600,7 +600,7 @@ footer {

 .social a {
   display: inline-block;
-  background-image: url(../images/social_sprite.png);
+  background-image: url(/images/social_sprite.png);
   background-repeat: no-repeat;
   background-size: auto;
   width: 50px;
@@ -821,7 +821,7 @@ section {
   white-space: nowrap;
   text-indent: 50px;
   overflow: hidden;
-  background: #3371e3 url(../images/pencil.png) no-repeat;
+  background: #3371e3 url(/images/pencil.png) no-repeat;
   background-position: 1px 1px;
   background-size: auto; }
 #docsContent #pageTOC ul, #docsContent #pageTOC li {
[Image diff viewer: 26 binary images renamed with no content change — width, height, and size identical before and after.]
@@ -27,7 +27,7 @@ The steps involved are as follows:
 * [Setting up master-elected Kubernetes scheduler and controller-manager daemons](#master-elected-components)

 Here's what the system should look like when it's finished:
-
+

 Ready? Let's get started.

@@ -4,7 +4,7 @@ title: "Kubernetes OpenVSwitch GRE/VxLAN networking"
 This document describes how OpenVSwitch is used to setup networking between pods across nodes.
 The tunnel type could be GRE or VxLAN. VxLAN is preferable when large scale isolation needs to be performed within the network.

-
+

 The vagrant setup in Kubernetes does the following:

@@ -15,7 +15,7 @@ Below, we outline one of the more common git workflows that core developers use.

 ### Visual overview

-
+

 ### Fork the main repository

@@ -237,7 +237,7 @@ others around it will either have `v0.4-dev` or `v0.5-dev`.

 The diagram below illustrates it.

-
+

 After working on `v0.4-dev` and merging PR 99 we decide it is time to release
 `v0.5`. So we start a new branch, create one commit to update
@@ -1,13 +1,8 @@
 ---
 title: "Kubernetes on Azure with CoreOS and Weave"
 ---

-## Introduction
-
 In this guide I will demonstrate how to deploy a Kubernetes cluster to Azure cloud. You will be using CoreOS with Weave, which implements simple and secure networking, in a transparent, yet robust way. The purpose of this guide is to provide an out-of-the-box implementation that can ultimately be taken into production with little change. It will demonstrate how to provision a dedicated Kubernetes master and etcd nodes, and show how to scale the cluster with ease.
-
-

 {% include pagetoc.html %}
-
 ### Prerequisites
@@ -37,7 +32,7 @@ Now, all you need to do is:
 ```
 This script will provision a cluster suitable for production use, where there is a ring of 3 dedicated etcd nodes: 1 kubernetes master and 2 kubernetes nodes. The `kube-00` VM will be the master, your work loads are only to be deployed on the nodes, `kube-01` and `kube-02`. Initially, all VMs are single-core, to ensure a user of the free tier can reproduce it without paying extra. I will show how to add more bigger VMs later.

-
+

 Once the creation of Azure VMs has finished, you should see the following:

[Image diff viewer: 1 binary image deleted (286 KiB).]
@@ -1,24 +1,12 @@
 ---
 title: "Kubernetes on Azure with CoreOS and Weave"
 ---

-- [Introduction](#introduction)
-- [Prerequisites](#prerequisites)
-- [Let's go!](#lets-go)
-- [Deploying the workload](#deploying-the-workload)
-- [Scaling](#scaling)
-- [Exposing the app to the outside world](#exposing-the-app-to-the-outside-world)
-- [Next steps](#next-steps)
-- [Tear down...](#tear-down)
-
-## Introduction
-
+{% include pagetoc.html %}
+
 In this guide I will demonstrate how to deploy a Kubernetes cluster to Azure cloud. You will be using CoreOS with [Weave](http://weave.works),
 which implements simple and secure networking, in a transparent, yet robust way. The purpose of this guide is to provide an out-of-the-box
 implementation that can ultimately be taken into production with little change. It will demonstrate how to provision a dedicated Kubernetes
 master and etcd nodes, and show how to scale the cluster with ease.

-{% include pagetoc.html %}
-
 ### Prerequisites
-
@@ -52,7 +40,7 @@ This script will provision a cluster suitable for production use, where there is
 The `kube-00` VM will be the master, your work loads are only to be deployed on the nodes, `kube-01` and `kube-02`. Initially, all VMs are single-core, to
 ensure a user of the free tier can reproduce it without paying extra. I will show how to add more bigger VMs later.

-
+

 Once the creation of Azure VMs has finished, you should see the following:

@@ -10,8 +10,6 @@ _Note_:
 There is a [bug](https://github.com/docker/docker/issues/14106) in Docker 1.7.0 that prevents this from working correctly.
 Please install Docker 1.6.2 or Docker 1.7.1.
-
-

 {% include pagetoc.html %}

 ## Prerequisites
@@ -25,7 +23,7 @@ and a _worker_ node which receives work from the master. You can repeat the pro
 times to create larger clusters.

 Here's a diagram of what the final result will look like:
-
+

 ### Bootstrap Docker

@@ -1,14 +1,11 @@
 ---
 title: "Running Kubernetes locally via Docker"
 ---

-## Overview
-
 The following instructions show you how to set up a simple, single node Kubernetes cluster using Docker.

 Here's a diagram of what the final result will look like:

-
+

 {% include pagetoc.html %}
-
@@ -123,7 +123,7 @@ Use the username `admin` and provide the basic auth password reported by `kubect
 cluster you are trying to connect to. Connecting to the Elasticsearch URL should then give the
 status page for Elasticsearch.

-
+

 You can now type Elasticsearch queries directly into the browser. Alternatively you can query Elasticsearch
 from your local machine using `curl` but first you need to know what your bearer token is:
@@ -227,7 +227,7 @@ timeseries values and select `@timestamp`. On the following page select the `Dis
 should be able to see the ingested logs. You can set the refresh interval to 5 seconds to have the logs
 regulary refreshed. Here is a typical view of ingested logs from the Kibana viewer.

-
+

 Another way to access Elasticsearch and Kibana in the cluster is to use `kubectl proxy` which will serve
 a local proxy to the remote master:
@@ -168,11 +168,11 @@ This pod specification maps the directory on the host containing the Docker log

 We can click on the Logs item under the Monitoring section of the Google Developer Console and select the logs for the counter container, which will be called kubernetes.counter_default_count. This identifies the name of the pod (counter), the namespace (default) and the name of the container (count) for which the log collection occurred. Using this name we can select just the logs for our counter container from the drop down menu:

-
+

 When we view the logs in the Developer Console we observe the logs for both invocations of the container.

-
+

 Note the first container counted to 108 and then it was terminated. When the next container image restarted the counting process resumed from 0. Similarly if we deleted the pod and restarted it we would capture the logs for all instances of the containers in the pod whenever the pod was running.

@@ -189,7 +189,7 @@ SELECT metadata.timestamp, structPayload.log

 Here is some sample output:

-
+

 We could also fetch the logs from Google Cloud Storage buckets to our desktop or laptop and then search them locally. The following command fetches logs for the counter pod running in a cluster which is itself in a Compute Engine project called `myproject`. Only logs for the date 2015-06-11 are fetched.

[Image diff viewer: 2 binary images deleted (87 KiB and 43 KiB).]
@@ -10,7 +10,7 @@ environment metadata available to running containers inside the
 Kubernetes cluster. The documentation for the Kubernetes environment
 is [here](/{{page.version}}/docs/user-guide/container-environment).

-
+

 ## Prerequisites

@@ -10,7 +10,7 @@ environment metadata available to running containers inside the
 Kubernetes cluster. The documentation for the Kubernetes environment
 is [here](/{{page.version}}/docs/user-guide/container-environment).

-
+

 ## Prerequisites

@@ -17,7 +17,7 @@ to match the observed average CPU utilization to the target specified by user.

 ## How does Horizontal Pod Autoscaler work?

-
+

 The autoscaler is implemented as a control loop.
 It periodically queries CPU utilization for the pods it targets.
[Image diff viewer: 2 binary images deleted (81 KiB and 87 KiB).]
@@ -7,7 +7,7 @@ Understanding how an application behaves when deployed is crucial to scaling the

 Heapster is a cluster-wide aggregator of monitoring and event data. It currently supports Kubernetes natively and works on all Kubernetes setups. Heapster runs as a pod in the cluster, similar to how any Kubernetes application would run. The Heapster pod discovers all nodes in the cluster and queries usage information from the nodes' [Kubelet](https://releases.k8s.io/release-1.1/DESIGN.md#kubelet)s, the on-machine Kubernetes agent. The Kubelet itself fetches the data from [cAdvisor](https://github.com/google/cadvisor). Heapster groups the information by pod along with the relevant labels. This data is then pushed to a configurable backend for storage and visualization. Currently supported backends include [InfluxDB](http://influxdb.com/) (with [Grafana](http://grafana.org/) for visualization) and [Google Cloud Monitoring](https://cloud.google.com/monitoring/). The overall architecture of the service can be seen below:

-
+

 Let's look at some of the other components in more detail.

@@ -17,7 +17,7 @@ cAdvisor is an open source container resource usage and performance analysis age

 On most Kubernetes clusters, cAdvisor exposes a simple UI for on-machine containers on port 4194. Here is a snapshot of part of cAdvisor's UI that shows the overall machine usage:

-
+

 ### Kubelet

@@ -37,7 +37,7 @@ Here is a video showing how to monitor a Kubernetes cluster using heapster, Infl

 Here is a snapshot of the default Kubernetes Grafana dashboard that shows the CPU and Memory usage of the entire cluster, individual pods and containers:

-
+

 ### Google Cloud Monitoring

@@ -49,7 +49,7 @@ Here is a video showing how to setup and run a Google Cloud Monitoring backed He

 Here is a snapshot of the a Google Cloud Monitoring dashboard showing cluster-wide resource usage.

-
+

 ## Try it out!

@@ -160,7 +160,7 @@ The net result is that any traffic bound for the `Service` is proxied to an
 appropriate backend without the clients knowing anything about Kubernetes or
 `Services` or `Pods`.

-
+

 By default, the choice of backend is round robin. Client-IP based session affinity
 can be selected by setting `service.spec.sessionAffinity` to `"ClientIP"` (the
@@ -523,7 +523,7 @@ This means that `Service` owners can choose any port they want without risk of
 collision. Clients can simply connect to an IP and port, without being aware
 of which `Pods` they are actually accessing.

-
+

 ## API Object

@@ -21,8 +21,9 @@ The Kubernetes UI can be used to introspect your current cluster, such as checki

 ### Node Resource Usage

-After accessing Kubernetes UI, you'll see a homepage dynamically listing out all nodes in your current cluster, with related information including internal IP addresses, CPU usage, memory usage, and file systems usage.
-
+After accessing Kubernetes UI, you'll see a homepage dynamically listing out all nodes in your current cluster, with related information including internal IP addresses, CPU usage, memory usage, and file systems usage.
+
+

 ### Dashboard Views

@@ -30,19 +31,27 @@ Click on the "Views" button in the top-right of the page to see other views avai

 #### Explore View

-The "Explore" view allows your to see the pods, replication controllers, and services in current cluster easily.
-
-The "Group by" dropdown list allows you to group these resources by a number of factors, such as type, name, host, etc.
-
-You can also create filters by clicking on the down triangle of any listed resource instances and choose which filters you want to add.
-
-To see more details of each resource instance, simply click on it.
-
+The "Explore" view allows your to see the pods, replication controllers, and services in current cluster easily.
+
+
+
+The "Group by" dropdown list allows you to group these resources by a number of factors, such as type, name, host, etc.
+
+
+
+You can also create filters by clicking on the down triangle of any listed resource instances and choose which filters you want to add.
+
+
+
+To see more details of each resource instance, simply click on it.
+
+

 ### Other Views

-Other views (Pods, Nodes, Replication Controllers, Services, and Events) simply list information about each type of resource. You can also click on any instance for more details.
-
+Other views (Pods, Nodes, Replication Controllers, Services, and Events) simply list information about each type of resource. You can also click on any instance for more details.
+
+

 ## More Information

[Image diff viewer: 1 binary image deleted (2.3 KiB).]