Syntax highlighting

John Mulhausen 2016-02-16 17:53:42 -08:00
parent bbeb111863
commit ee72211075
115 changed files with 1922 additions and 1919 deletions

View File

@ -783,14 +783,15 @@ section {
background-color: #f7f7f7;
padding: 2px 4px; }
#docsContent pre pi, #docsContent pre s {
margin: 0px;
padding: 0px;
margin: 0px;
padding: 0px;
}
#docsContent code, #docsContent pre code {
color: #303030;
.highlight code span, #docsContent code, #docsContent pre code {
font-family: "Roboto Mono", monospace; }
#docsContent code, #docsContent pre code {
color: #303030; }
#docsContent pre code {
padding: 0px; }
padding: 0px; }
#docsContent pre {
background-color: #f7f7f7;
display: block;

View File

@ -104,11 +104,11 @@ system:serviceaccount:<namespace>:default
For example, if you wanted to grant the default service account in the kube-system namespace full privilege to the API, you would add this line to your policy file:
{% highlight json %}
```json
{"user":"system:serviceaccount:kube-system:default"}
{% endhighlight %}
```
The apiserver will need to be restarted to pick up the new policy lines.
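How you restart it depends on how the apiserver is managed in your deployment; as a minimal sketch, assuming it runs as a systemd service:
```shell
# Assumption: kube-apiserver is managed by systemd on the master.
sudo systemctl restart kube-apiserver
```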
@ -117,13 +117,13 @@ The apiserver will need to be restarted to pickup the new policy lines.
Other implementations can be developed fairly easily.
The API server calls the Authorizer interface:
{% highlight go %}
```go
type Authorizer interface {
Authorize(a Attributes) error
}
{% endhighlight %}
```
to determine whether or not to allow each API action.
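To illustrate how small such an implementation can be, here is a hypothetical always-allow authorizer (a sketch, not code from the repository); `Attributes` carries the user, verb, and resource of the request:
```go
// alwaysAllow is a hypothetical Authorizer that permits every request.
type alwaysAllow struct{}

// Authorize returns nil to allow the action; returning an error would deny it.
func (alwaysAllow) Authorize(a Attributes) error {
	return nil
}
```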

View File

@ -40,7 +40,7 @@ To prevent memory leaks or other resource issues in [cluster addons](https://rel
For example:
{% highlight yaml %}
```yaml
containers:
- image: gcr.io/google_containers/heapster:v0.15.0
name: heapster
@ -48,7 +48,7 @@ containers:
limits:
cpu: 100m
memory: 200Mi
{% endhighlight %}
```
These limits, however, are based on data collected from addons running on 4-node clusters (see [#10335](http://issue.k8s.io/10335#issuecomment-117861225)). The addons consume a lot more resources when running on large deployment clusters (see [#5880](http://issue.k8s.io/5880#issuecomment-113984085)). So, if a large cluster is deployed without adjusting these values, the addons may continuously get killed because they keep hitting the limits.
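As an illustration only (the right numbers depend on measured usage in your cluster), bumping the heapster limits for a larger deployment might look like:
```yaml
containers:
  - image: gcr.io/google_containers/heapster:v0.15.0
    name: heapster
    resources:
      limits:
        cpu: 500m      # illustrative value for a larger cluster
        memory: 1Gi    # tune based on observed addon consumption
```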

View File

@ -46,19 +46,19 @@ Get its usage by running `cluster/gce/upgrade.sh -h`.
For example, to upgrade just your master to a specific version (v1.0.2):
{% highlight console %}
```shell
cluster/gce/upgrade.sh -M v1.0.2
{% endhighlight %}
```
Alternatively, to upgrade your entire cluster to the latest stable release:
{% highlight console %}
```shell
cluster/gce/upgrade.sh release/stable
{% endhighlight %}
```
### Other platforms
@ -122,11 +122,11 @@ If you want more control over the upgrading process, you may use the following w
Mark the node to be rebooted as unschedulable:
{% highlight console %}
```shell
kubectl replace nodes $NODENAME --patch='{"apiVersion": "v1", "spec": {"unschedulable": true}}'
{% endhighlight %}
```
This keeps new pods from landing on the node while you are trying to get them off.
@ -134,11 +134,11 @@ Get the pods off the machine, via any of the following strategies:
* Wait for finite-duration pods to complete.
* Delete pods with:
{% highlight console %}
```shell
kubectl delete pods $PODNAME
{% endhighlight %}
```
For pods with a replication controller, the pod will eventually be replaced by a new pod which will be scheduled to a new node. Additionally, if the pod is part of a service, then clients will automatically be redirected to the new pod.
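If you want to watch the replacement pod get scheduled onto another node, one illustrative check is:
```shell
# -o wide adds a NODE column; --watch streams updates as pods are rescheduled
kubectl get pods -o wide --watch
```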
@ -148,11 +148,11 @@ Perform maintenance work on the node.
Make the node schedulable again:
{% highlight console %}
```shell
kubectl replace nodes $NODENAME --patch='{"apiVersion": "v1", "spec": {"unschedulable": false}}'
{% endhighlight %}
```
If you deleted the node's VM instance and created a new one, then a new schedulable node resource will
be created automatically when you create a new VM instance (if you're using a cloud provider that supports
@ -192,12 +192,12 @@ for changes to this variable to take effect.
You can use the `kube-version-change` utility to convert config files between different API versions.
{% highlight console %}
```shell
$ hack/build-go.sh cmd/kube-version-change
$ _output/local/go/bin/kube-version-change -i myPod.v1beta3.yaml -o myPod.v1.yaml
{% endhighlight %}
```

View File

@ -12,11 +12,11 @@ The first thing to debug in your cluster is if your nodes are all registered cor
Run
{% highlight sh %}
```shell
kubectl get nodes
{% endhighlight %}
```
And verify that all of the nodes you expect to see are present and that they are all in the `Ready` state.

View File

@ -41,11 +41,11 @@ To test whether `etcd` is running correctly, you can try writing a value to a
test key. On your master VM (or somewhere with firewalls configured such that
you can talk to your cluster's etcd), try:
{% highlight sh %}
```shell
curl -fs -X PUT "http://${host}:${port}/v2/keys/_test"
{% endhighlight %}
```

View File

@ -85,9 +85,9 @@ a simple cluster set up, using etcd's built in discovery to build our cluster.
First, hit the etcd discovery service to create a new token:
{% highlight sh %}
```shell
curl https://discovery.etcd.io/new?size=3
{% endhighlight %}
```
On each node, copy the [etcd.yaml](high-availability/etcd.yaml) file into `/etc/kubernetes/manifests/etcd.yaml`
@ -103,15 +103,15 @@ for `${NODE_IP}` on each machine.
Once you copy this into all three nodes, you should have a clustered etcd set up. You can validate with
{% highlight sh %}
```shell
etcdctl member list
{% endhighlight %}
```
and
{% highlight sh %}
```shell
etcdctl cluster-health
{% endhighlight %}
```
You can also validate that this is working with `etcdctl set foo bar` on one node, and `etcdctl get foo`
on a different node.
@ -141,9 +141,9 @@ Once you have replicated etcd set up correctly, we will also install the apiserv
First you need to create the initial log file, so that Docker mounts a file instead of a directory:
{% highlight sh %}
```shell
touch /var/log/kube-apiserver.log
{% endhighlight %}
```
Next, you need to create a `/srv/kubernetes/` directory on each node. This directory includes:
* basic_auth.csv - basic auth user and password
@ -193,10 +193,10 @@ In the future, we expect to more tightly integrate this lease-locking into the s
First, create empty log files on each node, so that Docker will mount the files not make new directories:
{% highlight sh %}
```shell
touch /var/log/kube-scheduler.log
touch /var/log/kube-controller-manager.log
{% endhighlight %}
```
Next, set up the descriptions of the scheduler and controller manager pods on each node
by copying [kube-scheduler.yaml](high-availability/kube-scheduler.yaml) and [kube-controller-manager.yaml](high-availability/kube-controller-manager.yaml) into the `/srv/kubernetes/`
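Concretely, that copy step might look like this (a sketch, assuming the manifests are in your current directory):
```shell
cp kube-scheduler.yaml kube-controller-manager.yaml /srv/kubernetes/
```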

View File

@ -40,7 +40,7 @@ This example will work in a custom namespace to demonstrate the concepts involve
Let's create a new namespace called limit-example:
{% highlight console %}
```shell
$ kubectl create -f docs/admin/limitrange/namespace.yaml
namespace "limit-example" created
@ -49,22 +49,22 @@ NAME LABELS STATUS AGE
default <none> Active 5m
limit-example <none> Active 53s
{% endhighlight %}
```
## Step 2: Apply a limit to the namespace
Let's create a simple limit in our namespace.
{% highlight console %}
```shell
$ kubectl create -f docs/admin/limitrange/limits.yaml --namespace=limit-example
limitrange "mylimits" created
{% endhighlight %}
```
Let's describe the limits that we have imposed in our namespace.
{% highlight console %}
```shell
$ kubectl describe limits mylimits --namespace=limit-example
Name: mylimits
@ -76,7 +76,7 @@ Pod memory 6Mi 1Gi - - -
Container cpu 100m 2 200m 300m -
Container memory 3Mi 1Gi 100Mi 200Mi -
{% endhighlight %}
```
In this scenario, we have said the following:
@ -104,7 +104,7 @@ of creation explaining why.
Let's first spin up a replication controller that creates a single container pod to demonstrate
how default values are applied to each pod.
{% highlight console %}
```shell
$ kubectl run nginx --image=nginx --replicas=1 --namespace=limit-example
replicationcontroller "nginx" created
@ -113,9 +113,9 @@ NAME READY STATUS RESTARTS AGE
nginx-aq0mf 1/1 Running 0 35s
$ kubectl get pods nginx-aq0mf --namespace=limit-example -o yaml | grep resources -C 8
{% endhighlight %}
```
{% highlight yaml %}
```yaml
resourceVersion: "127"
selfLink: /api/v1/namespaces/limit-example/pods/nginx-aq0mf
@ -135,30 +135,30 @@ spec:
terminationMessagePath: /dev/termination-log
volumeMounts:
{% endhighlight %}
```
Note that our nginx container has picked up the namespace default cpu and memory resource *limits* and *requests*.
Let's create a pod that exceeds our allowed limits by giving it a container that requests 3 cpu cores.
{% highlight console %}
```shell
$ kubectl create -f docs/admin/limitrange/invalid-pod.yaml --namespace=limit-example
Error from server: error when creating "docs/admin/limitrange/invalid-pod.yaml": Pod "invalid-pod" is forbidden: [Maximum cpu usage per Pod is 2, but limit is 3., Maximum cpu usage per Container is 2, but limit is 3.]
{% endhighlight %}
```
Let's create a pod that falls within the allowed limit boundaries.
{% highlight console %}
```shell
$ kubectl create -f docs/admin/limitrange/valid-pod.yaml --namespace=limit-example
pod "valid-pod" created
$ kubectl get pods valid-pod --namespace=limit-example -o yaml | grep -C 6 resources
{% endhighlight %}
```
{% highlight yaml %}
```yaml
uid: 162a12aa-7157-11e5-9921-286ed488f785
spec:
@ -174,7 +174,7 @@ spec:
cpu: "1"
memory: 512Mi
{% endhighlight %}
```
Note that this pod specifies explicit resource *limits* and *requests* so it did not pick up the namespace
default values.
@ -196,7 +196,7 @@ $ kubelet --cpu-cfs-quota=true ...
To remove the resources used by this example, you can just delete the limit-example namespace.
{% highlight console %}
```shell
$ kubectl delete namespace limit-example
namespace "limit-example" deleted
@ -204,7 +204,7 @@ $ kubectl get namespaces
NAME LABELS STATUS AGE
default <none> Active 20m
{% endhighlight %}
```
## Summary

View File

@ -40,7 +40,7 @@ This example will work in a custom namespace to demonstrate the concepts involve
Let's create a new namespace called limit-example:
{% highlight console %}
```shell
$ kubectl create -f docs/admin/limitrange/namespace.yaml
namespace "limit-example" created
@ -49,22 +49,22 @@ NAME LABELS STATUS AGE
default <none> Active 5m
limit-example <none> Active 53s
{% endhighlight %}
```
## Step 2: Apply a limit to the namespace
Let's create a simple limit in our namespace.
{% highlight console %}
```shell
$ kubectl create -f docs/admin/limitrange/limits.yaml --namespace=limit-example
limitrange "mylimits" created
{% endhighlight %}
```
Let's describe the limits that we have imposed in our namespace.
{% highlight console %}
```shell
$ kubectl describe limits mylimits --namespace=limit-example
Name: mylimits
@ -76,7 +76,7 @@ Pod memory 6Mi 1Gi - - -
Container cpu 100m 2 200m 300m -
Container memory 3Mi 1Gi 100Mi 200Mi -
{% endhighlight %}
```
In this scenario, we have said the following:
@ -104,7 +104,7 @@ of creation explaining why.
Let's first spin up a replication controller that creates a single container pod to demonstrate
how default values are applied to each pod.
{% highlight console %}
```shell
$ kubectl run nginx --image=nginx --replicas=1 --namespace=limit-example
replicationcontroller "nginx" created
@ -113,9 +113,9 @@ NAME READY STATUS RESTARTS AGE
nginx-aq0mf 1/1 Running 0 35s
$ kubectl get pods nginx-aq0mf --namespace=limit-example -o yaml | grep resources -C 8
{% endhighlight %}
```
{% highlight yaml %}
```yaml
resourceVersion: "127"
selfLink: /api/v1/namespaces/limit-example/pods/nginx-aq0mf
@ -135,30 +135,30 @@ spec:
terminationMessagePath: /dev/termination-log
volumeMounts:
{% endhighlight %}
```
Note that our nginx container has picked up the namespace default cpu and memory resource *limits* and *requests*.
Let's create a pod that exceeds our allowed limits by giving it a container that requests 3 cpu cores.
{% highlight console %}
```shell
$ kubectl create -f docs/admin/limitrange/invalid-pod.yaml --namespace=limit-example
Error from server: error when creating "docs/admin/limitrange/invalid-pod.yaml": Pod "invalid-pod" is forbidden: [Maximum cpu usage per Pod is 2, but limit is 3., Maximum cpu usage per Container is 2, but limit is 3.]
{% endhighlight %}
```
Let's create a pod that falls within the allowed limit boundaries.
{% highlight console %}
```shell
$ kubectl create -f docs/admin/limitrange/valid-pod.yaml --namespace=limit-example
pod "valid-pod" created
$ kubectl get pods valid-pod --namespace=limit-example -o yaml | grep -C 6 resources
{% endhighlight %}
```
{% highlight yaml %}
```yaml
uid: 162a12aa-7157-11e5-9921-286ed488f785
spec:
@ -174,7 +174,7 @@ spec:
cpu: "1"
memory: 512Mi
{% endhighlight %}
```
Note that this pod specifies explicit resource *limits* and *requests* so it did not pick up the namespace
default values.
@ -196,7 +196,7 @@ $ kubelet --cpu-cfs-quota=true ...
To remove the resources used by this example, you can just delete the limit-example namespace.
{% highlight console %}
```shell
$ kubectl delete namespace limit-example
namespace "limit-example" deleted
@ -204,7 +204,7 @@ $ kubectl get namespaces
NAME LABELS STATUS AGE
default <none> Active 20m
{% endhighlight %}
```
## Summary

View File

@ -45,14 +45,14 @@ Look [here](namespaces/) for an in depth example of namespaces.
You can list the current namespaces in a cluster using:
{% highlight console %}
```shell
$ kubectl get namespaces
NAME LABELS STATUS
default <none> Active
kube-system <none> Active
{% endhighlight %}
```
Kubernetes starts with two initial namespaces:
* `default` The default namespace for objects with no other namespace
@ -60,15 +60,15 @@ Kubernetes starts with two initial namespaces:
You can also get the summary of a specific namespace using:
{% highlight console %}
```shell
$ kubectl get namespaces <name>
{% endhighlight %}
```
Or you can get detailed information with:
{% highlight console %}
```shell
$ kubectl describe namespaces <name>
Name: default
@ -82,7 +82,7 @@ Resource Limits
---- -------- --- --- ---
Container cpu - - 100m
{% endhighlight %}
```
Note that these details show both resource quota (if present) as well as resource limit ranges.
@ -104,14 +104,14 @@ See the [design doc](../design/namespaces.html#phases) for more details.
To create a new namespace, first create a new YAML file called `my-namespace.yaml` with the contents:
{% highlight yaml %}
```yaml
apiVersion: v1
kind: Namespace
metadata:
name: <insert-namespace-name-here>
{% endhighlight %}
```
Note that the name of your namespace must be a DNS compatible label.
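For instance (names are illustrative), lowercase alphanumeric characters and `-`, up to 63 characters, are allowed:
```yaml
metadata:
  name: my-team-dev    # valid DNS label
  # name: My_TeamDev   # invalid: uppercase letters and underscores are not allowed
```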
@ -119,11 +119,11 @@ More information on the `finalizers` field can be found in the namespace [design
Then run:
{% highlight console %}
```shell
$ kubectl create -f ./my-namespace.yaml
{% endhighlight %}
```
### Working in namespaces
@ -134,11 +134,11 @@ and [Setting the namespace preference](/{{page.version}}/docs/user-guide/namespa
You can delete a namespace with
{% highlight console %}
```shell
$ kubectl delete namespaces <insert-some-namespace-name>
{% endhighlight %}
```
**WARNING, this deletes _everything_ under the namespace!**

View File

@ -27,11 +27,11 @@ services, and replication controllers used by the cluster.
Assuming you have a fresh cluster, you can introspect the available namespaces by doing the following:
{% highlight console %}
```shell
$ kubectl get namespaces
NAME LABELS
default <none>
{% endhighlight %}
```
### Step Two: Create new namespaces
@ -54,7 +54,7 @@ Use the file [`namespace-dev.json`](namespace-dev.json) which describes a develo
<!-- BEGIN MUNGE: EXAMPLE namespace-dev.json -->
{% highlight json %}
```json
{
"kind": "Namespace",
"apiVersion": "v1",
@ -65,32 +65,32 @@ Use the file [`namespace-dev.json`](namespace-dev.json) which describes a develo
}
}
}
{% endhighlight %}
```
[Download example](namespace-dev.json)
<!-- END MUNGE: EXAMPLE namespace-dev.json -->
Create the development namespace using kubectl.
{% highlight console %}
```shell
$ kubectl create -f docs/admin/namespaces/namespace-dev.json
{% endhighlight %}
```
And then let's create the production namespace using kubectl.
{% highlight console %}
```shell
$ kubectl create -f docs/admin/namespaces/namespace-prod.json
{% endhighlight %}
```
To be sure things are right, let's list all of the namespaces in our cluster.
{% highlight console %}
```shell
$ kubectl get namespaces
NAME LABELS STATUS
default <none> Active
development name=development Active
production name=production Active
{% endhighlight %}
```
### Step Three: Create pods in each namespace
@ -103,7 +103,7 @@ To demonstrate this, let's spin up a simple replication controller and pod in th
We first check what is the current context:
{% highlight yaml %}
```yaml
apiVersion: v1
clusters:
- cluster:
@ -128,31 +128,31 @@ users:
user:
password: h5M0FtUUIflBSdI7
username: admin
{% endhighlight %}
```
The next step is to define a context for the kubectl client to work in each namespace. The values of the "cluster" and "user" fields are copied from the current context.
{% highlight console %}
```shell
$ kubectl config set-context dev --namespace=development --cluster=lithe-cocoa-92103_kubernetes --user=lithe-cocoa-92103_kubernetes
$ kubectl config set-context prod --namespace=production --cluster=lithe-cocoa-92103_kubernetes --user=lithe-cocoa-92103_kubernetes
{% endhighlight %}
```
The above commands created two request contexts you can switch between, depending on which namespace you want to work in.
Let's switch to operate in the development namespace.
{% highlight console %}
```shell
$ kubectl config use-context dev
{% endhighlight %}
```
You can verify your current context by doing the following:
{% highlight console %}
```shell
$ kubectl config view
{% endhighlight %}
```
{% highlight yaml %}
```yaml
apiVersion: v1
clusters:
- cluster:
@ -187,19 +187,19 @@ users:
user:
password: h5M0FtUUIflBSdI7
username: admin
{% endhighlight %}
```
At this point, all requests we make to the Kubernetes cluster from the command line are scoped to the development namespace.
Let's create some content.
{% highlight console %}
```shell
$ kubectl run snowflake --image=kubernetes/serve_hostname --replicas=2
{% endhighlight %}
```
We have just created a replication controller with a replica count of 2 that runs a pod called snowflake, with a basic container that simply serves the hostname.
{% highlight console %}
```shell
$ kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
snowflake snowflake kubernetes/serve_hostname run=snowflake 2
@ -208,29 +208,29 @@ $ kubectl get pods
NAME READY STATUS RESTARTS AGE
snowflake-8w0qn 1/1 Running 0 22s
snowflake-jrpzb 1/1 Running 0 22s
{% endhighlight %}
```
And this is great: developers are able to do what they want, and they do not have to worry about affecting content in the production namespace.
Let's switch to the production namespace and show how resources in one namespace are hidden from the other.
{% highlight console %}
```shell
$ kubectl config use-context prod
{% endhighlight %}
```
The production namespace should be empty.
{% highlight console %}
```shell
$ kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
{% endhighlight %}
```
Production likes to run cattle, so let's create some cattle pods.
{% highlight console %}
```shell
$ kubectl run cattle --image=kubernetes/serve_hostname --replicas=5
$ kubectl get rc
@ -244,7 +244,7 @@ cattle-i9ojn 1/1 Running 0 12s
cattle-qj3yv 1/1 Running 0 12s
cattle-yc7vn 1/1 Running 0 12s
cattle-zz7ea 1/1 Running 0 12s
{% endhighlight %}
```
At this point, it should be clear that the resources users create in one namespace are hidden from the other namespace.

View File

@ -26,13 +26,13 @@ services, and replication controllers used by the cluster.
Assuming you have a fresh cluster, you can introspect the available namespaces by doing the following:
{% highlight console %}
```shell
$ kubectl get namespaces
NAME LABELS
default <none>
{% endhighlight %}
```
### Step Two: Create new namespaces
@ -55,7 +55,7 @@ Use the file [`namespace-dev.json`](namespace-dev.json) which describes a develo
<!-- BEGIN MUNGE: EXAMPLE namespace-dev.json -->
{% highlight json %}
```json
{
"kind": "Namespace",
@ -68,30 +68,30 @@ Use the file [`namespace-dev.json`](namespace-dev.json) which describes a develo
}
}
{% endhighlight %}
```
[Download example](namespace-dev.json)
<!-- END MUNGE: EXAMPLE namespace-dev.json -->
Create the development namespace using kubectl.
{% highlight console %}
```shell
$ kubectl create -f docs/admin/namespaces/namespace-dev.json
{% endhighlight %}
```
And then let's create the production namespace using kubectl.
{% highlight console %}
```shell
$ kubectl create -f docs/admin/namespaces/namespace-prod.json
{% endhighlight %}
```
To be sure things are right, let's list all of the namespaces in our cluster.
{% highlight console %}
```shell
$ kubectl get namespaces
NAME LABELS STATUS
@ -99,7 +99,7 @@ default <none> Active
development name=development Active
production name=production Active
{% endhighlight %}
```
### Step Three: Create pods in each namespace
@ -112,7 +112,7 @@ To demonstrate this, let's spin up a simple replication controller and pod in th
We first check what is the current context:
{% highlight yaml %}
```yaml
apiVersion: v1
clusters:
@ -139,37 +139,37 @@ users:
password: h5M0FtUUIflBSdI7
username: admin
{% endhighlight %}
```
The next step is to define a context for the kubectl client to work in each namespace. The values of the "cluster" and "user" fields are copied from the current context.
{% highlight console %}
```shell
$ kubectl config set-context dev --namespace=development --cluster=lithe-cocoa-92103_kubernetes --user=lithe-cocoa-92103_kubernetes
$ kubectl config set-context prod --namespace=production --cluster=lithe-cocoa-92103_kubernetes --user=lithe-cocoa-92103_kubernetes
{% endhighlight %}
```
The above commands created two request contexts you can switch between, depending on which namespace you want to work in.
Let's switch to operate in the development namespace.
{% highlight console %}
```shell
$ kubectl config use-context dev
{% endhighlight %}
```
You can verify your current context by doing the following:
{% highlight console %}
```shell
$ kubectl config view
{% endhighlight %}
```
{% highlight yaml %}
```yaml
apiVersion: v1
clusters:
@ -206,21 +206,21 @@ users:
password: h5M0FtUUIflBSdI7
username: admin
{% endhighlight %}
```
At this point, all requests we make to the Kubernetes cluster from the command line are scoped to the development namespace.
Let's create some content.
{% highlight console %}
```shell
$ kubectl run snowflake --image=kubernetes/serve_hostname --replicas=2
{% endhighlight %}
```
We have just created a replication controller with a replica count of 2 that runs a pod called snowflake, with a basic container that simply serves the hostname.
{% highlight console %}
```shell
$ kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
@ -231,21 +231,21 @@ NAME READY STATUS RESTARTS AGE
snowflake-8w0qn 1/1 Running 0 22s
snowflake-jrpzb 1/1 Running 0 22s
{% endhighlight %}
```
And this is great: developers are able to do what they want, and they do not have to worry about affecting content in the production namespace.
Let's switch to the production namespace and show how resources in one namespace are hidden from the other.
{% highlight console %}
```shell
$ kubectl config use-context prod
{% endhighlight %}
```
The production namespace should be empty.
{% highlight console %}
```shell
$ kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
@ -253,11 +253,11 @@ CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
{% endhighlight %}
```
Production likes to run cattle, so let's create some cattle pods.
{% highlight console %}
```shell
$ kubectl run cattle --image=kubernetes/serve_hostname --replicas=5
@ -273,7 +273,7 @@ cattle-qj3yv 1/1 Running 0 12s
cattle-yc7vn 1/1 Running 0 12s
cattle-zz7ea 1/1 Running 0 12s
{% endhighlight %}
```
At this point, it should be clear that the resources users create in one namespace are hidden from the other namespace.

View File

@ -107,11 +107,11 @@ on that subnet, and is passed to docker's `--bridge` flag.
We start Docker with:
{% highlight sh %}
```shell
DOCKER_OPTS="--bridge=cbr0 --iptables=false --ip-masq=false"
{% endhighlight %}
```
This bridge is created by Kubelet (controlled by the `--configure-cbr0=true`
flag) according to the `Node`'s `spec.podCIDR`.
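For illustration (name and address made up), the relevant part of a `Node` object looks like:
```yaml
apiVersion: v1
kind: Node
metadata:
  name: kubernetes-minion-1
spec:
  podCIDR: 10.244.1.0/24   # kubelet creates cbr0 with an address from this range
```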
@ -126,20 +126,20 @@ masquerade (aka SNAT - to make it seem as if packets came from the `Node`
itself) traffic that is bound for IPs outside the GCE project network
(10.0.0.0/8).
{% highlight sh %}
```shell
iptables -t nat -A POSTROUTING ! -d 10.0.0.0/8 -o eth0 -j MASQUERADE
{% endhighlight %}
```
Lastly we enable IP forwarding in the kernel (so the kernel will process
packets for bridged containers):
{% highlight sh %}
```shell
sysctl net.ipv4.ip_forward=1
{% endhighlight %}
```
The result of all this is that all `Pods` can reach each other and can egress
traffic to the internet.

View File

@ -56,7 +56,7 @@ node recently (currently 40 seconds).
Node condition is represented as a JSON object. For example,
the following conditions mean the node is in a healthy state:
{% highlight json %}
```json
"conditions": [
{
@ -65,7 +65,7 @@ the following conditions mean the node is in sane state:
},
]
{% endhighlight %}
```
If the Status of the Ready condition
is Unknown or False for more than five minutes, then all of the Pods on the node are terminated by the Node Controller.
@ -90,7 +90,7 @@ Kubernetes creates a node, it is really just creating an object that represents
After creation, Kubernetes will check whether the node is valid or not.
For example, if you try to create a node from the following content:
{% highlight json %}
```json
{
"kind": "Node",
@ -103,7 +103,7 @@ For example, if you try to create a node from the following content:
}
}
{% endhighlight %}
```
Kubernetes will create a Node object internally (the representation), and
validate the node by health checking based on the `metadata.name` field: we
@ -164,11 +164,11 @@ node, but will not affect any existing pods on the node. This is useful as a
preparatory step before a node reboot, etc. For example, to mark a node
unschedulable, run this command:
{% highlight sh %}
```shell
kubectl replace nodes 10.1.2.3 --patch='{"apiVersion": "v1", "unschedulable": true}'
{% endhighlight %}
```
Note that pods which are created by a daemonSet controller bypass the Kubernetes scheduler,
and do not respect the unschedulable attribute on a node. The assumption is that daemons belong on
@ -189,7 +189,7 @@ processes not in containers.
If you want to explicitly reserve resources for non-Pod processes, you can create a placeholder
pod. Use the following template:
{% highlight yaml %}
```yaml
apiVersion: v1
kind: Pod
@ -204,7 +204,7 @@ spec:
cpu: 100m
memory: 100Mi
{% endhighlight %}
```
Set the `cpu` and `memory` values to the amount of resources you want to reserve.
Place the file in the manifest directory (`--config=DIR` flag of kubelet). Do this

View File

@ -86,7 +86,7 @@ supply of Pod IPs.
Kubectl supports creating, updating, and viewing quotas:
{% highlight console %}
```shell
$ kubectl namespace myspace
$ cat <<EOF > quota.json
@ -123,7 +123,7 @@ replicationcontrollers 5 20
resourcequotas 1 1
services 3 5
{% endhighlight %}
```
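The heredoc above is cut off before the quota definition; a hypothetical `ResourceQuota` consistent with the usage shown might look like:
```json
{
  "apiVersion": "v1",
  "kind": "ResourceQuota",
  "metadata": {
    "name": "quota"
  },
  "spec": {
    "hard": {
      "services": "5",
      "replicationcontrollers": "20",
      "resourcequotas": "1"
    }
  }
}
```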
## Quota and Cluster Capacity

View File

@ -13,7 +13,7 @@ This example will work in a custom namespace to demonstrate the concepts involve
Let's create a new namespace called quota-example:
{% highlight console %}
```shell
$ kubectl create -f docs/admin/resourcequota/namespace.yaml
namespace "quota-example" created
@ -22,7 +22,7 @@ NAME LABELS STATUS AGE
default <none> Active 2m
quota-example <none> Active 39s
{% endhighlight %}
```
## Step 2: Apply a quota to the namespace
@ -37,12 +37,12 @@ checks the total resource *requests*, not resource *limits* of all containers/po
Let's create a simple quota in our namespace:
{% highlight console %}
```shell
$ kubectl create -f docs/admin/resourcequota/quota.yaml --namespace=quota-example
resourcequota "quota" created
{% endhighlight %}
```
Once your quota is applied to a namespace, the system will restrict any creation of content
in the namespace until the quota usage has been calculated. This should happen quickly.
@ -50,7 +50,7 @@ in the namespace until the quota usage has been calculated. This should happen
You can describe your current quota usage to see what resources are being consumed in your
namespace.
{% highlight console %}
```shell
$ kubectl describe quota quota --namespace=quota-example
Name: quota
@ -66,7 +66,7 @@ resourcequotas 1 1
secrets 1 10
services 0 5
{% endhighlight %}
```
## Step 3: Applying default resource requests and limits
@ -77,25 +77,25 @@ cpu and memory by creating an nginx container.
To demonstrate, let's create a replication controller that runs nginx:
{% highlight console %}
```shell
$ kubectl run nginx --image=nginx --replicas=1 --namespace=quota-example
replicationcontroller "nginx" created
{% endhighlight %}
```
Now let's look at the pods that were created.
{% highlight console %}
```shell
$ kubectl get pods --namespace=quota-example
NAME READY STATUS RESTARTS AGE
{% endhighlight %}
```
What happened? I have no pods! Let's describe the replication controller to get a view of what is happening.
{% highlight console %}
```shell
kubectl describe rc nginx --namespace=quota-example
Name: nginx
@ -110,14 +110,14 @@ Events:
FirstSeen LastSeen Count From SubobjectPath Reason Message
42s 11s 3 {replication-controller } FailedCreate Error creating: Pod "nginx-" is forbidden: Must make a non-zero request for memory since it is tracked by quota.
{% endhighlight %}
```
The Kubernetes API server is rejecting the replication controller's requests to create a pod because our pods
do not specify any memory usage *request*.
So let's set some default values for the amount of cpu and memory a pod can consume:
{% highlight console %}
```shell
$ kubectl create -f docs/admin/resourcequota/limits.yaml --namespace=quota-example
limitrange "limits" created
@ -129,7 +129,7 @@ Type Resource Min Max Request Limit Limit/Request
Container memory - - 256Mi 512Mi -
Container cpu - - 100m 200m -
{% endhighlight %}
```
Now any time a pod is created in this namespace, if it has not specified any resource request/limit, the default
amount of cpu and memory per container will be applied, and the request will be used as part of admission control.
@ -137,17 +137,17 @@ amount of cpu and memory per container will be applied, and the request will be
Now that we have applied default resource *request* for our namespace, our replication controller should be able to
create its pods.
{% highlight console %}
```shell
$ kubectl get pods --namespace=quota-example
NAME READY STATUS RESTARTS AGE
nginx-fca65 1/1 Running 0 1m
{% endhighlight %}
```
And if we print out our quota usage in the namespace:
{% highlight console %}
```shell
$ kubectl describe quota quota --namespace=quota-example
Name: quota
@ -163,7 +163,7 @@ resourcequotas 1 1
secrets 1 10
services 0 5
{% endhighlight %}
```
You can now see the pod that was created is consuming explicit amounts of resources (specified by resource *request*),
and the usage is being tracked by the Kubernetes system properly.

View File

@ -13,7 +13,7 @@ This example will work in a custom namespace to demonstrate the concepts involve
Let's create a new namespace called quota-example:
{% highlight console %}
```shell
$ kubectl create -f docs/admin/resourcequota/namespace.yaml
namespace "quota-example" created
@ -22,7 +22,7 @@ NAME LABELS STATUS AGE
default <none> Active 2m
quota-example <none> Active 39s
{% endhighlight %}
```
## Step 2: Apply a quota to the namespace
@ -37,12 +37,12 @@ checks the total resource *requests*, not resource *limits* of all containers/po
Let's create a simple quota in our namespace:
{% highlight console %}
```shell
$ kubectl create -f docs/admin/resourcequota/quota.yaml --namespace=quota-example
resourcequota "quota" created
{% endhighlight %}
```
Once your quota is applied to a namespace, the system will restrict any creation of content
in the namespace until the quota usage has been calculated. This should happen quickly.
@ -50,7 +50,7 @@ in the namespace until the quota usage has been calculated. This should happen
You can describe your current quota usage to see what resources are being consumed in your
namespace.
{% highlight console %}
```shell
$ kubectl describe quota quota --namespace=quota-example
Name: quota
@ -66,7 +66,7 @@ resourcequotas 1 1
secrets 1 10
services 0 5
{% endhighlight %}
```
## Step 3: Applying default resource requests and limits
@ -77,25 +77,25 @@ cpu and memory by creating an nginx container.
To demonstrate, let's create a replication controller that runs nginx:
{% highlight console %}
```shell
$ kubectl run nginx --image=nginx --replicas=1 --namespace=quota-example
replicationcontroller "nginx" created
{% endhighlight %}
```
Now let's look at the pods that were created.
{% highlight console %}
```shell
$ kubectl get pods --namespace=quota-example
NAME READY STATUS RESTARTS AGE
{% endhighlight %}
```
What happened? I have no pods! Let's describe the replication controller to get a view of what is happening.
{% highlight console %}
```shell
kubectl describe rc nginx --namespace=quota-example
Name: nginx
@ -110,14 +110,14 @@ Events:
FirstSeen LastSeen Count From SubobjectPath Reason Message
42s 11s 3 {replication-controller } FailedCreate Error creating: Pod "nginx-" is forbidden: Must make a non-zero request for memory since it is tracked by quota.
{% endhighlight %}
```
The Kubernetes API server is rejecting the replication controller's requests to create a pod because our pods
do not specify any memory usage *request*.
So let's set some default values for the amount of cpu and memory a pod can consume:
{% highlight console %}
```shell
$ kubectl create -f docs/admin/resourcequota/limits.yaml --namespace=quota-example
limitrange "limits" created
@ -129,7 +129,7 @@ Type Resource Min Max Request Limit Limit/Request
Container memory - - 256Mi 512Mi -
Container cpu - - 100m 200m -
{% endhighlight %}
```
Now any time a pod is created in this namespace, if it has not specified any resource request/limit, the default
amount of cpu and memory per container will be applied, and the request will be used as part of admission control.
@ -137,17 +137,17 @@ amount of cpu and memory per container will be applied, and the request will be
Now that we have applied default resource *request* for our namespace, our replication controller should be able to
create its pods.
{% highlight console %}
```shell
$ kubectl get pods --namespace=quota-example
NAME READY STATUS RESTARTS AGE
nginx-fca65 1/1 Running 0 1m
{% endhighlight %}
```
And if we print out our quota usage in the namespace:
{% highlight console %}
```shell
$ kubectl describe quota quota --namespace=quota-example
Name: quota
@ -163,7 +163,7 @@ resourcequotas 1 1
secrets 1 10
services 0 5
{% endhighlight %}
```
You can now see the pod that was created is consuming explicit amounts of resources (specified by resource *request*),
and the usage is being tracked by the Kubernetes system properly.

View File

@ -13,10 +13,10 @@ The **salt-minion** service runs on the kubernetes-master and each kubernetes-no
Each salt-minion service is configured to interact with the **salt-master** service hosted on the kubernetes-master via the **master.conf** file [(except on GCE)](#standalone-salt-configuration-on-gce).
{% highlight console %}
```shell
[root@kubernetes-master] $ cat /etc/salt/minion.d/master.conf
master: kubernetes-master
{% endhighlight %}
master: kubernetes-master
```
The salt-master is contacted by each salt-minion and depending upon the machine information presented, the salt-master will provision the machine as either a kubernetes-master or kubernetes-node with all the required capabilities needed to run Kubernetes.
@ -34,11 +34,11 @@ All remaining sections that refer to master/minion setups should be ignored for
Security is not enabled on the salt-master, and the salt-master is configured to auto-accept incoming requests from minions. It is not recommended to use this security configuration in production environments without deeper study. (In some environments this isn't as bad as it might sound if the salt master port isn't externally accessible and you trust everyone on your network.)
{% highlight console %}
```shell
[root@kubernetes-master] $ cat /etc/salt/master.d/auto-accept.conf
open_mode: True
auto_accept: True
{% endhighlight %}
auto_accept: True
```
## Salt minion configuration
@ -46,14 +46,14 @@ Each minion in the salt cluster has an associated configuration that instructs t
An example file is presented below using the Vagrant based environment.
{% highlight console %}
```shell
[root@kubernetes-master] $ cat /etc/salt/minion.d/grains.conf
grains:
etcd_servers: $MASTER_IP
cloud_provider: vagrant
roles:
- kubernetes-master
{% endhighlight %}
- kubernetes-master
```
Each hosting environment has a slightly different grains.conf file that is used to build conditional logic where required in the Salt files.
@ -77,13 +77,15 @@ These keys may be leveraged by the Salt sls files to branch behavior.
In addition, a cluster may be running a Debian-based operating system or a Red Hat-based operating system (CentOS, Fedora, RHEL, etc.). As a result, it's sometimes important to distinguish behavior based on the operating system using if branches like the following.
{% highlight jinja %}
```liquid
{% raw %}
{% if grains['os_family'] == 'RedHat' %}
// something specific to a RedHat environment (Centos, Fedora, RHEL) where you may use yum, systemd, etc.
{% else %}
// something specific to Debian environment (apt-get, initd)
{% endif %}
{% endhighlight %}
{% endif %}
{% endraw %}
```
## Best Practices

View File

@ -61,7 +61,7 @@ account. To create additional API tokens for a service account, create a secret
of type `ServiceAccountToken` with an annotation referencing the service
account, and the controller will update it with a generated token:
{% highlight json %}
```json
secret.json:
{
@ -76,22 +76,22 @@ secret.json:
"type": "kubernetes.io/service-account-token"
}
{% endhighlight %}
```
{% highlight sh %}
```shell
kubectl create -f ./secret.json
kubectl describe secret mysecretname
{% endhighlight %}
```
#### To delete/invalidate a service account token
{% highlight sh %}
```shell
kubectl delete secret mysecretname
{% endhighlight %}
```
### Service Account Controller

View File

@ -20,13 +20,13 @@ For example, this is how to start a simple web server as a static pod:
1. Choose a node where we want to run the static pod. In this example, it's `my-minion1`.
{% highlight console %}
```shell
[joe@host ~] $ ssh my-minion1
{% endhighlight %}
```
2. Choose a directory, say `/etc/kubelet.d`, and place a web server pod definition there, e.g. `/etc/kubelet.d/static-web.yaml`:
{% highlight console %}
```shell
[root@my-minion1 ~] $ mkdir /etc/kubelet.d/
[root@my-minion1 ~] $ cat <<EOF >/etc/kubelet.d/static-web.yaml
apiVersion: v1
@ -44,7 +44,7 @@ For example, this is how to start a simple web server as a static pod:
containerPort: 80
protocol: tcp
EOF
{% endhighlight %}
```
2. Configure your kubelet daemon on the node to use this directory by running it with the `--config=/etc/kubelet.d/` argument. On Fedora 21 with Kubernetes 0.17, edit `/etc/kubernetes/kubelet` to include this line:
@ -56,9 +56,9 @@ For example, this is how to start a simple web server as a static pod:
3. Restart kubelet. On Fedora 21, this is:
{% highlight console %}
```shell
[root@my-minion1 ~] $ systemctl restart kubelet
{% endhighlight %}
```
## Pods created via HTTP
@ -68,50 +68,50 @@ Kubelet periodically downloads a file specified by `--manifest-url=<URL>` argume
When kubelet starts, it automatically starts all pods defined in the directory specified by the `--config=` or `--manifest-url=` arguments, i.e. our static-web. (It may take some time to pull the nginx image, so be patient):
{% highlight console %}
```shell
[joe@my-minion1 ~] $ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS NAMES
f6d05272b57e nginx:latest "nginx" 8 minutes ago Up 8 minutes k8s_web.6f802af4_static-web-fk-minion1_default_67e24ed9466ba55986d120c867395f3c_378e5f3c
{% endhighlight %}
```
If we look at our Kubernetes API server (running on host `my-master`), we see that a new mirror-pod was created there too:
{% highlight console %}
```shell
[joe@host ~] $ ssh my-master
[joe@my-master ~] $ kubectl get pods
POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS CREATED MESSAGE
static-web-my-minion1 172.17.0.3 my-minion1/192.168.100.71 role=myrole Running 11 minutes
web nginx Running 11 minutes
{% endhighlight %}
```
Labels from the static pod are propagated into the mirror-pod and can be used as usual for filtering.
Notice that we cannot delete the pod via the API server (e.g. with the [`kubectl`](../user-guide/kubectl/kubectl) command); kubelet simply won't remove it.
{% highlight console %}
```shell
[joe@my-master ~] $ kubectl delete pod static-web-my-minion1
pods/static-web-my-minion1
[joe@my-master ~] $ kubectl get pods
POD IP CONTAINER(S) IMAGE(S) HOST ...
static-web-my-minion1 172.17.0.3 my-minion1/192.168.100.71 ...
{% endhighlight %}
```
Back on our `my-minion1` host, we can try to stop the container manually and see that kubelet automatically restarts it after a while:
{% highlight console %}
```shell
[joe@host ~] $ ssh my-minion1
[joe@my-minion1 ~] $ docker stop f6d05272b57e
[joe@my-minion1 ~] $ sleep 20
[joe@my-minion1 ~] $ docker ps
CONTAINER ID IMAGE COMMAND CREATED ...
5b920cbaf8b1 nginx:latest "nginx -g 'daemon of 2 seconds ago ...
{% endhighlight %}
```
## Dynamic addition and removal of static pods
The running kubelet periodically scans the configured directory (`/etc/kubelet.d` in our example) for changes and adds/removes pods as files appear/disappear in this directory.
{% highlight console %}
```shell
[joe@my-minion1 ~] $ mv /etc/kubelet.d/static-web.yaml /tmp
[joe@my-minion1 ~] $ sleep 20
[joe@my-minion1 ~] $ docker ps
@ -121,7 +121,7 @@ Running kubelet periodically scans the configured directory (`/etc/kubelet.d` in
[joe@my-minion1 ~] $ docker ps
CONTAINER ID IMAGE COMMAND CREATED ...
e7a62e3427f1 nginx:latest "nginx -g 'daemon of 27 seconds ago
{% endhighlight %}
```

View File

@ -120,7 +120,7 @@ Objects that contain both spec and status should not contain additional top-leve
The `FooCondition` type for some resource type `Foo` may include a subset of the following fields, but must contain at least `type` and `status` fields:
{% highlight go %}
```go
Type FooConditionType `json:"type" description:"type of Foo condition"`
Status ConditionStatus `json:"status" description:"status of the condition, one of True, False, Unknown"`
@ -129,7 +129,7 @@ The `FooCondition` type for some resource type `Foo` may include a subset of the
Reason string `json:"reason,omitempty" description:"one-word CamelCase reason for the condition's last transition"`
Message string `json:"message,omitempty" description:"human-readable message indicating details about last transition"`
{% endhighlight %}
```
Additional fields may be added in the future.
@ -165,23 +165,23 @@ Discussed in [#2004](http://issue.k8s.io/2004) and elsewhere. There are no maps
For example:
{% highlight yaml %}
```yaml
ports:
- name: www
containerPort: 80
{% endhighlight %}
```
vs.
{% highlight yaml %}
```yaml
ports:
www:
containerPort: 80
{% endhighlight %}
```
This rule maintains the invariant that all JSON/YAML keys are fields in API objects. The only exceptions are pure maps in the API (currently, labels, selectors, annotations, data), as opposed to sets of subobjects.
@ -236,18 +236,18 @@ The API supports three different PATCH operations, determined by their correspon
In the standard JSON merge patch, JSON objects are always merged but lists are always replaced. Often that isn't what we want. Let's say we start with the following Pod:
{% highlight yaml %}
```yaml
spec:
containers:
- name: nginx
image: nginx-1.0
{% endhighlight %}
```
...and we POST that to the server (as JSON). Then let's say we want to *add* a container to this Pod.
{% highlight yaml %}
```yaml
PATCH /api/v1/namespaces/default/pods/pod-name
spec:
@ -255,7 +255,7 @@ spec:
- name: log-tailer
image: log-tailer-1.0
{% endhighlight %}
```
If we were to use standard Merge Patch, the entire container list would be replaced with the single log-tailer container. However, our intent is for the container lists to merge together based on the `name` field.
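With strategic merge patch, the merged result (matching list elements on `name`) would be, roughly:
```yaml
spec:
  containers:
    - name: nginx
      image: nginx-1.0
    - name: log-tailer
      image: log-tailer-1.0
```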
@ -269,18 +269,18 @@ Strategic Merge Patch also supports special operations as listed below.
To override the container list to be strictly replaced, regardless of the default:
{% highlight yaml %}
```yaml
containers:
- name: nginx
image: nginx-1.0
- $patch: replace # any further $patch operations nested in this list will be ignored
{% endhighlight %}
```
To delete an element of a list that should be merged:
{% highlight yaml %}
```yaml
containers:
- name: nginx
@ -288,31 +288,31 @@ containers:
- $patch: delete
name: log-tailer # merge key and value goes here
{% endhighlight %}
```
### Map Operations
To indicate that a map should not be merged and instead should be taken literally:
{% highlight yaml %}
```yaml
$patch: replace # recursive and applies to all fields of the map it's in
containers:
- name: nginx
image: nginx-1.0
{% endhighlight %}
```
To delete a field of a map:
{% highlight yaml %}
```yaml
name: nginx
image: nginx-1.0
labels:
live: null # set the value of the map key to null
{% endhighlight %}
```
## Idempotency
@ -501,7 +501,7 @@ The status object is encoded as JSON and provided as the body of the response.
**Example:**
{% highlight console %}
```shell
$ curl -v -k -H "Authorization: Bearer WhCDvq4VPpYhrcfmF6ei7V9qlbqTubUc" https://10.240.122.184:443/api/v1/namespaces/default/pods/grafana
@ -531,7 +531,7 @@ $ curl -v -k -H "Authorization: Bearer WhCDvq4VPpYhrcfmF6ei7V9qlbqTubUc" https:/
"code": 404
}
{% endhighlight %}
```
`status` field contains one of two possible values:
* `Success`

View File

@ -91,7 +91,7 @@ compatible.
Let's consider some examples. In a hypothetical API (assume we're at version
v6), the `Frobber` struct looks something like this:
{% highlight go %}
```go
// API v6.
type Frobber struct {
@ -99,12 +99,12 @@ type Frobber struct {
Param string `json:"param"`
}
{% endhighlight %}
```
You want to add a new `Width` field. It is generally safe to add new fields
without changing the API version, so you can simply change it to:
{% highlight go %}
```go
// Still API v6.
type Frobber struct {
@ -113,7 +113,7 @@ type Frobber struct {
Param string `json:"param"`
}
{% endhighlight %}
```
The onus is on you to define a sane default value for `Width` such that rule #1
above is true - API calls and stored objects that used to work must continue to
@ -123,7 +123,7 @@ For your next change you want to allow multiple `Param` values. You can not
simply change `Param string` to `Params []string` (without creating a whole new
API version) - that fails rules #1 and #2. You can instead do something like:
{% highlight go %}
```go
// Still API v6, but kind of clumsy.
type Frobber struct {
@ -133,7 +133,7 @@ type Frobber struct {
ExtraParams []string `json:"params"` // additional params
}
{% endhighlight %}
```
Now you can satisfy the rules: API calls that provide the old style `Param`
will still work, while servers that don't understand `ExtraParams` can ignore
@ -143,7 +143,7 @@ Part of the reason for versioning APIs and for using internal structs that are
distinct from any one version is to handle growth like this. The internal
representation can be implemented as:
{% highlight go %}
```go
// Internal, soon to be v7beta1.
type Frobber struct {
@ -152,7 +152,7 @@ type Frobber struct {
Params []string
}
{% endhighlight %}
```
The code that converts to/from versioned APIs can decode this into the somewhat
uglier (but compatible!) structures. Eventually, a new API version, let's call
@ -174,7 +174,7 @@ let's say you decide to rename a field within the same API version. In this case
you add units to `height` and `width`. You implement this by adding duplicate
fields:
{% highlight go %}
```go
type Frobber struct {
Height *int `json:"height"`
@ -183,7 +183,7 @@ type Frobber struct {
WidthInInches *int `json:"widthInInches"`
}
{% endhighlight %}
```
You convert all of the fields to pointers in order to distinguish between unset and
set to 0, and then set each corresponding field from the other in the defaulting
@ -197,18 +197,18 @@ in the case of an old client that was only aware of the old field (e.g., `height
Say the client creates:
{% highlight json %}
```json
{
"height": 10,
"width": 5
}
{% endhighlight %}
```
and GETs:
{% highlight json %}
```json
{
"height": 10,
@ -217,11 +217,11 @@ and GETs:
"widthInInches": 5
}
{% endhighlight %}
```
then PUTs back:
{% highlight json %}
```json
{
"height": 13,
@ -230,7 +230,7 @@ then PUTs back:
"widthInInches": 5
}
{% endhighlight %}
```
The update should not fail, because it would have worked before `heightInInches` was added.
@ -400,11 +400,11 @@ Once all the necessary manually written conversions are added, you need to
regenerate auto-generated ones. To regenerate them:
- run
{% highlight sh %}
```shell
hack/update-generated-conversions.sh
{% endhighlight %}
```
If running the above script is impossible due to compile errors, the easiest
workaround is to comment out the code causing errors and let the script
@ -428,11 +428,11 @@ The deep copy code resides with each versioned API:
To regenerate them:
- run
{% highlight sh %}
```shell
hack/update-generated-deep-copies.sh
{% endhighlight %}
```
## Edit json (un)marshaling code
@ -446,11 +446,11 @@ The auto-generated code resides with each versioned API:
To regenerate them:
- run
{% highlight sh %}
```shell
hack/update-codecgen.sh
{% endhighlight %}
```
## Making a new API Group
@ -531,11 +531,11 @@ an example to illustrate your change.
Make sure you update the swagger API spec by running:
{% highlight sh %}
```shell
hack/update-swagger-spec.sh
{% endhighlight %}
```
The API spec changes should be in a commit separate from your other changes.

View File

@ -19,7 +19,7 @@ for kubernetes.
The submit-queue does the following:
{% highlight go %}
```go
for _, pr := range readyToMergePRs() {
if testsAreStable() {
@ -27,7 +27,7 @@ for _, pr := range readyToMergePRs() {
}
}
{% endhighlight %}
```
The status of the submit-queue is [online.](http://submit-queue.k8s.io/)

View File

@ -8,11 +8,11 @@ Kubernetes projects.
Any contributor can propose a cherry pick of any pull request, like so:
{% highlight sh %}
```shell
hack/cherry_pick_pull.sh upstream/release-3.14 98765
{% endhighlight %}
```
This will walk you through the steps to propose an automated cherry pick of pull
#98765 for remote branch `upstream/release-3.14`.

View File

@ -17,26 +17,26 @@ Running kubernetes with Vagrant (and VirtualBox) is an easy way to run/test/deve
By default, the Vagrant setup will create a single master VM (called kubernetes-master) and one node (called kubernetes-minion-1). Each VM will take 1 GB, so make sure you have at least 2GB to 4GB of free memory (plus appropriate free disk space). To start your local cluster, open a shell and run:
{% highlight sh %}
```shell
cd kubernetes
export KUBERNETES_PROVIDER=vagrant
./cluster/kube-up.sh
{% endhighlight %}
```
The `KUBERNETES_PROVIDER` environment variable tells all of the various cluster management scripts which variant to use. If you forget to set this, the assumption is you are running on Google Compute Engine.
If you installed more than one Vagrant provider, Kubernetes will usually pick the appropriate one. However, you can override which one Kubernetes will use by setting the [`VAGRANT_DEFAULT_PROVIDER`](https://docs.vagrantup.com/v2/providers/default) environment variable:
{% highlight sh %}
```shell
export VAGRANT_DEFAULT_PROVIDER=parallels
export KUBERNETES_PROVIDER=vagrant
./cluster/kube-up.sh
{% endhighlight %}
```
Vagrant will provision each machine in the cluster with all the necessary components to run Kubernetes. The initial setup can take a few minutes to complete on each machine.
@ -44,25 +44,25 @@ By default, each VM in the cluster is running Fedora, and all of the Kubernetes
To access the master or any node:
{% highlight sh %}
```shell
vagrant ssh master
vagrant ssh minion-1
{% endhighlight %}
```
If you are running more than one node, you can access the others by:
{% highlight sh %}
```shell
vagrant ssh minion-2
vagrant ssh minion-3
{% endhighlight %}
```
To view the service status and/or logs on the kubernetes-master:
{% highlight console %}
```shell
$ vagrant ssh master
[vagrant@kubernetes-master ~] $ sudo systemctl status kube-apiserver
@ -74,11 +74,11 @@ $ vagrant ssh master
[vagrant@kubernetes-master ~] $ sudo systemctl status etcd
[vagrant@kubernetes-master ~] $ sudo systemctl status nginx
{% endhighlight %}
```
To view the services on any of the nodes:
{% highlight console %}
```shell
$ vagrant ssh minion-1
[vagrant@kubernetes-minion-1] $ sudo systemctl status docker
@ -86,7 +86,7 @@ $ vagrant ssh minion-1
[vagrant@kubernetes-minion-1] $ sudo systemctl status kubelet
[vagrant@kubernetes-minion-1] $ sudo journalctl -r -u kubelet
{% endhighlight %}
```
### Interacting with your Kubernetes cluster with Vagrant.
@ -94,34 +94,34 @@ With your Kubernetes cluster up, you can manage the nodes in your cluster with t
To push updates to new Kubernetes code after making source changes:
{% highlight sh %}
```shell
./cluster/kube-push.sh
{% endhighlight %}
```
To stop and then restart the cluster:
{% highlight sh %}
```shell
vagrant halt
./cluster/kube-up.sh
{% endhighlight %}
```
To destroy the cluster:
{% highlight sh %}
```shell
vagrant destroy
{% endhighlight %}
```
Once your Vagrant machines are up and provisioned, the first thing to do is to check that you can use the `kubectl.sh` script.
You may need to build the binaries first; you can do this with `make`.
{% highlight console %}
```shell
$ ./cluster/kubectl.sh get nodes
@ -130,7 +130,7 @@ kubernetes-minion-0whl kubernetes.io/hostname=kubernetes-minion-0whl Ready
kubernetes-minion-4jdf kubernetes.io/hostname=kubernetes-minion-4jdf Ready
kubernetes-minion-epbe kubernetes.io/hostname=kubernetes-minion-epbe Ready
{% endhighlight %}
```
### Interacting with your Kubernetes cluster with the `kube-*` scripts.
@ -138,49 +138,49 @@ Alternatively to using the vagrant commands, you can also use the `cluster/kube-
All of these commands assume you have set `KUBERNETES_PROVIDER` appropriately:
{% highlight sh %}
```shell
export KUBERNETES_PROVIDER=vagrant
{% endhighlight %}
```
Bring up a vagrant cluster
{% highlight sh %}
```shell
./cluster/kube-up.sh
{% endhighlight %}
```
Destroy the vagrant cluster
{% highlight sh %}
```shell
./cluster/kube-down.sh
{% endhighlight %}
```
Update the vagrant cluster after you make changes (only works when building your own releases locally):
{% highlight sh %}
```shell
./cluster/kube-push.sh
{% endhighlight %}
```
Interact with the cluster
{% highlight sh %}
```shell
./cluster/kubectl.sh
{% endhighlight %}
```
### Authenticating with your master
When using the vagrant provider in Kubernetes, the `cluster/kubectl.sh` script will cache your credentials in a `~/.kubernetes_vagrant_auth` file so you will not be prompted for them in the future.
{% highlight console %}
```shell
$ cat ~/.kubernetes_vagrant_auth
{ "User": "vagrant",
@ -190,21 +190,21 @@ $ cat ~/.kubernetes_vagrant_auth
"KeyFile": "/home/k8s_user/.kubecfg.vagrant.key"
}
{% endhighlight %}
```
You should now be set to use the `cluster/kubectl.sh` script. For example try to list the nodes that you have started with:
{% highlight sh %}
```shell
./cluster/kubectl.sh get nodes
{% endhighlight %}
```
### Running containers
Your cluster is running; you can list the nodes in your cluster:
{% highlight console %}
```shell
$ ./cluster/kubectl.sh get nodes
@ -213,14 +213,14 @@ kubernetes-minion-0whl kubernetes.io/hostname=kubernetes-minion-0whl Ready
kubernetes-minion-4jdf kubernetes.io/hostname=kubernetes-minion-4jdf Ready
kubernetes-minion-epbe kubernetes.io/hostname=kubernetes-minion-epbe Ready
{% endhighlight %}
```
Now start running some containers!
You can now use any of the cluster/kube-*.sh commands to interact with your VMs.
Before starting a container there will be no pods, services or replication controllers.
{% highlight console %}
```shell
$ cluster/kubectl.sh get pods
NAME READY STATUS RESTARTS AGE
@ -231,21 +231,21 @@ NAME LABELS SELECTOR IP(S) PORT(S)
$ cluster/kubectl.sh get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
{% endhighlight %}
```
Start a container running nginx with a replication controller and three replicas
{% highlight console %}
```shell
$ cluster/kubectl.sh run my-nginx --image=nginx --replicas=3 --port=80
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
my-nginx my-nginx nginx run=my-nginx 3
{% endhighlight %}
```
When listing the pods, you will see that three containers have been started and are in Waiting state:
{% highlight console %}
```shell
$ cluster/kubectl.sh get pods
NAME READY STATUS RESTARTS AGE
@ -253,11 +253,11 @@ my-nginx-389da 1/1 Waiting 0 33s
my-nginx-kqdjk 1/1 Waiting 0 33s
my-nginx-nyj3x 1/1 Waiting 0 33s
{% endhighlight %}
```
You need to wait for the provisioning to complete; you can monitor the minions by running:
{% highlight console %}
```shell
$ sudo salt '*minion-1' cmd.run 'docker images'
kubernetes-minion-1:
@ -265,11 +265,11 @@ kubernetes-minion-1:
<none> <none> 96864a7d2df3 26 hours ago 204.4 MB
kubernetes/pause latest 6c4579af347b 8 weeks ago 239.8 kB
{% endhighlight %}
```
Once the docker image for nginx has been downloaded, the container will start and you can list it:
{% highlight console %}
```shell
$ sudo salt '*minion-1' cmd.run 'docker ps'
kubernetes-minion-1:
@ -277,11 +277,11 @@ kubernetes-minion-1:
dbe79bf6e25b nginx:latest "nginx" 21 seconds ago Up 19 seconds k8s--mynginx.8c5b8a3a--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1.etcd--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1--fcfa837f
fa0e29c94501 kubernetes/pause:latest "/pause" 8 minutes ago Up 8 minutes 0.0.0.0:8080->80/tcp k8s--net.a90e7ce4--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1.etcd--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1--baf5b21b
{% endhighlight %}
```
Going back to listing the pods, services, and replication controllers, you now have:
{% highlight console %}
```shell
$ cluster/kubectl.sh get pods
NAME READY STATUS RESTARTS AGE
@ -296,13 +296,13 @@ $ cluster/kubectl.sh get rc
NAME IMAGE(S) SELECTOR REPLICAS
my-nginx nginx run=my-nginx 3
{% endhighlight %}
```
We did not start any services, hence there are none listed. But we see three replicas displayed properly.
Check the [guestbook](../../../examples/guestbook/README) application to learn how to create a service.
You can already play with scaling the replicas with:
{% highlight console %}
```shell
$ ./cluster/kubectl.sh scale rc my-nginx --replicas=2
$ ./cluster/kubectl.sh get pods
@ -310,7 +310,7 @@ NAME READY STATUS RESTARTS AGE
my-nginx-kqdjk 1/1 Running 0 13m
my-nginx-nyj3x 1/1 Running 0 13m
{% endhighlight %}
```
Congratulations!
@ -318,11 +318,11 @@ Congratulations!
The following will run all of the end-to-end testing scenarios assuming you set your environment in `cluster/kube-env.sh`:
{% highlight sh %}
```shell
NUM_MINIONS=3 hack/e2e-test.sh
{% endhighlight %}
```
### Troubleshooting
@ -330,28 +330,28 @@ NUM_MINIONS=3 hack/e2e-test.sh
By default, the Vagrantfile will download the box from S3. You can change this (and cache the box locally) by providing a name and an alternate URL when calling `kube-up.sh`:
{% highlight sh %}
```shell
export KUBERNETES_BOX_NAME=choose_your_own_name_for_your_kuber_box
export KUBERNETES_BOX_URL=path_of_your_kuber_box
export KUBERNETES_PROVIDER=vagrant
./cluster/kube-up.sh
{% endhighlight %}
```
#### I just created the cluster, but I am getting authorization errors!
You probably have an incorrect ~/.kubernetes_vagrant_auth file for the cluster you are attempting to contact.
{% highlight sh %}
```shell
rm ~/.kubernetes_vagrant_auth
{% endhighlight %}
```
After using kubectl.sh make sure that the correct credentials are set:
{% highlight console %}
```shell
$ cat ~/.kubernetes_vagrant_auth
{
@ -359,7 +359,7 @@ $ cat ~/.kubernetes_vagrant_auth
"Password": "vagrant"
}
{% endhighlight %}
```
#### I just created the cluster, but I do not see my container running!
@ -378,31 +378,31 @@ Are you sure you built a release first? Did you install `net-tools`? For more cl
You can control the number of nodes that are instantiated via the environment variable `NUM_MINIONS` on your host machine. If you plan to work with replicas, we strongly encourage you to work with enough nodes to satisfy your largest intended replica size. If you do not plan to work with replicas, you can save some system resources by running with a single node. You do this by setting `NUM_MINIONS` to 1 like so:
{% highlight sh %}
```shell
export NUM_MINIONS=1
{% endhighlight %}
```
#### I want my VMs to have more memory!
You can control the memory allotted to virtual machines with the `KUBERNETES_MEMORY` environment variable.
Just set it to the number of megabytes you would like the machines to have. For example:
{% highlight sh %}
```shell
export KUBERNETES_MEMORY=2048
{% endhighlight %}
```
If you need more granular control, you can set the amount of memory for the master and nodes independently. For example:
{% highlight sh %}
```shell
export KUBERNETES_MASTER_MEMORY=1536
export KUBERNETES_MINION_MEMORY=2048
{% endhighlight %}
```
#### I ran vagrant suspend and nothing works!
View File
@ -26,7 +26,7 @@ Below, we outline one of the more common git workflows that core developers use.
The commands below require that you have $GOPATH set ([$GOPATH docs](https://golang.org/doc/code.html#GOPATH)). We highly recommend you put Kubernetes' code into your GOPATH. Note: the commands below will not work if there is more than one directory in your `$GOPATH`.
{% highlight sh %}
```shell
mkdir -p $GOPATH/src/k8s.io
cd $GOPATH/src/k8s.io
@ -35,42 +35,42 @@ git clone https://github.com/$YOUR_GITHUB_USERNAME/kubernetes.git
cd kubernetes
git remote add upstream 'https://github.com/kubernetes/kubernetes.git'
{% endhighlight %}
```
### Create a branch and make changes
{% highlight sh %}
```shell
git checkout -b myfeature
# Make your code changes
{% endhighlight %}
```
### Keeping your development fork in sync
{% highlight sh %}
```shell
git fetch upstream
git rebase upstream/master
{% endhighlight %}
```
Note: If you have write access to the main repository at github.com/kubernetes/kubernetes, you should modify your git configuration so that you can't accidentally push to upstream:
{% highlight sh %}
```shell
git remote set-url --push upstream no_push
{% endhighlight %}
```
### Committing changes to your fork
{% highlight sh %}
```shell
git commit
git push -f origin myfeature
{% endhighlight %}
```
### Creating a pull request
@ -106,22 +106,22 @@ directly from mercurial.
2) Create a new GOPATH for your tools and install godep:
{% highlight sh %}
```shell
export GOPATH=$HOME/go-tools
mkdir -p $GOPATH
go get github.com/tools/godep
{% endhighlight %}
```
3) Add $GOPATH/bin to your path. Typically you'd add this to your ~/.profile:
{% highlight sh %}
```shell
export GOPATH=$HOME/go-tools
export PATH=$PATH:$GOPATH/bin
{% endhighlight %}
```
### Using godep
@ -131,7 +131,7 @@ Here's a quick walkthrough of one way to use godeps to add or update a Kubernete
_Devoting a separate directory is not required, but it is helpful to separate dependency updates from other changes._
{% highlight sh %}
```shell
export KPATH=$HOME/code/kubernetes
mkdir -p $KPATH/src/k8s.io/kubernetes
@ -139,11 +139,11 @@ cd $KPATH/src/k8s.io/kubernetes
git clone https://path/to/your/fork .
# Or copy your existing local repo here. IMPORTANT: making a symlink doesn't work.
{% endhighlight %}
```
2) Set up your GOPATH.
{% highlight sh %}
```shell
# Option A: this will let your builds see packages that exist elsewhere on your system.
export GOPATH=$KPATH:$GOPATH
@ -151,20 +151,20 @@ export GOPATH=$KPATH:$GOPATH
export GOPATH=$KPATH
# Option B is recommended if you're going to mess with the dependencies.
{% endhighlight %}
```
3) Populate your new GOPATH.
{% highlight sh %}
```shell
cd $KPATH/src/k8s.io/kubernetes
godep restore
{% endhighlight %}
```
4) Next, you can either add a new dependency or update an existing one.
{% highlight sh %}
```shell
# To add a new dependency, do:
cd $KPATH/src/k8s.io/kubernetes
@ -178,7 +178,7 @@ go get -u path/to/dependency
# Change code in Kubernetes accordingly if necessary.
godep update path/to/dependency/...
{% endhighlight %}
```
_If `go get -u path/to/dependency` fails with compilation errors, instead try `go get -d -u path/to/dependency`
to fetch the dependencies without compiling them. This can happen when updating the cadvisor dependency._
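As a rough sketch of that workaround, using cadvisor as the example dependency (the import path here is an assumption, not part of the documented workflow):

```shell
cd $KPATH/src/k8s.io/kubernetes
# Fetch the updated dependency without compiling it, then vendor it with godep.
go get -d -u github.com/google/cadvisor/...
godep update github.com/google/cadvisor/...
```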
@ -198,34 +198,34 @@ Please send dependency updates in separate commits within your PR, for easier re
Before committing any changes, please link/copy these hooks into your .git
directory. This will keep you from accidentally committing non-gofmt'd go code.
{% highlight sh %}
```shell
cd kubernetes/.git/hooks/
ln -s ../../hooks/pre-commit .
{% endhighlight %}
```
## Unit tests
{% highlight sh %}
```shell
cd kubernetes
hack/test-go.sh
{% endhighlight %}
```
Alternatively, you could also run:
{% highlight sh %}
```shell
cd kubernetes
godep go test ./...
{% endhighlight %}
```
If you only want to run unit tests in one package, you could run ``godep go test`` under the package directory. For example, the following commands will run all unit tests in package kubelet:
{% highlight console %}
```shell
$ cd kubernetes # step into the kubernetes directory.
$ cd pkg/kubelet
@ -234,7 +234,7 @@ $ godep go test
PASS
ok k8s.io/kubernetes/pkg/kubelet 0.317s
{% endhighlight %}
```
## Coverage
@ -242,23 +242,23 @@ Currently, collecting coverage is only supported for the Go unit tests.
To run all unit tests and generate an HTML coverage report, run the following:
{% highlight sh %}
```shell
cd kubernetes
KUBE_COVER=y hack/test-go.sh
{% endhighlight %}
```
At the end of the run, an HTML report will be generated and its path printed to stdout.
To run tests and collect coverage in only one package, pass its relative path under the `kubernetes` directory as an argument, for example:
{% highlight sh %}
```shell
cd kubernetes
KUBE_COVER=y hack/test-go.sh pkg/kubectl
{% endhighlight %}
```
Multiple arguments can be passed, in which case the coverage results will be combined for all tests run.
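For example, a quick sketch (the package paths here are only examples):

```shell
cd kubernetes
KUBE_COVER=y hack/test-go.sh pkg/kubectl pkg/kubelet
```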
@ -268,37 +268,37 @@ Coverage results for the project can also be viewed on [Coveralls](https://cover
You need [etcd](https://github.com/coreos/etcd/releases/tag/v2.0.0) in your path; please make sure it is installed and in your ``$PATH``.
{% highlight sh %}
```shell
cd kubernetes
hack/test-integration.sh
{% endhighlight %}
```
## End-to-End tests
You can run an end-to-end test which will bring up a master and two nodes, perform some tests, and then tear everything down. Make sure you have followed the getting started steps for your chosen cloud platform (which might involve changing the `KUBERNETES_PROVIDER` environment variable to something other than "gce").
{% highlight sh %}
```shell
cd kubernetes
hack/e2e-test.sh
{% endhighlight %}
```
Pressing control-C should result in an orderly shutdown, but if something goes wrong and you still have some VMs running, you can force a cleanup with this command:
{% highlight sh %}
```shell
go run hack/e2e.go --down
{% endhighlight %}
```
### Flag options
See the flag definitions in `hack/e2e.go` for more options, such as reusing an existing cluster. Here is an overview:
{% highlight sh %}
```shell
# Build binaries for testing
go run hack/e2e.go --build
@ -327,11 +327,11 @@ go run hack/e2e.go -v -test --test_args="--ginkgo.focus=Pods.*env"
# Alternately, if you have the e2e cluster up and no desire to see the event stream, you can run ginkgo-e2e.sh directly:
hack/ginkgo-e2e.sh --ginkgo.focus=Pods.*env
{% endhighlight %}
```
### Combining flags
{% highlight sh %}
```shell
# Flags can be combined, and their actions will take place in this order:
# -build, -push|-up|-pushup, -test|-tests=..., -down
@ -347,7 +347,7 @@ go run hack/e2e.go -build -pushup -test -down
go run hack/e2e.go -v -ctl='get events'
go run hack/e2e.go -v -ctl='delete pod foobar'
{% endhighlight %}
```
## Conformance testing
@ -367,11 +367,11 @@ See [conformance-test.sh](http://releases.k8s.io/release-1.1/hack/conformance-te
## Regenerating the CLI documentation
{% highlight sh %}
```shell
hack/update-generated-docs.sh
{% endhighlight %}
```
View File
@ -13,7 +13,7 @@ There is a testing image `brendanburns/flake` up on the docker hub. We will use
Create a replication controller with the following config:
{% highlight yaml %}
```yaml
apiVersion: v1
kind: ReplicationController
@ -35,21 +35,21 @@ spec:
- name: REPO_SPEC
value: https://github.com/kubernetes/kubernetes
{% endhighlight %}
```
Note that we omit the labels and the selector fields of the replication controller, because they will be populated from the labels field of the pod template by default.
{% highlight sh %}
```shell
kubectl create -f ./controller.yaml
{% endhighlight %}
```
This will spin up 24 instances of the test. They will run to completion, then exit, and the kubelet will restart them, accumulating more and more runs of the test.
You can examine the recent runs of the test by calling `docker ps -a` and looking for tasks that exited with non-zero exit codes. Unfortunately, docker ps -a only keeps around the exit status of the last 15-20 containers with the same image, so you have to check them frequently.
You can use this script to automate checking for failures, assuming your cluster is running on GCE and has four nodes:
{% highlight sh %}
```shell
echo "" > output.txt
for i in {1..4}; do
@ -59,15 +59,15 @@ for i in {1..4}; do
done
grep "Exited ([^0])" output.txt
{% endhighlight %}
```
Eventually you will have sufficient runs for your purposes. At that point you can stop and delete the replication controller by running:
{% highlight sh %}
```shell
kubectl stop replicationcontroller flakecontroller
{% endhighlight %}
```
If you do a final check for flakes with `docker ps -a`, ignore tasks that exited -1, since that's what happens when you stop the replication controller.
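A possible final check, mirroring the grep pattern used in the script above (this exact filter is an assumption):

```shell
# List failed runs, then drop the -1 exits caused by stopping the replication controller.
docker ps -a | grep "Exited ([^0])" | grep -v "Exited (-1)"
```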
View File
@ -7,38 +7,38 @@ Run `./hack/get-build.sh -h` for its usage.
For example, to get a build at a specific version (v1.0.2):
{% highlight console %}
```shell
./hack/get-build.sh v1.0.2
{% endhighlight %}
```
Alternatively, to get the latest stable release:
{% highlight console %}
```shell
./hack/get-build.sh release/stable
{% endhighlight %}
```
Finally, you can just print the latest or stable version:
{% highlight console %}
```shell
./hack/get-build.sh -v ci/latest
{% endhighlight %}
```
You can also use the gsutil tool to explore the Google Cloud Storage release buckets. Here are some examples:
{% highlight sh %}
```shell
gsutil cat gs://kubernetes-release/ci/latest.txt # output the latest ci version number
gsutil cat gs://kubernetes-release/ci/latest-green.txt # output the latest ci version number that passed gce e2e
gsutil ls gs://kubernetes-release/ci/v0.20.0-29-g29a55cc/ # list the contents of a ci release
gsutil ls gs://kubernetes-release/release # list all official releases and rcs
{% endhighlight %}
```
View File
@ -12,11 +12,11 @@ Find the most-recent PR that was merged with the current .0 release. Remember t
### 2) Run the release-notes tool
{% highlight bash %}
```shell
${KUBERNETES_ROOT}/build/make-release-notes.sh $LASTPR $CURRENTPR
{% endhighlight %}
```
### 3) Trim the release notes
View File
@ -11,13 +11,13 @@ Go comes with inbuilt 'net/http/pprof' profiling library and profiling web servi
TL;DR: Add lines:
{% highlight go %}
```go
m.mux.HandleFunc("/debug/pprof/", pprof.Index)
m.mux.HandleFunc("/debug/pprof/profile", pprof.Profile)
m.mux.HandleFunc("/debug/pprof/symbol", pprof.Symbol)
{% endhighlight %}
```
to the init(c *Config) method in 'pkg/master/master.go', and import the 'net/http/pprof' package.
@ -27,19 +27,19 @@ In most use cases to use profiler service it's enough to do 'import _ net/http/p
Even when running the profiler, I found it not really straightforward to use 'go tool pprof' with it. The problem is that, at least for dev purposes, the certificates generated for the APIserver are not signed by anyone trusted, and because secureServer serves only secure traffic, it isn't straightforward to connect to the service. The best workaround I found is to create an ssh tunnel from the open unsecured port on the kubernetes_master to some external server, and use this server as a proxy. To save everyone looking up the correct ssh flags, it is done by running:
{% highlight sh %}
```shell
ssh kubernetes_master -L<local_port>:localhost:8080
{% endhighlight %}
```
or the analogous command for your cloud provider. Afterwards you can, for example, run
{% highlight sh %}
```shell
go tool pprof http://localhost:<local_port>/debug/pprof/profile
{% endhighlight %}
```
to get a 30-second CPU profile.
View File
@ -33,11 +33,11 @@ to make sure they're solid around then as well. Once you find some greens, you
can find the Git hash for a build by looking at the "Console Log", then look for
`githash=`. You should see a line like:
{% highlight console %}
```shell
+ githash=v0.20.2-322-g974377b
{% endhighlight %}
```
Because Jenkins builds frequently, if you're looking between jobs
(e.g. `kubernetes-e2e-gke-ci` and `kubernetes-e2e-gce`), there may be no single
@ -50,11 +50,11 @@ oncall.
Before proceeding to the next step:
{% highlight sh %}
```shell
export BRANCHPOINT=v0.20.2-322-g974377b
{% endhighlight %}
```
Where `v0.20.2-322-g974377b` is the git hash you decided on. This will become
our (retroactive) branch point.
@ -202,14 +202,14 @@ present.
We are using `pkg/version/base.go` as the source of versioning in absence of
information from git. Here is a sample of that file's contents:
{% highlight go %}
```go
var (
gitVersion string = "v0.4-dev" // version from git, output of $(git describe)
gitCommit string = "" // sha1 from git, output of $(git rev-parse HEAD)
)
{% endhighlight %}
```
This means a build with `go install` or `go get` or a build from a tarball will
yield binaries that will identify themselves as `v0.4-dev` and will not be able
@ -287,7 +287,7 @@ projects seem to live with that and it does not really become a large problem.
As an example, Docker commit a327d9b91edf has a `v1.1.1-N-gXXX` label but it is
not present in Docker `v1.2.0`:
{% highlight console %}
```shell
$ git describe a327d9b91edf
v1.1.1-822-ga327d9b91edf
@ -297,7 +297,7 @@ a327d9b91edf Fix data space reporting from Kb/Mb to KB/MB
(Non-empty output here means the commit is not present on v1.2.0.)
{% endhighlight %}
```
## Release Notes
View File
@ -14,21 +14,21 @@ title: "Getting started on AWS EC2"
NOTE: This script uses the 'default' AWS profile by default.
You may explicitly set the AWS profile to use via the `AWS_DEFAULT_PROFILE` environment variable:
{% highlight bash %}
```shell
export AWS_DEFAULT_PROFILE=myawsprofile
{% endhighlight %}
```
## Cluster turnup
### Supported procedure: `get-kube`
{% highlight bash %}
```shell
#Using wget
export KUBERNETES_PROVIDER=aws; wget -q -O - https://get.k8s.io | bash
#Using cURL
export KUBERNETES_PROVIDER=aws; curl -sS https://get.k8s.io | bash
{% endhighlight %}
```
NOTE: This script calls [cluster/kube-up.sh](http://releases.k8s.io/release-1.1/cluster/kube-up.sh)
which in turn calls [cluster/aws/util.sh](http://releases.k8s.io/release-1.1/cluster/aws/util.sh)
@ -41,7 +41,7 @@ tokens are written in `~/.kube/config`, they will be necessary to use the CLI or
By default, the script will provision a new VPC and a 4 node k8s cluster in us-west-2a (Oregon) with `t2.micro` instances running on Ubuntu.
You can override the variables defined in [config-default.sh](http://releases.k8s.io/release-1.1/cluster/aws/config-default.sh) to change this behavior as follows:
{% highlight bash %}
```shell
export KUBE_AWS_ZONE=eu-west-1c
export NUM_MINIONS=2
export MINION_SIZE=m3.medium
@ -49,7 +49,7 @@ export AWS_S3_REGION=eu-west-1
export AWS_S3_BUCKET=mycompany-kubernetes-artifacts
export INSTANCE_PREFIX=k8s
...
{% endhighlight %}
```
It will also try to create or reuse a keypair called "kubernetes", and IAM profiles called "kubernetes-master" and "kubernetes-minion".
If these already exist, make sure you want them to be used here.
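If you want to verify whether these already exist before running the script, something like the following should work (this assumes you have the AWS CLI installed and configured; it is not part of the setup script itself):

```shell
aws ec2 describe-key-pairs --key-names kubernetes
aws iam get-instance-profile --instance-profile-name kubernetes-master
aws iam get-instance-profile --instance-profile-name kubernetes-minion
```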
@ -70,13 +70,13 @@ Alternately, you can download the latest Kubernetes release from [this page](htt
Next, add the appropriate binary folder to your `PATH` to access kubectl:
{% highlight bash %}
```shell
# OS X
export PATH=<path/to/kubernetes-directory>/platforms/darwin/amd64:$PATH
# Linux
export PATH=<path/to/kubernetes-directory>/platforms/linux/amd64:$PATH
{% endhighlight %}
```
An up-to-date documentation page for this tool is available here: [kubectl manual](/{{page.version}}/docs/user-guide/kubectl/kubectl)
@ -96,9 +96,9 @@ For more complete applications, please look in the [examples directory](../../ex
Make sure the environment variables you used to provision your cluster are still exported, then call the following script inside the
`kubernetes` directory:
{% highlight bash %}
```shell
cluster/kube-down.sh
{% endhighlight %}
```
## Further reading
View File
@ -15,13 +15,13 @@ Get the Kubernetes source. If you are simply building a release from source the
Building a release is simple.
{% highlight bash %}
```shell
git clone https://github.com/kubernetes/kubernetes.git
cd kubernetes
make release
{% endhighlight %}
```
For more details on the release process see the [`build/` directory](http://releases.k8s.io/release-1.1/build/)
View File
@ -39,9 +39,9 @@ gpgcheck=0
* Install Kubernetes on all hosts - centos-{master,minion}. This will also pull in etcd, docker, and cadvisor.
{% highlight sh %}
```shell
yum -y install --enablerepo=virt7-testing kubernetes
{% endhighlight %}
```
*Note:* Using etcd-0.4.6-7 (this is a temporary update in the documentation)
@ -49,27 +49,27 @@ If you do not get etcd-0.4.6-7 installed with virt7-testing repo,
In the current virt7-testing repo, the etcd package has been updated, which causes a service failure. To avoid this,
{% highlight sh %}
```shell
yum erase etcd
{% endhighlight %}
```
This will uninstall the currently available etcd package.
{% highlight sh %}
```shell
yum install http://cbs.centos.org/kojifiles/packages/etcd/0.4.6/7.el7.centos/x86_64/etcd-0.4.6-7.el7.centos.x86_64.rpm
yum -y install --enablerepo=virt7-testing kubernetes
{% endhighlight %}
```
* Add master and node to /etc/hosts on all machines (not needed if hostnames already in DNS)
{% highlight sh %}
```shell
echo "192.168.121.9 centos-master
192.168.121.65 centos-minion" >> /etc/hosts
{% endhighlight %}
```
* Edit /etc/kubernetes/config which will be the same on all hosts to contain:
{% highlight sh %}
```shell
# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://centos-master:4001"
@ -81,20 +81,20 @@ KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"
{% endhighlight %}
```
* Disable the firewall on both the master and node, as docker does not play well with other firewall rule managers
{% highlight sh %}
```shell
systemctl disable iptables-services firewalld
systemctl stop iptables-services firewalld
{% endhighlight %}
```
**Configure the Kubernetes services on the master.**
* Edit /etc/kubernetes/apiserver to appear as such:
{% highlight sh %}
```shell
# The address on the local server to listen to.
KUBE_API_ADDRESS="--address=0.0.0.0"
@ -112,17 +112,17 @@ KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
# Add your own!
KUBE_API_ARGS=""
{% endhighlight %}
```
* Start the appropriate services on master:
{% highlight sh %}
```shell
for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do
systemctl restart $SERVICES
systemctl enable $SERVICES
systemctl status $SERVICES
done
{% endhighlight %}
```
**Configure the Kubernetes services on the node.**
@ -130,7 +130,7 @@ done
* Edit /etc/kubernetes/kubelet to appear as such:
{% highlight sh %}
```shell
# The address for the info server to serve on
KUBELET_ADDRESS="--address=0.0.0.0"
@ -145,27 +145,27 @@ KUBELET_API_SERVER="--api-servers=http://centos-master:8080"
# Add your own!
KUBELET_ARGS=""
{% endhighlight %}
```
* Start the appropriate services on node (centos-minion).
{% highlight sh %}
```shell
for SERVICES in kube-proxy kubelet docker; do
systemctl restart $SERVICES
systemctl enable $SERVICES
systemctl status $SERVICES
done
{% endhighlight %}
```
*You should be finished!*
* Check to make sure the cluster can see the node (on centos-master)
{% highlight console %}
```shell
$ kubectl get nodes
NAME LABELS STATUS
centos-minion <none> Ready
{% endhighlight %}
```
**The cluster should be running! Launch a test pod.**
View File
@ -18,25 +18,25 @@ In this guide I will demonstrate how to deploy a Kubernetes cluster to Azure clo
To get started, you need to checkout the code:
{% highlight sh %}
```shell
git clone https://github.com/kubernetes/kubernetes
cd kubernetes/docs/getting-started-guides/coreos/azure/
{% endhighlight %}
```
You will need to have [Node.js installed](http://nodejs.org/download/) on your machine. If you have previously used the Azure CLI, you should have it already.
First, you need to install some of the dependencies with:
{% highlight sh %}
```shell
npm install
{% endhighlight %}
```
Now, all you need to do is:
{% highlight sh %}
```shell
./azure-login.js -u <your_username>
./create-kubernetes-cluster.js
{% endhighlight %}
```
This script will provision a cluster suitable for production use, with a ring of 3 dedicated etcd nodes, 1 Kubernetes master, and 2 Kubernetes nodes. The `kube-00` VM will be the master; your workloads should only be deployed on the nodes, `kube-01` and `kube-02`. Initially, all VMs are single-core, to ensure a user of the free tier can reproduce it without paying extra. I will show how to add more, bigger VMs later.
@ -44,50 +44,50 @@ This script will provision a cluster suitable for production use, where there is
Once the creation of Azure VMs has finished, you should see the following:
{% highlight console %}
```shell
...
azure_wrapper/info: Saved SSH config, you can use it like so: `ssh -F ./output/kube_1c1496016083b4_ssh_conf <hostname>`
azure_wrapper/info: The hosts in this deployment are:
[ 'etcd-00', 'etcd-01', 'etcd-02', 'kube-00', 'kube-01', 'kube-02' ]
azure_wrapper/info: Saved state into `./output/kube_1c1496016083b4_deployment.yml`
{% endhighlight %}
```
Let's login to the master node like so:
{% highlight sh %}
```shell
ssh -F ./output/kube_1c1496016083b4_ssh_conf kube-00
{% endhighlight %}
```
> Note: the config file name will be different; make sure to use the one you see.
Check there are 2 nodes in the cluster:
{% highlight console %}
```shell
core@kube-00 ~ $ kubectl get nodes
NAME LABELS STATUS
kube-01 kubernetes.io/hostname=kube-01 Ready
kube-02 kubernetes.io/hostname=kube-02 Ready
{% endhighlight %}
```
## Deploying the workload
Let's follow the Guestbook example now:
{% highlight sh %}
```shell
kubectl create -f ~/guestbook-example
{% endhighlight %}
```
You need to wait for the pods to get deployed. Run the following and wait for `STATUS` to change from `Pending` to `Running`.
{% highlight sh %}
```shell
kubectl get pods --watch
{% endhighlight %}
```
> Note: most of this time will be spent downloading Docker container images on each of the nodes.
Eventually you should see:
{% highlight console %}
```shell
NAME READY STATUS RESTARTS AGE
frontend-0a9xi 1/1 Running 0 4m
frontend-4wahe 1/1 Running 0 4m
@ -95,7 +95,7 @@ frontend-6l36j 1/1 Running 0 4m
redis-master-talmr 1/1 Running 0 4m
redis-slave-12zfd 1/1 Running 0 4m
redis-slave-3nbce 1/1 Running 0 4m
{% endhighlight %}
```
## Scaling
@ -105,13 +105,13 @@ You will need to open another terminal window on your machine and go to the same
First, let's set the size of the new VMs:
{% highlight sh %}
```shell
export AZ_VM_SIZE=Large
{% endhighlight %}
```
Now, run the scale script with the state file of the previous deployment and the number of nodes to add:
{% highlight console %}
```shell
core@kube-00 ~ $ ./scale-kubernetes-cluster.js ./output/kube_1c1496016083b4_deployment.yml 2
...
azure_wrapper/info: Saved SSH config, you can use it like so: `ssh -F ./output/kube_8f984af944f572_ssh_conf <hostname>`
@ -125,63 +125,63 @@ azure_wrapper/info: The hosts in this deployment are:
'kube-03',
'kube-04' ]
azure_wrapper/info: Saved state into `./output/kube_8f984af944f572_deployment.yml`
{% endhighlight %}
```
> Note: this step has created new files in `./output`.
Back on `kube-00`:
{% highlight console %}
```shell
core@kube-00 ~ $ kubectl get nodes
NAME LABELS STATUS
kube-01 kubernetes.io/hostname=kube-01 Ready
kube-02 kubernetes.io/hostname=kube-02 Ready
kube-03 kubernetes.io/hostname=kube-03 Ready
kube-04 kubernetes.io/hostname=kube-04 Ready
{% endhighlight %}
```
You can see that two more nodes joined happily. Let's scale the number of Guestbook instances now.
First, double-check how many replication controllers there are:
{% highlight console %}
```shell
core@kube-00 ~ $ kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
frontend php-redis kubernetes/example-guestbook-php-redis:v2 name=frontend 3
redis-master master redis name=redis-master 1
redis-slave worker kubernetes/redis-slave:v2 name=redis-slave 2
{% endhighlight %}
```
As there are 4 nodes, let's scale proportionally:
{% highlight console %}
```shell
core@kube-00 ~ $ kubectl scale --replicas=4 rc redis-slave
scaled
core@kube-00 ~ $ kubectl scale --replicas=4 rc frontend
scaled
{% endhighlight %}
```
Check what you have now:
{% highlight console %}
```shell
core@kube-00 ~ $ kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
frontend php-redis kubernetes/example-guestbook-php-redis:v2 name=frontend 4
redis-master master redis name=redis-master 1
redis-slave worker kubernetes/redis-slave:v2 name=redis-slave 4
{% endhighlight %}
```
You will now have more instances of the front-end Guestbook app and Redis slaves; and if you look up all pods labeled `name=frontend`, you should see one running on each node.
{% highlight console %}
```shell
core@kube-00 ~/guestbook-example $ kubectl get pods -l name=frontend
NAME READY STATUS RESTARTS AGE
frontend-0a9xi 1/1 Running 0 22m
frontend-4wahe 1/1 Running 0 22m
frontend-6l36j 1/1 Running 0 22m
frontend-z9oxo 1/1 Running 0 41s
{% endhighlight %}
```
## Exposing the app to the outside world
@ -217,9 +217,9 @@ You should probably try deploy other [example apps](../../../../examples/) or wr
If you don't want to worry about the Azure bill, you can tear down the cluster. It's easy to redeploy it, as you can see.
{% highlight sh %}
```shell
./destroy-cluster.js ./output/kube_8f984af944f572_deployment.yml
{% endhighlight %}
```
> Note: make sure to use the _latest state file_, as after scaling there is a new one.
View File
@ -28,31 +28,31 @@ master and etcd nodes, and show how to scale the cluster with ease.
To get started, you need to checkout the code:
{% highlight sh %}
```shell
git clone https://github.com/kubernetes/kubernetes
cd kubernetes/docs/getting-started-guides/coreos/azure/
{% endhighlight %}
```
You will need to have [Node.js installed](http://nodejs.org/download/) on your machine. If you have previously used the Azure CLI, you should have it already.
First, you need to install some of the dependencies with:
{% highlight sh %}
```shell
npm install
{% endhighlight %}
```
Now, all you need to do is:
{% highlight sh %}
```shell
./azure-login.js -u <your_username>
./create-kubernetes-cluster.js
{% endhighlight %}
```
This script will provision a cluster suitable for production use, with a ring of 3 dedicated etcd nodes, 1 Kubernetes master, and 2 Kubernetes nodes.
The `kube-00` VM will be the master; your workloads should only be deployed on the nodes, `kube-01` and `kube-02`. Initially, all VMs are single-core, to
@ -62,7 +62,7 @@ ensure a user of the free tier can reproduce it without paying extra. I will sho
Once the creation of Azure VMs has finished, you should see the following:
{% highlight console %}
```shell
...
azure_wrapper/info: Saved SSH config, you can use it like so: `ssh -F ./output/kube_1c1496016083b4_ssh_conf <hostname>`
@ -70,52 +70,52 @@ azure_wrapper/info: The hosts in this deployment are:
[ 'etcd-00', 'etcd-01', 'etcd-02', 'kube-00', 'kube-01', 'kube-02' ]
azure_wrapper/info: Saved state into `./output/kube_1c1496016083b4_deployment.yml`
{% endhighlight %}
```
Let's login to the master node like so:
{% highlight sh %}
```shell
ssh -F ./output/kube_1c1496016083b4_ssh_conf kube-00
{% endhighlight %}
```
> Note: the config file name will be different; make sure to use the one you see.
Check there are 2 nodes in the cluster:
{% highlight console %}
```shell
core@kube-00 ~ $ kubectl get nodes
NAME LABELS STATUS
kube-01 kubernetes.io/hostname=kube-01 Ready
kube-02 kubernetes.io/hostname=kube-02 Ready
{% endhighlight %}
```
## Deploying the workload
Let's follow the Guestbook example now:
{% highlight sh %}
```shell
kubectl create -f ~/guestbook-example
{% endhighlight %}
```
You need to wait for the pods to get deployed. Run the following and wait for `STATUS` to change from `Pending` to `Running`.
{% highlight sh %}
```shell
kubectl get pods --watch
{% endhighlight %}
```
> Note: most of this time will be spent downloading Docker container images on each of the nodes.
Eventually you should see:
{% highlight console %}
```shell
NAME READY STATUS RESTARTS AGE
frontend-0a9xi 1/1 Running 0 4m
@ -125,7 +125,7 @@ redis-master-talmr 1/1 Running 0 4m
redis-slave-12zfd 1/1 Running 0 4m
redis-slave-3nbce 1/1 Running 0 4m
{% endhighlight %}
```
## Scaling
@ -135,15 +135,15 @@ You will need to open another terminal window on your machine and go to the same
First, let's set the size of the new VMs:
{% highlight sh %}
```shell
export AZ_VM_SIZE=Large
{% endhighlight %}
```
Now, run the scale script with the state file of the previous deployment and the number of nodes to add:
{% highlight console %}
```shell
core@kube-00 ~ $ ./scale-kubernetes-cluster.js ./output/kube_1c1496016083b4_deployment.yml 2
...
@ -159,13 +159,13 @@ azure_wrapper/info: The hosts in this deployment are:
'kube-04' ]
azure_wrapper/info: Saved state into `./output/kube_8f984af944f572_deployment.yml`
{% endhighlight %}
```
> Note: this step has created new files in `./output`.
Back on `kube-00`:
{% highlight console %}
```shell
core@kube-00 ~ $ kubectl get nodes
NAME LABELS STATUS
@ -174,13 +174,13 @@ kube-02 kubernetes.io/hostname=kube-02 Ready
kube-03 kubernetes.io/hostname=kube-03 Ready
kube-04 kubernetes.io/hostname=kube-04 Ready
{% endhighlight %}
```
You can see that two more nodes joined happily. Let's scale the number of Guestbook instances now.
First, double-check how many replication controllers there are:
{% highlight console %}
```shell
core@kube-00 ~ $ kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
@ -188,11 +188,11 @@ frontend php-redis kubernetes/example-guestbook-php-redis:v2 name=f
redis-master master redis name=redis-master 1
redis-slave worker kubernetes/redis-slave:v2 name=redis-slave 2
{% endhighlight %}
```
As there are 4 nodes, let's scale proportionally:
{% highlight console %}
```shell
core@kube-00 ~ $ kubectl scale --replicas=4 rc redis-slave
@ -200,11 +200,11 @@ scaled
core@kube-00 ~ $ kubectl scale --replicas=4 rc frontend
scaled
{% endhighlight %}
```
Check what you have now:
{% highlight console %}
```shell
core@kube-00 ~ $ kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
@ -212,11 +212,11 @@ frontend php-redis kubernetes/example-guestbook-php-redis:v2 name=f
redis-master master redis name=redis-master 1
redis-slave worker kubernetes/redis-slave:v2 name=redis-slave 4
{% endhighlight %}
```
You will now have more instances of the front-end Guestbook app and Redis slaves; and if you look up all pods labeled `name=frontend`, you should see one running on each node.
{% highlight console %}
```shell
core@kube-00 ~/guestbook-example $ kubectl get pods -l name=frontend
NAME READY STATUS RESTARTS AGE
@ -225,7 +225,7 @@ frontend-4wahe 1/1 Running 0 22m
frontend-6l36j 1/1 Running 0 22m
frontend-z9oxo 1/1 Running 0 41s
{% endhighlight %}
```
## Exposing the app to the outside world
@ -263,11 +263,11 @@ You should probably try deploy other [example apps](../../../../examples/) or wr
If you don't want to worry about the Azure bill, you can tear down the cluster. It's easy to redeploy it, as you can see.
{% highlight sh %}
```shell
./destroy-cluster.js ./output/kube_8f984af944f572_deployment.yml
{% endhighlight %}
```
> Note: make sure to use the _latest state file_, as after scaling there is a new one.
View File
@ -20,16 +20,16 @@ Use the [master.yaml](cloud-configs/master.yaml) and [node.yaml](cloud-configs/n
#### Provision the Master
{% highlight sh %}
```shell
aws ec2 create-security-group --group-name kubernetes --description "Kubernetes Security Group"
aws ec2 authorize-security-group-ingress --group-name kubernetes --protocol tcp --port 22 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-name kubernetes --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-name kubernetes --source-security-group-name kubernetes
{% endhighlight %}
```
{% highlight sh %}
```shell
aws ec2 run-instances \
--image-id <ami_image_id> \
@ -39,15 +39,15 @@ aws ec2 run-instances \
--instance-type m3.medium \
--user-data file://master.yaml
{% endhighlight %}
```
#### Capture the private IP address
{% highlight sh %}
```shell
aws ec2 describe-instances --instance-id <master-instance-id>
{% endhighlight %}
```
#### Edit node.yaml
@ -55,7 +55,7 @@ Edit `node.yaml` and replace all instances of `<master-private-ip>` with the pri
#### Provision worker nodes
{% highlight sh %}
```shell
aws ec2 run-instances \
--count 1 \
@ -66,7 +66,7 @@ aws ec2 run-instances \
--instance-type m3.medium \
--user-data file://node.yaml
{% endhighlight %}
```
### Google Compute Engine (GCE)
@ -74,7 +74,7 @@ aws ec2 run-instances \
#### Provision the Master
{% highlight sh %}
```shell
gcloud compute instances create master \
--image-project coreos-cloud \
@ -84,15 +84,15 @@ gcloud compute instances create master \
--zone us-central1-a \
--metadata-from-file user-data=master.yaml
{% endhighlight %}
```
#### Capture the private IP address
{% highlight sh %}
```shell
gcloud compute instances list
{% endhighlight %}
```
#### Edit node.yaml
@ -100,7 +100,7 @@ Edit `node.yaml` and replace all instances of `<master-private-ip>` with the pri
#### Provision worker nodes
{% highlight sh %}
```shell
gcloud compute instances create node1 \
--image-project coreos-cloud \
@ -110,7 +110,7 @@ gcloud compute instances create node1 \
--zone us-central1-a \
--metadata-from-file user-data=node.yaml
{% endhighlight %}
```
#### Establish network connectivity
@ -127,7 +127,7 @@ These instructions were tested on the Ice House release on a Metacloud distribut
Make sure the environment variables are set for OpenStack such as:
{% highlight sh %}
```shell
OS_TENANT_ID
OS_PASSWORD
@ -135,7 +135,7 @@ OS_AUTH_URL
OS_USERNAME
OS_TENANT_NAME
{% endhighlight %}
```
Test that this works with something like:
@ -150,28 +150,28 @@ nova list
You'll need a [suitable version of CoreOS image for OpenStack](https://coreos.com/os/docs/latest/booting-on-openstack)
Once you download that, upload it to glance. An example is shown below:
{% highlight sh %}
```shell
glance image-create --name CoreOS723 \
--container-format bare --disk-format qcow2 \
--file coreos_production_openstack_image.img \
--is-public True
{% endhighlight %}
```
#### Create security group
{% highlight sh %}
```shell
nova secgroup-create kubernetes "Kubernetes Security Group"
nova secgroup-add-rule kubernetes tcp 22 22 0.0.0.0/0
nova secgroup-add-rule kubernetes tcp 80 80 0.0.0.0/0
{% endhighlight %}
```
#### Provision the Master
{% highlight sh %}
```shell
nova boot \
--image <image_name> \
@ -181,7 +181,7 @@ nova boot \
--user-data files/master.yaml \
kube-master
{% endhighlight %}
```
```<image_name>``` is the CoreOS image name. In our example, we can use the image we created in the previous step and pass in 'CoreOS723'.
@ -213,7 +213,7 @@ where ```<ip address>``` is the IP address that was available from the ```nova f
Edit ```node.yaml``` and replace all instances of ```<master-private-ip>``` with the private IP address of the master node. You can get this by running ```nova show kube-master```, assuming you named your instance kube-master. This is not the floating IP address you just assigned to it.
{% highlight sh %}
```shell
nova boot \
--image <image_name> \
@ -223,7 +223,7 @@ nova boot \
--user-data files/node.yaml \
minion01
{% endhighlight %}
```
This is basically the same as for the master node, but with the node.yaml post-boot script instead of the master's.

View File
Clone the Kubernetes repo, and run [master.sh](/{{page.version}}/docs/getting-started-guides/docker-multinode/master.sh) on the master machine with root:
{% highlight sh %}
```shell
cd kubernetes/docs/getting-started-guides/docker-multinode/
./master.sh
{% endhighlight %}
```
`Master done!`
@ -66,11 +66,11 @@ Once your master is up and running you can add one or more workers on different
Clone the Kubernetes repo, and run [worker.sh](/{{page.version}}/docs/getting-started-guides/docker-multinode/worker.sh) on the worker machine with root:
{% highlight sh %}
```shell
export MASTER_IP=<your_master_ip (e.g. 1.2.3.4)>
cd kubernetes/docs/getting-started-guides/docker-multinode/
./worker.sh
{% endhighlight %}
```
`Worker done!`

View File
Run:
{% highlight sh %}
```shell
sudo sh -c 'docker -d -H unix:///var/run/docker-bootstrap.sock -p /var/run/docker-bootstrap.pid --iptables=false --ip-masq=false --bridge=none --graph=/var/lib/docker-bootstrap 2> /var/log/docker-bootstrap.log 1> /dev/null &'
{% endhighlight %}
```
_Important Note_:
If you are running this on a long running system, rather than experimenting, you should run the bootstrap Docker instance under something like SysV init, upstart or systemd so that it is restarted
@ -37,19 +37,19 @@ across reboots and failures.
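For example, a minimal systemd unit for the bootstrap daemon might look like the sketch below. The unit name and binary path are assumptions, and the flags simply mirror the command above, so adjust them for your distribution:

```shell
# Hypothetical sketch: persist the bootstrap Docker daemon as a systemd service.
sudo tee /etc/systemd/system/docker-bootstrap.service <<'EOF'
[Unit]
Description=Bootstrap Docker daemon for etcd and flannel

[Service]
ExecStart=/usr/bin/docker -d -H unix:///var/run/docker-bootstrap.sock \
  -p /var/run/docker-bootstrap.pid --iptables=false --ip-masq=false \
  --bridge=none --graph=/var/lib/docker-bootstrap
Restart=always

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable docker-bootstrap
sudo systemctl start docker-bootstrap
```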
Run:
{% highlight sh %}
```shell
sudo docker -H unix:///var/run/docker-bootstrap.sock run --net=host -d gcr.io/google_containers/etcd:2.0.12 /usr/local/bin/etcd --addr=127.0.0.1:4001 --bind-addr=0.0.0.0:4001 --data-dir=/var/etcd/data
{% endhighlight %}
```
Next, you need to set a CIDR range for flannel. This CIDR should be chosen to be non-overlapping with any existing network you are using:
{% highlight sh %}
```shell
sudo docker -H unix:///var/run/docker-bootstrap.sock run --net=host gcr.io/google_containers/etcd:2.0.12 etcdctl set /coreos.com/network/config '{ "Network": "10.1.0.0/16" }'
{% endhighlight %}
```
### Set up Flannel on the master node
@ -64,19 +64,19 @@ To re-configure Docker to use flannel, we need to take docker down, run flannel
Turning down Docker is system dependent; it may be:
{% highlight sh %}
```shell
sudo /etc/init.d/docker stop
{% endhighlight %}
```
or
{% highlight sh %}
```shell
sudo systemctl stop docker
{% endhighlight %}
```
or it may be something else.
@ -84,21 +84,21 @@ or it may be something else.
Now run flanneld itself:
{% highlight sh %}
```shell
sudo docker -H unix:///var/run/docker-bootstrap.sock run -d --net=host --privileged -v /dev/net:/dev/net quay.io/coreos/flannel:0.5.0
{% endhighlight %}
```
The previous command should have printed a really long hash; copy this hash.
Now get the subnet settings from flannel:
{% highlight sh %}
```shell
sudo docker -H unix:///var/run/docker-bootstrap.sock exec <really-long-hash-from-above-here> cat /run/flannel/subnet.env
{% endhighlight %}
```
#### Edit the docker configuration
@ -108,22 +108,22 @@ This may be in `/etc/default/docker` or `/etc/systemd/service/docker.service` or
Regardless, you need to add the following to the docker command line:
{% highlight sh %}
```shell
--bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}
{% endhighlight %}
```
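As a rough illustration, `subnet.env` defines the two variables referenced above, so you can source it to see what the flags will expand to (the values below are made up):

```shell
# Load FLANNEL_SUBNET and FLANNEL_MTU into the current shell.
source /run/flannel/subnet.env
echo "--bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}"
# Example output (yours will differ): --bip=10.1.42.1/24 --mtu=1450
```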
#### Remove the existing Docker bridge
Docker creates a bridge named `docker0` by default. You need to remove this:
{% highlight sh %}
```shell
sudo /sbin/ifconfig docker0 down
sudo brctl delbr docker0
{% endhighlight %}
```
You may need to install the `bridge-utils` package for the `brctl` binary.
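For example, depending on your distribution's package manager:

```shell
# Debian/Ubuntu
sudo apt-get install -y bridge-utils
# RHEL/CentOS/Fedora
sudo yum install -y bridge-utils
```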
@ -131,25 +131,25 @@ You may need to install the `bridge-utils` package for the `brctl` binary.
Again, this is system dependent; it may be:
{% highlight sh %}
```shell
sudo /etc/init.d/docker start
{% endhighlight %}
```
it may be:
{% highlight sh %}
```shell
systemctl start docker
{% endhighlight %}
```
## Starting the Kubernetes Master
OK, now that your networking is set up, you can start up Kubernetes. This is the same as the single-node case; we will use the "main" instance of the Docker daemon for the Kubernetes components.
{% highlight sh %}
```shell
sudo docker run \
--volume=/:/rootfs:ro \
@ -164,17 +164,17 @@ sudo docker run \
-d \
gcr.io/google_containers/hyperkube:v1.0.1 /hyperkube kubelet --api-servers=http://localhost:8080 --v=2 --address=0.0.0.0 --enable-server --hostname-override=127.0.0.1 --config=/etc/kubernetes/manifests-multi --cluster-dns=10.0.0.10 --cluster-domain=cluster.local
{% endhighlight %}
```
> Note that `--cluster-dns` and `--cluster-domain` are used to deploy DNS; feel free to drop them if DNS is not needed.
### Also run the service proxy
{% highlight sh %}
```shell
sudo docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v1.0.1 /hyperkube proxy --master=http://127.0.0.1:8080 --v=2
{% endhighlight %}
```
### Test it out
@ -186,20 +186,20 @@ Download the kubectl binary and make it available by editing your PATH ENV.
List the nodes
{% highlight sh %}
```shell
kubectl get nodes
{% endhighlight %}
```
This should print:
{% highlight console %}
```shell
NAME LABELS STATUS
127.0.0.1 kubernetes.io/hostname=127.0.0.1 Ready
{% endhighlight %}
```
If the status of the node is `NotReady` or `Unknown`, please check that all of the containers you created are running successfully.
If all else fails, ask questions on [Slack](../../troubleshooting.html#slack).
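A quick way to check, assuming the setup above, is to list the containers on both Docker daemons:

```shell
# Containers started by the main Docker daemon (kubelet, proxy, and the master pod)
sudo docker ps -a
# Containers started by the bootstrap daemon (etcd and flannel)
sudo docker -H unix:///var/run/docker-bootstrap.sock ps -a
```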
View File
@ -3,65 +3,65 @@ title: "Testing your Kubernetes cluster."
---
To validate that your node(s) have been added, run:
{% highlight sh %}
```shell
kubectl get nodes
{% endhighlight %}
```
That should show something like:
{% highlight console %}
```shell
NAME LABELS STATUS
10.240.99.26 kubernetes.io/hostname=10.240.99.26 Ready
127.0.0.1 kubernetes.io/hostname=127.0.0.1 Ready
{% endhighlight %}
```
If the status of any node is `Unknown` or `NotReady`, your cluster is broken. Double-check that all containers are running properly, and if all else fails, contact us on [Slack](../../troubleshooting.html#slack).
### Run an application
{% highlight sh %}
```shell
kubectl -s http://localhost:8080 run nginx --image=nginx --port=80
{% endhighlight %}
```
Now run `docker ps`; you should see nginx running. You may need to wait a few minutes for the image to get pulled.
### Expose it as a service
{% highlight sh %}
```shell
kubectl expose rc nginx --port=80
{% endhighlight %}
```
Run the following command to obtain the IP of the service we just created. There are two IPs: the first one is the internal CLUSTER_IP, and the second one is the external load-balanced IP.
{% highlight sh %}
```shell
kubectl get svc nginx
{% endhighlight %}
```
Alternatively, you can obtain only the first IP (CLUSTER_IP) by running:
{% highlight sh %}
```shell
kubectl get svc nginx --template={{.spec.clusterIP}}
{% endhighlight %}
```
Hit the webserver with the first IP (CLUSTER_IP):
{% highlight sh %}
```shell
curl <insert-cluster-ip-here>
{% endhighlight %}
```
Note that you will need to run this curl command on your boot2docker VM if you are running on OS X.
@ -69,19 +69,19 @@ Note that you will need run this curl command on your boot2docker VM if you are
Now try to scale up the nginx you created before:
{% highlight sh %}
```shell
kubectl scale rc nginx --replicas=3
{% endhighlight %}
```
And list the pods
{% highlight sh %}
```shell
kubectl get pods
{% endhighlight %}
```
You should see pods landing on the newly added machine.
View File
@ -25,11 +25,11 @@ As previously, we need a second instance of the Docker daemon running to bootstr
Run:
{% highlight sh %}
```shell
sudo sh -c 'docker -d -H unix:///var/run/docker-bootstrap.sock -p /var/run/docker-bootstrap.pid --iptables=false --ip-masq=false --bridge=none --graph=/var/lib/docker-bootstrap 2> /var/log/docker-bootstrap.log 1> /dev/null &'
{% endhighlight %}
```
_Important Note_:
If you are running this on a long running system, rather than experimenting, you should run the bootstrap Docker instance under something like SysV init, upstart or systemd so that it is restarted
@ -41,19 +41,19 @@ To re-configure Docker to use flannel, we need to take docker down, run flannel
Turning down Docker is system dependent, it may be:
{% highlight sh %}
```shell
sudo /etc/init.d/docker stop
{% endhighlight %}
```
or
{% highlight sh %}
```shell
sudo systemctl stop docker
{% endhighlight %}
```
or it may be something else.
@ -61,21 +61,21 @@ or it may be something else.
Now run flanneld itself. This call is slightly different from the one above, since we point it at the etcd instance on the master.
{% highlight sh %}
```shell
sudo docker -H unix:///var/run/docker-bootstrap.sock run -d --net=host --privileged -v /dev/net:/dev/net quay.io/coreos/flannel:0.5.0 /opt/bin/flanneld --etcd-endpoints=http://${MASTER_IP}:4001
{% endhighlight %}
```
The previous command should have printed a really long hash; copy this hash.
Now get the subnet settings from flannel:
{% highlight sh %}
```shell
sudo docker -H unix:///var/run/docker-bootstrap.sock exec <really-long-hash-from-above-here> cat /run/flannel/subnet.env
{% endhighlight %}
```
#### Edit the docker configuration
@ -86,22 +86,22 @@ This may be in `/etc/default/docker` or `/etc/systemd/service/docker.service` or
Regardless, you need to add the following to the docker command line:
{% highlight sh %}
```shell
--bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}
{% endhighlight %}
```
#### Remove the existing Docker bridge
Docker creates a bridge named `docker0` by default. You need to remove this:
{% highlight sh %}
```shell
sudo /sbin/ifconfig docker0 down
sudo brctl delbr docker0
{% endhighlight %}
```
You may need to install the `bridge-utils` package for the `brctl` binary.
@ -109,19 +109,19 @@ You may need to install the `bridge-utils` package for the `brctl` binary.
Again, this is system dependent; it may be:
{% highlight sh %}
```shell
sudo /etc/init.d/docker start
{% endhighlight %}
```
it may be:
{% highlight sh %}
```shell
systemctl start docker
{% endhighlight %}
```
### Start Kubernetes on the worker node
@ -129,7 +129,7 @@ systemctl start docker
Again this is similar to the above, but the `--api-servers` now points to the master we set up in the beginning.
{% highlight sh %}
```shell
sudo docker run \
--volume=/:/rootfs:ro \
@ -144,17 +144,17 @@ sudo docker run \
-d \
gcr.io/google_containers/hyperkube:v1.0.1 /hyperkube kubelet --api-servers=http://${MASTER_IP}:8080 --v=2 --address=0.0.0.0 --enable-server --hostname-override=$(hostname -i) --cluster-dns=10.0.0.10 --cluster-domain=cluster.local
{% endhighlight %}
```
#### Run the service proxy
The service proxy provides load-balancing between groups of containers defined by Kubernetes `Services`.
{% highlight sh %}
```shell
sudo docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v1.0.1 /hyperkube proxy --master=http://${MASTER_IP}:8080 --v=2
{% endhighlight %}
```
### Next steps

View File
2. Your kernel should support memory and swap accounting. Ensure that the
following configs are turned on in your linux kernel:
{% highlight console %}
```shell
CONFIG_RESOURCE_COUNTERS=y
CONFIG_MEMCG=y
CONFIG_MEMCG_SWAP=y
CONFIG_MEMCG_SWAP_ENABLED=y
CONFIG_MEMCG_KMEM=y
{% endhighlight %}
```
3. Enable the memory and swap accounting in the kernel, at boot, as command line
parameters as follows:
{% highlight console %}
```shell
GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"
{% endhighlight %}
```
NOTE: The above is specifically for GRUB2.
You can check the command line parameters passed to your kernel by looking at the
output of /proc/cmdline:
{% highlight console %}
```shell
$cat /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-3.18.4-aufs root=/dev/sda5 ro cgroup_enable=memory
swapaccount=1
{% endhighlight %}
```
### Step One: Run etcd
{% highlight sh %}
```shell
docker run --net=host -d gcr.io/google_containers/etcd:2.0.12 /usr/local/bin/etcd --addr=127.0.0.1:4001 --bind-addr=0.0.0.0:4001 --data-dir=/var/etcd/data
{% endhighlight %}
```
### Step Two: Run the master
{% highlight sh %}
```shell
docker run \
--volume=/:/rootfs:ro \
--volume=/sys:/sys:ro \
@ -67,15 +67,15 @@ docker run \
-d \
gcr.io/google_containers/hyperkube:v1.0.1 \
/hyperkube kubelet --containerized --hostname-override="127.0.0.1" --address="0.0.0.0" --api-servers=http://localhost:8080 --config=/etc/kubernetes/manifests
{% endhighlight %}
```
This actually runs the kubelet, which in turn runs a [pod](../user-guide/pods) that contains the other master components.
### Step Three: Run the service proxy
{% highlight sh %}
```shell
docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v1.0.1 /hyperkube proxy --master=http://127.0.0.1:8080 --v=2
{% endhighlight %}
```
### Test it out
@ -87,56 +87,56 @@ binary
*Note:*
On OS X you will need to set up port forwarding via ssh:
{% highlight sh %}
```shell
boot2docker ssh -L8080:localhost:8080
{% endhighlight %}
```
List the nodes in your cluster by running:
{% highlight sh %}
```shell
kubectl get nodes
{% endhighlight %}
```
This should print:
{% highlight console %}
```shell
NAME LABELS STATUS
127.0.0.1 <none> Ready
{% endhighlight %}
```
If you are running different Kubernetes clusters, you may need to specify `-s http://localhost:8080` to select the local cluster.
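For example, to repeat the command above against the local cluster explicitly:

```shell
kubectl -s http://localhost:8080 get nodes
```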
### Run an application
{% highlight sh %}
```shell
kubectl -s http://localhost:8080 run nginx --image=nginx --port=80
{% endhighlight %}
```
Now run `docker ps`; you should see nginx running. You may need to wait a few minutes for the image to get pulled.
### Expose it as a service
{% highlight sh %}
```shell
kubectl expose rc nginx --port=80
{% endhighlight %}
```
Run the following command to obtain the IP of the service we just created. There are two IPs: the first one is the internal CLUSTER_IP, and the second one is the external load-balanced IP.
{% highlight sh %}
```shell
kubectl get svc nginx
{% endhighlight %}
```
Alternatively, you can obtain only the first IP (CLUSTER_IP) by running:
{% highlight sh %}
```shell
kubectl get svc nginx --template={{.spec.clusterIP}}
{% endhighlight %}
```
Hit the webserver with the first IP (CLUSTER_IP):
{% highlight sh %}
```shell
curl <insert-cluster-ip-here>
{% endhighlight %}
```
Note that you will need to run this curl command on your boot2docker VM if you are running on OS X.
View File
@ -20,11 +20,11 @@ The hosts can be virtual or bare metal. Ansible will take care of the rest of th
A Kubernetes cluster requires etcd, a master, and n nodes, so we will create a cluster with three hosts, for example:
{% highlight console %}
```shell
master,etcd = kube-master.example.com
node1 = kube-node-01.example.com
node2 = kube-node-02.example.com
{% endhighlight %}
```
**Make sure your local machine has**
@ -34,22 +34,22 @@ A Kubernetes cluster requires etcd, a master, and n nodes, so we will create a c
If not
{% highlight sh %}
```shell
yum install -y ansible git python-netaddr
{% endhighlight %}
```
**Now clone down the Kubernetes repository**
{% highlight sh %}
```shell
git clone https://github.com/kubernetes/contrib.git
cd contrib/ansible
{% endhighlight %}
```
**Tell ansible about each machine and its role in your cluster**
Get the IP addresses from the master and nodes. Add those to the `~/contrib/ansible/inventory` file on the host running Ansible.
{% highlight console %}
```shell
[masters]
kube-master.example.com
@ -59,7 +59,7 @@ kube-master.example.com
[nodes]
kube-node-01.example.com
kube-node-02.example.com
{% endhighlight %}
```
## Setting up ansible access to your nodes
@ -69,9 +69,9 @@ If you already are running on a machine which has passwordless ssh access to the
edit: ~/contrib/ansible/group_vars/all.yml
{% highlight yaml %}
```yaml
ansible_ssh_user: root
{% endhighlight %}
```
**Configuring ssh access to the cluster**
@ -79,17 +79,17 @@ If you already have ssh access to every machine using ssh public keys you may sk
Make sure your local machine (root) has an ssh key pair; if not:
{% highlight sh %}
```shell
ssh-keygen
{% endhighlight %}
```
Copy the ssh public key to **all** nodes in the cluster
{% highlight sh %}
```shell
for node in kube-master.example.com kube-node-01.example.com kube-node-02.example.com; do
ssh-copy-id ${node}
done
{% endhighlight %}
```
## Setting up the cluster
@ -101,17 +101,17 @@ edit: ~/contrib/ansible/group_vars/all.yml
Modify `source_type` as below to access kubernetes packages through the package manager.
{% highlight yaml %}
```yaml
source_type: packageManager
{% endhighlight %}
```
**Configure the IP addresses used for services**
Each Kubernetes service gets its own IP address. These are not real IPs. You need only select a range of IPs which are not in use elsewhere in your environment.
{% highlight yaml %}
```yaml
kube_service_addresses: 10.254.0.0/16
{% endhighlight %}
```
**Managing flannel**
@ -122,31 +122,31 @@ Modify `flannel_subnet`, `flannel_prefix` and `flannel_host_prefix` only if defa
Set `cluster_logging` to false or true (default) to disable or enable logging with elasticsearch.
{% highlight yaml %}
```yaml
cluster_logging: true
{% endhighlight %}
```
Turn `cluster_monitoring` to true (default) or false to enable or disable cluster monitoring with heapster and influxdb.
{% highlight yaml %}
```yaml
cluster_monitoring: true
{% endhighlight %}
```
Turn `dns_setup` to true (recommended) or false to enable or disable the whole DNS configuration.
{% highlight yaml %}
```yaml
dns_setup: true
{% endhighlight %}
```
**Tell ansible to get to work!**
This will finally set up your whole Kubernetes cluster for you.
{% highlight sh %}
```shell
cd ~/contrib/ansible/
./setup.sh
{% endhighlight %}
```
## Testing and using your new cluster
@ -156,25 +156,25 @@ That's all there is to it. It's really that easy. At this point you should hav
Run the following on the kube-master:
{% highlight sh %}
```shell
kubectl get nodes
{% endhighlight %}
```
**Show services running on masters and nodes**
{% highlight sh %}
```shell
systemctl | grep -i kube
{% endhighlight %}
```
**Show firewall rules on the masters and nodes**
{% highlight sh %}
```shell
iptables -nvL
{% endhighlight %}
```
**Create /tmp/apache.json on the master with the following contents and deploy the pod**
{% highlight json %}
```json
{
"kind": "Pod",
"apiVersion": "v1",
@ -199,29 +199,29 @@ iptables -nvL
]
}
}
{% endhighlight %}
```
{% highlight sh %}
```shell
kubectl create -f /tmp/apache.json
{% endhighlight %}
```
**Check where the pod was created**
{% highlight sh %}
```shell
kubectl get pods
{% endhighlight %}
```
**Check Docker status on nodes**
{% highlight sh %}
```shell
docker ps
docker images
{% endhighlight %}
```
**After the pod is 'Running', check web server access on the node**
{% highlight sh %}
```shell
curl http://localhost
{% endhighlight %}
```
That's it!

View File

@ -43,32 +43,32 @@ fed-node = 192.168.121.65
Fedora Koji](http://koji.fedoraproject.org/koji/packageinfo?packageID=19202) instead of using the yum
install command below.
{% highlight sh %}
```shell
yum -y install --enablerepo=updates-testing kubernetes
{% endhighlight %}
```
* Install etcd and iptables
{% highlight sh %}
```shell
yum -y install etcd iptables
{% endhighlight %}
```
* Add the master and node to /etc/hosts on all machines (not needed if the hostnames are already in DNS). Make sure that communication works between fed-master and fed-node by using a utility such as ping; a quick check is shown after the block below.
{% highlight sh %}
```shell
echo "192.168.121.9 fed-master
192.168.121.65 fed-node" >> /etc/hosts
{% endhighlight %}
```
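For example, a quick connectivity check after updating /etc/hosts might look like this:

```shell
# On fed-master, verify the node resolves and responds
ping -c 3 fed-node
# On fed-node, verify the master resolves and responds
ping -c 3 fed-master
```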
* Edit /etc/kubernetes/config, which will be the same on all hosts (master and node), to contain:
{% highlight sh %}
```shell
# Comma separated list of nodes in the etcd cluster
KUBE_MASTER="--master=http://fed-master:8080"
@ -82,23 +82,23 @@ KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"
{% endhighlight %}
```
* Disable the firewall on both the master and node, as docker does not play well with other firewall rule managers. Please note that iptables-services does not exist on default fedora server install.
{% highlight sh %}
```shell
systemctl disable iptables-services firewalld
systemctl stop iptables-services firewalld
{% endhighlight %}
```
**Configure the Kubernetes services on the master.**
* Edit /etc/kubernetes/apiserver to appear as such. The service-cluster-ip-range IP addresses must be an unused block of addresses, not used anywhere else.
They do not need to be routed or assigned to anything.
{% highlight sh %}
```shell
# The address on the local server to listen to.
KUBE_API_ADDRESS="--address=0.0.0.0"
@ -112,29 +112,29 @@ KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
# Add your own!
KUBE_API_ARGS=""
{% endhighlight %}
```
* Edit /etc/etcd/etcd.conf so that etcd listens on all IPs instead of only 127.0.0.1; if not, you will get errors like "connection refused". Note that Fedora 22 uses etcd 2.0; one of the changes in etcd 2.0 is that it now uses ports 2379 and 2380 (as opposed to etcd 0.46, which used 4001 and 7001).
{% highlight sh %}
```shell
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:4001"
{% endhighlight %}
```
* Create /var/run/kubernetes on master:
{% highlight sh %}
```shell
mkdir /var/run/kubernetes
chown kube:kube /var/run/kubernetes
chmod 750 /var/run/kubernetes
{% endhighlight %}
```
* Start the appropriate services on master:
{% highlight sh %}
```shell
for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do
systemctl restart $SERVICES
@ -142,13 +142,13 @@ for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do
systemctl status $SERVICES
done
{% endhighlight %}
```
* Addition of nodes:
* Create the following node.json file on the Kubernetes master node:
{% highlight json %}
```json
{
"apiVersion": "v1",
@ -162,11 +162,11 @@ done
}
}
{% endhighlight %}
```
Now create a node object internally in your Kubernetes cluster by running:
{% highlight console %}
```shell
$ kubectl create -f ./node.json
@ -174,7 +174,7 @@ $ kubectl get nodes
NAME LABELS STATUS
fed-node name=fed-node-label Unknown
{% endhighlight %}
```
Please note that in the above, it only creates a representation for the node
_fed-node_ internally. It does not provision the actual _fed-node_. Also, it
@ -188,7 +188,7 @@ a Kubernetes node (fed-node) below.
* Edit /etc/kubernetes/kubelet to appear as such:
{% highlight sh %}
```shell
###
# Kubernetes kubelet (node) config
@ -205,11 +205,11 @@ KUBELET_API_SERVER="--api-servers=http://fed-master:8080"
# Add your own!
#KUBELET_ARGS=""
{% endhighlight %}
```
* Start the appropriate services on the node (fed-node).
{% highlight sh %}
```shell
for SERVICES in kube-proxy kubelet docker; do
systemctl restart $SERVICES
@ -217,27 +217,27 @@ for SERVICES in kube-proxy kubelet docker; do
systemctl status $SERVICES
done
{% endhighlight %}
```
* Check to make sure the cluster can now see fed-node on fed-master, and that its status changes to _Ready_.
{% highlight console %}
```shell
kubectl get nodes
NAME LABELS STATUS
fed-node name=fed-node-label Ready
{% endhighlight %}
```
* Deletion of nodes:
To delete _fed-node_ from your Kubernetes cluster, run the following on fed-master (please do not actually do it; it is shown just for information):
{% highlight sh %}
```shell
kubectl delete -f ./node.json
{% endhighlight %}
```
*You should be finished!*

View File

@ -19,7 +19,7 @@ This document describes how to deploy Kubernetes on multiple hosts to set up a m
* Configure flannel by creating a `flannel-config.json` in your current directory on fed-master. flannel provides udp and vxlan among other overlay networking backend options. In this guide, we choose the kernel-based vxlan backend. The contents of the JSON file are:
{% highlight json %}
```json
{
"Network": "18.16.0.0/16",
"SubnetLen": 24,
@ -28,21 +28,21 @@ This document describes how to deploy Kubernetes on multiple hosts to set up a m
"VNI": 1
}
}
{% endhighlight %}
```
**NOTE:** Choose an IP range that is *NOT* part of the public IP address range.
* Add the configuration to the etcd server on fed-master.
{% highlight sh %}
```shell
etcdctl set /coreos.com/network/config < flannel-config.json
{% endhighlight %}
```
* Verify the key exists in the etcd server on fed-master.
{% highlight sh %}
```shell
etcdctl get /coreos.com/network/config
{% endhighlight %}
```
## Node Setup
@ -50,7 +50,7 @@ etcdctl get /coreos.com/network/config
* Edit the flannel configuration file /etc/sysconfig/flanneld as follows:
{% highlight sh %}
```shell
# Flanneld configuration options
# etcd url location. Point this to the server where etcd runs
@ -62,30 +62,30 @@ FLANNEL_ETCD_KEY="/coreos.com/network"
# Any additional options that you want to pass
FLANNEL_OPTIONS=""
{% endhighlight %}
```
**Note:** By default, flannel uses the interface of the default route. If you have multiple interfaces and would like to use one other than the default route's interface, you can add `-iface=<interface>` to FLANNEL_OPTIONS. For additional options, run `flanneld --help` on the command line.
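For example, to pin flannel to a hypothetical second interface named eth1, the options line in /etc/sysconfig/flanneld would look like:

```shell
# Any additional options that you want to pass (eth1 is just an example interface name)
FLANNEL_OPTIONS="-iface=eth1"
```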
* Enable the flannel service.
{% highlight sh %}
```shell
systemctl enable flanneld
{% endhighlight %}
```
* If docker is not running, then starting the flannel service is enough; skip the next step.
{% highlight sh %}
```shell
systemctl start flanneld
{% endhighlight %}
```
* If docker is already running, then stop docker, delete the docker bridge (docker0), start flanneld, and restart docker as follows. Alternatively, just reboot the system (`systemctl reboot`).
{% highlight sh %}
```shell
systemctl stop docker
ip link delete docker0
systemctl start flanneld
systemctl start docker
{% endhighlight %}
```
***
@ -93,21 +93,21 @@ systemctl start docker
* Now check the interfaces on the nodes. Notice there is now a flannel.1 interface, and the IP addresses of the docker0 and flannel.1 interfaces are in the same network. You will notice that docker0 is assigned a subnet (18.16.29.0/24 as shown below) on each Kubernetes node out of the IP range configured above. A working output should look like this:
{% highlight console %}
```shell
# ip -4 a|grep inet
inet 127.0.0.1/8 scope host lo
inet 192.168.122.77/24 brd 192.168.122.255 scope global dynamic eth0
inet 18.16.29.0/16 scope global flannel.1
inet 18.16.29.1/24 scope global docker0
{% endhighlight %}
```
* From any node in the cluster, check the cluster members by issuing a query to the etcd server via curl (only partial output is shown using `grep -E "\{|\}|key|value"`). If you set up a cluster with 1 master and 3 nodes, you should see one block for each node showing the subnets they have been assigned. You can associate those subnets with each node by the MAC address (VtepMAC) and IP address (Public IP) listed in the output.
{% highlight sh %}
```shell
curl -s http://fed-master:4001/v2/keys/coreos.com/network/subnets | python -mjson.tool
{% endhighlight %}
```
{% highlight json %}
```json
{
"node": {
"key": "/coreos.com/network/subnets",
@ -125,54 +125,54 @@ curl -s http://fed-master:4001/v2/keys/coreos.com/network/subnets | python -mjso
}
}
}
{% endhighlight %}
```
* From all nodes, review the `/run/flannel/subnet.env` file. This file was generated automatically by flannel.
{% highlight console %}
```shell
# cat /run/flannel/subnet.env
FLANNEL_SUBNET=18.16.29.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=false
{% endhighlight %}
```
* At this point, we have etcd running on the Kubernetes master, and flannel / docker running on Kubernetes nodes. Next steps are for testing cross-host container communication which will confirm that docker and flannel are configured properly.
* Issue the following commands on any 2 nodes:
{% highlight console %}
```shell
# docker run -it fedora:latest bash
bash-4.3#
{% endhighlight %}
```
* This will place you inside the container. Install the iproute and iputils packages to get the ip and ping utilities. Due to a [bug](https://bugzilla.redhat.com/show_bug.cgi?id=1142311), you must modify the capabilities of the ping binary to work around the "Operation not permitted" error.
{% highlight console %}
```shell
bash-4.3# yum -y install iproute iputils
bash-4.3# setcap cap_net_raw-ep /usr/bin/ping
{% endhighlight %}
```
* Now note the IP address on the first node:
{% highlight console %}
```shell
bash-4.3# ip -4 a l eth0 | grep inet
inet 18.16.29.4/24 scope global eth0
{% endhighlight %}
```
* And also note the IP address on the other node:
{% highlight console %}
```shell
bash-4.3# ip a l eth0 | grep inet
inet 18.16.90.4/24 scope global eth0
{% endhighlight %}
```
* Now ping from the first node to the other node:
{% highlight console %}
```shell
bash-4.3# ping 18.16.90.4
PING 18.16.90.4 (18.16.90.4) 56(84) bytes of data.
64 bytes from 18.16.90.4: icmp_seq=1 ttl=62 time=0.275 ms
64 bytes from 18.16.90.4: icmp_seq=2 ttl=62 time=0.372 ms
{% endhighlight %}
```
* The Kubernetes multi-node cluster is now set up with overlay networking provided by flannel.

View File

@ -29,15 +29,15 @@ If you want to use custom binaries or pure open source Kubernetes, please contin
You can install a client and start a cluster with either one of these commands (we list both in case only one is installed on your machine):
{% highlight bash %}
```shell
curl -sS https://get.k8s.io | bash
{% endhighlight %}
```
or
{% highlight bash %}
```shell
wget -q -O - https://get.k8s.io | bash
{% endhighlight %}
```
Once this command completes, you will have a master VM and four worker VMs, running as a Kubernetes cluster.
@ -47,10 +47,10 @@ The script run by the commands above creates a cluster with the name/prefix "kub
Alternatively, you can download and install the latest Kubernetes release from [this page](https://github.com/kubernetes/kubernetes/releases), then run the `<kubernetes>/cluster/kube-up.sh` script to start the cluster:
{% highlight bash %}
```shell
cd kubernetes
cluster/kube-up.sh
{% endhighlight %}
```
If you want more than one cluster running in your project, want to use a different name, or want a different number of worker nodes, see the `<kubernetes>/cluster/gce/config-default.sh` file for more fine-grained configuration before you start up your cluster.
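For example, a minimal sketch of overriding the worker count before startup (the variable name is an assumption; check `config-default.sh` for the exact names your release uses):

```shell
# Assumed variable name; confirm in cluster/gce/config-default.sh
export NUM_MINIONS=2
cluster/kube-up.sh
```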
@ -74,13 +74,13 @@ You will use it to look at your new cluster and bring up example apps.
Add the appropriate binary folder to your `PATH` to access kubectl:
{% highlight bash %}
```shell
# OS X
export PATH=<path/to/kubernetes-directory>/platforms/darwin/amd64:$PATH
# Linux
export PATH=<path/to/kubernetes-directory>/platforms/linux/amd64:$PATH
{% endhighlight %}
```
**Note**: gcloud also ships with `kubectl`, which by default is added to your path.
However the gcloud bundled kubectl version may be older than the one downloaded by the
@ -111,32 +111,32 @@ but then you have to update it when you update kubectl.
Once `kubectl` is in your path, you can use it to look at your cluster. E.g., running:
{% highlight console %}
```shell
$ kubectl get --all-namespaces services
{% endhighlight %}
```
should show a set of [services](../user-guide/services) that look something like this:
{% highlight console %}
```shell
NAMESPACE NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
default kubernetes 10.0.0.1 <none> 443/TCP <none> 1d
kube-system kube-dns 10.0.0.2 <none> 53/TCP,53/UDP k8s-app=kube-dns 1d
kube-system kube-ui 10.0.0.3 <none> 80/TCP k8s-app=kube-ui 1d
...
{% endhighlight %}
```
Similarly, you can take a look at the set of [pods](../user-guide/pods) that were created during cluster startup.
You can do this via the
{% highlight console %}
```shell
$ kubectl get --all-namespaces pods
{% endhighlight %}
```
command.
You'll see a list of pods that looks something like this (the name specifics will be different):
{% highlight console %}
```shell
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system fluentd-cloud-logging-kubernetes-minion-63uo 1/1 Running 0 14m
kube-system fluentd-cloud-logging-kubernetes-minion-c1n9 1/1 Running 0 14m
@ -146,7 +146,7 @@ kube-system kube-dns-v5-7ztia 3/3 Running
kube-system kube-ui-v1-curt1 1/1 Running 0 15m
kube-system monitoring-heapster-v5-ex4u3 1/1 Running 1 15m
kube-system monitoring-influx-grafana-v1-piled 2/2 Running 0 15m
{% endhighlight %}
```
Some of the pods may take a few seconds to start up (during this time they'll show `Pending`), but check that they all show as `Running` after a short period.
@ -160,10 +160,10 @@ For more complete applications, please look in the [examples directory](../../ex
To remove/delete/teardown the cluster, use the `kube-down.sh` script.
{% highlight bash %}
```shell
cd kubernetes
cluster/kube-down.sh
{% endhighlight %}
```
Likewise, the `kube-up.sh` script in the same directory will bring it back up. You do not need to rerun the `curl` or `wget` command: everything needed to set up the Kubernetes cluster is now on your workstation.

View File

@ -104,7 +104,7 @@ No pods will be available before starting a container:
We'll follow the aws-coreos example. Create a pod manifest: `pod.json`
{% highlight json %}
```json
{
"apiVersion": "v1",
"kind": "Pod",
@ -126,7 +126,7 @@ We'll follow the aws-coreos example. Create a pod manifest: `pod.json`
}]
}
}
{% endhighlight %}
```
Create the pod with kubectl:

View File

@ -49,15 +49,15 @@ On the other hand, `libvirt-coreos` might be useful for people investigating low
You can test it with the following command:
{% highlight sh %}
```shell
virsh -c qemu:///system pool-list
{% endhighlight %}
```
If you get access error messages, please read https://libvirt.org/acl.html and https://libvirt.org/aclpolkit.html.
In short, if your libvirt has been compiled with Polkit support (ex: Arch, Fedora 21), you can create `/etc/polkit-1/rules.d/50-org.libvirt.unix.manage.rules` as follows to grant full access to libvirt to `$USER`
{% highlight sh %}
```shell
sudo /bin/sh -c "cat - > /etc/polkit-1/rules.d/50-org.libvirt.unix.manage.rules" << EOF
polkit.addRule(function(action, subject) {
if (action.id == "org.libvirt.unix.manage" &&
@ -68,18 +68,18 @@ polkit.addRule(function(action, subject) {
}
});
EOF
{% endhighlight %}
```
If your libvirt has not been compiled with Polkit (ex: Ubuntu 14.04.1 LTS), check the permissions on the libvirt unix socket:
{% raw %}
{% highlight console %}
```shell
$ ls -l /var/run/libvirt/libvirt-sock
srwxrwx--- 1 root libvirtd 0 févr. 12 16:03 /var/run/libvirt/libvirt-sock
$ usermod -a -G libvirtd $USER
# $USER needs to logout/login to have the new group be taken into account
{% endhighlight %}
```
{% endraw %}
(Replace `$USER` with your login name)
@ -92,9 +92,9 @@ As we're using the `qemu:///system` instance of libvirt, qemu will run with a sp
If your `$HOME` is world readable, everything is fine. If your `$HOME` is private, `cluster/kube-up.sh` will fail with an error message like:
{% highlight console %}
```shell
error: Cannot access storage file '$HOME/.../kubernetes/cluster/libvirt-coreos/libvirt_storage_pool/kubernetes_master.img' (as uid:99, gid:78): Permission denied
{% endhighlight %}
```
In order to fix that issue, you have several possibilities:
* set `POOL_PATH` inside `cluster/libvirt-coreos/config-default.sh` to a directory:
@ -105,9 +105,9 @@ In order to fix that issue, you have several possibilities:
On Arch:
{% highlight sh %}
```shell
setfacl -m g:kvm:--x ~
{% endhighlight %}
```
### Setup
@ -115,12 +115,12 @@ By default, the libvirt-coreos setup will create a single Kubernetes master and
To start your local cluster, open a shell and run:
{% highlight sh %}
```shell
cd kubernetes
export KUBERNETES_PROVIDER=libvirt-coreos
cluster/kube-up.sh
{% endhighlight %}
```
The `KUBERNETES_PROVIDER` environment variable tells all of the various cluster management scripts which variant to use. If you forget to set this, the assumption is you are running on Google Compute Engine.
@ -133,7 +133,7 @@ The `KUBE_PUSH` environment variable may be set to specify which Kubernetes bina
You can check that your machines are there and running with:
{% highlight console %}
```shell
$ virsh -c qemu:///system list
Id Name State
----------------------------------------------------
@ -141,17 +141,17 @@ $ virsh -c qemu:///system list
16 kubernetes_minion-01 running
17 kubernetes_minion-02 running
18 kubernetes_minion-03 running
{% endhighlight %}
```
You can check that the Kubernetes cluster is working with:
{% highlight console %}
```shell
$ kubectl get nodes
NAME LABELS STATUS
192.168.10.2 <none> Ready
192.168.10.3 <none> Ready
192.168.10.4 <none> Ready
{% endhighlight %}
```
The VMs are running [CoreOS](https://coreos.com/).
Your ssh keys have already been pushed to the VM. (It looks for ~/.ssh/id_*.pub)
@ -161,53 +161,53 @@ The IPs to connect to the nodes are 192.168.10.2 and onwards.
Connect to `kubernetes_master`:
{% highlight sh %}
```shell
ssh core@192.168.10.1
{% endhighlight %}
```
Connect to `kubernetes_minion-01`:
{% highlight sh %}
```shell
ssh core@192.168.10.2
{% endhighlight %}
```
### Interacting with your Kubernetes cluster with the `kube-*` scripts.
All of the following commands assume you have set `KUBERNETES_PROVIDER` appropriately:
{% highlight sh %}
```shell
export KUBERNETES_PROVIDER=libvirt-coreos
{% endhighlight %}
```
Bring up a libvirt-CoreOS cluster of 5 nodes
{% highlight sh %}
```shell
NUM_MINIONS=5 cluster/kube-up.sh
{% endhighlight %}
```
Destroy the libvirt-CoreOS cluster
{% highlight sh %}
```shell
cluster/kube-down.sh
{% endhighlight %}
```
Update the libvirt-CoreOS cluster with a new Kubernetes release produced by `make release` or `make release-skip-tests`:
{% highlight sh %}
```shell
cluster/kube-push.sh
{% endhighlight %}
```
Update the libvirt-CoreOS cluster with the locally built Kubernetes binaries produced by `make`:
{% highlight sh %}
```shell
KUBE_PUSH=local cluster/kube-push.sh
{% endhighlight %}
```
Interact with the cluster
{% highlight sh %}
```shell
kubectl ...
{% endhighlight %}
```
### Troubleshooting
@ -215,9 +215,9 @@ kubectl ...
Build the release tarballs:
{% highlight sh %}
```shell
make release
{% endhighlight %}
```
#### Can't find virsh in PATH, please fix and retry.
@ -225,21 +225,21 @@ Install libvirt
On Arch:
{% highlight sh %}
```shell
pacman -S qemu libvirt
{% endhighlight %}
```
On Ubuntu 14.04.1:
{% highlight sh %}
```shell
aptitude install qemu-system-x86 libvirt-bin
{% endhighlight %}
```
On Fedora 21:
{% highlight sh %}
```shell
yum install qemu libvirt
{% endhighlight %}
```
#### error: Failed to connect socket to '/var/run/libvirt/libvirt-sock': No such file or directory
@ -247,15 +247,15 @@ Start the libvirt daemon
On Arch:
{% highlight sh %}
```shell
systemctl start libvirtd
{% endhighlight %}
```
On Ubuntu 14.04.1:
{% highlight sh %}
```shell
service libvirt-bin start
{% endhighlight %}
```
#### error: Failed to connect socket to '/var/run/libvirt/libvirt-sock': Permission denied
@ -263,7 +263,7 @@ Fix libvirt access permission (Remember to adapt `$USER`)
On Arch and Fedora 21:
{% highlight sh %}
```shell
cat > /etc/polkit-1/rules.d/50-org.libvirt.unix.manage.rules <<EOF
polkit.addRule(function(action, subject) {
if (action.id == "org.libvirt.unix.manage" &&
@ -274,13 +274,13 @@ polkit.addRule(function(action, subject) {
}
});
EOF
{% endhighlight %}
```
On Ubuntu:
{% highlight sh %}
```shell
usermod -a -G libvirtd $USER
{% endhighlight %}
```
#### error: Out of memory initializing network (virsh net-create...)

View File

@ -30,10 +30,10 @@ You need [go](https://golang.org/doc/install) at least 1.3+ in your path, please
In a separate tab of your terminal, run the following (since one needs sudo access to start/stop Kubernetes daemons, it is easier to run the entire script as root):
{% highlight sh %}
```shell
cd kubernetes
hack/local-up-cluster.sh
{% endhighlight %}
```
This will build and start a lightweight local cluster, consisting of a master
and a single node. Type Control-C to shut it down.
@ -48,7 +48,7 @@ Your cluster is running, and you want to start running containers!
You can now use any of the cluster/kubectl.sh commands to interact with your local setup.
{% highlight sh %}
```shell
cluster/kubectl.sh get pods
cluster/kubectl.sh get services
cluster/kubectl.sh get replicationcontrollers
@ -67,7 +67,7 @@ cluster/kubectl.sh run my-nginx --image=nginx --replicas=2 --port=80
cluster/kubectl.sh get pods
cluster/kubectl.sh get services
cluster/kubectl.sh get replicationcontrollers
{% endhighlight %}
```
### Running a user defined pod
@ -78,9 +78,9 @@ However you cannot view the nginx start page on localhost. To verify that nginx
You can control the specifications of a pod via a user defined manifest, and reach nginx through your browser on the port specified therein:
{% highlight sh %}
```shell
cluster/kubectl.sh create -f docs/user-guide/pod.yaml
{% endhighlight %}
```
Congratulations!
@ -104,11 +104,11 @@ You are running a single node setup. This has the limitation of only supporting
#### I changed Kubernetes code, how do I run it?
{% highlight sh %}
```shell
cd kubernetes
hack/build-go.sh
hack/local-up-cluster.sh
{% endhighlight %}
```
#### kubectl claims to start a container but `get pods` and `docker ps` don't show it.

View File

@ -8,18 +8,18 @@ alternative to Google Cloud Logging.
To use Elasticsearch and Kibana for cluster logging you should set the following environment variable as shown below:
{% highlight console %}
```shell
KUBE_LOGGING_DESTINATION=elasticsearch
{% endhighlight %}
```
You should also ensure that `KUBE_ENABLE_NODE_LOGGING=true` (which is the default for the GCE platform).
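For example, both settings can be exported in the shell before bringing the cluster up:

```shell
# Enable node-level log collection and send the logs to Elasticsearch
export KUBE_ENABLE_NODE_LOGGING=true
export KUBE_LOGGING_DESTINATION=elasticsearch
```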
Now when you create a cluster a message will indicate that the Fluentd node-level log collectors
will target Elasticsearch:
{% highlight console %}
```shell
$ cluster/kube-up.sh
...
@ -39,12 +39,12 @@ kubernetes-master-pd us-central1-b 20 pd-ssd READY
Created [https://www.googleapis.com/compute/v1/projects/kubernetes-satnam/regions/us-central1/addresses/kubernetes-master-ip].
+++ Logging using Fluentd to elasticsearch
{% endhighlight %}
```
The node-level Fluentd collector pods and the Elasticsearch pods used to ingest cluster logs and the pod for the Kibana
viewer should be running in the kube-system namespace soon after the cluster comes to life.
{% highlight console %}
```shell
$ kubectl get pods --namespace=kube-system
NAME READY REASON RESTARTS AGE
@ -59,7 +59,7 @@ kube-dns-v3-7r1l9 3/3 Running 0 2h
monitoring-heapster-v4-yl332 1/1 Running 1 2h
monitoring-influx-grafana-v1-o79xf 2/2 Running 0 2h
{% endhighlight %}
```
Here we see that for a four node cluster there is a `fluent-elasticsearch` pod running which gathers
the Docker container logs and sends them to Elasticsearch. The Fluentd collector communicates to
@ -67,7 +67,7 @@ a Kubernetes service that maps requests to specific Elasticsearch pods. Similarl
accessed via a Kubernetes service definition.
{% highlight console %}
```shell
$ kubectl get services --namespace=kube-system
NAME LABELS SELECTOR IP(S) PORT(S)
@ -80,11 +80,11 @@ monitoring-grafana kubernetes.io/cluster-service=true,kubernetes.io/name=Gr
monitoring-heapster kubernetes.io/cluster-service=true,kubernetes.io/name=Heapster k8s-app=heapster 10.0.208.221 80/TCP
monitoring-influxdb kubernetes.io/cluster-service=true,kubernetes.io/name=InfluxDB k8s-app=influxGrafana 10.0.188.57 8083/TCP
{% endhighlight %}
```
By default two Elasticsearch replicas are created and one Kibana replica is created.
{% highlight console %}
```shell
$ kubectl get rc --namespace=kube-system
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
@ -97,13 +97,13 @@ monitoring-heapster-v4 heapster gcr.io/google_containers/
monitoring-influx-grafana-v1 influxdb gcr.io/google_containers/heapster_influxdb:v0.3 k8s-app=influxGrafana,version=v1 1
grafana gcr.io/google_containers/heapster_grafana:v0.7
{% endhighlight %}
```
The Elasticsearch and Kibana services are not directly exposed via a publicly reachable IP address. Instead,
they can be accessed via the service proxy running at the master. The URLs for accessing Elasticsearch
and Kibana via the service proxy can be found using the `kubectl cluster-info` command.
{% highlight console %}
```shell
$ kubectl cluster-info
Kubernetes master is running at https://146.148.94.154
@ -115,12 +115,12 @@ Grafana is running at https://146.148.94.154/api/v1/proxy/namespaces/kube-system
Heapster is running at https://146.148.94.154/api/v1/proxy/namespaces/kube-system/services/monitoring-heapster
InfluxDB is running at https://146.148.94.154/api/v1/proxy/namespaces/kube-system/services/monitoring-influxdb
{% endhighlight %}
```
Before accessing the logs ingested into Elasticsearch using a browser and the service proxy URL we need to find out
the `admin` password for the cluster using `kubectl config view`.
{% highlight console %}
```shell
$ kubectl config view
...
@ -130,7 +130,7 @@ $ kubectl config view
username: admin
...
{% endhighlight %}
```
The first time you try to access the cluster from a browser a dialog box appears asking for the username and password.
Use the username `admin` and provide the basic auth password reported by `kubectl config view` for the
@ -142,7 +142,7 @@ status page for Elasticsearch.
You can now type Elasticsearch queries directly into the browser. Alternatively you can query Elasticsearch
from your local machine using `curl` but first you need to know what your bearer token is:
{% highlight console %}
```shell
$ kubectl config view --minify
apiVersion: v1
@ -166,11 +166,11 @@ users:
client-key-data: REDACTED
token: JsUe2Z3cXqa17UQqQ8qWGGf4nOSLwSnp
{% endhighlight %}
```
Now you can issue requests to Elasticsearch:
{% highlight console %}
```shell
$ curl --header "Authorization: Bearer JsUe2Z3cXqa17UQqQ8qWGGf4nOSLwSnp" --insecure https://146.148.94.154/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging/
{
@ -187,11 +187,11 @@ $ curl --header "Authorization: Bearer JsUe2Z3cXqa17UQqQ8qWGGf4nOSLwSnp" --insec
"tagline" : "You Know, for Search"
}
{% endhighlight %}
```
Note that you need the trailing slash at the end of the service proxy URL. Here is an example of a search:
{% highlight console %}
```shell
$ curl --header "Authorization: Bearer JsUe2Z3cXqa17UQqQ8qWGGf4nOSLwSnp" --insecure https://146.148.94.154/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging/_search?pretty=true
{
@ -228,7 +228,7 @@ $ curl --header "Authorization: Bearer JsUe2Z3cXqa17UQqQ8qWGGf4nOSLwSnp" --insec
}
}
{% endhighlight %}
```
The Elasticsearch website contains information about [URI search queries](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-uri-request) which can be used to extract the required logs.
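For instance, a URI search through the same service proxy might look like the following (the `log` field name is an assumption about how the entries are indexed; adjust it to your mapping):

```shell
# URI search for entries whose log field mentions "counter"
curl --header "Authorization: Bearer JsUe2Z3cXqa17UQqQ8qWGGf4nOSLwSnp" --insecure \
  "https://146.148.94.154/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging/_search?q=log:counter&pretty=true"
```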
@ -243,12 +243,12 @@ regulary refreshed. Here is a typical view of ingested logs from the Kibana view
Another way to access Elasticsearch and Kibana in the cluster is to use `kubectl proxy` which will serve
a local proxy to the remote master:
{% highlight console %}
```shell
$ kubectl proxy
Starting to serve on localhost:8001
{% endhighlight %}
```
Now you can visit the URL [http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging](http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging) to contact Elasticsearch and [http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kibana-logging](http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kibana-logging) to access the Kibana viewer.

View File

@ -6,7 +6,7 @@ A Kubernetes cluster will typically be humming along running many system and app
Cluster level logging for Kubernetes allows us to collect logs which persist beyond the lifetime of the pod's container images or the lifetime of the pod or even cluster. In this article we assume that a Kubernetes cluster has been created with cluster level logging support for sending logs to Google Cloud Logging. After a cluster has been created you will have a collection of system pods running in the `kube-system` namespace that support monitoring,
logging and DNS resolution for names of Kubernetes services:
{% highlight console %}
```shell
$ kubectl get pods --namespace=kube-system
NAME READY REASON RESTARTS AGE
@ -17,7 +17,7 @@ fluentd-cloud-logging-kubernetes-minion-20ej 1/1 Running 0 31
kube-dns-v3-pk22 3/3 Running 0 32m
monitoring-heapster-v1-20ej 0/1 Running 9 32m
{% endhighlight %}
```
Here is the same information in a picture which shows how the pods might be placed on specific nodes.
@ -30,7 +30,7 @@ To help explain how cluster level logging works let's start off with a synthetic
<!-- BEGIN MUNGE: EXAMPLE ../../examples/blog-logging/counter-pod.yaml -->
{% highlight yaml %}
```yaml
apiVersion: v1
kind: Pod
@ -43,7 +43,7 @@ spec:
args: [bash, -c,
'for ((i = 0; ; i++)); do echo "$i: $(date)"; sleep 1; done']
{% endhighlight %}
```
[Download example](../../examples/blog-logging/counter-pod.yaml)
<!-- END MUNGE: EXAMPLE ../../examples/blog-logging/counter-pod.yaml -->
@ -51,22 +51,22 @@ spec:
This pod specification has one container which runs a bash script when the container is born. This script simply writes out the value of a counter and the date once per second and runs indefinitely. Let's create the pod in the default
namespace.
{% highlight console %}
```shell
$ kubectl create -f examples/blog-logging/counter-pod.yaml
pods/counter
{% endhighlight %}
```
We can observe the running pod:
{% highlight console %}
```shell
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
counter 1/1 Running 0 5m
{% endhighlight %}
```
This step may take a few minutes while the ubuntu:14.04 image is downloaded; during this time the pod status will be shown as `Pending`.
@ -76,7 +76,7 @@ One of the nodes is now running the counter pod:
When the pod status changes to `Running` we can use the kubectl logs command to view the output of this counter pod.
{% highlight console %}
```shell
$ kubectl logs counter
0: Tue Jun 2 21:37:31 UTC 2015
@ -87,11 +87,11 @@ $ kubectl logs counter
5: Tue Jun 2 21:37:36 UTC 2015
...
{% endhighlight %}
```
This command fetches the log text from the Docker log file for the image that is running in this container. We can connect to the running container and observe the running counter bash script.
{% highlight console %}
```shell
$ kubectl exec -i counter bash
ps aux
@ -101,29 +101,29 @@ root 468 0.0 0.0 17968 2904 ? Ss 00:05 0:00 bash
root 479 0.0 0.0 4348 812 ? S 00:05 0:00 sleep 1
root 480 0.0 0.0 15572 2212 ? R 00:05 0:00 ps aux
{% endhighlight %}
```
What happens if for any reason the image in this pod is killed off and then restarted by Kubernetes? Will we still see the log lines from the previous invocation of the container followed by the log lines for the started container? Or will we lose the log lines from the original container's execution and only see the log lines for the new container? Let's find out. First let's stop the currently running counter.
{% highlight console %}
```shell
$ kubectl stop pod counter
pods/counter
{% endhighlight %}
```
Now let's restart the counter.
{% highlight console %}
```shell
$ kubectl create -f examples/blog-logging/counter-pod.yaml
pods/counter
{% endhighlight %}
```
Let's wait for the container to restart and get the log lines again.
{% highlight console %}
```shell
$ kubectl logs counter
0: Tue Jun 2 21:51:40 UTC 2015
@ -136,7 +136,7 @@ $ kubectl logs counter
7: Tue Jun 2 21:51:47 UTC 2015
8: Tue Jun 2 21:51:48 UTC 2015
{% endhighlight %}
```
We've lost the log lines from the first invocation of the container in this pod! Ideally, we want to preserve all the log lines from each invocation of each container in the pod. Furthermore, even if the pod is restarted we would still like to preserve all the log lines that were ever emitted by the containers in the pod. But don't fear, this is the functionality provided by cluster level logging in Kubernetes. When a cluster is created, the standard output and standard error output of each container can be ingested using a [Fluentd](http://www.fluentd.org/) agent running on each node into either [Google Cloud Logging](https://cloud.google.com/logging/docs/) or into Elasticsearch and viewed with Kibana.
@ -146,7 +146,7 @@ This log collection pod has a specification which looks something like this:
<!-- BEGIN MUNGE: EXAMPLE ../../cluster/saltbase/salt/fluentd-gcp/fluentd-gcp.yaml -->
{% highlight yaml %}
```yaml
apiVersion: v1
kind: Pod
@ -179,7 +179,7 @@ spec:
hostPath:
path: /var/lib/docker/containers
{% endhighlight %}
```
[Download example](https://releases.k8s.io/release-1.1/cluster/saltbase/salt/fluentd-gcp/fluentd-gcp.yaml)
<!-- END MUNGE: EXAMPLE ../../cluster/saltbase/salt/fluentd-gcp/fluentd-gcp.yaml -->
@ -201,13 +201,13 @@ Note the first container counted to 108 and then it was terminated. When the nex
We could query the ingested logs from BigQuery using the following SQL query, which reports the counter log lines with the newest lines first:
{% highlight console %}
```shell
SELECT metadata.timestamp, structPayload.log
FROM [mylogs.kubernetes_counter_default_count_20150611]
ORDER BY metadata.timestamp DESC
{% endhighlight %}
```
Here is some sample output:
@ -216,15 +216,15 @@ Here is some sample output:
We could also fetch the logs from Google Cloud Storage buckets to our desktop or laptop and then search them locally. The following command fetches logs for the counter pod running in a cluster which is itself in a Compute Engine project called `myproject`. Only logs for the date 2015-06-11 are fetched.
{% highlight console %}
```shell
$ gsutil -m cp -r gs://myproject/kubernetes.counter_default_count/2015/06/11 .
{% endhighlight %}
```
Now we can run queries over the ingested logs. The example below uses the [jq](http://stedolan.github.io/jq/) program to extract just the log lines.
{% highlight console %}
```shell
$ cat 21\:00\:00_21\:59\:59_S0.json | jq '.structPayload.log'
"0: Thu Jun 11 21:39:38 UTC 2015\n"
@ -237,7 +237,7 @@ $ cat 21\:00\:00_21\:59\:59_S0.json | jq '.structPayload.log'
"7: Thu Jun 11 21:39:45 UTC 2015\n"
...
{% endhighlight %}
```
This page has touched briefly on the underlying mechanisms that support gathering cluster level logs on a Kubernetes deployment. The approach here only works for gathering the standard output and standard error output of the processes running in the pod's containers. To gather other logs that are stored in files one can use a sidecar container to gather the required files as described at the page [Collecting log files within containers with Fluentd](http://releases.k8s.io/release-1.1/contrib/logging/fluentd-sidecar-gcp/README.md) and sending them to the Google Cloud Logging service.

View File

@ -43,26 +43,26 @@ Further information is available in the Kubernetes on Mesos [contrib directory][
Log into the future Kubernetes *master node* over SSH, replacing the placeholder below with the correct IP address.
{% highlight bash %}
```shell
ssh jclouds@${ip_address_of_master_node}
{% endhighlight %}
```
Build Kubernetes-Mesos.
{% highlight bash %}
```shell
git clone https://github.com/kubernetes/kubernetes
cd kubernetes
export KUBERNETES_CONTRIB=mesos
make
{% endhighlight %}
```
Set some environment variables.
The internal IP address of the master may be obtained via `hostname -i`.
{% highlight bash %}
```shell
export KUBERNETES_MASTER_IP=$(hostname -i)
export KUBERNETES_MASTER=http://${KUBERNETES_MASTER_IP}:8888
{% endhighlight %}
```
Note that KUBERNETES_MASTER is used as the API endpoint. If you have an existing `~/.kube/config` that points to another endpoint, you need to add the option `--server=${KUBERNETES_MASTER}` to kubectl in later steps.
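For example:

```shell
# Point kubectl at this endpoint explicitly
kubectl --server=${KUBERNETES_MASTER} get pods
```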
@ -70,24 +70,24 @@ Note that KUBERNETES_MASTER is used as the api endpoint. If you have existing `~
Start etcd and verify that it is running:
{% highlight bash %}
```shell
sudo docker run -d --hostname $(uname -n) --name etcd \
-p 4001:4001 -p 7001:7001 quay.io/coreos/etcd:v2.0.12 \
--listen-client-urls http://0.0.0.0:4001 \
--advertise-client-urls http://${KUBERNETES_MASTER_IP}:4001
{% endhighlight %}
```
{% highlight console %}
```shell
$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
fd7bac9e2301 quay.io/coreos/etcd:v2.0.12 "/etcd" 5s ago Up 3s 2379/tcp, 2380/... etcd
{% endhighlight %}
```
It's also a good idea to ensure your etcd instance is reachable by testing it
{% highlight bash %}
```shell
curl -L http://${KUBERNETES_MASTER_IP}:4001/v2/keys/
{% endhighlight %}
```
If connectivity is OK, you will see an output of the available keys in etcd (if any).
@ -95,29 +95,29 @@ If connectivity is OK, you will see an output of the available keys in etcd (if
Update your PATH to more easily run the Kubernetes-Mesos binaries:
{% highlight bash %}
```shell
export PATH="$(pwd)/_output/local/go/bin:$PATH"
{% endhighlight %}
```
Identify your Mesos master: depending on your Mesos installation this is either a `host:port` like `mesos-master:5050` or a ZooKeeper URL like `zk://zookeeper:2181/mesos`.
In order to let Kubernetes survive Mesos master changes, the ZooKeeper URL is recommended for production environments.
{% highlight bash %}
```shell
export MESOS_MASTER=<host:port or zk:// url>
{% endhighlight %}
```
Create a cloud config file `mesos-cloud.conf` in the current directory with the following contents:
{% highlight console %}
```shell
$ cat <<EOF >mesos-cloud.conf
[mesos-cloud]
mesos-master = ${MESOS_MASTER}
EOF
{% endhighlight %}
```
Now start the kubernetes-mesos API server, controller manager, and scheduler on the master node:
{% highlight console %}
```shell
$ km apiserver \
--address=${KUBERNETES_MASTER_IP} \
--etcd-servers=http://${KUBERNETES_MASTER_IP}:4001 \
@ -143,36 +143,36 @@ $ km scheduler \
--cluster-dns=10.10.10.10 \
--cluster-domain=cluster.local \
--v=2 >scheduler.log 2>&1 &
{% endhighlight %}
```
Disown your background jobs so that they'll stay running if you log out.
{% highlight bash %}
```shell
disown -a
{% endhighlight %}
```
#### Validate KM Services
Add the appropriate binary folder to your `PATH` to access kubectl:
{% highlight bash %}
```shell
export PATH=<path/to/kubernetes-directory>/platforms/linux/amd64:$PATH
{% endhighlight %}
```
Interact with the kubernetes-mesos framework via `kubectl`:
{% highlight console %}
```shell
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
{% endhighlight %}
```
{% highlight console %}
```shell
# NOTE: your service IPs will likely differ
$ kubectl get services
NAME LABELS SELECTOR IP(S) PORT(S)
k8sm-scheduler component=scheduler,provider=k8sm <none> 10.10.10.113 10251/TCP
kubernetes component=apiserver,provider=kubernetes <none> 10.10.10.1 443/TCP
{% endhighlight %}
```
Lastly, look for Kubernetes in the Mesos web GUI by pointing your browser to
`http://<mesos-master-ip:port>`. Make sure you have an active VPN connection.
@ -182,11 +182,11 @@ Go to the Frameworks tab, and look for an active framework named "Kubernetes".
Write a pod description to a local file:
{% highlight bash %}
```shell
$ cat <<EOPOD >nginx.yaml
{% endhighlight %}
```
{% highlight yaml %}
```yaml
apiVersion: v1
kind: Pod
metadata:
@ -198,23 +198,23 @@ spec:
ports:
- containerPort: 80
EOPOD
{% endhighlight %}
```
Send the pod description to Kubernetes using the `kubectl` CLI:
{% highlight console %}
```shell
$ kubectl create -f ./nginx.yaml
pods/nginx
{% endhighlight %}
```
Wait a minute or two while `dockerd` downloads the image layers from the internet.
We can use the `kubectl` interface to monitor the status of our pod:
{% highlight console %}
```shell
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 0 14s
{% endhighlight %}
```
Verify that the pod task is running in the Mesos web GUI. Click on the
Kubernetes framework. The next screen should show the running Mesos task that
@ -251,31 +251,31 @@ In addition the service template at [cluster/addons/dns/skydns-svc.yaml.in][12]
To do this automatically:
{% highlight bash %}
```shell
sed -e "s/{{ pillar\['dns_replicas'\] }}/1/g;"\
"s,\(command = \"/kube2sky\"\),\\1\\"$'\n'" - --kube_master_url=${KUBERNETES_MASTER},;"\
"s/{{ pillar\['dns_domain'\] }}/cluster.local/g" \
cluster/addons/dns/skydns-rc.yaml.in > skydns-rc.yaml
sed -e "s/{{ pillar\['dns_server'\] }}/10.10.10.10/g" \
cluster/addons/dns/skydns-svc.yaml.in > skydns-svc.yaml
{% endhighlight %}
```
Now the kube-dns pod and service are ready to be launched:
{% highlight bash %}
```shell
kubectl create -f ./skydns-rc.yaml
kubectl create -f ./skydns-svc.yaml
{% endhighlight %}
```
Check with `kubectl get pods --namespace=kube-system` that 3/3 containers of the pods are eventually up and running. Note that the kube-dns pods run in the `kube-system` namespace, not in `default`.
To check that the new DNS service in the cluster works, we start a busybox pod and use that to do a DNS lookup. First create the `busybox.yaml` pod spec:
{% highlight bash %}
```shell
cat <<EOF >busybox.yaml
{% endhighlight %}
```
{% highlight yaml %}
```yaml
apiVersion: v1
kind: Pod
metadata:
@ -291,29 +291,29 @@ spec:
name: busybox
restartPolicy: Always
EOF
{% endhighlight %}
```
Then start the pod:
{% highlight bash %}
```shell
kubectl create -f ./busybox.yaml
{% endhighlight %}
```
When the pod is up and running, start a lookup for the Kubernetes master service, made available on 10.10.10.1 by default:
{% highlight bash %}
```shell
kubectl exec busybox -- nslookup kubernetes
{% endhighlight %}
```
If everything works fine, you will get this output:
{% highlight console %}
```shell
Server: 10.10.10.10
Address 1: 10.10.10.10
Name: kubernetes
Address 1: 10.10.10.1
{% endhighlight %}
```
## What next?

View File

@ -22,50 +22,50 @@ as well to select which [stage1 image](https://github.com/coreos/rkt/blob/master
If you are using the [hack/local-up-cluster.sh](https://releases.k8s.io/release-1.1/hack/local-up-cluster.sh) script to launch the local cluster, then you can edit the environment variables `CONTAINER_RUNTIME`, `RKT_PATH` and `RKT_STAGE1_IMAGE` to
set these flags:
{% highlight console %}
```shell
$ export CONTAINER_RUNTIME=rkt
$ export RKT_PATH=$PATH_TO_RKT_BINARY
$ export RKT_STAGE1_IMAGE=$PATH_TO_STAGE1_IMAGE
{% endhighlight %}
```
Then we can launch the local cluster using the script:
{% highlight console %}
```shell
$ hack/local-up-cluster.sh
{% endhighlight %}
```
### CoreOS cluster on Google Compute Engine (GCE)
To use rkt as the container runtime for your CoreOS cluster on GCE, you need to specify the OS distribution, project, image:
{% highlight console %}
```shell
$ export KUBE_OS_DISTRIBUTION=coreos
$ export KUBE_GCE_MINION_IMAGE=<image_id>
$ export KUBE_GCE_MINION_PROJECT=coreos-cloud
$ export KUBE_CONTAINER_RUNTIME=rkt
{% endhighlight %}
```
You can optionally choose the version of rkt used by setting `KUBE_RKT_VERSION`:
{% highlight console %}
```shell
$ export KUBE_RKT_VERSION=0.8.0
{% endhighlight %}
```
Then you can launch the cluster by:
{% highlight console %}
```shell
$ kube-up.sh
{% endhighlight %}
```
Note that we are still working on making all of the containerized master components run smoothly in rkt. Until then, we are not able to run the master node with rkt.
@ -73,37 +73,37 @@ Note that we are still working on making all containerized the master components
To use rkt as the container runtime for your CoreOS cluster on AWS, you need to specify the provider and OS distribution:
{% highlight console %}
```shell
$ export KUBERNETES_PROVIDER=aws
$ export KUBE_OS_DISTRIBUTION=coreos
$ export KUBE_CONTAINER_RUNTIME=rkt
{% endhighlight %}
```
You can optionally choose the version of rkt used by setting `KUBE_RKT_VERSION`:
{% highlight console %}
```shell
$ export KUBE_RKT_VERSION=0.8.0
{% endhighlight %}
```
You can optionally choose the CoreOS channel by setting `COREOS_CHANNEL`:
{% highlight console %}
```shell
$ export COREOS_CHANNEL=stable
{% endhighlight %}
```
Then you can launch the cluster by:
{% highlight console %}
```shell
$ kube-up.sh
{% endhighlight %}
```
Note: CoreOS is not supported as the master using the automated launch
scripts. The master node is always Ubuntu.
@ -137,21 +137,21 @@ using `journalctl`:
- Check the running state of the systemd service:
{% highlight console %}
```shell
$ sudo journalctl -u $SERVICE_FILE
{% endhighlight %}
```
where `$SERVICE_FILE` is the name of the service file created for the pod; you can find it in the kubelet logs.
##### Check the log of the container in the pod:
{% highlight console %}
```shell
$ sudo journalctl -M rkt-$UUID -u $CONTAINER_NAME
{% endhighlight %}
```
where `$UUID` is the rkt pod's UUID, which you can find via `rkt list --full`, and `$CONTAINER_NAME` is the container's name.

View File

@ -22,50 +22,50 @@ as well to select which [stage1 image](https://github.com/coreos/rkt/blob/master
If you are using the [hack/local-up-cluster.sh](https://releases.k8s.io/release-1.1/hack/local-up-cluster.sh) script to launch the local cluster, then you can edit the environment variables `CONTAINER_RUNTIME`, `RKT_PATH` and `RKT_STAGE1_IMAGE` to
set these flags:
{% highlight console %}
```shell
$ export CONTAINER_RUNTIME=rkt
$ export RKT_PATH=$PATH_TO_RKT_BINARY
$ export RKT_STAGE1_IMAGE=$PATH_TO_STAGE1_IMAGE
{% endhighlight %}
```
Then we can launch the local cluster using the script:
{% highlight console %}
```shell
$ hack/local-up-cluster.sh
{% endhighlight %}
```
### CoreOS cluster on Google Compute Engine (GCE)
To use rkt as the container runtime for your CoreOS cluster on GCE, you need to specify the OS distribution, project, image:
{% highlight console %}
```shell
$ export KUBE_OS_DISTRIBUTION=coreos
$ export KUBE_GCE_MINION_IMAGE=<image_id>
$ export KUBE_GCE_MINION_PROJECT=coreos-cloud
$ export KUBE_CONTAINER_RUNTIME=rkt
{% endhighlight %}
```
You can optionally choose the version of rkt used by setting `KUBE_RKT_VERSION`:
{% highlight console %}
```shell
$ export KUBE_RKT_VERSION=0.8.0
{% endhighlight %}
```
Then you can launch the cluster by:
{% highlight console %}
```shell
$ kube-up.sh
{% endhighlight %}
```
Note that we are still working on making all of the containerized master components run smoothly in rkt. Until then, we are not able to run the master node with rkt.
@ -73,37 +73,37 @@ Note that we are still working on making all containerized the master components
To use rkt as the container runtime for your CoreOS cluster on AWS, you need to specify the provider and OS distribution:
{% highlight console %}
```shell
$ export KUBERNETES_PROVIDER=aws
$ export KUBE_OS_DISTRIBUTION=coreos
$ export KUBE_CONTAINER_RUNTIME=rkt
{% endhighlight %}
```
You can optionally choose the version of rkt used by setting `KUBE_RKT_VERSION`:
{% highlight console %}
```shell
$ export KUBE_RKT_VERSION=0.8.0
{% endhighlight %}
```
You can optionally choose the CoreOS channel by setting `COREOS_CHANNEL`:
{% highlight console %}
```shell
$ export COREOS_CHANNEL=stable
{% endhighlight %}
```
Then you can launch the cluster by:
{% highlight console %}
```shell
$ kube-up.sh
{% endhighlight %}
```
Note: CoreOS is not supported as the master using the automated launch
scripts. The master node is always Ubuntu.
@ -137,21 +137,21 @@ using `journalctl`:
- Check the running state of the systemd service:
{% highlight console %}
```shell
$ sudo journalctl -u $SERVICE_FILE
{% endhighlight %}
```
where `$SERVICE_FILE` is the name of the service file created for the pod; you can find it in the kubelet logs.
##### Check the log of the container in the pod:
{% highlight console %}
```shell
$ sudo journalctl -M rkt-$UUID -u $CONTAINER_NAME
{% endhighlight %}
```
where `$UUID` is the rkt pod's UUID, which you can find via `rkt list --full`, and `$CONTAINER_NAME` is the container's name.

View File

@ -258,7 +258,7 @@ many distinct files to make:
You can make the files by copying the `$HOME/.kube/config`, by following the code
in `cluster/gce/configure-vm.sh` or by using the following template:
{% highlight yaml %}
```yaml
apiVersion: v1
kind: Config
users:
@ -275,7 +275,7 @@ contexts:
user: kubelet
name: service-account-context
current-context: service-account-context
{% endhighlight %}
```
Put the kubeconfig(s) on every node. The examples later in this
guide assume that there are kubeconfigs in `/var/lib/kube-proxy/kubeconfig` and
@ -305,11 +305,11 @@ If you previously had Docker installed on a node without setting Kubernetes-spec
options, you may have a Docker-created bridge and iptables rules. You may want to remove these
as follows before proceeding to configure Docker for Kubernetes.
{% highlight sh %}
```shell
iptables -t nat -F
ifconfig docker0 down
brctl delbr docker0
{% endhighlight %}
```
The way you configure docker will depend on whether you have chosen the routable-vip or overlay-network approach for your network.
Some suggested docker options:
@ -412,9 +412,9 @@ If you have turned off Docker's IP masquerading to allow pods to talk to each
other, then you may need to do masquerading just for destination IPs outside
the cluster network. For example:
{% highlight sh %}
```shell
iptables -w -t nat -A POSTROUTING -o eth0 -j MASQUERADE \! -d ${CLUSTER_SUBNET}
{% endhighlight %}
```
This will rewrite the source address from
the PodIP to the Node IP for traffic bound outside the cluster, and kernel
@ -483,7 +483,7 @@ For each of these components, the steps to start them running are similar:
#### Apiserver pod template
{% highlight json %}
```json
{
"kind": "Pod",
"apiVersion": "v1",
@ -554,7 +554,7 @@ For each of these components, the steps to start them running are similar:
]
}
}
{% endhighlight %}
```
Here are some apiserver flags you may need to set:
@ -614,7 +614,7 @@ Some cloud providers require a config file. If so, you need to put config file i
Complete this template for the scheduler pod:
{% highlight json %}
```json
{
"kind": "Pod",
@ -650,7 +650,7 @@ Complete this template for the scheduler pod:
}
}
{% endhighlight %}
```
Typically, no additional flags are required for the scheduler.
@ -660,7 +660,7 @@ Optionally, you may want to mount `/var/log` as well and redirect output there.
Template for controller manager pod:
{% highlight json %}
```json
{
"kind": "Pod",
@ -721,7 +721,7 @@ Template for controller manager pod:
}
}
{% endhighlight %}
```
Flags to consider using with controller manager:
- `--cluster-name=$CLUSTER_NAME`
@ -742,14 +742,14 @@ controller manager will retry reaching the apiserver until it is up.
Use `ps` or `docker ps` to verify that each process has started. For example, verify that kubelet has started a container for the apiserver like this:
{% highlight console %}
```shell
$ sudo docker ps | grep apiserver:
5783290746d5 gcr.io/google_containers/kube-apiserver:e36bf367342b5a80d7467fd7611ad873 "/bin/sh -c '/usr/lo'" 10 seconds ago Up 9 seconds k8s_kube-apiserver.feb145e7_kube-apiserver-kubernetes-master_default_eaebc600cf80dae59902b44225f2fc0a_225a4695
{% endhighlight %}
```
Then try to connect to the apiserver:
{% highlight console %}
```shell
$ echo $(curl -s http://localhost:8080/healthz)
ok
$ curl -s http://localhost:8080/api
@ -758,7 +758,7 @@ $ curl -s http://localhost:8080/api
"v1"
]
}
{% endhighlight %}
```
If you have selected the `--register-node=true` option for kubelets, they will now begin self-registering with the apiserver.
You should soon be able to see all your nodes by running the `kubectl get nodes` command.

View File

@ -30,20 +30,20 @@ Ubuntu 15 which use systemd instead of upstart. We are working around fixing thi
First clone the Kubernetes GitHub repo:
{% highlight console %}
```shell
$ git clone https://github.com/kubernetes/kubernetes.git
{% endhighlight %}
```
Then download all the needed binaries into the given directory (cluster/ubuntu/binaries):
{% highlight console %}
```shell
$ cd kubernetes/cluster/ubuntu
$ ./build.sh
{% endhighlight %}
```
You can customize the etcd, flannel, and k8s versions by changing the corresponding variables
`ETCD_VERSION`, `FLANNEL_VERSION` and `KUBE_VERSION` in build.sh; by default the etcd version is 2.0.12,
@ -67,7 +67,7 @@ An example cluster is listed below:
First configure the cluster information in cluster/ubuntu/config-default.sh; below is a simple sample:
{% highlight sh %}
```shell
export nodes="vcap@10.10.103.250 vcap@10.10.103.162 vcap@10.10.103.223"
@ -79,7 +79,7 @@ export SERVICE_CLUSTER_IP_RANGE=192.168.3.0/24
export FLANNEL_NET=172.16.0.0/16
{% endhighlight %}
```
The first variable, `nodes`, defines all your cluster nodes; the MASTER node comes first and
entries are separated by blank spaces, like `<user_1@ip_1> <user_2@ip_2> <user_3@ip_3>`
@ -114,21 +114,21 @@ After all the above variables being set correctly, we can use following command
The scripts automatically scp binaries and config files to all the machines and start the k8s service on them.
The only thing you need to do is to type the sudo password when prompted.
{% highlight console %}
```shell
Deploying minion on machine 10.10.103.223
...
[sudo] password to copy files and start minion:
{% endhighlight %}
```
If all goes right, you will see the message below on the console, indicating that k8s is up.
{% highlight console %}
```shell
Cluster validation succeeded
{% endhighlight %}
```
### Test it out
@ -138,7 +138,7 @@ You can make it available via PATH, then you can use the below command smoothly.
For example, use `$ kubectl get nodes` to see if all of your nodes are ready.
{% highlight console %}
```shell
$ kubectl get nodes
NAME LABELS STATUS
@ -146,7 +146,7 @@ NAME LABELS STATUS
10.10.103.223 kubernetes.io/hostname=10.10.103.223 Ready
10.10.103.250 kubernetes.io/hostname=10.10.103.250 Ready
{% endhighlight %}
```
You can also run the Kubernetes [guestbook example](../../examples/guestbook/) to build a redis backend cluster on the k8s
@ -158,7 +158,7 @@ and UI onto the existing cluster.
DNS is configured in cluster/ubuntu/config-default.sh.
{% highlight sh %}
```shell
ENABLE_CLUSTER_DNS="${KUBE_ENABLE_CLUSTER_DNS:-true}"
@ -168,27 +168,27 @@ DNS_DOMAIN="cluster.local"
DNS_REPLICAS=1
{% endhighlight %}
```
`DNS_SERVER_IP` defines the IP of the DNS server, which must be within the `SERVICE_CLUSTER_IP_RANGE`.
`DNS_REPLICAS` describes how many DNS pods run in the cluster.
By default, we also take care of the kube-ui addon.
{% highlight sh %}
```shell
ENABLE_CLUSTER_UI="${KUBE_ENABLE_CLUSTER_UI:-true}"
{% endhighlight %}
```
After all the above variables have been set, just type the following command.
{% highlight console %}
```shell
$ cd cluster/ubuntu
$ KUBERNETES_PROVIDER=ubuntu ./deployAddons.sh
{% endhighlight %}
```
After some time, you can use `$ kubectl get pods --namespace=kube-system` to see that the DNS and UI pods are running in the cluster.
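The output should look roughly like this (a sketch; pod names, container counts, and ages are placeholders):

```shell
# A sketch of what a healthy addon deployment might report; the pod names
# and ages below are placeholders.
$ kubectl get pods --namespace=kube-system
NAME                READY     STATUS    RESTARTS   AGE
kube-dns-v8-xxxxx   4/4       Running   0          2m
kube-ui-v1-xxxxx    1/1       Running   0          2m
```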
@ -222,12 +222,12 @@ Please try:
3. You may find the following commands useful: the former brings down the cluster, while
the latter starts it again.
{% highlight console %}
```shell
$ KUBERNETES_PROVIDER=ubuntu ./kube-down.sh
$ KUBERNETES_PROVIDER=ubuntu ./kube-up.sh
{% endhighlight %}
```
4. You can also customize your own settings in `/etc/default/{component_name}`.
@ -237,16 +237,16 @@ the latter one could start it again.
If you already have a kubernetes cluster and want to upgrade it to a new version,
you can use the following command in the cluster/ directory to update the whole cluster or a specified node.
{% highlight console %}
```shell
$ KUBERNETES_PROVIDER=ubuntu ./kube-push.sh [-m|-n <node id>] <version>
{% endhighlight %}
```
It can be done for all components (the default), the master (`-m`), or a specified node (`-n`).
If the version is not specified, the script will try to use local binaries. You should ensure all the binaries are prepared in the path `cluster/ubuntu/binaries`.
{% highlight console %}
```shell
$ tree cluster/ubuntu/binaries
binaries/
@ -263,15 +263,15 @@ binaries/
├── kubelet
└── kube-proxy
{% endhighlight %}
```
Upgrading a single node is currently experimental. You can use the following command to get help.
{% highlight console %}
```shell
$ KUBERNETES_PROVIDER=ubuntu ./kube-push.sh -h
{% endhighlight %}
```
Some examples are as follows:

View File

@ -22,19 +22,19 @@ Running Kubernetes with Vagrant (and VirtualBox) is an easy way to run/test/deve
Setting up a cluster is as simple as running:
{% highlight sh %}
```shell
export KUBERNETES_PROVIDER=vagrant
curl -sS https://get.k8s.io | bash
{% endhighlight %}
```
Alternatively, you can download [Kubernetes release](https://github.com/kubernetes/kubernetes/releases) and extract the archive. To start your local cluster, open a shell and run:
{% highlight sh %}
```shell
cd kubernetes
export KUBERNETES_PROVIDER=vagrant
./cluster/kube-up.sh
{% endhighlight %}
```
The `KUBERNETES_PROVIDER` environment variable tells all of the various cluster management scripts which variant to use. If you forget to set this, the assumption is you are running on Google Compute Engine.
@ -44,27 +44,27 @@ Vagrant will provision each machine in the cluster with all the necessary compon
If you installed more than one Vagrant provider, Kubernetes will usually pick the appropriate one. However, you can override which one Kubernetes will use by setting the [`VAGRANT_DEFAULT_PROVIDER`](https://docs.vagrantup.com/v2/providers/default) environment variable:
{% highlight sh %}
```shell
export VAGRANT_DEFAULT_PROVIDER=parallels
export KUBERNETES_PROVIDER=vagrant
./cluster/kube-up.sh
{% endhighlight %}
```
By default, each VM in the cluster is running Fedora.
To access the master or any node:
{% highlight sh %}
```shell
vagrant ssh master
vagrant ssh minion-1
{% endhighlight %}
```
If you are running more than one node, you can access the others by:
{% highlight sh %}
```shell
vagrant ssh minion-2
vagrant ssh minion-3
{% endhighlight %}
```
Each node in the cluster installs the docker daemon and the kubelet.
@ -72,7 +72,7 @@ The master node instantiates the Kubernetes master components as pods on the mac
To view the service status and/or logs on the kubernetes-master:
{% highlight console %}
```shell
[vagrant@kubernetes-master ~] $ vagrant ssh master
[vagrant@kubernetes-master ~] $ sudo su
@ -85,11 +85,11 @@ To view the service status and/or logs on the kubernetes-master:
[root@kubernetes-master ~] $ tail -f /var/log/kube-apiserver.log
[root@kubernetes-master ~] $ tail -f /var/log/kube-controller-manager.log
[root@kubernetes-master ~] $ tail -f /var/log/kube-scheduler.log
{% endhighlight %}
```
To view the services on any of the nodes:
{% highlight console %}
```shell
[vagrant@kubernetes-master ~] $ vagrant ssh minion-1
[vagrant@kubernetes-master ~] $ sudo su
@ -98,7 +98,7 @@ To view the services on any of the nodes:
[root@kubernetes-master ~] $ systemctl status docker
[root@kubernetes-master ~] $ journalctl -ru docker
{% endhighlight %}
```
### Interacting with your Kubernetes cluster with Vagrant.
@ -106,78 +106,78 @@ With your Kubernetes cluster up, you can manage the nodes in your cluster with t
To push updates to new Kubernetes code after making source changes:
{% highlight sh %}
```shell
./cluster/kube-push.sh
{% endhighlight %}
```
To stop and then restart the cluster:
{% highlight sh %}
```shell
vagrant halt
./cluster/kube-up.sh
{% endhighlight %}
```
To destroy the cluster:
{% highlight sh %}
```shell
vagrant destroy
{% endhighlight %}
```
Once your Vagrant machines are up and provisioned, the first thing to do is to check that you can use the `kubectl.sh` script.
You may need to build the binaries first; you can do this with `make`.
{% highlight console %}
```shell
$ ./cluster/kubectl.sh get nodes
NAME LABELS
10.245.1.4 <none>
10.245.1.5 <none>
10.245.1.3 <none>
{% endhighlight %}
```
### Authenticating with your master
When using the vagrant provider in Kubernetes, the `cluster/kubectl.sh` script will cache your credentials in a `~/.kubernetes_vagrant_auth` file so you will not be prompted for them in the future.
{% highlight sh %}
```shell
cat ~/.kubernetes_vagrant_auth
{% endhighlight %}
```
{% highlight json %}
```json
{ "User": "vagrant",
"Password": "vagrant",
"CAFile": "/home/k8s_user/.kubernetes.vagrant.ca.crt",
"CertFile": "/home/k8s_user/.kubecfg.vagrant.crt",
"KeyFile": "/home/k8s_user/.kubecfg.vagrant.key"
}
{% endhighlight %}
```
You should now be set to use the `cluster/kubectl.sh` script. For example, try to list the nodes that you have started with:
{% highlight sh %}
```shell
./cluster/kubectl.sh get nodes
{% endhighlight %}
```
### Running containers
Your cluster is running; you can list the nodes in your cluster:
{% highlight console %}
```shell
$ ./cluster/kubectl.sh get nodes
NAME LABELS
10.245.2.4 <none>
10.245.2.3 <none>
10.245.2.2 <none>
{% endhighlight %}
```
Now start running some containers!
You can now use any of the `cluster/kube-*.sh` commands to interact with your VMs.
Before you start a container, there will be no pods, services, or replication controllers.
{% highlight console %}
```shell
$ ./cluster/kubectl.sh get pods
NAME READY STATUS RESTARTS AGE
@ -186,38 +186,38 @@ NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR
$ ./cluster/kubectl.sh get replicationcontrollers
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
{% endhighlight %}
```
Start a container running nginx with a replication controller and three replicas
{% highlight console %}
```shell
$ ./cluster/kubectl.sh run my-nginx --image=nginx --replicas=3 --port=80
{% endhighlight %}
```
When listing the pods, you will see that three containers have been started and are in Waiting state:
{% highlight console %}
```shell
$ ./cluster/kubectl.sh get pods
NAME READY STATUS RESTARTS AGE
my-nginx-5kq0g 0/1 Pending 0 10s
my-nginx-gr3hh 0/1 Pending 0 10s
my-nginx-xql4j 0/1 Pending 0 10s
{% endhighlight %}
```
You need to wait for the provisioning to complete; you can monitor the nodes by running:
{% highlight console %}
```shell
$ vagrant ssh minion-1 -c 'sudo docker images'
kubernetes-minion-1:
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
<none> <none> 96864a7d2df3 26 hours ago 204.4 MB
google/cadvisor latest e0575e677c50 13 days ago 12.64 MB
kubernetes/pause latest 6c4579af347b 8 weeks ago 239.8 kB
{% endhighlight %}
```
Once the docker image for nginx has been downloaded, the container will start and you can list it:
{% highlight console %}
```shell
$ vagrant ssh minion-1 -c 'sudo docker ps'
kubernetes-minion-1:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
@ -225,11 +225,11 @@ kubernetes-minion-1:
fa0e29c94501 kubernetes/pause:latest "/pause" 8 minutes ago Up 8 minutes 0.0.0.0:8080->80/tcp k8s--net.a90e7ce4--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1.etcd--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1--baf5b21b
aa2ee3ed844a google/cadvisor:latest "/usr/bin/cadvisor" 38 minutes ago Up 38 minutes k8s--cadvisor.9e90d182--cadvisor_-_agent.file--4626b3a2
65a3a926f357 kubernetes/pause:latest "/pause" 39 minutes ago Up 39 minutes 0.0.0.0:4194->8080/tcp k8s--net.c5ba7f0e--cadvisor_-_agent.file--342fd561
{% endhighlight %}
```
Going back to listing the pods, services and replicationcontrollers, you now have:
{% highlight console %}
```shell
$ ./cluster/kubectl.sh get pods
NAME READY STATUS RESTARTS AGE
my-nginx-5kq0g 1/1 Running 0 1m
@ -239,19 +239,19 @@ my-nginx-xql4j 1/1 Running 0 1m
$ ./cluster/kubectl.sh get services
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
my-nginx 10.0.0.1 <none> 80/TCP run=my-nginx 1h
{% endhighlight %}
```
We did not start any services, hence there are none listed. But we see three replicas displayed properly.
Check the [guestbook](../../examples/guestbook/README) application to learn how to create a service.
You can already play with scaling the replicas with:
{% highlight console %}
```shell
$ ./cluster/kubectl.sh scale rc my-nginx --replicas=2
$ ./cluster/kubectl.sh get pods
NAME READY STATUS RESTARTS AGE
my-nginx-5kq0g 1/1 Running 0 2m
my-nginx-gr3hh 1/1 Running 0 2m
{% endhighlight %}
```
Congratulations!
@ -261,33 +261,33 @@ Congratulations!
By default the Vagrantfile will download the box from S3. You can change this (and cache the box locally) by providing a name and an alternate URL when calling `kube-up.sh`
{% highlight sh %}
```shell
export KUBERNETES_BOX_NAME=choose_your_own_name_for_your_kuber_box
export KUBERNETES_BOX_URL=path_of_your_kuber_box
export KUBERNETES_PROVIDER=vagrant
./cluster/kube-up.sh
{% endhighlight %}
```
#### I just created the cluster, but I am getting authorization errors!
You probably have an incorrect ~/.kubernetes_vagrant_auth file for the cluster you are attempting to contact.
{% highlight sh %}
```shell
rm ~/.kubernetes_vagrant_auth
{% endhighlight %}
```
After using kubectl.sh make sure that the correct credentials are set:
{% highlight sh %}
```shell
cat ~/.kubernetes_vagrant_auth
{% endhighlight %}
```
{% highlight json %}
```json
{
"User": "vagrant",
"Password": "vagrant"
}
{% endhighlight %}
```
#### I just created the cluster, but I do not see my container running!
@ -305,25 +305,25 @@ Log on to one of the nodes (`vagrant ssh minion-1`) and inspect the salt minion
You can control the number of nodes that are instantiated via the environment variable `NUM_MINIONS` on your host machine. If you plan to work with replicas, we strongly encourage you to work with enough nodes to satisfy your largest intended replica size. If you do not plan to work with replicas, you can save some system resources by running with a single node. You do this by setting `NUM_MINIONS` to 1, like so:
{% highlight sh %}
```shell
export NUM_MINIONS=1
{% endhighlight %}
```
#### I want my VMs to have more memory!
You can control the memory allotted to virtual machines with the `KUBERNETES_MEMORY` environment variable.
Just set it to the number of megabytes you would like the machines to have. For example:
{% highlight sh %}
```shell
export KUBERNETES_MEMORY=2048
{% endhighlight %}
```
If you need more granular control, you can set the amount of memory for the master and nodes independently. For example:
{% highlight sh %}
```shell
export KUBERNETES_MASTER_MEMORY=1536
export KUBERNETES_MINION_MEMORY=2048
{% endhighlight %}
```
#### I ran vagrant suspend and nothing works!
@ -333,6 +333,6 @@ export KUBERNETES_MINION_MEMORY=2048
You can ensure that vagrant uses nfs to sync folders with virtual machines by setting the KUBERNETES_VAGRANT_USE_NFS environment variable to 'true'. nfs is faster than virtualbox or vmware's 'shared folders' and does not require guest additions. See the [vagrant docs](http://docs.vagrantup.com/v2/synced-folders/nfs) for details on configuring nfs on the host. This setting will have no effect on the libvirt provider, which uses nfs by default. For example:
{% highlight sh %}
```shell
export KUBERNETES_VAGRANT_USE_NFS=true
{% endhighlight %}
```

View File

@ -16,17 +16,17 @@ convenient).
2. You must have Go (version 1.2 or later) installed: [www.golang.org](http://www.golang.org).
3. You must have your `GOPATH` set up and include `$GOPATH/bin` in your `PATH`.
{% highlight sh %}
```shell
export GOPATH=$HOME/src/go
mkdir -p $GOPATH
export PATH=$PATH:$GOPATH/bin
{% endhighlight %}
```
4. Install the govc tool to interact with ESXi/vCenter:
{% highlight sh %}
```shell
go get github.com/vmware/govmomi/govc
{% endhighlight %}
```
5. Get or build a [binary release](binary_release)
@ -34,28 +34,28 @@ convenient).
Download a prebuilt Debian 7.7 VMDK that we'll use as a base image:
{% highlight sh %}
```shell
curl --remote-name-all https://storage.googleapis.com/govmomi/vmdk/2014-11-11/kube.vmdk.gz{,.md5}
md5sum -c kube.vmdk.gz.md5
gzip -d kube.vmdk.gz
{% endhighlight %}
```
Import this VMDK into your vSphere datastore:
{% highlight sh %}
```shell
export GOVC_URL='user:pass@hostname'
export GOVC_INSECURE=1 # If the host above uses a self-signed cert
export GOVC_DATASTORE='target datastore'
export GOVC_RESOURCE_POOL='resource pool or cluster with access to datastore'
govc import.vmdk kube.vmdk ./kube/
{% endhighlight %}
```
Verify that the VMDK was correctly uploaded and expanded to ~3GiB:
{% highlight sh %}
```shell
govc datastore.ls ./kube/
{% endhighlight %}
```
Take a look at the file `cluster/vsphere/config-common.sh` and fill in the required
parameters. The guest login for the image that you imported is `kube:kube`.
@ -65,11 +65,11 @@ parameters. The guest login for the image that you imported is `kube:kube`.
Now, let's continue with deploying Kubernetes.
This process takes about 10 minutes.
{% highlight sh %}
```shell
cd kubernetes # Extracted binary release OR repository root
export KUBERNETES_PROVIDER=vsphere
cluster/kube-up.sh
{% endhighlight %}
```
Refer to the top level README and the getting started guide for Google Compute
Engine. Once you have successfully reached this point, your vSphere Kubernetes

View File

@ -20,9 +20,9 @@ or someone else setup the cluster and provided you with credentials and a locati
Check the location and credentials that kubectl knows about with this command:
{% highlight console %}
```shell
$ kubectl config view
{% endhighlight %}
```
Many of the [examples](../../examples/) provide an introduction to using
kubectl and complete documentation is found in the [kubectl manual](kubectl/kubectl).
@ -49,29 +49,29 @@ The following command runs kubectl in a mode where it acts as a reverse proxy.
locating the apiserver and authenticating.
Run it like this:
{% highlight console %}
```shell
$ kubectl proxy --port=8080 &
{% endhighlight %}
```
See [kubectl proxy](kubectl/kubectl_proxy) for more details.
Then you can explore the API with curl, wget, or a browser, like so:
{% highlight console %}
```shell
$ curl http://localhost:8080/api/
{
"versions": [
"v1"
]
}
{% endhighlight %}
```
#### Without kubectl proxy
It is also possible to avoid using kubectl proxy by passing an authentication token
directly to the apiserver, like this:
{% highlight console %}
```shell
$ APISERVER=$(kubectl config view | grep server | cut -f 2- -d ":" | tr -d " ")
$ TOKEN=$(kubectl config view | grep token | cut -f 2 -d ":" | tr -d " ")
$ curl $APISERVER/api --header "Authorization: Bearer $TOKEN" --insecure
@ -80,7 +80,7 @@ $ curl $APISERVER/api --header "Authorization: Bearer $TOKEN" --insecure
"v1"
]
}
{% endhighlight %}
```
The above example uses the `--insecure` flag. This leaves it subject to MITM
attacks. When kubectl accesses the cluster it uses a stored root certificate
@ -173,7 +173,7 @@ You have several options for connecting to nodes, pods and services from outside
Typically, there are several services started on a cluster in the kube-system namespace. Get a list of these
with the `kubectl cluster-info` command:
{% highlight console %}
```shell
$ kubectl cluster-info
Kubernetes master is running at https://104.197.5.247
@ -182,7 +182,7 @@ $ kubectl cluster-info
kube-dns is running at https://104.197.5.247/api/v1/proxy/namespaces/kube-system/services/kube-dns
grafana is running at https://104.197.5.247/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana
heapster is running at https://104.197.5.247/api/v1/proxy/namespaces/kube-system/services/monitoring-heapster
{% endhighlight %}
```
This shows the proxy-verb URL for accessing each service.
For example, this cluster has cluster-level logging enabled (using Elasticsearch), which can be reached
@ -202,7 +202,7 @@ about namespaces? 'proxy' verb? -->
* To access the Elasticsearch service endpoint `_search?q=user:kimchy`, you would use: `http://104.197.5.247/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging/_search?q=user:kimchy`
* To access the Elasticsearch cluster health information `_cluster/health?pretty=true`, you would use: `https://104.197.5.247/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging/_cluster/health?pretty=true`
{% highlight json %}
```json
{
"cluster_name" : "kubernetes_logging",
"status" : "yellow",
@ -215,7 +215,7 @@ about namespaces? 'proxy' verb? -->
"initializing_shards" : 0,
"unassigned_shards" : 5
}
{% endhighlight %}
```
#### Using web browsers to access services running on the cluster

View File

@ -7,14 +7,14 @@ It is also useful to be able to attach arbitrary non-identifying metadata, for r
Like labels, annotations are key-value maps.
{% highlight json %}
```json
"annotations": {
"key1" : "value1",
"key2" : "value2"
}
{% endhighlight %}
```
Possible information that could be recorded in annotations:

View File

@ -23,11 +23,11 @@ your Service?
The first step in debugging a Pod is taking a look at it. Check the current state of the Pod and recent events with the following command:
{% highlight console %}
```shell
$ kubectl describe pods ${POD_NAME}
{% endhighlight %}
```
Look at the state of the containers in the pod. Are they all `Running`? Have there been recent restarts?
@ -61,37 +61,37 @@ Again, the information from `kubectl describe ...` should be informative. The m
First, take a look at the logs of
the current container:
{% highlight console %}
```shell
$ kubectl logs ${POD_NAME} ${CONTAINER_NAME}
{% endhighlight %}
```
If your container has previously crashed, you can access the previous container's crash log with:
{% highlight console %}
```shell
$ kubectl logs --previous ${POD_NAME} ${CONTAINER_NAME}
{% endhighlight %}
```
Alternately, you can run commands inside that container with `exec`:
{% highlight console %}
```shell
$ kubectl exec ${POD_NAME} -c ${CONTAINER_NAME} -- ${CMD} ${ARG1} ${ARG2} ... ${ARGN}
{% endhighlight %}
```
Note that `-c ${CONTAINER_NAME}` is optional and can be omitted for Pods that only contain a single container.
As an example, to look at the logs from a running Cassandra pod, you might run
{% highlight console %}
```shell
$ kubectl exec cassandra -- cat /var/log/cassandra/system.log
{% endhighlight %}
```
If none of these approaches work, you can find the host machine that the pod is running on and SSH into that host,
@ -147,11 +147,11 @@ First, verify that there are endpoints for the service. For every Service object
You can view this resource with:
{% highlight console %}
```shell
$ kubectl get endpoints ${SERVICE_NAME}
{% endhighlight %}
```
Make sure that the endpoints match up with the number of containers that you expect to be a member of your service.
For example, if your Service is for an nginx container with 3 replicas, you would expect to see three different
@ -162,7 +162,7 @@ IP addresses in the Service's endpoints.
If you are missing endpoints, try listing pods using the labels that Service uses. Imagine that you have
a Service where the labels are:
{% highlight yaml %}
```yaml
...
spec:
@ -170,15 +170,15 @@ spec:
name: nginx
type: frontend
{% endhighlight %}
```
You can use:
{% highlight console %}
```shell
$ kubectl get pods --selector=name=nginx,type=frontend
{% endhighlight %}
```
to list pods that match this selector. Verify that the list matches the Pods that you expect to provide your Service.

View File

@ -41,7 +41,7 @@ The following pod has two containers. Each has a request of 0.25 core of cpu an
be said to have a request of 0.5 core and 128 MiB of memory and a limit of 1 core and 256MiB of
memory.
{% highlight yaml %}
```yaml
apiVersion: v1
kind: Pod
@ -68,7 +68,7 @@ spec:
memory: "128Mi"
cpu: "500m"
{% endhighlight %}
```
## How Pods with Resource Requests are Scheduled
@ -122,7 +122,7 @@ If the scheduler cannot find any node where a pod can fit, then the pod will rem
until a place can be found. An event will be produced each time the scheduler fails to find a
place for the pod, like this:
{% highlight console %}
```shell
$ kubectl describe pod frontend | grep -A 3 Events
Events:
@ -130,7 +130,7 @@ Events:
36s 5s 6 {scheduler } FailedScheduling Failed for reason PodExceedsFreeCPU and possibly others
{% endhighlight %}
```
In the case shown above, the pod "frontend" fails to be scheduled due to insufficient
CPU resource on the node. Similar error messages can also suggest failure due to insufficient
@ -144,7 +144,7 @@ have a capacity of `cpu: 1`, then a pod with a limit of `cpu: 1.1` will never be
You can check node capacities and amounts allocated with the `kubectl describe nodes` command.
For example:
{% highlight console %}
```shell
$ kubectl describe nodes gke-cluster-4-386701dd-node-ww4p
Name: gke-cluster-4-386701dd-node-ww4p
@ -168,7 +168,7 @@ TotalResourceLimits:
Memory(bytes): 2485125120 (59% of total)
[ ... lines removed for clarity ...]
{% endhighlight %}
```
Here you can see from the `Allocated resources` section that a pod which asks for more than
90 millicpus or more than 1341MiB of memory will not be able to fit on this node.
@ -184,7 +184,7 @@ with namespaces, it can prevent one team from hogging all the resources.
Your container may be terminated because it's resource-starved. To check if a container is being killed because it is hitting a resource limit, call `kubectl describe pod`
on the pod you are interested in:
{% highlight console %}
```shell
[12:54:41] $ ./cluster/kubectl.sh describe pod simmemleak-hra99
Name: simmemleak-hra99
@ -222,19 +222,19 @@ Events:
Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-minion-tf0f} implicitly required container POD started Started with docker id 6a41280f516d
Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-minion-tf0f} spec.containers{simmemleak} created Created with docker id 87348f12526a
{% endhighlight %}
```
The `Restart Count: 5` indicates that the `simmemleak` container in this pod was terminated and restarted 5 times.
You can call `get pod` with the `-o go-template=...` option to fetch the status of previously terminated containers:
{% highlight console %}
```shell
[13:59:01] $ ./cluster/kubectl.sh get pod -o go-template='{{range.status.containerStatuses}}{{"Container Name: "}}{{.name}}{{"\r\nLastState: "}}{{.lastState}}{{end}}' simmemleak-60xbc
Container Name: simmemleak
LastState: map[terminated:map[exitCode:137 reason:OOM Killed startedAt:2015-07-07T20:58:43Z finishedAt:2015-07-07T20:58:43Z containerID:docker://0e4095bba1feccdfe7ef9fb6ebffe972b4b14285d5acdec6f0d3ae8a22fad8b2]][13:59:03] clusterScaleDoc ~/go/src/github.com/kubernetes/kubernetes $
{% endhighlight %}
```
We can see that this container was terminated because `reason:OOM Killed`, where *OOM* stands for Out Of Memory.

View File

@ -13,7 +13,7 @@ In the declarative style, all configuration is stored in YAML or JSON configurat
Kubernetes executes containers in [*Pods*](pods). A pod containing a simple Hello World container can be specified in YAML as follows:
{% highlight yaml %}
```yaml
apiVersion: v1
kind: Pod
@ -26,7 +26,7 @@ spec: # specification of the pod's contents
image: "ubuntu:14.04"
command: ["/bin/echo","hello","world"]
{% endhighlight %}
```
The value of `metadata.name`, `hello-world`, will be the name of the pod resource created, and must be unique within the cluster, whereas `containers[0].name` is just a nickname for the container within that pod. `image` is the name of the Docker image, which Kubernetes expects to be able to pull from a registry, the [Docker Hub](https://registry.hub.docker.com/) by default.
@ -34,21 +34,21 @@ The value of `metadata.name`, `hello-world`, will be the name of the pod resourc
The [`command`](containers.html#containers-and-commands) overrides the Docker container's `Entrypoint`. Command arguments (corresponding to Docker's `Cmd`) may be specified using `args`, as follows:
{% highlight yaml %}
```yaml
command: ["/bin/echo"]
args: ["hello","world"]
{% endhighlight %}
```
This pod can be created using the `create` command:
{% highlight console %}
```shell
$ kubectl create -f ./hello-world.yaml
pods/hello-world
{% endhighlight %}
```
`kubectl` prints the resource type and name of the resource created when successful.
@ -56,21 +56,21 @@ pods/hello-world
If you're not sure you specified the resource correctly, you can ask `kubectl` to validate it for you:
{% highlight console %}
```shell
$ kubectl create -f ./hello-world.yaml --validate
{% endhighlight %}
```
Let's say you specified `entrypoint` instead of `command`. You'd see output as follows:
{% highlight console %}
```shell
I0709 06:33:05.600829 14160 schema.go:126] unknown field: entrypoint
I0709 06:33:05.600988 14160 schema.go:129] this may be a false alarm, see http://issue.k8s.io/6842
pods/hello-world
{% endhighlight %}
```
`kubectl create --validate` currently warns about problems it detects, but creates the resource anyway, unless a required field is absent or a field value is invalid. Unknown API fields are ignored, so be careful. This pod was created, but with no `command`, which is an optional field, since the image may specify an `Entrypoint`.
View the [Pod API
@ -81,7 +81,7 @@ to see the list of valid fields.
Kubernetes [does not automatically run commands in a shell](https://github.com/kubernetes/kubernetes/wiki/User-FAQ#use-of-environment-variables-on-the-command-line) (not all images contain shells). If you would like to run your command in a shell, such as to expand environment variables (specified using `env`), you could do the following:
{% highlight yaml %}
```yaml
apiVersion: v1
kind: Pod
@ -98,16 +98,16 @@ spec: # specification of the pod's contents
command: ["/bin/sh","-c"]
args: ["/bin/echo \"${MESSAGE}\""]
{% endhighlight %}
```
However, a shell isn't necessary just to expand environment variables. Kubernetes will do it for you if you use [`$(ENVVAR)` syntax](/{{page.version}}/docs/design/expansion):
{% highlight yaml %}
```yaml
command: ["/bin/echo"]
args: ["$(MESSAGE)"]
{% endhighlight %}
```
## Viewing pod status
@ -115,70 +115,70 @@ You can see the pod you created (actually all of your cluster's pods) using the
If you're quick, it will look as follows:
{% highlight console %}
```shell
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
hello-world 0/1 Pending 0 0s
{% endhighlight %}
```
Initially, a newly created pod is unscheduled -- no node has been selected to run it. Scheduling happens after creation, but is fast, so you normally shouldn't see pods in an unscheduled state unless there's a problem.
After the pod has been scheduled, the image may need to be pulled to the node on which it was scheduled, if it hadn't been pulled already. After a few seconds, you should see the container running:
{% highlight console %}
```shell
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
hello-world 1/1 Running 0 5s
{% endhighlight %}
```
The `READY` column shows how many containers in the pod are running.
Almost immediately after it starts running, this command will terminate. `kubectl` shows that the container is no longer running and displays the exit status:
{% highlight console %}
```shell
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
hello-world 0/1 ExitCode:0 0 15s
{% endhighlight %}
```
## Viewing pod output
You probably want to see the output of the command you ran. As with [`docker logs`](https://docs.docker.com/userguide/usingdocker/), `kubectl logs` will show you the output:
{% highlight console %}
```shell
$ kubectl logs hello-world
hello world
{% endhighlight %}
```
## Deleting pods
When you're done looking at the output, you should delete the pod:
{% highlight console %}
```shell
$ kubectl delete pod hello-world
pods/hello-world
{% endhighlight %}
```
As with `create`, `kubectl` prints the resource type and name of the resource deleted when successful.
You can also use the resource/name format to specify the pod:
{% highlight console %}
```shell
$ kubectl delete pods/hello-world
pods/hello-world
{% endhighlight %}
```
Terminated pods aren't currently deleted automatically, so that you can observe their final status; be sure to clean up your dead pods.

View File

@ -17,7 +17,7 @@ This guide uses a simple nginx server to demonstrate proof of concept. The same
We did this in a previous example, but let's do it once again and focus on the networking perspective. Create an nginx pod, and note that it has a container port specification:
{% highlight yaml %}
```yaml
$ cat nginxrc.yaml
apiVersion: v1
@ -37,28 +37,28 @@ spec:
ports:
- containerPort: 80
{% endhighlight %}
```
This makes it accessible from any node in your cluster. Check the nodes the pod is running on:
{% highlight console %}
```shell
$ kubectl create -f ./nginxrc.yaml
$ kubectl get pods -l app=nginx -o wide
my-nginx-6isf4 1/1 Running 0 2h e2e-test-beeps-minion-93ly
my-nginx-t26zt 1/1 Running 0 2h e2e-test-beeps-minion-93ly
{% endhighlight %}
```
Check your pods' IPs:
{% highlight console %}
```shell
$ kubectl get pods -l app=nginx -o json | grep podIP
"podIP": "10.245.0.15",
"podIP": "10.245.0.14",
{% endhighlight %}
```
You should be able to ssh into any node in your cluster and curl both IPs. Note that the containers are *not* using port 80 on the node, nor are there any special NAT rules to route traffic to the pod. This means you can run multiple nginx pods on the same node, all using the same containerPort, and access them from any other pod or node in your cluster by IP. Like Docker, ports can still be published to the host node's interface(s), but the need for this is radically diminished because of the networking model.
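For example, a quick check from one of your nodes might look like this (a sketch using the pod IPs printed above; `<node-name>` is a placeholder for one of your own nodes):

```shell
# A sketch: ssh to any node and curl the pod IPs reported above directly.
$ ssh <node-name>
node$ curl http://10.245.0.15:80
node$ curl http://10.245.0.14:80
```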
@ -72,7 +72,7 @@ A Kubernetes Service is an abstraction which defines a logical set of Pods runni
You can create a Service for your 2 nginx replicas with the following yaml:
{% highlight yaml %}
```yaml
$ cat nginxsvc.yaml
apiVersion: v1
@ -88,23 +88,23 @@ spec:
selector:
app: nginx
{% endhighlight %}
```
This specification will create a Service which targets TCP port 80 on any Pod with the `app=nginx` label, and expose it on an abstracted Service port (`targetPort`: is the port the container accepts traffic on, `port`: is the abstracted Service port, which can be any port other pods use to access the Service). View [service API object](http://kubernetes.io/v1.1/docs/api-reference/v1/definitions.html#_v1_service) to see the list of supported fields in service definition.
Check your Service:
{% highlight console %}
```shell
$ kubectl get svc
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
kubernetes 10.179.240.1 <none> 443/TCP <none> 8d
nginxsvc 10.179.252.126 122.222.183.144 80/TCP,81/TCP,82/TCP run=nginx2 11m
{% endhighlight %}
```
As mentioned previously, a Service is backed by a group of pods. These pods are exposed through `endpoints`. The Service's selector will be evaluated continuously and the results will be POSTed to an Endpoints object also named `nginxsvc`. When a pod dies, it is automatically removed from the endpoints, and new pods matching the Service's selector will automatically get added to the endpoints. Check the endpoints, and note that the IPs are the same as the pods created in the first step:
{% highlight console %}
```shell
$ kubectl describe svc nginxsvc
Name: nginxsvc
@ -122,7 +122,7 @@ $ kubectl get ep
NAME ENDPOINTS
nginxsvc 10.245.0.14:80,10.245.0.15:80
{% endhighlight %}
```
You should now be able to curl the nginx Service on `10.0.116.146:80` from any node in your cluster. Note that the Service IP is completely virtual and never hits the wire; if you're curious about how this works, you can read more about the [service proxy](services.html#virtual-ips-and-service-proxies).
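For example, a check from a node might look like this (a sketch using the Service IP mentioned above):

```shell
# A sketch: the virtual Service IP answers from any node even though it is
# never bound to an interface; kube-proxy forwards the traffic to a pod.
node$ curl http://10.0.116.146:80
...
<h1>Welcome to nginx!</h1>
```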
@ -134,17 +134,17 @@ Kubernetes supports 2 primary modes of finding a Service - environment variables
When a Pod is run on a Node, the kubelet adds a set of environment variables for each active Service. This introduces an ordering problem. To see why, inspect the environment of your running nginx pods:
{% highlight console %}
```shell
$ kubectl exec my-nginx-6isf4 -- printenv | grep SERVICE
KUBERNETES_SERVICE_HOST=10.0.0.1
KUBERNETES_SERVICE_PORT=443
{% endhighlight %}
```
Note there's no mention of your Service. This is because you created the replicas before the Service. Another disadvantage of doing this is that the scheduler might put both pods on the same machine, which will take your entire Service down if that machine dies. We can do this the right way by killing the 2 pods and waiting for the replication controller to recreate them. This time around, the Service exists *before* the replicas. This gives you scheduler-level Service spreading of your pods (provided all your nodes have equal capacity), as well as the right environment variables:
{% highlight console %}
```shell
$ kubectl scale rc my-nginx --replicas=0; kubectl scale rc my-nginx --replicas=2;
$ kubectl get pods -l app=nginx -o wide
@ -158,23 +158,23 @@ NGINXSVC_SERVICE_HOST=10.0.116.146
KUBERNETES_SERVICE_HOST=10.0.0.1
NGINXSVC_SERVICE_PORT=80
{% endhighlight %}
```
### DNS
Kubernetes offers a DNS cluster addon Service that uses skydns to automatically assign dns names to other Services. You can check if it's running on your cluster:
{% highlight console %}
```shell
$ kubectl get services kube-dns --namespace=kube-system
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
kube-dns 10.179.240.10 <none> 53/UDP,53/TCP k8s-app=kube-dns 8d
{% endhighlight %}
```
If it isn't running, you can [enable it](http://releases.k8s.io/release-1.1/cluster/addons/dns/README.md#how-do-i-configure-it). The rest of this section will assume you have a Service with a long lived IP (nginxsvc), and a dns server that has assigned a name to that IP (the kube-dns cluster addon), so you can talk to the Service from any pod in your cluster using standard methods (e.g. gethostbyname). Let's create another pod to test this:
{% highlight yaml %}
```yaml
$ cat curlpod.yaml
apiVersion: v1
@ -191,11 +191,11 @@ spec:
name: curlcontainer
restartPolicy: Always
{% endhighlight %}
```
And perform a lookup of the nginx Service
{% highlight console %}
```shell
$ kubectl create -f ./curlpod.yaml
default/curlpod
@ -209,7 +209,7 @@ Address 1: 10.0.0.10
Name: nginxsvc
Address 1: 10.0.116.146
{% endhighlight %}
```
## Securing the Service
@ -220,7 +220,7 @@ Till now we have only accessed the nginx server from within the cluster. Before
You can acquire all these from the [nginx https example](../../examples/https-nginx/README), in short:
{% highlight console %}
```shell
$ make keys secret KEY=/tmp/nginx.key CERT=/tmp/nginx.crt SECRET=/tmp/secret.json
$ kubectl create -f /tmp/secret.json
@ -230,11 +230,11 @@ NAME TYPE DATA
default-token-il9rc kubernetes.io/service-account-token 1
nginxsecret Opaque 2
{% endhighlight %}
```
Now modify your nginx replicas to start an https server using the certificate in the secret, and modify the Service to expose both ports (80 and 443):
{% highlight yaml %}
```yaml
$ cat nginx-app.yaml
apiVersion: v1
@ -281,14 +281,14 @@ spec:
- mountPath: /etc/nginx/ssl
name: secret-volume
{% endhighlight %}
```
Noteworthy points about the nginx-app manifest:
- It contains both the rc and service specifications in the same file
- The [nginx server](../../examples/https-nginx/default.conf) serves http traffic on port 80 and https traffic on 443, and the nginx Service exposes both ports.
- Each container has access to the keys through a volume mounted at /etc/nginx/ssl. This is set up *before* the nginx server is started.
{% highlight console %}
```shell
$ kubectl delete rc,svc -l app=nginx; kubectl create -f ./nginx-app.yaml
replicationcontrollers/my-nginx
@ -296,11 +296,11 @@ services/nginxsvc
services/nginxsvc
replicationcontrollers/my-nginx
{% endhighlight %}
```
At this point you can reach the nginx server from any node.
{% highlight console %}
```shell
$ kubectl get pods -o json | grep -i podip
"podIP": "10.1.0.80",
@ -308,13 +308,13 @@ node $ curl -k https://10.1.0.80
...
<h1>Welcome to nginx!</h1>
{% endhighlight %}
```
Note how we supplied the `-k` parameter to curl in the last step; this is because we don't know anything about the pods running nginx at certificate generation time,
so we have to tell curl to ignore the CName mismatch. By creating a Service, we linked the CName used in the certificate with the actual DNS name used by pods during Service lookup.
Let's test this from a pod (the same secret is being reused for simplicity; the pod only needs nginx.crt to access the Service):
{% highlight console %}
```shell
$ cat curlpod.yaml
apiVersion: v1
@ -354,13 +354,13 @@ $ kubectl exec curlpod -- curl https://nginxsvc --cacert /etc/nginx/ssl/nginx.cr
<title>Welcome to nginx!</title>
...
{% endhighlight %}
```
## Exposing the Service
For some parts of your applications you may want to expose a Service onto an external IP address. Kubernetes supports two ways of doing this: NodePorts and LoadBalancers. The Service created in the last section already used `NodePort`, so your nginx https replica is ready to serve traffic on the internet if your node has a public IP.
{% highlight console %}
```shell
$ kubectl get svc nginxsvc -o json | grep -i nodeport -C 5
{
@ -393,11 +393,11 @@ $ curl https://104.197.63.17:30645 -k
...
<h1>Welcome to nginx!</h1>
{% endhighlight %}
```
Let's now recreate the Service to use a cloud load balancer; just change the `Type` of the Service in nginx-app.yaml from `NodePort` to `LoadBalancer`:
{% highlight console %}
```shell
$ kubectl delete rc,svc -l app=nginx
$ kubectl create -f ./nginx-app.yaml
@ -409,7 +409,7 @@ $ curl https://162.22.184.144 -k
...
<title>Welcome to nginx!</title>
{% endhighlight %}
```
The IP address in the `EXTERNAL_IP` column is the one that is available on the public internet. The `CLUSTER_IP` is only available inside your
cluster/private cloud network.

View File

@ -5,44 +5,44 @@ kubectl port-forward forwards connections to a local port to a port on a pod. It
## Creating a Redis master
{% highlight console %}
```shell
$ kubectl create examples/redis/redis-master.yaml
pods/redis-master
{% endhighlight %}
```
wait until the Redis master pod is Running and Ready,
{% highlight console %}
```shell
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
redis-master 2/2 Running 0 41s
{% endhighlight %}
```
## Connecting to the Redis master
The Redis master is listening on port 6379. To verify this,
{% highlight console %}
```shell
$ kubectl get pods redis-master -t='{{(index (index .spec.containers 0).ports 0).containerPort}}{{"\n"}}'
6379
{% endhighlight %}
```
then we forward port 6379 on the local workstation to port 6379 of the redis-master pod,
{% highlight console %}
```shell
$ kubectl port-forward redis-master 6379:6379
I0710 14:43:38.274550 3655 portforward.go:225] Forwarding from 127.0.0.1:6379 -> 6379
I0710 14:43:38.274797 3655 portforward.go:225] Forwarding from [::1]:6379 -> 6379
{% endhighlight %}
```
To verify the connection is successful, we run a redis-cli on the local workstation,
{% highlight console %}
```shell
$ redis-cli
127.0.0.1:6379> ping
PONG
{% endhighlight %}
```
Now one can debug the database from the local workstation.
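For example, with the port-forward still running, any redis command issued locally reaches the pod (a sketch; the key and value are arbitrary):

```shell
# A sketch: set and read back a key through the forwarded port.
$ redis-cli set k8s hello
OK
$ redis-cli get k8s
"hello"
```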

View File

@ -8,10 +8,10 @@ You have seen the [basics](accessing-the-cluster) about `kubectl proxy` and `api
kube-ui is deployed as a cluster add-on. To find its apiserver proxy URL,
{% highlight console %}
```shell
$ kubectl cluster-info | grep "KubeUI"
KubeUI is running at https://173.255.119.104/api/v1/proxy/namespaces/kube-system/services/kube-ui
{% endhighlight %}
```
If this command does not find the URL, try the steps [here](ui.html#accessing-the-ui).
@ -20,9 +20,9 @@ if this command does not find the URL, try the steps [here](ui.html#accessing-th
The above proxy URL provides access to the kube-ui service via the apiserver. To access it, you still need to authenticate to the apiserver; `kubectl proxy` can handle the authentication.
{% highlight console %}
```shell
$ kubectl proxy --port=8001
Starting to serve on localhost:8001
{% endhighlight %}
```
Now you can access the kube-ui service on your local workstation at [http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kube-ui](http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kube-ui)
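You can also verify the proxied endpoint from the command line before opening it in a browser (a sketch; the path is the same proxy URL as above):

```shell
# A sketch: confirm the kube-ui service responds through the local proxy.
$ curl -s http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kube-ui/ | head
```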

View File

@ -33,12 +33,12 @@ Currently the list of all services that are running at the time when the contain
For a service named **foo** that maps to a container port named **bar**, the following variables are defined:
{% highlight sh %}
```shell
FOO_SERVICE_HOST=<the host the service is running on>
FOO_SERVICE_PORT=<the port the service is running on>
{% endhighlight %}
```
Services have a dedicated IP address and are also surfaced to the container via DNS (if the [DNS addon](http://releases.k8s.io/release-1.1/cluster/addons/dns/) is enabled). Of course, DNS is still not an enumerable protocol, so we will continue to provide environment variables so that containers can do discovery.
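As a sketch, for a hypothetical service named **foo**, the same Service can be found from inside a container either through the injected environment variables or, when the DNS addon is enabled, by name:

```shell
# A sketch for a hypothetical service "foo": both discovery mechanisms
# resolve to the same cluster IP.
$ echo $FOO_SERVICE_HOST:$FOO_SERVICE_PORT
$ nslookup foo
```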

View File

@ -18,30 +18,30 @@ clear what is expected, this document will use the following conventions.
If the command "COMMAND" is expected to run in a `Pod` and produce "OUTPUT":
{% highlight console %}
```shell
u@pod$ COMMAND
OUTPUT
{% endhighlight %}
```
If the command "COMMAND" is expected to run on a `Node` and produce "OUTPUT":
{% highlight console %}
```shell
u@node$ COMMAND
OUTPUT
{% endhighlight %}
```
If the command is "kubectl ARGS":
{% highlight console %}
```shell
$ kubectl ARGS
OUTPUT
{% endhighlight %}
```
## Running commands in a Pod
@ -49,7 +49,7 @@ For many steps here you will want to see what a `Pod` running in the cluster
sees. Kubernetes does not directly support interactive `Pod`s (yet), but you can
approximate it:
{% highlight console %}
```shell
$ cat <<EOF | kubectl create -f -
apiVersion: v1
@ -66,25 +66,25 @@ spec:
EOF
pods/busybox-sleep
{% endhighlight %}
```
Now, when you need to run a command (even an interactive shell) in a `Pod`-like
context, use:
{% highlight console %}
```shell
$ kubectl exec busybox-sleep -- <COMMAND>
{% endhighlight %}
```
or
{% highlight console %}
```shell
$ kubectl exec -ti busybox-sleep sh
/ #
{% endhighlight %}
```
## Setup
@ -92,7 +92,7 @@ For the purposes of this walk-through, let's run some `Pod`s. Since you're
probably debugging your own `Service` you can substitute your own details, or you
can follow along and get a second data point.
{% highlight console %}
```shell
$ kubectl run hostnames --image=gcr.io/google_containers/serve_hostname \
--labels=app=hostnames \
@ -101,12 +101,12 @@ $ kubectl run hostnames --image=gcr.io/google_containers/serve_hostname \
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
hostnames hostnames gcr.io/google_containers/serve_hostname app=hostnames 3
{% endhighlight %}
```
Note that this is the same as if you had started the `ReplicationController` with
the following YAML:
{% highlight yaml %}
```yaml
apiVersion: v1
kind: ReplicationController
@ -128,11 +128,11 @@ spec:
- containerPort: 9376
protocol: TCP
{% endhighlight %}
```
Confirm your `Pod`s are running:
{% highlight console %}
```shell
$ kubectl get pods -l app=hostnames
NAME READY STATUS RESTARTS AGE
@ -140,7 +140,7 @@ hostnames-0uton 1/1 Running 0 12s
hostnames-bvc05 1/1 Running 0 12s
hostnames-yp2kp 1/1 Running 0 12s
{% endhighlight %}
```
## Does the Service exist?
@ -152,53 +152,53 @@ So what would happen if I tried to access a non-existent `Service`? Assuming yo
have another `Pod` that consumes this `Service` by name you would get something
like:
{% highlight console %}
```shell
u@pod$ wget -qO- hostnames
wget: bad address 'hostname'
{% endhighlight %}
```
or:
{% highlight console %}
```shell
u@pod$ echo $HOSTNAMES_SERVICE_HOST
{% endhighlight %}
```
So the first thing to check is whether that `Service` actually exists:
{% highlight console %}
```shell
$ kubectl get svc hostnames
Error from server: service "hostnames" not found
{% endhighlight %}
```
So we have a culprit; let's create the `Service`. As before, this is for the
walk-through - you can use your own `Service`'s details here.
{% highlight console %}
```shell
$ kubectl expose rc hostnames --port=80 --target-port=9376
service "hostnames" exposed
{% endhighlight %}
```
And read it back, just to be sure:
{% highlight console %}
```shell
$ kubectl get svc hostnames
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
hostnames 10.0.0.1 <none> 80/TCP run=hostnames 1h
{% endhighlight %}
```
As before, this is the same as if you had started the `Service` with YAML:
{% highlight yaml %}
```yaml
apiVersion: v1
kind: Service
@ -213,7 +213,7 @@ spec:
port: 80
targetPort: 9376
{% endhighlight %}
```
Now you can confirm that the `Service` exists.
@ -221,7 +221,7 @@ Now you can confirm that the `Service` exists.
From a `Pod` in the same `Namespace`:
{% highlight console %}
```shell
u@pod$ nslookup hostnames
Server: 10.0.0.10
@ -230,12 +230,12 @@ Address: 10.0.0.10#53
Name: hostnames
Address: 10.0.1.175
{% endhighlight %}
```
If this fails, perhaps your `Pod` and `Service` are in different
`Namespace`s; try a namespace-qualified name:
{% highlight console %}
```shell
u@pod$ nslookup hostnames.default
Server: 10.0.0.10
@ -244,12 +244,12 @@ Address: 10.0.0.10#53
Name: hostnames.default
Address: 10.0.1.175
{% endhighlight %}
```
If this works, you'll need to ensure that `Pod`s and `Service`s run in the same
`Namespace`. If this still fails, try a fully-qualified name:
{% highlight console %}
```shell
u@pod$ nslookup hostnames.default.svc.cluster.local
Server: 10.0.0.10
@ -258,7 +258,7 @@ Address: 10.0.0.10#53
Name: hostnames.default.svc.cluster.local
Address: 10.0.1.175
{% endhighlight %}
```
Note the suffix here: "default.svc.cluster.local". The "default" is the
`Namespace` we're operating in. The "svc" denotes that this is a `Service`.
@ -267,7 +267,7 @@ The "cluster.local" is your cluster domain.
You can also try this from a `Node` in the cluster (note: 10.0.0.10 is my DNS
`Service`):
{% highlight console %}
```shell
u@node$ nslookup hostnames.default.svc.cluster.local 10.0.0.10
Server: 10.0.0.10
@ -276,7 +276,7 @@ Address: 10.0.0.10#53
Name: hostnames.default.svc.cluster.local
Address: 10.0.1.175
{% endhighlight %}
```
If you are able to do a fully-qualified name lookup but not a relative one, you
need to check that your `kubelet` is running with the right flags.
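As a sketch of what to check: the kubelet's DNS behavior is controlled by its `--cluster-dns` and `--cluster-domain` flags, which should point at your DNS `Service` IP and cluster domain (the values shown below are placeholders):

```shell
# A sketch: list the DNS-related flags the kubelet was started with.
u@node$ ps auxw | grep kubelet | grep -o -- '--cluster-[a-z-]*=[^ ]*'
--cluster-dns=10.0.0.10
--cluster-domain=cluster.local
```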
@ -291,7 +291,7 @@ If the above still fails - DNS lookups are not working for your `Service` - we
can take a step back and see what else is not working. The Kubernetes master
`Service` should always work:
{% highlight console %}
```shell
u@pod$ nslookup kubernetes.default
Server: 10.0.0.10
@ -300,7 +300,7 @@ Address 1: 10.0.0.10
Name: kubernetes
Address 1: 10.0.0.1
{% endhighlight %}
```
If this fails, you might need to go to the kube-proxy section of this doc, or
even go back to the top of this document and start over, but instead of
@ -311,7 +311,7 @@ debugging your own `Service`, debug DNS.
The next thing to test is whether your `Service` works at all. From a
`Node` in your cluster, access the `Service`'s IP (from `kubectl get` above).
{% highlight console %}
```shell
u@node$ curl 10.0.1.175:80
hostnames-0uton
@ -322,7 +322,7 @@ hostnames-yp2kp
u@node$ curl 10.0.1.175:80
hostnames-bvc05
{% endhighlight %}
```
If your `Service` is working, you should get correct responses. If not, there
are a number of things that could be going wrong. Read on.
@ -333,7 +333,7 @@ It might sound silly, but you should really double and triple check that your
`Service` is correct and matches your `Pods`. Read back your `Service` and
verify it:
{% highlight console %}
```shell
$ kubectl get service hostnames -o json
{
@ -372,7 +372,7 @@ $ kubectl get service hostnames -o json
}
}
{% endhighlight %}
```
Is the port you are trying to access in `spec.ports[]`? Is the `targetPort`
correct for your `Pod`s? If you meant it to be a numeric port, is it a number
@ -388,7 +388,7 @@ actually being selected by the `Service`.
Earlier we saw that the `Pod`s were running. We can re-check that:
{% highlight console %}
```shell
$ kubectl get pods -l app=hostnames
NAME READY STATUS RESTARTS AGE
@ -396,7 +396,7 @@ hostnames-0uton 1/1 Running 0 1h
hostnames-bvc05 1/1 Running 0 1h
hostnames-yp2kp 1/1 Running 0 1h
{% endhighlight %}
```
The "AGE" column says that these `Pod`s are about an hour old, which implies that
they are running fine and not crashing.
@ -405,13 +405,13 @@ The `-l app=hostnames` argument is a label selector - just like our `Service`
has. Inside the Kubernetes system is a control loop which evaluates the
selector of every `Service` and saves the results into an `Endpoints` object.
{% highlight console %}
```shell
$ kubectl get endpoints hostnames
NAME ENDPOINTS
hostnames 10.244.0.5:9376,10.244.0.6:9376,10.244.0.7:9376
{% endhighlight %}
```
This confirms that the control loop has found the correct `Pod`s for your
`Service`. If the `hostnames` row is blank, you should check that the
@ -424,7 +424,7 @@ At this point, we know that your `Service` exists and has selected your `Pod`s.
Let's check that the `Pod`s are actually working - we can bypass the `Service`
mechanism and go straight to the `Pod`s.
{% highlight console %}
```shell
u@pod$ wget -qO- 10.244.0.5:9376
hostnames-0uton
@ -435,7 +435,7 @@ hostnames-bvc05
u@pod$ wget -qO- 10.244.0.7:9376
hostnames-yp2kp
{% endhighlight %}
```
We expect each `Pod` in the `Endpoints` list to return its own hostname. If
this is not what happens (or whatever the correct behavior is for your own
@ -454,12 +454,12 @@ suspect. Let's confirm it, piece by piece.
Confirm that `kube-proxy` is running on your `Node`s. You should get something
like the below:
{% highlight console %}
```shell
u@node$ ps auxw | grep kube-proxy
root 4194 0.4 0.1 101864 17696 ? Sl Jul04 25:43 /usr/local/bin/kube-proxy --master=https://kubernetes-master --kubeconfig=/var/lib/kube-proxy/kubeconfig --v=2
{% endhighlight %}
```
Next, confirm that it is not failing something obvious, like contacting the
master. To do this, you'll have to look at the logs. Accessing the logs
@ -467,7 +467,7 @@ depends on your `Node` OS. On some OSes it is a file, such as
/var/log/kube-proxy.log, while other OSes use `journalctl` to access logs. You
should see something like:
{% highlight console %}
```shell
I0707 17:34:53.945651 30031 server.go:88] Running in resource-only container "/kube-proxy"
I0707 17:34:53.945921 30031 proxier.go:121] Setting proxy IP to 10.240.115.247 and initializing iptables
@ -488,7 +488,7 @@ I0707 17:34:54.903107 30031 proxysocket.go:130] Accepted TCP connection from 1
I0707 17:35:46.015868 30031 proxysocket.go:246] New UDP connection from 10.244.3.2:57493
I0707 17:35:46.017061 30031 proxysocket.go:246] New UDP connection from 10.244.3.2:55471
{% endhighlight %}
```
If you see error messages about not being able to contact the master, you
should double-check your `Node` configuration and installation steps.
@ -499,13 +499,13 @@ One of the main responsibilities of `kube-proxy` is to write the `iptables`
rules which implement `Service`s. Let's check that those rules are getting
written.
{% highlight console %}
```shell
u@node$ iptables-save | grep hostnames
-A KUBE-PORTALS-CONTAINER -d 10.0.1.175/32 -p tcp -m comment --comment "default/hostnames:default" -m tcp --dport 80 -j REDIRECT --to-ports 48577
-A KUBE-PORTALS-HOST -d 10.0.1.175/32 -p tcp -m comment --comment "default/hostnames:default" -m tcp --dport 80 -j DNAT --to-destination 10.240.115.247:48577
{% endhighlight %}
```
There should be 2 rules for each port on your `Service` (just one in this
example) - a "KUBE-PORTALS-CONTAINER" and a "KUBE-PORTALS-HOST". If you do
@ -516,32 +516,32 @@ then look at the logs again.
Assuming you do see the above rules, try again to access your `Service` by IP:
{% highlight console %}
```shell
u@node$ curl 10.0.1.175:80
hostnames-0uton
{% endhighlight %}
```
If this fails, we can try accessing the proxy directly. Look back at the
`iptables-save` output above, and extract the port number that `kube-proxy` is
using for your `Service`. In the above examples it is "48577". Now connect to
that:
{% highlight console %}
```shell
u@node$ curl localhost:48577
hostnames-yp2kp
{% endhighlight %}
```
If this still fails, look at the `kube-proxy` logs for specific lines like:
{% highlight console %}
```shell
Setting endpoints for default/hostnames:default to [10.244.0.5:9376 10.244.0.6:9376 10.244.0.7:9376]
{% endhighlight %}
```
If you don't see those, try restarting `kube-proxy` with the `-V` flag set to 4, and
then look at the logs again.

View File

@ -13,7 +13,7 @@ A replication controller simply ensures that a specified number of pod "replicas
The replication controller created to run nginx by `kubectl run` in the [Quick start](quick-start) could be specified using YAML as follows:
{% highlight yaml %}
```yaml
apiVersion: v1
kind: ReplicationController
@ -32,7 +32,7 @@ spec:
ports:
- containerPort: 80
{% endhighlight %}
```
Some differences compared to specifying just a pod are that the `kind` is `ReplicationController`, the number of `replicas` desired is specified, and the pod specification is under the `template` field. The names of the pods don't need to be specified explicitly because they are generated from the name of the replication controller.
View the [replication controller API
@ -41,12 +41,12 @@ to view the list of supported fields.
This replication controller can be created using `create`, just as with pods:
{% highlight console %}
```shell
$ kubectl create -f ./nginx-rc.yaml
replicationcontrollers/my-nginx
{% endhighlight %}
```
Unlike in the case where you directly create pods, a replication controller replaces pods that are deleted or terminated for any reason, such as in the case of node failure. For this reason, we recommend that you use a replication controller for a continuously running application even if your application requires only a single pod, in which case you can omit `replicas` and it will default to a single replica.
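For example, you can see the replacement behavior directly (a sketch; the pod names are placeholders for whatever names your controller generated):

```shell
# A sketch: delete one of the controller's pods and list pods again;
# a replacement with a new generated name appears shortly.
$ kubectl delete pod my-nginx-065jq
$ kubectl get pods
NAME             READY     STATUS    RESTARTS   AGE
my-nginx-buaiq   1/1       Running   0          2m
my-nginx-q6w9t   1/1       Running   0          5s
```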
@ -54,37 +54,37 @@ Unlike in the case where you directly create pods, a replication controller repl
You can view the replication controller you created using `get`:
{% highlight console %}
```shell
$ kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
my-nginx nginx nginx app=nginx 2
{% endhighlight %}
```
This tells you that your controller will ensure that you have two nginx replicas.
You can see those replicas using `get`, just as with pods you created directly:
{% highlight console %}
```shell
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
my-nginx-065jq 1/1 Running 0 51s
my-nginx-buaiq 1/1 Running 0 51s
{% endhighlight %}
```
## Deleting replication controllers
When you want to kill your application, delete your replication controller, as in the [Quick start](quick-start):
{% highlight console %}
```shell
$ kubectl delete rc my-nginx
replicationcontrollers/my-nginx
{% endhighlight %}
```
By default, this will also cause the pods managed by the replication controller to be deleted. If there were a large number of pods, this may take a while to complete. If you want to leave the pods running, specify `--cascade=false`.
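For instance, a minimal sketch that keeps the pods but removes the controller (reusing the `my-nginx` controller from above; `--cascade=false` is the flag referred to here):
```shell
# Delete only the replication controller; the pods it was managing keep running,
# but nothing will replace them if they die.
$ kubectl delete rc my-nginx --cascade=false
```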
@ -94,33 +94,33 @@ If you try to delete the pods before deleting the replication controller, it wil
Kubernetes uses user-defined key-value attributes called [*labels*](labels) to categorize and identify sets of resources, such as pods and replication controllers. The example above specified a single label in the pod template, with key `app` and value `nginx`. All pods created carry that label, which can be viewed using `-L`:
{% highlight console %}
```shell
$ kubectl get pods -L app
NAME READY STATUS RESTARTS AGE APP
my-nginx-afv12 0/1 Running 0 3s nginx
my-nginx-lg99z 0/1 Running 0 3s nginx
{% endhighlight %}
```
The labels from the pod template are copied to the replication controller's labels by default, as well -- all resources in Kubernetes support labels:
{% highlight console %}
```shell
$ kubectl get rc my-nginx -L app
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS APP
my-nginx nginx nginx app=nginx 2 nginx
{% endhighlight %}
```
More importantly, the pod template's labels are used to create a [`selector`](labels.html#label-selectors) that will match pods carrying those labels. You can see this field by requesting it using the [Go template output format of `kubectl get`](kubectl/kubectl_get):
{% highlight console %}
```shell
$ kubectl get rc my-nginx -o template --template="{{.spec.selector}}"
map[app:nginx]
{% endhighlight %}
```
You can also specify the `selector` explicitly, for example if the pod template carries labels that you don't want to select on. In that case, make sure the selector matches the labels of the pods created from the pod template, and that it doesn't match pods created by other replication controllers. The most straightforward way to guarantee the latter is to create a label value unique to the replication controller and to specify it in both the pod template's labels and the selector.
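As an illustrative sketch (the `my-nginx-v1` name, the `deployment: my-nginx-v1` label value, and the extra `tier: frontend` label are hypothetical, chosen only to make the selector unique), an explicit selector might look like this:
```shell
$ cat <<EOF | kubectl create -f -
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-nginx-v1
spec:
  replicas: 2
  # Explicit selector: matches only pods carrying both labels below.
  selector:
    app: nginx
    deployment: my-nginx-v1
  template:
    metadata:
      labels:
        app: nginx
        deployment: my-nginx-v1
        # Extra label that is deliberately not part of the selector.
        tier: frontend
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
EOF
```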

View File

@ -36,7 +36,7 @@ bring up 3 nginx pods.
<!-- BEGIN MUNGE: EXAMPLE nginx-deployment.yaml -->
{% highlight yaml %}
```yaml
apiVersion: extensions/v1beta1
kind: Deployment
@ -55,56 +55,56 @@ spec:
ports:
- containerPort: 80
{% endhighlight %}
```
[Download example](nginx-deployment.yaml)
<!-- END MUNGE: EXAMPLE nginx-deployment.yaml -->
Run the example by downloading the example file and then running this command:
{% highlight console %}
```shell
$ kubectl create -f docs/user-guide/nginx-deployment.yaml
deployment "nginx-deployment" created
{% endhighlight %}
```
Running a get immediately will give:
{% highlight console %}
```shell
$ kubectl get deployments
NAME UPDATEDREPLICAS AGE
nginx-deployment 0/3 8s
{% endhighlight %}
```
This indicates that the deployment is trying to update 3 replicas and has not
updated any of them yet.
Running a get again after a minute will give:
{% highlight console %}
```shell
$ kubectl get deployments
NAME UPDATEDREPLICAS AGE
nginx-deployment 3/3 1m
{% endhighlight %}
```
This indicates that the deployment has created all 3 replicas.
Running ```kubectl get rc``` and ```kubectl get pods``` will show the replication controller (RC) and the pods it created.
{% highlight console %}
```shell
$ kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS AGE
deploymentrc-1975012602 nginx nginx:1.7.9 deployment.kubernetes.io/podTemplateHash=1975012602,app=nginx 3 2m
{% endhighlight %}
```
{% highlight console %}
```shell
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
@ -112,7 +112,7 @@ deploymentrc-1975012602-4f2tb 1/1 Running 0 1m
deploymentrc-1975012602-j975u 1/1 Running 0 1m
deploymentrc-1975012602-uashb 1/1 Running 0 1m
{% endhighlight %}
```
The created RC will ensure that there are 3 nginx pods at all times.
@ -124,7 +124,7 @@ For this, we update our deployment to be as follows:
<!-- BEGIN MUNGE: EXAMPLE new-nginx-deployment.yaml -->
{% highlight yaml %}
```yaml
apiVersion: extensions/v1beta1
kind: Deployment
@ -143,68 +143,68 @@ spec:
ports:
- containerPort: 80
{% endhighlight %}
```
[Download example](new-nginx-deployment.yaml)
<!-- END MUNGE: EXAMPLE new-nginx-deployment.yaml -->
{% highlight console %}
```shell
$ kubectl apply -f docs/user-guide/new-nginx-deployment.yaml
deployment "nginx-deployment" configured
{% endhighlight %}
```
Running a get immediately will still give:
{% highlight console %}
```shell
$ kubectl get deployments
NAME UPDATEDREPLICAS AGE
nginx-deployment 3/3 8s
{% endhighlight %}
```
This indicates that the deployment status has not been updated yet (it still
shows the old status).
Running a get again after a minute will give:
{% highlight console %}
```shell
$ kubectl get deployments
NAME UPDATEDREPLICAS AGE
nginx-deployment 1/3 1m
{% endhighlight %}
```
This indicates that the deployment has updated one of the three pods that it needs
to update.
Eventually, it will get around to updating all the pods.
{% highlight console %}
```shell
$ kubectl get deployments
NAME UPDATEDREPLICAS AGE
nginx-deployment 3/3 3m
{% endhighlight %}
```
We can run ```kubectl get rc``` to see that the deployment updated the pods by creating a new RC,
which it scaled up to 3 while scaling the old RC down to 0.
{% highlight console %}
```shell
kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS AGE
deploymentrc-1562004724 nginx nginx:1.9.1 deployment.kubernetes.io/podTemplateHash=1562004724,app=nginx 3 5m
deploymentrc-1975012602 nginx nginx:1.7.9 deployment.kubernetes.io/podTemplateHash=1975012602,app=nginx 0 7m
{% endhighlight %}
```
Running ```kubectl get pods``` will show only the new pods.
{% highlight console %}
```shell
kubectl get pods
NAME READY STATUS RESTARTS AGE
@ -212,7 +212,7 @@ deploymentrc-1562004724-0tgk5 1/1 Running 0 9m
deploymentrc-1562004724-1rkfl 1/1 Running 0 8m
deploymentrc-1562004724-6v702 1/1 Running 0 8m
{% endhighlight %}
```
Next time we want to update pods, we can just update the deployment again.
@ -222,7 +222,7 @@ up. For example, if you look at the above deployment closely, you will see that
it first created a new pod, then deleted some old pods and created new ones. It
does not kill old pods until a sufficient number of new pods have come up.
{% highlight console %}
```shell
$ kubectl describe deployments
Name: nginx-deployment
@ -244,7 +244,7 @@ Events:
1m 1m 1 {deployment-controller } ScalingRC Scaled up rc deploymentrc-1562004724 to 3
1m 1m 1 {deployment-controller } ScalingRC Scaled down rc deploymentrc-1975012602 to 0
{% endhighlight %}
```
Here we see that when we first created the deployment, it created an RC and scaled it up to 3 replicas directly.
When we updated the deployment, it created a new RC and scaled it up to 1 and then scaled down the old RC by 1, so that at least 2 pods were available at all times.

View File

@ -11,7 +11,7 @@ How do I run an nginx container and expose it to the world? Checkout [kubectl ru
With docker:
{% highlight console %}
```shell
$ docker run -d --restart=always -e DOMAIN=cluster --name nginx-app -p 80:80 nginx
a9ec34d9878748d2f33dc20cb25c714ff21da8d40558b45bfaec9955859075d0
@ -19,11 +19,11 @@ $ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a9ec34d98787 nginx "nginx -g 'daemon of 2 seconds ago Up 2 seconds 0.0.0.0:80->80/tcp, 443/tcp nginx-app
{% endhighlight %}
```
With kubectl:
{% highlight console %}
```shell
# start the pod running nginx
$ kubectl run --image=nginx nginx-app --port=80 --env="DOMAIN=cluster"
@ -31,17 +31,17 @@ replicationcontroller "nginx-app" created
# expose a port through a service
$ kubectl expose rc nginx-app --port=80 --name=nginx-http
{% endhighlight %}
```
With kubectl, we create a [replication controller](replication-controller) which will make sure that N pods are running nginx (where N is the number of replicas stated in the spec, which defaults to 1). We also create a [service](services) with a selector that matches the replication controller's selector. See the [Quick start](quick-start) for more information.
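For example, a sketch that asks for three replicas up front (the `--replicas` flag; the `my-nginx` name and port are illustrative):
```shell
# Create a replication controller that keeps 3 nginx pods running.
$ kubectl run my-nginx --image=nginx --replicas=3 --port=80
```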
By default images are run in the background, similar to `docker run -d ...`. If you want to run things in the foreground, use:
{% highlight console %}
```shell
kubectl run [-i] [--tty] --attach <name> --image=<image>
{% endhighlight %}
```
Unlike `docker run ...`, if `--attach` is specified, we attach to `stdin`, `stdout` and `stderr`; there is no ability to control which streams are attached (as with `docker -a ...`).
@ -54,23 +54,23 @@ How do I list what is currently running? Checkout [kubectl get](kubectl/kubectl_
With docker:
{% highlight console %}
```shell
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a9ec34d98787 nginx "nginx -g 'daemon of About an hour ago Up About an hour 0.0.0.0:80->80/tcp, 443/tcp nginx-app
{% endhighlight %}
```
With kubectl:
{% highlight console %}
```shell
$ kubectl get po
NAME READY STATUS RESTARTS AGE
nginx-app-5jyvm 1/1 Running 0 1h
{% endhighlight %}
```
#### docker attach
@ -78,7 +78,7 @@ How do I attach to a process that is already running in a container? Checkout [
With docker:
{% highlight console %}
```shell
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
@ -86,11 +86,11 @@ a9ec34d98787 nginx "nginx -g 'daemon of 8 minutes ago
$ docker attach -it a9ec34d98787
...
{% endhighlight %}
```
With kubectl:
{% highlight console %}
```shell
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
@ -99,7 +99,7 @@ $ kubectl attach -it nginx-app-5jyvm
...
{% endhighlight %}
```
#### docker exec
@ -107,7 +107,7 @@ How do I execute a command in a container? Checkout [kubectl exec](kubectl/kubec
With docker:
{% highlight console %}
```shell
$ docker ps
@ -117,11 +117,11 @@ $ docker exec a9ec34d98787 cat /etc/hostname
a9ec34d98787
{% endhighlight %}
```
With kubectl:
{% highlight console %}
```shell
$ kubectl get po
@ -131,14 +131,14 @@ $ kubectl exec nginx-app-5jyvm -- cat /etc/hostname
nginx-app-5jyvm
{% endhighlight %}
```
What about interactive commands?
With docker:
{% highlight console %}
```shell
$ docker exec -ti a9ec34d98787 /bin/sh
@ -146,11 +146,11 @@ $ docker exec -ti a9ec34d98787 /bin/sh
# exit
{% endhighlight %}
```
With kubectl:
{% highlight console %}
```shell
$ kubectl exec -ti nginx-app-5jyvm -- /bin/sh
@ -158,7 +158,7 @@ $ kubectl exec -ti nginx-app-5jyvm -- /bin/sh
# exit
{% endhighlight %}
```
For more information see [Getting into containers](getting-into-containers).
@ -169,7 +169,7 @@ How do I follow stdout/stderr of a running process? Checkout [kubectl logs](kube
With docker:
{% highlight console %}
```shell
$ docker logs -f a9e
@ -177,11 +177,11 @@ $ docker logs -f a9e
192.168.9.1 - - [14/Jul/2015:01:04:03 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.35.0" "-"
{% endhighlight %}
```
With kubectl:
{% highlight console %}
```shell
$ kubectl logs -f nginx-app-zibvs
@ -189,11 +189,11 @@ $ kubectl logs -f nginx-app-zibvs
10.240.63.110 - - [14/Jul/2015:01:09:02 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.26.0" "-"
{% endhighlight %}
```
Now is a good time to mention a slight difference between pods and containers: by default, a pod does not terminate if its process exits; instead it restarts the process. This is similar to the `docker run` option `--restart=always`, with one major difference: in Docker the output of each invocation of the process is concatenated, but in Kubernetes each invocation is separate. To see the output from a previous run in Kubernetes, do this:
{% highlight console %}
```shell
$ kubectl logs --previous nginx-app-zibvs
@ -201,7 +201,7 @@ $ kubectl logs --previous nginx-app-zibvs
10.240.63.110 - - [14/Jul/2015:01:09:02 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.26.0" "-"
{% endhighlight %}
```
See [Logging](logging) for more information.
@ -211,7 +211,7 @@ How do I stop and delete a running process? Checkout [kubectl delete](kubectl/ku
With docker
{% highlight console %}
```shell
$ docker ps
@ -223,11 +223,11 @@ $ docker rm a9ec34d98787
a9ec34d98787
{% endhighlight %}
```
With kubectl:
{% highlight console %}
```shell
$ kubectl get rc nginx-app
@ -243,7 +243,7 @@ $ kubectl get po
NAME READY STATUS RESTARTS AGE
{% endhighlight %}
```
Notice that we don't delete the pod directly. With kubectl we want to delete the replication controller that owns the pod. If we delete the pod directly, the replication controller will recreate the pod.
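A quick sketch of that behaviour (the pod name is taken from the listing above; yours will differ, and the output is omitted here):
```shell
# Delete one of the managed pods directly...
$ kubectl delete pod nginx-app-5jyvm
# ...and list pods again: the replication controller has already created a
# replacement pod with a new, generated name.
$ kubectl get pods
```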
@ -257,7 +257,7 @@ How do I get the version of my client and server? Checkout [kubectl version](kub
With docker:
{% highlight console %}
```shell
$ docker version
@ -273,11 +273,11 @@ Git commit (server): 0baf609
OS/Arch (server): linux/amd64
{% endhighlight %}
```
With kubectl:
{% highlight console %}
```shell
$ kubectl version
@ -285,7 +285,7 @@ Client Version: version.Info{Major:"0", Minor:"20.1", GitVersion:"v0.20.1", GitC
Server Version: version.Info{Major:"0", Minor:"21+", GitVersion:"v0.21.1-411-g32699e873ae1ca-dirty", GitCommit:"32699e873ae1caa01812e41de7eab28df4358ee4", GitTreeState:"dirty"}
{% endhighlight %}
```
#### docker info
@ -293,7 +293,7 @@ How do I get miscellaneous info about my environment and configuration? Checkout
With docker:
{% highlight console %}
```shell
$ docker info
@ -315,11 +315,11 @@ ID: ADUV:GCYR:B3VJ:HMPO:LNPQ:KD5S:YKFQ:76VN:IANZ:7TFV:ZBF4:BYJO
WARNING: No swap limit support
{% endhighlight %}
```
With kubectl:
{% highlight console %}
```shell
$ kubectl cluster-info
@ -331,7 +331,7 @@ Heapster is running at https://108.59.85.141/api/v1/proxy/namespaces/kube-system
InfluxDB is running at https://108.59.85.141/api/v1/proxy/namespaces/kube-system/services/monitoring-influxdb
{% endhighlight %}
```

View File

@ -50,7 +50,7 @@ downward API:
<!-- BEGIN MUNGE: EXAMPLE downward-api/dapi-pod.yaml -->
{% highlight yaml %}
```yaml
apiVersion: v1
kind: Pod
@ -76,7 +76,7 @@ spec:
fieldPath: status.podIP
restartPolicy: Never
{% endhighlight %}
```
[Download example](downward-api/dapi-pod.yaml)
<!-- END MUNGE: EXAMPLE downward-api/dapi-pod.yaml -->
@ -117,7 +117,7 @@ This is an example of a pod that consumes its labels and annotations via the dow
<!-- BEGIN MUNGE: EXAMPLE downward-api/volume/dapi-volume.yaml -->
{% highlight yaml %}
```yaml
apiVersion: v1
kind: Pod
@ -150,7 +150,7 @@ spec:
fieldRef:
fieldPath: metadata.annotations
{% endhighlight %}
```
[Download example](downward-api/volume/dapi-volume.yaml)
<!-- END MUNGE: EXAMPLE downward-api/volume/dapi-volume.yaml -->

View File

@ -18,25 +18,25 @@ containers to be injected with the name and namespace of the pod the container i
Use the [`examples/downward-api/dapi-pod.yaml`](dapi-pod.yaml) file to create a Pod with a container that consumes the
downward API.
{% highlight console %}
```shell
$ kubectl create -f docs/user-guide/downward-api/dapi-pod.yaml
{% endhighlight %}
```
### Examine the logs
This pod runs the `env` command in a container that consumes the downward API. You can grep
through the pod logs to see that the pod was injected with the correct values:
{% highlight console %}
```shell
$ kubectl logs dapi-test-pod | grep POD_
2015-04-30T20:22:18.568024817Z MY_POD_NAME=dapi-test-pod
2015-04-30T20:22:18.568087688Z MY_POD_NAMESPACE=default
2015-04-30T20:22:18.568092435Z MY_POD_IP=10.0.1.6
{% endhighlight %}
```

View File

@ -18,25 +18,25 @@ containers to be injected with the name and namespace of the pod the container i
Use the [`examples/downward-api/dapi-pod.yaml`](dapi-pod.yaml) file to create a Pod with a container that consumes the
downward API.
{% highlight console %}
```shell
$ kubectl create -f docs/user-guide/downward-api/dapi-pod.yaml
{% endhighlight %}
```
### Examine the logs
This pod runs the `env` command in a container that consumes the downward API. You can grep
through the pod logs to see that the pod was injected with the correct values:
{% highlight console %}
```shell
$ kubectl logs dapi-test-pod | grep POD_
2015-04-30T20:22:18.568024817Z MY_POD_NAME=dapi-test-pod
2015-04-30T20:22:18.568087688Z MY_POD_NAMESPACE=default
2015-04-30T20:22:18.568092435Z MY_POD_IP=10.0.1.6
{% endhighlight %}
```

View File

@ -19,17 +19,17 @@ This example assumes you have a Kubernetes cluster installed and running, and th
Use the `docs/user-guide/downward-api/dapi-volume.yaml` file to create a Pod with a downward API volume which stores pod labels and pod annotations in `/etc/labels` and `/etc/annotations` respectively.
{% highlight sh %}
```shell
$ kubectl create -f docs/user-guide/downward-api/volume/dapi-volume.yaml
{% endhighlight %}
```
### Step Two: Examine pod/container output
The pod prints the content of the dump files every 5 seconds; the output can be viewed with the usual `kubectl logs` command
{% highlight sh %}
```shell
$ kubectl logs kubernetes-downwardapi-volume-example
cluster="test-cluster1"
@ -40,13 +40,13 @@ builder="john-doe"
kubernetes.io/config.seen="2015-08-24T13:47:23.432459138Z"
kubernetes.io/config.source="api"
{% endhighlight %}
```
### Internals
In the pod's `/etc` directory one may find the files created by the plugin (system files elided):
{% highlight sh %}
```shell
$ kubectl exec kubernetes-downwardapi-volume-example -i -t -- sh
/ # ls -laR /etc
@ -67,6 +67,6 @@ drwxrwxrwt 3 0 0 180 Aug 24 13:03 ..
-rw-r--r-- 1 0 0 53 Aug 24 13:03 labels
/ #
{% endhighlight %}
```
The file `labels` is stored in a temporary directory (`..2015_08_24_13_03_44259413923` in the example above) which is symlinked to by `..downwardapi`. Symlinks for annotations and labels in `/etc` point to files containing the actual metadata through the `..downwardapi` indirection.  This structure allows for dynamic atomic refresh of the metadata: updates are written to a new temporary directory, and the `..downwardapi` symlink is updated atomically using `rename(2)`.

View File

@ -19,17 +19,17 @@ This example assumes you have a Kubernetes cluster installed and running, and th
Use the `docs/user-guide/downward-api/dapi-volume.yaml` file to create a Pod with a downward API volume which stores pod labels and pod annotations in `/etc/labels` and `/etc/annotations` respectively.
{% highlight sh %}
```shell
$ kubectl create -f docs/user-guide/downward-api/volume/dapi-volume.yaml
{% endhighlight %}
```
### Step Two: Examine pod/container output
The pod prints the content of the dump files every 5 seconds; the output can be viewed with the usual `kubectl logs` command
{% highlight sh %}
```shell
$ kubectl logs kubernetes-downwardapi-volume-example
cluster="test-cluster1"
@ -40,13 +40,13 @@ builder="john-doe"
kubernetes.io/config.seen="2015-08-24T13:47:23.432459138Z"
kubernetes.io/config.source="api"
{% endhighlight %}
```
### Internals
In the pod's `/etc` directory one may find the files created by the plugin (system files elided):
{% highlight sh %}
```shell
$ kubectl exec kubernetes-downwardapi-volume-example -i -t -- sh
/ # ls -laR /etc
@ -67,6 +67,6 @@ drwxrwxrwt 3 0 0 180 Aug 24 13:03 ..
-rw-r--r-- 1 0 0 53 Aug 24 13:03 labels
/ #
{% endhighlight %}
```
The file `labels` is stored in a temporary directory (`..2015_08_24_13_03_44259413923` in the example above) which is symlinked to by `..downwardapi`. Symlinks for annotations and labels in `/etc` point to files containing the actual metadata through the `..downwardapi` indirection.  This structure allows for dynamic atomic refresh of the metadata: updates are written to a new temporary directory, and the `..downwardapi` symlink is updated atomically using `rename(2)`.

View File

@ -10,28 +10,28 @@ Kubernetes exposes [services](services.html#environment-variables) through envir
We first create a pod and a service,
{% highlight console %}
```shell
$ kubectl create -f examples/guestbook/redis-master-controller.yaml
$ kubectl create -f examples/guestbook/redis-master-service.yaml
{% endhighlight %}
```
wait until the pod is Running and Ready,
{% highlight console %}
```shell
$ kubectl get pod
NAME READY REASON RESTARTS AGE
redis-master-ft9ex 1/1 Running 0 12s
{% endhighlight %}
```
then we can check the environment variables of the pod,
{% highlight console %}
```shell
$ kubectl exec redis-master-ft9ex env
...
REDIS_MASTER_SERVICE_PORT=6379
REDIS_MASTER_SERVICE_HOST=10.0.0.219
...
{% endhighlight %}
```
We can use these environment variables in applications to find the service.
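For instance, a sketch that uses them from inside another pod in the same namespace (this assumes the image in that pod ships the `redis-cli` binary; adjust to whatever client your application uses):
```shell
# Reach the redis master through the injected service environment variables.
redis-cli -h "$REDIS_MASTER_SERVICE_HOST" -p "$REDIS_MASTER_SERVICE_PORT" ping
```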
@ -41,32 +41,32 @@ We can use these environment variables in applications to find the service.
It is convenient to use `kubectl exec` to check if the volumes are mounted as expected.
We first create a Pod with a volume mounted at /data/redis,
{% highlight console %}
```shell
kubectl create -f docs/user-guide/walkthrough/pod-redis.yaml
{% endhighlight %}
```
wait until the pod is Running and Ready,
{% highlight console %}
```shell
$ kubectl get pods
NAME READY REASON RESTARTS AGE
storage 1/1 Running 0 1m
{% endhighlight %}
```
we then use `kubectl exec` to verify that the volume is mounted at /data/redis,
{% highlight console %}
```shell
$ kubectl exec storage ls /data
redis
{% endhighlight %}
```
## Using kubectl exec to open a bash terminal in a pod
After all, opening a terminal in a pod is the most direct way to introspect it. Assuming the pod/storage is still running, run
{% highlight console %}
```shell
$ kubectl exec -ti storage -- bash
root@storage:/data#
{% endhighlight %}
```
This gets you a terminal.

View File

@ -26,7 +26,7 @@ First, we will start a replication controller running the image and expose it as
<a name="kubectl-run"></a>
{% highlight console %}
```shell
$ kubectl run php-apache --image=gcr.io/google_containers/hpa-example --requests=cpu=200m
replicationcontroller "php-apache" created
@ -34,11 +34,11 @@ replicationcontroller "php-apache" created
$ kubectl expose rc php-apache --port=80 --type=LoadBalancer
service "php-apache" exposed
{% endhighlight %}
```
Now, we will wait some time and verify that both the replication controller and the service were correctly created and are running. We will also determine the IP address of the service:
{% highlight console %}
```shell
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
@ -47,21 +47,21 @@ php-apache-wa3t1 1/1 Running 0 12m
$ kubectl describe services php-apache | grep "LoadBalancer Ingress"
LoadBalancer Ingress: 146.148.24.244
{% endhighlight %}
```
We may now check that the php-apache server works correctly by calling ``curl`` with the service's IP:
{% highlight console %}
```shell
$ curl http://146.148.24.244
OK!
{% endhighlight %}
```
Please note that when exposing the service we assumed that our cluster runs on a provider which supports load balancers (e.g. on GCE).
If load balancers are not supported (e.g. on Vagrant), we can expose the php-apache service as ``ClusterIP`` and connect to it using the proxy on the master:
{% highlight console %}
```shell
$ kubectl expose rc php-apache --port=80 --type=ClusterIP
service "php-apache" exposed
@ -72,7 +72,7 @@ Kubernetes master is running at https://146.148.6.215
$ curl -k -u <admin>:<password> https://146.148.6.215/api/v1/proxy/namespaces/default/services/php-apache/
OK!
{% endhighlight %}
```
## Step Two: Create horizontal pod autoscaler
@ -80,7 +80,7 @@ OK!
Now that the server is running, we will create a horizontal pod autoscaler for it.
To create it, we will use the [hpa-php-apache.yaml](hpa-php-apache.yaml) file, which looks like this:
{% highlight yaml %}
```yaml
apiVersion: extensions/v1beta1
kind: HorizontalPodAutoscaler
@ -97,7 +97,7 @@ spec:
cpuUtilization:
targetPercentage: 50
{% endhighlight %}
```
This defines a horizontal pod autoscaler that maintains between 1 and 10 replicas of the Pods
controlled by the php-apache replication controller we created in the first step of these instructions.
@ -108,12 +108,12 @@ See [here](/{{page.version}}/docs/design/horizontal-pod-autoscaler.html#autoscal
We will create the autoscaler by executing the following command:
{% highlight console %}
```shell
$ kubectl create -f docs/user-guide/horizontal-pod-autoscaling/hpa-php-apache.yaml
horizontalpodautoscaler "php-apache" created
{% endhighlight %}
```
Alternatively, we can create the autoscaler using [kubectl autoscale](../kubectl/kubectl_autoscale).
The following command will create the equivalent autoscaler as defined in the [hpa-php-apache.yaml](hpa-php-apache.yaml) file:
@ -127,13 +127,13 @@ replicationcontroller "php-apache" autoscaled
We may check the current status of autoscaler by running:
{% highlight console %}
```shell
$ kubectl get hpa
NAME REFERENCE TARGET CURRENT MINPODS MAXPODS AGE
php-apache ReplicationController/default/php-apache/ 50% 0% 1 10 27s
{% endhighlight %}
```
Please note that the current CPU consumption is 0% as we are not sending any requests to the server
(the ``CURRENT`` column shows the average across all the pods controlled by the corresponding replication controller).
@ -143,44 +143,44 @@ Please note that the current CPU consumption is 0% as we are not sending any req
Now, we will see how the autoscaler reacts to increased load on the server.
We will start an infinite loop of queries to our server (please run it in a different terminal):
{% highlight console %}
```shell
$ while true; do curl http://146.148.6.244; done
{% endhighlight %}
```
We can examine how the CPU load increased (the results should be visible after about 3-4 minutes) by executing:
{% highlight console %}
```shell
$ kubectl get hpa
NAME REFERENCE TARGET CURRENT MINPODS MAXPODS AGE
php-apache ReplicationController/default/php-apache/ 50% 305% 1 10 4m
{% endhighlight %}
```
In the case presented here, it bumped CPU consumption to 305% of the request.
As a result, the replication controller was resized to 7 replicas:
{% highlight console %}
```shell
$ kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS AGE
php-apache php-apache gcr.io/google_containers/hpa-example run=php-apache 7 18m
{% endhighlight %}
```
Now, we may increase the load even more by running yet another infinite loop of queries (in yet another terminal):
{% highlight console %}
```shell
$ while true; do curl http://146.148.6.244; done
{% endhighlight %}
```
In the case presented here, it increased the number of serving pods to 10:
{% highlight console %}
```shell
$ kubectl get hpa
NAME REFERENCE TARGET CURRENT MINPODS MAXPODS AGE
@ -190,14 +190,14 @@ $ kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS AGE
php-apache php-apache gcr.io/google_containers/hpa-example run=php-apache 10 24m
{% endhighlight %}
```
## Step Four: Stop load
We will finish our example by stopping the user load.
We will terminate both infinite ``while`` loops sending requests to the server and verify the result state:
{% highlight console %}
```shell
$ kubectl get hpa
NAME REFERENCE TARGET CURRENT MINPODS MAXPODS AGE
@ -207,7 +207,7 @@ $ kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS AGE
php-apache php-apache gcr.io/google_containers/hpa-example run=php-apache 1 31m
{% endhighlight %}
```
As we see, in the presented case CPU utilization dropped to 0, and the number of replicas dropped to 1.

View File

@ -26,7 +26,7 @@ First, we will start a replication controller running the image and expose it as
<a name="kubectl-run"></a>
{% highlight console %}
```shell
$ kubectl run php-apache --image=gcr.io/google_containers/hpa-example --requests=cpu=200m
replicationcontroller "php-apache" created
@ -34,11 +34,11 @@ replicationcontroller "php-apache" created
$ kubectl expose rc php-apache --port=80 --type=LoadBalancer
service "php-apache" exposed
{% endhighlight %}
```
Now, we will wait some time and verify that both the replication controller and the service were correctly created and are running. We will also determine the IP address of the service:
{% highlight console %}
```shell
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
@ -47,21 +47,21 @@ php-apache-wa3t1 1/1 Running 0 12m
$ kubectl describe services php-apache | grep "LoadBalancer Ingress"
LoadBalancer Ingress: 146.148.24.244
{% endhighlight %}
```
We may now check that the php-apache server works correctly by calling ``curl`` with the service's IP:
{% highlight console %}
```shell
$ curl http://146.148.24.244
OK!
{% endhighlight %}
```
Please note that when exposing the service we assumed that our cluster runs on a provider which supports load balancers (e.g. on GCE).
If load balancers are not supported (e.g. on Vagrant), we can expose the php-apache service as ``ClusterIP`` and connect to it using the proxy on the master:
{% highlight console %}
```shell
$ kubectl expose rc php-apache --port=80 --type=ClusterIP
service "php-apache" exposed
@ -72,7 +72,7 @@ Kubernetes master is running at https://146.148.6.215
$ curl -k -u <admin>:<password> https://146.148.6.215/api/v1/proxy/namespaces/default/services/php-apache/
OK!
{% endhighlight %}
```
## Step Two: Create horizontal pod autoscaler
@ -80,7 +80,7 @@ OK!
Now that the server is running, we will create a horizontal pod autoscaler for it.
To create it, we will use the [hpa-php-apache.yaml](hpa-php-apache.yaml) file, which looks like this:
{% highlight yaml %}
```yaml
apiVersion: extensions/v1beta1
kind: HorizontalPodAutoscaler
@ -97,7 +97,7 @@ spec:
cpuUtilization:
targetPercentage: 50
{% endhighlight %}
```
This defines a horizontal pod autoscaler that maintains between 1 and 10 replicas of the Pods
controlled by the php-apache replication controller we created in the first step of these instructions.
@ -108,12 +108,12 @@ See [here](/{{page.version}}/docs/design/horizontal-pod-autoscaler.html#autoscal
We will create the autoscaler by executing the following command:
{% highlight console %}
```shell
$ kubectl create -f docs/user-guide/horizontal-pod-autoscaling/hpa-php-apache.yaml
horizontalpodautoscaler "php-apache" created
{% endhighlight %}
```
Alternatively, we can create the autoscaler using [kubectl autoscale](../kubectl/kubectl_autoscale).
The following command will create the equivalent autoscaler as defined in the [hpa-php-apache.yaml](hpa-php-apache.yaml) file:
@ -127,13 +127,13 @@ replicationcontroller "php-apache" autoscaled
We may check the current status of autoscaler by running:
{% highlight console %}
```shell
$ kubectl get hpa
NAME REFERENCE TARGET CURRENT MINPODS MAXPODS AGE
php-apache ReplicationController/default/php-apache/ 50% 0% 1 10 27s
{% endhighlight %}
```
Please note that the current CPU consumption is 0% as we are not sending any requests to the server
(the ``CURRENT`` column shows the average across all the pods controlled by the corresponding replication controller).
@ -143,44 +143,44 @@ Please note that the current CPU consumption is 0% as we are not sending any req
Now, we will see how the autoscaler reacts to increased load on the server.
We will start an infinite loop of queries to our server (please run it in a different terminal):
{% highlight console %}
```shell
$ while true; do curl http://146.148.6.244; done
{% endhighlight %}
```
We can examine how the CPU load increased (the results should be visible after about 3-4 minutes) by executing:
{% highlight console %}
```shell
$ kubectl get hpa
NAME REFERENCE TARGET CURRENT MINPODS MAXPODS AGE
php-apache ReplicationController/default/php-apache/ 50% 305% 1 10 4m
{% endhighlight %}
```
In the case presented here, it bumped CPU consumption to 305% of the request.
As a result, the replication controller was resized to 7 replicas:
{% highlight console %}
```shell
$ kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS AGE
php-apache php-apache gcr.io/google_containers/hpa-example run=php-apache 7 18m
{% endhighlight %}
```
Now, we may increase the load even more by running yet another infinite loop of queries (in yet another terminal):
{% highlight console %}
```shell
$ while true; do curl http://146.148.6.244; done
{% endhighlight %}
```
In the case presented here, it increased the number of serving pods to 10:
{% highlight console %}
```shell
$ kubectl get hpa
NAME REFERENCE TARGET CURRENT MINPODS MAXPODS AGE
@ -190,14 +190,14 @@ $ kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS AGE
php-apache php-apache gcr.io/google_containers/hpa-example run=php-apache 10 24m
{% endhighlight %}
```
## Step Four: Stop load
We will finish our example by stopping the user load.
We will terminate both infinite ``while`` loops sending requests to the server and verify the result state:
{% highlight console %}
```shell
$ kubectl get hpa
NAME REFERENCE TARGET CURRENT MINPODS MAXPODS AGE
@ -207,7 +207,7 @@ $ kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS AGE
php-apache php-apache gcr.io/google_containers/hpa-example run=php-apache 1 31m
{% endhighlight %}
```
As we see, in the presented case CPU utilization dropped to 0, and the number of replicas dropped to 1.

View File

@ -73,7 +73,7 @@ example, run these on your desktop/laptop:
Verify by creating a pod that uses a private image, e.g.:
{% highlight yaml %}
```yaml
$ cat <<EOF > /tmp/private-image-test-1.yaml
apiVersion: v1
@ -91,25 +91,25 @@ $ kubectl create -f /tmp/private-image-test-1.yaml
pods/private-image-test-1
$
{% endhighlight %}
```
If everything is working, then, after a few moments, you should see:
{% highlight console %}
```shell
$ kubectl logs private-image-test-1
SUCCESS
{% endhighlight %}
```
If it failed, then you will see:
{% highlight console %}
```shell
$ kubectl describe pods/private-image-test-1 | grep "Failed"
Fri, 26 Jun 2015 15:36:13 -0700 Fri, 26 Jun 2015 15:39:13 -0700 19 {kubelet node-i2hq} spec.containers{uses-private-image} failed Failed to pull image "user/privaterepo:v1": Error: image user/privaterepo:v1 not found
{% endhighlight %}
```
You must ensure all nodes in the cluster have the same `.dockercfg`. Otherwise, pods will run on
@ -152,7 +152,7 @@ Kubernetes supports specifying registry keys on a pod.
First, create a `.dockercfg`, for example by running `docker login <registry.domain>`.
Then put the resulting `.dockercfg` file into a [secret resource](secrets). For example:
{% highlight console %}
```shell
$ docker login
Username: janedoe
@ -181,7 +181,7 @@ $ kubectl create -f /tmp/image-pull-secret.yaml
secrets/myregistrykey
$
{% endhighlight %}
```
If you get the error message `error: no objects passed to create`, it may mean the base64 encoded string is invalid.
If you get an error message like `Secret "myregistrykey" is invalid: data[.dockercfg]: invalid value ...` it means
@ -192,7 +192,7 @@ This process only needs to be done one time (per namespace).
Now, you can create pods which reference that secret by adding an `imagePullSecrets`
section to a pod definition.
{% highlight yaml %}
```yaml
apiVersion: v1
kind: Pod
@ -205,7 +205,7 @@ spec:
imagePullSecrets:
- name: myregistrykey
{% endhighlight %}
```
This needs to be done for each pod that is using a private registry.
However, setting of this field can be automated by setting the imagePullSecrets

View File

@ -52,7 +52,7 @@ Before you start using the Ingress resource, there are a few things you should u
A minimal Ingress might look like:
{% highlight yaml %}
```yaml
01. apiVersion: extensions/v1beta1
02. kind: Ingress
@ -67,7 +67,7 @@ A minimal Ingress might look like:
11. serviceName: test
12. servicePort: 80
{% endhighlight %}
```
*POSTing this to the API server will have no effect if you have not configured an [Ingress controller](#ingress-controllers).*
@ -93,7 +93,7 @@ There are existing Kubernetes concepts that allow you to expose a single service
<!-- BEGIN MUNGE: EXAMPLE ingress.yaml -->
{% highlight yaml %}
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
@ -104,20 +104,20 @@ spec:
serviceName: testsvc
servicePort: 80
{% endhighlight %}
```
[Download example](ingress.yaml)
<!-- END MUNGE: EXAMPLE ingress.yaml -->
If you create it using `kubectl create -f` you should see:
{% highlight sh %}
```shell
$ kubectl get ing
NAME RULE BACKEND ADDRESS
test-ingress - testsvc:80 107.178.254.228
{% endhighlight %}
```
Here `107.178.254.228` is the IP allocated by the Ingress controller to satisfy this Ingress. The `RULE` column shows that all traffic sent to the IP is directed to the Kubernetes Service listed under `BACKEND`.
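Assuming `testsvc` serves plain HTTP on port 80, a quick end-to-end check is to curl the allocated IP (the address is the one reported above; yours will differ):
```shell
$ curl http://107.178.254.228/
```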
@ -134,7 +134,7 @@ foo.bar.com -> 178.91.123.132 -> / foo s1:80
would require an Ingress such as:
{% highlight yaml %}
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
@ -154,7 +154,7 @@ spec:
serviceName: s2
servicePort: 80
{% endhighlight %}
```
When you create the Ingress with `kubectl create -f`:
@ -186,7 +186,7 @@ bar.foo.com --| |-> bar.foo.com s2:80
The following Ingress tells the backing loadbalancer to route requests based on the [Host header](https://tools.ietf.org/html/rfc7230#section-5.4).
{% highlight yaml %}
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
@ -207,7 +207,7 @@ spec:
serviceName: s2
servicePort: 80
{% endhighlight %}
```
__Default Backends__: An Ingress with no rules, like the one shown in the previous section, sends all traffic to a single default backend. You can use the same technique to tell a loadbalancer where to find your website's 404 page, by specifying a set of rules *and* a default backend. Traffic is routed to your default backend if none of the Hosts in your Ingress match the Host in the request header, and/or none of the paths match the URL of the request.
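A sketch of that combination (the `default-404` service name is hypothetical; `s1` and the host follow the fanout example above):
```shell
$ cat <<EOF | kubectl create -f -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: with-default-backend
spec:
  # Requests that match none of the rules below land here,
  # e.g. a service that serves your 404 page.
  backend:
    serviceName: default-404
    servicePort: 80
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /foo
        backend:
          serviceName: s1
          servicePort: 80
EOF
```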
@ -222,7 +222,7 @@ It's also worth noting that even though health checks are not exposed directly t
Say you'd like to add a new Host to an existing Ingress; you can update it by editing the resource:
{% highlight sh %}
```shell
$ kubectl get ing
NAME RULE BACKEND ADDRESS
@ -231,11 +231,11 @@ test - 178.91.123.132
/foo s1:80
$ kubectl edit ing test
{% endhighlight %}
```
This should pop up an editor with the existing yaml. Modify it to include the new Host.
{% highlight yaml %}
```yaml
spec:
rules:
@ -255,11 +255,11 @@ spec:
path: /foo
..
{% endhighlight %}
```
Saving it will update the resource in the API server, which should tell the Ingress controller to reconfigure the loadbalancer.
{% highlight sh %}
```shell
$ kubectl get ing
NAME RULE BACKEND ADDRESS
@ -269,7 +269,7 @@ test - 178.91.123.132
bar.baz.com
/foo s2:80
{% endhighlight %}
```
You can achieve the same result by invoking `kubectl replace -f` on a modified Ingress yaml file.
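A sketch of that flow (the file name is illustrative):
```shell
# Export the current resource, edit it locally, then push the modified definition back.
$ kubectl get ing test -o yaml > test-ingress.yaml
$ vi test-ingress.yaml
$ kubectl replace -f test-ingress.yaml
```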

View File

@ -11,7 +11,7 @@ your pods. But there are a number of ways to get even more information about you
For this example we'll use a ReplicationController to create two pods, similar to the earlier example.
{% highlight yaml %}
```yaml
apiVersion: v1
kind: ReplicationController
@ -34,27 +34,27 @@ spec:
ports:
- containerPort: 80
{% endhighlight %}
```
{% highlight console %}
```shell
$ kubectl create -f ./my-nginx-rc.yaml
replicationcontrollers/my-nginx
{% endhighlight %}
```
{% highlight console %}
```shell
$ kubectl get pods
NAME READY REASON RESTARTS AGE
my-nginx-gy1ij 1/1 Running 0 1m
my-nginx-yv5cn 1/1 Running 0 1m
{% endhighlight %}
```
We can retrieve a lot more information about each of these pods using `kubectl describe pod`. For example:
{% highlight console %}
```shell
$ kubectl describe pod my-nginx-gy1ij
Name: my-nginx-gy1ij
@ -89,7 +89,7 @@ Events:
Thu, 09 Jul 2015 15:33:06 -0700 Thu, 09 Jul 2015 15:33:06 -0700 1 {kubelet kubernetes-minion-y3vk} spec.containers{nginx} created Created with docker id 56d7a7b14dac
Thu, 09 Jul 2015 15:33:07 -0700 Thu, 09 Jul 2015 15:33:07 -0700 1 {kubelet kubernetes-minion-y3vk} spec.containers{nginx} started Started with docker id 56d7a7b14dac
{% endhighlight %}
```
Here you can see configuration information about the container(s) and Pod (labels, resource requirements, etc.), as well as status information about the container(s) and Pod (state, readiness, restart count, events, etc.)
@ -107,7 +107,7 @@ Lastly, you see a log of recent events related to your Pod. The system compresse
A common scenario that you can detect using events is when you've created a Pod that won't fit on any node. For example, the Pod might request more resources than are free on any node, or it might specify a label selector that doesn't match any nodes. Let's say we created the previous Replication Controller with 5 replicas (instead of 2), requesting 600 millicores instead of 500, on a four-node cluster where each (virtual) machine has 1 CPU. In that case one of the Pods will not be able to schedule. (Note that because of the cluster addon pods such as fluentd, skydns, etc., that run on each node, if we requested 1000 millicores then none of the Pods would be able to schedule.)
{% highlight console %}
```shell
$ kubectl get pods
NAME READY REASON RESTARTS AGE
@ -117,11 +117,11 @@ my-nginx-i595c 0/1 Running 0 8s
my-nginx-iichp 0/1 Running 0 8s
my-nginx-tc2j9 0/1 Running 0 8s
{% endhighlight %}
```
To find out why the my-nginx-9unp9 pod is not running, we can use `kubectl describe pod` on the pending Pod and look at its events:
{% highlight console %}
```shell
$ kubectl describe pod my-nginx-9unp9
Name: my-nginx-9unp9
@ -146,7 +146,7 @@ Events:
FirstSeen LastSeen Count From SubobjectPath Reason Message
Thu, 09 Jul 2015 23:56:21 -0700 Fri, 10 Jul 2015 00:01:30 -0700 21 {scheduler } failedScheduling Failed for reason PodFitsResources and possibly others
{% endhighlight %}
```
Here you can see the event generated by the scheduler saying that the Pod failed to schedule for reason `PodFitsResources` (and possibly others). `PodFitsResources` means there were not enough resources for the Pod on any of the nodes. Due to the way the event is generated, there may be other reasons as well, hence "and possibly others."
@ -172,7 +172,7 @@ To see events from all namespaces, you can use the `--all-namespaces` argument.
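For example, a one-liner that lists recent events across every namespace:
```shell
$ kubectl get events --all-namespaces
```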
In addition to `kubectl describe pod`, another way to get extra information about a pod (beyond what is provided by `kubectl get pod`) is to pass the `-o yaml` output format flag to `kubectl get pod`. This will give you, in YAML format, even more information than `kubectl describe pod`--essentially all of the information the system has about the Pod. Here you will see things like annotations (key-value metadata without the label restrictions, used internally by Kubernetes system components), restart policy, ports, and volumes.
{% highlight yaml %}
```yaml
$ kubectl get pod my-nginx-i595c -o yaml
apiVersion: v1
@ -234,13 +234,13 @@ status:
podIP: 10.244.3.4
startTime: 2015-07-10T06:56:21Z
{% endhighlight %}
```
## Example: debugging a down/unreachable node
Sometimes when debugging it can be useful to look at the status of a node -- for example, because you've noticed strange behavior of a Pod that's running on the node, or to find out why a Pod won't schedule onto the node. As with Pods, you can use `kubectl describe node` and `kubectl get node -o yaml` to retrieve detailed information about nodes. For example, here's what you'll see if a node is down (disconnected from the network, or kubelet dies and won't restart, etc.). Notice the events that show the node is NotReady, and also notice that the pods are no longer running (they are evicted after five minutes of NotReady status).
{% highlight console %}
```shell
$ kubectl get nodes
NAME LABELS STATUS
@ -321,7 +321,7 @@ status:
osImage: Debian GNU/Linux 7 (wheezy)
systemUUID: ABE5F6B4-D44B-108B-C46A-24CCE16C8B6E
{% endhighlight %}
```
## What's next?

View File

@ -19,7 +19,7 @@ Here is an example Job config. It computes π to 2000 places and prints it out.
It takes around 10s to complete.
<!-- BEGIN MUNGE: EXAMPLE job.yaml -->
{% highlight yaml %}
```yaml
apiVersion: extensions/v1beta1
kind: Job
@ -41,23 +41,23 @@ spec:
command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
restartPolicy: Never
{% endhighlight %}
```
[Download example](job.yaml)
<!-- END MUNGE: EXAMPLE job.yaml -->
Run the example job by downloading the example file and then running this command:
{% highlight console %}
```shell
$ kubectl create -f ./job.yaml
jobs/pi
{% endhighlight %}
```
Check on the status of the job using this command:
{% highlight console %}
```shell
$ kubectl describe jobs/pi
Name: pi
@ -74,31 +74,31 @@ Events:
1m 1m 1 {job } SuccessfulCreate Created pod: pi-z548a
{% endhighlight %}
```
To view completed pods of a job, use `kubectl get pods --show-all`; the `--show-all` flag includes completed pods in the listing.
To list all the pods that belong to a job in a machine-readable form, you can use a command like this:
{% highlight console %}
```shell
$ pods=$(kubectl get pods --selector=app=pi --output=jsonpath={.items..metadata.name})
echo $pods
pi-aiw0a
{% endhighlight %}
```
Here, the selector is the same as the selector for the job. The `--output=jsonpath` option specifies an expression
that just gets the name from each pod in the returned list.
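The same technique works for other fields. For instance, a sketch that prints each matching pod's phase instead of its name (`status.phase` is a standard pod field):
```shell
$ kubectl get pods --selector=app=pi --output=jsonpath={.items..status.phase}
```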
View the standard output of one of the pods:
{% highlight console %}
```shell
$ kubectl logs pi-aiw0a
3.1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679821480865132823066470938446095505822317253594081284811174502841027019385211055596446229489549303819644288109756659334461284756482337867831652712019091456485669234603486104543266482133936072602491412737245870066063155881748815209209628292540917153643678925903600113305305488204665213841469519415116094330572703657595919530921861173819326117931051185480744623799627495673518857527248912279381830119491298336733624406566430860213949463952247371907021798609437027705392171762931767523846748184676694051320005681271452635608277857713427577896091736371787214684409012249534301465495853710507922796892589235420199561121290219608640344181598136297747713099605187072113499999983729780499510597317328160963185950244594553469083026425223082533446850352619311881710100031378387528865875332083814206171776691473035982534904287554687311595628638823537875937519577818577805321712268066130019278766111959092164201989380952572010654858632788659361533818279682303019520353018529689957736225994138912497217752834791315155748572424541506959508295331168617278558890750983817546374649393192550604009277016711390098488240128583616035637076601047101819429555961989467678374494482553797747268471040475346462080466842590694912933136770289891521047521620569660240580381501935112533824300355876402474964732639141992726042699227967823547816360093417216412199245863150302861829745557067498385054945885869269956909272107975093029553211653449872027559602364806654991198818347977535663698074265425278625518184175746728909777727938000816470600161452491921732172147723501414419735685481613611573525521334757418494684385233239073941433345477624168625189835694855620992192221842725502542568876717904946016534668049886272327917860857843838279679766814541009538837863609506800642251252051173929848960841284886269456042419652850222106611863067442786220391949450471237137869609563643719172874677646575739624138908658326459958133904780275901
{% endhighlight %}
```
## Writing a Job Spec

View File

The result object is printed using its String() function.
Given the input:
{% highlight json %}
```json
{
"kind": "List",
@ -50,7 +50,7 @@ Given the input:
]
}
{% endhighlight %}
```
Function | Description | Example | Result
---------|--------------------|--------------------|------------------

View File

@ -22,7 +22,7 @@ http://issue.k8s.io/1755
The file below contains a `current-context`, which will be used by default by clients that use the file to connect to a cluster. Thus, this kubeconfig file has more information in it than we will necessarily use in a given session. You can see it defines many clusters, and users associated with those clusters. The context itself is associated with both a cluster AND a user.
{% highlight yaml %}
```yaml
current-context: federal-context
apiVersion: v1
clusters:
@ -60,7 +60,7 @@ users:
user:
client-certificate: path/to/my/client/cert
client-key: path/to/my/client/key
{% endhighlight %}
```
### Building your own kubeconfig file
@ -126,18 +126,18 @@ See [kubectl/kubectl_config.md](kubectl/kubectl_config) for help.
### Example
{% highlight console %}
```shell
$ kubectl config set-credentials myself --username=admin --password=secret
$ kubectl config set-cluster local-server --server=http://localhost:8080
$ kubectl config set-context default-context --cluster=local-server --user=myself
$ kubectl config use-context default-context
$ kubectl config set contexts.default-context.namespace the-right-prefix
$ kubectl config view
{% endhighlight %}
```
produces this output
{% highlight yaml %}
```yaml
apiVersion: v1
clusters:
- cluster:
@ -157,11 +157,11 @@ users:
user:
password: secret
username: admin
{% endhighlight %}
```
and a kubeconfig file that looks like this
{% highlight yaml %}
```yaml
apiVersion: v1
clusters:
- cluster:
@ -181,11 +181,11 @@ users:
user:
password: secret
username: admin
{% endhighlight %}
```
#### Commands for the example file
{% highlight console %}
```shell
$ kubectl config set preferences.colors true
$ kubectl config set-cluster cow-cluster --server=http://cow.org:8080 --api-version=v1
$ kubectl config set-cluster horse-cluster --server=https://horse.org:4443 --certificate-authority=path/to/my/cafile
@ -195,7 +195,7 @@ $ kubectl config set-credentials green-user --client-certificate=path/to/my/clie
$ kubectl config set-context queen-anne-context --cluster=pig-cluster --user=black-user --namespace=saw-ns
$ kubectl config set-context federal-context --cluster=horse-cluster --user=green-user --namespace=chisel-ns
$ kubectl config use-context federal-context
{% endhighlight %}
```
### Final notes for tying it all together

View File

@ -146,19 +146,19 @@ To define custom columns and output only the details that you want into a table,
* Inline:
{% highlight console %}
```shell
$ kubectl get pods <pod-name> -o=custom-columns=NAME:.metadata.name,RSRC:.metadata.resourceVersion
{% endhighlight %}
```
* Template file:
{% highlight console %}
```shell
$ kubectl get pods <pod-name> -o=custom-columns-file=template.txt
{% endhighlight %}
```
where the `template.txt` file contains:
@ -171,12 +171,12 @@ To define custom columns and output only the details that you want into a table,
The result of running either command is:
{% highlight console %}
```shell
NAME RSRC
submit-queue 610995
{% endhighlight %}
```
### Sorting list objects

View File

@ -6,14 +6,14 @@ Labels are intended to be used to specify identifying attributes of objects that
Labels can be used to organize and to select subsets of objects. Labels can be attached to objects at creation time and subsequently added and modified at any time.
Each object can have a set of key/value labels defined. Each key must be unique for a given object.
{% highlight json %}
```json
"labels": {
"key1" : "value1",
"key2" : "value2"
}
{% endhighlight %}
```
We'll eventually index and reverse-index labels for efficient queries and watches, use them to sort and group in UIs and CLIs, etc. We don't want to pollute labels with non-identifying, especially large and/or structured, data. Non-identifying information should be recorded using [annotations](annotations).
@ -106,35 +106,35 @@ LIST and WATCH operations may specify label selectors to filter the sets of obje
Both label selector styles can be used to list or watch resources via a REST client. For example, targeting the `apiserver` with `kubectl` and using _equality-based_ requirements, one may write:
{% highlight console %}
```shell
$ kubectl get pods -l environment=production,tier=frontend
{% endhighlight %}
```
or using _set-based_ requirements:
{% highlight console %}
```shell
$ kubectl get pods -l 'environment in (production),tier in (frontend)'
{% endhighlight %}
```
As already mentioned, _set-based_ requirements are more expressive. For instance, they can implement the _OR_ operator on values:
{% highlight console %}
```shell
$ kubectl get pods -l 'environment in (production, qa)'
{% endhighlight %}
```
or restrict negative matching via the _exists_ operator:
{% highlight console %}
```shell
$ kubectl get pods -l 'environment,environment notin (frontend)'
{% endhighlight %}
```
### Set references in API objects
@ -146,22 +146,22 @@ The set of pods that a `service` targets is defined with a label selector. Simil
Label selectors for both objects are defined in `json` or `yaml` files using maps, and only _equality-based_ requirement selectors are supported:
{% highlight json %}
```json
"selector": {
"component" : "redis",
}
{% endhighlight %}
```
or
{% highlight yaml %}
```yaml
selector:
component: redis
{% endhighlight %}
```
This selector (in `json` or `yaml` format, respectively) is equivalent to `component=redis` or `component in (redis)`.
@ -169,7 +169,7 @@ this selector (respectively in `json` or `yaml` format) is equivalent to `compon
Newer resources, such as [job](jobs), support _set-based_ requirements as well.
{% highlight yaml %}
```yaml
selector:
matchLabels:
@ -178,7 +178,7 @@ selector:
- {key: tier, operator: In, values: [cache]}
- {key: environment, operator: NotIn, values: [dev]}
{% endhighlight %}
```
`matchLabels` is a map of `{key,value}` pairs. A single `{key,value}` in the `matchLabels` map is equivalent to an element of `matchExpressions`, whose `key` field is "key", the `operator` is "In", and the `values` array contains only "value". `matchExpressions` is a list of pod selector requirements. Valid operators include In, NotIn, Exists, and DoesNotExist. The values set must be non-empty in the case of In and NotIn. All of the requirements, from both `matchLabels` and `matchExpressions` are ANDed together -- they must all be satisfied in order to match.
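To make the equivalence concrete, a sketch using the command-line selectors shown earlier on this page (the `tier=cache` label is taken from the example above):
```shell
# These two queries select the same pods: a matchLabels entry {tier: cache}
# behaves exactly like {key: tier, operator: In, values: [cache]}.
$ kubectl get pods -l tier=cache
$ kubectl get pods -l 'tier in (cache)'
```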

View File

@ -6,7 +6,7 @@ This example shows two types of pod [health checks](../production-pods.html#live
The [exec-liveness.yaml](exec-liveness.yaml) demonstrates the container execution check.
{% highlight yaml %}
```yaml
livenessProbe:
exec:
command:
@ -14,29 +14,29 @@ The [exec-liveness.yaml](exec-liveness.yaml) demonstrates the container executio
- /tmp/health
initialDelaySeconds: 15
timeoutSeconds: 1
{% endhighlight %}
```
Kubelet executes the command `cat /tmp/health` in the container and reports failure if the command returns a non-zero exit code.
Note that the container removes the `/tmp/health` file after 10 seconds,
{% highlight sh %}
```shell
echo ok > /tmp/health; sleep 10; rm -rf /tmp/health; sleep 600
{% endhighlight %}
```
so when Kubelet executes the health check 15 seconds (defined by initialDelaySeconds) after the container started, the check would fail.
The [http-liveness.yaml](http-liveness.yaml) demonstrates the HTTP check.
{% highlight yaml %}
```yaml
livenessProbe:
httpGet:
path: /healthz
port: 8080
initialDelaySeconds: 15
timeoutSeconds: 1
{% endhighlight %}
```
The Kubelet sends an HTTP request to the specified path and port to perform the health check. If you take a look at image/server.go, you will see that the server starts to respond with an error code 500 after 10 seconds, so the check fails. By default the Kubelet sends the probe to the container's IP address; a different address can be specified with the `host` field of the httpGet probe. If the container listens on `127.0.0.1`, `host` should be specified as `127.0.0.1`. In general, if the container listens on its IP address or on all interfaces (0.0.0.0), there is no need to specify `host` as part of the httpGet probe.
@ -46,38 +46,38 @@ This [guide](../walkthrough/k8s201.html#health-checking) has more information on
To show the health check is actually working, first create the pods:
{% highlight console %}
```shell
$ kubectl create -f docs/user-guide/liveness/exec-liveness.yaml
$ kubectl create -f docs/user-guide/liveness/http-liveness.yaml
{% endhighlight %}
```
Check the status of the pods once they are created:
{% highlight console %}
```shell
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
[...]
liveness-exec 1/1 Running 0 13s
liveness-http 1/1 Running 0 13s
{% endhighlight %}
```
Check the status half a minute later, and you will see that the container restart count has been incremented:
{% highlight console %}
```shell
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
[...]
liveness-exec 1/1 Running 1 36s
liveness-http 1/1 Running 1 36s
{% endhighlight %}
```
At the bottom of the *kubectl describe* output there are messages indicating that the liveness probes have failed, and the containers have been killed and recreated.
{% highlight console %}
```shell
$ kubectl describe pods liveness-exec
[...]
Sat, 27 Jun 2015 13:43:03 +0200 Sat, 27 Jun 2015 13:44:34 +0200 4 {kubelet kubernetes-minion-6fbi} spec.containers{liveness} unhealthy Liveness probe failed: cat: can't open '/tmp/health': No such file or directory
Sat, 27 Jun 2015 13:44:44 +0200 Sat, 27 Jun 2015 13:44:44 +0200 1 {kubelet kubernetes-minion-6fbi} spec.containers{liveness} killing Killing with docker id 65b52d62c635
Sat, 27 Jun 2015 13:44:44 +0200 Sat, 27 Jun 2015 13:44:44 +0200 1 {kubelet kubernetes-minion-6fbi} spec.containers{liveness} created Created with docker id ed6bb004ee10
Sat, 27 Jun 2015 13:44:44 +0200 Sat, 27 Jun 2015 13:44:44 +0200 1 {kubelet kubernetes-minion-6fbi} spec.containers{liveness} started Started with docker id ed6bb004ee10
{% endhighlight %}
```

View File

@ -5,7 +5,7 @@ This example shows two types of pod [health checks](../production-pods.html#live
The [exec-liveness.yaml](exec-liveness.yaml) demonstrates the container execution check.
{% highlight yaml %}
```yaml
livenessProbe:
exec:
@ -15,24 +15,24 @@ The [exec-liveness.yaml](exec-liveness.yaml) demonstrates the container executio
initialDelaySeconds: 15
timeoutSeconds: 1
{% endhighlight %}
```
Kubelet executes the command `cat /tmp/health` in the container and reports failure if the command returns a non-zero exit code.
Note that the container removes the `/tmp/health` file after 10 seconds,
{% highlight sh %}
```shell
echo ok > /tmp/health; sleep 10; rm -rf /tmp/health; sleep 600
{% endhighlight %}
```
so when the Kubelet executes the health check 15 seconds after the container starts (as defined by `initialDelaySeconds`), the check fails.
The [http-liveness.yaml](http-liveness.yaml) demonstrates the HTTP check.
{% highlight yaml %}
```yaml
livenessProbe:
httpGet:
@ -41,7 +41,7 @@ The [http-liveness.yaml](http-liveness.yaml) demonstrates the HTTP check.
initialDelaySeconds: 15
timeoutSeconds: 1
{% endhighlight %}
```
The Kubelet sends an HTTP GET request to the specified path and port to perform the health check. If you take a look at image/server.go, you will see that the server starts to respond with status code 500 after 10 seconds, so the check fails. By default, the Kubelet sends the probe to the container's IP address; a different host can be specified with the `host` field of the httpGet probe. If the container listens on `127.0.0.1`, `host` should be set to `127.0.0.1`. In general, if the container listens on its own IP address or on all interfaces (0.0.0.0), there is no need to specify `host` in the httpGet probe.
@ -51,16 +51,16 @@ This [guide](../walkthrough/k8s201.html#health-checking) has more information on
To show the health check is actually working, first create the pods:
{% highlight console %}
```shell
$ kubectl create -f docs/user-guide/liveness/exec-liveness.yaml
$ kubectl create -f docs/user-guide/liveness/http-liveness.yaml
{% endhighlight %}
```
Check the status of the pods once they are created:
{% highlight console %}
```shell
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
@ -68,11 +68,11 @@ NAME READY STATUS RESTARTS
liveness-exec 1/1 Running 0 13s
liveness-http 1/1 Running 0 13s
{% endhighlight %}
```
Check the status again half a minute later, and you will see that the container restart count has been incremented:
{% highlight console %}
```shell
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
@ -80,11 +80,11 @@ NAME READY STATUS RESTARTS
liveness-exec 1/1 Running 1 36s
liveness-http 1/1 Running 1 36s
{% endhighlight %}
```
At the bottom of the *kubectl describe* output there are messages indicating that the liveness probes have failed, and the containers have been killed and recreated.
{% highlight console %}
```shell
$ kubectl describe pods liveness-exec
[...]
@ -93,7 +93,7 @@ Sat, 27 Jun 2015 13:44:44 +0200 Sat, 27 Jun 2015 13:44:44 +0200 1 {kube
Sat, 27 Jun 2015 13:44:44 +0200 Sat, 27 Jun 2015 13:44:44 +0200 1 {kubelet kubernetes-minion-6fbi} spec.containers{liveness} created Created with docker id ed6bb004ee10
Sat, 27 Jun 2015 13:44:44 +0200 Sat, 27 Jun 2015 13:44:44 +0200 1 {kubelet kubernetes-minion-6fbi} spec.containers{liveness} started Started with docker id ed6bb004ee10
{% endhighlight %}
```

View File

@ -15,7 +15,7 @@ output every second. (You can find different pod specifications [here](logging-d
<!-- BEGIN MUNGE: EXAMPLE ../../examples/blog-logging/counter-pod.yaml -->
{% highlight yaml %}
```yaml
apiVersion: v1
kind: Pod
metadata:
@ -26,21 +26,21 @@ spec:
image: ubuntu:14.04
args: [bash, -c,
'for ((i = 0; ; i++)); do echo "$i: $(date)"; sleep 1; done']
{% endhighlight %}
```
[Download example](../../examples/blog-logging/counter-pod.yaml)
<!-- END MUNGE: EXAMPLE ../../examples/blog-logging/counter-pod.yaml -->
we can run the pod:
{% highlight console %}
```shell
$ kubectl create -f ./counter-pod.yaml
pods/counter
{% endhighlight %}
```
and then fetch the logs:
{% highlight console %}
```shell
$ kubectl logs counter
0: Tue Jun 2 21:37:31 UTC 2015
1: Tue Jun 2 21:37:32 UTC 2015
@ -49,12 +49,12 @@ $ kubectl logs counter
4: Tue Jun 2 21:37:35 UTC 2015
5: Tue Jun 2 21:37:36 UTC 2015
...
{% endhighlight %}
```
If a pod has more than one container, you need to specify which container's log files should be fetched, e.g.:
{% highlight console %}
```shell
$ kubectl logs kube-dns-v3-7r1l9 etcd
2015/06/23 00:43:10 etcdserver: start to snapshot (applied: 30003, lastsnap: 20002)
2015/06/23 00:43:10 etcdserver: compacted log at index 30003
@ -70,7 +70,7 @@ $ kubectl logs kube-dns-v3-7r1l9 etcd
2015/06/23 04:51:03 etcdserver: compacted log at index 60006
2015/06/23 04:51:03 etcdserver: saved snapshot at index 60006
...
{% endhighlight %}
```
## Cluster level logging to Google Cloud Logging

View File

@ -9,7 +9,7 @@ You've deployed your application and exposed it via a service. Now what? Kuberne
Many applications require multiple resources to be created, such as a Replication Controller and a Service. Management of multiple resources can be simplified by grouping them together in the same file (separated by `---` in YAML). For example:
{% highlight yaml %}
```yaml
apiVersion: v1
kind: Service
@ -41,35 +41,35 @@ spec:
ports:
- containerPort: 80
{% endhighlight %}
```
Multiple resources can be created the same way as a single resource:
{% highlight console %}
```shell
$ kubectl create -f ./nginx-app.yaml
services/my-nginx-svc
replicationcontrollers/my-nginx
{% endhighlight %}
```
The resources will be created in the order they appear in the file. Therefore, it's best to specify the service first, since that will ensure the scheduler can spread the pods associated with the service as they are created by the replication controller(s).
`kubectl create` also accepts multiple `-f` arguments:
{% highlight console %}
```shell
$ kubectl create -f ./nginx-svc.yaml -f ./nginx-rc.yaml
{% endhighlight %}
```
And a directory can be specified rather than or in addition to individual files:
{% highlight console %}
```shell
$ kubectl create -f ./nginx/
{% endhighlight %}
```
`kubectl` will read any files with suffixes `.yaml`, `.yml`, or `.json`.
@ -77,46 +77,46 @@ It is a recommended practice to put resources related to the same microservice o
A URL can also be specified as a configuration source, which is handy for deploying directly from configuration files checked into GitHub:
{% highlight console %}
```shell
$ kubectl create -f https://raw.githubusercontent.com/GoogleCloudPlatform/kubernetes/master/docs/user-guide/replication.yaml
replicationcontrollers/nginx
{% endhighlight %}
```
## Bulk operations in kubectl
Resource creation isn't the only operation that `kubectl` can perform in bulk. It can also extract resource names from configuration files in order to perform other operations, in particular to delete the same resources you created:
{% highlight console %}
```shell
$ kubectl delete -f ./nginx/
replicationcontrollers/my-nginx
services/my-nginx-svc
{% endhighlight %}
```
In the case of just two resources, it's also easy to specify both on the command line using the resource/name syntax:
{% highlight console %}
```shell
$ kubectl delete replicationcontrollers/my-nginx services/my-nginx-svc
{% endhighlight %}
```
For larger numbers of resources, one can use labels to filter resources. The selector is specified using `-l`:
{% highlight console %}
```shell
$ kubectl delete all -lapp=nginx
replicationcontrollers/my-nginx
services/my-nginx-svc
{% endhighlight %}
```
Because `kubectl` outputs resource names in the same syntax it accepts, it's easy to chain operations using `$()` or `xargs`:
{% highlight console %}
```shell
$ kubectl get $(kubectl create -f ./nginx/ | grep my-nginx)
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
@ -124,7 +124,7 @@ my-nginx nginx nginx app=nginx 2
NAME LABELS SELECTOR IP(S) PORT(S)
my-nginx-svc app=nginx app=nginx 10.0.152.174 80/TCP
{% endhighlight %}
```
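The same kind of chain can be written with `xargs`, for example to describe each resource right after creating it (a rough sketch under the same assumptions as above):

```shell
# Each matching name is passed to a separate kubectl describe invocation
$ kubectl create -f ./nginx/ | grep my-nginx | xargs -n1 kubectl describe
```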
## Using labels effectively
@ -132,39 +132,39 @@ The examples we've used so far apply at most a single label to any resource. The
For instance, different applications would use different values for the `app` label, but a multi-tier application, such as the [guestbook example](../../examples/guestbook/), would additionally need to distinguish each tier. The frontend could carry the following labels:
{% highlight yaml %}
```yaml
labels:
app: guestbook
tier: frontend
{% endhighlight %}
```
while the Redis master and slave would have different `tier` labels, and perhaps even an additional `role` label:
{% highlight yaml %}
```yaml
labels:
app: guestbook
tier: backend
role: master
{% endhighlight %}
```
and
{% highlight yaml %}
```yaml
labels:
app: guestbook
tier: backend
role: slave
{% endhighlight %}
```
The labels allow us to slice and dice our resources along any dimension specified by a label:
{% highlight console %}
```shell
$ kubectl create -f ./guestbook-fe.yaml -f ./redis-master.yaml -f ./redis-slave.yaml
replicationcontrollers/guestbook-fe
@ -185,47 +185,47 @@ NAME READY STATUS RESTARTS AGE
guestbook-redis-slave-2q2yf 1/1 Running 0 3m
guestbook-redis-slave-qgazl 1/1 Running 0 3m
{% endhighlight %}
```
## Canary deployments
Another scenario where multiple labels are needed is to distinguish deployments of different releases or configurations of the same component. For example, it is common practice to deploy a *canary* of a new application release (specified via image tag) side by side with the previous release so that the new release can receive live production traffic before fully rolling it out. For instance, a new release of the guestbook frontend might carry the following labels:
{% highlight yaml %}
```yaml
labels:
app: guestbook
tier: frontend
track: canary
{% endhighlight %}
```
and the primary, stable release would have a different value of the `track` label, so that the sets of pods controlled by the two replication controllers would not overlap:
{% highlight yaml %}
```yaml
labels:
app: guestbook
tier: frontend
track: stable
{% endhighlight %}
```
The frontend service would span both sets of replicas by selecting the common subset of their labels, omitting the `track` label:
{% highlight yaml %}
```yaml
selector:
app: guestbook
tier: frontend
{% endhighlight %}
```
## Updating labels
Sometimes existing pods and other resources need to be relabeled before creating new resources. This can be done with `kubectl label`. For example:
{% highlight console %}
```shell
$ kubectl label pods -lapp=nginx tier=fe
NAME READY STATUS RESTARTS AGE
@ -246,13 +246,13 @@ my-nginx-v4-mde6m 1/1 Running 0 18m fe
my-nginx-v4-sh6m8 1/1 Running 0 19m fe
my-nginx-v4-wfof4 1/1 Running 0 16m fe
{% endhighlight %}
```
## Scaling your application
When load on your application grows or shrinks, it's easy to scale with `kubectl`. For instance, to increase the number of nginx replicas from 2 to 3, do:
{% highlight console %}
```shell
$ kubectl scale rc my-nginx --replicas=3
scaled
@ -262,7 +262,7 @@ my-nginx-1jgkf 1/1 Running 0 3m
my-nginx-divi2 1/1 Running 0 1h
my-nginx-o0ef1 1/1 Running 0 1h
{% endhighlight %}
```
## Updating your application without a service outage
@ -272,7 +272,7 @@ To update a service without an outage, `kubectl` supports what is called ['rolli
Let's say you were running version 1.7.9 of nginx:
{% highlight yaml %}
```yaml
apiVersion: v1
kind: ReplicationController
@ -291,20 +291,20 @@ spec:
ports:
- containerPort: 80
{% endhighlight %}
```
To update to version 1.9.1, you can use [`kubectl rolling-update --image`](/{{page.version}}/docs/design/simple-rolling-update):
{% highlight console %}
```shell
$ kubectl rolling-update my-nginx --image=nginx:1.9.1
Creating my-nginx-ccba8fbd8cc8160970f63f9a2696fc46
{% endhighlight %}
```
In another window, you can see that `kubectl` added a `deployment` label to the pods, whose value is a hash of the configuration, to distinguish the new pods from the old:
{% highlight console %}
```shell
$ kubectl get pods -lapp=nginx -Ldeployment
NAME READY STATUS RESTARTS AGE DEPLOYMENT
@ -315,11 +315,11 @@ my-nginx-divi2 1/1 Running 0
my-nginx-o0ef1 1/1 Running 0 2h 2d1d7a8f682934a254002b56404b813e
my-nginx-q6all 1/1 Running 0 8m 2d1d7a8f682934a254002b56404b813e
{% endhighlight %}
```
`kubectl rolling-update` reports progress as it runs:
{% highlight console %}
```shell
Updating my-nginx replicas: 4, my-nginx-ccba8fbd8cc8160970f63f9a2696fc46 replicas: 1
At end of loop: my-nginx replicas: 4, my-nginx-ccba8fbd8cc8160970f63f9a2696fc46 replicas: 1
@ -339,11 +339,11 @@ Update succeeded. Deleting old controller: my-nginx
Renaming my-nginx-ccba8fbd8cc8160970f63f9a2696fc46 to my-nginx
my-nginx
{% endhighlight %}
```
If you encounter a problem, you can stop the rolling update midway and revert to the previous version using `--rollback`:
{% highlight console %}
```shell
$ kubectl rolling-update my-nginx --image=nginx:1.9.1 --rollback
Found existing update in progress (my-nginx-ccba8fbd8cc8160970f63f9a2696fc46), resuming.
@ -352,13 +352,13 @@ Stopping my-nginx-02ca3e87d8685813dbe1f8c164a46f02 replicas: 1 -> 0
Update succeeded. Deleting my-nginx-ccba8fbd8cc8160970f63f9a2696fc46
my-nginx
{% endhighlight %}
```
This is one example where the immutability of containers is a huge asset.
If you need to update more than just the image (e.g., command arguments, environment variables), you can create a new replication controller, with a new name and distinguishing label value, such as:
{% highlight yaml %}
```yaml
apiVersion: v1
kind: ReplicationController
@ -382,11 +382,11 @@ spec:
ports:
- containerPort: 80
{% endhighlight %}
```
and roll it out:
{% highlight console %}
```shell
$ kubectl rolling-update my-nginx -f ./nginx-rc.yaml
Creating my-nginx-v4
@ -408,7 +408,7 @@ At end of loop: my-nginx replicas: 0, my-nginx-v4 replicas: 5
Update succeeded. Deleting my-nginx
my-nginx-v4
{% endhighlight %}
```
You can also run the [update demo](update-demo/) to see a visual representation of the rolling update process.
@ -416,7 +416,7 @@ You can also run the [update demo](update-demo/) to see a visual representation
Sometimes it's necessary to make narrow, non-disruptive updates to resources you've created. For instance, you might want to add an [annotation](annotations) with a description of your object. That's easiest to do with `kubectl patch`:
{% highlight console %}
```shell
$ kubectl patch rc my-nginx-v4 -p '{"metadata": {"annotations": {"description": "my frontend running nginx"}}}'
my-nginx-v4
@ -428,13 +428,13 @@ metadata:
description: my frontend running nginx
...
{% endhighlight %}
```
The patch is specified using JSON.
For more significant changes, you can `get` the resource, edit it, and then `replace` the resource with the updated version:
{% highlight console %}
```shell
$ kubectl get rc my-nginx-v4 -o yaml > /tmp/nginx.yaml
$ vi /tmp/nginx.yaml
@ -442,7 +442,7 @@ $ kubectl replace -f /tmp/nginx.yaml
replicationcontrollers/my-nginx-v4
$ rm /tmp/nginx.yaml
{% endhighlight %}
```
The system ensures that you don't clobber changes made by other users or components by confirming that the `resourceVersion` doesn't differ from the version you edited. If you want to update regardless of other changes, remove the `resourceVersion` field when you edit the resource. However, if you do this, don't use your original configuration file as the source since additional fields most likely were set in the live state.
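As a rough sketch (assuming the `/tmp/nginx.yaml` file from the previous step and GNU `sed`), you could strip the field before replacing:

```shell
# Drop the resourceVersion line so the replace is not rejected on a version conflict
$ sed -i '/resourceVersion/d' /tmp/nginx.yaml
$ kubectl replace -f /tmp/nginx.yaml
```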
@ -450,13 +450,13 @@ The system ensures that you don't clobber changes made by other users or compone
In some cases, you may need to update resource fields that cannot be updated once initialized, or you may just want to make a recursive change immediately, such as to fix broken pods created by a replication controller. To change such fields, use `replace --force`, which deletes and re-creates the resource. In this case, you can simply modify your original configuration file:
{% highlight console %}
```shell
$ kubectl replace -f ./nginx-rc.yaml --force
replicationcontrollers/my-nginx-v4
replicationcontrollers/my-nginx-v4
{% endhighlight %}
```
## What's next?

View File

@ -31,14 +31,14 @@ for namespaces](/{{page.version}}/docs/admin/namespaces)
You can list the current namespaces in a cluster using:
{% highlight console %}
```shell
$ kubectl get namespaces
NAME LABELS STATUS
default <none> Active
kube-system <none> Active
{% endhighlight %}
```
Kubernetes starts with two initial namespaces:
* `default` The default namespace for objects with no other namespace
@ -50,12 +50,12 @@ To temporarily set the namespace for a request, use the `--namespace` flag.
For example:
{% highlight console %}
```shell
$ kubectl --namespace=<insert-namespace-name-here> run nginx --image=nginx
$ kubectl --namespace=<insert-namespace-name-here> get pods
{% endhighlight %}
```
### Setting the namespace preference
@ -64,19 +64,19 @@ context.
First, get your current context:
{% highlight console %}
```shell
$ export CONTEXT=$(kubectl config view | grep current-context | awk '{print $2}')
{% endhighlight %}
```
Then update the default namespace:
{% highlight console %}
```shell
$ kubectl config set-context $CONTEXT --namespace=<insert-namespace-name-here>
{% endhighlight %}
```
## Namespaces and DNS

View File

@ -62,7 +62,7 @@ The reclaim policy for a `PersistentVolume` tells the cluster what to do with th
Each PV contains a spec and status, which is the specification and status of the volume.
{% highlight yaml %}
```yaml
apiVersion: v1
kind: PersistentVolume
@ -78,7 +78,7 @@ Each PV contains a spec and status, which is the specification and status of the
path: /tmp
server: 172.17.0.2
{% endhighlight %}
```
### Capacity
@ -129,7 +129,7 @@ The CLI will show the name of the PVC bound to the PV.
Each PVC contains a spec and status, which is the specification and status of the claim.
{% highlight yaml %}
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
@ -142,7 +142,7 @@ spec:
requests:
storage: 8Gi
{% endhighlight %}
```
### Access Modes
@ -156,7 +156,7 @@ Claims, like pods, can request specific quantities of a resource. In this case,
Pods access storage by using the claim as a volume. Claims must exist in the same namespace as the pod using the claim. The cluster finds the claim in the pod's namespace and uses it to get the `PersistentVolume` backing the claim. The volume is then mounted to the host and into the pod.
{% highlight yaml %}
```yaml
kind: Pod
apiVersion: v1
@ -174,7 +174,7 @@ spec:
persistentVolumeClaim:
claimName: myclaim
{% endhighlight %}
```

View File

@ -21,23 +21,23 @@ support local storage on the host at this time. There is no guarantee your pod
{% highlight console %}
```shell
# This will be nginx's webroot
$ mkdir /tmp/data01
$ echo 'I love Kubernetes storage!' > /tmp/data01/index.html
{% endhighlight %}
```
PVs are created by posting them to the API server.
{% highlight console %}
```shell
$ kubectl create -f docs/user-guide/persistent-volumes/volumes/local-01.yaml
NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON
pv0001 type=local 10737418240 RWO Available
{% endhighlight %}
```
## Requesting storage
@ -46,7 +46,7 @@ They just know they can rely on their claim to storage and can manage its lifecy
Claims must be created in the same namespace as the pods that use them.
{% highlight console %}
```shell
$ kubectl create -f docs/user-guide/persistent-volumes/claims/claim-01.yaml
@ -66,13 +66,13 @@ $ kubectl get pv
NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON
pv0001 type=local 10737418240 RWO Bound default/myclaim-1
{% endhighlight %}
```
## Using your claim as a volume
Claims are used as volumes in pods. Kubernetes uses the claim to look up its bound PV. The PV is then exposed to the pod.
{% highlight console %}
```shell
$ kubectl create -f docs/user-guide/persistent-volumes/simpletest/pod.yaml
@ -86,19 +86,19 @@ NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR
frontendservice 10.0.0.241 <none> 3000/TCP name=frontendhttp 1d
kubernetes 10.0.0.2 <none> 443/TCP <none> 2d
{% endhighlight %}
```
## Next steps
You should be able to query your service endpoint and see what content nginx is serving. A "forbidden" error might mean you need to disable SELinux (run `setenforce 0`).
{% highlight console %}
```shell
$ curl 10.0.0.241:3000
I love Kubernetes storage!
{% endhighlight %}
```
Hopefully this simple guide is enough to get you started with PersistentVolumes. If you have any questions, join the team on [Slack](../../troubleshooting.html#slack) and ask!

View File

@ -21,23 +21,23 @@ support local storage on the host at this time. There is no guarantee your pod
{% highlight console %}
```shell
# This will be nginx's webroot
$ mkdir /tmp/data01
$ echo 'I love Kubernetes storage!' > /tmp/data01/index.html
{% endhighlight %}
```
PVs are created by posting them to the API server.
{% highlight console %}
```shell
$ kubectl create -f docs/user-guide/persistent-volumes/volumes/local-01.yaml
NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON
pv0001 type=local 10737418240 RWO Available
{% endhighlight %}
```
## Requesting storage
@ -46,7 +46,7 @@ They just know they can rely on their claim to storage and can manage its lifecy
Claims must be created in the same namespace as the pods that use them.
{% highlight console %}
```shell
$ kubectl create -f docs/user-guide/persistent-volumes/claims/claim-01.yaml
@ -66,13 +66,13 @@ $ kubectl get pv
NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON
pv0001 type=local 10737418240 RWO Bound default/myclaim-1
{% endhighlight %}
```
## Using your claim as a volume
Claims are used as volumes in pods. Kubernetes uses the claim to look up its bound PV. The PV is then exposed to the pod.
{% highlight console %}
```shell
$ kubectl create -f docs/user-guide/persistent-volumes/simpletest/pod.yaml
@ -86,19 +86,19 @@ NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR
frontendservice 10.0.0.241 <none> 3000/TCP name=frontendhttp 1d
kubernetes 10.0.0.2 <none> 443/TCP <none> 2d
{% endhighlight %}
```
## Next steps
You should be able to query your service endpoint and see what content nginx is serving. A "forbidden" error might mean you need to disable SELinux (run `setenforce 0`).
{% highlight console %}
```shell
$ curl 10.0.0.241:3000
I love Kubernetes storage!
{% endhighlight %}
```
Hopefully this simple guide is enough to get you started with PersistentVolumes. If you have any questions, join the team on [Slack](../../troubleshooting.html#slack) and ask!

View File

@ -13,26 +13,26 @@ The kubectl binary doesn't have to be installed to be executable, but the rest o
The simplest way to install is to copy or move kubectl into a directory already in your PATH (e.g. `/usr/local/bin`). For example:
{% highlight console %}
```shell
# OS X
$ sudo cp kubernetes/platforms/darwin/amd64/kubectl /usr/local/bin/kubectl
# Linux
$ sudo cp kubernetes/platforms/linux/amd64/kubectl /usr/local/bin/kubectl
{% endhighlight %}
```
You also need to ensure it's executable:
{% highlight console %}
```shell
$ sudo chmod +x /usr/local/bin/kubectl
{% endhighlight %}
```
If you prefer not to copy kubectl, you need to ensure that the tool is in your PATH:
{% highlight bash %}
```shell
# OS X
export PATH=<path/to/kubernetes-directory>/platforms/darwin/amd64:$PATH
@ -40,7 +40,7 @@ export PATH=<path/to/kubernetes-directory>/platforms/darwin/amd64:$PATH
# Linux
export PATH=<path/to/kubernetes-directory>/platforms/linux/amd64:$PATH
{% endhighlight %}
```
## Configuring kubectl
@ -51,11 +51,11 @@ By default, kubectl configuration lives at `~/.kube/config`.
Check that kubectl is properly configured by getting the cluster state:
{% highlight console %}
```shell
$ kubectl cluster-info
{% endhighlight %}
```
If you see a URL response, you are ready to go.

View File

@ -11,7 +11,7 @@ The container file system only lives as long as the container does, so when a co
For example, [Redis](http://redis.io/) is a key-value cache and store, which we use in the [guestbook](../../examples/guestbook/) and other examples. We can add a volume to it to store persistent data as follows:
{% highlight yaml %}
```yaml
apiVersion: v1
kind: ReplicationController
@ -38,7 +38,7 @@ spec:
- mountPath: /redis-master-data
name: data # must match the name of the volume, above
{% endhighlight %}
```
`emptyDir` volumes live for the lifespan of the [pod](pods), which is longer than the lifespan of any one container, so if the container fails and is restarted, our storage will live on.
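For reference, the `volumes` entry that the `volumeMounts` above points at sits alongside `containers` in the pod spec and looks roughly like this (a sketch, since the hunk above cuts it off):

```yaml
volumes:
- name: data          # must match the name used by the volumeMount
  emptyDir: {}        # scratch space that lives as long as the pod
```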
@ -50,7 +50,7 @@ Many applications need credentials, such as passwords, OAuth tokens, and TLS key
Kubernetes provides a mechanism, called [*secrets*](secrets), that facilitates delivery of sensitive credentials to applications. A `Secret` is a simple resource containing a map of data. For instance, a simple secret with a username and password might look as follows:
{% highlight yaml %}
```yaml
apiVersion: v1
kind: Secret
@ -61,11 +61,11 @@ data:
password: dmFsdWUtMg0K
username: dmFsdWUtMQ0K
{% endhighlight %}
```
As with other resources, this secret can be instantiated using `create` and can be viewed with `get`:
{% highlight console %}
```shell
$ kubectl create -f ./secret.yaml
secrets/mysecret
@ -74,11 +74,11 @@ NAME TYPE DATA
default-token-v9pyz kubernetes.io/service-account-token 2
mysecret Opaque 2
{% endhighlight %}
```
To use the secret, you need to reference it in a pod or pod template. The `secret` volume source enables you to mount it as an in-memory directory into your containers.
{% highlight yaml %}
```yaml
apiVersion: v1
kind: ReplicationController
@ -109,7 +109,7 @@ spec:
- mountPath: /var/run/secrets/super
name: supersecret
{% endhighlight %}
```
For more details, see the [secrets document](secrets), [example](secrets/) and [design doc](/{{page.version}}/docs/design/secrets).
@ -120,7 +120,7 @@ Secrets can also be used to pass [image registry credentials](images.html#using-
First, create a `.dockercfg` file, for example by running `docker login <registry.domain>`.
Then put the resulting `.dockercfg` file into a [secret resource](secrets). For example:
{% highlight console %}
```shell
$ docker login
Username: janedoe
@ -148,12 +148,12 @@ EOF
$ kubectl create -f ./image-pull-secret.yaml
secrets/myregistrykey
{% endhighlight %}
```
Now, you can create pods which reference that secret by adding an `imagePullSecrets`
section to a pod definition.
{% highlight yaml %}
```yaml
apiVersion: v1
kind: Pod
@ -166,7 +166,7 @@ spec:
imagePullSecrets:
- name: myregistrykey
{% endhighlight %}
```
## Helper containers
@ -174,7 +174,7 @@ spec:
Such containers typically need to communicate with one another, often through the file system. This can be achieved by mounting the same volume into both containers. An example of this pattern would be a web server with a [program that polls a git repository](http://releases.k8s.io/release-1.1/contrib/git-sync/) for new updates:
{% highlight yaml %}
```yaml
apiVersion: v1
kind: ReplicationController
@ -207,7 +207,7 @@ spec:
- mountPath: /data
name: www-data
{% endhighlight %}
```
More examples can be found in our [blog article](http://blog.kubernetes.io/2015/06/the-distributed-system-toolkit-patterns) and [presentation slides](http://www.slideshare.net/Docker/slideshare-burns).
@ -217,7 +217,7 @@ Kubernetes's scheduler will place applications only where they have adequate CPU
If no resource requirements are specified, a nominal amount of resources is assumed. (This default is applied via a [LimitRange](../admin/limitrange/) for the default [Namespace](namespaces). It can be viewed with `kubectl describe limitrange limits`.) You may explicitly specify the amount of resources required as follows:
{% highlight yaml %}
```yaml
apiVersion: v1
kind: ReplicationController
@ -247,7 +247,7 @@ spec:
# memory units are bytes
memory: 64Mi
{% endhighlight %}
```
The container will die due to OOM (out of memory) if it exceeds its specified limit, so specifying a value a little higher than expected generally improves reliability. By specifying a request, the pod is guaranteed to be able to use that much of the resource when needed. See [Resource QoS](../proposals/resource-qos) for the difference between resource limits and requests.
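To make the request/limit distinction concrete, a container spec that sets both might look like this (a sketch; the numbers are arbitrary):

```yaml
resources:
  requests:
    cpu: 100m        # amount the scheduler reserves for the pod
    memory: 64Mi
  limits:
    cpu: 500m        # hard cap; exceeding the memory limit triggers an OOM kill
    memory: 128Mi
```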
@ -259,7 +259,7 @@ Many applications running for long periods of time eventually transition to brok
A common way to probe an application is using HTTP, which can be specified as follows:
{% highlight yaml %}
```yaml
apiVersion: v1
kind: ReplicationController
@ -285,7 +285,7 @@ spec:
initialDelaySeconds: 30
timeoutSeconds: 1
{% endhighlight %}
```
Other times, applications are only temporarily unable to serve, and will recover on their own. Typically in such cases you'd prefer not to kill the application, but don't want to send it requests, either, since the application won't respond correctly or at all. A common such scenario is loading large data or configuration files during application startup. Kubernetes provides *readiness probes* to detect and mitigate such situations. Readiness probes are configured similarly to liveness probes, just using the `readinessProbe` field. A pod with containers reporting that they are not ready will not receive traffic through Kubernetes [services](connecting-applications).
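A readiness probe is declared just like the liveness probe above, only under the `readinessProbe` field (a sketch; the path and port are illustrative):

```yaml
readinessProbe:
  httpGet:
    path: /index.html
    port: 80
  initialDelaySeconds: 30
  timeoutSeconds: 1
```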
@ -300,7 +300,7 @@ Of course, nodes and applications may fail at any time, but many applications be
The specification of a pre-stop hook is similar to that of probes, but without the timing-related parameters. For example:
{% highlight yaml %}
```yaml
apiVersion: v1
kind: ReplicationController
@ -324,7 +324,7 @@ spec:
# SIGTERM triggers a quick exit; gracefully terminate instead
command: ["/usr/sbin/nginx","-s","quit"]
{% endhighlight %}
```
## Termination message
@ -332,7 +332,7 @@ In order to achieve a reasonably high level of availability, especially for acti
Here is a toy example:
{% highlight yaml %}
```yaml
apiVersion: v1
kind: Pod
@ -345,11 +345,11 @@ spec:
command: ["/bin/sh","-c"]
args: ["sleep 60 && /bin/echo Sleep expired > /dev/termination-log"]
{% endhighlight %}
```
The message is recorded along with the other state of the last (i.e., most recent) termination:
{% highlight console %}
```shell
$ kubectl create -f ./pod.yaml
pods/pod-w-message
@ -359,7 +359,7 @@ Sleep expired
$ kubectl get pods/pod-w-message -o go-template="{{range .status.containerStatuses}}{{.lastState.terminated.exitCode}}{{end}}"
0
{% endhighlight %}
```
## What's next?

View File

@ -11,24 +11,24 @@ Once your application is packaged into a container and pushed to an image regist
For example, [nginx](http://wiki.nginx.org/Main) is a popular HTTP server, with a [pre-built container on Docker Hub](https://registry.hub.docker.com/_/nginx/). The [`kubectl run`](kubectl/kubectl_run) command below will create two nginx replicas, listening on port 80.
{% highlight console %}
```shell
$ kubectl run my-nginx --image=nginx --replicas=2 --port=80
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
my-nginx my-nginx nginx run=my-nginx 2
{% endhighlight %}
```
You can see that they are running with:
{% highlight console %}
```shell
$ kubectl get po
NAME READY STATUS RESTARTS AGE
my-nginx-l8n3i 1/1 Running 0 29m
my-nginx-q7jo3 1/1 Running 0 29m
{% endhighlight %}
```
Kubernetes will ensure that your application keeps running by automatically restarting containers that fail, spreading containers across nodes, and recreating containers on new nodes when nodes fail.
@ -36,22 +36,22 @@ Kubernetes will ensure that your application keeps running, by automatically res
Through integration with some cloud providers (for example Google Compute Engine and AWS EC2), Kubernetes enables you to request that it provision a public IP address for your application. To do this, run:
{% highlight console %}
```shell
$ kubectl expose rc my-nginx --port=80 --type=LoadBalancer
service "my-nginx" exposed
{% endhighlight %}
```
To find the public IP address assigned to your application, execute:
{% highlight console %}
```shell
$ kubectl get svc my-nginx
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
my-nginx 10.179.240.1 25.1.2.3 80/TCP run=nginx 8d
{% endhighlight %}
```
You may need to wait for a minute or two for the external IP address to be provisioned.
@ -61,14 +61,14 @@ In order to access your nginx landing page, you also have to make sure that traf
To kill the application and delete its containers and public IP address, do:
{% highlight console %}
```shell
$ kubectl delete rc my-nginx
replicationcontrollers/my-nginx
$ kubectl delete svc my-nginx
services/my-nginx
{% endhighlight %}
```
## What's next?

View File

@ -38,7 +38,7 @@ information on how Service Accounts work.
This is an example of a simple secret, in YAML format:
{% highlight yaml %}
```yaml
apiVersion: v1
kind: Secret
@ -49,7 +49,7 @@ data:
password: dmFsdWUtMg0K
username: dmFsdWUtMQ0K
{% endhighlight %}
```
The data field is a map. Its keys must match
[`DNS_SUBDOMAIN`](../design/identifiers), except that leading dots are also
@ -66,7 +66,7 @@ that it should use the secret.
This is an example of a pod that mounts a secret in a volume:
{% highlight json %}
```json
{
"apiVersion": "v1",
@ -94,7 +94,7 @@ This is an example of a pod that mounts a secret in a volume:
}
}
{% endhighlight %}
```
Each secret you want to use needs its own `spec.volumes`.
@ -156,7 +156,7 @@ files and the secret values are base-64 decoded and stored inside these files.
This is the result of commands
executed inside the container from the example above:
{% highlight console %}
```shell
$ ls /etc/foo/
username
@ -166,7 +166,7 @@ value-1
$ cat /etc/foo/password
value-2
{% endhighlight %}
```
The program in a container is responsible for reading the secret(s) from the
files. Currently, if a program expects a secret to be stored in an environment
@ -210,7 +210,7 @@ update the data of existing secrets, but to create new ones with distinct names.
To create a pod that uses an SSH key stored as a secret, we first need to create a secret:
{% highlight json %}
```json
{
"kind": "Secret",
@ -224,7 +224,7 @@ To create a pod that uses an ssh key stored as a secret, we first need to create
}
}
{% endhighlight %}
```
**Note:** The serialized JSON and YAML values of secret data are encoded as
base64 strings. Newlines are not valid within these strings and must be
@ -233,7 +233,7 @@ omitted.
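If you need to produce these base64 strings yourself, something like the following works (the values are made up; `-n` keeps a trailing newline out of the encoded data):

```shell
$ echo -n "admin" | base64
YWRtaW4=
$ echo -n "supersecretpassword" | base64
c3VwZXJzZWNyZXRwYXNzd29yZA==
```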
Now we can create a pod which references the secret with the SSH key and consumes it in a volume:
{% highlight json %}
```json
{
"kind": "Pod",
@ -269,7 +269,7 @@ consumes it in a volume:
}
}
{% endhighlight %}
```
When the container's command runs, the pieces of the key will be available in:
@ -286,7 +286,7 @@ credentials.
The secrets:
{% highlight json %}
```json
{
"apiVersion": "v1",
@ -316,11 +316,11 @@ The secrets:
}]
}
{% endhighlight %}
```
The pods:
{% highlight json %}
```json
{
"apiVersion": "v1",
@ -394,16 +394,16 @@ The pods:
}]
}
{% endhighlight %}
```
Both containers will have the following files present on their filesystems:
{% highlight console %}
```shell
/etc/secret-volume/username
/etc/secret-volume/password
{% endhighlight %}
```
Note how the specs for the two pods differ only in one field; this facilitates
creating pods with different capabilities from a common pod config template.
@ -412,7 +412,7 @@ You could further simplify the base pod specification by using two Service Accou
one called, say, `prod-user` with the `prod-db-secret`, and one called, say,
`test-user` with the `test-db-secret`. Then, the pod spec can be shortened to, for example:
{% highlight json %}
```json
{
"kind": "Pod",
@ -433,7 +433,7 @@ one called, say, `prod-user` with the `prod-db-secret`, and one called, say,
]
}
{% endhighlight %}
```
### Use-case: Secret visible to one container in a pod

View File

@ -15,15 +15,15 @@ A secret contains a set of named byte arrays.
Use the [`examples/secrets/secret.yaml`](secret.yaml) file to create a secret:
{% highlight console %}
```shell
$ kubectl create -f docs/user-guide/secrets/secret.yaml
{% endhighlight %}
```
You can use `kubectl` to see information about the secret:
{% highlight console %}
```shell
$ kubectl get secrets
NAME TYPE DATA
@ -41,7 +41,7 @@ Data
data-1: 9 bytes
data-2: 11 bytes
{% endhighlight %}
```
## Step Two: Create a pod that consumes a secret
@ -50,21 +50,21 @@ consumes it.
Use the [`examples/secrets/secret-pod.yaml`](secret-pod.yaml) file to create a Pod that consumes the secret.
{% highlight console %}
```shell
$ kubectl create -f docs/user-guide/secrets/secret-pod.yaml
{% endhighlight %}
```
This pod runs a binary that displays the content of one of the pieces of secret data in the secret
volume:
{% highlight console %}
```shell
$ kubectl logs secret-test-pod
2015-04-29T21:17:24.712206409Z content of file "/etc/secret-volume/data-1": value-1
{% endhighlight %}
```

Some files were not shown because too many files have changed in this diff.