Merge remote-tracking branch 'kubernetes/master' into add-clc

ckleban 2016-03-30 12:02:50 -07:00
commit f27bcff8b1
213 changed files with 25695 additions and 14663 deletions

.gitignore (vendored)

@@ -3,3 +3,4 @@
.jekyll-metadata
_site/**
.sass-cache/**
CNAME

404.md

@@ -3,21 +3,60 @@ layout: docwithnav
title: 404 Error!
permalink: /404.html
---
<script language="JavaScript">
$( document ).ready(function() {
var oldURLs=[".html",".md","/v1.1/","/v1.0/","/README"];
var doRedirect=false;
var oldURLs=["/README.md","/README.html",".html",".md","/v1.1/","/v1.0/"];
var fwdDirs=["examples/","cluster/","docs/devel","docs/design"];
var doRedirect = false;
var notHere = false;
var forwardingURL=window.location.href;
for (i=0;i<oldURLs.length;i++) {
if (forwardingURL.indexOf(oldURLs[i]) > -1) {
doRedirect=true;
forwardingURL=forwardingURL.replace(oldURLs[i],"/");
if (forwardingURL.indexOf("third_party/swagger-ui") > -1)
{
notHere = true;
window.location.replace("http://kubernetes.io/kubernetes/third_party/swagger-ui/");
}
if (forwardingURL.indexOf("resource-quota") > -1)
{
notHere = true;
window.location.replace("http://kubernetes.io/docs/admin/resourcequota/");
}
if (forwardingURL.indexOf("horizontal-pod-autoscaler") > -1)
{
notHere = true;
window.location.replace("http://kubernetes.io/docs/user-guide/horizontal-pod-autoscaling/");
}
if (forwardingURL.indexOf("docs/roadmap") > -1)
{
notHere = true;
window.location.replace("https://github.com/kubernetes/kubernetes/milestones/");
}
if (forwardingURL.indexOf("api-ref/") > -1)
{
notHere = true;
window.location.replace("http://kubernetes.io/docs/api/");
}
for (i=0;i<fwdDirs.length;i++) {
if (forwardingURL.indexOf(fwdDirs[i]) > -1)
{
var urlPieces = forwardingURL.split(fwdDirs[i]);
var newURL = "https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/" + fwdDirs[i] + urlPieces[1];
notHere = true;
window.location.replace(newURL);
}
}
if (doRedirect) {
window.location.replace(forwardingURL);
};
if (!notHere) {
for (i=0;i<oldURLs.length;i++) {
if (forwardingURL.indexOf(oldURLs[i]) > -1)
{
doRedirect=true;
forwardingURL=forwardingURL.replace(oldURLs[i],"/");
}
}
if (doRedirect)
{
window.location.replace(forwardingURL);
};
}
});
</script>
@@ -25,4 +64,3 @@ Sorry, this page was not found. :(
You can let us know by filling out the "I wish this page" text field at
the bottom of this page. Maybe try: "I wish this page _existed_."


@@ -4,8 +4,8 @@ Welcome! We are very pleased you want to contribute to the documentation and/or
You can click the "Fork" button in the upper-right area of the screen to create a copy of our site on your GitHub account called a "fork." Make any changes you want in your fork, and when you are ready to send those changes to us, go to the index page for your fork and click "New Pull Request" to let us know about it.
If you want to see your changes staged without having to install anything locally,
change your fork of our repo to be named:
If you want to see your changes staged without having to install anything locally, remove the CNAME file in this directory and
change the name of the fork to be:
YOUR_GITHUB_USERNAME.github.io
@@ -25,7 +25,7 @@ First install rvm
Then load it into your environment
source /Users/(USERNAME)/.rvm/scripts/rvm (or whatever is prompted by the installer)
source ${HOME}/.rvm/scripts/rvm (or whatever is prompted by the installer)
Then install Ruby 2.2 or higher
@@ -66,6 +66,9 @@ might help for Windows users.
Edit the yaml files in `/_data/` for the Guides, Reference, Samples, or Support areas.
You may have to exit and run `jekyll clean` before restarting `jekyll serve` to
get changes to files in `/_data/` to show up.
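A typical rebuild cycle after editing `/_data/` files might therefore look like this (a sketch, assuming `jekyll` is already installed and on your PATH):

```shell
jekyll clean    # drop the stale build cache
jekyll serve    # rebuild and serve the site locally
```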
### Add Images
Put the new image in `/images/docs/` if it's for the documentation, and just `/images/` if it's for the website.
@@ -88,6 +91,22 @@ To include a file that is hosted in the external, main Kubernetes repo, make sur
* `PATHFROMK8SROOT`: The path to the file relative to the root of [the Kubernetes repo](https://github.com/kubernetes/kubernetes/tree/release-1.2), e.g. `/examples/rbd/foo.yaml`
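For instance, a hypothetical include pulling that `foo.yaml` example from the main repo might look like this (the parameter values are illustrative):

<pre>&#123;% include code.html language="yaml" file="foo.yaml" k8slink="/examples/rbd/foo.yaml" %&#125;</pre>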
## Using tabs for multi-language examples
By specifying inline CSV in a variable called `tabspec`, you can include a file
called `tabs.html` that generates tabs showing code examples in multiple languages.
<pre>&#123;% capture tabspec %&#125;servicesample
JSON,json,service-sample.json,/docs/user-guide/services/service-sample.json
YAML,yaml,service-sample.yaml,/docs/user-guide/services/service-sample.yaml&#123;% endcapture %&#125;
&#123;% include tabs.html %&#125;</pre>
In English, this reads: create a set of tabs with the alias `servicesample`,
with tabs visually labeled "JSON" and "YAML" that use `json` and `yaml` Rouge syntax highlighting, that display the contents of
`service-sample.{extension}` on the page, and that link to the file in GitHub at (full path).
Example file: [Pods: Multi-Container](/docs/user-guide/pods/multi-container/).
## Use a global variable
The `/_config.yml` file defines some useful variables you can use when editing docs.
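For example, since `/_config.yml` (see its diff below) sets `githubbranch` as a page default, a doc can reference it inline; a hypothetical snippet:

<pre>See the examples in the &#123;&#123; page.githubbranch &#125;&#125; branch.</pre>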


@@ -14,11 +14,16 @@ lsi: false
defaults:
-
  scope:
    path: "docs"
    path: ""
  values:
    version: "v1.2"
    layout: docwithnav
    showedit: true
    githubbranch: "release-1.2"
    docsbranch: "master"
-
  scope:
    path: "docs"
  values:
    layout: docwithnav
    showedit: true
permalink: pretty
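With these scoped defaults in place, a hypothetical new page under `docs/` could omit `layout`, `version`, and the branch variables from its front matter entirely:

```yaml
---
title: My New Docs Page
---
```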


@@ -81,7 +81,7 @@ toc:
path: /docs/getting-started-guides/fedora/fedora-calico/
- title: rkt
section:
- title: Running Kubernetes on rat
- title: Running Kubernetes on rkt
path: /docs/getting-started-guides/rkt/
- title: Notes on Different UX with rkt Container Runtime
path: /docs/getting-started-guides/rkt/notes/
@@ -120,18 +120,28 @@ toc:
path: /docs/admin/multi-cluster/
- title: Using Large Clusters
path: /docs/admin/cluster-large/
- title: Running in Multiple Zones
path: /docs/admin/multiple-zones/
- title: Building High-Availability Clusters
path: /docs/admin/high-availability/
- title: Accessing Clusters
path: /docs/user-guide/accessing-the-cluster/
- title: Sharing a Cluster
- title: Sharing a Cluster with Namespaces
path: /docs/admin/namespaces/
- title: Namespaces Walkthrough
path: /docs/admin/namespaces/walkthrough/
- title: Changing Cluster Size
path: https://github.com/kubernetes/kubernetes/wiki/User-FAQ#how-do-i-change-the-size-of-my-cluster/
- title: Creating a Custom Cluster from Scratch
path: /docs/getting-started-guides/scratch/
- title: Authenticating Across Clusters with kubeconfig
path: /docs/user-guide/kubeconfig-file/
- title: Replication Controller Operations
path: /docs/user-guide/replication-controller/operations/
- title: Resizing a Replication Controller
path: /docs/user-guide/resizing-a-replication-controller/
- title: Service Operations
path: /docs/user-guide/services/operations/
- title: Using Nodes, Pods, and Containers
section:
@@ -147,6 +157,10 @@ toc:
path: /docs/user-guide/getting-into-containers/
- title: The Lifecycle of a Pod
path: /docs/user-guide/pod-states/
- title: Creating Single-Container Pods
path: /docs/user-guide/pods/single-container/
- title: Creating Multi-Container Pods
path: /docs/user-guide/pods/multi-container/
- title: Pod Templates
path: /docs/user-guide/pod-templates/
- title: Assigning Pods to Nodes
@@ -166,6 +180,8 @@ toc:
section:
- title: Networking in Kubernetes
path: /docs/admin/networking/
- title: Creating an External Load Balancer
path: /docs/user-guide/load-balancer/
- title: Using DNS Pods and Services
path: /docs/admin/dns/
- title: Setting Up and Configuring DNS
@@ -205,8 +221,10 @@ toc:
path: /docs/user-guide/compute-resources/
- title: Using kubectl to Manage Resources
path: /docs/user-guide/working-with-resources/
- title: Applying Resource Quotas and Limits
- title: Understanding Resource Quotas
path: /docs/admin/resourcequota/
- title: Applying Resource Quotas and Limits
path: /docs/admin/resourcequota/walkthrough/
- title: Setting Pod CPU and Memory Limits
path: /docs/admin/limitrange/
- title: Configuring Garbage Collection
@@ -224,6 +242,8 @@ toc:
path: /docs/user-guide/managing-deployments/
- title: Deploying Applications
path: /docs/user-guide/deploying-applications/
- title: Updating Applications with Rolling Updates
path: /docs/user-guide/rolling-updates/
- title: Launching, Exposing, and Killing Applications
path: /docs/user-guide/quick-start/


@@ -32,6 +32,20 @@ toc:
- title: Extensions API Definitions
path: /docs/api-reference/extensions/v1beta1/definitions/
- title: Autoscaling API
section:
- title: Autoscaling API Operations
path: /docs/api-reference/autoscaling/v1/operations/
- title: Autoscaling API Definitions
path: /docs/api-reference/autoscaling/v1/definitions/
- title: Batch API
section:
- title: Batch API Operations
path: /docs/api-reference/batch/v1/operations/
- title: Batch API Definitions
path: /docs/api-reference/batch/v1/definitions/
- title: kubectl CLI
section:
- title: kubectl Overview
@@ -155,7 +169,7 @@ toc:
- title: kube-proxy CLI
path: /docs/admin/kube-proxy/
- title: kub-scheduler CLI
- title: kube-scheduler CLI
path: /docs/admin/kube-scheduler/
- title: kubelet CLI
@@ -201,7 +215,7 @@ toc:
- title: Ingress Resources
path: /docs/user-guide/ingress/
- title: Horizontal Pod Autoscaling
path: /docs/user-guide/horizontal-pod-autoscaler/
path: /docs/user-guide/horizontal-pod-autoscaling/
- title: Jobs
path: /docs/user-guide/jobs/
- title: Resource Quotas


@@ -35,6 +35,8 @@ toc:
- title: Persistent Volume Samples
section:
- title: Persistent Volumes Walkthrough
path: /docs/user-guide/persistent-volumes/walkthrough/
- title: WordPress on a Kubernetes Persistent Volume
path: https://github.com/kubernetes/kubernetes/tree/release-1.2/examples/mysql-wordpress-pd/
- title: GlusterFS
@@ -60,3 +62,9 @@ toc:
- title: ConfigMap Example
path: https://github.com/kubernetes/kubernetes.github.io/tree/master/docs/user-guide/configmap
- title: Horizontal Pod Autoscaling Example
path: /docs/user-guide/horizontal-pod-autoscaling/walkthrough/
- title: Secrets Walkthrough
path: /docs/user-guide/secrets/walkthrough/


@@ -6,6 +6,8 @@ toc:
- title: Troubleshooting
section:
- title: Debugging Pods and Replication Controllers
path: /docs/user-guide/debugging-pods-and-replication-controllers/
- title: Web Interface
path: /docs/user-guide/ui/
- title: Application Introspection and Debugging
@@ -30,8 +32,6 @@ toc:
- title: Other Resources
section:
- title: Known Issues
path: /docs/user-guide/known-issues/
- title: Kubernetes Issue Tracker on GitHub
path: https://github.com/kubernetes/kubernetes/issues/
- title: Report a Security Vulnerability


@@ -1,10 +1,11 @@
{% capture samplecode %}{% include_relative {{include.file}} %}{% endcapture %}
{% if include.k8slink %}{% capture ghlink %}https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}{{include.k8slink}}{% endcapture %}{% endif %}
{% if include.ghlink %}{% capture ghlink %}https://github.com/kubernetes/kubernetes.github.io/blob/master{{include.ghlink}}{% endcapture %}{% endif %}
{% if include.k8slink %}{% capture ghlink %}https://raw.githubusercontent.com/kubernetes/kubernetes/{{page.githubbranch}}{{include.k8slink}}{% endcapture %}{% endif %}
{% if include.ghlink %}{% capture ghlink %}https://raw.githubusercontent.com/kubernetes/kubernetes.github.io/{{page.docsbranch}}{{include.ghlink}}{% endcapture %}{% endif %}
{% capture mysample %}
```{{include.language}}
{{ samplecode | raw | strip }}
```
{: id="{{include.file | handleize}}"}
{% endcapture %}
<table class="includecode"><thead><tr><th>{% if ghlink %}<a href="{{ghlink}}">{% endif %}<code>{{include.file}}</code></a></th></tr></thead>
<table class="includecode"><thead><tr><th>{% if ghlink %}<a href="{{ghlink}}" download="{{include.file}}">{% endif %}<code>{{include.file}}</code></a><img src="/images/copycode.svg" style="max-height:24px" onClick="copyCode('{{include.file | handleize}}')" title="Copy {{include.file}} to clipboard"></th></tr></thead>
<tr><td>{{ mysample | markdownify }}</td></tr></table>


@@ -2,12 +2,17 @@
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="canonical" href="http://kubernetes.io{{page.url}}" />
<link rel="shortcut icon" type="image/png" href="/images/favicon.png">
<link href='https://fonts.googleapis.com/css?family=Roboto:400,100,100italic,300,300italic,400italic,500,500italic,700,700italic,900,900italic' rel='stylesheet' type='text/css'>
<link href='https://fonts.googleapis.com/css?family=Roboto+Mono' rel='stylesheet' type='text/css'>
<link rel="stylesheet" href='https://fonts.googleapis.com/css?family=Roboto+Mono' type='text/css'>
<link rel="stylesheet" href="/css/styles.css"/>
<link rel="stylesheet" href="/css/jquery-ui.min.css">
<link rel="stylesheet" href="/css/sweetalert.css">
<script src="/js/jquery-2.2.0.min.js"></script>
<script src="/js/jquery-ui.min.js"></script>
<script src="/js/script.js"></script>
<script src="/js/sweetalert.min.js"></script>
<title>Kubernetes - {{ title }}</title>
</head>
<body>

_includes/tabs.html (new file)

@@ -0,0 +1,17 @@
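{% comment %} tabspec is inline CSV: the first line names the tab set; each later line is "Label,language,file,ghlink" (see the README section on multi-language tabs) {% endcomment %}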
{% assign tabsraw = tabspec | newline_to_br | split: '<br />' %}
{% assign tabsetname = tabsraw[0] %}
<script>$(function(){$("#{{tabsetname}}").tabs();});</script>
<div id="{{tabsetname}}">
<ul>{% for tab in tabsraw offset:1 %}{% assign thisTab = tab | split: ',' %}
<li><a href="#{{ thisTab[0] | strip | handleize }}">{{ thisTab[0] | strip}}</a></li>{% endfor %}
</ul>
{% for tab in tabsraw offset:1 %}
{% assign thisTab = tab | split: ',' %}
{% assign tabLang=thisTab[1] %}
{% assign tabFile=thisTab[2] %}
{% assign tabGHLink=thisTab[3] %}
<div id="{{ thisTab[0] | strip | handleize }}">
{% include code.html language=tabLang file=tabFile ghlink=tabGHLink %}
</div>
{% endfor %}
</div>

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -1099,6 +1099,47 @@ Examples:<br>
</tbody>
</table>
</div>
<div class="sect2">
<h3 id="_v1_scalestatus">v1.ScaleStatus</h3>
<div class="paragraph">
<p>ScaleStatus represents the current status of a scale subresource.</p>
</div>
<table class="tableblock frame-all grid-all" style="width:100%; ">
<colgroup>
<col style="width:20%;">
<col style="width:20%;">
<col style="width:20%;">
<col style="width:20%;">
<col style="width:20%;">
</colgroup>
<thead>
<tr>
<th class="tableblock halign-left valign-top">Name</th>
<th class="tableblock halign-left valign-top">Description</th>
<th class="tableblock halign-left valign-top">Required</th>
<th class="tableblock halign-left valign-top">Schema</th>
<th class="tableblock halign-left valign-top">Default</th>
</tr>
</thead>
<tbody>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">replicas</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">actual number of observed instances of the scaled object.</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">true</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">integer (int32)</p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">selector</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">label query over pods that should match the replicas count. This is same as the label selector but in the string format to avoid introspection by clients. The string will be in the same format as the query-param syntax. More info about label selectors: <a href="http://releases.k8s.io/release-1.2/docs/user-guide/labels.md#label-selectors">http://releases.k8s.io/release-1.2/docs/user-guide/labels.md#label-selectors</a></p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">false</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">string</p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
</tbody>
</table>
</div>
<div class="sect2">
<h3 id="_v1_configmap">v1.ConfigMap</h3>
@@ -2502,6 +2543,68 @@ The resulting set of endpoints can be viewed as:<br>
</tbody>
</table>
</div>
<div class="sect2">
<h3 id="_v1_scale">v1.Scale</h3>
<div class="paragraph">
<p>Scale represents a scaling request for a resource.</p>
</div>
<table class="tableblock frame-all grid-all" style="width:100%; ">
<colgroup>
<col style="width:20%;">
<col style="width:20%;">
<col style="width:20%;">
<col style="width:20%;">
<col style="width:20%;">
</colgroup>
<thead>
<tr>
<th class="tableblock halign-left valign-top">Name</th>
<th class="tableblock halign-left valign-top">Description</th>
<th class="tableblock halign-left valign-top">Required</th>
<th class="tableblock halign-left valign-top">Schema</th>
<th class="tableblock halign-left valign-top">Default</th>
</tr>
</thead>
<tbody>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">kind</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: <a href="http://releases.k8s.io/release-1.2/docs/devel/api-conventions.md#types-kinds">http://releases.k8s.io/release-1.2/docs/devel/api-conventions.md#types-kinds</a></p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">false</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">string</p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">apiVersion</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: <a href="http://releases.k8s.io/release-1.2/docs/devel/api-conventions.md#resources">http://releases.k8s.io/release-1.2/docs/devel/api-conventions.md#resources</a></p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">false</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">string</p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">metadata</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">Standard object metadata; More info: <a href="http://releases.k8s.io/release-1.2/docs/devel/api-conventions.md#metadata">http://releases.k8s.io/release-1.2/docs/devel/api-conventions.md#metadata</a>.</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">false</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock"><a href="#_v1_objectmeta">v1.ObjectMeta</a></p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">spec</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">defines the behavior of the scale. More info: <a href="http://releases.k8s.io/release-1.2/docs/devel/api-conventions.md#spec-and-status">http://releases.k8s.io/release-1.2/docs/devel/api-conventions.md#spec-and-status</a>.</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">false</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock"><a href="#_v1_scalespec">v1.ScaleSpec</a></p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">status</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">current status of the scale. More info: <a href="http://releases.k8s.io/release-1.2/docs/devel/api-conventions.md#spec-and-status">http://releases.k8s.io/release-1.2/docs/devel/api-conventions.md#spec-and-status</a>. Read-only.</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">false</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock"><a href="#_v1_scalestatus">v1.ScaleStatus</a></p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
</tbody>
</table>
</div>
<div class="sect2">
<h3 id="_v1_loadbalanceringress">v1.LoadBalancerIngress</h3>
@@ -5105,6 +5208,13 @@ The resulting set of endpoints can be viewed as:<br>
<td class="tableblock halign-left valign-top"></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">fullyLabeledReplicas</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">The number of pods that have labels matching the labels of the pod template of the replication controller.</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">false</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">integer (int32)</p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">observedGeneration</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">ObservedGeneration reflects the generation of the most recently observed replication controller.</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">false</p></td>
@@ -5235,6 +5345,40 @@ The resulting set of endpoints can be viewed as:<br>
</tbody>
</table>
</div>
<div class="sect2">
<h3 id="_v1_scalespec">v1.ScaleSpec</h3>
<div class="paragraph">
<p>ScaleSpec describes the attributes of a scale subresource.</p>
</div>
<table class="tableblock frame-all grid-all" style="width:100%; ">
<colgroup>
<col style="width:20%;">
<col style="width:20%;">
<col style="width:20%;">
<col style="width:20%;">
<col style="width:20%;">
</colgroup>
<thead>
<tr>
<th class="tableblock halign-left valign-top">Name</th>
<th class="tableblock halign-left valign-top">Description</th>
<th class="tableblock halign-left valign-top">Required</th>
<th class="tableblock halign-left valign-top">Schema</th>
<th class="tableblock halign-left valign-top">Default</th>
</tr>
</thead>
<tbody>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">replicas</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">desired number of instances for the scaled object.</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">false</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">integer (int32)</p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
</tbody>
</table>
</div>
<div class="sect2">
<h3 id="_v1_componentstatuslist">v1.ComponentStatusList</h3>
@@ -7133,7 +7277,7 @@ The resulting set of endpoints can be viewed as:<br>
</div>
<div id="footer">
<div id="footer-text">
Last updated 2016-03-04 23:53:10 UTC
Last updated 2016-03-15 23:35:46 UTC
</div>
</div>
</body>

File diff suppressed because it is too large


@@ -120,7 +120,7 @@ header
top: 0
left: 0
transform: none
background-image: url(../images/nav_logo.svg)
background-image: url(/images/nav_logo.svg)
background-size: contain
background-position: center center
background-repeat: no-repeat
@@ -133,7 +133,7 @@ header
left: 20px
width: 50px
height: 50px
background-image: url(../images/toc_icon.png)
background-image: url(/images/toc_icon.png)
background-position: center center
background-repeat: no-repeat
background-size: auto
@@ -152,11 +152,11 @@ header
display: block
width: 45px
height: 44px
background-image: url(../images/favicon.png)
background-image: url(/images/favicon.png)
&.flip-nav .flyout-button
background-image: url(../images/toc_icon_grey.png)
background-image: url(/images/toc_icon_grey.png)
.nav-buttons
@@ -295,7 +295,7 @@ header
// HERO
#hero
background-image: url(../images/texture.png)
background-image: url(/images/texture.png)
background-color: $dark-grey
text-align: center
padding-left: 0
@@ -321,7 +321,7 @@ header
// FOOTER
footer
width: 100%
background-image: url(../images/texture.png)
background-image: url(/images/texture.png)
background-color: $dark-grey
main
@@ -357,45 +357,6 @@ footer
text-align: center
//.social
// position: relative
// text-align: center
//
// a, label
// margin: 20px 0
//
// a
// vertical-align: middle
//
// a.line-break
// display: block
//
//
// label
// //float: right
// display: inline-block
// width: 100%
// height: 50px
// line-height: 50px
// font-weight: 100
// white-space: nowrap
//
// input
// margin-left: 8px
// width: 300px
// max-width: 60%
//
//
//.social
// a
// float: left
// margin-right: 10px
//
// a:nth-child(4)
// clear: left
#search, #wishField
background-color: transparent
@@ -412,7 +373,7 @@ footer
.social a
display: inline-block
background-image: url(../images/social_sprite.png)
background-image: url(/images/social_sprite.png)
background-repeat: no-repeat
background-size: auto
width: 50px
@@ -527,6 +488,8 @@ section
position: relative
padding: 50px 20px 20px 20px
overflow: hidden
font-size: 14px
& > div
height: 100%
@@ -549,7 +512,7 @@ section
background-color: $light-grey
border-left: 3px solid $blue
padding: 7.5px 10px 7.5px 18px
margin-left: -21px
margin-left: -3px
color: $blue
.open-toc
@@ -563,10 +526,13 @@ section
overflow-y: auto
.pi-accordion
margin-left: -20px
& > .container:first-child > .item:first-child > .title:first-child
padding-left: 0
font-size: 1.5em
font-weight: 700
.container
padding-left: 20px
& > .container:first-child > .item.yah:first-child > .title:first-child
margin-left: -20px !important
.item
overflow: hidden
@@ -583,7 +549,6 @@ section
a.item > .title
color: black
padding-left: 0
&:hover
color: $blue
@@ -618,6 +583,12 @@ section
opacity: 1
dt
margin-bottom: 8px
dd
margin-bottom: 16px
.pi-pushmenu
display: none
position: fixed
@@ -720,7 +691,6 @@ section
#docsContent
position: relative
float: right
//width: calc(100% - 400px)
width: 100%
$toc-margin: 15px
@@ -736,14 +706,6 @@ section
padding-bottom: 10px
border-bottom: 1px solid #cccccc
&:before
content: ''
display: block
height: 100px
margin-top: -100px
background-color: red
visibility: hidden
h1
font-size: 32px
padding-right: 60px
@@ -762,7 +724,7 @@ section
font-weight: 500
p
font-size: 16px
font-size: 14px
font-weight: 300
line-height: 1.25em
@@ -876,9 +838,9 @@ section
white-space: nowrap
text-indent: 50px
overflow: hidden
background: $blue url(../images/pencil.png) no-repeat
background-position: 1px 1px
background-size: auto
background: $blue url(/images/icon-pencil.svg) no-repeat
background-position: 12px 10px
background-size: 29px 29px
#markdown-toc
margin-bottom: 20px
@@ -960,7 +922,7 @@ $feature-box-div-margin-bottom: 40px
#home
&.flip-nav, &.open-nav
.logo
background-image: url(../images/nav_logo2.svg)
background-image: url(/images/nav_logo2.svg)
#hero
margin-bottom: 0
@@ -979,8 +941,12 @@ $feature-box-div-margin-bottom: 40px
padding-top: $ocean-nodes-padding-Y
padding-bottom: $ocean-nodes-padding-Y
a
color: $blue
main
margin-bottom: $ocean-nodes-padding-Y
min-height: 160px
.image-wrapper
max-width: 75%
@@ -1009,7 +975,7 @@ $feature-box-div-margin-bottom: 40px
#video
width: 100%
position: relative
background-image: url(../images/kub_video_thm.jpg)
background-image: url(/images/kub_video_thm.jpg)
background-position: center center
background-size: cover
@@ -1114,7 +1080,7 @@ $feature-box-div-margin-bottom: 40px
#features
padding-top: 140px
background-color: $light-grey
background-image: url(../images/wheel.png)
background-image: url(/images/wheel.png)
background-position: center 60px
background-repeat: no-repeat
background-size: auto
@@ -1154,7 +1120,7 @@ $feature-box-div-margin-bottom: 40px
#community
&.open-nav, &.flip-nav
.logo
background-image: url(../images/nav_logo2.svg)
background-image: url(/images/nav_logo2.svg)
#hero
padding-bottom: 20px
@@ -1205,6 +1171,30 @@ $feature-box-div-margin-bottom: 40px
width: 100%
height: 100%
// Tabs
.ui-widget-header
background: transparent !important
background-color: transparent !important
border: 0px !important
.ui-tabs
ul, ol, li
padding: 0px !important
list-style: none !important
margin-bottom: 0px !important
margin-left: 1px !important
.ui-widget-content
border: 0px !important
.ui-widget-content
table
margin: 0px !important
.ui-tabs .ui-tabs-panel
padding: 0px !important
border: 1px solid #ccc !important
// Talk to us
#talkToUs
h3, h4
@@ -1233,16 +1223,16 @@ $feature-box-div-margin-bottom: 40px
background-repeat: no-repeat
div:nth-child(1)
background-image: url(../images/twitter_icon.png)
background-image: url(/images/twitter_icon.png)
div:nth-child(2)
background-image: url(../images/github_icon.png)
background-image: url(/images/github_icon.png)
div:nth-child(3)
background-image: url(../images/slack_icon.png)
background-image: url(/images/slack_icon.png)
div:nth-child(4)
background-image: url(../images/stackoverflow_icon.png)
background-image: url(/images/stackoverflow_icon.png)
div + div
margin-top: 20px
@@ -1270,10 +1260,10 @@ $feature-box-div-margin-bottom: 40px
padding-top: 125px
div:nth-child(1)
background-image: url(../images/community_logos/viacom_logo.png)
background-image: url(/images/community_logos/viacom_logo.png)
div:nth-child(2)
background-image: url(../images/community_logos/ebay_logo.png)
background-image: url(/images/community_logos/ebay_logo.png)
div:nth-child(3)
background-image: url(../images/community_logos/wikimedia_foundation_logo.png)
background-image: url(/images/community_logos/wikimedia_foundation_logo.png)


@@ -66,6 +66,7 @@ $video-section-height: 550px
#encyclopedia
padding: 50px 50px 20px 20px
clear: both
#docsToc
position: relative
@@ -88,20 +89,33 @@ $video-section-height: 550px
main
max-width: $main-max-width
#home
section, header, footer
main
max-width: 1000px
#oceanNodes
main
position: relative
max-width: $main-max-width
max-width: 830px
&:nth-child(1)
max-width: 1000px
padding-right: 475px
h3, p
text-align: left
.image-wrapper
position: absolute
max-width: 48%
transform: translateY(-50%)
.content
width: 50%
img
max-width: 425px
//.content
// width: 50%
#video
@@ -165,30 +179,6 @@ $video-section-height: 550px
div:last-child
float: right
//.social
// position: relative
// margin: 20px 0
//
// a
// float: left
//
// a + a
// margin-left: 10px
//
// label
// float: right
// width: auto
// display: inline-block
// height: 50px
// line-height: 50px
// font-weight: 100
// white-space: nowrap
//
// input
// margin-left: 8px
// max-width: none
#search, #wishField
background-color: transparent
padding: 10px


@@ -68,6 +68,25 @@ $feature-box-div-width: 45%
@media screen and (min-width: 750px)
@import "size"
p
font-size: 16px
line-height: 24px
letter-spacing: 0.1px
h1
font-size: 36px
line-height: 44px
h3
font-size: 28px
line-height: 36px
h4
font-size: 24px
line-height: 40px
#home
#viewDocs, #tryKubernetes
display: inline-block
@@ -95,10 +114,17 @@ $feature-box-div-width: 45%
#oceanNodes
h3
text-align: left
margin-bottom: 18px
main
position: relative
clear: both
display: table
.content
display: table-cell
position: relative
vertical-align: middle
.image-wrapper
position: absolute
@@ -108,28 +134,33 @@ $feature-box-div-width: 45%
transform: translateY(-50%)
&:nth-child(odd)
.content
padding-right: 30%
padding-right: 210px
.image-wrapper
right: 0
&:nth-child(even)
.content
padding-left: 30%
padding-left: 210px
.image-wrapper
left: 0
&:nth-child(1)
.content
padding-right: 0
padding-right: 0
h3, p
text-align: center
.image-wrapper
position: relative
display: block
float: none
max-width: 100%
transform: none
.content
display: block
img
width: 100%
@@ -148,6 +179,8 @@ $feature-box-div-width: 45%
padding-bottom: 60px
.feature-box
margin-bottom: 30px
&:last-child
margin-bottom: 0
@@ -155,8 +188,6 @@ $feature-box-div-width: 45%
margin-bottom: $features-h3-margin-bottom
.feature-box
margin-bottom: $feature-box-margin-bottom
& > div
width: $feature-box-div-width
margin-bottom: $feature-box-div-margin-bottom

css/jquery-ui.min.css (vendored, new file)

File diff suppressed because one or more lines are too long

css/jquery-ui.structure.min.css (vendored, new file)

File diff suppressed because one or more lines are too long

css/jquery-ui.theme.min.css (vendored, new file)

File diff suppressed because one or more lines are too long

css/sweetalert.css (new file)

@@ -0,0 +1,934 @@
body.stop-scrolling {
height: 100%;
overflow: hidden; }
.sweet-overlay {
background-color: black;
/* IE8 */
-ms-filter: "progid:DXImageTransform.Microsoft.Alpha(Opacity=40)";
/* IE8 */
background-color: rgba(0, 0, 0, 0.4);
position: fixed;
left: 0;
right: 0;
top: 0;
bottom: 0;
display: none;
z-index: 10000; }
.sweet-alert {
background-color: white;
font-family: 'Open Sans', 'Helvetica Neue', Helvetica, Arial, sans-serif;
width: 478px;
padding: 17px;
border-radius: 5px;
text-align: left;
position: fixed;
left: 50%;
top: 50%;
margin-left: -256px;
margin-top: -200px;
overflow: hidden;
display: none;
z-index: 99999; }
@media all and (max-width: 540px) {
.sweet-alert {
width: auto;
margin-left: 0;
margin-right: 0;
left: 15px;
right: 15px; } }
.sweet-alert h2 {
color: #575757;
font-size: 30px;
text-align: center;
font-weight: 600;
text-transform: none;
position: relative;
margin: 25px 0;
padding: 0;
line-height: 40px;
display: block; }
.sweet-alert p {
color: #797979;
font-size: 16px;
text-align: left;
font-weight: 300;
position: relative;
text-align: inherit;
float: none;
margin: 0;
padding: 0;
padding-left: 10px !important;
font-family: courier,monospace;
line-height: normal; }
.sweet-alert fieldset {
border: none;
position: relative; }
.sweet-alert .sa-error-container {
background-color: #f1f1f1;
margin-left: -17px;
margin-right: -17px;
overflow: hidden;
padding: 0 10px;
max-height: 0;
webkit-transition: padding 0.15s, max-height 0.15s;
transition: padding 0.15s, max-height 0.15s; }
.sweet-alert .sa-error-container.show {
padding: 10px 0;
max-height: 100px;
webkit-transition: padding 0.2s, max-height 0.2s;
transition: padding 0.25s, max-height 0.25s; }
.sweet-alert .sa-error-container .icon {
display: inline-block;
width: 24px;
height: 24px;
border-radius: 50%;
background-color: #ea7d7d;
color: white;
line-height: 24px;
text-align: center;
margin-right: 3px; }
.sweet-alert .sa-error-container p {
display: inline-block; }
.sweet-alert .sa-input-error {
position: absolute;
top: 29px;
right: 26px;
width: 20px;
height: 20px;
opacity: 0;
-webkit-transform: scale(0.5);
transform: scale(0.5);
-webkit-transform-origin: 50% 50%;
transform-origin: 50% 50%;
-webkit-transition: all 0.1s;
transition: all 0.1s; }
.sweet-alert .sa-input-error::before, .sweet-alert .sa-input-error::after {
content: "";
width: 20px;
height: 6px;
background-color: #f06e57;
border-radius: 3px;
position: absolute;
top: 50%;
margin-top: -4px;
left: 50%;
margin-left: -9px; }
.sweet-alert .sa-input-error::before {
-webkit-transform: rotate(-45deg);
transform: rotate(-45deg); }
.sweet-alert .sa-input-error::after {
-webkit-transform: rotate(45deg);
transform: rotate(45deg); }
.sweet-alert .sa-input-error.show {
opacity: 1;
-webkit-transform: scale(1);
transform: scale(1); }
.sweet-alert input {
width: 100%;
box-sizing: border-box;
border-radius: 3px;
border: 1px solid #d7d7d7;
height: 43px;
margin-top: 10px;
margin-bottom: 17px;
font-size: 18px;
box-shadow: inset 0px 1px 1px rgba(0, 0, 0, 0.06);
padding: 0 12px;
display: none;
-webkit-transition: all 0.3s;
transition: all 0.3s; }
.sweet-alert input:focus {
outline: none;
box-shadow: 0px 0px 3px #c4e6f5;
border: 1px solid #b4dbed; }
.sweet-alert input:focus::-moz-placeholder {
transition: opacity 0.3s 0.03s ease;
opacity: 0.5; }
.sweet-alert input:focus:-ms-input-placeholder {
transition: opacity 0.3s 0.03s ease;
opacity: 0.5; }
.sweet-alert input:focus::-webkit-input-placeholder {
transition: opacity 0.3s 0.03s ease;
opacity: 0.5; }
.sweet-alert input::-moz-placeholder {
color: #bdbdbd; }
.sweet-alert input:-ms-input-placeholder {
color: #bdbdbd; }
.sweet-alert input::-webkit-input-placeholder {
color: #bdbdbd; }
.sweet-alert.show-input input {
display: block; }
.sweet-alert .sa-confirm-button-container {
display: inline-block;
position: relative; }
.sweet-alert .la-ball-fall {
position: absolute;
left: 50%;
top: 50%;
margin-left: -27px;
margin-top: 4px;
opacity: 0;
visibility: hidden; }
.sweet-alert button {
background-color: #8CD4F5;
color: white;
border: none;
box-shadow: none;
font-size: 17px;
font-weight: 500;
-webkit-border-radius: 4px;
border-radius: 5px;
padding: 10px 32px;
margin: 26px 5px 0 5px;
cursor: pointer; }
.sweet-alert button:focus {
outline: none;
box-shadow: 0 0 2px rgba(128, 179, 235, 0.5), inset 0 0 0 1px rgba(0, 0, 0, 0.05); }
.sweet-alert button:hover {
background-color: #7ecff4; }
.sweet-alert button:active {
background-color: #5dc2f1; }
.sweet-alert button.cancel {
background-color: #C1C1C1; }
.sweet-alert button.cancel:hover {
background-color: #b9b9b9; }
.sweet-alert button.cancel:active {
background-color: #a8a8a8; }
.sweet-alert button.cancel:focus {
box-shadow: rgba(197, 205, 211, 0.8) 0px 0px 2px, rgba(0, 0, 0, 0.0470588) 0px 0px 0px 1px inset !important; }
.sweet-alert button[disabled] {
opacity: .6;
cursor: default; }
.sweet-alert button.confirm[disabled] {
color: transparent; }
.sweet-alert button.confirm[disabled] ~ .la-ball-fall {
opacity: 1;
visibility: visible;
transition-delay: 0s; }
.sweet-alert button::-moz-focus-inner {
border: 0; }
.sweet-alert[data-has-cancel-button=false] button {
box-shadow: none !important; }
.sweet-alert[data-has-confirm-button=false][data-has-cancel-button=false] {
padding-bottom: 40px; }
.sweet-alert .sa-icon {
width: 80px;
height: 80px;
border: 4px solid gray;
-webkit-border-radius: 40px;
border-radius: 40px;
border-radius: 50%;
margin: 20px auto;
padding: 0;
position: relative;
box-sizing: content-box; }
.sweet-alert .sa-icon.sa-error {
border-color: #F27474; }
.sweet-alert .sa-icon.sa-error .sa-x-mark {
position: relative;
display: block; }
.sweet-alert .sa-icon.sa-error .sa-line {
position: absolute;
height: 5px;
width: 47px;
background-color: #F27474;
display: block;
top: 37px;
border-radius: 2px; }
.sweet-alert .sa-icon.sa-error .sa-line.sa-left {
-webkit-transform: rotate(45deg);
transform: rotate(45deg);
left: 17px; }
.sweet-alert .sa-icon.sa-error .sa-line.sa-right {
-webkit-transform: rotate(-45deg);
transform: rotate(-45deg);
right: 16px; }
.sweet-alert .sa-icon.sa-warning {
border-color: #F8BB86; }
.sweet-alert .sa-icon.sa-warning .sa-body {
position: absolute;
width: 5px;
height: 47px;
left: 50%;
top: 10px;
-webkit-border-radius: 2px;
border-radius: 2px;
margin-left: -2px;
background-color: #F8BB86; }
.sweet-alert .sa-icon.sa-warning .sa-dot {
position: absolute;
width: 7px;
height: 7px;
-webkit-border-radius: 50%;
border-radius: 50%;
margin-left: -3px;
left: 50%;
bottom: 10px;
background-color: #F8BB86; }
.sweet-alert .sa-icon.sa-info {
border-color: #C9DAE1; }
.sweet-alert .sa-icon.sa-info::before {
content: "";
position: absolute;
width: 5px;
height: 29px;
left: 50%;
bottom: 17px;
border-radius: 2px;
margin-left: -2px;
background-color: #C9DAE1; }
.sweet-alert .sa-icon.sa-info::after {
content: "";
position: absolute;
width: 7px;
height: 7px;
border-radius: 50%;
margin-left: -3px;
top: 19px;
background-color: #C9DAE1; }
.sweet-alert .sa-icon.sa-success {
border-color: #A5DC86; }
.sweet-alert .sa-icon.sa-success::before, .sweet-alert .sa-icon.sa-success::after {
content: '';
-webkit-border-radius: 40px;
border-radius: 40px;
border-radius: 50%;
position: absolute;
width: 60px;
height: 120px;
background: white;
-webkit-transform: rotate(45deg);
transform: rotate(45deg); }
.sweet-alert .sa-icon.sa-success::before {
-webkit-border-radius: 120px 0 0 120px;
border-radius: 120px 0 0 120px;
top: -7px;
left: -33px;
-webkit-transform: rotate(-45deg);
transform: rotate(-45deg);
-webkit-transform-origin: 60px 60px;
transform-origin: 60px 60px; }
.sweet-alert .sa-icon.sa-success::after {
-webkit-border-radius: 0 120px 120px 0;
border-radius: 0 120px 120px 0;
top: -11px;
left: 30px;
-webkit-transform: rotate(-45deg);
transform: rotate(-45deg);
-webkit-transform-origin: 0px 60px;
transform-origin: 0px 60px; }
.sweet-alert .sa-icon.sa-success .sa-placeholder {
width: 80px;
height: 80px;
border: 4px solid rgba(165, 220, 134, 0.2);
-webkit-border-radius: 40px;
border-radius: 40px;
border-radius: 50%;
box-sizing: content-box;
position: absolute;
left: -4px;
top: -4px;
z-index: 2; }
.sweet-alert .sa-icon.sa-success .sa-fix {
width: 5px;
height: 90px;
background-color: white;
position: absolute;
left: 28px;
top: 8px;
z-index: 1;
-webkit-transform: rotate(-45deg);
transform: rotate(-45deg); }
.sweet-alert .sa-icon.sa-success .sa-line {
height: 5px;
background-color: #A5DC86;
display: block;
border-radius: 2px;
position: absolute;
z-index: 2; }
.sweet-alert .sa-icon.sa-success .sa-line.sa-tip {
width: 25px;
left: 14px;
top: 46px;
-webkit-transform: rotate(45deg);
transform: rotate(45deg); }
.sweet-alert .sa-icon.sa-success .sa-line.sa-long {
width: 47px;
right: 8px;
top: 38px;
-webkit-transform: rotate(-45deg);
transform: rotate(-45deg); }
.sweet-alert .sa-icon.sa-custom {
background-size: contain;
border-radius: 0;
border: none;
background-position: center center;
background-repeat: no-repeat; }
/*
* Animations
*/
@-webkit-keyframes showSweetAlert {
0% {
transform: scale(0.7);
-webkit-transform: scale(0.7); }
45% {
transform: scale(1.05);
-webkit-transform: scale(1.05); }
80% {
transform: scale(0.95);
-webkit-transform: scale(0.95); }
100% {
transform: scale(1);
-webkit-transform: scale(1); } }
@keyframes showSweetAlert {
0% {
transform: scale(0.7);
-webkit-transform: scale(0.7); }
45% {
transform: scale(1.05);
-webkit-transform: scale(1.05); }
80% {
transform: scale(0.95);
-webkit-transform: scale(0.95); }
100% {
transform: scale(1);
-webkit-transform: scale(1); } }
@-webkit-keyframes hideSweetAlert {
0% {
transform: scale(1);
-webkit-transform: scale(1); }
100% {
transform: scale(0.5);
-webkit-transform: scale(0.5); } }
@keyframes hideSweetAlert {
0% {
transform: scale(1);
-webkit-transform: scale(1); }
100% {
transform: scale(0.5);
-webkit-transform: scale(0.5); } }
@-webkit-keyframes slideFromTop {
0% {
top: 0%; }
100% {
top: 50%; } }
@keyframes slideFromTop {
0% {
top: 0%; }
100% {
top: 50%; } }
@-webkit-keyframes slideToTop {
0% {
top: 50%; }
100% {
top: 0%; } }
@keyframes slideToTop {
0% {
top: 50%; }
100% {
top: 0%; } }
@-webkit-keyframes slideFromBottom {
0% {
top: 70%; }
100% {
top: 50%; } }
@keyframes slideFromBottom {
0% {
top: 70%; }
100% {
top: 50%; } }
@-webkit-keyframes slideToBottom {
0% {
top: 50%; }
100% {
top: 70%; } }
@keyframes slideToBottom {
0% {
top: 50%; }
100% {
top: 70%; } }
.showSweetAlert[data-animation=pop] {
-webkit-animation: showSweetAlert 0.3s;
animation: showSweetAlert 0.3s; }
.showSweetAlert[data-animation=none] {
-webkit-animation: none;
animation: none; }
.showSweetAlert[data-animation=slide-from-top] {
-webkit-animation: slideFromTop 0.3s;
animation: slideFromTop 0.3s; }
.showSweetAlert[data-animation=slide-from-bottom] {
-webkit-animation: slideFromBottom 0.3s;
animation: slideFromBottom 0.3s; }
.hideSweetAlert[data-animation=pop] {
-webkit-animation: hideSweetAlert 0.2s;
animation: hideSweetAlert 0.2s; }
.hideSweetAlert[data-animation=none] {
-webkit-animation: none;
animation: none; }
.hideSweetAlert[data-animation=slide-from-top] {
-webkit-animation: slideToTop 0.4s;
animation: slideToTop 0.4s; }
.hideSweetAlert[data-animation=slide-from-bottom] {
-webkit-animation: slideToBottom 0.3s;
animation: slideToBottom 0.3s; }
@-webkit-keyframes animateSuccessTip {
0% {
width: 0;
left: 1px;
top: 19px; }
54% {
width: 0;
left: 1px;
top: 19px; }
70% {
width: 50px;
left: -8px;
top: 37px; }
84% {
width: 17px;
left: 21px;
top: 48px; }
100% {
width: 25px;
left: 14px;
top: 45px; } }
@keyframes animateSuccessTip {
0% {
width: 0;
left: 1px;
top: 19px; }
54% {
width: 0;
left: 1px;
top: 19px; }
70% {
width: 50px;
left: -8px;
top: 37px; }
84% {
width: 17px;
left: 21px;
top: 48px; }
100% {
width: 25px;
left: 14px;
top: 45px; } }
@-webkit-keyframes animateSuccessLong {
0% {
width: 0;
right: 46px;
top: 54px; }
65% {
width: 0;
right: 46px;
top: 54px; }
84% {
width: 55px;
right: 0px;
top: 35px; }
100% {
width: 47px;
right: 8px;
top: 38px; } }
@keyframes animateSuccessLong {
0% {
width: 0;
right: 46px;
top: 54px; }
65% {
width: 0;
right: 46px;
top: 54px; }
84% {
width: 55px;
right: 0px;
top: 35px; }
100% {
width: 47px;
right: 8px;
top: 38px; } }
@-webkit-keyframes rotatePlaceholder {
0% {
transform: rotate(-45deg);
-webkit-transform: rotate(-45deg); }
5% {
transform: rotate(-45deg);
-webkit-transform: rotate(-45deg); }
12% {
transform: rotate(-405deg);
-webkit-transform: rotate(-405deg); }
100% {
transform: rotate(-405deg);
-webkit-transform: rotate(-405deg); } }
@keyframes rotatePlaceholder {
0% {
transform: rotate(-45deg);
-webkit-transform: rotate(-45deg); }
5% {
transform: rotate(-45deg);
-webkit-transform: rotate(-45deg); }
12% {
transform: rotate(-405deg);
-webkit-transform: rotate(-405deg); }
100% {
transform: rotate(-405deg);
-webkit-transform: rotate(-405deg); } }
.animateSuccessTip {
-webkit-animation: animateSuccessTip 0.75s;
animation: animateSuccessTip 0.75s; }
.animateSuccessLong {
-webkit-animation: animateSuccessLong 0.75s;
animation: animateSuccessLong 0.75s; }
.sa-icon.sa-success.animate::after {
-webkit-animation: rotatePlaceholder 4.25s ease-in;
animation: rotatePlaceholder 4.25s ease-in; }
@-webkit-keyframes animateErrorIcon {
0% {
transform: rotateX(100deg);
-webkit-transform: rotateX(100deg);
opacity: 0; }
100% {
transform: rotateX(0deg);
-webkit-transform: rotateX(0deg);
opacity: 1; } }
@keyframes animateErrorIcon {
0% {
transform: rotateX(100deg);
-webkit-transform: rotateX(100deg);
opacity: 0; }
100% {
transform: rotateX(0deg);
-webkit-transform: rotateX(0deg);
opacity: 1; } }
.animateErrorIcon {
-webkit-animation: animateErrorIcon 0.5s;
animation: animateErrorIcon 0.5s; }
@-webkit-keyframes animateXMark {
0% {
transform: scale(0.4);
-webkit-transform: scale(0.4);
margin-top: 26px;
opacity: 0; }
50% {
transform: scale(0.4);
-webkit-transform: scale(0.4);
margin-top: 26px;
opacity: 0; }
80% {
transform: scale(1.15);
-webkit-transform: scale(1.15);
margin-top: -6px; }
100% {
transform: scale(1);
-webkit-transform: scale(1);
margin-top: 0;
opacity: 1; } }
@keyframes animateXMark {
0% {
transform: scale(0.4);
-webkit-transform: scale(0.4);
margin-top: 26px;
opacity: 0; }
50% {
transform: scale(0.4);
-webkit-transform: scale(0.4);
margin-top: 26px;
opacity: 0; }
80% {
transform: scale(1.15);
-webkit-transform: scale(1.15);
margin-top: -6px; }
100% {
transform: scale(1);
-webkit-transform: scale(1);
margin-top: 0;
opacity: 1; } }
.animateXMark {
-webkit-animation: animateXMark 0.5s;
animation: animateXMark 0.5s; }
@-webkit-keyframes pulseWarning {
0% {
border-color: #F8D486; }
100% {
border-color: #F8BB86; } }
@keyframes pulseWarning {
0% {
border-color: #F8D486; }
100% {
border-color: #F8BB86; } }
.pulseWarning {
-webkit-animation: pulseWarning 0.75s infinite alternate;
animation: pulseWarning 0.75s infinite alternate; }
@-webkit-keyframes pulseWarningIns {
0% {
background-color: #F8D486; }
100% {
background-color: #F8BB86; } }
@keyframes pulseWarningIns {
0% {
background-color: #F8D486; }
100% {
background-color: #F8BB86; } }
.pulseWarningIns {
-webkit-animation: pulseWarningIns 0.75s infinite alternate;
animation: pulseWarningIns 0.75s infinite alternate; }
@-webkit-keyframes rotate-loading {
0% {
transform: rotate(0deg); }
100% {
transform: rotate(360deg); } }
@keyframes rotate-loading {
0% {
transform: rotate(0deg); }
100% {
transform: rotate(360deg); } }
/* Internet Explorer 9 has some special quirks that are fixed here */
/* The icons are not animated. */
/* This file is automatically merged into sweet-alert.min.js through Gulp */
/* Error icon */
.sweet-alert .sa-icon.sa-error .sa-line.sa-left {
-ms-transform: rotate(45deg) \9; }
.sweet-alert .sa-icon.sa-error .sa-line.sa-right {
-ms-transform: rotate(-45deg) \9; }
/* Success icon */
.sweet-alert .sa-icon.sa-success {
border-color: transparent\9; }
.sweet-alert .sa-icon.sa-success .sa-line.sa-tip {
-ms-transform: rotate(45deg) \9; }
.sweet-alert .sa-icon.sa-success .sa-line.sa-long {
-ms-transform: rotate(-45deg) \9; }
/*!
* Load Awesome v1.1.0 (http://github.danielcardoso.net/load-awesome/)
* Copyright 2015 Daniel Cardoso <@DanielCardoso>
* Licensed under MIT
*/
.la-ball-fall,
.la-ball-fall > div {
position: relative;
-webkit-box-sizing: border-box;
-moz-box-sizing: border-box;
box-sizing: border-box; }
.la-ball-fall {
display: block;
font-size: 0;
color: #fff; }
.la-ball-fall.la-dark {
color: #333; }
.la-ball-fall > div {
display: inline-block;
float: none;
background-color: currentColor;
border: 0 solid currentColor; }
.la-ball-fall {
width: 54px;
height: 18px; }
.la-ball-fall > div {
width: 10px;
height: 10px;
margin: 4px;
border-radius: 100%;
opacity: 0;
-webkit-animation: ball-fall 1s ease-in-out infinite;
-moz-animation: ball-fall 1s ease-in-out infinite;
-o-animation: ball-fall 1s ease-in-out infinite;
animation: ball-fall 1s ease-in-out infinite; }
.la-ball-fall > div:nth-child(1) {
-webkit-animation-delay: -200ms;
-moz-animation-delay: -200ms;
-o-animation-delay: -200ms;
animation-delay: -200ms; }
.la-ball-fall > div:nth-child(2) {
-webkit-animation-delay: -100ms;
-moz-animation-delay: -100ms;
-o-animation-delay: -100ms;
animation-delay: -100ms; }
.la-ball-fall > div:nth-child(3) {
-webkit-animation-delay: 0ms;
-moz-animation-delay: 0ms;
-o-animation-delay: 0ms;
animation-delay: 0ms; }
.la-ball-fall.la-sm {
width: 26px;
height: 8px; }
.la-ball-fall.la-sm > div {
width: 4px;
height: 4px;
margin: 2px; }
.la-ball-fall.la-2x {
width: 108px;
height: 36px; }
.la-ball-fall.la-2x > div {
width: 20px;
height: 20px;
margin: 8px; }
.la-ball-fall.la-3x {
width: 162px;
height: 54px; }
.la-ball-fall.la-3x > div {
width: 30px;
height: 30px;
margin: 12px; }
/*
* Animation
*/
@-webkit-keyframes ball-fall {
0% {
opacity: 0;
-webkit-transform: translateY(-145%);
transform: translateY(-145%); }
10% {
opacity: .5; }
20% {
opacity: 1;
-webkit-transform: translateY(0);
transform: translateY(0); }
80% {
opacity: 1;
-webkit-transform: translateY(0);
transform: translateY(0); }
90% {
opacity: .5; }
100% {
opacity: 0;
-webkit-transform: translateY(145%);
transform: translateY(145%); } }
@-moz-keyframes ball-fall {
0% {
opacity: 0;
-moz-transform: translateY(-145%);
transform: translateY(-145%); }
10% {
opacity: .5; }
20% {
opacity: 1;
-moz-transform: translateY(0);
transform: translateY(0); }
80% {
opacity: 1;
-moz-transform: translateY(0);
transform: translateY(0); }
90% {
opacity: .5; }
100% {
opacity: 0;
-moz-transform: translateY(145%);
transform: translateY(145%); } }
@-o-keyframes ball-fall {
0% {
opacity: 0;
-o-transform: translateY(-145%);
transform: translateY(-145%); }
10% {
opacity: .5; }
20% {
opacity: 1;
-o-transform: translateY(0);
transform: translateY(0); }
80% {
opacity: 1;
-o-transform: translateY(0);
transform: translateY(0); }
90% {
opacity: .5; }
100% {
opacity: 0;
-o-transform: translateY(145%);
transform: translateY(145%); } }
@keyframes ball-fall {
0% {
opacity: 0;
-webkit-transform: translateY(-145%);
-moz-transform: translateY(-145%);
-o-transform: translateY(-145%);
transform: translateY(-145%); }
10% {
opacity: .5; }
20% {
opacity: 1;
-webkit-transform: translateY(0);
-moz-transform: translateY(0);
-o-transform: translateY(0);
transform: translateY(0); }
80% {
opacity: 1;
-webkit-transform: translateY(0);
-moz-transform: translateY(0);
-o-transform: translateY(0);
transform: translateY(0); }
90% {
opacity: .5; }
100% {
opacity: 0;
-webkit-transform: translateY(145%);
-moz-transform: translateY(145%);
-o-transform: translateY(145%);
transform: translateY(145%); } }


@@ -59,7 +59,7 @@ with a value of `Basic BASE64ENCODED(USER:PASSWORD)`.
**Keystone authentication** is enabled by passing the `--experimental-keystone-url=<AuthURL>`
option to the apiserver during startup. The plugin is implemented in
`plugin/pkg/auth/authenticator/request/keystone/keystone.go`.
`plugin/pkg/auth/authenticator/password/keystone/keystone.go`.
For details on how to use keystone to manage projects and users, refer to the
[Keystone documentation](http://docs.openstack.org/developer/keystone/). Please note that
this plugin is still experimental, which means it is subject to change.
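As a sketch, enabling the plugin might look like this on the apiserver command line (the Keystone URL is illustrative; substitute your own AuthURL):

```shell
# illustrative endpoint only; any other apiserver flags stay unchanged
kube-apiserver --experimental-keystone-url=https://keystone.example.com:5000/v2.0
```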
@@ -131,5 +131,3 @@ into apiserver start parameters.
1. View the certificate.
`openssl x509 -noout -text -in ./server.crt`
Finally, do not forget to fill in the same parameters and add them to the apiserver start parameters.


@@ -161,6 +161,14 @@ users:
  user:
    client-certificate: /path/to/cert.pem # cert for the webhook plugin to use
    client-key: /path/to/key.pem # key matching the cert
# kubeconfig files require a context. Provide one for the API Server.
current-context: webhook
contexts:
- context:
    cluster: name-of-remote-authz-service
    user: name-of-api-server
  name: webhook
```
### Request Payloads


@@ -44,15 +44,11 @@ process.
These controllers include:
* Node Controller
* Responsible for noticing & responding when nodes go down.
* Replication Controller
* Responsible for maintaining the correct number of pods for every replication
controller object in the system.
* Endpoints Controller
* Populates the Endpoints object (i.e., join Services & Pods).
* Service Account & Token Controllers
* Create default accounts and API access tokens for new namespaces.
* Node Controller: Responsible for noticing & responding when nodes go down.
* Replication Controller: Responsible for maintaining the correct number of pods for every replication
controller object in the system.
* Endpoints Controller: Populates the Endpoints object (i.e., join Services & Pods).
* Service Account & Token Controllers: Create default accounts and API access tokens for new namespaces.
* ... and others.
### kube-scheduler
@@ -72,7 +68,7 @@ Addon objects are created in the "kube-system" namespace.
#### DNS
While the other addons are not strictly required, all Kubernetes
clusters should have [cluster DNS](dns.md), as many examples rely on it.
clusters should have [cluster DNS](/docs/admin/dns/), as many examples rely on it.
Cluster DNS is a DNS server, in addition to the other DNS server(s) in your
environment, which serves DNS records for Kubernetes services.
@@ -110,14 +106,15 @@ the Kubernetes runtime environment.
### kubelet
[kubelet](/docs/admin/kubelet) is the primary node agent. It:
* Watches for pods that have been assigned to its node (either by apiserver
or via local configuration file) and:
* Mounts the pod's required volumes
* Downloads the pod's secrets
* Runs the pod's containers via docker (or, experimentally, rkt).
* Periodically executes any requested container liveness probes.
* Reports the status of the pod back to the rest of the system, by creating a
"mirror pod" if necessary.
* Mounts the pod's required volumes
* Downloads the pod's secrets
* Runs the pod's containers via docker (or, experimentally, rkt).
* Periodically executes any requested container liveness probes.
* Reports the status of the pod back to the rest of the system, by creating a
"mirror pod" if necessary.
* Reports the status of the node back to the rest of the system.
### kube-proxy


@@ -1,18 +1,18 @@
---
---
## Support
At {{page.version}}, Kubernetes supports clusters with up to 1000 nodes. More specifically, we support configurations that meet *all* of the following criteria:
* No more than 1000 nodes
* No more than 30000 total pods
* No more than 60000 total containers
* No more than 100 pods per node
* TOC
{:toc}
## Setup
@@ -26,8 +26,8 @@ When setting up a large Kubernetes cluster, the following issues must be conside
### Quota Issues
To avoid running into cloud provider quota issues, when creating a cluster with many nodes, consider:
* Increase the quota for things like CPU, IPs, etc.
* In [GCE, for example,](https://cloud.google.com/compute/docs/resource-quotas) you'll want to increase the quota for:
* CPUs
@@ -40,35 +40,58 @@ To avoid running into cloud provider quota issues, when creating a cluster with
* Target pools
* Gating the setup script so that it brings up new node VMs in smaller batches with waits in between, because some cloud providers rate limit the creation of VMs.
### Etcd storage

To improve performance of large clusters, we store events in a separate dedicated etcd instance.

When creating a cluster, existing salt scripts:

* start and configure an additional etcd instance
* configure the api-server to use it for storing events
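For reference, here is a minimal sketch of what that wiring looks like on the apiserver command line; the addresses are illustrative, and on GCE/AWS the salt scripts set this up for you:

```shell
# A second etcd instance dedicated to events (illustrative address)
etcd --listen-client-urls=http://127.0.0.1:4002 \
     --advertise-client-urls=http://127.0.0.1:4002 &

# Point the apiserver at it for events only, keeping everything else in the
# main etcd (see --etcd-servers-overrides in the kube-apiserver reference)
kube-apiserver --etcd-servers=http://127.0.0.1:4001 \
  --etcd-servers-overrides=/events#http://127.0.0.1:4002 \
  ...
```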
### Size of master and master components
On GCE/GKE and AWS, `kube-up` automatically configures the proper VM size for your master depending on the number of nodes
in your cluster. On other providers, you will need to configure it manually. For reference, the sizes we use on GCE are
* 1-5 nodes: n1-standard-1
* 6-10 nodes: n1-standard-2
* 11-100 nodes: n1-standard-4
* 101-250 nodes: n1-standard-8
* 251-500 nodes: n1-standard-16
* more than 500 nodes: n1-standard-32
And the sizes we use on AWS are
* 1-5 nodes: m3.medium
* 6-10 nodes: m3.large
* 11-100 nodes: m3.xlarge
* 101-250 nodes: m3.2xlarge
* 251-500 nodes: c4.4xlarge
* more than 500 nodes: c4.8xlarge
Note that these master node sizes are currently only set at cluster startup time, and are not adjusted if you later scale your cluster up or down (e.g. manually removing or adding nodes, or using a cluster autoscaler).
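If you are on a provider where you must size the master manually, the cluster scripts generally honor a `MASTER_SIZE` environment variable at `kube-up` time (a sketch; treat the variable name as an assumption for your provider and verify it against its config script):

```shell
# Pin the master VM size for a ~300-node GCE cluster before bringing it up
MASTER_SIZE=n1-standard-16 NUM_NODES=300 cluster/kube-up.sh
```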
### Addon Resources
To prevent memory leaks or other resource issues in [cluster addons](https://releases.k8s.io/{{page.githubbranch}}/cluster/addons) from consuming all the resources available on a node, Kubernetes sets resource limits on addon containers to limit the CPU and Memory resources they can consume (See PR [#10653](http://pr.k8s.io/10653/files) and [#10778](http://pr.k8s.io/10778/files)).
For [example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/cluster/saltbase/salt/fluentd-gcp/fluentd-gcp.yaml):
```yaml
containers:
  - name: fluentd-cloud-logging
    image: gcr.io/google_containers/fluentd-gcp:1.16
    resources:
      limits:
        cpu: 100m
        memory: 200Mi
```
Except for Heapster, these limits are static and are based on data we collected from addons running on 4-node clusters (see [#10335](http://issue.k8s.io/10335#issuecomment-117861225)). The addons consume a lot more resources when running on large deployment clusters (see [#5880](http://issue.k8s.io/5880#issuecomment-113984085)). So, if a large cluster is deployed without adjusting these values, the addons may continuously get killed because they keep hitting the limits.
To avoid running into cluster addon resource issues, when creating a cluster with many nodes, consider the following (a sketch of one way to edit these limits follows the list below):
* Scale memory and CPU limits for each of the following addons, if used, as you scale up the size of your cluster (there is one replica of each handling the entire cluster, so memory and CPU usage tends to grow proportionally with the size of and load on the cluster):
* [InfluxDB and Grafana](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/cluster-monitoring/influxdb/influxdb-grafana-controller.yaml)
* [skydns, kube2sky, and dns etcd](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/dns/skydns-rc.yaml.in)
@ -79,20 +102,21 @@ To avoid running into cluster addon resource issues, when creating a cluster with
* [FluentD with ElasticSearch Plugin](http://releases.k8s.io/{{page.githubbranch}}/cluster/saltbase/salt/fluentd-es/fluentd-es.yaml)
* [FluentD with GCP Plugin](http://releases.k8s.io/{{page.githubbranch}}/cluster/saltbase/salt/fluentd-gcp/fluentd-gcp.yaml)
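One low-tech way to do this is to edit the addon manifest before running `kube-up` (a sketch; the file path matches the fluentd-gcp example above, and the new value is purely illustrative, not a recommendation):

```shell
# Raise fluentd-gcp's memory limit in the salt manifest before cluster creation
sed -i 's/memory: 200Mi/memory: 800Mi/' \
  cluster/saltbase/salt/fluentd-gcp/fluentd-gcp.yaml
```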
Heapster's resource limits are set dynamically based on the initial size of your cluster (see [#16185](http://issue.k8s.io/16185)
and [#22940](http://issue.k8s.io/22940)). If you find that Heapster is running
out of resources, you should adjust the formulas that compute the Heapster memory request (see those PRs for details).

For directions on how to detect if addon containers are hitting resource limits, see the [Troubleshooting section of Compute Resources](/docs/user-guide/compute-resources/#troubleshooting).

In the [future](http://issue.k8s.io/13048), we anticipate setting all cluster addon resource limits based on cluster size, and dynamically adjusting them if you grow or shrink your cluster.
We welcome PRs that implement those features.

### Allowing minor node failure at startup

For various reasons (see [#18969](https://github.com/kubernetes/kubernetes/issues/18969) for more details) running
`kube-up.sh` with a very large `NUM_NODES` may fail due to a very small number of nodes not coming up properly.
Currently you have two choices: restart the cluster (`kube-down.sh` and then `kube-up.sh` again), or before
running `kube-up.sh` set the environment variable `ALLOWED_NOTREADY_NODES` to whatever value you feel comfortable
with. This will allow `kube-up.sh` to succeed with fewer than `NUM_NODES` coming up. Depending on the
reason for the failure, those additional nodes may join later or the cluster may remain at a size of
`NUM_NODES - ALLOWED_NOTREADY_NODES`.
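For example (a sketch; the values are illustrative):

```shell
# Tolerate up to 10 nodes failing to come up in a 1000-node cluster
NUM_NODES=1000 ALLOWED_NOTREADY_NODES=10 cluster/kube-up.sh
```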


@ -64,7 +64,7 @@ specified, they are defaulted to be equal. Config with these not matching will
Also you should not normally create any pods whose labels match this selector, either directly, via
another DaemonSet, or via other controller such as ReplicationController. Otherwise, the DaemonSet
controller will think that those pods were created by it. Kubernetes will not stop you from doing
this. Once case where you might want to do this is manually create a pod with a different value on
this. One case where you might want to do this is to manually create a pod with a different value on
a node for testing.
### Running Pods on Only Some Nodes


@ -1,13 +1,8 @@
---
---

## Introduction

PLEASE NOTE: The podmaster implementation is obsoleted by https://github.com/kubernetes/kubernetes/pull/16830,
which provides a primitive for leader election in the experimental kubernetes API.
Nevertheless, the concepts and implementation in this document are still valid, as is the podmaster implementation itself.
This document describes how to build a high-availability (HA) Kubernetes cluster. This is a fairly advanced topic.
Users who merely want to experiment with Kubernetes are encouraged to use configurations that are simpler to set up such as
the simple [Docker based single node cluster instructions](/docs/getting-started-guides/docker),
@ -32,10 +27,9 @@ The steps involved are as follows:
* [Starting replicated, load balanced Kubernetes API servers](#replicated-api-servers)
* [Setting up master-elected Kubernetes scheduler and controller-manager daemons](#master-elected-components)
Here's what the system should look like when it's finished:

![High availability Kubernetes diagram](/images/docs/ha.svg)

Ready? Let's get started.
## Initial set-up
@ -60,11 +54,11 @@ choices. For example, on systemd-based systems (e.g. RHEL, CentOS), you can run
If you are extending from a standard Kubernetes installation, the `kubelet` binary should already be present on your system. You can run
`which kubelet` to determine if the binary is in fact installed. If it is not installed,
you should install the [kubelet binary](https://storage.googleapis.com/kubernetes-release/release/v0.19.3/bin/linux/amd64/kubelet), the
[kubelet init file](http://releases.k8s.io/{{page.githubbranch}}/cluster/saltbase/salt/kubelet/initd) and [high-availability/default-kubelet](/docs/admin/high-availability/default-kubelet)
[kubelet init file](http://releases.k8s.io/{{page.githubbranch}}/cluster/saltbase/salt/kubelet/initd) and [default-kubelet](/docs/admin/high-availability/default-kubelet)
scripts.
If you are using monit, you should also install the monit daemon (`apt-get install monit`) and the [high-availability/monit-kubelet](/docs/admin/high-availability/monit-kubelet) and
[high-availability/monit-docker](/docs/admin/high-availability/monit-docker) configs.
If you are using monit, you should also install the monit daemon (`apt-get install monit`) and the [monit-kubelet](/docs/admin/high-availability/monit-kubelet) and
[monit-docker](/docs/admin/high-availability/monit-docker) configs.
On systemd systems, run `systemctl enable kubelet` and `systemctl enable docker`.
@ -187,12 +181,8 @@ them to talk to the external load balancer's IP address.
So far we have set up state storage, and we have set up the API server, but we haven't run anything that actually modifies
cluster state, such as the controller manager and scheduler. To achieve this reliably, we only want to have one actor modifying state at a time, but we want replicated
instances of these actors, in case a machine dies. To achieve this, we are going to use a lease-lock in etcd to perform
master election. On each of the three apiserver nodes, we run a small utility application named `podmaster`. Its job is to implement a master
election protocol using etcd "compare and swap". If the apiserver node wins the election, it starts the master component it is managing (e.g. the scheduler), if it
loses the election, it ensures that any master components running on the node (e.g. the scheduler) are stopped.
In the future, we expect to more tightly integrate this lease-locking into the scheduler and controller-manager binaries directly, as described in the [high availability design proposal](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/proposals/high-availability.md)
instances of these actors, in case a machine dies. To achieve this, we are going to use a lease-lock in the API to perform
master election. We will use the `--leader-elect` flag for each scheduler and controller-manager; using a lease in the API ensures that only one instance of each is running at once.
### Installing configuration files
@ -204,18 +194,7 @@ touch /var/log/kube-controller-manager.log
```
Next, set up the descriptions of the scheduler and controller manager pods on each node
by copying [kube-scheduler.yaml](/docs/admin/high-availability/kube-scheduler.yaml) and [kube-controller-manager.yaml](/docs/admin/high-availability/kube-controller-manager.yaml) into the `/srv/kubernetes/` directory.
### Running the podmaster
Now that the configuration files are in place, copy the [podmaster.yaml](/docs/admin/high-availability/podmaster.yaml) config file into `/etc/kubernetes/manifests/`
As before, the kubelet on the node monitors this directory, and will start an instance of the podmaster using the pod specification provided in `podmaster.yaml`.
Now you will have one instance of the scheduler process running on a single master node, and likewise one
controller-manager process running on a single (possibly different) master node. If either of these processes fail,
the kubelet will restart them. If any of these nodes fail, the process will move to a different instance of a master
node.
by copying [kube-scheduler.yaml](/docs/admin/high-availability/kube-scheduler.yaml) and [kube-controller-manager.yaml](/docs/admin/high-availability/kube-controller-manager.yaml) into the `/etc/kubernetes/manifests/` directory.
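For example (a sketch, assuming both YAML files are in your working directory):

```shell
# The kubelet watches this directory and starts a pod for each manifest it finds
cp kube-scheduler.yaml kube-controller-manager.yaml /etc/kubernetes/manifests/
```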
## Conclusion
@ -225,4 +204,4 @@ If you have an existing cluster, this is as simple as reconfiguring your kubelet
restarting the kubelets on each node.
If you are turning up a fresh cluster, you will need to install the kubelet and kube-proxy on each worker node, and
set the `--apiserver` flag to your replicated endpoint.


@ -9,7 +9,7 @@ spec:
- -c
- /usr/local/bin/kube-controller-manager --master=127.0.0.1:8080 --cluster-name=e2e-test-bburns
--cluster-cidr=10.245.0.0/16 --allocate-node-cidrs=true --cloud-provider=gce --service-account-private-key-file=/srv/kubernetes/server.key
--v=2 1>>/var/log/kube-controller-manager.log 2>&1
--v=2 --leader-elect=true 1>>/var/log/kube-controller-manager.log 2>&1
image: gcr.io/google_containers/kube-controller-manager:fda24638d51a48baa13c35337fcd4793
livenessProbe:
httpGet:


@ -10,7 +10,7 @@ spec:
command:
- /bin/sh
- -c
- /usr/local/bin/kube-scheduler --master=127.0.0.1:8080 --v=2 1>>/var/log/kube-scheduler.log
- /usr/local/bin/kube-scheduler --master=127.0.0.1:8080 --v=2 --leader-elect=true 1>>/var/log/kube-scheduler.log
2>&1
livenessProbe:
httpGet:


@ -61,7 +61,7 @@ project](/docs/admin/salt).
## Multi-tenant support
* **Resource Quota** ([resource-quota.md](/docs/admin/resource-quota))
* **Resource Quota** ([resourcequota/](/docs/admin/resourcequota/))
## Security


@ -1,9 +1,5 @@
---
---
<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->
<!-- END MUNGE: UNVERSIONED_WARNING -->
## kube-apiserver
@ -40,6 +36,9 @@ kube-apiserver
--cloud-provider="": The provider for cloud services. Empty string for no provider.
--cors-allowed-origins=[]: List of allowed origins for CORS, comma separated. An allowed origin can be a regular expression to support subdomain matching. If this list is empty CORS will not be enabled.
--delete-collection-workers=1: Number of workers spawned for DeleteCollection call. These are used to speed up namespace cleanup.
--etcd-cafile="": SSL Certificate Authority file used to secure etcd communication
--etcd-certfile="": SSL certification file used to secure etcd communication
--etcd-keyfile="": SSL key file used to secure etcd communication
--etcd-prefix="/registry": The prefix for all resource paths in etcd.
--etcd-quorum-read[=false]: If true, enable quorum read
--etcd-servers=[]: List of etcd servers to watch (http://ip:port), comma separated. Mutually exclusive with -etcd-config
@ -85,16 +84,4 @@ kube-apiserver
--watch-cache-sizes=[]: List of watch cache sizes for every resource (pods, nodes, etc.), comma separated. The individual override format: resource#size, where size is a number. It takes effect when watch-cache is enabled.
```
###### Auto generated by spf13/cobra on 24-Feb-2016
<!-- BEGIN MUNGE: IS_VERSIONED -->
<!-- TAG IS_VERSIONED -->
<!-- END MUNGE: IS_VERSIONED -->
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/admin/kube-apiserver.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
###### Auto generated by spf13/cobra on 6-Mar-2016


@ -1,9 +1,5 @@
---
---
<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->
<!-- END MUNGE: UNVERSIONED_WARNING -->
## kube-controller-manager
@ -40,6 +36,7 @@ kube-controller-manager
--concurrent-replicaset-syncs=5: The number of replica sets that are allowed to sync concurrently. Larger number = more responsive replica management, but more CPU (and network) load
--concurrent-resource-quota-syncs=5: The number of resource quotas that are allowed to sync concurrently. Larger number = more responsive quota management, but more CPU (and network) load
--concurrent_rc_syncs=5: The number of replication controllers that are allowed to sync concurrently. Larger number = more responsive replica management, but more CPU (and network) load
--daemonset-lookup-cache-size=1024: The size of the lookup cache for daemonsets. Larger number = more responsive daemonsets, but more MEM load.
--deleting-pods-burst=10: Number of nodes on which pods are bursty deleted in case of node failure. For more details look into RateLimiter.
--deleting-pods-qps=0.1: Number of nodes per second on which pods are deleted in case of node failure.
--deployment-controller-sync-period=30s: Period for syncing the deployments.
@ -80,16 +77,4 @@ kube-controller-manager
--terminated-pod-gc-threshold=12500: Number of terminated pods that can exist before the terminated pod garbage collector starts deleting terminated pods. If <= 0, the terminated pod garbage collector is disabled.
```
###### Auto generated by spf13/cobra on 25-Feb-2016
<!-- BEGIN MUNGE: IS_VERSIONED -->
<!-- TAG IS_VERSIONED -->
<!-- END MUNGE: IS_VERSIONED -->
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/admin/kube-controller-manager.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
###### Auto generated by spf13/cobra on 29-Feb-2016


@ -1,9 +1,5 @@
---
---
<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->
<!-- END MUNGE: UNVERSIONED_WARNING -->
## kube-proxy
@ -50,16 +46,4 @@ kube-proxy
--udp-timeout=250ms: How long an idle UDP connection will be kept open (e.g. '250ms', '2s'). Must be greater than 0. Only applicable for proxy-mode=userspace
```
###### Auto generated by spf13/cobra on 7-Feb-2016
<!-- BEGIN MUNGE: IS_VERSIONED -->
<!-- TAG IS_VERSIONED -->
<!-- END MUNGE: IS_VERSIONED -->
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/admin/kube-proxy.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
###### Auto generated by spf13/cobra on 7-Feb-2016


@ -1,9 +1,5 @@
---
---
<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->
<!-- END MUNGE: UNVERSIONED_WARNING -->
## kube-scheduler
@ -45,16 +41,4 @@ kube-scheduler
--scheduler-name="default-scheduler": Name of the scheduler, used to select which pods will be processed by this scheduler, based on pod's annotation with key 'scheduler.alpha.kubernetes.io/name'
```
###### Auto generated by spf13/cobra on 28-Jan-2016
<!-- BEGIN MUNGE: IS_VERSIONED -->
<!-- TAG IS_VERSIONED -->
<!-- END MUNGE: IS_VERSIONED -->
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/admin/kube-scheduler.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
###### Auto generated by spf13/cobra on 28-Jan-2016


@ -121,9 +121,4 @@ kubelet
--volume-stats-agg-period=1m0s: Specifies interval for kubelet to calculate and cache the volume disk usage for all pods and volumes. To disable volume calculations, set to 0. Default: '1m'
```
###### Auto generated by spf13/cobra on 4-Mar-2016
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/admin/kubelet.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
###### Auto generated by spf13/cobra on 15-Mar-2016


@ -22,7 +22,7 @@ may be too small to be useful, but big enough for the waste to be costly over the
the cluster operator may want to set limits such that a pod must consume at least 20% of the memory and cpu of their
average node size in order to provide for more uniform scheduling and to limit waste.
This example demonstrates how limits can be applied to a Kubernetes namespace to control
This example demonstrates how limits can be applied to a Kubernetes [namespace](/docs/admin/namespaces/walkthrough/) to control
min/max resource limits per pod. In addition, this example demonstrates how you can
apply default resource limits to pods in the absence of an end-user specified value.
@ -41,12 +41,17 @@ This example will work in a custom namespace to demonstrate the concepts involved.
Let's create a new namespace called limit-example:
```shell
$ kubectl create -f docs/admin/limitrange/namespace.yaml
namespace "limit-example" created
$ kubectl create namespace limit-example
namespace "limit-example" created
```
Note that `kubectl` commands will print the type and name of the resource created or mutated, which can then be used in subsequent commands:
```shell
$ kubectl get namespaces
NAME LABELS STATUS AGE
default <none> Active 5m
limit-example <none> Active 53s
NAME STATUS AGE
default Active 51s
limit-example Active 45s
```
## Step 2: Apply a limit to the namespace
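The limit itself is a `LimitRange` object. Here is a sketch matching the default *limits* and *requests* observed later in this example (the object name is illustrative, and the repository's actual limits file may also set min/max bounds, omitted here):

```shell
kubectl create -f - --namespace=limit-example <<EOF
apiVersion: v1
kind: LimitRange
metadata:
  name: mylimits
spec:
  limits:
  - type: Container
    default:        # default limits for containers that specify none
      cpu: 300m
      memory: 200Mi
    defaultRequest: # default requests for containers that specify none
      cpu: 200m
      memory: 100Mi
EOF
```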
@ -95,36 +100,45 @@ were previously created in a namespace.
If a resource (cpu or memory) is being restricted by a limit, the user will get an error at time
of creation explaining why.
Let's first spin up a replication controller that creates a single container pod to demonstrate
Let's first spin up a [Deployment](/docs/user-guide/deployments) that creates a single container Pod to demonstrate
how default values are applied to each pod.
```shell
$ kubectl run nginx --image=nginx --replicas=1 --namespace=limit-example
replicationcontroller "nginx" created
$ kubectl get pods --namespace=limit-example
NAME READY STATUS RESTARTS AGE
nginx-aq0mf 1/1 Running 0 35s
$ kubectl get pods nginx-aq0mf --namespace=limit-example -o yaml | grep resources -C 8
deployment "nginx" created
```
```yaml
resourceVersion: "127"
selfLink: /api/v1/namespaces/limit-example/pods/nginx-aq0mf
uid: 51be42a7-7156-11e5-9921-286ed488f785
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: nginx
    resources:
      limits:
        cpu: 300m
        memory: 200Mi
      requests:
        cpu: 200m
        memory: 100Mi
    terminationMessagePath: /dev/termination-log
    volumeMounts:
```
Note that `kubectl run` creates a Deployment named "nginx" on Kubernetes clusters >= v1.2. If you are running older versions, it creates replication controllers instead.
If you want to obtain the old behavior, use `--generator=run/v1` to create replication controllers. See [`kubectl run`](/docs/user-guide/kubectl/kubectl_run/) for more details.
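For example, to get the old behavior on a v1.2 cluster:

```shell
$ kubectl run nginx --image=nginx --replicas=1 --generator=run/v1 --namespace=limit-example
replicationcontroller "nginx" created
```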
The Deployment manages 1 replica of a single-container Pod. Let's take a look at the Pod it manages. First, find the name of the Pod:
```shell
$ kubectl get pods --namespace=limit-example
NAME READY STATUS RESTARTS AGE
nginx-2040093540-s8vzu 1/1 Running 0 11s
```
Let's print this Pod in YAML output format (using the `-o yaml` flag), and then `grep` the `resources` field. Note that your pod name will be different.

```shell
$ kubectl get pods nginx-2040093540-s8vzu --namespace=limit-example -o yaml | grep resources -C 8
resourceVersion: "57"
selfLink: /api/v1/namespaces/limit-example/pods/nginx-2040093540-s8vzu
uid: 67b20741-f53b-11e5-b066-64510658e388
spec:
  containers:
  - image: nginx
    imagePullPolicy: Always
    name: nginx
    resources:
      limits:
        cpu: 300m
        memory: 200Mi
      requests:
        cpu: 200m
        memory: 100Mi
    terminationMessagePath: /dev/termination-log
    volumeMounts:
```
Note that our nginx container has picked up the namespace default cpu and memory resource *limits* and *requests*.
@ -141,37 +155,39 @@ Let's create a pod that falls within the allowed limit boundaries.
```shell
$ kubectl create -f docs/admin/limitrange/valid-pod.yaml --namespace=limit-example
pod "valid-pod" created
$ kubectl get pods valid-pod --namespace=limit-example -o yaml | grep -C 6 resources
```
```yaml
uid: 162a12aa-7157-11e5-9921-286ed488f785
spec:
  containers:
  - image: gcr.io/google_containers/serve_hostname
    imagePullPolicy: IfNotPresent
    name: kubernetes-serve-hostname
    resources:
      limits:
        cpu: "1"
        memory: 512Mi
      requests:
        cpu: "1"
        memory: 512Mi
```
Now look at the Pod's resources field:
```shell
$ kubectl get pods valid-pod --namespace=limit-example -o yaml | grep -C 6 resources
uid: 3b1bfd7a-f53c-11e5-b066-64510658e388
spec:
  containers:
  - image: gcr.io/google_containers/serve_hostname
    imagePullPolicy: Always
    name: kubernetes-serve-hostname
    resources:
      limits:
        cpu: "1"
        memory: 512Mi
      requests:
        cpu: "1"
        memory: 512Mi
```
Note that this pod specifies explicit resource *limits* and *requests* so it did not pick up the namespace
default values.
Note: The *limits* for CPU resource are not enforced in the default Kubernetes setup on the physical node
Note: The *limits* for CPU resource are enforced in the default Kubernetes setup on the physical node
that runs the container unless the administrator deploys the kubelet with the following flag:
```shell
$ kubelet --help
Usage of kubelet
....
--cpu-cfs-quota[=false]: Enable CPU CFS quota enforcement for containers that specify CPU limits
$ kubelet --cpu-cfs-quota=true ...
--cpu-cfs-quota[=true]: Enable CPU CFS quota enforcement for containers that specify CPU limits
$ kubelet --cpu-cfs-quota=false ...
```
## Step 4: Cleanup
@ -182,8 +198,8 @@ To remove the resources used by this example, you can just delete the limit-example
$ kubectl delete namespace limit-example
namespace "limit-example" deleted
$ kubectl get namespaces
NAME LABELS STATUS AGE
default <none> Active 20m
NAME STATUS AGE
default Active 12m
```
## Summary
@ -191,4 +207,4 @@ default <none> Active 20m
Cluster operators that want to restrict the amount of resources a single container or pod may consume
are able to define allowable ranges per Kubernetes namespace. In the absence of any explicit assignments,
the Kubernetes system is able to apply default resource *limits* and *requests* if desired in order to
constrain the amount of resource a pod consumes on a node.


@ -0,0 +1,313 @@
---
---
## Introduction
Kubernetes 1.2 adds support for running a single cluster in multiple failure zones
(GCE calls them simply "zones", AWS calls them "availability zones", here we'll refer to them as "zones").
This is a lightweight version of a broader effort for federating multiple
Kubernetes clusters together (sometimes referred to by the affectionate
nickname ["Ubernetes"](https://github.com/kubernetes/kubernetes/blob/master/docs/proposals/federation.md)).
Full federation will allow combining separate
Kubernetes clusters running in different regions or clouds. However, many
users simply want to run a more available Kubernetes cluster in multiple zones
of their cloud provider, and this is what the multizone support in 1.2 allows
(we nickname this "Ubernetes Lite").
Multizone support is deliberately limited: a single Kubernetes cluster can run
in multiple zones, but only within the same region (and cloud provider). Only
GCE and AWS are currently supported automatically (though it is easy to
add similar support for other clouds or even bare metal, by simply arranging
for the appropriate labels to be added to nodes and volumes).
* TOC
{:toc}
## Functionality
When nodes are started, the kubelet automatically adds labels to them with
zone information.
Kubernetes will automatically spread the pods in a replication controller
or service across nodes in a single-zone cluster (to reduce the impact of
failures.) With multiple-zone clusters, this spreading behaviour is
extended across zones (to reduce the impact of zone failures.) (This is
achieved via `SelectorSpreadPriority`). This is a best-effort
placement, and so if the zones in your cluster are heterogeneous
(e.g. different numbers of nodes, different types of nodes, or
different pod resource requirements), this might prevent perfectly
even spreading of your pods across zones. If desired, you can use
homogeneous zones (same number and types of nodes) to reduce the
probability of unequal spreading.
When persistent volumes are created, the `PersistentVolumeLabel`
admission controller automatically adds zone labels to them. The scheduler (via the
`VolumeZonePredicate` predicate) will then ensure that pods that claim a
given volume are only placed into the same zone as that volume, as volumes
cannot be attached across zones.
## Limitations
There are some important limitations of the multizone support:
* We assume that the different zones are located close to each other in the
network, so we don't perform any zone-aware routing. In particular, traffic
that goes via services might cross zones (even if some pods backing that service
exist in the same zone as the client), and this may incur additional latency and cost.
* Volume zone-affinity will only work with a `PersistentVolume`, and will not
work if you directly specify an EBS volume in the pod spec (for example).
* Clusters cannot span clouds or regions (this functionality will require full
federation support).
* Although your nodes are in multiple zones, kube-up currently builds
a single master node by default. While services are highly
available and can tolerate the loss of a zone, the control plane is
located in a single zone. Users that want a highly available control
plane should follow the [high availability](/docs/admin/high-availability) instructions.
## Walkthrough
We're now going to walk through setting up and using a multi-zone
cluster on both GCE & AWS. To do so, you bring up a full cluster
(specifying `MULTIZONE=1`), and then you add nodes in additional zones
by running `kube-up` again (specifying `KUBE_USE_EXISTING_MASTER=true`).
### Bringing up your cluster
Create the cluster as normal, but pass `MULTIZONE=1` to tell the cluster to manage multiple zones, creating nodes in us-central1-a.
GCE:
```shell
curl -sS https://get.k8s.io | MULTIZONE=1 KUBERNETES_PROVIDER=gce KUBE_GCE_ZONE=us-central1-a NUM_NODES=3 bash
```
AWS:
```shell
curl -sS https://get.k8s.io | MULTIZONE=1 KUBERNETES_PROVIDER=aws KUBE_AWS_ZONE=us-west-2a NUM_NODES=3 bash
```
This step brings up a cluster as normal, still running in a single zone
(but `MULTIZONE=1` has enabled multi-zone capabilities).
### Nodes are labeled
View the nodes; you can see that they are labeled with zone information.
They are all in `us-central1-a` (GCE) or `us-west-2a` (AWS) so far. The
labels are `failure-domain.beta.kubernetes.io/region` for the region,
and `failure-domain.beta.kubernetes.io/zone` for the zone:
```shell
> kubectl get nodes --show-labels
NAME STATUS AGE LABELS
kubernetes-master Ready,SchedulingDisabled 6m beta.kubernetes.io/instance-type=n1-standard-1,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-master
kubernetes-minion-87j9 Ready 6m beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-87j9
kubernetes-minion-9vlv Ready 6m beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-9vlv
kubernetes-minion-a12q Ready 6m beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-a12q
```
### Add more nodes in a second zone
Let's add another set of nodes to the existing cluster, reusing the
existing master, running in a different zone (us-central1-b or us-west-2b).
We run kube-up again, but by specifying `KUBE_USE_EXISTING_MASTER=true`,
kube-up will reuse the previously created master rather than creating a new one.
GCE:
```shell
KUBE_USE_EXISTING_MASTER=true MULTIZONE=1 KUBERNETES_PROVIDER=gce KUBE_GCE_ZONE=us-central1-b NUM_NODES=3 kubernetes/cluster/kube-up.sh
```
On AWS we also need to specify the network CIDR for the additional
subnet, along with the master internal IP address:
```shell
KUBE_USE_EXISTING_MASTER=true MULTIZONE=1 KUBERNETES_PROVIDER=aws KUBE_AWS_ZONE=us-west-2b NUM_NODES=3 KUBE_SUBNET_CIDR=172.20.1.0/24 MASTER_INTERNAL_IP=172.20.0.9 kubernetes/cluster/kube-up.sh
```
View the nodes again; 3 more nodes should have launched and be tagged
in us-central1-b:
```shell
> kubectl get nodes --show-labels
NAME STATUS AGE LABELS
kubernetes-master Ready,SchedulingDisabled 16m beta.kubernetes.io/instance-type=n1-standard-1,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-master
kubernetes-minion-281d Ready 2m beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-b,kubernetes.io/hostname=kubernetes-minion-281d
kubernetes-minion-87j9 Ready 16m beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-87j9
kubernetes-minion-9vlv Ready 16m beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-9vlv
kubernetes-minion-a12q Ready 17m beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-a12q
kubernetes-minion-pp2f Ready 2m beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-b,kubernetes.io/hostname=kubernetes-minion-pp2f
kubernetes-minion-wf8i Ready 2m beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-b,kubernetes.io/hostname=kubernetes-minion-wf8i
```
### Volume affinity
Create a volume (only PersistentVolumes are supported for zone
affinity), using the new dynamic volume creation:
```json
kubectl create -f - <<EOF
{
"kind": "PersistentVolumeClaim",
"apiVersion": "v1",
"metadata": {
"name": "claim1",
"annotations": {
"volume.alpha.kubernetes.io/storage-class": "foo"
}
},
"spec": {
"accessModes": [
"ReadWriteOnce"
],
"resources": {
"requests": {
"storage": "5Gi"
}
}
}
}
EOF
```
The PV is also labeled with the zone & region it was created in. For
version 1.2, dynamic persistent volumes are always created in the zone
of the cluster master (here us-central1-a / us-west-2a); this will
be improved in a future version (issue [#23330](https://github.com/kubernetes/kubernetes/issues/23330)).
```shell
> kubectl get pv --show-labels
NAME CAPACITY ACCESSMODES STATUS CLAIM REASON AGE LABELS
pv-gce-mj4gm 5Gi RWO Bound default/claim1 46s failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a
```
So now we will create a pod that uses the persistent volume claim.
Because GCE PDs / AWS EBS volumes cannot be attached across zones,
this means that this pod can only be created in the same zone as the volume:
```yaml
kubectl create -f - <<EOF
kind: Pod
apiVersion: v1
metadata:
name: mypod
spec:
containers:
- name: myfrontend
image: nginx
volumeMounts:
- mountPath: "/var/www/html"
name: mypd
volumes:
- name: mypd
persistentVolumeClaim:
claimName: claim1
EOF
```
Note that the pod was automatically created in the same zone as the volume, as
cross-zone attachments are not generally permitted by cloud providers:
```shell
> kubectl describe pod mypod | grep Node
Node: kubernetes-minion-9vlv/10.240.0.5
> kubectl get node kubernetes-minion-9vlv --show-labels
NAME STATUS AGE LABELS
kubernetes-minion-9vlv Ready 22m beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-9vlv
```
### Pods are spread across zones
Pods in a replication controller or service are automatically spread
across zones. First, let's launch more nodes in a third zone:
GCE:
```shell
KUBE_USE_EXISTING_MASTER=true MULTIZONE=1 KUBERNETES_PROVIDER=gce KUBE_GCE_ZONE=us-central1-f NUM_NODES=3 kubernetes/cluster/kube-up.sh
```
AWS:
```shell
KUBE_USE_EXISTING_MASTER=true MULTIZONE=1 KUBERNETES_PROVIDER=aws KUBE_AWS_ZONE=us-west-2c NUM_NODES=3 KUBE_SUBNET_CIDR=172.20.2.0/24 MASTER_INTERNAL_IP=172.20.0.9 kubernetes/cluster/kube-up.sh
```
Verify that you now have nodes in 3 zones:
```shell
kubectl get nodes --show-labels
```
Create the guestbook-go example, which includes an RC of size 3, running a simple web app:
```shell
find kubernetes/examples/guestbook-go/ -name '*.json' | xargs -I {} kubectl create -f {}
```
The pods should be spread across all 3 zones:
```shell
> kubectl describe pod -l app=guestbook | grep Node
Node: kubernetes-minion-9vlv/10.240.0.5
Node: kubernetes-minion-281d/10.240.0.8
Node: kubernetes-minion-olsh/10.240.0.11
> kubectl get node kubernetes-minion-9vlv kubernetes-minion-281d kubernetes-minion-olsh --show-labels
NAME STATUS AGE LABELS
kubernetes-minion-9vlv Ready 34m beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-9vlv
kubernetes-minion-281d Ready 20m beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-b,kubernetes.io/hostname=kubernetes-minion-281d
kubernetes-minion-olsh Ready 3m beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-f,kubernetes.io/hostname=kubernetes-minion-olsh
```
Load-balancers span all zones in a cluster; the guestbook-go example
includes an example load-balanced service:
```shell
> kubectl describe service guestbook | grep LoadBalancer.Ingress
LoadBalancer Ingress: 130.211.126.21
> ip=130.211.126.21
> curl -s http://${ip}:3000/env | grep HOSTNAME
"HOSTNAME": "guestbook-44sep",
> (for i in `seq 20`; do curl -s http://${ip}:3000/env | grep HOSTNAME; done) | sort | uniq
"HOSTNAME": "guestbook-44sep",
"HOSTNAME": "guestbook-hum5n",
"HOSTNAME": "guestbook-ppm40",
```
The load balancer correctly targets all the pods, even though they are in multiple zones.
### Shutting down the cluster
When you're done, clean up:
GCE:
```shell
KUBERNETES_PROVIDER=gce KUBE_USE_EXISTING_MASTER=true KUBE_GCE_ZONE=us-central1-f kubernetes/cluster/kube-down.sh
KUBERNETES_PROVIDER=gce KUBE_USE_EXISTING_MASTER=true KUBE_GCE_ZONE=us-central1-b kubernetes/cluster/kube-down.sh
KUBERNETES_PROVIDER=gce KUBE_GCE_ZONE=us-central1-a kubernetes/cluster/kube-down.sh
```
AWS:
```shell
KUBERNETES_PROVIDER=aws KUBE_USE_EXISTING_MASTER=true KUBE_AWS_ZONE=us-west-2c kubernetes/cluster/kube-down.sh
KUBERNETES_PROVIDER=aws KUBE_USE_EXISTING_MASTER=true KUBE_AWS_ZONE=us-west-2b kubernetes/cluster/kube-down.sh
KUBERNETES_PROVIDER=aws KUBE_AWS_ZONE=us-west-2a kubernetes/cluster/kube-down.sh
```


@ -1,145 +0,0 @@
---
---
A Namespace is a mechanism to partition resources created by users into
a logically named group.
## Motivation
A single cluster should be able to satisfy the needs of multiple users or groups of users (henceforth a 'user community').
Each user community wants to be able to work in isolation from other communities.
Each user community has its own:
1. resources (pods, services, replication controllers, etc.)
2. policies (who can or cannot perform actions in their community)
3. constraints (this community is allowed this much quota, etc.)
A cluster operator may create a Namespace for each unique user community.
The Namespace provides a unique scope for:
1. named resources (to avoid basic naming collisions)
2. delegated management authority to trusted users
3. ability to limit community resource consumption
## Use cases
1. As a cluster operator, I want to support multiple user communities on a single cluster.
2. As a cluster operator, I want to delegate authority to partitions of the cluster to trusted users
in those communities.
3. As a cluster operator, I want to limit the amount of resources each community can consume in order
to limit the impact to other communities using the cluster.
4. As a cluster user, I want to interact with resources that are pertinent to my user community in
isolation of what other user communities are doing on the cluster.
## Usage
Look [here](/docs/admin/namespaces/) for an in depth example of namespaces.
### Viewing namespaces
You can list the current namespaces in a cluster using:
```shell
$ kubectl get namespaces
NAME LABELS STATUS
default <none> Active
kube-system <none> Active
```
Kubernetes starts with two initial namespaces:
* `default` The default namespace for objects with no other namespace
* `kube-system` The namespace for objects created by the Kubernetes system
You can also get the summary of a specific namespace using:
```shell
$ kubectl get namespaces <name>
```
Or you can get detailed information with:
```shell
$ kubectl describe namespaces <name>
Name: default
Labels: <none>
Status: Active
No resource quota.
Resource Limits
Type Resource Min Max Default
---- -------- --- --- ---
Container cpu - - 100m
```
Note that these details show both resource quota (if present) as well as resource limit ranges.
Resource quota tracks aggregate usage of resources in the *Namespace* and allows cluster operators
to define *Hard* resource usage limits that a *Namespace* may consume.
A limit range defines min/max constraints on the amount of resources a single entity can consume in
a *Namespace*.
See [Admission control: Limit Range](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/admission_control_limit_range.md)
A namespace can be in one of two phases:
* `Active` the namespace is in use
* `Terminating` the namespace is being deleted, and cannot be used for new objects
See the [design doc](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/namespaces.md#phases) for more details.
### Creating a new namespace
To create a new namespace, first create a new YAML file called `my-namespace.yaml` with the contents:
```yaml
apiVersion: v1
kind: Namespace
metadata:
name: <insert-namespace-name-here>
```
Note that the name of your namespace must be a DNS compatible label.
More information on the `finalizers` field can be found in the namespace [design doc](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/namespaces.md#finalizers).
Then run:
```shell
$ kubectl create -f ./my-namespace.yaml
```
### Working in namespaces
See [Setting the namespace for a request](/docs/user-guide/namespaces/#setting-the-namespace-for-a-request)
and [Setting the namespace preference](/docs/user-guide/namespaces/#setting-the-namespace-preference).
### Deleting a namespace
You can delete a namespace with
```shell
$ kubectl delete namespaces <insert-some-namespace-name>
```
**WARNING, this deletes _everything_ under the namespace!**
This delete is asynchronous, so for a time you will see the namespace in the `Terminating` state.
## Namespaces and DNS
When you create a [Service](/docs/user-guide/services), it creates a corresponding [DNS entry](/docs/admin/dns).
This entry is of the form `<service-name>.<namespace-name>.svc.cluster.local`, which means
that if a container just uses `<service-name>` it will resolve to the service which
is local to a namespace. This is useful for using the same configuration across
multiple namespaces such as Development, Staging and Production. If you want to reach
across namespaces, you need to use the fully qualified domain name (FQDN).
## Design
Details of the design of namespaces in Kubernetes, including a [detailed example](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/namespaces.md#example-openshift-origin-managing-a-kubernetes-namespace)
can be found in the [namespaces design doc](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/namespaces.md)


@ -1,234 +1,145 @@
---
---
Kubernetes _namespaces_ help different projects, teams, or customers to share a Kubernetes cluster.
A Namespace is a mechanism to partition resources created by users into
a logically named group.
It does this by providing the following:
## Motivation
1. A scope for [Names](/docs/user-guide/identifiers).
2. A mechanism to attach authorization and policy to a subsection of the cluster.
A single cluster should be able to satisfy the needs of multiple users or groups of users (henceforth a 'user community').
Use of multiple namespaces is optional.
Each user community wants to be able to work in isolation from other communities.
This example demonstrates how to use Kubernetes namespaces to subdivide your cluster.
Each user community has its own:
### Step Zero: Prerequisites
1. resources (pods, services, replication controllers, etc.)
2. policies (who can or cannot perform actions in their community)
3. constraints (this community is allowed this much quota, etc.)
This example assumes the following:
A cluster operator may create a Namespace for each unique user community.
1. You have an [existing Kubernetes cluster](/docs/getting-started-guides/).
2. You have a basic understanding of Kubernetes _[pods](/docs/user-guide/pods)_, _[services](/docs/user-guide/services)_, and _[replication controllers](/docs/user-guide/replication-controller)_.
The Namespace provides a unique scope for:
### Step One: Understand the default namespace
1. named resources (to avoid basic naming collisions)
2. delegated management authority to trusted users
3. ability to limit community resource consumption
By default, a Kubernetes cluster will instantiate a default namespace when provisioning the cluster to hold the default set of pods,
services, and replication controllers used by the cluster.
## Use cases
Assuming you have a fresh cluster, you can introspect the available namespaces by doing the following:
1. As a cluster operator, I want to support multiple user communities on a single cluster.
2. As a cluster operator, I want to delegate authority to partitions of the cluster to trusted users
in those communities.
3. As a cluster operator, I want to limit the amount of resources each community can consume in order
to limit the impact to other communities using the cluster.
4. As a cluster user, I want to interact with resources that are pertinent to my user community in
isolation of what other user communities are doing on the cluster.
## Usage
Look [here](/docs/admin/namespaces/) for an in depth example of namespaces.
### Viewing namespaces
You can list the current namespaces in a cluster using:
```shell
$ kubectl get namespaces
NAME LABELS
default <none>
NAME LABELS STATUS
default <none> Active
kube-system <none> Active
```
### Step Two: Create new namespaces
Kubernetes starts with two initial namespaces:
* `default` The default namespace for objects with no other namespace
* `kube-system` The namespace for objects created by the Kubernetes system
For this exercise, we will create two additional Kubernetes namespaces to hold our content.
Let's imagine a scenario where an organization is using a shared Kubernetes cluster for development and production use cases.
The development team would like to maintain a space in the cluster where they can get a view on the list of pods, services, and replication controllers
they use to build and run their application. In this space, Kubernetes resources come and go, and the restrictions on who can or cannot modify resources
are relaxed to enable agile development.
The operations team would like to maintain a space in the cluster where they can enforce strict procedures on who can or cannot manipulate the set of
pods, services, and replication controllers that run the production site.
One pattern this organization could follow is to partition the Kubernetes cluster into two namespaces: development and production.
Let's create two new namespaces to hold our work.
Use the file [`namespace-dev.json`](/docs/admin/namespaces/namespace-dev.json) which describes a development namespace:
{% include code.html language="json" file="namespace-dev.json" ghlink="/docs/admin/namespaces/namespace-dev.json" %}
Create the development namespace using kubectl.
You can also get the summary of a specific namespace using:
```shell
$ kubectl create -f docs/admin/namespaces/namespace-dev.json
$ kubectl get namespaces <name>
```
And then let's create the production namespace using kubectl.
Or you can get detailed information with:
```shell
$ kubectl create -f docs/admin/namespaces/namespace-prod.json
$ kubectl describe namespaces <name>
Name: default
Labels: <none>
Status: Active
No resource quota.
Resource Limits
Type Resource Min Max Default
---- -------- --- --- ---
Container cpu - - 100m
```
To be sure things are right, let's list all of the namespaces in our cluster.
Note that these details show both resource quota (if present) as well as resource limit ranges.
```shell
$ kubectl get namespaces
NAME LABELS STATUS
default <none> Active
development name=development Active
production name=production Active
```
Resource quota tracks aggregate usage of resources in the *Namespace* and allows cluster operators
to define *Hard* resource usage limits that a *Namespace* may consume.
### Step Three: Create pods in each namespace
A limit range defines min/max constraints on the amount of resources a single entity can consume in
a *Namespace*.
A Kubernetes namespace provides the scope for pods, services, and replication controllers in the cluster.
See [Admission control: Limit Range](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/admission_control_limit_range.md)
Users interacting with one namespace do not see the content in another namespace.
A namespace can be in one of two phases:
* `Active` the namespace is in use
* `Terminating` the namespace is being deleted, and cannot be used for new objects
To demonstrate this, let's spin up a simple replication controller and pod in the development namespace.
See the [design doc](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/namespaces.md#phases) for more details.
We first check what is the current context:
### Creating a new namespace
To create a new namespace, first create a new YAML file called `my-namespace.yaml` with the contents:
```yaml
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: REDACTED
server: https://130.211.122.180
name: lithe-cocoa-92103_kubernetes
contexts:
- context:
cluster: lithe-cocoa-92103_kubernetes
user: lithe-cocoa-92103_kubernetes
name: lithe-cocoa-92103_kubernetes
current-context: lithe-cocoa-92103_kubernetes
kind: Config
preferences: {}
users:
- name: lithe-cocoa-92103_kubernetes
user:
client-certificate-data: REDACTED
client-key-data: REDACTED
token: 65rZW78y8HbwXXtSXuUw9DbP4FLjHi4b
- name: lithe-cocoa-92103_kubernetes-basic-auth
user:
password: h5M0FtUUIflBSdI7
username: admin
kind: Namespace
metadata:
name: <insert-namespace-name-here>
```
The next step is to define a context for the kubectl client to work in each namespace. The value of "cluster" and "user" fields are copied from the current context.
Note that the name of your namespace must be a DNS compatible label.
More information on the `finalizers` field can be found in the namespace [design doc](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/namespaces.md#finalizers).
Then run:
```shell
$ kubectl config set-context dev --namespace=development --cluster=lithe-cocoa-92103_kubernetes --user=lithe-cocoa-92103_kubernetes
$ kubectl config set-context prod --namespace=production --cluster=lithe-cocoa-92103_kubernetes --user=lithe-cocoa-92103_kubernetes
$ kubectl create -f ./my-namespace.yaml
```
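Alternatively, a reasonably recent `kubectl` can create the namespace directly, without a file:

```shell
$ kubectl create namespace <insert-namespace-name-here>
```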
The above commands provide two request contexts you can switch between, depending on which namespace you
wish to work in.
### Working in namespaces
Let's switch to operate in the development namespace.
See [Setting the namespace for a request](/docs/user-guide/namespaces/#setting-the-namespace-for-a-request)
and [Setting the namespace preference](/docs/user-guide/namespaces/#setting-the-namespace-preference).
### Deleting a namespace
You can delete a namespace with
```shell
$ kubectl config use-context dev
$ kubectl delete namespaces <insert-some-namespace-name>
```
You can verify your current context by doing the following:
**WARNING, this deletes _everything_ under the namespace!**
```shell
$ kubectl config view
```
This delete is asynchronous, so for a time you will see the namespace in the `Terminating` state.
```yaml
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: REDACTED
server: https://130.211.122.180
name: lithe-cocoa-92103_kubernetes
contexts:
- context:
cluster: lithe-cocoa-92103_kubernetes
namespace: development
user: lithe-cocoa-92103_kubernetes
name: dev
- context:
cluster: lithe-cocoa-92103_kubernetes
user: lithe-cocoa-92103_kubernetes
name: lithe-cocoa-92103_kubernetes
- context:
cluster: lithe-cocoa-92103_kubernetes
namespace: production
user: lithe-cocoa-92103_kubernetes
name: prod
current-context: dev
kind: Config
preferences: {}
users:
- name: lithe-cocoa-92103_kubernetes
user:
client-certificate-data: REDACTED
client-key-data: REDACTED
token: 65rZW78y8HbwXXtSXuUw9DbP4FLjHi4b
- name: lithe-cocoa-92103_kubernetes-basic-auth
user:
password: h5M0FtUUIflBSdI7
username: admin
```
## Namespaces and DNS
At this point, all requests we make to the Kubernetes cluster from the command line are scoped to the development namespace.
When you create a [Service](/docs/user-guide/services), it creates a corresponding [DNS entry](/docs/admin/dns).
This entry is of the form `<service-name>.<namespace-name>.svc.cluster.local`, which means
that if a container just uses `<service-name>` it will resolve to the service which
is local to a namespace. This is useful for using the same configuration across
multiple namespaces such as Development, Staging and Production. If you want to reach
across namespaces, you need to use the fully qualified domain name (FQDN).
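For example (a sketch; the service name is illustrative):

```shell
# From a pod in the "development" namespace:
curl http://my-service                                # resolves within development
curl http://my-service.production.svc.cluster.local   # reaches across namespaces
```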
Let's create some content.
## Design
```shell
$ kubectl run snowflake --image=kubernetes/serve_hostname --replicas=2
```
We have just created a replication controller with a replica size of 2 that runs a pod called snowflake, with a basic container that just serves the hostname.
```shell
$ kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
snowflake snowflake kubernetes/serve_hostname run=snowflake 2
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
snowflake-8w0qn 1/1 Running 0 22s
snowflake-jrpzb 1/1 Running 0 22s
```
This is great: developers are able to do what they want, and they do not have to worry about affecting content in the production namespace.
Let's switch to the production namespace and show how resources in one namespace are hidden from the other.
```shell
$ kubectl config use-context prod
```
The production namespace should be empty.
```shell
$ kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
```
Production likes to run cattle, so let's create some cattle pods.
```shell
$ kubectl run cattle --image=kubernetes/serve_hostname --replicas=5
$ kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
cattle cattle kubernetes/serve_hostname run=cattle 5
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
cattle-97rva 1/1 Running 0 12s
cattle-i9ojn 1/1 Running 0 12s
cattle-qj3yv 1/1 Running 0 12s
cattle-yc7vn 1/1 Running 0 12s
cattle-zz7ea 1/1 Running 0 12s
```
At this point, it should be clear that the resources users create in one namespace are hidden from the other namespace.
As the policy support in Kubernetes evolves, we will extend this scenario to show how you can provide different
authorization rules for each namespace.
Details of the design of namespaces in Kubernetes, including a [detailed example](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/namespaces.md#example-openshift-origin-managing-a-kubernetes-namespace)
can be found in the [namespaces design doc](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/namespaces.md)


@ -0,0 +1,200 @@
---
---
Kubernetes _namespaces_ help different projects, teams, or customers to share a Kubernetes cluster.
It does this by providing the following:
1. A scope for [Names](/docs/user-guide/identifiers/).
2. A mechanism to attach authorization and policy to a subsection of the cluster.
Use of multiple namespaces is optional.
This example demonstrates how to use Kubernetes namespaces to subdivide your cluster.
### Step Zero: Prerequisites
This example assumes the following:
1. You have an [existing Kubernetes cluster](/docs/getting-started-guides/).
2. You have a basic understanding of Kubernetes _[Pods](/docs/user-guide/pods/)_, _[Services](/docs/user-guide/services/)_, and _[Deployments](/docs/user-guide/deployments/)_.
### Step One: Understand the default namespace
By default, a Kubernetes cluster will instantiate a default namespace when provisioning the cluster to hold the default set of Pods,
Services, and Deployments used by the cluster.
Assuming you have a fresh cluster, you can introspect the available namespaces by doing the following:
```shell
$ kubectl get namespaces
NAME STATUS AGE
default Active 13m
```
### Step Two: Create new namespaces
For this exercise, we will create two additional Kubernetes namespaces to hold our content.
Let's imagine a scenario where an organization is using a shared Kubernetes cluster for development and production use cases.
The development team would like to maintain a space in the cluster where they can get a view on the list of Pods, Services, and Deployments
they use to build and run their application. In this space, Kubernetes resources come and go, and the restrictions on who can or cannot modify resources
are relaxed to enable agile development.
The operations team would like to maintain a space in the cluster where they can enforce strict procedures on who can or cannot manipulate the set of
Pods, Services, and Deployments that run the production site.
One pattern this organization could follow is to partition the Kubernetes cluster into two namespaces: development and production.
Let's create two new namespaces to hold our work.
Use the file [`namespace-dev.json`](/docs/admin/namespaces/namespace-dev.json) which describes a development namespace:
{% include code.html language="json" file="namespace-dev.json" ghlink="/docs/admin/namespaces/namespace-dev.json" %}
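If you prefer to create the file yourself, its content is roughly the following (a sketch; the linked file in the repo is authoritative):
```shell
$ cat <<EOF > namespace-dev.json
{
  "kind": "Namespace",
  "apiVersion": "v1",
  "metadata": {
    "name": "development",
    "labels": {
      "name": "development"
    }
  }
}
EOF
```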
Create the development namespace using kubectl.
```shell
$ kubectl create -f docs/admin/namespaces/namespace-dev.json
```
And then let's create the production namespace using kubectl.
```shell
$ kubectl create -f docs/admin/namespaces/namespace-prod.json
```
To be sure things are right, let's list all of the namespaces in our cluster.
```shell
$ kubectl get namespaces --show-labels
NAME STATUS AGE LABELS
default Active 32m <none>
development Active 29s name=development
production Active 23s name=production
```
### Step Three: Create pods in each namespace
A Kubernetes namespace provides the scope for Pods, Services, and Deployments in the cluster.
Users interacting with one namespace do not see the content in another namespace.
To demonstrate this, let's spin up a simple Deployment and Pods in the development namespace.
We first check what the current context is:
```shell
$ kubectl config view
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: REDACTED
server: https://130.211.122.180
name: lithe-cocoa-92103_kubernetes
contexts:
- context:
cluster: lithe-cocoa-92103_kubernetes
user: lithe-cocoa-92103_kubernetes
name: lithe-cocoa-92103_kubernetes
current-context: lithe-cocoa-92103_kubernetes
kind: Config
preferences: {}
users:
- name: lithe-cocoa-92103_kubernetes
user:
client-certificate-data: REDACTED
client-key-data: REDACTED
token: 65rZW78y8HbwXXtSXuUw9DbP4FLjHi4b
- name: lithe-cocoa-92103_kubernetes-basic-auth
user:
password: h5M0FtUUIflBSdI7
username: admin
$ kubectl config current-context
lithe-cocoa-92103_kubernetes
```
The next step is to define a context for the kubectl client to work in each namespace. The values of the "cluster" and "user" fields are copied from the current context.
```shell
$ kubectl config set-context dev --namespace=development --cluster=lithe-cocoa-92103_kubernetes --user=lithe-cocoa-92103_kubernetes
$ kubectl config set-context prod --namespace=production --cluster=lithe-cocoa-92103_kubernetes --user=lithe-cocoa-92103_kubernetes
```
The above commands created two request contexts you can alternate between, depending on which
namespace you wish to work in.
Let's switch to operate in the development namespace.
```shell
$ kubectl config use-context dev
```
You can verify your current context by doing the following:
```shell
$ kubectl config current-context
dev
```
At this point, all requests we make to the Kubernetes cluster from the command line are scoped to the development namespace.
Let's create some content.
```shell
$ kubectl run snowflake --image=kubernetes/serve_hostname --replicas=2
```
We have just created a deployment with a replica count of 2 that runs a pod called snowflake, with a basic container that simply serves its hostname.
Note that `kubectl run` creates deployments only on Kubernetes clusters >= v1.2. If you are running older versions, it creates replication controllers instead.
If you want to obtain the old behavior, use `--generator=run/v1` to create replication controllers. See [`kubectl run`](/docs/user-guide/kubectl/kubectl_run/) for more details.
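For example, to get the pre-1.2 behavior with the same image as above:
```shell
$ kubectl run snowflake --image=kubernetes/serve_hostname --replicas=2 --generator=run/v1
```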
```shell
$ kubectl get deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
snowflake 2 2 2 2 2m
$ kubectl get pods -l run=snowflake
NAME READY STATUS RESTARTS AGE
snowflake-3968820950-9dgr8 1/1 Running 0 2m
snowflake-3968820950-vgc4n 1/1 Running 0 2m
```
And this is great: developers are able to do what they want, and they do not have to worry about affecting content in the production namespace.
Let's switch to the production namespace and show how resources in one namespace are hidden from the other.
```shell
$ kubectl config use-context prod
```
The production namespace should be empty, and the following commands should return nothing.
```shell
$ kubectl get deployment
$ kubectl get pods
```
Production likes to run cattle, so let's create some cattle pods.
```shell
$ kubectl run cattle --image=kubernetes/serve_hostname --replicas=5
$ kubectl get deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
cattle 5 5 5 5 10s
$ kubectl get pods -l run=cattle
NAME READY STATUS RESTARTS AGE
cattle-2263376956-41xy6 1/1 Running 0 34s
cattle-2263376956-kw466 1/1 Running 0 34s
cattle-2263376956-n4v97 1/1 Running 0 34s
cattle-2263376956-p5p3i 1/1 Running 0 34s
cattle-2263376956-sxpth 1/1 Running 0 34s
```
At this point, it should be clear that the resources users create in one namespace are hidden from the other namespace.
As the policy support in Kubernetes evolves, we will extend this scenario to show how you can provide different
authorization rules for each namespace.

View File

@ -5,10 +5,10 @@ Kubernetes approaches networking somewhat differently than Docker does by
default. There are 4 distinct networking problems to solve:
1. Highly-coupled container-to-container communications: this is solved by
[pods](/docs/user-guide/pods) and `localhost` communications.
[pods](/docs/user-guide/pods/) and `localhost` communications.
2. Pod-to-Pod communications: this is the primary focus of this document.
3. Pod-to-Service communications: this is covered by [services](/docs/user-guide/services).
4. External-to-Service communications: this is covered by [services](/docs/user-guide/services).
3. Pod-to-Service communications: this is covered by [services](/docs/user-guide/services/).
4. External-to-Service communications: this is covered by [services](/docs/user-guide/services/).
* TOC
{:toc}
@ -102,7 +102,7 @@ here.
### Google Compute Engine (GCE)
For the Google Compute Engine cluster configuration scripts, we use [advanced
routing](https://developers.google.com/compute/docs/networking#routing) to
routing](https://cloud.google.com/compute/docs/networking#routing) to
assign each VM a subnet (default is `/24` - 254 IPs). Any traffic bound for that
subnet will be routed directly to the VM by the GCE network fabric. This is in
addition to the "main" IP address assigned to the VM, which is NAT'ed for
@ -177,8 +177,12 @@ network, primarily aiming at Docker integration.
[Calico](https://github.com/projectcalico/calico-containers) uses BGP to enable real container
IPs.
### Romana
[Romana](https://romana.io) is an open source software-defined networking (SDN) solution that lets you deploy Kubernetes without an overlay network.
## Other reading
The early design of the networking model and its rationale, and some future
plans are described in more detail in the [networking design
document](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/networking.md).
document](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/networking.md).

View File

@ -61,8 +61,8 @@ the following conditions mean the node is in sane state:
"conditions": [
{
"kind": "Ready",
"status": "True",
},
"status": "True"
}
]
```

View File

@ -1,154 +0,0 @@
---
---
When several users or teams share a cluster with a fixed number of nodes,
there is a concern that one team could use more than its fair share of resources.
Resource quotas are a tool for administrators to address this concern. Resource quotas
work like this:
- Different teams work in different namespaces. Currently this is voluntary, but
support for making this mandatory via ACLs is planned.
- The administrator creates a Resource Quota for each namespace.
- Users put compute resource requests on their pods. The sum of all resource requests across
all pods in the same namespace must not exceed any hard resource limit in any Resource Quota
document for the namespace. Note that we used to verify Resource Quota by taking the sum of
resource limits of the pods, but this was altered to use resource requests. Backwards compatibility
for those pods previously created is preserved because pods that only specify a resource limit have
their resource requests defaulted to match their defined limits. The user is only charged for the
resources they request in the Resource Quota versus their limits because the request is the minimum
amount of resource guaranteed by the cluster during scheduling. For more information on overcommit,
see [compute-resources](/docs/user-guide/compute-resources).
- If creating a pod would cause the namespace to exceed any of the limits specified in
the Resource Quota for that namespace, then the request will fail with HTTP status
code `403 FORBIDDEN`.
- If quota is enabled in a namespace and the user does not specify *requests* on the pod for each
of the resources for which quota is enabled, then the POST of the pod will fail with HTTP
status code `403 FORBIDDEN`. Hint: Use the LimitRange admission controller to force default
values of *limits* (then resource *requests* would be equal to *limits* by default, see
[admission controller](/docs/admin/admission-controllers)) before the quota is checked to avoid this problem.
Examples of policies that could be created using namespaces and quotas are:
- In a cluster with a capacity of 32 GiB RAM and 16 cores, let team A use 20 GiB and 10 cores,
let team B use 10 GiB and 4 cores, and hold 2 GiB and 2 cores in reserve for future allocation.
- Limit the "testing" namespace to using 1 core and 1GiB RAM. Let the "production" namespace
use any amount.
In the case where the total capacity of the cluster is less than the sum of the quotas of the namespaces,
there may be contention for resources. This is handled on a first-come-first-served basis.
Neither contention nor changes to quota will affect already-running pods.
## Enabling Resource Quota
Resource Quota support is enabled by default for many Kubernetes distributions. It is
enabled when the apiserver `--admission-control=` flag has `ResourceQuota` as
one of its arguments.
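For example, an apiserver invocation might include a flag like the following (a sketch; the other admission controllers shown are illustrative, only `ResourceQuota` is required for quota support):
```shell
kube-apiserver ... --admission-control=NamespaceLifecycle,LimitRanger,ResourceQuota
```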
Resource Quota is enforced in a particular namespace when there is a
`ResourceQuota` object in that namespace. There should be at most one
`ResourceQuota` object in a namespace.
## Compute Resource Quota
The total sum of [compute resources](/docs/user-guide/compute-resources) requested by pods
in a namespace can be limited. The following compute resource types are supported:
| ResourceName | Description |
| ------------ | ----------- |
| cpu | Total cpu requests of containers |
| memory | Total memory requests of containers |
For example, `cpu` quota sums up the `resources.requests.cpu` fields of every
container of every pod in the namespace, and enforces a maximum on that sum.
## Object Count Quota
The number of objects of a given type can be restricted. The following types
are supported:
| ResourceName | Description |
| ------------ | ----------- |
| pods | Total number of pods |
| services | Total number of services |
| replicationcontrollers | Total number of replication controllers |
| resourcequotas | Total number of [resource quotas](/docs/admin/admission-controllers/#resourcequota) |
| secrets | Total number of secrets |
| persistentvolumeclaims | Total number of [persistent volume claims](/docs/user-guide/persistent-volumes/#persistentvolumeclaims) |
For example, `pods` quota counts and enforces a maximum on the number of `pods`
created in a single namespace.
You might want to set a pods quota on a namespace
to avoid the case where a user creates many small pods and exhausts the cluster's
supply of Pod IPs.
## Viewing and Setting Quotas
Kubectl supports creating, updating, and viewing quotas:
```shell
$ kubectl namespace myspace
$ cat <<EOF > quota.json
{
"apiVersion": "v1",
"kind": "ResourceQuota",
"metadata": {
"name": "quota"
},
"spec": {
"hard": {
"memory": "1Gi",
"cpu": "20",
"pods": "10",
"services": "5",
"replicationcontrollers":"20",
"resourcequotas":"1"
}
}
}
EOF
$ kubectl create -f ./quota.json
$ kubectl get quota
NAME
quota
$ kubectl describe quota quota
Name: quota
Resource Used Hard
-------- ---- ----
cpu 0m 20
memory 0 1Gi
pods 5 10
replicationcontrollers 5 20
resourcequotas 1 1
services 3 5
```
## Quota and Cluster Capacity
Resource Quota objects are independent of the Cluster Capacity. They are
expressed in absolute units. So, if you add nodes to your cluster, this does *not*
automatically give each namespace the ability to consume more resources.
Sometimes more complex policies may be desired, such as:
- proportionally divide total cluster resources among several teams.
- allow each tenant to grow resource usage as needed, but have a generous
limit to prevent accidental resource exhaustion.
- detect demand from one namespace, add nodes, and increase quota.
Such policies could be implemented using ResourceQuota as a building-block, by
writing a 'controller' which watches the quota usage and adjusts the quota
hard limits of each namespace according to other signals.
Note that resource quota divides up aggregate cluster resources, but it creates no
restrictions around nodes: pods from several namespaces may run on the same node.
## Example
See a [detailed example for how to use resource quota](/docs/admin/resourcequota/).
## Read More
See [ResourceQuota design doc](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/admission_control_resource_quota.md) for more information.

View File

@ -1,156 +1,154 @@
---
---
This example demonstrates how [resource quota](/docs/admin/admission-controllers/#resourcequota) and
[limit ranger](/docs/admin/admission-controllers/#limitranger) can be applied to a Kubernetes namespace.
When several users or teams share a cluster with a fixed number of nodes,
there is a concern that one team could use more than its fair share of resources.
Resource quotas are a tool for administrators to address this concern. Resource quotas
work like this:
- Different teams work in different namespaces. Currently this is voluntary, but
support for making this mandatory via ACLs is planned.
- The administrator creates a Resource Quota for each namespace.
- Users put compute resource requests on their pods. The sum of all resource requests across
all pods in the same namespace must not exceed any hard resource limit in any Resource Quota
document for the namespace. Note that we used to verify Resource Quota by taking the sum of
resource limits of the pods, but this was altered to use resource requests. Backwards compatibility
for those pods previously created is preserved because pods that only specify a resource limit have
their resource requests defaulted to match their defined limits. The user is only charged for the
resources they request in the Resource Quota versus their limits because the request is the minimum
amount of resource guaranteed by the cluster during scheduling. For more information on overcommit,
see [compute-resources](/docs/user-guide/compute-resources).
- If creating a pod would cause the namespace to exceed any of the limits specified in
the Resource Quota for that namespace, then the request will fail with HTTP status
code `403 FORBIDDEN`.
- If quota is enabled in a namespace and the user does not specify *requests* on the pod for each
of the resources for which quota is enabled, then the POST of the pod will fail with HTTP
status code `403 FORBIDDEN`. Hint: Use the LimitRange admission controller to force default
values of *limits* (then resource *requests* would be equal to *limits* by default, see
[admission controller](/docs/admin/admission-controllers)) before the quota is checked to avoid this problem.
Examples of policies that could be created using namespaces and quotas are:
- In a cluster with a capacity of 32 GiB RAM and 16 cores, let team A use 20 GiB and 10 cores,
let team B use 10 GiB and 4 cores, and hold 2 GiB and 2 cores in reserve for future allocation.
- Limit the "testing" namespace to using 1 core and 1GiB RAM. Let the "production" namespace
use any amount.
In the case where the total capacity of the cluster is less than the sum of the quotas of the namespaces,
there may be contention for resources. This is handled on a first-come-first-served basis.
Neither contention nor changes to quota will affect already-running pods.
## Enabling Resource Quota
Resource Quota support is enabled by default for many Kubernetes distributions. It is
enabled when the apiserver `--admission-control=` flag has `ResourceQuota` as
one of its arguments.
Resource Quota is enforced in a particular namespace when there is a
`ResourceQuota` object in that namespace. There should be at most one
`ResourceQuota` object in a namespace.
## Compute Resource Quota
The total sum of [compute resources](/docs/user-guide/compute-resources) requested by pods
in a namespace can be limited. The following compute resource types are supported:
| ResourceName | Description |
| ------------ | ----------- |
| cpu | Total cpu requests of containers |
| memory | Total memory requests of containers |
For example, `cpu` quota sums up the `resources.requests.cpu` fields of every
container of every pod in the namespace, and enforces a maximum on that sum.
## Object Count Quota
The number of objects of a given type can be restricted. The following types
are supported:
| ResourceName | Description |
| ------------ | ----------- |
| pods | Total number of pods |
| services | Total number of services |
| replicationcontrollers | Total number of replication controllers |
| resourcequotas | Total number of [resource quotas](/docs/admin/admission-controllers/#resourcequota) |
| secrets | Total number of secrets |
| persistentvolumeclaims | Total number of [persistent volume claims](/docs/user-guide/persistent-volumes/#persistentvolumeclaims) |
For example, `pods` quota counts and enforces a maximum on the number of `pods`
created in a single namespace.
You might want to set a pods quota on a namespace
to avoid the case where a user creates many small pods and exhausts the cluster's
supply of Pod IPs.
## Viewing and Setting Quotas
Kubectl supports creating, updating, and viewing quotas:
```shell
$ kubectl namespace myspace
$ cat <<EOF > quota.json
{
"apiVersion": "v1",
"kind": "ResourceQuota",
"metadata": {
"name": "quota"
},
"spec": {
"hard": {
"memory": "1Gi",
"cpu": "20",
"pods": "10",
"services": "5",
"replicationcontrollers":"20",
"resourcequotas":"1"
}
}
}
EOF
$ kubectl create -f ./quota.json
$ kubectl get quota
NAME
quota
$ kubectl describe quota quota
Name: quota
Resource Used Hard
-------- ---- ----
cpu 0m 20
memory 0 1Gi
pods 5 10
replicationcontrollers 5 20
resourcequotas 1 1
services 3 5
```
## Quota and Cluster Capacity
Resource Quota objects are independent of the Cluster Capacity. They are
expressed in absolute units. So, if you add nodes to your cluster, this does *not*
automatically give each namespace the ability to consume more resources.
Sometimes more complex policies may be desired, such as:
- proportionally divide total cluster resources among several teams.
- allow each tenant to grow resource usage as needed, but have a generous
limit to prevent accidental resource exhaustion.
- detect demand from one namespace, add nodes, and increase quota.
Such policies could be implemented using ResourceQuota as a building-block, by
writing a 'controller' which watches the quota usage and adjusts the quota
hard limits of each namespace according to other signals.
Note that resource quota divides up aggregate cluster resources, but it creates no
restrictions around nodes: pods from several namespaces may run on the same node.
## Example
See a [detailed example for how to use resource quota](/docs/admin/resourcequota/).
## Read More
See [ResourceQuota design doc](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/admission_control_resource_quota.md) for more information.
This example assumes you have a functional Kubernetes setup.
## Step 1: Create a namespace
This example will work in a custom namespace to demonstrate the concepts involved.
Let's create a new namespace called quota-example:
```shell
$ kubectl create -f docs/admin/resourcequota/namespace.yaml
namespace "quota-example" created
$ kubectl get namespaces
NAME LABELS STATUS AGE
default <none> Active 2m
quota-example <none> Active 39s
```
## Step 2: Apply a quota to the namespace
By default, a pod will run with unbounded CPU and memory requests/limits. This means that any pod in the
system will be able to consume as much CPU and memory as is available on the node that executes the pod.
Users may want to restrict how much of the cluster resources a given namespace may consume
across all of its pods in order to manage cluster usage. To do this, a user applies a quota to
a namespace. A quota lets the user set hard limits on the total amount of node resources (cpu, memory)
and API resources (pods, services, etc.) that a namespace may consume. In terms of resources, Kubernetes
checks the total resource *requests*, not the resource *limits*, of all containers/pods in the namespace.
Let's create a simple quota in our namespace:
```shell
$ kubectl create -f docs/admin/resourcequota/quota.yaml --namespace=quota-example
resourcequota "quota" created
```
Once your quota is applied to a namespace, the system will restrict any creation of content
in the namespace until the quota usage has been calculated. This should happen quickly.
You can describe your current quota usage to see what resources are being consumed in your
namespace.
```shell
$ kubectl describe quota quota --namespace=quota-example
Name: quota
Namespace: quota-example
Resource Used Hard
-------- ---- ----
cpu 0 20
memory 0 1Gi
persistentvolumeclaims 0 10
pods 0 10
replicationcontrollers 0 20
resourcequotas 1 1
secrets 1 10
services 0 5
```
## Step 3: Applying default resource requests and limits
Pod authors rarely specify resource requests and limits for their pods.
Since we applied a quota to our project, let's see what happens when an end-user creates a pod with unbounded
cpu and memory, by running an nginx container.
To demonstrate, let's create a replication controller that runs nginx:
```shell
$ kubectl run nginx --image=nginx --replicas=1 --namespace=quota-example
replicationcontroller "nginx" created
```
Now let's look at the pods that were created.
```shell
$ kubectl get pods --namespace=quota-example
NAME READY STATUS RESTARTS AGE
```
What happened? I have no pods! Let's describe the replication controller to get a view of what is happening.
```shell
$ kubectl describe rc nginx --namespace=quota-example
Name: nginx
Namespace: quota-example
Image(s): nginx
Selector: run=nginx
Labels: run=nginx
Replicas: 0 current / 1 desired
Pods Status: 0 Running / 0 Waiting / 0 Succeeded / 0 Failed
No volumes.
Events:
FirstSeen LastSeen Count From SubobjectPath Reason Message
42s 11s 3 {replication-controller } FailedCreate Error creating: Pod "nginx-" is forbidden: Must make a non-zero request for memory since it is tracked by quota.
```
The Kubernetes API server is rejecting the replication controller's requests to create a pod because our pods
do not specify any memory usage *request*.
So let's set some default values for the amount of cpu and memory a pod can consume:
```shell
$ kubectl create -f docs/admin/resourcequota/limits.yaml --namespace=quota-example
limitrange "limits" created
$ kubectl describe limits limits --namespace=quota-example
Name: limits
Namespace: quota-example
Type Resource Min Max Request Limit Limit/Request
---- -------- --- --- ------- ----- -------------
Container memory - - 256Mi 512Mi -
Container cpu - - 100m 200m -
```
Now any time a pod is created in this namespace, if it has not specified any resource request/limit, the default
amount of cpu and memory per container will be applied, and the request will be used as part of admission control.
Now that we have applied default resource *requests* for our namespace, our replication controller should be able to
create its pods.
```shell
$ kubectl get pods --namespace=quota-example
NAME READY STATUS RESTARTS AGE
nginx-fca65 1/1 Running 0 1m
```
And if we print out our quota usage in the namespace:
```shell
$ kubectl describe quota quota --namespace=quota-example
Name: quota
Namespace: quota-example
Resource Used Hard
-------- ---- ----
cpu 100m 20
memory 256Mi 1Gi
persistentvolumeclaims 0 10
pods 1 10
replicationcontrollers 1 20
resourcequotas 1 1
secrets 1 10
services 0 5
```
You can now see that the pod that was created is consuming explicit amounts of resources (specified by resource *request*), and its usage is being tracked by the Kubernetes system properly.
## Summary
Actions that consume node resources for cpu and memory can be subject to hard quota limits defined by the namespace quota. The resource consumption is measured by the resource *requests* in the pod specification.
Any action that consumes those resources can be tweaked, or can pick up namespace-level defaults to meet your end goal.

View File

@ -0,0 +1,164 @@
---
---
This example demonstrates how [resource quota](/docs/admin/admission-controllers/#resourcequota) and
[limit ranger](/docs/admin/admission-controllers/#limitranger) can be applied to a Kubernetes namespace.
See [ResourceQuota design doc](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/admission_control_resource_quota.md) for more information.
This example assumes you have a functional Kubernetes setup.
## Step 1: Create a namespace
This example will work in a custom namespace to demonstrate the concepts involved.
Let's create a new namespace called quota-example:
```shell
$ kubectl create namespace quota-example
namespace "quota-example" created
```
Note that `kubectl` commands will print the type and name of the resource created or mutated, which can then be used in subsequent commands:
```shell
$ kubectl get namespaces
NAME STATUS AGE
default Active 50m
quota-example Active 2s
```
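For instance, the printed name can be fed straight into a follow-up command (output omitted):
```shell
$ kubectl describe namespace quota-example
```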
## Step 2: Apply a quota to the namespace
By default, a pod will run with unbounded CPU and memory requests/limits. This means that any pod in the
system will be able to consume as much CPU and memory as is available on the node that executes the pod.
Users may want to restrict how much of the cluster resources a given namespace may consume
across all of its pods in order to manage cluster usage. To do this, a user applies a quota to
a namespace. A quota lets the user set hard limits on the total amount of node resources (cpu, memory)
and API resources (pods, services, etc.) that a namespace may consume. In terms of resources, Kubernetes
checks the total resource *requests*, not the resource *limits*, of all containers/pods in the namespace.
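To make the distinction concrete, here is an illustrative pod manifest (hypothetical names); only the values under `requests` are charged against the quota:
```shell
$ cat <<EOF > pod-with-requests.yaml
apiVersion: v1
kind: Pod
metadata:
  name: quota-demo
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:        # counted against the namespace quota
        cpu: 100m
        memory: 256Mi
      limits:          # not counted against quota; caps actual usage at runtime
        cpu: 200m
        memory: 512Mi
EOF
```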
Let's create a simple quota in our namespace:
```shell
$ kubectl create -f docs/admin/resourcequota/quota.yaml --namespace=quota-example
resourcequota "quota" created
```
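The quota manifest itself is not reproduced here; a sketch consistent with the `describe` output below would be:
```shell
$ cat <<EOF > quota.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota
spec:
  hard:
    cpu: "20"
    memory: 1Gi
    persistentvolumeclaims: "10"
    pods: "10"
    replicationcontrollers: "20"
    resourcequotas: "1"
    secrets: "10"
    services: "5"
EOF
```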
Once your quota is applied to a namespace, the system will restrict any creation of content
in the namespace until the quota usage has been calculated. This should happen quickly.
You can describe your current quota usage to see what resources are being consumed in your
namespace.
```shell
$ kubectl describe quota quota --namespace=quota-example
Name: quota
Namespace: quota-example
Resource Used Hard
-------- ---- ----
cpu 0 20
memory 0 1Gi
persistentvolumeclaims 0 10
pods 0 10
replicationcontrollers 0 20
resourcequotas 1 1
secrets 1 10
services 0 5
```
## Step 3: Applying default resource requests and limits
Pod authors rarely specify resource requests and limits for their pods.
Since we applied a quota to our project, let's see what happens when an end-user creates a pod with unbounded
cpu and memory, by running an nginx container.
To demonstrate, let's create a Deployment that runs nginx:
```shell
$ kubectl run nginx --image=nginx --replicas=1 --namespace=quota-example
deployment "nginx" created
```
This creates a Deployment "nginx" with its underlying resource, a ReplicaSet, which handles the creation and deletion of Pod replicas. Now let's look at the pods that were created.
```shell
$ kubectl get pods --namespace=quota-example
NAME READY STATUS RESTARTS AGE
```
What happened? I have no pods! Let's describe the ReplicaSet managed by the nginx Deployment to get a view of what is happening.
Note that `kubectl describe rs` works only on Kubernetes clusters >= v1.2. If you are running older versions, use `kubectl describe rc` instead.
If you want to obtain the old behavior, use `--generator=run/v1` to create replication controllers. See [`kubectl run`](/docs/user-guide/kubectl/kubectl_run/) for more details.
```shell
$ kubectl describe rs -l run=nginx --namespace=quota-example
Name: nginx-2040093540
Namespace: quota-example
Image(s): nginx
Selector: pod-template-hash=2040093540,run=nginx
Labels: pod-template-hash=2040093540,run=nginx
Replicas: 0 current / 1 desired
Pods Status: 0 Running / 0 Waiting / 0 Succeeded / 0 Failed
No volumes.
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
48s 26s 4 {replicaset-controller } Warning FailedCreate Error creating: pods "nginx-2040093540-" is forbidden: Failed quota: quota: must specify cpu,memory
```
The Kubernetes API server is rejecting the ReplicaSet's requests to create a pod because our pods
do not specify any memory usage *request*.
So let's set some default values for the amount of cpu and memory a pod can consume:
```shell
$ kubectl create -f docs/admin/resourcequota/limits.yaml --namespace=quota-example
limitrange "limits" created
$ kubectl describe limits limits --namespace=quota-example
Name: limits
Namespace: quota-example
Type Resource Min Max Default Request Default Limit Max Limit/Request Ratio
---- -------- --- --- --------------- ------------- -----------------------
Container cpu - - 100m 200m -
Container memory - - 256Mi 512Mi -
```
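The limits manifest itself is not reproduced here; a `LimitRange` sketch consistent with the `describe` output above would be:
```shell
$ cat <<EOF > limits.yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: limits
spec:
  limits:
  - type: Container
    defaultRequest:    # default requests applied to containers that specify none
      cpu: 100m
      memory: 256Mi
    default:           # default limits applied to containers that specify none
      cpu: 200m
      memory: 512Mi
EOF
```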
Now any time a pod is created in this namespace, if it has not specified any resource request/limit, the default
amount of cpu and memory per container will be applied, and the request will be used as part of admission control.
Now that we have applied default resource *requests* for our namespace, our Deployment should be able to
create its pods.
```shell
$ kubectl get pods --namespace=quota-example
NAME READY STATUS RESTARTS AGE
nginx-2040093540-miohp 1/1 Running 0 5s
```
And if we print out our quota usage in the namespace:
```shell
$ kubectl describe quota quota --namespace=quota-example
Name: quota
Namespace: quota-example
Resource Used Hard
-------- ---- ----
cpu 100m 20
memory 256Mi 1Gi
persistentvolumeclaims 0 10
pods 1 10
replicationcontrollers 1 20
resourcequotas 1 1
secrets 1 10
services 0 5
```
You can now see that the pod that was created is consuming explicit amounts of resources (specified by resource *request*), and its usage is being tracked by the Kubernetes system properly.
## Summary
Actions that consume node resources for cpu and memory can be subject to hard quota limits defined by the namespace quota. The resource consumption is measured by the resource *requests* in the pod specification.
Any action that consumes those resources can be tweaked, or can pick up namespace-level defaults to meet your end goal.

View File

@ -0,0 +1,785 @@
---
---
<!DOCTYPE html>
<html lang="en">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<meta name="generator" content="Asciidoctor 0.1.4">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Top Level API Objects</title>
</head>
<body class="article">
<div id="header">
</div>
<div id="content">
<div class="sect1">
<h2 id="_top_level_api_objects">Top Level API Objects</h2>
<div class="sectionbody">
<div class="ulist">
<ul>
<li>
<p><a href="#_v1_horizontalpodautoscaler">v1.HorizontalPodAutoscaler</a></p>
</li>
<li>
<p><a href="#_v1_horizontalpodautoscalerlist">v1.HorizontalPodAutoscalerList</a></p>
</li>
</ul>
</div>
</div>
</div>
<div class="sect1">
<h2 id="_definitions">Definitions</h2>
<div class="sectionbody">
<div class="sect2">
<h3 id="_unversioned_patch">unversioned.Patch</h3>
<div class="paragraph">
<p>Patch is provided to give a concrete name and type to the Kubernetes PATCH request body.</p>
</div>
</div>
<div class="sect2">
<h3 id="_v1_deleteoptions">v1.DeleteOptions</h3>
<div class="paragraph">
<p>DeleteOptions may be provided when deleting an API object</p>
</div>
<table class="tableblock frame-all grid-all" style="width:100%; ">
<colgroup>
<col style="width:20%;">
<col style="width:20%;">
<col style="width:20%;">
<col style="width:20%;">
<col style="width:20%;">
</colgroup>
<thead>
<tr>
<th class="tableblock halign-left valign-top">Name</th>
<th class="tableblock halign-left valign-top">Description</th>
<th class="tableblock halign-left valign-top">Required</th>
<th class="tableblock halign-left valign-top">Schema</th>
<th class="tableblock halign-left valign-top">Default</th>
</tr>
</thead>
<tbody>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">kind</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: <a href="http://releases.k8s.io/release-1.2/docs/devel/api-conventions.md#types-kinds">http://releases.k8s.io/release-1.2/docs/devel/api-conventions.md#types-kinds</a></p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">false</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">string</p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">apiVersion</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: <a href="http://releases.k8s.io/release-1.2/docs/devel/api-conventions.md#resources">http://releases.k8s.io/release-1.2/docs/devel/api-conventions.md#resources</a></p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">false</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">string</p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">gracePeriodSeconds</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately.</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">true</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">integer (int64)</p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
</tbody>
</table>
</div>
<div class="sect2">
<h3 id="_v1_horizontalpodautoscalerlist">v1.HorizontalPodAutoscalerList</h3>
<div class="paragraph">
<p>list of horizontal pod autoscaler objects.</p>
</div>
<table class="tableblock frame-all grid-all" style="width:100%; ">
<colgroup>
<col style="width:20%;">
<col style="width:20%;">
<col style="width:20%;">
<col style="width:20%;">
<col style="width:20%;">
</colgroup>
<thead>
<tr>
<th class="tableblock halign-left valign-top">Name</th>
<th class="tableblock halign-left valign-top">Description</th>
<th class="tableblock halign-left valign-top">Required</th>
<th class="tableblock halign-left valign-top">Schema</th>
<th class="tableblock halign-left valign-top">Default</th>
</tr>
</thead>
<tbody>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">kind</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: <a href="http://releases.k8s.io/release-1.2/docs/devel/api-conventions.md#types-kinds">http://releases.k8s.io/release-1.2/docs/devel/api-conventions.md#types-kinds</a></p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">false</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">string</p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">apiVersion</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: <a href="http://releases.k8s.io/release-1.2/docs/devel/api-conventions.md#resources">http://releases.k8s.io/release-1.2/docs/devel/api-conventions.md#resources</a></p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">false</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">string</p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">metadata</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">Standard list metadata.</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">false</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock"><a href="#_unversioned_listmeta">unversioned.ListMeta</a></p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">items</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">list of horizontal pod autoscaler objects.</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">true</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock"><a href="#_v1_horizontalpodautoscaler">v1.HorizontalPodAutoscaler</a> array</p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
</tbody>
</table>
</div>
<div class="sect2">
<h3 id="_unversioned_statusdetails">unversioned.StatusDetails</h3>
<div class="paragraph">
<p>StatusDetails is a set of additional properties that MAY be set by the server to provide additional information about a response. The Reason field of a Status object defines what attributes will be set. Clients must ignore fields that do not match the defined type of each attribute, and should assume that any attribute may be empty, invalid, or under defined.</p>
</div>
<table class="tableblock frame-all grid-all" style="width:100%; ">
<colgroup>
<col style="width:20%;">
<col style="width:20%;">
<col style="width:20%;">
<col style="width:20%;">
<col style="width:20%;">
</colgroup>
<thead>
<tr>
<th class="tableblock halign-left valign-top">Name</th>
<th class="tableblock halign-left valign-top">Description</th>
<th class="tableblock halign-left valign-top">Required</th>
<th class="tableblock halign-left valign-top">Schema</th>
<th class="tableblock halign-left valign-top">Default</th>
</tr>
</thead>
<tbody>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">name</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">The name attribute of the resource associated with the status StatusReason (when there is a single name which can be described).</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">false</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">string</p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">group</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">The group attribute of the resource associated with the status StatusReason.</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">false</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">string</p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">kind</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">The kind attribute of the resource associated with the status StatusReason. On some operations may differ from the requested resource Kind. More info: <a href="http://releases.k8s.io/release-1.2/docs/devel/api-conventions.md#types-kinds">http://releases.k8s.io/release-1.2/docs/devel/api-conventions.md#types-kinds</a></p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">false</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">string</p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">causes</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">The Causes array includes more details associated with the StatusReason failure. Not all StatusReasons may provide detailed causes.</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">false</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock"><a href="#_unversioned_statuscause">unversioned.StatusCause</a> array</p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">retryAfterSeconds</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">If specified, the time in seconds before the operation should be retried.</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">false</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">integer (int32)</p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
</tbody>
</table>
</div>
<div class="sect2">
<h3 id="_unversioned_listmeta">unversioned.ListMeta</h3>
<div class="paragraph">
<p>ListMeta describes metadata that synthetic resources must have, including lists and various status objects. A resource may have only one of {ObjectMeta, ListMeta}.</p>
</div>
<table class="tableblock frame-all grid-all" style="width:100%; ">
<colgroup>
<col style="width:20%;">
<col style="width:20%;">
<col style="width:20%;">
<col style="width:20%;">
<col style="width:20%;">
</colgroup>
<thead>
<tr>
<th class="tableblock halign-left valign-top">Name</th>
<th class="tableblock halign-left valign-top">Description</th>
<th class="tableblock halign-left valign-top">Required</th>
<th class="tableblock halign-left valign-top">Schema</th>
<th class="tableblock halign-left valign-top">Default</th>
</tr>
</thead>
<tbody>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">selfLink</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">SelfLink is a URL representing this object. Populated by the system. Read-only.</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">false</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">string</p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">resourceVersion</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">String that identifies the server&#8217;s internal version of this object that can be used by clients to determine when objects have changed. Value must be treated as opaque by clients and passed unmodified back to the server. Populated by the system. Read-only. More info: <a href="http://releases.k8s.io/release-1.2/docs/devel/api-conventions.md#concurrency-control-and-consistency">http://releases.k8s.io/release-1.2/docs/devel/api-conventions.md#concurrency-control-and-consistency</a></p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">false</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">string</p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
</tbody>
</table>
</div>
<div class="sect2">
<h3 id="_v1_crossversionobjectreference">v1.CrossVersionObjectReference</h3>
<div class="paragraph">
<p>CrossVersionObjectReference contains enough information to let you identify the referred resource.</p>
</div>
<table class="tableblock frame-all grid-all" style="width:100%; ">
<colgroup>
<col style="width:20%;">
<col style="width:20%;">
<col style="width:20%;">
<col style="width:20%;">
<col style="width:20%;">
</colgroup>
<thead>
<tr>
<th class="tableblock halign-left valign-top">Name</th>
<th class="tableblock halign-left valign-top">Description</th>
<th class="tableblock halign-left valign-top">Required</th>
<th class="tableblock halign-left valign-top">Schema</th>
<th class="tableblock halign-left valign-top">Default</th>
</tr>
</thead>
<tbody>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">kind</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">Kind of the referent; More info: <a href="http://releases.k8s.io/release-1.2/docs/devel/api-conventions.md#types-kinds"">http://releases.k8s.io/release-1.2/docs/devel/api-conventions.md#types-kinds"</a></p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">true</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">string</p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">name</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">Name of the referent; More info: <a href="http://releases.k8s.io/release-1.2/docs/user-guide/identifiers.md#names">http://releases.k8s.io/release-1.2/docs/user-guide/identifiers.md#names</a></p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">true</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">string</p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">apiVersion</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">API version of the referent</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">false</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">string</p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
</tbody>
</table>
</div>
<div class="sect2">
<h3 id="_v1_horizontalpodautoscaler">v1.HorizontalPodAutoscaler</h3>
<div class="paragraph">
<p>configuration of a horizontal pod autoscaler.</p>
</div>
<table class="tableblock frame-all grid-all" style="width:100%; ">
<colgroup>
<col style="width:20%;">
<col style="width:20%;">
<col style="width:20%;">
<col style="width:20%;">
<col style="width:20%;">
</colgroup>
<thead>
<tr>
<th class="tableblock halign-left valign-top">Name</th>
<th class="tableblock halign-left valign-top">Description</th>
<th class="tableblock halign-left valign-top">Required</th>
<th class="tableblock halign-left valign-top">Schema</th>
<th class="tableblock halign-left valign-top">Default</th>
</tr>
</thead>
<tbody>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">kind</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: <a href="http://releases.k8s.io/release-1.2/docs/devel/api-conventions.md#types-kinds">http://releases.k8s.io/release-1.2/docs/devel/api-conventions.md#types-kinds</a></p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">false</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">string</p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">apiVersion</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: <a href="http://releases.k8s.io/release-1.2/docs/devel/api-conventions.md#resources">http://releases.k8s.io/release-1.2/docs/devel/api-conventions.md#resources</a></p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">false</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">string</p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">metadata</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">Standard object metadata. More info: <a href="http://releases.k8s.io/release-1.2/docs/devel/api-conventions.md#metadata">http://releases.k8s.io/release-1.2/docs/devel/api-conventions.md#metadata</a></p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">false</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock"><a href="#_v1_objectmeta">v1.ObjectMeta</a></p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">spec</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">behaviour of autoscaler. More info: <a href="http://releases.k8s.io/release-1.2/docs/devel/api-conventions.md#spec-and-status">http://releases.k8s.io/release-1.2/docs/devel/api-conventions.md#spec-and-status</a>.</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">false</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock"><a href="#_v1_horizontalpodautoscalerspec">v1.HorizontalPodAutoscalerSpec</a></p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">status</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">current information about the autoscaler.</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">false</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock"><a href="#_v1_horizontalpodautoscalerstatus">v1.HorizontalPodAutoscalerStatus</a></p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
</tbody>
</table>
</div>
<div class="sect2">
<h3 id="_unversioned_status">unversioned.Status</h3>
<div class="paragraph">
<p>Status is a return value for calls that don&#8217;t return other objects.</p>
</div>
<table class="tableblock frame-all grid-all" style="width:100%; ">
<colgroup>
<col style="width:20%;">
<col style="width:20%;">
<col style="width:20%;">
<col style="width:20%;">
<col style="width:20%;">
</colgroup>
<thead>
<tr>
<th class="tableblock halign-left valign-top">Name</th>
<th class="tableblock halign-left valign-top">Description</th>
<th class="tableblock halign-left valign-top">Required</th>
<th class="tableblock halign-left valign-top">Schema</th>
<th class="tableblock halign-left valign-top">Default</th>
</tr>
</thead>
<tbody>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">kind</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: <a href="http://releases.k8s.io/release-1.2/docs/devel/api-conventions.md#types-kinds">http://releases.k8s.io/release-1.2/docs/devel/api-conventions.md#types-kinds</a></p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">false</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">string</p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">apiVersion</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: <a href="http://releases.k8s.io/release-1.2/docs/devel/api-conventions.md#resources">http://releases.k8s.io/release-1.2/docs/devel/api-conventions.md#resources</a></p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">false</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">string</p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">metadata</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">Standard list metadata. More info: <a href="http://releases.k8s.io/release-1.2/docs/devel/api-conventions.md#types-kinds">http://releases.k8s.io/release-1.2/docs/devel/api-conventions.md#types-kinds</a></p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">false</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock"><a href="#_unversioned_listmeta">unversioned.ListMeta</a></p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">status</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">Status of the operation. One of: "Success" or "Failure". More info: <a href="http://releases.k8s.io/release-1.2/docs/devel/api-conventions.md#spec-and-status">http://releases.k8s.io/release-1.2/docs/devel/api-conventions.md#spec-and-status</a></p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">false</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">string</p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">message</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">A human-readable description of the status of this operation.</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">false</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">string</p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">reason</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">A machine-readable description of why this operation is in the "Failure" status. If this value is empty there is no information available. A Reason clarifies an HTTP status code but does not override it.</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">false</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">string</p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">details</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">Extended data associated with the reason. Each reason may define its own extended details. This field is optional and the data returned is not guaranteed to conform to any schema except that defined by the reason type.</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">false</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock"><a href="#_unversioned_statusdetails">unversioned.StatusDetails</a></p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">code</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">Suggested HTTP return code for this status, 0 if not set.</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">false</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">integer (int32)</p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
</tbody>
</table>
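<div class="paragraph">
<p>For illustration, a request for a missing resource returns a Status along these lines. This is a sketch against a locally proxied, unsecured API server; the pod name and field values are hypothetical:</p>
</div>
<div class="listingblock">
<div class="content">
<pre>$ curl http://localhost:8080/api/v1/namespaces/default/pods/no-such-pod
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "pods \"no-such-pod\" not found",
  "reason": "NotFound",
  "details": { "name": "no-such-pod", "kind": "pods" },
  "code": 404
}</pre>
</div>
</div>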
</div>
<div class="sect2">
<h3 id="_v1_horizontalpodautoscalerstatus">v1.HorizontalPodAutoscalerStatus</h3>
<div class="paragraph">
<p>current status of a horizontal pod autoscaler</p>
</div>
<table class="tableblock frame-all grid-all" style="width:100%; ">
<colgroup>
<col style="width:20%;">
<col style="width:20%;">
<col style="width:20%;">
<col style="width:20%;">
<col style="width:20%;">
</colgroup>
<thead>
<tr>
<th class="tableblock halign-left valign-top">Name</th>
<th class="tableblock halign-left valign-top">Description</th>
<th class="tableblock halign-left valign-top">Required</th>
<th class="tableblock halign-left valign-top">Schema</th>
<th class="tableblock halign-left valign-top">Default</th>
</tr>
</thead>
<tbody>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">observedGeneration</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">most recent generation observed by this autoscaler.</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">false</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">integer (int64)</p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">lastScaleTime</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">last time the HorizontalPodAutoscaler scaled the number of pods; used by the autoscaler to control how often the number of pods is changed.</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">false</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">string</p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">currentReplicas</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">current number of replicas of pods managed by this autoscaler.</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">true</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">integer (int32)</p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">desiredReplicas</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">desired number of replicas of pods managed by this autoscaler.</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">true</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">integer (int32)</p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">currentCPUUtilizationPercentage</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">current average CPU utilization over all pods, represented as a percentage of requested CPU, e.g. 70 means that an average pod is using now 70% of its requested CPU.</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">false</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">integer (int32)</p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
</tbody>
</table>
</div>
<div class="sect2">
<h3 id="_v1_horizontalpodautoscalerspec">v1.HorizontalPodAutoscalerSpec</h3>
<div class="paragraph">
<p>specification of a horizontal pod autoscaler.</p>
</div>
<table class="tableblock frame-all grid-all" style="width:100%; ">
<colgroup>
<col style="width:20%;">
<col style="width:20%;">
<col style="width:20%;">
<col style="width:20%;">
<col style="width:20%;">
</colgroup>
<thead>
<tr>
<th class="tableblock halign-left valign-top">Name</th>
<th class="tableblock halign-left valign-top">Description</th>
<th class="tableblock halign-left valign-top">Required</th>
<th class="tableblock halign-left valign-top">Schema</th>
<th class="tableblock halign-left valign-top">Default</th>
</tr>
</thead>
<tbody>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">scaleTargetRef</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">reference to scaled resource; horizontal pod autoscaler will learn the current resource consumption and will set the desired number of pods by using its Scale subresource.</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">true</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock"><a href="#_v1_crossversionobjectreference">v1.CrossVersionObjectReference</a></p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">minReplicas</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">lower limit for the number of pods that can be set by the autoscaler, default 1.</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">false</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">integer (int32)</p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">maxReplicas</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">upper limit for the number of pods that can be set by the autoscaler; cannot be smaller than MinReplicas.</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">true</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">integer (int32)</p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">targetCPUUtilizationPercentage</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">target average CPU utilization (represented as a percentage of requested CPU) over all the pods; if not specified the default autoscaling policy will be used.</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">false</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">integer (int32)</p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
</tbody>
</table>
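<div class="paragraph">
<p>A minimal manifest exercising these fields, created via a shell heredoc. This is a sketch: the resource names are hypothetical, and the <code>autoscaling/v1</code> group and the <code>hpa</code> short name are assumptions for this release:</p>
</div>
<div class="listingblock">
<div class="content">
<pre>$ cat &lt;&lt;EOF | kubectl create -f -
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-autoscaler         # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: nginx-deployment       # hypothetical scale target
  minReplicas: 1
  maxReplicas: 4
  targetCPUUtilizationPercentage: 70
EOF
$ kubectl get hpa nginx-autoscaler   # reports the status fields described above</pre>
</div>
</div>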
</div>
<div class="sect2">
<h3 id="_v1_objectmeta">v1.ObjectMeta</h3>
<div class="paragraph">
<p>ObjectMeta is metadata that all persisted resources must have, which includes all objects users must create.</p>
</div>
<table class="tableblock frame-all grid-all" style="width:100%; ">
<colgroup>
<col style="width:20%;">
<col style="width:20%;">
<col style="width:20%;">
<col style="width:20%;">
<col style="width:20%;">
</colgroup>
<thead>
<tr>
<th class="tableblock halign-left valign-top">Name</th>
<th class="tableblock halign-left valign-top">Description</th>
<th class="tableblock halign-left valign-top">Required</th>
<th class="tableblock halign-left valign-top">Schema</th>
<th class="tableblock halign-left valign-top">Default</th>
</tr>
</thead>
<tbody>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">name</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: <a href="http://releases.k8s.io/release-1.2/docs/user-guide/identifiers.md#names">http://releases.k8s.io/release-1.2/docs/user-guide/identifiers.md#names</a></p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">false</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">string</p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">generateName</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">GenerateName is an optional prefix, used by the server, to generate a unique name ONLY IF the Name field has not been provided. If this field is used, the name returned to the client will be different than the name passed. This value will also be combined with a unique suffix. The provided value has the same validation rules as the Name field, and may be truncated by the length of the suffix required to make the value unique on the server.<br>
<br>
If this field is specified and the generated name exists, the server will NOT return a 409 - instead, it will either return 201 Created or 500 with Reason ServerTimeout indicating a unique name could not be found in the time allotted, and the client should retry (optionally after the time indicated in the Retry-After header).<br>
<br>
Applied only if Name is not specified. More info: <a href="http://releases.k8s.io/release-1.2/docs/devel/api-conventions.md#idempotency">http://releases.k8s.io/release-1.2/docs/devel/api-conventions.md#idempotency</a></p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">false</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">string</p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">namespace</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">Namespace defines the space within each name must be unique. An empty namespace is equivalent to the "default" namespace, but "default" is the canonical representation. Not all objects are required to be scoped to a namespace - the value of this field for those objects will be empty.<br>
<br>
Must be a DNS_LABEL. Cannot be updated. More info: <a href="http://releases.k8s.io/release-1.2/docs/user-guide/namespaces.md">http://releases.k8s.io/release-1.2/docs/user-guide/namespaces.md</a></p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">false</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">string</p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">selfLink</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">SelfLink is a URL representing this object. Populated by the system. Read-only.</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">false</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">string</p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">uid</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">UID is the unique in time and space value for this object. It is typically generated by the server on successful creation of a resource and is not allowed to change on PUT operations.<br>
<br>
Populated by the system. Read-only. More info: <a href="http://releases.k8s.io/release-1.2/docs/user-guide/identifiers.md#uids">http://releases.k8s.io/release-1.2/docs/user-guide/identifiers.md#uids</a></p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">false</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">string</p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">resourceVersion</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">An opaque value that represents the internal version of this object that can be used by clients to determine when objects have changed. May be used for optimistic concurrency, change detection, and the watch operation on a resource or set of resources. Clients must treat these values as opaque and passed unmodified back to the server. They may only be valid for a particular resource or set of resources.<br>
<br>
Populated by the system. Read-only. Value must be treated as opaque by clients. More info: <a href="http://releases.k8s.io/release-1.2/docs/devel/api-conventions.md#concurrency-control-and-consistency">http://releases.k8s.io/release-1.2/docs/devel/api-conventions.md#concurrency-control-and-consistency</a></p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">false</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">string</p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">generation</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">A sequence number representing a specific generation of the desired state. Populated by the system. Read-only.</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">false</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">integer (int64)</p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">creationTimestamp</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">CreationTimestamp is a timestamp representing the server time when this object was created. It is not guaranteed to be set in happens-before order across separate operations. Clients may not set this value. It is represented in RFC3339 form and is in UTC.<br>
<br>
Populated by the system. Read-only. Null for lists. More info: <a href="http://releases.k8s.io/release-1.2/docs/devel/api-conventions.md#metadata">http://releases.k8s.io/release-1.2/docs/devel/api-conventions.md#metadata</a></p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">false</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">string</p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">deletionTimestamp</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">DeletionTimestamp is RFC 3339 date and time at which this resource will be deleted. This field is set by the server when a graceful deletion is requested by the user, and is not directly settable by a client. The resource will be deleted (no longer visible from resource lists, and not reachable by name) after the time in this field. Once set, this value may not be unset or be set further into the future, although it may be shortened or the resource may be deleted prior to this time. For example, a user may request that a pod is deleted in 30 seconds. The Kubelet will react by sending a graceful termination signal to the containers in the pod. Once the resource is deleted in the API, the Kubelet will send a hard termination signal to the container. If not set, graceful deletion of the object has not been requested.<br>
<br>
Populated by the system when a graceful deletion is requested. Read-only. More info: <a href="http://releases.k8s.io/release-1.2/docs/devel/api-conventions.md#metadata">http://releases.k8s.io/release-1.2/docs/devel/api-conventions.md#metadata</a></p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">false</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">string</p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">deletionGracePeriodSeconds</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">Number of seconds allowed for this object to gracefully terminate before it will be removed from the system. Only set when deletionTimestamp is also set. May only be shortened. Read-only.</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">false</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">integer (int64)</p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">labels</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: <a href="http://releases.k8s.io/release-1.2/docs/user-guide/labels.md">http://releases.k8s.io/release-1.2/docs/user-guide/labels.md</a></p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">false</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock"><a href="#_any">any</a></p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">annotations</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects. More info: <a href="http://releases.k8s.io/release-1.2/docs/user-guide/annotations.md">http://releases.k8s.io/release-1.2/docs/user-guide/annotations.md</a></p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">false</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock"><a href="#_any">any</a></p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
</tbody>
</table>
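<div class="paragraph">
<p>These fields appear at the top of every retrieved object. For example, a sketch with a hypothetical pod name and illustrative values:</p>
</div>
<div class="listingblock">
<div class="content">
<pre>$ kubectl get pod my-pod -o yaml    # the returned object begins with metadata like:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: 2016-03-24T17:55:28Z
  labels:
    run: my-pod
  name: my-pod
  namespace: default
  resourceVersion: "151017"
  selfLink: /api/v1/namespaces/default/pods/my-pod
  uid: 981fe302-f1e9-11e5-9a78-42010af00005
spec:
  ...</pre>
</div>
</div>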
</div>
<div class="sect2">
<h3 id="_unversioned_statuscause">unversioned.StatusCause</h3>
<div class="paragraph">
<p>StatusCause provides more information about an api.Status failure, including cases when multiple errors are encountered.</p>
</div>
<table class="tableblock frame-all grid-all" style="width:100%; ">
<colgroup>
<col style="width:20%;">
<col style="width:20%;">
<col style="width:20%;">
<col style="width:20%;">
<col style="width:20%;">
</colgroup>
<thead>
<tr>
<th class="tableblock halign-left valign-top">Name</th>
<th class="tableblock halign-left valign-top">Description</th>
<th class="tableblock halign-left valign-top">Required</th>
<th class="tableblock halign-left valign-top">Schema</th>
<th class="tableblock halign-left valign-top">Default</th>
</tr>
</thead>
<tbody>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">reason</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">A machine-readable description of the cause of the error. If this value is empty there is no information available.</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">false</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">string</p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">message</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">A human-readable description of the cause of the error. This field may be presented as-is to a reader.</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">false</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">string</p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">field</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">The field of the resource that has caused this error, as named by its JSON serialization. May include dot and postfix notation for nested attributes. Arrays are zero-indexed. Fields may appear more than once in an array of causes due to fields having multiple errors. Optional.<br>
<br>
Examples:<br>
"name" - the field "name" on the current resource<br>
"items[0].name" - the field "name" on the first array entry in "items"</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">false</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">string</p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
</tbody>
</table>
</div>
<div class="sect2">
<h3 id="_json_watchevent">json.WatchEvent</h3>
<table class="tableblock frame-all grid-all" style="width:100%; ">
<colgroup>
<col style="width:20%;">
<col style="width:20%;">
<col style="width:20%;">
<col style="width:20%;">
<col style="width:20%;">
</colgroup>
<thead>
<tr>
<th class="tableblock halign-left valign-top">Name</th>
<th class="tableblock halign-left valign-top">Description</th>
<th class="tableblock halign-left valign-top">Required</th>
<th class="tableblock halign-left valign-top">Schema</th>
<th class="tableblock halign-left valign-top">Default</th>
</tr>
</thead>
<tbody>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">type</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">the type of watch event; may be ADDED, MODIFIED, DELETED, or ERROR</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">false</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">string</p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">object</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">the object being watched; will match the type of the resource endpoint or be a Status object if the type is ERROR</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">false</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">string</p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
</tbody>
</table>
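<div class="paragraph">
<p>Watch endpoints stream one such event per line. A sketch against a locally proxied, unsecured API server, with the object bodies abbreviated:</p>
</div>
<div class="listingblock">
<div class="content">
<pre>$ curl "http://localhost:8080/api/v1/namespaces/default/pods?watch=true"
{"type":"ADDED","object":{"kind":"Pod","apiVersion":"v1", ...}}
{"type":"MODIFIED","object":{"kind":"Pod","apiVersion":"v1", ...}}
{"type":"DELETED","object":{"kind":"Pod","apiVersion":"v1", ...}}</pre>
</div>
</div>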
</div>
<div class="sect2">
<h3 id="_any">any</h3>
<div class="paragraph">
<p>Represents an untyped JSON map - see the description of the field for more info about the structure of this object.</p>
</div>
</div>
</div>
</div>
</div>
<div id="footer">
<div id="footer-text">
Last updated 2016-03-15 20:37:10 UTC
</div>
</div>
</body>
</html>

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@ -1,22 +1,3 @@
---
---
<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->
<!-- END MUNGE: UNVERSIONED_WARNING -->
<!-- needed for gh-pages to render html files when imported -->
{% include v1.2/extensions-v1beta1-definitions.html %}
<!-- BEGIN MUNGE: IS_VERSIONED -->
<!-- TAG IS_VERSIONED -->
<!-- END MUNGE: IS_VERSIONED -->
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/api-reference/extensions/v1beta1/definitions.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

View File

@ -1,22 +1,3 @@
---
---
<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->
<!-- END MUNGE: UNVERSIONED_WARNING -->
<!-- needed for gh-pages to render html files when imported -->
{% include v1.2/extensions-v1beta1-operations.html %}
<!-- BEGIN MUNGE: IS_VERSIONED -->
<!-- TAG IS_VERSIONED -->
<!-- END MUNGE: IS_VERSIONED -->
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/api-reference/extensions/v1beta1/operations.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

View File

@ -1,22 +1,3 @@
---
---
<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->
<!-- END MUNGE: UNVERSIONED_WARNING -->
<!-- needed for gh-pages to render html files when imported -->
{% include v1.2/v1-definitions.html %}
<!-- BEGIN MUNGE: IS_VERSIONED -->
<!-- TAG IS_VERSIONED -->
<!-- END MUNGE: IS_VERSIONED -->
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/api-reference/v1/definitions.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

View File

@ -1,22 +1,3 @@
---
---
<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->
<!-- END MUNGE: UNVERSIONED_WARNING -->
<!-- needed for gh-pages to render html files when imported -->
{% include v1.2/v1-operations.html %}
<!-- BEGIN MUNGE: IS_VERSIONED -->
<!-- TAG IS_VERSIONED -->
<!-- END MUNGE: IS_VERSIONED -->
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/api-reference/v1/operations.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

View File

@ -12,6 +12,8 @@ The list of binary releases is available for download from the [GitHub Kubernete
Download the latest release and unpack this tar file on Linux or OS X, cd to the created `kubernetes/` directory, and then follow the getting started guide for your cloud.
On OS X you can also use the [homebrew](http://brew.sh/) package manager: `brew install kubernetes-cli`
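For example, on Linux, the download-and-unpack step looks like this (a sketch; the version and URL pattern are assumptions, so check the releases page for the file you actually want):

```shell
# a sketch; substitute the release you want from the GitHub releases page
wget https://github.com/kubernetes/kubernetes/releases/download/v1.2.0/kubernetes.tar.gz
tar -xzf kubernetes.tar.gz
cd kubernetes
```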
### Building from source
Get the Kubernetes source. If you are simply building a release from source there is no need to set up a full golang environment as all building happens in a Docker container.
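A minimal sketch of that flow, assuming Docker is installed (`make release` per the upstream build scripts):

```shell
git clone https://github.com/kubernetes/kubernetes.git
cd kubernetes
make release   # builds binaries and release tarballs inside a Docker container
```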

View File

@ -21,22 +21,16 @@ $ export DNS_SERVER_IP=10.0.0.10 # specify in startup parameter `--cluster-dns`
### Replace the corresponding value in the template and create the pod
```shell
```shell{% raw %}
$ sed -e "s/{{ pillar\['dns_replicas'\] }}/${DNS_REPLICAS}/g;s/{{ pillar\['dns_domain'\] }}/${DNS_DOMAIN}/g;s/{{ pillar\['dns_server'\] }}/${DNS_SERVER_IP}/g" skydns.yaml.in > ./skydns.yaml
# If the kube-system namespace isn't already created, create it
$ kubectl get ns
$ kubectl create -f ./kube-system.yaml
$ kubectl create -f ./skydns.yaml
$ kubectl create -f ./skydns.yaml{% endraw %}
```
### Test if DNS works
Follow [this link](https://releases.k8s.io/{{page.githubbranch}}/cluster/addons/dns#how-do-i-test-if-it-is-working) to check it out.
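In short, the check described there resolves a service name from inside a pod; a sketch, assuming a `busybox` pod is already running in the cluster:

```shell
kubectl exec busybox -- nslookup kubernetes.default
```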

View File

@ -14,7 +14,7 @@ Here's a diagram of what the final result will look like:
1. You need to have docker installed on one machine.
2. Decide what Kubernetes version to use. Set the `${K8S_VERSION}` variable to
a released version of Kubernetes >= "1.2.0-alpha.7"
a released version of Kubernetes >= "1.2.0"
### Run it
@ -28,6 +28,7 @@ docker run \
--net=host \
--pid=host \
--privileged=true \
--name=kubelet \
-d \
gcr.io/google_containers/hyperkube-amd64:v${K8S_VERSION} \
/hyperkube kubelet \
@ -53,39 +54,43 @@ At this point you should have a running Kubernetes cluster. You can test this
by downloading the kubectl binary for `${K8S_VERSION}` (look at the URL in the
following links) and making it available by editing your PATH environment
variable.
([OS X/amd64](http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0-alpha.7/bin/darwin/amd64/kubectl))
([OS X/386](http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0-alpha.7/bin/darwin/386/kubectl))
([linux/amd64](http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0-alpha.7/bin/linux/amd64/kubectl))
([linux/386](http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0-alpha.7/bin/linux/386/kubectl))
([linux/arm](http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0-alpha.7/bin/linux/arm/kubectl))
([OS X/amd64](http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/darwin/amd64/kubectl))
([OS X/386](http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/darwin/386/kubectl))
([linux/amd64](http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/linux/amd64/kubectl))
([linux/386](http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/linux/386/kubectl))
([linux/arm](http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/linux/arm/kubectl))
For example, OS X:
```shell
$ wget http://storage.googleapis.com/kubernetes-release/release/v${K8S_VERSION}/bin/darwin/amd64/kubectl
$ chmod 755 kubectl
$ PATH=$PATH:`pwd`
wget http://storage.googleapis.com/kubernetes-release/release/v${K8S_VERSION}/bin/darwin/amd64/kubectl
chmod 755 kubectl
PATH=$PATH:`pwd`
```
Linux:
```shell
$ wget http://storage.googleapis.com/kubernetes-release/release/v${K8S_VERSION}/bin/linux/amd64/kubectl
$ chmod 755 kubectl
$ PATH=$PATH:`pwd`
wget http://storage.googleapis.com/kubernetes-release/release/v${K8S_VERSION}/bin/linux/amd64/kubectl
chmod 755 kubectl
PATH=$PATH:`pwd`
```
Create configuration:
On OS X, to make the API server accessible locally, set up an ssh tunnel.
```shell
$ kubectl config set-cluster test-doc --server=http://localhost:8080
$ kubectl config set-context test-doc --cluster=test-doc
$ kubectl config use-context test-doc
docker-machine ssh `docker-machine active` -N -L 8080:localhost:8080
```
For Mac OS X users, instead of `localhost` you will have to use the IP address of your docker machine,
which you can find by running `docker-machine env <machinename>` (see the [documentation](https://docs.docker.com/machine/reference/env/)
for details).
Setting up an ssh tunnel is applicable to remote docker hosts as well.
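For a remote docker host, the equivalent port forward over plain ssh would be (a sketch; `USER@HOST` is a placeholder):

```shell
ssh -N -L 8080:localhost:8080 USER@HOST
```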
(Optional) Create kubernetes cluster configuration:
```shell
kubectl config set-cluster test-doc --server=http://localhost:8080
kubectl config set-context test-doc --cluster=test-doc
kubectl config use-context test-doc
```
### Test it out
@ -113,62 +118,72 @@ Now run `docker ps` you should see nginx running. You may need to wait a few mi
### Expose it as a service
```shell
kubectl expose rc nginx --port=80
kubectl expose deployment nginx --port=80
```
Run the following command to obtain the IP of this service we just created. There are two IPs, the first one is internal (CLUSTER_IP), and the second one is the external load-balanced IP (if a LoadBalancer is configured)
Run the following command to obtain the cluster-local IP of the service we just created:
```shell
kubectl get svc nginx
```
```shell{% raw %}
ip=$(kubectl get svc nginx --template={{.spec.clusterIP}})
echo $ip
{% endraw %}```
Alternatively, you can obtain only the first IP (CLUSTER_IP) by running:
Hit the webserver with this IP:
```shell
```shell{% raw %}
kubectl get svc nginx --template={{.spec.clusterIP}}
```
Hit the webserver with the first IP (CLUSTER_IP):
{% endraw %}```
On OS X, since docker is running inside a VM, run the following command instead:
```shell
curl <insert-cluster-ip-here>
docker-machine ssh `docker-machine active` curl $ip
```
Note that you will need run this curl command on your boot2docker VM if you are running on OS X.
## Deploy a DNS
See [here](/docs/getting-started-guides/docker-multinode/deployDNS/) for instructions.
### A note on turning down your cluster
### Turning down your cluster
Many of these containers run under the management of the `kubelet` binary, which attempts to keep containers running, even if they fail. So, in order to turn down
the cluster, you need to first kill the kubelet container, and then any other containers.
1. Delete all the containers including the kubelet:
Many of these containers run under the management of the `kubelet` binary, which attempts to keep containers running, even if they fail.
So, in order to turn down the cluster, you need to first kill the kubelet container, and then any other containers.
You may use `docker kill $(docker ps -aq)`; note that this kills _all_ containers running under Docker, so use it with caution.
2. Cleanup the filesystem:
On OS X, first ssh into the docker VM:
```shell
docker-machine ssh `docker-machine active`
```
```shell
sudo umount `cat /proc/mounts | grep /var/lib/kubelet | awk '{print $2}'`
sudo rm -rf /var/lib/kubelet
```
### Troubleshooting
#### Node is in `NotReady` state
If you see your node as `NotReady` it's possible that your OS does not have memcg and swap enabled.
If you see your node as `NotReady` it's possible that your OS does not have memcg enabled.
1. Your kernel should support memory and swap accounting. Ensure that the
1. Your kernel should support memory accounting. Ensure that the
following configs are turned on in your linux kernel:
```shell
CONFIG_RESOURCE_COUNTERS=y
CONFIG_MEMCG=y
CONFIG_MEMCG_SWAP=y
CONFIG_MEMCG_SWAP_ENABLED=y
CONFIG_MEMCG_KMEM=y
```
2. Enable the memory and swap accounting in the kernel, at boot, as command line
2. Enable the memory accounting in the kernel, at boot, as command line
parameters as follows:
```shell
GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"
GRUB_CMDLINE_LINUX="cgroup_enable=memory"
```
NOTE: The above is specifically for GRUB2.
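After changing `GRUB_CMDLINE_LINUX`, regenerate the GRUB configuration and reboot so the parameter takes effect; on Debian/Ubuntu-style systems, for example:

```shell
sudo update-grub
sudo reboot
```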
@ -177,5 +192,5 @@ parameters as follows:
```shell
$ cat /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-3.18.4-aufs root=/dev/sda5 ro cgroup_enable=memory swapaccount=1
BOOT_IMAGE=/boot/vmlinuz-3.18.4-aufs root=/dev/sda5 ro cgroup_enable=memory
```

View File

@ -9,9 +9,11 @@ spec:
dnsPolicy: Default
containers:
- name: fluentd-cloud-logging
image: gcr.io/google_containers/fluentd-gcp:1.17
image: gcr.io/google_containers/fluentd-gcp:1.18
resources:
limits:
memory: 200Mi
requests:
cpu: 100m
memory: 200Mi
env:

View File

@ -119,7 +119,7 @@ which is unique from future cluster names. This will be used in several ways:
second one sometime later, such as for testing new Kubernetes releases, running in a different
region of the world, etc.
- Kubernetes clusters can create cloud provider resources (e.g. AWS ELBs) and different clusters
need to distinguish which resources each created. Call this `CLUSTERNAME`.
need to distinguish which resources each created. Call this `CLUSTER_NAME`.
### Software Binaries
@ -160,7 +160,7 @@ You have several choices for Kubernetes images:
release tag, which can be found on the [latest releases page](https://github.com/kubernetes/kubernetes/releases/latest).
- Ensure $TAG is the same tag as the release tag you are using for kubelet and kube-proxy.
- The [hyperkube](https://releases.k8s.io/{{page.githubbranch}}/cmd/hyperkube) binary is an all in one binary
- `hyperkube kubelet ...` runs the kublet, `hyperkube apiserver ...` runs an apiserver, etc.
- `hyperkube kubelet ...` runs the kubelet, `hyperkube apiserver ...` runs an apiserver, etc.
- Build your own images.
- Useful if you are using a private registry.
- The release contains files such as `./kubernetes/server/bin/kube-apiserver.tar` which
@ -795,18 +795,18 @@ Notes for setting up each cluster service are given below:
* Cluster DNS:
* required for many kubernetes examples
* [Setup instructions](http://releases.k8s.io/release-1.2/cluster/addons/dns/)
* [Admin Guide](../admin/dns.md)
* [Setup instructions](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/dns/)
* [Admin Guide](/docs/admin/dns/)
* Cluster-level Logging
* Multiple implementations with different storage backends and UIs.
* [Elasticsearch Backend Setup Instructions](http://releases.k8s.io/release-1.2/cluster/addons/fluentd-elasticsearch/)
* [Google Cloud Logging Backend Setup Instructions](http://releases.k8s.io/release-1.2/cluster/addons/fluentd-gcp/).
* [Elasticsearch Backend Setup Instructions](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/fluentd-elasticsearch/)
* [Google Cloud Logging Backend Setup Instructions](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/fluentd-gcp/).
* Both require running fluentd on each node.
* [User Guide](../user-guide/logging.md)
* [User Guide](/docs/user-guide/logging/)
* Container Resource Monitoring
* [Setup instructions](http://releases.k8s.io/release-1.2/cluster/addons/cluster-monitoring/)
* [Setup instructions](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/cluster-monitoring/)
* GUI
* [Setup instructions](http://releases.k8s.io/release-1.2/cluster/addons/kube-ui/)
* [Setup instructions](https://github.com/kubernetes/kube-ui)
cluster.
## Troubleshooting
@ -836,4 +836,4 @@ pinging or SSH-ing from one node to another.
### Getting Help
If you run into trouble, please see the section on [troubleshooting](/docs/getting-started-guides/gce#troubleshooting), post to the
[google-containers group](https://groups.google.com/forum/#!forum/google-containers), or come ask questions on [Slack](/docs/troubleshooting#slack).
[google-containers group](https://groups.google.com/forum/#!forum/google-containers), or come ask questions on [Slack](/docs/troubleshooting#slack).

View File

@ -24,7 +24,7 @@ use a CNI plugin instead.
Internet to download the necessary files, while worker nodes do not.
3. This guide is tested on Ubuntu 14.04 LTS 64-bit server, but it does not work with
Ubuntu 15, which uses systemd instead of upstart.
4. Dependencies of this guide: etcd-2.2.1, flannel-0.5.5, k8s-1.1.4, may work with higher versions.
4. Dependencies of this guide: etcd-2.2.1, flannel-0.5.5, k8s-1.1.8; higher versions may also work.
5. All the remote servers can be logged into via ssh without a password by using key authentication.
@ -41,12 +41,12 @@ $ git clone https://github.com/kubernetes/kubernetes.git
#### Configure and start the Kubernetes cluster
The startup process will first download all the required binaries automatically.
By default etcd version is 2.2.1, flannel version is 0.5.5 and k8s version is 1.1.4.
By default etcd version is 2.2.1, flannel version is 0.5.5 and k8s version is 1.1.8.
You can customize the etcd, flannel, and k8s versions by changing the corresponding variables
`ETCD_VERSION`, `FLANNEL_VERSION`, and `KUBE_VERSION`, as follows.
```shell
$ export KUBE_VERSION=1.0.5
$ export KUBE_VERSION=1.1.8
$ export FLANNEL_VERSION=0.5.0
$ export ETCD_VERSION=2.2.0
```

View File

@ -365,21 +365,25 @@ export KUBERNETES_NODE_MEMORY=2048
#### I want to set proxy settings for my Kubernetes cluster boot strapping!
If you are behind a proxy, you need to install vagrant proxy plugin and set the proxy settings by
If you are behind a proxy, you need to install the Vagrant proxy plugin and set the proxy settings:
```sh
```shell
vagrant plugin install vagrant-proxyconf
export KUBERNETES_HTTP_PROXY=http://username:password@proxyaddr:proxyport
export KUBERNETES_HTTPS_PROXY=https://username:password@proxyaddr:proxyport
```
Optionally you can specify addresses to not proxy, for example
You can also specify addresses that bypass the proxy, for example:
```sh
```shell
export KUBERNETES_NO_PROXY=127.0.0.1
```
If you are using sudo to make kubernetes build for example make quick-release, you need run `sudo -E make quick-release` to pass the environment variables.
If you are using sudo to make Kubernetes build, use the `-E` flag to pass in the environment variables. For example, if running `make quick-release`, use:
```shell
sudo -E make quick-release
```
#### I ran vagrant suspend and nothing works!

View File

@ -30,6 +30,16 @@ New users of Google Cloud Platform recieve a [$300 free trial](https://console.d
Next, make sure you [download Node.js](https://nodejs.org/en/download/).
Then install [Docker](https://docs.docker.com/engine/installation/), and [Google Cloud SDK](https://cloud.google.com/sdk/).
Finally, after Google Cloud SDK installs, run the following command to install [`kubectl`](http://kubernetes.io/docs/user-guide/kubectl-overview/):
```shell
gcloud components install kubectl
```
You're all set up with an environment that can build container images, run Node apps, run Kubernetes clusters locally, and deploy Kubernetes clusters to Google Container Engine. Let's begin!
## Create your Node.js application
The first step is to write the application. Save this code in a folder called "`hellonode/`" with the filename `server.js`:
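The diff elides the application source here; as a reference, a minimal sketch of the kind of server the tutorial describes (not the verbatim file) can be written out like this:

```shell
# a sketch, not the verbatim tutorial file: a minimal Node.js HTTP server
mkdir -p hellonode && cat > hellonode/server.js <<'EOF'
var http = require('http');

var handleRequest = function (request, response) {
  response.writeHead(200);
  response.end('Hello World!');
};

http.createServer(handleRequest).listen(8080);
EOF
node hellonode/server.js   # serves on http://localhost:8080
```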
@ -101,14 +111,14 @@ docker ps
CONTAINER ID IMAGE COMMAND
2c66d0efcbd4 gcr.io/PROJECT_ID/hello-node:v1 "/bin/sh -c 'node
$ docker stop 2c66d0efcbd4
docker stop 2c66d0efcbd4
2c66d0efcbd4
```
Now that the image works as intended and is all tagged with your `PROJECT_ID`, we can push it to the [Google Container Registry](https://cloud.google.com/tools/container-registry/), a private repository for your Docker images accessible from every Google Cloud project (but also from outside Google Cloud Platform):
```shell
docker push gcr.io/PROJECT_ID/hello-node:v1
gcloud docker push gcr.io/PROJECT_ID/hello-node:v1
```
If all goes well, you should be able to see the container image listed in the console: *Compute > Container Engine > Container Registry*. We now have a project-wide Docker image available which Kubernetes can access and orchestrate.
@ -124,81 +134,124 @@ Create a cluster via the Console: *Compute > Container Engine > Container Cluste
![image](/images/hellonode/image_11.png)
It's now time to deploy your own containerized application to the Kubernetes cluster!
It's now time to deploy your own containerized application to the Kubernetes cluster! Please ensure that you have [configured](https://cloud.google.com/container-engine/docs/before-you-begin#optional_set_gcloud_defaults) `kubectl` to use the cluster you just created.
## Create your pod
A kubernetes **pod** is a group of containers, tied together for the purposes of administration and networking. It can contain a single container or multiple.
A Kubernetes **[pod](/docs/user-guide/pods/)** is a group of containers, tied together for the purposes of administration and networking. It can contain one or more containers.
Create a pod with the `kubectl run` command:
```shell
kubectl run hello-node \
--image=gcr.io/PROJECT_ID/hello-node:v1 \
--port=8080
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
hello-node hello-node gcr.io/..../hello-node:v1 run=hello-node 1
kubectl run hello-node --image=gcr.io/PROJECT_ID/hello-node:v1 --port=8080
deployment "hello-node" created
```
Now is probably a good time to run through some of the following interesting kubectl commands (none of these will change the state of the cluster, full documentation is available [here](https://cloud.google.com/container-engine/docs/kubectl/)):
As shown in the output, the `kubectl run` command created a **[deployment](/docs/user-guide/deployments/)** object. Deployments are the recommended way for managing creation and scaling of pods. In this example, a new deployment manages a single pod replica running the *hello-node:v1* image.
To view the deployment we just created run:
```shell
$ kubectl get pods
$ kubectl logs
$ kubectl cluster-info
$ kubectl config view
$ kubectl get events
kubectl get deployments
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
hello-node 1 1 1 1 3m
```
To view the pod created by the deployment run:
```shell
kubectl get pods
NAME READY STATUS RESTARTS AGE
hello-node-714049816-ztzrb 1/1 Running 0 6m
```
To view the stdout / stderr from a pod (hello-node image has no output, so logs will be empty in this case) run:
```shell
kubectl logs <POD-NAME>
```
To view metadata about the cluster run:
```shell
kubectl cluster-info
```
To view cluster events run:
```shell
kubectl get events
```
To view the kubectl configuration run:
```shell
kubectl config view
```
Full documentation for kubectl commands is available **[here](/docs/user-guide/kubectl-overview/)**.
At this point you should have your container running under the control of Kubernetes, but we still have to make it accessible to the outside world.
## Allow external traffic
By default, the pod is only accessible by its internal IP within the Kubernetes cluster. In order to make the `hello-node` container accessible from outside the kubernetes virtual network, you have to expose the pod as a kubernetes service.
By default, the pod is only accessible by its internal IP within the Kubernetes cluster. In order to make the `hello-node` container accessible from outside the Kubernetes virtual network, you have to expose the pod as a Kubernetes **[service](/docs/user-guide/services/)**.
From our development machine we can expose the pod with the `kubectl` expose command and the `--create-external-load-balancer=true` flag which creates an external IP to accept traffic:
From our development machine we can expose the pod with the `kubectl expose` command and the `--type="LoadBalancer"` flag, which creates an external IP to accept traffic:
```shell
kubectl expose rc hello-node --type="LoadBalancer"
kubectl expose deployment hello-node --type="LoadBalancer"
```
The flag used in this command specifies that we'll be using the load-balancer provided by the underlying infrastructure (in this case the [Compute Engine load balancer](https://cloud.google.com/compute/docs/load-balancing/)). The `rc` refers to the Kubernetes "replication controller" -- which is a Kubernetes service which controls load balancing and scaling behavior for your cluster.
The flag used in this command specifies that we'll be using the load-balancer provided by the underlying infrastructure (in this case the [Compute Engine load balancer](https://cloud.google.com/compute/docs/load-balancing/)). Note that we expose the deployment, and not the pod directly. This will cause the resulting service to load balance traffic across all pods managed by the deployment (in this case only 1 pod, but we will add more replicas later).
The Kubernetes master creates the load balancer and related Compute Engine forwarding rules, target pools, and firewall rules to make the service fully accessible from outside of Google Cloud Platform.
To find the publicly-accessible IP address, ask `kubectl` to describe the `hello-node` cluster service:
To find the IP addresses associated with the service, run:
```shell
kubectl get services hello-node
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
hello-node 10.3.246.12 23.251.159.72 8080/TCP run=hello-node 53s
hello-node 10.3.246.12 8080/TCP run=hello-node 23s
```
Note there are 2 IP addresses listed, both serving port 8080. One is the internal IP that is only visible inside your cloud virtual network; the other is the external load-balanced IP. In this example, the external IP address is 23.251.159.72. Traffic to the load-balanced IP will be load balanced to the three nodes you provisioned when initially creating the cluster!
The `EXTERNAL_IP` may take several minutes to become available and visible. If the `EXTERNAL_IP` is missing, wait a few minutes and try again.
You should now be able to reach the service by pointing your browser to this address: http://<EXTERNAL_IP>**:8080**
```shell
kubectl get services hello-node
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
hello-node 10.3.246.12 23.251.159.72 8080/TCP run=hello-node 2m
```
Note there are 2 IP addresses listed, both serving port 8080. `CLUSTER_IP` is only visible inside your cloud virtual network. `EXTERNAL_IP` is externally accessible. In this example, the external IP address is 23.251.159.72.
You should now be able to reach the service by pointing your browser to this address: http://EXTERNAL_IP:8080 or by running `curl http://EXTERNAL_IP:8080`
![image](/images/hellonode/image_12.png)
## Scale up your website
One of the powerful features offered by Kubernetes is how easy it is to scale your application. Suppose you suddenly need more capacity for your application; you can simply tell the replication controller to manage a new number of replicas for your pod:
One of the powerful features offered by Kubernetes is how easy it is to scale your application. Suppose you suddenly need more capacity for your application; you can simply tell the deployment to manage a new number of replicas for your pod:
```shell
kubectl scale rc hello-node --replicas=4
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
hello-node-6uzt8 1/1 Running 0 8m
hello-node-gxhty 1/1 Running 0 34s
hello-node-z2odh 1/1 Running 0 34s
kubectl scale deployment hello-node --replicas=4
```
You now have four replicas of your application, each running independently on the cluster with the load balancer you created earlier and serving traffic to all of them.
```shell
kubectl get rc hello-node
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
hello-node hello-node gcr.io/..../hello-node:v1 run=hello-node 3
kubectl get deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
hello-node 4 4 4 3 40m
```
```shell
kubectl get pods
NAME READY STATUS RESTARTS AGE
hello-node-714049816-g4azy 1/1 Running 0 1m
hello-node-714049816-rk0u6 1/1 Running 0 1m
hello-node-714049816-sh812 1/1 Running 0 1m
hello-node-714049816-ztzrb 1/1 Running 0 41m
```
Note the **declarative approach** here - rather than starting or stopping new instances, you declare how many instances you want to be running. Kubernetes reconciliation loops simply make sure the reality matches what you requested and take action if needed.
@ -222,35 +275,87 @@ We can now build and publish a new container image to the registry with an incre
```shell
docker build -t gcr.io/PROJECT_ID/hello-node:v2 .
docker push gcr.io/PROJECT_ID/hello-node:v2
gcloud docker push gcr.io/PROJECT_ID/hello-node:v2
```
Building and pushing this updated image should be much quicker as we take full advantage of the Docker cache.
We're now ready for kubernetes to smoothly update our replication controller to the new version of the application:
We're now ready for Kubernetes to smoothly update our deployment to the new version of the application. In order to change
the image label for our running container, we will need to edit the existing *hello-node deployment* and change the image from
`gcr.io/PROJECT_ID/hello-node:v1` to `gcr.io/PROJECT_ID/hello-node:v2`. To do this, we will use the `kubectl edit` command.
This will open up a text editor displaying the full deployment yaml configuration. It isn't necessary to understand the full yaml config
right now; just understand that by updating the `spec.template.spec.containers.image` field in the config we are telling
the deployment to update the pods to use the new image.
```shell
kubectl rolling-update hello-node \
--image=gcr.io/PROJECT_ID/hello-node:v2 \
--update-period=2s
Creating hello-node-324d23dd3e0e2474d6b76dc599abb519
At beginning of loop: hello-node replicas: 2, hello-node-324d23dd3e0e2474d6b76dc599abb519 replicas: 1
...
At end of loop: hello-node replicas: 0, hello-node-324d23dd3e0e2474d6b76dc599abb519 replicas: 3
Update succeeded. Deleting old controller: hello-node
Renaming hello-node-324d23dd3e0e2474d6b76dc599abb519 to hello-node
hello-node
kubectl edit deployment hello-node
```
You should see in the standard output how the rolling update actually works:
```yaml
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "1"
creationTimestamp: 2016-03-24T17:55:28Z
generation: 3
labels:
run: hello-node
name: hello-node
namespace: default
resourceVersion: "151017"
selfLink: /apis/extensions/v1beta1/namespaces/default/deployments/hello-node
uid: 981fe302-f1e9-11e5-9a78-42010af00005
spec:
replicas: 4
selector:
matchLabels:
run: hello-node
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
run: hello-node
spec:
containers:
- image: gcr.io/PROJECT_ID/hello-node:v1 # Update this line
imagePullPolicy: IfNotPresent
name: hello-node
ports:
- containerPort: 8080
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
dnsPolicy: ClusterFirst
restartPolicy: Always
securityContext: {}
terminationGracePeriodSeconds: 30
```
1. A new replication controller is created based on the new image
After making the change, save and close the file.
2. The replica count on the new and old controllers is increased/decreased by one respectively until the desired number of replicas is reached
```
deployment "hello-node" edited
```
3. The original replication controller is deleted.
This updates the deployment with the new image, causing new pods to be created with the new image and old pods to be deleted.
While this is happening, the users of the services should not see any interruption. After a little while they will start accessing the new version of your application. You can find more details on rolling updates in [this documentation](https://cloud.google.com/container-engine/docs/rolling-updates).
```
kubectl get deployments
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
hello-node 4 5 4 3 1h
```
While this is happening, the users of the services should not see any interruption. After a little while they will start accessing the new version of your application. You can find more details in the [deployment documentation](/docs/user-guide/deployments/).
Hopefully with these deployment, scaling, and update features you'll agree that once you've set up your environment (your GKE/Kubernetes cluster here), Kubernetes is here to help you focus on the application rather than the infrastructure.
@ -261,7 +366,7 @@ While logged into your development machine, execute the following commands:
```shell
kubectl config view | grep "password"
password: vUYwC5ATJMWa6goh
$ kubectl cluster-info
kubectl cluster-info
...
KubeUI is running at https://<ip-address>/api/v1/proxy/namespaces/kube-system/services/kube-ui
...
@ -275,16 +380,10 @@ Navigate to the URL that is shown under after KubeUI is running at and log in wi
That's it for the demo! So you don't leave this all running and incur charges, let's learn how to tear things down.
First, delete the Service, which also deletes your external load balancer:
Delete the Deployment (which also deletes the running pods) and Service (which also deletes your external load balancer):
```shell
kubectl delete services hello-node
```
Delete the running pods:
```shell
kubectl delete rc hello-node
kubectl delete service,deployment hello-node
```
Delete your cluster:
@ -306,7 +405,7 @@ Finally delete the Docker registry storage bucket hosting your image(s) :
```shell
gsutil ls
gs://artifacts.<PROJECT_ID>.appspot.com/
$ gsutil rm -r gs://artifacts.<PROJECT_ID>.appspot.com/
gsutil rm -r gs://artifacts.<PROJECT_ID>.appspot.com/
Removing gs://artifacts.<PROJECT_ID>.appspot.com/...
```

View File

@ -8,13 +8,13 @@ Sometimes things go wrong. This guide is aimed at making them right. It has tw
* [Troubleshooting your application](/docs/user-guide/application-troubleshooting) - Useful for users who are deploying code into Kubernetes and wondering why it is not working.
* [Troubleshooting your cluster](/docs/admin/cluster-troubleshooting) - Useful for cluster administrators and people whose Kubernetes cluster is unhappy.
You should also check the [known issues](/docs/user-guide/known-issues) for the release you're using.
You should also check the known issues for the [release](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG.md) you're using.
# Getting help
### Getting help
If your problem isn't answered by any of the guides above, there are a variety of ways for you to get help from the Kubernetes team.
## Questions
### Questions
If you aren't familiar with it, many of your questions may be answered by the [user guide](/docs/user-guide/).
@ -29,21 +29,21 @@ You may also find the Stack Overflow topics relevant:
* [Kubernetes](http://stackoverflow.com/questions/tagged/kubernetes)
* [Google Container Engine - GKE](http://stackoverflow.com/questions/tagged/google-container-engine)
# Help! My question isn't covered! I need help now!
## Help! My question isn't covered! I need help now!
## Stack Overflow
### Stack Overflow
Someone else from the community may have already asked a similar question or may be able to help with your problem. The Kubernetes team will also monitor [posts tagged kubernetes](http://stackoverflow.com/questions/tagged/kubernetes). If there aren't any existing questions that help, please [ask a new one](http://stackoverflow.com/questions/ask?tags=kubernetes)!
## <a name="slack"></a>Slack
### Slack
The Kubernetes team hangs out on Slack in the `#kubernetes-users` channel. You can participate in the Kubernetes team [here](https://kubernetes.slack.com). Slack requires registration, but registration is open to anyone; you can sign up [here](http://slack.kubernetes.io). Feel free to come and ask any and all questions.
## Mailing List
### Mailing List
The Google Container Engine mailing list is [google-containers@googlegroups.com](https://groups.google.com/forum/#!forum/google-containers)
## Bugs and Feature requests
### Bugs and Feature requests
If you have what looks like a bug, or you would like to make a feature request, please use the [Github issue tracking system](https://github.com/kubernetes/kubernetes/issues).
View File
@ -0,0 +1,16 @@
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: nginx-deployment
spec:
replicas: 3
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.91
ports:
- containerPort: 80
View File
@ -171,7 +171,7 @@ Here you can see from the `Allocated resources` section that a pod which as
Looking at the `Pods` section, you can see which pods are taking up space on the node.
The [resource quota](/docs/admin/resource-quota) feature can be configured
The [resource quota](/docs/admin/resourcequota/) feature can be configured
to limit the total amount of resources that can be consumed. If used in conjunction
with namespaces, it can prevent one team from hogging all the resources.
View File
@ -79,7 +79,7 @@ This document is meant to highlight and consolidate in one place configuration b
controller 'version names'. A desired state of an object is described by a Deployment, and if
changes to that spec are _applied_, the deployment controller changes the actual state to the
desired state at a controlled rate. (Deployment objects are currently part of the [`extensions`
API Group](/docs/api/#api-groups), and are not enabled by default.)
API Group](/docs/api/#api-groups).)
- You can manipulate labels for debugging. Because Kubernetes replication controllers and services
match to pods using labels, this allows you to remove a pod from being considered by a
@ -108,6 +108,6 @@ This document is meant to highlight and consolidate in one place configuration b
- Use kubectl bulk operations (via files and/or labels) for get and delete. See [label selectors](/docs/user-guide/labels/#label-selectors) and [using labels effectively](/docs/user-guide/managing-deployments/#using-labels-effectively).
- Use `kubectl run` and `expose` to quickly create and expose single container replication controllers. See the [quick start guide](/docs/user-guide/quick-start/) for an example.
- Use `kubectl run` and `expose` to quickly create and expose single container Deployments. See the [quick start guide](/docs/user-guide/quick-start/) for an example.
View File
@ -0,0 +1,105 @@
---
---
* TOC
{:toc}
## Debugging pods
The first step in debugging a pod is taking a look at it. Check the current
state of the pod and recent events with the following command:
$ kubectl describe pods ${POD_NAME}
Look at the state of the containers in the pod. Are they all `Running`? Have
there been recent restarts?
Continue debugging depending on the state of the pods.
### My pod stays pending
If a pod is stuck in `Pending` it means that it cannot be scheduled onto a
node. Generally this is because there are insufficient resources of one type or
another that prevent scheduling. Look at the output of the `kubectl describe
...` command above. There should be messages from the scheduler about why it
cannot schedule your pod. Reasons include:
#### Insufficient resources
You may have exhausted the supply of CPU or Memory in your cluster. In this
case you can try several things:
* [Add more nodes](/docs/admin/cluster-management/#resizing-a-cluster) to the cluster.
* [Terminate unneeded pods](/docs/user-guide/pods/single-container/#deleting_a_pod)
to make room for pending pods.
* Check that the pod is not larger than your nodes. For example, if all
nodes have a capacity of `cpu:1`, then a pod with a limit of `cpu: 1.1`
will never be scheduled.
You can check node capacities with the `kubectl get nodes -o <format>`
command. Here are some example command lines that extract just the necessary
information:
kubectl get nodes -o yaml | grep '\sname\|cpu\|memory'
kubectl get nodes -o json | jq '.items[] | {name: .metadata.name, cap: .status.capacity}'
The [resource quota](/docs/admin/resourcequota/)
feature can be configured to limit the total amount of
resources that can be consumed. If used in conjunction with namespaces, it can
prevent one team from hogging all the resources.
#### Using hostPort
When you bind a pod to a `hostPort` there are a limited number of places that
the pod can be scheduled. In most cases, `hostPort` is unnecessary; try using a
service object to expose your pod. If you do require `hostPort` then you can
only schedule as many pods as there are nodes in your container cluster.
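For example, here is a minimal sketch of a `NodePort` Service (the names and ports are illustrative) that exposes pods labelled `app: myapp` without pinning them to a host port:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp          # hypothetical name
spec:
  type: NodePort       # exposes the service on a port on every node
  selector:
    app: myapp         # must match your pod's labels
  ports:
  - port: 80           # port the service serves on
    targetPort: 8080   # the containerPort your application listens on
```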
### My pod stays waiting
If a pod is stuck in the `Waiting` state, then it has been scheduled to a
worker node, but it can't run on that machine. Again, the information from
`kubectl describe ...` should be informative. The most common cause of
`Waiting` pods is a failure to pull the image. There are three things to check:
* Make sure that you have the name of the image correct.
* Have you pushed the image to the repository?
* Run a manual `docker pull <image>` on your machine to see if the image can be
pulled.
### My pod is crashing or otherwise unhealthy
First, take a look at the logs of the current container:
$ kubectl logs ${POD_NAME} ${CONTAINER_NAME}
If your container has previously crashed, you can access the previous
container's crash log with:
$ kubectl logs --previous ${POD_NAME} ${CONTAINER_NAME}
Alternately, you can run commands inside that container with `exec`:
$ kubectl exec ${POD_NAME} -c ${CONTAINER_NAME} -- ${CMD} ${ARG1} ${ARG2} ... ${ARGN}
Note that `-c ${CONTAINER_NAME}` is optional and can be omitted for pods that
only contain a single container.
As an example, to look at the logs from a running Cassandra pod, you might run:
$ kubectl exec cassandra -- cat /var/log/cassandra/system.log
If none of these approaches work, you can find the host machine that the pod is
running on and SSH into that host.
## Debugging Replication Controllers
Replication controllers are fairly straightforward. They can either create pods
or they can't. If they can't create pods, then please refer to the
[instructions above](#debugging_pods) to debug your pods.
You can also use `kubectl describe rc ${CONTROLLER_NAME}` to inspect events
related to the replication controller.
View File
@ -3,13 +3,12 @@
An issue that comes up rather frequently for new installations of Kubernetes is
that `Services` are not working properly. You've run all your `Pod`s and
`ReplicationController`s, but you get no response when you try to access them.
`Deployment`s, but you get no response when you try to access them.
This document will hopefully help you to figure out what's going wrong.
* TOC
{:toc}
## Conventions
Throughout this doc you will see various commands that you can run. Some
@ -41,37 +40,27 @@ OUTPUT
## Running commands in a Pod
For many steps here you will want to see what a `Pod` running in the cluster
sees. Kubernetes does not directly support interactive `Pod`s (yet), but you can
approximate it:
sees. You can start a busybox `Pod` and run commands in it:
```shell
$ cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
name: busybox-sleep
spec:
containers:
- name: busybox
image: busybox
args:
- sleep
- "1000000"
EOF
pods/busybox-sleep
$ kubectl run -i --tty busybox --image=busybox --generator="run-pod/v1"
Waiting for pod default/busybox to be running, status is Pending, pod ready: false
Hit enter for command prompt
/ #
```
Now, when you need to run a command (even an interactive shell) in a `Pod`-like
context, use:
If you already have a running `Pod`, run a command in it using:
```shell
$ kubectl exec busybox-sleep -- <COMMAND>
$ kubectl exec <POD-NAME> -c <CONTAINER-NAME> -- <COMMAND>
```
or
or run an interactive shell with:
```shell
$ kubectl exec -ti busybox-sleep sh
$ kubectl exec -ti <POD-NAME> -c <CONTAINER-NAME> sh
/ #
```
@ -86,16 +75,16 @@ $ kubectl run hostnames --image=gcr.io/google_containers/serve_hostname \
--labels=app=hostnames \
--port=9376 \
--replicas=3
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
hostnames hostnames gcr.io/google_containers/serve_hostname app=hostnames 3
deployment "hostnames" created
```
Note that this is the same as if you had started the `ReplicationController` with
`kubectl` commands will print the type and name of the resource created or mutated, which can then be used in subsequent commands.
Note that this is the same as if you had started the `Deployment` with
the following YAML:
```yaml
apiVersion: v1
kind: ReplicationController
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: hostnames
spec:
@ -119,10 +108,10 @@ Confirm your `Pod`s are running:
```shell
$ kubectl get pods -l app=hostnames
NAME READY STATUS RESTARTS AGE
hostnames-0uton 1/1 Running 0 12s
hostnames-bvc05 1/1 Running 0 12s
hostnames-yp2kp 1/1 Running 0 12s
NAME READY STATUS RESTARTS AGE
hostnames-632524106-bbpiw 1/1 Running 0 2m
hostnames-632524106-ly40y 1/1 Running 0 2m
hostnames-632524106-tlaok 1/1 Running 0 2m
```
## Does the Service exist?
@ -157,7 +146,7 @@ So we have a culprit, let's create the `Service`. As before, this is for the
walk-through - you can use your own `Service`'s details here.
```shell
$ kubectl expose rc hostnames --port=80 --target-port=9376
$ kubectl expose deployment hostnames --port=80 --target-port=9376
service "hostnames" exposed
```
@ -165,8 +154,8 @@ And read it back, just to be sure:
```shell
$ kubectl get svc hostnames
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
hostnames 10.0.0.1 <none> 80/TCP run=hostnames 1h
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hostnames 10.0.0.226 <none> 80/TCP 5s
```
As before, this is the same as if you had started the `Service` with YAML:
@ -506,6 +495,66 @@ Setting endpoints for default/hostnames:default to [10.244.0.5:9376 10.244.0.6:9
If you don't see those, try restarting `kube-proxy` with the `-V` flag set to 4, and
then look at the logs again.
Services provide load balancing across a set of pods. There are several common
problems that can make services not work properly. The following instructions
should help debug service problems.
First, verify that there are endpoints for the service. For every service
object, the apiserver makes an `endpoints` resource available.
You can view this resource with:
$ kubectl get endpoints ${SERVICE_NAME}
Make sure that the endpoints match up with the number of containers that you
expect to be a member of your service. For example, if your service is for an
nginx container with 3 replicas, you would expect to see three different IP
addresses in the service's endpoints.
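For example, a sketch of what healthy output might look like (names and IPs are illustrative):

```shell
$ kubectl get endpoints nginx
NAME      ENDPOINTS                                   AGE
nginx     10.244.0.5:80,10.244.0.6:80,10.244.0.7:80   1h
```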
### My service is missing endpoints
If you are missing endpoints, try listing pods using the labels that the service
uses. Imagine that you have a service where the labels are:
...
spec:
  selector:
    name: nginx
    type: frontend
You can use:
$ kubectl get pods --selector=name=nginx,type=frontend
to list pods that match this selector. Verify that the list matches the pods
that you expect to provide your service.
If the list of pods matches expectations, but your endpoints are still empty,
it's possible that you don't have the right ports exposed. If your service has
a `targetPort` specified, but the pods that are selected don't expose that
port, they won't be added to the endpoints list.
Verify that the pod's `containerPort` matches up with the service's
`targetPort`.
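As a sketch (the image and ports are borrowed from the `hostnames` example earlier in this doc, but any values work as long as they agree):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hostnames
spec:
  selector:
    app: hostnames
  ports:
  - port: 80
    targetPort: 9376   # must match the containerPort below
---
apiVersion: v1
kind: Pod
metadata:
  name: hostnames
  labels:
    app: hostnames
spec:
  containers:
  - name: hostnames
    image: gcr.io/google_containers/serve_hostname
    ports:
    - containerPort: 9376   # the port the container actually listens on
```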
### Network traffic is not forwarded
If you can connect to the service, but the connection is immediately dropped,
and there are endpoints in the endpoints list, it's likely that the proxy can't
contact your pods.
There are three things to check:
* Are your pods working correctly? Look for restart count, and
[debug pods](#debugging_pods).
* Can you connect to your pods directly? Get the IP address for the pod, and
try to connect directly to that IP.
* Is your application serving on the port that you configured? Container
Engine doesn't do port remapping, so if your application serves on 8080,
the `containerPort` field needs to be 8080.
## Seek help
If you get this far, something very strange is happening. Your `Service` is
@ -521,4 +570,4 @@ Contact us on
## More information
Visit the [troubleshooting document](/docs/troubleshooting/) for more information.
View File
@ -7,107 +7,91 @@
## Launching a set of replicas using a configuration file
Kubernetes creates and manages sets of replicated containers (actually, replicated [Pods](/docs/user-guide/pods)) using [*Replication Controllers*](/docs/user-guide/replication-controller).
Kubernetes creates and manages sets of replicated containers (actually, replicated [Pods](/docs/user-guide/pods)) using [*Deployments*](/docs/user-guide/deployments).
A replication controller simply ensures that a specified number of pod "replicas" are running at any one time. If there are too many, it will kill some. If there are too few, it will start more. It's analogous to Google Compute Engine's [Instance Group Manager](https://cloud.google.com/compute/docs/instance-groups/manager/) or AWS's [Auto-scaling Group](http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/AutoScalingGroup) (with no scaling policies).
A Deployment simply ensures that a specified number of pod "replicas" are running at any one time. If there are too many, it will kill some. If there are too few, it will start more. It's analogous to Google Compute Engine's [Instance Group Manager](https://cloud.google.com/compute/docs/instance-groups/manager/) or AWS's [Auto-scaling Group](http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/AutoScalingGroup.html) (with no scaling policies).
The replication controller created to run nginx by `kubectl run` in the [Quick start](/docs/user-guide/quick-start) could be specified using YAML as follows:
The Deployment created to run nginx by `kubectl run` in the [Quick start](/docs/user-guide/quick-start) could be specified using YAML as follows:
```yaml
apiVersion: v1
kind: ReplicationController
metadata:
name: my-nginx
spec:
replicas: 2
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx
ports:
- containerPort: 80
```
{% include code.html language="yaml" file="run-my-nginx.yaml" ghlink="/docs/user-guide/run-my-nginx.yaml" %}
Some differences compared to specifying just a pod are that the `kind` is `ReplicationController`, the number of `replicas` desired is specified, and the pod specification is under the `template` field. The names of the pods don't need to be specified explicitly because they are generated from the name of the replication controller.
View the [replication controller API
object](/docs/api-reference/v1/definitions/#_v1_replicationcontroller)
Some differences compared to specifying just a pod are that the `kind` is `Deployment`, the number of `replicas` desired is specified, and the pod specification is under the `template` field. The names of the pods don't need to be specified explicitly because they are generated from the name of the Deployment.
View the [Deployment API
object](/docs/api-reference/extensions/v1beta1/definitions/#_v1beta1_deployment)
to view the list of supported fields.
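(For reference, a sketch of what `run-my-nginx.yaml` might contain, consistent with the labels and replica count used below; the actual file in the repo may differ slightly:)

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80
```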
This replication controller can be created using `create`, just as with pods:
This Deployment can be created using `create`, just as with pods:
```shell
$ kubectl create -f ./nginx-rc.yaml
replicationcontrollers/my-nginx
$ kubectl create -f ./run-my-nginx.yaml
deployment "my-nginx" created
```
Unlike in the case where you directly create pods, a replication controller replaces pods that are deleted or terminated for any reason, such as in the case of node failure. For this reason, we recommend that you use a replication controller for a continuously running application even if your application requires only a single pod, in which case you can omit `replicas` and it will default to a single replica.
Unlike in the case where you directly create pods, a Deployment replaces pods that are deleted or terminated for any reason, such as in the case of node failure. For this reason, we recommend that you use a Deployment for a continuously running application even if your application requires only a single pod, in which case you can omit `replicas` and it will default to a single replica.
## Viewing replication controller status
## Viewing Deployment status
You can view the replication controller you created using `get`:
You can view the Deployment you created using `get`:
```shell
$ kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
my-nginx nginx nginx app=nginx 2
$ kubectl get deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
my-nginx 2 2 2 2 6s
```
This tells you that your controller will ensure that you have two nginx replicas.
This tells you that your Deployment will ensure that you have two nginx replicas (desired replicas = 2).
You can see those replicas using `get`, just as with pods you created directly:
```shell
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
my-nginx-065jq 1/1 Running 0 51s
my-nginx-buaiq 1/1 Running 0 51s
NAME READY STATUS RESTARTS AGE
my-nginx-3800858182-9hk43 1/1 Running 0 8m
my-nginx-3800858182-e529s 1/1 Running 0 8m
```
## Deleting replication controllers
## Deleting Deployments
When you want to kill your application, delete your replication controller, as in the [Quick start](/docs/user-guide/quick-start):
When you want to kill your application, delete your Deployment, as in the [Quick start](/docs/user-guide/quick-start):
```shell
$ kubectl delete rc my-nginx
replicationcontrollers/my-nginx
$ kubectl delete deployment/my-nginx
deployment "my-nginx" deleted
```
By default, this will also cause the pods managed by the replication controller to be deleted. If there were a large number of pods, this may take a while to complete. If you want to leave the pods running, specify `--cascade=false`.
By default, this will also cause the pods managed by the Deployment to be deleted. If there were a large number of pods, this may take a while to complete. If you want to leave the pods running instead, specify `--cascade=false`.
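For example, a sketch of deleting a Deployment while leaving its pods running:

```shell
$ kubectl delete deployment my-nginx --cascade=false
deployment "my-nginx" deleted
```

The pods created by the Deployment keep running and must be deleted separately.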
If you try to delete the pods before deleting the replication controller, it will just replace them, as it is supposed to do.
If you try to delete the pods before deleting the Deployment, it will just replace them, as it is supposed to do.
## Labels
Kubernetes uses user-defined key-value attributes called [*labels*](/docs/user-guide/labels) to categorize and identify sets of resources, such as pods and replication controllers. The example above specified a single label in the pod template, with key `app` and value `nginx`. All pods created carry that label, which can be viewed using `-L`:
Kubernetes uses user-defined key-value attributes called [*labels*](/docs/user-guide/labels) to categorize and identify sets of resources, such as pods and Deployments. The example above specified a single label in the pod template, with key `run` and value `my-nginx`. All pods created carry that label, which can be viewed using `-L`:
```shell
$ kubectl get pods -L app
NAME READY STATUS RESTARTS AGE APP
my-nginx-afv12 0/1 Running 0 3s nginx
my-nginx-lg99z 0/1 Running 0 3s nginx
$ kubectl get pods -L run
NAME READY STATUS RESTARTS AGE RUN
my-nginx-3800858182-1v53o 1/1 Running 0 46s my-nginx
my-nginx-3800858182-2ds1q 1/1 Running 0 46s my-nginx
```
The labels from the pod template are copied to the replication controller's labels by default, as well -- all resources in Kubernetes support labels:
The labels from the pod template are copied to the Deployment's labels by default, as well -- all resources in Kubernetes support labels:
```shell
$ kubectl get rc my-nginx -L app
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS APP
my-nginx nginx nginx app=nginx 2 nginx
$ kubectl get deployment/my-nginx -L run
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE RUN
my-nginx 2 2 2 2 2m my-nginx
```
More importantly, the pod template's labels are used to create a [`selector`](/docs/user-guide/labels/#label-selectors) that will match pods carrying those labels. You can see this field by requesting it using the [Go template output format of `kubectl get`](/docs/user-guide/kubectl/kubectl_get):
```shell
$ kubectl get rc my-nginx -o template --template="{{.spec.selector}}"
map[app:nginx]
$ kubectl get deployment/my-nginx -o template --template="{{.spec.selector}}"
map[matchLabels:map[run:my-nginx]]
```
You could also specify the `selector` explicitly, such as if you wanted to specify labels in the pod template that you didn't want to select on, but you should ensure that the selector will match the labels of the pods created from the pod template, and that it won't match pods created by other replication controllers. The most straightforward way to ensure the latter is to create a unique label value for the replication controller, and to specify it in both the pod template's labels and in the selector.
You could also specify the `selector` explicitly, such as if you wanted to specify labels in the pod template that you didn't want to select on, but you should ensure that the selector will match the labels of the pods created from the pod template, and that it won't match pods created by other Deployments. The most straightforward way to ensure the latter is to create a unique label value for the Deployment, and to specify it in both the pod template's labels and in the selector's `matchLabels`.
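For instance, a sketch (the `track` label and its value are illustrative):

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      track: my-nginx-stable   # unique value for this Deployment
  template:
    metadata:
      labels:
        track: my-nginx-stable # must match the selector above
        tier: frontend         # extra label not used for selection
    spec:
      containers:
      - name: my-nginx
        image: nginx
```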
## What's next?
[Learn about exposing applications to users and clients, and connecting tiers of your application together.](/docs/user-guide/connecting-applications)
View File
@ -6,20 +6,23 @@
## What is a _Deployment_?
A _Deployment_ provides declarative updates for Pods and ReplicationControllers.
Users describe the desired state in a Deployment object, and the deployment
controller changes the actual state to the desired state at a controlled rate.
Users can define Deployments to create new resources, or replace existing ones
A _Deployment_ provides declarative updates for Pods and ReplicaSets.
You only need to describe the desired state in a Deployment object, and the deployment
controller will change the actual state to the desired state at a controlled rate for you.
You can define Deployments to create new resources, or replace existing ones
by new ones.
A typical use case is:
* Create a Deployment to bring up a replication controller and pods.
* Create a Deployment to bring up a replica set and pods.
* Check the status of a Deployment to see if it succeeds or not.
* Later, update that Deployment to recreate the pods (for example, to use a new image).
* Rollback to an earlier Deployment revision if the current Deployment isn't stable.
* Pause and resume a Deployment.
## Creating a Deployment
Here is an example Deployment. It creates a replication controller to
Here is an example Deployment. It creates a replica set to
bring up 3 nginx pods.
{% include code.html language="yaml" file="nginx-deployment.yaml" ghlink="/docs/user-guide/nginx-deployment.yaml" %}
@ -27,53 +30,75 @@ bring up 3 nginx pods.
Run the example by downloading the example file and then running this command:
```shell
$ kubectl create -f docs/user-guide/nginx-deployment.yaml
$ kubectl create -f docs/user-guide/nginx-deployment.yaml --record
deployment "nginx-deployment" created
```
Running
Setting the kubectl flag `--record` to `true` allows you to record the current command in the annotations of the resources being created or updated. This will be useful for future introspection; for example, to see the commands executed in each Deployment revision.
Then running `get` immediately will give:
```shell
$ kubectl get deployments
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
nginx-deployment 3 0 0 0 1s
```
immediately will give:
This indicates that the Deployment's number of desired replicas is 3 (according to the Deployment's `.spec.replicas`), the number of current replicas (`.status.replicas`) is 0, the number of up-to-date replicas (`.status.updatedReplicas`) is 0, and the number of available replicas (`.status.availableReplicas`) is also 0.
Running `get` again a few seconds later should give:
```shell
$ kubectl get deployments
NAME UPDATEDREPLICAS AGE
nginx-deployment 0/3 8s
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
nginx-deployment 3 3 3 3 18s
```
This indicates that the Deployment is trying to update 3 replicas, and has not updated any of them yet.
Running the `get` again after a minute, should give:
This indicates that the Deployment has created all three replicas, and all replicas are up-to-date (they contain the latest pod template) and available (the pod has been ready for at least the Deployment's `.spec.minReadySeconds`). Running `kubectl get rs` and `kubectl get pods` will show the replica set (RS) and pods created.
```shell
$ kubectl get deployments
NAME UPDATEDREPLICAS AGE
nginx-deployment 3/3 1m
$ kubectl get rs
NAME DESIRED CURRENT AGE
nginx-deployment-2035384211 3 3 18s
```
This indicates that the Deployment has created all three replicas.
Running `kubectl get rc` and `kubectl get pods` will show the replication controller (RC) and pods created.
You may notice that the name of the replica set is always `<the name of the Deployment>-<hash value of the pod template>`.
```shell
$ kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS AGE
REPLICAS AGE
deploymentrc-1975012602 nginx nginx:1.7.9 pod-template-hash=1975012602,app=nginx 3 2m
$ kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
nginx-deployment-2035384211-7ci7o 1/1 Running 0 18s app=nginx,pod-template-hash=2035384211
nginx-deployment-2035384211-kzszj 1/1 Running 0 18s app=nginx,pod-template-hash=2035384211
nginx-deployment-2035384211-qqcnn 1/1 Running 0 18s app=nginx,pod-template-hash=2035384211
```
The created replica set will ensure that there are three nginx pods at all times.
## The Status of a Deployment
After creating or updating a Deployment, you will want to confirm whether it succeeded or not. The best way to do this is by checking its status.
To verify if the above Deployment succeeded or not, first compare the `.metadata.generation` and `.status.observedGeneration` of the Deployment:
```shell
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
deploymentrc-1975012602-4f2tb 1/1 Running 0 1m
deploymentrc-1975012602-j975u 1/1 Running 0 1m
deploymentrc-1975012602-uashb 1/1 Running 0 1m
$ kubectl get deployment/nginx-deployment -o yaml | grep [Gg]eneration
generation: 2
observedGeneration: 2
```
The created RC will ensure that there are three nginx pods at all times.
When `observedGeneration` >= `generation`, the Deployment controller has observed the current Deployment; if not, wait for a few more seconds.
Once the above condition is met, check the Deployment's up-to-date replicas (`.status.updatedReplicas`) and see if it matches the desired replicas (`.spec.replicas`):
```shell
$ kubectl get deployment/nginx-deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
nginx-deployment 3 3 3 3 9m
```
Additionally, if you set `.spec.minReadySeconds`, you would also want to check if the available replicas (`.status.availableReplicas`) matches the desired replicas too.
**Note:** It's impossible to know whether a Deployment will ever succeed, so you have to time out and give up at some point.
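A minimal sketch of such a check (the timeout and deployment name are illustrative):

```shell
# Poll the up-to-date replica count; give up after ~5 minutes.
for i in $(seq 1 30); do
  updated=$(kubectl get deployment/nginx-deployment -o template \
    --template="{{.status.updatedReplicas}}")
  if [ "$updated" = "3" ]; then
    echo "rollout complete"
    break
  fi
  sleep 10
done
```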
## Updating a Deployment
@ -83,120 +108,303 @@ For this, we update our deployment file as follows:
{% include code.html language="yaml" file="new-nginx-deployment.yaml" ghlink="/docs/user-guide/new-nginx-deployment.yaml" %}
We can then `apply` the Deployment:
We can then `apply` the new Deployment:
```shell
$ kubectl apply -f docs/user-guide/new-nginx-deployment.yaml
deployment "nginx-deployment" configured
```
Running a `get` immediately will still give:
Alternatively, we can `edit` the Deployment and change `.spec.template.spec.containers[0].image` from `nginx:1.7.9` to `nginx:1.9.1`:
```shell
$ kubectl edit deployment/nginx-deployment
deployment "nginx-deployment" edited
```
Running a `get` immediately will give:
```shell
$ kubectl get deployments
NAME UPDATEDREPLICAS AGE
nginx-deployment 3/3 8s
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
nginx-deployment 3 3 0 3 20s
```
This indicates that deployment status has not been updated yet (it is still
showing old status).
Running a `get` again after a minute, should show:
The up-to-date count of 0 indicates that the Deployment hasn't yet updated any replicas to the latest configuration. The current count indicates the total replicas this Deployment manages (3 with the old configuration and 0 with the new), and the available count indicates the number of current replicas that are available.
The Deployment will update all the pods in a few seconds.
```shell
$ kubectl get deployments
NAME UPDATEDREPLICAS AGE
nginx-deployment 1/3 1m
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
nginx-deployment 3 3 3 3 36s
```
This indicates that the Deployment has updated one of the three pods that it needs
to update.
Eventually, it will update all the pods.
We can run `kubectl get rs` to see that the Deployment updated the pods by creating a new replica set and scaling it up to 3 replicas, as well as scaling down the old replica set to 0 replicas.
```shell
$ kubectl get deployments
NAME UPDATEDREPLICAS AGE
nginx-deployment 3/3 3m
```
We can run `kubectl get rc` to see that the Deployment updated the pods by creating a new RC,
which it scaled up to 3 replicas, and has scaled down the old RC to 0 replicas.
```shell
kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS AGE
deploymentrc-1562004724 nginx nginx:1.9.1 pod-template-hash=1562004724,app=nginx 3 5m
deploymentrc-1975012602 nginx nginx:1.7.9 pod-template-hash=1975012602,app=nginx 0 7m
$ kubectl get rs
NAME DESIRED CURRENT AGE
nginx-deployment-1564180365 3 3 6s
nginx-deployment-2035384211 0 0 36s
```
Running `get pods` should now show only the new pods:
```shell
kubectl get pods
NAME READY STATUS RESTARTS AGE
deploymentrc-1562004724-0tgk5 1/1 Running 0 9m
deploymentrc-1562004724-1rkfl 1/1 Running 0 8m
deploymentrc-1562004724-6v702 1/1 Running 0 8m
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-deployment-1564180365-khku8 1/1 Running 0 14s
nginx-deployment-1564180365-nacti 1/1 Running 0 14s
nginx-deployment-1564180365-z9gth 1/1 Running 0 14s
```
Next time we want to update these pods, we can just update and re-apply the Deployment again.
Next time we want to update these pods, we only need to update and re-apply the Deployment again.
Deployment ensures that not all pods are down while they are being updated. By
default, it ensures that minimum of 1 less than the desired number of pods are
up. For example, if you look at the above deployment closely, you will see that
Deployment can ensure that only a certain number of pods may be down while they are being updated. By
default, it ensures that at least 1 less than the desired number of pods are
up (1 max unavailable).
Deployment can also ensure that only a certain number of pods may be created above the desired number of pods. By default, it ensures that at most 1 more than the desired number of pods are up (1 max surge).
For example, if you look at the above deployment closely, you will see that
it first created a new pod, then deleted some old pods and created new ones. It
does not kill old pods until a sufficient number of new pods have come up.
does not kill old pods until a sufficient number of new pods have come up, and does not create new pods until a sufficient number of old pods have been killed. It makes sure that the number of available pods is at least 2 and the number of total pods is at most 4.
```shell
$ kubectl describe deployments
Name: nginx-deployment
Namespace: default
CreationTimestamp: Thu, 22 Oct 2015 17:58:49 -0700
Labels: app=nginx-deployment
Selector: app=nginx
Replicas: 3 updated / 3 total
StrategyType: RollingUpdate
RollingUpdateStrategy: 1 max unavailable, 1 max surge, 0 min ready seconds
OldReplicationControllers: deploymentrc-1562004724 (3/3 replicas created)
NewReplicationController: <none>
Name: nginx-deployment
Namespace: default
CreationTimestamp: Tue, 15 Mar 2016 12:01:06 -0700
Labels: app=nginx
Selector: app=nginx
Replicas: 3 updated | 3 total | 3 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 1 max unavailable, 1 max surge
OldReplicaSets: <none>
NewReplicaSet: nginx-deployment-1564180365 (3/3 replicas created)
Events:
FirstSeen LastSeen Count From SubobjectPath Reason Message
10m 10m 1 {deployment-controller } ScalingRC Scaled up rc deploymentrc-1975012602 to 3
2m 2m 1 {deployment-controller } ScalingRC Scaled up rc deploymentrc-1562004724 to 1
2m 2m 1 {deployment-controller } ScalingRC Scaled down rc deploymentrc-1975012602 to 1
1m 1m 1 {deployment-controller } ScalingRC Scaled up rc deploymentrc-1562004724 to 3
1m 1m 1 {deployment-controller } ScalingRC Scaled down rc deploymentrc-1975012602 to 0
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
36s 36s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-2035384211 to 3
23s 23s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 1
23s 23s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 2
23s 23s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 2
21s 21s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 0
21s 21s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 3
```
Here we see that when we first created the Deployment, it created an RC and scaled it up to 3 replicas directly.
When we updated the Deployment, it created a new RC and scaled it up to 1 and then scaled down the old RC by 1, so that at least 2 pods were available at all times.
It then scaled up the new RC to 3 and when those pods were ready, it scaled down the old RC to 0.
Here we see that when we first created the Deployment, it created a replica set (nginx-deployment-2035384211) and scaled it up to 3 replicas directly.
When we updated the Deployment, it created a new replica set (nginx-deployment-1564180365) and scaled it up to 1 and then scaled down the old replica set to 2, so that at least 2 pods were available and at most 4 pods were created at all times.
It then continued scaling up and down the new and the old replica set, with the same rolling update strategy. Finally, we'll have 3 available replicas in the new replica set, and the old replica set is scaled down to 0.
### Multiple Updates
Each time a new deployment object is observed, a replication controller is
created to bring up the desired pods if there is no existing RC doing so.
Existing RCs controlling pods whose labels match `.spec.selector` but whose
Each time a new deployment object is observed by the deployment controller, a replica set is
created to bring up the desired pods if there is no existing replica set doing so.
Existing replica sets controlling pods whose labels match `.spec.selector` but whose
template does not match `.spec.template` are scaled down.
Eventually, the new RC will be scaled to `.spec.replicas` and all old RCs will
Eventually, the new replica set will be scaled to `.spec.replicas` and all old replica sets will
be scaled to 0.
If the user updates a Deployment while an existing deployment is in progress,
the Deployment will create a new RC as per the update and start scaling that up, and
will roll the RC that it was scaling up previously-- it will add it to its list of old RCs and will
If you update a Deployment while an existing deployment is in progress,
the Deployment will create a new replica set as per the update and start scaling that up, and
will roll the replica set that it was scaling up previously -- it will add it to its list of old replica sets and will
start scaling it down.
For example, suppose the user creates a deployment to create 5 replicas of `nginx:1.7.9`,
but then updates the deployment to create 5 replicas of `nginx:1.9.1`, when only 3
replicas of `nginx:1.7.9` had been created. In that case, deployment will immediately start
For example, suppose you create a Deployment to create 5 replicas of `nginx:1.7.9`,
but then update the Deployment to create 5 replicas of `nginx:1.9.1`, when only 3
replicas of `nginx:1.7.9` had been created. In that case, the Deployment will immediately start
killing the 3 `nginx:1.7.9` pods that it had created, and will start creating
`nginx:1.9.1` pods. It will not wait for 5 replicas of `nginx:1.7.9` to be created
before changing course.
## Rolling Back a Deployment
Sometimes we may want to roll back a Deployment; for example, when the Deployment is not stable, such as crash looping.
Suppose that we made a typo while updating the Deployment, by putting the image name as `nginx:1.91` instead of `nginx:1.9.1`:
{% include code.html language="yaml" file="bad-nginx-deployment.yaml" ghlink="/docs/user-guide/bad-nginx-deployment.yaml" %}
```shell
$ kubectl apply -f docs/user-guide/bad-nginx-deployment.yaml
deployment "nginx-deployment" configured
```
You will see that both the number of old replicas (nginx-deployment-1564180365 and nginx-deployment-2035384211) and new replicas (nginx-deployment-3066724191) are 2.
```shell
$ kubectl get rs
NAME DESIRED CURRENT AGE
nginx-deployment-1564180365 2 2 25s
nginx-deployment-2035384211 0 0 36s
nginx-deployment-3066724191 2 2 6s
```
Looking at the pods created, you will see that the 2 pods created by the new replica set are stuck in an image pull loop.
```shell
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-deployment-1564180365-70iae 1/1 Running 0 25s
nginx-deployment-1564180365-jbqqo 1/1 Running 0 25s
nginx-deployment-3066724191-08mng 0/1 ImagePullBackOff 0 6s
nginx-deployment-3066724191-eocby 0/1 ImagePullBackOff 0 6s
```
Note that the Deployment controller will stop the bad rollout automatically, and will stop scaling up the new replica set.
```shell
$ kubectl describe deployment
Name: nginx-deployment
Namespace: default
CreationTimestamp: Tue, 15 Mar 2016 14:48:04 -0700
Labels: app=nginx
Selector: app=nginx
Replicas: 2 updated | 3 total | 2 available | 2 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 1 max unavailable, 1 max surge
OldReplicaSets: nginx-deployment-1564180365 (2/2 replicas created)
NewReplicaSet: nginx-deployment-3066724191 (2/2 replicas created)
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
1m 1m 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-2035384211 to 3
22s 22s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 1
22s 22s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 2
22s 22s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 2
21s 21s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 0
21s 21s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 3
13s 13s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-3066724191 to 1
13s 13s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-1564180365 to 2
13s 13s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-3066724191 to 2
```
To fix this, we need to roll back to a previous revision of the Deployment that is stable.
First, check the revisions of this deployment:
```shell
$ kubectl rollout history deployment/nginx-deployment
deployments "nginx-deployment":
REVISION CHANGE-CAUSE
1 kubectl create -f docs/user-guide/nginx-deployment.yaml --record
2 kubectl apply -f docs/user-guide/new-nginx-deployment.yaml
3 kubectl apply -f docs/user-guide/bad-nginx-deployment.yaml
```
Because we recorded the command while creating this Deployment using `--record`, we can easily see the changes we made in each revision.
To further see the details of each revision, run:
```shell
$ kubectl rollout history deployment/nginx-deployment --revision=2
deployments "nginx-deployment" revision 2
Labels: app=nginx,pod-template-hash=1564180365
Annotations: kubernetes.io/change-cause=kubectl apply -f docs/user-guide/new-nginx-deployment.yaml
Image(s): nginx:1.9.1
No volumes.
```
Now we've decided to undo the current rollout and roll back to the previous revision:
```shell
$ kubectl rollout undo deployment/nginx-deployment
deployment "nginx-deployment" rolled back
```
Alternatively, you can roll back to a specific revision by specifying it with `--to-revision`:
```shell
$ kubectl rollout undo deployment/nginx-deployment --to-revision=2
deployment "nginx-deployment" rolled back
```
The Deployment is now rolled back to a previous stable revision. As you can see, a `DeploymentRollback` event for rolling back to revision 2 is generated by the Deployment controller.
```shell
$ kubectl get deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
nginx-deployment 3 3 3 3 30m
$ kubectl describe deployment
Name: nginx-deployment
Namespace: default
CreationTimestamp: Tue, 15 Mar 2016 14:48:04 -0700
Labels: app=nginx
Selector: app=nginx
Replicas: 3 updated | 3 total | 3 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 1 max unavailable, 1 max surge
OldReplicaSets: <none>
NewReplicaSet: nginx-deployment-1564180365 (3/3 replicas created)
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
30m 30m 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-2035384211 to 3
29m 29m 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 1
29m 29m 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 2
29m 29m 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 2
29m 29m 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 0
29m 29m 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-3066724191 to 2
29m 29m 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-3066724191 to 1
29m 29m 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-1564180365 to 2
2m 2m 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-3066724191 to 0
2m 2m 1 {deployment-controller } Normal DeploymentRollback Rolled back deployment "nginx-deployment" to revision 2
29m 2m 2 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 3
```
## Pausing and Resuming a Deployment
You can also pause a Deployment mid-rollout and then resume it. One use case for this is canary deployments.
Update the Deployment again and then pause the Deployment with `kubectl rollout pause`:
```shell
$ kubectl apply -f docs/user-guide/new-nginx-deployment.yaml; kubectl rollout pause deployment/nginx-deployment
deployment "nginx-deployment" configured
deployment "nginx-deployment" paused
```
Note that the existing state of the Deployment will continue to function, but new updates to the Deployment will have no effect as long as the Deployment is paused.
The Deployment was still in progress when we paused it, so the actions of scaling up and down replica sets are paused too.
```shell
$ kubectl get rs
NAME DESIRED CURRENT AGE
nginx-deployment-1564180365 2 2 1h
nginx-deployment-2035384211 2 2 1h
nginx-deployment-3066724191 0 0 1h
```
To resume the Deployment, simply do `kubectl rollout resume`:
```shell
$ kubectl rollout resume deployment/nginx-deployment
deployment "nginx-deployment" resumed
```
Then the Deployment will continue and finish the rollout:
```shell
$ kubectl get rs
NAME DESIRED CURRENT AGE
nginx-deployment-1564180365 3 3 1h
nginx-deployment-2035384211 0 0 1h
nginx-deployment-3066724191 0 0 1h
```
Note: A paused Deployment cannot be scaled at this moment, and we will add this feature in the 1.3 release; see [issue #20853](https://github.com/kubernetes/kubernetes/issues/20853). You cannot roll back a paused Deployment either; you should resume a Deployment first before doing a rollback.
## Writing a Deployment Spec
As with all other Kubernetes configs, a Deployment needs `apiVersion`, `kind`, and
`metadata` fields. For general information about working with config files,
see [here](/docs/user-guide/deploying-applications), [here](/docs/user-guide/configuring-containers), and [here](/docs/user-guide/working-with-resources).
see [deploying applications](/docs/user-guide/deploying-applications), [configuring containers](/docs/user-guide/configuring-containers), and [using kubectl to manage resources](/docs/user-guide/working-with-resources) documents.
A Deployment also needs a [`.spec` section](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/docs/devel/api-conventions.md#spec-and-status).
@ -231,13 +439,12 @@ the default value.
All existing pods are killed before new ones are created when
`.spec.strategy.type==Recreate`.
__Note: This is not implemented yet__.
#### Rolling Update Deployment
The Deployment updates pods in a [rolling update](/docs/user-guide/update-demo/) fashion
when `.spec.strategy.type==RollingUpdate`.
Users can specify `maxUnavailable`, `maxSurge` and `minReadySeconds` to control
You can specify `maxUnavailable` and `maxSurge` to control
the rolling update process.
##### Max Unavailable
@ -250,9 +457,9 @@ The absolute number is calculated from percentage by rounding up.
This cannot be 0 if `.spec.strategy.rollingUpdate.maxSurge` is 0.
By default, a fixed value of 1 is used.
For example, when this value is set to 30%, the old RC can be scaled down to
For example, when this value is set to 30%, the old replica set can be scaled down to
70% of desired pods immediately when the rolling update starts. Once new pods are
ready, old RC can be scaled down further, followed by scaling up the new RC,
ready, the old replica set can be scaled down further, followed by scaling up the new replica set,
ensuring that the total number of pods available at all times during the
update is at least 70% of the desired pods.
@ -266,13 +473,13 @@ This can not be 0 if `MaxUnavailable` is 0.
The absolute number is calculated from percentage by rounding up.
By default, a value of 1 is used.
For example, when this value is set to 30%, the new RC can be scaled up immediately when
For example, when this value is set to 30%, the new replica set can be scaled up immediately when
the rolling update starts, such that the total number of old and new pods does not exceed
130% of desired pods. Once old pods have been killed,
the new RC can be scaled up further, ensuring that the total number of pods running
the new replica set can be scaled up further, ensuring that the total number of pods running
at any time during the update is at most 130% of desired pods.
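Putting the two fields together, a sketch of a rolling-update strategy stanza (the values are illustrative):

```yaml
spec:
  replicas: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 30%   # at least 7 of 10 desired pods stay available
      maxSurge: 30%         # at most 13 pods (old + new) exist at any time
```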
##### Min Ready Seconds
### Min Ready Seconds
`.spec.minReadySeconds` is an optional field that specifies the
minimum number of seconds for which a newly created pod should be ready
@ -280,9 +487,25 @@ without any of its containers crashing, for it to be considered available.
This defaults to 0 (the pod will be considered available as soon as it is ready).
To learn more about when a pod is considered ready, see [Container Probes](/docs/user-guide/pod-states/#container-probes).
### Rollback To
`.spec.rollbackTo` is an optional field with the configuration the Deployment is rolling back to. Setting this field will trigger a rollback, and this field will be cleared every time a rollback is done.
#### Revision
`.spec.rollbackTo.revision` is an optional field specifying the revision to rollback to. This defaults to 0, meaning rollback to the last revision in history.
### Revision History Limit
`.spec.revisionHistoryLimit` is an optional field that specifies the number of old replica sets to retain to allow rollback. If this field is not set, all old replica sets will be kept by default. The configuration of each Deployment revision is stored in its replica sets; therefore, once an old replica set is deleted, you lose the ability to roll back to that revision of the Deployment.
### Paused
`.spec.paused` is an optional boolean field for pausing and resuming a Deployment. It defaults to false (a Deployment is not paused).
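A sketch combining these optional fields (the values are illustrative):

```yaml
spec:
  revisionHistoryLimit: 10   # keep the 10 most recent old replica sets for rollback
  paused: false              # set to true to pause the Deployment
  rollbackTo:
    revision: 2              # 0 (the default) means the last revision in history
```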
## Alternative to Deployments
### kubectl rolling update
[Kubectl rolling update](/docs/user-guide/kubectl/kubectl_rolling-update) also updates pods and replication controllers in a similar fashion.
But deployments is declarative and is server side.
[Kubectl rolling update](/docs/user-guide/kubectl/kubectl_rolling-update) updates pods and replication controllers in a similar fashion.
But Deployments are recommended, since they are declarative, server side, and have additional features, such as rolling back to any previous revision even after the rolling update is done.
View File
@ -8,7 +8,7 @@ In this doc, we introduce the Kubernetes command line for interacting with the a
#### docker run
How do I run an nginx container and expose it to the world? Checkout [kubectl run](/docs/user-guide/kubectl/kubectl_run).
How do I run an nginx Deployment and expose it to the world? Check out [kubectl run](/docs/user-guide/kubectl/kubectl_run).
With docker:
@ -25,12 +25,20 @@ With kubectl:
```shell
# start the pod running nginx
$ kubectl run --image=nginx nginx-app --port=80 --env="DOMAIN=cluster"
replicationcontroller "nginx-app" created
# expose a port through with a service
$ kubectl expose rc nginx-app --port=80 --name=nginx-http
deployment "nginx-app" created
```
With kubectl, we create a [replication controller](/docs/user-guide/replication-controller) which will make sure that N pods are running nginx (where N is the number of replicas stated in the spec, which defaults to 1). We also create a [service](/docs/user-guide/services) with a selector that matches the replication controller's selector. See the [Quick start](/docs/user-guide/quick-start) for more information.
`kubectl run` creates a Deployment named "nginx-app" on Kubernetes clusters >= v1.2. If you are running older versions, it creates replication controllers instead.
If you want the old behavior, use `--generator=run/v1` to create replication controllers. See [`kubectl run`](/docs/user-guide/kubectl/kubectl_run/) for more details.
Note that `kubectl` commands will print the type and name of the resource created or mutated, which can then be used in subsequent commands. Now, we can expose the Deployment created above as a new Service:
```shell
# expose a port through a service
$ kubectl expose deployment nginx-app --port=80 --name=nginx-http
service "nginx-http" exposed
```
With kubectl, we create a [Deployment](/docs/user-guide/deployments) which will make sure that N pods are running nginx (where N is the number of replicas stated in the spec, which defaults to 1). We also create a [service](/docs/user-guide/services) with a selector that matches the Deployment's selector. See the [Quick start](/docs/user-guide/quick-start) for more information.
By default images are run in the background, similar to `docker run -d ...`; if you want to run things in the foreground, use:
@ -40,8 +48,8 @@ kubectl run [-i] [--tty] --attach <name> --image=<image>
Unlike `docker run ...`, if `--attach` is specified, we attach to `stdin`, `stdout` and `stderr`; there is no ability to control which streams are attached (as with `docker -a ...`).
Because we start a replication controller for your container, it will be restarted if you terminate the attached process (e.g. `ctrl-c`), this is different than `docker run -it`.
To destroy the replication controller (and it's pods) you need to run `kubectl delete rc <name>`
Because we start a Deployment for your container, it will be restarted if you terminate the attached process (e.g. `ctrl-c`); this is different from `docker run -it`.
To destroy the Deployment (and its pods) you need to run `kubectl delete deployment <name>`
#### docker ps
@ -180,20 +188,19 @@ a9ec34d98787
With kubectl:
```shell
$ kubectl get rc nginx-app
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
nginx-app nginx-app nginx run=nginx-app 1
$ kubectl get po
NAME READY STATUS RESTARTS AGE
nginx-app-aualv 1/1 Running 0 16s
$ kubectl delete rc nginx-app
NAME READY STATUS RESTARTS AGE
nginx-app-aualv 1/1 Running 0 16s
$ kubectl get po
NAME READY STATUS RESTARTS AGE
$ kubectl get deployment nginx-app
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
nginx-app 1 1 1 1 2m
$ kubectl get po -l run=nginx-app
NAME READY STATUS RESTARTS AGE
nginx-app-2883164633-aklf7 1/1 Running 0 2m
$ kubectl delete deployment nginx-app
deployment "nginx-app" deleted
$ kubectl get po -l run=nginx-app
# Return nothing
```
Notice that we don't delete the pod directly. With kubectl we want to delete the replication controller that owns the pod. If we delete the pod directly, the replication controller will recreate the pod.
Notice that we don't delete the pod directly. With kubectl we want to delete the Deployment that owns the pod. If we delete the pod directly, the Deployment will recreate the pod.
#### docker login
@ -263,4 +270,4 @@ KubeUI is running at https://108.59.85.141/api/v1/proxy/namespaces/kube-system/s
Grafana is running at https://108.59.85.141/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana
Heapster is running at https://108.59.85.141/api/v1/proxy/namespaces/kube-system/services/monitoring-heapster
InfluxDB is running at https://108.59.85.141/api/v1/proxy/namespaces/kube-system/services/monitoring-influxdb
```
View File
@ -52,7 +52,7 @@ environment variable they want.
This is an example of a pod that consumes its name and namespace via the
downward API:
{% include code.html language="yaml" file="downward-api/dapi-pod.yaml" ghlink="/docs/user-guide/downward-api/dapi-pod.yaml" %}
{% include code.html language="yaml" file="dapi-pod.yaml" ghlink="/docs/user-guide/downward-api/dapi-pod.yaml" %}
### Downward API volume
@ -86,7 +86,7 @@ In future, it will be possible to specify a specific annotation or label.
This is an example of a pod that consumes its labels and annotations via the downward API volume; labels and annotations are dumped into `/etc/labels` and `/etc/annotations`, respectively:
{% include code.html language="yaml" file="downward-api/volume/dapi-volume.yaml" ghlink="/docs/user-guide/downward-api/volume/dapi-volume.yaml" %}
{% include code.html language="yaml" file="volume/dapi-volume.yaml" ghlink="/docs/user-guide/downward-api/volume/dapi-volume.yaml" %}
Some more thorough examples:
View File
@ -1,78 +0,0 @@
---
---
This document describes the current state of Horizontal Pod Autoscaler in Kubernetes.
## What is Horizontal Pod Autoscaler?
Horizontal pod autoscaling allows the number of pods in a replication controller or deployment
to scale automatically based on observed CPU utilization.
It is a [beta](/docs/api/#api-versioning) feature in Kubernetes 1.1.
The autoscaler is implemented as a Kubernetes API resource and a controller.
The resource describes the behavior of the controller.
The controller periodically adjusts the number of replicas in a replication controller or deployment
to match the observed average CPU utilization to the target specified by the user.
## How does Horizontal Pod Autoscaler work?
![Horizontal Pod Autoscaler diagram](/images/docs/horizontal-pod-autoscaler.svg)
The autoscaler is implemented as a control loop.
It periodically queries CPU utilization for the pods it targets.
(The period of the autoscaler is controlled by the `--horizontal-pod-autoscaler-sync-period` flag of the controller manager;
the default value is 30 seconds.)
Then, it compares the arithmetic mean of the pods' CPU utilization with the target and adjust the number of replicas if needed.
CPU utilization is the recent CPU usage of a pod divided by the sum of CPU requested by the pod's containers.
Please note that if some of the pod's containers do not have a CPU request set,
CPU utilization for the pod will not be defined, and the autoscaler will not take any action.
Further details of the autoscaling algorithm are given [here](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/horizontal-pod-autoscaler.md#autoscaling-algorithm).
The autoscaler uses Heapster to collect CPU utilization.
Therefore, Heapster monitoring must be deployed in your cluster for autoscaling to work.
The autoscaler accesses the corresponding replication controller or deployment via the scale sub-resource.
Scale is an interface which allows you to dynamically set the number of replicas and to learn their current state.
More details on scale sub-resource can be found [here](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/horizontal-pod-autoscaler.md#scale-subresource).
## API Object
Horizontal pod autoscaler is a top-level resource in the Kubernetes REST API (currently in [beta](/docs/api/#api-versioning)).
More details about the API object can be found at
[HorizontalPodAutoscaler Object](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/horizontal-pod-autoscaler.md#horizontalpodautoscaler-object).
## Support for horizontal pod autoscaler in kubectl
Horizontal pod autoscaler, like every API resource, is supported in a standard way by `kubectl`.
We can create a new autoscaler using the `kubectl create` command.
We can list autoscalers by `kubectl get hpa` and get a detailed description by `kubectl describe hpa`.
Finally, we can delete an autoscaler using `kubectl delete hpa`.
In addition, there is a special `kubectl autoscale` command that allows for easy creation of a horizontal pod autoscaler.
For instance, executing `kubectl autoscale rc foo --min=2 --max=5 --cpu-percent=80`
will create an autoscaler for replication controller *foo*, with target CPU utilization set to `80%`
and the number of replicas between 2 and 5.
The detailed documentation of `kubectl autoscale` can be found [here](/docs/user-guide/kubectl/kubectl_autoscale).
## Autoscaling during rolling update
Currently in Kubernetes, it is possible to perform a rolling update by managing replication controllers directly,
or by using the deployment object, which manages the underlying replication controllers for you.
Horizontal pod autoscaler only supports the latter approach: the horizontal pod autoscaler is bound to the deployment object,
it sets the size for the deployment object, and the deployment is responsible for setting sizes of underlying replication controllers.
Horizontal pod autoscaler does not work with rolling update using direct manipulation of replication controllers,
i.e. you cannot bind a horizontal pod autoscaler to a replication controller and do a rolling update (e.g. using `kubectl rolling-update`).
The reason this doesn't work is that when a rolling update creates a new replication controller,
the horizontal pod autoscaler will not be bound to the new replication controller.
## Further reading
* Design documentation: [Horizontal Pod Autoscaling](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/horizontal-pod-autoscaler.md).
* Manual of autoscale command in kubectl: [kubectl autoscale](/docs/user-guide/kubectl/kubectl_autoscale).
* Usage example of [Horizontal Pod Autoscaler](/docs/user-guide/horizontal-pod-autoscaling/).

View File

@ -5,7 +5,7 @@ metadata:
namespace: default
spec:
scaleRef:
kind: ReplicationController
kind: Deployment
name: php-apache
subresource: scale
minReplicas: 1

View File

@ -1,184 +1,83 @@
---
---
Horizontal pod autoscaling is a [beta](/docs/api/#api-versioning) feature in Kubernetes 1.1.
It allows the number of pods in a replication controller or deployment to scale automatically based on observed CPU usage.
In the future, other metrics will also be supported.
This document describes the current state of Horizontal Pod Autoscaler in Kubernetes.
In this document we explain how this feature works by walking you through an example of enabling horizontal pod autoscaling with the php-apache server.
## What is Horizontal Pod Autoscaler?
## Prerequisites
Horizontal pod autoscaling allows you to automatically scale the number of pods
in a replication controller, deployment or replica set based on observed CPU utilization.
This example requires a running Kubernetes cluster and kubectl version 1.1 or later.
[Heapster](https://github.com/kubernetes/heapster) monitoring needs to be deployed in the cluster,
as the horizontal pod autoscaler uses it to collect metrics
(if you followed the [getting started on GCE guide](/docs/getting-started-guides/gce),
heapster monitoring is turned on by default).
The autoscaler is implemented as a Kubernetes API resource and a controller.
The resource describes the behavior of the controller.
The controller periodically adjusts the number of replicas in a replication controller or deployment
to match the observed average CPU utilization to the target specified by the user.
## Step One: Run & expose php-apache server
## How does Horizontal Pod Autoscaler work?
To demonstrate the horizontal pod autoscaler, we will use a custom Docker image based on the php-apache server.
The image can be found [here](https://releases.k8s.io/{{page.githubbranch}}/docs/user-guide/horizontal-pod-autoscaling/image).
It defines an [index.php](/docs/user-guide/horizontal-pod-autoscaling/image/index.php) page which performs some CPU-intensive computations.
![Horizontal Pod Autoscaler diagram](/images/docs/horizontal-pod-autoscaler.svg)
First, we will start a replication controller running the image and expose it as an external service:
The autoscaler is implemented as a control loop.
It periodically queries CPU utilization for the pods it targets.
(The period of the autoscaler is controlled by the `--horizontal-pod-autoscaler-sync-period` flag of the controller manager.
The default value is 30 seconds).
Then, it compares the arithmetic mean of the pods' CPU utilization with the target and adjusts the number of replicas if needed.
<a name="kubectl-run"></a>
CPU utilization is the recent CPU usage of a pod divided by the sum of CPU requested by the pod's containers.
Please note that if some of the pod's containers do not have a CPU request set,
CPU utilization for the pod will not be defined, and the autoscaler will not take any action.
Further details of the autoscaling algorithm are given [here](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/horizontal-pod-autoscaler.md#autoscaling-algorithm).
```shell
$ kubectl run php-apache --image=gcr.io/google_containers/hpa-example --requests=cpu=200m
replicationcontroller "php-apache" created
The autoscaler uses Heapster to collect CPU utilization.
Therefore, Heapster monitoring must be deployed in your cluster for autoscaling to work.
$ kubectl expose rc php-apache --port=80 --type=LoadBalancer
service "php-apache" exposed
```
The autoscaler accesses the corresponding replication controller, deployment or replica set via the scale sub-resource.
Scale is an interface which allows you to dynamically set the number of replicas and to learn their current state.
More details on scale sub-resource can be found [here](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/horizontal-pod-autoscaler.md#scale-subresource).
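As a rough sketch of what this interface looks like on the wire, you can read the scale sub-resource of a deployment through `kubectl proxy`; the URL path here is an assumption based on the design doc linked above:
```shell
# Start a local proxy to the API server, then read the scale sub-resource:
$ kubectl proxy --port=8001 &
$ curl http://localhost:8001/apis/extensions/v1beta1/namespaces/default/deployments/php-apache/scale
```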
Now, we will wait some time and verify that both the replication controller and the service were correctly created and are running. We will also determine the IP address of the service:
```shell
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
php-apache-wa3t1 1/1 Running 0 12m
## API Object
$ kubectl describe services php-apache | grep "LoadBalancer Ingress"
LoadBalancer Ingress: 146.148.24.244
```
Horizontal pod autoscaler is a top-level resource in the Kubernetes REST API.
In Kubernetes 1.2, HPA graduated from beta to stable (more details about [api versioning](/docs/api/#api-versioning)) with compatibility between versions.
The stable version is available in the `autoscaling/v1` API group, whereas the beta version remains available in the `extensions/v1beta1` API group as before.
The transition plan is to deprecate the beta version of HPA in Kubernetes 1.3 and remove it completely in Kubernetes 1.4.
We may now check that php-apache server works correctly by calling `curl` with the service's IP:
**Warning!** Please keep in mind that all Kubernetes components still use HPA in version `extensions/v1beta1` in Kubernetes 1.2.
```shell
$ curl http://146.148.24.244
OK!
```
More details about the API object can be found at
[HorizontalPodAutoscaler Object](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/horizontal-pod-autoscaler.md#horizontalpodautoscaler-object).
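To check which of these API groups your cluster serves, you can list them with `kubectl api-versions` (the output below is abbreviated and illustrative; the exact list depends on your cluster):
```shell
$ kubectl api-versions
autoscaling/v1
extensions/v1beta1
```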
Please note that when exposing the service we assumed that our cluster runs on a provider which supports load balancers (e.g.: on GCE).
If load balancers are not supported (e.g.: on Vagrant), we can expose the php-apache service as ``ClusterIP`` and connect to it using the proxy on the master:
## Support for horizontal pod autoscaler in kubectl
```shell
$ kubectl expose rc php-apache --port=80 --type=ClusterIP
service "php-apache" exposed
Horizontal pod autoscaler, like every API resource, is supported in a standard way by `kubectl`.
We can create a new autoscaler using the `kubectl create` command.
We can list autoscalers by `kubectl get hpa` and get a detailed description by `kubectl describe hpa`.
Finally, we can delete an autoscaler using `kubectl delete hpa`.
$ kubectl cluster-info | grep master
Kubernetes master is running at https://146.148.6.215
In addition, there is a special `kubectl autoscale` command that allows for easy creation of a horizontal pod autoscaler.
For instance, executing `kubectl autoscale rc foo --min=2 --max=5 --cpu-percent=80`
will create an autoscaler for replication controller *foo*, with target CPU utilization set to `80%`
and the number of replicas between 2 and 5.
The detailed documentation of `kubectl autoscale` can be found [here](/docs/user-guide/kubectl/kubectl_autoscale).
$ curl -k -u <admin>:<password> https://146.148.6.215/api/v1/proxy/namespaces/default/services/php-apache/
OK!
```
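Concretely, that standard lifecycle might look like this for the php-apache autoscaler used elsewhere on this page (a sketch, assuming the autoscaler already exists):
```shell
# List autoscalers, inspect one in detail, then delete it:
$ kubectl get hpa
$ kubectl describe hpa php-apache
$ kubectl delete hpa php-apache
```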
## Step Two: Create horizontal pod autoscaler
## Autoscaling during rolling update
Now that the server is running, we will create a horizontal pod autoscaler for it.
To create it, we will use the [hpa-php-apache.yaml](/docs/user-guide/horizontal-pod-autoscaling/hpa-php-apache.yaml) file, which looks like this:
Currently in Kubernetes, it is possible to perform a rolling update by managing replication controllers directly,
or by using the deployment object, which manages the underlying replication controllers for you.
Horizontal pod autoscaler only supports the latter approach: the horizontal pod autoscaler is bound to the deployment object,
it sets the size for the deployment object, and the deployment is responsible for setting sizes of underlying replication controllers.
```yaml
apiVersion: extensions/v1beta1
kind: HorizontalPodAutoscaler
metadata:
name: php-apache
namespace: default
spec:
scaleRef:
kind: ReplicationController
name: php-apache
namespace: default
minReplicas: 1
maxReplicas: 10
cpuUtilization:
targetPercentage: 50
```
Horizontal pod autoscaler does not work with rolling update using direct manipulation of replication controllers,
i.e. you cannot bind a horizontal pod autoscaler to a replication controller and do a rolling update (e.g. using `kubectl rolling-update`).
The reason this doesn't work is that when a rolling update creates a new replication controller,
the horizontal pod autoscaler will not be bound to the new replication controller.
This defines a horizontal pod autoscaler that maintains between 1 and 10 replicas of the Pods
controlled by the php-apache replication controller we created in the first step of these instructions.
Roughly speaking, the horizontal autoscaler will increase and decrease the number of replicas
(via the replication controller) so as to maintain an average CPU utilization across all Pods of 50%
(since each pod requests 200 milli-cores by [kubectl run](#kubectl-run), this means average CPU utilization of 100 milli-cores).
See [here](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/horizontal-pod-autoscaler.md#autoscaling-algorithm) for more details on the algorithm.
We will create the autoscaler by executing the following command:
## Further reading
```shell
$ kubectl create -f docs/user-guide/horizontal-pod-autoscaling/hpa-php-apache.yaml
horizontalpodautoscaler "php-apache" created
```
Alternatively, we can create the autoscaler using [kubectl autoscale](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/user-guide/kubectl/kubectl_autoscale.md).
The following command will create the equivalent autoscaler as defined in the [hpa-php-apache.yaml](/docs/user-guide/horizontal-pod-autoscaling/hpa-php-apache.yaml) file:
```shell
$ kubectl autoscale rc php-apache --cpu-percent=50 --min=1 --max=10
replicationcontroller "php-apache" autoscaled
```
We may check the current status of the autoscaler by running:
```shell
$ kubectl get hpa
NAME REFERENCE TARGET CURRENT MINPODS MAXPODS AGE
php-apache ReplicationController/default/php-apache/ 50% 0% 1 10 27s
```
Please note that the current CPU consumption is 0% as we are not sending any requests to the server
(the ``CURRENT`` column shows the average across all the pods controlled by the corresponding replication controller).
## Step Three: Increase load
Now, we will see how the autoscaler reacts to increased load on the server.
We will start an infinite loop of queries to our server (please run it in a different terminal):
```shell
$ while true; do curl http://146.148.6.244; done
```
We may examine how the CPU load increased (the results should be visible after about 3-4 minutes) by executing:
```shell
$ kubectl get hpa
NAME REFERENCE TARGET CURRENT MINPODS MAXPODS AGE
php-apache ReplicationController/default/php-apache/ 50% 305% 1 10 4m
```
In the case presented here, it bumped CPU consumption to 305% of the request.
As a result, the replication controller was resized to 7 replicas:
```shell
$ kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS AGE
php-apache php-apache gcr.io/google_containers/hpa-example run=php-apache 7 18m
```
Now, we may increase the load even more by running yet another infinite loop of queries (in yet another terminal):
```shell
$ while true; do curl http://146.148.6.244; done
```
In the case presented here, it increased the number of serving pods to 10:
```shell
$ kubectl get hpa
NAME REFERENCE TARGET CURRENT MINPODS MAXPODS AGE
php-apache ReplicationController/default/php-apache/ 50% 65% 1 10 14m
$ kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS AGE
php-apache php-apache gcr.io/google_containers/hpa-example run=php-apache 10 24m
```
## Step Four: Stop load
We will finish our example by stopping the user load.
We will terminate both infinite ``while`` loops sending requests to the server and verify the resulting state:
```shell
$ kubectl get hpa
NAME REFERENCE TARGET CURRENT MINPODS MAXPODS AGE
php-apache ReplicationController/default/php-apache/ 50% 0% 1 10 21m
$ kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS AGE
php-apache php-apache gcr.io/google_containers/hpa-example run=php-apache 1 31m
```
As we see, in the presented case CPU utilization dropped to 0, and the number of replicas dropped to 1.
* Design documentation: [Horizontal Pod Autoscaling](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/horizontal-pod-autoscaler.md).
* Manual of autoscale command in kubectl: [kubectl autoscale](/docs/user-guide/kubectl/kubectl_autoscale).
* Usage example of [Horizontal Pod Autoscaler](/docs/user-guide/horizontal-pod-autoscaling/).

View File

@ -0,0 +1,147 @@
---
---
Horizontal pod autoscaling allows you to automatically scale the number of pods
in a replication controller, deployment or replica set based on observed CPU utilization.
In the future, other metrics will also be supported.
In this document we explain how this feature works by walking you through an example of enabling horizontal pod autoscaling for the php-apache server.
## Prerequisites
This example requires a running Kubernetes cluster and kubectl version 1.2 or later.
[Heapster](https://github.com/kubernetes/heapster) monitoring needs to be deployed in the cluster,
as the horizontal pod autoscaler uses it to collect metrics
(if you followed the [getting started on GCE guide](/docs/getting-started-guides/gce),
heapster monitoring is turned on by default).
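One way to check that heapster is running is to look for its pods in the `kube-system` namespace (a sketch; pod names and labels vary by cluster setup):
```shell
$ kubectl get pods --namespace=kube-system | grep heapster
```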
## Step One: Run & expose php-apache server
To demonstrate the horizontal pod autoscaler, we will use a custom Docker image based on the php-apache server.
The image can be found [here](/docs/user-guide/horizontal-pod-autoscaling/image).
It defines an [index.php](/docs/user-guide/horizontal-pod-autoscaling/image/index.php) page which performs some CPU-intensive computations.
First, we will start a deployment running the image and expose it as a service:
```shell
$ kubectl run php-apache --image=gcr.io/google_containers/hpa-example --requests=cpu=200m --expose --port=80
service "php-apache" created
deployment "php-apache" created
```
## Step Two: Create horizontal pod autoscaler
Now that the server is running, we will create the autoscaler using
[kubectl autoscale](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/user-guide/kubectl/kubectl_autoscale.md).
The following command will create a horizontal pod autoscaler that maintains between 1 and 10 replicas of the Pods
controlled by the php-apache deployment we created in the first step of these instructions.
Roughly speaking, the horizontal autoscaler will increase and decrease the number of replicas
(via the deployment) to maintain an average CPU utilization across all Pods of 50%
(since each pod requests 200 milli-cores by [kubectl run](#kubectl-run), this means average CPU usage of 100 milli-cores).
See [here](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/horizontal-pod-autoscaler.md#autoscaling-algorithm) for more details on the algorithm.
```shell
$ kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10
deployment "php-apache" autoscaled
```
We may check the current status of the autoscaler by running:
```shell
$ kubectl get hpa
NAME REFERENCE TARGET CURRENT MINPODS MAXPODS AGE
php-apache Deployment/php-apache/scale 50% 0% 1 10 18s
```
Please note that the current CPU consumption is 0% as we are not sending any requests to the server
(the ``CURRENT`` column shows the average across all the pods controlled by the corresponding deployment).
## Step Three: Increase load
Now, we will see how the autoscaler reacts to increased load on the server.
We will start a container with the `busybox` image that runs an infinite loop of queries to our server (please run it in a different terminal):
```shell
$ kubectl run -i --tty load-generator --image=busybox /bin/sh
Hit enter for command prompt
$ while true; do wget -q -O- http://php-apache.default.svc.cluster.local; done
```
We may examine how the CPU load increased by executing the following (it usually takes about 1 minute):
```shell
$ kubectl get hpa
NAME REFERENCE TARGET CURRENT MINPODS MAXPODS AGE
php-apache Deployment/php-apache/scale 50% 305% 1 10 3m
```
In the case presented here, it bumped CPU consumption to 305% of the request.
As a result, the deployment was resized to 7 replicas:
```shell
$ kubectl get deployment php-apache
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
php-apache 7 7 7 7 19m
```
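This matches the autoscaling rule from the design doc linked above: with one pod at 305% of its requested CPU and a 50% target, the desired replica count is ceil(305 / 50) = 7. A quick way to reproduce the ceiling with integer shell arithmetic:
```shell
# ceil(305/50) computed as (305 + 50 - 1) / 50 with integer division:
$ echo $(( (305 + 50 - 1) / 50 ))
7
```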
**Warning!** Sometimes it may take a few steps for the number of replicas to stabilize.
Since the amount of load is not controlled in any way, the final number of replicas
may differ from this example.
## Step Four: Stop load
We will finish our example by stopping the user load.
In the terminal where we created the container with the `busybox` image, we will terminate
the infinite ``while`` loop by sending a `SIGINT` signal,
which can be done using the `<Ctrl> + C` key combination.
Then we will verify the resulting state:
```shell
$ kubectl get hpa
NAME REFERENCE TARGET CURRENT MINPODS MAXPODS AGE
php-apache Deployment/php-apache/scale 50% 0% 1 10 11m
$ kubectl get deployment php-apache
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
php-apache 1 1 1 1 27m
```
As we see, in the presented case CPU utilization dropped to 0, and the number of replicas dropped to 1.
**Warning!** Sometimes it may take a few steps for the number of replicas to drop.
## Appendix: Other possible scenarios
### Creating the autoscaler from a .yaml file
Instead of using the `kubectl autoscale` command, we can use the [hpa-php-apache.yaml](/docs/user-guide/horizontal-pod-autoscaling/hpa-php-apache.yaml) file, which looks like this:
```yaml
apiVersion: extensions/v1beta1
kind: HorizontalPodAutoscaler
metadata:
name: php-apache
namespace: default
spec:
scaleRef:
kind: Deployment
name: php-apache
subresource: scale
minReplicas: 1
maxReplicas: 10
cpuUtilization:
targetPercentage: 50
```
We will create the autoscaler by executing the following command:
```shell
$ kubectl create -f docs/user-guide/horizontal-pod-autoscaling/hpa-php-apache.yaml
horizontalpodautoscaler "php-apache" created
```

View File

@ -28,7 +28,7 @@ Typically, services and pods have IPs only routable by the cluster network. All
An Ingress is a collection of rules that allow inbound connections to reach the cluster services.
```
internet
internet
|
[ Ingress ]
--|-----|--
@ -39,12 +39,28 @@ It can be configured to give services externally-reachable urls, load balance tr
## Prerequisites
Before you start using the Ingress resource, there are a few things you should understand:
Before you start using the Ingress resource, there are a few things you should understand. The Ingress is a beta resource, not available in any Kubernetes release prior to 1.1. You need an Ingress controller to satisfy an Ingress; simply creating the resource will have no effect.
* The Ingress is a beta resource, not available in any Kubernetes release prior to 1.1.
* You need an Ingress controller to satisfy an Ingress. Simply creating the resource will have no effect.
* On GCE/GKE there should be an [L7 cluster addon](https://releases.k8s.io/{{page.githubbranch}}/cluster/addons/cluster-loadbalancing/glbc/README.md#prerequisites), on other platforms you either need to write your own or [deploy an existing controller](https://github.com/kubernetes/contrib/tree/master/Ingress) as a pod.
* The resource currently does not support HTTPS, but will do so before it leaves beta.
On GCE/GKE there should be an [L7 cluster addon](https://github.com/kubernetes/contrib/blob/master/ingress/controllers/gce/README.md), deployed into the `kube-system` namespace:
```shell
$ kubectl get pods --namespace=kube-system -l name=glbc
NAME READY STATUS RESTARTS AGE
l7-lb-controller-v0.6.0-chnan 2/2 Running 0 1d
```
Make sure you review the [beta limitations](https://github.com/kubernetes/contrib/tree/master/ingress/controllers/gce/BETA_LIMITATIONS.md) of this controller. In particular, you need to create a single firewall rule on your cloud provider to allow health checks. On GKE this would be:
```shell
$ export TAG=$(basename `gcloud container clusters describe ${CLUSTER_NAME} --zone ${ZONE} | grep gke | awk '{print $2}'` | sed -e s/group/node/)
$ export NODE_PORT=$(kubectl get -o jsonpath="{.spec.ports[0].nodePort}" services echoheaders)
$ gcloud compute firewall-rules create allow-130-211-0-0-22 \
--source-ranges 130.211.0.0/22 \
--target-tags $TAG \
--allow tcp:$NODE_PORT
```
In environments other than GCE/GKE, you need to [deploy a controller](https://github.com/kubernetes/contrib/tree/master/ingress/controllers) as a pod.
## The Ingress Resource
@ -75,11 +91,11 @@ __Lines 8-9__: Each http rule contains the following information: A host (eg: fo
__Lines 10-12__: A backend is a service:port combination as described in the [services doc](/docs/user-guide/services). Ingress traffic is typically sent directly to the endpoints matching a backend.
__Global Parameters__: For the sake of simplicity the example Ingress has no global parameters; see the [api-reference](https://releases.k8s.io/{{page.githubbranch}}/pkg/apis/extensions/v1beta1/types.go) for a full definition of the resource. One can specify a global default backend; in its absence, requests that don't match a path in the spec are sent to the default backend of the Ingress controller. Though the Ingress resource doesn't support HTTPS yet, security configs would also be global.
__Global Parameters__: For the sake of simplicity the example Ingress has no global parameters; see the [api-reference](https://releases.k8s.io/{{page.githubbranch}}/pkg/apis/extensions/v1beta1/types.go) for a full definition of the resource. One can specify a global default backend; in its absence, requests that don't match a path in the spec are sent to the default backend of the Ingress controller.
## Ingress controllers
In order for the Ingress resource to work, the cluster must have an Ingress controller running. This is unlike other types of controllers, which typically run as part of the `kube-controller-manager` binary, and which are typically started automatically as part of cluster creation. You need to choose the ingress controller implementation that is the best fit for your cluster, or implement one. Examples and instructions can be found [here](https://github.com/kubernetes/contrib/tree/master/Ingress).
In order for the Ingress resource to work, the cluster must have an Ingress controller running. This is unlike other types of controllers, which typically run as part of the `kube-controller-manager` binary, and which are typically started automatically as part of cluster creation. You need to choose the ingress controller implementation that is the best fit for your cluster, or implement one. Examples and instructions can be found [here](https://github.com/kubernetes/contrib/tree/master/ingress/controllers).
## Types of Ingress
@ -177,6 +193,37 @@ spec:
__Default Backends__: An Ingress with no rules, like the one shown in the previous section, sends all traffic to a single default backend. You can use the same technique to tell a loadbalancer where to find your website's 404 page, by specifying a set of rules *and* a default backend. Traffic is routed to your default backend if none of the Hosts in your Ingress match the Host in the request header, and/or none of the paths match the URL of the request.
### TLS
You can secure an Ingress by specifying a [secret](/docs/user-guide/secrets) that contains a TLS private key and certificate. Currently the Ingress only supports a single TLS port, 443, and assumes TLS termination. If the TLS configuration section in an Ingress specifies different hosts, they will be multiplexed on the same port according to the hostname specified through the SNI TLS extension (provided the Ingress controller supports SNI). The TLS secret must contain keys named `tls.crt` and `tls.key` that contain the certificate and private key to use for TLS, eg:
```yaml
apiVersion: v1
data:
tls.crt: base64 encoded cert
tls.key: base64 encoded key
kind: Secret
metadata:
name: testsecret
namespace: default
type: Opaque
```
Referencing this secret in an Ingress will tell the Ingress controller to secure the channel from the client to the loadbalancer using TLS:
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: no-rules-map
spec:
tls:
secretName: testsecret
backend:
serviceName: s1
servicePort: 80
```
### Loadbalancing
An Ingress controller is bootstrapped with some loadbalancing policy settings that it applies to all Ingress, such as the loadbalancing algorithm, backend weight scheme etc. More advanced loadbalancing concepts (eg: persistent sessions, dynamic weights) are not yet exposed through the Ingress. You can still get these features through the [service loadbalancer](https://github.com/kubernetes/contrib/tree/master/service-loadbalancer). With time, we plan to distill loadbalancing patterns that are applicable cross platform into the Ingress resource.
@ -234,12 +281,12 @@ You can achieve the same by invoking `kubectl replace -f` on a modified Ingress
## Future Work
* Various modes of HTTPS/TLS support (edge termination, sni etc)
* Various modes of HTTPS/TLS support (eg: SNI, re-encryption)
* Requesting an IP or Hostname via claims
* Combining L4 and L7 Ingress
* More Ingress controllers
Please track the [L7 and Ingress proposal](https://github.com/kubernetes/kubernetes/pull/12827) for more details on the evolution of the resource, and the [Ingress sub-repository](https://github.com/kubernetes/contrib/tree/master/Ingress) for more details on the evolution of various Ingress controllers.
Please track the [L7 and Ingress proposal](https://github.com/kubernetes/kubernetes/pull/12827) for more details on the evolution of the resource, and the [Ingress sub-repository](https://github.com/kubernetes/contrib/tree/master/ingress) for more details on the evolution of various Ingress controllers.
## Alternatives
@ -248,4 +295,4 @@ You can expose a Service in multiple ways that don't directly involve the Ingres
* Use [Service.Type=LoadBalancer](/docs/user-guide/services/#type-loadbalancer)
* Use [Service.Type=NodePort](/docs/user-guide/services/#type-nodeport)
* Use a [Port Proxy](https://github.com/kubernetes/contrib/tree/master/for-demos/proxy-to-service)
* Deploy the [Service loadbalancer](https://github.com/kubernetes/contrib/tree/master/service-loadbalancer). This allows you to share a single IP among multiple Services and achieve more advanced loadbalancing through Service Annotations.
* Deploy the [Service loadbalancer](https://github.com/kubernetes/contrib/tree/master/service-loadbalancer). This allows you to share a single IP among multiple Services and achieve more advanced loadbalancing through Service Annotations.

View File

@ -10,13 +10,13 @@ your pods. But there are a number of ways to get even more information about you
## Using `kubectl describe pod` to fetch details about pods
For this example we'll use a ReplicationController to create two pods, similar to the earlier example.
For this example we'll use a Deployment to create two pods, similar to the earlier example.
```yaml
apiVersion: v1
kind: ReplicationController
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: my-nginx
name: nginx-deployment
spec:
replicas: 2
template:
@ -35,53 +35,67 @@ spec:
- containerPort: 80
```
Copy this to a file named *./my-nginx-dep.yaml*.
```shell
$ kubectl create -f ./my-nginx-rc.yaml
replicationcontrollers/my-nginx
$ kubectl create -f ./my-nginx-dep.yaml
deployment "nginx-deployment" created
```
```shell
$ kubectl get pods
NAME READY REASON RESTARTS AGE
my-nginx-gy1ij 1/1 Running 0 1m
my-nginx-yv5cn 1/1 Running 0 1m
NAME READY STATUS RESTARTS AGE
nginx-deployment-1006230814-6winp 1/1 Running 0 11s
nginx-deployment-1006230814-fmgu3 1/1 Running 0 11s
```
We can retrieve a lot more information about each of these pods using `kubectl describe pod`. For example:
```shell
$ kubectl describe pod my-nginx-gy1ij
Name: my-nginx-gy1ij
Image(s): nginx
Node: kubernetes-node-y3vk/10.240.154.168
Labels: app=nginx
Status: Running
Reason:
Message:
IP: 10.244.1.4
Replication Controllers: my-nginx (2/2 replicas created)
$ kubectl describe pod nginx-deployment-1006230814-6winp
Name: nginx-deployment-1006230814-6winp
Namespace: default
Node: kubernetes-node-wul5/10.240.0.9
Start Time: Thu, 24 Mar 2016 01:39:49 +0000
Labels: app=nginx,pod-template-hash=1006230814
Status: Running
IP: 10.244.0.6
Controllers: ReplicaSet/nginx-deployment-1006230814
Containers:
nginx:
Image: nginx
Container ID: docker://90315cc9f513c724e9957a4788d3e625a078de84750f244a40f97ae355eb1149
Image: nginx
Image ID: docker://6f62f48c4e55d700cf3eb1b5e33fa051802986b77b874cc351cce539e5163707
Port: 80/TCP
QoS Tier:
cpu: Guaranteed
memory: Guaranteed
Limits:
cpu: 500m
cpu: 500m
memory: 128Mi
Requests:
memory: 128Mi
cpu: 500m
State: Running
Started: Thu, 09 Jul 2015 15:33:07 -0700
Started: Thu, 24 Mar 2016 01:39:51 +0000
Ready: True
Restart Count: 0
Environment Variables:
Conditions:
Type Status
Ready True
Volumes:
default-token-4bcbi:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-4bcbi
Events:
FirstSeen LastSeen Count From SubobjectPath Reason Message
Thu, 09 Jul 2015 15:32:58 -0700 Thu, 09 Jul 2015 15:32:58 -0700 1 {scheduler } scheduled Successfully assigned my-nginx-gy1ij to kubernetes-node-y3vk
Thu, 09 Jul 2015 15:32:58 -0700 Thu, 09 Jul 2015 15:32:58 -0700 1 {kubelet kubernetes-node-y3vk} implicitly required container POD pulled Pod container image "gcr.io/google_containers/pause:0.8.0" already present on machine
Thu, 09 Jul 2015 15:32:58 -0700 Thu, 09 Jul 2015 15:32:58 -0700 1 {kubelet kubernetes-node-y3vk} implicitly required container POD created Created with docker id cd1644065066
Thu, 09 Jul 2015 15:32:58 -0700 Thu, 09 Jul 2015 15:32:58 -0700 1 {kubelet kubernetes-node-y3vk} implicitly required container POD started Started with docker id cd1644065066
Thu, 09 Jul 2015 15:33:06 -0700 Thu, 09 Jul 2015 15:33:06 -0700 1 {kubelet kubernetes-node-y3vk} spec.containers{nginx} pulled Successfully pulled image "nginx"
Thu, 09 Jul 2015 15:33:06 -0700 Thu, 09 Jul 2015 15:33:06 -0700 1 {kubelet kubernetes-node-y3vk} spec.containers{nginx} created Created with docker id 56d7a7b14dac
Thu, 09 Jul 2015 15:33:07 -0700 Thu, 09 Jul 2015 15:33:07 -0700 1 {kubelet kubernetes-node-y3vk} spec.containers{nginx} started Started with docker id 56d7a7b14dac
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
54s 54s 1 {default-scheduler } Normal Scheduled Successfully assigned nginx-deployment-1006230814-6winp to kubernetes-node-wul5
54s 54s 1 {kubelet kubernetes-node-wul5} spec.containers{nginx} Normal Pulling pulling image "nginx"
53s 53s 1 {kubelet kubernetes-node-wul5} spec.containers{nginx} Normal Pulled Successfully pulled image "nginx"
53s 53s 1 {kubelet kubernetes-node-wul5} spec.containers{nginx} Normal Created Created container with docker id 90315cc9f513
53s 53s 1 {kubelet kubernetes-node-wul5} spec.containers{nginx} Normal Started Started container with docker id 90315cc9f513
```
Here you can see configuration information about the container(s) and Pod (labels, resource requirements, etc.), as well as status information about the container(s) and Pod (state, readiness, restart count, events, etc.).
@ -98,48 +112,59 @@ Lastly, you see a log of recent events related to your Pod. The system compresse
## Example: debugging Pending Pods
A common scenario that you can detect using events is when you've created a Pod that won't fit on any node. For example, the Pod might request more resources than are free on any node, or it might specify a label selector that doesn't match any nodes. Let's say we created the previous Replication Controller with 5 replicas (instead of 2) and requesting 600 millicores instead of 500, on a four-node cluster where each (virtual) machine has 1 CPU. In that case one of the Pods will not be able to schedule. (Note that because of the cluster addon pods such as fluentd, skydns, etc., that run on each node, if we requested 1000 millicores then none of the Pods would be able to schedule.)
A common scenario that you can detect using events is when you've created a Pod that won't fit on any node. For example, the Pod might request more resources than are free on any node, or it might specify a label selector that doesn't match any nodes. Let's say we created the previous Deployment with 5 replicas (instead of 2) and requesting 600 millicores instead of 500, on a four-node cluster where each (virtual) machine has 1 CPU. In that case one of the Pods will not be able to schedule. (Note that because of the cluster addon pods such as fluentd, skydns, etc., that run on each node, if we requested 1000 millicores then none of the Pods would be able to schedule.)
```shell
$ kubectl get pods
NAME READY REASON RESTARTS AGE
my-nginx-9unp9 0/1 Pending 0 8s
my-nginx-b7zs9 0/1 Running 0 8s
my-nginx-i595c 0/1 Running 0 8s
my-nginx-iichp 0/1 Running 0 8s
my-nginx-tc2j9 0/1 Running 0 8s
NAME READY STATUS RESTARTS AGE
nginx-deployment-1006230814-6winp 1/1 Running 0 7m
nginx-deployment-1006230814-fmgu3 1/1 Running 0 7m
nginx-deployment-1370807587-6ekbw 1/1 Running 0 1m
nginx-deployment-1370807587-fg172 0/1 Pending 0 1m
nginx-deployment-1370807587-fz9sd 0/1 Pending 0 1m
```
To find out why the my-nginx-9unp9 pod is not running, we can use `kubectl describe pod` on the pending Pod and look at its events:
To find out why the nginx-deployment-1370807587-fz9sd pod is not running, we can use `kubectl describe pod` on the pending Pod and look at its events:
```shell
$ kubectl describe pod my-nginx-9unp9
Name: my-nginx-9unp9
Image(s): nginx
Node: /
Labels: app=nginx
Status: Pending
Reason:
Message:
IP:
Replication Controllers: my-nginx (5/5 replicas created)
Containers:
nginx:
Image: nginx
Limits:
cpu: 600m
memory: 128Mi
State: Waiting
Ready: False
Restart Count: 0
Events:
FirstSeen LastSeen Count From SubobjectPath Reason Message
Thu, 09 Jul 2015 23:56:21 -0700 Fri, 10 Jul 2015 00:01:30 -0700 21 {scheduler } failedScheduling Failed for reason PodFitsResources and possibly others
$ kubectl describe pod nginx-deployment-1370807587-fz9sd
Name: nginx-deployment-1370807587-fz9sd
Namespace: default
Node: /
Labels: app=nginx,pod-template-hash=1370807587
Status: Pending
IP:
Controllers: ReplicaSet/nginx-deployment-1370807587
Containers:
nginx:
Image: nginx
Port: 80/TCP
QoS Tier:
memory: Guaranteed
cpu: Guaranteed
Limits:
cpu: 1
memory: 128Mi
Requests:
cpu: 1
memory: 128Mi
Environment Variables:
Volumes:
default-token-4bcbi:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-4bcbi
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
1m 48s 7 {default-scheduler } Warning FailedScheduling pod (nginx-deployment-1370807587-fz9sd) failed to fit in any node
fit failure on node (kubernetes-node-6ta5): Node didn't have enough resource: CPU, requested: 1000, used: 1420, capacity: 2000
fit failure on node (kubernetes-node-wul5): Node didn't have enough resource: CPU, requested: 1000, used: 1100, capacity: 2000
```
Here you can see the event generated by the scheduler saying that the Pod failed to schedule for reason `PodFitsResources` (and possibly others). `PodFitsResources` means there were not enough resources for the Pod on any of the nodes. Due to the way the event is generated, there may be other reasons as well, hence "and possibly others."
Here you can see the event generated by the scheduler saying that the Pod failed to schedule for reason `FailedScheduling` (and possibly others). The message tells us that there were not enough resources for the Pod on any of the nodes.
To correct this situation, you can use `kubectl scale` to update your Replication Controller to specify four or fewer replicas. (Or you could just leave the one Pod pending, which is harmless.)
To correct this situation, you can use `kubectl scale` to update your Deployment to specify four or fewer replicas. (Or you could just leave the one Pod pending, which is harmless.)
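For example, using the Deployment name from this walkthrough:
```shell
$ kubectl scale deployment nginx-deployment --replicas=4
```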
Events such as the ones you saw at the end of `kubectl describe pod` are persisted in etcd and provide high-level information on what is happening in the cluster. To list all events you can use
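For instance, to list events in the current namespace (a sketch; see the `--all-namespaces` note below):
```shell
$ kubectl get events
```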
@ -158,65 +183,75 @@ To see events from all namespaces, you can use the `--all-namespaces` argument.
In addition to `kubectl describe pod`, another way to get extra information about a pod (beyond what is provided by `kubectl get pod`) is to pass the `-o yaml` output format flag to `kubectl get pod`. This will give you, in YAML format, even more information than `kubectl describe pod`--essentially all of the information the system has about the Pod. Here you will see things like annotations (which are key-value metadata without the label restrictions, that is used internally by Kubernetes system components), restart policy, ports, and volumes.
```yaml
$ kubectl get pod my-nginx-i595c -o yaml
$ kubectl get pod nginx-deployment-1006230814-6winp -o yaml
apiVersion: v1
kind: Pod
metadata:
annotations:
kubernetes.io/created-by: '{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"default","name":"my-nginx","uid":"c555c14f-26d0-11e5-99cb-42010af00e4b","apiVersion":"v1","resourceVersion":"26174"}}'
creationTimestamp: 2015-07-10T06:56:21Z
generateName: my-nginx-
kubernetes.io/created-by: |
{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"nginx-deployment-1006230814","uid":"4c84c175-f161-11e5-9a78-42010af00005","apiVersion":"extensions","resourceVersion":"133434"}}
creationTimestamp: 2016-03-24T01:39:50Z
generateName: nginx-deployment-1006230814-
labels:
app: nginx
name: my-nginx-i595c
pod-template-hash: "1006230814"
name: nginx-deployment-1006230814-6winp
namespace: default
resourceVersion: "26243"
selfLink: /api/v1/namespaces/default/pods/my-nginx-i595c
uid: c558e44b-26d0-11e5-99cb-42010af00e4b
resourceVersion: "133447"
selfLink: /api/v1/namespaces/default/pods/nginx-deployment-1006230814-6winp
uid: 4c879808-f161-11e5-9a78-42010af00005
spec:
containers:
- image: nginx
imagePullPolicy: IfNotPresent
imagePullPolicy: Always
name: nginx
ports:
- containerPort: 80
protocol: TCP
resources:
limits:
cpu: 600m
cpu: 500m
memory: 128Mi
requests:
cpu: 500m
memory: 128Mi
terminationMessagePath: /dev/termination-log
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: default-token-zkhkk
name: default-token-4bcbi
readOnly: true
dnsPolicy: ClusterFirst
nodeName: kubernetes-node-u619
nodeName: kubernetes-node-wul5
restartPolicy: Always
securityContext: {}
serviceAccount: default
serviceAccountName: default
terminationGracePeriodSeconds: 30
volumes:
- name: default-token-zkhkk
- name: default-token-4bcbi
secret:
secretName: default-token-zkhkk
secretName: default-token-4bcbi
status:
conditions:
- status: "True"
- lastProbeTime: null
lastTransitionTime: 2016-03-24T01:39:51Z
status: "True"
type: Ready
containerStatuses:
- containerID: docker://9506ace0eb91fbc31aef1d249e0d1d6d6ef5ebafc60424319aad5b12e3a4e6a9
- containerID: docker://90315cc9f513c724e9957a4788d3e625a078de84750f244a40f97ae355eb1149
image: nginx
imageID: docker://319d2015d149943ff4d2a20ddea7d7e5ce06a64bbab1792334c0d3273bbbff1e
imageID: docker://6f62f48c4e55d700cf3eb1b5e33fa051802986b77b874cc351cce539e5163707
lastState: {}
name: nginx
ready: true
restartCount: 0
state:
running:
startedAt: 2015-07-10T06:56:28Z
hostIP: 10.240.112.234
startedAt: 2016-03-24T01:39:51Z
hostIP: 10.240.0.9
phase: Running
podIP: 10.244.3.4
startTime: 2015-07-10T06:56:21Z
podIP: 10.244.0.6
startTime: 2016-03-24T01:39:49Z
```
## Example: debugging a down/unreachable node

View File

@ -11,7 +11,7 @@ As pods successfully complete, the _job_ tracks the successful completions. Whe
of successful completions is reached, the job itself is complete. Deleting a Job will cleanup the
pods it created.
A simple case is to create 1 Job object in order to reliably run one Pod to completion.
A simple case is to create one Job object in order to reliably run one Pod to completion.
The Job object will start a new Pod if the first pod fails or is deleted (for example
due to a node hardware failure or a node reboot).
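As a minimal sketch of that simple case (the manifest and Job names here are illustrative, not taken from this page):
```shell
# Create a Job from a manifest, then watch it run to completion:
$ kubectl create -f ./job.yaml
$ kubectl get jobs
$ kubectl describe jobs/<job-name>
```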
@ -88,9 +88,9 @@ the same schema as a [pod](/docs/user-guide/pods), except it is nested and does
`kind`.
In addition to required fields for a Pod, a pod template in a job must specify appropriate
labels (see [pod selector](#pod-selector) and an appropriate restart policy.
labels (see [pod selector](#pod-selector)) and an appropriate restart policy.
Only a [`RestartPolicy`](/docs/user-guide/pod-states) equal to `Never` or `OnFailure` are allowed.
Only a [`RestartPolicy`](/docs/user-guide/pod-states/#restartpolicy) equal to `Never` or `OnFailure` are allowed.
### Pod Selector
@ -331,15 +331,6 @@ driver, and then cleans up.
An advantage of this approach is that the overall process gets the completion guarantee of a Job
object, but complete control over what pods are created and how work is assigned to them.
## Caveats
Job objects are in the [`extensions` API Group](/docs/api/#api-groups).
Job objects have [API version `v1beta1`](/docs/api/#api-versioning). Beta objects may
undergo changes to their schema and/or semantics in future software releases, but
similar functionality will be supported.
## Future work
Support for creating Jobs at specified times/dates (i.e. cron) is expected in the next minor
release.
Support for creating Jobs at specified times/dates (i.e. cron) is expected in [1.3](https://github.com/kubernetes/kubernetes/pull/11980).

View File

@ -1,25 +0,0 @@
---
---
This document summarizes known issues with existing Kubernetes releases.
Please consult this document before filing new bugs.
### Release 1.0.1
* `exec` liveness/readiness probes leak resources due to Docker exec leaking resources (#10659)
* `docker load` sometimes hangs which causes the `kube-apiserver` not to start. Restarting the Docker daemon should fix the issue (#10868)
* The kubelet on the master node doesn't register with the `kube-apiserver` so statistics aren't collected for master daemons (#10891)
* Heapster and InfluxDB both leak memory (#10653)
* Wrong node cpu/memory limit metrics from Heapster (https://github.com/GoogleCloudPlatform/heapster/issues/399)
* Services that set `type=LoadBalancer` can not use port `10250` because of Google Compute Engine firewall limitations
* Add-on services can not be created or deleted via `kubectl` or the Kubernetes API (#11435)
* If a pod with a GCE PD is created and deleted in rapid succession, it may fail to attach/mount correctly leaving PD data inaccessible (or corrupted in the worst case). (http://issue.k8s.io/11231#issuecomment-122049113)
* Suggested temporary work around: introduce a 1-2 minute delay between deleting and recreating a pod with a PD on the same node.
* Explicit errors while detaching GCE PD could prevent PD from ever being detached (#11321)
* GCE PDs may sometimes fail to attach (#11302)
* If multiple Pods use the same RBD volume in read-write mode, it is possible data on the RBD volume could get corrupted. This problem has been found in environments where both apiserver and etcd rebooted and Pods were redistributed.
* A workaround is to ensure there is no other Ceph client using the RBD volume before mapping RBD image in read-write mode. For example, `rados -p poolname listwatchers image_name.rbd` can list RBD clients that are mapping the image.

View File

@ -1,9 +1,5 @@
---
---
<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->
<!-- END MUNGE: UNVERSIONED_WARNING -->
## kubectl
@ -24,7 +20,6 @@ kubectl
```
--alsologtostderr[=false]: log to standard error as well as files
--api-version="": The API version to use when talking to the server
--certificate-authority="": Path to a cert. file for the certificate authority.
--client-certificate="": Path to a client certificate file for TLS.
--client-key="": Path to a client key file for TLS.
@ -50,36 +45,37 @@ kubectl
### SEE ALSO
* [kubectl annotate](../kubectl_annotate) - Update the annotations on a resource
* [kubectl api-versions](../kubectl_api-versions) - Print the supported API versions on the server, in the form of "group/version".
* [kubectl apply](../kubectl_apply) - Apply a configuration to a resource by filename or stdin
* [kubectl attach](../kubectl_attach) - Attach to a running container.
* [kubectl autoscale](../kubectl_autoscale) - Auto-scale a deployment or replication controller
* [kubectl cluster-info](../kubectl_cluster-info) - Display cluster info
* [kubectl config](../kubectl_config) - config modifies kubeconfig files
* [kubectl convert](../kubectl_convert) - Convert config files between different API versions
* [kubectl cordon](../kubectl_cordon) - Mark node as unschedulable
* [kubectl create](../kubectl_create) - Create a resource by filename or stdin
* [kubectl delete](../kubectl_delete) - Delete resources by filenames, stdin, resources and names, or by resources and label selector.
* [kubectl describe](../kubectl_describe) - Show details of a specific resource or group of resources
* [kubectl drain](../kubectl_drain) - Drain node in preparation for maintenance
* [kubectl edit](../kubectl_edit) - Edit a resource on the server
* [kubectl exec](../kubectl_exec) - Execute a command in a container.
* [kubectl explain](../kubectl_explain) - Documentation of resources.
* [kubectl expose](../kubectl_expose) - Take a replication controller, service or pod and expose it as a new Kubernetes Service
* [kubectl get](../kubectl_get) - Display one or many resources
* [kubectl label](../kubectl_label) - Update the labels on a resource
* [kubectl logs](../kubectl_logs) - Print the logs for a container in a pod.
* [kubectl namespace](../kubectl_namespace) - SUPERSEDED: Set and view the current Kubernetes namespace
* [kubectl patch](../kubectl_patch) - Update field(s) of a resource using strategic merge patch.
* [kubectl port-forward](../kubectl_port-forward) - Forward one or more local ports to a pod.
* [kubectl proxy](../kubectl_proxy) - Run a proxy to the Kubernetes API server
* [kubectl replace](../kubectl_replace) - Replace a resource by filename or stdin.
* [kubectl rolling-update](../kubectl_rolling-update) - Perform a rolling update of the given ReplicationController.
* [kubectl rollout](../kubectl_rollout) - rollout manages a deployment
* [kubectl run](../kubectl_run) - Run a particular image on the cluster.
* [kubectl scale](../kubectl_scale) - Set a new size for a Replication Controller, Job, or Deployment.
* [kubectl uncordon](../kubectl_uncordon) - Mark node as schedulable
* [kubectl version](../kubectl_version) - Print the client and server version information.
* [kubectl annotate](/docs/user-guide/kubectl/kubectl_annotate/) - Update the annotations on a resource
* [kubectl api-versions](/docs/user-guide/kubectl/kubectl_api-versions/) - Print the supported API versions on the server, in the form of "group/version".
* [kubectl apply](/docs/user-guide/kubectl/kubectl_apply/) - Apply a configuration to a resource by filename or stdin
* [kubectl attach](/docs/user-guide/kubectl/kubectl_attach/) - Attach to a running container.
* [kubectl autoscale](/docs/user-guide/kubectl/kubectl_autoscale/) - Auto-scale a deployment or replication controller
* [kubectl cluster-info](/docs/user-guide/kubectl/kubectl_cluster-info/) - Display cluster info
* [kubectl config](/docs/user-guide/kubectl/kubectl_config/) - config modifies kubeconfig files
* [kubectl convert](/docs/user-guide/kubectl/kubectl_convert/) - Convert config files between different API versions
* [kubectl cordon](/docs/user-guide/kubectl/kubectl_cordon/) - Mark node as unschedulable
* [kubectl create](/docs/user-guide/kubectl/kubectl_create/) - Create a resource by filename or stdin
* [kubectl delete](/docs/user-guide/kubectl/kubectl_delete/) - Delete resources by filenames, stdin, resources and names, or by resources and label selector.
* [kubectl describe](/docs/user-guide/kubectl/kubectl_describe/) - Show details of a specific resource or group of resources
* [kubectl drain](/docs/user-guide/kubectl/kubectl_drain/) - Drain node in preparation for maintenance
* [kubectl edit](/docs/user-guide/kubectl/kubectl_edit/) - Edit a resource on the server
* [kubectl exec](/docs/user-guide/kubectl/kubectl_exec/) - Execute a command in a container.
* [kubectl explain](/docs/user-guide/kubectl/kubectl_explain/) - Documentation of resources.
* [kubectl expose](/docs/user-guide/kubectl/kubectl_expose/) - Take a replication controller, service or pod and expose it as a new Kubernetes Service
* [kubectl get](/docs/user-guide/kubectl/kubectl_get/) - Display one or many resources
* [kubectl label](/docs/user-guide/kubectl/kubectl_label/) - Update the labels on a resource
* [kubectl logs](/docs/user-guide/kubectl/kubectl_logs/) - Print the logs for a container in a pod.
* [kubectl namespace](/docs/user-guide/kubectl/kubectl_namespace/) - SUPERSEDED: Set and view the current Kubernetes namespace
* [kubectl patch](/docs/user-guide/kubectl/kubectl_patch/) - Update field(s) of a resource using strategic merge patch.
* [kubectl port-forward](/docs/user-guide/kubectl/kubectl_port-forward/) - Forward one or more local ports to a pod.
* [kubectl proxy](/docs/user-guide/kubectl/kubectl_proxy/) - Run a proxy to the Kubernetes API server
* [kubectl replace](/docs/user-guide/kubectl/kubectl_replace/) - Replace a resource by filename or stdin.
* [kubectl rolling-update](/docs/user-guide/kubectl/kubectl_rolling-update/) - Perform a rolling update of the given ReplicationController.
* [kubectl rollout](/docs/user-guide/kubectl/kubectl_rollout/) - rollout manages a deployment
* [kubectl run](/docs/user-guide/kubectl/kubectl_run/) - Run a particular image on the cluster.
* [kubectl scale](/docs/user-guide/kubectl/kubectl_scale/) - Set a new size for a Replication Controller, Job, or Deployment.
* [kubectl uncordon](/docs/user-guide/kubectl/kubectl_uncordon/) - Mark node as schedulable
* [kubectl version](/docs/user-guide/kubectl/kubectl_version/) - Print the client and server version information.
###### Auto generated by spf13/cobra on 2-Mar-2016
###### Auto generated by spf13/cobra on 19-Jan-2016

View File

@ -1,9 +1,5 @@
---
---
<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->
<!-- END MUNGE: UNVERSIONED_WARNING -->
## kubectl annotate
@ -59,7 +55,7 @@ kubectl annotate pods foo description-
-f, --filename=[]: Filename, directory, or URL to a file identifying the resource to update the annotation
--no-headers[=false]: When using the default output, don't print headers.
-o, --output="": Output format. One of: json|yaml|wide|name|go-template=...|go-template-file=...|jsonpath=...|jsonpath-file=... See golang template [http://golang.org/pkg/text/template/#pkg-overview] and jsonpath template [http://releases.k8s.io/release-1.2/docs/user-guide/jsonpath.md].
--output-version="": Output the formatted object with the given version (default api-version).
--output-version="": Output the formatted object with the given group version (for ex: 'extensions/v1beta1').
--overwrite[=false]: If true, allow annotations to be overwritten, otherwise reject annotation updates that overwrite existing annotations.
--record[=false]: Record current kubectl command in the resource annotation.
--resource-version="": If non-empty, the annotation update will only succeed if this is the current resource-version for the object. Only valid when specifying a single resource.
@ -74,7 +70,6 @@ kubectl annotate pods foo description-
```
--alsologtostderr[=false]: log to standard error as well as files
--api-version="": The API version to use when talking to the server
--certificate-authority="": Path to a cert. file for the certificate authority.
--client-certificate="": Path to a client certificate file for TLS.
--client-key="": Path to a client key file for TLS.
@ -100,17 +95,6 @@ kubectl annotate pods foo description-
### SEE ALSO
* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager
* [kubectl](/docs/user-guide/kubectl/kubectl/) - kubectl controls the Kubernetes cluster manager
###### Auto generated by spf13/cobra on 29-Feb-2016
<!-- BEGIN MUNGE: IS_VERSIONED -->
<!-- TAG IS_VERSIONED -->
<!-- END MUNGE: IS_VERSIONED -->
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/user-guide/kubectl/kubectl_annotate.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
###### Auto generated by spf13/cobra on 11-Mar-2016

View File

@ -1,10 +1,5 @@
---
---
<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->
<!-- END MUNGE: UNVERSIONED_WARNING -->
## kubectl api-versions
Print the supported API versions on the server, in the form of "group/version".
@ -22,7 +17,6 @@ kubectl api-versions
```
--alsologtostderr[=false]: log to standard error as well as files
--api-version="": The API version to use when talking to the server
--certificate-authority="": Path to a cert. file for the certificate authority.
--client-certificate="": Path to a client certificate file for TLS.
--client-key="": Path to a client key file for TLS.
@ -48,17 +42,6 @@ kubectl api-versions
### SEE ALSO
* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager
* [kubectl](/docs/user-guide/kubectl/kubectl/) - kubectl controls the Kubernetes cluster manager
###### Auto generated by spf13/cobra on 8-Dec-2015
<!-- BEGIN MUNGE: IS_VERSIONED -->
<!-- TAG IS_VERSIONED -->
<!-- END MUNGE: IS_VERSIONED -->
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/user-guide/kubectl/kubectl_api-versions.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
###### Auto generated by spf13/cobra on 2-Mar-2016

docs/user-guide/kubectl/kubectl_apply.md
View File

@ -1,9 +1,5 @@
---
---
<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->
<!-- END MUNGE: UNVERSIONED_WARNING -->
## kubectl apply
@ -45,7 +41,6 @@ cat pod.json | kubectl apply -f -
```
--alsologtostderr[=false]: log to standard error as well as files
--api-version="": The API version to use when talking to the server
--certificate-authority="": Path to a cert. file for the certificate authority.
--client-certificate="": Path to a client certificate file for TLS.
--client-key="": Path to a client key file for TLS.
@ -71,17 +66,6 @@ cat pod.json | kubectl apply -f -
### SEE ALSO
* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager
* [kubectl](/docs/user-guide/kubectl/kubectl/) - kubectl controls the Kubernetes cluster manager
###### Auto generated by spf13/cobra on 29-Feb-2016
<!-- BEGIN MUNGE: IS_VERSIONED -->
<!-- TAG IS_VERSIONED -->
<!-- END MUNGE: IS_VERSIONED -->
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/user-guide/kubectl/kubectl_apply.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
###### Auto generated by spf13/cobra on 2-Mar-2016

docs/user-guide/kubectl/kubectl_attach.md
View File

@ -1,9 +1,5 @@
---
---
<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->
<!-- END MUNGE: UNVERSIONED_WARNING -->
## kubectl attach
@ -44,7 +40,6 @@ kubectl attach 123456-7890 -c ruby-container -i -t
```
--alsologtostderr[=false]: log to standard error as well as files
--api-version="": The API version to use when talking to the server
--certificate-authority="": Path to a cert. file for the certificate authority.
--client-certificate="": Path to a client certificate file for TLS.
--client-key="": Path to a client key file for TLS.
@ -70,17 +65,6 @@ kubectl attach 123456-7890 -c ruby-container -i -t
### SEE ALSO
* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager
* [kubectl](/docs/user-guide/kubectl/kubectl/) - kubectl controls the Kubernetes cluster manager
###### Auto generated by spf13/cobra on 29-Feb-2016
<!-- BEGIN MUNGE: IS_VERSIONED -->
<!-- TAG IS_VERSIONED -->
<!-- END MUNGE: IS_VERSIONED -->
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/user-guide/kubectl/kubectl_attach.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
###### Auto generated by spf13/cobra on 2-Mar-2016

docs/user-guide/kubectl/kubectl_autoscale.md
View File

@ -1,10 +1,5 @@
---
---
<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->
<!-- END MUNGE: UNVERSIONED_WARNING -->
## kubectl autoscale
Auto-scale a deployment or replication controller
@ -43,7 +38,7 @@ kubectl autoscale rc foo --max=5 --cpu-percent=80
--name="": The name for the newly created object. If not specified, the name of the input resource will be used.
--no-headers[=false]: When using the default output, don't print headers.
-o, --output="": Output format. One of: json|yaml|wide|name|go-template=...|go-template-file=...|jsonpath=...|jsonpath-file=... See golang template [http://golang.org/pkg/text/template/#pkg-overview] and jsonpath template [http://releases.k8s.io/release-1.2/docs/user-guide/jsonpath.md].
--output-version="": Output the formatted object with the given version (default api-version).
--output-version="": Output the formatted object with the given group version (for ex: 'extensions/v1beta1').
--record[=false]: Record current kubectl command in the resource annotation.
--save-config[=false]: If true, the configuration of current object will be saved in its annotation. This is useful when you want to perform kubectl apply on this object in the future.
-a, --show-all[=false]: When printing, show all resources (default hide terminated pods.)
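The --output-version change above replaces a bare API version with a group/version string. A minimal sketch of the flag in use, with an illustrative Deployment name:

```
# Render the object in the extensions/v1beta1 representation, the group
# version the help text above cites as an example.
kubectl get deployment nginx -o yaml --output-version=extensions/v1beta1
```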
@ -56,7 +51,6 @@ kubectl autoscale rc foo --max=5 --cpu-percent=80
```
--alsologtostderr[=false]: log to standard error as well as files
--api-version="": The API version to use when talking to the server
--certificate-authority="": Path to a cert. file for the certificate authority.
--client-certificate="": Path to a client certificate file for TLS.
--client-key="": Path to a client key file for TLS.
@ -82,17 +76,6 @@ kubectl autoscale rc foo --max=5 --cpu-percent=80
### SEE ALSO
* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager
* [kubectl](/docs/user-guide/kubectl/kubectl/) - kubectl controls the Kubernetes cluster manager
###### Auto generated by spf13/cobra on 29-Feb-2016
<!-- BEGIN MUNGE: IS_VERSIONED -->
<!-- TAG IS_VERSIONED -->
<!-- END MUNGE: IS_VERSIONED -->
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/user-guide/kubectl/kubectl_autoscale.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
###### Auto generated by spf13/cobra on 11-Mar-2016

docs/user-guide/kubectl/kubectl_cluster-info.md
View File

@ -1,10 +1,5 @@
---
---
<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->
<!-- END MUNGE: UNVERSIONED_WARNING -->
## kubectl cluster-info
Display cluster info
@ -22,7 +17,6 @@ kubectl cluster-info
```
--alsologtostderr[=false]: log to standard error as well as files
--api-version="": The API version to use when talking to the server
--certificate-authority="": Path to a cert. file for the certificate authority.
--client-certificate="": Path to a client certificate file for TLS.
--client-key="": Path to a client key file for TLS.
@ -48,17 +42,6 @@ kubectl cluster-info
### SEE ALSO
* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager
* [kubectl](/docs/user-guide/kubectl/kubectl/) - kubectl controls the Kubernetes cluster manager
###### Auto generated by spf13/cobra on 8-Dec-2015
<!-- BEGIN MUNGE: IS_VERSIONED -->
<!-- TAG IS_VERSIONED -->
<!-- END MUNGE: IS_VERSIONED -->
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/user-guide/kubectl/kubectl_cluster-info.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
###### Auto generated by spf13/cobra on 2-Mar-2016

docs/user-guide/kubectl/kubectl_config.md
View File

@ -1,9 +1,5 @@
---
---
<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->
<!-- END MUNGE: UNVERSIONED_WARNING -->
## kubectl config
@ -15,9 +11,9 @@ config modifies kubeconfig files
config modifies kubeconfig files using subcommands like "kubectl config set current-context my-context"
The loading order follows these rules:
1. If the --kubeconfig flag is set, then only that file is loaded. The flag may only be set once and no merging takes place.
2. If the $KUBECONFIG environment variable is set, then it is used as a list of paths (following the normal path-delimiting rules for your system). These paths are merged together. When a value is modified, it is modified in the file that defines the stanza. When a value is created, it is created in the first file that exists. If no files in the chain exist, then the last file in the list is created. (See the sketch after this list.)
3. Otherwise, ${HOME}/.kube/config is used and no merging takes place.
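A minimal sketch of rule 2, assuming the two kubeconfig files at the illustrative paths below already exist:

```
# ':' is the path delimiter on Linux and OS X. The files are merged per
# the rules above: a modified value is written back to the file that
# defines its stanza, and a new value goes to the first file that exists.
export KUBECONFIG=${HOME}/.kube/config:${HOME}/.kube/e2e-config
kubectl config view   # display the merged result
```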
```
@ -34,7 +30,6 @@ kubectl config SUBCOMMAND
```
--alsologtostderr[=false]: log to standard error as well as files
--api-version="": The API version to use when talking to the server
--certificate-authority="": Path to a cert. file for the certificate authority.
--client-certificate="": Path to a client certificate file for TLS.
--client-key="": Path to a client key file for TLS.
@ -59,25 +54,15 @@ kubectl config SUBCOMMAND
### SEE ALSO
* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager
* [kubectl config current-context](kubectl_config_current-context.md) - Displays the current-context
* [kubectl config set](kubectl_config_set.md) - Sets an individual value in a kubeconfig file
* [kubectl config set-cluster](kubectl_config_set-cluster.md) - Sets a cluster entry in kubeconfig
* [kubectl config set-context](kubectl_config_set-context.md) - Sets a context entry in kubeconfig
* [kubectl config set-credentials](kubectl_config_set-credentials.md) - Sets a user entry in kubeconfig
* [kubectl config unset](kubectl_config_unset.md) - Unsets an individual value in a kubeconfig file
* [kubectl config use-context](kubectl_config_use-context.md) - Sets the current-context in a kubeconfig file
* [kubectl config view](kubectl_config_view.md) - Displays merged kubeconfig settings or a specified kubeconfig file.
* [kubectl](/docs/user-guide/kubectl/kubectl/) - kubectl controls the Kubernetes cluster manager
* [kubectl config current-context](/docs/user-guide/kubectl/kubectl_config_current-context/) - Displays the current-context
* [kubectl config set](/docs/user-guide/kubectl/kubectl_config_set/) - Sets an individual value in a kubeconfig file
* [kubectl config set-cluster](/docs/user-guide/kubectl/kubectl_config_set-cluster/) - Sets a cluster entry in kubeconfig
* [kubectl config set-context](/docs/user-guide/kubectl/kubectl_config_set-context/) - Sets a context entry in kubeconfig
* [kubectl config set-credentials](/docs/user-guide/kubectl/kubectl_config_set-credentials/) - Sets a user entry in kubeconfig
* [kubectl config unset](/docs/user-guide/kubectl/kubectl_config_unset/) - Unsets an individual value in a kubeconfig file
* [kubectl config use-context](/docs/user-guide/kubectl/kubectl_config_use-context/) - Sets the current-context in a kubeconfig file
* [kubectl config view](/docs/user-guide/kubectl/kubectl_config_view/) - Displays merged kubeconfig settings or a specified kubeconfig file.
###### Auto generated by spf13/cobra on 9-Jan-2016
<!-- BEGIN MUNGE: IS_VERSIONED -->
<!-- TAG IS_VERSIONED -->
<!-- END MUNGE: IS_VERSIONED -->
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/user-guide/kubectl/kubectl_config.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
###### Auto generated by spf13/cobra on 2-Mar-2016

docs/user-guide/kubectl/kubectl_config_current-context.md
View File

@ -1,9 +1,5 @@
---
---
<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->
<!-- END MUNGE: UNVERSIONED_WARNING -->
## kubectl config current-context
@ -29,7 +25,6 @@ kubectl config current-context
```
--alsologtostderr[=false]: log to standard error as well as files
--api-version="": The API version to use when talking to the server
--certificate-authority="": Path to a cert. file for the certificate authority.
--client-certificate="": Path to a client certificate file for TLS.
--client-key="": Path to a client key file for TLS.
@ -55,17 +50,6 @@ kubectl config current-context
### SEE ALSO
* [kubectl config](kubectl_config.md) - config modifies kubeconfig files
* [kubectl config](/docs/user-guide/kubectl/kubectl_config/) - config modifies kubeconfig files
###### Auto generated by spf13/cobra on 29-Feb-2016
<!-- BEGIN MUNGE: IS_VERSIONED -->
<!-- TAG IS_VERSIONED -->
<!-- END MUNGE: IS_VERSIONED -->
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/user-guide/kubectl/kubectl_config_current-context.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
###### Auto generated by spf13/cobra on 2-Mar-2016

docs/user-guide/kubectl/kubectl_config_set-cluster.md
View File

@ -1,9 +1,5 @@
---
---
<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->
<!-- END MUNGE: UNVERSIONED_WARNING -->
## kubectl config set-cluster
@ -16,7 +12,7 @@ Sets a cluster entry in kubeconfig.
Specifying a name that already exists will merge new fields on top of existing values for those fields.
```
kubectl config set-cluster NAME [--server=server] [--certificate-authority=path/to/certificate/authority] [--api-version=apiversion] [--insecure-skip-tls-verify=true]
kubectl config set-cluster NAME [--server=server] [--certificate-authority=path/to/certificate/authority] [--insecure-skip-tls-verify=true]
```
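A minimal sketch of the merge-on-existing-name behavior described above (the server address is illustrative; the e2e name matches the examples that follow):

```
# The second command merges onto the existing 'e2e' entry, so the server
# address set by the first command is preserved rather than replaced.
kubectl config set-cluster e2e --server=https://1.2.3.4
kubectl config set-cluster e2e --insecure-skip-tls-verify=true
```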
### Examples
@ -68,17 +64,6 @@ kubectl config set-cluster e2e --insecure-skip-tls-verify=true
### SEE ALSO
* [kubectl config](kubectl_config.md) - config modifies kubeconfig files
* [kubectl config](/docs/user-guide/kubectl/kubectl_config/) - config modifies kubeconfig files
###### Auto generated by spf13/cobra on 29-Feb-2016
<!-- BEGIN MUNGE: IS_VERSIONED -->
<!-- TAG IS_VERSIONED -->
<!-- END MUNGE: IS_VERSIONED -->
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/user-guide/kubectl/kubectl_config_set-cluster.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
###### Auto generated by spf13/cobra on 2-Mar-2016

docs/user-guide/kubectl/kubectl_config_set-context.md
View File

@ -1,9 +1,5 @@
---
---
<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->
<!-- END MUNGE: UNVERSIONED_WARNING -->
## kubectl config set-context
@ -38,7 +34,6 @@ kubectl config set-context gce --user=cluster-admin
```
--alsologtostderr[=false]: log to standard error as well as files
--api-version="": The API version to use when talking to the server
--certificate-authority="": Path to a cert. file for the certificate authority.
--client-certificate="": Path to a client certificate file for TLS.
--client-key="": Path to a client key file for TLS.
@ -61,17 +56,6 @@ kubectl config set-context gce --user=cluster-admin
### SEE ALSO
* [kubectl config](kubectl_config.md) - config modifies kubeconfig files
* [kubectl config](/docs/user-guide/kubectl/kubectl_config/) - config modifies kubeconfig files
###### Auto generated by spf13/cobra on 29-Feb-2016
<!-- BEGIN MUNGE: IS_VERSIONED -->
<!-- TAG IS_VERSIONED -->
<!-- END MUNGE: IS_VERSIONED -->
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/user-guide/kubectl/kubectl_config_set-context.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
###### Auto generated by spf13/cobra on 2-Mar-2016

docs/user-guide/kubectl/kubectl_config_set-credentials.md
View File

@ -1,9 +1,5 @@
---
---
<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->
<!-- END MUNGE: UNVERSIONED_WARNING -->
## kubectl config set-credentials
@ -60,7 +56,6 @@ kubectl config set-credentials cluster-admin --client-certificate=~/.kube/admin.
```
--alsologtostderr[=false]: log to standard error as well as files
--api-version="": The API version to use when talking to the server
--certificate-authority="": Path to a cert. file for the certificate authority.
--cluster="": The name of the kubeconfig cluster to use
--context="": The name of the kubeconfig context to use
@ -81,17 +76,6 @@ kubectl config set-credentials cluster-admin --client-certificate=~/.kube/admin.
### SEE ALSO
* [kubectl config](kubectl_config.md) - config modifies kubeconfig files
* [kubectl config](/docs/user-guide/kubectl/kubectl_config/) - config modifies kubeconfig files
###### Auto generated by spf13/cobra on 29-Feb-2016
<!-- BEGIN MUNGE: IS_VERSIONED -->
<!-- TAG IS_VERSIONED -->
<!-- END MUNGE: IS_VERSIONED -->
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/user-guide/kubectl/kubectl_config_set-credentials.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
###### Auto generated by spf13/cobra on 2-Mar-2016

docs/user-guide/kubectl/kubectl_config_set.md
View File

@ -1,9 +1,5 @@
---
---
<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->
<!-- END MUNGE: UNVERSIONED_WARNING -->
## kubectl config set
@ -24,7 +20,6 @@ kubectl config set PROPERTY_NAME PROPERTY_VALUE
```
--alsologtostderr[=false]: log to standard error as well as files
--api-version="": The API version to use when talking to the server
--certificate-authority="": Path to a cert. file for the certificate authority.
--client-certificate="": Path to a client certificate file for TLS.
--client-key="": Path to a client key file for TLS.
@ -50,17 +45,6 @@ kubectl config set PROPERTY_NAME PROPERTY_VALUE
### SEE ALSO
* [kubectl config](kubectl_config.md) - config modifies kubeconfig files
* [kubectl config](/docs/user-guide/kubectl/kubectl_config/) - config modifies kubeconfig files
###### Auto generated by spf13/cobra on 8-Dec-2015
<!-- BEGIN MUNGE: IS_VERSIONED -->
<!-- TAG IS_VERSIONED -->
<!-- END MUNGE: IS_VERSIONED -->
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/user-guide/kubectl/kubectl_config_set.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
###### Auto generated by spf13/cobra on 2-Mar-2016

docs/user-guide/kubectl/kubectl_config_unset.md
View File

@ -1,9 +1,5 @@
---
---
<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->
<!-- END MUNGE: UNVERSIONED_WARNING -->
## kubectl config unset
@ -23,7 +19,6 @@ kubectl config unset PROPERTY_NAME
```
--alsologtostderr[=false]: log to standard error as well as files
--api-version="": The API version to use when talking to the server
--certificate-authority="": Path to a cert. file for the certificate authority.
--client-certificate="": Path to a client certificate file for TLS.
--client-key="": Path to a client key file for TLS.
@ -49,17 +44,6 @@ kubectl config unset PROPERTY_NAME
### SEE ALSO
* [kubectl config](kubectl_config.md) - config modifies kubeconfig files
* [kubectl config](/docs/user-guide/kubectl/kubectl_config/) - config modifies kubeconfig files
###### Auto generated by spf13/cobra on 8-Dec-2015
<!-- BEGIN MUNGE: IS_VERSIONED -->
<!-- TAG IS_VERSIONED -->
<!-- END MUNGE: IS_VERSIONED -->
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/user-guide/kubectl/kubectl_config_unset.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
###### Auto generated by spf13/cobra on 2-Mar-2016

docs/user-guide/kubectl/kubectl_config_use-context.md
View File

@ -1,9 +1,5 @@
---
---
<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->
<!-- END MUNGE: UNVERSIONED_WARNING -->
## kubectl config use-context
@ -22,7 +18,6 @@ kubectl config use-context CONTEXT_NAME
```
--alsologtostderr[=false]: log to standard error as well as files
--api-version="": The API version to use when talking to the server
--certificate-authority="": Path to a cert. file for the certificate authority.
--client-certificate="": Path to a client certificate file for TLS.
--client-key="": Path to a client key file for TLS.
@ -48,17 +43,6 @@ kubectl config use-context CONTEXT_NAME
### SEE ALSO
* [kubectl config](kubectl_config.md) - config modifies kubeconfig files
* [kubectl config](/docs/user-guide/kubectl/kubectl_config/) - config modifies kubeconfig files
###### Auto generated by spf13/cobra on 8-Dec-2015
<!-- BEGIN MUNGE: IS_VERSIONED -->
<!-- TAG IS_VERSIONED -->
<!-- END MUNGE: IS_VERSIONED -->
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/user-guide/kubectl/kubectl_config_use-context.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
###### Auto generated by spf13/cobra on 2-Mar-2016

docs/user-guide/kubectl/kubectl_config_view.md
View File

@ -1,10 +1,5 @@
---
---
<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->
<!-- END MUNGE: UNVERSIONED_WARNING -->
## kubectl config view
Displays merged kubeconfig settings or a specified kubeconfig file.
@ -38,7 +33,7 @@ kubectl config view -o jsonpath='{.users[?(@.name == "e2e")].user.password}'
--minify[=false]: remove all information not used by current-context from the output
--no-headers[=false]: When using the default output, don't print headers.
-o, --output="": Output format. One of: json|yaml|wide|name|go-template=...|go-template-file=...|jsonpath=...|jsonpath-file=... See golang template [http://golang.org/pkg/text/template/#pkg-overview] and jsonpath template [http://releases.k8s.io/release-1.2/docs/user-guide/jsonpath.md].
--output-version="": Output the formatted object with the given version (default api-version).
--output-version="": Output the formatted object with the given group version (for ex: 'extensions/v1beta1').
--raw[=false]: display raw byte data
-a, --show-all[=false]: When printing, show all resources (default hide terminated pods.)
--show-labels[=false]: When printing, show all labels as the last column (default hide labels column)
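As a hedged illustration of the --minify and --raw flags above (nothing here is specific to this diff):

```
# Print only the stanzas the current context uses, and show certificate
# data as raw bytes rather than redacting it.
kubectl config view --minify --raw
```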
@ -50,7 +45,6 @@ kubectl config view -o jsonpath='{.users[?(@.name == "e2e")].user.password}'
```
--alsologtostderr[=false]: log to standard error as well as files
--api-version="": The API version to use when talking to the server
--certificate-authority="": Path to a cert. file for the certificate authority.
--client-certificate="": Path to a client certificate file for TLS.
--client-key="": Path to a client key file for TLS.
@ -76,17 +70,7 @@ kubectl config view -o jsonpath='{.users[?(@.name == "e2e")].user.password}'
### SEE ALSO
* [kubectl config](kubectl_config.md) - config modifies kubeconfig files
* [kubectl config](/docs/user-guide/kubectl/kubectl_config/) - config modifies kubeconfig files
###### Auto generated by spf13/cobra on 29-Feb-2016
###### Auto generated by spf13/cobra on 11-Mar-2016
<!-- BEGIN MUNGE: IS_VERSIONED -->
<!-- TAG IS_VERSIONED -->
<!-- END MUNGE: IS_VERSIONED -->
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/user-guide/kubectl/kubectl_config_view.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

Some files were not shown because too many files have changed in this diff.