Merge branch 'master' into master

devin-donnelly, 2016-12-21 15:22:12 -08:00, committed by GitHub
commit 2b54bd41ed
35 changed files with 462 additions and 70 deletions


@ -27,6 +27,13 @@ toc:
- docs/api-reference/batch/v1/operations.html
- docs/api-reference/batch/v1/definitions.html
- title: Apps API
section:
- title: Apps API Operations
path: /docs/api-reference/apps/v1beta1/operations/
- title: Apps API Definitions
path: /docs/api-reference/apps/v1beta1/definitions/
- title: Extensions API
section:
- docs/api-reference/extensions/v1beta1/operations.html


@ -18,9 +18,17 @@
<div id="cellophane" onclick="kub.toggleMenu()"></div>
<header>
<a href="/" class="logo"></a>
<div class="nav-buttons" data-auto-burger="primary">
<a href="/docs/" class="button" id="viewDocs" data-auto-burger-exclude>View Documentation</a>
<a href="/docs/hellonode/" class="button" id="tryKubernetes" data-auto-burger-exclude>Try Kubernetes</a>
<ul class="global-nav">
<li><a href="/docs/">Documentation</a></li>
<li><a href="http://blog.kubernetes.io/">Blog</a></li>
<li><a href="/partners/">Partners</a></li>
<li><a href="/community/">Community</a></li>
<li><a href="/case-studies/">Case Studies</a></li>
</ul>
<!-- <a href="/docs/" class="button" id="viewDocs" data-auto-burger-exclude>View Documentation</a> -->
<a href="/docs/tutorials/kubernetes-basics/" class="button" id="tryKubernetes" data-auto-burger-exclude>Try Kubernetes</a>
<button id="hamburger" onclick="kub.toggleMenu()" data-auto-burger-exclude><div></div></button>
</div>


@ -234,6 +234,40 @@ header
color: $blue
text-decoration: none
// Global Nav - 12/9/2016 Update
ul.global-nav
display: none
li
display: inline-block
margin-right: 14px
a
color: #fff
font-weight: 400
padding: 0
position: relative
&.active:after
position: absolute
width: 100%
height: 2px
content: ''
bottom: -4px
left: 0
background: #fff
.flip-nav ul.global-nav li a,
.open-nav ul.global-nav li a,
color: #333
.flip-nav ul.global-nav li a.active:after,
.open-nav ul.global-nav li a.active:after,
background: $blue
// FLIP NAV
.flip-nav
header
@ -301,6 +335,26 @@ header
padding-left: 0
padding-right: 0
margin-bottom: 0
position: relative
&.bot-bar:after
display: block
margin-bottom: -20px
height: 8px
width: 100%
background-color: transparentize(white, 0.9)
content: ''
&.no-sub
h5
display: none
h1
margin-bottom: 20px
#home #hero:after
display: none
// VENDOR STRIP
#vendorStrip
@ -482,6 +536,19 @@ section
margin: 0 auto
height: 44px
line-height: 44px
position: relative
&:before
position: absolute
width: 15px
height: 15px
content: ''
right: 8px
top: 7px
background-image: url(/images/search-icon.svg)
background-repeat: no-repeat
background-size: 100% 100%
z-index: 1
#search
width: 100%
@ -490,6 +557,10 @@ section
line-height: 30px
font-size: 16px
vertical-align: top
background: #fff
border: none
border-radius: 4px
position: relative
#encyclopedia
@ -758,7 +829,7 @@ dd
background-color: $light-grey
color: $dark-grey
font-family: $mono-font
vertical-align: bottom
vertical-align: baseline
font-size: 14px
font-weight: bold
padding: 2px 4px


@ -3,6 +3,15 @@ $vendor-strip-height: 44px
$video-section-height: 550px
@media screen and (min-width: 1025px)
#hamburger
display: none
ul.global-nav
display: inline-block
#docs #vendorStrip #searchBox:before
top: 15px
#vendorStrip
height: $vendor-strip-height
line-height: $vendor-strip-height
@ -40,7 +49,7 @@ $video-section-height: 550px
#searchBox
float: right
width: 30%
width: 320px
#search
vertical-align: middle
@ -65,7 +74,7 @@ $video-section-height: 550px
#encyclopedia
padding: 50px 50px 20px 20px
padding: 50px 50px 100px 100px
clear: both
#docsToc
@ -88,6 +97,11 @@ $video-section-height: 550px
section, header, footer
main
max-width: $main-max-width
header, #vendorStrip, #encyclopedia, #hero h1, #hero h5, #docs #hero h1, #docs #hero h5,
#community #hero h1, .gridPage #hero h1, #community #hero h5, .gridPage #hero h5
padding-left: 100px
padding-right: 100px
#home
section, header, footer
@ -276,7 +290,7 @@ $video-section-height: 550px
text-align: left
h1
padding: 20px
padding: 20px 100px
#tryKubernetes
width: auto


@ -148,7 +148,7 @@ By default the Kubernetes APIserver serves HTTP on 2 ports:
- default IP is first non-localhost network interface, change with `--bind-address` flag.
- request handled by authentication and authorization modules.
- request handled by admission control module(s).
- authentication and authoriation modules run.
- authentication and authorisation modules run.
When the cluster is created by `kube-up.sh`, on Google Compute Engine (GCE),
and on several other cloud providers, the API server serves on port 443. On


@ -23,7 +23,7 @@ answer the following questions:
- to where was it going?
NOTE: Currently, Kubernetes provides only basic audit capabilities, there is still a lot
of work going on to provide fully featured auditing capabilities (see https://github.com/kubernetes/features/issues/22).
of work going on to provide fully featured auditing capabilities (see [this issue](https://github.com/kubernetes/features/issues/22)).
Kubernetes audit is part of [kube-apiserver](/docs/admin/kube-apiserver) logging all requests
coming to the server. Each audit log contains two entries:


@ -31,7 +31,7 @@ to talk to the Kubernetes API.
API requests are tied to either a normal user or a service account, or are treated
as anonymous requests. This means every process inside or outside the cluster, from
a human user typing `kubectl` on a workstation, to `kubelets` on nodes, to members
of the control plane, must authenticate when making requests to the the API server,
of the control plane, must authenticate when making requests to the API server,
or be treated as an anonymous user.
## Authentication strategies


@ -299,9 +299,8 @@ subjects:
name: jane
roleRef:
kind: Role
namespace: default
name: pod-reader
apiVersion: rbac.authorization.k8s.io/v1alpha1
apiGroup: rbac.authorization.k8s.io
```
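
For illustration, a complete `RoleBinding` using the corrected `roleRef` might look like the sketch below; the binding's metadata (name `read-pods` in the `default` namespace) and the `User` subject kind are assumptions, not shown in the hunk.

```yaml
# Sketch: after this change, roleRef carries apiGroup instead of apiVersion
# and no longer includes a namespace of its own.
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
  name: read-pods        # assumed name
  namespace: default     # the RoleBinding itself is namespaced
subjects:
- kind: User             # assumed subject kind
  name: jane
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```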
`RoleBindings` may also refer to a `ClusterRole`. However, a `RoleBinding` that
@ -326,7 +325,7 @@ subjects:
roleRef:
kind: ClusterRole
name: secret-reader
apiVersion: rbac.authorization.k8s.io/v1alpha1
apiGroup: rbac.authorization.k8s.io
```
Finally a `ClusterRoleBinding` may be used to grant permissions in all
@ -338,14 +337,14 @@ namespaces. The following `ClusterRoleBinding` allows any user in the group
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
name: read-secrets
name: read-secrets-global
subjects:
- kind: Group # May be "User", "Group" or "ServiceAccount"
name: manager
roleRef:
kind: ClusterRole
name: secret-reader
apiVersion: rbac.authorization.k8s.io/v1alpha1
 name: secret-reader
apiGroup: rbac.authorization.k8s.io
```
### Referring to Resources


@ -99,7 +99,7 @@ Some possible patterns for communicating with pods in a DaemonSet are:
- **Push**: Pods in the Daemon Set are configured to send updates to another service, such
as a stats database. They do not have clients.
- **NodeIP and Known Port**: Pods in the Daemon Set use a `hostPort`, so that the pods are reachable
via the node IPs. Clients knows the the list of nodes ips somehow, and know the port by convention.
via the node IPs. Clients knows the list of nodes ips somehow, and know the port by convention.
- **DNS**: Create a [headless service](/docs/user-guide/services/#headless-services) with the same pod selector,
and then discover DaemonSets using the `endpoints` resource or retrieve multiple A records from
DNS.
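
For the **DNS** pattern above, a minimal headless `Service` might look like the following sketch; the service name, the `app: my-daemon` selector, and the port are illustrative assumptions.

```yaml
# Headless service (clusterIP: None) whose selector matches the DaemonSet's pods.
# Clients can then read the `endpoints` object or resolve multiple A records via DNS.
apiVersion: v1
kind: Service
metadata:
  name: my-daemon          # assumed name
spec:
  clusterIP: None          # headless: no virtual IP, DNS returns the pod IPs
  selector:
    app: my-daemon         # assumed: same labels as the DaemonSet pod template
  ports:
  - port: 80               # assumed port
```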


@ -70,7 +70,7 @@ is no longer supported.
When enabled, pods are assigned a DNS A record in the form of `pod-ip-address.my-namespace.pod.cluster.local`.
For example, a pod with ip `1.2.3.4` in the namespace `default` with a dns name of `cluster.local` would have an entry: `1-2-3-4.default.pod.cluster.local`.
For example, a pod with ip `1.2.3.4` in the namespace `default` with a DNS name of `cluster.local` would have an entry: `1-2-3-4.default.pod.cluster.local`.
#### A Records and hostname based on Pod's hostname and subdomain fields
@ -171,7 +171,7 @@ busybox 1/1 Running 0 <some-time>
Once that pod is running, you can exec nslookup in that environment:
```
kubectl exec busybox -- nslookup kubernetes.default
kubectl exec -ti busybox -- nslookup kubernetes.default
```
You should see something like:
@ -194,10 +194,10 @@ If the nslookup command fails, check the following:
Take a look inside the resolv.conf file. (See "Inheriting DNS from the node" and "Known issues" below for more information)
```
cat /etc/resolv.conf
kubectl exec busybox cat /etc/resolv.conf
```
Verify that the search path and name server are set up like the following (note that seach path may vary for different cloud providers):
Verify that the search path and name server are set up like the following (note that search path may vary for different cloud providers):
```
search default.svc.cluster.local svc.cluster.local cluster.local google.internal c.gce_project_id.internal
@ -210,7 +210,7 @@ options ndots:5
Errors such as the following indicate a problem with the kube-dns add-on or associated Services:
```
$ kubectl exec busybox -- nslookup kubernetes.default
$ kubectl exec -ti busybox -- nslookup kubernetes.default
Server: 10.0.0.10
Address 1: 10.0.0.10
@ -220,7 +220,7 @@ nslookup: can't resolve 'kubernetes.default'
or
```
$ kubectl exec busybox -- nslookup kubernetes.default
$ kubectl exec -ti busybox -- nslookup kubernetes.default
Server: 10.0.0.10
Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local
@ -244,7 +244,7 @@ kube-dns-v19-ezo1y 3/3 Running 0
...
```
If you see that no pod is running or that the pod has failed/completed, the dns add-on may not be deployed by default in your current environment and you will have to deploy it manually.
If you see that no pod is running or that the pod has failed/completed, the DNS add-on may not be deployed by default in your current environment and you will have to deploy it manually.
#### Check for Errors in the DNS pod
@ -258,7 +258,7 @@ kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system
See if there is any suspicious log. W, E, F letter at the beginning represent Warning, Error and Failure. Please search for entries that have these as the logging level and use [kubernetes issues](https://github.com/kubernetes/kubernetes/issues) to report unexpected errors.
#### Is dns service up?
#### Is DNS service up?
Verify that the DNS service is up by using the `kubectl get service` command.
@ -277,7 +277,7 @@ kube-dns 10.0.0.10 <none> 53/UDP,53/TCP 1h
If you have created the service or in the case it should be created by default but it does not appear, see this [debugging services page](http://kubernetes.io/docs/user-guide/debugging-services/) for more information.
#### Are dns endpoints exposed?
#### Are DNS endpoints exposed?
You can verify that dns endpoints are exposed by using the `kubectl get endpoints` command.
@ -348,7 +348,7 @@ some of those settings will be lost. As a partial workaround, the node can run
`dnsmasq` which will provide more `nameserver` entries, but not more `search`
entries. You can also use kubelet's `--resolv-conf` flag.
If you are using Alpine version 3.3 or earlier as your base image, dns may not
If you are using Alpine version 3.3 or earlier as your base image, DNS may not
work properly owing to a known issue with Alpine. Check [here](https://github.com/kubernetes/kubernetes/issues/30215)
for more information.


@ -17,7 +17,7 @@ kubernetes manages lifecycle of all images through imageManager, with the cooper
of cadvisor.
The policy for garbage collecting images takes two factors into consideration:
`HighThresholdPercent` and `LowThresholdPercent`. Disk usage above the the high threshold
`HighThresholdPercent` and `LowThresholdPercent`. Disk usage above the high threshold
will trigger garbage collection. The garbage collection will delete least recently used images until the low
threshold has been met.


@ -45,7 +45,7 @@ kube-controller-manager
--concurrent_rc_syncs int32 The number of replication controllers that are allowed to sync concurrently. Larger number = more responsive replica management, but more CPU (and network) load (default 5)
--configure-cloud-routes Should CIDRs allocated by allocate-node-cidrs be configured on the cloud provider. (default true)
--controller-start-interval duration Interval between starting controller managers.
--daemonset-lookup-cache-size int32 The the size of lookup cache for daemonsets. Larger number = more responsive daemonsets, but more MEM load. (default 1024)
--daemonset-lookup-cache-size int32 The size of lookup cache for daemonsets. Larger number = more responsive daemonsets, but more MEM load. (default 1024)
--deployment-controller-sync-period duration Period for syncing the deployments. (default 30s)
--enable-dynamic-provisioning Enable dynamic provisioning for environments that support it. (default true)
--enable-garbage-collector Enables the generic garbage collector. MUST be synced with the corresponding flag of the kube-apiserver. (default true)
@ -89,8 +89,8 @@ StreamingProxyRedirects=true|false (ALPHA - default=false)
--pv-recycler-pod-template-filepath-nfs string The file path to a pod definition used as a template for NFS persistent volume recycling
--pv-recycler-timeout-increment-hostpath int32 the increment of time added per Gi to ActiveDeadlineSeconds for a HostPath scrubber pod. This is for development and testing only and will not work in a multi-node cluster. (default 30)
--pvclaimbinder-sync-period duration The period for syncing persistent volumes and persistent volume claims (default 15s)
--replicaset-lookup-cache-size int32 The the size of lookup cache for replicatsets. Larger number = more responsive replica management, but more MEM load. (default 4096)
--replication-controller-lookup-cache-size int32 The the size of lookup cache for replication controllers. Larger number = more responsive replica management, but more MEM load. (default 4096)
--replicaset-lookup-cache-size int32 The size of lookup cache for replicatsets. Larger number = more responsive replica management, but more MEM load. (default 4096)
--replication-controller-lookup-cache-size int32 The size of lookup cache for replication controllers. Larger number = more responsive replica management, but more MEM load. (default 4096)
--resource-quota-sync-period duration The period for syncing quota usage status in the system (default 5m0s)
--root-ca-file string If set, this root certificate authority will be included in service account's token secret. This must be a valid PEM-encoded CA bundle.
--route-reconciliation-period duration The period for reconciling routes created for Nodes by cloud provider. (default 10s)


@ -17,35 +17,40 @@ This document describes how to authenticate and authorize access to the kubelet'
## Kubelet authentication
By default, requests to the kubelet's HTTPS endpoint that are not rejected by other configured
authentication methods are treated as anonymous requests, and given a username of `system:anonymous`
authentication methods are treated as anonymous requests, and given a username of `system:anonymous`
and a group of `system:unauthenticated`.
To disable anonymous access and send `401 Unauthorized` responses to unauthenticated requests:
* start the kubelet with the `--anonymous-auth=false` flag
To enable X509 client certificate authentication to the kubelet's HTTPS endpoint:
* start the kubelet with the `--client-ca-file` flag, providing a CA bundle to verify client certificates with
* start the kubelet with the `--client-ca-file` flag, providing a CA bundle to verify client certificates with
* start the apiserver with `--kubelet-client-certificate` and `--kubelet-client-key` flags
* see the [apiserver authentication documentation](/docs/admin/authentication/#x509-client-certs) for more details
To enable API bearer tokens (including service account tokens) to be used to authenticate to the kubelet's HTTPS endpoint:
* ensure the `authentication.k8s.io/v1beta1` API group is enabled in the API server
* start the kubelet with the `--authentication-token-webhook`, `--kubeconfig`, and `--require-kubeconfig` flags
* the kubelet calls the `TokenReview` API on the configured API server to determine user information from bearer tokens
* the kubelet calls the `TokenReview` API on the configured API server to determine user information from bearer tokens
## Kubelet authorization
Any request that is successfully authenticated (including an anonymous request) is then authorized. The default authorization mode is `AlwaysAllow`, which allows all requests.
There are many possible reasons to subdivide access to the kubelet API:
* anonymous auth is enabled, but anonymous users' ability to call the kubelet API should be limited
* bearer token auth is enabled, but arbitrary API users' (like service accounts) ability to call the kubelet API should be limited
* client certificate auth is enabled, but only some of the client certificates signed by the configured CA should be allowed to use the kubelet API
To subdivide access to the kubelet API, delegate authorization to the API server:
* ensure the `authorization.k8s.io/v1beta1` API group is enabled in the API server
* start the kubelet with the `--authorization-mode=Webhook`, `--kubeconfig`, and `--require-kubeconfig` flags
* the kubelet calls the `SubjectAccessReview` API on the configured API server to determine whether each request is authorized
* the kubelet calls the `SubjectAccessReview` API on the configured API server to determine whether each request is authorized
The kubelet authorizes API requests using the same [request attributes](/docs/admin/authorization/#request-attributes) approach as the apiserver.
@ -63,19 +68,20 @@ The resource and subresource is determined from the incoming request's path:
Kubelet API | resource | subresource
-------------|----------|------------
/stats/* | nodes | stats
/metrics/* | nodes | metrics
/logs/* | nodes | log
/spec/* | nodes | spec
/stats/\* | nodes | stats
/metrics/\* | nodes | metrics
/logs/\* | nodes | log
/spec/\* | nodes | spec
*all others* | nodes | proxy
The namespace and API group attributes are always an empty string, and
The namespace and API group attributes are always an empty string, and
the resource name is always the name of the kubelet's `Node` API object.
When running in this mode, ensure the user identified by the `--kubelet-client-certificate` and `--kubelet-client-key`
When running in this mode, ensure the user identified by the `--kubelet-client-certificate` and `--kubelet-client-key`
flags passed to the apiserver is authorized for the following attributes:
* verb=*, resource=nodes, subresource=proxy
* verb=*, resource=nodes, subresource=stats
* verb=*, resource=nodes, subresource=log
* verb=*, resource=nodes, subresource=spec
* verb=*, resource=nodes, subresource=metrics
* verb=\*, resource=nodes, subresource=proxy
* verb=\*, resource=nodes, subresource=stats
* verb=\*, resource=nodes, subresource=log
* verb=\*, resource=nodes, subresource=spec
* verb=\*, resource=nodes, subresource=metrics
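
As a sketch, one way to grant exactly those attributes with RBAC (assuming RBAC authorization is enabled; the `ClusterRole` name is illustrative) is:

```yaml
# Grants verb=* on the nodes resource for the proxy, stats, log, spec, and metrics
# subresources; bind it to the apiserver's kubelet-client identity with a ClusterRoleBinding.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
  name: kubelet-api-access   # assumed name
rules:
- apiGroups: [""]
  resources:
    - nodes/proxy
    - nodes/stats
    - nodes/log
    - nodes/spec
    - nodes/metrics
  verbs: ["*"]
```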


@ -17,7 +17,7 @@ various mechanisms (primarily through the apiserver) and ensures that the contai
described in those PodSpecs are running and healthy. The kubelet doesn't manage
containers which were not created by Kubernetes.
Other than from an PodSpec from the apiserver, there are three ways that a container
Other than from a PodSpec from the apiserver, there are three ways that a container
manifest can be provided to the Kubelet.
File: Path passed as a flag on the command line. This file is rechecked every 20


@ -181,6 +181,14 @@ The Nuage platform uses overlays to provide seamless policy-based networking bet
complicated way to build an overlay network. This is endorsed by several of the
"Big Shops" for networking.
### OVN (Open Virtual Networking)
OVN is an open source network virtualization solution developed by the
Open vSwitch community. It lets one create logical switches, logical routers,
stateful ACLs, load balancers, etc., to build different virtual networking
topologies. The project has a specific Kubernetes plugin and documentation
at [ovn-kubernetes](https://github.com/openvswitch/ovn-kubernetes).
### Project Calico
[Project Calico](http://docs.projectcalico.org/) is an open source container networking provider and network policy engine.


@ -186,7 +186,7 @@ Modifications include setting labels on the node and marking it unschedulable.
Labels on nodes can be used in conjunction with node selectors on pods to control scheduling,
e.g. to constrain a pod to only be eligible to run on a subset of the nodes.
Marking a node as unscheduleable will prevent new pods from being scheduled to that
Marking a node as unschedulable will prevent new pods from being scheduled to that
node, but will not affect any existing pods on the node. This is useful as a
preparatory step before a node reboot, etc. For example, to mark a node
unschedulable, run this command:


@ -349,7 +349,7 @@ in favor of the simpler configuration supported around eviction.
The `kubelet` currently polls `cAdvisor` to collect memory usage stats at a regular interval. If memory usage
increases within that window rapidly, the `kubelet` may not observe `MemoryPressure` fast enough, and the `OOMKiller`
will still be invoked. We intend to integrate with the `memcg` notification API in a future release to reduce this
latency, and instead have the kernel tell us when a threshold has been crossed immmediately.
latency, and instead have the kernel tell us when a threshold has been crossed immediately.
If you are not trying to achieve extreme utilization, but a sensible measure of overcommit, a viable workaround for
this issue is to set eviction thresholds at approximately 75% capacity. This increases the ability of this feature


@ -36,7 +36,7 @@ Each critical add-on has to tolerate it,
the other pods shouldn't tolerate the taint. The taint is removed once the add-on is successfully scheduled.
*Warning:* currently there is no guarantee which node is chosen and which pods are being killed
in order to schedule crical pod, so if rescheduler is enabled you pods might be occasionally
in order to schedule critical pods, so if rescheduler is enabled your pods might occasionally be
killed for this purpose.
## Config


@ -52,8 +52,7 @@ Resource Quota is enforced in a particular namespace when there is a
## Compute Resource Quota
You can limit the total sum of [compute resources](/docs/user-guide/compute-resources) and [storage resources](/docs/user-guide/persistent-volumes)
that can be requested in a given namespace.
You can limit the total sum of [compute resources](/docs/user-guide/compute-resources) that can be requested in a given namespace.
The following resource types are supported:
@ -65,7 +64,25 @@ The following resource types are supported:
| `memory` | Across all pods in a non-terminal state, the sum of memory requests cannot exceed this value. |
| `requests.cpu` | Across all pods in a non-terminal state, the sum of CPU requests cannot exceed this value. |
| `requests.memory` | Across all pods in a non-terminal state, the sum of memory requests cannot exceed this value. |
## Storage Resource Quota
You can limit the total sum of [storage resources](/docs/user-guide/persistent-volumes) that can be requested in a given namespace.
In addition, you can limit consumption of storage resources based on associated storage-class.
| Resource Name | Description |
| --------------------- | ----------------------------------------------------------- |
| `requests.storage` | Across all persistent volume claims, the sum of storage requests cannot exceed this value. |
| `persistentvolumeclaims` | The total number of [persistent volume claims](/docs/user-guide/persistent-volumes/#persistentvolumeclaims) that can exist in the namespace. |
| `<storage-class-name>.storageclass.storage.k8s.io/requests.storage` | Across all persistent volume claims associated with the storage-class-name, the sum of storage requests cannot exceed this value. |
| `<storage-class-name>.storageclass.storage.k8s.io/persistentvolumeclaims` | Across all persistent volume claims associated with the storage-class-name, the total number of [persistent volume claims](/docs/user-guide/persistent-volumes/#persistentvolumeclaims) that can exist in the namespace. |
For example, if an operator wants to quota storage with `gold` storage class separate from `bronze` storage class, the operator can
define a quota as follows:
* `gold.storageclass.storage.k8s.io/requests.storage: 500Gi`
* `bronze.storageclass.storage.k8s.io/requests.storage: 100Gi`
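
A `ResourceQuota` expressing that example might look like the following sketch; the object name, namespace, and the overall `requests.storage` and `persistentvolumeclaims` values are illustrative assumptions.

```yaml
# Quota storage requested through the gold and bronze storage classes separately,
# plus an overall cap on requested storage and on the number of claims.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: storage-quota            # assumed name
  namespace: demo                # assumed namespace
spec:
  hard:
    gold.storageclass.storage.k8s.io/requests.storage: 500Gi
    bronze.storageclass.storage.k8s.io/requests.storage: 100Gi
    requests.storage: 600Gi      # assumed overall cap across all claims
    persistentvolumeclaims: "10" # assumed total number of PVCs allowed
```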
## Object Count Quota
@ -125,7 +142,7 @@ The quota can be configured to quota either value.
If the quota has a value specified for `requests.cpu` or `requests.memory`, then it requires that every incoming
container makes an explicit request for those resources. If the quota has a value specified for `limits.cpu` or `limits.memory`,
then it requires that every incoming container specifies an explict limit for those resources.
then it requires that every incoming container specifies an explicit limit for those resources.
## Viewing and Setting Quotas


@ -232,7 +232,7 @@ services.loadbalancers 0 2
services.nodeports 0 0
```
As you can see, the pod that was created is consuming explict amounts of compute resources, and the usage is being
As you can see, the pod that was created is consuming explicit amounts of compute resources, and the usage is being
tracked by Kubernetes properly.
## Step 5: Advanced quota scopes


@ -8,6 +8,7 @@ Use the following reference docs to understand the kubernetes REST API for vario
* extensions/v1beta1: [operations](/docs/api-reference/extensions/v1beta1/operations.html), [model definitions](/docs/api-reference/extensions/v1beta1/definitions.html)
* batch/v1: [operations](/docs/api-reference/batch/v1/operations.html), [model definitions](/docs/api-reference/batch/v1/definitions.html)
* autoscaling/v1: [operations](/docs/api-reference/autoscaling/v1/operations.html), [model definitions](/docs/api-reference/autoscaling/v1/definitions.html)
* apps/v1beta1: [operations](/docs/api-reference/apps/v1beta1/operations.html), [model definitions](/docs/api-reference/apps/v1beta1/definitions.html)
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->


@ -24,11 +24,13 @@ In our experience, any system that is successful needs to grow and change as new
What constitutes a compatible change and how to change the API are detailed by the [API change document](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/docs/devel/api_changes.md).
## API Swagger definitions
## OpenAPI and Swagger definitions
Complete API details are documented using [Swagger v1.2](http://swagger.io/). The Kubernetes apiserver (aka "master") exposes an API that can be used to retrieve the Swagger Kubernetes API spec located at `/swaggerapi`. You can also enable a UI to browse the API documentation at `/swagger-ui` by passing the `--enable-swagger-ui=true` flag to apiserver.
Complete API details are documented using [Swagger v1.2](http://swagger.io/) and [OpenAPI](https://www.openapis.org/). The Kubernetes apiserver (aka "master") exposes an API that can be used to retrieve the Swagger v1.2 Kubernetes API spec located at `/swaggerapi`. You can also enable a UI to browse the API documentation at `/swagger-ui` by passing the `--enable-swagger-ui=true` flag to apiserver.
We also host a version of the [latest API documentation](http://kubernetes.io/docs/api-reference/README/). This is updated with the latest release, so if you are using a different version of Kubernetes you will want to use the spec from your apiserver.
We also host a version of the [latest v1.2 API documentation UI](http://kubernetes.io/kubernetes/third_party/swagger-ui/). This is updated with the latest release, so if you are using a different version of Kubernetes you will want to use the spec from your apiserver.
Starting with Kubernetes 1.4, the OpenAPI spec is also available at `/swagger.json`. While we are transitioning from Swagger v1.2 to OpenAPI (aka Swagger v2.0), some of the tools such as kubectl and swagger-ui are still using the v1.2 spec. The OpenAPI spec is in Beta as of Kubernetes 1.5.
Kubernetes implements an alternative Protobuf based serialization format for the API that is primarily intended for intra-cluster communication, documented in the [design proposal](https://github.com/kubernetes/kubernetes/blob/{{ page.githubbranch }}/docs/proposals/protobuf.md) and the IDL files for each schema are located in the Go packages that define the API objects.


@ -122,7 +122,7 @@ image format is SVG.
{% capture whatsnext %}
* Learn about [using page templates](/docs/contribute/page-templates/).
* Learn about [staging your changes](/docs/contribute/stage-documentation-changes).
* Learn about [creating a pull request](/docs/contribute/write-new-topic).
* Learn about [creating a pull request](/docs/contribute/create-pull-request/).
{% endcapture %}
{% include templates/task.md %}


@ -19,7 +19,7 @@ The installation uses a tool called `kubeadm` which is part of Kubernetes.
This process works with local VMs, physical servers and/or cloud servers.
It is simple enough that you can easily integrate its use into your own automation (Terraform, Chef, Puppet, etc).
See the full [`kubeadm` reference](/docs/admin/kubeadm) for information on all `kubeadm` command-line flags and for advice on automating `kubeadm` itself.
See the full `kubeadm` [reference](/docs/admin/kubeadm) for information on all `kubeadm` command-line flags and for advice on automating `kubeadm` itself.
**The `kubeadm` tool is currently in alpha but please try it out and give us [feedback](/docs/getting-started-guides/kubeadm/#feedback)!
Be sure to read the [limitations](#limitations); in particular note that kubeadm doesn't have great support for


@ -646,7 +646,7 @@ This pod mounts several node file system directories using the `hostPath` volum
Apiserver supports several cloud providers.
- options for `--cloud-provider` flag are `aws`, `gce`, `mesos`, `openshift`, `ovirt`, `rackspace`, `vagrant`, or unset.
- options for `--cloud-provider` flag are `aws`, `azure`, `cloudstack`, `fake`, `gce`, `mesos`, `openstack`, `ovirt`, `photon`, `rackspace`, `vsphere`, or unset.
- unset used for e.g. bare metal setups.
- support for new IaaS is added by contributing code [here](https://releases.k8s.io/{{page.githubbranch}}/pkg/cloudprovider/providers)


@ -12,7 +12,7 @@ In Kubernetes version 1.5, Windows Server Containers for Kubernetes is supported
1. Kubernetes control plane running on existing Linux infrastructure (version 1.5 or later)
2. Kubenet network plugin setup on the Linux nodes
3. Windows Server 2016 (RTM version 10.0.14393 or later)
4. Docker Version 1.12.2-cs2-ws-beta or later
4. Docker Version 1.12.2-cs2-ws-beta or later for Windows Server nodes (Linux nodes and Kubernetes control plane can run any Kubernetes supported Docker Version)
## Networking
Networking is achieved using L3 routing. Because third-party networking plugins (e.g. flannel, calico, etc.) don't natively work on Windows Server, existing technology that is built into the Windows and Linux operating systems is relied on. In this L3 networking approach, a /16 subnet is chosen for the cluster nodes, and a /24 subnet is assigned to each worker node. All pods on a given worker node will be connected to the /24 subnet. This allows pods on the same node to communicate with each other. In order to enable networking between pods running on different nodes, routing features that are built into Windows Server 2016 and Linux are used.
@ -40,6 +40,7 @@ To run Windows Server Containers on Kubernetes, you'll need to set up both your
2. DNS support for Windows recently got merged to docker master and is currently not supported in a stable docker release. To use DNS build docker from master or download the binary from [Docker master](https://master.dockerproject.org/)
3. Pull the `apprenda/pause` image from `https://hub.docker.com/r/apprenda/pause`
4. RRAS (Routing) Windows feature enabled
5. Install a VMSwitch of type `Internal`, by running `New-VMSwitch -Name KubeProxySwitch -SwitchType Internal` command in *PowerShell* window. This will create a new Network Interface with name `vEthernet (KubeProxySwitch)`. This interface will be used by kube-proxy to add Service IPs.
**Linux Host Setup**
@ -127,14 +128,14 @@ To start kube-proxy on your Windows node:
Run the following in a PowerShell window with administrative privileges. Be aware that if the node reboots or the process exits, you will have to rerun the commands below to restart the kube-proxy.
1. Set environment variable *INTERFACE_TO_ADD_SERVICE_IP* value to a node only network interface. The interface created when docker is installed should work
`$env:INTERFACE_TO_ADD_SERVICE_IP = "vEthernet (HNS Internal NIC)"`
1. Set environment variable *INTERFACE_TO_ADD_SERVICE_IP* value to `vEthernet (KubeProxySwitch)` which we created in **_Windows Host Setup_** above
`$env:INTERFACE_TO_ADD_SERVICE_IP = "vEthernet (KubeProxySwitch)"`
2. Run *kube-proxy* executable using the below command
`.\proxy.exe --v=3 --proxy-mode=userspace --hostname-override=<ip address/hostname of the windows node> --master=<api server location> --bind-address=<ip address of the windows node>`
## Scheduling Pods on Windows
Because your cluster has both Linux and Windows nodes, you must explictly set the nodeSelector constraint to be able to schedule Pods to Windows nodes. You must set nodeSelector with the label beta.kubernetes.io/os to the value windows; see the following example:
Because your cluster has both Linux and Windows nodes, you must explicitly set the nodeSelector constraint to be able to schedule Pods to Windows nodes. You must set nodeSelector with the label beta.kubernetes.io/os to the value windows; see the following example:
```
{


@ -7,7 +7,10 @@ In the reference section, you can find reference documentation for Kubernetes AP
## API References
* [Kubernetes API](/docs/api/) - The core API for Kubernetes.
* [Extensions API](/docs/api-reference/extensions/v1beta1/operations/) - Manages extensions resources such as Jobs, Ingress and HorizontalPodAutoscalers.
* [Autoscaling API](/docs/api-reference/autoscaling/v1/operations/) - Manages autoscaling resources such as HorizontalPodAutoscalers.
* [Batch API](/docs/api-reference/batch/v1/operations/) - Manages batch resources such as Jobs.
* [Apps API](/docs/api-reference/apps/v1beta1/operations/) - Manages apps resources such as StatefulSets.
* [Extensions API](/docs/api-reference/extensions/v1beta1/operations/) - Manages extensions resources such as Ingress, Deployments, and ReplicaSets.
## CLI References


@ -12,7 +12,15 @@ external IP address.
{% capture prerequisites %}
{% include task-tutorial-prereqs.md %}
* Install [kubectl](http://kubernetes.io/docs/user-guide/prereqs).
* Use a cloud provider like Google Container Engine or Amazon Web Services to
create a Kubernetes cluster. This tutorial creates an
[external load balancer](/docs/user-guide/load-balancer/),
which requires a cloud provider.
* Configure `kubectl` to communicate with your Kubernetes API server. For
instructions, see the documentation for your cloud provider.
{% endcapture %}


@ -328,7 +328,7 @@ Host: k8s-master:8080
```
To consume opaque resources in pods, include the name of the opaque
resource as a key in the the `spec.containers[].resources.requests` map.
resource as a key in the `spec.containers[].resources.requests` map.
The pod will be scheduled only if all of the resource requests are
satisfied (including cpu, memory and any opaque resources.) The pod will
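
As a sketch, a container requesting an opaque resource alongside cpu and memory might look like this; the resource name (`pod.alpha.kubernetes.io/opaque-int-resource-foo`) and the surrounding pod fields are illustrative assumptions.

```yaml
# The pod is scheduled only onto a node that advertises enough of the opaque resource.
apiVersion: v1
kind: Pod
metadata:
  name: opaque-resource-demo              # assumed name
spec:
  containers:
  - name: demo
    image: nginx                          # assumed image
    resources:
      requests:
        cpu: 100m
        memory: 64Mi
        pod.alpha.kubernetes.io/opaque-int-resource-foo: "1"   # assumed resource name
```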


@ -90,7 +90,7 @@ The cluster has to be started with `ENABLE_CUSTOM_METRICS` environment variable
### Pod configuration
The pods to be scaled must have cAdvisor-specific custom (aka application) metrics endpoint configured. The configuration format is described [here](https://github.com/google/cadvisor/blob/master/docs/application_metrics.md). Kubernetes expects the configuration to
be placed in `definition.json` mounted via a [config map](/docs/user-guide/horizontal-pod-autoscaling/configmap/) in `/etc/custom-metrics`. A sample config map may look like this:
be placed in `definition.json` mounted via a [config map](/docs/user-guide/configmap/) in `/etc/custom-metrics`. A sample config map may look like this:
```yaml
apiVersion: v1


@ -69,7 +69,7 @@ kubectl get [(-o|--output=)json|yaml|wide|custom-columns=...|custom-columns-file
kubectl get -f pod.yaml -o json
# Return only the phase value of the specified pod.
kubectl get -o template pod/web-pod-13je7 --template={% raw %}{{.status.phase}}{% endraw %}
kubectl get -o template pod/web-pod-13je7 --template={{.status.phase}}
# List all replication controllers and services together in ps output format.
kubectl get rc,services

(Two image file diffs suppressed because their lines are too long; sizes changed from 9.4 KiB to 11 KiB and from 9.4 KiB to 10 KiB.)

images/search-icon.svg (new file, 13 lines, 1.2 KiB)

@ -0,0 +1,13 @@
<?xml version="1.0" encoding="utf-8"?>
<!-- Generator: Adobe Illustrator 16.0.0, SVG Export Plug-In . SVG Version: 6.00 Build 0) -->
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
<svg version="1.1" id="Layer_1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" x="0px" y="0px"
width="575.996px" height="576px" viewBox="512 32 575.996 576" enable-background="new 512 32 575.996 576" xml:space="preserve">
<path fill="#3371E3" d="M1076.525,541.14L960.947,425.562c70.432-96.992,61.952-233.465-25.498-320.915
C887.015,56.218,823.533,32,760.051,32c-63.481,0-126.963,24.218-175.398,72.653c-96.87,96.871-96.87,253.926,0,350.796
c48.436,48.436,111.917,72.653,175.398,72.653c51.13,0,102.24-15.737,145.511-47.155l115.577,115.577
c7.643,7.648,17.671,11.476,27.693,11.476s20.051-3.827,27.693-11.476C1091.82,581.235,1091.82,556.436,1076.525,541.14z
M623.424,416.679c-75.334-75.335-75.334-197.92,0-273.255c36.493-36.493,85.018-56.595,136.627-56.595
c51.61,0,100.135,20.096,136.628,56.595c75.334,75.334,75.334,197.92,0,273.255c-36.493,36.492-85.018,56.595-136.628,56.595
C708.441,473.273,659.923,453.171,623.424,416.679z"/>
</svg>


@ -503,3 +503,21 @@ var pushmenu = (function(){
show: show
};
})();
$(function() {
// Make global nav be active based on pathname
if ((location.pathname.split("/")[1]) !== ""){
$('.global-nav li a[href^="/' + location.pathname.split("/")[1] + '"]').addClass('active');
}
// If vendor strip doesn't exist add className
if ( !$('#vendorStrip').length > 0 ) {
$('#hero').addClass('bot-bar');
}
// If is not homepage add class to hero section
if (!$('#home').length > 0 ) {
$('#hero').addClass('no-sub');
}
});