From 78a113146b69c015194ef05aa64e624647781369 Mon Sep 17 00:00:00 2001
From: Babajide Fowotade
Date: Sun, 25 Sep 2016 21:00:28 +0100
Subject: [PATCH 01/48] Update Node.js code to supported ES6 version

The Node.js LTS line now supports some ES6 features as of its latest stable version, 4.5.
https://nodejs.org/dist/latest-v4.x/docs/api/synopsis.html
---
 docs/hellonode.md | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/docs/hellonode.md b/docs/hellonode.md
index e489927aff..df3106c83c 100755
--- a/docs/hellonode.md
+++ b/docs/hellonode.md
@@ -59,13 +59,13 @@ The first step is to write the application. Save this code in a folder called "`
 #### server.js

```javascript
-var http = require('http');
-var handleRequest = function(request, response) {
+const http = require('http');
+const handleRequest = (request, response) => {
   console.log('Received request for URL: ' + request.url);
   response.writeHead(200);
   response.end('Hello World!');
 };
-var www = http.createServer(handleRequest);
+const www = http.createServer(handleRequest);
 www.listen(8080);
```

@@ -88,7 +88,7 @@ Next, create a file, also within `hellonode/` named `Dockerfile`. A Dockerfile d
 #### Dockerfile

```conf
-FROM node:4.4
+FROM node:4.5
 EXPOSE 8080
 COPY server.js .
 CMD node server.js
```

From d59443b6fbea2717a228c8a013733006799ed00c Mon Sep 17 00:00:00 2001
From: Steven Pousty
Date: Fri, 30 Sep 2016 14:44:18 -0700
Subject: [PATCH 02/48] Clarifying the purpose and intended use of a ConfigMap
---
 docs/user-guide/configmap/index.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/docs/user-guide/configmap/index.md b/docs/user-guide/configmap/index.md
index f37c83f4ac..e6d8686f00 100644
--- a/docs/user-guide/configmap/index.md
+++ b/docs/user-guide/configmap/index.md
@@ -19,6 +19,8 @@ or used to store configuration data for system components such as controllers.
 to [Secrets](/docs/user-guide/secrets/), but designed to more conveniently support working with strings that do not contain sensitive information.

+Note: ConfigMaps are not intended to act as a replacement for a properties file. ConfigMaps are more intended to act as a reference to multiple properties files. You can think of them as a way to represent something similar to the /etc directory, and all its files, on a Linux computer. This model for ConfigMaps becomes especially apparent when looking at creating Volumes from ConfigMaps. Each data item in the ConfigMap becomes a new file.
+
 Let's look at a made-up example:

```yaml
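To make the note above concrete — each `data` key surfacing as a file when the ConfigMap is mounted as a volume — here is a minimal sketch; the names (`example-config`, `game.properties`, `ui.properties`) are hypothetical and not part of the patch:

```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: example-config
data:
  game.properties: |
    lives=3
    secret.code.allowed=true
  ui.properties: |
    color.good=purple
---
apiVersion: v1
kind: Pod
metadata:
  name: config-volume-pod
spec:
  containers:
    - name: test
      image: busybox
      # Lists the mount directory: one file per data key
      # (game.properties, ui.properties)
      command: ["ls", "/etc/config"]
      volumeMounts:
        - name: config-volume
          mountPath: /etc/config
  volumes:
    - name: config-volume
      configMap:
        name: example-config
```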
From 0884c3a5976dd98c730f4066f59756c1434a525f Mon Sep 17 00:00:00 2001
From: David Newswanger
Date: Mon, 7 Nov 2016 13:37:33 -0500
Subject: [PATCH 03/48] made some minor fixes to the centos guide to make it a little easier for beginners to follow
---
 .../centos/centos_manual_config.md | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/docs/getting-started-guides/centos/centos_manual_config.md b/docs/getting-started-guides/centos/centos_manual_config.md
index 08419bcff7..bd791f0172 100644
--- a/docs/getting-started-guides/centos/centos_manual_config.md
+++ b/docs/getting-started-guides/centos/centos_manual_config.md
@@ -78,9 +78,10 @@ KUBE_ALLOW_PRIV="--allow-privileged=false"
 KUBE_MASTER="--master=http://centos-master:8080"
```

-* Disable the firewall on the master and all the nodes, as docker does not play well with other firewall rule managers
+* Disable the firewall on the master and all the nodes, as docker does not play well with other firewall rule managers. CentOS won't let you disable the firewall as long as SELinux is enforcing, so that needs to be disabled first.

```shell
+setenforce 0
 systemctl disable iptables-services firewalld
 systemctl stop iptables-services firewalld
```

@@ -118,10 +119,11 @@ KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
 KUBE_API_ARGS=""
```

-* Configure ETCD to hold the network overlay configuration on master:
+* Start ETCD and configure it to hold the network overlay configuration on master:
 **Warning** This network must be unused in your network infrastructure! `172.30.0.0/16` is free in our network.

```shell
+$ systemctl start etcd
 $ etcdctl mkdir /kube-centos/network
 $ etcdctl mk /kube-centos/network/config "{ \"Network\": \"172.30.0.0/16\", \"SubnetLen\": 24, \"Backend\": { \"Type\": \"vxlan\" } }"
```

@@ -164,7 +166,8 @@ KUBELET_ADDRESS="--address=0.0.0.0"
 KUBELET_PORT="--port=10250"

 # You may leave this blank to use the actual hostname
-KUBELET_HOSTNAME="--hostname-override=centos-minion-n" # Check the node number!
+# Check the node number!
+KUBELET_HOSTNAME="--hostname-override=centos-minion-n"

 # Location of the api-server
 KUBELET_API_SERVER="--api-servers=http://centos-master:8080"

@@ -228,4 +231,3 @@ IaaS Provider | Config. Mgmt | OS | Networking | Docs
 Bare-metal | custom | CentOS | flannel | [docs](/docs/getting-started-guides/centos/centos_manual_config) | | Community ([@coolsvap](https://github.com/coolsvap))

 For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart.
-

From 8c4fc0351ac5fa4b0f396b7c1c17b43e0ecb54f4 Mon Sep 17 00:00:00 2001
From: Chris Riviere
Date: Mon, 21 Nov 2016 11:27:15 -0800
Subject: [PATCH 04/48] clarification on type LoadBalancer for exposing

Took me a bit of time to learn that type LoadBalancer wasn't fully working in an OpenStack environment as I was going through this.
---
 docs/user-guide/quick-start.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/user-guide/quick-start.md b/docs/user-guide/quick-start.md
index 9e9e59dbd8..fcb3b079b9 100644
--- a/docs/user-guide/quick-start.md
+++ b/docs/user-guide/quick-start.md
@@ -22,7 +22,7 @@ $ kubectl run my-nginx --image=nginx --replicas=2 --port=80
 deployment "my-nginx" created
```

-To expose your service to the public internet, run:
+To expose your service to the public internet, run the following. Note, the type, LoadBalancer, is highly dependent upon the underlying platform that Kubernetes is running on. Type LoadBalancer may work in public cloud environments just fine but may require some additional configuration in a private cloud environment (i.e. OpenStack).

```shell
$ kubectl expose deployment my-nginx --target-port=80 --type=LoadBalancer

From 08577c385a27a32056c876307986428689f9e6b9 Mon Sep 17 00:00:00 2001
From: Chris Riviere
Date: Mon, 21 Nov 2016 12:08:25 -0800
Subject: [PATCH 05/48] minor changes

Moved text under the command and changed wording based on some feedback. With regards to mentioning cloud providers, I think a lot of this documentation in general assumes a public cloud like Google, so it may be worthwhile to mention a specific private cloud where this isn't implemented.
---
 docs/user-guide/quick-start.md | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/docs/user-guide/quick-start.md b/docs/user-guide/quick-start.md
index fcb3b079b9..6e0920f973 100644
--- a/docs/user-guide/quick-start.md
+++ b/docs/user-guide/quick-start.md
@@ -22,12 +22,13 @@ $ kubectl run my-nginx --image=nginx --replicas=2 --port=80
 deployment "my-nginx" created
```

-To expose your service to the public internet, run the following. Note, the type, LoadBalancer, is highly dependent upon the underlying platform that Kubernetes is running on. Type LoadBalancer may work in public cloud environments just fine but may require some additional configuration in a private cloud environment (i.e. OpenStack).
+To expose your service to the public internet, run the following:

```shell
 $ kubectl expose deployment my-nginx --target-port=80 --type=LoadBalancer
 service "my-nginx" exposed
```
+Note: The type, LoadBalancer, is highly dependant upon the underlying platform that Kubernetes is running on. If your cloud provider doesn't have a load balancer implementation (e.g. OpenStack) for Kubernetes, you can simply use the allocated [nodePort](link to nodeport service) as a rudimentary form of load balancing across your endpoints.

 You can see that they are running by:

From 18068bae8037d5c0f4779c7133fc3edfc5595585 Mon Sep 17 00:00:00 2001
From: Chris Riviere
Date: Mon, 21 Nov 2016 12:09:05 -0800
Subject: [PATCH 06/48] minor text change
---
 docs/user-guide/quick-start.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/user-guide/quick-start.md b/docs/user-guide/quick-start.md
index 6e0920f973..0cf35997b4 100644
--- a/docs/user-guide/quick-start.md
+++ b/docs/user-guide/quick-start.md
@@ -22,7 +22,7 @@ $ kubectl run my-nginx --image=nginx --replicas=2 --port=80
 deployment "my-nginx" created
```

-To expose your service to the public internet, run the following:
+To expose your service to the public internet, run:

```shell
$ kubectl expose deployment my-nginx --target-port=80 --type=LoadBalancer

From e73c755de32eac119a5828c922b1e584cb00f06b Mon Sep 17 00:00:00 2001
From: Anirudh Ramanathan
Date: Mon, 21 Nov 2016 13:11:19 -0800
Subject: [PATCH 07/48] Updated link and fixed typo
---
 docs/user-guide/quick-start.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/user-guide/quick-start.md b/docs/user-guide/quick-start.md
index 0cf35997b4..e64f9d1d19 100644
--- a/docs/user-guide/quick-start.md
+++ b/docs/user-guide/quick-start.md
@@ -28,7 +28,7 @@
 $ kubectl expose deployment my-nginx --target-port=80 --type=LoadBalancer
 service "my-nginx" exposed
```
-Note: The type, LoadBalancer, is highly dependant upon the underlying platform that Kubernetes is running on. If your cloud provider doesn't have a load balancer implementation (e.g. OpenStack) for Kubernetes, you can simply use the allocated [nodePort](link to nodeport service) as a rudimentary form of load balancing across your endpoints.
+Note: The type, LoadBalancer, is highly dependent upon the underlying platform that Kubernetes is running on. If your cloud provider doesn't have a load balancer implementation (e.g. OpenStack) for Kubernetes, you can simply use the allocated [NodePort](http://kubernetes.io/docs/user-guide/services/#type-nodeport) as a rudimentary form of load balancing across your endpoints.

 You can see that they are running by:
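As a sketch of the NodePort fallback that the note above describes — the service name `my-nginx` comes from the quick start; the port number shown and `<node-ip>` are illustrative placeholders:

```shell
# The service keeps an allocated node port even when no load balancer appears:
$ kubectl describe service my-nginx | grep NodePort
NodePort:               <unset> 31337/TCP

# Any node's address plus that port reaches the nginx endpoints:
$ curl http://<node-ip>:31337/
```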
From 3a9566810ee116658dd9f3d05a9bc34fcf6fc8a0 Mon Sep 17 00:00:00 2001
From: Cao Shufeng
Date: Tue, 22 Nov 2016 06:03:28 -0500
Subject: [PATCH 08/48] [kubenet] add description about non-masquerade-cidr

For kubenet, users should run kubelet with the argument --non-masquerade-cidr, or pods may have problems when accessing the "10.0.0.0/8" CIDR. Also, if --pod-cidr is set outside the scope of --non-masquerade-cidr, the node will do IP masquerade for ingress packets, which is not expected.
---
 docs/admin/network-plugins.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/docs/admin/network-plugins.md b/docs/admin/network-plugins.md
index 6c5f354423..5f138da005 100644
--- a/docs/admin/network-plugins.md
+++ b/docs/admin/network-plugins.md
@@ -57,6 +57,7 @@ The plugin requires a few things:

 * The standard CNI `bridge`, `lo` and `host-local` plugins are required, at minimum version 0.2.0. Kubenet will first search for them in `/opt/cni/bin`. Specify `network-plugin-dir` to supply additional search path. The first found match will take effect.
 * Kubelet must be run with the `--network-plugin=kubenet` argument to enable the plugin
 * Kubelet must also be run with the `--reconcile-cidr` argument to ensure the IP subnet assigned to the node by configuration or the controller-manager is propagated to the plugin
+* Kubelet should also be run with the `--non-masquerade-cidr=<clusterCidr>` argument to ensure traffic to IPs outside this range will use IP masquerade.
 * The node must be assigned an IP subnet through either the `--pod-cidr` kubelet command-line option or the `--allocate-node-cidrs=true --cluster-cidr=<cidr>` controller-manager command-line options.

 ### Customizing the MTU (with kubenet)
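Taken together, a kubenet-enabled kubelet invocation might look like the following sketch; the CIDR values are illustrative, not prescriptive:

```shell
# Enable kubenet, propagate the node's pod subnet to the plugin,
# and masquerade only traffic leaving the 10.0.0.0/8 range:
kubelet --network-plugin=kubenet \
  --reconcile-cidr \
  --non-masquerade-cidr=10.0.0.0/8 \
  --pod-cidr=10.0.1.0/24
```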
From d4b9023a7f3c64266385721332954d169a1c013a Mon Sep 17 00:00:00 2001
From: pospispa
Date: Mon, 7 Nov 2016 15:59:24 +0100
Subject: [PATCH 09/48] Added a Recycler Pod Template Example

A recycler pod template example is missing in the documentation. That's why it is now added.
---
 docs/user-guide/persistent-volumes/index.md | 31 ++++++++++++++++++++-
 1 file changed, 30 insertions(+), 1 deletion(-)

diff --git a/docs/user-guide/persistent-volumes/index.md b/docs/user-guide/persistent-volumes/index.md
index ffab80a61b..eb579145ec 100644
--- a/docs/user-guide/persistent-volumes/index.md
+++ b/docs/user-guide/persistent-volumes/index.md
@@ -70,7 +70,36 @@ When a user is done with their volume, they can delete the PVC objects from the

 ### Reclaiming

-The reclaim policy for a `PersistentVolume` tells the cluster what to do with the volume after it has been released of its claim. Currently, volumes can either be Retained, Recycled or Deleted. Retention allows for manual reclamation of the resource. For those volume plugins that support it, deletion removes both the `PersistentVolume` object from Kubernetes as well as deletes associated storage asset in external infrastructure such as AWS EBS, GCE PD or Cinder volume. Volumes that were dynamically provisioned are always deleted. If supported by appropriate volume plugin, recycling performs a basic scrub (`rm -rf /thevolume/*`) on the volume and makes it available again for a new claim.
+The reclaim policy for a `PersistentVolume` tells the cluster what to do with the volume after it has been released of its claim. Currently, volumes can either be Retained, Recycled or Deleted. Retention allows for manual reclamation of the resource. For those volume plugins that support it, deletion removes both the `PersistentVolume` object from Kubernetes and the associated storage asset in external infrastructure such as an AWS EBS, GCE PD or Cinder volume. Volumes that were dynamically provisioned are always deleted.
+
+#### Recycling
+
+If supported by the appropriate volume plugin, recycling performs a basic scrub (`rm -rf /thevolume/*`) on the volume and makes it available again for a new claim.
+
+However, an administrator can configure a custom recycler pod template using the Kubernetes controller manager command line arguments as described [here](/docs/admin/kube-controller-manager/). The custom recycler pod template must contain a `volumes` specification, as shown in the example below:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: pv-recycler-
+  namespace: default
+spec:
+  restartPolicy: Never
+  volumes:
+  - name: vol
+    hostPath:
+      path: /any/path/it/will/be/replaced
+  containers:
+  - name: pv-recycler
+    image: "gcr.io/google_containers/busybox"
+    command: ["/bin/sh", "-c", "test -e /scrub && rm -rf /scrub/..?* /scrub/.[!.]* /scrub/* && test -z \"$(ls -A /scrub)\" || exit 1"]
+    volumeMounts:
+    - name: vol
+      mountPath: /scrub
+```
+
+Note that the particular path specified in the custom recycler pod template in the `volumes` part is replaced with the path of the volume that is being recycled.

 ## Types of Persistent Volumes

From 448b8c074f640c312e03a1ad2e7af7e70f61fef2 Mon Sep 17 00:00:00 2001
From: pao
Date: Mon, 14 Nov 2016 13:42:40 +0800
Subject: [PATCH 10/48] Update dns.md: add details about subdomain.
---
 docs/admin/dns.md | 39 ++++++++++++++++++++++++++++++++++-----
 1 file changed, 34 insertions(+), 5 deletions(-)

diff --git a/docs/admin/dns.md b/docs/admin/dns.md
index d75acfa093..f474f31885 100644
--- a/docs/admin/dns.md
+++ b/docs/admin/dns.md
@@ -94,13 +94,42 @@ Example:

```yaml
 apiVersion: v1
+kind: Service
+metadata:
+  name: default-subdomain
+spec:
+  selector:
+    name: busybox
+  ports:
+  - name: foo # Actually, no port is needed.
+    port: 1234
+    targetPort: 1234
+---
+apiVersion: v1
 kind: Pod
 metadata:
-  name: busybox
-  namespace: default
+  name: busybox1
+  labels:
+    name: busybox
 spec:
   hostname: busybox-1
-  subdomain: default
+  subdomain: default-subdomain
+  containers:
+  - image: busybox
+    command:
+      - sleep
+      - "3600"
+    name: busybox
+---
+apiVersion: v1
+kind: Pod
+metadata:
+  name: busybox2
+  labels:
+    name: busybox
+spec:
+  hostname: busybox-2
+  subdomain: default-subdomain
   containers:
   - image: busybox
     command:
```

 If there exists a headless service in the same namespace as the pod and with the same name as the subdomain, the cluster's KubeDNS Server also returns an A record for the Pod's fully qualified hostname.
-Given a Pod with the hostname set to "foo" and the subdomain set to "bar", and a headless Service named "bar" in the same namespace, the pod will see it's own FQDN as "foo.bar.my-namespace.svc.cluster.local". DNS serves an A record at that name, pointing to the Pod's IP.
+Given a Pod with the hostname set to "busybox-1" and the subdomain set to "default-subdomain", and a headless Service named "default-subdomain" in the same namespace, the pod will see its own FQDN as "busybox-1.default-subdomain.my-namespace.svc.cluster.local". DNS serves an A record at that name, pointing to the Pod's IP. Both pods "busybox1" and "busybox2" will each have distinct A records.

-With v1.2, the Endpoints object also has a new annotation `endpoints.beta.kubernetes.io/hostnames-map`. Its value is the json representation of map[string(IP)][endpoints.HostRecord], for example: '{"10.245.1.6":{HostName: "my-webserver"}}'.
+As of Kubernetes v1.2, the Endpoints object also has the annotation `endpoints.beta.kubernetes.io/hostnames-map`. Its value is the JSON representation of map[string(IP)][endpoints.HostRecord], for example: '{"10.245.1.6":{HostName: "my-webserver"}}'.

 If the Endpoints are for a headless service, an A record is created with the format <hostname>.<service name>.<pod namespace>.svc.<cluster domain>. For the example JSON, if the endpoints are for a headless service named "bar", and one of the endpoints has IP "10.245.1.6", an A record is created with the name "my-webserver.bar.my-namespace.svc.cluster.local" and the A record lookup would return "10.245.1.6". This endpoints annotation generally does not need to be specified by end-users, but can be used by the internal service controller to deliver the aforementioned feature.
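Assuming the two busybox pods above are running in the `default` namespace, the new records can be checked from inside the cluster; a quick sketch using only names taken from the example:

```shell
# Resolve busybox-1's A record (hostname.subdomain.namespace.svc.cluster-domain)
# from inside the other pod:
kubectl exec -ti busybox2 -- nslookup busybox-1.default-subdomain.default.svc.cluster.local
```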
From b6cd807bacb503cc118016a73a69d6de1bdc48c6 Mon Sep 17 00:00:00 2001
From: Martin Rauscher
Date: Thu, 1 Dec 2016 11:51:20 +0100
Subject: [PATCH 11/48] allow copy&paste of sample code

Multi-line code samples should not be prefixed with stuff that makes copy&paste hard!
---
 docs/getting-started-guides/kubeadm.md | 22 +++++++++++-----------
 1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/docs/getting-started-guides/kubeadm.md b/docs/getting-started-guides/kubeadm.md
index a122180b23..f6f0021eef 100644
--- a/docs/getting-started-guides/kubeadm.md
+++ b/docs/getting-started-guides/kubeadm.md
@@ -69,18 +69,18 @@ For each host in turn:

 * SSH into the machine and become `root` if you are not already (for example, run `sudo su -`).
 * If the machine is running Ubuntu 16.04 or HypriotOS v1.0.1, run:

-    # curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
-    # cat <<EOF > /etc/apt/sources.list.d/kubernetes.list
+    curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
+    cat <<EOF > /etc/apt/sources.list.d/kubernetes.list
     deb http://apt.kubernetes.io/ kubernetes-xenial main
     EOF
-    # apt-get update
-    # # Install docker if you don't have it already.
-    # apt-get install -y docker.io
-    # apt-get install -y kubelet kubeadm kubectl kubernetes-cni
+    apt-get update
+    # Install docker if you don't have it already.
+    apt-get install -y docker.io
+    apt-get install -y kubelet kubeadm kubectl kubernetes-cni

 If the machine is running CentOS 7, run:

-    # cat <<EOF > /etc/yum.repos.d/kubernetes.repo
+    cat <<EOF > /etc/yum.repos.d/kubernetes.repo
     [kubernetes]
     name=Kubernetes
     baseurl=http://yum.kubernetes.io/repos/kubernetes-el7-x86_64
     enabled=1
     gpgcheck=1
     repo_gpgcheck=1
     gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
            https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
     EOF
-    # setenforce 0
-    # yum install -y docker kubelet kubeadm kubectl kubernetes-cni
-    # systemctl enable docker && systemctl start docker
-    # systemctl enable kubelet && systemctl start kubelet
+    setenforce 0
+    yum install -y docker kubelet kubeadm kubectl kubernetes-cni
+    systemctl enable docker && systemctl start docker
+    systemctl enable kubelet && systemctl start kubelet

 The kubelet is now restarting every few seconds, as it waits in a crashloop for `kubeadm` to tell it what to do.

From 30652c7847c6cb31b6b2cf171934f7a1b7c535f2 Mon Sep 17 00:00:00 2001
From: Matt Baldwin
Date: Tue, 6 Dec 2016 14:51:23 -0800
Subject: [PATCH 12/48] Adding Stackpoint.io as a turn-key provider.
Change-Id: I017e314f67ca3945f658f834ae5b908f30ddd65a --- _data/guides.yml | 2 + docs/getting-started-guides/index.md | 1 + docs/getting-started-guides/stackpoint.md | 203 ++++++++++++++++++++++ 3 files changed, 206 insertions(+) create mode 100644 docs/getting-started-guides/stackpoint.md diff --git a/_data/guides.yml b/_data/guides.yml index 0c1a785720..6aa0b74c09 100644 --- a/_data/guides.yml +++ b/_data/guides.yml @@ -179,6 +179,8 @@ toc: path: /docs/getting-started-guides/clc/ - title: Running Kubernetes on IBM SoftLayer path: https://github.com/patrocinio/kubernetes-softlayer + - title: Running Kubernetes on Multiple Clouds with Stackpoint.io + path: /docs/getting-started-guides/stackpoint/ - title: Running Kubernetes on Custom Solutions section: - title: Creating a Custom Cluster from Scratch diff --git a/docs/getting-started-guides/index.md b/docs/getting-started-guides/index.md index 609a0cc03d..daa2217c5e 100644 --- a/docs/getting-started-guides/index.md +++ b/docs/getting-started-guides/index.md @@ -58,6 +58,7 @@ few commands, and have active community support. - [Azure](/docs/getting-started-guides/coreos/azure/) (Weave-based, contributed by WeaveWorks employees) - [CenturyLink Cloud](/docs/getting-started-guides/clc) - [IBM SoftLayer](https://github.com/patrocinio/kubernetes-softlayer) +- [Stackpoint.io](/docs/getting-started-guides/stackpoint/) ### Custom Solutions diff --git a/docs/getting-started-guides/stackpoint.md b/docs/getting-started-guides/stackpoint.md new file mode 100644 index 0000000000..6cba8f5efb --- /dev/null +++ b/docs/getting-started-guides/stackpoint.md @@ -0,0 +1,203 @@ +--- +assignees: +- baldwinspc + +--- + +* TOC +{:toc} + + +## Introduction + +StackPointCloud is the universal control plane for Kubernetes Anywhere. StackPointCloud allows you to deploy and manage a Kubernetes cluster to the cloud provider of your choice in 3 steps using a web-based interface. + +## AWS + +To create a Kubernetes cluster on AWS, you will need an Access Key ID and a Secret Access Key from AWS. + +### Choose a Provider + +Log in to [stackpoint.io](https://stackpoint.io) with a GitHub, Google, or Twitter account. + +Click **+ADD A CLUSTER NOW**. + +Click to select Amazon Web Services (AWS). + +### Configure Your Provider + +Add your Access Key ID and a Secret Access Key from AWS. Select your default StackPointCloud SSH keypair, or click **ADD SSH KEY** to add a new keypair. + +Click **SUBMIT** to submit the authorization information. + +### Configure Your Cluster + +Choose any extra options you may want to include with your cluster, then click **SUBMIT** to create the cluster. + +### Running the Cluster + +You can monitor the status of your cluster and suspend or delete it from [your stackpoint.io dashboard](https://stackpoint.io/#/clusters). + +For information on using and managing a Kubernetes cluster on AWS, [consult the Kubernetes documentation](http://kubernetes.io/docs/getting-started-guides/aws/). + + + + +## GCE + +To create a Kubernetes cluster on GCE, you will need the Service Account JSON Data from Google. + + +### Choose a Provider + +Log in to [stackpoint.io](https://stackpoint.io) with a GitHub, Google, or Twitter account. + +Click **+ADD A CLUSTER NOW**. + +Click to select Google Compute Engine (GCE). + +### Configure Your Provider + +Add your Service Account JSON Data from Google. Select your default StackPointCloud SSH keypair, or click **ADD SSH KEY** to add a new keypair. + +Click **SUBMIT** to submit the authorization information. 
+ +### Configure Your Cluster + +Choose any extra options you may want to include with your cluster, then click **SUBMIT** to create the cluster. + +### Running the Cluster + +You can monitor the status of your cluster and suspend or delete it from [your stackpoint.io dashboard](https://stackpoint.io/#/clusters). + +For information on using and managing a Kubernetes cluster on GCE, [consult the Kubernetes documentation](http://kubernetes.io/docs/getting-started-guides/gce). + + + + + +## GKE + +To create a Kubernetes cluster on GKE, you will need the Service Account JSON Data from Google. + +### Choose a Provider + +Log in to [stackpoint.io](https://stackpoint.io) with a GitHub, Google, or Twitter account. + +Click **+ADD A CLUSTER NOW**. + +Click to select Google Container Engine (GKE). + +### Configure Your Provider + +Add your Service Account JSON Data from Google. Select your default StackPointCloud SSH keypair, or click **ADD SSH KEY** to add a new keypair. + +Click **SUBMIT** to submit the authorization information. + +### Configure Your Cluster + +Choose any extra options you may want to include with your cluster, then click **SUBMIT** to create the cluster. + + +### Running the Cluster + +You can monitor the status of your cluster and suspend or delete it from [your stackpoint.io dashboard](https://stackpoint.io/#/clusters). + +For information on using and managing a Kubernetes cluster on GKE, consult [the official documentation](http://kubernetes.io/docs/). + + + + + +## DigitalOcean + +To create a Kubernetes cluster on DigitalOcean, you will need a DigitalOcean API Token. + +### Choose a Provider + +Log in to [stackpoint.io](https://stackpoint.io) with a GitHub, Google, or Twitter account. + +Click **+ADD A CLUSTER NOW**. + +Click to select DigitalOcean. + +### Configure Your Provider + +Add your DigitalOcean API Token. Select your default StackPointCloud SSH keypair, or click **ADD SSH KEY** to add a new keypair. + +Click **SUBMIT** to submit the authorization information. + +### Configure Your Cluster + +Choose any extra options you may want to include with your cluster, then click **SUBMIT** to create the cluster. + +### Running the Cluster + +You can monitor the status of your cluster and suspend or delete it from [your stackpoint.io dashboard](https://stackpoint.io/#/clusters). + +For information on using and managing a Kubernetes cluster on DigitalOcean, consult [the official documentation](http://kubernetes.io/docs/). + + + + + +## Microsoft Azure + +To create a Kubernetes cluster on Microsoft Azure, you will need an Azure Subscription ID, Username/Email, and Password. + +### Choose a Provider + +Log in to [stackpoint.io](https://stackpoint.io) with a GitHub, Google, or Twitter account. + +Click **+ADD A CLUSTER NOW**. + +Click to select Microsoft Azure. + +### Configure Your Provider + +Add your Azure Subscription ID, Username/Email, and Password. Select your default StackPointCloud SSH keypair, or click **ADD SSH KEY** to add a new keypair. + +Click **SUBMIT** to submit the authorization information. + +### Configure Your Cluster + +Choose any extra options you may want to include with your cluster, then click **SUBMIT** to create the cluster. + + +### Running the Cluster + +You can monitor the status of your cluster and suspend or delete it from [your stackpoint.io dashboard](https://stackpoint.io/#/clusters). 
For information on using and managing a Kubernetes cluster on Azure, [consult the Kubernetes documentation](http://kubernetes.io/docs/getting-started-guides/azure/).

## Packet

To create a Kubernetes cluster on Packet, you will need a Packet API Key.

### Choose a Provider

Log in to [stackpoint.io](https://stackpoint.io) with a GitHub, Google, or Twitter account.

Click **+ADD A CLUSTER NOW**.

Click to select Packet.

### Configure Your Provider

Add your Packet API Key. Select your default StackPointCloud SSH keypair, or click **ADD SSH KEY** to add a new keypair.

Click **SUBMIT** to submit the authorization information.

### Configure Your Cluster

Choose any extra options you may want to include with your cluster, then click **SUBMIT** to create the cluster.

### Running the Cluster

You can monitor the status of your cluster and suspend or delete it from [your stackpoint.io dashboard](https://stackpoint.io/#/clusters).

For information on using and managing a Kubernetes cluster on Packet, consult [the official documentation](http://kubernetes.io/docs/).

From 0142f40d590dff710f5595a02c9125aad4088dfa Mon Sep 17 00:00:00 2001
From: Cao Shufeng
Date: Thu, 8 Dec 2016 07:41:32 -0500
Subject: [PATCH 13/48] [authorization] update doc about roleRef

This update is made according to this commit in code: https://github.com/kubernetes/kubernetes/commit/8c788233e778f3b0ebef560762c1433c12ea1d43
It has already been released with k8s v1.5, so it's reasonable to update the doc now.
---
 docs/admin/authorization.md | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/docs/admin/authorization.md b/docs/admin/authorization.md
index 1a86359a92..dfec38b216 100644
--- a/docs/admin/authorization.md
+++ b/docs/admin/authorization.md
@@ -297,9 +297,8 @@ subjects:
     name: jane
 roleRef:
   kind: Role
-  namespace: default
   name: pod-reader
-  apiVersion: rbac.authorization.k8s.io/v1alpha1
+  apiGroup: rbac.authorization.k8s.io
```

 `RoleBindings` may also refer to a `ClusterRole`. However, a `RoleBinding` that

@@ -324,7 +323,7 @@ subjects:
 roleRef:
   kind: ClusterRole
   name: secret-reader
-  apiVersion: rbac.authorization.k8s.io/v1alpha1
+  apiGroup: rbac.authorization.k8s.io
```

 Finally a `ClusterRoleBinding` may be used to grant permissions in all
 namespaces. The following `ClusterRoleBinding` allows any user in the group

```yaml
 kind: ClusterRoleBinding
 apiVersion: rbac.authorization.k8s.io/v1alpha1
 metadata:
-  name: read-secrets
+  name: read-secrets-global
 subjects:
 - kind: Group # May be "User", "Group" or "ServiceAccount"
   name: manager
 roleRef:
   kind: ClusterRole
-  name: secret-reader
+  name: secret-reader-global
   apiGroup: rbac.authorization.k8s.io
```

 ### Referring to Resources
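The `RoleBinding` above refers to a Role named `pod-reader`; as a sketch, using the same API version as the bindings in this patch, such a Role could look like:

```yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
  namespace: default
  name: pod-reader
rules:
  - apiGroups: [""] # "" indicates the core API group
    resources: ["pods"]
    verbs: ["get", "watch", "list"]
```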
From eed33495017b9fa557cfb47b713b12e00ad1107f Mon Sep 17 00:00:00 2001
From: "Daniel P. Berrange"
Date: Fri, 11 Nov 2016 13:41:49 +0000
Subject: [PATCH 14/48] getting-started: replace 'yum' with 'dnf' in Fedora docs

dnf has replaced yum as the standard package install tool in all versions of Fedora that are currently supported.

Signed-off-by: Daniel P. Berrange
---
 .../fedora/fedora_ansible_config.md | 2 +-
 .../getting-started-guides/fedora/fedora_manual_config.md | 8 ++++----
 .../fedora/flannel_multi_node_cluster.md | 2 +-
 3 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/docs/getting-started-guides/fedora/fedora_ansible_config.md b/docs/getting-started-guides/fedora/fedora_ansible_config.md
index b5fe3802e0..3f4848f692 100644
--- a/docs/getting-started-guides/fedora/fedora_ansible_config.md
+++ b/docs/getting-started-guides/fedora/fedora_ansible_config.md
@@ -37,7 +37,7 @@ master,etcd = kube-master.example.com

 If not

```shell
-yum install -y ansible git python-netaddr
+dnf install -y ansible git python-netaddr
```

 **Now clone down the Kubernetes repository**

diff --git a/docs/getting-started-guides/fedora/fedora_manual_config.md b/docs/getting-started-guides/fedora/fedora_manual_config.md
index d1948530a8..134ebabff1 100644
--- a/docs/getting-started-guides/fedora/fedora_manual_config.md
+++ b/docs/getting-started-guides/fedora/fedora_manual_config.md
@@ -33,18 +33,18 @@ fed-node = 192.168.121.65

 **Prepare the hosts:**

 * Install Kubernetes on all hosts - fed-{master,node}. This will also pull in docker. Also install etcd on fed-master. This guide has been tested with kubernetes-0.18 and beyond.
-* The [--enablerepo=updates-testing](https://fedoraproject.org/wiki/QA:Updates_Testing) directive in the yum command below will ensure that the most recent Kubernetes version that is scheduled for pre-release will be installed. This should be a more recent version than the Fedora "stable" release for Kubernetes that you would get without adding the directive.
-* If you want the very latest Kubernetes release [you can download and yum install the RPM directly from Fedora Koji](http://koji.fedoraproject.org/koji/packageinfo?packageID=19202) instead of using the yum install command below.
+* The [--enablerepo=updates-testing](https://fedoraproject.org/wiki/QA:Updates_Testing) directive in the dnf command below will ensure that the most recent Kubernetes version that is scheduled for pre-release will be installed. This should be a more recent version than the Fedora "stable" release for Kubernetes that you would get without adding the directive.
+* If you want the very latest Kubernetes release [you can download and dnf install the RPM directly from Fedora Koji](http://koji.fedoraproject.org/koji/packageinfo?packageID=19202) instead of using the dnf install command below.
 * Running on AWS EC2 with RHEL 7.2, you need to enable the "extras" repository for yum by editing `/etc/yum.repos.d/redhat-rhui.repo` and changing the `enable=0` to `enable=1` for extras.

```shell
-yum -y install --enablerepo=updates-testing kubernetes
+dnf -y install --enablerepo=updates-testing kubernetes
```

 * Install etcd and iptables

```shell
-yum -y install etcd iptables
+dnf -y install etcd iptables
```

 * Add master and node to /etc/hosts on all machines (not needed if hostnames already in DNS). Make sure that communication works between fed-master and fed-node by using a utility such as ping.

diff --git a/docs/getting-started-guides/fedora/flannel_multi_node_cluster.md b/docs/getting-started-guides/fedora/flannel_multi_node_cluster.md
index bcd10f57b9..494f7cd343 100644
--- a/docs/getting-started-guides/fedora/flannel_multi_node_cluster.md
+++ b/docs/getting-started-guides/fedora/flannel_multi_node_cluster.md
@@ -148,7 +148,7 @@ bash-4.3#

 This will place you inside the container. Install the iproute and iputils packages to get the ip and ping utilities.
Due to a [bug](https://bugzilla.redhat.com/show_bug.cgi?id=1142311), it is required to modify the capabilities of the ping binary to work around the "Operation not permitted" error.

```shell
-bash-4.3# yum -y install iproute iputils
+bash-4.3# dnf -y install iproute iputils
 bash-4.3# setcap cap_net_raw-ep /usr/bin/ping
```

From 28b636c466e8ebe1b8693cc86a8d2dedf702f187 Mon Sep 17 00:00:00 2001
From: "Daniel P. Berrange"
Date: Fri, 11 Nov 2016 13:43:01 +0000
Subject: [PATCH 15/48] getting-started: don't use updates-testing repo for Fedora

There is no reason to tell people to enable the updates-testing repo for Fedora by default. The latest stable packages are already in the main repos, and updates-testing should only be used by people wishing to test & report the quality of pre-released software, not general Kubernetes users.

Also remove the link to Koji builds - any Koji build which is working well enough to use will be submitted as an update. Telling users to install packages straight from Koji will lead to them using potentially unstable / buggy builds.

Signed-off-by: Daniel P. Berrange
---
 docs/getting-started-guides/fedora/fedora_manual_config.md | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/docs/getting-started-guides/fedora/fedora_manual_config.md b/docs/getting-started-guides/fedora/fedora_manual_config.md
index 134ebabff1..20deecf4cc 100644
--- a/docs/getting-started-guides/fedora/fedora_manual_config.md
+++ b/docs/getting-started-guides/fedora/fedora_manual_config.md
@@ -33,12 +33,10 @@ fed-node = 192.168.121.65

 **Prepare the hosts:**

 * Install Kubernetes on all hosts - fed-{master,node}. This will also pull in docker. Also install etcd on fed-master. This guide has been tested with kubernetes-0.18 and beyond.
-* The [--enablerepo=updates-testing](https://fedoraproject.org/wiki/QA:Updates_Testing) directive in the dnf command below will ensure that the most recent Kubernetes version that is scheduled for pre-release will be installed. This should be a more recent version than the Fedora "stable" release for Kubernetes that you would get without adding the directive.
-* If you want the very latest Kubernetes release [you can download and dnf install the RPM directly from Fedora Koji](http://koji.fedoraproject.org/koji/packageinfo?packageID=19202) instead of using the dnf install command below.
 * Running on AWS EC2 with RHEL 7.2, you need to enable the "extras" repository for yum by editing `/etc/yum.repos.d/redhat-rhui.repo` and changing the `enable=0` to `enable=1` for extras.

```shell
-dnf -y install --enablerepo=updates-testing kubernetes
+dnf -y install kubernetes
```

 * Install etcd and iptables

From 363052020c483f13a533775266891eb15ffdaf3d Mon Sep 17 00:00:00 2001
From: "Daniel P. Berrange"
Date: Fri, 11 Nov 2016 14:17:27 +0000
Subject: [PATCH 16/48] getting-started: only list config vars that need changing

The KUBE_LOGTOSTDERR, KUBE_LOG_LEVEL and KUBE_ALLOW_PRIV settings will already have correct values out of the box, so no need to tell people to set them.

Signed-off-by: Daniel P. Berrange
---
 .../fedora/fedora_manual_config.md | 14 +++-----------
 1 file changed, 3 insertions(+), 11 deletions(-)

diff --git a/docs/getting-started-guides/fedora/fedora_manual_config.md b/docs/getting-started-guides/fedora/fedora_manual_config.md
index 20deecf4cc..0086af1d14 100644
--- a/docs/getting-started-guides/fedora/fedora_manual_config.md
+++ b/docs/getting-started-guides/fedora/fedora_manual_config.md
@@ -52,20 +52,12 @@ echo "192.168.121.9 fed-master
 192.168.121.65 fed-node" >> /etc/hosts
```

-* Edit /etc/kubernetes/config which will be the same on all hosts (master and node) to contain:
+* Edit /etc/kubernetes/config (which should be the same on all hosts) to set
+the name of the master server:

```shell
 # Comma separated list of nodes in the etcd cluster
 KUBE_MASTER="--master=http://fed-master:8080"
-
-# logging to stderr means we get it in the systemd journal
-KUBE_LOGTOSTDERR="--logtostderr=true"
-
-# journal message level, 0 is debug
-KUBE_LOG_LEVEL="--v=0"
-
-# Should this cluster be allowed to run privileged docker containers
-KUBE_ALLOW_PRIV="--allow-privileged=false"
```

 * Disable the firewall on both the master and node, as docker does not play well with other firewall rule managers. Please note that iptables-services does not exist on default fedora server install.

@@ -93,7 +85,7 @@ KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
 KUBE_API_ARGS=""
```

-* Edit /etc/etcd/etcd.conf,let the etcd to listen all the ip instead of 127.0.0.1, if not, you will get the error like "connection refused". Note that Fedora 22 uses etcd 2.0, One of the changes in etcd 2.0 is that now uses port 2379 and 2380 (as opposed to etcd 0.46 which userd 4001 and 7001).
+* Edit /etc/etcd/etcd.conf to let etcd listen on all available IPs instead of 127.0.0.1. If you have not done this, you might see an error such as "connection refused". Note that Fedora 22 uses etcd 2.0; one of the changes in etcd 2.0 is that it now uses ports 2379 and 2380 (as opposed to etcd 0.46, which used 4001 and 7001).

```shell
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:4001"

From 0275ca78d048b23d57b5e45a9446c2c759bffa75 Mon Sep 17 00:00:00 2001
From: "Daniel P. Berrange"
Date: Fri, 11 Nov 2016 14:19:21 +0000
Subject: [PATCH 17/48] getting-started: update to assume port numbers for etcd >= 2.0

The docs still refer to etcd using port 4001, but in all currently released Fedora versions it will be using 2379.

Signed-off-by: Daniel P. Berrange
---
 docs/getting-started-guides/fedora/fedora_manual_config.md | 6 +++---
 .../fedora/flannel_multi_node_cluster.md | 4 ++--
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/docs/getting-started-guides/fedora/fedora_manual_config.md b/docs/getting-started-guides/fedora/fedora_manual_config.md
index 0086af1d14..52627c9f47 100644
--- a/docs/getting-started-guides/fedora/fedora_manual_config.md
+++ b/docs/getting-started-guides/fedora/fedora_manual_config.md
@@ -76,7 +76,7 @@ systemctl stop iptables-services firewalld
 KUBE_API_ADDRESS="--address=0.0.0.0"

 # Comma separated list of nodes in the etcd cluster
-KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:4001"
+KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"

 # Address range to use for services
 KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

@@ -85,10 +85,10 @@ KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
 KUBE_API_ARGS=""
```

-* Edit /etc/etcd/etcd.conf to let etcd listen on all available IPs instead of 127.0.0.1. If you have not done this, you might see an error such as "connection refused". Note that Fedora 22 uses etcd 2.0; one of the changes in etcd 2.0 is that it now uses ports 2379 and 2380 (as opposed to etcd 0.46, which used 4001 and 7001).
+* Edit /etc/etcd/etcd.conf to let etcd listen on all available IPs instead of 127.0.0.1. If you have not done this, you might see an error such as "connection refused".

```shell
-ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:4001"
+ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
```

diff --git a/docs/getting-started-guides/fedora/flannel_multi_node_cluster.md b/docs/getting-started-guides/fedora/flannel_multi_node_cluster.md
index 494f7cd343..a7f25b0d14 100644
--- a/docs/getting-started-guides/fedora/flannel_multi_node_cluster.md
+++ b/docs/getting-started-guides/fedora/flannel_multi_node_cluster.md
@@ -55,7 +55,7 @@ Edit the flannel configuration file /etc/sysconfig/flanneld as follows:
 # Flanneld configuration options

 # etcd url location. Point this to the server where etcd runs
-FLANNEL_ETCD="http://fed-master:4001"
+FLANNEL_ETCD="http://fed-master:2379"

 # etcd config key. This is the configuration key that flannel queries
 # For address range assignment

@@ -104,7 +104,7 @@ Now check the interfaces on the nodes. Notice there is now a flannel.1 interface

 From any node in the cluster, check the cluster members by issuing a query to etcd server via curl (only partial output is shown using `grep -E "\{|\}|key|value"`). If you set up a 1 master and 3 nodes cluster, you should see one block for each node showing the subnets they have been assigned. You can associate those subnets to each node by the MAC address (VtepMAC) and IP address (Public IP) that is listed in the output.

```shell
-curl -s http://fed-master:4001/v2/keys/coreos.com/network/subnets | python -mjson.tool
+curl -s http://fed-master:2379/v2/keys/coreos.com/network/subnets | python -mjson.tool
```

```json
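A quick way to confirm etcd is actually answering on the new client port after this change (a sketch; `fed-master` as configured above):

```shell
# etcd's /version endpoint reports the running server version:
curl -s http://fed-master:2379/version
```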
From d35fdeb503fcba2d02aef7b86123ccf8555070d4 Mon Sep 17 00:00:00 2001
From: "Daniel P. Berrange"
Date: Fri, 11 Nov 2016 14:23:07 +0000
Subject: [PATCH 18/48] getting-started: remove note to create /var/run/kubernetes

The /var/run/kubernetes directory is already created by installation of the kubernetes RPM with correct user and group ownership and permissions.

Signed-off-by: Daniel P. Berrange
---
 .../getting-started-guides/fedora/fedora_manual_config.md | 8 --------
 1 file changed, 8 deletions(-)

diff --git a/docs/getting-started-guides/fedora/fedora_manual_config.md b/docs/getting-started-guides/fedora/fedora_manual_config.md
index 52627c9f47..7b196a0711 100644
--- a/docs/getting-started-guides/fedora/fedora_manual_config.md
+++ b/docs/getting-started-guides/fedora/fedora_manual_config.md
@@ -91,14 +91,6 @@ KUBE_API_ARGS=""
 ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
```

-* Create /var/run/kubernetes on master:
-
-```shell
-mkdir /var/run/kubernetes
-chown kube:kube /var/run/kubernetes
-chmod 750 /var/run/kubernetes
-```
-
 * Start the appropriate services on master:

```shell

From a6e375a4b1ff5262ff05069884a4a5f1de5ea196 Mon Sep 17 00:00:00 2001
From: "Daniel P. Berrange"
Date: Fri, 11 Nov 2016 13:49:06 +0000
Subject: [PATCH 19/48] getting-started: remove note about installing iptables on Fedora

The install of kubernetes pulls in docker, which has a direct dependency on iptables. So there is no need to tell people to install iptables manually.

Signed-off-by: Daniel P. Berrange
---
 docs/getting-started-guides/fedora/fedora_manual_config.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/getting-started-guides/fedora/fedora_manual_config.md b/docs/getting-started-guides/fedora/fedora_manual_config.md
index 7b196a0711..d404aa315c 100644
--- a/docs/getting-started-guides/fedora/fedora_manual_config.md
+++ b/docs/getting-started-guides/fedora/fedora_manual_config.md
@@ -39,10 +39,10 @@ fed-node = 192.168.121.65
 dnf -y install kubernetes
```

-* Install etcd and iptables
+* Install etcd

```shell
-dnf -y install etcd iptables
+dnf -y install etcd
```

 * Add master and node to /etc/hosts on all machines (not needed if hostnames already in DNS). Make sure that communication works between fed-master and fed-node by using a utility such as ping.

From f891170c92a1ccf1d84ebe2cd83e9a80bc712e7a Mon Sep 17 00:00:00 2001
From: "Daniel P. Berrange"
Date: Fri, 11 Nov 2016 15:07:16 +0000
Subject: [PATCH 20/48] getting-started: add instruction to install flannel

The Fedora multi-node docs need to tell users to install flannel, since the previous setup won't have brought it in.

Signed-off-by: Daniel P. Berrange
---
 .../fedora/flannel_multi_node_cluster.md | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/docs/getting-started-guides/fedora/flannel_multi_node_cluster.md b/docs/getting-started-guides/fedora/flannel_multi_node_cluster.md
index a7f25b0d14..a0a5c80cb4 100644
--- a/docs/getting-started-guides/fedora/flannel_multi_node_cluster.md
+++ b/docs/getting-started-guides/fedora/flannel_multi_node_cluster.md
@@ -49,6 +49,12 @@ etcdctl get /coreos.com/network/config

 **Perform following commands on all Kubernetes nodes**

+Install the flannel package
+
+```shell
+# dnf -y install flannel
+```
+
 Edit the flannel configuration file /etc/sysconfig/flanneld as follows:

From c029794dc87aabe3f47a51dae5db769531823d77 Mon Sep 17 00:00:00 2001
From: "Daniel P. Berrange"
Date: Mon, 21 Nov 2016 15:28:31 +0000
Subject: [PATCH 21/48] getting-started: note that Fedora can use VMs or bare metal

Although the Fedora install docs are under the "Bare Metal" section in the docs, the instructions provided work fine for Fedora installs in virtual machines.

Signed-off-by: Daniel P. Berrange
---
 docs/getting-started-guides/fedora/fedora_manual_config.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/getting-started-guides/fedora/fedora_manual_config.md b/docs/getting-started-guides/fedora/fedora_manual_config.md
index d404aa315c..d111c7c771 100644
--- a/docs/getting-started-guides/fedora/fedora_manual_config.md
+++ b/docs/getting-started-guides/fedora/fedora_manual_config.md
@@ -11,7 +11,7 @@ assignees:

 ## Prerequisites

-1. You need 2 or more machines with Fedora installed.
+1. You need 2 or more machines with Fedora installed. These can be either bare metal machines or virtual machines.

 ## Instructions

From 33a83b1221b71835f3fcf4fc73c9503306acd6e7 Mon Sep 17 00:00:00 2001
From: Alexandre González
Date: Sun, 13 Nov 2016 22:47:32 +0100
Subject: [PATCH 22/48] Use --decode for base64 command

`-d` is used for the Linux version of the command, but in the Mac/BSD version they use `-D`. Using `--decode`, we are sure that the flag is compatible with both.
---
 docs/user-guide/secrets/index.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/user-guide/secrets/index.md b/docs/user-guide/secrets/index.md
index c55868f1f1..13f5a1eec1 100644
--- a/docs/user-guide/secrets/index.md
+++ b/docs/user-guide/secrets/index.md
@@ -158,7 +158,7 @@ type: Opaque
 Decode the password field:

```shell
-$ echo "MWYyZDFlMmU2N2Rm" | base64 -d
+$ echo "MWYyZDFlMmU2N2Rm" | base64 --decode
 1f2d1e2e67df
```
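A round-trip sanity check, using the value from the secrets example; `--decode` is the spelling both GNU coreutils and BSD/macOS `base64` accept:

```shell
$ echo -n "1f2d1e2e67df" | base64
MWYyZDFlMmU2N2Rm
$ echo "MWYyZDFlMmU2N2Rm" | base64 --decode
1f2d1e2e67df
```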
From 54a08f7bdb77d8675c066fefac8479994a8c4446 Mon Sep 17 00:00:00 2001
From: Jeff Schroeder
Date: Thu, 8 Dec 2016 16:59:52 -0600
Subject: [PATCH 23/48] Linkify issue for audit support in kubernetes
---
 docs/admin/audit.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/admin/audit.md b/docs/admin/audit.md
index c3a6fda6da..e195d0909f 100644
--- a/docs/admin/audit.md
+++ b/docs/admin/audit.md
@@ -23,7 +23,7 @@ answer the following questions:

 - to where was it going?

 NOTE: Currently, Kubernetes provides only basic audit capabilities, there is still a lot
-of work going on to provide fully featured auditing capabilities (see https://github.com/kubernetes/features/issues/22).
+of work going on to provide fully featured auditing capabilities (see [this issue](https://github.com/kubernetes/features/issues/22)).

 Kubernetes audit is part of [kube-apiserver](/docs/admin/kube-apiserver) logging all requests coming to the server. Each audit log contains two entries:
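For context, a sketch of turning this basic audit log on via kube-apiserver flags; the flag names are those from the kube-apiserver reference, while the path and rotation values are illustrative:

```shell
# Write audit records to a file and rotate by age, count and size:
kube-apiserver \
  --audit-log-path=/var/log/kube-apiserver-audit.log \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=10 \
  --audit-log-maxsize=100
```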
(See "Inheriting DNS from the node" and "Known issues" below for more information) ``` -cat /etc/resolv.conf +kubectl exec busybox cat /etc/resolv.conf ``` -Verify that the search path and name server are set up like the following (note that seach path may vary for different cloud providers): +Verify that the search path and name server are set up like the following (note that search path may vary for different cloud providers): ``` search default.svc.cluster.local svc.cluster.local cluster.local google.internal c.gce_project_id.internal @@ -210,7 +210,7 @@ options ndots:5 Errors such as the following indicate a problem with the kube-dns add-on or associated Services: ``` -$ kubectl exec busybox -- nslookup kubernetes.default +$ kubectl exec -ti busybox -- nslookup kubernetes.default Server: 10.0.0.10 Address 1: 10.0.0.10 @@ -220,7 +220,7 @@ nslookup: can't resolve 'kubernetes.default' or ``` -$ kubectl exec busybox -- nslookup kubernetes.default +$ kubectl exec -ti busybox -- nslookup kubernetes.default Server: 10.0.0.10 Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local @@ -244,7 +244,7 @@ kube-dns-v19-ezo1y 3/3 Running 0 ... ``` -If you see that no pod is running or that the pod has failed/completed, the dns add-on may not be deployed by default in your current environment and you will have to deploy it manually. +If you see that no pod is running or that the pod has failed/completed, the DNS add-on may not be deployed by default in your current environment and you will have to deploy it manually. #### Check for Errors in the DNS pod @@ -258,7 +258,7 @@ kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system See if there is any suspicious log. W, E, F letter at the beginning represent Warning, Error and Failure. Please search for entries that have these as the logging level and use [kubernetes issues](https://github.com/kubernetes/kubernetes/issues) to report unexpected errors. -#### Is dns service up? +#### Is DNS service up? Verify that the DNS service is up by using the `kubectl get service` command. @@ -277,7 +277,7 @@ kube-dns 10.0.0.10 53/UDP,53/TCP 1h If you have created the service or in the case it should be created by default but it does not appear, see this [debugging services page](http://kubernetes.io/docs/user-guide/debugging-services/) for more information. -#### Are dns endpoints exposed? +#### Are DNS endpoints exposed? You can verify that dns endpoints are exposed by using the `kubectl get endpoints` command. @@ -348,7 +348,7 @@ some of those settings will be lost. As a partial workaround, the node can run `dnsmasq` which will provide more `nameserver` entries, but not more `search` entries. You can also use kubelet's `--resolv-conf` flag. -If you are using Alpine version 3.3 or earlier as your base image, dns may not +If you are using Alpine version 3.3 or earlier as your base image, DNS may not work properly owing to a known issue with Alpine. Check [here](https://github.com/kubernetes/kubernetes/issues/30215) for more information. 
From 8dad44f45c892bf4b4117ae6d8731d9bfcd97a6e Mon Sep 17 00:00:00 2001
From: Derek Carr
Date: Thu, 8 Dec 2016 14:34:33 -0500
Subject: [PATCH 27/48] quota by storage class
---
 docs/admin/resourcequota/index.md | 21 +++++++++++++++++++--
 1 file changed, 19 insertions(+), 2 deletions(-)

diff --git a/docs/admin/resourcequota/index.md b/docs/admin/resourcequota/index.md
index ff76942702..93da108b99 100644
--- a/docs/admin/resourcequota/index.md
+++ b/docs/admin/resourcequota/index.md
@@ -52,8 +52,7 @@ Resource Quota is enforced in a particular namespace when there is a

 ## Compute Resource Quota

-You can limit the total sum of [compute resources](/docs/user-guide/compute-resources) and [storage resources](/docs/user-guide/persistent-volumes)
-that can be requested in a given namespace.
+You can limit the total sum of [compute resources](/docs/user-guide/compute-resources) that can be requested in a given namespace.

 The following resource types are supported:

 | Resource Name | Description |
 | --------------------- | ----------------------------------------------------------- |
 | `cpu` | Across all pods in a non-terminal state, the sum of CPU requests cannot exceed this value. |
 | `memory` | Across all pods in a non-terminal state, the sum of memory requests cannot exceed this value. |
 | `requests.cpu` | Across all pods in a non-terminal state, the sum of CPU requests cannot exceed this value. |
 | `requests.memory` | Across all pods in a non-terminal state, the sum of memory requests cannot exceed this value. |
+
+## Storage Resource Quota
+
+You can limit the total sum of [storage resources](/docs/user-guide/persistent-volumes) that can be requested in a given namespace.
+
+In addition, you can limit consumption of storage resources based on the associated storage class.
+
+| Resource Name | Description |
+| --------------------- | ----------------------------------------------------------- |
 | `requests.storage` | Across all persistent volume claims, the sum of storage requests cannot exceed this value. |
+| `persistentvolumeclaims` | The total number of [persistent volume claims](/docs/user-guide/persistent-volumes/#persistentvolumeclaims) that can exist in the namespace. |
+| `<storage-class-name>.storageclass.storage.k8s.io/requests.storage` | Across all persistent volume claims associated with the storage-class-name, the sum of storage requests cannot exceed this value. |
+| `<storage-class-name>.storageclass.storage.k8s.io/persistentvolumeclaims` | Across all persistent volume claims associated with the storage-class-name, the total number of [persistent volume claims](/docs/user-guide/persistent-volumes/#persistentvolumeclaims) that can exist in the namespace. |
+
+For example, if an operator wants to quota storage with the `gold` storage class separate from the `bronze` storage class, the operator can
+define a quota as follows:
+
+* `gold.storageclass.storage.k8s.io/requests.storage: 500Gi`
+* `bronze.storageclass.storage.k8s.io/requests.storage: 100Gi`
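Expressed as an object, that example could be a sketch like the following; the quota name, namespace and numeric values are illustrative:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: storage-quota
  namespace: demo
spec:
  hard:
    # Overall storage and claim-count caps for the namespace:
    requests.storage: 600Gi
    persistentvolumeclaims: "10"
    # Per-storage-class caps, using the key format from the table above:
    gold.storageclass.storage.k8s.io/requests.storage: 500Gi
    bronze.storageclass.storage.k8s.io/requests.storage: 100Gi
```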
--- _includes/head-header.html | 10 +++++- _sass/_base.sass | 71 ++++++++++++++++++++++++++++++++++++++ _sass/_desktop.sass | 20 +++++++++-- images/search-icon.svg | 13 +++++++ js/script.js | 18 ++++++++++ 5 files changed, 128 insertions(+), 4 deletions(-) create mode 100644 images/search-icon.svg diff --git a/_includes/head-header.html b/_includes/head-header.html index 17a83fa31e..598f6e80fb 100644 --- a/_includes/head-header.html +++ b/_includes/head-header.html @@ -20,8 +20,16 @@
+ diff --git a/_sass/_base.sass b/_sass/_base.sass index 635012a39b..7ef5103ff8 100644 --- a/_sass/_base.sass +++ b/_sass/_base.sass @@ -234,6 +234,40 @@ header color: $blue text-decoration: none +// Global Nav - 12/9/2016 Update + +ul.global-nav + display: none + + li + display: inline-block + margin-right: 14px + + a + color: #fff + font-weight: bold + padding: 0 + position: relative + + &.active:after + position: absolute + width: 100% + height: 2px + content: '' + bottom: -4px + left: 0 + background: #fff + + +.flip-nav ul.global-nav li a, +.open-nav ul.global-nav li a, + color: #333 + +.flip-nav ul.global-nav li a.active:after, +.open-nav ul.global-nav li a.active:after, + + background: $blue + // FLIP NAV .flip-nav header @@ -301,6 +335,26 @@ header padding-left: 0 padding-right: 0 margin-bottom: 0 + position: relative + + &.bot-bar:after + display: block + margin-bottom: -20px + height: 8px + width: 100% + background-color: transparentize(white, 0.9) + content: '' + + &.no-sub + + h5 + display: none + + h1 + margin-bottom: 20px + +#home #hero:after + display: none // VENDOR STRIP #vendorStrip @@ -482,6 +536,19 @@ section margin: 0 auto height: 44px line-height: 44px + position: relative + + &:before + position: absolute + width: 15px + height: 15px + content: '' + right: 8px + top: 7px + background-image: url(/images/search-icon.svg) + background-repeat: no-repeat + background-size: 100% 100% + z-index: 1 #search width: 100% @@ -490,6 +557,10 @@ section line-height: 30px font-size: 16px vertical-align: top + background: #fff + border: none + border-radius: 4px + position: relative #encyclopedia diff --git a/_sass/_desktop.sass b/_sass/_desktop.sass index 2e70b23e8a..27fbc46ae1 100644 --- a/_sass/_desktop.sass +++ b/_sass/_desktop.sass @@ -3,6 +3,15 @@ $vendor-strip-height: 44px $video-section-height: 550px @media screen and (min-width: 1025px) + #hamburger + display: none + + ul.global-nav + display: inline-block + + #docs #vendorStrip #searchBox:before + top: 15px + #vendorStrip height: $vendor-strip-height line-height: $vendor-strip-height @@ -40,7 +49,7 @@ $video-section-height: 550px #searchBox float: right - width: 30% + width: 320px #search vertical-align: middle @@ -65,7 +74,7 @@ $video-section-height: 550px #encyclopedia - padding: 50px 50px 20px 20px + padding: 50px 50px 100px 100px clear: both #docsToc @@ -88,6 +97,11 @@ $video-section-height: 550px section, header, footer main max-width: $main-max-width + + header, #vendorStrip, #encyclopedia, #hero h1, #hero h5, #docs #hero h1, #docs #hero h5, + #community #hero h1, .gridPage #hero h1, #community #hero h5, .gridPage #hero h5 + padding-left: 100px + padding-right: 100px #home section, header, footer @@ -276,7 +290,7 @@ $video-section-height: 550px text-align: left h1 - padding: 20px + padding: 20px 100px #tryKubernetes width: auto diff --git a/images/search-icon.svg b/images/search-icon.svg new file mode 100644 index 0000000000..285f57caff --- /dev/null +++ b/images/search-icon.svg @@ -0,0 +1,13 @@ + + + + + + diff --git a/js/script.js b/js/script.js index 22eff0a1b4..b944d91175 100755 --- a/js/script.js +++ b/js/script.js @@ -503,3 +503,21 @@ var pushmenu = (function(){ show: show }; })(); + +$(function() { + + // Make global nav be active based on pathname + if ((location.pathname.split("/")[1]) !== ""){ + $('.global-nav li a[href^="/' + location.pathname.split("/")[1] + '"]').addClass('active'); + } + + // If vendor strip doesn't exist add className + if ( !$('#vendorStrip').length > 0 ) { + 
$('#hero').addClass('bot-bar'); + } + + // If is not homepage add class to hero section + if (!$('#home').length > 0 ) { + $('#hero').addClass('no-sub'); + } +}); \ No newline at end of file From af3d24c7ef9ca8fce02eecf63d3d74c356888ed8 Mon Sep 17 00:00:00 2001 From: craigbox Date: Sat, 10 Dec 2016 18:20:24 +0000 Subject: [PATCH 29/48] Change vertical alignment of code blocks Things looked odd due to the padding on code blocks; they sat 2px above the text surrounding them. This PR will fix that by aligning inline code blocks, and the text surrounding them, on their baseline. --- _sass/_base.sass | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/_sass/_base.sass b/_sass/_base.sass index 109124052f..c4db0c93ec 100644 --- a/_sass/_base.sass +++ b/_sass/_base.sass @@ -753,7 +753,7 @@ dd background-color: $light-grey color: $dark-grey font-family: $mono-font - vertical-align: bottom + vertical-align: baseline font-size: 14px font-weight: bold padding: 2px 4px From b43e72124ff656e54321d65ea18e3783c3800f82 Mon Sep 17 00:00:00 2001 From: Eric Baum Date: Tue, 13 Dec 2016 21:37:53 +0000 Subject: [PATCH 30/48] Updates logo --- images/nav_logo.svg | 111 ++++++++++++++++++++++++++++++++++++++++++- images/nav_logo2.svg | 109 +++++++++++++++++++++++++++++++++++++++++- 2 files changed, 218 insertions(+), 2 deletions(-) diff --git a/images/nav_logo.svg b/images/nav_logo.svg index 666997a143..982c04f4aa 100644 --- a/images/nav_logo.svg +++ b/images/nav_logo.svg @@ -1 +1,110 @@ - \ No newline at end of file + + + + +Kubernetes_Logo_Hrz_lockup_REV + + + + + + + + + + + + + + + + + + + + + + diff --git a/images/nav_logo2.svg b/images/nav_logo2.svg index 1c88bd436a..92b8d19ac4 100644 --- a/images/nav_logo2.svg +++ b/images/nav_logo2.svg @@ -1 +1,108 @@ - \ No newline at end of file + + + + +Kubernetes_Logo_Hrz_lockup_POS + + + + + + + + + + + + + + + + + + + + + From 3921711fd90c17ffb98ba0b3909764a4c9d2b623 Mon Sep 17 00:00:00 2001 From: Janet Kuo Date: Tue, 13 Dec 2016 14:07:07 -0800 Subject: [PATCH 31/48] Add left nav for apps API group --- _data/reference.yml | 7 +++++++ docs/api-reference/README.md | 1 + docs/reference.md | 5 ++++- 3 files changed, 12 insertions(+), 1 deletion(-) diff --git a/_data/reference.yml b/_data/reference.yml index ce0504eed8..6b3351c954 100644 --- a/_data/reference.yml +++ b/_data/reference.yml @@ -41,6 +41,13 @@ toc: - title: Batch API Definitions path: /docs/api-reference/batch/v1/definitions/ +- title: Apps API + section: + - title: Apps API Operations + path: /docs/api-reference/apps/v1beta1/operations/ + - title: Apps API Definitions + path: /docs/api-reference/apps/v1beta1/definitions/ + - title: Extensions API section: - title: Extensions API Operations diff --git a/docs/api-reference/README.md b/docs/api-reference/README.md index c0c1f3620d..a2fae5b001 100644 --- a/docs/api-reference/README.md +++ b/docs/api-reference/README.md @@ -8,6 +8,7 @@ Use the following reference docs to understand the kubernetes REST API for vario * extensions/v1beta1: [operations](/docs/api-reference/extensions/v1beta1/operations.html), [model definitions](/docs/api-reference/extensions/v1beta1/definitions.html) * batch/v1: [operations](/docs/api-reference/batch/v1/operations.html), [model definitions](/docs/api-reference/batch/v1/definitions.html) * autoscaling/v1: [operations](/docs/api-reference/autoscaling/v1/operations.html), [model definitions](/docs/api-reference/autoscaling/v1/definitions.html) +* apps/v1beta1: [operations](/docs/api-reference/apps/v1beta1/operations.html), 
[model definitions](/docs/api-reference/apps/v1beta1/definitions.html) diff --git a/docs/reference.md b/docs/reference.md index 88f35a74f4..dc1cd2f297 100644 --- a/docs/reference.md +++ b/docs/reference.md @@ -6,7 +6,10 @@ In the reference section, you can find reference documentation for Kubernetes AP ## API References * [Kubernetes API](/docs/api/) - The core API for Kubernetes. -* [Extensions API](/docs/api-reference/extensions/v1beta1/operations/) - Manages extensions resources such as Jobs, Ingress and HorizontalPodAutoscalers. +* [Autoscaling API](/docs/api-reference/autoscaling/v1/operations/) - Manages autoscaling resources such as HorizontalPodAutoscalers. +* [Batch API](/docs/api-reference/batch/v1/operations/) - Manages batch resources such as Jobs. +* [Apps API](/docs/api-reference/apps/v1beta1/operations/) - Manages apps resources such as StatefulSets. +* [Extensions API](/docs/api-reference/extensions/v1beta1/operations/) - Manages extensions resources such as Ingress, Deployments, and ReplicaSets. ## CLI References From 47a75ca01181fccaeeb1e4baefb8e39e88e964de Mon Sep 17 00:00:00 2001 From: Eric Baum Date: Wed, 14 Dec 2016 00:54:21 +0000 Subject: [PATCH 32/48] Minor header change Change "Try Kubernetes" link point to /docs/tutorials/kubernetes-basics/ instead of "Hello Node" Reduce font weight in links across the top. --- _includes/head-header.html | 2 +- _sass/_base.sass | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/_includes/head-header.html b/_includes/head-header.html index 598f6e80fb..bb8d1e7f77 100644 --- a/_includes/head-header.html +++ b/_includes/head-header.html @@ -30,7 +30,7 @@
  • Case Studies
  • - Try Kubernetes + Try Kubernetes diff --git a/_sass/_base.sass b/_sass/_base.sass index 7ef5103ff8..27d19a0fd3 100644 --- a/_sass/_base.sass +++ b/_sass/_base.sass @@ -245,7 +245,7 @@ ul.global-nav a color: #fff - font-weight: bold + font-weight: 400 padding: 0 position: relative From a1dededa56d75a1919c6af33377946ef03f48eda Mon Sep 17 00:00:00 2001 From: Jimmy Cuadra Date: Wed, 14 Dec 2016 15:52:22 -0800 Subject: [PATCH 33/48] Fix the formatting of bullet lists on the kubelet auth page. --- .../kubelet-authentication-authorization.md | 36 +++++++++++-------- 1 file changed, 21 insertions(+), 15 deletions(-) diff --git a/docs/admin/kubelet-authentication-authorization.md b/docs/admin/kubelet-authentication-authorization.md index b0617b8854..509792bf24 100644 --- a/docs/admin/kubelet-authentication-authorization.md +++ b/docs/admin/kubelet-authentication-authorization.md @@ -17,35 +17,40 @@ This document describes how to authenticate and authorize access to the kubelet' ## Kubelet authentication By default, requests to the kubelet's HTTPS endpoint that are not rejected by other configured -authentication methods are treated as anonymous requests, and given a username of `system:anonymous` +authentication methods are treated as anonymous requests, and given a username of `system:anonymous` and a group of `system:unauthenticated`. To disable anonymous access and send `401 Unauthorized` responses to unauthenticated requests: + * start the kubelet with the `--anonymous-auth=false` flag To enable X509 client certificate authentication to the kubelet's HTTPS endpoint: -* start the kubelet with the `--client-ca-file` flag, providing a CA bundle to verify client certificates with + +* start the kubelet with the `--client-ca-file` flag, providing a CA bundle to verify client certificates with * start the apiserver with `--kubelet-client-certificate` and `--kubelet-client-key` flags * see the [apiserver authentication documentation](/docs/admin/authentication/#x509-client-certs) for more details To enable API bearer tokens (including service account tokens) to be used to authenticate to the kubelet's HTTPS endpoint: + * ensure the `authentication.k8s.io/v1beta1` API group is enabled in the API server * start the kubelet with the `--authentication-token-webhook`, `--kubeconfig`, and `--require-kubeconfig` flags -* the kubelet calls the `TokenReview` API on the configured API server to determine user information from bearer tokens +* the kubelet calls the `TokenReview` API on the configured API server to determine user information from bearer tokens ## Kubelet authorization Any request that is successfully authenticated (including an anonymous request) is then authorized. The default authorization mode is `AlwaysAllow`, which allows all requests. 
There are many possible reasons to subdivide access to the kubelet API: + * anonymous auth is enabled, but anonymous users' ability to call the kubelet API should be limited * bearer token auth is enabled, but arbitrary API users' (like service accounts) ability to call the kubelet API should be limited * client certificate auth is enabled, but only some of the client certificates signed by the configured CA should be allowed to use the kubelet API To subdivide access to the kubelet API, delegate authorization to the API server: + * ensure the `authorization.k8s.io/v1beta1` API group is enabled in the API server * start the kubelet with the `--authorization-mode=Webhook`, `--kubeconfig`, and `--require-kubeconfig` flags -* the kubelet calls the `SubjectAccessReview` API on the configured API server to determine whether each request is authorized +* the kubelet calls the `SubjectAccessReview` API on the configured API server to determine whether each request is authorized The kubelet authorizes API requests using the same [request attributes](/docs/admin/authorization/#request-attributes) approach as the apiserver. @@ -63,19 +68,20 @@ The resource and subresource is determined from the incoming request's path: Kubelet API | resource | subresource -------------|----------|------------ -/stats/* | nodes | stats -/metrics/* | nodes | metrics -/logs/* | nodes | log -/spec/* | nodes | spec +/stats/\* | nodes | stats +/metrics/\* | nodes | metrics +/logs/\* | nodes | log +/spec/\* | nodes | spec *all others* | nodes | proxy -The namespace and API group attributes are always an empty string, and +The namespace and API group attributes are always an empty string, and the resource name is always the name of the kubelet's `Node` API object. -When running in this mode, ensure the user identified by the `--kubelet-client-certificate` and `--kubelet-client-key` +When running in this mode, ensure the user identified by the `--kubelet-client-certificate` and `--kubelet-client-key` flags passed to the apiserver is authorized for the following attributes: -* verb=*, resource=nodes, subresource=proxy -* verb=*, resource=nodes, subresource=stats -* verb=*, resource=nodes, subresource=log -* verb=*, resource=nodes, subresource=spec -* verb=*, resource=nodes, subresource=metrics + +* verb=\*, resource=nodes, subresource=proxy +* verb=\*, resource=nodes, subresource=stats +* verb=\*, resource=nodes, subresource=log +* verb=\*, resource=nodes, subresource=spec +* verb=\*, resource=nodes, subresource=metrics From 27d614a4f59980bd05c40a20533375534b8308e6 Mon Sep 17 00:00:00 2001 From: Gurucharan Shetty Date: Thu, 15 Dec 2016 23:17:20 -0800 Subject: [PATCH 34/48] networking.md: Add OVN as a networking plugin option. OVN is an opensource network virtualization solution developed by the Open vSwitch community. It lets one create logical switches, logical routers, stateful ACLs, load-balancers etc to build different virtual networking topologies. The project has a specific Kubernetes plugin and documentation at https://github.com/openvswitch/ovn-kubernetes. --- docs/admin/networking.md | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/docs/admin/networking.md b/docs/admin/networking.md index 903bac24f8..e5cf737088 100644 --- a/docs/admin/networking.md +++ b/docs/admin/networking.md @@ -181,6 +181,14 @@ The Nuage platform uses overlays to provide seamless policy-based networking bet complicated way to build an overlay network. This is endorsed by several of the "Big Shops" for networking. 
+### OVN (Open Virtual Networking)
+
+OVN is an open source network virtualization solution developed by the
+Open vSwitch community. It lets one create logical switches, logical routers,
+stateful ACLs, load-balancers, etc., to build different virtual networking
+topologies. The project has a specific Kubernetes plugin and documentation
+at [ovn-kubernetes](https://github.com/openvswitch/ovn-kubernetes).
+
 ### Project Calico

 [Project Calico](http://docs.projectcalico.org/) is an open source container networking provider and network policy engine.

From 9cefac36181b5af87097909954e0da600c703797 Mon Sep 17 00:00:00 2001
From: king-julien
Date: Mon, 19 Dec 2016 14:28:33 -0600
Subject: [PATCH 35/48] More Description: use minikube's built-in Docker daemon

---
 docs/getting-started-guides/minikube.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/getting-started-guides/minikube.md b/docs/getting-started-guides/minikube.md
index b7fefcf3c4..6967754c4f 100644
--- a/docs/getting-started-guides/minikube.md
+++ b/docs/getting-started-guides/minikube.md
@@ -116,7 +116,7 @@ plugins, if required.

 ### Reusing the Docker daemon

-When using a single VM of kubernetes its really handy to reuse the Docker daemon inside the VM; as this means you don't have to build on your host machine and push the image into a docker registry - you can just build inside the same docker daemon as minikube which speeds up local experiments.
+When using a single VM of kubernetes, it's really handy to reuse minikube's built-in Docker daemon; this means you don't have to build your images on the host machine and push them into a docker registry - you can just build inside the same docker daemon as minikube, which speeds up local experiments.

 Just make sure you tag your Docker image with something other than 'latest' and use that tag while you pull the image. Otherwise, if you do not specify version of your image, it will be assumed as `:latest`, with pull image policy of `Always` correspondingly, which may eventually result in `ErrImagePull` as you may not have any versions of your Docker image out there in the default docker registry (usually DockerHub) yet.

 To be able to work with the docker daemon on your mac/linux host use the [docker-env command](./docs/minikube_docker-env.md) in your shell:

From b413c7eecbef622d139000f7fe75c1fdf1e94bfe Mon Sep 17 00:00:00 2001
From: Bilgin Ibryam
Date: Tue, 20 Dec 2016 08:47:16 +0000
Subject: [PATCH 36/48] Small typos fixed

---
 docs/admin/kubelet.md | 2 +-
 docs/admin/rescheduler.md | 2 +-
 docs/getting-started-guides/kubeadm.md | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/docs/admin/kubelet.md b/docs/admin/kubelet.md
index a3004ea1aa..31c91d6ca2 100644
--- a/docs/admin/kubelet.md
+++ b/docs/admin/kubelet.md
@@ -14,7 +14,7 @@ various mechanisms (primarily through the apiserver) and ensures that the contai
 described in those PodSpecs are running and healthy. The kubelet doesn't manage
 containers which were not created by Kubernetes.

-Other than from an PodSpec from the apiserver, there are three ways that a container
+Other than from a PodSpec from the apiserver, there are three ways that a container
 manifest can be provided to the Kubelet.

 File: Path passed as a flag on the command line.
This file is rechecked every 20 diff --git a/docs/admin/rescheduler.md b/docs/admin/rescheduler.md index e1a2cca5de..fe710d4a3d 100644 --- a/docs/admin/rescheduler.md +++ b/docs/admin/rescheduler.md @@ -36,7 +36,7 @@ Each critical add-on has to tolerate it, the other pods shouldn't tolerate the taint. The tain is removed once the add-on is successfully scheduled. *Warning:* currently there is no guarantee which node is chosen and which pods are being killed -in order to schedule crical pod, so if rescheduler is enabled you pods might be occasionally +in order to schedule crical pods, so if rescheduler is enabled you pods might be occasionally killed for this purpose. ## Config diff --git a/docs/getting-started-guides/kubeadm.md b/docs/getting-started-guides/kubeadm.md index fa2ad56dd9..f969a4e9c2 100644 --- a/docs/getting-started-guides/kubeadm.md +++ b/docs/getting-started-guides/kubeadm.md @@ -19,7 +19,7 @@ The installation uses a tool called `kubeadm` which is part of Kubernetes. This process works with local VMs, physical servers and/or cloud servers. It is simple enough that you can easily integrate its use into your own automation (Terraform, Chef, Puppet, etc). -See the full [`kubeadm` reference](/docs/admin/kubeadm) for information on all `kubeadm` command-line flags and for advice on automating `kubeadm` itself. +See the full `kubeadm` [reference](/docs/admin/kubeadm) for information on all `kubeadm` command-line flags and for advice on automating `kubeadm` itself. **The `kubeadm` tool is currently in alpha but please try it out and give us [feedback](/docs/getting-started-guides/kubeadm/#feedback)! Be sure to read the [limitations](#limitations); in particular note that kubeadm doesn't have great support for From e380ff891dc63b504d2bbf0b94da92d2df8c0ced Mon Sep 17 00:00:00 2001 From: steveperry-53 Date: Tue, 20 Dec 2016 14:11:35 -0800 Subject: [PATCH 37/48] Create prerequisites appropriate for load balancer. --- .../expose-external-ip-address.md | 10 +++++++++- 1 file changed, 9 insertions(+), 1 deletion(-) diff --git a/docs/tutorials/stateless-application/expose-external-ip-address.md b/docs/tutorials/stateless-application/expose-external-ip-address.md index e740b6df05..2d2e28d594 100644 --- a/docs/tutorials/stateless-application/expose-external-ip-address.md +++ b/docs/tutorials/stateless-application/expose-external-ip-address.md @@ -12,7 +12,15 @@ external IP address. {% capture prerequisites %} -{% include task-tutorial-prereqs.md %} + * Install [kubectl](http://kubernetes.io/docs/user-guide/prereqs). + + * Use a cloud provider like Google Container Engine or Amazon Web Services to + create a Kubernetes cluster. This tutorial creates an + [external load balancer](/docs/user-guide/load-balancer/), + which requires a cloud provider. + + * Configure `kubectl` to communicate with your Kubernetes API server. For + instructions, see the documentation for your cloud provider. 
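As an informal sanity check of the prerequisites above (not part of the tutorial itself), you can confirm that `kubectl` is talking to the new cluster before continuing:

```shell
# Both commands should answer without authentication or connection
# errors if the prerequisites are met.
kubectl cluster-info
kubectl get nodes
```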
{% endcapture %} From 42d9427b756ad4ff7074e763b69627a390e3a5be Mon Sep 17 00:00:00 2001 From: xiangpengzhao Date: Tue, 20 Dec 2016 23:05:45 -0500 Subject: [PATCH 38/48] Fix #2005: Issue with /docs/user-guide/horizontal-pod-autoscaling/ --- docs/user-guide/horizontal-pod-autoscaling/index.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/user-guide/horizontal-pod-autoscaling/index.md b/docs/user-guide/horizontal-pod-autoscaling/index.md index 4fb1fa9be7..76087ceff0 100644 --- a/docs/user-guide/horizontal-pod-autoscaling/index.md +++ b/docs/user-guide/horizontal-pod-autoscaling/index.md @@ -90,7 +90,7 @@ The cluster has to be started with `ENABLE_CUSTOM_METRICS` environment variable ### Pod configuration The pods to be scaled must have cAdvisor-specific custom (aka application) metrics endpoint configured. The configuration format is described [here](https://github.com/google/cadvisor/blob/master/docs/application_metrics.md). Kubernetes expects the configuration to - be placed in `definition.json` mounted via a [config map](/docs/user-guide/horizontal-pod-autoscaling/configmap/) in `/etc/custom-metrics`. A sample config map may look like this: + be placed in `definition.json` mounted via a [config map](/docs/user-guide/configmap/) in `/etc/custom-metrics`. A sample config map may look like this: ```yaml apiVersion: v1 From b4e6c6e1c7bfb4e2073dd303aac3bb3b4b6bf635 Mon Sep 17 00:00:00 2001 From: Bilgin Ibryam Date: Wed, 21 Dec 2016 06:24:43 +0000 Subject: [PATCH 39/48] Fixed cricial to critical --- docs/admin/rescheduler.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/admin/rescheduler.md b/docs/admin/rescheduler.md index fe710d4a3d..c9a3bd074c 100644 --- a/docs/admin/rescheduler.md +++ b/docs/admin/rescheduler.md @@ -36,7 +36,7 @@ Each critical add-on has to tolerate it, the other pods shouldn't tolerate the taint. The tain is removed once the add-on is successfully scheduled. *Warning:* currently there is no guarantee which node is chosen and which pods are being killed -in order to schedule crical pods, so if rescheduler is enabled you pods might be occasionally +in order to schedule critical pods, so if rescheduler is enabled you pods might be occasionally killed for this purpose. ## Config From ccd202a07aaf1e02ad75bb8192cbf5de5ae4be45 Mon Sep 17 00:00:00 2001 From: Jie Luo Date: Wed, 21 Dec 2016 14:33:37 +0800 Subject: [PATCH 40/48] Duplicated 'the' Signed-off-by: Jie Luo --- docs/admin/authentication.md | 2 +- docs/admin/daemons.md | 2 +- docs/admin/garbage-collection.md | 2 +- docs/admin/kube-controller-manager.md | 6 +++--- docs/user-guide/compute-resources.md | 2 +- 5 files changed, 7 insertions(+), 7 deletions(-) diff --git a/docs/admin/authentication.md b/docs/admin/authentication.md index ab41a6fd39..3ada61a5fd 100644 --- a/docs/admin/authentication.md +++ b/docs/admin/authentication.md @@ -31,7 +31,7 @@ to talk to the Kubernetes API. API requests are tied to either a normal user or a service account, or are treated as anonymous requests. This means every process inside or outside the cluster, from a human user typing `kubectl` on a workstation, to `kubelets` on nodes, to members -of the control plane, must authenticate when making requests to the the API server, +of the control plane, must authenticate when making requests to the API server, or be treated as an anonymous user. 
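To make the service-account half of that statement concrete (a hedged illustration, not something this patch adds): every namespace carries a `default` service account, and its token is what pods in that namespace present to the API server.

```shell
# Inspect the default service account and its token secret; the
# exact secret name (default-token-xxxxx) varies per cluster.
kubectl get serviceaccount default -o yaml
kubectl get secrets
```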
## Authentication strategies diff --git a/docs/admin/daemons.md b/docs/admin/daemons.md index 7db42fe4e0..90637239b3 100644 --- a/docs/admin/daemons.md +++ b/docs/admin/daemons.md @@ -99,7 +99,7 @@ Some possible patterns for communicating with pods in a DaemonSet are: - **Push**: Pods in the Daemon Set are configured to send updates to another service, such as a stats database. They do not have clients. - **NodeIP and Known Port**: Pods in the Daemon Set use a `hostPort`, so that the pods are reachable - via the node IPs. Clients knows the the list of nodes ips somehow, and know the port by convention. + via the node IPs. Clients knows the list of nodes ips somehow, and know the port by convention. - **DNS**: Create a [headless service](/docs/user-guide/services/#headless-services) with the same pod selector, and then discover DaemonSets using the `endpoints` resource or retrieve multiple A records from DNS. diff --git a/docs/admin/garbage-collection.md b/docs/admin/garbage-collection.md index 0276596f6c..0492f9f277 100644 --- a/docs/admin/garbage-collection.md +++ b/docs/admin/garbage-collection.md @@ -17,7 +17,7 @@ kubernetes manages lifecycle of all images through imageManager, with the cooper of cadvisor. The policy for garbage collecting images takes two factors into consideration: -`HighThresholdPercent` and `LowThresholdPercent`. Disk usage above the the high threshold +`HighThresholdPercent` and `LowThresholdPercent`. Disk usage above the high threshold will trigger garbage collection. The garbage collection will delete least recently used images until the low threshold has been met. diff --git a/docs/admin/kube-controller-manager.md b/docs/admin/kube-controller-manager.md index f6f11c5f37..4b158fe4e4 100644 --- a/docs/admin/kube-controller-manager.md +++ b/docs/admin/kube-controller-manager.md @@ -45,7 +45,7 @@ kube-controller-manager --concurrent_rc_syncs int32 The number of replication controllers that are allowed to sync concurrently. Larger number = more responsive replica management, but more CPU (and network) load (default 5) --configure-cloud-routes Should CIDRs allocated by allocate-node-cidrs be configured on the cloud provider. (default true) --controller-start-interval duration Interval between starting controller managers. - --daemonset-lookup-cache-size int32 The the size of lookup cache for daemonsets. Larger number = more responsive daemonsets, but more MEM load. (default 1024) + --daemonset-lookup-cache-size int32 The size of lookup cache for daemonsets. Larger number = more responsive daemonsets, but more MEM load. (default 1024) --deployment-controller-sync-period duration Period for syncing the deployments. (default 30s) --enable-dynamic-provisioning Enable dynamic provisioning for environments that support it. (default true) --enable-garbage-collector Enables the generic garbage collector. MUST be synced with the corresponding flag of the kube-apiserver. (default true) @@ -89,8 +89,8 @@ StreamingProxyRedirects=true|false (ALPHA - default=false) --pv-recycler-pod-template-filepath-nfs string The file path to a pod definition used as a template for NFS persistent volume recycling --pv-recycler-timeout-increment-hostpath int32 the increment of time added per Gi to ActiveDeadlineSeconds for a HostPath scrubber pod. This is for development and testing only and will not work in a multi-node cluster. 
(default 30) --pvclaimbinder-sync-period duration The period for syncing persistent volumes and persistent volume claims (default 15s) - --replicaset-lookup-cache-size int32 The the size of lookup cache for replicatsets. Larger number = more responsive replica management, but more MEM load. (default 4096) - --replication-controller-lookup-cache-size int32 The the size of lookup cache for replication controllers. Larger number = more responsive replica management, but more MEM load. (default 4096) + --replicaset-lookup-cache-size int32 The size of lookup cache for replicatsets. Larger number = more responsive replica management, but more MEM load. (default 4096) + --replication-controller-lookup-cache-size int32 The size of lookup cache for replication controllers. Larger number = more responsive replica management, but more MEM load. (default 4096) --resource-quota-sync-period duration The period for syncing quota usage status in the system (default 5m0s) --root-ca-file string If set, this root certificate authority will be included in service account's token secret. This must be a valid PEM-encoded CA bundle. --route-reconciliation-period duration The period for reconciling routes created for Nodes by cloud provider. (default 10s) diff --git a/docs/user-guide/compute-resources.md b/docs/user-guide/compute-resources.md index 2aac91d0ba..2e524e9117 100644 --- a/docs/user-guide/compute-resources.md +++ b/docs/user-guide/compute-resources.md @@ -328,7 +328,7 @@ Host: k8s-master:8080 ``` To consume opaque resources in pods, include the name of the opaque -resource as a key in the the `spec.containers[].resources.requests` map. +resource as a key in the `spec.containers[].resources.requests` map. The pod will be scheduled only if all of the resource requests are satisfied (including cpu, memory and any opaque resources.) The pod will From 757f101117a7a2190a45c5c67a77d7ac3f863e9d Mon Sep 17 00:00:00 2001 From: Jie Luo Date: Wed, 21 Dec 2016 16:37:06 +0800 Subject: [PATCH 41/48] fix some typos Signed-off-by: Jie Luo --- docs/admin/accessing-the-api.md | 2 +- docs/admin/node.md | 2 +- docs/admin/out-of-resource.md | 2 +- docs/admin/resourcequota/index.md | 2 +- docs/admin/resourcequota/walkthrough.md | 2 +- docs/getting-started-guides/windows/index.md | 2 +- 6 files changed, 6 insertions(+), 6 deletions(-) diff --git a/docs/admin/accessing-the-api.md b/docs/admin/accessing-the-api.md index 0e491ccf0d..c8f239969f 100644 --- a/docs/admin/accessing-the-api.md +++ b/docs/admin/accessing-the-api.md @@ -148,7 +148,7 @@ By default the Kubernetes APIserver serves HTTP on 2 ports: - default IP is first non-localhost network interface, change with `--bind-address` flag. - request handled by authentication and authorization modules. - request handled by admission control module(s). - - authentication and authoriation modules run. + - authentication and authorisation modules run. When the cluster is created by `kube-up.sh`, on Google Compute Engine (GCE), and on several other cloud providers, the API server serves on port 443. On diff --git a/docs/admin/node.md b/docs/admin/node.md index 3c3e16178d..a18aaf5ca7 100644 --- a/docs/admin/node.md +++ b/docs/admin/node.md @@ -186,7 +186,7 @@ Modifications include setting labels on the node and marking it unschedulable. Labels on nodes can be used in conjunction with node selectors on pods to control scheduling, e.g. to constrain a pod to only be eligible to run on a subset of the nodes. 
-Marking a node as unscheduleable will prevent new pods from being scheduled to that +Marking a node as unschedulable will prevent new pods from being scheduled to that node, but will not affect any existing pods on the node. This is useful as a preparatory step before a node reboot, etc. For example, to mark a node unschedulable, run this command: diff --git a/docs/admin/out-of-resource.md b/docs/admin/out-of-resource.md index a663703d9c..0fa6f3942c 100644 --- a/docs/admin/out-of-resource.md +++ b/docs/admin/out-of-resource.md @@ -349,7 +349,7 @@ in favor of the simpler configuation supported around eviction. The `kubelet` currently polls `cAdvisor` to collect memory usage stats at a regular interval. If memory usage increases within that window rapidly, the `kubelet` may not observe `MemoryPressure` fast enough, and the `OOMKiller` will still be invoked. We intend to integrate with the `memcg` notification API in a future release to reduce this -latency, and instead have the kernel tell us when a threshold has been crossed immmediately. +latency, and instead have the kernel tell us when a threshold has been crossed immediately. If you are not trying to achieve extreme utilization, but a sensible measure of overcommit, a viable workaround for this issue is to set eviction thresholds at approximately 75% capacity. This increases the ability of this feature diff --git a/docs/admin/resourcequota/index.md b/docs/admin/resourcequota/index.md index c967975dec..88f5d55afd 100644 --- a/docs/admin/resourcequota/index.md +++ b/docs/admin/resourcequota/index.md @@ -125,7 +125,7 @@ The quota can be configured to quota either value. If the quota has a value specified for `requests.cpu` or `requests.memory`, then it requires that every incoming container makes an explicit request for those resources. If the quota has a value specified for `limits.cpu` or `limits.memory`, -then it requires that every incoming container specifies an explict limit for those resources. +then it requires that every incoming container specifies an explicit limit for those resources. ## Viewing and Setting Quotas diff --git a/docs/admin/resourcequota/walkthrough.md b/docs/admin/resourcequota/walkthrough.md index d5ef21ff6c..1120e7550d 100644 --- a/docs/admin/resourcequota/walkthrough.md +++ b/docs/admin/resourcequota/walkthrough.md @@ -232,7 +232,7 @@ services.loadbalancers 0 2 services.nodeports 0 0 ``` -As you can see, the pod that was created is consuming explict amounts of compute resources, and the usage is being +As you can see, the pod that was created is consuming explicit amounts of compute resources, and the usage is being tracked by Kubernetes properly. ## Step 5: Advanced quota scopes diff --git a/docs/getting-started-guides/windows/index.md b/docs/getting-started-guides/windows/index.md index 511d125dcd..35a8b28f7a 100644 --- a/docs/getting-started-guides/windows/index.md +++ b/docs/getting-started-guides/windows/index.md @@ -134,7 +134,7 @@ Run the following in a PowerShell window with administrative privileges. Be awar `.\proxy.exe --v=3 --proxy-mode=userspace --hostname-override= --master= --bind-address=` ## Scheduling Pods on Windows -Because your cluster has both Linux and Windows nodes, you must explictly set the nodeSelector constraint to be able to schedule Pods to Windows nodes. 
You must set nodeSelector with the label beta.kubernetes.io/os to the value windows; see the following example:
+Because your cluster has both Linux and Windows nodes, you must explicitly set the nodeSelector constraint to be able to schedule Pods to Windows nodes. You must set nodeSelector with the label beta.kubernetes.io/os to the value windows; see the following example:

 ```
 {

From 0573336261f5466cea8431d3d1ea68f884d5ddac Mon Sep 17 00:00:00 2001
From: Jitendra Bhurat
Date: Wed, 21 Dec 2016 13:32:35 -0500
Subject: [PATCH 42/48] Making the Docker version requirement clear and added
 command to create a new VMSwitch for kube-proxy to use

---
 docs/getting-started-guides/windows/index.md | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/docs/getting-started-guides/windows/index.md b/docs/getting-started-guides/windows/index.md
index 511d125dcd..86e90cdf32 100644
--- a/docs/getting-started-guides/windows/index.md
+++ b/docs/getting-started-guides/windows/index.md
@@ -12,7 +12,7 @@ In Kubernetes version 1.5, Windows Server Containers for Kubernetes is supported
 1. Kubernetes control plane running on existing Linux infrastructure (version 1.5 or later)
 2. Kubenet network plugin setup on the Linux nodes
 3. Windows Server 2016 (RTM version 10.0.14393 or later)
-4. Docker Version 1.12.2-cs2-ws-beta or later
+4. Docker Version 1.12.2-cs2-ws-beta or later for Windows Server nodes (Linux nodes and the Kubernetes control plane can run any Kubernetes-supported Docker version)

 ## Networking
 Network is achieved using L3 routing. Because third-party networking plugins (e.g. flannel, calico, etc) don't natively work on Windows Server, existing technology that is built into the Windows and Linux operating systems is relied on. In this L3 networking approach, a /16 subnet is chosen for the cluster nodes, and a /24 subnet is assigned to each worker node. All pods on a given worker node will be connected to the /24 subnet. This allows pods on the same node to communicate with each other. In order to enable networking between pods running on different nodes, routing features that are built into Windows Server 2016 and Linux are used.
@@ -40,6 +40,7 @@ To run Windows Server Containers on Kubernetes, you'll need to set up both your
 2. DNS support for Windows recently got merged to docker master and is currently not supported in a stable docker release. To use DNS build docker from master or download the binary from [Docker master](https://master.dockerproject.org/)
 3. Pull the `apprenda/pause` image from `https://hub.docker.com/r/apprenda/pause`
 4. RRAS (Routing) Windows feature enabled
+5. Install a VMSwitch of type `Internal` by running the `New-VMSwitch -Name KubeProxySwitch -SwitchType Internal` command in a *PowerShell* window. This will create a new network interface named `vEthernet (KubeProxySwitch)`, which kube-proxy will use to add Service IPs.

 **Linux Host Setup**

@@ -127,8 +128,8 @@ To start kube-proxy on your Windows node:

 Run the following in a PowerShell window with administrative privileges. Be aware that if the node reboots or the process exits, you will have to rerun the commands below to restart the kube-proxy.

-1. Set environment variable *INTERFACE_TO_ADD_SERVICE_IP* value to a node only network interface. The interface created when docker is installed should work
-`$env:INTERFACE_TO_ADD_SERVICE_IP = "vEthernet (HNS Internal NIC)"`
+1.
Set environment variable *INTERFACE_TO_ADD_SERVICE_IP* value to `vEthernet (KubeProxySwitch)` which we created in **_Windows Host Setup_** above +`$env:INTERFACE_TO_ADD_SERVICE_IP = "vEthernet (KubeProxySwitch)"` 2. Run *kube-proxy* executable using the below command `.\proxy.exe --v=3 --proxy-mode=userspace --hostname-override= --master= --bind-address=` From 9f9e44d1741666bc65f1faaafff34a797f1787b8 Mon Sep 17 00:00:00 2001 From: Anthony Yeh Date: Wed, 21 Dec 2016 11:43:09 -0800 Subject: [PATCH 43/48] Remove accidentally nested {% raw %} tags. These tags cannot be nested, causing a Liquid syntax error. The nesting was introduced accidentally by concurrent PRs. --- docs/user-guide/kubectl/kubectl_get.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/user-guide/kubectl/kubectl_get.md b/docs/user-guide/kubectl/kubectl_get.md index 439f2b1ed0..7e973d7bc9 100644 --- a/docs/user-guide/kubectl/kubectl_get.md +++ b/docs/user-guide/kubectl/kubectl_get.md @@ -69,7 +69,7 @@ kubectl get [(-o|--output=)json|yaml|wide|custom-columns=...|custom-columns-file kubectl get -f pod.yaml -o json # Return only the phase value of the specified pod. - kubectl get -o template pod/web-pod-13je7 --template={% raw %}{{.status.phase}}{% endraw %} + kubectl get -o template pod/web-pod-13je7 --template={{.status.phase}} # List all replication controllers and services together in ps output format. kubectl get rc,services From c3b282f2b6a741e8d208c8a122a817d6d98f254c Mon Sep 17 00:00:00 2001 From: Steve Gordon Date: Wed, 7 Dec 2016 15:55:31 -0500 Subject: [PATCH 44/48] Provide valid --cloud-providers Provide valid --cloud-providers, vagrant and openshift were listed but is not actually a valid cloud provider in this context in the current code base while a number of other valid options were omitted including azure, cloudstack, openstack, photon and vsphere. --- docs/getting-started-guides/scratch.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/getting-started-guides/scratch.md b/docs/getting-started-guides/scratch.md index ebb765fec6..dd554c5715 100644 --- a/docs/getting-started-guides/scratch.md +++ b/docs/getting-started-guides/scratch.md @@ -646,7 +646,7 @@ This pod mounts several node file system directories using the `hostPath` volum Apiserver supports several cloud providers. -- options for `--cloud-provider` flag are `aws`, `gce`, `mesos`, `openshift`, `ovirt`, `rackspace`, `vagrant`, or unset. +- options for `--cloud-provider` flag are `aws`, `azure`, `cloudstack`, `fake`, `gce`, `mesos`, `openstack`, `ovirt`, `photon`, `rackspace`, `vsphere`, or unset. - unset used for e.g. bare metal setups. - support for new IaaS is added by contributing code [here](https://releases.k8s.io/{{page.githubbranch}}/pkg/cloudprovider/providers) From 517e3c30f9746c9602c91b9a90190fa7567bcdfa Mon Sep 17 00:00:00 2001 From: mbohlool Date: Mon, 12 Dec 2016 13:39:17 -0800 Subject: [PATCH 45/48] Mention OpenAPI in API docs --- docs/api.md | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-) diff --git a/docs/api.md b/docs/api.md index 7964f604d0..cfc3c32125 100644 --- a/docs/api.md +++ b/docs/api.md @@ -24,11 +24,13 @@ In our experience, any system that is successful needs to grow and change as new What constitutes a compatible change and how to change the API are detailed by the [API change document](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/docs/devel/api_changes.md). 
-## API Swagger definitions
+## OpenAPI and Swagger definitions

-Complete API details are documented using [Swagger v1.2](http://swagger.io/). The Kubernetes apiserver (aka "master") exposes an API that can be used to retrieve the Swagger Kubernetes API spec located at `/swaggerapi`. You can also enable a UI to browse the API documentation at `/swagger-ui` by passing the `--enable-swagger-ui=true` flag to apiserver.
+Complete API details are documented using [Swagger v1.2](http://swagger.io/) and [OpenAPI](https://www.openapis.org/). The Kubernetes apiserver (aka "master") exposes an API that can be used to retrieve the Swagger v1.2 Kubernetes API spec located at `/swaggerapi`. You can also enable a UI to browse the API documentation at `/swagger-ui` by passing the `--enable-swagger-ui=true` flag to apiserver.

-We also host a version of the [latest API documentation](http://kubernetes.io/docs/api-reference/README/). This is updated with the latest release, so if you are using a different version of Kubernetes you will want to use the spec from your apiserver.
+We also host a version of the [latest v1.2 API documentation UI](http://kubernetes.io/kubernetes/third_party/swagger-ui/). This is updated with the latest release, so if you are using a different version of Kubernetes you will want to use the spec from your apiserver.
+
+Starting with Kubernetes 1.4, the OpenAPI spec is also available at `/swagger.json`. While we are transitioning from Swagger v1.2 to OpenAPI (aka Swagger v2.0), some of the tools such as kubectl and swagger-ui are still using the v1.2 spec. The OpenAPI spec is in Beta as of Kubernetes 1.5.

 Kubernetes implements an alternative Protobuf based serialization format for the API that is primarily intended for intra-cluster communication, documented in the [design proposal](https://github.com/kubernetes/kubernetes/blob/{{ page.githubbranch }}/docs/proposals/protobuf.md) and the IDL files for each schema are located in the Go packages that define the API objects.

From 58822257f0bef0838a7e7ba90dd93b5de40c05d9 Mon Sep 17 00:00:00 2001
From: devin-donnelly
Date: Wed, 21 Dec 2016 16:15:24 -0800
Subject: [PATCH 46/48] Update quick-start.md

---
 docs/user-guide/quick-start.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/user-guide/quick-start.md b/docs/user-guide/quick-start.md
index e64f9d1d19..7fc3c3faa1 100644
--- a/docs/user-guide/quick-start.md
+++ b/docs/user-guide/quick-start.md
@@ -28,7 +28,7 @@ To expose your service to the public internet, run:
 $ kubectl expose deployment my-nginx --target-port=80 --type=LoadBalancer
 service "my-nginx" exposed
 ```
-Note: The type, LoadBalancer, is highly dependent upon the underlying platform that Kubernetes is running on. If your cloudprovider doesn't have a loadbalancer implementation (e.g. OpenStack) for Kubernetes, you can simply use the allocated [NodePort](http://kubernetes.io/docs/user-guide/services/#type-nodeport) as a rudimentary form of loadblancing across your endpoints.
+Note: The type, LoadBalancer, is highly dependent upon the underlying platform that Kubernetes is running on. If your cloud provider doesn't have a load balancer implementation (e.g. OpenStack) for Kubernetes, you can simply use the allocated [NodePort](http://kubernetes.io/docs/user-guide/services/#type-nodeport) as a rudimentary form of load balancing across your endpoints.
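For platforms where `LoadBalancer` is not implemented, a sketch of the NodePort fallback the note describes (the `--name` value below is invented for this example):

```shell
# Expose the same deployment on a port of every node instead.
kubectl expose deployment my-nginx --name=my-nginx-nodeport --target-port=80 --type=NodePort
# The allocated node port appears in the service listing; reach it
# at any node's IP address on that port.
kubectl get service my-nginx-nodeport
```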
You can see that they are running by:

From 1d3f64a5bf4bb2e248d898734c57bc1cae9e262f Mon Sep 17 00:00:00 2001
From: devin-donnelly
Date: Wed, 21 Dec 2016 16:27:04 -0800
Subject: [PATCH 47/48] Update index.md

---
 docs/user-guide/configmap/index.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/user-guide/configmap/index.md b/docs/user-guide/configmap/index.md
index e6d8686f00..83d1107040 100644
--- a/docs/user-guide/configmap/index.md
+++ b/docs/user-guide/configmap/index.md
@@ -19,9 +19,9 @@ or used to store configuration data for system components such as controllers.
 to [Secrets](/docs/user-guide/secrets/), but designed to more conveniently
 support working with strings that do not contain sensitive information.

-Note: ConfigMaps are not intended to act as a replacement for a properties file. ConfigMaps are more intended to act a reference to multipe propertie files. You can think of them as way to represent something similar to the /etc directory and all it's files on a Linux computer. This model for ConfigMaps becomes especially apparent when looking at creating Volumes from ConfigMaps. Each data item in the ConfigMap becomes a new file.
+Note: ConfigMaps are not intended to act as a replacement for a properties file. ConfigMaps are intended to act as a reference to multiple properties files. You can think of them as a way to represent something similar to the /etc directory, and the files within, on a Linux computer. One example of this model is creating Kubernetes Volumes from ConfigMaps, where each data item in the ConfigMap becomes a new file.

-Let's look at a made-up example:
+Consider the following example:

 ```yaml
 kind: ConfigMap

From 383e40f978334792e7ca30399845c9bf38ea813c Mon Sep 17 00:00:00 2001
From: SRaddict
Date: Thu, 22 Dec 2016 11:24:05 +0800
Subject: [PATCH 48/48] fix a series punctuation errors

---
 LICENSE | 2 +-
 case-studies/index.html | 4 ++--
 case-studies/pearson.html | 10 +++++-----
 case-studies/wikimedia.html | 8 ++++----
 docs/admin/admission-controllers.md | 4 ++--
 docs/admin/rescheduler.md | 2 +-
 docs/getting-started-guides/windows/index.md | 6 +++---
 docs/tutorials/kubernetes-basics/explore-intro.html | 2 +-
 docs/user-guide/replicasets.md | 2 +-
 9 files changed, 20 insertions(+), 20 deletions(-)

diff --git a/LICENSE b/LICENSE
index 06c608dcf4..b6988e7edc 100644
--- a/LICENSE
+++ b/LICENSE
@@ -378,7 +378,7 @@ Section 8 -- Interpretation.

 Creative Commons is not a party to its public
 licenses. Notwithstanding, Creative Commons may elect to apply one of
 its public licenses to material it publishes and in those instances
-will be considered the “Licensor.” The text of the Creative Commons
+will be considered the "Licensor." The text of the Creative Commons
 public licenses is dedicated to the public domain under the CC0 Public
 Domain Dedication. Except for the limited purpose of indicating that
 material is shared under a Creative Commons public license or as

diff --git a/case-studies/index.html b/case-studies/index.html
index ce14542424..6d288a8bb2 100644
--- a/case-studies/index.html
+++ b/case-studies/index.html
@@ -17,13 +17,13 @@ title: Case Studies
    Pearson -

    “We chose Kubernetes because of its flexibility, ease of management and the way it improves our engineers’ productivity.”

    +

    "We chose Kubernetes because of its flexibility, ease of management and the way it improves our engineers’ productivity."

    Read about Pearson
    Wikimedia -

    “With Kubernetes, we’re simplifying our environment and making it easier for developers to build the tools that make wikis run better.”

    +

    "With Kubernetes, we’re simplifying our environment and making it easier for developers to build the tools that make wikis run better."

    Read about Wikimedia
    diff --git a/case-studies/pearson.html b/case-studies/pearson.html index bf871789b9..5eecc6f349 100644 --- a/case-studies/pearson.html +++ b/case-studies/pearson.html @@ -19,7 +19,7 @@ title: Pearson Case Study
    Pearson

    - “To transform our infrastructure, we had to think beyond simply enabling automated provisioning, we realized we had to build a platform that would allow Pearson developers to build manage and deploy applications in a completely different way. We chose Kubernetes because of its flexibility, ease of management and the way it would improve our engineers’ productivity.”

    + "To transform our infrastructure, we had to think beyond simply enabling automated provisioning, we realized we had to build a platform that would allow Pearson developers to build manage and deploy applications in a completely different way. We chose Kubernetes because of its flexibility, ease of management and the way it would improve our engineers’ productivity."

    — Chris Jackson, Director for Cloud Product Engineering, Pearson

    @@ -63,9 +63,9 @@ title: Pearson Case Study

    Kubernetes powers a comprehensive developer experience

    -

    Pearson wanted to use as much open source technology as possible for the platform given that it provides both technical and commercial benefits over the duration of the project. Jackson says, “Building an infrastructure platform based on open source technology in Pearson was a no-brainer, the sharing of technical challenges and advanced use cases in a community of people with talent far beyond what we could hire independently allows us to innovate at a level we could not reach on our own. Our engineers enjoy returning code to the community and participating in talks, blogs and meetings, it’s a great way for us to allow our team to express themselves and share the pride they have in their work.”

    -

    It also wanted to use a container-focused platform. Pearson has 400 development groups and diverse brands with varying business and technical needs. With containers, each brand could experiment with building new types of content using their preferred technologies, and then deliver it using containers. Pearson chose Kubernetes because it believes that is the best technology for managing containers, has the widest community support and offers the most flexible and powerful tools.“

    -

    Kubernetes is at the core of the platform we’ve built for developers. After we get our big spike in back-to-school in traffic, much of Pearson’s traffic will interact with Kubernetes. It is proving to be as effective as we had hoped,” Jackson says.

    +

    Pearson wanted to use as much open source technology as possible for the platform given that it provides both technical and commercial benefits over the duration of the project. Jackson says, "Building an infrastructure platform based on open source technology in Pearson was a no-brainer, the sharing of technical challenges and advanced use cases in a community of people with talent far beyond what we could hire independently allows us to innovate at a level we could not reach on our own. Our engineers enjoy returning code to the community and participating in talks, blogs and meetings, it’s a great way for us to allow our team to express themselves and share the pride they have in their work."

    +

    It also wanted to use a container-focused platform. Pearson has 400 development groups and diverse brands with varying business and technical needs. With containers, each brand could experiment with building new types of content using their preferred technologies, and then deliver it using containers. Pearson chose Kubernetes because it believes that is the best technology for managing containers, has the widest community support and offers the most flexible and powerful tools."

    +

    Kubernetes is at the core of the platform we’ve built for developers. After we get our big spike in back-to-school in traffic, much of Pearson’s traffic will interact with Kubernetes. It is proving to be as effective as we had hoped," Jackson says.

    @@ -76,7 +76,7 @@ title: Pearson Case Study

    Encouraging experimentation, saving engineers time

    With the new platform, Pearson will increase stability and performance, and to bring products to market more quickly. The company says its engineers will also get a productivity boost because they won’t spend time managing infrastructure. Jackson estimates 15 to 20 percent in productivity savings.

    Beyond that, Pearson says the platform will encourage innovation because of the ease with which new applications can be developed, and because applications will be deployed far more quickly than in the past. It expects that will help the company meet its goal of reaching 200 million learners within the next 10 years.

    -

    “We’re already seeing tremendous benefits with Kubernetes — improved engineering productivity, faster delivery of applications and a simplified infrastructure. But this is just the beginning. Kubernetes will help transform the way that educational content is delivered online,” says Jackson.

    +

    "We’re already seeing tremendous benefits with Kubernetes — improved engineering productivity, faster delivery of applications and a simplified infrastructure. But this is just the beginning. Kubernetes will help transform the way that educational content is delivered online," says Jackson.

    diff --git a/case-studies/wikimedia.html b/case-studies/wikimedia.html index 00eb47e3e0..0dc910fbe4 100644 --- a/case-studies/wikimedia.html +++ b/case-studies/wikimedia.html @@ -20,7 +20,7 @@ title: Wikimedia Case Study
    Wikimedia

    - “Wikimedia Tool Labs is vital for making sure wikis all around the world work as well as they possibly can. Because it’s grown organically for almost 10 years, it has become an extremely challenging environment and difficult to maintain. It’s like a big ball of mud — you really can’t see through it. With Kubernetes, we’re simplifying the environment and making it easier for developers to build the tools that make wikis run better.” + "Wikimedia Tool Labs is vital for making sure wikis all around the world work as well as they possibly can. Because it’s grown organically for almost 10 years, it has become an extremely challenging environment and difficult to maintain. It’s like a big ball of mud — you really can’t see through it. With Kubernetes, we’re simplifying the environment and making it easier for developers to build the tools that make wikis run better."

    — Yuvi Panda, operations engineer at Wikimedia Foundation and Wikimedia Tool Labs

    @@ -67,13 +67,13 @@ title: Wikimedia Case Study

    Using Kubernetes to provide tools for maintaining wikis

    - Wikimedia Tool Labs is run by a staff of four-and-a-half paid employees and two volunteers. The infrastructure didn't make it easy or intuitive for developers to build bots and other tools to make wikis work more easily. Yuvi says, “It’s incredibly chaotic. We have lots of Perl and Bash duct tape on top of it. Everything is super fragile.” + Wikimedia Tool Labs is run by a staff of four-and-a-half paid employees and two volunteers. The infrastructure didn't make it easy or intuitive for developers to build bots and other tools to make wikis work more easily. Yuvi says, "It’s incredibly chaotic. We have lots of Perl and Bash duct tape on top of it. Everything is super fragile."

    To solve the problem, Wikimedia Tool Labs migrated parts of its infrastructure to Kubernetes, in preparation for eventually moving its entire system. Yuvi said Kubernetes greatly simplifies maintenance. The goal is to allow developers creating bots and other tools to use whatever development methods they want, but make it easier for the Wikimedia Tool Labs to maintain the required infrastructure for hosting and sharing them.

-“With Kubernetes, I’ve been able to remove a lot of our custom-made code, which makes everything easier to maintain. Our users’ code also runs in a more stable way than previously,” says Yuvi.
+"With Kubernetes, I’ve been able to remove a lot of our custom-made code, which makes everything easier to maintain. Our users’ code also runs in a more stable way than previously," says Yuvi.

@@ -90,7 +90,7 @@ title: Wikimedia Case Study
In the future, with a more complete migration to Kubernetes, Wikimedia Tool Labs expects to make it even easier to host and maintain the bots and tools that help run wikis across the world. The tool labs already host approximately 1,300 tools and bots from 800 volunteers, with many more being submitted every day. Twenty percent of the tool labs’ web tools that account for more than 60 percent of web traffic now run on Kubernetes. The tool labs has a 25-node cluster that keeps up with each new Kubernetes release. Many existing web tools are migrating to Kubernetes.

-“Our goal is to make sure that people all over the world can share knowledge as easily as possible. Kubernetes helps with that, by making it easier for wikis everywhere to have the tools they need to thrive,” says Yuvi.
+"Our goal is to make sure that people all over the world can share knowledge as easily as possible. Kubernetes helps with that, by making it easier for wikis everywhere to have the tools they need to thrive," says Yuvi.

diff --git a/docs/admin/admission-controllers.md b/docs/admin/admission-controllers.md
index 475f2e4be9..de544e3d8b 100644
--- a/docs/admin/admission-controllers.md
+++ b/docs/admin/admission-controllers.md
@@ -126,7 +126,7 @@ For additional HTTP configuration, refer to the [kubeconfig](/docs/user-guide/ku

When faced with an admission decision, the API Server POSTs a JSON serialized api.imagepolicy.v1alpha1.ImageReview object describing the action. This object contains fields describing the containers being admitted, as well as any pod annotations that match `*.image-policy.k8s.io/*`.

-Note that webhook API objects are subject to the same versioning compatibility rules as other Kubernetes API objects. Implementers should be aware of looser compatibility promises for alpha objects and check the “apiVersion” field of the request to ensure correct deserialization. Additionally, the API Server must enable the imagepolicy.k8s.io/v1alpha1 API extensions group (`--runtime-config=imagepolicy.k8s.io/v1alpha1=true`).
+Note that webhook API objects are subject to the same versioning compatibility rules as other Kubernetes API objects. Implementers should be aware of looser compatibility promises for alpha objects and check the "apiVersion" field of the request to ensure correct deserialization. Additionally, the API Server must enable the imagepolicy.k8s.io/v1alpha1 API extensions group (`--runtime-config=imagepolicy.k8s.io/v1alpha1=true`).

An example request body:

@@ -151,7 +151,7 @@ An example request body:
}
```

-The remote service is expected to fill the ImageReviewStatus field of the request and respond to either allow or disallow access. The response body’s “spec” field is ignored and may be omitted. A permissive response would return:
+The remote service is expected to fill the ImageReviewStatus field of the request and respond to either allow or disallow access. The response body’s "spec" field is ignored and may be omitted. A permissive response would return:

```
{

diff --git a/docs/admin/rescheduler.md b/docs/admin/rescheduler.md
index c9a3bd074c..651fdf15b5 100644
--- a/docs/admin/rescheduler.md
+++ b/docs/admin/rescheduler.md
@@ -30,7 +30,7 @@ given the pods that are already running in the cluster
the rescheduler tries to free up space for the add-on by evicting some pods; then the scheduler will schedule the add-on pod.
To avoid a situation when another pod is scheduled into the space prepared for the critical add-on,
-the chosen node gets a temporary taint “CriticalAddonsOnly” before the eviction(s)
+the chosen node gets a temporary taint "CriticalAddonsOnly" before the eviction(s)
(see [more details](https://github.com/kubernetes/kubernetes/blob/master/docs/design/taint-toleration-dedicated.md)).
Each critical add-on has to tolerate it, the other pods shouldn't tolerate the taint.
The taint is removed once the add-on is successfully scheduled.
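Since the rescheduler hunk above requires every critical add-on to tolerate the temporary "CriticalAddonsOnly" taint, a minimal sketch of such a toleration may help, assuming the 1.5-era alpha annotation syntax; the pod name and image below are illustrative, not taken from the docs being patched.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: critical-addon-example        # hypothetical name
  namespace: kube-system
  annotations:
    # Alpha-era toleration annotation: lets this pod schedule onto a node
    # carrying the temporary "CriticalAddonsOnly" taint set by the rescheduler.
    scheduler.alpha.kubernetes.io/tolerations: '[{"key": "CriticalAddonsOnly", "operator": "Exists"}]'
spec:
  containers:
  - name: addon
    image: gcr.io/google_containers/pause:2.0   # placeholder workload
```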
diff --git a/docs/getting-started-guides/windows/index.md b/docs/getting-started-guides/windows/index.md
index 3096bed7eb..b5926744ae 100644
--- a/docs/getting-started-guides/windows/index.md
+++ b/docs/getting-started-guides/windows/index.md
@@ -18,15 +18,15 @@ In Kubernetes version 1.5, Windows Server Containers for Kubernetes is supported
Network is achieved using L3 routing. Because third-party networking plugins (e.g. flannel, calico, etc) don’t natively work on Windows Server, existing technology that is built into the Windows and Linux operating systems is relied on.
In this L3 networking approach, a /16 subnet is chosen for the cluster nodes, and a /24 subnet is assigned to each worker node. All pods on a given worker node will be connected to the /24 subnet. This allows pods on the same node to communicate with each other. In order to enable networking between pods running on different nodes, routing features that are built into Windows Server 2016 and Linux are used.

### Linux
-The above networking approach is already supported on Linux using a bridge interface, which essentially creates a private network local to the node. Similar to the Windows side, routes to all other pod CIDRs must be created in order to send packets via the “public” NIC.
+The above networking approach is already supported on Linux using a bridge interface, which essentially creates a private network local to the node. Similar to the Windows side, routes to all other pod CIDRs must be created in order to send packets via the "public" NIC.

### Windows
Each Windows Server node should have the following configuration:

1. Two NICs (virtual networking adapters) are required on each Windows Server node - The two Windows container networking modes of interest (transparent and L2 bridge) use an external Hyper-V virtual switch. This means that one of the NICs is entirely allocated to the bridge, creating the need for the second NIC.
2. Transparent container network created - This is a manual configuration step and is shown in **_Route Setup_** section below
-3. RRAS (Routing) Windows feature enabled - Allows routing between NICs on the box, and also “captures” packets that have the destination IP of a POD running on the node. To enable, open “Server Manager”. Click on “Roles”, “Add Roles”. Click “Next”. Select “Network Policy and Access Services”. Click on “Routing and Remote Access Service” and the underlying checkboxes
-4. Routes defined pointing to the other pod CIDRs via the “public” NIC - These routes are added to the built-in routing table as shown in **_Route Setup_** section below
+3. RRAS (Routing) Windows feature enabled - Allows routing between NICs on the box, and also "captures" packets that have the destination IP of a POD running on the node. To enable, open "Server Manager". Click on "Roles", "Add Roles". Click "Next". Select "Network Policy and Access Services". Click on "Routing and Remote Access Service" and the underlying checkboxes
+4. Routes defined pointing to the other pod CIDRs via the "public" NIC - These routes are added to the built-in routing table as shown in **_Route Setup_** section below

The following diagram illustrates the Windows Server networking setup for Kubernetes Setup
![Windows Setup](windows-setup.png)
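To make the /16-cluster, /24-per-node split in the hunk above concrete, here is a hedged sketch of how a per-node pod subnet can be recorded in the `podCIDR` field of a Node object; the node name and addresses are examples only, and the manual route setup described in this guide does not depend on setting them this way.

```yaml
apiVersion: v1
kind: Node
metadata:
  name: windows-node-1          # example node name
spec:
  # One /24 slice of the cluster's /16 pod network is owned by this node;
  # pods scheduled here draw their IPs from this range, and every other
  # node needs a route for this /24 pointing at this node.
  podCIDR: 192.168.1.0/24       # example addresses
```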
diff --git a/docs/tutorials/kubernetes-basics/explore-intro.html b/docs/tutorials/kubernetes-basics/explore-intro.html
index edc813d3d4..56bde41cfd 100644
--- a/docs/tutorials/kubernetes-basics/explore-intro.html
+++ b/docs/tutorials/kubernetes-basics/explore-intro.html
@@ -34,7 +34,7 @@ title: Viewing Pods and Nodes
  • Networking, as a unique cluster IP address
  • Information about how to run each container, such as the container image version or specific ports to use
-A Pod models an application-specific “logical host” and can contain different application containers which are relatively tightly coupled. For example, a Pod might include both the container with your Node.js app as well as a different container that feeds the data to be published by the Node.js webserver. The containers in a Pod share an IP Address and port space, are always co-located and co-scheduled, and run in a shared context on the same Node.
+A Pod models an application-specific "logical host" and can contain different application containers which are relatively tightly coupled. For example, a Pod might include both the container with your Node.js app as well as a different container that feeds the data to be published by the Node.js webserver. The containers in a Pod share an IP Address and port space, are always co-located and co-scheduled, and run in a shared context on the same Node.

    Pods are the atomic unit on the Kubernetes platform. When we create a Deployment on Kubernetes, that Deployment creates Pods with containers inside them (as opposed to creating containers directly). Each Pod is tied to the Node where it is scheduled, and remains there until termination (according to restart policy) or deletion. In case of a Node failure, identical Pods are scheduled on other available Nodes in the cluster.
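The "logical host" paragraph in the hunk above is easy to picture as a manifest. Below is a hedged sketch of the two-container Pod it describes; the names, images, and the sidecar's command are illustrative assumptions, not taken from the tutorial.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: node-app-with-feeder          # hypothetical name
spec:
  containers:
  - name: web                         # the Node.js webserver container
    image: node:4.5                   # assumed image
    ports:
    - containerPort: 8080
  - name: data-feeder                 # sidecar feeding data to the webserver
    image: busybox                    # assumed image
    # Both containers share the Pod's IP address and port space, so the
    # sidecar reaches the webserver at localhost:8080.
    command: ["sh", "-c", "while true; do wget -qO- http://localhost:8080/ > /dev/null; sleep 5; done"]
```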

diff --git a/docs/user-guide/replicasets.md b/docs/user-guide/replicasets.md
index f0aa08bf04..86e60cffda 100644
--- a/docs/user-guide/replicasets.md
+++ b/docs/user-guide/replicasets.md
@@ -35,7 +35,7 @@ their Replica Sets.

## When to use a Replica Set?

-A Replica Set ensures that a specified number of pod “replicas” are running at any given
+A Replica Set ensures that a specified number of pod "replicas" are running at any given
time. However, a Deployment is a higher-level concept that manages Replica Sets and
provides declarative updates to pods along with a lot of other useful features.
Therefore, we recommend using Deployments instead of directly using Replica Sets, unless
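Since the hunk above recommends Deployments over direct Replica Set use, a minimal Deployment sketch may be useful; the name, labels, and image are illustrative assumptions, and the apiVersion reflects the 1.5-era API.

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend                # illustrative name
spec:
  replicas: 3                   # the Deployment keeps three pod replicas running
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: web
        image: nginx            # illustrative image
        ports:
        - containerPort: 80
```

Updating the pod template in such an object triggers a rolling update: the Deployment creates a new Replica Set and scales the old one down, which is the declarative behavior the patched text recommends over managing Replica Sets by hand.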