` argument to `kubeadm init`.
-If you want to use [flannel](https://github.com/coreos/flannel) as the pod network; specify `--pod-network-cidr=10.244.0.0/16` if you're using the daemonset manifest below. _However, please note that this is not required for any other networks, including Weave, which is the recommended pod network._
+If you want to use [flannel](https://github.com/coreos/flannel) as the pod network, specify `--pod-network-cidr=10.244.0.0/16` if you're using the daemonset manifest below. _However, please note that this is not required for any other networks besides Flannel._
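+
+For example, a minimal `kubeadm init` invocation with Flannel in mind might look like the
+following (a sketch; add whatever other flags your environment needs alongside it):
+
+```shell
+kubeadm init --pod-network-cidr=10.244.0.0/16
+```
+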
Please refer to the [kubeadm reference doc](/docs/admin/kubeadm/) if you want to read more about the flags `kubeadm init` provides.
diff --git a/docs/getting-started-guides/kubectl.md b/docs/getting-started-guides/kubectl.md
new file mode 100644
index 0000000000..bd2512707b
--- /dev/null
+++ b/docs/getting-started-guides/kubectl.md
@@ -0,0 +1,110 @@
+---
+---
+
+
+
+## Overview
+
+kubectl is the command line tool you use to interact with Kubernetes clusters.
+
+You should use a version of kubectl that is at least as new as your server.
+`kubectl version` prints the client and server versions. Using the same version of kubectl
+as your server works, and a newer kubectl than your server also works, but if you use
+an older kubectl with a newer server you may see odd validation errors.
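+
+For example, you can compare the two versions by running `kubectl version` against your
+cluster (the output below is illustrative; your version numbers will differ):
+
+```shell
+kubectl version
+# Client Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.4", ...}
+# Server Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.4", ...}
+```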
+
+## Download a release
+
+Download kubectl from the [official Kubernetes releases](https://console.cloud.google.com/storage/browser/kubernetes-release/release/):
+
+On MacOS:
+
+```shell
+wget https://storage.googleapis.com/kubernetes-release/release/v1.4.4/bin/darwin/amd64/kubectl
+chmod +x kubectl
+mv kubectl /usr/local/bin/kubectl
+```
+
+On Linux:
+
+```shell
+wget https://storage.googleapis.com/kubernetes-release/release/v1.4.4/bin/linux/amd64/kubectl
+chmod +x kubectl
+mv kubectl /usr/local/bin/kubectl
+```
+
+
+You may need `sudo` for the `mv` command. You can put kubectl anywhere in your `PATH`; some people prefer to install it to `~/bin`.
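+
+For example, either of the following works (a sketch; pick whichever location you prefer,
+and make sure `~/bin` is on your `PATH` if you use it):
+
+```shell
+# Install system-wide (may prompt for your password):
+sudo mv kubectl /usr/local/bin/kubectl
+
+# Or install per-user:
+mkdir -p ~/bin
+mv kubectl ~/bin/kubectl
+echo 'export PATH="$HOME/bin:$PATH"' >> ~/.bashrc
+```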
+
+
+## Alternatives
+
+### Download as part of the Google Cloud SDK
+
+kubectl can be installed as part of the Google Cloud SDK.
+
+First, install the [Google Cloud SDK](https://cloud.google.com/sdk/).
+
+After the Google Cloud SDK is installed, run the following command to install `kubectl`:
+
+```shell
+gcloud components install kubectl
+```
+
+Check that the installed version is sufficiently up-to-date by running `kubectl version`.
+
+### Install with brew
+
+If you are on MacOS and using Homebrew, you can install kubectl with:
+
+```shell
+brew install kubectl
+```
+
+The Homebrew project is independent of Kubernetes, so check that the installed version is
+sufficiently up-to-date by running `kubectl version`.
+
+
+## Enabling shell autocompletion
+
+kubectl includes autocompletion support, which can save a lot of typing!
+
+The completion script itself is generated by kubectl, so you typically just need to invoke it from your profile.
+
+Common examples are provided here; for more details, consult `kubectl completion -h`.
+
+### On Linux, using bash
+
+To add it to your current shell: `source <(kubectl completion bash)`
+
+To add kubectl autocompletion to your profile (so it is automatically loaded in future shells):
+
+```shell
+echo "source <(kubectl completion bash)" >> ~/.bashrc
+```
+
+### On MacOS, using bash
+
+On MacOS, you will need to install the bash-completion support first:
+
+```shell
+brew install bash-completion
+```
+
+To add it to your current shell:
+
+```shell
+source $(brew --prefix)/etc/bash_completion
+source <(kubectl completion bash)
+```
+
+To add kubectl autocompletion to your profile (so it is automatically loaded in future shells):
+
+```shell
+echo "source $(brew --prefix)/etc/bash_completion" >> ~/.bash_profile
+echo "source <(kubectl completion bash)" >> ~/.bash_profile
+```
+
+Please note that this currently appears to work only if you installed kubectl with `brew install kubectl`,
+and not if you downloaded kubectl directly.
\ No newline at end of file
diff --git a/docs/getting-started-guides/vsphere.md b/docs/getting-started-guides/vsphere.md
index b3679c56e8..a1e59b0cd0 100644
--- a/docs/getting-started-guides/vsphere.md
+++ b/docs/getting-started-guides/vsphere.md
@@ -65,6 +65,7 @@ export GOVC_DATACENTER='ha-datacenter' # The datacenter to be used by vSphere cl
```
Sample environment
+
```shell
export GOVC_URL='10.161.236.217'
export GOVC_USERNAME='administrator'
@@ -79,6 +80,7 @@ export GOVC_DATACENTER='Datacenter'
```
Import this VMDK into your vSphere datastore:
+
```shell
govc import.vmdk kube.vmdk ./kube/
```
diff --git a/docs/index.md b/docs/index.md
index 0bdaa33b5c..c430dac710 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -5,9 +5,7 @@ assignees:
---
-The Kubernetes documentation can help you set up Kubernetes, learn about the system, or get your applications and workloads running on Kubernetes.
-
-Read the Kubernetes Overview
+The Kubernetes documentation can help you set up Kubernetes, learn about the system, or get your applications and workloads running on Kubernetes. To learn the basics of what Kubernetes is and how it works, read "What is Kubernetes".
Interactive Tutorial
@@ -40,4 +38,4 @@ assignees:
Tools
-The tools page contains a list of native and third-party tools for Kubernetes.
\ No newline at end of file
+The tools page contains a list of native and third-party tools for Kubernetes.
diff --git a/docs/tasks/debug-application-cluster/determine-reason-pod-failure.md b/docs/tasks/debug-application-cluster/determine-reason-pod-failure.md
new file mode 100644
index 0000000000..f0f611e235
--- /dev/null
+++ b/docs/tasks/debug-application-cluster/determine-reason-pod-failure.md
@@ -0,0 +1,110 @@
+---
+---
+
+{% capture overview %}
+
+This page shows how to write and read a Container
+termination message.
+
+Termination messages provide a way for containers to write
+information about fatal events to a location where it can
+be easily retrieved and surfaced by tools like dashboards
+and monitoring software. In most cases, information that you
+put in a termination message should also be written to
+the general
+[Kubernetes logs](/docs/user-guide/logging/).
+
+{% endcapture %}
+
+
+{% capture prerequisites %}
+
+{% include task-tutorial-prereqs.md %}
+
+{% endcapture %}
+
+
+{% capture steps %}
+
+### Writing and reading a termination message
+
+In this exercise, you create a Pod that runs one container.
+The configuration file specifies a command that runs when
+the container starts.
+
+{% include code.html language="yaml" file="termination.yaml" ghlink="/docs/tasks/debug-application-cluster/termination.yaml" %}
+
+1. Create a Pod based on the YAML configuration file:
+
+ export REPO=https://raw.githubusercontent.com/kubernetes/kubernetes.github.io/master
+      kubectl create -f $REPO/docs/tasks/debug-application-cluster/termination.yaml
+
+ In the YAML file, in the `cmd` and `args` fields, you can see that the
+ container sleeps for 10 seconds and then writes "Sleep expired" to
+ the `/dev/termination-log` file. After the container writes
+ the "Sleep expired" message, it terminates.
+
+1. Display information about the Pod:
+
+ kubectl get pod termination-demo
+
+ Repeat the preceding command until the Pod is no longer running.
+
+1. Display detailed information about the Pod:
+
+ kubectl get pod --output=yaml
+
+ The output includes the "Sleep expired" message:
+
+ apiVersion: v1
+ kind: Pod
+ ...
+ lastState:
+ terminated:
+ containerID: ...
+ exitCode: 0
+ finishedAt: ...
+ message: |
+ Sleep expired
+ ...
+
+1. Use a Go template to filter the output so that it includes only the
+   termination message:
+
+      {% raw %}kubectl get pod termination-demo -o go-template="{{range .status.containerStatuses}}{{.lastState.terminated.message}}{{end}}"{% endraw %}
+
+### Setting the termination log file
+
+By default, Kubernetes retrieves termination messages from
+`/dev/termination-log`. To change this to a different file,
+specify a `terminationMessagePath` field for your Container.
+
+For example, suppose your Container writes termination messages to
+`/tmp/my-log`, and you want Kubernetes to retrieve those messages.
+Set `terminationMessagePath` as shown here:
+
+ apiVersion: v1
+ kind: Pod
+ metadata:
+ name: msg-path-demo
+ spec:
+ containers:
+ - name: msg-path-demo-container
+ image: debian
+ terminationMessagePath: "/tmp/my-log"
+
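+Assuming the container in this Pod actually writes a message to `/tmp/my-log`
+before it terminates, you can retrieve that message with the same Go template
+filter used earlier (the Pod name `msg-path-demo` comes from the example above):
+
+    {% raw %}kubectl get pod msg-path-demo -o go-template="{{range .status.containerStatuses}}{{.lastState.terminated.message}}{{end}}"{% endraw %}
+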
+{% endcapture %}
+
+{% capture whatsnext %}
+
+* See the `terminationMessagePath` field in
+ [Container](/docs/api-reference/v1/definitions#_v1_container).
+* Learn about [retrieving logs](/docs/user-guide/logging/).
+* Learn about [Go templates](https://golang.org/pkg/text/template/).
+
+{% endcapture %}
+
+
+{% include templates/task.md %}
diff --git a/docs/tasks/debug-application-cluster/termination.yaml b/docs/tasks/debug-application-cluster/termination.yaml
new file mode 100644
index 0000000000..3f63748f72
--- /dev/null
+++ b/docs/tasks/debug-application-cluster/termination.yaml
@@ -0,0 +1,10 @@
+apiVersion: v1
+kind: Pod
+metadata:
+ name: termination-demo
+spec:
+ containers:
+ - name: termination-demo-container
+ image: debian
+ command: ["/bin/sh"]
+ args: ["-c", "sleep 10 && echo Sleep expired > /dev/termination-log"]
diff --git a/docs/tutorials/index.md b/docs/tutorials/index.md
index 14530ca25e..60aab6a8fb 100644
--- a/docs/tutorials/index.md
+++ b/docs/tutorials/index.md
@@ -15,6 +15,10 @@ The Tutorials section of the Kubernetes documentation is a work in progress.
* [Exposing an External IP Address to Access an Application in a Cluster](/docs/tutorials/stateless-application/expose-external-ip-address/)
+#### Stateful Applications
+
+* [Running a Single-Instance Stateful Application](/docs/tutorials/stateful-application/run-stateful-application/)
+
### What's next
If you would like to write a tutorial, see
diff --git a/docs/tutorials/stateful-application/gce-volume.yaml b/docs/tutorials/stateful-application/gce-volume.yaml
new file mode 100644
index 0000000000..ddb9ecc3ce
--- /dev/null
+++ b/docs/tutorials/stateful-application/gce-volume.yaml
@@ -0,0 +1,12 @@
+apiVersion: v1
+kind: PersistentVolume
+metadata:
+ name: mysql-pv
+spec:
+ capacity:
+ storage: 20Gi
+ accessModes:
+ - ReadWriteOnce
+ gcePersistentDisk:
+ pdName: mysql-disk
+ fsType: ext4
diff --git a/docs/tutorials/stateful-application/mysql-deployment.yaml b/docs/tutorials/stateful-application/mysql-deployment.yaml
new file mode 100644
index 0000000000..3b2aa22f6c
--- /dev/null
+++ b/docs/tutorials/stateful-application/mysql-deployment.yaml
@@ -0,0 +1,51 @@
+apiVersion: v1
+kind: Service
+metadata:
+ name: mysql
+spec:
+ ports:
+ - port: 3306
+ selector:
+ app: mysql
+ clusterIP: None
+---
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+ name: mysql-pv-claim
+spec:
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 20Gi
+---
+apiVersion: extensions/v1beta1
+kind: Deployment
+metadata:
+ name: mysql
+spec:
+ strategy:
+ type: Recreate
+ template:
+ metadata:
+ labels:
+ app: mysql
+ spec:
+ containers:
+ - image: mysql:5.6
+ name: mysql
+ env:
+ # Use secret in real usage
+ - name: MYSQL_ROOT_PASSWORD
+ value: password
+ ports:
+ - containerPort: 3306
+ name: mysql
+ volumeMounts:
+ - name: mysql-persistent-storage
+ mountPath: /var/lib/mysql
+ volumes:
+ - name: mysql-persistent-storage
+ persistentVolumeClaim:
+ claimName: mysql-pv-claim
diff --git a/docs/tutorials/stateful-application/run-stateful-application.md b/docs/tutorials/stateful-application/run-stateful-application.md
new file mode 100644
index 0000000000..443d9cdea5
--- /dev/null
+++ b/docs/tutorials/stateful-application/run-stateful-application.md
@@ -0,0 +1,220 @@
+---
+---
+
+{% capture overview %}
+
+This page shows you how to run a single-instance stateful application
+in Kubernetes using a PersistentVolume and a Deployment. The
+application is MySQL.
+
+{% endcapture %}
+
+
+{% capture objectives %}
+
+* Create a PersistentVolume referencing a disk in your environment.
+* Create a MySQL Deployment.
+* Expose MySQL to other pods in the cluster at a known DNS name.
+
+{% endcapture %}
+
+
+{% capture prerequisites %}
+
+* {% include task-tutorial-prereqs.md %}
+
+* For data persistence, you create a PersistentVolume that
+  references a disk in your environment. See
+  [Types of Persistent Volumes](/docs/user-guide/persistent-volumes/#types-of-persistent-volumes)
+  for the environments supported. This tutorial demonstrates
+  `GCEPersistentDisk`, but any type will work. Note that `GCEPersistentDisk`
+  volumes work only on Google Compute Engine.
+
+{% endcapture %}
+
+
+{% capture lessoncontent %}
+
+### Set up a disk in your environment
+
+You can use any type of persistent volume for your stateful app. See
+[Types of Persistent Volumes](/docs/user-guide/persistent-volumes/#types-of-persistent-volumes)
+for a list of supported volume types. For Google Compute Engine, run:
+
+```
+gcloud compute disks create --size=20GB mysql-disk
+```
+
+Next, create a PersistentVolume that points to the `mysql-disk`
+disk you just created. Here is a configuration file for a PersistentVolume
+that references the Compute Engine disk above:
+
+{% include code.html language="yaml" file="gce-volume.yaml" ghlink="/docs/tutorials/stateful-application/gce-volume.yaml" %}
+
+Notice that the `pdName: mysql-disk` line matches the name of the disk
+in the Compute Engine environment. See the
+[Persistent Volumes](/docs/user-guide/persistent-volumes/) documentation
+for details on writing a PersistentVolume configuration file for other
+environments.
+
+Create the persistent volume:
+
+```
+kubectl create -f http://k8s.io/docs/tutorials/stateful-application/gce-volume.yaml
+```
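+
+You can optionally confirm that the PersistentVolume was created:
+
+```
+kubectl get pv mysql-pv
+```
+
+Until a claim binds to it, its `STATUS` column shows `Available`.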
+
+
+### Deploy MySQL
+
+You can run a stateful application by creating a Kubernetes Deployment
+and connecting it to an existing PersistentVolume using a
+PersistentVolumeClaim. For example, this YAML file describes a
+Deployment that runs MySQL and references the PersistentVolumeClaim. The file
+defines a volume mount for `/var/lib/mysql`, and then creates a
+PersistentVolumeClaim that looks for a 20Gi volume. This claim is
+satisfied by any volume that meets the requirements, in this case the
+volume created above.
+
+Note: The password is defined in the configuration YAML, which is insecure. See
+[Kubernetes Secrets](/docs/user-guide/secrets/)
+for a secure solution.
+
+{% include code.html language="yaml" file="mysql-deployment.yaml" ghlink="/docs/tutorials/stateful-application/mysql-deployment.yaml" %}
+
+1. Deploy the contents of the YAML file:
+
+ kubectl create -f http://k8s.io/docs/tutorials/stateful-application/mysql-deployment.yaml
+
+1. Display information about the Deployment:
+
+ kubectl describe deployment mysql
+
+ Name: mysql
+ Namespace: default
+ CreationTimestamp: Tue, 01 Nov 2016 11:18:45 -0700
+ Labels: app=mysql
+ Selector: app=mysql
+ Replicas: 1 updated | 1 total | 0 available | 1 unavailable
+ StrategyType: Recreate
+ MinReadySeconds: 0
+ OldReplicaSets:
+ NewReplicaSet: mysql-63082529 (1/1 replicas created)
+ Events:
+ FirstSeen LastSeen Count From SubobjectPath Type Reason Message
+ --------- -------- ----- ---- ------------- -------- ------ -------
+ 33s 33s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set mysql-63082529 to 1
+
+1. List the pods created by the Deployment:
+
+ kubectl get pods -l app=mysql
+
+ NAME READY STATUS RESTARTS AGE
+ mysql-63082529-2z3ki 1/1 Running 0 3m
+
+1. Inspect the Persistent Volume:
+
+ kubectl describe pv mysql-pv
+
+ Name: mysql-pv
+ Labels:
+ Status: Bound
+ Claim: default/mysql-pv-claim
+ Reclaim Policy: Retain
+ Access Modes: RWO
+ Capacity: 20Gi
+ Message:
+ Source:
+ Type: GCEPersistentDisk (a Persistent Disk resource in Google Compute Engine)
+ PDName: mysql-disk
+ FSType: ext4
+ Partition: 0
+ ReadOnly: false
+ No events.
+
+1. Inspect the PersistentVolumeClaim:
+
+ kubectl describe pvc mysql-pv-claim
+
+ Name: mysql-pv-claim
+ Namespace: default
+ Status: Bound
+ Volume: mysql-pv
+ Labels:
+ Capacity: 20Gi
+ Access Modes: RWO
+ No events.
+
+### Accessing the MySQL instance
+
+The preceding YAML file creates a Service that
+allows other Pods in the cluster to access the database. The Service option
+`clusterIP: None` lets the Service DNS name resolve directly to the
+Pod's IP address. This is optimal when you have only one Pod
+behind a Service and you don't intend to increase the number of Pods.
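+
+As a quick check, you can confirm that the name `mysql` resolves from another Pod
+in the cluster (a sketch using a throwaway busybox Pod; the name `dns-test` is
+arbitrary):
+
+```
+kubectl run -it --rm --image=busybox dns-test -- nslookup mysql
+```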
+
+Run a MySQL client to connect to the server:
+
+```
+kubectl run -it --rm --image=mysql:5.6 mysql-client -- mysql -h mysql -ppassword
+```
+
+This command creates a new Pod in the cluster running a MySQL client
+and connects it to the server through the Service. If it connects, you
+know your stateful MySQL database is up and running.
+
+```
+Waiting for pod default/mysql-client-274442439-zyp6i to be running, status is Pending, pod ready: false
+If you don't see a command prompt, try pressing enter.
+
+mysql>
+```
+
+### Updating
+
+The image or any other part of the Deployment can be updated as usual
+with the `kubectl apply` command. Here are some precautions that are
+specific to stateful apps:
+
+* Don't scale the app. This setup is for single-instance apps
+ only. The underlying PersistentVolume can only be mounted to one
+ Pod. For clustered stateful apps, see the
+ [StatefulSet documentation](/docs/user-guide/petset/).
+* Use `strategy:` `type: Recreate` in the Deployment configuration
+ YAML file. This instructs Kubernetes to _not_ use rolling
+ updates. Rolling updates will not work, as you cannot have more than
+ one Pod running at a time. The `Recreate` strategy will stop the
+ first pod before creating a new one with the updated configuration.
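+
+For example, one way to apply an update is to download the configuration file,
+edit it (say, to change the image tag), and re-apply it (a sketch; adjust the
+file and the edit to your situation):
+
+```
+wget http://k8s.io/docs/tutorials/stateful-application/mysql-deployment.yaml
+# Edit mysql-deployment.yaml, for example to point at a newer mysql image, then:
+kubectl apply -f mysql-deployment.yaml
+```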
+
+### Deleting a deployment
+
+Delete the deployed objects by name:
+
+```
+kubectl delete deployment,svc mysql
+kubectl delete pvc mysql-pv-claim
+kubectl delete pv mysql-pv
+```
+
+Also, if you are using Compute Engine disks:
+
+```
+gcloud compute disks delete mysql-disk
+```
+
+{% endcapture %}
+
+
+{% capture whatsnext %}
+
+* Learn more about [Deployment objects](/docs/user-guide/deployments/).
+
+* Learn more about [Deploying applications](/docs/user-guide/deploying-applications/)
+
+* [kubectl run documentation](/docs/user-guide/kubectl/kubectl_run/)
+
+* [Volumes](/docs/user-guide/volumes/) and [Persistent Volumes](/docs/user-guide/persistent-volumes/)
+
+{% endcapture %}
+
+{% include templates/tutorial.md %}
diff --git a/docs/user-guide/accessing-the-cluster.md b/docs/user-guide/accessing-the-cluster.md
index 6f78ab5293..63134b4909 100644
--- a/docs/user-guide/accessing-the-cluster.md
+++ b/docs/user-guide/accessing-the-cluster.md
@@ -129,7 +129,7 @@ To use it,
* Write an application atop of the client-go clients. Note that client-go defines its own API objects, so if needed, please import API definitions from client-go rather than from the main repository, e.g., `import "k8s.io/client-go/1.4/pkg/api/v1"` is correct.
The Go client can use the same [kubeconfig file](/docs/user-guide/kubeconfig-file)
-as the kubectl CLI does to locate and authenticate to the apiserver. See this [example](https://github.com/kubernetes/client-go/examples/out-of-cluster.go):
+as the kubectl CLI does to locate and authenticate to the apiserver. See this [example](https://github.com/kubernetes/client-go/blob/master/examples/out-of-cluster/main.go):
```golang
import (
@@ -183,7 +183,8 @@ From within a pod the recommended ways to connect to API are:
in any container of the pod can access it. See this [example of using kubectl proxy
in a pod](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/kubectl-container/).
- use the Go client library, and create a client using the `client.NewInCluster()` factory.
- This handles locating and authenticating to the apiserver. [example](https://github.com/kubernetes/client-go/examples/in-cluster.go)
+ This handles locating and authenticating to the apiserver. See this [example of using Go client
+ library in a pod](https://github.com/kubernetes/client-go/blob/master/examples/in-cluster/main.go).
In each case, the credentials of the pod are used to communicate securely with the apiserver.
diff --git a/docs/user-guide/node-selection/index.md b/docs/user-guide/node-selection/index.md
index 49d30b51c9..725848b544 100644
--- a/docs/user-guide/node-selection/index.md
+++ b/docs/user-guide/node-selection/index.md
@@ -173,7 +173,7 @@ on node N if node N has a label with key `failure-domain.beta.kubernetes.io/zone
such that there is at least one node in the cluster with key `failure-domain.beta.kubernetes.io/zone` and
value V that is running a pod that has a label with key "security" and value "S1".) The pod anti-affinity
rule says that the pod cannot schedule onto a node if that node is already running a pod with label
-having key "security" and value "S2". (If the `topologyKey` were `failure-domain.beta.kuberntes.io/zone` then
+having key "security" and value "S2". (If the `topologyKey` were `failure-domain.beta.kubernetes.io/zone` then
it would mean that the pod cannot schedule onto a node if that node is in the same zone as a pod with
label having key "security" and value "S2".) See the [design doc](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/podaffinity.md).
for many more examples of pod affinity and anti-affinity, both the `requiredDuringSchedulingIgnoredDuringExecution`
diff --git a/docs/user-guide/pod-states.md b/docs/user-guide/pod-states.md
index b29270e5f8..8f745e9f56 100644
--- a/docs/user-guide/pod-states.md
+++ b/docs/user-guide/pod-states.md
@@ -66,8 +66,8 @@ The possible values for RestartPolicy are `Always`, `OnFailure`, or `Never`. If
Three types of controllers are currently available:
- Use a [`Job`](/docs/user-guide/jobs/) for pods which are expected to terminate (e.g. batch computations).
-- Use a [`ReplicationController`](/docs/user-guide/replication-controller/) for pods which are not expected to
- terminate (e.g. web servers).
+- Use a [`ReplicationController`](/docs/user-guide/replication-controller/) or [`Deployment`](/docs/user-guide/deployments/)
+ for pods which are not expected to terminate (e.g. web servers).
- Use a [`DaemonSet`](/docs/admin/daemons/): Use for pods which need to run 1 per machine because they provide a
machine-specific system service.
If you are unsure whether to use ReplicationController or Daemon, then see [Daemon Set versus
diff --git a/docs/user-guide/working-with-resources.md b/docs/user-guide/working-with-resources.md
index 5b300ee6bd..d2aeeb621e 100644
--- a/docs/user-guide/working-with-resources.md
+++ b/docs/user-guide/working-with-resources.md
@@ -46,7 +46,7 @@ The system adds fields in several ways:
- Some fields are added synchronously with creation of the resource and some are set asynchronously.
- For example: `metadata.uid` is set synchronously. (Read more about [metadata](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/docs/devel/api-conventions.md#metadata)).
- - For example, `status.hostIP` is set only after the pod has been scheduled. This often happens fast, but you may notice pods which do not have this set yet. This is called Late Initialization. (Read mode about [status](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/docs/devel/api-conventions.md#spec-and-status) and [late initialization](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/docs/devel/api-conventions.md#late-initialization) ).
+ - For example, `status.hostIP` is set only after the pod has been scheduled. This often happens fast, but you may notice pods which do not have this set yet. This is called Late Initialization. (Read more about [status](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/docs/devel/api-conventions.md#spec-and-status) and [late initialization](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/docs/devel/api-conventions.md#late-initialization)).
- Some fields are set to default values. Some defaults vary by cluster and some are fixed for the API at a certain version. (Read more about [defaulting](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/docs/devel/api-conventions.md#defaulting)).
- For example, `spec.containers[0].imagePullPolicy` always defaults to `IfNotPresent` in api v1.
- For example, `spec.containers[0].resources.limits.cpu` may be defaulted to `100m` on some clusters, to some other value on others, and not defaulted at all on others.
diff --git a/images/square-logos/datadog.png b/images/square-logos/datadog.png
index 82d6d11aaf..aeab0f227f 100644
Binary files a/images/square-logos/datadog.png and b/images/square-logos/datadog.png differ
diff --git a/images/square-logos/endocode.png b/images/square-logos/endocode.png
new file mode 100644
index 0000000000..a90189d6f9
Binary files /dev/null and b/images/square-logos/endocode.png differ
diff --git a/images/square-logos/giant_swarm.png b/images/square-logos/giant_swarm.png
new file mode 100644
index 0000000000..6434f98735
Binary files /dev/null and b/images/square-logos/giant_swarm.png differ
diff --git a/images/square-logos/mirantis.png b/images/square-logos/mirantis.png
new file mode 100644
index 0000000000..9dc83103d9
Binary files /dev/null and b/images/square-logos/mirantis.png differ
diff --git a/test/examples_test.go b/test/examples_test.go
index 1853cdaf0a..7e5660c4f9 100644
--- a/test/examples_test.go
+++ b/test/examples_test.go
@@ -127,11 +127,11 @@ func validateObject(obj runtime.Object) (errors field.ErrorList) {
t.Namespace = api.NamespaceDefault
}
errors = expvalidation.ValidateDaemonSet(t)
- case *batch.ScheduledJob:
+ case *batch.CronJob:
if t.Namespace == "" {
t.Namespace = api.NamespaceDefault
}
- errors = batch_validation.ValidateScheduledJob(t)
+ errors = batch_validation.ValidateCronJob(t)
default:
errors = field.ErrorList{}
errors = append(errors, field.InternalError(field.NewPath(""), fmt.Errorf("no validation defined for %#v", obj)))
@@ -242,7 +242,7 @@ func TestExampleObjectSchemas(t *testing.T) {
"redis-resource-deployment": &extensions.Deployment{},
"redis-secret-deployment": &extensions.Deployment{},
"run-my-nginx": &extensions.Deployment{},
- "sj": &batch.ScheduledJob{},
+ "sj": &batch.CronJob{},
},
"../docs/admin": {
"daemon": &extensions.DaemonSet{},
@@ -272,7 +272,7 @@ func TestExampleObjectSchemas(t *testing.T) {
"../docs/user-guide/node-selection": {
"pod": &api.Pod{},
"pod-with-node-affinity": &api.Pod{},
- "pod-with-pod-affinity": &api.Pod{},
+ "pod-with-pod-affinity": &api.Pod{},
},
"../docs/admin/resourcequota": {
"best-effort": &api.ResourceQuota{},