Merge pull request #11551 from a-robinson/docs

Improve syntax highlighting for design and devel docs
Merged by Brian Grant on 2015-07-19 08:45:54 -07:00 (commit 43bfff4f73).
9 changed files with 39 additions and 48 deletions.


@@ -128,7 +128,7 @@ The server is updated to be aware of **LimitRange** objects.
 The constraints are only enforced if the kube-apiserver is started as follows:
-```
+```console
 $ kube-apiserver -admission_control=LimitRanger
 ```
@@ -140,7 +140,7 @@ kubectl is modified to support the **LimitRange** resource.
 For example,
-```shell
+```console
 $ kubectl namespace myspace
 $ kubectl create -f docs/user-guide/limitrange/limits.yaml
 $ kubectl get limits


@@ -140,7 +140,7 @@ The server is updated to be aware of **ResourceQuota** objects.
 The quota is only enforced if the kube-apiserver is started as follows:
-```
+```console
 $ kube-apiserver -admission_control=ResourceQuota
 ```
@@ -167,7 +167,7 @@ kubectl is modified to support the **ResourceQuota** resource.
 For example,
-```
+```console
 $ kubectl namespace myspace
 $ kubectl create -f docs/user-guide/resourcequota/quota.yaml
 $ kubectl get quota


@@ -34,7 +34,7 @@ This directory contains diagrams for the clustering design doc.
 This depends on the `seqdiag` [utility](http://blockdiag.com/en/seqdiag/index.html). Assuming you have a non-borked python install, this should be installable with
-```bash
+```sh
 pip install seqdiag
 ```
@@ -44,7 +44,7 @@ Just call `make` to regenerate the diagrams.
 If you are on a Mac or your pip install is messed up, you can easily build with docker.
-```
+```sh
 make docker
 ```


@@ -90,7 +90,7 @@ Each binary that generates events:
 Sample kubectl output
-```
+```console
 FIRSTSEEN                         LASTSEEN                          COUNT   NAME                                          KIND     SUBOBJECT   REASON     SOURCE                                                  MESSAGE
 Thu, 12 Feb 2015 01:13:02 +0000   Thu, 12 Feb 2015 01:13:02 +0000   1       kubernetes-minion-4.c.saad-dev-vms.internal   Minion               starting   {kubelet kubernetes-minion-4.c.saad-dev-vms.internal}   Starting kubelet.
 Thu, 12 Feb 2015 01:13:09 +0000   Thu, 12 Feb 2015 01:13:09 +0000   1       kubernetes-minion-1.c.saad-dev-vms.internal   Minion               starting   {kubelet kubernetes-minion-1.c.saad-dev-vms.internal}   Starting kubelet.


@@ -74,7 +74,7 @@ The Namespace provides a unique scope for:
 A *Namespace* defines a logically named group for multiple *Kind*s of resources.
-```
+```go
 type Namespace struct {
   TypeMeta   `json:",inline"`
   ObjectMeta `json:"metadata,omitempty"`
@@ -125,7 +125,7 @@ See [Admission control: Resource Quota](admission_control_resource_quota.md)
 Upon creation of a *Namespace*, the creator may provide a list of *Finalizer* objects.
-```
+```go
 type FinalizerName string
 // These are internal finalizers to Kubernetes, must be qualified name unless defined here
@@ -154,7 +154,7 @@ set by default.
 A *Namespace* may exist in the following phases.
-```
+```go
 type NamespacePhase string
 const(
   NamespaceActive NamespacePhase = "Active"
@@ -262,7 +262,7 @@ to take part in Namespace termination.
 OpenShift creates a Namespace in Kubernetes
-```
+```json
 {
   "apiVersion":"v1",
   "kind": "Namespace",
@@ -287,7 +287,7 @@ own storage associated with the "development" namespace unknown to Kubernetes.
 User deletes the Namespace in Kubernetes, and Namespace now has following state:
-```
+```json
 {
   "apiVersion":"v1",
   "kind": "Namespace",
@@ -312,7 +312,7 @@ and begins to terminate all of the content in the namespace that it knows about.
 success, it executes a *finalize* action that modifies the *Namespace* by
 removing *kubernetes* from the list of finalizers:
-```
+```json
 {
   "apiVersion":"v1",
   "kind": "Namespace",
@@ -340,7 +340,7 @@ from the list of finalizers.
 This results in the following state:
-```
+```json
 {
   "apiVersion":"v1",
   "kind": "Namespace",


@@ -131,7 +131,7 @@ differentiate it from `docker0`) is set up outside of Docker proper.
 Example of GCE's advanced routing rules:
-```
+```sh
 gcloud compute routes add "${MINION_NAMES[$i]}" \
   --project "${PROJECT}" \
   --destination-range "${MINION_IP_RANGES[$i]}" \


@@ -127,7 +127,7 @@ Events that communicate the state of a mounted volume are left to the volume plugin.
 An administrator provisions storage by posting PVs to the API. Various way to automate this task can be scripted. Dynamic provisioning is a future feature that can maintain levels of PVs.
-```
+```yaml
 POST:
 kind: PersistentVolume
@@ -140,15 +140,13 @@ spec:
   persistentDisk:
     pdName: "abc123"
     fsType: "ext4"
---------------------------------------------------
-kubectl get pv
+```
+```console
+$ kubectl get pv
 NAME      LABELS    CAPACITY      ACCESSMODES   STATUS    CLAIM     REASON
 pv0001    map[]     10737418240   RWO           Pending
 ```
#### Users request storage
@@ -157,9 +155,9 @@ A user requests storage by posting a PVC to the API. Their request contains the
 The user must be within a namespace to create PVCs.
-```
+```yaml
 POST:
 kind: PersistentVolumeClaim
 apiVersion: v1
 metadata:
@@ -170,15 +168,13 @@ spec:
   resources:
     requests:
       storage: 3
---------------------------------------------------
-kubectl get pvc
+```
+```console
+$ kubectl get pvc
 NAME        LABELS    STATUS    VOLUME
 myclaim-1   map[]     pending
 ```
@@ -186,9 +182,8 @@ myclaim-1 map[] pending
 The ```PersistentVolumeClaimBinder``` attempts to find an available volume that most closely matches the user's request. If one exists, they are bound by putting a reference on the PV to the PVC. Requests can go unfulfilled if a suitable match is not found.
-```
-kubectl get pv
+```console
+$ kubectl get pv
 NAME      LABELS    CAPACITY      ACCESSMODES   STATUS    CLAIM       REASON
 pv0001    map[]     10737418240   RWO           Bound     myclaim-1 / f4b3d283-c0ef-11e4-8be4-80e6500a981e
@@ -198,8 +193,6 @@ kubectl get pvc
 NAME        LABELS    STATUS    VOLUME
 myclaim-1   map[]     Bound     b16e91d6-c0ef-11e4-8be4-80e6500a981e
 ```
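The binder behavior described above — pick an available volume that most closely matches the request — can be sketched as a smallest-sufficient-capacity search. A hedged Go illustration (the type and function names are made up; the real ```PersistentVolumeClaimBinder``` logic is more involved):

```go
package main

import "fmt"

// pv is a toy stand-in for a PersistentVolume: a capacity in bytes and a
// claim reference that is empty while the volume is unbound.
type pv struct {
	name     string
	capacity int64
	claim    string
}

// bestMatch returns the index of the unbound volume with the smallest
// capacity that still satisfies the request, or -1 if none fits.
// Illustrative sketch of the matching idea only.
func bestMatch(pvs []pv, request int64) int {
	best := -1
	for i, v := range pvs {
		if v.claim != "" || v.capacity < request {
			continue
		}
		if best == -1 || v.capacity < pvs[best].capacity {
			best = i
		}
	}
	return best
}

func main() {
	pvs := []pv{
		{name: "pv0001", capacity: 10737418240}, // 10Gi, as in the example
		{name: "pv0002", capacity: 5368709120},  // 5Gi (hypothetical)
	}
	// A claim asking for 3Gi binds to the closer fit, pv0002, by putting
	// a reference on the PV to the PVC.
	i := bestMatch(pvs, 3*1024*1024*1024)
	pvs[i].claim = "myclaim-1"
	fmt.Println(pvs[i].name) // pv0002
}
```

If no volume fits, `bestMatch` returns -1 and the request simply goes unfulfilled, matching the "Requests can go unfulfilled" behavior in the text.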
#### Claim usage
@@ -208,7 +201,7 @@ The claim holder can use their claim as a volume. The ```PersistentVolumeClaimV
 The claim holder owns the claim and its data for as long as the claim exists. The pod using the claim can be deleted, but the claim remains in the user's namespace. It can be used again and again by many pods.
-```
+```yaml
 POST:
 kind: Pod
@@ -229,17 +222,14 @@ spec:
     accessMode: ReadWriteOnce
     claimRef:
       name: myclaim-1
 ```
 #### Releasing a claim and Recycling a volume
 When a claim holder is finished with their data, they can delete their claim.
-```
-kubectl delete pvc myclaim-1
+```console
+$ kubectl delete pvc myclaim-1
 ```
 The ```PersistentVolumeClaimBinder``` will reconcile this by removing the claim reference from the PV and change the PVs status to 'Released'.


@@ -89,7 +89,7 @@ Both users and a number of system components, such as schedulers, (horizontal) a
 Resource requirements for a container or pod should have the following form:
-```
+```yaml
 resourceRequirementSpec: [
   request: [ cpu: 2.5, memory: "40Mi" ],
   limit: [ cpu: 4.0, memory: "99Mi" ],
@@ -103,7 +103,7 @@ Where:
 Total capacity for a node should have a similar structure:
-```
+```yaml
 resourceCapacitySpec: [
   total: [ cpu: 12, memory: "128Gi" ]
 ]
@@ -159,15 +159,16 @@ rather than decimal ones: "64MiB" rather than "64MB".
 A resource type may have an associated read-only ResourceType structure, that contains metadata about the type. For example:
-```
+```yaml
 resourceTypes: [
   "kubernetes.io/memory": [
     isCompressible: false, ...
   ]
   "kubernetes.io/cpu": [
-    isCompressible: true, internalScaleExponent: 3, ...
+    isCompressible: true,
+    internalScaleExponent: 3, ...
   ]
-  "kubernetes.io/disk-space": [ ... }
+  "kubernetes.io/disk-space": [ ... ]
 ]
 ```
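The binary-versus-decimal distinction in this hunk's context ("64MiB" rather than "64MB") is easy to get wrong, and the `internalScaleExponent: 3` entry hints at milli-unit internal accounting for cpu. A small Go sketch of both conversions (the milli-unit reading of `internalScaleExponent` is an assumption for illustration, not a documented API):

```go
package main

import "fmt"

// Binary multipliers for memory quantities; the resources doc above asks
// for binary suffixes ("64MiB", not "64MB") precisely because they differ.
const (
	Ki = int64(1) << 10
	Mi = int64(1) << 20
	Gi = int64(1) << 30
)

func main() {
	// "64Mi" vs a decimal "64M": roughly a 5% gap, which matters when
	// limits are enforced in bytes.
	fmt.Println(64*Mi, 64*1000*1000) // 67108864 64000000

	// internalScaleExponent: 3 (from the cpu entry above) suggests cpu is
	// held internally in milli-units: 2.5 cores -> 2500 millicores.
	// This interpretation is an assumption, not confirmed by the doc.
	cpu := 2.5
	fmt.Println(int64(cpu * 1000)) // 2500
}
```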
@@ -195,7 +196,7 @@ Because resource usage and related metrics change continuously, need to be tracked
 Singleton values for observed and predicted future usage will rapidly prove inadequate, so we will support the following structure for extended usage information:
-```
+```yaml
 resourceStatus: [
   usage: [ cpu: <CPU-info>, memory: <memory-info> ],
   maxusage: [ cpu: <CPU-info>, memory: <memory-info> ],
@@ -205,7 +206,7 @@ resourceStatus: [
 where a `<CPU-info>` or `<memory-info>` structure looks like this:
-```
+```yaml
 {
   mean: <value>  # arithmetic mean
   max: <value>   # minimum value
@@ -218,7 +219,7 @@ where a `<CPU-info>` or `<memory-info>` structure looks like this:
     "99.9": <99.9th-percentile-value>,
     ...
   ]
 }
 ```
 All parts of this structure are optional, although we strongly encourage including quantities for 50, 90, 95, 99, 99.5, and 99.9 percentiles. _[In practice, it will be important to include additional info such as the length of the time window over which the averages are calculated, the confidence level, and information-quality metrics such as the number of dropped or discarded data points.]_


@@ -62,7 +62,7 @@ To facilitate recovery in the case of a crash of the updating process itself, we
 Recovery is achieved by issuing the same command again:
-```
+```sh
 kubectl rolling-update foo [foo-v2] --image=myimage:v2
 ```