Replace ``` with ` when emphasizing something inline in docs/
parent fabd20afce
commit 4bef20df21
@@ -136,7 +136,7 @@ $ kube-apiserver -admission_control=LimitRanger
kubectl is modified to support the **LimitRange** resource.

-```kubectl describe``` provides a human-readable output of limits.
+`kubectl describe` provides a human-readable output of limits.

For example,
@@ -163,7 +163,7 @@ this being the resource most closely running at the prescribed quota limits.
kubectl is modified to support the **ResourceQuota** resource.

-```kubectl describe``` provides a human-readable output of quota.
+`kubectl describe` provides a human-readable output of quota.

For example,
@@ -38,41 +38,41 @@ This document captures the design of event compression.
## Background

-Kubernetes components can get into a state where they generate tons of events which are identical except for the timestamp. For example, when pulling a non-existing image, Kubelet will repeatedly generate ```image_not_existing``` and ```container_is_waiting``` events until upstream components correct the image. When this happens, the spam from the repeated events makes the entire event mechanism useless. It also appears to cause memory pressure in etcd (see [#3853](https://github.com/GoogleCloudPlatform/kubernetes/issues/3853)).
+Kubernetes components can get into a state where they generate tons of events which are identical except for the timestamp. For example, when pulling a non-existing image, Kubelet will repeatedly generate `image_not_existing` and `container_is_waiting` events until upstream components correct the image. When this happens, the spam from the repeated events makes the entire event mechanism useless. It also appears to cause memory pressure in etcd (see [#3853](https://github.com/GoogleCloudPlatform/kubernetes/issues/3853)).

## Proposal

-Each binary that generates events (for example, ```kubelet```) should keep track of previously generated events so that it can collapse recurring events into a single event instead of creating a new instance for each new event.
+Each binary that generates events (for example, `kubelet`) should keep track of previously generated events so that it can collapse recurring events into a single event instead of creating a new instance for each new event.

-Event compression should be best effort (not guaranteed). Meaning, in the worst case, ```n``` identical (minus timestamp) events may still result in ```n``` event entries.
+Event compression should be best effort (not guaranteed). Meaning, in the worst case, `n` identical (minus timestamp) events may still result in `n` event entries.

## Design

Instead of a single Timestamp, each event object [contains](../../pkg/api/types.go#L1111) the following fields:
-* ```FirstTimestamp util.Time```
+* `FirstTimestamp util.Time`
* The date/time of the first occurrence of the event.
-* ```LastTimestamp util.Time```
+* `LastTimestamp util.Time`
* The date/time of the most recent occurrence of the event.
* On first occurrence, this is equal to the FirstTimestamp.
-* ```Count int```
+* `Count int`
* The number of occurrences of this event between FirstTimestamp and LastTimestamp
* On first occurrence, this is 1.
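For orientation, a minimal Go sketch of those fields follows. It is abridged and hypothetical; the authoritative definition lives in `pkg/api/types.go` and uses `util.Time` rather than `time.Time`.

```go
package record

import "time"

// ObjectReference and EventSource are deliberately simplified stand-ins for
// the real API types, kept only so the fields named in this design appear.
type ObjectReference struct {
	Kind, Namespace, Name, UID, APIVersion string
}

type EventSource struct {
	Component, Host string
}

// Event is an abridged, hypothetical sketch of the compression-related fields.
type Event struct {
	InvolvedObject  ObjectReference
	Reason, Message string
	Source          EventSource

	// FirstTimestamp is the date/time of the first occurrence of the event.
	FirstTimestamp time.Time
	// LastTimestamp is the date/time of the most recent occurrence.
	// On first occurrence it equals FirstTimestamp.
	LastTimestamp time.Time
	// Count is the number of occurrences between FirstTimestamp and LastTimestamp.
	// On first occurrence it is 1.
	Count int
}
```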
Each binary that generates events:
* Maintains a historical record of previously generated events:
-* Implemented with ["Least Recently Used Cache"](https://github.com/golang/groupcache/blob/master/lru/lru.go) in [```pkg/client/record/events_cache.go```](../../pkg/client/record/events_cache.go).
+* Implemented with ["Least Recently Used Cache"](https://github.com/golang/groupcache/blob/master/lru/lru.go) in [`pkg/client/record/events_cache.go`](../../pkg/client/record/events_cache.go).
* The key in the cache is generated from the event object minus timestamps/count/transient fields, specifically the following events fields are used to construct a unique key for an event:
-* ```event.Source.Component```
-* ```event.Source.Host```
-* ```event.InvolvedObject.Kind```
-* ```event.InvolvedObject.Namespace```
-* ```event.InvolvedObject.Name```
-* ```event.InvolvedObject.UID```
-* ```event.InvolvedObject.APIVersion```
-* ```event.Reason```
-* ```event.Message```
+* `event.Source.Component`
+* `event.Source.Host`
+* `event.InvolvedObject.Kind`
+* `event.InvolvedObject.Namespace`
+* `event.InvolvedObject.Name`
+* `event.InvolvedObject.UID`
+* `event.InvolvedObject.APIVersion`
+* `event.Reason`
+* `event.Message`
* The LRU cache is capped at 4096 events. That means if a component (e.g. kubelet) runs for a long period of time and generates tons of unique events, the previously generated events cache will not grow unchecked in memory. Instead, after 4096 unique events are generated, the oldest events are evicted from the cache.
-* When an event is generated, the previously generated events cache is checked (see [```pkg/client/record/event.go```](../../pkg/client/record/event.go)).
+* When an event is generated, the previously generated events cache is checked (see [`pkg/client/record/event.go`](../../pkg/client/record/event.go)).
* If the key for the new event matches the key for a previously generated event (meaning all of the above fields match between the new event and some previously generated event), then the event is considered to be a duplicate and the existing event entry is updated in etcd:
* The new PUT (update) event API is called to update the existing event entry in etcd with the new last seen timestamp and count.
* The event is also updated in the previously generated events cache with an incremented count, updated last seen timestamp, name, and new resource version (all required to issue a future event update).
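A rough sketch of the dedup flow described above, reusing the abridged `Event` shape from the earlier sketch and the groupcache LRU package the design links to. `eventKey`, `eventLog`, and `recordEvent` are hypothetical names for illustration, not the actual `events_cache.go`/`event.go` code.

```go
package record

import (
	"fmt"
	"time"

	"github.com/golang/groupcache/lru"
)

// eventLog is the per-key state remembered for a previously generated event.
type eventLog struct {
	count           int
	firstTimestamp  time.Time
	lastTimestamp   time.Time
	name            string
	resourceVersion string
}

// cache holds up to 4096 previously generated events, as in the design.
var cache = lru.New(4096)

// eventKey builds the dedup key from everything except timestamps/count.
func eventKey(e Event) string {
	return fmt.Sprintf("%s/%s/%s/%s/%s/%s/%s/%s/%s",
		e.Source.Component, e.Source.Host,
		e.InvolvedObject.Kind, e.InvolvedObject.Namespace, e.InvolvedObject.Name,
		e.InvolvedObject.UID, e.InvolvedObject.APIVersion,
		e.Reason, e.Message)
}

// recordEvent either updates an existing event (PUT) or records a new one.
func recordEvent(e Event) {
	key := eventKey(e)
	if prev, ok := cache.Get(key); ok {
		log := prev.(*eventLog)
		log.count++
		log.lastTimestamp = e.LastTimestamp
		// here a PUT would be issued against the existing event,
		// using log.name and log.resourceVersion
		return
	}
	// first occurrence: create a new event, then remember it for future updates
	cache.Add(key, &eventLog{
		count:          1,
		firstTimestamp: e.FirstTimestamp,
		lastTimestamp:  e.LastTimestamp,
	})
}
```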
@@ -65,7 +65,7 @@ Kubernetes makes no guarantees at runtime that the underlying storage exists or
#### Describe available storage

-Cluster administrators use the API to manage *PersistentVolumes*. A custom store ```NewPersistentVolumeOrderedIndex``` will index volumes by access modes and sort by storage capacity. The ```PersistentVolumeClaimBinder``` watches for new claims for storage and binds them to an available volume by matching the volume's characteristics (AccessModes and storage size) to the user's request.
+Cluster administrators use the API to manage *PersistentVolumes*. A custom store `NewPersistentVolumeOrderedIndex` will index volumes by access modes and sort by storage capacity. The `PersistentVolumeClaimBinder` watches for new claims for storage and binds them to an available volume by matching the volume's characteristics (AccessModes and storage size) to the user's request.

PVs are system objects and, thus, have no namespace.
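The matching behavior can be sketched roughly as follows, with deliberately simplified, hypothetical types; the real logic lives in `NewPersistentVolumeOrderedIndex` and the claim binder.

```go
package volumematch

import "sort"

// Hypothetical, simplified shapes; the real types live in the Kubernetes API.
type PersistentVolume struct {
	Name        string
	AccessModes []string
	CapacityGiB int
}

type Claim struct {
	AccessModes []string
	RequestGiB  int
}

// matchVolume mimics the described behavior: keep volumes offering the
// requested access modes, sort by capacity, and return the smallest volume
// that satisfies the request (or nil if none does).
func matchVolume(claim Claim, volumes []PersistentVolume) *PersistentVolume {
	var candidates []PersistentVolume
	for _, v := range volumes {
		if hasModes(v.AccessModes, claim.AccessModes) {
			candidates = append(candidates, v)
		}
	}
	sort.Slice(candidates, func(i, j int) bool {
		return candidates[i].CapacityGiB < candidates[j].CapacityGiB
	})
	for i := range candidates {
		if candidates[i].CapacityGiB >= claim.RequestGiB {
			return &candidates[i]
		}
	}
	return nil
}

// hasModes reports whether every wanted access mode is offered.
func hasModes(offered, wanted []string) bool {
	set := map[string]bool{}
	for _, m := range offered {
		set[m] = true
	}
	for _, m := range wanted {
		if !set[m] {
			return false
		}
	}
	return true
}
```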
@@ -297,7 +297,7 @@ storing it. Secrets contain multiple pieces of data that are presented as differ
the secret volume (example: SSH key pair).

In order to remove the burden from the end user in specifying every file that a secret consists of,
-it should be possible to mount all files provided by a secret with a single ```VolumeMount``` entry
+it should be possible to mount all files provided by a secret with a single `VolumeMount` entry
in the container specification.
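As a rough illustration, with hypothetical, simplified struct shapes rather than the real API types, a single `VolumeMount` entry covering the whole secret volume might look like this:

```go
package example

// Hypothetical, simplified API shapes for illustration only.
type VolumeMount struct {
	Name      string // must match a volume name in the pod spec
	MountPath string // directory under which every file in the secret appears
	ReadOnly  bool
}

type Container struct {
	Name         string
	Image        string
	VolumeMounts []VolumeMount
}

// A single VolumeMount exposes every file of the secret volume
// "ssh-key-secret" under /etc/secret-volume, so the user does not
// have to enumerate each file individually.
var container = Container{
	Name:  "worker",
	Image: "example/worker",
	VolumeMounts: []VolumeMount{
		{Name: "ssh-key-secret", MountPath: "/etc/secret-volume", ReadOnly: true},
	},
}
```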
### Secret API Resource
@@ -349,7 +349,7 @@ finer points of secrets and resource allocation are fleshed out.
### Secret Volume Source

-A new `SecretSource` type of volume source will be added to the ```VolumeSource``` struct in the
+A new `SecretSource` type of volume source will be added to the `VolumeSource` struct in the
API:

```go
@@ -33,15 +33,15 @@ Documentation for other releases can be found at
## Simple rolling update

-This is a lightweight design document for simple [rolling update](../user-guide/kubectl/kubectl_rolling-update.md) in ```kubectl```.
+This is a lightweight design document for simple [rolling update](../user-guide/kubectl/kubectl_rolling-update.md) in `kubectl`.

Complete execution flow can be found [here](#execution-details). See the [example of rolling update](../user-guide/update-demo/) for more information.

### Lightweight rollout

-Assume that we have a current replication controller named ```foo``` and it is running image ```image:v1```
+Assume that we have a current replication controller named `foo` and it is running image `image:v1`

-```kubectl rolling-update foo [foo-v2] --image=myimage:v2```
+`kubectl rolling-update foo [foo-v2] --image=myimage:v2`

If the user doesn't specify a name for the 'next' replication controller, then the 'next' replication controller is renamed to
the name of the original replication controller.
@@ -50,15 +50,15 @@ Obviously there is a race here, where if you kill the client between delete foo,
See [Recovery](#recovery) below

If the user does specify a name for the 'next' replication controller, then the 'next' replication controller is retained with its existing name,
-and the old 'foo' replication controller is deleted. For the purposes of the rollout, we add a unique-ifying label ```kubernetes.io/deployment``` to both the ```foo``` and ```foo-next``` replication controllers.
-The value of that label is the hash of the complete JSON representation of the```foo-next``` or```foo``` replication controller. The name of this label can be overridden by the user with the ```--deployment-label-key``` flag.
+and the old 'foo' replication controller is deleted. For the purposes of the rollout, we add a unique-ifying label `kubernetes.io/deployment` to both the `foo` and `foo-next` replication controllers.
+The value of that label is the hash of the complete JSON representation of the`foo-next` or`foo` replication controller. The name of this label can be overridden by the user with the `--deployment-label-key` flag.
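A hedged sketch of computing such a label value; `hashController` is a hypothetical helper, and the FNV hash is only one plausible choice, not necessarily what kubectl uses.

```go
package rollingupdate

import (
	"encoding/json"
	"fmt"
	"hash/fnv"
)

// hashController returns a label-safe hash of the controller's complete JSON
// representation. The controller argument can be any JSON-serializable
// replication controller object.
func hashController(controller interface{}) (string, error) {
	data, err := json.Marshal(controller)
	if err != nil {
		return "", err
	}
	h := fnv.New32a()
	h.Write(data)
	return fmt.Sprintf("%x", h.Sum32()), nil
}

// The result would then be applied as, e.g.,
//   labels["kubernetes.io/deployment"] = hash
// (or under the key given by --deployment-label-key).
```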
#### Recovery

If a rollout fails or is terminated in the middle, it is important that the user be able to resume the roll out.
-To facilitate recovery in the case of a crash of the updating process itself, we add the following annotations to each replication controller in the ```kubernetes.io/``` annotation namespace:
-* ```desired-replicas``` The desired number of replicas for this replication controller (either N or zero)
-* ```update-partner``` A pointer to the replication controller resource that is the other half of this update (syntax ```<name>``` the namespace is assumed to be identical to the namespace of this replication controller.)
+To facilitate recovery in the case of a crash of the updating process itself, we add the following annotations to each replication controller in the `kubernetes.io/` annotation namespace:
+* `desired-replicas` The desired number of replicas for this replication controller (either N or zero)
+* `update-partner` A pointer to the replication controller resource that is the other half of this update (syntax `<name>` the namespace is assumed to be identical to the namespace of this replication controller.)

Recovery is achieved by issuing the same command again:
@@ -66,70 +66,70 @@ Recovery is achieved by issuing the same command again:
kubectl rolling-update foo [foo-v2] --image=myimage:v2
```

-Whenever the rolling update command executes, the kubectl client looks for replication controllers called ```foo``` and ```foo-next```, if they exist, an attempt is
-made to roll ```foo``` to ```foo-next```. If ```foo-next``` does not exist, then it is created, and the rollout is a new rollout. If ```foo``` doesn't exist, then
-it is assumed that the rollout is nearly completed, and ```foo-next``` is renamed to ```foo```. Details of the execution flow are given below.
+Whenever the rolling update command executes, the kubectl client looks for replication controllers called `foo` and `foo-next`, if they exist, an attempt is
+made to roll `foo` to `foo-next`. If `foo-next` does not exist, then it is created, and the rollout is a new rollout. If `foo` doesn't exist, then
+it is assumed that the rollout is nearly completed, and `foo-next` is renamed to `foo`. Details of the execution flow are given below.

### Aborting a rollout

Abort is assumed to want to reverse a rollout in progress.

-```kubectl rolling-update foo [foo-v2] --rollback```
+`kubectl rolling-update foo [foo-v2] --rollback`

This is really just semantic sugar for:

-```kubectl rolling-update foo-v2 foo```
+`kubectl rolling-update foo-v2 foo`

-With the added detail that it moves the ```desired-replicas``` annotation from ```foo-v2``` to ```foo```
+With the added detail that it moves the `desired-replicas` annotation from `foo-v2` to `foo`

### Execution Details

-For the purposes of this example, assume that we are rolling from ```foo``` to ```foo-next``` where the only change is an image update from `v1` to `v2`
+For the purposes of this example, assume that we are rolling from `foo` to `foo-next` where the only change is an image update from `v1` to `v2`

-If the user doesn't specify a ```foo-next``` name, then it is either discovered from the ```update-partner``` annotation on ```foo```. If that annotation doesn't exist,
-then ```foo-next``` is synthesized using the pattern ```<controller-name>-<hash-of-next-controller-JSON>```
+If the user doesn't specify a `foo-next` name, then it is either discovered from the `update-partner` annotation on `foo`. If that annotation doesn't exist,
+then `foo-next` is synthesized using the pattern `<controller-name>-<hash-of-next-controller-JSON>`

#### Initialization

-* If ```foo``` and ```foo-next``` do not exist:
+* If `foo` and `foo-next` do not exist:
* Exit, and indicate an error to the user, that the specified controller doesn't exist.
-* If ```foo``` exists, but ```foo-next``` does not:
-* Create ```foo-next``` populate it with the ```v2``` image, set ```desired-replicas``` to ```foo.Spec.Replicas```
+* If `foo` exists, but `foo-next` does not:
+* Create `foo-next` populate it with the `v2` image, set `desired-replicas` to `foo.Spec.Replicas`
* Goto Rollout
-* If ```foo-next``` exists, but ```foo``` does not:
+* If `foo-next` exists, but `foo` does not:
* Assume that we are in the rename phase.
* Goto Rename
-* If both ```foo``` and ```foo-next``` exist:
+* If both `foo` and `foo-next` exist:
* Assume that we are in a partial rollout
-* If ```foo-next``` is missing the ```desired-replicas``` annotation
-* Populate the ```desired-replicas``` annotation to ```foo-next``` using the current size of ```foo```
+* If `foo-next` is missing the `desired-replicas` annotation
+* Populate the `desired-replicas` annotation to `foo-next` using the current size of `foo`
* Goto Rollout
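A rough sketch of this decision logic follows; the helper is hypothetical, not the kubectl source.

```go
package rollingupdate

// initialize decides which phase of the rolling update to resume, based on
// which replication controllers currently exist.
func initialize(fooExists, fooNextExists bool) string {
	switch {
	case !fooExists && !fooNextExists:
		return "error: specified controller does not exist"
	case fooExists && !fooNextExists:
		// create foo-next with the v2 image, set desired-replicas to foo.Spec.Replicas
		return "rollout"
	case !fooExists && fooNextExists:
		// assume we are in the rename phase
		return "rename"
	default:
		// partial rollout; ensure the desired-replicas annotation is set on foo-next
		return "rollout"
	}
}
```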
#### Rollout

-* While size of ```foo-next``` < ```desired-replicas``` annotation on ```foo-next```
-* increase size of ```foo-next```
-* if size of ```foo``` > 0
-decrease size of ```foo```
+* While size of `foo-next` < `desired-replicas` annotation on `foo-next`
+* increase size of `foo-next`
+* if size of `foo` > 0
+decrease size of `foo`
* Goto Rename
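A minimal sketch of this loop, with hypothetical `getSize`/`resize` helpers standing in for API calls.

```go
package rollingupdate

// rollout grows foo-next and shrinks foo one replica at a time until foo-next
// reaches the desired-replicas annotation, per the loop described above.
func rollout(foo, fooNext string, desiredReplicas int,
	getSize func(string) int, resize func(string, int)) {
	for getSize(fooNext) < desiredReplicas {
		resize(fooNext, getSize(fooNext)+1)
		if getSize(foo) > 0 {
			resize(foo, getSize(foo)-1)
		}
	}
	// Goto Rename: foo is deleted and recreated from foo-next.
}
```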
#### Rename

-* delete ```foo```
-* create ```foo``` that is identical to ```foo-next```
-* delete ```foo-next```
+* delete `foo`
+* create `foo` that is identical to `foo-next`
+* delete `foo-next`

#### Abort

-* If ```foo-next``` doesn't exist
+* If `foo-next` doesn't exist
* Exit and indicate to the user that they may want to simply do a new rollout with the old version
-* If ```foo``` doesn't exist
+* If `foo` doesn't exist
* Exit and indicate not found to the user
-* Otherwise, ```foo-next``` and ```foo``` both exist
-* Set ```desired-replicas``` annotation on ```foo``` to match the annotation on ```foo-next```
-* Goto Rollout with ```foo``` and ```foo-next``` trading places.
+* Otherwise, `foo-next` and `foo` both exist
+* Set `desired-replicas` annotation on `foo` to match the annotation on `foo-next`
+* Goto Rollout with `foo` and `foo-next` trading places.

<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->