Qualified all references to "controller" so that references to "replication controller" are clear. fixes #9404

Also ran hack/run-gendocs.sh
RichieEscarez 2015-06-16 14:48:51 -07:00
parent a407b64a3d
commit 9e35c48d4a
3 changed files with 9 additions and 9 deletions

@@ -193,7 +193,7 @@ K8s authorization should:
- Allow for a range of maturity levels, from single-user for those test driving the system, to integration with existing enterprise authorization systems.
- Allow for centralized management of users and policies. In some organizations, this will mean that the definition of users and access policies needs to reside on a system other than k8s and encompass other web services (such as a storage service).
- Allow processes running in K8s Pods to take on identity, and to allow narrow scoping of permissions for those identities in order to limit damage from software faults.
-- Have Authorization Policies exposed as API objects so that a single config file can create or delete Pods, Controllers, Services, and the identities and policies for those Pods and Controllers.
+- Have Authorization Policies exposed as API objects so that a single config file can create or delete Pods, Replication Controllers, Services, and the identities and policies for those Pods and Replication Controllers.
- Be separate as much as practical from Authentication, to allow Authentication methods to change over time and space, without impacting Authorization policies.
K8s will implement a relatively simple

@@ -5,7 +5,7 @@
Processes in Pods may need to call the Kubernetes API. For example:
- scheduler
- replication controller
-- minion controller
+- node controller
- a map-reduce type framework which has a controller that tries to create a dynamically determined number of workers and watch them
- continuous build and push system
- monitoring system
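
For illustration, a minimal sketch of how such a process might call the API from inside a Pod. The token path and the ```kubernetes.default``` address are assumptions about cluster setup, not details taken from this document:

```sh
# Read the bearer token conventionally mounted into the Pod
# (assumption: the cluster provisions a service account token at this path).
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)

# Call the API server, authenticating as the Pod's identity.
curl -sS --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  -H "Authorization: Bearer ${TOKEN}" \
  https://kubernetes.default/api/v1/namespaces/default/replicationcontrollers
```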

@@ -8,20 +8,20 @@ Assume that we have a current replication controller named ```foo``` and it is r
```kubectl rolling-update rc foo [foo-v2] --image=myimage:v2```
-If the user doesn't specify a name for the 'next' controller, then the 'next' controller is renamed to
-the name of the original controller.
+If the user doesn't specify a name for the 'next' replication controller, then the 'next' replication controller is renamed to
+the name of the original replication controller.
Obviously there is a race here: if you kill the client between deleting ```foo``` and creating the new version of ```foo```, you might be surprised about what is there, but I think that's ok.
See [Recovery](#recovery) below.
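
For concreteness, a sketch of the no-name flow, following the command form shown at the top of this section (the intermediate name is illustrative; the document does not specify how it is generated):

```sh
# No 'next' name is given, so kubectl manages the intermediate controller itself.
kubectl rolling-update rc foo --image=myimage:v2

# Per the text above, roughly:
#   1. an intermediate replication controller is created with the new image
#      (its generated name, e.g. 'foo-<hash>', is a guess, not specified here);
#   2. it is scaled up while 'foo' is scaled down;
#   3. 'foo' is deleted and the intermediate controller is renamed to 'foo'.
```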
-If the user does specify a name for the 'next' controller, then the 'next' controller is retained with its existing name,
-and the old 'foo' controller is deleted. For the purposes of the rollout, we add a unique-ifying label ```kubernetes.io/deployment``` to both the ```foo``` and ```foo-next``` controllers.
-The value of that label is the hash of the complete JSON representation of the ```foo-next``` or ```foo``` controller. The name of this label can be overridden by the user with the ```--deployment-label-key``` flag.
+If the user does specify a name for the 'next' replication controller, then the 'next' replication controller is retained with its existing name,
+and the old 'foo' replication controller is deleted. For the purposes of the rollout, we add a unique-ifying label ```kubernetes.io/deployment``` to both the ```foo``` and ```foo-next``` replication controllers.
+The value of that label is the hash of the complete JSON representation of the ```foo-next``` or ```foo``` replication controller. The name of this label can be overridden by the user with the ```--deployment-label-key``` flag.
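
A sketch of the named variant together with the label-key override described above; the key ```example.com/deployment``` and the inspection step are illustrative:

```sh
# 'foo-v2' is retained as the final name; the old 'foo' is deleted at the end.
kubectl rolling-update rc foo foo-v2 --image=myimage:v2 \
  --deployment-label-key=example.com/deployment

# During the rollout both controllers carry that label, whose value is the
# hash of the controller's JSON representation; inspect it, for example, with:
kubectl get rc foo-v2 -o yaml
```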
#### Recovery
If a rollout fails or is terminated in the middle, it is important that the user be able to resume the rollout.
-To facilitate recovery in the case of a crash of the updating process itself, we add the following annotations to each replicaController in the ```kubernetes.io/``` annotation namespace:
-* ```desired-replicas``` The desired number of replicas for this controller (either N or zero)
+To facilitate recovery in the case of a crash of the updating process itself, we add the following annotations to each replication controller in the ```kubernetes.io/``` annotation namespace:
+* ```desired-replicas``` The desired number of replicas for this replication controller (either N or zero)
* ```update-partner``` A pointer to the replication controller resource that is the other half of this update (syntax ```<name>```; the namespace is assumed to be identical to the namespace of this replication controller.)
Recovery is achieved by issuing the same command again.
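
A sketch of resumption with the names used earlier; the annotation keys are the ones listed above, and reading them via ```-o yaml``` is just one option:

```sh
# Check the recovery annotations recorded under the kubernetes.io/ prefix.
kubectl get rc foo -o yaml     # look for desired-replicas and update-partner
kubectl get rc foo-v2 -o yaml

# Re-issue the identical command; the update resumes where it left off.
kubectl rolling-update rc foo foo-v2 --image=myimage:v2
```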