Copy edits for spelling errors and typos

Signed-off-by: Ed Costello <epc@epcostello.com>
Ed Costello 2015-06-11 01:11:44 -04:00
parent 867825849d
commit a407b64a3d
6 changed files with 8 additions and 8 deletions


@@ -38,7 +38,7 @@ The proposed solution will provide a range of options for setting up and maintai
The building blocks of an easier solution:
-* **Move to TLS** We will move to using TLS for all intra-cluster communication. We will explicitly idenitfy the trust chain (the set of trusted CAs) as opposed to trusting the system CAs. We will also use client certificates for all AuthN.
+* **Move to TLS** We will move to using TLS for all intra-cluster communication. We will explicitly identify the trust chain (the set of trusted CAs) as opposed to trusting the system CAs. We will also use client certificates for all AuthN.
* [optional] **API driven CA** Optionally, we will run a CA in the master that will mint certificates for the nodes/kubelets. There will be pluggable policies that will automatically approve certificate requests here as appropriate.
* **CA approval policy** This is a pluggable policy object that can automatically approve CA signing requests. Stock policies will include `always-reject`, `queue` and `insecure-always-approve`. With `queue` there would be an API for evaluating and accepting/rejecting requests. Cloud providers could implement a policy here that verifies other out of band information and automatically approves/rejects based on other external factors.
* **Scoped Kubelet Accounts** These accounts are per-minion and (optionally) give a minion permission to register itself.
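
The proposal names the stock policies but does not define an interface for them. Purely as illustration, here is a minimal Go sketch of what a pluggable approval policy and the stock `always-reject`, `queue`, and `insecure-always-approve` policies might look like; every type and field name below is hypothetical, not taken from the proposal:

```go
// Hypothetical sketch only; the proposal does not define these types.
package approval

// CertificateSigningRequest stands in for what a kubelet would submit to the
// in-master CA (only the fields a policy might inspect).
type CertificateSigningRequest struct {
	NodeName string
	CSRPEM   []byte // PEM-encoded signing request
}

// Decision is a policy's verdict on a request.
type Decision int

const (
	Pending Decision = iota // leave queued for manual review via the API
	Approve
	Reject
)

// Policy is the pluggable object; a cloud provider could implement it using
// out-of-band information about the requesting node.
type Policy interface {
	Evaluate(req CertificateSigningRequest) Decision
}

// The stock policies, sketched as trivial implementations.
type alwaysReject struct{}
type insecureAlwaysApprove struct{}
type queue struct{}

func (alwaysReject) Evaluate(CertificateSigningRequest) Decision          { return Reject }
func (insecureAlwaysApprove) Evaluate(CertificateSigningRequest) Decision { return Approve }
func (queue) Evaluate(CertificateSigningRequest) Decision                 { return Pending }
```

A provider-specific policy would return `Approve` or `Reject` from `Evaluate` based on whatever external checks it performs, which matches the pluggable behavior described in the list above.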


@@ -25,7 +25,7 @@ Instead of a single Timestamp, each event object [contains](https://github.com/G
Each binary that generates events:
* Maintains a historical record of previously generated events:
-  * Implmented with ["Least Recently Used Cache"](https://github.com/golang/groupcache/blob/master/lru/lru.go) in [```pkg/client/record/events_cache.go```](https://github.com/GoogleCloudPlatform/kubernetes/tree/master/pkg/client/record/events_cache.go).
+  * Implemented with ["Least Recently Used Cache"](https://github.com/golang/groupcache/blob/master/lru/lru.go) in [```pkg/client/record/events_cache.go```](https://github.com/GoogleCloudPlatform/kubernetes/tree/master/pkg/client/record/events_cache.go).
* The key in the cache is generated from the event object minus timestamps/count/transient fields, specifically the following events fields are used to construct a unique key for an event:
* ```event.Source.Component```
* ```event.Source.Host```
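
Only the first two key fields are visible in this hunk, so the `Reason` and `Message` fields below are assumptions, and the `Event` type is a simplified stand-in rather than the real API object. A rough Go illustration of the dedup idea, using the same LRU package the doc links to:

```go
package main

import (
	"fmt"
	"strings"

	"github.com/golang/groupcache/lru"
)

type Event struct {
	SourceComponent string
	SourceHost      string
	Reason          string
	Message         string
	Count           int // transient: excluded from the cache key
}

// eventKey joins the stable fields, mirroring the idea of keying on "the
// event object minus timestamps/count/transient fields".
func eventKey(e Event) string {
	return strings.Join([]string{e.SourceComponent, e.SourceHost, e.Reason, e.Message}, "|")
}

func main() {
	cache := lru.New(4096) // bounded record of previously generated events

	record := func(e Event) {
		key := eventKey(e)
		if prev, ok := cache.Get(key); ok {
			e.Count = prev.(Event).Count + 1 // duplicate: bump the count instead of emitting a new event
		} else {
			e.Count = 1
		}
		cache.Add(key, e)
		fmt.Printf("event %q on %s seen %d time(s)\n", e.Reason, e.SourceHost, e.Count)
	}

	ev := Event{SourceComponent: "kubelet", SourceHost: "node-1", Reason: "pulled", Message: "image pulled"}
	record(ev)
	record(ev) // deduplicated against the first occurrence
}
```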


@@ -55,7 +55,7 @@ available to subsequent expansions.
### Use Case: Variable expansion in command
Users frequently need to pass the values of environment variables to a container's command.
-Currently, Kubernetes does not perform any expansion of varibles. The workaround is to invoke a
+Currently, Kubernetes does not perform any expansion of variables. The workaround is to invoke a
shell in the container's command and have the shell perform the substitution, or to write a wrapper
script that sets up the environment and runs the command. This has a number of drawbacks:
@@ -116,7 +116,7 @@ expanded, then `$(VARIABLE_NAME)` should be present in the output.
Although the `$(var)` syntax does overlap with the `$(command)` form of command substitution
supported by many shells, because unexpanded variables are present verbatim in the output, we
-expect this will not present a problem to many users. If there is a collision between a varible
+expect this will not present a problem to many users. If there is a collision between a variable
name and command substitution syntax, the syntax can be escaped with the form `$$(VARIABLE_NAME)`,
which will evaluate to `$(VARIABLE_NAME)` whether `VARIABLE_NAME` can be expanded or not.
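
As an illustration of these rules only (this is not the actual Kubernetes expansion code, and the variable names are made up), a small Go sketch that expands defined `$(VAR)` references, leaves undefined ones verbatim, and treats `$$(VAR)` as an escape:

```go
package main

import (
	"fmt"
	"strings"
)

// expand applies the rules described in the excerpt above: defined variables
// are substituted, undefined references are left verbatim, and $$(VAR)
// escapes to a literal $(VAR).
func expand(input string, vars map[string]string) string {
	var out strings.Builder
	for i := 0; i < len(input); i++ {
		c := input[i]
		if c != '$' {
			out.WriteByte(c)
			continue
		}
		// "$$" collapses to "$", so "$$(VAR)" yields a literal "$(VAR)".
		if i+1 < len(input) && input[i+1] == '$' {
			out.WriteByte('$')
			i++
			continue
		}
		// "$(NAME)": expand if NAME is defined, otherwise keep verbatim.
		if i+1 < len(input) && input[i+1] == '(' {
			if end := strings.IndexByte(input[i+2:], ')'); end >= 0 {
				name := input[i+2 : i+2+end]
				if val, ok := vars[name]; ok {
					out.WriteString(val)
				} else {
					out.WriteString(input[i : i+3+end])
				}
				i += 2 + end
				continue
			}
		}
		out.WriteByte('$')
	}
	return out.String()
}

func main() {
	vars := map[string]string{"SERVICE_HOST": "10.0.0.5"} // illustrative value
	fmt.Println(expand("mysql --host=$(SERVICE_HOST)", vars)) // expanded
	fmt.Println(expand("echo $(UNDEFINED_VAR)", vars))        // left verbatim
	fmt.Println(expand("price is $$(5)", vars))               // escaped: prints "price is $(5)"
}
```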


@@ -22,13 +22,13 @@ While Kubernetes today is not primarily a multi-tenant system, the long term evo
We define "user" as a unique identity accessing the Kubernetes API server, which may be a human or an automated process. Human users fall into the following categories:
-1. k8s admin - administers a kubernetes cluster and has access to the undelying components of the system
+1. k8s admin - administers a kubernetes cluster and has access to the underlying components of the system
2. k8s project administrator - administrates the security of a small subset of the cluster
3. k8s developer - launches pods on a kubernetes cluster and consumes cluster resources
Automated process users fall into the following categories:
-1. k8s container user - a user that processes running inside a container (on the cluster) can use to access other cluster resources indepedent of the human users attached to a project
+1. k8s container user - a user that processes running inside a container (on the cluster) can use to access other cluster resources independent of the human users attached to a project
2. k8s infrastructure user - the user that kubernetes infrastructure components use to perform cluster functions with clearly defined roles


@@ -13,7 +13,7 @@ Processes in Pods may need to call the Kubernetes API. For example:
They also may interact with services other than the Kubernetes API, such as:
- an image repository, such as docker -- both when the images are pulled to start the containers, and for writing
images in the case of pods that generate images.
-- accessing other cloud services, such as blob storage, in the context of a larged, integrated, cloud offering (hosted
+- accessing other cloud services, such as blob storage, in the context of a large, integrated, cloud offering (hosted
or private).
- accessing files in an NFS volume attached to the pod


@@ -22,7 +22,7 @@ The value of that label is the hash of the complete JSON representation of the```
If a rollout fails or is terminated in the middle, it is important that the user be able to resume the roll out.
To facilitate recovery in the case of a crash of the updating process itself, we add the following annotations to each replicaController in the ```kubernetes.io/``` annotation namespace:
* ```desired-replicas``` The desired number of replicas for this controller (either N or zero)
-* ```update-partner``` A pointer to the replicaiton controller resource that is the other half of this update (syntax ```<name>``` the namespace is assumed to be identical to the namespace of this replication controller.)
+* ```update-partner``` A pointer to the replication controller resource that is the other half of this update (syntax ```<name>``` the namespace is assumed to be identical to the namespace of this replication controller.)
Recovery is achieved by issuing the same command again:
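
The command itself is cut off by this hunk and is not reproduced here. Purely as illustration (the `ReplicationController` struct, helper, and controller names below are stand-ins, not the real client types), a Go sketch of how the two annotations described above could be read back to resume an interrupted rollout:

```go
package main

import (
	"fmt"
	"strconv"
)

// ReplicationController is a pared-down stand-in for the real API object.
type ReplicationController struct {
	Name        string
	Replicas    int
	Annotations map[string]string
}

// resumeState reads back the kubernetes.io/ annotations written before the
// updating process crashed.
func resumeState(rc ReplicationController) (desired int, partner string, err error) {
	desired, err = strconv.Atoi(rc.Annotations["kubernetes.io/desired-replicas"])
	if err != nil {
		return 0, "", fmt.Errorf("%s has no recovery annotations: %v", rc.Name, err)
	}
	// The partner is recorded by name only; it is assumed to live in the
	// same namespace as this replication controller.
	partner = rc.Annotations["kubernetes.io/update-partner"]
	return desired, partner, nil
}

func main() {
	// Illustrative state of a rollout that was interrupted partway through.
	newRC := ReplicationController{
		Name:     "frontend-v2",
		Replicas: 2,
		Annotations: map[string]string{
			"kubernetes.io/desired-replicas": "5",
			"kubernetes.io/update-partner":   "frontend-v1",
		},
	}
	desired, partner, err := resumeState(newRC)
	if err != nil {
		panic(err)
	}
	// Re-issuing the update can continue scaling toward the recorded goal.
	fmt.Printf("resume: scale %s from %d to %d, then scale down %s\n",
		newRC.Name, newRC.Replicas, desired, partner)
}
```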