diff --git a/clustering.md b/clustering.md
index d57d631da..4cef06f85 100644
--- a/clustering.md
+++ b/clustering.md
@@ -38,7 +38,7 @@ The proposed solution will provide a range of options for setting up and maintai
 
 The building blocks of an easier solution:
 
-* **Move to TLS** We will move to using TLS for all intra-cluster communication. We will explicitly idenitfy the trust chain (the set of trusted CAs) as opposed to trusting the system CAs. We will also use client certificates for all AuthN.
+* **Move to TLS** We will move to using TLS for all intra-cluster communication. We will explicitly identify the trust chain (the set of trusted CAs) as opposed to trusting the system CAs. We will also use client certificates for all AuthN.
 * [optional] **API driven CA** Optionally, we will run a CA in the master that will mint certificates for the nodes/kubelets. There will be pluggable policies that will automatically approve certificate requests here as appropriate.
 * **CA approval policy** This is a pluggable policy object that can automatically approve CA signing requests. Stock policies will include `always-reject`, `queue` and `insecure-always-approve`. With `queue` there would be an API for evaluating and accepting/rejecting requests. Cloud providers could implement a policy here that verifies other out of band information and automatically approves/rejects based on other external factors.
 * **Scoped Kubelet Accounts** These accounts are per-minion and (optionally) give a minion permission to register itself.
diff --git a/event_compression.md b/event_compression.md
index db0337f02..74aba66f1 100644
--- a/event_compression.md
+++ b/event_compression.md
@@ -25,7 +25,7 @@ Instead of a single Timestamp, each event object [contains](https://github.com/G
 Each binary that generates events:
 
 * Maintains a historical record of previously generated events:
-  * Implmented with ["Least Recently Used Cache"](https://github.com/golang/groupcache/blob/master/lru/lru.go) in [```pkg/client/record/events_cache.go```](https://github.com/GoogleCloudPlatform/kubernetes/tree/master/pkg/client/record/events_cache.go).
+  * Implemented with ["Least Recently Used Cache"](https://github.com/golang/groupcache/blob/master/lru/lru.go) in [```pkg/client/record/events_cache.go```](https://github.com/GoogleCloudPlatform/kubernetes/tree/master/pkg/client/record/events_cache.go).
   * The key in the cache is generated from the event object minus timestamps/count/transient fields, specifically the following events fields are used to construct a unique key for an event:
     * ```event.Source.Component```
     * ```event.Source.Host```
diff --git a/expansion.md b/expansion.md
index f4c85e8d3..8b31526a3 100644
--- a/expansion.md
+++ b/expansion.md
@@ -55,7 +55,7 @@ available to subsequent expansions.
 ### Use Case: Variable expansion in command
 
 Users frequently need to pass the values of environment variables to a container's command.
-Currently, Kubernetes does not perform any expansion of varibles. The workaround is to invoke a
+Currently, Kubernetes does not perform any expansion of variables. The workaround is to invoke a
 shell in the container's command and have the shell perform the substitution, or to write a
 wrapper script that sets up the environment and runs the command. This has a number of
 drawbacks:
@@ -116,7 +116,7 @@ expanded, then `$(VARIABLE_NAME)` should be present in the output.
 
 Although the `$(var)` syntax does overlap with the `$(command)` form of command substitution
 supported by many shells, because unexpanded variables are present verbatim in the output, we
-expect this will not present a problem to many users. If there is a collision between a varible
+expect this will not present a problem to many users. If there is a collision between a variable
 name and command substitution syntax, the syntax can be escaped with the form `$$(VARIABLE_NAME)`,
 which will evaluate to `$(VARIABLE_NAME)` whether `VARIABLE_NAME` can be expanded or not.
 
diff --git a/security.md b/security.md
index 26d543c97..6ea611b79 100644
--- a/security.md
+++ b/security.md
@@ -22,13 +22,13 @@ While Kubernetes today is not primarily a multi-tenant system, the long term evo
 We define "user" as a unique identity accessing the Kubernetes API server, which may be a human or an automated process. Human users fall into the following categories:
 
-1. k8s admin - administers a kubernetes cluster and has access to the undelying components of the system
+1. k8s admin - administers a kubernetes cluster and has access to the underlying components of the system
 2. k8s project administrator - administrates the security of a small subset of the cluster
 3. k8s developer - launches pods on a kubernetes cluster and consumes cluster resources
 
 Automated process users fall into the following categories:
 
-1. k8s container user - a user that processes running inside a container (on the cluster) can use to access other cluster resources indepedent of the human users attached to a project
+1. k8s container user - a user that processes running inside a container (on the cluster) can use to access other cluster resources independent of the human users attached to a project
 2. k8s infrastructure user - the user that kubernetes infrastructure components use to perform cluster functions with clearly defined roles
 
diff --git a/service_accounts.md b/service_accounts.md
index 72a102070..e87e8e6c2 100644
--- a/service_accounts.md
+++ b/service_accounts.md
@@ -13,7 +13,7 @@ Processes in Pods may need to call the Kubernetes API. For example:
 
 They also may interact with services other than the Kubernetes API, such as:
   - an image repository, such as docker -- both when the images are pulled to start the containers, and for writing
    images in the case of pods that generate images.
-  - accessing other cloud services, such as blob storage, in the context of a larged, integrated, cloud offering (hosted
+  - accessing other cloud services, such as blob storage, in the context of a large, integrated, cloud offering (hosted
    or private).
  - accessing files in an NFS volume attached to the pod
diff --git a/simple-rolling-update.md b/simple-rolling-update.md
index fed1b84f4..e5b47d98a 100644
--- a/simple-rolling-update.md
+++ b/simple-rolling-update.md
@@ -22,7 +22,7 @@ The value of that label is the hash of the complete JSON representation of the``
 If a rollout fails or is terminated in the middle, it is important that the user be able to resume the roll out.
 To facilitate recovery in the case of a crash of the updating process itself, we add the following annotations to each replicaController in the ```kubernetes.io/``` annotation namespace:
   * ```desired-replicas``` The desired number of replicas for this controller (either N or zero)
-  * ```update-partner``` A pointer to the replicaiton controller resource that is the other half of this update (syntax `````` the namespace is assumed to be identical to the namespace of this replication controller.)
+  * ```update-partner``` A pointer to the replication controller resource that is the other half of this update (syntax `````` the namespace is assumed to be identical to the namespace of this replication controller.)
 
 Recovery is achieved by issuing the same command again: