Merge pull request #1562 from cofyc/typofix

fix typos in storage docs
commit 92f6bad6a2
Author: k8s-ci-robot (committed by GitHub)
Date: 2018-01-04 12:14:18 -08:00
4 changed files with 16 additions and 16 deletions


@@ -73,7 +73,7 @@ new controller will be:
* Watch for pvc update requests and add pvc to controller's work queue if a increase in volume size was requested. Once PVC is added to
controller's work queue - `pvc.Status.Conditions` will be updated with `ResizeStarted: True`.
* For unbound or pending PVCs - resize will trigger no action in `volume_expand_controller`.
-* If `pv.Spec.Capacity` already is of size greater or equal than requested size, similarly no action will be perfomed by the controller.
+* If `pv.Spec.Capacity` already is of size greater or equal than requested size, similarly no action will be performed by the controller.
* A separate goroutine will read work queue and perform corresponding volume resize operation. If there is a resize operation in progress
for same volume then resize request will be pending and retried once previous resize request has completed.
* Controller resize in effect will be level based rather than edge based. If there are more than one pending resize request for same PVC then
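As an illustrative aside to the workflow in this hunk: the level-based behavior can be sketched with a queue that keeps only the latest requested size per PVC. The `resizeQueue` type and the key/size representation below are hypothetical simplifications, not the actual controller code.

```go
package main

import (
	"fmt"
	"sync"
)

// resizeQueue is a hypothetical sketch of a level-based work queue: only the
// latest requested size per PVC is kept, so multiple pending resize requests
// for the same PVC collapse into a single operation to the newest target.
type resizeQueue struct {
	mu      sync.Mutex
	pending map[string]int64 // "namespace/pvc-name" -> requested size in bytes
}

func newResizeQueue() *resizeQueue {
	return &resizeQueue{pending: map[string]int64{}}
}

// Add records the most recent requested size, overwriting any earlier request.
func (q *resizeQueue) Add(pvcKey string, size int64) {
	q.mu.Lock()
	defer q.mu.Unlock()
	q.pending[pvcKey] = size
}

// processOne pops one PVC and runs the supplied resize function. On failure
// the request is re-queued so it is retried after the current pass.
func (q *resizeQueue) processOne(resize func(pvcKey string, size int64) error) bool {
	q.mu.Lock()
	var key string
	var size int64
	found := false
	for k, s := range q.pending {
		key, size, found = k, s, true
		delete(q.pending, k)
		break
	}
	q.mu.Unlock()
	if !found {
		return false
	}
	if err := resize(key, size); err != nil {
		q.Add(key, size) // retry later, e.g. once an in-progress resize settles
	}
	return true
}

func main() {
	q := newResizeQueue()
	q.Add("default/data-pvc", 10<<30)
	q.Add("default/data-pvc", 20<<30) // supersedes the 10Gi request
	q.processOne(func(key string, size int64) error {
		fmt.Printf("resizing %s to %d bytes\n", key, size)
		return nil
	})
}
```

Because only the most recent desired size is stored per PVC, repeated resize requests do not accumulate; the worker always resizes to the latest target.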
@@ -110,7 +110,7 @@ func (og *operationGenerator) GenerateExpandVolumeFunc(
return expandErr
}
-// CloudProvider resize succeded - lets mark api objects as resized
+// CloudProvider resize succeeded - lets mark api objects as resized
if expanderPlugin.RequiresFSResize() {
err := resizeMap.MarkForFileSystemResize(pvcWithResizeRequest)
if err != nil {
@@ -180,7 +180,7 @@ This can be done by checking `pvc.Status.Conditions` during force detach. `Attac
#### Reduce coupling between resize operation and file system type
A file system resize in general requires presence of tools such as `resize2fs` or `xfs_growfs` on the host where kubelet is running. There is a concern
-that open coding call to different resize tools direclty in Kubernetes will result in coupling between file system and resize operation. To solve this problem
+that open coding call to different resize tools directly in Kubernetes will result in coupling between file system and resize operation. To solve this problem
we have considered following options:
1. Write a library that abstracts away various file system operations, such as - resizing, formatting etc.
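To make option 1 above concrete, here is a minimal sketch of what such an abstraction library could look like. The `FSResizer` interface and its implementations are hypothetical, though the underlying tools (`resize2fs` for ext filesystems, `xfs_growfs` for XFS) are the ones the text mentions.

```go
package fsresize

import (
	"fmt"
	"os/exec"
)

// FSResizer is a hypothetical abstraction over filesystem-specific resize
// tooling, so callers do not hard-code resize2fs or xfs_growfs invocations.
type FSResizer interface {
	// Resize grows the filesystem to fill the underlying block device.
	Resize(devicePath, mountPath string) error
}

type extResizer struct{}

func (extResizer) Resize(devicePath, _ string) error {
	// resize2fs operates on the block device for ext2/3/4 filesystems.
	return exec.Command("resize2fs", devicePath).Run()
}

type xfsResizer struct{}

func (xfsResizer) Resize(_, mountPath string) error {
	// xfs_growfs operates on the mounted filesystem path.
	return exec.Command("xfs_growfs", mountPath).Run()
}

// ForFilesystem returns a resizer for the detected filesystem type.
func ForFilesystem(fsType string) (FSResizer, error) {
	switch fsType {
	case "ext2", "ext3", "ext4":
		return extResizer{}, nil
	case "xfs":
		return xfsResizer{}, nil
	default:
		return nil, fmt.Errorf("no resizer available for filesystem %q", fsType)
	}
}
```

A volume plugin needing a filesystem resize after a successful expansion could then call `ForFilesystem(fsType)` without knowing tool-specific details.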
@@ -203,7 +203,7 @@ we have considered following options:
Of all options - #3 is our best bet but we are not quite there yet. Hence, I would like to propose that we ship with support for
-most common file systems in curent release and we revisit this coupling and solve it in next release.
+most common file systems in current release and we revisit this coupling and solve it in next release.
## API and UI Design


@@ -50,7 +50,7 @@ Primary partitions are shared partitions that can provide ephemeral local storag
This partition holds the kubelets root directory (`/var/lib/kubelet` by default) and `/var/log` directory. This partition may be shared between user pods, OS and Kubernetes system daemons. This partition can be consumed by pods via EmptyDir volumes, container logs, image layers and container writable layers. Kubelet will manage shared access and isolation of this partition. This partition is “ephemeral” and applications cannot expect any performance SLAs (Disk IOPS for example) from this partition.
### Runtime
-This is an optional partition which runtimes can use for overlay filesystems. Kubelet will attempt to identify and provide shared access along with isolation to this partition. Container image layers and writable later is stored here. If the runtime partition exists, `root` parition will not hold any image layer or writable layers.
+This is an optional partition which runtimes can use for overlay filesystems. Kubelet will attempt to identify and provide shared access along with isolation to this partition. Container image layers and writable later is stored here. If the runtime partition exists, `root` partition will not hold any image layer or writable layers.
## Secondary Partitions
All other partitions are exposed as local persistent volumes. The PV interface allows for varying storage configurations to be supported, while hiding specific configuration details to the pod. All the local PVs can be queried and viewed from a cluster level using the existing PV object. Applications can continue to use their existing PVC specifications with minimal changes to request local storage.
@@ -59,7 +59,7 @@ The local PVs can be precreated by an addon DaemonSet that discovers all the sec
Local PVs can only provide semi-persistence, and are only suitable for specific use cases that need performance, data gravity and can tolerate data loss. If the node or PV fails, then either the pod cannot run, or the pod has to give up on the local PV and find a new one. Failure scenarios can be handled by unbinding the PVC from the local PV, and forcing the pod to reschedule and find a new PV.
-Since local PVs are only accessible from specific nodes, the scheduler needs to take into account a PV's node constraint when placing pods. This can be generalized to a storage toplogy constraint, which can also work with zones, and in the future: racks, clusters, etc.
+Since local PVs are only accessible from specific nodes, the scheduler needs to take into account a PV's node constraint when placing pods. This can be generalized to a storage topology constraint, which can also work with zones, and in the future: racks, clusters, etc.
The term `Partitions` are used here to describe the main use cases for local storage. However, the proposal doesn't require a local volume to be an entire disk or a partition - it supports arbitrary directory. This implies that cluster administrator can create multiple local volumes in one partition, each has the capacity of the partition, or even create local volume under primary partitions. Unless strictly required, e.g. if you have only one partition in your host, this is strongly discouraged. For this reason, following description will use `partition` or `mount point` exclusively.
@@ -520,7 +520,7 @@ Note: Block access will be considered as a separate feature because it can work
storage: 80Gi
```
-4. It is also possible for a PVC that requests `volumeType: block` to also use file-based bolume. In this situation, the block device would get formatted with the filesystem type specified in the PVC spec. And when the PVC gets destroyed, then the filesystem also gets destroyed to return back to the original block state.
+4. It is also possible for a PVC that requests `volumeType: block` to also use file-based volume. In this situation, the block device would get formatted with the filesystem type specified in the PVC spec. And when the PVC gets destroyed, then the filesystem also gets destroyed to return back to the original block state.
```yaml
kind: PersistentVolumeClaim
@@ -575,7 +575,7 @@ Note: Block access will be considered as a separate feature because it can work
### Why is the kubelet managing logs?
-Kubelet is managing access to shared storage on the node. Container logs outputted via it's stdout and stderr ends up on the shared storage that kubelet is managing. So, kubelet needs direct control over the log data to keep the containers running (by rotating logs), store them long enough for break glass situations and apply different storage policies in a multi-tenent cluster. All of these features are not easily expressible through external logging agents like journald for example.
+Kubelet is managing access to shared storage on the node. Container logs outputted via it's stdout and stderr ends up on the shared storage that kubelet is managing. So, kubelet needs direct control over the log data to keep the containers running (by rotating logs), store them long enough for break glass situations and apply different storage policies in a multi-tenant cluster. All of these features are not easily expressible through external logging agents like journald for example.
### Master are upgraded prior to nodes. How should storage as a new compute resource be rolled out on to existing clusters?
@@ -591,7 +591,7 @@ Kubelet will attempt to enforce capacity limits on a best effort basis. If the u
### Are LocalStorage PVs required to be a whole partition?
-No, but it is the recommended way to ensure capacity and performance isolation. For HDDs, a whole disk is recommended for performance isolation. In some environments, multiple storage partitions are not available, so the only option is to share the same filesystem. In that case, directories in the same filesystem can be specified, and the adminstrator could configure group quota to provide capacity isolation.
+No, but it is the recommended way to ensure capacity and performance isolation. For HDDs, a whole disk is recommended for performance isolation. In some environments, multiple storage partitions are not available, so the only option is to share the same filesystem. In that case, directories in the same filesystem can be specified, and the administrator could configure group quota to provide capacity isolation.
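A small sketch of why directory-based local volumes on a shared filesystem need quotas for capacity isolation: a plain capacity probe of each directory just reports the size of the backing filesystem, so every volume on it appears to have the same capacity. The paths and helper below are hypothetical and Linux-specific.

```go
package main

import (
	"fmt"
	"syscall"
)

// fsCapacityBytes reports the total size of the filesystem backing path.
// Every directory-based local volume on the same filesystem reports the same
// number, which is why per-volume capacity isolation needs quotas.
func fsCapacityBytes(path string) (uint64, error) {
	var st syscall.Statfs_t
	if err := syscall.Statfs(path, &st); err != nil {
		return 0, err
	}
	return st.Blocks * uint64(st.Bsize), nil
}

func main() {
	// Hypothetical directories exposed as two separate local volumes.
	for _, dir := range []string{"/mnt/disks/vol1", "/mnt/disks/vol2"} {
		if capacity, err := fsCapacityBytes(dir); err == nil {
			fmt.Printf("%s: backing filesystem capacity %d bytes\n", dir, capacity)
		}
	}
}
```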
# Features & Milestones


@@ -13,7 +13,7 @@ so this document will be extended for each new release as we add more features.
* Allow pods to mount dedicated local disks, or channeled partitions as volumes for
IOPS isolation.
* Allow pods do access local volumes without root privileges.
-* Allow pods to access local volumes without needing to undestand the storage
+* Allow pods to access local volumes without needing to understand the storage
layout on every node.
* Persist local volumes and provide data gravity for pods. Any pod
using the local volume will be scheduled to the same node that the local volume
@@ -191,7 +191,7 @@ providing persistent local storage should be considered.
## Feature Plan
A detailed implementation plan can be found in the
-[Storage SIG planning spreadhseet](https://docs.google.com/spreadsheets/d/1t4z5DYKjX2ZDlkTpCnp18icRAQqOE85C1T1r2gqJVck/view#gid=1566770776).
+[Storage SIG planning spreadsheet](https://docs.google.com/spreadsheets/d/1t4z5DYKjX2ZDlkTpCnp18icRAQqOE85C1T1r2gqJVck/view#gid=1566770776).
The following is a high level summary of the goals in each phase.
### Phase 1
@@ -567,8 +567,8 @@ specific nodes.
##### PV node affinity unit tests
-* Nil or empty node affinity evalutes to true for any node
-* Node affinity specifying existing node labels evalutes to true
+* Nil or empty node affinity evaluates to true for any node
+* Node affinity specifying existing node labels evaluates to true
* Node affinity specifying non-existing node label keys evaluates to false
* Node affinity specifying non-existing node label values evaluates to false
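The four cases above map onto a simple truth table. The following is a hedged sketch rather than the real Kubernetes helper: `matchesNodeLabels` is a hypothetical stand-in that treats a nil or empty requirement as matching any node and otherwise requires every key/value pair to be present in the node's labels.

```go
package affinity

import "testing"

// matchesNodeLabels is a hypothetical, simplified stand-in for PV node
// affinity evaluation: nil or empty requirements match any node; otherwise
// every required key must exist on the node with the required value.
func matchesNodeLabels(required, nodeLabels map[string]string) bool {
	for key, want := range required {
		if got, ok := nodeLabels[key]; !ok || got != want {
			return false
		}
	}
	return true
}

func TestMatchesNodeLabels(t *testing.T) {
	node := map[string]string{"kubernetes.io/hostname": "node-1"}
	cases := []struct {
		name     string
		required map[string]string
		want     bool
	}{
		{"nil or empty affinity matches any node", nil, true},
		{"existing node label matches", map[string]string{"kubernetes.io/hostname": "node-1"}, true},
		{"non-existing label key does not match", map[string]string{"failure-domain/zone": "zone-a"}, false},
		{"non-existing label value does not match", map[string]string{"kubernetes.io/hostname": "node-2"}, false},
	}
	for _, c := range cases {
		if got := matchesNodeLabels(c.required, node); got != c.want {
			t.Errorf("%s: got %v, want %v", c.name, got, c.want)
		}
	}
}
```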
@@ -611,7 +611,7 @@ and can mount, read, and write
and verify that PVs are created and a Pod can mount, read, and write.
* After destroying a PVC managed by the local volume provisioner, it should cleanup
the volume and recreate a new PV.
-* Pod using a Local PV with non-existant path fails to mount
+* Pod using a Local PV with non-existent path fails to mount
* Pod that sets nodeName to a different node than the PV node affinity cannot schedule.
@@ -624,7 +624,7 @@ type of volume that imposes topology constraints, such as local storage and zona
Because this problem affects more than just local volumes, it will be treated as a
separate feature with a separate proposal. Once that feature is implemented, then the
-limitations outlined above wil be fixed.
+limitations outlined above will be fixed.
#### Block devices and raw partitions


@@ -105,7 +105,7 @@ Deleting a pod with grace period 0 is called **force deletion** and will
update the pod with a `deletionGracePeriodSeconds` of 0, and then immediately
remove the pod from etcd. Because all communication is asynchronous,
force deleting a pod means that the pod processes may continue
-to run for an arbitary amount of time. If a higher level component like the
+to run for an arbitrary amount of time. If a higher level component like the
StatefulSet controller treats the existence of the pod API object as a strongly
consistent entity, deleting the pod in this fashion will violate the
at-most-one guarantee we wish to offer for pet sets.
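For context on the mechanism described here, a force deletion can be issued from a client by setting the grace period to zero on the delete request. This is a minimal sketch using a recent client-go signature; the `forceDeletePod` helper is illustrative, and a configured clientset is assumed.

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// forceDeletePod deletes the pod API object with a zero grace period: the pod
// is removed from etcd immediately, even though its processes may keep
// running on the node for some time.
func forceDeletePod(ctx context.Context, clientset kubernetes.Interface, namespace, name string) error {
	zero := int64(0)
	return clientset.CoreV1().Pods(namespace).Delete(ctx, name, metav1.DeleteOptions{
		GracePeriodSeconds: &zero,
	})
}
```

As the proposal notes, this only removes the API object immediately; the pod's processes may continue running on the node for an arbitrary amount of time.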