diff --git a/contributors/design-proposals/api-chunking.md b/contributors/design-proposals/api-chunking.md
deleted file mode 100644
index 4eb328eb2..000000000
--- a/contributors/design-proposals/api-chunking.md
+++ /dev/null
@@ -1,182 +0,0 @@
-# Allow clients to retrieve consistent API lists in chunks
-
-On large clusters, performing API queries that return all of the objects of a given resource type (GET /api/v1/pods, GET /api/v1/secrets) can lead to significant variations in peak memory use on the server and contribute substantially to long tail request latency.
-
-When loading very large sets of objects -- some clusters are now reaching 100k pods or equivalent numbers of supporting resources -- the system must:
-
-* Construct the full range description in etcd in memory and serialize it as protobuf in the client
- * Some clusters have reported over 500MB being stored in a single object type
- * This data is read from the underlying datastore and converted to a protobuf response
- * Large reads to etcd can block writes to the same range (https://github.com/coreos/etcd/issues/7719)
-* The data from etcd has to be transferred to the apiserver in one large chunk
-* The `kube-apiserver` also has to deserialize that response into a single object, and then re-serialize it back to the client
- * Much of the decoded etcd memory is copied into the struct used to serialize to the client
-* An API client like `kubectl get` will then decode the response from JSON or protobuf
- * An API client with a slow connection may not be able to receive the entire response body within the default 60s timeout
- * This may cause other failures downstream of that API client with their own timeouts
- * The recently introduced client compression feature can assist
- * The large response will also be loaded entirely into memory
-
-The standard solution for reducing the impact of large reads is to allow them to be broken into smaller reads via a technique commonly referred to as paging or chunking. By efficiently splitting large list ranges from etcd to clients into many smaller list ranges, we can reduce the peak memory allocation on etcd and the apiserver, without losing the consistent read invariant our clients depend on.
-
-This proposal does not cover general purpose ranging or paging for arbitrary clients, such as allowing web user interfaces to offer paged output, but does define some parameters for future extension. To that end, this proposal uses the phrase "chunking" to describe retrieving a consistent snapshot range read from the API server in distinct pieces.
-
-Our primary consistent store etcd3 offers support for efficient chunking with minimal overhead, and mechanisms exist for other potential future stores such as SQL databases or Consul to also implement a simple form of consistent chunking.
-
-Relevant issues:
-
-* https://github.com/kubernetes/kubernetes/issues/2349
-
-## Terminology
-
-**Consistent list** - A snapshot of all resources at a particular moment in time that has a single `resourceVersion` that clients can begin watching from to receive updates. All Kubernetes controllers depend on this semantic. Allows a controller to refresh its internal state, and then receive a stream of changes from the initial state.
-
-**API paging** - API parameters designed to allow a human to view results in a series of "pages".
-
-**API chunking** - API parameters designed to allow a client to break one large request into multiple smaller requests without changing the semantics of the original request.
-
-
-## Proposed change:
-
-Expose a simple chunking mechanism to allow large API responses to be broken into consistent partial responses. Clients would indicate a tolerance for chunking (opt-in) by specifying a desired maximum number of results to return in a `LIST` call. The server would return up to that number of objects, and if more exist it would return a `continue` parameter that the client could pass to receive the next set of results. The server would be allowed to ignore the limit if it does not implement limiting (backward compatible), but it is not allowed to support limiting without supporting a way to continue the query past the limit (may not implement `limit` without `continue`).
-
-```
-GET /api/v1/pods?limit=500
-{
- "metadata": {"continue": "ABC...", "resourceVersion": "147"},
- "items": [
- // no more than 500 items
- ]
-}
-GET /api/v1/pods?limit=500&continue=ABC...
-{
- "metadata": {"continue": "DEF...", "resourceVersion": "147"},
- "items": [
- // no more than 500 items
- ]
-}
-GET /api/v1/pods?limit=500&continue=DEF...
-{
- "metadata": {"resourceVersion": "147"},
- "items": [
- // no more than 500 items
- ]
-}
-```
-
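-The `limit` and `continue` parameters above could surface in the Go API types roughly as follows; this is an illustrative sketch, and the field names are not part of this proposal's formal text:
-
-```go
-package apisketch
-
-// ListOptions carries the client's chunking parameters on a LIST call.
-type ListOptions struct {
-    // Limit is the maximum number of items to return in this chunk. Zero
-    // means no limit; servers that do not implement chunking may ignore it.
-    Limit int64 `json:"limit,omitempty"`
-    // Continue is the opaque token returned by a previous chunked LIST.
-    Continue string `json:"continue,omitempty"`
-}
-
-// ListMeta is returned in the `metadata` field of every list response.
-type ListMeta struct {
-    ResourceVersion string `json:"resourceVersion,omitempty"`
-    // Continue is set when more results are available; clients pass it
-    // back unchanged to retrieve the next chunk.
-    Continue string `json:"continue,omitempty"`
-}
-```
-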
-The token returned by the server for `continue` would be an opaque serialized string that would contain a simple serialization of a version identifier (to allow future extension), and any additional data needed by the server storage to identify where to start the next range.
-
-The continue token is not required to encode other filtering parameters present on the initial request, and clients may alter their filter parameters on subsequent chunk reads. However, the server implementation **may** reject such changes with a `400 Bad Request` error, and clients should consider this behavior undefined and left to future clarification. Chunking is intended to return consistent lists, and clients **should not** alter their filter parameters on subsequent chunk reads.
-
-If the resource version parameter specified on the request is inconsistent with the `continue` token, the server **must** reject the request with a `400 Bad Request` error.
-
-The schema of the continue token is chosen by the storage layer and is not guaranteed to remain consistent for clients - clients **must** consider the continue token as opaque. Server implementations **should** ensure that continue tokens can persist across server restarts and across upgrades.
-
-Servers **may** return fewer results than `limit` if server side filtering returns no results such as when a `label` or `field` selector is used. If the entire result set is filtered, the server **may** return zero results with a valid `continue` token. A client **must** use the presence of a `continue` token in the response to determine whether more results are available, regardless of the number of results returned. A server that supports limits **must not** return more results than `limit` if a `continue` token is also returned. If the server does not return a `continue` token, the server **must** return all remaining results. The server **may** return zero results with no `continue` token on the last call.
-
-The server **may** limit the amount of time a continue token is valid for. Clients **should** assume continue tokens last only a few minutes.
-
-The server **must** support `continue` tokens that are valid across multiple API servers. The server **must** support a mechanism for rolling restart such that continue tokens are valid after one or all API servers have been restarted.
-
-
-### Proposed Implementations
-
-etcd3 is the primary Kubernetes store and has been designed to support consistent range reads in chunks for this use case. The etcd3 store is an ordered map of keys to values, and Kubernetes places all keys within a resource type under a common prefix, with namespaces being a further prefix of those keys. A read of all keys within a resource type is an in-order scan of the etcd3 map, and therefore we can retrieve in chunks by defining a start key for the next chunk that skips the last key read.
-
-etcd2 will not be supported as it has no option to perform a consistent read and is on track to be deprecated in Kubernetes. Other databases that might back Kubernetes could either choose to not implement limiting, or leverage their own transactional characteristics to return a consistent list. In the near term our primary store remains etcd3 which can provide this capability at low complexity.
-
-Implementations that cannot offer consistent ranging (returning a set of results that are logically equivalent to receiving all results in one response) must not allow continuation, because consistent listing is a requirement of the Kubernetes API list and watch pattern.
-
-#### etcd3
-
-For etcd3 the continue token would contain a resource version (the snapshot that we are reading, which is consistent across the entire LIST) and the start key for the next set of results. Upon receiving a valid continue token the apiserver would instruct etcd3 to retrieve the set of results at the given resource version, beginning at the provided start key, limited by the maximum number of results provided by the continue token (or optionally, by a different limit specified by the client). If more results remain after reading up to the limit, the storage should calculate a continue token that would begin at the next possible key, and set that continue token on the returned list.
-
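-A minimal sketch of how such a token and the corresponding etcd3 read could look, assuming a JSON-plus-base64 encoding; the struct, helper names, and encoding are illustrative, and only the clientv3 option names come from the etcd client library:
-
-```go
-package storagesketch
-
-import (
-    "context"
-    "encoding/base64"
-    "encoding/json"
-
-    "github.com/coreos/etcd/clientv3"
-)
-
-// continueToken is one possible serialization: a version field for future
-// evolution, the resource version of the consistent snapshot being read,
-// and the key at which the next chunk should start.
-type continueToken struct {
-    APIVersion      string `json:"v"`
-    ResourceVersion int64  `json:"rv"`
-    StartKey        string `json:"start"`
-}
-
-func encodeContinue(rv int64, nextKey string) (string, error) {
-    out, err := json.Marshal(continueToken{APIVersion: "v1", ResourceVersion: rv, StartKey: nextKey})
-    if err != nil {
-        return "", err
-    }
-    return base64.RawURLEncoding.EncodeToString(out), nil
-}
-
-func decodeContinue(token string) (continueToken, error) {
-    var t continueToken
-    raw, err := base64.RawURLEncoding.DecodeString(token)
-    if err != nil {
-        return t, err
-    }
-    return t, json.Unmarshal(raw, &t)
-}
-
-// rangeRead shows how a decoded token maps onto an etcd3 range read: the
-// read is pinned to the snapshot via WithRev, starts at the decoded key,
-// and is bounded by WithLimit. Everything besides the clientv3 options is
-// illustrative.
-func rangeRead(ctx context.Context, kv clientv3.KV, t continueToken, rangeEnd string, limit int64) (*clientv3.GetResponse, error) {
-    return kv.Get(ctx, t.StartKey,
-        clientv3.WithRange(rangeEnd),
-        clientv3.WithRev(t.ResourceVersion),
-        clientv3.WithLimit(limit),
-    )
-}
-```
-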
-The storage layer in the apiserver must apply consistency checking to the provided continue token to ensure that malicious users cannot trick the server into serving results outside of its range. The storage layer must perform defensive checking on the provided value, check for path traversal attacks, and have stable versioning for the continue token.
-
-#### Possible SQL database implementation
-
-A SQL database backing a Kubernetes server would need to implement a consistent snapshot read of an entire resource type, plus support changefeed style updates in order to implement the WATCH primitive. A likely implementation in SQL would be a table that stores multiple versions of each object, ordered by key and version, and filters out all historical versions of an object. A consistent paged list over such a table might be similar to:
-
-    SELECT * FROM resource_type WHERE resourceVersion < ? AND deleted = false AND namespace > ? AND name > ? ORDER BY namespace, name ASC LIMIT ?
-
-where `namespace` and `name` are part of the continuation token and an index exists over `(namespace, name, resourceVersion, deleted)` that makes the range query performant. For each `(namespace, name)` tuple, the row with the highest resourceVersion would be returned.
-
-
-### Security implications of returning last or next key in the continue token
-
-If the continue token encodes the next key in the range, that key may expose info that is considered security sensitive, whether simply the name or namespace of resources not under the current tenant's control, or more seriously the name of a resource which is also a shared secret (for example, an access token stored as a kubernetes resource). There are a number of approaches to mitigating this impact:
-
-1. Disable chunking on specific resources
-2. Disable chunking when the user does not have permission to view all resources within a range
-3. Encrypt the next key or the continue token using a shared secret across all API servers
-4. When chunking, continue reading until the next visible start key is located after filtering, so that start keys are always keys the user has access to.
-
-In the short term we have no supported subset filtering (i.e. a user who can LIST can also LIST ?fields= and vice versa), so 1 is sufficient to address the sensitive key name issue. Because clients are required to proceed as if limiting is not possible, the server is always free to ignore a chunked request for other reasons. In the future, 4 may be the best option because we assume that most users starting a consistent read intend to finish it, unlike more general user interface paging where only a small fraction of requests continue to the next page.
-
-
-### Handling expired resource versions
-
-If the required data to perform a consistent list is no longer available in the storage backend (by default, old versions of objects in etcd3 are removed after 5 minutes), the server **must** return a `410 Gone ResourceExpired` status response (the same as for watch), which means clients must start from the beginning.
-
-```
-# resourceVersion is expired
-GET /api/v1/pods?limit=500&continue=DEF...
-{
- "kind": "Status",
- "code": 410,
- "reason": "ResourceExpired"
-}
-```
-
-Some clients may wish to follow a failed paged list with a full list attempt.
-
-The 5 minute default compaction interval for etcd3 bounds how long a list can run. Since clients may wish to perform processing over very large sets, increasing that timeout may make sense for large clusters. It should be possible to alter the interval at which compaction runs to accommodate larger clusters.
-
-
-#### Types of clients and impact
-
-Some clients, such as controllers, may respond to a 410 error by performing a full LIST without chunking.
-
-* Controllers with full caches
- * Any controller with a full in-memory cache of one or more resources almost certainly depends on having a consistent view of resources, and so will either need to perform a full list or a paged list, without dropping results
-* `kubectl get`
- * Most administrators would probably prefer to see a very large set with some inconsistency rather than no results (due to a timeout under load). They would likely be ok with handling `410 ResourceExpired` as "continue from the last key I processed"
-* Migration style commands
- * Assuming a migration command has to run on the full data set (to upgrade a resource from json to protobuf, or to check a large set of resources for errors) and is performing some expensive calculation on each, very large sets may not complete over the server expiration window.
-
-For clients that do not care about consistency, the server **may** return a `continue` value on the `ResourceExpired` error that allows the client to restart from the same prefix key, but using the latest resource version. This would allow clients that do not require a fully consistent LIST to opt in to partially consistent LISTs but still be able to scan the entire working set. It is likely this could be a sub field (opaque data) of the `Status` response under `statusDetails`.
-
-
-### Rate limiting
-
-Since the goal is to reduce spikiness of load, the standard API rate limiter might prefer to rate limit page requests differently from global lists, allowing full LISTs only slowly while smaller pages can proceed more quickly.
-
-
-### Chunk by default?
-
-On a very large data set, chunking trades total memory allocated in etcd, the apiserver, and the client for higher overhead per request (request/response processing, authentication, authorization). Picking a sufficiently high chunk value like 500 or 1000 would not impact smaller clusters, but would reduce the peak memory load of a very large cluster (10k resources and up). In testing, no significant overhead was shown in etcd3 for a paged historical query which is expected since the etcd3 store is an MVCC store and must always filter some values to serve a list.
-
-For clients that must perform sequential processing of lists (kubectl get, migration commands), this change dramatically improves initial latency: in testing, clients received their first chunk of data in milliseconds rather than waiting seconds for the full set. It also improves the user experience for web consoles that may be accessed by administrators with access to large parts of the system.
-
-It is recommended that most clients attempt to page by default at a large page size (500 or 1000) and gracefully degrade to not chunking.
-
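-A sketch of that recommended client behavior, written against a stand-in list function rather than a specific client library:
-
-```go
-package clientsketch
-
-// PodList is a trimmed stand-in for a real list type: the items plus the
-// list metadata carrying the continue token.
-type PodList struct {
-    Continue string
-    Items    []string
-}
-
-// listAllPods pages through the full set at a large page size using the
-// supplied list function (a stand-in for any client call that accepts
-// limit and continue parameters). It degrades gracefully: a server that
-// ignores `limit` simply returns everything in the first response with no
-// continue token.
-func listAllPods(list func(limit int64, continueToken string) (*PodList, error)) ([]string, error) {
-    var all []string
-    continueToken := ""
-    for {
-        chunk, err := list(500, continueToken)
-        if err != nil {
-            // On a 410 ResourceExpired error the client would typically
-            // fall back to a full LIST from the beginning.
-            return nil, err
-        }
-        all = append(all, chunk.Items...)
-        if chunk.Continue == "" {
-            // No continue token: the server has returned all remaining results.
-            return all, nil
-        }
-        continueToken = chunk.Continue
-    }
-}
-```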
-
-### Other solutions
-
-Compression from the apiserver and between the apiserver and etcd can reduce total network bandwidth, but cannot reduce the peak CPU and memory used inside the client, apiserver, or etcd processes.
-
-Various optimizations exist that can and should be applied to minimize the amount of data transferred from etcd to the client or the number of allocations made in each location, but they do not change how response size scales with the number of entries.
-
-
-## Plan
-
-The initial chunking implementation would focus on consistent listing on server and client as well as measuring the impact of chunking on total system load, since chunking will slightly increase the cost to view large data sets because of the additional per page processing. The initial implementation should make the fewest assumptions possible in constraining future backend storage.
-
-For the initial alpha release, chunking would be behind a feature flag and attempts to provide the `continue` or `limit` flags should be ignored. While disabled, a `continue` token should never be returned by the server as part of a list.
-
-Future work might offer more options for clients to page in an inconsistent fashion, or allow clients to directly specify the parts of the namespace / name keyspace they wish to range over (paging).
-
-
-
-
diff --git a/contributors/design-proposals/add-new-patchStrategy-to-clear-fields-not-present-in-patch.md b/contributors/design-proposals/api-machinery/add-new-patchStrategy-to-clear-fields-not-present-in-patch.md
similarity index 100%
rename from contributors/design-proposals/add-new-patchStrategy-to-clear-fields-not-present-in-patch.md
rename to contributors/design-proposals/api-machinery/add-new-patchStrategy-to-clear-fields-not-present-in-patch.md
diff --git a/contributors/design-proposals/admission_control.md b/contributors/design-proposals/api-machinery/admission_control.md
similarity index 100%
rename from contributors/design-proposals/admission_control.md
rename to contributors/design-proposals/api-machinery/admission_control.md
diff --git a/contributors/design-proposals/admission_control_extension.md b/contributors/design-proposals/api-machinery/admission_control_extension.md
similarity index 100%
rename from contributors/design-proposals/admission_control_extension.md
rename to contributors/design-proposals/api-machinery/admission_control_extension.md
diff --git a/contributors/design-proposals/admission_control_limit_range.md b/contributors/design-proposals/api-machinery/admission_control_limit_range.md
similarity index 100%
rename from contributors/design-proposals/admission_control_limit_range.md
rename to contributors/design-proposals/api-machinery/admission_control_limit_range.md
diff --git a/contributors/design-proposals/admission_control_resource_quota.md b/contributors/design-proposals/api-machinery/admission_control_resource_quota.md
similarity index 100%
rename from contributors/design-proposals/admission_control_resource_quota.md
rename to contributors/design-proposals/api-machinery/admission_control_resource_quota.md
diff --git a/contributors/design-proposals/aggregated-api-servers.md b/contributors/design-proposals/api-machinery/aggregated-api-servers.md
similarity index 100%
rename from contributors/design-proposals/aggregated-api-servers.md
rename to contributors/design-proposals/api-machinery/aggregated-api-servers.md
diff --git a/contributors/design-proposals/api-group.md b/contributors/design-proposals/api-machinery/api-group.md
similarity index 100%
rename from contributors/design-proposals/api-group.md
rename to contributors/design-proposals/api-machinery/api-group.md
diff --git a/contributors/design-proposals/apiserver-build-in-admission-plugins.md b/contributors/design-proposals/api-machinery/apiserver-build-in-admission-plugins.md
similarity index 100%
rename from contributors/design-proposals/apiserver-build-in-admission-plugins.md
rename to contributors/design-proposals/api-machinery/apiserver-build-in-admission-plugins.md
diff --git a/contributors/design-proposals/apiserver-watch.md b/contributors/design-proposals/api-machinery/apiserver-watch.md
similarity index 100%
rename from contributors/design-proposals/apiserver-watch.md
rename to contributors/design-proposals/api-machinery/apiserver-watch.md
diff --git a/contributors/design-proposals/auditing.md b/contributors/design-proposals/api-machinery/auditing.md
similarity index 100%
rename from contributors/design-proposals/auditing.md
rename to contributors/design-proposals/api-machinery/auditing.md
diff --git a/contributors/design-proposals/configmap.md b/contributors/design-proposals/api-machinery/configmap.md
similarity index 100%
rename from contributors/design-proposals/configmap.md
rename to contributors/design-proposals/api-machinery/configmap.md
diff --git a/contributors/design-proposals/container-init.md b/contributors/design-proposals/api-machinery/container-init.md
similarity index 100%
rename from contributors/design-proposals/container-init.md
rename to contributors/design-proposals/api-machinery/container-init.md
diff --git a/contributors/design-proposals/csi-client-structure-proposal.md b/contributors/design-proposals/api-machinery/csi-client-structure-proposal.md
similarity index 100%
rename from contributors/design-proposals/csi-client-structure-proposal.md
rename to contributors/design-proposals/api-machinery/csi-client-structure-proposal.md
diff --git a/contributors/design-proposals/csi-new-client-library-procedure.md b/contributors/design-proposals/api-machinery/csi-new-client-library-procedure.md
similarity index 100%
rename from contributors/design-proposals/csi-new-client-library-procedure.md
rename to contributors/design-proposals/api-machinery/csi-new-client-library-procedure.md
diff --git a/contributors/design-proposals/customresources-validation.md b/contributors/design-proposals/api-machinery/customresources-validation.md
similarity index 100%
rename from contributors/design-proposals/customresources-validation.md
rename to contributors/design-proposals/api-machinery/customresources-validation.md
diff --git a/contributors/design-proposals/dynamic-admission-control-configuration.md b/contributors/design-proposals/api-machinery/dynamic-admission-control-configuration.md
similarity index 100%
rename from contributors/design-proposals/dynamic-admission-control-configuration.md
rename to contributors/design-proposals/api-machinery/dynamic-admission-control-configuration.md
diff --git a/contributors/design-proposals/envvar-configmap.md b/contributors/design-proposals/api-machinery/envvar-configmap.md
similarity index 100%
rename from contributors/design-proposals/envvar-configmap.md
rename to contributors/design-proposals/api-machinery/envvar-configmap.md
diff --git a/contributors/design-proposals/extending-api.md b/contributors/design-proposals/api-machinery/extending-api.md
similarity index 100%
rename from contributors/design-proposals/extending-api.md
rename to contributors/design-proposals/api-machinery/extending-api.md
diff --git a/contributors/design-proposals/garbage-collection.md b/contributors/design-proposals/api-machinery/garbage-collection.md
similarity index 100%
rename from contributors/design-proposals/garbage-collection.md
rename to contributors/design-proposals/api-machinery/garbage-collection.md
diff --git a/contributors/design-proposals/optional-configmap.md b/contributors/design-proposals/api-machinery/optional-configmap.md
similarity index 100%
rename from contributors/design-proposals/optional-configmap.md
rename to contributors/design-proposals/api-machinery/optional-configmap.md
diff --git a/contributors/design-proposals/pod-preset.md b/contributors/design-proposals/api-machinery/pod-preset.md
similarity index 100%
rename from contributors/design-proposals/pod-preset.md
rename to contributors/design-proposals/api-machinery/pod-preset.md
diff --git a/contributors/design-proposals/pod-safety.md b/contributors/design-proposals/api-machinery/pod-safety.md
similarity index 100%
rename from contributors/design-proposals/pod-safety.md
rename to contributors/design-proposals/api-machinery/pod-safety.md
diff --git a/contributors/design-proposals/principles.md b/contributors/design-proposals/api-machinery/principles.md
similarity index 100%
rename from contributors/design-proposals/principles.md
rename to contributors/design-proposals/api-machinery/principles.md
diff --git a/contributors/design-proposals/resource-quota-scoping.md b/contributors/design-proposals/api-machinery/resource-quota-scoping.md
similarity index 100%
rename from contributors/design-proposals/resource-quota-scoping.md
rename to contributors/design-proposals/api-machinery/resource-quota-scoping.md
diff --git a/contributors/design-proposals/server-get.md b/contributors/design-proposals/api-machinery/server-get.md
similarity index 100%
rename from contributors/design-proposals/server-get.md
rename to contributors/design-proposals/api-machinery/server-get.md
diff --git a/contributors/design-proposals/thirdpartyresources.md b/contributors/design-proposals/api-machinery/thirdpartyresources.md
similarity index 100%
rename from contributors/design-proposals/thirdpartyresources.md
rename to contributors/design-proposals/api-machinery/thirdpartyresources.md
diff --git a/contributors/design-proposals/apiserver-count-fix.md b/contributors/design-proposals/apiserver-count-fix.md
deleted file mode 100644
index 0b17772f5..000000000
--- a/contributors/design-proposals/apiserver-count-fix.md
+++ /dev/null
@@ -1,86 +0,0 @@
-# apiserver-count fix proposal
-
-Authors: @rphillips
-
-## Table of Contents
-
-1. [Overview](#overview)
-2. [Known Issues](#known-issues)
-3. [Proposal](#proposal)
-4. [Alternate Proposals](#alternate-proposals)
- 1. [Custom Resource Definitions](#custom-resource-definitions)
- 2. [Refactor Old Reconciler](#refactor-old-reconciler)
-
-## Overview
-
-Proposal to fix Issue [#22609](https://github.com/kubernetes/kubernetes/issues/22609)
-
-`kube-apiserver` currently has a command-line argument `--apiserver-count`
-specifying the number of API servers. This masterCount is used by the
-MasterCountEndpointReconciler on a 10 second interval to potentially clean up
-stale API Endpoints. The issue arises when the number of kube-apiserver
-instances drops below or rises above the masterCount. In the former case,
-stale instances within the Endpoints object are not cleaned up; in the
-latter case, the endpoints start to flap.
-
-## Known Issues
-
-Each apiserver’s reconciler only cleans up for its own IP. If a new
-server is spun up at a new IP, then the old IP in the Endpoints list is
-only reclaimed if the number of apiservers becomes greater than or equal
-to the masterCount. For example:
-
-* If the masterCount = 3, and there are 3 API servers running (named: A, B, and C)
-* ‘B’ API server is terminated for any reason
-* The IP for endpoint ‘B’ is not removed from the Endpoints list
-
-There is logic within the
-[MasterCountEndpointReconciler](https://github.com/kubernetes/kubernetes/blob/68814c0203c4b8abe59812b1093844a1f9bdac05/pkg/master/controller.go#L293)
-to attempt to make the Endpoints eventually consistent, but the code relies on
-the Endpoints count becoming equal to or greater than masterCount. When the
-number of apiservers becomes greater than the masterCount, the Endpoints tend to flap.
-
-If the number of endpoints were scaled down by automation, then the
-Endpoints would never become consistent.
-
-## Proposal
-
-### Create New Reconciler
-
-| Kubernetes Release | Quality | Description |
-| ------------------ | ------- | ----------- |
-| 1.9 | alpha | Add a new reconciler; add a command-line flag `--alpha-apiserver-endpoint-reconciler-type` |
-| 1.10 | beta | Turn on the `storage` type by default |
-| 1.11 | stable | Remove code for the old reconciler; remove `--apiserver-count` |
-
-The MasterCountEndpointReconciler does not meet the current needs for durability
-of API Endpoint creation, deletion, or failure cases.
-
-Custom Resource Definitions were proposed, but they do not have clean layering.
-Additionally, liveness and locking would be a nice to have feature for a long
-term solution.
-
-ConfigMaps were proposed, but since they are watched globally, liveness
-updates could be overly chatty.
-
-By porting OpenShift's
-[LeaseEndpointReconciler](https://github.com/openshift/origin/blob/master/pkg/cmd/server/election/lease_endpoint_reconciler.go)
-to Kubernetes, we can use the Storage API directly to store Endpoints
-dynamically within the system.
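-
-A rough sketch of the lease-based idea follows; the interface and names are illustrative and simplified, not the actual OpenShift or Kubernetes code:
-
-```go
-package reconcilersketch
-
-import (
-    "net"
-    "time"
-)
-
-// Leases is an illustrative abstraction over the storage API: each
-// apiserver periodically refreshes its own record with a TTL, so entries
-// for dead apiservers expire on their own.
-type Leases interface {
-    // ListLeases returns the IPs of all apiservers holding a live lease.
-    ListLeases() ([]string, error)
-    // UpdateLease creates or refreshes this apiserver's lease.
-    UpdateLease(ip string, ttl time.Duration) error
-}
-
-// leaseEndpointReconciler rewrites the kubernetes service Endpoints from
-// the set of live leases instead of relying on a static --apiserver-count.
-type leaseEndpointReconciler struct {
-    leases Leases
-    // updateEndpoints stands in for the apiserver's endpoint update logic.
-    updateEndpoints func(ips []string) error
-}
-
-// ReconcileEndpoints would be called periodically by each apiserver; the
-// signature is simplified (the real reconciler also handles ports).
-func (r *leaseEndpointReconciler) ReconcileEndpoints(ip net.IP, ttl time.Duration) error {
-    if err := r.leases.UpdateLease(ip.String(), ttl); err != nil {
-        return err
-    }
-    ips, err := r.leases.ListLeases()
-    if err != nil {
-        return err
-    }
-    return r.updateEndpoints(ips)
-}
-```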
-
-### Alternate Proposals
-
-#### Custom Resource Definitions and ConfigMaps
-
-CRDs and ConfigMaps were considered for this proposal. They were not adopted
-by the community due to the technical issues explained earlier.
-
-#### Refactor Old Reconciler
-
-| Release | Quality | Description |
-| ------- | ------- | ------------------------------------------------------------ |
-| 1.9 | stable | Change the logic in the current reconciler |
-
-We could potentially reuse the old reconciler by changing it to count
-the endpoints and set the `masterCount` (with an RWLock) to that count.
diff --git a/contributors/design-proposals/annotations-downward-api.md b/contributors/design-proposals/apps/annotations-downward-api.md
similarity index 100%
rename from contributors/design-proposals/annotations-downward-api.md
rename to contributors/design-proposals/apps/annotations-downward-api.md
diff --git a/contributors/design-proposals/controller-ref.md b/contributors/design-proposals/apps/controller-ref.md
similarity index 100%
rename from contributors/design-proposals/controller-ref.md
rename to contributors/design-proposals/apps/controller-ref.md
diff --git a/contributors/design-proposals/controller_history.md b/contributors/design-proposals/apps/controller_history.md
similarity index 100%
rename from contributors/design-proposals/controller_history.md
rename to contributors/design-proposals/apps/controller_history.md
diff --git a/contributors/design-proposals/cronjob.md b/contributors/design-proposals/apps/cronjob.md
similarity index 100%
rename from contributors/design-proposals/cronjob.md
rename to contributors/design-proposals/apps/cronjob.md
diff --git a/contributors/design-proposals/daemon.md b/contributors/design-proposals/apps/daemon.md
similarity index 100%
rename from contributors/design-proposals/daemon.md
rename to contributors/design-proposals/apps/daemon.md
diff --git a/contributors/design-proposals/daemonset-update.md b/contributors/design-proposals/apps/daemonset-update.md
similarity index 100%
rename from contributors/design-proposals/daemonset-update.md
rename to contributors/design-proposals/apps/daemonset-update.md
diff --git a/contributors/design-proposals/deploy.md b/contributors/design-proposals/apps/deploy.md
similarity index 100%
rename from contributors/design-proposals/deploy.md
rename to contributors/design-proposals/apps/deploy.md
diff --git a/contributors/design-proposals/deployment.md b/contributors/design-proposals/apps/deployment.md
similarity index 100%
rename from contributors/design-proposals/deployment.md
rename to contributors/design-proposals/apps/deployment.md
diff --git a/contributors/design-proposals/indexed-job.md b/contributors/design-proposals/apps/indexed-job.md
similarity index 100%
rename from contributors/design-proposals/indexed-job.md
rename to contributors/design-proposals/apps/indexed-job.md
diff --git a/contributors/design-proposals/job.md b/contributors/design-proposals/apps/job.md
similarity index 100%
rename from contributors/design-proposals/job.md
rename to contributors/design-proposals/apps/job.md
diff --git a/contributors/design-proposals/stateful-apps.md b/contributors/design-proposals/apps/stateful-apps.md
similarity index 100%
rename from contributors/design-proposals/stateful-apps.md
rename to contributors/design-proposals/apps/stateful-apps.md
diff --git a/contributors/design-proposals/statefulset-update.md b/contributors/design-proposals/apps/statefulset-update.md
similarity index 100%
rename from contributors/design-proposals/statefulset-update.md
rename to contributors/design-proposals/apps/statefulset-update.md
diff --git a/contributors/design-proposals/architecture.dia b/contributors/design-proposals/architecture/architecture.dia
similarity index 100%
rename from contributors/design-proposals/architecture.dia
rename to contributors/design-proposals/architecture/architecture.dia
diff --git a/contributors/design-proposals/architecture.md b/contributors/design-proposals/architecture/architecture.md
similarity index 100%
rename from contributors/design-proposals/architecture.md
rename to contributors/design-proposals/architecture/architecture.md
diff --git a/contributors/design-proposals/architecture.png b/contributors/design-proposals/architecture/architecture.png
similarity index 100%
rename from contributors/design-proposals/architecture.png
rename to contributors/design-proposals/architecture/architecture.png
diff --git a/contributors/design-proposals/architecture.svg b/contributors/design-proposals/architecture/architecture.svg
similarity index 100%
rename from contributors/design-proposals/architecture.svg
rename to contributors/design-proposals/architecture/architecture.svg
diff --git a/contributors/design-proposals/access.md b/contributors/design-proposals/auth/access.md
similarity index 100%
rename from contributors/design-proposals/access.md
rename to contributors/design-proposals/auth/access.md
diff --git a/contributors/design-proposals/apparmor.md b/contributors/design-proposals/auth/apparmor.md
similarity index 100%
rename from contributors/design-proposals/apparmor.md
rename to contributors/design-proposals/auth/apparmor.md
diff --git a/contributors/design-proposals/bulk_watch.md b/contributors/design-proposals/auth/bulk_watch.md
similarity index 100%
rename from contributors/design-proposals/bulk_watch.md
rename to contributors/design-proposals/auth/bulk_watch.md
diff --git a/contributors/design-proposals/enhance-pluggable-policy.md b/contributors/design-proposals/auth/enhance-pluggable-policy.md
similarity index 100%
rename from contributors/design-proposals/enhance-pluggable-policy.md
rename to contributors/design-proposals/auth/enhance-pluggable-policy.md
diff --git a/contributors/design-proposals/no-new-privs.md b/contributors/design-proposals/auth/no-new-privs.md
similarity index 100%
rename from contributors/design-proposals/no-new-privs.md
rename to contributors/design-proposals/auth/no-new-privs.md
diff --git a/contributors/design-proposals/pod-security-context.md b/contributors/design-proposals/auth/pod-security-context.md
similarity index 100%
rename from contributors/design-proposals/pod-security-context.md
rename to contributors/design-proposals/auth/pod-security-context.md
diff --git a/contributors/design-proposals/security-context-constraints.md b/contributors/design-proposals/auth/security-context-constraints.md
similarity index 100%
rename from contributors/design-proposals/security-context-constraints.md
rename to contributors/design-proposals/auth/security-context-constraints.md
diff --git a/contributors/design-proposals/security.md b/contributors/design-proposals/auth/security.md
similarity index 100%
rename from contributors/design-proposals/security.md
rename to contributors/design-proposals/auth/security.md
diff --git a/contributors/design-proposals/security_context.md b/contributors/design-proposals/auth/security_context.md
similarity index 100%
rename from contributors/design-proposals/security_context.md
rename to contributors/design-proposals/auth/security_context.md
diff --git a/contributors/design-proposals/service_accounts.md b/contributors/design-proposals/auth/service_accounts.md
similarity index 100%
rename from contributors/design-proposals/service_accounts.md
rename to contributors/design-proposals/auth/service_accounts.md
diff --git a/contributors/design-proposals/horizontal-pod-autoscaler.md b/contributors/design-proposals/autoscaling/horizontal-pod-autoscaler.md
similarity index 100%
rename from contributors/design-proposals/horizontal-pod-autoscaler.md
rename to contributors/design-proposals/autoscaling/horizontal-pod-autoscaler.md
diff --git a/contributors/design-proposals/hpa-status-conditions.md b/contributors/design-proposals/autoscaling/hpa-status-conditions.md
similarity index 100%
rename from contributors/design-proposals/hpa-status-conditions.md
rename to contributors/design-proposals/autoscaling/hpa-status-conditions.md
diff --git a/contributors/design-proposals/hpa-v2.md b/contributors/design-proposals/autoscaling/hpa-v2.md
similarity index 100%
rename from contributors/design-proposals/hpa-v2.md
rename to contributors/design-proposals/autoscaling/hpa-v2.md
diff --git a/contributors/design-proposals/aws_under_the_hood.md b/contributors/design-proposals/aws/aws_under_the_hood.md
similarity index 100%
rename from contributors/design-proposals/aws_under_the_hood.md
rename to contributors/design-proposals/aws/aws_under_the_hood.md
diff --git a/contributors/design-proposals/cloud-provider-refactoring.md b/contributors/design-proposals/cloud-provider/cloud-provider-refactoring.md
similarity index 100%
rename from contributors/design-proposals/cloud-provider-refactoring.md
rename to contributors/design-proposals/cloud-provider/cloud-provider-refactoring.md
diff --git a/contributors/design-proposals/cloudprovider-storage-metrics.md b/contributors/design-proposals/cloud-provider/cloudprovider-storage-metrics.md
similarity index 100%
rename from contributors/design-proposals/cloudprovider-storage-metrics.md
rename to contributors/design-proposals/cloud-provider/cloudprovider-storage-metrics.md
diff --git a/contributors/design-proposals/bootstrap-discovery.md b/contributors/design-proposals/cluster-lifecycle/bootstrap-discovery.md
similarity index 100%
rename from contributors/design-proposals/bootstrap-discovery.md
rename to contributors/design-proposals/cluster-lifecycle/bootstrap-discovery.md
diff --git a/contributors/design-proposals/cluster-deployment.md b/contributors/design-proposals/cluster-lifecycle/cluster-deployment.md
similarity index 100%
rename from contributors/design-proposals/cluster-deployment.md
rename to contributors/design-proposals/cluster-lifecycle/cluster-deployment.md
diff --git a/contributors/design-proposals/clustering.md b/contributors/design-proposals/cluster-lifecycle/clustering.md
similarity index 100%
rename from contributors/design-proposals/clustering.md
rename to contributors/design-proposals/cluster-lifecycle/clustering.md
diff --git a/contributors/design-proposals/clustering/.gitignore b/contributors/design-proposals/cluster-lifecycle/clustering/.gitignore
similarity index 100%
rename from contributors/design-proposals/clustering/.gitignore
rename to contributors/design-proposals/cluster-lifecycle/clustering/.gitignore
diff --git a/contributors/design-proposals/clustering/Dockerfile b/contributors/design-proposals/cluster-lifecycle/clustering/Dockerfile
similarity index 100%
rename from contributors/design-proposals/clustering/Dockerfile
rename to contributors/design-proposals/cluster-lifecycle/clustering/Dockerfile
diff --git a/contributors/design-proposals/clustering/Makefile b/contributors/design-proposals/cluster-lifecycle/clustering/Makefile
similarity index 100%
rename from contributors/design-proposals/clustering/Makefile
rename to contributors/design-proposals/cluster-lifecycle/clustering/Makefile
diff --git a/contributors/design-proposals/clustering/OWNERS b/contributors/design-proposals/cluster-lifecycle/clustering/OWNERS
similarity index 100%
rename from contributors/design-proposals/clustering/OWNERS
rename to contributors/design-proposals/cluster-lifecycle/clustering/OWNERS
diff --git a/contributors/design-proposals/clustering/README.md b/contributors/design-proposals/cluster-lifecycle/clustering/README.md
similarity index 100%
rename from contributors/design-proposals/clustering/README.md
rename to contributors/design-proposals/cluster-lifecycle/clustering/README.md
diff --git a/contributors/design-proposals/clustering/dynamic.png b/contributors/design-proposals/cluster-lifecycle/clustering/dynamic.png
similarity index 100%
rename from contributors/design-proposals/clustering/dynamic.png
rename to contributors/design-proposals/cluster-lifecycle/clustering/dynamic.png
diff --git a/contributors/design-proposals/clustering/dynamic.seqdiag b/contributors/design-proposals/cluster-lifecycle/clustering/dynamic.seqdiag
similarity index 100%
rename from contributors/design-proposals/clustering/dynamic.seqdiag
rename to contributors/design-proposals/cluster-lifecycle/clustering/dynamic.seqdiag
diff --git a/contributors/design-proposals/clustering/static.png b/contributors/design-proposals/cluster-lifecycle/clustering/static.png
similarity index 100%
rename from contributors/design-proposals/clustering/static.png
rename to contributors/design-proposals/cluster-lifecycle/clustering/static.png
diff --git a/contributors/design-proposals/clustering/static.seqdiag b/contributors/design-proposals/cluster-lifecycle/clustering/static.seqdiag
similarity index 100%
rename from contributors/design-proposals/clustering/static.seqdiag
rename to contributors/design-proposals/cluster-lifecycle/clustering/static.seqdiag
diff --git a/contributors/design-proposals/dramatically-simplify-cluster-creation.md b/contributors/design-proposals/cluster-lifecycle/dramatically-simplify-cluster-creation.md
similarity index 100%
rename from contributors/design-proposals/dramatically-simplify-cluster-creation.md
rename to contributors/design-proposals/cluster-lifecycle/dramatically-simplify-cluster-creation.md
diff --git a/contributors/design-proposals/self-hosted-final-cluster.png b/contributors/design-proposals/cluster-lifecycle/self-hosted-final-cluster.png
similarity index 100%
rename from contributors/design-proposals/self-hosted-final-cluster.png
rename to contributors/design-proposals/cluster-lifecycle/self-hosted-final-cluster.png
diff --git a/contributors/design-proposals/self-hosted-kubelet.md b/contributors/design-proposals/cluster-lifecycle/self-hosted-kubelet.md
similarity index 100%
rename from contributors/design-proposals/self-hosted-kubelet.md
rename to contributors/design-proposals/cluster-lifecycle/self-hosted-kubelet.md
diff --git a/contributors/design-proposals/self-hosted-kubernetes.md b/contributors/design-proposals/cluster-lifecycle/self-hosted-kubernetes.md
similarity index 100%
rename from contributors/design-proposals/self-hosted-kubernetes.md
rename to contributors/design-proposals/cluster-lifecycle/self-hosted-kubernetes.md
diff --git a/contributors/design-proposals/self-hosted-layers.png b/contributors/design-proposals/cluster-lifecycle/self-hosted-layers.png
similarity index 100%
rename from contributors/design-proposals/self-hosted-layers.png
rename to contributors/design-proposals/cluster-lifecycle/self-hosted-layers.png
diff --git a/contributors/design-proposals/self-hosted-moving-parts.png b/contributors/design-proposals/cluster-lifecycle/self-hosted-moving-parts.png
similarity index 100%
rename from contributors/design-proposals/self-hosted-moving-parts.png
rename to contributors/design-proposals/cluster-lifecycle/self-hosted-moving-parts.png
diff --git a/contributors/design-proposals/containerized-mounter.md b/contributors/design-proposals/containerized-mounter.md~
similarity index 100%
rename from contributors/design-proposals/containerized-mounter.md
rename to contributors/design-proposals/containerized-mounter.md~
diff --git a/contributors/design-proposals/federated-api-servers.md b/contributors/design-proposals/federation/federated-api-servers.md
similarity index 100%
rename from contributors/design-proposals/federated-api-servers.md
rename to contributors/design-proposals/federation/federated-api-servers.md
diff --git a/contributors/design-proposals/federated-ingress.md b/contributors/design-proposals/federation/federated-ingress.md
similarity index 100%
rename from contributors/design-proposals/federated-ingress.md
rename to contributors/design-proposals/federation/federated-ingress.md
diff --git a/contributors/design-proposals/federated-placement-policy.md b/contributors/design-proposals/federation/federated-placement-policy.md
similarity index 100%
rename from contributors/design-proposals/federated-placement-policy.md
rename to contributors/design-proposals/federation/federated-placement-policy.md
diff --git a/contributors/design-proposals/federated-replicasets.md b/contributors/design-proposals/federation/federated-replicasets.md
similarity index 100%
rename from contributors/design-proposals/federated-replicasets.md
rename to contributors/design-proposals/federation/federated-replicasets.md
diff --git a/contributors/design-proposals/federated-services.md b/contributors/design-proposals/federation/federated-services.md
similarity index 100%
rename from contributors/design-proposals/federated-services.md
rename to contributors/design-proposals/federation/federated-services.md
diff --git a/contributors/design-proposals/federation-clusterselector.md b/contributors/design-proposals/federation/federation-clusterselector.md
similarity index 100%
rename from contributors/design-proposals/federation-clusterselector.md
rename to contributors/design-proposals/federation/federation-clusterselector.md
diff --git a/contributors/design-proposals/federation-high-level-arch.png b/contributors/design-proposals/federation/federation-high-level-arch.png
similarity index 100%
rename from contributors/design-proposals/federation-high-level-arch.png
rename to contributors/design-proposals/federation/federation-high-level-arch.png
diff --git a/contributors/design-proposals/federation-lite.md b/contributors/design-proposals/federation/federation-lite.md
similarity index 100%
rename from contributors/design-proposals/federation-lite.md
rename to contributors/design-proposals/federation/federation-lite.md
diff --git a/contributors/design-proposals/federation-phase-1.md b/contributors/design-proposals/federation/federation-phase-1.md
similarity index 100%
rename from contributors/design-proposals/federation-phase-1.md
rename to contributors/design-proposals/federation/federation-phase-1.md
diff --git a/contributors/design-proposals/federation.md b/contributors/design-proposals/federation/federation.md
similarity index 100%
rename from contributors/design-proposals/federation.md
rename to contributors/design-proposals/federation/federation.md
diff --git a/contributors/design-proposals/ubernetes-cluster-state.png b/contributors/design-proposals/federation/ubernetes-cluster-state.png
similarity index 100%
rename from contributors/design-proposals/ubernetes-cluster-state.png
rename to contributors/design-proposals/federation/ubernetes-cluster-state.png
diff --git a/contributors/design-proposals/ubernetes-design.png b/contributors/design-proposals/federation/ubernetes-design.png
similarity index 100%
rename from contributors/design-proposals/ubernetes-design.png
rename to contributors/design-proposals/federation/ubernetes-design.png
diff --git a/contributors/design-proposals/ubernetes-scheduling.png b/contributors/design-proposals/federation/ubernetes-scheduling.png
similarity index 100%
rename from contributors/design-proposals/ubernetes-scheduling.png
rename to contributors/design-proposals/federation/ubernetes-scheduling.png
diff --git a/contributors/design-proposals/gcp/containerized-mounter.md b/contributors/design-proposals/gcp/containerized-mounter.md
new file mode 100644
index 000000000..e06deb2a0
--- /dev/null
+++ b/contributors/design-proposals/gcp/containerized-mounter.md
@@ -0,0 +1,43 @@
+# Containerized Mounter with Chroot for Container-Optimized OS
+
+## Goal
+
+Due to security and management overhead, our new Container-Optimized OS used by GKE
+does not carry certain storage drivers and tools, such as those needed for nfs and
+glusterfs. This project takes a containerized mount approach, packaging the mount
+binaries into a container. The volume plugin executes the mount inside the container
+and shares the mount with the host.
+
+
+## Design
+
+1. A docker image with storage tools (nfs and glusterfs) pre-installed is uploaded
+   to GCS.
+2. During GKE cluster configuration, the docker image is pulled and installed on
+   the cluster node.
+3. When an nfs or glusterfs mount is invoked by kubelet, it runs the mount
+   command inside a container using the pre-installed docker image, with the mount
+   propagation set to “shared”. In this way, the mount made inside the container is
+   visible to the host node too.
+4. As a special case for NFSv3, an rpcbind process is started before running the
+   mount command.
+
+## Implementation details
+
+* In the first version of the containerized mounter, we used rkt fly to dynamically
+  start a container during mount. When the mount command finishes, the container
+  normally exits and is garbage-collected. However, in the glusterfs case a gluster
+  daemon keeps running after the mount command finishes, until glusterfs is
+  unmounted, so the container started for the mount continues to run until the
+  glusterfs client finishes. The container cannot be garbage-collected right away,
+  and multiple containers might be running for some time. Due to shared mount
+  propagation, with more containers running, the number of mounts increases
+  significantly and might cause a kernel panic. To solve this problem, a chroot
+  approach was proposed and implemented.
+* In the second version, instead of running a container on the host, the docker
+  container’s file system is exported as a tar archive and pre-installed on the
+  host. The kubelet directory is shared (via a shared mount) between the host and
+  the container’s rootfs. When a gluster/nfs mount is issued, a mounter script uses
+  chroot to change into the container’s rootfs and run the mount. This approach is
+  very clean since there is no need to manage a container’s lifecycle, and it
+  avoids having a large number of mounts (sketched below).
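+
+A minimal sketch of the chroot-based mount path, in Go; the rootfs path and
+function names are illustrative, and this is not the actual mounter script:
+
+```go
+package mountersketch
+
+import (
+    "fmt"
+    "os/exec"
+    "strings"
+)
+
+// mounterRootfs is where the exported container filesystem (carrying the
+// nfs/glusterfs client tools) is assumed to be unpacked on the host; the
+// path is illustrative.
+const mounterRootfs = "/home/kubernetes/containerized_mounter/rootfs"
+
+// mountInChroot runs mount inside the pre-installed rootfs via chroot, so
+// the client binaries from the image are used while the resulting mount is
+// still visible on the host through the shared kubelet directory.
+func mountInChroot(source, target, fstype string, options []string) error {
+    args := []string{mounterRootfs, "mount", "-t", fstype}
+    if len(options) > 0 {
+        args = append(args, "-o", strings.Join(options, ","))
+    }
+    args = append(args, source, target)
+
+    out, err := exec.Command("chroot", args...).CombinedOutput()
+    if err != nil {
+        return fmt.Errorf("mount in chroot failed: %v, output: %s", err, string(out))
+    }
+    return nil
+}
+```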
diff --git a/contributors/design-proposals/gce-l4-loadbalancer-healthcheck.md b/contributors/design-proposals/gcp/gce-l4-loadbalancer-healthcheck.md
similarity index 100%
rename from contributors/design-proposals/gce-l4-loadbalancer-healthcheck.md
rename to contributors/design-proposals/gcp/gce-l4-loadbalancer-healthcheck.md
diff --git a/contributors/design-proposals/custom-metrics-api.md b/contributors/design-proposals/instrumentation/custom-metrics-api.md
similarity index 100%
rename from contributors/design-proposals/custom-metrics-api.md
rename to contributors/design-proposals/instrumentation/custom-metrics-api.md
diff --git a/contributors/design-proposals/metrics-server.md b/contributors/design-proposals/instrumentation/metrics-server.md
similarity index 100%
rename from contributors/design-proposals/metrics-server.md
rename to contributors/design-proposals/instrumentation/metrics-server.md
diff --git a/contributors/design-proposals/monitoring_architecture.md b/contributors/design-proposals/instrumentation/monitoring_architecture.md
similarity index 100%
rename from contributors/design-proposals/monitoring_architecture.md
rename to contributors/design-proposals/instrumentation/monitoring_architecture.md
diff --git a/contributors/design-proposals/monitoring_architecture.png b/contributors/design-proposals/instrumentation/monitoring_architecture.png
similarity index 100%
rename from contributors/design-proposals/monitoring_architecture.png
rename to contributors/design-proposals/instrumentation/monitoring_architecture.png
diff --git a/contributors/design-proposals/performance-related-monitoring.md b/contributors/design-proposals/instrumentation/performance-related-monitoring.md
similarity index 100%
rename from contributors/design-proposals/performance-related-monitoring.md
rename to contributors/design-proposals/instrumentation/performance-related-monitoring.md
diff --git a/contributors/design-proposals/resource-metrics-api.md b/contributors/design-proposals/instrumentation/resource-metrics-api.md
similarity index 100%
rename from contributors/design-proposals/resource-metrics-api.md
rename to contributors/design-proposals/instrumentation/resource-metrics-api.md
diff --git a/contributors/design-proposals/command_execution_port_forwarding.md b/contributors/design-proposals/network/command_execution_port_forwarding.md
similarity index 100%
rename from contributors/design-proposals/command_execution_port_forwarding.md
rename to contributors/design-proposals/network/command_execution_port_forwarding.md
diff --git a/contributors/design-proposals/external-lb-source-ip-preservation.md b/contributors/design-proposals/network/external-lb-source-ip-preservation.md
similarity index 100%
rename from contributors/design-proposals/external-lb-source-ip-preservation.md
rename to contributors/design-proposals/network/external-lb-source-ip-preservation.md
diff --git a/contributors/design-proposals/flannel-integration.md b/contributors/design-proposals/network/flannel-integration.md
similarity index 100%
rename from contributors/design-proposals/flannel-integration.md
rename to contributors/design-proposals/network/flannel-integration.md
diff --git a/contributors/design-proposals/network-policy.md b/contributors/design-proposals/network/network-policy.md
similarity index 100%
rename from contributors/design-proposals/network-policy.md
rename to contributors/design-proposals/network/network-policy.md
diff --git a/contributors/design-proposals/networking.md b/contributors/design-proposals/network/networking.md
similarity index 100%
rename from contributors/design-proposals/networking.md
rename to contributors/design-proposals/network/networking.md
diff --git a/contributors/design-proposals/container-runtime-interface-v1.md b/contributors/design-proposals/node/container-runtime-interface-v1.md
similarity index 100%
rename from contributors/design-proposals/container-runtime-interface-v1.md
rename to contributors/design-proposals/node/container-runtime-interface-v1.md
diff --git a/contributors/design-proposals/disk-accounting.md b/contributors/design-proposals/node/disk-accounting.md
similarity index 100%
rename from contributors/design-proposals/disk-accounting.md
rename to contributors/design-proposals/node/disk-accounting.md
diff --git a/contributors/design-proposals/dynamic-kubelet-configuration.md b/contributors/design-proposals/node/dynamic-kubelet-configuration.md
similarity index 100%
rename from contributors/design-proposals/dynamic-kubelet-configuration.md
rename to contributors/design-proposals/node/dynamic-kubelet-configuration.md
diff --git a/contributors/design-proposals/kubelet-auth.md b/contributors/design-proposals/node/kubelet-auth.md
similarity index 100%
rename from contributors/design-proposals/kubelet-auth.md
rename to contributors/design-proposals/node/kubelet-auth.md
diff --git a/contributors/design-proposals/kubelet-authorizer.md b/contributors/design-proposals/node/kubelet-authorizer.md
similarity index 100%
rename from contributors/design-proposals/kubelet-authorizer.md
rename to contributors/design-proposals/node/kubelet-authorizer.md
diff --git a/contributors/design-proposals/kubelet-cri-logging.md b/contributors/design-proposals/node/kubelet-cri-logging.md
similarity index 100%
rename from contributors/design-proposals/kubelet-cri-logging.md
rename to contributors/design-proposals/node/kubelet-cri-logging.md
diff --git a/contributors/design-proposals/kubelet-eviction.md b/contributors/design-proposals/node/kubelet-eviction.md
similarity index 100%
rename from contributors/design-proposals/kubelet-eviction.md
rename to contributors/design-proposals/node/kubelet-eviction.md
diff --git a/contributors/design-proposals/kubelet-hypercontainer-runtime.md b/contributors/design-proposals/node/kubelet-hypercontainer-runtime.md
similarity index 100%
rename from contributors/design-proposals/kubelet-hypercontainer-runtime.md
rename to contributors/design-proposals/node/kubelet-hypercontainer-runtime.md
diff --git a/contributors/design-proposals/kubelet-rkt-runtime.md b/contributors/design-proposals/node/kubelet-rkt-runtime.md
similarity index 100%
rename from contributors/design-proposals/kubelet-rkt-runtime.md
rename to contributors/design-proposals/node/kubelet-rkt-runtime.md
diff --git a/contributors/design-proposals/kubelet-rootfs-distribution.md b/contributors/design-proposals/node/kubelet-rootfs-distribution.md
similarity index 100%
rename from contributors/design-proposals/kubelet-rootfs-distribution.md
rename to contributors/design-proposals/node/kubelet-rootfs-distribution.md
diff --git a/contributors/design-proposals/kubelet-systemd.md b/contributors/design-proposals/node/kubelet-systemd.md
similarity index 100%
rename from contributors/design-proposals/kubelet-systemd.md
rename to contributors/design-proposals/node/kubelet-systemd.md
diff --git a/contributors/design-proposals/kubelet-tls-bootstrap.md b/contributors/design-proposals/node/kubelet-tls-bootstrap.md
similarity index 100%
rename from contributors/design-proposals/kubelet-tls-bootstrap.md
rename to contributors/design-proposals/node/kubelet-tls-bootstrap.md
diff --git a/contributors/design-proposals/node-allocatable.md b/contributors/design-proposals/node/node-allocatable.md
similarity index 100%
rename from contributors/design-proposals/node-allocatable.md
rename to contributors/design-proposals/node/node-allocatable.md
diff --git a/contributors/design-proposals/pod-resource-management.md b/contributors/design-proposals/node/pod-resource-management.md
similarity index 100%
rename from contributors/design-proposals/pod-resource-management.md
rename to contributors/design-proposals/node/pod-resource-management.md
diff --git a/contributors/design-proposals/resource-qos.md b/contributors/design-proposals/node/resource-qos.md
similarity index 100%
rename from contributors/design-proposals/resource-qos.md
rename to contributors/design-proposals/node/resource-qos.md
diff --git a/contributors/design-proposals/release-notes.md b/contributors/design-proposals/release/release-notes.md
similarity index 100%
rename from contributors/design-proposals/release-notes.md
rename to contributors/design-proposals/release/release-notes.md
diff --git a/contributors/design-proposals/release-test-signal.md b/contributors/design-proposals/release/release-test-signal.md
similarity index 100%
rename from contributors/design-proposals/release-test-signal.md
rename to contributors/design-proposals/release/release-test-signal.md
diff --git a/contributors/design-proposals/device-plugin-overview.png b/contributors/design-proposals/resource-management/device-plugin-overview.png
similarity index 100%
rename from contributors/design-proposals/device-plugin-overview.png
rename to contributors/design-proposals/resource-management/device-plugin-overview.png
diff --git a/contributors/design-proposals/device-plugin.md b/contributors/design-proposals/resource-management/device-plugin.md
similarity index 100%
rename from contributors/design-proposals/device-plugin.md
rename to contributors/design-proposals/resource-management/device-plugin.md
diff --git a/contributors/design-proposals/device-plugin.png b/contributors/design-proposals/resource-management/device-plugin.png
similarity index 100%
rename from contributors/design-proposals/device-plugin.png
rename to contributors/design-proposals/resource-management/device-plugin.png
diff --git a/contributors/design-proposals/gpu-support.md b/contributors/design-proposals/resource-management/gpu-support.md
similarity index 100%
rename from contributors/design-proposals/gpu-support.md
rename to contributors/design-proposals/resource-management/gpu-support.md
diff --git a/contributors/design-proposals/runtime-pod-cache.md b/contributors/design-proposals/runtime-pod-cache.md
deleted file mode 100644
index d4926c3e8..000000000
--- a/contributors/design-proposals/runtime-pod-cache.md
+++ /dev/null
@@ -1,173 +0,0 @@
-# Kubelet: Runtime Pod Cache
-
-This proposal builds on top of the Pod Lifecycle Event Generator (PLEG) proposed
-in [#12802](https://issues.k8s.io/12802). It assumes that Kubelet subscribes to
-the pod lifecycle event stream to eliminate periodic polling of pod
-states. Please see [#12802](https://issues.k8s.io/12802) for the motivation and
-design concept for PLEG.
-
-Runtime pod cache is an in-memory cache which stores the *status* of
-all pods, and is maintained by PLEG. It serves as a single source of
-truth for internal pod status, freeing Kubelet from querying the
-container runtime.
-
-## Motivation
-
-With PLEG, Kubelet no longer needs to perform comprehensive state
-checking for all pods periodically. It only instructs a pod worker to
-start syncing when there is a change of its pod status. Nevertheless,
-during each sync, a pod worker still needs to construct the pod status
-by examining all containers (whether dead or alive) in the pod, because
-previous states are not cached. By integrating a pod cache, we can further
-reduce Kubelet's CPU usage by
-
- 1. Lowering the number of concurrent requests to the container
- runtime since pod workers no longer have to query the runtime
- individually.
- 2. Lowering the total number of inspect requests because there is no
- need to inspect containers with no state changes.
-
-***Don't we already have a [container runtime cache](https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/container/runtime_cache.go)?***
-
-The runtime cache is an optimization that reduces the number of `GetPods()`
-calls from the workers. However,
-
- * The cache does not store all information necessary for a worker to
- complete a sync (e.g., `docker inspect`); workers still need to inspect
- containers individually to generate `api.PodStatus`.
- * Workers sometimes need to bypass the cache in order to retrieve the
- latest pod state.
-
-This proposal generalizes the cache and instructs PLEG to populate the cache, so
-that the content is always up-to-date.
-
-**Why can't each worker cache its own pod status?**
-
-The short answer is yes, they can. The longer answer is that localized
-caching limits the use of the cache content -- other components cannot
-access it. This often leads to caching at multiple places and/or passing
-objects around, complicating the control flow.
-
-## Runtime Pod Cache
-
-
-
-Pod cache stores the `PodStatus` for all pods on the node. `PodStatus` encompasses
-all the information required from the container runtime to generate
-`api.PodStatus` for a pod.
-
-```go
-// PodStatus represents the status of the pod and its containers.
-// api.PodStatus can be derived from examining PodStatus and api.Pod.
-type PodStatus struct {
- ID types.UID
- Name string
- Namespace string
- IP string
- ContainerStatuses []*ContainerStatus
-}
-
-// ContainerStatus represents the status of a container.
-type ContainerStatus struct {
- ID ContainerID
- Name string
- State ContainerState
- CreatedAt time.Time
- StartedAt time.Time
- FinishedAt time.Time
- ExitCode int
- Image string
- ImageID string
- Hash uint64
- RestartCount int
- Reason string
- Message string
-}
-```
-
-`PodStatus` is defined in the container runtime interface, hence is
-runtime-agnostic.
-
-PLEG is responsible for updating the entries in the pod cache, hence always
-keeping the cache up-to-date:
-
-1. Detect change of container state
-2. Inspect the pod for details
-3. Update the pod cache with the new PodStatus
- - If there is no real change of the pod entry, do nothing
- - Otherwise, generate and send out the corresponding pod lifecycle event
-
-Note that in (3), PLEG can check if there is any disparity between the old
-and the new pod entry to filter out duplicated events if needed.
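-
-As a rough sketch, the update path could look like the following (this reuses the `PodStatus` type above, while the `Cache` and `Runtime` interfaces and the event type are hypothetical, not the final kubelet API):
-
-```go
-package pleg
-
-import "reflect"
-
-// Hypothetical interfaces for illustration only; PodStatus is the type
-// defined earlier in this proposal.
-type Cache interface {
-    Get(podID string) (*PodStatus, bool)
-    Set(podID string, status *PodStatus)
-}
-
-type Runtime interface {
-    InspectPod(podID string) (*PodStatus, error)
-}
-
-type PodLifecycleEvent struct{ PodID string }
-
-// onContainerStateChange runs after PLEG detects a container state change (step 1).
-func onContainerStateChange(cache Cache, runtime Runtime, podID string, events chan<- PodLifecycleEvent) error {
-    // Step 2: inspect the pod for details.
-    status, err := runtime.InspectPod(podID)
-    if err != nil {
-        return err
-    }
-    // Step 3: update the cache only if the entry really changed.
-    if old, ok := cache.Get(podID); ok && reflect.DeepEqual(old, status) {
-        return nil // no real change, do nothing
-    }
-    cache.Set(podID, status)
-    events <- PodLifecycleEvent{PodID: podID} // emit the corresponding lifecycle event
-    return nil
-}
-```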
-
-### Evict cache entries
-
-Note that the cache represents all the pods/containers known by the container
-runtime. A cache entry should only be evicted if the pod is no longer visible
-to the container runtime. PLEG is responsible for deleting entries in the
-cache.
-
-### Generate `api.PodStatus`
-
-Because pod cache stores the up-to-date `PodStatus` of the pods, Kubelet can
-generate the `api.PodStatus` by interpreting the cache entry at any
-time. To avoid sending intermediate status (e.g., while a pod worker
-is restarting a container), we will instruct the pod worker to generate a new
-status at the beginning of each sync.
-
-### Cache contention
-
-Cache contention should not be a problem when the number of pods is
-small. When Kubelet scales, we can always shard the pods by ID to
-reduce contention.
-
-### Disk management
-
-The pod cache cannot fulfill the needs of container/image garbage
-collectors as they may demand more than pod-level information. These components
-will still need to query the container runtime directly at times. We may
-consider extending the cache for these use cases, but they are beyond the scope
-of this proposal.
-
-
-## Impact on Pod Worker Control Flow
-
-A pod worker may perform various operations (e.g., start/kill a container)
-during a sync. They will expect to see the results of such operations reflected
-in the cache in the next sync. Alternatively, they can bypass the cache and
-query the container runtime directly to get the latest status. However, this
-is not desirable since the cache is introduced exactly to eliminate unnecessary,
-concurrent queries. Therefore, a pod worker should be blocked until all expected
-results have been updated to the cache by PLEG.
-
-Depending on the type of PLEG (see [#12802](https://issues.k8s.io/12802)) in
-use, the methods to check whether a requirement is met can differ. For a
-PLEG that solely relies on relisting, a pod worker can simply wait until the
-relist timestamp is newer than the end of the worker's last sync. On the other
-hand, if a pod worker knows what events to expect, it can also block until the
-events are observed.
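-
-For the relist-based case, a minimal sketch of that blocking behavior (the `TimedCache` interface and its method are assumptions for illustration, not the actual kubelet API) is:
-
-```go
-package pleg
-
-import "time"
-
-// TimedCache is a hypothetical view of the pod cache that records when each
-// entry was last refreshed by PLEG.
-type TimedCache interface {
-    GetWithTimestamp(podID string) (status *PodStatus, updatedAt time.Time)
-}
-
-// waitNewerThan blocks a pod worker until the cache entry for podID has been
-// refreshed after minTime (e.g. the end of the worker's previous sync).
-func waitNewerThan(c TimedCache, podID string, minTime time.Time) *PodStatus {
-    for {
-        if status, updatedAt := c.GetWithTimestamp(podID); updatedAt.After(minTime) {
-            return status
-        }
-        time.Sleep(100 * time.Millisecond) // a real implementation would block on a condition variable
-    }
-}
-```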
-
-It should be noted that `api.PodStatus` will only be generated by the pod
-worker *after* the cache has been updated. This means that the perceived
-responsiveness of Kubelet (from querying the API server) will be affected by
-how soon the cache can be populated. For the pure-relisting PLEG, the relist
-period can become the bottleneck. On the other hand, a PLEG which watches the
-upstream event stream (and knows what events to expect) is not restricted
-by such periods and should improve Kubelet's perceived responsiveness.
-
-## TODOs for v1.2
-
- - Redefine container runtime types ([#12619](https://issues.k8s.io/12619))
- and introduce `PodStatus`. Refactor dockertools and rkt to use the new type.
-
- - Add cache and instruct PLEG to populate it.
-
- - Refactor Kubelet to use the cache.
-
- - Deprecate the old runtime cache.
-
-
-
-
diff --git a/contributors/design-proposals/runtimeconfig.md b/contributors/design-proposals/runtimeconfig.md
deleted file mode 100644
index b2ed83dd4..000000000
--- a/contributors/design-proposals/runtimeconfig.md
+++ /dev/null
@@ -1,69 +0,0 @@
-# Overview
-
-Proposes adding a `--feature-config` flag to core kube system components:
-apiserver, scheduler, controller-manager, kube-proxy, and selected addons.
-This flag will be used to enable/disable alpha features on a per-component basis.
-
-## Motivation
-
-The motivation is to enable/disable features that are not tied to
-an API group. API groups can be selectively enabled/disabled in the
-apiserver via existing `--runtime-config` flag on apiserver, but there is
-currently no mechanism to toggle alpha features that are controlled by
-e.g. annotations. This means the burden of controlling whether such
-features are enabled in a particular cluster is on feature implementors;
-they must either define some ad hoc mechanism for toggling (e.g. flag
-on component binary) or else toggle the feature on/off at compile time.
-
-By adding a `--feature-config` flag to all kube-system components, alpha features
-can be toggled on a per-component basis by passing `enableAlphaFeature=true|false`
-to `--feature-config` for each component that the feature touches.
-
-## Design
-
-The following components will all get a `--feature-config` flag,
-which loads a `config.ConfigurationMap`:
-
-- kube-apiserver
-- kube-scheduler
-- kube-controller-manager
-- kube-proxy
-- kube-dns
-
-(Note kubelet is omitted; its dynamic config story is being addressed
-by [#29459](https://issues.k8s.io/29459)). Alpha features that are not accessed via an alpha API
-group should define an `enableFeatureName` flag and use it to toggle
-activation of the feature in each system component that the feature
-uses.
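-
-For illustration, a stand-alone sketch of how a component could accept and consume such a flag (the `featureConfig` type below is a simplified stand-in for `config.ConfigurationMap`, and the flag and feature names are examples only):
-
-```go
-package main
-
-import (
-    "flag"
-    "fmt"
-    "strings"
-)
-
-// featureConfig holds key=value pairs passed via --feature-config,
-// e.g. --feature-config=enableAlphaFeature=true,enableOtherFeature=false.
-type featureConfig map[string]string
-
-func (f featureConfig) String() string { return fmt.Sprint(map[string]string(f)) }
-
-func (f featureConfig) Set(value string) error {
-    for _, pair := range strings.Split(value, ",") {
-        kv := strings.SplitN(pair, "=", 2)
-        if len(kv) != 2 {
-            return fmt.Errorf("invalid entry %q, expected key=value", pair)
-        }
-        f[strings.TrimSpace(kv[0])] = strings.TrimSpace(kv[1])
-    }
-    return nil
-}
-
-// Enabled reports whether a feature key is explicitly set to "true".
-func (f featureConfig) Enabled(feature string) bool { return f[feature] == "true" }
-
-func main() {
-    features := featureConfig{}
-    flag.Var(features, "feature-config", "Comma-separated key=value pairs toggling alpha features.")
-    flag.Parse()
-    if features.Enabled("enableAlphaFeature") {
-        fmt.Println("alpha feature enabled")
-    }
-}
-```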
-
-## Suggested conventions
-
-This proposal only covers adding a mechanism to toggle features in
-system components. Implementation details will still depend on the alpha
-feature's owner(s). The following are suggested conventions:
-
-- Naming for feature config entries should follow the pattern
-  "enable<FeatureName>=true".
-- Features that touch multiple components should reserve the same key
- in each component to toggle on/off.
-- Alpha features should be disabled by default. Beta features may
- be enabled by default. Refer to docs/devel/api_changes.md#alpha-beta-and-stable-versions
- for more detailed guidance on alpha vs. beta.
-
-## Upgrade support
-
-As the primary motivation for cluster config is toggling alpha
-features, upgrade support is not in scope. Enabling or disabling
-a feature is necessarily a breaking change, so config should
-not be altered in a running cluster.
-
-## Future work
-
-1. The eventual plan is for component config to be managed by versioned
-APIs and not flags ([#12245](https://issues.k8s.io/12245)). When that is added, toggling of features
-could be handled by versioned component config and the component flags
-deprecated.
-
-
-
diff --git a/contributors/design-proposals/Kubemark_architecture.png b/contributors/design-proposals/scalability/Kubemark_architecture.png
similarity index 100%
rename from contributors/design-proposals/Kubemark_architecture.png
rename to contributors/design-proposals/scalability/Kubemark_architecture.png
diff --git a/contributors/design-proposals/kubemark.md b/contributors/design-proposals/scalability/kubemark.md
similarity index 100%
rename from contributors/design-proposals/kubemark.md
rename to contributors/design-proposals/scalability/kubemark.md
diff --git a/contributors/design-proposals/scalability-testing.md b/contributors/design-proposals/scalability/scalability-testing.md
similarity index 100%
rename from contributors/design-proposals/scalability-testing.md
rename to contributors/design-proposals/scalability/scalability-testing.md
diff --git a/contributors/design-proposals/hugepages.md b/contributors/design-proposals/scheduling/hugepages.md
similarity index 100%
rename from contributors/design-proposals/hugepages.md
rename to contributors/design-proposals/scheduling/hugepages.md
diff --git a/contributors/design-proposals/multiple-schedulers.md b/contributors/design-proposals/scheduling/multiple-schedulers.md
similarity index 100%
rename from contributors/design-proposals/multiple-schedulers.md
rename to contributors/design-proposals/scheduling/multiple-schedulers.md
diff --git a/contributors/design-proposals/nodeaffinity.md b/contributors/design-proposals/scheduling/nodeaffinity.md
similarity index 100%
rename from contributors/design-proposals/nodeaffinity.md
rename to contributors/design-proposals/scheduling/nodeaffinity.md
diff --git a/contributors/design-proposals/pod-preemption.md b/contributors/design-proposals/scheduling/pod-preemption.md
similarity index 100%
rename from contributors/design-proposals/pod-preemption.md
rename to contributors/design-proposals/scheduling/pod-preemption.md
diff --git a/contributors/design-proposals/pod-priority-api.md b/contributors/design-proposals/scheduling/pod-priority-api.md
similarity index 100%
rename from contributors/design-proposals/pod-priority-api.md
rename to contributors/design-proposals/scheduling/pod-priority-api.md
diff --git a/contributors/design-proposals/podaffinity.md b/contributors/design-proposals/scheduling/podaffinity.md
similarity index 100%
rename from contributors/design-proposals/podaffinity.md
rename to contributors/design-proposals/scheduling/podaffinity.md
diff --git a/contributors/design-proposals/rescheduler.md b/contributors/design-proposals/scheduling/rescheduler.md
similarity index 100%
rename from contributors/design-proposals/rescheduler.md
rename to contributors/design-proposals/scheduling/rescheduler.md
diff --git a/contributors/design-proposals/rescheduling-for-critical-pods.md b/contributors/design-proposals/scheduling/rescheduling-for-critical-pods.md
similarity index 100%
rename from contributors/design-proposals/rescheduling-for-critical-pods.md
rename to contributors/design-proposals/scheduling/rescheduling-for-critical-pods.md
diff --git a/contributors/design-proposals/rescheduling.md b/contributors/design-proposals/scheduling/rescheduling.md
similarity index 100%
rename from contributors/design-proposals/rescheduling.md
rename to contributors/design-proposals/scheduling/rescheduling.md
diff --git a/contributors/design-proposals/scheduler_extender.md b/contributors/design-proposals/scheduling/scheduler_extender.md
similarity index 100%
rename from contributors/design-proposals/scheduler_extender.md
rename to contributors/design-proposals/scheduling/scheduler_extender.md
diff --git a/contributors/design-proposals/taint-node-by-condition.md b/contributors/design-proposals/scheduling/taint-node-by-condition.md
similarity index 100%
rename from contributors/design-proposals/taint-node-by-condition.md
rename to contributors/design-proposals/scheduling/taint-node-by-condition.md
diff --git a/contributors/design-proposals/taint-toleration-dedicated.md b/contributors/design-proposals/scheduling/taint-toleration-dedicated.md
similarity index 100%
rename from contributors/design-proposals/taint-toleration-dedicated.md
rename to contributors/design-proposals/scheduling/taint-toleration-dedicated.md
diff --git a/contributors/design-proposals/selector-generation.md b/contributors/design-proposals/selector-generation.md
deleted file mode 100644
index 9b4b51fa3..000000000
--- a/contributors/design-proposals/selector-generation.md
+++ /dev/null
@@ -1,180 +0,0 @@
-Design
-=============
-
-# Goals
-
-Make it really hard to accidentally create a job which has an overlapping
-selector, while still making it possible to choose an arbitrary selector, and
-without adding complex constraint solving to the API server.
-
-# Use Cases
-
-1. user can leave all label and selector fields blank and system will fill in
-reasonable ones: non-overlappingness guaranteed.
-2. user can put on the pod template some labels that are useful to the user,
-without reasoning about non-overlappingness. The system adds an additional label
-to ensure selectors do not overlap.
-3. If user wants to reparent pods to new job (very rare case) and knows what
-they are doing, they can completely disable this behavior and specify explicit
-selector.
-4. If a controller that makes jobs, like scheduled job, wants to use different
-labels, such as the time and date of the run, it can do that.
-5. If User reads v1beta1 documentation or reuses v1beta1 Job definitions and
-just changes the API group, the user should not automatically be allowed to
-specify a selector, since this is very rarely what people want to do and is
-error prone.
-6. If User downloads an existing job definition, e.g. with
-`kubectl get jobs/old -o yaml` and tries to modify and post it, he should not
-create an overlapping job.
-7. If User downloads an existing job definition, e.g. with
-`kubectl get jobs/old -o yaml` and tries to modify and post it, and he
-accidentally copies the uniquifying label from the old one, then he should not
-get an error from a label-key conflict, nor get erratic behavior.
-8. If user reads swagger docs and sees the selector field, he should not be able
-to set it without realizing the risks.
-9. (Deferred requirement:) If user wants to specify a preferred name for the
-non-overlappingness key, they can pick a name.
-
-# Proposed changes
-
-## API
-
-`extensions/v1beta1 Job` remains the same. `batch/v1 Job` changes as follows.
-
-Field `job.spec.manualSelector` is added. It controls whether selectors are
-automatically generated. In automatic mode, user cannot make the mistake of
-creating non-unique selectors. In manual mode, certain rare use cases are
-supported.
-
-Validation is not changed. A selector must be provided, and it must select the
-pod template.
-
-Defaulting changes. Defaulting happens in one of two modes:
-
-### Automatic Mode
-
-- User does not specify `job.spec.selector`.
-- User is probably unaware of the `job.spec.manualSelector` field and does not
-think about it.
-- User optionally puts labels on the pod template. User does not think
-about uniqueness, just labeling for user's own reasons.
-- Defaulting logic sets `job.spec.selector` to
-`matchLabels["controller-uid"]="$UIDOFJOB"`
-- Defaulting logic appends 2 labels to the `.spec.template.metadata.labels`.
- - The first label is controller-uid=$UIDOFJOB.
- - The second label is "job-name=$NAMEOFJOB".
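-
-A minimal sketch of that defaulting step (the trimmed `jobSpec` type and the helper are illustrative, not the actual API machinery; the selector is reduced to `matchLabels` for brevity):
-
-```go
-package defaults
-
-// jobSpec is a trimmed, hypothetical view of the Job fields relevant here.
-type jobSpec struct {
-    UID            string
-    Name           string
-    Selector       map[string]string // matchLabels only, for brevity
-    TemplateLabels map[string]string
-}
-
-// defaultSelectorAndLabels fills in the selector and pod-template labels when
-// the user left the selector blank (automatic mode).
-func defaultSelectorAndLabels(j *jobSpec) {
-    if j.Selector != nil {
-        return // manual mode: the user supplied a selector, leave everything alone
-    }
-    j.Selector = map[string]string{"controller-uid": j.UID}
-    if j.TemplateLabels == nil {
-        j.TemplateLabels = map[string]string{}
-    }
-    j.TemplateLabels["controller-uid"] = j.UID // guarantees uniqueness
-    j.TemplateLabels["job-name"] = j.Name      // human-friendly handle
-}
-```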
-
-### Manual Mode
-
-- User means User or Controller for the rest of this list.
-- User does specify `job.spec.selector`.
-- User does specify `job.spec.manualSelector=true`
-- User puts a unique label or label(s) on pod template (required). User does
-think carefully about uniqueness.
-- No defaulting of pod labels or the selector happen.
-
-### Rationale
-
-UID is better than Name in that:
-- it allows cross-namespace control someday if we need it.
-- it is unique across all kinds. `controller-name=foo` does not ensure
-uniqueness across Kinds `job` vs `replicaSet`. Even `job-name=foo` has a
-problem: you might have a `batch.Job` and a `snazzyjob.io/types.Job` -- the
-latter cannot use label `job-name=foo`, though there is a temptation to do so.
-- it uniquely identifies the controller across time. This prevents the case
-where, for example, someone deletes a job via the REST api or client
-(where cascade=false), leaving pods around. We don't want those to be picked up
-unintentionally. It also prevents the case where a user looks at an old job that
-finished but is not deleted, and tries to select its pods, and gets the wrong
-impression that it is still running.
-
-Job name is more user-friendly. It is self-documenting.
-
-Commands like `kubectl get pods -l job-name=myjob` should do exactly what is
-wanted 99.9% of the time. Automated control loops should still use the
-`controller-uid` label.
-
-Using both gets the benefits of both, at the cost of some label verbosity.
-
-The field is a `*bool`. Since false is expected to be much more common,
-and since the feature is complex, it is better to leave it unspecified so that
-users looking at a stored job spec do not need to be aware of this field.
-
-### Overriding Unique Labels
-
-If user does specify `job.spec.selector` then the user must also specify
-`job.spec.manualSelector`. This ensures the user knows that what he is doing is
-not the normal thing to do.
-
-To prevent users from copying the `job.spec.manualSelector` flag from existing
-jobs, it will be optional and default to false, which means when you GET an
-existing job back that didn't use this feature, you don't even see the
-`job.spec.manualSelector` flag, so you are not tempted to wonder if you should
-fiddle with it.
-
-## Job Controller
-
-No changes
-
-## Kubectl
-
-No required changes. Suggest moving SELECTOR to wide output of `kubectl get
-jobs` since users do not write the selector.
-
-## Docs
-
-Remove examples that use selector and remove labels from pod templates.
-Recommend `kubectl get jobs -l job-name=name` as the way to find pods of a job.
-
-# Conversion
-
-The following applies to Job, as well as to other types that adopt this pattern:
-
-- Type `extensions/v1beta1` gets a field called `job.spec.autoSelector`.
-- Both the internal type and the `batch/v1` type will get
-`job.spec.manualSelector`.
-- The fields `manualSelector` and `autoSelector` have opposite meanings.
-- Each field defaults to false when unset, and so v1beta1 has a different
-default than v1 and internal. This is intentional: we want new uses to default
-to the less error-prone behavior, and we do not want to change the behavior of
-v1beta1.
-
-*Note*: since the internal default is changing, client library consumers that
-create Jobs may need to add "job.spec.manualSelector=true" to keep working, or
-switch to auto selectors.
-
-Conversion is as follows:
-- `extensions/__internal` to `extensions/v1beta1`: the value of
-`__internal.Spec.ManualSelector` is defaulted to false if nil, negated,
-defaulted to nil if false, and written to `v1beta1.Spec.AutoSelector`.
-- `extensions/v1beta1` to `extensions/__internal`: the value of
-`v1beta1.Spec.AutoSelector` is defaulted to false if nil, negated, defaulted to
-nil if false, and written to `__internal.Spec.ManualSelector`.
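-
-A sketch of the negate-and-default rule in one direction (the helper name is illustrative):
-
-```go
-package conversion
-
-// manualToAuto converts the internal ManualSelector pointer into the v1beta1
-// AutoSelector pointer: default nil to false, negate, then store false as nil.
-func manualToAuto(manualSelector *bool) *bool {
-    manual := manualSelector != nil && *manualSelector // nil defaults to false
-    auto := !manual                                    // negate
-    if !auto {
-        return nil // false is left unset
-    }
-    t := true
-    return &t
-}
-```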
-
-This conversion gives the following properties.
-
-1. Users that previously used v1beta1 do not start seeing a new field when they
-get back objects.
-2. Distinction between originally unset versus explicitly set to false is not
-preserved (would have been nice to do so, but requires more complicated
-solution).
-3. Users who only created v1beta1 examples or v1 examples, will not ever see the
-existence of either field.
-4. Since v1beta1 are convertible to/from v1, the storage location (path in etcd)
-does not need to change, allowing scriptable rollforward/rollback.
-
-# Future Work
-
-Follow this pattern for Deployments, ReplicaSet, DaemonSet when going to v1, if
-it works well for job.
-
-Docs will be edited to show examples without a `job.spec.selector`.
-
-We probably want as much as possible the same behavior for Job and
-ReplicationController.
-
-
-
-
diff --git a/contributors/design-proposals/service-external-name.md b/contributors/design-proposals/service-external-name.md
deleted file mode 100644
index eaab4c514..000000000
--- a/contributors/design-proposals/service-external-name.md
+++ /dev/null
@@ -1,161 +0,0 @@
-# Service externalName
-
-Authors: Tim Hockin (@thockin), Rodrigo Campos (@rata), Rudi C (@therc)
-
-Date: August 2016
-
-Status: Implementation in progress
-
-# Goal
-
-Allow a service to have a CNAME record in the cluster internal DNS service. For
-example, the lookup for a `db` service could return a CNAME that points to the
-RDS resource `something.rds.aws.amazon.com`. No proxying is involved.
-
-# Motivation
-
-There were many related issues, but we'll try to summarize them here. More info
-is on GitHub issues/PRs: [#13748](https://issues.k8s.io/13748), [#11838](https://issues.k8s.io/11838), [#13358](https://issues.k8s.io/13358), [#23921](https://issues.k8s.io/23921)
-
-One motivation is to present as native cluster services, services that are
-hosted externally. Some cloud providers, like AWS, hand out hostnames (IPs are
-not static) and the user wants to refer to these services using regular
-Kubernetes tools. This was requested in bugs, at least for AWS, for RedShift,
-RDS, Elasticsearch Service, ELB, etc.
-
-Other users just want to use an external service, for example `oracle`, with DNS
-name `oracle-1.testdev.mycompany.com`, without having to keep DNS in sync, and
-are fine with a CNAME.
-
-Another use case is to "integrate" some services for local development. For
-example, consider a search service running in Kubernetes in staging, let's say
-`search-1.staging.mycompany.com`. It's running on AWS, so it resides behind an
-ELB (which has no static IP, just a hostname). A developer is building an app
-that consumes `search-1`, but doesn't want to run it on their machine (before
-Kubernetes, they didn't, either). They can just create a service that has a
-CNAME to the `search-1` endpoint in staging and be happy as before.
-
-Also, OpenShift needs this for "service refs". Service ref is really just the
-three use cases mentioned above, but in the future a way to automatically inject
-"service ref"s into namespaces via "service catalog"[1] might be considered. And
-service ref is the natural way to integrate an external service, since it takes
-advantage of native DNS capabilities already in wide use.
-
-[1]: https://github.com/kubernetes/kubernetes/pull/17543
-
-# Alternatives considered
-
-In the issues linked above, some alternatives were also considered. A partial
-summary of them follows.
-
-One option is to add the hostname to endpoints, as proposed in
-https://github.com/kubernetes/kubernetes/pull/11838. This is problematic, as
-endpoints are used in many places and users assume the required fields (such as
-IP address) are always present and valid (and check that, too). If the field is
-not required anymore or if there is just a hostname instead of the IP,
-applications could break. Even assuming those cases could be solved, the
-hostname will have to be resolved, which presents further questions and issues:
-the timeout to use, whether the lookup is synchronous or asynchronous, dealing
-with DNS TTL and more. One imperfect approach was to only resolve the hostname
-upon creation, but this was considered not a great idea. A better approach
-would be at a higher level, maybe a service type.
-
-There are more ideas described in [#13748](https://issues.k8s.io/13748), but all raised further issues,
-ranging from using another upstream DNS server to creating a Name object
-associated with DNSs.
-
-# Proposed solution
-
-The proposed solution works at the service layer, by adding a new `externalName`
-type for services. This will create a CNAME record in the internal cluster DNS
-service. No virtual IP or proxying is involved.
-
-Using a CNAME gets rid of unnecessary DNS lookups. There's no need for the
-Kubernetes control plane to issue them, to pick a timeout for them, or to
-refresh them when the TTL for a record expires. It's way simpler to implement,
-while solving the right problem. And addressing it at the service layer avoids
-all the complications mentioned above about doing it at the endpoints layer.
-
-The solution was outlined by Tim Hockin in
-https://github.com/kubernetes/kubernetes/issues/13748#issuecomment-230397975
-
-Currently a ServiceSpec looks like this, with comments edited for clarity:
-
-```go
-type ServiceSpec struct {
- Ports []ServicePort
-
- // If not specified, the associated Endpoints object is not automatically managed
- Selector map[string]string
-
- // "", a real IP, or "None". If not specified, this is default allocated. If "None", this Service is not load-balanced
- ClusterIP string
-
- // ClusterIP, NodePort, LoadBalancer. Only applies if clusterIP != "None"
- Type ServiceType
-
- // Only applies if clusterIP != "None"
- ExternalIPs []string
- SessionAffinity ServiceAffinity
-
- // Only applies to type=LoadBalancer
- LoadBalancerIP string
- LoadBalancerSourceRanges []string
-}
-```
-
-The proposal is to change it to:
-
-```go
-type ServiceSpec struct {
- Ports []ServicePort
-
- // If not specified, the associated Endpoints object is not automatically managed
-+ // Only applies if type is ClusterIP, NodePort, or LoadBalancer. If type is ExternalName, this is ignored.
- Selector map[string]string
-
- // "", a real IP, or "None". If not specified, this is default allocated. If "None", this Service is not load-balanced.
-+ // Only applies if type is ClusterIP, NodePort, or LoadBalancer. If type is ExternalName, this is ignored.
- ClusterIP string
-
-- // ClusterIP, NodePort, LoadBalancer. Only applies if clusterIP != "None"
-+ // ExternalName, ClusterIP, NodePort, LoadBalancer. Only applies if clusterIP != "None"
- Type ServiceType
-
-+ // Only applies if type is ExternalName
-+ ExternalName string
-
- // Only applies if clusterIP != "None"
- ExternalIPs []string
- SessionAffinity ServiceAffinity
-
- // Only applies to type=LoadBalancer
- LoadBalancerIP string
- LoadBalancerSourceRanges []string
-}
-```
-
-For example, it can be used like this:
-
-```yaml
-apiVersion: v1
-kind: Service
-metadata:
- name: my-rds
-spec:
- ports:
- - port: 12345
- type: ExternalName
- externalName: myapp.rds.whatever.aws.says
-```
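-
-With such a Service in, say, the `default` namespace (an assumption for this illustration), an in-cluster client that resolves the service name simply follows the CNAME returned by the DNS addon:
-
-```go
-package main
-
-import (
-    "fmt"
-    "net"
-)
-
-func main() {
-    // The cluster DNS answers this name with a CNAME record pointing at the
-    // externalName target; the resolver then follows it as usual.
-    cname, err := net.LookupCNAME("my-rds.default.svc.cluster.local")
-    if err != nil {
-        fmt.Println("lookup failed:", err)
-        return
-    }
-    fmt.Println("resolved CNAME:", cname) // e.g. myapp.rds.whatever.aws.says.
-}
-```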
-
-There is one issue to take into account, that no other alternative considered
-fixes, either: TLS. If the service is a CNAME for an endpoint that uses TLS,
-connecting with the Kubernetes name `my-service.my-ns.svc.cluster.local` may
-result in a failure during server certificate validation. This is acknowledged
-and left for future consideration. For the time being, users and administrators
-might need to ensure that the server certificate also mentions the Kubernetes
-name as an alternate host name.
-
-
-
-
diff --git a/contributors/design-proposals/expansion.md b/contributors/design-proposals/sig-cli/expansion.md
similarity index 100%
rename from contributors/design-proposals/expansion.md
rename to contributors/design-proposals/sig-cli/expansion.md
diff --git a/contributors/design-proposals/kubectl-create-from-env-file.md b/contributors/design-proposals/sig-cli/kubectl-create-from-env-file.md
similarity index 100%
rename from contributors/design-proposals/kubectl-create-from-env-file.md
rename to contributors/design-proposals/sig-cli/kubectl-create-from-env-file.md
diff --git a/contributors/design-proposals/kubectl-extension.md b/contributors/design-proposals/sig-cli/kubectl-extension.md
similarity index 100%
rename from contributors/design-proposals/kubectl-extension.md
rename to contributors/design-proposals/sig-cli/kubectl-extension.md
diff --git a/contributors/design-proposals/kubectl-login.md b/contributors/design-proposals/sig-cli/kubectl-login.md
similarity index 100%
rename from contributors/design-proposals/kubectl-login.md
rename to contributors/design-proposals/sig-cli/kubectl-login.md
diff --git a/contributors/design-proposals/multi-fields-merge-key.md b/contributors/design-proposals/sig-cli/multi-fields-merge-key.md
similarity index 100%
rename from contributors/design-proposals/multi-fields-merge-key.md
rename to contributors/design-proposals/sig-cli/multi-fields-merge-key.md
diff --git a/contributors/design-proposals/preserve-order-in-strategic-merge-patch.md b/contributors/design-proposals/sig-cli/preserve-order-in-strategic-merge-patch.md
similarity index 100%
rename from contributors/design-proposals/preserve-order-in-strategic-merge-patch.md
rename to contributors/design-proposals/sig-cli/preserve-order-in-strategic-merge-patch.md
diff --git a/contributors/design-proposals/simple-rolling-update.md b/contributors/design-proposals/simple-rolling-update.md
deleted file mode 100644
index c4a5f6714..000000000
--- a/contributors/design-proposals/simple-rolling-update.md
+++ /dev/null
@@ -1,131 +0,0 @@
-## Simple rolling update
-
-This is a lightweight design document for simple
-[rolling update](../user-guide/kubectl/kubectl_rolling-update.md) in `kubectl`.
-
-Complete execution flow can be found [here](#execution-details). See the
-[example of rolling update](../user-guide/update-demo/) for more information.
-
-### Lightweight rollout
-
-Assume that we have a current replication controller named `foo` and it is
-running image `image:v1`.
-
-`kubectl rolling-update foo [foo-v2] --image=myimage:v2`
-
-If the user doesn't specify a name for the 'next' replication controller, then
-the 'next' replication controller is renamed to
-the name of the original replication controller.
-
-Obviously there is a race here: if you kill the client between deleting `foo`
-and creating the new version of `foo`, you might be surprised about what is
-there, but I think that's ok. See [Recovery](#recovery) below.
-
-If the user does specify a name for the 'next' replication controller, then the
-'next' replication controller is retained with its existing name, and the old
-'foo' replication controller is deleted. For the purposes of the rollout, we add
-a unique-ifying label `kubernetes.io/deployment` to both the `foo` and
-`foo-next` replication controllers. The value of that label is the hash of the
-complete JSON representation of the `foo-next` or `foo` replication controller.
-The name of this label can be overridden by the user with the
-`--deployment-label-key` flag.
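-
-A sketch of how that label value could be derived (the hashing scheme here is illustrative; the real implementation may use a different hash and encoding):
-
-```go
-package main
-
-import (
-    "crypto/sha256"
-    "encoding/json"
-    "fmt"
-)
-
-// deploymentHash returns a value for the kubernetes.io/deployment label:
-// a hash of the complete JSON representation of the replication controller.
-func deploymentHash(rc interface{}) (string, error) {
-    data, err := json.Marshal(rc)
-    if err != nil {
-        return "", err
-    }
-    sum := sha256.Sum256(data)
-    return fmt.Sprintf("%x", sum[:8]), nil // shortened to stay within label-value length limits
-}
-```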
-
-#### Recovery
-
-If a rollout fails or is terminated in the middle, it is important that the user
-be able to resume the roll out. To facilitate recovery in the case of a crash of
-the updating process itself, we add the following annotations to each
-replication controller in the `kubernetes.io/` annotation namespace:
- * `desired-replicas` The desired number of replicas for this replication
-controller (either N or zero)
- * `update-partner` A pointer to the replication controller resource that is
-the other half of this update (syntax: `<name>`; the namespace is assumed to be
-identical to the namespace of this replication controller.)
-
-Recovery is achieved by issuing the same command again:
-
-```sh
-kubectl rolling-update foo [foo-v2] --image=myimage:v2
-```
-
-Whenever the rolling update command executes, the kubectl client looks for
-replication controllers called `foo` and `foo-next`; if they exist, an attempt
-is made to roll `foo` to `foo-next`. If `foo-next` does not exist, then it is
-created, and the rollout is a new rollout. If `foo` doesn't exist, then it is
-assumed that the rollout is nearly completed, and `foo-next` is renamed to
-`foo`. Details of the execution flow are given below.
-
-
-### Aborting a rollout
-
-Abort is assumed to want to reverse a rollout in progress.
-
-`kubectl rolling-update foo [foo-v2] --rollback`
-
-This is really just semantic sugar for:
-
-`kubectl rolling-update foo-v2 foo`
-
-With the added detail that it moves the `desired-replicas` annotation from
-`foo-v2` to `foo`.
-
-
-### Execution Details
-
-For the purposes of this example, assume that we are rolling from `foo` to
-`foo-next` where the only change is an image update from `v1` to `v2`
-
-If the user doesn't specify a `foo-next` name, then it is discovered from
-the `update-partner` annotation on `foo`. If that annotation doesn't exist,
-then `foo-next` is synthesized using the pattern
-`<controller-name>-<hash-of-pod-template>`.
-
-#### Initialization
-
- * If `foo` and `foo-next` do not exist:
- * Exit, and indicate an error to the user, that the specified controller
-doesn't exist.
- * If `foo` exists, but `foo-next` does not:
- * Create `foo-next`, populate it with the `v2` image, and set
-`desired-replicas` to `foo.Spec.Replicas`
- * Goto Rollout
- * If `foo-next` exists, but `foo` does not:
- * Assume that we are in the rename phase.
- * Goto Rename
- * If both `foo` and `foo-next` exist:
- * Assume that we are in a partial rollout
- * If `foo-next` is missing the `desired-replicas` annotation
- * Populate the `desired-replicas` annotation to `foo-next` using the
-current size of `foo`
- * Goto Rollout
-
-#### Rollout
-
- * While size of `foo-next` < `desired-replicas` annotation on `foo-next`
- * increase size of `foo-next`
- * if size of `foo` > 0, decrease size of `foo`
- * Goto Rename
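-
-A sketch of the Rollout loop above against a hypothetical minimal client interface (not the real kubectl code):
-
-```go
-package rollingupdate
-
-// scaler is a hypothetical minimal client for resizing replication controllers.
-type scaler interface {
-    Size(name string) (int, error)
-    Resize(name string, size int) error
-}
-
-// rollout grows foo-next one replica at a time while shrinking foo, until
-// foo-next reaches the desired-replicas annotation value.
-func rollout(c scaler, foo, fooNext string, desiredReplicas int) error {
-    for {
-        next, err := c.Size(fooNext)
-        if err != nil {
-            return err
-        }
-        if next >= desiredReplicas {
-            return nil // proceed to the Rename phase
-        }
-        if err := c.Resize(fooNext, next+1); err != nil {
-            return err
-        }
-        old, err := c.Size(foo)
-        if err != nil {
-            return err
-        }
-        if old > 0 {
-            if err := c.Resize(foo, old-1); err != nil {
-                return err
-            }
-        }
-    }
-}
-```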
-
-#### Rename
-
- * delete `foo`
- * create `foo` that is identical to `foo-next`
- * delete `foo-next`
-
-#### Abort
-
- * If `foo-next` doesn't exist
- * Exit and indicate to the user that they may want to simply do a new
-rollout with the old version
- * If `foo` doesn't exist
- * Exit and indicate not found to the user
- * Otherwise, `foo-next` and `foo` both exist
- * Set `desired-replicas` annotation on `foo` to match the annotation on
-`foo-next`
- * Goto Rollout with `foo` and `foo-next` trading places.
-
-
-
-
diff --git a/contributors/design-proposals/all-in-one-volume.md b/contributors/design-proposals/storage/all-in-one-volume.md
similarity index 100%
rename from contributors/design-proposals/all-in-one-volume.md
rename to contributors/design-proposals/storage/all-in-one-volume.md
diff --git a/contributors/design-proposals/default-storage-class.md b/contributors/design-proposals/storage/default-storage-class.md
similarity index 100%
rename from contributors/design-proposals/default-storage-class.md
rename to contributors/design-proposals/storage/default-storage-class.md
diff --git a/contributors/design-proposals/flex-volumes-drivers-psp.md b/contributors/design-proposals/storage/flex-volumes-drivers-psp.md
similarity index 100%
rename from contributors/design-proposals/flex-volumes-drivers-psp.md
rename to contributors/design-proposals/storage/flex-volumes-drivers-psp.md
diff --git a/contributors/design-proposals/flexvolume-deployment.md b/contributors/design-proposals/storage/flexvolume-deployment.md
similarity index 100%
rename from contributors/design-proposals/flexvolume-deployment.md
rename to contributors/design-proposals/storage/flexvolume-deployment.md
diff --git a/contributors/design-proposals/local-storage-overview.md b/contributors/design-proposals/storage/local-storage-overview.md
similarity index 100%
rename from contributors/design-proposals/local-storage-overview.md
rename to contributors/design-proposals/storage/local-storage-overview.md
diff --git a/contributors/design-proposals/mount-options.md b/contributors/design-proposals/storage/mount-options.md
similarity index 100%
rename from contributors/design-proposals/mount-options.md
rename to contributors/design-proposals/storage/mount-options.md
diff --git a/contributors/design-proposals/persistent-storage.md b/contributors/design-proposals/storage/persistent-storage.md
similarity index 100%
rename from contributors/design-proposals/persistent-storage.md
rename to contributors/design-proposals/storage/persistent-storage.md
diff --git a/contributors/design-proposals/propagation.md b/contributors/design-proposals/storage/propagation.md
similarity index 100%
rename from contributors/design-proposals/propagation.md
rename to contributors/design-proposals/storage/propagation.md
diff --git a/contributors/design-proposals/volume-hostpath-qualifiers.md b/contributors/design-proposals/storage/volume-hostpath-qualifiers.md
similarity index 100%
rename from contributors/design-proposals/volume-hostpath-qualifiers.md
rename to contributors/design-proposals/storage/volume-hostpath-qualifiers.md
diff --git a/contributors/design-proposals/volume-metrics.md b/contributors/design-proposals/storage/volume-metrics.md
similarity index 100%
rename from contributors/design-proposals/volume-metrics.md
rename to contributors/design-proposals/storage/volume-metrics.md
diff --git a/contributors/design-proposals/volume-ownership-management.md b/contributors/design-proposals/storage/volume-ownership-management.md
similarity index 100%
rename from contributors/design-proposals/volume-ownership-management.md
rename to contributors/design-proposals/storage/volume-ownership-management.md
diff --git a/contributors/design-proposals/volume-provisioning.md b/contributors/design-proposals/storage/volume-provisioning.md
similarity index 100%
rename from contributors/design-proposals/volume-provisioning.md
rename to contributors/design-proposals/storage/volume-provisioning.md
diff --git a/contributors/design-proposals/volume-selectors.md b/contributors/design-proposals/storage/volume-selectors.md
similarity index 100%
rename from contributors/design-proposals/volume-selectors.md
rename to contributors/design-proposals/storage/volume-selectors.md
diff --git a/contributors/design-proposals/volume-snapshotting.md b/contributors/design-proposals/storage/volume-snapshotting.md
similarity index 100%
rename from contributors/design-proposals/volume-snapshotting.md
rename to contributors/design-proposals/storage/volume-snapshotting.md
diff --git a/contributors/design-proposals/volume-snapshotting.png b/contributors/design-proposals/storage/volume-snapshotting.png
similarity index 100%
rename from contributors/design-proposals/volume-snapshotting.png
rename to contributors/design-proposals/storage/volume-snapshotting.png
diff --git a/contributors/design-proposals/volumes.md b/contributors/design-proposals/storage/volumes.md
similarity index 100%
rename from contributors/design-proposals/volumes.md
rename to contributors/design-proposals/storage/volumes.md
diff --git a/contributors/design-proposals/synchronous-garbage-collection.md b/contributors/design-proposals/synchronous-garbage-collection.md
deleted file mode 100644
index 6f2a9be5f..000000000
--- a/contributors/design-proposals/synchronous-garbage-collection.md
+++ /dev/null
@@ -1,175 +0,0 @@
-**Table of Contents**
-
-
-- [Overview](#overview)
-- [API Design](#api-design)
- - [Standard Finalizers](#standard-finalizers)
- - [OwnerReference](#ownerreference)
- - [DeleteOptions](#deleteoptions)
-- [Components changes](#components-changes)
- - [API Server](#api-server)
- - [Garbage Collector](#garbage-collector)
- - [Controllers](#controllers)
-- [Handling circular dependencies](#handling-circular-dependencies)
-- [Unhandled cases](#unhandled-cases)
-- [Implications to existing clients](#implications-to-existing-clients)
-
-
-
-# Overview
-
-Users of the server-side garbage collection need to determine if the garbage collection is done. For example:
-* Currently `kubectl delete rc` blocks until all the pods are terminating. To convert to use server-side garbage collection, kubectl has to be able to determine if the garbage collection is done.
-* [#19701](https://github.com/kubernetes/kubernetes/issues/19701#issuecomment-236997077) is a use case where the user needs to wait for all service dependencies to be garbage collected and their names released before she recreates the dependencies.
-
-We define the garbage collection as "done" when all the dependents are deleted from the key-value store, rather than merely in the terminating state. There are two reasons: *i)* for `Pod`s, the most common garbage, only when they are deleted from the key-value store do we know that the kubelet has released the resources they occupy; *ii)* some users need to recreate objects with the same names, so they need to wait for the old objects to be deleted from the key-value store. (This limitation exists because we index objects by their names in the key-value store today.)
-
-Synchronous Garbage Collection is a best-effort (see [unhandled cases](#unhandled-cases)) mechanism that allows users to determine if the garbage collection is done: after the API server receives a deletion request for an owning object, the object keeps existing in the key-value store until all its dependents are deleted from the key-value store by the garbage collector.
-
-Tracking issue: https://github.com/kubernetes/kubernetes/issues/29891
-
-# API Design
-
-## Standard Finalizers
-
-We will introduce a new standard finalizer:
-
-```go
-const GCFinalizer string = "DeletingDependents"
-```
-
-This finalizer indicates the object is terminating and is waiting for its dependents whose `OwnerReference.BlockOwnerDeletion` is true to be deleted.
-
-## OwnerReference
-
-```go
-OwnerReference {
- ...
- // If true, AND if the owner has the "DeletingDependents" finalizer, then the owner cannot be deleted from the key-value store until this reference is removed.
- // Defaults to false.
- // To set this field, a user needs "delete" permission of the owner, otherwise 422 (Unprocessable Entity) will be returned.
- BlockOwnerDeletion *bool
-}
-```
-
-The initial draft of the proposal did not include this field and it had a security loophole: a user who is only authorized to update one resource can set ownerReference to block the synchronous GC of other resources. Requiring users to explicitly set `BlockOwnerDeletion` allows the master to properly authorize the request.
-
-## DeleteOptions
-
-```go
-DeleteOptions {
- ...
- // Whether and how garbage collection will be performed.
- // Defaults to DeletePropagationDefault
- // Either this field or OrphanDependents may be set, but not both.
- PropagationPolicy *DeletePropagationPolicy
-}
-
-type DeletePropagationPolicy string
-
-const (
- // The default depends on the existing finalizers on the object and the type of the object.
- DeletePropagationDefault DeletePropagationPolicy = "DeletePropagationDefault"
- // Orphans the dependents
- DeletePropagationOrphan DeletePropagationPolicy = "DeletePropagationOrphan"
- // Deletes the object from the key-value store, the garbage collector will delete the dependents in the background.
- DeletePropagationBackground DeletePropagationPolicy = "DeletePropagationBackground"
- // The object exists in the key-value store until the garbage collector deletes all the dependents whose ownerReference.blockOwnerDeletion=true from the key-value store.
- // API server will put the "DeletingDependents" finalizer on the object, and set its deletionTimestamp.
- // This policy is cascading, i.e., the dependents will also be deleted with DeletePropagationForeground.
- DeletePropagationForeground DeletePropagationPolicy = "DeletePropagationForeground"
-)
-```
-
-The `DeletePropagationForeground` policy represents the synchronous GC mode.
-
-`DeleteOptions.OrphanDependents *bool` will be marked as deprecated and will be removed in 1.7. Validation code will make sure only one of `OrphanDependents` and `PropagationPolicy` may be set. We decided not to add another `DeleteAfterDependentsDeleted *bool`, because together with `OrphanDependents`, it will result in 9 possible combinations and is thus confusing.
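-
-For illustration, with a present-day client-go client, requesting foreground (synchronous) deletion of a Deployment looks roughly like this (client construction omitted; the function name is ours):
-
-```go
-package main
-
-import (
-    "context"
-
-    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
-    "k8s.io/client-go/kubernetes"
-)
-
-// deleteDeploymentForeground asks the API server to keep the Deployment object
-// around until the garbage collector has removed its dependents.
-func deleteDeploymentForeground(client kubernetes.Interface, namespace, name string) error {
-    policy := metav1.DeletePropagationForeground
-    return client.AppsV1().Deployments(namespace).Delete(
-        context.TODO(), name, metav1.DeleteOptions{PropagationPolicy: &policy})
-}
-```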
-
-The conversion rules are described in the following table:
-
-| 1.5 | pre 1.4/1.4 |
-|------------------------------------------|--------------------------|
-| DeletePropagationDefault | OrphanDependents==nil |
-| DeletePropagationOrphan | *OrphanDependents==true |
-| DeletePropagationBackground | *OrphanDependents==false |
-| DeletePropagationForeground | N/A |
-
-# Components changes
-
-## API Server
-
-`Delete()` function checks `DeleteOptions.PropagationPolicy`. If the policy is `DeletePropagationForeground`, the API server will update the object instead of deleting it, add the "DeletingDependents" finalizer, remove the "OrphanDependents" finalizer if it's present, and set the `ObjectMeta.DeletionTimestamp`.
-
-When validating the ownerReference, API server needs to query the `Authorizer` to check if the user has "delete" permission of the owner object. It returns 422 if the user does not have the permissions but intends to set `OwnerReference.BlockOwnerDeletion` to true.
-
-## Garbage Collector
-
-**Modifications to processEvent()**
-
-Currently `processEvent()` manages GC's internal owner-dependency relationship graph, `uidToNode`. It updates `uidToNode` according to the Add/Update/Delete events in the cluster. To support synchronous GC, it has to:
-
-* handle Add or Update events where `obj.Finalizers.Has(GCFinalizer) && obj.DeletionTimestamp != nil`. The object will be added into the `dirtyQueue`. The object will be marked as "GC in progress" in `uidToNode`.
-* Upon receiving the deletion event of an object, put its owner into the `dirtyQueue` if the owner node is marked as "GC in progress". This is to force the `processItem()` (described next) to re-check if all dependents of the owner are deleted.
-
-**Modifications to processItem()**
-
-Currently `processItem()` consumes the `dirtyQueue`, requests the API server to delete an item if all of its owners do not exist. To support synchronous GC, it has to:
-
-* treat an owner as "not exist" if `owner.DeletionTimestamp != nil && !owner.Finalizers.Has(OrphanFinalizer)`, otherwise synchronous GC will not progress because the owner keeps existing in the key-value store.
-* when deleting dependents, if the owner's finalizers include `DeletingDependents`, it should use `DeletePropagationForeground` as the GC policy.
-* if an object has multiple owners, some owners still exist while other owners are in the synchronous GC stage, then according to the existing logic of GC, the object wouldn't be deleted. To unblock the synchronous GC of owners, `processItem()` has to remove the ownerReferences pointing to them.
-
-In addition, if an object popped from `dirtyQueue` is marked as "GC in progress", `processItem()` treats it specially:
-
-* To avoid racing with another controller, it requeues the object if `observedGeneration < Generation`. This is best-effort, see [unhandled cases](#unhandled-cases).
-* Checks if the object has dependents
- * If not, send a PUT request to remove the `GCFinalizer`;
- * If so, then add all dependents to the `dirtyQueue`; we need bookkeeping to avoid adding the dependents repeatedly if the owner gets into the `synchronousGC queue` multiple times.
-
-## Controllers
-
-To utilize the synchronous garbage collection feature, controllers (e.g., the replicaset controller) need to set `OwnerReference.BlockOwnerDeletion` when creating dependent objects (e.g. pods).
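-
-For example, a sketch of a controller attaching such an owner reference to a pod it creates (using current API packages; the owner kind, names, and image are placeholders):
-
-```go
-package main
-
-import (
-    corev1 "k8s.io/api/core/v1"
-    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
-    "k8s.io/apimachinery/pkg/types"
-)
-
-// newOwnedPod returns a pod whose owner reference blocks foreground deletion
-// of the owning ReplicaSet until this pod is gone.
-func newOwnedPod(ownerName string, ownerUID types.UID) *corev1.Pod {
-    t := true
-    return &corev1.Pod{
-        ObjectMeta: metav1.ObjectMeta{
-            GenerateName: ownerName + "-",
-            OwnerReferences: []metav1.OwnerReference{{
-                APIVersion:         "apps/v1",
-                Kind:               "ReplicaSet",
-                Name:               ownerName,
-                UID:                ownerUID,
-                Controller:         &t,
-                BlockOwnerDeletion: &t, // owner cannot be removed until this pod is deleted
-            }},
-        },
-        Spec: corev1.PodSpec{Containers: []corev1.Container{{Name: "app", Image: "nginx"}}},
-    }
-}
-```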
-
-# Handling circular dependencies
-
-SynchronousGC will enter a deadlock in the presence of circular dependencies. The garbage collector can break the circle by lazily breaking circular dependencies: when `processItem()` processes an object, if it finds the object and all of its owners have the `GCFinalizer`, it removes the `GCFinalizer` from the object.
-
-Note that the approach is not rigorous and thus has false positives. For example, if a user first sends a SynchronousGC delete request for an object and then sends the delete request for its owner, `processItem()` will be fooled into believing there is a circle. We expect users not to do this. We can make the circle detection more rigorous if needed.
-
-Circular dependencies are regarded as user error. If needed, we can add more guarantees to handle such cases later.
-
-# Unhandled cases
-
-* If the GC observes the owning object with the `GCFinalizer` before it observes the creation of all the dependents, GC will remove the finalizer from the owning object before all dependents are gone. Hence, synchronous GC is best-effort, though we guarantee that the dependents will be deleted eventually. We face a similar case when handling OrphanFinalizer, see [GC known issues](https://github.com/kubernetes/kubernetes/issues/26120).
-
-# Implications to existing clients
-
-Finalizer breaks an assumption that many Kubernetes components have: a deletion request with `grace period=0` will immediately remove the object from the key-value store. This is not true if an object has pending finalizers: the object will continue to exist, and currently the API server will not return an error in this case.
-
-**Namespace controller** suffered from this [problem](https://github.com/kubernetes/kubernetes/issues/32519) and was fixed in [#32524](https://github.com/kubernetes/kubernetes/pull/32524) by retrying every 15s if there are objects with pending finalizers to be removed from the key-value store. An object with a pending `GCFinalizer` might take an arbitrarily long time to be deleted, so namespace deletion might time out.
-
-**kubelet** deletes the pod from the key-value store after all its containers are terminated ([code](../../pkg/kubelet/status/status_manager.go#L441-L443)). It also assumes that if the API server does not return an error, the pod is removed from the key-value store. Breaking the assumption will not break `kubelet` though, because the `pod` must have already been in the terminated phase, so `kubelet` will not care to manage it.
-
-**Node controller** forcefully deletes pod if the pod is scheduled to a node that does not exist ([code](../../pkg/controller/node/nodecontroller.go#L474)). The pod will continue to exist if it has pending finalizers. The node controller will futilely retry the deletion. Also, the `node controller` forcefully deletes pods before deleting the node ([code](../../pkg/controller/node/nodecontroller.go#L592)). If the pods have pending finalizers, the `node controller` will go ahead deleting the node, leaving those pods behind. These pods will be deleted from the key-value store when the pending finalizers are removed.
-
-**Podgc** deletes terminated pods if there are too many of them in the cluster. We need to make sure finalizers on Pods are taken off quickly enough so that the progress of `Podgc` is not affected.
-
-**Deployment controller** adopts existing `ReplicaSet` (RS) if its template matches. If a matching RS has a pending `GCFinalizer`, deployment should adopt it, take its pods into account, but shouldn't try to mutate it, because the RS controller will ignore a RS that's being deleted. Hence, `deployment controller` should wait for the RS to be deleted, and then create a new one.
-
-**Replication controller manager**, **Job controller**, and **ReplicaSet controller** ignore pods in terminated phase, so pods with pending finalizers will not block these controllers.
-
-**StatefulSet controller** will be blocked by a pod with pending finalizers, so synchronous GC might slow down its progress.
-
-**kubectl**: synchronous GC can simplify the **kubectl delete** reapers. Let's take the `deployment reaper` as an example, since it's the most complicated one. Currently, the reaper finds all `RS` with matching labels, scales them down, polls until `RS.Status.Replica` reaches 0, deletes the `RS`es, and finally deletes the `deployment`. If using synchronous GC, `kubectl delete deployment` is as easy as sending a synchronous GC delete request for the deployment, and polls until the deployment is deleted from the key-value store.
-
-Note that this **changes the behavior** of `kubectl delete`. The command will be blocked until all pods are deleted from the key-value store, instead of being blocked until pods are in the terminating state. This means `kubectl delete` blocks for a longer time, but it has the benefit that the resources used by the pods are released when `kubectl delete` returns. To allow kubectl users to skip waiting for the cleanup, we will add a `--wait` flag. It defaults to true; if it's set to `false`, `kubectl delete` will send the delete request with `PropagationPolicy=DeletePropagationBackground` and return immediately.
-
-To make the new kubectl compatible with 1.4 and earlier masters, kubectl needs to switch to the old reaper logic if it finds that synchronous GC is not supported by the master.
-
-1.4 `kubectl delete rc/rs` uses `DeleteOptions.OrphanDependents=true`, which is going to be converted to `DeletePropagationBackground` (see [API Design](#api-changes)) by a 1.5 master, so its behavior keeps the same.
-
-Pre 1.4 `kubectl delete` uses `DeleteOptions.OrphanDependents=nil`, so does the 1.4 `kubectl delete` for resources other than rc and rs. The option is going to be converted to `DeletePropagationDefault` (see [API Design](#api-changes)) by a 1.5 master, so these commands behave the same as when working with a 1.4 master.
-
-
-
diff --git a/contributors/design-proposals/volume_stats_pvc_ref.md b/contributors/design-proposals/volume_stats_pvc_ref.md
deleted file mode 100644
index 1b2f599b9..000000000
--- a/contributors/design-proposals/volume_stats_pvc_ref.md
+++ /dev/null
@@ -1,57 +0,0 @@
-# Add PVC reference in Volume Stats
-
-## Background
-Pod volume stats tracked by kubelet do not currently include any information about the PVC (if the pod volume was referenced via a PVC).
-
-This prevents exposing (and querying) volume metrics labeled by PVC name which is preferable for users, given that PVC is a top-level API object.
-
-## Proposal
-
-Modify ```VolumeStats``` tracked in Kubelet and populate it with PVC info:
-
-```
-// VolumeStats contains data about Volume filesystem usage.
-type VolumeStats struct {
- // Embedded FsStats
- FsStats
- // Name is the name given to the Volume
- // +optional
- Name string `json:"name,omitempty"`
-+ // PVCRef is a reference to the measured PVC.
-+ // +optional
-+ PVCRef PVCReference `json:"pvcRef"`
-}
-
-+// PVCReference contains enough information to describe the referenced PVC.
-+type PVCReference struct {
-+ Name string `json:"name"`
-+ Namespace string `json:"namespace"`
-+}
-```
-
-## Implementation
-Two options are described below. Option 1 supports the current requirements/requested use cases. Option 2 supports an additional use case that was being discussed and is called out for completeness/discussion/feedback.
-
-### Option 1
-- Modify ```kubelet::server::stats::calcAndStoreStats()```
- - If the pod volume is referenced via a PVC, populate ```PVCRef``` in VolumeStats using the Pod spec
-
- - The Pod spec is already available in this method, so the changes are contained to this function.
-
-- The limitation of this approach is that we can report only what is available in the pod spec (Pod namespace and PVC claim name).
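-
-A sketch of that lookup (using today's `core/v1` types and reusing the `PVCReference` type above; the surrounding stats plumbing is omitted):
-
-```go
-package stats
-
-import corev1 "k8s.io/api/core/v1"
-
-// pvcRefForVolume returns the PVC reference for a pod volume, if the volume is
-// backed by a PersistentVolumeClaim; otherwise it returns nil.
-func pvcRefForVolume(pod *corev1.Pod, volumeName string) *PVCReference {
-    for _, v := range pod.Spec.Volumes {
-        if v.Name == volumeName && v.PersistentVolumeClaim != nil {
-            return &PVCReference{
-                Name:      v.PersistentVolumeClaim.ClaimName,
-                Namespace: pod.Namespace,
-            }
-        }
-    }
-    return nil
-}
-```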
-
-### Option 2
-- Modify the ```volumemanager::GetMountedVolumesForPod()``` (or add a new function) to return additional volume information from the actual/desired state-of-world caches
- - Use this to populate PVCRef in VolumeStats
-
-- This allows us to get information not available in the Pod spec, such as the PV name/UID, which can be used to label metrics and enables exposing/querying volume metrics by PV name.
-- It's unclear whether this is a use case we need to/should support:
- * Volume metrics are only refreshed for mounted volumes which implies a bound/available PVC
- * We expect most user-storage interactions to be via the PVC
-- Admins monitoring PVs (rather than PVCs) to know when their users are running out of space or are over-provisioning would be a use case supporting adding PV information to
-  metrics.
-
-
-
-
-