fix some markdown formats

Update federated-ingress.md

Signed-off-by: Qinglan Peng <qinglanpeng@zju.edu.cn>

commit bd88127eb6 (parent 7c3e8af980)

@@ -181,7 +181,7 @@ Another option is to expose disk usage of all images together as a first-class f

 ##### Overlayfs and Aufs

-####### `du`
+###### `du`

 We can list all the image layer specific directories, excluding container directories, and run `du` on each of those directories.

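For context, a minimal Go sketch of the `du` approach described in this hunk — walk each image-layer directory and sum file sizes. The aufs layer path is a hypothetical placeholder, not taken from the proposal:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// diskUsage sums the sizes of regular files under dir, roughly what `du` reports.
func diskUsage(dir string) (int64, error) {
	var total int64
	err := filepath.Walk(dir, func(_ string, info os.FileInfo, err error) error {
		if err != nil {
			return err
		}
		if info.Mode().IsRegular() {
			total += info.Size()
		}
		return nil
	})
	return total, err
}

func main() {
	// Hypothetical layer directory; a real implementation would enumerate
	// /var/lib/docker/aufs/diff and skip container-specific directories.
	layer := "/var/lib/docker/aufs/diff/some-layer-id"
	if n, err := diskUsage(layer); err == nil {
		fmt.Printf("%s: %d bytes\n", layer, n)
	}
}
```
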
@@ -200,7 +200,7 @@ We can list all the image layer specific directories, excluding container direct

 * Can block container deletion by keeping file descriptors open.

-####### Linux gid based Disk Quota
+###### Linux gid based Disk Quota

 [Disk quota](https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Storage_Administration_Guide/ch-disk-quotas.html) feature provided by the linux kernel can be used to track the usage of image layers. Ideally, we need `project` support for disk quota, which lets us track usage of directory hierarchies using `project ids`. Unfortunately, that feature is only available for zfs filesystems. Since most of our distributions use `ext4` by default, we will have to use either `uid` or `gid` based quota tracking.

@@ -417,15 +417,12 @@ Tested on Debian jessie

 8. Check usage using quota and group ‘x’

 ```shell
 $ quota -g x -v
-
 Disk quotas for group x (gid 9000):
-
-Filesystem **blocks** quota limit grace files quota limit grace
-
-/dev/sda1 **10248** 0 0 3 0 0
+Filesystem blocks quota limit grace files quota limit grace
+/dev/sda1 10248 0 0 3 0 0
 ```

 Using the same workflow, we can add new sticky group IDs to emptyDir volumes and account for their usage against pods.

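A sketch of how the sticky group ID workflow above might be applied to an emptyDir directory (hypothetical helper, not from the proposal; the path and gid 9000 mirror the example):

```go
package main

import "os"

// applyStickyGroup assigns gid to dir and sets the setgid bit so that
// files created inside inherit the group and count against its quota.
func applyStickyGroup(dir string, gid int) error {
	if err := os.Chown(dir, -1, gid); err != nil { // -1 leaves the owner uid unchanged
		return err
	}
	info, err := os.Stat(dir)
	if err != nil {
		return err
	}
	return os.Chmod(dir, info.Mode()|os.ModeSetgid)
}

func main() {
	// Hypothetical emptyDir path; gid 9000 is the group from the quota example.
	if err := applyStickyGroup("/var/lib/kubelet/pods/pod-uid/volumes/emptydir", 9000); err != nil {
		panic(err)
	}
}
```
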
@@ -484,29 +481,24 @@ Overlayfs works similar to Aufs. The path to the writable directory for containe

 * Check quota before and after running the container.

 ```shell
 $ quota -g x -v
-
 Disk quotas for group x (gid 9000):
-
 Filesystem blocks quota limit grace files quota limit grace
-
 /dev/sda1 48 0 0 19 0 0
 ```

 * Start the docker container

 * `docker start b8`

-* ```shell
-quota -g x -v
-Notice the **blocks** has changed
-
-Disk quotas for group x (gid 9000):
-
-Filesystem **blocks** quota limit grace files quota limit grace
-
-/dev/sda1 **10288** 0 0 20 0 0
-
-```
+```sh
+$ quota -g x -v
+Disk quotas for group x (gid 9000):
+Filesystem blocks quota limit grace files quota limit grace
+/dev/sda1 10288 0 0 20 0 0
+```

 ##### Device mapper

@@ -518,60 +510,41 @@ These devices can be loopback or real storage devices.

 The base device has a maximum storage capacity. This means that the sum total of storage space occupied by images and containers cannot exceed this capacity.

-By default, all images and containers are created from an initial filesystem with a 10GB limit. 
+By default, all images and containers are created from an initial filesystem with a 10GB limit.

 A separate filesystem is created for each container as part of start (not create).

 It is possible to [resize](https://jpetazzo.github.io/2014/01/29/docker-device-mapper-resize/) the container filesystem.

-For the purposes of image space tracking, we can 
+For the purposes of image space tracking, we can

-####Testing notes:
+#### Testing notes:

-* ```shell
+Notice the **Pool Name**
+
+```shell
 $ docker info
-
 ...
-
 Storage Driver: devicemapper
-
-Pool Name: **docker-8:1-268480-pool**
+Pool Name: docker-8:1-268480-pool
-
 Pool Blocksize: 65.54 kB
-
 Backing Filesystem: extfs
-
 Data file: /dev/loop0
-
 Metadata file: /dev/loop1
-
 Data Space Used: 2.059 GB
-
 Data Space Total: 107.4 GB
-
 Data Space Available: 48.45 GB
-
 Metadata Space Used: 1.806 MB
-
 Metadata Space Total: 2.147 GB
-
 Metadata Space Available: 2.146 GB
-
 Udev Sync Supported: true
-
 Deferred Removal Enabled: false
-
 Data loop file: /var/lib/docker/devicemapper/devicemapper/data
-
 Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
-
 Library Version: 1.02.99 (2015-06-20)
 ```

 ```shell
 $ dmsetup table docker-8\:1-268480-pool
-
-0 209715200 thin-pool 7:1 7:0 **128** 32768 1 skip_block_zeroing
+0 209715200 thin-pool 7:1 7:0 128 32768 1 skip_block_zeroing
 ```

 128 is the data block size

@@ -579,9 +552,8 @@ $ dmsetup table docker-8\:1-268480-pool

 Usage from kernel for the primary block device

 ```shell
 $ dmsetup status docker-8\:1-268480-pool
-
-0 209715200 thin-pool 37 441/524288 **31424/1638400** - rw discard_passdown queue_if_no_space -
+0 209715200 thin-pool 37 441/524288 31424/1638400 - rw discard_passdown queue_if_no_space -
 ```

 Usage/Available - 31424/1638400

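As a cross-check of the numbers above (an illustration, not part of this commit): multiplying the used/total data blocks from `dmsetup status` by the 128-sector block size from `dmsetup table` reproduces the `docker info` figures. A small Go sketch:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

func main() {
	// Status line quoted above; field 6 is used/total data blocks.
	status := "0 209715200 thin-pool 37 441/524288 31424/1638400 - rw discard_passdown queue_if_no_space -"
	parts := strings.Split(strings.Fields(status)[5], "/")
	used, _ := strconv.ParseInt(parts[0], 10, 64)
	total, _ := strconv.ParseInt(parts[1], 10, 64)

	// Data block size from `dmsetup table`: 128 sectors * 512 bytes = 64 KiB.
	const blockBytes = 128 * 512
	fmt.Printf("data space used:  %.3f GB\n", float64(used*blockBytes)/1e9)  // ~2.059 GB, matches docker info
	fmt.Printf("data space total: %.1f GB\n", float64(total*blockBytes)/1e9) // 107.4 GB
}
```
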
@@ -141,7 +141,7 @@ Volumes are pod scoped, so a selector must be specified with a container name.

 Full json path selectors will use existing `type ObjectFieldSelector`
 to extend the current implementation for resources requests and limits.

-```
+```go
 // ObjectFieldSelector selects an APIVersioned field of an object.
 type ObjectFieldSelector struct {
     APIVersion string `json:"apiVersion"`

@@ -154,7 +154,7 @@ type ObjectFieldSelector struct {

 These examples show how to use full selectors with environment variables and volume plugin.

-```
+```yaml
 apiVersion: v1
 kind: Pod
 metadata:

@@ -178,7 +178,7 @@ spec:
           fieldPath: spec.containers[?(@.name=="test-container")].resources.limits.cpu
 ```

-```
+```yaml
 apiVersion: v1
 kind: Pod
 metadata:

@@ -221,7 +221,7 @@ relative to the container spec. These will be implemented by introducing a
 `ContainerSpecFieldSelector` (json: `containerSpecFieldRef`) to extend the current
 implementation for `type DownwardAPIVolumeFile struct` and `type EnvVarSource struct`.

-```
+```go
 // ContainerSpecFieldSelector selects an APIVersioned field of an object.
 type ContainerSpecFieldSelector struct {
     APIVersion string `json:"apiVersion"`

@@ -300,7 +300,7 @@ Volumes are pod scoped, the container name must be specified as part of

 These examples show how to use partial selectors with environment variables and volume plugin.

-```
+```yaml
 apiVersion: v1
 kind: Pod
 metadata:

@@ -337,7 +337,7 @@ spec:
     resources:
       requests:
         memory: "64Mi"
-        cpu: "250m" 
+        cpu: "250m"
       limits:
         memory: "128Mi"
         cpu: "500m"

@@ -388,7 +388,7 @@ For example, if requests.cpu is `250m` (250 millicores) and the divisor by defau
 exposed value will be `1` core. It is because 250 millicores when converted to cores will be 0.25 and
 the ceiling of 0.25 is 1.

-```
+```go
 type ResourceFieldSelector struct {
     // Container name
     ContainerName string `json:"containerName,omitempty"`

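The ceiling rule described in this hunk, as a one-line Go sketch (quantities shown in milli-units for simplicity, not the real resource.Quantity API):

```go
// exposedValue implements exposed = ceil(quantity / divisor) with integer
// arithmetic in milli-units: exposedValue(250, 1000) == 1, matching the
// 250m-with-divisor-1 example above.
func exposedValue(quantityMilli, divisorMilli int64) int64 {
	return (quantityMilli + divisorMilli - 1) / divisorMilli
}
```
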
@@ -462,7 +462,7 @@ Volumes are pod scoped, the container name must be specified as part of

 These examples show how to use magic keys approach with environment variables and volume plugin.

-```
+```yaml
 apiVersion: v1
 kind: Pod
 metadata:

@@ -493,7 +493,7 @@ spec:

 In the above example, the exposed values of CPU_LIMIT and MEMORY_LIMIT will be 1 (in cores) and 128 (in Mi), respectively.

-```
+```yaml
 apiVersion: v1
 kind: Pod
 metadata:

@@ -578,7 +578,7 @@ in a shell script, and then export `JAVA_OPTS` (assuming your container image su
 and GOMAXPROCS environment variables inside the container image. The spec file for the
 application pod could look like:

-```
+```yaml
 apiVersion: v1
 kind: Pod
 metadata:

@@ -609,7 +609,7 @@ spec:
 Note that the value of divisor by default is `1`. Now inside the container,
 the HEAP_SIZE (in bytes) and GOMAXPROCS (in cores) could be exported as:

-```
+```sh
 export JAVA_OPTS="$JAVA_OPTS -Xmx:$(HEAP_SIZE)"

 and

@@ -141,7 +141,7 @@ accepts creates. The caller POSTs a SubjectAccessReview to this URL and he gets
 a SubjectAccessReviewResponse back. Here is an example of a call and its
 corresponding return:

-```
+```json
 // input
 {
   "kind": "SubjectAccessReview",

@@ -172,7 +172,7 @@ only accepts creates. The caller POSTs a PersonalSubjectAccessReview to this URL
 and he gets a SubjectAccessReviewResponse back. Here is an example of a call and
 its corresponding return:

-```
+```json
 // input
 {
   "kind": "PersonalSubjectAccessReview",

@@ -202,7 +202,7 @@ accepts creates. The caller POSTs a LocalSubjectAccessReview to this URL and he
 gets a LocalSubjectAccessReviewResponse back. Here is an example of a call and
 its corresponding return:

-```
+```json
 // input
 {
   "kind": "LocalSubjectAccessReview",

@@ -353,7 +353,7 @@ accepts creates. The caller POSTs a ResourceAccessReview to this URL and he gets
 a ResourceAccessReviewResponse back. Here is an example of a call and its
 corresponding return:

-```
+```json
 // input
 {
   "kind": "ResourceAccessReview",

@@ -275,7 +275,7 @@ func (r *objectRecorderImpl) Event(reason, message string) {
 }

 func ObjectEventRecorderFor(object runtime.Object, recorder EventRecorder) ObjectEventRecorder {
-  return &objectRecorderImpl{object, recorder}
+	return &objectRecorderImpl{object, recorder}
 }
 ```

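Hypothetical usage fragment for the helper in this hunk (assumes a `pod` object and an `eventRecorder` from the surrounding proposal; names are illustrative):

```go
// Bind a recorder to a single object so call sites only supply reason/message.
recorder := ObjectEventRecorderFor(pod, eventRecorder)
recorder.Event("FailedScheduling", "no nodes available to schedule pods")
```
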
@@ -367,7 +367,7 @@ No other variables are defined.
 | `"--$($($($($--"` | `"--$($($($($--"` |
 | `"$($($($($--foo$("` | `"$($($($($--foo$("` |
 | `"foo0--$($($($("` | `"foo0--$($($($("` |
-| `"$(foo$$var)` | `$(foo$$var)` |
+| `"$(foo$$var)"` | `"$(foo$$var)"` |

 #### In a pod: building a URL

@@ -194,27 +194,27 @@ The cases we should test are:

 1. Core Functionality Tests

-1.1 Source IP Preservation
+   1.1 Source IP Preservation

-Test the main intent of this change, source ip preservation - use the all-in-one network tests container
-with new functionality that responds with the client IP. Verify the container is seeing the external IP
-of the test client.
+   Test the main intent of this change, source ip preservation - use the all-in-one network tests container
+   with new functionality that responds with the client IP. Verify the container is seeing the external IP
+   of the test client.

-1.2 Health Check responses
+   1.2 Health Check responses

-Testcases use pods explicitly pinned to nodes and delete/add to nodes randomly. Validate that healthchecks succeed
-and fail on the expected nodes as endpoints move around. Gather LB response times (time from pod declares ready to
-time for Cloud LB to declare node healthy and vice versa) to endpoint changes.
+   Testcases use pods explicitly pinned to nodes and delete/add to nodes randomly. Validate that healthchecks succeed
+   and fail on the expected nodes as endpoints move around. Gather LB response times (time from pod declares ready to
+   time for Cloud LB to declare node healthy and vice versa) to endpoint changes.

 2. Inter-Operability Tests

-Validate that internal cluster communications are still possible from nodes without local endpoints. This change
-is only for externally sourced traffic.
+   Validate that internal cluster communications are still possible from nodes without local endpoints. This change
+   is only for externally sourced traffic.

 3. Backward Compatibility Tests

-Validate that old and new functionality can simultaneously exist in a single cluster. Create services with and without
-the annotation, and validate datapath correctness.
+   Validate that old and new functionality can simultaneously exist in a single cluster. Create services with and without
+   the annotation, and validate datapath correctness.

 # Beta Design

@@ -96,7 +96,7 @@ be passed as annotations.
 The preferences are expressed by the following structure, passed as a
 serialized json inside annotations.

-```
+```go
 type FederatedReplicaSetPreferences struct {
     // If set to true then already scheduled and running replicas may be moved to other clusters to
     // in order to bring cluster replicasets towards a desired state. Otherwise, if set to false,

@@ -126,7 +126,7 @@ How this works in practice:

 **Scenario 1**. I want to spread my 50 replicas evenly across all available clusters. Config:

-```
+```go
 FederatedReplicaSetPreferences {
     Rebalance : true
     Clusters : map[string]LocalReplicaSet {

@@ -146,7 +146,7 @@ Example:

 **Scenario 2**. I want to have only 2 replicas in each of the clusters.

-```
+```go
 FederatedReplicaSetPreferences {
     Rebalance : true
     Clusters : map[string]LocalReplicaSet {

@@ -157,7 +157,7 @@ FederatedReplicaSetPreferences {

 Or

-```
+```go
 FederatedReplicaSetPreferences {
     Rebalance : true
     Clusters : map[string]LocalReplicaSet {

@@ -169,7 +169,7 @@ FederatedReplicaSetPreferences {

 Or

-```
+```go
 FederatedReplicaSetPreferences {
     Rebalance : true
     Clusters : map[string]LocalReplicaSet {

@@ -182,7 +182,7 @@ There is a global target for 50, however if there are 3 clusters there will be o

 **Scenario 3**. I want to have 20 replicas in each of 3 clusters.

-```
+```go
 FederatedReplicaSetPreferences {
     Rebalance : true
     Clusters : map[string]LocalReplicaSet {

@@ -196,7 +196,7 @@ There is a global target for 50, however clusters require 60. So some clusters w

 **Scenario 4**. I want to have equal number of replicas in clusters A,B,C, however don’t put more than 20 replicas to cluster C.

-```
+```go
 FederatedReplicaSetPreferences {
     Rebalance : true
     Clusters : map[string]LocalReplicaSet {

@@ -217,7 +217,7 @@ Example:

 **Scenario 5**. I want to run my application in cluster A, however if there are troubles FRS can also use clusters B and C, equally.

-```
+```go
 FederatedReplicaSetPreferences {
     Clusters : map[string]LocalReplicaSet {
         “A” : LocalReplicaSet{ Weight: 1000000}

@@ -236,7 +236,7 @@ Example:

 **Scenario 6**. I want to run my application in clusters A, B and C. Cluster A gets twice the QPS than other clusters.

-```
+```go
 FederatedReplicaSetPreferences {
     Clusters : map[string]LocalReplicaSet {
         “A” : LocalReplicaSet{ Weight: 2}

@@ -249,7 +249,7 @@ FederatedReplicaSetPreferences {
 **Scenario 7**. I want to spread my 50 replicas evenly across all available clusters, but if there
 are already some replicas, please do not move them. Config:

-```
+```go
 FederatedReplicaSetPreferences {
     Rebalance : false
     Clusters : map[string]LocalReplicaSet {

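To make the weighting in these scenarios concrete, here is a simplified Go sketch of proportional distribution. It is illustrative only — it ignores Rebalance, MaxReplicas caps, and cluster capacity, which the real controller must handle:

```go
package main

import (
	"fmt"
	"sort"
)

// distribute splits total replicas across clusters proportionally to weight,
// taking floors first and handing out the rounding leftovers one at a time.
func distribute(total int64, weights map[string]int64) map[string]int64 {
	names := make([]string, 0, len(weights))
	var sum int64
	for name, w := range weights {
		names = append(names, name)
		sum += w
	}
	sort.Strings(names) // deterministic order for the leftovers

	out := make(map[string]int64, len(weights))
	var assigned int64
	for _, name := range names {
		out[name] = total * weights[name] / sum // floor of the proportional share
		assigned += out[name]
	}
	for i := 0; assigned < total; i++ { // at most len(names)-1 leftovers remain
		out[names[i%len(names)]]++
		assigned++
	}
	return out
}

func main() {
	// Scenario 1: spread 50 replicas evenly across clusters A, B, C.
	fmt.Println(distribute(50, map[string]int64{"A": 1, "B": 1, "C": 1})) // map[A:17 B:17 C:16]
	// Scenario 6: cluster A weighted twice as heavily as B and C.
	fmt.Println(distribute(50, map[string]int64{"A": 2, "B": 1, "C": 1})) // map[A:26 B:12 C:12]
}
```
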
@@ -473,11 +473,13 @@ underlying clusters, to make up the total of 6 replicas required. To handle
 entire cluster failures, various approaches are possible, including:
 1. **simple overprovisioning**, such that sufficient replicas remain even if a
    cluster fails. This wastes some resources, but is simple and reliable.
+
 2. **pod autoscaling**, where the replication controller in each
    cluster automatically and autonomously increases the number of
    replicas in its cluster in response to the additional traffic
    diverted from the failed cluster. This saves resources and is relatively
    simple, but there is some delay in the autoscaling.
+
 3. **federated replica migration**, where the Cluster Federation
    control system detects the cluster failure and automatically
    increases the replica count in the remaining clusters to make up

@@ -306,7 +306,7 @@ cases it may be complex. For example:

 Below is a sample of the YAML to create such a replication controller.

-```
+```yaml
 apiVersion: v1
 kind: ReplicationController
 metadata:

@@ -33,9 +33,9 @@ Non-goals include:

 ## API Changes

-```
+```go
 type ObjectMeta struct {
-...
+    ...
     OwnerReferences []OwnerReference
 }
 ```

@@ -43,7 +43,7 @@ type ObjectMeta struct {
 **ObjectMeta.OwnerReferences**:
 List of objects depended by this object. If ***all*** objects in the list have been deleted, this object will be garbage collected. For example, a replica set `R` created by a deployment `D` should have an entry in ObjectMeta.OwnerReferences pointing to `D`, set by the deployment controller when `R` is created. This field can be updated by any client that has the privilege to both update ***and*** delete the object. For safety reasons, we can add validation rules to restrict what resources could be set as owners. For example, Events will likely be banned from being owners.

-```
+```go
 type OwnerReference struct {
     // Version of the referent.
     APIVersion string

@@ -96,7 +96,7 @@ Users may want to delete an owning object (e.g., a replicaset) while orphaning t

 ## API changes

-```
+```go
 type ObjectMeta struct {
     …
     Finalizers []string

@@ -133,7 +133,7 @@ type ObjectMeta struct {

 ## API changes

-```
+```go
 type DeleteOptions struct {
     …
     OrphanDependents bool

@@ -243,7 +243,7 @@ This section presents an example of all components working together to enforce t

 ## API Changes

-```
+```go
 type DeleteOptions struct {
     …
     OrphanChildren bool

@@ -252,16 +252,16 @@ type DeleteOptions struct {

 **DeleteOptions.OrphanChildren**: allows a user to express whether the child objects should be orphaned.

-```
+```go
 type ObjectMeta struct {
-...
+    ...
     ParentReferences []ObjectReference
 }
 ```

 **ObjectMeta.ParentReferences**: links the resource to the parent resources. For example, a replica set `R` created by a deployment `D` should have an entry in ObjectMeta.ParentReferences pointing to `D`. The link should be set when the child object is created. It can be updated after the creation.

-```
+```go
 type Tombstone struct {
     unversioned.TypeMeta
     ObjectMeta

@@ -203,7 +203,7 @@ linked directly into kubelet. A partial list of tradeoffs:
 | Reliability | Need to handle the binary disappearing at any time | Fewer headeaches |
 | (Un)Marshalling | Need to talk over JSON | None |
 | Administration cost | One more daemon to install, configure and monitor | No extra work required, other than perhaps configuring flags |
-| Releases | Potentially on its own schedule | Tied to Kubernetes' |
+| Releases | Potentially on its own schedule | Tied to Kubernetes |

 ## Implementation plan

@@ -1,7 +1,7 @@
 # High Availability of Scheduling and Controller Components in Kubernetes

 This document is deprecated. For more details about running a highly available
-cluster master, please see the [admin instructions document](../../docs/admin/high-availability.md).
+cluster master, please see the [admin instructions document](https://github.com/kubernetes/kubernetes/blob/master/docs/admin/high-availability.md).

 <!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
 []()

@@ -121,7 +121,7 @@ tracking the following resources:

 ## Data Model Impact

-```
+```go
 // The following identify resource constants for Kubernetes object types
 const (
     // CPU request, in cores. (500m = .5 cores)

@@ -241,7 +241,7 @@ The cluster-admin wants to restrict the following:

 This would require the following quotas to be added to the namespace:

-```
+```sh
 $ cat quota-best-effort
 apiVersion: v1
 kind: ResourceQuota

@@ -279,7 +279,7 @@ spec:
     cpu.limit: 4
   scopes:
   - NotTerminating
-  - NotBestEffort 
+  - NotBestEffort

 $ cat quota
 apiVersion: v1

@@ -31,7 +31,7 @@ which loads a `config.ConfigurationMap`:
 - kube-dns

 (Note kubelet is omitted, it's dynamic config story is being addressed
-by #29459). Alpha features that are not accessed via an alpha API
+by [#29459](https://issues.k8s.io/29459)). Alpha features that are not accessed via an alpha API
 group should define an `enableFeatureName` flag and use it to toggle
 activation of the feature in each system component that the feature
 uses.

@@ -60,7 +60,7 @@ not be altered in a running cluster.
 ## Future work

 1. The eventual plan is for component config to be managed by versioned
-APIs and not flags (#12245). When that is added, toggling of features
+APIs and not flags ([#12245](https://issues.k8s.io/12245)). When that is added, toggling of features
 could be handled by versioned component config and the component flags
 deprecated.

@@ -94,7 +94,7 @@ and are not affected by this setting.

 In other words, the fields will look like this:

-```
+```go
 type SecretVolumeSource struct {
     // Name of the secret in the pod's namespace to use.
     SecretName string `json:"secretName,omitempty"`

@@ -15,7 +15,7 @@ RDS resource `something.rds.aws.amazon.com`. No proxying is involved.
 # Motivation

 There were many related issues, but we'll try to summarize them here. More info
-is on GitHub issues/PRs: #13748, #11838, #13358, #23921
+is on GitHub issues/PRs: [#13748](https://issues.k8s.io/13748), [#11838](https://issues.k8s.io/11838), [#13358](https://issues.k8s.io/13358), [#23921](https://issues.k8s.io/23921)

 One motivation is to present as native cluster services, services that are
 hosted externally. Some cloud providers, like AWS, hand out hostnames (IPs are

@@ -60,7 +60,7 @@ with DNS TTL and more. One imperfect approach was to only resolve the hostname
 upon creation, but this was considered not a great idea. A better approach
 would be at a higher level, maybe a service type.

-There are more ideas described in #13748, but all raised further issues,
+There are more ideas described in [#13748](https://issues.k8s.io/13748), but all raised further issues,
 ranging from using another upstream DNS server to creating a Name object
 associated with DNSs.

@@ -81,7 +81,7 @@ https://github.com/kubernetes/kubernetes/issues/13748#issuecomment-230397975

 Currently a ServiceSpec looks like this, with comments edited for clarity:

-```
+```go
 type ServiceSpec struct {
     Ports []ServicePort

@@ -105,7 +105,7 @@ type ServiceSpec struct {

 The proposal is to change it to:

-```
+```go
 type ServiceSpec struct {
     Ports []ServicePort

@@ -135,7 +135,7 @@ type ServiceSpec struct {

 For example, it can be used like this:

-```
+```yaml
 apiVersion: v1
 kind: Service
 metadata:

@@ -108,7 +108,7 @@ if a previous pod has been fully terminated (reached its graceful termination li
 A StatefulSet has 0..N **members**, each with a unique **identity** which is a name that is unique within the
 set.

-```
+```go
 type StatefulSet struct {
     ObjectMeta

@@ -225,7 +225,7 @@ fashion via DNS by leveraging information written to the endpoints by the endpoi

 The end result might be DNS resolution as follows:

-```
+```sh
 # service mongo pointing to pods created by StatefulSet mdb, with identities mdb-1, mdb-2, mdb-3
 dig mongodb.namespace.svc.cluster.local +short A

@@ -244,9 +244,9 @@ dig mdb-3.mongodb.namespace.svc.cluster.local +short A
 This is currently implemented via an annotation on pods, which is surfaced to endpoints, and finally
 surfaced as DNS on the service that exposes those pods.

-```
-// The pods created by this StatefulSet will have the DNS names "mysql-0.NAMESPACE.svc.cluster.local"
-// and "mysql-1.NAMESPACE.svc.cluster.local"
+```yaml
+# The pods created by this StatefulSet will have the DNS names "mysql-0.NAMESPACE.svc.cluster.local"
+# and "mysql-1.NAMESPACE.svc.cluster.local"
 kind: StatefulSet
 metadata:
   name: mysql

@@ -157,7 +157,7 @@ Template definition.

 **Template Object**

-```
+```go
 // Template contains the inputs needed to produce a Config.
 type Template struct {
     unversioned.TypeMeta

@@ -179,7 +179,7 @@ type Template struct {

 **Parameter Object**

-```
+```go
 // Parameter defines a name/value variable that is to be processed during
 // the Template to Config transformation.
 type Parameter struct {

@@ -194,7 +194,7 @@ type Parameter struct {
     Description string

     // Optional: Value holds the Parameter data.
-    // The value replaces all occurrences of the Parameter $(Name) or 
+    // The value replaces all occurrences of the Parameter $(Name) or
     // $((Name)) expression during the Template to Config transformation.
     Value string

@@ -213,21 +213,21 @@ and `$((PARAM))`. When the single parens option is used, the result of the subs
 parens option is used, the result of the substitution will not be quoted. For example, given a parameter defined with a value
 of "BAR", the following behavior will be observed:

-```
+```go
 somefield: "$(FOO)" -> somefield: "BAR"
 somefield: "$((FOO))" -> somefield: BAR
 ```

-// for concatenation, the result value reflects the type of substitution (quoted or unquoted):
+for concatenation, the result value reflects the type of substitution (quoted or unquoted):

-```
+```go
 somefield: "prefix_$(FOO)_suffix" -> somefield: "prefix_BAR_suffix"
 somefield: "prefix_$((FOO))_suffix" -> somefield: prefix_BAR_suffix
 ```

-// if both types of substitution exist, quoting is performed:
+if both types of substitution exist, quoting is performed:

-```
+```go
 somefield: "prefix_$((FOO))_$(FOO)_suffix" -> somefield: "prefix_BAR_BAR_suffix"
 ```

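A Go sketch of the substitution rules this hunk documents (assumed behavior reconstructed from the examples above, not the actual template processor):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

var (
	double = regexp.MustCompile(`\$\(\(([A-Za-z0-9_]+)\)\)`) // $((NAME)) — unquoted substitution
	single = regexp.MustCompile(`\$\(([A-Za-z0-9_]+)\)`)     // $(NAME)   — quoted substitution
)

// substitute takes the raw field value including its surrounding quotes.
func substitute(quoted string, params map[string]string) string {
	// Was there any $(NAME) outside of a $((NAME))? If so, quoting is kept.
	hadSingle := single.MatchString(double.ReplaceAllString(quoted, ""))

	out := double.ReplaceAllStringFunc(quoted, func(m string) string {
		return params[double.FindStringSubmatch(m)[1]]
	})
	out = single.ReplaceAllStringFunc(out, func(m string) string {
		return params[single.FindStringSubmatch(m)[1]]
	})

	// Drop the surrounding quotes only when every substitution used $((...)).
	if !hadSingle && double.MatchString(quoted) && strings.HasPrefix(quoted, `"`) {
		out = strings.Trim(out, `"`)
	}
	return out
}

func main() {
	p := map[string]string{"FOO": "BAR"}
	fmt.Println(substitute(`"$(FOO)"`, p))                        // "BAR"
	fmt.Println(substitute(`"$((FOO))"`, p))                      // BAR
	fmt.Println(substitute(`"prefix_$((FOO))_suffix"`, p))        // prefix_BAR_suffix
	fmt.Println(substitute(`"prefix_$((FOO))_$(FOO)_suffix"`, p)) // "prefix_BAR_BAR_suffix"
}
```
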
@@ -243,7 +243,7 @@ Illustration of a template which defines a service and replication controller wi
 the name of the top level objects, the number of replicas, and several environment variables defined on the
 pod template.

-```
+```json
 {
   "kind": "Template",
   "apiVersion": "v1",

@@ -154,7 +154,7 @@ version changes, not new major nor minor versions).

 * Users can upgrade from any Kube 1.x release to any other Kube 1.x release as a
   rolling upgrade across their cluster. (Rolling upgrade means being able to
-  upgrade the master first, then one node at a time. See #4855 for details.)
+  upgrade the master first, then one node at a time. See [#4855](https://issues.k8s.io/4855) for details.)
 * However, we do not recommend upgrading more than two minor releases at a
   time (see [Supported releases](#supported-releases)), and do not recommend
   running non-latest patch releases of a given minor release.

@@ -58,7 +58,7 @@ type Builder interface {
 Each volume plugin will have to change to support the new `SetUp` signature. The existing
 ownership management code will be refactored into a library that volume plugins can use:

-```
+```go
 package volume

 func ManageOwnership(path string, fsGroup int64) error {

@@ -464,7 +464,7 @@ provisioner and to favor existing volumes before provisioning a new one.

 This example shows two storage classes, "aws-fast" and "aws-slow".

-```
+```yaml
 apiVersion: v1
 kind: StorageClass
 metadata: