Merge branch 'master' of https://github.com/kubernetes/kubernetes.github.io into release-1.6
* 'master' of https://github.com/kubernetes/kubernetes.github.io:
  Fix travis.yml re: issue #2034
  fix docker image link (#2532) re: issue #1671; update to direct link
  Update cheatsheet for multi-container handling
  add "--show-all" to kubectl get pods
  Fix broken/outdated links in the ingress.md file
  column READY is missed
  fix Kubenetes typo
  Update overview.md
  Update pod.md
  Update multiple-schedulers.md
  Fix the standard storageClass for GCE
  fix typo
  fix typo
  Update index.md
  doc(kubeadm.md) - change base64 decode option to '--decode'
  Update links to ingress repository
  Updated dead links
  Fix unmatched closing paren
commit 9310444630
````diff
@@ -14,6 +14,7 @@ install:
+  - rm $GOPATH/src/k8s.io/kubernetes/vendor/k8s.io/apiserver
   - rm $GOPATH/src/k8s.io/kubernetes/vendor/k8s.io/client-go
   - rm $GOPATH/src/k8s.io/kubernetes/vendor/k8s.io/sample-apiserver
   - rm $GOPATH/src/k8s.io/kubernetes/vendor/k8s.io/kube-aggregator
   - cp -r $GOPATH/src/k8s.io/kubernetes/vendor/* $GOPATH/src/
   - rm -rf $GOPATH/src/k8s.io/kubernetes/vendor/*
   - cp -r $GOPATH/src/k8s.io/kubernetes/staging/src/* $GOPATH/src/
````
````diff
@@ -95,7 +95,7 @@ Now that our second scheduler is running, let's create some pods, and direct the
 scheduler in that pod spec. Let's look at three examples.

-1. Pod spec without any scheduler name
+- Pod spec without any scheduler name

 {% include code.html language="yaml" file="multiple-schedulers/pod1.yaml" ghlink="/docs/admin/multiple-schedulers/pod1.yaml" %}
````
````diff
@@ -108,7 +108,7 @@ scheduler in that pod spec. Let's look at three examples.
 kubectl create -f pod1.yaml
 ```

-2. Pod spec with `default-scheduler`
+- Pod spec with `default-scheduler`

 {% include code.html language="yaml" file="multiple-schedulers/pod2.yaml" ghlink="/docs/admin/multiple-schedulers/pod2.yaml" %}
````
````diff
@@ -121,7 +121,7 @@ scheduler in that pod spec. Let's look at three examples.
 kubectl create -f pod2.yaml
 ```

-3. Pod spec with `my-scheduler`
+- Pod spec with `my-scheduler`

 {% include code.html language="yaml" file="multiple-schedulers/pod3.yaml" ghlink="/docs/admin/multiple-schedulers/pod3.yaml" %}
````
````diff
@@ -9,7 +9,7 @@ This page explains how Kubernetes objects are represented in the Kubernetes API,
 {% capture body %}
 ## Understanding Kubernetes Objects

-*Kubernetes Objects* are persistent entities in the Kubernetes system. Kubenetes uses these entities to represent the state of your cluster. Specifically, they can describe:
+*Kubernetes Objects* are persistent entities in the Kubernetes system. Kubernetes uses these entities to represent the state of your cluster. Specifically, they can describe:

 * What containerized applications are running (and on which nodes)
 * The resources available to those applications
````
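Since objects are the persisted record of cluster state, you can always read that record back; a quick illustration (the object name is hypothetical):

```shell
# Show the full stored state of an object, spec and status included.
kubectl get deployment nginx-deployment -o yaml
```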
````diff
@@ -27,7 +27,7 @@ The [Kubernetes Blog](http://blog.kubernetes.io) has some additional information
 * [The Distributed System Toolkit: Patterns for Composite Containers](http://blog.kubernetes.io/2015/06/the-distributed-system-toolkit-patterns.html)
 * [Container Design Patterns](http://blog.kubernetes.io/2016/06/container-design-patterns.html)

-Each Pod is meant to run a single instance of a given application. If you want to scale your application horizontally (e.g., run muliple instances), you should use multiple Pods, one for each instance. In Kubernetes, this is generally referred to as _replication_. Replicated Pods are usually created and managed as a group by an abstraction called a Controller. See [Pods and Controllers](#pods-and-controllers) for more information.
+Each Pod is meant to run a single instance of a given application. If you want to scale your application horizontally (e.g., run multiple instances), you should use multiple Pods, one for each instance. In Kubernetes, this is generally referred to as _replication_. Replicated Pods are usually created and managed as a group by an abstraction called a Controller. See [Pods and Controllers](#pods-and-controllers) for more information.

 ### How Pods Manage Multiple Containers
````
````diff
@@ -57,7 +57,7 @@ Pods do not, by themselves, self-heal. If a Pod is scheduled to a Node that fail

 ### Pods and Controllers

-A Controller can create and manage multiple Pods for you, handling replication and rollout and providing self-healing capabilities at cluster scope. For example, if a Node fails, the Controller might automatically replace the Pod by scheduling an identical replacement on a different Node).
+A Controller can create and manage multiple Pods for you, handling replication and rollout and providing self-healing capabilities at cluster scope. For example, if a Node fails, the Controller might automatically replace the Pod by scheduling an identical replacement on a different Node.

 Some examples of Controllers that contain one or more pods include:
````
````diff
@@ -233,7 +233,7 @@ after their announced deprecation for no less than:**
 * **Beta: 3 months or 1 release (whichever is longer)**
 * **Alpha: 0 releases**

-**Rule #6: Deprecated CLI elements must emit warnings (optionally disableable)
+**Rule #6: Deprecated CLI elements must emit warnings (optionally disable)
 when used.**

 ## Deprecating a feature or behavior
````
````diff
@@ -352,7 +352,7 @@ Please note: `kubeadm` is a work in progress and these limitations will be addre
 1. There is no built-in way of fetching the token easily once the cluster is up and running, but here is a `kubectl` command you can copy and paste that will print out the token for you:

    ```console
-   # kubectl -n kube-system get secret clusterinfo -o yaml | grep token-map | awk '{print $2}' | base64 -D | sed "s|{||g;s|}||g;s|:|.|g;s/\"//g;" | xargs echo
+   # kubectl -n kube-system get secret clusterinfo -o yaml | grep token-map | awk '{print $2}' | base64 --decode | sed "s|{||g;s|}||g;s|:|.|g;s/\"//g;" | xargs echo
    ```

 1. If you are using VirtualBox (directly or via Vagrant), you will need to ensure that `hostname -i` returns a routable IP address (i.e. one on the second network interface, not the first one).
````
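The flag change is presumably about portability: GNU coreutils spells the short decode flag `-d` while the BSD/macOS tool spells it `-D`, and both accept the long form `--decode`. A quick way to confirm your local tool handles it (illustrative):

```shell
# Round-trip a string through base64; works with both GNU and BSD variants.
echo 'kubeadm-token-test' | base64 | base64 --decode
```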
````diff
@@ -247,7 +247,7 @@ where you would set it. Suppose the Container listens on 127.0.0.1 and the Pod's
 If your pod relies on virtual hosts, which is probably the more common case,
 you should not use `host`, but rather set the `Host` header in `httpHeaders`.

-In addition to command probes and HTTP probes, Kubenetes supports
+In addition to command probes and HTTP probes, Kubernetes supports
 [TCP probes](/docs/api-reference/v1/definitions/#_v1_tcpsocketaction).

 {% endcapture %}
````
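A TCP probe simply checks that a connection to the given port can be opened; a minimal sketch (pod name, image, and timings are illustrative):

```shell
# The kubelet opens a TCP connection to port 8080; failure to connect fails
# the probe, and a failing liveness probe gets the container restarted.
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: tcp-probe-demo
spec:
  containers:
  - name: app
    image: gcr.io/google_containers/goproxy:0.1
    ports:
    - containerPort: 8080
    livenessProbe:
      tcpSocket:
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 20
EOF
```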
````diff
@@ -62,8 +62,8 @@ This indicates that the Deployment has created all three replicas, and all repli

 ```shell
 $ kubectl get rs
-NAME                          DESIRED   CURRENT   AGE
-nginx-deployment-2035384211   3         3         18s
+NAME                          DESIRED   CURRENT   READY   AGE
+nginx-deployment-2035384211   3         3         0       18s
 ```

 You may notice that the name of the Replica Set is always `<the name of the Deployment>-<hash value of the pod template>`.
````
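That hash also appears as the `pod-template-hash` label on the Replica Set and on its Pods, so you can select on it directly (label value taken from the example output above):

```shell
# List only the pods that belong to this Replica Set.
kubectl get pods -l pod-template-hash=2035384211
```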
````diff
@@ -180,9 +180,9 @@ We can run `kubectl get rs` to see that the Deployment updated the Pods by creat

 ```shell
 $ kubectl get rs
-NAME                          DESIRED   CURRENT   AGE
-nginx-deployment-1564180365   3         3         6s
-nginx-deployment-2035384211   0         0         36s
+NAME                          DESIRED   CURRENT   READY   AGE
+nginx-deployment-1564180365   3         3         0       6s
+nginx-deployment-2035384211   0         0         0       36s
 ```

 Running `get pods` should now show only the new Pods:
````
````diff
@@ -287,10 +287,10 @@ You will also see that both the number of old replicas (nginx-deployment-1564180

 ```shell
 $ kubectl get rs
-NAME                          DESIRED   CURRENT   AGE
-nginx-deployment-1564180365   2         2         25s
-nginx-deployment-2035384211   0         0         36s
-nginx-deployment-3066724191   2         2         6s
+NAME                          DESIRED   CURRENT   READY   AGE
+nginx-deployment-1564180365   2         2         0       25s
+nginx-deployment-2035384211   0         0         0       36s
+nginx-deployment-3066724191   2         2         2       6s
 ```

 Looking at the Pods created, you will see that the 2 Pods created by new Replica Set are stuck in an image pull loop.
````
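The failure state is visible on the Pods themselves; a quick illustration, assuming the pod template carries the `app=nginx` label used throughout this example:

```shell
# Pods from the broken Replica Set report ErrImagePull / ImagePullBackOff
# in the STATUS column.
kubectl get pods -l app=nginx
```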
````diff
@@ -514,10 +514,10 @@ The Deployment was still in progress when we paused it, so the actions of scalin

 ```shell
 $ kubectl get rs
-NAME                          DESIRED   CURRENT   AGE
-nginx-deployment-1564180365   2         2         1h
-nginx-deployment-2035384211   2         2         1h
-nginx-deployment-3066724191   0         0         1h
+NAME                          DESIRED   CURRENT   READY   AGE
+nginx-deployment-1564180365   2         2         2       1h
+nginx-deployment-2035384211   2         2         0       1h
+nginx-deployment-3066724191   0         0         0       1h
 ```

 In a separate terminal, watch for rollout status changes and you'll see the rollout won't continue:
````
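The watch command referred to here is presumably the usual one:

```shell
# Blocks until the rollout completes; while the Deployment is paused it
# keeps reporting the waiting state instead of finishing.
kubectl rollout status deployment/nginx-deployment
```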
````diff
@@ -546,10 +546,10 @@ deployment nginx-deployment successfully rolled out

 ```shell
 $ kubectl get rs
-NAME                          DESIRED   CURRENT   AGE
-nginx-deployment-1564180365   3         3         1h
-nginx-deployment-2035384211   0         0         1h
-nginx-deployment-3066724191   0         0         1h
+NAME                          DESIRED   CURRENT   READY   AGE
+nginx-deployment-1564180365   3         3         3       1h
+nginx-deployment-2035384211   0         0         0       1h
+nginx-deployment-3066724191   0         0         0       1h
 ```

 Note: You cannot rollback a paused Deployment until you resume it.
````
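For reference, pausing and resuming are both one-liners (deployment name from the running example):

```shell
kubectl rollout pause deployment/nginx-deployment    # stop triggering new rollouts
kubectl rollout resume deployment/nginx-deployment   # apply the queued-up changes
```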
````diff
@@ -23,7 +23,7 @@ heapster monitoring will be turned-on by default).
 ## Step One: Run & expose php-apache server

 To demonstrate Horizontal Pod Autoscaler we will use a custom docker image based on the php-apache image.
-The image can be found [here](/docs/user-guide/horizontal-pod-autoscaling/image).
+The Dockerfile can be found [here](/docs/user-guide/horizontal-pod-autoscaling/image/Dockerfile).
 It defines an [index.php](/docs/user-guide/horizontal-pod-autoscaling/image/index.php) page which performs some CPU intensive computations.

 First, we will start a deployment running the image and expose it as a service:
````
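The step that follows in the walkthrough is presumably the usual run-and-expose one-liner (image name as published for this guide; the CPU request matters because the autoscaler targets CPU utilization):

```shell
kubectl run php-apache --image=gcr.io/google_containers/hpa-example \
    --requests=cpu=200m --expose --port=80
```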
````diff
@@ -54,7 +54,7 @@ Before running examples in the user guides, please ensure you have completed the
 : A service defines a set of pods and a means by which to access them, such as single stable IP address and corresponding DNS name.

 [**Volume**](/docs/user-guide/volumes/)
-: A volume is a directory, possibly with some data in it, which is accessible to a Container as part of its filesystem. Kubernetes volumes build upon [Docker Volumes](https://docs.docker.com/userguide/dockervolumes/), adding provisioning of the volume directory and/or device.
+: A volume is a directory, possibly with some data in it, which is accessible to a Container as part of its filesystem. Kubernetes volumes build upon [Docker Volumes](https://docs.docker.com/engine/tutorials/dockervolumes/), adding provisioning of the volume directory and/or device.

 [**Secret**](/docs/user-guide/secrets/)
 : A secret stores sensitive data, such as authentication tokens, which can be made available to containers upon request.
````
````diff
@@ -44,9 +44,9 @@ It can be configured to give services externally-reachable urls, load balance tr

 Before you start using the Ingress resource, there are a few things you should understand. The Ingress is a beta resource, not available in any Kubernetes release prior to 1.1. You need an Ingress controller to satisfy an Ingress, simply creating the resource will have no effect.

-GCE/GKE deploys an ingress controller on the master. You can deploy any number of custom ingress controllers in a pod. You must annotate each ingress with the appropriate class, as indicated [here](https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx#running-multiple-ingress-controllers) and [here](https://github.com/kubernetes/contrib/blob/master/ingress/controllers/gce/BETA_LIMITATIONS.md#disabling-glbc).
+GCE/GKE deploys an ingress controller on the master. You can deploy any number of custom ingress controllers in a pod. You must annotate each ingress with the appropriate class, as indicated [here](https://github.com/kubernetes/ingress/tree/master/controllers/nginx#running-multiple-ingress-controllers) and [here](https://github.com/kubernetes/ingress/blob/master/controllers/gce/BETA_LIMITATIONS.md#disabling-glbc).

-Make sure you review the [beta limitations](https://github.com/kubernetes/contrib/tree/master/ingress/controllers/gce/BETA_LIMITATIONS.md) of this controller. In environments other than GCE/GKE, you need to [deploy a controller](https://github.com/kubernetes/contrib/tree/master/ingress/controllers) as a pod.
+Make sure you review the [beta limitations](https://github.com/kubernetes/ingress/blob/master/controllers/gce/BETA_LIMITATIONS.md) of this controller. In environments other than GCE/GKE, you need to [deploy a controller](https://github.com/kubernetes/ingress/tree/master/controllers) as a pod.

 ## The Ingress Resource
````
````diff
@@ -71,7 +71,7 @@ spec:

 __Lines 1-4__: As with all other Kubernetes config, an Ingress needs `apiVersion`, `kind`, and `metadata` fields. For general information about working with config files, see [here](/docs/user-guide/deploying-applications), [here](/docs/user-guide/configuring-containers), and [here](/docs/user-guide/working-with-resources).

-__Lines 5-7__: Ingress [spec](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/docs/devel/api-conventions.md#spec-and-status) has all the information needed to configure a loadbalancer or proxy server. Most importantly, it contains a list of rules matched against all incoming requests. Currently the Ingress resource only supports http rules.
+__Lines 5-7__: Ingress [spec](https://github.com/kubernetes/community/blob/master/contributors/devel/api-conventions.md#spec-and-status) has all the information needed to configure a loadbalancer or proxy server. Most importantly, it contains a list of rules matched against all incoming requests. Currently the Ingress resource only supports http rules.

 __Lines 8-9__: Each http rule contains the following information: A host (e.g.: foo.bar.com, defaults to * in this example), a list of paths (e.g.: /testpath) each of which has an associated backend (test:80). Both the host and path must match the content of an incoming request before the loadbalancer directs traffic to the backend.
````
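For context, the manifest these line references describe is approximately the following, reconstructed from the fields mentioned in the surrounding text (the metadata name is illustrative):

```shell
cat <<EOF | kubectl create -f -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress        # lines 1-4: apiVersion, kind, metadata
spec:
  rules:                    # lines 5-7: spec holding a list of rules
  - http:
      paths:
      - path: /testpath     # lines 8-9: path matched against each request
        backend:
          serviceName: test # the test:80 backend from the description
          servicePort: 80
EOF
```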
````diff
@@ -81,11 +81,11 @@ __Global Parameters__: For the sake of simplicity the example Ingress has no glo

 ## Ingress controllers

-In order for the Ingress resource to work, the cluster must have an Ingress controller running. This is unlike other types of controllers, which typically run as part of the `kube-controller-manager` binary, and which are typically started automatically as part of cluster creation. You need to choose the ingress controller implementation that is the best fit for your cluster, or implement one. Examples and instructions can be found [here](https://github.com/kubernetes/contrib/tree/master/ingress/controllers).
+In order for the Ingress resource to work, the cluster must have an Ingress controller running. This is unlike other types of controllers, which typically run as part of the `kube-controller-manager` binary, and which are typically started automatically as part of cluster creation. You need to choose the ingress controller implementation that is the best fit for your cluster, or implement one. Examples and instructions can be found [here](https://github.com/kubernetes/ingress/tree/master/controllers).

 ## Before you begin

-The following document describes a set of cross platform features exposed through the Ingress resource. Ideally, all Ingress controllers should fulfill this specification, but we're not there yet. The docs for the GCE and nginx controllers are [here](https://github.com/kubernetes/contrib/blob/master/ingress/controllers/gce/README.md) and [here](https://github.com/kubernetes/contrib/blob/master/ingress/controllers/nginx/README.md) respectively. **Make sure you review controller specific docs so you understand the caveats of each one**.
+The following document describes a set of cross platform features exposed through the Ingress resource. Ideally, all Ingress controllers should fulfill this specification, but we're not there yet. The docs for the GCE and nginx controllers are [here](https://github.com/kubernetes/ingress/blob/master/controllers/gce/README.md) and [here](https://github.com/kubernetes/ingress/blob/master/controllers/nginx/README.md) respectively. **Make sure you review controller specific docs so you understand the caveats of each one**.

 ## Types of Ingress
````
````diff
@@ -214,13 +214,13 @@ spec:
     servicePort: 80
 ```

-Note that there is a gap between TLS features supported by various Ingress controllers. Please refer to documentation on [nginx](https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx#https), [GCE](https://github.com/kubernetes/contrib/tree/master/ingress/controllers/gce#tls), or any other platform specific Ingress controller to understand how TLS works in your environment.
+Note that there is a gap between TLS features supported by various Ingress controllers. Please refer to documentation on [nginx](https://github.com/kubernetes/ingress/blob/master/controllers/nginx/README.md#https), [GCE](https://github.com/kubernetes/ingress/blob/master/controllers/gce/README.md#tls), or any other platform specific Ingress controller to understand how TLS works in your environment.

 ### Loadbalancing

 An Ingress controller is bootstrapped with some loadbalancing policy settings that it applies to all Ingress, such as the loadbalancing algorithm, backend weight scheme etc. More advanced loadbalancing concepts (e.g.: persistent sessions, dynamic weights) are not yet exposed through the Ingress. You can still get these features through the [service loadbalancer](https://github.com/kubernetes/contrib/tree/master/service-loadbalancer). With time, we plan to distill loadbalancing patterns that are applicable cross platform into the Ingress resource.

-It's also worth noting that even though health checks are not exposed directly through the Ingress, there exist parallel concepts in Kubernetes such as [readiness probes](/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/) which allow you to achieve the same end result. Please review the controller specific docs to see how they handle health checks ([nginx](https://github.com/kubernetes/contrib/blob/master/ingress/controllers/nginx/README.md), [GCE](https://github.com/kubernetes/contrib/blob/master/ingress/controllers/gce/README.md#health-checks)).
+It's also worth noting that even though health checks are not exposed directly through the Ingress, there exist parallel concepts in Kubernetes such as [readiness probes](/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/) which allow you to achieve the same end result. Please review the controller specific docs to see how they handle health checks ([nginx](https://github.com/kubernetes/ingress/blob/master/controllers/nginx/README.md), [GCE](https://github.com/kubernetes/ingress/blob/master/controllers/gce/README.md#health-checks)).

 ## Updating an Ingress
````
````diff
@@ -282,7 +282,7 @@ Techniques for spreading traffic across failure domains differs between cloud pr
 * Combining L4 and L7 Ingress
 * More Ingress controllers

-Please track the [L7 and Ingress proposal](https://github.com/kubernetes/kubernetes/pull/12827) for more details on the evolution of the resource, and the [Ingress sub-repository](https://github.com/kubernetes/contrib/tree/master/ingress) for more details on the evolution of various Ingress controllers.
+Please track the [L7 and Ingress proposal](https://github.com/kubernetes/kubernetes/pull/12827) for more details on the evolution of the resource, and the [Ingress repository](https://github.com/kubernetes/ingress/tree/master) for more details on the evolution of various Ingress controllers.

 ## Alternatives
````
````diff
@@ -66,7 +66,7 @@ To view completed pods of a job, use `kubectl get pods --show-all`. The `--show
 To list all the pods that belong to a job in a machine readable form, you can use a command like this:

 ```shell
-$ pods=$(kubectl get pods --selector=job-name=pi --output=jsonpath={.items..metadata.name})
+$ pods=$(kubectl get pods --show-all --selector=job-name=pi --output=jsonpath={.items..metadata.name})
 echo $pods
 pi-aiw0a
 ```
````
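The `--show-all` flag matters here because a Job's pods move to Succeeded once they finish, and completed pods are hidden from `kubectl get pods` by default in this release. With the names captured, you can chain further commands, for instance (works as-is when the job produced a single pod, as in this example):

```shell
# Fetch the output of the completed pod selected above.
kubectl logs $pods
```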
````diff
@@ -305,7 +305,7 @@ $ kubectl config use-context federal-context

 ### Final notes for tying it all together

-So, tying this all together, a quick start to creating your own kubeconfig file:
+So, tying this all together, a quick start to create your own kubeconfig file:

 - Take a good look and understand how your api-server is being launched: You need to know YOUR security requirements and policies before you can design a kubeconfig file for convenient authentication.
````
````diff
@@ -197,7 +197,9 @@ $ kubectl -n my-ns delete po,svc --all # Delete all pods and servic

 ```console
 $ kubectl logs my-pod                                 # dump pod logs (stdout)
+$ kubectl logs my-pod -c my-container                 # dump pod container logs (stdout, multi-container case)
 $ kubectl logs -f my-pod                              # stream pod logs (stdout)
+$ kubectl logs -f my-pod -c my-container              # stream pod container logs (stdout, multi-container case)
 $ kubectl run -i --tty busybox --image=busybox -- sh  # Run pod as interactive shell
 $ kubectl attach my-pod -i                            # Attach to Running Container
 $ kubectl port-forward my-pod 5000:6000               # Forward port 6000 of Pod to 5000 on your local machine
````
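The same `-c`/`--container` flag the cheatsheet is gaining here applies to other pod-level commands too, for example:

```shell
kubectl exec my-pod -c my-container -- ls /   # run a command in one container of a multi-container pod
```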
````diff
@@ -172,23 +172,23 @@ In the CLI, the access modes are abbreviated to:

 | Volume Plugin        | ReadWriteOnce | ReadOnlyMany | ReadWriteMany |
 | :---                 | :---:         | :---:        | :---:         |
-| AWSElasticBlockStore | x | - | - |
-| AzureFile            | x | x | x |
-| AzureDisk            | x | - | - |
-| CephFS               | x | x | x |
-| Cinder               | x | - | - |
-| FC                   | x | x | - |
-| FlexVolume           | x | x | - |
-| Flocker              | x | - | - |
-| GCEPersistentDisk    | x | x | - |
-| Glusterfs            | x | x | x |
-| HostPath             | x | - | - |
-| iSCSI                | x | x | - |
-| PhotonPersistentDisk | x | - | - |
-| Quobyte              | x | x | x |
-| NFS                  | x | x | x |
-| RBD                  | x | x | - |
-| VsphereVolume        | x | - | - |
+| AWSElasticBlockStore | ✓ | - | - |
+| AzureFile            | ✓ | ✓ | ✓ |
+| AzureDisk            | ✓ | - | - |
+| CephFS               | ✓ | ✓ | ✓ |
+| Cinder               | ✓ | - | - |
+| FC                   | ✓ | ✓ | - |
+| FlexVolume           | ✓ | ✓ | - |
+| Flocker              | ✓ | - | - |
+| GCEPersistentDisk    | ✓ | ✓ | - |
+| Glusterfs            | ✓ | ✓ | ✓ |
+| HostPath             | ✓ | - | - |
+| iSCSI                | ✓ | ✓ | - |
+| PhotonPersistentDisk | ✓ | - | - |
+| Quobyte              | ✓ | ✓ | ✓ |
+| NFS                  | ✓ | ✓ | ✓ |
+| RBD                  | ✓ | ✓ | - |
+| VsphereVolume        | ✓ | - | - |

 ### Class
````
````diff
@@ -396,7 +396,7 @@ parameters:
   zone: us-central1-a
 ```

-* `type`: `pd-standard` or `pd-ssd`. Default: `pd-ssd`
+* `type`: `pd-standard` or `pd-ssd`. Default: `pd-standard`
* `zone`: GCE zone. If not specified, a random zone in the same region as controller-manager will be chosen.

 #### Glusterfs
````
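The `zone: us-central1-a` context line belongs to a GCE StorageClass definition along these lines (class name and apiVersion are assumptions here; the corrected default means omitting `type` gives you `pd-standard`):

```shell
cat <<EOF | kubectl create -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: slow
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard      # or pd-ssd; pd-standard is the default
  zone: us-central1-a
EOF
```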