Merge pull request #421 from EmilyM1/update-language-yaml

Update language yaml

commit 2b1e860e04
@@ -1,6 +1,6 @@
 ## Guestbook Example

-This example shows how to build a simple multi-tier web application using Kubernetes and Docker. The application consists of a web front end, Redis master for storage, and replicated set of Redis slaves, all for which we will create Kubernetes replication controllers, pods, and services.
+This example shows how to build a simple multi-tier web application using Kubernetes and Docker. The application consists of a web front end, a Redis master for storage, and a replicated set of Redis replicas, for all of which we will create Kubernetes replication controllers, pods, and services.

 If you are running a cluster in Google Container Engine (GKE), instead see the [Guestbook Example for Google Container Engine](https://cloud.google.com/container-engine/docs/tutorials/guestbook).

@@ -9,8 +9,8 @@ If you are running a cluster in Google Container Engine (GKE), instead see the [
 * [Step Zero: Prerequisites](#step-zero)
 * [Step One: Create the Redis master pod](#step-one)
 * [Step Two: Create the Redis master service](#step-two)
-* [Step Three: Create the Redis slave pods](#step-three)
-* [Step Four: Create the Redis slave service](#step-four)
+* [Step Three: Create the Redis replica pods](#step-three)
+* [Step Four: Create the Redis replica service](#step-four)
 * [Step Five: Create the guestbook pods](#step-five)
 * [Step Six: Create the guestbook service](#step-six)
 * [Step Seven: View the guestbook](#step-seven)

@@ -92,77 +92,77 @@ Services find the pods to load balance based on pod labels. The pod that you cre
 Result: All new pods will see the `redis-master` service running on the host (`$REDIS_MASTER_SERVICE_HOST` environment variable) at port 6379, or running on `redis-master:6379`. After the service is created, the service proxy on each node is configured to set up a proxy on the specified port (in our example, that's port 6379).

-### Step Three: Create the Redis slave pods <a id="step-three"></a>
+### Step Three: Create the Redis replica pods <a id="step-three"></a>

-The Redis master we created earlier is a single pod (REPLICAS = 1), while the Redis read slaves we are creating here are 'replicated' pods. In Kubernetes, a replication controller is responsible for managing the multiple instances of a replicated pod.
+The Redis master we created earlier is a single pod (REPLICAS = 1), while the Redis read replicas we are creating here are 'replicated' pods. In Kubernetes, a replication controller is responsible for managing the multiple instances of a replicated pod.

-1. Use the file [redis-slave-controller.json](redis-slave-controller.json) to create the replication controller by running the `kubectl create -f` *`filename`* command:
+1. Use the file [redis-replica-controller.json](redis-replica-controller.json) to create the replication controller by running the `kubectl create -f` *`filename`* command:

 ```console
-$ kubectl create -f examples/guestbook-go/redis-slave-controller.json
+$ kubectl create -f examples/guestbook-go/redis-replica-controller.json
 ```

-2. To verify that the redis-slave controller is running, run the `kubectl get rc` command:
+2. To verify that the redis-replica controller is running, run the `kubectl get rc` command:

 ```console
 $ kubectl get rc
 CONTROLLER      CONTAINER(S)    IMAGE(S)                     SELECTOR                  REPLICAS
 redis-master    redis-master    redis                        app=redis,role=master     1
-redis-slave     redis-slave     k8s.gcr.io/redis-slave:v2    app=redis,role=slave      2
+redis-replica   redis-replica   k8s.gcr.io/redis-slave:v2    app=redis,role=replica    2
 ...
 ```

-Result: The replication controller creates and configures the Redis slave pods through the redis-master service (name:port pair, in our example that's `redis-master:6379`).
+Result: The replication controller creates and configures the Redis replica pods through the redis-master service (name:port pair, in our example that's `redis-master:6379`).

 Example:
-The Redis slaves get started by the replication controller with the following command:
+The Redis replicas get started by the replication controller with the following command:

 ```console
-redis-server --slaveof redis-master 6379
+redis-server --replicaof redis-master 6379
 ```

-3. To verify that the Redis master and slaves pods are running, run the `kubectl get pods` command:
+3. To verify that the Redis master and replica pods are running, run the `kubectl get pods` command:

 ```console
 $ kubectl get pods
 NAME                   READY   STATUS    RESTARTS   AGE
 redis-master-xx4uv     1/1     Running   0          18m
-redis-slave-b6wj4      1/1     Running   0          1m
-redis-slave-iai40      1/1     Running   0          1m
+redis-replica-b6wj4    1/1     Running   0          1m
+redis-replica-iai40    1/1     Running   0          1m
 ...
 ```

-Result: You see the single Redis master and two Redis slave pods.
+Result: You see the single Redis master and two Redis replica pods.

-### Step Four: Create the Redis slave service <a id="step-four"></a>
+### Step Four: Create the Redis replica service <a id="step-four"></a>

-Just like the master, we want to have a service to proxy connections to the read slaves. In this case, in addition to discovery, the Redis slave service provides transparent load balancing to clients.
+Just like the master, we want to have a service to proxy connections to the read replicas. In this case, in addition to discovery, the Redis replica service provides transparent load balancing to clients.

-1. Use the [redis-slave-service.json](redis-slave-service.json) file to create the Redis slave service by running the `kubectl create -f` *`filename`* command:
+1. Use the [redis-replica-service.json](redis-replica-service.json) file to create the Redis replica service by running the `kubectl create -f` *`filename`* command:

 ```console
-$ kubectl create -f examples/guestbook-go/redis-slave-service.json
+$ kubectl create -f examples/guestbook-go/redis-replica-service.json
 ```

-2. To verify that the redis-slave service is up, list the services you created in the cluster with the `kubectl get services` command:
+2. To verify that the redis-replica service is up, list the services you created in the cluster with the `kubectl get services` command:

 ```console
 $ kubectl get services
 NAME            CLUSTER_IP   EXTERNAL_IP   PORT(S)    SELECTOR                  AGE
 redis-master    10.0.136.3   <none>        6379/TCP   app=redis,role=master     1h
-redis-slave     10.0.21.92   <none>        6379/TCP   app-redis,role=slave      1h
+redis-replica   10.0.21.92   <none>        6379/TCP   app-redis,role=replica    1h
 ...
 ```

-Result: The service is created with labels `app=redis` and `role=slave` to identify that the pods are running the Redis slaves.
+Result: The service is created with labels `app=redis` and `role=replica` to identify that the pods are running the Redis replicas.

 Tip: It is helpful to set labels on your services themselves--as we've done here--to make it easy to locate them later.

 ### Step Five: Create the guestbook pods <a id="step-five"></a>

-This is a simple Go `net/http` ([negroni](https://github.com/codegangsta/negroni) based) server that is configured to talk to either the slave or master services depending on whether the request is a read or a write. The pods we are creating expose a simple JSON interface and serves a jQuery-Ajax based UI. Like the Redis slave pods, these pods are also managed by a replication controller.
+This is a simple Go `net/http` ([negroni](https://github.com/codegangsta/negroni) based) server that is configured to talk to either the replica or master service depending on whether the request is a read or a write. The pods we are creating expose a simple JSON interface and serve a jQuery-Ajax based UI. Like the Redis replica pods, these pods are also managed by a replication controller.

 1. Use the [guestbook-controller.json](guestbook-controller.json) file to create the guestbook replication controller by running the `kubectl create -f` *`filename`* command:

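Aside: `--replicaof` is the Redis 5.0 spelling of `--slaveof`, so the updated startup command only works if the image in the pod template runs Redis 5 or newer; older images still require the original flag. A quick way to confirm the replicas actually attached to the master, assuming the pod name from the sample output above (yours will differ):

```console
$ kubectl get pods -l app=redis,role=replica
$ kubectl exec redis-replica-b6wj4 -- redis-cli info replication
```

The `INFO replication` output reports the pod's role and the state of its link to `redis-master`.
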
@@ -178,9 +178,9 @@ This is a simple Go `net/http` ([negroni](https://github.com/codegangsta/negroni
 ```console
 $ kubectl get rc
 CONTROLLER      CONTAINER(S)    IMAGE(S)                       SELECTOR                  REPLICAS
 guestbook       guestbook       k8s.gcr.io/guestbook:v3        app=guestbook             3
 redis-master    redis-master    redis                          app=redis,role=master     1
-redis-slave     redis-slave     k8s.gcr.io/redis-slave:v2      app=redis,role=slave      2
+redis-replica   redis-replica   k8s.gcr.io/redis-replica:v2    app=redis,role=replica    2
 ...
 ```

@@ -193,12 +193,12 @@ This is a simple Go `net/http` ([negroni](https://github.com/codegangsta/negroni
 guestbook-gv7i6        1/1     Running   0          2m
 guestbook-x405a        1/1     Running   0          2m
 redis-master-xx4uv     1/1     Running   0          23m
-redis-slave-b6wj4      1/1     Running   0          6m
-redis-slave-iai40      1/1     Running   0          6m
+redis-replica-b6wj4    1/1     Running   0          6m
+redis-replica-iai40    1/1     Running   0          6m
 ...
 ```

-Result: You see a single Redis master, two Redis slaves, and three guestbook pods.
+Result: You see a single Redis master, two Redis replicas, and three guestbook pods.

 ### Step Six: Create the guestbook service <a id="step-six"></a>

@@ -218,7 +218,7 @@ Just like the others, we create a service to group the guestbook pods but this t
 NAME            CLUSTER_IP     EXTERNAL_IP    PORT(S)    SELECTOR                  AGE
 guestbook       10.0.217.218   146.148.81.8   3000/TCP   app=guestbook             1h
 redis-master    10.0.136.3     <none>         6379/TCP   app=redis,role=master     1h
-redis-slave     10.0.21.92     <none>         6379/TCP   app-redis,role=slave      1h
+redis-replica   10.0.21.92     <none>         6379/TCP   app-redis,role=replica    1h
 ...
 ```

@@ -258,8 +258,8 @@ guestbook-controller
 guestbook
 redis-master-controller
 redis-master
-redis-slave-controller
-redis-slave
+redis-replica-controller
+redis-replica
 ```

 Tip: To turn down your Kubernetes cluster, follow the corresponding instructions in the version of the

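The listing above is the output of tearing the example down. A label-based delete removes everything the walkthrough created in one pass; a sketch, assuming the labels used throughout this example:

```console
$ kubectl delete rc,services -l app=redis
$ kubectl delete rc,services -l app=guestbook
```
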
@@ -29,12 +29,12 @@ import (
 var (
 	masterPool  *simpleredis.ConnectionPool
-	slavePool   *simpleredis.ConnectionPool
+	replicaPool *simpleredis.ConnectionPool
 )

 func ListRangeHandler(rw http.ResponseWriter, req *http.Request) {
 	key := mux.Vars(req)["key"]
-	list := simpleredis.NewList(slavePool, key)
+	list := simpleredis.NewList(replicaPool, key)
 	members := HandleError(list.GetAll()).([]string)
 	membersJSON := HandleError(json.MarshalIndent(members, "", " ")).([]byte)
 	rw.Write(membersJSON)

@@ -76,8 +76,8 @@ func HandleError(result interface{}, err error) (r interface{}) {
 func main() {
 	masterPool = simpleredis.NewConnectionPoolHost("redis-master:6379")
 	defer masterPool.Close()
-	slavePool = simpleredis.NewConnectionPoolHost("redis-slave:6379")
-	defer slavePool.Close()
+	replicaPool = simpleredis.NewConnectionPoolHost("redis-replica:6379")
+	defer replicaPool.Close()

 	r := mux.NewRouter()
 	r.Path("/lrange/{key}").Methods("GET").HandlerFunc(ListRangeHandler)

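The renamed pool keeps the server's read/write split intact: read handlers such as `ListRangeHandler` go through `replicaPool` (the `redis-replica` service), while writes use `masterPool`. The read path can be exercised through the JSON interface on the route shown above; a sketch using the sample external IP from the earlier service listing and a hypothetical key name:

```console
$ curl http://146.148.81.8:3000/lrange/guestbook
```
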
@@ -2,29 +2,29 @@
    "kind":"ReplicationController",
    "apiVersion":"v1",
    "metadata":{
-      "name":"redis-slave",
+      "name":"redis-replica",
       "labels":{
          "app":"redis",
-         "role":"slave"
+         "role":"replica"
       }
    },
    "spec":{
       "replicas":2,
       "selector":{
          "app":"redis",
-         "role":"slave"
+         "role":"replica"
       },
       "template":{
          "metadata":{
             "labels":{
                "app":"redis",
-               "role":"slave"
+               "role":"replica"
             }
          },
          "spec":{
             "containers":[
                {
-                  "name":"redis-slave",
+                  "name":"redis-replica",
                   "image":"k8s.gcr.io/redis-slave:v2",
                   "ports":[
                      {

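When a label such as `role` is renamed this way, the replication controller's `spec.selector` and the pod template labels must change together; a selector that no longer matches the template is rejected by validation, and one that matches different labels than the running pods will orphan them. A quick consistency check after recreating the controller:

```console
$ kubectl describe rc redis-replica
$ kubectl get pods -l app=redis,role=replica
```
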
@@ -2,10 +2,10 @@
    "kind":"Service",
    "apiVersion":"v1",
    "metadata":{
-      "name":"redis-slave",
+      "name":"redis-replica",
       "labels":{
          "app":"redis",
-         "role":"slave"
+         "role":"replica"
       }
    },
    "spec":{

@@ -17,7 +17,7 @@
       ],
       "selector":{
          "app":"redis",
-         "role":"slave"
+         "role":"replica"
       }
    }
 }

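A service selects pods purely by these labels, so the rename is only complete once pods labeled `role=replica` exist; until then the service resolves but has no endpoints. One way to confirm the renamed service found its pods:

```console
$ kubectl get endpoints redis-replica
```
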
@@ -46,39 +46,39 @@ spec:
 apiVersion: v1
 kind: Service
 metadata:
-  name: redis-slave
+  name: redis-replica
   labels:
     app: redis
     tier: backend
-    role: slave
+    role: replica
 spec:
   ports:
   - port: 6379
   selector:
     app: redis
     tier: backend
-    role: slave
+    role: replica
 ---
 apiVersion: apps/v1 # for k8s versions before 1.9.0 use apps/v1beta2 and before 1.8.0 use extensions/v1beta1
 kind: Deployment
 metadata:
-  name: redis-slave
+  name: redis-replica
 spec:
   selector:
     matchLabels:
       app: redis
-      role: slave
+      role: replica
       tier: backend
   replicas: 2
   template:
     metadata:
       labels:
         app: redis
-        role: slave
+        role: replica
         tier: backend
     spec:
       containers:
-      - name: slave
+      - name: replica
         image: gcr.io/google_samples/gb-redisslave:v1
         resources:
           requests:

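One operational caveat for this hunk: in `apps/v1` (and `apps/v1beta2`), a Deployment's `spec.selector` is immutable, so applying the relabeled manifest over a live cluster is rejected. The old Deployment has to be deleted and recreated; a sketch, with the manifest file name assumed:

```console
$ kubectl delete deployment redis-slave
$ kubectl apply -f guestbook-all-in-one.yaml
```
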
@@ -104,7 +104,7 @@ metadata:
   tier: frontend
 spec:
   # comment or delete the following line if you want to use a LoadBalancer
   type: NodePort
   # if your cluster supports it, uncomment the following to automatically create
   # an external load-balanced IP for the frontend service.
   # type: LoadBalancer

@@ -1,39 +1,39 @@
 apiVersion: v1
 kind: Service
 metadata:
-  name: redis-slave
+  name: redis-replica
   labels:
     app: redis
-    role: slave
+    role: replica
     tier: backend
 spec:
   ports:
   - port: 6379
   selector:
     app: redis
-    role: slave
+    role: replica
     tier: backend
 ---
 apiVersion: apps/v1 # for k8s versions before 1.9.0 use apps/v1beta2 and before 1.8.0 use extensions/v1beta1
 kind: Deployment
 metadata:
-  name: redis-slave
+  name: redis-replica
 spec:
   selector:
     matchLabels:
       app: redis
-      role: slave
+      role: replica
       tier: backend
   replicas: 2
   template:
     metadata:
       labels:
         app: redis
-        role: slave
+        role: replica
         tier: backend
     spec:
       containers:
-      - name: slave
+      - name: replica
         image: gcr.io/google_samples/gb-redisslave:v1
         resources:
           requests:

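Note that the container name and labels change while the image is still `gcr.io/google_samples/gb-redisslave:v1`, so the old term survives in the image reference until a renamed image is published. To check which image a renamed Deployment is actually running:

```console
$ kubectl get deployment redis-replica -o jsonpath='{.spec.template.spec.containers[*].image}'
```
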
@@ -1,10 +1,10 @@
 apiVersion: v1
 kind: ReplicationController
 metadata:
-  name: redis-slave
+  name: redis-replica
   labels:
     app: redis
-    role: slave
+    role: replica
     tier: backend
 spec:
   replicas: 2

@@ -12,11 +12,11 @@ spec:
     metadata:
       labels:
         app: redis
-        role: slave
+        role: replica
         tier: backend
     spec:
       containers:
-      - name: slave
+      - name: replica
         image: gcr.io/google_samples/gb-redisslave:v1
         resources:
           requests:

@@ -1,19 +1,19 @@
 apiVersion: apps/v1 # for k8s versions before 1.9.0 use apps/v1beta2 and before 1.8.0 use extensions/v1beta1
 kind: Deployment
 metadata:
-  name: redis-slave
+  name: redis-replica
 spec:
   selector:
     matchLabels:
       app: redis
-      role: slave
+      role: replica
       tier: backend
   replicas: 2
   template:
     metadata:
       labels:
         app: redis
-        role: slave
+        role: replica
         tier: backend
     spec:
       containers:

@@ -1,15 +1,15 @@
 apiVersion: v1
 kind: Service
 metadata:
-  name: redis-slave
+  name: redis-replica
   labels:
     app: redis
-    role: slave
+    role: replica
     tier: backend
 spec:
   ports:
   - port: 6379
   selector:
     app: redis
-    role: slave
+    role: replica
     tier: backend

@@ -191,24 +191,24 @@ func TestExampleObjectSchemas(t *testing.T) {
 	cases := map[string]map[string]runtime.Object{
 		"../examples/guestbook": {
 			"frontend-deployment":      &extensions.Deployment{},
-			"redis-slave-deployment":   &extensions.Deployment{},
+			"redis-replica-deployment": &extensions.Deployment{},
 			"redis-master-deployment":  &extensions.Deployment{},
 			"frontend-service":         &api.Service{},
 			"redis-master-service":     &api.Service{},
-			"redis-slave-service":      &api.Service{},
+			"redis-replica-service":    &api.Service{},
 		},
 		"../examples/guestbook/legacy": {
 			"frontend-controller":      &api.ReplicationController{},
-			"redis-slave-controller":   &api.ReplicationController{},
+			"redis-replica-controller": &api.ReplicationController{},
 			"redis-master-controller":  &api.ReplicationController{},
 		},
 		"../examples/guestbook-go": {
 			"guestbook-controller":     &api.ReplicationController{},
-			"redis-slave-controller":   &api.ReplicationController{},
+			"redis-replica-controller": &api.ReplicationController{},
 			"redis-master-controller":  &api.ReplicationController{},
 			"guestbook-service":        &api.Service{},
 			"redis-master-service":     &api.Service{},
-			"redis-slave-service":      &api.Service{},
+			"redis-replica-service":    &api.Service{},
 		},
 		"../examples/volumes/iscsi": {
 			"chap-secret": &api.Secret{},

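Because this test keys every entry off a manifest's file name, it only passes once the files on disk are renamed to match the new map keys. Re-running just this test is a quick check; the package path is an assumption and depends on where `examples_test.go` lives in the repo:

```console
$ go test -run TestExampleObjectSchemas ./...
```
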
@@ -12,7 +12,7 @@ This example was tested on OS X with a Galera cluster running on VMWare using th
 ### Basic concept

-The basic idea is this: three replication controllers with a single pod, corresponding services, and a single overall service to connect to all three nodes. One of the important design goals of MySQL replication and/or clustering is that you don't want a single-point-of-failure, hence the need to distribute each node or slave across hosts or even geographical locations. Kubernetes is well-suited for facilitating this design pattern using the service and replication controller configuration files in this example.
+The basic idea is this: three replication controllers with a single pod, corresponding services, and a single overall service to connect to all three nodes. One of the important design goals of MySQL replication and/or clustering is that you don't want a single point of failure, hence the need to distribute each node or replica across hosts or even geographical locations. Kubernetes is well-suited for facilitating this design pattern using the service and replication controller configuration files in this example.

 By default, there are only three pods (hence replication controllers) for this cluster. This number can be increased using the variable NUM_NODES, specified in the replication controller configuration file. It's important to know the number of nodes must always be odd.

@@ -1,13 +1,13 @@
 #Use this sysdig.yaml when Daemon Sets are NOT enabled on Kubernetes (minimum version 1.1.1). If Daemon Sets are available, use the other example sysdig.yaml - that is the recommended method.

 apiVersion: v1
 kind: ReplicationController
 metadata:
   name: sysdig-agent
   labels:
     app: sysdig-agent
 spec:
-  replicas: 100 #REQUIRED - replace with the maximum number of slave nodes in the cluster
+  replicas: 100 #REQUIRED - replace with the maximum number of replica nodes in the cluster
   template:
     spec:
       volumes:

@@ -48,10 +48,10 @@ spec:
 # - name: K8S_API_URI #OPTIONAL - only necessary when connecting remotely to API server
 #   value: "http[s]://[username:passwd@]host[:port]"
 # - name: TAGS #OPTIONAL
 #   value: linux:ubuntu,dept:dev,local:nyc
 # - name: COLLECTOR #OPTIONAL
 #   value: 192.168.183.200
 # - name: SECURE #OPTIONAL
 #   value: false
 # - name: CHECK_CERTIFICATE #OPTIONAL
 #   value: false

@@ -13,7 +13,7 @@ A Flocker cluster is required to use Flocker with Kubernetes. A Flocker cluster
 - *Flocker Dataset Agent(s)*: a convergence agent that modifies the cluster state to match the desired configuration;
 - *Flocker Container Agent(s)*: a convergence agent that modifies the cluster state to match the desired configuration (unused in this configuration but still required in the cluster).

-The Flocker cluster can be installed on the same nodes you are using for Kubernetes. For instance, you can install the Flocker Control Service on the same node as Kubernetes Master and Flocker Dataset/Container Agents on every Kubernetes Slave node.
+The Flocker cluster can be installed on the same nodes you are using for Kubernetes. For instance, you can install the Flocker Control Service on the same node as the Kubernetes Master and the Flocker Dataset/Container Agents on every Kubernetes worker node.

 It is recommended to follow [Installing Flocker](https://docs.clusterhq.com/en/latest/install/index.html) and the instructions below to set up the Flocker cluster to be used with Kubernetes.