commit 162bc4def1
parent 69f7ab04f8

more updates
@@ -191,11 +191,11 @@ func TestExampleObjectSchemas(t *testing.T) {
 	cases := map[string]map[string]runtime.Object{
 		"../examples/guestbook": {
 			"frontend-deployment":      &extensions.Deployment{},
-			"redis-slave-deployment":   &extensions.Deployment{},
+			"redis-replica-deployment": &extensions.Deployment{},
 			"redis-master-deployment":  &extensions.Deployment{},
 			"frontend-service":         &api.Service{},
 			"redis-master-service":     &api.Service{},
-			"redis-slave-service":      &api.Service{},
+			"redis-replica-service":    &api.Service{},
 		},
 		"../examples/guestbook/legacy": {
 			"frontend-controller": &api.ReplicationController{},

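The table above maps each example directory to file basenames and the API type each file should decode into. Below is a minimal, hypothetical sketch of the same table-driven pattern: it only checks that every expected `<name>.yaml` or `<name>.json` file exists on disk, whereas the real `TestExampleObjectSchemas` goes further and decodes each file into the listed `runtime.Object`. The name `TestExampleFilesExist` is illustrative, not from the commit.

```go
package examples_test

import (
	"os"
	"path/filepath"
	"testing"
)

// TestExampleFilesExist is an illustrative reduction of the table-driven
// test above: each directory maps to the example files expected in it.
func TestExampleFilesExist(t *testing.T) {
	cases := map[string][]string{
		"../examples/guestbook": {
			"frontend-deployment",
			"redis-replica-deployment",
			"redis-master-deployment",
			"frontend-service",
			"redis-master-service",
			"redis-replica-service",
		},
	}
	for dir, names := range cases {
		for _, name := range names {
			found := false
			for _, ext := range []string{".yaml", ".json"} {
				if _, err := os.Stat(filepath.Join(dir, name+ext)); err == nil {
					found = true
					break
				}
			}
			if !found {
				t.Errorf("%s: expected %s.yaml or %s.json", dir, name, name)
			}
		}
	}
}
```
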
@@ -12,7 +12,7 @@ This example was tested on OS X with a Galera cluster running on VMWare using th
 
 ### Basic concept
 
-The basic idea is this: three replication controllers with a single pod, corresponding services, and a single overall service to connect to all three nodes. One of the important design goals of MySQL replication and/or clustering is that you don't want a single-point-of-failure, hence the need to distribute each node or slave across hosts or even geographical locations. Kubernetes is well-suited for facilitating this design pattern using the service and replication controller configuration files in this example.
+The basic idea is this: three replication controllers with a single pod, corresponding services, and a single overall service to connect to all three nodes. One of the important design goals of MySQL replication and/or clustering is that you don't want a single-point-of-failure, hence the need to distribute each node or replica across hosts or even geographical locations. Kubernetes is well-suited for facilitating this design pattern using the service and replication controller configuration files in this example.
 
 By defaults, there are only three pods (hence replication controllers) for this cluster. This number can be increased using the variable NUM_NODES, specified in the replication controller configuration file. It's important to know the number of nodes must always be odd.
 

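The "must always be odd" requirement in the doc above follows from majority quorum: a Galera cluster stays writable only while a majority of nodes are up, and quorum(n) = n/2 + 1, so stepping from an odd NUM_NODES to the next even value raises the quorum without tolerating any additional failures. A small illustrative snippet (not part of the commit):

```go
package main

import "fmt"

// quorum returns the smallest majority of n nodes; the cluster keeps
// accepting writes only while at least this many nodes are reachable.
func quorum(n int) int { return n/2 + 1 }

func main() {
	for n := 1; n <= 6; n++ {
		// Note that an even n tolerates no more failures than n-1 does,
		// which is why NUM_NODES should always be kept odd.
		fmt.Printf("nodes=%d quorum=%d tolerated failures=%d\n",
			n, quorum(n), n-quorum(n))
	}
}
```
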
@@ -13,7 +13,7 @@ A Flocker cluster is required to use Flocker with Kubernetes. A Flocker cluster
 - *Flocker Dataset Agent(s)*: a convergence agent that modifies the cluster state to match the desired configuration;
 - *Flocker Container Agent(s)*: a convergence agent that modifies the cluster state to match the desired configuration (unused in this configuration but still required in the cluster).
 
-The Flocker cluster can be installed on the same nodes you are using for Kubernetes. For instance, you can install the Flocker Control Service on the same node as Kubernetes Master and Flocker Dataset/Container Agents on every Kubernetes Slave node.
+The Flocker cluster can be installed on the same nodes you are using for Kubernetes. For instance, you can install the Flocker Control Service on the same node as Kubernetes Master and Flocker Dataset/Container Agents on every Kubernetes Replica node.
 
 It is recommended to follow [Installing Flocker](https://docs.clusterhq.com/en/latest/install/index.html) and the instructions below to set-up the Flocker cluster to be used with Kubernetes.
 

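Once the agents above are running, a pod consumes a Flocker dataset by name through the `flocker` volume source. A hedged sketch using the internal `pkg/api` types from this era of the tree; the pod name, image, and dataset name are placeholders, not taken from the commit:

```go
package main

import (
	"fmt"

	"k8s.io/kubernetes/pkg/api"
)

func main() {
	// A pod whose container mounts a Flocker-managed dataset. The Flocker
	// Dataset Agent on the scheduled node converges the dataset onto that
	// node before the volume is attached.
	pod := &api.Pod{
		ObjectMeta: api.ObjectMeta{Name: "flocker-web"},
		Spec: api.PodSpec{
			Containers: []api.Container{{
				Name:  "web",
				Image: "nginx",
				VolumeMounts: []api.VolumeMount{{
					Name:      "www-root",
					MountPath: "/usr/share/nginx/html",
				}},
			}},
			Volumes: []api.Volume{{
				Name: "www-root",
				VolumeSource: api.VolumeSource{
					// "my-flocker-vol" is a placeholder dataset name.
					Flocker: &api.FlockerVolumeSource{DatasetName: "my-flocker-vol"},
				},
			}},
		},
	}
	fmt.Printf("%+v\n", pod.Spec.Volumes[0])
}
```
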