Add redirection notices to tutorials
This adds an EXCLUDE_FROM_DOCS section (removed by the docs import script) to the tutorials and fixes relative-link issues in the tutorial contents.

Signed-off-by: Ahmet Alp Balkan <ahmetb@google.com>
commit c1d30a6240 (parent 8ad93a19dc)
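Aside, not part of this commit: the `EXCLUDE_FROM_DOCS` markers delimit a GitHub-only block that the docs import script strips before the page is published on kubernetes.io. The script itself is not included here; a minimal sketch of the stripping step, assuming the markers appear verbatim on their own lines, could look like this:

```shell
# Hypothetical sketch: drop everything between the EXCLUDE_FROM_DOCS markers
# (inclusive) before importing a tutorial README into the docs site.
sed '/<!-- EXCLUDE_FROM_DOCS BEGIN -->/,/<!-- EXCLUDE_FROM_DOCS END -->/d' README.md > imported.md
```

With something like that in place, the redirection notice is only visible to readers browsing the repository on GitHub.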
@@ -1,5 +1,11 @@
+<!-- EXCLUDE_FROM_DOCS BEGIN -->
+
+> :warning: :warning: Follow this tutorial on the Kubernetes website:
+> https://kubernetes.io/docs/tutorials/stateful-application/cassandra/.
+> Otherwise some of the URLs will not work properly.
+
 # Cloud Native Deployments of Cassandra using Kubernetes
+<!-- EXCLUDE_FROM_DOCS END -->

 ## Table of Contents

@@ -27,18 +33,18 @@ new Cassandra nodes as they join the cluster.

 This example also uses some of the core components of Kubernetes:

-- [_Pods_](../../../docs/user-guide/pods.md)
-- [ _Services_](../../../docs/user-guide/services.md)
-- [_Replication Controllers_](../../../docs/user-guide/replication-controller.md)
-- [_Stateful Sets_](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/)
-- [_Daemon Sets_](../../../docs/admin/daemons.md)
+- [_Pods_](/docs/user-guide/pods)
+- [ _Services_](/docs/user-guide/services)
+- [_Replication Controllers_](/docs/user-guide/replication-controller)
+- [_Stateful Sets_](/docs/concepts/workloads/controllers/statefulset/)
+- [_Daemon Sets_](/docs/admin/daemons)

 ## Prerequisites

 This example assumes that you have a Kubernetes version >=1.2 cluster installed and running,
-and that you have installed the [`kubectl`](../../../docs/user-guide/kubectl/kubectl.md)
+and that you have installed the [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
 command line tool somewhere in your path. Please see the
-[getting started guides](../../../docs/getting-started-guides/)
+[getting started guides](https://kubernetes.io/docs/getting-started-guides/)
 for installation instructions for your platform.

 This example also has a few code and configuration files needed. To avoid
@@ -68,17 +74,21 @@ here are the steps:
 # StatefulSet
 #

+# clone the example repository
+git clone https://github.com/kubernetes/examples
+cd examples
+
 # create a service to track all cassandra statefulset nodes
-kubectl create -f examples/storage/cassandra/cassandra-service.yaml
+kubectl create -f cassandra/cassandra-service.yaml

 # create a statefulset
-kubectl create -f examples/storage/cassandra/cassandra-statefulset.yaml
+kubectl create -f cassandra/cassandra-statefulset.yaml

 # validate the Cassandra cluster. Substitute the name of one of your pods.
 kubectl exec -ti cassandra-0 -- nodetool status

 # cleanup
-grace=$(kubectl get po cassandra-0 --template '{{.spec.terminationGracePeriodSeconds}}') \
+grace=$(kubectl get po cassandra-0 -o=jsonpath='{.spec.terminationGracePeriodSeconds}') \
 && kubectl delete statefulset,po -l app=cassandra \
 && echo "Sleeping $grace" \
 && sleep $grace \
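Aside, not part of the diff: before running the cleanup pipeline above, it can help to confirm the StatefulSet actually came up. Assuming the default names used in the quick start, standard kubectl queries are enough:

```shell
# Show desired vs. current replicas for the StatefulSet
kubectl get statefulset cassandra

# Watch cassandra-0, cassandra-1, ... come up one at a time
kubectl get pods -l app=cassandra -w
```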
@@ -89,7 +99,7 @@ grace=$(kubectl get po cassandra-0 --template '{{.spec.terminationGracePeriodSec
 #

 # create a replication controller to replicate cassandra nodes
-kubectl create -f examples/storage/cassandra/cassandra-controller.yaml
+kubectl create -f cassandra/cassandra-controller.yaml

 # validate the Cassandra cluster. Substitute the name of one of your pods.
 kubectl exec -ti cassandra-xxxxx -- nodetool status
@@ -104,7 +114,7 @@ kubectl delete rc cassandra
 # Create a DaemonSet to place a cassandra node on each kubernetes node
 #

-kubectl create -f examples/storage/cassandra/cassandra-daemonset.yaml --validate=false
+kubectl create -f cassandra/cassandra-daemonset.yaml --validate=false

 # resource cleanup
 kubectl delete service -l app=cassandra
@@ -113,8 +123,8 @@ kubectl delete daemonset cassandra

 ## Step 1: Create a Cassandra Headless Service

-A Kubernetes _[Service](../../../docs/user-guide/services.md)_ describes a set of
-[_Pods_](../../../docs/user-guide/pods.md) that perform the same task. In
+A Kubernetes _[Service](/docs/user-guide/services)_ describes a set of
+[_Pods_](/docs/user-guide/pods) that perform the same task. In
 Kubernetes, the atomic unit of an application is a Pod: one or more containers
 that _must_ be scheduled onto the same host.

@@ -140,14 +150,14 @@ spec:
     app: cassandra
 ```

-[Download example](cassandra-service.yaml?raw=true)
+[Download example](https://raw.githubusercontent.com/kubernetes/examples/master/cassandra-service.yaml)
 <!-- END MUNGE: EXAMPLE cassandra-service.yaml -->

 Create the service for the StatefulSet:

 ```console
-$ kubectl create -f examples/storage/cassandra/cassandra-service.yaml
+$ kubectl create -f cassandra/cassandra-service.yaml
 ```

 The following command shows if the service has been created.
@@ -278,13 +288,13 @@ parameters:
   type: pd-ssd
 ```

-[Download example](cassandra-statefulset.yaml?raw=true)
+[Download example](https://raw.githubusercontent.com/kubernetes/examples/master/cassandra-statefulset.yaml)
 <!-- END MUNGE: EXAMPLE cassandra-statefulset.yaml -->

 Create the Cassandra StatefulSet as follows:

 ```console
-$ kubectl create -f examples/storage/cassandra/cassandra-statefulset.yaml
+$ kubectl create -f cassandra/cassandra-statefulset.yaml
 ```

 ## Step 3: Validate and Modify The Cassandra StatefulSet
@@ -353,7 +363,7 @@ system_traces system_schema system_auth system system_distributed
 ```

 In order to increase or decrease the size of the Cassandra StatefulSet, you must use
-`kubectl edit`. You can find more information about the edit command in the [documentation](../../../docs/user-guide/kubectl/kubectl_edit.md).
+`kubectl edit`. You can find more information about the edit command in the [documentation](/docs/user-guide/kubectl/kubectl_edit).

 Use the following command to edit the StatefulSet.

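For readers following along, the edit-based scaling flow referenced above would look roughly like this (not part of the commit):

```shell
# Opens the StatefulSet manifest in $EDITOR; change spec.replicas, save, and exit
kubectl edit statefulset cassandra

# Confirm the new desired and current replica counts
kubectl get statefulset cassandra
```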
@@ -416,7 +426,7 @@ Deleting and/or scaling a StatefulSet down will not delete the volumes associate
 Use the following commands to delete the StatefulSet.

 ```console
-$ grace=$(kubectl get po cassandra-0 --template '{{.spec.terminationGracePeriodSeconds}}') \
+$ grace=$(kubectl get po cassandra-0 -o=jsonpath='{.spec.terminationGracePeriodSeconds}') \
 && kubectl delete statefulset -l app=cassandra \
 && echo "Sleeping $grace" \
 && sleep $grace \
@@ -426,7 +436,7 @@ $ grace=$(kubectl get po cassandra-0 --template '{{.spec.terminationGracePeriodS
 ## Step 5: Use a Replication Controller to create Cassandra node pods

 A Kubernetes
-_[Replication Controller](../../../docs/user-guide/replication-controller.md)_
+_[Replication Controller](/docs/user-guide/replication-controller)_
 is responsible for replicating sets of identical pods. Like a
 Service, it has a selector query which identifies the members of its set.
 Unlike a Service, it also has a desired number of replicas, and it will create
@@ -500,7 +510,7 @@ spec:
          emptyDir: {}
 ```

-[Download example](cassandra-controller.yaml?raw=true)
+[Download example](https://raw.githubusercontent.com/kubernetes/examples/master/cassandra-controller.yaml)
 <!-- END MUNGE: EXAMPLE cassandra-controller.yaml -->

 There are a few things to note in this description.
@@ -520,7 +530,7 @@ Create the Replication Controller:

 ```console

-$ kubectl create -f examples/storage/cassandra/cassandra-controller.yaml
+$ kubectl create -f cassandra/cassandra-controller.yaml

 ```

@@ -654,7 +664,7 @@ $ kubectl delete rc cassandra

 ## Step 8: Use a DaemonSet instead of a Replication Controller

-In Kubernetes, a [_Daemon Set_](../../../docs/admin/daemons.md) can distribute pods
+In Kubernetes, a [_Daemon Set_](/docs/admin/daemons) can distribute pods
 onto Kubernetes nodes, one-to-one. Like a _ReplicationController_, it has a
 selector query which identifies the members of its set. Unlike a
 _ReplicationController_, it has a node selector to limit which nodes are
@@ -732,7 +742,7 @@ spec:
          emptyDir: {}
 ```

-[Download example](cassandra-daemonset.yaml?raw=true)
+[Download example](https://raw.githubusercontent.com/kubernetes/examples/master/cassandra-daemonset.yaml)
 <!-- END MUNGE: EXAMPLE cassandra-daemonset.yaml -->

 Most of this DaemonSet definition is identical to the ReplicationController
@@ -748,7 +758,7 @@ Create this DaemonSet:

 ```console

-$ kubectl create -f examples/storage/cassandra/cassandra-daemonset.yaml
+$ kubectl create -f cassandra/cassandra-daemonset.yaml

 ```

@@ -756,7 +766,7 @@ You may need to disable config file validation, like so:

 ```console

-$ kubectl create -f examples/storage/cassandra/cassandra-daemonset.yaml --validate=false
+$ kubectl create -f cassandra/cassandra-daemonset.yaml --validate=false

 ```

@@ -834,7 +844,7 @@ ring. The [`KubernetesSeedProvider`](java/src/main/java/io/k8s/cassandra/Kuberne
 discovers Cassandra seeds IP addresses via the Kubernetes API, those Cassandra
 instances are defined within the Cassandra Service.

-Refer to the custom seed provider [README](java/README.md) for further
+Refer to the custom seed provider [README](https://git.k8s.io/examples/cassandra/java/README.md) for further
 `KubernetesSeedProvider` configurations. For this example you should not need
 to customize the Seed Provider configurations.

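A side note, not in the diff: since the seed provider resolves seed addresses from the Cassandra Service through the Kubernetes API, a quick way to see roughly what it would discover is to list that Service's endpoints (assuming the Service is named `cassandra`, as above):

```shell
# The membership list the seed provider effectively consumes via the API
kubectl get endpoints cassandra -o yaml
```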
@@ -843,12 +853,12 @@ how the container docker image was built and what it contains.

 You may also note that we are setting some Cassandra parameters (`MAX_HEAP_SIZE`
 and `HEAP_NEWSIZE`), and adding information about the
-[namespace](../../../docs/user-guide/namespaces.md).
+[namespace](/docs/user-guide/namespaces).
 We also tell Kubernetes that the container exposes
 both the `CQL` and `Thrift` API ports. Finally, we tell the cluster
 manager that we need 0.1 cpu (0.1 core).


 <!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
-[]()
+[]()
 <!-- END MUNGE: GENERATED_ANALYTICS -->

@@ -1,5 +1,11 @@
+<!-- EXCLUDE_FROM_DOCS BEGIN -->
+
+> :warning: :warning: Follow this tutorial on the Kubernetes website:
+> https://kubernetes.io/docs/tutorials/stateless-application/guestbook/.
+> Otherwise some of the URLs will not work properly.
+
 ## Guestbook Example
+<!-- EXCLUDE_FROM_DOCS END -->

 This example shows how to build a simple, multi-tier web application using Kubernetes and [Docker](https://www.docker.com/).

@@ -47,7 +53,7 @@ $ kubectl cluster-info

 If you see a url response, you are ready to go. If not, read the [Getting Started guides](http://kubernetes.io/docs/getting-started-guides/) for how to get started, and follow the [prerequisites](http://kubernetes.io/docs/user-guide/prereqs/) to install and configure `kubectl`. As noted above, if you have a Google Container Engine cluster set up, read [this example](https://cloud.google.com/container-engine/docs/tutorials/guestbook) instead.

-All the files referenced in this example can be downloaded in [current folder](./).
+All the files referenced in this example can be downloaded [from GitHub](https://git.k8s.io/examples/guestbook).

 ### Quick Start

@@ -56,7 +62,7 @@ This section shows the simplest way to get the example work. If you want to know
 Start the guestbook with one command:

 ```console
-$ kubectl create -f examples/guestbook/all-in-one/guestbook-all-in-one.yaml
+$ kubectl create -f guestbook/all-in-one/guestbook-all-in-one.yaml
 service "redis-master" created
 deployment "redis-master" created
 service "redis-slave" created
@@ -68,7 +74,7 @@ deployment "frontend" created
 Alternatively, you can start the guestbook by running:

 ```console
-$ kubectl create -f examples/guestbook/
+$ kubectl create -f guestbook/
 ```

 Then, list all your Services:
@@ -86,13 +92,13 @@ Now you can access the guestbook on each node with frontend Service's `<Cluster-
 Clean up the guestbook:

 ```console
-$ kubectl delete -f examples/guestbook/all-in-one/guestbook-all-in-one.yaml
+$ kubectl delete -f guestbook/all-in-one/guestbook-all-in-one.yaml
 ```

 or

 ```console
-$ kubectl delete -f examples/guestbook/
+$ kubectl delete -f guestbook/
 ```

@@ -103,7 +109,7 @@ Before continuing to the gory details, we also recommend you to read Kubernetes

 #### Define a Deployment

-To start the redis master, use the file [redis-master-deployment.yaml](redis-master-deployment.yaml), which describes a single [pod](http://kubernetes.io/docs/user-guide/pods/) running a redis key-value server in a container.
+To start the redis master, use the file [redis-master-deployment.yaml](https://git.k8s.io/examples/guestbook/redis-master-deployment.yaml), which describes a single [pod](http://kubernetes.io/docs/user-guide/pods/) running a redis key-value server in a container.

 Although we have a single instance of our redis master, we are using a [Deployment](http://kubernetes.io/docs/user-guide/deployments/) to enforce that exactly one pod keeps running. E.g., if the node were to go down, the Deployment will ensure that the redis master gets restarted on a healthy node. (In our simplified example, this could result in data loss.)

@@ -151,7 +157,7 @@ spec:
         - containerPort: 6379
 ```

-[Download example](redis-master-deployment.yaml?raw=true)
+[Download example](https://raw.githubusercontent.com/kubernetes/examples/master/guestbook/redis-master-deployment.yaml)
 <!-- END MUNGE: EXAMPLE redis-master-deployment.yaml -->

 #### Define a Service
@@ -161,7 +167,7 @@ A Kubernetes [Service](http://kubernetes.io/docs/user-guide/services/) is a name
 Services find the pods to load balance based on the pods' labels.
 The selector field of the Service description determines which pods will receive the traffic sent to the Service, and the `port` and `targetPort` information defines what port the Service proxy will run at.

-The file [redis-master-service.yaml](redis-master-deployment.yaml) defines the redis master Service:
+The file [redis-master-service.yaml](https://git.k8s.io/examples/guestbook/redis-master-deployment.yaml) defines the redis master Service:

 <!-- BEGIN MUNGE: EXAMPLE redis-master-service.yaml -->

@@ -185,7 +191,7 @@ spec:
     tier: backend
 ```

-[Download example](redis-master-service.yaml?raw=true)
+[Download example](https://raw.githubusercontent.com/kubernetes/examples/master/guestbook/redis-master-service.yaml)
 <!-- END MUNGE: EXAMPLE redis-master-service.yaml -->

 #### Create a Service
@@ -193,7 +199,7 @@ spec:
 According to the [config best practices](http://kubernetes.io/docs/user-guide/config-best-practices/), create a Service before corresponding Deployments so that the scheduler can spread the pods comprising the Service. So we first create the Service by running:

 ```console
-$ kubectl create -f examples/guestbook/redis-master-service.yaml
+$ kubectl create -f guestbook/redis-master-service.yaml
 service "redis-master" created
 ```

@@ -233,7 +239,7 @@ This example has been configured to use the DNS service by default.

 If your cluster does not have the DNS service enabled, then you can use environment variables by setting the
 `GET_HOSTS_FROM` env value in both
-[redis-slave-deployment.yaml](redis-slave-deployment.yaml) and [frontend-deployment.yaml](frontend-deployment.yaml)
+[redis-slave-deployment.yaml](https://git.k8s.io/examples/guestbook/redis-slave-deployment.yaml) and [frontend-deployment.yaml](https://git.k8s.io/examples/guestbook/frontend-deployment.yaml)
 from `dns` to `env` before you start up the app.
 (However, this is unlikely to be necessary. You can check for the DNS service in the list of the cluster's services by
 running `kubectl --namespace=kube-system get rc -l k8s-app=kube-dns`.)
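Not part of the commit: if you do need the environment-variable path described above, the edit is a one-word change in each manifest. A hedged sketch, assuming the value is spelled `value: dns` under `GET_HOSTS_FROM` in both files:

```shell
# Flip GET_HOSTS_FROM from dns to env before creating the Deployments
sed -i 's/value: dns/value: env/' \
  guestbook/redis-slave-deployment.yaml \
  guestbook/frontend-deployment.yaml
```

On BSD/macOS sed, use `sed -i ''` instead of `sed -i`.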
@@ -244,7 +250,7 @@ Note that switching to env causes creation-order dependencies, since Services ne
 Second, create the redis master pod in your Kubernetes cluster by running:

 ```console
-$ kubectl create -f examples/guestbook/redis-master-deployment.yaml
+$ kubectl create -f guestbook/redis-master-deployment.yaml
 deployment "redis-master" created
 ```

@@ -345,7 +351,7 @@ In Kubernetes, a Deployment is responsible for managing multiple instances of a
 Just like the master, we want to have a Service to proxy connections to the redis slaves. In this case, in addition to discovery, the slave Service will provide transparent load balancing to web app clients.

 This time we put the Service and Deployment into one [file](http://kubernetes.io/docs/user-guide/managing-deployments/#organizing-resource-configurations). Grouping related objects together in a single file is often better than having separate files.
-The specification for the slaves is in [all-in-one/redis-slave.yaml](all-in-one/redis-slave.yaml):
+The specification for the slaves is in [all-in-one/redis-slave.yaml](https://git.k8s.io/examples/guestbook/all-in-one/redis-slave.yaml):

 <!-- BEGIN MUNGE: EXAMPLE all-in-one/redis-slave.yaml -->

@@ -414,7 +420,7 @@ spec:
         - containerPort: 6379
 ```

-[Download example](all-in-one/redis-slave.yaml?raw=true)
+[Download example](https://raw.githubusercontent.com/kubernetes/examples/master/guestbook/all-in-one/redis-slave.yaml)
 <!-- END MUNGE: EXAMPLE all-in-one/redis-slave.yaml -->

 This time the selector for the Service is `app=redis,role=slave,tier=backend`, because that identifies the pods running redis slaves. It is generally helpful to set labels on your Service itself as we've done here to make it easy to locate them with the `kubectl get services -l "app=redis,role=slave,tier=backend"` command. For more information on the usage of labels, see [using-labels-effectively](http://kubernetes.io/docs/user-guide/managing-deployments/#using-labels-effectively).
@@ -422,7 +428,7 @@ This time the selector for the Service is `app=redis,role=slave,tier=backend`, b
 Now that you have created the specification, create the Service in your cluster by running:

 ```console
-$ kubectl create -f examples/guestbook/all-in-one/redis-slave.yaml
+$ kubectl create -f guestbook/all-in-one/redis-slave.yaml
 service "redis-slave" created
 deployment "redis-slave" created

@@ -455,7 +461,7 @@ A frontend pod is a simple PHP server that is configured to talk to either the s
 Again we'll create a set of replicated frontend pods instantiated by a Deployment — this time, with three replicas.

 As with the other pods, we now want to create a Service to group the frontend pods.
-The Deployment and Service are described in the file [all-in-one/frontend.yaml](all-in-one/frontend.yaml):
+The Deployment and Service are described in the file [all-in-one/frontend.yaml](https://git.k8s.io/examples/guestbook/all-in-one/frontend.yaml):

 <!-- BEGIN MUNGE: EXAMPLE all-in-one/frontend.yaml -->

@@ -522,21 +528,21 @@ spec:
         - containerPort: 80
 ```

-[Download example](all-in-one/frontend.yaml?raw=true)
+[Download example](https://raw.githubusercontent.com/kubernetes/examples/master/guestbook/all-in-one/frontend.yaml)
 <!-- END MUNGE: EXAMPLE all-in-one/frontend.yaml -->

 #### Using 'type: LoadBalancer' for the frontend service (cloud-provider-specific)

 For supported cloud providers, such as Google Compute Engine or Google Container Engine, you can specify to use an external load balancer
 in the service `spec`, to expose the service onto an external load balancer IP.
-To do this, uncomment the `type: LoadBalancer` line in the [all-in-one/frontend.yaml](all-in-one/frontend.yaml) file before you start the service.
+To do this, uncomment the `type: LoadBalancer` line in the [all-in-one/frontend.yaml](https://git.k8s.io/examples/guestbook/all-in-one/frontend.yaml) file before you start the service.

 [See the appendix below](#appendix-accessing-the-guestbook-site-externally) on accessing the guestbook site externally for more details.

 Create the service and Deployment like this:

 ```console
-$ kubectl create -f examples/guestbook/all-in-one/frontend.yaml
+$ kubectl create -f guestbook/all-in-one/frontend.yaml
 service "frontend" created
 deployment "frontend" created
 ```
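As a follow-up to the LoadBalancer note in the hunk above (not part of the diff): once the frontend Service is created with `type: LoadBalancer` uncommented, the provisioned address shows up in the usual place:

```shell
# EXTERNAL-IP stays <pending> until the cloud provider finishes provisioning
kubectl get service frontend
```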
@@ -1,4 +1,11 @@
+<!-- EXCLUDE_FROM_DOCS BEGIN -->
+
+> :warning: :warning: Follow this tutorial on the Kubernetes website:
+> https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/.
+> Otherwise some of the URLs will not work properly.
+
 # Persistent Installation of MySQL and WordPress on Kubernetes
+<!-- EXCLUDE_FROM_DOCS END -->

 This example describes how to run a persistent installation of
 [WordPress](https://wordpress.org/) and
@@ -31,10 +38,10 @@ your editor added one.

 ```shell
 tr --delete '\n' <password.txt >.strippedpassword.txt && mv .strippedpassword.txt password.txt
-kubectl create -f https://raw.githubusercontent.com/kubernetes/kubernetes/master/examples/mysql-wordpress-pd/local-volumes.yaml
+kubectl create -f https://raw.githubusercontent.com/kubernetes/examples/master/mysql-wordpress-pd/local-volumes.yaml
 kubectl create secret generic mysql-pass --from-file=password.txt
-kubectl create -f https://raw.githubusercontent.com/kubernetes/kubernetes/master/examples/mysql-wordpress-pd/mysql-deployment.yaml
-kubectl create -f https://raw.githubusercontent.com/kubernetes/kubernetes/master/examples/mysql-wordpress-pd/wordpress-deployment.yaml
+kubectl create -f https://raw.githubusercontent.com/kubernetes/examples/master/mysql-wordpress-pd/mysql-deployment.yaml
+kubectl create -f https://raw.githubusercontent.com/kubernetes/examples/master/mysql-wordpress-pd/wordpress-deployment.yaml
 ```

 ## Table of Contents

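An aside on the password step above, not in the diff: the `tr` invocation only exists to strip a trailing newline from `password.txt`. If your kubectl supports `--from-literal` (current releases do), the same secret can be created without a file at all, assuming the deployment reads the secret under the key `password.txt` as the `--from-file` form implies:

```shell
# Creates the mysql-pass secret directly; no trailing-newline pitfall
kubectl create secret generic mysql-pass --from-literal=password.txt=YOUR_PASSWORD
```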
@@ -117,11 +124,11 @@ chcon -Rt svirt_sandbox_file_t /tmp/data
 ```

 Continuing with host path, create the persistent volume objects in Kubernetes using
-[local-volumes.yaml](local-volumes.yaml):
+[local-volumes.yaml](https://git.k8s.io/examples/mysql-wordpress-pd/local-volumes.yaml):

 ```shell
-export KUBE_REPO=https://raw.githubusercontent.com/kubernetes/kubernetes/master
-kubectl create -f $KUBE_REPO/examples/mysql-wordpress-pd/local-volumes.yaml
+export KUBE_REPO=https://raw.githubusercontent.com/kubernetes/examples/master
+kubectl create -f $KUBE_REPO/mysql-wordpress-pd/local-volumes.yaml
 ```

@@ -134,10 +141,10 @@ Create two persistent disks. You will need to create the disks in the
 same [GCE zone](https://cloud.google.com/compute/docs/zones) as the
 Kubernetes cluster. The default setup script will create the cluster
 in the `us-central1-b` zone, as seen in the
-[config-default.sh](../../cluster/gce/config-default.sh) file. Replace
+[config-default.sh](https://git.k8s.io/kubernetes/cluster/gce/config-default.sh) file. Replace
 `<zone>` below with the appropriate zone. The names `wordpress-1` and
 `wordpress-2` must match the `pdName` fields we have specified in
-[gce-volumes.yaml](gce-volumes.yaml).
+[gce-volumes.yaml](https://git.k8s.io/examples/mysql-wordpress-pd/gce-volumes.yaml).

 ```shell
 gcloud compute disks create --size=20GB --zone=<zone> wordpress-1
@@ -147,8 +154,8 @@ gcloud compute disks create --size=20GB --zone=<zone> wordpress-2
 Create the persistent volume objects in Kubernetes for those disks:

 ```shell
-export KUBE_REPO=https://raw.githubusercontent.com/kubernetes/kubernetes/master
-kubectl create -f $KUBE_REPO/examples/mysql-wordpress-pd/gce-volumes.yaml
+export KUBE_REPO=https://raw.githubusercontent.com/kubernetes/examples/master
+kubectl create -f $KUBE_REPO/mysql-wordpress-pd/gce-volumes.yaml
 ```

 ## Create the MySQL Password Secret

@@ -175,13 +182,13 @@ access the database.

 Now that the persistent disks and secrets are defined, the Kubernetes
 pods can be launched. Start MySQL using
-[mysql-deployment.yaml](mysql-deployment.yaml).
+[mysql-deployment.yaml](https://git.k8s.io/examples/mysql-wordpress-pd/mysql-deployment.yaml).

 ```shell
-kubectl create -f $KUBE_REPO/examples/mysql-wordpress-pd/mysql-deployment.yaml
+kubectl create -f $KUBE_REPO/mysql-wordpress-pd/mysql-deployment.yaml
 ```

-Take a look at [mysql-deployment.yaml](mysql-deployment.yaml), and
+Take a look at [mysql-deployment.yaml](https://git.k8s.io/examples/mysql-wordpress-pd/mysql-deployment.yaml), and
 note that we've defined a volume mount for `/var/lib/mysql`, and then
 created a Persistent Volume Claim that looks for a 20G volume. This
 claim is satisfied by any volume that meets the requirements, in our
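Not in the diff, but a quick way to confirm that the 20G claim described above was actually matched to one of the volumes created earlier (the claim name `mysql-pv-claim` appears in the tutorial's own output):

```shell
# The claim should report STATUS Bound once a matching PV is found
kubectl get pvc mysql-pv-claim
kubectl get pv
```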
@@ -230,7 +237,7 @@ kubectl logs <pod-name>
 Version: '5.6.29' socket: '/var/run/mysqld/mysqld.sock' port: 3306 MySQL Community Server (GPL)
 ```

-Also in [mysql-deployment.yaml](mysql-deployment.yaml) we created a
+Also in [mysql-deployment.yaml](https://git.k8s.io/examples/mysql-wordpress-pd/mysql-deployment.yaml) we created a
 service to allow other pods to reach this mysql instance. The name is
 `wordpress-mysql` which resolves to the pod IP.

@@ -264,10 +271,10 @@ local-pv-2 20Gi RWO Bound default/mysql-pv-claim
 ## Deploy WordPress

 Next deploy WordPress using
-[wordpress-deployment.yaml](wordpress-deployment.yaml):
+[wordpress-deployment.yaml](https://git.k8s.io/examples/mysql-wordpress-pd/wordpress-deployment.yaml):

 ```shell
-kubectl create -f $KUBE_REPO/examples/mysql-wordpress-pd/wordpress-deployment.yaml
+kubectl create -f $KUBE_REPO/mysql-wordpress-pd/wordpress-deployment.yaml
 ```

 Here we are using many of the same features, such as a volume claim