# A Basic TiDB cluster with TiFlash and Monitoring
> **Note:**
>
> This setup is for testing or demo purposes only and IS NOT suitable for critical environments. Refer to the documentation for a production setup.

The following steps create a TiDB cluster with TiFlash and monitoring deployed.
## Prerequisites

- TiDB Operator v1.1.0-rc.3 or a higher version installed; see the TiDB Operator documentation.
- An available `StorageClass` configured, with enough PVs of that storage class (by default, 9 PVs are required). The available `StorageClass` can be checked with the following command:

  ```shell
  > kubectl get storageclass
  ```

  The output is similar to the following:

  ```
  NAME                 PROVISIONER                    AGE
  standard (default)   kubernetes.io/gce-pd           1d
  gold                 kubernetes.io/gce-pd           1d
  local-storage        kubernetes.io/no-provisioner   189d
  ```

  The default `storageClassName` in `tidb-cluster.yaml` and `tidb-monitor.yaml` is set to `local-storage`; update it to your available storage class, as shown in the sketch after this list.
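For reference, here is a minimal sketch of where the storage class is set in `tidb-cluster.yaml`, assuming the usual `TidbCluster` spec layout; the field paths and sizes are illustrative, so check them against your copy of the manifest:

```yaml
apiVersion: pingcap.com/v1alpha1
kind: TidbCluster
metadata:
  name: demo
spec:
  pd:
    # Illustrative: change to a StorageClass available in your cluster.
    storageClassName: local-storage
    requests:
      storage: "1Gi"
  tikv:
    storageClassName: local-storage
    requests:
      storage: "1Gi"
  tiflash:
    # TiFlash declares its volumes via storageClaims.
    storageClaims:
      - storageClassName: local-storage
        resources:
          requests:
            storage: "1Gi"
```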
## Install

The following commands are assumed to be executed in this directory.
Install the cluster:

```shell
> kubectl create ns <namespace>
> kubectl -n <namespace> apply -f ./
```
Wait for the cluster Pods to be ready:

```shell
> watch kubectl -n <namespace> get pod
```
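All Pods should eventually reach the `Running` state. The listing below is illustrative only; Pod names and replica counts depend on `tidb-cluster.yaml` and `tidb-monitor.yaml`:

```
NAME                              READY   STATUS    RESTARTS   AGE
demo-discovery-5fc4b7f5f5-xxxxx   1/1     Running   0          5m
demo-monitor-6b8c9d7c4d-xxxxx     3/3     Running   0          5m
demo-pd-0                         1/1     Running   0          5m
demo-tidb-0                       2/2     Running   0          2m
demo-tiflash-0                    1/1     Running   0          4m
demo-tikv-0                       1/1     Running   0          4m
```

You can also check the overall cluster state with `kubectl -n <namespace> get tidbcluster`.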
## Explore

Explore the TiDB SQL interface:

```shell
> kubectl -n <namespace> port-forward svc/demo-tidb 4000:4000 &>/tmp/pf-tidb.log &
> mysql -h 127.0.0.1 -P 4000 -u root --comments
```
Refer to the documentation to try TiFlash.
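As a quick smoke test (a hypothetical example, not part of this example's manifests), you can add a TiFlash replica to a throwaway table from the SQL client and watch the replication progress:

```sql
-- Create a throwaway table in the default `test` database.
CREATE TABLE test.t (id INT PRIMARY KEY, v VARCHAR(16));

-- Ask TiDB to keep one TiFlash replica of the table.
ALTER TABLE test.t SET TIFLASH REPLICA 1;

-- PROGRESS reaches 1 and AVAILABLE becomes 1 once the replica is ready.
SELECT TABLE_NAME, REPLICA_COUNT, AVAILABLE, PROGRESS
FROM information_schema.tiflash_replica
WHERE TABLE_SCHEMA = 'test';
```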
Explore the monitoring dashboards:

```shell
> kubectl -n <namespace> port-forward svc/demo-grafana 3000:3000 &>/tmp/pf-grafana.log &
```

Browse http://localhost:3000.
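The monitor typically deploys Prometheus alongside Grafana. Assuming its service follows the same `demo-<component>` naming as the services above (verify with `kubectl -n <namespace> get svc`), you can port-forward it too:

```shell
# Assumes a demo-prometheus service on port 9090; confirm the name with `kubectl get svc` first.
> kubectl -n <namespace> port-forward svc/demo-prometheus 9090:9090 &>/tmp/pf-prometheus.log &
```

Then browse http://localhost:9090 to query the raw metrics.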
## Destroy

```shell
> kubectl -n <namespace> delete -f ./
```
The above command does not delete the PVCs used by the TiDB cluster, so the PVs will not be released either. You can delete the PVCs and release the PVs with the following command:
```shell
> kubectl -n <namespace> delete pvc -l app.kubernetes.io/instance=demo,app.kubernetes.io/managed-by=tidb-operator
```
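To confirm the PVCs are gone (whether the bound PVs are released or deleted depends on the storage class's reclaim policy), list them with the same label selector:

```shell
> kubectl -n <namespace> get pvc -l app.kubernetes.io/instance=demo
```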