Update config and deploy, fix broken links and typos (#207)

* Update rocksdb config and image link

Signed-off-by: lilin90 <lilin@pingcap.com>
Lilian Lee 2020-10-30 16:50:50 +08:00 committed by GitHub
parent ea565c4725
commit c35355a334
17 changed files with 811 additions and 315 deletions

View File

@ -7,7 +7,7 @@ menu:
weight: 6
---
TiKV uses [RocksDB](https://rocksdb.org/) as its underlying storage engine for storing both [Raft logs](architecture#raft) and KV (key-value) pairs.
TiKV uses [RocksDB](https://rocksdb.org/) as its underlying storage engine for storing both [Raft logs](../../../concepts/architecture#raft) and KV (key-value) pairs.
{{< info >}}
RocksDB was chosen for TiKV because it provides a highly customizable persistent key-value store that can be tuned to run in a variety of production environments, including pure memory, Flash, hard disks, or HDFS, it supports various compression algorithms, and it provides solid tools for production support and debugging.
@ -16,7 +16,7 @@ RocksDB was chosen for TiKV because it provides a highly customizable persistent
TiKV creates two RocksDB instances on each Node:
* A `rocksdb` instance that stores most TiKV data
* A `raftdb` that stores [Raft logs](architecture#raft) and has a single column family called `raftdb.defaultcf`
* A `raftdb` that stores [Raft logs](../../../concepts/architecture#raft) and has a single column family called `raftdb.defaultcf`
The `rocksdb` instance has three column families:
@ -32,6 +32,152 @@ RocksDB can be configured on a per-column-family basis. Here's an example:
max-background-jobs = 8
```
### RocksDB configuration options
## RocksDB configuration options
{{< config "rocksdb" >}}
### `max-background-jobs`
+ The number of background threads in RocksDB
+ Default value: `8`
+ Minimum value: `1`
### `max-sub-compactions`
+ The number of sub-compaction operations performed concurrently in RocksDB
+ Default value: `3`
+ Minimum value: `1`
### `max-open-files`
+ The total number of files that RocksDB can open
+ Default value: `40960`
+ Minimum value: `-1`
### `max-manifest-file-size`
+ The maximum size of a RocksDB Manifest file
+ Default value: `128MB`
+ Minimum value: `0`
+ Unit: B|KB|MB|GB
### `create-if-missing`
+ Determines whether to automatically create the database if it is missing
+ Default value: `true`
### `wal-recovery-mode`
+ WAL recovery mode
+ Optional values: `0` (`TolerateCorruptedTailRecords`), `1` (`AbsoluteConsistency`), `2` (`PointInTimeRecovery`), `3` (`SkipAnyCorruptedRecords`)
+ Default value: `2`
+ Minimum value: `0`
+ Maximum value: `3`
### `wal-dir`
+ The directory in which WAL files are stored
+ Default value: `/tmp/tikv/store`
### `wal-ttl-seconds`
+ The time to live (TTL) of archived WAL files. When this value is exceeded, the system deletes these files.
+ Default value: `0`
+ Minimum value: `0`
+ Unit: second
### `wal-size-limit`
+ The size limit of the archived WAL files. When the value is exceeded, the system deletes these files.
+ Default value: `0`
+ Minimum value: `0`
+ Unit: B|KB|MB|GB
### `enable-statistics`
+ Determines whether to enable RocksDB statistics collection
+ Default value: `false`
### `stats-dump-period`
+ The interval at which RocksDB statistics are dumped to the log
+ Default value: `10m`
### `compaction-readahead-size`
+ The size of `readahead` when compaction is being performed
+ Default value: `0`
+ Minimum value: `0`
+ Unit: B|KB|MB|GB
### `writable-file-max-buffer-size`
+ The maximum buffer size used in `WritableFileWriter`
+ Default value: `1MB`
+ Minimum value: `0`
+ Unit: B|KB|MB|GB
### `use-direct-io-for-flush-and-compaction`
+ Determines whether to use `O_DIRECT` for both reads and writes in background flush and compactions
+ Default value: `false`
### `rate-bytes-per-sec`
+ The maximum rate permitted by Rate Limiter
+ Default value: `0`
+ Minimum value: `0`
+ Unit: Bytes
### `rate-limiter-mode`
+ Rate Limiter mode
+ Optional values: `1` (`ReadOnly`), `2` (`WriteOnly`), `3` (`AllIo`)
+ Default value: `2`
+ Minimum value: `1`
+ Maximum value: `3`
### `auto-tuned`
+ Determines whether to automatically optimize the configuration of the Rate Limiter
+ Default value: `false`
### `enable-pipelined-write`
+ Enables or disables Pipelined Write
+ Default value: `true`
### `bytes-per-sync`
+ The rate at which the OS incrementally synchronizes files to disk while these files are being written asynchronously
+ Default value: `1MB`
+ Minimum value: `0`
+ Unit: B|KB|MB|GB
### `wal-bytes-per-sync`
+ The rate at which the OS incrementally synchronizes WAL files to disk while the WAL files are being written
+ Default value: `512KB`
+ Minimum value: `0`
+ Unit: B|KB|MB|GB
### `info-log-max-size`
+ The maximum size of the Info log
+ Default value: `1GB`
+ Minimum value: `0`
+ Unit: B|KB|MB|GB
### `info-log-roll-time`
+ The time interval at which Info logs are truncated. If the value is `0`, logs are not truncated.
+ Default value: `0`
### `info-log-keep-log-file-num`
+ The maximum number of kept log files
+ Default value: `10`
+ Minimum value: `0`
### `info-log-dir`
+ The directory in which logs are stored
+ Default value: ""
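To tie the list above together, the following is a minimal sketch of how a few of these options appear in the `[rocksdb]` section of the TiKV configuration file. The values simply restate the defaults documented above and are not a tuning recommendation:
```toml
# Minimal sketch of a [rocksdb] section; values mirror the defaults listed above.
[rocksdb]
max-background-jobs = 8
max-sub-compactions = 3
max-open-files = 40960
max-manifest-file-size = "128MB"
create-if-missing = true
wal-recovery-mode = 2
bytes-per-sync = "1MB"
wal-bytes-per-sync = "512KB"
```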

View File

@ -1,6 +1,6 @@
---
title: Deploy
description: Prerequisites for deploying TiKV
description: TiKV deployment prerequisites and methods
menu:
"3.0":
parent: Tasks
@ -8,96 +8,10 @@ menu:
name: Deploy
---
This section introduces the prerequisites for deploying TiKV and how to deploy a TiKV cluster.
Typical deployments of TiKV include a number of components:
* 3+ TiKV nodes
* 3+ Placement Driver (PD) nodes
* 1 Monitoring node
* 1 or more client application or query layer (like [TiDB](https://github.com/pingcap/tidb))
{{< info >}}
TiKV is deployed alongside a [Placement Driver](https://github.com/pingcap/pd/) (PD) cluster. PD is the cluster manager of TiKV, which periodically checks replication constraints to balance load and data automatically.
{{< /info >}}
Your **first steps** into TiKV require only the following:
* A modest machine that fulfills the [system requirements](#system-requirements).
* A running [Docker](https://docker.com) service.
After you set up the environment, follow through the [Try](../../try) guide to get a test setup of TiKV running on your machine.
**Production** usage is typically done via automation requiring:
* A control machine (it can be one of your target servers) with [Ansible](https://www.ansible.com/) installed.
* Several (6+) machines fulfilling the [system requirements](#system-requirements) and at least up to [production specifications](#production-specifications).
* The ability to configure your infrastructure to allow the ports from [network requirements](#network-requirements).
If you have your production environment ready, follow through the [Ansible deployment](../ansible) guide. You may optionally choose unsupported manual [Docker deployment](../docker) or [binary deployment](../binary) strategies.
Finally, if you want to **build your own binary** TiKV you should consult the [README](https://github.com/tikv/tikv/blob/master/README.md) of the repository.
## System requirements
The **minimum** specifications for testing or developing TiKV or PD are:
* 2+ core
* 8+ GB RAM
* An SSD
TiKV hosts must support the x86-64 architecture and the SSE 4.2 instruction set.
TiKV works well in VMWare, KVM, and Xen virtual machines.
## Production Specifications
The **suggested PD** specifications for production are:
* 3+ nodes
* 4+ cores
* 8+ GB RAM, with no swap space.
* 200+ GB Optane, NVMe, or SSD drive
* 10 Gigabit ethernet (2x preferred)
* A Linux Operating System (PD is most widely tested on CentOS 7).
The **suggested TiKV** specifications for production are:
* 3+ nodes
* 16+ cores
* 32+ GB RAM, with no swap space.
* 200+ GB Optane, NVMe, or SSD drive (Under 1.5 TB capacity is ideal in our tests)
* 10 Gigabit ethernet (2x preferred)
* A Linux Operating System (TiKV is most widely tested on CentOS 7).
## Network requirements
TiKV deployments require **total connectivity of all services**. Each TiKV, PD, and client must be able to reach each all other and advertise the addresses of all other services to new services. This connectivity allows TiKV and PD to replicate and balance data resiliently across the entire deployment.
If the hosts are not already able to reach each other, it is possible to accomplish this through a Virtual Local Area Network (VLAN). Speak to your system administrator to explore your options.
TiKV requires the following network port configuration to run. Based on the TiKV deployment in actual environments, the administrator can open relevant ports in the network side and host side.
| Component | Default Port | Protocol | Description |
| :--:| :--: | :--: | :-- |
| TiKV | 20160 | gRPC | Client (such as Query Layers) port. |
| TiKV | 20180 | Text | Status port, Prometheus metrics at `/metrics`. |
| PD | 2379 | gRPC | The client port, for communication with clients. |
| PD | 2380 | gRPC | The server port, for communication with TiKV. |
{{< info >}}
If you are deploying tools alongside TiKV you may need to open or configure other ports. For example, port 3000 for the Grafana service.
{{< /info >}}
You can ensure your confguration is correct by creating echo servers on the ports/IPs by using `ncat` (from the `nmap` package):
```bash
ncat -l $PORT -k -c 'xargs -n1 echo'
```
Then from the other machines, verify that the echo server is reachable with `curl $IP:$PORT`.
## Optional: Configure Monitoring
TiKV can work with Prometheus and Grafana to provide a rich visual monitoring dashboard. This comes preconfigured if you use the [Ansible](../ansible) or [Docker Compose](../docker-compose) deployment methods.
We strongly recommend using an up-to-date version of Mozilla Firefox or Google Chrome when accessing Grafana.
- [Prerequisites](../prerequisites)
- [Deploy TiKV using Ansible](../ansible)
- [Deploy TiKV using Docker](../docker)
- [Deploy TiKV using binary files](../binary)
- [Deploy TiKV using Docker Compose/Swarm](../docker-compose)

View File

@ -0,0 +1,101 @@
---
title: Prerequisites
description: Prerequisites for deploying TiKV
menu:
"3.0":
parent: Deploy
weight: 1
---
Typical deployments of TiKV include a number of components:
* 3+ TiKV nodes
* 3+ Placement Driver (PD) nodes
* 1 Monitoring node
* 1 or more client applications or query layers (like [TiDB](https://github.com/pingcap/tidb))
{{< info >}}
TiKV is deployed alongside a [Placement Driver](https://github.com/pingcap/pd/) (PD) cluster. PD is the cluster manager of TiKV, which periodically checks replication constraints to balance load and data automatically.
{{< /info >}}
Your **first steps** into TiKV require only the following:
* A modest machine that fulfills the [system requirements](#system-requirements).
* A running [Docker](https://docker.com) service.
After you set up the environment, follow through the [Try](../../try) guide to get a test setup of TiKV running on your machine.
**Production** usage is typically done via automation requiring:
* A control machine (it can be one of your target servers) with [Ansible](https://www.ansible.com/) installed.
* Several (6+) machines fulfilling the [system requirements](#system-requirements), and ideally the [production specifications](#production-specifications).
* The ability to configure your infrastructure to allow the ports from [network requirements](#network-requirements).
If you have your production environment ready, follow through the [Ansible deployment](../ansible) guide. You may optionally choose unsupported manual [Docker deployment](../docker) or [binary deployment](../binary) strategies.
Finally, if you want to **build your own binary** TiKV you should consult the [README](https://github.com/tikv/tikv/blob/master/README.md) of the repository.
## System requirements
The **minimum** specifications for testing or developing TiKV or PD are:
* 2+ cores
* 8+ GB RAM
* An SSD
TiKV hosts must support the x86-64 architecture and the SSE 4.2 instruction set.
TiKV works well in VMware, KVM, and Xen virtual machines.
## Production Specifications
The **suggested PD** specifications for production are:
* 3+ nodes
* 4+ cores
* 8+ GB RAM, with no swap space.
* 200+ GB Optane, NVMe, or SSD drive
* 10 Gigabit ethernet (2x preferred)
* A Linux Operating System (PD is most widely tested on CentOS 7).
The **suggested TiKV** specifications for production are:
* 3+ nodes
* 16+ cores
* 32+ GB RAM, with no swap space.
* 200+ GB Optane, NVMe, or SSD drive (Under 1.5 TB capacity is ideal in our tests)
* 10 Gigabit ethernet (2x preferred)
* A Linux Operating System (TiKV is most widely tested on CentOS 7).
## Network requirements
TiKV deployments require **total connectivity of all services**. Each TiKV, PD, and client must be able to reach all of the others and advertise the addresses of all other services to new services. This connectivity allows TiKV and PD to replicate and balance data resiliently across the entire deployment.
If the hosts are not already able to reach each other, it is possible to accomplish this through a Virtual Local Area Network (VLAN). Speak to your system administrator to explore your options.
TiKV requires the following network port configuration to run. Depending on how TiKV is deployed in your environment, the administrator can open the relevant ports on the network side and on the host side.
| Component | Default Port | Protocol | Description |
| :--:| :--: | :--: | :-- |
| TiKV | 20160 | gRPC | Client (such as Query Layers) port. |
| TiKV | 20180 | Text | Status port, Prometheus metrics at `/metrics`. |
| PD | 2379 | gRPC | The client port, for communication with clients. |
| PD | 2380 | gRPC | The server port, for communication with TiKV. |
{{< info >}}
If you are deploying tools alongside TiKV you may need to open or configure other ports. For example, port 3000 for the Grafana service.
{{< /info >}}
You can ensure your configuration is correct by creating echo servers on the ports/IPs by using `ncat` (from the `nmap` package):
```bash
ncat -l $PORT -k -c 'xargs -n1 echo'
```
Then from the other machines, verify that the echo server is reachable with `curl $IP:$PORT`.
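With more than a handful of hosts, looping the same check is less error-prone. A small sketch (the host list and port are placeholders for your own values):
```bash
# Run from each machine; a reachable echo server prints the request back.
PORT=20160
for ip in 10.0.1.1 10.0.1.2 10.0.1.3; do
  echo "--- checking $ip:$PORT"
  curl --max-time 2 "$ip:$PORT"
done
```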
## Optional: Configure Monitoring
TiKV can work with Prometheus and Grafana to provide a rich visual monitoring dashboard. This comes preconfigured if you use the [Ansible](../ansible) or [Docker Compose](../docker-compose) deployment methods.
We strongly recommend using an up-to-date version of Mozilla Firefox or Google Chrome when accessing Grafana.

View File

@ -17,12 +17,17 @@ In order to get a functioning TiKV service you will need to start a TiKV service
Communication between TiKV, PD, and any services which use TiKV is done via gRPC. We provide clients for [several languages](../../reference/clients/introduction/), and this guide will briefly show you how to use the Rust client.
Using Docker, you'll create pair of persistent services `tikv` and `pd` and learn to manage them easily. Then you'll write a simple Rust client application and run it from your local host. Finally, you'll learn how to quicky teardown and bring up the services, and review some basic limitations of this configuration.
Using Docker, you'll create a pair of persistent services, `tikv` and `pd`, and learn to manage them easily. Then you'll write a simple Rust client application and run it from your local host. Finally, you'll learn how to quickly tear down and bring up the services, and review some basic limitations of this configuration.
![Architecture](../../../../img/docs/getting-started-docker.svg)
{{< figure
src="/img/docs/getting-started-docker.svg"
caption="Docker Stack"
alt="Docker Stack diagram"
width="70"
number="1" >}}
{{< warning >}}
In a production deployment there would be **at least** three TiKV services and three PD services spread among 6 machines. Most deployments also include kernel tuning, sysctl tuning, robust systemd services, firewalls, monitoring with prometheus, grafana dashboards, log collection, and more. Even still, to be sure of your resilience and security, consider consulting our [maintainers](https://github.com/tikv/tikv/blob/master/MAINTAINERS.md).
In a production deployment there would be **at least** three TiKV services and three PD services spread among 6 machines. Most deployments also include kernel tuning, sysctl tuning, robust systemd services, firewalls, monitoring with Prometheus, Grafana dashboards, log collection, and more. Even still, to be sure of your resilience and security, consider consulting our [maintainers](https://github.com/tikv/tikv/blob/master/MAINTAINERS.md).
If you are interested in deploying for production we suggest investigating the [deploy](../../deploy/introduction) guides.
{{< /warning >}}
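For orientation only, the kind of commands this guide builds toward looks roughly like the sketch below. It is not the guide's exact procedure; it assumes the `pingcap/pd` and `pingcap/tikv` images and a user-defined Docker network so the containers can resolve each other by name:
```bash
# Rough sketch only; follow the full steps in this guide for a working setup.
docker network create tikv-net

docker run -d --name pd --network tikv-net pingcap/pd:latest \
  --name=pd \
  --client-urls=http://0.0.0.0:2379 \
  --advertise-client-urls=http://pd:2379 \
  --peer-urls=http://0.0.0.0:2380 \
  --advertise-peer-urls=http://pd:2380

docker run -d --name tikv --network tikv-net pingcap/tikv:latest \
  --addr=0.0.0.0:20160 \
  --advertise-addr=tikv:20160 \
  --pd-endpoints=pd:2379
```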

View File

@ -7,7 +7,7 @@ menu:
weight: 6
---
TiKV uses [RocksDB](https://rocksdb.org/) as its underlying storage engine for storing both [Raft logs](architecture#raft) and KV (key-value) pairs.
TiKV uses [RocksDB](https://rocksdb.org/) as its underlying storage engine for storing both [Raft logs](../../../concepts/architecture#raft) and KV (key-value) pairs.
{{< info >}}
RocksDB was chosen for TiKV because it provides a highly customizable persistent key-value store that can be tuned to run in a variety of production environments, including pure memory, Flash, hard disks, or HDFS, it supports various compression algorithms, and it provides solid tools for production support and debugging.
@ -16,7 +16,7 @@ RocksDB was chosen for TiKV because it provides a highly customizable persistent
TiKV creates two RocksDB instances on each Node:
* A `rocksdb` instance that stores most TiKV data
* A `raftdb` that stores [Raft logs](architecture#raft) and has a single column family called `raftdb.defaultcf`
* A `raftdb` that stores [Raft logs](../../../concepts/architecture#raft) and has a single column family called `raftdb.defaultcf`
The `rocksdb` instance has three column families:
@ -32,6 +32,152 @@ RocksDB can be configured on a per-column-family basis. Here's an example:
max-background-jobs = 8
```
### RocksDB configuration options
## RocksDB configuration options
{{< config "rocksdb" >}}
### `max-background-jobs`
+ The number of background threads in RocksDB
+ Default value: `8`
+ Minimum value: `1`
### `max-sub-compactions`
+ The number of sub-compaction operations performed concurrently in RocksDB
+ Default value: `3`
+ Minimum value: `1`
### `max-open-files`
+ The total number of files that RocksDB can open
+ Default value: `40960`
+ Minimum value: `-1`
### `max-manifest-file-size`
+ The maximum size of a RocksDB Manifest file
+ Default value: `128MB`
+ Minimum value: `0`
+ Unit: B|KB|MB|GB
### `create-if-missing`
+ Determines whether to automatically create the database if it is missing
+ Default value: `true`
### `wal-recovery-mode`
+ WAL recovery mode
+ Optional values: `0` (`TolerateCorruptedTailRecords`), `1` (`AbsoluteConsistency`), `2` (`PointInTimeRecovery`), `3` (`SkipAnyCorruptedRecords`)
+ Default value: `2`
+ Minimum value: `0`
+ Maximum value: `3`
### `wal-dir`
+ The directory in which WAL files are stored
+ Default value: `/tmp/tikv/store`
### `wal-ttl-seconds`
+ The time to live (TTL) of archived WAL files. When this value is exceeded, the system deletes these files.
+ Default value: `0`
+ Minimum value: `0`
+ Unit: second
### `wal-size-limit`
+ The size limit of the archived WAL files. When the value is exceeded, the system deletes these files.
+ Default value: `0`
+ Minimum value: `0`
+ Unit: B|KB|MB|GB
### `enable-statistics`
+ Determines whether to enable RocksDB statistics collection
+ Default value: `false`
### `stats-dump-period`
+ The interval at which RocksDB statistics are dumped to the log
+ Default value: `10m`
### `compaction-readahead-size`
+ The size of `readahead` when compaction is being performed
+ Default value: `0`
+ Minimum value: `0`
+ Unit: B|KB|MB|GB
### `writable-file-max-buffer-size`
+ The maximum buffer size used in `WritableFileWriter`
+ Default value: `1MB`
+ Minimum value: `0`
+ Unit: B|KB|MB|GB
### `use-direct-io-for-flush-and-compaction`
+ Determines whether to use `O_DIRECT` for both reads and writes in background flush and compactions
+ Default value: `false`
### `rate-bytes-per-sec`
+ The maximum rate permitted by Rate Limiter
+ Default value: `0`
+ Minimum value: `0`
+ Unit: Bytes
### `rate-limiter-mode`
+ Rate Limiter mode
+ Optional values: `1` (`ReadOnly`), `2` (`WriteOnly`), `3` (`AllIo`)
+ Default value: `2`
+ Minimum value: `1`
+ Maximum value: `3`
### `auto-tuned`
+ Determines whether to automatically optimize the configuration of the Rate Limiter
+ Default value: `false`
### `enable-pipelined-write`
+ Enables or disables Pipelined Write
+ Default value: `true`
### `bytes-per-sync`
+ The rate at which the OS incrementally synchronizes files to disk while these files are being written asynchronously
+ Default value: `1MB`
+ Minimum value: `0`
+ Unit: B|KB|MB|GB
### `wal-bytes-per-sync`
+ The rate at which the OS incrementally synchronizes WAL files to disk while the WAL files are being written
+ Default value: `512KB`
+ Minimum value: `0`
+ Unit: B|KB|MB|GB
### `info-log-max-size`
+ The maximum size of the Info log
+ Default value: `1GB`
+ Minimum value: `0`
+ Unit: B|KB|MB|GB
### `info-log-roll-time`
+ The time interval at which Info logs are truncated. If the value is `0`, logs are not truncated.
+ Default value: `0`
### `info-log-keep-log-file-num`
+ The maximum number of kept log files
+ Default value: `10`
+ Minimum value: `0`
### `info-log-dir`
+ The directory in which logs are stored
+ Default value: ""
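As a worked example for the WAL-related options above, a hedged sketch of a `[rocksdb]` section that sets them together (the directory path and limits simply restate the documented defaults):
```toml
# Illustrative WAL settings; values restate the documented defaults.
[rocksdb]
wal-recovery-mode = 2          # PointInTimeRecovery
wal-dir = "/tmp/tikv/store"    # directory in which WAL files are stored
wal-ttl-seconds = 0
wal-size-limit = "0KB"
wal-bytes-per-sync = "512KB"
```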

View File

@ -1,6 +1,6 @@
---
title: Deploy
description: Prerequisites for deploying TiKV
description: TiKV deployment prerequisites and methods
menu:
"4.0":
parent: Tasks
@ -8,96 +8,10 @@ menu:
name: Deploy
---
This section introduces the prerequisites for deploying TiKV and how to deploy a TiKV cluster.
Typical deployments of TiKV include a number of components:
* 3+ TiKV nodes
* 3+ Placement Driver (PD) nodes
* 1 Monitoring node
* 1 or more client application or query layer (like [TiDB](https://github.com/pingcap/tidb))
{{< info >}}
TiKV is deployed alongside a [Placement Driver](https://github.com/pingcap/pd/) (PD) cluster. PD is the cluster manager of TiKV, which periodically checks replication constraints to balance load and data automatically.
{{< /info >}}
Your **first steps** into TiKV require only the following:
* A modest machine that fulfills the [system requirements](#system-requirements).
* A running [Docker](https://docker.com) service.
After you set up the environment, follow through the [Try](../try) guide to get a test setup of TiKV running on your machine.
**Production** usage is typically done via automation requiring:
* A control machine (it can be one of your target servers) with [Ansible](https://www.ansible.com/) installed.
* Several (6+) machines fulfilling the [system requirements](#system-requirements) and at least up to [production specifications](#production-specifications).
* The ability to configure your infrastructure to allow the ports from [network requirements](#network-requirements).
If you have your production environment ready, follow through the [Ansible deployment](../ansible) guide. You may optionally choose unsupported manual [Docker deployment](../docker) or [binary deployment](../binary) strategies.
Finally, if you want to **build your own binary** TiKV you should consult the [README](https://github.com/tikv/tikv/blob/master/README.md) of the repository.
## System requirements
The **minimum** specifications for testing or developing TiKV or PD are:
* 2+ core
* 8+ GB RAM
* An SSD
TiKV hosts must support the x86-64 architecture and the SSE 4.2 instruction set.
TiKV works well in VMWare, KVM, and Xen virtual machines.
## Production Specifications
The **suggested PD** specifications for production are:
* 3+ nodes
* 4+ cores
* 8+ GB RAM, with no swap space.
* 200+ GB Optane, NVMe, or SSD drive
* 10 Gigabit ethernet (2x preferred)
* A Linux Operating System, PD is most widely tested on CentOS 7.
The **suggested TiKV** specifications for production are:
* 3+ nodes
* 16+ cores
* 32+ GB RAM, with no swap space.
* 200+ GB Optane, NVMe, or SSD drive (Under 1.5 TB capacity is ideal in our tests)
* 10 Gigabit ethernet (2x preferred)
* A Linux Operating System, PD is most widely tested on CentOS 7.
## Network requirements
TiKV deployments require **total connectivity of all services**. Each TiKV, PD, and client must be able to reach each all other and advertise the addresses of all other services to new services. This connectivity allows TiKV and PD to replicate and balance data resiliently across the entire deployment.
If the hosts are not already able to reach each other, it is possible to accomplish this through a Virtual Local Area Network (VLAN). Speak to your system administrator to explore your options.
TiKV requires the following network port configuration to run. Based on the TiKV deployment in actual environments, the administrator can open relevant ports in the network side and host side.
| Component | Default Port | Protocol | Description |
| :--:| :--: | :--: | :-- |
| TiKV | 20160 | gRPC | Client (such as Query Layers) port. |
| TiKV | 20180 | Text | Status port, Prometheus metrics at `/metrics`. |
| PD | 2379 | gRPC | The client port, for communication with clients. |
| PD | 2380 | gRPC | The server port, for communication with TiKV. |
{{< info >}}
If you are deploying tools alongside TiKV you may need to open or configure other ports. For example, port 3000 for the Grafana service.
{{< /info >}}
You can ensure your confguration is correct by creating echo servers on the ports/IPs by using `ncat` (from the `nmap` package):
```bash
ncat -l $PORT -k -c 'xargs -n1 echo'
```
Then from the other machines, verify that the echo server is reachable with `curl $IP:$PORT`.
## Optional: Configure Monitoring
TiKV can work with Prometheus and Grafana to provide a rich visual monitoring dashboard. This comes preconfigured if you use the [Ansible](../ansible) or [Docker Compose](../docker-compose) deployment methods.
We strongly recommend using an up-to-date version of Mozilla Firefox or Google Chrome when accessing Grafana.
- [Prerequisites](../prerequisites)
- [Deploy TiKV using Ansible](../ansible)
- [Deploy TiKV using Docker](../docker)
- [Deploy TiKV using binary files](../binary)
- [Deploy TiKV using Docker Compose/Swarm](../docker-compose)

View File

@ -0,0 +1,101 @@
---
title: Prerequisites
description: Prerequisites for deploying TiKV
menu:
"4.0":
parent: Deploy
weight: 1
---
Typical deployments of TiKV include a number of components:
* 3+ TiKV nodes
* 3+ Placement Driver (PD) nodes
* 1 Monitoring node
* 1 or more client applications or query layers (like [TiDB](https://github.com/pingcap/tidb))
{{< info >}}
TiKV is deployed alongside a [Placement Driver](https://github.com/pingcap/pd/) (PD) cluster. PD is the cluster manager of TiKV, which periodically checks replication constraints to balance load and data automatically.
{{< /info >}}
Your **first steps** into TiKV require only the following:
* A modest machine that fulfills the [system requirements](#system-requirements).
* A running [Docker](https://docker.com) service.
After you set up the environment, follow through the [Try](../../try) guide to get a test setup of TiKV running on your machine.
**Production** usage is typically done via automation requiring:
* A control machine (it can be one of your target servers) with [Ansible](https://www.ansible.com/) installed.
* Several (6+) machines fulfilling the [system requirements](#system-requirements), and ideally the [production specifications](#production-specifications).
* The ability to configure your infrastructure to allow the ports from [network requirements](#network-requirements).
If you have your production environment ready, follow through the [Ansible deployment](../ansible) guide. You may optionally choose unsupported manual [Docker deployment](../docker) or [binary deployment](../binary) strategies.
Finally, if you want to **build your own binary** TiKV you should consult the [README](https://github.com/tikv/tikv/blob/master/README.md) of the repository.
## System requirements
The **minimum** specifications for testing or developing TiKV or PD are:
* 2+ cores
* 8+ GB RAM
* An SSD
TiKV hosts must support the x86-64 architecture and the SSE 4.2 instruction set.
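Since SSE 4.2 is required, a quick way to confirm support on a Linux host (a hedged one-liner; other operating systems expose CPU flags differently):
```bash
# Prints "sse4_2" if the CPU advertises SSE 4.2; no output means it is missing.
grep -m1 -o 'sse4_2' /proc/cpuinfo
```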
TiKV works well in VMware, KVM, and Xen virtual machines.
## Production Specifications
The **suggested PD** specifications for production are:
* 3+ nodes
* 4+ cores
* 8+ GB RAM, with no swap space.
* 200+ GB Optane, NVMe, or SSD drive
* 10 Gigabit ethernet (2x preferred)
* A Linux Operating System (PD is most widely tested on CentOS 7).
The **suggested TiKV** specifications for production are:
* 3+ nodes
* 16+ cores
* 32+ GB RAM, with no swap space.
* 200+ GB Optane, NVMe, or SSD drive (Under 1.5 TB capacity is ideal in our tests)
* 10 Gigabit ethernet (2x preferred)
* A Linux Operating System (TiKV is most widely tested on CentOS 7).
## Network requirements
TiKV deployments require **total connectivity of all services**. Each TiKV, PD, and client must be able to reach all of the others and advertise the addresses of all other services to new services. This connectivity allows TiKV and PD to replicate and balance data resiliently across the entire deployment.
If the hosts are not already able to reach each other, it is possible to accomplish this through a Virtual Local Area Network (VLAN). Speak to your system administrator to explore your options.
TiKV requires the following network port configuration to run. Depending on how TiKV is deployed in your environment, the administrator can open the relevant ports on the network side and on the host side.
| Component | Default Port | Protocol | Description |
| :--:| :--: | :--: | :-- |
| TiKV | 20160 | gRPC | Client (such as Query Layers) port. |
| TiKV | 20180 | Text | Status port, Prometheus metrics at `/metrics`. |
| PD | 2379 | gRPC | The client port, for communication with clients. |
| PD | 2380 | gRPC | The server port, for communication with TiKV. |
{{< info >}}
If you are deploying tools alongside TiKV you may need to open or configure other ports. For example, port 3000 for the Grafana service.
{{< /info >}}
You can ensure your configuration is correct by creating echo servers on the ports/IPs by using `ncat` (from the `nmap` package):
```bash
ncat -l $PORT -k -c 'xargs -n1 echo'
```
Then from the other machines, verify that the echo server is reachable with `curl $IP:$PORT`.
## Optional: Configure Monitoring
TiKV can work with Prometheus and Grafana to provide a rich visual monitoring dashboard. This comes preconfigured if you use the [Ansible](../ansible) or [Docker Compose](../docker-compose) deployment methods.
We strongly recommend using an up-to-date version of Mozilla Firefox or Google Chrome when accessing Grafana.

View File

@ -17,12 +17,17 @@ In order to get a functioning TiKV service you will need to start a TiKV service
Communication between TiKV, PD, and any services which use TiKV is done via gRPC. We provide clients for [several languages](../../reference/clients/introduction/), and this guide will briefly show you how to use the Rust client.
Using Docker, you'll create pair of persistent services `tikv` and `pd` and learn to manage them easily. Then you'll write a simple Rust client application and run it from your local host. Finally, you'll learn how to quicky teardown and bring up the services, and review some basic limitations of this configuration.
Using Docker, you'll create a pair of persistent services, `tikv` and `pd`, and learn to manage them easily. Then you'll write a simple Rust client application and run it from your local host. Finally, you'll learn how to quickly tear down and bring up the services, and review some basic limitations of this configuration.
![Architecture](../../../../img/docs/getting-started-docker.svg)
{{< figure
src="/img/docs/getting-started-docker.svg"
caption="Docker Stack"
alt="Docker Stack diagram"
width="70"
number="1" >}}
{{< warning >}}
In a production deployment there would be **at least** three TiKV services and three PD services spread among 6 machines. Most deployments also include kernel tuning, sysctl tuning, robust systemd services, firewalls, monitoring with prometheus, grafana dashboards, log collection, and more. Even still, to be sure of your resilience and security, consider consulting our [maintainers](https://github.com/tikv/tikv/blob/master/MAINTAINERS.md).
In a production deployment there would be **at least** three TiKV services and three PD services spread among 6 machines. Most deployments also include kernel tuning, sysctl tuning, robust systemd services, firewalls, monitoring with Prometheus, Grafana dashboards, log collection, and more. Even still, to be sure of your resilience and security, consider consulting our [maintainers](https://github.com/tikv/tikv/blob/master/MAINTAINERS.md).
If you are interested in deploying for production we suggest investigating the [deploy](../../deploy/introduction) guides.
{{< /warning >}}

View File

@ -47,7 +47,6 @@ The architecture of each TiKV instance is illustrated in **Figure 2** below:
width="60"
number="2" >}}
## Placement driver (PD) {#placement-driver}
The TiKV placement driver is the cluster manager of TiKV, which periodically checks replication constraints to balance load and data automatically across nodes and regions in a process called **auto-sharding**.

View File

@ -16,7 +16,7 @@ In TiKV, PD generates different scheduling operators based on the information ga
PD provides the following two methods to configure scheduling rate limits on stores:
- Configure the rate limit using **`store-balance-rate`**.
{{< info >}}
The modification only takes effect on stores added after this configuration change, and will be applied to all stores in the cluster after you restart TiKV. If you want this change to work immediately on all stores or some individual stores before the change without restarting, combine this configuration with the `pd-ctl` tool method below. See [Sample usages](#sample-usages) for more details.
{{< /info >}}
@ -31,16 +31,14 @@ The modification only takes effect on stores added after this configuration chan
» config set store-balance-rate 20
```
- Use the `pd-ctl` tool to view or modify the upper limit of the scheduling rate. The commands are:
{{< info >}}
This method is not persistent, and the configuration will revert after restarting TiKV.
{{< /info >}}
- **`stores show limit`**
Example:
```bash
@ -56,7 +54,7 @@ This method is not persistent, and the configuration will revert after restartin
# ...
}
```
- **`stores set limit <rate>`**
Example:
@ -65,7 +63,7 @@ This method is not persistent, and the configuration will revert after restartin
# Set the upper limit of scheduling rate for all stores to be 20 scheduling tasks per minute.
» stores set limit 20
```
- **`store limit <store_id> <rate>`**
Example:
@ -87,14 +85,13 @@ See [PD Control](../../reference/tools/pd-ctl/) for more detailed description of
```
- The following example modifies the rate limit for all stores to 20 and applies immediately. After restart, the configuration becomes invalid, and the rate limit for all stores specified by `store-balance-rate` takes over.
```bash
» stores set limit 20
```
- The following example modifies the rate limit for store 2 to 20 and applies immediately. After restart, the configuration becomes invalid, and the rate limit for store 2 becomes the value specified by `store-balance-rate`.
```bash
» store limit 2 20
```
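As the note above points out, `store-balance-rate` persists across restarts while the `stores set limit` and `store limit` commands take effect immediately but do not persist. A sketch of combining the two inside `pd-ctl`, so a new limit both applies now and survives a restart:
```bash
# Persist the new limit for future restarts, then apply it to all current stores.
» config set store-balance-rate 20
» stores set limit 20
```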

View File

@ -9,7 +9,6 @@ menu:
TiKV replicates a segment of data in Regions via the Raft state machine. As data writes increase, a Region Split happens when the size of the region or the number of keys has reached a threshold. Conversely, if the size of the Region and the amount of keys shrinks because of data deletion, we can use Region Merge to merge adjacent regions that are smaller. This relieves some stress on Raftstore.
## Merge process
Region Merge is initiated by the Placement Driver (PD). The steps are:

View File

@ -7,7 +7,7 @@ menu:
weight: 6
---
TiKV uses [RocksDB](https://rocksdb.org/) as its underlying storage engine for storing both [Raft logs](architecture#raft) and KV (key-value) pairs.
TiKV uses [RocksDB](https://rocksdb.org/) as its underlying storage engine for storing both [Raft logs](../../../concepts/architecture#raft) and KV (key-value) pairs.
{{< info >}}
RocksDB was chosen for TiKV because it provides a highly customizable persistent key-value store that can be tuned to run in a variety of production environments, including pure memory, Flash, hard disks, or HDFS, it supports various compression algorithms, and it provides solid tools for production support and debugging.
@ -16,7 +16,7 @@ RocksDB was chosen for TiKV because it provides a highly customizable persistent
TiKV creates two RocksDB instances on each Node:
* A `rocksdb` instance that stores most TiKV data
* A `raftdb` that stores [Raft logs](architecture#raft) and has a single column family called `raftdb.defaultcf`
* A `raftdb` that stores [Raft logs](../../../concepts/architecture#raft) and has a single column family called `raftdb.defaultcf`
The `rocksdb` instance has three column families:
@ -32,6 +32,154 @@ RocksDB can be configured on a per-column-family basis. Here's an example:
max-background-jobs = 8
```
### RocksDB configuration options
## RocksDB configuration options
{{< config "rocksdb" >}}
<!-- {{< config "rocksdb" >}} -->
### `max-background-jobs`
+ The number of background threads in RocksDB
+ Default value: `8`
+ Minimum value: `1`
### `max-sub-compactions`
+ The number of sub-compaction operations performed concurrently in RocksDB
+ Default value: `3`
+ Minimum value: `1`
### `max-open-files`
+ The total number of files that RocksDB can open
+ Default value: `40960`
+ Minimum value: `-1`
### `max-manifest-file-size`
+ The maximum size of a RocksDB Manifest file
+ Default value: `128MB`
+ Minimum value: `0`
+ Unit: B|KB|MB|GB
### `create-if-missing`
+ Determines whether to automatically create the database if it is missing
+ Default value: `true`
### `wal-recovery-mode`
+ WAL recovery mode
+ Optional values: `0` (`TolerateCorruptedTailRecords`), `1` (`AbsoluteConsistency`), `2` (`PointInTimeRecovery`), `3` (`SkipAnyCorruptedRecords`)
+ Default value: `2`
+ Minimum value: `0`
+ Maximum value: `3`
### `wal-dir`
+ The directory in which WAL files are stored
+ Default value: `/tmp/tikv/store`
### `wal-ttl-seconds`
+ The time to live (TTL) of archived WAL files. When this value is exceeded, the system deletes these files.
+ Default value: `0`
+ Minimum value: `0`
+ Unit: second
### `wal-size-limit`
+ The size limit of the archived WAL files. When the value is exceeded, the system deletes these files.
+ Default value: `0`
+ Minimum value: `0`
+ Unit: B|KB|MB|GB
### `enable-statistics`
+ Determines whether to enable RocksDB statistics collection
+ Default value: `false`
### `stats-dump-period`
+ The interval at which RocksDB statistics are dumped to the log
+ Default value: `10m`
### `compaction-readahead-size`
+ The size of `readahead` when compaction is being performed
+ Default value: `0`
+ Minimum value: `0`
+ Unit: B|KB|MB|GB
### `writable-file-max-buffer-size`
+ The maximum buffer size used in `WritableFileWriter`
+ Default value: `1MB`
+ Minimum value: `0`
+ Unit: B|KB|MB|GB
### `use-direct-io-for-flush-and-compaction`
+ Determines whether to use `O_DIRECT` for both reads and writes in background flush and compactions
+ Default value: `false`
### `rate-bytes-per-sec`
+ The maximum rate permitted by Rate Limiter
+ Default value: `0`
+ Minimum value: `0`
+ Unit: Bytes
### `rate-limiter-mode`
+ Rate Limiter mode
+ Optional values: `1` (`ReadOnly`), `2` (`WriteOnly`), `3` (`AllIo`)
+ Default value: `2`
+ Minimum value: `1`
+ Maximum value: `3`
### `auto-tuned`
+ Determines whether to automatically optimize the configuration of the Rate Limiter
+ Default value: `false`
### `enable-pipelined-write`
+ Enables or disables Pipelined Write
+ Default value: `true`
### `bytes-per-sync`
+ The rate at which the OS incrementally synchronizes files to disk while these files are being written asynchronously
+ Default value: `1MB`
+ Minimum value: `0`
+ Unit: B|KB|MB|GB
### `wal-bytes-per-sync`
+ The rate at which the OS incrementally synchronizes WAL files to disk while the WAL files are being written
+ Default value: `512KB`
+ Minimum value: `0`
+ Unit: B|KB|MB|GB
### `info-log-max-size`
+ The maximum size of the Info log
+ Default value: `1GB`
+ Minimum value: `0`
+ Unit: B|KB|MB|GB
### `info-log-roll-time`
+ The time interval at which Info logs are truncated. If the value is `0`, logs are not truncated.
+ Default value: `0`
### `info-log-keep-log-file-num`
+ The maximum number of kept log files
+ Default value: `10`
+ Minimum value: `0`
### `info-log-dir`
+ The directory in which logs are stored
+ Default value: ""
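For the Rate Limiter options above, a hedged sketch of how they combine in the `[rocksdb]` section; the 100 MB/s throughput figure is purely illustrative, not a recommendation:
```toml
# Illustrative Rate Limiter settings; the throughput value is an example only.
[rocksdb]
rate-bytes-per-sec = "100MB"   # 0 (the default) leaves the Rate Limiter disabled
rate-limiter-mode = 2          # WriteOnly
auto-tuned = false             # true lets RocksDB adjust the limiter automatically
```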

View File

@ -23,11 +23,11 @@ It's often necessary to use TLS in situations where TiKV is being deployed or ac
Before you get started, review your infrastructure. Your organization may already use something like the [Kubernetes certificates API](https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/) to issue certificates. You will need the following for your deployment:
- A **Certificate Authority** (CA) certificate
- Individual unique **certificates** and **keys** for each TiKV or PD service
- One or many **certificates** and **keys** for TiKV clients depending on your needs.
- A **Certificate Authority** (CA) certificate
- Individual unique **certificates** and **keys** for each TiKV or PD service
- One or many **certificates** and **keys** for TiKV clients depending on your needs.
If you have these, you can skip the optional section below.
If you have these, you can skip the optional section below.
If your organization doesn't yet have a public key infrastructure (PKI), you can create a simple Certificate Authority to issue certificates for the services in your deployment. The instructions below show you how to do this in a few quick steps:
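For a quick experiment before working through the steps below, a CA and a single service certificate can also be produced with stock `openssl`. This is only a sketch with hypothetical file names and subjects, not the exact procedure this guide uses:
```bash
# Hypothetical names and subjects; substitute your organization's conventions.
openssl genrsa -out ca.key 4096
openssl req -new -x509 -days 365 -key ca.key -out ca.crt -subj "/CN=tikv-ca"

openssl genrsa -out tikv-server.key 2048
openssl req -new -key tikv-server.key -out tikv-server.csr -subj "/CN=tikv-server"
openssl x509 -req -days 365 -in tikv-server.csr \
  -CA ca.crt -CAkey ca.key -CAcreateserial -out tikv-server.crt
```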

View File

@ -25,6 +25,7 @@ In order for PD to get the topology of the cluster, TiKV reports the topological
[server]
labels = "zone=<zone>,rack=<rack>,host=<host>"
```
## PD understands the TiKV topology
After getting the topology of the TiKV cluster, PD also needs to know the hierarchical relationship of the topology. You can configure it through the PD configuration or `pd-ctl`:
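On the PD side this hierarchy is declared through `location-labels`; a hedged example run inside `pd-ctl` (the label names must match those set on TiKV above):
```bash
# Declare the label hierarchy from the widest level to the narrowest.
» config set location-labels zone,rack,host
```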

View File

@ -1,6 +1,6 @@
---
title: Deploy
description: Prerequisites for deploying TiKV
description: TiKV deployment prerequisites and methods
menu:
"dev":
parent: Tasks
@ -8,96 +8,10 @@ menu:
name: Deploy
---
This section introduces the prerequisites for deploying TiKV and how to deploy a TiKV cluster.
Typical deployments of TiKV include a number of components:
* 3+ TiKV nodes
* 3+ Placement Driver (PD) nodes
* 1 Monitoring node
* 1 or more client application or query layer (like [TiDB](https://github.com/pingcap/tidb))
{{< info >}}
TiKV is deployed alongside a [Placement Driver](https://github.com/pingcap/pd/) (PD) cluster. PD is the cluster manager of TiKV, which periodically checks replication constraints to balance load and data automatically.
{{< /info >}}
Your **first steps** into TiKV require only the following:
* A modest machine that fulfills the [system requirements](#system-requirements).
* A running [Docker](https://docker.com) service.
After you set up the environment, follow through the [Try](../try) guide to get a test setup of TiKV running on your machine.
**Production** usage is typically done via automation requiring:
* A control machine (it can be one of your target servers) with [Ansible](https://www.ansible.com/) installed.
* Several (6+) machines fulfilling the [system requirements](#system-requirements) and at least up to [production specifications](#production-specifications).
* The ability to configure your infrastructure to allow the ports from [network requirements](#network-requirements).
If you have your production environment ready, follow through the [Ansible deployment](../ansible) guide. You may optionally choose unsupported manual [Docker deployment](../docker) or [binary deployment](../binary) strategies.
Finally, if you want to **build your own binary** TiKV you should consult the [README](https://github.com/tikv/tikv/blob/master/README.md) of the repository.
## System requirements
The **minimum** specifications for testing or developing TiKV or PD are:
* 2+ core
* 8+ GB RAM
* An SSD
TiKV hosts must support the x86-64 architecture and the SSE 4.2 instruction set.
TiKV works well in VMWare, KVM, and Xen virtual machines.
## Production Specifications
The **suggested PD** specifications for production are:
* 3+ nodes
* 4+ cores
* 8+ GB RAM, with no swap space.
* 200+ GB Optane, NVMe, or SSD drive
* 10 Gigabit ethernet (2x preferred)
* A Linux Operating System, PD is most widely tested on CentOS 7.
The **suggested TiKV** specifications for production are:
* 3+ nodes
* 16+ cores
* 32+ GB RAM, with no swap space.
* 200+ GB Optane, NVMe, or SSD drive (Under 1.5 TB capacity is ideal in our tests)
* 10 Gigabit ethernet (2x preferred)
* A Linux Operating System, PD is most widely tested on CentOS 7.
## Network requirements
TiKV deployments require **total connectivity of all services**. Each TiKV, PD, and client must be able to reach each all other and advertise the addresses of all other services to new services. This connectivity allows TiKV and PD to replicate and balance data resiliently across the entire deployment.
If the hosts are not already able to reach each other, it is possible to accomplish this through a Virtual Local Area Network (VLAN). Speak to your system administrator to explore your options.
TiKV requires the following network port configuration to run. Based on the TiKV deployment in actual environments, the administrator can open relevant ports in the network side and host side.
| Component | Default Port | Protocol | Description |
| :--:| :--: | :--: | :-- |
| TiKV | 20160 | gRPC | Client (such as Query Layers) port. |
| TiKV | 20180 | Text | Status port, Prometheus metrics at `/metrics`. |
| PD | 2379 | gRPC | The client port, for communication with clients. |
| PD | 2380 | gRPC | The server port, for communication with TiKV. |
{{< info >}}
If you are deploying tools alongside TiKV you may need to open or configure other ports. For example, port 3000 for the Grafana service.
{{< /info >}}
You can ensure your confguration is correct by creating echo servers on the ports/IPs by using `ncat` (from the `nmap` package):
```bash
ncat -l $PORT -k -c 'xargs -n1 echo'
```
Then from the other machines, verify that the echo server is reachable with `curl $IP:$PORT`.
## Optional: Configure Monitoring
TiKV can work with Prometheus and Grafana to provide a rich visual monitoring dashboard. This comes preconfigured if you use the [Ansible](../ansible) or [Docker Compose](../docker-compose) deployment methods.
We strongly recommend using an up-to-date version of Mozilla Firefox or Google Chrome when accessing Grafana.
- [Prerequisites](../prerequisites)
- [Deploy TiKV using Ansible](../ansible)
- [Deploy TiKV using Docker](../docker)
- [Deploy TiKV using binary files](../binary)
- [Deploy TiKV using Docker Compose/Swarm](../docker-compose)

View File

@ -0,0 +1,101 @@
---
title: Prerequisites
description: Prerequisites for deploying TiKV
menu:
"dev":
parent: Deploy
weight: 1
---
Typical deployments of TiKV include a number of components:
* 3+ TiKV nodes
* 3+ Placement Driver (PD) nodes
* 1 Monitoring node
* 1 or more client applications or query layers (like [TiDB](https://github.com/pingcap/tidb))
{{< info >}}
TiKV is deployed alongside a [Placement Driver](https://github.com/pingcap/pd/) (PD) cluster. PD is the cluster manager of TiKV, which periodically checks replication constraints to balance load and data automatically.
{{< /info >}}
Your **first steps** into TiKV require only the following:
* A modest machine that fulfills the [system requirements](#system-requirements).
* A running [Docker](https://docker.com) service.
After you set up the environment, follow through the [Try](../../try) guide to get a test setup of TiKV running on your machine.
**Production** usage is typically done via automation requiring:
* A control machine (it can be one of your target servers) with [Ansible](https://www.ansible.com/) installed.
* Several (6+) machines fulfilling the [system requirements](#system-requirements), and ideally the [production specifications](#production-specifications).
* The ability to configure your infrastructure to allow the ports from [network requirements](#network-requirements).
If you have your production environment ready, follow through the [Ansible deployment](../ansible) guide. You may optionally choose unsupported manual [Docker deployment](../docker) or [binary deployment](../binary) strategies.
Finally, if you want to **build your own binary** TiKV you should consult the [README](https://github.com/tikv/tikv/blob/master/README.md) of the repository.
## System requirements
The **minimum** specifications for testing or developing TiKV or PD are:
* 2+ cores
* 8+ GB RAM
* An SSD
TiKV hosts must support the x86-64 architecture and the SSE 4.2 instruction set.
TiKV works well in VMware, KVM, and Xen virtual machines.
## Production Specifications
The **suggested PD** specifications for production are:
* 3+ nodes
* 4+ cores
* 8+ GB RAM, with no swap space.
* 200+ GB Optane, NVMe, or SSD drive
* 10 Gigabit ethernet (2x preferred)
* A Linux Operating System (PD is most widely tested on CentOS 7).
The **suggested TiKV** specifications for production are:
* 3+ nodes
* 16+ cores
* 32+ GB RAM, with no swap space.
* 200+ GB Optane, NVMe, or SSD drive (Under 1.5 TB capacity is ideal in our tests)
* 10 Gigabit ethernet (2x preferred)
* A Linux Operating System (TiKV is most widely tested on CentOS 7).
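Both specification lists above call for running without swap. A quick way to check for active swap on a Linux host, and to disable it until the next reboot (make the change permanent in `/etc/fstab`):
```bash
swapon --show    # no output means no swap devices are active
sudo swapoff -a  # disable all swap until the next reboot
```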
## Network requirements
TiKV deployments require **total connectivity of all services**. Each TiKV, PD, and client must be able to reach all of the others and advertise the addresses of all other services to new services. This connectivity allows TiKV and PD to replicate and balance data resiliently across the entire deployment.
If the hosts are not already able to reach each other, it is possible to accomplish this through a Virtual Local Area Network (VLAN). Speak to your system administrator to explore your options.
TiKV requires the following network port configuration to run. Depending on how TiKV is deployed in your environment, the administrator can open the relevant ports on the network side and on the host side.
| Component | Default Port | Protocol | Description |
| :--:| :--: | :--: | :-- |
| TiKV | 20160 | gRPC | Client (such as Query Layers) port. |
| TiKV | 20180 | Text | Status port, Prometheus metrics at `/metrics`. |
| PD | 2379 | gRPC | The client port, for communication with clients. |
| PD | 2380 | gRPC | The server port, for communication with TiKV. |
{{< info >}}
If you are deploying tools alongside TiKV you may need to open or configure other ports. For example, port 3000 for the Grafana service.
{{< /info >}}
You can ensure your configuration is correct by creating echo servers on the ports/IPs by using `ncat` (from the `nmap` package):
```bash
ncat -l $PORT -k -c 'xargs -n1 echo'
```
Then from the other machines, verify that the echo server is reachable with `curl $IP:$PORT`.
## Optional: Configure Monitoring
TiKV can work with Prometheus and Grafana to provide a rich visual monitoring dashboard. This comes preconfigured if you use the [Ansible](../ansible) or [Docker Compose](../docker-compose) deployment methods.
We strongly recommend using an up-to-date version of Mozilla Firefox or Google Chrome when accessing Grafana.

View File

@ -17,12 +17,17 @@ In order to get a functioning TiKV service you will need to start a TiKV service
Communication between TiKV, PD, and any services which use TiKV is done via gRPC. We provide clients for [several languages](../../reference/clients/introduction/), and this guide will briefly show you how to use the Rust client.
Using Docker, you'll create pair of persistent services `tikv` and `pd` and learn to manage them easily. Then you'll write a simple Rust client application and run it from your local host. Finally, you'll learn how to quicky teardown and bring up the services, and review some basic limitations of this configuration.
Using Docker, you'll create a pair of persistent services, `tikv` and `pd`, and learn to manage them easily. Then you'll write a simple Rust client application and run it from your local host. Finally, you'll learn how to quickly tear down and bring up the services, and review some basic limitations of this configuration.
![Architecture](../../../../img/docs/getting-started-docker.svg)
{{< figure
src="/img/docs/getting-started-docker.svg"
caption="Docker Stack"
alt="Docker Stack diagram"
width="70"
number="1" >}}
{{< warning >}}
In a production deployment there would be **at least** three TiKV services and three PD services spread among 6 machines. Most deployments also include kernel tuning, sysctl tuning, robust systemd services, firewalls, monitoring with prometheus, grafana dashboards, log collection, and more. Even still, to be sure of your resilience and security, consider consulting our [maintainers](https://github.com/tikv/tikv/blob/master/MAINTAINERS.md).
In a production deployment there would be **at least** three TiKV services and three PD services spread among 6 machines. Most deployments also include kernel tuning, sysctl tuning, robust systemd services, firewalls, monitoring with Prometheus, Grafana dashboards, log collection, and more. Even still, to be sure of your resilience and security, consider consulting our [maintainers](https://github.com/tikv/tikv/blob/master/MAINTAINERS.md).
If you are interested in deploying for production we suggest investigating the [deploy](../../deploy/introduction) guides.
{{< /warning >}}