Add some consistency and refactor warning

Signed-off-by: Ana Hobden <operator@hoverbear.org>
Ana Hobden 2019-07-24 10:17:16 -07:00
parent 5e3cdff1ad
commit b2213945ff
51 changed files with 193 additions and 1021 deletions

View File

@ -4,4 +4,6 @@ description: The features of TiKV
menu:
docs:
parent: Concepts
---
---
{{< features >}}

View File

@ -1,7 +0,0 @@
---
title: Clients
description: Interact with TiKV using the raw key-value API or the transactional key-value API
menu:
docs:
parent: Reference
---

View File

@ -0,0 +1,12 @@
---
title: C Client
description: Interact with TiKV using C.
menu:
docs:
parent: Clients
weight: 4
---
This document, like our C API, is still a work in progress. In the meantime, you can track development in the [tikv/client-c](https://github.com/tikv/client-c/) repository. Most development happens on the `dev` branch.
**You should not use the C client in production until it is released.**

View File

@ -1,14 +1,13 @@
---
title: Go
description: Learn how to use the Raw Key-Value API and the Transactional Key-Value API in TiKV.
title: Go Client
description: Interact with TiKV using Go.
menu:
docs:
parent: Clients
weight: 1
---
# Try Two Types of APIs
To apply to different scenarios, TiKV provides [two types of APIs](../../overview.md#two-types-of-apis) for developers: the Raw Key-Value API and the Transactional Key-Value API. This document uses two examples to guide you through how to use the two APIs in TiKV. The usage examples are based on multiple nodes for testing. You can also quickly try the two types of APIs on a single machine.
To suit different scenarios, TiKV provides two types of APIs for developers: the Raw Key-Value API and the Transactional Key-Value API. This document uses two examples to walk you through how to use each API in TiKV. The usage examples are based on multiple nodes for testing, but you can also quickly try both types of APIs on a single machine.
> **Warning:** Do not use these two APIs together in the same cluster, otherwise they might corrupt each other's data.

View File

@ -0,0 +1,14 @@
---
title: Clients
description: Interact with TiKV using the raw key-value API or the transactional key-value API
menu:
docs:
parent: Reference
---
TiKV has clients for a number of languages:
* [Go](../go) (Stable)
* [Java](../java) (Unstable)
* [Rust](../rust) (Unstable)
* [C](../c) (Early development)

View File

@ -0,0 +1,12 @@
---
title: Java Client
description: Interact with TiKV using Java.
menu:
docs:
parent: Clients
weight: 3
---
This document, like our Java API, is still a work in progress. In the meantime, you can track development in the [tikv/client-java](https://github.com/tikv/client-java/) repository.
**You should not use the Java client in production until it is released.**

View File

@ -1,9 +1,10 @@
---
title: Rust
description: Interact with TiKV using the raw key-value API or the transactional key-value API
title: Rust Client
description: Interact with TiKV using Rust.
menu:
docs:
parent: Clients
weight: 2
---
TiKV offers two APIs that you can interact with:

View File

@ -1,7 +0,0 @@
---
title: Configuration
description: How to configure TiKV
menu:
docs:
parent: Reference
---

View File

@ -1,85 +0,0 @@
---
title: Coprocessor Options
description: Learn how to configure the coprocessor in TiKV.
menu:
docs:
parent: Configuration
---
# TiKV Coprocessor Configuration
Coprocessor is the component that handles most of the read requests from TiDB. Unlike Storage, it works at a higher level: it not only fetches KV data but also performs computations such as filtering and aggregation. This allows TiKV to be used as a distributed computing engine and lets Coprocessor reduce data serialization and network traffic. This document describes how to configure TiKV Coprocessor.
## Configuration
Most Coprocessor configurations are in the `[readpool.coprocessor]` section and some configurations are in the `[server]` section.
### `[readpool.coprocessor]`
There are three thread pools for handling high priority, normal priority, and low priority requests respectively. TiDB point selects are high priority, range scans are normal priority, and background jobs such as table analyzing are low priority.
#### `high-concurrency`
- Specifies the thread pool size for handling high priority Coprocessor requests
- Default value: number of cores * 0.8 (> 8 cores) or 8 (<= 8 cores)
- Minimum value: 1
- It must be larger than zero but should not exceed the number of CPU cores of the host machine
- On a machine with more than 8 CPU cores, its default value is NUM_CPUS * 0.8. Otherwise it is 8
- If you are running multiple TiKV instances on the same machine, make sure that the sum of this configuration item does not exceed the number of CPU cores. For example, assuming that you have a 48-core server running 3 TiKV instances, the `high-concurrency` value for each instance should be less than 16
- Do not set it too small, otherwise your read request QPS will be limited. On the other hand, a larger value is not always the optimal choice because there might be more resource contention
#### `normal-concurrency`
- Specifies the thread pool size for handling normal priority Coprocessor requests
- Default value: number of cores * 0.8 (> 8 cores) or 8 (<= 8 cores)
- Minimum value: 1
#### `low-concurrency`
- Specifies the thread pool size for handling low priority Coprocessor requests
- Default value: number of cores * 0.8 (> 8 cores) or 8 (<= 8 cores)
- Minimum value: 1
- Generally, you don't need to ensure that the sum of high + normal + low is less than the number of CPU cores, because a single Coprocessor request is handled by only one of them
#### `max-tasks-per-worker-high`
- Specifies the max number of running operations for each thread in high priority thread pool
- Default value: number of cores * 0.8 (> 8 cores) or 8 (<= 8 cores)
- Minimum value: 1
- Because throttling is performed at the thread-pool level rather than at the single-thread level, the max number of running operations for the thread pool is limited to `max-tasks-per-worker-high * high-concurrency`. If the number of running operations exceeds this limit, new operations are simply rejected without being handled, and the response contains an error header indicating that TiKV is busy
- Generally, you don't need to adjust this configuration unless you are following trustworthy advice
#### `max-tasks-per-worker-normal`
- Specifies the max running operations for each thread in the normal priority thread pool
- Default value: 2000
- Minimum value: 2000
#### `max-tasks-per-worker-low`
- Specifies the max running operations for each thread in the low priority thread pool
- Default value: 2000
- Minimum value: 2000
#### `stack-size`
- Sets the stack size for each thread in the three thread pools
- Default value: 10MB
- Minimum value: 2MB
- For large requests, you need a large stack to handle them. Some Coprocessor requests are extremely large; change this with caution
### `[server]`
#### `end-point-recursion-limit`
- Sets the max allowed recursions when decoding Coprocessor DAG expressions
- Default value: 1000
- Minimum value: 100
- A smaller value might cause large Coprocessor DAG requests to fail
#### `end-point-request-max-handle-duration`
- Sets the max allowed waiting time for each request
- Default value: 60s
- Minimum value: 60s
- When there are many backlogged Coprocessor requests, new requests might wait in the queue. If the waiting time of a request exceeds this value, the request is rejected with the TiKV busy error and is not handled
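Putting these options together, here is a minimal sketch of the relevant sections in the TiKV configuration file. The values are illustrative (assuming roughly a 16-core host), not recommendations:
```
# Illustrative values for a ~16-core host; tune to your own workload.
[readpool.coprocessor]
high-concurrency = 12          # roughly cores * 0.8
normal-concurrency = 12
low-concurrency = 12
max-tasks-per-worker-high = 2000
max-tasks-per-worker-normal = 2000
max-tasks-per-worker-low = 2000
stack-size = "10MB"

[server]
end-point-recursion-limit = 1000
end-point-request-max-handle-duration = "60s"
```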

View File

@ -1,51 +0,0 @@
---
title: gRPC (Remote Procedure Calls)
description: Learn how to configure gRPC.
menu:
docs:
parent: Configuration
---
# gRPC Configuration
TiKV uses gRPC, a remote procedure call (RPC) framework, to build a distributed transactional key-value database. gRPC is designed to be high-performance, but a poorly configured gRPC setup leads to performance regressions in TiKV. This document describes how to configure gRPC.
## grpc-compression-type
- Compression type for the gRPC channel
- Default: "none"
- Available values are "none", "deflate", and "gzip"
- To trade CPU time for network I/O, you can set it to "deflate" or "gzip". This is useful when network bandwidth is limited
## grpc-concurrency
- The size of the thread pool that drives gRPC
- Default: 4. It is suitable for a commodity computer. You can double the size if TiKV is deployed in a high-end server (32 core+ CPU)
- Higher concurrency is for higher QPS, but it consumes more CPU
## grpc-concurrent-stream
- The number of max concurrent streams/requests on a connection
- Default: 1024. It is suitable for most workloads
- Increase the number if you find that most of your requests are not time-consuming, e.g., RawKV Get
## grpc-keepalive-time
- Time to wait before sending out a ping to check whether the server is still alive. This is only for the communication between TiKV instances
- Default: 10s
## grpc-keepalive-timeout
- Time to wait before closing the connection without receiving the `keepalive` ping ACK
- Default: 3s
## grpc-raft-conn-num
- The number of connections with each TiKV server to send Raft messages
- Default: 10
## grpc-stream-initial-window-size
- The amount of data to read ahead on individual gRPC streams
- Default: 2MB
- Larger values can help throughput on high-latency connections
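Taken together, a hedged sketch of these gRPC options in the `[server]` section of the TiKV configuration file, using the defaults described above:
```
[server]
grpc-compression-type = "none"
grpc-concurrency = 4
grpc-concurrent-stream = 1024
grpc-keepalive-time = "10s"
grpc-keepalive-timeout = "3s"
grpc-raft-conn-num = 10
grpc-stream-initial-window-size = "2MB"
```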

View File

@ -1,102 +0,0 @@
---
title: PD Scheduler Options
description: Learn how to configure PD Scheduler.
menu:
docs:
parent: Configuration
---
# PD Scheduler Configuration
PD Scheduler is responsible for scheduling the storage and computing resources. PD has many kinds of schedulers to meet the requirements in different scenarios. PD Scheduler is one of the most important components in PD.
The basic workflow of PD Scheduler is as follows. First, the scheduler is triggered according to `minAdjacentSchedulerInterval` defined in `ScheduleController`. Then it tries to select the source store and the target store, creates the corresponding operators, and sends a message to TiKV to perform the operations.
## Usage description
This section describes the usage of PD Scheduler parameters.
### `max-merge-region-keys` and `max-merge-region-size`
If the Region size is smaller than `max-merge-region-size` and the number of keys in the Region is smaller than `max-merge-region-keys` at the same time, the Region will try to merge with adjacent Regions. The default value of both parameters is 0. Currently, `merge` is not enabled by default.
### `split-merge-interval`
`split-merge-interval` is the minimum interval time to allow merging after split. The default value is "1h".
### `max-snapshot-count`
If the snapshot count of one store is larger than the value of `max-snapshot-count`, it will never be used as a source or target store. The default value is 3.
### `max-pending-peer-count`
If the pending peer count of one store is larger than the value of `max-pending-peer-count`, it will never be used as a source or target store. The default value is 16.
### `max-store-down-time`
`max-store-down-time` is the maximum duration after which a store is considered to be down if it has not reported heartbeats. The default value is "30m".
### `leader-schedule-limit`
`leader-schedule-limit` is the maximum number of coexistent leaders that are under scheduling. The default value is 4.
### `region-schedule-limit`
`region-schedule-limit` is the maximum number of coexistent Regions that are under scheduling. The default value is 4.
### `replica-schedule-limit`
`replica-schedule-limit` is the maximum number of coexistent replicas that are under scheduling. The default value is 8.
### `merge-schedule-limit`
`merge-schedule-limit` is the maximum number of coexistent merges that are under scheduling. The default value is 8.
### `tolerant-size-ratio`
`tolerant-size-ratio` is the ratio of buffer size for the balance scheduler. The default value is 5.0.
### `low-space-ratio`
`low-space-ratio` is the lowest usage ratio at which a store is regarded as being low on space. When a store is low on space, its score becomes high and varies inversely with the available size.
### `high-space-ratio`
`high-space-ratio` is the highest usage ratio at which a store is regarded as having ample space. When a store has ample space, its score varies directly with the used size.
### `disable-raft-learner`
`disable-raft-learner` is the option to disable `AddNode` and use `AddLearnerNode` instead.
### `disable-remove-down-replica`
`disable-remove-down-replica` is the option to prevent the replica checker from removing replicas whose status is down.
### `disable-replace-offline-replica`
`disable-replace-offline-replica` is the option to prevent the replica checker from replacing offline replicas.
### `disable-make-up-replica`
`disable-make-up-replica` is the option to prevent the replica checker from making up replicas when the count of replicas is less than expected.
### `disable-remove-extra-replica`
`disable-remove-extra-replica` is the option to prevent the replica checker from removing extra replicas.
### `disable-location-replacement`
`disable-location-replacement` is the option to prevent the replica checker from moving the replica to a better location.
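These parameters generally live under the `[schedule]` section of the PD configuration file. A minimal sketch using the default values described above:
```
[schedule]
max-merge-region-size = 0
max-merge-region-keys = 0
split-merge-interval = "1h"
max-snapshot-count = 3
max-pending-peer-count = 16
max-store-down-time = "30m"
leader-schedule-limit = 4
region-schedule-limit = 4
replica-schedule-limit = 8
merge-schedule-limit = 8
tolerant-size-ratio = 5.0
```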
## Customization
The default schedulers include `balance-leader`, `balance-region` and `hot-region`. In addition, you can also customize the schedulers. For each scheduler, the configuration has three variables: `type`, `args` and `disable`.
Here is an example to enable the `evict-leader` scheduler in the `config.toml` file:
```
[[schedule.schedulers]]
type = "evict-leader"
args = ["1"]
disable = false
```

View File

@ -1,70 +0,0 @@
---
title: Raftstore Options
description: Learn about Raftstore configuration in TiKV.
menu:
docs:
parent: Configuration
---
# Raftstore Configurations
Raftstore is TiKV's implementation of [Multi-raft](https://tikv.org/deep-dive/scalability/multi-raft/) to manage multiple Raft peers on one node. Raftstore is composed of two major components:
- The **Raftstore** component writes Raft logs into RaftDB.
- The **Apply** component resolves Raft logs and flushes the data in the logs into the underlying storage engine.
This document introduces the following features of Raftstore and their configurations:
- [Multi-thread Raftstore](#multi-thread-raftstore)
- [Hibernate Region](#hibernate-region)
## Multi-thread Raftstore
Multi-thread support for the Raftstore and the Apply components means higher throughput and lower latency on each single node. In the multi-thread mode, each thread obtains peers from the queue in batches, so that small writes from multiple peers can be consolidated into one big write for better throughput.
![Multi-thread Raftstore Model](../../images/multi-thread-raftstore.png)
> **Note:**
>
> In the multi-thread mode, peers are obtained in batches, so pressure from hot write Regions cannot be scattered evenly to each CPU. For better load balancing, it is recommended that you use smaller batch sizes.
### Configuration items
You can specify the following items in the TiKV configuration file to configure multi-thread Raftstore:
**`raftstore.store_max_batch_size`**
Determines the maximum number of peers that a single thread can obtain in a batch. The value must be a positive integer. A smaller value provides better load balancing for CPU, but may cause more frequent writes.
**`raftstore.store_pool_size`**
Determines the number of threads to process peers in batch. The value must be a positive integer. For better performance, it is recommended that you set a value less than or equal to the number of CPU cores on your machine.
**`raftstore.apply_max_batch_size`**
Determines the maximum number of ApplyDelegates requests that a single thread can resolve in a batch. The value must be a positive integer. A smaller value provides better load balancing for CPU, but may cause more frequent writes.
**`raftstore.apply_pool_size`**
Determines the number of threads in the Apply pool. The value must be a positive integer. For better performance, it is recommended that you set a value less than or equal to the number of CPU cores on your machine.
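As a hedged sketch, using the item names exactly as listed above and purely illustrative values, the multi-thread Raftstore settings might look like this in the TiKV configuration file:
```
# Illustrative values only; item names as listed above.
[raftstore]
store_max_batch_size = 256
store_pool_size = 2
apply_max_batch_size = 256
apply_pool_size = 2
```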
## Hibernate Region
Hibernate Region is a Raftstore feature to reduce the extra overhead caused by heartbeat messages between the Raft leader and the followers for idle Regions. With this feature enabled, a Region idle for a long time is automatically set as hibernated. The heartbeat interval for the leader to maintain its lease becomes much longer, and the followers do not initiate elections simply because they cannot receive heartbeats from the leader.
> **Note:**
>
> - Hibernate Region is still an Experimental feature and is disabled by default.
> - Any requests from the client or disconnections will activate the Region from the hibernated state.
### Configuration items
You can specify the following items in the TiKV configuration file to configure Hibernate Region:
**`raftstore.hibernate-regions`**
Enables or disables Hibernate Region. Possible values are true and false. The default value is false.
**`raftstore.peer_stale_state_check_interval`**
Modifies the state check interval for hibernated Regions. The default value is 5 minutes. This value also determines the heartbeat interval between the leader and followers of the hibernated Regions.
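A minimal sketch of enabling the feature, again using the item names exactly as listed above (values illustrative):
```
# Hibernate Region is experimental; names as listed above, values illustrative.
[raftstore]
hibernate-regions = true
peer_stale_state_check_interval = "5m"
```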

View File

@ -1,383 +0,0 @@
---
title: RocksDB Options
description: Learn how to configure RocksDB options.
menu:
docs:
parent: Configuration
---
# RocksDB Option Configuration
TiKV uses RocksDB as its underlying storage engine for storing both Raft logs and KV (key-value) pairs. [RocksDB](https://github.com/facebook/rocksdb/wiki) is a highly customizable persistent key-value store that can be tuned to run in a variety of production environments, including pure memory, Flash, hard disks, or HDFS. It supports various compression algorithms and provides good tools for production support and debugging.
## Configuration
TiKV creates two RocksDB instances called `rocksdb` and `raftdb` separately.
- `rocksdb` has three column families:
- `rocksdb.defaultcf` is used to store actual KV pairs of TiKV
- `rocksdb.writecf` is used to store the commit information in the MVCC model
- `rocksdb.lockcf` is used to store the lock information in the MVCC model
- `raftdb` has only one column family called `raftdb.defaultcf`, which is used to store the Raft logs.
Each RocksDB instance and column family is configurable. The sections below explain the details of DBOptions for tuning the RocksDB instance and CFOptions for tuning the column families.
### DBOptions
#### max-background-jobs
- The maximum number of concurrent background jobs (compactions and flushes)
#### max-sub-compactions
- The maximum number of threads that will concurrently perform a compaction job by breaking the job into multiple smaller ones that run simultaneously
#### max-open-files
- The number of open files that can be used by RocksDB. You may need to increase this if your database has a large working set
- Value -1 means files opened are always kept open. You can estimate the number of files based on `target_file_size_base` and `target_file_size_multiplier` for level-based compaction
- If max-open-files = -1, RocksDB will prefetch index blocks and filter blocks into block cache at startup, so if your database has a large working set, it will take several minutes to open RocksDB
#### max-manifest-file-size
- The maximum size of RocksDB's MANIFEST file. For details, see [MANIFEST](https://github.com/facebook/rocksdb/wiki/MANIFEST)
#### create-if-missing
- If it is true, the database will be created when it is missing
#### wal-recovery-mode
RocksDB WAL(write-ahead log) recovery mode:
- `0`: TolerateCorruptedTailRecords, tolerates incomplete trailing records on all logs
- `1`: AbsoluteConsistency, tolerates no corruption in the WAL (all I/O errors are considered corruption)
- `2`: PointInTimeRecovery, recovers to point-in-time consistency
- `3`: SkipAnyCorruptedRecords, recovers after a disaster by skipping any corrupted records
#### wal-dir
- RocksDB write-ahead logs directory path. This specifies the absolute directory path for write-ahead logs
- If it is empty, the log files will be in the same directory as data
- When you set the path to the RocksDB directory in memory like in `/dev/shm`, you may want to set `wal-dir` to a directory on a persistent storage. For details, see [RocksDB documentation](https://github.com/facebook/rocksdb/wiki/How-to-persist-in-memory-RocksDB-database)
#### wal-ttl-seconds
See [wal-size-limit](#wal-size-limit)
#### wal-size-limit
`wal-ttl-seconds` and `wal-size-limit` affect how archived write-ahead logs will be deleted
- If both are set to 0, logs will be deleted immediately and will not get into the archive
- If `wal-ttl-seconds` is 0 and `wal-size-limit` is not 0, WAL files will be checked every 10 minutes and, if the total size is greater than `wal-size-limit`, WAL files will be deleted starting from the earliest file until the total size is no greater than `wal-size-limit`. All empty files will be deleted
- If `wal-ttl-seconds` is not 0 and `wal-size-limit` is 0, WAL files will be checked every `wal-ttl-seconds` / 2 seconds and those that are older than `wal-ttl-seconds` will be deleted
- If both are not 0, WAL files will be checked every 10 minutes and both the TTL and size checks will be performed, with the TTL check first
- When you set the path to the RocksDB directory in memory like in `/dev/shm`, you may want to set `wal-ttl-seconds` to a value greater than 0 (like 86400) and backup your RocksDB on a regular basis. For details, see [RocksDB documentation](https://github.com/facebook/rocksdb/wiki/How-to-persist-in-memory-RocksDB-database)
#### wal-bytes-per-sync
- Allows OS to incrementally synchronize WAL to the disk while the log is being written
#### max-total-wal-size
- Once the total size of write-ahead logs exceeds this size, RocksDB will start forcing the flush of column families whose memtables are backed up by the oldest live WAL file
- If it is set to 0, the WAL size limit will be dynamically set to [sum of all write_buffer_size * max_write_buffer_number] * 4
#### enable-statistics
- RocksDB statistics provide cumulative statistics over time. Turning statistics on will introduce about 5%-10% overhead for RocksDB, but it is worthwhile to know the internal status of RocksDB
#### stats-dump-period
- Dumps statistics periodically in information logs
#### compaction-readahead-size
- According to [RocksDB FAQ](https://github.com/facebook/rocksdb/wiki/RocksDB-FAQ): if you want to use RocksDB on multi disks or spinning disks, you should set this value to at least 2MB
#### writable-file-max-buffer-size
- The maximum buffer size that is used by `WritableFileWriter`
#### use-direct-io-for-flush-and-compaction
- Uses `O_DIRECT` for both reads and writes in background flush and compactions
#### rate-bytes-per-sec
- Limits the disk I/O of compaction and flush
- Compaction and flush can cause terrible spikes if they exceed a certain threshold. It is recommended to set this to 50% ~ 80% of the disk throughput for a more stable result. However, for a heavy write workload, limiting the compaction and flush speed can cause write stalls too
#### enable-pipelined-write
- Enables/Disables the pipelined write. For details, see [Pipelined Write](https://github.com/facebook/rocksdb/wiki/Pipelined-Write)
#### bytes-per-sync
- Allows OS to incrementally synchronize files to the disk while the files are being written asynchronously in the background
#### info-log-max-size
- Specifies the maximum size of the RocksDB log file
- If the log file is larger than `max_log_file_size`, a new log file will be created
- If `max_log_file_size` == 0, all logs will be written to one log file
#### info-log-roll-time
- Time for the RocksDB log file to roll (in seconds)
- If it is specified with non-zero value, the log file will be rolled when its active time is longer than `log_file_time_to_roll`
#### info-log-keep-log-file-num
- The maximum number of RocksDB log files to be kept
#### info-log-dir
- Specifies the RocksDB info log directory
- If it is empty, the log files will be in the same directory as data
- If it is non-empty, the log files will be in the specified directory, and the absolute path of RocksDB data directory will be used as the prefix of the log file name
### CFOptions
#### compression-per-level
- Per level compression. The compression method (if any) is used to compress a block
- no: kNoCompression
- snappy: kSnappyCompression
- zlib: kZlibCompression
- bzip2: kBZip2Compression
- lz4: kLZ4Compression
- lz4hc: kLZ4HCCompression
- zstd: kZSTD
- For details, see [Compression of RocksDB](https://github.com/facebook/rocksdb/wiki/Compression)
#### block-size
- Approximate size of user data packed per block. The block size specified here corresponds to the uncompressed data
#### bloom-filter-bits-per-key
- If you're doing point lookups, you definitely want to turn bloom filters on. A Bloom filter is used to avoid unnecessary disk reads
- Default: 10, which yields ~1% false positive rate
- Larger values will reduce false positive rate, but will increase memory usage and space amplification
#### block-based-bloom-filter
- False: one `sst` file has a corresponding bloom filter
- True: every block has a corresponding bloom filter
#### level0-file-num-compaction-trigger
- The number of files to trigger level-0 compaction
- A value less than 0 means that level-0 compaction will not be triggered by the number of files
#### level0-slowdown-writes-trigger
- Soft limit on the number of level-0 files. The write performance is slowed down at this point
#### level0-stop-writes-trigger
- The maximum number of level-0 files. The write operation is stopped at this point
#### write-buffer-size
- The amount of data to build up in memory (backed up by an unsorted log on the disk) before it is converted to a sorted on-disk file
#### max-write-buffer-number
- The maximum number of write buffers that are built up in memory
#### min-write-buffer-number-to-merge
- The minimum number of write buffers that will be merged together before writing to the storage
#### max-bytes-for-level-base
- Controls the maximum total data size for the base level (level 1).
#### target-file-size-base
- Target file size for compaction
#### max-compaction-bytes
- The maximum bytes for `compaction.max_compaction_bytes`
#### compaction-pri
There are four different algorithms to pick files to compact:
- `0`: ByCompensatedSize
- `1`: OldestLargestSeqFirst
- `2`: OldestSmallestSeqFirst
- `3`: MinOverlappingRatio
#### block-cache-size
- Caches uncompressed blocks
- A big block cache can speed up read performance. Generally, it should be set to 30%-50% of the system's total memory
#### cache-index-and-filter-blocks
- Indicates if index/filter blocks will be put to the block cache
- If it is not specified, each "table reader" object will pre-load the index/filter blocks during table initialization
#### pin-l0-filter-and-index-blocks
- Pins level0 filter and index blocks in the cache
#### read-amp-bytes-per-bit
Enables read amplification statistics
- value => memory usage (percentage of loaded blocks memory)
- 0 => disable
- 1 => 12.50 %
- 2 => 06.25 %
- 4 => 03.12 %
- 8 => 01.56 %
- 16 => 00.78 %
#### dynamic-level-bytes
- Picks the target size of each level dynamically
- This feature can reduce space amplification. It is highly recommended to set it to true. For details, see [Dynamic Level Size for Level-Based Compaction](https://rocksdb.org/blog/2015/07/23/dynamic-level.html)
## Template
This template shows the default RocksDB configuration for TiKV:
```
[rocksdb]
max-background-jobs = 8
max-sub-compactions = 1
max-open-files = 40960
max-manifest-file-size = "20MB"
create-if-missing = true
wal-recovery-mode = 2
wal-dir = "/tmp/tikv/store"
wal-ttl-seconds = 0
wal-size-limit = 0
max-total-wal-size = "4GB"
enable-statistics = true
stats-dump-period = "10m"
compaction-readahead-size = 0
writable-file-max-buffer-size = "1MB"
use-direct-io-for-flush-and-compaction = false
rate-bytes-per-sec = 0
enable-pipelined-write = true
bytes-per-sync = "0MB"
wal-bytes-per-sync = "0KB"
info-log-max-size = "1GB"
info-log-roll-time = "0"
info-log-keep-log-file-num = 10
info-log-dir = ""
# Column Family default used to store actual data of the database.
[rocksdb.defaultcf]
compression-per-level = ["no", "no", "lz4", "lz4", "lz4", "zstd", "zstd"]
block-size = "64KB"
bloom-filter-bits-per-key = 10
block-based-bloom-filter = false
level0-file-num-compaction-trigger = 4
level0-slowdown-writes-trigger = 20
level0-stop-writes-trigger = 36
write-buffer-size = "128MB"
max-write-buffer-number = 5
min-write-buffer-number-to-merge = 1
max-bytes-for-level-base = "512MB"
target-file-size-base = "8MB"
max-compaction-bytes = "2GB"
compaction-pri = 3
block-cache-size = "1GB"
cache-index-and-filter-blocks = true
pin-l0-filter-and-index-blocks = true
read-amp-bytes-per-bit = 0
dynamic-level-bytes = true
# Options for Column Family write
# Column Family write used to store commit information in MVCC model
[rocksdb.writecf]
compression-per-level = ["no", "no", "lz4", "lz4", "lz4", "zstd", "zstd"]
block-size = "64KB"
write-buffer-size = "128MB"
max-write-buffer-number = 5
min-write-buffer-number-to-merge = 1
max-bytes-for-level-base = "512MB"
target-file-size-base = "8MB"
# In normal cases it should be tuned to 10%-30% of the system's total memory.
block-cache-size = "256MB"
level0-file-num-compaction-trigger = 4
level0-slowdown-writes-trigger = 20
level0-stop-writes-trigger = 36
cache-index-and-filter-blocks = true
pin-l0-filter-and-index-blocks = true
compaction-pri = 3
read-amp-bytes-per-bit = 0
dynamic-level-bytes = true
[rocksdb.lockcf]
compression-per-level = ["no", "no", "no", "no", "no", "no", "no"]
block-size = "16KB"
write-buffer-size = "128MB"
max-write-buffer-number = 5
min-write-buffer-number-to-merge = 1
max-bytes-for-level-base = "128MB"
target-file-size-base = "8MB"
block-cache-size = "256MB"
level0-file-num-compaction-trigger = 1
level0-slowdown-writes-trigger = 20
level0-stop-writes-trigger = 36
cache-index-and-filter-blocks = true
pin-l0-filter-and-index-blocks = true
compaction-pri = 0
read-amp-bytes-per-bit = 0
dynamic-level-bytes = true
[raftdb]
max-sub-compactions = 1
max-open-files = 40960
max-manifest-file-size = "20MB"
create-if-missing = true
enable-statistics = true
stats-dump-period = "10m"
compaction-readahead-size = 0
writable-file-max-buffer-size = "1MB"
use-direct-io-for-flush-and-compaction = false
enable-pipelined-write = true
allow-concurrent-memtable-write = false
bytes-per-sync = "0MB"
wal-bytes-per-sync = "0KB"
info-log-max-size = "1GB"
info-log-roll-time = "0"
info-log-keep-log-file-num = 10
info-log-dir = ""
[raftdb.defaultcf]
compression-per-level = ["no", "no", "lz4", "lz4", "lz4", "zstd", "zstd"]
block-size = "64KB"
write-buffer-size = "128MB"
max-write-buffer-number = 5
min-write-buffer-number-to-merge = 1
max-bytes-for-level-base = "512MB"
target-file-size-base = "8MB"
# should tune to 256MB~2GB.
block-cache-size = "256MB"
level0-file-num-compaction-trigger = 4
level0-slowdown-writes-trigger = 20
level0-stop-writes-trigger = 36
cache-index-and-filter-blocks = true
pin-l0-filter-and-index-blocks = true
compaction-pri = 0
read-amp-bytes-per-bit = 0
dynamic-level-bytes = true
```

View File

@ -1,23 +0,0 @@
---
title: Security Options
description: Learn about the security configuration in TiKV.
menu:
docs:
parent: Configuration
---
# TiKV Security Configuration
TiKV has SSL/TLS integration to encrypt the data exchanged between nodes. This document describes the security configuration in the TiKV cluster.
## ca-path = "/path/to/ca.pem"
The path to the file that contains the PEM encoding of the server's CA certificates.
## cert-path = "/path/to/cert.pem"
The path to the file that contains the PEM encoding of the server's certificate chain.
## key-path = "/path/to/key.pem"
The path to the file that contains the PEM encoding of the server's private key.
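Putting the three options together, these settings live in the `[security]` section of the TiKV configuration file (the paths are placeholders):
```
[security]
ca-path = "/path/to/ca.pem"
cert-path = "/path/to/cert.pem"
key-path = "/path/to/key.pem"
```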

View File

@ -1,110 +0,0 @@
---
title: Storage Options
description: Learn how to configure TiKV Storage.
menu:
docs:
parent: Configuration
---
# TiKV Storage Configuration
In TiKV, Storage is the component responsible for handling read and write requests. Note that if you are using TiKV with TiDB, most read requests are handled by the Coprocessor component instead of Storage.
## Configuration
There are two sections related to Storage: `[readpool.storage]` and `[storage]`.
### `[readpool.storage]`
This configuration section mainly affects storage read operations. Most read requests from TiDB are not controlled by this configuration section. For configuring the read requests from TiDB, see [Coprocessor configurations](coprocessor-config.md).
There are three thread pools for handling read operations, namely read-high, read-normal, and read-low, which process high-priority, normal-priority, and low-priority read requests respectively. The priority can be specified by the corresponding fields in the gRPC request.
#### `high-concurrency`
- Specifies the thread pool size for handling high priority requests
- Default value: 4. It means at most 4 CPU cores are used
- Minimum value: 1
- It must be larger than zero but should not exceed the number of CPU cores of the host machine
- If you are running multiple TiKV instances on the same machine, make sure that the sum of this configuration item does not exceed the number of CPU cores. For example, assuming that you have a 48-core server running 3 TiKV instances, the `high-concurrency` value for each instance should be less than 16
- Do not set this configuration item too small, otherwise your read request QPS will be limited. On the other hand, a larger value is not always the optimal choice because there could be more resource contention
#### `normal-concurrency`
- Specifies the thread pool size for handling normal priority requests
- Default value: 4
- Minimum value: 1
#### `low-concurrency`
- Specifies the thread pool size for handling low priority requests
- Default value: 4
- Minimum value: 1
- Generally, you don't need to ensure that the sum of high + normal + low is less than the number of CPU cores, because a single request is handled by only one of them
#### `max-tasks-per-worker-high`
- Specifies the max number of running operations for each thread in the read-high thread pool, which handles high priority read requests. Because a throttle of the thread-pool level instead of single thread level is performed, the max number of running operations for the read-high thread pool is limited to `max-tasks-per-worker-high * high-concurrency`
- Default value: 2000
- Minimum value: 2000
- If the number of running operations exceeds this limit, new operations are simply rejected without being handled, and the response will contain an error header indicating that TiKV is busy
- Generally, you don't need to adjust this configuration unless you are following trustworthy advice
#### `max-tasks-per-worker-normal`
- Specifies the max running operations for each thread in the read-normal thread pool, which handles normal priority read requests.
- Default value: 2000
- Minimum value: 2000
#### `max-tasks-per-worker-low`
- Specifies the max running operations for each thread in the read-low thread pool, which handles low priority read requests
- Default value: 2000
- Minimum value: 2000
#### `stack-size`
- Sets the stack size for each thread in the three thread pools. For large requests, you need a large stack to handle
- Default value: 10MB
- Minimum value: 2MB
### `[storage]`
This configuration section mainly affects storage write operations, including where data is stored and the TiKV component Scheduler. Scheduler is the core component in Storage that coordinates and processes write requests. It contains a channel to coordinate requests and a thread pool to process requests.
#### `data-dir`
- Specifies the path to the data directory
- Default value: /tmp/tikv/store
- Make sure that the data directory is moved before changing this configuration
#### `scheduler-notify-capacity`
- Specifies the Scheduler channel size
- Default value: 10240
- Do not set it too small, otherwise TiKV might crash
- Do not set it too large, because it might consume more memory
- Generally, you don't need to adjust this configuration unless you are following trustworthy advice
#### `scheduler-concurrency`
- Specifies the number of slots in Scheduler's latch, which controls concurrent write requests
- Default value: 2048000
- You can set it to a larger value to reduce latch contention if there are a lot of write requests. But it will consume more memory
#### `scheduler-worker-pool-size`
- Specifies Scheduler's thread pool size. Write requests are finally handled by each worker thread of this thread pool
- Default value: 8 (>= 16 cores) or 4 (< 16 cores)
- Minimum value: 1
- This configuration must be set larger than zero but should not exceed the number of CPU cores of the host machine
- On machines with more than 16 CPU cores, the default value of this configuration is 8, otherwise 4
- If you have heavy write requests, you can set this configuration to a larger value. If you are running multiple TiKV instances on the same machine, make sure that the sum of this configuration item does not exceed the number of CPU cores
- You should not set this configuration item too small, otherwise your write request QPS will be limited. On the other hand, a larger value is not always the optimal choice because there could be more resource contention
#### `scheduler-pending-write-threshold`
- Specifies the maximum allowed byte size of pending writes
- Default value: 100MB
- If the size of pending write bytes exceeds this threshold, new requests are simply rejected with the "scheduler too busy" error and are not handled
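Taken together, a minimal sketch of both sections in the TiKV configuration file, using the defaults described above (the data directory is just an example path):
```
[readpool.storage]
high-concurrency = 4
normal-concurrency = 4
low-concurrency = 4
max-tasks-per-worker-high = 2000
max-tasks-per-worker-normal = 2000
max-tasks-per-worker-low = 2000
stack-size = "10MB"

[storage]
data-dir = "/tmp/tikv/store"
scheduler-notify-capacity = 10240
scheduler-concurrency = 2048000
scheduler-worker-pool-size = 4
scheduler-pending-write-threshold = "100MB"
```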

View File

@ -4,11 +4,10 @@ description: Details about TiKV
menu:
docs:
name: Reference
weight: 3
nav:
parent: Docs
weight: 3
---
Blerp
This section includes instructions on using TiKV clients and tools.

View File

@ -1,8 +0,0 @@
---
title: Source Code
description: Source Code
draft: true
menu:
docs:
parent: Reference
---

View File

@ -1,7 +0,0 @@
---
title: Tools
description: Tools which can be used to administrate TiKV
menu:
docs:
parent: Reference
---

View File

@ -0,0 +1,17 @@
---
title: Tools
description: Tools which can be used to administrate TiKV
menu:
docs:
parent: Reference
---
There are a number of components and tools involved in maintaining a TiKV deployment.
You can browse documentation on:
* [`tikv-server`](../tikv-server): The TiKV service which stores data and serves queries.
* [`tikv-ctl`](../tikv-ctl)
* [`pd-server`](../pd-server)
* [`pd-ctl`](../pd-ctl)
* [`pd-recover`](../pd-recover)

View File

@ -1,13 +1,12 @@
---
title: PD Control User Guide
description: Use PD Control to obtain the state information of a cluster and tune a cluster.
title: pd-ctl User Guide
description: Learn about interacting with pd-ctl
menu:
docs:
parent: Tools
weight: 4
---
# PD Control User Guide
As a command line tool of PD, PD Control obtains the state information of the cluster and tunes the cluster.
## Source code compiling

View File

@ -1,13 +1,12 @@
---
title: PD Recover User Guide
description: Use PD Recover to recover a PD cluster which cannot start or provide services normally.
title: pd-recover User Guide
description: Learn about interacting with pd-recover
menu:
docs:
parent: Tools
weight: 5
---
# PD Recover User Guide
PD Recover is a disaster recovery tool for PD, used to recover a PD cluster that cannot start or provide services normally.
## Source code compiling

View File

@ -0,0 +1,10 @@
---
title: pd-server User Guide
description: Learn about interacting with pd-server
menu:
docs:
parent: Tools
weight: 3
---
You can explore `pd-server --help` and try `pd-server <subcommand> --help` to dig into the functionality.

View File

@ -1,13 +1,12 @@
---
title: TiKV Control User Guide
description: Use TiKV Control to manage a TiKV cluster.
title: tikv-ctl User Guide
description: Learn about interacting with tikv-ctl
menu:
docs:
parent: Tools
weight: 2
---
# TiKV Control User Guide
TiKV Control (`tikv-ctl`) is a command line tool of TiKV, used to manage the cluster.
When you compile TiKV, the `tikv-ctl` command is also compiled at the same time. If the cluster is deployed using Ansible, the `tikv-ctl` binary file exists in the corresponding `tidb-ansible/resources/bin` directory. If the cluster is deployed using the binary, the `tikv-ctl` file is in the `bin` directory together with other files such as `tidb-server`, `pd-server`, `tikv-server`, etc.

View File

@ -0,0 +1,10 @@
---
title: tikv-server User Guide
description: Learn about interacting with tikv-server
menu:
docs:
parent: Tools
weight: 1
---
You can explore `tikv-server --help` and try `tikv-server <subcommand> --help` to dig into the functionality.

View File

@ -1,6 +1,7 @@
---
title: Backup
description: Backup TiKV
draft: true
menu:
docs:
parent: Tasks

View File

@ -0,0 +1,23 @@
---
title: Configure
description: Configure a wide range of TiKV facets, including RocksDB, gRPC, the Placement Driver, and more
menu:
docs:
parent: Tasks
weight: 2
---
TiKV features a large number of configuration options you can use to tweak its behavior. When getting started with TiKV, it is usually safe to start with the defaults, configuring only the `--pd` flag (`pd.endpoints` in the configuration file).
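For example, a minimal sketch of pointing TiKV at a PD cluster (the endpoint addresses below are placeholders):
```
# On the command line:
#   tikv-server --pd="192.0.2.10:2379,192.0.2.11:2379"
# Or in the TiKV configuration file:
[pd]
endpoints = ["192.0.2.10:2379", "192.0.2.11:2379"]
```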
There are several guides that you can use to inform your configuration:
* [**Security**](../security): Use TLS security and review security procedures.
* [**Topology**](../topology): Use location awareness to improve resiliency and performance.
* [**Namespace**](../namespace): Use namespacing to configure resource isolation.
* [**Limit**](../limit): Tweak rate limiting.
* [**Region Merge**](../region-merge): Tweak region merging.
* [**RocksDB**](../rocksdb): Tweak RocksDB configuration options.
* [**Titan**](../titan): Enable Titan to improve performance with large values.
You can find an exhaustive list of all options, as well as what they do, in the documented [**full configuration template**](https://github.com/tikv/tikv/blob/release-3.0/etc/config-template.toml).

View File

@ -1,13 +1,12 @@
---
title: Store Limits
title: Limit Config
description: Learn how to configure scheduling rate limit on stores
menu:
docs:
parent: Configure TiKV
parent: Configure
weight: 4
---
# Store Limit
This section describes how to configure the scheduling rate limit at the store level.
In TiKV, PD generates different scheduling operators based on the information gathered from TiKV and on scheduling strategies. The operators are then sent to TiKV to perform scheduling on Regions. You can use `*-schedule-limit` to set speed limits on different operators, but this may cause performance bottlenecks in certain scenarios because these parameters function globally on the entire cluster. Rate limiting at the store level allows you to control scheduling more flexibly, with finer granularity.

View File

@ -1,13 +1,12 @@
---
title: Namespace Configuration
title: Namespace Config
description: Learn how to configure namespace in TiKV.
menu:
docs:
parent: Configure TiKV
parent: Configure
weight: 3
---
# Namespace Configuration
Namespace is a mechanism used to meet the requirements of resource isolation. In this mechanism, TiKV supports dividing all the TiKV nodes in the cluster among multiple separate namespaces and classifying Regions into the corresponding namespace by using a custom namespace classifier.
In this case, there is actually a constraint for the scheduling policy: the namespace that a Region belongs to should match the namespace of TiKV where each replica of this Region resides. PD continuously performs the constraint check during runtime. When it finds unmatched namespaces, it will schedule the Regions to make the replica distribution conform to the namespace configuration.

View File

@ -1,13 +1,12 @@
---
title: Region Merge
title: Region Merge Config
description: Learn how to configure Region Merge in TiKV.
menu:
docs:
parent: Configure TiKV
parent: Configure
weight: 5
---
# Region Merge
TiKV replicates a segment of data in Regions via the Raft state machine. As data writes increase, a Region Split happens when the size of the Region or the number of keys it contains reaches a threshold. Conversely, if the size of the Region or the number of keys shrinks because of data deletion, we can use Region Merge to merge adjacent Regions that are smaller. This relieves some stress on Raftstore.
@ -19,12 +18,12 @@ Region Merge is initiated by the Placement Driver (PD). The steps are:
2. When the region size is less than `max-merge-region-size` or the number of keys the region includes is less than `max-merge-region-keys`, PD performs Region Merge on the smaller of the two adjacent Regions.
> **Note:**
>
> - All replicas of the merged Regions must belong to the same set of TiKVs.
> - Newly split Regions won't be merged within the period of time specified by `split-merge-interval`.
> - Region Merge won't happen within the period of time specified by `split-merge-interval` after PD starts or restarts.
>- Region Merge won't happen for two Regions that belong to different tables if `namespace-classifier = table` (default).
**Note:**
- All replicas of the merged Regions must belong to the same set of TiKVs.
- Newly split Regions won't be merged within the period of time specified by `split-merge-interval`.
- Region Merge won't happen within the period of time specified by `split-merge-interval` after PD starts or restarts.
- Region Merge won't happen for two Regions that belong to different tables if `namespace-classifier = table` (default).
## Configure Region Merge
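As a hedged sketch, assuming these parameters live under the `[schedule]` section of the PD configuration file and using illustrative thresholds, enabling Region Merge might look like:
```
[schedule]
# Merge is triggered only when both thresholds are non-zero.
max-merge-region-size = 20
max-merge-region-keys = 200000
split-merge-interval = "1h"
merge-schedule-limit = 8
```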

View File

@ -1,13 +1,12 @@
---
title: Configure TiKV
description: Configure a wide range of TiKV facets, including RocksDB, gRPC, the Placement Driver, and more
title: RocksDB Config
description: Learn how to configure RocksDB in TiKV.
menu:
docs:
parent: Tasks
parent: Configure
weight: 6
---
## RocksDB configuration {#rocksdb}
TiKV uses [RocksDB](https://rocksdb.org/) as its underlying storage engine for storing both [Raft logs](architecture#raft) and KV (key-value) pairs.
{{< info >}}

View File

@ -1,8 +1,10 @@
---
title: Secure TiKV
title: Security Config
description: Keeping your TiKV deployment secure
menu:
docs:
parent: Configure TiKV
parent: Configure
weight: 1
---
This page discusses how to secure your TiKV deployment. Learn how to:

View File

@ -1,24 +1,26 @@
---
title: Titan
title: Titan Config
description: Learn how to enable Titan in TiKV.
menu:
docs:
parent: Configure TiKV
parent: Configure
weight: 7
---
# Titan
Titan is a RocksDB plugin developed by PingCAP to provide key-value separation. The goal of Titan is to reduce the write amplification of RocksDB when using large values.
## How Titan works
![Titan Architecture](../../images/titan-architecture.png)
{{< figure
src="/img/docs/titan-architecture.png"
caption="Titan Architecture"
number="" >}}
Titan separates values from the LSM-tree during flush and compaction. While the actual value is stored in a blob file, the value in the LSM tree functions as the position index of the actual value. When a GET operation is performed, Titan obtains the blob index for the corresponding key from the LSM tree. Using the index, Titan identifies the actual value in the blob file and returns it. For more details on the design and implementation of Titan, please refer to [Titan: A RocksDB Plugin to Reduce Write Amplification](https://pingcap.com/blog/titan-storage-engine-design-and-implementation/).
> Caveat:
>
> Titan's improved write performance is at the cost of sacrificing storage space and range query performance. It's mostly recommended for scenarios of large values (>= 1KB).
{{< info >}}
**Caveat:** Titan's improved write performance comes at the cost of sacrificing storage space and range query performance. It is mostly recommended for scenarios with large values (>= 1KB).
{{< /info >}}
## How to enable Titan
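A minimal sketch, assuming Titan is toggled under `[rocksdb.titan]` in the TiKV configuration file with per-column-family tuning under `[rocksdb.defaultcf.titan]` (the threshold below is illustrative):
```
[rocksdb.titan]
# Enable the Titan key-value separation plugin.
enabled = true

[rocksdb.defaultcf.titan]
# Values at or above this size are separated into blob files (illustrative threshold).
min-blob-size = "1KB"
```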

View File

@ -1,13 +1,12 @@
---
title: Label Configuration
title: Topology Config
description: Learn how to configure labels.
menu:
docs:
parent: Deploy
parent: Configure
weight: 2
---
# Label Configuration
TiKV uses labels to report its location information, and PD schedules according to the topology of the cluster to maximize TiKV's capability of disaster recovery. This document describes how to configure labels.
## TiKV reports the topological information
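As a hedged sketch, labels can be attached to a TiKV instance in its configuration file, and PD is told which label keys describe the topology (the label keys and values below are placeholders):
```
# TiKV configuration: attach location labels to this instance.
[server]
labels = { zone = "zone-1", host = "host-1" }

# PD configuration: label keys that define the topology, from outermost to innermost.
[replication]
location-labels = ["zone", "host"]
```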

View File

@ -1,48 +0,0 @@
---
title: Deploy
description: Run TiKV using Ansible or Docker
menu:
docs:
parent: Tasks
---
This document tells you how to install TiKV using:
* [Ansible](#ansible)
* [Docker](#docker)
## Ansible
Ansible is an IT automation tool that can configure systems, deploy software, and orchestrate more advanced IT tasks such as continuous deployments or zero downtime rolling updates.
[TiDB-Ansible](https://github.com/pingcap/tidb-ansible) is a TiDB cluster deployment tool developed by PingCAP, based on Ansible playbook. TiDB-Ansible enables you to quickly deploy a new TiKV cluster which includes PD, TiKV, and the cluster monitoring modules.
{{< warning >}}
For production environments, use TiDB-Ansible to deploy your TiKV cluster. If you only want to try TiKV out and explore the features, see Install and Deploy TiKV using Docker Compose on a single machine.
{{< /warning >}}
### Prepare
Before you start, make sure you have:
1. Several target machines that meet the following requirements:
* 4 or more machines. A standard TiKV cluster contains 6 machines. You can use 4 machines for testing.
* CentOS 7.3 (64 bit) or later with Python 2.7 installed, x86_64 architecture (AMD64)
* Network between machines
**Note**: When you deploy TiKV using Ansible, use SSD disks for the data directory of TiKV and PD nodes. Otherwise, the system will not perform well.
2. A Control Machine that meets the following requirements:
* CentOS 7.3 (64 bit) or later with Python 2.7 installed
* Access to the Internet
* Git installed
**Note**: The Control Machine can be one of the target machines.
TODO...
## Docker
TODO

View File

@ -1,11 +1,12 @@
---
title: Software and Hardware Requirements
description: Learn the software and hardware requirements for deploying and running TiKV.
title: Deploy
description: Run TiKV using Ansible or Docker
menu:
docs:
parent: Tasks
weight: 1
---
# Software and Hardware Requirements
As a high-performance open source distributed key-value database, TiKV can be deployed on Intel-architecture servers as well as in major virtualization environments, and it runs well. TiKV supports most major hardware networks and Linux operating systems.
TiKV must work together with [Placement Driver](https://github.com/pingcap/pd/) (PD). PD is the cluster manager of TiKV, which periodically checks replication constraints to balance load and data automatically.

View File

@ -1,13 +1,12 @@
---
title: Install and Deploy TiKV Using Ansible
title: Ansible Deployment
description: Use TiDB-Ansible to deploy a TiKV cluster on multiple nodes.
menu:
docs:
parent: Deploy
weight: 2
---
# Install and Deploy TiKV Using Ansible
This guide describes how to install and deploy TiKV using Ansible. Ansible is an IT automation tool that can configure systems, deploy software, and orchestrate more advanced IT tasks such as continuous deployments or zero downtime rolling updates.
[TiDB-Ansible](https://github.com/pingcap/tidb-ansible) is a TiDB cluster deployment tool developed by PingCAP, based on Ansible playbook. TiDB-Ansible enables you to quickly deploy a new TiKV cluster which includes PD, TiKV, and the cluster monitoring modules.

View File

@ -1,13 +1,12 @@
---
title: Install and Deploy TiKV Using Binary Files
title: Binary Deployment
description: Use binary files to deploy a TiKV cluster on a single machine or on multiple nodes for testing.
menu:
docs:
parent: Deploy
weight: 4
---
# Install and Deploy TiKV Using Binary Files
This guide describes how to deploy a TiKV cluster using binary files.
> **Warning:** Do not use binary files to deploy the TiKV cluster in the production environment. For production, [use Ansible to deploy the TiKV cluster](using-ansible.md).

View File

@ -1,13 +1,12 @@
---
title: Install and Deploy TiKV Using Docker Compose
title: Docker Compose
description: Use Docker Compose to quickly deploy a TiKV testing cluster on a single machine.
menu:
docs:
parent: Deploy
weight: 5
---
# Install and Deploy TiKV Using Docker Compose
This guide describes how to quickly deploy a TiKV testing cluster using [Docker Compose](https://github.com/pingcap/tidb-docker-compose/) on a single machine. Currently, this installation method only supports the Linux system.
> **Warning:** Do not use Docker Compose to deploy the TiKV cluster in the production environment. For production, [use Ansible to deploy the TiKV cluster](deploy-tikv-using-ansible.md).

View File

@ -1,13 +1,12 @@
---
title: Install and Deploy TiKV Using Docker
title: Docker Deployment
description: Use Docker to deploy a TiKV cluster on multiple nodes.
menu:
docs:
parent: Deploy
weight: 3
---
# Install and Deploy TiKV Using Docker
This guide describes how to deploy a multi-node TiKV cluster using Docker.
> **Warning:** Do not use Docker to deploy the TiKV cluster in the production environment. For production, [use Ansible to deploy the TiKV cluster](using-ansible.md).

View File

@ -9,8 +9,6 @@ menu:
weight: 2
---
# Install and Deploy TiKV Using Docker Compose
This guide describes how to quickly deploy a TiKV testing cluster using [Docker Compose](https://github.com/pingcap/tidb-docker-compose/) on a single machine. Currently, this installation method only supports the Linux system.
> **Warning:** Do not use Docker Compose to deploy the TiKV cluster in the production environment. For production, [use Ansible to deploy the TiKV cluster](../deploy/using-ansible.md).

View File

@ -1,7 +0,0 @@
---
title: Monitor
description: Monitor TiKV
menu:
docs:
parent: Tasks
---

View File

@ -1,13 +1,12 @@
---
title: Overview of the TiKV Monitoring Framework
description: Use Prometheus and Grafana to build the TiKV monitoring framework.
title: Monitor
description: Monitor TiKV
menu:
docs:
parent: Monitor
parent: Tasks
weight: 3
---
# Overview of the TiKV Monitoring Framework
The TiKV monitoring framework adopts two open-source projects: [Prometheus](https://github.com/prometheus/prometheus) and [Grafana](https://github.com/grafana/grafana). TiKV uses Prometheus to store the monitoring and performance metrics, and uses Grafana to visualize these metrics.
## About Prometheus in TiKV
@ -19,9 +18,10 @@ As a time series database, Prometheus has a multi-dimensional data model and fle
- Pushgateway: to receive the data from Client Push for the Prometheus main server
- AlertManager: for the alerting mechanism
The diagram is as follows:
![Prometheus in TiKV](../../images/prometheus-in-tikv.png)
{{< figure
src="/img/docs/prometheus-in-tikv.png"
caption="Prometheus in TiKV"
number="" >}}
## About Grafana in TiKV

View File

@ -4,10 +4,9 @@ description: Learn some key metrics displayed on the Grafana Overview dashboard.
menu:
docs:
parent: Monitor
weight: 2
---
# Key Metrics
If your TiKV cluster is deployed using Ansible or Docker Compose, the monitoring system is deployed at the same time. For more details, see [Overview of the TiKV Monitoring Framework](../../how-to/monitor/overview.md).
The Grafana dashboard is divided into a series of sub-dashboards which include Overview, PD, TiKV, and so on. You can use various metrics to help you diagnose the cluster.

View File

@ -1,13 +1,12 @@
---
title: Monitor a TiKV Cluster
title: Monitoring a Cluster
description: Learn how to monitor the state of a TiKV cluster.
menu:
docs:
parent: Monitor
weight: 1
---
# Monitor a TiKV Cluster
Currently, you can use two types of interfaces to monitor the state of the TiKV cluster:
- [The component state interface](#the-component-state-interface): use the HTTP interface to get the internal information of a component, which is called the component state interface.

View File

@ -1,7 +0,0 @@
---
title: Scale
description: Scale TiKV
menu:
docs:
parent: Tasks
---

View File

@ -1,16 +1,14 @@
---
title: Scale a TiKV Cluster Using TiDB-Ansible
title: Ansible Scaling
description: Use TiDB-Ansible to scale out or scale in a TiKV cluster.
menu:
docs:
parent: Scale
---
# Scale a TiKV Cluster Using TiDB-Ansible
This document describes how to use TiDB-Ansible to scale out or scale in a TiKV cluster without affecting the online services.
> **Note:** This document applies to the TiKV deployment using Ansible. If your TiKV cluster is deployed in other ways, see [Scale a TiKV Cluster](horizontal-scale.md).
> **Note:** This document applies to the TiKV deployment using Ansible. If your TiKV cluster is deployed in other ways, see [Scale a TiKV Cluster](../introduction).
Assume that the topology is as follows:

View File

@ -1,16 +1,15 @@
---
title: Scale a TiKV Cluster
description: Learn how to scale out or scale in a TiKV cluster.
title: Scale
description: Scale TiKV
menu:
docs:
parent: Scale
parent: Tasks
weight: 4
---
# Scale a TiKV Cluster
You can scale out a TiKV cluster by adding nodes to increase the capacity without affecting online services. You can also scale in a TiKV cluster by deleting nodes to decrease the capacity without affecting online services.
> **Note:** If your TiKV cluster is deployed using Ansible, see [Scale the TiKV Cluster Using TiDB-Ansible](ansible-deployment-scale.md).
> **Note:** If your TiKV cluster is deployed using Ansible, see [Scale the TiKV Cluster Using TiDB-Ansible](../ansible).
## Scale out or scale in PD

View File

@ -13,16 +13,17 @@ TiKV | {{ .Title }}
<div class="column is-narrow">
<div class="toc">
{{ range .Site.Menus.docs }}
{{ $submenu := (index (where $docs "Name" .Name) 0) }}
{{ if (or ($currentPage.HasMenuCurrent "docs" $submenu) ($currentPage.IsMenuCurrent "docs" $submenu)) }}
{{ partial "entry-tree.html" (dict "entries" .Children "currentPage" $currentPage ) }}
{{ end }}
{{ $submenu := (index (where $docs "Name" .Name) 0) }}
{{ if (or ($currentPage.HasMenuCurrent "docs" $submenu) ($currentPage.IsMenuCurrent "docs" $submenu)) }}
{{ partial "entry-tree.html" (dict "entries" .Children "currentPage" $currentPage ) }}
{{ end }}
{{ end }}
</div>
</div>
<div class="column">
<div class="content is-medium docs-content">
{{ partial "admonition.html" (dict "type" "info" "icon" "fa-info-circle" "text" ("We are currently refactoring our documentation. Please excuse any problems you may find and report them [here](https://github.com/tikv/website)." | markdownify)) }}
{{ partial "docs/version-warning.html" . }}
{{ partial "math.html" . }}

View File

@ -35,7 +35,7 @@
{{ else }}
<li>
<a
{{ if $currentPage.IsMenuCurrent .Menu . }}class="is-active"{{ end }}
{{ if (or ($currentPage.IsMenuCurrent .Menu .) ($currentPage.HasMenuCurrent .Menu .)) }}class="is-active"{{ end }}
href="{{ .URL }}">
{{ .Name }}
</a>

Binary file not shown. (Image added, 71 KiB)

Binary file not shown. (Image added, 31 KiB)