Add and update multiple config and tools docs (#211)

* Add and update multiple config and tools docs

Signed-off-by: lilin90 <lilin@pingcap.com>
Lilian Lee 2020-11-10 19:40:25 +08:00 committed by GitHub
parent cc2094d8d1
commit a2be731878
18 changed files with 236 additions and 67 deletions

View File

@ -2,6 +2,93 @@
title: Adopters
---
TiKV has been adopted by many companies across a wide range of industries. The table below lists many of them:
TiKV has been adopted by many companies across a wide range of industries.
{{< adopters >}}
The following table lists adopters that deploy TiKV separately, without TiDB:
| Company | Industry | Success Story |
| :--- | :--- | :--- |
|[JD Cloud & AI](https://www.crunchbase.com/organization/jd-cloud)|Cloud Computing|[English](https://tikv.org/blog/tikv-in-jd-cloud-ai/); [Chinese](https://pingcap.com/cases-cn/user-case-jingdongyun/)|
|[Shopee](https://en.wikipedia.org/wiki/Shopee)|E-commerce|[English](https://www.pingcap.com/success-stories/tidb-in-shopee/); [Chinese](https://www.pingcap.com/cases-cn/user-case-shopee/)|
|[LY.com](https://www.crunchbase.com/organization/ly-com)|Travel|[Chinese](https://www.pingcap.com/cases-cn/user-case-tongcheng/)|
|[Zhuan Zhuan](https://www.crunchbase.com/organization/zhuan-zhuan)|Online Marketplace| English [#1](https://pingcap.com/case-studies/tidb-in-zhuanzhuan/), [#2](https://pingcap.com/case-studies/scale-out-database-powers-china-letgo-with-reduced-maintenance-costs); Chinese [#1](https://pingcap.com/cases-cn/user-case-zhuanzhuan/), [#2](https://pingcap.com/cases-cn/user-case-zhuanzhuan-2/), [#3](https://pingcap.com/cases-cn/user-case-zhuanzhuan-3/)|
|[Meituan-Dianping](https://en.wikipedia.org/wiki/Meituan-Dianping)|Food Delivery|[English](https://www.pingcap.com/success-stories/tidb-in-meituan-dianping/); [Chinese](https://pingcap.com/cases-cn/user-case-meituan/)|
|[Ele.me](https://en.wikipedia.org/wiki/Ele.me)|Food Delivery||
|[Yidian Zixun](https://www.crunchbase.com/organization/yidian-zixun#section-overview)|Media and Entertainment||
|[KuGou](https://en.wikipedia.org/wiki/KuGou)|Entertainment and Music Streaming||
|[Meitu](https://en.wikipedia.org/wiki/Meitu)|Photo-editing||
The following table lists adopters that deploy TiKV together with TiDB:
| Company | Industry | Success Story |
| :--- | :--- | :--- |
|[Zhihu](https://en.wikipedia.org/wiki/Zhihu)|Knowledge Sharing|[English](https://pingcap.com/success-stories/lesson-learned-from-queries-over-1.3-trillion-rows-of-data-within-milliseconds-of-response-time-at-zhihu/); [Chinese](https://pingcap.com/cases-cn/user-case-zhihu/)|
|[Mobike](https://en.wikipedia.org/wiki/Mobike)|Ridesharing|[English](https://pingcap.com/case-studies/tidb-in-mobike); Chinese [#1](https://pingcap.com/cases-cn/user-case-mobike/), [#2](https://pingcap.com/cases-cn/user-case-mobike-2/)|
|[Jinri Toutiao](https://en.wikipedia.org/wiki/Toutiao)|Mobile News Platform|[Chinese](https://www.pingcap.com/cases-cn/user-case-toutiao/)|
|[Yiguo.com](https://www.crunchbase.com/organization/shanghai-yiguo-electron-business)|E-commerce|[English](https://www.datanami.com/2018/02/22/hybrid-database-capturing-perishable-insights-yiguo/); [Chinese](https://www.pingcap.com/cases-cn/user-case-yiguo)|
|[Xiaohongshu](https://en.wikipedia.org/wiki/Xiaohongshu)|E-commerce|[English](https://pingcap.com/case-studies/how-we-use-a-scale-out-htap-database-for-real-time-analytics-and-complex-queries); Chinese [#1](https://pingcap.com/cases-cn/user-case-xiaohongshu/), [#2](https://pingcap.com/cases-cn/user-case-xiaohongshu-2/)|
|[Happigo.com](https://www.crunchbase.com/organization/happigo-com)|E-commerce||
|[Yimutian](http://www.ymt.com/)|E-commerce||
|[Maizuo](https://www.crunchbase.com/organization/maizhuo)|E-commerce||
|[Mogujie](https://www.crunchbase.com/organization/mogujie)|E-commerce||
|[Xiaomi](https://en.wikipedia.org/wiki/Xiaomi)|Consumer Electronics|[Chinese](https://pingcap.com/cases-cn/user-case-xiaomi/)|
|[Qunar.com](https://www.crunchbase.com/organization/qunar-com)|Travel|[Chinese](https://www.pingcap.com/cases-cn/user-case-qunar/)|
|[Hulu](https://www.hulu.com)|Entertainment||
|[VIPKID](https://www.crunchbase.com/organization/vipkid)|EdTech||
|[Yuanfudao.com](https://www.crunchbase.com/organization/yuanfudao)|EdTech|[English](https://pingcap.com/blog/2017-08-08-tidbforyuanfudao/); [Chinese](https://pingcap.com/cases-cn/user-case-yuanfudao/)|
|[Bank of Beijing](https://en.wikipedia.org/wiki/Bank_of_Beijing)|Banking|[English](https://pingcap.com/case-studies/how-we-use-a-distributed-database-to-achieve-horizontal-scaling-without-downtime); Chinese [#1](https://pingcap.com/cases-cn/user-case-beijing-bank/), [#2](https://pingcap.com/cases-cn/user-case-beijing-bank-2/)|
|[Bank of China](https://en.wikipedia.org/wiki/Bank_of_China)|Banking|[English](https://en.pingcap.com/case-studies/how-bank-of-china-uses-a-scale-out-database-to-support-zabbix-monitoring-at-scale); [Chinese](https://pingcap.com/cases-cn/user-case-bank-of-china/)|
|[Industrial and Commercial Bank of China](https://en.wikipedia.org/wiki/Industrial_and_Commercial_Bank_of_China)|Banking||
|[Yimian Data](https://www.crunchbase.com/organization/yimian-data)|Big Data|[Chinese](https://www.pingcap.com/cases-cn/user-case-yimian)|
|[CAASDATA](https://www.caasdata.com/)|Big Data|[Chinese](https://pingcap.com/cases-cn/user-case-kasi/)|
|[Mobikok](http://www.mobikok.com/en/)|AdTech|[Chinese](https://pingcap.com/cases-cn/user-case-mobikok/)|
|[G7 Networks](https://www.english.g7.com.cn/)| Logistics|[Chinese](https://www.pingcap.com/cases-cn/user-case-g7/)|
|[Hive-Box](http://www.fcbox.com/en/pc/index.html#/)|Logistics|[Chinese](https://pingcap.com/cases-cn/user-case-fengchao/)|
|[GAEA](http://www.gaea.com/en/)|Gaming|[English](https://www.pingcap.com/blog/2017-05-22-Comparison-between-MySQL-and-TiDB-with-tens-of-millions-of-data-per-day/); [Chinese](https://www.pingcap.com/cases-cn/user-case-gaea-ad/)|
|[YOOZOO Games](https://www.crunchbase.com/organization/yoozoo-games)|Gaming|[Chinese](https://pingcap.com/cases-cn/user-case-youzu/)|
|[Seasun Games](https://www.crunchbase.com/organization/seasun)|Gaming|[Chinese](https://pingcap.com/cases-cn/user-case-xishanju/)|
|[NetEase Games](https://game.163.com/en/)|Gaming||
|[FUNYOURS JAPAN](http://company.funyours.co.jp/)|Gaming|[Chinese](https://pingcap.com/cases-cn/user-case-funyours-japan/)|
|[Hoodinn](https://www.crunchbase.com/organization/hoodinn)|Gaming||
|[SEA group](https://sea-group.org/?lang=en)|Gaming||
|[Zhaopin.com](https://www.crunchbase.com/organization/zhaopin)|Recruiting||
|[BIGO](https://www.crunchbase.com/organization/bigo-technology)|Live Streaming|[English](https://en.pingcap.com/case-studies/why-we-chose-an-htap-database-over-mysql-for-horizontal-scaling-and-complex-queries); [Chinese](https://pingcap.com/cases-cn/user-case-bigo/)|
|[Panda.tv](https://www.crunchbase.com/organization/panda-tv)|Live Streaming||
|[VNG](https://en.wikipedia.org/wiki/VNG_Corporation)|Mobile Payment|English [#1](https://pingcap.com/case-studies/tidb-at-zalopay-infrastructure-lesson-learned/), [#2](https://pingcap.com/case-studies/zalopay-using-a-scale-out-mysql-alternative-to-serve-millions-of-users)|
|[Ping++](https://www.crunchbase.com/organization/ping-5)|Mobile Payment|[Chinese](https://pingcap.com/cases-cn/user-case-ping++/)|
|[LianLian Tech](http://www.10030.com.cn/web/)|Mobile Payment||
|[Phoenix New Media](https://www.crunchbase.com/organization/phoenix-new-media)|Media|[Chinese](https://pingcap.com/cases-cn/user-case-ifeng/)|
|[Tencent OMG](https://en.wikipedia.org/wiki/Tencent)|Media||
|[Terren](http://webterren.com.zigstat.com/)|Media||
|[LeCloud](https://www.crunchbase.com/organization/letv-2)|Media||
|[Miaopai](https://en.wikipedia.org/wiki/Miaopai)|Media||
|[Meizu](https://en.wikipedia.org/wiki/Meizu)|Media||
|[Sogou](https://en.wikipedia.org/wiki/Sogou)|MediaTech||
|[Dailymotion](https://en.wikipedia.org/wiki/Dailymotion)|Media and Entertainment||
|[iQiyi](https://en.wikipedia.org/wiki/IQiyi)|Media and Entertainment|[English](https://pingcap.com/success-stories/tidb-in-iqiyi/); [Chinese](https://pingcap.com/cases-cn/user-case-iqiyi/)|
|[BookMyShow](https://www.crunchbase.com/organization/bookmyshow)|Media and Entertainment|[English](https://pingcap.com/success-stories/tidb-in-bookmyshow/)|
|[Gengmei](https://www.crunchbase.com/organization/gengmei)|Plastic Surgery||
|[Keruyun](https://www.crunchbase.com/organization/keruyun-technology-beijing-co-ltd)|SaaS|[Chinese](https://pingcap.com/cases-cn/user-case-keruyun/)|
|[LinkDoc Technology](https://www.crunchbase.com/organization/linkdoc-technology)|HealthTech|[Chinese](https://pingcap.com/cases-cn/user-case-linkdoc/)|
|[Chunyu Yisheng](https://www.crunchbase.com/organization/chunyu)|HealthTech||
|[Qutoutiao](https://www.crunchbase.com/organization/qutoutiao)|Social Network||
|[360 Finance](https://www.crunchbase.com/organization/360-finance)|FinTech|[Chinese](https://pingcap.com/cases-cn/user-case-360/)|
|[Tongdun Technology](https://www.crunchbase.com/organization/tongdun-technology)|FinTech||
|[Wacai](https://www.crunchbase.com/organization/wacai)|FinTech||
|[Tree Finance](https://www.treefinance.com.cn/)|FinTech||
|[Mashang Consumer Finance](https://www.crunchbase.com/organization/ms-finance)|FinTech||
|[Snowball Finance](https://www.crunchbase.com/organization/snowball-finance)|FinTech||
|[QuantGroup](https://www.crunchbase.com/organization/quantgroup)|FinTech||
|[FINUP](https://www.crunchbase.com/organization/finup)|FinTech||
|[Meili Finance](https://www.crunchbase.com/organization/meili-jinrong)|FinTech||
|[Guolian Securities](https://www.crunchbase.com/organization/guolian-securities)|Financial Services||
|[Founder Securities](https://www.linkedin.com/company/founder-securities-co-ltd-/)|Financial Services||
|[China Telecom Shanghai](http://sh.189.cn/en/index.html)|Telecom||
|[State Administration of Taxation](https://en.wikipedia.org/wiki/State_Administration_of_Taxation)|Finance||
|[Hainan eKing Technology](https://www.crunchbase.com/organization/hainan-eking-technology)|Enterprise Technology|[Chinese](https://pingcap.com/cases-cn/user-case-ekingtech/)|
|[Wuhan Antian Information Technology](https://www.avlsec.com/)|Enterprise Technology||
|[Lenovo](https://en.wikipedia.org/wiki/Lenovo)|Enterprise Technology||
|[2Dfire.com](http://www.2dfire.com/)|FoodTech|[Chinese](https://pingcap.com/cases-cn/user-case-erweihuo/)|
|[Acewill](https://www.crunchbase.com/organization/acewill)|FoodTech||
|[Ausnutria Dairy](https://www.crunchbase.com/organization/ausnutria-dairy)|FoodTech||
|[Qingdao Telaidian](https://www.teld.cn/)|Electric Car Charger|[Chinese](https://pingcap.com/cases-cn/user-case-telaidian/)|

View File

@ -20,7 +20,7 @@ When you compile TiKV, the `tikv-ctl` command is also compiled at the same time.
For this mode, if SSL is enabled in TiKV, `tikv-ctl` also needs to specify the related certificate file. For example:
```
$ tikv-ctl --ca-path ca.pem --cert-path client.pem --key-path client-key.pem --host 127.0.0.1:21060 <subcommands>
$ tikv-ctl --ca-path ca.pem --cert-path client.pem --key-path client-key.pem --host 127.0.0.1:20160 <subcommands>
```
However, sometimes `tikv-ctl` communicates with PD instead of TiKV. In this case, you need to use the `--pd` option instead of `--host`. Here is an example:
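A minimal sketch of such an invocation, assuming a PD endpoint at `127.0.0.1:2379` (the endpoint and the `compact-cluster` subcommand here are illustrative, not taken from this diff):
```bash
# Talk to PD rather than a TiKV instance by passing --pd
$ tikv-ctl --pd 127.0.0.1:2379 compact-cluster
```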
@ -58,7 +58,7 @@ Use the `raft` subcommand to view the status of the Raft state machine at a spec
Use the `region` and `log` subcommands to obtain the above information respectively. Both subcommands support the remote mode and the local mode. Their usage and output are as follows:
```bash
$ tikv-ctl --host 127.0.0.1:21060 raft region -r 2
$ tikv-ctl --host 127.0.0.1:20160 raft region -r 2
region id: 2
region state key: \001\003\000\000\000\000\000\000\000\002\001
region state: Some(region {id: 2 region_epoch {conf_ver: 3 version: 1} peers {id: 3 store_id: 1} peers {id: 5 store_id: 4} peers {id: 7 store_id: 6}})
@ -180,9 +180,9 @@ success!
Use the `consistency-check` command to execute a consistency check among replicas in the corresponding Raft group of a specific Region. If the check fails, TiKV itself panics. If the TiKV instance specified by `--host` is not the Region leader, an error is reported.
```bash
$ tikv-ctl --host 127.0.0.1:21060 consistency-check -r 2
$ tikv-ctl --host 127.0.0.1:20160 consistency-check -r 2
success!
$ tikv-ctl --host 127.0.0.1:21061 consistency-check -r 2
$ tikv-ctl --host 127.0.0.1:20161 consistency-check -r 2
DebugClient::check_region_consistency: RpcFailure(RpcStatus { status: Unknown, details: Some("StringError(\"Leader is on store 1\")") })
```

View File

@ -168,8 +168,8 @@ max-background-jobs = 8
### `info-log-roll-time`
+ The time interval at which Info logs are truncated. If the value is `0`, logs are not truncated.
+ Default value: `0`
+ The time interval at which Info logs are truncated. If the value is `0s`, logs are not truncated.
+ Default value: `0s`
### `info-log-keep-log-file-num`

View File

@ -95,7 +95,7 @@ Then you can check the state of this TiKV node:
{
  "store": {
    "id": 1,
    "address": "127.0.0.1:21060",
    "address": "127.0.0.1:20160",
    "state": 1,
    "state_name": "Offline"
  },
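The command that produces this output is outside the hunk; a hedged sketch, assuming PD's HTTP API and store ID `1`:
```bash
# Query the state of store 1 through the PD API (endpoint assumed)
$ curl http://127.0.0.1:2379/pd/api/v1/store/1
```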

View File

@ -20,7 +20,7 @@ When you compile TiKV, the `tikv-ctl` command is also compiled at the same time.
For this mode, if SSL is enabled in TiKV, `tikv-ctl` also needs to specify the related certificate file. For example:
```
$ tikv-ctl --ca-path ca.pem --cert-path client.pem --key-path client-key.pem --host 127.0.0.1:21060 <subcommands>
$ tikv-ctl --ca-path ca.pem --cert-path client.pem --key-path client-key.pem --host 127.0.0.1:20160 <subcommands>
```
However, sometimes `tikv-ctl` communicates with PD instead of TiKV. In this case, you need to use the `--pd` option instead of `--host`. Here is an example:
@ -58,7 +58,7 @@ Use the `raft` subcommand to view the status of the Raft state machine at a spec
Use the `region` and `log` subcommands to obtain the above information respectively. Both subcommands support the remote mode and the local mode. Their usage and output are as follows:
```bash
$ tikv-ctl --host 127.0.0.1:21060 raft region -r 2
$ tikv-ctl --host 127.0.0.1:20160 raft region -r 2
region id: 2
region state key: \001\003\000\000\000\000\000\000\000\002\001
region state: Some(region {id: 2 region_epoch {conf_ver: 3 version: 1} peers {id: 3 store_id: 1} peers {id: 5 store_id: 4} peers {id: 7 store_id: 6}})
@ -180,9 +180,9 @@ success!
Use the `consistency-check` command to execute a consistency check among replicas in the corresponding Raft group of a specific Region. If the check fails, TiKV itself panics. If the TiKV instance specified by `--host` is not the Region leader, an error is reported.
```bash
$ tikv-ctl --host 127.0.0.1:21060 consistency-check -r 2
$ tikv-ctl --host 127.0.0.1:20160 consistency-check -r 2
success!
$ tikv-ctl --host 127.0.0.1:21061 consistency-check -r 2
$ tikv-ctl --host 127.0.0.1:20161 consistency-check -r 2
DebugClient::check_region_consistency: RpcFailure(RpcStatus { status: Unknown, details: Some("StringError(\"Leader is on store 1\")") })
```

View File

@ -12,8 +12,8 @@ This document describes the configuration parameters related to gRPC.
### `grpc-compression-type`
+ The compression algorithm for gRPC messages
+ Optional values: `none`, `deflate`, `gzip`
+ Default value: `none`
+ Optional values: `"none"`, `"deflate"`, `"gzip"`
+ Default value: `"none"`
### `grpc-concurrency`
@ -27,6 +27,12 @@ This document describes the configuration parameters related to gRPC.
+ Default value: `1024`
+ Minimum value: `1`
### `grpc-memory-pool-quota`
+ Limit the memory size that can be used by gRPC
+ Default: `"32G"`
+ Limit the memory in case OOM is observed. Note that limiting the usage can lead to potential stalls
### `server.grpc-raft-conn-num`
+ The maximum number of links among TiKV nodes for Raft communication
@ -36,18 +42,18 @@ This document describes the configuration parameters related to gRPC.
### `server.grpc-stream-initial-window-size`
+ The window size of the gRPC stream
+ Default: 2MB
+ Default: `"2MB"`
+ Unit: KB|MB|GB
+ Minimum value: `1KB`
+ Minimum value: `"1KB"`
### `server.grpc-keepalive-time`
+ The time interval at which gRPC sends `keepalive` Ping messages
+ Default: `10s`
+ Minimum value: `1s`
+ Default: `"10s"`
+ Minimum value: `"1s"`
### `server.grpc-keepalive-timeout`
+ The timeout for receiving a response to a gRPC `keepalive` Ping message
+ Default: `3s`
+ Minimum value: `1s`
+ Default: `"3s"`
+ Minimum value: `"1s"`
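Collecting the defaults above, a hedged TOML sketch of how these items could look in the TiKV configuration file (the `[server]` section name is inferred from the `server.` prefixes, not shown in this diff):
```toml
[server]
# Defaults as listed above; adjust per workload
grpc-compression-type = "none"
grpc-memory-pool-quota = "32G"
grpc-stream-initial-window-size = "2MB"
grpc-keepalive-time = "10s"
grpc-keepalive-timeout = "3s"
```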

View File

@ -18,12 +18,12 @@ menu:
### `tso-save-interval`
+ The interval for PD to allocate TSOs for persistent storage in etcd
+ Default value: `3` seconds
+ Default value: `"3s"`
### `initial-cluster-state`
+ The initial state of the cluster
+ Default value: `new`
+ Default value: `"new"`
### `enable-prevote`
@ -54,12 +54,12 @@ menu:
### `tick-interval`
+ The tick period of etcd Raft
+ Default value: `100ms`
+ Default value: `"100ms"`
### `election-interval`
+ The timeout for the etcd leader election
+ Default value: `3s`
+ Default value: `"3s"`
### `use-region-storage`
@ -92,7 +92,7 @@ Configuration items related to log
### `format`
+ The log format, which can be specified as "text", "json", or "console"
+ Default value: `text`
+ Default value: `"text"`
### `disable-timestamp`
@ -129,7 +129,7 @@ Configuration items related to monitoring
### `interval`
+ The interval at which monitoring metric data is pushed to Prometheus
+ Default value: `15s`
+ Default value: `"15s"`
## `schedule`
@ -148,7 +148,7 @@ Configuration items related to scheduling
### `patrol-region-interval`
+ Controls the running frequency at which `replicaChecker` checks the health state of a Region. The smaller this value is, the faster `replicaChecker` runs. Normally, you do not need to adjust this parameter.
+ Default value: `100ms`
+ Default value: `"100ms"`
### `split-merge-interval`
@ -168,7 +168,7 @@ Configuration items related to scheduling
### `max-store-down-time`
+ The downtime after which PD judges that the disconnected store cannot be recovered. When PD fails to receive the heartbeat from a store after the specified period of time, it adds replicas at other nodes.
+ Default value: `30m`
+ Default value: `"30m"`
### `leader-schedule-limit`

View File

@ -40,6 +40,12 @@ max-background-jobs = 8
+ Default value: `8`
+ Minimum value: `1`
### `max-background-flushes`
+ The maximum number of concurrent background memtable flush jobs
+ Default value: `2`
+ Minimum value: `1`
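A hedged TOML sketch of the two background-job knobs, assuming they sit in the `[rocksdb]` section alongside the `max-background-jobs = 8` context shown above:
```toml
[rocksdb]
max-background-jobs = 8     # concurrent background jobs (compactions and flushes)
max-background-flushes = 2  # concurrent memtable flush jobs
```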
### `max-sub-compactions`
+ The number of sub-compaction operations performed concurrently in RocksDB
@ -168,8 +174,8 @@ max-background-jobs = 8
### `info-log-roll-time`
+ The time interval at which Info logs are truncated. If the value is `0`, logs are not truncated.
+ Default value: `0`
+ The time interval at which Info logs are truncated. If the value is `0s`, logs are not truncated.
+ Default value: `0s`
### `info-log-keep-log-file-num`

View File

@ -9,12 +9,6 @@ menu:
This document describes the configuration parameters related to storage.
### `scheduler-notify-capacity`
+ The maximum number of messages that `scheduler` gets each time
+ Default value: `10240`
+ Minimum value: `1`
### `scheduler-concurrency`
+ A built-in memory lock mechanism to prevent simultaneous operations on a key. Each key has a hash in a different slot.

View File

@ -95,7 +95,7 @@ Then you can check the state of this TiKV node:
{
  "store": {
    "id": 1,
    "address": "127.0.0.1:21060",
    "address": "127.0.0.1:20160",
    "state": 1,
    "state_name": "Offline"
  },

View File

@ -20,7 +20,7 @@ When you compile TiKV, the `tikv-ctl` command is also compiled at the same time.
For this mode, if SSL is enabled in TiKV, `tikv-ctl` also needs to specify the related certificate file. For example:
```
$ tikv-ctl --ca-path ca.pem --cert-path client.pem --key-path client-key.pem --host 127.0.0.1:21060 <subcommands>
$ tikv-ctl --ca-path ca.pem --cert-path client.pem --key-path client-key.pem --host 127.0.0.1:20160 <subcommands>
```
However, sometimes `tikv-ctl` communicates with PD instead of TiKV. In this case, you need to use the `--pd` option instead of `--host`. Here is an example:
@ -58,7 +58,7 @@ Use the `raft` subcommand to view the status of the Raft state machine at a spec
Use the `region` and `log` subcommands to obtain the above information respectively. Both subcommands support the remote mode and the local mode. Their usage and output are as follows:
```bash
$ tikv-ctl --host 127.0.0.1:21060 raft region -r 2
$ tikv-ctl --host 127.0.0.1:20160 raft region -r 2
region id: 2
region state key: \001\003\000\000\000\000\000\000\000\002\001
region state: Some(region {id: 2 region_epoch {conf_ver: 3 version: 1} peers {id: 3 store_id: 1} peers {id: 5 store_id: 4} peers {id: 7 store_id: 6}})
@ -180,9 +180,9 @@ success!
Use the `consistency-check` command to execute a consistency check among replicas in the corresponding Raft group of a specific Region. If the check fails, TiKV itself panics. If the TiKV instance specified by `--host` is not the Region leader, an error is reported.
```bash
$ tikv-ctl --host 127.0.0.1:21060 consistency-check -r 2
$ tikv-ctl --host 127.0.0.1:20160 consistency-check -r 2
success!
$ tikv-ctl --host 127.0.0.1:21061 consistency-check -r 2
$ tikv-ctl --host 127.0.0.1:20161 consistency-check -r 2
DebugClient::check_region_consistency: RpcFailure(RpcStatus { status: Unknown, details: Some("StringError(\"Leader is on store 1\")") })
```

View File

@ -12,8 +12,8 @@ This document describes the configuration parameters related to gRPC.
### `grpc-compression-type`
+ The compression algorithm for gRPC messages
+ Optional values: `none`, `deflate`, `gzip`
+ Default value: `none`
+ Optional values: `"none"`, `"deflate"`, `"gzip"`
+ Default value: `"none"`
### `grpc-concurrency`
@ -27,6 +27,12 @@ This document describes the configuration parameters related to gRPC.
+ Default value: `1024`
+ Minimum value: `1`
### `grpc-memory-pool-quota`
+ Limit the memory size that can be used by gRPC
+ Default: `"32G"`
+ Limit the memory in case OOM is observed. Note that limiting the usage can lead to potential stalls
### `server.grpc-raft-conn-num`
+ The maximum number of links among TiKV nodes for Raft communication
@ -36,18 +42,18 @@ This document describes the configuration parameters related to gRPC.
### `server.grpc-stream-initial-window-size`
+ The window size of the gRPC stream
+ Default: 2MB
+ Default: `"2MB"`
+ Unit: KB|MB|GB
+ Minimum value: `1KB`
+ Minimum value: `"1KB"`
### `server.grpc-keepalive-time`
+ The time interval at which gRPC sends `keepalive` Ping messages
+ Default: `10s`
+ Minimum value: `1s`
+ Default: `"10s"`
+ Minimum value: `"1s"`
### `server.grpc-keepalive-timeout`
+ The timeout for receiving a response to a gRPC `keepalive` Ping message
+ Default: `3s`
+ Minimum value: `1s`
+ Default: `"3s"`
+ Minimum value: `"1s"`

View File

@ -23,5 +23,6 @@ There are several guides that you can use to inform your configuration:
* [**Storage**](../storage): Tweak storage configuration parameters.
* [**gRPC**](../grpc): Tweak gRPC configuration parameters.
* [**Coprocessor**](../coprocessor): Tweak Coprocessor configuration parameters.
* [**Synchronous Replication**](../synchronous-replication): Configure synchronous replication in dual data centers.
You can find an exhaustive list of all parameters, as well as what they do, in the documented [**full configuration template**](https://github.com/tikv/tikv/blob/master/etc/config-template.toml).

View File

@ -18,12 +18,12 @@ menu:
### `tso-save-interval`
+ The interval for PD to allocate TSOs for persistent storage in etcd
+ Default value: `3` seconds
+ Default value: `"3s"`
### `initial-cluster-state`
+ The initial state of the cluster
+ Default value: `new`
+ Default value: `"new"`
### `enable-prevote`
@ -54,12 +54,12 @@ menu:
### `tick-interval`
+ The tick period of etcd Raft
+ Default value: `100ms`
+ Default value: `"100ms"`
### `election-interval`
+ The timeout for the etcd leader election
+ Default value: `3s`
+ Default value: `"3s"`
### `use-region-storage`
@ -92,7 +92,7 @@ Configuration items related to log
### `format`
+ The log format, which can be specified as "text", "json", or "console"
+ Default value: `text`
+ Default value: `"text"`
### `disable-timestamp`
@ -129,7 +129,7 @@ Configuration items related to monitoring
### `interval`
+ The interval at which monitoring metric data is pushed to Prometheus
+ Default value: `15s`
+ Default value: `"15s"`
## `schedule`
@ -148,7 +148,7 @@ Configuration items related to scheduling
### `patrol-region-interval`
+ Controls the running frequency at which `replicaChecker` checks the health state of a Region. The smaller this value is, the faster `replicaChecker` runs. Normally, you do not need to adjust this parameter.
+ Default value: `100ms`
+ Default value: `"100ms"`
### `split-merge-interval`
@ -168,7 +168,7 @@ Configuration items related to scheduling
### `max-store-down-time`
+ The downtime after which PD judges that the disconnected store cannot be recovered. When PD fails to receive the heartbeat from a store after the specified period of time, it adds replicas at other nodes.
+ Default value: `30m`
+ Default value: `"30m"`
### `leader-schedule-limit`

View File

@ -42,6 +42,12 @@ max-background-jobs = 8
+ Default value: `8`
+ Minimum value: `1`
### `max-background-flushes`
+ The maximum number of concurrent background memtable flush jobs
+ Default value: `2`
+ Minimum value: `1`
### `max-sub-compactions`
+ The number of sub-compaction operations performed concurrently in RocksDB
@ -170,8 +176,8 @@ max-background-jobs = 8
### `info-log-roll-time`
+ The time interval at which Info logs are truncated. If the value is `0`, logs are not truncated.
+ Default value: `0`
+ The time interval at which Info logs are truncated. If the value is `0s`, logs are not truncated.
+ Default value: `0s`
### `info-log-keep-log-file-num`

View File

@ -9,12 +9,6 @@ menu:
This document describes the configuration parameters related to storage.
### `scheduler-notify-capacity`
+ The maximum number of messages that `scheduler` gets each time
+ Default value: `10240`
+ Minimum value: `1`
### `scheduler-concurrency`
+ A built-in memory lock mechanism to prevent simultaneous operations on a key. Each key has a hash in a different slot.

View File

@ -0,0 +1,69 @@
---
title: Synchronous Replication Config
description: Learn how to configure synchronous replication.
menu:
  "dev":
    parent: Configure
    weight: 13
---
This document describes how to configure synchronous replication in dual data centers.
One of the data centers is the primary data center, and the other is the DR (disaster recovery) data center. When a Region has an odd number of replicas, more of them are placed in the primary data center. When the DR data center is down for longer than the configured time, replication switches to asynchronous and the primary data center serves requests on its own. Asynchronous replication is used by default.
> **Note:** Synchronous replication is still an experimental feature.
## Enable synchronous replication
Replication mode is controlled by PD. You can use the following example PD configuration to set up a cluster with synchronous replication from scratch:
```toml
[replication-mode]
replication-mode = "dr-auto-sync"
[replication-mode.dr-auto-sync]
label-key = "zone"
primary = "z1"
dr = "z2"
primary-replicas = 2
dr-replicas = 1
wait-store-timeout = "1m"
wait-sync-timeout = "1m"
```
In the above configuration, `dr-auto-sync` is the mode that enables synchronous replication. The label key `zone` is used to distinguish different data centers: TiKV instances labeled with the value "z1" are considered to be in the primary data center, and those labeled "z2" are in the DR data center. `primary-replicas` and `dr-replicas` are the numbers of replicas to be placed in the respective data centers. `wait-store-timeout` is the time to wait before falling back to asynchronous replication. `wait-sync-timeout` is the time to wait before forcing TiKV to change the replication mode, but it is not supported yet.
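For the `zone` label key to take effect, each TiKV instance must carry a matching label; a hedged sketch of one way to set it in the TiKV configuration (the `[server]` `labels` key is an assumption, not shown on this page):
```toml
[server]
# Mark this instance as part of the primary data center
labels = { zone = "z1" }
```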
You can use the following URL to check the current replication status of the cluster:
```bash
% curl http://pd_ip:pd_port/pd/api/v1/replication_mode/status
{
  "mode": "dr-auto-sync",
  "dr-auto-sync": {
    "label-key": "zone",
    "state": "sync"
  }
}
```
After the cluster becomes `sync`, it does not become `async` unless the number of down instances is larger than the specified number of replicas in either data center. After the state becomes `async`, PD asks TiKV to change the replication mode to asynchronous and periodically checks whether the TiKV instances have recovered. When the number of down instances is smaller than the number of replicas in both data centers, the cluster enters the `sync-recover` state and PD asks TiKV to change the replication mode back to synchronous. After all Regions become synchronous, the cluster becomes `sync` again.
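As a hedged illustration, the same status endpoint might report the fallback state as follows (hypothetical output; the field set is assumed from the `sync` example above):
```bash
% curl http://pd_ip:pd_port/pd/api/v1/replication_mode/status
{
  "mode": "dr-auto-sync",
  "dr-auto-sync": {
    "label-key": "zone",
    "state": "async"
  }
}
```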
## Change replication mode manually
You can use `pd-ctl` to change a cluster from asynchronous to synchronous replication:
```bash
>> config set replication-mode dr-auto-sync
```
Or change it back to asynchronous:
```bash
>> config set replication-mode majority
```
You can also update the label key:
```bash
>> config set replication-mode dr-auto-sync label-key dc
```

View File

@ -95,7 +95,7 @@ Then you can check the state of this TiKV node:
{
  "store": {
    "id": 1,
    "address": "127.0.0.1:21060",
    "address": "127.0.0.1:20160",
    "state": 1,
    "state_name": "Offline"
  },