Compare commits

...

91 Commits

Author SHA1 Message Date
CZJCC df8f9d2dcd
Merge pull request #237 from Sunrisea/master-fix-login-bug
Fix bug of Inject security info
2025-07-17 19:08:17 +08:00
濯光 4600b89e45 fix bug of get_access_token 2025-07-17 19:00:22 +08:00
濯光 ce22e6a437 fix bug of get_access_token 2025-07-17 18:51:14 +08:00
CZJCC b3c26c7ba9
Merge pull request #235 from Sunrisea/master-fix-deadlock-bug
[ISSUE #233] Fix bug of deadlock and v1 config publish
2025-07-02 15:25:43 +08:00
濯光 cb17109542 fix bug of deadlock and v1 config publish 2025-07-02 14:39:16 +08:00
CZJCC 1892ebbe43
Merge pull request #232 from CZJCC/feature/v2-readme
fix typeing.Dict
2025-05-06 16:59:19 +08:00
CZJCC bcdda37351 fix typeing.Dict 2025-05-06 16:57:26 +08:00
CZJCC d3f9b0daec
Merge pull request #231 from CZJCC/feature/v2-readme
fix installation command
2025-05-06 16:34:42 +08:00
CZJCC 95236b1643 fix installation command 2025-05-06 16:32:33 +08:00
CZJCC 58019d4395
Merge pull request #230 from CZJCC/feature/v2-readme
fix typeing.List
2025-05-06 16:13:19 +08:00
CZJCC 096b1bf077 fix typeing.List 2025-05-06 16:04:36 +08:00
CZJCC a23ed4f7b6
Merge pull request #228 from CZJCC/feature/v2-readme
Fix exception in config listener async task
2025-04-10 19:37:14 +08:00
CZJCC f37de79b9a Fix exception in config listener async task 2025-04-10 19:35:48 +08:00
CZJCC 13919631b1
Merge pull request #226 from yk125123/fix-issue-#222
Update client.py
2025-03-21 14:19:33 +08:00
kevin 3fa3bd353b Update client.py
fix: issue #222
resolve the issue of 403 forbidden after custom user added watchers.

/nacos/v1/cs/configs/listener?accessToken=your_token 403
2025-03-20 19:38:19 +08:00
CZJCC 7460efd9b7
Merge pull request #221 from libaiyun/custom-2.0.1
chore: Relax psutil dependency to >=5.9.5 for compatibility
2025-03-20 17:19:22 +08:00
CZJCC 5a78759e0a
Merge pull request #225 from CZJCC/feature/v2-readme
Fix gRPC connection reuse issue
2025-03-20 16:24:34 +08:00
CZJCC 4d69532224 Fix gRPC connection reuse issue 2025-03-20 16:22:01 +08:00
CZJCC 7e8d782743
Merge pull request #224 from CZJCC/feature/v2-readme
Fix gRPC connection reuse issue
2025-03-20 16:20:07 +08:00
CZJCC 1992f16ea6 Fix gRPC connection reuse issue 2025-03-20 16:15:08 +08:00
libaiyun 81fac4c584 chore: Relax psutil dependency to >=5.9.5 for compatibility 2025-03-20 15:00:31 +08:00
CZJCC 5112b5daf9
Merge pull request #220 from CZJCC/feature/v2-readme
readme
2025-03-20 10:53:58 +08:00
CZJCC 19c99db739 readme 2025-03-20 10:51:32 +08:00
CZJCC c0e090e982
Merge pull request #219 from GavinAstk/master
Fix: client waits indefinitely when the server is not started
2025-03-20 10:38:19 +08:00
gavinastk 6cd7360d83 Add NacosException error codes 2025-03-19 22:37:10 +08:00
gavinastk 8a0a1470bb Fix: client waits indefinitely when the server is not started 2025-03-19 22:07:27 +08:00
CZJCC bdfb9c62a7
Merge pull request #184 from nightosong/test
rm redundant errors
2025-03-19 09:59:17 +08:00
CZJCC 78b586810e
Merge pull request #128 from shouldsee/patch-1
Add func signature to parameter of client.subscribe
2025-03-18 19:24:19 +08:00
CZJCC 997abb1b56
Merge branch 'master' into patch-1 2025-03-18 19:20:20 +08:00
CZJCC 39f64a66fd
Merge pull request #216 from hellodeveye/master
fix: remove unnecessary constructor causing model_validate error
2025-03-18 18:49:56 +08:00
杨凯 4b5cce247c fix: remove unnecessary constructor causing model_validate error 2025-03-18 18:30:15 +08:00
CZJCC 61930a6276
Merge pull request #215 from hellodeveye/master
fix: typo in error code check condition in rpc_client.py
2025-03-18 14:21:57 +08:00
杨凯 22d687e69f fix: typo in error code check condition in rpc_client.py 2025-03-18 10:46:48 +08:00
CZJCC 1596fcfac0
Merge pull request #211 from pro4jhu/master
resolve watcher high cpu usage
2025-03-17 11:32:33 +08:00
pro4jhu fe27fcb7be
resolve watcher high cpu usage 2025-02-20 16:10:53 +08:00
CZJCC ec2d037a96
Merge pull request #207 from CZJCC/feature/v2-readme
fix bug
2025-01-07 11:00:42 +08:00
CZJCC 73d9865519 fix bug 2025-01-07 10:05:42 +08:00
CZJCC 067557db1a
Merge pull request #205 from zeyu-zh/master
Support credentials provider
2025-01-03 11:13:51 +08:00
张泽宇 b9681a7be7 Add test cases for credentials provider 2025-01-03 00:28:11 +08:00
张泽宇 4a78416a56 Update readme for credentials provider 2025-01-03 00:28:05 +08:00
张泽宇 797daefd1b Retrieve credentials from providers
Now all the access keys are provisioned by credentials provider and the STS
token can be used for authentication.
2025-01-03 00:28:00 +08:00
张泽宇 c164c81438 Enhance security by introducing credential providers
Implemented the concepts of 'credentials' and 'credential providers'
to eliminate the need for hardcoding access keys in the codebase. Us-
ers can now securely obtain credentials via third-party SDKs, suppo-
rting retrieval from services like KMS and environment variables. This
enhancement aligns with best practices for cloud service interaction
and adds flexibility for custom use cases.
2025-01-03 00:27:55 +08:00
CZJCC 76681286c0
Merge pull request #204 from CZJCC/feature/v2-readme
update readme
2025-01-02 15:56:46 +08:00
CZJCC 4d8159ea10 update readme 2025-01-02 15:55:42 +08:00
CZJCC d2b30027e2
Merge pull request #203 from CZJCC/feature/v2-readme
fix bug with python 2.7
2025-01-02 15:53:06 +08:00
CZJCC b107c75c88 fix bug with python 2.7 2025-01-02 15:50:17 +08:00
CZJCC c760b4dc0c
Merge pull request #202 from CZJCC/feature/v2-readme
fix endpoint bug
2025-01-02 15:45:08 +08:00
CZJCC 5c232dff0e fix endpoint bug 2025-01-02 15:40:05 +08:00
CZJCC 7bceb7192d
Merge pull request #197 from CZJCC/feature/v2-readme
Feature/v2 readme
2024-12-19 17:42:48 +08:00
CZJCC a3fedeb83c v2 readme & beta4 release 2024-12-19 17:41:57 +08:00
CZJCC 7ede9bb65d add readme 2024-12-17 20:17:29 +08:00
CZJCC 809e38767e
Merge pull request #194 from nacos-group/feature/v2
Feature/v2
2024-12-17 16:33:21 +08:00
CZJCC 80a166158b
Merge branch 'master' into feature/v2 2024-12-17 16:33:11 +08:00
CZJCC d13a8587e3
Merge pull request #193 from CZJCC/feature/v2-develop
Feature/v2 develop
2024-12-17 16:30:06 +08:00
CZJCC f1c171329e Support automatic cleanup of async tasks 2024-12-17 16:28:59 +08:00
CZJCC 51f3caa043 Support config tag-based gray release 2024-12-17 14:35:12 +08:00
CZJCC 1773770c09 Support config tag-based gray release 2024-12-17 14:30:05 +08:00
CZJCC e6559774bd Support config encryption/decryption 2024-12-17 11:48:57 +08:00
CZJCC 3383bedd2b
Merge pull request #192 from yecol/patch-1
Fixed a format issue.
2024-12-12 10:15:31 +08:00
Jingbo Xu cd390ebfda
fix a format issue. 2024-12-12 10:11:13 +08:00
CZJCC c52e125789 naming v2 2024-12-09 17:38:43 +08:00
CZJCC e32ed9e82e naming fix bug 2024-11-07 17:53:25 +08:00
CZJCC 38c4e2c55a
Merge pull request #183 from Aiswel/develop-v2
Develop v2 for config_client
2024-10-31 20:12:50 +08:00
Aiswel 86f39e604c final test 2024-10-31 18:51:06 +08:00
Aiswel d10010c3aa Add config_client
gRPC protocol support

commit config_client

test publish config

update publish config

add config client and test config client

remove useless comments

remove useless comments

remove useless .vscode

change some test

add ConfigRemoveResponse in grpc_util

remove print

add test for console

fix some bugs based on comments in pr

add kms encryption

remove kms v3 and change readme.md for nacos v2
2024-10-31 17:37:23 +08:00
CZJCC f111222d01
Merge pull request #185 from CZJCC/feature/v2
Feature/v2
2024-10-31 15:20:47 +08:00
CZJCC 885bf8b4a4 Complete service v2 logic 2024-10-31 15:18:59 +08:00
CZJCC 53e48682da naming subscribe support 2024-10-18 18:35:53 +08:00
Guodong 3092737318
rm redundant errors 2024-09-25 18:11:29 +08:00
CZJCC 11d911555f
Merge pull request #182 from CZJCC/feature/v2
gRPC protocol support
2024-09-04 18:02:58 +08:00
CZJCC a539cdf298 gRPC protocol support 2024-09-04 17:58:09 +08:00
CZJCC 46c0fea1df
Merge pull request #180 from Aiswel/develop-v2
Develop v2
2024-08-16 22:31:50 +08:00
Aiswel 6609a3a540 commit transport 2024-08-16 13:40:45 +08:00
Aiswel 0b7ba3482f Merge branch 'upstream-develop' into develop-v2 2024-08-16 13:16:24 +08:00
CZJCC 72a06f6892
Merge pull request #176 from CZJCC/feature/v2
Feature/v2
2024-08-06 11:22:40 +08:00
CZJCC 310cf49ed2 Roll back grpc_tools version 2024-08-06 11:19:38 +08:00
Aiswel f1ee37bd61 add coroutine and grpc_client 2024-08-04 23:10:22 +08:00
CZJCC 5d37eafd97 commit 2024-08-02 16:46:10 +08:00
Aiswel b4963847ba change framework 2024-07-31 00:04:22 +08:00
Aiswel fc7a1b08f9 t config --global --unset https.proxy
Merge branch 'upstream-develop' into develop-v2
2024-07-30 23:36:41 +08:00
CZJCC 69b0d49fec
Merge pull request #174 from CZJCC/feature/v2
Reorganize directory structure
2024-07-24 10:55:44 +08:00
CZJCC c443bf0f70 Reorganize directory structure 2024-07-24 10:54:18 +08:00
CZJCC 697bda42fc
Merge pull request #173 from CZJCC/feature/v2
Reorganize directory structure
2024-07-24 10:50:53 +08:00
CZJCC 22e6354d9b Reorganize directory structure 2024-07-24 10:48:48 +08:00
Aiswel 192da91e02 add grpc and encryption for ospp 2024-07-21 23:06:56 +08:00
CZJCC 1da81c17c1
Merge pull request #161 from CZJCC/feature/v2
[WIP] Feature/v2
2024-07-01 16:13:01 +08:00
CZJCC c90b801a01 [WIP] feature v2 2024-07-01 16:00:20 +08:00
CZJCC fb679ecab8 commit 2024-06-25 09:44:58 +08:00
CZJCC 0dc9c27ca3 commit 2024-06-06 14:47:28 +08:00
CZJCC 439eb795b8 commit
commit
2024-05-31 10:58:26 +08:00
shouldsee b530df154c
Add func signature to parameter of client.subscribe 2022-12-19 13:43:56 +08:00
99 changed files with 5852 additions and 81 deletions

README.md

@ -1,4 +1,310 @@
# nacos-sdk-python
# nacos-sdk-python v2
A Python implementation of Nacos OpenAPI.
see: https://nacos.io/zh-cn/docs/open-API.html
[![Pypi Version](https://badge.fury.io/py/nacos-sdk-python.svg)](https://badge.fury.io/py/nacos-sdk-python)
[![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/nacos-group/nacos-sdk-python/blob/master/LICENSE)
### Supported Python version
Python 3.7+
### Supported Nacos version
Nacos 2.x and above
## Installation
```shell
pip install nacos-sdk-python
```
## Client Configuration
```
import os

from v2.nacos import NacosNamingService, ClientConfigBuilder, GRPCConfig, Instance, SubscribeServiceParam, \
    RegisterInstanceParam, DeregisterInstanceParam, BatchRegisterInstanceParam, GetServiceParam, ListServiceParam, \
    ListInstanceParam, NacosConfigService, ConfigParam

client_config = (ClientConfigBuilder()
                 .access_key(os.getenv('NACOS_ACCESS_KEY'))
                 .secret_key(os.getenv('NACOS_SECRET_KEY'))
                 .server_address(os.getenv('NACOS_SERVER_ADDR', 'localhost:8848'))
                 .log_level('INFO')
                 .grpc_config(GRPCConfig(grpc_timeout=5000))
                 .build())
```
* *server_address* - **required** - Nacos server address
* *access_key* - The aliyun accessKey to authenticate.
* *secret_key* - The aliyun secretKey to authenticate.
* *credentials_provider* - The custom access key manager.
* *username* - The username to authenticate.
* *password* - The password to authenticate.
* *log_level* - Log level | default: `logging.INFO`
* *cache_dir* - cache dir path. | default: `~/nacos/cache`
* *log_dir* - log dir path. | default: `~/logs/nacos`
* *namespace_id* - namespace id. | default: ``
* *grpc_config* - grpc config.
* *max_receive_message_length* - max receive message length in grpc. | default: 100 * 1024 * 1024
* *max_keep_alive_ms* - max keep alive ms in grpc. | default: 60 * 1000
* *initial_window_size* - initial window size in grpc. | default: 10 * 1024 * 1024
* *initial_conn_window_size* - initial connection window size in grpc. | default: 10 * 1024 * 1024
* *grpc_timeout* - grpc timeout in milliseconds. default: 3000
* *tls_config* - tls config
* *enabled* - whether enable tls.
* *ca_file* - ca file path.
* *cert_file* - cert file path.
* *key_file* - key file path.
* *kms_config* - aliyun kms config
* *enabled* - whether enable aliyun kms.
* *endpoint* - aliyun kms endpoint.
* *access_key* - aliyun accessKey.
* *secret_key* - aliyun secretKey.
* *password* - aliyun kms password.
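For deployments that authenticate with a Nacos username and password instead of Alibaba Cloud access keys, the same builder can be used. A minimal, hedged sketch (the builder calls mirror the test code further down this diff; all values are illustrative):
```python
import os

from v2.nacos import ClientConfigBuilder, GRPCConfig

# Username/password auth instead of access_key/secret_key; the remaining
# builder calls are the same as in the example above.
client_config = (ClientConfigBuilder()
                 .username(os.getenv('NACOS_USERNAME'))
                 .password(os.getenv('NACOS_PASSWORD'))
                 .server_address(os.getenv('NACOS_SERVER_ADDR', 'localhost:8848'))
                 .log_level('INFO')
                 .grpc_config(GRPCConfig(grpc_timeout=5000))
                 .build())
```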
## Config Client
```
config_client = await NacosConfigService.create_config_service(client_config)
```
### config client common parameters
> `param: ConfigParam`
* `param` *data_id* Data id.
* `param` *group* Group, use `DEFAULT_GROUP` if no group specified.
* `param` *content* Config content.
* `param` *tag* Config tag.
* `param` *app_name* Application name.
* `param` *beta_ips* Beta test ip address.
* `param` *cas_md5* MD5 check code.
* `param` *type* Config type.
* `param` *src_user* Source user.
* `param` *encrypted_data_key* Encrypted data key.
* `param` *kms_key_id* Kms encrypted data key id.
* `param` *usage_type* Usage type.
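All config operations below share this parameter object. A hedged sketch of constructing one (only data_id and group are needed for reads; the other fields from the list above are optional, and the values here are illustrative):
```python
from v2.nacos import ConfigParam

# data_id/group identify the config item; content is only needed when publishing.
# Optional fields such as tag, app_name or type can be set per the list above.
param = ConfigParam(
    data_id="com.example.app.config",  # illustrative data id
    group="DEFAULT_GROUP",
    content="timeout=30",
)
```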
### Get Config
```
content = await config_client.get_config(ConfigParam(
    data_id=data_id,
    group=group
))
```
* `param` *ConfigParam* config client common parameters. When getting configuration, it is necessary to specify the
required data_id and group in param.
* `return` Config content if success or an exception will be raised.
Get value of one config item following priority:
* Step 1 - Get from local failover dir.
* Step 2 - Get from one server until value is got or all servers tried.
* Content will be saved to the snapshot dir after it is fetched from the server.
* Step 3 - Get from snapshot dir.
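Since failures surface as exceptions rather than error codes, callers usually wrap the lookup. A hedged sketch, assuming the `config_client` created above (`NacosException` is exported from `v2.nacos`):
```python
from v2.nacos import ConfigParam, NacosException

async def load_config(data_id: str, group: str = "DEFAULT_GROUP") -> str:
    try:
        # Follows the priority above: failover dir, then server, then snapshot dir.
        return await config_client.get_config(ConfigParam(data_id=data_id, group=group))
    except NacosException as e:
        print(f"failed to load config {data_id}: {e}")
        return ""  # fall back to an application default
```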
### Add Listener
```
async def config_listener(tenant, data_id, group, content):
    print("listen, tenant:{} data_id:{} group:{} content:{}".format(tenant, data_id, group, content))

await config_client.add_listener(dataID, groupName, config_listener)
```
* `param` *ConfigParam* config client common parameters.
* `listener` *listener* Config listener callback, invoked with the namespace_id, group, data_id and content.
* `return`
Add Listener to a specified config item.
* Once the item is changed or deleted, callback functions will be invoked.
* If the item already exists on the server, callback functions will be invoked once.
* Callback functions are invoked from current process.
### Remove Listener
```
await client.remove_listener(dataID, groupName, config_listener)
```
* `param` *ConfigParam* config client common parameters.
* `return` True if success or an exception will be raised.
Remove watcher from specified key.
### Publish Config
```
res = await client.publish_config(ConfigParam(
data_id=dataID,
group=groupName,
content="Hello world")
)
```
* `param` *ConfigParam* config client common parameters. When publishing configuration, it is necessary to specify the
required data_id, group and content in param.
* `return` True if success or an exception will be raised.
Publish one config data item to Nacos.
* If the data key does not exist, it will be created first.
* If the data key exists, its content will be updated to the specified value.
* Content cannot be set to None; to delete a config item, use **remove_config** instead.
### Remove Config
```
res = await client.remove_config(ConfigParam(
data_id=dataID,
group=groupName
))
```
* `param` *ConfigParam* config client common parameters. When removing configuration, it is necessary to specify the
  required data_id and group in param.
* `return` True if success or an exception will be raised.
Remove one config data item from Nacos.
### Stop Config Client
```
await client.shutdown()
```
## Naming Client
```
naming_client = await NacosNamingService.create_naming_service(client_config)
```
### Register Instance
```python
response = await client.register_instance(
request=RegisterInstanceParam(service_name='nacos.test.1', group_name='DEFAULT_GROUP', ip='1.1.1.1',
port=7001, weight=1.0, cluster_name='c1', metadata={'a': 'b'},
enabled=True,
healthy=True, ephemeral=True))
```
### Batch Register Instance
```python
param1 = RegisterInstanceParam(service_name='nacos.test.1',
group_name='DEFAULT_GROUP',
ip='1.1.1.1',
port=7001,
weight=1.0,
cluster_name='c1',
metadata={'a': 'b'},
enabled=True,
healthy=True,
ephemeral=True
)
param2 = RegisterInstanceParam(service_name='nacos.test.1',
group_name='DEFAULT_GROUP',
ip='1.1.1.1',
port=7002,
weight=1.0,
cluster_name='c1',
metadata={'a': 'b'},
enabled=True,
healthy=True,
ephemeral=True
)
param3 = RegisterInstanceParam(service_name='nacos.test.1',
group_name='DEFAULT_GROUP',
ip='1.1.1.1',
port=7003,
weight=1.0,
cluster_name='c1',
metadata={'a': 'b'},
enabled=True,
healthy=False,
ephemeral=True
)
response = await client.batch_register_instances(
request=BatchRegisterInstanceParam(service_name='nacos.test.1', group_name='DEFAULT_GROUP',
instances=[param1, param2, param3]))
```
### Deregister Instance
```python
response = await client.deregister_instance(
request=DeregisterInstanceParam(service_name='nacos.test.1', group_name='DEFAULT_GROUP', ip='1.1.1.1',
port=7001, cluster_name='c1', ephemeral=True)
)
```
### Update Instance
```python
response = await client.update_instance(
request=RegisterInstanceParam(service_name='nacos.test.1', group_name='DEFAULT_GROUP', ip='1.1.1.1',
port=7001, weight=2.0, cluster_name='c1', metadata={'a': 'b'},
enabled=True,
healthy=True, ephemeral=True))
```
### Get Service
```python
service = await client.get_service(
GetServiceParam(service_name='nacos.test.1', group_name='DEFAULT_GROUP', cluster_name='c1'))
```
### List Service
```python
service_list = await client.list_services(ListServiceParam())
```
### List Instance
```python
instance_list = await client.list_instances(ListInstanceParam(service_name='nacos.test.1', healthy_only=True))
instance_list = await client.list_instances(ListInstanceParam(service_name='nacos.test.1', healthy_only=False))
instance_list = await client.list_instances(ListInstanceParam(service_name='nacos.test.1', healthy_only=None))
```
### Subscribe
```python
from typing import List

async def cb(instance_list: List[Instance]):
    print('received subscribe callback', str(instance_list))

await client.subscribe(
    SubscribeServiceParam(service_name='nacos.test.1', group_name='DEFAULT_GROUP', subscribe_callback=cb))
```
### Unsubscribe
```python
async def cb(instance_list: List[Instance]):
    print('received subscribe callback', str(instance_list))

await client.unsubscribe(
    SubscribeServiceParam(service_name='nacos.test.1', group_name='DEFAULT_GROUP', subscribe_callback=cb))
```
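Putting the naming calls together, a hedged end-to-end sketch modelled on `test/client_v2_test.py` from this diff (the service name and address are illustrative, and the `client_config` built earlier is assumed):
```python
import asyncio

from v2.nacos import (NacosNamingService, RegisterInstanceParam,
                      DeregisterInstanceParam, ListInstanceParam)

async def main():
    client = await NacosNamingService.create_naming_service(client_config)
    await client.register_instance(
        request=RegisterInstanceParam(service_name='nacos.test.1', group_name='DEFAULT_GROUP',
                                      ip='1.1.1.1', port=7001, weight=1.0, cluster_name='c1',
                                      metadata={'a': 'b'}, enabled=True, healthy=True, ephemeral=True))
    await asyncio.sleep(1)  # give the server a moment to propagate the registration
    instances = await client.list_instances(
        ListInstanceParam(service_name='nacos.test.1', healthy_only=True))
    print('healthy instances:', instances)
    await client.deregister_instance(
        request=DeregisterInstanceParam(service_name='nacos.test.1', group_name='DEFAULT_GROUP',
                                        ip='1.1.1.1', port=7001, cluster_name='c1', ephemeral=True))
    await client.shutdown()

asyncio.run(main())
```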
# nacos-sdk-python 1.0
A Python implementation of Nacos OpenAPI.
see: https://nacos.io/docs/latest/guide/user/open-api/
@ -6,7 +312,6 @@ see: https://nacos.io/docs/latest/guide/user/open-api/
[![Pypi Version](https://badge.fury.io/py/nacos-sdk-python.svg)](https://badge.fury.io/py/nacos-sdk-python)
[![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/nacos-group/nacos-sdk-python/blob/master/LICENSE)
### Supported Python version
Python 2.7
@ -14,17 +319,20 @@ Python 3.6
Python 3.7
### Supported Nacos version
Nacos 0.8.0+
Nacos 1.x
Nacos 2.x with http protocol
## Installation
```shell
pip install nacos-sdk-python
```
## Getting Started
```python
import nacos
@ -36,7 +344,7 @@ NAMESPACE = "namespace id"
# no auth mode
client = nacos.NacosClient(SERVER_ADDRESSES, namespace=NAMESPACE)
# auth mode
# client = nacos.NacosClient(SERVER_ADDRESSES, namespace=NAMESPACE, ak="{ak}", sk="{sk}")
# get config
data_id = "config.nacos"
@ -45,6 +353,7 @@ print(client.get_config(data_id, group))
```
## Configuration
```
client = NacosClient(server_addresses, namespace=your_ns, ak=your_ak, sk=your_sk)
```
@ -53,10 +362,12 @@ client = NacosClient(server_addresses, namespace=your_ns, ak=your_ak, sk=your_sk
* *namespace* - Namespace. | default: `None`
* *ak* - The accessKey to authenticate. | default: null
* *sk* - The secretKey to authenticate. | default: null
* *credentials_provider* - The custom access key manager | default: null
* *log_level* - Log level. | default: null
* *log_rotation_backup_count* - The number of log files to keep. | default: `7`
#### Extra Options
Extra options can be set via `set_options`, as follows:
```
@ -73,32 +384,36 @@ Configurable options are:
* *failover_base* - Dir to store failover config files.
* *snapshot_base* - Dir to store snapshot config files.
* *no_snapshot* - To disable default snapshot behavior, this can be overridden by param *no_snapshot* in *get* method.
* *proxies* - Dict proxy mapping; some environments require proxy access, so you can set this parameter so that HTTP requests go through the proxy.
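A hedged example combining the options above (paths and the proxy address are placeholders):
```python
client.set_options(
    failover_base="/data/nacos-data/data",      # dir for failover config files
    snapshot_base="/data/nacos-data/snapshot",  # dir for snapshot config files
    no_snapshot=False,                          # keep the default snapshot behavior
    proxies={"http": "192.168.56.1:809"},       # route HTTP requests through a proxy
)
```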
## API Reference
### Get Config
> `NacosClient.get_config(data_id, group, timeout, no_snapshot)`
* `param` *data_id* Data id.
* `param` *group* Group, use `DEFAULT_GROUP` if no group specified.
* `param` *timeout* Timeout for requesting server in seconds.
* `param` *no_snapshot* Whether to use local snapshot while server is unavailable.
* `return`
Get value of one config item following priority:
* Step 1 - Get from local failover dir(default: `${cwd}/nacos-data/data`).
* Failover dir can be manually copied from snapshot dir (default: `${cwd}/nacos-data/snapshot`) in advance.
* This helps to suppress the effect of known server failure.
* Step 2 - Get from one server until value is got or all servers tried.
* Content will be saved to the snapshot dir after it is fetched from the server.
* Step 3 - Get from snapshot dir.
### Add Watchers
> `NacosClient.add_config_watchers(data_id, group, cb_list)`
* `param` *data_id* Data id.
* `param` *group* Group, use `DEFAULT_GROUP` if no group specified.
@ -106,13 +421,15 @@ Get value of one config item following priority:
* `return`
Add watchers to a specified config item.
* Once the item is changed or deleted, callback functions will be invoked.
* If the item already exists on the server, callback functions will be invoked once.
* Multiple callbacks on one item is allowed and all callback functions are invoked concurrently by `threading.Thread`.
* Callback functions are invoked from current process.
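A hedged sketch of a watcher callback; judging from the long-polling code in `nacos/client.py` later in this diff, each callback receives a single dict with `data_id`, `group`, `namespace`, `raw_content` and `content` (the data id below is illustrative):
```python
def on_config_change(params):
    # params is the dict built by the polling loop in nacos/client.py
    print("config changed: data_id=%s group=%s content=%s"
          % (params["data_id"], params["group"], params["content"]))

client.add_config_watchers("config.nacos", "DEFAULT_GROUP", [on_config_change])
```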
### Remove Watcher
> `NacosClient.remove_config_watcher(data_id, group, cb, remove_all)`
* `param` *data_id* Data id.
* `param` *group* Group, use "DEFAULT_GROUP" if no group specified.
@ -123,7 +440,8 @@ Add watchers to a specified config item.
Remove watcher from specified key.
### Publish Config
> `NacosClient.publish_config(data_id, group, content, timeout)`
* `param` *data_id* Data id.
* `param` *group* Group, use "DEFAULT_GROUP" if no group specified.
@ -132,12 +450,15 @@ Remove watcher from specified key.
* `return` True if success or an exception will be raised.
Publish one data item to Nacos.
* If the data key does not exist, it will be created first.
* If the data key exists, its content will be updated to the specified value.
* Content cannot be set to None; to delete a config item, use **remove_config** instead.
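Hedged usage of the call documented above (the data id and content are illustrative):
```python
client.publish_config("config.nacos", "DEFAULT_GROUP", "Hello world", timeout=5)
```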
### Remove Config
> `NacosClient.remove_config(data_id, group, timeout)`
* `param` *data_id* Data id.
* `param` *group* Group, use "DEFAULT_GROUP" if no group specified.
* `param` *timeout* Timeout for requesting server in seconds.
@ -146,7 +467,10 @@ Publish one data item to Nacos.
Remove one data item from Nacos.
### Register Instance
> `NacosClient.add_naming_instance(service_name, ip, port, cluster_name, weight, metadata, enable, healthy, ephemeral, group_name, heartbeat_interval)`
* `param` *service_name* **required** Service name to register to.
* `param` *ip* **required** IP of the instance.
* `param` *port* **required** Port of the instance.
@ -160,7 +484,9 @@ Remove one data item from Nacos.
* `return` True if success or an exception will be raised.
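A hedged example for the signature above (the service name, address and metadata are placeholders):
```python
client.add_naming_instance("my-service", "10.0.0.10", 8080,
                           cluster_name="DEFAULT", weight=1.0,
                           metadata={"zone": "cn-hangzhou-a"},
                           enable=True, healthy=True, ephemeral=True,
                           group_name="DEFAULT_GROUP")
```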
### Deregister Instance
> `NacosClient.remove_naming_instance(service_name, ip, port, cluster_name)`
* `param` *service_name* **required** Service name to deregister from.
* `param` *ip* **required** IP of the instance.
* `param` *port* **required** Port of the instance.
@ -169,7 +495,9 @@ Remove one data item from Nacos.
* `return` True if success or an exception will be raised.
### Modify Instance
> `NacosClient.modify_naming_instance(service_name, ip, port, cluster_name, weight, metadata, enable)`
* `param` *service_name* **required** Service name.
* `param` *ip* **required** IP of the instance.
* `param` *port* **required** Port of the instance.
@ -181,7 +509,9 @@ Remove one data item from Nacos.
* `return` True if success or an exception will be raised.
### Query Instances
> `NacosClient.list_naming_instance(service_name, clusters, namespace_id, group_name, healthy_only)`
* `param` *service_name* **required** Service name to query.
* `param` *clusters* Cluster names separated by comma.
* `param` *namespace_id* Customized namespace id, default `blank`.
@ -190,7 +520,9 @@ Remove one data item from Nacos.
* `return` Instance info list if success or an exception will be raised.
### Query Instance Detail
> `NacosClient.get_naming_instance(service_name, ip, port, cluster_name)`
* `param` *service_name* **required** Service name.
* `param` *ip* **required** IP of the instance.
* `param` *port* **required** Port of the instance.
@ -198,7 +530,9 @@ Remove one data item from Nacos.
* `return` Instance info if success or an exception will be raised.
### Send Instance Beat
> `NacosClient.send_heartbeat(service_name, ip, port, cluster_name, weight, metadata)`
* `param` *service_name* **required** Service name.
* `param` *ip* **required** IP of the instance.
* `param` *port* **required** Port of the instance.
@ -209,8 +543,10 @@ Remove one data item from Nacos.
* `return` A JSON object include server recommended beat interval if success or an exception will be raised.
### Subscribe Service Instances Changed
>`NacosClient.subscribe(listener_fn, listener_interval=7, *args, **kwargs)`
* `param` *listener_fn* **required** Customized listener function, with signature `fn_listener1(event, instance) -> None`.
* `param` *listener_interval* Listen interval, default 7 seconds.
* `param` *service_name* **required** Service name which subscribes.
* `param` *clusters* Cluster names separated by comma.
@ -220,20 +556,25 @@ Remove one data item from Nacos.
* `return`
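A hedged sketch of subscribing with a plain function; the listener signature follows the `fn_listener1(event, instance) -> None` description above, and the service name is illustrative:
```python
def on_instances_changed(event, instance):
    # event describes the change, instance is the affected instance record
    print("instance %s: %s" % (event, instance))

client.subscribe(on_instances_changed, listener_interval=7, service_name="my-service")
```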
### Unsubscribe Service Instances Changed
> `NacosClient.unsubscribe(service_name, listener_name)`
* `param` *service_name* **required** Service name subscribed to.
* `param` *listener_name* Customized listener name.
* `return`
### Stop All Service Subscribe
> `NacosClient.stop_subscribe()`
* `return`
## Debugging Mode
Debugging mode is useful for getting more detailed logs on the console.
Debugging mode can be set by:
```
client = nacos.NacosClient(SERVER_ADDRESSES, namespace=NAMESPACE, username=USERNAME, password=PASSWORD, log_level="DEBUG")
```

nacos/auth.py

@ -0,0 +1,26 @@
class Credentials(object):
    def __init__(self, access_key_id, access_key_secret, security_token=None):
        self.access_key_id = access_key_id
        self.access_key_secret = access_key_secret
        self.security_token = security_token

    def get_access_key_id(self):
        return self.access_key_id

    def get_access_key_secret(self):
        return self.access_key_secret

    def get_security_token(self):
        return self.security_token


class CredentialsProvider(object):
    def get_credentials(self):
        return


class StaticCredentialsProvider(CredentialsProvider):
    def __init__(self, access_key_id="", access_key_secret="", security_token=""):
        self.credentials = Credentials(access_key_id, access_key_secret, security_token)

    def get_credentials(self):
        return self.credentials
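Below is a hedged sketch (not part of the diff) of how this provider interface might be implemented and passed to `NacosClient`, e.g. to pick up rotating STS credentials; the environment variable names and server address are illustrative:
```python
import os

import nacos
from nacos.auth import Credentials, CredentialsProvider

class EnvCredentialsProvider(CredentialsProvider):
    def get_credentials(self):
        # Re-read on every call so rotated credentials are picked up.
        return Credentials(os.getenv("NACOS_AK", ""),
                           os.getenv("NACOS_SK", ""),
                           os.getenv("NACOS_STS_TOKEN"))

client = nacos.NacosClient("127.0.0.1:8848", namespace="public",
                           credentials_provider=EnvCredentialsProvider())
```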

nacos/client.py

@ -14,6 +14,7 @@ from logging.handlers import TimedRotatingFileHandler
from typing import Dict
from .task import HeartbeatInfo, HeartbeatTask
from .auth import StaticCredentialsProvider
try:
import ssl
@ -276,7 +277,8 @@ class NacosClient:
logger.propagate = False
def __init__(self, server_addresses=None, endpoint=None, namespace=None, ak=None,
sk=None, username=None, password=None, logDir=None, log_level=None, log_rotation_backup_count=None):
sk=None, username=None, password=None, logDir=None, log_level=None,
log_rotation_backup_count=None, credentials_provider=None):
self.server_list = list()
self.initLog(logDir, log_level, log_rotation_backup_count)
try:
@ -308,8 +310,7 @@ class NacosClient:
self.endpoint = endpoint
self.namespace = namespace or DEFAULT_NAMESPACE or ""
self.ak = ak
self.sk = sk
self.credentials_provider = credentials_provider if credentials_provider else StaticCredentialsProvider(ak, sk)
self.username = username
self.password = password
@ -330,7 +331,8 @@ class NacosClient:
self.process_mgr = None
self.default_timeout = DEFAULTS["TIMEOUT"]
self.auth_enabled = self.ak and self.sk
credentials = self.credentials_provider.get_credentials()
self.auth_enabled = credentials.get_access_key_id() and credentials.get_access_key_secret()
self.cai_enabled = True
self.pulling_timeout = DEFAULTS["PULLING_TIMEOUT"]
self.pulling_config_size = DEFAULTS["PULLING_CONFIG_SIZE"]
@ -341,7 +343,7 @@ class NacosClient:
self.proxies = None
self.logDir = logDir
self.heartbeats: Dict[str, HeartbeatTask] = {}
self.heartbeats = {}
if self.username and self.password:
self.get_access_token()
logger.info("[client-init] endpoint:%s, tenant:%s" % (endpoint, namespace))
@ -423,7 +425,7 @@ class NacosClient:
params["type"] = config_type
try:
resp = self._do_sync_req("/nacos/v1/cs/configs", None, params, None,
resp = self._do_sync_req("/nacos/v1/cs/configs", None, None, params,
timeout or self.default_timeout, "POST")
c = resp.read()
logger.info("[publish] publish content, group:%s, data_id:%s, server response:%s" % (
@ -747,6 +749,7 @@ class NacosClient:
cache_pool[cache_key] = CacheData(cache_key, self)
while cache_list:
time.sleep(1)
unused_keys = set(cache_pool.keys())
contains_init_key = False
probe_update_string = ""
@ -774,20 +777,20 @@ class NacosClient:
# if contains_init_key:
# headers["longPullingNoHangUp"] = "true"
params = {"tenant": self.namespace} if self.namespace else None
data = {"Listening-Configs": probe_update_string}
changed_keys = list()
try:
resp = self._do_sync_req("/nacos/v1/cs/configs/listener", headers, None, data,
resp = self._do_sync_req("/nacos/v1/cs/configs/listener", headers, params, data,
self.pulling_timeout + 10, "POST")
changed_keys = [group_key(*i) for i in parse_pulling_result(resp.read())]
logger.info("[do-pulling] following keys are changed from server %s" % truncate(str(changed_keys)))
except NacosException as e:
logger.error("[do-pulling] nacos exception: %s, waiting for recovery" % str(e))
time.sleep(1)
except Exception as e:
logger.exception("[do-pulling] exception %s occur, return empty list, waiting for recovery" % str(e))
time.sleep(1)
for cache_key, cache_data in cache_pool.items():
cache_data.is_init = False
@ -813,35 +816,38 @@ class NacosClient:
logger.info("[init-pulling] init completed")
def _process_polling_result(self):
while True:
cache_key, content, md5 = self.notify_queue.get()
logger.info("[process-polling-result] receive an event:%s" % cache_key)
wl = self.watcher_mapping.get(cache_key)
if not wl:
logger.warning("[process-polling-result] no watcher on %s, ignored" % cache_key)
continue
try:
while True:
cache_key, content, md5 = self.notify_queue.get()
logger.info("[process-polling-result] receive an event:%s" % cache_key)
wl = self.watcher_mapping.get(cache_key)
if not wl:
logger.warning("[process-polling-result] no watcher on %s, ignored" % cache_key)
continue
data_id, group, namespace = parse_key(cache_key)
plain_content = content
data_id, group, namespace = parse_key(cache_key)
plain_content = content
params = {
"data_id": data_id,
"group": group,
"namespace": namespace,
"raw_content": content,
"content": plain_content,
}
for watcher in wl:
if not watcher.last_md5 == md5:
logger.info(
"[process-polling-result] md5 changed since last call, calling %s with changed md5: %s ,params: %s"
% (watcher.callback.__name__, md5, params))
try:
self.callback_tread_pool.apply(watcher.callback, (params,))
except Exception as e:
logger.exception("[process-polling-result] exception %s occur while calling %s " % (
str(e), watcher.callback.__name__))
watcher.last_md5 = md5
params = {
"data_id": data_id,
"group": group,
"namespace": namespace,
"raw_content": content,
"content": plain_content,
}
for watcher in wl:
if not watcher.last_md5 == md5:
logger.info(
"[process-polling-result] md5 changed since last call, calling %s with changed md5: %s ,params: %s"
% (watcher.callback.__name__, md5, params))
try:
self.callback_tread_pool.apply(watcher.callback, (params,))
except Exception as e:
logger.exception("[process-polling-result] exception %s occur while calling %s " % (
str(e), watcher.callback.__name__))
watcher.last_md5 = md5
except (EOFError, OSError) as e:
logger.exception(f"[process-polling-result] break when receive exception of {type(e)}")
@staticmethod
def _inject_version_info(headers):
@ -884,7 +890,8 @@ class NacosClient:
if not params and not data:
return
ts = str(int(time.time() * 1000))
ak, sk = self.ak, self.sk
# now we have a fixed credentials (access key or sts token)
credentials = self.credentials_provider.get_credentials()
sign_str = ""
@ -892,7 +899,7 @@ class NacosClient:
# config signature
if "config" == module:
headers.update({
"Spas-AccessKey": ak,
"Spas-AccessKey": credentials.get_access_key_id(),
"timeStamp": ts,
})
@ -905,7 +912,9 @@ class NacosClient:
sign_str = sign_str + group + "+"
if sign_str:
sign_str += ts
headers["Spas-Signature"] = self.__do_sign(sign_str, sk)
headers["Spas-Signature"] = self.__do_sign(sign_str, credentials.get_access_key_secret())
if credentials.get_security_token():
headers["Spas-SecurityToken"] = credentials.get_security_token()
# naming signature
else:
@ -922,10 +931,12 @@ class NacosClient:
sign_str = ts
params.update({
"ak": ak,
"ak": credentials.get_access_key_id(),
"data": sign_str,
"signature": self.__do_sign(sign_str, sk),
"signature": self.__do_sign(sign_str, credentials.get_access_key_secret()),
})
if credentials.get_security_token():
params.update({"Spas-SecurityToken": credentials.get_security_token()})
def __do_sign(self, sign_str, sk):
return base64.encodebytes(
@ -1150,8 +1161,8 @@ class NacosClient:
def send_heartbeat(self, service_name, ip, port, cluster_name=None, weight=1.0, metadata=None, ephemeral=True,
group_name=DEFAULT_GROUP_NAME):
logger.info("[send-heartbeat] ip:%s, port:%s, service_name:%s, namespace:%s" % (ip, port, service_name,
self.namespace))
logger.debug("[send-heartbeat] ip:%s, port:%s, service_name:%s, namespace:%s" % (ip, port, service_name,
self.namespace))
if "@@" not in service_name and group_name:
service_name = group_name + "@@" + service_name
@ -1186,7 +1197,7 @@ class NacosClient:
resp = self._do_sync_req("/nacos/v1/ns/instance/beat", None, params, None, self.default_timeout, "PUT",
"naming")
c = resp.read()
logger.info("[send-heartbeat] ip:%s, port:%s, service_name:%s, namespace:%s, server response:%s" %
logger.debug("[send-heartbeat] ip:%s, port:%s, service_name:%s, namespace:%s, server response:%s" %
(ip, port, service_name, self.namespace, c))
return json.loads(c.decode("UTF-8"))
except HTTPError as e:

requirements.txt

@ -0,0 +1,9 @@
aiofiles>=24.1.0
aiohttp>=3.10.11
alibabacloud_kms20160120>=2.2.3
alibabacloud_tea_openapi>=0.3.12
grpcio>=1.66.1
protobuf>=3.20.3
psutil>=5.9.5
pycryptodome>=3.19.1
pydantic>=2.10.4

setup.py

@ -5,8 +5,6 @@ from shutil import rmtree
from setuptools import find_packages, setup, Command
import nacos
# just run `python setup.py upload`
here = os.path.abspath(os.path.dirname(__file__))
@ -14,6 +12,11 @@ with io.open(os.path.join(here, 'README.md'), encoding='UTF-8') as f:
long_description = '\n' + f.read()
def read_requirements():
with open(os.path.join(here, 'requirements.txt'), encoding='utf-8') as f:
return f.read().splitlines()
class UploadCommand(Command):
"""Support setup.py upload."""
@ -50,7 +53,7 @@ class UploadCommand(Command):
setup(
name="nacos-sdk-python",
version=nacos.__version__,
version="2.0.5",
packages=find_packages(
exclude=["test", "*.tests", "*.tests.*", "tests.*", "tests"]),
url="https://github.com/nacos-group/nacos-sdk-python",
@ -69,7 +72,7 @@ setup(
description="Python client for Nacos.",
long_description=long_description,
long_description_content_type="text/markdown",
install_requires=[],
install_requires=read_requirements(),
# $ setup.py publish support.
cmdclass={
'upload': UploadCommand,


@ -8,6 +8,7 @@ import nacos
from nacos import files
from nacos.listener import SubscribeListener, SimpleListenerManager
from nacos.timer import NacosTimer, NacosTimerManager
from nacos.auth import CredentialsProvider, Credentials
import time
import shutil
@ -25,6 +26,13 @@ client = nacos.NacosClient(SERVER_ADDRESSES, namespace=NAMESPACE, username=USERN
# Set the following option if http requests need through by proxy
# client.set_options(proxies={"http":"192.168.56.1:809"})
class CustomCredentialsProvider(CredentialsProvider):
def __init__(self, ak="", sk="", token=""):
self.credential = Credentials(ak, sk, token)
def get_credentials(self):
return self.credential
class TestClient(unittest.TestCase):
def test_get_server(self):
self.assertEqual(client.get_server(), (SERVER_1, 8848))
@ -315,7 +323,7 @@ class TestClient(unittest.TestCase):
self.assertEqual(0, len(headers))
self.assertEqual(0, len(params))
def test_inject_auth_info_of_config(self):
def test_inject_auth_info_of_config_with_fixed_ak(self):
headers = {}
params = {"tenant": "abc", "group": "bbb"}
client_auth = nacos.NacosClient(SERVER_ADDRESSES, ak="1", sk="1")
@ -325,7 +333,7 @@ class TestClient(unittest.TestCase):
self.assertTrue("timeStamp" in headers)
self.assertTrue("Spas-Signature" in headers)
def test_inject_auth_info_of_naming(self):
def test_inject_auth_info_of_naming_with_fixed_ak(self):
headers = {}
params = {"serviceName": "abc", "groupName": "bbb"}
client_auth = nacos.NacosClient(SERVER_ADDRESSES, ak="1", sk="1")
@ -335,6 +343,48 @@ class TestClient(unittest.TestCase):
self.assertTrue("data" in params)
self.assertTrue("signature" in params)
def test_inject_auth_info_of_config_with_provider(self):
headers = {}
params = {"tenant": "abc", "group": "bbb"}
# access_key
client_auth = nacos.NacosClient(SERVER_ADDRESSES, credentials_provider=CustomCredentialsProvider("1", "1"))
self.assertFalse(client_auth.auth_enabled is None)
client_auth._inject_auth_info(headers, params, data=None, module="config")
self.assertEqual("1", headers.get("Spas-AccessKey"))
self.assertTrue("timeStamp" in headers)
self.assertTrue("Spas-Signature" in headers)
self.assertTrue("Spas-SecurityToken" not in headers)
# security_token
client_auth = nacos.NacosClient(SERVER_ADDRESSES, credentials_provider=CustomCredentialsProvider("1", "1", "1"))
self.assertFalse(client_auth.auth_enabled is None)
client_auth._inject_auth_info(headers, params, data=None, module="config")
self.assertEqual("1", headers.get("Spas-AccessKey"))
self.assertEqual("1", headers.get("Spas-SecurityToken"))
self.assertTrue("timeStamp" in headers)
self.assertTrue("Spas-Signature" in headers)
def test_inject_auth_info_of_naming_with_provider(self):
headers = {}
params = {"serviceName": "abc", "groupName": "bbb"}
# access_key
client_auth = nacos.NacosClient(SERVER_ADDRESSES, credentials_provider=CustomCredentialsProvider("1", "1"))
self.assertFalse(client_auth.auth_enabled is None)
client_auth._inject_auth_info(headers, params, data=None, module="naming")
self.assertEqual("1", params.get("ak"))
self.assertTrue("data" in params)
self.assertTrue("signature" in params)
self.assertTrue("Spas-SecurityToken" not in headers)
# security_token
client_auth = nacos.NacosClient(SERVER_ADDRESSES, credentials_provider=CustomCredentialsProvider("1", "1", "1"))
self.assertFalse(client_auth.auth_enabled is None)
client_auth._inject_auth_info(headers, params, data=None, module="naming")
self.assertEqual("1", params.get("ak"))
self.assertEqual("1", params.get("Spas-SecurityToken"))
self.assertTrue("data" in params)
self.assertTrue("signature" in params)
if __name__ == '__main__':
unittest.main()
unittest.main()

test/client_v2_test.py

@ -0,0 +1,191 @@
import asyncio
import os
import unittest
from typing import List
from v2.nacos import ConfigParam
from v2.nacos.common.client_config import GRPCConfig
from v2.nacos.common.client_config_builder import ClientConfigBuilder
from v2.nacos.naming.model.instance import Instance
from v2.nacos.naming.model.naming_param import RegisterInstanceParam, DeregisterInstanceParam, \
BatchRegisterInstanceParam, GetServiceParam, ListServiceParam, SubscribeServiceParam, ListInstanceParam
from v2.nacos.naming.nacos_naming_service import NacosNamingService
from v2.nacos.config.nacos_config_service import NacosConfigService
from v2.nacos.common.auth import CredentialsProvider, Credentials
client_config = (ClientConfigBuilder()
.access_key(os.getenv('NACOS_ACCESS_KEY'))
.secret_key(os.getenv('NACOS_SECRET_KEY'))
.server_address(os.getenv('NACOS_SERVER_ADDR', 'localhost:8848'))
.log_level('INFO')
.grpc_config(GRPCConfig(grpc_timeout=5000))
.build())
class CustomCredentialsProvider(CredentialsProvider):
def __init__(self, ak="", sk="", token=""):
self.credential = Credentials(ak, sk, token)
def get_credentials(self):
return self.credential
class TestClientV2(unittest.IsolatedAsyncioTestCase):
async def test_init_naming_and_config_service(self):
config_client = await NacosConfigService.create_config_service(client_config)
assert await config_client.server_health()
naming_client = await NacosNamingService.create_naming_service(client_config)
assert await naming_client.server_health()
response = await naming_client.register_instance(
request=RegisterInstanceParam(service_name='nacos.test.1', group_name='DEFAULT_GROUP', ip='1.1.1.1',
port=7001, weight=1.0, cluster_name='c1', metadata={'a': 'b'},
enabled=True,
healthy=True, ephemeral=True))
self.assertEqual(response, True)
print('register instance')
data_id = "com.alibaba.nacos.test.config"
group = "DEFAULT_GROUP"
content = await config_client.get_config(ConfigParam(
data_id=data_id,
group=group,
))
assert content == ""
await asyncio.sleep(10000)
async def test_register_with_endpoint_and_fixed_ak(self):
config = (ClientConfigBuilder()
.access_key(os.getenv('NACOS_ACCESS_KEY'))
.secret_key(os.getenv('NACOS_SECRET_KEY'))
.endpoint(os.getenv('NACOS_SERVER_ENDPOINT', 'localhost:8848'))
.log_level('INFO')
.grpc_config(GRPCConfig(grpc_timeout=5000))
.build())
client = await NacosNamingService.create_naming_service(config)
assert await client.server_health()
async def test_register_with_endpoint_and_provider(self):
config = (ClientConfigBuilder()
.credentials_provider(CustomCredentialsProvider(os.getenv('NACOS_ACCESS_KEY'), os.getenv('NACOS_SECRET_KEY')))
.endpoint(os.getenv('NACOS_SERVER_ENDPOINT', 'localhost:8848'))
.log_level('INFO')
.grpc_config(GRPCConfig(grpc_timeout=5000))
.build())
client = await NacosNamingService.create_naming_service(config)
assert await client.server_health()
async def test_register(self):
client = await NacosNamingService.create_naming_service(client_config)
assert await client.server_health()
async def cb(instance_list: List[Instance]):
print('received subscribe callback', str(instance_list))
await client.subscribe(
SubscribeServiceParam(service_name='nacos.test.1', group_name='DEFAULT_GROUP', subscribe_callback=cb))
print('subscribe service')
response = await client.register_instance(
request=RegisterInstanceParam(service_name='nacos.test.1', group_name='DEFAULT_GROUP', ip='1.1.1.1',
port=7001, weight=1.0, cluster_name='c1', metadata={'a': 'b'},
enabled=True,
healthy=True, ephemeral=True))
self.assertEqual(response, True)
print('register instance')
await asyncio.sleep(1)
response = await client.update_instance(
request=RegisterInstanceParam(service_name='nacos.test.1', group_name='DEFAULT_GROUP', ip='1.1.1.1',
port=7001, weight=2.0, cluster_name='c1', metadata={'a': 'b'},
enabled=True,
healthy=True, ephemeral=True))
self.assertEqual(response, True)
print('update instance')
await asyncio.sleep(1)
response = await client.deregister_instance(
request=DeregisterInstanceParam(service_name='nacos.test.1', group_name='DEFAULT_GROUP', ip='1.1.1.1',
port=7001, cluster_name='c1', ephemeral=True)
)
self.assertEqual(response, True)
print('deregister instance')
await asyncio.sleep(1)
param1 = RegisterInstanceParam(service_name='nacos.test.1',
group_name='DEFAULT_GROUP',
ip='1.1.1.1',
port=7001,
weight=1.0,
cluster_name='c1',
metadata={'a': 'b'},
enabled=True,
healthy=True,
ephemeral=True
)
param2 = RegisterInstanceParam(service_name='nacos.test.1',
group_name='DEFAULT_GROUP',
ip='1.1.1.1',
port=7002,
weight=1.0,
cluster_name='c1',
metadata={'a': 'b'},
enabled=True,
healthy=True,
ephemeral=True
)
param3 = RegisterInstanceParam(service_name='nacos.test.1',
group_name='DEFAULT_GROUP',
ip='1.1.1.1',
port=7003,
weight=1.0,
cluster_name='c1',
metadata={'a': 'b'},
enabled=True,
healthy=False,
ephemeral=True
)
response = await client.batch_register_instances(
request=BatchRegisterInstanceParam(service_name='nacos.test.1', group_name='DEFAULT_GROUP',
instances=[param1, param2, param3]))
self.assertEqual(response, True)
print('batch register instance')
await asyncio.sleep(1)
service = await client.get_service(
GetServiceParam(service_name='nacos.test.1', group_name='DEFAULT_GROUP', cluster_name='c1'))
print('get service', str(service))
assert service.name == 'nacos.test.1'
service_list = await client.list_services(ListServiceParam())
assert service_list.count == 1
instance_list = await client.list_instances(ListInstanceParam(service_name='nacos.test.1', healthy_only=True))
assert len(instance_list) == 2
instance_list = await client.list_instances(ListInstanceParam(service_name='nacos.test.1', healthy_only=False))
assert len(instance_list) == 1
instance_list = await client.list_instances(ListInstanceParam(service_name='nacos.test.1', healthy_only=None))
assert len(instance_list) == 3
await client.unsubscribe(
SubscribeServiceParam(service_name='nacos.test.1', group_name='DEFAULT_GROUP', subscribe_callback=cb))
await client.shutdown()
if __name__ == '__main__':
unittest.main()


@ -0,0 +1,282 @@
import asyncio
import os
import unittest
from v2.nacos.common.client_config import GRPCConfig, KMSConfig
from v2.nacos.common.client_config_builder import ClientConfigBuilder
from v2.nacos.config.model.config_param import ConfigParam
from v2.nacos.config.nacos_config_service import NacosConfigService
from v2.nacos.common.auth import CredentialsProvider, Credentials
client_config = (ClientConfigBuilder()
# .access_key(os.getenv('NACOS_ACCESS_KEY'))
# .secret_key(os.getenv('NACOS_SECRET_KEY'))
.username(os.getenv('NACOS_USERNAME'))
.password(os.getenv('NACOS_PASSWORD'))
.server_address(os.getenv('NACOS_SERVER_ADDR', 'localhost:8848'))
.log_level('INFO')
.grpc_config(GRPCConfig(grpc_timeout=5000))
.build())
class CustomCredentialsProvider(CredentialsProvider):
def __init__(self, ak="", sk="", token=""):
self.credential = Credentials(ak, sk, token)
def get_credentials(self):
return self.credential
class TestClientV2(unittest.IsolatedAsyncioTestCase):
async def test_publish_config(self):
client = await NacosConfigService.create_config_service(client_config)
assert await client.server_health()
data_id = "com.alibaba.nacos.test.config"
group = "DEFAULT_GROUP"
content = await client.get_config(ConfigParam(
data_id=data_id,
group=group,
))
assert content == ""
res = await client.publish_config(ConfigParam(
data_id=data_id,
group=group,
content="Hello world")
)
assert res
print("success to publish")
await asyncio.sleep(0.2)
content = await client.get_config(ConfigParam(
data_id=data_id,
group=group,
))
assert content == "Hello world"
print("success to get config")
res = await client.remove_config(ConfigParam(
data_id=data_id,
group=group
))
assert res
print("success to remove")
await asyncio.sleep(0.1)
content = await client.get_config(ConfigParam(
data_id=data_id,
group=group,
))
assert content == ""
await client.shutdown()
async def test_config_listener(self):
client = await NacosConfigService.create_config_service(client_config)
assert await client.server_health()
dataID = "com.alibaba.nacos.test.config"
groupName = "DEFAULT_GROUP"
async def config_listener1(tenant, data_id, group, content):
print("listen1, tenant:{} data_id:{} group:{} content:{}".format(tenant, data_id, group, content))
async def config_listener2(tenant, data_id, group, content):
print("listen2, tenant:{} data_id:{} group:{} content:{}".format(tenant, data_id, group, content))
await client.add_listener(dataID, groupName, config_listener1)
await client.add_listener(dataID, groupName, config_listener2)
await asyncio.sleep(3)
res = await client.publish_config(ConfigParam(
data_id=dataID,
group=groupName,
content="Hello world")
)
assert res
print("success to publish")
await asyncio.sleep(3)
res = await client.publish_config(ConfigParam(
data_id=dataID,
group=groupName,
content="Hello world2")
)
assert res
await asyncio.sleep(3)
await client.remove_listener(dataID, groupName, config_listener1)
await asyncio.sleep(3)
res = await client.publish_config(ConfigParam(
data_id=dataID,
group=groupName,
content="Hello world3")
)
assert res
await asyncio.sleep(3)
res = await client.remove_config(ConfigParam(
data_id=dataID,
group=groupName
))
assert res
print("success to remove")
await asyncio.sleep(3)
await client.shutdown()
async def test_cipher_config(self):
kms_config = KMSConfig(
enabled=True,
endpoint=os.getenv('KMS_ENDPOINT'),
access_key=os.getenv('NACOS_ACCESS_KEY'),
secret_key=os.getenv('NACOS_SECRET_KEY'),
)
client_config.set_kms_config(kms_config)
client = await NacosConfigService.create_config_service(client_config)
dataID = "cipher-kms-aes-128-crypt"
groupName = "DEFAULT_GROUP"
res = await client.publish_config(
param=ConfigParam(
data_id=dataID,
group=groupName,
content="加密内容-128",
kms_key_id=os.getenv("KMS_KEY_ID")))
assert res
print("success to publish")
await asyncio.sleep(0.1)
content = await client.get_config(ConfigParam(
data_id=dataID,
group=groupName,
kms_key_id=os.getenv("KMS_KEY_ID")
))
print("success to get config:" + content)
assert content == '加密内容-128'
dataID = "cipher-kms-aes-256-crypt"
groupName = "DEFAULT_GROUP"
res = await client.publish_config(
param=ConfigParam(
data_id=dataID,
group=groupName,
content="加密内容-256",
kms_key_id=os.getenv("KMS_KEY_ID")))
assert res
print("success to publish")
await asyncio.sleep(0.1)
content = await client.get_config(ConfigParam(
data_id=dataID,
group=groupName,
kms_key_id=os.getenv("KMS_KEY_ID")
))
print("success to get config:" + content)
assert content == '加密内容-256'
await asyncio.sleep(5)
await client.shutdown()
async def test_cipher_config_listener(self):
kms_config = KMSConfig(
enabled=True,
endpoint=os.getenv("KMS_ENDPOINT"),
access_key=os.getenv('NACOS_ACCESS_KEY'),
secret_key=os.getenv('NACOS_SECRET_KEY'),
)
client_cfg = (ClientConfigBuilder()
.access_key(os.getenv('NACOS_ACCESS_KEY'))
.secret_key(os.getenv('NACOS_SECRET_KEY'))
.server_address(os.getenv('NACOS_SERVER_ADDR', 'localhost:8848'))
.log_level('INFO')
.kms_config(kms_config)
.grpc_config(GRPCConfig(grpc_timeout=5000))
.build())
client = await NacosConfigService.create_config_service(client_cfg)
dataID = "cipher-kms-aes-128-crypt"
groupName = "DEFAULT_GROUP"
async def config_listener(tenant, data_id, group, content):
print("listen1, tenant:{} data_id:{} group:{} content:{}".format(tenant, data_id, group, content))
await client.add_listener(dataID, groupName, config_listener)
await asyncio.sleep(3)
res = await client.publish_config(
param=ConfigParam(
data_id=dataID,
group=groupName,
content="加密内容-1",
kms_key_id=os.getenv("KMS_KEY_ID")))
assert res
print("success to publish")
await asyncio.sleep(3)
res = await client.publish_config(
param=ConfigParam(
data_id=dataID,
group=groupName,
content="加密内容-2",
kms_key_id=os.getenv("KMS_KEY_ID")))
assert res
await asyncio.sleep(3)
await client.shutdown()
async def _gray_config(self, client_cfg):
client = await NacosConfigService.create_config_service(client_cfg)
dataID = "com.alibaba.nacos.test.config.gray"
groupName = "DEFAULT_GROUP"
async def config_listener(tenant, data_id, group, content):
print("listen1, tenant:{} data_id:{} group:{} content:{}".format(tenant, data_id, group, content))
await client.add_listener(dataID, groupName, config_listener)
await asyncio.sleep(1000)
async def test_gray_config_with_fixed_ak(self):
client_cfg = (ClientConfigBuilder()
.access_key(os.getenv('NACOS_ACCESS_KEY'))
.secret_key(os.getenv('NACOS_SECRET_KEY'))
.server_address(os.getenv('NACOS_SERVER_ADDR', 'localhost:8848'))
.log_level('INFO')
.app_conn_labels({"k1": "v1", "k2": "v2", "nacos_config_gray_label": "gray"})
.grpc_config(GRPCConfig(grpc_timeout=5000))
.build())
await self._gray_config(client_cfg)
async def test_gray_config_with_provider(self):
client_cfg = (ClientConfigBuilder()
.credentials_provider(
CustomCredentialsProvider(os.getenv('NACOS_ACCESS_KEY'), os.getenv('NACOS_SECRET_KEY')))
.server_address(os.getenv('NACOS_SERVER_ADDR', 'localhost:8848'))
.log_level('INFO')
.app_conn_labels({"k1": "v1", "k2": "v2", "nacos_config_gray_label": "gray"})
.grpc_config(GRPCConfig(grpc_timeout=5000))
.build())
await self._gray_config(client_cfg)

v2/__init__.py

v2/nacos/__init__.py

@ -0,0 +1,41 @@
from .common.client_config import (KMSConfig,
GRPCConfig,
TLSConfig,
ClientConfig)
from .common.client_config_builder import ClientConfigBuilder
from .common.nacos_exception import NacosException
from .config.model.config_param import ConfigParam
from .config.nacos_config_service import NacosConfigService
from .naming.model.instance import Instance
from .naming.model.naming_param import (RegisterInstanceParam,
BatchRegisterInstanceParam,
DeregisterInstanceParam,
ListInstanceParam,
SubscribeServiceParam,
GetServiceParam,
ListServiceParam)
from .naming.model.service import (Service,
ServiceList)
from .naming.nacos_naming_service import NacosNamingService
__all__ = [
"KMSConfig",
"GRPCConfig",
"TLSConfig",
"ClientConfig",
"ClientConfigBuilder",
"NacosException",
"ConfigParam",
"NacosConfigService",
"Instance",
"Service",
"ServiceList",
"RegisterInstanceParam",
"BatchRegisterInstanceParam",
"DeregisterInstanceParam",
"ListInstanceParam",
"SubscribeServiceParam",
"GetServiceParam",
"ListServiceParam",
"NacosNamingService"
]


v2/nacos/common/auth.py

@ -0,0 +1,32 @@
class Credentials(object):
    def __init__(self, access_key_id, access_key_secret, security_token=None):
        self.access_key_id = access_key_id
        self.access_key_secret = access_key_secret
        self.security_token = security_token

    def get_access_key_id(self):
        return self.access_key_id

    def get_access_key_secret(self):
        return self.access_key_secret

    def get_security_token(self):
        return self.security_token


class CredentialsProvider(object):
    def get_credentials(self):
        return


class StaticCredentialsProvider(CredentialsProvider):
    def __init__(self, access_key_id="", access_key_secret="", security_token=""):
        self.credentials = Credentials(access_key_id, access_key_secret, security_token)

    def get_credentials(self):
        return self.credentials

    def set_access_key_id(self, access_key_id):
        self.credentials.access_key_id = access_key_id

    def set_access_key_secret(self, access_key_secret):
        self.credentials.access_key_secret = access_key_secret
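A hedged sketch of plugging a custom provider into the v2 client configuration, following the `.credentials_provider(...)` builder call used in the tests in this diff; the environment variable names are illustrative:
```python
import os

from v2.nacos import ClientConfigBuilder, GRPCConfig
from v2.nacos.common.auth import Credentials, CredentialsProvider

class EnvCredentialsProvider(CredentialsProvider):
    def get_credentials(self):
        # Swap in KMS/STS retrieval here as needed.
        return Credentials(os.getenv("NACOS_AK", ""),
                           os.getenv("NACOS_SK", ""),
                           os.getenv("NACOS_STS_TOKEN"))

client_config = (ClientConfigBuilder()
                 .credentials_provider(EnvCredentialsProvider())
                 .server_address(os.getenv('NACOS_SERVER_ADDR', 'localhost:8848'))
                 .grpc_config(GRPCConfig(grpc_timeout=5000))
                 .build())
```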

v2/nacos/common/client_config.py

@ -0,0 +1,127 @@
import logging
from v2.nacos.common.constants import Constants
from v2.nacos.common.nacos_exception import NacosException, INVALID_PARAM
from v2.nacos.common.auth import StaticCredentialsProvider
class KMSConfig:
def __init__(self, enabled=False, endpoint='', access_key='', secret_key='', client_key_content='', password=''):
self.enabled = enabled # whether to enable KMS
self.endpoint = endpoint # KMS service endpoint
self.client_key_content = client_key_content
self.password = password
self.access_key = access_key
self.secret_key = secret_key
class TLSConfig:
def __init__(self, enabled=False, appointed=False, ca_file='', cert_file='',
key_file='', server_name_override=''):
self.enabled = enabled # whether to enable TLS
self.appointed = appointed # whether to use the preset configuration
self.ca_file = ca_file # path to the CA certificate file
self.cert_file = cert_file # path to the client certificate file
self.key_file = key_file # path to the private key file
self.server_name_override = server_name_override # server name override (for testing)
def __str__(self):
return str(self.__dict__)
class GRPCConfig:
def __init__(self, max_receive_message_length=Constants.GRPC_MAX_RECEIVE_MESSAGE_LENGTH,
max_keep_alive_ms=Constants.GRPC_KEEPALIVE_TIME_MILLS,
initial_window_size=Constants.GRPC_INITIAL_WINDOW_SIZE,
initial_conn_window_size=Constants.GRPC_INITIAL_CONN_WINDOW_SIZE,
grpc_timeout=Constants.DEFAULT_GRPC_TIMEOUT_MILLS):
self.max_receive_message_length = max_receive_message_length
self.max_keep_alive_ms = max_keep_alive_ms
self.initial_window_size = initial_window_size
self.initial_conn_window_size = initial_conn_window_size
self.grpc_timeout = grpc_timeout
class ClientConfig:
def __init__(self, server_addresses=None, endpoint=None, namespace_id='', context_path='', access_key=None,
secret_key=None, username=None, password=None, app_name='', app_key='', log_dir='', log_level=None,
log_rotation_backup_count=None, app_conn_labels=None, credentials_provider=None):
self.server_list = []
try:
if server_addresses is not None and server_addresses.strip() != "":
for server_address in server_addresses.strip().split(','):
self.server_list.append(server_address.strip())
except Exception:
raise NacosException(INVALID_PARAM, "server_addresses is invalid")
self.endpoint = endpoint
self.endpoint_context_path = Constants.WEB_CONTEXT
self.endpoint_query_header = None
self.namespace_id = namespace_id
self.credentials_provider = credentials_provider if credentials_provider else StaticCredentialsProvider(access_key, secret_key)
self.context_path = context_path
self.username = username # the username for nacos auth
self.password = password # the password for nacos auth
self.app_name = app_name
self.app_key = app_key
self.cache_dir = ''
self.disable_use_config_cache = False
self.log_dir = log_dir
        self.log_level = logging.INFO if log_level is None else log_level  # log level for the nacos client, defaults to logging.INFO
        self.log_rotation_backup_count = 7 if log_rotation_backup_count is None else log_rotation_backup_count
        self.timeout_ms = 10 * 1000  # timeout for requesting Nacos server, default value is 10000ms
        self.heart_beat_interval = 5 * 1000  # the time interval for sending beat to server, default value is 5000ms
self.kms_config = KMSConfig(enabled=False)
self.tls_config = TLSConfig(enabled=False)
self.grpc_config = GRPCConfig()
self.load_cache_at_start = True
self.update_cache_when_empty = False
self.app_conn_labels = app_conn_labels
def set_log_level(self, log_level):
self.log_level = log_level
return self
def set_cache_dir(self, cache_dir):
self.cache_dir = cache_dir
return self
def set_log_dir(self, log_dir):
self.log_dir = log_dir
return self
def set_timeout_ms(self, timeout_ms):
self.timeout_ms = timeout_ms
return self
def set_heart_beat_interval(self, heart_beat_interval):
self.heart_beat_interval = heart_beat_interval
return self
def set_tls_config(self, tls_config: TLSConfig):
self.tls_config = tls_config
return self
def set_kms_config(self, kms_config: KMSConfig):
self.kms_config = kms_config
return self
def set_grpc_config(self, grpc_config: GRPCConfig):
self.grpc_config = grpc_config
return self
def set_load_cache_at_start(self, load_cache_at_start):
self.load_cache_at_start = load_cache_at_start
return self
def set_update_cache_when_empty(self, update_cache_when_empty: bool):
self.update_cache_when_empty = update_cache_when_empty
return self
def set_endpoint_context_path(self, endpoint_context_path):
self.endpoint_context_path = endpoint_context_path
return self
def set_app_conn_labels(self, app_conn_labels: dict):
self.app_conn_labels = app_conn_labels
return self

@@ -0,0 +1,105 @@
from typing import Dict, List
from v2.nacos.common.auth import CredentialsProvider, StaticCredentialsProvider
from v2.nacos.common.client_config import ClientConfig, GRPCConfig
from v2.nacos.common.client_config import KMSConfig
from v2.nacos.common.client_config import TLSConfig
from v2.nacos.common.constants import Constants
class ClientConfigBuilder:
def __init__(self):
self._config = ClientConfig()
def server_address(self, server_address: str) -> "ClientConfigBuilder":
if server_address is not None and server_address.strip() != "":
for server_address in server_address.strip().split(','):
self._config.server_list.append(server_address.strip())
return self
def endpoint(self, endpoint) -> "ClientConfigBuilder":
self._config.endpoint = endpoint
return self
def namespace_id(self, namespace_id: str) -> "ClientConfigBuilder":
if namespace_id is None:
namespace_id = Constants.DEFAULT_NAMESPACE_ID
self._config.namespace_id = namespace_id
return self
def timeout_ms(self, timeout_ms) -> "ClientConfigBuilder":
self._config.timeout_ms = timeout_ms
return self
def heart_beat_interval(self, heart_beat_interval) -> "ClientConfigBuilder":
self._config.heart_beat_interval = heart_beat_interval
return self
def log_level(self, log_level) -> "ClientConfigBuilder":
self._config.log_level = log_level
return self
def log_dir(self, log_dir: str) -> "ClientConfigBuilder":
self._config.log_dir = log_dir
return self
def access_key(self, access_key: str) -> "ClientConfigBuilder":
if not self._config.credentials_provider:
self._config.credentials_provider = StaticCredentialsProvider(access_key_id=access_key)
else:
self._config.credentials_provider.set_access_key_id(access_key)
return self
def secret_key(self, secret_key: str) -> "ClientConfigBuilder":
if not self._config.credentials_provider:
self._config.credentials_provider = StaticCredentialsProvider(access_key_secret=secret_key)
else:
self._config.credentials_provider.set_access_key_secret(secret_key)
return self
def credentials_provider(self, credentials_provider: CredentialsProvider) -> "ClientConfigBuilder":
self._config.credentials_provider = credentials_provider
return self
def username(self, username: str) -> "ClientConfigBuilder":
self._config.username = username
return self
def password(self, password: str) -> "ClientConfigBuilder":
self._config.password = password
return self
def cache_dir(self, cache_dir: str) -> "ClientConfigBuilder":
self._config.cache_dir = cache_dir
return self
def tls_config(self, tls_config: TLSConfig) -> "ClientConfigBuilder":
self._config.tls_config = tls_config
return self
def kms_config(self, kms_config: KMSConfig) -> "ClientConfigBuilder":
self._config.kms_config = kms_config
return self
def grpc_config(self, grpc_config: GRPCConfig) -> "ClientConfigBuilder":
self._config.grpc_config = grpc_config
return self
def load_cache_at_start(self, load_cache_at_start: bool) -> "ClientConfigBuilder":
self._config.load_cache_at_start = load_cache_at_start
return self
def app_conn_labels(self, app_conn_labels: dict) -> "ClientConfigBuilder":
if self._config.app_conn_labels is None:
self._config.app_conn_labels = {}
self._config.app_conn_labels.update(app_conn_labels)
return self
def endpoint_query_header(self, endpoint_query_header: Dict[str, str]) -> "ClientConfigBuilder":
if self._config.endpoint_query_header is None:
self._config.endpoint_query_header = {}
self._config.endpoint_query_header.update(endpoint_query_header)
return self
def build(self):
return self._config
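
The access_key()/secret_key() setters route the key pair into the config's credentials provider (a StaticCredentialsProvider by default), so key material always ends up behind the provider interface. A small sketch of the resulting config, with placeholder values:

from v2.nacos import ClientConfigBuilder

cfg = (ClientConfigBuilder()
       .server_address("localhost:8848")  # placeholder address
       .access_key("ak-placeholder")
       .secret_key("sk-placeholder")
       .build())

provider = cfg.credentials_provider  # a StaticCredentialsProvider
print(provider.get_credentials().get_access_key_id())      # ak-placeholder
print(provider.get_credentials().get_access_key_secret())  # sk-placeholder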

@@ -0,0 +1,227 @@
class Constants:
NAMING_MODULE = "naming"
CONFIG_MODULE = "config"
LABEL_SOURCE = "source"
LABEL_SOURCE_SDK = "sdk"
LABEL_SOURCE_CLUSTER = "cluster"
LABEL_MODULE = "module"
CLIENT_VERSION = "Nacos-Python-Client:v2.0.0"
DATA_IN_BODY_VERSION = 204
DEFAULT_GROUP = "DEFAULT_GROUP"
WEB_CONTEXT = "/nacos"
APPNAME = "AppName"
UNKNOWN_APP = "UnknownApp"
DEFAULT_DOMAINNAME = "commonconfig.config-host.taobao.com"
DAILY_DOMAINNAME = "commonconfig.taobao.net"
NULL = ""
DATAID = "dataId"
GROUP = "group"
DEFAULT_HEARTBEAT_INTERVAL = 5
LAST_MODIFIED = "Last-Modified"
ACCEPT_ENCODING = "Accept-Encoding"
CONTENT_ENCODING = "Content-Encoding"
PROBE_MODIFY_REQUEST = "Listening-Configs"
PROBE_MODIFY_RESPONSE = "Probe-Modify-Response"
PROBE_MODIFY_RESPONSE_NEW = "Probe-Modify-Response-New"
USE_ZIP = "true"
CONTENT_MD5 = "Content-MD5"
CONFIG_VERSION = "Config-Version"
CONFIG_TYPE = "Config-Type"
ENCRYPTED_DATA_KEY = "Encrypted-Data-Key"
IF_MODIFIED_SINCE = "If-Modified-Since"
SPACING_INTERVAL = "client-spacing-interval"
BASE_PATH = "/v1/cs"
SERVICE_BASE_PATH = "/v1/ns"
CONFIG_CONTROLLER_PATH = BASE_PATH + "/configs"
TOKEN = "token"
ACCESS_TOKEN = "accessToken"
TOKEN_TTL = "tokenTtl"
GLOBAL_ADMIN = "globalAdmin"
USERNAME = "username"
TOKEN_REFRESH_WINDOW = "tokenRefreshWindow"
# second.
ASYNC_UPDATE_ADDRESS_INTERVAL = 300
# second.
POLLING_INTERVAL_TIME = 15
# millisecond.
ONCE_TIMEOUT = 2000
# millisecond.
SO_TIMEOUT = 60000
# millisecond.
CONFIG_LONG_POLL_TIMEOUT = 30000
# millisecond.
MIN_CONFIG_LONG_POLL_TIMEOUT = 10000
# millisecond.
CONFIG_RETRY_TIME = 2000
# Maximum number of retries.
MAX_RETRY = 3
# millisecond.
RECV_WAIT_TIMEOUT = ONCE_TIMEOUT * 5
ENCODE = "UTF-8"
MAP_FILE = "map-file.js"
FLOW_CONTROL_THRESHOLD = 20
FLOW_CONTROL_SLOT = 10
FLOW_CONTROL_INTERVAL = 1000
DEFAULT_PROTECT_THRESHOLD = 0.0
LINE_SEPARATOR = chr(1)
WORD_SEPARATOR = chr(2)
LONGPOLLING_LINE_SEPARATOR = "\r\n"
CLIENT_APPNAME_HEADER = "Client-AppName"
APP_NAME_HEADER = "AppName"
CLIENT_REQUEST_TS_HEADER = "Client-RequestTS"
CLIENT_REQUEST_TOKEN_HEADER = "Client-RequestToken"
ATOMIC_MAX_SIZE = 1000
NAMING_INSTANCE_ID_SPLITTER = "#"
NAMING_INSTANCE_ID_SEG_COUNT = 4
NAMING_HTTP_HEADER_SPLITTER = "\\|"
DEFAULT_CLUSTER_NAME = "DEFAULT"
DEFAULT_HEART_BEAT_TIMEOUT = 15000
DEFAULT_IP_DELETE_TIMEOUT = 30000
DEFAULT_HEART_BEAT_INTERVAL = 5000
DEFAULT_NAMESPACE_ID = ""
DEFAULT_USE_CLOUD_NAMESPACE_PARSING = True
WRITE_REDIRECT_CODE = 307
RESPONSE_SUCCESS_CODE = 200
REQUEST_DOMAIN_RETRY_TIME = 3
SERVICE_INFO_SPLITER = "@@"
CONFIG_INFO_SPLITER = "@@"
SERVICE_INFO_SPLIT_COUNT = 2
NULL_STRING = "null"
NUMBER_PATTERN = "^\\d+$"
ANY_PATTERN = ".*"
DEFAULT_INSTANCE_ID_GENERATOR = "simple"
SNOWFLAKE_INSTANCE_ID_GENERATOR = "snowflake"
HTTP_PREFIX = "http"
ALL_PATTERN = "*"
COLON = ":"
LINE_BREAK = "\n"
POUND = "#"
VIPSERVER_TAG = "Vipserver-Tag"
AMORY_TAG = "Amory-Tag"
LOCATION_TAG = "Location-Tag"
CHARSET_KEY = "charset"
EX_CONFIG_INFO = "exConfigInfo"
GROUP_NAME_KEY = "groupName"
SERVICE_NAME_KEY = "serviceName"
CIPHER_PRE_FIX = "cipher-"
DEFAULT_PORT = 8848
GRPC_MAX_RECEIVE_MESSAGE_LENGTH = 100 * 1024 * 1024
GRPC_KEEPALIVE_TIME_MILLS = 60 * 1000
GRPC_INITIAL_WINDOW_SIZE = 10 * 1024 * 1024
GRPC_INITIAL_CONN_WINDOW_SIZE = 10 * 1024 * 1024
KEEP_ALIVE_TIME_MILLS = 5000
INTERNAL_TIME_MILLS = 3000
DEFAULT_GRPC_TIMEOUT_MILLS = 3000
DEFAULT_TIMEOUT_MILLS = 10000
PER_TASK_CONFIG_SIZE = 3000
MSE_KMS_V1_DEFAULT_KEY_ID = "alias/acs/mse"
KMS_AES_128_ALGORITHM_NAME = "AES_128"
KMS_AES_256_ALGORITHM_NAME = "AES_256"

@@ -0,0 +1,40 @@
CLIENT_INVALID_PARAM = -400
CLIENT_DISCONNECT = -401
CLIENT_OVER_THRESHOLD = -503
INVALID_PARAM = 400
NO_RIGHT = 403
NOT_FOUND = 404
CONFLICT = 409
SERVER_ERROR = 500
BAD_GATEWAY = 502
OVER_THRESHOLD = 503
INVALID_SERVER_STATUS = 300
UN_REGISTER = 301
NO_HANDLER = 302
INVALID_INTERFACE_ERROR = -403
RESOURCE_NOT_FOUND = -404
HTTP_CLIENT_ERROR_CODE = -500
class NacosException(Exception):
"""Custom exception class with an error code attribute."""
def __init__(self, error_code, message="An error occurred"):
self.error_code = error_code
self.message = message
super().__init__(f'Error [{error_code}]: {message}')
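
NacosException carries the numeric code alongside the message, and callers branch on error_code (for example, the config service checks for 400 before falling back to its local cache). A quick sketch of how it surfaces:

from v2.nacos import NacosException
from v2.nacos.common.nacos_exception import INVALID_PARAM

try:
    raise NacosException(INVALID_PARAM, "data_id can not be empty")
except NacosException as e:
    print(e.error_code)  # 400
    print(e)             # Error [400]: data_id can not be empty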

@@ -0,0 +1,24 @@
class PayloadRegistry:
_REGISTRY_REQUEST = {}
@classmethod
def init(cls, payloads):
cls.payloads = payloads
cls.scan()
@classmethod
def scan(cls):
for payload_class in cls.payloads:
cls.register(payload_class.__name__, payload_class)
@classmethod
def register(cls, type_name, clazz):
if isinstance(clazz, type) and any("Abstract" in b.__name__ for b in clazz.__bases__):
return
if type_name in cls._REGISTRY_REQUEST:
raise RuntimeError(f"Fail to register, type:{type_name}, clazz:{clazz.__name__}")
cls._REGISTRY_REQUEST[type_name] = clazz
@classmethod
def get_class_by_type(cls, type_name):
return cls._REGISTRY_REQUEST.get(type_name)

@@ -0,0 +1,10 @@
class PreservedMetadataKeys:
REGISTER_SOURCE = "preserved.register.source"
HEART_BEAT_TIMEOUT = "preserved.heart.beat.timeout"
IP_DELETE_TIMEOUT = "preserved.ip.delete.timeout"
HEART_BEAT_INTERVAL = "preserved.heart.beat.interval"
INSTANCE_ID_GENERATOR = "preserved.instance.id.generator"

v2/nacos/config/cache/__init__.py (new, empty file)

@@ -0,0 +1,48 @@
import logging
import os
from v2.nacos.common.client_config import ClientConfig
from v2.nacos.common.constants import Constants
from v2.nacos.config.util.config_client_util import get_config_cache_key
from v2.nacos.utils.file_util import read_file, write_to_file
FAILOVER_FILE_SUFFIX = "_failover"
ENCRYPTED_DATA_KEY_FILE_NAME = "encrypted-data-key"
class ConfigInfoCache:
def __init__(self, client_config: ClientConfig):
self.logger = logging.getLogger(Constants.CONFIG_MODULE)
self.config_cache_dir = os.path.join(client_config.cache_dir, Constants.CONFIG_MODULE)
self.namespace_id = client_config.namespace_id
async def write_config_to_cache(self, cache_key: str, content: str, encrypted_data_key: str):
file_path = os.path.join(self.config_cache_dir, cache_key)
encrypted_data_key_file_path = os.path.join(self.config_cache_dir, ENCRYPTED_DATA_KEY_FILE_NAME,
cache_key)
await write_to_file(self.logger, file_path, content)
await write_to_file(self.logger, encrypted_data_key_file_path, encrypted_data_key)
async def get_config_cache(self, data_id: str, group: str):
cache_key = get_config_cache_key(data_id, group, self.namespace_id)
file_path = os.path.join(self.config_cache_dir, cache_key)
config_content = await read_file(self.logger, file_path)
if not data_id.startswith(Constants.CIPHER_PRE_FIX):
return config_content, ""
else:
encrypted_data_key_file_path = os.path.join(self.config_cache_dir, ENCRYPTED_DATA_KEY_FILE_NAME,
cache_key)
config_encrypted_data_key = await read_file(self.logger, encrypted_data_key_file_path)
return config_content, config_encrypted_data_key
async def get_fail_over_config_cache(self, data_id: str, group: str):
cache_key = get_config_cache_key(data_id, group, self.namespace_id) + FAILOVER_FILE_SUFFIX
file_path = os.path.join(self.config_cache_dir, cache_key)
config_content = await read_file(self.logger, file_path)
if not config_content:
return "", ""
self.logger.info(f"get fail over content, namespace:{self.namespace_id}, group:{group}, dataId:{data_id}")
encrypted_data_key_path = os.path.join(self.config_cache_dir, ENCRYPTED_DATA_KEY_FILE_NAME,
cache_key)
config_encrypted_data_key = await read_file(self.logger, encrypted_data_key_path)
return config_content, config_encrypted_data_key

@@ -0,0 +1,99 @@
import asyncio
from logging import Logger
from typing import Optional, Callable, List, Dict
from v2.nacos.common.constants import Constants
from v2.nacos.config.cache.config_info_cache import ConfigInfoCache
from v2.nacos.config.filter.config_filter import ConfigFilterChainManager
from v2.nacos.config.model.config import SubscribeCacheData
from v2.nacos.config.util.config_client_util import get_config_cache_key
from v2.nacos.utils import md5_util
from v2.nacos.utils.md5_util import md5
class ConfigSubscribeManager:
def __init__(self, logger: Logger, config_info_cache: ConfigInfoCache, namespace_id: str,
config_filter_chain_manager: ConfigFilterChainManager,
execute_config_listen_channel: asyncio.Queue):
self.subscribe_cache_map: Dict[str, SubscribeCacheData] = {}
self.logger = logger
self.lock = asyncio.Lock()
self.namespace_id = namespace_id
self.config_filter_chain_manager = config_filter_chain_manager
self.config_info_cache = config_info_cache
self.execute_config_listen_channel = execute_config_listen_channel
async def add_listener(self, data_id: str, group_name: str, tenant: str,
listener: Optional[Callable]):
cache_key = get_config_cache_key(data_id, group_name, tenant)
async with self.lock:
if cache_key in self.subscribe_cache_map:
subscribe_cache = self.subscribe_cache_map[cache_key]
else:
content, encrypted_data_key = await self.config_info_cache.get_config_cache(data_id, group_name)
md5_str = md5_util.md5(content)
subscribe_cache = SubscribeCacheData(
data_id=data_id,
group=group_name,
tenant=self.namespace_id,
content=content,
md5=md5_str,
chain_manager=self.config_filter_chain_manager,
encrypted_data_key=encrypted_data_key)
subscribe_cache.task_id = len(self.subscribe_cache_map) // Constants.PER_TASK_CONFIG_SIZE
self.subscribe_cache_map[cache_key] = subscribe_cache
await subscribe_cache.add_listener(listener)
async def remove_listener(self, data_id: str, group_name: str, tenant: str, listener: Optional[Callable]):
if listener is None:
return
cache_key = get_config_cache_key(data_id, group_name, tenant)
async with self.lock:
subscribe_cache = self.subscribe_cache_map.get(cache_key)
if not subscribe_cache:
return
await subscribe_cache.remove_listener(listener)
async def notify_config_changed(self, data_id: str, group_name: str, tenant: str):
cache_key = get_config_cache_key(data_id, group_name, tenant)
async with self.lock:
subscribe_cache = self.subscribe_cache_map.get(cache_key)
if not subscribe_cache:
return
async with subscribe_cache.lock:
subscribe_cache.is_sync_with_server = False
self.subscribe_cache_map[cache_key] = subscribe_cache
await self.execute_config_listen_channel.put(None)
async def batch_set_config_changed(self, task_id: int):
for cache_data in self.subscribe_cache_map.values():
if cache_data.task_id == task_id:
async with cache_data.lock:
cache_data.is_sync_with_server = False
async def update_subscribe_cache(self, data_id: str, group_name: str, tenant: str, content: str,
encrypted_data_key: str):
cache_key = get_config_cache_key(data_id, group_name, tenant)
async with self.lock:
subscribe_cache = self.subscribe_cache_map.get(cache_key)
if not subscribe_cache:
return
subscribe_cache.content = content
subscribe_cache.encrypted_data_key = encrypted_data_key
subscribe_cache.md5 = md5(content)
subscribe_cache.is_sync_with_server = True
await subscribe_cache.execute_listener()
async def execute_listener_and_build_tasks(self, is_sync_all: bool):
listen_fetch_task_map: Dict[int, List[SubscribeCacheData]] = {}
for cache_data in self.subscribe_cache_map.values():
if cache_data.is_sync_with_server:
await cache_data.execute_listener()
if not is_sync_all:
continue
if cache_data.task_id not in listen_fetch_task_map:
listen_fetch_task_map[cache_data.task_id] = []
listen_fetch_task_map[cache_data.task_id].append(cache_data)
return listen_fetch_task_map

@@ -0,0 +1,41 @@
from alibabacloud_kms20160120 import models as kms_20160120_models
from alibabacloud_kms20160120.client import Client
from alibabacloud_tea_openapi import models as open_api_models
from v2.nacos.common.client_config import KMSConfig
from v2.nacos.utils.encode_util import bytes_to_str
class KmsClient:
def __init__(self, client: Client):
self.client = client
@staticmethod
def create_kms_client(kms_config: KMSConfig):
config = open_api_models.Config(
access_key_id=kms_config.access_key,
access_key_secret=kms_config.secret_key,
endpoint=kms_config.endpoint)
config.protocol = "https"
client = Client(config)
kms_client = KmsClient(client)
return kms_client
def encrypt(self, content: str, key_id: str):
encrypt_request = kms_20160120_models.EncryptRequest()
encrypt_request.plaintext = content.encode("utf-8")
encrypt_request.key_id = key_id
encrypt_response = self.client.encrypt(encrypt_request)
return encrypt_response.body.ciphertext_blob
def decrypt(self, content: str):
decrypt_request = kms_20160120_models.DecryptRequest(ciphertext_blob=content)
decrypt_response = self.client.decrypt(decrypt_request)
return decrypt_response.body.plaintext
def generate_secret_key(self, key_id: str, key_spec: str):
request = kms_20160120_models.GenerateDataKeyRequest()
request.key_id = key_id
request.key_spec = key_spec
resp = self.client.generate_data_key(request)
return resp.body.plaintext, resp.body.ciphertext_blob

@@ -0,0 +1,48 @@
from typing import Dict
from v2.nacos.common.client_config import KMSConfig
from v2.nacos.common.constants import Constants
from v2.nacos.common.nacos_exception import NacosException, INVALID_PARAM
from v2.nacos.config.encryption.kms_client import KmsClient
from v2.nacos.config.encryption.plugin.encryption_plugin import EncryptionPlugin
from v2.nacos.config.encryption.plugin.kms_aes_128_encrytion_plugin import KmsAes128EncryptionPlugin
from v2.nacos.config.encryption.plugin.kms_aes_256_encrytion_plugin import KmsAes256EncryptionPlugin
from v2.nacos.config.encryption.plugin.kms_base_encryption_plugin import KmsBaseEncryptionPlugin
from v2.nacos.config.model.config_param import HandlerParam
class KMSHandler:
def __init__(self, kms_config: KMSConfig):
self.kms_plugins: Dict[str, EncryptionPlugin] = {}
self.kms_client = KmsClient.create_kms_client(kms_config)
kms_aes_128_encryption_plugin = KmsAes128EncryptionPlugin(self.kms_client)
self.kms_plugins[kms_aes_128_encryption_plugin.algorithm_name()] = kms_aes_128_encryption_plugin
kms_aes_256_encryption_plugin = KmsAes256EncryptionPlugin(self.kms_client)
self.kms_plugins[kms_aes_256_encryption_plugin.algorithm_name()] = kms_aes_256_encryption_plugin
kms_base_encryption_plugin = KmsBaseEncryptionPlugin(self.kms_client)
self.kms_plugins[kms_base_encryption_plugin.algorithm_name()] = kms_base_encryption_plugin
def find_encryption_service(self, data_id: str):
for algorithm_name in self.kms_plugins:
if data_id.startswith(algorithm_name):
return self.kms_plugins[algorithm_name]
raise NacosException(INVALID_PARAM, f"encryption plugin service not found, data_id:{data_id}")
@staticmethod
def check_param(handler_param: HandlerParam):
if not handler_param.data_id.startswith(Constants.CIPHER_PRE_FIX):
raise NacosException(INVALID_PARAM, "dataId prefix should start with 'cipher-'")
if len(handler_param.content) == 0:
raise NacosException(INVALID_PARAM, "encrypt empty content error")
def encrypt_handler(self, handler_param: HandlerParam):
self.check_param(handler_param)
plugin = self.find_encryption_service(handler_param.data_id)
handler_param = plugin.generate_secret_key(handler_param)
return plugin.encrypt(handler_param)
def decrypt_handler(self, handler_param: HandlerParam):
self.check_param(handler_param)
plugin = self.find_encryption_service(handler_param.data_id)
handler_param.plain_data_key = plugin.decrypt_secret_key(handler_param)
return plugin.decrypt(handler_param)

@@ -0,0 +1,30 @@
from abc import ABC, abstractmethod
from v2.nacos.config.model.config_param import HandlerParam
class EncryptionPlugin(ABC):
@abstractmethod
def encrypt(self, handler_param: HandlerParam) -> HandlerParam:
pass
@abstractmethod
def decrypt(self, handler_param: HandlerParam) -> HandlerParam:
pass
@abstractmethod
def generate_secret_key(self, handler_param: HandlerParam) -> HandlerParam:
pass
@abstractmethod
def algorithm_name(self):
pass
@abstractmethod
def encrypt_secret_key(self, handler_param: HandlerParam) -> str:
pass
@abstractmethod
def decrypt_secret_key(self, handler_param: HandlerParam) -> str:
pass

@@ -0,0 +1,21 @@
from v2.nacos.common.constants import Constants
from v2.nacos.config.encryption.plugin.kms_encrytion_plugin import KmsEncryptionPlugin
from v2.nacos.config.encryption.kms_client import KmsClient
from v2.nacos.config.model.config_param import HandlerParam
class KmsAes128EncryptionPlugin(KmsEncryptionPlugin):
def __init__(self, kms_client: KmsClient):
super().__init__(kms_client)
self.ALGORITHM = 'cipher-kms-aes-128'
def generate_secret_key(self, handler_param: HandlerParam) -> HandlerParam:
key_id = handler_param.key_id if handler_param.key_id.strip() else Constants.MSE_KMS_V1_DEFAULT_KEY_ID
        plain_secret_key, encrypted_secret_key = self.kms_client.generate_secret_key(key_id, 'AES_128')
        handler_param.plain_data_key = plain_secret_key
        handler_param.encrypted_data_key = encrypted_secret_key
return handler_param
def algorithm_name(self):
return self.ALGORITHM

@@ -0,0 +1,21 @@
from v2.nacos.common.constants import Constants
from v2.nacos.config.encryption.plugin.kms_encrytion_plugin import KmsEncryptionPlugin
from v2.nacos.config.encryption.kms_client import KmsClient
from v2.nacos.config.model.config_param import HandlerParam
class KmsAes256EncryptionPlugin(KmsEncryptionPlugin):
def __init__(self, kms_client: KmsClient):
super().__init__(kms_client)
self.ALGORITHM = 'cipher-kms-aes-256'
def generate_secret_key(self, handler_param: HandlerParam) -> HandlerParam:
key_id = handler_param.key_id if handler_param.key_id.strip() else Constants.MSE_KMS_V1_DEFAULT_KEY_ID
        plain_secret_key, encrypted_secret_key = self.kms_client.generate_secret_key(key_id, 'AES_256')
        handler_param.plain_data_key = plain_secret_key
        handler_param.encrypted_data_key = encrypted_secret_key
return handler_param
def algorithm_name(self):
return self.ALGORITHM

@@ -0,0 +1,39 @@
from v2.nacos.common.constants import Constants
from v2.nacos.common.nacos_exception import NacosException, INVALID_PARAM
from v2.nacos.config.encryption.kms_client import KmsClient
from v2.nacos.config.encryption.plugin.kms_encrytion_plugin import KmsEncryptionPlugin
from v2.nacos.config.model.config_param import HandlerParam
class KmsBaseEncryptionPlugin(KmsEncryptionPlugin):
def __init__(self, kms_client: KmsClient):
super().__init__(kms_client)
self.ALGORITHM = 'cipher'
def encrypt(self, handler_param: HandlerParam) -> HandlerParam:
key_id = handler_param.key_id if handler_param.key_id.strip() else Constants.MSE_KMS_V1_DEFAULT_KEY_ID
if len(handler_param.content) == 0:
raise NacosException(INVALID_PARAM, "encrypt empty content error")
encrypted_content = self.kms_client.encrypt(handler_param.content, key_id)
handler_param.content = encrypted_content
return handler_param
def decrypt(self, handler_param: HandlerParam) -> HandlerParam:
if len(handler_param.content) == 0:
raise NacosException(INVALID_PARAM, "decrypt empty content error")
plain_content = self.kms_client.decrypt(handler_param.content)
handler_param.content = plain_content
return handler_param
def generate_secret_key(self, handler_param: HandlerParam) -> HandlerParam:
return handler_param
def algorithm_name(self):
return self.ALGORITHM
def encrypt_secret_key(self, handler_param: HandlerParam) -> str:
return ""
def decrypt_secret_key(self, handler_param: HandlerParam) -> str:
return ""

@@ -0,0 +1,51 @@
from v2.nacos.common.constants import Constants
from v2.nacos.common.nacos_exception import NacosException, INVALID_PARAM
from v2.nacos.config.encryption.plugin.encryption_plugin import EncryptionPlugin
from v2.nacos.config.encryption.kms_client import KmsClient
from v2.nacos.config.model.config_param import HandlerParam
from v2.nacos.utils import aes_util
from v2.nacos.utils.encode_util import decode_base64, str_to_bytes
class KmsEncryptionPlugin(EncryptionPlugin):
def __init__(self, kms_client: KmsClient):
self.ALGORITHM = 'cipher-kms'
self.kms_client = kms_client
@staticmethod
def param_check(handler_param: HandlerParam):
if not handler_param.plain_data_key.strip():
raise NacosException(INVALID_PARAM, "empty plain_data_key error")
if not handler_param.content.strip():
raise NacosException(INVALID_PARAM, "encrypt empty content error")
def encrypt(self, handler_param: HandlerParam) -> HandlerParam:
self.param_check(handler_param)
handler_param.content = aes_util.encrypt(
key=handler_param.plain_data_key,
message=handler_param.content)
return handler_param
def decrypt(self, handler_param: HandlerParam) -> HandlerParam:
self.param_check(handler_param)
handler_param.content = aes_util.decrypt(
key=handler_param.plain_data_key,
encr_data=handler_param.content)
return handler_param
def generate_secret_key(self, handler_param: HandlerParam) -> HandlerParam:
pass
def algorithm_name(self):
pass
def encrypt_secret_key(self, handler_param: HandlerParam) -> str:
key_id = handler_param.key_id if handler_param.key_id.strip() else Constants.MSE_KMS_V1_DEFAULT_KEY_ID
if len(handler_param.plain_data_key) == 0:
raise NacosException(INVALID_PARAM, "empty plain_data_key error")
return self.kms_client.encrypt(handler_param.plain_data_key, key_id)
def decrypt_secret_key(self, handler_param: HandlerParam) -> str:
if len(handler_param.encrypted_data_key) == 0:
raise NacosException(INVALID_PARAM, "empty encrypted data key error")
return self.kms_client.decrypt(handler_param.encrypted_data_key)

@@ -0,0 +1,37 @@
from v2.nacos.common.client_config import ClientConfig
from v2.nacos.common.constants import Constants
from v2.nacos.config.encryption.kms_handler import KMSHandler
from v2.nacos.config.filter.config_filter import IConfigFilter
from v2.nacos.config.model.config_param import ConfigParam, HandlerParam, UsageType
def _param_check(param: ConfigParam):
if param.data_id.startswith(Constants.CIPHER_PRE_FIX) and len(param.content.strip()) != 0:
return False
return True
class ConfigEncryptionFilter(IConfigFilter):
def __init__(self, client_config: ClientConfig):
self.kms_handler = KMSHandler(client_config.kms_config)
def do_filter(self, param: ConfigParam) -> None:
if param.usage_type == UsageType.request_type.value:
encryption_param = HandlerParam(data_id=param.data_id, content=param.content, key_id=param.kms_key_id)
self.kms_handler.encrypt_handler(encryption_param)
param.content = encryption_param.content
param.encrypted_data_key = encryption_param.encrypted_data_key
elif param.usage_type == UsageType.response_type.value:
decryption_param = HandlerParam(data_id=param.data_id, content=param.content,
encrypted_data_key=param.encrypted_data_key)
self.kms_handler.decrypt_handler(decryption_param)
param.content = decryption_param.content
def get_order(self) -> int:
return 0
def get_filter_name(self) -> str:
return "defaultConfigEncryptionFilter"

@@ -0,0 +1,47 @@
from abc import ABC, abstractmethod
from typing import List
from v2.nacos.config.model.config_param import ConfigParam
class IConfigFilter(ABC):
@abstractmethod
def do_filter(self, config_param):
pass
@abstractmethod
def get_order(self):
pass
@abstractmethod
def get_filter_name(self):
pass
class ConfigFilterChainManager:
def __init__(self):
self.config_filters = []
def add_filter(self, conf_filter: IConfigFilter) -> None:
for existing_filter in self.config_filters:
if conf_filter.get_filter_name() == existing_filter.get_filter_name():
return
for i, existing_filter in enumerate(self.config_filters):
if conf_filter.get_order() < existing_filter.get_order():
self.config_filters.insert(i, conf_filter)
return
self.config_filters.append(conf_filter)
def get_filters(self) -> List[IConfigFilter]:
return self.config_filters
def do_filters(self, param: ConfigParam) -> None:
for config_filter in self.config_filters:
config_filter.do_filter(param)
def do_filter_by_name(self, param: ConfigParam, name: str) -> None:
for config_filter in self.config_filters:
if config_filter.get_filter_name() == name:
config_filter.do_filter(param)
return
raise ValueError(f"Cannot find the filter with name {name}")
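
Filters are deduplicated by name and kept sorted by get_order(), so custom filters can sit alongside the built-in encryption filter. A hedged sketch of a custom filter wired into the chain (LoggingConfigFilter is a made-up example):

from v2.nacos.config.filter.config_filter import ConfigFilterChainManager, IConfigFilter
from v2.nacos.config.model.config_param import ConfigParam, UsageType


class LoggingConfigFilter(IConfigFilter):
    """Illustrative filter that only records which configs pass through."""

    def do_filter(self, param: ConfigParam) -> None:
        print(f"filter saw dataId={param.data_id}, usage={param.usage_type}")

    def get_order(self) -> int:
        return 10

    def get_filter_name(self) -> str:
        return "loggingConfigFilter"


chain = ConfigFilterChainManager()
chain.add_filter(LoggingConfigFilter())
chain.do_filters(ConfigParam(data_id="demo", content="hello",
                             usage_type=UsageType.request_type.value))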

@@ -0,0 +1,103 @@
import asyncio
from typing import Optional, Callable, List
from pydantic import BaseModel
from v2.nacos.common.nacos_exception import NacosException, INVALID_PARAM
from v2.nacos.config.filter.config_filter import ConfigFilterChainManager
from v2.nacos.config.model.config_param import ConfigParam, UsageType
class ConfigItem(BaseModel):
id: str = ''
dataId: str = ''
group: str = ''
content: str = ''
md5: Optional[str] = ''
tenant: str = ''
appname: str = ''
class ConfigPage(BaseModel):
totalCount: int = 0
pageNumber: int = 0
pagesAvailable: int = 0
pageItems: List[ConfigItem] = []
class ConfigListenContext(BaseModel):
group: str = ''
md5: str = ''
dataId: str = ''
tenant: str = ''
class ConfigContext(BaseModel):
group: str = ''
dataId: str = ''
tenant: str = ''
class SubscribeCacheData:
def __init__(self, data_id: str, group: str, tenant: str, content: str, md5: str,
encrypted_data_key: str,
chain_manager: ConfigFilterChainManager, content_type: str = '',
is_sync_with_server: bool = False):
self.data_id = data_id
self.group = group
self.tenant = tenant
self.content = content
self.content_type = content_type
self.md5 = md5
self.cache_data_listeners: List[CacheDataListenerWrap] = []
self.encrypted_data_key = encrypted_data_key
self.task_id = 0
self.config_chain_manager = chain_manager
self.is_sync_with_server = is_sync_with_server
self.lock = asyncio.Lock()
async def add_listener(self, listener: Optional[Callable]):
if listener is None:
raise NacosException(INVALID_PARAM, "cache data listener is None")
async with self.lock:
if any(CacheDataListenerWrap(listener, self.md5) == existing_listener for existing_listener in
self.cache_data_listeners):
return
self.cache_data_listeners.append(CacheDataListenerWrap(listener, self.md5))
async def remove_listener(self, listener: Optional[Callable]):
if listener is None:
return
async with self.lock:
self.cache_data_listeners = [existing_listener for existing_listener in self.cache_data_listeners
if existing_listener.listener != listener]
async def execute_listener(self):
async with self.lock:
for listener_wrap in self.cache_data_listeners:
if listener_wrap.last_md5 != self.md5:
listener_wrap.last_md5 = self.md5
param = ConfigParam(data_id=self.data_id,
group=self.group,
content=self.content,
encrypted_data_key=self.encrypted_data_key,
usage_type=UsageType.response_type.value
)
self.config_chain_manager.do_filters(param)
decrypted_content = param.content
await listener_wrap.listener(self.tenant, self.group, self.data_id, decrypted_content)
class CacheDataListenerWrap:
def __init__(self, listener: Callable, last_md5):
self.listener = listener
self.last_md5 = last_md5
def __eq__(self, other):
if not isinstance(other, CacheDataListenerWrap):
return False
return self.listener == other.listener and self.last_md5 == other.last_md5
def __hash__(self):
return hash((self.listener, self.last_md5))

@@ -0,0 +1,49 @@
from abc import ABC, abstractmethod
from enum import Enum
from pydantic import BaseModel
class Listener(ABC):
@abstractmethod
def listen(self, namespace: str, group: str, data_id: str, content: str):
raise NotImplementedError("Subclasses should implement this method.")
class UsageType(Enum):
request_type = "RequestType"
response_type = "ResponseType"
class SearchConfigParam(BaseModel):
search: str = ''
dataId: str = ''
group: str = ''
tag: str = ''
appName: str = ''
pageNo: int = 0
pageSize: int = 0
class ConfigParam(BaseModel):
data_id: str = ''
group: str = ''
content: str = ''
tag: str = ''
app_name: str = ''
beta_ips: str = ''
cas_md5: str = ''
type: str = ''
src_user: str = ''
encrypted_data_key: str = ''
kms_key_id: str = ''
usage_type: str = ''
class HandlerParam(BaseModel):
data_id: str = ''
content: str = ''
encrypted_data_key: str = ''
plain_data_key: str = ''
key_id: str = ''

@@ -0,0 +1,58 @@
from abc import ABC, abstractmethod
from typing import Optional, List, Dict
from v2.nacos.config.model.config import ConfigListenContext
from v2.nacos.transport.model.rpc_request import Request
CONFIG_CHANGE_NOTIFY_REQUEST_TYPE = "ConfigChangeNotifyRequest"
class AbstractConfigRequest(Request, ABC):
group: Optional[str]
dataId: Optional[str]
tenant: Optional[str] = ''
def get_module(self):
return "config"
def get_request_type(self) -> str:
"""
        Provide a default implementation or raise NotImplementedError to make it explicit that subclasses must override this method.
"""
raise NotImplementedError("Subclasses should implement this method.")
class ConfigBatchListenRequest(AbstractConfigRequest):
listen: bool = True
configListenContexts: List[ConfigListenContext] = []
def get_request_type(self):
return "ConfigBatchListenRequest"
class ConfigChangeNotifyRequest(AbstractConfigRequest):
def get_request_type(self):
return "ConfigChangeNotifyRequest"
class ConfigQueryRequest(AbstractConfigRequest):
tag: Optional[str] = ''
def get_request_type(self):
return "ConfigQueryRequest"
class ConfigPublishRequest(AbstractConfigRequest):
content: Optional[str]
casMd5: Optional[str]
additionMap: Dict[str, str] = {}
def get_request_type(self):
return "ConfigPublishRequest"
class ConfigRemoveRequest(AbstractConfigRequest):
def get_request_type(self):
return "ConfigRemoveRequest"

@@ -0,0 +1,43 @@
from typing import Optional, List
from pydantic import BaseModel
from v2.nacos.transport.model.rpc_response import Response
class ConfigContext(BaseModel):
group: str = ''
dataId: str = ''
tenant: str = ''
class ConfigChangeBatchListenResponse(Response):
changedConfigs: List[ConfigContext] = []
def get_response_type(self) -> str:
return "ConfigChangeBatchListenResponse"
class ConfigQueryResponse(Response):
content: Optional[str] = ''
encryptedDataKey: Optional[str] = ''
contentType: Optional[str] = ''
md5: Optional[str] = ''
    lastModified: Optional[int] = 0
isBeta: bool = False
tag: bool = False
def get_response_type(self) -> str:
return "ConfigQueryResponse"
class ConfigPublishResponse(Response):
def get_response_type(self) -> str:
return "ConfigPublishResponse"
class ConfigRemoveResponse(Response):
def get_response_type(self) -> str:
return "ConfigRemoveResponse"

@@ -0,0 +1,122 @@
import asyncio
import copy
import time
from typing import Callable
from v2.nacos.common.client_config import ClientConfig
from v2.nacos.common.constants import Constants
from v2.nacos.common.nacos_exception import NacosException, INVALID_PARAM
from v2.nacos.config.cache.config_info_cache import ConfigInfoCache
from v2.nacos.config.filter.config_encryption_filter import ConfigEncryptionFilter
from v2.nacos.config.filter.config_filter import ConfigFilterChainManager
from v2.nacos.config.model.config_param import UsageType, ConfigParam
from v2.nacos.config.remote.config_grpc_client_proxy import ConfigGRPCClientProxy
from v2.nacos.nacos_client import NacosClient
class NacosConfigService(NacosClient):
def __init__(self, client_config: ClientConfig):
super().__init__(client_config, Constants.CONFIG_MODULE)
self.lock = asyncio.Lock()
self.config_filter_chain_manager = ConfigFilterChainManager()
self.namespace_id = client_config.namespace_id
self.config_info_cache = ConfigInfoCache(client_config)
self.last_all_sync_time = time.time()
self.grpc_client_proxy = ConfigGRPCClientProxy(client_config, self.http_agent, self.config_info_cache,
self.config_filter_chain_manager)
if client_config.kms_config and client_config.kms_config.enabled:
config_encryption_filter = ConfigEncryptionFilter(client_config)
self.config_filter_chain_manager.add_filter(config_encryption_filter)
self.logger.info("config encryption filter initialized")
@staticmethod
async def create_config_service(client_config: ClientConfig):
config_service = NacosConfigService(client_config)
await config_service.grpc_client_proxy.start()
return config_service
async def get_config(self, param: ConfigParam) -> str:
if not param.data_id or not param.data_id.strip():
raise NacosException(INVALID_PARAM, "data_id can not be empty")
if not param.group:
param.group = Constants.DEFAULT_GROUP
content, encrypted_data_key = await self.config_info_cache.get_fail_over_config_cache(param.data_id,
param.group)
if not content:
try:
content, encrypted_data_key = await self.grpc_client_proxy.query_config(param.data_id, param.group)
except NacosException as e:
if e.error_code == 400:
if self.client_config.disable_use_config_cache:
                        self.logger.warning(
                            "failed to get config from server, and reading the local cache is not allowed, error:%s",
                            str(e))
raise e
return await self.config_info_cache.get_config_cache(param.data_id, param.group)
raise e
deep_copy_param = copy.deepcopy(param)
deep_copy_param.encrypted_data_key = encrypted_data_key
deep_copy_param.content = content
deep_copy_param.usage_type = UsageType.response_type.value
self.config_filter_chain_manager.do_filters(deep_copy_param)
return deep_copy_param.content
async def publish_config(self, param: ConfigParam) -> bool:
if not param.data_id or not param.data_id.strip():
raise NacosException(INVALID_PARAM, "data_id can not be empty")
if not param.content or not param.content.strip():
raise NacosException(INVALID_PARAM, "config content can not be empty")
if not param.group:
param.group = Constants.DEFAULT_GROUP
param.usage_type = UsageType.request_type.value
self.config_filter_chain_manager.do_filters(param)
return await self.grpc_client_proxy.publish_config(param)
async def remove_config(self, param: ConfigParam):
if not param.data_id or not param.data_id.strip():
raise NacosException(INVALID_PARAM, "data_id can not be empty")
if not param.group:
param.group = Constants.DEFAULT_GROUP
return await self.grpc_client_proxy.remove_config(param.group, param.data_id)
async def add_listener(self, data_id: str, group: str, listener: Callable) -> None:
if not data_id or not data_id.strip():
raise NacosException(INVALID_PARAM, "data_id can not be empty")
if not group:
group = Constants.DEFAULT_GROUP
if listener is None:
raise NacosException(INVALID_PARAM, "config listener can not be null")
return await self.grpc_client_proxy.add_listener(data_id, group, listener)
async def remove_listener(self, data_id: str, group: str, listener: Callable):
if not data_id or not data_id.strip():
raise NacosException(INVALID_PARAM, "data_id can not be empty")
if not group:
group = Constants.DEFAULT_GROUP
if listener is None:
raise NacosException(INVALID_PARAM, "config listener can not be null")
return await self.grpc_client_proxy.remove_listener(data_id, group, listener)
async def server_health(self) -> bool:
return await self.grpc_client_proxy.server_health()
async def shutdown(self):
"""关闭资源服务"""
await self.grpc_client_proxy.close_client()
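
Listeners registered through add_listener are async callables invoked as listener(tenant, group, data_id, content), with content already decrypted by the filter chain. A sketch of a watch-and-publish round trip under the same placeholder assumptions as above:

import asyncio

from v2.nacos import ClientConfigBuilder, ConfigParam, NacosConfigService


async def on_change(tenant, group, data_id, content):
    # the subscribe machinery awaits this callback with the latest content
    print(f"config changed: {tenant}/{group}/{data_id} -> {content}")


async def main():
    cfg = ClientConfigBuilder().server_address("localhost:8848").build()  # placeholder address
    service = await NacosConfigService.create_config_service(cfg)
    await service.add_listener("test-data-id", "DEFAULT_GROUP", on_change)
    await service.publish_config(ConfigParam(data_id="test-data-id",
                                             group="DEFAULT_GROUP",
                                             content="hello nacos"))
    await asyncio.sleep(10)  # allow the change notification to arrive
    await service.shutdown()


if __name__ == "__main__":
    asyncio.run(main())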

@@ -0,0 +1,29 @@
from typing import Optional
from v2.nacos.config.cache.config_subscribe_manager import ConfigSubscribeManager
from v2.nacos.config.model.config_request import ConfigChangeNotifyRequest
from v2.nacos.transport.model.internal_response import NotifySubscriberResponse
from v2.nacos.transport.model.rpc_request import Request
from v2.nacos.transport.model.rpc_response import Response
from v2.nacos.transport.server_request_handler import IServerRequestHandler
class ConfigChangeNotifyRequestHandler(IServerRequestHandler):
def name(self):
return "ConfigChangeNotifyRequestHandler"
def __init__(self, logger, config_subscribe_manager: ConfigSubscribeManager, client_name: str):
self.logger = logger
self.config_subscribe_manager = config_subscribe_manager
self.client_name = client_name
async def request_reply(self, request: Request) -> Optional[Response]:
if not isinstance(request, ConfigChangeNotifyRequest):
return None
self.logger.info(
f"received config change push,clientName:{self.client_name},dataId:{request.dataId},group:{request.group},tenant:{request.tenant}")
await self.config_subscribe_manager.notify_config_changed(request.dataId, request.group,
self.config_subscribe_manager.namespace_id)
return NotifySubscriberResponse()

@@ -0,0 +1,255 @@
import asyncio
import base64
import hashlib
import hmac
import logging
import uuid
from typing import Callable
from v2.nacos.common.client_config import ClientConfig
from v2.nacos.common.constants import Constants
from v2.nacos.common.nacos_exception import NacosException, SERVER_ERROR, CLIENT_OVER_THRESHOLD
from v2.nacos.config.cache.config_info_cache import ConfigInfoCache
from v2.nacos.config.cache.config_subscribe_manager import ConfigSubscribeManager
from v2.nacos.config.filter.config_filter import ConfigFilterChainManager
from v2.nacos.config.model.config import ConfigListenContext
from v2.nacos.config.model.config_param import ConfigParam
from v2.nacos.config.model.config_request import AbstractConfigRequest, ConfigQueryRequest, \
CONFIG_CHANGE_NOTIFY_REQUEST_TYPE, ConfigPublishRequest, ConfigRemoveRequest, ConfigBatchListenRequest
from v2.nacos.config.model.config_response import ConfigQueryResponse, ConfigPublishResponse, ConfigRemoveResponse, \
ConfigChangeBatchListenResponse
from v2.nacos.config.remote.config_change_notify_request_handler import ConfigChangeNotifyRequestHandler
from v2.nacos.config.remote.config_grpc_connection_event_listener import ConfigGrpcConnectionEventListener
from v2.nacos.config.util.config_client_util import get_config_cache_key
from v2.nacos.transport.http_agent import HttpAgent
from v2.nacos.transport.nacos_server_connector import NacosServerConnector
from v2.nacos.transport.rpc_client import ConnectionType, RpcClient
from v2.nacos.transport.rpc_client_factory import RpcClientFactory
from v2.nacos.utils.common_util import get_current_time_millis
from v2.nacos.utils.md5_util import md5
class ConfigGRPCClientProxy:
def __init__(self,
client_config: ClientConfig,
http_agent: HttpAgent,
config_info_cache: ConfigInfoCache,
config_filter_chain_manager: ConfigFilterChainManager):
self.logger = logging.getLogger(Constants.CONFIG_MODULE)
self.client_config = client_config
self.namespace_id = client_config.namespace_id
self.nacos_server_connector = NacosServerConnector(self.logger, self.client_config, http_agent)
self.config_info_cache = config_info_cache
self.uuid = uuid.uuid4()
self.app_name = self.client_config.app_name if self.client_config.app_name else "unknown"
self.rpc_client_manager = RpcClientFactory(self.logger)
self.execute_config_listen_channel = asyncio.Queue()
self.stop_event = asyncio.Event()
self.listen_task = asyncio.create_task(self._execute_config_listen_task())
self.last_all_sync_time = get_current_time_millis()
self.config_subscribe_manager = ConfigSubscribeManager(self.logger, config_info_cache,
self.namespace_id,
config_filter_chain_manager,
self.execute_config_listen_channel)
async def start(self):
await self.nacos_server_connector.init()
await self.fetch_rpc_client(0)
async def fetch_rpc_client(self, task_id: int = 0) -> RpcClient:
labels = {
Constants.LABEL_SOURCE: Constants.LABEL_SOURCE_SDK,
Constants.LABEL_MODULE: Constants.CONFIG_MODULE,
Constants.APP_NAME_HEADER: self.app_name,
"taskId": str(task_id),
}
rpc_client = await self.rpc_client_manager.create_client(
str(self.uuid) + "_config_" + str(task_id), ConnectionType.GRPC, labels,
self.client_config, self.nacos_server_connector)
if rpc_client.is_wait_initiated():
await rpc_client.register_server_request_handler(CONFIG_CHANGE_NOTIFY_REQUEST_TYPE,
ConfigChangeNotifyRequestHandler(
self.logger,
self.config_subscribe_manager,
rpc_client.name))
await rpc_client.register_connection_listener(ConfigGrpcConnectionEventListener(
self.logger,
self.config_subscribe_manager,
self.execute_config_listen_channel,
rpc_client)
)
await rpc_client.start()
return rpc_client
async def request_config_server(self, rpc_client: RpcClient, request: AbstractConfigRequest, response_class):
try:
await self.nacos_server_connector.inject_security_info(request.get_headers())
now = get_current_time_millis()
request.put_all_headers({
Constants.CLIENT_APPNAME_HEADER: self.app_name,
Constants.CLIENT_REQUEST_TS_HEADER: str(now),
Constants.CLIENT_REQUEST_TOKEN_HEADER: md5(str(now) + self.client_config.app_key),
Constants.EX_CONFIG_INFO: "true",
Constants.CHARSET_KEY: "utf-8",
'Timestamp': str(now),
})
credentials = self.client_config.credentials_provider.get_credentials()
if credentials.get_access_key_id() and credentials.get_access_key_secret():
if request.tenant:
resource = request.tenant + "+" + request.group
else:
resource = request.group
if resource.strip():
sign_str = f"{resource}+{now}"
else:
sign_str = str(now)
request.put_all_headers({
'Spas-AccessKey': credentials.get_access_key_id(),
'Spas-Signature': base64.encodebytes(
hmac.new(credentials.get_access_key_secret().encode(), sign_str.encode(),
digestmod=hashlib.sha1).digest()).decode().strip(),
})
if credentials.get_security_token():
request.put_header("Spas-SecurityToken", credentials.get_security_token())
response = await rpc_client.request(request, self.client_config.grpc_config.grpc_timeout)
if response.get_result_code() != 200:
raise NacosException(response.get_error_code(), response.get_message())
if issubclass(response.__class__, response_class):
return response
else:
                raise NacosException(SERVER_ERROR, "server returned an invalid response")
except NacosException as e:
self.logger.error("failed to invoke nacos config server : " + str(e))
raise e
except Exception as e:
self.logger.error("failed to invoke nacos config server : " + str(e))
raise NacosException(SERVER_ERROR, "Request nacos config server failed: " + str(e))
async def query_config(self, data_id: str, group: str):
self.logger.info("query config group:%s,dataId:%s,namespace:%s", group, data_id,
self.namespace_id)
request = ConfigQueryRequest(
group=group,
dataId=data_id,
tenant=self.namespace_id)
request.put_header("notify", str(False))
cache_key = get_config_cache_key(data_id, group, self.namespace_id)
try:
response = await self.request_config_server(await self.fetch_rpc_client(), request, ConfigQueryResponse)
await self.config_info_cache.write_config_to_cache(cache_key, response.content,
response.encryptedDataKey)
return response.content, response.encryptedDataKey
except NacosException as e:
if e.error_code == 300:
await self.config_info_cache.write_config_to_cache(cache_key, "", "")
return "", ""
raise e
async def publish_config(self, param: ConfigParam):
self.logger.info("publish config group:%s,dataId:%s,content:%s,tag:%s", param.group, param.data_id,
param.content, param.tag)
request = ConfigPublishRequest(
group=param.group,
dataId=param.data_id,
tenant=self.namespace_id,
content=param.content,
casMd5=param.cas_md5)
request.additionMap["tag"] = param.tag
request.additionMap["appName"] = param.app_name
request.additionMap["betaIps"] = param.beta_ips
request.additionMap["type"] = param.type
request.additionMap["src_user"] = param.src_user
request.additionMap["encryptedDataKey"] = param.encrypted_data_key if param.encrypted_data_key else ""
response = await self.request_config_server(await self.fetch_rpc_client(), request, ConfigPublishResponse)
return response.is_success()
async def remove_config(self, group: str, data_id: str):
self.logger.info("remove config group:%s,dataId:%s", group, data_id)
request = ConfigRemoveRequest(group=group,
dataId=data_id,
tenant=self.namespace_id)
response = await self.request_config_server(await self.fetch_rpc_client(), request, ConfigRemoveResponse)
return response.is_success()
async def add_listener(self, data_id: str, group: str, listener: Callable) -> None:
self.logger.info(f"add config listener,dataId:{data_id},group:{group}")
await self.config_subscribe_manager.add_listener(data_id, group, self.namespace_id, listener)
async def remove_listener(self, data_id: str, group: str, listener: Callable):
self.logger.info(f"remove config listener,dataId:{data_id},group:{group}")
await self.config_subscribe_manager.remove_listener(data_id, group, self.namespace_id, listener)
async def _execute_config_listen_task(self):
while not self.stop_event.is_set():
try:
await asyncio.wait_for(self.execute_config_listen_channel.get(), timeout=5)
except asyncio.TimeoutError:
self.logger.debug("Timeout occurred")
except asyncio.CancelledError:
return
has_changed_keys = False
is_sync_all = (get_current_time_millis() - self.last_all_sync_time) >= 5 * 60 * 1000
listen_task_map = await self.config_subscribe_manager.execute_listener_and_build_tasks(is_sync_all)
if len(listen_task_map) == 0:
continue
for task_id, cache_data_list in listen_task_map.items():
if len(cache_data_list) == 0:
continue
request = ConfigBatchListenRequest(group='', dataId='', tenant='')
for cache_data in cache_data_list:
config_listen_context = ConfigListenContext(group=cache_data.group,
md5=cache_data.md5,
dataId=cache_data.data_id,
tenant=cache_data.tenant)
request.configListenContexts.append(config_listen_context)
try:
rpc_client = await self.fetch_rpc_client(task_id)
response: ConfigChangeBatchListenResponse = await self.request_config_server(
rpc_client, request, ConfigChangeBatchListenResponse)
if len(response.changedConfigs) > 0:
has_changed_keys = True
for config_ctx in response.changedConfigs:
change_key = get_config_cache_key(config_ctx.dataId, config_ctx.group, config_ctx.tenant)
try:
content, encrypted_data_key = await self.query_config(config_ctx.dataId,
config_ctx.group)
await self.config_subscribe_manager.update_subscribe_cache(config_ctx.dataId,
config_ctx.group,
self.namespace_id,
content,
encrypted_data_key)
except Exception as e:
self.logger.error(f"failed to refresh config:{change_key},error:{str(e)}")
continue
except Exception as e:
self.logger.error(f"failed to batch listen config ,error:{str(e)}")
continue
if is_sync_all:
self.last_all_sync_time = get_current_time_millis()
if has_changed_keys:
await self.execute_config_listen_channel.put(None)
async def server_health(self):
return (await self.fetch_rpc_client()).is_running()
async def close_client(self):
self.logger.info("close Nacos python config grpc client...")
self.stop_event.set()
await self.listen_task
await self.rpc_client_manager.shutdown_all_clients()
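
For reference, the Spas signing in request_config_server reduces to: join tenant and group with '+', append the millisecond timestamp, sign with HMAC-SHA1 using the secret key, then base64-encode. A standalone sketch of the same computation (spas_signature is a made-up helper name, not part of this module):

import base64
import hashlib
import hmac
import time


def spas_signature(secret_key: str, tenant: str, group: str):
    now = str(int(time.time() * 1000))
    resource = f"{tenant}+{group}" if tenant else group
    sign_str = f"{resource}+{now}" if resource.strip() else now
    signature = base64.encodebytes(
        hmac.new(secret_key.encode(), sign_str.encode(),
                 digestmod=hashlib.sha1).digest()).decode().strip()
    return now, signature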

@@ -0,0 +1,23 @@
import asyncio
from v2.nacos.config.cache.config_subscribe_manager import ConfigSubscribeManager
from v2.nacos.transport.connection_event_listener import ConnectionEventListener
from v2.nacos.transport.rpc_client import RpcClient
class ConfigGrpcConnectionEventListener(ConnectionEventListener):
def __init__(self, logger, config_subscribe_manager: ConfigSubscribeManager,
execute_config_listen_channel: asyncio.Queue, rpc_client: RpcClient):
self.logger = logger
self.config_subscribe_manager = config_subscribe_manager
self.execute_config_listen_channel = execute_config_listen_channel
self.rpc_client = rpc_client
async def on_connected(self) -> None:
self.logger.info(f"{self.rpc_client.name} rpc client connected,notify listen config")
await self.execute_config_listen_channel.put(None)
async def on_disconnect(self) -> None:
task_id = self.rpc_client.labels["taskId"]
await self.config_subscribe_manager.batch_set_config_changed(int(task_id))

@@ -0,0 +1,5 @@
from v2.nacos.common.constants import Constants
def get_config_cache_key(data_id: str, group: str, tenant: str):
return f"{data_id}{Constants.CONFIG_INFO_SPLITER}{group}{Constants.CONFIG_INFO_SPLITER}{tenant}"

v2/nacos/nacos_client.py
@@ -0,0 +1,57 @@
import logging
import os
from logging.handlers import TimedRotatingFileHandler
from v2.nacos.common.client_config import ClientConfig
from v2.nacos.common.nacos_exception import NacosException, INVALID_PARAM
from v2.nacos.transport.http_agent import HttpAgent
class NacosClient:
def __init__(self, client_config: ClientConfig, log_file: str):
if not client_config:
raise NacosException(INVALID_PARAM, "client config is required")
self.logger = None
self.init_log(client_config, log_file)
if client_config.timeout_ms <= 0:
client_config.timeout_ms = 10 * 1000
if client_config.heart_beat_interval <= 0:
client_config.heart_beat_interval = 5 * 1000
self.client_config = client_config
self.http_agent = HttpAgent(self.logger, client_config.tls_config, client_config.timeout_ms)
def init_log(self, client_config: ClientConfig, module):
log_level = client_config.log_level or logging.INFO
if client_config.cache_dir == '':
client_config.cache_dir = os.path.join(os.path.expanduser("~"), "nacos", "cache")
if not client_config.cache_dir.endswith(os.path.sep):
client_config.cache_dir += os.path.sep
if client_config.log_dir is None or client_config.log_dir.strip() == '':
client_config.log_dir = os.path.join(os.path.expanduser("~"), "logs", "nacos")
if not client_config.log_dir.endswith(os.path.sep):
client_config.log_dir += os.path.sep
os.makedirs(client_config.log_dir, exist_ok=True)
os.makedirs(client_config.cache_dir, exist_ok=True)
log_path = client_config.log_dir + module + ".log"
self.logger = logging.getLogger(module)
file_handler = TimedRotatingFileHandler(log_path, when="midnight", interval=1,
backupCount=client_config.log_rotation_backup_count,
encoding='utf-8')
self.logger.setLevel(log_level)
formatter = logging.Formatter('%(asctime)s %(levelname)s %(message)s')
file_handler.setFormatter(formatter)
self.logger.addHandler(file_handler)
self.logger.propagate = False
self.logger.info(f"log directory: {client_config.log_dir}.")

v2/nacos/naming/cache/__init__.py (new, empty file)

@@ -0,0 +1,141 @@
import asyncio
import json
import logging
import os
from typing import Callable, Optional, List, Dict
from v2.nacos.common.client_config import ClientConfig
from v2.nacos.common.constants import Constants
from v2.nacos.naming.cache.subscribe_manager import SubscribeManager
from v2.nacos.naming.model.instance import Instance
from v2.nacos.naming.model.service import Service
from v2.nacos.naming.util.naming_client_util import get_service_cache_key, get_group_name
from v2.nacos.utils.common_util import get_current_time_millis, to_json_string
from v2.nacos.utils.file_util import read_all_files_in_dir, write_to_file
class ServiceInfoCache:
def __init__(self, client_config: ClientConfig):
self.logger = logging.getLogger(Constants.NAMING_MODULE)
self.cache_dir = os.path.join(client_config.cache_dir, Constants.NAMING_MODULE, client_config.namespace_id)
self.service_info_map: Dict[str, Service] = {}
self.update_time_map = {}
self.lock = asyncio.Lock()
self.sub_callback_manager = SubscribeManager()
self.update_cache_when_empty = client_config.update_cache_when_empty
if client_config.load_cache_at_start:
asyncio.create_task(self.load_cache_from_disk())
async def load_cache_from_disk(self):
cache_file_content_dict = await read_all_files_in_dir(self.logger, self.cache_dir)
if cache_file_content_dict is None:
return
service_map = {}
for file_name, cache_content in cache_file_content_dict.items():
try:
service_data = json.loads(cache_content)
service = Service(**service_data)
if len(service.hosts) == 0:
self.logger.warning(
f"instance cache list for service:{service.name} is empty, json string:{cache_content}")
if service is None:
continue
cache_key = get_service_cache_key(get_group_name(service.name, service.groupName), service.clusters)
service_map[cache_key] = service
except json.JSONDecodeError as e:
self.logger.error(f"failed to parse json:{cache_content}, err:{e}")
continue
self.logger.info(f"finish loading name cache, total file size: {len(cache_file_content_dict)}")
if service_map is None or len(service_map) == 0:
self.logger.info("[load_cache_from_disk] no cache file found, skip loading cache from disk.")
return
self.service_info_map = service_map
self.logger.info("[load_cache_from_disk] loaded {%s} entries cache from disk.", len(service_map))
async def process_service_json(self, data: str):
try:
service_data = json.loads(data)
service = Service(**service_data)
if service is None:
return
except json.JSONDecodeError as e:
self.logger.error(f"failed to parse json:{data}, err:{e}")
return
await self.process_service(service)
async def process_service(self, service: Service):
if service is None:
return
if not self.update_cache_when_empty and len(service.hosts) == 0:
            # skip the cache update when the instance list is empty and update_cache_when_empty is False
self.logger.warning(
f"instance list is empty, skipping update as update_cache_when_empty is set to False. service name: {service.name}")
return
cache_key = get_service_cache_key(get_group_name(service.name, service.groupName), service.clusters)
async with self.lock:
old_service = self.service_info_map.get(cache_key, None)
if old_service is not None and old_service.lastRefTime >= service.lastRefTime:
self.logger.warning(
f"out of date data received, old-t: {old_service.lastRefTime}, new-t: {service.lastRefTime}")
return
            # update the refresh time and the cached service info
self.update_time_map[cache_key] = get_current_time_millis()
self.service_info_map[cache_key] = service
if not old_service or self.check_instance_changed(old_service, service):
self.logger.info(f"service key: {cache_key} was updated to: {str(service)}")
await write_to_file(self.logger, os.path.join(self.cache_dir, cache_key), to_json_string(service))
await self.sub_callback_manager.service_changed(cache_key, service)
self.logger.info(f"current service map size: {len(self.service_info_map)}")
async def get_service_info(self, service_name, group_name, clusters) -> Service:
cache_key = get_service_cache_key(get_group_name(service_name, group_name), clusters)
async with self.lock:
service = self.service_info_map.get(cache_key)
self.logger.info(
f"get service info from cache, key: {cache_key}instances:{service.hosts if service is not None else 'None'}")
return service
def check_instance_changed(self, old_service: Optional[Service], new_service: Service):
if old_service is None:
return True
if len(old_service.hosts) != len(new_service.hosts):
return True
old_ref_time = old_service.lastRefTime
new_ref_time = new_service.lastRefTime
if old_ref_time > new_ref_time:
self.logger.warning(f"out of date data received, old-t: {old_ref_time}, new-t: {new_ref_time}")
return False
        # sort both instance lists and compare them (see sort_instances below)
old_instance = self.sort_instances(old_service.hosts)
new_instance = self.sort_instances(new_service.hosts)
return old_instance != new_instance
@staticmethod
def sort_instances(instances: List[Instance]) -> List[Instance]:
def instance_key(instance: Instance) -> (int, int):
ip_num = int(''.join(instance.ip.split('.')))
return ip_num, instance.port
return sorted(instances, key=instance_key)
async def register_callback(self, service_name: str, clusters: str, callback_func: Callable):
await self.sub_callback_manager.add_callback_func(service_name, clusters, callback_func)
async def deregister_callback(self, service_name: str, clusters: str, callback_func: Callable):
await self.sub_callback_manager.remove_callback_func(service_name, clusters, callback_func)
async def is_subscribed(self, service_name: str, clusters: str) -> bool:
return await self.sub_callback_manager.is_subscribed(service_name, clusters)
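
Not part of the diff: a standalone sketch of the ordering key used by ServiceInfoCache.sort_instances when check_instance_changed compares the old and new host lists (no SDK imports needed here).

def demo_instance_key(ip: str, port: int) -> tuple:
    # same idea as sort_instances: collapse the dotted IP into an int, then order by (ip, port)
    return int("".join(ip.split("."))), port

hosts = [("10.0.0.2", 8080), ("10.0.0.1", 8081), ("10.0.0.1", 8080)]
print(sorted(hosts, key=lambda h: demo_instance_key(*h)))
# -> [('10.0.0.1', 8080), ('10.0.0.1', 8081), ('10.0.0.2', 8080)]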

@ -0,0 +1,35 @@
import asyncio
from typing import Dict, List, Callable
from v2.nacos.naming.model.service import Service
from v2.nacos.naming.util.naming_client_util import get_service_cache_key
class SubscribeManager:
def __init__(self):
self.callback_func_map: Dict[str, List[Callable]] = {}
self.mux = asyncio.Lock()
async def is_subscribed(self, service_name: str, clusters: str) -> bool:
key = get_service_cache_key(service_name, clusters)
return key in self.callback_func_map
async def add_callback_func(self, service_name: str, clusters: str, callback_func: Callable):
key = get_service_cache_key(service_name, clusters)
async with self.mux:
if key not in self.callback_func_map:
self.callback_func_map[key] = []
self.callback_func_map[key].append(callback_func)
async def remove_callback_func(self, service_name: str, clusters: str, callback_func: Callable):
key = get_service_cache_key(service_name, clusters)
async with self.mux:
if key in self.callback_func_map:
self.callback_func_map[key] = [func for func in self.callback_func_map[key] if func != callback_func]
if not self.callback_func_map[key]:
del self.callback_func_map[key]
async def service_changed(self, cache_key: str, service: Service):
if cache_key in self.callback_func_map:
for callback_func in self.callback_func_map[cache_key]:
await callback_func(service.hosts)
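
Not part of the diff: a hedged usage sketch of SubscribeManager, assuming the package from this repository is importable. The cache key is passed as a plain string here, whereas the SDK builds it via get_service_cache_key.

import asyncio

from v2.nacos.naming.cache.subscribe_manager import SubscribeManager
from v2.nacos.naming.model.service import Service

async def on_change(hosts):
    print("instances changed:", hosts)

async def main():
    mgr = SubscribeManager()
    await mgr.add_callback_func("demo-service", "", on_change)  # empty clusters -> key == service name
    svc = Service(name="demo-service", groupName="DEFAULT_GROUP")
    await mgr.service_changed("demo-service", svc)              # invokes on_change(svc.hosts)
    print(await mgr.is_subscribed("demo-service", ""))          # True
    await mgr.remove_callback_func("demo-service", "", on_change)

asyncio.run(main())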

@ -0,0 +1,5 @@
from threading import RLock
class InstancesChangeNotifier:
pass

@ -0,0 +1,83 @@
import re
from pydantic import BaseModel
from v2.nacos.common.constants import Constants
from v2.nacos.common.nacos_exception import NacosException, INVALID_PARAM
from v2.nacos.common.preserved_metadata_key import PreservedMetadataKeys
class Instance(BaseModel):
instanceId: str = ''
ip: str
port: int
weight: float = 1.0
healthy: bool = True
enabled: bool = True
ephemeral: bool = True
clusterName: str = ''
serviceName: str = ''
metadata: dict = {}
def __str__(self):
return f"Instance({', '.join(f'{key}={value!r}' for key, value in self.__dict__.items())})"
def to_inet_addr(self):
return self.ip + ":" + str(self.port)
def is_ephemeral(self) -> bool:
return self.ephemeral
def get_weight(self):
return self.weight
def add_metadata(self, key: str, value: str) -> None:
if self.metadata is None:
self.metadata = {}
self.metadata[key] = value
def get_instance_heart_beat_interval(self):
return self.__get_metadata_by_key_with_int_default(PreservedMetadataKeys.HEART_BEAT_INTERVAL,
Constants.DEFAULT_HEART_BEAT_INTERVAL)
def get_instance_heart_beat_timeout(self):
        # note: reads the heart-beat timeout key (assumes PreservedMetadataKeys.HEART_BEAT_TIMEOUT exists)
        return self.__get_metadata_by_key_with_int_default(PreservedMetadataKeys.HEART_BEAT_TIMEOUT,
Constants.DEFAULT_HEART_BEAT_TIMEOUT)
def get_ip_delete_timeout(self):
return self.__get_metadata_by_key_with_int_default(PreservedMetadataKeys.IP_DELETE_TIMEOUT,
Constants.DEFAULT_IP_DELETE_TIMEOUT)
def get_instance_id_generator(self):
return self.__get_metadata_by_key_with_str_default(PreservedMetadataKeys.INSTANCE_ID_GENERATOR,
Constants.DEFAULT_INSTANCE_ID_GENERATOR)
def check_instance_is_legal(self):
if self.get_instance_heart_beat_timeout() < self.get_instance_heart_beat_interval() or \
self.get_ip_delete_timeout() < self.get_instance_heart_beat_interval():
raise NacosException(
INVALID_PARAM,
"Instance 'heart beat interval' must less than 'heart beat timeout' and 'ip delete timeout'."
)
def contains_metadata(self, key: str) -> bool:
if not self.metadata:
return False
return key in self.metadata.keys()
def __get_metadata_by_key_with_int_default(self, key: str, default_value: int) -> int:
if not self.metadata or key not in self.metadata:
return default_value
value = self.metadata[key]
pattern = re.compile(Constants.NUMBER_PATTERN)
if value.strip() and re.match(pattern, value):
return int(value)
return default_value
def __get_metadata_by_key_with_str_default(self, key: str, default_value: str) -> str:
if not self.metadata:
return default_value
return self.metadata[key]
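
Not part of the diff: a hedged sketch of the Instance model above, assuming this repository's package is importable. Metadata values are strings, as the int parser above expects.

from v2.nacos.common.preserved_metadata_key import PreservedMetadataKeys
from v2.nacos.naming.model.instance import Instance

inst = Instance(ip="10.0.0.1", port=8080, weight=1.0, clusterName="c1")
inst.add_metadata(PreservedMetadataKeys.HEART_BEAT_INTERVAL, "3000")

print(inst.to_inet_addr())                      # 10.0.0.1:8080
print(inst.get_instance_heart_beat_interval())  # 3000, parsed from metadata
inst.check_instance_is_legal()                  # raises NacosException when the timeouts are inconsistent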

@ -0,0 +1,60 @@
from typing import Optional, Callable, List, Dict
from pydantic import BaseModel
from v2.nacos.common.constants import Constants
class RegisterInstanceParam(BaseModel):
ip: str
port: int
weight: float = 1.0
enabled: bool = True
healthy: bool = True
metadata: Dict[str, str] = {}
cluster_name: str = ''
service_name: str
group_name: str = Constants.DEFAULT_GROUP
ephemeral: bool = True
class BatchRegisterInstanceParam(BaseModel):
service_name: str
group_name: str = Constants.DEFAULT_GROUP
instances: List[RegisterInstanceParam] = []
class DeregisterInstanceParam(BaseModel):
ip: str
port: int
cluster_name: str = ''
service_name: str
group_name: str = Constants.DEFAULT_GROUP
ephemeral: bool = True
class ListInstanceParam(BaseModel):
service_name: str
group_name: str = Constants.DEFAULT_GROUP
clusters: List[str] = []
subscribe: bool = True
healthy_only: Optional[bool]
class SubscribeServiceParam(BaseModel):
service_name: str
group_name: str = Constants.DEFAULT_GROUP
clusters: List[str] = []
subscribe_callback: Optional[Callable] = None
class GetServiceParam(BaseModel):
service_name: str
group_name: str = Constants.DEFAULT_GROUP
clusters: List[str] = []
class ListServiceParam(BaseModel):
namespace_id: str = Constants.DEFAULT_NAMESPACE_ID
group_name: str = Constants.DEFAULT_GROUP
page_no: int = 1
page_size: int = 10
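
Not part of the diff: a hedged sketch of the request-parameter models above (package assumed importable); fields left out fall back to the defaults shown in the diff.

from v2.nacos.naming.model.naming_param import RegisterInstanceParam, ListInstanceParam

reg = RegisterInstanceParam(ip="10.0.0.1", port=8080, service_name="demo-service")
print(reg.group_name, reg.weight, reg.ephemeral)  # Constants.DEFAULT_GROUP, 1.0, True

ls = ListInstanceParam(service_name="demo-service", healthy_only=True)
print(ls.clusters, ls.subscribe)                  # [] True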

@ -0,0 +1,63 @@
from abc import ABC
from typing import Optional, Any, List
from v2.nacos.naming.model.instance import Instance
from v2.nacos.naming.model.service import Service
from v2.nacos.transport.model.rpc_request import Request
class AbstractNamingRequest(Request, ABC):
namespace: Optional[str] = ''
serviceName: Optional[str] = ''
groupName: Optional[str] = ''
def get_module(self):
return "naming"
def get_request_type(self) -> str:
"""
        Raise NotImplementedError so that subclasses are explicitly required to override this method.
"""
raise NotImplementedError("Subclasses should implement this method.")
NOTIFY_SUBSCRIBER_REQUEST_TYPE = "NotifySubscriberRequest"
class InstanceRequest(AbstractNamingRequest):
type: Optional[str]
instance: Optional[Instance]
def get_request_type(self) -> str:
return 'InstanceRequest'
class BatchInstanceRequest(AbstractNamingRequest):
type: Optional[str]
instances: Optional[List[Instance]]
def get_request_type(self) -> str:
return 'BatchInstanceRequest'
class NotifySubscriberRequest(AbstractNamingRequest):
serviceInfo: Optional[Service]
def get_request_type(self) -> str:
return 'NotifySubscriberRequest'
class ServiceListRequest(AbstractNamingRequest):
pageNo: Optional[int]
pageSize: Optional[int]
def get_request_type(self) -> str:
return 'ServiceListRequest'
class SubscribeServiceRequest(AbstractNamingRequest):
subscribe: Optional[bool]
clusters: Optional[str]
def get_request_type(self) -> str:
return 'SubscribeServiceRequest'

@ -0,0 +1,37 @@
from typing import Optional, Any, List
from v2.nacos.naming.model.service import Service
from v2.nacos.transport.model.rpc_response import Response
class NotifySubscriberResponse(Response):
def get_response_type(self) -> str:
return "NotifySubscriberResponse"
class SubscribeServiceResponse(Response):
serviceInfo: Optional[Service] = None
def get_response_type(self) -> str:
return "SubscribeServiceResponse"
def get_service_info(self) -> Service:
return self.serviceInfo
class InstanceResponse(Response):
def get_response_type(self) -> str:
return "InstanceResponse"
class BatchInstanceResponse(Response):
def get_response_type(self) -> str:
return "BatchInstanceResponse"
class ServiceListResponse(Response):
count: int
serviceNames: List[str]
def get_response_type(self) -> str:
return "ServiceListResponse"

@ -0,0 +1,117 @@
import time
import urllib.parse
from typing import Optional, List
from pydantic import BaseModel
from v2.nacos.common.constants import Constants
from v2.nacos.common.nacos_exception import NacosException
from v2.nacos.naming.model.instance import Instance
EMPTY = ""
ALL_IPS = "000--00-ALL_IPS--00--000"
SPLITER = "@@"
DEFAULT_CHARSET = "UTF-8"
class Service(BaseModel):
name: str
groupName: str
clusters: Optional[str] = ''
cacheMillis: int = 1000
hosts: List[Instance] = []
lastRefTime: int = 0
checksum: str = ""
allIps: bool = False
reachProtectionThreshold: bool = False
jsonFromServer: str = ""
def init_from_key(self, key=None):
if key:
max_index = 2
cluster_index = 2
service_name_index = 1
group_index = 0
keys = key.split(Constants.SERVICE_INFO_SPLITER)
if len(keys) >= max_index + 1:
self.groupName = keys[group_index]
self.name = keys[service_name_index]
self.clusters = keys[cluster_index]
elif len(keys) == max_index:
self.groupName = keys[group_index]
self.name = keys[service_name_index]
else:
raise NacosException("Can't parse out 'group_name', but it must not None!")
def get_ip_count(self):
return len(self.hosts)
def is_expired(self):
return int(round(time.time() * 1000)) - self.lastRefTime > self.cacheMillis
def add_host(self, host):
self.hosts.append(host)
def add_all_hosts(self, hosts):
self.hosts.extend(hosts)
def is_valid(self):
return self.hosts != []
def validate(self):
if self.allIps:
return True
if not self.hosts:
return False
valid_hosts = []
for host in self.hosts:
            if not host.healthy:
                continue
            for i in range(int(host.get_weight())):
valid_hosts.append(i)
return len(valid_hosts) > 0
def get_key_default(self):
service_name = self.get_grouped_service_name()
return self.get_key(service_name, self.clusters)
def get_key_encoded(self):
service_name = self.get_grouped_service_name().encode("utf-8")
service_name = urllib.parse.quote(service_name)
return self.get_key(service_name, self.clusters)
def get_grouped_service_name(self):
service_name = self.name
if self.groupName and Constants.SERVICE_INFO_SPLITER not in service_name:
service_name = self.groupName + Constants.SERVICE_INFO_SPLITER + service_name
return service_name
@staticmethod
def from_key(key: str):
info = key.split(Constants.SERVICE_INFO_SPLITER)
if len(info) > 2:
service = Service(name=info[1], groupName=info[0], clusters=info[2])
else:
service = Service(name=info[1], groupName=info[0], clusters="")
return service
def get_hosts_str(self):
hosts_str = ""
for host in self.hosts:
hosts_str += host.json() + ";"
return hosts_str
class Config:
arbitrary_types_allowed = True
class ServiceList(BaseModel):
count: int
services: List[str]
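
Not part of the diff: a hedged sketch of Service key handling (package assumed importable). The separator comes from Constants.SERVICE_INFO_SPLITER, so no literal value is hard-coded here.

from v2.nacos.common.constants import Constants
from v2.nacos.naming.model.service import Service

key = Constants.SERVICE_INFO_SPLITER.join(["DEFAULT_GROUP", "demo-service", "c1"])
svc = Service.from_key(key)
print(svc.groupName, svc.name, svc.clusters)  # DEFAULT_GROUP demo-service c1
print(svc.get_grouped_service_name())         # group + splitter + service name
print(svc.is_valid(), svc.is_expired())       # False True (no hosts; lastRefTime defaults to 0)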

@ -0,0 +1,201 @@
import asyncio
from typing import List
from v2.nacos.common.client_config import ClientConfig
from v2.nacos.common.constants import Constants
from v2.nacos.common.nacos_exception import NacosException, INVALID_PARAM
from v2.nacos.nacos_client import NacosClient
from v2.nacos.naming.cache.service_info_cache import ServiceInfoCache
from v2.nacos.naming.model.instance import Instance
from v2.nacos.naming.model.naming_param import RegisterInstanceParam, BatchRegisterInstanceParam, \
DeregisterInstanceParam, ListInstanceParam, SubscribeServiceParam, GetServiceParam, ListServiceParam
from v2.nacos.naming.model.service import ServiceList
from v2.nacos.naming.model.service import Service
from v2.nacos.naming.remote.naming_grpc_client_proxy import NamingGRPCClientProxy
from v2.nacos.naming.util.naming_client_util import get_group_name
class NacosNamingService(NacosClient):
def __init__(self, client_config: ClientConfig):
super().__init__(client_config, Constants.NAMING_MODULE)
self.namespace_id = client_config.namespace_id
self.service_info_holder = ServiceInfoCache(client_config)
self.grpc_client_proxy = NamingGRPCClientProxy(client_config, self.http_agent, self.service_info_holder)
@staticmethod
async def create_naming_service(client_config: ClientConfig) -> 'NacosNamingService':
naming_service = NacosNamingService(client_config)
await naming_service.grpc_client_proxy.start()
return naming_service
async def register_instance(self, request: RegisterInstanceParam) -> bool:
if not request.service_name or not request.service_name.strip():
raise NacosException(INVALID_PARAM, "service_name can not be empty")
if not request.group_name:
request.group_name = Constants.DEFAULT_GROUP
if request.metadata is None:
request.metadata = {}
instance = Instance(ip=request.ip,
port=request.port,
metadata=request.metadata,
clusterName=request.cluster_name,
healthy=request.healthy,
enabled=request.enabled,
weight=request.weight,
ephemeral=request.ephemeral,
)
instance.check_instance_is_legal()
return await self.grpc_client_proxy.register_instance(request.service_name, request.group_name, instance)
async def batch_register_instances(self, request: BatchRegisterInstanceParam) -> bool:
if not request.service_name:
raise NacosException(INVALID_PARAM, "service_name can not be empty")
instance_list = []
for instance in request.instances:
if not instance.ephemeral:
raise NacosException(INVALID_PARAM,
f"batch registration does not allow persistent instance:{instance}")
instance_list.append(Instance(
ip=instance.ip,
port=instance.port,
metadata=instance.metadata,
clusterName=instance.cluster_name,
healthy=instance.healthy,
                enabled=instance.enabled,
weight=instance.weight,
ephemeral=instance.ephemeral,
))
return await self.grpc_client_proxy.batch_register_instance(request.service_name, request.group_name,
instance_list)
async def deregister_instance(self, request: DeregisterInstanceParam) -> bool:
if not request.service_name:
raise NacosException(INVALID_PARAM, "service_name can not be empty")
if not request.group_name:
request.group_name = Constants.DEFAULT_GROUP
instance = Instance(ip=request.ip,
port=request.port,
                clusterName=request.cluster_name,
ephemeral=request.ephemeral,
)
return await self.grpc_client_proxy.deregister_instance(request.service_name, request.group_name, instance)
async def update_instance(self, request: RegisterInstanceParam) -> bool:
if not request.service_name:
raise NacosException(INVALID_PARAM, "service_name can not be empty")
if request.metadata is None:
request.metadata = {}
if not request.group_name:
request.group_name = Constants.DEFAULT_GROUP
instance = Instance(ip=request.ip,
port=request.port,
metadata=request.metadata,
clusterName=request.cluster_name,
enabled=request.enabled,
healthy=request.healthy,
weight=request.weight,
ephemeral=request.ephemeral,
)
instance.check_instance_is_legal()
return await self.grpc_client_proxy.register_instance(request.service_name, request.group_name, instance)
async def get_service(self, request: GetServiceParam) -> Service:
if not request.service_name:
raise NacosException(INVALID_PARAM, "service_name can not be empty")
if not request.group_name:
request.group_name = Constants.DEFAULT_GROUP
clusters = ",".join(request.clusters)
service = await self.service_info_holder.get_service_info(request.service_name, request.group_name, clusters)
if not service:
service = await self.grpc_client_proxy.subscribe(request.service_name, request.group_name, clusters)
return service
async def list_services(self, request: ListServiceParam) -> ServiceList:
if not request.group_name:
request.group_name = Constants.DEFAULT_GROUP
if not request.namespace_id:
if not self.client_config.namespace_id:
request.namespace_id = Constants.DEFAULT_NAMESPACE_ID
else:
request.namespace_id = self.client_config.namespace_id
return await self.grpc_client_proxy.list_services(request)
async def list_instances(self, request: ListInstanceParam) -> List[Instance]:
if not request.service_name:
raise NacosException(INVALID_PARAM, "service_name can not be empty")
if not request.group_name:
request.group_name = Constants.DEFAULT_GROUP
clusters = ",".join(request.clusters)
service_info = None
        # when subscribe is True, read the service info from the local cache first and subscribe to the service
if request.subscribe:
service_info = await self.service_info_holder.get_service_info(request.service_name, request.group_name,
clusters)
if service_info is None:
service_info = await self.grpc_client_proxy.subscribe(request.service_name, request.group_name, clusters)
instance_list = []
if service_info is not None and len(service_info.hosts) > 0:
instance_list = service_info.hosts
        # when healthy_only is set, filter by health: True returns only healthy instances, False only unhealthy ones; None (the default) returns all
if request.healthy_only is not None:
instance_list = list(
filter(lambda host: host.healthy == request.healthy_only and host.enabled and host.weight > 0,
instance_list))
return instance_list
async def subscribe(self, request: SubscribeServiceParam) -> None:
if not request.service_name:
raise NacosException(INVALID_PARAM, "service_name can not be empty")
if not request.group_name:
request.group_name = Constants.DEFAULT_GROUP
clusters = ",".join(request.clusters)
await self.service_info_holder.register_callback(get_group_name(request.service_name, request.group_name),
clusters, request.subscribe_callback)
await self.grpc_client_proxy.subscribe(request.service_name, request.group_name, clusters)
async def unsubscribe(self, request: SubscribeServiceParam) -> None:
if not request.service_name:
raise NacosException(INVALID_PARAM, "service_name can not be empty")
if not request.group_name:
request.group_name = Constants.DEFAULT_GROUP
clusters = ",".join(request.clusters)
await self.service_info_holder.deregister_callback(get_group_name(request.service_name, request.group_name),
clusters, request.subscribe_callback)
await self.grpc_client_proxy.unsubscribe(request.service_name, request.group_name, clusters)
async def server_health(self) -> bool:
return self.grpc_client_proxy.server_health()
async def shutdown(self) -> None:
await self.grpc_client_proxy.close_client()
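
Not part of the diff: a hedged end-to-end sketch of NacosNamingService. Assumptions not shown in this diff: the package is installed, a Nacos server is reachable at 127.0.0.1:8848, ClientConfig accepts a server_addresses field, and the module path of NacosNamingService.

import asyncio

from v2.nacos.common.client_config import ClientConfig
from v2.nacos.naming.model.naming_param import RegisterInstanceParam, ListInstanceParam
from v2.nacos.naming.nacos_naming_service import NacosNamingService  # module path assumed

async def main():
    cfg = ClientConfig(server_addresses="127.0.0.1:8848")  # field name assumed
    naming = await NacosNamingService.create_naming_service(cfg)
    ok = await naming.register_instance(
        RegisterInstanceParam(ip="10.0.0.1", port=8080, service_name="demo-service"))
    print("registered:", ok)
    instances = await naming.list_instances(
        ListInstanceParam(service_name="demo-service", healthy_only=True))
    print("healthy instances:", instances)
    await naming.shutdown()

asyncio.run(main())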

@ -0,0 +1,193 @@
import asyncio
import base64
import hashlib
import hmac
import logging
import uuid
from typing import Optional, List
from v2.nacos.common.client_config import ClientConfig
from v2.nacos.common.constants import Constants
from v2.nacos.common.nacos_exception import NacosException, SERVER_ERROR
from v2.nacos.naming.cache.service_info_cache import ServiceInfoCache
from v2.nacos.naming.model.instance import Instance
from v2.nacos.naming.model.naming_param import ListServiceParam
from v2.nacos.naming.model.naming_request import InstanceRequest, NOTIFY_SUBSCRIBER_REQUEST_TYPE, \
SubscribeServiceRequest, AbstractNamingRequest, ServiceListRequest, BatchInstanceRequest
from v2.nacos.naming.model.naming_response import SubscribeServiceResponse, InstanceResponse, ServiceListResponse, \
BatchInstanceResponse
from v2.nacos.naming.model.service import Service
from v2.nacos.naming.model.service import ServiceList
from v2.nacos.naming.remote.naming_grpc_connection_event_listener import NamingGrpcConnectionEventListener
from v2.nacos.naming.remote.naming_push_request_handler import NamingPushRequestHandler
from v2.nacos.naming.util.naming_client_util import get_group_name
from v2.nacos.naming.util.naming_remote_constants import NamingRemoteConstants
from v2.nacos.transport.http_agent import HttpAgent
from v2.nacos.transport.nacos_server_connector import NacosServerConnector
from v2.nacos.transport.rpc_client import ConnectionType
from v2.nacos.transport.rpc_client_factory import RpcClientFactory
from v2.nacos.utils.common_util import get_current_time_millis, to_json_string
class NamingGRPCClientProxy:
DEFAULT_SERVER_PORT = 8848
def __init__(self,
client_config: ClientConfig,
http_client: HttpAgent,
service_info_cache: ServiceInfoCache):
self.logger = logging.getLogger(Constants.NAMING_MODULE)
self.client_config = client_config
self.uuid = uuid.uuid4()
self.service_info_cache = service_info_cache
self.rpc_client = None
self.namespace_id = client_config.namespace_id
self.nacos_server_connector = NacosServerConnector(self.logger, client_config, http_client)
self.event_listener = NamingGrpcConnectionEventListener(self)
async def start(self):
await self.nacos_server_connector.init()
labels = {Constants.LABEL_SOURCE: Constants.LABEL_SOURCE_SDK,
Constants.LABEL_MODULE: Constants.NAMING_MODULE}
self.rpc_client = await RpcClientFactory(self.logger).create_client(str(self.uuid), ConnectionType.GRPC, labels,
self.client_config,
self.nacos_server_connector)
await self.rpc_client.register_server_request_handler(NOTIFY_SUBSCRIBER_REQUEST_TYPE,
NamingPushRequestHandler(self.logger,
self.service_info_cache))
await self.rpc_client.register_connection_listener(self.event_listener)
await self.rpc_client.start()
async def request_naming_server(self, request: AbstractNamingRequest, response_class):
try:
await self.nacos_server_connector.inject_security_info(request.get_headers())
credentials = self.client_config.credentials_provider.get_credentials()
if credentials.get_access_key_id() and credentials.get_access_key_secret():
service_name = get_group_name(request.serviceName, request.groupName)
if service_name.strip():
sign_str = str(get_current_time_millis()) + Constants.SERVICE_INFO_SPLITER + service_name
else:
sign_str = str(get_current_time_millis())
request.put_all_headers({
"ak": credentials.get_access_key_id(),
"data": sign_str,
"signature": base64.encodebytes(hmac.new(credentials.get_access_key_secret().encode(), sign_str.encode(),
digestmod=hashlib.sha1).digest()).decode().strip()
})
if credentials.get_security_token():
request.put_header("Spas-SecurityToken", credentials.get_security_token())
response = await self.rpc_client.request(request, self.client_config.grpc_config.grpc_timeout)
if response.get_result_code() != 200:
raise NacosException(response.get_error_code(), response.get_message())
if issubclass(response.__class__, response_class): # todo check and fix if anything wrong
return response
            raise NacosException(SERVER_ERROR, "Server returned an invalid response")
except NacosException as e:
self.logger.error("failed to invoke nacos naming server : " + str(e))
raise e
except Exception as e:
self.logger.error("failed to invoke nacos naming server : " + str(e))
raise NacosException(SERVER_ERROR, "Request nacos naming server failed: " + str(e))
async def register_instance(self, service_name: str, group_name: str, instance: Instance):
self.logger.info("register instance service_name:%s, group_name:%s, namespace:%s, instance:%s" % (
service_name, group_name, self.namespace_id, str(instance)))
await self.event_listener.cache_instance_for_redo(service_name, group_name, instance)
request = InstanceRequest(
namespace=self.namespace_id,
serviceName=service_name,
groupName=group_name,
instance=instance,
type=NamingRemoteConstants.REGISTER_INSTANCE)
response = await self.request_naming_server(request, InstanceResponse)
return response.is_success()
async def batch_register_instance(self, service_name: str, group_name: str, instances: List[Instance]) -> bool:
self.logger.info("batch register instance service_name:%s, group_name:%s, namespace:%s,instances:%s" % (
service_name, group_name, self.namespace_id, str(instances)))
await self.event_listener.cache_instances_for_redo(service_name, group_name, instances)
request = BatchInstanceRequest(
namespace=self.namespace_id,
serviceName=service_name,
groupName=group_name,
instances=instances,
type=NamingRemoteConstants.BATCH_REGISTER_INSTANCE)
response = await self.request_naming_server(request, BatchInstanceResponse)
return response.is_success()
async def deregister_instance(self, service_name: str, group_name: str, instance: Instance) -> bool:
self.logger.info("deregister instance ip:%s, port:%s, service_name:%s, group_name:%s, namespace:%s" % (
instance.ip, instance.port, service_name, group_name, self.namespace_id))
request = InstanceRequest(
namespace=self.namespace_id,
serviceName=service_name,
groupName=group_name,
instance=instance,
type=NamingRemoteConstants.DE_REGISTER_INSTANCE)
response = await self.request_naming_server(request, InstanceResponse)
await self.event_listener.remove_instance_for_redo(service_name, group_name)
return response.is_success()
async def list_services(self, param: ListServiceParam) -> ServiceList:
self.logger.info("listService group_name:%s, namespace:%s", param.group_name, param.namespace_id)
request = ServiceListRequest(
namespace=param.namespace_id,
groupName=param.group_name,
serviceName='',
pageNo=param.page_no,
pageSize=param.page_size)
response = await self.request_naming_server(request, ServiceListResponse)
return ServiceList(
count=response.count,
services=response.serviceNames
)
async def subscribe(self, service_name: str, group_name: str, clusters: str) -> Optional[Service]:
self.logger.info("subscribe service_name:%s, group_name:%s, clusters:%s, namespace:%s",
service_name, group_name, clusters, self.namespace_id)
await self.event_listener.cache_subscribe_for_redo(get_group_name(service_name, group_name), clusters)
request = SubscribeServiceRequest(
namespace=self.namespace_id,
groupName=group_name,
serviceName=service_name,
clusters=clusters,
subscribe=True)
request.put_header("app", self.client_config.app_name)
response = await self.request_naming_server(request, SubscribeServiceResponse)
if not response.is_success():
self.logger.error(
"failed to subscribe service_name:%s, group_name:%s, clusters:%s, namespace:%s, response:%s",
service_name, group_name, clusters, self.namespace_id, response)
return None
return response.serviceInfo
async def unsubscribe(self, service_name: str, group_name: str, clusters: str):
self.logger.info("unSubscribe service_name:%s, group_name:%s, clusters:%s, namespace:%s",
service_name, group_name, clusters, self.namespace_id)
await self.event_listener.remove_subscriber_for_redo(get_group_name(service_name, group_name), clusters)
_ = await self.request_naming_server(SubscribeServiceRequest(
namespace=self.namespace_id,
groupName=group_name,
serviceName=service_name,
clusters=clusters,
subscribe=False
), SubscribeServiceResponse)
return
async def close_client(self):
self.logger.info("close Nacos python naming grpc client...")
await self.rpc_client.shutdown()
def server_health(self):
return self.rpc_client.is_running()
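
Not part of the diff: a standalone sketch of the access-key signature built in request_naming_server. The signed data is the current millisecond timestamp, optionally joined to the grouped service name, and the HMAC-SHA1 digest is base64-encoded; "@@" is only an assumed stand-in for Constants.SERVICE_INFO_SPLITER.

import base64
import hashlib
import hmac
import time

SPLITER = "@@"  # assumed stand-in for Constants.SERVICE_INFO_SPLITER

def demo_sign(grouped_service_name: str, access_key_secret: str) -> dict:
    sign_str = str(int(time.time() * 1000))
    if grouped_service_name.strip():
        sign_str = sign_str + SPLITER + grouped_service_name
    signature = base64.encodebytes(
        hmac.new(access_key_secret.encode(), sign_str.encode(), digestmod=hashlib.sha1).digest()
    ).decode().strip()
    return {"ak": "<access_key_id>", "data": sign_str, "signature": signature}

print(demo_sign("DEFAULT_GROUP" + SPLITER + "demo-service", "my-secret"))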

@ -0,0 +1,71 @@
import asyncio
from typing import List
from v2.nacos.naming.model.instance import Instance
from v2.nacos.naming.model.service import Service
from v2.nacos.naming.util.naming_client_util import get_group_name, get_service_cache_key
from v2.nacos.transport.connection_event_listener import ConnectionEventListener
class NamingGrpcConnectionEventListener(ConnectionEventListener):
def __init__(self, client_proxy):
self.logger = client_proxy.logger
self.client_proxy = client_proxy
self.registered_instance_cached = {}
self.subscribes = {}
self.lock = asyncio.Lock()
async def on_connected(self) -> None:
await self.__redo_subscribe()
await self.__redo_register_each_service()
async def on_disconnect(self) -> None:
self.logger.info("grpc connection disconnected")
async def __redo_subscribe(self) -> None:
for service_key in self.subscribes.keys():
try:
service = Service.from_key(service_key)
service_info = await self.client_proxy.subscribe(service.name, service.groupName, service.clusters)
except Exception as e:
self.logger.warning("failed to redo subscribe service %s, caused by: %s", service_key, e)
continue
await self.client_proxy.service_info_cache.process_service(service_info)
async def __redo_register_each_service(self) -> None:
for key, instanceVal in self.registered_instance_cached.items():
info = Service.from_key(key)
try:
if isinstance(instanceVal, Instance):
await self.client_proxy.register_instance(info.name, info.groupName, instanceVal)
elif isinstance(instanceVal, list) and all(isinstance(x, Instance) for x in instanceVal):
                    await self.client_proxy.batch_register_instance(info.name, info.groupName, instanceVal)
except Exception as e:
self.logger.info("redo register service %s@@%s failed: %s"
% (info.groupName, info.name, e))
async def cache_instance_for_redo(self, service_name: str, group_name: str, instance: Instance) -> None:
key = get_group_name(service_name, group_name)
async with self.lock:
self.registered_instance_cached[key] = instance
async def cache_instances_for_redo(self, service_name: str, group_name: str, instances: List[Instance]) -> None:
key = get_group_name(service_name, group_name)
async with self.lock:
self.registered_instance_cached[key] = instances
async def remove_instance_for_redo(self, service_name: str, group_name: str) -> None:
key = get_group_name(service_name, group_name)
async with self.lock:
self.registered_instance_cached.pop(key)
async def cache_subscribe_for_redo(self, full_service_name: str, cluster: str) -> None:
cache_key = get_service_cache_key(full_service_name, cluster)
async with self.lock:
if cache_key not in self.subscribes:
self.subscribes[cache_key] = None
async def remove_subscriber_for_redo(self, full_service_name: str, cluster: str) -> None:
cache_key = get_service_cache_key(full_service_name, cluster)
async with self.lock:
self.subscribes.pop(cache_key)

@ -0,0 +1,27 @@
from typing import Optional
from v2.nacos.naming.cache.service_info_cache import ServiceInfoCache
from v2.nacos.naming.model.naming_request import NotifySubscriberRequest
from v2.nacos.naming.model.naming_response import NotifySubscriberResponse
from v2.nacos.transport.model.rpc_request import Request
from v2.nacos.transport.model.rpc_response import Response
from v2.nacos.transport.server_request_handler import IServerRequestHandler
class NamingPushRequestHandler(IServerRequestHandler):
def name(self) -> str:
return "NamingPushRequestHandler"
def __init__(self, logger, service_info_cache: ServiceInfoCache):
self.logger = logger
self.service_info_cache = service_info_cache
async def request_reply(self, request: Request) -> Optional[Response]:
if not isinstance(request, NotifySubscriberRequest):
return None
self.logger.info("received naming push service info: %s,ackId:%s", str(request.serviceInfo),
request.requestId)
await self.service_info_cache.process_service(request.serviceInfo)
return NotifySubscriberResponse()

@ -0,0 +1,11 @@
from v2.nacos.common.constants import Constants
def get_group_name(service_name, group_name):
return f"{group_name}{Constants.SERVICE_INFO_SPLITER}{service_name}"
def get_service_cache_key(service_name, clusters):
if not clusters:
return service_name
return f"{service_name}{Constants.SERVICE_INFO_SPLITER}{clusters}"

@ -0,0 +1,18 @@
class NamingRemoteConstants:
REGISTER_INSTANCE = "registerInstance"
BATCH_REGISTER_INSTANCE = "batchRegisterInstance"
DE_REGISTER_INSTANCE = "deregisterInstance"
QUERY_SERVICE = "queryService"
SUBSCRIBE_SERVICE = "subscribeService"
NOTIFY_SUBSCRIBER = "notifySubscriber"
LIST_SERVICE = "listService"
FORWARD_INSTANCE = "forwardInstance"
FORWARD_HEART_BEAT = "forwardHeartBeat"

@ -0,0 +1,54 @@
/*
* Copyright 1999-2020 Alibaba Group Holding Ltd.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
syntax = "proto3";
import "google/protobuf/any.proto";
import "google/protobuf/timestamp.proto";
option java_multiple_files = true;
option java_package = "com.alibaba.nacos.api.grpc.auto";
message Metadata {
string type = 3;
string clientIp = 8;
map<string, string> headers = 7;
}
message Payload {
Metadata metadata = 2;
google.protobuf.Any body = 3;
}
service RequestStream {
// build a streamRequest
rpc requestStream (Payload) returns (stream Payload) {
}
}
service Request {
// Sends a commonRequest
rpc request (Payload) returns (Payload) {
}
}
service BiRequestStream {
// Sends a commonRequest
rpc requestBiStream (stream Payload) returns (stream Payload) {
}
}
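
Not part of the diff: a hedged sketch of how the generated modules that follow can be reproduced from this .proto with grpcio-tools; the output paths are illustrative and the grpcio-tools package is assumed to be installed.

import pkg_resources
from grpc_tools import protoc

well_known_protos = pkg_resources.resource_filename("grpc_tools", "_proto")
protoc.main([
    "protoc",
    "-I", ".",
    "-I", well_known_protos,  # provides google/protobuf/any.proto and timestamp.proto
    "--python_out=v2/nacos/transport/grpcauto",
    "--grpc_python_out=v2/nacos/transport/grpcauto",
    "nacos_grpc_service.proto",
])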

@ -0,0 +1,47 @@
import json
import time
from v2.nacos.common.client_config import ClientConfig
from v2.nacos.common.nacos_exception import NacosException, SERVER_ERROR
from v2.nacos.transport.http_agent import HttpAgent
class AuthClient:
def __init__(self, logger, client_config: ClientConfig, get_server_list_func, http_agent: HttpAgent):
self.logger = logger
self.username = client_config.username
self.password = client_config.password
self.client_config = client_config
self.get_server_list = get_server_list_func
self.http_agent = http_agent
self.access_token = None
self.token_ttl = 0
self.last_refresh_time = 0
self.token_expired_time = None
async def get_access_token(self, force_refresh=False):
current_time = time.time()
if self.access_token and not force_refresh and self.token_expired_time > current_time:
return self.access_token
params = {
"username": self.username,
"password": self.password
}
server_list = self.get_server_list()
for server_address in server_list:
url = server_address + "/nacos/v1/auth/users/login"
resp, error = await self.http_agent.request(url, "POST", None, params, None)
if not resp or error:
self.logger.warning(f"[get-access-token] request {url} failed, error: {error}")
continue
response_data = json.loads(resp.decode("UTF-8"))
self.access_token = response_data.get('accessToken')
            self.token_ttl = response_data.get('tokenTtl', 18000)  # fall back to 18000 seconds if the server returns no TTL
            self.token_expired_time = current_time + self.token_ttl - 10  # refresh the token expiration time with a 10-second safety margin
self.logger.info(
f"[get_access_token] AccessToken: {self.access_token}, TTL: {self.token_ttl}, force_refresh: {force_refresh}")
return self.access_token
raise NacosException(SERVER_ERROR, "get access token failed")

@ -0,0 +1,34 @@
from abc import ABC, abstractmethod
from v2.nacos.transport.model.rpc_request import Request
from v2.nacos.transport.model.rpc_response import Response
from v2.nacos.transport.model.server_info import ServerInfo
class IConnection(ABC):
@abstractmethod
def request(self, request: Request, timeout_mills: int) -> Response:
pass
@abstractmethod
def close(self):
pass
class Connection(IConnection, ABC):
def __init__(self, connection_id, server_info: ServerInfo):
self.connection_id = connection_id
self.abandon = False
self.server_info = server_info
def get_connection_id(self) -> str:
return self.connection_id
def get_server_info(self) -> ServerInfo:
return self.server_info
def set_abandon(self, flag: bool):
self.abandon = flag
def is_abandon(self):
return self.abandon

@ -0,0 +1,12 @@
from abc import ABC, abstractmethod
class ConnectionEventListener(ABC):
@abstractmethod
async def on_connected(self) -> None:
pass
@abstractmethod
async def on_disconnect(self) -> None:
pass

@ -0,0 +1,166 @@
import asyncio
from typing import Optional
import grpc
import pydantic
from v2.nacos.common.client_config import ClientConfig
from v2.nacos.common.constants import Constants
from v2.nacos.common.nacos_exception import NacosException, CLIENT_DISCONNECT
from v2.nacos.transport.connection import Connection
from v2.nacos.transport.grpc_connection import GrpcConnection
from v2.nacos.transport.grpc_util import GrpcUtils
from v2.nacos.transport.grpcauto.nacos_grpc_service_pb2_grpc import BiRequestStreamStub, RequestStub
from v2.nacos.transport.model.internal_request import ConnectionSetupRequest, ServerCheckRequest
from v2.nacos.transport.model.internal_response import ServerCheckResponse
from v2.nacos.transport.model.rpc_request import Request
from v2.nacos.transport.model.server_info import ServerInfo
from v2.nacos.transport.nacos_server_connector import NacosServerConnector
from v2.nacos.transport.rpc_client import RpcClient, ConnectionType
class GrpcClient(RpcClient):
def __init__(self, logger, name: str, client_config: ClientConfig, nacos_server: NacosServerConnector):
super().__init__(logger=logger, name=name, nacos_server=nacos_server)
self.logger = logger
self.tls_config = client_config.tls_config
self.grpc_config = client_config.grpc_config
self.tenant = client_config.namespace_id
async def _create_new_managed_channel(self, server_ip, grpc_port):
options = [
('grpc.max_call_recv_msg_size', self.grpc_config.max_receive_message_length),
('grpc.keepalive_time_ms', self.grpc_config.max_keep_alive_ms),
            ('grpc.use_local_subchannel_pool', 1),  # disable the global subchannel pool
            ('grpc.so_reuseport', 0)  # disable port reuse
]
if self.tls_config and self.tls_config.enabled:
with open(self.tls_config.ca_file, 'rb') as f:
root_certificates = f.read()
with open(self.tls_config.cert_file, 'rb') as f:
cert_chain = f.read()
with open(self.tls_config.key_file, 'rb') as f:
private_key = f.read()
credentials = grpc.ssl_channel_credentials(root_certificates=root_certificates,
private_key=private_key,
certificate_chain=cert_chain)
channel = grpc.aio.secure_channel(f'{server_ip}:{grpc_port}', credentials=credentials,
options=options)
else:
channel = grpc.aio.insecure_channel(f'{server_ip}:{grpc_port}',
options=options)
try:
await asyncio.wait_for(channel.channel_ready(), self.grpc_config.grpc_timeout / 1000)
except asyncio.TimeoutError as e:
await channel.close()
raise NacosException(CLIENT_DISCONNECT, 'failed to connect nacos server') from e
else:
return channel
async def _server_check(self, server_ip, server_port, channel_stub: RequestStub):
for i in range(self.RETRY_TIMES):
try:
server_check_request = ServerCheckRequest()
response_payload = await channel_stub.request(
GrpcUtils.convert_request_to_payload(server_check_request),
timeout=self.grpc_config.grpc_timeout / 1000.0)
server_check_response = GrpcUtils.parse(response_payload)
if not server_check_response or not isinstance(server_check_response, ServerCheckResponse):
return None
if 300 <= server_check_response.get_error_code() < 400:
self.logger.error(
f"server check fail for {server_ip}:{server_port}, error code = {server_check_response.get_error_code()}")
await asyncio.sleep(1)
continue
return server_check_response
except grpc.FutureTimeoutError:
self.logger.error(f"server check timed out for {server_ip}:{server_port}")
continue
except grpc.aio.AioRpcError as e:
raise NacosException(error_code=e.code(), message=e.details())
except Exception as e:
self.logger.error(f"server check fail for {server_ip}:{server_port}, error = {e}")
if self.tls_config and self.tls_config.enabled:
self.logger.error("current client requires tls encrypted, server must support tls, please check.")
return None
async def connect_to_server(self, server_info: ServerInfo) -> Optional[Connection]:
managed_channel = await self._create_new_managed_channel(server_info.server_ip, server_info.server_port)
# Create a stub
channel_stub = RequestStub(managed_channel)
server_check_response = await self._server_check(server_info.server_ip, server_info.server_port,
channel_stub)
if not server_check_response:
await self._shunt_down_channel(managed_channel)
return None
connection_id = server_check_response.get_connection_id()
self.logger.info(
f"connect to server success,labels:{self.labels},tenant:{self.tenant},connection_id:{connection_id}")
bi_request_stream_stub = BiRequestStreamStub(managed_channel)
grpc_conn = GrpcConnection(server_info, connection_id, managed_channel,
channel_stub, bi_request_stream_stub)
connection_setup_request = ConnectionSetupRequest(clientVersion=Constants.CLIENT_VERSION, tenant=self.tenant,
labels=self.labels)
await grpc_conn.send_bi_request(GrpcUtils.convert_request_to_payload(connection_setup_request))
asyncio.create_task(self._server_request_watcher(grpc_conn))
await asyncio.sleep(0.1)
return grpc_conn
async def _handle_server_request(self, request: Request, grpc_connection: GrpcConnection):
request_type = request.get_request_type()
server_request_handler_instance = self.server_request_handler_mapping.get(request_type)
if not server_request_handler_instance:
self.logger.error("unsupported payload type:%s, grpc connection id:%s", request_type,
grpc_connection.get_connection_id())
return
response = await server_request_handler_instance.request_reply(request)
if not response:
self.logger.warning("failed to process server request,connection_id:%s,ackID:%s",
grpc_connection.get_connection_id(), request.get_request_id())
return
try:
response.set_request_id(request.requestId)
await grpc_connection.send_bi_request(GrpcUtils.convert_response_to_payload(response))
except Exception as e:
if isinstance(e, EOFError):
self.logger.error(
f"{grpc_connection.get_connection_id()} connection closed before response could be sent, ackId->{request.requestId}")
else:
self.logger.error(
f"{grpc_connection.get_connection_id()} failed to send response:{response.get_response_type()}, ackId:{request.requestId},error:{str(e)}")
async def _server_request_watcher(self, grpc_conn: GrpcConnection):
async for payload in grpc_conn.bi_stream_send():
try:
self.logger.info("receive stream server request, connection_id:%s, original info: %s"
% (grpc_conn.get_connection_id(), str(payload)))
request = GrpcUtils.parse(payload)
if request:
await self._handle_server_request(request, grpc_conn)
except Exception as e:
self.logger.error(f"[{grpc_conn.connection_id}] handle server request occur exception: {e}")
@staticmethod
async def _shunt_down_channel(channel):
if channel:
await channel.close()
def get_connection_type(self):
return ConnectionType.GRPC
def get_rpc_port_offset(self) -> int:
return 1000

@ -0,0 +1,46 @@
import asyncio
import grpc
from v2.nacos.common.nacos_exception import NacosException
from v2.nacos.transport.connection import Connection
from v2.nacos.transport.grpc_util import GrpcUtils
from v2.nacos.transport.grpcauto.nacos_grpc_service_pb2 import Payload
from v2.nacos.transport.grpcauto.nacos_grpc_service_pb2_grpc import RequestStub, BiRequestStreamStub
from v2.nacos.transport.model.rpc_request import Request
from v2.nacos.transport.model.rpc_response import Response
class GrpcConnection(Connection):
def __init__(self, server_info, connection_id, channel, client: RequestStub, bi_stream_client: BiRequestStreamStub):
super().__init__(connection_id=connection_id, server_info=server_info)
self.channel = channel
self.client = client
self.bi_stream_client = bi_stream_client
self.queue = asyncio.Queue()
async def request(self, request: Request, timeout_millis) -> Response:
payload = GrpcUtils.convert_request_to_payload(request)
response_payload = await self.client.request(payload, timeout=timeout_millis / 1000.0)
return GrpcUtils.parse(response_payload)
def set_channel(self, channel: grpc.Channel) -> None:
self.channel = channel
async def close(self) -> None:
if self.channel:
await self.channel.close()
async def send_bi_request(self, payload: Payload) -> None:
await self.queue.put(payload)
async def request_payloads(self):
while True:
try:
payload = await self.queue.get()
yield payload
except NacosException:
pass
def bi_stream_send(self):
return self.bi_stream_client.requestBiStream(self.request_payloads())

@ -0,0 +1,84 @@
import json
from google.protobuf.any_pb2 import Any
from v2.nacos.common.nacos_exception import NacosException, SERVER_ERROR
from v2.nacos.config.model.config_request import ConfigChangeNotifyRequest
from v2.nacos.config.model.config_response import ConfigPublishResponse, ConfigQueryResponse, \
ConfigChangeBatchListenResponse, ConfigRemoveResponse
from v2.nacos.naming.model.naming_request import NotifySubscriberRequest
from v2.nacos.naming.model.naming_response import InstanceResponse, SubscribeServiceResponse, BatchInstanceResponse, \
ServiceListResponse
from v2.nacos.transport.grpcauto.nacos_grpc_service_pb2 import Payload, Metadata
from v2.nacos.transport.model import ServerCheckResponse
from v2.nacos.transport.model.internal_request import ClientDetectionRequest
from v2.nacos.transport.model.internal_response import ErrorResponse, HealthCheckResponse
from v2.nacos.transport.model.rpc_request import Request
from v2.nacos.transport.model.rpc_response import Response
from v2.nacos.utils.net_util import NetUtils
class GrpcUtils:
SERVICE_INFO_KEY = "serviceInfo"
remote_type = {
"ServerCheckResponse": ServerCheckResponse,
"NotifySubscriberRequest": NotifySubscriberRequest,
"ErrorResponse": ErrorResponse,
"InstanceResponse": InstanceResponse,
"ServiceListResponse": ServiceListResponse,
"BatchInstanceResponse": BatchInstanceResponse,
"ClientDetectionRequest": ClientDetectionRequest,
"HealthCheckResponse": HealthCheckResponse,
"SubscribeServiceResponse": SubscribeServiceResponse,
"ConfigPublishResponse": ConfigPublishResponse,
"ConfigQueryResponse": ConfigQueryResponse,
"ConfigChangeNotifyRequest": ConfigChangeNotifyRequest,
"ConfigChangeBatchListenResponse": ConfigChangeBatchListenResponse,
"ConfigRemoveResponse": ConfigRemoveResponse
}
@staticmethod
def convert_request_to_payload(request: Request):
payload_metadata = Metadata(type=request.get_request_type(), clientIp=NetUtils.get_local_ip(),
headers=request.get_headers())
payload_body_bytes = json.dumps(request, default=GrpcUtils.to_json).encode('utf-8')
payload_body = Any(value=payload_body_bytes)
payload = Payload(metadata=payload_metadata, body=payload_body)
return payload
@staticmethod
def convert_response_to_payload(response: Response):
metadata = Metadata(type=response.get_response_type(), clientIp=NetUtils.get_local_ip())
payload_body_bytes = json.dumps(response, default=GrpcUtils.to_json).encode('utf-8')
payload_body = Any(value=payload_body_bytes)
payload = Payload(metadata=metadata, body=payload_body)
return payload
@staticmethod
def parse(payload: Payload):
metadata_type = payload.metadata.type
if metadata_type and metadata_type in GrpcUtils.remote_type.keys():
json_dict = json.loads(payload.body.value.decode('utf-8'))
response_class = GrpcUtils.remote_type[metadata_type]
obj = response_class.model_validate(json_dict)
if isinstance(obj, Request):
obj.put_all_headers(payload.metadata.headers)
return obj
else:
raise NacosException(SERVER_ERROR, "unknown payload type:" + payload.metadata.type)
@staticmethod
def to_json(obj):
d = {}
d.update(obj.__dict__)
return d
def parse_payload_to_response(payload):
body = payload.body
response = Response(**body)
return response

@ -0,0 +1,51 @@
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# NO CHECKED-IN PROTOBUF GENCODE
# source: nacos_grpc_service.proto
# Protobuf Python Version: 5.27.2
"""Generated protocol buffer code."""
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import runtime_version as _runtime_version
from google.protobuf import symbol_database as _symbol_database
from google.protobuf.internal import builder as _builder
_runtime_version.ValidateProtobufRuntimeVersion(
_runtime_version.Domain.PUBLIC,
5,
27,
2,
'',
'nacos_grpc_service.proto'
)
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
from google.protobuf import any_pb2 as google_dot_protobuf_dot_any__pb2
from google.protobuf import timestamp_pb2 as google_dot_protobuf_dot_timestamp__pb2
DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\x18nacos_grpc_service.proto\x1a\x19google/protobuf/any.proto\x1a\x1fgoogle/protobuf/timestamp.proto\"\x83\x01\n\x08Metadata\x12\x0c\n\x04type\x18\x03 \x01(\t\x12\x10\n\x08\x63lientIp\x18\x08 \x01(\t\x12\'\n\x07headers\x18\x07 \x03(\x0b\x32\x16.Metadata.HeadersEntry\x1a.\n\x0cHeadersEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12\r\n\x05value\x18\x02 \x01(\t:\x02\x38\x01\"J\n\x07Payload\x12\x1b\n\x08metadata\x18\x02 \x01(\x0b\x32\t.Metadata\x12\"\n\x04\x62ody\x18\x03 \x01(\x0b\x32\x14.google.protobuf.Any28\n\rRequestStream\x12\'\n\rrequestStream\x12\x08.Payload\x1a\x08.Payload\"\x00\x30\x01\x32*\n\x07Request\x12\x1f\n\x07request\x12\x08.Payload\x1a\x08.Payload\"\x00\x32>\n\x0f\x42iRequestStream\x12+\n\x0frequestBiStream\x12\x08.Payload\x1a\x08.Payload\"\x00(\x01\x30\x01\x42#\n\x1f\x63om.alibaba.nacos.api.grpc.autoP\x01\x62\x06proto3')
_globals = globals()
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, _globals)
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'nacos_grpc_service_pb2', _globals)
if not _descriptor._USE_C_DESCRIPTORS:
_globals['DESCRIPTOR']._loaded_options = None
_globals['DESCRIPTOR']._serialized_options = b'\n\037com.alibaba.nacos.api.grpc.autoP\001'
_globals['_METADATA_HEADERSENTRY']._loaded_options = None
_globals['_METADATA_HEADERSENTRY']._serialized_options = b'8\001'
_globals['_METADATA']._serialized_start=89
_globals['_METADATA']._serialized_end=220
_globals['_METADATA_HEADERSENTRY']._serialized_start=174
_globals['_METADATA_HEADERSENTRY']._serialized_end=220
_globals['_PAYLOAD']._serialized_start=222
_globals['_PAYLOAD']._serialized_end=296
_globals['_REQUESTSTREAM']._serialized_start=298
_globals['_REQUESTSTREAM']._serialized_end=354
_globals['_REQUEST']._serialized_start=356
_globals['_REQUEST']._serialized_end=398
_globals['_BIREQUESTSTREAM']._serialized_start=400
_globals['_BIREQUESTSTREAM']._serialized_end=462
# @@protoc_insertion_point(module_scope)

@ -0,0 +1,244 @@
# Generated by the gRPC Python protocol compiler plugin. DO NOT EDIT!
"""Client and server classes corresponding to protobuf-defined services."""
import grpc
import v2.nacos.transport.grpcauto.nacos_grpc_service_pb2 as nacos__grpc__service__pb2
GRPC_GENERATED_VERSION = '1.66.1'
GRPC_VERSION = grpc.__version__
_version_not_supported = False
try:
from grpc._utilities import first_version_is_lower
_version_not_supported = first_version_is_lower(GRPC_VERSION, GRPC_GENERATED_VERSION)
except ImportError:
_version_not_supported = True
if _version_not_supported:
raise RuntimeError(
f'The grpc package installed is at version {GRPC_VERSION},'
+ f' but the generated code in nacos_grpc_service_pb2_grpc.py depends on'
+ f' grpcio>={GRPC_GENERATED_VERSION}.'
+ f' Please upgrade your grpc module to grpcio>={GRPC_GENERATED_VERSION}'
+ f' or downgrade your generated code using grpcio-tools<={GRPC_VERSION}.'
)
class RequestStreamStub(object):
"""Missing associated documentation comment in .proto file."""
def __init__(self, channel):
"""Constructor.
Args:
channel: A grpc.Channel.
"""
self.requestStream = channel.unary_stream(
'/RequestStream/requestStream',
request_serializer=nacos__grpc__service__pb2.Payload.SerializeToString,
response_deserializer=nacos__grpc__service__pb2.Payload.FromString,
_registered_method=True)
class RequestStreamServicer(object):
"""Missing associated documentation comment in .proto file."""
def requestStream(self, request, context):
"""build a streamRequest
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def add_RequestStreamServicer_to_server(servicer, server):
rpc_method_handlers = {
'requestStream': grpc.unary_stream_rpc_method_handler(
servicer.requestStream,
request_deserializer=nacos__grpc__service__pb2.Payload.FromString,
response_serializer=nacos__grpc__service__pb2.Payload.SerializeToString,
),
}
generic_handler = grpc.method_handlers_generic_handler(
'RequestStream', rpc_method_handlers)
server.add_generic_rpc_handlers((generic_handler,))
server.add_registered_method_handlers('RequestStream', rpc_method_handlers)
# This class is part of an EXPERIMENTAL API.
class RequestStream(object):
"""Missing associated documentation comment in .proto file."""
@staticmethod
def requestStream(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_stream(
request,
target,
'/RequestStream/requestStream',
nacos__grpc__service__pb2.Payload.SerializeToString,
nacos__grpc__service__pb2.Payload.FromString,
options,
channel_credentials,
insecure,
call_credentials,
compression,
wait_for_ready,
timeout,
metadata,
_registered_method=True)
class RequestStub(object):
"""Missing associated documentation comment in .proto file."""
def __init__(self, channel):
"""Constructor.
Args:
channel: A grpc.Channel.
"""
self.request = channel.unary_unary(
'/Request/request',
request_serializer=nacos__grpc__service__pb2.Payload.SerializeToString,
response_deserializer=nacos__grpc__service__pb2.Payload.FromString,
_registered_method=True)
class RequestServicer(object):
"""Missing associated documentation comment in .proto file."""
def request(self, request, context):
"""Sends a commonRequest
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def add_RequestServicer_to_server(servicer, server):
rpc_method_handlers = {
'request': grpc.unary_unary_rpc_method_handler(
servicer.request,
request_deserializer=nacos__grpc__service__pb2.Payload.FromString,
response_serializer=nacos__grpc__service__pb2.Payload.SerializeToString,
),
}
generic_handler = grpc.method_handlers_generic_handler(
'Request', rpc_method_handlers)
server.add_generic_rpc_handlers((generic_handler,))
server.add_registered_method_handlers('Request', rpc_method_handlers)
# This class is part of an EXPERIMENTAL API.
class Request(object):
"""Missing associated documentation comment in .proto file."""
@staticmethod
def request(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(
request,
target,
'/Request/request',
nacos__grpc__service__pb2.Payload.SerializeToString,
nacos__grpc__service__pb2.Payload.FromString,
options,
channel_credentials,
insecure,
call_credentials,
compression,
wait_for_ready,
timeout,
metadata,
_registered_method=True)
class BiRequestStreamStub(object):
"""Missing associated documentation comment in .proto file."""
def __init__(self, channel):
"""Constructor.
Args:
channel: A grpc.Channel.
"""
self.requestBiStream = channel.stream_stream(
'/BiRequestStream/requestBiStream',
request_serializer=nacos__grpc__service__pb2.Payload.SerializeToString,
response_deserializer=nacos__grpc__service__pb2.Payload.FromString,
_registered_method=True)
class BiRequestStreamServicer(object):
"""Missing associated documentation comment in .proto file."""
def requestBiStream(self, request_iterator, context):
"""Sends a commonRequest
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def add_BiRequestStreamServicer_to_server(servicer, server):
rpc_method_handlers = {
'requestBiStream': grpc.stream_stream_rpc_method_handler(
servicer.requestBiStream,
request_deserializer=nacos__grpc__service__pb2.Payload.FromString,
response_serializer=nacos__grpc__service__pb2.Payload.SerializeToString,
),
}
generic_handler = grpc.method_handlers_generic_handler(
'BiRequestStream', rpc_method_handlers)
server.add_generic_rpc_handlers((generic_handler,))
server.add_registered_method_handlers('BiRequestStream', rpc_method_handlers)
# This class is part of an EXPERIMENTAL API.
class BiRequestStream(object):
"""Missing associated documentation comment in .proto file."""
@staticmethod
def requestBiStream(request_iterator,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.stream_stream(
request_iterator,
target,
'/BiRequestStream/requestBiStream',
nacos__grpc__service__pb2.Payload.SerializeToString,
nacos__grpc__service__pb2.Payload.FromString,
options,
channel_credentials,
insecure,
call_credentials,
compression,
wait_for_ready,
timeout,
metadata,
_registered_method=True)

View File

@ -0,0 +1,58 @@
import ssl
from http import HTTPStatus
from urllib.parse import urlencode
import aiohttp
from v2.nacos.common.client_config import TLSConfig
HTTP_STATUS_SUCCESS = 200
class HttpAgent:
def __init__(self, logger, tls_config: TLSConfig, default_timeout):
self.logger = logger
self.tls_config = tls_config
self.default_timeout = default_timeout
self.ssl_context = None
if tls_config and tls_config.enabled:
ctx = ssl.create_default_context(
cafile=tls_config.ca_file) if tls_config.ca_file else ssl.create_default_context()
if self.tls_config.cert_file and self.tls_config.key_file:
ctx.load_cert_chain(certfile=self.tls_config.cert_file, keyfile=self.tls_config.key_file)
self.ssl_context = ctx
async def request(self, url: str, method: str, headers: dict = None, params: dict = None, data: dict = None):
if not headers:
headers = {}
if params:
url += '?' + urlencode(params)
data = urlencode(data).encode() if data else None
self.logger.debug(
f"[http-request] url: {url}, headers: {headers}, params: {params}, data: {data}, timeout: {self.default_timeout}")
try:
if not url.startswith("http"):
url = f"http://{url}"
connector = aiohttp.TCPConnector(ssl=self.ssl_context) if self.ssl_context else aiohttp.TCPConnector()
async with aiohttp.ClientSession(timeout=aiohttp.ClientTimeout(total=self.default_timeout),
connector=connector) as session:
async with session.request(method, url, headers=headers, data=data) as response:
if response.status == HTTPStatus.OK:
return await response.read(), None
else:
error_msg = f"HTTP error: {response.status} - {response.reason}"
self.logger.debug(f"[http-request] {error_msg}")
return None, error_msg
except aiohttp.ClientError as e:
self.logger.warning(f"[http-request] client error: {e}")
return None, e
except Exception as e:
self.logger.warning(f"[http-request] unexpected error: {e}")
return None, e
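
A minimal usage sketch of HttpAgent.request as defined above; the target address and path are placeholders, TLS is disabled by passing None for tls_config, and the return value is the (body, error) tuple produced by the method.

import asyncio
import logging

# illustrative only: the server address and path are placeholders, not part of this change
async def main():
    agent = HttpAgent(logging.getLogger("http-agent-demo"), None, default_timeout=10)
    body, err = await agent.request("127.0.0.1:8848/nacos/v1/console/health/readiness", "GET")
    print(err if err else body.decode("utf-8"))

asyncio.run(main())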

View File

@ -0,0 +1 @@
from v2.nacos.transport.model.internal_response import ServerCheckResponse

View File

@ -0,0 +1,47 @@
from abc import ABC
from typing import Optional
from v2.nacos.transport.model.rpc_request import Request
CONNECTION_RESET_REQUEST_TYPE = "ConnectResetRequest"
CLIENT_DETECTION_REQUEST_TYPE = "ClientDetectionRequest"
class InternalRequest(Request, ABC):
def get_module(self) -> str:
return 'internal'
class HealthCheckRequest(InternalRequest):
def get_request_type(self):
return "HealthCheckRequest"
class ConnectResetRequest(InternalRequest):
serverIp: Optional[str] = None
serverPort: Optional[str] = None
def get_request_type(self) -> str:
return CONNECTION_RESET_REQUEST_TYPE
class ClientDetectionRequest(InternalRequest):
def get_request_type(self) -> str:
return CLIENT_DETECTION_REQUEST_TYPE
class ServerCheckRequest(InternalRequest):
def get_request_type(self):
return "ServerCheckRequest"
class ConnectionSetupRequest(InternalRequest):
clientVersion: Optional[str] = ''
tenant: Optional[str] = ''
labels: dict = {}
def get_request_type(self):
return "ConnectionSetupRequest"

View File

@ -0,0 +1,42 @@
from typing import Optional
from v2.nacos.transport.model.rpc_response import Response
class NotifySubscriberResponse(Response):
def get_response_type(self) -> str:
return "NotifySubscriberResponse"
class ConnectResetResponse(Response):
def get_response_type(self) -> str:
return "ConnectResetResponse"
class ClientDetectionResponse(Response):
def get_response_type(self) -> str:
return "ClientDetectionResponse"
class ServerCheckResponse(Response):
connectionId: Optional[str] = ''
def get_response_type(self) -> str:
return "ServerCheckResponse"
def set_connection_id(self, connection_id: str) -> None:
self.connectionId = connection_id
def get_connection_id(self) -> str:
return self.connectionId
class HealthCheckResponse(Response):
def get_response_type(self):
return "HealthCheckResponse"
class ErrorResponse(Response):
def get_response_type(self):
return "ErrorResponse"

View File

@ -0,0 +1,41 @@
from abc import ABC, abstractmethod
from pydantic import BaseModel
class Request(BaseModel, ABC):
headers: dict = {}
requestId: str = ''
module: str = ''
def put_all_headers(self, headers: dict):
if not headers:
return
self.headers.update(headers)
def put_header(self, key: str, value: str) -> None:
self.headers[key] = value
def clear_headers(self):
self.headers.clear()
def get_header(self, key: str, default_value=None) -> str:
return self.headers.get(key) or default_value
def get_headers(self) -> dict:
return self.headers
def get_request_id(self) -> str:
return self.requestId
@abstractmethod
def get_module(self) -> str:
pass
@abstractmethod
def get_request_type(self) -> str:
pass
def __str__(self):
headers_str = str(self.headers) if self.headers else "None"
return self.__class__.__name__ + "{headers=" + headers_str + ", requestId='" + self.requestId + "'}"

View File

@ -0,0 +1,40 @@
from abc import ABC, abstractmethod
from pydantic import BaseModel
class Response(BaseModel, ABC):
resultCode: int = 200
errorCode: int = 0
message: str = ''
requestId: str = ''
@classmethod
def convert(cls, obj: object):
new_obj = cls()
for key, value in obj.__dict__.items():
new_obj.__dict__[key] = value
return new_obj
def set_request_id(self, request_id: str):
self.requestId = request_id
def is_success(self) -> bool:
return self.errorCode == 0
def get_error_code(self) -> int:
return self.errorCode
def get_result_code(self) -> int:
return self.resultCode
def get_message(self) -> str:
return self.message
def __str__(self):
return "Response{resultCode=" + str(self.resultCode) + ", errorCode=" + str(self.errorCode) + ", message='" \
+ self.message + "'" + ", requestId='" + self.requestId + "'}"
@abstractmethod
def get_response_type(self) -> str:
pass

View File

@ -0,0 +1,25 @@
from v2.nacos.common.constants import Constants
class ServerInfo:
def __init__(self, server_ip: str, server_port: int):
self.server_ip = server_ip
self.server_port = server_port
def get_address(self):
return self.server_ip + Constants.COLON + str(self.server_port)
def get_server_ip(self):
return self.server_ip
def set_server_ip(self, server_ip):
self.server_ip = server_ip
def get_server_port(self):
return self.server_port
def set_server_port(self, server_port):
self.server_port = server_port
def __str__(self):
return "{serverIp='" + str(self.server_ip) + "', server main port=" + str(self.server_port) + "}"

View File

@ -0,0 +1,99 @@
import asyncio
from random import randrange
from typing import List, Optional
from v2.nacos.common.client_config import ClientConfig
from v2.nacos.common.constants import Constants
from v2.nacos.common.nacos_exception import NacosException, INVALID_PARAM, INVALID_SERVER_STATUS
from v2.nacos.transport.auth_client import AuthClient
from v2.nacos.transport.http_agent import HttpAgent
class NacosServerConnector:
def __init__(self, logger, client_config: ClientConfig, http_agent: HttpAgent):
self.logger = logger
if len(client_config.server_list) == 0 and not client_config.endpoint:
raise NacosException(INVALID_PARAM, "both server list and endpoint are empty")
self.client_config = client_config
self.server_list = client_config.server_list
self.current_index = 0
self.http_agent = http_agent
self.endpoint = client_config.endpoint
self.server_list_lock = asyncio.Lock()
self.refresh_server_list_interval = 30  # seconds
if len(self.server_list) != 0:
self.current_index = randrange(0, len(self.server_list))
if client_config.username and client_config.password:
self.auth_client = AuthClient(self.logger, client_config, self.get_server_list, http_agent)
asyncio.create_task(self.auth_client.get_access_token(True))
async def init(self):
if len(self.server_list) != 0:
return
await self._get_server_list_from_endpoint()
if len(self.server_list) == 0:
raise NacosException(INVALID_SERVER_STATUS, "server list is empty")
asyncio.create_task(self._refresh_server_srv_if_need())
async def _get_server_list_from_endpoint(self) -> Optional[List[str]]:
if not self.endpoint or self.endpoint.strip() == "":
return None
url = self.endpoint.strip() + self.client_config.endpoint_context_path + "/serverlist"
server_list = []
try:
response, err = await self.http_agent.request(url, "GET", self.client_config.endpoint_query_header, None,
None)
if err:
self.logger.error("[get-server-list] get server list from endpoint failed,url:%s, err:%s", url, err)
return None
else:
self.logger.debug("[get-server-list] content from endpoint,url:%s,response:%s", url, response)
if response:
for server_info in response.decode('utf-8').strip().split("\n"):
sp = server_info.strip().split(":")
if len(sp) == 1:
server_list.append((sp[0] + ":" + str(Constants.DEFAULT_PORT)))
else:
server_list.append(server_info)
if len(server_list) != 0 and set(server_list) != set(self.server_list):
async with self.server_list_lock:
old_server_list = self.server_list
self.server_list = server_list
self.current_index = randrange(0, len(self.server_list))
self.logger.info("nacos server list is updated from %s to %s",
str(old_server_list), str(server_list))
except Exception as e:
self.logger.error("[get-server-list] get server list from endpoint failed,url:%s, err:%s", url, e)
return server_list
async def _refresh_server_srv_if_need(self):
while True:
await asyncio.sleep(self.refresh_server_list_interval)
server_list = await self._get_server_list_from_endpoint()
if not server_list or len(server_list) == 0:
self.logger.warning("failed to get server list from endpoint, endpoint: " + self.endpoint)
def get_server_list(self):
return self.server_list
def get_next_server(self):
if not self.server_list:
raise NacosException(INVALID_SERVER_STATUS, 'server list is empty')
self.current_index = (self.current_index + 1) % len(self.server_list)
return self.server_list[self.current_index]
async def inject_security_info(self, headers):
if self.client_config.username and self.client_config.password:
access_token = await self.auth_client.get_access_token(False)
if access_token is not None and access_token != "":
headers[Constants.ACCESS_TOKEN] = access_token
return
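
A hedged wiring sketch for NacosServerConnector; the ClientConfig constructor arguments shown here are assumptions (that class is not part of this hunk), and the server addresses are placeholders.

import asyncio
import logging

# hypothetical wiring: ClientConfig(server_list=...) is an assumed constructor signature
async def main():
    logger = logging.getLogger("connector-demo")
    config = ClientConfig(server_list=["10.0.0.1:8848", "10.0.0.2:8848"])
    connector = NacosServerConnector(logger, config, HttpAgent(logger, None, 10))
    await connector.init()              # returns early because server_list is already populated
    print(connector.get_next_server())  # round-robins over the configured addresses

asyncio.run(main())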

View File

@ -0,0 +1,469 @@
import asyncio
import logging
from abc import ABC, abstractmethod
from enum import Enum, auto
from typing import Dict, Optional
from v2.nacos.common.constants import Constants
from v2.nacos.common.nacos_exception import NacosException, CLIENT_DISCONNECT, SERVER_ERROR, UN_REGISTER
from v2.nacos.transport.connection import Connection
from v2.nacos.transport.connection_event_listener import ConnectionEventListener
from v2.nacos.transport.model.internal_request import CONNECTION_RESET_REQUEST_TYPE, \
CLIENT_DETECTION_REQUEST_TYPE, HealthCheckRequest, ConnectResetRequest
from v2.nacos.transport.model.internal_response import ErrorResponse, ConnectResetResponse
from v2.nacos.transport.model.rpc_request import Request
from v2.nacos.transport.model.server_info import ServerInfo
from v2.nacos.transport.nacos_server_connector import NacosServerConnector
from v2.nacos.transport.server_request_handler import ClientDetectionRequestHandler, IServerRequestHandler
from v2.nacos.utils.common_util import get_current_time_millis
class ConnectionType(Enum):
GRPC = auto()
class RpcClientStatus(Enum):
INITIALIZED = auto()
STARTING = auto()
UNHEALTHY = auto()
RUNNING = auto()
SHUTDOWN = auto()
class ConnectionStatus(Enum):
DISCONNECTED = auto()
CONNECTED = auto()
class ConnectionEvent:
def __init__(self, event_type: ConnectionStatus):
self.event_type = event_type
def is_connected(self) -> bool:
return self.event_type == ConnectionStatus.CONNECTED
def is_disconnected(self) -> bool:
return self.event_type == ConnectionStatus.DISCONNECTED
def __str__(self):
if self.is_connected():
return "connected"
elif self.is_disconnected():
return "disconnected"
else:
return ""
class ReconnectContext:
def __init__(self, server_info: Optional[ServerInfo], on_request_fail: bool):
self.on_request_fail = on_request_fail
self.server_info = server_info
class RpcClient(ABC):
RETRY_TIMES = 3
DEFAULT_TIMEOUT_MILLS = 3000
def __init__(self, logger, name: str, nacos_server: NacosServerConnector):
self.logger = logger
self.name = name
self.labels: Dict[str, str] = {}
self.current_connection = None
self.rpc_client_status = RpcClientStatus.INITIALIZED
self.event_chan = asyncio.Queue()
self.reconnection_chan = asyncio.Queue()
self.connection_event_listeners = []
self.server_request_handler_mapping = {}
self.nacos_server = nacos_server
self.tenant = None
self.lock = asyncio.Lock()
self.last_active_timestamp = get_current_time_millis()
self.event_listener_task = None
self.health_check_task = None
self.reconnection_task = None
def put_all_labels(self, labels: Dict[str, str]):
self.labels.update(labels)
self.logger.info(f"rpc client init label, labels : {self.labels}")
async def event_listener(self):
try:
while not self.is_shutdown():
try:
event = await self.event_chan.get()
async with self.lock:
listeners = list(self.connection_event_listeners[:])
if len(listeners) == 0:
continue
self.logger.info("rpc client notify [%s] event to listeners", str(event))
for listener in listeners:
if event.is_connected():
try:
await listener.on_connected()
except NacosException as e:
self.logger.error("%s notify connect listener error, listener = %s,error:%s"
, self.name, listener.__class__.__name__, str(e))
if event.is_disconnected():
try:
await listener.on_disconnect()
except NacosException as e:
self.logger.error("%s notify disconnect listener error, listener = %s,error:%s"
, self.name, listener.__class__.__name__, str(e))
except Exception as e:
self.logger.error("notify connect listener,error:%s", str(e))
except asyncio.CancelledError:
self.logger.debug("event listener task cancelled")
async def health_check_periodically(self):
try:
while not self.is_shutdown():
try:
await asyncio.sleep(Constants.KEEP_ALIVE_TIME_MILLS / 1000)
if get_current_time_millis() - self.last_active_timestamp < Constants.KEEP_ALIVE_TIME_MILLS:
continue
is_healthy = await self.send_health_check()
if is_healthy:
self.last_active_timestamp = get_current_time_millis()
continue
else:
if not self.current_connection:
self.logger.error("%s server healthy check fail, currentConnection is None" % self.name)
continue
self.logger.error("%s server healthy check fail, currentConnection=%s"
, self.name, self.current_connection.get_connection_id())
if self.rpc_client_status == RpcClientStatus.SHUTDOWN:
continue
self.rpc_client_status = RpcClientStatus.UNHEALTHY
await self.reconnect(ReconnectContext(server_info=None, on_request_fail=False))
except asyncio.CancelledError:
break
except asyncio.CancelledError:
self.logger.debug("health check task cancelled")
async def reconnection_handler(self):
try:
while not self.is_shutdown():
try:
ctx = await self.reconnection_chan.get()
if ctx.server_info:
server_exist = False
for server_address in self.nacos_server.get_server_list():
known_server = self._resolve_server_info(server_address)
if ctx.server_info.server_ip == known_server.server_ip:
ctx.server_info.server_port = known_server.server_port
server_exist = True
break
if not server_exist:
self.logger.info(
f"[{self.name}] recommend server is not in server list, ignore recommend server {str(ctx.server_info)}")
ctx.server_info = None
await self.reconnect(ctx)
except asyncio.CancelledError:
break
except asyncio.CancelledError:
self.logger.debug("reconnection handler task cancelled")
async def start(self):
async with self.lock:
self.rpc_client_status = RpcClientStatus.STARTING
await self.register_server_request_handlers()
self.event_listener_task = asyncio.create_task(self.event_listener())
self.health_check_task = asyncio.create_task(self.health_check_periodically())
self.reconnection_task = asyncio.create_task(self.reconnection_handler())
connection = None
start_up_retry_times = RpcClient.RETRY_TIMES
while start_up_retry_times > 0 and connection is None:
try:
start_up_retry_times -= 1
server_info = self._next_rpc_server()
self.logger.info(
f"rpc client start to connect server, server: {server_info.get_address()}")
connection = await self.connect_to_server(server_info)
except Exception as e:
self.logger.warning(
f"rpc client failed to connect server, error: {str(e)},retry times left:{start_up_retry_times}")
if connection:
self.current_connection = connection
self.logger.info(
f"rpc client successfully connected to server:{self.current_connection.server_info.get_address()}, connection_id:{self.current_connection.get_connection_id()}")
async with self.lock:
self.rpc_client_status = RpcClientStatus.RUNNING
if connection is None:
raise NacosException(CLIENT_DISCONNECT, "failed to connect server")
@abstractmethod
async def connect_to_server(self, server_info: ServerInfo) -> Optional[Connection]:
pass
@abstractmethod
def get_connection_type(self):
pass
@abstractmethod
def get_rpc_port_offset(self):
pass
def get_current_server(self):
if self.current_connection:
return self.current_connection.server_info
async def switch_server_async(self, server_info: Optional[ServerInfo], on_request_fail: bool):
await self.reconnection_chan.put(ReconnectContext(server_info=server_info, on_request_fail=on_request_fail))
def is_wait_initiated(self):
return self.rpc_client_status == RpcClientStatus.INITIALIZED
def is_running(self):
return self.rpc_client_status == RpcClientStatus.RUNNING
def is_shutdown(self):
return self.rpc_client_status == RpcClientStatus.SHUTDOWN
async def _notify_connection_change(self, event_type: ConnectionStatus):
await self.event_chan.put(ConnectionEvent(event_type))
async def notify_server_srv_change(self):
if self.current_connection is None:
await self.switch_server_async(None, False)
return
cur_server_info = self.current_connection.get_server_info()
found = False
for server_address in self.nacos_server.get_server_list():
if self._resolve_server_info(server_address).server_ip == cur_server_info.server_ip:
found = True
break
if not found:
self.logger.info("current connected server %s is not in the latest server list, switch to a new server.",
cur_server_info.get_address())
await self.switch_server_async(None, False)
async def register_server_request_handlers(self):
await asyncio.gather(
self.register_server_request_handler(CONNECTION_RESET_REQUEST_TYPE, ConnectResetRequestHandler(self)),
self.register_server_request_handler(CLIENT_DETECTION_REQUEST_TYPE, ClientDetectionRequestHandler())
)
async def register_server_request_handler(self, request_type: str, handler: IServerRequestHandler) -> None:
if not handler or not request_type:
self.logger.error(
f"rpc client register server push request handler missing required parameters, request: {request_type}, handler: {handler.name() if handler else 'None'}")
return
self.logger.info(
f"rpc client register server push request: {request_type} handler: {handler.name()}")
async with self.lock:
self.server_request_handler_mapping[request_type] = handler
async def register_connection_listener(self, listener: ConnectionEventListener):
self.logger.info(f"rpc client register connection listener: {listener.__class__.__name__}")
async with self.lock:
self.connection_event_listeners.append(listener)
def _next_rpc_server(self) -> Optional[ServerInfo]:
server_config = self.nacos_server.get_next_server()
return self._resolve_server_info(server_config)
def _resolve_server_info(self, server_address: str) -> ServerInfo:
server_port = self.get_rpc_port_offset()
if Constants.HTTP_PREFIX in server_address:
split = server_address.rstrip("/").split(Constants.COLON)
server_ip = split[1].replace("//", "")
if len(split) > 2 and len(split[2].strip()) > 0:
server_port += int(split[2])
else:
split = server_address.rstrip("/").split(Constants.COLON)
server_ip = split[0]
if len(split) > 1 and len(split[1].strip()) > 0:
server_port += int(split[1])
server_info = ServerInfo(server_ip, server_port)
return server_info
async def request(self, request: Request, timeout_millis: int = DEFAULT_TIMEOUT_MILLS):
retry_times = 0
start = get_current_time_millis()
exception_throw = None
while retry_times < RpcClient.RETRY_TIMES and get_current_time_millis() < start + timeout_millis:
wait_reconnect = False
try:
if not self.current_connection or not self.is_running():
wait_reconnect = True
raise NacosException(CLIENT_DISCONNECT,
"client not connected,status:" + str(self.rpc_client_status))
response = await self.current_connection.request(request, timeout_millis)
if not response:
raise NacosException(SERVER_ERROR, "request failed, response is null")
if isinstance(response, ErrorResponse):
if response.get_error_code() == UN_REGISTER:
async with self.lock:
wait_reconnect = True
self.rpc_client_status = RpcClientStatus.UNHEALTHY
self.logger.error("connection is unregistered, switch server, connectionId=%s, request=%s",
self.current_connection.get_connection_id(), request.get_request_type())
await self.switch_server_async(None, False)
raise NacosException(SERVER_ERROR, response.get_message())
self.last_active_timestamp = get_current_time_millis()
return response
except NacosException as e:
if wait_reconnect:
sleep_time = min(0.1, timeout_millis / 3000)
await asyncio.sleep(sleep_time)
self.logger.error("send request fail, request=%s, retryTimes=%s, errorMessage=%s", request, retry_times,
str(e))
exception_throw = e
retry_times += 1
async with self.lock:
self.rpc_client_status = RpcClientStatus.UNHEALTHY
await self.switch_server_async(None, True)
raise exception_throw
async def shutdown(self):
async with self.lock:
self.rpc_client_status = RpcClientStatus.SHUTDOWN
# cancel all running background tasks
tasks = [self.event_listener_task, self.health_check_task, self.reconnection_task]
for task in tasks:
if task and not task.done():
task.cancel()
# wait for all tasks to finish
if any(task for task in tasks if task):
await asyncio.gather(*[task for task in tasks if task], return_exceptions=True)
# clear task references
self.event_listener_task = None
self.health_check_task = None
self.reconnection_task = None
await self._close_connection()
async def _close_connection(self):
if self.current_connection is not None:
await self.current_connection.close()
await self._notify_connection_change(ConnectionStatus.DISCONNECTED)
async def send_health_check(self):
if not self.current_connection:
return False
health_check_request = HealthCheckRequest()
try:
response = await self.current_connection.request(health_check_request, RpcClient.DEFAULT_TIMEOUT_MILLS)
if not response.is_success():
# when the client sends a request right after the server starts, the server may not be ready yet
# and returns a 3xx code telling the client to retry after a while;
# in that case just return True, because the health check will run again after 5 seconds
if response.get_error_code() >= 300 and response.get_error_code() < 400:
return True
return False
return response and response.is_success()
except Exception as e:
self.logger.error("health check failed, response is null or not success, err=%s", str(e))
return False
async def reconnect(self, reconnection_ctx: ReconnectContext):
try:
recommend_server = reconnection_ctx.server_info
if reconnection_ctx.on_request_fail and await self.send_health_check():
self.logger.info("%s server check success, currentServer is %s", self.name,
self.current_connection.server_info.get_address())
async with self.lock:
self.rpc_client_status = RpcClientStatus.RUNNING
await self._notify_connection_change(ConnectionStatus.CONNECTED)
return
switch_success = False
reconnect_times, retry_turns = 0, 0
while not self.is_shutdown() and not switch_success:
try:
server_info = recommend_server if recommend_server else self._next_rpc_server()
connection_new = await self.connect_to_server(server_info)
if connection_new:
self.logger.info("%s success to connect a server:%s, connectionId:%s", self.name,
server_info.get_address(), connection_new.get_connection_id())
if self.current_connection:
self.logger.info("%s abandon prev connection, server is:%s, connectionId:%s", self.name,
self.current_connection.server_info.get_address(),
self.current_connection.get_connection_id())
self.current_connection.set_abandon(True)
await self._close_connection()
self.current_connection = connection_new
async with self.lock:
self.rpc_client_status = RpcClientStatus.RUNNING
switch_success = True
await self._notify_connection_change(ConnectionStatus.CONNECTED)
return
if self.is_shutdown():
await self._close_connection()
last_exception = None
except NacosException as e:
logging.error(f"failed to connect server, error = {str(e)}")
last_exception = str(e)
if reconnect_times > 0 and reconnect_times % len(self.nacos_server.get_server_list()) == 0:
err_info = last_exception if last_exception else "unknown"
self.logger.warning(
"%s failed to connect to server,after trying %s times,last try server is %s,error:%s",
self.name, reconnect_times, server_info.get_address(), str(err_info))
if retry_turns < 50:
retry_turns += 1
reconnect_times += 1
if not self.is_running():
await asyncio.sleep(min((retry_turns + 1) / 10, 5))
if self.is_shutdown():
self.logger.warning("%s client is shutdown, stop reconnect to server", self.name)
except NacosException as e:
self.logger.warning("%s failed to reconnect to server, error is %s", self.name, str(e))
class ConnectResetRequestHandler(IServerRequestHandler):
def __init__(self, rpc_client: RpcClient):
self.rpc_client = rpc_client
def name(self) -> str:
return "ConnectResetRequestHandler"
async def request_reply(self, request: Request) -> Optional[ConnectResetResponse]:
if not isinstance(request, ConnectResetRequest):
return None
try:
async with self.rpc_client.lock:
if self.rpc_client.is_running():
if request.serverIp and request.serverIp.strip():
server_info = ServerInfo(request.serverIp, int(request.serverPort))
await self.rpc_client.switch_server_async(server_info=server_info,
on_request_fail=False)
else:
await self.rpc_client.switch_server_async(server_info=None,
on_request_fail=True)
return ConnectResetResponse()
except NacosException as e:
self.rpc_client.logger.error("rpc client %s failed to switch server,error:%s", self.rpc_client.name, e)
return None
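
For orientation, a short sketch of how _resolve_server_info above maps configured addresses to gRPC endpoints, assuming the concrete client's get_rpc_port_offset() returns 1000 (the GrpcClient subclass is not shown in this hunk).

# assuming get_rpc_port_offset() == 1000:
#   "http://10.0.0.1:8848" -> ServerInfo("10.0.0.1", 9848)  # scheme stripped, 8848 + offset
#   "10.0.0.1:8848"        -> ServerInfo("10.0.0.1", 9848)
#   "10.0.0.1"             -> ServerInfo("10.0.0.1", 1000)  # no port in the address, offset only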

View File

@ -0,0 +1,117 @@
import asyncio
import os
from typing import Dict
from v2.nacos.common.client_config import ClientConfig
from v2.nacos.common.nacos_exception import NacosException, CLIENT_INVALID_PARAM, INVALID_PARAM
from v2.nacos.transport.grpc_client import GrpcClient
from v2.nacos.transport.nacos_server_connector import NacosServerConnector
from v2.nacos.transport.rpc_client import RpcClient, ConnectionType
class RpcClientFactory:
def __init__(self, logger):
self.client_map = {}
self.logger = logger
self.lock = asyncio.Lock()
def get_all_client_entries(self) -> Dict[str, RpcClient]:
return self.client_map
def get_client(self, client_name: str) -> RpcClient:
return self.client_map[client_name]
async def create_client(self, client_name: str, connection_type: ConnectionType, labels: Dict[str, str],
client_config: ClientConfig, nacos_server: NacosServerConnector) -> RpcClient:
async with self.lock:
client = None
if client_name not in self.client_map.keys():
self.logger.info("create new rpc client: " + client_name)
if connection_type == ConnectionType.GRPC:
client = GrpcClient(self.logger, client_name, client_config, nacos_server)
if not client:
raise NacosException(CLIENT_INVALID_PARAM, "unsupported connection type: " + str(connection_type))
self.logger.info(f"init app conn labels from client config,{client_config.app_conn_labels}")
app_conn_labels_env = get_app_labels_from_env()
self.logger.info(f"init app conn labels from env,{app_conn_labels_env}")
app_conn_labels = merge_app_labels(client_config.app_conn_labels, app_conn_labels_env)
self.logger.info("final app conn labels: " + str(app_conn_labels))
app_conn_labels = add_prefix_for_each_key(app_conn_labels, "app_")
if len(app_conn_labels) > 0:
client.put_all_labels(app_conn_labels)
client.put_all_labels(labels)
self.client_map[client_name] = client
return client
return self.client_map[client_name]
async def shutdown_all_clients(self):
for client in self.client_map.values():
await client.shutdown()
def get_app_labels_from_env() -> dict:
config_map = {}
# nacos_config_gray_label
gray_label = os.getenv("nacos_config_gray_label")
if gray_label:
config_map["nacos_config_gray_label"] = gray_label
# nacos_app_conn_labels
conn_labels = os.getenv("nacos_app_conn_labels")
if conn_labels:
labels_map = parse_labels(conn_labels)
config_map.update(labels_map)
return config_map
def parse_labels(raw_labels: str) -> dict:
if not raw_labels.strip():
return {}
result_map = {}
labels = raw_labels.split(",")
for label in labels:
if label.strip():
kv = label.split("=")
if len(kv) == 2:
key = kv[0].strip()
value = kv[1].strip()
result_map[key] = value
else:
raise NacosException(INVALID_PARAM, f"unknown label format: {label}")
return result_map
def merge_app_labels(app_labels_appointed: dict, app_labels_env: dict) -> dict:
preferred = os.getenv("nacos_app_conn_labels_preferred", "").lower()
prefer_first = preferred != "env"
return merge_maps(app_labels_appointed, app_labels_env, prefer_first)
def merge_maps(map1: dict, map2: dict, prefer_first: bool) -> dict:
result = {} # Start with map1
if map1:
result.update(map1)
for k, v in map2.items():
if not (prefer_first and k in result):
result[k] = v
return result
def add_prefix_for_each_key(m: dict, prefix: str) -> dict:
if not m:
return m
new_map = {}
for k, v in m.items():
if k.strip():
new_key = prefix + k
new_map[new_key] = v
return new_map
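
As a quick reference, the label helpers above behave as follows; the label keys and values are illustrative.

labels = parse_labels("region=us-east, zone = a")              # {'region': 'us-east', 'zone': 'a'}
merged = merge_maps({"zone": "b"}, labels, prefer_first=True)  # explicit config wins: {'zone': 'b', 'region': 'us-east'}
print(add_prefix_for_each_key(merged, "app_"))                 # {'app_zone': 'b', 'app_region': 'us-east'}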

View File

@ -0,0 +1,30 @@
from abc import ABC, abstractmethod
from typing import Optional
from v2.nacos.transport.model.internal_request import ClientDetectionRequest
from v2.nacos.transport.model.internal_response import ClientDetectionResponse
from v2.nacos.transport.model.rpc_request import Request
from v2.nacos.transport.model.rpc_response import Response
class IServerRequestHandler(ABC):
@abstractmethod
def name(self) -> str:
pass
@abstractmethod
async def request_reply(self, request: Request) -> Optional[Response]:
pass
class ClientDetectionRequestHandler(IServerRequestHandler):
def name(self) -> str:
return "ClientDetectionRequestHandler"
async def request_reply(self, request: Request) -> Optional[Response]:
if not isinstance(request, ClientDetectionRequest):
return None
return ClientDetectionResponse()

View File

View File

@ -0,0 +1,36 @@
import base64
from Crypto.Cipher import AES
from v2.nacos.utils.encode_util import str_to_bytes, bytes_to_str, decode_base64
def pad(byte_array: bytes) -> bytes:
"""
pkcs5 padding
"""
block_size = AES.block_size
pad_len = block_size - len(byte_array) % block_size
return byte_array + (bytes([pad_len]) * pad_len)
# pkcs5 - unpadding
def unpad(byte_array: bytes) -> bytes:
return byte_array[:-ord(byte_array[-1:])]
def encrypt(message: str, key: str) -> str:
byte_array = str_to_bytes(message)
key_bytes = decode_base64(str_to_bytes(key))
aes = AES.new(key_bytes, AES.MODE_ECB)
padded = pad(byte_array)
encrypted = aes.encrypt(padded)
return base64.b64encode(encrypted).decode('utf-8')
def decrypt(encr_data: str, key: str) -> str:
byte_array = decode_base64(str_to_bytes(encr_data))
key_bytes = decode_base64(str_to_bytes(key))
aes = AES.new(key_bytes, AES.MODE_ECB)
decrypted = aes.decrypt(byte_array)
return bytes_to_str(unpad(decrypted))
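
A minimal round-trip sketch; the key argument is expected to be a base64-encoded AES key (as decode_base64 above implies), and the 16-byte key here is a throwaway value used only for illustration.

import base64

demo_key = base64.b64encode(b"0123456789abcdef").decode("utf-8")  # throwaway 128-bit key
cipher_text = encrypt("sensitive config value", demo_key)
assert decrypt(cipher_text, demo_key) == "sensitive config value"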

View File

@ -0,0 +1,44 @@
import json
import time
from pydantic import BaseModel
from v2.nacos.common.constants import Constants
def get_current_time_millis():
t = time.time()
return int(round(t * 1000))
def to_json_string(obj: BaseModel):
try:
return obj.model_dump_json()
except (TypeError, ValueError) as e:
print(f"Error serializing object to JSON: {e}")
return None
def to_json_obj(body):
try:
return json.loads(body)
except (TypeError, ValueError) as e:
print(f"Error serializing object to OBJ: {e}")
return None
def to_json(obj):
d = {}
d.update(obj.__dict__)
return d
def vars_obj(obj):
try:
return vars(obj)
except (TypeError, ValueError) as e:
print(f"Error serializing obj to dict: {e}")
return None

View File

@ -0,0 +1,9 @@
SHOW_CONTENT_SIZE = 100
def truncate_content(content: str):
if content == "":
return ""
if len(content) <= SHOW_CONTENT_SIZE:
return content
return content[:SHOW_CONTENT_SIZE] + "..."

View File

@ -0,0 +1,32 @@
import base64
def str_to_bytes(text: str, encoding: str = 'utf-8') -> bytes:
"""
Convert a string to bytes.
:param text: the string to convert
:param encoding: the string encoding, 'utf-8' by default
:return: the encoded bytes
"""
return text.encode(encoding)
def bytes_to_str(bytes_, encoding: str = 'utf-8'):
if not bytes_:
return ""
# Directly decode the UTF-8 bytes back to a string
return bytes_.decode(encoding)
def decode_base64(bytes_: bytes):
return base64.b64decode(bytes_)
def encode_base64(bytes_):
# Simply encode the input bytes to Base64
return base64.b64encode(bytes_).decode('utf-8') # Decoding to string for consistency with Go's behavior
def urlsafe_b64encode(bytes_):
return base64.urlsafe_b64encode(bytes_).decode('utf-8')

View File

@ -0,0 +1,94 @@
import os
from logging import Logger
from typing import Optional
import aiofiles
os_type = os.name
def mkdir_if_necessary(create_dir: str):
if os_type == 'nt' and os.path.isabs(create_dir):
if len(create_dir) < 2 or create_dir[1] != ':':
raise ValueError("Invalid absolute path for Windows")
os.makedirs(create_dir, exist_ok=True)
def is_file_exist(file_path: str):
if not file_path:
return False
return os.path.exists(file_path)
async def read_file(logger: Logger, file_path: str) -> str:
"""
Read the content of the specified file.
:param logger: logger
:param file_path: file path
:return: the file content as a string
"""
try:
async with aiofiles.open(file_path, 'r', encoding='utf-8') as file:
file_content = await file.read()
return file_content
except FileNotFoundError:
logger.warning(f"File not found: {file_path}")
return ""
except PermissionError:
logger.error(f"Permission denied to read file: {file_path}")
return ""
except Exception as e:
logger.error(f"Error reading file: {file_path}, error: {e}")
return ""
async def read_all_files_in_dir(logger: Logger, dir_path: str) -> Optional[dict]:
"""
Read the contents of all files in the specified directory.
:param logger: logger
:param dir_path: directory path
:return: a dict mapping file names to file contents
"""
if not is_file_exist(dir_path):
logger.error(f"directory not found: {dir_path}")
return None
if not os.path.isdir(dir_path):
logger.error(f"path is not a directory: {dir_path}")
return None
try:
file_contents = {}
for file_name in os.listdir(dir_path):
file_path = os.path.join(dir_path, file_name)
if os.path.isfile(file_path):
content = await read_file(logger, file_path)
file_contents[file_name] = content
continue
return file_contents
except Exception as e:
logger.error(f"Error reading directory: {dir_path}, error: {e}")
return None
async def write_to_file(logger: Logger, file_path: str, content: str) -> None:
"""
Write the given content to the specified file.
:param logger: logger
:param file_path: file path
:param content: the content to write
"""
mkdir_if_necessary(os.path.dirname(file_path))
try:
async with aiofiles.open(file_path, 'w', encoding='utf-8') as file:
await file.write(content)
except PermissionError:
logger.error(f"Permission denied to write file: {file_path},content: {content}")
raise
except Exception as e:
logger.error(f"Error writing to file: {file_path}, content: {content}, error: {e}")
raise e
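
A small usage sketch of the async file helpers above; the relative path is illustrative.

import asyncio
import logging

async def main():
    logger = logging.getLogger("file-util-demo")
    await write_to_file(logger, "nacos_demo_cache/config.txt", "hello nacos")
    print(await read_file(logger, "nacos_demo_cache/config.txt"))  # -> hello nacos

asyncio.run(main())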

View File

@ -0,0 +1,13 @@
import base64
import hashlib
import hmac
def sign_with_hmac_sha1_encrypt(encrypt_text: str, encrypt_key: str):
if not encrypt_key:
encrypt_key = ""
key = encrypt_key.encode()
mac = hmac.new(key, digestmod=hashlib.sha1)
mac.update(encrypt_text.encode())
return base64.b64encode(mac.digest()).decode()
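
For illustration, signing an arbitrary resource string with a hypothetical secret; the result is the base64-encoded HMAC-SHA1 digest.

signature = sign_with_hmac_sha1_encrypt("tenant+group+1700000000000", "demo-secret-key")
print(signature)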

View File

@ -0,0 +1,9 @@
import hashlib
def md5(content: str):
if content:
md = hashlib.md5()
md.update(content.encode('utf-8'))
return md.hexdigest()
return ""

View File

@ -0,0 +1,20 @@
import socket
from functools import lru_cache
import psutil
from v2.nacos.common.nacos_exception import NacosException, INVALID_INTERFACE_ERROR
class NetUtils:
@staticmethod
@lru_cache(maxsize=1)
def get_local_ip():
try:
for interface, addrs in psutil.net_if_addrs().items():
for addr in addrs:
if addr.family == socket.AF_INET and not addr.address.startswith("127."):
return addr.address
raise NacosException(INVALID_INTERFACE_ERROR, "no valid non-loopback IPv4 interface found")
except socket.gaierror as err:
raise NacosException(INVALID_INTERFACE_ERROR, f"failed to query local IP address, error: {str(err)}")