Compare commits


108 Commits
v0.5.1 ... main

Author SHA1 Message Date
KubeEdge Bot 8f900d6148
Merge pull request #475 from tangming1996/bugfix/workflow
update workflow actions from v2 to v4
2025-03-06 19:16:47 +08:00
ming.tang 67166a576f update workflow actions from v2 to v4
Signed-off-by: ming.tang <ming.tang@daocloud.io>
2025-03-06 14:32:13 +08:00
KubeEdge Bot eaed7da6ec
Merge pull request #472 from tangming1996/bugfix/crd-update-helm
fix helm crd can not generate error
2025-03-03 18:50:45 +08:00
ming.tang 11600c6f00 fix helm crd can not generate error
Signed-off-by: ming.tang <ming.tang@daocloud.io>
2025-02-21 10:04:41 +08:00
KubeEdge Bot 150cc1a968
Merge pull request #467 from tangming1996/bugfix/federal-learning
fix FederatedLearningJob delete error
2025-02-20 19:18:34 +08:00
KubeEdge Bot d234200db5
Merge pull request #465 from tangming1996/feature/hpa-implement
feature: hpa for jointinference
2025-02-20 19:14:34 +08:00
ming.tang bff8fd491e fix FederatedLearningJob delete error
Signed-off-by: ming.tang <ming.tang@daocloud.io>
2025-02-14 16:15:07 +08:00
ming.tang 27cc953a0f feature: hpa for jointinference
Signed-off-by: ming.tang <ming.tang@daocloud.io>
2025-02-14 10:59:40 +08:00
KubeEdge Bot b9f4d9d3ee
Merge pull request #462 from WillardHu/upgrade-k8s-and-go
Upgrade K8s and Go versions
2025-02-13 17:33:27 +08:00
WillardHu 67f623b401 Bump up the Ubuntu and controller-gen versions to be consistent with KubeEdge
Signed-off-by: WillardHu <wei.hu@daocloud.io>
2025-02-07 11:06:45 +08:00
WillardHu 86cf2b8d2c Fixed the failing vendor check
Signed-off-by: WillardHu <wei.hu@daocloud.io>
2025-01-17 11:40:40 +08:00
WillardHu c15570ede0 Update client generation tools, remove k8s dependency, fix lint issues
Signed-off-by: WillardHu <wei.hu@daocloud.io>
2025-01-17 11:40:40 +08:00
WillardHu f5f870cccd Update vendor licenses
Signed-off-by: WillardHu <wei.hu@daocloud.io>
2025-01-17 10:50:22 +08:00
WillardHu 495553403d Update go vendors
Signed-off-by: WillardHu <wei.hu@daocloud.io>
2025-01-17 10:48:45 +08:00
WillardHu 22e479be48 Upgrade K8s and Go versions
Signed-off-by: WillardHu <wei.hu@daocloud.io>
2025-01-17 10:44:29 +08:00
KubeEdge Bot 8a4e24b847
Merge pull request #460 from MooreZheng/main
update reviewers
2025-01-08 09:36:15 +08:00
MooreZheng 2ce0ebca89
update owners
Signed-off-by: MooreZheng <zimu.zheng@hotmail.com>
2025-01-07 20:10:17 +08:00
KubeEdge Bot 2f439627f8
Merge pull request #441 from FuryMartin/patch-1
Update uvicorn's version in `lib/requirements.txt` to fix installation issue with latest `pip`
2024-12-27 18:07:04 +08:00
KubeEdge Bot 6f0b2a4e8a
Merge pull request #457 from ajie65/docs/hpa-proposal
[LFX'24] Add Sedna Joint inference HPA Proposal
2024-11-28 18:29:35 +08:00
KubeEdge Bot 01351c51aa
Merge pull request #455 from Electronic-Waste/doc/proposal
[LFX'24] Add Sedna Federated Learning v2 Proposal.
2024-11-22 10:08:29 +08:00
Electronic-Waste 8a6c5e7501 fix: data-centric scheduling.
Signed-off-by: Electronic-Waste <2690692950@qq.com>
2024-11-21 15:28:11 +00:00
huang qi jie dc6e801ba4 [LFX'24] Add Sedna Joint inference HPA Proposal
Signed-off-by: huang qi jie <huangqijie@gmail.com>
2024-11-14 13:48:55 +08:00
Electronic-Waste 4ba8fb7c83 feat: add sedna federated learning v2 proposal.
Signed-off-by: Electronic-Waste <2690692950@qq.com>
2024-11-04 16:20:46 +00:00
KubeEdge Bot 7cce21963a
Merge pull request #438 from SherlockShemol/sedna-controller-enhancement
update sedna controller (jointinference and federatedlearning controller) enhancement proposal
2024-10-30 20:48:08 +08:00
KubeEdge Bot 712b62b7bf
Merge pull request #446 from SherlockShemol/fl-controller-enhancement
Sedna FederatedLearning controller enhancement
2024-10-30 20:47:08 +08:00
KubeEdge Bot cabab5f1d8
Merge pull request #445 from SherlockShemol/ji-controller-enhancement
JointInferenceService controller enhancement
2024-10-30 20:46:07 +08:00
SherlockShemol bfa3a65b2a enhance joint inference controller: improve the cascade deletion; create a Deployment resource instead of a Pod resource and let the Deployment controller manage the number of Pods; automatically update the Deployment when the joint inference CRD is modified; add a test file to ensure the correctness of the joint inference controller.
Signed-off-by: SherlockShemol <shemol@163.com>
2024-10-30 09:42:25 +08:00
SherlockShemol 72fba6695c update sedna controller (jointinference and federatedlearning controller) enhancement proposal
Signed-off-by: SherlockShemol <shemol@163.com>
2024-10-28 18:52:08 +08:00
SherlockShemol b37522ebe0 enhance federated learning controller:
1. improve the cascade deletion of the federated learning controller;
2. enable the federated learning pod to rebuild itself if it is manually or wrongly deleted;
3. enable self-updating of the pod config when the federated learning CRD is modified;
4. add a test file to ensure the correctness of the solution.
Signed-off-by: SherlockShemol <shemol@163.com>
2024-10-24 22:31:42 +08:00
KubeEdge Bot 3342955521
Merge pull request #443 from tangming1996/fix/objectsearch
fix objectsearch bug of joint delete
2024-09-13 17:35:22 +08:00
ming.tang c19e6c949c fix objectsearch bug of joint delete
Signed-off-by: ming.tang <ming.tang@daocloud.io>
2024-09-12 10:10:00 +08:00
FuryMartin 46bb74a265 Update requirements.txt
Signed-off-by: Yu Fan <fany@buaa.edu.cn>
2024-08-15 21:05:15 +08:00
KubeEdge Bot 2ecc30d821
Merge pull request #437 from SherlockShemol/sedna-controller-enhancement
proposal - Sedna controller enhancement
2024-08-13 11:46:24 +08:00
shemol f75ebbe37d proposal-sedna controller enhancement: fix JointInferenceService owner reference
correct the owner reference of joint inference objects

Signed-off-by: shemol <shemol@163.com>
2024-08-07 13:16:27 +08:00
shemol b510706e78 proposal-sedna controller enhancement
Submit a first draft of proposal of sedna controller enhancement

Signed-off-by: shemol <shemol@163.com>
2024-08-07 13:16:03 +08:00
KubeEdge Bot ac623ab32d
Merge pull request #426 from wbc6080/fix-slack
fix slack url
2023-12-11 15:56:53 +08:00
KubeEdge Bot 681bdd2740
Merge pull request #420 from tangming1996/fix/develop
Fixed helm installation failure
2023-12-11 15:55:54 +08:00
wbc6080 465f325af7 fix slack url
Signed-off-by: wbc6080 <wangbincheng4@huawei.com>
2023-12-07 11:10:43 +08:00
ming.tang 5ad27aaa7e Fixed helm installation failure
Signed-off-by: ming.tang <ming.tang@daocloud.io>
2023-11-08 16:50:24 +08:00
KubeEdge Bot b8ec263422
Merge pull request #417 from Shelley-BaoYue/main
update CoC, point to kubeedge/community/CODE_OF_CONDUCT.md
2023-09-20 20:21:46 +08:00
Shelley-BaoYue 807c36eaa8 update CoC
Signed-off-by: Shelley-BaoYue <baoyue2@huawei.com>
2023-09-06 14:29:33 +08:00
KubeEdge Bot 23dfbaaae9
Merge pull request #412 from tangming1996/main
update doc
2023-08-16 09:18:13 +08:00
KubeEdge Bot 174273d9a1
Merge pull request #379 from Lj1ang/Mindspore-Demo
two missing lines in mindspore backend
2023-07-29 14:37:09 +08:00
ming.tang c6bcee7197 update doc
Signed-off-by: ming.tang <ming.tang@daocloud.io>
2023-07-04 17:46:26 +08:00
KubeEdge Bot 3e8de61074
Merge pull request #406 from jaypume/feature-knowledge-base
Support showing knowledge base
2023-06-13 12:06:32 +08:00
Jie Pu 6753b229eb update generated crd code and fix ci
Signed-off-by: Jie Pu <i@jaypu.com>
2023-06-13 11:33:57 +08:00
Jie Pu 0e43075c44 Support showing knowledge base in crd status field
Signed-off-by: Jie Pu <i@jaypu.com>
2023-06-12 17:17:45 +08:00
Jie Pu f996a835a6 Add autogen code based on types.go
Signed-off-by: Jie Pu <i@jaypu.com>
2023-06-12 17:17:45 +08:00
Jie Pu e1c920ac00 Add knowledge base types definition
Signed-off-by: Jie Pu <i@jaypu.com>
2023-06-12 17:17:45 +08:00
KubeEdge Bot dad820306d
Merge pull request #410 from jaypume/fix-ci
Fix ci e2e running failed issue.
2023-06-09 18:40:05 +08:00
Jie Pu 7b8ea94635 Fix kind version to v0.18.0
Signed-off-by: Jie Pu <i@jaypu.com>
2023-06-09 18:09:27 +08:00
KubeEdge Bot b6fbf1f0b1
Merge pull request #405 from qxygxt/main
Tutorial for ATCII Lifelong Learning Job
2023-05-04 17:16:05 +08:00
qxygxt 5d96245357 a tutorial based on atcii lifelong learning job
Signed-off-by: qxygxt <xingyu.q@outlook.com>
2023-04-20 20:46:04 +08:00
KubeEdge Bot 0197964b2b
Merge pull request #391 from luosiqi/proposal
Proposal and tutorial of unstructured lifelong learning
2023-04-18 16:02:50 +08:00
KubeEdge Bot f7cbd56fdf
Merge pull request #402 from jaypume/update-owners
Update Owners
2023-04-14 19:36:46 +08:00
Jie Pu 087f0cafb2 Update Owners
Signed-off-by: Jie Pu <pujie2@huawei.com>
2023-04-14 18:53:27 +08:00
KubeEdge Bot 2a003ec7ca
Merge pull request #392 from luosiqi/main
Unstructured Sedna Lifelong Learning Architecture
2023-03-31 10:58:34 +08:00
SiqiLuo 9b4bfcc260 Modify docker image address of unstructured lifelong learning in Readme
Signed-off-by: SiqiLuo <1587295470@qq.com>
2023-03-31 10:07:04 +08:00
SiqiLuo 7750093afb Improve tutorial of unstructured lifelong learning
Signed-off-by: SiqiLuo <1587295470@qq.com>
2023-03-30 17:26:36 +08:00
SiqiLuo c5cf783d96 Conduct code check and improve docs of unstructured lifelong learning
Signed-off-by: SiqiLuo <1587295470@qq.com>
2023-03-30 16:18:56 +08:00
SiqiLuo 5dce8ff69e Add base class for all the algorithm modules of unstructured lifelong learning
Signed-off-by: SiqiLuo <1587295470@qq.com>
2023-03-30 16:18:31 +08:00
SiqiLuo a4b1069edb Improve base model codes of unstructured lifelong learning, i.e., RFNet
Signed-off-by: SiqiLuo <1587295470@qq.com>
2023-03-30 16:17:44 +08:00
SiqiLuo ab977a3068 proposal of unstructured lifelong learning
Signed-off-by: luosiqi <1587295470@qq.com>
Signed-off-by: SiqiLuo <1587295470@qq.com>
2023-03-29 19:50:08 +08:00
KubeEdge Bot c763c1a90e
Merge pull request #385 from RyanZhaoXB/install-dev
update the path of installation document in README.md
2023-03-03 15:42:47 +08:00
KubeEdge Bot e036f48c40
Merge pull request #394 from RyanZhaoXB/install-link-fix
wrong url in document
2023-02-13 14:15:31 +08:00
Ryan dbb59745b5 update quickstart.md and fix the wrong url
Signed-off-by: Ryan <zhaoran11@huawei.com>
2023-02-02 17:50:10 +08:00
SiqiLuo 0d465d46d8 Improve proposal of unstructured lifelong learning
Signed-off-by: SiqiLuo <1587295470@qq.com>
2023-01-18 11:41:11 +08:00
SiqiLuo 2ea525a225 proposal of lifelong learning
Signed-off-by: SiqiLuo <1587295470@qq.com>
2023-01-18 10:31:05 +08:00
KubeEdge Bot 1cddd17ef2
Merge pull request #382 from luosiqi/main
Unstructured lifelong learning with cloud robotics example
2022-12-29 14:34:21 +08:00
SiqiLuo 5f117132d5 Code review and done
Signed-off-by: SiqiLuo <1587295470@qq.com>
2022-12-28 16:06:02 +08:00
SiqiLuo f6f67fd492 Code check and reduce too long characters
Signed-off-by: SiqiLuo <1587295470@qq.com>
2022-12-28 15:58:16 +08:00
SiqiLuo 4bab19a78d Code review
Signed-off-by: SiqiLuo <1587295470@qq.com>
2022-12-28 15:45:22 +08:00
SiqiLuo 8f1f777b5d Code check and fix
Signed-off-by: SiqiLuo <1587295470@qq.com>
2022-12-28 15:17:50 +08:00
SiqiLuo f961fa7476 Remove unnecessary files
Signed-off-by: SiqiLuo <1587295470@qq.com>
2022-12-28 10:04:35 +08:00
SiqiLuo 22487aa1ea Code review and modify for docs
Signed-off-by: SiqiLuo <1587295470@qq.com>
2022-12-28 09:47:53 +08:00
SiqiLuo a97594e3d7 Remove sensitive messages
Signed-off-by: SiqiLuo <1587295470@qq.com>
2022-12-26 15:38:49 +08:00
SiqiLuo 7524867eff code review and modify
Signed-off-by: SiqiLuo <1587295470@qq.com>
2022-12-26 15:37:19 +08:00
SiqiLuo 01d3db0b12 code review for open source
Signed-off-by: SiqiLuo <1587295470@qq.com>
2022-12-26 15:32:52 +08:00
SiqiLuo 885fe68e16 fixed robo skills
Signed-off-by: SiqiLuo <1587295470@qq.com>
2022-12-20 12:16:33 +08:00
Ryan 8152ba4734 update the path of installation document in README.md
Signed-off-by: Ryan <zhaoran11@huawei.com>
2022-12-15 15:39:00 +08:00
SiqiLuo 439fd31911 unstructured lifelong learning
Signed-off-by: SiqiLuo <1587295470@qq.com>
2022-12-08 15:38:06 +08:00
SiqiLuo 45a3afe05b First commit
Signed-off-by: SiqiLuo <1587295470@qq.com>
2022-12-08 15:30:57 +08:00
Lj1ang d212587b2b two missing lines for mindspore backend
Signed-off-by: Lj1ang <2872509481@qq.com>
2022-11-13 18:50:54 +08:00
KubeEdge Bot ebfdbfa7f5
Merge pull request #341 from Lj1ang/TinyMS-Support-and-Demos
[OSPP]The proposal for TinyMS support in Python SDK
2022-11-01 00:13:47 +08:00
KubeEdge Bot d1cf42c7e5
Merge pull request #368 from Ymh13383894400/main
Build a high-frequency Sedna-based end-to-end use case in ModelBox fo…
2022-10-31 23:10:48 +08:00
Ymh13383894400 104d80e80d build high frequency sedna example with modelbox
Signed-off-by: Ymh13383894400 <1431605505@qq.com>
2022-10-31 23:03:35 +08:00
KubeEdge Bot 306080d6e1
Merge pull request #375 from yqhok1/main
Add JSON data parse
2022-10-31 14:49:48 +08:00
KubeEdge Bot 44efe5c4cc
Merge pull request #376 from Lj1ang/Mindspore-Demo
[OSPP]Mindspore demo
2022-10-31 14:10:47 +08:00
York You c5d7ace6a6
Add JSON data parse
Signed-off-by: York You <573861119@qq.com>
2022-10-31 11:00:38 +08:00
KubeEdge Bot 1f219ac38f
Merge pull request #369 from wjf222/wjf_ospp_final
ospp Lifelong Learning exporter and Visualization
2022-10-30 10:27:47 +08:00
KubeEdge Bot 871561fdee
Merge pull request #366 from Kanakami/implementation
[OSPP] The implementation for Observability management
2022-10-30 10:26:46 +08:00
Lj1ang e72ad41a44 perfection of README
Signed-off-by: Lj1ang <2872509481@qq.com>
2022-10-28 16:52:13 +08:00
Lj1ang 941656862e Mindspore demo:dog croissants classification
Signed-off-by: Lj1ang <2872509481@qq.com>
2022-10-28 15:04:50 +08:00
Lj1ang 9b7071b967 Mindspore Demo can run locally
Signed-off-by: Lj1ang <2872509481@qq.com>
2022-10-28 15:04:50 +08:00
Kanakami 4bd048c66b implementation for observability management
Signed-off-by: Kanakami <979466793@qq.com>
2022-10-28 14:19:21 +08:00
Lj1ang 702fb8cfe6 The proposal for TinyMS support in Python SDK
Signed-off-by: Lj1ang <2872509481@qq.com>
2022-10-28 13:39:56 +08:00
wjf a73929c778 ospp Lifelong Learning exporter and Visualization
Signed-off-by: wjf <1287290237@qq.com>
2022-10-27 11:21:25 +08:00
KubeEdge Bot 085ad09e22
Merge pull request #373 from JimmyYang20/main
Add components folder
2022-10-25 11:20:42 +08:00
JimmyYang20 48ec100bbe Add components folder
Signed-off-by: JimmyYang20 <yangjin39@huawei.com>
2022-10-25 11:12:13 +08:00
KubeEdge Bot 99df0079e6
Merge pull request #335 from wjf222/wjf-OSPP-proposal
[OSPP] The proposal for Lifelong Learning O&M
2022-10-25 10:06:42 +08:00
wjf f7fa09c70c add the custom metrics
Signed-off-by: wjf <1287290237@qq.com>
2022-10-19 10:15:12 +08:00
KubeEdge Bot 5c796fded5
Merge pull request #340 from Kanakami/main
[OSPP] The proposal for Observability management
2022-10-18 10:56:35 +08:00
Kanakami 1633dd851f remove training hyperparameters and extract task basic metrics
Signed-off-by: Kanakami <979466793@qq.com>
2022-08-20 18:54:57 +08:00
wjf222 a83c5a06e8 Add algorithm metrics
Signed-off-by: wjf222 <jffwang@qq.com>
2022-08-18 21:17:40 +08:00
wjf222 5759e74d63 [OSPP] The proposal for Lifelong Learning O&M
Signed-off-by: wjf222 <jffwang@qq.com>
2022-08-17 22:10:52 +08:00
Kanakami ce2bf33515 update catalogue and architecture image
Signed-off-by: Kanakami <979466793@qq.com>
2022-08-17 22:08:31 +08:00
Kanakami 7ba9568bd4 add common system metrics and algorithm metrics
Signed-off-by: Kanakami <979466793@qq.com>
2022-08-16 22:48:17 +08:00
Kanakami 89a9a2e19c Add observability management proposal
Signed-off-by: Kanakami <979466793@qq.com>
2022-08-16 21:07:22 +08:00
3721 changed files with 695211 additions and 273494 deletions


@@ -9,7 +9,7 @@ on:
 jobs:
   verify-and-lint:
-    runs-on: ubuntu-latest
+    runs-on: ubuntu-22.04
     name: Verify codegen/vendor/licenses, do lint
     env:
       GOPATH: ${{ github.workspace }}
@@ -19,17 +19,17 @@ jobs:
     steps:
       - name: Install Go
-        uses: actions/setup-go@v2
+        uses: actions/setup-go@v4
         with:
-          go-version: 1.16.x
+          go-version: 1.22.9
       - name: Checkout code
-        uses: actions/checkout@v2
+        uses: actions/checkout@v4
         with:
           fetch-depth: 0
           path: ${{ env.CODE_DIR }}
-      - uses: actions/cache@v2
+      - uses: actions/cache@v4
         with:
           path: ~/go/pkg/mod
           key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
@@ -51,36 +51,36 @@ jobs:
         working-directory: ${{ env.CODE_DIR }}
   build:
-    runs-on: ubuntu-latest
+    runs-on: ubuntu-22.04
     name: build gm and lc
     steps:
       - name: Install Go
-        uses: actions/setup-go@v2
+        uses: actions/setup-go@v4
         with:
-          go-version: 1.16.x
+          go-version: 1.22.9
-      - uses: actions/cache@v2
+      - uses: actions/cache@v4
         with:
           path: ~/go/pkg/mod
           key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
       - name: Checkout code
-        uses: actions/checkout@v2
+        uses: actions/checkout@v4
         with:
           fetch-depth: 0
       - run: make build # without verify
   basic_test:
-    runs-on: ubuntu-latest
+    runs-on: ubuntu-22.04
     name: 'Unit test, integration test edge (noop now)'
     steps:
       - name: Install Go
-        uses: actions/setup-go@v2
+        uses: actions/setup-go@v4
         with:
-          go-version: 1.16.x
+          go-version: 1.22.9
-      - uses: actions/cache@v2
+      - uses: actions/cache@v4
         with:
           path: ~/go/pkg/mod
           key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
@@ -89,34 +89,33 @@ jobs:
         run: ': to be added'
       - name: Checkout code
-        uses: actions/checkout@v2
+        uses: actions/checkout@v4
         with:
           fetch-depth: 0
       - run: ': to be added'
   e2e_test:
-    runs-on: ubuntu-latest
+    runs-on: ubuntu-22.04
     name: 'E2e test'
     steps:
       - name: Install Go
-        uses: actions/setup-go@v2
+        uses: actions/setup-go@v4
         with:
-          go-version: 1.16.x
+          go-version: 1.22.9
-      - uses: actions/cache@v2
+      - uses: actions/cache@v4
         with:
           path: ~/go/pkg/mod
           key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
       - name: Install dependences
         run: |
-          # since this ubuntu-latest os has already kind/kubectl/jq(see https://github.com/actions/virtual-environments/blob/main/images/linux/Ubuntu2004-README.md),
-          # this just makes it sure
-          type kind || {
-            sudo apt-get install -y jq
-            go get sigs.k8s.io/kind@$(curl -s https://api.github.com/repos/kubernetes-sigs/kind/releases/latest | jq -r .tag_name)
-          }
+          # since this ubuntu-latest os has already kind/kubectl/jq(see https://github.com/actions/runner-images/blob/main/images/linux/Ubuntu2004-Readme.md),
+          # but kind v0.19.0 has different api with v0.18.0, so here to reinstall kind v0.18.0
+          sudo apt-get install -y jq
+          go install sigs.k8s.io/kind@$(curl -s https://api.github.com/repos/kubernetes-sigs/kind/releases/97518847 | jq -r .tag_name)
+          kind version
           type kubectl || {
             curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
             chmod +x ./kubectl
@@ -125,49 +124,49 @@ jobs:
       - name: Checkout code
-        uses: actions/checkout@v2
+        uses: actions/checkout@v4
         with:
           fetch-depth: 0
       - run: make e2e
   docker_build:
-    runs-on: ubuntu-latest
+    runs-on: ubuntu-22.04
     name: docker image build for gm/lc
     steps:
       - name: Install Go
-        uses: actions/setup-go@v2
+        uses: actions/setup-go@v4
         with:
-          go-version: 1.16.x
+          go-version: 1.22.9
-      - uses: actions/cache@v2
+      - uses: actions/cache@v4
         with:
           path: ~/go/pkg/mod
           key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
       - name: Checkout code
-        uses: actions/checkout@v2
+        uses: actions/checkout@v4
         with:
           fetch-depth: 0
       - run: make images
   docker_cross_build:
-    runs-on: ubuntu-latest
+    runs-on: ubuntu-22.04
     name: docker cross build images for gm/lc/kb
     steps:
       - name: Install Go
-        uses: actions/setup-go@v2
+        uses: actions/setup-go@v4
         with:
-          go-version: 1.16.x
+          go-version: 1.22.9
-      - uses: actions/cache@v2
+      - uses: actions/cache@v4
         with:
           path: ~/go/pkg/mod
           key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
       - name: Checkout code
-        uses: actions/checkout@v2
+        uses: actions/checkout@v4
        with:
          fetch-depth: 0
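Every job in this workflow is changed the same way; as a sketch (step names and indentation assumed from the diff above, not a verbatim copy of the file), the upgraded per-job pattern is:

```yaml
# Sketch of the pattern after the upgrade: runner pinned to ubuntu-22.04,
# actions bumped from v2 to v4, Go pinned to 1.22.9.
jobs:
  build:
    runs-on: ubuntu-22.04
    steps:
      - name: Install Go
        uses: actions/setup-go@v4
        with:
          go-version: 1.22.9
      - uses: actions/cache@v4
        with:
          path: ~/go/pkg/mod
          key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
      - name: Checkout code
        uses: actions/checkout@v4
        with:
          fetch-depth: 0
```

Pinning `ubuntu-22.04` instead of `ubuntu-latest` keeps the runner image stable when GitHub moves the `latest` alias forward.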

6
.gitignore vendored

@@ -4,6 +4,7 @@
 *.dll
 *.so
 *.dylib
+*.DS_Store

 # Test binary, built with `go test -c`
 *.test
@@ -28,3 +29,8 @@ __pycache__/

 # go build output
 /_output
+
+# AI model files
+*.pth
+*.model
+*.pkl


@@ -12,21 +12,49 @@ run:
   # exit code when at least one issue was found, default is 1
   issues-exit-code: 1
-  # which dirs to skip: issues from them won't be reported;
-  # can use regexp here: generated.*, regexp is applied on full path;
-  # default value is empty list, but default dirs are skipped independently
-  # from this option's value (see skip-dirs-use-default).
-  # "/" will be replaced by current OS file path separator to properly work
-  # on Windows.
-  skip-dirs:
+issues:
+  # Which dirs to exclude: issues from them won't be reported.
+  # Can use regexp here: `generated.*`, regexp is applied on full path,
+  # including the path prefix if one is set.
+  # Default dirs are skipped independently of this option's value (see exclude-dirs-use-default).
+  # "/" will be replaced by current OS file path separator to properly work on Windows.
+  # Default: []
+  exclude-dirs:
     - vendor
     - fake
     - externalversions
+  exclude-dirs-use-default: true

 # output configuration options
 output:
-  # colored-line-number|line-number|json|tab|checkstyle|code-climate, default is "colored-line-number"
-  format: colored-line-number
+  # The formats used to render issues.
+  # Formats:
+  # - `colored-line-number`
+  # - `line-number`
+  # - `json`
+  # - `colored-tab`
+  # - `tab`
+  # - `html`
+  # - `checkstyle`
+  # - `code-climate`
+  # - `junit-xml`
+  # - `junit-xml-extended`
+  # - `github-actions`
+  # - `teamcity`
+  # - `sarif`
+  # Output path can be either `stdout`, `stderr` or path to the file to write to.
+  #
+  # For the CLI flag (`--out-format`), multiple formats can be specified by separating them by comma.
+  # The output can be specified for each of them by separating format name and path by colon symbol.
+  # Example: "--out-format=checkstyle:report.xml,json:stdout,colored-line-number"
+  # The CLI flag (`--out-format`) override the configuration file.
+  #
+  # Default:
+  # formats:
+  #   - format: colored-line-number
+  #     path: stdout
+  formats:
+    - format: colored-line-number

   # print lines of code with issue, default is true
   print-issued-lines: true
@@ -41,13 +69,21 @@ linters-settings:
   misspell:
     ignore-words:
       - mosquitto
+  revive:
+    rules:
+      - name: unused-parameter
+        severity: warning
+        disabled: true
+      - name: unused-receiver
+        severity: warning
+        disabled: true

 linters:
   disable-all: true
   enable:
     - goconst
     - gofmt
-    - golint
+    - revive
     - gosimple
     - govet
     - misspell
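Collecting the migrated options, a minimal `.golangci.yml` under the new schema might look like this (a sketch under the assumptions in the diff above, not the project's full config): `skip-dirs` moves to `issues.exclude-dirs`, the single `output.format` becomes the `output.formats` list, and the deprecated `golint` linter is replaced by `revive`.

```yaml
# Sketch: migrated golangci-lint config layout.
run:
  issues-exit-code: 1
issues:
  exclude-dirs:
    - vendor
  exclude-dirs-use-default: true
output:
  formats:
    - format: colored-line-number
linters:
  disable-all: true
  enable:
    - gofmt
    - revive
```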


@@ -1,5 +1,3 @@
 # KubeEdge Community Code of Conduct

-KubeEdge follows the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md).
-
-Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at kubeedge@gmail.com.
+Please refer to our [KubeEdge Community Code of Conduct](https://github.com/kubeedge/community/blob/master/CODE_OF_CONDUCT.md)


@@ -1,16 +0,0 @@
= vendor/github.com/PuerkitoBio/purell licensed under: =
Copyright (c) 2012, Martin Angers
All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
* Neither the name of the author nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
= vendor/github.com/PuerkitoBio/purell/LICENSE fb8b39492731abb9a3d68575f3eedbfa

24
LICENSES/vendor/github.com/beorn7/perks/LICENSE generated vendored Normal file

@@ -0,0 +1,24 @@
= vendor/github.com/beorn7/perks licensed under: =
Copyright (C) 2013 Blake Mizerany
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
= vendor/github.com/beorn7/perks/LICENSE 0d0738f37ee8dc0b5f88a32e83c60198


@@ -1,6 +1,8 @@
-= vendor/go.uber.org/zap licensed under: =
+= vendor/github.com/blang/semver/v4 licensed under: =

-Copyright (c) 2016-2017 Uber Technologies, Inc.
+The MIT License
+
+Copyright (c) 2014 Benedikt Lang <github at benediktlang.de>

 Permission is hereby granted, free of charge, to any person obtaining a copy
 of this software and associated documentation files (the "Software"), to deal
@@ -20,4 +22,5 @@ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
 THE SOFTWARE.

-= vendor/go.uber.org/zap/LICENSE.txt 5e8153e456a82529ea845e0d511abb69
+= vendor/github.com/blang/semver/v4/LICENSE 5a3ade42a900439691ebc22013660cae

26
LICENSES/vendor/github.com/cespare/xxhash/v2/LICENSE generated vendored Normal file

@@ -0,0 +1,26 @@
= vendor/github.com/cespare/xxhash/v2 licensed under: =
Copyright (c) 2016 Caleb Spare
MIT License
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
= vendor/github.com/cespare/xxhash/v2/LICENSE.txt 802da049c92a99b4387d3f3d91b00fa9


@@ -22,4 +22,4 @@ NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
 LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
 OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
 WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

-= vendor/github.com/emicklei/go-restful/LICENSE 2ebc1c12a0f4eae5394522e31961e1de
+= vendor/github.com/emicklei/go-restful/v3/LICENSE 2ebc1c12a0f4eae5394522e31961e1de


@@ -1,4 +1,4 @@
-= vendor/github.com/googleapis/gnostic licensed under: =
+= vendor/github.com/google/gnostic-models licensed under: =

 Apache License
@@ -204,4 +204,4 @@
 limitations under the License.

-= vendor/github.com/googleapis/gnostic/LICENSE b1e01b26bacfc2232046c90a330332b3
+= vendor/github.com/google/gnostic-models/LICENSE b1e01b26bacfc2232046c90a330332b3


@ -1,366 +0,0 @@
= vendor/github.com/hashicorp/golang-lru licensed under: =
Mozilla Public License, version 2.0
1. Definitions
1.1. "Contributor"
means each individual or legal entity that creates, contributes to the
creation of, or owns Covered Software.
1.2. "Contributor Version"
means the combination of the Contributions of others (if any) used by a
Contributor and that particular Contributor's Contribution.
1.3. "Contribution"
means Covered Software of a particular Contributor.
1.4. "Covered Software"
means Source Code Form to which the initial Contributor has attached the
notice in Exhibit A, the Executable Form of such Source Code Form, and
Modifications of such Source Code Form, in each case including portions
thereof.
1.5. "Incompatible With Secondary Licenses"
means
a. that the initial Contributor has attached the notice described in
Exhibit B to the Covered Software; or
b. that the Covered Software was made available under the terms of
version 1.1 or earlier of the License, but not also under the terms of
a Secondary License.
1.6. "Executable Form"
means any form of the work other than Source Code Form.
1.7. "Larger Work"
means a work that combines Covered Software with other material, in a
separate file or files, that is not Covered Software.
1.8. "License"
means this document.
1.9. "Licensable"
means having the right to grant, to the maximum extent possible, whether
at the time of the initial grant or subsequently, any and all of the
rights conveyed by this License.
1.10. "Modifications"
means any of the following:
a. any file in Source Code Form that results from an addition to,
deletion from, or modification of the contents of Covered Software; or
b. any new file in Source Code Form that contains any Covered Software.
1.11. "Patent Claims" of a Contributor
means any patent claim(s), including without limitation, method,
process, and apparatus claims, in any patent Licensable by such
Contributor that would be infringed, but for the grant of the License,
by the making, using, selling, offering for sale, having made, import,
or transfer of either its Contributions or its Contributor Version.
1.12. "Secondary License"
means either the GNU General Public License, Version 2.0, the GNU Lesser
General Public License, Version 2.1, the GNU Affero General Public
License, Version 3.0, or any later versions of those licenses.
1.13. "Source Code Form"
means the form of the work preferred for making modifications.
1.14. "You" (or "Your")
means an individual or a legal entity exercising rights under this
License. For legal entities, "You" includes any entity that controls, is
controlled by, or is under common control with You. For purposes of this
definition, "control" means (a) the power, direct or indirect, to cause
the direction or management of such entity, whether by contract or
otherwise, or (b) ownership of more than fifty percent (50%) of the
outstanding shares or beneficial ownership of such entity.
2. License Grants and Conditions
2.1. Grants
Each Contributor hereby grants You a world-wide, royalty-free,
non-exclusive license:
a. under intellectual property rights (other than patent or trademark)
Licensable by such Contributor to use, reproduce, make available,
modify, display, perform, distribute, and otherwise exploit its
Contributions, either on an unmodified basis, with Modifications, or
as part of a Larger Work; and
b. under Patent Claims of such Contributor to make, use, sell, offer for
sale, have made, import, and otherwise transfer either its
Contributions or its Contributor Version.
2.2. Effective Date
The licenses granted in Section 2.1 with respect to any Contribution
become effective for each Contribution on the date the Contributor first
distributes such Contribution.
2.3. Limitations on Grant Scope
The licenses granted in this Section 2 are the only rights granted under
this License. No additional rights or licenses will be implied from the
distribution or licensing of Covered Software under this License.
Notwithstanding Section 2.1(b) above, no patent license is granted by a
Contributor:
a. for any code that a Contributor has removed from Covered Software; or
b. for infringements caused by: (i) Your and any other third party's
modifications of Covered Software, or (ii) the combination of its
Contributions with other software (except as part of its Contributor
Version); or
c. under Patent Claims infringed by Covered Software in the absence of
its Contributions.
This License does not grant any rights in the trademarks, service marks,
or logos of any Contributor (except as may be necessary to comply with
the notice requirements in Section 3.4).
2.4. Subsequent Licenses
No Contributor makes additional grants as a result of Your choice to
distribute the Covered Software under a subsequent version of this
License (see Section 10.2) or under the terms of a Secondary License (if
permitted under the terms of Section 3.3).
2.5. Representation
Each Contributor represents that the Contributor believes its
Contributions are its original creation(s) or it has sufficient rights to
grant the rights to its Contributions conveyed by this License.
2.6. Fair Use
This License is not intended to limit any rights You have under
applicable copyright doctrines of fair use, fair dealing, or other
equivalents.
2.7. Conditions
Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted in
Section 2.1.
3. Responsibilities
3.1. Distribution of Source Form
All distribution of Covered Software in Source Code Form, including any
Modifications that You create or to which You contribute, must be under
the terms of this License. You must inform recipients that the Source
Code Form of the Covered Software is governed by the terms of this
License, and how they can obtain a copy of this License. You may not
attempt to alter or restrict the recipients' rights in the Source Code
Form.
3.2. Distribution of Executable Form
If You distribute Covered Software in Executable Form then:
a. such Covered Software must also be made available in Source Code Form,
as described in Section 3.1, and You must inform recipients of the
Executable Form how they can obtain a copy of such Source Code Form by
reasonable means in a timely manner, at a charge no more than the cost
of distribution to the recipient; and
b. You may distribute such Executable Form under the terms of this
License, or sublicense it under different terms, provided that the
license for the Executable Form does not attempt to limit or alter the
recipients' rights in the Source Code Form under this License.
3.3. Distribution of a Larger Work
You may create and distribute a Larger Work under terms of Your choice,
provided that You also comply with the requirements of this License for
the Covered Software. If the Larger Work is a combination of Covered
Software with a work governed by one or more Secondary Licenses, and the
Covered Software is not Incompatible With Secondary Licenses, this
License permits You to additionally distribute such Covered Software
under the terms of such Secondary License(s), so that the recipient of
the Larger Work may, at their option, further distribute the Covered
Software under the terms of either this License or such Secondary
License(s).
3.4. Notices
You may not remove or alter the substance of any license notices
(including copyright notices, patent notices, disclaimers of warranty, or
limitations of liability) contained within the Source Code Form of the
Covered Software, except that You may alter any license notices to the
extent required to remedy known factual inaccuracies.
3.5. Application of Additional Terms
You may choose to offer, and to charge a fee for, warranty, support,
indemnity or liability obligations to one or more recipients of Covered
Software. However, You may do so only on Your own behalf, and not on
behalf of any Contributor. You must make it absolutely clear that any
such warranty, support, indemnity, or liability obligation is offered by
You alone, and You hereby agree to indemnify every Contributor for any
liability incurred by such Contributor as a result of warranty, support,
indemnity or liability terms You offer. You may include additional
disclaimers of warranty and limitations of liability specific to any
jurisdiction.
4. Inability to Comply Due to Statute or Regulation
If it is impossible for You to comply with any of the terms of this License
with respect to some or all of the Covered Software due to statute,
judicial order, or regulation then You must: (a) comply with the terms of
this License to the maximum extent possible; and (b) describe the
limitations and the code they affect. Such description must be placed in a
text file included with all distributions of the Covered Software under
this License. Except to the extent prohibited by statute or regulation,
such description must be sufficiently detailed for a recipient of ordinary
skill to be able to understand it.
5. Termination
5.1. The rights granted under this License will terminate automatically if You
fail to comply with any of its terms. However, if You become compliant,
then the rights granted under this License from a particular Contributor
are reinstated (a) provisionally, unless and until such Contributor
explicitly and finally terminates Your grants, and (b) on an ongoing
basis, if such Contributor fails to notify You of the non-compliance by
some reasonable means prior to 60 days after You have come back into
compliance. Moreover, Your grants from a particular Contributor are
reinstated on an ongoing basis if such Contributor notifies You of the
non-compliance by some reasonable means, this is the first time You have
received notice of non-compliance with this License from such
Contributor, and You become compliant prior to 30 days after Your receipt
of the notice.
5.2. If You initiate litigation against any entity by asserting a patent
infringement claim (excluding declaratory judgment actions,
counter-claims, and cross-claims) alleging that a Contributor Version
directly or indirectly infringes any patent, then the rights granted to
You by any and all Contributors for the Covered Software under Section
2.1 of this License shall terminate.
5.3. In the event of termination under Sections 5.1 or 5.2 above, all end user
license agreements (excluding distributors and resellers) which have been
validly granted by You or Your distributors under this License prior to
termination shall survive termination.
6. Disclaimer of Warranty
Covered Software is provided under this License on an "as is" basis,
without warranty of any kind, either expressed, implied, or statutory,
including, without limitation, warranties that the Covered Software is free
of defects, merchantable, fit for a particular purpose or non-infringing.
The entire risk as to the quality and performance of the Covered Software
is with You. Should any Covered Software prove defective in any respect,
You (not any Contributor) assume the cost of any necessary servicing,
repair, or correction. This disclaimer of warranty constitutes an essential
part of this License. No use of any Covered Software is authorized under
this License except under this disclaimer.
7. Limitation of Liability
Under no circumstances and under no legal theory, whether tort (including
negligence), contract, or otherwise, shall any Contributor, or anyone who
distributes Covered Software as permitted above, be liable to You for any
direct, indirect, special, incidental, or consequential damages of any
character including, without limitation, damages for lost profits, loss of
goodwill, work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses, even if such party shall have been
informed of the possibility of such damages. This limitation of liability
shall not apply to liability for death or personal injury resulting from
such party's negligence to the extent applicable law prohibits such
limitation. Some jurisdictions do not allow the exclusion or limitation of
incidental or consequential damages, so this exclusion and limitation may
not apply to You.
8. Litigation
Any litigation relating to this License may be brought only in the courts
of a jurisdiction where the defendant maintains its principal place of
business and such litigation shall be governed by laws of that
jurisdiction, without reference to its conflict-of-law provisions. Nothing
in this Section shall prevent a party's ability to bring cross-claims or
counter-claims.
9. Miscellaneous
This License represents the complete agreement concerning the subject
matter hereof. If any provision of this License is held to be
unenforceable, such provision shall be reformed only to the extent
necessary to make it enforceable. Any law or regulation which provides that
the language of a contract shall be construed against the drafter shall not
be used to construe this License against a Contributor.
10. Versions of the License
10.1. New Versions
Mozilla Foundation is the license steward. Except as provided in Section
10.3, no one other than the license steward has the right to modify or
publish new versions of this License. Each version will be given a
distinguishing version number.
10.2. Effect of New Versions
You may distribute the Covered Software under the terms of the version
of the License under which You originally received the Covered Software,
or under the terms of any subsequent version published by the license
steward.
10.3. Modified Versions
If you create software not governed by this License, and you want to
create a new license for such software, you may create and use a
modified version of this License if you rename the license and remove
any references to the name of the license steward (except to note that
such modified license differs from this License).
10.4. Distributing Source Code Form that is Incompatible With Secondary
Licenses If You choose to distribute Source Code Form that is
Incompatible With Secondary Licenses under the terms of this version of
the License, the notice described in Exhibit B of this License must be
attached.
Exhibit A - Source Code Form License Notice
This Source Code Form is subject to the
terms of the Mozilla Public License, v.
2.0. If a copy of the MPL was not
distributed with this file, You can
obtain one at
http://mozilla.org/MPL/2.0/.
If it is not possible or desirable to put the notice in a particular file,
then You may include the notice in a location (such as a LICENSE file in a
relevant directory) where a recipient would be likely to look for such a
notice.
You may add additional accurate notices of copyright ownership.
Exhibit B - "Incompatible With Secondary Licenses" Notice
This Source Code Form is "Incompatible
With Secondary Licenses", as defined by
the Mozilla Public License, v. 2.0.
= vendor/github.com/hashicorp/golang-lru/LICENSE f27a50d2e878867827842f2c60e30bfc

@@ -1,17 +1,205 @@
= vendor/github.com/inconshreveable/mousetrap licensed under: =
Copyright 2014 Alan Shreve
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
http://www.apache.org/licenses/LICENSE-2.0
1. Definitions.
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
= vendor/github.com/inconshreveable/mousetrap/LICENSE b23cff9db13f093a4e6ff77105cbd8eb
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2022 Alan Shreve (@inconshreveable)
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
= vendor/github.com/inconshreveable/mousetrap/LICENSE 7ea8c6c3cf90c1ca8494325e32c35579

@@ -1,6 +1,8 @@
= vendor/go.uber.org/atomic licensed under: =
= vendor/github.com/josharian/intern licensed under: =
Copyright (c) 2016 Uber Technologies, Inc.
MIT License
Copyright (c) 2019 Josh Bleecher Snyder
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
@@ -9,15 +11,15 @@ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
= vendor/go.uber.org/atomic/LICENSE.txt 1caee86519456feda989f8a838102b50
= vendor/github.com/josharian/intern/license.md 6bc75378a26e0addbcdfa118e4d54574

@@ -1,6 +1,6 @@
= vendor/github.com/docker/distribution licensed under: =
= vendor/github.com/matttproud/golang_protobuf_extensions licensed under: =
Apache License
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
@@ -202,5 +202,4 @@ Apache License
See the License for the specific language governing permissions and
limitations under the License.
= vendor/github.com/docker/distribution/LICENSE d2794c0df5b907fdace235a619d80314
= vendor/github.com/matttproud/golang_protobuf_extensions/LICENSE e3fc50a88d0a364313df4b21ef20c29e

LICENSES/vendor/github.com/munnerz/goautoneg/LICENSE generated vendored Normal file
@@ -0,0 +1,35 @@
= vendor/github.com/munnerz/goautoneg licensed under: =
Copyright (c) 2011, Open Knowledge Foundation Ltd.
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in
the documentation and/or other materials provided with the
distribution.
Neither the name of the Open Knowledge Foundation Ltd. nor the
names of its contributors may be used to endorse or promote
products derived from this software without specific prior written
permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
= vendor/github.com/munnerz/goautoneg/LICENSE 0c241922fc69330e2e5590de114f3bf5

@ -1,196 +0,0 @@
= vendor/github.com/opencontainers/go-digest licensed under: =
Apache License
Version 2.0, January 2004
https://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
Copyright 2019, 2020 OCI Contributors
Copyright 2016 Docker, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
= vendor/github.com/opencontainers/go-digest/LICENSE 2d6fc0e85c3f118af64c85a78d56d3cf

@@ -1,5 +1,4 @@
= vendor/github.com/go-openapi/spec licensed under: =
= vendor/github.com/prometheus/client_golang licensed under: =
Apache License
Version 2.0, January 2004
@@ -203,4 +202,4 @@
See the License for the specific language governing permissions and
limitations under the License.
= vendor/github.com/go-openapi/spec/LICENSE 3b83ef96387f14655fc854ddc3c6bd57
= vendor/github.com/prometheus/client_golang/LICENSE 86d3f3a95c324c9479bd8986968f4327

@@ -1,5 +1,4 @@
= vendor/k8s.io/component-helpers licensed under: =
= vendor/github.com/prometheus/client_model licensed under: =
Apache License
Version 2.0, January 2004
@@ -203,4 +202,4 @@
See the License for the specific language governing permissions and
limitations under the License.
= vendor/k8s.io/component-helpers/LICENSE 3b83ef96387f14655fc854ddc3c6bd57
= vendor/github.com/prometheus/client_model/LICENSE 86d3f3a95c324c9479bd8986968f4327

@@ -1,5 +1,4 @@
= vendor/k8s.io/kubernetes licensed under: =
= vendor/github.com/prometheus/common licensed under: =
Apache License
Version 2.0, January 2004
@@ -203,4 +202,4 @@
See the License for the specific language governing permissions and
limitations under the License.
= vendor/k8s.io/kubernetes/LICENSE 3b83ef96387f14655fc854ddc3c6bd57
= vendor/github.com/prometheus/common/LICENSE 86d3f3a95c324c9479bd8986968f4327

@@ -1,5 +1,4 @@
= vendor/k8s.io/apiserver licensed under: =
= vendor/github.com/prometheus/procfs licensed under: =
Apache License
Version 2.0, January 2004
@@ -203,4 +202,4 @@
See the License for the specific language governing permissions and
limitations under the License.
= vendor/k8s.io/apiserver/LICENSE 3b83ef96387f14655fc854ddc3c6bd57
= vendor/github.com/prometheus/procfs/LICENSE 86d3f3a95c324c9479bd8986968f4327

@@ -1,23 +0,0 @@
= vendor/go.uber.org/multierr licensed under: =
Copyright (c) 2017 Uber Technologies, Inc.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
= vendor/go.uber.org/multierr/LICENSE.txt f65b21a547112d1bc7b11b90f9b31997

LICENSES/vendor/gopkg.in/yaml.v3/LICENSE (new generated vendored file)
@@ -0,0 +1,54 @@
= vendor/gopkg.in/yaml.v3 licensed under: =
This project is covered by two different licenses: MIT and Apache.
#### MIT License ####
The following files were ported to Go from C files of libyaml, and thus
are still covered by their original MIT license, with the additional
copyright staring in 2011 when the project was ported over:
apic.go emitterc.go parserc.go readerc.go scannerc.go
writerc.go yamlh.go yamlprivateh.go
Copyright (c) 2006-2010 Kirill Simonov
Copyright (c) 2006-2011 Kirill Simonov
Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
of the Software, and to permit persons to whom the Software is furnished to do
so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
### Apache License ###
All the remaining project files are covered by the Apache license:
Copyright (c) 2011-2019 Canonical Ltd
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
= vendor/gopkg.in/yaml.v3/LICENSE 3c91c17266710e16afdbb2b6d15c761c

@@ -22,4 +22,4 @@ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
= vendor/gorm.io/gorm/License 162a54d183196f03ce8b2a312e5919f8
= vendor/gorm.io/gorm/LICENSE 162a54d183196f03ce8b2a312e5919f8

@@ -1,4 +1,4 @@
= vendor/k8s.io/gengo licensed under: =
= vendor/k8s.io/gengo/v2 licensed under: =
Apache License
@@ -203,4 +203,4 @@
See the License for the specific language governing permissions and
limitations under the License.
= vendor/k8s.io/gengo/LICENSE ad09685d909e7a9f763d2bb62d4bd6fb
= vendor/k8s.io/gengo/v2/LICENSE ad09685d909e7a9f763d2bb62d4bd6fb

LICENSES/vendor/sigs.k8s.io/json/LICENSE (new generated vendored file)
@@ -0,0 +1,242 @@
= vendor/sigs.k8s.io/json licensed under: =
Files other than internal/golang/* licensed under:
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright {yyyy} {name of copyright owner}
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
------------------
internal/golang/* files licensed under:
Copyright (c) 2009 The Go Authors. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
= vendor/sigs.k8s.io/json/LICENSE 545d3f23616dee7495323aeb0b098df3

@@ -183,6 +183,7 @@ e2e:
.PHONY: crds controller-gen
crds: controller-gen
$(CONTROLLER_GEN) $(CRD_OPTIONS) paths="./pkg/apis/sedna/v1alpha1" output:crd:artifacts:config=build/crds
$(CONTROLLER_GEN) $(CRD_OPTIONS) paths="./pkg/apis/sedna/v1alpha1" output:crd:artifacts:config=build/helm/sedna/crds
# Get the currently used golang install path (in GOPATH/bin, unless GOBIN is set)
ifeq (,$(shell go env GOBIN))
@@ -200,7 +201,7 @@ ifeq (, $(shell which controller-gen))
CONTROLLER_GEN_TMP_DIR=$$(mktemp -d) ;\
cd $$CONTROLLER_GEN_TMP_DIR ;\
go mod init tmp ;\
go get sigs.k8s.io/controller-tools/cmd/controller-gen@v0.4.1 ;\
go get sigs.k8s.io/controller-tools/cmd/controller-gen@v0.15.0 ;\
rm -rf $$CONTROLLER_GEN_TMP_DIR ;\
}
CONTROLLER_GEN=$(GOBIN)/controller-gen
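The temporary-module pattern in the recipe above dates from before Go 1.16, whose `go install pkg@version` form made it unnecessary (and `go get` no longer installs binaries as of Go 1.17). A minimal sketch of the modern equivalent, pinned to the same controller-gen version the Makefile now uses; the script only prints the command, since actually installing needs network access:

```shell
#!/bin/sh
# Pin to the controller-gen version referenced by the Makefile above.
CONTROLLER_GEN_VERSION=v0.15.0
CMD="go install sigs.k8s.io/controller-tools/cmd/controller-gen@${CONTROLLER_GEN_VERSION}"
# Print the command; run `eval "$CMD"` to install into GOBIN (GOPATH/bin by default).
echo "$CMD"
```

No temp directory, `go mod init`, or cleanup is needed: `go install` with an explicit `@version` resolves the module without touching the current module's go.mod.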

OWNERS
@@ -1,12 +1,8 @@
approvers:
- chendave
- fisherxu
- jaypume
- kevin-wangzefeng
- llhuii
- sids-b
- ugvddm
reviewers:
- JimmyYang20
- TymonXie
- sig-ai-co-chairs
- sig-ai-tech-leads
reviewers:
- tangming1996
- MooreZheng
- sig-ai-project-maintainers

OWNERS_ALIASES (new file)
@@ -0,0 +1,10 @@
aliases:
sig-ai-co-chairs:
- MooreZheng
sig-ai-tech-leads:
- jaypume
- Ratuchetp
sig-ai-project-maintainers:
- Poorunga
- MooreZheng

@@ -78,7 +78,7 @@ Documentation is located on [readthedoc.io](https://sedna.readthedocs.io/). Thes
### Installation
Follow the [Sedna installation document](docs/setup/quick-start.md) to install Sedna.
Follow the [Sedna installation document](docs/setup/install.md) to install Sedna.
### Examples
Example1[Using Joint Inference Service in Helmet Detection Scenario](/examples/joint_inference/helmet_detection_inference/README.md).
@@ -111,7 +111,7 @@ If you need support, start with the [troubleshooting guide](./docs/troubleshooti
-->
If you have questions, feel free to reach out to us in the following ways:
- [slack channel](https://app.slack.com/client/TDZ5TGXQW/C01EG84REVB/details)
- [slack channel](https://kubeedge.io/docs/community/slack/)
## Contributing

@@ -93,7 +93,7 @@ Sedna的安装文档请参考[这里](/docs/setup/install.md)。
-->
如果您有任何疑问,请以下方式与我们联系:
- [slack channel](https://app.slack.com/client/TDZ5TGXQW/C01EG84REVB/details)
- [slack channel](https://kubeedge.io/docs/community/slack/)
## 贡献

@@ -1,11 +1,9 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.4.1
creationTimestamp: null
controller-gen.kubebuilder.io/version: v0.15.0
name: datasets.sedna.io
spec:
group: sedna.io
@@ -22,14 +20,19 @@ spec:
description: Dataset describes the data that a dataset resource should have
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
description: |-
APIVersion defines the versioned schema of this representation of an object.
Servers should convert recognized schemas to the latest internal value, and
may reject unrecognized values.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
description: |-
Kind is a string value representing the REST resource this object represents.
Servers may infer this from the endpoint the client submits requests to.
Cannot be updated.
In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
type: string
metadata:
type: object
@@ -50,9 +53,9 @@ spec:
- url
type: object
status:
description: DatasetStatus represents information about the status of
a dataset including the time a dataset updated, and number of samples
in a dataset
description: |-
DatasetStatus represents information about the status of a dataset
including the time a dataset updated, and number of samples in a dataset
properties:
numberOfSamples:
type: integer
@@ -69,9 +72,3 @@ spec:
storage: true
subresources:
status: {}
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []

5 file diffs suppressed because they are too large

@@ -1,11 +1,9 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.4.1
creationTimestamp: null
controller-gen.kubebuilder.io/version: v0.15.0
name: models.sedna.io
spec:
group: sedna.io
@@ -22,14 +20,19 @@ spec:
description: Model describes the data that a model resource should have
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
description: |-
APIVersion defines the versioned schema of this representation of an object.
Servers should convert recognized schemas to the latest internal value, and
may reject unrecognized values.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
description: |-
Kind is a string value representing the REST resource this object represents.
Servers may infer this from the endpoint the client submits requests to.
Cannot be updated.
In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
type: string
metadata:
type: object
@@ -51,8 +54,9 @@ spec:
- url
type: object
status:
description: ModelStatus represents information about the status of a
model including the time a model updated, and metrics in a model
description: |-
ModelStatus represents information about the status of a model
including the time a model updated, and metrics in a model
properties:
metrics:
items:
@@ -79,9 +83,3 @@ spec:
storage: true
subresources:
status: {}
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []

4 file diffs suppressed because they are too large

@@ -14,7 +14,7 @@
# Add cross buildx improvement
# _speed_buildx_for_go_
FROM golang:1.16-alpine3.15 AS builder
FROM golang:1.22.9-alpine3.19 AS builder
LABEL stage=builder
ARG GO_LDFLAGS
@@ -31,7 +31,7 @@ RUN make build WHAT=gm GO_LDFLAGS=$GO_LDFLAGS OUT_DIR=_output
# ALT: just using go build
# RUN CGO_ENABLED=0 go build -o _output/bin/sedna-gm -ldflags "$GO_LDFLAGS -w -s" cmd/sedna-gm/sedna-gm.go
FROM alpine:3.11
FROM alpine:3.19
COPY --from=builder /code/_output/bin/sedna-gm /usr/local/bin/sedna-gm

@@ -21,7 +21,7 @@ version: 0.1.0
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: v0.4.3
appVersion: v0.6.0
dependencies:
- name: kb

@@ -7,13 +7,14 @@ Visit https://github.com/kubeedge/sedna for more information.
```
$ git clone https://github.com/kubeedge/sedna.git
$ cd sedna
$ helm install sedna ./build/helm/sedna
$ kubectl create namespace sedna
$ helm install sedna --namespace sedna ./build/helm/sedna
```
## Uninstall
```
$ helm uninstall sedna
$ helm uninstall sedna -n sedna
```
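Taken together, the revised install steps amount to the sequence below. A sketch only, assuming `git`, `kubectl`, and `helm` are on `PATH` and a KubeEdge-enabled cluster is reachable; the script prints its plan rather than executing it, so it can be reviewed before piping to `sh`:

```shell
#!/bin/sh
# Install plan reflecting the namespace fix: the chart does not create
# its own namespace, so `kubectl create namespace sedna` must come first.
PLAN='git clone https://github.com/kubeedge/sedna.git
cd sedna
kubectl create namespace sedna
helm install sedna --namespace sedna ./build/helm/sedna'
echo "$PLAN"
```

Passing `--namespace sedna` on install also makes the later `helm uninstall sedna -n sedna` resolve the release correctly, since Helm scopes releases per namespace.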
## Update CRDs

@@ -21,4 +21,4 @@ version: 0.1.0
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: v0.4.3
appVersion: v0.6.0

@@ -1,4 +1,4 @@
image: kubeedge/sedna-gm:v0.4.3
image: kubeedge/sedna-gm:v0.6.0
resources:
requests:
memory: 32Mi

@@ -21,4 +21,4 @@ version: 0.1.0
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: v0.4.3
appVersion: v0.6.0

@@ -1,4 +1,4 @@
image: kubeedge/sedna-kb:v0.4.3
image: kubeedge/sedna-kb:v0.6.0
resources:
requests:
memory: 256Mi

@@ -21,4 +21,4 @@ version: 0.1.0
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: v0.4.3
appVersion: v0.6.0

@@ -16,7 +16,7 @@ spec:
spec:
containers:
- name: lc
image: kubeedge/sedna-lc:v0.4.3
image: {{ .Values.image }}
env:
- name: GM_ADDRESS
value: gm.sedna:9000

@@ -1,4 +1,4 @@
image: kubeedge/sedna-lc:v0.4.3
image: kubeedge/sedna-lc:v0.6.0
resources:
requests:
memory: 32Mi

@@ -1,11 +1,9 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.4.1
creationTimestamp: null
controller-gen.kubebuilder.io/version: v0.15.0
name: datasets.sedna.io
spec:
group: sedna.io
@@ -19,14 +17,27 @@ spec:
- name: v1alpha1
schema:
openAPIV3Schema:
description: Dataset describes the data that a dataset resource should have
properties:
apiVersion:
description: |-
APIVersion defines the versioned schema of this representation of an object.
Servers should convert recognized schemas to the latest internal value, and
may reject unrecognized values.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
type: string
kind:
description: |-
Kind is a string value representing the REST resource this object represents.
Servers may infer this from the endpoint the client submits requests to.
Cannot be updated.
In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
type: string
metadata:
type: object
spec:
description: DatasetSpec is a description of a dataset
properties:
credentialName:
type: string
@@ -42,6 +53,9 @@ spec:
- url
type: object
status:
description: |-
DatasetStatus represents information about the status of a dataset
including the time a dataset updated, and number of samples in a dataset
properties:
numberOfSamples:
type: integer
@@ -58,9 +72,3 @@ spec:
storage: true
subresources:
status: {}
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -1,11 +1,9 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.4.1
creationTimestamp: null
controller-gen.kubebuilder.io/version: v0.15.0
name: models.sedna.io
spec:
group: sedna.io
@@ -19,17 +17,34 @@ spec:
- name: v1alpha1
schema:
openAPIV3Schema:
description: Model describes the data that a model resource should have
properties:
apiVersion:
description: |-
APIVersion defines the versioned schema of this representation of an object.
Servers should convert recognized schemas to the latest internal value, and
may reject unrecognized values.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
type: string
kind:
description: |-
Kind is a string value representing the REST resource this object represents.
Servers may infer this from the endpoint the client submits requests to.
Cannot be updated.
In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
type: string
metadata:
type: object
spec:
description: ModelSpec is a description of a model
properties:
credentialName:
type: string
device_soc_versions:
items:
type: string
type: array
format:
type: string
url:
@@ -39,9 +54,14 @@ spec:
- url
type: object
status:
description: |-
ModelStatus represents information about the status of a model
including the time a model updated, and metrics in a model
properties:
metrics:
items:
description: Metric describes the data that a resource model metric
should have
properties:
key:
type: string
@@ -63,9 +83,3 @@ spec:
storage: true
subresources:
status: {}
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -16,6 +16,9 @@ rules:
- lifelonglearningjobs
- objectsearchservices
- objecttrackingservices
- reidjobs
- videoanalyticsjobs
- featureextractionservices
verbs:
- get
- list
@@ -34,6 +37,9 @@ rules:
- lifelonglearningjobs/status
- objectsearchservices/status
- objecttrackingservices/status
- reidjobs/status
- videoanalyticsjobs/status
- featureextractionservices/status
verbs:
- get
- update

View File

@@ -1,5 +1,5 @@
kb:
image: kubeedge/sedna-kb:v0.4.3
image: kubeedge/sedna-kb:v0.6.0
resources:
requests:
memory: 256Mi
@@ -8,7 +8,7 @@ kb:
memory: 512Mi
gm:
image: kubeedge/sedna-gm:v0.4.3
image: kubeedge/sedna-gm:v0.6.0
resources:
requests:
memory: 32Mi
@@ -28,7 +28,7 @@ gm:
server: http://kb.sedna:9020
lc:
image: kubeedge/sedna-lc:v0.4.3
image: kubeedge/sedna-lc:v0.6.0
resources:
requests:
memory: 32Mi

View File

@@ -12,15 +12,15 @@
# See the License for the specific language governing permissions and
# limitations under the License.
FROM python:3.6-slim
FROM python:3.9-slim
RUN pip install colorlog~=4.7.2
RUN pip install PyYAML~=5.4.1
RUN pip install fastapi~=0.63.0
RUN pip install starlette~=0.13.6
RUN pip install pydantic~=1.8.1
RUN pip install joblib~=1.0.1
RUN pip install pandas~=1.1.5
RUN pip install joblib~=1.2.0
RUN pip install pandas
RUN pip install uvicorn~=0.14.0
RUN pip install python-multipart~=0.0.5
RUN pip install SQLAlchemy~=1.4.7

View File

@@ -15,7 +15,7 @@
# Add cross buildx improvement
# LC has built sqlite3 which requires CGO with CGO_ENABLED=1
# _speed_buildx_for_cgo_alpine_
FROM golang:1.16-alpine3.15 AS builder
FROM golang:1.22.9-alpine3.19 AS builder
LABEL stage=builder
ARG GO_LDFLAGS
@@ -30,7 +30,7 @@ COPY . .
RUN make build WHAT=lc GO_LDFLAGS=$GO_LDFLAGS OUT_DIR=_output
FROM alpine:3.11
FROM alpine:3.19
COPY --from=builder /code/_output/bin/sedna-lc /usr/local/bin/sedna-lc

View File

@@ -0,0 +1,27 @@
from prometheus_client import start_http_server, Gauge
import os
import time

Incremental_num = Gauge('Incremental_HardSamples', 'Hard Samples for Incremental', ['hardSample'])

def get_incremental_learning_metrics(hard_samples_path):
    Incremental_num.labels(hardSample=True).set(len(os.listdir(hard_samples_path)))

if __name__ == "__main__":
    '''
    These are the paths used in the demo test cases.
    If you run your own tasks, change the following paths to the ones you used.
    To monitor multiple tasks, this exporter needs small changes.
    Once Sedna can expose metrics such as inference count in -oyaml output, this exporter is no longer needed.
    '''
    # incremental learning
    hard_samples_path = "/incremental_learning/he"
    start_http_server(8000)
    while True:
        get_incremental_learning_metrics(hard_samples_path)
        time.sleep(10)
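The exporter above publishes a single `prometheus_client` Gauge. The metric text that Prometheus will scrape can be sanity-checked without starting the HTTP server by rendering a registry directly (a minimal sketch; the temporary directory stands in for the hard-sample path):

```python
import os
import tempfile

from prometheus_client import CollectorRegistry, Gauge, generate_latest

# Private registry so the check does not touch the global default registry.
registry = CollectorRegistry()
hard_samples = Gauge('Incremental_HardSamples',
                     'Hard Samples for Incremental',
                     ['hardSample'], registry=registry)

# Simulate a hard-example directory containing three files.
with tempfile.TemporaryDirectory() as d:
    for name in ('a.jpg', 'b.jpg', 'c.jpg'):
        open(os.path.join(d, name), 'w').close()
    hard_samples.labels(hardSample=True).set(len(os.listdir(d)))

# generate_latest renders the Prometheus text exposition format.
text = generate_latest(registry).decode()
print(text)
```

The output contains `# HELP`/`# TYPE` lines followed by the sample itself, which is the same payload `start_http_server` serves on `/metrics`.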

View File

@@ -0,0 +1,29 @@
from prometheus_client import start_http_server, Gauge
import os
import time

JointInference_num = Gauge('JointInference_InferenceCount', 'InferenceCount for Joint Inference', ['type'])

def get_joint_inference_metrics(cloud_inference_path, output_path):
    JointInference_num.labels(type='Edge').set(len(os.listdir(output_path)))
    JointInference_num.labels(type='Cloud').set(len(os.listdir(cloud_inference_path)))

if __name__ == "__main__":
    '''
    These are the paths used in the demo test cases.
    If you run your own tasks, change the following paths to the ones you used.
    To monitor multiple tasks, this exporter needs small changes.
    Once Sedna can expose metrics such as inference count in -oyaml output, this exporter is no longer needed.
    '''
    # joint inference
    cloud_inference_path = "/joint_inference/output/hard_example_cloud_inference_output"
    output_path = "/joint_inference/output/output"
    start_http_server(8001)
    while True:
        get_joint_inference_metrics(cloud_inference_path, output_path)
        time.sleep(10)

View File

@@ -0,0 +1,20 @@
module pkl-exporter
go 1.17
require (
github.com/mattn/go-sqlite3 v1.14.15
github.com/prometheus/client_golang v1.13.0
)
require (
github.com/beorn7/perks v1.0.1 // indirect
github.com/cespare/xxhash/v2 v2.1.2 // indirect
github.com/golang/protobuf v1.5.2 // indirect
github.com/matttproud/golang_protobuf_extensions v1.0.1 // indirect
github.com/prometheus/client_model v0.2.0 // indirect
github.com/prometheus/common v0.37.0 // indirect
github.com/prometheus/procfs v0.8.0 // indirect
golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a // indirect
google.golang.org/protobuf v1.28.1 // indirect
)

View File

@@ -0,0 +1,374 @@
package main
import (
"database/sql"
"flag"
"fmt"
_ "github.com/mattn/go-sqlite3"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
"github.com/prometheus/client_golang/prometheus/promhttp"
"math/rand"
"os"
"path/filepath"
"strconv"
"strings"
"log"
"net/http"
"time"
)
var addr = flag.String("listen-address", ":7070", "The address to listen on for HTTP requests.")
var metrics = []struct {
name string
label string
note string
labelValues string
value float64
}{
{"metric1", "method,handler", "This is metric1", "a, b", 0},
{"metric2", "method,handler", "This is metric2", "a, b", 0},
{"Recall", "model_name", "Model Recall", "a", 0},
{"Precision", "model_name", "Model Precision", "b", 0},
}
func recordTaskNumM(db *sql.DB) {
go func() {
for {
rows, err := db.Query("SELECT a.id, a.name, a.task_num, b.task_attr FROM ll_task_grp as a left join ll_tasks as b where a.id=b.id;")
if err != nil {
fmt.Println(err)
}
for rows.Next() {
var id int
var name string
var taskNum int
var attr string
err = rows.Scan(&id, &name, &taskNum, &attr)
g, err := opsTaskTaskNum.GetMetricWithLabelValues(strconv.Itoa(id), name, attr)
if err == nil {
g.Set(float64(taskNum))
}
}
time.Sleep(1 * time.Second)
}
}()
}
func recordTaskSampleMStatusM(db *sql.DB) {
go func() {
for {
rows, err := db.Query("SELECT id, name, deploy, sample_num, task_num FROM ll_task_grp")
if err != nil {
fmt.Println(err)
}
for rows.Next() {
var id int
var name string
var deploy bool
var sampleNum int
var taskNum int
err = rows.Scan(&id, &name, &deploy, &sampleNum, &taskNum)
g, err := opsTaskSampleNum.GetMetricWithLabelValues(strconv.Itoa(id), name)
if err == nil {
g.Set(float64(sampleNum))
}
g, err = opsDeployStatus.GetMetricWithLabelValues(strconv.Itoa(id), name)
if err == nil {
if deploy {
g.Set(1)
} else {
g.Set(0)
}
}
}
time.Sleep(1 * time.Second)
}
}()
}
func recordKnownTasks(db *sql.DB) {
go func() {
for {
rows, err := db.Query("SELECT count(*) as c from ll_task_models where is_current = 1")
if err != nil {
fmt.Println(err)
}
for rows.Next() {
var c int
err = rows.Scan(&c)
g, err := opsKnowTaskNum.GetMetricWithLabelValues()
if err == nil {
g.Set(float64(c))
}
}
time.Sleep(1 * time.Second)
}
}()
}
func recordTaskStatus(db *sql.DB) {
go func() {
for {
rows, err := db.Query("SELECT task_id, model_url, is_current as c from ll_task_models")
if err != nil {
fmt.Println(err)
}
for rows.Next() {
var taskId int
var modelUrl string
var isCurrent bool
err = rows.Scan(&taskId, &modelUrl, &isCurrent)
g, err := opsTaskStatus.GetMetricWithLabelValues(strconv.Itoa(taskId), modelUrl)
if err == nil {
if isCurrent {
g.Set(1)
} else {
g.Set(0)
}
}
}
time.Sleep(1 * time.Second)
}
}()
}
func recordTaskRelationShip(db *sql.DB) {
go func() {
for {
rows, err := db.Query("select grp_id, task_id, transfer_radio from ll_task_relation")
if err != nil {
fmt.Println(err)
}
for rows.Next() {
var grpId int
var taskId int
var transferRatio float64
err = rows.Scan(&grpId, &taskId, &transferRatio)
g, err := opsTaskRelationShip.GetMetricWithLabelValues(strconv.Itoa(grpId), strconv.Itoa(taskId))
if err == nil {
g.Set(transferRatio)
}
}
time.Sleep(1 * time.Second)
}
}()
}
func customMetrics(db *sql.DB, registry *prometheus.Registry) {
// mock
sql := "insert into metric (name, label, note, last_time) values (?, ?, ?, time('now'))"
rows, err := db.Query("select count(*) from metric")
if err != nil {
fmt.Println(err)
}
mock := false
for rows.Next() {
var count int
err = rows.Scan(&count)
if err != nil {
fmt.Println(err)
}
mock = count == 0
}
if mock {
for i, metric := range metrics {
_, err := db.Exec(sql, metric.name, metric.label, metric.note)
if err != nil {
fmt.Println(err)
}
_, err = db.Exec("insert into metric_value (metric_id, label_value, value, last_time) values (?, ?, ?, time('now'))",
i, metric.labelValues, metric.value)
if err != nil {
fmt.Println(err)
}
}
mockV := 1.0
go func() {
for {
_, err := db.Exec("update metric_value set value=?", mockV)
if err != nil {
fmt.Println(err)
}
mockV = rand.Float64()
time.Sleep(time.Second)
}
}()
}
// register metrics
registeredMetrics := make([]*prometheus.GaugeVec, 0, 4)
rows, err = db.Query("select name, label, note from metric order by id asc")
if err != nil {
fmt.Println(err)
}
for rows.Next() {
var name string
var label string
var note string
err = rows.Scan(&name, &label, &note)
labels := strings.Split(label, ",")
for i := range labels {
labels[i] = strings.TrimSpace(labels[i])
}
met := promauto.NewGaugeVec(prometheus.GaugeOpts{
Name: name,
Help: note,
}, labels)
registry.MustRegister(met)
registeredMetrics = append(registeredMetrics, met)
}
go func() {
for {
rows, err := db.Query("select metric_id, label_value, `value` from metric_value")
if err != nil {
fmt.Println(err)
}
for rows.Next() {
var metricId int
var labelValue string
var value float64
err = rows.Scan(&metricId, &labelValue, &value)
labelValues := strings.Split(labelValue, ",")
for i := range labelValues {
labelValues[i] = strings.TrimSpace(labelValues[i])
}
if len(registeredMetrics) <= metricId || registeredMetrics[metricId] == nil {
continue
}
g, err := registeredMetrics[metricId].GetMetricWithLabelValues(labelValues...)
if err == nil {
g.Set(value)
}
}
time.Sleep(time.Second)
}
}()
}
func fileScanner(p string, suffix string) {
go func() {
for {
num := 0
root := p
err := filepath.Walk(root, func(path string, info os.FileInfo, err error) error {
if strings.HasSuffix(path, suffix) {
num += 1
}
return nil
})
if err != nil {
println(err)
}
g, err := opsFilesSuffixNum.GetMetricWithLabelValues()
if err == nil {
g.Set(float64(num))
}
num = 0
time.Sleep(time.Second)
}
}()
}
var (
// task metrics
opsKnowTaskNum = promauto.NewGaugeVec(prometheus.GaugeOpts{
Name: "know_task_num",
Help: "Number of known tasks in the knowledge base",
}, []string{})
opsTaskSampleNum = promauto.NewGaugeVec(prometheus.GaugeOpts{
Name: "task_sample_num",
Help: "The total number of samples in task",
}, []string{"id", "name"})
opsTaskTaskNum = promauto.NewGaugeVec(prometheus.GaugeOpts{
Name: "task_num",
Help: "The total number of tasks",
}, []string{"id", "name", "attr"})
opsTaskRelationShip = promauto.NewGaugeVec(prometheus.GaugeOpts{
Name: "task_relation_ship",
Help: "Migration relationship between tasks",
}, []string{"grp_id", "task_id"})
opsTaskStatus = promauto.NewGaugeVec(prometheus.GaugeOpts{
Name: "task_status",
Help: "Whether the task can be deployed",
}, []string{"task_id", "model_url"})
opsDeployStatus = promauto.NewGaugeVec(prometheus.GaugeOpts{
Name: "deploy_status",
Help: "Enum(Waiting, OK, NotOK)",
}, []string{"id", "name"})
opsFilesSuffixNum = promauto.NewGaugeVec(prometheus.GaugeOpts{
Name: "files_with_suffix_num",
Help: "The number of files with suffix",
}, []string{})
)
func main() {
var scannerPath string
var scannerSuffix string
flag.StringVar(&scannerPath, "scanner-path", "-", "file scanner path")
flag.StringVar(&scannerSuffix, "scanner-suffix", "", "file scanner suffix")
flag.Parse()
if scannerPath != "-" {
fileScanner(scannerPath, scannerSuffix)
}
dbSrc := "kb.sqlite3"
db, err := sql.Open("sqlite3", dbSrc)
if err != nil {
fmt.Printf("Can't open %s\n", dbSrc)
return
}
dropTableMetric := "drop table if exists metric"
dropTableMetricValue := "drop table if exists metric_value"
_, err = db.Exec(dropTableMetric)
if err != nil {
println(err)
}
_, err = db.Exec(dropTableMetricValue)
if err != nil {
println(err)
}
createTableMetric := "create table if not exists metric(id int primary key, `name` text, label text, note text, last_time timestamp)"
createTableMetricValue := "create table if not exists metric_value(id int primary key, metric_id int, label_value text, `value` float, last_time timestamp)"
_, err = db.Exec(createTableMetric)
if err != nil {
println(err)
}
_, err = db.Exec(createTableMetricValue)
if err != nil {
println(err)
}
// Create a new registry.
reg := prometheus.NewRegistry()
recordTaskNumM(db)
recordTaskSampleMStatusM(db)
recordKnownTasks(db)
recordTaskRelationShip(db)
recordTaskStatus(db)
customMetrics(db, reg)
// Add Go module build info.
reg.MustRegister(opsKnowTaskNum)
reg.MustRegister(opsTaskSampleNum)
reg.MustRegister(opsTaskTaskNum)
reg.MustRegister(opsTaskRelationShip)
reg.MustRegister(opsTaskStatus)
reg.MustRegister(opsDeployStatus)
reg.MustRegister(opsFilesSuffixNum)
// Expose the registered metrics via HTTP.
http.Handle("/metrics", promhttp.HandlerFor(
reg,
promhttp.HandlerOpts{
// Opt into OpenMetrics to support exemplars.
EnableOpenMetrics: true,
},
))
log.Fatal(http.ListenAndServe(*addr, nil))
}
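The custom-metrics branch of the exporter above is driven entirely by two SQLite tables, `metric` (definitions) and `metric_value` (labelled samples). A stdlib-only Python sketch of that contract, using the same schema as the `create table` statements in `main`, with illustrative values:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
create table if not exists metric(
    id int primary key, name text, label text, note text, last_time timestamp);
create table if not exists metric_value(
    id int primary key, metric_id int, label_value text,
    value float, last_time timestamp);
""")

# One metric definition plus one labelled sample, mirroring the mock rows
# the exporter seeds when the metric table is empty.
conn.execute("insert into metric (id, name, label, note, last_time) "
             "values (0, 'Recall', 'model_name', 'Model Recall', time('now'))")
conn.execute("insert into metric_value (metric_id, label_value, value, last_time) "
             "values (0, 'a', 0.93, time('now'))")

# The exporter's registration pass and polling loop reduce to these queries.
for name, label, note in conn.execute(
        "select name, label, note from metric order by id asc"):
    print(name, label, note)          # → Recall model_name Model Recall
for metric_id, label_value, value in conn.execute(
        "select metric_id, label_value, value from metric_value"):
    print(metric_id, label_value, value)  # → 0 a 0.93
```

On each poll the Go code maps `metric_id` back to the GaugeVec registered for that row of `metric` and calls `Set` with the comma-split label values.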

View File

@@ -0,0 +1,419 @@
{
"annotations": {
"list": [
{
"builtIn": 1,
"datasource": {
"type": "grafana",
"uid": "-- Grafana --"
},
"enable": true,
"hide": true,
"iconColor": "rgba(0, 211, 255, 1)",
"name": "Annotations & Alerts",
"target": {
"limit": 100,
"matchAny": false,
"tags": [],
"type": "dashboard"
},
"type": "dashboard"
}
]
},
"editable": true,
"fiscalYearStartMonth": 0,
"graphTooltip": 0,
"id": 5,
"links": [],
"liveNow": false,
"panels": [
{
"collapsed": false,
"gridPos": {
"h": 1,
"w": 24,
"x": 0,
"y": 0
},
"id": 24,
"panels": [],
"title": "Algorithm Metrics (Federated Learning)",
"type": "row"
},
{
"datasource": {
"type": "prometheus",
"name": "Prometheus"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "thresholds"
},
"custom": {
"align": "auto",
"displayMode": "auto",
"inspect": false
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
}
},
"overrides": [
{
"matcher": {
"id": "byName",
"options": "namespace"
},
"properties": [
{
"id": "custom.width",
"value": 94
}
]
}
]
},
"gridPos": {
"h": 8,
"w": 24,
"x": 0,
"y": 1
},
"id": 20,
"options": {
"footer": {
"fields": "",
"reducer": [
"sum"
],
"show": false
},
"showHeader": true,
"sortBy": []
},
"pluginVersion": "9.0.5",
"targets": [
{
"datasource": {
"type": "prometheus",
"name": "Prometheus"
},
"editorMode": "builder",
"exemplar": false,
"expr": "kube_sedna_io_v1alpha1_FederatedLearningJob_TrainNodesInfo{name=\"$AI_task_name\"}",
"format": "table",
"instant": true,
"legendFormat": "__auto",
"range": false,
"refId": "A"
}
],
"title": "Train Nodes",
"transformations": [
{
"id": "organize",
"options": {
"excludeByName": {
"Time": true,
"Value": true,
"__name__": true,
"app_kubernetes_io_component": true,
"app_kubernetes_io_instance": true,
"app_kubernetes_io_managed_by": true,
"app_kubernetes_io_name": true,
"app_kubernetes_io_part_of": true,
"app_kubernetes_io_version": true,
"datasetName": true,
"datasetUrl": true,
"helm_sh_chart": true,
"instance": true,
"job": true,
"node": true,
"service": true
},
"indexByName": {},
"renameByName": {
"name": "task name"
}
}
}
],
"type": "table"
},
{
"datasource": {
"type": "prometheus",
"name": "Prometheus"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "thresholds"
},
"custom": {
"align": "auto",
"displayMode": "auto",
"inspect": false
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
}
},
"overrides": [
{
"matcher": {
"id": "byName",
"options": "nodeName"
},
"properties": [
{
"id": "custom.width",
"value": 98
}
]
},
{
"matcher": {
"id": "byName",
"options": "url"
},
"properties": [
{
"id": "custom.width",
"value": 99
}
]
}
]
},
"gridPos": {
"h": 8,
"w": 24,
"x": 0,
"y": 9
},
"id": 19,
"options": {
"footer": {
"fields": "",
"reducer": [
"sum"
],
"show": false
},
"frameIndex": 0,
"showHeader": true,
"sortBy": []
},
"pluginVersion": "9.0.5",
"targets": [
{
"datasource": {
"type": "prometheus",
"name": "Prometheus"
},
"editorMode": "builder",
"exemplar": false,
"expr": "kube_sedna_io_v1alpha1_FederatedLearningJob_TrainNodesInfo{name=\"$AI_task_name\"}",
"format": "table",
"instant": true,
"legendFormat": "__auto",
"range": false,
"refId": "A"
},
{
"datasource": {
"type": "prometheus",
"name": "Prometheus"
},
"editorMode": "builder",
"exemplar": false,
"expr": "kube_sedna_io_v1alpha1_Dataset_numberOfSamples",
"format": "table",
"hide": false,
"instant": true,
"legendFormat": "__auto",
"range": false,
"refId": "B"
}
],
"title": "Node Sample Num",
"transformations": [
{
"id": "seriesToColumns",
"options": {
"byField": "datasetName"
}
},
{
"id": "organize",
"options": {
"excludeByName": {
"Time": true,
"Time 1": true,
"Time 2": true,
"Value": true,
"Value #A": true,
"Value #B": false,
"__name__": true,
"__name__ 1": true,
"__name__ 2": true,
"app_kubernetes_io_component": true,
"app_kubernetes_io_component 1": true,
"app_kubernetes_io_component 2": true,
"app_kubernetes_io_instance": true,
"app_kubernetes_io_instance 1": true,
"app_kubernetes_io_instance 2": true,
"app_kubernetes_io_managed_by": true,
"app_kubernetes_io_managed_by 1": true,
"app_kubernetes_io_managed_by 2": true,
"app_kubernetes_io_name": true,
"app_kubernetes_io_name 1": true,
"app_kubernetes_io_name 2": true,
"app_kubernetes_io_part_of": true,
"app_kubernetes_io_part_of 1": true,
"app_kubernetes_io_part_of 2": true,
"app_kubernetes_io_version": true,
"app_kubernetes_io_version 2": true,
"crd_type": true,
"crd_type 1": true,
"crd_type 2": true,
"datasetUrl": false,
"helm_sh_chart": true,
"helm_sh_chart 1": true,
"helm_sh_chart 2": true,
"instance": true,
"instance 1": true,
"instance 2": true,
"job": true,
"job 1": true,
"job 2": true,
"namespace 1": true,
"namespace 2": true,
"node": true,
"node 1": true,
"node 2": true,
"service": true,
"service 1": true,
"service 2": true
},
"indexByName": {
"Time": 3,
"Value": 18,
"__name__": 4,
"app_kubernetes_io_component": 5,
"app_kubernetes_io_instance": 6,
"app_kubernetes_io_managed_by": 7,
"app_kubernetes_io_name": 8,
"app_kubernetes_io_part_of": 9,
"app_kubernetes_io_version": 10,
"crd_type": 11,
"datasetUrl": 12,
"helm_sh_chart": 13,
"instance": 14,
"job": 15,
"name": 1,
"namespace": 0,
"node": 16,
"nodeName": 2,
"service": 17
},
"renameByName": {
"Value #B": "NumberOfSamples",
"name": "task name",
"name 2": "name",
"service": "",
"url": ""
}
}
},
{
"id": "filterByValue",
"options": {
"filters": [
{
"config": {
"id": "isNotNull",
"options": {}
},
"fieldName": "task name"
}
],
"match": "any",
"type": "include"
}
}
],
"type": "table"
}
],
"refresh": "30s",
"schemaVersion": 36,
"style": "dark",
"tags": [],
"templating": {
"list": [
{
"current": {
"selected": false,
"text": "surface-defect-detection",
"value": "surface-defect-detection"
},
"datasource": {
"type": "prometheus",
"name": "Prometheus"
},
"definition": "label_values(kube_sedna_io_v1alpha1_FederatedLearningJob_StageConditionStatus, name)",
"hide": 0,
"includeAll": false,
"label": "AI Task Name",
"multi": false,
"name": "AI_task_name",
"options": [],
"query": {
"query": "label_values(kube_sedna_io_v1alpha1_FederatedLearningJob_StageConditionStatus, name)",
"refId": "StandardVariableQuery"
},
"refresh": 1,
"regex": "",
"skipUrlSync": false,
"sort": 0,
"type": "query"
}
]
},
"time": {
"from": "now-6h",
"to": "now"
},
"timepicker": {},
"timezone": "",
"title": "Algorithm Metrics (Federated Learning)",
"uid": "fzfVlS74z",
"version": 4,
"weekStart": ""
}
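Grafana dashboards such as the one above are plain JSON, so simple inventory checks can be scripted before importing them. A stdlib sketch (the dict below is a trimmed stand-in for the full dashboard file):

```python
import json

# Trimmed stand-in for the dashboard JSON above.
dashboard = json.loads("""
{
  "title": "Algorithm Metrics (Federated Learning)",
  "panels": [
    {"type": "row",   "title": "Algorithm Metrics (Federated Learning)"},
    {"type": "table", "title": "Train Nodes"},
    {"type": "table", "title": "Node Sample Num"}
  ]
}
""")

# List every non-row panel as a quick sanity check.
panel_titles = [p["title"] for p in dashboard["panels"] if p["type"] != "row"]
print(panel_titles)
# → ['Train Nodes', 'Node Sample Num']
```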

View File

@@ -0,0 +1,274 @@
{
"annotations": {
"list": [
{
"builtIn": 1,
"datasource": {
"type": "grafana",
"uid": "-- Grafana --"
},
"enable": true,
"hide": true,
"iconColor": "rgba(0, 211, 255, 1)",
"name": "Annotations & Alerts",
"target": {
"limit": 100,
"matchAny": false,
"tags": [],
"type": "dashboard"
},
"type": "dashboard"
}
]
},
"editable": true,
"fiscalYearStartMonth": 0,
"graphTooltip": 0,
"id": 3,
"links": [],
"liveNow": false,
"panels": [
{
"collapsed": false,
"gridPos": {
"h": 1,
"w": 24,
"x": 0,
"y": 0
},
"id": 33,
"panels": [],
"title": "Algorithm Metrics (Joint Inference)",
"type": "row"
},
{
"datasource": {
"type": "prometheus",
"name": "Prometheus"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "thresholds"
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
}
]
}
},
"overrides": []
},
"gridPos": {
"h": 8,
"w": 8,
"x": 0,
"y": 1
},
"id": 35,
"options": {
"colorMode": "value",
"graphMode": "area",
"justifyMode": "auto",
"orientation": "auto",
"reduceOptions": {
"calcs": [
"lastNotNull"
],
"fields": "",
"values": false
},
"textMode": "auto"
},
"pluginVersion": "9.0.5",
"targets": [
{
"datasource": {
"type": "prometheus",
"name": "Prometheus"
},
"editorMode": "builder",
"expr": "JointInference_InferenceCount{type=\"Edge\"}",
"range": true,
"refId": "A"
}
],
"title": "EdgeInferenceCount",
"type": "stat"
},
{
"datasource": {
"type": "prometheus",
"name": "Prometheus"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "thresholds"
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
}
]
}
},
"overrides": []
},
"gridPos": {
"h": 8,
"w": 8,
"x": 8,
"y": 1
},
"id": 37,
"options": {
"colorMode": "value",
"graphMode": "area",
"justifyMode": "auto",
"orientation": "auto",
"reduceOptions": {
"calcs": [
"lastNotNull"
],
"fields": "",
"values": false
},
"textMode": "auto"
},
"pluginVersion": "9.0.5",
"targets": [
{
"datasource": {
"type": "prometheus",
"name": "Prometheus"
},
"editorMode": "builder",
"expr": "JointInference_InferenceCount{type=\"Cloud\"}",
"range": true,
"refId": "A"
}
],
"title": "CloudInferenceCount",
"type": "stat"
},
{
"datasource": {
"type": "prometheus",
"name": "Prometheus"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "thresholds"
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
}
},
"overrides": []
},
"gridPos": {
"h": 8,
"w": 8,
"x": 16,
"y": 1
},
"id": 36,
"options": {
"colorMode": "value",
"graphMode": "area",
"justifyMode": "auto",
"orientation": "auto",
"reduceOptions": {
"calcs": [
"lastNotNull"
],
"fields": "",
"values": false
},
"textMode": "auto"
},
"pluginVersion": "9.0.5",
"targets": [
{
"datasource": {
"type": "prometheus",
"name": "Prometheus"
},
"editorMode": "builder",
"expr": "JointInference_InferenceCount{type=\"Cloud\"}",
"range": true,
"refId": "A"
}
],
"title": "HardSampleNum",
"type": "stat"
}
],
"refresh": "30s",
"schemaVersion": 36,
"style": "dark",
"tags": [],
"templating": {
"list": [
{
"current": {
"selected": false,
"text": "helmet-detection-inference-example",
"value": "helmet-detection-inference-example"
},
"datasource": {
"type": "prometheus",
"name": "Prometheus"
},
"definition": "label_values(kube_sedna_io_v1alpha1_JointInferenceService_StageConditionStatus, name)",
"hide": 0,
"includeAll": false,
"label": "AI Task Name",
"multi": false,
"name": "AI_task_name",
"options": [],
"query": {
"query": "label_values(kube_sedna_io_v1alpha1_JointInferenceService_StageConditionStatus, name)",
"refId": "StandardVariableQuery"
},
"refresh": 1,
"regex": "",
"skipUrlSync": false,
"sort": 0,
"type": "query"
}
]
},
"time": {
"from": "now-6h",
"to": "now"
},
"timepicker": {},
"timezone": "",
"title": "Algorithm Metrics (Joint Inference)",
"uid": "vTS0QS7Vk",
"version": 3,
"weekStart": ""
}

View File

@@ -0,0 +1,994 @@
{
"annotations": {
"list": [
{
"builtIn": 1,
"datasource": {
"type": "grafana",
"uid": "-- Grafana --"
},
"enable": true,
"hide": true,
"iconColor": "rgba(0, 211, 255, 1)",
"name": "Annotations & Alerts",
"target": {
"limit": 100,
"matchAny": false,
"tags": [],
"type": "dashboard"
},
"type": "dashboard"
}
]
},
"editable": true,
"fiscalYearStartMonth": 0,
"graphTooltip": 0,
"id": 2,
"links": [],
"liveNow": false,
"panels": [
{
"collapsed": false,
"gridPos": {
"h": 1,
"w": 24,
"x": 0,
"y": 0
},
"id": 8,
"panels": [],
"title": "Common System Metrics",
"type": "row"
},
{
"datasource": {
"type": "prometheus",
"name": "Prometheus"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "thresholds"
},
"custom": {
"align": "auto",
"displayMode": "color-background",
"inspect": false
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
}
]
}
},
"overrides": []
},
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 1
},
"id": 4,
"options": {
"footer": {
"fields": "",
"reducer": [
"sum"
],
"show": false
},
"showHeader": true
},
"pluginVersion": "9.0.5",
"targets": [
{
"datasource": {
"type": "prometheus",
"name": "Prometheus"
},
"editorMode": "builder",
"exemplar": false,
"expr": "kube_pod_container_status_running",
"format": "table",
"instant": true,
"legendFormat": "__auto",
"range": false,
"refId": "A"
}
],
"title": "Containers Running",
"transformations": [
{
"id": "filterByValue",
"options": {
"filters": [
{
"config": {
"id": "notEqual",
"options": {
"value": 0
}
},
"fieldName": "Value"
}
],
"match": "any",
"type": "include"
}
},
{
"id": "organize",
"options": {
"excludeByName": {
"Time": true,
"Value": true,
"__name__": true,
"app_kubernetes_io_component": true,
"app_kubernetes_io_instance": true,
"app_kubernetes_io_managed_by": true,
"app_kubernetes_io_name": true,
"app_kubernetes_io_part_of": true,
"app_kubernetes_io_version": true,
"helm_sh_chart": true,
"instance": true,
"job": true,
"node": true,
"service": true,
"uid": true
},
"indexByName": {},
"renameByName": {}
}
}
],
"type": "table"
},
{
"datasource": {
"type": "prometheus",
"name": "Prometheus"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "thresholds"
},
"custom": {
"align": "auto",
"displayMode": "color-background",
"inspect": false
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "light-yellow",
"value": null
}
]
}
},
"overrides": []
},
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 1
},
"id": 6,
"options": {
"footer": {
"fields": "",
"reducer": [
"sum"
],
"show": false
},
"showHeader": true
},
"pluginVersion": "9.0.5",
"targets": [
{
"datasource": {
"type": "prometheus",
"name": "Prometheus"
},
"editorMode": "builder",
"exemplar": false,
"expr": "kube_pod_container_status_waiting",
"format": "table",
"instant": true,
"legendFormat": "__auto",
"range": false,
"refId": "A"
}
],
"title": "Containers Waiting",
"transformations": [
{
"id": "filterByValue",
"options": {
"filters": [
{
"config": {
"id": "notEqual",
"options": {
"value": 0
}
},
"fieldName": "Value"
}
],
"match": "any",
"type": "include"
}
},
{
"id": "organize",
"options": {
"excludeByName": {
"Time": true,
"Value": true,
"__name__": true,
"app_kubernetes_io_component": true,
"app_kubernetes_io_instance": true,
"app_kubernetes_io_managed_by": true,
"app_kubernetes_io_name": true,
"app_kubernetes_io_part_of": true,
"app_kubernetes_io_version": true,
"helm_sh_chart": true,
"instance": true,
"job": true,
"node": true,
"service": true,
"uid": true
},
"indexByName": {},
"renameByName": {}
}
}
],
"type": "table"
},
{
"datasource": {
"type": "prometheus",
"name": "Prometheus"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "thresholds"
},
"custom": {
"align": "auto",
"displayMode": "auto",
"inspect": false
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
}
},
"overrides": [
{
"matcher": {
"id": "byName",
"options": "pod"
},
"properties": [
{
"id": "custom.width",
"value": 350
}
]
}
]
},
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 9
},
"id": 2,
"options": {
"footer": {
"fields": "",
"reducer": [
"sum"
],
"show": false
},
"showHeader": true,
"sortBy": []
},
"pluginVersion": "9.0.5",
"targets": [
{
"datasource": {
"type": "prometheus",
"name": "Prometheus"
},
"editorMode": "builder",
"exemplar": false,
"expr": "kube_pod_status_phase",
"format": "table",
"instant": true,
"legendFormat": "__auto",
"range": false,
"refId": "A"
}
],
"title": "Pod Status",
"transformations": [
{
"id": "filterByValue",
"options": {
"filters": [
{
"config": {
"id": "notEqual",
"options": {
"value": 0
}
},
"fieldName": "Value"
}
],
"match": "any",
"type": "include"
}
},
{
"id": "organize",
"options": {
"excludeByName": {
"Time": true,
"Value": true,
"__name__": true,
"app_kubernetes_io_component": true,
"app_kubernetes_io_instance": true,
"app_kubernetes_io_managed_by": true,
"app_kubernetes_io_name": true,
"app_kubernetes_io_part_of": true,
"app_kubernetes_io_version": true,
"helm_sh_chart": true,
"instance": true,
"job": true,
"namespace": false,
"node": true,
"service": true,
"uid": true
},
"indexByName": {
"Time": 0,
"Value": 17,
"__name__": 1,
"app_kubernetes_io_component": 2,
"app_kubernetes_io_instance": 3,
"app_kubernetes_io_managed_by": 4,
"app_kubernetes_io_name": 5,
"app_kubernetes_io_part_of": 6,
"app_kubernetes_io_version": 7,
"helm_sh_chart": 8,
"instance": 9,
"job": 10,
"namespace": 12,
"node": 13,
"phase": 14,
"pod": 11,
"service": 15,
"uid": 16
},
"renameByName": {}
}
}
],
"type": "table"
},
{
"datasource": {
"type": "prometheus",
"name": "Prometheus"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "thresholds"
},
"custom": {
"align": "auto",
"displayMode": "color-background",
"inspect": false
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "light-red",
"value": null
}
]
}
},
"overrides": []
},
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 9
},
"id": 5,
"options": {
"footer": {
"fields": "",
"reducer": [
"sum"
],
"show": false
},
"showHeader": true
},
"pluginVersion": "9.0.5",
"targets": [
{
"datasource": {
"type": "prometheus",
"name": "Prometheus"
},
"editorMode": "builder",
"exemplar": false,
"expr": "kube_pod_container_status_terminated",
"format": "table",
"instant": true,
"legendFormat": "",
"range": false,
"refId": "A"
}
],
"title": "Containers Terminated",
"transformations": [
{
"id": "filterByValue",
"options": {
"filters": [
{
"config": {
"id": "notEqual",
"options": {
"value": 0
}
},
"fieldName": "Value"
}
],
"match": "any",
"type": "include"
}
},
{
"id": "organize",
"options": {
"excludeByName": {
"Time": true,
"Value": true,
"__name__": true,
"app_kubernetes_io_component": true,
"app_kubernetes_io_instance": true,
"app_kubernetes_io_managed_by": true,
"app_kubernetes_io_name": true,
"app_kubernetes_io_part_of": true,
"app_kubernetes_io_version": true,
"helm_sh_chart": true,
"instance": true,
"job": true,
"node": true,
"service": true,
"uid": true
},
"indexByName": {},
"renameByName": {}
}
}
],
"type": "table"
},
{
"datasource": {
"type": "prometheus",
"name": "Prometheus"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "thresholds"
},
"custom": {
"align": "auto",
"displayMode": "auto",
"inspect": false
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
}
},
"overrides": [
{
"matcher": {
"id": "byName",
"options": "crd_type"
},
"properties": [
{
"id": "custom.width",
"value": 185
}
]
}
]
},
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 17
},
"id": 10,
"options": {
"footer": {
"fields": "",
"reducer": [
"sum"
],
"show": false
},
"showHeader": true,
"sortBy": []
},
"pluginVersion": "9.0.5",
"targets": [
{
"datasource": {
"type": "prometheus",
"name": "Prometheus"
},
"editorMode": "builder",
"exemplar": false,
"expr": "kube_sedna_io_v1alpha1_JointInferenceService_StageConditionStatus",
"format": "table",
"instant": true,
"range": false,
"refId": "A"
},
{
"datasource": {
"type": "prometheus",
"name": "Prometheus"
},
"editorMode": "builder",
"exemplar": false,
"expr": "kube_sedna_io_v1alpha1_IncrementalLearningJob_StageConditionStatus",
"format": "table",
"hide": false,
"instant": true,
"legendFormat": "",
"range": false,
"refId": "B"
},
{
"datasource": {
"type": "prometheus",
"name": "Prometheus"
},
"editorMode": "builder",
"exemplar": false,
"expr": "kube_sedna_io_v1alpha1_FederatedLearningJob_StageConditionStatus",
"format": "table",
"hide": false,
"instant": true,
"legendFormat": "",
"range": false,
"refId": "C"
}
],
"title": "StageConditionStatus",
"transformations": [
{
"id": "merge",
"options": {}
},
{
"id": "organize",
"options": {
"excludeByName": {
"Time": true,
"__name__": true,
"app_kubernetes_io_component": true,
"app_kubernetes_io_instance": true,
"app_kubernetes_io_managed_by": true,
"app_kubernetes_io_name": true,
"app_kubernetes_io_part_of": true,
"app_kubernetes_io_version": true,
"helm_sh_chart": true,
"instance": true,
"job": true,
"node": true,
"service": true
},
"indexByName": {},
"renameByName": {
"__name__ 1": "",
"crd_type 1": "",
"crd_type 2": "",
"helm_sh_chart 2": "",
"instance 1": ""
}
}
},
{
"id": "filterByValue",
"options": {
"filters": [
{
"config": {
"id": "greater",
"options": {
"value": 0
}
},
"fieldName": "Value #A"
},
{
"config": {
"id": "greater",
"options": {
"value": 0
}
},
"fieldName": "Value #B"
},
{
"config": {
"id": "greater",
"options": {
"value": 0
}
},
"fieldName": "Value #C"
}
],
"match": "any",
"type": "include"
}
},
{
"id": "organize",
"options": {
"excludeByName": {
"Value #A": true,
"Value #B": true,
"Value #C": true
},
"indexByName": {},
"renameByName": {}
}
}
],
"type": "table"
},
{
"datasource": {
"type": "prometheus",
"name": "Prometheus"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "thresholds"
},
"custom": {
"align": "auto",
"displayMode": "auto",
"inspect": false
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
}
},
"overrides": []
},
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 17
},
"id": 12,
"options": {
"footer": {
"fields": "",
"reducer": [
"sum"
],
"show": false
},
"showHeader": true
},
"pluginVersion": "9.0.5",
"targets": [
{
"datasource": {
"type": "prometheus",
"name": "Prometheus"
},
"editorMode": "builder",
"exemplar": false,
"expr": "kube_sedna_io_v1alpha1_JointInferenceService_StartTime",
"format": "table",
"instant": true,
"range": false,
"refId": "A"
},
{
"datasource": {
"type": "prometheus",
"name": "Prometheus"
},
"editorMode": "builder",
"exemplar": false,
"expr": "kube_sedna_io_v1alpha1_IncrementalLearningJob_StartTime",
"format": "table",
"hide": false,
"instant": true,
"legendFormat": "__auto",
"range": false,
"refId": "B"
},
{
"datasource": {
"type": "prometheus",
"name": "Prometheus"
},
"editorMode": "builder",
"exemplar": false,
"expr": "kube_sedna_io_v1alpha1_FederatedLearningJob_StartTime",
"format": "table",
"hide": false,
"instant": true,
"legendFormat": "__auto",
"range": false,
"refId": "C"
}
],
"title": "Start Time of AI Tasks",
"transformations": [
{
"id": "merge",
"options": {}
},
{
"id": "organize",
"options": {
"excludeByName": {
"Time": true,
"Value #A": true,
"Value #B": true,
"Value #C": true,
"__name__": true,
"app_kubernetes_io_component": true,
"app_kubernetes_io_instance": true,
"app_kubernetes_io_managed_by": true,
"app_kubernetes_io_name": true,
"app_kubernetes_io_part_of": true,
"app_kubernetes_io_version": true,
"helm_sh_chart": true,
"instance": true,
"job": true,
"node": true,
"service": true
},
"indexByName": {},
"renameByName": {
"Value #A": ""
}
}
}
],
"type": "table"
},
{
"datasource": {
"type": "prometheus",
"name": "Prometheus"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "thresholds"
},
"custom": {
"align": "auto",
"displayMode": "auto",
"inspect": false
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
}
},
"overrides": [
{
"matcher": {
"id": "byName",
"options": "crd_type"
},
"properties": [
{
"id": "custom.width",
"value": 106
}
]
},
{
"matcher": {
"id": "byName",
"options": "name"
},
"properties": [
{
"id": "custom.width",
"value": 297
}
]
},
{
"matcher": {
"id": "byName",
"options": "namespace"
},
"properties": [
{
"id": "custom.width",
"value": 238
}
]
}
]
},
"gridPos": {
"h": 8,
"w": 24,
"x": 0,
"y": 25
},
"id": 22,
"options": {
"footer": {
"fields": "",
"reducer": [
"sum"
],
"show": false
},
"showHeader": true,
"sortBy": []
},
"pluginVersion": "9.0.5",
"targets": [
{
"datasource": {
"type": "prometheus",
"name": "Prometheus"
},
"editorMode": "builder",
"exemplar": false,
"expr": "kube_sedna_io_v1alpha1_Dataset_numberOfSamples",
"format": "table",
"instant": true,
"legendFormat": "__auto",
"range": false,
"refId": "A"
}
],
"title": "Dataset",
"transformations": [
{
"id": "organize",
"options": {
"excludeByName": {
"Time": true,
"__name__": true,
"app_kubernetes_io_component": true,
"app_kubernetes_io_instance": true,
"app_kubernetes_io_managed_by": true,
"app_kubernetes_io_name": true,
"app_kubernetes_io_part_of": true,
"app_kubernetes_io_version": true,
"helm_sh_chart": true,
"instance": true,
"job": true,
"node": true,
"service": true
},
"indexByName": {},
"renameByName": {
"Value": "NumberOfSamples"
}
}
}
],
"type": "table"
}
],
"refresh": "30s",
"schemaVersion": 36,
"style": "dark",
"tags": [],
"templating": {
"list": []
},
"time": {
"from": "now-6h",
"to": "now"
},
"timepicker": {},
"timezone": "",
"title": "Common System Metrics",
"uid": "Vl0e9NM4z",
"version": 15,
"weekStart": ""
}
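An exported dashboard like the one above can be pushed to another Grafana instance via the HTTP API (`POST /api/dashboards/db`), which expects the export wrapped in a `{"dashboard": ..., "overwrite": ...}` envelope with the source instance's internal `id` removed. A minimal sketch of building that request body (the endpoint shape and `overwrite` field are standard Grafana API; the sample field values are taken from this dashboard):

```python
import json


def build_import_payload(dashboard: dict, overwrite: bool = True) -> str:
    """Wrap an exported dashboard for Grafana's POST /api/dashboards/db."""
    dashboard = dict(dashboard)
    dashboard.pop("id", None)  # drop the exporting instance's internal id
    return json.dumps({"dashboard": dashboard, "overwrite": overwrite})


payload = build_import_payload(
    {"id": 7, "uid": "Vl0e9NM4z", "title": "Common System Metrics"}
)
print(payload)
```

The `uid` is kept so re-imports update the same dashboard instead of creating a duplicate.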


@ -0,0 +1,935 @@
{
"annotations": {
"list": [
{
"builtIn": 1,
"datasource": "-- Grafana --",
"enable": true,
"hide": true,
"iconColor": "rgba(0, 211, 255, 1)",
"name": "Annotations & Alerts",
"target": {
"limit": 100,
"matchAny": false,
"tags": [],
"type": "dashboard"
},
"type": "dashboard"
}
]
},
"editable": true,
"fiscalYearStartMonth": 0,
"graphTooltip": 0,
"id": 1,
"links": [],
"liveNow": false,
"panels": [
{
"datasource": {
"type": "prometheus",
"uid": "PBFA97CFB590B2093"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "thresholds"
},
"mappings": [
{
"options": {
"0": {
"color": "red",
"index": 0,
"text": "offline"
},
"1": {
"color": "green",
"index": 1,
"text": "current"
}
},
"type": "value"
}
],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
}
},
"overrides": []
},
"gridPos": {
"h": 5,
"w": 12,
"x": 0,
"y": 0
},
"id": 8,
"options": {
"orientation": "auto",
"reduceOptions": {
"calcs": [
"lastNotNull"
],
"fields": "",
"values": false
},
"showThresholdLabels": false,
"showThresholdMarkers": true,
"text": {}
},
"pluginVersion": "8.3.5",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "PBFA97CFB590B2093"
},
"exemplar": true,
"expr": "task_status",
"interval": "",
"legendFormat": "task-{{task_id}}",
"refId": "A"
}
],
"title": "Model deploy status",
"type": "gauge"
},
{
"datasource": {
"type": "prometheus",
"uid": "PBFA97CFB590B2093"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "thresholds"
},
"mappings": [
{
"options": {
"0": {
"color": "green",
"index": 0,
"text": "Waiting"
},
"1": {
"color": "dark-purple",
"index": 1,
"text": "Deployed"
}
},
"type": "value"
}
],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "orange",
"value": null
},
{
"color": "red",
"value": 80
}
]
}
},
"overrides": []
},
"gridPos": {
"h": 5,
"w": 12,
"x": 12,
"y": 0
},
"id": 14,
"options": {
"orientation": "auto",
"reduceOptions": {
"calcs": [
"lastNotNull"
],
"fields": "",
"values": false
},
"showThresholdLabels": false,
"showThresholdMarkers": true
},
"pluginVersion": "8.3.5",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "PBFA97CFB590B2093"
},
"exemplar": true,
"expr": "deploy_status",
"interval": "",
"legendFormat": "{{name}}",
"refId": "A"
}
],
"title": "Deploy status",
"type": "gauge"
},
{
"datasource": {
"type": "prometheus",
"uid": "PBFA97CFB590B2093"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "line",
"fillOpacity": 0,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "auto",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
}
},
"overrides": []
},
"gridPos": {
"h": 7,
"w": 7,
"x": 0,
"y": 5
},
"id": 4,
"options": {
"legend": {
"calcs": [],
"displayMode": "list",
"placement": "bottom"
},
"tooltip": {
"mode": "single"
}
},
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "PBFA97CFB590B2093"
},
"exemplar": true,
"expr": "task_num",
"interval": "",
"legendFormat": "task-{{id}}-{{attr}}",
"refId": "A"
}
],
"title": "Task num",
"type": "timeseries"
},
{
"datasource": {
"type": "prometheus",
"uid": "PBFA97CFB590B2093"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "line",
"fillOpacity": 0,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "auto",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
}
},
"overrides": []
},
"gridPos": {
"h": 7,
"w": 8,
"x": 7,
"y": 5
},
"id": 2,
"options": {
"legend": {
"calcs": [],
"displayMode": "list",
"placement": "bottom"
},
"tooltip": {
"mode": "single"
}
},
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "PBFA97CFB590B2093"
},
"exemplar": true,
"expr": "know_task_num",
"interval": "",
"legendFormat": "total",
"refId": "A"
}
],
"title": "Deployed model num",
"type": "timeseries"
},
{
"datasource": {
"type": "prometheus",
"uid": "PBFA97CFB590B2093"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "line",
"fillOpacity": 0,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "auto",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
}
},
"overrides": []
},
"gridPos": {
"h": 7,
"w": 9,
"x": 15,
"y": 5
},
"id": 6,
"options": {
"legend": {
"calcs": [],
"displayMode": "list",
"placement": "bottom"
},
"tooltip": {
"mode": "single"
}
},
"pluginVersion": "8.3.5",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "PBFA97CFB590B2093"
},
"exemplar": true,
"expr": "task_sample_num",
"interval": "",
"legendFormat": "task-{{id}}",
"refId": "A"
}
],
"title": "Task sample num",
"type": "timeseries"
},
{
"datasource": {
"type": "prometheus",
"uid": "PBFA97CFB590B2093"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "line",
"fillOpacity": 0,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "auto",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
}
},
"overrides": []
},
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 12
},
"id": 10,
"options": {
"legend": {
"calcs": [],
"displayMode": "list",
"placement": "bottom"
},
"tooltip": {
"mode": "single"
}
},
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "PBFA97CFB590B2093"
},
"exemplar": true,
"expr": "task_relation_ship",
"interval": "",
"legendFormat": "grp-{{grp_id}}-task-{{task_id}}",
"refId": "A"
}
],
"title": "Task relationship",
"type": "timeseries"
},
{
"datasource": {
"type": "prometheus",
"uid": "PBFA97CFB590B2093"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "thresholds"
},
"custom": {
"align": "center",
"displayMode": "color-text"
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
}
},
"overrides": [
{
"matcher": {
"id": "byName",
"options": "task_id"
},
"properties": [
{
"id": "custom.width",
"value": 69
}
]
}
]
},
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 12
},
"id": 12,
"options": {
"footer": {
"fields": "",
"reducer": [
"sum"
],
"show": false
},
"showHeader": true,
"sortBy": []
},
"pluginVersion": "8.3.5",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "PBFA97CFB590B2093"
},
"exemplar": false,
"expr": "task_status",
"format": "table",
"instant": true,
"interval": "",
"legendFormat": "",
"refId": "A"
}
],
"title": "Model URL",
"transformations": [
{
"id": "organize",
"options": {
"excludeByName": {
"Time": true,
"Value": true,
"__name__": true,
"instance": true,
"job": true,
"namespace": true,
"pod": true,
"pod_template_hash": true,
"sedna": true
},
"indexByName": {
"Time": 1,
"Value": 10,
"__name__": 2,
"instance": 3,
"job": 4,
"model_url": 5,
"namespace": 6,
"pod": 7,
"pod_template_hash": 8,
"sedna": 9,
"task_id": 0
},
"renameByName": {}
}
}
],
"type": "table"
},
{
"datasource": {
"type": "prometheus",
"uid": "PBFA97CFB590B2093"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "line",
"fillOpacity": 0,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "auto",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
}
},
"overrides": []
},
"gridPos": {
"h": 8,
"w": 8,
"x": 0,
"y": 20
},
"id": 16,
"options": {
"legend": {
"calcs": [],
"displayMode": "list",
"placement": "bottom"
},
"tooltip": {
"mode": "single"
}
},
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "PBFA97CFB590B2093"
},
"exemplar": true,
"expr": "Precision{}",
"interval": "",
"legendFormat": "",
"refId": "A"
}
],
"title": "Precision",
"type": "timeseries"
},
{
"datasource": {
"type": "prometheus",
"uid": "PBFA97CFB590B2093"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "line",
"fillOpacity": 0,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "auto",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
}
},
"overrides": []
},
"gridPos": {
"h": 8,
"w": 8,
"x": 8,
"y": 20
},
"id": 17,
"options": {
"legend": {
"calcs": [],
"displayMode": "list",
"placement": "bottom"
},
"tooltip": {
"mode": "single"
}
},
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "PBFA97CFB590B2093"
},
"exemplar": true,
"expr": "Recall{}",
"interval": "",
"legendFormat": "",
"refId": "A"
}
],
"title": "Recall",
"type": "timeseries"
},
{
"datasource": {
"type": "prometheus",
"uid": "PBFA97CFB590B2093"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "line",
"fillOpacity": 0,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "auto",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
}
},
"overrides": []
},
"gridPos": {
"h": 8,
"w": 8,
"x": 16,
"y": 20
},
"id": 18,
"options": {
"legend": {
"calcs": [],
"displayMode": "list",
"placement": "bottom"
},
"tooltip": {
"mode": "single"
}
},
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "PBFA97CFB590B2093"
},
"exemplar": true,
"expr": "files_with_suffix_num{}",
"interval": "",
"legendFormat": "",
"refId": "A"
}
],
"title": "File nums",
"type": "timeseries"
}
],
"refresh": false,
"schemaVersion": 34,
"style": "dark",
"tags": [],
"templating": {
"list": []
},
"time": {
"from": "now-30m",
"to": "now"
},
"timepicker": {},
"timezone": "",
"title": "lifelonglearning-dashboard",
"uid": "ONnc8eGVk",
"version": 3,
"weekStart": ""
}
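Grafana expects panel `id` values to be unique within a dashboard, which is easy to break when copying panel blocks by hand in JSON like the files above. A quick sanity check before importing an edited export might look like this (the panel ids in the example are the ones from the lifelonglearning dashboard):

```python
def duplicate_panel_ids(dashboard: dict) -> list:
    """Return panel ids that appear more than once; Grafana expects them unique."""
    seen, dupes = set(), []
    for panel in dashboard.get("panels", []):
        pid = panel.get("id")
        if pid in seen and pid not in dupes:
            dupes.append(pid)
        seen.add(pid)
    return dupes


# ids taken from the lifelonglearning dashboard above
print(duplicate_panel_ids(
    {"panels": [{"id": i} for i in (8, 14, 4, 2, 6, 10, 12, 16, 17, 18)]}
))  # → []
```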


@ -0,0 +1,955 @@
rbac:
create: true
## Use an existing ClusterRole/Role (depending on rbac.namespaced false/true)
# useExistingRole: name-of-some-(cluster)role
pspEnabled: true
pspUseAppArmor: true
namespaced: false
extraRoleRules: []
# - apiGroups: []
# resources: []
# verbs: []
extraClusterRoleRules: []
# - apiGroups: []
# resources: []
# verbs: []
serviceAccount:
create: true
name:
nameTest:
## Service account annotations. Can be templated.
# annotations:
# eks.amazonaws.com/role-arn: arn:aws:iam::123456789000:role/iam-role-name-here
autoMount: true
replicas: 1
## Create a headless service for the deployment
headlessService: false
## Create HorizontalPodAutoscaler object for deployment type
#
autoscaling:
enabled: false
# minReplicas: 1
# maxReplicas: 10
# metrics:
# - type: Resource
# resource:
# name: cpu
# targetAverageUtilization: 60
# - type: Resource
# resource:
# name: memory
# targetAverageUtilization: 60
## See `kubectl explain poddisruptionbudget.spec` for more
## ref: https://kubernetes.io/docs/tasks/run-application/configure-pdb/
podDisruptionBudget: {}
# minAvailable: 1
# maxUnavailable: 1
## See `kubectl explain deployment.spec.strategy` for more
## ref: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy
deploymentStrategy:
type: RollingUpdate
readinessProbe:
httpGet:
path: /api/health
port: 3000
livenessProbe:
httpGet:
path: /api/health
port: 3000
initialDelaySeconds: 60
timeoutSeconds: 30
failureThreshold: 10
## Use an alternate scheduler, e.g. "stork".
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
# schedulerName: "default-scheduler"
image:
repository: grafana/grafana
# Overrides the Grafana image tag whose default is the chart appVersion
tag: ""
sha: ""
pullPolicy: IfNotPresent
## Optionally specify an array of imagePullSecrets.
## Secrets must be manually created in the namespace.
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
## Can be templated.
##
# pullSecrets:
# - myRegistryKeySecretName
testFramework:
enabled: true
image: "bats/bats"
tag: "v1.4.1"
imagePullPolicy: IfNotPresent
securityContext: {}
securityContext:
runAsUser: 472
runAsGroup: 472
fsGroup: 472
containerSecurityContext:
{}
# Enable creating the grafana configmap
createConfigmap: true
# Extra configmaps to mount in grafana pods
# Values are templated.
extraConfigmapMounts: []
# - name: certs-configmap
# mountPath: /etc/grafana/ssl/
# subPath: certificates.crt # (optional)
# configMap: certs-configmap
# readOnly: true
extraEmptyDirMounts: []
# - name: provisioning-notifiers
# mountPath: /etc/grafana/provisioning/notifiers
# Apply extra labels to common labels.
extraLabels: {}
## Assign a PriorityClassName to pods if set
# priorityClassName:
downloadDashboardsImage:
repository: curlimages/curl
tag: 7.73.0
sha: ""
pullPolicy: IfNotPresent
downloadDashboards:
env: {}
envFromSecret: ""
resources: {}
## Pod Annotations
# podAnnotations: {}
## Pod Labels
# podLabels: {}
podPortName: grafana
## Deployment annotations
# annotations: {}
## Expose the grafana service to be accessed from outside the cluster (LoadBalancer service).
## or access it from within the cluster (ClusterIP service). Set the service type and the port to serve it.
## ref: http://kubernetes.io/docs/user-guide/services/
##
service:
enabled: true
type: NodePort
port: 80
targetPort: 3000
nodePort: 31000
# targetPort: 4181 To be used with a proxy extraContainer
annotations: {}
labels: {}
portName: service
# Adds the appProtocol field to the service. This allows working with Istio protocol selection. Ex: "http" or "tcp"
appProtocol: ""
serviceMonitor:
## If true, a ServiceMonitor CRD is created for a prometheus operator
## https://github.com/coreos/prometheus-operator
##
enabled: false
path: /metrics
# namespace: monitoring (defaults to use the namespace this chart is deployed to)
labels: {}
interval: 1m
scheme: http
tlsConfig: {}
scrapeTimeout: 30s
relabelings: []
extraExposePorts: []
# - name: keycloak
# port: 8080
# targetPort: 8080
# type: ClusterIP
# overrides pod.spec.hostAliases in the grafana deployment's pods
hostAliases: []
# - ip: "1.2.3.4"
# hostnames:
# - "my.host.com"
ingress:
enabled: false
# For Kubernetes >= 1.18 you should specify the ingress-controller via the field ingressClassName
# See https://kubernetes.io/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/#specifying-the-class-of-an-ingress
# ingressClassName: nginx
# Values can be templated
annotations: {}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
labels: {}
path: /
# pathType is only for k8s >= 1.18
pathType: Prefix
hosts:
- chart-example.local
## Extra paths to prepend to every host configuration. This is useful when working with annotation based services.
extraPaths: []
# - path: /*
# backend:
# serviceName: ssl-redirect
# servicePort: use-annotation
## Or for k8s > 1.19
# - path: /*
# pathType: Prefix
# backend:
# service:
# name: ssl-redirect
# port:
# name: use-annotation
tls: []
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
resources: {}
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
## Node labels for pod assignment
## ref: https://kubernetes.io/docs/user-guide/node-selection/
#
nodeSelector: {labelName: master}
## Tolerations for pod assignment
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []
## Affinity for pod assignment (evaluated as template)
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
##
affinity: {}
## Additional init containers (evaluated as template)
## ref: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
##
extraInitContainers: []
## Specify additional containers in extraContainers. This is meant to allow adding an authentication proxy to a grafana pod
extraContainers: ""
# extraContainers: |
# - name: proxy
# image: quay.io/gambol99/keycloak-proxy:latest
# args:
# - -provider=github
# - -client-id=
# - -client-secret=
# - -github-org=<ORG_NAME>
# - -email-domain=*
# - -cookie-secret=
# - -http-address=http://0.0.0.0:4181
# - -upstream-url=http://127.0.0.1:3000
# ports:
# - name: proxy-web
# containerPort: 4181
## Volumes that can be used in init containers that will not be mounted to deployment pods
extraContainerVolumes: []
# - name: volume-from-secret
# secret:
# secretName: secret-to-mount
# - name: empty-dir-volume
# emptyDir: {}
## Enable persistence using Persistent Volume Claims
## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
##
persistence:
type: pvc
enabled: false
# storageClassName: default
accessModes:
- ReadWriteOnce
size: 10Gi
# annotations: {}
finalizers:
- kubernetes.io/pvc-protection
# selectorLabels: {}
## Sub-directory of the PV to mount. Can be templated.
# subPath: ""
## Name of an existing PVC. Can be templated.
# existingClaim:
## If persistence is not enabled, this allows mounting the
## local storage in-memory to improve performance
##
inMemory:
enabled: false
## The maximum usage on memory medium EmptyDir would be
## the minimum value between the SizeLimit specified
## here and the sum of memory limits of all containers in a pod
##
# sizeLimit: 300Mi
initChownData:
## If false, data ownership will not be reset at startup
## This allows the grafana server to be run with an arbitrary user
##
enabled: true
## initChownData container image
##
image:
repository: busybox
tag: "1.31.1"
sha: ""
pullPolicy: IfNotPresent
## initChownData resource requests and limits
## Ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources: {}
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
# Administrator credentials when not using an existing secret (see below)
adminUser: admin
# adminPassword: strongpassword
# Use an existing secret for the admin user.
admin:
## Name of the secret. Can be templated.
existingSecret: ""
userKey: admin-user
passwordKey: admin-password
## Define command to be executed at startup by grafana container
## Needed if using `vault-env` to manage secrets (ref: https://banzaicloud.com/blog/inject-secrets-into-pods-vault/)
## Default is "run.sh" as defined in grafana's Dockerfile
# command:
# - "sh"
# - "/run.sh"
## Use an alternate scheduler, e.g. "stork".
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
# schedulerName:
## Extra environment variables that will be passed onto deployment pods
##
## to provide grafana with access to CloudWatch on AWS EKS:
## 1. create an iam role of type "Web identity" with provider oidc.eks.* (note the provider for later)
## 2. edit the "Trust relationships" of the role, add a line inside the StringEquals clause using the
## same oidc eks provider as noted before (same as the existing line)
## also, replace NAMESPACE and prometheus-operator-grafana with the service account namespace and name
##
## "oidc.eks.us-east-1.amazonaws.com/id/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:sub": "system:serviceaccount:NAMESPACE:prometheus-operator-grafana",
##
## 3. attach a policy to the role; you can use a built-in policy called CloudWatchReadOnlyAccess
## 4. use the following env: (replace 123456789000 and iam-role-name-here with your aws account number and role name)
##
## env:
## AWS_ROLE_ARN: arn:aws:iam::123456789000:role/iam-role-name-here
## AWS_WEB_IDENTITY_TOKEN_FILE: /var/run/secrets/eks.amazonaws.com/serviceaccount/token
## AWS_REGION: us-east-1
##
## 5. uncomment the EKS section in extraSecretMounts: below
## 6. uncomment the annotation section in the serviceAccount: above
## make sure to replace arn:aws:iam::123456789000:role/iam-role-name-here with your role arn
env: {}
## "valueFrom" environment variable references that will be added to deployment pods. Name is templated.
## ref: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#envvarsource-v1-core
## Renders in container spec as:
## env:
## ...
## - name: <key>
## valueFrom:
## <value rendered as YAML>
envValueFrom: {}
# ENV_NAME:
# configMapKeyRef:
# name: configmap-name
# key: value_key
## The name of a secret in the same kubernetes namespace which contain values to be added to the environment
## This can be useful for auth tokens, etc. Value is templated.
envFromSecret: ""
## Sensitive environment variables that will be rendered as a new secret object
## This can be useful for auth tokens, etc
envRenderSecret: {}
## The names of secrets in the same kubernetes namespace which contain values to be added to the environment
## Each entry should contain a name key, and can optionally specify whether the secret must be defined with an optional key.
## Name is templated.
envFromSecrets: []
## - name: secret-name
## optional: true
## The names of configmaps in the same kubernetes namespace which contain values to be added to the environment
## Each entry should contain a name key, and can optionally specify whether the configmap must be defined with an optional key.
## Name is templated.
## ref: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/#configmapenvsource-v1-core
envFromConfigMaps: []
## - name: configmap-name
## optional: true
# Inject Kubernetes services as environment variables.
# See https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/#environment-variables
enableServiceLinks: true
## Additional grafana server secret mounts
# Defines additional mounts with secrets. Secrets must be manually created in the namespace.
extraSecretMounts: []
# - name: secret-files
# mountPath: /etc/secrets
# secretName: grafana-secret-files
# readOnly: true
# subPath: ""
#
# for AWS EKS (cloudwatch) use the following (see also instruction in env: above)
# - name: aws-iam-token
# mountPath: /var/run/secrets/eks.amazonaws.com/serviceaccount
# readOnly: true
# projected:
# defaultMode: 420
# sources:
# - serviceAccountToken:
# audience: sts.amazonaws.com
# expirationSeconds: 86400
# path: token
#
# for CSI e.g. Azure Key Vault use the following
# - name: secrets-store-inline
# mountPath: /run/secrets
# readOnly: true
# csi:
# driver: secrets-store.csi.k8s.io
# readOnly: true
# volumeAttributes:
# secretProviderClass: "akv-grafana-spc"
# nodePublishSecretRef: # Only required when using service principal mode
# name: grafana-akv-creds # Only required when using service principal mode
## Additional grafana server volume mounts
# Defines additional volume mounts.
extraVolumeMounts: []
# - name: extra-volume-0
# mountPath: /mnt/volume0
# readOnly: true
# existingClaim: volume-claim
# - name: extra-volume-1
# mountPath: /mnt/volume1
# readOnly: true
# hostPath: /usr/shared/
## Container Lifecycle Hooks. Execute a specific bash command or make an HTTP request
lifecycleHooks: {}
# postStart:
# exec:
# command: []
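# For example (a sketch; assumes /bin/sh exists in the Grafana image):
# lifecycleHooks:
#   postStart:
#     exec:
#       command: ["/bin/sh", "-c", "echo started > /tmp/poststart.log"]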
## Pass the plugins you want installed as a list.
##
plugins: []
# - digrich-bubblechart-panel
# - grafana-clock-panel
## Configure grafana datasources
## ref: http://docs.grafana.org/administration/provisioning/#datasources
##
datasources:
datasources.yaml:
apiVersion: 1
datasources:
- name: Prometheus
type: prometheus
url: http://prometheus-server.default:80
access: proxy
isDefault: true
- name: Loki
type: loki
url: http://loki.default:3100
access: proxy
isDefault: false
# - name: CloudWatch
# type: cloudwatch
# access: proxy
# uid: cloudwatch
# editable: false
# jsonData:
# authType: default
# defaultRegion: us-east-1
## Configure notifiers
## ref: http://docs.grafana.org/administration/provisioning/#alert-notification-channels
##
notifiers: {}
# notifiers.yaml:
# notifiers:
# - name: email-notifier
# type: email
# uid: email1
# # either:
# org_id: 1
# # or
# org_name: Main Org.
# is_default: true
# settings:
# addresses: an_email_address@example.com
# delete_notifiers:
## Configure grafana dashboard providers
## ref: http://docs.grafana.org/administration/provisioning/#dashboards
##
## `path` must be /var/lib/grafana/dashboards/<provider_name>
##
dashboardProviders:
dashboardproviders.yaml:
apiVersion: 1
providers:
- name: 'sedna'
orgId: 1
folder: ''
type: file
disableDeletion: false
editable: true
allowUiUpdates: true
options:
path: /var/lib/grafana/dashboards/sedna
## Configure grafana dashboard to import
## NOTE: To use dashboards you must also enable/configure dashboardProviders
## ref: https://grafana.com/dashboards
##
## dashboards per provider, use provider name as key.
##
dashboards:
sedna:
common-system-dashboard:
file: dashboards/Common System Metrics.json
algorithm-metrics-joint-inference-dashboard:
file: dashboards/Algorithm Metrics(Joint Inference).json
algorithm-metrics-incremental-learning-dashboard:
file: dashboards/Algorithm Metrics(Incremental Learning).json
algorithm-metrics-federated-learning-dashboard:
file: dashboards/Algorithm Metrics(Federated Learning).json
# default:
# some-dashboard:
# json: |
# $RAW_JSON
# custom-dashboard:
# file: dashboards/custom-dashboard.json
# prometheus-stats:
# gnetId: 2
# revision: 2
# datasource: Prometheus
# local-dashboard:
# url: https://example.com/repository/test.json
# token: ''
# local-dashboard-base64:
# url: https://example.com/repository/test-b64.json
# token: ''
# b64content: true
# local-dashboard-gitlab:
# url: https://example.com/repository/test-gitlab.json
# gitlabToken: ''
## Reference to external ConfigMap per provider. Use provider name as key and ConfigMap name as value.
## A provider's dashboards must be defined either by external ConfigMaps or in values.yaml, but not both.
## ConfigMap data example:
##
## data:
## example-dashboard.json: |
## RAW_JSON
##
dashboardsConfigMaps: {}
# default: ""
## Grafana's primary configuration
## NOTE: values in map will be converted to ini format
## ref: http://docs.grafana.org/installation/configuration/
##
grafana.ini:
paths:
data: /var/lib/grafana/
logs: /var/log/grafana
plugins: /var/lib/grafana/plugins
provisioning: /etc/grafana/provisioning
analytics:
check_for_updates: true
log:
mode: console
grafana_net:
url: https://grafana.net
server:
domain: "{{ if (and .Values.ingress.enabled .Values.ingress.hosts) }}{{ .Values.ingress.hosts | first }}{{ end }}"
## grafana Authentication can be enabled with the following values on grafana.ini
# server:
# The full public facing url you use in browser, used for redirects and emails
# root_url:
# https://grafana.com/docs/grafana/latest/auth/github/#enable-github-in-grafana
# auth.github:
# enabled: false
# allow_sign_up: false
# scopes: user:email,read:org
# auth_url: https://github.com/login/oauth/authorize
# token_url: https://github.com/login/oauth/access_token
# api_url: https://api.github.com/user
# team_ids:
# allowed_organizations:
# client_id:
# client_secret:
## LDAP Authentication can be enabled with the following values on grafana.ini
## NOTE: Grafana will fail to start if the value for ldap.toml is invalid
# auth.ldap:
# enabled: true
# allow_sign_up: true
# config_file: /etc/grafana/ldap.toml
## Grafana's LDAP configuration
## Templated by the template in _helpers.tpl
## NOTE: To enable, grafana.ini must be configured with auth.ldap.enabled
## ref: http://docs.grafana.org/installation/configuration/#auth-ldap
## ref: http://docs.grafana.org/installation/ldap/#configuration
ldap:
enabled: false
# `existingSecret` is a reference to an existing secret containing the ldap configuration
# for Grafana in a key `ldap-toml`.
existingSecret: ""
# `config` is the content of `ldap.toml` that will be stored in the created secret
config: ""
# config: |-
# verbose_logging = true
# [[servers]]
# host = "my-ldap-server"
# port = 636
# use_ssl = true
# start_tls = false
# ssl_skip_verify = false
# bind_dn = "uid=%s,ou=users,dc=myorg,dc=com"
## Grafana's SMTP configuration
## NOTE: To enable, grafana.ini must be configured with smtp.enabled
## ref: http://docs.grafana.org/installation/configuration/#smtp
smtp:
# `existingSecret` is a reference to an existing secret containing the smtp configuration
# for Grafana.
existingSecret: ""
userKey: "user"
passwordKey: "password"
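  ## Example (an assumption for illustration: the secret is created out-of-band
  ## with the two keys named above):
  ##   kubectl create secret generic grafana-smtp \
  ##     --from-literal=user=alerts@example.com --from-literal=password=changeme
  # existingSecret: "grafana-smtp"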
## Sidecars that collect the configmaps with the specified label and store the included files into the respective folders
## Requires at least Grafana 5 to work and can't be used together with the parameters dashboardProviders, datasources and dashboards
sidecar:
image:
repository: quay.io/kiwigrid/k8s-sidecar
tag: 1.19.2
sha: ""
imagePullPolicy: IfNotPresent
resources: {}
# limits:
# cpu: 100m
# memory: 100Mi
# requests:
# cpu: 50m
# memory: 50Mi
securityContext: {}
# skipTlsVerify Set to true to skip tls verification for kube api calls
# skipTlsVerify: true
enableUniqueFilenames: false
readinessProbe: {}
livenessProbe: {}
# Log level. Can be one of: DEBUG, INFO, WARN, ERROR, CRITICAL.
logLevel: INFO
dashboards:
env: {}
enabled: false
SCProvider: true
# label that the configmaps with dashboards are marked with
label: grafana_dashboard
# value of label that the configmaps with dashboards are set to
labelValue: ""
# folder in the pod that should hold the collected dashboards (unless `defaultFolderName` is set)
folder: /tmp/dashboards
# The default folder name, it will create a subfolder under the `folder` and put dashboards in there instead
defaultFolderName: null
# Namespaces list. If specified, the sidecar will search for config-maps/secrets inside these namespaces.
# Otherwise the namespace in which the sidecar is running will be used.
# It's also possible to specify ALL to search in all namespaces.
searchNamespace: null
    # Method to use to detect ConfigMap changes. With WATCH the sidecar will do WATCH requests; with SLEEP it will list all ConfigMaps, then sleep for 60 seconds.
watchMethod: WATCH
# search in configmap, secret or both
resource: both
    # If specified, the sidecar will look for an annotation with this name to create folders and put dashboards there.
    # You can use this parameter together with `provider.foldersFromFilesStructure` to annotate configmaps and create a folder structure.
folderAnnotation: null
# Absolute path to shell script to execute after a configmap got reloaded
script: null
    # watchServerTimeout: timeout (in seconds) sent with the request, asking the server to cleanly close the connection after that time.
    # defaults to 60sec; much higher values like 3600 seconds (1h) are feasible for non-Azure K8S
# watchServerTimeout: 3600
#
    # watchClientTimeout: a client-side timeout, configuring your local socket.
# If you have a network outage dropping all packets with no RST/FIN,
# this is how long your client waits before realizing & dropping the connection.
# defaults to 66sec (sic!)
# watchClientTimeout: 60
#
# provider configuration that lets grafana manage the dashboards
provider:
# name of the provider, should be unique
name: sidecarProvider
# orgid as configured in grafana
orgid: 1
# folder in which the dashboards should be imported in grafana
folder: ''
# type of the provider
type: file
      # disableDelete to activate an import-only behaviour
disableDelete: false
# allow updating provisioned dashboards from the UI
allowUiUpdates: false
# allow Grafana to replicate dashboard structure from filesystem
foldersFromFilesStructure: false
# Additional dashboard sidecar volume mounts
extraMounts: []
# Sets the size limit of the dashboard sidecar emptyDir volume
sizeLimit: {}
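    ## A sketch of a ConfigMap the dashboards sidecar would pick up (assumes the
    ## label/labelValue defaults above; the JSON payload is illustrative only):
    ##
    ## apiVersion: v1
    ## kind: ConfigMap
    ## metadata:
    ##   name: my-dashboard
    ##   labels:
    ##     grafana_dashboard: ""
    ## data:
    ##   my-dashboard.json: |
    ##     { "title": "My dashboard", "panels": [] }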
datasources:
enabled: false
# label that the configmaps with datasources are marked with
label: grafana_datasource
# value of label that the configmaps with datasources are set to
labelValue: ""
# If specified, the sidecar will search for datasource config-maps inside this namespace.
# Otherwise the namespace in which the sidecar is running will be used.
# It's also possible to specify ALL to search in all namespaces
searchNamespace: null
    # Method to use to detect ConfigMap changes. With WATCH the sidecar will do WATCH requests; with SLEEP it will list all ConfigMaps, then sleep for 60 seconds.
watchMethod: WATCH
# search in configmap, secret or both
resource: both
# Endpoint to send request to reload datasources
reloadURL: "http://localhost:3000/api/admin/provisioning/datasources/reload"
skipReload: false
# Deploy the datasource sidecar as an initContainer in addition to a container.
# This is needed if skipReload is true, to load any datasources defined at startup time.
initDatasources: false
# Sets the size limit of the datasource sidecar emptyDir volume
sizeLimit: {}
plugins:
enabled: false
# label that the configmaps with plugins are marked with
label: grafana_plugin
# value of label that the configmaps with plugins are set to
labelValue: ""
# If specified, the sidecar will search for plugin config-maps inside this namespace.
# Otherwise the namespace in which the sidecar is running will be used.
# It's also possible to specify ALL to search in all namespaces
searchNamespace: null
    # Method to use to detect ConfigMap changes. With WATCH the sidecar will do WATCH requests; with SLEEP it will list all ConfigMaps, then sleep for 60 seconds.
watchMethod: WATCH
# search in configmap, secret or both
resource: both
# Endpoint to send request to reload plugins
reloadURL: "http://localhost:3000/api/admin/provisioning/plugins/reload"
skipReload: false
# Deploy the datasource sidecar as an initContainer in addition to a container.
# This is needed if skipReload is true, to load any plugins defined at startup time.
initPlugins: false
# Sets the size limit of the plugin sidecar emptyDir volume
sizeLimit: {}
notifiers:
enabled: false
# label that the configmaps with notifiers are marked with
label: grafana_notifier
# If specified, the sidecar will search for notifier config-maps inside this namespace.
# Otherwise the namespace in which the sidecar is running will be used.
# It's also possible to specify ALL to search in all namespaces
searchNamespace: null
# search in configmap, secret or both
resource: both
# Sets the size limit of the notifier sidecar emptyDir volume
sizeLimit: {}
## Override the deployment namespace
##
namespaceOverride: ""
## Number of old ReplicaSets to retain
##
revisionHistoryLimit: 10
## Add a separate remote image renderer deployment/service
imageRenderer:
# Enable the image-renderer deployment & service
enabled: false
replicas: 1
image:
# image-renderer Image repository
repository: grafana/grafana-image-renderer
# image-renderer Image tag
tag: latest
# image-renderer Image sha (optional)
sha: ""
# image-renderer ImagePullPolicy
pullPolicy: Always
# extra environment variables
env:
HTTP_HOST: "0.0.0.0"
# RENDERING_ARGS: --no-sandbox,--disable-gpu,--window-size=1280x758
# RENDERING_MODE: clustered
# IGNORE_HTTPS_ERRORS: true
# image-renderer deployment serviceAccount
serviceAccountName: ""
# image-renderer deployment securityContext
securityContext: {}
# image-renderer deployment Host Aliases
hostAliases: []
# image-renderer deployment priority class
priorityClassName: ''
service:
# Enable the image-renderer service
enabled: true
# image-renderer service port name
portName: 'http'
# image-renderer service port used by both service and deployment
port: 8081
targetPort: 8081
    # Adds the appProtocol field to the image-renderer service. This allows it to work with Istio protocol selection. Ex: "http" or "tcp"
appProtocol: ""
# If https is enabled in Grafana, this needs to be set as 'https' to correctly configure the callback used in Grafana
grafanaProtocol: http
# In case a sub_path is used this needs to be added to the image renderer callback
grafanaSubPath: ""
# name of the image-renderer port on the pod
podPortName: http
# number of image-renderer replica sets to keep
revisionHistoryLimit: 10
networkPolicy:
# Enable a NetworkPolicy to limit inbound traffic to only the created grafana pods
limitIngress: true
# Enable a NetworkPolicy to limit outbound traffic to only the created grafana pods
limitEgress: false
resources: {}
# limits:
# cpu: 100m
# memory: 100Mi
# requests:
# cpu: 50m
# memory: 50Mi
## Node labels for pod assignment
## ref: https://kubernetes.io/docs/user-guide/node-selection/
#
nodeSelector: {}
## Tolerations for pod assignment
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []
## Affinity for pod assignment (evaluated as template)
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
##
affinity: {}
networkPolicy:
## @param networkPolicy.enabled Enable creation of NetworkPolicy resources. Only Ingress traffic is filtered for now.
##
enabled: false
  ## @param networkPolicy.ingress When true, enables the creation of an ingress network policy
  ##
  ingress: true
  ## @param networkPolicy.allowExternal Don't require client label for connections
  ## The Policy model to apply. When set to false, only pods with the correct
  ## client label will have network access to the grafana port defined.
  ## When true, grafana will accept connections from any source
  ## (with the correct destination port).
  ##
  allowExternal: true
## @param networkPolicy.explicitNamespacesSelector A Kubernetes LabelSelector to explicitly select namespaces from which traffic could be allowed
  ## If explicitNamespacesSelector is missing or set to {}, only client Pods that are in the networkPolicy's namespace
  ## and that match the other criteria (i.e. carry the right label) can reach grafana.
  ## But sometimes we want grafana to be accessible to clients from other namespaces; in that case we can use this
  ## LabelSelector to select those namespaces. Note that the networkPolicy's namespace should also be explicitly added.
##
## Example:
## explicitNamespacesSelector:
## matchLabels:
## role: frontend
## matchExpressions:
## - {key: role, operator: In, values: [frontend]}
##
explicitNamespacesSelector: {}
##
##
##
##
##
##
egress:
    ## @param networkPolicy.egress.enabled When enabled, an egress network policy will be
    ## created allowing grafana to connect to external data sources from the kubernetes cluster.
enabled: false
##
## @param networkPolicy.egress.ports Add individual ports to be allowed by the egress
ports: []
## Add ports to the egress by specifying - port: <port number>
## E.X.
## ports:
## - port: 80
## - port: 443
##
##
##
##
##
##
# Enable backward compatibility with Kubernetes versions below 1.13, which lack the enableServiceLinks option
enableKubeBackwardCompatibility: false
useStatefulSet: false
# Create dynamic manifests via values:
extraObjects: []
# - apiVersion: "kubernetes-client.io/v1"
# kind: ExternalSecret
# metadata:
# name: grafana-secrets
# spec:
# backendType: gcpSecretsManager
# data:
# - key: grafana-admin-password
# name: adminPassword


@ -0,0 +1,426 @@
# Default values for kube-state-metrics.
prometheusScrape: true
image:
# repository: registry.k8s.io/kube-state-metrics/kube-state-metrics
repository: registry.cn-hangzhou.aliyuncs.com/kanakami/kube-state-metrics
tag: v2.6.0
sha: ""
pullPolicy: IfNotPresent
imagePullSecrets: []
# - name: "image-pull-secret"
# If set to true, this will deploy kube-state-metrics as a StatefulSet and the data
# will be automatically sharded across <.Values.replicas> pods using the built-in
# autodiscovery feature: https://github.com/kubernetes/kube-state-metrics#automated-sharding
# This is an experimental feature and there are no stability guarantees.
autosharding:
enabled: false
replicas: 1
# List of additional cli arguments to configure kube-state-metrics
# for example: --enable-gzip-encoding, --log-file, etc.
# all the possible args can be found here: https://github.com/kubernetes/kube-state-metrics/blob/master/docs/cli-arguments.md
extraArgs:
- --custom-resource-state-config
- |
spec:
resources:
- groupVersionKind:
group: sedna.io
kind: "JointInferenceService"
version: "v1alpha1"
commonLabels:
crd_type: "JointInferenceService"
labelsFromPath:
name: [metadata, name]
metrics:
- name: "StageConditionStatus"
              help: "Status of each stage, Enum(Waiting,Ready,Starting,Running,Completed,Failed)"
each:
type: StateSet
stateSet:
list: [Waiting,Ready,Starting,Running,Completed,Complete,Failed]
path: [status,conditions, "-1", type]
labelName: phase
- name: "StartTime"
help: "The start time of tasks"
each:
type: Info
info:
labelsFromPath:
startTime: [status, startTime]
- groupVersionKind:
group: sedna.io
kind: "IncrementalLearningJob"
version: "v1alpha1"
commonLabels:
crd_type: "IncrementalLearningJob"
labelsFromPath:
name: [metadata, name]
metrics:
- name: "StageConditionStatus"
              help: "Status of each stage, Enum(Waiting,Ready,Starting,Running,Completed,Failed)"
each:
type: StateSet
stateSet:
list: [Waiting,Ready,Starting,Running,Completed,Complete,Failed]
path: [status,conditions, "0", type]
labelName: phase
- name: "StartTime"
help: "The start time of tasks"
each:
type: Info
info:
labelsFromPath:
startTime: [status, startTime]
- name: "TaskStage"
help: "Incremental Learning Task Stage"
each:
type: StateSet
stateSet:
list: [Train, Eval, Deploy]
path: [status, conditions, "-1", stage]
labelName: stage
- name: "DatasetInfo"
help: "Dataset info"
each:
type: Gauge
gauge:
path: [spec, dataset, trainProb]
labelsFromPath:
datasetName: [spec, dataset, name]
- name: "StageDetails"
help: "get detail infos of each Stage"
each:
type: Info
info:
path: [status, conditions]
                  labelFromKey: stage_d
labelsFromPath:
stage: [stage]
type: [type]
data: [data]
lastHeartbeatTime: [lastHeartbeatTime]
- groupVersionKind:
group: sedna.io
kind: "FederatedLearningJob"
version: "v1alpha1"
commonLabels:
crd_type: "FederatedLearningJob"
labelsFromPath:
name: [metadata, name]
metrics:
- name: "StageConditionStatus"
              help: "Status of each stage, Enum(Waiting,Ready,Starting,Running,Completed,Failed)"
each:
type: StateSet
stateSet:
list: [Waiting,Ready,Starting,Running,Completed,Complete,Failed]
path: [status,conditions, "-1", type]
labelName: phase
- name: "StartTime"
help: "The start time of tasks"
each:
type: Info
info:
labelsFromPath:
startTime: [status, startTime]
- name: "CompletionTime"
help: "The completion time of tasks"
each:
type: Info
info:
labelsFromPath:
                    completionTime: [status, completionTime]
- name: "TrainNodesInfo"
help: "Train nodes"
each:
type: Info
info:
path: [spec, trainingWorkers]
labelFromKey: node_d
labelsFromPath:
nodeName: [template, spec, nodeName]
datasetName: [dataset, name]
- groupVersionKind:
group: sedna.io
kind: "Dataset"
version: "v1alpha1"
commonLabels:
crd_type: "Dataset"
labelsFromPath:
datasetName: [metadata, name]
url: [spec, url]
metrics:
- name: "numberOfSamples"
help: "Number Of Samples"
each:
type: Gauge
gauge:
path: [status, numberOfSamples]
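# With the custom-resource-state config above, kube-state-metrics is expected to
# expose the metrics under its custom-resource prefix. The exact metric name below
# is an assumption based on the kube-state-metrics naming scheme, e.g.:
#   kube_customresource_numberOfSamples{crd_type="Dataset",datasetName="...",url="..."}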
service:
port: 8080
# Default to clusterIP for backward compatibility
type: ClusterIP
nodePort: 0
loadBalancerIP: ""
clusterIP: ""
annotations: {}
## Additional labels to add to all resources
customLabels: {}
# app: kube-state-metrics
## set to true to add the release label so scraping of the servicemonitor with kube-prometheus-stack works out of the box
releaseLabel: false
hostNetwork: false
rbac:
# If true, create & use RBAC resources
create: true
# Set to a rolename to use existing role - skipping role creating - but still doing serviceaccount and rolebinding to it, rolename set here.
# useExistingRole: cluster-admin
  # If set to false - Run without cluster-admin privileges - ONLY works if namespace is also set (if useExistingRole is set this name is used as ClusterRole or Role to bind to)
useClusterRole: true
# Add permissions for CustomResources' apiGroups in Role/ClusterRole. Should be used in conjunction with Custom Resource State Metrics configuration
# Example:
# - apiGroups: ["monitoring.coreos.com"]
# resources: ["prometheuses"]
# verbs: ["list", "watch"]
extraRules:
- apiGroups:
- sedna.io
resources:
- datasets
- models
- jointinferenceservices
- featureextractionservices
- federatedlearningjobs
- incrementallearningjobs
- lifelonglearningjobs
- objectsearchservices
- objecttrackingservices
- reidjobs
- videoanalyticsjobs
verbs:
- get
- list
- watch
- patch
serviceAccount:
  # Specifies whether a ServiceAccount should be created; requires rbac.create to be true
create: true
# The name of the ServiceAccount to use.
# If not set and create is true, a name is generated using the fullname template
name:
# Reference to one or more secrets to be used when pulling images
# ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
imagePullSecrets: []
# ServiceAccount annotations.
# Use case: AWS EKS IAM roles for service accounts
# ref: https://docs.aws.amazon.com/eks/latest/userguide/specify-service-account-role.html
annotations: {}
prometheus:
monitor:
enabled: false
additionalLabels: {}
namespace: ""
jobLabel: ""
interval: ""
scrapeTimeout: ""
proxyUrl: ""
selectorOverride: {}
honorLabels: false
metricRelabelings: []
relabelings: []
scheme: ""
tlsConfig: {}
## Specify if a Pod Security Policy for kube-state-metrics must be created
## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/
##
podSecurityPolicy:
enabled: false
annotations: {}
## Specify pod annotations
## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#apparmor
## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#seccomp
## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#sysctl
##
# seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
# seccomp.security.alpha.kubernetes.io/defaultProfileName: 'docker/default'
# apparmor.security.beta.kubernetes.io/defaultProfileName: 'runtime/default'
additionalVolumes: []
securityContext:
enabled: true
runAsGroup: 65534
runAsUser: 65534
fsGroup: 65534
## Specify security settings for a Container
## Allows overrides and additional options compared to (Pod) securityContext
## Ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container
containerSecurityContext: {}
## Node labels for pod assignment
## Ref: https://kubernetes.io/docs/user-guide/node-selection/
nodeSelector: {labelName: master}
## Affinity settings for pod assignment
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
affinity: {}
## Tolerations for pod assignment
## Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
tolerations: []
## Topology spread constraints for pod assignment
## Ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/
topologySpreadConstraints: []
# Annotations to be added to the deployment/statefulset
annotations: {}
# Annotations to be added to the pod
podAnnotations: {}
## Assign a PriorityClassName to pods if set
# priorityClassName: ""
# Ref: https://kubernetes.io/docs/tasks/run-application/configure-pdb/
podDisruptionBudget: {}
# Comma-separated list of metrics to be exposed.
# This list comprises exact metric names and/or regex patterns.
# The allowlist and denylist are mutually exclusive.
metricAllowlist: []
# Comma-separated list of metrics not to be enabled.
# This list comprises exact metric names and/or regex patterns.
# The allowlist and denylist are mutually exclusive.
metricDenylist: []
# Comma-separated list of additional Kubernetes label keys that will be used in the resource's
# labels metric. By default the metric contains only name and namespace labels.
# To include additional labels, provide a list of resource names in their plural form and Kubernetes
# label keys you would like to allow for them (Example: '=namespaces=[k8s-label-1,k8s-label-n,...],pods=[app],...)'.
# A single '*' can be provided per resource instead to allow any labels, but that has
# severe performance implications (Example: '=pods=[*]').
metricLabelsAllowlist: []
# - namespaces=[k8s-label-1,k8s-label-n]
# Comma-separated list of Kubernetes annotation keys that will be used in the resource's
# labels metric. By default the metric contains only name and namespace labels.
# To include additional annotations provide a list of resource names in their plural form and Kubernetes
# annotation keys you would like to allow for them (Example: '=namespaces=[kubernetes.io/team,...],pods=[kubernetes.io/team],...)'.
# A single '*' can be provided per resource instead to allow any annotations, but that has
# severe performance implications (Example: '=pods=[*]').
metricAnnotationsAllowList: []
# - pods=[k8s-annotation-1,k8s-annotation-n]
# Available collectors for kube-state-metrics.
# By default, all available resources are enabled, comment out to disable.
collectors:
- certificatesigningrequests
- configmaps
- cronjobs
- daemonsets
- deployments
- endpoints
- horizontalpodautoscalers
- ingresses
- jobs
- limitranges
- mutatingwebhookconfigurations
- namespaces
- networkpolicies
- nodes
- persistentvolumeclaims
- persistentvolumes
- poddisruptionbudgets
- pods
- replicasets
- replicationcontrollers
- resourcequotas
- secrets
- services
- statefulsets
- storageclasses
- validatingwebhookconfigurations
- volumeattachments
# - lifelonglearningjobs
- incrementallearningjobs
- jointinferenceservices
- federatedlearningjobs
- datasets
# - verticalpodautoscalers # not a default resource, see also: https://github.com/kubernetes/kube-state-metrics#enabling-verticalpodautoscalers
# Enabling kubeconfig will pass the --kubeconfig argument to the container
kubeconfig:
enabled: false
# base64 encoded kube-config file
secret:
# Enable only the release namespace for collecting resources. By default all namespaces are collected.
# If releaseNamespace and namespaces are both set only releaseNamespace will be used.
releaseNamespace: false
# Comma-separated list of namespaces to be enabled for collecting resources. By default all namespaces are collected.
namespaces: ""
# Comma-separated list of namespaces not to be enabled. If namespaces and namespaces-denylist are both set,
# only namespaces that are excluded in namespaces-denylist will be used.
namespacesDenylist: ""
## Override the deployment namespace
##
namespaceOverride: ""
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 64Mi
# requests:
# cpu: 10m
# memory: 32Mi
## Provide a k8s version to define apiGroups for podSecurityPolicy Cluster Role.
## For example: kubeTargetVersionOverride: 1.14.9
##
kubeTargetVersionOverride: ""
# Enable self metrics configuration for service and Service Monitor
# Default values for telemetry configuration can be overridden
# If you set telemetryNodePort, you must also set service.type to NodePort
selfMonitor:
enabled: false
# telemetryHost: 0.0.0.0
# telemetryPort: 8081
# telemetryNodePort: 0
# volumeMounts are used to add custom volume mounts to deployment.
# See example below
volumeMounts: []
# - mountPath: /etc/config
# name: config-volume
# volumes are used to add custom volumes to deployment
# See example below
volumes: []
# - configMap:
# name: cm-for-volume
# name: config-volume


@ -0,0 +1,113 @@
test_pod:
  image: bats/bats:v1.1.0
  pullPolicy: IfNotPresent

loki:
  enabled: true
  isDefault: true
  url: http://{{(include "loki.serviceName" .)}}:{{ .Values.loki.service.port }}
  readinessProbe:
    httpGet:
      path: /ready
      port: http-metrics
    initialDelaySeconds: 45
  livenessProbe:
    httpGet:
      path: /ready
      port: http-metrics
    initialDelaySeconds: 45
  datasource:
    jsonData: {}

promtail:
  enabled: true
  config:
    logLevel: info
    serverPort: 3101
    clients:
      - url: http://{{ .Release.Name }}:3100/loki/api/v1/push
  extraEnv:
    - name: KUBERNETES_SERVICE_HOST
      # value: "10.176.122.14"
      value: "10.96.0.1"
    - name: KUBERNETES_SERVICE_PORT
      # value: "8080"
      value: "443"

fluent-bit:
  enabled: false

grafana:
  enabled: false
  sidecar:
    datasources:
      enabled: true
      maxLines: 1000
  image:
    tag: 8.3.5

prometheus:
  enabled: false
  isDefault: false
  url: http://{{ include "prometheus.fullname" .}}:{{ .Values.prometheus.server.service.servicePort }}{{ .Values.prometheus.server.prefixURL }}
  datasource:
    jsonData: {}

filebeat:
  enabled: false
  filebeatConfig:
    filebeat.yml: |
      # logging.level: debug
      filebeat.inputs:
      - type: container
        paths:
          - /var/log/containers/*.log
        processors:
        - add_kubernetes_metadata:
            host: ${NODE_NAME}
            matchers:
            - logs_path:
                logs_path: "/var/log/containers/"
      output.logstash:
        hosts: ["logstash-loki:5044"]

logstash:
  enabled: false
  image: grafana/logstash-output-loki
  imageTag: 1.0.1
  filters:
    main: |-
      filter {
        if [kubernetes] {
          mutate {
            add_field => {
              "container_name" => "%{[kubernetes][container][name]}"
              "namespace" => "%{[kubernetes][namespace]}"
              "pod" => "%{[kubernetes][pod][name]}"
            }
            replace => { "host" => "%{[kubernetes][node][name]}"}
          }
        }
        mutate {
          remove_field => ["tags"]
        }
      }
  outputs:
    main: |-
      output {
        loki {
          url => "http://loki:3100/loki/api/v1/push"
          #username => "test"
          #password => "test"
        }
        # stdout { codec => rubydebug }
      }

# proxy is currently only used by loki test pod
# Note: If http_proxy/https_proxy are set, then no_proxy should include the
# loki service name, so that tests are able to communicate with the loki
# service.
proxy:
  http_proxy: ""
  https_proxy: ""
  no_proxy: ""

File diff suppressed because it is too large


@ -0,0 +1,85 @@
### User Guide for the LifelongLearning Exporter
This guide complements the Observability Management guide for LifelongLearning and covers how to install the LifelongLearning exporter on an existing Sedna environment.
### What you can get
#### 1. The dashboard about lifelonglearning job
After installation, you can access the Grafana UI at http://{The node you labeled for deployment}:31000
You can get your 'admin' user password by running:
```
kubectl get secret --namespace default grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
```
After login, you can see some dashboards designed for LifelongLearning.
![total_dashboard](../images/total_dashboard.png)
#### 2. Set the custom metrics you need
After installation, you can see the `Metric` and `Metric_value` tables in the knowledge base served by svc/kb.
You can register any required metrics in both tables, not just those related to lifelong learning, by following these steps:
1. Add the metric name, labels, and other information to the `Metric` table.
2. Add metric values and their labels to the `Metric_value` table at runtime.
For example, as shown in the images below, the `Metric` table contains two metrics: Recall and Precision.
![img.png](../images/metrics.png)
![img.png](../images/metrics_values.png)
The exporter then periodically scans the metrics in the `Metric_value` table, groups them by label, and you can see the resulting metrics in Prometheus:
![img.png](../images/recall.png)
![img.png](../images/precission.png)
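The two-table flow above can be sketched with SQLite (the knowledge base is backed by a SQL database). Note the column layout below is an illustrative assumption, not the exact KB schema:

```python
import sqlite3

# Illustrative schema only: the real KB tables are named Metric and
# Metric_value, but their exact columns may differ.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE metric (id INTEGER PRIMARY KEY, name TEXT, label TEXT)")
conn.execute("CREATE TABLE metric_value (metric_id INTEGER, label TEXT, value REAL)")

# Step 1: register the metric names and labels in the Metric table.
conn.execute("INSERT INTO metric (id, name, label) VALUES (1, 'Recall', 'task')")
conn.execute("INSERT INTO metric (id, name, label) VALUES (2, 'Precision', 'task')")

# Step 2: append metric values (with labels) at runtime.
conn.execute("INSERT INTO metric_value VALUES (1, 'task=front', 0.91)")
conn.execute("INSERT INTO metric_value VALUES (2, 'task=front', 0.88)")

# What the exporter's periodic scan would see, grouped by metric.
rows = conn.execute(
    "SELECT m.name, v.label, v.value FROM metric m "
    "JOIN metric_value v ON v.metric_id = m.id ORDER BY m.id"
).fetchall()
print(rows)
```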
### How to Install
#### Prerequisites
- Sedna
- Grafana
- Prometheus
If you don't have an existing Sedna or KubeEdge environment, you can follow the [Sedna Installation Document](https://github.com/kubeedge/sedna/blob/main/docs/setup/install.md) to install what you need.
If you don't have an existing Grafana or Prometheus environment, you can follow the [Observation-Management Document](../README.md) to install what you need.
#### Install KB_exporter
Compile the kb_exporter with the following command:
```shell
go build main.go
```
Then deploy it with the following commands:
```shell
# Move the installation package to the specified directory
kubectl cp $EXPORTER kb:/db/ -n sedna
# Open a shell in the kb pod
kubectl exec -it kb -n sedna -- /bin/sh
# Start the exporter
./main
# Use the following command for help
./main -h
```
#### Expose ports
After the exporter is installed, you need to expose the following ports so that Prometheus can automatically discover the target metrics.
```yaml
# Add the following entry to the ports list of svc/kb to expose the corresponding port.
- name: tcp-1
  port: 7070
  protocol: TCP
  targetPort: 7070
```
```yaml
# Add the following annotations to pod/kb so that Prometheus knows to scrape this port periodically.
metadata:
  annotations:
    prometheus.io/path: /metrics
    prometheus.io/port: "7070"
    prometheus.io/scrape: "true"
```
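The `prometheus.io/*` annotations only take effect if Prometheus runs an annotation-based pod discovery job. The prometheus-community chart ships such a job by default; a minimal equivalent looks roughly like this (a sketch, not the chart's full job definition):

```yaml
scrape_configs:
  - job_name: "kubernetes-pods"
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # Only scrape pods annotated prometheus.io/scrape: "true"
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
      # Honor prometheus.io/path
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      # Honor prometheus.io/port
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__
```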
#### Import dashboard
There are dashboard files in `lifelonglearning_exporter/dashboard`.
In Grafana, select Import and upload the JSON files.
![img.png](../images/dashboard_import.png)
After the above steps, you can see the lifelong learning panels; you can also add new metrics manually.
![total_dashboard](../images/total_dashboard.png)


@ -0,0 +1,96 @@
# User Guide for Observability Management
This guide covers how to install Observability Management on an existing Sedna environment.
If you don't have an existing Sedna or KubeEdge environment, you can follow the [Sedna Installation Document](https://github.com/kubeedge/sedna/blob/main/docs/setup/install.md) to install what you need.
## Get Helm Repos Info
This project builds on the Grafana and Prometheus community charts, so first add the Helm repos:
```
helm repo add grafana https://grafana.github.io/helm-charts
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
```
## Configs
### Node Selector
Label one node so that components such as Grafana, Prometheus, and Loki can be scheduled onto it.
A cloud node is recommended for deploying the Observability Management components.
```
# If sedna-mini-control-plane is the node you chose for deployment.
kubectl label nodes sedna-mini-control-plane labelName=master
```
## Installation
### Kube-state-metrics
```
cd sedna/components/dashboard/kube-state-metrics
helm install kube-state-metrics prometheus-community/kube-state-metrics -f values.yaml
```
After installation, you can curl the URL exposed by kube-state-metrics to check whether the metrics can be collected.
For example, if the pod IP of kube-state-metrics is 10.244.0.127, the URL is http://10.244.0.127:8080/metrics
### Exporter (Optional)
Currently, the exporter is only used for collecting inference counts and the number of hard samples.
These metrics do not appear in the running YAMLs, so kube-state-metrics cannot collect them.
```
sedna/components/dashboard/exporter/incremental_learning/hardSamplesExporter.py
sedna/components/dashboard/exporter/joint_inference/hardSamplesExporter.py
```
1. Edit the exporter if needed. (The current paths are for demo test cases.)
2. Run the exporter on the node which is producing hard samples.
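The essence of such an exporter is a tiny HTTP endpoint serving the Prometheus text format. This stdlib-only sketch uses hypothetical metric names and is not the shipped exporter above, just the idea:

```python
import http.server
import threading

# Hypothetical counters; the shipped exporters track inference counts
# and hard-sample numbers by watching the demo output directories.
COUNTS = {"sedna_inference_total": 0, "sedna_hard_samples_total": 0}

def render_metrics(counts):
    """Render the counters in the Prometheus text exposition format."""
    return "".join(f"{name} {value}\n" for name, value in counts.items())

class MetricsHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = render_metrics(COUNTS).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; version=0.0.4")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the sketch quiet

if __name__ == "__main__":
    COUNTS["sedna_hard_samples_total"] += 1
    # Port 0 picks a free port for this sketch; a real exporter would
    # listen on a fixed port (the guide assumes 8000) so Prometheus
    # can scrape it.
    server = http.server.HTTPServer(("127.0.0.1", 0), MetricsHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    print(render_metrics(COUNTS))
```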
### Prometheus
```
cd sedna/components/dashboard/prometheus
```
If you ran the exporters above, you need to edit Prometheus's **values.yaml**.
```yaml
serverFiles:
  prometheus.yml:
    scrape_configs:
      ...
      - job_name: "sedna-exporter"
        static_configs:
          - targets:
              - ${The node IP you run the exporter}:${Port(Default as 8000)}
```
Install Prometheus
```
helm install prometheus prometheus-community/prometheus -f values.yaml
```
If the installation is successful, you can access the Prometheus UI at http://{The node you labeled for deployment}:30003
In the **Targets** view of the Prometheus UI, the states of kube-state-metrics and sedna-exporter (optional) should be **UP**.
![](../images/kube-state-metrics_up.png)
![](../images/sedna_exporter_up.png)
### Loki
```
cd sedna/components/dashboard/loki-stack
helm install loki grafana/loki-stack -f values.yaml
```
Because there are no kubelets on edge nodes, problems like this [issue](https://github.com/kubeedge/kubeedge/issues/4170) prevent promtail on edge nodes from collecting logs.
### Grafana
There are several dashboard files in `sedna/components/dashboard/grafana/dashboards`.
To load these files automatically, first download the files of [Grafana-Chart](https://github.com/grafana/helm-charts/tree/main/charts/grafana) into the directory `sedna/components/dashboard/grafana`.
Then replace the original `dashboards` directory and the original values.yaml with ours.
```
cd sedna/components/dashboard/grafana
helm install grafana . -f values.yaml
```
After installation, you can access the Grafana UI at http://{The node you labeled for deployment}:31000
You can get your 'admin' user password by running:
```
kubectl get secret --namespace default grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
```
When you log in, you can see some dashboards designed for Sedna.
![](../images/dashboards.png)
![](../images/demo.png)


@ -29,9 +29,9 @@ Docker version 19.03.6, build 369ce74a3c
#### 1. Deploy Sedna
Sedna provides three deployment methods, which can be selected according to your actual situation:
- [Install Sedna AllinOne](setup/all-in-one.md). (used for development, here we use it)
- [Install Sedna local up](setup/local-up.md).
- [Install Sedna on a cluster](setup/install.md).
- [Install Sedna AllinOne](../setup/all-in-one.md). (used for development, here we use it)
- [Install Sedna local up](../setup/local-up.md).
- [Install Sedna on a cluster](../setup/install.md).
The [all-in-one script](/scripts/installation/all-in-one.sh) is used to install Sedna along with a mini Kubernetes environment locally, including:
- A Kubernetes v1.21 cluster with multiple worker nodes (zero worker nodes by default).
@ -179,5 +179,5 @@ Sedna is an open source project and in the spirit of openness and freedom, we we
You can get in touch with the community in the following ways:
* [Github Issues](https://github.com/kubeedge/sedna/issues)
* [Regular Community Meeting](https://zoom.us/j/4167237304)
* [slack channel](https://app.slack.com/client/TDZ5TGXQW/C01EG84REVB/details)
* [slack channel](https://kubeedge.io/docs/community/slack/)


@ -0,0 +1,235 @@
- [Operation and maintenance UI development](#operation-and-maintenance-ui-development)
- [Motivation](#motivation)
- [Goals](#goals)
- [Proposal](#proposal)
- [Use Cases](#use-cases)
- [Design Details](#design-details)
- [Log](#log)
- [Metric](#metric)
- [Visualization](#visualization)
- [Network](#network)
- [Cache](#cache)
- [Install](#install)
# Operation and maintenance UI development
## Motivation
At present, users can only access information about lifelong learning, including logs, status, and metrics, through the command line. This is inconvenient and unfriendly. This proposal provides a UI based on Grafana for metrics, log collection, status monitoring, and management, allowing users to get this information through a graphical interface.
### Goals
- Supports metrics collection and visualization in lifelong learning
- Support unified management and search of application logs
- Support the management and status monitoring of lifelong learning in dashboard
## Proposal
We propose using Grafana, Loki, and Prometheus to display the metrics, logs, and status of lifelong learning jobs. Prometheus collects the metrics and status, and Loki collects the logs.
### Use cases
- Users can search logs by specifying a pod and keywords in the log control panel
- Users can view the state history or metrics of any component as a time-series chart
- Users can view the status of any CRD (Model, Dataset, LifelongLearningJob) by specifying its name
## Design details
### Architecture
![](./images/lifelong-learning-ops-architecture.png)
1. Log
In DaemonSet mode, Promtail is deployed on each node (cloud and edge) and monitors the node's log storage directory (e.g. /var/log/containers).
When a log file is updated, Promtail picks up the new lines and pushes them to Loki.
This yields the same logs that "kubectl logs" provides.
Loki is deployed on cloud nodes and persistently stores the collected logs.
2. Metric
kube-state-metrics is about generating metrics from Kubernetes API objects without modification. This ensures that features provided by kube-state-metrics have the same grade of stability as the Kubernetes API objects themselves.
In KubeEdge, it obtains component information the same way "kubectl get" does: whenever kubectl can obtain information from edge nodes, kube-state-metrics can as well.
Both Prometheus and kube-state-metrics are deployed on cloud nodes. Prometheus pulls the metrics from kube-state-metrics and stores them persistently.
3. Visualization
Grafana is deployed on the Cloud node, and it visualizes based on the persisted data in Prometheus and Loki that are also located on the Cloud node.
### Log
At present, the log files live under the directory /var/log/containers.
There are two Promtail collection modes; the following table compares their features:
| | Daemonset | SideCar |
|----------------------------|---------------------------|-------------------------------------------|
| Source | Sysout + Part file | File |
| Log classification storage | Mapped by container/path | Pod can be separately |
| Multi-tenant isolation | Isolated by configuration | Isolated by Pod |
| Resource occupancy | Low | High |
| Customizability | Low | High, each Pod is configured individually |
| Applicable scene           | General-purpose cluster   | Large, mixed cluster                      |
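In the DaemonSet mode chosen here, each Promtail instance needs only a small static config. A hedged sketch follows (the loki-stack chart generates this automatically; the endpoint follows the Loki push API, and the paths are illustrative):

```yaml
server:
  http_listen_port: 3101
positions:
  filename: /run/promtail/positions.yaml   # remembers how far each file was read
clients:
  - url: http://loki:3100/loki/api/v1/push  # Loki on the cloud node
scrape_configs:
  - job_name: containers
    static_configs:
      - targets: [localhost]
        labels:
          job: containerlogs
          __path__: /var/log/containers/*.log  # tail all container logs on this node
```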
### Metric
#### CRD metric
1. LLJob Status
| **Metric** | **Description** |
|----------------------|----------------------------------------------------------------------------|
| JobStatus            | Status of each job: Enum(True, False, Unknown)                               |
| StageConditionStatus | Status of each stage: Enum(Waiting, Ready, Starting, Running, Completed, Failed) |
| LearningStage        | Stage of lifelong learning: Enum(Train, Eval, Deploy)                        |
2. Dataset status
| **Metric** | **Description** |
|------------------|-------------------------------------|
| NumberOfSamples | The number of samples |
| StageTrainNumber | The number of samples used by train |
| StageEvalNumber | The number of samples used by eval |
3. Model
| **Metric** | **Description** |
|------------|-------------------------------------------|
| Key | The value corresponding to the custom Key |
4. Task
| **Metric** | **Description** |
|-------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| ActiveTasksNumber | The number of running tasks |
| TotalTasksNumber | The number of total tasks |
| TasksInfo | Tasks Info<br/>{ <br/>location: "edge &#124; cloud",<br/>work: "training &#124; infering",<br/>startTime: "xxxx-xx-xx xx:xx:xx",<br/>duration: "xxxx s/min/h",<br/>status: "active &#124; dead"<br/>} |
5. Worker
| **Metric** | **Description** |
|---------------------|-----------------------------------------------------------------------------------------------------------|
| ActiveWorkersNumber | The number of running worker |
| TotalWorkerNumber | The number of total worker |
| WorkersInfo | Workers Info<br/>{<br/>location: "edge &#124;cloud",<br/>status: "waiting &#124;active &#124; dead"<br/>} |
6. Number of samples for inference (known tasks, unknown tasks)
| **Metric** | **Description** |
|----------------------|--------------------------------------------------------|
| InferenceDoneNumber | The number of samples for which inference is completed |
| InferenceReadyNumber | The number of samples ready for inference |
| InferenceErrorNumber | The number of samples with inference errors |
| InferenceRate        | Inference completion percentage (0-100%)               |
7. Knowledge base
| **Metric** | **Description** |
|------------|----------------------------------------------------------------------|
| KBServerIP | The server IP where the knowledge base is located |
| KBNodeType | The type of the node where the knowledge base is located (Cloud, Edge) |
| KBNodeName | The name of the node where the knowledge base is located |
8. Training stage
| **Metric** | **Description** |
|------------------------|-----------------------------------------------------------|
| KnowTaskNum | Number of known tasks in the knowledge base |
| TaskGroupModelFile | The model file that Task needs to load |
| TaskSampleNum | The number of samples used at training stage |
| TaskSampleDir | The path to the sample for the subtask |
| TaskModelSavePath | The path where the task model is saved |
| TaskModelBaseModelPath | The path where the base task model is saved |
| TaskRelationship | Migration relationship between tasks |
| TaskProperties | Basic properties of subtasks |
| KnowledgeBase | For indicators related to the knowledge base, see Table 7 |
9. Evaluation stage
| **Metric** | **Description** |
|----------------|-----------------------------------------------------------|
| EvalSampleNum | The number of samples used at eval stage |
| EvalModelUrl | The url of model at eval stage |
| TaskScore | Score for the task to be eval |
| TaskEvalResult | Results of Model Evaluation |
| TaskStatus | Whether the Task can be deployed |
| KnowledgeBase | For indicators related to the knowledge base, see Table 7 |
10. Deploy stage
| **Metric** | **Description** |
|----------------|-----------------------------|
| DeployStatus | Enum(Waiting,Ok,NotOk) |
| DeployModelUrl | The url for deploying model |
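Exposed through kube-state-metrics, the CRD metrics above would surface in the Prometheus text format roughly like this (metric and label names here are illustrative, not the final naming):

```
# HELP lljob_status Status of each LifelongLearningJob
# TYPE lljob_status gauge
lljob_status{job="robot-demo", condition="True"} 1
dataset_number_of_samples{dataset="robot-dataset"} 5000
inference_rate{job="robot-demo"} 87
```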
#### Custom metric
The metrics that come with lifelong learning may not meet every need, so here is a way for users to extend them.
You can register your own metrics in the `Metric` table, as shown below:
![img.png](images/lifelong-learning-ops-custom-metrics.png)
Here, precision and recall were added to the table; of course, you can add any other metrics you want.
After this, when you add values with the same metric id to the `Metric_value` table, multiple series are generated according to their labels.
![img.png](images/lifelong-learning-ops-custom-metrics-values.png)
Both the `Metric` and `Metric_value` tables are located in the KB database.
Therefore, you can manipulate these two tables just like any other table in the KB.
#### Example
```yaml
kind: CustomResourceStateMetrics
spec:
  resources:
    - groupVersionKind:
        group: myteam.io
        kind: "Foo"
        version: "v1"
      labelsFromPath:
        name: [ metadata, name ]
      metrics:
        - name: "active_count"
          help: "Number Foo Bars active"
          each:
            path: [status, active]
            labelFromKey: type
            labelsFromPath:
              bar: [bar]
            value: [count]
          commonLabels:
            custom_metric: "yes"
          labelsFromPath:
            "*": [metadata, labels] # copy all labels from CR labels
            foo: [metadata, labels, foo] # copy a single label (overrides *)
        - name: "other_count"
          each:
            path: [status, other]
          errorLogV: 5
```
### Visualization
Grafana can be used to visualize the monitoring data of Sedna. The visualization in Grafana contains 2 parts: `metrics` and `logs`.
- Visualization of metrics:
![](./images/lifelong-learning-ops-metrics-grafana.png)
- Visualization of logs:
![](./images/lifelong-learning-ops-log-grafana.png)
## Install
Installation uses Helm, with a values file modified from the Loki-Stack chart.
```yaml
loki:
  enabled: true
  persistence:
    enabled: true
    accessModes:
      - ReadWriteOnce
    size: 10Gi
promtail:
  enabled: true
  config:
    lokiAddress: http://{{ .Release.Name }}:3100/loki/api/v1/push
grafana:
  enabled: true
prometheus:
  enabled: true
  isDefault: false
nodeExporter:
  enabled: true
kubeStateMetrics:
  enabled: true
```
There are also related JSON files for the Grafana dashboards.
The corresponding dashboards can be imported according to user needs.

Binary image files not shown (7 new images, sizes 185 KiB to 4.4 MiB).

Some files were not shown because too many files have changed in this diff.