Compare commits


83 Commits
v0.3.6 ... main

Author SHA1 Message Date
Vishal Kumar eefe9fe40a
Fix: Updated k8s version and GitHub action workflow version (#212)
* Fix: Update admission decoder type from pointer to value in handlers (see the sketch after this commit entry)

Signed-off-by: Ayush Kumar <ayushshyamkumar888@gmail.com>
Signed-off-by: vishal210893 <vishal210893@gmail.com>
Signed-off-by: semmet95 <singhamitch@outlook.com>

* Chore: Upgrade controller-gen version to v0.18.0 and refactor decoder usage in tests

Signed-off-by: Ayush Kumar <ayushshyamkumar888@gmail.com>
Signed-off-by: vishal210893 <vishal210893@gmail.com>
Signed-off-by: semmet95 <singhamitch@outlook.com>

* Chore: Upgrade ginkgo to v2 and update import paths in test files

Signed-off-by: Ayush Kumar <ayushshyamkumar888@gmail.com>
Signed-off-by: vishal210893 <vishal210893@gmail.com>
Signed-off-by: semmet95 <singhamitch@outlook.com>

* Chore: Refactor feature gate handling in workflow tests

Signed-off-by: vishal210893 <vishal210893@gmail.com>
Signed-off-by: semmet95 <singhamitch@outlook.com>

* Chore: Remove unused done channel from BeforeSuite in test files

Signed-off-by: vishal210893 <vishal210893@gmail.com>
Signed-off-by: semmet95 <singhamitch@outlook.com>

* Chore: Refactor feature gate handling in workflow tests

Signed-off-by: vishal210893 <vishal210893@gmail.com>
Signed-off-by: semmet95 <singhamitch@outlook.com>

* Chore: Update decoder handling in test files for consistency

Signed-off-by: vishal210893 <vishal210893@gmail.com>
Signed-off-by: semmet95 <singhamitch@outlook.com>

* Chore: Upgrade Kubernetes version to v1.31.10 in unit-test.yaml

Signed-off-by: vishal210893 <vishal210893@gmail.com>
Signed-off-by: semmet95 <singhamitch@outlook.com>

* Chore: Removed KIND version from unit-test.yaml

Signed-off-by: vishal210893 <vishal210893@gmail.com>
Signed-off-by: semmet95 <singhamitch@outlook.com>

* Chore: Update backport action to use korthout/backport-action@v3

Signed-off-by: vishal210893 <vishal210893@gmail.com>
Signed-off-by: semmet95 <singhamitch@outlook.com>

* Chore: Upgrade Kubernetes dependencies to v0.31.10 in go.mod

Signed-off-by: Amit Singh <singhamitch@outlook.com>
Signed-off-by: semmet95 <singhamitch@outlook.com>

* Chore: Upgrade K3D image version to v1.31.10 and refactor feature gate handling in tests

Signed-off-by: Amit Singh <singhamitch@outlook.com>
Signed-off-by: semmet95 <singhamitch@outlook.com>

* Chore: Downgrade controller-tools version to v0.16 in dependency.mk

Signed-off-by: Amit Singh <singhamitch@outlook.com>
Signed-off-by: semmet95 <singhamitch@outlook.com>

* Chore: Update K3D image version to v1.31 in e2e.yaml

Signed-off-by: Amit Singh <singhamitch@outlook.com>
Signed-off-by: semmet95 <singhamitch@outlook.com>

* Chore: Downgrade go-cmp and ginkgo/gomega versions in go.mod and go.sum

Signed-off-by: Amit Singh <singhamitch@outlook.com>
Signed-off-by: semmet95 <singhamitch@outlook.com>

* chore: sets timeout in beforeeach nodes

Signed-off-by: Amit Singh <singhamitch@outlook.com>
Signed-off-by: semmet95 <singhamitch@outlook.com>

* chore: updates controller runtime version

Signed-off-by: Amit Singh <singhamitch@outlook.com>

* chore: fixes linter warnings

Signed-off-by: Amit Singh <singhamitch@outlook.com>

---------

Signed-off-by: Ayush Kumar <ayushshyamkumar888@gmail.com>
Signed-off-by: vishal210893 <vishal210893@gmail.com>
Signed-off-by: semmet95 <singhamitch@outlook.com>
Signed-off-by: Amit Singh <singhamitch@outlook.com>
Co-authored-by: Ayush Shyam Kumar <ayushshyam.official.888@gmail.com>
2025-06-26 06:58:01 +08:00
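The first bullet in the entry above (admission decoder changed from a pointer to a value) corresponds to controller-runtime making `admission.Decoder` an interface. A minimal handler sketch of that migration, assuming controller-runtime v0.17 or newer where `admission.NewDecoder` returns the interface by value; the `PodValidator` type and its fields are illustrative, not this repository's actual handlers:

```go
package webhooks

import (
	"context"
	"net/http"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"sigs.k8s.io/controller-runtime/pkg/webhook/admission"
)

// PodValidator is an illustrative admission handler; it only shows the
// pointer-to-value decoder migration, not the workflow project's real webhooks.
type PodValidator struct {
	// Before: Decoder *admission.Decoder (a struct pointer).
	// After:  admission.Decoder is an interface, held by value.
	Decoder admission.Decoder
}

// NewPodValidator builds the decoder from the scheme directly, replacing the
// old decoder-injection plumbing.
func NewPodValidator(scheme *runtime.Scheme) *PodValidator {
	return &PodValidator{Decoder: admission.NewDecoder(scheme)}
}

// Handle decodes the incoming object and admits it unchanged.
func (v *PodValidator) Handle(ctx context.Context, req admission.Request) admission.Response {
	pod := &corev1.Pod{}
	if err := v.Decoder.Decode(req, pod); err != nil {
		return admission.Errored(http.StatusBadRequest, err)
	}
	return admission.Allowed("")
}
```

Constructing the decoder from the scheme keeps the handler self-contained, which is presumably why the test refactors listed in this entry also touch decoder usage.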
Brian Kane 59237b82a5
Fix: Fixes the request workflowstep (#211) 2025-06-16 21:37:44 +08:00
Anoop Gopalakrishnan d7db9c4ef4
Chore: Update CODEOWNERS (#206)
Signed-off-by: Anoop Gopalakrishnan <anoop2811@aol.in>
2025-05-01 01:39:47 +08:00
PushparajShetty 067ed6a846
Chore: upgrades go, golang_ci, and staticcheck version (#204)
* upgrades go, golang_ci, and staticcheck version

Signed-off-by: Ayush Shyam Kumar <ayushshyam.official.888@gmail.com>

* upgrades ubuntu version

Signed-off-by: Ayush Shyam Kumar <ayushshyam.official.888@gmail.com>

* upgrades staticcheck version

Signed-off-by: Ayush Shyam Kumar <ayushshyam.official.888@gmail.com>

* upgrades staticcheck version

Signed-off-by: Ayush Shyam Kumar <ayushshyam.official.888@gmail.com>

* fix error string capitalization to comply with staticcheck (see the sketch after this commit entry)

Signed-off-by: Ayush Shyam Kumar <ayushshyam.official.888@gmail.com>

---------

Signed-off-by: Ayush Shyam Kumar <ayushshyam.official.888@gmail.com>
2025-04-30 09:29:00 +05:30
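The last bullet in the entry above refers to staticcheck's ST1005 rule: error strings should start lowercase and carry no trailing punctuation so they compose cleanly when wrapped. A small illustrative Go example of that kind of fix (the function and messages are hypothetical, not code from this repository):

```go
package main

import (
	"errors"
	"fmt"
)

// loadConfig is a hypothetical function illustrating the ST1005 convention:
// error strings start lowercase and have no trailing punctuation.
func loadConfig(path string) error {
	if path == "" {
		// Before: errors.New("Failed to parse config.")  // flagged by staticcheck ST1005
		return errors.New("failed to parse config: empty path")
	}
	return nil
}

func main() {
	// Lowercase error strings read naturally when wrapped by callers.
	if err := loadConfig(""); err != nil {
		fmt.Println(fmt.Errorf("loading configuration: %w", err))
	}
}
```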
shivin 5e963e0c45
Fix: Fix output (#203)
* fix compatibility for output

Signed-off-by: Pushparaj Shetty KS <kspushparajshetty@gmail.com>
Signed-off-by: Pushparaj Shetty K S <kspushparajshetty@gmail.com>

* ran make reviewable

Signed-off-by: Pushparaj Shetty KS <kspushparajshetty@gmail.com>
Signed-off-by: Pushparaj Shetty K S <kspushparajshetty@gmail.com>

* update actions/cache version to v3

Signed-off-by: Pushparaj Shetty KS <kspushparajshetty@gmail.com>
Signed-off-by: Pushparaj Shetty K S <kspushparajshetty@gmail.com>

* update logic in Output function

Signed-off-by: Pushparaj Shetty K S <kspushparajshetty@gmail.com>

---------

Signed-off-by: Pushparaj Shetty KS <kspushparajshetty@gmail.com>
Signed-off-by: Pushparaj Shetty K S <kspushparajshetty@gmail.com>
Co-authored-by: Pushparaj Shetty K S <kspushparajshetty@gmail.com>
2025-04-15 08:49:44 +05:30
Tianxin Dong 23468c911a
Fix: fix compatibility for output (#201)
fix: fix compatibility for output

Signed-off-by: FogDong <fog@bentoml.com>
2025-02-26 10:25:24 +08:00
gzb1128 ea6d165f44
Docs: Readme link text bug fix (#200)
Docs: fix readme link text bug

Signed-off-by: gzb1128 <591605936@qq.com>
2025-02-12 12:51:57 +08:00
pnr d8a85b26c8
Chore: update k8s to 1.29 (#195)
chore: update k8s to 1.29

Signed-off-by: phantomnat <w.nattadej@gmail.com>
2024-12-10 15:46:45 +08:00
Tianxin Dong 55f1433fd7
Fix: fix suspend judgement (#196)
Fix: fix context in provider (#194)

fix: fix context in provider

Signed-off-by: FogDong <fog@bentoml.com>
2024-09-24 23:29:48 +08:00
Tianxin Dong 76db4ac03e
Fix: fix context in provider (#194)
fix: fix context in provider

Signed-off-by: FogDong <fog@bentoml.com>
2024-08-17 00:14:12 +08:00
Tianxin Dong 94c9275ef9
Fix: fix native cue providers (#193)
* fix: fix native cue providers

Signed-off-by: FogDong <fog@bentoml.com>

* fix: fix test

Signed-off-by: FogDong <fog@bentoml.com>

---------

Signed-off-by: FogDong <fog@bentoml.com>
2024-08-15 22:25:06 +08:00
Tianxin Dong 1fa0042fe9
Feat: add builtin providers and fix helm (#192)
* feat: add builtin providers and fix helm

Signed-off-by: FogDong <fog@bentoml.com>

* fix: fix lint

Signed-off-by: FogDong <fog@bentoml.com>

---------

Signed-off-by: FogDong <fog@bentoml.com>
2024-08-15 00:51:50 +08:00
Tianxin Dong 9d557371b3
Feat: add new providers (#190)
* feat: add new providers

Signed-off-by: FogDong <fog@bentoml.com>

* fix: lint

Signed-off-by: FogDong <fog@bentoml.com>

* fix: delete useless code

Signed-off-by: FogDong <fog@bentoml.com>

* fix: ignore package error

Signed-off-by: FogDong <fog@bentoml.com>

* fix: add test -v

Signed-off-by: FogDong <fog@bentoml.com>

* fix: set dynamic client

Signed-off-by: FogDong <fog@bentoml.com>

* fix: register schema

Signed-off-by: FogDong <fog@bentoml.com>

* fix: delete -v

Signed-off-by: FogDong <fog@bentoml.com>

* fix: fix test

Signed-off-by: FogDong <fog@bentoml.com>

---------

Signed-off-by: FogDong <fog@bentoml.com>
2024-08-13 23:37:54 +08:00
Tianxin Dong 7d94489306
Chore: refactor the cue engine with cuex (#162)
* Chore: refactor the cue engine with cuex

Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>

* use singleton for compiler

Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>

* fix email test cases

Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>

* fix test

Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>

* add unmarshal to as util func

Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>

* add time provider

Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>

* fix: fix compiler

Signed-off-by: FogDong <fog@bentoml.com>

* fix: fix lint and rebase

Signed-off-by: FogDong <fog@bentoml.com>

* chore: update cue and fix context key

Signed-off-by: FogDong <fog@bentoml.com>

* fix: fix input fill val

Signed-off-by: FogDong <fog@bentoml.com>

* fix: fix checkpending field path

Signed-off-by: FogDong <fog@bentoml.com>

* fix: fix nil pointer return in kube

Signed-off-by: FogDong <fog@bentoml.com>

* fix: fix error in return

Signed-off-by: FogDong <fog@bentoml.com>

* fix: add mutex for syntax

Signed-off-by: FogDong <fog@bentoml.com>

* fix: add kubeclient runtime param

Signed-off-by: FogDong <fog@bentoml.com>

* chore: clean up stdlib

Signed-off-by: FogDong <fog@bentoml.com>

* fix: do not override runtime params

Signed-off-by: FogDong <fog@bentoml.com>

---------

Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>
Signed-off-by: FogDong <fog@bentoml.com>
2024-07-27 17:44:41 +08:00
Chaitanyareddy0702 c3331e7c07
Chore: Update the go version to 1.22 (#188)
* Update the go version to 1.22

- Change the go version in go.mod to 1.22

Signed-off-by: Chaitanyareddy0702 <chaitanyareddy0702@gmail.com>
Author: VibhorChinda <vibhorchinda@gmail.com>
Signed-off-by: GW Cloud Common Services <some@guidewire.com>

* Fix: Change the version of go in workflow file

- Change go and golangci version in unit-test workflow

Signed-off-by: Chaitanyareddy0702 <chaitanyareddy0702@gmail.com>
Signed-off-by: GW Cloud Common Services <some@guidewire.com>

* Chore: Update the go version to 1.22

- Update the go version in dockerfile and post-submit in github workflows

Signed-off-by: Chaitanyareddy0702 <chaitanyareddy0702@gmail.com>

Author: VibhorChinda <vibhorchinda@gmail.com>
Signed-off-by: GW Cloud Common Services <some@guidewire.com>

* Chore: Update the go version to 1.22

- Update the staticcheck to version 2023.1.7

Signed-off-by: Chaitanyareddy0702 <chaitanyareddy0702@gmail.com>

Author: VibhorChinda <vibhorchinda@gmail.com>
Signed-off-by: GW Cloud Common Services <some@guidewire.com>

* Chore: Update go version

- Change the go version and golangci version in go.yml workflow

Signed-off-by: Chaitanyareddy0702 <chaitanyareddy0702@gmail.com>

Author: VibhorChinda <vibhorchinda@gmail.com>
Signed-off-by: GW Cloud Common Services <some@guidewire.com>

* Chore: Update go version

- Change the action-cache version

Signed-off-by: Chaitanyareddy0702 <chaitanyareddy0702@gmail.com>

Author: VibhorChinda <vibhorchinda@gmail.com>
Signed-off-by: GW Cloud Common Services <some@guidewire.com>

* Chore: Bump the go version to 1.22

- Remove cache of go dependencies on the pipeline and run go mod tidy

Signed-off-by: co_gwre <co@guidewire.com>
Signed-off-by: GW Cloud Common Services <some@guidewire.com>

* Bumped the staticcheck version

Signed-off-by: GW Cloud Common Services <some@guidewire.com>

* Bump the go version for e2e tests

Signed-off-by: GW Cloud Common Services <some@guidewire.com>

* Removed the uncache step

Signed-off-by: GW Cloud Common Services <some@guidewire.com>

* bumped golangci version

Signed-off-by: GW Cloud Common Services <some@guidewire.com>

* Bump controller tools version

Signed-off-by: GW Cloud Common Services <some@guidewire.com>

* Bumped go version in dockerfile.e2e

Signed-off-by: GW Cloud Common Services <some@guidewire.com>

* Added no lint comments

Author: Chaitanyareddy0702 chaitanyareddy0702@gmail.com
Signed-off-by: vchinda <vchinda@guidewire.com>
Signed-off-by: GW Cloud Common Services <some@guidewire.com>

* added lint

Signed-off-by: GW Cloud Common Services <some@guidewire.com>

* Revert "added lint"

This reverts commit 0025dd036210a3a82e07cfef605be740b7f70b8f.

Signed-off-by: GW Cloud Common Services <some@guidewire.com>

* modified lint

Signed-off-by: GW Cloud Common Services <some@guidewire.com>

* lint changes

Signed-off-by: GW Cloud Common Services <some@guidewire.com>

* added nolint

Signed-off-by: GW Cloud Common Services <some@guidewire.com>

* changed new object

Signed-off-by: GW Cloud Common Services <some@guidewire.com>

---------

Signed-off-by: Chaitanyareddy0702 <chaitanyareddy0702@gmail.com>
Signed-off-by: GW Cloud Common Services <some@guidewire.com>
Signed-off-by: co_gwre <co@guidewire.com>
Signed-off-by: vchinda <vchinda@guidewire.com>
Co-authored-by: co_gwre <co@guidewire.com>
Co-authored-by: vchinda <vchinda@guidewire.com>
Co-authored-by: GW Cloud Common Services <some@guidewire.com>
2024-07-10 22:17:47 +08:00
dependabot[bot] f187b42dc9
Chore(deps): bump golang.org/x/crypto from 0.6.0 to 0.17.0 (#182)
Bumps [golang.org/x/crypto](https://github.com/golang/crypto) from 0.6.0 to 0.17.0.
- [Commits](https://github.com/golang/crypto/compare/v0.6.0...v0.17.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-19 14:48:12 +08:00
Jongwoo Han ec3c20d214
Feat: replace deprecated command with environment file (#181)
Signed-off-by: Jongwoo Han <jongwooo.han@gmail.com>
2023-12-18 15:48:19 +08:00
dependabot[bot] 7d88eef4ab
Chore(deps): bump google.golang.org/grpc from 1.50.1 to 1.53.0 (#168)
Bumps [google.golang.org/grpc](https://github.com/grpc/grpc-go) from 1.50.1 to 1.53.0.
- [Release notes](https://github.com/grpc/grpc-go/releases)
- [Commits](https://github.com/grpc/grpc-go/compare/v1.50.1...v1.53.0)

---
updated-dependencies:
- dependency-name: google.golang.org/grpc
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-06 15:10:47 +08:00
Somefive 3d4f4413ff
Feat: update ci for publishing (#167)
Signed-off-by: Yin Da <yd219913@alibaba-inc.com>
2023-06-05 10:52:35 +08:00
dependabot[bot] cd8f883f59
Chore(deps): bump github.com/cloudflare/circl from 1.1.0 to 1.3.3 (#163)
Bumps [github.com/cloudflare/circl](https://github.com/cloudflare/circl) from 1.1.0 to 1.3.3.
- [Release notes](https://github.com/cloudflare/circl/releases)
- [Commits](https://github.com/cloudflare/circl/compare/v1.1.0...v1.3.3)



---
updated-dependencies:
- dependency-name: github.com/cloudflare/circl
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-01 17:40:32 +08:00
dependabot[bot] 27766d4e7b
Chore(deps): bump github.com/docker/distribution from 2.8.1+incompatible to 2.8.2+incompatible (#164)
Chore(deps): bump github.com/docker/distribution

Bumps [github.com/docker/distribution](https://github.com/docker/distribution) from 2.8.1+incompatible to 2.8.2+incompatible.
- [Release notes](https://github.com/docker/distribution/releases)
- [Commits](https://github.com/docker/distribution/compare/v2.8.1...v2.8.2)



---
updated-dependencies:
- dependency-name: github.com/docker/distribution
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-01 17:19:09 +08:00
Arvin 97c5853ce1
Feat: add image pull secret in task step (#166) 2023-05-29 15:09:59 +08:00
Somefive 34c6911427
Fix: apply application should not use strategic merge patch (#165)
* Fix: apply application should not use strategic merge patch

Signed-off-by: Yin Da <yd219913@alibaba-inc.com>

* upgrade ci version

Signed-off-by: Yin Da <yd219913@alibaba-inc.com>

---------

Signed-off-by: Yin Da <yd219913@alibaba-inc.com>
2023-05-23 11:23:35 +08:00
Tianxin Dong 687ba328b1
Fix: fix terminate suspending steps (#160)
Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>
2023-04-18 18:54:34 +08:00
Tianxin Dong 9c5da14de8
Feat: add job orchestration example (#152)
Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>
2023-04-13 13:01:28 +08:00
Tianxin Dong be9e5a10ba
Fix: fix the step group status if there's a suspending sub step (#158)
Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>
2023-04-12 22:28:34 +08:00
Somefive 9c36c21ea8
Feat: upgrade k8s.io to 0.26 (#150)
Signed-off-by: Yin Da <yd219913@alibaba-inc.com>
2023-04-10 13:49:20 +08:00
Tianxin Dong b068c91d0c
Feat: add chat-gpt step and its example (#156)
Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>
2023-04-10 13:43:50 +08:00
dependabot[bot] 6a9833336f
Chore(deps): bump github.com/docker/docker from 20.10.17+incompatible to 20.10.24+incompatible (#155)
Chore(deps): bump github.com/docker/docker

Bumps [github.com/docker/docker](https://github.com/docker/docker) from 20.10.17+incompatible to 20.10.24+incompatible.
- [Release notes](https://github.com/docker/docker/releases)
- [Commits](https://github.com/docker/docker/compare/v20.10.17...v20.10.24)

---
updated-dependencies:
- dependency-name: github.com/docker/docker
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-04-07 18:06:25 +08:00
yangs cd812ee307
Feat: add stepGroupName to process.Context (#151)
Signed-off-by: yangsoon <songyang.song@alibaba-inc.com>
Co-authored-by: yangsoon <songyang.song@alibaba-inc.com>
2023-04-07 10:19:51 +08:00
Tianxin Dong a4f3ec81fc
Feat: add mode in steps for step group (#153)
Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>
2023-04-04 14:14:44 +08:00
wyike c730c05966
Feat: prometheus check steps provider (#149)
* check-metrics

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>

small fix

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>

small fix

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>

refactor some code

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>

add tests

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>

* try to fix go lint

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>

small fix

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>

small fix

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>

* fix test

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>

* fix comments

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>

small fix

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>

delete useless code

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>

add nolint

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>

small fix

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>

small fix

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>

fix comments

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>

fix comments

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>

---------

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>
2023-03-28 10:26:57 +08:00
dependabot[bot] 5b55dbd928
Chore(deps): bump github.com/crossplane/crossplane-runtime from 0.14.1-0.20210722005935-0b469fcc77cd to 0.16.1 (#148)
Chore(deps): bump github.com/crossplane/crossplane-runtime

Bumps [github.com/crossplane/crossplane-runtime](https://github.com/crossplane/crossplane-runtime) from 0.14.1-0.20210722005935-0b469fcc77cd to 0.16.1.
- [Release notes](https://github.com/crossplane/crossplane-runtime/releases)
- [Commits](https://github.com/crossplane/crossplane-runtime/commits/v0.16.1)

---
updated-dependencies:
- dependency-name: github.com/crossplane/crossplane-runtime
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-27 17:48:24 +08:00
Tianxin Dong 59e7c1c967
Fix: fix step depends on skip (#146)
* Fix: fix step depends on skip

Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>

* rename the util function

Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>

---------

Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>
2023-03-13 16:53:19 +08:00
dependabot[bot] 8ea00c6a92
Chore(deps): bump golang.org/x/net from 0.3.0 to 0.7.0 (#137)
Bumps [golang.org/x/net](https://github.com/golang/net) from 0.3.0 to 0.7.0.
- [Release notes](https://github.com/golang/net/releases)
- [Commits](https://github.com/golang/net/compare/v0.3.0...v0.7.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-02 11:44:35 +08:00
Tianxin Dong 3420e6d9ad
Feat: add recycle grouped workflow run with cron (#143)
Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>
2023-03-02 11:44:09 +08:00
dependabot[bot] b72a2f9a77
Chore(deps): bump github.com/containerd/containerd from 1.6.12 to 1.6.18 (#136)
Bumps [github.com/containerd/containerd](https://github.com/containerd/containerd) from 1.6.12 to 1.6.18.
- [Release notes](https://github.com/containerd/containerd/releases)
- [Changelog](https://github.com/containerd/containerd/blob/main/RELEASES.md)
- [Commits](https://github.com/containerd/containerd/compare/v1.6.12...v1.6.18)

---
updated-dependencies:
- dependency-name: github.com/containerd/containerd
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-02 11:04:14 +08:00
Tianxin Dong 8eae143050
Feat: add default suspend message (#141)
Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>
2023-02-27 10:31:18 +08:00
Tianxin Dong 6da55e89cb
Fix: fix auto resume in suspend (#140)
Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>
2023-02-24 17:36:34 +08:00
Tianxin Dong 1a2e8a10e5
Feat: support multi suspend in definition and fix resume (#139)
Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>
2023-02-24 10:35:38 +08:00
Tianxin Dong ecaf98dcd0
Chore: refactor suspend step and add op.#Suspend (#138)
* Chore: refactor suspend step and add op.#Suspend

Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>

* add more test

Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>

---------

Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>
2023-02-23 13:41:41 +08:00
Tianxin Dong edc78492f1
Fix: remove patch in apply to make it standalone action (#134)
Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>
2023-02-15 18:02:59 +08:00
Tianxin Dong 92d7b6a260
Feat: support resume a specific suspend step in workflow (#133)
* Feat: support resume a specific suspend step in workflow

Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>

* resolve the comment

Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>

---------

Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>
2023-02-14 17:24:19 +08:00
Tianxin Dong 2daa3cb189
Feat: watch event listener in controller for a faster reconcile (#131)
Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>
2023-02-07 11:45:34 +08:00
Jianbo Sun 7135326581
Feat: update workflow version and refactor the config provider (#130)
Update workflow version and refactor the config provider

Signed-off-by: Jianbo Sun <jianbo.sjb@alibaba-inc.com>
2023-02-06 16:42:05 +08:00
Somefive c4a399535a
Feat: support sharding (#129)
Signed-off-by: Yin Da <yd219913@alibaba-inc.com>
2023-02-06 15:55:06 +08:00
Tianxin Dong 82e074888f
Chore: update cue to v0.5.0-beta.5 (#126)
Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>
2023-02-06 11:58:13 +08:00
qiaozp 953a71ce4b
Chore: export inputItem and outputItem (#125)
* Chore: export inputItem and outputItem

Signed-off-by: Qiaozp <qiaozhongpei.qzp@alibaba-inc.com>

* make reviewable

Signed-off-by: Qiaozp <qiaozhongpei.qzp@alibaba-inc.com>

---------

Signed-off-by: Qiaozp <qiaozhongpei.qzp@alibaba-inc.com>
2023-02-03 14:24:11 +08:00
Tianxin Dong 6cae385d5e
Fix: fix panic when workflow is skipped (#122)
Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>
2023-01-30 20:15:32 +08:00
Tianxin Dong e8f00ceab2
Fix: add requeue in skip and optimize readme (#120)
Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>

Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>
2023-01-17 17:01:50 +08:00
Tianxin Dong 95c2164618
Fix: upgrade definitions in charts (#118)
Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>

Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>
2023-01-12 14:23:47 +08:00
Tianxin Dong 03ed8afad3
Fix: omit context submission when the context is created (#117)
Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>

Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>
2023-01-12 10:55:58 +08:00
Tianxin Dong 555573cf64
Feat: add controller version in backup controller (#114)
Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>

Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>
2023-01-11 19:08:54 +08:00
Tianxin Dong ad226c2c3b
Fix: delete table of contents (#116)
Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>

Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>
2023-01-10 16:36:23 +08:00
Tianxin Dong 9b3926a9ab
Feat: add explanation in readme (#115)
Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>

Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>
2023-01-10 14:43:51 +08:00
Tianxin Dong 6124a964eb
Fix: add cache handle if the patch is failed in end reconcile (#113)
Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>

Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>
2023-01-03 18:01:46 +08:00
Tianxin Dong 6ae0c5cbc4
Feat: add set value and use it for inputs (#112)
Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>

Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>
2022-12-30 18:26:36 +08:00
Tianxin Dong 3da7f1a4df
Fix: optimize workflow context to avoid conflict error (#111)
Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>

Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>
2022-12-28 15:13:59 +08:00
Tianxin Dong 911095d19f
Fix: clear user info in ctx when create cm for workflow context (#110)
Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>

Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>
2022-12-27 20:40:50 +08:00
Tianxin Dong 3afde47f34
Fix: fix suspend bug in dag that caused by patch step status (#109)
Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>

Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>
2022-12-27 15:45:17 +08:00
Tianxin Dong c83b1c27be
Fix: optimize restart from step func for unification (#108)
Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>

Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>
2022-12-26 16:02:25 +08:00
Tianxin Dong 03956f632a
Chore: deprecate useless data in context (#107)
Deprecate: deprecate useless data in context

Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>

Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>
2022-12-26 15:23:48 +08:00
Tianxin Dong f821bc3485
Fix: fix invalid debug cm name (#106)
Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>

Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>
2022-12-26 11:55:28 +08:00
Somefive 81d587e0b6
Chore: refactor test (#105)
Signed-off-by: Yin Da <yd219913@alibaba-inc.com>

Signed-off-by: Yin Da <yd219913@alibaba-inc.com>
2022-12-21 11:28:41 +08:00
Tianxin Dong 3053ee2676
Feat: add example for build and push image (#104)
Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>

Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>
2022-12-20 17:39:20 +08:00
Tianxin Dong 0e0ff8d300
Feat: add built-in defs in charts (#103)
Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>

Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>
2022-12-14 13:31:34 +08:00
Tianxin Dong fd1cdcc4ed
Feat: add retry failed step operation (#101)
Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>

Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>
2022-12-13 21:44:42 +08:00
Tianxin Dong a8868ee0d6
Chore: Bump version of skip duplication actions (#102)
Fix: update skip duplication actions' version

Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>

Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>
2022-12-12 10:40:35 +08:00
Tianxin Dong 418d0a8ffc
Feat: add webhook and auto generate name for steps (#100)
Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>

Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>
2022-12-09 18:13:47 +08:00
Tianxin Dong 6e9551b213
Feat: optimize the sls producer instance (#99)
Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>

Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>
2022-12-09 17:12:25 +08:00
Tianxin Dong d2722774a5
Fix: set gvk for created cm to fix patch (#96)
Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>

Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>
2022-12-08 11:07:37 +08:00
Tianxin Dong 39486331b8
Feat: add apply terraform example yaml (#95)
Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>

Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>
2022-12-07 11:06:43 +08:00
Tianxin Dong 9eda5ba624
Feat: add feature gates for patch step status at once (#94)
Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>

Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>
2022-12-06 14:20:50 +08:00
Tianxin Dong 46c102e914
Fix: fix the usage of http client and run crd (#93)
Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>

Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>
2022-11-30 17:19:52 +08:00
Tianxin Dong 960d4bbb12
Feat: add arch img in readme (#92)
Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>

Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>
2022-11-29 21:55:05 +08:00
yangs de4facb08c
Fix: input error stores structure type data (#91)
Signed-off-by: songyang.song <songyang.song@alibaba-inc.com>

Signed-off-by: songyang.song <songyang.song@alibaba-inc.com>
Co-authored-by: songyang.song <songyang.song@alibaba-inc.com>
2022-11-29 17:30:41 +08:00
Tianxin Dong 3f1eab660b
Feat: add substitute unstructured object (#90)
* Feat: add substitute unstructured object

Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>

* change the var name

Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>

Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>
2022-11-25 11:05:20 +08:00
Tianxin Dong a8d0af5295
Feat: add patch in kube.apply (#89)
Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>

Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>
2022-11-21 17:19:33 +08:00
Tianxin Dong b9c2ea4ee6
Feat: add user agent in config (#88)
Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>

Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>
2022-11-18 11:19:14 +08:00
Tianxin Dong 48df288898
Fix: open list lit for fill value (#87)
Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>

Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>
2022-11-15 10:14:45 +08:00
Tianxin Dong a7b9c55310
Fix: fix fill array with array in inputs (#86)
* Fix: fix fill array with array in inputs

Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>

* deprecate fill value by script

Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>

Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>
2022-11-14 17:13:20 +08:00
Tianxin Dong 4ac7113a5c
Fix: unify stdlib for workflow and kubevela (#85)
Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>

Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>
2022-11-11 16:35:39 +08:00
Tianxin Dong 7f1c99ceb7
Feat: add controller require for canary (#84)
Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>

Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>
2022-11-11 10:42:45 +08:00
213 changed files with 15149 additions and 11462 deletions

2
.github/CODEOWNERS vendored
View File

@ -1,3 +1,3 @@
# This file is a github code protect rule follow the codeowners https://docs.github.com/en/github/creating-cloning-and-archiving-repositories/creating-a-repository-on-github/about-code-owners#example-of-a-codeowners-file
* @FogDong @wonderflow @leejanee @Somefive
* @FogDong @wonderflow @leejanee @Somefive @anoop2811

View File

@ -7,16 +7,16 @@ on:
jobs:
# align with crossplane's choice https://github.com/crossplane/crossplane/blob/master/.github/workflows/backport.yml
open-pr:
runs-on: ubuntu-20.04
runs-on: ubuntu-22.04
if: github.event.pull_request.merged
steps:
- name: Checkout
uses: actions/checkout@v2
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Open Backport PR
uses: zeebe-io/backport-action@v0.0.6
uses: korthout/backport-action@v3
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
github_workspace: ${{ github.workspace }}

View File

@ -1,40 +1,42 @@
name: HelmChart
on:
push:
tags:
- "v*"
workflow_dispatch: {}
env:
BUCKET: ${{ secrets.OSS_BUCKET }}
ENDPOINT: ${{ secrets.OSS_ENDPOINT }}
ACCESS_KEY: ${{ secrets.OSS_ACCESS_KEY }}
ACCESS_KEY_SECRET: ${{ secrets.OSS_ACCESS_KEY_SECRET }}
jobs:
publish-images:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@master
- uses: actions/checkout@v4
- name: Get the vars
id: vars
run: |
if [[ ${GITHUB_REF} == "refs/heads/main" ]]; then
echo ::set-output name=TAG::latest
echo "TAG=latest" >> $GITHUB_OUTPUT
else
echo ::set-output name=TAG::${GITHUB_REF#refs/tags/}
echo "TAG=${GITHUB_REF#refs/tags/}" >> $GITHUB_OUTPUT
fi
echo "GITVERSION=git-$(git rev-parse --short HEAD)" >> $GITHUB_OUTPUT
- name: Login ghcr.io
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Login Docker Hub
uses: docker/login-action@v1
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKER_USER }}
password: ${{ secrets.DOCKER_PASSWORD }}
- uses: docker/setup-qemu-action@v1
- uses: docker/setup-buildx-action@v1
- uses: docker/setup-qemu-action@v3
- uses: docker/setup-buildx-action@v3
with:
driver-opts: image=moby/buildkit:master
- uses: docker/build-push-action@v2
- uses: docker/build-push-action@v3
name: Build & Pushing vela-workflow for Dockerhub
with:
context: .
@ -46,26 +48,28 @@ jobs:
push: ${{ github.event_name != 'pull_request' }}
build-args: |
GOPROXY=https://proxy.golang.org
VERSION=${{ steps.vars.outputs.TAG }}
GIT_VERSION=${{ steps.vars.outputs.GITVERSION }}
tags: |-
docker.io/oamdev/vela-workflow:${{ steps.vars.outputs.TAG }}
ghcr.io/${{ github.repository_owner }}/oamdev/vela-workflow:${{ steps.vars.outputs.TAG }}
publish-charts:
env:
HELM_CHART: charts/vela-workflow
LOCAL_OSS_DIRECTORY: .oss/
runs-on: ubuntu-20.04
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@master
- uses: actions/checkout@v4
- name: Get the vars
id: vars
run: |
echo ::set-output name=TAG::${GITHUB_REF#refs/tags/}
echo "TAG=${GITHUB_REF#refs/tags/}" >> $GITHUB_OUTPUT
- name: Install Helm
uses: azure/setup-helm@v1
uses: azure/setup-helm@v4.3.0
with:
version: v3.4.0
- name: Setup node
uses: actions/setup-node@v2
uses: actions/setup-node@v4
with:
node-version: '14'
- uses: oprypin/find-latest-tag@v1
@ -90,15 +94,22 @@ jobs:
sed -i "s/latest/${image_tag}/g" $HELM_CHART/values.yaml
chart_smever=${chart_version#"v"}
sed -i "s/0.1.0/$chart_smever/g" $HELM_CHART/Chart.yaml
- name: Install ossutil
run: wget http://gosspublic.alicdn.com/ossutil/1.7.0/ossutil64 && chmod +x ossutil64 && mv ossutil64 ossutil
- name: Configure Alibaba Cloud OSSUTIL
run: ./ossutil --config-file .ossutilconfig config -i ${ACCESS_KEY} -k ${ACCESS_KEY_SECRET} -e ${ENDPOINT} -c .ossutilconfig
- name: sync cloud to local
run: ./ossutil --config-file .ossutilconfig sync oss://$BUCKET/core $LOCAL_OSS_DIRECTORY
- name: Package helm charts
- uses: jnwng/github-app-installation-token-action@v2
id: get_app_token
with:
appId: 340472
installationId: 38064967
privateKey: ${{ secrets.GH_KUBEVELA_APP_PRIVATE_KEY }}
- name: Sync Chart Repo
run: |
helm package $HELM_CHART --destination $LOCAL_OSS_DIRECTORY
helm repo index --url https://$BUCKET.$ENDPOINT/core $LOCAL_OSS_DIRECTORY
- name: sync local to cloud
run: ./ossutil --config-file .ossutilconfig sync $LOCAL_OSS_DIRECTORY oss://$BUCKET/core -f
git config --global user.email "135009839+kubevela[bot]@users.noreply.github.com"
git config --global user.name "kubevela[bot]"
git clone https://x-access-token:${{ steps.get_app_token.outputs.token }}@github.com/kubevela/charts.git kubevela-charts
helm package $HELM_CHART --destination ./kubevela-charts/docs/
helm repo index --url https://kubevela.github.io/charts ./kubevela-charts/docs/
cd kubevela-charts/
git add docs/
chart_version=${GITHUB_REF#refs/tags/}
git commit -m "update vela-workflow chart ${chart_version}"
git push https://x-access-token:${{ steps.get_app_token.outputs.token }}@github.com/kubevela/charts.git

View File

@ -16,7 +16,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v2
uses: actions/checkout@v4
- name: Build Vela Workflow image from Dockerfile
run: |
@ -45,7 +45,7 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@v2
uses: actions/checkout@v4
- name: Initialize CodeQL
uses: github/codeql-action/init@v1

View File

@ -15,21 +15,21 @@ on:
env:
# Common versions
GO_VERSION: '1.19'
GOLANGCI_VERSION: 'v1.49'
K3D_IMAGE_VERSION: '[\"v1.20\",\"v1.24\"]'
K3D_IMAGE_VERSIONS: '[\"v1.20\",\"v1.24\"]'
GO_VERSION: '1.23.8'
GOLANGCI_VERSION: 'v1.60.1'
K3D_IMAGE_VERSION: '[\"v1.31\"]'
K3D_IMAGE_VERSIONS: '[\"v1.31\"]'
jobs:
detect-noop:
runs-on: ubuntu-20.04
runs-on: ubuntu-22.04
outputs:
noop: ${{ steps.noop.outputs.should_skip }}
steps:
- name: Detect No-op Changes
id: noop
uses: fkirc/skip-duplicate-actions@v4.0.0
uses: fkirc/skip-duplicate-actions@v5.3.0
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
paths_ignore: '["**.md", "**.mdx", "**.png", "**.jpg"]'
@ -37,7 +37,7 @@ jobs:
concurrent_skipping: false
set-k8s-matrix:
runs-on: ubuntu-20.04
runs-on: ubuntu-22.04
outputs:
matrix: ${{ steps.set-k8s-matrix.outputs.matrix }}
steps:
@ -45,13 +45,13 @@ jobs:
run: |
if [[ "${{ github.ref }}" == refs/tags/v* ]]; then
echo "pushing tag: ${{ github.ref_name }}"
echo "::set-output name=matrix::${{ env.K3D_IMAGE_VERSIONS }}"
echo "matrix=${{ env.K3D_IMAGE_VERSIONS }}" >> $GITHUB_OUTPUT
else
echo "::set-output name=matrix::${{ env.K3D_IMAGE_VERSION }}"
echo "matrix=${{ env.K3D_IMAGE_VERSION }}" >> $GITHUB_OUTPUT
fi
e2e-tests:
runs-on: ubuntu-20.04
runs-on: ubuntu-22.04
needs: [ detect-noop,set-k8s-matrix ]
if: needs.detect-noop.outputs.noop != 'true'
strategy:
@ -64,10 +64,10 @@ jobs:
steps:
- name: Check out code into the Go module directory
uses: actions/checkout@v2
uses: actions/checkout@v4
- name: Setup Go
uses: actions/setup-go@v2
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
@ -83,7 +83,7 @@ jobs:
- name: Calculate K3d args
run: |
EGRESS_ARG=""
if [[ "${{ matrix.k8s-version }}" == v1.24 ]]; then
if [[ "${{ matrix.k8s-version }}" == v1.26 ]]; then
EGRESS_ARG="--k3s-arg --egress-selector-mode=disabled@server:0"
fi
echo "EGRESS_ARG=${EGRESS_ARG}" >> $GITHUB_ENV
@ -110,7 +110,7 @@ jobs:
run: make end-e2e
- name: Upload coverage report
uses: codecov/codecov-action@v1
uses: codecov/codecov-action@v5
with:
token: ${{ secrets.CODECOV_TOKEN }}
files: /tmp/e2e-profile.out

View File

@ -13,19 +13,19 @@ on:
env:
# Common versions
GO_VERSION: '1.19'
GOLANGCI_VERSION: 'v1.49'
GO_VERSION: '1.23.8'
GOLANGCI_VERSION: 'v1.60.1'
jobs:
detect-noop:
runs-on: ubuntu-20.04
runs-on: ubuntu-22.04
outputs:
noop: ${{ steps.noop.outputs.should_skip }}
steps:
- name: Detect No-op Changes
id: noop
uses: fkirc/skip-duplicate-actions@v4.0.0
uses: fkirc/skip-duplicate-actions@v5.3.0
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
paths_ignore: '["**.md", "**.mdx", "**.png", "**.jpg"]'
@ -33,52 +33,52 @@ jobs:
concurrent_skipping: false
staticcheck:
runs-on: ubuntu-20.04
runs-on: ubuntu-22.04
needs: detect-noop
if: needs.detect-noop.outputs.noop != 'true'
steps:
- name: Setup Go
uses: actions/setup-go@v2
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
- name: Checkout
uses: actions/checkout@v2
uses: actions/checkout@v4
with:
submodules: true
- name: Cache Go Dependencies
uses: actions/cache@v2
uses: actions/cache@v4
with:
path: .work/pkg
key: ${{ runner.os }}-pkg-${{ hashFiles('**/go.sum') }}
restore-keys: ${{ runner.os }}-pkg-
- name: Install StaticCheck
run: go install honnef.co/go/tools/cmd/staticcheck@2022.1
run: go install honnef.co/go/tools/cmd/staticcheck@v0.5.1
- name: Static Check
run: staticcheck ./...
lint:
runs-on: ubuntu-20.04
runs-on: ubuntu-22.04
needs: detect-noop
if: needs.detect-noop.outputs.noop != 'true'
steps:
- name: Setup Go
uses: actions/setup-go@v2
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
- name: Checkout
uses: actions/checkout@v2
uses: actions/checkout@v4
with:
submodules: true
- name: Cache Go Dependencies
uses: actions/cache@v2
uses: actions/cache@v4
with:
path: .work/pkg
key: ${{ runner.os }}-pkg-${{ hashFiles('**/go.sum') }}

View File

@ -7,16 +7,16 @@ on:
jobs:
bot:
runs-on: ubuntu-20.04
runs-on: ubuntu-22.04
steps:
- name: Checkout Actions
uses: actions/checkout@v2
uses: actions/checkout@v4
with:
repository: "oam-dev/kubevela-github-actions"
path: ./actions
ref: v0.4.2
- name: Setup Node.js
uses: actions/setup-node@v3
uses: actions/setup-node@v4
with:
node-version: '14'
cache: 'npm'
@ -44,7 +44,7 @@ jobs:
allow-edits: "false"
permission-level: read
- name: Handle Command
uses: actions/github-script@v4
uses: actions/github-script@v7
env:
VERSION: ${{ steps.command.outputs.command-arguments }}
with:
@ -65,7 +65,7 @@ jobs:
})
console.log("Added '" + label + "' label.")
- name: Checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Open Backport PR

View File

@ -7,18 +7,18 @@ on:
workflow_dispatch: {}
env:
GO_VERSION: '1.19'
GO_VERSION: '1.23.8'
jobs:
detect-noop:
runs-on: ubuntu-20.04
runs-on: ubuntu-22.04
outputs:
noop: ${{ steps.noop.outputs.should_skip }}
steps:
- name: Detect No-op Changes
id: noop
uses: fkirc/skip-duplicate-actions@v3.3.0
uses: fkirc/skip-duplicate-actions@v5.3.0
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
paths_ignore: '["**.md", "**.mdx", "**.png", "**.jpg"]'
@ -26,7 +26,7 @@ jobs:
concurrent_skipping: false
image-multi-arch:
runs-on: ubuntu-20.04
runs-on: ubuntu-22.04
needs: detect-noop
if: needs.detect-noop.outputs.noop != 'true'
strategy:
@ -35,12 +35,12 @@ jobs:
steps:
- name: Checkout
uses: actions/checkout@v2
uses: actions/checkout@v4
with:
submodules: true
- name: Setup Go
uses: actions/setup-go@v2
uses: actions/setup-go@v4
with:
go-version: ${{ env.GO_VERSION }}
@ -61,7 +61,7 @@ jobs:
steps:
- name: Checkout
uses: actions/checkout@v2
uses: actions/checkout@v4
with:
submodules: true

View File

@ -13,20 +13,19 @@ on:
env:
# Common versions
GO_VERSION: '1.19'
GOLANGCI_VERSION: 'v1.49'
KIND_VERSION: 'v0.7.0'
GO_VERSION: '1.23.8'
GOLANGCI_VERSION: 'v1.60.1'
jobs:
detect-noop:
runs-on: ubuntu-20.04
runs-on: ubuntu-22.04
outputs:
noop: ${{ steps.noop.outputs.should_skip }}
steps:
- name: Detect No-op Changes
id: noop
uses: fkirc/skip-duplicate-actions@v3.3.0
uses: fkirc/skip-duplicate-actions@v5.3.0
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
paths_ignore: '["**.md", "**.mdx", "**.png", "**.jpg"]'
@ -34,24 +33,24 @@ jobs:
concurrent_skipping: false
unit-tests:
runs-on: ubuntu-20.04
runs-on: ubuntu-22.04
needs: detect-noop
if: needs.detect-noop.outputs.noop != 'true'
steps:
- name: Set up Go
uses: actions/setup-go@v1
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
id: go
- name: Check out code into the Go module directory
uses: actions/checkout@v2
uses: actions/checkout@v4
with:
submodules: true
- name: Cache Go Dependencies
uses: actions/cache@v2
uses: actions/cache@v4
with:
path: .work/pkg
key: ${{ runner.os }}-pkg-${{ hashFiles('**/go.sum') }}
@ -62,19 +61,19 @@ jobs:
sudo apt-get install -y golang-ginkgo-dev
- name: install Kubebuilder
uses: RyanSiu1995/kubebuilder-action@v1.2
uses: RyanSiu1995/kubebuilder-action@v1.3.1
with:
version: 3.1.0
version: 3.15.1
kubebuilderOnly: false
kubernetesVersion: v1.21.2
kubernetesVersion: v1.31.10
- name: Run Make test
run: make test
- name: Upload coverage report
uses: codecov/codecov-action@v1
uses: codecov/codecov-action@v5
with:
token: ${{ secrets.CODECOV_TOKEN }}
file: ./coverage.txt
files: ./coverage.txt
flags: unit-test
name: codecov-umbrella

View File

@ -1,17 +1,9 @@
run:
timeout: 10m
skip-files:
- "zz_generated\\..+\\.go$"
- ".*_test.go$"
skip-dirs:
- "hack"
- "e2e"
output:
# colored-line-number|line-number|json|tab|checkstyle|code-climate, default is "colored-line-number"
format: colored-line-number
formats: colored-line-number
linters-settings:
errcheck:
@ -26,7 +18,7 @@ linters-settings:
# [deprecated] comma-separated list of pairs of the form pkg:regex
# the regex is used to ignore names within pkg. (default "fmt:.*").
# see https://github.com/kisielk/errcheck#the-deprecated-method for details
ignore: fmt:.*,io/ioutil:^Read.*
exclude-functions: fmt:.*,io/ioutil:^Read.*
exhaustive:
# indicates that switch statements are to be considered exhaustive if a
@ -105,7 +97,6 @@ linters-settings:
linters:
enable:
- megacheck
- govet
- gocyclo
- gocritic
@ -116,10 +107,21 @@ linters:
- unconvert
- misspell
- nakedret
- gosimple
- staticcheck
fast: false
issues:
exclude-files:
- "zz_generated\\..+\\.go$"
- ".*_test.go$"
exclude-dirs:
- "hack"
- "e2e"
# Excluding configuration per-path and per-linter
exclude-rules:
# Exclude some linters from running on tests files.

View File

@ -1,8 +1,6 @@
ARG BASE_IMAGE
# Build the manager binary
FROM --platform=${BUILDPLATFORM:-linux/amd64} golang:1.19-alpine as builder
ARG GOPROXY
ENV GOPROXY=${GOPROXY:-https://goproxy.cn}
FROM --platform=${BUILDPLATFORM:-linux/amd64} golang:1.23.8-alpine3.21 as builder
WORKDIR /workspace
# Copy the Go Modules manifests
COPY go.mod go.mod
@ -20,9 +18,10 @@ COPY version/ version/
# Build
ARG TARGETARCH
ARG VERSION
ARG GITVERSION
RUN GO111MODULE=on CGO_ENABLED=0 GOOS=linux GOARCH=${TARGETARCH} \
go build -a -ldflags "-s -w" \
go build -a -ldflags "-s -w -X github.com/kubevela/workflow/version.VelaVersion=${VERSION:-undefined} -X github.com/kubevela/workflow/version.GitRevision=${GITVERSION:-undefined}" \
-o vela-workflow-${TARGETARCH} cmd/main.go
FROM ${BASE_IMAGE:-alpine:3.15}

View File

@ -1,8 +1,6 @@
ARG BASE_IMAGE
# Build the manager binary
FROM --platform=${BUILDPLATFORM:-linux/amd64} golang:1.19-alpine as builder
ARG GOPROXY
ENV GOPROXY=${GOPROXY:-https://goproxy.cn}
FROM --platform=${BUILDPLATFORM:-linux/amd64} golang:1.23.8-alpine3.21 as builder
WORKDIR /workspace
# Copy the Go Modules manifests
COPY go.mod go.mod

170
README.md
View File

@ -1,55 +1,112 @@
<h1 align="center">KubeVela Workflow</h1>
# KubeVela Workflow
[![Go Report Card](https://goreportcard.com/badge/github.com/kubevela/workflow)](https://goreportcard.com/report/github.com/kubevela/workflow)
[![codecov](https://codecov.io/gh/kubevela/workflow/branch/main/graph/badge.svg)](https://codecov.io/gh/kubevela/workflow)
[![LICENSE](https://img.shields.io/github/license/kubevela/workflow.svg?style=flat-square)](/LICENSE)
[![Total alerts](https://img.shields.io/lgtm/alerts/g/kubevela/workflow.svg?logo=lgtm&logoWidth=18)](https://lgtm.com/projects/g/kubevela/workflow/alerts/)
<br/>
<br/>
<h2 align="center">Table of Contents</h2>
* [Why use KubeVela Workflow?](#why-use-kubevela-workflow)
* [How can KubeVela Workflow be used?](#how-can-kubevela-workflow-be-used)
* [Installation](#installation)
* [Quick Start](#quick-start)
* [Features](#features)
* [How to write custom steps?](#how-to-write-custom-steps)
* [Contributing](#contributing)
## What is KubeVela Workflow
<br/>
<br/>
KubeVela Workflow is an open-source cloud-native workflow project that can use to orchestrate CI/CD process, terraform resources, multi-kubernetes-clusters management and even your own functional calls.
<h2 align="center">Why use KubeVela Workflow</h2>
You can [install](#installation) KubeVela Workflow and use it, or import the code as an [sdk](#how-can-kubevela-workflow-be-used) of an IaC-based workflow engine in your own repository.
*The main differences between KubeVela Workflow and other cloud-native workflows are*:
All the steps in the workflow is based on IaC(Cue): every step has a `type` for abstract and reuse, the `step-type` is programmed in [CUE](https://cuelang.org/) language and easy to customize.
That is to say, **you can use atomic capabilities like a function call in every step, instead of just creating a pod.**
## Why use KubeVela Workflow
<h1 align="center"><a href="https://kubevela.io/docs/end-user/pipeline/workflowrun"><img src="https://static.kubevela.net/images/1.6/workflow-arch.png" alt="workflow arch" align="center" width="700px" /></a></h1>
🌬️ **Lightweight Workflow Engine**: KubeVela Workflow won't create a pod or job for process control. Instead, everything can be done in steps and there will be no redundant resource consumption.
**Flexible, Extensible and Programmable**: All the steps are based on the [CUE](https://cuelang.org/) language, which means if you want to customize a new step, you just need to write CUE codes and no need to compile or build anything, KubeVela Workflow will evaluate these codes.
**Flexible, Extensible and Programmable**: Every step has a type, and all the types are based on the [CUE](https://cuelang.org/) language, which means if you want to customize a new step type, you just need to write CUE codes and no need to compile or build anything, KubeVela Workflow will evaluate these codes.
💪 **Rich built-in capabilities**: You can control the process with conditional judgement, inputs/outputs, timeout, etc. You can also use the built-in steps to do some common tasks, such as `deploy resources`, `suspend`, `notification`, `step-group` and more!
💪 **Rich built-in capabilities**: You can control the process with conditional judgement, inputs/outputs, timeout, etc. You can also use the built-in step types to do some common tasks, such as `deploy resources`, `suspend`, `notification`, `step-group` and more!
🔐 **Safe execution with schema checksum checking**: Every step will be checked with the schema, which means you can't run a step with a wrong parameter. This will ensure the safety of the workflow execution.
<h2 align="center">How can KubeVela Workflow be used</h2>
## Try KubeVela Workflow
During the evolution of the [OAM](https://oam.dev/) and [KubeVela project](https://github.com/kubevela/kubevela), **workflow**, as an important part to control the delivery process, has gradually matured. Therefore, we separated the workflow code from the KubeVela repository to make it standalone. As a general workflow engine, it can be used directly or as an SDK by other projects.
Run your first WorkflowRun to distribute secrets, build and push your image, and apply the resources in the cluster! Image build can take some time, you can use `vela workflow logs build-push-image --step build-push` to check the logs of building.
### As a standalone workflow engine
```
apiVersion: core.oam.dev/v1alpha1
kind: WorkflowRun
metadata:
name: build-push-image
namespace: default
spec:
workflowSpec:
steps:
# or use kubectl create secret generic git-token --from-literal='GIT_TOKEN=<your-token>'
- name: create-git-secret
type: export2secret
properties:
secretName: git-secret
data:
token: <git token>
# or use kubectl create secret docker-registry docker-regcred \
# --docker-server=https://index.docker.io/v1/ \
# --docker-username=<your-username> \
# --docker-password=<your-password>
- name: create-image-secret
type: export2secret
properties:
secretName: image-secret
kind: docker-registry
dockerRegistry:
username: <docker username>
password: <docker password>
- name: build-push
type: build-push-image
properties:
# use your kaniko executor image like below, if not set, it will use default image oamdev/kaniko-executor:v1.9.1
# kanikoExecutor: gcr.io/kaniko-project/executor:latest
# you can use context with git and branch or directly specify the context, please refer to https://github.com/GoogleContainerTools/kaniko#kaniko-build-contexts
context:
git: github.com/FogDong/simple-web-demo
branch: main
image: fogdong/simple-web-demo:v1
# specify your dockerfile, if not set, it will use default dockerfile ./Dockerfile
# dockerfile: ./Dockerfile
credentials:
image:
name: image-secret
# buildArgs:
# - key="value"
# platform: linux/arm
- name: apply-deploy
type: apply-deployment
properties:
image: fogdong/simple-web-demo:v1
```
Unlike the workflow in the KubeVela Application, this workflow will only be executed once, and will **not keep reconciliation**, **no garbage collection** when the workflow object deleted or updated. You can use it for **one-time** operations like:
## Quick Start
- Glue and orchestrate operations, such as control the deploy process of multiple resources(e.g. your Applications), scale up/down, read-notify processes, or the sequence between http requests.
- Orchestrate delivery process without day-2 management, just deploy. The most common use case is to initialize your infrastructure for some environment.
After installation, you can either run a WorkflowRun directly or from a Workflow Template. Every step in the workflow should have a type and some parameters, in which defines how this step works. You can use the [built-in step type definitions](https://kubevela.io/docs/next/end-user/workflow/built-in-workflow-defs) or [write your own custom step types](#how-to-write-custom-steps).
Please refer to the [installation](#installation) and [quick start](#quick-start) sections for more.
> Please checkout the [WorkflowRun Specification](https://kubevela.io/docs/next/end-user/pipeline/workflowrun#workflowrun) and [WorkflowRun Status](https://kubevela.io/docs/next/end-user/pipeline/workflowrun#status) for more details.
### As an SDK
### Run a WorkflowRun directly
You can use KubeVela Workflow as an SDK to integrate it into your project. For example, the KubeVela Project use it to control the process of application delivery.
For more, please refer to the following examples:
You just need to initialize a workflow instance and generate all the task runners with the instance, then execute the task runners. Please check out the example in [Workflow](https://github.com/kubevela/workflow/blob/main/controllers/workflowrun_controller.go#L101) or [KubeVela](https://github.com/kubevela/kubevela/blob/master/pkg/controller/core.oam.dev/v1alpha2/application/application_controller.go#L197).
- [Control the delivery process of multiple resources(e.g. your Applications)](./examples/multiple-apps.md)
- [Request a specified URL and then use the response as a message to notify](./examples/request-and-notify.md)
- [Automatically initialize the environment with terraform](./examples/initialize-env.md)
<h2 align="center">Installation</h2>
### Run a WorkflowRun from a Workflow Template
Please refer to the following examples:
- [Run the Workflow Template with different context to control the process](./examples/run-with-template.md)
## Installation
### Install Workflow
@ -69,38 +126,11 @@ If you have installed KubeVela, you can install Workflow with the KubeVela Addon
vela addon enable vela-workflow
```
### Install Vela CLI
### Install Vela CLI(Optional)
Please checkout: [Install Vela CLI](https://kubevela.io/docs/installation/kubernetes#install-vela-cli)
### Install built-in steps in KubeVela(Optional)
Use `vela def apply <directory>` to install built-in step definitions in [KubeVela](https://github.com/kubevela/kubevela/tree/master/vela-templates/definitions/internal/workflowstep) and [Workflow Addon](https://github.com/kubevela/catalog/tree/master/addons/vela-workflow/definitions).
> Note that if you installed Workflow using KubeVela Addon, then the definitions in the addon will be installed automatically.
Checkout this [doc](https://kubevela.io/docs/end-user/workflow/built-in-workflow-defs) for more details.
<h2 align="center">Quick Start</h2>
You can either run a WorkflowRun directly or from a Workflow Template.
> Please checkout the [WorkflowRun Specification](https://kubevela.io/docs/next/end-user/pipeline/workflowrun#workflowrun) and [WorkflowRun Status](https://kubevela.io/docs/next/end-user/pipeline/workflowrun#status) for more details.
### Run a WorkflowRun directly
Please refer to the following examples:
- [Control the delivery process of multiple resources(e.g. your Applications)](./examples/multiple-apps.md)
- [Request a specified URL and then use the response as a message to notify](./examples/request-and-notify.md)
### Run a WorkflowRun from a Workflow Template
Please refer to the following examples:
- [Run the Workflow Template with different context to control the process](./examples/run-with-template)
<h2 align="center">Features</h2>
## Features
- [Operate WorkflowRun](https://kubevela.io/docs/next/end-user/pipeline/workflowrun#operate-workflowrun)
- [Suspend and Resume](https://kubevela.io/docs/next/end-user/pipeline/workflowrun#suspend-and-resume)
@ -112,7 +142,12 @@ Please refer to the following examples:
- [Custom Context Data](https://kubevela.io/docs/next/end-user/pipeline/workflowrun#custom-context-data)
- [Built-in Context Data](https://kubevela.io/docs/next/end-user/pipeline/workflowrun#built-in-context-data)
<h2 align="center">How to write custom steps</h2>
## Step Types
### Built-in Step Types
Please check out the [built-in step definitions](https://kubevela.io/docs/next/end-user/workflow/built-in-workflow-defs) whose scope is valid in `WorkflowRun`.
### Write Your Custom Step Types
If you're not familiar with CUE, please check out the [CUE documentation](https://kubevela.io/docs/platform-engineers/cue/basic) first.
@ -120,6 +155,23 @@ You can customize your steps with CUE and some [built-in operations](https://kub
> Note that you cannot use the [application operations](https://kubevela.io/docs/next/platform-engineers/workflow/cue-actions#application-operations) since there is no application data such as components/traits/policies in a WorkflowRun.
<h2 align="center">Contributing</h2>
## How can KubeVela Workflow be used
Check out [CONTRIBUTING](https://kubevela.io/docs/contributor/overview) to see how to develop with KubeVela Workflow.
During the evolution of [OAM](https://oam.dev/) and the [KubeVela project](https://github.com/kubevela/kubevela), **workflow**, as an important part of controlling the delivery process, has gradually matured. Therefore, we separated the workflow code from the KubeVela repository to make it standalone. As a general workflow engine, it can be used directly or as an SDK by other projects.
### As a standalone workflow engine
Unlike the workflow in a KubeVela Application, this workflow is executed only once, does **not keep reconciling**, and performs **no garbage collection** when the workflow object is deleted or updated. You can use it for **one-time** operations like the following (see the sketch after the list):
- Glue and orchestrate operations, such as controlling the deployment process of multiple resources (e.g. your Applications), scaling up/down, read-and-notify processes, or sequencing HTTP requests.
- Orchestrate a delivery process without day-2 management, just deploy. The most common use case is initializing your infrastructure for an environment.
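
As a concrete illustration, below is a minimal sketch (not taken from this repository) of creating such a one-time WorkflowRun with a single built-in `suspend` step through a controller-runtime client. The API import path, the `AddToScheme` helper, and the exact Go field names are assumptions inferred from the `workflowruns` CRD and types in this change set, so double-check them against the `api/v1alpha1` package:

```go
// Minimal sketch of creating a one-time WorkflowRun programmatically.
// Assumptions: the API package path and AddToScheme below are inferred from
// the CRD/types in this change set, not copied verbatim from the repository.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/client/config"

	"github.com/kubevela/workflow/api/v1alpha1" // assumed import path
)

func main() {
	scheme := runtime.NewScheme()
	_ = v1alpha1.AddToScheme(scheme) // assumed generated scheme registration

	c, err := client.New(config.GetConfigOrDie(), client.Options{Scheme: scheme})
	if err != nil {
		panic(err)
	}

	run := &v1alpha1.WorkflowRun{
		ObjectMeta: metav1.ObjectMeta{Name: "one-time-run", Namespace: "default"},
		Spec: v1alpha1.WorkflowRunSpec{
			// Inline workflow spec, mirroring the CRD's spec.workflowSpec.steps.
			WorkflowSpec: &v1alpha1.WorkflowSpec{
				Steps: []v1alpha1.WorkflowStep{{
					WorkflowStepBase: v1alpha1.WorkflowStepBase{
						Name: "pause",   // unique step name
						Type: "suspend", // built-in step type that needs no properties
					},
				}},
			},
		},
	}

	// The controller executes this run once; it is not continuously reconciled
	// and nothing it created is garbage collected afterwards.
	if err := c.Create(context.Background(), run); err != nil {
		panic(err)
	}
}
```

Applying the equivalent YAML with `kubectl apply` behaves the same way.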
### As an SDK
You can use KubeVela Workflow as an SDK to integrate it into your project. For example, the KubeVela project uses it to control the process of application delivery.
You just need to initialize a workflow instance, generate all the task runners from the instance, and then execute them, as sketched below. Please check out the example in [Workflow](https://github.com/kubevela/workflow/blob/main/controllers/workflowrun_controller.go#L101) or [KubeVela](https://github.com/kubevela/kubevela/blob/master/pkg/controller/core.oam.dev/v1alpha2/application/application_controller.go#L197).
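
The rough sketch below shows those three phases as they might appear in a reconcile loop. The `pkg/generator`, `pkg/executor`, and `pkg/types` package paths and the function signatures are assumptions modeled on the linked controller code rather than a documented public API, so treat the linked files as the source of truth:

```go
// Rough sketch of the SDK flow: build an instance, generate task runners,
// execute them. Package paths and signatures below are assumptions based on
// the linked workflowrun_controller.go, not a stable public API.
package controllers

import (
	"context"
	"fmt"

	"sigs.k8s.io/controller-runtime/pkg/client"

	"github.com/kubevela/workflow/api/v1alpha1"  // assumed import path
	"github.com/kubevela/workflow/pkg/executor"  // assumed import path
	"github.com/kubevela/workflow/pkg/generator" // assumed import path
	"github.com/kubevela/workflow/pkg/types"     // assumed import path
)

func runWorkflow(ctx context.Context, cli client.Client, run *v1alpha1.WorkflowRun) error {
	// 1. Initialize a workflow instance from the WorkflowRun object.
	instance, err := generator.GenerateWorkflowInstance(ctx, cli, run)
	if err != nil {
		return err
	}

	// 2. Generate one task runner per step from the instance.
	runners, err := generator.GenerateRunners(ctx, instance, types.StepGeneratorOptions{Client: cli})
	if err != nil {
		return err
	}

	// 3. Execute the runners; the executor records progress in instance.Status.
	state, err := executor.New(instance, cli).ExecuteRunners(ctx, runners)
	if err != nil {
		return err
	}

	// Persist the status back to the WorkflowRun; callers typically requeue
	// while the state is still running and stop once it is terminated.
	run.Status = instance.Status
	if err := cli.Status().Update(ctx, run); err != nil {
		return err
	}
	fmt.Printf("workflow reconciled with state: %v\n", state)
	return nil
}
```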
## Contributing
Check out [CONTRIBUTING](https://kubevela.io/docs/contributor/overview) to see how to develop with KubeVela Workflow.

View File

@ -1,5 +1,4 @@
//go:build !ignore_autogenerated
// +build !ignore_autogenerated
/*
Copyright 2022 The KubeVela Authors.

View File

@ -101,8 +101,7 @@ type WorkflowRunStatus struct {
// WorkflowSpec defines workflow steps and other attributes
type WorkflowSpec struct {
Mode *WorkflowExecuteMode `json:"mode,omitempty"`
Steps []WorkflowStep `json:"steps,omitempty"`
Steps []WorkflowStep `json:"steps,omitempty"`
}
// WorkflowExecuteMode defines the mode of workflow execution
@ -144,6 +143,7 @@ type Workflow struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Mode *WorkflowExecuteMode `json:"mode,omitempty"`
WorkflowSpec `json:",inline"`
}
@ -160,7 +160,10 @@ type WorkflowList struct {
// WorkflowStep defines how to execute a workflow step.
type WorkflowStep struct {
WorkflowStepBase `json:",inline"`
SubSteps []WorkflowStepBase `json:"subSteps,omitempty"`
// Mode is only valid for sub steps, it defines the mode of the sub steps
// +nullable
Mode WorkflowMode `json:"mode,omitempty"`
SubSteps []WorkflowStepBase `json:"subSteps,omitempty"`
}
// WorkflowStepMeta contains the meta data of a workflow step
@ -171,7 +174,7 @@ type WorkflowStepMeta struct {
// WorkflowStepBase defines the workflow step base
type WorkflowStepBase struct {
// Name is the unique name of the workflow step.
Name string `json:"name"`
Name string `json:"name,omitempty"`
// Type is the type of the workflow step.
Type string `json:"type"`
// Meta is the meta data of the workflow step.
@ -251,20 +254,24 @@ const (
WorkflowStepPhaseRunning WorkflowStepPhase = "running"
// WorkflowStepPhasePending will make the controller wait for the step to run.
WorkflowStepPhasePending WorkflowStepPhase = "pending"
// WorkflowStepPhaseSuspending will make the controller suspend the workflow.
WorkflowStepPhaseSuspending WorkflowStepPhase = "suspending"
)
// StepOutputs defines output variable of WorkflowStep
type StepOutputs []outputItem
type StepOutputs []OutputItem
// StepInputs defines variable input of WorkflowStep
type StepInputs []inputItem
type StepInputs []InputItem
type inputItem struct {
ParameterKey string `json:"parameterKey"`
// InputItem defines an input variable of WorkflowStep
type InputItem struct {
ParameterKey string `json:"parameterKey,omitempty"`
From string `json:"from"`
}
type outputItem struct {
// OutputItem defines an output variable of WorkflowStep
type OutputItem struct {
ValueFrom string `json:"valueFrom"`
Name string `json:"name"`
}

View File

@ -1,5 +1,4 @@
//go:build !ignore_autogenerated
// +build !ignore_autogenerated
/*
Copyright 2022 The KubeVela Authors.
@ -26,6 +25,36 @@ import (
"k8s.io/apimachinery/pkg/runtime"
)
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *InputItem) DeepCopyInto(out *InputItem) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new InputItem.
func (in *InputItem) DeepCopy() *InputItem {
if in == nil {
return nil
}
out := new(InputItem)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *OutputItem) DeepCopyInto(out *OutputItem) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new OutputItem.
func (in *OutputItem) DeepCopy() *OutputItem {
if in == nil {
return nil
}
out := new(OutputItem)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in StepInputs) DeepCopyInto(out *StepInputs) {
{
@ -86,6 +115,11 @@ func (in *Workflow) DeepCopyInto(out *Workflow) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
if in.Mode != nil {
in, out := &in.Mode, &out.Mode
*out = new(WorkflowExecuteMode)
**out = **in
}
in.WorkflowSpec.DeepCopyInto(&out.WorkflowSpec)
}
@ -277,11 +311,6 @@ func (in *WorkflowRunStatus) DeepCopy() *WorkflowRunStatus {
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *WorkflowSpec) DeepCopyInto(out *WorkflowSpec) {
*out = *in
if in.Mode != nil {
in, out := &in.Mode, &out.Mode
*out = new(WorkflowExecuteMode)
**out = **in
}
if in.Steps != nil {
in, out := &in.Steps, &out.Steps
*out = make([]WorkflowStep, len(*in))

View File

@ -29,21 +29,25 @@ helm install --create-namespace -n vela-system workflow kubevela/vela-workflow -
### Core parameters
| Name | Description | Value |
| --------------------------- | --------------------------------------------------------------------------------------------- | ----- |
| `systemDefinitionNamespace` | System definition namespace, if unspecified, will use built-in variable `.Release.Namespace`. | `nil` |
| `concurrentReconciles` | concurrentReconciles is the concurrent reconcile number of the controller | `4` |
| Name | Description | Value |
| -------------------------------------------- | --------------------------------------------------------------------------------------------------------------------- | ------- |
| `systemDefinitionNamespace` | System definition namespace, if unspecified, will use built-in variable `.Release.Namespace`. | `nil` |
| `concurrentReconciles` | concurrentReconciles is the concurrent reconcile number of the controller | `4` |
| `ignoreWorkflowWithoutControllerRequirement` | Determines whether to process WorkflowRuns that do not have the 'workflowrun.oam.dev/controller-version-require' annotation | `false` |
### KubeVela workflow parameters
| Name | Description | Value |
| -------------------------------------- | ------------------------------------------------------ | ------- |
| `workflow.enableSuspendOnFailure` | Enable suspend on workflow failure | `false` |
| `workflow.backoff.maxTime.waitState` | The max backoff time of workflow in a wait condition | `60` |
| `workflow.backoff.maxTime.failedState` | The max backoff time of workflow in a failed condition | `300` |
| `workflow.step.errorRetryTimes` | The max retry times of a failed workflow step | `10` |
| Name | Description | Value |
| ------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------- |
| `workflow.enableSuspendOnFailure`                        | Enable the capability of suspending a failed workflow automatically                                                                                                                    | `false`                 |
| `workflow.enablePatchStatusAtOnce`                       | Enable the capability of patching the status at once                                                                                                                                   | `false`                 |
| `workflow.enableWatchEventListener`                      | Enable the event-listener watch capability for faster reconciles; note that you need to install [kube-trigger](https://github.com/kubevela/kube-trigger) first to use this feature     | `false`                 |
| `workflow.enableExternalPackageForDefaultCompiler` | Enable external package for default compiler | `true` |
| `workflow.enableExternalPackageWatchForDefaultCompiler` | Enable external package watch for default compiler | `false` |
| `workflow.backoff.maxTime.waitState` | The max backoff time of workflow in a wait condition | `60` |
| `workflow.backoff.maxTime.failedState` | The max backoff time of workflow in a failed condition | `300` |
| `workflow.step.errorRetryTimes` | The max retry times of a failed workflow step | `10` |
| `workflow.groupByLabel` | The label used to group workflow record | `pipeline.oam.dev/name` |
### KubeVela workflow backup parameters
@ -53,12 +57,10 @@ helm install --create-namespace -n vela-system workflow kubevela/vela-workflow -
| `backup.strategy` | The backup strategy for workflow record | `BackupFinishedRecord` |
| `backup.ignoreStrategy` | The ignore strategy for backup | `IgnoreLatestFailedRecord` |
| `backup.cleanOnBackup` | Enable auto clean after backup workflow record | `false` |
| `backup.groupByLabel` | The label used to group workflow record | `""` |
| `backup.persistType` | The persist type for workflow record | `""` |
| `backup.configSecretName` | The secret name of backup config | `backup-config` |
| `backup.configSecretNamespace` | The namespace of the backup config secret | `vela-system` |
### KubeVela Workflow controller parameters
| Name | Description | Value |
@ -76,26 +78,26 @@ helm install --create-namespace -n vela-system workflow kubevela/vela-workflow -
| `webhookService.port` | KubeVela webhook service port | `9443` |
| `healthCheck.port` | KubeVela health check port | `9440` |
### Common parameters
| Name | Description | Value |
| ---------------------------- | -------------------------------------------------------------------------------------------------------------------------- | ------- |
| `imagePullSecrets` | Image pull secrets | `[]` |
| `nameOverride` | Override name | `""` |
| `fullnameOverride` | Fullname override | `""` |
| `serviceAccount.create` | Specifies whether a service account should be created | `true` |
| `serviceAccount.annotations` | Annotations to add to the service account | `{}` |
| `serviceAccount.name` | The name of the service account to use. If not set and create is true, a name is generated using the fullname template | `nil` |
| `nodeSelector` | Node selector | `{}` |
| `tolerations` | Tolerations | `[]` |
| `affinity` | Affinity | `{}` |
| `rbac.create` | Specifies whether a RBAC role should be created | `true` |
| `logDebug` | Enable debug logs for development purpose | `false` |
| `logFilePath` | If non-empty, write log files in this path | `""` |
| `logFileMaxSize` | Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. | `1024` |
| `kubeClient.qps` | The qps for reconcile clients, default is 50 | `500` |
| `kubeClient.burst` | The burst for reconcile clients, default is 100 | `1000` |
| Name | Description | Value |
| ---------------------------- | -------------------------------------------------------------------------------------------------------------------------- | --------------- |
| `imagePullSecrets` | Image pull secrets | `[]` |
| `nameOverride` | Override name | `""` |
| `fullnameOverride` | Fullname override | `""` |
| `serviceAccount.create` | Specifies whether a service account should be created | `true` |
| `serviceAccount.annotations` | Annotations to add to the service account | `{}` |
| `serviceAccount.name` | The name of the service account to use. If not set and create is true, a name is generated using the fullname template | `nil` |
| `nodeSelector` | Node selector | `{}` |
| `tolerations` | Tolerations | `[]` |
| `affinity` | Affinity | `{}` |
| `rbac.create` | Specifies whether a RBAC role should be created | `true` |
| `logDebug` | Enable debug logs for development purpose | `false` |
| `logFilePath` | If non-empty, write log files in this path | `""` |
| `logFileMaxSize` | Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. | `1024` |
| `kubeClient.qps` | The qps for reconcile clients, default is 50 | `500` |
| `kubeClient.burst` | The burst for reconcile clients, default is 100 | `1000` |
| `kubeClient.userAgent` | The user agent of the client, default is vela-workflow | `vela-workflow` |
## Uninstallation

View File

@ -3,8 +3,7 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.9.0
creationTimestamp: null
controller-gen.kubebuilder.io/version: v0.18.0
name: workflowruns.core.oam.dev
spec:
group: core.oam.dev
@ -32,14 +31,19 @@ spec:
description: WorkflowRun is the Schema for the workflowRun API
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
description: |-
APIVersion defines the versioned schema of this representation of an object.
Servers should convert recognized schemas to the latest internal value, and
may reject unrecognized values.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
description: |-
Kind is a string value representing the REST resource this object represents.
Servers may infer this from the endpoint the client submits requests to.
Cannot be updated.
In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
type: string
metadata:
type: object
@ -64,17 +68,6 @@ spec:
workflowSpec:
description: WorkflowSpec defines workflow steps and other attributes
properties:
mode:
description: WorkflowExecuteMode defines the mode of workflow
execution
properties:
steps:
description: Steps is the mode of workflow steps execution
type: string
subSteps:
description: SubSteps is the mode of workflow sub steps execution
type: string
type: object
steps:
items:
description: WorkflowStep defines how to execute a workflow
@ -91,6 +84,7 @@ spec:
inputs:
description: Inputs is the inputs of the step
items:
description: InputItem defines an input variable of WorkflowStep
properties:
from:
type: string
@ -98,7 +92,6 @@ spec:
type: string
required:
- from
- parameterKey
type: object
type: array
meta:
@ -107,12 +100,19 @@ spec:
alias:
type: string
type: object
mode:
description: Mode is only valid for sub steps, it defines
the mode of the sub steps
nullable: true
type: string
name:
description: Name is the unique name of the workflow step.
type: string
outputs:
description: Outputs is the outputs of the step
items:
description: OutputItem defines an output variable of
WorkflowStep
properties:
name:
type: string
@ -143,6 +143,8 @@ spec:
inputs:
description: Inputs is the inputs of the step
items:
description: InputItem defines an input variable
of WorkflowStep
properties:
from:
type: string
@ -150,7 +152,6 @@ spec:
type: string
required:
- from
- parameterKey
type: object
type: array
meta:
@ -167,6 +168,8 @@ spec:
outputs:
description: Outputs is the outputs of the step
items:
description: OutputItem defines an output variable
of WorkflowStep
properties:
name:
type: string
@ -188,7 +191,6 @@ spec:
description: Type is the type of the workflow step.
type: string
required:
- name
- type
type: object
type: array
@ -199,7 +201,6 @@ spec:
description: Type is the type of the workflow step.
type: string
required:
- name
- type
type: object
type: array
@ -214,13 +215,15 @@ spec:
description: A Condition that may apply to a resource.
properties:
lastTransitionTime:
description: LastTransitionTime is the last time this condition
transitioned from one status to another.
description: |-
LastTransitionTime is the last time this condition transitioned from one
status to another.
format: date-time
type: string
message:
description: A Message containing details about this condition's
last transition from one status to another, if any.
description: |-
A Message containing details about this condition's last transition from
one status to another, if any.
type: string
reason:
description: A Reason for this condition's last transition from
@ -231,8 +234,9 @@ spec:
False, or Unknown?
type: string
type:
description: Type of this condition. At most one of each condition
type may apply to a resource at any point in time.
description: |-
Type of this condition. At most one of each condition type may apply to
a resource at any point in time.
type: string
required:
- lastTransitionTime
@ -242,63 +246,49 @@ spec:
type: object
type: array
contextBackend:
description: 'ObjectReference contains enough information to let you
inspect or modify the referred object. --- New uses of this type
are discouraged because of difficulty describing its usage when
embedded in APIs. 1. Ignored fields. It includes many fields which
are not generally honored. For instance, ResourceVersion and FieldPath
are both very rarely valid in actual usage. 2. Invalid usage help. It
is impossible to add specific help for individual usage. In most
embedded usages, there are particular restrictions like, "must refer
only to types A and B" or "UID not honored" or "name must be restricted".
Those cannot be well described when embedded. 3. Inconsistent validation. Because
the usages are different, the validation rules are different by
usage, which makes it hard for users to predict what will happen.
4. The fields are both imprecise and overly precise. Kind is not
a precise mapping to a URL. This can produce ambiguity during interpretation
and require a REST mapping. In most cases, the dependency is on
the group,resource tuple and the version of the actual struct is
irrelevant. 5. We cannot easily change it. Because this type is
embedded in many locations, updates to this type will affect numerous
schemas. Don''t make new APIs embed an underspecified API type
they do not control. Instead of using this type, create a locally
provided and used type that is well-focused on your reference. For
example, ServiceReferences for admission registration: https://github.com/kubernetes/api/blob/release-1.17/admissionregistration/v1/types.go#L533
.'
description: ObjectReference contains enough information to let you
inspect or modify the referred object.
properties:
apiVersion:
description: API version of the referent.
type: string
fieldPath:
description: 'If referring to a piece of an object instead of
an entire object, this string should contain a valid JSON/Go
field access statement, such as desiredState.manifest.containers[2].
For example, if the object reference is to a container within
a pod, this would take on a value like: "spec.containers{name}"
(where "name" refers to the name of the container that triggered
the event) or if no container name is specified "spec.containers[2]"
(container with index 2 in this pod). This syntax is chosen
only to have some well-defined way of referencing a part of
an object. TODO: this design is not final and this field is
subject to change in the future.'
description: |-
If referring to a piece of an object instead of an entire object, this string
should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2].
For example, if the object reference is to a container within a pod, this would take on a value like:
"spec.containers{name}" (where "name" refers to the name of the container that triggered
the event) or if no container name is specified "spec.containers[2]" (container with
index 2 in this pod). This syntax is chosen only to have some well-defined way of
referencing a part of an object.
type: string
kind:
description: 'Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
description: |-
Kind of the referent.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
type: string
name:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names'
description: |-
Name of the referent.
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
type: string
namespace:
description: 'Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/'
description: |-
Namespace of the referent.
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/
type: string
resourceVersion:
description: 'Specific resourceVersion to which this reference
is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency'
description: |-
Specific resourceVersion to which this reference is made, if any.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency
type: string
uid:
description: 'UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids'
description: |-
UID of the referent.
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids
type: string
type: object
x-kubernetes-map-type: atomic
endTime:
format: date-time
type: string

View File

@ -3,8 +3,7 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.9.0
creationTimestamp: null
controller-gen.kubebuilder.io/version: v0.18.0
name: workflows.core.oam.dev
spec:
group: core.oam.dev
@ -23,14 +22,19 @@ spec:
description: Workflow is the Schema for the workflow API
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
description: |-
APIVersion defines the versioned schema of this representation of an object.
Servers should convert recognized schemas to the latest internal value, and
may reject unrecognized values.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
description: |-
Kind is a string value representing the REST resource this object represents.
Servers may infer this from the endpoint the client submits requests to.
Cannot be updated.
In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
type: string
metadata:
type: object
@ -59,6 +63,7 @@ spec:
inputs:
description: Inputs is the inputs of the step
items:
description: InputItem defines an input variable of WorkflowStep
properties:
from:
type: string
@ -66,7 +71,6 @@ spec:
type: string
required:
- from
- parameterKey
type: object
type: array
meta:
@ -75,12 +79,18 @@ spec:
alias:
type: string
type: object
mode:
description: Mode is only valid for sub steps, it defines the mode
of the sub steps
nullable: true
type: string
name:
description: Name is the unique name of the workflow step.
type: string
outputs:
description: Outputs is the outputs of the step
items:
description: OutputItem defines an output variable of WorkflowStep
properties:
name:
type: string
@ -110,6 +120,7 @@ spec:
inputs:
description: Inputs is the inputs of the step
items:
description: InputItem defines an input variable of WorkflowStep
properties:
from:
type: string
@ -117,7 +128,6 @@ spec:
type: string
required:
- from
- parameterKey
type: object
type: array
meta:
@ -132,6 +142,7 @@ spec:
outputs:
description: Outputs is the outputs of the step
items:
description: OutputItem defines an output variable of WorkflowStep
properties:
name:
type: string
@ -153,7 +164,6 @@ spec:
description: Type is the type of the workflow step.
type: string
required:
- name
- type
type: object
type: array
@ -164,7 +174,6 @@ spec:
description: Type is the type of the workflow step.
type: string
required:
- name
- type
type: object
type: array

View File

@ -0,0 +1,81 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.11.3
creationTimestamp: null
name: packages.cue.oam.dev
spec:
group: cue.oam.dev
names:
kind: Package
listKind: PackageList
plural: packages
shortNames:
- pkg
- cpkg
- cuepkg
- cuepackage
singular: package
scope: Namespaced
versions:
- additionalPrinterColumns:
- jsonPath: .spec.path
name: PATH
type: string
- jsonPath: .spec.provider.protocol
name: PROTO
type: string
- jsonPath: .spec.provider.endpoint
name: ENDPOINT
type: string
name: v1alpha1
schema:
openAPIV3Schema:
description: Package is an extension for cuex engine
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: PackageSpec the spec for Package
properties:
path:
type: string
provider:
description: Provider the external Provider in Package for cuex to
run functions
properties:
endpoint:
type: string
protocol:
description: ProviderProtocol the protocol type for external Provider
type: string
required:
- endpoint
- protocol
type: object
templates:
additionalProperties:
type: string
type: object
required:
- path
- templates
type: object
required:
- spec
type: object
served: true
storage: true
subresources: {}

View File

@ -0,0 +1,55 @@
{{- if and .Values.admissionWebhooks.certManager.enabled -}}
# The following manifests contain a self-signed issuer CR and a certificate CR.
# More document can be found at https://docs.cert-manager.io
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
name: {{ template "kubevela.fullname" . }}-self-signed-issuer
spec:
selfSigned: {}
---
# Generate a CA Certificate used to sign certificates for the webhook
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: {{ template "kubevela.fullname" . }}-root-cert
spec:
secretName: {{ template "kubevela.fullname" . }}-root-cert
duration: 43800h # 5y
revisionHistoryLimit: {{ .Values.admissionWebhooks.certManager.revisionHistoryLimit }}
issuerRef:
name: {{ template "kubevela.fullname" . }}-self-signed-issuer
commonName: "ca.webhook.kubevela"
isCA: true
---
# Create an Issuer that uses the above generated CA certificate to issue certs
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
name: {{ template "kubevela.fullname" . }}-root-issuer
namespace: {{ .Release.Namespace }}
spec:
ca:
secretName: {{ template "kubevela.fullname" . }}-root-cert
---
# generate a serving certificate for the apiservices to use
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: {{ template "kubevela.fullname" . }}-admission
namespace: {{ .Release.Namespace }}
spec:
secretName: {{ template "kubevela.fullname" . }}-admission
duration: 8760h # 1y
revisionHistoryLimit: {{ .Values.admissionWebhooks.certManager.revisionHistoryLimit }}
issuerRef:
name: {{ template "kubevela.fullname" . }}-root-issuer
dnsNames:
- {{ template "kubevela.name" . }}-webhook.{{ .Release.Namespace }}.svc
- {{ template "kubevela.name" . }}-webhook.{{ .Release.Namespace }}.svc.cluster.local
{{- end }}

View File

@ -0,0 +1,28 @@
{{- if and .Values.admissionWebhooks.enabled .Values.admissionWebhooks.patch.enabled .Values.rbac.create (not .Values.admissionWebhooks.certManager.enabled) }}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: {{ template "kubevela.fullname" . }}-admission
annotations:
"helm.sh/hook": pre-install,pre-upgrade,post-install,post-upgrade
"helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
labels:
app: {{ template "kubevela.name" . }}-admission
{{- include "kubevela.labels" . | nindent 4 }}
rules:
- apiGroups:
- admissionregistration.k8s.io
resources:
- validatingwebhookconfigurations
- mutatingwebhookconfigurations
verbs:
- get
- update
- apiGroups:
- apiextensions.k8s.io
resources:
- customresourcedefinitions
verbs:
- get
- update
{{- end }}

View File

@ -0,0 +1,20 @@
{{- if and .Values.admissionWebhooks.enabled .Values.admissionWebhooks.patch.enabled .Values.rbac.create (not .Values.admissionWebhooks.certManager.enabled) }}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: {{ template "kubevela.fullname" . }}-admission
annotations:
"helm.sh/hook": pre-install,pre-upgrade,post-install,post-upgrade
"helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
labels:
app: {{ template "kubevela.name" . }}-admission
{{- include "kubevela.labels" . | nindent 4 }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: {{ template "kubevela.fullname" . }}-admission
subjects:
- kind: ServiceAccount
name: {{ template "kubevela.fullname" . }}-admission
namespace: {{ .Release.Namespace }}
{{- end }}

View File

@ -0,0 +1,58 @@
{{- if and .Values.admissionWebhooks.enabled .Values.admissionWebhooks.patch.enabled (not .Values.admissionWebhooks.certManager.enabled) }}
apiVersion: batch/v1
kind: Job
metadata:
name: {{ template "kubevela.fullname" . }}-admission-create
namespace: {{ .Release.Namespace }}
annotations:
"helm.sh/hook": pre-install,pre-upgrade
"helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
labels:
app: {{ template "kubevela.name" . }}-admission-create
{{- include "kubevela.labels" . | nindent 4 }}
spec:
{{- if .Capabilities.APIVersions.Has "batch/v1alpha1" }}
# Alpha feature since k8s 1.12
ttlSecondsAfterFinished: 0
{{- end }}
template:
metadata:
name: {{ template "kubevela.fullname" . }}-admission-create
labels:
app: {{ template "kubevela.name" . }}-admission-create
{{- include "kubevela.labels" . | nindent 8 }}
spec:
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
containers:
- name: create
image: {{ .Values.imageRegistry }}{{ .Values.admissionWebhooks.patch.image.repository }}:{{ .Values.admissionWebhooks.patch.image.tag }}
imagePullPolicy: {{ .Values.admissionWebhooks.patch.image.pullPolicy }}
args:
- create
- --host={{ template "kubevela.name" . }}-webhook,{{ template "kubevela.name" . }}-webhook.{{ .Release.Namespace }}.svc
- --namespace={{ .Release.Namespace }}
- --secret-name={{ template "kubevela.fullname" . }}-admission
- --key-name=tls.key
- --cert-name=tls.crt
restartPolicy: OnFailure
serviceAccountName: {{ template "kubevela.fullname" . }}-admission
{{- with .Values.admissionWebhooks.patch.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.admissionWebhooks.patch.affinity }}
affinity:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.admissionWebhooks.patch.tolerations }}
tolerations:
{{ toYaml . | indent 8 }}
{{- end }}
securityContext:
runAsGroup: 2000
runAsNonRoot: true
runAsUser: 2000
{{- end }}

View File

@ -0,0 +1,53 @@
{{- if and .Values.admissionWebhooks.enabled .Values.admissionWebhooks.patch.enabled (not .Values.admissionWebhooks.certManager.enabled) }}
apiVersion: batch/v1
kind: Job
metadata:
name: {{ template "kubevela.fullname" . }}-admission-patch
namespace: {{ .Release.Namespace }}
annotations:
"helm.sh/hook": post-install,post-upgrade
"helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
labels:
app: {{ template "kubevela.name" . }}-admission-patch
{{- include "kubevela.labels" . | nindent 4 }}
spec:
{{- if .Capabilities.APIVersions.Has "batch/v1alpha1" }}
# Alpha feature since k8s 1.12
ttlSecondsAfterFinished: 0
{{- end }}
template:
metadata:
name: {{ template "kubevela.fullname" . }}-admission-patch
labels:
app: {{ template "kubevela.name" . }}-admission-patch
{{- include "kubevela.labels" . | nindent 8 }}
spec:
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
containers:
- name: patch
image: {{ .Values.imageRegistry }}{{ .Values.admissionWebhooks.patch.image.repository }}:{{ .Values.admissionWebhooks.patch.image.tag }}
imagePullPolicy: {{ .Values.admissionWebhooks.patch.image.pullPolicy }}
args:
- patch
- --webhook-name={{ template "kubevela.fullname" . }}-admission
- --namespace={{ .Release.Namespace }}
- --secret-name={{ template "kubevela.fullname" . }}-admission
- --patch-failure-policy={{ .Values.admissionWebhooks.failurePolicy }}
restartPolicy: OnFailure
serviceAccountName: {{ template "kubevela.fullname" . }}-admission
{{- with .Values.admissionWebhooks.patch.affinity }}
affinity:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.admissionWebhooks.patch.tolerations }}
tolerations:
{{ toYaml . | indent 8 }}
{{- end }}
securityContext:
runAsGroup: 2000
runAsNonRoot: true
runAsUser: 2000
{{- end }}

View File

@ -0,0 +1,21 @@
{{- if and .Values.admissionWebhooks.enabled .Values.admissionWebhooks.patch.enabled .Values.rbac.create (not .Values.admissionWebhooks.certManager.enabled) }}
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: {{ template "kubevela.fullname" . }}-admission
namespace: {{ .Release.Namespace }}
annotations:
"helm.sh/hook": pre-install,pre-upgrade,post-install,post-upgrade
"helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
labels:
app: {{ template "kubevela.name" . }}-admission
{{- include "kubevela.labels" . | nindent 4 }}
rules:
- apiGroups:
- ""
resources:
- secrets
verbs:
- get
- create
{{- end }}

View File

@ -0,0 +1,21 @@
{{- if and .Values.admissionWebhooks.enabled .Values.admissionWebhooks.patch.enabled .Values.rbac.create (not .Values.admissionWebhooks.certManager.enabled) }}
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: {{ template "kubevela.fullname" . }}-admission
namespace: {{ .Release.Namespace }}
annotations:
"helm.sh/hook": pre-install,pre-upgrade,post-install,post-upgrade
"helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
labels:
app: {{ template "kubevela.name" . }}-admission
{{- include "kubevela.labels" . | nindent 4 }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: {{ template "kubevela.fullname" . }}-admission
subjects:
- kind: ServiceAccount
name: {{ template "kubevela.fullname" . }}-admission
namespace: {{ .Release.Namespace }}
{{- end }}

View File

@ -0,0 +1,13 @@
{{- if and .Values.admissionWebhooks.enabled .Values.admissionWebhooks.patch.enabled .Values.rbac.create (not .Values.admissionWebhooks.certManager.enabled) }}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ template "kubevela.fullname" . }}-admission
namespace: {{ .Release.Namespace }}
annotations:
"helm.sh/hook": pre-install,pre-upgrade,post-install,post-upgrade
"helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
labels:
app: {{ template "kubevela.name" . }}-admission
{{- include "kubevela.labels" . | nindent 4 }}
{{- end }}

View File

@ -0,0 +1,41 @@
{{- if .Values.admissionWebhooks.enabled -}}
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
name: {{ template "kubevela.fullname" . }}-admission
namespace: {{ .Release.Namespace }}
{{- if .Values.admissionWebhooks.certManager.enabled }}
annotations:
cert-manager.io/inject-ca-from: {{ printf "%s/%s-root-cert" .Release.Namespace (include "kubevela.fullname" .) | quote }}
{{- end }}
webhooks:
- clientConfig:
caBundle: Cg==
service:
name: {{ template "kubevela.name" . }}-webhook
namespace: {{ .Release.Namespace }}
path: /mutating-core-oam-dev-v1alpha1-workflowruns
{{- if .Values.admissionWebhooks.patch.enabled }}
failurePolicy: Ignore
{{- else }}
failurePolicy: Fail
{{- end }}
name: mutating.core.oam.dev.v1alpha1.workflowruns
sideEffects: None
rules:
- apiGroups:
- core.oam.dev
apiVersions:
- v1alpha1
operations:
- CREATE
- UPDATE
resources:
- workflowruns
scope: Namespaced
admissionReviewVersions:
- v1beta1
- v1
timeoutSeconds: 5
{{- end -}}

View File

@ -0,0 +1,40 @@
{{- if .Values.admissionWebhooks.enabled -}}
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
name: {{ template "kubevela.fullname" . }}-admission
namespace: {{ .Release.Namespace }}
{{- if .Values.admissionWebhooks.certManager.enabled }}
annotations:
cert-manager.io/inject-ca-from: {{ printf "%s/%s-root-cert" .Release.Namespace (include "kubevela.fullname" .) | quote }}
{{- end }}
webhooks:
- clientConfig:
caBundle: Cg==
service:
name: {{ template "kubevela.name" . }}-webhook
namespace: {{ .Release.Namespace }}
path: /validating-core-oam-dev-v1alpha1-workflowruns
{{- if .Values.admissionWebhooks.patch.enabled }}
failurePolicy: Ignore
{{- else }}
failurePolicy: {{ .Values.admissionWebhooks.failurePolicy }}
{{- end }}
name: validating.core.oam.dev.v1alpha2.applicationconfigurations
sideEffects: None
rules:
- apiGroups:
- core.oam.dev
apiVersions:
- v1alpha1
operations:
- CREATE
- UPDATE
resources:
- workflowruns
scope: Namespaced
admissionReviewVersions:
- v1beta1
- v1
timeoutSeconds: 5
{{- end -}}

View File

@ -0,0 +1,19 @@
{{- if .Values.admissionWebhooks.enabled -}}
apiVersion: v1
kind: Service
metadata:
name: {{ template "kubevela.name" . }}-webhook
namespace: {{ .Release.Namespace }}
labels:
{{- include "kubevela.labels" . | nindent 4 }}
spec:
type: {{ .Values.webhookService.type }}
ports:
- port: 443
targetPort: {{ .Values.webhookService.port }}
protocol: TCP
name: https
selector:
{{ include "kubevela.selectorLabels" . | nindent 6 }}
{{- end -}}

View File

@ -0,0 +1,93 @@
# Code generated by KubeVela templates. DO NOT EDIT. Please edit the original cue file.
# Definition source cue file: vela-templates/definitions/internal/addon-operation.cue
apiVersion: core.oam.dev/v1beta1
kind: WorkflowStepDefinition
metadata:
annotations:
definition.oam.dev/description: Enable a KubeVela addon
definition.oam.dev/example-url: https://raw.githubusercontent.com/kubevela/catalog/master/examples/vela-workflow/observability.yaml
labels:
custom.definition.oam.dev/scope: WorkflowRun
name: addon-operation
namespace: {{ include "systemDefinitionNamespace" . }}
spec:
schematic:
cue:
template: |
import (
"vela/op"
)
job: op.#Apply & {
value: {
apiVersion: "batch/v1"
kind: "Job"
metadata: {
name: context.name + "-" + context.stepSessionID
namespace: "vela-system"
labels: "enable-addon.oam.dev": context.name
annotations: "workflow.oam.dev/step": context.stepName
}
spec: {
backoffLimit: 3
template: {
metadata: {
labels: {
"workflow.oam.dev/name": context.name
"workflow.oam.dev/session": context.stepSessionID
}
annotations: "workflow.oam.dev/step": context.stepName
}
spec: {
containers: [
{
name: parameter.addonName + "-enable-job"
image: parameter.image
if parameter.args == _|_ {
command: ["vela", "addon", parameter.operation, parameter.addonName]
}
if parameter.args != _|_ {
command: ["vela", "addon", parameter.operation, parameter.addonName] + parameter.args
}
},
]
restartPolicy: "Never"
serviceAccount: parameter.serviceAccountName
}
}
}
}
}
log: op.#Log & {
source: resources: [{labelSelector: {
"workflow.oam.dev/name": context.name
"workflow.oam.dev/session": context.stepSessionID
}}]
}
fail: op.#Steps & {
if job.value.status.failed != _|_ {
if job.value.status.failed > 2 {
breakWorkflow: op.#Fail & {
message: "enable addon failed"
}
}
}
}
wait: op.#ConditionalWait & {
continue: job.value.status.succeeded != _|_ && job.value.status.succeeded > 0
}
parameter: {
// +usage=Specify the name of the addon.
addonName: string
// +usage=Specify addon enable args.
args?: [...string]
// +usage=Specify the image
image: *"oamdev/vela-cli:v1.6.4" | string
// +usage=operation for the addon
operation: *"enable" | "upgrade" | "disable"
// +usage=Specify the serviceAccountName you want to use
serviceAccountName: *"kubevela-vela-core" | string
}

View File

@ -0,0 +1,61 @@
# Code generated by KubeVela templates. DO NOT EDIT. Please edit the original cue file.
# Definition source cue file: vela-templates/definitions/internal/apply-app.cue
apiVersion: core.oam.dev/v1beta1
kind: WorkflowStepDefinition
metadata:
annotations:
definition.oam.dev/alias: ""
definition.oam.dev/description: Apply application from data or ref to the cluster
definition.oam.dev/example-url: https://raw.githubusercontent.com/kubevela/workflow/main/examples/workflow-run/apply-applications.yaml
labels:
custom.definition.oam.dev/scope: WorkflowRun
name: apply-app
namespace: {{ include "systemDefinitionNamespace" . }}
spec:
schematic:
cue:
template: |
import (
"vela/op"
"encoding/yaml"
)
app: op.#Steps & {
if parameter.data != _|_ {
apply: op.#Apply & {
value: parameter.data
}
}
if parameter.ref != _|_ {
if parameter.ref.type == "configMap" {
cm: op.#Read & {
value: {
apiVersion: "v1"
kind: "ConfigMap"
metadata: {
name: parameter.ref.name
namespace: parameter.ref.namespace
}
}
}
template: cm.value.data[parameter.ref.key]
apply: op.#Apply & {
value: yaml.Unmarshal(template)
}
}
}
}
wait: op.#ConditionalWait & {
continue: app.apply.value.status.status == "running" && app.apply.value.status.observedGeneration == app.apply.value.metadata.generation
}
parameter: close({
data?: {...}
}) | close({
ref?: {
name: string
namespace: *context.namespace | string
type: *"configMap" | string
key: *"application" | string
}
})

View File

@ -0,0 +1,51 @@
# Code generated by KubeVela templates. DO NOT EDIT. Please edit the original cue file.
# Definition source cue file: vela-templates/definitions/internal/apply-deployment.cue
apiVersion: core.oam.dev/v1beta1
kind: WorkflowStepDefinition
metadata:
annotations:
definition.oam.dev/alias: ""
definition.oam.dev/description: Apply deployment with specified image and cmd.
name: apply-deployment
namespace: {{ include "systemDefinitionNamespace" . }}
spec:
schematic:
cue:
template: |
import (
"strconv"
"strings"
"vela/op"
)
output: op.#Apply & {
value: {
apiVersion: "apps/v1"
kind: "Deployment"
metadata: {
name: context.stepName
namespace: context.namespace
}
spec: {
selector: matchLabels: "workflow.oam.dev/step-name": "\(context.name)-\(context.stepName)"
template: {
metadata: labels: "workflow.oam.dev/step-name": "\(context.name)-\(context.stepName)"
spec: containers: [{
name: context.stepName
image: parameter.image
if parameter["cmd"] != _|_ {
command: parameter.cmd
}
}]
}
}
}
}
wait: op.#ConditionalWait & {
continue: output.value.status != _|_ && output.value.status.updatedReplicas == output.value.status.availableReplicas && output.value.status.observedGeneration == output.value.metadata.generation
}
parameter: {
image: string
cmd?: [...string]
}

View File

@ -0,0 +1,47 @@
# Code generated by KubeVela templates. DO NOT EDIT. Please edit the original cue file.
# Definition source cue file: vela-templates/definitions/internal/apply-job.cue
apiVersion: core.oam.dev/v1beta1
kind: WorkflowStepDefinition
metadata:
annotations:
custom.definition.oam.dev/category: Resource Management
definition.oam.dev/description: Apply job
name: apply-job
namespace: {{ include "systemDefinitionNamespace" . }}
spec:
schematic:
cue:
template: |
import (
"vela/op"
)
// apply the job
apply: op.#Apply & {
value: parameter.value
cluster: parameter.cluster
}
// fail the step if the job fails
if apply.status.failed > 0 {
fail: op.#Fail & {
message: "Job failed"
}
}
// wait the job to be ready
wait: op.#ConditionalWait & {
continue: apply.status.succeeded == apply.spec.completions
}
parameter: {
// +usage=Specify Kubernetes job object to be applied
value: {
apiVersion: "batch/v1"
kind: "Job"
...
}
// +usage=The cluster you want to apply the resource to, default is the current control plane cluster
cluster: *"" | string
}

View File

@ -0,0 +1,30 @@
# Code generated by KubeVela templates. DO NOT EDIT. Please edit the original cue file.
# Definition source cue file: vela-templates/definitions/internal/apply-object.cue
apiVersion: core.oam.dev/v1beta1
kind: WorkflowStepDefinition
metadata:
annotations:
definition.oam.dev/description: Apply raw kubernetes objects for your workflow steps
labels:
custom.definition.oam.dev/ui-hidden: "true"
name: apply-object
namespace: {{ include "systemDefinitionNamespace" . }}
spec:
schematic:
cue:
template: |
import (
"vela/op"
)
apply: op.#Apply & {
value: parameter.value
cluster: parameter.cluster
}
parameter: {
// +usage=Specify Kubernetes native resource object to be applied
value: {...}
// +usage=The cluster you want to apply the resource to, default is the current control plane cluster
cluster: *"" | string
}

View File

@ -0,0 +1,91 @@
# Code generated by KubeVela templates. DO NOT EDIT. Please edit the original cue file.
# Definition source cue file: vela-templates/definitions/internal/apply-terraform-config.cue
apiVersion: core.oam.dev/v1beta1
kind: WorkflowStepDefinition
metadata:
annotations:
definition.oam.dev/alias: ""
definition.oam.dev/description: Apply terraform configuration in the step
definition.oam.dev/example-url: https://raw.githubusercontent.com/kubevela/workflow/main/examples/workflow-run/apply-terraform-resource.yaml
name: apply-terraform-config
namespace: {{ include "systemDefinitionNamespace" . }}
spec:
schematic:
cue:
template: |
import (
"vela/op"
)
apply: op.#Apply & {
value: {
apiVersion: "terraform.core.oam.dev/v1beta2"
kind: "Configuration"
metadata: {
name: "\(context.name)-\(context.stepName)"
namespace: context.namespace
}
spec: {
deleteResource: parameter.deleteResource
variable: parameter.variable
forceDelete: parameter.forceDelete
if parameter.source.path != _|_ {
path: parameter.source.path
}
if parameter.source.remote != _|_ {
remote: parameter.source.remote
}
if parameter.source.hcl != _|_ {
hcl: parameter.source.hcl
}
if parameter.providerRef != _|_ {
providerRef: parameter.providerRef
}
if parameter.jobEnv != _|_ {
jobEnv: parameter.jobEnv
}
if parameter.writeConnectionSecretToRef != _|_ {
writeConnectionSecretToRef: parameter.writeConnectionSecretToRef
}
if parameter.region != _|_ {
region: parameter.region
}
}
}
}
check: op.#ConditionalWait & {
continue: apply.value.status != _|_ && apply.value.status.apply != _|_ && apply.value.status.apply.state == "Available"
}
parameter: {
// +usage=specify the source of the terraform configuration
source: close({
// +usage=directly specify the hcl of the terraform configuration
hcl: string
}) | close({
// +usage=specify the remote url of the terraform configuration
remote: *"https://github.com/kubevela-contrib/terraform-modules.git" | string
// +usage=specify the path of the terraform configuration
path?: string
})
// +usage=whether to delete resource
deleteResource: *true | bool
// +usage=the variable in the configuration
variable: {...}
// +usage=this specifies the namespace and name of a secret to which any connection details for this managed resource should be written.
writeConnectionSecretToRef?: {
name: string
namespace: *context.namespace | string
}
// +usage=providerRef specifies the reference to Provider
providerRef?: {
name: string
namespace: *context.namespace | string
}
// +usage=region is cloud provider's region. It will override the region in the region field of providerRef
region?: string
// +usage=the envs for job
jobEnv?: {...}
// +usage=forceDelete will force delete the Configuration no matter which state it is in or whether it has provisioned some resources
forceDelete: *false | bool
}

View File

@ -0,0 +1,144 @@
# Code generated by KubeVela templates. DO NOT EDIT. Please edit the original cue file.
# Definition source cue file: vela-templates/definitions/internal/apply-terraform-provider.cue
apiVersion: core.oam.dev/v1beta1
kind: WorkflowStepDefinition
metadata:
annotations:
definition.oam.dev/alias: ""
definition.oam.dev/description: Apply terraform provider config
definition.oam.dev/example-url: https://raw.githubusercontent.com/kubevela/workflow/main/examples/workflow-run/apply-terraform-resource.yaml
name: apply-terraform-provider
namespace: {{ include "systemDefinitionNamespace" . }}
spec:
schematic:
cue:
template: |
import (
"vela/op"
"strings"
)
config: op.#CreateConfig & {
name: "\(context.name)-\(context.stepName)"
namespace: context.namespace
template: "terraform-\(parameter.type)"
config: {
name: parameter.name
if parameter.type == "alibaba" {
ALICLOUD_ACCESS_KEY: parameter.accessKey
ALICLOUD_SECRET_KEY: parameter.secretKey
ALICLOUD_REGION: parameter.region
}
if parameter.type == "aws" {
AWS_ACCESS_KEY_ID: parameter.accessKey
AWS_SECRET_ACCESS_KEY: parameter.secretKey
AWS_DEFAULT_REGION: parameter.region
AWS_SESSION_TOKEN: parameter.token
}
if parameter.type == "azure" {
ARM_CLIENT_ID: parameter.clientID
ARM_CLIENT_SECRET: parameter.clientSecret
ARM_SUBSCRIPTION_ID: parameter.subscriptionID
ARM_TENANT_ID: parameter.tenantID
}
if parameter.type == "baidu" {
BAIDUCLOUD_ACCESS_KEY: parameter.accessKey
BAIDUCLOUD_SECRET_KEY: parameter.secretKey
BAIDUCLOUD_REGION: parameter.region
}
if parameter.type == "ec" {
EC_API_KEY: parameter.apiKey
}
if parameter.type == "gcp" {
GOOGLE_CREDENTIALS: parameter.credentials
GOOGLE_REGION: parameter.region
GOOGLE_PROJECT: parameter.project
}
if parameter.type == "tencent" {
TENCENTCLOUD_SECRET_ID: parameter.secretID
TENCENTCLOUD_SECRET_KEY: parameter.secretKey
TENCENTCLOUD_REGION: parameter.region
}
if parameter.type == "ucloud" {
UCLOUD_PRIVATE_KEY: parameter.privateKey
UCLOUD_PUBLIC_KEY: parameter.publicKey
UCLOUD_PROJECT_ID: parameter.projectID
UCLOUD_REGION: parameter.region
}
}
}
read: op.#Read & {
value: {
apiVersion: "terraform.core.oam.dev/v1beta1"
kind: "Provider"
metadata: {
name: parameter.name
namespace: context.namespace
}
}
}
check: op.#ConditionalWait & {
if read.value.status != _|_ {
continue: read.value.status.state == "ready"
}
if read.value.status == _|_ {
continue: false
}
}
providerBasic: {
accessKey: string
secretKey: string
region: string
}
#AlibabaProvider: {
providerBasic
type: "alibaba"
name: *"alibaba-provider" | string
}
#AWSProvider: {
providerBasic
token: *"" | string
type: "aws"
name: *"aws-provider" | string
}
#AzureProvider: {
subscriptionID: string
tenantID: string
clientID: string
clientSecret: string
name: *"azure-provider" | string
}
#BaiduProvider: {
providerBasic
type: "baidu"
name: *"baidu-provider" | string
}
#ECProvider: {
type: "ec"
apiKey: *"" | string
name: "ec-provider" | string
}
#GCPProvider: {
credentials: string
region: string
project: string
type: "gcp"
name: *"gcp-provider" | string
}
#TencentProvider: {
secretID: string
secretKey: string
region: string
type: "tencent"
name: *"tencent-provider" | string
}
#UCloudProvider: {
publicKey: string
privateKey: string
projectID: string
region: string
type: "ucloud"
name: *"ucloud-provider" | string
}
parameter: *#AlibabaProvider | #AWSProvider | #AzureProvider | #BaiduProvider | #ECProvider | #GCPProvider | #TencentProvider | #UCloudProvider

View File

@ -0,0 +1,158 @@
# Code generated by KubeVela templates. DO NOT EDIT. Please edit the original cue file.
# Definition source cue file: vela-templates/definitions/internal/build-push-image.cue
apiVersion: core.oam.dev/v1beta1
kind: WorkflowStepDefinition
metadata:
annotations:
definition.oam.dev/alias: ""
definition.oam.dev/description: Build and push image from git url
definition.oam.dev/example-url: https://raw.githubusercontent.com/kubevela/workflow/main/examples/workflow-run/built-push-image.yaml
name: build-push-image
namespace: {{ include "systemDefinitionNamespace" . }}
spec:
schematic:
cue:
template: |
import (
"vela/op"
"encoding/json"
"strings"
)
url: {
if parameter.context.git != _|_ {
address: strings.TrimPrefix(parameter.context.git, "git://")
value: "git://\(address)#refs/heads/\(parameter.context.branch)"
}
if parameter.context.git == _|_ {
value: parameter.context
}
}
kaniko: op.#Apply & {
value: {
apiVersion: "v1"
kind: "Pod"
metadata: {
name: "\(context.name)-\(context.stepSessionID)-kaniko"
namespace: context.namespace
}
spec: {
containers: [
{
args: [
"--dockerfile=\(parameter.dockerfile)",
"--context=\(url.value)",
"--destination=\(parameter.image)",
"--verbosity=\(parameter.verbosity)",
if parameter.platform != _|_ {
"--customPlatform=\(parameter.platform)"
},
if parameter.buildArgs != _|_ for arg in parameter.buildArgs {
"--build-arg=\(arg)"
},
]
image: parameter.kanikoExecutor
name: "kaniko"
if parameter.credentials != _|_ && parameter.credentials.image != _|_ {
volumeMounts: [
{
mountPath: "/kaniko/.docker/"
name: parameter.credentials.image.name
},
]
}
if parameter.credentials != _|_ && parameter.credentials.git != _|_ {
env: [
{
name: "GIT_TOKEN"
valueFrom: secretKeyRef: {
key: parameter.credentials.git.key
name: parameter.credentials.git.name
}
},
]
}
},
]
if parameter.credentials != _|_ && parameter.credentials.image != _|_ {
volumes: [
{
name: parameter.credentials.image.name
secret: {
defaultMode: 420
items: [
{
key: parameter.credentials.image.key
path: "config.json"
},
]
secretName: parameter.credentials.image.name
}
},
]
}
restartPolicy: "Never"
}
}
}
log: op.#Log & {
source: resources: [{
name: "\(context.name)-\(context.stepSessionID)-kaniko"
namespace: context.namespace
}]
}
read: op.#Read & {
value: {
apiVersion: "v1"
kind: "Pod"
metadata: {
name: "\(context.name)-\(context.stepSessionID)-kaniko"
namespace: context.namespace
}
}
}
wait: op.#ConditionalWait & {
continue: read.value.status != _|_ && read.value.status.phase == "Succeeded"
}
#secret: {
name: string
key: string
}
#git: {
git: string
branch: *"master" | string
}
parameter: {
// +usage=Specify the kaniko executor image, default to oamdev/kaniko-executor:v1.9.1
kanikoExecutor: *"oamdev/kaniko-executor:v1.9.1" | string
// +usage=Specify the context to build image, you can use context with git and branch or directly specify the context, please refer to https://github.com/GoogleContainerTools/kaniko#kaniko-build-contexts
context: #git | string
// +usage=Specify the dockerfile
dockerfile: *"./Dockerfile" | string
// +usage=Specify the image
image: string
// +usage=Specify the platform to build
platform?: string
// +usage=Specify the build args
buildArgs?: [...string]
// +usage=Specify the credentials to access git and image registry
credentials?: {
// +usage=Specify the credentials to access git
git?: {
// +usage=Specify the secret name
name: string
// +usage=Specify the secret key
key: string
}
// +usage=Specify the credentials to access image registry
image?: {
// +usage=Specify the secret name
name: string
// +usage=Specify the secret key
key: *".dockerconfigjson" | string
}
}
// +usage=Specify the verbosity level
verbosity: *"info" | "panic" | "fatal" | "error" | "warn" | "debug" | "trace"
}
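As a rough usage sketch (not part of the diff above), the build-push-image step could be referenced from a WorkflowRun roughly as follows; the WorkflowRun apiVersion is assumed to be core.oam.dev/v1alpha1 from this repository, and the repository URL, image tag, and secret name are illustrative placeholders:

apiVersion: core.oam.dev/v1alpha1
kind: WorkflowRun
metadata:
  name: build-and-push-example
  namespace: default
spec:
  workflowSpec:
    steps:
      # Build the Dockerfile at the repo root and push the result.
      - name: build
        type: build-push-image
        properties:
          context:
            git: github.com/example/app        # placeholder repository
            branch: main
          image: example/app:v1                # placeholder image
          credentials:
            image:
              name: registry-auth              # placeholder secret holding .dockerconfigjson

The later snippets in this section follow the same pattern and only show the entry under spec.workflowSpec.steps.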

View File

@ -0,0 +1,132 @@
# Code generated by KubeVela templates. DO NOT EDIT. Please edit the original cue file.
# Definition source cue file: vela-templates/definitions/internal/chat-gpt.cue
apiVersion: core.oam.dev/v1beta1
kind: WorkflowStepDefinition
metadata:
annotations:
custom.definition.oam.dev/category: External Integration
definition.oam.dev/alias: ""
definition.oam.dev/description: Send request to chat-gpt
definition.oam.dev/example-url: https://raw.githubusercontent.com/kubevela/workflow/main/examples/workflow-run/chat-gpt.yaml
name: chat-gpt
namespace: {{ include "systemDefinitionNamespace" . }}
spec:
schematic:
cue:
template: |
import (
"vela/op"
"encoding/json"
"encoding/base64"
)
token: op.#Steps & {
if parameter.token.value != _|_ {
value: parameter.token.value
}
if parameter.token.secretRef != _|_ {
read: op.#Read & {
value: {
apiVersion: "v1"
kind: "Secret"
metadata: {
name: parameter.token.secretRef.name
namespace: context.namespace
}
}
}
stringValue: op.#ConvertString & {bt: base64.Decode(null, read.value.data[parameter.token.secretRef.key])}
value: stringValue.str
}
}
http: op.#HTTPDo & {
method: "POST"
url: "https://api.openai.com/v1/chat/completions"
request: {
timeout: parameter.timeout
body: json.Marshal({
model: parameter.model
messages: [{
if parameter.prompt.type == "custom" {
content: parameter.prompt.content
}
if parameter.prompt.type == "diagnose" {
content: """
You are a professional kubernetes administrator.
Carefully read the provided information, being certain to spell out the diagnosis & reasoning, and don't skip any steps.
Answer in \(parameter.prompt.lang).
---
\(json.Marshal(parameter.prompt.content))
---
What is wrong with this object and how to fix it?
"""
}
if parameter.prompt.type == "audit" {
content: """
You are a professional kubernetes administrator.
You inspect the object and find out the security misconfigurations and give advice.
Write down the possible problems in bullet points, using the imperative tense.
Remember to write only the most important points and do not write more than a few bullet points.
Answer in \(parameter.prompt.lang).
---
\(json.Marshal(parameter.prompt.content))
---
What is the secure problem with this object and how to fix it?
"""
}
if parameter.prompt.type == "quality-gate" {
content: """
You are a professional kubernetes administrator.
You inspect the object and find out the security misconfigurations and rate the object. The max score is 100.
Answer with score only.
---
\(json.Marshal(parameter.prompt.content))
---
What is the score of this object?
"""
}
role: "user"
}]
})
header: {
"Content-Type": "application/json"
Authorization: "Bearer \(token.value)"
}
}
}
response: json.Unmarshal(http.response.body)
fail: op.#Steps & {
if http.response.statusCode >= 400 {
requestFail: op.#Fail & {
message: "\(http.response.statusCode): failed to request: \(response.error.message)"
}
}
}
result: response.choices[0].message.content
log: op.#Log & {
data: result
}
parameter: {
token: close({
// +usage=the token value
value: string
}) | close({
secretRef: {
// +usage=name is the name of the secret
name: string
// +usage=key is the token key in the secret
key: string
}
})
// +usage=the model name
model: *"gpt-3.5-turbo" | string
// +usage=the prompt to use
prompt: {
type: *"custom" | "diagnose" | "audit" | "quality-gate"
lang: *"English" | "Chinese"
content: string | {...}
}
timeout: *"30s" | string
}
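A hedged sketch of using the chat-gpt step with a token read from a Secret (the secret name and key below are assumptions, not taken from the diff):

    steps:
      - name: ask
        type: chat-gpt
        properties:
          token:
            secretRef:
              name: chat-gpt-token       # placeholder secret holding the API key
              key: token
          prompt:
            type: custom
            content: "Summarize the status of my workflow"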

View File

@ -0,0 +1,60 @@
# Code generated by KubeVela templates. DO NOT EDIT. Please edit the original cue file.
# Definition source cue file: vela-templates/definitions/internal/clean-jobs.cue
apiVersion: core.oam.dev/v1beta1
kind: WorkflowStepDefinition
metadata:
annotations:
definition.oam.dev/description: clean applied jobs in the cluster
name: clean-jobs
namespace: {{ include "systemDefinitionNamespace" . }}
spec:
schematic:
cue:
template: |
import (
"vela/op"
)
parameter: {
labelselector?: {...}
namespace: *context.namespace | string
}
cleanJobs: op.#Delete & {
value: {
apiVersion: "batch/v1"
kind: "Job"
metadata: {
name: context.name
namespace: parameter.namespace
}
}
filter: {
namespace: parameter.namespace
if parameter.labelselector != _|_ {
matchingLabels: parameter.labelselector
}
if parameter.labelselector == _|_ {
matchingLabels: "workflow.oam.dev/name": context.name
}
}
}
cleanPods: op.#Delete & {
value: {
apiVersion: "v1"
kind: "pod"
metadata: {
name: context.name
namespace: parameter.namespace
}
}
filter: {
namespace: parameter.namespace
if parameter.labelselector != _|_ {
matchingLabels: parameter.labelselector
}
if parameter.labelselector == _|_ {
matchingLabels: "workflow.oam.dev/name": context.name
}
}
}
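For illustration only, a clean-jobs step entry might look like this (the label key and value are placeholders; omitting labelselector falls back to the workflow.oam.dev/name label as shown above):

    steps:
      - name: cleanup
        type: clean-jobs
        properties:
          labelselector:
            my-label: my-value           # placeholder label selector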

View File

@ -0,0 +1,44 @@
# Code generated by KubeVela templates. DO NOT EDIT. Please edit the original cue file.
# Definition source cue file: vela-templates/definitions/internal/create-config.cue
apiVersion: core.oam.dev/v1beta1
kind: WorkflowStepDefinition
metadata:
annotations:
definition.oam.dev/description: Create or update a config
name: create-config
namespace: {{ include "systemDefinitionNamespace" . }}
spec:
schematic:
cue:
template: |
import (
"vela/op"
)
deploy: op.#CreateConfig & {
name: parameter.name
if parameter.namespace != _|_ {
namespace: parameter.namespace
}
if parameter.namespace == _|_ {
namespace: context.namespace
}
if parameter.template != _|_ {
template: parameter.template
}
config: parameter.config
}
parameter: {
//+usage=Specify the name of the config.
name: string
//+usage=Specify the namespace of the config.
namespace?: string
//+usage=Specify the template of the config.
template?: string
//+usage=Specify the content of the config.
config: {...}
}
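A minimal create-config usage sketch, with placeholder names and values:

    steps:
      - name: write-config
        type: create-config
        properties:
          name: demo-config              # placeholder config name
          config:
            url: https://example.com     # placeholder content stored in the config
            token: demo-token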

View File

@ -0,0 +1,34 @@
# Code generated by KubeVela templates. DO NOT EDIT. Please edit the original cue file.
# Definition source cue file: vela-templates/definitions/internal/delete-config.cue
apiVersion: core.oam.dev/v1beta1
kind: WorkflowStepDefinition
metadata:
annotations:
definition.oam.dev/description: Delete a config
name: delete-config
namespace: {{ include "systemDefinitionNamespace" . }}
spec:
schematic:
cue:
template: |
import (
"vela/op"
)
deploy: op.#DeleteConfig & {
name: parameter.name
if parameter.namespace != _|_ {
namespace: parameter.namespace
}
if parameter.namespace == _|_ {
namespace: context.namespace
}
}
parameter: {
//+usage=Specify the name of the config.
name: string
//+usage=Specify the namespace of the config.
namespace?: string
}

View File

@ -0,0 +1,47 @@
# Code generated by KubeVela templates. DO NOT EDIT. Please edit the original cue file.
# Definition source cue file: vela-templates/definitions/internal/export2config.cue
apiVersion: core.oam.dev/v1beta1
kind: WorkflowStepDefinition
metadata:
annotations:
definition.oam.dev/description: Export data to specified Kubernetes ConfigMap in your workflow.
labels:
custom.definition.oam.dev/ui-hidden: "true"
name: export2config
namespace: {{ include "systemDefinitionNamespace" . }}
spec:
schematic:
cue:
template: |
import (
"vela/op"
)
apply: op.#Apply & {
value: {
apiVersion: "v1"
kind: "ConfigMap"
metadata: {
name: parameter.configName
if parameter.namespace != _|_ {
namespace: parameter.namespace
}
if parameter.namespace == _|_ {
namespace: context.namespace
}
}
data: parameter.data
}
cluster: parameter.cluster
}
parameter: {
// +usage=Specify the name of the config map
configName: string
// +usage=Specify the namespace of the config map
namespace?: string
// +usage=Specify the data of config map
data: {}
// +usage=Specify the cluster of the config map
cluster: *"" | string
}

View File

@ -0,0 +1,79 @@
# Code generated by KubeVela templates. DO NOT EDIT. Please edit the original cue file.
# Definition source cue file: vela-templates/definitions/internal/export2secret.cue
apiVersion: core.oam.dev/v1beta1
kind: WorkflowStepDefinition
metadata:
annotations:
definition.oam.dev/description: Export data to Kubernetes Secret in your workflow.
labels:
custom.definition.oam.dev/ui-hidden: "true"
name: export2secret
namespace: {{ include "systemDefinitionNamespace" . }}
spec:
schematic:
cue:
template: |
import (
"vela/op"
"encoding/base64"
"encoding/json"
)
secret: op.#Steps & {
data: *parameter.data | {}
if parameter.kind == "docker-registry" && parameter.dockerRegistry != _|_ {
registryData: auths: "\(parameter.dockerRegistry.server)": {
username: parameter.dockerRegistry.username
password: parameter.dockerRegistry.password
auth: base64.Encode(null, "\(parameter.dockerRegistry.username):\(parameter.dockerRegistry.password)")
}
data: ".dockerconfigjson": json.Marshal(registryData)
}
apply: op.#Apply & {
value: {
apiVersion: "v1"
kind: "Secret"
if parameter.type == _|_ && parameter.kind == "docker-registry" {
type: "kubernetes.io/dockerconfigjson"
}
if parameter.type != _|_ {
type: parameter.type
}
metadata: {
name: parameter.secretName
if parameter.namespace != _|_ {
namespace: parameter.namespace
}
if parameter.namespace == _|_ {
namespace: context.namespace
}
}
stringData: data
}
cluster: parameter.cluster
}
}
parameter: {
// +usage=Specify the name of the secret
secretName: string
// +usage=Specify the namespace of the secret
namespace?: string
// +usage=Specify the type of the secret
type?: string
// +usage=Specify the data of secret
data: {}
// +usage=Specify the cluster of the secret
cluster: *"" | string
// +usage=Specify the kind of the secret
kind: *"generic" | "docker-registry"
// +usage=Specify the docker data
dockerRegistry?: {
// +usage=Specify the username of the docker registry
username: string
// +usage=Specify the password of the docker registry
password: string
// +usage=Specify the server of the docker registry
server: *"https://index.docker.io/v1/" | string
}
}
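A sketch of the docker-registry variant of export2secret, which produces a kubernetes.io/dockerconfigjson Secret as shown in the template above (credentials below are placeholders):

    steps:
      - name: registry-secret
        type: export2secret
        properties:
          secretName: registry-auth      # placeholder secret name
          kind: docker-registry
          dockerRegistry:
            username: my-user            # placeholder credentials
            password: my-pass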

View File

@ -0,0 +1,33 @@
# Code generated by KubeVela templates. DO NOT EDIT. Please edit the original cue file.
# Definition source cue file: vela-templates/definitions/internal/list-config.cue
apiVersion: core.oam.dev/v1beta1
kind: WorkflowStepDefinition
metadata:
annotations:
definition.oam.dev/description: List the configs
name: list-config
namespace: {{ include "systemDefinitionNamespace" . }}
spec:
schematic:
cue:
template: |
import (
"vela/op"
)
output: op.#ListConfig & {
if parameter.namespace != _|_ {
namespace: parameter.namespace
}
if parameter.namespace == _|_ {
namespace: context.namespace
}
template: parameter.template
}
parameter: {
//+usage=Specify the template of the config.
template: string
//+usage=Specify the namespace of the config.
namespace?: string
}

View File

@ -0,0 +1,341 @@
# Code generated by KubeVela templates. DO NOT EDIT. Please edit the original cue file.
# Definition source cue file: vela-templates/definitions/internal/notification.cue
apiVersion: core.oam.dev/v1beta1
kind: WorkflowStepDefinition
metadata:
annotations:
definition.oam.dev/description: Send notifications to Email, DingTalk, Slack, Lark or webhook in your workflow.
name: notification
namespace: {{ include "systemDefinitionNamespace" . }}
spec:
schematic:
cue:
template: |
import (
"vela/op"
"encoding/base64"
)
parameter: {
// +usage=Please fulfill its url and message if you want to send Lark messages
lark?: {
// +usage=Specify the lark url, you can either specify it in value or use secretRef
url: close({
// +usage=the url address content in string
value: string
}) | close({
secretRef: {
// +usage=name is the name of the secret
name: string
// +usage=key is the key in the secret
key: string
}
})
// +usage=Specify the message that you want to send, refer to [Lark messaging](https://open.feishu.cn/document/ukTMukTMukTM/ucTM5YjL3ETO24yNxkjN#8b0f2a1b).
message: {
// +usage=msg_type can be text, post, image, interactive, share_chat, share_user, audio, media, file, sticker
msg_type: string
// +usage=content should be json encode string
content: string
}
}
// +usage=Please fulfill its url and message if you want to send DingTalk messages
dingding?: {
// +usage=Specify the dingding url, you can either specify it in value or use secretRef
url: close({
// +usage=the url address content in string
value: string
}) | close({
secretRef: {
// +usage=name is the name of the secret
name: string
// +usage=key is the key in the secret
key: string
}
})
// +usage=Specify the message that you want to send, refer to [dingtalk messaging](https://developers.dingtalk.com/document/robots/custom-robot-access/title-72m-8ag-pqw)
message: {
// +usage=Specify the message content of dingtalk notification
text?: *null | close({
content: string
})
// +usage=msgtype can be text, link, markdown, actionCard, feedCard
msgtype: *"text" | "link" | "markdown" | "actionCard" | "feedCard"
link?: *null | close({
text?: string
title?: string
messageUrl?: string
picUrl?: string
})
markdown?: *null | close({
text: string
title: string
})
at?: *null | close({
atMobiles?: *null | [...string]
isAtAll?: bool
})
actionCard?: *null | close({
text: string
title: string
hideAvatar: string
btnOrientation: string
singleTitle: string
singleURL: string
btns: *null | close([...*null | close({
title: string
actionURL: string
})])
})
feedCard?: *null | close({
links: *null | close([...*null | close({
text?: string
title?: string
messageUrl?: string
picUrl?: string
})])
})
}
}
// +usage=Please fulfill its url and message if you want to send Slack messages
slack?: {
// +usage=Specify the slack url, you can either specify it in value or use secretRef
url: close({
// +usage=the url address content in string
value: string
}) | close({
secretRef: {
// +usage=name is the name of the secret
name: string
// +usage=key is the key in the secret
key: string
}
})
// +usage=Specify the message that you want to send, refer to [slack messaging](https://api.slack.com/reference/messaging/payload)
message: {
// +usage=Specify the message text for slack notification
text: string
blocks?: *null | close([...block])
attachments?: *null | close({
blocks?: *null | close([...block])
color?: string
})
thread_ts?: string
// +usage=Specify the message text format in markdown for slack notification
mrkdwn?: *true | bool
}
}
// +usage=Please fulfill its from, to and content if you want to send email
email?: {
// +usage=Specify the email info that you want to send from
from: {
// +usage=Specify the email address that you want to send from
address: string
// +usage=The alias is the email alias to show after sending the email
alias?: string
// +usage=Specify the password of the email, you can either specify it in value or use secretRef
password: close({
// +usage=the password content in string
value: string
}) | close({
secretRef: {
// +usage=name is the name of the secret
name: string
// +usage=key is the key in the secret
key: string
}
})
// +usage=Specify the host of your email
host: string
// +usage=Specify the port of the email host, default to 587
port: *587 | int
}
// +usage=Specify the email address that you want to send to
to: [...string]
// +usage=Specify the content of the email
content: {
// +usage=Specify the subject of the email
subject: string
// +usage=Specify the context body of the email
body: string
}
}
}
block: {
type: string
block_id?: string
elements?: [...{
type: string
action_id?: string
url?: string
value?: string
style?: string
text?: textType
confirm?: {
title: textType
text: textType
confirm: textType
deny: textType
style?: string
}
options?: [...option]
initial_options?: [...option]
placeholder?: textType
initial_date?: string
image_url?: string
alt_text?: string
option_groups?: [...option]
max_selected_items?: int
initial_value?: string
multiline?: bool
min_length?: int
max_length?: int
dispatch_action_config?: trigger_actions_on?: [...string]
initial_time?: string
}]
}
textType: {
type: string
text: string
emoji?: bool
verbatim?: bool
}
option: {
text: textType
value: string
description?: textType
url?: string
}
// send webhook notification
ding: op.#Steps & {
if parameter.dingding != _|_ {
if parameter.dingding.url.value != _|_ {
ding1: op.#DingTalk & {
message: parameter.dingding.message
dingUrl: parameter.dingding.url.value
}
}
if parameter.dingding.url.secretRef != _|_ && parameter.dingding.url.value == _|_ {
read: op.#Read & {
value: {
apiVersion: "v1"
kind: "Secret"
metadata: {
name: parameter.dingding.url.secretRef.name
namespace: context.namespace
}
}
}
stringValue: op.#ConvertString & {bt: base64.Decode(null, read.value.data[parameter.dingding.url.secretRef.key])}
ding2: op.#DingTalk & {
message: parameter.dingding.message
dingUrl: stringValue.str
}
}
}
}
lark: op.#Steps & {
if parameter.lark != _|_ {
if parameter.lark.url.value != _|_ {
lark1: op.#Lark & {
message: parameter.lark.message
larkUrl: parameter.lark.url.value
}
}
if parameter.lark.url.secretRef != _|_ && parameter.lark.url.value == _|_ {
read: op.#Read & {
value: {
apiVersion: "v1"
kind: "Secret"
metadata: {
name: parameter.lark.url.secretRef.name
namespace: context.namespace
}
}
}
stringValue: op.#ConvertString & {bt: base64.Decode(null, read.value.data[parameter.lark.url.secretRef.key])}
lark2: op.#Lark & {
message: parameter.lark.message
larkUrl: stringValue.str
}
}
}
}
slack: op.#Steps & {
if parameter.slack != _|_ {
if parameter.slack.url.value != _|_ {
slack1: op.#Slack & {
message: parameter.slack.message
slackUrl: parameter.slack.url.value
}
}
if parameter.slack.url.secretRef != _|_ && parameter.slack.url.value == _|_ {
read: op.#Read & {
value: {
kind: "Secret"
apiVersion: "v1"
metadata: {
name: parameter.slack.url.secretRef.name
namespace: context.namespace
}
}
}
stringValue: op.#ConvertString & {bt: base64.Decode(null, read.value.data[parameter.slack.url.secretRef.key])}
slack2: op.#Slack & {
message: parameter.slack.message
slackUrl: stringValue.str
}
}
}
}
email: op.#Steps & {
if parameter.email != _|_ {
if parameter.email.from.password.value != _|_ {
email1: op.#SendEmail & {
from: {
address: parameter.email.from.address
if parameter.email.from.alias != _|_ {
alias: parameter.email.from.alias
}
password: parameter.email.from.password.value
host: parameter.email.from.host
port: parameter.email.from.port
}
to: parameter.email.to
content: parameter.email.content
}
}
if parameter.email.from.password.secretRef != _|_ && parameter.email.from.password.value == _|_ {
read: op.#Read & {
value: {
kind: "Secret"
apiVersion: "v1"
metadata: {
name: parameter.email.from.password.secretRef.name
namespace: context.namespace
}
}
}
stringValue: op.#ConvertString & {bt: base64.Decode(null, read.value.data[parameter.email.from.password.secretRef.key])}
email2: op.#SendEmail & {
from: {
address: parameter.email.from.address
if parameter.email.from.alias != _|_ {
alias: parameter.email.from.alias
}
password: stringValue.str
host: parameter.email.from.host
port: parameter.email.from.port
}
to: parameter.email.to
content: parameter.email.content
}
}
}
}
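A hedged notification step sketch using the Slack webhook path with a secretRef (the secret name and key are assumptions for illustration):

    steps:
      - name: notify
        type: notification
        properties:
          slack:
            url:
              secretRef:
                name: slack-webhook      # placeholder secret
                key: url
            message:
              text: "Workflow finished"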

View File

@ -0,0 +1,24 @@
# Code generated by KubeVela templates. DO NOT EDIT. Please edit the original cue file.
# Definition source cue file: vela-templates/definitions/internal/print-message-in-status.cue
apiVersion: core.oam.dev/v1beta1
kind: WorkflowStepDefinition
metadata:
annotations:
definition.oam.dev/description: print message in workflow step status
labels:
custom.definition.oam.dev/ui-hidden: "true"
name: print-message-in-status
namespace: {{ include "systemDefinitionNamespace" . }}
spec:
schematic:
cue:
template: |
import (
"vela/op"
)
parameter: message: string
msg: op.#Message & {
message: parameter.message
}

View File

@ -0,0 +1,47 @@
# Code generated by KubeVela templates. DO NOT EDIT. Please edit the original cue file.
# Definition source cue file: vela-templates/definitions/internal/read-app.cue
apiVersion: core.oam.dev/v1beta1
kind: WorkflowStepDefinition
metadata:
annotations:
definition.oam.dev/alias: ""
definition.oam.dev/description: Read application from the cluster
definition.oam.dev/example-url: https://raw.githubusercontent.com/kubevela/workflow/main/examples/workflow-run/apply-applications.yaml
labels:
custom.definition.oam.dev/scope: WorkflowRun
name: read-app
namespace: {{ include "systemDefinitionNamespace" . }}
spec:
schematic:
cue:
template: |
import (
"vela/op"
"encoding/yaml"
"strings"
)
read: op.#Read & {
value: {
apiVersion: "core.oam.dev/v1beta1"
kind: "Application"
metadata: {
name: parameter.name
namespace: parameter.namespace
}
}
}
message: op.#Steps & {
if read.err != _|_ {
if strings.Contains(read.err, "not found") {
msg: op.#Message & {
message: "Application not found"
}
}
}
}
parameter: {
name: string
namespace: *context.namespace | string
}
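A minimal read-app sketch (the application name below is a placeholder); the step surfaces "Application not found" in its message when the read fails as handled above:

    steps:
      - name: check-app
        type: read-app
        properties:
          name: my-app                   # placeholder Application name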

View File

@ -0,0 +1,34 @@
# Code generated by KubeVela templates. DO NOT EDIT. Please edit the original cue file.
# Definition source cue file: vela-templates/definitions/internal/read-config.cue
apiVersion: core.oam.dev/v1beta1
kind: WorkflowStepDefinition
metadata:
annotations:
definition.oam.dev/description: Read a config
name: read-config
namespace: {{ include "systemDefinitionNamespace" . }}
spec:
schematic:
cue:
template: |
import (
"vela/op"
)
output: op.#ReadConfig & {
name: parameter.name
if parameter.namespace != _|_ {
namespace: parameter.namespace
}
if parameter.namespace == _|_ {
namespace: context.namespace
}
}
parameter: {
//+usage=Specify the name of the config.
name: string
//+usage=Specify the namespace of the config.
namespace?: string
}

View File

@ -0,0 +1,64 @@
# Code generated by KubeVela templates. DO NOT EDIT. Please edit the original cue file.
# Definition source cue file: vela-templates/definitions/internal/read-object.cue
apiVersion: core.oam.dev/v1beta1
kind: WorkflowStepDefinition
metadata:
annotations:
definition.oam.dev/description: Read Kubernetes objects from cluster for your workflow steps
labels:
custom.definition.oam.dev/ui-hidden: "true"
name: read-object
namespace: {{ include "systemDefinitionNamespace" . }}
spec:
schematic:
cue:
template: |
import (
"vela/op"
)
output: {
if parameter.apiVersion == _|_ && parameter.kind == _|_ {
op.#Read & {
value: {
apiVersion: "core.oam.dev/v1beta1"
kind: "Application"
metadata: {
name: parameter.name
if parameter.namespace != _|_ {
namespace: parameter.namespace
}
}
}
cluster: parameter.cluster
}
}
if parameter.apiVersion != _|_ || parameter.kind != _|_ {
op.#Read & {
value: {
apiVersion: parameter.apiVersion
kind: parameter.kind
metadata: {
name: parameter.name
if parameter.namespace != _|_ {
namespace: parameter.namespace
}
}
}
cluster: parameter.cluster
}
}
}
parameter: {
// +usage=Specify the apiVersion of the object, defaults to 'core.oam.dev/v1beta1'
apiVersion?: string
// +usage=Specify the kind of the object, defaults to Application
kind?: string
// +usage=Specify the name of the object
name: string
// +usage=The namespace of the resource you want to read
namespace?: *"default" | string
// +usage=The cluster you want to read the resource from, default is the current control plane cluster
cluster: *"" | string
}
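A sketch of reading an arbitrary object with read-object; apiVersion and kind are set explicitly here, and the Deployment name is a placeholder:

    steps:
      - name: read-deploy
        type: read-object
        properties:
          apiVersion: apps/v1
          kind: Deployment
          name: my-deploy                # placeholder object name
          namespace: default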

View File

@ -0,0 +1,58 @@
# Code generated by KubeVela templates. DO NOT EDIT. Please edit the original cue file.
# Definition source cue file: vela-templates/definitions/internal/request.cue
apiVersion: core.oam.dev/v1beta1
kind: WorkflowStepDefinition
metadata:
annotations:
definition.oam.dev/alias: ""
definition.oam.dev/description: Send request to the url
definition.oam.dev/example-url: https://raw.githubusercontent.com/kubevela/workflow/main/examples/workflow-run/request.yaml
name: request
namespace: {{ include "systemDefinitionNamespace" . }}
spec:
schematic:
cue:
template: |
import (
"vela/http"
"vela/op"
"encoding/json"
)
req: http.#HTTPDo & {
$params: {
method: parameter.method
url: parameter.url
request: {
if parameter.body != _|_ {
body: json.Marshal(parameter.body)
}
if parameter.header != _|_ {
header: parameter.header
}
}
}
} @step(1)
wait: op.#ConditionalWait & {
continue: req.$returns != _|_
message?: "Waiting for response from \(parameter.url)"
} @step(2)
fail: op.#Steps & {
if req.$returns.statusCode > 400 {
requestFail: op.#Fail & {
message: "request of \(parameter.url) is fail: \(req.$returns.statusCode)"
}
}
} @step(3)
response: json.Unmarshal(req.$returns.body)
parameter: {
url: string
method: *"GET" | "POST" | "PUT" | "DELETE"
body?: {...}
header?: [string]: string
}
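A request step sketch posting a small JSON body (the endpoint is a placeholder); the step fails the workflow when the status code exceeds 400, as implemented above:

    steps:
      - name: call-api
        type: request
        properties:
          url: https://api.example.com/ping    # placeholder endpoint
          method: POST
          body:
            hello: world
          header:
            Content-Type: application/json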

View File

@ -0,0 +1,18 @@
# Code generated by KubeVela templates. DO NOT EDIT. Please edit the original cue file.
# Definition source cue file: vela-templates/definitions/internal/step-group.cue
apiVersion: core.oam.dev/v1beta1
kind: WorkflowStepDefinition
metadata:
annotations:
definition.oam.dev/description: A special step in which you can declare 'subSteps'; 'subSteps' is an array containing any step type except the `step-group` step type itself. The sub steps are executed in parallel.
labels:
custom.definition.oam.dev/ui-hidden: "true"
name: step-group
namespace: {{ include "systemDefinitionNamespace" . }}
spec:
schematic:
cue:
template: |
// no parameters; the nop field only exists to keep the template non-empty, otherwise it would be invalid
nop: {}
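A step-group sketch whose subSteps run in parallel, reusing the print-message-in-status definition from earlier in this diff:

    steps:
      - name: parallel-group
        type: step-group
        subSteps:
          - name: say-a
            type: print-message-in-status
            properties:
              message: hello from substep A
          - name: say-b
            type: print-message-in-status
            properties:
              message: hello from substep B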

View File

@ -0,0 +1,18 @@
# Code generated by KubeVela templates. DO NOT EDIT. Please edit the original cue file.
# Definition source cue file: vela-templates/definitions/internal/suspend.cue
apiVersion: core.oam.dev/v1beta1
kind: WorkflowStepDefinition
metadata:
annotations:
definition.oam.dev/description: Suspend the current workflow; it can be resumed by the 'vela workflow resume' command.
name: suspend
namespace: {{ include "systemDefinitionNamespace" . }}
spec:
schematic:
cue:
template: |
parameter: {
// +usage=Specify the duration to wait before the workflow is automatically resumed, such as "30s", "1min" or "2m15s"
duration?: string
}
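A suspend step sketch with an auto-resume window:

    steps:
      - name: manual-gate
        type: suspend
        properties:
          duration: 24h                  # auto-resume after a day; omit to wait for 'vela workflow resume'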

View File

@ -0,0 +1,149 @@
# Code generated by KubeVela templates. DO NOT EDIT. Please edit the original cue file.
# Definition source cue file: vela-templates/definitions/internal/vela-cli.cue
apiVersion: core.oam.dev/v1beta1
kind: WorkflowStepDefinition
metadata:
annotations:
definition.oam.dev/description: Run a vela command
definition.oam.dev/example-url: https://raw.githubusercontent.com/kubevela/workflow/main/examples/workflow-run/apply-terraform-resource.yaml
name: vela-cli
namespace: {{ include "systemDefinitionNamespace" . }}
spec:
schematic:
cue:
template: |
import (
"vela/op"
)
mountsArray: [
if parameter.storage != _|_ && parameter.storage.secret != _|_ for v in parameter.storage.secret {
{
name: "secret-" + v.name
mountPath: v.mountPath
if v.subPath != _|_ {
subPath: v.subPath
}
}
},
if parameter.storage != _|_ && parameter.storage.hostPath != _|_ for v in parameter.storage.hostPath {
{
name: "hostpath-" + v.name
mountPath: v.mountPath
}
},
]
volumesList: [
if parameter.storage != _|_ && parameter.storage.secret != _|_ for v in parameter.storage.secret {
{
name: "secret-" + v.name
secret: {
defaultMode: v.defaultMode
secretName: v.secretName
if v.items != _|_ {
items: v.items
}
}
}
if parameter.storage != _|_ && parameter.storage.hostPath != _|_ for v in parameter.storage.hostPath {
{
name: "hostpath-" + v.name
path: v.path
}
}
},
]
deDupVolumesArray: [
for val in [
for i, vi in volumesList {
for j, vj in volumesList if j < i && vi.name == vj.name {
_ignore: true
}
vi
},
] if val._ignore == _|_ {
val
},
]
job: op.#Apply & {
value: {
apiVersion: "batch/v1"
kind: "Job"
metadata: {
name: "\(context.name)-\(context.stepName)-\(context.stepSessionID)"
if parameter.serviceAccountName == "kubevela-vela-core" {
namespace: "vela-system"
}
if parameter.serviceAccountName != "kubevela-vela-core" {
namespace: context.namespace
}
}
spec: {
backoffLimit: 3
template: {
metadata: labels: "workflow.oam.dev/step-name": "\(context.name)-\(context.stepName)"
spec: {
containers: [
{
name: "\(context.name)-\(context.stepName)-\(context.stepSessionID)-job"
image: parameter.image
command: parameter.command
volumeMounts: mountsArray
},
]
restartPolicy: "Never"
serviceAccount: parameter.serviceAccountName
volumes: deDupVolumesArray
}
}
}
}
}
log: op.#Log & {
source: resources: [{labelSelector: "workflow.oam.dev/step-name": "\(context.name)-\(context.stepName)"}]
}
fail: op.#Steps & {
if job.value.status.failed != _|_ {
if job.value.status.failed > 2 {
breakWorkflow: op.#Fail & {
message: "failed to execute vela command"
}
}
}
}
wait: op.#ConditionalWait & {
continue: job.value.status.succeeded != _|_ && job.value.status.succeeded > 0
}
parameter: {
// +usage=Specify the name of the addon.
addonName: string
// +usage=Specify the vela command
command: [...string]
// +usage=Specify the image
image: *"oamdev/vela-cli:v1.6.4" | string
// +usage=Specify the serviceAccountName you want to use
serviceAccountName: *"kubevela-vela-core" | string
storage?: {
// +usage=Mount Secret type storage
secret?: [...{
name: string
mountPath: string
subPath?: string
defaultMode: *420 | int
secretName: string
items?: [...{
key: string
path: string
mode: *511 | int
}]
}]
// +usage=Declare host path type storage
hostPath?: [...{
name: string
path: string
mountPath: string
type: *"Directory" | "DirectoryOrCreate" | "FileOrCreate" | "File" | "Socket" | "CharDevice" | "BlockDevice"
}]
}
}
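For illustration only, a vela-cli step sketch; the addonName value is a placeholder required by the parameter schema above, and the command must include the vela binary since it is passed directly to the Job container:

    steps:
      - name: list-addons
        type: vela-cli
        properties:
          addonName: example             # placeholder, required by the schema
          command: ["vela", "addon", "list"]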

View File

@ -109,6 +109,11 @@ spec:
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
args:
- "-test.coverprofile=/workspace/data/e2e-profile.out"
- "__DEVEL__E2E"
- "-test.run=E2EMain"
- "-test.coverpkg=$(go list ./pkg/...| tr '
' ','| sed 's/,$//g')"
{{ if .Values.admissionWebhooks.enabled }}
- "--use-webhook=true"
- "--webhook-port={{ .Values.webhookService.port }}"
@ -125,17 +130,23 @@ spec:
- "--leader-elect"
- "--health-probe-bind-address=:{{ .Values.healthCheck.port }}"
- "--concurrent-reconciles={{ .Values.concurrentReconciles }}"
- "--ignore-workflow-without-controller-requirement={{ .Values.ignoreWorkflowWithoutControllerRequirement }}"
- "--kube-api-qps={{ .Values.kubeClient.qps }}"
- "--kube-api-burst={{ .Values.kubeClient.burst }}"
- "--user-agent={{ .Values.kubeClient.userAgent }}"
- "--max-workflow-wait-backoff-time={{ .Values.workflow.backoff.maxTime.waitState }}"
- "--max-workflow-failed-backoff-time={{ .Values.workflow.backoff.maxTime.failedState }}"
- "--max-workflow-step-error-retry-times={{ .Values.workflow.step.errorRetryTimes }}"
- "--feature-gates=EnableWatchEventListener={{- .Values.workflow.enableWatchEventListener | toString -}}"
- "--feature-gates=EnablePatchStatusAtOnce={{- .Values.workflow.enablePatchStatusAtOnce | toString -}}"
- "--feature-gates=EnableSuspendOnFailure={{- .Values.workflow.enableSuspendOnFailure | toString -}}"
- "--feature-gates=EnableBackupWorkflowRecord={{- .Values.backup.enabled | toString -}}"
- "--group-by-label={{ .Values.workflow.groupByLabel }}"
- "--enable-external-package-for-default-compiler={{- .Values.workflow.enableExternalPackageForDefaultCompiler | toString -}}"
- "--enable-external-package-watch-for-default-compiler={{- .Values.workflow.enableExternalPackageWatchForDefaultCompiler | toString -}}"
{{ if .Values.backup.enable }}
- "--backup-strategy={{ .Values.backup.strategy }}"
- "--backup-ignore-strategy={{ .Values.backup.ignoreStrategy }}"
- "--backup-group-by-label={{ .Values.backup.groupByLabel }}"
- "--backup-clean-on-backup={{ .Values.backup.cleanOnBackup }}"
- "--backup-persist-type={{ .Values.backup.persisType }}"
- "--backup-config-secret-name={{ .Values.backup.configSecretName }}"

View File

@ -9,21 +9,34 @@ systemDefinitionNamespace:
## @param concurrentReconciles concurrentReconciles is the concurrent reconcile number of the controller
concurrentReconciles: 4
## @param ignoreWorkflowWithoutControllerRequirement will determine whether to process the workflowrun without 'workflowrun.oam.dev/controller-version-require' annotation
ignoreWorkflowWithoutControllerRequirement: false
## @section KubeVela workflow parameters
## @param workflow.enableSuspendOnFailure Enable suspend on workflow failure
## @param workflow.enableSuspendOnFailure Enable the capability of suspending a failed workflow automatically
## @param workflow.enablePatchStatusAtOnce Enable the capability of patch status at once
## @param workflow.enableWatchEventListener Enable the capability of watch event listener for a faster reconcile, note that you need to install [kube-trigger](https://github.com/kubevela/kube-trigger) first to use this feature
## @param workflow.enableExternalPackageForDefaultCompiler Enable external package for default compiler
## @param workflow.enableExternalPackageWatchForDefaultCompiler Enable external package watch for default compiler
## @param workflow.backoff.maxTime.waitState The max backoff time of workflow in a wait condition
## @param workflow.backoff.maxTime.failedState The max backoff time of workflow in a failed condition
## @param workflow.step.errorRetryTimes The max retry times of a failed workflow step
## @param workflow.groupByLabel The label used to group workflow record
workflow:
enableSuspendOnFailure: false
enablePatchStatusAtOnce: false
enableWatchEventListener: false
enableExternalPackageForDefaultCompiler: true
enableExternalPackageWatchForDefaultCompiler: false
backoff:
maxTime:
waitState: 60
failedState: 300
step:
errorRetryTimes: 10
groupByLabel: "pipeline.oam.dev/name"
## @section KubeVela workflow backup parameters
@ -31,7 +44,6 @@ workflow:
## @param backup.strategy The backup strategy for workflow record
## @param backup.ignoreStrategy The ignore strategy for backup
## @param backup.cleanOnBackup Enable auto clean after backup workflow record
## @param backup.groupByLabel The label used to group workflow record
## @param backup.persistType The persist type for workflow record
## @param backup.configSecretName The secret name of backup config
## @param backup.configSecretNamespace The secret name of backup config namespace
@ -40,7 +52,6 @@ backup:
strategy: BackupFinishedRecord
ignoreStrategy: IgnoreLatestFailedRecord
cleanOnBackup: false
groupByLabel: ""
persistType: ""
configSecretName: "backup-config"
configSecretNamespace: "vela-system"
@ -141,7 +152,7 @@ logFileMaxSize: 1024
## @skip admissionWebhooks
admissionWebhooks:
enabled: false
enabled: true
failurePolicy: Fail
certificate:
mountPath: /etc/k8s-webhook-certs
@ -154,13 +165,13 @@ admissionWebhooks:
nodeSelector: {}
affinity: {}
tolerations: []
appConversion:
enabled: false
certManager:
enabled: false
## @param kubeClient.qps The qps for reconcile clients, default is 50
## @param kubeClient.burst The burst for reconcile clients, default is 100
## @param kubeClient.userAgent The user agent of the client, default is vela-workflow
kubeClient:
qps: 500
burst: 1000
userAgent: vela-workflow
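To tie the chart values above back to the controller flags added in the deployment template, a minimal values override sketch (applied with helm, keys assumed from this diff; the userAgent value is a placeholder) might look like:

workflow:
  enableWatchEventListener: true         # requires kube-trigger to be installed first
  enableSuspendOnFailure: true
  groupByLabel: "pipeline.oam.dev/name"
kubeClient:
  qps: 800                               # raised alongside concurrentReconciles
  burst: 1600
  userAgent: my-vela-workflow            # placeholder user agent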

View File

@ -21,44 +21,54 @@ import (
"errors"
goflag "flag"
"fmt"
"io"
"net/http"
"net/http/pprof"
"os"
"path/filepath"
"strconv"
"strings"
"time"
"github.com/crossplane/crossplane-runtime/pkg/event"
"github.com/kubevela/pkg/controller/sharding"
flag "github.com/spf13/pflag"
corev1 "k8s.io/api/core/v1"
kerrors "k8s.io/apimachinery/pkg/api/errors"
"sigs.k8s.io/controller-runtime/pkg/client"
"github.com/crossplane/crossplane-runtime/pkg/event"
flag "github.com/spf13/pflag"
"k8s.io/apimachinery/pkg/runtime"
utilruntime "k8s.io/apimachinery/pkg/util/runtime"
"k8s.io/apiserver/pkg/util/feature"
clientgoscheme "k8s.io/client-go/kubernetes/scheme"
"k8s.io/klog/v2"
"k8s.io/klog/v2/klogr"
"k8s.io/klog/v2/textlogger"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/healthz"
metricsserver "sigs.k8s.io/controller-runtime/pkg/metrics/server"
crtlwebhook "sigs.k8s.io/controller-runtime/pkg/webhook"
triggerv1alpha1 "github.com/kubevela/kube-trigger/api/v1alpha1"
velaclient "github.com/kubevela/pkg/controller/client"
"github.com/kubevela/pkg/multicluster"
"github.com/kubevela/workflow/api/v1alpha1"
"github.com/kubevela/workflow/controllers"
"github.com/kubevela/workflow/pkg/backup"
"github.com/kubevela/workflow/pkg/common"
"github.com/kubevela/workflow/pkg/cue/packages"
"github.com/kubevela/workflow/pkg/features"
"github.com/kubevela/workflow/pkg/monitor/watcher"
"github.com/kubevela/workflow/pkg/providers"
"github.com/kubevela/workflow/pkg/types"
"github.com/kubevela/workflow/pkg/utils"
"github.com/kubevela/workflow/pkg/webhook"
"github.com/kubevela/workflow/version"
//+kubebuilder:scaffold:imports
)
var (
scheme = runtime.NewScheme()
scheme = runtime.NewScheme()
waitSecretTimeout = 90 * time.Second
waitSecretInterval = 2 * time.Second
)
func init() {
@ -69,13 +79,13 @@ func init() {
}
func main() {
var metricsAddr, logFilePath, probeAddr, pprofAddr, leaderElectionResourceLock string
var metricsAddr, logFilePath, probeAddr, pprofAddr, leaderElectionResourceLock, userAgent, certDir string
var backupStrategy, backupIgnoreStrategy, backupPersistType, groupByLabel, backupConfigSecretName, backupConfigSecretNamespace string
var enableLeaderElection, logDebug, backupCleanOnBackup bool
var enableLeaderElection, useWebhook, logDebug, backupCleanOnBackup bool
var qps float64
var logFileMaxSize uint64
var burst, webhookPort int
var leaseDuration, renewDeadline, retryPeriod time.Duration
var leaseDuration, renewDeadline, retryPeriod, recycleDuration time.Duration
var controllerArgs controllers.Args
flag.StringVar(&metricsAddr, "metrics-bind-address", ":8080", "The address the metric endpoint binds to.")
@ -86,17 +96,24 @@ func main() {
flag.BoolVar(&enableLeaderElection, "leader-elect", false,
"Enable leader election for controller manager. "+
"Enabling this will ensure there is only one active controller manager.")
flag.StringVar(&leaderElectionResourceLock, "leader-election-resource-lock", "configmapsleases", "The resource lock to use for leader election")
flag.StringVar(&leaderElectionResourceLock, "leader-election-resource-lock", "leases", "The resource lock to use for leader election")
flag.DurationVar(&leaseDuration, "leader-election-lease-duration", 15*time.Second,
"The duration that non-leader candidates will wait to force acquire leadership")
flag.DurationVar(&renewDeadline, "leader-election-renew-deadline", 10*time.Second,
"The duration that the acting controlplane will retry refreshing leadership before giving up")
flag.DurationVar(&retryPeriod, "leader-election-retry-period", 2*time.Second,
"The duration the LeaderElector clients should wait between tries of actions")
flag.DurationVar(&recycleDuration, "recycle-duration", 30*24*time.Hour,
"The recycle duration of a completed and is not the latest record in a set of workflowruns")
flag.BoolVar(&useWebhook, "use-webhook", false, "Enable Admission Webhook")
flag.StringVar(&certDir, "webhook-cert-dir", "/k8s-webhook-server/serving-certs", "Admission webhook cert/key dir.")
flag.IntVar(&webhookPort, "webhook-port", 9443, "admission webhook listen address")
flag.IntVar(&controllerArgs.ConcurrentReconciles, "concurrent-reconciles", 4, "concurrent-reconciles is the concurrent reconcile number of the controller. The default value is 4")
flag.BoolVar(&controllerArgs.IgnoreWorkflowWithoutControllerRequirement, "ignore-workflow-without-controller-requirement", false, "If true, workflow controller will not process the workflowrun without 'workflowrun.oam.dev/controller-version-require' annotation")
flag.Float64Var(&qps, "kube-api-qps", 50, "the qps for reconcile clients. Low qps may lead to low throughput. High qps may give stress to api-server. Raise this value if concurrent-reconciles is set to be high.")
flag.IntVar(&burst, "kube-api-burst", 100, "the burst for reconcile clients. Recommend setting it qps*2.")
flag.StringVar(&userAgent, "user-agent", "vela-workflow", "the user agent of the client.")
flag.StringVar(&pprofAddr, "pprof-addr", "", "The address for pprof to use while exporting profiling results. The default value is empty which means do not expose it. Set it to address like :6666 to expose it.")
flag.IntVar(&types.MaxWorkflowWaitBackoffTime, "max-workflow-wait-backoff-time", 60, "Set the max workflow wait backoff time, default is 60")
flag.IntVar(&types.MaxWorkflowFailedBackoffTime, "max-workflow-failed-backoff-time", 300, "Set the max workflow failed backoff time, default is 300")
@ -104,12 +121,15 @@ func main() {
flag.StringVar(&backupStrategy, "backup-strategy", "BackupFinishedRecord", "Set the strategy for backup workflow records, default is BackupFinishedRecord")
flag.StringVar(&backupIgnoreStrategy, "backup-ignore-strategy", "", "Set the strategy for ignore backup workflow records, default is IgnoreLatestFailedRecord")
flag.StringVar(&backupPersistType, "backup-persist-type", "", "Set the persist type for backup workflow records, default is empty")
flag.StringVar(&groupByLabel, "backup-group-by-label", "", "Set the label for group by, default is empty")
flag.StringVar(&groupByLabel, "group-by-label", "pipeline.oam.dev/name", "Set the label for group by, default is pipeline.oam.dev/name")
flag.BoolVar(&backupCleanOnBackup, "backup-clean-on-backup", false, "Set the auto clean for backup workflow records, default is false")
flag.StringVar(&backupConfigSecretName, "backup-config-secret-name", "backup-config", "Set the secret name for backup workflow configs, default is backup-config")
flag.StringVar(&backupConfigSecretNamespace, "backup-config-secret-namespace", "vela-system", "Set the secret namespace for backup workflow configs, default is vela-system")
flag.BoolVar(&providers.EnableExternalPackageForDefaultCompiler, "enable-external-package-for-default-compiler", true, "Enable external package for default compiler")
flag.BoolVar(&providers.EnableExternalPackageWatchForDefaultCompiler, "enable-external-package-watch-for-default-compiler", false, "Enable external package watch for default compiler")
multicluster.AddClusterGatewayClientFlags(flag.CommandLine)
feature.DefaultMutableFeatureGate.AddFlag(flag.CommandLine)
sharding.AddControllerFlags(flag.CommandLine)
// setup logging
klog.InitFlags(nil)
@ -146,7 +166,7 @@ func main() {
}
}()
if err := pprofServer.ListenAndServe(); !errors.Is(http.ErrServerClosed, err) {
if err := pprofServer.ListenAndServe(); !errors.Is(err, http.ErrServerClosed) {
klog.Error(err, "Failed to start debug HTTP server")
panic(err)
}
@ -159,7 +179,7 @@ func main() {
_ = flag.Set("log_file_max_size", strconv.FormatUint(logFileMaxSize, 10))
}
ctrl.SetLogger(klogr.New())
ctrl.SetLogger(textlogger.NewLogger(textlogger.NewConfig()))
klog.InfoS("KubeVela Workflow information", "version", version.VelaVersion, "revision", version.GitRevision)
@ -170,12 +190,23 @@ func main() {
"QPS", restConfig.QPS,
"Burst", restConfig.Burst,
)
restConfig.UserAgent = userAgent
if feature.DefaultMutableFeatureGate.Enabled(features.EnableWatchEventListener) {
utilruntime.Must(triggerv1alpha1.AddToScheme(scheme))
}
leaderElectionID := fmt.Sprintf("workflow-%s", strings.ToLower(strings.ReplaceAll(version.VelaVersion, ".", "-")))
leaderElectionID += sharding.GetShardIDSuffix()
mgr, err := ctrl.NewManager(restConfig, ctrl.Options{
Scheme: scheme,
MetricsBindAddress: metricsAddr,
Port: webhookPort,
Scheme: scheme,
Metrics: metricsserver.Options{
BindAddress: metricsAddr,
},
WebhookServer: crtlwebhook.NewServer(crtlwebhook.Options{
CertDir: certDir,
Port: webhookPort,
}),
HealthProbeBindAddress: probeAddr,
LeaderElection: enableLeaderElection,
LeaderElectionID: leaderElectionID,
@ -184,34 +215,45 @@ func main() {
RenewDeadline: &renewDeadline,
RetryPeriod: &retryPeriod,
NewClient: velaclient.DefaultNewControllerClient,
NewCache: sharding.BuildCache(&v1alpha1.WorkflowRun{}),
})
if err != nil {
klog.Error(err, "unable to start manager")
os.Exit(1)
}
pd, err := packages.NewPackageDiscover(mgr.GetConfig())
if err != nil {
klog.Error(err, "Failed to create CRD discovery for CUE package client")
if !packages.IsCUEParseErr(err) {
kubeClient := mgr.GetClient()
if groupByLabel != "" {
if err := mgr.Add(utils.NewRecycleCronJob(kubeClient, recycleDuration, "0 0 * * *", groupByLabel)); err != nil {
klog.Error(err, "unable to start recycle cronjob")
os.Exit(1)
}
}
kubeClient := mgr.GetClient()
if useWebhook {
klog.InfoS("Enable webhook", "server port", strconv.Itoa(webhookPort))
webhook.Register(mgr, controllerArgs)
if err := waitWebhookSecretVolume(certDir, waitSecretTimeout, waitSecretInterval); err != nil {
klog.ErrorS(err, "Unable to get webhook secret")
os.Exit(1)
}
}
if err = (&controllers.WorkflowRunReconciler{
Client: kubeClient,
Scheme: mgr.GetScheme(),
PackageDiscover: pd,
Recorder: event.NewAPIRecorder(mgr.GetEventRecorderFor("WorkflowRun")),
Args: controllerArgs,
Client: kubeClient,
Scheme: mgr.GetScheme(),
Recorder: event.NewAPIRecorder(mgr.GetEventRecorderFor("WorkflowRun")),
ControllerVersion: version.VelaVersion,
Args: controllerArgs,
}).SetupWithManager(mgr); err != nil {
klog.Error(err, "unable to create controller", "controller", "WorkflowRun")
os.Exit(1)
}
if feature.DefaultMutableFeatureGate.Enabled(features.EnableBackupWorkflowRecord) {
if backupPersistType == "" {
klog.Warning("Backup persist type is empty, workflow record won't be persisted")
}
configSecret := &corev1.Secret{}
reader := mgr.GetAPIReader()
if err := reader.Get(context.Background(), client.ObjectKey{
@ -221,20 +263,21 @@ func main() {
klog.Error(err, "unable to find secret")
os.Exit(1)
}
configData := configSecret.Data
if configData == nil {
configData = make(map[string][]byte)
persister, err := backup.NewPersister(configSecret.Data, backupPersistType)
if err != nil {
klog.Error(err, "unable to create persister")
os.Exit(1)
}
if err = (&controllers.BackupReconciler{
Client: kubeClient,
Scheme: mgr.GetScheme(),
Client: kubeClient,
Scheme: mgr.GetScheme(),
ControllerVersion: version.VelaVersion,
BackupArgs: controllers.BackupArgs{
BackupStrategy: backupStrategy,
IgnoreStrategy: backupIgnoreStrategy,
CleanOnBackup: backupCleanOnBackup,
GroupByLabel: groupByLabel,
PersistType: backupPersistType,
PersistConfig: configData,
Persister: persister,
},
Args: controllerArgs,
}).SetupWithManager(mgr); err != nil {
@ -270,3 +313,50 @@ func main() {
}
klog.Info("Safely stops Program...")
}
// waitWebhookSecretVolume waits for webhook secret ready to avoid mgr running crash
func waitWebhookSecretVolume(certDir string, timeout, interval time.Duration) error {
start := time.Now()
for {
time.Sleep(interval)
if time.Since(start) > timeout {
return fmt.Errorf("getting webhook secret timeout after %s", timeout.String())
}
klog.InfoS("Wait webhook secret", "time consumed(second)", int64(time.Since(start).Seconds()),
"timeout(second)", int64(timeout.Seconds()))
if _, err := os.Stat(certDir); !os.IsNotExist(err) {
ready := func() bool {
f, err := os.Open(filepath.Clean(certDir))
if err != nil {
return false
}
defer func() {
if err := f.Close(); err != nil {
klog.Error(err, "Failed to close file")
}
}()
// check if dir is empty
if _, err := f.Readdir(1); errors.Is(err, io.EOF) {
return false
}
// check if secret files are empty
err = filepath.Walk(certDir, func(path string, info os.FileInfo, err error) error { //nolint:revive,unused
// even Cert dir is created, cert files are still empty for a while
if info.Size() == 0 {
return errors.New("secret is not ready")
}
return nil
})
if err == nil {
klog.InfoS("Webhook secret is ready", "time consumed(second)",
int64(time.Since(start).Seconds()))
return true
}
return false
}()
if ready {
return nil
}
}
}
}

View File

@ -41,19 +41,19 @@ import (
// BackupReconciler reconciles a WorkflowRun object
type BackupReconciler struct {
client.Client
Scheme *runtime.Scheme
Scheme *runtime.Scheme
ControllerVersion string
BackupArgs
Args
}
// BackupArgs is the args for backup
type BackupArgs struct {
PersistType string
Persister backup.PersistWorkflowRecord
BackupStrategy string
IgnoreStrategy string
GroupByLabel string
CleanOnBackup bool
PersistConfig map[string][]byte
}
const (
@ -88,6 +88,11 @@ func (r *BackupReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctr
return ctrl.Result{}, client.IgnoreNotFound(err)
}
if !r.matchControllerRequirement(run) {
logCtx.Info("skip workflowrun: not match the controller requirement of workflowrun")
return ctrl.Result{}, nil
}
if !run.Status.Finished {
logCtx.Info("WorkflowRun is not finished, skip reconcile")
return ctrl.Result{}, nil
@ -131,9 +136,8 @@ func (r *BackupReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctr
}
func (r *BackupReconciler) backup(ctx monitorContext.Context, cli client.Client, run *v1alpha1.WorkflowRun) error {
persister := backup.NewPersister(r.PersistType, r.PersistConfig)
if persister != nil {
if err := persister.Store(ctx, run); err != nil {
if r.Persister != nil {
if err := r.Persister.Store(ctx, run); err != nil {
return err
}
}
@ -146,6 +150,18 @@ func (r *BackupReconciler) backup(ctx monitorContext.Context, cli client.Client,
return nil
}
func (r *BackupReconciler) matchControllerRequirement(wr *v1alpha1.WorkflowRun) bool {
if wr.Annotations != nil {
if requireVersion, ok := wr.Annotations[types.AnnotationControllerRequirement]; ok {
return requireVersion == r.ControllerVersion
}
}
if r.IgnoreWorkflowWithoutControllerRequirement {
return false
}
return true
}
// SetupWithManager sets up the controller with the Manager.
func (r *BackupReconciler) SetupWithManager(mgr ctrl.Manager) error {
return ctrl.NewControllerManagedBy(mgr).
@ -156,14 +172,14 @@ func (r *BackupReconciler) SetupWithManager(mgr ctrl.Manager) error {
// filter the changes in workflow status
// let workflow handle its reconcile
UpdateFunc: func(e ctrlEvent.UpdateEvent) bool {
new := e.ObjectNew.DeepCopyObject().(*v1alpha1.WorkflowRun)
old := e.ObjectOld.DeepCopyObject().(*v1alpha1.WorkflowRun)
newObj := e.ObjectNew.DeepCopyObject().(*v1alpha1.WorkflowRun)
oldObj := e.ObjectOld.DeepCopyObject().(*v1alpha1.WorkflowRun)
// if the workflow is not finished, skip the reconcile
if !new.Status.Finished {
if !newObj.Status.Finished {
return false
}
return !reflect.DeepEqual(old, new)
return !reflect.DeepEqual(oldObj, newObj)
},
CreateFunc: func(e ctrlEvent.CreateEvent) bool {
run := e.Object.DeepCopyObject().(*v1alpha1.WorkflowRun)
@ -180,7 +196,7 @@ func isLatestFailedRecord(ctx context.Context, cli client.Client, run *v1alpha1.
}
runs := &v1alpha1.WorkflowRunList{}
listOpt := &client.ListOptions{}
if groupByLabel != "" {
if groupByLabel != "" && run.Labels != nil && run.Labels[groupByLabel] != "" {
labels := &metav1.LabelSelector{
MatchLabels: map[string]string{
groupByLabel: run.Labels[groupByLabel],

View File

@ -21,7 +21,7 @@ import (
"fmt"
"time"
. "github.com/onsi/ginkgo"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"

View File

@ -19,23 +19,25 @@ package controllers
import (
"path/filepath"
"testing"
"time"
"github.com/crossplane/crossplane-runtime/pkg/event"
. "github.com/onsi/ginkgo"
cuexv1alpha1 "github.com/kubevela/pkg/apis/cue/v1alpha1"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
"github.com/pkg/errors"
"k8s.io/apimachinery/pkg/api/meta"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/client-go/dynamic/fake"
"k8s.io/client-go/kubernetes/scheme"
"k8s.io/client-go/rest"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/envtest"
"sigs.k8s.io/controller-runtime/pkg/envtest/printer"
logf "sigs.k8s.io/controller-runtime/pkg/log"
"sigs.k8s.io/controller-runtime/pkg/log/zap"
"github.com/kubevela/pkg/util/singleton"
"github.com/kubevela/workflow/api/v1alpha1"
"github.com/kubevela/workflow/pkg/cue/packages"
//+kubebuilder:scaffold:imports
)
@ -52,12 +54,10 @@ var recorder = NewFakeRecorder(10000)
func TestAPIs(t *testing.T) {
RegisterFailHandler(Fail)
RunSpecsWithDefaultAndCustomReporters(t,
"Controller Suite",
[]Reporter{printer.NewlineReporter{}})
RunSpecs(t, "Controller Suite")
}
var _ = BeforeSuite(func() {
var _ = BeforeSuite(func(ctx SpecContext) {
logf.SetLogger(zap.New(zap.WriteTo(GinkgoWriter), zap.UseDevMode(true)))
By("bootstrapping test environment")
@ -72,27 +72,27 @@ var _ = BeforeSuite(func() {
Expect(err).NotTo(HaveOccurred())
Expect(cfg).NotTo(BeNil())
pd, err := packages.NewPackageDiscover(cfg)
Expect(err).To(BeNil())
testScheme = scheme.Scheme
err = v1alpha1.AddToScheme(testScheme)
Expect(err).NotTo(HaveOccurred())
err = cuexv1alpha1.AddToScheme(testScheme)
Expect(err).NotTo(HaveOccurred())
//+kubebuilder:scaffold:scheme
k8sClient, err = client.New(cfg, client.Options{Scheme: testScheme})
Expect(err).NotTo(HaveOccurred())
Expect(k8sClient).NotTo(BeNil())
singleton.KubeClient.Set(k8sClient)
fakeDynamicClient := fake.NewSimpleDynamicClient(testScheme)
singleton.DynamicClient.Set(fakeDynamicClient)
reconciler = &WorkflowRunReconciler{
Client: k8sClient,
Scheme: testScheme,
PackageDiscover: pd,
Recorder: event.NewAPIRecorder(recorder),
Client: k8sClient,
Scheme: testScheme,
Recorder: event.NewAPIRecorder(recorder),
}
}, 60)
}, NodeTimeout(1*time.Minute))
var _ = AfterSuite(func() {
By("tearing down the test environment")

View File

@ -6,6 +6,20 @@ metadata:
spec:
schematic:
cue:
template: "import (\n\t\"vela/op\"\n)\n\napply: op.#Apply & {\n\tvalue: parameter.value\n\tcluster:
parameter.cluster\n}\nparameter: {\n\t// +usage=Specify the value of the object\n\tvalue:
{...}\n\t// +usage=Specify the cluster of the object\n\tcluster: *\"\" | string\n}\n"
template: |
import (
"vela/kube"
)
apply: kube.#Apply & {
$params: {
value: parameter.value
cluster: parameter.cluster
}
}
parameter: {
// +usage=Specify Kubernetes native resource object to be applied
value: {...}
// +usage=The cluster you want to apply the resource to, default is the current control plane cluster
cluster: *"" | string
}

controllers/testdata/multi-suspend.yaml
View File

@ -0,0 +1,15 @@
apiVersion: core.oam.dev/v1beta1
kind: WorkflowStepDefinition
metadata:
name: multi-suspend
namespace: vela-system
spec:
schematic:
cue:
template: |
import (
"vela/builtin"
)
suspend1: builtin.#Suspend & {}
suspend2: builtin.#Suspend & {}

View File

@ -0,0 +1,26 @@
apiVersion: core.oam.dev/v1beta1
kind: WorkflowStepDefinition
metadata:
name: save-process-context
namespace: vela-system
spec:
schematic:
cue:
template: |
import "vela/op"
cm: op.#Apply & {
value: {
apiVersion: "v1"
kind: "ConfigMap"
metadata: {
name: parameter.name
labels: {
"process.context.data": "true"
}
}
data: context
}
}
parameter: name: string

View File

@ -0,0 +1,52 @@
apiVersion: core.oam.dev/v1beta1
kind: WorkflowStepDefinition
metadata:
name: suspend-and-deploy
namespace: vela-system
spec:
schematic:
cue:
template: |
import (
"vela/kube"
"vela/builtin"
)
suspend: builtin.#Suspend & {$params: duration: "1s"}
output: kube.#Apply & {
$params: {
cluster: parameter.cluster
value: {
apiVersion: "apps/v1"
kind: "Deployment"
metadata: {
name: context.stepName
namespace: context.namespace
}
spec: {
selector: matchLabels: "workflow.oam.dev/step-name": "\(context.name)-\(context.stepName)"
replicas: parameter.replicas
template: {
metadata: labels: "workflow.oam.dev/step-name": "\(context.name)-\(context.stepName)"
spec: containers: [{
name: context.stepName
image: parameter.image
if parameter["cmd"] != _|_ {
command: parameter.cmd
}
}]
}
}
}
}
}
wait: builtin.#ConditionalWait & {
$params: continue: output.$returns.value.status.readyReplicas == parameter.replicas
}
parameter: {
image: string
replicas: *1 | int
cluster: *"" | string
cmd?: [...string]
}

View File

@ -6,14 +6,49 @@ metadata:
spec:
schematic:
cue:
template: "import (\t\"vela/op\"\n)\n\noutput: op.#Apply & {\n\tvalue: {\n\t\tapiVersion:
\"apps/v1\"\n\t\tkind: \"Deployment\"\n\t\tmetadata: {\n\t\t\tname:
\ context.stepName\n\t\t\tnamespace: context.namespace\n\t\t}\n\t\tspec:
{\n\t\t\tselector: matchLabels: wr: context.stepName\n\t\t\ttemplate: {\n\t\t\t\tmetadata:
labels: wr: context.stepName\n\t\t\t\tspec: containers: [{\n\t\t\t\t\tname:
\ context.stepName\n\t\t\t\t\timage: parameter.image\n\t\t\t\t\tif parameter[\"cmd\"]
!= _|_ {\n\t\t\t\t\t\tcommand: parameter.cmd\n\t\t\t\t\t}\n\t\t\t\t\tif parameter[\"message\"]
!= _|_ {\n\t\t\t\t\t\tenv: [{\n\t\t\t\t\t\t\tname: \"MESSAGE\"\n\t\t\t\t\t\t\tvalue:
parameter.message\n\t\t\t\t\t\t}]\n\t\t\t\t\t}\n\t\t\t\t}]\n\t\t\t}\n\t\t}\n\t}\n}\nwait:
op.#ConditionalWait & {\n\tcontinue: output.value.status.readyReplicas ==
1\n}\nparameter: {\n\timage: string\n\tcmd?: [...string]\n\tmessage?: string\n}\n"
template: |
import (
"vela/kube"
"vela/builtin"
)
output: kube.#Apply & {
$params: value: {
apiVersion: "apps/v1"
kind: "Deployment"
metadata: {
name: context.stepName
namespace: context.namespace
}
spec: {
selector: matchLabels: wr: context.stepName
template: {
metadata: labels: wr: context.stepName
spec: containers: [{
name: context.stepName
image: parameter.image
if parameter["cmd"] != _|_ {
command: parameter.cmd
}
if parameter["message"] != _|_ {
env: [{
name: "MESSAGE"
value: parameter.message
}]
}
}]
}
}
}
}
wait: builtin.#ConditionalWait & {
if len(output.$returns.value.status) > 0 if output.$returns.value.status.readyReplicas == 1 {
$params: continue: true
}
}
parameter: {
image: string
cmd?: [...string]
message?: string
}

View File

@ -22,10 +22,9 @@ import (
"path/filepath"
sysruntime "runtime"
"strings"
"testing"
"time"
. "github.com/onsi/ginkgo"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
@ -72,7 +71,7 @@ var _ = Describe("Test Workflow", func() {
},
},
}
testDefinitions := []string{"test-apply", "apply-object", "failed-render"}
testDefinitions := []string{"test-apply", "apply-object", "failed-render", "suspend-and-deploy", "multi-suspend", "save-process-context"}
BeforeEach(func() {
setupNamespace(ctx, namespace)
@ -96,10 +95,10 @@ var _ = Describe("Test Workflow", func() {
Name: "workflow",
Namespace: namespace,
},
Mode: &v1alpha1.WorkflowExecuteMode{
Steps: v1alpha1.WorkflowModeDAG,
},
WorkflowSpec: v1alpha1.WorkflowSpec{
Mode: &v1alpha1.WorkflowExecuteMode{
Steps: v1alpha1.WorkflowModeDAG,
},
Steps: []v1alpha1.WorkflowStep{
{
WorkflowStepBase: v1alpha1.WorkflowStepBase{
@ -206,17 +205,69 @@ var _ = Describe("Test Workflow", func() {
Expect(wrObj.Status.Suspend).Should(BeTrue())
Expect(wrObj.Status.Phase).Should(BeEquivalentTo(v1alpha1.WorkflowStateSuspending))
Expect(wrObj.Status.Steps[0].Phase).Should(BeEquivalentTo(v1alpha1.WorkflowStepPhaseRunning))
Expect(wrObj.Status.Steps[0].Phase).Should(BeEquivalentTo(v1alpha1.WorkflowStepPhaseSuspending))
Expect(wrObj.Status.Steps[0].ID).ShouldNot(BeEquivalentTo(""))
// resume
wrObj.Status.Suspend = false
wrObj.Status.Steps[0].Phase = v1alpha1.WorkflowStepPhaseSucceeded
Expect(k8sClient.Status().Patch(ctx, wrObj, client.Merge)).Should(BeNil())
Expect(utils.ResumeWorkflow(ctx, k8sClient, wrObj, "")).Should(BeNil())
Expect(wrObj.Status.Suspend).Should(BeFalse())
tryReconcile(reconciler, wr.Name, wr.Namespace)
Expect(k8sClient.Get(ctx, client.ObjectKey{
Name: wr.Name,
Namespace: wr.Namespace,
}, wrObj)).Should(BeNil())
Expect(wrObj.Status.Suspend).Should(BeFalse())
Expect(wrObj.Status.Phase).Should(BeEquivalentTo(v1alpha1.WorkflowStateSucceeded))
})
It("test workflow suspend in sub steps", func() {
wr := wrTemplate.DeepCopy()
wr.Name = "test-wr-sub-suspend"
wr.Spec.WorkflowSpec.Steps = []v1alpha1.WorkflowStep{
{
WorkflowStepBase: v1alpha1.WorkflowStepBase{
Name: "group",
Type: "step-group",
},
SubSteps: []v1alpha1.WorkflowStepBase{
{
Name: "suspend",
Type: "suspend",
},
{
Name: "step1",
Type: "test-apply",
Properties: &runtime.RawExtension{Raw: []byte(`{"cmd":["sleep","1000"],"image":"busybox"}`)},
},
},
}}
Expect(k8sClient.Create(ctx, wr)).Should(BeNil())
tryReconcile(reconciler, wr.Name, wr.Namespace)
wrObj := &v1alpha1.WorkflowRun{}
Expect(k8sClient.Get(ctx, client.ObjectKey{
Name: wr.Name,
Namespace: wr.Namespace,
}, wrObj)).Should(BeNil())
Expect(wrObj.Status.Suspend).Should(BeTrue())
Expect(wrObj.Status.Phase).Should(BeEquivalentTo(v1alpha1.WorkflowStateSuspending))
Expect(wrObj.Status.Steps[0].SubStepsStatus[0].Phase).Should(BeEquivalentTo(v1alpha1.WorkflowStepPhaseSuspending))
Expect(wrObj.Status.Steps[0].SubStepsStatus[0].ID).ShouldNot(BeEquivalentTo(""))
// resume
Expect(utils.ResumeWorkflow(ctx, k8sClient, wrObj, "")).Should(BeNil())
Expect(wrObj.Status.Suspend).Should(BeFalse())
expDeployment := &appsv1.Deployment{}
step1Key := types.NamespacedName{Namespace: wr.Namespace, Name: "step1"}
Expect(k8sClient.Get(ctx, step1Key, expDeployment)).Should(BeNil())
expDeployment.Status.Replicas = 1
expDeployment.Status.ReadyReplicas = 1
expDeployment.Status.Conditions = []appsv1.DeploymentCondition{{
Message: "hello",
}}
Expect(k8sClient.Status().Update(ctx, expDeployment)).Should(BeNil())
tryReconcile(reconciler, wr.Name, wr.Namespace)
@ -258,11 +309,7 @@ var _ = Describe("Test Workflow", func() {
Expect(wrObj.Status.Phase).Should(BeEquivalentTo(v1alpha1.WorkflowStateSuspending))
// terminate the workflow
wrObj.Status.Terminated = true
wrObj.Status.Suspend = false
wrObj.Status.Steps[0].Phase = v1alpha1.WorkflowStepPhaseFailed
wrObj.Status.Steps[0].Reason = wfTypes.StatusReasonTerminate
Expect(k8sClient.Status().Patch(ctx, wrObj, client.Merge)).Should(BeNil())
Expect(utils.TerminateWorkflow(ctx, k8sClient, wrObj)).Should(BeNil())
tryReconcile(reconciler, wr.Name, wr.Namespace)
@ -288,7 +335,7 @@ var _ = Describe("Test Workflow", func() {
Outputs: v1alpha1.StepOutputs{
{
Name: "message",
ValueFrom: `"message: " +output.value.status.conditions[0].message`,
ValueFrom: `"message: " +output.$returns.value.status.conditions[0].message`,
},
},
},
@ -297,7 +344,7 @@ var _ = Describe("Test Workflow", func() {
WorkflowStepBase: v1alpha1.WorkflowStepBase{
Name: "step2",
Type: "test-apply",
Properties: &runtime.RawExtension{Raw: []byte(`{"cmd":["sleep","1000"],"image":"busybox"}`)},
Properties: &runtime.RawExtension{Raw: []byte(`{"cmd":["sleep","1000"],"image":"busybox","message":"test"}`)},
Inputs: v1alpha1.StepInputs{
{
From: "message",
@ -368,7 +415,7 @@ var _ = Describe("Test Workflow", func() {
Outputs: v1alpha1.StepOutputs{
{
Name: "message",
ValueFrom: `"message: " +output.value.status.conditions[0].message`,
ValueFrom: `"message: " +output.$returns.value.status.conditions[0].message`,
},
},
},
@ -486,6 +533,21 @@ var _ = Describe("Test Workflow", func() {
Properties: &runtime.RawExtension{Raw: []byte(`{"cmd":["sleep","1000"],"image":"busybox"}`)},
},
},
{
WorkflowStepBase: v1alpha1.WorkflowStepBase{
Name: "step3",
Type: "suspend",
If: "false",
},
},
{
WorkflowStepBase: v1alpha1.WorkflowStepBase{
Name: "step4",
Type: "test-apply",
Properties: &runtime.RawExtension{Raw: []byte(`{"cmd":["sleep","1000"],"image":"busybox"}`)},
DependsOn: []string{"step3"},
},
},
}
wr.Spec.Mode = &v1alpha1.WorkflowExecuteMode{
Steps: v1alpha1.WorkflowModeDAG,
@ -499,6 +561,7 @@ var _ = Describe("Test Workflow", func() {
expDeployment := &appsv1.Deployment{}
step1Key := types.NamespacedName{Namespace: wr.Namespace, Name: "step1"}
step2Key := types.NamespacedName{Namespace: wr.Namespace, Name: "step2"}
step4Key := types.NamespacedName{Namespace: wr.Namespace, Name: "step4"}
Expect(k8sClient.Get(ctx, step2Key, expDeployment)).Should(utils.NotFoundMatcher{})
checkRun := &v1alpha1.WorkflowRun{}
@ -519,13 +582,15 @@ var _ = Describe("Test Workflow", func() {
tryReconcile(reconciler, wr.Name, wr.Namespace)
Expect(k8sClient.Get(ctx, step4Key, expDeployment)).Should(utils.NotFoundMatcher{})
Expect(k8sClient.Get(ctx, wrKey, checkRun)).Should(BeNil())
Expect(checkRun.Status.Mode.Steps).Should(BeEquivalentTo(v1alpha1.WorkflowModeDAG))
Expect(checkRun.Status.Phase).Should(BeEquivalentTo(v1alpha1.WorkflowStateSucceeded))
})
It("test failed after retries in step mode with suspend on failure", func() {
defer featuregatetesting.SetFeatureGateDuringTest(&testing.T{}, utilfeature.DefaultFeatureGate, features.EnableSuspendOnFailure, true)()
featuregatetesting.SetFeatureGateDuringTest(GinkgoT(), utilfeature.DefaultFeatureGate, features.EnableSuspendOnFailure, true)
wr := wrTemplate.DeepCopy()
wr.Name = "wr-failed-after-retries"
wr.Spec.WorkflowSpec.Steps = []v1alpha1.WorkflowStep{
@ -578,9 +643,71 @@ var _ = Describe("Test Workflow", func() {
Expect(checkRun.Status.Steps[1].Reason).Should(BeEquivalentTo(wfTypes.StatusReasonFailedAfterRetries))
By("resume the suspended workflow run")
Expect(utils.ResumeWorkflow(ctx, k8sClient, checkRun, "")).Should(BeNil())
tryReconcile(reconciler, wr.Name, wr.Namespace)
Expect(k8sClient.Get(ctx, wrKey, checkRun)).Should(BeNil())
checkRun.Status.Suspend = false
Expect(k8sClient.Status().Patch(ctx, checkRun, client.Merge)).Should(BeNil())
Expect(checkRun.Status.Message).Should(BeEquivalentTo(""))
Expect(checkRun.Status.Phase).Should(BeEquivalentTo(v1alpha1.WorkflowStateExecuting))
Expect(checkRun.Status.Steps[1].Phase).Should(BeEquivalentTo(v1alpha1.WorkflowStepPhaseFailed))
})
It("test reconcile with patch status at once", func() {
featuregatetesting.SetFeatureGateDuringTest(GinkgoT(), utilfeature.DefaultFeatureGate, features.EnableSuspendOnFailure, true)
featuregatetesting.SetFeatureGateDuringTest(GinkgoT(), utilfeature.DefaultFeatureGate, features.EnablePatchStatusAtOnce, true)
wr := wrTemplate.DeepCopy()
wr.Name = "wr-failed-after-retries"
wr.Spec.WorkflowSpec.Steps = []v1alpha1.WorkflowStep{
{
WorkflowStepBase: v1alpha1.WorkflowStepBase{
Name: "step1",
Type: "test-apply",
Properties: &runtime.RawExtension{Raw: []byte(`{"cmd":["sleep","1000"],"image":"busybox"}`)},
},
},
{
WorkflowStepBase: v1alpha1.WorkflowStepBase{
Name: "step2-failed",
Type: "apply-object",
Properties: &runtime.RawExtension{Raw: []byte(`{"value":[{"apiVersion":"v1","kind":"invalid","metadata":{"name":"test1"}}]}`)},
},
},
}
Expect(k8sClient.Create(ctx, wr)).Should(BeNil())
wrKey := types.NamespacedName{Namespace: wr.Namespace, Name: wr.Name}
checkRun := &v1alpha1.WorkflowRun{}
Expect(k8sClient.Get(ctx, wrKey, checkRun)).Should(BeNil())
tryReconcile(reconciler, wr.Name, wr.Namespace)
expDeployment := &appsv1.Deployment{}
step1Key := types.NamespacedName{Namespace: wr.Namespace, Name: "step1"}
Expect(k8sClient.Get(ctx, step1Key, expDeployment)).Should(BeNil())
expDeployment.Status.Replicas = 1
expDeployment.Status.ReadyReplicas = 1
Expect(k8sClient.Status().Update(ctx, expDeployment)).Should(BeNil())
By("verify the first ten reconciles")
for i := 0; i < wfTypes.MaxWorkflowStepErrorRetryTimes; i++ {
tryReconcile(reconciler, wr.Name, wr.Namespace)
Expect(k8sClient.Get(ctx, wrKey, checkRun)).Should(BeNil())
Expect(checkRun.Status.Message).Should(BeEquivalentTo(""))
Expect(checkRun.Status.Phase).Should(BeEquivalentTo(v1alpha1.WorkflowStateExecuting))
Expect(checkRun.Status.Steps[1].Phase).Should(BeEquivalentTo(v1alpha1.WorkflowStepPhaseFailed))
}
By("workflowrun should be suspended after failed max reconciles")
tryReconcile(reconciler, wr.Name, wr.Namespace)
Expect(k8sClient.Get(ctx, wrKey, checkRun)).Should(BeNil())
Expect(checkRun.Status.Phase).Should(BeEquivalentTo(v1alpha1.WorkflowStateSuspending))
Expect(checkRun.Status.Message).Should(BeEquivalentTo(wfTypes.MessageSuspendFailedAfterRetries))
Expect(checkRun.Status.Steps[1].Phase).Should(BeEquivalentTo(v1alpha1.WorkflowStepPhaseFailed))
Expect(checkRun.Status.Steps[1].Reason).Should(BeEquivalentTo(wfTypes.StatusReasonFailedAfterRetries))
By("resume the suspended workflow run")
Expect(utils.ResumeWorkflow(ctx, k8sClient, checkRun, "")).Should(BeNil())
tryReconcile(reconciler, wr.Name, wr.Namespace)
Expect(k8sClient.Get(ctx, wrKey, checkRun)).Should(BeNil())
@ -590,7 +717,7 @@ var _ = Describe("Test Workflow", func() {
})
It("test failed after retries in dag mode with running step and suspend on failure", func() {
defer featuregatetesting.SetFeatureGateDuringTest(&testing.T{}, utilfeature.DefaultFeatureGate, features.EnableSuspendOnFailure, true)()
featuregatetesting.SetFeatureGateDuringTest(GinkgoT(), utilfeature.DefaultFeatureGate, features.EnableSuspendOnFailure, true)
wr := wrTemplate.DeepCopy()
wr.Name = "wr-failed-after-retries"
wr.Spec.WorkflowSpec.Steps = []v1alpha1.WorkflowStep{
@ -661,11 +788,19 @@ var _ = Describe("Test Workflow", func() {
checkRun := &v1alpha1.WorkflowRun{}
Expect(k8sClient.Get(ctx, wrKey, checkRun)).Should(BeNil())
for i := 0; i < wfTypes.MaxWorkflowStepErrorRetryTimes; i++ {
tryReconcile(reconciler, wr.Name, wr.Namespace)
Expect(k8sClient.Get(ctx, wrKey, checkRun)).Should(BeNil())
Expect(checkRun.Status.Message).Should(BeEquivalentTo(""))
Expect(checkRun.Status.Phase).Should(BeEquivalentTo(v1alpha1.WorkflowStateExecuting))
Expect(checkRun.Status.Steps[0].Phase).Should(BeEquivalentTo(v1alpha1.WorkflowStepPhaseFailed))
}
tryReconcile(reconciler, wr.Name, wr.Namespace)
Expect(k8sClient.Get(ctx, wrKey, checkRun)).Should(BeNil())
Expect(checkRun.Status.Phase).Should(BeEquivalentTo(v1alpha1.WorkflowStateFailed))
Expect(checkRun.Status.Steps[0].Phase).Should(BeEquivalentTo(v1alpha1.WorkflowStepPhaseFailed))
Expect(checkRun.Status.Steps[0].Reason).Should(BeEquivalentTo(wfTypes.StatusReasonRendering))
Expect(checkRun.Status.Steps[0].Reason).Should(BeEquivalentTo(wfTypes.StatusReasonFailedAfterRetries))
Expect(checkRun.Status.Steps[1].Phase).Should(BeEquivalentTo(v1alpha1.WorkflowStepPhaseSkipped))
})
@ -743,6 +878,80 @@ var _ = Describe("Test Workflow", func() {
}))
})
It("test workflow run with mode in step groups", func() {
wr := wrTemplate.DeepCopy()
wr.Name = "wr-group-mode"
wr.Spec.WorkflowSpec.Steps = []v1alpha1.WorkflowStep{
{
WorkflowStepBase: v1alpha1.WorkflowStepBase{
Name: "step1",
Type: "test-apply",
Properties: &runtime.RawExtension{Raw: []byte(`{"cmd":["sleep","1000"],"image":"busybox"}`)},
},
},
{
WorkflowStepBase: v1alpha1.WorkflowStepBase{
Name: "group",
Type: "step-group",
},
Mode: v1alpha1.WorkflowModeStep,
SubSteps: []v1alpha1.WorkflowStepBase{
{
Name: "step2",
Type: "test-apply",
Properties: &runtime.RawExtension{Raw: []byte(`{"cmd":["sleep","1000"],"image":"busybox"}`)},
},
{
Name: "step3",
Type: "test-apply",
Properties: &runtime.RawExtension{Raw: []byte(`{"cmd":["sleep","1000"],"image":"busybox"}`)},
},
},
},
}
wr.Spec.Mode = &v1alpha1.WorkflowExecuteMode{
Steps: v1alpha1.WorkflowModeDAG,
}
Expect(k8sClient.Create(context.Background(), wr)).Should(BeNil())
wrKey := types.NamespacedName{Namespace: wr.Namespace, Name: wr.Name}
tryReconcile(reconciler, wr.Name, wr.Namespace)
expDeployment := &appsv1.Deployment{}
step3Key := types.NamespacedName{Namespace: wr.Namespace, Name: "step3"}
Expect(k8sClient.Get(ctx, step3Key, expDeployment)).Should(utils.NotFoundMatcher{})
checkRun := &v1alpha1.WorkflowRun{}
Expect(k8sClient.Get(ctx, wrKey, checkRun)).Should(BeNil())
step1Key := types.NamespacedName{Namespace: wr.Namespace, Name: "step1"}
Expect(k8sClient.Get(ctx, step1Key, expDeployment)).Should(BeNil())
expDeployment.Status.Replicas = 1
expDeployment.Status.ReadyReplicas = 1
Expect(k8sClient.Status().Update(ctx, expDeployment)).Should(BeNil())
step2Key := types.NamespacedName{Namespace: wr.Namespace, Name: "step2"}
Expect(k8sClient.Get(ctx, step2Key, expDeployment)).Should(BeNil())
expDeployment.Status.Replicas = 1
expDeployment.Status.ReadyReplicas = 1
Expect(k8sClient.Status().Update(ctx, expDeployment)).Should(BeNil())
tryReconcile(reconciler, wr.Name, wr.Namespace)
Expect(k8sClient.Get(ctx, step3Key, expDeployment)).Should(BeNil())
expDeployment.Status.Replicas = 1
expDeployment.Status.ReadyReplicas = 1
Expect(k8sClient.Status().Update(ctx, expDeployment)).Should(BeNil())
tryReconcile(reconciler, wr.Name, wr.Namespace)
Expect(k8sClient.Get(ctx, wrKey, checkRun)).Should(BeNil())
Expect(checkRun.Status.Phase).Should(BeEquivalentTo(v1alpha1.WorkflowStateSucceeded))
Expect(checkRun.Status.Mode).Should(BeEquivalentTo(v1alpha1.WorkflowExecuteMode{
Steps: v1alpha1.WorkflowModeDAG,
SubSteps: v1alpha1.WorkflowModeDAG,
}))
})
It("test sub steps", func() {
wr := wrTemplate.DeepCopy()
wr.Name = "wr-substeps"
@ -1250,6 +1459,84 @@ var _ = Describe("Test Workflow", func() {
Expect(checkRun.Status.Phase).Should(BeEquivalentTo(v1alpha1.WorkflowStateFailed))
})
It("test suspend and deploy", func() {
wr := wrTemplate.DeepCopy()
wr.Name = "wr-suspend-and-deploy"
wr.Spec.WorkflowSpec.Steps = []v1alpha1.WorkflowStep{
{
WorkflowStepBase: v1alpha1.WorkflowStepBase{
Name: "step1",
Type: "suspend-and-deploy",
Properties: &runtime.RawExtension{Raw: []byte(`{"cmd":["sleep","1000"],"image":"busybox"}`)},
},
},
}
Expect(k8sClient.Create(context.Background(), wr)).Should(BeNil())
wrKey := types.NamespacedName{Namespace: wr.Namespace, Name: wr.Name}
tryReconcile(reconciler, wr.Name, wr.Namespace)
checkRun := &v1alpha1.WorkflowRun{}
Expect(k8sClient.Get(ctx, wrKey, checkRun)).Should(BeNil())
Expect(checkRun.Status.Phase).Should(BeEquivalentTo(v1alpha1.WorkflowStateSuspending))
expDeployment := &appsv1.Deployment{}
step1Key := types.NamespacedName{Namespace: wr.Namespace, Name: "step1"}
Expect(k8sClient.Get(ctx, step1Key, expDeployment)).Should(utils.NotFoundMatcher{})
time.Sleep(time.Second)
tryReconcile(reconciler, wr.Name, wr.Namespace)
Expect(k8sClient.Get(ctx, wrKey, checkRun)).Should(BeNil())
Expect(checkRun.Status.Phase).Should(BeEquivalentTo(v1alpha1.WorkflowStateExecuting))
Expect(k8sClient.Get(ctx, step1Key, expDeployment)).Should(BeNil())
expDeployment.Status.Replicas = 1
expDeployment.Status.ReadyReplicas = 1
Expect(k8sClient.Status().Update(ctx, expDeployment)).Should(BeNil())
tryReconcile(reconciler, wr.Name, wr.Namespace)
Expect(k8sClient.Get(ctx, wrKey, checkRun)).Should(BeNil())
Expect(checkRun.Status.Phase).Should(BeEquivalentTo(v1alpha1.WorkflowStateSucceeded))
})
It("test multiple suspend", func() {
wr := wrTemplate.DeepCopy()
wr.Name = "wr-multi-suspend"
wr.Spec.WorkflowSpec.Steps = []v1alpha1.WorkflowStep{
{
WorkflowStepBase: v1alpha1.WorkflowStepBase{
Name: "step1",
Type: "multi-suspend",
},
},
}
Expect(k8sClient.Create(context.Background(), wr)).Should(BeNil())
wrKey := types.NamespacedName{Namespace: wr.Namespace, Name: wr.Name}
tryReconcile(reconciler, wr.Name, wr.Namespace)
checkRun := &v1alpha1.WorkflowRun{}
Expect(k8sClient.Get(ctx, wrKey, checkRun)).Should(BeNil())
Expect(checkRun.Status.Phase).Should(BeEquivalentTo(v1alpha1.WorkflowStateSuspending))
Expect(utils.ResumeWorkflow(ctx, k8sClient, checkRun, "")).Should(BeNil())
Expect(checkRun.Status.Suspend).Should(BeFalse())
tryReconcile(reconciler, wr.Name, wr.Namespace)
Expect(k8sClient.Get(ctx, wrKey, checkRun)).Should(BeNil())
// suspended by the second suspend
Expect(checkRun.Status.Suspend).Should(BeTrue())
Expect(utils.ResumeWorkflow(ctx, k8sClient, checkRun, "")).Should(BeNil())
Expect(checkRun.Status.Suspend).Should(BeFalse())
tryReconcile(reconciler, wr.Name, wr.Namespace)
Expect(k8sClient.Get(ctx, wrKey, checkRun)).Should(BeNil())
Expect(checkRun.Status.Suspend).Should(BeFalse())
Expect(checkRun.Status.Phase).Should(BeEquivalentTo(v1alpha1.WorkflowStateSucceeded))
})
It("test timeout", func() {
wr := wrTemplate.DeepCopy()
wr.Name = "wr-timeout"
@ -1459,7 +1746,7 @@ var _ = Describe("Test Workflow", func() {
// terminate manually
checkRun := &v1alpha1.WorkflowRun{}
Expect(k8sClient.Get(ctx, wrKey, checkRun)).Should(BeNil())
terminateWorkflowRun(ctx, checkRun, 0)
Expect(utils.TerminateWorkflow(ctx, k8sClient, checkRun)).Should(BeNil())
tryReconcile(reconciler, wr.Name, wr.Namespace)
@ -1530,14 +1817,109 @@ var _ = Describe("Test Workflow", func() {
By("Check debug Config Map is created")
debugCM := &corev1.ConfigMap{}
Expect(k8sClient.Get(ctx, types.NamespacedName{
Name: debug.GenerateContextName(wr.Name, "step1", string(curRun.UID)),
Name: debug.GenerateContextName(wr.Name, curRun.Status.Steps[0].ID, string(curRun.UID)),
Namespace: wr.Namespace,
}, debugCM)).Should(BeNil())
Expect(k8sClient.Get(ctx, types.NamespacedName{
Name: debug.GenerateContextName(wr.Name, "step2-sub", string(curRun.UID)),
Name: debug.GenerateContextName(wr.Name, curRun.Status.Steps[1].SubStepsStatus[0].ID, string(curRun.UID)),
Namespace: wr.Namespace,
}, debugCM)).Should(BeNil())
})
It("test step context data", func() {
wr := wrTemplate.DeepCopy()
wr.Name = "test-step-context-data"
wr.Spec.WorkflowSpec.Steps = []v1alpha1.WorkflowStep{{
WorkflowStepBase: v1alpha1.WorkflowStepBase{
Name: "group1",
Type: "step-group",
},
SubSteps: []v1alpha1.WorkflowStepBase{
{
Name: "step1",
Type: "save-process-context",
Properties: &runtime.RawExtension{Raw: []byte(`{"name":"process-context-step1"}`)},
},
{
Name: "step2",
Type: "save-process-context",
Properties: &runtime.RawExtension{Raw: []byte(`{"name":"process-context-step2"}`)},
},
},
}, {
WorkflowStepBase: v1alpha1.WorkflowStepBase{
Name: "group2",
Type: "step-group",
},
SubSteps: []v1alpha1.WorkflowStepBase{
{
Name: "step3",
Type: "save-process-context",
Properties: &runtime.RawExtension{Raw: []byte(`{"name":"process-context-step3"}`)},
},
{
Name: "step4",
Type: "save-process-context",
Properties: &runtime.RawExtension{Raw: []byte(`{"name":"process-context-step4"}`)},
},
},
}, {
WorkflowStepBase: v1alpha1.WorkflowStepBase{
Name: "step5",
Type: "save-process-context",
Properties: &runtime.RawExtension{Raw: []byte(`{"name":"process-context-step5"}`)},
},
}}
Expect(k8sClient.Create(ctx, wr)).Should(BeNil())
tryReconcile(reconciler, wr.Name, wr.Namespace)
wrObj := &v1alpha1.WorkflowRun{}
Expect(k8sClient.Get(ctx, client.ObjectKey{
Name: wr.Name,
Namespace: wr.Namespace,
}, wrObj)).Should(BeNil())
cmList := new(corev1.ConfigMapList)
labels := &metav1.LabelSelector{
MatchLabels: map[string]string{
"process.context.data": "true",
},
}
selector, err := metav1.LabelSelectorAsSelector(labels)
Expect(err).Should(BeNil())
Expect(k8sClient.List(ctx, cmList, &client.ListOptions{
LabelSelector: selector,
})).Should(BeNil())
processCtxMap := make(map[string]map[string]string)
for _, cm := range cmList.Items {
processCtxMap[cm.Name] = cm.Data
}
step1Ctx := processCtxMap["process-context-step1"]
step2Ctx := processCtxMap["process-context-step2"]
step3Ctx := processCtxMap["process-context-step3"]
step4Ctx := processCtxMap["process-context-step4"]
step5Ctx := processCtxMap["process-context-step5"]
By("check context.stepName")
Expect(step1Ctx["stepName"]).Should(Equal("step1"))
Expect(step2Ctx["stepName"]).Should(Equal("step2"))
Expect(step3Ctx["stepName"]).Should(Equal("step3"))
Expect(step4Ctx["stepName"]).Should(Equal("step4"))
Expect(step5Ctx["stepName"]).Should(Equal("step5"))
By("check context.stepGroupName")
Expect(step1Ctx["stepGroupName"]).Should(Equal("group1"))
Expect(step2Ctx["stepGroupName"]).Should(Equal("group1"))
Expect(step3Ctx["stepGroupName"]).Should(Equal("group2"))
Expect(step4Ctx["stepGroupName"]).Should(Equal("group2"))
Expect(step5Ctx["stepGroupName"]).Should(Equal(""))
By("check context.spanID")
spanID := strings.Split(step1Ctx["spanID"], ".")[0]
for _, pCtx := range processCtxMap {
Expect(pCtx["spanID"]).Should(ContainSubstring(spanID))
}
})
})
func reconcileWithReturn(r *WorkflowRunReconciler, name, ns string) error {
@ -1576,11 +1958,3 @@ func setupTestDefinitions(ctx context.Context, defs []string, namespace string)
})).Should(SatisfyAny(BeNil(), &utils.AlreadyExistMatcher{}))
}
}
func terminateWorkflowRun(ctx context.Context, run *v1alpha1.WorkflowRun, index int) {
run.Status.Suspend = false
run.Status.Terminated = true
run.Status.Steps[index].Phase = v1alpha1.WorkflowStepPhaseFailed
run.Status.Steps[index].Reason = wfTypes.StatusReasonTerminate
Expect(k8sClient.Status().Update(ctx, run)).Should(BeNil())
}

View File

@ -27,21 +27,27 @@ import (
kerrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
k8stypes "k8s.io/apimachinery/pkg/types"
"k8s.io/apiserver/pkg/util/feature"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/controller"
ctrlEvent "sigs.k8s.io/controller-runtime/pkg/event"
ctrlHandler "sigs.k8s.io/controller-runtime/pkg/handler"
"sigs.k8s.io/controller-runtime/pkg/predicate"
"sigs.k8s.io/controller-runtime/pkg/reconcile"
triggerv1alpha1 "github.com/kubevela/kube-trigger/api/v1alpha1"
monitorContext "github.com/kubevela/pkg/monitor/context"
"github.com/kubevela/workflow/api/condition"
"github.com/kubevela/workflow/api/v1alpha1"
wfContext "github.com/kubevela/workflow/pkg/context"
"github.com/kubevela/workflow/pkg/cue/packages"
"github.com/kubevela/workflow/pkg/executor"
"github.com/kubevela/workflow/pkg/features"
"github.com/kubevela/workflow/pkg/generator"
"github.com/kubevela/workflow/pkg/monitor/metrics"
providertypes "github.com/kubevela/workflow/pkg/providers/types"
"github.com/kubevela/workflow/pkg/types"
)
@ -49,17 +55,24 @@ import (
type Args struct {
// ConcurrentReconciles is the concurrent reconcile number of the controller
ConcurrentReconciles int
// IgnoreWorkflowWithoutControllerRequirement indicates that workflow controller will not process the workflowrun without 'workflowrun.oam.dev/controller-version-require' annotation.
IgnoreWorkflowWithoutControllerRequirement bool
}
// WorkflowRunReconciler reconciles a WorkflowRun object
type WorkflowRunReconciler struct {
client.Client
Scheme *runtime.Scheme
PackageDiscover *packages.PackageDiscover
Recorder event.Recorder
Scheme *runtime.Scheme
Recorder event.Recorder
ControllerVersion string
Args
}
type workflowRunPatcher struct {
client.Client
run *v1alpha1.WorkflowRun
}
var (
// ReconcileTimeout timeout for controller to reconcile
ReconcileTimeout = time.Minute * 3
@ -74,6 +87,10 @@ func (r *WorkflowRunReconciler) Reconcile(ctx context.Context, req ctrl.Request)
defer cancel()
ctx = types.SetNamespaceInCtx(ctx, req.Namespace)
ctx = providertypes.WithLabelParams(ctx, map[string]string{
types.LabelWorkflowRunName: req.Name,
types.LabelWorkflowRunNamespace: req.Namespace,
})
logCtx := monitorContext.NewTraceContext(ctx, "").AddTag("workflowrun", req.String())
logCtx.Info("Start reconcile workflowrun")
@ -90,6 +107,11 @@ func (r *WorkflowRunReconciler) Reconcile(ctx context.Context, req ctrl.Request)
return ctrl.Result{}, client.IgnoreNotFound(err)
}
if !r.matchControllerRequirement(run) {
logCtx.Info("skip workflowrun: not match the controller requirement of workflowrun")
return ctrl.Result{}, nil
}
timeReporter := timeReconcile(run)
defer timeReporter()
@ -107,10 +129,7 @@ func (r *WorkflowRunReconciler) Reconcile(ctx context.Context, req ctrl.Request)
}
isUpdate := instance.Status.Message != ""
runners, err := generator.GenerateRunners(logCtx, instance, types.StepGeneratorOptions{
PackageDiscover: r.PackageDiscover,
Client: r.Client,
})
runners, err := generator.GenerateRunners(logCtx, instance, types.StepGeneratorOptions{})
if err != nil {
logCtx.Error(err, "[generate runners]")
r.Recorder.Event(run, event.Warning(v1alpha1.ReasonGenerate, errors.WithMessage(err, v1alpha1.MessageFailedGenerate)))
@ -118,7 +137,11 @@ func (r *WorkflowRunReconciler) Reconcile(ctx context.Context, req ctrl.Request)
return r.endWithNegativeCondition(logCtx, run, condition.ErrorCondition(v1alpha1.WorkflowRunConditionType, err))
}
executor := executor.New(instance, r.Client)
patcher := &workflowRunPatcher{
Client: r.Client,
run: run,
}
executor := executor.New(instance, executor.WithStatusPatcher(patcher.patchStatus))
state, err := executor.ExecuteRunners(logCtx, runners)
if err != nil {
logCtx.Error(err, "[execute runners]")
@ -133,39 +156,55 @@ func (r *WorkflowRunReconciler) Reconcile(ctx context.Context, req ctrl.Request)
case v1alpha1.WorkflowStateSuspending:
logCtx.Info("Workflow return state=Suspend")
if duration := executor.GetSuspendBackoffWaitTime(); duration > 0 {
return ctrl.Result{RequeueAfter: duration}, r.patchStatus(logCtx, run, isUpdate)
return ctrl.Result{RequeueAfter: duration}, patcher.patchStatus(logCtx, &run.Status, isUpdate)
}
return ctrl.Result{}, r.patchStatus(logCtx, run, isUpdate)
return ctrl.Result{}, patcher.patchStatus(logCtx, &run.Status, isUpdate)
case v1alpha1.WorkflowStateFailed:
logCtx.Info("Workflow return state=Failed")
r.doWorkflowFinish(run)
r.Recorder.Event(run, event.Normal(v1alpha1.ReasonExecute, v1alpha1.MessageFailed))
return ctrl.Result{}, r.patchStatus(logCtx, run, isUpdate)
return ctrl.Result{}, patcher.patchStatus(logCtx, &run.Status, isUpdate)
case v1alpha1.WorkflowStateTerminated:
logCtx.Info("Workflow return state=Terminated")
r.doWorkflowFinish(run)
r.Recorder.Event(run, event.Normal(v1alpha1.ReasonExecute, v1alpha1.MessageTerminated))
return ctrl.Result{}, r.patchStatus(logCtx, run, isUpdate)
return ctrl.Result{}, patcher.patchStatus(logCtx, &run.Status, isUpdate)
case v1alpha1.WorkflowStateExecuting:
logCtx.Info("Workflow return state=Executing")
return ctrl.Result{RequeueAfter: executor.GetBackoffWaitTime()}, r.patchStatus(logCtx, run, isUpdate)
return ctrl.Result{RequeueAfter: executor.GetBackoffWaitTime()}, patcher.patchStatus(logCtx, &run.Status, isUpdate)
case v1alpha1.WorkflowStateSucceeded:
logCtx.Info("Workflow return state=Succeeded")
r.doWorkflowFinish(run)
run.Status.SetConditions(condition.ReadyCondition(v1alpha1.WorkflowRunConditionType))
r.Recorder.Event(run, event.Normal(v1alpha1.ReasonExecute, v1alpha1.MessageSuccessfully))
return ctrl.Result{}, r.patchStatus(logCtx, run, isUpdate)
return ctrl.Result{}, patcher.patchStatus(logCtx, &run.Status, isUpdate)
case v1alpha1.WorkflowStateSkipped:
logCtx.Info("Skip this reconcile")
return ctrl.Result{}, nil
return ctrl.Result{RequeueAfter: executor.GetBackoffWaitTime()}, nil
}
return ctrl.Result{}, nil
}
func (r *WorkflowRunReconciler) matchControllerRequirement(wr *v1alpha1.WorkflowRun) bool {
if wr.Annotations != nil {
if requireVersion, ok := wr.Annotations[types.AnnotationControllerRequirement]; ok {
return requireVersion == r.ControllerVersion
}
}
if r.IgnoreWorkflowWithoutControllerRequirement {
return false
}
return true
}
// SetupWithManager sets up the controller with the Manager.
func (r *WorkflowRunReconciler) SetupWithManager(mgr ctrl.Manager) error {
return ctrl.NewControllerManagedBy(mgr).
builder := ctrl.NewControllerManagedBy(mgr)
if feature.DefaultMutableFeatureGate.Enabled(features.EnableWatchEventListener) {
builder = builder.Watches(&triggerv1alpha1.EventListener{}, ctrlHandler.EnqueueRequestsFromMapFunc(findObjectForEventListener))
}
return builder.
WithOptions(controller.Options{
MaxConcurrentReconciles: r.ConcurrentReconciles,
}).
@ -173,32 +212,37 @@ func (r *WorkflowRunReconciler) SetupWithManager(mgr ctrl.Manager) error {
// filter the changes in workflow status
// let workflow handle its reconcile
UpdateFunc: func(e ctrlEvent.UpdateEvent) bool {
new := e.ObjectNew.DeepCopyObject().(*v1alpha1.WorkflowRun)
old := e.ObjectOld.DeepCopyObject().(*v1alpha1.WorkflowRun)
newObj, isNewWR := e.ObjectNew.DeepCopyObject().(*v1alpha1.WorkflowRun)
oldObj, isOldWR := e.ObjectOld.DeepCopyObject().(*v1alpha1.WorkflowRun)
// if the object is an event listener, reconcile the controller
if !isNewWR || !isOldWR {
return true
}
// if the workflow is finished, skip the reconcile
if new.Status.Finished {
if newObj.Status.Finished {
return false
}
// filter managedFields changes
old.ManagedFields = nil
new.ManagedFields = nil
oldObj.ManagedFields = nil
newObj.ManagedFields = nil
// filter resourceVersion changes
old.ResourceVersion = new.ResourceVersion
oldObj.ResourceVersion = newObj.ResourceVersion
// if the generation is changed, return true to let the controller handle it
if old.Generation != new.Generation {
if oldObj.Generation != newObj.Generation {
return true
}
// ignore the changes in step status
old.Status.Steps = new.Status.Steps
oldObj.Status.Steps = newObj.Status.Steps
return !reflect.DeepEqual(old, new)
return !reflect.DeepEqual(oldObj, newObj)
},
CreateFunc: func(e ctrlEvent.CreateEvent) bool {
CreateFunc: func(e ctrlEvent.CreateEvent) bool { //nolint:revive,unused
return true
},
}).
@ -208,13 +252,16 @@ func (r *WorkflowRunReconciler) SetupWithManager(mgr ctrl.Manager) error {
func (r *WorkflowRunReconciler) endWithNegativeCondition(ctx context.Context, wr *v1alpha1.WorkflowRun, condition condition.Condition) (ctrl.Result, error) {
wr.SetConditions(condition)
if err := r.patchStatus(ctx, wr, false); err != nil {
if err := r.Status().Patch(ctx, wr, client.Merge); err != nil {
executor.StepStatusCache.Store(fmt.Sprintf("%s-%s", wr.Name, wr.Namespace), -1)
return ctrl.Result{}, errors.WithMessage(err, "failed to patch workflowrun status")
}
return ctrl.Result{}, fmt.Errorf("reconcile WorkflowRun error, msg: %s", condition.Message)
}
func (r *WorkflowRunReconciler) patchStatus(ctx context.Context, wr *v1alpha1.WorkflowRun, isUpdate bool) error {
func (r *workflowRunPatcher) patchStatus(ctx context.Context, status *v1alpha1.WorkflowRunStatus, isUpdate bool) error {
r.run.Status = *status
wr := r.run
if isUpdate {
if err := r.Status().Update(ctx, wr); err != nil {
executor.StepStatusCache.Store(fmt.Sprintf("%s-%s", wr.Name, wr.Namespace), -1)
@ -245,3 +292,9 @@ func timeReconcile(wr *v1alpha1.WorkflowRun) func() {
metrics.WorkflowRunReconcileTimeHistogram.WithLabelValues(beginPhase, string(wr.Status.Phase)).Observe(v)
}
}
func findObjectForEventListener(_ context.Context, object client.Object) []reconcile.Request {
return []reconcile.Request{{
NamespacedName: k8stypes.NamespacedName{Name: object.GetName(), Namespace: object.GetNamespace()},
}}
}

View File

@ -21,8 +21,7 @@ import (
"os"
"time"
"github.com/kubevela/workflow/api/v1alpha1"
. "github.com/onsi/ginkgo"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
@ -30,7 +29,8 @@ import (
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/yaml"
"github.com/oam-dev/kubevela/pkg/oam/util"
"github.com/kubevela/workflow/api/v1alpha1"
"github.com/kubevela/workflow/pkg/utils"
)
var _ = Describe("Test the workflow run with the built-in definitions", func() {
@ -45,7 +45,7 @@ var _ = Describe("Test the workflow run with the built-in definitions", func() {
Eventually(func() error {
return k8sClient.Create(ctx, &ns)
}, time.Second*3, time.Microsecond*300).Should(SatisfyAny(BeNil(), &util.AlreadyExistMatcher{}))
}, time.Second*3, time.Microsecond*300).Should(SatisfyAny(BeNil(), &utils.AlreadyExistMatcher{}))
})
It("Test the workflow with config definition", func() {

View File

@ -23,7 +23,7 @@ import (
"strings"
"testing"
. "github.com/onsi/ginkgo"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
"k8s.io/apimachinery/pkg/runtime"
utilruntime "k8s.io/apimachinery/pkg/util/runtime"
@ -31,6 +31,7 @@ import (
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/client/config"
"github.com/kubevela/pkg/util/singleton"
"github.com/kubevela/pkg/util/test/definition"
"github.com/kubevela/workflow/api/v1alpha1"
@ -58,9 +59,11 @@ var k8sClient client.Client
var _ = BeforeSuite(func() {
conf, err := config.GetConfig()
Expect(err).Should(BeNil())
singleton.KubeConfig.Set(conf)
k8sClient, err = client.New(conf, client.Options{Scheme: scheme})
Expect(err).Should(BeNil())
singleton.KubeClient.Set(k8sClient)
prepareWorkflowDefinitions()
})

102
examples/initialize-env.md Normal file
View File

@ -0,0 +1,102 @@
# Automatically initialize the environment with terraform
You can use Workflow together with Terraform to initialize your environment automatically.
> Note: please make sure that you have enabled the KubeVela Terraform Addon first:
> ```bash
> vela addon enable terraform
> # supported: terraform-alibaba/terraform-aws/terraform-azure/terraform-baidu/terraform-ec/terraform-gcp/terraform-tencent/terraform-ucloud
> vela addon enable terraform-<cloud name>
> ```
For example, you can use a cloud provider to create a cluster first, then add this cluster to the management of the KubeVela workflow, and after that deploy a ConfigMap in the newly created cluster to initialize the environment. Let's take an Alibaba Cloud Kubernetes (ACK) cluster as an example:
Apply the following YAML:
```yaml
apiVersion: core.oam.dev/v1alpha1
kind: WorkflowRun
metadata:
name: apply-terraform-resource
namespace: default
spec:
workflowSpec:
steps:
# initialize the terraform provider with credential first
- name: provider
type: apply-terraform-provider
properties:
type: alibaba
name: my-alibaba-provider
accessKey: <accessKey>
secretKey: <secretKey>
region: cn-hangzhou
# create an ACK cluster with terraform
- name: configuration
type: apply-terraform-config
properties:
source:
# you can choose to use remote tf or specify the hcl directly
# hcl: <your hcl>
path: alibaba/cs/dedicated-kubernetes
remote: https://github.com/FogDong/terraform-modules
providerRef:
name: my-alibaba-provider
writeConnectionSecretToRef:
name: my-terraform-secret
namespace: vela-system
variable:
name: regular-check-ack
new_nat_gateway: true
vpc_name: "tf-k8s-vpc-regular-check"
vpc_cidr: "10.0.0.0/8"
vswitch_name_prefix: "tf-k8s-vsw-regualr-check"
vswitch_cidrs: [ "10.1.0.0/16", "10.2.0.0/16", "10.3.0.0/16" ]
k8s_name_prefix: "tf-k8s-regular-check"
k8s_version: 1.24.6-aliyun.1
k8s_pod_cidr: "192.168.5.0/24"
k8s_service_cidr: "192.168.2.0/24"
k8s_worker_number: 2
cpu_core_count: 4
memory_size: 8
tags:
created_by: "Terraform-of-KubeVela"
created_from: "module-tf-alicloud-ecs-instance"
# add the newly created cluster to the management of kubevela workflow with vela cli
- name: add-cluster
type: vela-cli
properties:
storage:
secret:
- name: secret-mount
secretName: my-terraform-secret
mountPath: /kubeconfig/ack
command:
- vela
- cluster
- join
- /kubeconfig/ack/KUBECONFIG
- --name=ack
# clean the execution job
- name: clean-cli-jobs
type: clean-jobs
if: always
properties:
labelSelector:
"workflow.oam.dev/step-name": apply-terraform-resource-add-cluster
# apply the configmap in the created cluster
- name: distribute-config
type: apply-object
properties:
cluster: ack
value:
apiVersion: v1
kind: ConfigMap
metadata:
name: my-cm
namespace: default
data:
test-key: test-value
```
In this workflow, the first step creates a Terraform provider with your credentials, and the second step creates the Terraform resource. A Job is then launched to run the vela CLI command that joins the newly created cluster to Vela's management, and that Job is cleaned up once it finishes. Finally, a ConfigMap is created in the new cluster.
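After applying the WorkflowRun, you can follow its progress from the control plane. A minimal check might look like the following sketch (assuming kubectl access to the control plane cluster; the resource name `apply-terraform-resource` and the cluster name `ack` come from the example above):
```bash
# Watch the WorkflowRun until it reaches the succeeded phase
kubectl get workflowrun apply-terraform-resource -n default -w

# Confirm that the new cluster has been joined under the name "ack"
vela cluster list
```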

View File

@ -0,0 +1,169 @@
# Kubernetes Job Orchestration
KubeVela provides a simple way to orchestrate Jobs in Kubernetes. With the built-in [`apply-job`](../charts/vela-core/templates/definitions/apply-job.yaml) workflow step, you can easily create a workflow that runs a series of Jobs in order.
## Run Jobs in Order
For example, the following is a workflow that runs three Jobs: the first two are in a group that runs in parallel, and the third one runs after the first two are done.
```yaml
apiVersion: core.oam.dev/v1alpha1
kind: WorkflowRun
metadata:
name: batch-jobs
namespace: default
spec:
mode:
steps: StepByStep
subSteps: DAG
workflowSpec:
steps:
- name: group
type: step-group
subSteps:
- name: job1
type: apply-job
properties:
value:
metadata:
name: job1
namespace: default
spec:
completions: 2
parallelism: 1
template:
spec:
containers:
- command:
- echo
- hello world
image: bash
name: mytask
- name: job2
type: apply-job
properties:
...
- name: job3
if: steps.job2.status.succeeded > 1
type: apply-job
properties:
...
```
### Inside `apply-job`
The `apply-job` workflow step is a wrapper around a Kubernetes Job. It takes the same parameters as a Kubernetes Job and creates the Job in the cluster. The workflow step waits until the Job is done before moving on to the next step; if the Job fails, the workflow is marked as failed.
The `apply-job` type is written in CUE, and the definition is as follows:
```cue
import (
"vela/op"
)
"apply-job": {
type: "workflow-step"
annotations: {
"category": "Resource Management"
}
labels: {}
description: "Apply job"
}
template: {
// apply the job
apply: op.#Apply & {
value: parameter.value
cluster: parameter.cluster
}
// fail the step if the job fails
if apply.status.failed > 0 {
fail: op.#Fail & {
message: "Job failed"
}
}
// wait the job to be ready
wait: op.#ConditionalWait & {
continue: apply.status.succeeded == apply.spec.completions
}
parameter: {
// +usage=Specify Kubernetes job object to be applied
value: {
apiVersion: "batch/v1"
kind: "Job"
...
}
// +usage=The cluster you want to apply the resource to, default is the current control plane cluster
cluster: *"" | string
}
}
```
## Customize the Job for Machine Learning Training
If you want to tailor the Job for training with a specific machine learning framework, you can write your own workflow step. For example, to train a model with PyTorch, you can create a workflow step that creates a PyTorch Job.
```cue
import (
"vela/op"
)
"apply-pytorch-job": {
type: "workflow-step"
annotations: {
"category": "Resource Management"
}
labels: {}
description: "Apply pytorch job"
}
template: {
// customize the job with pytorch config
job: {
parameter.value
spec: template: spec: {
containers: [{
// pytorch config
env: [{...}]
}]
}
}
// apply the job
apply: op.#Apply & {
value: job
cluster: parameter.cluster
}
// create the service for pytorch job
service: op.#Apply & {...}
// fail the step if the job fails
if apply.status.failed > 0 {
fail: op.#Fail & {
message: "Job failed"
}
}
// wait the job to be ready
wait: op.#ConditionalWait & {
continue: apply.status.succeeded == apply.spec.completions
}
parameter: {
// +usage=Specify Kubernetes job object to be applied
value: {
apiVersion: "batch/v1"
kind: "Job"
...
}
// +usage=The cluster you want to apply the resource to, default is the current control plane cluster
cluster: *"" | string
service: *true | bool
config: {
...
}
}
}
```
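To try out such a customized step, register the definition and then reference it from a WorkflowRun step. The sketch below assumes the definition above is saved locally as `apply-pytorch-job.cue`; the step name and its parameters are illustrative, not a published addon:
```bash
# Register the customized step definition with the vela CLI
vela def apply -f apply-pytorch-job.cue

# A WorkflowRun step can then reference it with `type: apply-pytorch-job`,
# passing the Job manifest under `properties.value` just like the plain apply-job step.
```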

View File

@ -1,9 +1,5 @@
# Control the delivery process of multiple applications
> Note: The example uses following definitions, please use `vela def apply -f <filename>` to install them first.
> - [Definition `read-app`](https://github.com/kubevela/catalog/blob/master/addons/vela-workflow/definitions/read-app.cue)
> - [Definition `apply-app`](https://github.com/kubevela/catalog/blob/master/addons/vela-workflow/definitions/apply-app.cue)
Apply the following workflow to control the delivery process of multiple applications:
```yaml

View File

@ -1,9 +1,5 @@
# Use Workflow for request and notify
> Note: The example uses following definitions, please use `vela def apply -f <filename>` to install them first.
> - [Definition `request`](https://github.com/kubevela/catalog/blob/master/addons/vela-workflow/definitions/request.cue)
> - [Definition `notification`](https://github.com/kubevela/kubevela/blob/master/vela-templates/definitions/internal/workflowstep/notification.cue)
Apply the following workflow to request a specified URL first and then use the response as a message to your Slack channel.
```yaml

View File

@ -1,8 +1,5 @@
# Run your workflow with template
> Note: The example uses following definitions, please use `vela def apply -f <filename>` to install them first.
> - [Definition `apply-deployment`](https://github.com/kubevela/catalog/blob/master/addons/vela-workflow/definitions/apply-deployment.cue)
You can also create a Workflow Template and run it with a WorkflowRun using a different context.
Apply the following Workflow Template first:

View File

@ -0,0 +1,77 @@
apiVersion: core.oam.dev/v1alpha1
kind: WorkflowRun
metadata:
name: apply-terraform-resource
namespace: default
spec:
workflowSpec:
steps:
- name: provider
type: apply-terraform-provider
properties:
type: alibaba
name: my-alibaba-provider
accessKey: <accessKey>
secretKey: <secretKey>
region: cn-hangzhou
- name: configuration
type: apply-terraform-config
properties:
source:
path: alibaba/cs/dedicated-kubernetes
remote: https://github.com/FogDong/terraform-modules
providerRef:
name: my-alibaba-provider
writeConnectionSecretToRef:
name: my-terraform-secret
namespace: vela-system
variable:
name: regular-check-ack
new_nat_gateway: true
vpc_name: "tf-k8s-vpc-regular-check"
vpc_cidr: "10.0.0.0/8"
vswitch_name_prefix: "tf-k8s-vsw-regualr-check"
vswitch_cidrs: [ "10.1.0.0/16", "10.2.0.0/16", "10.3.0.0/16" ]
k8s_name_prefix: "tf-k8s-regular-check"
k8s_version: 1.24.6-aliyun.1
k8s_pod_cidr: "192.168.5.0/24"
k8s_service_cidr: "192.168.2.0/24"
k8s_worker_number: 2
cpu_core_count: 4
memory_size: 8
tags:
created_by: "Terraform-of-KubeVela"
created_from: "module-tf-alicloud-ecs-instance"
- name: add-cluster
type: vela-cli
if: always
properties:
storage:
secret:
- name: secret-mount
secretName: my-terraform-secret
mountPath: /kubeconfig/ack
command:
- vela
- cluster
- join
- /kubeconfig/ack/KUBECONFIG
- --name=ack
- name: clean-cli-jobs
type: clean-jobs
properties:
namespace: vela-system
labelSelector:
"workflow.oam.dev/step-name": apply-terraform-resource-add-cluster
- name: distribute-config
type: apply-object
properties:
cluster: ack
value:
apiVersion: v1
kind: ConfigMap
metadata:
name: my-cm
namespace: default
data:
test-key: test-value

View File

@ -0,0 +1,62 @@
apiVersion: core.oam.dev/v1alpha1
kind: WorkflowRun
metadata:
name: build-push-image
namespace: default
spec:
workflowSpec:
steps:
# or use kubectl create secret generic git-token --from-literal='GIT_TOKEN=<your-token>'
- name: create-git-secret
type: export2secret
properties:
secretName: git-secret
data:
token: <git token>
# or use kubectl create secret docker-registry docker-regcred \
# --docker-server=https://index.docker.io/v1/ \
# --docker-username=<your-username> \
# --docker-password=<your-password>
- name: create-image-secret
type: export2secret
properties:
secretName: image-secret
kind: docker-registry
dockerRegistry:
username: <docker username>
password: <docker password>
- name: build-push
type: build-push-image
properties:
# use your own kaniko executor image like below; if not set, the default image oamdev/kaniko-executor:v1.9.1 will be used
# kanikoExecutor: gcr.io/kaniko-project/executor:latest
# you can use a context with git and branch, or directly specify the context; please refer to https://github.com/GoogleContainerTools/kaniko#kaniko-build-contexts
context:
git: github.com/FogDong/simple-web-demo
branch: main
image: fogdong/simple-web-demo:v1
# specify your dockerfile; if not set, the default ./Dockerfile will be used
# dockerfile: ./Dockerfile
credentials:
image:
name: image-secret
# buildArgs:
# - key="value"
# platform: linux/arm
- name: apply-app
type: apply-app
properties:
data:
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: my-app
spec:
components:
- name: my-web
type: webservice
properties:
image: fogdong/simple-web-demo:v1
ports:
- port: 80
expose: true

View File

@ -0,0 +1,49 @@
apiVersion: core.oam.dev/v1alpha1
kind: WorkflowRun
metadata:
name: chat-gpt
namespace: default
spec:
workflowSpec:
steps:
# apply a deployment with an invalid image; this step will fail because of the timeout
# the resource will be passed to the chat-gpt step to analyze
- name: apply
type: apply-deployment
timeout: 3s
outputs:
- name: resource
valueFrom: output.value
properties:
image: invalid
# if apply step failed, send the resource to chat-gpt to diagnose
- name: chat-diagnose
if: status.apply.failed
type: chat-gpt
inputs:
- from: resource
parameterKey: prompt.content
properties:
token:
# specify your token
value: <your token>
prompt:
type: diagnose
# if apply step succeeded, send the resource to chat-gpt to audit
- name: chat-audit
if: status.apply.succeeded
type: chat-gpt
inputs:
- from: resource
parameterKey: prompt.content
properties:
token:
# or read your token from secret
secretRef:
name: chat-gpt-token-secret
key: token
prompt:
type: audit
lang: Chinese

305
go.mod
View File

@ -1,232 +1,144 @@
module github.com/kubevela/workflow
go 1.19
go 1.23.8
require (
cuelang.org/go v0.5.0-alpha.1
cuelang.org/go v0.9.2
github.com/agiledragon/gomonkey/v2 v2.4.0
github.com/aliyun/aliyun-log-go-sdk v0.1.38
github.com/crossplane/crossplane-runtime v0.14.1-0.20210722005935-0b469fcc77cd
github.com/cue-exp/kubevelafix v0.0.0-20220922150317-aead819d979d
github.com/evanphx/json-patch v4.12.0+incompatible
github.com/crossplane/crossplane-runtime v1.16.0
github.com/evanphx/json-patch v5.6.0+incompatible
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da
github.com/google/go-cmp v0.5.8
github.com/hashicorp/go-version v1.3.0
github.com/kubevela/pkg v0.0.0-20221017134311-26e5042d4503
github.com/oam-dev/kubevela v1.6.0-alpha.4.0.20221018114727-ab4348ed67d0
github.com/onsi/ginkgo v1.16.5
github.com/onsi/gomega v1.20.2
github.com/google/go-cmp v0.6.0
github.com/hashicorp/go-version v1.6.0
github.com/kubevela/kube-trigger v0.1.1-0.20230403060228-6582e7595db6
github.com/kubevela/pkg v1.9.3-0.20241203070234-2cf98778c0a9
github.com/onsi/ginkgo/v2 v2.23.3
github.com/onsi/gomega v1.36.2
github.com/pkg/errors v0.9.1
github.com/prometheus/client_golang v1.12.2
github.com/prometheus/client_golang v1.19.1
github.com/prometheus/common v0.55.0
github.com/robfig/cron/v3 v3.0.1
github.com/spf13/pflag v1.0.5
github.com/stretchr/testify v1.7.1
golang.org/x/time v0.0.0-20220922220347-f3bd1da661af
github.com/stretchr/testify v1.9.0
golang.org/x/time v0.5.0
gomodules.xyz/jsonpatch/v2 v2.4.0
gopkg.in/gomail.v2 v2.0.0-20160411212932-81ebce5c23df
gopkg.in/yaml.v3 v3.0.1
k8s.io/api v0.23.6
k8s.io/apiextensions-apiserver v0.23.6
k8s.io/apimachinery v0.23.6
k8s.io/apiserver v0.23.6
k8s.io/client-go v0.23.6
k8s.io/component-base v0.23.6
k8s.io/klog/v2 v2.60.1
k8s.io/utils v0.0.0-20220210201930-3a6ce19ff2f9
sigs.k8s.io/controller-runtime v0.11.2
sigs.k8s.io/yaml v1.3.0
k8s.io/api v0.31.10
k8s.io/apiextensions-apiserver v0.31.10
k8s.io/apimachinery v0.31.10
k8s.io/apiserver v0.31.10
k8s.io/client-go v0.31.10
k8s.io/component-base v0.31.10
k8s.io/klog/v2 v2.130.1
k8s.io/utils v0.0.0-20240711033017-18e509b52bc8
sigs.k8s.io/controller-runtime v0.19.1
sigs.k8s.io/yaml v1.4.0
)
require golang.org/x/oauth2 v0.0.0-20220622183110-fd043fe589d2 // indirect
require (
github.com/AlecAivazis/survey/v2 v2.1.1 // indirect
github.com/Azure/go-ansiterm v0.0.0-20210617225240-d185dfc1b5a1 // indirect
github.com/BurntSushi/toml v0.4.1 // indirect
github.com/Masterminds/goutils v1.1.1 // indirect
github.com/Masterminds/semver v1.5.0 // indirect
github.com/Masterminds/semver/v3 v3.1.1 // indirect
github.com/Masterminds/sprig v2.22.0+incompatible // indirect
github.com/Microsoft/go-winio v0.5.2 // indirect
dario.cat/mergo v1.0.0 // indirect
github.com/NYTimes/gziphandler v1.1.1 // indirect
github.com/PuerkitoBio/purell v1.1.1 // indirect
github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578 // indirect
github.com/agext/levenshtein v1.2.2 // indirect
github.com/alessio/shellescape v1.2.2 // indirect
github.com/aliyun/alibaba-cloud-sdk-go v1.61.1704 // indirect
github.com/apparentlymart/go-textseg/v13 v13.0.0 // indirect
github.com/antlr4-go/antlr/v4 v4.13.0 // indirect
github.com/asaskevich/govalidator v0.0.0-20190424111038-f61b66f89f4a // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/blang/semver v3.5.1+incompatible // indirect
github.com/briandowns/spinner v1.11.1 // indirect
github.com/buger/jsonparser v1.1.1 // indirect
github.com/blang/semver/v4 v4.0.0 // indirect
github.com/cenkalti/backoff v2.2.1+incompatible // indirect
github.com/cespare/xxhash/v2 v2.1.2 // indirect
github.com/cloudtty/cloudtty v0.2.0 // indirect
github.com/cockroachdb/apd/v2 v2.0.2 // indirect
github.com/containerd/containerd v1.5.13 // indirect
github.com/coreos/go-semver v0.3.0 // indirect
github.com/coreos/go-systemd/v22 v22.3.2 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/deckarep/golang-set v1.7.1 // indirect
github.com/docker/cli v20.10.16+incompatible // indirect
github.com/docker/distribution v2.8.1+incompatible // indirect
github.com/docker/docker v20.10.16+incompatible // indirect
github.com/docker/docker-credential-helpers v0.6.4 // indirect
github.com/docker/go-connections v0.4.0 // indirect
github.com/docker/go-metrics v0.0.1 // indirect
github.com/docker/go-units v0.4.0 // indirect
github.com/emicklei/go-restful v2.9.5+incompatible // indirect
github.com/emicklei/go-restful/v3 v3.8.0 // indirect
github.com/emirpasic/gods v1.12.0 // indirect
github.com/evanphx/json-patch/v5 v5.1.0 // indirect
github.com/fatih/color v1.13.0 // indirect
github.com/felixge/httpsnoop v1.0.1 // indirect
github.com/form3tech-oss/jwt-go v3.2.3+incompatible // indirect
github.com/fsnotify/fsnotify v1.5.1 // indirect
github.com/getkin/kin-openapi v0.94.0 // indirect
github.com/ghodss/yaml v1.0.0 // indirect
github.com/go-errors/errors v1.0.1 // indirect
github.com/cenkalti/backoff/v4 v4.3.0 // indirect
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/cockroachdb/apd/v3 v3.2.1 // indirect
github.com/coreos/go-semver v0.3.1 // indirect
github.com/coreos/go-systemd/v22 v22.5.0 // indirect
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
github.com/emicklei/go-restful/v3 v3.11.0 // indirect
github.com/evanphx/json-patch/v5 v5.9.0 // indirect
github.com/felixge/httpsnoop v1.0.4 // indirect
github.com/frankban/quicktest v1.11.3 // indirect
github.com/fsnotify/fsnotify v1.7.0 // indirect
github.com/fxamacker/cbor/v2 v2.7.0 // indirect
github.com/go-kit/kit v0.10.0 // indirect
github.com/go-kit/log v0.2.0 // indirect
github.com/go-kit/log v0.2.1 // indirect
github.com/go-logfmt/logfmt v0.5.1 // indirect
github.com/go-logr/logr v1.2.2 // indirect
github.com/go-logr/zapr v1.2.0 // indirect
github.com/go-openapi/jsonpointer v0.19.5 // indirect
github.com/go-openapi/jsonreference v0.19.5 // indirect
github.com/go-openapi/swag v0.19.14 // indirect
github.com/go-logr/logr v1.4.2 // indirect
github.com/go-logr/stdr v1.2.2 // indirect
github.com/go-logr/zapr v1.3.0 // indirect
github.com/go-openapi/jsonpointer v0.19.6 // indirect
github.com/go-openapi/jsonreference v0.20.2 // indirect
github.com/go-openapi/swag v0.22.4 // indirect
github.com/go-stack/stack v1.8.1 // indirect
github.com/go-task/slim-sprig/v3 v3.0.0 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang/mock v1.6.0 // indirect
github.com/golang/protobuf v1.5.2 // indirect
github.com/google/btree v1.0.1 // indirect
github.com/golang/protobuf v1.5.4 // indirect
github.com/google/cel-go v0.20.1 // indirect
github.com/google/gnostic-models v0.6.8 // indirect
github.com/google/gofuzz v1.2.0 // indirect
github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510 // indirect
github.com/google/uuid v1.3.0 // indirect
github.com/googleapis/gnostic v0.5.5 // indirect
github.com/gorilla/mux v1.8.0 // indirect
github.com/gosuri/uitable v0.0.4 // indirect
github.com/gregjones/httpcache v0.0.0-20190611155906-901d90724c79 // indirect
github.com/google/pprof v0.0.0-20250403155104-27863c87afa6 // indirect
github.com/google/uuid v1.6.0 // indirect
github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0 // indirect
github.com/grpc-ecosystem/grpc-gateway v1.16.0 // indirect
github.com/hashicorp/hcl v1.0.0 // indirect
github.com/hashicorp/hcl/v2 v2.9.1 // indirect
github.com/huandu/xstrings v1.3.2 // indirect
github.com/imdario/mergo v0.3.12 // indirect
github.com/inconshreveable/mousetrap v1.0.0 // indirect
github.com/jbenet/go-context v0.0.0-20150711004518-d14ea06fba99 // indirect
github.com/jmespath/go-jmespath v0.4.0 // indirect
github.com/grpc-ecosystem/grpc-gateway/v2 v2.20.0 // indirect
github.com/imdario/mergo v0.3.16 // indirect
github.com/inconshreveable/mousetrap v1.1.0 // indirect
github.com/jellydator/ttlcache/v3 v3.0.1 // indirect
github.com/josharian/intern v1.0.0 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/kballard/go-shellquote v0.0.0-20180428030007-95032a82bc51 // indirect
github.com/kevinburke/ssh_config v0.0.0-20201106050909-4977a11b4351 // indirect
github.com/klauspost/compress v1.15.11 // indirect
github.com/kubevela/prism v1.5.1-0.20220915071949-6bf3ad33f84f // indirect
github.com/liggitt/tabwriter v0.0.0-20181228230101-89fcab3d43de // indirect
github.com/magiconair/properties v1.8.5 // indirect
github.com/mailru/easyjson v0.7.6 // indirect
github.com/mattn/go-colorable v0.1.11 // indirect
github.com/mattn/go-isatty v0.0.14 // indirect
github.com/mattn/go-runewidth v0.0.13 // indirect
github.com/matttproud/golang_protobuf_extensions v1.0.2-0.20181231171920-c182affec369 // indirect
github.com/mgutz/ansi v0.0.0-20170206155736-9520e82c474b // indirect
github.com/mitchellh/copystructure v1.2.0 // indirect
github.com/mitchellh/go-homedir v1.1.0 // indirect
github.com/mitchellh/go-wordwrap v1.0.1 // indirect
github.com/mitchellh/hashstructure/v2 v2.0.1 // indirect
github.com/mitchellh/reflectwalk v1.0.2 // indirect
github.com/moby/locker v1.0.1 // indirect
github.com/moby/term v0.0.0-20210610120745-9d4ed1856297 // indirect
github.com/mailru/easyjson v0.7.7 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/monochromegane/go-gitignore v0.0.0-20200626010858-205db1a8cc00 // indirect
github.com/morikuni/aec v1.0.0 // indirect
github.com/mpvl/unique v0.0.0-20150818121801-cbe035fff7de // indirect
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f // indirect
github.com/nacos-group/nacos-sdk-go/v2 v2.1.0 // indirect
github.com/nxadm/tail v1.4.8 // indirect
github.com/oam-dev/cluster-gateway v1.4.0 // indirect
github.com/oam-dev/cluster-register v1.0.4-0.20220928064144-5f76a9d7ca8c // indirect
github.com/oam-dev/terraform-config-inspect v0.0.0-20210418082552-fc72d929aa28 // indirect
github.com/oam-dev/terraform-controller v0.7.0 // indirect
github.com/opencontainers/go-digest v1.0.0 // indirect
github.com/opencontainers/image-spec v1.0.3-0.20220114050600-8b9d41f48198 // indirect
github.com/openkruise/kruise-api v1.1.0 // indirect
github.com/openkruise/rollouts v0.1.1-0.20220622054609-149e5a48da5e // indirect
github.com/openshift/library-go v0.0.0-20220112153822-ac82336bd076 // indirect
github.com/pelletier/go-toml v1.9.4 // indirect
github.com/peterbourgon/diskv v2.0.1+incompatible // indirect
github.com/oam-dev/cluster-gateway v1.9.1-0.20241120140625-33c8891b781c // indirect
github.com/openshift/library-go v0.0.0-20230327085348-8477ec72b725 // indirect
github.com/pierrec/lz4 v2.6.0+incompatible // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/prometheus/client_model v0.2.0 // indirect
github.com/prometheus/common v0.32.1 // indirect
github.com/prometheus/procfs v0.7.3 // indirect
github.com/rivo/uniseg v0.2.0 // indirect
github.com/sergi/go-diff v1.1.0 // indirect
github.com/sirupsen/logrus v1.8.1 // indirect
github.com/spf13/cobra v1.4.0 // indirect
github.com/src-d/gcfg v1.4.0 // indirect
github.com/wonderflow/cert-manager-api v1.0.4-0.20210304051430-e08aa76f6c5f // indirect
github.com/xanzy/ssh-agent v0.3.0 // indirect
github.com/xlab/treeprint v1.1.0 // indirect
github.com/zclconf/go-cty v1.8.0 // indirect
go.etcd.io/etcd/api/v3 v3.5.0 // indirect
go.etcd.io/etcd/client/pkg/v3 v3.5.0 // indirect
go.etcd.io/etcd/client/v3 v3.5.0 // indirect
go.opentelemetry.io/contrib v0.20.0 // indirect
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.20.0 // indirect
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.20.0 // indirect
go.opentelemetry.io/otel v0.20.0 // indirect
go.opentelemetry.io/otel/exporters/otlp v0.20.0 // indirect
go.opentelemetry.io/otel/metric v0.20.0 // indirect
go.opentelemetry.io/otel/sdk v0.20.0 // indirect
go.opentelemetry.io/otel/sdk/export/metric v0.20.0 // indirect
go.opentelemetry.io/otel/sdk/metric v0.20.0 // indirect
go.opentelemetry.io/otel/trace v0.20.0 // indirect
go.opentelemetry.io/proto/otlp v0.7.0 // indirect
go.starlark.net v0.0.0-20200306205701-8dd3e2ee1dd5 // indirect
go.uber.org/atomic v1.9.0 // indirect
go.uber.org/multierr v1.7.0 // indirect
go.uber.org/zap v1.21.0 // indirect
golang.org/x/crypto v0.0.0-20220926161630-eccd6366d1be // indirect
golang.org/x/net v0.0.0-20220906165146-f3363e06e74c // indirect
golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4 // indirect
golang.org/x/sys v0.0.0-20220928140112-f11e5e49a4ec // indirect
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211 // indirect
golang.org/x/text v0.3.7 // indirect
gomodules.xyz/jsonpatch/v2 v2.2.0 // indirect
google.golang.org/appengine v1.6.7 // indirect
google.golang.org/genproto v0.0.0-20220628213854-d9e0b6570c03 // indirect
google.golang.org/grpc v1.48.0 // indirect
google.golang.org/protobuf v1.28.0 // indirect
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
github.com/prometheus/client_model v0.6.1 // indirect
github.com/prometheus/procfs v0.15.1 // indirect
github.com/spf13/cobra v1.8.1 // indirect
github.com/stoewer/go-strcase v1.2.0 // indirect
github.com/x448/float16 v0.8.4 // indirect
go.etcd.io/etcd/api/v3 v3.5.16 // indirect
go.etcd.io/etcd/client/pkg/v3 v3.5.16 // indirect
go.etcd.io/etcd/client/v3 v3.5.16 // indirect
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.53.0 // indirect
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.53.0 // indirect
go.opentelemetry.io/otel v1.28.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.28.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.27.0 // indirect
go.opentelemetry.io/otel/metric v1.28.0 // indirect
go.opentelemetry.io/otel/sdk v1.28.0 // indirect
go.opentelemetry.io/otel/trace v1.28.0 // indirect
go.opentelemetry.io/proto/otlp v1.3.1 // indirect
go.uber.org/atomic v1.11.0 // indirect
go.uber.org/multierr v1.11.0 // indirect
go.uber.org/zap v1.26.0 // indirect
golang.org/x/crypto v0.36.0 // indirect
golang.org/x/exp v0.0.0-20240112132812-db7319d0e0e3 // indirect
golang.org/x/net v0.37.0 // indirect
golang.org/x/oauth2 v0.22.0 // indirect
golang.org/x/sync v0.12.0 // indirect
golang.org/x/sys v0.32.0 // indirect
golang.org/x/term v0.30.0 // indirect
golang.org/x/text v0.23.0 // indirect
golang.org/x/tools v0.31.0 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20240814211410-ddb44dafa142 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20240814211410-ddb44dafa142 // indirect
google.golang.org/grpc v1.67.0 // indirect
google.golang.org/protobuf v1.36.5 // indirect
gopkg.in/alexcesaro/quotedprintable.v3 v3.0.0-20150716171945-2caba252f4dc // indirect
gopkg.in/evanphx/json-patch.v4 v4.12.0 // indirect
gopkg.in/inf.v0 v0.9.1 // indirect
gopkg.in/ini.v1 v1.66.2 // indirect
gopkg.in/natefinch/lumberjack.v2 v2.0.0 // indirect
gopkg.in/src-d/go-billy.v4 v4.3.2 // indirect
gopkg.in/src-d/go-git.v4 v4.13.1 // indirect
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 // indirect
gopkg.in/warnings.v0 v0.1.2 // indirect
gopkg.in/natefinch/lumberjack.v2 v2.2.1 // indirect
gopkg.in/yaml.v2 v2.4.0 // indirect
helm.sh/helm/v3 v3.7.2 // indirect
istio.io/api v0.0.0-20220512212136-561ffec82582 // indirect
istio.io/client-go v1.13.4 // indirect
istio.io/gogo-genproto v0.0.0-20211208193508-5ab4acc9eb1e // indirect
k8s.io/cli-runtime v0.23.6 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
k8s.io/klog v1.0.0 // indirect
k8s.io/kube-aggregator v0.23.0 // indirect
k8s.io/kube-openapi v0.0.0-20211115234752-e816edb12b65 // indirect
k8s.io/kubectl v0.23.6 // indirect
k8s.io/metrics v0.23.6 // indirect
open-cluster-management.io/api v0.7.0 // indirect
oras.land/oras-go v0.4.0 // indirect
k8s.io/kms v0.31.10 // indirect
k8s.io/kube-openapi v0.0.0-20240228011516-70dd3763d340 // indirect
k8s.io/kubectl v0.29.0 // indirect
open-cluster-management.io/api v0.11.0 // indirect
sigs.k8s.io/apiserver-network-proxy v0.0.30 // indirect
sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.0.30 // indirect
sigs.k8s.io/apiserver-runtime v1.1.1 // indirect
sigs.k8s.io/gateway-api v0.4.3 // indirect
sigs.k8s.io/json v0.0.0-20211208200746-9f7c6b3444d2 // indirect
sigs.k8s.io/kind v0.9.0 // indirect
sigs.k8s.io/kustomize/api v0.10.1 // indirect
sigs.k8s.io/kustomize/kyaml v0.13.0 // indirect
sigs.k8s.io/structured-merge-diff/v4 v4.2.1 // indirect
sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.30.3 // indirect
sigs.k8s.io/apiserver-runtime v1.1.2-0.20221118041430-0a6394f6dda3 // indirect
sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd // indirect
sigs.k8s.io/structured-merge-diff/v4 v4.4.1 // indirect
)
replace (
@ -234,5 +146,6 @@ replace (
github.com/go-kit/kit => github.com/go-kit/kit v0.12.0
github.com/nats-io/jwt/v2 => github.com/nats-io/jwt/v2 v2.0.1
github.com/nats-io/nats-server/v2 => github.com/nats-io/nats-server/v2 v2.9.3
sigs.k8s.io/apiserver-network-proxy/konnectivity-client => sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.0.31-0.20220502234555-5308cea56b78
github.com/wercker/stern => github.com/oam-dev/stern v1.13.2
sigs.k8s.io/apiserver-runtime => github.com/kmodules/apiserver-runtime v1.1.2-0.20250422194347-c5ac4abaf2ae
)

go.sum (2100 changed lines)

File diff suppressed because it is too large

View File

@ -4,7 +4,7 @@ IMG_TAG ?= latest
OS ?= linux
ARCH ?= amd64
# ENVTEST_K8S_VERSION refers to the version of kubebuilder assets to be downloaded by envtest binary.
ENVTEST_K8S_VERSION = 1.24.1
ENVTEST_K8S_VERSION = 1.31
# Get the currently used golang install path (in GOPATH/bin, unless GOBIN is set)
ifeq (,$(shell go env GOBIN))
@ -47,8 +47,8 @@ VELA_VERSION ?= master
# Repo info
GIT_COMMIT ?= git-$(shell git rev-parse --short HEAD)
GIT_COMMIT_LONG ?= $(shell git rev-parse HEAD)
VELA_VERSION_KEY := github.com/oam-dev/kubevela/version.VelaVersion
VELA_GITVERSION_KEY := github.com/oam-dev/kubevela/version.GitRevision
VELA_VERSION_KEY := github.com/kubevela/workflow/version.VelaVersion
VELA_GITVERSION_KEY := github.com/kubevela/workflow/version.GitRevision
LDFLAGS ?= "-s -w -X $(VELA_VERSION_KEY)=$(VELA_VERSION) -X $(VELA_GITVERSION_KEY)=$(GIT_COMMIT)"
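Note: the -X flags above overwrite package-level string variables at link time. A minimal sketch (not part of the diff) of the version package those keys point at — the variable names follow from the keys, the default values are assumptions:

package version

// VelaVersion is overwritten at build time via
// -X github.com/kubevela/workflow/version.VelaVersion=<version>
var VelaVersion = "UNKNOWN" // default value is an assumption

// GitRevision is overwritten at build time via
// -X github.com/kubevela/workflow/version.GitRevision=<git commit>
var GitRevision = "UNKNOWN" // default value is an assumption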

View File

@ -12,7 +12,7 @@ ENVTEST ?= $(LOCALBIN)/setup-envtest
## Tool Versions
KUSTOMIZE_VERSION ?= 4.5.5
CONTROLLER_TOOLS_VERSION ?= v0.9.0
CONTROLLER_TOOLS_VERSION ?= v0.18
KUSTOMIZE_INSTALL_SCRIPT ?= "https://raw.githubusercontent.com/kubernetes-sigs/kustomize/master/hack/install_kustomize.sh"
.PHONY: kustomize
@ -39,7 +39,7 @@ ifeq (, $(shell which staticcheck))
@{ \
set -e ;\
echo 'installing honnef.co/go/tools/cmd/staticcheck ' ;\
go install honnef.co/go/tools/cmd/staticcheck@2022.1 ;\
go install honnef.co/go/tools/cmd/staticcheck@v0.5.1 ;\
}
STATICCHECK=$(GOBIN)/staticcheck
else
@ -58,7 +58,7 @@ else
GOIMPORTS=$(shell which goimports)
endif
GOLANGCILINT_VERSION ?= v1.46.0
GOLANGCILINT_VERSION ?= v1.60.0
.PHONY: golangci
golangci:
@ -84,7 +84,7 @@ ifeq (, $(shell which readme-generator))
@{ \
set -e ;\
echo 'installing readme-generator-for-helm' ;\
npm install -g readme-generator-for-helm ;\
npm install -g @bitnami/readme-generator-for-helm ;\
}
else
@$(OK) readme-generator-for-helm is already installed

View File

@ -5,14 +5,15 @@ e2e-setup-controller-pre-hook:
.PHONY: e2e-setup-controller
e2e-setup-controller:
helm upgrade --install \
--create-namespace \
--namespace vela-system \
--set image.repository=oamdev/vela-workflow \
--set image.tag=latest \
--set image.pullPolicy=IfNotPresent \
--wait vela-workflow \
./charts/vela-workflow
helm upgrade --install \
--create-namespace \
--namespace vela-system \
--set image.repository=oamdev/vela-workflow \
--set image.tag=latest \
--set image.pullPolicy=IfNotPresent \
--wait vela-workflow \
./charts/vela-workflow \
--debug
.PHONY: end-e2e
end-e2e:

View File

@ -1,8 +1,9 @@
package backup
import (
monitorContext "github.com/kubevela/pkg/monitor/context"
"fmt"
monitorContext "github.com/kubevela/pkg/monitor/context"
"github.com/kubevela/workflow/api/v1alpha1"
"github.com/kubevela/workflow/pkg/backup/sls"
)
@ -13,24 +14,21 @@ const (
)
// NewPersister is a factory method for creating a persister.
func NewPersister(persistType string, config map[string][]byte) persistWorkflowRecord {
func NewPersister(config map[string][]byte, persistType string) (PersistWorkflowRecord, error) {
if config == nil {
return nil, fmt.Errorf("empty config")
}
switch persistType {
case PersistTypeSLS:
if config == nil {
return nil
}
return &sls.Handler{
LogStoreName: string(config["LogStoreName"]),
ProjectName: string(config["ProjectName"]),
Endpoint: string(config["Endpoint"]),
AccessKeyID: string(config["AccessKeyID"]),
AccessKeySecret: string(config["AccessKeySecret"]),
}
return sls.NewSLSHandler(config)
case "":
return nil, nil
default:
return nil
return nil, fmt.Errorf("unsupported persist type %s", persistType)
}
}
type persistWorkflowRecord interface {
// PersistWorkflowRecord is the interface for persisting workflow records
type PersistWorkflowRecord interface {
Store(ctx monitorContext.Context, run *v1alpha1.WorkflowRun) error
}
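A minimal caller sketch (not part of the diff) for the refactored factory: the config map now comes first and errors are surfaced instead of a silent nil. The helper name persistRecord and its package are illustrative; the called functions and types are the ones shown in this diff:

package backupexample // illustrative package, not part of the repo

import (
    "context"

    monitorContext "github.com/kubevela/pkg/monitor/context"
    corev1 "k8s.io/api/core/v1"

    "github.com/kubevela/workflow/api/v1alpha1"
    "github.com/kubevela/workflow/pkg/backup"
)

// persistRecord shows the new call shape: NewPersister(config, persistType)
// returns an explicit error, and an empty persist type yields a nil persister.
func persistRecord(ctx context.Context, secret *corev1.Secret, persistType string, run *v1alpha1.WorkflowRun) error {
    persister, err := backup.NewPersister(secret.Data, persistType)
    if err != nil {
        return err
    }
    if persister == nil {
        return nil
    }
    return persister.Store(monitorContext.NewTraceContext(ctx, "backup"), run)
}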

View File

@ -1,44 +1,91 @@
package backup
import (
"reflect"
"context"
"testing"
"github.com/kubevela/workflow/pkg/backup/sls"
"github.com/stretchr/testify/require"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/kubernetes/scheme"
"sigs.k8s.io/controller-runtime/pkg/client/fake"
)
func TestNewPersister(t *testing.T) {
type args struct {
cli := fake.NewClientBuilder().WithScheme(scheme.Scheme).Build()
ctx := context.Background()
testCases := map[string]struct {
persistType string
config map[string][]byte
}
tests := []struct {
name string
args args
want persistWorkflowRecord
configName string
expectedErr string
secret *corev1.Secret
}{
{
name: "Empty config",
args: args{
persistType: "sls",
config: nil,
"empty config": {
persistType: "sls",
secret: &corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
Name: "valid",
Namespace: "default",
},
},
want: nil,
expectedErr: "empty config",
},
{
name: "Success",
args: args{
persistType: "sls",
config: make(map[string][]byte),
"invalid type": {
persistType: "invalid",
secret: &corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
Name: "valid",
Namespace: "default",
},
Data: map[string][]byte{
"accessKeyID": []byte("accessKeyID"),
},
},
expectedErr: "unsupported persist type",
},
"sls-not-complete": {
persistType: "sls",
secret: &corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
Name: "valid",
Namespace: "default",
},
Data: map[string][]byte{
"accessKeyID": []byte("accessKeyID"),
},
},
expectedErr: "invalid SLS config",
},
"sls-success": {
persistType: "sls",
secret: &corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
Name: "valid",
Namespace: "default",
},
Data: map[string][]byte{
"AccessKeyID": []byte("accessKeyID"),
"AccessKeySecret": []byte("accessKeySecret"),
"Endpoint": []byte("endpoint"),
"ProjectName": []byte("project"),
"LogStoreName": []byte("logstore"),
},
},
want: &sls.Handler{},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if got := NewPersister(tt.args.persistType, tt.args.config); !reflect.DeepEqual(got, tt.want) {
t.Errorf("NewPersister() = %v, want %v", got, tt.want)
for name, tc := range testCases {
t.Run(name, func(t *testing.T) {
r := require.New(t)
if tc.secret != nil {
r.NoError(cli.Create(ctx, tc.secret))
defer cli.Delete(ctx, tc.secret)
}
_, err := NewPersister(tc.secret.Data, tc.persistType)
if tc.expectedErr != "" {
r.Contains(err.Error(), tc.expectedErr)
return
}
r.NoError(err)
})
}
}

View File

@ -2,6 +2,7 @@ package sls
import (
"encoding/json"
"fmt"
"time"
monitorContext "github.com/kubevela/pkg/monitor/context"
@ -12,29 +13,53 @@ import (
// Handler is sls config.
type Handler struct {
LogStoreName string
ProjectName string
Endpoint string
AccessKeyID string
AccessKeySecret string
LogStoreName string
ProjectName string
ProducerConfig *producer.ProducerConfig
}
// Callback is for sls callback
type Callback struct {
ctx monitorContext.Context
}
// NewSLSHandler creates a new SLS handler
func NewSLSHandler(config map[string][]byte) (*Handler, error) {
endpoint := string(config["Endpoint"])
accessKeyID := string(config["AccessKeyID"])
accessKeySecret := string(config["AccessKeySecret"])
projectName := string(config["ProjectName"])
logStoreName := string(config["LogStoreName"])
if endpoint == "" || accessKeyID == "" || accessKeySecret == "" || projectName == "" || logStoreName == "" {
return nil, fmt.Errorf("invalid SLS config, please make sure endpoint/ak/sk/project/logstore are both provided correctly")
}
producerConfig := producer.GetDefaultProducerConfig()
producerConfig.Endpoint = endpoint
producerConfig.AccessKeyID = accessKeyID
producerConfig.AccessKeySecret = accessKeySecret
return &Handler{
ProducerConfig: producerConfig,
LogStoreName: logStoreName,
ProjectName: projectName,
}, nil
}
// Fail is fail callback
func (callback *Callback) Fail(result *producer.Result) {
callback.ctx.Error(fmt.Errorf("failed to send log to sls"), result.GetErrorMessage(), "errorCode", result.GetErrorCode(), "requestId", result.GetRequestId())
}
// Success is success callback
func (callback *Callback) Success(result *producer.Result) { //nolint:revive,unused
}
// Store stores the WorkflowRun to SLS
func (s *Handler) Store(ctx monitorContext.Context, run *v1alpha1.WorkflowRun) error {
ctx.Info("Start Send workflow record to SLS")
producerConfig := producer.GetDefaultProducerConfig()
producerConfig.Endpoint = s.Endpoint
producerConfig.AccessKeyID = s.AccessKeyID
producerConfig.AccessKeySecret = s.AccessKeySecret
producerInstance := producer.InitProducer(producerConfig)
producerInstance.Start()
defer func(producerInstance *producer.Producer, timeoutMs int64) {
err := producerInstance.Close(timeoutMs)
if err != nil {
ctx.Error(err, "Close SLS fail")
}
}(producerInstance, 60000)
p := producer.InitProducer(s.ProducerConfig)
p.Start()
defer p.SafeClose()
data, err := json.Marshal(run)
if err != nil {
@ -42,8 +67,9 @@ func (s *Handler) Store(ctx monitorContext.Context, run *v1alpha1.WorkflowRun) e
return err
}
callback := &Callback{ctx: ctx}
log := producer.GenerateLog(uint32(time.Now().Unix()), map[string]string{"content": string(data)})
err = producerInstance.SendLog(s.ProjectName, s.LogStoreName, "topic", "", log)
err = p.SendLogWithCallBack(s.ProjectName, s.LogStoreName, "topic", "", log, callback)
if err != nil {
ctx.Error(err, "Send WorkflowRun Content to SLS fail")
return err
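For reference, a sketch (not part of the diff) of the five config keys NewSLSHandler now validates; all values below are placeholders:

package main

import (
    "fmt"

    "github.com/kubevela/workflow/pkg/backup/sls"
)

func main() {
    // Placeholder values; NewSLSHandler returns an error if any of the five keys is empty.
    config := map[string][]byte{
        "Endpoint":        []byte("<endpoint>"),
        "AccessKeyID":     []byte("<access-key-id>"),
        "AccessKeySecret": []byte("<access-key-secret>"),
        "ProjectName":     []byte("<project>"),
        "LogStoreName":    []byte("<logstore>"),
    }
    h, err := sls.NewSLSHandler(config)
    if err != nil {
        fmt.Println("invalid SLS config:", err)
        return
    }
    _ = h // h.Store(ctx, run) marshals the WorkflowRun and ships it through the SLS producer
}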

View File

@ -2,57 +2,40 @@ package sls
import (
"context"
"os"
"testing"
monitorContext "github.com/kubevela/pkg/monitor/context"
"github.com/stretchr/testify/require"
"github.com/kubevela/workflow/api/v1alpha1"
)
func TestHandler_Store(t *testing.T) {
type fields struct {
LogStoreName string
ProjectName string
Endpoint string
AccessKeyID string
AccessKeySecret string
}
type args struct {
ctx monitorContext.Context
run *v1alpha1.WorkflowRun
}
tests := []struct {
name string
fields fields
args args
config map[string][]byte
run *v1alpha1.WorkflowRun
wantErr bool
}{
{
name: "Success",
fields: fields{
LogStoreName: os.Getenv("LOG_TEST_LOGSTORE"),
ProjectName: os.Getenv("LOG_TEST_PROJECT"),
Endpoint: os.Getenv("LOG_TEST_ENDPOINT"),
AccessKeyID: os.Getenv("LOG_TEST_ACCESS_KEY_ID"),
AccessKeySecret: os.Getenv("LOG_TEST_ACCESS_KEY_SECRET"),
},
args: args{
ctx: monitorContext.NewTraceContext(context.Background(), "test-sls"),
config: map[string][]byte{
"AccessKeyID": []byte("accessKeyID"),
"AccessKeySecret": []byte("accessKeySecret"),
"Endpoint": []byte("endpoint"),
"ProjectName": []byte("project"),
"LogStoreName": []byte("logstore"),
},
wantErr: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
s := &Handler{
LogStoreName: tt.fields.LogStoreName,
ProjectName: tt.fields.ProjectName,
Endpoint: tt.fields.Endpoint,
AccessKeyID: tt.fields.AccessKeyID,
AccessKeySecret: tt.fields.AccessKeySecret,
}
if err := s.Store(tt.args.ctx, tt.args.run); (err != nil) != tt.wantErr {
r := require.New(t)
ctx := monitorContext.NewTraceContext(context.Background(), "test")
s, err := NewSLSHandler(tt.config)
r.NoError(err)
if err := s.Store(ctx, tt.run); (err != nil) != tt.wantErr {
t.Errorf("Store() error = %v, wantErr %v", err, tt.wantErr)
}
})

View File

@ -18,28 +18,28 @@ package context
import (
"context"
"encoding/json"
"fmt"
"reflect"
"strings"
"sync"
"time"
"cuelang.org/go/cue"
"cuelang.org/go/cue/cuecontext"
"github.com/pkg/errors"
corev1 "k8s.io/api/core/v1"
kerrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
"sigs.k8s.io/controller-runtime/pkg/client"
"github.com/kubevela/pkg/util/rand"
"github.com/kubevela/workflow/pkg/cue/model"
"github.com/kubevela/pkg/util/singleton"
"github.com/kubevela/workflow/pkg/cue/model/sets"
"github.com/kubevela/workflow/pkg/cue/model/value"
)
const (
// ConfigMapKeyComponents is the key in ConfigMap Data field for containing data of components
ConfigMapKeyComponents = "components"
// ConfigMapKeyVars is the key in ConfigMap Data field for containing data of variable
ConfigMapKeyVars = "vars"
// AnnotationStartTimestamp is the annotation key of the workflow start timestamp
@ -52,56 +52,34 @@ var (
// WorkflowContext is workflow context.
type WorkflowContext struct {
cli client.Client
store *corev1.ConfigMap
memoryStore *sync.Map
components map[string]*ComponentManifest
vars *value.Value
vars cue.Value
modified bool
}
// GetComponent Get ComponentManifest from workflow context.
func (wf *WorkflowContext) GetComponent(name string) (*ComponentManifest, error) {
component, ok := wf.components[name]
if !ok {
return nil, errors.Errorf("component %s not found in application", name)
}
return component, nil
}
// GetComponents Get All ComponentManifest from workflow context.
func (wf *WorkflowContext) GetComponents() map[string]*ComponentManifest {
return wf.components
}
// PatchComponent patch component with value.
func (wf *WorkflowContext) PatchComponent(name string, patchValue *value.Value) error {
component, err := wf.GetComponent(name)
if err != nil {
return err
}
if err := component.Patch(patchValue); err != nil {
return err
}
wf.modified = true
return nil
}
// GetVar get variable from workflow context.
func (wf *WorkflowContext) GetVar(paths ...string) (*value.Value, error) {
return wf.vars.LookupValue(paths...)
func (wf *WorkflowContext) GetVar(paths ...string) (cue.Value, error) {
v := wf.vars.LookupPath(value.FieldPath(paths...))
if !v.Exists() {
return v, fmt.Errorf("var %s not found", strings.Join(paths, "."))
}
return v, nil
}
// SetVar set variable to workflow context.
func (wf *WorkflowContext) SetVar(v *value.Value, paths ...string) error {
str, err := v.String()
func (wf *WorkflowContext) SetVar(v cue.Value, paths ...string) error {
// convert value to string to set
str, err := sets.ToString(v)
if err != nil {
return errors.WithMessage(err, "compile var")
}
if err := wf.vars.FillRaw(str, paths...); err != nil {
return err
}
if err := wf.vars.Error(); err != nil {
wf.vars, err = value.FillRaw(wf.vars, str, paths...)
if err != nil {
return err
}
if err := wf.vars.Err(); err != nil {
return err
}
wf.modified = true
@ -166,94 +144,59 @@ func (wf *WorkflowContext) DeleteValueInMemory(paths ...string) {
wf.memoryStore.Delete(strings.Join(paths, "."))
}
// MakeParameter make 'value' with string
func (wf *WorkflowContext) MakeParameter(parameter string) (*value.Value, error) {
if parameter == "" {
parameter = "{}"
}
return wf.vars.MakeValue(parameter)
}
// Commit the workflow context and persist it's content.
func (wf *WorkflowContext) Commit() error {
func (wf *WorkflowContext) Commit(ctx context.Context) error {
if !wf.modified {
return nil
}
if err := wf.writeToStore(); err != nil {
return err
}
if err := wf.sync(); err != nil {
if err := wf.sync(ctx); err != nil {
return errors.WithMessagef(err, "save context to configMap(%s/%s)", wf.store.Namespace, wf.store.Name)
}
return nil
}
func (wf *WorkflowContext) writeToStore() error {
varStr, err := wf.vars.String()
varStr, err := sets.ToString(wf.vars)
if err != nil {
return err
}
jsonObject := map[string]string{}
for name, comp := range wf.components {
s, err := comp.string()
if err != nil {
return errors.WithMessagef(err, "encode component %s ", name)
}
jsonObject[name] = s
}
if wf.store.Data == nil {
wf.store.Data = make(map[string]string)
}
b, err := json.Marshal(jsonObject)
if err != nil {
return err
}
wf.store.Data[ConfigMapKeyComponents] = string(b)
wf.store.Data[ConfigMapKeyVars] = varStr
return nil
}
func (wf *WorkflowContext) sync() error {
ctx := context.Background()
func (wf *WorkflowContext) sync(ctx context.Context) error {
cli := singleton.KubeClient.Get()
store := &corev1.ConfigMap{}
if EnableInMemoryContext {
MemStore.UpdateInMemoryContext(wf.store)
} else if err := wf.cli.Update(ctx, wf.store); err != nil {
} else if err := cli.Get(ctx, types.NamespacedName{
Name: wf.store.Name,
Namespace: wf.store.Namespace,
}, store); err != nil {
if kerrors.IsNotFound(err) {
return wf.cli.Create(ctx, wf.store)
return cli.Create(ctx, wf.store)
}
return err
}
return nil
return cli.Patch(ctx, wf.store, client.MergeFrom(store.DeepCopy()))
}
// LoadFromConfigMap recover workflow context from configMap.
func (wf *WorkflowContext) LoadFromConfigMap(cm corev1.ConfigMap) error {
func (wf *WorkflowContext) LoadFromConfigMap(_ context.Context, cm corev1.ConfigMap) error {
if wf.store == nil {
wf.store = &cm
}
data := cm.Data
componentsJs := map[string]string{}
if data[ConfigMapKeyComponents] != "" {
if err := json.Unmarshal([]byte(data[ConfigMapKeyComponents]), &componentsJs); err != nil {
return errors.WithMessage(err, "decode components")
}
wf.components = map[string]*ComponentManifest{}
for name, compJs := range componentsJs {
cm := new(ComponentManifest)
if err := cm.unmarshal(compJs); err != nil {
return errors.WithMessagef(err, "unmarshal component(%s) manifest", name)
}
wf.components[name] = cm
}
}
var err error
wf.vars, err = value.NewValue(data[ConfigMapKeyVars], nil, "")
if err != nil {
return errors.WithMessage(err, "decode vars")
}
wf.vars = cuecontext.New().CompileString(data[ConfigMapKeyVars])
return nil
}
@ -268,74 +211,14 @@ func (wf *WorkflowContext) StoreRef() *corev1.ObjectReference {
}
}
// ComponentManifest contains resources rendered from an application component.
type ComponentManifest struct {
Workload model.Instance
Auxiliaries []model.Instance
}
// Patch the ComponentManifest with value
func (comp *ComponentManifest) Patch(patchValue *value.Value) error {
return comp.Workload.Unify(patchValue.CueValue())
}
type componentMould struct {
StandardWorkload string
Traits []string
}
func (comp *ComponentManifest) string() (string, error) {
workload, err := comp.Workload.String()
if err != nil {
return "", err
}
cm := componentMould{
StandardWorkload: workload,
}
for _, aux := range comp.Auxiliaries {
auxiliary, err := aux.String()
if err != nil {
return "", err
}
cm.Traits = append(cm.Traits, auxiliary)
}
js, err := json.Marshal(cm)
return string(js), err
}
func (comp *ComponentManifest) unmarshal(v string) error {
cm := componentMould{}
if err := json.Unmarshal([]byte(v), &cm); err != nil {
return err
}
wlInst := cuecontext.New().CompileString(cm.StandardWorkload)
wl, err := model.NewBase(wlInst)
if err != nil {
return err
}
comp.Workload = wl
for _, s := range cm.Traits {
auxInst := cuecontext.New().CompileString(s)
aux, err := model.NewOther(auxInst)
if err != nil {
return err
}
comp.Auxiliaries = append(comp.Auxiliaries, aux)
}
return nil
}
// NewContext new workflow context without initialize data.
func NewContext(cli client.Client, ns, name string, owner []metav1.OwnerReference) (Context, error) {
wfCtx, err := newContext(cli, ns, name, owner)
func NewContext(ctx context.Context, ns, name string, owner []metav1.OwnerReference) (Context, error) {
wfCtx, err := newContext(ctx, ns, name, owner)
if err != nil {
return nil, err
}
return wfCtx, wfCtx.Commit()
return wfCtx, nil
}
// CleanupMemoryStore cleans up memory store.
@ -343,49 +226,58 @@ func CleanupMemoryStore(name, ns string) {
workflowMemoryCache.Delete(fmt.Sprintf("%s-%s", name, ns))
}
func newContext(cli client.Client, ns, name string, owner []metav1.OwnerReference) (*WorkflowContext, error) {
var (
ctx = context.Background()
store corev1.ConfigMap
)
store.Name = generateStoreName(name)
store.Namespace = ns
store.SetOwnerReferences(owner)
func newContext(ctx context.Context, ns, name string, owner []metav1.OwnerReference) (*WorkflowContext, error) {
cli := singleton.KubeClient.Get()
store := &corev1.ConfigMap{
ObjectMeta: metav1.ObjectMeta{
Name: generateStoreName(name),
Namespace: ns,
OwnerReferences: owner,
},
Data: map[string]string{
ConfigMapKeyVars: "",
},
}
kindConfigMap := reflect.TypeOf(corev1.ConfigMap{}).Name()
if EnableInMemoryContext {
MemStore.GetOrCreateInMemoryContext(&store)
} else if err := cli.Get(ctx, client.ObjectKey{Name: store.Name, Namespace: store.Namespace}, &store); err != nil {
MemStore.GetOrCreateInMemoryContext(store)
} else if err := cli.Get(ctx, client.ObjectKey{Name: store.Name, Namespace: store.Namespace}, store); err != nil {
if kerrors.IsNotFound(err) {
if err := cli.Create(ctx, &store); err != nil {
if err := cli.Create(ctx, store); err != nil {
return nil, err
}
store.SetGroupVersionKind(corev1.SchemeGroupVersion.WithKind(kindConfigMap))
} else {
return nil, err
}
} else if !reflect.DeepEqual(store.OwnerReferences, owner) {
store = corev1.ConfigMap{
store = &corev1.ConfigMap{
ObjectMeta: metav1.ObjectMeta{
Name: fmt.Sprintf("%s-%s", generateStoreName(name), rand.RandomString(5)),
Namespace: ns,
OwnerReferences: owner,
},
Data: map[string]string{
ConfigMapKeyVars: "",
},
}
if err := cli.Create(ctx, &store); err != nil {
if err := cli.Create(ctx, store); err != nil {
return nil, err
}
store.SetGroupVersionKind(corev1.SchemeGroupVersion.WithKind(kindConfigMap))
}
store.Annotations = map[string]string{
AnnotationStartTimestamp: time.Now().String(),
}
memCache := getMemoryStore(fmt.Sprintf("%s-%s", name, ns))
wfCtx := &WorkflowContext{
cli: cli,
store: &store,
store: store,
memoryStore: memCache,
components: map[string]*ComponentManifest{},
modified: true,
}
var err error
wfCtx.vars, err = value.NewValue("", nil, "")
wfCtx.vars = cuecontext.New().CompileString("")
return wfCtx, err
}
@ -405,10 +297,11 @@ func getMemoryStore(key string) *sync.Map {
}
// LoadContext load workflow context from store.
func LoadContext(cli client.Client, ns, name, ctxName string) (Context, error) {
func LoadContext(ctx context.Context, ns, name, ctxName string) (Context, error) {
var store corev1.ConfigMap
store.Name = ctxName
store.Namespace = ns
cli := singleton.KubeClient.Get()
if EnableInMemoryContext {
MemStore.GetOrCreateInMemoryContext(&store)
} else if err := cli.Get(context.Background(), client.ObjectKey{
@ -418,15 +311,14 @@ func LoadContext(cli client.Client, ns, name, ctxName string) (Context, error) {
return nil, err
}
memCache := getMemoryStore(fmt.Sprintf("%s-%s", name, ns))
ctx := &WorkflowContext{
cli: cli,
wfCtx := &WorkflowContext{
store: &store,
memoryStore: memCache,
}
if err := ctx.LoadFromConfigMap(store); err != nil {
if err := wfCtx.LoadFromConfigMap(ctx, store); err != nil {
return nil, err
}
return ctx, nil
return wfCtx, nil
}
// generateStoreName generates the config map name of workflow context.
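A usage sketch (not part of the diff) of the refactored context API, mirroring the tests further below: the client is taken from the kubevela singleton instead of being passed in, vars are plain cue.Value, and Commit takes a context. The import path and alias for the context package are assumed from the repo layout:

package contextexample // illustrative package, not part of the repo

import (
    "context"

    "cuelang.org/go/cue/cuecontext"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "sigs.k8s.io/controller-runtime/pkg/client"

    "github.com/kubevela/pkg/util/singleton"
    wfContext "github.com/kubevela/workflow/pkg/context" // path assumed from the repo layout
)

func demo(ctx context.Context, cli client.Client) error {
    // The package-level singleton replaces the cli argument of NewContext/LoadContext.
    singleton.KubeClient.Set(cli)

    wfCtx, err := wfContext.NewContext(ctx, "default", "app-v1", []metav1.OwnerReference{{Name: "owner"}})
    if err != nil {
        return err
    }

    // Vars are plain cue.Value now.
    ip := cuecontext.New().CompileString(`"1.1.1.1"`)
    if err := wfCtx.SetVar(ip, "clusterIP"); err != nil {
        return err
    }
    if _, err := wfCtx.GetVar("clusterIP"); err != nil {
        return err
    }

    // Commit takes a context and patches the backing ConfigMap.
    return wfCtx.Commit(ctx)
}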

View File

@ -21,128 +21,20 @@ import (
"encoding/json"
"testing"
"cuelang.org/go/cue"
"cuelang.org/go/cue/cuecontext"
"github.com/crossplane/crossplane-runtime/pkg/test"
"github.com/stretchr/testify/require"
"gopkg.in/yaml.v3"
corev1 "k8s.io/api/core/v1"
kerrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"sigs.k8s.io/controller-runtime/pkg/client"
yamlUtil "sigs.k8s.io/yaml"
"github.com/kubevela/workflow/pkg/cue/model/value"
"github.com/kubevela/pkg/cue/util"
"github.com/kubevela/pkg/util/singleton"
)
func TestComponent(t *testing.T) {
wfCtx := newContextForTest(t)
r := require.New(t)
_, err := wfCtx.GetComponent("expected-not-found")
r.Equal(err.Error(), "component expected-not-found not found in application")
cmf, err := wfCtx.GetComponent("server")
r.NoError(err)
components := wfCtx.GetComponents()
_, ok := components["server"]
r.Equal(ok, true)
s, err := cmf.Workload.String()
r.NoError(err)
r.Equal(s, `apiVersion: "v1"
kind: "Pod"
metadata: {
labels: {
app: "nginx"
}
}
spec: {
containers: [{
env: [{
name: "APP"
value: "nginx"
}]
image: "nginx:1.14.2"
imagePullPolicy: "IfNotPresent"
name: "main"
ports: [{
containerPort: 8080
protocol: "TCP"
}]
}]
}
`)
r.Equal(len(cmf.Auxiliaries), 1)
s, err = cmf.Auxiliaries[0].String()
r.NoError(err)
r.Equal(s, `apiVersion: "v1"
kind: "Service"
metadata: {
name: "my-service"
}
spec: {
ports: [{
port: 80
protocol: "TCP"
targetPort: 8080
}]
selector: {
app: "nginx"
}
}
`)
pv, err := value.NewValue(`
spec: containers: [{
// +patchKey=name
env:[{name: "ClusterIP",value: "1.1.1.1"}]}]
`, nil, "")
r.NoError(err)
err = wfCtx.PatchComponent("server", pv)
r.NoError(err)
cmf, err = wfCtx.GetComponent("server")
r.NoError(err)
s, err = cmf.Workload.String()
r.NoError(err)
r.Equal(s, `apiVersion: "v1"
kind: "Pod"
metadata: {
labels: {
app: "nginx"
}
}
spec: {
containers: [{
// +patchKey=name
env: [{
name: "APP"
value: "nginx"
}, {
name: "ClusterIP"
value: "1.1.1.1"
}, ...]
image: "nginx:1.14.2"
imagePullPolicy: "IfNotPresent"
name: "main"
ports: [{
containerPort: 8080
protocol: "TCP"
}, ...]
}]
}
`)
err = wfCtx.writeToStore()
r.NoError(err)
expected, err := yaml.Marshal(wfCtx.components)
r.NoError(err)
err = wfCtx.LoadFromConfigMap(*wfCtx.store)
r.NoError(err)
componentsYaml, err := yaml.Marshal(wfCtx.components)
r.NoError(err)
r.Equal(string(expected), string(componentsYaml))
}
func TestVars(t *testing.T) {
wfCtx := newContextForTest(t)
@ -154,13 +46,12 @@ func TestVars(t *testing.T) {
{
variable: `input: "1.1.1.1"`,
paths: []string{"clusterIP"},
expected: `"1.1.1.1"
`,
expected: `"1.1.1.1"`,
},
{
variable: "input: 100",
paths: []string{"football", "score"},
expected: "100\n",
expected: "100",
},
{
variable: `
@ -170,42 +61,26 @@ input: {
}`,
paths: []string{"football"},
expected: `score: 100
result: 101
`,
result: 101`,
},
}
for _, tCase := range testCases {
r := require.New(t)
val, err := value.NewValue(tCase.variable, nil, "")
r.NoError(err)
input, err := val.LookupValue("input")
r.NoError(err)
err = wfCtx.SetVar(input, tCase.paths...)
cuectx := cuecontext.New()
val := cuectx.CompileString(tCase.variable)
input := val.LookupPath(cue.ParsePath("input"))
err := wfCtx.SetVar(input, tCase.paths...)
r.NoError(err)
result, err := wfCtx.GetVar(tCase.paths...)
r.NoError(err)
rStr, err := result.String()
rStr, err := util.ToString(result)
r.NoError(err)
r.Equal(rStr, tCase.expected)
}
r := require.New(t)
param, err := wfCtx.MakeParameter(`{"name": "foo"}`)
r.NoError(err)
mark, err := wfCtx.GetVar("football")
r.NoError(err)
err = param.FillObject(mark)
r.NoError(err)
rStr, err := param.String()
r.NoError(err)
r.Equal(rStr, `name: "foo"
score: 100
result: 101
`)
conflictV, err := value.NewValue(`score: 101`, nil, "")
r.NoError(err)
err = wfCtx.SetVar(conflictV, "football")
conflictV := cuecontext.New().CompileString(`score: 101`)
err := wfCtx.SetVar(conflictV, "football")
r.Equal(err.Error(), "football.score: conflicting values 101 and 100")
}
@ -227,40 +102,38 @@ func TestRefObj(t *testing.T) {
}
func TestContext(t *testing.T) {
cli := newCliForTest(t, nil)
newCliForTest(t, nil)
r := require.New(t)
ctx := context.Background()
wfCtx, err := NewContext(cli, "default", "app-v1", []metav1.OwnerReference{{Name: "test1"}})
wfCtx, err := NewContext(ctx, "default", "app-v1", []metav1.OwnerReference{{Name: "test1"}})
r.NoError(err)
err = wfCtx.Commit()
err = wfCtx.Commit(context.Background())
r.NoError(err)
_, err = NewContext(cli, "default", "app-v1", []metav1.OwnerReference{{Name: "test2"}})
_, err = NewContext(ctx, "default", "app-v1", []metav1.OwnerReference{{Name: "test2"}})
r.NoError(err)
wfCtx, err = LoadContext(cli, "default", "app-v1", "workflow-app-v1-context")
wfCtx, err = LoadContext(ctx, "default", "app-v1", "workflow-app-v1-context")
r.NoError(err)
err = wfCtx.Commit()
err = wfCtx.Commit(context.Background())
r.NoError(err)
cli = newCliForTest(t, nil)
_, err = LoadContext(cli, "default", "app-v1", "workflow-app-v1-context")
newCliForTest(t, nil)
_, err = LoadContext(ctx, "default", "app-v1", "workflow-app-v1-context")
r.Equal(err.Error(), `configMap "workflow-app-v1-context" not found`)
wfCtx, err = NewContext(cli, "default", "app-v1", nil)
_, err = NewContext(ctx, "default", "app-v1", nil)
r.NoError(err)
r.Equal(len(wfCtx.GetComponents()), 0)
_, err = wfCtx.GetComponent("server")
r.Equal(err.Error(), "component server not found in application")
}
func TestGetStore(t *testing.T) {
cli := newCliForTest(t, nil)
newCliForTest(t, nil)
r := require.New(t)
wfCtx, err := NewContext(cli, "default", "app-v1", nil)
wfCtx, err := NewContext(context.Background(), "default", "app-v1", nil)
r.NoError(err)
err = wfCtx.Commit()
err = wfCtx.Commit(context.Background())
r.NoError(err)
store := wfCtx.GetStore()
@ -268,12 +141,12 @@ func TestGetStore(t *testing.T) {
}
func TestMutableValue(t *testing.T) {
cli := newCliForTest(t, nil)
newCliForTest(t, nil)
r := require.New(t)
wfCtx, err := NewContext(cli, "default", "app-v1", nil)
wfCtx, err := NewContext(context.Background(), "default", "app-v1", nil)
r.NoError(err)
err = wfCtx.Commit()
err = wfCtx.Commit(context.Background())
r.NoError(err)
wfCtx.SetMutableValue("value", "test", "key")
@ -286,12 +159,12 @@ func TestMutableValue(t *testing.T) {
}
func TestMemoryValue(t *testing.T) {
cli := newCliForTest(t, nil)
newCliForTest(t, nil)
r := require.New(t)
wfCtx, err := NewContext(cli, "default", "app-v1", nil)
wfCtx, err := NewContext(context.Background(), "default", "app-v1", nil)
r.NoError(err)
err = wfCtx.Commit()
err = wfCtx.Commit(context.Background())
r.NoError(err)
wfCtx.SetValueInMemory("value", "test", "key")
@ -313,9 +186,9 @@ func TestMemoryValue(t *testing.T) {
r.Equal(count, 11)
}
func newCliForTest(t *testing.T, wfCm *corev1.ConfigMap) *test.MockClient {
func newCliForTest(t *testing.T, wfCm *corev1.ConfigMap) {
r := require.New(t)
return &test.MockClient{
cli := &test.MockClient{
MockGet: func(ctx context.Context, key client.ObjectKey, obj client.Object) error {
o, ok := obj.(*corev1.ConfigMap)
if ok {
@ -344,7 +217,7 @@ func newCliForTest(t *testing.T, wfCm *corev1.ConfigMap) *test.MockClient {
}
return nil
},
MockUpdate: func(ctx context.Context, obj client.Object, opts ...client.UpdateOption) error {
MockPatch: func(ctx context.Context, obj client.Object, patch client.Patch, opts ...client.PatchOption) error {
o, ok := obj.(*corev1.ConfigMap)
if ok {
if wfCm == nil {
@ -355,6 +228,7 @@ func newCliForTest(t *testing.T, wfCm *corev1.ConfigMap) *test.MockClient {
return nil
},
}
singleton.KubeClient.Set(cli)
}
func newContextForTest(t *testing.T) *WorkflowContext {
@ -368,7 +242,7 @@ func newContextForTest(t *testing.T) *WorkflowContext {
wfCtx := &WorkflowContext{
store: &cm,
}
err = wfCtx.LoadFromConfigMap(cm)
err = wfCtx.LoadFromConfigMap(context.Background(), cm)
r.NoError(err)
return wfCtx
}
@ -376,7 +250,7 @@ func newContextForTest(t *testing.T) *WorkflowContext {
var (
testCaseYaml = `apiVersion: v1
data:
components: '{"server":"{\"Scopes\":null,\"StandardWorkload\":\"{\\\"apiVersion\\\":\\\"v1\\\",\\\"kind\\\":\\\"Pod\\\",\\\"metadata\\\":{\\\"labels\\\":{\\\"app\\\":\\\"nginx\\\"}},\\\"spec\\\":{\\\"containers\\\":[{\\\"env\\\":[{\\\"name\\\":\\\"APP\\\",\\\"value\\\":\\\"nginx\\\"}],\\\"image\\\":\\\"nginx:1.14.2\\\",\\\"imagePullPolicy\\\":\\\"IfNotPresent\\\",\\\"name\\\":\\\"main\\\",\\\"ports\\\":[{\\\"containerPort\\\":8080,\\\"protocol\\\":\\\"TCP\\\"}]}]}}\",\"Traits\":[\"{\\\"apiVersion\\\":\\\"v1\\\",\\\"kind\\\":\\\"Service\\\",\\\"metadata\\\":{\\\"name\\\":\\\"my-service\\\"},\\\"spec\\\":{\\\"ports\\\":[{\\\"port\\\":80,\\\"protocol\\\":\\\"TCP\\\",\\\"targetPort\\\":8080}],\\\"selector\\\":{\\\"app\\\":\\\"nginx\\\"}}}\"]}"}'
test: ""
kind: ConfigMap
metadata:
name: app-v1

View File

@ -17,18 +17,16 @@ limitations under the License.
package context
import (
corev1 "k8s.io/api/core/v1"
"context"
"github.com/kubevela/workflow/pkg/cue/model/value"
"cuelang.org/go/cue"
corev1 "k8s.io/api/core/v1"
)
// Context is workflow context interface
type Context interface {
GetComponent(name string) (*ComponentManifest, error)
GetComponents() map[string]*ComponentManifest
PatchComponent(name string, patchValue *value.Value) error
GetVar(paths ...string) (*value.Value, error)
SetVar(v *value.Value, paths ...string) error
GetVar(paths ...string) (cue.Value, error)
SetVar(v cue.Value, paths ...string) error
GetStore() *corev1.ConfigMap
GetMutableValue(path ...string) string
SetMutableValue(data string, path ...string)
@ -37,7 +35,6 @@ type Context interface {
SetValueInMemory(data interface{}, paths ...string)
GetValueInMemory(paths ...string) (interface{}, bool)
DeleteValueInMemory(paths ...string)
Commit() error
MakeParameter(parameter string) (*value.Value, error)
Commit(ctx context.Context) error
StoreRef() *corev1.ObjectReference
}

View File

@ -7,4 +7,3 @@ The following packages need to be tested without external/tool dependencies, So
- github.com/kubevela/workflow/pkg/cue/model/sets
- github.com/kubevela/workflow/pkg/cue/process
- github.com/kubevela/workflow/pkg/cue/task
- github.com/kubevela/workflow/pkg/cue/packages

View File

@ -25,6 +25,8 @@ const (
ConfigFieldName = "config"
// ParameterFieldName is the keyword in CUE template to define users' input and the reference to the context parameter
ParameterFieldName = "parameter"
// ContextFieldName is the keyword in CUE template to define context
ContextFieldName = "context"
// ContextName is the name of context
ContextName = "name"
// ContextNamespace is the namespace of the app
@ -37,6 +39,8 @@ const (
ContextStepSessionID = "stepSessionID"
// ContextStepName is the name of the step
ContextStepName = "stepName"
// ContextStepGroupName is the name of the stepGroup
ContextStepGroupName = "stepGroupName"
// ContextSpanID is name for span id.
ContextSpanID = "spanID"
// OutputSecretName is used to store all secret names which are generated by cloud resource components

View File

@ -188,7 +188,7 @@ func strategyPatchHandle() interceptor {
}
}
paths := append(ctx.Pos(), labelStr(field.Label))
paths := append(ctx.Pos(), LabelStr(field.Label))
baseSubNode, err := lookUp(baseNode, paths...)
if err != nil {
if errors.Is(err, notFoundErr) {
@ -217,14 +217,14 @@ func strategyPatchHandle() interceptor {
case *ast.StructLit:
for _, elt := range v.Elts {
if fe, ok := elt.(*ast.Field); ok &&
labelStr(fe.Label) == labelStr(field.Label) {
LabelStr(fe.Label) == LabelStr(field.Label) {
fe.Value = field.Value
}
}
case *ast.File: // For the top level element
for _, decl := range v.Decls {
if fe, ok := decl.(*ast.Field); ok &&
labelStr(fe.Label) == labelStr(field.Label) {
LabelStr(fe.Label) == LabelStr(field.Label) {
fe.Value = field.Value
}
}
@ -281,7 +281,7 @@ func strategyUnify(base cue.Value, patch cue.Value, params *UnifyParams, patchOp
} else if params.PatchStrategy == StrategyJSONPatch {
return jsonPatch(base, patch.LookupPath(cue.ParsePath("operations")))
}
openBase, err := openListLit(base)
openBase, err := OpenListLit(base)
if err != nil {
return cue.Value{}, errors.Wrapf(err, "failed to open list it for merge")
}

View File

@ -70,7 +70,8 @@ func lookUp(node ast.Node, paths ...string) (ast.Node, error) {
return nil, notFoundErr
}
func lookUpAll(node ast.Node, paths ...string) []ast.Node {
// LookUpAll looks up all the nodes matching the given paths
func LookUpAll(node ast.Node, paths ...string) []ast.Node {
if len(paths) == 0 {
return []ast.Node{node}
}
@ -81,7 +82,7 @@ func lookUpAll(node ast.Node, paths ...string) []ast.Node {
for _, decl := range x.Decls {
nnode := lookField(decl, key)
if nnode != nil {
nodes = append(nodes, lookUpAll(nnode, paths[1:]...)...)
nodes = append(nodes, LookUpAll(nnode, paths[1:]...)...)
}
}
@ -89,13 +90,13 @@ func lookUpAll(node ast.Node, paths ...string) []ast.Node {
for _, elt := range x.Elts {
nnode := lookField(elt, key)
if nnode != nil {
nodes = append(nodes, lookUpAll(nnode, paths[1:]...)...)
nodes = append(nodes, LookUpAll(nnode, paths[1:]...)...)
}
}
case *ast.ListLit:
for index, elt := range x.Elts {
if strconv.Itoa(index) == key {
return lookUpAll(elt, paths[1:]...)
return LookUpAll(elt, paths[1:]...)
}
}
}
@ -136,7 +137,7 @@ func doBuiltinFunc(root ast.Node, pathSel ast.Expr, do func(values []ast.Node) (
if len(paths) == 0 {
return nil, errors.New("path resolve error")
}
values := lookUpAll(root, paths...)
values := LookUpAll(root, paths...)
return do(values)
}
@ -187,14 +188,15 @@ func peelCloseExpr(node ast.Node) ast.Node {
func lookField(node ast.Node, key string) ast.Node {
if field, ok := node.(*ast.Field); ok {
// Note: the trim here has side effect: "\(v)" will be trimmed to \(v), only used for comparing fields
if strings.Trim(labelStr(field.Label), `"`) == strings.Trim(key, `"`) {
if strings.Trim(LabelStr(field.Label), `"`) == strings.Trim(key, `"`) {
return field.Value
}
}
return nil
}
func labelStr(label ast.Label) string {
// LabelStr returns the string form of a label
func LabelStr(label ast.Label) string {
switch v := label.(type) {
case *ast.Ident:
return v.Name
@ -311,8 +313,9 @@ func OpenBaiscLit(val cue.Value) (*ast.File, error) {
return f, err
}
// OpenListLit makes the list literals in the value modifiable.
// nolint:staticcheck
func openListLit(val cue.Value) (*ast.File, error) {
func OpenListLit(val cue.Value) (*ast.File, error) {
f, err := ToFile(val.Syntax(cue.Docs(true), cue.ResolveReferences(true)))
if err != nil {
return nil, err
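The renames above export these AST helpers for use outside the package. A small sketch (not part of the diff) calling LookUpAll on parsed CUE, assuming only the signature shown; the CUE content is illustrative:

package main

import (
    "fmt"

    "cuelang.org/go/cue/parser"

    "github.com/kubevela/workflow/pkg/cue/model/sets"
)

func main() {
    f, err := parser.ParseFile("-", `spec: containers: [{name: "main"}]`, parser.ParseComments)
    if err != nil {
        panic(err)
    }
    // LookUpAll returns every AST node reachable at the given path.
    for _, node := range sets.LookUpAll(f, "spec", "containers") {
        fmt.Printf("%T\n", node)
    }
}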

View File

@ -257,13 +257,13 @@ bottom: _|_
val := cuecontext.New().BuildFile(f)
s, err := toString(val)
r.NoError(err)
r.Equal(s, `a: *10 | _
a1: int
b: *"foo" | _
b1: string
c: *true | _
c1: bool
arr: *[1, 2] | [...]
r.Equal(s, `a: *10 | _
a1: int
b: *"foo" | _
b1: string
c: *true | _
c1: bool
arr: *[1, 2] | [...]
top: _
bottom: _|_ // explicit error (_|_ literal) in source
`)

View File

@ -52,7 +52,7 @@ func (nwk *nodewalker) walk(node ast.Node) {
switch n := node.(type) {
case *ast.Field:
label := labelStr(n.Label)
label := LabelStr(n.Label)
if label == "" || strings.HasPrefix(label, "#") {
return
}
@ -63,7 +63,7 @@ func (nwk *nodewalker) walk(node ast.Node) {
for k, v := range oriTags {
nwk.tags[k] = v
}
nwk.pos = append(nwk.pos, labelStr(n.Label))
nwk.pos = append(nwk.pos, LabelStr(n.Label))
tags := findCommentTag(n.Comments())
for tk, tv := range tags {
nwk.tags[tk] = tv

View File

@ -19,348 +19,116 @@ package value
import (
"encoding/json"
"fmt"
"sort"
"strconv"
"strings"
"sync"
"cuelang.org/go/cue"
"cuelang.org/go/cue/ast"
"cuelang.org/go/cue/build"
"cuelang.org/go/cue/cuecontext"
"cuelang.org/go/cue/literal"
"cuelang.org/go/cue/format"
"cuelang.org/go/cue/parser"
"cuelang.org/go/cue/token"
"github.com/cue-exp/kubevelafix"
"github.com/pkg/errors"
"github.com/kubevela/pkg/cue/util"
"github.com/kubevela/workflow/pkg/cue/model/sets"
"github.com/kubevela/workflow/pkg/cue/packages"
"github.com/kubevela/workflow/pkg/stdlib"
workflowerrors "github.com/kubevela/workflow/pkg/errors"
)
// DefaultPackageHeader describes the default package header for CUE files.
const DefaultPackageHeader = "package main\n"
// Value is an object with cue.context and vendors
type Value struct {
v cue.Value
r *cue.Context
addImports func(instance *build.Instance) error
}
// String return value's cue format string
func (val *Value) String(opts ...func(node ast.Node) ast.Node) (string, error) {
opts = append(opts, sets.OptBytesToString)
return sets.ToString(val.v, opts...)
}
// Error return value's error information.
func (val *Value) Error() error {
v := val.CueValue()
if !v.Exists() {
return errors.New("empty value")
}
if err := val.v.Err(); err != nil {
return err
}
var gerr error
v.Walk(func(value cue.Value) bool {
if err := value.Eval().Err(); err != nil {
gerr = err
return false
}
return true
}, nil)
return gerr
}
// UnmarshalTo unmarshal value into golang object
func (val *Value) UnmarshalTo(x interface{}) error {
data, err := val.v.MarshalJSON()
if err != nil {
return err
}
return json.Unmarshal(data, x)
}
// NewValue new a value
func NewValue(s string, pd *packages.PackageDiscover, tagTempl string, opts ...func(*ast.File) error) (*Value, error) {
builder := &build.Instance{}
file, err := parser.ParseFile("-", s, parser.ParseComments)
if err != nil {
return nil, err
}
file = kubevelafix.Fix(file).(*ast.File)
for _, opt := range opts {
if err := opt(file); err != nil {
return nil, err
}
}
if err := builder.AddSyntax(file); err != nil {
return nil, err
}
return newValue(builder, pd, tagTempl)
}
// NewValueWithInstance new value with instance
func NewValueWithInstance(instance *build.Instance, pd *packages.PackageDiscover, tagTempl string) (*Value, error) {
return newValue(instance, pd, tagTempl)
}
func newValue(builder *build.Instance, pd *packages.PackageDiscover, tagTempl string) (*Value, error) {
addImports := func(inst *build.Instance) error {
if pd != nil {
pd.ImportBuiltinPackagesFor(inst)
}
if err := stdlib.AddImportsFor(inst, tagTempl); err != nil {
return err
}
return nil
}
if err := addImports(builder); err != nil {
return nil, err
}
r := cuecontext.New()
inst := r.BuildInstance(builder)
val := new(Value)
val.r = r
val.v = inst
val.addImports = addImports
// do not check val.Err() error here, because the value may be filled later
return val, nil
}
// AddFile add file to the instance
func AddFile(bi *build.Instance, filename string, src interface{}) error {
if filename == "" {
filename = "-"
}
file, err := parser.ParseFile(filename, src, parser.ParseComments)
file = kubevelafix.Fix(file).(*ast.File)
if err != nil {
return err
}
if err := bi.AddSyntax(file); err != nil {
return err
}
return nil
}
// TagFieldOrder add step tag.
func TagFieldOrder(root *ast.File) error {
i := 0
vs := &visitor{
r: map[string]struct{}{},
}
for _, decl := range root.Decls {
vs.addAttrForExpr(decl, &i)
}
return nil
}
// ProcessScript preprocess the script builtin function.
func ProcessScript(root *ast.File) error {
return sets.PreprocessBuiltinFunc(root, "script", func(values []ast.Node) (ast.Expr, error) {
for _, v := range values {
lit, ok := v.(*ast.BasicLit)
if ok {
src, err := literal.Unquote(lit.Value)
if err != nil {
return nil, errors.WithMessage(err, "unquote script value")
}
expr, err := parser.ParseExpr("-", src)
if err != nil {
return nil, errors.Errorf("script value(%s) is invalid CueLang", src)
}
return expr, nil
}
}
return nil, errors.New("script parameter error")
})
}
type visitor struct {
r map[string]struct{}
}
func (vs *visitor) done(name string) {
vs.r[name] = struct{}{}
}
func (vs *visitor) shouldDo(name string) bool {
_, ok := vs.r[name]
return !ok
}
func (vs *visitor) addAttrForExpr(node ast.Node, index *int) {
switch v := node.(type) {
case *ast.Comprehension:
st := v.Value.(*ast.StructLit)
for _, elt := range st.Elts {
vs.addAttrForExpr(elt, index)
}
case *ast.Field:
basic, ok := v.Label.(*ast.Ident)
if !ok {
return
}
if !vs.shouldDo(basic.Name) {
return
}
if v.Attrs == nil {
*index++
vs.done(basic.Name)
v.Attrs = []*ast.Attribute{
{Text: fmt.Sprintf("@step(%d)", *index)},
}
}
}
}
// MakeValue generate an value with same runtime
func (val *Value) MakeValue(s string) (*Value, error) {
builder := &build.Instance{}
file, err := parser.ParseFile("-", s, parser.ParseComments)
if err != nil {
return nil, err
}
if err := builder.AddSyntax(file); err != nil {
return nil, err
}
if err := val.addImports(builder); err != nil {
return nil, err
}
inst := val.r.BuildInstance(builder)
v := new(Value)
v.r = val.r
v.v = inst
v.addImports = val.addImports
if v.Error() != nil {
return nil, v.Error()
}
return v, nil
}
func (val *Value) makeValueWithFile(files ...*ast.File) (*Value, error) {
builder := &build.Instance{}
newFile := &ast.File{}
imports := map[string]*ast.ImportSpec{}
for _, f := range files {
for _, importSpec := range f.Imports {
if _, ok := imports[importSpec.Name.String()]; !ok {
imports[importSpec.Name.String()] = importSpec
}
}
newFile.Decls = append(newFile.Decls, f.Decls...)
}
for _, imp := range imports {
newFile.Imports = append(newFile.Imports, imp)
}
if err := builder.AddSyntax(newFile); err != nil {
return nil, err
}
if err := val.addImports(builder); err != nil {
return nil, err
}
inst := val.r.BuildInstance(builder)
v := new(Value)
v.r = val.r
v.v = inst
v.addImports = val.addImports
return v, nil
}
// FillRaw unify the value with the cue format string x at the given path.
func (val *Value) FillRaw(x string, paths ...string) error {
func FillRaw(val cue.Value, x string, paths ...string) (cue.Value, error) {
file, err := parser.ParseFile("-", x, parser.ParseComments)
if err != nil {
return err
return cue.Value{}, err
}
xInst := val.r.BuildFile(file)
v := val.v.FillPath(FieldPath(paths...), xInst)
xInst := val.Context().BuildFile(file)
v := val.FillPath(FieldPath(paths...), xInst)
if v.Err() != nil {
return v.Err()
return cue.Value{}, v.Err()
}
val.v = v
return nil
return v, nil
}
// FillValueByScript unify the value x at the given script path.
func (val *Value) FillValueByScript(x *Value, path string) error {
if !strings.Contains(path, "[") {
newV := val.v.FillPath(FieldPath(path), x.v)
if err := newV.Err(); err != nil {
return err
}
val.v = newV
func setValue(orig ast.Node, expr ast.Expr, selectors []cue.Selector) error {
if len(selectors) == 0 {
return nil
}
s, err := x.String()
if err != nil {
return err
}
return val.fillRawByScript(s, path)
}
func (val *Value) fillRawByScript(x string, path string) error {
a := newAssembler(x)
pathExpr, err := parser.ParseExpr("path", path)
if err != nil {
return errors.WithMessage(err, "parse path")
}
if err := a.installTo(pathExpr); err != nil {
return err
}
raw, err := val.String(sets.ListOpen)
if err != nil {
return err
}
v, err := val.MakeValue(raw + "\n" + a.v)
if err != nil {
return errors.WithMessage(err, "remake value")
}
if err := v.Error(); err != nil {
return err
}
*val = *v
return nil
}
// CueValue return cue.Value
func (val *Value) CueValue() cue.Value {
return val.v
}
// FillObject unify the value with object x at the given path.
func (val *Value) FillObject(x interface{}, paths ...string) error {
insert := x
if v, ok := x.(*Value); ok {
if v.r != val.r {
return errors.New("filled value not created with same Runtime")
key := selectors[0]
selectors = selectors[1:]
switch x := orig.(type) {
case *ast.ListLit:
if key.Type() != cue.IndexLabel {
return fmt.Errorf("invalid key type %s in list lit", key.Type())
}
insert = v.v
if len(selectors) == 0 {
for key.Index() >= len(x.Elts) {
x.Elts = append(x.Elts, ast.NewStruct())
}
x.Elts[key.Index()] = expr
return nil
}
return setValue(x.Elts[key.Index()], expr, selectors)
case *ast.StructLit:
if len(x.Elts) == 0 || (key.Type() == cue.StringLabel && len(sets.LookUpAll(x, key.String())) == 0) {
if len(selectors) == 0 {
x.Elts = append(x.Elts, &ast.Field{
Label: ast.NewString(key.String()),
Value: expr,
})
} else {
x.Elts = append(x.Elts, &ast.Field{
Label: ast.NewString(key.String()),
Value: ast.NewStruct(),
})
}
return setValue(x.Elts[len(x.Elts)-1].(*ast.Field).Value, expr, selectors)
}
for i := range x.Elts {
switch elem := x.Elts[i].(type) {
case *ast.Field:
if len(selectors) == 0 {
if key.Type() == cue.StringLabel && strings.Trim(sets.LabelStr(elem.Label), `"`) == strings.Trim(key.String(), `"`) {
x.Elts[i].(*ast.Field).Value = expr
return nil
}
}
if key.Type() == cue.StringLabel && strings.Trim(sets.LabelStr(elem.Label), `"`) == strings.Trim(key.String(), `"`) {
return setValue(x.Elts[i].(*ast.Field).Value, expr, selectors)
}
default:
return fmt.Errorf("not support type %T", elem)
}
}
default:
return fmt.Errorf("not support type %T", orig)
}
newV := val.v.FillPath(FieldPath(paths...), insert)
// do not check newV.Err() error here, because the value may be filled later
val.v = newV
return nil
}
// LookupValue reports the value at a path starting from val
func (val *Value) LookupValue(paths ...string) (*Value, error) {
v := val.v.LookupPath(FieldPath(paths...))
if !v.Exists() {
return nil, errors.Errorf("failed to lookup value: var(path=%s) not exist", strings.Join(paths, "."))
var syntaxLock sync.Mutex
// SetValueByScript set the value v at the given script path.
// nolint:staticcheck
func SetValueByScript(base, v cue.Value, path ...string) (cue.Value, error) {
cuepath := FieldPath(path...)
selectors := cuepath.Selectors()
syntaxLock.Lock()
node := base.Syntax(cue.ResolveReferences(true))
syntaxLock.Unlock()
if err := setValue(node, v.Syntax(cue.ResolveReferences(true)).(ast.Expr), selectors); err != nil {
return cue.Value{}, err
}
return &Value{
v: v,
r: val.r,
addImports: val.addImports,
}, nil
b, err := format.Node(node)
if err != nil {
return cue.Value{}, err
}
return base.Context().CompileBytes(b), nil
}
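A usage sketch (not part of the diff) of the now package-level helpers, assuming only the signatures shown in this file; field names like spec.replicas are illustrative:

package main

import (
    "fmt"

    "cuelang.org/go/cue/cuecontext"

    "github.com/kubevela/workflow/pkg/cue/model/value"
)

func main() {
    cuectx := cuecontext.New()
    base := cuectx.CompileString(`spec: replicas: 1`)

    // FillRaw is now a free function returning the filled cue.Value.
    filled, err := value.FillRaw(base, `"nginx"`, "spec", "image")
    if err != nil {
        fmt.Println(err)
        return
    }

    // SetValueByScript rewrites the underlying syntax instead of mutating *Value state.
    updated, err := value.SetValueByScript(filled, cuectx.CompileString("3"), "spec", "replicas")
    if err != nil {
        fmt.Println(err)
        return
    }
    fmt.Println(updated.LookupPath(value.FieldPath("spec", "replicas")))
}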
func isScript(content string) (bool, error) {
@ -390,44 +158,77 @@ func isSelector(node ast.Node) bool {
}
}
// LookupByScript reports the value by cue script.
func (val *Value) LookupByScript(script string) (*Value, error) {
// LookupValueByScript reports the value by cue script.
func LookupValueByScript(val cue.Value, script string) (cue.Value, error) {
var outputKey = "zz_output__"
script = strings.TrimSpace(script)
scriptFile, err := parser.ParseFile("-", script, parser.ParseComments)
if err != nil {
return nil, errors.WithMessage(err, "parse script")
return cue.Value{}, errors.WithMessage(err, "parse script")
}
isScriptPath, err := isScript(script)
if err != nil {
return nil, err
return cue.Value{}, err
}
if !isScriptPath {
return val.LookupValue(script)
v := val.LookupPath(cue.ParsePath(script))
if !v.Exists() {
return cue.Value{}, workflowerrors.LookUpNotFoundErr(script)
}
}
raw, err := val.String()
raw, err := util.ToString(val)
if err != nil {
return nil, err
return cue.Value{}, err
}
rawFile, err := parser.ParseFile("-", raw, parser.ParseComments)
if err != nil {
return nil, errors.WithMessage(err, "parse script")
return cue.Value{}, errors.WithMessage(err, "parse script")
}
behindKey(scriptFile, outputKey)
newV, err := val.makeValueWithFile(rawFile, scriptFile)
newV, err := makeValueWithFiles(rawFile, scriptFile)
if err != nil {
return nil, err
}
if newV.Error() != nil {
return nil, newV.Error()
return cue.Value{}, err
}
return newV.LookupValue(outputKey)
v := newV.LookupPath(cue.ParsePath(outputKey))
if !v.Exists() {
return cue.Value{}, workflowerrors.LookUpNotFoundErr(outputKey)
}
return v, nil
}
func makeValueWithFiles(files ...*ast.File) (cue.Value, error) {
builder := &build.Instance{}
newFile := &ast.File{}
imports := map[string]*ast.ImportSpec{}
for _, f := range files {
for _, importSpec := range f.Imports {
if _, ok := imports[importSpec.Name.String()]; !ok {
imports[importSpec.Name.String()] = importSpec
}
}
newFile.Decls = append(newFile.Decls, f.Decls...)
}
for _, imp := range imports {
newFile.Imports = append(newFile.Imports, imp)
}
if err := builder.AddSyntax(newFile); err != nil {
return cue.Value{}, err
}
v := cuecontext.New().BuildInstance(builder)
if v.Err() != nil {
return cue.Value{}, v.Err()
}
return v, nil
}
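A hedged usage sketch of LookupValueByScript and the helper above (the import path is an assumption; field names are illustrative). A plain selector is resolved with LookupPath, while a full CUE script is merged with the value and its zz_output__ result is returned:

package main

import (
	"fmt"

	"cuelang.org/go/cue/cuecontext"

	// assumed import path for the package this diff modifies
	"github.com/kubevela/workflow/pkg/cue/model/value"
)

func main() {
	v := cuecontext.New().CompileString(`parameter: {image: "nginx", replicas: 2}`)

	// "parameter.image" parses as a selector, so it is looked up directly.
	img, err := value.LookupValueByScript(v, "parameter.image")
	if err != nil {
		panic(err)
	}
	fmt.Println(img) // "nginx"
}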
func behindKey(file *ast.File, key string) {
@@ -464,284 +265,6 @@ func behindKey(file *ast.File, key string) {
}
type field struct {
Name string
Value *Value
no int64
}
// StepByList processes each item in the list.
func (val *Value) StepByList(handle func(name string, in *Value) (bool, error)) error {
iter, err := val.CueValue().List()
if err != nil {
return err
}
for iter.Next() {
stop, err := handle(iter.Label(), &Value{
v: iter.Value(),
r: val.r,
addImports: val.addImports,
})
if err != nil {
return err
}
if stop {
return nil
}
}
return nil
}
// StepByFields processes the fields in order
func (val *Value) StepByFields(handle func(name string, in *Value) (bool, error)) error {
iter := steps(val)
for iter.next() {
iter.do(handle)
}
return iter.err
}
type stepsIterator struct {
queue []*field
index int
target *Value
err error
stopped bool
}
func steps(v *Value) *stepsIterator {
return &stepsIterator{
target: v,
}
}
func (iter *stepsIterator) next() bool {
if iter.stopped {
return false
}
if iter.err != nil {
return false
}
if iter.queue != nil {
iter.index++
}
iter.assemble()
return iter.index <= len(iter.queue)-1
}
func (iter *stepsIterator) assemble() {
filters := map[string]struct{}{}
for _, item := range iter.queue {
filters[item.Name] = struct{}{}
}
cueIter, err := iter.target.v.Fields(cue.Definitions(true), cue.Hidden(true), cue.All())
if err != nil {
iter.err = err
return
}
var addFields []*field
for cueIter.Next() {
val := cueIter.Value()
name := cueIter.Label()
if val.IncompleteKind() == cue.TopKind {
continue
}
attr := val.Attribute("step")
no, err := attr.Int(0)
if err != nil {
no = 100
if name == "#do" || name == "#provider" {
no = 0
}
}
if _, ok := filters[name]; !ok {
addFields = append(addFields, &field{
Name: name,
no: no,
})
}
}
suffixItems := addFields
suffixItems = append(suffixItems, iter.queue[iter.index:]...)
sort.Sort(sortFields(suffixItems))
iter.queue = append(iter.queue[:iter.index], suffixItems...)
}
func (iter *stepsIterator) value() *Value {
v := iter.target.v.LookupPath(FieldPath(iter.name()))
return &Value{
r: iter.target.r,
v: v,
addImports: iter.target.addImports,
}
}
func (iter *stepsIterator) name() string {
return iter.queue[iter.index].Name
}
func (iter *stepsIterator) do(handle func(name string, in *Value) (bool, error)) {
if iter.err != nil {
return
}
v := iter.value()
stopped, err := handle(iter.name(), v)
if err != nil {
iter.err = err
return
}
iter.stopped = stopped
if !isDef(iter.name()) {
if err := iter.target.FillObject(v, iter.name()); err != nil {
iter.err = err
return
}
}
}
type sortFields []*field
func (sf sortFields) Len() int {
return len(sf)
}
func (sf sortFields) Less(i, j int) bool {
return sf[i].no < sf[j].no
}
func (sf sortFields) Swap(i, j int) {
sf[i], sf[j] = sf[j], sf[i]
}
// Field returns the cue value corresponding to the specified field
func (val *Value) Field(label string) (cue.Value, error) {
v := val.v.LookupPath(cue.ParsePath(label))
if !v.Exists() {
return v, errors.Errorf("label %s not found", label)
}
if v.IncompleteKind() == cue.BottomKind {
return v, errors.Errorf("label %s's value not computed", label)
}
return v, nil
}
// GetString gets the string value at a path starting from val.
func (val *Value) GetString(paths ...string) (string, error) {
v, err := val.LookupValue(paths...)
if err != nil {
return "", err
}
return v.CueValue().String()
}
// GetStringSlice gets a string slice at a path starting from val
func (val *Value) GetStringSlice(paths ...string) ([]string, error) {
v, err := val.LookupValue(paths...)
if err != nil {
return nil, err
}
var s []string
err = v.UnmarshalTo(&s)
return s, err
}
// GetInt64 gets the int64 value at a path starting from val.
func (val *Value) GetInt64(paths ...string) (int64, error) {
v, err := val.LookupValue(paths...)
if err != nil {
return 0, err
}
return v.CueValue().Int64()
}
// GetBool gets the bool value at a path starting from val.
func (val *Value) GetBool(paths ...string) (bool, error) {
v, err := val.LookupValue(paths...)
if err != nil {
return false, err
}
return v.CueValue().Bool()
}
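The getters above are thin wrappers over path lookups; a standalone equivalent written against the plain cuelang.org/go API (values are illustrative):

package main

import (
	"fmt"

	"cuelang.org/go/cue"
	"cuelang.org/go/cue/cuecontext"
)

func main() {
	v := cuecontext.New().CompileString(`spec: {image: "nginx", replicas: 3, paused: false}`)

	img, _ := v.LookupPath(cue.ParsePath("spec.image")).String()
	replicas, _ := v.LookupPath(cue.ParsePath("spec.replicas")).Int64()
	paused, _ := v.LookupPath(cue.ParsePath("spec.paused")).Bool()

	fmt.Println(img, replicas, paused) // nginx 3 false
}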
// OpenCompleteValue makes the complete value modifiable.
func (val *Value) OpenCompleteValue() error {
newS, err := sets.OpenBaiscLit(val.CueValue())
if err != nil {
return err
}
v := cuecontext.New().BuildFile(newS)
val.v = v
return nil
}
func isDef(s string) bool {
return strings.HasPrefix(s, "#")
}
// assembler puts the value under the parsed expression as a path.
type assembler struct {
v string
}
func newAssembler(v string) *assembler {
return &assembler{v: v}
}
func (a *assembler) fill2Path(p string) {
a.v = fmt.Sprintf("%s: %s", p, a.v)
}
func (a *assembler) fill2Array(i int) {
s := ""
for j := 0; j < i; j++ {
s += "_,"
}
if strings.Contains(a.v, ":") && !strings.HasPrefix(a.v, "{") {
a.v = fmt.Sprintf("{ %s }", a.v)
}
a.v = fmt.Sprintf("[%s%s]", s, strings.TrimSpace(a.v))
}
func (a *assembler) installTo(expr ast.Expr) error {
switch v := expr.(type) {
case *ast.IndexExpr:
if err := a.installTo(v.Index); err != nil {
return err
}
if err := a.installTo(v.X); err != nil {
return err
}
case *ast.SelectorExpr:
if ident, ok := v.Sel.(*ast.Ident); ok {
if err := a.installTo(ident); err != nil {
return err
}
} else {
return errors.New("invalid sel type in selector")
}
if err := a.installTo(v.X); err != nil {
return err
}
case *ast.Ident:
a.fill2Path(v.String())
case *ast.BasicLit:
switch v.Kind {
case token.STRING:
a.fill2Path(v.Value)
case token.INT:
idex, _ := strconv.Atoi(v.Value)
a.fill2Array(idex)
default:
return errors.New("invalid path")
}
default:
return errors.New("invalid path")
}
return nil
}
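To make the assembler concrete, an in-package sketch (not part of the diff; the package name is assumed): installing the value x: 1 under the parsed path a.b[2] yields a: b: [_,_,{ x: 1 }].

// assembler_sketch_test.go (same package as the code above)
package value

import (
	"fmt"
	"testing"

	"cuelang.org/go/cue/parser"
)

func TestAssemblerSketch(t *testing.T) {
	expr, err := parser.ParseExpr("-", "a.b[2]")
	if err != nil {
		t.Fatal(err)
	}
	a := newAssembler("x: 1")
	if err := a.installTo(expr); err != nil {
		t.Fatal(err)
	}
	fmt.Println(a.v) // a: b: [_,_,{ x: 1 }]
}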
// makePath creates a Path from a sequence of strings.
func makePath(paths ...string) string {
mergedPath := ""
@@ -786,3 +309,12 @@ func FieldPath(paths ...string) cue.Path {
}
return cue.ParsePath(s)
}
// UnmarshalTo unmarshals the cue value into a Go object
func UnmarshalTo(val cue.Value, x interface{}) error {
data, err := val.MarshalJSON()
if err != nil {
return err
}
return json.Unmarshal(data, x)
}

File diff suppressed because it is too large


@@ -1,675 +0,0 @@
/*
Copyright 2022 The KubeVela Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package cue
import (
"context"
"errors"
"strings"
"time"
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
"cuelang.org/go/cue/build"
"github.com/google/go-cmp/cmp"
admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
appsv1 "k8s.io/api/apps/v1"
batchv1 "k8s.io/api/batch/v1"
certificatesv1beta1 "k8s.io/api/certificates/v1beta1"
coordinationv1 "k8s.io/api/coordination/v1"
corev1 "k8s.io/api/core/v1"
discoveryv1beta1 "k8s.io/api/discovery/v1beta1"
networkingv1 "k8s.io/api/networking/v1"
policyv1beta1 "k8s.io/api/policy/v1beta1"
rbacv1 "k8s.io/api/rbac/v1"
crdv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/utils/pointer"
"github.com/kubevela/workflow/pkg/cue/model"
"github.com/kubevela/workflow/pkg/utils"
)
var _ = Describe("Package discovery resources for definition from K8s APIServer", func() {
PIt("check that all built-in k8s resource are registered", func() {
var localSchemeBuilder = runtime.SchemeBuilder{
admissionregistrationv1.AddToScheme,
appsv1.AddToScheme,
batchv1.AddToScheme,
certificatesv1beta1.AddToScheme,
coordinationv1.AddToScheme,
corev1.AddToScheme,
discoveryv1beta1.AddToScheme,
networkingv1.AddToScheme,
policyv1beta1.AddToScheme,
rbacv1.AddToScheme,
}
var localScheme = runtime.NewScheme()
err := localSchemeBuilder.AddToScheme(localScheme)
Expect(err).Should(BeNil())
types := localScheme.AllKnownTypes()
for typ := range types {
if strings.HasSuffix(typ.Kind, "List") {
continue
}
if strings.HasSuffix(typ.Kind, "Options") {
continue
}
switch typ.Kind {
case "WatchEvent":
continue
case "APIGroup", "APIVersions":
continue
case "RangeAllocation", "ComponentStatus", "Status":
continue
case "SerializedReference", "EndpointSlice":
continue
case "PodStatusResult", "EphemeralContainers":
continue
}
Expect(pd.Exist(metav1.GroupVersionKind{
Group: typ.Group,
Version: typ.Version,
Kind: typ.Kind,
})).Should(BeTrue(), typ.String())
}
})
// nolint:staticcheck
PIt("discovery built-in k8s resource with kube prefix", func() {
By("test ingress in kube package")
bi := build.NewContext().NewInstance("", nil)
err := bi.AddFile("-", `
import (
kube "kube/networking.k8s.io/v1beta1"
)
output: kube.#Ingress
output: {
apiVersion: "networking.k8s.io/v1beta1"
kind: "Ingress"
metadata: name: "myapp"
spec: {
rules: [{
host: parameter.domain
http: {
paths: [
for k, v in parameter.http {
path: k
backend: {
serviceName: "myname"
servicePort: v
}
},
]
}
}]
}
}
parameter: {
domain: "abc.com"
http: {
"/": 80
}
}`)
Expect(err).ToNot(HaveOccurred())
inst, err := pd.ImportPackagesAndBuildInstance(bi)
Expect(err).Should(BeNil())
base, err := model.NewBase(inst.Lookup("output"))
Expect(err).Should(BeNil())
data, err := base.Unstructured()
Expect(err).Should(BeNil())
Expect(cmp.Diff(data, &unstructured.Unstructured{Object: map[string]interface{}{
"kind": "Ingress",
"apiVersion": "networking.k8s.io/v1beta1",
"metadata": map[string]interface{}{"name": "myapp"},
"spec": map[string]interface{}{
"rules": []interface{}{
map[string]interface{}{
"host": "abc.com",
"http": map[string]interface{}{
"paths": []interface{}{
map[string]interface{}{
"path": "/",
"backend": map[string]interface{}{
"serviceName": "myname",
"servicePort": int64(80),
}}}}}}}},
})).Should(BeEquivalentTo(""))
By("test Invalid Import path")
bi = build.NewContext().NewInstance("", nil)
err = bi.AddFile("-", `
import (
kube "kube/networking.k8s.io/v1"
)
output: kube.#Deployment
output: {
metadata: {
"name": parameter.name
}
spec: template: spec: {
containers: [{
name:"invalid-path",
image: parameter.image
}]
}
}
parameter: {
name: "myapp"
image: "nginx"
}`)
Expect(err).Should(BeNil())
inst, err = pd.ImportPackagesAndBuildInstance(bi)
Expect(err).Should(BeNil())
_, err = model.NewBase(inst.Lookup("output"))
Expect(err).ShouldNot(BeNil())
Expect(err.Error()).Should(Equal("_|_ // undefined field \"#Deployment\""))
By("test Deployment in kube package")
bi = build.NewContext().NewInstance("", nil)
err = bi.AddFile("-", `
import (
kube "kube/apps/v1"
)
output: kube.#Deployment
output: {
metadata: {
"name": parameter.name
}
spec: template: spec: {
containers: [{
name:"test",
image: parameter.image
}]
}
}
parameter: {
name: "myapp"
image: "nginx"
}`)
Expect(err).ShouldNot(BeNil())
inst, err = pd.ImportPackagesAndBuildInstance(bi)
Expect(err).Should(BeNil())
base, err = model.NewBase(inst.Lookup("output"))
Expect(err).Should(BeNil())
data, err = base.Unstructured()
Expect(err).Should(BeNil())
Expect(cmp.Diff(data, &unstructured.Unstructured{Object: map[string]interface{}{
"kind": "Deployment",
"apiVersion": "apps/v1",
"metadata": map[string]interface{}{"name": "myapp"},
"spec": map[string]interface{}{
"selector": map[string]interface{}{},
"template": map[string]interface{}{
"spec": map[string]interface{}{
"containers": []interface{}{
map[string]interface{}{
"name": "test",
"image": "nginx"}}}}}},
})).Should(BeEquivalentTo(""))
By("test Secret in kube package")
bi = build.NewContext().NewInstance("", nil)
err = bi.AddFile("-", `
import (
kube "kube/v1"
)
output: kube.#Secret
output: {
metadata: {
"name": parameter.name
}
type:"kubevela"
}
parameter: {
name: "myapp"
}`)
Expect(err).Should(BeNil())
inst, err = pd.ImportPackagesAndBuildInstance(bi)
Expect(err).Should(BeNil())
base, err = model.NewBase(inst.Lookup("output"))
Expect(err).Should(BeNil())
data, err = base.Unstructured()
Expect(err).Should(BeNil())
Expect(cmp.Diff(data, &unstructured.Unstructured{Object: map[string]interface{}{
"kind": "Secret",
"apiVersion": "v1",
"metadata": map[string]interface{}{"name": "myapp"},
"type": "kubevela"}})).Should(BeEquivalentTo(""))
By("test Service in kube package")
bi = build.NewContext().NewInstance("", nil)
err = bi.AddFile("-", `
import (
kube "kube/v1"
)
output: kube.#Service
output: {
metadata: {
"name": parameter.name
}
spec: type: "ClusterIP",
}
parameter: {
name: "myapp"
}`)
Expect(err).Should(BeNil())
inst, err = pd.ImportPackagesAndBuildInstance(bi)
Expect(err).Should(BeNil())
base, err = model.NewBase(inst.Lookup("output"))
Expect(err).Should(BeNil())
data, err = base.Unstructured()
Expect(err).Should(BeNil())
Expect(cmp.Diff(data, &unstructured.Unstructured{Object: map[string]interface{}{
"kind": "Service",
"apiVersion": "v1",
"metadata": map[string]interface{}{"name": "myapp"},
"spec": map[string]interface{}{
"type": "ClusterIP"}},
})).Should(BeEquivalentTo(""))
Expect(pd.Exist(metav1.GroupVersionKind{
Group: "",
Version: "v1",
Kind: "Service",
})).Should(Equal(true))
By("Check newly added CRD refreshed and could be used in CUE package")
crd1 := crdv1.CustomResourceDefinition{
ObjectMeta: metav1.ObjectMeta{
Name: "foo.example.com",
},
Spec: crdv1.CustomResourceDefinitionSpec{
Group: "example.com",
Names: crdv1.CustomResourceDefinitionNames{
Kind: "Foo",
ListKind: "FooList",
Plural: "foo",
Singular: "foo",
},
Versions: []crdv1.CustomResourceDefinitionVersion{{
Name: "v1",
Served: true,
Storage: true,
Subresources: &crdv1.CustomResourceSubresources{Status: &crdv1.CustomResourceSubresourceStatus{}},
Schema: &crdv1.CustomResourceValidation{
OpenAPIV3Schema: &crdv1.JSONSchemaProps{
Type: "object",
Properties: map[string]crdv1.JSONSchemaProps{
"spec": {
Type: "object",
XPreserveUnknownFields: pointer.BoolPtr(true),
Properties: map[string]crdv1.JSONSchemaProps{
"key": {Type: "string"},
}},
"status": {
Type: "object",
XPreserveUnknownFields: pointer.BoolPtr(true),
Properties: map[string]crdv1.JSONSchemaProps{
"key": {Type: "string"},
"app-hash": {Type: "string"},
}}}}}},
},
Scope: crdv1.NamespaceScoped,
},
}
Expect(k8sClient.Create(context.Background(), &crd1)).Should(SatisfyAny(BeNil(), &utils.AlreadyExistMatcher{}))
Expect(pd.Exist(metav1.GroupVersionKind{
Group: "example.com",
Version: "v1",
Kind: "Foo",
})).Should(Equal(false))
By("test new added CRD in kube package")
Eventually(func() error {
if err := pd.RefreshKubePackagesFromCluster(); err != nil {
return err
}
if !pd.Exist(metav1.GroupVersionKind{
Group: "example.com",
Version: "v1",
Kind: "Foo",
}) {
return errors.New("crd(example.com/v1.Foo) not register to openAPI")
}
return nil
}, time.Second*30, time.Millisecond*300).Should(BeNil())
bi = build.NewContext().NewInstance("", nil)
err = bi.AddFile("-", `
import (
kv1 "kube/example.com/v1"
)
output: kv1.#Foo
output: {
spec: key: "test1"
status: key: "test2"
}
`)
Expect(err).Should(BeNil())
inst, err = pd.ImportPackagesAndBuildInstance(bi)
Expect(err).Should(BeNil())
base, err = model.NewBase(inst.Lookup("output"))
Expect(err).Should(BeNil())
data, err = base.Unstructured()
Expect(err).Should(BeNil())
Expect(cmp.Diff(data, &unstructured.Unstructured{Object: map[string]interface{}{
"kind": "Foo",
"apiVersion": "example.com/v1",
"spec": map[string]interface{}{
"key": "test1"},
"status": map[string]interface{}{
"key": "test2"}},
})).Should(BeEquivalentTo(""))
})
// nolint:staticcheck
PIt("discovery built-in k8s resource with third-party path", func() {
By("test ingress in kube package")
bi := build.NewContext().NewInstance("", nil)
err := bi.AddFile("-", `
import (
network "k8s.io/networking/v1beta1"
)
output: network.#Ingress
output: {
metadata: name: "myapp"
spec: {
rules: [{
host: parameter.domain
http: {
paths: [
for k, v in parameter.http {
path: k
backend: {
serviceName: "myname"
servicePort: v
}
},
]
}
}]
}
}
parameter: {
domain: "abc.com"
http: {
"/": 80
}
}`)
Expect(err).ToNot(HaveOccurred())
inst, err := pd.ImportPackagesAndBuildInstance(bi)
Expect(err).Should(BeNil())
base, err := model.NewBase(inst.Lookup("output"))
Expect(err).Should(BeNil())
data, err := base.Unstructured()
Expect(err).Should(BeNil())
Expect(cmp.Diff(data, &unstructured.Unstructured{Object: map[string]interface{}{
"kind": "Ingress",
"apiVersion": "networking.k8s.io/v1beta1",
"metadata": map[string]interface{}{"name": "myapp"},
"spec": map[string]interface{}{
"rules": []interface{}{
map[string]interface{}{
"host": "abc.com",
"http": map[string]interface{}{
"paths": []interface{}{
map[string]interface{}{
"path": "/",
"backend": map[string]interface{}{
"serviceName": "myname",
"servicePort": int64(80),
}}}}}}}},
})).Should(BeEquivalentTo(""))
By("test Invalid Import path")
bi = build.NewContext().NewInstance("", nil)
err = bi.AddFile("-", `
import (
"k8s.io/networking/v1"
)
output: v1.#Deployment
output: {
metadata: {
"name": parameter.name
}
spec: template: spec: {
containers: [{
name:"invalid-path",
image: parameter.image
}]
}
}
parameter: {
name: "myapp"
image: "nginx"
}`)
Expect(err).Should(BeNil())
inst, err = pd.ImportPackagesAndBuildInstance(bi)
Expect(err).Should(BeNil())
_, err = model.NewBase(inst.Lookup("output"))
Expect(err).ShouldNot(BeNil())
Expect(err.Error()).Should(Equal("_|_ // undefined field \"#Deployment\""))
By("test Deployment in kube package")
bi = build.NewContext().NewInstance("", nil)
err = bi.AddFile("-", `
import (
apps "k8s.io/apps/v1"
)
output: apps.#Deployment
output: {
metadata: {
"name": parameter.name
}
spec: template: spec: {
containers: [{
name:"test",
image: parameter.image
}]
}
}
parameter: {
name: "myapp"
image: "nginx"
}`)
Expect(err).Should(BeNil())
inst, err = pd.ImportPackagesAndBuildInstance(bi)
Expect(err).Should(BeNil())
base, err = model.NewBase(inst.Lookup("output"))
Expect(err).Should(BeNil())
data, err = base.Unstructured()
Expect(err).Should(BeNil())
Expect(cmp.Diff(data, &unstructured.Unstructured{Object: map[string]interface{}{
"kind": "Deployment",
"apiVersion": "apps/v1",
"metadata": map[string]interface{}{"name": "myapp"},
"spec": map[string]interface{}{
"selector": map[string]interface{}{},
"template": map[string]interface{}{
"spec": map[string]interface{}{
"containers": []interface{}{
map[string]interface{}{
"name": "test",
"image": "nginx"}}}}}},
})).Should(BeEquivalentTo(""))
By("test Secret in kube package")
bi = build.NewContext().NewInstance("", nil)
err = bi.AddFile("-", `
import (
"k8s.io/core/v1"
)
output: v1.#Secret
output: {
metadata: {
"name": parameter.name
}
type:"kubevela"
}
parameter: {
name: "myapp"
}`)
Expect(err).Should(BeNil())
inst, err = pd.ImportPackagesAndBuildInstance(bi)
Expect(err).Should(BeNil())
base, err = model.NewBase(inst.Lookup("output"))
Expect(err).Should(BeNil())
data, err = base.Unstructured()
Expect(err).Should(BeNil())
Expect(cmp.Diff(data, &unstructured.Unstructured{Object: map[string]interface{}{
"kind": "Secret",
"apiVersion": "v1",
"metadata": map[string]interface{}{"name": "myapp"},
"type": "kubevela"}})).Should(BeEquivalentTo(""))
By("test Service in kube package")
bi = build.NewContext().NewInstance("", nil)
err = bi.AddFile("-", `
import (
"k8s.io/core/v1"
)
output: v1.#Service
output: {
metadata: {
"name": parameter.name
}
spec: type: "ClusterIP",
}
parameter: {
name: "myapp"
}`)
Expect(err).Should(BeNil())
inst, err = pd.ImportPackagesAndBuildInstance(bi)
Expect(err).Should(BeNil())
base, err = model.NewBase(inst.Lookup("output"))
Expect(err).Should(BeNil())
data, err = base.Unstructured()
Expect(err).Should(BeNil())
Expect(cmp.Diff(data, &unstructured.Unstructured{Object: map[string]interface{}{
"kind": "Service",
"apiVersion": "v1",
"metadata": map[string]interface{}{"name": "myapp"},
"spec": map[string]interface{}{
"type": "ClusterIP"}},
})).Should(BeEquivalentTo(""))
Expect(pd.Exist(metav1.GroupVersionKind{
Group: "",
Version: "v1",
Kind: "Service",
})).Should(Equal(true))
By("Check newly added CRD refreshed and could be used in CUE package")
crd1 := crdv1.CustomResourceDefinition{
ObjectMeta: metav1.ObjectMeta{
Name: "bar.example.com",
},
Spec: crdv1.CustomResourceDefinitionSpec{
Group: "example.com",
Names: crdv1.CustomResourceDefinitionNames{
Kind: "Bar",
ListKind: "BarList",
Plural: "bar",
Singular: "bar",
},
Versions: []crdv1.CustomResourceDefinitionVersion{{
Name: "v1",
Served: true,
Storage: true,
Subresources: &crdv1.CustomResourceSubresources{Status: &crdv1.CustomResourceSubresourceStatus{}},
Schema: &crdv1.CustomResourceValidation{
OpenAPIV3Schema: &crdv1.JSONSchemaProps{
Type: "object",
Properties: map[string]crdv1.JSONSchemaProps{
"spec": {
Type: "object",
XPreserveUnknownFields: pointer.BoolPtr(true),
Properties: map[string]crdv1.JSONSchemaProps{
"key": {Type: "string"},
}},
"status": {
Type: "object",
XPreserveUnknownFields: pointer.BoolPtr(true),
Properties: map[string]crdv1.JSONSchemaProps{
"key": {Type: "string"},
"app-hash": {Type: "string"},
}}}}}},
},
Scope: crdv1.NamespaceScoped,
},
}
Expect(k8sClient.Create(context.Background(), &crd1)).Should(SatisfyAny(BeNil(), &utils.AlreadyExistMatcher{}))
Expect(pd.Exist(metav1.GroupVersionKind{
Group: "example.com",
Version: "v1",
Kind: "Bar",
})).Should(Equal(false))
By("test new added CRD in kube package")
Eventually(func() error {
if err := pd.RefreshKubePackagesFromCluster(); err != nil {
return err
}
if !pd.Exist(metav1.GroupVersionKind{
Group: "example.com",
Version: "v1",
Kind: "Bar",
}) {
return errors.New("crd(example.com/v1.Bar) not register to openAPI")
}
return nil
}, time.Second*30, time.Millisecond*300).Should(BeNil())
bi = build.NewContext().NewInstance("", nil)
err = bi.AddFile("-", `
import (
ev1 "example.com/v1"
)
output: ev1.#Bar
output: {
spec: key: "test1"
status: key: "test2"
}
`)
Expect(err).Should(BeNil())
inst, err = pd.ImportPackagesAndBuildInstance(bi)
Expect(err).Should(BeNil())
base, err = model.NewBase(inst.Lookup("output"))
Expect(err).Should(BeNil())
data, err = base.Unstructured()
Expect(err).Should(BeNil())
Expect(cmp.Diff(data, &unstructured.Unstructured{Object: map[string]interface{}{
"kind": "Bar",
"apiVersion": "example.com/v1",
"spec": map[string]interface{}{
"key": "test1"},
"status": map[string]interface{}{
"key": "test2"}},
})).Should(BeEquivalentTo(""))
})
})


@@ -1,465 +0,0 @@
/*
Copyright 2022 The KubeVela Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package packages
import (
"fmt"
"path/filepath"
"strings"
"sync"
"time"
"cuelang.org/go/cue"
"cuelang.org/go/cue/ast"
"cuelang.org/go/cue/build"
"cuelang.org/go/cue/cuecontext"
"cuelang.org/go/cue/parser"
"cuelang.org/go/cue/token"
"cuelang.org/go/encoding/jsonschema"
"github.com/pkg/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/serializer"
clientgoscheme "k8s.io/client-go/kubernetes/scheme"
"k8s.io/client-go/rest"
"github.com/kubevela/workflow/pkg/stdlib"
)
const (
// BuiltinPackageDomain specifies the domain of the built-in package
BuiltinPackageDomain = "kube"
// K8sResourcePrefix indicates that the definition comes from Kubernetes
K8sResourcePrefix = "io_k8s_api_"
// ParseJSONSchemaErr describes the error that occurs when cue parses json
ParseJSONSchemaErr ParseErrType = "parse json schema of k8s crds error"
)
// PackageDiscover defines the inner CUE packages loaded from K8s cluster
type PackageDiscover struct {
velaBuiltinPackages []*build.Instance
pkgKinds map[string][]VersionKind
mutex sync.RWMutex
client *rest.RESTClient
}
// VersionKind contains the resource metadata and reference name
type VersionKind struct {
DefinitionName string
APIVersion string
Kind string
}
// ParseErrType represents the type of CUEParseError
type ParseErrType string
// CUEParseError describes an error that occurs when CUE parsing fails
type CUEParseError struct {
err error
errType ParseErrType
}
// Error implements the Error interface.
func (cueErr CUEParseError) Error() string {
return fmt.Sprintf("%s: %s", cueErr.errType, cueErr.err.Error())
}
// IsCUEParseErr returns true if the specified error is CUEParseError type.
func IsCUEParseErr(err error) bool {
return errors.As(err, &CUEParseError{})
}
// NewPackageDiscover will create a PackageDiscover client with the K8s config file.
func NewPackageDiscover(config *rest.Config) (*PackageDiscover, error) {
client, err := getClusterOpenAPIClient(config)
if err != nil {
return nil, err
}
pd := &PackageDiscover{
client: client,
pkgKinds: make(map[string][]VersionKind),
}
if err = pd.RefreshKubePackagesFromCluster(); err != nil {
return pd, err
}
return pd, nil
}
// ImportBuiltinPackagesFor will add KubeVela built-in packages into your CUE instance
func (pd *PackageDiscover) ImportBuiltinPackagesFor(bi *build.Instance) {
pd.mutex.RLock()
defer pd.mutex.RUnlock()
bi.Imports = append(bi.Imports, pd.velaBuiltinPackages...)
}
// ImportPackagesAndBuildInstance imports the built-in packages and builds the cue template together to avoid data races
// nolint:staticcheck
func (pd *PackageDiscover) ImportPackagesAndBuildInstance(bi *build.Instance) (inst *cue.Instance, err error) {
var r cue.Runtime
if pd == nil {
return r.Build(bi)
}
pd.ImportBuiltinPackagesFor(bi)
if err := stdlib.AddImportsFor(bi, ""); err != nil {
return nil, err
}
pd.mutex.Lock()
defer pd.mutex.Unlock()
return r.Build(bi)
}
// ImportPackagesAndBuildValue imports the built-in packages and builds the cue template together to avoid data races
func (pd *PackageDiscover) ImportPackagesAndBuildValue(bi *build.Instance) (val cue.Value, err error) {
cuectx := cuecontext.New()
if pd == nil {
return cuectx.BuildInstance(bi), nil
}
pd.ImportBuiltinPackagesFor(bi)
if err := stdlib.AddImportsFor(bi, ""); err != nil {
return cue.Value{}, err
}
pd.mutex.Lock()
defer pd.mutex.Unlock()
return cuectx.BuildInstance(bi), nil
}
// ListPackageKinds lists packages and their kinds
func (pd *PackageDiscover) ListPackageKinds() map[string][]VersionKind {
pd.mutex.RLock()
defer pd.mutex.RUnlock()
return pd.pkgKinds
}
// RefreshKubePackagesFromCluster will use the K8s client to load/refresh all K8s OpenAPI schemas as reference kube packages for use in templates
func (pd *PackageDiscover) RefreshKubePackagesFromCluster() error {
return nil
// body, err := pd.client.Get().AbsPath("/openapi/v2").Do(context.Background()).Raw()
// if err != nil {
// return err
// }
// return pd.addKubeCUEPackagesFromCluster(string(body))
}
// Exist checks if the GVK exists in the built-in packages
func (pd *PackageDiscover) Exist(gvk metav1.GroupVersionKind) bool {
dgvk := convert2DGVK(gvk)
// package name equals to importPath
importPath := genStandardPkgName(dgvk)
pd.mutex.RLock()
defer pd.mutex.RUnlock()
pkgKinds, ok := pd.pkgKinds[importPath]
if !ok {
pkgKinds = pd.pkgKinds[genOpenPkgName(dgvk)]
}
for _, v := range pkgKinds {
if v.Kind == dgvk.Kind {
return true
}
}
return false
}
// mount will mount the new parsed package into PackageDiscover built-in packages
func (pd *PackageDiscover) mount(pkg *pkgInstance, pkgKinds []VersionKind) {
pd.mutex.Lock()
defer pd.mutex.Unlock()
if pkgKinds == nil {
pkgKinds = []VersionKind{}
}
for i, p := range pd.velaBuiltinPackages {
if p.ImportPath == pkg.ImportPath {
pd.pkgKinds[pkg.ImportPath] = pkgKinds
pd.velaBuiltinPackages[i] = pkg.Instance
return
}
}
pd.pkgKinds[pkg.ImportPath] = pkgKinds
pd.velaBuiltinPackages = append(pd.velaBuiltinPackages, pkg.Instance)
}
func (pd *PackageDiscover) pkgBuild(packages map[string]*pkgInstance, pkgName string,
dGVK domainGroupVersionKind, def string, kubePkg *pkgInstance, groupKinds map[string][]VersionKind) error {
pkg, ok := packages[pkgName]
if !ok {
pkg = newPackage(pkgName)
pkg.Imports = []*build.Instance{kubePkg.Instance}
}
mykinds := groupKinds[pkgName]
mykinds = append(mykinds, VersionKind{
APIVersion: dGVK.APIVersion,
Kind: dGVK.Kind,
DefinitionName: "#" + dGVK.Kind,
})
file, err := parser.ParseFile(dGVK.reverseString(), def)
if err != nil {
return err
}
if err := pkg.AddSyntax(file); err != nil {
return err
}
packages[pkgName] = pkg
groupKinds[pkgName] = mykinds
return nil
}
func (pd *PackageDiscover) addKubeCUEPackagesFromCluster(apiSchema string) error {
file, err := parser.ParseFile("-", apiSchema)
if err != nil {
return err
}
oaInst := cuecontext.New().BuildFile(file)
if err != nil {
return err
}
dgvkMapper := make(map[string]domainGroupVersionKind)
pathValue := oaInst.LookupPath(cue.ParsePath("paths"))
if pathValue.Exists() {
iter, err := pathValue.Fields()
if err != nil {
return err
}
for iter.Next() {
gvk := iter.Value().LookupPath(cue.ParsePath("post[\"x-kubernetes-group-version-kind\"]"))
if gvk.Exists() {
if v, err := getDGVK(gvk); err == nil {
dgvkMapper[v.reverseString()] = v
}
}
}
}
oaFile, err := jsonschema.Extract(oaInst, &jsonschema.Config{
Root: "#/definitions",
Map: openAPIMapping(dgvkMapper),
})
if err != nil {
return CUEParseError{
err: err,
errType: ParseJSONSchemaErr,
}
}
kubePkg := newPackage("kube")
kubePkg.processOpenAPIFile(oaFile)
if err := kubePkg.AddSyntax(oaFile); err != nil {
return err
}
packages := make(map[string]*pkgInstance)
groupKinds := make(map[string][]VersionKind)
for k := range dgvkMapper {
v := dgvkMapper[k]
apiVersion := v.APIVersion
def := fmt.Sprintf(`
import "kube"
#%s: kube.%s & {
kind: "%s"
apiVersion: "%s",
}`, v.Kind, k, v.Kind, apiVersion)
if err := pd.pkgBuild(packages, genStandardPkgName(v), v, def, kubePkg, groupKinds); err != nil {
return err
}
if err := pd.pkgBuild(packages, genOpenPkgName(v), v, def, kubePkg, groupKinds); err != nil {
return err
}
}
for name, pkg := range packages {
pd.mount(pkg, groupKinds[name])
}
return nil
}
func genOpenPkgName(v domainGroupVersionKind) string {
return BuiltinPackageDomain + "/" + v.APIVersion
}
func genStandardPkgName(v domainGroupVersionKind) string {
res := []string{v.Group, v.Version}
if v.Domain != "" {
res = []string{v.Domain, v.Group, v.Version}
}
return strings.Join(res, "/")
}
func setDiscoveryDefaults(config *rest.Config) {
config.APIPath = ""
config.GroupVersion = nil
if config.Timeout == 0 {
config.Timeout = 32 * time.Second
}
if config.Burst == 0 && config.QPS < 100 {
// discovery is expected to be bursty, increase the default burst
// to accommodate looking up resource info for many API groups.
// matches burst set by ConfigFlags#ToDiscoveryClient().
// see https://issue.k8s.io/86149
config.Burst = 100
}
codec := runtime.NoopEncoder{Decoder: clientgoscheme.Codecs.UniversalDecoder()}
config.NegotiatedSerializer = serializer.NegotiatedSerializerWrapper(runtime.SerializerInfo{Serializer: codec})
if len(config.UserAgent) == 0 {
config.UserAgent = rest.DefaultKubernetesUserAgent()
}
}
func getClusterOpenAPIClient(config *rest.Config) (*rest.RESTClient, error) {
copyConfig := *config
setDiscoveryDefaults(&copyConfig)
return rest.UnversionedRESTClientFor(&copyConfig)
}
func openAPIMapping(dgvkMapper map[string]domainGroupVersionKind) func(pos token.Pos, a []string) ([]ast.Label, error) {
return func(pos token.Pos, a []string) ([]ast.Label, error) {
if len(a) < 2 {
return nil, errors.New("openAPIMapping format invalid")
}
name := strings.ReplaceAll(a[1], ".", "_")
name = strings.ReplaceAll(name, "-", "_")
if _, ok := dgvkMapper[name]; !ok && strings.HasPrefix(name, K8sResourcePrefix) {
trimName := strings.TrimPrefix(name, K8sResourcePrefix)
if v, ok := dgvkMapper[trimName]; ok {
v.Domain = "k8s.io"
dgvkMapper[name] = v
delete(dgvkMapper, trimName)
}
}
if strings.HasSuffix(a[1], ".JSONSchemaProps") && pos != token.NoPos {
return []ast.Label{ast.NewIdent("_")}, nil
}
return []ast.Label{ast.NewIdent(name)}, nil
}
}
type domainGroupVersionKind struct {
Domain string
Group string
Version string
Kind string
APIVersion string
}
func (dgvk domainGroupVersionKind) reverseString() string {
var s = []string{dgvk.Kind, dgvk.Version}
s = append(s, strings.Split(dgvk.Group, ".")...)
domain := dgvk.Domain
if domain == "k8s.io" {
domain = "api.k8s.io"
}
if domain != "" {
s = append(s, strings.Split(domain, ".")...)
}
for i, j := 0, len(s)-1; i < j; i, j = i+1, j-1 {
s[i], s[j] = s[j], s[i]
}
return strings.ReplaceAll(strings.Join(s, "_"), "-", "_")
}
type pkgInstance struct {
*build.Instance
}
func newPackage(name string) *pkgInstance {
return &pkgInstance{
&build.Instance{
PkgName: filepath.Base(name),
ImportPath: name,
},
}
}
func (pkg *pkgInstance) processOpenAPIFile(f *ast.File) {
ast.Walk(f, func(node ast.Node) bool {
if st, ok := node.(*ast.StructLit); ok {
hasEllipsis := false
for index, elt := range st.Elts {
if _, isEllipsis := elt.(*ast.Ellipsis); isEllipsis {
if hasEllipsis {
st.Elts = st.Elts[:index]
return true
}
if index > 0 {
st.Elts = st.Elts[:index]
return true
}
hasEllipsis = true
}
}
}
return true
}, nil)
for _, decl := range f.Decls {
if field, ok := decl.(*ast.Field); ok {
if val, ok := field.Value.(*ast.Ident); ok && val.Name == "string" {
field.Value = ast.NewBinExpr(token.OR, ast.NewIdent("int"), ast.NewIdent("string"))
}
}
}
}
func getDGVK(v cue.Value) (ret domainGroupVersionKind, err error) {
gvk := metav1.GroupVersionKind{}
gvk.Group, err = v.LookupPath(cue.ParsePath("group")).String()
if err != nil {
return
}
gvk.Version, err = v.LookupPath(cue.ParsePath("version")).String()
if err != nil {
return
}
gvk.Kind, err = v.LookupPath(cue.ParsePath("kind")).String()
if err != nil {
return
}
ret = convert2DGVK(gvk)
return
}
func convert2DGVK(gvk metav1.GroupVersionKind) domainGroupVersionKind {
ret := domainGroupVersionKind{
Version: gvk.Version,
Kind: gvk.Kind,
APIVersion: gvk.Version,
}
if gvk.Group == "" {
ret.Group = "core"
ret.Domain = "k8s.io"
} else {
ret.APIVersion = gvk.Group + "/" + ret.APIVersion
sv := strings.Split(gvk.Group, ".")
// Domain must contain dot
if len(sv) > 2 {
ret.Domain = strings.Join(sv[1:], ".")
ret.Group = sv[0]
} else {
ret.Group = gvk.Group
}
}
return ret
}

Some files were not shown because too many files have changed in this diff.