Compare commits


679 Commits

Author SHA1 Message Date
dependabot[bot] 50d41559ab chore(deps): bump rand from 0.9.1 to 0.9.2
Bumps [rand](https://github.com/rust-random/rand) from 0.9.1 to 0.9.2.
- [Release notes](https://github.com/rust-random/rand/releases)
- [Changelog](https://github.com/rust-random/rand/blob/master/CHANGELOG.md)
- [Commits](https://github.com/rust-random/rand/compare/rand_core-0.9.1...rand_core-0.9.2)

---
updated-dependencies:
- dependency-name: rand
  dependency-version: 0.9.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-12 16:12:43 -04:00
dependabot[bot] a8ea265933 chore(deps): bump serde_json from 1.0.140 to 1.0.142
Bumps [serde_json](https://github.com/serde-rs/json) from 1.0.140 to 1.0.142.
- [Release notes](https://github.com/serde-rs/json/releases)
- [Commits](https://github.com/serde-rs/json/compare/v1.0.140...v1.0.142)

---
updated-dependencies:
- dependency-name: serde_json
  dependency-version: 1.0.142
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-12 16:06:22 -04:00
dependabot[bot] dad082b6c7 chore(deps): bump mlugg/setup-zig from 2.0.4 to 2.0.5
Bumps [mlugg/setup-zig](https://github.com/mlugg/setup-zig) from 2.0.4 to 2.0.5.
- [Release notes](https://github.com/mlugg/setup-zig/releases)
- [Commits](475c97be87...8d6198c65f)

---
updated-dependencies:
- dependency-name: mlugg/setup-zig
  dependency-version: 2.0.5
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-12 16:06:01 -04:00
dependabot[bot] 6271e697ed chore(deps): bump actions/download-artifact from 4.3.0 to 5.0.0
Bumps [actions/download-artifact](https://github.com/actions/download-artifact) from 4.3.0 to 5.0.0.
- [Release notes](https://github.com/actions/download-artifact/releases)
- [Commits](d3f86a106a...634f93cb29)

---
updated-dependencies:
- dependency-name: actions/download-artifact
  dependency-version: 5.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-12 16:05:57 -04:00
dependabot[bot] b1dd4e650a chore(deps): bump taiki-e/install-action from 2.56.19 to 2.58.9
Bumps [taiki-e/install-action](https://github.com/taiki-e/install-action) from 2.56.19 to 2.58.9.
- [Release notes](https://github.com/taiki-e/install-action/releases)
- [Changelog](https://github.com/taiki-e/install-action/blob/main/CHANGELOG.md)
- [Commits](c99cc51b30...2c73a741d1)

---
updated-dependencies:
- dependency-name: taiki-e/install-action
  dependency-version: 2.58.9
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-12 16:05:46 -04:00
dependabot[bot] 5e7e3eddb2 chore(deps): bump clap from 4.5.41 to 4.5.43
Bumps [clap](https://github.com/clap-rs/clap) from 4.5.41 to 4.5.43.
- [Release notes](https://github.com/clap-rs/clap/releases)
- [Changelog](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md)
- [Commits](https://github.com/clap-rs/clap/compare/clap_complete-v4.5.41...clap_complete-v4.5.43)

---
updated-dependencies:
- dependency-name: clap
  dependency-version: 4.5.43
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-12 16:05:29 -04:00
dependabot[bot] 7f3652c9b4 chore(deps): bump hyper-util from 0.1.15 to 0.1.16
Bumps [hyper-util](https://github.com/hyperium/hyper-util) from 0.1.15 to 0.1.16.
- [Release notes](https://github.com/hyperium/hyper-util/releases)
- [Changelog](https://github.com/hyperium/hyper-util/blob/master/CHANGELOG.md)
- [Commits](https://github.com/hyperium/hyper-util/compare/v0.1.15...v0.1.16)

---
updated-dependencies:
- dependency-name: hyper-util
  dependency-version: 0.1.16
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-12 16:05:21 -04:00
dependabot[bot] 7948300cf5 chore(deps): bump actions/checkout from 4.2.2 to 5.0.0
Bumps [actions/checkout](https://github.com/actions/checkout) from 4.2.2 to 5.0.0.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](11bd71901b...08c6903cd8)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: 5.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-12 16:05:18 -04:00
dependabot[bot] 6eb78120b7 chore(deps): bump docker/login-action from 3.4.0 to 3.5.0
Bumps [docker/login-action](https://github.com/docker/login-action) from 3.4.0 to 3.5.0.
- [Release notes](https://github.com/docker/login-action/releases)
- [Commits](74a5d14239...184bdaa072)

---
updated-dependencies:
- dependency-name: docker/login-action
  dependency-version: 3.5.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-12 16:05:05 -04:00
dependabot[bot] 18464c2ac8 chore(deps): bump testcontainers from 0.24.0 to 0.25.0
Bumps [testcontainers](https://github.com/testcontainers/testcontainers-rs) from 0.24.0 to 0.25.0.
- [Release notes](https://github.com/testcontainers/testcontainers-rs/releases)
- [Changelog](https://github.com/testcontainers/testcontainers-rs/blob/main/CHANGELOG.md)
- [Commits](https://github.com/testcontainers/testcontainers-rs/compare/0.24.0...0.25.0)

---
updated-dependencies:
- dependency-name: testcontainers
  dependency-version: 0.25.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-12 16:04:53 -04:00
dependabot[bot] 34b054122b chore(deps): bump indexmap from 2.9.0 to 2.10.0
Bumps [indexmap](https://github.com/indexmap-rs/indexmap) from 2.9.0 to 2.10.0.
- [Changelog](https://github.com/indexmap-rs/indexmap/blob/main/RELEASES.md)
- [Commits](https://github.com/indexmap-rs/indexmap/compare/2.9.0...2.10.0)

---
updated-dependencies:
- dependency-name: indexmap
  dependency-version: 2.10.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-08 19:43:41 -06:00
dependabot[bot] 41f01ba0df chore(deps): bump hyper-util from 0.1.14 to 0.1.15
Bumps [hyper-util](https://github.com/hyperium/hyper-util) from 0.1.14 to 0.1.15.
- [Release notes](https://github.com/hyperium/hyper-util/releases)
- [Changelog](https://github.com/hyperium/hyper-util/blob/master/CHANGELOG.md)
- [Commits](https://github.com/hyperium/hyper-util/compare/v0.1.14...v0.1.15)

---
updated-dependencies:
- dependency-name: hyper-util
  dependency-version: 0.1.15
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-08 19:36:41 -06:00
dependabot[bot] c6f6b44b51 chore(deps): bump Swatinem/rust-cache from 2.7.8 to 2.8.0
Bumps [Swatinem/rust-cache](https://github.com/swatinem/rust-cache) from 2.7.8 to 2.8.0.
- [Release notes](https://github.com/swatinem/rust-cache/releases)
- [Changelog](https://github.com/Swatinem/rust-cache/blob/master/CHANGELOG.md)
- [Commits](9d47c6ad4b...98c8021b55)

---
updated-dependencies:
- dependency-name: Swatinem/rust-cache
  dependency-version: 2.8.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-08 19:36:14 -06:00
dependabot[bot] d3a82c8b2b chore(deps): bump docker/setup-buildx-action from 3.10.0 to 3.11.1
Bumps [docker/setup-buildx-action](https://github.com/docker/setup-buildx-action) from 3.10.0 to 3.11.1.
- [Release notes](https://github.com/docker/setup-buildx-action/releases)
- [Commits](b5ca514318...e468171a9d)

---
updated-dependencies:
- dependency-name: docker/setup-buildx-action
  dependency-version: 3.11.1
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-08 19:35:55 -06:00
dependabot[bot] 4c8c73e603 chore(deps): bump taiki-e/install-action from 2.53.0 to 2.56.19
---
updated-dependencies:
- dependency-name: taiki-e/install-action
  dependency-version: 2.56.19
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-08 19:34:15 -06:00
dependabot[bot] 30f49b6cef chore(deps): bump utoipa from 5.3.1 to 5.4.0
Bumps [utoipa](https://github.com/juhaku/utoipa) from 5.3.1 to 5.4.0.
- [Release notes](https://github.com/juhaku/utoipa/releases)
- [Changelog](https://github.com/juhaku/utoipa/blob/master/utoipa-rapidoc/CHANGELOG.md)
- [Commits](https://github.com/juhaku/utoipa/compare/utoipa-5.3.1...utoipa-5.4.0)

---
updated-dependencies:
- dependency-name: utoipa
  dependency-version: 5.4.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-08 19:33:59 -06:00
dependabot[bot] 66502de4f0 chore(deps): bump tokio from 1.45.1 to 1.46.1
Bumps [tokio](https://github.com/tokio-rs/tokio) from 1.45.1 to 1.46.1.
- [Release notes](https://github.com/tokio-rs/tokio/releases)
- [Commits](https://github.com/tokio-rs/tokio/compare/tokio-1.45.1...tokio-1.46.1)

---
updated-dependencies:
- dependency-name: tokio
  dependency-version: 1.46.1
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-08 19:30:58 -06:00
dependabot[bot] e0ec996d4d chore(deps): bump github/codeql-action from 3.28.19 to 3.29.3
---
updated-dependencies:
- dependency-name: github/codeql-action
  dependency-version: 3.29.3
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-08 19:30:30 -06:00
dependabot[bot] 8515084c01 chore(deps): bump mlugg/setup-zig from 2.0.1 to 2.0.4
Bumps [mlugg/setup-zig](https://github.com/mlugg/setup-zig) from 2.0.1 to 2.0.4.
- [Release notes](https://github.com/mlugg/setup-zig/releases)
- [Commits](7dccf5e6d0...475c97be87)

---
updated-dependencies:
- dependency-name: mlugg/setup-zig
  dependency-version: 2.0.4
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-08 19:30:12 -06:00
dependabot[bot] 8c037d3406 chore(deps): bump clap from 4.5.40 to 4.5.41
Bumps [clap](https://github.com/clap-rs/clap) from 4.5.40 to 4.5.41.
- [Release notes](https://github.com/clap-rs/clap/releases)
- [Changelog](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md)
- [Commits](https://github.com/clap-rs/clap/compare/clap_complete-v4.5.40...clap_complete-v4.5.41)

---
updated-dependencies:
- dependency-name: clap
  dependency-version: 4.5.41
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-08 19:28:33 -06:00
dependabot[bot] abf0702404 chore(deps): bump nkeys from 0.4.4 to 0.4.5
Bumps [nkeys](https://github.com/wasmcloud/nkeys) from 0.4.4 to 0.4.5.
- [Release notes](https://github.com/wasmcloud/nkeys/releases)
- [Commits](https://github.com/wasmcloud/nkeys/compare/v0.4.4...v0.4.5)

---
updated-dependencies:
- dependency-name: nkeys
  dependency-version: 0.4.5
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-07-08 17:53:54 -06:00
Brooks Townsend 67d8b25f27 chore(chart): bump to latest app version
Signed-off-by: Brooks Townsend <brooks@cosmonic.com>
2025-06-18 11:03:44 -04:00
dependabot[bot] b376c3ae2b chore(deps): bump softprops/action-gh-release from 2.2.2 to 2.3.2
Bumps [softprops/action-gh-release](https://github.com/softprops/action-gh-release) from 2.2.2 to 2.3.2.
- [Release notes](https://github.com/softprops/action-gh-release/releases)
- [Changelog](https://github.com/softprops/action-gh-release/blob/master/CHANGELOG.md)
- [Commits](da05d55257...72f2c25fcb)

---
updated-dependencies:
- dependency-name: softprops/action-gh-release
  dependency-version: 2.3.2
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-06-16 14:04:52 -06:00
dependabot[bot] eec6ca1c03 chore(deps): bump clap from 4.5.39 to 4.5.40
Bumps [clap](https://github.com/clap-rs/clap) from 4.5.39 to 4.5.40.
- [Release notes](https://github.com/clap-rs/clap/releases)
- [Changelog](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md)
- [Commits](https://github.com/clap-rs/clap/compare/clap_complete-v4.5.39...clap_complete-v4.5.40)

---
updated-dependencies:
- dependency-name: clap
  dependency-version: 4.5.40
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-06-16 13:59:42 -06:00
dependabot[bot] cf9ef590b3 chore(deps): bump taiki-e/install-action from 2.52.7 to 2.53.0
Bumps [taiki-e/install-action](https://github.com/taiki-e/install-action) from 2.52.7 to 2.53.0.
- [Release notes](https://github.com/taiki-e/install-action/releases)
- [Changelog](https://github.com/taiki-e/install-action/blob/main/CHANGELOG.md)
- [Commits](92f69c1952...cfe1303741)

---
updated-dependencies:
- dependency-name: taiki-e/install-action
  dependency-version: 2.53.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-06-16 13:58:36 -06:00
dependabot[bot] 2009753535 chore(deps): bump taiki-e/install-action from 2.52.4 to 2.52.7
Bumps [taiki-e/install-action](https://github.com/taiki-e/install-action) from 2.52.4 to 2.52.7.
- [Release notes](https://github.com/taiki-e/install-action/releases)
- [Changelog](https://github.com/taiki-e/install-action/blob/main/CHANGELOG.md)
- [Commits](735e593394...92f69c1952)

---
updated-dependencies:
- dependency-name: taiki-e/install-action
  dependency-version: 2.52.7
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-06-10 20:14:19 -06:00
dependabot[bot] 6ffc096379 chore(deps): bump hyper-util from 0.1.13 to 0.1.14
Bumps [hyper-util](https://github.com/hyperium/hyper-util) from 0.1.13 to 0.1.14.
- [Release notes](https://github.com/hyperium/hyper-util/releases)
- [Changelog](https://github.com/hyperium/hyper-util/blob/master/CHANGELOG.md)
- [Commits](https://github.com/hyperium/hyper-util/compare/v0.1.13...v0.1.14)

---
updated-dependencies:
- dependency-name: hyper-util
  dependency-version: 0.1.14
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-06-10 20:13:29 -06:00
dependabot[bot] 62b573183b chore(deps): bump github/codeql-action from 3.28.18 to 3.28.19
Bumps [github/codeql-action](https://github.com/github/codeql-action) from 3.28.18 to 3.28.19.
- [Release notes](https://github.com/github/codeql-action/releases)
- [Changelog](https://github.com/github/codeql-action/blob/main/CHANGELOG.md)
- [Commits](ff0a06e83c...fca7ace96b)

---
updated-dependencies:
- dependency-name: github/codeql-action
  dependency-version: 3.28.19
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-06-10 20:11:47 -06:00
dependabot[bot] 254765a5db chore(deps): bump ossf/scorecard-action from 2.4.1 to 2.4.2
Bumps [ossf/scorecard-action](https://github.com/ossf/scorecard-action) from 2.4.1 to 2.4.2.
- [Release notes](https://github.com/ossf/scorecard-action/releases)
- [Changelog](https://github.com/ossf/scorecard-action/blob/main/RELEASE.md)
- [Commits](f49aabe0b5...05b42c6244)

---
updated-dependencies:
- dependency-name: ossf/scorecard-action
  dependency-version: 2.4.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-06-06 14:21:01 -06:00
dependabot[bot] 9ad8b52ffe chore(deps): bump clap from 4.5.38 to 4.5.39
Bumps [clap](https://github.com/clap-rs/clap) from 4.5.38 to 4.5.39.
- [Release notes](https://github.com/clap-rs/clap/releases)
- [Changelog](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md)
- [Commits](https://github.com/clap-rs/clap/compare/clap_complete-v4.5.38...clap_complete-v4.5.39)

---
updated-dependencies:
- dependency-name: clap
  dependency-version: 4.5.39
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-06-06 14:20:46 -06:00
dependabot[bot] cc394fb963 chore(deps): bump hyper-util from 0.1.12 to 0.1.13
Bumps [hyper-util](https://github.com/hyperium/hyper-util) from 0.1.12 to 0.1.13.
- [Release notes](https://github.com/hyperium/hyper-util/releases)
- [Changelog](https://github.com/hyperium/hyper-util/blob/master/CHANGELOG.md)
- [Commits](https://github.com/hyperium/hyper-util/compare/v0.1.12...v0.1.13)

---
updated-dependencies:
- dependency-name: hyper-util
  dependency-version: 0.1.13
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-06-06 14:20:37 -06:00
dependabot[bot] 4f0be1c2ec chore(deps): bump docker/build-push-action from 6.17.0 to 6.18.0
Bumps [docker/build-push-action](https://github.com/docker/build-push-action) from 6.17.0 to 6.18.0.
- [Release notes](https://github.com/docker/build-push-action/releases)
- [Commits](1dc7386353...263435318d)

---
updated-dependencies:
- dependency-name: docker/build-push-action
  dependency-version: 6.18.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-06-06 14:20:21 -06:00
dependabot[bot] c6177f1ec0 chore(deps): bump taiki-e/install-action from 2.52.1 to 2.52.4
Bumps [taiki-e/install-action](https://github.com/taiki-e/install-action) from 2.52.1 to 2.52.4.
- [Release notes](https://github.com/taiki-e/install-action/releases)
- [Changelog](https://github.com/taiki-e/install-action/blob/main/CHANGELOG.md)
- [Commits](6c6479b498...735e593394)

---
updated-dependencies:
- dependency-name: taiki-e/install-action
  dependency-version: 2.52.4
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-06-06 14:19:38 -06:00
dependabot[bot] 9ab6ef3f3a chore(deps): bump hyper-util from 0.1.11 to 0.1.12
Bumps [hyper-util](https://github.com/hyperium/hyper-util) from 0.1.11 to 0.1.12.
- [Release notes](https://github.com/hyperium/hyper-util/releases)
- [Changelog](https://github.com/hyperium/hyper-util/blob/master/CHANGELOG.md)
- [Commits](https://github.com/hyperium/hyper-util/compare/v0.1.11...v0.1.12)

---
updated-dependencies:
- dependency-name: hyper-util
  dependency-version: 0.1.12
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-05-28 16:10:04 -04:00
dependabot[bot] aab70fa276 chore(deps): bump mlugg/setup-zig from 2.0.0 to 2.0.1
Bumps [mlugg/setup-zig](https://github.com/mlugg/setup-zig) from 2.0.0 to 2.0.1.
- [Release notes](https://github.com/mlugg/setup-zig/releases)
- [Commits](aa9ad5c14e...7dccf5e6d0)

---
updated-dependencies:
- dependency-name: mlugg/setup-zig
  dependency-version: 2.0.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-05-28 16:08:09 -04:00
dependabot[bot] 04862520cb chore(deps): bump taiki-e/install-action from 2.51.2 to 2.52.1
Bumps [taiki-e/install-action](https://github.com/taiki-e/install-action) from 2.51.2 to 2.52.1.
- [Release notes](https://github.com/taiki-e/install-action/releases)
- [Changelog](https://github.com/taiki-e/install-action/blob/main/CHANGELOG.md)
- [Commits](941e8a4d9d...6c6479b498)

---
updated-dependencies:
- dependency-name: taiki-e/install-action
  dependency-version: 2.52.1
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-05-28 16:07:52 -04:00
dependabot[bot] d24a275f69 chore(deps): bump uuid from 1.16.0 to 1.17.0
Bumps [uuid](https://github.com/uuid-rs/uuid) from 1.16.0 to 1.17.0.
- [Release notes](https://github.com/uuid-rs/uuid/releases)
- [Commits](https://github.com/uuid-rs/uuid/compare/v1.16.0...v1.17.0)

---
updated-dependencies:
- dependency-name: uuid
  dependency-version: 1.17.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-05-28 16:07:37 -04:00
dependabot[bot] dc85b32bed chore(deps): bump tokio from 1.45.0 to 1.45.1
Bumps [tokio](https://github.com/tokio-rs/tokio) from 1.45.0 to 1.45.1.
- [Release notes](https://github.com/tokio-rs/tokio/releases)
- [Commits](https://github.com/tokio-rs/tokio/compare/tokio-1.45.0...tokio-1.45.1)

---
updated-dependencies:
- dependency-name: tokio
  dependency-version: 1.45.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-05-28 16:07:21 -04:00
dependabot[bot] a5a61d2749 chore(deps): bump tokio from 1.44.2 to 1.45.0
Bumps [tokio](https://github.com/tokio-rs/tokio) from 1.44.2 to 1.45.0.
- [Release notes](https://github.com/tokio-rs/tokio/releases)
- [Commits](https://github.com/tokio-rs/tokio/compare/tokio-1.44.2...tokio-1.45.0)

---
updated-dependencies:
- dependency-name: tokio
  dependency-version: 1.45.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-05-20 09:42:00 -04:00
dependabot[bot] c065b3e17e chore(deps): bump clap from 4.5.37 to 4.5.38
Bumps [clap](https://github.com/clap-rs/clap) from 4.5.37 to 4.5.38.
- [Release notes](https://github.com/clap-rs/clap/releases)
- [Changelog](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md)
- [Commits](https://github.com/clap-rs/clap/compare/clap_complete-v4.5.37...clap_complete-v4.5.38)

---
updated-dependencies:
- dependency-name: clap
  dependency-version: 4.5.38
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-05-19 21:39:12 -06:00
dependabot[bot] 4239d6d898 chore(deps): bump github/codeql-action from 3.28.17 to 3.28.18 (#669)
Bumps [github/codeql-action](https://github.com/github/codeql-action) from 3.28.17 to 3.28.18.
- [Release notes](https://github.com/github/codeql-action/releases)
- [Changelog](https://github.com/github/codeql-action/blob/main/CHANGELOG.md)
- [Commits](60168efe1c...ff0a06e83c)

---
updated-dependencies:
- dependency-name: github/codeql-action
  dependency-version: 3.28.18
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-19 08:56:32 -05:00
dependabot[bot] d240b53a5d chore(deps): bump docker/build-push-action from 6.16.0 to 6.17.0 (#668)
Bumps [docker/build-push-action](https://github.com/docker/build-push-action) from 6.16.0 to 6.17.0.
- [Release notes](https://github.com/docker/build-push-action/releases)
- [Commits](14487ce63c...1dc7386353)

---
updated-dependencies:
- dependency-name: docker/build-push-action
  dependency-version: 6.17.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-19 08:44:15 -05:00
dependabot[bot] 4e014223b8 chore(deps): bump taiki-e/install-action from 2.50.10 to 2.51.2 (#667)
Bumps [taiki-e/install-action](https://github.com/taiki-e/install-action) from 2.50.10 to 2.51.2.
- [Release notes](https://github.com/taiki-e/install-action/releases)
- [Changelog](https://github.com/taiki-e/install-action/blob/main/CHANGELOG.md)
- [Commits](83254c5438...941e8a4d9d)

---
updated-dependencies:
- dependency-name: taiki-e/install-action
  dependency-version: 2.51.2
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-19 08:31:15 -05:00
dependabot[bot] 96aa54bd5e chore(deps): bump mlugg/setup-zig from 1.2.1 to 2.0.0
Bumps [mlugg/setup-zig](https://github.com/mlugg/setup-zig) from 1.2.1 to 2.0.0.
- [Release notes](https://github.com/mlugg/setup-zig/releases)
- [Commits](a67e68dc5c...aa9ad5c14e)

---
updated-dependencies:
- dependency-name: mlugg/setup-zig
  dependency-version: 2.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-05-12 09:38:59 -04:00
dependabot[bot] 67b1d85ba9 chore(deps): bump taiki-e/install-action from 2.50.7 to 2.50.10
Bumps [taiki-e/install-action](https://github.com/taiki-e/install-action) from 2.50.7 to 2.50.10.
- [Release notes](https://github.com/taiki-e/install-action/releases)
- [Changelog](https://github.com/taiki-e/install-action/blob/main/CHANGELOG.md)
- [Commits](86c23eed46...83254c5438)

---
updated-dependencies:
- dependency-name: taiki-e/install-action
  dependency-version: 2.50.10
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-05-12 09:36:04 -04:00
Joonas Bergius b5133163ae chore: Switch to mlugg/setup-zig action (#662)
Signed-off-by: Joonas Bergius <joonas@cosmonic.com>
2025-05-06 08:29:49 -05:00
dependabot[bot] d5a77cc74c chore(deps): bump sha2 from 0.10.8 to 0.10.9
Bumps [sha2](https://github.com/RustCrypto/hashes) from 0.10.8 to 0.10.9.
- [Commits](https://github.com/RustCrypto/hashes/compare/sha2-v0.10.8...sha2-v0.10.9)

---
updated-dependencies:
- dependency-name: sha2
  dependency-version: 0.10.9
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-05-05 22:31:25 -06:00
dependabot[bot] ef80b684ba chore(deps): bump chrono from 0.4.40 to 0.4.41
Bumps [chrono](https://github.com/chronotope/chrono) from 0.4.40 to 0.4.41.
- [Release notes](https://github.com/chronotope/chrono/releases)
- [Changelog](https://github.com/chronotope/chrono/blob/main/CHANGELOG.md)
- [Commits](https://github.com/chronotope/chrono/compare/v0.4.40...v0.4.41)

---
updated-dependencies:
- dependency-name: chrono
  dependency-version: 0.4.41
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-05-05 22:31:00 -06:00
dependabot[bot] ee40750113 chore(deps): bump testcontainers from 0.23.3 to 0.24.0 (#659)
Bumps [testcontainers](https://github.com/testcontainers/testcontainers-rs) from 0.23.3 to 0.24.0.
- [Release notes](https://github.com/testcontainers/testcontainers-rs/releases)
- [Changelog](https://github.com/testcontainers/testcontainers-rs/blob/main/CHANGELOG.md)
- [Commits](https://github.com/testcontainers/testcontainers-rs/compare/0.23.3...0.24.0)

---
updated-dependencies:
- dependency-name: testcontainers
  dependency-version: 0.24.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-05 19:04:41 -05:00
dependabot[bot] 73dc76b72a chore(deps): bump taiki-e/install-action from 2.50.3 to 2.50.7 (#657)
Bumps [taiki-e/install-action](https://github.com/taiki-e/install-action) from 2.50.3 to 2.50.7.
- [Release notes](https://github.com/taiki-e/install-action/releases)
- [Changelog](https://github.com/taiki-e/install-action/blob/main/CHANGELOG.md)
- [Commits](ab3728c7ba...86c23eed46)

---
updated-dependencies:
- dependency-name: taiki-e/install-action
  dependency-version: 2.50.7
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-05 08:59:17 -05:00
dependabot[bot] aac1e46d0b chore(deps): bump github/codeql-action from 3.28.16 to 3.28.17 (#658)
Bumps [github/codeql-action](https://github.com/github/codeql-action) from 3.28.16 to 3.28.17.
- [Release notes](https://github.com/github/codeql-action/releases)
- [Changelog](https://github.com/github/codeql-action/blob/main/CHANGELOG.md)
- [Commits](28deaeda66...60168efe1c)

---
updated-dependencies:
- dependency-name: github/codeql-action
  dependency-version: 3.28.17
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-05 08:58:52 -05:00
dependabot[bot] e843cfb824 chore(deps): bump rand from 0.9.0 to 0.9.1
Bumps [rand](https://github.com/rust-random/rand) from 0.9.0 to 0.9.1.
- [Release notes](https://github.com/rust-random/rand/releases)
- [Changelog](https://github.com/rust-random/rand/blob/master/CHANGELOG.md)
- [Commits](https://github.com/rust-random/rand/compare/0.9.0...rand_core-0.9.1)

---
updated-dependencies:
- dependency-name: rand
  dependency-version: 0.9.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-04-29 14:28:33 -04:00
dependabot[bot] 0ef3162684 chore(deps): bump github/codeql-action from 3.28.15 to 3.28.16
Bumps [github/codeql-action](https://github.com/github/codeql-action) from 3.28.15 to 3.28.16.
- [Release notes](https://github.com/github/codeql-action/releases)
- [Changelog](https://github.com/github/codeql-action/blob/main/CHANGELOG.md)
- [Commits](45775bd823...28deaeda66)

---
updated-dependencies:
- dependency-name: github/codeql-action
  dependency-version: 3.28.16
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-04-29 14:28:10 -04:00
dependabot[bot] 726a6c0bc7 chore(deps): bump actions/download-artifact from 4.2.1 to 4.3.0
Bumps [actions/download-artifact](https://github.com/actions/download-artifact) from 4.2.1 to 4.3.0.
- [Release notes](https://github.com/actions/download-artifact/releases)
- [Commits](95815c38cf...d3f86a106a)

---
updated-dependencies:
- dependency-name: actions/download-artifact
  dependency-version: 4.3.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-04-29 14:28:02 -04:00
dependabot[bot] f1a3acbf1e chore(deps): bump actions/setup-python from 5.5.0 to 5.6.0
Bumps [actions/setup-python](https://github.com/actions/setup-python) from 5.5.0 to 5.6.0.
- [Release notes](https://github.com/actions/setup-python/releases)
- [Commits](8d9ed9ac5c...a26af69be9)

---
updated-dependencies:
- dependency-name: actions/setup-python
  dependency-version: 5.6.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-04-29 14:27:48 -04:00
dependabot[bot] e92e526dfe chore(deps): bump docker/build-push-action from 6.15.0 to 6.16.0
Bumps [docker/build-push-action](https://github.com/docker/build-push-action) from 6.15.0 to 6.16.0.
- [Release notes](https://github.com/docker/build-push-action/releases)
- [Commits](471d1dc4e0...14487ce63c)

---
updated-dependencies:
- dependency-name: docker/build-push-action
  dependency-version: 6.16.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-04-29 14:27:33 -04:00
dependabot[bot] 15ae8c4d6a chore(deps): bump taiki-e/install-action from 2.49.50 to 2.50.3
Bumps [taiki-e/install-action](https://github.com/taiki-e/install-action) from 2.49.50 to 2.50.3.
- [Release notes](https://github.com/taiki-e/install-action/releases)
- [Changelog](https://github.com/taiki-e/install-action/blob/main/CHANGELOG.md)
- [Commits](09dc018eee...ab3728c7ba)

---
updated-dependencies:
- dependency-name: taiki-e/install-action
  dependency-version: 2.50.3
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-04-29 14:27:18 -04:00
dependabot[bot] 22fc78860f
chore(deps): bump clap from 4.5.36 to 4.5.37 (#650)
Bumps [clap](https://github.com/clap-rs/clap) from 4.5.36 to 4.5.37.
- [Release notes](https://github.com/clap-rs/clap/releases)
- [Changelog](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md)
- [Commits](https://github.com/clap-rs/clap/compare/clap_complete-v4.5.36...clap_complete-v4.5.37)

---
updated-dependencies:
- dependency-name: clap
  dependency-version: 4.5.37
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-21 10:18:24 -05:00
dependabot[bot] c7953f95e9
chore(deps): bump taiki-e/install-action from 2.49.49 to 2.49.50 (#649)
Bumps [taiki-e/install-action](https://github.com/taiki-e/install-action) from 2.49.49 to 2.49.50.
- [Release notes](https://github.com/taiki-e/install-action/releases)
- [Changelog](https://github.com/taiki-e/install-action/blob/main/CHANGELOG.md)
- [Commits](be7c31b674...09dc018eee)

---
updated-dependencies:
- dependency-name: taiki-e/install-action
  dependency-version: 2.49.50
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-21 10:17:46 -05:00
dependabot[bot] 7f0fc3a396
chore(deps): bump softprops/action-gh-release from 2.2.1 to 2.2.2 (#648)
Bumps [softprops/action-gh-release](https://github.com/softprops/action-gh-release) from 2.2.1 to 2.2.2.
- [Release notes](https://github.com/softprops/action-gh-release/releases)
- [Changelog](https://github.com/softprops/action-gh-release/blob/master/CHANGELOG.md)
- [Commits](c95fe14893...da05d55257)

---
updated-dependencies:
- dependency-name: softprops/action-gh-release
  dependency-version: 2.2.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-21 10:16:15 -05:00
dependabot[bot] 37b47154e3 chore(deps): bump github/codeql-action from 3.28.13 to 3.28.15
Bumps [github/codeql-action](https://github.com/github/codeql-action) from 3.28.13 to 3.28.15.
- [Release notes](https://github.com/github/codeql-action/releases)
- [Changelog](https://github.com/github/codeql-action/blob/main/CHANGELOG.md)
- [Commits](1b549b9259...45775bd823)

---
updated-dependencies:
- dependency-name: github/codeql-action
  dependency-version: 3.28.15
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-04-14 09:53:59 -04:00
dependabot[bot] 3c8b0742a5 chore(deps): bump actions/setup-python from 5.4.0 to 5.5.0
Bumps [actions/setup-python](https://github.com/actions/setup-python) from 5.4.0 to 5.5.0.
- [Release notes](https://github.com/actions/setup-python/releases)
- [Commits](42375524e2...8d9ed9ac5c)

---
updated-dependencies:
- dependency-name: actions/setup-python
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-04-14 09:51:20 -04:00
dependabot[bot] 8a3d21ce7d chore(deps): bump clap from 4.5.32 to 4.5.36
Bumps [clap](https://github.com/clap-rs/clap) from 4.5.32 to 4.5.36.
- [Release notes](https://github.com/clap-rs/clap/releases)
- [Changelog](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md)
- [Commits](https://github.com/clap-rs/clap/compare/clap_complete-v4.5.32...clap_complete-v4.5.36)

---
updated-dependencies:
- dependency-name: clap
  dependency-version: 4.5.36
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-04-14 09:50:57 -04:00
dependabot[bot] c09d40d335 chore(deps): bump tokio from 1.44.1 to 1.44.2
Bumps [tokio](https://github.com/tokio-rs/tokio) from 1.44.1 to 1.44.2.
- [Release notes](https://github.com/tokio-rs/tokio/releases)
- [Commits](https://github.com/tokio-rs/tokio/compare/tokio-1.44.1...tokio-1.44.2)

---
updated-dependencies:
- dependency-name: tokio
  dependency-version: 1.44.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-04-14 09:50:50 -04:00
dependabot[bot] 0748b04b60 chore(deps): bump indexmap from 2.8.0 to 2.9.0
Bumps [indexmap](https://github.com/indexmap-rs/indexmap) from 2.8.0 to 2.9.0.
- [Changelog](https://github.com/indexmap-rs/indexmap/blob/main/RELEASES.md)
- [Commits](https://github.com/indexmap-rs/indexmap/compare/2.8.0...2.9.0)

---
updated-dependencies:
- dependency-name: indexmap
  dependency-version: 2.9.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-04-14 09:50:37 -04:00
dependabot[bot] dc1955370f chore(deps): bump hyper-util from 0.1.10 to 0.1.11
Bumps [hyper-util](https://github.com/hyperium/hyper-util) from 0.1.10 to 0.1.11.
- [Release notes](https://github.com/hyperium/hyper-util/releases)
- [Changelog](https://github.com/hyperium/hyper-util/blob/master/CHANGELOG.md)
- [Commits](https://github.com/hyperium/hyper-util/compare/v0.1.10...v0.1.11)

---
updated-dependencies:
- dependency-name: hyper-util
  dependency-version: 0.1.11
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-04-14 09:50:28 -04:00
dependabot[bot] ebd113e51a chore(deps): bump anyhow from 1.0.97 to 1.0.98
Bumps [anyhow](https://github.com/dtolnay/anyhow) from 1.0.97 to 1.0.98.
- [Release notes](https://github.com/dtolnay/anyhow/releases)
- [Commits](https://github.com/dtolnay/anyhow/compare/1.0.97...1.0.98)

---
updated-dependencies:
- dependency-name: anyhow
  dependency-version: 1.0.98
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-04-14 09:50:21 -04:00
dependabot[bot] 8def8fe075 chore(deps): bump taiki-e/install-action from 2.49.34 to 2.49.49
Bumps [taiki-e/install-action](https://github.com/taiki-e/install-action) from 2.49.34 to 2.49.49.
- [Release notes](https://github.com/taiki-e/install-action/releases)
- [Changelog](https://github.com/taiki-e/install-action/blob/main/CHANGELOG.md)
- [Commits](914ac1e29d...be7c31b674)

---
updated-dependencies:
- dependency-name: taiki-e/install-action
  dependency-version: 2.49.49
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-04-14 09:50:14 -04:00
dependabot[bot] 1ae4e8e2cb chore(deps): bump Swatinem/rust-cache from 2.7.7 to 2.7.8
Bumps [Swatinem/rust-cache](https://github.com/swatinem/rust-cache) from 2.7.7 to 2.7.8.
- [Release notes](https://github.com/swatinem/rust-cache/releases)
- [Changelog](https://github.com/Swatinem/rust-cache/blob/master/CHANGELOG.md)
- [Commits](f0deed1e0e...9d47c6ad4b)

---
updated-dependencies:
- dependency-name: Swatinem/rust-cache
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-24 13:56:01 -06:00
dependabot[bot] db80173177 chore(deps): bump actions/download-artifact from 4.1.9 to 4.2.1
Bumps [actions/download-artifact](https://github.com/actions/download-artifact) from 4.1.9 to 4.2.1.
- [Release notes](https://github.com/actions/download-artifact/releases)
- [Commits](cc20338598...95815c38cf)

---
updated-dependencies:
- dependency-name: actions/download-artifact
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-24 13:54:17 -06:00
dependabot[bot] 6b4946dd32 chore(deps): bump taiki-e/install-action from 2.49.28 to 2.49.34
Bumps [taiki-e/install-action](https://github.com/taiki-e/install-action) from 2.49.28 to 2.49.34.
- [Release notes](https://github.com/taiki-e/install-action/releases)
- [Changelog](https://github.com/taiki-e/install-action/blob/main/CHANGELOG.md)
- [Commits](d7975a1de2...914ac1e29d)

---
updated-dependencies:
- dependency-name: taiki-e/install-action
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-24 13:53:43 -06:00
dependabot[bot] 897192b894 chore(deps): bump github/codeql-action from 3.28.11 to 3.28.13
Bumps [github/codeql-action](https://github.com/github/codeql-action) from 3.28.11 to 3.28.13.
- [Release notes](https://github.com/github/codeql-action/releases)
- [Changelog](https://github.com/github/codeql-action/blob/main/CHANGELOG.md)
- [Commits](6bb031afdd...1b549b9259)

---
updated-dependencies:
- dependency-name: github/codeql-action
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-24 13:50:33 -06:00
dependabot[bot] d715170d01 chore(deps): bump actions/upload-artifact from 4.6.1 to 4.6.2
Bumps [actions/upload-artifact](https://github.com/actions/upload-artifact) from 4.6.1 to 4.6.2.
- [Release notes](https://github.com/actions/upload-artifact/releases)
- [Commits](4cec3d8aa0...ea165f8d65)

---
updated-dependencies:
- dependency-name: actions/upload-artifact
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-24 13:48:15 -06:00
Taylor Thomas 8a1cd9e8e4 chore: Bump dep versions in preparation for release
Now that the control client is released, we can release wadm to continue
releasing the rest of the host monorepo

Signed-off-by: Taylor Thomas <taylor@cosmonic.com>
2025-03-19 15:00:57 -06:00
dependabot[bot] 93fbb9f4a3 chore(deps): bump taiki-e/install-action from 2.49.18 to 2.49.28
Bumps [taiki-e/install-action](https://github.com/taiki-e/install-action) from 2.49.18 to 2.49.28.
- [Release notes](https://github.com/taiki-e/install-action/releases)
- [Changelog](https://github.com/taiki-e/install-action/blob/main/CHANGELOG.md)
- [Commits](f87f9990b0...d7975a1de2)

---
updated-dependencies:
- dependency-name: taiki-e/install-action
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-17 13:35:45 -04:00
dependabot[bot] 6e57d6f197 chore(deps): bump docker/login-action from 3.3.0 to 3.4.0
Bumps [docker/login-action](https://github.com/docker/login-action) from 3.3.0 to 3.4.0.
- [Release notes](https://github.com/docker/login-action/releases)
- [Commits](9780b0c442...74a5d14239)

---
updated-dependencies:
- dependency-name: docker/login-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-17 13:26:35 -04:00
dependabot[bot] b3ebcd2e2a chore(deps): bump ulid from 1.2.0 to 1.2.1
Bumps [ulid](https://github.com/dylanhart/ulid-rs) from 1.2.0 to 1.2.1.
- [Commits](https://github.com/dylanhart/ulid-rs/compare/v1.2.0...v1.2.1)

---
updated-dependencies:
- dependency-name: ulid
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-17 13:26:19 -04:00
dependabot[bot] 6c8dd444ba chore(deps): bump indexmap from 2.7.1 to 2.8.0
Bumps [indexmap](https://github.com/indexmap-rs/indexmap) from 2.7.1 to 2.8.0.
- [Changelog](https://github.com/indexmap-rs/indexmap/blob/main/RELEASES.md)
- [Commits](https://github.com/indexmap-rs/indexmap/compare/2.7.1...2.8.0)

---
updated-dependencies:
- dependency-name: indexmap
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-17 13:26:06 -04:00
dependabot[bot] 005d599bcd chore(deps): bump tokio from 1.44.0 to 1.44.1
Bumps [tokio](https://github.com/tokio-rs/tokio) from 1.44.0 to 1.44.1.
- [Release notes](https://github.com/tokio-rs/tokio/releases)
- [Commits](https://github.com/tokio-rs/tokio/compare/tokio-1.44.0...tokio-1.44.1)

---
updated-dependencies:
- dependency-name: tokio
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-17 13:25:59 -04:00
dependabot[bot] 86af1498cb chore(deps): bump serde from 1.0.217 to 1.0.219
Bumps [serde](https://github.com/serde-rs/serde) from 1.0.217 to 1.0.219.
- [Release notes](https://github.com/serde-rs/serde/releases)
- [Commits](https://github.com/serde-rs/serde/compare/v1.0.217...v1.0.219)

---
updated-dependencies:
- dependency-name: serde
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-17 13:25:50 -04:00
dependabot[bot] 60f0014449 chore(deps): bump async-trait from 0.1.87 to 0.1.88
Bumps [async-trait](https://github.com/dtolnay/async-trait) from 0.1.87 to 0.1.88.
- [Release notes](https://github.com/dtolnay/async-trait/releases)
- [Commits](https://github.com/dtolnay/async-trait/compare/0.1.87...0.1.88)

---
updated-dependencies:
- dependency-name: async-trait
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-17 13:25:42 -04:00
dependabot[bot] a329be44a3 chore(deps): bump testcontainers from 0.23.2 to 0.23.3
Bumps [testcontainers](https://github.com/testcontainers/testcontainers-rs) from 0.23.2 to 0.23.3.
- [Release notes](https://github.com/testcontainers/testcontainers-rs/releases)
- [Changelog](https://github.com/testcontainers/testcontainers-rs/blob/main/CHANGELOG.md)
- [Commits](https://github.com/testcontainers/testcontainers-rs/compare/0.23.2...0.23.3)

---
updated-dependencies:
- dependency-name: testcontainers
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-12 14:13:08 -06:00
Brooks Townsend 14f7ed1bab chore(deps)!: upgrade async-nats to 0.39
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2025-03-12 16:05:02 -04:00
Brooks Townsend 39b79638ad chore(wadm): bump to 0.20.3 for release
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2025-03-11 14:41:37 -04:00
Brooks Townsend ac747cd8bc fix(scaler): react to config set events for components/providers
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2025-03-11 14:41:37 -04:00
dependabot[bot] 77f33f08f6 chore(deps): bump uuid from 1.13.1 to 1.15.1
Bumps [uuid](https://github.com/uuid-rs/uuid) from 1.13.1 to 1.15.1.
- [Release notes](https://github.com/uuid-rs/uuid/releases)
- [Commits](https://github.com/uuid-rs/uuid/compare/1.13.1...v1.15.1)

---
updated-dependencies:
- dependency-name: uuid
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-10 13:55:00 -06:00
dependabot[bot] 130c8f4a70 chore(deps): bump chrono from 0.4.39 to 0.4.40
Bumps [chrono](https://github.com/chronotope/chrono) from 0.4.39 to 0.4.40.
- [Release notes](https://github.com/chronotope/chrono/releases)
- [Changelog](https://github.com/chronotope/chrono/blob/main/CHANGELOG.md)
- [Commits](https://github.com/chronotope/chrono/compare/v0.4.39...v0.4.40)

---
updated-dependencies:
- dependency-name: chrono
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-10 13:52:45 -06:00
dependabot[bot] e9f017b809 chore(deps): bump tokio from 1.43.0 to 1.44.0
Bumps [tokio](https://github.com/tokio-rs/tokio) from 1.43.0 to 1.44.0.
- [Release notes](https://github.com/tokio-rs/tokio/releases)
- [Commits](https://github.com/tokio-rs/tokio/compare/tokio-1.43.0...tokio-1.44.0)

---
updated-dependencies:
- dependency-name: tokio
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-10 13:49:53 -06:00
dependabot[bot] 1365854fbb chore(deps): bump thiserror from 2.0.11 to 2.0.12
Bumps [thiserror](https://github.com/dtolnay/thiserror) from 2.0.11 to 2.0.12.
- [Release notes](https://github.com/dtolnay/thiserror/releases)
- [Commits](https://github.com/dtolnay/thiserror/compare/2.0.11...2.0.12)

---
updated-dependencies:
- dependency-name: thiserror
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-10 13:43:35 -06:00
dependabot[bot] 8164b443fc chore(deps): bump azure/setup-helm from 4.2.0 to 4.3.0
Bumps [azure/setup-helm](https://github.com/azure/setup-helm) from 4.2.0 to 4.3.0.
- [Release notes](https://github.com/azure/setup-helm/releases)
- [Changelog](https://github.com/Azure/setup-helm/blob/main/CHANGELOG.md)
- [Commits](fe7b79cd5e...b9e51907a0)

---
updated-dependencies:
- dependency-name: azure/setup-helm
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-10 13:42:39 -06:00
dependabot[bot] 445622df2e chore(deps): bump ossf/scorecard-action from 2.4.0 to 2.4.1
Bumps [ossf/scorecard-action](https://github.com/ossf/scorecard-action) from 2.4.0 to 2.4.1.
- [Release notes](https://github.com/ossf/scorecard-action/releases)
- [Changelog](https://github.com/ossf/scorecard-action/blob/main/RELEASE.md)
- [Commits](62b2cac7ed...f49aabe0b5)

---
updated-dependencies:
- dependency-name: ossf/scorecard-action
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-10 13:42:09 -06:00
dependabot[bot] e218cdae70 chore(deps): bump github/codeql-action from 3.28.10 to 3.28.11
Bumps [github/codeql-action](https://github.com/github/codeql-action) from 3.28.10 to 3.28.11.
- [Release notes](https://github.com/github/codeql-action/releases)
- [Changelog](https://github.com/github/codeql-action/blob/main/CHANGELOG.md)
- [Commits](b56ba49b26...6bb031afdd)

---
updated-dependencies:
- dependency-name: github/codeql-action
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-10 13:41:37 -06:00
dependabot[bot] f74f7f8f54 chore(deps): bump taiki-e/install-action from 2.48.13 to 2.49.18
Bumps [taiki-e/install-action](https://github.com/taiki-e/install-action) from 2.48.13 to 2.49.18.
- [Release notes](https://github.com/taiki-e/install-action/releases)
- [Changelog](https://github.com/taiki-e/install-action/blob/main/CHANGELOG.md)
- [Commits](ad0904967b...f87f9990b0)

---
updated-dependencies:
- dependency-name: taiki-e/install-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-10 12:56:26 -06:00
dependabot[bot] 734c726f14 chore(deps): bump actions/download-artifact from 4.1.8 to 4.1.9
Bumps [actions/download-artifact](https://github.com/actions/download-artifact) from 4.1.8 to 4.1.9.
- [Release notes](https://github.com/actions/download-artifact/releases)
- [Commits](fa0a91b85d...cc20338598)

---
updated-dependencies:
- dependency-name: actions/download-artifact
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-10 12:54:15 -06:00
Nicolas Lamirault 0fba847245
feat(helm): Kubernetes labels (#598)
Signed-off-by: Nicolas Lamirault <nicolas.lamirault@gmail.com>
2025-03-06 19:28:57 -05:00
dependabot[bot] a2c022b462 chore(deps): bump anyhow from 1.0.95 to 1.0.97
Bumps [anyhow](https://github.com/dtolnay/anyhow) from 1.0.95 to 1.0.97.
- [Release notes](https://github.com/dtolnay/anyhow/releases)
- [Commits](https://github.com/dtolnay/anyhow/compare/1.0.95...1.0.97)

---
updated-dependencies:
- dependency-name: anyhow
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-03 10:42:00 -07:00
dependabot[bot] 4db8763a0f chore(deps): bump schemars from 0.8.21 to 0.8.22
Bumps [schemars](https://github.com/GREsau/schemars) from 0.8.21 to 0.8.22.
- [Release notes](https://github.com/GREsau/schemars/releases)
- [Changelog](https://github.com/GREsau/schemars/blob/master/CHANGELOG.md)
- [Commits](https://github.com/GREsau/schemars/compare/v0.8.21...v0.8.22)

---
updated-dependencies:
- dependency-name: schemars
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-03 10:29:22 -07:00
dependabot[bot] 7958bfbced chore(deps): bump async-trait from 0.1.86 to 0.1.87
Bumps [async-trait](https://github.com/dtolnay/async-trait) from 0.1.86 to 0.1.87.
- [Release notes](https://github.com/dtolnay/async-trait/releases)
- [Commits](https://github.com/dtolnay/async-trait/compare/0.1.86...0.1.87)

---
updated-dependencies:
- dependency-name: async-trait
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-03 10:27:54 -07:00
dependabot[bot] 37eb784b82 chore(deps): bump serde_json from 1.0.138 to 1.0.140
Bumps [serde_json](https://github.com/serde-rs/json) from 1.0.138 to 1.0.140.
- [Release notes](https://github.com/serde-rs/json/releases)
- [Commits](https://github.com/serde-rs/json/compare/v1.0.138...v1.0.140)

---
updated-dependencies:
- dependency-name: serde_json
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-03 10:27:06 -07:00
dependabot[bot] 16191d081a chore(deps): bump actions/upload-artifact from 4.6.0 to 4.6.1
Bumps [actions/upload-artifact](https://github.com/actions/upload-artifact) from 4.6.0 to 4.6.1.
- [Release notes](https://github.com/actions/upload-artifact/releases)
- [Commits](65c4c4a1dd...4cec3d8aa0)

---
updated-dependencies:
- dependency-name: actions/upload-artifact
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-03 10:26:39 -07:00
dependabot[bot] a5424b7e4c chore(deps): bump github/codeql-action from 3.28.9 to 3.28.10
Bumps [github/codeql-action](https://github.com/github/codeql-action) from 3.28.9 to 3.28.10.
- [Release notes](https://github.com/github/codeql-action/releases)
- [Changelog](https://github.com/github/codeql-action/blob/main/CHANGELOG.md)
- [Commits](9e8d0789d4...b56ba49b26)

---
updated-dependencies:
- dependency-name: github/codeql-action
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-03 10:26:06 -07:00
dependabot[bot] 2e3abbcba0 chore(deps): bump docker/setup-qemu-action from 3.4.0 to 3.6.0
Bumps [docker/setup-qemu-action](https://github.com/docker/setup-qemu-action) from 3.4.0 to 3.6.0.
- [Release notes](https://github.com/docker/setup-qemu-action/releases)
- [Commits](4574d27a47...29109295f8)

---
updated-dependencies:
- dependency-name: docker/setup-qemu-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-03 10:25:04 -07:00
dependabot[bot] 720113d026 chore(deps): bump docker/setup-buildx-action from 3.9.0 to 3.10.0
Bumps [docker/setup-buildx-action](https://github.com/docker/setup-buildx-action) from 3.9.0 to 3.10.0.
- [Release notes](https://github.com/docker/setup-buildx-action/releases)
- [Commits](f7ce87c1d6...b5ca514318)

---
updated-dependencies:
- dependency-name: docker/setup-buildx-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-03 10:24:27 -07:00
dependabot[bot] 80bba4fb9f chore(deps): bump docker/build-push-action from 6.13.0 to 6.15.0
Bumps [docker/build-push-action](https://github.com/docker/build-push-action) from 6.13.0 to 6.15.0.
- [Release notes](https://github.com/docker/build-push-action/releases)
- [Commits](ca877d9245...471d1dc4e0)

---
updated-dependencies:
- dependency-name: docker/build-push-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-03 10:23:56 -07:00
dependabot[bot] 2e474c5d0c
chore(deps): bump clap from 4.5.28 to 4.5.29 (#597)
Bumps [clap](https://github.com/clap-rs/clap) from 4.5.28 to 4.5.29.
- [Release notes](https://github.com/clap-rs/clap/releases)
- [Changelog](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md)
- [Commits](https://github.com/clap-rs/clap/compare/clap_complete-v4.5.28...clap_complete-v4.5.29)

---
updated-dependencies:
- dependency-name: clap
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-02-17 09:36:09 -06:00
dependabot[bot] ceda608718
chore(deps): bump taiki-e/install-action from 2.48.9 to 2.48.13 (#596)
Bumps [taiki-e/install-action](https://github.com/taiki-e/install-action) from 2.48.9 to 2.48.13.
- [Release notes](https://github.com/taiki-e/install-action/releases)
- [Changelog](https://github.com/taiki-e/install-action/blob/main/CHANGELOG.md)
- [Commits](995f97569c...ad0904967b)

---
updated-dependencies:
- dependency-name: taiki-e/install-action
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-02-17 09:35:32 -06:00
dependabot[bot] 6b9d6fd26f chore(deps): bump uuid from 1.12.1 to 1.13.1
Bumps [uuid](https://github.com/uuid-rs/uuid) from 1.12.1 to 1.13.1.
- [Release notes](https://github.com/uuid-rs/uuid/releases)
- [Commits](https://github.com/uuid-rs/uuid/compare/1.12.1...1.13.1)

---
updated-dependencies:
- dependency-name: uuid
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-02-11 12:35:57 -05:00
dependabot[bot] 44753eb992 chore(deps): bump testcontainers from 0.23.1 to 0.23.2
Bumps [testcontainers](https://github.com/testcontainers/testcontainers-rs) from 0.23.1 to 0.23.2.
- [Release notes](https://github.com/testcontainers/testcontainers-rs/releases)
- [Changelog](https://github.com/testcontainers/testcontainers-rs/blob/main/CHANGELOG.md)
- [Commits](https://github.com/testcontainers/testcontainers-rs/compare/0.23.1...0.23.2)

---
updated-dependencies:
- dependency-name: testcontainers
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-02-11 12:29:38 -05:00
dependabot[bot] c5694226c8 chore(deps): bump taiki-e/install-action from 2.48.1 to 2.48.9
Bumps [taiki-e/install-action](https://github.com/taiki-e/install-action) from 2.48.1 to 2.48.9.
- [Release notes](https://github.com/taiki-e/install-action/releases)
- [Changelog](https://github.com/taiki-e/install-action/blob/main/CHANGELOG.md)
- [Commits](510b3ecd79...995f97569c)

---
updated-dependencies:
- dependency-name: taiki-e/install-action
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-02-11 12:29:22 -05:00
dependabot[bot] c808f7a07a chore(deps): bump ulid from 1.1.4 to 1.2.0
Bumps [ulid](https://github.com/dylanhart/ulid-rs) from 1.1.4 to 1.2.0.
- [Commits](https://github.com/dylanhart/ulid-rs/compare/v1.1.4...v1.2.0)

---
updated-dependencies:
- dependency-name: ulid
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-02-11 12:29:17 -05:00
dependabot[bot] eaebdd918e chore(deps): bump clap from 4.5.27 to 4.5.28
Bumps [clap](https://github.com/clap-rs/clap) from 4.5.27 to 4.5.28.
- [Release notes](https://github.com/clap-rs/clap/releases)
- [Changelog](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md)
- [Commits](https://github.com/clap-rs/clap/compare/clap_complete-v4.5.27...clap_complete-v4.5.28)

---
updated-dependencies:
- dependency-name: clap
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-02-11 12:29:12 -05:00
dependabot[bot] e756aa038f chore(deps): bump docker/setup-buildx-action from 3.8.0 to 3.9.0
Bumps [docker/setup-buildx-action](https://github.com/docker/setup-buildx-action) from 3.8.0 to 3.9.0.
- [Release notes](https://github.com/docker/setup-buildx-action/releases)
- [Commits](6524bf65af...f7ce87c1d6)

---
updated-dependencies:
- dependency-name: docker/setup-buildx-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-02-11 12:28:58 -05:00
dependabot[bot] ba04447356 chore(deps): bump docker/setup-qemu-action from 3.3.0 to 3.4.0
Bumps [docker/setup-qemu-action](https://github.com/docker/setup-qemu-action) from 3.3.0 to 3.4.0.
- [Release notes](https://github.com/docker/setup-qemu-action/releases)
- [Commits](53851d1459...4574d27a47)

---
updated-dependencies:
- dependency-name: docker/setup-qemu-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-02-11 12:28:46 -05:00
dependabot[bot] 386eebd33f chore(deps): bump github/codeql-action from 3.28.8 to 3.28.9
Bumps [github/codeql-action](https://github.com/github/codeql-action) from 3.28.8 to 3.28.9.
- [Release notes](https://github.com/github/codeql-action/releases)
- [Changelog](https://github.com/github/codeql-action/blob/main/CHANGELOG.md)
- [Commits](dd746615b3...9e8d0789d4)

---
updated-dependencies:
- dependency-name: github/codeql-action
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-02-11 12:28:33 -05:00
Brooks Townsend 1926bf070f chore: ignore dependabot commits in release notes
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2025-02-11 11:48:34 -05:00
Brooks Townsend ddb912553a chore: patch all for release
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2025-02-11 11:48:34 -05:00
Stuart Harris bdf06dc5d9 only check configs that declare properties for unique names.
Signed-off-by: Stuart Harris <stuart.harris@red-badger.com>
2025-02-07 09:09:49 -05:00
dependabot[bot] ffc655e749 chore(deps): bump hyper from 1.5.2 to 1.6.0
Bumps [hyper](https://github.com/hyperium/hyper) from 1.5.2 to 1.6.0.
- [Release notes](https://github.com/hyperium/hyper/releases)
- [Changelog](https://github.com/hyperium/hyper/blob/master/CHANGELOG.md)
- [Commits](https://github.com/hyperium/hyper/compare/v1.5.2...v1.6.0)

---
updated-dependencies:
- dependency-name: hyper
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-02-05 10:47:05 -05:00
dependabot[bot] 7218266206 chore(deps): bump async-trait from 0.1.85 to 0.1.86
Bumps [async-trait](https://github.com/dtolnay/async-trait) from 0.1.85 to 0.1.86.
- [Release notes](https://github.com/dtolnay/async-trait/releases)
- [Commits](https://github.com/dtolnay/async-trait/compare/0.1.85...0.1.86)

---
updated-dependencies:
- dependency-name: async-trait
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-02-04 18:47:51 -05:00
dependabot[bot] cb00233aaa chore(deps): bump serde_json from 1.0.137 to 1.0.138
Bumps [serde_json](https://github.com/serde-rs/json) from 1.0.137 to 1.0.138.
- [Release notes](https://github.com/serde-rs/json/releases)
- [Commits](https://github.com/serde-rs/json/compare/v1.0.137...v1.0.138)

---
updated-dependencies:
- dependency-name: serde_json
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-02-04 18:44:45 -05:00
dependabot[bot] 7a94b8565c chore(deps): bump bytes from 1.9.0 to 1.10.0
Bumps [bytes](https://github.com/tokio-rs/bytes) from 1.9.0 to 1.10.0.
- [Release notes](https://github.com/tokio-rs/bytes/releases)
- [Changelog](https://github.com/tokio-rs/bytes/blob/master/CHANGELOG.md)
- [Commits](https://github.com/tokio-rs/bytes/compare/v1.9.0...v1.10.0)

---
updated-dependencies:
- dependency-name: bytes
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-02-04 18:44:19 -05:00
dependabot[bot] 66ca4cc9f5 chore(deps): bump softprops/action-gh-release from 2.0.9 to 2.2.1
Bumps [softprops/action-gh-release](https://github.com/softprops/action-gh-release) from 2.0.9 to 2.2.1.
- [Release notes](https://github.com/softprops/action-gh-release/releases)
- [Changelog](https://github.com/softprops/action-gh-release/blob/master/CHANGELOG.md)
- [Commits](e7a8f85e1c...c95fe14893)

---
updated-dependencies:
- dependency-name: softprops/action-gh-release
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-02-03 13:55:15 -05:00
luk3ark c8e715a088 fix(ci): correct wadm WIT tarball structure
Signed-off-by: luk3ark <luk3ark@gmail.com>
2025-02-03 13:47:34 -05:00
dependabot[bot] a5066c16dd
chore(deps): bump taiki-e/install-action from 2.47.25 to 2.48.1 (#579)
Bumps [taiki-e/install-action](https://github.com/taiki-e/install-action) from 2.47.25 to 2.48.1.
- [Release notes](https://github.com/taiki-e/install-action/releases)
- [Changelog](https://github.com/taiki-e/install-action/blob/main/CHANGELOG.md)
- [Commits](https://github.com/taiki-e/install-action/compare/v2.47.25...510b3ecd7915856b6909305605afa7a8a57c1b04)

---
updated-dependencies:
- dependency-name: taiki-e/install-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-02-03 10:59:16 -06:00
dependabot[bot] e4de5fc83e
chore(deps): bump actions/setup-python from 5.3.0 to 5.4.0 (#578)
Bumps [actions/setup-python](https://github.com/actions/setup-python) from 5.3.0 to 5.4.0.
- [Release notes](https://github.com/actions/setup-python/releases)
- [Commits](0b93645e9f...42375524e2)

---
updated-dependencies:
- dependency-name: actions/setup-python
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-02-03 10:58:42 -06:00
dependabot[bot] b26427c3ec
chore(deps): bump github/codeql-action from 3.27.9 to 3.28.8 (#577)
Bumps [github/codeql-action](https://github.com/github/codeql-action) from 3.27.9 to 3.28.8.
- [Release notes](https://github.com/github/codeql-action/releases)
- [Changelog](https://github.com/github/codeql-action/blob/main/CHANGELOG.md)
- [Commits](df409f7d92...dd746615b3)

---
updated-dependencies:
- dependency-name: github/codeql-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-02-03 10:13:28 -06:00
Taylor Thomas 2113aa3781 chore: Bumps versions for patch release
I also did a little housekeeping here to fix a bunch of clippy lints and
updated the flake inputs

Signed-off-by: Taylor Thomas <taylor@cosmonic.com>
2025-01-30 12:59:42 -05:00
dependabot[bot] 55444f27f2 chore(deps): bump uuid from 1.12.0 to 1.12.1
Bumps [uuid](https://github.com/uuid-rs/uuid) from 1.12.0 to 1.12.1.
- [Release notes](https://github.com/uuid-rs/uuid/releases)
- [Commits](https://github.com/uuid-rs/uuid/compare/1.12.0...1.12.1)

---
updated-dependencies:
- dependency-name: uuid
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-01-27 10:35:51 -05:00
dependabot[bot] 797eddf5c1 chore(deps): bump docker/build-push-action from 6.10.0 to 6.13.0
Bumps [docker/build-push-action](https://github.com/docker/build-push-action) from 6.10.0 to 6.13.0.
- [Release notes](https://github.com/docker/build-push-action/releases)
- [Commits](48aba3b46d...ca877d9245)

---
updated-dependencies:
- dependency-name: docker/build-push-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-01-27 10:28:22 -05:00
dependabot[bot] 55be7d8558 chore(deps): bump taiki-e/install-action from 2.46.11 to 2.47.25
Bumps [taiki-e/install-action](https://github.com/taiki-e/install-action) from 2.46.11 to 2.47.25.
- [Release notes](https://github.com/taiki-e/install-action/releases)
- [Changelog](https://github.com/taiki-e/install-action/blob/main/CHANGELOG.md)
- [Commits](ed8c79bccf...1936c8cfe3)

---
updated-dependencies:
- dependency-name: taiki-e/install-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-01-27 10:28:06 -05:00
dependabot[bot] 7d59eb4746 chore(deps): bump serde_json from 1.0.135 to 1.0.137
Bumps [serde_json](https://github.com/serde-rs/json) from 1.0.135 to 1.0.137.
- [Release notes](https://github.com/serde-rs/json/releases)
- [Commits](https://github.com/serde-rs/json/compare/v1.0.135...v1.0.137)

---
updated-dependencies:
- dependency-name: serde_json
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-01-27 10:27:47 -05:00
dependabot[bot] 4bb74d04fe chore(deps): bump wasmcloud-control-interface from 2.2.0 to 2.3.0
Bumps [wasmcloud-control-interface](https://github.com/wasmCloud/wasmCloud) from 2.2.0 to 2.3.0.
- [Release notes](https://github.com/wasmCloud/wasmCloud/releases)
- [Changelog](https://github.com/wasmCloud/wasmCloud/blob/main/CHANGELOG.md)
- [Commits](https://github.com/wasmCloud/wasmCloud/commits)

---
updated-dependencies:
- dependency-name: wasmcloud-control-interface
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-01-27 10:27:31 -05:00
dependabot[bot] 1f902b248c chore(deps): bump helm/chart-testing-action from 2.6.1 to 2.7.0
Bumps [helm/chart-testing-action](https://github.com/helm/chart-testing-action) from 2.6.1 to 2.7.0.
- [Release notes](https://github.com/helm/chart-testing-action/releases)
- [Commits](e6669bcd63...0d28d3144d)

---
updated-dependencies:
- dependency-name: helm/chart-testing-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-01-27 10:27:23 -05:00
dependabot[bot] 34fb5e69b2 chore(deps): bump clap from 4.5.26 to 4.5.27
Bumps [clap](https://github.com/clap-rs/clap) from 4.5.26 to 4.5.27.
- [Release notes](https://github.com/clap-rs/clap/releases)
- [Changelog](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md)
- [Commits](https://github.com/clap-rs/clap/compare/clap_complete-v4.5.26...clap_complete-v4.5.27)

---
updated-dependencies:
- dependency-name: clap
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-01-27 10:27:20 -05:00
luk3ark efeb6a020d fix(wadm): correct status topic name in WADM
Signed-off-by: luk3ark <luk3ark@gmail.com>
2025-01-27 10:25:29 -05:00
Taylor Thomas e492823998 fix(ci): Right tarball location
Signed-off-by: Taylor Thomas <taylor@cosmonic.com>
2025-01-24 14:15:20 -07:00
Taylor Thomas ad2cb51238 fix(ci): Passes the right directory for wit builds
Signed-off-by: Taylor Thomas <taylor@cosmonic.com>
2025-01-24 12:46:09 -07:00
luk3ark 95633628af feat(ci): add OCI publishing to WADM WIT workflow
Signed-off-by: luk3ark <luk3ark@gmail.com>
2025-01-23 16:53:51 -07:00
dependabot[bot] 9fbc598eff
chore(deps): bump actions/checkout from 4.1.1 to 4.2.2 (#557)
Bumps [actions/checkout](https://github.com/actions/checkout) from 4.1.1 to 4.2.2.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v4.1.1...11bd71901bbe5b1630ceea73d27597364c9af683)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-01-20 18:26:39 +00:00
dependabot[bot] 830b02545a
chore(deps): bump indexmap from 2.7.0 to 2.7.1 (#554)
Bumps [indexmap](https://github.com/indexmap-rs/indexmap) from 2.7.0 to 2.7.1.
- [Changelog](https://github.com/indexmap-rs/indexmap/blob/master/RELEASES.md)
- [Commits](https://github.com/indexmap-rs/indexmap/compare/2.7.0...2.7.1)

---
updated-dependencies:
- dependency-name: indexmap
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-01-20 12:26:05 -06:00
dependabot[bot] 9475e4c542
chore(deps): bump semver from 1.0.24 to 1.0.25 (#553)
Bumps [semver](https://github.com/dtolnay/semver) from 1.0.24 to 1.0.25.
- [Release notes](https://github.com/dtolnay/semver/releases)
- [Commits](https://github.com/dtolnay/semver/compare/1.0.24...1.0.25)

---
updated-dependencies:
- dependency-name: semver
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-01-20 12:25:47 -06:00
dependabot[bot] 84d4f48783
chore(deps): bump thiserror from 2.0.9 to 2.0.11 (#552)
Bumps [thiserror](https://github.com/dtolnay/thiserror) from 2.0.9 to 2.0.11.
- [Release notes](https://github.com/dtolnay/thiserror/releases)
- [Commits](https://github.com/dtolnay/thiserror/compare/2.0.9...2.0.11)

---
updated-dependencies:
- dependency-name: thiserror
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-01-20 12:25:25 -06:00
dependabot[bot] 95d256215b
chore(deps): bump uuid from 1.11.0 to 1.12.0 (#551)
Bumps [uuid](https://github.com/uuid-rs/uuid) from 1.11.0 to 1.12.0.
- [Release notes](https://github.com/uuid-rs/uuid/releases)
- [Commits](https://github.com/uuid-rs/uuid/compare/1.11.0...1.12.0)

---
updated-dependencies:
- dependency-name: uuid
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-01-20 12:24:55 -06:00
dependabot[bot] 7e97f6e615
chore(deps): bump tokio from 1.42.0 to 1.43.0 (#550)
Bumps [tokio](https://github.com/tokio-rs/tokio) from 1.42.0 to 1.43.0.
- [Release notes](https://github.com/tokio-rs/tokio/releases)
- [Commits](https://github.com/tokio-rs/tokio/compare/tokio-1.42.0...tokio-1.43.0)

---
updated-dependencies:
- dependency-name: tokio
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-01-20 12:23:55 -06:00
dependabot[bot] bcc2b7f461
chore(deps): bump Swatinem/rust-cache from 2.7.5 to 2.7.7 (#555)
Bumps [Swatinem/rust-cache](https://github.com/swatinem/rust-cache) from 2.7.5 to 2.7.7.
- [Release notes](https://github.com/swatinem/rust-cache/releases)
- [Changelog](https://github.com/Swatinem/rust-cache/blob/master/CHANGELOG.md)
- [Commits](82a92a6e8f...f0deed1e0e)

---
updated-dependencies:
- dependency-name: Swatinem/rust-cache
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-01-20 12:22:05 -06:00
dependabot[bot] 2aa35a9514
chore(deps): bump helm/kind-action from 1.11.0 to 1.12.0 (#556)
Bumps [helm/kind-action](https://github.com/helm/kind-action) from 1.11.0 to 1.12.0.
- [Release notes](https://github.com/helm/kind-action/releases)
- [Commits](ae94020eaf...a1b0e39133)

---
updated-dependencies:
- dependency-name: helm/kind-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-01-20 12:21:14 -06:00
dependabot[bot] f504e8c1b2
chore(deps): bump docker/setup-qemu-action from 3.2.0 to 3.3.0 (#558)
Bumps [docker/setup-qemu-action](https://github.com/docker/setup-qemu-action) from 3.2.0 to 3.3.0.
- [Release notes](https://github.com/docker/setup-qemu-action/releases)
- [Commits](49b3bc8e6b...53851d1459)

---
updated-dependencies:
- dependency-name: docker/setup-qemu-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-01-20 12:20:14 -06:00
dependabot[bot] 7658a4e654
chore(deps): bump actions/upload-artifact from 4.4.3 to 4.6.0 (#559)
Bumps [actions/upload-artifact](https://github.com/actions/upload-artifact) from 4.4.3 to 4.6.0.
- [Release notes](https://github.com/actions/upload-artifact/releases)
- [Commits](https://github.com/actions/upload-artifact/compare/v4.4.3...65c4c4a1ddee5b72f698fdd19549f0f0fb45cf08)

---
updated-dependencies:
- dependency-name: actions/upload-artifact
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-01-20 12:19:09 -06:00
Brooks Townsend 64e3d93118 refactor(wadm): better visibility controls
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2025-01-17 14:11:59 -05:00
Brooks Townsend 41e6e352cc chore: bump wadm to 0.20.0
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2025-01-17 14:11:59 -05:00
Brooks Townsend d169b1be62 test(wadm): fix relative path
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2025-01-17 14:11:59 -05:00
Brooks Townsend 4676947211 ci(wadm): test wadm crate feature combinations
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2025-01-17 14:11:59 -05:00
Brooks Townsend 78e077604e fix(wadm): properly gate imports behind http_admin
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2025-01-17 14:11:59 -05:00
Brooks Townsend a7a287ce7b chore(wadm): add http_admin feature
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2025-01-17 14:11:59 -05:00
Brooks Townsend 90dac77412 refactor: use cfg_attr to gate CLI config
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2025-01-17 14:11:59 -05:00
Brooks Townsend ab9ad612ee chore(deps): simplify CLI deps
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2025-01-17 14:11:59 -05:00
Brooks Townsend 18a66b2640 refactor(*)!: move start functionality to wadm lib
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2025-01-17 14:11:59 -05:00
Roman Volosatovs 13faa57248 feat: add HTTP admin endpoint
Signed-off-by: Roman Volosatovs <rvolosatovs@riseup.net>
2025-01-14 09:32:34 -05:00
dependabot[bot] b167486f48 chore(deps): bump utoipa from 5.3.0 to 5.3.1
Bumps [utoipa](https://github.com/juhaku/utoipa) from 5.3.0 to 5.3.1.
- [Release notes](https://github.com/juhaku/utoipa/releases)
- [Changelog](https://github.com/juhaku/utoipa/blob/master/utoipa-rapidoc/CHANGELOG.md)
- [Commits](https://github.com/juhaku/utoipa/compare/utoipa-5.3.0...utoipa-5.3.1)

---
updated-dependencies:
- dependency-name: utoipa
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-01-13 10:01:27 -05:00
dependabot[bot] 52500b4787 chore(deps): bump serde_json from 1.0.134 to 1.0.135
Bumps [serde_json](https://github.com/serde-rs/json) from 1.0.134 to 1.0.135.
- [Release notes](https://github.com/serde-rs/json/releases)
- [Commits](https://github.com/serde-rs/json/compare/v1.0.134...v1.0.135)

---
updated-dependencies:
- dependency-name: serde_json
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-01-13 10:01:09 -05:00
dependabot[bot] 8df7924598 chore(deps): bump async-trait from 0.1.83 to 0.1.85
Bumps [async-trait](https://github.com/dtolnay/async-trait) from 0.1.83 to 0.1.85.
- [Release notes](https://github.com/dtolnay/async-trait/releases)
- [Commits](https://github.com/dtolnay/async-trait/compare/0.1.83...0.1.85)

---
updated-dependencies:
- dependency-name: async-trait
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-01-13 10:00:53 -05:00
dependabot[bot] 59e7e66562 chore(deps): bump clap from 4.5.23 to 4.5.26
Bumps [clap](https://github.com/clap-rs/clap) from 4.5.23 to 4.5.26.
- [Release notes](https://github.com/clap-rs/clap/releases)
- [Changelog](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md)
- [Commits](https://github.com/clap-rs/clap/compare/clap_complete-v4.5.23...clap_complete-v4.5.26)

---
updated-dependencies:
- dependency-name: clap
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-01-13 10:00:49 -05:00
dependabot[bot] f88140893b chore(deps): bump ulid from 1.1.3 to 1.1.4
Bumps [ulid](https://github.com/dylanhart/ulid-rs) from 1.1.3 to 1.1.4.
- [Commits](https://github.com/dylanhart/ulid-rs/compare/v1.1.3...v1.1.4)

---
updated-dependencies:
- dependency-name: ulid
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-01-13 10:00:44 -05:00
Brooks Townsend 77f5bc8961 test(validation): allow misnamed interface as warning
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2025-01-03 16:30:50 -05:00
Brooks Townsend e67c9e580c fix(types): warn on unknown interface
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2025-01-03 16:30:50 -05:00
Brooks Townsend 4243efdc8f release(*): bump versions
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2025-01-02 10:42:13 -05:00
Márk Kővári 40d8b50c0e test(e2e): add one e2e test with memory persistence
Signed-off-by: Márk Kővári <kovarimarkofficial@gmail.com>
2025-01-02 09:54:09 -05:00
Márk Kővári 5a4c13fe75 chore(nats): remove redundant possible values, clap already generates those

Signed-off-by: Márk Kővári <kovarimarkofficial@gmail.com>
2025-01-02 09:54:09 -05:00
Márk Kővári b6b398ecd7 chore(stream-persistence): remove hidden and lowercase options
Signed-off-by: Márk Kővári <kovarimarkofficial@gmail.com>
2025-01-02 09:54:09 -05:00
Márk Kővári 6fc79d3c81 fix(typo): persistance to persistence
Signed-off-by: Márk Kővári <kovarimarkofficial@gmail.com>
2025-01-02 09:54:09 -05:00
Márk Kővári 7a811a6737 fix(clap): enum pascalcase rename
Signed-off-by: Márk Kővári <kovarimarkofficial@gmail.com>
2025-01-02 09:54:09 -05:00
Márk Kővári 1448671649 debug(storage): add storage default value fails WIP
Signed-off-by: Márk Kővári <kovarimarkofficial@gmail.com>
2025-01-02 09:54:09 -05:00
Márk Kővári f596dadcb8 fix(clippy): resolve clippy warning with ToString
Signed-off-by: Márk Kővári <kovarimarkofficial@gmail.com>
2025-01-02 09:54:09 -05:00
Márk Kővári ca868c5f79 feat(wadm): cli enable memory stream usage with flags
Signed-off-by: Márk Kővári <kovarimarkofficial@gmail.com>
2025-01-02 09:54:09 -05:00
luk3ark 11aa88b73f feat(deps): removed default std feature and updated to target_family
Signed-off-by: luk3ark <luk3ark@gmail.com>
2025-01-02 09:53:05 -05:00
luk3ark 6b768c1607 feat(deps): change feature gate back to wit
Signed-off-by: luk3ark <luk3ark@gmail.com>
2025-01-02 09:53:05 -05:00
luk3ark c26eb6d2fd feat(deps): removed redundant dependencies
Signed-off-by: luk3ark <luk3ark@gmail.com>
2025-01-02 09:53:05 -05:00
luk3ark f34b19a79b feat(deps): add separate wit-wasm and wit-std features
Signed-off-by: luk3ark <luk3ark@gmail.com>
2025-01-02 09:53:05 -05:00
luk3ark 532e4930ef feat(deps): add separate wit-wasm and wit-std features
Signed-off-by: luk3ark <luk3ark@gmail.com>
2025-01-02 09:53:05 -05:00
luk3ark 6004c9a136 allow unique interfaces across duplicate links and test
Signed-off-by: luk3ark <luk3ark@gmail.com>
2024-12-31 11:12:40 -05:00
dependabot[bot] 4af2a727c3 chore(deps): bump serde from 1.0.216 to 1.0.217
Bumps [serde](https://github.com/serde-rs/serde) from 1.0.216 to 1.0.217.
- [Release notes](https://github.com/serde-rs/serde/releases)
- [Commits](https://github.com/serde-rs/serde/compare/v1.0.216...v1.0.217)

---
updated-dependencies:
- dependency-name: serde
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-12-30 10:52:20 -07:00
dependabot[bot] d92b0b7e6a chore(deps): bump utoipa from 5.2.0 to 5.3.0
Bumps [utoipa](https://github.com/juhaku/utoipa) from 5.2.0 to 5.3.0.
- [Release notes](https://github.com/juhaku/utoipa/releases)
- [Changelog](https://github.com/juhaku/utoipa/blob/master/utoipa-rapidoc/CHANGELOG.md)
- [Commits](https://github.com/juhaku/utoipa/compare/utoipa-5.2.0...utoipa-5.3.0)

---
updated-dependencies:
- dependency-name: utoipa
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-12-23 10:47:12 -05:00
dependabot[bot] ab26db73b7 chore(deps): bump serde_json from 1.0.133 to 1.0.134
Bumps [serde_json](https://github.com/serde-rs/json) from 1.0.133 to 1.0.134.
- [Release notes](https://github.com/serde-rs/json/releases)
- [Commits](https://github.com/serde-rs/json/compare/v1.0.133...v1.0.134)

---
updated-dependencies:
- dependency-name: serde_json
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-12-23 10:47:00 -05:00
dependabot[bot] 229411893a chore(deps): bump anyhow from 1.0.94 to 1.0.95
Bumps [anyhow](https://github.com/dtolnay/anyhow) from 1.0.94 to 1.0.95.
- [Release notes](https://github.com/dtolnay/anyhow/releases)
- [Commits](https://github.com/dtolnay/anyhow/compare/1.0.94...1.0.95)

---
updated-dependencies:
- dependency-name: anyhow
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-12-23 10:46:45 -05:00
dependabot[bot] e2de3fe6b8 chore(deps): bump thiserror from 2.0.7 to 2.0.9
Bumps [thiserror](https://github.com/dtolnay/thiserror) from 2.0.7 to 2.0.9.
- [Release notes](https://github.com/dtolnay/thiserror/releases)
- [Commits](https://github.com/dtolnay/thiserror/compare/2.0.7...2.0.9)

---
updated-dependencies:
- dependency-name: thiserror
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-12-23 10:46:41 -05:00
Vikrant Palle 062130e6f1 improve test_healthy_providers_return_healthy_status unit test
Signed-off-by: Vikrant Palle <vikrantpalle@gmail.com>
2024-12-23 09:10:29 -05:00
Vikrant Palle df0bf72cde nit: formatting fixes
Signed-off-by: Vikrant Palle <vikrantpalle@gmail.com>
2024-12-23 09:10:29 -05:00
Vikrant Palle dad1bd9f66 use StatusType::Failed in scaler status
Signed-off-by: Vikrant Palle <vikrantpalle@gmail.com>
2024-12-23 09:10:29 -05:00
Vikrant Palle a0da5ef75e add unhealthy status to bindings
Signed-off-by: Vikrant Palle <vikrantpalle@gmail.com>
2024-12-23 09:10:29 -05:00
Vikrant Palle f1d68a87d5 refactor health check event handling
Signed-off-by: Vikrant Palle <vikrantpalle@gmail.com>
2024-12-23 09:10:29 -05:00
Vikrant Palle b67193a9f8 add unhealthy status type
Signed-off-by: Vikrant Palle <vikrantpalle@gmail.com>
2024-12-23 09:10:29 -05:00
Vikrant Palle 764e90ba1b add unit tests
Signed-off-by: Vikrant Palle <vikrantpalle@gmail.com>
2024-12-23 09:10:29 -05:00
Vikrant Palle 50b672ad30 reflect unhealthy providers in spreadscaler + daemonscaler
Signed-off-by: Vikrant Palle <vikrantpalle@gmail.com>
2024-12-23 09:10:29 -05:00
Márk Kővári 265f732fc8 feat(validation): warn for link props source_configs and target_configs
Signed-off-by: Márk Kővári <kovarimarkofficial@gmail.com>
2024-12-20 09:32:57 -07:00
Florian Fürstenberg b2a1082559 fix(server): Removed unneeded arguments for checking for duplicate link config names (#478)
Signed-off-by: Florian Fürstenberg <florian.fuerstenberg@posteo.de>
2024-12-19 09:21:22 -05:00
Florian Fürstenberg 341ae617ec fix(server): Added validation logic for duplicated link config names (#478)
Signed-off-by: Florian Fürstenberg <florian.fuerstenberg@posteo.de>
2024-12-19 09:21:22 -05:00
Florian Fürstenberg a6223a3f74 fix(server): Added validation for duplicated link config names (#478)
Signed-off-by: Florian Fürstenberg <florian.fuerstenberg@posteo.de>
2024-12-19 09:21:22 -05:00
Taylor Thomas 38cb50f364 chore: Polishes up flake with a few more clarifying comments
Signed-off-by: Taylor Thomas <taylor@cosmonic.com>
2024-12-17 12:30:03 -07:00
Roman Volosatovs 2b50ef2877 feat: filter `Cargo.toml`
Signed-off-by: Roman Volosatovs <rvolosatovs@riseup.net>
2024-12-17 12:30:03 -07:00
Taylor Thomas 97e9e32066 feat(flake): Attempts to break up deps some more in the flake
Signed-off-by: Taylor Thomas <taylor@cosmonic.com>
2024-12-17 12:30:03 -07:00
Taylor Thomas c2ae9f2643 feat: Adds flake
Adds a nix flake for usage in building things. It is still missing the
ability to run an e2e test and build docker images, but it does work
for both building and nix shell

Signed-off-by: Taylor Thomas <taylor@oftaylor.com>
2024-12-17 12:30:03 -07:00
Joonas Bergius 864acfd28e
chore(ci): Pin GitHub Actions dependencies (#523)
Signed-off-by: Joonas Bergius <joonas@cosmonic.com>
2024-12-17 12:04:35 -06:00
dependabot[bot] 994b881701 chore(deps): bump thiserror from 1.0.69 to 2.0.6
Bumps [thiserror](https://github.com/dtolnay/thiserror) from 1.0.69 to 2.0.6.
- [Release notes](https://github.com/dtolnay/thiserror/releases)
- [Commits](https://github.com/dtolnay/thiserror/compare/1.0.69...2.0.6)

---
updated-dependencies:
- dependency-name: thiserror
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-12-17 13:02:57 -05:00
dependabot[bot] 2cc4092daa
chore(deps): bump github/codeql-action from 3.27.6 to 3.27.9 (#520)
Bumps [github/codeql-action](https://github.com/github/codeql-action) from 3.27.6 to 3.27.9.
- [Release notes](https://github.com/github/codeql-action/releases)
- [Changelog](https://github.com/github/codeql-action/blob/main/CHANGELOG.md)
- [Commits](aa57810251...df409f7d92)

---
updated-dependencies:
- dependency-name: github/codeql-action
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-12-16 16:30:52 -06:00
dependabot[bot] e1d665416e
chore(deps): bump helm/kind-action from 1.10.0 to 1.11.0 (#519)
Bumps [helm/kind-action](https://github.com/helm/kind-action) from 1.10.0 to 1.11.0.
- [Release notes](https://github.com/helm/kind-action/releases)
- [Commits](https://github.com/helm/kind-action/compare/v1.10.0...v1.11.0)

---
updated-dependencies:
- dependency-name: helm/kind-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-12-16 11:16:29 -06:00
dependabot[bot] 6e8eb504c9
chore(deps): bump ossf/scorecard-action from 2.3.1 to 2.4.0 (#521)
Bumps [ossf/scorecard-action](https://github.com/ossf/scorecard-action) from 2.3.1 to 2.4.0.
- [Release notes](https://github.com/ossf/scorecard-action/releases)
- [Changelog](https://github.com/ossf/scorecard-action/blob/main/RELEASE.md)
- [Commits](0864cf1902...62b2cac7ed)

---
updated-dependencies:
- dependency-name: ossf/scorecard-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-12-16 11:15:47 -06:00
luk3ark 7d80eca6aa remove unneeded wasm feature and revert naming
Signed-off-by: luk3ark <luk3ark@gmail.com>
2024-12-13 12:17:41 -07:00
luk3ark 54bf5cbb61 added feature flag for unused import
Signed-off-by: luk3ark <luk3ark@gmail.com>
2024-12-13 12:17:41 -07:00
luk3ark 65cfd337f6 fix
Signed-off-by: luk3ark <luk3ark@gmail.com>
2024-12-13 12:17:41 -07:00
luk3ark 87c64bdcd9 fix test dependency
Signed-off-by: luk3ark <luk3ark@gmail.com>
2024-12-13 12:17:41 -07:00
luk3ark 505debf7ff fix typo
Signed-off-by: luk3ark <luk3ark@gmail.com>
2024-12-13 12:17:41 -07:00
luk3ark c898e2eb20 added wasm flag for wadm-types
Signed-off-by: luk3ark <luk3ark@gmail.com>
2024-12-13 12:17:41 -07:00
luk3ark 5919660776 added wasm flag for wadm-types
Signed-off-by: luk3ark <luk3ark@gmail.com>
2024-12-13 12:17:41 -07:00
Florian Fürstenberg c1db5ff946 fix(server): Added missing test for test_delete_noop (#502)
Signed-off-by: Florian Fürstenberg <florian.fuerstenberg@posteo.de>
2024-12-12 10:34:23 -07:00
Florian Fürstenberg 163c28269a fix(server): Cover DeleteResult::Noop for delete_model if no version was specified (#502)
Signed-off-by: Florian Fürstenberg <florian.fuerstenberg@posteo.de>
2024-12-12 10:34:23 -07:00
Vikrant Palle e9c7cf4ab1 nit: move hashset inside loop
Signed-off-by: Vikrant Palle <vikrantpalle@gmail.com>
2024-12-12 12:34:07 -05:00
Vikrant Palle f137a9ab60 change duplicate link definition
Signed-off-by: Vikrant Palle <vikrantpalle@gmail.com>
2024-12-12 12:34:07 -05:00
Vikrant Palle d9c3627547 add check for duplicate links
Signed-off-by: Vikrant Palle <vikrantpalle@gmail.com>
2024-12-12 12:34:07 -05:00
luk3ark e8fe31f0ed added explicit generates
Signed-off-by: luk3ark <luk3ark@gmail.com>
2024-12-12 09:59:48 -07:00
luk3ark 18e5566a5e chore: bump wadm-types to 0.9.0 for wit-bindgen-wrpc
Signed-off-by: luk3ark <luk3ark@gmail.com>
2024-12-12 09:59:48 -07:00
Joonas Bergius 2561838039
chore: Add Security Policy with link to the main repository (#508)
Signed-off-by: Joonas Bergius <joonas@bergi.us>
2024-12-10 10:14:34 -06:00
dependabot[bot] 8c0ea8263d chore(deps): bump chrono from 0.4.38 to 0.4.39
Bumps [chrono](https://github.com/chronotope/chrono) from 0.4.38 to 0.4.39.
- [Release notes](https://github.com/chronotope/chrono/releases)
- [Changelog](https://github.com/chronotope/chrono/blob/main/CHANGELOG.md)
- [Commits](https://github.com/chronotope/chrono/compare/v0.4.38...v0.4.39)

---
updated-dependencies:
- dependency-name: chrono
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-12-10 09:35:35 -05:00
Joonas Bergius ae8ab69f24
chore: Fix scorecard workflow spacing (#506)
Signed-off-by: Joonas Bergius <joonas@cosmonic.com>
2024-12-09 15:05:19 -05:00
dependabot[bot] 61b81112bd
chore(deps): bump tokio from 1.41.1 to 1.42.0 (#510)
Bumps [tokio](https://github.com/tokio-rs/tokio) from 1.41.1 to 1.42.0.
- [Release notes](https://github.com/tokio-rs/tokio/releases)
- [Commits](https://github.com/tokio-rs/tokio/compare/tokio-1.41.1...tokio-1.42.0)

---
updated-dependencies:
- dependency-name: tokio
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-12-09 12:29:26 -06:00
dependabot[bot] b2207ef41f
chore(deps): bump clap from 4.5.21 to 4.5.23 (#511)
Bumps [clap](https://github.com/clap-rs/clap) from 4.5.21 to 4.5.23.
- [Release notes](https://github.com/clap-rs/clap/releases)
- [Changelog](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md)
- [Commits](https://github.com/clap-rs/clap/compare/clap_complete-v4.5.21...clap_complete-v4.5.23)

---
updated-dependencies:
- dependency-name: clap
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-12-09 12:28:36 -06:00
dependabot[bot] 0cc63485f4
chore(deps): bump anyhow from 1.0.93 to 1.0.94 (#512)
Bumps [anyhow](https://github.com/dtolnay/anyhow) from 1.0.93 to 1.0.94.
- [Release notes](https://github.com/dtolnay/anyhow/releases)
- [Commits](https://github.com/dtolnay/anyhow/compare/1.0.93...1.0.94)

---
updated-dependencies:
- dependency-name: anyhow
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-12-09 10:38:40 -06:00
Joonas Bergius 31cf33a9b7
chore: Add OSSF Scorecard workflow (#504)
Signed-off-by: Joonas Bergius <joonas@cosmonic.com>
2024-12-06 15:28:16 -05:00
Joonas Bergius fb2b74532b
fix: RUSTSEC-2024-0402 (#503)
Signed-off-by: Joonas Bergius <joonas@cosmonic.com>
2024-12-06 15:28:01 -05:00
Ahmed Tadde ca5a63104a
fix: detect spread scaler requirements violation (#491)
---------

Signed-off-by: Ahmed <ahmedtadde@gmail.com>
Co-authored-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-12-05 01:21:58 -05:00
Joonas Bergius 21feab093f
chore(ci): Set token permissions for GitHub Actions workflows (#498)
Signed-off-by: Joonas Bergius <joonas@bergi.us>
2024-12-02 17:31:16 +00:00
dependabot[bot] eb6fce9255 chore(deps): bump thiserror from 1.0.65 to 1.0.69
Bumps [thiserror](https://github.com/dtolnay/thiserror) from 1.0.65 to 1.0.69.
- [Release notes](https://github.com/dtolnay/thiserror/releases)
- [Commits](https://github.com/dtolnay/thiserror/compare/1.0.65...1.0.69)

---
updated-dependencies:
- dependency-name: thiserror
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-12-02 10:38:08 -05:00
dependabot[bot] 087203cdbc chore(deps): bump indexmap from 2.6.0 to 2.7.0
Bumps [indexmap](https://github.com/indexmap-rs/indexmap) from 2.6.0 to 2.7.0.
- [Changelog](https://github.com/indexmap-rs/indexmap/blob/master/RELEASES.md)
- [Commits](https://github.com/indexmap-rs/indexmap/compare/2.6.0...2.7.0)

---
updated-dependencies:
- dependency-name: indexmap
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-12-02 10:38:01 -05:00
dependabot[bot] 6e35596a22 chore(deps): bump bytes from 1.8.0 to 1.9.0
Bumps [bytes](https://github.com/tokio-rs/bytes) from 1.8.0 to 1.9.0.
- [Release notes](https://github.com/tokio-rs/bytes/releases)
- [Changelog](https://github.com/tokio-rs/bytes/blob/master/CHANGELOG.md)
- [Commits](https://github.com/tokio-rs/bytes/compare/v1.8.0...v1.9.0)

---
updated-dependencies:
- dependency-name: bytes
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-12-02 10:37:45 -05:00
dependabot[bot] 2d47f32fc5
chore(deps): bump serde_json from 1.0.132 to 1.0.133 (#496)
Bumps [serde_json](https://github.com/serde-rs/json) from 1.0.132 to 1.0.133.
- [Release notes](https://github.com/serde-rs/json/releases)
- [Commits](https://github.com/serde-rs/json/compare/v1.0.132...v1.0.133)

---
updated-dependencies:
- dependency-name: serde_json
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-30 09:39:59 -06:00
Joonas Bergius 2c00cada86
chore(wadm-cli): prune unused dependencies (#487)
Signed-off-by: Joonas Bergius <joonas@cosmonic.com>
2024-11-20 00:29:15 -06:00
Joonas Bergius d1b9d925d2
chore(wadm): prune unused dependencies (#486)
Signed-off-by: Joonas Bergius <joonas@cosmonic.com>
2024-11-20 00:28:58 -06:00
Joonas Bergius db38c50600
chore(wadm-client): prune unused dependencies (#485)
Signed-off-by: Joonas Bergius <joonas@cosmonic.com>
2024-11-20 00:28:44 -06:00
Joonas Bergius 964a586ab6
chore(wadm-types): prune unused dependencies (#484)
Signed-off-by: Joonas Bergius <joonas@cosmonic.com>
2024-11-20 00:28:31 -06:00
Sudhanshu Pandey 6c425a198c
chore: Update the GitHub Action to set the correct tag for the Docker image (#493)
Signed-off-by: Sudhanshu Pandey <sp6370@nyu.edu>
Signed-off-by: Joonas Bergius <joonas@cosmonic.com>
Co-authored-by: Joonas Bergius <joonas@users.noreply.github.com>
2024-11-20 00:27:37 -06:00
dependabot[bot] 0fb04cfee4
chore(deps): bump serde from 1.0.214 to 1.0.215 (#495)
Bumps [serde](https://github.com/serde-rs/serde) from 1.0.214 to 1.0.215.
- [Release notes](https://github.com/serde-rs/serde/releases)
- [Commits](https://github.com/serde-rs/serde/compare/v1.0.214...v1.0.215)

---
updated-dependencies:
- dependency-name: serde
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-18 15:44:10 +00:00
dependabot[bot] 066eccdbd2
chore(deps): bump clap from 4.5.20 to 4.5.21 (#494)
Bumps [clap](https://github.com/clap-rs/clap) from 4.5.20 to 4.5.21.
- [Release notes](https://github.com/clap-rs/clap/releases)
- [Changelog](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md)
- [Commits](https://github.com/clap-rs/clap/compare/clap_complete-v4.5.20...clap_complete-v4.5.21)

---
updated-dependencies:
- dependency-name: clap
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-18 15:43:54 +00:00
dependabot[bot] 4bd2560bdd
chore(deps): bump anyhow from 1.0.92 to 1.0.93 (#490)
Bumps [anyhow](https://github.com/dtolnay/anyhow) from 1.0.92 to 1.0.93.
- [Release notes](https://github.com/dtolnay/anyhow/releases)
- [Commits](https://github.com/dtolnay/anyhow/compare/1.0.92...1.0.93)

---
updated-dependencies:
- dependency-name: anyhow
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-18 09:40:58 -06:00
dependabot[bot] 57e1807be8
chore(deps): bump tokio from 1.41.0 to 1.41.1 (#489)
Bumps [tokio](https://github.com/tokio-rs/tokio) from 1.41.0 to 1.41.1.
- [Release notes](https://github.com/tokio-rs/tokio/releases)
- [Commits](https://github.com/tokio-rs/tokio/compare/tokio-1.41.0...tokio-1.41.1)

---
updated-dependencies:
- dependency-name: tokio
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-18 09:40:50 -06:00
dependabot[bot] ef32c26fa0
chore(deps): bump serial_test from 3.1.1 to 3.2.0 (#488)
Bumps [serial_test](https://github.com/palfrey/serial_test) from 3.1.1 to 3.2.0.
- [Release notes](https://github.com/palfrey/serial_test/releases)
- [Commits](https://github.com/palfrey/serial_test/compare/v3.1.1...v3.2.0)

---
updated-dependencies:
- dependency-name: serial_test
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-18 09:40:18 -06:00
Victor Adossi 1c4b706b17
chore(dx): remove deprecated crates extension (#492)
Signed-off-by: Victor Adossi <vadossi@cosmonic.com>
2024-11-14 10:32:43 -07:00
Joonas Bergius c48802566e
chore: Bump client and types 0.7.1 (#483)
Signed-off-by: Joonas Bergius <joonas@cosmonic.com>
2024-11-09 09:34:16 -06:00
Joonas Bergius 42cc8672d1
chore(ci): pin zig to latest stable version (#482)
Signed-off-by: Joonas Bergius <joonas@cosmonic.com>
2024-11-09 09:29:39 -05:00
Joonas Bergius 9272799f62
chore: Bump wasmcloud-secrets-types (#481)
Signed-off-by: Joonas Bergius <joonas@cosmonic.com>
2024-11-09 09:29:30 -05:00
Joonas Bergius cebb511d28
chore: Bump wascap to 0.15.2 (#480)
Signed-off-by: Joonas Bergius <joonas@cosmonic.com>
2024-11-08 19:58:07 -06:00
dependabot[bot] d0faba952d chore(deps): bump serde from 1.0.213 to 1.0.214
Bumps [serde](https://github.com/serde-rs/serde) from 1.0.213 to 1.0.214.
- [Release notes](https://github.com/serde-rs/serde/releases)
- [Commits](https://github.com/serde-rs/serde/compare/v1.0.213...v1.0.214)

---
updated-dependencies:
- dependency-name: serde
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-04 09:44:49 -05:00
dependabot[bot] f59cfa2f7d chore(deps): bump utoipa from 5.1.3 to 5.2.0
Bumps [utoipa](https://github.com/juhaku/utoipa) from 5.1.3 to 5.2.0.
- [Release notes](https://github.com/juhaku/utoipa/releases)
- [Changelog](https://github.com/juhaku/utoipa/blob/master/utoipa-rapidoc/CHANGELOG.md)
- [Commits](https://github.com/juhaku/utoipa/compare/utoipa-5.1.3...utoipa-5.2.0)

---
updated-dependencies:
- dependency-name: utoipa
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-04 09:44:26 -05:00
dependabot[bot] 0e78489a56 chore(deps): bump anyhow from 1.0.91 to 1.0.92
Bumps [anyhow](https://github.com/dtolnay/anyhow) from 1.0.91 to 1.0.92.
- [Release notes](https://github.com/dtolnay/anyhow/releases)
- [Commits](https://github.com/dtolnay/anyhow/compare/1.0.91...1.0.92)

---
updated-dependencies:
- dependency-name: anyhow
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-04 09:44:20 -05:00
dependabot[bot] 466f6ff402 chore(deps): bump serde from 1.0.210 to 1.0.213
Bumps [serde](https://github.com/serde-rs/serde) from 1.0.210 to 1.0.213.
- [Release notes](https://github.com/serde-rs/serde/releases)
- [Commits](https://github.com/serde-rs/serde/compare/v1.0.210...v1.0.213)

---
updated-dependencies:
- dependency-name: serde
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-10-28 10:46:29 -04:00
dependabot[bot] bd2cc980c7 chore(deps): bump thiserror from 1.0.64 to 1.0.65
Bumps [thiserror](https://github.com/dtolnay/thiserror) from 1.0.64 to 1.0.65.
- [Release notes](https://github.com/dtolnay/thiserror/releases)
- [Commits](https://github.com/dtolnay/thiserror/compare/1.0.64...1.0.65)

---
updated-dependencies:
- dependency-name: thiserror
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-10-28 10:46:24 -04:00
dependabot[bot] 955905148c chore(deps): bump utoipa from 5.1.1 to 5.1.3
Bumps [utoipa](https://github.com/juhaku/utoipa) from 5.1.1 to 5.1.3.
- [Release notes](https://github.com/juhaku/utoipa/releases)
- [Changelog](https://github.com/juhaku/utoipa/blob/master/utoipa-rapidoc/CHANGELOG.md)
- [Commits](https://github.com/juhaku/utoipa/compare/utoipa-5.1.1...utoipa-5.1.3)

---
updated-dependencies:
- dependency-name: utoipa
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-10-28 10:46:17 -04:00
dependabot[bot] b9da5ee9f6 chore(deps): bump tokio from 1.40.0 to 1.41.0
Bumps [tokio](https://github.com/tokio-rs/tokio) from 1.40.0 to 1.41.0.
- [Release notes](https://github.com/tokio-rs/tokio/releases)
- [Commits](https://github.com/tokio-rs/tokio/compare/tokio-1.40.0...tokio-1.41.0)

---
updated-dependencies:
- dependency-name: tokio
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-10-28 10:45:55 -04:00
dependabot[bot] 81d41b3cd8 chore(deps): bump bytes from 1.7.2 to 1.8.0
Bumps [bytes](https://github.com/tokio-rs/bytes) from 1.7.2 to 1.8.0.
- [Release notes](https://github.com/tokio-rs/bytes/releases)
- [Changelog](https://github.com/tokio-rs/bytes/blob/master/CHANGELOG.md)
- [Commits](https://github.com/tokio-rs/bytes/compare/v1.7.2...v1.8.0)

---
updated-dependencies:
- dependency-name: bytes
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-10-28 10:45:47 -04:00
dependabot[bot] fbf29a9350 chore(deps): bump actions/setup-python from 5.2.0 to 5.3.0
Bumps [actions/setup-python](https://github.com/actions/setup-python) from 5.2.0 to 5.3.0.
- [Release notes](https://github.com/actions/setup-python/releases)
- [Commits](https://github.com/actions/setup-python/compare/v5.2.0...v5.3.0)

---
updated-dependencies:
- dependency-name: actions/setup-python
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-10-28 09:51:15 -04:00
Dan Norris cfc7c4504a
fix(chart): reference the correct value for the Jetstream domain (#467)
Pull the `jetstreamDomain` value from `config.wadm.nats.jetstreamDomain`
instead of `config.wadm.jetstreamDomain`, since the former is what we
define in the values file. It is also a more logical place to group the
value than where the chart was previously looking.

Also bump the default version of wadm to the latest one.

Signed-off-by: Dan Norris <protochron@users.noreply.github.com>
2024-10-24 10:29:48 -04:00
Brooks Townsend 6f29e72932 release(wadm): 0.18, types and client 0.7
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-10-22 16:37:08 -04:00
dependabot[bot] 9ac409a28d chore(deps): bump anyhow from 1.0.89 to 1.0.91
Bumps [anyhow](https://github.com/dtolnay/anyhow) from 1.0.89 to 1.0.91.
- [Release notes](https://github.com/dtolnay/anyhow/releases)
- [Commits](https://github.com/dtolnay/anyhow/compare/1.0.89...1.0.91)

---
updated-dependencies:
- dependency-name: anyhow
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-10-22 14:28:00 -04:00
dependabot[bot] 1309c9bf1f chore(deps): bump uuid from 1.10.0 to 1.11.0
Bumps [uuid](https://github.com/uuid-rs/uuid) from 1.10.0 to 1.11.0.
- [Release notes](https://github.com/uuid-rs/uuid/releases)
- [Commits](https://github.com/uuid-rs/uuid/compare/1.10.0...1.11.0)

---
updated-dependencies:
- dependency-name: uuid
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-10-22 14:25:52 -04:00
dependabot[bot] 54740fbf62 chore(deps): bump serde_json from 1.0.128 to 1.0.132
Bumps [serde_json](https://github.com/serde-rs/json) from 1.0.128 to 1.0.132.
- [Release notes](https://github.com/serde-rs/json/releases)
- [Commits](https://github.com/serde-rs/json/compare/1.0.128...1.0.132)

---
updated-dependencies:
- dependency-name: serde_json
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-10-22 14:22:09 -04:00
Joonas Bergius eb34a928c6
fix(wadm-types): Address RUSTSEC-2024-0370 (#461)
* fix(wadm-types): Address RUSTSEC-2024-0370

Signed-off-by: Joonas Bergius <joonas@cosmonic.com>

* chore: Bump wadm-types version

Signed-off-by: Joonas Bergius <joonas@cosmonic.com>

---------

Signed-off-by: Joonas Bergius <joonas@cosmonic.com>
2024-10-22 11:20:10 -07:00
Joonas Bergius 4d2fc1a406
chore: Swap wolfi-base source to cgr.dev instead of Docker Hub (#462)
Signed-off-by: Joonas Bergius <joonas@cosmonic.com>
2024-10-21 16:15:25 -05:00
Brooks Townsend 08da607ad9 release(wadm): v0.18.0-rc.1
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-10-21 17:07:14 -04:00
Brooks Townsend 9972d4d903 refactor(wadm): address clippy warnings
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-10-18 12:49:04 -04:00
Brooks Townsend b459bea3fb test(upgrades): add link name for wasmCloud 1.3
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-10-18 12:49:04 -04:00
Brooks Townsend b7ef888072 chore: prefix shared annotation with experimental
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-10-18 12:49:04 -04:00
Brooks Townsend aa2689ab36 feat(server): ensure deployed apps find shared components
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-10-18 12:49:04 -04:00
Brooks Townsend ec08ba7316 test: add invalid shared tests
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-10-18 12:49:04 -04:00
Brooks Townsend 471f07fe67 chore(wit): update bindings for shared applications
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-10-18 12:49:04 -04:00
Brooks Townsend 0dbb3d102c test: add e2e_shared integration test
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-10-18 12:49:04 -04:00
Brooks Townsend 8830527b43 feat: add status_scaler
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-10-18 12:49:04 -04:00
Brooks Townsend 434aeafbb8 feat!: support shared components and providers
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>

fix: shared components id generation

Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-10-18 12:49:04 -04:00
Taylor Thomas 05d5242d27 chore: Pull in slightly older version of regex
Because transitive deps suck. We need this so we can update the OCI deps
in the main host

Signed-off-by: Taylor Thomas <taylor@cosmonic.com>
2024-10-15 12:21:11 -06:00
dependabot[bot] 77c012d6d1
chore(deps): bump clap from 4.5.19 to 4.5.20 (#454)
Bumps [clap](https://github.com/clap-rs/clap) from 4.5.19 to 4.5.20.
- [Release notes](https://github.com/clap-rs/clap/releases)
- [Changelog](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md)
- [Commits](https://github.com/clap-rs/clap/compare/clap_complete-v4.5.19...clap_complete-v4.5.20)

---
updated-dependencies:
- dependency-name: clap
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-14 09:06:42 -05:00
Brooks Townsend 3a066c35c6 chore(MAINTAINERS): add organizations
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-10-12 17:44:22 -06:00
Brooks Townsend e07481a66c chore: add MAINTAINERS.md
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-10-12 14:51:43 -04:00
Joonas Bergius 4b7233af2c
release: Bump wadm to 0.17.0 (#449)
Signed-off-by: Joonas Bergius <joonas@cosmonic.com>
2024-10-09 14:21:17 -05:00
Joonas Bergius e4d453fa34
release: Bump wadm-client and wadm-types to 0.6.0 (#448)
Signed-off-by: Joonas Bergius <joonas@cosmonic.com>
2024-10-09 13:34:01 -05:00
Brooks Townsend 1e2bbc2111 chore: bump 0.16.1, remove base64 dep
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-10-08 15:10:28 -04:00
Brooks Townsend 5fda091b50 fix(wadm): deserialize, not decode, stream status
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-10-08 15:10:28 -04:00
dependabot[bot] e0d4e23758 chore(deps): bump indexmap from 2.5.0 to 2.6.0
Bumps [indexmap](https://github.com/indexmap-rs/indexmap) from 2.5.0 to 2.6.0.
- [Changelog](https://github.com/indexmap-rs/indexmap/blob/master/RELEASES.md)
- [Commits](https://github.com/indexmap-rs/indexmap/compare/2.5.0...2.6.0)

---
updated-dependencies:
- dependency-name: indexmap
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-10-07 13:15:33 -04:00
dependabot[bot] 1b768f8d20 chore(deps): bump clap from 4.5.18 to 4.5.19
Bumps [clap](https://github.com/clap-rs/clap) from 4.5.18 to 4.5.19.
- [Release notes](https://github.com/clap-rs/clap/releases)
- [Changelog](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md)
- [Commits](https://github.com/clap-rs/clap/compare/clap_complete-v4.5.18...clap_complete-v4.5.19)

---
updated-dependencies:
- dependency-name: clap
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-10-07 12:44:54 -04:00
Brooks Townsend 980d8ef926 release(wadm): v0.16.0
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-10-04 11:09:39 -04:00
Victor Adossi 12880bf5e1 fix(ci): pre-pull compose images to avoid timeouts
Signed-off-by: Victor Adossi <vadossi@cosmonic.com>
2024-10-04 01:36:07 -06:00
Victor Adossi 75c45fa750 fix(tests): re-attempt connection to NATS
Signed-off-by: Victor Adossi <vadossi@cosmonic.com>
2024-10-04 01:36:07 -06:00
Victor Adossi 51692b7156 chore(deps): update for control-interface 2.2.0
Signed-off-by: Victor Adossi <vadossi@cosmonic.com>
2024-10-04 01:36:07 -06:00
Joonas Bergius eb57ec900a
release: Bump wadm client and types (#438)
Signed-off-by: Joonas Bergius <joonas@cosmonic.com>
2024-10-03 09:00:28 -05:00
Victor Adossi 78caba43e1 chore: fix lint
Signed-off-by: Victor Adossi <vadossi@cosmonic.com>
2024-10-02 14:21:49 -04:00
Victor Adossi 3e769f5708 chore(deps): update wasmcloud-control-interface to v2.1.0
Signed-off-by: Victor Adossi <vadossi@cosmonic.com>
2024-10-02 14:21:49 -04:00
dependabot[bot] e39e1f1c63 chore(deps): bump async-trait from 0.1.82 to 0.1.83
Bumps [async-trait](https://github.com/dtolnay/async-trait) from 0.1.82 to 0.1.83.
- [Release notes](https://github.com/dtolnay/async-trait/releases)
- [Commits](https://github.com/dtolnay/async-trait/compare/0.1.82...0.1.83)

---
updated-dependencies:
- dependency-name: async-trait
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-10-01 08:27:18 -04:00
dependabot[bot] 967c047f05 chore(deps): bump testcontainers from 0.22.0 to 0.23.1
Bumps [testcontainers](https://github.com/testcontainers/testcontainers-rs) from 0.22.0 to 0.23.1.
- [Release notes](https://github.com/testcontainers/testcontainers-rs/releases)
- [Changelog](https://github.com/testcontainers/testcontainers-rs/blob/main/CHANGELOG.md)
- [Commits](https://github.com/testcontainers/testcontainers-rs/compare/0.22.0...0.23.1)

---
updated-dependencies:
- dependency-name: testcontainers
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-10-01 08:27:12 -04:00
dependabot[bot] 8724621dc0 chore(deps): bump regex from 1.10.6 to 1.11.0
Bumps [regex](https://github.com/rust-lang/regex) from 1.10.6 to 1.11.0.
- [Release notes](https://github.com/rust-lang/regex/releases)
- [Changelog](https://github.com/rust-lang/regex/blob/master/CHANGELOG.md)
- [Commits](https://github.com/rust-lang/regex/compare/1.10.6...1.11.0)

---
updated-dependencies:
- dependency-name: regex
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-10-01 08:27:06 -04:00
Victor Adossi 1d085cab07 chore(deps): update for control-interface v2.0.0
Signed-off-by: Victor Adossi <vadossi@cosmonic.com>
2024-09-30 19:50:51 -04:00
Joonas Bergius fbf06f624e
feat: Add wolfi image (#430)
Signed-off-by: Joonas Bergius <joonas@cosmonic.com>
2024-09-23 14:37:08 -05:00
Brooks Townsend f1ef62d6cd release: bump crates for release
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-09-23 15:07:04 -04:00
Brooks Townsend a5486595a2 fix(handler): backwards compat list
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-09-23 14:02:21 -04:00
dependabot[bot] 5c4094c1c7 chore(deps): bump anyhow from 1.0.87 to 1.0.89
Bumps [anyhow](https://github.com/dtolnay/anyhow) from 1.0.87 to 1.0.89.
- [Release notes](https://github.com/dtolnay/anyhow/releases)
- [Commits](https://github.com/dtolnay/anyhow/compare/1.0.87...1.0.89)

---
updated-dependencies:
- dependency-name: anyhow
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-09-23 12:47:55 -04:00
dependabot[bot] 55caf37442 chore(deps): bump nkeys from 0.4.3 to 0.4.4
Bumps [nkeys](https://github.com/wasmcloud/nkeys) from 0.4.3 to 0.4.4.
- [Release notes](https://github.com/wasmcloud/nkeys/releases)
- [Commits](https://github.com/wasmcloud/nkeys/compare/v0.4.3...v0.4.4)

---
updated-dependencies:
- dependency-name: nkeys
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-09-23 12:41:19 -04:00
dependabot[bot] 6521d4e2c4 chore(deps): bump clap from 4.5.17 to 4.5.18
Bumps [clap](https://github.com/clap-rs/clap) from 4.5.17 to 4.5.18.
- [Release notes](https://github.com/clap-rs/clap/releases)
- [Changelog](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md)
- [Commits](https://github.com/clap-rs/clap/compare/clap_complete-v4.5.17...clap_complete-v4.5.18)

---
updated-dependencies:
- dependency-name: clap
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-09-23 12:41:09 -04:00
dependabot[bot] fa51184cfc chore(deps): bump thiserror from 1.0.63 to 1.0.64
Bumps [thiserror](https://github.com/dtolnay/thiserror) from 1.0.63 to 1.0.64.
- [Release notes](https://github.com/dtolnay/thiserror/releases)
- [Commits](https://github.com/dtolnay/thiserror/compare/1.0.63...1.0.64)

---
updated-dependencies:
- dependency-name: thiserror
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-09-23 12:40:47 -04:00
dependabot[bot] eb0b2eab9b chore(deps): bump bytes from 1.7.1 to 1.7.2
Bumps [bytes](https://github.com/tokio-rs/bytes) from 1.7.1 to 1.7.2.
- [Release notes](https://github.com/tokio-rs/bytes/releases)
- [Changelog](https://github.com/tokio-rs/bytes/blob/master/CHANGELOG.md)
- [Commits](https://github.com/tokio-rs/bytes/compare/v1.7.1...v1.7.2)

---
updated-dependencies:
- dependency-name: bytes
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-09-23 12:40:38 -04:00
Joonas Bergius 1136744fe6 chore: Fix up release workflow
Signed-off-by: Joonas Bergius <joonas@cosmonic.com>
2024-09-20 18:52:49 -06:00
Joonas Bergius efe9a8a5f6
chore: Use normal cargo build on windows (#422)
Signed-off-by: Joonas Bergius <joonas@cosmonic.com>
2024-09-20 22:44:16 +00:00
Joonas Bergius ee427db054 chore: Rework release pipeline
Fixes #210

Signed-off-by: Joonas Bergius <joonas@cosmonic.com>
2024-09-20 13:13:42 -06:00
Joonas Bergius 5ea118e235
chore: Revise the default NATS Server address logic (#420)
Signed-off-by: Joonas Bergius <joonas@cosmonic.com>
2024-09-19 08:14:04 -05:00
Brooks Townsend 5719f0e57e feat(wadm)!: support configuring max stream bytes
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-09-17 15:30:37 -04:00
Taylor Thomas 78343c264e fix(types): Fixes validation for wasi:keyvalue
Our validation was erroneously rejecting components that used the batch
or watch interfaces of wasi:keyvalue. Also fixes version handling so we
don't have to change it everywhere every time

Signed-off-by: Taylor Thomas <taylor@cosmonic.com>
2024-09-12 17:14:30 -04:00
dependabot[bot] ee8f8ea555 chore(deps): bump clap from 4.5.16 to 4.5.17
Bumps [clap](https://github.com/clap-rs/clap) from 4.5.16 to 4.5.17.
- [Release notes](https://github.com/clap-rs/clap/releases)
- [Changelog](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md)
- [Commits](https://github.com/clap-rs/clap/compare/clap_complete-v4.5.16...clap_complete-v4.5.17)

---
updated-dependencies:
- dependency-name: clap
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-09-09 10:13:53 -04:00
dependabot[bot] 2d2320bc61 chore(deps): bump serde from 1.0.209 to 1.0.210
Bumps [serde](https://github.com/serde-rs/serde) from 1.0.209 to 1.0.210.
- [Release notes](https://github.com/serde-rs/serde/releases)
- [Commits](https://github.com/serde-rs/serde/compare/v1.0.209...v1.0.210)

---
updated-dependencies:
- dependency-name: serde
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-09-09 10:13:43 -04:00
dependabot[bot] 71e3138355 chore(deps): bump anyhow from 1.0.86 to 1.0.87
Bumps [anyhow](https://github.com/dtolnay/anyhow) from 1.0.86 to 1.0.87.
- [Release notes](https://github.com/dtolnay/anyhow/releases)
- [Commits](https://github.com/dtolnay/anyhow/compare/1.0.86...1.0.87)

---
updated-dependencies:
- dependency-name: anyhow
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-09-09 10:13:36 -04:00
dependabot[bot] 4c31bc24c1 chore(deps): bump serde_json from 1.0.127 to 1.0.128
Bumps [serde_json](https://github.com/serde-rs/json) from 1.0.127 to 1.0.128.
- [Release notes](https://github.com/serde-rs/json/releases)
- [Commits](https://github.com/serde-rs/json/compare/1.0.127...1.0.128)

---
updated-dependencies:
- dependency-name: serde_json
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-09-09 10:13:29 -04:00
Joonas Bergius 7aedd8ac5c
chore(chart): Bump wadm chart to default to 0.14.0 (#402)
Signed-off-by: Joonas Bergius <joonas@cosmonic.com>
2024-09-03 10:14:20 -05:00
dependabot[bot] 2b0dd9efec chore(deps): bump async-trait from 0.1.81 to 0.1.82
Bumps [async-trait](https://github.com/dtolnay/async-trait) from 0.1.81 to 0.1.82.
- [Release notes](https://github.com/dtolnay/async-trait/releases)
- [Commits](https://github.com/dtolnay/async-trait/compare/0.1.81...0.1.82)

---
updated-dependencies:
- dependency-name: async-trait
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-09-03 11:13:49 -04:00
dependabot[bot] c4a3c7978a chore(deps): bump testcontainers from 0.21.1 to 0.22.0
Bumps [testcontainers](https://github.com/testcontainers/testcontainers-rs) from 0.21.1 to 0.22.0.
- [Release notes](https://github.com/testcontainers/testcontainers-rs/releases)
- [Changelog](https://github.com/testcontainers/testcontainers-rs/blob/main/CHANGELOG.md)
- [Commits](https://github.com/testcontainers/testcontainers-rs/compare/0.21.1...0.22.0)

---
updated-dependencies:
- dependency-name: testcontainers
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-09-03 11:13:34 -04:00
dependabot[bot] 89c9e77f6e chore(deps): bump indexmap from 2.4.0 to 2.5.0
Bumps [indexmap](https://github.com/indexmap-rs/indexmap) from 2.4.0 to 2.5.0.
- [Changelog](https://github.com/indexmap-rs/indexmap/blob/master/RELEASES.md)
- [Commits](https://github.com/indexmap-rs/indexmap/compare/2.4.0...2.5.0)

---
updated-dependencies:
- dependency-name: indexmap
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-09-03 11:13:14 -04:00
dependabot[bot] fd75aaa8ef chore(deps): bump actions/setup-python from 5.1.1 to 5.2.0
Bumps [actions/setup-python](https://github.com/actions/setup-python) from 5.1.1 to 5.2.0.
- [Release notes](https://github.com/actions/setup-python/releases)
- [Commits](https://github.com/actions/setup-python/compare/v5.1.1...v5.2.0)

---
updated-dependencies:
- dependency-name: actions/setup-python
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-09-03 11:13:07 -04:00
dependabot[bot] c64f28dd03 chore(deps): bump tokio from 1.39.3 to 1.40.0
Bumps [tokio](https://github.com/tokio-rs/tokio) from 1.39.3 to 1.40.0.
- [Release notes](https://github.com/tokio-rs/tokio/releases)
- [Commits](https://github.com/tokio-rs/tokio/compare/tokio-1.39.3...tokio-1.40.0)

---
updated-dependencies:
- dependency-name: tokio
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-09-03 11:12:46 -04:00
Ahmed Tadde 8011f09570
fix(server): deprecate and replace model.list operation with model.get (#400)
Signed-off-by: Ahmed <ahmedtadde@gmail.com>
2024-08-29 21:42:00 -04:00
Brooks Townsend 5a22fd1258 fix(wadm): ensure custom traits are not spread or link traits
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-08-27 20:22:18 -04:00
dependabot[bot] b1fb8894f6 chore(deps): bump serde from 1.0.208 to 1.0.209
Bumps [serde](https://github.com/serde-rs/serde) from 1.0.208 to 1.0.209.
- [Release notes](https://github.com/serde-rs/serde/releases)
- [Commits](https://github.com/serde-rs/serde/compare/v1.0.208...v1.0.209)

---
updated-dependencies:
- dependency-name: serde
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-08-26 10:06:37 -04:00
dependabot[bot] 2e77266224 chore(deps): bump serde_json from 1.0.125 to 1.0.127
Bumps [serde_json](https://github.com/serde-rs/json) from 1.0.125 to 1.0.127.
- [Release notes](https://github.com/serde-rs/json/releases)
- [Commits](https://github.com/serde-rs/json/compare/1.0.125...1.0.127)

---
updated-dependencies:
- dependency-name: serde_json
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-08-26 10:06:29 -04:00
dependabot[bot] 1e2c90645d chore(deps): bump clap from 4.5.15 to 4.5.16
Bumps [clap](https://github.com/clap-rs/clap) from 4.5.15 to 4.5.16.
- [Release notes](https://github.com/clap-rs/clap/releases)
- [Changelog](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md)
- [Commits](https://github.com/clap-rs/clap/compare/clap_complete-v4.5.15...clap_complete-v4.5.16)

---
updated-dependencies:
- dependency-name: clap
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-08-26 10:06:23 -04:00
Lachlan Heywood b78d4bf1b6 chore(schema): change name and description on json schema
Signed-off-by: Lachlan Heywood <lachieh@users.noreply.github.com>
2024-08-20 11:06:33 -07:00
Brooks Townsend 524579a1f4 release(wadm): v0.14.0
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-08-20 07:29:17 -07:00
Brooks Townsend e339b6cae2 release(wadm-client): v0.3.0
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-08-20 07:29:17 -07:00
Brooks Townsend 86ce562d7f release(wadm-types): v0.3.0
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-08-20 07:29:17 -07:00
dependabot[bot] d30c092942 chore(deps): bump serde_json from 1.0.124 to 1.0.125
Bumps [serde_json](https://github.com/serde-rs/json) from 1.0.124 to 1.0.125.
- [Release notes](https://github.com/serde-rs/json/releases)
- [Commits](https://github.com/serde-rs/json/compare/v1.0.124...1.0.125)

---
updated-dependencies:
- dependency-name: serde_json
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-08-19 12:29:01 -07:00
dependabot[bot] aa074af58b chore(deps): bump indexmap from 2.3.0 to 2.4.0
Bumps [indexmap](https://github.com/indexmap-rs/indexmap) from 2.3.0 to 2.4.0.
- [Changelog](https://github.com/indexmap-rs/indexmap/blob/master/RELEASES.md)
- [Commits](https://github.com/indexmap-rs/indexmap/compare/2.3.0...2.4.0)

---
updated-dependencies:
- dependency-name: indexmap
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-08-19 12:28:46 -07:00
dependabot[bot] 2fc3f6974b chore(deps): bump serde from 1.0.206 to 1.0.208
Bumps [serde](https://github.com/serde-rs/serde) from 1.0.206 to 1.0.208.
- [Release notes](https://github.com/serde-rs/serde/releases)
- [Commits](https://github.com/serde-rs/serde/compare/v1.0.206...v1.0.208)

---
updated-dependencies:
- dependency-name: serde
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-08-19 12:28:40 -07:00
dependabot[bot] b9f65ffb0a chore(deps): bump tokio from 1.39.2 to 1.39.3
Bumps [tokio](https://github.com/tokio-rs/tokio) from 1.39.2 to 1.39.3.
- [Release notes](https://github.com/tokio-rs/tokio/releases)
- [Commits](https://github.com/tokio-rs/tokio/compare/tokio-1.39.2...tokio-1.39.3)

---
updated-dependencies:
- dependency-name: tokio
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-08-19 12:28:30 -07:00
dependabot[bot] ce7c1b4bb2
chore(deps): bump testcontainers from 0.21.0 to 0.21.1 (#390)
Bumps [testcontainers](https://github.com/testcontainers/testcontainers-rs) from 0.21.0 to 0.21.1.
- [Release notes](https://github.com/testcontainers/testcontainers-rs/releases)
- [Changelog](https://github.com/testcontainers/testcontainers-rs/blob/main/CHANGELOG.md)
- [Commits](https://github.com/testcontainers/testcontainers-rs/compare/0.21.0...0.21.1)

---
updated-dependencies:
- dependency-name: testcontainers
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-08-19 08:57:00 -05:00
Bailey Hayes b5c471ea2a chore: bump chart appVersion
Signed-off-by: Bailey Hayes <behayes2@gmail.com>
2024-08-12 18:07:25 -06:00
dependabot[bot] afc0d916e3 chore(deps): bump serde from 1.0.204 to 1.0.206
Bumps [serde](https://github.com/serde-rs/serde) from 1.0.204 to 1.0.206.
- [Release notes](https://github.com/serde-rs/serde/releases)
- [Commits](https://github.com/serde-rs/serde/compare/v1.0.204...v1.0.206)

---
updated-dependencies:
- dependency-name: serde
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-08-12 07:27:03 -07:00
dependabot[bot] a37ab6dd95 chore(deps): bump serde_json from 1.0.122 to 1.0.124
Bumps [serde_json](https://github.com/serde-rs/json) from 1.0.122 to 1.0.124.
- [Release notes](https://github.com/serde-rs/json/releases)
- [Commits](https://github.com/serde-rs/json/compare/v1.0.122...v1.0.124)

---
updated-dependencies:
- dependency-name: serde_json
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-08-12 07:21:31 -07:00
dependabot[bot] c3d00c714d chore(deps): bump clap from 4.5.13 to 4.5.15
Bumps [clap](https://github.com/clap-rs/clap) from 4.5.13 to 4.5.15.
- [Release notes](https://github.com/clap-rs/clap/releases)
- [Changelog](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md)
- [Commits](https://github.com/clap-rs/clap/compare/clap_complete-v4.5.13...v4.5.15)

---
updated-dependencies:
- dependency-name: clap
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-08-12 07:21:17 -07:00
dependabot[bot] b9e1cc611b chore(deps): bump nkeys from 0.3.2 to 0.4.3
Bumps [nkeys](https://github.com/wasmcloud/nkeys) from 0.3.2 to 0.4.3.
- [Release notes](https://github.com/wasmcloud/nkeys/releases)
- [Commits](https://github.com/wasmcloud/nkeys/compare/v0.3.2...v0.4.3)

---
updated-dependencies:
- dependency-name: nkeys
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-08-12 07:21:06 -07:00
dependabot[bot] 203a91f1e0 chore(deps): bump regex from 1.10.5 to 1.10.6
Bumps [regex](https://github.com/rust-lang/regex) from 1.10.5 to 1.10.6.
- [Release notes](https://github.com/rust-lang/regex/releases)
- [Changelog](https://github.com/rust-lang/regex/blob/master/CHANGELOG.md)
- [Commits](https://github.com/rust-lang/regex/compare/1.10.5...1.10.6)

---
updated-dependencies:
- dependency-name: regex
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-08-12 07:21:00 -07:00
Brooks Townsend 78291c79bb refactor(scaler): rename BackoffAwareScaler to BackoffWrapper
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-08-12 07:16:28 -07:00
Brooks Townsend 7e03d060b3 feat(scaler)!: backoff when failed event received
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-08-12 07:16:28 -07:00
Brooks Townsend f1237363c1 feat(scaler): implement component scale corresponding event
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-08-12 07:16:28 -07:00
Bailey Hayes 5def02caaf
fix(charts): align replicas value (#382)
Rendering this chart without this fix results
in an empty field for replicas.

Signed-off-by: Bailey Hayes <behayes2@gmail.com>
2024-08-12 08:59:43 -05:00
Brooks Townsend 2d6327b943 fix(scalers): put link for component as target
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-08-09 13:04:36 -07:00
Brooks Townsend c295cf0e33 fix(server): use backwards compatible undeployed
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-08-09 10:42:19 -07:00
Brooks Townsend e2764c720b fix(bindings): add waiting status
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-08-06 12:53:44 -07:00
Brooks Townsend 2b43beb831 refactor(wadm): list manifests, don't fake status
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-08-06 12:53:44 -07:00
Brooks Townsend 066e50e4eb feat(scaler): add kind and name methods
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-08-06 12:53:44 -07:00
Brooks Townsend 6f4abcf389 fix(scalers): report status properly after reconcile
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-08-06 12:53:44 -07:00
Brooks Townsend 19cbd5a44d feat(observer): observe lattice on manifest publish
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-08-06 12:53:44 -07:00
Brooks Townsend 172db98f1e feat(wadm)!: detail status per scaler
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>

correct test status checker

Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-08-06 12:53:44 -07:00
Brooks Townsend 72170e9a8e feat(scaler): add human_friendly_name method
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-08-06 12:53:44 -07:00
Brooks Townsend ffe20a6177 fix(wadm): update reaper to allow for latency
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-08-06 12:50:26 -07:00
Brooks Townsend 8d8adfe54e fix(scalers): remove scalers upon notification
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-08-06 11:25:56 -07:00
Brooks Townsend c6c481f930 fix(wadm): attach lattice/multitenant to consumer metadata
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-08-05 15:01:26 -07:00
Brooks Townsend 43ba03790a feat(wadm)!: set cleanup interval to 60s
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-08-05 14:55:30 -07:00
Brooks Townsend 95bdf2a6bb ci(test): ensure entire workspace builds
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-08-05 14:45:09 -07:00
Brooks Townsend 50d2b76213 fix(wit): update bindings to types 0.2.0
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-08-05 14:45:09 -07:00
Brooks Townsend 392347dfe9 feat(client)!: return name and version from deploy model
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-08-05 08:52:50 -07:00
Brooks Townsend ce536fbdc8 fix(#345): use cached links when req fails
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-08-05 08:47:06 -07:00
dependabot[bot] 05cfd3e84e chore(deps): bump clap from 4.5.11 to 4.5.13
Bumps [clap](https://github.com/clap-rs/clap) from 4.5.11 to 4.5.13.
- [Release notes](https://github.com/clap-rs/clap/releases)
- [Changelog](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md)
- [Commits](https://github.com/clap-rs/clap/compare/clap_complete-v4.5.11...v4.5.13)

---
updated-dependencies:
- dependency-name: clap
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-08-05 07:24:21 -07:00
Joonas Bergius b0212e548c
chore: Migrate more tests over to using testcontainers for setup (#369)
chore: Migrate more tests over to using testcontainers for setup

---------

Signed-off-by: Joonas Bergius <joonas@cosmonic.com>
2024-08-05 09:22:52 -05:00
dependabot[bot] 409f61fa74 chore(deps): bump indexmap from 2.2.6 to 2.3.0
Bumps [indexmap](https://github.com/indexmap-rs/indexmap) from 2.2.6 to 2.3.0.
- [Changelog](https://github.com/indexmap-rs/indexmap/blob/master/RELEASES.md)
- [Commits](https://github.com/indexmap-rs/indexmap/compare/2.2.6...2.3.0)

---
updated-dependencies:
- dependency-name: indexmap
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-08-05 07:21:28 -07:00
dependabot[bot] e020955fac chore(deps): bump bytes from 1.6.1 to 1.7.1
Bumps [bytes](https://github.com/tokio-rs/bytes) from 1.6.1 to 1.7.1.
- [Release notes](https://github.com/tokio-rs/bytes/releases)
- [Changelog](https://github.com/tokio-rs/bytes/blob/master/CHANGELOG.md)
- [Commits](https://github.com/tokio-rs/bytes/compare/v1.6.1...v1.7.1)

---
updated-dependencies:
- dependency-name: bytes
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-08-05 07:19:09 -07:00
dependabot[bot] 0755307d78 chore(deps): bump base64 from 0.21.7 to 0.22.1
Bumps [base64](https://github.com/marshallpierce/rust-base64) from 0.21.7 to 0.22.1.
- [Changelog](https://github.com/marshallpierce/rust-base64/blob/master/RELEASE-NOTES.md)
- [Commits](https://github.com/marshallpierce/rust-base64/compare/v0.21.7...v0.22.1)

---
updated-dependencies:
- dependency-name: base64
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-08-05 07:18:54 -07:00
dependabot[bot] bb9650198d chore(deps): bump serde_json from 1.0.121 to 1.0.122
Bumps [serde_json](https://github.com/serde-rs/json) from 1.0.121 to 1.0.122.
- [Release notes](https://github.com/serde-rs/json/releases)
- [Commits](https://github.com/serde-rs/json/compare/v1.0.121...v1.0.122)

---
updated-dependencies:
- dependency-name: serde_json
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-08-05 07:18:36 -07:00
Brooks Townsend 535b5f44f9 chore: remove unused image
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-08-02 09:28:44 -07:00
Brooks Townsend 66f54eed4c chore: update READMEs
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-08-02 09:28:44 -07:00
Brooks Townsend 39a0857a4b chore: remove duplicated manifests
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-08-02 09:28:44 -07:00
Brooks Townsend 1dc35d584f chore: collapse test folder into tests
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-08-02 09:28:44 -07:00
Joonas Bergius 4e624ffd4a
chore: Replace wash up with testcontainers (#367)
* chore: Replace wash up with testcontainers

Closes #353

Signed-off-by: Joonas Bergius <joonas@cosmonic.com>

---------

Signed-off-by: Joonas Bergius <joonas@cosmonic.com>
2024-08-02 11:21:54 -05:00
Brooks Townsend 1febfc92d2 chore: update README to be more direct
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-08-02 08:45:25 -07:00
Brooks Townsend 2de7f9eca2 chore(wadm): bump to 0.13.1
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-08-01 11:01:08 -07:00
Brooks Townsend 0d92cd8b92 fix(wadm): react to config events
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-08-01 11:01:08 -07:00
Brooks Townsend cb98a9911b chore: remove beta, pin to stable version
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-08-01 08:38:48 -07:00
dependabot[bot] e4d2f569dc chore(deps): bump clap from 4.5.9 to 4.5.11
Bumps [clap](https://github.com/clap-rs/clap) from 4.5.9 to 4.5.11.
- [Release notes](https://github.com/clap-rs/clap/releases)
- [Changelog](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md)
- [Commits](https://github.com/clap-rs/clap/compare/clap_complete-v4.5.9...clap_complete-v4.5.11)

---
updated-dependencies:
- dependency-name: clap
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-07-31 07:19:27 -07:00
Brooks Townsend c6777e6bca feat(*): support field on secret property
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-07-31 07:11:55 -07:00
Brooks Townsend 4a5dcae3cf feat(server)!: ensure claimed ID is uniquely deployed
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-07-30 14:54:33 -04:00
Brooks Townsend 669791b685 deps: use published wasmcloud-secrets-types crate
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-07-29 11:31:37 -04:00
Brooks Townsend 5c643f9d9b refactor(secrets): use SecretConfig type to simplify scaler
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-07-29 11:31:37 -04:00
Brooks Townsend 7549fd3500 refactor(secrets): use wasmcloud-secrets-types crate
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-07-29 11:31:37 -04:00
Brooks Townsend 050f4ecbc9 fix(secretscaler): correct policy type
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-07-29 11:31:37 -04:00
Brooks Townsend 5cecde8718 fix(validation): ensure secrets link to valid policies
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-07-29 11:31:37 -04:00
Brooks Townsend 85fb9c4ec7 tests: represent vault secret policy
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-07-29 11:31:37 -04:00
Brooks Townsend bead045d24 chore(wadm): bump to 0.13.0-beta.1
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-07-29 11:31:37 -04:00
Brooks Townsend 4b0d5171cb fix(secrets): correct policy type and format
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>

fix(secrets): correct policy type and format

Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-07-29 11:31:37 -04:00
Brooks Townsend 8db4eb791f chore(wadm-types): bump to 0.2.0-beta.1
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>

types beta

Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>

types beta

Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-07-29 11:31:37 -04:00
Brooks Townsend aa4cb16a0f chore(wadm-client): bump to 0.2.0-beta.1
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>

client beta

Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-07-29 11:31:37 -04:00
Brooks Townsend d43e92d5c2 refactor: rename secrets source to properties
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>

fix manifest

Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-07-29 11:31:37 -04:00
dependabot[bot] f597e61680 chore(deps): bump tokio from 1.38.1 to 1.39.2
Bumps [tokio](https://github.com/tokio-rs/tokio) from 1.38.1 to 1.39.2.
- [Release notes](https://github.com/tokio-rs/tokio/releases)
- [Commits](https://github.com/tokio-rs/tokio/compare/tokio-1.38.1...tokio-1.39.2)

---
updated-dependencies:
- dependency-name: tokio
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-07-29 10:44:33 -04:00
dependabot[bot] e0d9e2f90e chore(deps): bump serde_json from 1.0.120 to 1.0.121
Bumps [serde_json](https://github.com/serde-rs/json) from 1.0.120 to 1.0.121.
- [Release notes](https://github.com/serde-rs/json/releases)
- [Commits](https://github.com/serde-rs/json/compare/v1.0.120...v1.0.121)

---
updated-dependencies:
- dependency-name: serde_json
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-07-29 10:44:27 -04:00
Joonas Bergius a2ff5a0411 chore(charts): Bump appVersion in the helm chart
Signed-off-by: Joonas Bergius <joonas@cosmonic.com>
2024-07-26 11:18:07 -04:00
Brooks Townsend d699e20866 chore: update CODEOWNERS with team
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-07-25 09:34:56 -04:00
Brooks Townsend 24c3be8559 chore(wadm): use application verbiage instead of model
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-07-24 09:48:43 -04:00
Dan Norris 7ed42a1077 fix: correctly emit a validation error if a backend key in a policy is missing
It turns out that boolean negations are important, so use the correct
check to validate that a `backend` key is missing from a policy
properties block.

Signed-off-by: Dan Norris <protochron@users.noreply.github.com>
2024-07-23 17:46:25 -04:00
Dan Norris f529890cca fix: copy backend key to top level in a generated config value
It turns out that it is easier to parse a serialized secrets value if we
have the `backend` key at the top level instead of nested in
string-encoded JSON. This moves that value out of the policy properties
value that the secrets scaler serializes into a config value and instead
stores it as a top level key.

Signed-off-by: Dan Norris <protochron@users.noreply.github.com>
2024-07-23 14:23:49 -04:00
Brooks Townsend 48e8d4caec ci(e2e,release): update upload/download to v4
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-07-22 15:17:07 -04:00
dependabot[bot] 9989da7c5f chore(deps): bump thiserror from 1.0.62 to 1.0.63
Bumps [thiserror](https://github.com/dtolnay/thiserror) from 1.0.62 to 1.0.63.
- [Release notes](https://github.com/dtolnay/thiserror/releases)
- [Commits](https://github.com/dtolnay/thiserror/compare/1.0.62...1.0.63)

---
updated-dependencies:
- dependency-name: thiserror
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-07-22 12:47:01 -04:00
dependabot[bot] 544547ca9e chore(deps): bump tokio from 1.38.0 to 1.38.1
Bumps [tokio](https://github.com/tokio-rs/tokio) from 1.38.0 to 1.38.1.
- [Release notes](https://github.com/tokio-rs/tokio/releases)
- [Commits](https://github.com/tokio-rs/tokio/compare/tokio-1.38.0...tokio-1.38.1)

---
updated-dependencies:
- dependency-name: tokio
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-07-22 12:46:42 -04:00
Brooks Townsend 34e95975d5 ci(release): use GITHUB_TOKEN to release
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-07-22 11:28:36 -04:00
Dan Norris 32a4bb5c50 fix: write out policy data in a secret config correctly
In #307 we introduced secrets support, which works by storing secrets
references as config in NATS KV. This fixes a bug in the way that we
were serializing the configuration as JSON where we were overwriting the
policy field, which is required.

Signed-off-by: Dan Norris <protochron@users.noreply.github.com>
2024-07-19 14:16:27 -04:00
Dan Norris 737aa8259f feat!: add support for secrets in manifests
This adds support for secrets in wasmCloud application manifests. The
secrets themselves are actually _secret references_ as outlined in
wasmCloud/wasmCloud#2190. Just like config, secrets can be specified at
the component or provider level or on a link.

Secret references themselves are actually implemented as an additional
kind of config stored in the same config data bucket. However, I opted
to implement a dedicated scaler for secrets that is largely a clone of
the existing ConfigScaler since the underlying data type is very
different from the arbitrary set of key/value pairs we use for config.

An example of what this looks like in a component is shown below:

```yaml
spec:
  components:
    - name: http-component
      type: component
      properties:
        image: ghcr.io/wasmcloud/test-fetch-with-token:0.1.0-fake
        secrets:
          - name: some-api-token
            source:
              backend: nats-kv
              key: test-value
              version: 1
          - name: my-other-secret
            source:
              backend: aws-secrets-manager
              value: secret-name
              version: "be01a5fb-7ebb-4ae9-8ea0-0902e8940bc0"
```

This contains a breaking change to the way that we specify config on
links:

```yaml
- type: link
  properties:
    namespace: wasmcloud
    package: postgres
    interfaces: [managed-query]
    target:
      name: sql-postgres
      secrets:
        - name: db-password
          source:
            backend: nats-kv
            key: myapp_db-password
            version: 1
```

Instead of using `target_config` and `source_config`, this renames them
to `target` and `source` respectively and adds keys for `config` and
`secrets`. The name of the target is now a key at the top level of
the `target` block, as seen above.
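
For contrast, a link trait in the pre-change format looked roughly like the
following (a sketch for illustration only; the config name shown is
hypothetical):

```yaml
# Previous link syntax (sketch): the target was a bare name and config was
# attached via the dedicated target_config/source_config lists; there was
# no secrets support on links.
- type: link
  properties:
    target: sql-postgres
    namespace: wasmcloud
    package: postgres
    interfaces: [managed-query]
    target_config:
      - name: db-config   # hypothetical named config
```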

Signed-off-by: Dan Norris <protochron@users.noreply.github.com>
2024-07-19 13:19:04 -04:00
Lucas Fontes 3274f121b2 docs: Updating wadm links
Signed-off-by: Lucas Fontes <lucas@cosmonic.com>
2024-07-19 09:51:48 -04:00
Dan Norris 819978970a feat: add policy configuration block
This adds a policy block to the top-level configuration of the OAM
specification. This block, defined in the [draft
specification](https://github.com/oam-dev/spec/blob/master/SPEC_DRAFT.md),
is intended to be used for _application_ level configuration that
applies to all components.

Initial use cases for this type of configuration include:
* configuring parameters to secret backends
* configuring Kubernetes Services, Ingresses, etc. configured by the
  wasmcloud-operator

An example of this might look something like:

```yaml
---
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: sample
  annotations:
    version: v0.0.1
    description: Sample manifest that passes
spec:
  policies:
    - name: aws-secret-config
      type: secret-backend
      properties:
        iam-role: some-role-id
    - name: k8s-service
      type: kubernetes-service
      properties:
        type: LoadBalancer
        # ...
  components:
    - name: http-component
      type: component
      properties:
        image: ghcr.io/wasmcloud/component-http-hello-world:0.1.0
      traits:
        - type: spreadscaler
          properties:
            replicas: 1
```

Instead of adding this new block to the existing JSON schema, this
instead implements an OpenAPI v3 schema derived from the structs defined
in the types crate and uses that to drive validation. This feels like a
much more maintainable approach going forward.

Signed-off-by: Dan Norris <protochron@users.noreply.github.com>
2024-07-15 09:28:48 -06:00
dependabot[bot] 3c53f462b6 chore(deps): bump bytes from 1.6.0 to 1.6.1
Bumps [bytes](https://github.com/tokio-rs/bytes) from 1.6.0 to 1.6.1.
- [Release notes](https://github.com/tokio-rs/bytes/releases)
- [Changelog](https://github.com/tokio-rs/bytes/blob/master/CHANGELOG.md)
- [Commits](https://github.com/tokio-rs/bytes/compare/v1.6.0...v1.6.1)

---
updated-dependencies:
- dependency-name: bytes
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-07-15 10:31:03 -04:00
dependabot[bot] ba63c290ea chore(deps): bump serde from 1.0.203 to 1.0.204
Bumps [serde](https://github.com/serde-rs/serde) from 1.0.203 to 1.0.204.
- [Release notes](https://github.com/serde-rs/serde/releases)
- [Commits](https://github.com/serde-rs/serde/compare/v1.0.203...v1.0.204)

---
updated-dependencies:
- dependency-name: serde
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-07-15 10:30:57 -04:00
dependabot[bot] 0366132f9b chore(deps): bump serial_test from 1.0.0 to 3.1.1
Bumps [serial_test](https://github.com/palfrey/serial_test) from 1.0.0 to 3.1.1.
- [Release notes](https://github.com/palfrey/serial_test/releases)
- [Commits](https://github.com/palfrey/serial_test/compare/v1.0.0...v3.1.1)

---
updated-dependencies:
- dependency-name: serial_test
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-07-15 10:30:47 -04:00
dependabot[bot] fbef4df02f chore(deps): bump clap from 4.5.7 to 4.5.9
Bumps [clap](https://github.com/clap-rs/clap) from 4.5.7 to 4.5.9.
- [Release notes](https://github.com/clap-rs/clap/releases)
- [Changelog](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md)
- [Commits](https://github.com/clap-rs/clap/compare/clap_complete-v4.5.7...v4.5.9)

---
updated-dependencies:
- dependency-name: clap
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-07-15 10:30:19 -04:00
dependabot[bot] 9e003d6944 chore(deps): bump ulid from 1.1.2 to 1.1.3
Bumps [ulid](https://github.com/dylanhart/ulid-rs) from 1.1.2 to 1.1.3.
- [Commits](https://github.com/dylanhart/ulid-rs/compare/v1.1.2...v1.1.3)

---
updated-dependencies:
- dependency-name: ulid
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-07-15 10:30:07 -04:00
dependabot[bot] f82f3ddeb7 chore(deps): bump softprops/action-gh-release from 1 to 2
Bumps [softprops/action-gh-release](https://github.com/softprops/action-gh-release) from 1 to 2.
- [Release notes](https://github.com/softprops/action-gh-release/releases)
- [Changelog](https://github.com/softprops/action-gh-release/blob/master/CHANGELOG.md)
- [Commits](https://github.com/softprops/action-gh-release/compare/v1...v2)

---
updated-dependencies:
- dependency-name: softprops/action-gh-release
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-07-12 17:58:02 -03:00
dependabot[bot] b0114f5268 chore(deps): bump helm/kind-action from 1.9.0 to 1.10.0
Bumps [helm/kind-action](https://github.com/helm/kind-action) from 1.9.0 to 1.10.0.
- [Release notes](https://github.com/helm/kind-action/releases)
- [Commits](https://github.com/helm/kind-action/compare/v1.9.0...v1.10.0)

---
updated-dependencies:
- dependency-name: helm/kind-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-07-12 17:51:49 -03:00
dependabot[bot] 9cb7732635 chore(deps): bump thiserror from 1.0.61 to 1.0.62
Bumps [thiserror](https://github.com/dtolnay/thiserror) from 1.0.61 to 1.0.62.
- [Release notes](https://github.com/dtolnay/thiserror/releases)
- [Commits](https://github.com/dtolnay/thiserror/compare/1.0.61...1.0.62)

---
updated-dependencies:
- dependency-name: thiserror
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-07-12 17:51:43 -03:00
dependabot[bot] d86b504c6f chore(deps): bump uuid from 1.9.1 to 1.10.0
Bumps [uuid](https://github.com/uuid-rs/uuid) from 1.9.1 to 1.10.0.
- [Release notes](https://github.com/uuid-rs/uuid/releases)
- [Commits](https://github.com/uuid-rs/uuid/compare/1.9.1...1.10.0)

---
updated-dependencies:
- dependency-name: uuid
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-07-12 17:51:35 -03:00
dependabot[bot] d41c8fc9e5 chore(deps): bump serde_json from 1.0.118 to 1.0.120
Bumps [serde_json](https://github.com/serde-rs/json) from 1.0.118 to 1.0.120.
- [Release notes](https://github.com/serde-rs/json/releases)
- [Commits](https://github.com/serde-rs/json/compare/v1.0.118...v1.0.120)

---
updated-dependencies:
- dependency-name: serde_json
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-07-12 17:51:25 -03:00
dependabot[bot] b03ee1ce32 chore(deps): bump async-trait from 0.1.80 to 0.1.81
Bumps [async-trait](https://github.com/dtolnay/async-trait) from 0.1.80 to 0.1.81.
- [Release notes](https://github.com/dtolnay/async-trait/releases)
- [Commits](https://github.com/dtolnay/async-trait/compare/0.1.80...0.1.81)

---
updated-dependencies:
- dependency-name: async-trait
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-07-12 17:51:18 -03:00
dependabot[bot] 46975b0547 chore(deps): bump actions/setup-python from 5.0.0 to 5.1.1
Bumps [actions/setup-python](https://github.com/actions/setup-python) from 5.0.0 to 5.1.1.
- [Release notes](https://github.com/actions/setup-python/releases)
- [Commits](https://github.com/actions/setup-python/compare/v5.0.0...v5.1.1)

---
updated-dependencies:
- dependency-name: actions/setup-python
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-07-12 17:51:11 -03:00
dependabot[bot] fcec438f82 chore(deps): bump docker/build-push-action from 3 to 6
Bumps [docker/build-push-action](https://github.com/docker/build-push-action) from 3 to 6.
- [Release notes](https://github.com/docker/build-push-action/releases)
- [Commits](https://github.com/docker/build-push-action/compare/v3...v6)

---
updated-dependencies:
- dependency-name: docker/build-push-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-07-12 17:51:04 -03:00
Joonas Bergius aa7017ca3a chore: Add dependabot configuration
Signed-off-by: Joonas Bergius <joonas@cosmonic.com>
2024-07-12 17:02:15 -03:00
luk3ark 600b419088 added feature flag for bindings
Signed-off-by: luk3ark <luk3ark@gmail.com>
2024-07-12 12:52:17 -03:00
luk3ark 87862d4534 finalized
Signed-off-by: luk3ark <luk3ark@gmail.com>
2024-07-12 12:52:17 -03:00
luk3ark f93cc2cb99 Fixed reference to oam-manifest in wit
Signed-off-by: luk3ark <luk3ark@gmail.com>
2024-07-12 12:52:17 -03:00
luk3ark 1087bba408 updated stream impl, copied publish flow for wadm, moved oam into wadm wit package
Signed-off-by: luk3ark <luk3ark@gmail.com>
2024-07-12 12:52:17 -03:00
luk3ark 956bdeb161 cleanup - remove redundant dependency
Signed-off-by: luk3ark <luk3ark@gmail.com>
2024-07-12 12:52:17 -03:00
luk3ark a92cc37510 chore: remove redundant comments
Signed-off-by: luk3ark <luk3ark@gmail.com>
2024-07-12 12:52:17 -03:00
luk3ark 0d329750b3 refactored wadm provider to use wadm client and types crate
Signed-off-by: luk3ark <luk3ark@gmail.com>
2024-07-12 12:52:17 -03:00
Taylor Thomas 85c655d3f6 feat(state): retry state updates
This adds retries for putting state into a bucket. It is fairly basic
but it gets the job done. We had to add a few more constraints to the
`Store` trait to handle the retries (such as cloning data).

Co-authored-by: Brooks Townsend <brooksmtownsend@gmail.com>
Signed-off-by: Taylor Thomas <taylor@cosmonic.com>
2024-07-03 14:15:09 -06:00
Bailey Hayes a751827f93 fix: use instances in all examples
Signed-off-by: Bailey Hayes <behayes2@gmail.com>
2024-07-03 16:06:29 -04:00
Brooks Townsend 6040ee35be test(e2e): fix manifests for multiple hosts test
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-07-02 16:05:21 -04:00
Brooks Townsend bbef11d1fe refactor(*): rename actor to component
Co-authored-by: Taylor Thomas <taylor@oftaylor.com>
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-07-02 16:05:21 -04:00
Brooks Townsend 836d48d6bd fix(#279)!: compute ID based on all unique features
Co-authored-by: Taylor Thomas <taylor@oftaylor.com>
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-07-02 16:05:21 -04:00
Brooks Townsend cc396cdae2 test(e2e): enable upgrades integration test
Co-authored-by: Taylor Thomas <taylor@oftaylor.com>
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>

test(e2e): update upgrade test

Co-authored-by: Taylor Thomas <taylor@oftaylor.com>
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-07-02 16:05:21 -04:00
Joonas Bergius 8bdaba7f26 feat(wadm)!: Switch wadm events to limits-based stream and create a new sourcing stream for EventConsumer
Signed-off-by: Joonas Bergius <joonas@cosmonic.com>
2024-07-02 08:50:59 -06:00
Joonas Bergius 8f3efe3899 chore(wadm): Bump wadm crate to 0.12.2
Signed-off-by: Joonas Bergius <joonas@cosmonic.com>
2024-07-02 08:46:17 -06:00
Brooks Townsend f3f9ed351b test(integration): enable consumer integration tests
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
Co-authored-by: Taylor Thomas <taylor@oftaylor.com>
2024-07-01 10:33:21 -06:00
Joonas Bergius c82fdc044e fix(wadm): Set PutModelResponse current_version based on the stored current_version
Signed-off-by: Joonas Bergius <joonas@cosmonic.com>
2024-06-27 09:31:52 -06:00
Brooks Townsend 41ed2459d8 chore(wadm): update to v0.12.2
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-06-20 16:18:31 -04:00
Victor Adossi 6295e86490 chore(*): fix clippy lints
Signed-off-by: Victor Adossi <vadossi@cosmonic.com>
2024-06-18 10:13:29 -04:00
Victor Adossi 5d76ab006d chore(deps): remove once-cell
Signed-off-by: Victor Adossi <vadossi@cosmonic.com>
2024-06-18 10:13:29 -04:00
Victor Adossi acf80e2748 fix(validation): allow multiple links to the same target
This commit removes the check that prevented making multiple links to
the same target.

Signed-off-by: Victor Adossi <vadossi@cosmonic.com>
2024-06-14 14:03:41 -06:00
Brooks Townsend e45e1e7e90 fix(wadm): support empty delete payload
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-06-14 11:24:13 -04:00
Brooks Townsend ed2e999b60 fix(client): correct delete topic
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-06-13 10:50:46 -04:00
Brooks Townsend f79d5cdf15 chore: bump to v0.12.1 for release
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-06-11 19:09:09 -04:00
Brooks Townsend 4d31dc9c1b fix(client): app list topic
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-06-07 14:22:56 -04:00
ritesh089 272b1029b4
feat(server): allow providers with different versions in different apps
Signed-off-by: rrai35 <ritesh.rai@aexp.com>
Authored-by: rrai35 <ritesh.rai@aexp.com>
2024-06-03 10:17:08 -07:00
Taylor Thomas 61c94fc559 fix(wadm): Adds oam schema to wadm crate
This also adds a check to make sure that the two schemas are in sync
since I had to copy it over. Using something like a build.rs would have
had the same issue in that the file was outside of the crate.

Signed-off-by: Taylor Thomas <taylor@cosmonic.com>
2024-05-24 17:48:31 -06:00
Taylor Thomas 5e3d5272b4 fix(*): Bumps wadm crate version
We missed the crate bump when we did the other version bump

Signed-off-by: Taylor Thomas <taylor@cosmonic.com>
2024-05-24 15:21:51 -06:00
Taylor Thomas 389664eecd fix(*): Removed unneeded readme keys and update paths
Signed-off-by: Taylor Thomas <taylor@cosmonic.com>
2024-05-24 14:31:55 -06:00
Brooks Townsend d9e7de62ee chore(wadm): bump to v0.12.0 for release
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-05-24 09:11:06 -04:00
Taylor Thomas 6e6ad37650 feat(client)!: Adds fully functional wadm client
The client has been integrated into the e2e tests to ensure that it
works properly. This is marked as a breaking change because it moves
most of the API types out of the main wadm library.

Signed-off-by: Taylor Thomas <taylor@cosmonic.com>
2024-05-22 11:36:36 -06:00
Taylor Thomas b70d019799 feat(*)!: Introduces a new wadm-types crate
This allows us to import the types separately from the rest of the lib
code for wadm

Signed-off-by: Taylor Thomas <taylor@cosmonic.com>
2024-05-22 11:36:36 -06:00
Taylor Thomas 7c3c5a5ac7 ref(*): Moves wadm library into its own crate
This is in preparation to break out the types into their own crate

Signed-off-by: Taylor Thomas <taylor@cosmonic.com>
2024-05-22 11:36:36 -06:00
Victor Adossi a54277ae88 feat(validation): add manifest validation
This commit adds validation functions along with utility accessors to
`Manifest` to enable checking WADM manifests for common errors.

As the validation functions are exposed, they can be used by downstream
crates (ex. `wash`) to validate WADM manifests or try to catch
errors *before* a manifest is used.

Signed-off-by: Victor Adossi <vadossi@cosmonic.com>
2024-05-20 22:25:08 -06:00
Eric Gregory e63bfc66f1
docs(readme): Update manifest example and language for 1.0 (#284)
* Update manifest example and language in readme

Signed-off-by: Eric Gregory <eric@cosmonic.com>

* Use hello world manifest

Signed-off-by: Eric Gregory <eric@cosmonic.com>

* Remove versions from manifests, other nits

Signed-off-by: Eric Gregory <eric@cosmonic.com>

---------

Signed-off-by: Eric Gregory <eric@cosmonic.com>
2024-05-20 19:36:47 +00:00
Brooks Townsend 81c8207439 chore: bump to v0.11.2
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-05-20 15:35:31 -04:00
Brooks Townsend d36c5efd42 ci(release): use native macos arm64 runner
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-05-20 13:05:22 -04:00
Taylor Thomas 74a71f356d
feat(*)!: Makes version optional (#281)
This PR makes two major changes. The first, and most important, is that
versions are now optional. If no version field is passed, wadm will
automatically generate a ULID to use as its version. This means that
listing versions can be done in order by sorting them lexicographically.

Second is an inversion of the delete behavior. Most of the time the
delete endpoint in practice has been used to delete the whole application
and not just a specific version. Now, by default, if you call the delete
endpoint, it will delete all versions. You can still pass a now optional
version to delete a single specific version.

While I was in the handlers, I also removed an unused field for
orphaning undeploys.

Signed-off-by: Taylor Thomas <taylor@cosmonic.com>
2024-05-15 16:20:26 -04:00
Victor Adossi 334db5431c chore: fix clippy lints
Signed-off-by: Victor Adossi <vadossi@cosmonic.com>
2024-05-06 13:17:33 -04:00
Joonas Bergius fde119ffec
Fix rendering WADM_NATS_CREDS_FILE when creds secret is provided (#273)
Signed-off-by: Joonas Bergius <joonas@cosmonic.com>
2024-05-01 12:08:17 -06:00
Joonas Bergius b9fd65e990 Add support for mounting Kubernetes Secret for nats.creds file
Signed-off-by: Joonas Bergius <joonas@cosmonic.com>
2024-05-01 12:21:16 -04:00
Brooks Townsend 4a8117439e chore: bump to v0.11.1
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-04-30 14:25:56 -04:00
Brooks Townsend 4e14010f39 fix(events): handle old provider state gracefully
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-04-30 14:25:56 -04:00
Taylor Thomas f85bed3396 docs(README): Removes some issues that have been fixed or addressed
Signed-off-by: Taylor Thomas <taylor@cosmonic.com>
2024-04-17 20:55:23 -04:00
Taylor Thomas 3eb06a0023 fix(ci): Wrong chart path
Signed-off-by: Taylor Thomas <taylor@cosmonic.com>
2024-04-17 17:13:41 -06:00
Taylor Thomas 152eb99f75 chore(chart): Bumps chart to use latest released version
Signed-off-by: Taylor Thomas <taylor@cosmonic.com>
2024-04-17 17:00:55 -06:00
Brooks Townsend 15da88eb2b chore: bump ctl interface v1.0.0
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-04-17 12:55:36 -04:00
Brooks Townsend c01e4dd7b8 chore: bump to v0.11.0 for release
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-04-17 11:14:16 -04:00
Brooks Townsend 90c3eebb9c feat(config): support monitoring external config
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>

don't delete non-managed config

Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-04-16 18:06:59 -04:00
Brooks Townsend 6a35dd09bc refactor(scaler): simplify config cleanup logic
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-04-16 18:06:59 -04:00
Brooks Townsend d9af83ca19 chore: bump to v0.11.0-alpha.4
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-04-16 18:06:59 -04:00
Brooks Townsend a5ee1c0e11 test(e2e): assert configuration creation
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-04-16 18:06:59 -04:00
Brooks Townsend 7f5f9247a5 test(config): add unit test for scaler
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-04-16 18:06:59 -04:00
Brooks Townsend 26f3c84b1b feat(scaler)!: manage component named configuration
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-04-16 18:06:59 -04:00
Brooks Townsend 99fc559b6a feat(scaler)!: manage link named configuration
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-04-16 18:06:59 -04:00
Brooks Townsend a4e101bd2b feat(*)!: configuration reconciliation loop
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>

fix lints

Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>

fix todos

Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-04-16 18:06:59 -04:00
Brooks Townsend 12224b23d0 feat(scaler)!: manage provider named configuration
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>

fix todo about streams

Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-04-16 18:06:59 -04:00
Brooks Townsend 903115a6c8 feat(config): add support for config events
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-04-16 18:06:59 -04:00
Brooks Townsend 9817134240 fix: remove actor_started/actor_stopped event handling
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>

fix test

Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>

remove testing eprintlns

Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-04-16 18:06:59 -04:00
Brooks Townsend 2dda8601e8 fix: ensure streams have updated subjects
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-04-12 14:24:11 -04:00
Brooks Townsend 9243793d5e chore: revert noisy info log to trace
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-04-12 11:56:53 -04:00
Brooks Townsend b2084a2c91 chore: bump to v0.11.0-alpha.3
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-04-12 11:56:53 -04:00
Brooks Townsend 346235bcaa fix(scaler): remove ineligible providers
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-04-12 11:56:53 -04:00
Brooks Townsend 3be058f48b fix(scaler): remove ineligible components
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-04-12 11:56:53 -04:00
Brooks Townsend f1cb64dafd fix(link): issue where delete didn't trigger put
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-04-12 11:56:53 -04:00
Brooks Townsend 02d4b1d064 refactor: use common config encode function
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-04-11 22:00:26 -04:00
Brooks Townsend 1c6c627884 chore: make config example more descriptive
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-04-11 22:00:26 -04:00
Brooks Townsend e1838f0b21 feat(*)!: support passing configuration for all components
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-04-11 22:00:26 -04:00
Brooks Townsend ee302525a6 fix: calculate component ID from model and name
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-04-11 15:41:35 -04:00
Brooks Townsend 2f7e8a9915 fix(state): allow actor alias for compat
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-04-11 15:29:56 -04:00
Brooks Townsend 48763ff2ea chore: bump to v0.11.0-alpha.2
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>

chore: bump to v0.11.0-alpha.2

Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-04-09 12:46:02 -04:00
Brooks Townsend e798b1305c fix: resolve issue updating providers
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-04-09 12:46:02 -04:00
Brooks Townsend b997d8becd fix(*)!: update actor_id to component_id
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-04-09 12:46:02 -04:00
Brooks Townsend 404523f240 feat(model): support labels in manifest
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-04-08 19:11:34 -04:00
Brooks Townsend 19690b44f5 chore: update Cargo.lock
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-04-08 16:59:12 -04:00
Brooks Townsend efbbba58ac chore: bump to v0.11.0-alpha.1
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-04-08 16:59:12 -04:00
Brooks Townsend b6febbbaa3 feat(model): rename actor type to component
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-04-08 16:12:46 -04:00
Brooks Townsend f9558af7b4 chore(gitignore): ignore .vscode
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-04-08 16:12:46 -04:00
Brooks Townsend 04ca08124f test(e2e): disable multitenant and upgrades tests
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-04-08 16:12:46 -04:00
Brooks Townsend 37f59146a1 feat(*)!: update to control interface 1.0.0-alpha3
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>

feat(*)!: handle component_scale events

Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-04-08 16:12:46 -04:00
Brooks Townsend 2a311e106f test(e2e_multiple_hosts): update to wasmcloud 1.0
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>

test(e2e): wip update to 1.0 manifests

Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>

test(e2e_multiple_hosts): update to 1.0

Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>

test(e2e_multiple_hosts): update for 1.0

Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>

at least

Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-04-08 16:12:46 -04:00
Brooks Townsend 372cc755b9 fix: add support for component links
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-04-08 16:12:46 -04:00
Brooks Townsend 59bfdcdcba fix: add support for config passthrough
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-04-08 16:12:46 -04:00
Brooks Townsend 483922153c chore(clippy): resolve new lints
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-04-08 16:12:46 -04:00
Brooks Townsend 85a6f9a830 fix(link)!: immediately put link when absent
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-04-08 16:12:46 -04:00
Brooks Townsend d6c6bfdf2c fix(oam): update JSON schema with interface link definitions
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-04-08 16:12:46 -04:00
Brooks Townsend ec7333312d test(*): update manifests with interface link definitions
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-04-08 16:12:46 -04:00
Brooks Townsend 3ad452e14b feat(model): change link config to interface structure
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-04-08 16:12:46 -04:00
Brooks Townsend e672e37f31 chore(model): skip serializing spread if empty
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-04-08 16:12:46 -04:00
Brooks Townsend 61fe7e2c83 test(*): update unit tests for wasmcloud 1.0
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-04-08 16:12:46 -04:00
Brooks Townsend 28bb06b51d test(event): remove unnecessary inventory mocking
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-04-08 16:12:46 -04:00
Brooks Townsend eae4d9e806 fix(*): update events to 1.0 working versions
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-04-08 16:12:46 -04:00
Brooks Townsend 8d38c77dc2 test(integration): move integration test to update
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-04-08 16:12:46 -04:00
Brooks Townsend fd4ae29023 feat(*)!: update to 1.0-compatible control interface
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>

everything but scalers compile

Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>

commands, events, scaler, server, workers compile

Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>

scalers compile

Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>

all but link scaler

Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>

tests running!

Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-04-08 16:12:46 -04:00
Joonas Bergius ea900d0a0c
feat: Add helm chart for deploying wadm (#248)
Signed-off-by: Joonas Bergius <joonas@cosmonic.com>
2024-04-01 18:35:23 +00:00
Brooks Townsend f83bdff194 refactor(*): remove singular actor events
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-03-05 06:03:15 -08:00
Brooks Townsend a99d7242f6 feat(events): support actor_scaled event
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>

add scaled events to parse

Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-03-05 06:03:15 -08:00
Joonas Bergius ff0e21a3a7 chore: Update the default OTLP HTTP port to match the current OTEL spec
Signed-off-by: Joonas Bergius <joonas@cosmonic.com>
2024-02-05 09:31:30 -05:00
Dan Norris d07c29481b feat: support TLS CA certificates when building a NATS client
NATS servers support TLS, which means sometimes you need to be able to
provide certificates when instantiating a client.

This specifically adds support for supplying a CA certificate to use
when connecting to a NATS server. I opted not to add client certificate
support for mTLS for now, but it's an easy lift if it needs to be
added later.

Signed-off-by: Dan Norris <protochron@users.noreply.github.com>
2024-01-30 12:31:24 -05:00
Brooks Townsend 2c55b59203 refactor: stronger typed data for lattice subscriptions
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-01-22 14:53:24 -05:00
Connor Smith dcf8774031 feat(*)!: subscribe to updated lattice event subjects
Signed-off-by: Connor Smith <connor.smith.256@gmail.com>
2024-01-22 14:53:24 -05:00
Brooks Townsend 35b9df8e26 release: v0.10.0
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-01-18 17:00:45 -05:00
Brooks Townsend b30f6e3928 refactor(status): rename ready and compensating
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-01-16 12:27:10 -05:00
Brooks Townsend 1703502463 fix(scaler): resolve possible linkdef loop
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-01-16 12:27:10 -05:00
Brooks Townsend f8b9f23466 release: v0.10.0-rc1
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-01-12 18:18:33 -05:00
Brooks Townsend 1506a3b594 fix(scaler): always publish delete notification
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-01-12 18:11:20 -05:00
Brooks Townsend 12805a1aea fix(test): test for concurrency not instances
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-01-12 18:07:05 -05:00
Brooks Townsend e5a40286b0 ci(test): update NATS to 2.10.7
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-01-12 18:07:05 -05:00
Brooks Townsend 7416824597 chore(test)!: update wash wasmcloud version to 0.81
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-01-12 18:07:05 -05:00
Brooks Townsend eec8dd851b fix(e2e)!: compute actor count from max
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-01-12 18:07:05 -05:00
Brooks Townsend 52cfa300cc chore(e2e): upgrade wasmcloud version to 0.81
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-01-12 18:07:05 -05:00
Brooks Townsend 3799e1b569 feat(*)!: prevent querying inventory for new heartbeat
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>

rip out last #191 artifact

Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>

cleanup

Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>

pr cleanup

Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2024-01-12 18:07:05 -05:00
Joonas Bergius 0686159c73
chore: Replace atty with std::io::IsTerminal (#231)
Signed-off-by: Joonas Bergius <joonas@cosmonic.com>
2024-01-09 16:47:31 -07:00
Joonas Bergius 6e93fd883a chore: Update serde_yaml version to bump unsafe-libyaml
Signed-off-by: Joonas Bergius <joonas@cosmonic.com>
2024-01-09 10:06:01 -07:00
Joonas Bergius e2e3efd3d6 chore: Update wasm-bindgen version
Signed-off-by: Joonas Bergius <joonas@cosmonic.com>
2024-01-09 09:37:10 -05:00
Joonas Bergius 5ae9bbafe3 chore: Update ahash version to bump zerocopy
Signed-off-by: Joonas Bergius <joonas@cosmonic.com>
2024-01-09 09:34:24 -05:00
Brooks Townsend 49dde268b7 fix(scaler): remove backoff wrapper around link
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2023-12-21 14:38:14 -05:00
Brooks Townsend 74c49823f9 chore: update instances property in YAML files
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>

update yamls and readmes
2023-12-13 15:20:16 -05:00
Brooks Townsend 727ac2f153 feat(model): Update spreadscaler to use instances
instead of replicas

Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>

update model and code
2023-12-13 15:20:16 -05:00
Brooks Townsend aafef190ed chore: addressed clippy warnings rust 1.74
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2023-12-13 14:45:27 -05:00
Brooks Townsend ac9962c807 chore(build): update docker actions to v3
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2023-12-13 11:39:00 -05:00
Brooks Townsend 4b2237f1eb chore: updated release github token
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2023-12-13 11:39:00 -05:00
Brooks Townsend dbe3c284dd chore(gitignore): ignore .idea
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2023-11-28 17:47:23 -05:00
Brooks Townsend 86fdecc4ca feat(status)!: rename compensating and ready status
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>

chore: update to lowercase

Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2023-11-28 17:47:23 -05:00
Brooks Townsend 48bd5482af feat(status): prevent duplicate status publishes
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2023-11-28 17:47:14 -05:00
Joonas Bergius 7c5f5d8259 Derive PartialEq and Eq for DeleteResult
Signed-off-by: Joonas Bergius <joonas@cosmonic.com>
2023-11-24 10:39:46 -07:00
Brooks Townsend 79bfee3075 chore: bump to 0.9.0
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2023-11-20 11:35:03 -05:00
Roman Volosatovs 68580bb7dd
build: update dependencies (#218)
Signed-off-by: Roman Volosatovs <rvolosatovs@riseup.net>
2023-11-17 13:56:32 -07:00
Aishwarya Harpale 571adff020
chore: Better JSON validation error messages
* chore: Better JSON validation error messages

Signed-off-by: aish-where-ya <aharpale@cosmonic.com>

* nit fix

Signed-off-by: aish-where-ya <aharpale@cosmonic.com>

---------

Signed-off-by: aish-where-ya <aharpale@cosmonic.com>
2023-11-06 12:21:42 -06:00
Brooks Townsend f628a17eb2 chore: bump to 0.8.1
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2023-11-02 11:40:31 -04:00
Taylor Thomas 54ee1a43ca
fix(storage): Updates scalers to do a single data lookup per event (#211)
This works by looking up all the data once per event (whenever scalers
are called). This reduces lookups of links by at least 40% and should
reduce data usage as well (though not as much as doing some sort of
caching would)

Fixes #203

Signed-off-by: Taylor Thomas <taylor@cosmonic.com>
2023-11-02 09:23:15 -06:00
Brooks Townsend 0354619dff chore: bump to v0.8.0
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2023-10-30 11:58:27 -04:00
Victor Adossi 1d2c7f082d
fix: increase length to 512 for image references (#209)
Sometimes, sufficiently nested folders will require a length longer
than 128 characters for the path to a Wasm executable, when using the
`file://` specifier.

While it's questionable whether the image should be limited at all, it
looks like 128 is at the very least too low.

This commit increases the string allowed for `image` specification to
a higher multiple of 128.

Signed-off-by: Victor Adossi <vadossi@cosmonic.com>
2023-10-30 13:35:36 +00:00
Brooks Townsend acec86fe59 fix: update stored host provider annotations
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>

update test

Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2023-10-26 18:15:57 -04:00
Taylor Thomas 9bd9e9b5f7 feat(*): Update to newest control client
This bumps the control client version and removes some of the previous work
that was in rc.1

Signed-off-by: Taylor Thomas <taylor@cosmonic.com>
2023-10-24 13:48:10 -06:00
Brooks Townsend 9ca8b96b08 build(test): update stale host heartbeat test
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>

tatoooooine

Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2023-10-24 14:11:16 -04:00
Brooks Townsend b5abc1ba71 chore(gitignore): ignore .DS_Store
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2023-10-24 14:11:16 -04:00
Brooks Townsend fcc19d86fd feat(event): restore provider information from heartbeat
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2023-10-24 14:11:16 -04:00
Brooks Townsend 85b53acdbf feat(event): update provider status with neutral event
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2023-10-24 14:11:16 -04:00
Brooks Townsend 30da210e31 feat(event): restore actor image reference from heartbeat
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>

just propagate actor description

Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2023-10-24 14:11:16 -04:00
Dan Norris c995f193ec
chore(api): derive clone on VersionInfo and ModelSummary
This marks the VersionInfo and ModelSummary types as Clone, which makes
them easier to work with when including them in collections (ex. storing
in a Vec).

Signed-off-by: Dan Norris <protochron@users.noreply.github.com>
2023-10-24 09:05:41 -04:00
Connor Smith 4d75405c7e chore: add smaller logos
Signed-off-by: Connor Smith <connor.smith.256@gmail.com>
2023-10-19 13:47:15 -06:00
Connor Smith 4bae1adb67 ci: update actions
Signed-off-by: Connor Smith <connor.smith.256@gmail.com>
2023-10-19 11:25:33 -04:00
Aishwarya Harpale e0f9bc1f3f
feat(api): ensure deployed providers can’t run different versions
* Modified provider checks to be version independent

Signed-off-by: aish-where-ya <aharpale@cosmonic.com>

* Fix test to reflect change

Signed-off-by: aish-where-ya <aharpale@cosmonic.com>

* Added deploy time check for duplicate provider refs

Signed-off-by: aish-where-ya <aharpale@cosmonic.com>

* Fixed edge cases for version checks

Signed-off-by: aish-where-ya <aharpale@cosmonic.com>

* Added e2e test to check for duplicate provider versions

Signed-off-by: aish-where-ya <aharpale@cosmonic.com>

* Addressed review comments

Signed-off-by: aish-where-ya <aharpale@cosmonic.com>

* Minor fix to retrieving staged model

Signed-off-by: aish-where-ya <aharpale@cosmonic.com>

---------

Signed-off-by: aish-where-ya <aharpale@cosmonic.com>
2023-10-18 17:19:40 -04:00
Taylor Thomas f7acd7e865
Merge pull request #192 from thomastaylor312/chore/rc_bump
chore(*): Bumps version to 0.8.0-rc.1
2023-10-12 14:24:59 -06:00
Taylor Thomas 68b3503ef3 chore(*): Bumps version to 0.8.0-rc.1
This also removes some comments we missed in a previous PR

Signed-off-by: Taylor Thomas <taylor@cosmonic.com>
2023-10-12 13:38:47 -06:00
Ahmed Tadde 78aca99a7a
[FEAT] Support upgrades of just link config or provider config (#185)
* feat(scaler): WIP - computing hash of linkdef values for linkscaler id

Signed-off-by: Ahmed <ahmedtadde@gmail.com>

* feat(scaler): compute hash of provider_config for provider (spread)scaler id

Signed-off-by: Ahmed Tadde <ahmedtadde@gmail.com>

* feat(scaler): compute hash of provider_config for provider (daemon)scaler id

Signed-off-by: Ahmed Tadde <ahmedtadde@gmail.com>

* feat(scaler): updating e2e tests to ensure that changes to linkdef values and provider config don't cause regression bugs.

Signed-off-by: Ahmed <ahmedtadde@gmail.com>

* test: update manifest input for e2e upgrades test

Signed-off-by: Ahmed <ahmedtadde@gmail.com>

* feat(scaler): CapabilityConfig hash compute has fallback behavior when json value fails to parse.

Signed-off-by: Ahmed <ahmedtadde@gmail.com>

* refactor(scaler): use base64 encoding for capability config hashing

Signed-off-by: Ahmed <ahmedtadde@gmail.com>

* refactor(scaler): use base64 hash of provider_config directly; don't double hash into u64.

Signed-off-by: Ahmed <ahmedtadde@gmail.com>

* fix(scaler): remove vestigial imports

Signed-off-by: Ahmed <ahmedtadde@gmail.com>
2023-10-12 19:32:50 +00:00
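The base64-based scaler ID scheme described in these commits can be illustrated with a small sketch. This is not wadm's actual code; `scaler_id`, its format, and the hand-rolled encoder are hypothetical, chosen only to show the idea: embedding a base64 encoding of the raw config in the scaler's ID means any config change produces a new ID (and thus a new scaler), with no risk of hash collisions from folding the config into a u64.

```rust
// Standard base64 alphabet; a minimal encoder so the sketch stays
// self-contained (real code would use the `base64` crate).
const TBL: &[u8; 64] = b"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

fn base64(data: &[u8]) -> String {
    let mut out = String::new();
    for chunk in data.chunks(3) {
        // Pack up to 3 bytes into a 24-bit group, zero-padding short chunks.
        let b = [chunk[0], *chunk.get(1).unwrap_or(&0), *chunk.get(2).unwrap_or(&0)];
        let n = ((b[0] as u32) << 16) | ((b[1] as u32) << 8) | (b[2] as u32);
        for i in 0..4 {
            if i <= chunk.len() {
                out.push(TBL[((n >> (18 - 6 * i)) & 63) as usize] as char);
            } else {
                out.push('='); // pad for short final chunks
            }
        }
    }
    out
}

/// Hypothetical ID format: component name plus an encoding of its config,
/// so changed config => changed ID => the scaler is replaced, not reused.
fn scaler_id(component: &str, provider_config: &str) -> String {
    format!("{}-{}", component, base64(provider_config.as_bytes()))
}

fn main() {
    println!("{}", scaler_id("httpserver", "Man")); // prints httpserver-TWFu
    // Different config yields a different identity.
    assert_ne!(
        scaler_id("httpserver", "port=8080"),
        scaler_id("httpserver", "port=8081")
    );
}
```

Using the encoding directly (rather than hashing it down to a u64, as an earlier revision did) keeps the ID deterministic and collision-free at the cost of length.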
Taylor Thomas b68652e292
Merge pull request #190 from thomastaylor312/fix/link_caching
fix(*): Link caching and other goodies
2023-10-12 13:03:38 -06:00
Taylor Thomas 6f216c4cc6 fix(state): Update state to use a backwards compatible version of parsing out host information
Signed-off-by: Taylor Thomas <taylor@cosmonic.com>
2023-10-12 12:30:27 -06:00
Taylor Thomas fe2964a773
Merge pull request #188 from ahmedtadde/feat/optional-prefix-for-internal-streams
[FEAT] Add the ability to set a stream prefix for all wadm internal streams
2023-10-12 12:19:48 -06:00
Taylor Thomas 15c6b4f2b2 fix(*): Link caching and other goodies
This PR got a little bit out of hand. We had a bunch of bugs that may or
may not have been interrelated, so we fixed most of them. The biggest thing
is that we switched to caching link definitions for scalers so that
the number of consumers created is much lower. Additionally, we fixed when
status notifications are published so that we don't get a reconciling status
when other scalers are being removed

Signed-off-by: Taylor Thomas <taylor@cosmonic.com>
2023-10-12 10:35:58 -06:00
Ahmed 625cbc3afc docs: adds some documentation to clarify the currently known concrete use case for the `stream-prefix` arg
Signed-off-by: Ahmed <ahmedtadde@gmail.com>
2023-10-12 09:00:43 -04:00
Ahmed 5ee3fa495f feat(wadm-nats-streams): add optional stream prefix for internal nats streams.
Signed-off-by: Ahmed <ahmedtadde@gmail.com>
2023-10-06 10:38:04 -04:00
Brooks Townsend 867793f871 chore: bump to 0.7.1
Signed-off-by: Brooks Townsend <brooksmtownsend@gmail.com>
2023-09-28 12:39:28 -04:00
Aishwarya Harpale 1d3a879c41
feat(server)!: validate put manifests with JSON schema
* Added json schema for validating manifests

Signed-off-by: aish-where-ya <aharpale@cosmonic.com>

* Minor fixes

Signed-off-by: aish-where-ya <aharpale@cosmonic.com>

* Addressed review comments

Signed-off-by: aish-where-ya <aharpale@cosmonic.com>

* Added additional test case, better error messages

Signed-off-by: aish-where-ya <aharpale@cosmonic.com>

* Fixes to spread requirements

Signed-off-by: aish-where-ya <aharpale@cosmonic.com>

* allow objects in provider config

Signed-off-by: Brooks Townsend <brooks@cosmonic.com>

* formatted JSON

Signed-off-by: Brooks Townsend <brooks@cosmonic.com>

---------

Signed-off-by: aish-where-ya <aharpale@cosmonic.com>
Signed-off-by: Brooks Townsend <brooks@cosmonic.com>
Co-authored-by: Brooks Townsend <brooks@cosmonic.com>
2023-09-22 20:04:56 +00:00
Taylor Thomas 81d754d49c
Merge pull request #183 from ahmedtadde/feat/simplify-adding-scaler-for-scale-manager
Refactor manifest upgrade for more efficient scaler computation
2023-09-22 12:40:39 -06:00
Brooks Townsend 4813b4fbb0 chore(test): use rc10 for tests
Signed-off-by: Brooks Townsend <brooks@cosmonic.com>

oops comment in thing
2023-09-22 14:39:27 -04:00
Brooks Townsend c19a3f6c8c chore: refactors post rebase
Signed-off-by: Brooks Townsend <brooks@cosmonic.com>
2023-09-22 14:39:27 -04:00
Brooks Townsend 4a6f6fd11c chore: bump to 0.7.0
Signed-off-by: Brooks Townsend <brooks@cosmonic.com>
2023-09-22 14:39:27 -04:00
Brooks Townsend c02f7ab243 fix(scaler): avoid recursive notification with no event
Signed-off-by: Brooks Townsend <brooks@cosmonic.com>

feat(scaler): register events if some

Signed-off-by: Brooks Townsend <brooks@cosmonic.com>

correct register logic

Signed-off-by: Brooks Townsend <brooks@cosmonic.com>
2023-09-22 14:39:27 -04:00
Brooks Townsend 4778f6a391 feat(e2e): wait for first lattice event before asserts
Signed-off-by: Brooks Townsend <brooks@cosmonic.com>
2023-09-22 14:39:27 -04:00
Brooks Townsend bb41b6b195 fix(linkdef): ensure linkdefs are put on actors_started
Signed-off-by: Brooks Townsend <brooks@cosmonic.com>
2023-09-22 14:39:27 -04:00
Brooks Townsend 8de866c418 chore(test): use Rust host for e2e tests
Signed-off-by: Brooks Townsend <brooks@cosmonic.com>

nats host

Signed-off-by: Brooks Townsend <brooks@cosmonic.com>

chore(test): use rc9

Signed-off-by: Brooks Townsend <brooks@cosmonic.com>

use rc9
2023-09-22 14:39:27 -04:00
Brooks Townsend c34929b136 fix(test): add annotation to annotation_stop
Signed-off-by: Brooks Townsend <brooks@cosmonic.com>
2023-09-22 14:39:27 -04:00
Brooks Townsend d4be374ab0 chore: remove simplescaler reference impl
Signed-off-by: Brooks Townsend <brooks@cosmonic.com>

chore: remove simplescaler reference impl

Signed-off-by: Brooks Townsend <brooks@cosmonic.com>
2023-09-22 14:39:27 -04:00
Brooks Townsend c2e30e63a9 feat(command): remove start stop actor cmds
Signed-off-by: Brooks Townsend <brooks@cosmonic.com>
2023-09-22 14:39:27 -04:00
Brooks Townsend d5b18a0d45 feat(scaler): use scale commands for actors
Signed-off-by: Brooks Townsend <brooks@cosmonic.com>
2023-09-22 14:39:27 -04:00
Ahmed 261dae2604 feat(workers): prevent redundant scalers computation on published manifest event.
Signed-off-by: Ahmed <ahmedtadde@gmail.com>
2023-09-22 07:27:44 -04:00
Taylor Thomas 064237ab1c
Merge pull request #182 from ahmedtadde/feat/ensure-linkdef-target-maps-to-some-capability-comp
Validate that linkdef targets actually exist in the manifest
2023-09-21 16:33:13 -06:00
Ahmed 786ca22424 feat(model): every trait(linkdef).target should map to some capability component in manifest
Signed-off-by: Ahmed <ahmedtadde@gmail.com>
2023-09-21 14:54:12 -04:00
Taylor Thomas 1321bcd818
Merge pull request #181 from thomastaylor312/feat/state_updates
feat!(state): Updates event loop to store state from ActorsStarted/Stopped
2023-09-21 10:34:02 -06:00
Ahmed 2a378fcd4a feat(server): improved output messages for model operations
Signed-off-by: Ahmed <ahmedtadde@gmail.com>
2023-09-21 12:11:52 -04:00
Taylor Thomas 47d54442ab feat!(state): Updates event loop to store state from ActorsStarted/Stopped
This should significantly decrease the number of writes to and reads from the
store

Signed-off-by: Taylor Thomas <taylor@cosmonic.com>
2023-09-20 15:44:12 -06:00
Ahmed 942a811079 feat(spreadscaler): return computed spreads ordered by weight. this should be fine; will be revisited later if needed.
Signed-off-by: Ahmed <ahmedtadde@gmail.com>
2023-09-14 12:37:45 -04:00
Ahmed 1b3c76f58a feat(spreadscaler): add some safety for ordering implementation detail for computed spreads output
Signed-off-by: Ahmed <ahmedtadde@gmail.com>
2023-09-14 12:37:45 -04:00
Ahmed Tadde e7ea2cf3e6 feat(spreadscaler): after initial allocation, distribute leftover replicas based on weight
Signed-off-by: Ahmed Tadde <ahmedtaddde@gmail.com>
Signed-off-by: Ahmed Tadde <ahmedtadde@gmail.com>
2023-09-14 12:37:45 -04:00
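The leftover-distribution step described in this commit can be sketched as follows. This is a hypothetical illustration, not wadm's actual implementation: each spread first gets its floor-rounded proportional share, and the remaining replicas are then handed out in descending-weight order (matching the weight-ordered output the follow-up commits describe).

```rust
// Hypothetical sketch of weighted replica allocation; names and types are
// illustrative, not wadm's actual code.
fn allocate_replicas(replicas: usize, weights: &[usize]) -> Vec<usize> {
    let total: usize = weights.iter().sum();
    if total == 0 || replicas == 0 {
        return vec![0; weights.len()];
    }
    // Initial allocation: each spread gets its proportional share, rounded down.
    let mut counts: Vec<usize> = weights.iter().map(|w| replicas * w / total).collect();
    let mut leftover = replicas - counts.iter().sum::<usize>();
    // Hand out the leftover replicas one at a time, heaviest spreads first.
    let mut order: Vec<usize> = (0..weights.len()).collect();
    order.sort_by(|&a, &b| weights[b].cmp(&weights[a]));
    for &i in order.iter().cycle() {
        if leftover == 0 {
            break;
        }
        counts[i] += 1;
        leftover -= 1;
    }
    counts
}

fn main() {
    // 10 replicas over weights 3:1 -> floor gives [7, 2]; the one leftover
    // replica goes to the heaviest spread.
    println!("{:?}", allocate_replicas(10, &[3, 1])); // prints [8, 2]
}
```

Sorting by weight before distributing the remainder also makes the output deterministic, which is the "safety for ordering" concern the second commit addresses.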
Taylor Thomas 226bb82ff6
Merge pull request #174 from emattiza/fix/iss-147-structured-logging-error
bug: fix structured logging crash
2023-09-12 14:13:41 -07:00
Evan Mattiza b1810873d8 bug: fix structured logging crash
configures the log layer to have an appropriate FormatFields when
using json, or uses the default Full configuration

partially addresses #147

Signed-off-by: Evan Mattiza <emattiza@gmail.com>
2023-09-12 14:31:22 +00:00
Taylor Thomas 1bc07979d0
Merge pull request #172 from emattiza/fix/iss-156-name-sort-model-list
fix: implement model name sort in model listing
2023-09-11 13:04:14 -07:00
Evan Mattiza ca2108a1ae fix: implement model name sort in model listing
updates app listing to be sorted on model name

closes #156

Signed-off-by: Evan Mattiza <emattiza@gmail.com>
2023-09-11 19:28:45 +00:00
Brooks Townsend 5c1739d8fa chore: bump to 0.6.0 for release
Signed-off-by: Brooks Townsend <brooks@cosmonic.com>
2023-09-05 15:20:40 -07:00
Brooks Townsend c2b189abcb feat(daemonscaler)!: add provider daemonscaler
Signed-off-by: Brooks Townsend <brooks@cosmonic.com>

cleanup

Signed-off-by: Brooks Townsend <brooks@cosmonic.com>

allow cleanup for provider

Signed-off-by: Brooks Townsend <brooks@cosmonic.com>

added test, fix pr comment

Signed-off-by: Brooks Townsend <brooks@cosmonic.com>

removed wait for washboard

Signed-off-by: Brooks Townsend <brooks@cosmonic.com>

so no head?

Signed-off-by: Brooks Townsend <brooks@cosmonic.com>
2023-09-05 14:18:37 -07:00
Brooks Townsend f04c4177a2 feat(daemonscaler)!: add actor daemonscaler
Signed-off-by: Brooks Townsend <brooks@cosmonic.com>

refactored for correct logic

Signed-off-by: Brooks Townsend <brooks@cosmonic.com>

fixed issue with no spread requirements

Signed-off-by: Brooks Townsend <brooks@cosmonic.com>
2023-09-05 14:18:37 -07:00
Brooks Townsend ed4430c8e4 chore(test): properly run Rust host
Signed-off-by: Brooks Townsend <brooks@cosmonic.com>

try

actually care about port

wasmcloud 0.78

sotp
2023-09-05 13:31:52 -07:00
Brooks Townsend 71cd136ae8 chore: bump to 0.5.1
Signed-off-by: Brooks Townsend <brooks@cosmonic.com>

removed wait for washboard

Signed-off-by: Brooks Townsend <brooks@cosmonic.com>

removed wait for washboard

Signed-off-by: Brooks Townsend <brooks@cosmonic.com>
2023-09-05 13:31:52 -07:00
Brooks Townsend 54f1bbdb2b fix(scaler): count providers towards spread requirements
Signed-off-by: Brooks Townsend <brooks@cosmonic.com>
2023-09-05 13:31:52 -07:00
Brooks Townsend 1b1cb4cae0 chore: bump to 0.5 for release
Signed-off-by: Brooks Townsend <brooks@cosmonic.com>
2023-08-22 21:55:29 -04:00
Brooks Townsend e3cfa7efb5
fix(*): allow revision and version to be optional (#164)
Signed-off-by: Brooks Townsend <brooks@cosmonic.com>
2023-08-22 20:23:29 +00:00
Brooks Townsend 24d19960a7 fix(upgrades): run cleanup commands after adding scalers
Signed-off-by: Brooks Townsend <brooks@cosmonic.com>
2023-08-18 13:16:18 -04:00
Brooks Townsend 37dc530de5 chore: bump version for 0.5.0-rc.1
Signed-off-by: Brooks Townsend <brooks@cosmonic.com>
2023-08-18 13:10:04 -04:00
Aishwarya Harpale 5e7ef37f8a
feat(server)!: add validation support for manifests
* Added validation support for manifests

Signed-off-by: aish-where-ya <aharpale@cosmonic.com>

* Addressed review comments

Signed-off-by: aish-where-ya <aharpale@cosmonic.com>

* Fixed conflicts

Signed-off-by: aish-where-ya <aharpale@cosmonic.com>

* Added comments, removed unnecessary code

Signed-off-by: aish-where-ya <aharpale@cosmonic.com>

---------

Signed-off-by: aish-where-ya <aharpale@cosmonic.com>
2023-08-18 12:07:37 -04:00
Brooks Townsend 5d47a37f71 chore: bump to 0.5.0-alpha.4
Signed-off-by: Brooks Townsend <brooks@cosmonic.com>
2023-08-14 15:10:43 -04:00
Brooks Townsend d18811db54 feat(status): fully compute status for status updates
Signed-off-by: Brooks Townsend <brooks@cosmonic.com>

de bugh

Signed-off-by: Brooks Townsend <brooks@cosmonic.com>

what if not efficient

Signed-off-by: Brooks Townsend <brooks@cosmonic.com>

manifest unpublished

Signed-off-by: Brooks Townsend <brooks@cosmonic.com>

ci: update checkout to v3

Signed-off-by: Brooks Townsend <brooks@cosmonic.com>

update only on full reconcile ez

Signed-off-by: Brooks Townsend <brooks@cosmonic.com>

cheating

Signed-off-by: Brooks Townsend <brooks@cosmonic.com>

cleanup

Signed-off-by: Brooks Townsend <brooks@cosmonic.com>

moar cleanup

Signed-off-by: Brooks Townsend <brooks@cosmonic.com>

more cleanup

Signed-off-by: Brooks Townsend <brooks@cosmonic.com>

status cheat

Signed-off-by: Brooks Townsend <brooks@cosmonic.com>

correct error messages

Signed-off-by: Brooks Townsend <brooks@cosmonic.com>

update on hint too

Signed-off-by: Brooks Townsend <brooks@cosmonic.com>

wait

Signed-off-by: Brooks Townsend <brooks@cosmonic.com>

refactor

Signed-off-by: Brooks Townsend <brooks@cosmonic.com>

no cheating all cleanup

Signed-off-by: Brooks Townsend <brooks@cosmonic.com>

cleanup

Signed-off-by: Brooks Townsend <brooks@cosmonic.com>
2023-08-14 14:31:15 -04:00
Jordan Rash 1fa321aac1
feat(docker): add ca-certificates back to container
* add certs back

Signed-off-by: Jordan Rash <15827604+jordan-rash@users.noreply.github.com>

* brooks saves me

Signed-off-by: Jordan Rash <15827604+jordan-rash@users.noreply.github.com>

---------

Signed-off-by: Jordan Rash <15827604+jordan-rash@users.noreply.github.com>
2023-08-10 15:59:01 -04:00
Brooks Townsend b0c8820cfc feat(e2e): add manifest upgrade e2e test
Signed-off-by: Brooks Townsend <brooks@cosmonic.com>
2023-08-10 14:42:15 -04:00
Brooks Townsend 0c811bddac ci(e2e): run e2e tests in matrix
Signed-off-by: Brooks Townsend <brooks@cosmonic.com>
2023-08-10 14:42:15 -04:00
Brooks Townsend 0c7d6c009c chore(make): delete all streams on cleanup
Signed-off-by: Brooks Townsend <brooks@cosmonic.com>
2023-08-10 14:42:15 -04:00
Brooks Townsend 63b5d64691 chore(e2e): fix flake wadm update status properly
Signed-off-by: Brooks Townsend <brooks@cosmonic.com>

debug

Signed-off-by: Brooks Townsend <brooks@cosmonic.com>

try sleep more

Signed-off-by: Brooks Townsend <brooks@cosmonic.com>
2023-08-10 14:42:15 -04:00
Brooks Townsend 8909ce9bab feat(*)!: implement version upgrades
Signed-off-by: Brooks Townsend <brooks@cosmonic.com>

removed sleep

Signed-off-by: Brooks Townsend <brooks@cosmonic.com>
2023-08-10 14:42:15 -04:00
Brooks Townsend dc07ae039b feat(*)!: terminate old resources upon new manifest deploy
Signed-off-by: Brooks Townsend <brooks@cosmonic.com>
2023-08-10 14:42:15 -04:00
Brooks Townsend 7bd158d7a0 feat(linkscaler): add interest in LinkdefSet
Signed-off-by: Brooks Townsend <brooks@cosmonic.com>
2023-08-10 14:42:15 -04:00
Brooks Townsend 3a2201f4b4 chore: bumped to 0.5.0-alpha.3
Signed-off-by: Brooks Townsend <brooks@cosmonic.com>
2023-08-07 13:42:38 -04:00
Brooks Townsend e370ac2b5c feat(linkscaler): add interest in LinkdefSet
Signed-off-by: Brooks Townsend <brooks@cosmonic.com>

address PR feedback, fix tests

Signed-off-by: Brooks Townsend <brooks@cosmonic.com>
2023-08-07 12:32:21 -04:00
Brooks Townsend 2256c8d317 feat(e2e): add status assertions to e2e tests
Signed-off-by: Brooks Townsend <brooks@cosmonic.com>
2023-08-07 12:32:21 -04:00
Brooks Townsend 409de137b3 feat(status): implement basic status, validate name
Signed-off-by: Brooks Townsend <brooks@cosmonic.com>

implemented update status when cmds empty

Signed-off-by: Brooks Townsend <brooks@cosmonic.com>

sample impl for linkdef status

Signed-off-by: Brooks Townsend <brooks@cosmonic.com>

implemented updating status for scalers

Signed-off-by: Brooks Townsend <brooks@cosmonic.com>

cleanup for PR

Signed-off-by: Brooks Townsend <brooks@cosmonic.com>

cleanup

Signed-off-by: Brooks Townsend <brooks@cosmonic.com>
2023-08-07 12:32:21 -04:00
Brooks Townsend 77ff8e51a7 feat(*)!: set semaphore to optional, default max permits
Signed-off-by: Brooks Townsend <brooks@cosmonic.com>
2023-08-01 10:50:19 -04:00
Brooks Townsend 792ba3cf0e fix(*): acquire permit for work after receive msg
Signed-off-by: Brooks Townsend <brooks@cosmonic.com>
2023-08-01 10:50:19 -04:00
Brooks Townsend 14b0532401 fix(multitenant): filter consumers by multitenant status
Signed-off-by: Brooks Townsend <brooks@cosmonic.com>

chore: bump async-nats wasmcloud-control-interface

Signed-off-by: Brooks Townsend <brooks@cosmonic.com>

fix missing struct fields

Signed-off-by: Brooks Townsend <brooks@cosmonic.com>
2023-07-25 14:00:57 -04:00
Brooks Townsend bb75a3af67 fix(*): support deleting all versions of manifest
Signed-off-by: Brooks Townsend <brooks@cosmonic.com>
2023-07-25 14:00:57 -04:00
Brooks Townsend cef49fe7b3 chore: add CODEOWNERS for reviewers
Signed-off-by: Brooks Townsend <brooks@cosmonic.com>
2023-07-25 13:12:16 -04:00
Brooks Townsend 79114198d6 chore(*): bump version to 0.5.0-alpha.1
Signed-off-by: Brooks Townsend <brooks@cosmonic.com>
2023-07-25 13:11:56 -04:00
Brooks Townsend 4a8c2a9542 fix(canary): include context and binaries for canary release
Signed-off-by: Brooks Townsend <brooks@cosmonic.com>
2023-07-24 09:53:28 -04:00
Brooks Townsend 01b1115223 e2e test debug
Signed-off-by: Brooks Townsend <brooks@cosmonic.com>
2023-07-19 13:02:34 -04:00
Brooks Townsend 29d513ae8b fix(e2e): update tests with notion of account prefix
Signed-off-by: Brooks Townsend <brooks@cosmonic.com>

reworked e2e account import/exports

Signed-off-by: Brooks Townsend <brooks@cosmonic.com>

removed unnecessary stream response type

Signed-off-by: Brooks Townsend <brooks@cosmonic.com>

addressed PR concerns

Signed-off-by: Brooks Townsend <brooks@cosmonic.com>
2023-07-19 13:02:34 -04:00
Brooks Townsend 86d34f4a1c feat(multitenant)!: store and query manifests by account
Signed-off-by: Brooks Townsend <brooks@cosmonic.com>
2023-07-19 13:02:34 -04:00
Brooks Townsend df9db63cd9 feat(multitenant): restore consumer multitenant prefix
Signed-off-by: Brooks Townsend <brooks@cosmonic.com>

propagated multitenant prefix around

Signed-off-by: Brooks Townsend <brooks@cosmonic.com>

fixed test compiling

Signed-off-by: Brooks Townsend <brooks@cosmonic.com>
2023-07-18 13:00:15 -04:00
Brooks Townsend be385dd175 feat(multitenant): added e2e test logging
Signed-off-by: Brooks Townsend <brooks@cosmonic.com>
2023-07-18 13:00:15 -04:00
Brooks Townsend 5581d0166e feat(docker): added make build-docker rule
Signed-off-by: Brooks Townsend <brooks@cosmonic.com>
2023-07-18 13:00:15 -04:00
Brooks Townsend bd22320c8b feat(multitenant): added basic e2e test
Signed-off-by: Brooks Townsend <brooks@cosmonic.com>
2023-07-18 13:00:15 -04:00
Brooks Townsend ed2d2851e7 feat(multitenant): added e2e test infra
Signed-off-by: Brooks Townsend <brooks@cosmonic.com>
2023-07-18 13:00:15 -04:00
Brooks Townsend b19def3177 feat(multitenant): use multitenant prefix ctl cmds
Signed-off-by: Brooks Townsend <brooks@cosmonic.com>
2023-07-18 13:00:15 -04:00
Connor Smith c21d0d04f0
Merge pull request #130 from wasmCloud/bump/nats-nkeys
Bumped nats and nkeys to fix update bug
2023-07-14 11:07:28 -06:00
Connor Smith 059773d570 bump to wasmbus-rpc v0.14
Signed-off-by: Connor Smith <connor.smith.256@gmail.com>
2023-07-14 11:01:02 -06:00
Brooks Townsend 9efc4cb7dc pinned control interface to 0.27
Signed-off-by: Brooks Townsend <brooks@cosmonic.com>
2023-07-14 11:27:05 -04:00
Brooks Townsend f8a3e96622 bumped wasmbus-rpc for future compat
Signed-off-by: Brooks Townsend <brooks@cosmonic.com>
2023-07-14 11:16:51 -04:00
Brooks Townsend ef218ec782 chore(test): ignore plural actor events for test
Signed-off-by: Brooks Townsend <brooks@cosmonic.com>

reverted wasmbus to 0.13

Signed-off-by: Brooks Townsend <brooks@cosmonic.com>

changed wasmbus Link to control link

Signed-off-by: Brooks Townsend <brooks@cosmonic.com>

updated for async nats 0.30 release

Signed-off-by: Brooks Townsend <brooks@cosmonic.com>
2023-07-14 09:06:00 -04:00
Brooks Townsend e45a348dd2 chore(nats): bump async_nats to fix jetstream update
bumped nats and nkeys to fix update bug

Signed-off-by: Brooks Townsend <brooks@cosmonic.com>
2023-07-14 09:01:21 -04:00
Taylor Thomas 640515e7fb
Merge pull request #128 from wasmCloud/fix/arm_build
Adds support for building arm image
2023-06-06 09:31:20 -06:00
Taylor Thomas 3c8ebd2fcc Adds support for building arm image
Also makes image building faster by copying the already built binary

Signed-off-by: Taylor Thomas <taylor@cosmonic.com>
2023-06-05 18:38:35 -06:00
Taylor Thomas ec889ac327 Changes rpc dep to not be a star
Signed-off-by: Taylor Thomas <taylor@cosmonic.com>
2023-06-05 15:03:29 -06:00
Taylor Thomas 4cc5e8b05f
Merge pull request #126 from thomastaylor312/fix/bad_globby
Fixes an unclear glob export
2023-06-05 14:18:07 -06:00
Taylor Thomas bef46186ee Fixes an unclear glob export
Signed-off-by: Taylor Thomas <taylor@cosmonic.com>
2023-06-05 13:18:26 -06:00
Taylor Thomas dcdb79bef0
Merge pull request #124 from thomastaylor312/feat/finish_e2e
feat(tests): Finishes e2e tests
2023-06-05 13:08:33 -06:00
Taylor Thomas c6ecbd666f feat(tests): Finishes e2e tests
This also polishes up the README and gets everything ready for the 0.4 release

Closes #53

Signed-off-by: Taylor Thomas <taylor@cosmonic.com>
2023-06-05 12:31:08 -06:00
Taylor Thomas 326a89f648 feat(*): Adds additional complex test and fixes a bug
This adds an additional test as well as an undeploy test. It also
fixes a reaper bug that could occur where some actor instances wouldn't
be cleaned, resulting in incorrect state being used for reconcile decisions

Signed-off-by: Taylor Thomas <taylor@cosmonic.com>
2023-06-01 18:04:42 -06:00
Taylor Thomas 7cee184708
Merge pull request #119 from thomastaylor312/feat/check_provider_existence
feat(scaler): Accounts for other providers running the lattice
2023-05-31 16:18:38 -06:00
Taylor Thomas 193f3992f2 feat(scaler): Accounts for other providers running the lattice
This PR contains a couple small changes I discovered while manually testing,
but the large change is around the scaler algorithm for providers. Now a
provider scaler will consider its job done if _any_ provider matching what
it is expecting is running.

Closes #106

Signed-off-by: Taylor Thomas <taylor@cosmonic.com>
2023-05-31 15:02:55 -06:00
Taylor Thomas d08418408e
Merge pull request #112 from wasmCloud/feat/support-completed-actor-commands
feat(*) Remove jitter with plural events and expected events
2023-05-26 10:19:55 -06:00
Taylor Thomas a0670f87c1 fix(*): Fixes infinite loop message spam
The linkdef was emitting an event every time, which caused infinite spam.
This also makes sure that the expected event notifications only get
delivered to specific scalers rather than everything (which was causing
a backoff for _all_ scalers assigned to a model)

Signed-off-by: Taylor Thomas <taylor@cosmonic.com>
2023-05-26 10:08:58 -06:00
Taylor Thomas 1a9f6c1584 Adds logging output from docker compose containers
Signed-off-by: Taylor Thomas <taylor@cosmonic.com>
2023-05-24 11:50:26 -06:00
Taylor Thomas 7ad5420e65 feat(*): Finishes up work for distributed backoff
This fixes several minor issues and streamlines how backoffs are handled.

First, to get around issues with event serialization when resending the
event as part of a notification, we now re-encode it as a cloudevent;
previously there was a lot of work needed to manage our own serialization.

Second, this moves all backoff code to be internal to the backoff scaler
wrapper. This helps avoid leaky abstractions in the future and made the
code easier to debug

Signed-off-by: Taylor Thomas <taylor@cosmonic.com>
2023-05-24 10:52:11 -06:00
Brooks Townsend 8713b699b2 removed unused and printlns
Signed-off-by: Brooks Townsend <brooks@cosmonic.com>
2023-05-22 18:42:54 -04:00
Brooks Townsend 8c812ed6e9 implemented notifications for the backoffscaler
Co-authored-by: Taylor Thomas <taylor@oftaylor.com>
Signed-off-by: Brooks Townsend <brooks@cosmonic.com>
2023-05-22 18:39:31 -04:00
Brooks Townsend c67b3d0e84 added logic to clear expected events
Signed-off-by: Brooks Townsend <brooks@cosmonic.com>
2023-05-22 15:56:28 -04:00
Brooks Townsend 1d163e3838 fixed unit tests
Signed-off-by: Brooks Townsend <brooks@cosmonic.com>
2023-05-22 14:53:41 -04:00
Brooks Townsend 4d0002c044 addressed original PR comments
Signed-off-by: Brooks Townsend <brooks@cosmonic.com>
2023-05-22 14:53:41 -04:00
Brooks Townsend c036aa0f5b cleaned up unused backoff code, clippy
Signed-off-by: Brooks Townsend <brooks@cosmonic.com>
2023-05-22 14:53:28 -04:00
Brooks Townsend aaf5ba24a7 scalers expect events in response to commands
Signed-off-by: Brooks Townsend <brooks@cosmonic.com>

added remaining events to compare

Signed-off-by: Brooks Townsend <brooks@cosmonic.com>

added notifications for dist-evts

Signed-off-by: Brooks Townsend <brooks@cosmonic.com>

removed testing printlns

Signed-off-by: Brooks Townsend <brooks@cosmonic.com>
2023-05-22 14:53:28 -04:00
Brooks Townsend 6cf11fb8b2 updated devcontainer and install instructions
Signed-off-by: Brooks Townsend <brooks@cosmonic.com>

install things

Signed-off-by: Brooks Townsend <brooks@cosmonic.com>
2023-05-22 14:51:54 -04:00
Brooks Townsend 3a5d5b8935 added actors_stopped and actors_started events
Signed-off-by: Brooks Townsend <brooks@cosmonic.com>

initial working implementation of actors startstop

Signed-off-by: Brooks Townsend <brooks@cosmonic.com>

removed backoffs from Scalers

Signed-off-by: Brooks Townsend <brooks@cosmonic.com>
2023-05-22 14:51:54 -04:00
Taylor Thomas 3e5eb883f3
Merge pull request #118 from thomastaylor312/feat/provider_config
feat(*): Adds support for provider config
2023-05-22 12:22:57 -06:00
Taylor Thomas e9843c8b23 feat(*): Adds support for provider config
Adds support for passing a string encoded config or an optional json
encoded config through to a provider. This includes an additional
example manifest and tests

Closes #102

Signed-off-by: Taylor Thomas <taylor@cosmonic.com>
2023-05-22 12:08:10 -06:00
Taylor Thomas 24d9231387
Merge pull request #116 from thomastaylor312/fix/racy_manifests
ref(server): Refactors storage for manifests
2023-05-18 14:45:54 -06:00
Taylor Thomas 33b8dba08c ref(server): Refactors storage for manifests
This solved the problem where you could really only update one manifest
at a time per lattice. Please note that this has some added e2e tests,
but those are currently commented out as they are waiting on some of our
other work for removing jitter from starting actors (which is landing in
the new host version and in wadm fairly soon)

Signed-off-by: Taylor Thomas <taylor@cosmonic.com>
2023-05-18 14:32:48 -06:00
Brooks Townsend 4209c91ee9 added stale.yml definition
Signed-off-by: Brooks Townsend <brooks@cosmonic.com>
2023-05-17 11:00:18 -04:00
Taylor Thomas affa69c950 Adds additional tests
Signed-off-by: Taylor Thomas <taylor@cosmonic.com>
2023-05-15 14:58:40 -06:00
Taylor Thomas 9a6f938a31
Merge pull request #111 from thomastaylor312/feat/e2e_tests
feat(tests): Adds e2e test scaffolding
2023-05-15 14:58:14 -06:00
Taylor Thomas 1cf0a7ec06 Don't panic on a timeout from inventory
Signed-off-by: Taylor Thomas <taylor@cosmonic.com>
2023-05-15 14:36:44 -06:00
Taylor Thomas be5b7bbd22 feat(tests): Adds e2e test scaffolding
This adds some basic scaffolding and a pipeline for running e2e tests.
It doesn't contain all e2e tests we should write, but has all the
scaffolding and the most simple test put together and running

Signed-off-by: Taylor Thomas <taylor@cosmonic.com>
2023-05-10 09:09:34 -06:00
Brooks Townsend 368a68078b fix/100 allow nats JWT to be file or literal
Signed-off-by: Brooks Townsend <brooks@cosmonic.com>
2023-05-08 16:16:58 -04:00
196 changed files with 32168 additions and 11648 deletions


@@ -19,8 +19,7 @@
},
"extensions": [
"rust-lang.rust-analyzer",
"tamasfe.even-better-toml",
"serayuzgur.crates"
"tamasfe.even-better-toml"
]
}
},


@@ -1,20 +1,5 @@
#!/bin/bash
# INSTALL WASH
cargo install wash-cli --git https://github.com/wasmcloud/wash --branch feat/wadm_0.4_support --force
# INSTALL WADM
ARCH=$(arch)
VERSION=v0.4.0-alpha.1
if [[ $ARCH == "x86_64" ]]; then
ARCH="amd64"
fi
TARBALL=wadm-$VERSION-linux-$ARCH
curl -fLO https://github.com/wasmCloud/wadm/releases/download/$VERSION/$TARBALL.tar.gz
tar -xvf $TARBALL.tar.gz
chmod +x $TARBALL/wadm
mv $TARBALL/wadm /usr/local/cargo/bin/wadm
rm -rf $TARBALL $TARBALL.tar.gz
curl -s https://packagecloud.io/install/repositories/wasmcloud/core/script.deb.sh | sudo bash
sudo apt install wash openssl -y

.envrc Normal file

@@ -0,0 +1,5 @@
if ! has nix_direnv_version || ! nix_direnv_version 3.0.6; then
source_url "https://raw.githubusercontent.com/nix-community/nix-direnv/3.0.6/direnvrc" "sha256-RYcUJaRMf8oF5LznDrlCXbkOQrywm0HDv1VjYGaJGdM="
fi
watch_file rust-toolchain.toml
use flake

.github/CODEOWNERS vendored Normal file

@@ -0,0 +1,2 @@
# wasmCloud wadm maintainers
* @wasmCloud/wadm-maintainers


@@ -0,0 +1,38 @@
name: Install and configure wkg (linux only)
inputs:
wkg-version:
description: version of wkg to install. Should be a valid tag from https://github.com/bytecodealliance/wasm-pkg-tools/releases
default: "v0.6.0"
oci-username:
description: username for oci registry
required: true
oci-password:
description: password for oci registry
required: true
runs:
using: composite
steps:
- name: Download wkg
shell: bash
run: |
curl --fail -L https://github.com/bytecodealliance/wasm-pkg-tools/releases/download/${{ inputs.wkg-version }}/wkg-x86_64-unknown-linux-gnu -o wkg
chmod +x wkg;
echo "$(realpath .)" >> "$GITHUB_PATH";
- name: Generate and set wkg config
shell: bash
env:
WKG_OCI_USERNAME: ${{ inputs.oci-username }}
WKG_OCI_PASSWORD: ${{ inputs.oci-password }}
run: |
cat << EOF > wkg-config.toml
[namespace_registries]
wasmcloud = "wasmcloud.com"
wrpc = "bytecodealliance.org"
wasi = "wasi.dev"
[registry."wasmcloud.com".oci]
auth = { username = "${WKG_OCI_USERNAME}", password = "${WKG_OCI_PASSWORD}" }
EOF
echo "WKG_CONFIG_FILE=$(realpath wkg-config.toml)" >> $GITHUB_ENV
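The config step above can be reproduced outside CI; a sketch with placeholder credentials (in the action the real values come from the `oci-username`/`oci-password` inputs):

```shell
# Reproduce the wkg-config.toml the step above writes; credentials are placeholders
WKG_OCI_USERNAME="example-user"
WKG_OCI_PASSWORD="example-pass"
cat << EOF > wkg-config.toml
[namespace_registries]
wasmcloud = "wasmcloud.com"
wrpc = "bytecodealliance.org"
wasi = "wasi.dev"

[registry."wasmcloud.com".oci]
auth = { username = "${WKG_OCI_USERNAME}", password = "${WKG_OCI_PASSWORD}" }
EOF
grep -c '^\[' wkg-config.toml   # two TOML tables expected
```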

.github/dependabot.yml vendored Normal file

@ -0,0 +1,16 @@
version: 2
updates:
- package-ecosystem: "cargo"
directory: "/"
schedule:
interval: "weekly"
day: "monday"
time: "09:00"
timezone: "America/New_York"
- package-ecosystem: "github-actions"
directory: "/"
schedule:
interval: "weekly"
day: "monday"
time: "09:00"
timezone: "America/New_York"

.github/release.yml vendored Normal file

@ -0,0 +1,6 @@
# .github/release.yml
changelog:
exclude:
authors:
- dependabot

.github/stale.yml vendored Normal file

@ -0,0 +1,19 @@
# Number of days of inactivity before an issue becomes stale
daysUntilStale: 60
# Number of days of inactivity before a stale issue is closed
daysUntilClose: 7
# Issues with these labels will never be considered stale
exemptLabels:
- pinned # Pinned issues should stick around
- security # Security issues need to be resolved
- roadmap # Issue is captured on the wasmCloud roadmap and won't be lost
# Label to use when marking an issue as stale
staleLabel: stale
# Comment to post when marking an issue as stale. Set to `false` to disable
markComment: >
This issue has been automatically marked as stale because it has not had
recent activity. It will be closed if no further activity occurs. If this
has been closed too eagerly, please feel free to tag a maintainer so we can
keep working on the issue. Thank you for contributing to wasmCloud!
# Comment to post when closing a stale issue. Set to `false` to disable
closeComment: false

.github/workflows/chart.yml vendored Normal file

@ -0,0 +1,111 @@
name: chart
env:
HELM_VERSION: v3.14.0
CHART_TESTING_NAMESPACE: chart-testing
on:
push:
tags:
- 'chart-v[0-9].[0-9]+.[0-9]+'
pull_request:
paths:
- 'charts/**'
- '.github/workflows/chart.yml'
permissions:
contents: read
jobs:
validate:
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 0
- name: Fetch main branch for chart-testing
run: |
git fetch origin main:main
- name: Set up Helm
uses: azure/setup-helm@b9e51907a09c216f16ebe8536097933489208112 # v4.3.0
with:
version: ${{ env.HELM_VERSION }}
# Used by helm chart-testing below
- name: Set up Python
uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
with:
python-version: '3.12.2'
- name: Set up chart-testing
uses: helm/chart-testing-action@0d28d3144d3a25ea2cc349d6e59901c4ff469b3b # v2.7.0
with:
version: v3.10.1
yamllint_version: 1.35.1
yamale_version: 5.0.0
- name: Run chart-testing (lint)
run: |
ct lint --config charts/wadm/ct.yaml
- name: Create kind cluster
uses: helm/kind-action@a1b0e391336a6ee6713a0583f8c6240d70863de3 # v1.12.0
with:
version: "v0.22.0"
- name: Install nats in the test cluster
run: |
helm repo add nats https://nats-io.github.io/k8s/helm/charts/
helm repo update
helm install nats nats/nats -f charts/wadm/ci/nats.yaml --namespace ${{ env.CHART_TESTING_NAMESPACE }} --create-namespace
- name: Run chart-testing install / same namespace
run: |
ct install --config charts/wadm/ct.yaml --namespace ${{ env.CHART_TESTING_NAMESPACE }}
- name: Run chart-testing install / across namespaces
run: |
ct install --config charts/wadm/ct.yaml --helm-extra-set-args "--set=wadm.config.nats.server=nats://nats-headless.${{ env.CHART_TESTING_NAMESPACE }}.svc.cluster.local"
publish:
if: ${{ startsWith(github.ref, 'refs/tags/chart-v') }}
runs-on: ubuntu-22.04
needs: validate
permissions:
contents: read
packages: write
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Set up Helm
uses: azure/setup-helm@b9e51907a09c216f16ebe8536097933489208112 # v4.3.0
with:
version: ${{ env.HELM_VERSION }}
- name: Package
run: |
helm package charts/wadm -d .helm-charts
- name: Login to GHCR
uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1 # v3.5.0
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Lowercase the organization name for ghcr.io
run: |
echo "GHCR_REPO_NAMESPACE=${GITHUB_REPOSITORY_OWNER,,}" >>${GITHUB_ENV}
- name: Publish
run: |
for chart in .helm-charts/*; do
if [ -z "${chart:-}" ]; then
break
fi
helm push "${chart}" "oci://ghcr.io/${{ env.GHCR_REPO_NAMESPACE }}/charts"
done

.github/workflows/e2e.yml vendored Normal file

@ -0,0 +1,56 @@
name: e2e Tests Wadm
on:
pull_request:
branches:
- main
permissions:
contents: read
jobs:
test:
name: e2e
runs-on: ubuntu-22.04
strategy:
fail-fast: false
matrix:
test: [e2e_multiple_hosts, e2e_upgrades, e2e_shared]
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Install latest Rust stable toolchain
uses: dtolnay/rust-toolchain@1ff72ee08e3cb84d84adba594e0a297990fc1ed3 # stable
with:
toolchain: stable
components: clippy, rustfmt
# Cache: rust
- uses: Swatinem/rust-cache@98c8021b550208e191a6a3145459bfc9fb29c4c0 # v2.8.0
with:
key: 'ubuntu-22.04-rust-cache'
# If the test uses a docker compose file, pre-emptively pull images used in docker compose
- name: Pull images for test ${{ matrix.test }}
shell: bash
run: |
export DOCKER_COMPOSE_FILE=tests/docker-compose-${{ matrix.test }}.yaml;
[[ -f "$DOCKER_COMPOSE_FILE" ]] && docker compose -f $DOCKER_COMPOSE_FILE pull;
# Run e2e tests in a matrix for efficiency
- name: Run tests ${{ matrix.test }}
id: test
env:
WADM_E2E_TEST: ${{ matrix.test }}
run: make test-individual-e2e
# if the previous step fails, upload logs
- name: Upload logs for debugging
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
if: ${{ failure() && steps.test.outcome == 'failure' }}
with:
name: e2e-logs-${{ matrix.test }}
path: ./tests/e2e_log/*
# Be nice and only retain the logs for 7 days
retention-days: 7
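The compose-file guard in the "Pull images" step can be exercised on its own; a sketch where the compose file name is illustrative (and absent, so the skip branch runs):

```shell
# Guard from the "Pull images" step: only pull when a compose file exists for the test
DOCKER_COMPOSE_FILE=tests/docker-compose-e2e_multiple_hosts.yaml
if [ -f "$DOCKER_COMPOSE_FILE" ]; then
  docker compose -f "$DOCKER_COMPOSE_FILE" pull
else
  echo "no compose file for $DOCKER_COMPOSE_FILE; skipping pull"
fi
```

Using `if`/`else` rather than a bare `[[ -f ... ]] && ...` also keeps the step's exit status zero when the file is missing.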


@ -3,219 +3,280 @@ on:
push:
branches:
- main
tags:
- "v*" # Push events to matching v*, i.e. v1.0, v20.15.10
tags:
- 'v*'
- 'types-v*'
- 'client-v*'
workflow_dispatch: # Allow manual creation of artifacts without a release
permissions:
contents: read
defaults:
run:
shell: bash
jobs:
build:
name: build release assets
runs-on: ${{ matrix.config.os }}
runs-on: ${{ matrix.config.runnerOs }}
outputs:
version_output: ${{ steps.version_output.outputs.version }}
strategy:
matrix:
config:
# NOTE: We are building on an older version of ubuntu because of libc compatibility
# issues. Namely, if we build on a new version of libc, it isn't backwards compatible with
# old versions. But if we build on the old version, it is compatible with the newer
# versions running in ubuntu 22 and its ilk
- {
os: "ubuntu-20.04",
arch: "amd64",
extension: "",
targetPath: "target/release/",
runnerOs: 'ubuntu-latest',
buildCommand: 'cargo zigbuild',
target: 'x86_64-unknown-linux-musl',
uploadArtifactSuffix: 'linux-amd64',
buildOutputPath: 'target/x86_64-unknown-linux-musl/release/wadm',
}
- {
os: "ubuntu-20.04",
arch: "aarch64",
extension: "",
targetPath: "target/aarch64-unknown-linux-gnu/release/",
runnerOs: 'ubuntu-latest',
buildCommand: 'cargo zigbuild',
target: 'aarch64-unknown-linux-musl',
uploadArtifactSuffix: 'linux-aarch64',
buildOutputPath: 'target/aarch64-unknown-linux-musl/release/wadm',
}
- {
os: "macos-latest",
arch: "amd64",
extension: "",
targetPath: "target/release/",
runnerOs: 'macos-14',
buildCommand: 'cargo zigbuild',
target: 'x86_64-apple-darwin',
uploadArtifactSuffix: 'macos-amd64',
buildOutputPath: 'target/x86_64-apple-darwin/release/wadm',
}
- {
os: "windows-latest",
arch: "amd64",
extension: ".exe",
targetPath: "target/release/",
runnerOs: 'macos-14',
buildCommand: 'cargo zigbuild',
target: 'aarch64-apple-darwin',
uploadArtifactSuffix: 'macos-aarch64',
buildOutputPath: 'target/aarch64-apple-darwin/release/wadm',
}
- {
os: "macos-latest",
arch: "aarch64",
extension: "",
targetPath: "target/aarch64-apple-darwin/release/",
runnerOs: 'windows-latest',
buildCommand: 'cargo build',
target: 'x86_64-pc-windows-msvc',
uploadArtifactSuffix: 'windows-amd64',
buildOutputPath: 'target/x86_64-pc-windows-msvc/release/wadm.exe',
}
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: set the release version (tag)
if: startsWith(github.ref, 'refs/tags/v')
shell: bash
run: echo "RELEASE_VERSION=${GITHUB_REF/refs\/tags\//}" >> $GITHUB_ENV
if: ${{ startsWith(github.ref, 'refs/tags/v') }}
run: |
echo "RELEASE_VERSION=${GITHUB_REF/refs\/tags\//}" >> $GITHUB_ENV
- name: set the release version (main)
if: github.ref == 'refs/heads/main'
shell: bash
run: echo "RELEASE_VERSION=canary" >> $GITHUB_ENV
if: ${{ github.ref == 'refs/heads/main' }}
run: |
echo "RELEASE_VERSION=canary" >> $GITHUB_ENV
- name: Output Version
id: version_output
run: echo "version=$RELEASE_VERSION" >> $GITHUB_OUTPUT
- name: lowercase the runner OS name
shell: bash
- name: Install Zig
uses: mlugg/setup-zig@8d6198c65fb0feaa111df26e6b467fea8345e46f # v2.0.5
with:
version: 0.13.0
- name: Install latest Rust stable toolchain
uses: dtolnay/rust-toolchain@1ff72ee08e3cb84d84adba594e0a297990fc1ed3 # stable
with:
toolchain: stable
components: clippy, rustfmt
target: ${{ matrix.config.target }}
- name: Install cargo zigbuild
uses: taiki-e/install-action@2c73a741d1544cc346e9b0af11868feba03eb69d # v2.58.9
with:
tool: cargo-zigbuild
- name: Build wadm
run: |
OS=$(echo "${{ runner.os }}" | tr '[:upper:]' '[:lower:]')
echo "RUNNER_OS=$OS" >> $GITHUB_ENV
${{ matrix.config.buildCommand }} --release --bin wadm --target ${{ matrix.config.target }}
- name: Install latest Rust stable toolchain
uses: dtolnay/rust-toolchain@stable
if: matrix.config.arch != 'aarch64'
- uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
toolchain: stable
components: clippy, rustfmt
- name: setup for cross-compile builds
if: matrix.config.arch == 'aarch64' && matrix.config.os == 'ubuntu-20.04'
run: |
sudo apt-get update
sudo apt install gcc-aarch64-linux-gnu g++-aarch64-linux-gnu
rustup toolchain install stable-aarch64-unknown-linux-gnu
rustup target add --toolchain stable-aarch64-unknown-linux-gnu aarch64-unknown-linux-gnu
echo "CARGO_TARGET_AARCH64_UNKNOWN_LINUX_GNU_LINKER=aarch64-linux-gnu-gcc" >> $GITHUB_ENV
echo "CC_aarch64_unknown_linux_gnu=aarch64-linux-gnu-gcc" >> $GITHUB_ENV
echo "CXX_aarch64_unknown_linux_gnu=aarch64-linux-gnu-g++" >> $GITHUB_ENV
- name: Install latest Rust stable toolchain
uses: dtolnay/rust-toolchain@stable
if: matrix.config.arch == 'aarch64' && matrix.config.os == 'macos-latest'
with:
toolchain: stable
components: clippy, rustfmt
target: aarch64-apple-darwin
- name: Install latest Rust stable toolchain
uses: dtolnay/rust-toolchain@stable
if: matrix.config.arch == 'aarch64' && matrix.config.os == 'ubuntu-20.04'
with:
toolchain: stable
components: clippy, rustfmt
target: aarch64-unknown-linux-gnu
- name: build release
if: matrix.config.arch != 'aarch64'
run: "cargo build --release --bin wadm --features cli"
- name: build release
if: matrix.config.arch == 'aarch64' && matrix.config.os == 'macos-latest'
run: "cargo build --release --bin wadm --features cli --target aarch64-apple-darwin"
- name: build release
if: matrix.config.arch == 'aarch64' && matrix.config.os == 'ubuntu-20.04'
run: "cargo build --release --bin wadm --features cli --target aarch64-unknown-linux-gnu"
- uses: actions/upload-artifact@v3
with:
name: wadm-${{ env.RELEASE_VERSION }}-${{ env.RUNNER_OS }}-${{ matrix.config.arch }}
name: wadm-${{ env.RELEASE_VERSION }}-${{ matrix.config.uploadArtifactSuffix }}
if-no-files-found: error
path: |
${{ matrix.config.targetPath }}wadm${{ matrix.config.extension }}
${{ matrix.config.buildOutputPath }}
publish:
if: ${{ startsWith(github.ref, 'refs/tags/v') }}
name: publish release assets
runs-on: ubuntu-20.04
runs-on: ubuntu-latest
needs: build
if: startsWith(github.ref, 'refs/tags/v')
permissions:
contents: write
env:
RELEASE_VERSION: ${{ needs.build.outputs.version_output }}
steps:
- name: download release assets
uses: actions/download-artifact@v3
- name: Generate Checksums
run: |
for dir in */; do
cd "$dir" || continue
sum=$(sha256sum * | awk '{ print $1 }')
echo "$dir:$sum" >> checksums-${{ env.RELEASE_VERSION }}.txt
cd ..
done
- name: Package Binaries
run:
for dir in */; do tar -czvf "${dir%/}.tar.gz" "$dir"; done
- name: Publish to GHCR
uses: softprops/action-gh-release@v1
with:
token: ${{ secrets.WADM_GITHUB_TOKEN }}
prerelease: false
draft: false
files: |
checksums-${{ env.RELEASE_VERSION }}.txt
wadm-${{ env.RELEASE_VERSION }}-linux-aarch64.tar.gz
wadm-${{ env.RELEASE_VERSION }}-linux-amd64.tar.gz
wadm-${{ env.RELEASE_VERSION }}-macos-aarch64.tar.gz
wadm-${{ env.RELEASE_VERSION }}-macos-amd64.tar.gz
wadm-${{ env.RELEASE_VERSION }}-windows-amd64.tar.gz
- name: Download release assets
uses: actions/download-artifact@634f93cb2916e3fdff6788551b99b062d0335ce0 # v5.0.0
- name: Prepare release
run: |
for dir in */; do
test -d "$dir" || continue
tarball="${dir%/}.tar.gz"
tar -czvf "${tarball}" "$dir"
sha256sum "${tarball}" >> SHA256SUMS
done
- name: Create github release
uses: softprops/action-gh-release@72f2c25fcb47643c292f7107632f7a47c1df5cd8 # v2.3.2
with:
token: ${{ secrets.GITHUB_TOKEN }}
prerelease: false
draft: false
files: |
SHA256SUMS
wadm-${{ env.RELEASE_VERSION }}-linux-aarch64.tar.gz
wadm-${{ env.RELEASE_VERSION }}-linux-amd64.tar.gz
wadm-${{ env.RELEASE_VERSION }}-macos-aarch64.tar.gz
wadm-${{ env.RELEASE_VERSION }}-macos-amd64.tar.gz
wadm-${{ env.RELEASE_VERSION }}-windows-amd64.tar.gz
crate:
if: ${{ startsWith(github.ref, 'refs/tags/v') || startsWith(github.ref, 'refs/tags/types-v') || startsWith(github.ref, 'refs/tags/client-v') }}
name: Publish crate
runs-on: ubuntu-latest
if: startsWith(github.ref, 'refs/tags/v')
needs: build
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Install latest Rust stable toolchain
uses: dtolnay/rust-toolchain@stable
uses: dtolnay/rust-toolchain@1ff72ee08e3cb84d84adba594e0a297990fc1ed3 # stable
with:
toolchain: stable
- name: Cargo login
run: cargo login ${{ secrets.CRATES_TOKEN }}
shell: bash
- name: Cargo publish
run: cargo publish
shell: bash
- name: Cargo login
run: |
cargo login ${{ secrets.CRATES_TOKEN }}
- name: Cargo publish wadm-types
if: ${{ startsWith(github.ref, 'refs/tags/types-v') }}
working-directory: ./crates/wadm-types
run: |
cargo publish
- name: Cargo publish wadm lib
if: ${{ startsWith(github.ref, 'refs/tags/v') }}
working-directory: ./crates/wadm
run: |
cargo publish
- name: Cargo publish wadm-client
if: ${{ startsWith(github.ref, 'refs/tags/client-v') }}
working-directory: ./crates/wadm-client
run: |
cargo publish
docker-image:
name: Build and push docker images
runs-on: ubuntu-latest
needs: build
permissions:
contents: read
packages: write
env:
RELEASE_VERSION: ${{ needs.build.outputs.version_output }}
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Set up QEMU
uses: docker/setup-qemu-action@v2
uses: docker/setup-qemu-action@29109295f81e9208d7d86ff1c6c12d2833863392 # v3.6.0
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435 # v3.11.1
- uses: actions/download-artifact@634f93cb2916e3fdff6788551b99b062d0335ce0 # v5.0.0
with:
path: ./artifacts
pattern: '*linux*'
- name: Prepare container artifacts
working-directory: ./artifacts
run: |
for dir in */; do
name="${dir%/}"
mv "${name}/wadm" wadm
chmod +x wadm
rmdir "${name}"
mv wadm "${name}"
done
- name: Login to GitHub Container Registry
uses: docker/login-action@v2
uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1 # v3.5.0
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.WADM_GITHUB_TOKEN }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: lowercase repository owner
run: |
echo "OWNER=${GITHUB_REPOSITORY_OWNER,,}" >>$GITHUB_ENV
- name: Set the formatted release version for the docker tag
if: ${{ startsWith(github.ref, 'refs/tags/v') }}
run: |
echo "RELEASE_VERSION_DOCKER_TAG=${RELEASE_VERSION#v}" >> $GITHUB_ENV
- name: Build and push (tag)
uses: docker/build-push-action@v3
if: startsWith(github.ref, 'refs/tags/v')
uses: docker/build-push-action@263435318d21b8e681c14492fe198d362a7d2c83 # v6.18.0
if: ${{ startsWith(github.ref, 'refs/tags/v') }}
with:
push: true
tags: ghcr.io/${{ env.OWNER }}/wadm:latest,ghcr.io/${{ env.OWNER }}/wadm:${{ env.RELEASE_VERSION }}
platforms: linux/amd64,linux/arm64
context: ./
build-args: |
BIN_ARM64=./artifacts/wadm-${{ env.RELEASE_VERSION }}-linux-aarch64
BIN_AMD64=./artifacts/wadm-${{ env.RELEASE_VERSION }}-linux-amd64
tags: |
ghcr.io/${{ env.OWNER }}/wadm:latest
ghcr.io/${{ env.OWNER }}/wadm:${{ env.RELEASE_VERSION }},
ghcr.io/${{ env.OWNER }}/wadm:${{ env.RELEASE_VERSION_DOCKER_TAG }}
- name: Build and push wolfi (tag)
uses: docker/build-push-action@263435318d21b8e681c14492fe198d362a7d2c83 # v6.18.0
if: ${{ startsWith(github.ref, 'refs/tags/v') }}
with:
push: true
platforms: linux/amd64,linux/arm64
context: ./
file: ./Dockerfile.wolfi
build-args: |
BIN_ARM64=./artifacts/wadm-${{ env.RELEASE_VERSION }}-linux-aarch64
BIN_AMD64=./artifacts/wadm-${{ env.RELEASE_VERSION }}-linux-amd64
tags: |
ghcr.io/${{ env.OWNER }}/wadm:latest-wolfi
ghcr.io/${{ env.OWNER }}/wadm:${{ env.RELEASE_VERSION }}-wolfi
ghcr.io/${{ env.OWNER }}/wadm:${{ env.RELEASE_VERSION_DOCKER_TAG }}-wolfi
- name: Build and push (main)
uses: docker/build-push-action@v3
if: github.ref == 'refs/heads/main'
uses: docker/build-push-action@263435318d21b8e681c14492fe198d362a7d2c83 # v6.18.0
if: ${{ github.ref == 'refs/heads/main' }}
with:
push: true
platforms: linux/amd64,linux/arm64
context: ./
build-args: |
BIN_ARM64=./artifacts/wadm-${{ env.RELEASE_VERSION }}-linux-aarch64
BIN_AMD64=./artifacts/wadm-${{ env.RELEASE_VERSION }}-linux-amd64
tags: ghcr.io/${{ env.OWNER }}/wadm:canary
- name: Build and push (main)
uses: docker/build-push-action@263435318d21b8e681c14492fe198d362a7d2c83 # v6.18.0
if: ${{ github.ref == 'refs/heads/main' }}
with:
push: true
platforms: linux/amd64,linux/arm64
context: ./
file: ./Dockerfile.wolfi
build-args: |
BIN_ARM64=./artifacts/wadm-${{ env.RELEASE_VERSION }}-linux-aarch64
BIN_AMD64=./artifacts/wadm-${{ env.RELEASE_VERSION }}-linux-amd64
tags: ghcr.io/${{ env.OWNER }}/wadm:canary-wolfi
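The "Prepare release" loop earlier in this workflow tars each downloaded artifact directory and appends its checksum to `SHA256SUMS`; a local sketch with a placeholder binary and an invented artifact name:

```shell
# Mimic the "Prepare release" step: package each artifact dir and record checksums
mkdir -p wadm-canary-linux-amd64
echo "placeholder binary" > wadm-canary-linux-amd64/wadm
for dir in wadm-*/; do
  [ -d "$dir" ] || continue
  tarball="${dir%/}.tar.gz"
  tar -czf "$tarball" "$dir"
  sha256sum "$tarball" >> SHA256SUMS
done
sha256sum -c SHA256SUMS   # prints "wadm-canary-linux-amd64.tar.gz: OK"
```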

.github/workflows/scorecard.yml vendored Normal file

@ -0,0 +1,73 @@
# This workflow uses actions that are not certified by GitHub. They are provided
# by a third-party and are governed by separate terms of service, privacy
# policy, and support documentation.
name: Scorecard supply-chain security
on:
# For Branch-Protection check. Only the default branch is supported. See
# https://github.com/ossf/scorecard/blob/main/docs/checks.md#branch-protection
branch_protection_rule:
# To guarantee Maintained check is occasionally updated. See
# https://github.com/ossf/scorecard/blob/main/docs/checks.md#maintained
schedule:
- cron: '28 13 * * 3'
push:
branches: [ "main" ]
# Declare default permissions as read only.
permissions: read-all
jobs:
analysis:
name: Scorecard analysis
runs-on: ubuntu-latest
permissions:
# Needed to upload the results to code-scanning dashboard.
security-events: write
# Needed to publish results and get a badge (see publish_results below).
id-token: write
# Uncomment the permissions below if installing in a private repository.
# contents: read
# actions: read
steps:
- name: "Checkout code"
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
persist-credentials: false
- name: "Run analysis"
uses: ossf/scorecard-action@05b42c624433fc40578a4040d5cf5e36ddca8cde # v2.4.2
with:
results_file: results.sarif
results_format: sarif
# (Optional) "write" PAT token. Uncomment the `repo_token` line below if:
# - you want to enable the Branch-Protection check on a *public* repository, or
# - you are installing Scorecard on a *private* repository
# To create the PAT, follow the steps in https://github.com/ossf/scorecard-action?tab=readme-ov-file#authentication-with-fine-grained-pat-optional.
# repo_token: ${{ secrets.SCORECARD_TOKEN }}
# Public repositories:
# - Publish results to OpenSSF REST API for easy access by consumers
# - Allows the repository to include the Scorecard badge.
# - See https://github.com/ossf/scorecard-action#publishing-results.
# For private repositories:
# - `publish_results` will always be set to `false`, regardless
# of the value entered here.
publish_results: true
# Upload the results as artifacts (optional). Commenting out will disable uploads of run results in SARIF
# format to the repository Actions tab.
- name: "Upload artifact"
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v3.pre.node20
with:
name: SARIF file
path: results.sarif
retention-days: 5
# Upload the results to GitHub's code scanning dashboard (optional).
# Commenting out will disable upload of results to your repo's Code Scanning dashboard
- name: "Upload to code-scanning"
uses: github/codeql-action/upload-sarif@d6bbdef45e766d081b84a2def353b0055f728d3e # v3.29.3
with:
sarif_file: results.sarif


@ -5,6 +5,9 @@ on:
branches:
- main
permissions:
contents: read
jobs:
test:
name: Test
@ -12,32 +15,55 @@ jobs:
strategy:
matrix:
os: [ubuntu-22.04]
nats_version: [2.9.15]
nats_version: [2.10.22]
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Install latest Rust stable toolchain
uses: actions-rs/toolchain@v1
uses: dtolnay/rust-toolchain@1ff72ee08e3cb84d84adba594e0a297990fc1ed3 # stable
with:
toolchain: stable
default: true
components: clippy, rustfmt
# Cache: rust
- uses: Swatinem/rust-cache@v2
- uses: Swatinem/rust-cache@98c8021b550208e191a6a3145459bfc9fb29c4c0 # v2.8.0
with:
key: "${{ matrix.os }}-rust-cache"
key: '${{ matrix.os }}-rust-cache'
- name: Install wash
uses: wasmCloud/common-actions/install-wash@main
- name: Check that Wadm JSON Schema is up-to-date
shell: bash
run: |
cargo run --bin wadm-schema
if ! git diff --exit-code > /dev/null; then
echo 'Wadm JSON Schema is out of date. Please run `cargo run --bin wadm-schema` and commit the changes.'
exit 1
fi
- name: install wash
uses: taiki-e/install-action@2c73a741d1544cc346e9b0af11868feba03eb69d # v2.58.9
with:
tool: wash@0.38.0
# GH Actions doesn't currently support passing args to service containers and there is no way
# to use an environment variable to turn on jetstream for nats, so we manually start it here
- name: Start NATS
run: docker run --rm -d --name wadm-test -p 127.0.0.1:4222:4222 nats:${{ matrix.nats_version }} -js
- name: Build
run: |
cargo build --all-features --all-targets --workspace
# Make sure the wadm crate works well with feature combinations
# The above command builds the workspace and tests with no features
- name: Check wadm crate with features
run: |
cargo check -p wadm --no-default-features
cargo check -p wadm --features cli
cargo check -p wadm --features http_admin
cargo check -p wadm --features cli,http_admin
# Run all tests
- name: Run tests
run: |
cargo test -- --nocapture
cargo test --workspace -- --nocapture

.github/workflows/wit-wadm.yaml vendored Normal file

@ -0,0 +1,47 @@
name: wit-wasmcloud-wadm-publish
on:
push:
tags:
- "wit-wasmcloud-wadm-v*"
permissions:
contents: read
jobs:
build:
runs-on: ubuntu-latest
permissions:
contents: write
packages: write
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
sparse-checkout: |
wit
.github
- name: Extract tag context
id: ctx
run: |
version=${GITHUB_REF_NAME#wit-wasmcloud-wadm-v}
echo "version=${version}" >> "$GITHUB_OUTPUT"
echo "tarball=wit-wasmcloud-wadm-${version}.tar.gz" >> "$GITHUB_OUTPUT"
echo "version is ${version}"
- uses: ./.github/actions/configure-wkg
with:
oci-username: ${{ github.repository_owner }}
oci-password: ${{ secrets.GITHUB_TOKEN }}
- name: Build
run: wkg wit build --wit-dir wit/wadm -o package.wasm
- name: Push version-tagged WebAssembly binary to GHCR
run: wkg publish package.wasm
- name: Package tarball for release
run: |
mkdir -p release/wit
cp wit/wadm/*.wit release/wit/
tar cvzf ${{ steps.ctx.outputs.tarball }} -C release wit
- name: Release
uses: softprops/action-gh-release@72f2c25fcb47643c292f7107632f7a47c1df5cd8 # v2.3.2
with:
files: ${{ steps.ctx.outputs.tarball }}
make_latest: "false"
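The "Extract tag context" step is plain shell parameter expansion; a sketch with an invented tag value:

```shell
# Parse the wit tag the same way the "Extract tag context" step does (tag invented)
GITHUB_REF_NAME="wit-wasmcloud-wadm-v0.2.0"
version=${GITHUB_REF_NAME#wit-wasmcloud-wadm-v}
tarball="wit-wasmcloud-wadm-${version}.tar.gz"
echo "version=${version}"    # version=0.2.0
echo "tarball=${tarball}"    # tarball=wit-wasmcloud-wadm-0.2.0.tar.gz
```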

.gitignore vendored

@ -1 +1,14 @@
/target
tests/e2e_log/
*.dump
# Thanks MacOS
.DS_Store
# Ignore IDE specific files
.idea/
.vscode/
.direnv/
result

Cargo.lock generated

File diff suppressed because it is too large


@ -1,56 +1,118 @@
[package]
name = "wadm"
name = "wadm-cli"
description = "wasmCloud Application Deployment Manager: A tool for running Wasm applications in wasmCloud"
version = "0.4.0-alpha.2"
version.workspace = true
edition = "2021"
maintainers = ["wasmCloud Team"]
authors = ["wasmCloud Team"]
keywords = ["webassembly", "wasmcloud", "wadm"]
license = "Apache-2.0"
readme = "README.md"
repository = "https://github.com/wasmcloud/wadm"
default-run = "wadm"
[workspace.package]
version = "0.21.0"
[features]
default = []
cli = ["clap", "tracing-opentelemetry", "tracing-subscriber", "opentelemetry", "opentelemetry-otlp", "atty"]
# internal feature for e2e tests
_e2e_tests = []
[workspace]
members = ["crates/*"]
[dependencies]
anyhow = { workspace = true }
clap = { workspace = true, features = ["derive", "cargo", "env"] }
# One version back to avoid clashes with 0.10 of otlp
opentelemetry = { workspace = true, features = ["rt-tokio"] }
# 0.10 to avoid protoc dep
opentelemetry-otlp = { workspace = true, features = [
"http-proto",
"reqwest-client",
] }
schemars = { workspace = true }
serde_json = { workspace = true }
tokio = { workspace = true, features = ["full"] }
tracing = { workspace = true, features = ["log"] }
tracing-opentelemetry = { workspace = true }
tracing-subscriber = { workspace = true, features = ["env-filter", "json"] }
wadm = { workspace = true, features = ["cli", "http_admin"] }
wadm-types = { workspace = true }
[workspace.dependencies]
anyhow = "1"
async-nats = "0.29"
async-nats = "0.39"
async-trait = "0.1"
atty = { version = "0.2", optional = true }
bytes = "1"
chrono = "0.4"
clap = { version = "4", features = ["derive", "cargo", "env"], optional = true }
cloudevents-sdk = "0.7"
clap = { version = "4", features = ["derive", "cargo", "env"] }
cloudevents-sdk = "0.8"
futures = "0.3"
indexmap = { version = "1", features = ["serde-1"] }
http = { version = "1", default-features = false }
http-body-util = { version = "0.1", default-features = false }
hyper = { version = "1", default-features = false }
hyper-util = { version = "0.1", default-features = false }
indexmap = { version = "2", features = ["serde"] }
jsonschema = "0.29"
lazy_static = "1"
nkeys = "0.2.0"
nkeys = "0.4.5"
# One version back to avoid clashes with 0.10 of otlp
opentelemetry = { version = "0.17", features = ["rt-tokio"], optional = true }
opentelemetry = { version = "0.17", features = ["rt-tokio"] }
# 0.10 to avoid protoc dep
opentelemetry-otlp = { version = "0.10", features = ["http-proto", "reqwest-client"], optional = true }
# TODO: Actually leverage prometheus
prometheus = { version = "0.13", optional = true }
rand = { version = "0.8", features = ["small_rng"] }
opentelemetry-otlp = { version = "0.10", features = [
"http-proto",
"reqwest-client",
] }
rand = { version = "0.9", features = ["small_rng"] }
# NOTE(thomastaylor312): Pinning this temporarily to 1.10 due to transitive dependency with oci
# crates that are pinned to 1.10
regex = "~1.10"
schemars = "0.8"
semver = { version = "1.0.25", features = ["serde"] }
serde = "1"
serde_json = "1"
serde_yaml = "0.9"
sha2 = "0.10.2"
thiserror = "1"
tokio = { version = "1", features = ["full"] }
sha2 = "0.10.9"
thiserror = "2"
tokio = { version = "1", default-features = false }
tracing = { version = "0.1", features = ["log"] }
tracing-futures = "0.2"
tracing-opentelemetry = { version = "0.17", optional = true }
tracing-subscriber = { version = "0.3.7", features = ["env-filter", "json"], optional = true }
tracing-opentelemetry = { version = "0.17" }
tracing-subscriber = { version = "0.3.7", features = ["env-filter", "json"] }
ulid = { version = "1", features = ["serde"] }
utoipa = "5"
uuid = "1"
wasmcloud-control-interface = "0.25"
semver = { version = "1.0.16", features = [ "serde" ] }
wadm = { version = "0.21", path = "./crates/wadm" }
wadm-client = { version = "0.10", path = "./crates/wadm-client" }
wadm-types = { version = "0.8", path = "./crates/wadm-types" }
wasmcloud-control-interface = "2.4.0"
wasmcloud-secrets-types = "0.5.0"
wit-bindgen-wrpc = { version = "0.9", default-features = false }
wit-bindgen = { version = "0.36.0", default-features = false }
[dev-dependencies]
serial_test = "1"
async-nats = { workspace = true }
chrono = { workspace = true }
futures = { workspace = true }
serde = { workspace = true }
serde_json = { workspace = true }
serde_yaml = { workspace = true }
serial_test = "3"
wadm-client = { workspace = true }
wadm-types = { workspace = true }
wasmcloud-control-interface = { workspace = true }
testcontainers = "0.25"
[build-dependencies]
schemars = { workspace = true }
serde_json = { workspace = true }
wadm-types = { workspace = true }
[[bin]]
name = "wadm"
path = "bin/main.rs"
required-features = ["cli"]
path = "src/main.rs"
[[bin]]
name = "wadm-schema"
path = "src/schema.rs"


@ -1,26 +1,26 @@
FROM rust:1.68.0-slim as builder
FROM debian:bullseye-slim AS base
WORKDIR /usr/src/wadm
RUN DEBIAN_FRONTEND=noninteractive apt-get update && apt-get install -y ca-certificates
COPY . /usr/src/wadm/
FROM base AS base-amd64
ARG BIN_AMD64
ARG BIN=$BIN_AMD64
RUN rustup target add x86_64-unknown-linux-musl
RUN apt update && apt install -y musl-tools musl-dev
RUN update-ca-certificates
FROM base AS base-arm64
ARG BIN_ARM64
ARG BIN=$BIN_ARM64
RUN cargo build --bin wadm --features cli --target x86_64-unknown-linux-musl --release
FROM alpine:3.16.0 AS runtime
FROM base-$TARGETARCH
ARG USERNAME=wadm
ARG USER_UID=1000
ARG USER_GID=$USER_UID
RUN addgroup -g $USER_GID $USERNAME \
&& adduser -D -u $USER_UID -G $USERNAME $USERNAME
RUN addgroup --gid $USER_GID $USERNAME \
&& adduser --disabled-login -u $USER_UID --ingroup $USERNAME $USERNAME
# Copy application binary from builder image
COPY --from=builder --chown=$USERNAME /usr/src/wadm/target/x86_64-unknown-linux-musl/release/wadm /usr/local/bin/wadm
# Copy application binary from disk
COPY --chown=$USERNAME ${BIN} /usr/local/bin/wadm
USER $USERNAME
# Run the application

Dockerfile.wolfi Normal file

@ -0,0 +1,17 @@
FROM cgr.dev/chainguard/wolfi-base:latest AS base
FROM base AS base-amd64
ARG BIN_AMD64
ARG BIN=$BIN_AMD64
FROM base AS base-arm64
ARG BIN_ARM64
ARG BIN=$BIN_ARM64
FROM base-$TARGETARCH
# Copy application binary from disk
COPY ${BIN} /usr/local/bin/wadm
# Run the application
ENTRYPOINT ["/usr/local/bin/wadm"]


@ -1,16 +0,0 @@
# wasmCloud Application Deployment Manager - Events
**wadm** emits all events on the `wadm.evt` subject in the form of [CloudEvents]().
The following is a list of the events emitted by wadm and the field names of the payload
carried within the cloud event's `data` field (stored as JSON). Each of the following events is in the `com.wasmcloud.wadm` namespace, so the event type for `model_version_created` is actually `com.wasmcloud.wadm.model_version_created`.
| Event Type | Fields | Description |
| --- | --- | --- |
| `model_version_created` | name, version, lattice_id | Indicates that a new version of a model has been stored |
| `model_version_deleted` | name, version, lattice_id | Indicates that a specific model version has been deleted |
| `model_deployed` | name, version, lattice_id | Indicates that a deployment monitor process has started (and nothing more) |
| `model_undeployed` | name, version, lattice_id | Indicates that a deployment monitor process has been stopped (and nothing more) |
| `deployment_state_changed` | name, version, lattice_id, state | Indicates that a deployment monitor has changed state |
| `control_action_taken` | name, version, lattice_id, action_type, params(map) | Indicates that a deployment monitor has taken corrective action as a result of reconciliation |
| `control_action_failed` | name, version, lattice_id, action_type, message, params(map) | Indicates a failure to submit corrective action to a lattice control API |
| `reconciliation_error_occurred` | name, version, lattice_id, message, params(map) | Indicates that required corrective action as a result of reconciliation cannot be performed (e.g. insufficient resources) |
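Since every type is namespaced under `com.wasmcloud.wadm`, a consumer filtering on event types must compose the fully qualified string. A minimal sketch (the helper name is ours for illustration, not part of wadm):

```rust
// Hypothetical helper: compose the fully qualified CloudEvent type
// from the short event name listed in the table above.
fn full_event_type(short: &str) -> String {
    format!("com.wasmcloud.wadm.{short}")
}

fn main() {
    assert_eq!(
        full_event_type("model_version_created"),
        "com.wasmcloud.wadm.model_version_created"
    );
}
```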

MAINTAINERS.md

@ -0,0 +1,25 @@
# MAINTAINERS
The following individuals are responsible for reviewing code, managing issues, and ensuring the overall quality of `wadm`.
## @wasmCloud/wadm-maintainers
Name: Joonas Bergius
GitHub: @joonas
Organization: Cosmonic
Name: Dan Norris
GitHub: @protochron
Organization: Cosmonic
Name: Taylor Thomas
GitHub: @thomastaylor312
Organization: Cosmonic
Name: Ahmed Tadde
GitHub: @ahmedtadde
Organization: PreciseTarget
Name: Brooks Townsend
GitHub: @brooksmtownsend
Organization: Cosmonic

Makefile

@ -7,29 +7,35 @@ MAKEFLAGS += --no-builtin-rules
MAKEFLAGS += --no-print-directory
MAKEFLAGS += -S
.DEFAULT: all
OS_NAME := $(shell uname -s | tr '[:upper:]' '[:lower:]')
ifeq ($(OS_NAME),darwin)
NC_FLAGS := -czt
else
NC_FLAGS := -Czt
endif
.DEFAULT: help
CARGO ?= cargo
CARGO_WATCH ?= cargo-watch
CARGO_CLIPPY ?= cargo-clippy
DOCKER ?= docker
NATS ?= nats
all: build test
# Defaulting to the local registry since multi-arch containers have to be pushed
WADM_TAG ?= localhost:5000/wasmcloud/wadm:latest
# These should be either built locally or downloaded from a release.
BIN_AMD64 ?= wadm-amd64
BIN_ARM64 ?= wadm-aarch64
help: ## Display this help
@awk 'BEGIN {FS = ":.*##"; printf "\nUsage:\n make \033[36m<target>\033[0m\n"} /^[a-zA-Z_\-.*]+:.*?##/ { printf " \033[36m%-15s\033[0m %s\n", $$1, $$2 } /^##@/ { printf "\n\033[1m%s\033[0m\n", substr($$0, 5) } ' $(MAKEFILE_LIST)
all: build test
###########
# Tooling #
###########
# Ensure that cargo watch is installed
check-cargo-watch:
ifeq ("",$(shell command -v $(CARGO_WATCH)))
$(error "ERROR: cargo-watch is not installed (see: https://crates.io/crates/cargo-watch)")
endif
# Ensure that clippy is installed
check-cargo-clippy:
ifeq ("",$(shell command -v $(CARGO_CLIPPY)))
@ -44,14 +50,23 @@ lint: check-cargo-clippy ## Run code lint
$(CARGO) fmt --all --check
$(CARGO) clippy --all-features --all-targets --workspace
lint-watch: check-cargo-clippy ## Run code lint (continuously)
$(CARGO) watch -- $(MAKE) lint
build: ## Build wadm
$(CARGO) build
$(CARGO) build --bin wadm
build-docker: ## Build wadm docker image
ifndef BIN_AMD64
$(error BIN_AMD64 is not set, required for docker building)
endif
ifndef BIN_ARM64
$(error BIN_ARM64 is not set, required for docker building)
endif
$(DOCKER) buildx build --platform linux/amd64,linux/arm64 \
--build-arg BIN_AMD64=$(BIN_AMD64) \
--build-arg BIN_ARM64=$(BIN_ARM64) \
-t $(WADM_TAG) \
--push .
build-watch: check-cargo-watch ## Build wadm (continuously)
$(CARGO) watch -- $(MAKE) build
########
# Test #
@ -62,40 +77,46 @@ build-watch: check-cargo-watch ## Build wadm (continuously)
CARGO_TEST_TARGET ?=
test:: ## Run tests
ifeq ($(shell nc -czt -w1 127.0.0.1 4222 || echo fail),fail)
$(DOCKER) run --rm -d --name wadm-test -p 127.0.0.1:4222:4222 nats:2.9 -js
ifeq ($(shell nc $(NC_FLAGS) -w1 127.0.0.1 4222 || echo fail),fail)
$(DOCKER) run --rm -d --name wadm-test -p 127.0.0.1:4222:4222 nats:2.10 -js
$(CARGO) test $(CARGO_TEST_TARGET) -- --nocapture
$(DOCKER) stop wadm-test
else
$(CARGO) test $(CARGO_TEST_TARGET) -- --nocapture
endif
test-watch: ## Run tests (continuously)
$(CARGO) watch -- $(MAKE) test
test-int:: ## Run integration tests
ifeq (,$(CARGO_TEST_TARGET))
$(CARGO) test --test "*" -- --nocapture
test-e2e:: ## Run e2e tests
ifeq ($(shell nc $(NC_FLAGS) -w1 127.0.0.1 4222 || echo fail),fail)
@$(MAKE) build
@# Reenable this once we've enabled all tests
@# RUST_BACKTRACE=1 $(CARGO) test --test e2e_multitenant --features _e2e_tests -- --nocapture
RUST_BACKTRACE=1 $(CARGO) test --test e2e_multiple_hosts --features _e2e_tests -- --nocapture
RUST_BACKTRACE=1 $(CARGO) test --test e2e_upgrades --features _e2e_tests -- --nocapture
else
$(CARGO) test --test $(CARGO_TEST_TARGET) -- --nocapture
@echo "WARN: Not running e2e tests. NATS must not be currently running"
exit 1
endif
test-int-watch: ## Run integration tests (continuously)
$(CARGO) watch -- $(MAKE) test-int
test-int-all:: ## Run all integration tests
$(MAKE) test-int CARGO_TEST_TARGET='*'
test-individual-e2e:: ## Runs an individual e2e test based on the WADM_E2E_TEST env var
ifeq ($(shell nc $(NC_FLAGS) -w1 127.0.0.1 4222 || echo fail),fail)
@$(MAKE) build
RUST_BACKTRACE=1 $(CARGO) test --test $(WADM_E2E_TEST) --features _e2e_tests -- --nocapture
else
@echo "WARN: Not running e2e tests. NATS must not be currently running"
exit 1
endif
###########
# Cleanup #
###########
stream-cleanup: ## Purges all streams that wadm creates
$(NATS) stream purge wadm_commands --force
$(NATS) stream purge wadm_events --force
$(NATS) stream purge wadm_notify --force
$(NATS) stream purge wadm_mirror --force
$(NATS) stream purge KV_wadm_state --force
$(NATS) stream purge KV_wadm_manifests --force
stream-cleanup: ## Removes all streams that wadm creates
-$(NATS) stream del wadm_commands --force
-$(NATS) stream del wadm_events --force
-$(NATS) stream del wadm_event_consumer --force
-$(NATS) stream del wadm_notify --force
-$(NATS) stream del wadm_status --force
-$(NATS) stream del KV_wadm_state --force
-$(NATS) stream del KV_wadm_manifests --force
.PHONY: check-cargo-watch check-cargo-clippy lint build build-watch test test-watch test-int test-int-all test-int-watch stream-cleanup
.PHONY: check-cargo-clippy lint build build-watch test stream-cleanup clean test-e2e test

README.md

@ -1,43 +1,37 @@
<img align="right" src="./wadm.png" alt="wadm logo" style="width: 200px" />
<img align="right" src="./static/images/wadm_128.png" alt="wadm logo" />
# wasmCloud Application Deployment Manager (wadm)
The wasmCloud Application Deployment Manager (**wadm**) enables declarative wasmCloud applications.
It's responsible for managing a set of application deployment specifications, monitoring the current
state of an entire [lattice](https://wasmcloud.com/docs/reference/lattice/), and issuing the
appropriate lattice control commands required to close the gap between observed and desired state.
It is currently in an `alpha` release state and undergoing further rigor and testing approaching a
production-ready `0.4` version.
Wadm is a Wasm-native orchestrator for managing and scaling declarative wasmCloud applications.
**Heads Up**
## Responsibilities
Wadm is still in alpha as we continue to tie up a few loose ends. Below is a list of known
issues/missing features to be aware of:
**wadm** is powerful because it focuses on a small set of core responsibilities, making it efficient and easy to manage.
- Currently the API does not update with the status of each scaler and reconcile action
- The current scaling algorithm is a bit jittery and puts the "eventual" in eventual consistency. It
will get to the right number of replicas, but may take a second. We've already identified a fix,
but it is in the wasmCloud host, so we'll need to update things there first
- Full e2e tests. There are integration tests, but not full e2e tests in place yet
- Full/updated documentation
- More examples!
- **Manage application specifications** - Manage applications which represent _desired state_. This includes
the creation, deletion, upgrades and rollback of applications to previous versions. Application
specifications are defined using the [Open Application Model](https://oam.dev/). For more
information on wadm's specific OAM features, see our [OAM README](./oam/README.md).
- **Observe state** - Monitor wasmCloud [CloudEvents](https://wasmcloud.com/docs/reference/cloud-event-list) from all hosts in a [lattice](https://wasmcloud.com/docs/deployment/lattice/) to build the current state.
- **Reconcile with compensating commands** - When the current state doesn't match the desired state, issue commands to wasmCloud hosts in the lattice with the [control interface](https://wasmcloud.com/docs/hosts/lattice-protocols/control-interface) to reach desired state. Wadm is constantly reconciling and will react immediately to ensure applications stay deployed. For example, if a host stops, wadm will reconcile the `host_stopped` event and issue any necessary commands to start components on other available hosts.
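The reconcile loop described above can be pictured as a diff between desired and observed state. An illustrative sketch only (not wadm's actual reconciler, which issues lattice control-interface commands):

```rust
use std::collections::HashMap;

/// Illustrative sketch: compare desired vs observed instance counts per
/// component and emit compensating "commands" to close the gap.
fn reconcile(desired: &HashMap<&str, u32>, observed: &HashMap<&str, u32>) -> Vec<String> {
    let mut commands = Vec::new();
    for (component, want) in desired {
        let have = observed.get(component).copied().unwrap_or(0);
        if have < *want {
            commands.push(format!("start {component} x{}", want - have));
        } else if have > *want {
            commands.push(format!("stop {component} x{}", have - want));
        }
    }
    commands
}

fn main() {
    // A host stopped, so only 1 of 3 desired instances remains running.
    let desired = HashMap::from([("http-component", 3u32)]);
    let observed = HashMap::from([("http-component", 1u32)]);
    assert_eq!(reconcile(&desired, &observed), vec!["start http-component x2"]);
}
```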
## Using wadm
[![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://github.com/codespaces/new?hide_repo_select=true&ref=main&repo=401352358&machine=standardLinux32gb&location=EastUs)
### Install
### Install & Run
You can deploy **wadm** by downloading the binary for your host operating system and architecture and running it alongside your wasmCloud host. We recommend using **wash** to run wasmCloud and NATS, then starting **wadm** connected to the same NATS server.
You can easily run **wadm** by downloading the [`wash`](https://wasmcloud.com/docs/installation) CLI, which automatically launches wadm alongside NATS and a wasmCloud host when you run `wash up`. You can use `wash` to query, create, and deploy applications.
Ensure you have a proper [rust](https://www.rust-lang.org/tools/install) toolchain installed to install **wash**, until we release wash v0.18.0.
```
# Install wash
cargo install wash-cli --git https://github.com/wasmcloud/wash --branch feat/wadm_0.4_support --force
```bash
wash up -d # Start NATS, wasmCloud, and wadm in the background
```
```
Follow the [wasmCloud quickstart](https://wasmcloud.com/docs/tour/hello-world) to get started building and deploying an application, or follow the **Deploying an application** example below to simply try a deploy.
If you prefer to run **wadm** separately and/or connect to running wasmCloud hosts, you can instead download the latest GitHub release artifact and execute the binary. Substitute the version, your operating system, and architecture below. Please note that wadm requires a wasmCloud host version >=0.63.0
```bash
# Install wadm
curl -fLO https://github.com/wasmCloud/wadm/releases/download/<version>/wadm-<version>-<os>-<arch>.tar.gz
tar -xvf wadm-<version>-<os>-<arch>.tar.gz
@ -45,137 +39,114 @@ cd wadm-<version>-<os>-<arch>
./wadm
```
### Setup
```
wash up -d # Start NATS and wasmCloud in the background
wadm # Start wadm
```
### Deploying an application
Take the following manifest and save it locally (you can also download this from
[echo.yaml](./oam/echo.yaml)):
Copy the following manifest and save it locally as `hello.yaml` (you can also find it in the `oam`
[directory](./oam/hello.yaml)):
```yaml
# Metadata
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: echo
name: hello-world
annotations:
version: v0.0.1
description: "This is my app"
description: 'HTTP hello world demo'
spec:
components:
- name: echo
type: actor
- name: http-component
type: component
properties:
image: wasmcloud.azurecr.io/echo:0.3.7
# Run components from OCI registries as below or from a local .wasm component binary.
image: ghcr.io/wasmcloud/components/http-hello-world-rust:0.1.0
traits:
# One replica of this component will run
- type: spreadscaler
properties:
replicas: 1
- type: linkdef
properties:
target: httpserver
values:
address: 0.0.0.0:8080
instances: 1
# The httpserver capability provider, started from the official wasmCloud OCI artifact
- name: httpserver
type: capability
properties:
contract: wasmcloud:httpserver
image: wasmcloud.azurecr.io/httpserver:0.17.0
image: ghcr.io/wasmcloud/http-server:0.22.0
traits:
- type: spreadscaler
# Link the HTTP server and set it to listen on the local machine's port 8080
- type: link
properties:
replicas: 1
target: http-component
namespace: wasi
package: http
interfaces: [incoming-handler]
source:
config:
- name: default-http
properties:
ADDRESS: 127.0.0.1:8080
```
Then, use **wadm** to put the manifest and deploy it.
Then use `wash` to deploy the manifest:
```
wash app put ./echo.yaml
wash app deploy echo
```bash
wash app deploy hello.yaml
```
🎉 You've just launched your first application with **wadm**! Try `curl localhost:8080/wadm` and see
the response from the [echo](https://github.com/wasmCloud/examples/tree/main/actor/echo) WebAssembly
module.
🎉 You've just launched your first application with **wadm**! Try `curl localhost:8080`.
When you're done, you can use **wadm** to undeploy the application.
When you're done, you can use `wash` to undeploy the application:
```
wash app undeploy echo
```bash
wash app undeploy hello-world
```
### Modifying applications
**wadm** supports upgrading applications by `put`ting new versions of manifests and then `deploy`ing
them. Try changing the manifest you created above by updating the number of echo replicas.
**wadm** supports upgrading applications by deploying new versions of manifests. Try changing the manifest you created above by updating the number of instances.
```yaml
<<ELIDED>>
name: echo
metadata:
name: hello-world
annotations:
version: v0.0.2 # Note the changed version
description: "wasmCloud echo Example"
description: 'HTTP hello world demo'
spec:
components:
- name: echo
type: actor
- name: http-component
type: component
properties:
image: wasmcloud.azurecr.io/echo:0.3.5
image: ghcr.io/wasmcloud/components/http-hello-world-rust:0.1.0
traits:
- type: spreadscaler
properties:
replicas: 10 # Let's run 10!
instances: 10 # Let's have 10!
<<ELIDED>>
```
Then, simply deploy the new version:
Then simply deploy the new manifest:
```
wash app put ./echo.yaml
wash app deploy echo v0.0.2
```bash
wash app deploy hello.yaml
```
If you navigate to the [wasmCloud dashboard](http://localhost:4000/), you'll see that you now have
10 instances of the echo actor.
_Documentation for configuring the spreadscaler to spread actors and providers across multiple hosts
in a lattice is forthcoming._
## Responsibilities
**wadm** has a very small set of responsibilities, which actually contributes to its power.
- **Manage Application Specifications** - Manage models consisting of _desired state_. This includes
the creation and deletion and _rollback_ of models to previous versions. Application
specifications are defined using the [Open Application Model](https://oam.dev/). For more
information on wadm's specific OAM features, see our [OAM README](./oam/README.md).
- **Observe State** - Monitor wasmCloud [CloudEvents](https://cloudevents.io/) from all hosts in a
lattice to build the current state.
- **Take Compensating Actions** - When indicated, issue commands to the [lattice control
interface](https://github.com/wasmCloud/interfaces/tree/main/lattice-control) to bring about the
changes necessary to make the desired and observed state match.
Now wasmCloud is configured to automatically scale your component to 10 instances based on incoming load.
## 🚧 Advanced
You can find a Docker Compose file for deploying an end-to-end multi-tenant example in the [test](https://github.com/wasmCloud/wadm/blob/main/tests/docker-compose-e2e-multitenant.yaml) directory.
In advanced use cases, **wadm** is also capable of:
- Monitoring multiple lattices
- Running multiple replicas to distribute load among multiple processes, or for a high-availability
- Monitoring multiple lattices.
- Running multiple instances to distribute load among multiple processes, or for a high-availability
architecture.
🚧 The above functionality is somewhat tested, but not as rigorously as a single instance monitoring
🚧 Multi-lattice and multi-process functionality is somewhat tested, but not as rigorously as a single instance monitoring
a single lattice. Proceed with caution while we do further testing.
### API
Interacting with **wadm** is done over NATS on the root topic `wadm.api.{prefix}` where `prefix` is
the lattice namespace prefix. For more information on this API, please consult the [wadm
Reference](https://wasmcloud.dev/reference/wadm).
Reference](https://wasmcloud.com/docs/ecosystem/wadm/).
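Given the root topic format above, a client composes subjects by appending an operation path to `wadm.api.{prefix}`. A minimal sketch; the `model.list` operation name here is an assumption for illustration, so consult the wadm reference for the real operation names:

```rust
/// Hypothetical helper: compose a wadm API subject from the lattice
/// namespace prefix and an operation path (operation names assumed).
fn api_subject(lattice_prefix: &str, operation: &str) -> String {
    format!("wadm.api.{lattice_prefix}.{operation}")
}

fn main() {
    assert_eq!(api_subject("default", "model.list"), "wadm.api.default.model.list");
}
```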
## References

SECURITY.md

@ -0,0 +1,3 @@
# Reporting a security issue
Please refer to the [wasmCloud Security Process and Policy](https://github.com/wasmCloud/wasmCloud/blob/main/SECURITY.md) for details on how to report security issues and vulnerabilities.


@ -1,50 +0,0 @@
//! A module for connection pools and generators. This is needed because control interface clients
//! (and possibly other things like nats connections in the future) are lattice scoped or need
//! different credentials
use wasmcloud_control_interface::{Client, ClientBuilder};
#[derive(Debug, Default, Clone)]
pub struct ControlClientConfig {
/// The jetstream domain to use for the clients
pub js_domain: Option<String>,
/// The topic prefix to use for operations
pub topic_prefix: Option<String>,
}
/// A client constructor for wasmCloud control interface clients, identified by a lattice ID
// NOTE: Yes, this sounds java-y. Deal with it.
#[derive(Clone)]
pub struct ControlClientConstructor {
client: async_nats::Client,
config: ControlClientConfig,
}
impl ControlClientConstructor {
/// Creates a new client pool that is all backed using the same NATS client. The given NATS
/// client should be using credentials that can access all desired lattices.
pub fn new(
client: async_nats::Client,
config: ControlClientConfig,
) -> ControlClientConstructor {
ControlClientConstructor { client, config }
}
/// Get the client for the given lattice ID
pub async fn get_connection(&self, id: &str) -> anyhow::Result<Client> {
let builder = ClientBuilder::new(self.client.clone()).lattice_prefix(id);
let builder = if let Some(domain) = self.config.js_domain.as_deref() {
builder.js_domain(domain)
} else {
builder
};
let builder = if let Some(prefix) = self.config.topic_prefix.as_deref() {
builder.topic_prefix(prefix)
} else {
builder
};
builder
.build()
.await
.map_err(|e| anyhow::anyhow!("Error building client for {id}: {e:?}"))
}
}


@ -1,366 +0,0 @@
use std::path::PathBuf;
use std::sync::Arc;
use std::time::Duration;
use async_nats::jetstream::{stream::Stream, Context};
use clap::Parser;
use tokio::sync::Semaphore;
use wadm::{
consumers::{
manager::{ConsumerManager, WorkerCreator},
*,
},
mirror::Mirror,
nats_utils::LatticeIdParser,
scaler::manager::{ScalerManager, WADM_NOTIFY_PREFIX},
server::{ManifestNotifier, Server, DEFAULT_WADM_TOPIC_PREFIX},
storage::{nats_kv::NatsKvStore, reaper::Reaper},
workers::{CommandPublisher, CommandWorker, EventWorker},
DEFAULT_COMMANDS_TOPIC, DEFAULT_EVENTS_TOPIC, DEFAULT_MULTITENANT_EVENTS_TOPIC,
DEFAULT_WADM_EVENTS_TOPIC,
};
mod connections;
mod logging;
mod nats;
mod observer;
use connections::{ControlClientConfig, ControlClientConstructor};
const EVENT_STREAM_NAME: &str = "wadm_events";
const COMMAND_STREAM_NAME: &str = "wadm_commands";
const MIRROR_STREAM_NAME: &str = "wadm_mirror";
const NOTIFY_STREAM_NAME: &str = "wadm_notify";
#[derive(Parser, Debug)]
#[command(name = clap::crate_name!(), version = clap::crate_version!(), about = "wasmCloud Application Deployment Manager", long_about = None)]
struct Args {
/// The ID for this wadm process. Defaults to a random UUIDv4 if none is provided. This is used
/// to help with debugging when identifying which process is doing the work
#[arg(short = 'i', long = "host-id", env = "WADM_HOST_ID")]
host_id: Option<String>,
/// Whether or not to use structured log output (as JSON)
#[arg(
short = 'l',
long = "structured-logging",
default_value = "false",
env = "WADM_STRUCTURED_LOGGING"
)]
structured_logging: bool,
/// Whether or not to enable opentelemetry tracing
#[arg(
short = 't',
long = "tracing",
default_value = "false",
env = "WADM_TRACING_ENABLED"
)]
tracing_enabled: bool,
/// The endpoint to use for tracing. Setting this flag enables tracing, even if --tracing is set
/// to false. Defaults to http://localhost:55681/v1/traces if not set and tracing is enabled
#[arg(short = 'e', long = "tracing-endpoint", env = "WADM_TRACING_ENDPOINT")]
tracing_endpoint: Option<String>,
/// The NATS JetStream domain to connect to
#[arg(short = 'd', env = "WADM_JETSTREAM_DOMAIN")]
domain: Option<String>,
/// (Advanced) Tweak the maximum number of jobs to run for handling events and commands. Be
/// careful how you use this as it can affect performance
#[arg(
short = 'j',
long = "max-jobs",
default_value = "256",
env = "WADM_MAX_JOBS"
)]
max_jobs: usize,
/// The URL of the nats server you want to connect to
#[arg(
short = 's',
long = "nats-server",
env = "WADM_NATS_SERVER",
default_value = "127.0.0.1:4222"
)]
nats_server: String,
/// Use the specified nkey file or seed literal for authentication. Must be used in conjunction with --nats-jwt
#[arg(
long = "nats-seed",
env = "WADM_NATS_NKEY",
conflicts_with = "nats_creds",
requires = "nats_jwt"
)]
nats_seed: Option<String>,
/// Use the specified jwt file for authentication. Must be used in conjunction with --nats-nkey
#[arg(
long = "nats-jwt",
env = "WADM_NATS_JWT",
conflicts_with = "nats_creds",
requires = "nats_seed"
)]
nats_jwt: Option<PathBuf>,
/// (Optional) NATS credential file to use when authenticating
#[arg(
long = "nats-creds-file",
env = "WADM_NATS_CREDS_FILE",
conflicts_with_all = ["nats_seed", "nats_jwt"],
)]
nats_creds: Option<PathBuf>,
/// Name of the bucket used for storage of lattice state
#[arg(
long = "state-bucket-name",
env = "WADM_STATE_BUCKET_NAME",
default_value = "wadm_state"
)]
state_bucket: String,
/// The amount of time in seconds to give for hosts to fail to heartbeat and be removed from the
/// store. By default, this is 120s because it is 4x the host heartbeat interval
#[arg(
long = "cleanup-interval",
env = "WADM_CLEANUP_INTERVAL",
default_value = "120"
)]
cleanup_interval: u64,
/// The API topic prefix to use. This is an advanced setting that should only be used if you
/// know what you are doing
#[arg(
long = "api-prefix",
env = "WADM_API_PREFIX",
default_value = DEFAULT_WADM_TOPIC_PREFIX
)]
api_prefix: String,
/// Name of the bucket used for storage of manifests
#[arg(
long = "manifest-bucket-name",
env = "WADM_MANIFEST_BUCKET_NAME",
default_value = "wadm_manifests"
)]
manifest_bucket: String,
/// Run wadm in multitenant mode. This is for advanced multitenant use cases with segmented NATS
/// account traffic and not simple cases where all lattices use credentials from the same
/// account. See the deployment guide for more information
#[arg(long = "multitenant", env = "WADM_MULTITENANT")]
multitenant: bool,
}
#[tokio::main]
async fn main() -> anyhow::Result<()> {
let args = Args::parse();
logging::configure_tracing(
args.structured_logging,
args.tracing_enabled,
args.tracing_endpoint,
);
// Build storage adapter for lattice state (on by default)
let (client, context) = nats::get_client_and_context(
args.nats_server.clone(),
args.domain.clone(),
args.nats_seed.clone(),
args.nats_jwt.clone(),
args.nats_creds.clone(),
)
.await?;
// TODO: We will probably need to set up all the flags (like lattice prefix and topic prefix) down the line
let connection_pool = ControlClientConstructor::new(
client.clone(),
ControlClientConfig {
js_domain: args.domain,
topic_prefix: None,
},
);
let trimmer: &[_] = &['.', '>', '*'];
let store = nats::ensure_kv_bucket(&context, args.state_bucket, 1).await?;
let state_storage = NatsKvStore::new(store);
let store = nats::ensure_kv_bucket(&context, args.manifest_bucket, 1).await?;
let manifest_storage = NatsKvStore::new(store);
let event_stream = nats::ensure_stream(
&context,
EVENT_STREAM_NAME.to_owned(),
vec![DEFAULT_WADM_EVENTS_TOPIC.to_owned()],
Some(
"A stream that stores all events coming in on the wasmbus.evt topics in a cluster"
.to_string(),
),
)
.await?;
let command_stream = nats::ensure_stream(
&context,
COMMAND_STREAM_NAME.to_owned(),
vec![DEFAULT_COMMANDS_TOPIC.to_owned()],
Some("A stream that stores all commands for wadm".to_string()),
)
.await?;
let event_stream_topics = if args.multitenant {
vec![DEFAULT_MULTITENANT_EVENTS_TOPIC.to_owned()]
} else {
vec![DEFAULT_EVENTS_TOPIC.to_owned()]
};
let mirror_stream = nats::ensure_stream(
&context,
MIRROR_STREAM_NAME.to_owned(),
event_stream_topics.clone(),
Some("A stream that publishes all events to the same stream".to_string()),
)
.await?;
let notify_stream = nats::ensure_notify_stream(
&context,
NOTIFY_STREAM_NAME.to_owned(),
vec![format!("{WADM_NOTIFY_PREFIX}.*")],
)
.await?;
let permit_pool = Arc::new(Semaphore::new(args.max_jobs));
let event_worker_creator = EventWorkerCreator {
state_store: state_storage.clone(),
manifest_store: manifest_storage.clone(),
pool: connection_pool.clone(),
command_topic_prefix: DEFAULT_COMMANDS_TOPIC.trim_matches(trimmer).to_owned(),
publisher: context.clone(),
notify_stream,
};
let events_manager: ConsumerManager<EventConsumer> = ConsumerManager::new(
permit_pool.clone(),
event_stream,
event_worker_creator.clone(),
)
.await;
let command_worker_creator = CommandWorkerCreator {
pool: connection_pool,
};
let commands_manager: ConsumerManager<CommandConsumer> = ConsumerManager::new(
permit_pool.clone(),
command_stream,
command_worker_creator.clone(),
)
.await;
// TODO(thomastaylor312): We might want to figure out how not to run this globally. Doing a
// synthetic event sent to the stream could be nice, but all the wadm processes would still fire
// off that tick, resulting in multiple people handling. We could maybe get it to work with the
// right duplicate window, but we have no idea when each process could fire a tick. Worst case
// scenario right now is that multiple fire simultaneously and a few of them just delete nothing
let reaper = Reaper::new(
state_storage.clone(),
Duration::from_secs(args.cleanup_interval / 2),
[],
);
let wadm_event_prefix = DEFAULT_WADM_EVENTS_TOPIC.trim_matches(trimmer);
let observer = observer::Observer {
parser: LatticeIdParser::new("wasmbus", args.multitenant),
command_manager: commands_manager,
event_manager: events_manager,
mirror: Mirror::new(mirror_stream, wadm_event_prefix),
reaper,
client: client.clone(),
command_worker_creator,
event_worker_creator,
};
let server = Server::new(
manifest_storage,
client,
Some(&args.api_prefix),
ManifestNotifier::new(wadm_event_prefix, context),
)
.await?;
tokio::select! {
res = server.serve() => {
res?
}
res = observer.observe(event_stream_topics) => {
res?
}
_ = tokio::signal::ctrl_c() => {}
}
Ok(())
}
#[derive(Clone)]
struct CommandWorkerCreator {
pool: ControlClientConstructor,
}
#[async_trait::async_trait]
impl WorkerCreator for CommandWorkerCreator {
type Output = CommandWorker;
async fn create(&self, lattice_id: &str) -> anyhow::Result<Self::Output> {
self.pool
.get_connection(lattice_id)
.await
.map(CommandWorker::new)
}
}
#[derive(Clone)]
struct EventWorkerCreator<StateStore, ManifestStore> {
state_store: StateStore,
manifest_store: ManifestStore,
pool: ControlClientConstructor,
command_topic_prefix: String,
publisher: Context,
notify_stream: Stream,
}
#[async_trait::async_trait]
impl<StateStore, ManifestStore> WorkerCreator for EventWorkerCreator<StateStore, ManifestStore>
where
StateStore: wadm::storage::Store + Send + Sync + Clone + 'static,
ManifestStore: wadm::storage::ReadStore + Send + Sync + Clone + 'static,
{
type Output = EventWorker<StateStore, wasmcloud_control_interface::Client, Context>;
async fn create(&self, lattice_id: &str) -> anyhow::Result<Self::Output> {
match self.pool.get_connection(lattice_id).await {
Ok(client) => {
let publisher = CommandPublisher::new(
self.publisher.clone(),
&format!("{}.{lattice_id}", self.command_topic_prefix),
);
let manager = ScalerManager::new(
self.publisher.clone(),
self.notify_stream.clone(),
lattice_id,
self.state_store.clone(),
self.manifest_store.clone(),
publisher.clone(),
)
.await?;
Ok(EventWorker::new(
self.state_store.clone(),
client,
publisher,
manager,
))
}
Err(e) => Err(e),
}
}
}


@ -1,145 +0,0 @@
use std::path::PathBuf;
use anyhow::Result;
use async_nats::{
jetstream::{
self,
kv::{Config as KvConfig, Store},
stream::{Config as StreamConfig, Stream},
Context,
},
Client, ConnectOptions,
};
use wadm::DEFAULT_EXPIRY_TIME;
/// Creates a NATS client from the given options
pub async fn get_client_and_context(
url: String,
js_domain: Option<String>,
seed: Option<String>,
jwt_path: Option<PathBuf>,
creds_path: Option<PathBuf>,
) -> Result<(Client, Context)> {
let client = if seed.is_none() && jwt_path.is_none() && creds_path.is_none() {
async_nats::connect(url).await?
} else {
let opts = build_nats_options(seed, jwt_path, creds_path).await?;
async_nats::connect_with_options(url, opts).await?
};
let context = if let Some(domain) = js_domain {
jetstream::with_domain(client.clone(), domain)
} else {
jetstream::new(client.clone())
};
Ok((client, context))
}
async fn build_nats_options(
seed: Option<String>,
jwt_path: Option<PathBuf>,
creds_path: Option<PathBuf>,
) -> Result<ConnectOptions> {
match (seed, jwt_path, creds_path) {
(Some(seed), Some(jwt), None) => {
let jwt = tokio::fs::read_to_string(jwt).await?;
let kp = std::sync::Arc::new(get_seed(seed).await?);
Ok(async_nats::ConnectOptions::with_jwt(jwt, move |nonce| {
let key_pair = kp.clone();
async move { key_pair.sign(&nonce).map_err(async_nats::AuthError::new) }
}))
}
(None, None, Some(creds)) => async_nats::ConnectOptions::with_credentials_file(creds)
.await
.map_err(anyhow::Error::from),
_ => {
// We shouldn't ever get here due to the requirements on the flags, but return a helpful error just in case
Err(anyhow::anyhow!(
"Got too many options. Make sure to provide a seed and jwt or a creds path"
))
}
}
}
/// Takes a string that could be a raw seed, or a path and does all the necessary loading and parsing steps
async fn get_seed(seed: String) -> Result<nkeys::KeyPair> {
// MAGIC NUMBER: Length of a seed key
let raw_seed = if seed.len() == 58 && seed.starts_with('S') {
seed
} else {
tokio::fs::read_to_string(seed).await?
};
nkeys::KeyPair::from_seed(&raw_seed).map_err(anyhow::Error::from)
}
/// A helper that ensures that the given stream name exists, using defaults to create if it does
/// not. Returns the handle to the stream
pub async fn ensure_stream(
context: &Context,
name: String,
subjects: Vec<String>,
description: Option<String>,
) -> Result<Stream> {
context
.get_or_create_stream(StreamConfig {
name,
description,
num_replicas: 1,
retention: async_nats::jetstream::stream::RetentionPolicy::WorkQueue,
subjects,
max_age: DEFAULT_EXPIRY_TIME,
storage: async_nats::jetstream::stream::StorageType::File,
allow_rollup: false,
..Default::default()
})
.await
.map_err(|e| anyhow::anyhow!("{e:?}"))
}
/// A helper that ensures that the notify stream exists
pub async fn ensure_notify_stream(
context: &Context,
name: String,
subjects: Vec<String>,
) -> Result<Stream> {
context
.get_or_create_stream(StreamConfig {
name,
description: Some("A stream for capturing all notification events for wadm".into()),
num_replicas: 1,
retention: async_nats::jetstream::stream::RetentionPolicy::Interest,
subjects,
max_age: DEFAULT_EXPIRY_TIME,
storage: async_nats::jetstream::stream::StorageType::File,
..Default::default()
})
.await
.map_err(|e| anyhow::anyhow!("{e:?}"))
}
/// A helper that ensures that the given KV bucket exists, using defaults to create if it does
/// not. Returns the handle to the stream
pub async fn ensure_kv_bucket(
context: &Context,
name: String,
history_to_keep: i64,
) -> Result<Store> {
if let Ok(kv) = context.get_key_value(&name).await {
Ok(kv)
} else {
context
.create_key_value(KvConfig {
bucket: name,
history: history_to_keep,
num_replicas: 1,
storage: jetstream::stream::StorageType::File,
..Default::default()
})
.await
.map_err(|e| anyhow::anyhow!("{e:?}"))
}
}

charts/wadm/.helmignore Normal file

@@ -0,0 +1,25 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/
ci/
.helmignore

charts/wadm/Chart.yaml Normal file

@@ -0,0 +1,24 @@
apiVersion: v2
name: wadm
description: A Helm chart for deploying wadm on Kubernetes
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: '0.2.10'
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: 'v0.21.0'

charts/wadm/ci/nats.yaml Normal file

@@ -0,0 +1,8 @@
config:
jetstream:
enabled: true
fileStore:
pvc:
enabled: false
merge:
domain: default

charts/wadm/ct.yaml Normal file

@@ -0,0 +1,3 @@
validate-maintainers: false
target-branch: main # TODO: Remove this once chart-testing 3.10.1+ is released
helm-extra-args: --timeout 60s

@@ -0,0 +1,106 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "wadm.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "wadm.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "wadm.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Common labels
*/}}
{{- define "wadm.labels" -}}
helm.sh/chart: {{ include "wadm.chart" . }}
{{ include "wadm.selectorLabels" . }}
app.kubernetes.io/component: wadm
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/part-of: wadm
{{- with .Values.additionalLabels }}
{{ . | toYaml }}
{{- end }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "wadm.selectorLabels" -}}
app.kubernetes.io/name: {{ include "wadm.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
{{- define "wadm.nats.server" -}}
- name: WADM_NATS_SERVER
{{- if .Values.wadm.config.nats.server }}
value: {{ .Values.wadm.config.nats.server | quote }}
{{- else }}
value: nats-headless.{{ .Release.Namespace }}.svc.cluster.local
{{- end }}
{{- end }}
{{- define "wadm.nats.auth" -}}
{{- if .Values.wadm.config.nats.creds.secretName -}}
- name: WADM_NATS_CREDS_FILE
value: {{ include "wadm.nats.creds_file_path" . | quote }}
{{- else if and .Values.wadm.config.nats.creds.jwt .Values.wadm.config.nats.creds.seed -}}
- name: WADM_NATS_NKEY
value: {{ .Values.wadm.config.nats.creds.seed | quote }}
- name: WADM_NATS_JWT
value: {{ .Values.wadm.config.nats.creds.jwt | quote }}
{{- end }}
{{- end }}
{{- define "wadm.nats.creds_file_path" }}
{{- if .Values.wadm.config.nats.creds.secretName -}}
/etc/nats-creds/nats.creds
{{- end }}
{{- end }}
{{- define "wadm.nats.creds_volume_mount" -}}
{{- if .Values.wadm.config.nats.creds.secretName -}}
volumeMounts:
- name: nats-creds-secret-volume
mountPath: "/etc/nats-creds"
readOnly: true
{{- end }}
{{- end }}
{{- define "wadm.nats.creds_volume" -}}
{{- with .Values.wadm.config.nats.creds -}}
{{- if .secretName -}}
volumes:
- name: nats-creds-secret-volume
secret:
secretName: {{ .secretName }}
items:
- key: {{ .key }}
path: "nats.creds"
{{- end }}
{{- end }}
{{- end }}

@@ -0,0 +1,106 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "wadm.fullname" . }}
labels:
{{- include "wadm.labels" . | nindent 4 }}
spec:
replicas: {{ .Values.replicas }}
selector:
matchLabels:
{{- include "wadm.selectorLabels" . | nindent 6 }}
template:
metadata:
{{- with .Values.podAnnotations }}
annotations:
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
{{- include "wadm.labels" . | nindent 8 }}
{{- with .Values.podLabels }}
{{- toYaml . | nindent 8 }}
{{- end }}
spec:
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
securityContext:
{{- toYaml .Values.podSecurityContext | nindent 8 }}
containers:
- name: {{ .Chart.Name }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
image: "{{ .Values.wadm.image.repository }}:{{ .Values.wadm.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.wadm.image.pullPolicy }}
env:
{{- include "wadm.nats.server" . | nindent 12 }}
{{- include "wadm.nats.auth" . | nindent 12 }}
{{- if .Values.wadm.config.nats.tlsCaFile }}
- name: WADM_NATS_TLS_CA_FILE
value: {{ .Values.wadm.config.nats.tlsCaFile | quote }}
{{- end }}
{{- if .Values.wadm.config.hostId }}
- name: WADM_HOST_ID
value: {{ .Values.wadm.config.hostId | quote }}
{{- end }}
{{- if .Values.wadm.config.structuredLogging }}
- name: WADM_STRUCTURED_LOGGING
value: {{ .Values.wadm.config.structuredLogging | quote }}
{{- end }}
{{- if .Values.wadm.config.tracing }}
- name: WADM_TRACING_ENABLED
value: {{ .Values.wadm.config.tracing | quote }}
{{- end }}
{{- if .Values.wadm.config.tracingEndpoint }}
- name: WADM_TRACING_ENDPOINT
value: {{ .Values.wadm.config.tracingEndpoint | quote }}
{{- end }}
{{- if .Values.wadm.config.nats.jetstreamDomain }}
- name: WADM_JETSTREAM_DOMAIN
value: {{ .Values.wadm.config.nats.jetstreamDomain | quote }}
{{- end }}
{{- if .Values.wadm.config.maxJobs }}
- name: WADM_MAX_JOBS
value: {{ .Values.wadm.config.maxJobs }}
{{- end }}
{{- if .Values.wadm.config.stateBucket }}
- name: WADM_STATE_BUCKET_NAME
value: {{ .Values.wadm.config.stateBucket | quote }}
{{- end }}
{{- if .Values.wadm.config.manifestBucket }}
- name: WADM_MANIFEST_BUCKET_NAME
value: {{ .Values.wadm.config.manifestBucket | quote }}
{{- end }}
{{- if .Values.wadm.config.cleanupInterval }}
- name: WADM_CLEANUP_INTERVAL
value: {{ .Values.wadm.config.cleanupInterval }}
{{- end }}
{{- if .Values.wadm.config.apiPrefix }}
- name: WADM_API_PREFIX
value: {{ .Values.wadm.config.apiPrefix }}
{{- end }}
{{- if .Values.wadm.config.streamPrefix }}
- name: WADM_STREAM_PREFIX
value: {{ .Values.wadm.config.streamPrefix }}
{{- end }}
{{- if .Values.wadm.config.multitenant }}
- name: WADM_MULTITENANT
value: {{ .Values.wadm.config.multitenant | quote }}
{{- end }}
resources:
{{- toYaml .Values.resources | nindent 12 }}
{{- include "wadm.nats.creds_volume_mount" . | nindent 10 -}}
{{- include "wadm.nats.creds_volume" . | nindent 6 -}}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}

charts/wadm/values.yaml Normal file

@@ -0,0 +1,85 @@
wadm:
# replicas represents the number of copies of wadm to run
replicas: 1
# image represents the image and tag for running wadm
image:
repository: ghcr.io/wasmcloud/wadm
pullPolicy: IfNotPresent
# Overrides the image tag whose default is the chart appVersion.
tag: ""
config:
apiPrefix: ""
streamPrefix: ""
cleanupInterval: ""
hostId: ""
logLevel: ""
nats:
server: ""
jetstreamDomain: ""
tlsCaFile: ""
creds:
jwt: ""
seed: ""
secretName: ""
key: "nats.creds"
maxJobs: ""
stateBucket: ""
manifestBucket: ""
multitenant: false
structuredLogging: false
tracing: false
tracingEndpoint: ""
imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
additionalLabels: {}
# app: wadm
serviceAccount:
# Specifies whether a service account should be created
create: true
# Automatically mount a ServiceAccount's API credentials?
automount: true
# Annotations to add to the service account
annotations: {}
# The name of the service account to use.
# If not set and create is true, a name is generated using the fullname template
name: ""
podAnnotations: {}
podLabels: {}
podSecurityContext: {}
# fsGroup: 1000
securityContext:
runAsUser: 1000
runAsGroup: 1000
runAsNonRoot: true
allowPrivilegeEscalation: false
capabilities:
drop:
- "ALL"
seccompProfile:
type: "RuntimeDefault"
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
nodeSelector: {}
tolerations: []
affinity: {}

@@ -0,0 +1,20 @@
[package]
name = "wadm-client"
description = "A client library for interacting with the wadm API"
version = "0.10.0"
edition = "2021"
authors = ["wasmCloud Team"]
keywords = ["webassembly", "wasmcloud", "wadm"]
license = "Apache-2.0"
repository = "https://github.com/wasmcloud/wadm"
[dependencies]
anyhow = { workspace = true }
async-nats = { workspace = true }
futures = { workspace = true }
nkeys = { workspace = true }
serde_json = { workspace = true }
serde_yaml = { workspace = true }
thiserror = { workspace = true }
tokio = { workspace = true, features = ["full"] }
wadm-types = { workspace = true }

@@ -0,0 +1,36 @@
use thiserror::Error;
pub type Result<T> = std::result::Result<T, ClientError>;
/// Errors that can occur when interacting with the wadm client.
#[derive(Error, Debug)]
pub enum ClientError {
/// Unable to load the manifest from a given source. The underlying error is anyhow::Error to
/// allow for flexibility in loading from different sources.
#[error("Unable to load manifest: {0:?}")]
ManifestLoad(anyhow::Error),
/// An error occurred with the NATS transport
#[error(transparent)]
NatsError(#[from] async_nats::RequestError),
/// An API error occurred with the request
#[error("Invalid request: {0}")]
ApiError(String),
/// The named model was not found
#[error("Model not found: {0}")]
NotFound(String),
/// Unable to serialize or deserialize YAML or JSON data.
#[error("Unable to parse manifest: {0:?}")]
Serialization(#[from] SerializationError),
/// Any other errors that are not covered by the other error cases
#[error(transparent)]
Other(#[from] anyhow::Error),
}
/// Errors that can occur when serializing or deserializing YAML or JSON data.
#[derive(Error, Debug)]
pub enum SerializationError {
#[error(transparent)]
Yaml(#[from] serde_yaml::Error),
#[error(transparent)]
Json(#[from] serde_json::Error),
}

@@ -0,0 +1,297 @@
//! A client for interacting with Wadm.
use std::path::PathBuf;
use std::sync::{Arc, OnceLock};
use async_nats::{HeaderMap, Message};
use error::{ClientError, SerializationError};
use futures::Stream;
use topics::TopicGenerator;
use wadm_types::{
api::{
DeleteModelRequest, DeleteModelResponse, DeleteResult, DeployModelRequest,
DeployModelResponse, DeployResult, GetModelRequest, GetModelResponse, GetResult,
ModelSummary, PutModelResponse, PutResult, Status, StatusResponse, StatusResult,
VersionInfo, VersionResponse,
},
Manifest,
};
mod nats;
pub mod error;
pub use error::Result;
pub mod loader;
pub use loader::ManifestLoader;
pub mod topics;
/// Headers for `Content-Type: application/json`
static HEADERS_CONTENT_TYPE_JSON: OnceLock<HeaderMap> = OnceLock::new();
/// Retrieve static content type headers
fn get_headers_content_type_json() -> &'static HeaderMap {
HEADERS_CONTENT_TYPE_JSON.get_or_init(|| {
let mut headers = HeaderMap::new();
headers.insert("Content-Type", "application/json");
headers
})
}
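The `OnceLock` idiom above (a static initialized lazily on first access) generalizes to any global that is expensive or impossible to build in a `const` context. A stdlib-only sketch of the same pattern, using a plain `Vec` in place of the NATS `HeaderMap`:

```rust
use std::sync::OnceLock;

// Same lazy-init pattern as HEADERS_CONTENT_TYPE_JSON: the static starts
// empty and is filled exactly once, on first use.
static DEFAULT_SUBJECTS: OnceLock<Vec<String>> = OnceLock::new();

fn default_subjects() -> &'static Vec<String> {
    // get_or_init runs the closure at most once, even under concurrent calls;
    // every caller gets a reference to the same initialized value.
    DEFAULT_SUBJECTS.get_or_init(|| vec!["wadm.api".to_string(), "wadm.status".to_string()])
}

fn main() {
    // Repeated calls return references to the identical allocation
    assert!(std::ptr::eq(default_subjects(), default_subjects()));
    assert_eq!(default_subjects().len(), 2);
    println!("once-lock ok");
}
```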
#[derive(Clone)]
pub struct Client {
topics: Arc<TopicGenerator>,
client: async_nats::Client,
}
#[derive(Default, Clone)]
/// Options for connecting to a NATS server for a Wadm client. If none of these options are set,
/// the client defaults to anonymous authentication against a localhost NATS server on port 4222
pub struct ClientConnectOptions {
/// The URL of the NATS server to connect to. If not provided, the client will connect to the
/// default NATS address of 127.0.0.1:4222
pub url: Option<String>,
/// An nkey seed to use for authenticating with the NATS server. This can either be the raw seed
/// or a path to a file containing the seed. If used, the `jwt` option must be provided
pub seed: Option<String>,
/// A JWT to use for authenticating with the NATS server. This can either be the raw JWT or a
/// path to a file containing the JWT. If used, the `seed` option must be provided
pub jwt: Option<String>,
/// A path to a file containing the credentials to use for authenticating with the NATS server.
/// If used, the `seed` and `jwt` options must not be provided
pub creds_path: Option<PathBuf>,
/// An optional path to a file containing the root CA certificates to use for authenticating
/// with the NATS server.
pub ca_path: Option<PathBuf>,
}
impl Client {
/// Creates a new client with the given lattice ID, optional API prefix, and connection options.
/// Errors if it is unable to connect to the NATS server
pub async fn new(
lattice: &str,
prefix: Option<&str>,
opts: ClientConnectOptions,
) -> anyhow::Result<Self> {
let topics = TopicGenerator::new(lattice, prefix);
let nats_client =
nats::get_client(opts.url, opts.seed, opts.jwt, opts.creds_path, opts.ca_path).await?;
Ok(Client {
topics: Arc::new(topics),
client: nats_client,
})
}
/// Creates a new client with the given lattice ID, optional API prefix, and NATS client. This
/// is not recommended and is hidden because the async-nats crate is not 1.0 yet. That means it
/// is a breaking API change every time we upgrade versions. DO NOT use this function unless you
/// are willing to accept this breaking change. This function is explicitly excluded from our
/// semver guarantees until async-nats is 1.0.
#[doc(hidden)]
pub fn from_nats_client(
lattice: &str,
prefix: Option<&str>,
nats_client: async_nats::Client,
) -> Self {
let topics = TopicGenerator::new(lattice, prefix);
Client {
topics: Arc::new(topics),
client: nats_client,
}
}
/// Puts the given manifest into the lattice. The manifest can be anything that implements the
/// [`ManifestLoader`] trait (a path to a file, raw bytes, or an already parsed manifest).
///
/// Returns the name and version of the manifest that was put into the lattice
pub async fn put_manifest(&self, manifest: impl ManifestLoader) -> Result<(String, String)> {
let manifest = manifest.load_manifest().await?;
let manifest_bytes = serde_json::to_vec(&manifest).map_err(SerializationError::from)?;
let topic = self.topics.model_put_topic();
let resp = self
.client
.request_with_headers(
topic,
get_headers_content_type_json().clone(),
manifest_bytes.into(),
)
.await?;
let body: PutModelResponse =
serde_json::from_slice(&resp.payload).map_err(SerializationError::from)?;
if matches!(body.result, PutResult::Error) {
return Err(ClientError::ApiError(body.message));
}
Ok((body.name, body.current_version))
}
/// Gets a list of all manifests in the lattice. This does not return the full manifest, just a
/// summary of its metadata and status
pub async fn list_manifests(&self) -> Result<Vec<ModelSummary>> {
let topic = self.topics.model_list_topic();
let resp = self
.client
.request(topic, Vec::with_capacity(0).into())
.await?;
let body: Vec<ModelSummary> =
serde_json::from_slice(&resp.payload).map_err(SerializationError::from)?;
Ok(body)
}
/// Gets a manifest from the lattice by name and optionally its version. If no version is set,
/// the latest version will be returned
pub async fn get_manifest(&self, name: &str, version: Option<&str>) -> Result<Manifest> {
let topic = self.topics.model_get_topic(name);
let body = if let Some(version) = version {
serde_json::to_vec(&GetModelRequest {
version: Some(version.to_string()),
})
.map_err(SerializationError::from)?
} else {
Vec::with_capacity(0)
};
let resp = self.client.request(topic, body.into()).await?;
let body: GetModelResponse =
serde_json::from_slice(&resp.payload).map_err(SerializationError::from)?;
match body.result {
GetResult::Error => Err(ClientError::ApiError(body.message)),
GetResult::NotFound => Err(ClientError::NotFound(name.to_string())),
GetResult::Success => body.manifest.ok_or_else(|| {
ClientError::ApiError("API returned success but didn't set a manifest".to_string())
}),
}
}
/// Deletes a manifest from the lattice by name and optionally its version. If no version is
/// set, all versions will be deleted
///
/// Returns true if the manifest was deleted, false if it was a noop (meaning it wasn't found or
/// was already deleted)
pub async fn delete_manifest(&self, name: &str, version: Option<&str>) -> Result<bool> {
let topic = self.topics.model_delete_topic(name);
let body = if let Some(version) = version {
serde_json::to_vec(&DeleteModelRequest {
version: Some(version.to_string()),
})
.map_err(SerializationError::from)?
} else {
Vec::with_capacity(0)
};
let resp = self.client.request(topic, body.into()).await?;
let body: DeleteModelResponse =
serde_json::from_slice(&resp.payload).map_err(SerializationError::from)?;
match body.result {
DeleteResult::Error => Err(ClientError::ApiError(body.message)),
DeleteResult::Noop => Ok(false),
DeleteResult::Deleted => Ok(true),
}
}
/// Gets a list of all versions of a manifest in the lattice
pub async fn list_versions(&self, name: &str) -> Result<Vec<VersionInfo>> {
let topic = self.topics.model_versions_topic(name);
let resp = self
.client
.request(topic, Vec::with_capacity(0).into())
.await?;
let body: VersionResponse =
serde_json::from_slice(&resp.payload).map_err(SerializationError::from)?;
match body.result {
GetResult::Error => Err(ClientError::ApiError(body.message)),
GetResult::NotFound => Err(ClientError::NotFound(name.to_string())),
GetResult::Success => Ok(body.versions),
}
}
/// Deploys a manifest to the lattice. The optional version parameter can be used to deploy a
/// specific version of a manifest. If no version is set, the latest version will be deployed
///
/// Please note that an OK response does not necessarily mean that the manifest was deployed
/// successfully, just that the server accepted the deployment request.
///
/// Returns a tuple of the name and version of the manifest that was deployed
pub async fn deploy_manifest(
&self,
name: &str,
version: Option<&str>,
) -> Result<(String, Option<String>)> {
let topic = self.topics.model_deploy_topic(name);
let body = if let Some(version) = version {
serde_json::to_vec(&DeployModelRequest {
version: Some(version.to_string()),
})
.map_err(SerializationError::from)?
} else {
Vec::with_capacity(0)
};
let resp = self.client.request(topic, body.into()).await?;
let body: DeployModelResponse =
serde_json::from_slice(&resp.payload).map_err(SerializationError::from)?;
match body.result {
DeployResult::Error => Err(ClientError::ApiError(body.message)),
DeployResult::NotFound => Err(ClientError::NotFound(name.to_string())),
DeployResult::Acknowledged => Ok((body.name, body.version)),
}
}
/// A shorthand method that is the equivalent of calling [`put_manifest`](Self::put_manifest)
/// and then [`deploy_manifest`](Self::deploy_manifest)
///
/// Returns the name and version of the manifest that was deployed. Note that this will always
/// deploy the latest version of the manifest (i.e. the one that was just put)
pub async fn put_and_deploy_manifest(
&self,
manifest: impl ManifestLoader,
) -> Result<(String, String)> {
let (name, version) = self.put_manifest(manifest).await?;
// We don't technically need to pass the version since the one we just put is the latest, but
// we pass it explicitly to make sure we deploy exactly that version
self.deploy_manifest(&name, Some(&version)).await?;
Ok((name, version))
}
/// Undeploys the given manifest from the lattice
///
/// Returns Ok(manifest_name) if the manifest undeploy request was acknowledged
pub async fn undeploy_manifest(&self, name: &str) -> Result<String> {
let topic = self.topics.model_undeploy_topic(name);
let resp = self
.client
.request(topic, Vec::with_capacity(0).into())
.await?;
let body: DeployModelResponse =
serde_json::from_slice(&resp.payload).map_err(SerializationError::from)?;
match body.result {
DeployResult::Error => Err(ClientError::ApiError(body.message)),
DeployResult::NotFound => Err(ClientError::NotFound(name.to_string())),
DeployResult::Acknowledged => Ok(body.name),
}
}
/// Gets the status of the given manifest
pub async fn get_manifest_status(&self, name: &str) -> Result<Status> {
let topic = self.topics.model_status_topic(name);
let resp = self
.client
.request(topic, Vec::with_capacity(0).into())
.await?;
let body: StatusResponse =
serde_json::from_slice(&resp.payload).map_err(SerializationError::from)?;
match body.result {
StatusResult::Error => Err(ClientError::ApiError(body.message)),
StatusResult::NotFound => Err(ClientError::NotFound(name.to_string())),
StatusResult::Ok => body.status.ok_or_else(|| {
ClientError::ApiError("API returned success but didn't set a status".to_string())
}),
}
}
/// Subscribes to the status of a given manifest
pub async fn subscribe_to_status(&self, name: &str) -> Result<impl Stream<Item = Message>> {
let subject = self.topics.wadm_status_topic(name);
let subscriber = self
.client
.subscribe(subject)
.await
.map_err(|e| ClientError::ApiError(e.to_string()))?;
Ok(subscriber)
}
}

@@ -0,0 +1,68 @@
//! Various helpers and traits for loading and parsing manifests
use std::{
future::Future,
path::{Path, PathBuf},
};
use wadm_types::Manifest;
use crate::{error::ClientError, Result};
/// A trait for loading a [`Manifest`] from a variety of sources. This is also used as a convenience
/// trait in the client for easily passing in any type of Manifest
pub trait ManifestLoader {
fn load_manifest(self) -> impl Future<Output = Result<Manifest>>;
}
impl ManifestLoader for &Manifest {
async fn load_manifest(self) -> Result<Manifest> {
Ok(self.clone())
}
}
impl ManifestLoader for Manifest {
async fn load_manifest(self) -> Result<Manifest> {
Ok(self)
}
}
impl ManifestLoader for Vec<u8> {
async fn load_manifest(self) -> Result<Manifest> {
parse_yaml_or_json(self).map_err(Into::into)
}
}
impl ManifestLoader for &[u8] {
async fn load_manifest(self) -> Result<Manifest> {
parse_yaml_or_json(self).map_err(Into::into)
}
}
// Helper macro for implementing `ManifestLoader` for anything that implements `AsRef<Path>` (which
// results in a compiler error if we do it generically)
macro_rules! impl_manifest_loader_for_path {
($($ty:ty),*) => {
$(
impl ManifestLoader for $ty {
async fn load_manifest(self) -> Result<Manifest> {
let raw = tokio::fs::read(self).await.map_err(|e| ClientError::ManifestLoad(e.into()))?;
parse_yaml_or_json(raw).map_err(Into::into)
}
}
)*
};
}
impl_manifest_loader_for_path!(&Path, &str, &String, String, PathBuf, &PathBuf);
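The macro-per-type approach above stamps out one identical trait impl per listed type instead of a single blanket generic impl. The same `macro_rules!` pattern in a self-contained, stdlib-only sketch (the `Describe` trait is invented for illustration):

```rust
trait Describe {
    fn describe(&self) -> String;
}

// Same shape as impl_manifest_loader_for_path: the $(...)* repetition emits
// one impl block for each type passed to the macro invocation.
macro_rules! impl_describe_for {
    ($($ty:ty),*) => {
        $(
            impl Describe for $ty {
                fn describe(&self) -> String {
                    format!("{}: {}", stringify!($ty), self)
                }
            }
        )*
    };
}

// One invocation covers all three types
impl_describe_for!(u32, i64, bool);

fn main() {
    assert_eq!(7u32.describe(), "u32: 7");
    assert_eq!(true.describe(), "bool: true");
    println!("macro ok");
}
```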
/// A simple function that attempts to parse the given bytes as YAML or JSON. This is used in the
/// implementations of `ManifestLoader`
pub fn parse_yaml_or_json(
raw: impl AsRef<[u8]>,
) -> std::result::Result<Manifest, crate::error::SerializationError> {
// Attempt to parse as YAML first, then JSON
serde_yaml::from_slice(raw.as_ref())
.or_else(|_| serde_json::from_slice(raw.as_ref()))
.map_err(Into::into)
}

@@ -0,0 +1,75 @@
//! Helpers for creating a NATS client without exposing the NATS client in the API
use std::path::PathBuf;
use anyhow::{Context, Result};
use async_nats::{Client, ConnectOptions};
const DEFAULT_NATS_ADDR: &str = "nats://127.0.0.1:4222";
/// Creates a NATS client from the given options
pub async fn get_client(
url: Option<String>,
seed: Option<String>,
jwt: Option<String>,
creds_path: Option<PathBuf>,
ca_path: Option<PathBuf>,
) -> Result<Client> {
let mut opts = ConnectOptions::new();
opts = match (seed, jwt, creds_path) {
(Some(seed), Some(jwt), None) => {
let jwt = resolve_jwt(jwt).await?;
let kp = std::sync::Arc::new(get_seed(seed).await?);
opts.jwt(jwt, move |nonce| {
let key_pair = kp.clone();
async move { key_pair.sign(&nonce).map_err(async_nats::AuthError::new) }
})
}
(None, None, Some(creds)) => opts.credentials_file(creds).await?,
(None, None, None) => opts,
_ => {
// We shouldn't ever get here due to the requirements on the flags, but return a helpful error just in case
return Err(anyhow::anyhow!(
"Got incorrect combination of connection options. Should either have nothing set, a seed, a jwt, or a credentials file"
));
}
};
if let Some(ca) = ca_path {
opts = opts.add_root_certificates(ca).require_tls(true);
}
opts.connect(url.unwrap_or_else(|| DEFAULT_NATS_ADDR.to_string()))
.await
.map_err(Into::into)
}
/// Takes a string that is either a raw seed or a path to a seed file and performs the necessary loading and parsing steps
async fn get_seed(seed: String) -> Result<nkeys::KeyPair> {
// MAGIC NUMBER: Length of a seed key
let raw_seed = if seed.len() == 58 && seed.starts_with('S') {
seed
} else {
tokio::fs::read_to_string(seed)
.await
.context("Unable to read seed file")?
};
nkeys::KeyPair::from_seed(&raw_seed).map_err(anyhow::Error::from)
}
/// Resolves a JWT value: if the value is a path to an existing file, the file's contents are
/// returned; otherwise the string itself is assumed to be the JWT.
async fn resolve_jwt(jwt_or_file: String) -> Result<String> {
if tokio::fs::metadata(&jwt_or_file)
.await
.map(|metadata| metadata.is_file())
.unwrap_or(false)
{
tokio::fs::read_to_string(jwt_or_file)
.await
.map_err(|e| anyhow::anyhow!("Error loading JWT from file: {e}"))
} else {
// We could do more validation on the JWT here, but if the JWT is invalid then
// connecting will fail anyways
Ok(jwt_or_file)
}
}
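The resolution logic above (read the file if the value names one, otherwise pass the value through) can be sketched synchronously with only the standard library; `resolve_jwt_sync` is a hypothetical stand-in for the async `resolve_jwt`:

```rust
use std::fs;
use std::path::Path;

// Synchronous, stdlib-only sketch of resolve_jwt: if the value names an
// existing file, read it; otherwise treat the value as the JWT itself.
fn resolve_jwt_sync(jwt_or_file: &str) -> std::io::Result<String> {
    if Path::new(jwt_or_file).is_file() {
        fs::read_to_string(jwt_or_file)
    } else {
        Ok(jwt_or_file.to_string())
    }
}

fn main() -> std::io::Result<()> {
    // A literal value that is not a path comes back unchanged
    assert_eq!(resolve_jwt_sync("not-a-real-path")?, "not-a-real-path");
    // A real file path resolves to the file's contents
    let path = std::env::temp_dir().join("jwt-demo.txt");
    fs::write(&path, "token-from-file")?;
    assert_eq!(resolve_jwt_sync(path.to_str().unwrap())?, "token-from-file");
    fs::remove_file(&path)?;
    println!("resolve ok");
    Ok(())
}
```

As in the original, no validation is done on the literal branch: an invalid JWT simply fails later at connect time.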

@@ -0,0 +1,81 @@
use wadm_types::api::{DEFAULT_WADM_TOPIC_PREFIX, WADM_STATUS_API_PREFIX};
/// A generator that uses various config options to generate the proper topic names for the wadm API
pub struct TopicGenerator {
topic_prefix: String,
model_prefix: String,
}
impl TopicGenerator {
/// Creates a new topic generator with a lattice ID and an optional API prefix
pub fn new(lattice: &str, prefix: Option<&str>) -> TopicGenerator {
let topic_prefix = format!(
"{}.{}",
prefix.unwrap_or(DEFAULT_WADM_TOPIC_PREFIX),
lattice
);
let model_prefix = format!("{}.model", topic_prefix);
TopicGenerator {
topic_prefix,
model_prefix,
}
}
/// Returns the full topic prefix, including the API prefix and the lattice ID
pub fn prefix(&self) -> &str {
&self.topic_prefix
}
/// Returns the full prefix for model operations (currently the only operations supported in the
/// API)
pub fn model_prefix(&self) -> &str {
&self.model_prefix
}
/// Returns the full topic for a model put operation
pub fn model_put_topic(&self) -> String {
format!("{}.put", self.model_prefix())
}
/// Returns the full topic for a model get operation
pub fn model_get_topic(&self, model_name: &str) -> String {
format!("{}.get.{model_name}", self.model_prefix())
}
/// Returns the full topic for a model delete operation
pub fn model_delete_topic(&self, model_name: &str) -> String {
format!("{}.del.{model_name}", self.model_prefix())
}
/// Returns the full topic for a model list operation
pub fn model_list_topic(&self) -> String {
format!("{}.list", self.model_prefix())
}
/// Returns the full topic for listing the versions of a model
pub fn model_versions_topic(&self, model_name: &str) -> String {
format!("{}.versions.{model_name}", self.model_prefix())
}
/// Returns the full topic for a model deploy operation
pub fn model_deploy_topic(&self, model_name: &str) -> String {
format!("{}.deploy.{model_name}", self.model_prefix())
}
/// Returns the full topic for a model undeploy operation
pub fn model_undeploy_topic(&self, model_name: &str) -> String {
format!("{}.undeploy.{model_name}", self.model_prefix())
}
/// Returns the full topic for getting a model status
pub fn model_status_topic(&self, model_name: &str) -> String {
format!("{}.status.{model_name}", self.model_prefix())
}
/// Returns the full topic for WADM status subscriptions
pub fn wadm_status_topic(&self, app_name: &str) -> String {
// Extract just the lattice name from topic_prefix
let lattice = self.topic_prefix.split('.').last().unwrap_or("default");
format!("{}.{}.{}", WADM_STATUS_API_PREFIX, lattice, app_name)
}
}
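The topic layout produced by `TopicGenerator` is purely string formatting: the API prefix and lattice ID form the base, and model operations nest under `.model`. A stdlib-only sketch of that layout (the `model_topic` helper is illustrative, not wadm's API; `"wadm.api"` is the default prefix per `DEFAULT_WADM_TOPIC_PREFIX`):

```rust
// Illustrative sketch of TopicGenerator's layout: "<prefix>.<lattice>.model.<op>"
fn model_topic(prefix: Option<&str>, lattice: &str, operation: &str) -> String {
    let topic_prefix = format!("{}.{}", prefix.unwrap_or("wadm.api"), lattice);
    format!("{topic_prefix}.model.{operation}")
}

fn main() {
    // Default prefix, "default" lattice, put operation
    assert_eq!(model_topic(None, "default", "put"), "wadm.api.default.model.put");
    // Custom prefix plus a per-model operation segment
    assert_eq!(
        model_topic(Some("custom.prefix"), "prod", "get.echo"),
        "custom.prefix.prod.model.get.echo"
    );
    println!("topics ok");
}
```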

@@ -0,0 +1,28 @@
[package]
name = "wadm-types"
description = "Types and validators for the wadm API"
version = "0.8.3"
edition = "2021"
authors = ["wasmCloud Team"]
keywords = ["webassembly", "wasmcloud", "wadm"]
license = "Apache-2.0"
repository = "https://github.com/wasmcloud/wadm"
[features]
wit = []
[dependencies]
anyhow = { workspace = true }
regex = { workspace = true }
schemars = { workspace = true }
serde = { workspace = true, features = ["derive"] }
serde_json = { workspace = true }
serde_yaml = { workspace = true }
utoipa = { workspace = true }
[target.'cfg(not(target_family = "wasm"))'.dependencies]
tokio = { workspace = true, features = ["full"] }
wit-bindgen-wrpc = { workspace = true }
[target.'cfg(target_family = "wasm")'.dependencies]
wit-bindgen = { workspace = true, features = ["macros"] }

@@ -1,6 +1,10 @@
use serde::{Deserialize, Serialize};
use crate::model::Manifest;
use crate::Manifest;
/// The default topic prefix for the wadm API
pub const DEFAULT_WADM_TOPIC_PREFIX: &str = "wadm.api";
pub const WADM_STATUS_API_PREFIX: &str = "wadm.status";
/// The request body for getting a manifest
#[derive(Debug, Serialize, Deserialize)]
@@ -19,6 +23,14 @@ pub struct GetModelResponse {
pub manifest: Option<Manifest>,
}
#[derive(Debug, Serialize, Deserialize)]
pub struct ListModelsResponse {
pub result: GetResult,
#[serde(default)]
pub message: String,
pub models: Vec<ModelSummary>,
}
/// Possible outcomes of a get request
#[derive(Debug, Serialize, Deserialize, PartialEq, Eq)]
#[serde(rename_all = "lowercase")]
@@ -52,13 +64,18 @@ pub enum PutResult {
}
/// Summary of a given model returned when listing
#[derive(Debug, Serialize, Deserialize)]
#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct ModelSummary {
pub name: String,
pub version: String,
pub description: Option<String>,
pub deployed: bool,
pub deployed_version: Option<String>,
#[serde(default)]
pub detailed_status: Status,
#[deprecated(since = "0.14.0", note = "Use detailed_status instead")]
pub status: StatusType,
#[deprecated(since = "0.14.0", note = "Use detailed_status instead")]
pub status_message: Option<String>,
}
/// The response to a versions request
@@ -71,7 +88,7 @@ pub struct VersionResponse {
}
/// Information about a given version of a model, returned as part of a list of all versions
#[derive(Debug, Serialize, Deserialize)]
#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct VersionInfo {
pub version: String,
pub deployed: bool,
@ -81,9 +98,7 @@ pub struct VersionInfo {
#[derive(Debug, Serialize, Deserialize)]
pub struct DeleteModelRequest {
#[serde(default)]
pub version: String,
#[serde(default)]
pub delete_all: bool,
pub version: Option<String>,
}
/// A response from a delete request
@ -97,7 +112,7 @@ pub struct DeleteModelResponse {
}
/// All possible outcomes of a delete operation
#[derive(Debug, Serialize, Deserialize)]
#[derive(Debug, Serialize, Deserialize, PartialEq, Eq)]
#[serde(rename_all = "lowercase")]
pub enum DeleteResult {
Deleted,
@ -114,16 +129,20 @@ pub struct DeployModelRequest {
pub version: Option<String>,
}
/// A response from a deploy request
/// A response from a deploy or undeploy request
#[derive(Debug, Serialize, Deserialize)]
pub struct DeployModelResponse {
pub result: DeployResult,
#[serde(default)]
pub message: String,
#[serde(default)]
pub name: String,
#[serde(default)]
pub version: Option<String>,
}
/// All possible outcomes of a deploy operation
#[derive(Debug, Serialize, Deserialize)]
#[derive(Debug, Serialize, Deserialize, PartialEq, Eq)]
#[serde(rename_all = "lowercase")]
pub enum DeployResult {
Error,
@ -132,10 +151,10 @@ pub enum DeployResult {
}
/// A request to undeploy a model
///
/// Right now this is just an empty struct, but it is reserved for future use
#[derive(Debug, Serialize, Deserialize)]
pub struct UndeployModelRequest {
pub non_destructive: bool,
}
pub struct UndeployModelRequest {}
/// A response to a status request
#[derive(Debug, Serialize, Deserialize)]
@ -157,27 +176,46 @@ pub enum StatusResult {
}
/// The current status of a model
#[derive(Debug, Serialize, Deserialize, Default, Clone)]
#[derive(Debug, Serialize, Deserialize, Default, Clone, PartialEq, Eq)]
pub struct Status {
pub version: String,
#[serde(rename = "status")]
pub info: StatusInfo,
#[serde(skip_serializing_if = "Vec::is_empty", default)]
pub scalers: Vec<ScalerStatus>,
#[serde(default)]
#[deprecated(since = "0.14.0")]
pub version: String,
#[serde(default)]
#[deprecated(since = "0.14.0")]
pub components: Vec<ComponentStatus>,
}
impl Status {
pub fn new(info: StatusInfo, scalers: Vec<ScalerStatus>) -> Self {
#[allow(deprecated)]
Status {
info,
scalers,
version: String::with_capacity(0),
components: Vec::with_capacity(0),
}
}
}
/// The current status of a component
#[derive(Debug, Serialize, Deserialize, Default, Clone)]
#[derive(Debug, Serialize, Deserialize, Default, Clone, Eq, PartialEq)]
pub struct ComponentStatus {
pub name: String,
#[serde(rename = "type")]
pub component_type: String,
#[serde(rename = "status")]
pub info: StatusInfo,
#[serde(skip_serializing_if = "Vec::is_empty", default)]
pub traits: Vec<TraitStatus>,
}
/// The current status of a trait
#[derive(Debug, Serialize, Deserialize, Default, Clone)]
#[derive(Debug, Serialize, Deserialize, Default, Clone, Eq, PartialEq)]
pub struct TraitStatus {
#[serde(rename = "type")]
pub trait_type: String,
@ -185,8 +223,24 @@ pub struct TraitStatus {
pub info: StatusInfo,
}
/// The current status of a scaler
#[derive(Debug, Serialize, Deserialize, Default, Clone, Eq, PartialEq)]
pub struct ScalerStatus {
/// The id of the scaler
#[serde(default)]
pub id: String,
/// The kind of scaler
#[serde(default)]
pub kind: String,
/// The human-readable name of the scaler
#[serde(default)]
pub name: String,
#[serde(rename = "status")]
pub info: StatusInfo,
}
/// Common high-level status information
#[derive(Debug, Serialize, Deserialize, Default, Clone)]
#[derive(Debug, Serialize, Deserialize, Default, Clone, Eq, PartialEq)]
pub struct StatusInfo {
#[serde(rename = "type")]
pub status_type: StatusType,
@ -194,15 +248,63 @@ pub struct StatusInfo {
pub message: String,
}
impl StatusInfo {
pub fn undeployed(message: &str) -> Self {
StatusInfo {
status_type: StatusType::Undeployed,
message: message.to_owned(),
}
}
pub fn deployed(message: &str) -> Self {
StatusInfo {
status_type: StatusType::Deployed,
message: message.to_owned(),
}
}
pub fn failed(message: &str) -> Self {
StatusInfo {
status_type: StatusType::Failed,
message: message.to_owned(),
}
}
pub fn reconciling(message: &str) -> Self {
StatusInfo {
status_type: StatusType::Reconciling,
message: message.to_owned(),
}
}
pub fn waiting(message: &str) -> Self {
StatusInfo {
status_type: StatusType::Waiting,
message: message.to_owned(),
}
}
pub fn unhealthy(message: &str) -> Self {
StatusInfo {
status_type: StatusType::Unhealthy,
message: message.to_owned(),
}
}
}
/// All possible status types
#[derive(Debug, Serialize, Deserialize, Eq, PartialEq, Clone, Copy, Default)]
#[serde(rename_all = "lowercase")]
pub enum StatusType {
Waiting,
#[default]
Undeployed,
Compensating,
Ready,
#[serde(alias = "compensating")]
Reconciling,
#[serde(alias = "ready")]
Deployed,
Failed,
Unhealthy,
}
// Implementing add makes it easy for us to get an aggregate status by summing all of them together
@ -225,9 +327,15 @@ impl std::ops::Add for StatusType {
// If anything is undeployed, the whole thing is
(Self::Undeployed, _) => Self::Undeployed,
(_, Self::Undeployed) => Self::Undeployed,
(Self::Compensating, _) => Self::Compensating,
(_, Self::Compensating) => Self::Compensating,
_ => unreachable!("aggregating StatusType failure. This is programmer error"),
// If anything is waiting, the whole thing is
(Self::Waiting, _) => Self::Waiting,
(_, Self::Waiting) => Self::Waiting,
(Self::Reconciling, _) => Self::Reconciling,
(_, Self::Reconciling) => Self::Reconciling,
(Self::Unhealthy, _) => Self::Unhealthy,
(_, Self::Unhealthy) => Self::Unhealthy,
// This is technically covered in the first comparison, but we'll be explicit
(Self::Deployed, Self::Deployed) => Self::Deployed,
}
}
}
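The precedence encoded above (Failed > Undeployed > Waiting > Reconciling > Unhealthy > Deployed) can be sketched with only the standard library. The `Sum` impl is not shown in this hunk, so the one below is an assumption that matches the tests (an empty input aggregates to the default, `Undeployed`):

```rust
// Trimmed-down sketch of the StatusType aggregation rule; the real enum also
// carries serde derives and the `compensating`/`ready` aliases.
#[derive(Debug, Clone, Copy, PartialEq, Eq, Default)]
enum StatusType {
    Waiting,
    #[default]
    Undeployed,
    Reconciling,
    Deployed,
    Failed,
    Unhealthy,
}

impl std::ops::Add for StatusType {
    type Output = Self;
    fn add(self, rhs: Self) -> Self {
        use StatusType::*;
        match (self, rhs) {
            // If anything failed, the aggregate is failed
            (Failed, _) | (_, Failed) => Failed,
            // If anything is undeployed, the whole thing is
            (Undeployed, _) | (_, Undeployed) => Undeployed,
            // If anything is waiting, the whole thing is
            (Waiting, _) | (_, Waiting) => Waiting,
            (Reconciling, _) | (_, Reconciling) => Reconciling,
            (Unhealthy, _) | (_, Unhealthy) => Unhealthy,
            // Only when everything is deployed is the aggregate deployed
            (Deployed, Deployed) => Deployed,
        }
    }
}

impl std::iter::Sum for StatusType {
    fn sum<I: Iterator<Item = Self>>(mut iter: I) -> Self {
        // Fold from the first element so a single Deployed stays Deployed;
        // an empty iterator falls back to the default (Undeployed)
        iter.next()
            .map(|first| iter.fold(first, |acc, s| acc + s))
            .unwrap_or_default()
    }
}
```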
@ -247,8 +355,10 @@ mod test {
#[test]
fn test_status_aggregate() {
assert!(matches!(
[StatusType::Ready, StatusType::Ready].into_iter().sum(),
StatusType::Ready
[StatusType::Deployed, StatusType::Deployed]
.into_iter()
.sum(),
StatusType::Deployed
));
assert!(matches!(
@ -266,14 +376,14 @@ mod test {
));
assert!(matches!(
[StatusType::Compensating, StatusType::Undeployed]
[StatusType::Reconciling, StatusType::Undeployed]
.into_iter()
.sum(),
StatusType::Undeployed
));
assert!(matches!(
[StatusType::Ready, StatusType::Undeployed]
[StatusType::Deployed, StatusType::Undeployed]
.into_iter()
.sum(),
StatusType::Undeployed
@ -281,8 +391,8 @@ mod test {
assert!(matches!(
[
StatusType::Ready,
StatusType::Compensating,
StatusType::Deployed,
StatusType::Reconciling,
StatusType::Undeployed,
StatusType::Failed
]
@ -291,6 +401,20 @@ mod test {
StatusType::Failed
));
assert!(matches!(
[StatusType::Deployed, StatusType::Unhealthy]
.into_iter()
.sum(),
StatusType::Unhealthy
));
assert!(matches!(
[StatusType::Reconciling, StatusType::Unhealthy]
.into_iter()
.sum(),
StatusType::Reconciling
));
let empty: Vec<StatusType> = Vec::new();
assert!(matches!(empty.into_iter().sum(), StatusType::Undeployed));
}


@ -0,0 +1,621 @@
use crate::{
api::{
ComponentStatus, DeleteResult, GetResult, ModelSummary, PutResult, Status, StatusInfo,
StatusResult, StatusType, TraitStatus, VersionInfo,
},
CapabilityProperties, Component, ComponentProperties, ConfigDefinition, ConfigProperty,
LinkProperty, Manifest, Metadata, Policy, Properties, SecretProperty, SecretSourceProperty,
SharedApplicationComponentProperties, Specification, Spread, SpreadScalerProperty,
TargetConfig, Trait, TraitProperty,
};
use wasmcloud::wadm;
#[cfg(all(feature = "wit", target_family = "wasm"))]
wit_bindgen::generate!({
path: "wit",
additional_derives: [
serde::Serialize,
serde::Deserialize,
],
with: {
"wasmcloud:wadm/types@0.2.0": generate,
"wasmcloud:wadm/client@0.2.0": generate,
"wasmcloud:wadm/handler@0.2.0": generate
}
});
#[cfg(all(feature = "wit", not(target_family = "wasm")))]
wit_bindgen_wrpc::generate!({
generate_unused_types: true,
additional_derives: [
serde::Serialize,
serde::Deserialize,
],
with: {
"wasmcloud:wadm/types@0.2.0": generate,
"wasmcloud:wadm/client@0.2.0": generate,
"wasmcloud:wadm/handler@0.2.0": generate
}
});
// Trait implementations for converting types in the API module to the generated types
impl From<Manifest> for wadm::types::OamManifest {
fn from(manifest: Manifest) -> Self {
wadm::types::OamManifest {
api_version: manifest.api_version.to_string(),
kind: manifest.kind.to_string(),
metadata: manifest.metadata.into(),
spec: manifest.spec.into(),
}
}
}
impl From<Metadata> for wadm::types::Metadata {
fn from(metadata: Metadata) -> Self {
wadm::types::Metadata {
name: metadata.name,
annotations: metadata.annotations.into_iter().collect(),
labels: metadata.labels.into_iter().collect(),
}
}
}
impl From<Specification> for wadm::types::Specification {
fn from(spec: Specification) -> Self {
wadm::types::Specification {
components: spec.components.into_iter().map(|c| c.into()).collect(),
policies: spec.policies.into_iter().map(|c| c.into()).collect(),
}
}
}
impl From<Component> for wadm::types::Component {
fn from(component: Component) -> Self {
wadm::types::Component {
name: component.name,
properties: component.properties.into(),
traits: component
.traits
.map(|traits| traits.into_iter().map(|t| t.into()).collect()),
}
}
}
impl From<Policy> for wadm::types::Policy {
fn from(policy: Policy) -> Self {
wadm::types::Policy {
name: policy.name,
properties: policy.properties.into_iter().collect(),
type_: policy.policy_type,
}
}
}
impl From<Properties> for wadm::types::Properties {
fn from(properties: Properties) -> Self {
match properties {
Properties::Component { properties } => {
wadm::types::Properties::Component(properties.into())
}
Properties::Capability { properties } => {
wadm::types::Properties::Capability(properties.into())
}
}
}
}
impl From<ComponentProperties> for wadm::types::ComponentProperties {
fn from(properties: ComponentProperties) -> Self {
wadm::types::ComponentProperties {
application: properties.application.map(Into::into),
image: properties.image,
id: properties.id,
config: properties.config.into_iter().map(|c| c.into()).collect(),
secrets: properties.secrets.into_iter().map(|c| c.into()).collect(),
}
}
}
impl From<CapabilityProperties> for wadm::types::CapabilityProperties {
fn from(properties: CapabilityProperties) -> Self {
wadm::types::CapabilityProperties {
application: properties.application.map(Into::into),
image: properties.image,
id: properties.id,
config: properties.config.into_iter().map(|c| c.into()).collect(),
secrets: properties.secrets.into_iter().map(|c| c.into()).collect(),
}
}
}
impl From<ConfigProperty> for wadm::types::ConfigProperty {
fn from(property: ConfigProperty) -> Self {
wadm::types::ConfigProperty {
name: property.name,
properties: property.properties.map(|props| props.into_iter().collect()),
}
}
}
impl From<SecretProperty> for wadm::types::SecretProperty {
fn from(property: SecretProperty) -> Self {
wadm::types::SecretProperty {
name: property.name,
properties: property.properties.into(),
}
}
}
impl From<SecretSourceProperty> for wadm::types::SecretSourceProperty {
fn from(property: SecretSourceProperty) -> Self {
wadm::types::SecretSourceProperty {
policy: property.policy,
key: property.key,
field: property.field,
version: property.version,
}
}
}
impl From<SharedApplicationComponentProperties>
for wadm::types::SharedApplicationComponentProperties
{
fn from(properties: SharedApplicationComponentProperties) -> Self {
wadm::types::SharedApplicationComponentProperties {
name: properties.name,
component: properties.component,
}
}
}
impl From<Trait> for wadm::types::Trait {
fn from(trait_: Trait) -> Self {
wadm::types::Trait {
trait_type: trait_.trait_type,
properties: trait_.properties.into(),
}
}
}
impl From<TraitProperty> for wadm::types::TraitProperty {
fn from(property: TraitProperty) -> Self {
match property {
TraitProperty::Link(link) => wadm::types::TraitProperty::Link(link.into()),
TraitProperty::SpreadScaler(spread) => {
wadm::types::TraitProperty::Spreadscaler(spread.into())
}
TraitProperty::Custom(custom) => wadm::types::TraitProperty::Custom(custom.to_string()),
}
}
}
impl From<LinkProperty> for wadm::types::LinkProperty {
fn from(property: LinkProperty) -> Self {
wadm::types::LinkProperty {
source: property.source.map(|c| c.into()),
target: property.target.into(),
namespace: property.namespace,
package: property.package,
interfaces: property.interfaces,
name: property.name,
}
}
}
impl From<ConfigDefinition> for wadm::types::ConfigDefinition {
fn from(definition: ConfigDefinition) -> Self {
wadm::types::ConfigDefinition {
config: definition.config.into_iter().map(|c| c.into()).collect(),
secrets: definition.secrets.into_iter().map(|s| s.into()).collect(),
}
}
}
impl From<TargetConfig> for wadm::types::TargetConfig {
fn from(config: TargetConfig) -> Self {
wadm::types::TargetConfig {
name: config.name,
config: config.config.into_iter().map(|c| c.into()).collect(),
secrets: config.secrets.into_iter().map(|s| s.into()).collect(),
}
}
}
impl From<SpreadScalerProperty> for wadm::types::SpreadscalerProperty {
fn from(property: SpreadScalerProperty) -> Self {
wadm::types::SpreadscalerProperty {
instances: property.instances as u32,
spread: property.spread.into_iter().map(|s| s.into()).collect(),
}
}
}
impl From<Spread> for wadm::types::Spread {
fn from(spread: Spread) -> Self {
wadm::types::Spread {
name: spread.name,
requirements: spread.requirements.into_iter().collect(),
weight: spread.weight.map(|w| w as u32),
}
}
}
impl From<ModelSummary> for wadm::types::ModelSummary {
fn from(summary: ModelSummary) -> Self {
wadm::types::ModelSummary {
name: summary.name,
version: summary.version,
description: summary.description,
deployed_version: summary.deployed_version,
status: summary.status.into(),
status_message: summary.status_message,
}
}
}
impl From<DeleteResult> for wadm::types::DeleteResult {
fn from(result: DeleteResult) -> Self {
match result {
DeleteResult::Deleted => wadm::types::DeleteResult::Deleted,
DeleteResult::Error => wadm::types::DeleteResult::Error,
DeleteResult::Noop => wadm::types::DeleteResult::Noop,
}
}
}
impl From<GetResult> for wadm::types::GetResult {
fn from(result: GetResult) -> Self {
match result {
GetResult::Error => wadm::types::GetResult::Error,
GetResult::Success => wadm::types::GetResult::Success,
GetResult::NotFound => wadm::types::GetResult::NotFound,
}
}
}
impl From<PutResult> for wadm::types::PutResult {
fn from(result: PutResult) -> Self {
match result {
PutResult::Error => wadm::types::PutResult::Error,
PutResult::Created => wadm::types::PutResult::Created,
PutResult::NewVersion => wadm::types::PutResult::NewVersion,
}
}
}
impl From<StatusType> for wadm::types::StatusType {
fn from(status: StatusType) -> Self {
match status {
StatusType::Undeployed => wadm::types::StatusType::Undeployed,
StatusType::Reconciling => wadm::types::StatusType::Reconciling,
StatusType::Deployed => wadm::types::StatusType::Deployed,
StatusType::Failed => wadm::types::StatusType::Failed,
StatusType::Waiting => wadm::types::StatusType::Waiting,
StatusType::Unhealthy => wadm::types::StatusType::Unhealthy,
}
}
}
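The conversion impls in this file all follow the same mirrored-`From` shape: scalar fields move across directly, `Vec` fields are converted element-wise and re-collected, and `Option` fields are mapped. A minimal std-only sketch, with hypothetical `LocalSpec`/`WireSpec` types standing in for the API and wit-generated types:

```rust
// Hypothetical local (API-side) and wire (generated) types
#[derive(Debug, PartialEq)]
struct LocalItem { name: String }
#[derive(Debug, PartialEq)]
struct WireItem { name: String }
#[derive(Debug, PartialEq)]
struct LocalSpec { items: Vec<LocalItem>, weight: Option<usize> }
#[derive(Debug, PartialEq)]
struct WireSpec { items: Vec<WireItem>, weight: Option<u32> }

impl From<LocalItem> for WireItem {
    fn from(i: LocalItem) -> Self {
        WireItem { name: i.name }
    }
}

impl From<LocalSpec> for WireSpec {
    fn from(s: LocalSpec) -> Self {
        WireSpec {
            // Vec fields: convert element-wise and re-collect
            items: s.items.into_iter().map(Into::into).collect(),
            // Option fields: map the inner value through a width conversion
            weight: s.weight.map(|w| w as u32),
        }
    }
}
```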
// Trait implementations for converting generated types to the types in the API module
impl From<wadm::types::StatusType> for StatusType {
fn from(status: wadm::types::StatusType) -> Self {
match status {
wadm::types::StatusType::Undeployed => StatusType::Undeployed,
wadm::types::StatusType::Reconciling => StatusType::Reconciling,
wadm::types::StatusType::Deployed => StatusType::Deployed,
wadm::types::StatusType::Failed => StatusType::Failed,
wadm::types::StatusType::Waiting => StatusType::Waiting,
wadm::types::StatusType::Unhealthy => StatusType::Unhealthy,
}
}
}
impl From<wadm::types::StatusInfo> for StatusInfo {
fn from(info: wadm::types::StatusInfo) -> Self {
StatusInfo {
status_type: info.status_type.into(),
message: info.message,
}
}
}
impl From<wadm::types::ComponentStatus> for ComponentStatus {
fn from(status: wadm::types::ComponentStatus) -> Self {
ComponentStatus {
name: status.name,
component_type: status.component_type,
info: status.info.into(),
traits: status
.traits
.into_iter()
.map(|t| TraitStatus {
trait_type: t.trait_type,
info: t.info.into(),
})
.collect(),
}
}
}
impl From<wadm::types::TraitStatus> for TraitStatus {
fn from(status: wadm::types::TraitStatus) -> Self {
TraitStatus {
trait_type: status.trait_type,
info: status.info.into(),
}
}
}
impl From<wadm::types::StatusResult> for StatusResult {
fn from(result: wadm::types::StatusResult) -> Self {
match result {
wadm::types::StatusResult::Error => StatusResult::Error,
wadm::types::StatusResult::Ok => StatusResult::Ok,
wadm::types::StatusResult::NotFound => StatusResult::NotFound,
}
}
}
impl From<wadm::types::OamManifest> for Manifest {
fn from(manifest: wadm::types::OamManifest) -> Self {
Manifest {
api_version: manifest.api_version,
kind: manifest.kind,
metadata: manifest.metadata.into(),
spec: manifest.spec.into(),
}
}
}
impl From<wadm::types::Metadata> for Metadata {
fn from(metadata: wadm::types::Metadata) -> Self {
Metadata {
name: metadata.name,
annotations: metadata.annotations.into_iter().collect(),
labels: metadata.labels.into_iter().collect(),
}
}
}
impl From<wadm::types::Specification> for Specification {
fn from(spec: wadm::types::Specification) -> Self {
Specification {
components: spec.components.into_iter().map(|c| c.into()).collect(),
policies: spec.policies.into_iter().map(|c| c.into()).collect(),
}
}
}
impl From<wadm::types::Component> for Component {
fn from(component: wadm::types::Component) -> Self {
Component {
name: component.name,
properties: component.properties.into(),
traits: component
.traits
.map(|traits| traits.into_iter().map(|t| t.into()).collect()),
}
}
}
impl From<wadm::types::Policy> for Policy {
fn from(policy: wadm::types::Policy) -> Self {
Policy {
name: policy.name,
properties: policy.properties.into_iter().collect(),
policy_type: policy.type_,
}
}
}
impl From<wadm::types::Properties> for Properties {
fn from(properties: wadm::types::Properties) -> Self {
match properties {
wadm::types::Properties::Component(properties) => Properties::Component {
properties: properties.into(),
},
wadm::types::Properties::Capability(properties) => Properties::Capability {
properties: properties.into(),
},
}
}
}
impl From<wadm::types::ComponentProperties> for ComponentProperties {
fn from(properties: wadm::types::ComponentProperties) -> Self {
ComponentProperties {
image: properties.image,
application: properties.application.map(Into::into),
id: properties.id,
config: properties.config.into_iter().map(|c| c.into()).collect(),
secrets: properties.secrets.into_iter().map(|c| c.into()).collect(),
}
}
}
impl From<wadm::types::CapabilityProperties> for CapabilityProperties {
fn from(properties: wadm::types::CapabilityProperties) -> Self {
CapabilityProperties {
image: properties.image,
application: properties.application.map(Into::into),
id: properties.id,
config: properties.config.into_iter().map(|c| c.into()).collect(),
secrets: properties.secrets.into_iter().map(|c| c.into()).collect(),
}
}
}
impl From<wadm::types::ConfigProperty> for ConfigProperty {
fn from(property: wadm::types::ConfigProperty) -> Self {
ConfigProperty {
name: property.name,
properties: property.properties.map(|props| props.into_iter().collect()),
}
}
}
impl From<wadm::types::SecretProperty> for SecretProperty {
fn from(property: wadm::types::SecretProperty) -> Self {
SecretProperty {
name: property.name,
properties: property.properties.into(),
}
}
}
impl From<wadm::types::SecretSourceProperty> for SecretSourceProperty {
fn from(property: wadm::types::SecretSourceProperty) -> Self {
SecretSourceProperty {
policy: property.policy,
key: property.key,
field: property.field,
version: property.version,
}
}
}
impl From<wadm::types::SharedApplicationComponentProperties>
for SharedApplicationComponentProperties
{
fn from(properties: wadm::types::SharedApplicationComponentProperties) -> Self {
SharedApplicationComponentProperties {
name: properties.name,
component: properties.component,
}
}
}
impl From<wadm::types::Trait> for Trait {
fn from(trait_: wadm::types::Trait) -> Self {
Trait {
trait_type: trait_.trait_type,
properties: trait_.properties.into(),
}
}
}
impl From<wadm::types::TraitProperty> for TraitProperty {
fn from(property: wadm::types::TraitProperty) -> Self {
match property {
wadm::types::TraitProperty::Link(link) => TraitProperty::Link(link.into()),
wadm::types::TraitProperty::Spreadscaler(spread) => {
TraitProperty::SpreadScaler(spread.into())
}
wadm::types::TraitProperty::Custom(custom) => {
TraitProperty::Custom(serde_json::value::Value::String(custom))
}
}
}
}
impl From<wadm::types::LinkProperty> for LinkProperty {
fn from(property: wadm::types::LinkProperty) -> Self {
#[allow(deprecated)]
LinkProperty {
source: property.source.map(|c| c.into()),
target: property.target.into(),
namespace: property.namespace,
package: property.package,
interfaces: property.interfaces,
name: property.name,
source_config: None,
target_config: None,
}
}
}
impl From<wadm::types::ConfigDefinition> for ConfigDefinition {
fn from(definition: wadm::types::ConfigDefinition) -> Self {
ConfigDefinition {
config: definition.config.into_iter().map(|c| c.into()).collect(),
secrets: definition.secrets.into_iter().map(|s| s.into()).collect(),
}
}
}
impl From<wadm::types::TargetConfig> for TargetConfig {
fn from(config: wadm::types::TargetConfig) -> Self {
TargetConfig {
name: config.name,
config: config.config.into_iter().map(|c| c.into()).collect(),
secrets: config.secrets.into_iter().map(|s| s.into()).collect(),
}
}
}
impl From<wadm::types::SpreadscalerProperty> for SpreadScalerProperty {
fn from(property: wadm::types::SpreadscalerProperty) -> Self {
SpreadScalerProperty {
instances: property.instances as usize,
spread: property.spread.into_iter().map(|s| s.into()).collect(),
}
}
}
impl From<wadm::types::Spread> for Spread {
fn from(spread: wadm::types::Spread) -> Self {
Spread {
name: spread.name,
requirements: spread.requirements.into_iter().collect(),
weight: spread.weight.map(|w| w as usize),
}
}
}
impl From<VersionInfo> for wadm::types::VersionInfo {
fn from(info: VersionInfo) -> Self {
wadm::types::VersionInfo {
version: info.version,
deployed: info.deployed,
}
}
}
// Implement the From trait for StatusInfo
impl From<StatusInfo> for wadm::types::StatusInfo {
fn from(info: StatusInfo) -> Self {
wadm::types::StatusInfo {
status_type: info.status_type.into(),
message: info.message,
}
}
}
// Implement the From trait for Status
impl From<Status> for wadm::types::Status {
fn from(status: Status) -> Self {
wadm::types::Status {
version: status.version,
info: status.info.into(),
components: status.components.into_iter().map(|c| c.into()).collect(),
}
}
}
// Implement the From trait for ComponentStatus
impl From<ComponentStatus> for wadm::types::ComponentStatus {
fn from(component_status: ComponentStatus) -> Self {
wadm::types::ComponentStatus {
name: component_status.name,
component_type: component_status.component_type,
info: component_status.info.into(),
traits: component_status
.traits
.into_iter()
.map(|t| t.into())
.collect(),
}
}
}
// Implement the From trait for TraitStatus
impl From<TraitStatus> for wadm::types::TraitStatus {
fn from(trait_status: TraitStatus) -> Self {
wadm::types::TraitStatus {
trait_type: trait_status.trait_type,
info: trait_status.info.into(),
}
}
}


@ -0,0 +1,987 @@
use std::collections::{BTreeMap, HashMap};
use schemars::JsonSchema;
use serde::{de, Deserialize, Serialize};
use utoipa::ToSchema;
pub mod api;
#[cfg(feature = "wit")]
pub mod bindings;
#[cfg(feature = "wit")]
pub use bindings::*;
pub mod validation;
/// The default weight for a spread
pub const DEFAULT_SPREAD_WEIGHT: usize = 100;
/// The expected OAM api version
pub const OAM_VERSION: &str = "core.oam.dev/v1beta1";
/// The currently supported kind for OAM manifests.
// NOTE(thomastaylor312): If we ever end up supporting more than one kind, we should use an enum for
// this
pub const APPLICATION_KIND: &str = "Application";
/// The version key, as predefined by the [OAM
/// spec](https://github.com/oam-dev/spec/blob/master/metadata.md#annotations-format)
pub const VERSION_ANNOTATION_KEY: &str = "version";
/// The description key, as predefined by the [OAM
/// spec](https://github.com/oam-dev/spec/blob/master/metadata.md#annotations-format)
pub const DESCRIPTION_ANNOTATION_KEY: &str = "description";
/// The annotation key for shared applications
pub const SHARED_ANNOTATION_KEY: &str = "experimental.wasmcloud.dev/shared";
/// The identifier for the builtin spreadscaler trait type
pub const SPREADSCALER_TRAIT: &str = "spreadscaler";
/// The identifier for the builtin daemonscaler trait type
pub const DAEMONSCALER_TRAIT: &str = "daemonscaler";
/// The identifier for the builtin linkdef trait type
pub const LINK_TRAIT: &str = "link";
/// The string used to indicate the latest version. It is explicitly forbidden to use this as a
/// version for a manifest
pub const LATEST_VERSION: &str = "latest";
/// The default link name
pub const DEFAULT_LINK_NAME: &str = "default";
/// Manifest file based on the Open Application Model (OAM) specification for declaratively managing wasmCloud applications
#[derive(Debug, Serialize, Deserialize, Clone, PartialEq, Eq, ToSchema, JsonSchema)]
#[serde(deny_unknown_fields)]
pub struct Manifest {
/// The OAM version of the manifest
#[serde(rename = "apiVersion")]
pub api_version: String,
/// The kind or type of manifest described by the spec
pub kind: String,
/// Metadata describing the manifest
pub metadata: Metadata,
/// The specification for this manifest
pub spec: Specification,
}
impl Manifest {
/// Returns a reference to the current version
pub fn version(&self) -> &str {
self.metadata
.annotations
.get(VERSION_ANNOTATION_KEY)
.map(|v| v.as_str())
.unwrap_or_default()
}
/// Returns a reference to the current description if it exists
pub fn description(&self) -> Option<&str> {
self.metadata
.annotations
.get(DESCRIPTION_ANNOTATION_KEY)
.map(|v| v.as_str())
}
/// Indicates if the manifest is shared, meaning it can be used by multiple applications
pub fn shared(&self) -> bool {
self.metadata
.annotations
.get(SHARED_ANNOTATION_KEY)
.is_some_and(|v| v.parse::<bool>().unwrap_or(false))
}
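The annotation helpers above all reduce to a lookup in the metadata `BTreeMap`. A std-only sketch of the shared check (the free function `shared` is hypothetical; the key constant mirrors the one defined in this crate):

```rust
use std::collections::BTreeMap;

const SHARED_ANNOTATION_KEY: &str = "experimental.wasmcloud.dev/shared";

// A manifest is considered shared only when the annotation is present and
// parses as the bool `true`
fn shared(annotations: &BTreeMap<String, String>) -> bool {
    annotations
        .get(SHARED_ANNOTATION_KEY)
        // Anything that doesn't parse as a bool (e.g. "yes") is treated as false
        .is_some_and(|v| v.parse::<bool>().unwrap_or(false))
}
```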
/// Returns the components in the manifest
pub fn components(&self) -> impl Iterator<Item = &Component> {
self.spec.components.iter()
}
/// Helper function to find shared components that are missing from the given list of
/// deployed applications
pub fn missing_shared_components(&self, deployed_apps: &[&Manifest]) -> Vec<&Component> {
self.spec
.components
.iter()
.filter(|shared_component| {
match &shared_component.properties {
Properties::Capability {
properties:
CapabilityProperties {
image: None,
application: Some(shared_app),
..
},
}
| Properties::Component {
properties:
ComponentProperties {
image: None,
application: Some(shared_app),
..
},
} => !deployed_apps.iter().filter(|a| a.shared()).any(|m| {
m.metadata.name == shared_app.name
&& m.components().any(|c| {
c.name == shared_app.component
// This compares just the enum variant, not the actual properties
// For example, if we reference a shared component that's a capability,
// we want to make sure the deployed component is a capability.
&& std::mem::discriminant(&c.properties)
== std::mem::discriminant(&shared_component.properties)
})
}),
_ => false,
}
})
.collect()
}
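The variant check in `missing_shared_components` relies on `std::mem::discriminant`, which compares only which variant an enum value is, ignoring its payload. A std-only sketch (the `same_kind` helper and the trimmed-down `Properties` enum are illustrative):

```rust
// A stripped-down stand-in for the crate's Properties enum
#[derive(Debug)]
enum Properties {
    Component { image: Option<String> },
    Capability { image: Option<String> },
}

// True when both values are the same variant, regardless of field contents
fn same_kind(a: &Properties, b: &Properties) -> bool {
    std::mem::discriminant(a) == std::mem::discriminant(b)
}
```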
/// Returns only the WebAssembly components in the manifest
pub fn wasm_components(&self) -> impl Iterator<Item = &Component> {
self.components()
.filter(|c| matches!(c.properties, Properties::Component { .. }))
}
/// Returns only the provider components in the manifest
pub fn capability_providers(&self) -> impl Iterator<Item = &Component> {
self.components()
.filter(|c| matches!(c.properties, Properties::Capability { .. }))
}
/// Returns a map of component names to components in the manifest
pub fn component_lookup(&self) -> HashMap<&String, &Component> {
self.components()
.map(|c| (&c.name, c))
.collect::<HashMap<&String, &Component>>()
}
/// Returns only links in the manifest
pub fn links(&self) -> impl Iterator<Item = &Trait> {
self.components()
.flat_map(|c| c.traits.as_ref())
.flatten()
.filter(|t| t.is_link())
}
/// Returns only policies in the manifest
pub fn policies(&self) -> impl Iterator<Item = &Policy> {
self.spec.policies.iter()
}
/// Returns a map of policy names to policies in the manifest
pub fn policy_lookup(&self) -> HashMap<&String, &Policy> {
self.spec
.policies
.iter()
.map(|p| (&p.name, p))
.collect::<HashMap<&String, &Policy>>()
}
}
/// The metadata describing the manifest
#[derive(Debug, Serialize, Deserialize, Clone, PartialEq, Eq, ToSchema, JsonSchema)]
pub struct Metadata {
/// The name of the manifest. This must be unique per lattice
pub name: String,
/// Optional data for annotating this manifest, see <https://github.com/oam-dev/spec/blob/master/metadata.md#annotations-format>
#[serde(skip_serializing_if = "BTreeMap::is_empty")]
pub annotations: BTreeMap<String, String>,
/// Optional data for labeling this manifest, see <https://github.com/oam-dev/spec/blob/master/metadata.md#label-format>
#[serde(default, skip_serializing_if = "BTreeMap::is_empty")]
pub labels: BTreeMap<String, String>,
}
/// A representation of an OAM specification
#[derive(Debug, Serialize, Deserialize, Clone, PartialEq, Eq, ToSchema, JsonSchema)]
pub struct Specification {
/// The list of components for describing an application
pub components: Vec<Component>,
/// The list of policies describing an application. This is for providing application-wide
/// settings such as configuration for a secrets backend, how to render Kubernetes services,
/// etc. It can be omitted if no policies are needed for an application.
#[serde(default, skip_serializing_if = "Vec::is_empty")]
pub policies: Vec<Policy>,
}
/// A policy definition
#[derive(Debug, Serialize, Deserialize, Clone, PartialEq, Eq, ToSchema, JsonSchema)]
pub struct Policy {
/// The name of this policy
pub name: String,
/// The properties for this policy
pub properties: BTreeMap<String, String>,
/// The type of the policy
#[serde(rename = "type")]
pub policy_type: String,
}
/// A component definition
#[derive(Debug, Serialize, Deserialize, Clone, PartialEq, Eq, ToSchema, JsonSchema)]
// TODO: figure out why this can't be uncommented
// #[serde(deny_unknown_fields)]
pub struct Component {
/// The name of this component
pub name: String,
/// The properties for this component
// NOTE(thomastaylor312): It would probably be better for us to implement a custom deserialize
// and serialize that combines this and the component type. This is good enough for a first draft
#[serde(flatten)]
pub properties: Properties,
/// A list of various traits assigned to this component
#[serde(skip_serializing_if = "Option::is_none")]
pub traits: Option<Vec<Trait>>,
}
impl Component {
fn secrets(&self) -> Vec<SecretProperty> {
let mut secrets = Vec::new();
if let Some(traits) = self.traits.as_ref() {
let l: Vec<SecretProperty> = traits
.iter()
.filter_map(|t| {
if let TraitProperty::Link(link) = &t.properties {
let mut tgt_iter = link.target.secrets.clone();
if let Some(src) = &link.source {
tgt_iter.extend(src.secrets.clone());
}
Some(tgt_iter)
} else {
None
}
})
.flatten()
.collect();
secrets.extend(l);
};
match &self.properties {
Properties::Component { properties } => {
secrets.extend(properties.secrets.clone());
}
Properties::Capability { properties } => secrets.extend(properties.secrets.clone()),
};
secrets
}
/// Returns only links in the component
fn links(&self) -> impl Iterator<Item = &Trait> {
self.traits.iter().flatten().filter(|t| t.is_link())
}
}
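`links` above iterates an `Option<Vec<Trait>>` without unwrapping it: `Option::iter` yields the contained `Vec` zero or one times, and `flatten` then yields its elements (or nothing for `None`). A std-only sketch with a hypothetical `count_links` over plain strings:

```rust
// Count the "link" entries in an optional trait list; None simply yields
// an empty iterator, so no explicit match or unwrap is needed
fn count_links(traits: &Option<Vec<String>>) -> usize {
    traits
        .iter()
        .flatten()
        .filter(|t| t.as_str() == "link")
        .count()
}
```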
/// Properties that can be defined for a component
#[derive(Debug, Serialize, Deserialize, Clone, PartialEq, Eq, ToSchema, JsonSchema)]
#[serde(tag = "type")]
pub enum Properties {
#[serde(rename = "component", alias = "actor")]
Component { properties: ComponentProperties },
#[serde(rename = "capability")]
Capability { properties: CapabilityProperties },
}
#[derive(Debug, Serialize, Deserialize, Clone, PartialEq, Eq, ToSchema, JsonSchema)]
#[serde(deny_unknown_fields)]
pub struct ComponentProperties {
/// The image reference to use. Required unless the component is a shared component
/// that is defined in another shared application.
#[serde(skip_serializing_if = "Option::is_none")]
pub image: Option<String>,
/// Information to locate a component within a shared application. Cannot be specified
/// if the image is specified.
#[serde(skip_serializing_if = "Option::is_none")]
pub application: Option<SharedApplicationComponentProperties>,
/// The component ID to use for this component. If not supplied, it will be generated
/// as a combination of the [Metadata::name] and the image reference.
#[serde(skip_serializing_if = "Option::is_none")]
pub id: Option<String>,
/// Named configuration to pass to the component. The component will be able to retrieve
/// these values at runtime using `wasi:runtime/config`.
#[serde(default, skip_serializing_if = "Vec::is_empty")]
pub config: Vec<ConfigProperty>,
/// Named secret references to pass to the component. The component will be able to retrieve
/// these values at runtime using `wasmcloud:secrets/store`.
#[serde(default, skip_serializing_if = "Vec::is_empty")]
pub secrets: Vec<SecretProperty>,
}
#[derive(Debug, Serialize, Deserialize, Clone, PartialEq, Eq, Default, ToSchema, JsonSchema)]
pub struct ConfigDefinition {
#[serde(default, skip_serializing_if = "Vec::is_empty")]
pub config: Vec<ConfigProperty>,
#[serde(default, skip_serializing_if = "Vec::is_empty")]
pub secrets: Vec<SecretProperty>,
}
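/// A named secret reference that can be attached to a component, provider, or link.
///
/// An illustrative manifest snippet (field names follow this struct; the values are hypothetical):
/// ```yaml
/// secrets:
///   - name: "api-key"
///     properties:
///       policy: "nats-kv"
///       key: "api_key"
/// ```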
#[derive(Debug, Serialize, Deserialize, Clone, PartialEq, Eq, Hash, ToSchema, JsonSchema)]
pub struct SecretProperty {
/// The name of the secret. This is used as a reference by the component or capability to
/// get the secret value as a resource.
pub name: String,
/// The properties of the secret that indicate how to retrieve the secret value from a secrets
/// backend and which backend to actually query.
pub properties: SecretSourceProperty,
}
#[derive(Debug, Serialize, Deserialize, Clone, PartialEq, Eq, Hash, ToSchema, JsonSchema)]
pub struct SecretSourceProperty {
/// The policy to use for retrieving the secret.
pub policy: String,
/// The key to use for retrieving the secret from the backend.
pub key: String,
/// The field to use for retrieving the secret from the backend. This is optional and can be
/// used to retrieve a specific field from a secret.
#[serde(default, skip_serializing_if = "Option::is_none")]
pub field: Option<String>,
/// The version of the secret to retrieve. If not supplied, the latest version will be used.
#[serde(default, skip_serializing_if = "Option::is_none")]
pub version: Option<String>,
}
#[derive(Debug, Serialize, Deserialize, Clone, PartialEq, Eq, ToSchema, JsonSchema)]
#[serde(deny_unknown_fields)]
pub struct CapabilityProperties {
/// The image reference to use. Required unless the component is a shared component
/// that is defined in another shared application.
#[serde(skip_serializing_if = "Option::is_none")]
pub image: Option<String>,
/// Information to locate a component within a shared application. Cannot be specified
/// if the image is specified.
#[serde(skip_serializing_if = "Option::is_none")]
pub application: Option<SharedApplicationComponentProperties>,
/// The component ID to use for this provider. If not supplied, it will be generated
/// as a combination of the [Metadata::name] and the image reference.
#[serde(skip_serializing_if = "Option::is_none")]
pub id: Option<String>,
/// Named configuration to pass to the provider. The merged set of configuration will be passed
/// to the provider at runtime using the provider SDK's `init()` function.
#[serde(default, skip_serializing_if = "Vec::is_empty")]
pub config: Vec<ConfigProperty>,
/// Named secret references to pass to the provider. The provider will be able to retrieve
/// these values at runtime using `wasmcloud:secrets/store`.
#[serde(default, skip_serializing_if = "Vec::is_empty")]
pub secrets: Vec<SecretProperty>,
}
#[derive(Debug, Serialize, Deserialize, Clone, PartialEq, Eq, ToSchema, JsonSchema)]
pub struct SharedApplicationComponentProperties {
/// The name of the shared application
pub name: String,
/// The name of the component in the shared application
pub component: String,
}
#[derive(Debug, Serialize, Deserialize, Clone, PartialEq, Eq, ToSchema, JsonSchema)]
#[serde(deny_unknown_fields)]
pub struct Trait {
/// The type of trait specified. This should be a unique string for the type of scaler. As we
/// plan on supporting custom scalers, these traits are not enumerated
#[serde(rename = "type")]
pub trait_type: String,
/// The properties of this trait
pub properties: TraitProperty,
}
impl Trait {
/// Helper that creates a new linkdef type trait with the given properties
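    /// (An illustrative sketch; assumes a `LinkProperty` value built elsewhere.)
    /// ```rust,ignore
    /// let link = Trait::new_link(link_props);
    /// assert!(link.is_link());
    /// assert!(!link.is_scaler());
    /// ```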
pub fn new_link(props: LinkProperty) -> Trait {
Trait {
trait_type: LINK_TRAIT.to_owned(),
properties: TraitProperty::Link(props),
}
}
/// Check if a trait is a link
pub fn is_link(&self) -> bool {
self.trait_type == LINK_TRAIT
}
/// Check if a trait is a scaler
pub fn is_scaler(&self) -> bool {
self.trait_type == SPREADSCALER_TRAIT || self.trait_type == DAEMONSCALER_TRAIT
}
/// Helper that creates a new spreadscaler type trait with the given properties
pub fn new_spreadscaler(props: SpreadScalerProperty) -> Trait {
Trait {
trait_type: SPREADSCALER_TRAIT.to_owned(),
properties: TraitProperty::SpreadScaler(props),
}
}
pub fn new_daemonscaler(props: SpreadScalerProperty) -> Trait {
Trait {
trait_type: DAEMONSCALER_TRAIT.to_owned(),
properties: TraitProperty::SpreadScaler(props),
}
}
}
/// Properties for defining traits
#[derive(Debug, Serialize, Deserialize, Clone, PartialEq, Eq, ToSchema, JsonSchema)]
#[serde(untagged)]
#[allow(clippy::large_enum_variant)]
pub enum TraitProperty {
Link(LinkProperty),
SpreadScaler(SpreadScalerProperty),
// TODO(thomastaylor312): This is still broken right now with deserializing. If the incoming
// type specifies instances, it matches with spreadscaler first. So we need to implement a custom
// parser here
Custom(serde_json::Value),
}
impl From<LinkProperty> for TraitProperty {
fn from(value: LinkProperty) -> Self {
Self::Link(value)
}
}
impl From<SpreadScalerProperty> for TraitProperty {
fn from(value: SpreadScalerProperty) -> Self {
Self::SpreadScaler(value)
}
}
// impl From<serde_json::Value> for TraitProperty {
// fn from(value: serde_json::Value) -> Self {
// Self::Custom(value)
// }
// }
/// Properties for the config list associated with components, providers, and links
///
/// ## Usage
/// Defining a config block, like so:
/// ```yaml
/// source_config:
///   - name: "external-secret-kv"
///   - name: "default-port"
///     properties:
///       port: "8080"
/// ```
///
/// Will result in two config scalers being created, one with the name `external-secret-kv` and one
/// with the name `default-port`. Wadm will not resolve collisions with configuration names between manifests.
#[derive(Debug, Serialize, Deserialize, Clone, PartialEq, Eq, ToSchema, JsonSchema)]
#[serde(deny_unknown_fields)]
pub struct ConfigProperty {
/// Name of the config to ensure exists
pub name: String,
/// Optional properties to put with the configuration. If the properties are
/// omitted in the manifest, wadm will assume that the configuration is externally managed
/// and will not attempt to create it, only reporting the status as failed if not found.
#[serde(skip_serializing_if = "Option::is_none")]
pub properties: Option<HashMap<String, String>>,
}
/// This impl is a helper to compare a `Vec<String>` to a `Vec<ConfigProperty>`
impl PartialEq<ConfigProperty> for String {
fn eq(&self, other: &ConfigProperty) -> bool {
self == &other.name
}
}
/// Properties for links
#[derive(Debug, Serialize, Clone, PartialEq, Eq, ToSchema, JsonSchema, Default)]
#[serde(deny_unknown_fields)]
pub struct LinkProperty {
/// WIT namespace for the link
pub namespace: String,
/// WIT package for the link
pub package: String,
/// WIT interfaces for the link
pub interfaces: Vec<String>,
/// Configuration to apply to the source of the link
#[serde(default, skip_serializing_if = "Option::is_none")]
pub source: Option<ConfigDefinition>,
/// Configuration to apply to the target of the link
pub target: TargetConfig,
/// The name of this link
#[serde(skip_serializing_if = "Option::is_none")]
pub name: Option<String>,
#[serde(default, skip_serializing)]
#[deprecated(since = "0.13.0")]
pub source_config: Option<Vec<ConfigProperty>>,
#[serde(default, skip_serializing)]
#[deprecated(since = "0.13.0")]
pub target_config: Option<Vec<ConfigProperty>>,
}
impl<'de> Deserialize<'de> for LinkProperty {
fn deserialize<D>(d: D) -> Result<Self, D::Error>
where
D: serde::Deserializer<'de>,
{
let json = serde_json::value::Value::deserialize(d)?;
let mut target = TargetConfig::default();
let mut source = None;
// Handling the old configuration -- translate to a TargetConfig
if let Some(t) = json.get("target") {
if t.is_string() {
let name = t.as_str().unwrap();
let mut tgt = vec![];
if let Some(tgt_config) = json.get("target_config") {
tgt = serde_json::from_value(tgt_config.clone()).map_err(de::Error::custom)?;
}
target = TargetConfig {
name: name.to_string(),
config: tgt,
secrets: vec![],
};
} else {
// Otherwise handle normally
target =
serde_json::from_value(json["target"].clone()).map_err(de::Error::custom)?;
}
}
if let Some(s) = json.get("source_config") {
let src: Vec<ConfigProperty> =
serde_json::from_value(s.clone()).map_err(de::Error::custom)?;
source = Some(ConfigDefinition {
config: src,
secrets: vec![],
});
}
// If the source block is present then it takes priority
if let Some(s) = json.get("source") {
source = Some(serde_json::from_value(s.clone()).map_err(de::Error::custom)?);
}
// Validate that the required keys are all present
if json.get("namespace").is_none() {
return Err(de::Error::custom("namespace is required"));
}
if json.get("package").is_none() {
return Err(de::Error::custom("package is required"));
}
if json.get("interfaces").is_none() {
return Err(de::Error::custom("interfaces is required"));
}
Ok(LinkProperty {
namespace: json["namespace"].as_str().unwrap().to_string(),
package: json["package"].as_str().unwrap().to_string(),
interfaces: json["interfaces"]
.as_array()
.unwrap()
.iter()
.map(|v| v.as_str().unwrap().to_string())
.collect(),
source,
target,
name: json.get("name").map(|v| v.as_str().unwrap().to_string()),
..Default::default()
})
}
}
#[derive(Debug, Serialize, Deserialize, Clone, PartialEq, Eq, Default, ToSchema, JsonSchema)]
pub struct TargetConfig {
/// The target this link applies to. This should be the name of a component in the manifest
pub name: String,
#[serde(default, skip_serializing_if = "Vec::is_empty")]
pub config: Vec<ConfigProperty>,
#[serde(default, skip_serializing_if = "Vec::is_empty")]
pub secrets: Vec<SecretProperty>,
}
impl PartialEq<TargetConfig> for String {
fn eq(&self, other: &TargetConfig) -> bool {
self == &other.name
}
}
/// Properties for spread scalers
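///
/// An illustrative manifest snippet (the names and values are hypothetical):
/// ```yaml
/// instances: 4
/// spread:
///   - name: eastcoast
///     requirements:
///       zone: us-east-1
///     weight: 80
/// ```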
#[derive(Debug, Serialize, Deserialize, Clone, PartialEq, Eq, ToSchema, JsonSchema)]
#[serde(deny_unknown_fields)]
pub struct SpreadScalerProperty {
/// Number of instances to spread across matching requirements
#[serde(alias = "replicas")]
pub instances: usize,
/// Requirements for spreading those instances
#[serde(default, skip_serializing_if = "Vec::is_empty")]
pub spread: Vec<Spread>,
}
/// Configuration for various spreading requirements
#[derive(Debug, Serialize, Deserialize, Clone, PartialEq, Eq, ToSchema, JsonSchema)]
#[serde(deny_unknown_fields)]
pub struct Spread {
/// The name of this spread requirement
pub name: String,
/// An arbitrary map of labels to match on for scaling requirements
#[serde(skip_serializing_if = "BTreeMap::is_empty")]
pub requirements: BTreeMap<String, String>,
/// An optional weight for this spread. Higher weights are given more precedence
#[serde(skip_serializing_if = "Option::is_none")]
pub weight: Option<usize>,
}
impl Default for Spread {
fn default() -> Self {
Spread {
name: "default".to_string(),
requirements: BTreeMap::default(),
weight: None,
}
}
}
#[cfg(test)]
mod test {
use std::io::BufReader;
use std::path::Path;
use anyhow::Result;
use super::*;
pub(crate) fn deserialize_yaml(filepath: impl AsRef<Path>) -> Result<Manifest> {
let file = std::fs::File::open(filepath)?;
let reader = BufReader::new(file);
let yaml_string: Manifest = serde_yaml::from_reader(reader)?;
Ok(yaml_string)
}
pub(crate) fn deserialize_json(filepath: impl AsRef<Path>) -> Result<Manifest> {
let file = std::fs::File::open(filepath)?;
let reader = BufReader::new(file);
let json_string: Manifest = serde_json::from_reader(reader)?;
Ok(json_string)
}
#[test]
fn test_oam_deserializer() {
let res = deserialize_json("../../oam/simple1.json");
match res {
Ok(parse_results) => parse_results,
Err(error) => panic!("Error {:?}", error),
};
let res = deserialize_yaml("../../oam/simple1.yaml");
match res {
Ok(parse_results) => parse_results,
Err(error) => panic!("Error {:?}", error),
};
}
#[test]
#[ignore] // see TODO in TraitProperty enum
fn test_custom_traits() {
let manifest = deserialize_yaml("../../oam/custom.yaml").expect("Should be able to parse");
let component = manifest
.spec
.components
.into_iter()
.find(|comp| matches!(comp.properties, Properties::Component { .. }))
.expect("Should be able to find component");
let traits = component.traits.expect("Should have Vec of traits");
assert!(
traits
.iter()
.any(|t| matches!(t.properties, TraitProperty::Custom(_))),
"Should have found custom property trait: {traits:?}"
);
}
#[test]
fn test_config() {
let manifest = deserialize_yaml("../../oam/config.yaml").expect("Should be able to parse");
let props = match &manifest.spec.components[0].properties {
Properties::Component { properties } => properties,
_ => panic!("Should have found component"),
};
assert_eq!(props.config.len(), 1, "Should have found a config property");
let config_property = props.config.first().expect("Should have a config property");
assert!(config_property.name == "component_config");
assert!(config_property
.properties
.as_ref()
.is_some_and(|p| p.get("lang").is_some_and(|v| v == "EN-US")));
let props = match &manifest.spec.components[1].properties {
Properties::Capability { properties } => properties,
_ => panic!("Should have found capability component"),
};
assert_eq!(props.config.len(), 1, "Should have found a config property");
let config_property = props.config.first().expect("Should have a config property");
assert!(config_property.name == "provider_config");
assert!(config_property
.properties
.as_ref()
.is_some_and(|p| p.get("default-port").is_some_and(|v| v == "8080")));
assert!(config_property.properties.as_ref().is_some_and(|p| p
.get("cache_file")
.is_some_and(|v| v == "/tmp/mycache.json")));
}
#[test]
fn test_component_matching() {
let manifest = deserialize_yaml("../../oam/simple2.yaml").expect("Should be able to parse");
assert_eq!(
manifest
.spec
.components
.iter()
.filter(|component| matches!(component.properties, Properties::Component { .. }))
.count(),
1,
"Should have found 1 component property"
);
assert_eq!(
manifest
.spec
.components
.iter()
.filter(|component| matches!(component.properties, Properties::Capability { .. }))
.count(),
2,
"Should have found 2 capability properties"
);
}
#[test]
fn test_trait_matching() {
let manifest = deserialize_yaml("../../oam/simple2.yaml").expect("Should be able to parse");
// Validate component traits
let traits = manifest
.spec
.components
.clone()
.into_iter()
.find(|component| matches!(component.properties, Properties::Component { .. }))
.expect("Should find component component")
.traits
.expect("Should have traits object");
assert_eq!(traits.len(), 1, "Should have 1 trait");
assert!(
matches!(traits[0].properties, TraitProperty::SpreadScaler(_)),
"Should have spreadscaler properties"
);
// Validate capability component traits
let traits = manifest
.spec
.components
.into_iter()
.find(|component| {
matches!(
&component.properties,
Properties::Capability {
properties: CapabilityProperties { image, .. }
} if image.clone().expect("image to be present") == "wasmcloud.azurecr.io/httpserver:0.13.1"
)
})
.expect("Should find capability component")
.traits
.expect("Should have traits object");
assert_eq!(traits.len(), 1, "Should have 1 trait");
assert!(
matches!(traits[0].properties, TraitProperty::Link(_)),
"Should have link property"
);
if let TraitProperty::Link(ld) = &traits[0].properties {
assert_eq!(ld.source.as_ref().unwrap().config, vec![]);
assert_eq!(ld.target.name, "userinfo".to_string());
} else {
panic!("trait property was not a link definition");
}
}
#[test]
fn test_oam_serializer() {
let mut spread_vec: Vec<Spread> = Vec::new();
let spread_item = Spread {
name: "eastcoast".to_string(),
requirements: BTreeMap::from([("zone".to_string(), "us-east-1".to_string())]),
weight: Some(80),
};
spread_vec.push(spread_item);
let spread_item = Spread {
name: "westcoast".to_string(),
requirements: BTreeMap::from([("zone".to_string(), "us-west-1".to_string())]),
weight: Some(20),
};
spread_vec.push(spread_item);
let mut trait_vec: Vec<Trait> = Vec::new();
let spreadscalerprop = SpreadScalerProperty {
instances: 4,
spread: spread_vec,
};
let trait_item = Trait::new_spreadscaler(spreadscalerprop);
trait_vec.push(trait_item);
let linkdefprop = LinkProperty {
target: TargetConfig {
name: "webcap".to_string(),
..Default::default()
},
namespace: "wasi".to_string(),
package: "http".to_string(),
interfaces: vec!["incoming-handler".to_string()],
source: Some(ConfigDefinition {
config: {
vec![ConfigProperty {
name: "http".to_string(),
properties: Some(HashMap::from([("port".to_string(), "8080".to_string())])),
}]
},
..Default::default()
}),
name: Some("default".to_string()),
..Default::default()
};
let trait_item = Trait::new_link(linkdefprop);
trait_vec.push(trait_item);
let mut component_vec: Vec<Component> = Vec::new();
let component_item = Component {
name: "userinfo".to_string(),
properties: Properties::Component {
properties: ComponentProperties {
image: Some("wasmcloud.azurecr.io/fake:1".to_string()),
application: None,
id: None,
config: vec![],
secrets: vec![],
},
},
traits: Some(trait_vec),
};
component_vec.push(component_item);
let component_item = Component {
name: "webcap".to_string(),
properties: Properties::Capability {
properties: CapabilityProperties {
image: Some("wasmcloud.azurecr.io/httpserver:0.13.1".to_string()),
application: None,
id: None,
config: vec![],
secrets: vec![],
},
},
traits: None,
};
component_vec.push(component_item);
let mut spread_vec: Vec<Spread> = Vec::new();
let spread_item = Spread {
name: "haslights".to_string(),
requirements: BTreeMap::from([("zone".to_string(), "enabled".to_string())]),
weight: Some(DEFAULT_SPREAD_WEIGHT),
};
spread_vec.push(spread_item);
let spreadscalerprop = SpreadScalerProperty {
instances: 1,
spread: spread_vec,
};
let mut trait_vec: Vec<Trait> = Vec::new();
let trait_item = Trait::new_spreadscaler(spreadscalerprop);
trait_vec.push(trait_item);
let component_item = Component {
name: "ledblinky".to_string(),
properties: Properties::Capability {
properties: CapabilityProperties {
image: Some("wasmcloud.azurecr.io/ledblinky:0.0.1".to_string()),
application: None,
id: None,
config: vec![],
secrets: vec![],
},
},
traits: Some(trait_vec),
};
component_vec.push(component_item);
let spec = Specification {
components: component_vec,
policies: vec![],
};
let metadata = Metadata {
name: "my-example-app".to_string(),
annotations: BTreeMap::from([
(VERSION_ANNOTATION_KEY.to_string(), "v0.0.1".to_string()),
(
DESCRIPTION_ANNOTATION_KEY.to_string(),
"This is my app".to_string(),
),
]),
labels: BTreeMap::from([(
"prefix.dns.prefix/name-for_a.123".to_string(),
"this is a valid label".to_string(),
)]),
};
let manifest = Manifest {
api_version: OAM_VERSION.to_owned(),
kind: APPLICATION_KIND.to_owned(),
metadata,
spec,
};
let serialized_json =
serde_json::to_vec(&manifest).expect("Should be able to serialize JSON");
let serialized_yaml = serde_yaml::to_string(&manifest)
.expect("Should be able to serialize YAML")
.into_bytes();
// Test the round trip back in
let json_manifest: Manifest = serde_json::from_slice(&serialized_json)
.expect("Should be able to deserialize JSON roundtrip");
let yaml_manifest: Manifest = serde_yaml::from_slice(&serialized_yaml)
.expect("Should be able to deserialize YAML roundtrip");
// Make sure the manifests don't contain any custom traits (to test that we aren't parsing
// the tagged enum poorly)
assert!(
!json_manifest
.spec
.components
.into_iter()
.any(|component| component
.traits
.unwrap_or_default()
.into_iter()
.any(|t| matches!(t.properties, TraitProperty::Custom(_)))),
"Should not have found custom properties"
);
assert!(
!yaml_manifest
.spec
.components
.into_iter()
.any(|component| component
.traits
.unwrap_or_default()
.into_iter()
.any(|t| matches!(t.properties, TraitProperty::Custom(_)))),
"Should not have found custom properties"
);
}
#[test]
fn test_deprecated_fields_not_set() {
let manifest = deserialize_yaml("../../oam/simple2.yaml").expect("Should be able to parse");
// Validate component traits
let traits = manifest
.spec
.components
.clone()
.into_iter()
.filter(|component| matches!(component.name.as_str(), "webcap"))
.find(|component| matches!(component.properties, Properties::Capability { .. }))
.expect("Should find capability component")
.traits
.expect("Should have traits object");
assert_eq!(traits.len(), 1, "Should have 1 trait");
if let TraitProperty::Link(ld) = &traits[0].properties {
assert_eq!(ld.source.as_ref().unwrap().config, vec![]);
#[allow(deprecated)]
let source_config = &ld.source_config;
assert_eq!(source_config, &None);
} else {
panic!("trait property was not a link definition");
};
}
}

//! Logic for model ([`Manifest`]) validation
//!
use std::collections::{HashMap, HashSet};
#[cfg(not(target_family = "wasm"))]
use std::path::Path;
use std::sync::OnceLock;
use anyhow::{Context as _, Result};
use regex::Regex;
use serde::{Deserialize, Serialize};
use crate::{
CapabilityProperties, ComponentProperties, LinkProperty, Manifest, Properties, Trait,
TraitProperty, DEFAULT_LINK_NAME, LATEST_VERSION,
};
/// A namespace -> package -> interface lookup
type KnownInterfaceLookup = HashMap<String, HashMap<String, HashMap<String, ()>>>;
/// Hard-coded list of known namespaces/packages and the interfaces they contain.
///
/// Using an interface that is *not* on this list is not an error --
/// custom interfaces are expected to not be on this list, but when using
/// a known namespace and package, interfaces should generally be well known.
static KNOWN_INTERFACE_LOOKUP: OnceLock<KnownInterfaceLookup> = OnceLock::new();
const SECRET_POLICY_TYPE: &str = "policy.secret.wasmcloud.dev/v1alpha1";
/// Get the static list of known interfaces
fn get_known_interface_lookup() -> &'static KnownInterfaceLookup {
KNOWN_INTERFACE_LOOKUP.get_or_init(|| {
HashMap::from([
(
"wrpc".into(),
HashMap::from([
(
"blobstore".into(),
HashMap::from([("blobstore".into(), ())]),
),
(
"keyvalue".into(),
HashMap::from([("atomics".into(), ()), ("store".into(), ())]),
),
(
"http".into(),
HashMap::from([
("incoming-handler".into(), ()),
("outgoing-handler".into(), ()),
]),
),
]),
),
(
"wasi".into(),
HashMap::from([
(
"blobstore".into(),
HashMap::from([("blobstore".into(), ())]),
),
("config".into(), HashMap::from([("runtime".into(), ())])),
(
"keyvalue".into(),
HashMap::from([
("atomics".into(), ()),
("store".into(), ()),
("batch".into(), ()),
("watch".into(), ()),
]),
),
(
"http".into(),
HashMap::from([
("incoming-handler".into(), ()),
("outgoing-handler".into(), ()),
]),
),
("logging".into(), HashMap::from([("logging".into(), ())])),
]),
),
(
"wasmcloud".into(),
HashMap::from([(
"messaging".into(),
HashMap::from([("consumer".into(), ()), ("handler".into(), ())]),
)]),
),
])
})
}
static MANIFEST_NAME_REGEX_STR: &str = r"^[-\w]+$";
static MANIFEST_NAME_REGEX: OnceLock<Regex> = OnceLock::new();
/// Retrieve regular expression which manifest names must match, compiled to a usable [`Regex`]
fn get_manifest_name_regex() -> &'static Regex {
MANIFEST_NAME_REGEX.get_or_init(|| {
Regex::new(MANIFEST_NAME_REGEX_STR)
.context("failed to parse manifest name regex")
.unwrap()
})
}
/// Check whether a manifest name matches requirements, returning all validation errors
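/// (Illustrative; names must match `^[-\w]+$`.)
/// ```rust,ignore
/// assert!(validate_manifest_name("my-example-app").valid());
/// assert!(!validate_manifest_name("my app!").valid());
/// ```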
pub fn validate_manifest_name(name: &str) -> impl ValidationOutput {
let mut errors = Vec::new();
if !get_manifest_name_regex().is_match(name) {
errors.push(ValidationFailure::new(
ValidationFailureLevel::Error,
format!("manifest name [{name}] is not allowed (should match regex [{MANIFEST_NAME_REGEX_STR}])"),
))
}
errors
}
/// Check whether a manifest name matches requirements
pub fn is_valid_manifest_name(name: &str) -> bool {
validate_manifest_name(name).valid()
}
/// Check whether a manifest version is valid, returning all validation errors
pub fn validate_manifest_version(version: &str) -> impl ValidationOutput {
let mut errors = Vec::new();
if version == LATEST_VERSION {
errors.push(ValidationFailure::new(
ValidationFailureLevel::Error,
format!("{LATEST_VERSION} is not allowed in wadm"),
))
}
errors
}
/// Check whether a manifest version meets requirements
pub fn is_valid_manifest_version(version: &str) -> bool {
validate_manifest_version(version).valid()
}
/// Check whether a known grouping of namespace, package and interface is valid.
/// A grouping must be both known/expected and invalid to fail this test (e.g. a typo).
///
/// NOTE: what is considered a valid interface known to the host depends explicitly on
/// the wasmCloud host and wasmCloud project goals/implementation. This information is
/// subject to change.
fn is_invalid_known_interface(
namespace: &str,
package: &str,
interface: &str,
) -> Vec<ValidationFailure> {
let known_interfaces = get_known_interface_lookup();
let Some(pkg_lookup) = known_interfaces.get(namespace) else {
// This namespace isn't known, so it may be a custom interface
return vec![];
};
let Some(iface_lookup) = pkg_lookup.get(package) else {
// Unknown package inside a known namespace we control is probably a bug
return vec![ValidationFailure::new(
ValidationFailureLevel::Warning,
format!("unrecognized interface [{namespace}:{package}/{interface}]"),
)];
};
// Unknown interface inside known namespace and package is probably a bug
if !iface_lookup.contains_key(interface) {
// Unknown interface inside a known namespace and package is probably a bug, but may be
// a new interface we don't know about yet
return vec![ValidationFailure::new(
ValidationFailureLevel::Warning,
format!("unrecognized interface [{namespace}:{package}/{interface}]"),
)];
}
Vec::new()
}
/// Level of a failure related to validation
#[derive(Debug, Default, Clone, Eq, PartialEq, Serialize, Deserialize)]
#[non_exhaustive]
pub enum ValidationFailureLevel {
#[default]
Warning,
Error,
}
impl core::fmt::Display for ValidationFailureLevel {
fn fmt(&self, f: &mut core::fmt::Formatter<'_>) -> core::fmt::Result {
write!(
f,
"{}",
match self {
Self::Warning => "warning",
Self::Error => "error",
}
)
}
}
/// Details of a single validation failure, including its severity level
#[derive(Debug, Default, Clone, Eq, PartialEq, Serialize, Deserialize)]
#[non_exhaustive]
pub struct ValidationFailure {
pub level: ValidationFailureLevel,
pub msg: String,
}
impl ValidationFailure {
fn new(level: ValidationFailureLevel, msg: String) -> Self {
ValidationFailure { level, msg }
}
}
impl core::fmt::Display for ValidationFailure {
fn fmt(&self, f: &mut core::fmt::Formatter<'_>) -> core::fmt::Result {
write!(f, "[{}] {}", self.level, self.msg)
}
}
/// Things that support output validation
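///
/// (An illustrative sketch against a `Vec<ValidationFailure>`, which implements this trait.)
/// ```rust,ignore
/// let failures: Vec<ValidationFailure> = Vec::new();
/// assert!(failures.valid());
/// assert!(failures.errors().is_empty() && failures.warnings().is_empty());
/// ```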
pub trait ValidationOutput {
/// Whether the object is valid
fn valid(&self) -> bool;
/// Warnings returned (if any) during validation
fn warnings(&self) -> Vec<&ValidationFailure>;
/// The errors returned by the validation
fn errors(&self) -> Vec<&ValidationFailure>;
}
/// Default implementation for a list of concrete [`ValidationFailure`]s
impl ValidationOutput for [ValidationFailure] {
fn valid(&self) -> bool {
self.errors().is_empty()
}
fn warnings(&self) -> Vec<&ValidationFailure> {
self.iter()
.filter(|m| m.level == ValidationFailureLevel::Warning)
.collect()
}
fn errors(&self) -> Vec<&ValidationFailure> {
self.iter()
.filter(|m| m.level == ValidationFailureLevel::Error)
.collect()
}
}
/// Default implementation for a list of concrete [`ValidationFailure`]s
impl ValidationOutput for Vec<ValidationFailure> {
fn valid(&self) -> bool {
self.as_slice().valid()
}
fn warnings(&self) -> Vec<&ValidationFailure> {
self.iter()
.filter(|m| m.level == ValidationFailureLevel::Warning)
.collect()
}
fn errors(&self) -> Vec<&ValidationFailure> {
self.iter()
.filter(|m| m.level == ValidationFailureLevel::Error)
.collect()
}
}
/// Validate a WADM application manifest, returning a list of validation failures
///
/// At present this can check for:
/// - unsupported interfaces (i.e. typos, etc)
/// - unknown packages under known namespaces
/// - "dangling" links (missing components)
///
/// Since `[ValidationFailure]` implements `ValidationOutput`, you can call `valid()` and other
/// trait methods on it:
///
/// ```rust,ignore
/// let (manifest, messages) = validate_manifest_file(some_path).await?;
/// let valid = messages.valid();
/// ```
///
/// # Arguments
///
/// * `path` - Path to the Manifest that will be read into memory and validated
#[cfg(not(target_family = "wasm"))]
pub async fn validate_manifest_file(
path: impl AsRef<Path>,
) -> Result<(Manifest, Vec<ValidationFailure>)> {
let content = tokio::fs::read_to_string(path.as_ref())
.await
.with_context(|| format!("failed to read manifest @ [{}]", path.as_ref().display()))?;
validate_manifest_bytes(&content).await.with_context(|| {
format!(
"failed to parse YAML manifest [{}]",
path.as_ref().display()
)
})
}
/// Validate a list of bytes that represents a WADM application manifest
///
/// # Arguments
///
/// * `content` - YAML content to the Manifest that will be read into memory and validated
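///
/// (An illustrative sketch; `app.yaml` is a hypothetical path.)
/// ```rust,ignore
/// let bytes = tokio::fs::read("app.yaml").await?;
/// let (manifest, failures) = validate_manifest_bytes(&bytes).await?;
/// if !failures.valid() {
///     for f in failures.errors() {
///         eprintln!("{f}");
///     }
/// }
/// ```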
pub async fn validate_manifest_bytes(
content: impl AsRef<[u8]>,
) -> Result<(Manifest, Vec<ValidationFailure>)> {
let raw_yaml_content = content.as_ref();
let manifest =
serde_yaml::from_slice(content.as_ref()).context("failed to parse manifest content")?;
let mut failures = validate_manifest(&manifest).await?;
let mut yaml_issues = validate_raw_yaml(raw_yaml_content)?;
failures.append(&mut yaml_issues);
Ok((manifest, failures))
}
/// Validate a WADM application manifest, returning a list of validation failures
///
/// At present this can check for:
/// - unsupported interfaces (i.e. typos, etc)
/// - unknown packages under known namespaces
/// - "dangling" links (missing components)
/// - secrets mapped to unknown policies
///
/// Since `[ValidationFailure]` implements `ValidationOutput`, you can call `valid()` and other
/// trait methods on it:
///
/// ```rust,ignore
/// let messages = validate_manifest(&manifest).await?;
/// let valid = messages.valid();
/// ```
///
/// # Arguments
///
/// * `manifest` - The [`Manifest`] that should be validated
pub async fn validate_manifest(manifest: &Manifest) -> Result<Vec<ValidationFailure>> {
// Check for known failures with the manifest
let mut failures = Vec::new();
failures.extend(
validate_manifest_name(&manifest.metadata.name)
.errors()
.into_iter()
.cloned(),
);
failures.extend(
validate_manifest_version(manifest.version())
.errors()
.into_iter()
.cloned(),
);
failures.extend(core_validation(manifest));
failures.extend(check_misnamed_interfaces(manifest));
failures.extend(check_dangling_links(manifest));
failures.extend(validate_policies(manifest));
failures.extend(ensure_no_custom_traits(manifest));
failures.extend(validate_component_properties(manifest));
failures.extend(check_duplicate_links(manifest));
failures.extend(validate_link_configs(manifest));
Ok(failures)
}
pub fn validate_raw_yaml(content: &[u8]) -> Result<Vec<ValidationFailure>> {
let mut failures = Vec::new();
let raw_content: serde_yaml::Value =
serde_yaml::from_slice(content).context("failed to read raw YAML content")?;
failures.extend(validate_components_configs(&raw_content));
Ok(failures)
}
fn core_validation(manifest: &Manifest) -> Vec<ValidationFailure> {
let mut failures = Vec::new();
let mut name_registry: HashSet<String> = HashSet::new();
let mut id_registry: HashSet<String> = HashSet::new();
let mut required_capability_components: HashSet<String> = HashSet::new();
for label in manifest.metadata.labels.iter() {
if !valid_oam_label(label) {
failures.push(ValidationFailure::new(
ValidationFailureLevel::Error,
format!("Invalid OAM label: {:?}", label),
));
}
}
for annotation in manifest.metadata.annotations.iter() {
if !valid_oam_label(annotation) {
failures.push(ValidationFailure::new(
ValidationFailureLevel::Error,
format!("Invalid OAM annotation: {:?}", annotation),
));
}
}
for component in manifest.spec.components.iter() {
// Component name validation: each component (component or provider) must have a unique name
if !name_registry.insert(component.name.clone()) {
failures.push(ValidationFailure::new(
ValidationFailureLevel::Error,
format!("Duplicate component name in manifest: {}", component.name),
));
}
// Provider validation:
// - Provider config should be serializable: for all components that have JSON config,
//   validate that it can serialize, so it doesn't trigger an error when sending a
//   command down the line
// - Providers should have a unique image ref and link name
if let Properties::Capability {
properties:
CapabilityProperties {
id: Some(component_id),
config: _capability_config,
..
},
} = &component.properties
{
if !id_registry.insert(component_id.to_string()) {
failures.push(ValidationFailure::new(
ValidationFailureLevel::Error,
format!(
"Duplicate component identifier in manifest: {}",
component_id
),
));
}
}
// Component validation : Components should have a unique identifier per manifest
if let Properties::Component {
properties: ComponentProperties { id: Some(id), .. },
} = &component.properties
{
if !id_registry.insert(id.to_string()) {
failures.push(ValidationFailure::new(
ValidationFailureLevel::Error,
format!("Duplicate component identifier in manifest: {}", id),
));
}
}
// Linkdef validation: a linkdef from a component should have a unique target and reference
if let Some(traits_vec) = &component.traits {
for trait_item in traits_vec.iter() {
if let Trait {
// TODO : add trait type validation after custom types are done. See TraitProperty enum.
properties: TraitProperty::Link(LinkProperty { target, .. }),
..
} = &trait_item
{
// Multiple components (with type != 'capability') can declare the same target, so we don't need to check for duplicates on insert
required_capability_components.insert(target.name.to_string());
}
}
}
}
let missing_capability_components = required_capability_components
.difference(&name_registry)
.collect::<Vec<&String>>();
if !missing_capability_components.is_empty() {
failures.push(ValidationFailure::new(
ValidationFailureLevel::Error,
format!(
"The following capability component(s) are missing from the manifest: {:?}",
missing_capability_components
),
));
};
failures
}
/// Check for misnamed host-supported interfaces in the manifest
fn check_misnamed_interfaces(manifest: &Manifest) -> Vec<ValidationFailure> {
let mut failures = Vec::new();
for link_trait in manifest.links() {
if let TraitProperty::Link(LinkProperty {
namespace,
package,
interfaces,
target: _target,
source: _source,
..
}) = &link_trait.properties
{
for interface in interfaces {
failures.extend(is_invalid_known_interface(namespace, package, interface))
}
}
}
failures
}
/// This validation rule should eventually be removed, but at this time (as of wadm 0.14.0)
/// custom traits are not supported. We technically deserialize the custom trait, but 99%
/// of the time this is just a poorly formatted spread or link scaler, which is incredibly
/// frustrating to debug.
fn ensure_no_custom_traits(manifest: &Manifest) -> Vec<ValidationFailure> {
let mut failures = Vec::new();
for component in manifest.components() {
if let Some(traits) = &component.traits {
for trait_item in traits {
match &trait_item.properties {
TraitProperty::Custom(trt) if trait_item.is_link() => failures.push(ValidationFailure::new(
ValidationFailureLevel::Error,
format!("Link trait deserialized as custom trait, ensure fields are correct: {}", trt),
)),
TraitProperty::Custom(trt) if trait_item.is_scaler() => failures.push(ValidationFailure::new(
ValidationFailureLevel::Error,
format!("Scaler trait deserialized as custom trait, ensure fields are correct: {}", trt),
)),
_ => (),
}
}
}
}
failures
}
/// Check for "dangling" links, which contain targets that are not specified elsewhere in the
/// WADM manifest.
///
/// A problem of this type only constitutes a warning, because it is possible that the manifest
/// does not *completely* specify targets (they may be deployed/managed external to WADM or in a separate
/// manifest).
fn check_dangling_links(manifest: &Manifest) -> Vec<ValidationFailure> {
let lookup = manifest.component_lookup();
let mut failures = Vec::new();
for link_trait in manifest.links() {
match &link_trait.properties {
TraitProperty::Custom(obj) => {
if obj.get("target").is_none() {
failures.push(ValidationFailure::new(
ValidationFailureLevel::Error,
"custom link is missing 'target' property".into(),
));
continue;
}
// Ensure the target's name is present and points to a known component
match obj["target"]["name"].as_str() {
// If target is present, ensure it's pointing to a known component
Some(target) if !lookup.contains_key(&String::from(target)) => {
failures.push(ValidationFailure::new(
ValidationFailureLevel::Warning,
format!("custom link target [{target}] is not a listed component"),
))
}
// For all cases where the component is in the lookup, there is nothing to do
Some(_) => {}
// If the target name property is not present, note that it is missing
None => failures.push(ValidationFailure::new(
ValidationFailureLevel::Error,
"custom link is missing 'target' name property".into(),
)),
}
}
TraitProperty::Link(LinkProperty { name, target, .. }) => {
let link_identifier = name
.as_ref()
.map(|n| format!("(name [{n}])"))
.unwrap_or_else(|| format!("(target [{}])", target.name));
if !lookup.contains_key(&target.name) {
failures.push(ValidationFailure::new(
ValidationFailureLevel::Warning,
format!(
"link {link_identifier} target [{}] is not a listed component",
target.name
),
))
}
}
_ => unreachable!("manifest.links() should only return links"),
}
}
failures
}
/// Ensure that a manifest has secrets that are mapped to known policies
/// and that those policies have the expected type and properties.
fn validate_policies(manifest: &Manifest) -> Vec<ValidationFailure> {
let policies = manifest.policy_lookup();
let mut failures = Vec::new();
for c in manifest.components() {
// Ensure policies meant for secrets are valid
for secret in c.secrets() {
match policies.get(&secret.properties.policy) {
Some(policy) if policy.policy_type != SECRET_POLICY_TYPE => {
failures.push(ValidationFailure::new(
ValidationFailureLevel::Error,
format!(
"secret '{}' is mapped to policy '{}' which is not a secret policy. Expected type '{SECRET_POLICY_TYPE}'",
secret.name, secret.properties.policy
),
))
}
Some(policy) => {
if !policy.properties.contains_key("backend") {
failures.push(ValidationFailure::new(
ValidationFailureLevel::Error,
format!(
"secret '{}' is mapped to policy '{}' which does not include a 'backend' property",
secret.name, secret.properties.policy
),
))
}
}
None => failures.push(ValidationFailure::new(
ValidationFailureLevel::Error,
format!(
"secret '{}' is mapped to unknown policy '{}'",
secret.name, secret.properties.policy
),
)),
}
}
}
failures
}
/// Ensure that all components in a manifest either specify an image reference or a shared
/// component in a different manifest. Note that this does not validate that the image reference
/// is valid or that the shared component is valid, only that one of the two properties is set.
pub fn validate_component_properties(application: &Manifest) -> Vec<ValidationFailure> {
let mut failures = Vec::new();
for component in application.spec.components.iter() {
match &component.properties {
Properties::Component {
properties:
ComponentProperties {
image,
application,
config,
secrets,
..
},
}
| Properties::Capability {
properties:
CapabilityProperties {
image,
application,
config,
secrets,
..
},
} => match (image, application) {
(Some(_), Some(_)) => {
failures.push(ValidationFailure::new(
ValidationFailureLevel::Error,
"Component cannot have both 'image' and 'application' properties".into(),
));
}
(None, None) => {
failures.push(ValidationFailure::new(
ValidationFailureLevel::Error,
"Component must have either 'image' or 'application' property".into(),
));
}
// This is a problem because of our left-folding config implementation. A shared application
// could specify additional config and actually overwrite the original manifest's config.
(None, Some(shared_properties)) if !config.is_empty() => {
failures.push(ValidationFailure::new(
ValidationFailureLevel::Error,
format!(
"Shared component '{}' cannot specify additional 'config'",
shared_properties.name
),
));
}
(None, Some(shared_properties)) if !secrets.is_empty() => {
failures.push(ValidationFailure::new(
ValidationFailureLevel::Error,
format!(
"Shared component '{}' cannot specify additional 'secrets'",
shared_properties.name
),
));
}
// Shared application components already have scale properties defined in their original manifest
(None, Some(shared_properties))
if component
.traits
.as_ref()
.is_some_and(|traits| traits.iter().any(|trt| trt.is_scaler())) =>
{
failures.push(ValidationFailure::new(
ValidationFailureLevel::Error,
format!(
"Shared component '{}' cannot include a scaler trait",
shared_properties.name
),
));
}
_ => {}
},
}
}
failures
}
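For illustration, a hedged manifest fragment (hypothetical names and image reference) showing the first rejected shape, a component that sets both `image` and `application`:

```yaml
spec:
  components:
    - name: http-component                  # hypothetical
      type: component
      properties:
        image: ghcr.io/example/http:0.1.0   # hypothetical reference
        application:                        # setting both is an error
          name: shared-app
          component: http-component
```

Omitting both properties, or adding `config`, `secrets`, or a scaler trait to a shared (`application`) component, is rejected the same way.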
/// Validates link configs in a WADM application manifest.
///
/// At present this can check for:
/// - all configs that declare `properties` have unique names
/// (configs without properties refer to existing configs)
///
pub fn validate_link_configs(manifest: &Manifest) -> Vec<ValidationFailure> {
let mut failures = Vec::new();
let mut link_config_names = HashSet::new();
for link_trait in manifest.links() {
if let TraitProperty::Link(LinkProperty { target, source, .. }) = &link_trait.properties {
for config in &target.config {
// we only need to check for uniqueness of configs with properties
if config.properties.is_none() {
continue;
}
// Check if config name is unique
if !link_config_names.insert(config.name.clone()) {
failures.push(ValidationFailure::new(
ValidationFailureLevel::Error,
format!("Duplicate link config name found: '{}'", config.name),
));
}
}
if let Some(source) = source {
for config in &source.config {
// we only need to check for uniqueness of configs with properties
if config.properties.is_none() {
continue;
}
// Check if config name is unique
if !link_config_names.insert(config.name.clone()) {
failures.push(ValidationFailure::new(
ValidationFailureLevel::Error,
format!("Duplicate link config name found: '{}'", config.name),
));
}
}
}
}
}
failures
}
/// Function to validate component configs.
/// As of 0.13.0, `source_config` is deprecated and replaced with the nested `source: config:`
/// (likewise `target_config` with `target: config:`); this function validates the raw YAML
/// to check for the deprecated keys.
pub fn validate_components_configs(application: &serde_yaml::Value) -> Vec<ValidationFailure> {
let mut failures = Vec::new();
if let Some(specs) = application.get("spec") {
if let Some(components) = specs.get("components") {
if let Some(components_sequence) = components.as_sequence() {
for component in components_sequence.iter() {
failures.extend(get_deprecated_configs(component));
}
}
}
}
failures
}
fn get_deprecated_configs(component: &serde_yaml::Value) -> Vec<ValidationFailure> {
let mut failures = vec![];
if let Some(traits) = component.get("traits") {
if let Some(traits_sequence) = traits.as_sequence() {
for trait_ in traits_sequence.iter() {
if let Some(trait_type) = trait_.get("type") {
if trait_type.ne("link") {
continue;
}
}
if let Some(trait_properties) = trait_.get("properties") {
if trait_properties.get("source_config").is_some() {
failures.push(ValidationFailure {
level: ValidationFailureLevel::Warning,
msg: "a link trait on one of the components contains a deprecated 'source_config' key, please use 'source: config:' instead".to_string(),
});
}
if trait_properties.get("target_config").is_some() {
failures.push(ValidationFailure {
level: ValidationFailureLevel::Warning,
msg: "a link trait on one of the components contains a deprecated 'target_config' key, please use 'target: config:' instead".to_string(),
});
}
}
}
}
}
failures
}
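In manifest YAML, the deprecation these checks warn about looks like this (hypothetical config name):

```yaml
traits:
  - type: link
    properties:
      # Deprecated (pre-0.13.0 style): triggers the warning above
      source_config:
        - name: my-config          # hypothetical
      # Replacement: nest config under 'source'
      source:
        config:
          - name: my-config
```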
/// This function validates that a key/value pair is a valid OAM label. It's using fairly
/// basic validation rules to ensure that the manifest isn't doing anything horribly wrong. Keeping
/// this function free of regex is intentional to keep this code functional but simple.
///
/// See <https://github.com/oam-dev/spec/blob/master/metadata.md#metadata> for details
pub fn valid_oam_label(label: (&String, &String)) -> bool {
let (key, _) = label;
match key.split_once('/') {
Some((prefix, name)) => is_valid_dns_subdomain(prefix) && is_valid_label_name(name),
None => is_valid_label_name(key),
}
}
pub fn is_valid_dns_subdomain(s: &str) -> bool {
if s.is_empty() || s.len() > 253 {
return false;
}
s.split('.').all(|part| {
// Ensure each part is non-empty, <= 63 characters, starts with an alphabetic character,
// ends with an alphanumeric character, and contains only alphanumeric characters or hyphens
!part.is_empty()
&& part.len() <= 63
&& part.starts_with(|c: char| c.is_ascii_alphabetic())
&& part.ends_with(|c: char| c.is_ascii_alphanumeric())
&& part.chars().all(|c| c.is_ascii_alphanumeric() || c == '-')
})
}
/// Ensure each name is non-empty, <= 63 characters, starts with an alphanumeric character,
/// ends with an alphanumeric character, and contains only alphanumeric characters, hyphens,
/// underscores, or periods
pub fn is_valid_label_name(name: &str) -> bool {
if name.is_empty() || name.len() > 63 {
return false;
}
name.starts_with(|c: char| c.is_ascii_alphanumeric())
&& name.ends_with(|c: char| c.is_ascii_alphanumeric())
&& name
.chars()
.all(|c| c.is_ascii_alphanumeric() || c == '-' || c == '_' || c == '.')
}
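The rule above can be exercised standalone; this sketch copies the function body so the snippet compiles on its own:

```rust
// Standalone copy of the label-name rule, for illustration only.
fn is_valid_label_name(name: &str) -> bool {
    if name.is_empty() || name.len() > 63 {
        return false;
    }
    name.starts_with(|c: char| c.is_ascii_alphanumeric())
        && name.ends_with(|c: char| c.is_ascii_alphanumeric())
        && name
            .chars()
            .all(|c| c.is_ascii_alphanumeric() || c == '-' || c == '_' || c == '.')
}

fn main() {
    // Dots, hyphens, and underscores are allowed in the middle
    assert!(is_valid_label_name("app.kubernetes.io_part-of"));
    // Must start and end with an alphanumeric character
    assert!(!is_valid_label_name("-leading-hyphen"));
    assert!(!is_valid_label_name("trailing-"));
    // Empty and over-long names are rejected
    assert!(!is_valid_label_name(""));
    assert!(!is_valid_label_name(&"a".repeat(64)));
}
```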
/// Checks whether a manifest contains "duplicate" links.
///
/// Multiple links from the same source with the same name, namespace, package and interface
/// are considered duplicate links.
fn check_duplicate_links(manifest: &Manifest) -> Vec<ValidationFailure> {
let mut failures = Vec::new();
for component in manifest.components() {
let mut link_ids = HashSet::new();
for link in component.links() {
if let TraitProperty::Link(LinkProperty {
name,
namespace,
package,
interfaces,
..
}) = &link.properties
{
for interface in interfaces {
if !link_ids.insert((
name.clone()
.unwrap_or_else(|| DEFAULT_LINK_NAME.to_string()),
namespace,
package,
interface,
)) {
failures.push(ValidationFailure::new(
ValidationFailureLevel::Error,
format!(
"Duplicate link found inside component '{}': {} ({}:{}/{})",
component.name,
name.clone()
.unwrap_or_else(|| DEFAULT_LINK_NAME.to_string()),
namespace,
package,
interface
),
));
};
}
}
}
}
failures
}
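As a hedged illustration (hypothetical target names), two link traits on one component count as duplicates when they share the same link name, namespace, package, and interface, even if their targets differ:

```yaml
traits:
  - type: link
    properties:
      namespace: wasi
      package: keyvalue
      interfaces: [store]
      target:
        name: kvstore                # hypothetical
  - type: link                       # duplicate: same default name + wasi:keyvalue/store
    properties:
      namespace: wasi
      package: keyvalue
      interfaces: [store]
      target:
        name: kvstore-backup         # a different target does not disambiguate
```

Giving the second link a distinct `name` would make the tuple unique.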
#[cfg(test)]
mod tests {
use super::is_valid_manifest_name;
const VALID_MANIFEST_NAMES: [&str; 4] = [
"mymanifest",
"my-manifest",
"my_manifest",
"mymanifest-v2-v3-final",
];
const INVALID_MANIFEST_NAMES: [&str; 2] = ["my.manifest", "my manifest"];
/// Ensure valid manifest names pass
#[test]
fn manifest_names_valid() {
// Acceptable manifest names
for valid in VALID_MANIFEST_NAMES {
assert!(is_valid_manifest_name(valid));
}
}
/// Ensure invalid manifest names fail
#[test]
fn manifest_names_invalid() {
for invalid in INVALID_MANIFEST_NAMES {
assert!(!is_valid_manifest_name(invalid))
}
}
}

@ -0,0 +1,4 @@
[wadm]
path = "../../../wit/wadm"
sha256 = "9795ab1a83023da07da2dc28d930004bd913b9dbf07d68d9ef9207a44348a169"
sha512 = "9a94f33fd861912c81efd441cd19cc8066dbb2df5c2236d0472b66294bddc20ec5ad569484be18334d8c104ae9647b2c81c9878210ac35694ad8ba4a5b3780be"

@ -0,0 +1 @@
wadm = "../../../wit/wadm"

@ -0,0 +1,48 @@
package wasmcloud:wadm@0.2.0;
/// A Wadm client which interacts with the wadm api
interface client {
use types.{
version-info,
status,
model-summary,
oam-manifest
};
// Deploys a model to the WADM system.
// If no lattice is provided, the default lattice name 'default' is used.
deploy-model: func(model-name: string, version: option<string>, lattice: option<string>) -> result<string, string>;
// Undeploys a model from the WADM system.
undeploy-model: func(model-name: string, lattice: option<string>, non-destructive: bool) -> result<_, string>;
// Stores the application manifest for later deploys.
// Model is the full YAML or JSON string in this case
// Returns the model name and version respectively.
put-model: func(model: string, lattice: option<string>) -> result<tuple<string, string>, string>;
/// Store an oam manifest directly for later deploys.
put-manifest: func(manifest: oam-manifest, lattice: option<string>) -> result<tuple<string, string>, string>;
// Retrieves the history of a given model name.
get-model-history: func(model-name: string, lattice: option<string>) -> result<list<version-info>, string>;
// Retrieves the status of a given model by name.
get-model-status: func(model-name: string, lattice: option<string>) -> result<status, string>;
// Retrieves details on a given model.
get-model-details: func(model-name: string, version: option<string>, lattice: option<string>) -> result<oam-manifest, string>;
// Deletes a model version from the WADM system.
delete-model-version: func(model-name: string, version: option<string>, lattice: option<string>) -> result<bool, string>;
// Retrieves all application manifests.
get-models: func(lattice: option<string>) -> result<list<model-summary>, string>;
}
interface handler {
use types.{status-update};
// Callback handler invoked when an update is received from an app status subscription
handle-status-update: func(msg: status-update) -> result<_, string>;
}

@ -0,0 +1,218 @@
package wasmcloud:wadm@0.2.0;
interface types {
record model-summary {
name: string,
version: string,
description: option<string>,
deployed-version: option<string>,
status: status-type,
status-message: option<string>
}
record version-info {
version: string,
deployed: bool
}
record status-update {
app: string,
status: status
}
record status {
version: string,
info: status-info,
components: list<component-status>
}
record component-status {
name: string,
component-type: string,
info: status-info,
traits: list<trait-status>
}
record trait-status {
trait-type: string,
info: status-info
}
record status-info {
status-type: status-type,
message: string
}
enum put-result {
error,
created,
new-version
}
enum get-result {
error,
success,
not-found
}
enum status-result {
error,
ok,
not-found
}
enum delete-result {
deleted,
error,
noop
}
enum status-type {
undeployed,
reconciling,
deployed,
failed,
waiting,
unhealthy
}
enum deploy-result {
error,
acknowledged,
not-found
}
// The overall structure of an OAM manifest.
record oam-manifest {
api-version: string,
kind: string,
metadata: metadata,
spec: specification,
}
// Metadata describing the manifest
record metadata {
name: string,
annotations: list<tuple<string, string>>,
labels: list<tuple<string, string>>,
}
// The specification for this manifest
record specification {
components: list<component>,
policies: list<policy>
}
// A component definition
record component {
name: string,
properties: properties,
traits: option<list<trait>>,
}
// Properties that can be defined for a component
variant properties {
component(component-properties),
capability(capability-properties),
}
// Properties for a component
record component-properties {
image: option<string>,
application: option<shared-application-component-properties>,
id: option<string>,
config: list<config-property>,
secrets: list<secret-property>,
}
// Properties for a capability
record capability-properties {
image: option<string>,
application: option<shared-application-component-properties>,
id: option<string>,
config: list<config-property>,
secrets: list<secret-property>,
}
// A policy definition
record policy {
name: string,
properties: list<tuple<string, string>>,
%type: string,
}
// A trait definition
record trait {
trait-type: string,
properties: trait-property,
}
// Properties for defining traits
variant trait-property {
link(link-property),
spreadscaler(spreadscaler-property),
custom(string),
}
// Properties for links
record link-property {
namespace: string,
%package: string,
interfaces: list<string>,
source: option<config-definition>,
target: target-config,
name: option<string>,
}
// Configuration definition
record config-definition {
config: list<config-property>,
secrets: list<secret-property>,
}
// Configuration properties
record config-property {
name: string,
properties: option<list<tuple<string, string>>>,
}
// Secret properties
record secret-property {
name: string,
properties: secret-source-property,
}
// Secret source properties
record secret-source-property {
policy: string,
key: string,
field: option<string>,
version: option<string>,
}
// Shared application component properties
record shared-application-component-properties {
name: string,
component: string
}
// Target configuration
record target-config {
name: string,
config: list<config-property>,
secrets: list<secret-property>,
}
// Properties for spread scalers
record spreadscaler-property {
instances: u32,
spread: list<spread>,
}
// Configuration for various spreading requirements
record spread {
name: string,
requirements: list<tuple<string, string>>,
weight: option<u32>,
}
}

@ -0,0 +1,7 @@
package wasmcloud:wadm-types@0.2.0;
world interfaces {
import wasmcloud:wadm/types@0.2.0;
import wasmcloud:wadm/client@0.2.0;
import wasmcloud:wadm/handler@0.2.0;
}

crates/wadm/Cargo.toml Normal file
@ -0,0 +1,51 @@
[package]
name = "wadm"
description = "wasmCloud Application Deployment Manager: A tool for running Wasm applications in wasmCloud"
version.workspace = true
edition = "2021"
authors = ["wasmCloud Team"]
keywords = ["webassembly", "wasmcloud", "wadm"]
license = "Apache-2.0"
readme = "../../README.md"
repository = "https://github.com/wasmcloud/wadm"
[features]
# Enables clap attributes on the wadm configuration struct
cli = ["clap"]
http_admin = ["http", "http-body-util", "hyper", "hyper-util"]
default = []
[package.metadata.cargo-machete]
ignored = ["cloudevents-sdk"]
[dependencies]
anyhow = { workspace = true }
async-nats = { workspace = true }
async-trait = { workspace = true }
chrono = { workspace = true }
clap = { workspace = true, optional = true, features = ["derive", "cargo", "env"]}
cloudevents-sdk = { workspace = true }
http = { workspace = true, features = ["std"], optional = true }
http-body-util = { workspace = true, optional = true }
hyper = { workspace = true, optional = true }
hyper-util = { workspace = true, features = ["server"], optional = true }
futures = { workspace = true }
indexmap = { workspace = true, features = ["serde"] }
nkeys = { workspace = true }
semver = { workspace = true, features = ["serde"] }
serde = { workspace = true }
serde_json = { workspace = true }
serde_yaml = { workspace = true }
sha2 = { workspace = true }
thiserror = { workspace = true }
tokio = { workspace = true, features = ["full"] }
tracing = { workspace = true, features = ["log"] }
tracing-futures = { workspace = true }
ulid = { workspace = true, features = ["serde"] }
uuid = { workspace = true }
wadm-types = { workspace = true }
wasmcloud-control-interface = { workspace = true }
wasmcloud-secrets-types = { workspace = true }
[dev-dependencies]
serial_test = "3"

@ -0,0 +1,293 @@
//! Type implementations for commands issued to compensate for state changes
use std::{
collections::{BTreeMap, HashMap},
error::Error,
hash::{Hash, Hasher},
};
use serde::{Deserialize, Serialize};
use wasmcloud_control_interface::Link;
use crate::{
events::{ComponentScaleFailed, ComponentScaled, Event, ProviderStartFailed, ProviderStarted},
workers::insert_managed_annotations,
};
macro_rules! from_impl {
($t:ident) => {
impl From<$t> for Command {
fn from(value: $t) -> Command {
Command::$t(value)
}
}
};
}
/// All possible compensatory commands for a lattice
#[derive(Clone, Debug, Eq, PartialEq, Serialize, Deserialize)]
pub enum Command {
ScaleComponent(ScaleComponent),
StartProvider(StartProvider),
StopProvider(StopProvider),
PutLink(PutLink),
DeleteLink(DeleteLink),
PutConfig(PutConfig),
DeleteConfig(DeleteConfig),
}
impl Command {
/// Generates the corresponding event for a [Command](Command), if any, in the form of a
/// two-tuple ([Event](Event), Option<Event>). The command's own `model_name` field is used
/// to compute the proper annotations.
///
/// # Return
/// - The first element in the tuple corresponds to the "success" event a host would output after completing this command
/// - The second element in the tuple corresponds to an optional "failure" event that a host could output if processing fails
pub fn corresponding_event(&self) -> Option<(Event, Option<Event>)> {
match self {
Command::StartProvider(StartProvider {
annotations,
reference,
host_id,
provider_id,
model_name,
..
}) => {
let mut annotations = annotations.to_owned();
insert_managed_annotations(&mut annotations, model_name);
Some((
Event::ProviderStarted(ProviderStarted {
provider_id: provider_id.to_owned(),
annotations: annotations.to_owned(),
claims: None,
image_ref: reference.to_owned(),
host_id: host_id.to_owned(),
}),
Some(Event::ProviderStartFailed(ProviderStartFailed {
provider_id: provider_id.to_owned(),
provider_ref: reference.to_owned(),
host_id: host_id.to_owned(),
// We don't know this field from the command
error: String::with_capacity(0),
})),
))
}
Command::ScaleComponent(ScaleComponent {
component_id,
host_id,
count,
reference,
annotations,
model_name,
..
}) => {
let mut annotations = annotations.to_owned();
insert_managed_annotations(&mut annotations, model_name);
Some((
Event::ComponentScaled(ComponentScaled {
component_id: component_id.to_owned(),
host_id: host_id.to_owned(),
max_instances: *count as usize,
image_ref: reference.to_owned(),
annotations: annotations.to_owned(),
// We don't know this field from the command
claims: None,
}),
Some(Event::ComponentScaleFailed(ComponentScaleFailed {
component_id: component_id.to_owned(),
host_id: host_id.to_owned(),
max_instances: *count as usize,
image_ref: reference.to_owned(),
annotations: annotations.to_owned(),
// We don't know these fields from the command
error: String::with_capacity(0),
claims: None,
})),
))
}
_ => None,
}
}
}
/// Struct for the ScaleComponent command
#[derive(Clone, Debug, Serialize, Deserialize, Default, Eq)]
pub struct ScaleComponent {
/// The ID of the component to scale. This should be computed by wadm as a combination
/// of the manifest name and the component name.
pub component_id: String,
/// The host id on which to scale the components
pub host_id: String,
/// The number of components to scale to
pub count: u32,
/// The OCI or bindle reference to scale
pub reference: String,
/// The name of the model/manifest that generated this command
pub model_name: String,
/// Additional annotations to attach on this command
pub annotations: BTreeMap<String, String>,
/// Named configuration to pass to the component.
#[serde(default, skip_serializing_if = "Vec::is_empty")]
pub config: Vec<String>,
}
from_impl!(ScaleComponent);
impl PartialEq for ScaleComponent {
fn eq(&self, other: &Self) -> bool {
self.component_id == other.component_id
&& self.host_id == other.host_id
&& self.count == other.count
&& self.model_name == other.model_name
&& self.annotations == other.annotations
}
}
/// Struct for the StartProvider command
#[derive(Clone, Debug, Eq, Serialize, Deserialize, Default)]
pub struct StartProvider {
/// The OCI or bindle reference to start
pub reference: String,
/// The ID of the provider to scale. This should be computed by wadm as a combination
/// of the manifest name and the provider name.
pub provider_id: String,
/// The host id on which to start the provider
pub host_id: String,
/// The name of the model/manifest that generated this command
pub model_name: String,
/// Named configuration to pass to the provider.
#[serde(default, skip_serializing_if = "Vec::is_empty")]
pub config: Vec<String>,
/// Additional annotations to attach on this command
pub annotations: BTreeMap<String, String>,
}
from_impl!(StartProvider);
impl PartialEq for StartProvider {
fn eq(&self, other: &StartProvider) -> bool {
self.reference == other.reference
&& self.host_id == other.host_id
&& self.model_name == other.model_name
}
}
impl Hash for StartProvider {
fn hash<H: Hasher>(&self, state: &mut H) {
self.reference.hash(state);
self.host_id.hash(state);
}
}
/// Struct for the StopProvider command
#[derive(Clone, Debug, Eq, Serialize, Deserialize, Default)]
pub struct StopProvider {
/// The ID of the provider to stop
pub provider_id: String,
/// The host ID on which to stop the provider
pub host_id: String,
/// The name of the model/manifest that generated this command
pub model_name: String,
/// Additional annotations to attach on this command
pub annotations: BTreeMap<String, String>,
}
from_impl!(StopProvider);
impl PartialEq for StopProvider {
fn eq(&self, other: &StopProvider) -> bool {
self.provider_id == other.provider_id
&& self.host_id == other.host_id
&& self.model_name == other.model_name
}
}
impl Hash for StopProvider {
fn hash<H: Hasher>(&self, state: &mut H) {
self.provider_id.hash(state);
self.host_id.hash(state);
}
}
/// Struct for the PutLink command
#[derive(Clone, Debug, Eq, Serialize, Deserialize, Default, PartialEq, Hash)]
pub struct PutLink {
/// Source identifier for the link
pub source_id: String,
/// Target for the link, which can be a unique identifier or (future) a routing group
pub target: String,
/// Name of the link. Not providing this is equivalent to specifying "default"
pub name: String,
/// WIT namespace of the link operation, e.g. `wasi` in `wasi:keyvalue/readwrite.get`
pub wit_namespace: String,
/// WIT package of the link operation, e.g. `keyvalue` in `wasi:keyvalue/readwrite.get`
pub wit_package: String,
/// WIT Interfaces to be used for the link, e.g. `readwrite`, `atomic`, etc.
pub interfaces: Vec<String>,
/// List of named configurations to provide to the source upon request
#[serde(default)]
pub source_config: Vec<String>,
/// List of named configurations to provide to the target upon request
#[serde(default)]
pub target_config: Vec<String>,
/// The name of the model/manifest that generated this command
pub model_name: String,
}
impl TryFrom<PutLink> for Link {
type Error = Box<dyn Error + Send + Sync>;
fn try_from(value: PutLink) -> Result<Link, Self::Error> {
Link::builder()
.source_id(&value.source_id)
.target(&value.target)
.name(&value.name)
.wit_namespace(&value.wit_namespace)
.wit_package(&value.wit_package)
.interfaces(value.interfaces)
.source_config(value.source_config)
.target_config(value.target_config)
.build()
}
}
from_impl!(PutLink);
/// Struct for the DeleteLink command
#[derive(Clone, Debug, Eq, PartialEq, Hash, Serialize, Deserialize, Default)]
pub struct DeleteLink {
/// The ID of the component to unlink
pub source_id: String,
/// The WIT namespace of the component to unlink
pub wit_namespace: String,
/// The WIT package of the component to unlink
pub wit_package: String,
/// The link name to unlink
pub link_name: String,
/// The name of the model/manifest that generated this command
pub model_name: String,
}
from_impl!(DeleteLink);
/// Struct for the PutConfig command
#[derive(Clone, Debug, Eq, PartialEq, Serialize, Deserialize, Default)]
pub struct PutConfig {
/// The name of the configuration to put
pub config_name: String,
/// The configuration properties to put
pub config: HashMap<String, String>,
}
from_impl!(PutConfig);
/// Struct for the DeleteConfig command
#[derive(Clone, Debug, Eq, PartialEq, Serialize, Deserialize, Default)]
pub struct DeleteConfig {
/// The name of the configuration to delete
pub config_name: String,
}
from_impl!(DeleteConfig);

crates/wadm/src/config.rs

@ -0,0 +1,306 @@
#[cfg(feature = "http_admin")]
use core::net::SocketAddr;
use std::path::PathBuf;
#[cfg(feature = "cli")]
use clap::Parser;
use wadm_types::api::DEFAULT_WADM_TOPIC_PREFIX;
use crate::nats::StreamPersistence;
#[derive(Clone, Debug)]
#[cfg_attr(feature = "cli", derive(Parser))]
#[cfg_attr(feature = "cli", command(name = clap::crate_name!(), version = clap::crate_version!(), about = "wasmCloud Application Deployment Manager", long_about = None))]
pub struct WadmConfig {
/// The ID for this wadm process. Defaults to a random UUIDv4 if none is provided. This is used
/// to help with debugging when identifying which process is doing the work
#[cfg_attr(
feature = "cli",
arg(short = 'i', long = "host-id", env = "WADM_HOST_ID")
)]
pub host_id: Option<String>,
/// Whether or not to use structured log output (as JSON)
#[cfg_attr(
feature = "cli",
arg(
short = 'l',
long = "structured-logging",
default_value = "false",
env = "WADM_STRUCTURED_LOGGING"
)
)]
pub structured_logging: bool,
/// Whether or not to enable opentelemetry tracing
#[cfg_attr(
feature = "cli",
arg(
short = 't',
long = "tracing",
default_value = "false",
env = "WADM_TRACING_ENABLED"
)
)]
pub tracing_enabled: bool,
/// The endpoint to use for tracing. Setting this flag enables tracing, even if --tracing is set
/// to false. Defaults to http://localhost:4318/v1/traces if not set and tracing is enabled
#[cfg_attr(
feature = "cli",
arg(short = 'e', long = "tracing-endpoint", env = "WADM_TRACING_ENDPOINT")
)]
pub tracing_endpoint: Option<String>,
/// The NATS JetStream domain to connect to
#[cfg_attr(feature = "cli", arg(short = 'd', env = "WADM_JETSTREAM_DOMAIN"))]
pub domain: Option<String>,
/// (Advanced) Tweak the maximum number of jobs to run for handling events and commands. Be
/// careful how you use this as it can affect performance
#[cfg_attr(
feature = "cli",
arg(short = 'j', long = "max-jobs", env = "WADM_MAX_JOBS")
)]
pub max_jobs: Option<usize>,
/// The URL of the NATS server to connect to
#[cfg_attr(
feature = "cli",
arg(
short = 's',
long = "nats-server",
env = "WADM_NATS_SERVER",
default_value = "127.0.0.1:4222"
)
)]
pub nats_server: String,
/// Use the specified nkey file or seed literal for authentication. Must be used in conjunction with --nats-jwt
#[cfg_attr(
feature = "cli",
arg(
long = "nats-seed",
env = "WADM_NATS_NKEY",
conflicts_with = "nats_creds",
requires = "nats_jwt"
)
)]
pub nats_seed: Option<String>,
/// Use the specified jwt file or literal for authentication. Must be used in conjunction with --nats-seed
#[cfg_attr(
feature = "cli",
arg(
long = "nats-jwt",
env = "WADM_NATS_JWT",
conflicts_with = "nats_creds",
requires = "nats_seed"
)
)]
pub nats_jwt: Option<String>,
/// (Optional) NATS credential file to use when authenticating
#[cfg_attr(
feature = "cli", arg(
long = "nats-creds-file",
env = "WADM_NATS_CREDS_FILE",
conflicts_with_all = ["nats_seed", "nats_jwt"],
))]
pub nats_creds: Option<PathBuf>,
/// (Optional) NATS TLS certificate file to use when authenticating
#[cfg_attr(
feature = "cli",
arg(long = "nats-tls-ca-file", env = "WADM_NATS_TLS_CA_FILE")
)]
pub nats_tls_ca_file: Option<PathBuf>,
/// Name of the bucket used for storage of lattice state
#[cfg_attr(
feature = "cli",
arg(
long = "state-bucket-name",
env = "WADM_STATE_BUCKET_NAME",
default_value = "wadm_state"
)
)]
pub state_bucket: String,
/// The amount of time in seconds a host has to heartbeat before it is considered gone and
/// removed from the store. By default, this is 70s: 2x the host heartbeat interval plus a little padding
#[cfg_attr(
feature = "cli",
arg(
long = "cleanup-interval",
env = "WADM_CLEANUP_INTERVAL",
default_value = "70"
)
)]
pub cleanup_interval: u64,
/// The API topic prefix to use. This is an advanced setting that should only be used if you
/// know what you are doing
#[cfg_attr(
feature = "cli", arg(
long = "api-prefix",
env = "WADM_API_PREFIX",
default_value = DEFAULT_WADM_TOPIC_PREFIX
))]
pub api_prefix: String,
/// The prefix to use for the internal streams. When running in a multitenant environment,
/// clients share the same JS domain (since messages need to come from lattices).
/// Setting a stream prefix makes it possible to have a separate stream for each wadm instance running in a multitenant environment.
/// This is an advanced setting that should only be used if you know what you are doing.
#[cfg_attr(
feature = "cli",
arg(long = "stream-prefix", env = "WADM_STREAM_PREFIX")
)]
pub stream_prefix: Option<String>,
/// Name of the bucket used for storage of manifests
#[cfg_attr(
feature = "cli",
arg(
long = "manifest-bucket-name",
env = "WADM_MANIFEST_BUCKET_NAME",
default_value = "wadm_manifests"
)
)]
pub manifest_bucket: String,
/// Run wadm in multitenant mode. This is for advanced multitenant use cases with segmented NATS
/// account traffic and not simple cases where all lattices use credentials from the same
/// account. See the deployment guide for more information
#[cfg_attr(
feature = "cli",
arg(long = "multitenant", env = "WADM_MULTITENANT", hide = true)
)]
pub multitenant: bool,
//
// Max bytes configuration for streams. Primarily configurable to enable deployment on NATS infra
// with limited resources.
//
/// Maximum bytes to keep for the state bucket
#[cfg_attr(
feature = "cli", arg(
long = "state-bucket-max-bytes",
env = "WADM_STATE_BUCKET_MAX_BYTES",
default_value_t = -1,
hide = true
))]
pub max_state_bucket_bytes: i64,
/// Maximum bytes to keep for the manifest bucket
#[cfg_attr(
feature = "cli", arg(
long = "manifest-bucket-max-bytes",
env = "WADM_MANIFEST_BUCKET_MAX_BYTES",
default_value_t = -1,
hide = true
))]
pub max_manifest_bucket_bytes: i64,
/// NATS stream storage type
#[cfg_attr(
feature = "cli", arg(
long = "stream-persistence",
env = "WADM_STREAM_PERSISTENCE",
default_value_t = StreamPersistence::File
))]
pub stream_persistence: StreamPersistence,
/// Maximum bytes to keep for the command stream
#[cfg_attr(
feature = "cli", arg(
long = "command-stream-max-bytes",
env = "WADM_COMMAND_STREAM_MAX_BYTES",
default_value_t = -1,
hide = true
))]
pub max_command_stream_bytes: i64,
/// Maximum bytes to keep for the event stream
#[cfg_attr(
feature = "cli", arg(
long = "event-stream-max-bytes",
env = "WADM_EVENT_STREAM_MAX_BYTES",
default_value_t = -1,
hide = true
))]
pub max_event_stream_bytes: i64,
/// Maximum bytes to keep for the event consumer stream
#[cfg_attr(
feature = "cli", arg(
long = "event-consumer-stream-max-bytes",
env = "WADM_EVENT_CONSUMER_STREAM_MAX_BYTES",
default_value_t = -1,
hide = true
))]
pub max_event_consumer_stream_bytes: i64,
/// Maximum bytes to keep for the status stream
#[cfg_attr(
feature = "cli", arg(
long = "status-stream-max-bytes",
env = "WADM_STATUS_STREAM_MAX_BYTES",
default_value_t = -1,
hide = true
))]
pub max_status_stream_bytes: i64,
/// Maximum bytes to keep for the notify stream
#[cfg_attr(
feature = "cli", arg(
long = "notify-stream-max-bytes",
env = "WADM_NOTIFY_STREAM_MAX_BYTES",
default_value_t = -1,
hide = true
))]
pub max_notify_stream_bytes: i64,
/// Maximum bytes to keep for the wasmbus event stream
#[cfg_attr(
feature = "cli", arg(
long = "wasmbus-event-stream-max-bytes",
env = "WADM_WASMBUS_EVENT_STREAM_MAX_BYTES",
default_value_t = -1,
hide = true
))]
pub max_wasmbus_event_stream_bytes: i64,
#[cfg(feature = "http_admin")]
#[cfg_attr(feature = "cli", clap(long = "http-admin", env = "WADM_HTTP_ADMIN"))]
/// HTTP administration endpoint address
pub http_admin: Option<SocketAddr>,
}
impl Default for WadmConfig {
fn default() -> Self {
Self {
host_id: None,
domain: None,
max_jobs: None,
nats_server: "127.0.0.1:4222".to_string(),
nats_seed: None,
nats_jwt: None,
nats_creds: None,
nats_tls_ca_file: None,
state_bucket: "wadm_state".to_string(),
cleanup_interval: 70,
api_prefix: DEFAULT_WADM_TOPIC_PREFIX.to_string(),
stream_prefix: None,
manifest_bucket: "wadm_manifests".to_string(),
multitenant: false,
max_state_bucket_bytes: -1,
max_manifest_bucket_bytes: -1,
stream_persistence: StreamPersistence::File,
max_command_stream_bytes: -1,
max_event_stream_bytes: -1,
max_event_consumer_stream_bytes: -1,
max_status_stream_bytes: -1,
max_notify_stream_bytes: -1,
max_wasmbus_event_stream_bytes: -1,
structured_logging: false,
tracing_enabled: false,
tracing_endpoint: None,
#[cfg(feature = "http_admin")]
http_admin: None,
}
}
}


@ -0,0 +1,61 @@
//! A module for connection pools and generators. This is needed because control interface clients
//! (and possibly other things like nats connections in the future) are lattice scoped or need
//! different credentials
use wasmcloud_control_interface::{Client, ClientBuilder};
// Copied from https://github.com/wasmCloud/control-interface-client/blob/main/src/broker.rs#L1, not public
const DEFAULT_TOPIC_PREFIX: &str = "wasmbus.ctl";
/// A client constructor for wasmCloud control interface clients, identified by a lattice ID
// NOTE: Yes, this sounds java-y. Deal with it.
#[derive(Clone)]
pub struct ControlClientConstructor {
client: async_nats::Client,
/// The topic prefix to use for operations
topic_prefix: Option<String>,
}
impl ControlClientConstructor {
/// Creates a new client pool backed by a single shared NATS client and an optional
/// topic prefix. The given NATS client should use credentials that can access all desired
/// lattices.
pub fn new(
client: async_nats::Client,
topic_prefix: Option<String>,
) -> ControlClientConstructor {
ControlClientConstructor {
client,
topic_prefix,
}
}
/// Get the client for the given lattice ID
pub fn get_connection(&self, id: &str, multitenant_prefix: Option<&str>) -> Client {
let builder = ClientBuilder::new(self.client.clone()).lattice(id);
let builder = builder.topic_prefix(topic_prefix(
multitenant_prefix,
self.topic_prefix.as_deref(),
));
builder.build()
}
}
/// Returns the topic prefix to use for the given multitenant prefix and topic prefix. The
/// default prefix is `wasmbus.ctl`.
///
/// If running in multitenant mode, we listen to events on *.wasmbus.evt.*.> and need to send commands
/// back to the '*' account. This match takes into account custom prefixes as well to support
/// advanced use cases.
///
/// This function does _not_ take into account whether or not wadm is running in multitenant mode, it's assumed
/// that passing a Some() value for multitenant_prefix means that wadm is running in multitenant mode.
fn topic_prefix(multitenant_prefix: Option<&str>, topic_prefix: Option<&str>) -> String {
match (multitenant_prefix, topic_prefix) {
(Some(mt), Some(prefix)) => format!("{}.{}", mt, prefix),
(Some(mt), None) => format!("{}.{DEFAULT_TOPIC_PREFIX}", mt),
(None, Some(prefix)) => prefix.to_string(),
_ => DEFAULT_TOPIC_PREFIX.to_string(),
}
}
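For reference, the prefix resolution above can be exercised on its own. This sketch copies the `topic_prefix` helper from the diff and checks its four cases; the account prefix `AACCOUNT` and custom prefix `custom.ctl` are made-up example values:

```rust
const DEFAULT_TOPIC_PREFIX: &str = "wasmbus.ctl";

// Mirrors the private `topic_prefix` helper shown in the diff above
fn topic_prefix(multitenant_prefix: Option<&str>, topic_prefix: Option<&str>) -> String {
    match (multitenant_prefix, topic_prefix) {
        (Some(mt), Some(prefix)) => format!("{mt}.{prefix}"),
        (Some(mt), None) => format!("{mt}.{DEFAULT_TOPIC_PREFIX}"),
        (None, Some(prefix)) => prefix.to_string(),
        _ => DEFAULT_TOPIC_PREFIX.to_string(),
    }
}

fn main() {
    // Multitenant with a custom prefix: commands target the account-scoped custom topic
    assert_eq!(
        topic_prefix(Some("AACCOUNT"), Some("custom.ctl")),
        "AACCOUNT.custom.ctl"
    );
    // Multitenant with the default prefix
    assert_eq!(topic_prefix(Some("AACCOUNT"), None), "AACCOUNT.wasmbus.ctl");
    // Single tenant: a custom prefix passes through unchanged
    assert_eq!(topic_prefix(None, Some("custom.ctl")), "custom.ctl");
    // Single tenant default
    assert_eq!(topic_prefix(None, None), "wasmbus.ctl");
}
```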


@ -1,8 +1,8 @@
//! A module for creating and consuming a stream of commands from NATS
use std::collections::HashMap;
use std::pin::Pin;
use std::task::{Context, Poll};
use std::time::Duration;
use async_nats::{
jetstream::{
@ -14,13 +14,11 @@ use async_nats::{
use futures::{Stream, TryStreamExt};
use tracing::{error, warn};
use super::{CreateConsumer, ScopedMessage};
use super::{CreateConsumer, ScopedMessage, LATTICE_METADATA_KEY, MULTITENANT_METADATA_KEY};
use crate::commands::*;
/// The name of the durable NATS stream and consumer that contains incoming lattice commands
pub const COMMANDS_CONSUMER_PREFIX: &str = "wadm_commands";
/// The default time given for a command to ack. This is longer than events due to the possible need for more processing time
pub const DEFAULT_ACK_TIME: Duration = Duration::from_secs(2);
/// A stream of all commands in a lattice, consumed from a durable NATS stream and consumer
pub struct CommandConsumer {
@ -39,12 +37,26 @@ impl CommandConsumer {
stream: JsStream,
topic: &str,
lattice_id: &str,
multitenant_prefix: Option<&str>,
) -> Result<CommandConsumer, NatsError> {
if !topic.contains(lattice_id) {
return Err(format!("Topic {topic} does not match for lattice ID {lattice_id}").into());
}
let consumer_name = format!("{COMMANDS_CONSUMER_PREFIX}_{lattice_id}");
let (consumer_name, metadata) = if let Some(prefix) = multitenant_prefix {
(
format!("{COMMANDS_CONSUMER_PREFIX}-{lattice_id}_{prefix}"),
HashMap::from([
(LATTICE_METADATA_KEY.to_string(), lattice_id.to_string()),
(MULTITENANT_METADATA_KEY.to_string(), prefix.to_string()),
]),
)
} else {
(
format!("{COMMANDS_CONSUMER_PREFIX}-{lattice_id}"),
HashMap::from([(LATTICE_METADATA_KEY.to_string(), lattice_id.to_string())]),
)
};
let consumer = stream
.get_or_create_consumer(
&consumer_name,
@ -55,10 +67,11 @@ impl CommandConsumer {
"Durable wadm commands consumer for lattice {lattice_id}"
)),
ack_policy: async_nats::jetstream::consumer::AckPolicy::Explicit,
ack_wait: DEFAULT_ACK_TIME,
ack_wait: super::DEFAULT_ACK_TIME,
max_deliver: 3,
deliver_policy: async_nats::jetstream::consumer::DeliverPolicy::All,
filter_subject: topic.to_owned(),
metadata,
..Default::default()
},
)
@ -81,7 +94,7 @@ impl Stream for CommandConsumer {
fn poll_next(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
match self.stream.try_poll_next_unpin(cx) {
Poll::Ready(None) => Poll::Ready(None),
Poll::Ready(Some(Err(e))) => Poll::Ready(Some(Err(e))),
Poll::Ready(Some(Err(e))) => Poll::Ready(Some(Err(Box::new(e)))),
Poll::Ready(Some(Ok(msg))) => {
// Convert to our event type, skipping if we can't do it (and looping around to
// try the next poll)
@ -129,7 +142,8 @@ impl CreateConsumer for CommandConsumer {
stream: async_nats::jetstream::stream::Stream,
topic: &str,
lattice_id: &str,
multitenant_prefix: Option<&str>,
) -> Result<Self::Output, NatsError> {
CommandConsumer::new(stream, topic, lattice_id).await
CommandConsumer::new(stream, topic, lattice_id, multitenant_prefix).await
}
}
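The consumer naming scheme introduced here (`<consumer_prefix>-<lattice_id>` or, in multitenant mode, `<consumer_prefix>-<lattice_id>_<account_prefix>`) pairs with the fallback parser in the consumer manager. A minimal round-trip sketch, using the lattice/account values from the tests in this diff:

```rust
const COMMANDS_CONSUMER_PREFIX: &str = "wadm_commands";

// Build a consumer name the way CommandConsumer::new does in the diff above
fn consumer_name(lattice_id: &str, multitenant_prefix: Option<&str>) -> String {
    match multitenant_prefix {
        Some(prefix) => format!("{COMMANDS_CONSUMER_PREFIX}-{lattice_id}_{prefix}"),
        None => format!("{COMMANDS_CONSUMER_PREFIX}-{lattice_id}"),
    }
}

// Mirrors the extract_lattice_and_multitenant fallback in the consumer manager
fn extract(consumer_name: &str) -> (Option<String>, Option<String>) {
    let mut parts = consumer_name.split('-');
    let _consumer_prefix = parts.next(); // ignore the consumer prefix
    let remainder = parts.collect::<Vec<&str>>().join("-");
    let mut it = remainder.split('_');
    (it.next().map(str::to_owned), it.next().map(str::to_owned))
}

fn main() {
    let name = consumer_name("default", Some("AAAAAACOUNT"));
    assert_eq!(name, "wadm_commands-default_AAAAAACOUNT");
    // The parser recovers both pieces from the name alone
    assert_eq!(
        extract(&name),
        (Some("default".to_string()), Some("AAAAAACOUNT".to_string()))
    );
    assert_eq!(
        extract(&consumer_name("default", None)),
        (Some("default".to_string()), None)
    );
}
```

Note that the metadata-based lookup is preferred; name parsing only exists for consumers created before the NATS 2.10 metadata support.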


@ -1,9 +1,9 @@
//! A module for creating and consuming a stream of events from a wasmcloud lattice
use std::collections::HashMap;
use std::convert::TryFrom;
use std::pin::Pin;
use std::task::{Context, Poll};
use std::time::Duration;
use async_nats::{
jetstream::{
@ -15,13 +15,11 @@ use async_nats::{
use futures::{Stream, TryStreamExt};
use tracing::{debug, error, warn};
use super::{CreateConsumer, ScopedMessage};
use super::{CreateConsumer, ScopedMessage, LATTICE_METADATA_KEY, MULTITENANT_METADATA_KEY};
use crate::events::*;
/// The name of the durable NATS stream and consumer that contains incoming lattice events
pub const EVENTS_CONSUMER_PREFIX: &str = "wadm_events";
/// The default time given for an event to ack
pub const DEFAULT_ACK_TIME: Duration = Duration::from_secs(2);
pub const EVENTS_CONSUMER_PREFIX: &str = "wadm_event_consumer";
/// A stream of all events of a lattice, consumed from a durable NATS stream and consumer
pub struct EventConsumer {
@ -40,11 +38,25 @@ impl EventConsumer {
stream: JsStream,
topic: &str,
lattice_id: &str,
multitenant_prefix: Option<&str>,
) -> Result<EventConsumer, NatsError> {
if !topic.contains(lattice_id) {
return Err(format!("Topic {topic} does not match for lattice ID {lattice_id}").into());
}
let consumer_name = format!("{EVENTS_CONSUMER_PREFIX}_{lattice_id}");
let (consumer_name, metadata) = if let Some(prefix) = multitenant_prefix {
(
format!("{EVENTS_CONSUMER_PREFIX}-{lattice_id}_{prefix}"),
HashMap::from([
(LATTICE_METADATA_KEY.to_string(), lattice_id.to_string()),
(MULTITENANT_METADATA_KEY.to_string(), prefix.to_string()),
]),
)
} else {
(
format!("{EVENTS_CONSUMER_PREFIX}-{lattice_id}"),
HashMap::from([(LATTICE_METADATA_KEY.to_string(), lattice_id.to_string())]),
)
};
let consumer = stream
.get_or_create_consumer(
&consumer_name,
@ -55,10 +67,11 @@ impl EventConsumer {
"Durable wadm events consumer for lattice {lattice_id}"
)),
ack_policy: async_nats::jetstream::consumer::AckPolicy::Explicit,
ack_wait: DEFAULT_ACK_TIME,
ack_wait: super::DEFAULT_ACK_TIME,
max_deliver: 3,
deliver_policy: async_nats::jetstream::consumer::DeliverPolicy::All,
filter_subject: topic.to_owned(),
metadata,
..Default::default()
},
)
@ -81,7 +94,7 @@ impl Stream for EventConsumer {
fn poll_next(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
match self.stream.try_poll_next_unpin(cx) {
Poll::Ready(None) => Poll::Ready(None),
Poll::Ready(Some(Err(e))) => Poll::Ready(Some(Err(e))),
Poll::Ready(Some(Err(e))) => Poll::Ready(Some(Err(Box::new(e)))),
Poll::Ready(Some(Ok(msg))) => {
// Parse as a cloud event, skipping if we can't do it (and looping around to try
// the next poll)
@ -145,7 +158,8 @@ impl CreateConsumer for EventConsumer {
stream: async_nats::jetstream::stream::Stream,
topic: &str,
lattice_id: &str,
multitenant_prefix: Option<&str>,
) -> Result<Self::Output, NatsError> {
EventConsumer::new(stream, topic, lattice_id).await
EventConsumer::new(stream, topic, lattice_id, multitenant_prefix).await
}
}


@ -9,6 +9,8 @@ use tokio::{
};
use tracing::{error, instrument, trace, warn, Instrument};
use crate::consumers::{LATTICE_METADATA_KEY, MULTITENANT_METADATA_KEY};
use super::{CreateConsumer, ScopedMessage};
/// A convenience type for returning work results
@ -76,7 +78,11 @@ pub trait Worker {
pub trait WorkerCreator {
type Output: Worker + Send + Sync + 'static;
async fn create(&self, lattice_id: &str) -> anyhow::Result<Self::Output>;
async fn create(
&self,
lattice_id: &str,
multitenant_prefix: Option<&str>,
) -> anyhow::Result<Self::Output>;
}
/// A manager of a specific type of Consumer that handles giving out permits to work and managing
@ -107,6 +113,7 @@ impl<C> ConsumerManager<C> {
permit_pool: Arc<Semaphore>,
stream: NatsStream,
worker_generator: F,
multitenant: bool,
) -> ConsumerManager<C>
where
W: Worker + Send + Sync + 'static,
@ -135,19 +142,38 @@ impl<C> ConsumerManager<C> {
return None;
}
};
// TODO: This is somewhat brittle as we could change naming schemes, but it is
// good enough for now. We are just taking the name (which should be of the
// format `<consumer_prefix>_<lattice_id>`), but this makes sure we are always
// getting the last thing in case of other underscores
let lattice_id = match info.name.split('_').last() {
Some(id) => id,
None => return None,
// Now that wadm is using NATS 2.10, the lattice and multitenant prefix are stored in the consumer metadata.
// As a fallback for older versions, we can still extract them from the consumer name, which has the
// form `<consumer_prefix>-<lattice_prefix>_<multitenant_prefix>`
let (lattice_id, multitenant_prefix) = match (info.config.metadata.get(LATTICE_METADATA_KEY), info.config.metadata.get(MULTITENANT_METADATA_KEY)) {
(Some(lattice), Some(multitenant_prefix)) => {
trace!(%lattice, %multitenant_prefix, "Found lattice and multitenant prefix in consumer metadata");
(lattice.to_owned(), Some(multitenant_prefix.to_owned()))
}
(Some(lattice), None) => {
trace!(%lattice, "Found lattice in consumer metadata");
(lattice.to_owned(), None)
}
_ => {
match extract_lattice_and_multitenant(&info.name) {
(Some(id), prefix) => (id, prefix),
(None, _) => return None,
}
}
};
// Don't create multitenant consumers if running in single tenant mode, and vice versa
if multitenant_prefix.is_some() != multitenant {
trace!(%lattice_id, "Skipping consumer for lattice because multitenant doesn't match");
return None;
}
// NOTE(thomastaylor312): It might be nicer for logs if we add an extra param for a
// friendly consumer manager name
trace!(%lattice_id, subject = %info.config.filter_subject, "Adding consumer for lattice");
let worker = match worker_generator.create(lattice_id).await {
let worker = match worker_generator.create(&lattice_id, multitenant_prefix.as_deref()).await {
Ok(w) => w,
Err(e) => {
error!(error = %e, %lattice_id, "Unable to add consumer for lattice. Error when generating worker");
@ -155,7 +181,7 @@ impl<C> ConsumerManager<C> {
}
};
match manager.spawn_handler(&info.config.filter_subject, lattice_id, worker).await {
match manager.spawn_handler(&info.config.filter_subject, &lattice_id, multitenant_prefix.as_deref(), worker).await {
Ok(handle) => Some((info.config.filter_subject.to_owned(), handle)),
Err(e) => {
error!(error = %e, %lattice_id, "Unable to add consumer for lattice");
@ -179,6 +205,7 @@ impl<C> ConsumerManager<C> {
&self,
topic: &str,
lattice_id: &str,
multitenant_prefix: Option<&str>,
worker: W,
) -> Result<(), async_nats::Error>
where
@ -191,7 +218,9 @@ impl<C> ConsumerManager<C> {
{
if !self.has_consumer(topic).await {
trace!("Adding new consumer");
let handle = self.spawn_handler(topic, lattice_id, worker).await?;
let handle = self
.spawn_handler(topic, lattice_id, multitenant_prefix, worker)
.await?;
let mut handles = self.handles.write().await;
handles.insert(topic.to_owned(), handle);
}
@ -202,6 +231,7 @@ impl<C> ConsumerManager<C> {
&self,
topic: &str,
lattice_id: &str,
multitenant_prefix: Option<&str>,
worker: W,
) -> Result<JoinHandle<WorkResult<()>>, async_nats::Error>
where
@ -212,10 +242,11 @@ impl<C> ConsumerManager<C> {
+ Unpin
+ 'static,
{
let consumer = C::create(self.stream.clone(), topic, lattice_id).await?;
let consumer =
C::create(self.stream.clone(), topic, lattice_id, multitenant_prefix).await?;
let permits = self.permits.clone();
Ok(tokio::spawn(work_fn(consumer, permits, worker).instrument(
tracing::info_span!("consumer_worker", %topic),
tracing::info_span!("consumer_worker", %topic, worker_type = %std::any::type_name::<W>()),
)))
}
@ -247,12 +278,13 @@ where
C: Stream<Item = Result<ScopedMessage<W::Message>, async_nats::Error>> + Unpin,
{
loop {
// Get next value from stream, returning error if the consumer stopped
let res = consumer.next().await.ok_or(WorkError::ConsumerStopped)?;
// Grab a permit to do some work. This will only return errors if the pool is closed
trace!("Getting work permit");
let _permit = permits.acquire().await?;
trace!("Received work permit, attempting to pull from consumer");
// Get next value from stream, returning error if the consumer stopped
let res = consumer.next().await.ok_or(WorkError::ConsumerStopped)?;
let res = match res {
Ok(msg) => {
trace!(message = ?msg, "Got message from consumer");
@ -272,3 +304,49 @@ where
}
}
}
/// Extracts the lattice ID and multitenant prefix from a consumer name in the form of either:
/// 1. <consumer_prefix>-<lattice_prefix>_<multitenant_prefix>
/// 2. <consumer_prefix>-<lattice_prefix>
fn extract_lattice_and_multitenant(consumer_name: &str) -> (Option<String>, Option<String>) {
let mut parts = consumer_name.split('-');
// Ignore the consumer prefix
let _consumer_prefix = parts.next();
// Split the remainder into lattice and multitenant prefix
let remainder = parts.collect::<Vec<&str>>().join("-");
let mut lattice_and_multitenant = remainder.split('_');
let lattice_id = lattice_and_multitenant.next().map(|l| l.to_owned());
let multitenant_prefix = lattice_and_multitenant.next().map(|p| p.to_owned());
(lattice_id, multitenant_prefix)
}
#[cfg(test)]
mod test {
use super::extract_lattice_and_multitenant;
#[test]
fn can_extract_lattice_and_multitenant() {
let default = "wadm_commands-default";
let with_multi = "wadm_events-default_AAAAAACOUNT";
let with_dashes = "wadm_mirror-550e8400-e29b-41d4-a716-446655440000_AAAAAACOUNT";
assert_eq!(
extract_lattice_and_multitenant(default),
(Some("default".to_owned()), None)
);
assert_eq!(
extract_lattice_and_multitenant(with_multi),
(Some("default".to_owned()), Some("AAAAAACOUNT".to_owned()))
);
assert_eq!(
extract_lattice_and_multitenant(with_dashes),
(
Some("550e8400-e29b-41d4-a716-446655440000".to_owned()),
Some("AAAAAACOUNT".to_owned())
)
);
}
}


@ -13,6 +13,12 @@ mod commands;
mod events;
pub mod manager;
/// The default time given for a command to ack. This is longer than events due to the possible need for more processing time
pub const DEFAULT_ACK_TIME: Duration = Duration::from_secs(2);
pub const LATTICE_METADATA_KEY: &str = "lattice";
pub const MULTITENANT_METADATA_KEY: &str = "multitenant_prefix";
pub use commands::*;
pub use events::*;
@ -156,5 +162,6 @@ pub trait CreateConsumer {
stream: async_nats::jetstream::stream::Stream,
topic: &str,
lattice_id: &str,
multitenant_prefix: Option<&str>,
) -> Result<Self::Output, NatsError>;
}


@ -1,5 +1,5 @@
use core::hash::{Hash, Hasher};
use std::collections::HashMap;
use std::collections::BTreeMap;
use serde::{Deserialize, Serialize};
@ -7,19 +7,17 @@ use serde::{Deserialize, Serialize};
/// and Hash since it can serve as a key
#[derive(Debug, Serialize, Deserialize, Default, Clone, Eq)]
pub struct ProviderInfo {
pub contract_id: String,
pub link_name: String,
// TODO: Should we actually parse the nkey?
pub public_key: String,
#[serde(alias = "public_key")]
pub provider_id: String,
#[serde(default)]
pub annotations: HashMap<String, String>,
pub provider_ref: String,
#[serde(default)]
pub annotations: BTreeMap<String, String>,
}
impl PartialEq for ProviderInfo {
fn eq(&self, other: &Self) -> bool {
self.public_key == other.public_key
&& self.contract_id == other.contract_id
&& self.link_name == other.link_name
self.provider_id == other.provider_id
}
}
@ -27,13 +25,11 @@ impl PartialEq for ProviderInfo {
// inventory where these three pieces need to be unique regardless of annotations
impl Hash for ProviderInfo {
fn hash<H: Hasher>(&self, state: &mut H) {
self.public_key.hash(state);
self.contract_id.hash(state);
self.link_name.hash(state);
self.provider_id.hash(state);
}
}
#[derive(Debug, Serialize, Deserialize, Default, Clone)]
#[derive(Debug, Serialize, Deserialize, Default, Clone, PartialEq, Eq)]
pub struct ProviderClaims {
pub expires_human: String,
// TODO: Should we actually parse the nkey?
@ -48,39 +44,27 @@ pub struct ProviderClaims {
pub version: String,
}
#[derive(Debug, Serialize, Deserialize, Default, Clone)]
#[derive(Debug, Serialize, Deserialize, Default, Clone, PartialEq, Eq)]
pub struct ProviderHealthCheckInfo {
pub link_name: String,
// TODO: Should we make this a parsed nkey?
pub public_key: String,
pub contract_id: String,
pub provider_id: String,
pub host_id: String,
}
#[derive(Debug, Serialize, Deserialize, Default, Clone)]
pub struct ActorClaims {
#[derive(Debug, Serialize, Deserialize, Default, Clone, PartialEq, Eq)]
pub struct ComponentClaims {
pub call_alias: Option<String>,
#[serde(rename = "caps")]
pub capabilites: Vec<String>,
#[serde(default)]
pub expires_human: String,
// TODO: parse as nkey?
#[serde(default)]
pub issuer: String,
#[serde(default)]
pub name: String,
#[serde(default)]
pub not_before_human: String,
pub revision: usize,
pub revision: Option<usize>,
// NOTE: This doesn't need a custom deserialize because unlike provider claims, these come out
// in an array
pub tags: Option<Vec<String>>,
pub version: String,
}
#[derive(Debug, Serialize, Deserialize, Default, Eq, PartialEq, Clone)]
pub struct Linkdef {
// TODO: parse as an nkey?
pub actor_id: String,
pub contract_id: String,
pub id: String,
pub link_name: String,
// TODO: parse as an nkey?
pub provider_id: String,
pub values: HashMap<String, String>,
pub version: Option<String>,
}


@ -2,16 +2,24 @@
//! attribute of a cloudevent
// TODO: These should probably be generated from a schema which we add into the actual cloud event
use std::{collections::HashMap, convert::TryFrom};
use std::{
collections::{BTreeMap, HashMap},
convert::TryFrom,
fmt::Display,
};
use cloudevents::{AttributesReader, Data, Event as CloudEvent};
use cloudevents::{AttributesReader, Data, Event as CloudEvent, EventBuilder, EventBuilderV10};
use serde::{Deserialize, Serialize};
use thiserror::Error;
use wasmcloud_control_interface::{ComponentDescription, Link, ProviderDescription};
use crate::model::Manifest;
use wadm_types::Manifest;
use super::data::*;
/// The source used for cloud events that wadm emits
pub const WADM_SOURCE: &str = "wadm";
// NOTE: this macro is a helper so we don't have to copy/paste these impls for each type. The first
// argument is the struct name you are generating for and the second argument is the event type as
// expected in the cloud event.
@ -83,10 +91,10 @@ pub trait EventType {
}
/// A lattice event
#[derive(Debug)]
#[derive(Debug, Clone, Deserialize, PartialEq, Eq)]
pub enum Event {
ActorStarted(ActorStarted),
ActorStopped(ActorStopped),
ComponentScaled(ComponentScaled),
ComponentScaleFailed(ComponentScaleFailed),
ProviderStarted(ProviderStarted),
ProviderStopped(ProviderStopped),
ProviderStartFailed(ProviderStartFailed),
@ -98,19 +106,47 @@ pub enum Event {
HostHeartbeat(HostHeartbeat),
LinkdefSet(LinkdefSet),
LinkdefDeleted(LinkdefDeleted),
ConfigSet(ConfigSet),
ConfigDeleted(ConfigDeleted),
// NOTE(thomastaylor312): We may change where and how these get published, but it makes sense
// for now to have them here even though they aren't technically lattice events
ManifestPublished(ManifestPublished),
ManifestUnpublished(ManifestUnpublished),
}
impl Display for Event {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
Event::ComponentScaled(_) => write!(f, "ComponentScaled"),
Event::ComponentScaleFailed(_) => write!(f, "ComponentScaleFailed"),
Event::ProviderStarted(_) => write!(f, "ProviderStarted"),
Event::ProviderStopped(_) => write!(f, "ProviderStopped"),
Event::ProviderStartFailed(_) => write!(f, "ProviderStartFailed"),
Event::ProviderHealthCheckPassed(_) => write!(f, "ProviderHealthCheckPassed"),
Event::ProviderHealthCheckFailed(_) => write!(f, "ProviderHealthCheckFailed"),
Event::ProviderHealthCheckStatus(_) => write!(f, "ProviderHealthCheckStatus"),
Event::HostStarted(_) => write!(f, "HostStarted"),
Event::HostStopped(_) => write!(f, "HostStopped"),
Event::HostHeartbeat(_) => write!(f, "HostHeartbeat"),
Event::LinkdefSet(_) => write!(f, "LinkdefSet"),
Event::LinkdefDeleted(_) => write!(f, "LinkdefDeleted"),
Event::ConfigSet(_) => write!(f, "ConfigSet"),
Event::ConfigDeleted(_) => write!(f, "ConfigDeleted"),
Event::ManifestPublished(_) => write!(f, "ManifestPublished"),
Event::ManifestUnpublished(_) => write!(f, "ManifestUnpublished"),
}
}
}
impl TryFrom<CloudEvent> for Event {
type Error = ConversionError;
fn try_from(value: CloudEvent) -> Result<Self, Self::Error> {
match value.ty() {
ActorStarted::TYPE => ActorStarted::try_from(value).map(Event::ActorStarted),
ActorStopped::TYPE => ActorStopped::try_from(value).map(Event::ActorStopped),
ComponentScaled::TYPE => ComponentScaled::try_from(value).map(Event::ComponentScaled),
ComponentScaleFailed::TYPE => {
ComponentScaleFailed::try_from(value).map(Event::ComponentScaleFailed)
}
ProviderStarted::TYPE => ProviderStarted::try_from(value).map(Event::ProviderStarted),
ProviderStopped::TYPE => ProviderStopped::try_from(value).map(Event::ProviderStopped),
ProviderStartFailed::TYPE => {
@ -130,6 +166,8 @@ impl TryFrom<CloudEvent> for Event {
HostHeartbeat::TYPE => HostHeartbeat::try_from(value).map(Event::HostHeartbeat),
LinkdefSet::TYPE => LinkdefSet::try_from(value).map(Event::LinkdefSet),
LinkdefDeleted::TYPE => LinkdefDeleted::try_from(value).map(Event::LinkdefDeleted),
ConfigSet::TYPE => ConfigSet::try_from(value).map(Event::ConfigSet),
ConfigDeleted::TYPE => ConfigDeleted::try_from(value).map(Event::ConfigDeleted),
ManifestPublished::TYPE => {
ManifestPublished::try_from(value).map(Event::ManifestPublished)
}
@ -141,6 +179,41 @@ impl TryFrom<CloudEvent> for Event {
}
}
impl TryFrom<Event> for CloudEvent {
type Error = anyhow::Error;
fn try_from(value: Event) -> Result<Self, Self::Error> {
let ty = match value {
Event::ComponentScaled(_) => ComponentScaled::TYPE,
Event::ComponentScaleFailed(_) => ComponentScaleFailed::TYPE,
Event::ProviderStarted(_) => ProviderStarted::TYPE,
Event::ProviderStopped(_) => ProviderStopped::TYPE,
Event::ProviderStartFailed(_) => ProviderStartFailed::TYPE,
Event::ProviderHealthCheckPassed(_) => ProviderHealthCheckPassed::TYPE,
Event::ProviderHealthCheckFailed(_) => ProviderHealthCheckFailed::TYPE,
Event::ProviderHealthCheckStatus(_) => ProviderHealthCheckStatus::TYPE,
Event::HostStarted(_) => HostStarted::TYPE,
Event::HostStopped(_) => HostStopped::TYPE,
Event::HostHeartbeat(_) => HostHeartbeat::TYPE,
Event::LinkdefSet(_) => LinkdefSet::TYPE,
Event::LinkdefDeleted(_) => LinkdefDeleted::TYPE,
Event::ConfigSet(_) => ConfigSet::TYPE,
Event::ConfigDeleted(_) => ConfigDeleted::TYPE,
Event::ManifestPublished(_) => ManifestPublished::TYPE,
Event::ManifestUnpublished(_) => ManifestUnpublished::TYPE,
};
EventBuilderV10::new()
.id(uuid::Uuid::new_v4().to_string())
.source(WADM_SOURCE)
.time(chrono::Utc::now())
.data("application/json", serde_json::to_value(value)?)
.ty(ty)
.build()
.map_err(anyhow::Error::from)
}
}
// Custom serialize that just delegates to the underlying event type
impl Serialize for Event {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
@@ -148,8 +221,8 @@ impl Serialize for Event {
S: serde::Serializer,
{
match self {
Event::ActorStarted(evt) => evt.serialize(serializer),
Event::ActorStopped(evt) => evt.serialize(serializer),
Event::ComponentScaled(evt) => evt.serialize(serializer),
Event::ComponentScaleFailed(evt) => evt.serialize(serializer),
Event::ProviderStarted(evt) => evt.serialize(serializer),
Event::ProviderStopped(evt) => evt.serialize(serializer),
Event::ProviderStartFailed(evt) => evt.serialize(serializer),
@@ -161,6 +234,8 @@ impl Serialize for Event {
Event::HostHeartbeat(evt) => evt.serialize(serializer),
Event::LinkdefSet(evt) => evt.serialize(serializer),
Event::LinkdefDeleted(evt) => evt.serialize(serializer),
Event::ConfigSet(evt) => evt.serialize(serializer),
Event::ConfigDeleted(evt) => evt.serialize(serializer),
Event::ManifestPublished(evt) => evt.serialize(serializer),
Event::ManifestUnpublished(evt) => evt.serialize(serializer),
}
@@ -176,8 +251,8 @@ impl Event {
/// Returns the underlying raw cloudevent type for the event
pub fn raw_type(&self) -> &str {
match self {
Event::ActorStarted(_) => ActorStarted::TYPE,
Event::ActorStopped(_) => ActorStopped::TYPE,
Event::ComponentScaled(_) => ComponentScaled::TYPE,
Event::ComponentScaleFailed(_) => ComponentScaleFailed::TYPE,
Event::ProviderStarted(_) => ProviderStarted::TYPE,
Event::ProviderStopped(_) => ProviderStopped::TYPE,
Event::ProviderStartFailed(_) => ProviderStartFailed::TYPE,
@@ -189,6 +264,8 @@ impl Event {
Event::HostHeartbeat(_) => HostHeartbeat::TYPE,
Event::LinkdefSet(_) => LinkdefSet::TYPE,
Event::LinkdefDeleted(_) => LinkdefDeleted::TYPE,
Event::ConfigSet(_) => ConfigSet::TYPE,
Event::ConfigDeleted(_) => ConfigDeleted::TYPE,
Event::ManifestPublished(_) => ManifestPublished::TYPE,
Event::ManifestUnpublished(_) => ManifestUnpublished::TYPE,
}
@@ -215,57 +292,55 @@ pub enum ConversionError {
// EVENTS START HERE
//
#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct ActorStarted {
pub annotations: HashMap<String, String>,
// Commented out for now because the host broke it, and we don't actually use this right now
// pub api_version: usize,
pub claims: ActorClaims,
// Component Events
#[derive(Debug, Serialize, Deserialize, Clone, PartialEq, Eq)]
pub struct ComponentScaled {
pub annotations: BTreeMap<String, String>,
pub claims: Option<ComponentClaims>,
pub image_ref: String,
// TODO: Parse as UUID?
pub instance_id: String,
// TODO: Parse as nkey?
pub public_key: String,
pub max_instances: usize,
pub component_id: String,
#[serde(default)]
pub host_id: String,
}
event_impl!(
ActorStarted,
"com.wasmcloud.lattice.actor_started",
ComponentScaled,
"com.wasmcloud.lattice.component_scaled",
source,
host_id
);
#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct ActorStopped {
#[serde(default)]
pub annotations: HashMap<String, String>,
pub instance_id: String,
// TODO: Parse as nkey?
pub public_key: String,
#[derive(Debug, Serialize, Deserialize, Clone, PartialEq, Eq)]
pub struct ComponentScaleFailed {
pub annotations: BTreeMap<String, String>,
pub claims: Option<ComponentClaims>,
pub image_ref: String,
pub max_instances: usize,
pub component_id: String,
#[serde(default)]
pub host_id: String,
pub error: String,
}
event_impl!(
ActorStopped,
"com.wasmcloud.lattice.actor_stopped",
ComponentScaleFailed,
"com.wasmcloud.lattice.component_scale_failed",
source,
host_id
);
#[derive(Debug, Serialize, Deserialize, Clone)]
// Provider Events
#[derive(Debug, Serialize, Deserialize, Clone, PartialEq, Eq)]
pub struct ProviderStarted {
pub annotations: HashMap<String, String>,
pub claims: ProviderClaims,
pub contract_id: String,
pub annotations: BTreeMap<String, String>,
#[serde(default)]
/// Optional provider claims
pub claims: Option<ProviderClaims>,
pub image_ref: String,
// TODO: parse as UUID?
pub instance_id: String,
pub link_name: String,
// TODO: parse as nkey?
pub public_key: String,
pub provider_id: String,
#[serde(default)]
pub host_id: String,
}
@@ -277,10 +352,10 @@ event_impl!(
host_id
);
#[derive(Debug, Serialize, Deserialize, Clone)]
#[derive(Debug, Serialize, Deserialize, Clone, PartialEq, Eq)]
pub struct ProviderStartFailed {
pub error: String,
pub link_name: String,
pub provider_id: String,
pub provider_ref: String,
#[serde(default)]
pub host_id: String,
@@ -293,20 +368,10 @@ event_impl!(
host_id
);
#[derive(Debug, Serialize, Deserialize, Clone)]
#[derive(Debug, Serialize, Deserialize, Clone, PartialEq, Eq)]
pub struct ProviderStopped {
#[serde(default)]
// TODO(thomastaylor312): Yep, there was a spelling bug in host 0.62.1. Revert this once
// 0.62.2 is out
#[serde(rename = "annotaions")]
pub annotations: HashMap<String, String>,
pub contract_id: String,
// TODO: parse as UUID?
pub instance_id: String,
pub link_name: String,
// TODO: parse as nkey?
pub public_key: String,
// We should probably use an actual enum here, but the Elixir host definitely isn't doing it
pub annotations: BTreeMap<String, String>,
pub provider_id: String,
pub reason: String,
#[serde(default)]
pub host_id: String,
@@ -319,72 +384,81 @@ event_impl!(
host_id
);
#[derive(Debug, Serialize, Deserialize, Clone)]
#[derive(Debug, Serialize, Deserialize, Clone, PartialEq, Eq)]
pub struct ProviderHealthCheckPassed {
#[serde(flatten)]
pub data: ProviderHealthCheckInfo,
#[serde(default)]
pub host_id: String,
}
event_impl!(
ProviderHealthCheckPassed,
"com.wasmcloud.lattice.health_check_passed",
source,
host_id
"com.wasmcloud.lattice.health_check_passed"
);
#[derive(Debug, Serialize, Deserialize, Clone)]
#[derive(Debug, Serialize, Deserialize, Clone, PartialEq, Eq)]
pub struct ProviderHealthCheckFailed {
#[serde(flatten)]
pub data: ProviderHealthCheckInfo,
#[serde(default)]
pub host_id: String,
}
event_impl!(
ProviderHealthCheckFailed,
"com.wasmcloud.lattice.health_check_failed",
source,
host_id
"com.wasmcloud.lattice.health_check_failed"
);
#[derive(Debug, Serialize, Deserialize, Clone)]
#[derive(Debug, Serialize, Deserialize, Clone, PartialEq, Eq)]
pub struct ProviderHealthCheckStatus {
#[serde(flatten)]
pub data: ProviderHealthCheckInfo,
#[serde(default)]
pub host_id: String,
}
event_impl!(
ProviderHealthCheckStatus,
"com.wasmcloud.lattice.health_check_status",
source,
host_id
"com.wasmcloud.lattice.health_check_status"
);
#[derive(Debug, Serialize, Deserialize, Clone)]
// Link Events
#[derive(Debug, Serialize, Deserialize, Clone, PartialEq, Eq)]
pub struct LinkdefSet {
#[serde(flatten)]
pub linkdef: Linkdef,
pub linkdef: Link,
}
event_impl!(LinkdefSet, "com.wasmcloud.lattice.linkdef_set");
#[derive(Debug, Serialize, Deserialize, Clone)]
#[derive(Debug, Serialize, Deserialize, Clone, PartialEq, Eq)]
pub struct LinkdefDeleted {
#[serde(flatten)]
pub linkdef: Linkdef,
pub source_id: String,
pub name: String,
pub wit_namespace: String,
pub wit_package: String,
}
event_impl!(LinkdefDeleted, "com.wasmcloud.lattice.linkdef_deleted");
#[derive(Debug, Serialize, Deserialize, Clone)]
// Config Events
#[derive(Debug, Serialize, Deserialize, Clone, PartialEq, Eq)]
pub struct ConfigSet {
pub config_name: String,
}
event_impl!(ConfigSet, "com.wasmcloud.lattice.config_set");
#[derive(Debug, Serialize, Deserialize, Clone, PartialEq, Eq)]
pub struct ConfigDeleted {
pub config_name: String,
}
event_impl!(ConfigDeleted, "com.wasmcloud.lattice.config_deleted");
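The `event_impl!` invocations above pair each event struct with a CloudEvent `type` string, and `TryFrom<CloudEvent>` dispatches on that string to pick the concrete event to deserialize. A stdlib-only sketch of that dispatch (an illustration, not the crate's API):

```rust
// Illustrative sketch: mapping a CloudEvent `type` attribute to the event
// name, mirroring how `TryFrom<CloudEvent> for Event` matches on `value.ty()`.
// Only a few of the type strings from above are shown.
fn event_name(ty: &str) -> Option<&'static str> {
    match ty {
        "com.wasmcloud.lattice.component_scaled" => Some("ComponentScaled"),
        "com.wasmcloud.lattice.linkdef_set" => Some("LinkdefSet"),
        "com.wasmcloud.lattice.config_set" => Some("ConfigSet"),
        "com.wasmcloud.lattice.config_deleted" => Some("ConfigDeleted"),
        _ => None, // unknown types become a conversion error in the real code
    }
}

fn main() {
    assert_eq!(event_name("com.wasmcloud.lattice.config_set"), Some("ConfigSet"));
    assert_eq!(event_name("com.example.unknown"), None);
    println!("ok");
}
```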
// Host Events
#[derive(Debug, Serialize, Deserialize, Clone, PartialEq, Eq)]
pub struct HostStarted {
pub labels: HashMap<String, String>,
pub friendly_name: String,
// TODO: Parse as nkey?
#[serde(default)]
pub id: String,
}
@@ -396,10 +470,9 @@ event_impl!(
id
);
#[derive(Debug, Serialize, Deserialize, Clone)]
#[derive(Debug, Serialize, Deserialize, Clone, PartialEq, Eq)]
pub struct HostStopped {
pub labels: HashMap<String, String>,
// TODO: Parse as nkey?
#[serde(default)]
pub id: String,
}
@@ -411,30 +484,40 @@ event_impl!(
id
);
#[derive(Debug, Serialize, Deserialize, Clone)]
#[derive(Debug, Serialize, Deserialize, Clone, PartialEq, Eq)]
pub struct HostHeartbeat {
pub actors: HashMap<String, usize>,
/// Components running on this host.
pub components: Vec<ComponentDescription>,
/// Providers running on this host
pub providers: Vec<ProviderDescription>,
/// The host's unique ID
#[serde(default, alias = "id")]
pub host_id: String,
/// The host's cluster issuer public key
#[serde(default)]
pub issuer: String,
/// The host's human-readable friendly name
pub friendly_name: String,
/// The host's labels
pub labels: HashMap<String, String>,
#[serde(default)]
pub annotations: HashMap<String, String>,
pub providers: Vec<ProviderInfo>,
pub uptime_human: String,
pub uptime_seconds: usize,
/// The host version
pub version: semver::Version,
// TODO: Parse as nkey?
#[serde(default)]
pub id: String,
/// The host uptime in human-readable form
pub uptime_human: String,
/// The host uptime in seconds
pub uptime_seconds: u64,
}
event_impl!(
HostHeartbeat,
"com.wasmcloud.lattice.host_heartbeat",
source,
id
host_id
);
#[derive(Debug, Serialize, Deserialize, Clone)]
// Manifest Events
#[derive(Debug, Serialize, Deserialize, Clone, PartialEq, Eq)]
pub struct ManifestPublished {
#[serde(flatten)]
pub manifest: Manifest,
@@ -442,7 +525,7 @@ pub struct ManifestPublished {
event_impl!(ManifestPublished, "com.wadm.manifest_published");
#[derive(Debug, Serialize, Deserialize, Clone)]
#[derive(Debug, Serialize, Deserialize, Clone, PartialEq, Eq)]
pub struct ManifestUnpublished {
pub name: String,
}
@@ -482,7 +565,8 @@ mod test {
#[test]
fn test_all_supported_events() {
let raw = std::fs::read("./test/data/events.json").expect("Unable to load test data");
let raw = std::fs::read("../../tests/fixtures/manifests/events.json")
.expect("Unable to load test data");
let all_events: Vec<cloudevents::Event> = serde_json::from_slice(&raw).unwrap();

crates/wadm/src/lib.rs Normal file

@@ -0,0 +1,479 @@
use std::sync::Arc;
use std::time::Duration;
use anyhow::Result;
use async_nats::jetstream::{stream::Stream, Context};
use config::WadmConfig;
use tokio::{sync::Semaphore, task::JoinSet};
use tracing::log::debug;
#[cfg(feature = "http_admin")]
use anyhow::Context as _;
#[cfg(feature = "http_admin")]
use hyper::body::Bytes;
#[cfg(feature = "http_admin")]
use hyper_util::rt::{TokioExecutor, TokioIo};
#[cfg(feature = "http_admin")]
use tokio::net::TcpListener;
use crate::{
connections::ControlClientConstructor,
consumers::{
manager::{ConsumerManager, WorkerCreator},
*,
},
nats_utils::LatticeIdParser,
scaler::manager::{ScalerManager, WADM_NOTIFY_PREFIX},
server::{ManifestNotifier, Server},
storage::{nats_kv::NatsKvStore, reaper::Reaper},
workers::{CommandPublisher, CommandWorker, EventWorker, StatusPublisher},
};
pub use nats::StreamPersistence;
pub mod commands;
pub mod config;
pub mod consumers;
pub mod events;
pub mod nats_utils;
pub mod publisher;
pub mod scaler;
pub mod server;
pub mod storage;
pub mod workers;
mod connections;
pub(crate) mod model;
mod nats;
mod observer;
#[cfg(test)]
pub mod test_util;
/// Default amount of time events should stay in the stream. This is 2x the heartbeat interval,
/// plus some wiggle room. Exported to make setting defaults easy
pub const DEFAULT_EXPIRY_TIME: Duration = Duration::from_secs(70);
/// Default topic to listen to for all lattice events
pub const DEFAULT_EVENTS_TOPIC: &str = "wasmbus.evt.*.>";
/// Default topic to listen to for all lattice events in a multitenant deployment
pub const DEFAULT_MULTITENANT_EVENTS_TOPIC: &str = "*.wasmbus.evt.*.>";
/// Default topic to listen to for all commands
pub const DEFAULT_COMMANDS_TOPIC: &str = "wadm.cmd.*";
/// Default topic to listen to for all status updates. wadm.status.<lattice_id>.<manifest_name>
pub const DEFAULT_STATUS_TOPIC: &str = "wadm.status.*.*";
/// Default topic to listen to for all wadm event updates
pub const DEFAULT_WADM_EVENTS_TOPIC: &str = "wadm.evt.*.>";
/// Default internal wadm event consumer listen topic for the merged wadm and wasmbus events stream.
pub const DEFAULT_WADM_EVENT_CONSUMER_TOPIC: &str = "wadm_event_consumer.evt.*.>";
/// Managed by annotation used for labeling things properly in wadm
pub const MANAGED_BY_ANNOTATION: &str = "wasmcloud.dev/managed-by";
/// Identifier for managed by annotation. This is the value [`MANAGED_BY_ANNOTATION`] is set to
pub const MANAGED_BY_IDENTIFIER: &str = "wadm";
/// An annotation that denotes which model a resource belongs to
pub const APP_SPEC_ANNOTATION: &str = "wasmcloud.dev/appspec";
/// An annotation that denotes which scaler is managing a resource
pub const SCALER_KEY: &str = "wasmcloud.dev/scaler";
/// The default link name. In the future, this will likely be pulled in from another crate
pub const DEFAULT_LINK_NAME: &str = "default";
/// Default stream name for wadm events
pub const DEFAULT_WADM_EVENT_STREAM_NAME: &str = "wadm_events";
/// Default stream name for wadm event consumer
pub const DEFAULT_WADM_EVENT_CONSUMER_STREAM_NAME: &str = "wadm_event_consumer";
/// Default stream name for wadm commands
pub const DEFAULT_COMMAND_STREAM_NAME: &str = "wadm_commands";
/// Default stream name for wadm status
pub const DEFAULT_STATUS_STREAM_NAME: &str = "wadm_status";
/// Default stream name for wadm notifications
pub const DEFAULT_NOTIFY_STREAM_NAME: &str = "wadm_notify";
/// Default stream name for wasmbus events
pub const DEFAULT_WASMBUS_EVENT_STREAM_NAME: &str = "wasmbus_events";
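Given the default subjects above, the lattice id sits in a fixed token of the event subject: the third token of `wasmbus.evt.<lattice_id>.>`, with the multitenant form prepending an account token. A hedged, stdlib-only sketch of that extraction (an illustration, not the crate's `LatticeIdParser`):

```rust
// Sketch: pull the lattice id out of an event subject matching the default
// topics `wasmbus.evt.*.>` (single tenant) or `*.wasmbus.evt.*.>`
// (multitenant, where the leading token is the account).
fn lattice_id(subject: &str, multitenant: bool) -> Option<&str> {
    let mut parts = subject.split('.');
    if multitenant {
        parts.next()?; // skip the account token
    }
    match (parts.next()?, parts.next()?) {
        ("wasmbus", "evt") => parts.next(),
        _ => None, // not a lattice event subject
    }
}

fn main() {
    assert_eq!(lattice_id("wasmbus.evt.default.host_started", false), Some("default"));
    assert_eq!(lattice_id("ACCOUNT.wasmbus.evt.default.host_started", true), Some("default"));
    assert_eq!(lattice_id("wadm.cmd.default", false), None);
    println!("ok");
}
```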
/// Start wadm with the provided [WadmConfig], returning [JoinSet] with two tasks:
/// 1. The server task that listens for API requests
/// 2. The observer task that listens for events and commands
///
/// When embedding wadm in another application, this function should be called to start the wadm
/// server and observer tasks.
///
/// # Usage
///
/// ```no_run
/// async {
/// let config = wadm::config::WadmConfig::default();
/// let mut wadm = wadm::start_wadm(config).await.expect("should start wadm");
/// tokio::select! {
/// res = wadm.join_next() => {
/// match res {
/// Some(Ok(_)) => {
/// tracing::info!("WADM has exited successfully");
/// std::process::exit(0);
/// }
/// Some(Err(e)) => {
/// tracing::error!("WADM has exited with an error: {:?}", e);
/// std::process::exit(1);
/// }
/// None => {
/// tracing::info!("WADM server did not start");
/// std::process::exit(0);
/// }
/// }
/// }
/// _ = tokio::signal::ctrl_c() => {
/// tracing::info!("Received Ctrl+C, shutting down");
/// std::process::exit(0);
/// }
/// }
/// };
/// ```
pub async fn start_wadm(config: WadmConfig) -> Result<JoinSet<Result<()>>> {
// Build storage adapter for lattice state (on by default)
let (client, context) = nats::get_client_and_context(
config.nats_server.clone(),
config.domain.clone(),
config.nats_seed.clone(),
config.nats_jwt.clone(),
config.nats_creds.clone(),
config.nats_tls_ca_file.clone(),
)
.await?;
// TODO: We will probably need to set up all the flags (like lattice prefix and topic prefix) down the line
let connection_pool = ControlClientConstructor::new(client.clone(), None);
let trimmer: &[_] = &['.', '>', '*'];
let store = nats::ensure_kv_bucket(
&context,
config.state_bucket,
1,
config.max_state_bucket_bytes,
config.stream_persistence.into(),
)
.await?;
let state_storage = NatsKvStore::new(store);
let manifest_storage = nats::ensure_kv_bucket(
&context,
config.manifest_bucket,
1,
config.max_manifest_bucket_bytes,
config.stream_persistence.into(),
)
.await?;
let internal_stream_name = |stream_name: &str| -> String {
match config.stream_prefix.clone() {
Some(stream_prefix) => {
format!(
"{}.{}",
stream_prefix.trim_end_matches(trimmer),
stream_name
)
}
None => stream_name.to_string(),
}
};
debug!("Ensuring wadm event stream");
let event_stream = nats::ensure_limits_stream(
&context,
internal_stream_name(DEFAULT_WADM_EVENT_STREAM_NAME),
vec![DEFAULT_WADM_EVENTS_TOPIC.to_owned()],
Some(
"A stream that stores all events coming in on the wadm.evt subject in a cluster"
.to_string(),
),
config.max_event_stream_bytes,
config.stream_persistence.into(),
)
.await?;
debug!("Ensuring command stream");
let command_stream = nats::ensure_stream(
&context,
internal_stream_name(DEFAULT_COMMAND_STREAM_NAME),
vec![DEFAULT_COMMANDS_TOPIC.to_owned()],
Some("A stream that stores all commands for wadm".to_string()),
config.max_command_stream_bytes,
config.stream_persistence.into(),
)
.await?;
let status_stream = nats::ensure_status_stream(
&context,
internal_stream_name(DEFAULT_STATUS_STREAM_NAME),
vec![DEFAULT_STATUS_TOPIC.to_owned()],
config.max_status_stream_bytes,
config.stream_persistence.into(),
)
.await?;
debug!("Ensuring wasmbus event stream");
// Remove the previous wadm_(multitenant)_mirror streams so that they don't
// prevent us from creating the new wasmbus_(multitenant)_events stream
// TODO(joonas): Remove this some time in the future once we're confident
// enough that there are no more wadm_(multitenant)_mirror streams around.
for mirror_stream_name in &["wadm_mirror", "wadm_multitenant_mirror"] {
if (context.get_stream(mirror_stream_name).await).is_ok() {
context.delete_stream(mirror_stream_name).await?;
}
}
let wasmbus_event_subjects = match config.multitenant {
true => vec![DEFAULT_MULTITENANT_EVENTS_TOPIC.to_owned()],
false => vec![DEFAULT_EVENTS_TOPIC.to_owned()],
};
let wasmbus_event_stream = nats::ensure_limits_stream(
&context,
DEFAULT_WASMBUS_EVENT_STREAM_NAME.to_string(),
wasmbus_event_subjects.clone(),
Some(
"A stream that stores all events coming in on the wasmbus.evt subject in a cluster"
.to_string(),
),
config.max_wasmbus_event_stream_bytes,
config.stream_persistence.into(),
)
.await?;
debug!("Ensuring notify stream");
let notify_stream = nats::ensure_notify_stream(
&context,
DEFAULT_NOTIFY_STREAM_NAME.to_owned(),
vec![format!("{WADM_NOTIFY_PREFIX}.*")],
config.max_notify_stream_bytes,
config.stream_persistence.into(),
)
.await?;
debug!("Ensuring event consumer stream");
let event_consumer_stream = nats::ensure_event_consumer_stream(
&context,
DEFAULT_WADM_EVENT_CONSUMER_STREAM_NAME.to_owned(),
DEFAULT_WADM_EVENT_CONSUMER_TOPIC.to_owned(),
vec![&wasmbus_event_stream, &event_stream],
Some(
"A stream that sources from wadm_events and wasmbus_events for wadm event consumer's use"
.to_string(),
),
config.max_event_consumer_stream_bytes,
config.stream_persistence.into(),
)
.await?;
debug!("Creating event consumer manager");
let permit_pool = Arc::new(Semaphore::new(
config.max_jobs.unwrap_or(Semaphore::MAX_PERMITS),
));
let event_worker_creator = EventWorkerCreator {
state_store: state_storage.clone(),
manifest_store: manifest_storage.clone(),
pool: connection_pool.clone(),
command_topic_prefix: DEFAULT_COMMANDS_TOPIC.trim_matches(trimmer).to_owned(),
publisher: context.clone(),
notify_stream,
status_stream: status_stream.clone(),
};
let events_manager: ConsumerManager<EventConsumer> = ConsumerManager::new(
permit_pool.clone(),
event_consumer_stream,
event_worker_creator.clone(),
config.multitenant,
)
.await;
debug!("Creating command consumer manager");
let command_worker_creator = CommandWorkerCreator {
pool: connection_pool,
};
let commands_manager: ConsumerManager<CommandConsumer> = ConsumerManager::new(
permit_pool.clone(),
command_stream,
command_worker_creator.clone(),
config.multitenant,
)
.await;
// TODO(thomastaylor312): We might want to figure out how not to run this globally. Sending a
// synthetic event to the stream could be nice, but all the wadm processes would still fire
// off that tick, resulting in multiple processes handling it. We could maybe get it to work with
// the right duplicate window, but we have no idea when each process could fire a tick. The worst
// case right now is that multiple fire simultaneously and a few of them just delete nothing
let reaper = Reaper::new(
state_storage.clone(),
Duration::from_secs(config.cleanup_interval / 2),
[],
);
let wadm_event_prefix = DEFAULT_WADM_EVENTS_TOPIC.trim_matches(trimmer);
debug!("Creating lattice observer");
let observer = observer::Observer {
parser: LatticeIdParser::new("wasmbus", config.multitenant),
command_manager: commands_manager,
event_manager: events_manager,
reaper,
client: client.clone(),
command_worker_creator,
event_worker_creator,
};
debug!("Subscribing to API topic");
let server = Server::new(
manifest_storage,
client,
Some(&config.api_prefix),
config.multitenant,
status_stream,
ManifestNotifier::new(wadm_event_prefix, context),
)
.await?;
let mut tasks = JoinSet::new();
#[cfg(feature = "http_admin")]
if let Some(addr) = config.http_admin {
debug!("Setting up HTTP administration endpoint");
let socket = TcpListener::bind(addr)
.await
.context("failed to bind on HTTP administration endpoint")?;
let svc = hyper::service::service_fn(move |req| {
const OK: &str = r#"{"status":"ok"}"#;
async move {
let (http::request::Parts { method, uri, .. }, _) = req.into_parts();
match (method.as_str(), uri.path()) {
("HEAD", "/livez") => Ok(http::Response::default()),
("GET", "/livez") => Ok(http::Response::new(http_body_util::Full::new(
Bytes::from(OK),
))),
(method, "/livez") => http::Response::builder()
.status(http::StatusCode::METHOD_NOT_ALLOWED)
.body(http_body_util::Full::new(Bytes::from(format!(
"method `{method}` not supported for path `/livez`"
)))),
("HEAD", "/readyz") => Ok(http::Response::default()),
("GET", "/readyz") => Ok(http::Response::new(http_body_util::Full::new(
Bytes::from(OK),
))),
(method, "/readyz") => http::Response::builder()
.status(http::StatusCode::METHOD_NOT_ALLOWED)
.body(http_body_util::Full::new(Bytes::from(format!(
"method `{method}` not supported for path `/readyz`"
)))),
(.., path) => http::Response::builder()
.status(http::StatusCode::NOT_FOUND)
.body(http_body_util::Full::new(Bytes::from(format!(
"unknown endpoint `{path}`"
)))),
}
}
});
let srv = hyper_util::server::conn::auto::Builder::new(TokioExecutor::new());
tasks.spawn(async move {
loop {
let stream = match socket.accept().await {
Ok((stream, _)) => stream,
Err(err) => {
tracing::error!(?err, "failed to accept HTTP administration connection");
continue;
}
};
if let Err(err) = srv.serve_connection(TokioIo::new(stream), svc).await {
tracing::error!(?err, "failed to serve HTTP administration connection");
}
}
});
}
// Subscribe and handle API requests
tasks.spawn(server.serve());
// Observe and handle events
tasks.spawn(observer.observe(wasmbus_event_subjects));
Ok(tasks)
}
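The `internal_stream_name` closure in `start_wadm` above can be sketched as a standalone function: a configured stream prefix has trailing subject characters (`.`, `>`, `*`) trimmed before being joined to the stream name.

```rust
// Standalone sketch of the stream-name prefixing logic from `start_wadm`.
fn internal_stream_name(stream_prefix: Option<&str>, stream_name: &str) -> String {
    let trimmer: &[char] = &['.', '>', '*'];
    match stream_prefix {
        // e.g. a prefix configured as "account.>" becomes "account"
        Some(prefix) => format!("{}.{}", prefix.trim_end_matches(trimmer), stream_name),
        None => stream_name.to_string(),
    }
}

fn main() {
    assert_eq!(internal_stream_name(Some("account.>"), "wadm_events"), "account.wadm_events");
    assert_eq!(internal_stream_name(None, "wadm_commands"), "wadm_commands");
    println!("ok");
}
```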
#[derive(Clone)]
struct CommandWorkerCreator {
pool: ControlClientConstructor,
}
#[async_trait::async_trait]
impl WorkerCreator for CommandWorkerCreator {
type Output = CommandWorker;
async fn create(
&self,
lattice_id: &str,
multitenant_prefix: Option<&str>,
) -> anyhow::Result<Self::Output> {
let client = self.pool.get_connection(lattice_id, multitenant_prefix);
Ok(CommandWorker::new(client))
}
}
#[derive(Clone)]
struct EventWorkerCreator<StateStore> {
state_store: StateStore,
manifest_store: async_nats::jetstream::kv::Store,
pool: ControlClientConstructor,
command_topic_prefix: String,
publisher: Context,
notify_stream: Stream,
status_stream: Stream,
}
#[async_trait::async_trait]
impl<StateStore> WorkerCreator for EventWorkerCreator<StateStore>
where
StateStore: crate::storage::Store + Send + Sync + Clone + 'static,
{
type Output = EventWorker<StateStore, wasmcloud_control_interface::Client, Context>;
async fn create(
&self,
lattice_id: &str,
multitenant_prefix: Option<&str>,
) -> anyhow::Result<Self::Output> {
let client = self.pool.get_connection(lattice_id, multitenant_prefix);
let command_publisher = CommandPublisher::new(
self.publisher.clone(),
&format!("{}.{lattice_id}", self.command_topic_prefix),
);
let status_publisher = StatusPublisher::new(
self.publisher.clone(),
Some(self.status_stream.clone()),
&format!("wadm.status.{lattice_id}"),
);
let manager = ScalerManager::new(
self.publisher.clone(),
self.notify_stream.clone(),
lattice_id,
multitenant_prefix,
self.state_store.clone(),
self.manifest_store.clone(),
command_publisher.clone(),
status_publisher.clone(),
client.clone(),
)
.await?;
Ok(EventWorker::new(
self.state_store.clone(),
client,
command_publisher,
status_publisher,
manager,
))
}
}


@@ -2,30 +2,19 @@
use indexmap::IndexMap;
use serde::{Deserialize, Serialize};
use crate::model::Manifest;
use crate::server::ComponentStatus;
use crate::storage::StateKind;
use wadm_types::{Manifest, LATEST_VERSION, VERSION_ANNOTATION_KEY};
use super::LATEST_VERSION;
/// This struct represents a single manfiest, with its version history. Internally these are stored
/// This struct represents a single manifest, with its version history. Internally these are stored
/// as an indexmap keyed by version name
#[derive(Debug, Serialize, Deserialize, Default)]
#[derive(Debug, Serialize, Deserialize, Default, Clone)]
pub(crate) struct StoredManifest {
// Ordering matters for how we store a manifest, so we need to use an index map to preserve
// insertion order _and_ have quick access to specific versions
// NOTE(thomastaylor312): We probably should have a configurable limit for how many we keep
// around in history so they don't balloon forever
manifests: IndexMap<String, Manifest>,
// Set only if a version is deployed
deployed_version: Option<String>,
// TODO: Figure out which status to store (probably just the status for each component). We can
// convert it to the external facing type from the server as needed
status: Vec<ComponentStatus>,
// An optional top level status message to help with debugging or status for the whole manifest
status_message: Option<String>,
}
impl StateKind for StoredManifest {
const KIND: &'static str = "manifest";
}
impl StoredManifest {
@@ -39,8 +28,20 @@ impl StoredManifest {
/// Adds the given manifest, returning `false` if unable to add (e.g. the version already
/// exists)
pub fn add_version(&mut self, manifest: Manifest) -> bool {
let version = manifest.version().to_owned();
pub fn add_version(&mut self, mut manifest: Manifest) -> bool {
let version = match manifest.metadata.annotations.get(VERSION_ANNOTATION_KEY) {
Some(v) => v.to_string(),
None => {
// If a version is not given, automatically add a new version with a specific ULID (that way
// it can be sorted in order)
let v = ulid::Ulid::new().to_string();
manifest
.metadata
.annotations
.insert(VERSION_ANNOTATION_KEY.to_string(), v.clone());
v
}
};
if self.manifests.contains_key(&version) {
return false;
}
@@ -112,6 +113,11 @@ impl StoredManifest {
.expect("A manifest should always exist. This is programmer error")
}
/// Returns the name of the manifest
pub fn name(&self) -> &str {
&self.get_current().metadata.name
}
/// Gets a reference to the specified version of the manifest
pub fn get_version(&self, version: &str) -> Option<&Manifest> {
self.manifests.get(version)
@@ -133,14 +139,54 @@ impl StoredManifest {
pub fn count(&self) -> usize {
self.manifests.len()
}
}
/// Returns a reference to the current status message, if any
pub fn status_message(&self) -> Option<&str> {
self.status_message.as_deref()
#[cfg(test)]
mod test {
use super::*;
use std::{io::BufReader, path::Path};
use anyhow::Result;
use wadm_types::VERSION_ANNOTATION_KEY;
pub(crate) fn deserialize_yaml(filepath: impl AsRef<Path>) -> Result<Manifest> {
let file = std::fs::File::open(filepath)?;
let reader = BufReader::new(file);
let yaml_string: Manifest = serde_yaml::from_reader(reader)?;
Ok(yaml_string)
}
/// Returns a reference to the current status
pub fn status(&self) -> &Vec<ComponentStatus> {
&self.status
#[test]
fn test_versioning() {
let mut manifest = deserialize_yaml("../../tests/fixtures/manifests/simple2.yaml")
.expect("Should be able to parse");
let mut stored = StoredManifest::default();
assert!(
stored.add_version(manifest.clone()),
"Should be able to add manifest without a version set"
);
let updated = stored.get_current();
ulid::Ulid::from_string(updated.version()).expect("Should have had a ULID set");
// Now update the manifest and add a new custom version
manifest
.metadata
.annotations
.insert(VERSION_ANNOTATION_KEY.to_string(), "v0.0.1".to_string());
assert!(
stored.add_version(manifest.clone()),
"Should be able to add manifest with custom version"
);
let updated = stored.get_current();
assert_eq!(updated.version(), "v0.0.1", "Version should still be set");
// Try adding again and make sure that still fails
assert!(
!stored.add_version(manifest),
"Adding duplicate version should fail"
);
}
}
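The versioning rules exercised by `test_versioning` above can be modeled with the stdlib alone: versions keep insertion order (an `IndexMap` in the real code; a `Vec` of pairs stands in here), the newest insertion is "current", and duplicate versions are rejected.

```rust
// Stdlib-only model of StoredManifest's version bookkeeping (a sketch, not
// the crate's type; real manifests are full structs, not strings).
struct Versions(Vec<(String, String)>);

impl Versions {
    fn add_version(&mut self, version: &str, body: &str) -> bool {
        if self.0.iter().any(|(v, _)| v == version) {
            return false; // version already exists
        }
        self.0.push((version.to_string(), body.to_string()));
        true
    }
    fn current_version(&self) -> Option<&str> {
        self.0.last().map(|(v, _)| v.as_str())
    }
}

fn main() {
    let mut stored = Versions(Vec::new());
    assert!(stored.add_version("generated-ulid-placeholder", "auto"));
    assert!(stored.add_version("v0.0.1", "custom"));
    assert_eq!(stored.current_version(), Some("v0.0.1"));
    assert!(!stored.add_version("v0.0.1", "duplicate"));
    println!("ok");
}
```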

crates/wadm/src/nats.rs Normal file

@@ -0,0 +1,410 @@
use std::path::PathBuf;
use anyhow::{anyhow, Result};
use async_nats::{
jetstream::{
self,
kv::{Config as KvConfig, Store},
stream::{Config as StreamConfig, Source, StorageType, Stream, SubjectTransform},
Context,
},
Client, ConnectOptions,
};
use crate::DEFAULT_EXPIRY_TIME;
use tracing::{debug, warn};
#[derive(Debug, Clone, Copy, Default)]
pub enum StreamPersistence {
#[default]
File,
Memory,
}
impl std::fmt::Display for StreamPersistence {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
StreamPersistence::File => write!(f, "file"),
StreamPersistence::Memory => write!(f, "memory"),
}
}
}
impl From<StreamPersistence> for StorageType {
fn from(persistence: StreamPersistence) -> Self {
match persistence {
StreamPersistence::File => StorageType::File,
StreamPersistence::Memory => StorageType::Memory,
}
}
}
impl From<&str> for StreamPersistence {
fn from(persistence: &str) -> Self {
match persistence {
"file" => StreamPersistence::File,
"memory" => StreamPersistence::Memory,
_ => StreamPersistence::File,
}
}
}
/// Creates a NATS client from the given options
pub(crate) async fn get_client_and_context(
url: String,
js_domain: Option<String>,
seed: Option<String>,
jwt: Option<String>,
creds_path: Option<PathBuf>,
ca_path: Option<PathBuf>,
) -> Result<(Client, Context)> {
let client = if seed.is_none() && jwt.is_none() && creds_path.is_none() {
let mut opts = async_nats::ConnectOptions::new();
if let Some(ca) = ca_path {
opts = opts.add_root_certificates(ca).require_tls(true);
}
opts.connect(url).await?
} else {
let opts = build_nats_options(seed, jwt, creds_path, ca_path).await?;
async_nats::connect_with_options(url, opts).await?
};
let context = if let Some(domain) = js_domain {
jetstream::with_domain(client.clone(), domain)
} else {
jetstream::new(client.clone())
};
Ok((client, context))
}
async fn build_nats_options(
seed: Option<String>,
jwt: Option<String>,
creds_path: Option<PathBuf>,
ca_path: Option<PathBuf>,
) -> Result<ConnectOptions> {
let mut opts = async_nats::ConnectOptions::new();
opts = match (seed, jwt, creds_path) {
(Some(seed), Some(jwt), None) => {
let jwt = resolve_jwt(jwt).await?;
let kp = std::sync::Arc::new(get_seed(seed).await?);
opts.jwt(jwt, move |nonce| {
let key_pair = kp.clone();
async move { key_pair.sign(&nonce).map_err(async_nats::AuthError::new) }
})
}
(None, None, Some(creds)) => opts.credentials_file(creds).await?,
_ => {
// We shouldn't ever get here due to the requirements on the flags, but return a helpful error just in case
return Err(anyhow::anyhow!(
"Got too many options. Make sure to provide a seed and jwt or a creds path"
));
}
};
if let Some(ca) = ca_path {
opts = opts.add_root_certificates(ca).require_tls(true);
}
Ok(opts)
}
/// Takes a string that could be a raw seed, or a path and does all the necessary loading and parsing steps
async fn get_seed(seed: String) -> Result<nkeys::KeyPair> {
// MAGIC NUMBER: 58 is the length of an nkeys seed, which always starts with 'S'
let raw_seed = if seed.len() == 58 && seed.starts_with('S') {
seed
} else {
tokio::fs::read_to_string(seed).await?
};
nkeys::KeyPair::from_seed(&raw_seed).map_err(anyhow::Error::from)
}
/// Resolves a JWT value by either returning the string itself if it's a valid JWT
/// or by loading the contents of a file specified by the JWT value.
///
/// # Arguments
///
/// * `jwt_or_file` - A string that represents either a JWT or a file path containing a JWT.
///
/// # Returns
///
/// A `Result` containing a string if successful, or an error if the JWT value
/// is invalid or the file cannot be read.
async fn resolve_jwt(jwt_or_file: String) -> Result<String> {
if tokio::fs::metadata(&jwt_or_file)
.await
.map(|metadata| metadata.is_file())
.unwrap_or(false)
{
tokio::fs::read_to_string(jwt_or_file)
.await
.map_err(|e| anyhow!("Error loading JWT from file: {e}"))
} else {
// We could do more validation on the JWT here, but if the JWT is invalid then
// connecting will fail anyways
Ok(jwt_or_file)
}
}
/// A helper that ensures that the given stream name exists, using defaults to create if it does
/// not. Returns the handle to the stream
pub async fn ensure_stream(
context: &Context,
name: String,
subjects: Vec<String>,
description: Option<String>,
max_bytes: i64,
storage: StorageType,
) -> Result<Stream> {
debug!("Ensuring stream {name} exists");
let stream_config = StreamConfig {
name: name.clone(),
description,
num_replicas: 1,
retention: async_nats::jetstream::stream::RetentionPolicy::WorkQueue,
subjects,
max_age: DEFAULT_EXPIRY_TIME,
allow_rollup: false,
max_bytes,
storage,
..Default::default()
};
if let Ok(stream) = context.get_stream(&name).await {
// For now, we only check if the subjects are the same in order to make sure that
// newer versions of wadm adjust subjects appropriately. In the case that developers
// want to alter the storage or replicas of a stream, for example,
// we don't want to override that configuration.
if stream.cached_info().config.subjects == stream_config.subjects {
return Ok(stream);
} else {
warn!("Found stream {name} with different configuration, deleting and recreating");
context.delete_stream(name).await?;
}
}
context
.get_or_create_stream(stream_config)
.await
.map_err(|e| anyhow::anyhow!("{e:?}"))
}
pub async fn ensure_limits_stream(
context: &Context,
name: String,
subjects: Vec<String>,
description: Option<String>,
max_bytes: i64,
storage: StorageType,
) -> Result<Stream> {
debug!("Ensuring stream {name} exists");
let stream_config = StreamConfig {
name: name.clone(),
description,
num_replicas: 1,
retention: async_nats::jetstream::stream::RetentionPolicy::Limits,
subjects,
max_age: DEFAULT_EXPIRY_TIME,
allow_rollup: false,
max_bytes,
storage,
..Default::default()
};
if let Ok(stream) = context.get_stream(&name).await {
// For now, we only check if the subjects are the same in order to make sure that
// newer versions of wadm adjust subjects appropriately. In the case that developers
// want to alter the storage or replicas of a stream, for example,
// we don't want to override that configuration.
if stream.cached_info().config.subjects == stream_config.subjects {
return Ok(stream);
} else {
warn!("Found stream {name} with different configuration, deleting and recreating");
context.delete_stream(name).await?;
}
}
context
.get_or_create_stream(stream_config)
.await
.map_err(|e| anyhow::anyhow!("{e:?}"))
}
pub async fn ensure_event_consumer_stream(
context: &Context,
name: String,
subject: String,
streams: Vec<&Stream>,
description: Option<String>,
max_bytes: i64,
storage: StorageType,
) -> Result<Stream> {
debug!("Ensuring stream {name} exists");
// This maps the upstream (wasmbus.evt.*.> & wadm.evt.*.>) Streams into
// a set of configuration for the downstream wadm event consumer Stream
// that consolidates them into a single set of subjects (wadm_event_consumer.evt.*.>)
// to be consumable by the wadm event consumer.
let sources = streams
.iter()
.map(|stream| stream.cached_info().config.clone())
.map(|stream_config| Source {
name: stream_config.name,
subject_transforms: stream_config
.subjects
.iter()
.map(|stream_subject| SubjectTransform {
source: stream_subject.to_owned(),
destination: match stream_subject.starts_with('*') {
// If we have a multi-tenant stream subject, we need to replace
// the second wildcard since the first one represents the account id,
// otherwise replace the first one:
//
// multi-tenant: <account-id>.<subject>.evt.<lattice-id>.<event-type>
// single-tenant: <subject>.evt.<lattice-id>.<event-type>
true => subject.replacen('*', "{{wildcard(2)}}", 1),
false => subject.replacen('*', "{{wildcard(1)}}", 1),
},
})
.collect(),
..Default::default()
})
.collect();
let stream_config = StreamConfig {
name: name.clone(),
description,
num_replicas: 1,
retention: async_nats::jetstream::stream::RetentionPolicy::WorkQueue,
subjects: vec![],
max_age: DEFAULT_EXPIRY_TIME,
sources: Some(sources),
allow_rollup: false,
max_bytes,
storage,
..Default::default()
};
if let Ok(stream) = context.get_stream(&name).await {
if stream.cached_info().config.retention == stream_config.retention {
return Ok(stream);
} else {
warn!("Found stream {name} with different configuration, deleting and recreating");
context.delete_stream(name).await?;
}
}
context
.get_or_create_stream(stream_config)
.await
.map_err(|e| anyhow::anyhow!("{e:?}"))
}
pub async fn ensure_status_stream(
context: &Context,
name: String,
subjects: Vec<String>,
max_bytes: i64,
storage: StorageType,
) -> Result<Stream> {
debug!("Ensuring stream {name} exists");
context
.get_or_create_stream(StreamConfig {
name,
description: Some(
"A stream that stores all status updates for wadm applications".into(),
),
num_replicas: 1,
allow_direct: true,
retention: async_nats::jetstream::stream::RetentionPolicy::Limits,
max_messages_per_subject: 10,
subjects,
max_age: std::time::Duration::from_nanos(0),
max_bytes,
storage,
..Default::default()
})
.await
.map_err(|e| anyhow::anyhow!("{e:?}"))
}
/// A helper that ensures that the notify stream exists
pub async fn ensure_notify_stream(
context: &Context,
name: String,
subjects: Vec<String>,
max_bytes: i64,
storage: StorageType,
) -> Result<Stream> {
debug!("Ensuring stream {name} exists");
context
.get_or_create_stream(StreamConfig {
name,
description: Some("A stream for capturing all notification events for wadm".into()),
num_replicas: 1,
retention: async_nats::jetstream::stream::RetentionPolicy::Interest,
subjects,
max_age: DEFAULT_EXPIRY_TIME,
max_bytes,
storage,
..Default::default()
})
.await
.map_err(|e| anyhow::anyhow!("{e:?}"))
}
/// A helper that ensures that the given KV bucket exists, using defaults to create if it does
/// not. Returns the handle to the bucket
pub async fn ensure_kv_bucket(
context: &Context,
name: String,
history_to_keep: i64,
max_bytes: i64,
storage: StorageType,
) -> Result<Store> {
debug!("Ensuring kv bucket {name} exists");
if let Ok(kv) = context.get_key_value(&name).await {
Ok(kv)
} else {
context
.create_key_value(KvConfig {
bucket: name,
history: history_to_keep,
num_replicas: 1,
storage,
max_bytes,
..Default::default()
})
.await
.map_err(|e| anyhow::anyhow!("{e:?}"))
}
}
#[cfg(test)]
mod test {
use super::resolve_jwt;
use anyhow::Result;
#[tokio::test]
async fn can_resolve_jwt_value_and_file() -> Result<()> {
let my_jwt = "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ2aWRlb0lkIjoiUWpVaUxYSnVjMjl0IiwiaWF0IjoxNjIwNjAzNDY5fQ.2PKx6y2ym6IWbeM6zFgHOkDnZEtGTR3YgYlQ2_Jki5g";
let jwt_path = "../../tests/fixtures/nats.jwt";
let jwt_inside_file = "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdHJpbmciOiAiQWNjb3JkIHRvIGFsbCBrbm93biBsb3dzIG9mIGF2aWF0aW9uLCB0aGVyZSBpcyBubyB3YXkgdGhhdCBhIGJlZSBhYmxlIHRvIGZseSJ9.GyU6pTRhflcOg6KBCU6wZedP8BQzLXbdgYIoU6KzzD8";
assert_eq!(
resolve_jwt(my_jwt.to_string())
.await
.expect("should resolve jwt string to itself"),
my_jwt.to_string()
);
assert_eq!(
resolve_jwt(jwt_path.to_string())
.await
.expect("should be able to read jwt file"),
jwt_inside_file.to_string()
);
Ok(())
}
}


@ -0,0 +1,192 @@
//! Helper utilities for interacting with NATS
const EVENT_SUBJECT: &str = "evt";
/// A parser for NATS subjects that parses out a lattice ID for any given subject
pub struct LatticeIdParser {
// NOTE(thomastaylor312): We don't actually support specific prefixes right now, but we could in
// the future as we already do for control topics. So this is just trying to future proof
prefix: String,
multitenant: bool,
}
impl LatticeIdParser {
/// Returns a new parser configured to use the given prefix. If `multitenant` is set to true,
/// this parser will also attempt to parse a subject as if it were an account imported topic
/// (e.g. `A****.wasmbus.evt.{lattice-id}.>`) if it doesn't match the normally expected pattern
pub fn new(prefix: &str, multitenant: bool) -> LatticeIdParser {
LatticeIdParser {
prefix: prefix.to_owned(),
multitenant,
}
}
/// Parses the given subject based on settings and then returns the lattice ID of the subject and
/// the account ID if it is multitenant.
/// Returns None if it couldn't parse the topic
pub fn parse(&self, subject: &str) -> Option<LatticeInformation> {
let separated: Vec<&str> = subject.split('.').collect();
// For reference, topics look like the following:
//
// Normal: `{prefix}.evt.{lattice-id}.{event-type}`
// Multitenant: `{account-id}.{prefix}.evt.{lattice-id}.{event-type}`
//
// Note that the account ID should be prefaced with an `A`
match separated[..] {
[prefix, evt, lattice_id, _event_type]
if prefix == self.prefix && evt == EVENT_SUBJECT =>
{
Some(LatticeInformation {
lattice_id: lattice_id.to_owned(),
multitenant_prefix: None,
prefix: self.prefix.clone(),
})
}
[account_id, prefix, evt, lattice_id, _event_type]
if self.multitenant
&& prefix == self.prefix
&& evt == EVENT_SUBJECT
&& account_id.starts_with('A') =>
{
Some(LatticeInformation {
lattice_id: lattice_id.to_owned(),
multitenant_prefix: Some(account_id.to_owned()),
prefix: self.prefix.clone(),
})
}
_ => None,
}
}
}
/// Simple helper struct for returning lattice information from a parsed event topic
#[derive(Clone, Debug)]
pub struct LatticeInformation {
lattice_id: String,
multitenant_prefix: Option<String>,
prefix: String,
}
impl LatticeInformation {
pub fn lattice_id(&self) -> &str {
&self.lattice_id
}
pub fn multitenant_prefix(&self) -> Option<&str> {
self.multitenant_prefix.as_deref()
}
/// Constructs the event subject to listen on for a particular lattice
pub fn event_subject(&self) -> String {
if let Some(account_id) = &self.multitenant_prefix {
// e.g. Axxx.wasmbus.evt.{lattice-id}.*
format!(
"{}.{}.{}.{}.*",
account_id, self.prefix, EVENT_SUBJECT, self.lattice_id
)
} else {
// e.g. wasmbus.evt.{lattice-id}.*
format!("{}.{}.{}.*", self.prefix, EVENT_SUBJECT, self.lattice_id)
}
}
}
#[cfg(test)]
mod test {
use super::*;
#[test]
fn test_valid_subjects() {
// Default first
let parser = LatticeIdParser::new("wasmbus", false);
let single_lattice = parser
.parse("wasmbus.evt.blahblah.>")
.expect("Should return lattice id");
assert_eq!(
single_lattice.lattice_id(),
"blahblah",
"Should return the right ID"
);
assert_eq!(
single_lattice.multitenant_prefix(),
None,
"Should return no multitenant prefix"
);
assert_eq!(
single_lattice.event_subject(),
"wasmbus.evt.blahblah.*",
"Should return the right event subject"
);
// Shouldn't parse a multitenant
assert!(
parser.parse("ACCOUNTID.wasmbus.evt.default.>").is_none(),
"Shouldn't parse a multitenant topic"
);
// Multitenant second
let parser = LatticeIdParser::new("wasmbus", true);
assert_eq!(
parser
.parse("wasmbus.evt.blahblah.host_heartbeat")
.expect("Should return lattice id")
.lattice_id(),
"blahblah",
"Should return the right ID"
);
let res = parser
.parse("ACCOUNTID.wasmbus.evt.blahblah.>")
.expect("Should parse multitenant topic");
assert_eq!(res.lattice_id(), "blahblah", "Should return the right ID");
assert_eq!(
res.multitenant_prefix()
.expect("Should return account id in multitenant mode"),
"ACCOUNTID",
"Should return the right ID"
);
assert_eq!(
res.event_subject(),
"ACCOUNTID.wasmbus.evt.blahblah.*",
"Should return the right event subject"
);
}
#[test]
fn test_invalid_subjects() {
let parser = LatticeIdParser::new("wasmbus", true);
// Test 3 and 4 part subjects to make sure they don't parse
assert!(
parser.parse("BLAH.wasmbus.notevt.default.>").is_none(),
"Shouldn't parse 4 part invalid topic"
);
assert!(
parser.parse("wasmbus.notme.default.>").is_none(),
"Shouldn't parse 3 part invalid topic"
);
assert!(
parser.parse("lebus.evt.default.>").is_none(),
"Shouldn't parse a non-matching prefix"
);
assert!(
parser.parse("wasmbus.evt.>").is_none(),
"Shouldn't parse a too short topic"
);
assert!(
parser.parse("BADACCOUNT.wasmbus.evt.default.>").is_none(),
"Shouldn't parse invalid account topic"
);
assert!(
parser.parse("wasmbus.notme.default.bar.baz").is_none(),
"Shouldn't parse long topic"
);
}
}


@ -4,35 +4,32 @@ use async_nats::Subscriber;
use futures::{stream::SelectAll, StreamExt, TryFutureExt};
use tracing::{debug, error, instrument, trace, warn};
use wadm::{
use crate::{
consumers::{
manager::{ConsumerManager, WorkerCreator},
CommandConsumer, EventConsumer,
},
events::{EventType, HostHeartbeat, HostStarted},
mirror::Mirror,
events::{EventType, HostHeartbeat, HostStarted, ManifestPublished},
nats_utils::LatticeIdParser,
storage::{nats_kv::NatsKvStore, reaper::Reaper, ReadStore, Store},
DEFAULT_COMMANDS_TOPIC, DEFAULT_WADM_EVENTS_TOPIC,
storage::{nats_kv::NatsKvStore, reaper::Reaper, Store},
DEFAULT_COMMANDS_TOPIC, DEFAULT_WADM_EVENT_CONSUMER_TOPIC,
};
use super::{CommandWorkerCreator, EventWorkerCreator};
pub(crate) struct Observer<StateStore, ManifestStore> {
pub(crate) struct Observer<StateStore> {
pub(crate) parser: LatticeIdParser,
pub(crate) command_manager: ConsumerManager<CommandConsumer>,
pub(crate) event_manager: ConsumerManager<EventConsumer>,
pub(crate) mirror: Mirror,
pub(crate) client: async_nats::Client,
pub(crate) reaper: Reaper<NatsKvStore>,
pub(crate) event_worker_creator: EventWorkerCreator<StateStore, ManifestStore>,
pub(crate) event_worker_creator: EventWorkerCreator<StateStore>,
pub(crate) command_worker_creator: CommandWorkerCreator,
}
impl<StateStore, ManifestStore> Observer<StateStore, ManifestStore>
impl<StateStore> Observer<StateStore>
where
StateStore: Store + Send + Sync + Clone + 'static,
ManifestStore: ReadStore + Send + Sync + Clone + 'static,
{
/// Watches the given topic (with wildcards) for wasmbus events. If it finds a lattice that it
/// isn't managing, it will start managing it immediately
@ -47,33 +44,30 @@ where
if !is_event_we_care_about(&msg.payload) {
continue;
}
let lattice_id = match self.parser.parse(&msg.subject) {
Some(id) => id,
None => {
trace!(subject = %msg.subject, "Found non-matching lattice subject");
continue;
}
let Some(lattice_info) = self.parser.parse(&msg.subject) else {
trace!(subject = %msg.subject, "Found non-matching lattice subject");
continue;
};
let lattice_id = lattice_info.lattice_id();
let multitenant_prefix = lattice_info.multitenant_prefix();
let event_subject = lattice_info.event_subject();
// Create the reaper for this lattice. This operation returns early if it is
// already running
self.reaper.observe(lattice_id);
// Make sure the mirror consumer is up and running. This operation returns early
// if it is already running
if let Err(e) = self.mirror.monitor_lattice(&msg.subject, lattice_id).await {
// If we can't set up the mirror, we can't proceed, so exit early
error!(error = %e, %lattice_id, "Couldn't add mirror consumer. Will retry on next heartbeat");
continue;
}
let command_topic = DEFAULT_COMMANDS_TOPIC.replace('*', lattice_id);
let events_topic = DEFAULT_WADM_EVENTS_TOPIC.replace('*', lattice_id);
let events_topic = DEFAULT_WADM_EVENT_CONSUMER_TOPIC.replace('*', lattice_id);
let needs_command = !self.command_manager.has_consumer(&command_topic).await;
let needs_event = !self.event_manager.has_consumer(&events_topic).await;
if needs_command {
debug!(%lattice_id, subject = %msg.subject, mapped_subject = %command_topic, "Found unmonitored lattice, adding command consumer");
let worker = match self.command_worker_creator.create(lattice_id).await {
debug!(%lattice_id, subject = %event_subject, mapped_subject = %command_topic, "Found unmonitored lattice, adding command consumer");
let worker = match self
.command_worker_creator
.create(lattice_id, multitenant_prefix)
.await
{
Ok(w) => w,
Err(e) => {
error!(error = %e, %lattice_id, "Couldn't construct worker for command consumer. Will retry on next heartbeat");
@ -81,15 +75,19 @@ where
}
};
self.command_manager
.add_for_lattice(&command_topic, lattice_id, worker)
.add_for_lattice(&command_topic, lattice_id, multitenant_prefix, worker)
.await
.unwrap_or_else(|e| {
error!(error = %e, %lattice_id, "Couldn't add command consumer. Will retry on next heartbeat");
})
}
if needs_event {
debug!(%lattice_id, subject = %msg.subject, mapped_subject = %events_topic, "Found unmonitored lattice, adding event consumer");
let worker = match self.event_worker_creator.create(lattice_id).await {
debug!(%lattice_id, subject = %event_subject, mapped_subject = %events_topic, "Found unmonitored lattice, adding event consumer");
let worker = match self
.event_worker_creator
.create(lattice_id, multitenant_prefix)
.await
{
Ok(w) => w,
Err(e) => {
error!(error = %e, %lattice_id, "Couldn't construct worker for event consumer. Will retry on next heartbeat");
@ -97,7 +95,7 @@ where
}
};
self.event_manager
.add_for_lattice(&events_topic, lattice_id, worker)
.add_for_lattice(&events_topic, lattice_id, multitenant_prefix, worker)
.await
.unwrap_or_else(|e| {
error!(error = %e, %lattice_id, "Couldn't add event consumer. Will retry on next heartbeat");
@ -113,15 +111,17 @@ where
}
}
// This is a stupid hacky function to check that this is a host started or host heartbeat event
// without actually parsing
// This is a stupid hacky function to check that this is a host started, host heartbeat, or
// manifest_published event without actually parsing
fn is_event_we_care_about(data: &[u8]) -> bool {
let string_data = match std::str::from_utf8(data) {
Ok(s) => s,
Err(_) => return false,
};
string_data.contains(HostStarted::TYPE) || string_data.contains(HostHeartbeat::TYPE)
string_data.contains(HostStarted::TYPE)
|| string_data.contains(HostHeartbeat::TYPE)
|| string_data.contains(ManifestPublished::TYPE)
}
async fn get_subscriber(


@ -0,0 +1,395 @@
use std::collections::BTreeMap;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};
use anyhow::Result;
use async_trait::async_trait;
use tokio::sync::RwLock;
use tracing::debug;
use tracing::error;
use tracing::instrument;
use tracing::trace;
use wadm_types::{
api::{StatusInfo, StatusType},
TraitProperty,
};
use crate::commands::{DeleteConfig, PutConfig};
use crate::events::{ConfigDeleted, ConfigSet};
use crate::workers::ConfigSource;
use crate::{commands::Command, events::Event, scaler::Scaler};
const CONFIG_SCALER_KIND: &str = "ConfigScaler";
pub struct ConfigScaler<ConfigSource> {
config_bucket: ConfigSource,
id: String,
config_name: String,
// NOTE(#263): Storing the entire configuration in memory has the potential to get
// fairly heavy if the configuration is large. We should consider a more efficient
// approach, such as fetching the configuration from the manifest only when it's needed.
config: Option<HashMap<String, String>>,
status: RwLock<StatusInfo>,
}
#[async_trait]
impl<C: ConfigSource + Send + Sync + Clone> Scaler for ConfigScaler<C> {
fn id(&self) -> &str {
&self.id
}
fn kind(&self) -> &str {
CONFIG_SCALER_KIND
}
fn name(&self) -> String {
self.config_name.to_string()
}
async fn status(&self) -> StatusInfo {
let _ = self.reconcile().await;
self.status.read().await.to_owned()
}
async fn update_config(&mut self, _config: TraitProperty) -> Result<Vec<Command>> {
debug!("ConfigScaler does not support updating config, ignoring");
Ok(vec![])
}
#[instrument(level = "trace", skip_all, fields(scaler_id = %self.id))]
async fn handle_event(&self, event: &Event) -> Result<Vec<Command>> {
match event {
Event::ConfigSet(ConfigSet { config_name })
| Event::ConfigDeleted(ConfigDeleted { config_name }) => {
if config_name == &self.config_name {
return self.reconcile().await;
}
}
// This is a workaround to ensure that the config has a chance to periodically
// update itself if it is out of sync. For efficiency, we only fetch configuration
// again if the status is not deployed.
Event::HostHeartbeat(_) => {
if !matches!(self.status.read().await.status_type, StatusType::Deployed) {
return self.reconcile().await;
}
}
_ => {
trace!("ConfigScaler does not support this event, ignoring");
}
}
Ok(Vec::new())
}
#[instrument(level = "trace", skip_all, fields(scaler_id = %self.id))]
async fn reconcile(&self) -> Result<Vec<Command>> {
debug!(self.config_name, "Fetching configuration");
match (
self.config_bucket.get_config(&self.config_name).await,
self.config.as_ref(),
) {
// If configuration is not supplied to the scaler, we just ensure that it exists
(Ok(Some(_config)), None) => {
*self.status.write().await = StatusInfo::deployed("");
Ok(Vec::new())
}
// If configuration is not supplied and doesn't exist, we enter a failed state
(Ok(None), None) => {
*self.status.write().await = StatusInfo::failed(&format!(
"Specified configuration {} does not exist",
self.config_name
));
Ok(Vec::new())
}
// If configuration matches what's supplied, this scaler is deployed
(Ok(Some(config)), Some(scaler_config)) if &config == scaler_config => {
*self.status.write().await = StatusInfo::deployed("");
Ok(Vec::new())
}
// If configuration is out of sync, we put the configuration
(Ok(_config), Some(scaler_config)) => {
debug!(self.config_name, "Putting configuration");
*self.status.write().await = StatusInfo::reconciling("Configuration out of sync");
Ok(vec![Command::PutConfig(PutConfig {
config_name: self.config_name.clone(),
config: scaler_config.clone(),
})])
}
(Err(e), _) => {
error!(error = %e, "ConfigScaler failed to fetch configuration");
*self.status.write().await = StatusInfo::failed(&e.to_string());
Ok(Vec::new())
}
}
}
#[instrument(level = "trace", skip_all)]
async fn cleanup(&self) -> Result<Vec<Command>> {
if self.config.is_some() {
Ok(vec![Command::DeleteConfig(DeleteConfig {
config_name: self.config_name.clone(),
})])
} else {
// This configuration is externally managed, don't delete it
Ok(Vec::new())
}
}
}
impl<C: ConfigSource> ConfigScaler<C> {
/// Construct a new ConfigScaler with specified values
pub fn new(
config_bucket: C,
config_name: &str,
config: Option<&HashMap<String, String>>,
) -> Self {
let mut id = config_name.to_string();
// Hash the config to generate a unique id, used to compare scalers for uniqueness when updating
if let Some(config) = config.as_ref() {
let mut config_hasher = std::collections::hash_map::DefaultHasher::new();
BTreeMap::from_iter(config.iter()).hash(&mut config_hasher);
id.push_str(&format!("-{}", config_hasher.finish()));
}
Self {
config_bucket,
id,
config_name: config_name.to_string(),
config: config.cloned(),
status: RwLock::new(StatusInfo::reconciling("")),
}
}
}
#[cfg(test)]
mod test {
use std::collections::{BTreeMap, HashMap};
use wadm_types::{api::StatusType, ConfigProperty};
use crate::{
commands::{Command, PutConfig},
events::{ComponentScaled, ConfigDeleted, Event, HostHeartbeat},
scaler::{configscaler::ConfigScaler, Scaler},
test_util::TestLatticeSource,
};
#[tokio::test]
/// Ensure that the config scaler reacts properly to events, fetching configuration
/// when it is out of sync and ignoring irrelevant events.
async fn test_configscaler() {
let lattice = TestLatticeSource {
claims: HashMap::new(),
inventory: Default::default(),
links: Vec::new(),
config: HashMap::new(),
};
let config = ConfigProperty {
name: "test_config".to_string(),
properties: Some(HashMap::from_iter(vec![(
"key".to_string(),
"value".to_string(),
)])),
};
let config_scaler =
ConfigScaler::new(lattice.clone(), &config.name, config.properties.as_ref());
assert_eq!(
config_scaler.status().await.status_type,
StatusType::Reconciling
);
assert_eq!(
config_scaler
.reconcile()
.await
.expect("reconcile should succeed"),
vec![Command::PutConfig(PutConfig {
config_name: config.name.clone(),
config: config.properties.clone().expect("properties not found"),
})]
);
assert_eq!(
config_scaler.status().await.status_type,
StatusType::Reconciling
);
// Configuration deleted, relevant
assert_eq!(
config_scaler
.handle_event(&Event::ConfigDeleted(ConfigDeleted {
config_name: config.name.clone()
}))
.await
.expect("handle_event should succeed"),
vec![Command::PutConfig(PutConfig {
config_name: config.name.clone(),
config: config.properties.clone().expect("properties not found"),
})]
);
assert_eq!(
config_scaler.status().await.status_type,
StatusType::Reconciling
);
// Configuration deleted, irrelevant
assert_eq!(
config_scaler
.handle_event(&Event::ConfigDeleted(ConfigDeleted {
config_name: "some_other_config".to_string()
}))
.await
.expect("handle_event should succeed"),
vec![]
);
assert_eq!(
config_scaler.status().await.status_type,
StatusType::Reconciling
);
// Periodic reconcile with host heartbeat
assert_eq!(
config_scaler
.handle_event(&Event::HostHeartbeat(HostHeartbeat {
components: Vec::new(),
providers: Vec::new(),
host_id: String::default(),
issuer: String::default(),
friendly_name: String::default(),
labels: HashMap::new(),
version: semver::Version::new(0, 0, 0),
uptime_human: String::default(),
uptime_seconds: 0,
}))
.await
.expect("handle_event should succeed"),
vec![Command::PutConfig(PutConfig {
config_name: config.name.clone(),
config: config.properties.clone().expect("properties not found"),
})]
);
assert_eq!(
config_scaler.status().await.status_type,
StatusType::Reconciling
);
// Ignore other event
assert_eq!(
config_scaler
.handle_event(&Event::ComponentScaled(ComponentScaled {
annotations: BTreeMap::new(),
claims: None,
image_ref: "foo".to_string(),
max_instances: 0,
component_id: "fooo".to_string(),
host_id: "hostid".to_string()
}))
.await
.expect("handle_event should succeed"),
vec![]
);
assert_eq!(
config_scaler.status().await.status_type,
StatusType::Reconciling
);
// Create lattice where config is present
let lattice2 = TestLatticeSource {
claims: HashMap::new(),
inventory: Default::default(),
links: Vec::new(),
config: HashMap::from_iter(vec![(
config.name.clone(),
config.properties.clone().expect("properties not found"),
)]),
};
let config_scaler2 = ConfigScaler::new(lattice2, &config.name, config.properties.as_ref());
assert_eq!(
config_scaler2
.reconcile()
.await
.expect("reconcile should succeed"),
vec![]
);
assert_eq!(
config_scaler2.status().await.status_type,
StatusType::Deployed
);
// Periodic reconcile with host heartbeat
assert_eq!(
config_scaler2
.handle_event(&Event::HostHeartbeat(HostHeartbeat {
components: Vec::new(),
providers: Vec::new(),
host_id: String::default(),
issuer: String::default(),
friendly_name: String::default(),
labels: HashMap::new(),
version: semver::Version::new(0, 0, 0),
uptime_human: String::default(),
uptime_seconds: 0,
}))
.await
.expect("handle_event should succeed"),
vec![]
);
assert_eq!(
config_scaler2.status().await.status_type,
StatusType::Deployed
);
// Create lattice where config is present but with the wrong values
let lattice3 = TestLatticeSource {
claims: HashMap::new(),
inventory: Default::default(),
links: Vec::new(),
config: HashMap::from_iter(vec![(
config.name.clone(),
HashMap::from_iter(vec![("key".to_string(), "wrong_value".to_string())]),
)]),
};
let config_scaler3 =
ConfigScaler::new(lattice3.clone(), &config.name, config.properties.as_ref());
assert_eq!(
config_scaler3
.reconcile()
.await
.expect("reconcile should succeed"),
vec![Command::PutConfig(PutConfig {
config_name: config.name.clone(),
config: config.properties.clone().expect("properties not found"),
})]
);
assert_eq!(
config_scaler3.status().await.status_type,
StatusType::Reconciling
);
// Test supplied name but not supplied config
let config_scaler4 = ConfigScaler::new(lattice3, &config.name, None);
assert_eq!(
config_scaler4
.reconcile()
.await
.expect("reconcile should succeed"),
vec![]
);
assert_eq!(
config_scaler4.status().await.status_type,
StatusType::Deployed
);
let config_scaler5 = ConfigScaler::new(lattice, &config.name, None);
assert_eq!(
config_scaler5
.reconcile()
.await
.expect("reconcile should succeed"),
vec![]
);
assert_eq!(
config_scaler5.status().await.status_type,
StatusType::Failed
);
}
}


@ -0,0 +1,780 @@
//! Contains code for converting the list of [`Component`]s in an application into a list of [`Scaler`]s
//! that are responsible for monitoring and enforcing the desired state of a lattice
use std::{collections::HashMap, time::Duration};
use anyhow::Result;
use tracing::{error, warn};
use wadm_types::{
api::StatusInfo, CapabilityProperties, Component, ComponentProperties, ConfigProperty,
LinkProperty, Policy, Properties, SecretProperty, SharedApplicationComponentProperties,
SpreadScalerProperty, Trait, TraitProperty, DAEMONSCALER_TRAIT, LINK_TRAIT, SPREADSCALER_TRAIT,
};
use wasmcloud_secrets_types::SECRET_PREFIX;
use crate::{
publisher::Publisher,
scaler::{
spreadscaler::{link::LINK_SCALER_KIND, ComponentSpreadScaler, SPREAD_SCALER_KIND},
statusscaler::StatusScaler,
Scaler,
},
storage::{snapshot::SnapshotStore, ReadStore},
workers::{ConfigSource, LinkSource, SecretSource},
DEFAULT_LINK_NAME,
};
use super::{
configscaler::ConfigScaler,
daemonscaler::{provider::ProviderDaemonScaler, ComponentDaemonScaler},
secretscaler::SecretScaler,
spreadscaler::{
link::{LinkScaler, LinkScalerConfig},
provider::{ProviderSpreadConfig, ProviderSpreadScaler},
},
BackoffWrapper,
};
pub(crate) type BoxedScaler = Box<dyn Scaler + Send + Sync + 'static>;
pub(crate) type ScalerList = Vec<BoxedScaler>;
const EMPTY_TRAIT_VEC: Vec<Trait> = Vec::new();
/// Converts a list of manifest [`Component`]s into a [`ScalerList`], resolving shared application
/// references, links, configuration and secrets as necessary.
///
/// # Arguments
/// * `components` - The list of components to convert
/// * `policies` - The policies to use when creating the scalers so they can access secrets
/// * `lattice_id` - The lattice id the scalers operate on
/// * `notifier` - The publisher to use when creating the scalers so they can report status
/// * `name` - The name of the manifest that the scalers are being created for
/// * `notifier_subject` - The subject to use when creating the scalers so they can report status
/// * `snapshot_data` - The store to use when creating the scalers so they can access lattice state
pub(crate) fn manifest_components_to_scalers<S, P, L>(
components: &[Component],
policies: &HashMap<&String, &Policy>,
lattice_id: &str,
manifest_name: &str,
notifier_subject: &str,
notifier: &P,
snapshot_data: &SnapshotStore<S, L>,
) -> ScalerList
where
S: ReadStore + Send + Sync + Clone + 'static,
P: Publisher + Clone + Send + Sync + 'static,
L: LinkSource + ConfigSource + SecretSource + Clone + Send + Sync + 'static,
{
let mut scalers: ScalerList = Vec::new();
components
.iter()
.for_each(|component| match &component.properties {
Properties::Component { properties } => {
// Determine if this component is contained in this manifest or a shared application
let (application_name, component_name) = match resolve_manifest_component(
manifest_name,
&component.name,
properties.image.as_ref(),
properties.application.as_ref(),
) {
Ok(names) => names,
Err(err) => {
error!(err);
scalers.push(Box::new(StatusScaler::new(
uuid::Uuid::new_v4().to_string(),
SPREAD_SCALER_KIND,
&component.name,
StatusInfo::failed(err),
)) as BoxedScaler);
return;
}
};
component_scalers(
&mut scalers,
components,
properties,
component.traits.as_ref(),
manifest_name,
application_name,
component_name,
lattice_id,
policies,
notifier_subject,
notifier,
snapshot_data,
)
}
Properties::Capability { properties } => {
// Determine if this component is contained in this manifest or a shared application
let (application_name, component_name) = match resolve_manifest_component(
manifest_name,
&component.name,
properties.image.as_ref(),
properties.application.as_ref(),
) {
Ok(names) => names,
Err(err) => {
error!(err);
scalers.push(Box::new(StatusScaler::new(
uuid::Uuid::new_v4().to_string(),
SPREAD_SCALER_KIND,
&component.name,
StatusInfo::failed(err),
)) as BoxedScaler);
return;
}
};
provider_scalers(
&mut scalers,
components,
properties,
component.traits.as_ref(),
manifest_name,
application_name,
component_name,
lattice_id,
policies,
notifier_subject,
notifier,
snapshot_data,
)
}
});
scalers
}
/// Helper function, primarily to remove nesting, that extends a [`ScalerList`] with all scalers
/// from a (Wasm) component [`Component`]
///
/// # Arguments
/// * `scalers` - The list of scalers to extend
/// * `components` - The list of components to convert
/// * `properties` - The properties of the component to convert
/// * `traits` - The traits of the component to convert
/// * `manifest_name` - The name of the manifest that the scalers are being created for
/// * `application_name` - The name of the application that the scalers are being created for
/// * `component_name` - The name of the component to convert
/// * **The following arguments are required to create scalers and are passed directly through to the scalers:**
/// * `lattice_id` - The lattice id the scalers operate on
/// * `policies` - The policies to use when creating the scalers so they can access secrets
/// * `notifier_subject` - The subject to use when creating the scalers so they can report status
/// * `notifier` - The publisher to use when creating the scalers so they can report status
/// * `snapshot_data` - The store to use when creating the scalers so they can access lattice state
#[allow(clippy::too_many_arguments)]
fn component_scalers<S, P, L>(
scalers: &mut ScalerList,
components: &[Component],
properties: &ComponentProperties,
traits: Option<&Vec<Trait>>,
manifest_name: &str,
application_name: &str,
component_name: &str,
lattice_id: &str,
policies: &HashMap<&String, &Policy>,
notifier_subject: &str,
notifier: &P,
snapshot_data: &SnapshotStore<S, L>,
) where
S: ReadStore + Send + Sync + Clone + 'static,
P: Publisher + Clone + Send + Sync + 'static,
L: LinkSource + ConfigSource + SecretSource + Clone + Send + Sync + 'static,
{
scalers.extend(traits.unwrap_or(&EMPTY_TRAIT_VEC).iter().filter_map(|trt| {
// If an image is specified, then it's a component in the same manifest. Otherwise, it's a shared component
let component_id = if properties.image.is_some() {
compute_component_id(manifest_name, properties.id.as_ref(), component_name)
} else {
compute_component_id(application_name, properties.id.as_ref(), component_name)
};
let (config_scalers, mut config_names) =
config_to_scalers(snapshot_data, manifest_name, &properties.config);
let (secret_scalers, secret_names) = secrets_to_scalers(
snapshot_data,
manifest_name,
&properties.secrets,
policies,
);
config_names.append(&mut secret_names.clone());
// TODO(#451): Consider a way to report on status of a shared component
match (trt.trait_type.as_str(), &trt.properties, &properties.image) {
// Shared application components already have their own spread/daemon scalers;
// they cannot be modified from another manifest
(SPREADSCALER_TRAIT, TraitProperty::SpreadScaler(_), None) => {
warn!(
"Unsupported SpreadScaler trait specified for a shared component {component_name}"
);
None
}
(DAEMONSCALER_TRAIT, TraitProperty::SpreadScaler(_), None) => {
warn!(
"Unsupported DaemonScaler trait specified for a shared component {component_name}"
);
None
}
(SPREADSCALER_TRAIT, TraitProperty::SpreadScaler(p), Some(image_ref)) => {
// An image is specified, so this component is defined in this manifest and we
// can create a spread scaler for it
Some(Box::new(BackoffWrapper::new(
ComponentSpreadScaler::new(
snapshot_data.clone(),
image_ref.clone(),
component_id,
lattice_id.to_owned(),
application_name.to_owned(),
p.to_owned(),
component_name,
config_names,
),
notifier.clone(),
config_scalers,
secret_scalers,
notifier_subject,
application_name,
Some(Duration::from_secs(5)),
)) as BoxedScaler)
}
(DAEMONSCALER_TRAIT, TraitProperty::SpreadScaler(p), Some(image_ref)) => {
Some(Box::new(BackoffWrapper::new(
ComponentDaemonScaler::new(
snapshot_data.clone(),
image_ref.to_owned(),
component_id,
lattice_id.to_owned(),
application_name.to_owned(),
p.to_owned(),
component_name,
config_names,
),
notifier.clone(),
config_scalers,
secret_scalers,
notifier_subject,
application_name,
Some(Duration::from_secs(5)),
)) as BoxedScaler)
}
(LINK_TRAIT, TraitProperty::Link(p), _) => {
// Find the target component of the link and create a scaler for it
components
.iter()
.find_map(|component| match &component.properties {
Properties::Capability {
properties:
CapabilityProperties {
id,
application,
image,
..
},
}
| Properties::Component {
properties:
ComponentProperties {
id,
application,
image,
..
},
} if component.name == p.target.name => Some(link_scaler(
p,
lattice_id,
manifest_name,
application_name,
&component.name,
component_id.to_string(),
id.as_ref(),
image.as_ref(),
application.as_ref(),
policies,
notifier_subject,
notifier,
snapshot_data,
)),
_ => None,
})
}
_ => None,
}
}));
}
/// Helper function, primarily to remove nesting, that extends a [`ScalerList`] with all scalers
/// from a capability provider [`Component`]
///
/// # Arguments
/// * `scalers` - The list of scalers to extend
/// * `components` - The list of components to convert
/// * `properties` - The properties of the capability provider to convert
/// * `traits` - The traits of the component to convert
/// * `manifest_name` - The name of the manifest that the scalers are being created for
/// * `application_name` - The name of the application that the scalers are being created for
/// * `component_name` - The name of the component to convert
/// * **The following arguments are required to create scalers and are passed directly through to the scalers:**
/// * `lattice_id` - The lattice id the scalers operate on
/// * `policies` - The policies to use when creating the scalers so they can access secrets
/// * `notifier_subject` - The subject to use when creating the scalers so they can report status
/// * `notifier` - The publisher to use when creating the scalers so they can report status
/// * `snapshot_data` - The store to use when creating the scalers so they can access lattice state
#[allow(clippy::too_many_arguments)]
fn provider_scalers<S, P, L>(
scalers: &mut ScalerList,
components: &[Component],
properties: &CapabilityProperties,
traits: Option<&Vec<Trait>>,
manifest_name: &str,
application_name: &str,
component_name: &str,
lattice_id: &str,
policies: &HashMap<&String, &Policy>,
notifier_subject: &str,
notifier: &P,
snapshot_data: &SnapshotStore<S, L>,
) where
S: ReadStore + Send + Sync + Clone + 'static,
P: Publisher + Clone + Send + Sync + 'static,
L: LinkSource + ConfigSource + SecretSource + Clone + Send + Sync + 'static,
{
// If an image is specified, then it's a provider in the same manifest. Otherwise, it's a shared component
let provider_id = if properties.image.is_some() {
compute_component_id(manifest_name, properties.id.as_ref(), component_name)
} else {
compute_component_id(application_name, properties.id.as_ref(), component_name)
};
let mut scaler_specified = false;
scalers.extend(traits.unwrap_or(&EMPTY_TRAIT_VEC).iter().filter_map(|trt| {
match (trt.trait_type.as_str(), &trt.properties, &properties.image) {
// Shared application components already have their own spread/daemon scalers;
// they cannot be modified from another manifest
(SPREADSCALER_TRAIT, TraitProperty::SpreadScaler(_), None) => {
warn!(
"Unsupported SpreadScaler trait specified for a shared provider {component_name}"
);
None
}
(DAEMONSCALER_TRAIT, TraitProperty::SpreadScaler(_), None) => {
warn!(
"Unsupported DaemonScaler trait specified for a shared provider {component_name}"
);
None
}
(SPREADSCALER_TRAIT, TraitProperty::SpreadScaler(p), Some(image)) => {
scaler_specified = true;
let (config_scalers, mut config_names) =
config_to_scalers(snapshot_data, application_name, &properties.config);
let (secret_scalers, secret_names) = secrets_to_scalers(
snapshot_data,
application_name,
&properties.secrets,
policies,
);
config_names.append(&mut secret_names.clone());
Some(Box::new(BackoffWrapper::new(
ProviderSpreadScaler::new(
snapshot_data.clone(),
ProviderSpreadConfig {
lattice_id: lattice_id.to_owned(),
provider_id: provider_id.to_owned(),
provider_reference: image.to_owned(),
spread_config: p.to_owned(),
model_name: application_name.to_owned(),
provider_config: config_names,
},
component_name,
),
notifier.clone(),
config_scalers,
secret_scalers,
notifier_subject,
application_name,
// Provider backoff is longer since provider images can take a while to download
Some(Duration::from_secs(60)),
)) as BoxedScaler)
}
(DAEMONSCALER_TRAIT, TraitProperty::SpreadScaler(p), Some(image)) => {
scaler_specified = true;
let (config_scalers, mut config_names) =
config_to_scalers(snapshot_data, application_name, &properties.config);
let (secret_scalers, secret_names) = secrets_to_scalers(
snapshot_data,
application_name,
&properties.secrets,
policies,
);
config_names.append(&mut secret_names.clone());
Some(Box::new(BackoffWrapper::new(
ProviderDaemonScaler::new(
snapshot_data.clone(),
ProviderSpreadConfig {
lattice_id: lattice_id.to_owned(),
provider_id: provider_id.to_owned(),
provider_reference: image.to_owned(),
spread_config: p.to_owned(),
model_name: application_name.to_owned(),
provider_config: config_names,
},
component_name,
),
notifier.clone(),
config_scalers,
secret_scalers,
notifier_subject,
application_name,
// Provider backoff is longer since provider images can take a while to download
Some(Duration::from_secs(60)),
)) as BoxedScaler)
}
// Find the target component of the link and create a scaler for it.
(LINK_TRAIT, TraitProperty::Link(p), _) => {
components
.iter()
.find_map(|component| match &component.properties {
// Providers cannot link to other providers, only components
Properties::Capability { .. } if component.name == p.target.name => {
error!(
"Provider {} cannot link to provider {}, only components",
component_name, p.target.name
);
None
}
Properties::Component {
properties:
ComponentProperties {
image,
application,
id,
..
},
} if component.name == p.target.name => Some(link_scaler(
p,
lattice_id,
manifest_name,
application_name,
&component.name,
provider_id.to_owned(),
id.as_ref(),
image.as_ref(),
application.as_ref(),
policies,
notifier_subject,
notifier,
snapshot_data,
)),
_ => None,
})
}
_ => None,
}
}));
// Allow providers to omit the spreadscaler entirely for simplicity
if !scaler_specified {
if let Some(image) = &properties.image {
let (config_scalers, mut config_names) =
config_to_scalers(snapshot_data, application_name, &properties.config);
let (secret_scalers, mut secret_names) = secrets_to_scalers(
snapshot_data,
application_name,
&properties.secrets,
policies,
);
config_names.append(&mut secret_names);
scalers.push(Box::new(BackoffWrapper::new(
ProviderSpreadScaler::new(
snapshot_data.clone(),
ProviderSpreadConfig {
lattice_id: lattice_id.to_owned(),
provider_id,
provider_reference: image.to_owned(),
spread_config: SpreadScalerProperty {
instances: 1,
spread: vec![],
},
model_name: application_name.to_owned(),
provider_config: config_names,
},
component_name,
),
notifier.clone(),
config_scalers,
secret_scalers,
notifier_subject,
application_name,
// Provider backoff is longer since provider images can take a while to download
Some(Duration::from_secs(60)),
)) as BoxedScaler)
}
}
}
/// Resolves configuration, secrets, and the target of a link to create a boxed [`LinkScaler`]
///
/// # Arguments
/// * `link_property` - The properties of the link to convert
/// * `lattice_id` - The lattice id the scalers operate on
/// * `manifest_name` - The name of the manifest that the scalers are being created for
/// * `application_name` - The name of the application that the scalers are being created for
/// * `component_name` - The name of the component to convert
/// * `source_id` - The ID of the source component
/// * `target_id` - The optional ID of the target component
/// * `image` - The optional image reference of the target component
/// * `shared` - The optional shared application reference of the target component
/// * `policies` - The policies to use when creating the scalers so they can access secrets
/// * `notifier_subject` - The subject to use when creating the scalers so they can report status
/// * `notifier` - The publisher to use when creating the scalers so they can report status
/// * `snapshot_data` - The store to use when creating the scalers so they can access lattice state
#[allow(clippy::too_many_arguments)]
fn link_scaler<S, P, L>(
link_property: &LinkProperty,
lattice_id: &str,
manifest_name: &str,
application_name: &str,
component_name: &str,
source_id: String,
target_id: Option<&String>,
image: Option<&String>,
shared: Option<&SharedApplicationComponentProperties>,
policies: &HashMap<&String, &Policy>,
notifier_subject: &str,
notifier: &P,
snapshot_data: &SnapshotStore<S, L>,
) -> BoxedScaler
where
S: ReadStore + Send + Sync + Clone + 'static,
P: Publisher + Clone + Send + Sync + 'static,
L: LinkSource + ConfigSource + SecretSource + Clone + Send + Sync + 'static,
{
let (mut config_scalers, mut source_config) = config_to_scalers(
snapshot_data,
manifest_name,
&link_property
.source
.as_ref()
.unwrap_or(&Default::default())
.config,
);
let (target_config_scalers, mut target_config) =
config_to_scalers(snapshot_data, manifest_name, &link_property.target.config);
let (target_secret_scalers, target_secrets) = secrets_to_scalers(
snapshot_data,
manifest_name,
&link_property.target.secrets,
policies,
);
let (mut source_secret_scalers, source_secrets) = secrets_to_scalers(
snapshot_data,
manifest_name,
&link_property
.source
.as_ref()
.unwrap_or(&Default::default())
.secrets,
policies,
);
config_scalers.extend(target_config_scalers);
source_secret_scalers.extend(target_secret_scalers);
target_config.extend(target_secrets);
source_config.extend(source_secrets);
let (target_manifest_name, target_component_name) =
match resolve_manifest_component(manifest_name, component_name, image, shared) {
Ok(names) => names,
Err(err) => {
error!(err);
return Box::new(StatusScaler::new(
uuid::Uuid::new_v4().to_string(),
LINK_SCALER_KIND,
format!(
"{} -({}:{})-> {}",
component_name,
link_property.namespace,
link_property.package,
link_property.target.name
),
StatusInfo::failed(err),
)) as BoxedScaler;
}
};
let target = compute_component_id(target_manifest_name, target_id, target_component_name);
Box::new(BackoffWrapper::new(
LinkScaler::new(
snapshot_data.clone(),
LinkScalerConfig {
source_id,
target,
wit_namespace: link_property.namespace.to_owned(),
wit_package: link_property.package.to_owned(),
wit_interfaces: link_property.interfaces.to_owned(),
name: link_property
.name
.to_owned()
.unwrap_or_else(|| DEFAULT_LINK_NAME.to_string()),
lattice_id: lattice_id.to_owned(),
model_name: application_name.to_owned(),
source_config,
target_config,
},
snapshot_data.clone(),
),
notifier.clone(),
config_scalers,
source_secret_scalers,
notifier_subject,
application_name,
Some(Duration::from_secs(5)),
)) as BoxedScaler
}
/// Returns a tuple which is a list of scalers and a list of the names of the configs that the
/// scalers use.
///
/// Any input [ConfigProperty] that has a `properties` field will be converted into a [ConfigScaler],
/// and the name of the configuration will be modified to be unique to the model and component. If the
/// `properties` field is not present, the name is used as-is and the configuration is assumed to be
/// managed externally to wadm.
fn config_to_scalers<C: ConfigSource + Send + Sync + Clone>(
config_source: &C,
manifest_name: &str,
configs: &[ConfigProperty],
) -> (Vec<ConfigScaler<C>>, Vec<String>) {
configs
.iter()
.map(|config| {
let name = if config.properties.is_some() {
compute_component_id(manifest_name, None, &config.name)
} else {
config.name.clone()
};
(
ConfigScaler::new(config_source.clone(), &name, config.properties.as_ref()),
name,
)
})
.unzip()
}
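The single-pass map-and-unzip pattern above, which yields the scaler list and the parallel list of resolved config names together, can be sketched independently of wadm's types. `FakeScaler` and `configs_to_parts` here are hypothetical stand-ins for `ConfigScaler` and `config_to_scalers`:

```rust
// Sketch of the map + unzip pattern: one pass over the configs yields both
// the scalers and the (possibly renamed) config names.
struct FakeScaler {
    name: String,
}

fn configs_to_parts(manifest: &str, configs: &[(String, bool)]) -> (Vec<FakeScaler>, Vec<String>) {
    configs
        .iter()
        .map(|(name, has_properties)| {
            // Inline properties get a manifest-scoped name; externally managed
            // config keeps its name as-is.
            let resolved = if *has_properties {
                format!("{manifest}-{name}")
            } else {
                name.clone()
            };
            (FakeScaler { name: resolved.clone() }, resolved)
        })
        .unzip()
}

fn main() {
    let (scalers, names) = configs_to_parts(
        "myapp",
        &[("db".to_string(), true), ("external".to_string(), false)],
    );
    assert_eq!(names, vec!["myapp-db", "external"]);
    assert_eq!(scalers[0].name, "myapp-db");
}
```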
fn secrets_to_scalers<S: SecretSource + Send + Sync + Clone>(
secret_source: &S,
manifest_name: &str,
secrets: &[SecretProperty],
policies: &HashMap<&String, &Policy>,
) -> (Vec<SecretScaler<S>>, Vec<String>) {
secrets
.iter()
.map(|s| {
let name = compute_secret_id(manifest_name, None, &s.name);
let policy = *policies.get(&s.properties.policy).unwrap();
(
SecretScaler::new(
name.clone(),
policy.clone(),
s.clone(),
secret_source.clone(),
),
name,
)
})
.unzip()
}
/// Returns the user-supplied component ID if one was provided. Otherwise, computes a unique ID
/// for the component from sanitized versions of the model name and component name, separated
/// by a dash.
pub(crate) fn compute_component_id(
manifest_name: &str,
component_id: Option<&String>,
component_name: &str,
) -> String {
if let Some(id) = component_id {
id.to_owned()
} else {
format!(
"{}-{}",
manifest_name
.to_lowercase()
.replace(|c: char| !c.is_ascii_alphanumeric(), "_"),
component_name
.to_lowercase()
.replace(|c: char| !c.is_ascii_alphanumeric(), "_")
)
}
}
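The fallback ID scheme above can be sketched standalone; `sanitize` is a hypothetical helper mirroring the two `replace` calls:

```rust
// Minimal sketch of the ID scheme above, for the case with no user-supplied ID:
// lowercase each part, replace every non-ASCII-alphanumeric char with '_',
// then join the parts with a dash.
fn sanitize(s: &str) -> String {
    s.to_lowercase()
        .replace(|c: char| !c.is_ascii_alphanumeric(), "_")
}

fn component_id(manifest_name: &str, component_name: &str) -> String {
    format!("{}-{}", sanitize(manifest_name), sanitize(component_name))
}

fn main() {
    // Mirrors the unit tests below: spaces, dashes, and dots become underscores.
    assert_eq!(component_id("mymodel", "echo-component"), "mymodel-echo_component");
    assert_eq!(component_id("My ThInG", "thing.wasm"), "my_thing-thing_wasm");
}
```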
pub(crate) fn compute_secret_id(
manifest_name: &str,
component_id: Option<&String>,
component_name: &str,
) -> String {
let name = compute_component_id(manifest_name, component_id, component_name);
format!("{SECRET_PREFIX}_{name}")
}
/// Helper function to resolve a link to a manifest component, returning the name of the manifest
/// and the name of the component where the target resides.
///
/// If the component resides in the same manifest, then the name of the manifest & the name of the
/// component as specified will be returned. In the case that the component resides in a shared
/// application, the name of the shared application & the name of the component in that application
/// will be returned.
///
/// # Arguments
/// * `application_name` - The name of the manifest that the scalers are being created for
/// * `component_name` - The name of the component in the source manifest to target
/// * `component_image_ref` - The image reference for the component
/// * `shared_app_info` - The optional shared application reference for the component
fn resolve_manifest_component<'a>(
application_name: &'a str,
component_name: &'a str,
component_image_ref: Option<&'a String>,
shared_app_info: Option<&'a SharedApplicationComponentProperties>,
) -> Result<(&'a str, &'a str), &'a str> {
match (component_image_ref, shared_app_info) {
(Some(_), None) => Ok((application_name, component_name)),
(None, Some(app)) => Ok((app.name.as_str(), app.component.as_str())),
// These two cases should both be unreachable, since this is caught at manifest
// validation before it's put. Just in case, we'll log an error and ensure the status is failed
(None, None) => Err("Application did not specify an image or shared application reference"),
(Some(_image), Some(_app)) => {
Err("Application specified both an image and a shared application reference")
}
}
}
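The "exactly one of image or shared reference" rule above can be sketched with plain `Option`s in place of the manifest types; `resolve` is a hypothetical simplification of `resolve_manifest_component`:

```rust
// Sketch of resolving where a component lives: an image reference means it is
// in this manifest; a shared-application reference points at another manifest.
// Having both or neither is a validation error.
fn resolve<'a>(
    app: &'a str,
    component: &'a str,
    image: Option<&'a str>,
    shared: Option<(&'a str, &'a str)>,
) -> Result<(&'a str, &'a str), &'static str> {
    match (image, shared) {
        (Some(_), None) => Ok((app, component)),
        (None, Some((shared_app, shared_component))) => Ok((shared_app, shared_component)),
        (None, None) => Err("neither an image nor a shared application reference"),
        (Some(_), Some(_)) => Err("both an image and a shared application reference"),
    }
}

fn main() {
    assert_eq!(resolve("app", "echo", Some("oci://ref"), None), Ok(("app", "echo")));
    assert_eq!(
        resolve("app", "echo", None, Some(("other-app", "kv"))),
        Ok(("other-app", "kv"))
    );
    assert!(resolve("app", "echo", None, None).is_err());
}
```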
#[cfg(test)]
mod test {
use super::compute_component_id;
#[test]
fn compute_proper_component_id() {
// User supplied ID always takes precedence
assert_eq!(
compute_component_id("mymodel", Some(&"myid".to_string()), "echo"),
"myid"
);
assert_eq!(
compute_component_id(
"some model name with spaces cause yaml",
Some(&"myid".to_string()),
" echo "
),
"myid"
);
// Sanitize component reference
assert_eq!(
compute_component_id("mymodel", None, "echo-component"),
"mymodel-echo_component"
);
// Ensure we can support spaces in the model name, because YAML strings
assert_eq!(
compute_component_id("some model name with spaces cause yaml", None, "echo"),
"some_model_name_with_spaces_cause_yaml-echo"
);
// Ensure we can support lowercasing the reference as well, just in case
assert_eq!(
compute_component_id("My ThInG", None, "thing.wasm"),
"my_thing-thing_wasm"
);
}
}

File diff suppressed because it is too large


@@ -0,0 +1,841 @@
use std::collections::BTreeMap;
use anyhow::Result;
use async_trait::async_trait;
use tokio::sync::RwLock;
use tracing::{instrument, trace};
use wadm_types::api::StatusType;
use wadm_types::{api::StatusInfo, Spread, SpreadScalerProperty, TraitProperty};
use crate::commands::StopProvider;
use crate::events::{
ConfigSet, HostHeartbeat, ProviderHealthCheckFailed, ProviderHealthCheckInfo,
ProviderHealthCheckPassed, ProviderInfo, ProviderStarted, ProviderStopped,
};
use crate::scaler::compute_id_sha256;
use crate::scaler::spreadscaler::{
compute_ineligible_hosts, eligible_hosts, provider::ProviderSpreadConfig,
spreadscaler_annotations,
};
use crate::storage::{Provider, ProviderStatus};
use crate::SCALER_KEY;
use crate::{
commands::{Command, StartProvider},
events::{Event, HostStarted, HostStopped},
scaler::Scaler,
storage::{Host, ReadStore},
};
use super::DAEMON_SCALER_KIND;
/// The ProviderDaemonScaler ensures that a provider is running on every host, according to a
/// [SpreadScalerProperty](crate::model::SpreadScalerProperty)
///
/// If no [Spreads](crate::model::Spread) are specified, a single default spread is used, so the
/// provider runs on every available host.
pub struct ProviderDaemonScaler<S> {
config: ProviderSpreadConfig,
store: S,
id: String,
status: RwLock<StatusInfo>,
}
#[async_trait]
impl<S: ReadStore + Send + Sync + Clone> Scaler for ProviderDaemonScaler<S> {
fn id(&self) -> &str {
&self.id
}
fn kind(&self) -> &str {
DAEMON_SCALER_KIND
}
fn name(&self) -> String {
self.config.provider_id.to_string()
}
async fn status(&self) -> StatusInfo {
let _ = self.reconcile().await;
self.status.read().await.to_owned()
}
async fn update_config(&mut self, config: TraitProperty) -> Result<Vec<Command>> {
let spread_config = match config {
TraitProperty::SpreadScaler(prop) => prop,
_ => anyhow::bail!("Given config was not a daemon scaler config object"),
};
// If no spreads are specified, an empty spread is sufficient to match _every_ host
// in a lattice
let spread_config = if spread_config.spread.is_empty() {
SpreadScalerProperty {
instances: spread_config.instances,
spread: vec![Spread::default()],
}
} else {
spread_config
};
self.config.spread_config = spread_config;
self.reconcile().await
}
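The empty-spread normalization above (an empty spread list is replaced with one default, requirement-free spread so the scaler matches every host) can be sketched with simplified stand-ins for the wadm types:

```rust
// Sketch of the empty-spread normalization used in `update_config` and `new`.
// These structs are simplified stand-ins for wadm's Spread / SpreadScalerProperty.
#[derive(Debug, Default, PartialEq)]
struct Spread {
    requirements: Vec<(String, String)>,
}

#[derive(Debug, PartialEq)]
struct SpreadProperty {
    instances: usize,
    spread: Vec<Spread>,
}

fn normalize(prop: SpreadProperty) -> SpreadProperty {
    if prop.spread.is_empty() {
        // An empty spread is sufficient to match _every_ host in a lattice.
        SpreadProperty {
            instances: prop.instances,
            spread: vec![Spread::default()],
        }
    } else {
        prop
    }
}

fn main() {
    let normalized = normalize(SpreadProperty { instances: 3, spread: vec![] });
    assert_eq!(normalized.spread.len(), 1);
    assert!(normalized.spread[0].requirements.is_empty());
}
```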
#[instrument(level = "trace", skip_all, fields(scaler_id = %self.id))]
async fn handle_event(&self, event: &Event) -> Result<Vec<Command>> {
// NOTE(brooksmtownsend): We could be more efficient here and instead of running
// the entire reconcile, smart compute exactly what needs to change, but it just
// requires more code branches and would be fine as a future improvement
match event {
Event::ProviderStarted(ProviderStarted { provider_id, .. })
| Event::ProviderStopped(ProviderStopped { provider_id, .. })
if provider_id == &self.config.provider_id =>
{
self.reconcile().await
}
// If the host labels match any spread requirement, perform reconcile
Event::HostStopped(HostStopped { labels, .. })
| Event::HostStarted(HostStarted { labels, .. })
| Event::HostHeartbeat(HostHeartbeat { labels, .. })
if self.config.spread_config.spread.iter().any(|spread| {
spread.requirements.iter().all(|(key, value)| {
labels.get(key).map(|val| val == value).unwrap_or(false)
})
}) =>
{
self.reconcile().await
}
// perform status updates for health check events
Event::ProviderHealthCheckFailed(ProviderHealthCheckFailed {
data: ProviderHealthCheckInfo { provider_id, .. },
})
| Event::ProviderHealthCheckPassed(ProviderHealthCheckPassed {
data: ProviderHealthCheckInfo { provider_id, .. },
}) if provider_id == &self.config.provider_id => {
let provider = self
.store
.get::<Provider>(&self.config.lattice_id, &self.config.provider_id)
.await?;
let unhealthy_providers = provider.map_or(0, |p| {
p.hosts
.values()
.filter(|s| *s == &ProviderStatus::Failed)
.count()
});
let status = self.status.read().await.to_owned();
// update health status of scaler
if let Some(status) = match (status, unhealthy_providers > 0) {
// scaler is deployed but contains unhealthy providers
(
StatusInfo {
status_type: StatusType::Deployed,
..
},
true,
) => Some(StatusInfo::failed(&format!(
"Unhealthy provider on {} host(s)",
unhealthy_providers
))),
// scaler can become unhealthy only if it was previously deployed
// once scaler becomes healthy again revert back to deployed state
// this is a workaround to detect unhealthy status until
// StatusType::Unhealthy can be used
(
StatusInfo {
status_type: StatusType::Failed,
message,
},
false,
) if message.starts_with("Unhealthy provider on") => {
Some(StatusInfo::deployed(""))
}
// don't update status if scaler is not deployed
_ => None,
} {
*self.status.write().await = status;
}
// only status needs update no new commands required
Ok(Vec::new())
}
Event::ConfigSet(ConfigSet { config_name })
if self.config.provider_config.contains(config_name) =>
{
self.reconcile().await
}
// No other event impacts the job of this scaler so we can ignore it
_ => Ok(Vec::new()),
}
}
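The host-event guard above only triggers a reconcile when some spread's requirements are all present in the host's labels. That predicate can be sketched standalone (spreads reduced to plain requirement maps):

```rust
use std::collections::HashMap;

// Sketch of the event guard above: a host event is relevant to this scaler if
// any spread's requirements are all satisfied by the host's labels.
fn host_matches(spreads: &[HashMap<String, String>], labels: &HashMap<String, String>) -> bool {
    spreads.iter().any(|requirements| {
        requirements
            .iter()
            .all(|(key, value)| labels.get(key).map(|v| v == value).unwrap_or(false))
    })
}

fn main() {
    let spread: HashMap<_, _> =
        [("cloud".to_string(), "fake".to_string())].into_iter().collect();
    let mut labels = HashMap::new();
    labels.insert("cloud".to_string(), "fake".to_string());
    labels.insert("region".to_string(), "us-east-1".to_string());
    assert!(host_matches(&[spread.clone()], &labels));

    // A host whose label value differs no longer matches.
    labels.insert("cloud".to_string(), "real".to_string());
    assert!(!host_matches(&[spread], &labels));
}
```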
#[instrument(level = "trace", skip_all, fields(name = %self.config.model_name, scaler_id = %self.id))]
async fn reconcile(&self) -> Result<Vec<Command>> {
let hosts = self.store.list::<Host>(&self.config.lattice_id).await?;
let provider_id = &self.config.provider_id;
let provider_ref = &self.config.provider_reference;
let ineligible_hosts = compute_ineligible_hosts(
&hosts,
self.config
.spread_config
.spread
.iter()
.collect::<Vec<&Spread>>(),
);
// Remove any providers that are managed by this scaler and running on ineligible hosts
let remove_ineligible: Vec<Command> = ineligible_hosts
.iter()
.filter_map(|(_host_id, host)| {
if host
.providers
.get(&ProviderInfo {
provider_id: provider_id.to_string(),
provider_ref: provider_ref.to_string(),
annotations: BTreeMap::default(),
})
.is_some_and(|provider| {
provider
.annotations
.get(SCALER_KEY)
.is_some_and(|id| id == &self.id)
})
{
Some(Command::StopProvider(StopProvider {
provider_id: provider_id.to_owned(),
host_id: host.id.to_string(),
model_name: self.config.model_name.to_owned(),
annotations: BTreeMap::default(),
}))
} else {
None
}
})
.collect();
// If we found any providers running on ineligible hosts, remove them before
// attempting to start new ones.
if !remove_ineligible.is_empty() {
let status = StatusInfo::reconciling(
"Found providers running on ineligible hosts, removing them.",
);
trace!(?status, "Updating scaler status");
*self.status.write().await = status;
return Ok(remove_ineligible);
}
let mut spread_status = vec![];
trace!(spread = ?self.config.spread_config.spread, ?provider_id, "Computing commands");
let commands = self
.config
.spread_config
.spread
.iter()
.flat_map(|spread| {
let eligible_hosts = eligible_hosts(&hosts, spread);
if !eligible_hosts.is_empty() {
eligible_hosts
.iter()
// Filter out hosts that are already running this provider
.filter_map(|(_host_id, host)| {
let provider_on_host = host.providers.get(&ProviderInfo {
provider_id: provider_id.to_string(),
provider_ref: provider_ref.to_string(),
annotations: BTreeMap::default(),
});
match (provider_on_host, self.config.spread_config.instances) {
// Spread instances set to 0 means we're cleaning up and should stop
// running providers
(Some(_), 0) => Some(Command::StopProvider(StopProvider {
provider_id: provider_id.to_owned(),
host_id: host.id.to_string(),
model_name: self.config.model_name.to_owned(),
annotations: spreadscaler_annotations(&spread.name, &self.id),
})),
// Whenever instances > 0, we should start a provider if it's not already running
(None, _n) => Some(Command::StartProvider(StartProvider {
reference: provider_ref.to_owned(),
provider_id: provider_id.to_owned(),
host_id: host.id.to_string(),
model_name: self.config.model_name.to_owned(),
annotations: spreadscaler_annotations(&spread.name, &self.id),
config: self.config.provider_config.clone(),
})),
_ => None,
}
})
.collect::<Vec<Command>>()
} else {
// No hosts were eligible, so we can't attempt to add or remove providers
trace!(?spread.name, "Found no eligible hosts for daemon scaler");
spread_status.push(StatusInfo::failed(&format!(
"Could not satisfy daemonscaler {} for {}, 0 eligible hosts found.",
spread.name, self.config.provider_reference
)));
vec![]
}
})
.collect::<Vec<Command>>();
trace!(?commands, "Calculated commands for provider daemonscaler");
let status = match (spread_status.is_empty(), commands.is_empty()) {
// No failures, no commands, scaler satisfied
(true, true) => StatusInfo::deployed(""),
// No failures, commands generated, scaler is reconciling
(true, false) => {
StatusInfo::reconciling(&format!("Scaling provider on {} host(s)", commands.len()))
}
// Failures occurred, scaler is in a failed state
(false, _) => StatusInfo::failed(
&spread_status
.into_iter()
.map(|s| s.message)
.collect::<Vec<String>>()
.join(" "),
),
};
trace!(?status, "Updating scaler status");
*self.status.write().await = status;
Ok(commands)
}
#[instrument(level = "trace", skip_all, fields(name = %self.config.model_name))]
async fn cleanup(&self) -> Result<Vec<Command>> {
let mut config_clone = self.config.clone();
config_clone.spread_config.instances = 0;
let cleanerupper = ProviderDaemonScaler {
config: config_clone,
store: self.store.clone(),
id: self.id.clone(),
status: RwLock::new(StatusInfo::reconciling("")),
};
cleanerupper.reconcile().await
}
}
impl<S: ReadStore + Send + Sync> ProviderDaemonScaler<S> {
/// Construct a new ProviderDaemonScaler with specified configuration values
pub fn new(store: S, config: ProviderSpreadConfig, component_name: &str) -> Self {
// Compute the id of this scaler based on all of the configuration values
// that make it unique. This is used during upgrades to determine if a
// scaler is the same as a previous one.
let mut id_parts = vec![
DAEMON_SCALER_KIND,
&config.model_name,
component_name,
&config.provider_id,
&config.provider_reference,
];
id_parts.extend(
config
.provider_config
.iter()
.map(std::string::String::as_str),
);
let id = compute_id_sha256(&id_parts);
// If no spreads are specified, an empty spread is sufficient to match _every_ host
// in a lattice
let spread_config = if config.spread_config.spread.is_empty() {
SpreadScalerProperty {
instances: config.spread_config.instances,
spread: vec![Spread::default()],
}
} else {
config.spread_config
};
Self {
store,
config: ProviderSpreadConfig {
spread_config,
..config
},
id,
status: RwLock::new(StatusInfo::reconciling("")),
}
}
}
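The ID derivation in `new` above hashes every value that makes the scaler unique, so a changed configuration yields a different scaler ID across upgrades. A self-contained sketch of the idea, with the std `DefaultHasher` standing in for wadm's internal SHA-256 helper (`compute_id_sha256`):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Sketch of deriving a stable scaler ID from every configuration value that
// makes the scaler unique. wadm hashes the parts with SHA-256; DefaultHasher
// stands in here so the sketch needs no external crates.
fn compute_id(parts: &[&str]) -> String {
    let mut hasher = DefaultHasher::new();
    for part in parts {
        part.hash(&mut hasher);
    }
    format!("{:016x}", hasher.finish())
}

fn main() {
    let id_a = compute_id(&["daemonscaler", "myapp", "myprovider", "provider_id"]);
    let id_b = compute_id(&["daemonscaler", "myapp", "myprovider", "provider_id", "foobar"]);
    // Same parts give the same ID; an extra config value gives a different one,
    // mirroring the `test_id_generator` test below.
    assert_eq!(id_a, compute_id(&["daemonscaler", "myapp", "myprovider", "provider_id"]));
    assert_ne!(id_a, id_b);
}
```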
#[cfg(test)]
mod test {
use std::{
collections::{BTreeMap, HashMap, HashSet},
sync::Arc,
};
use anyhow::Result;
use chrono::Utc;
use wadm_types::{Spread, SpreadScalerProperty};
use crate::{
commands::{Command, StartProvider},
scaler::{spreadscaler::spreadscaler_annotations, Scaler},
storage::{Host, Provider, Store},
test_util::TestStore,
};
use super::*;
const MODEL_NAME: &str = "test_provider_spreadscaler";
#[test]
fn test_id_generator() {
let config = ProviderSpreadConfig {
lattice_id: "lattice".to_string(),
provider_reference: "provider_ref".to_string(),
provider_id: "provider_id".to_string(),
model_name: MODEL_NAME.to_string(),
spread_config: SpreadScalerProperty {
instances: 1,
spread: vec![],
},
provider_config: vec![],
};
let scaler1 =
ProviderDaemonScaler::new(Arc::new(TestStore::default()), config, "myprovider");
let config = ProviderSpreadConfig {
lattice_id: "lattice".to_string(),
provider_reference: "provider_ref".to_string(),
provider_id: "provider_id".to_string(),
model_name: MODEL_NAME.to_string(),
spread_config: SpreadScalerProperty {
instances: 1,
spread: vec![],
},
provider_config: vec!["foobar".to_string()],
};
let scaler2 =
ProviderDaemonScaler::new(Arc::new(TestStore::default()), config, "myprovider");
assert_ne!(
scaler1.id(),
scaler2.id(),
"ProviderDaemonScaler IDs should be different with different configuration"
);
}
#[tokio::test]
async fn can_spread_on_multiple_hosts() -> Result<()> {
let lattice_id = "provider_spread_multi_host";
let provider_ref = "fakecloud.azurecr.io/provider:3.2.1".to_string();
let provider_id = "VASDASDIAMAREALPROVIDERPROVIDER";
let host_id_one = "NASDASDIMAREALHOSTONE";
let host_id_two = "NASDASDIMAREALHOSTTWO";
let store = Arc::new(TestStore::default());
store
.store(
lattice_id,
host_id_one.to_string(),
Host {
components: HashMap::new(),
friendly_name: "hey".to_string(),
labels: HashMap::from_iter([
("inda".to_string(), "cloud".to_string()),
("cloud".to_string(), "fake".to_string()),
("region".to_string(), "us-noneofyourbusiness-1".to_string()),
]),
providers: HashSet::new(),
uptime_seconds: 123,
version: None,
id: host_id_one.to_string(),
last_seen: Utc::now(),
},
)
.await?;
store
.store(
lattice_id,
host_id_two.to_string(),
Host {
components: HashMap::new(),
friendly_name: "hey".to_string(),
labels: HashMap::from_iter([
("inda".to_string(), "cloud".to_string()),
("cloud".to_string(), "real".to_string()),
("region".to_string(), "us-yourhouse-1".to_string()),
]),
providers: HashSet::new(),
uptime_seconds: 123,
version: None,
id: host_id_two.to_string(),
last_seen: Utc::now(),
},
)
.await?;
store
.store(
lattice_id,
provider_id.to_string(),
Provider {
id: provider_id.to_string(),
name: "provider".to_string(),
issuer: "issuer".to_string(),
reference: provider_ref.to_string(),
hosts: HashMap::new(),
},
)
.await?;
// Ensure we spread evenly with equal weights, clean division
let multi_spread_even = SpreadScalerProperty {
// instances are ignored by daemonscalers, so an absurd number is fine here
instances: 12312,
spread: vec![Spread {
name: "SimpleOne".to_string(),
requirements: BTreeMap::from_iter([("inda".to_string(), "cloud".to_string())]),
weight: Some(100),
}],
};
let spreadscaler = ProviderDaemonScaler::new(
store.clone(),
ProviderSpreadConfig {
lattice_id: lattice_id.to_string(),
provider_id: provider_id.to_string(),
provider_reference: provider_ref.to_string(),
spread_config: multi_spread_even,
model_name: MODEL_NAME.to_string(),
provider_config: vec!["foobar".to_string()],
},
"fake_component",
);
let mut commands = spreadscaler.reconcile().await?;
assert_eq!(commands.len(), 2);
// Sort to enable predictable test
commands.sort_unstable_by(|a, b| match (a, b) {
(Command::StartProvider(a), Command::StartProvider(b)) => a.host_id.cmp(&b.host_id),
_ => panic!("Should have been start providers"),
});
let cmd_one = commands.first().cloned();
match cmd_one {
None => panic!("command should have existed"),
Some(Command::StartProvider(start)) => {
assert_eq!(
start,
StartProvider {
reference: provider_ref.to_string(),
provider_id: provider_id.to_string(),
host_id: host_id_one.to_string(),
model_name: MODEL_NAME.to_string(),
annotations: spreadscaler_annotations("SimpleOne", spreadscaler.id()),
config: vec!["foobar".to_string()],
}
);
// This manual assertion is because we don't hash on annotations and I want to be extra sure we have the
// correct ones
assert_eq!(
start.annotations,
spreadscaler_annotations("SimpleOne", spreadscaler.id())
)
}
Some(_other) => panic!("command should have been a start provider"),
}
let cmd_two = commands.get(1).cloned();
match cmd_two {
None => panic!("command should have existed"),
Some(Command::StartProvider(start)) => {
assert_eq!(
start,
StartProvider {
reference: provider_ref.to_string(),
provider_id: provider_id.to_string(),
host_id: host_id_two.to_string(),
model_name: MODEL_NAME.to_string(),
annotations: spreadscaler_annotations("SimpleOne", spreadscaler.id()),
config: vec!["foobar".to_string()],
}
);
// This manual assertion is because we don't hash on annotations and I want to be extra sure we have the
// correct ones
assert_eq!(
start.annotations,
spreadscaler_annotations("SimpleOne", spreadscaler.id())
)
}
Some(_other) => panic!("command should have been a start provider"),
}
Ok(())
}
#[tokio::test]
async fn test_healthy_providers_return_healthy_status() -> Result<()> {
let lattice_id = "test_healthy_providers";
let provider_ref = "fakecloud.azurecr.io/provider:3.2.1".to_string();
let provider_id = "VASDASDIAMAREALPROVIDERPROVIDER";
let host_id_one = "NASDASDIMAREALHOSTONE";
let host_id_two = "NASDASDIMAREALHOSTTWO";
let store = Arc::new(TestStore::default());
store
.store(
lattice_id,
host_id_one.to_string(),
Host {
components: HashMap::new(),
friendly_name: "hey".to_string(),
labels: HashMap::from_iter([
("inda".to_string(), "cloud".to_string()),
("cloud".to_string(), "fake".to_string()),
("region".to_string(), "us-noneofyourbusiness-1".to_string()),
]),
providers: HashSet::from_iter([ProviderInfo {
provider_id: provider_id.to_string(),
provider_ref: provider_ref.to_string(),
annotations: BTreeMap::default(),
}]),
uptime_seconds: 123,
version: None,
id: host_id_one.to_string(),
last_seen: Utc::now(),
},
)
.await?;
store
.store(
lattice_id,
host_id_two.to_string(),
Host {
components: HashMap::new(),
friendly_name: "hey".to_string(),
labels: HashMap::from_iter([
("inda".to_string(), "cloud".to_string()),
("cloud".to_string(), "real".to_string()),
("region".to_string(), "us-yourhouse-1".to_string()),
]),
providers: HashSet::from_iter([ProviderInfo {
provider_id: provider_id.to_string(),
provider_ref: provider_ref.to_string(),
annotations: BTreeMap::default(),
}]),
uptime_seconds: 123,
version: None,
id: host_id_two.to_string(),
last_seen: Utc::now(),
},
)
.await?;
store
.store(
lattice_id,
provider_id.to_string(),
Provider {
id: provider_id.to_string(),
name: "provider".to_string(),
issuer: "issuer".to_string(),
reference: provider_ref.to_string(),
hosts: HashMap::from([
(host_id_one.to_string(), ProviderStatus::Failed),
(host_id_two.to_string(), ProviderStatus::Running),
]),
},
)
.await?;
// Single spread requirement with equal weight that matches both hosts
let multi_spread_even = SpreadScalerProperty {
// instances are ignored by daemonscalers, so the exact value is arbitrary
instances: 2,
spread: vec![Spread {
name: "SimpleOne".to_string(),
requirements: BTreeMap::from_iter([("inda".to_string(), "cloud".to_string())]),
weight: Some(100),
}],
};
let spreadscaler = ProviderDaemonScaler::new(
store.clone(),
ProviderSpreadConfig {
lattice_id: lattice_id.to_string(),
provider_id: provider_id.to_string(),
provider_reference: provider_ref.to_string(),
spread_config: multi_spread_even,
model_name: MODEL_NAME.to_string(),
provider_config: vec!["foobar".to_string()],
},
"fake_component",
);
spreadscaler.reconcile().await?;
spreadscaler
.handle_event(&Event::ProviderHealthCheckFailed(
ProviderHealthCheckFailed {
data: ProviderHealthCheckInfo {
provider_id: provider_id.to_string(),
host_id: host_id_one.to_string(),
},
},
))
.await?;
store
.store(
lattice_id,
provider_id.to_string(),
Provider {
id: provider_id.to_string(),
name: "provider".to_string(),
issuer: "issuer".to_string(),
reference: provider_ref.to_string(),
hosts: HashMap::from([
(host_id_one.to_string(), ProviderStatus::Pending),
(host_id_two.to_string(), ProviderStatus::Running),
]),
},
)
.await?;
spreadscaler
.handle_event(&Event::ProviderHealthCheckPassed(
ProviderHealthCheckPassed {
data: ProviderHealthCheckInfo {
provider_id: provider_id.to_string(),
host_id: host_id_two.to_string(),
},
},
))
.await?;
assert_eq!(
spreadscaler.status.read().await.to_owned(),
StatusInfo::deployed("")
);
Ok(())
}
#[tokio::test]
async fn test_unhealthy_providers_return_unhealthy_status() -> Result<()> {
let lattice_id = "test_unhealthy_providers";
let provider_ref = "fakecloud.azurecr.io/provider:3.2.1".to_string();
let provider_id = "VASDASDIAMAREALPROVIDERPROVIDER";
let host_id_one = "NASDASDIMAREALHOSTONE";
let host_id_two = "NASDASDIMAREALHOSTTWO";
let store = Arc::new(TestStore::default());
store
.store(
lattice_id,
host_id_one.to_string(),
Host {
components: HashMap::new(),
friendly_name: "hey".to_string(),
labels: HashMap::from_iter([
("inda".to_string(), "cloud".to_string()),
("cloud".to_string(), "fake".to_string()),
("region".to_string(), "us-noneofyourbusiness-1".to_string()),
]),
providers: HashSet::from_iter([ProviderInfo {
provider_id: provider_id.to_string(),
provider_ref: provider_ref.to_string(),
annotations: BTreeMap::default(),
}]),
uptime_seconds: 123,
version: None,
id: host_id_one.to_string(),
last_seen: Utc::now(),
},
)
.await?;
store
.store(
lattice_id,
host_id_two.to_string(),
Host {
components: HashMap::new(),
friendly_name: "hey".to_string(),
labels: HashMap::from_iter([
("inda".to_string(), "cloud".to_string()),
("cloud".to_string(), "real".to_string()),
("region".to_string(), "us-yourhouse-1".to_string()),
]),
providers: HashSet::from_iter([ProviderInfo {
provider_id: provider_id.to_string(),
provider_ref: provider_ref.to_string(),
annotations: BTreeMap::default(),
}]),
uptime_seconds: 123,
version: None,
id: host_id_two.to_string(),
last_seen: Utc::now(),
},
)
.await?;
store
.store(
lattice_id,
provider_id.to_string(),
Provider {
id: provider_id.to_string(),
name: "provider".to_string(),
issuer: "issuer".to_string(),
reference: provider_ref.to_string(),
hosts: HashMap::from([
(host_id_one.to_string(), ProviderStatus::Failed),
(host_id_two.to_string(), ProviderStatus::Running),
]),
},
)
.await?;
// Single spread requirement with equal weight that matches both hosts
let multi_spread_even = SpreadScalerProperty {
// instances are ignored by daemonscalers, so the exact value is arbitrary
instances: 2,
spread: vec![Spread {
name: "SimpleOne".to_string(),
requirements: BTreeMap::from_iter([("inda".to_string(), "cloud".to_string())]),
weight: Some(100),
}],
};
let spreadscaler = ProviderDaemonScaler::new(
store.clone(),
ProviderSpreadConfig {
lattice_id: lattice_id.to_string(),
provider_id: provider_id.to_string(),
provider_reference: provider_ref.to_string(),
spread_config: multi_spread_even,
model_name: MODEL_NAME.to_string(),
provider_config: vec!["foobar".to_string()],
},
"fake_component",
);
spreadscaler.reconcile().await?;
spreadscaler
.handle_event(&Event::ProviderHealthCheckFailed(
ProviderHealthCheckFailed {
data: ProviderHealthCheckInfo {
provider_id: provider_id.to_string(),
host_id: host_id_one.to_string(),
},
},
))
.await?;
assert_eq!(
spreadscaler.status.read().await.to_owned(),
StatusInfo::failed("Unhealthy provider on 1 host(s)")
);
Ok(())
}
}
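The `test_id_generator` test above relies on scaler IDs being a deterministic hash of every distinguishing configuration field, so equal configs produce equal IDs and any changed field (such as `provider_config`) produces a new one. A dependency-free sketch of that idea, using `std`'s `DefaultHasher` as a stand-in for the SHA-256 digest that `compute_id_sha256` uses (the `compute_id` helper below is illustrative, not wadm's API):

```rust
// Hash every field that distinguishes one scaler from another. Equal inputs
// yield equal IDs; any changed or added part yields a different ID. The real
// code uses SHA-256; DefaultHasher stands in to keep this example std-only
// (DefaultHasher::new() is deterministic within a process).
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

fn compute_id(parts: &[&str]) -> String {
    let mut hasher = DefaultHasher::new();
    for part in parts {
        part.hash(&mut hasher);
    }
    format!("{:016x}", hasher.finish())
}

fn main() {
    let base = ["lattice", "myprovider", "provider_id", "provider_ref"];
    let mut with_config = base.to_vec();
    with_config.push("foobar");
    // Deterministic for the same inputs...
    assert_eq!(compute_id(&base), compute_id(&base));
    // ...and distinct once provider config is appended, mirroring the test
    assert_ne!(compute_id(&base), compute_id(&with_config));
}
```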

//! A struct that manages creating and removing scalers for all manifests
use std::{collections::HashMap, ops::Deref, sync::Arc};
use anyhow::Result;
use async_nats::jetstream::{
consumer::pull::{Config as PullConfig, Stream as MessageStream},
kv::Store as KvStore,
stream::Stream as JsStream,
AckKind,
};
use cloudevents::Event as CloudEvent;
use futures::StreamExt;
use serde::{Deserialize, Serialize};
use tokio::{
sync::{OwnedRwLockReadGuard, RwLock},
task::JoinHandle,
};
use tracing::{debug, error, instrument, trace, warn};
use wadm_types::{
api::{Status, StatusInfo},
Manifest,
};
use crate::{
events::Event,
publisher::Publisher,
scaler::{Command, Scaler},
storage::{snapshot::SnapshotStore, ReadStore},
workers::{CommandPublisher, ConfigSource, LinkSource, SecretSource, StatusPublisher},
};
use super::convert::manifest_components_to_scalers;
pub type BoxedScaler = Box<dyn Scaler + Send + Sync + 'static>;
pub type ScalerList = Vec<BoxedScaler>;
pub const WADM_NOTIFY_PREFIX: &str = "wadm.notify";
/// All events sent for manifest notifications
#[derive(Debug, Serialize, Deserialize)]
pub enum Notifications {
CreateScalers(Manifest),
DeleteScalers(String),
/// Register expected events for a manifest. You can either trigger this with an event (which
/// will result in calling `handle_event`) or without in order to calculate expected events with
/// a full reconcile (like on first deploy), rather than just handling a single event
RegisterExpectedEvents {
name: String,
scaler_id: String,
triggering_event: Option<CloudEvent>,
},
/// Remove an event from the expected list for a manifest scaler
RemoveExpectedEvent {
name: String,
scaler_id: String,
event: CloudEvent,
},
}
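These notifications travel on a subject derived from the lattice ID, and because the enum derives plain `Serialize`/`Deserialize`, serde's default externally-tagged representation applies, so `DeleteScalers("echo")` would serialize as `{"DeleteScalers":"echo"}` (an inference from the derive, not something shown in this file). A minimal, std-only sketch of the subject construction:

```rust
// Subject construction for manifest notifications: the well-known prefix plus
// the lattice ID, matching the `WADM_NOTIFY_PREFIX` constant above.
const WADM_NOTIFY_PREFIX: &str = "wadm.notify";

fn notify_subject(lattice_id: &str) -> String {
    format!("{WADM_NOTIFY_PREFIX}.{lattice_id}")
}

fn main() {
    // Every wadm instance for a lattice consumes from this one subject
    assert_eq!(notify_subject("default"), "wadm.notify.default");
}
```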
/// A wrapper type returned when getting a list of scalers for a model
pub struct Scalers {
pub scalers: OwnedRwLockReadGuard<HashMap<String, ScalerList>, ScalerList>,
}
impl Deref for Scalers {
type Target = ScalerList;
fn deref(&self) -> &Self::Target {
self.scalers.deref()
}
}
/// A wrapper type returned when getting all scalers.
pub struct AllScalers {
pub scalers: OwnedRwLockReadGuard<HashMap<String, ScalerList>>,
}
impl Deref for AllScalers {
type Target = HashMap<String, ScalerList>;
fn deref(&self) -> &Self::Target {
self.scalers.deref()
}
}
/// A wrapper type returned when getting a specific scaler
pub struct SingleScaler {
pub scaler: OwnedRwLockReadGuard<HashMap<String, ScalerList>, BoxedScaler>,
}
impl Deref for SingleScaler {
type Target = BoxedScaler;
fn deref(&self) -> &Self::Target {
self.scaler.deref()
}
}
/// A manager that consumes notifications from a stream for a lattice and then either adds or removes the
/// necessary scalers
#[derive(Clone)]
pub struct ScalerManager<StateStore, P: Clone, L: Clone> {
handle: Option<Arc<JoinHandle<Result<()>>>>,
scalers: Arc<RwLock<HashMap<String, ScalerList>>>,
client: P,
subject: String,
lattice_id: String,
command_publisher: CommandPublisher<P>,
status_publisher: StatusPublisher<P>,
snapshot_data: SnapshotStore<StateStore, L>,
}
impl<StateStore, P: Clone, L: Clone> Drop for ScalerManager<StateStore, P, L> {
fn drop(&mut self) {
if let Some(handle) = self.handle.take() {
handle.abort()
}
}
}
impl<StateStore, P, L> ScalerManager<StateStore, P, L>
where
StateStore: ReadStore + Send + Sync + Clone + 'static,
P: Publisher + Clone + Send + Sync + 'static,
L: LinkSource + ConfigSource + SecretSource + Clone + Send + Sync + 'static,
{
/// Creates a new ScalerManager configured to publish notifications to `wadm.notify.{lattice_id}`
/// using the given jetstream client. Also creates an ephemeral consumer for notifications on
/// the given stream
#[allow(clippy::too_many_arguments)]
pub async fn new(
client: P,
stream: JsStream,
lattice_id: &str,
multitenant_prefix: Option<&str>,
state_store: StateStore,
manifest_store: KvStore,
command_publisher: CommandPublisher<P>,
status_publisher: StatusPublisher<P>,
link_getter: L,
) -> Result<ScalerManager<StateStore, P, L>> {
// Create the consumer first so that we can make sure we don't miss anything during the
// first reconcile pass
let subject = format!("{WADM_NOTIFY_PREFIX}.{lattice_id}");
let consumer = stream
.create_consumer(PullConfig {
// TODO(thomastaylor312): We should probably generate a friendly consumer name
// using an optional unique identifier
description: Some(format!(
"Ephemeral wadm notifier consumer for lattice {lattice_id}"
)),
ack_policy: async_nats::jetstream::consumer::AckPolicy::Explicit,
ack_wait: std::time::Duration::from_secs(2),
max_deliver: 3,
deliver_policy: async_nats::jetstream::consumer::DeliverPolicy::All,
filter_subject: subject.clone(),
..Default::default()
})
.await
.map_err(|e| anyhow::anyhow!("Unable to create ephemeral consumer: {e:?}"))?;
let messages = consumer
.messages()
.await
.map_err(|e| anyhow::anyhow!("Unable to subscribe to consumer: {e:?}"))?;
// Get current scalers set up
let manifest_store = crate::server::ModelStorage::new(manifest_store);
let futs = manifest_store
.list(multitenant_prefix, lattice_id)
.await?
.into_iter()
.map(|summary| {
manifest_store.get(multitenant_prefix, lattice_id, summary.name().to_owned())
});
let all_manifests = futures::future::join_all(futs)
.await
.into_iter()
.filter_map(|manifest| manifest.transpose())
.map(|res| res.map(|(manifest, _)| manifest))
.collect::<Result<Vec<_>>>()?;
let snapshot_data = SnapshotStore::new(
state_store.clone(),
link_getter.clone(),
lattice_id.to_owned(),
);
let scalers: HashMap<String, ScalerList> = all_manifests
.into_iter()
.filter_map(|manifest| {
let data = manifest.get_deployed()?;
let name = manifest.name().to_owned();
let scalers = manifest_components_to_scalers(
&data.spec.components,
&data.policy_lookup(),
lattice_id,
&name,
&subject,
&client,
&snapshot_data,
);
Some((name, scalers))
})
.collect();
let scalers = Arc::new(RwLock::new(scalers));
let mut manager = ScalerManager {
handle: None,
scalers,
client,
subject,
lattice_id: lattice_id.to_owned(),
command_publisher,
status_publisher,
snapshot_data,
};
let cloned = manager.clone();
let handle = tokio::spawn(async move { cloned.notify(messages).await });
manager.handle = Some(Arc::new(handle));
Ok(manager)
}
// NOTE(thomastaylor312): This is a little gross as it is purely for testing, but we needed a
// way to work around creating a consumer when starting stuff
#[cfg(test)]
pub(crate) async fn test_new(
client: P,
lattice_id: &str,
state_store: StateStore,
command_publisher: CommandPublisher<P>,
status_publisher: StatusPublisher<P>,
link_getter: L,
) -> ScalerManager<StateStore, P, L> {
let snapshot_data = SnapshotStore::new(
state_store.clone(),
link_getter.clone(),
lattice_id.to_owned(),
);
ScalerManager {
handle: None,
scalers: Arc::new(RwLock::new(HashMap::new())),
client,
subject: format!("{WADM_NOTIFY_PREFIX}.{lattice_id}"),
lattice_id: lattice_id.to_owned(),
command_publisher,
status_publisher,
snapshot_data,
}
}
/// Refreshes the snapshot data consumed by all scalers. This is a temporary workaround until we
/// start caching data
pub(crate) async fn refresh_data(&self) -> Result<()> {
self.snapshot_data.refresh().await
}
/// Adds scalers for the given manifest, emitting an event to notify other wadm processes that
/// they should create them as well. Only returns an error if it can't notify. Returns the
/// scaler list for immediate use in reconciliation
///
/// This only constructs the scalers and doesn't reconcile. The returned [`Scalers`] type can be
/// used to set this model to backoff mode
#[instrument(level = "trace", skip_all, fields(name = %manifest.metadata.name, lattice_id = %self.lattice_id))]
pub async fn add_scalers<'a>(
&'a self,
manifest: &'a Manifest,
scalers: ScalerList,
) -> Result<Scalers> {
self.add_raw_scalers(&manifest.metadata.name, scalers).await;
let notification = serde_json::to_vec(&Notifications::CreateScalers(manifest.to_owned()))?;
self.client
.publish(notification, Some(&self.subject))
.await?;
// The error case here would be _really_ weird as something would have removed it right
// after we added, but handling it just in case
self.get_scalers(&manifest.metadata.name)
.await
.ok_or_else(|| anyhow::anyhow!("Data error: scalers no longer exist after creation"))
}
pub fn scalers_for_manifest<'a>(&'a self, manifest: &'a Manifest) -> ScalerList {
manifest_components_to_scalers(
&manifest.spec.components,
&manifest.policy_lookup(),
&self.lattice_id,
&manifest.metadata.name,
&self.subject,
&self.client,
&self.snapshot_data,
)
}
/// Gets the scalers for the given model name, returning None if they don't exist.
#[instrument(level = "trace", skip(self), fields(lattice_id = %self.lattice_id))]
pub async fn get_scalers<'a>(&'a self, name: &'a str) -> Option<Scalers> {
let lock = self.scalers.clone().read_owned().await;
OwnedRwLockReadGuard::try_map(lock, |scalers| scalers.get(name))
.ok()
.map(|scalers| Scalers { scalers })
}
/// Gets a specific scaler for the given model name with the given ID, returning None if it doesn't exist.
#[instrument(level = "trace", skip(self), fields(lattice_id = %self.lattice_id))]
pub async fn get_specific_scaler<'a>(
&'a self,
name: &'a str,
scaler_id: &'a str,
) -> Option<SingleScaler> {
let lock = self.scalers.clone().read_owned().await;
OwnedRwLockReadGuard::try_map(lock, |scalers| {
scalers
.get(name)
.and_then(|scalers| scalers.iter().find(|s| s.id() == scaler_id))
})
.ok()
.map(|scaler| SingleScaler { scaler })
}
/// Gets all current managed scalers
#[instrument(level = "trace", skip(self), fields(lattice_id = %self.lattice_id))]
pub async fn get_all_scalers(&self) -> AllScalers {
AllScalers {
scalers: self.scalers.clone().read_owned().await,
}
}
/// Removes the scalers for a given model name, publishing all commands needed for cleanup.
/// Returns `None` when no scaler with the name was found
///
/// This function will notify other wadms that they should remove the scalers as well. If the
/// notification or handling commands fails, then this function will reinsert the scalers back into the internal map
/// and return an error (so this function can be called again)
// NOTE(thomastaylor312): This was designed the way it is to avoid race conditions. We only ever
// stop components and providers that have the right annotation. So if for some reason this
// leaves something hanging, we should probably add something to the reaper
#[instrument(level = "debug", skip(self), fields(lattice_id = %self.lattice_id))]
pub async fn remove_scalers(&self, name: &str) -> Option<Result<()>> {
let scalers = match self.remove_scalers_internal(name).await {
Some(Ok(s)) => Some(s),
Some(Err(e)) => {
warn!(err = ?e, "Error when running cleanup steps for scalers. Operation will be retried");
return Some(Err(e));
}
None => None,
};
// SAFETY: This is entirely data in our control and should be safe to unwrap
if let Err(e) = self
.client
.publish(
serde_json::to_vec(&Notifications::DeleteScalers(name.to_owned())).unwrap(),
Some(&self.subject),
)
.await
{
error!(error = %e, "Unable to publish notification");
if let Some(scalers) = scalers {
self.scalers.write().await.insert(name.to_owned(), scalers);
}
Some(Err(e))
} else {
Some(Ok(()))
}
}
/// An internal function to allow pushing the scalers without any of the publishing
async fn add_raw_scalers(&self, name: &str, scalers: ScalerList) {
self.scalers.write().await.insert(name.to_owned(), scalers);
}
/// A function that removes the scalers without any of the publishing
/// CAUTION: This function does not do any cleanup, so it should only be used in scenarios
/// where you are prepared to handle that yourself.
pub(crate) async fn remove_raw_scalers(&self, name: &str) -> Option<ScalerList> {
self.scalers.write().await.remove(name)
}
/// Does everything except sending the notification
#[instrument(level = "debug", skip(self), fields(lattice_id = %self.lattice_id))]
async fn remove_scalers_internal(&self, name: &str) -> Option<Result<ScalerList>> {
// Remove the scalers first to avoid them handling events while we're cleaning up
let scalers = self.remove_raw_scalers(name).await?;
// Always refresh data before cleaning up
if let Err(e) = self.refresh_data().await {
return Some(Err(e));
}
let commands = match futures::future::join_all(
scalers.iter().map(|scaler| scaler.cleanup()),
)
.await
.into_iter()
.collect::<Result<Vec<Vec<Command>>, anyhow::Error>>()
.map(|all| all.into_iter().flatten().collect::<Vec<Command>>())
{
Ok(c) => c,
Err(e) => {
warn!(err = ?e, "Error when running cleanup steps for scalers. Operation will be retried");
// Put the scalers back into the map so we can run cleanup again on retry
self.scalers.write().await.insert(name.to_owned(), scalers);
return Some(Err(e));
}
};
trace!(?commands, "Publishing cleanup commands");
if let Err(e) = self.command_publisher.publish_commands(commands).await {
error!(error = %e, "Unable to publish cleanup commands");
self.scalers.write().await.insert(name.to_owned(), scalers);
Some(Err(e))
} else {
Some(Ok(scalers))
}
}
#[instrument(level = "debug", skip_all, fields(lattice_id = %self.lattice_id))]
async fn notify(&self, mut messages: MessageStream) -> Result<()> {
loop {
tokio::select! {
res = messages.next() => {
match res {
Some(Ok(msg)) => {
let notification: Notifications = match serde_json::from_slice(&msg.payload) {
Ok(n) => n,
Err(e) => {
warn!(error = %e, "Received unparsable message from consumer");
continue;
}
};
match notification {
Notifications::CreateScalers(manifest) => {
// We don't want to trigger the notification, so just create the scalers and then insert
let scalers = manifest_components_to_scalers(
&manifest.spec.components,
&manifest.policy_lookup(),
&self.lattice_id,
&manifest.metadata.name,
&self.subject,
&self.client,
&self.snapshot_data,
);
let num_scalers = scalers.len();
self.add_raw_scalers(&manifest.metadata.name, scalers).await;
trace!(name = %manifest.metadata.name, %num_scalers, "Finished creating scalers for manifest");
}
Notifications::DeleteScalers(name) => {
trace!(%name, "Removing scalers for manifest");
match self.remove_scalers_internal(&name).await {
Some(Ok(_)) | None => {
trace!(%name, "Removed scalers or scalers were already removed");
// NOTE(thomastaylor312): We publish the undeployed
// status here after we remove scalers. All wadm
// instances will receive this event (even the one that
// initially deleted it) and so it made more sense to
// publish the status here so we don't get any stray
// reconciling status messages from a wadm instance that
// hasn't deleted the scaler yet
if let Err(e) = self
.status_publisher
.publish_status(&name, Status::new(
StatusInfo::undeployed("Manifest has been undeployed"),
Vec::with_capacity(0),
))
.await
{
warn!(error = ?e, "Failed to set status to undeployed");
}
}
Some(Err(e)) => {
error!(error = %e, %name, "Error when running cleanup steps for scalers. Nacking notification");
if let Err(e) = msg.ack_with(AckKind::Nak(None)).await {
warn!(error = %e, %name, "Unable to nack message");
// We continue here so we don't fall through to the ack
continue;
}
}
}
// NOTE(thomastaylor312): We could find that this strategy actually
// doesn't tear down everything or leaves something hanging. If that is
// the case, a new part of the reaper logic should handle it
},
// NOTE(thomastaylor312): Please note that both of the
// ExpectedEvents blocks are "cheating". If the scaler is a backoff
// wrapped scaler, calling `reconcile` or `handle_event` will
// trigger the creation of the expected events for this scaler.
// Otherwise, it will just run the logic for handling stuff. Either
// way, we ignore the returned events. Right now this is totally
// fine because scalers should run fairly quickly and we only run
// them in the case where something is observed. It also leaves the
// flexibility of managing any scaler rather than just backoff
// wrapped ones (which is good from a Rust API point of view). If
// this starts to become a problem, we can revisit how we handle
// this (probably by requiring that this struct always wraps any
// scaler in the backoff wrapper and using custom methods from that
// type)
Notifications::RegisterExpectedEvents{ name, scaler_id, triggering_event } => {
trace!(%name, "Computing and registering expected events for manifest");
if let Some(scaler) = self.get_specific_scaler(&name, &scaler_id).await {
if let Some(event) = triggering_event {
let parsed_event: Event = match event.try_into() {
Ok(e) => e,
Err(e) => {
error!(error = %e, %name, "Unable to parse given event");
continue;
}
};
if let Err(e) = scaler.handle_event(&parsed_event).await {
error!(error = %e, %name, %scaler_id, "Unable to register expected events for scaler");
}
} else if let Err(e) = scaler.reconcile().await {
error!(error = %e, %name, %scaler_id, "Unable to register expected events for scaler");
}
} else {
debug!(%name, "Received request to register events for non-existent scalers, ignoring");
}
},
Notifications::RemoveExpectedEvent{ name, scaler_id, event } => {
trace!(%name, "Removing expected event for manifest");
if let Some(scaler) = self.get_specific_scaler(&name, &scaler_id).await {
let parsed_event: Event = match event.try_into() {
Ok(e) => e,
Err(e) => {
error!(error = %e, %name, "Unable to parse given event");
continue;
}
};
if let Err(e) = scaler.handle_event(&parsed_event).await {
error!(error = %e, %name, %scaler_id, "Unable to remove expected event for scaler");
}
} else {
debug!(%name, "Received request to remove event for non-existent scalers, ignoring");
}
}
}
// Always ack if we get here
if let Err(e) = msg.double_ack().await {
warn!(error = %e, "Unable to ack message");
}
}
Some(Err(e)) => {
error!(error = %e, "Error when retrieving message from stream. Will attempt to fetch next message");
}
None => {
// NOTE(thomastaylor312): This could possibly be a fatal error as we won't be able
// to create scalers. We may want to determine how to bubble that all the way back
// up if it is
error!("Notifier for manifests has exited");
anyhow::bail!("Notifier has exited")
}
}
}
}
}
}
}
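The retry semantics documented on `remove_scalers` above (remove the scaler list first so it stops handling events, then reinsert it if cleanup or publishing fails so the whole operation can be retried) can be sketched synchronously with a plain `HashMap`. The real code is async behind a tokio `RwLock`, and `remove_with_cleanup` below is an illustrative name, not wadm's API:

```rust
use std::collections::HashMap;

// Take-then-restore: remove the entry up front so nothing else uses it during
// cleanup, and put it back on failure so a later retry sees the original state.
// `None` means there was nothing to remove, mirroring the Option return above.
fn remove_with_cleanup<V>(
    map: &mut HashMap<String, V>,
    name: &str,
    cleanup: impl Fn(&V) -> Result<(), String>,
) -> Option<Result<V, String>> {
    let value = map.remove(name)?;
    match cleanup(&value) {
        Ok(()) => Some(Ok(value)),
        Err(e) => {
            // Reinsert so the caller can run the whole removal again
            map.insert(name.to_owned(), value);
            Some(Err(e))
        }
    }
}

fn main() {
    let mut scalers = HashMap::from([("echo".to_string(), vec![1, 2])]);
    // Failed cleanup leaves the entry in place for a retry
    let res = remove_with_cleanup(&mut scalers, "echo", |_| Err("nats down".to_string()));
    assert!(matches!(res, Some(Err(_))));
    assert!(scalers.contains_key("echo"));
    // Successful cleanup removes it for good
    let res = remove_with_cleanup(&mut scalers, "echo", |_| Ok(()));
    assert!(matches!(res, Some(Ok(_))));
    assert!(!scalers.contains_key("echo"));
    // Missing entries yield None
    assert!(remove_with_cleanup(&mut scalers, "echo", |_| Ok(())).is_none());
}
```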

use std::{sync::Arc, time::Duration};
use anyhow::Result;
use async_trait::async_trait;
use sha2::{Digest, Sha256};
use tokio::{
sync::{Mutex, RwLock},
task::JoinHandle,
};
use tracing::{error, instrument, trace, Instrument};
use wadm_types::{api::StatusInfo, TraitProperty};
use crate::{
commands::Command,
events::{ComponentScaleFailed, ComponentScaled, Event, ProviderStartFailed, ProviderStarted},
publisher::Publisher,
workers::{get_commands_and_result, ConfigSource, SecretSource},
};
pub mod configscaler;
mod convert;
pub mod daemonscaler;
pub mod manager;
pub mod secretscaler;
pub mod spreadscaler;
pub mod statusscaler;
use manager::Notifications;
use self::configscaler::ConfigScaler;
use self::secretscaler::SecretScaler;
const DEFAULT_WAIT_TIMEOUT: Duration = Duration::from_secs(30);
const DEFAULT_SCALER_KIND: &str = "Scaler";
/// A trait describing a struct that can be configured to compute the difference between
/// desired state and configured state, returning a set of commands to approach desired state.
///
/// Implementers of this trait can choose how to access state, but it's generally recommended to
/// use a [ReadStore](crate::storage::ReadStore) so that it can retrieve current information about
/// state using a common trait that only allows store access and not modification
///
/// Typically a Scaler should be configured with `update_config`, then use the `reconcile` method
/// for an initial set of commands. As events change the state, they should also be given to the Scaler
/// to determine if actions need to be taken in response to an event
#[async_trait]
pub trait Scaler {
/// A unique identifier for this scaler type. This is used for logging and for selecting
/// specific scalers as needed. wadm scalers implement this by computing a sha256 hash of
/// all of the parameters that are used to construct the scaler, therefore ensuring that
/// the ID is unique for each scaler
fn id(&self) -> &str;
/// An optional human-friendly name for this scaler. This is used for logging and for selecting
/// specific scalers as needed. This is optional and by default returns the same value as `id`,
/// and does not have to be unique
fn name(&self) -> String {
self.id().to_string()
}
/// An optional kind of scaler. This is used for logging and for selecting specific scalers as needed
fn kind(&self) -> &str {
DEFAULT_SCALER_KIND
}
/// Determine the status of this scaler according to reconciliation logic. This is the opportunity
/// for scalers to indicate that they are unhealthy with a message as to what's missing.
async fn status(&self) -> StatusInfo;
/// Provide a scaler with configuration to use internally when computing commands. This should
/// trigger a reconcile with the new configuration.
///
/// This config can be anything that can be turned into a
/// [`TraitProperty`](crate::model::TraitProperty). Additional configuration outside of what is
/// available in a `TraitProperty` can be passed when constructing the scaler
async fn update_config(&mut self, config: TraitProperty) -> Result<Vec<Command>>;
/// Compute commands that must be taken given an event that changes the lattice state
async fn handle_event(&self, event: &Event) -> Result<Vec<Command>>;
/// Compute commands that must be taken to achieve desired state as specified in config
async fn reconcile(&self) -> Result<Vec<Command>>;
/// Returns the list of commands needed to cleanup for a scaler
///
/// This purposefully does not consume the scaler so that if there is a failure it can be kept
/// around
async fn cleanup(&self) -> Result<Vec<Command>>;
}
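The trait's contract (`reconcile` diffs desired state against observed state into commands, and `handle_event` folds a single observation in before computing commands again) can be illustrated with a simplified, synchronous stand-in. The real trait is async and works over wadm's `Command`/`Event`/`StatusInfo` types; every name below is illustrative:

```rust
// Simplified stand-in for the Scaler contract: commands move observed state
// toward desired state, and events update the observed side.
#[derive(Debug, PartialEq)]
enum Cmd {
    Start(u32),
    Stop(u32),
}

trait SimpleScaler {
    fn reconcile(&self) -> Vec<Cmd>;
    fn handle_event(&mut self, running_now: u32) -> Vec<Cmd>;
}

struct CountScaler {
    desired: u32,
    observed: u32,
}

impl SimpleScaler for CountScaler {
    fn reconcile(&self) -> Vec<Cmd> {
        match self.desired.cmp(&self.observed) {
            std::cmp::Ordering::Greater => vec![Cmd::Start(self.desired - self.observed)],
            std::cmp::Ordering::Less => vec![Cmd::Stop(self.observed - self.desired)],
            std::cmp::Ordering::Equal => vec![], // at desired state: nothing to do
        }
    }
    fn handle_event(&mut self, running_now: u32) -> Vec<Cmd> {
        // An event only updates observed state; reconcile derives the commands
        self.observed = running_now;
        self.reconcile()
    }
}

fn main() {
    let mut scaler = CountScaler { desired: 3, observed: 0 };
    assert_eq!(scaler.reconcile(), vec![Cmd::Start(3)]);
    // An event reporting one running instance narrows the gap
    assert_eq!(scaler.handle_event(1), vec![Cmd::Start(2)]);
    assert_eq!(scaler.handle_event(3), vec![]);
}
```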
/// The BackoffWrapper is a wrapper around a scaler that is responsible for
/// ensuring that a particular scaler doesn't get overwhelmed with events and has the
/// necessary prerequisites to reconcile.
///
/// 1. `required_config` & `required_secrets`: With the introduction of configuration
/// for wadm applications, the most necessary prerequisite for components, providers
/// and links to start is that their configuration is available. Scalers will not be
/// able to issue commands until the configuration exists.
/// 2. `expected_events`: For scalers that issue commands that should result in events,
/// the BackoffWrapper is responsible for ensuring that the scaler doesn't continually
/// issue commands that it's already expecting events for. Commonly this will allow a host
/// to download larger images from an OCI repository without being bombarded with repeat requests.
/// 3. `backoff_status`: If a scaler receives an event that it was expecting, but it was a failure
/// event, the scaler should back off exponentially while reporting that failure status. This both
/// allows for diagnosing issues with reconciliation and prevents thrashing.
///
/// All of the above effectively allows the inner Scaler to only worry about the logic around
/// reconciling and handling events, rather than be concerned about whether or not
/// it should handle a specific event, if it's causing jitter, overshoot, etc.
///
/// The `notifier` is used to publish notifications to add, remove, or recompute
/// expected events with scalers on other wadm instances, as only one wadm instance
/// at a time will handle a specific event.
pub(crate) struct BackoffWrapper<T, P, C> {
scaler: T,
notifier: P,
notify_subject: String,
model_name: String,
required_config: Vec<ConfigScaler<C>>,
required_secrets: Vec<SecretScaler<C>>,
/// A list of (success, Option<failure>) events that the scaler is expecting
#[allow(clippy::type_complexity)]
expected_events: Arc<RwLock<Vec<(Event, Option<Event>)>>>,
/// Responsible for clearing up the expected events list after a certain amount of time
event_cleaner: Mutex<Option<JoinHandle<()>>>,
/// The amount of time to wait before cleaning up the expected events list
cleanup_timeout: std::time::Duration,
/// The status of the scaler, set when the scaler is backing off due to a
/// failure event.
backoff_status: Arc<RwLock<Option<StatusInfo>>>,
// TODO(#253): Figure out where/when/how to store the backoff and exponentially repeat it
/// Responsible for cleaning up the backoff status after a specified duration
status_cleaner: Mutex<Option<JoinHandle<()>>>,
}
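The `expected_events` gating described in the doc comment above can be sketched with simplified stand-in types (`Event` and `Gate` here are hypothetical, not wadm's real API): while a scaler still awaits events for commands it already issued, new events are suppressed, and expected events are consumed without re-handling.

```rust
// Minimal sketch of the expected-events gate: while a scaler awaits events
// for commands it already issued, unrelated events are dropped, and expected
// events are consumed silently. Simplified stand-in types, not wadm's.
#[derive(Clone, Debug, PartialEq)]
struct Event(&'static str);

struct Gate {
    expected: Vec<Event>,
}

impl Gate {
    fn new() -> Self {
        Gate { expected: Vec::new() }
    }

    // Record the events we expect as a result of issued commands.
    fn register(&mut self, events: impl IntoIterator<Item = Event>) {
        self.expected.extend(events);
    }

    // Returns true if the event should be handled. Expected events are
    // consumed and not re-handled; everything else passes only when idle.
    fn should_handle(&mut self, event: &Event) -> bool {
        if let Some(pos) = self.expected.iter().position(|e| e == event) {
            self.expected.remove(pos); // expected: consume, do not re-handle
            false
        } else {
            self.expected.is_empty() // unrelated events pass only when idle
        }
    }
}

fn main() {
    let mut gate = Gate::new();
    gate.register([Event("provider_started")]);
    assert!(!gate.should_handle(&Event("component_scaled"))); // still waiting
    assert!(!gate.should_handle(&Event("provider_started"))); // consumed
    assert!(gate.should_handle(&Event("component_scaled"))); // idle again
}
```

This is the same "backoff" effect `handle_event_internal` implements below, minus notification of other wadm instances and the timed cleanup.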
impl<T, P, C> BackoffWrapper<T, P, C>
where
T: Scaler + Send + Sync,
P: Publisher + Send + Sync + 'static,
C: ConfigSource + SecretSource + Send + Sync + Clone + 'static,
{
/// Wraps the given scaler in a new BackoffWrapper. `cleanup_timeout` can be set to a
/// desired waiting time, otherwise it will default to 30s
pub fn new(
scaler: T,
notifier: P,
required_config: Vec<ConfigScaler<C>>,
required_secrets: Vec<SecretScaler<C>>,
notify_subject: &str,
model_name: &str,
cleanup_timeout: Option<Duration>,
) -> Self {
Self {
scaler,
notifier,
required_config,
required_secrets,
notify_subject: notify_subject.to_owned(),
model_name: model_name.to_string(),
expected_events: Arc::new(RwLock::new(Vec::new())),
event_cleaner: Mutex::new(None),
cleanup_timeout: cleanup_timeout.unwrap_or(DEFAULT_WAIT_TIMEOUT),
backoff_status: Arc::new(RwLock::new(None)),
status_cleaner: Mutex::new(None),
}
}
pub async fn event_count(&self) -> usize {
self.expected_events.read().await.len()
}
/// Adds events to the expected events list
///
/// # Arguments
/// `events` - A list of (success, failure) events to add to the expected events list
/// `clear_previous` - If true, clears the previous expected events list before adding the new events
async fn add_events<I>(&self, events: I, clear_previous: bool)
where
I: IntoIterator<Item = (Event, Option<Event>)>,
{
let mut expected_events = self.expected_events.write().await;
if clear_previous {
expected_events.clear();
}
expected_events.extend(events);
self.set_timed_event_cleanup().await;
}
/// Removes an event pair from the expected events list if one matches the given event.
///
/// Returns a tuple of bools, the first indicating whether an event was removed, and the
/// second indicating whether the removed event was the failure event
async fn remove_event(&self, event: &Event) -> Result<(bool, bool)> {
let mut expected_events = self.expected_events.write().await;
let before_count = expected_events.len();
let mut failed_event = false;
expected_events.retain(|(success, fail)| {
let matches_success = evt_matches_expected(success, event);
let matches_failure = fail
.as_ref()
.is_some_and(|f| evt_matches_expected(f, event));
// Update failed_event if the event matches the failure event
failed_event |= matches_failure;
// Retain the event if it doesn't match either the success or failure event
!(matches_success || matches_failure)
});
Ok((expected_events.len() < before_count, failed_event))
}
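The `retain`-based removal above can be illustrated in isolation (with a simplified stand-in `Event` type, not wadm's): each entry pairs a success event with an optional failure event, and removal reports both whether anything was removed and whether the match was the failure event.

```rust
// Sketch of the retain-based removal: each entry pairs a success event with
// an optional failure event, and removal reports (removed, was_failure).
// `Event` is a simplified stand-in for wadm's event type.
#[derive(Clone, Debug, PartialEq)]
struct Event(&'static str);

fn remove_event(
    expected: &mut Vec<(Event, Option<Event>)>,
    incoming: &Event,
) -> (bool, bool) {
    let before = expected.len();
    let mut failed = false;
    expected.retain(|(success, failure)| {
        let is_success = success == incoming;
        let is_failure = failure.as_ref().is_some_and(|f| f == incoming);
        // Record whether the incoming event matched the failure variant
        failed |= is_failure;
        // Keep the pair only if neither variant matched
        !(is_success || is_failure)
    });
    (expected.len() < before, failed)
}

fn main() {
    let mut expected = vec![(Event("scaled"), Some(Event("scale_failed")))];
    // Matching the failure event removes the pair and flags the failure
    assert_eq!(remove_event(&mut expected, &Event("scale_failed")), (true, true));
    assert!(expected.is_empty());
    // Nothing left to match
    assert_eq!(remove_event(&mut expected, &Event("scaled")), (false, false));
}
```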
/// Handles an incoming event for the given scaler.
///
/// This function processes the event and returns a vector of commands to be executed.
/// It also manages the expected events list, removing successfully handled events
/// and adding new expected events based on the executed commands, and using the notifier
/// to send notifications to other scalers running on different wadm instances.
///
/// # Arguments
///
/// * `event`: A reference to the `Event` struct which represents the incoming event to be handled.
///
/// # Returns
///
/// * `Result<Vec<Command>>`: A `Result` containing a vector of `Command` structs if successful,
/// or an error of type `anyhow::Error` if any error occurs while processing the event.
#[instrument(level = "trace", skip_all, fields(scaler_id = %self.id()))]
async fn handle_event_internal(&self, event: &Event) -> anyhow::Result<Vec<Command>> {
let model_name = &self.model_name;
let (expected_event, failed_event) = self.remove_event(event).await?;
let commands: Vec<Command> = if expected_event {
// If we receive a failed event that the scaler was expecting, we know the
// scaler's status is effectively failed and it should retry. Tell the other
// scalers to remove the event; while removing it they will also learn that
// it failed.
trace!(failed_event, "Scaler received event that it was expecting");
if failed_event {
let failed_message = match event {
Event::ProviderStartFailed(evt) => evt.error.clone(),
Event::ComponentScaleFailed(evt) => evt.error.clone(),
_ => format!("Received a failed event of type '{}'", event.raw_type()),
};
*self.backoff_status.write().await = Some(StatusInfo::failed(&failed_message));
// TODO(#253): Here we could refer to a stored previous duration and increase it
self.set_timed_status_cleanup(std::time::Duration::from_secs(5))
.await;
}
let data = serde_json::to_vec(&Notifications::RemoveExpectedEvent {
name: model_name.to_owned(),
scaler_id: self.scaler.id().to_owned(),
event: event.to_owned().try_into()?,
})?;
self.notifier
.publish(data, Some(&self.notify_subject))
.await?;
// The scaler was expecting this event and it shouldn't respond with commands
Vec::with_capacity(0)
} else if self.event_count().await > 0 {
trace!("Scaler received event but is still expecting events, ignoring");
// If a scaler is expecting events still, don't have it handle events. This is effectively
// the backoff mechanism within wadm
Vec::with_capacity(0)
} else if self.backoff_status.read().await.is_some() {
trace!("Scaler received event but is in backoff, ignoring");
Vec::with_capacity(0)
} else {
trace!("Scaler is not backing off, checking configuration");
let (mut config_commands, res) = get_commands_and_result(
self.required_config
.iter()
.map(|config| async { config.handle_event(event).await }),
"Errors occurred while handling event with config scalers",
)
.await;
if let Err(e) = res {
error!(
"Error occurred while handling event with config scalers: {}",
e
);
}
let (mut secret_commands, res) = get_commands_and_result(
self.required_secrets
.iter()
.map(|secret| async { secret.handle_event(event).await }),
"Errors occurred while handling event with secret scalers",
)
.await;
if let Err(e) = res {
error!(
"Error occurred while handling event with secret scalers: {}",
e
);
}
// If the config scalers or secret scalers have commands to send, return them
if !config_commands.is_empty() || !secret_commands.is_empty() {
config_commands.append(&mut secret_commands);
return Ok(config_commands);
}
trace!("Scaler required configuration is present, handling event");
let commands = self.scaler.handle_event(event).await?;
// Based on the commands, compute the events that we expect to see for this scaler. The scaler
// will then ignore incoming events until all of the expected events have been received.
let expected_events = commands.iter().filter_map(|cmd| cmd.corresponding_event());
self.add_events(expected_events, false).await;
// Only let other scalers know if we generated commands to take
if !self.expected_events.read().await.is_empty() {
trace!("Scaler generated commands, notifying other scalers to register expected events");
let data = serde_json::to_vec(&Notifications::RegisterExpectedEvents {
name: model_name.to_owned(),
scaler_id: self.scaler.id().to_owned(),
triggering_event: Some(event.to_owned().try_into()?),
})?;
self.notifier
.publish(data, Some(&self.notify_subject))
.await?;
}
commands
};
Ok(commands)
}
#[instrument(level = "trace", skip_all, fields(scaler_id = %self.id()))]
async fn reconcile_internal(&self) -> Result<Vec<Command>> {
// If we're already in backoff, return an empty list
let current_event_count = self.event_count().await;
if current_event_count > 0 {
trace!(%current_event_count, "Scaler is awaiting an event, not reconciling");
return Ok(Vec::with_capacity(0));
}
if self.backoff_status.read().await.is_some() {
tracing::info!(%current_event_count, "Scaler is backing off, not reconciling");
return Ok(Vec::with_capacity(0));
}
let mut commands = Vec::new();
for config in &self.required_config {
commands.extend(config.reconcile().await?);
}
for secret in &self.required_secrets {
commands.extend(secret.reconcile().await?);
}
if !commands.is_empty() {
return Ok(commands);
}
match self.scaler.reconcile().await {
// "Back off" scaler with expected corresponding events if the scaler generated commands
Ok(commands) if !commands.is_empty() => {
// Generate expected events
self.add_events(
commands
.iter()
.filter_map(|command| command.corresponding_event()),
true,
)
.await;
if !self.expected_events.read().await.is_empty() {
trace!("Reconcile generated expected events, notifying other scalers to register expected events");
let data = serde_json::to_vec(&Notifications::RegisterExpectedEvents {
name: self.model_name.to_owned(),
scaler_id: self.scaler.id().to_owned(),
triggering_event: None,
})?;
self.notifier
.publish(data, Some(&self.notify_subject))
.await?;
return Ok(commands);
}
Ok(commands)
}
Ok(commands) => {
trace!("Reconcile generated no commands, no need to register expected events");
Ok(commands)
}
Err(e) => Err(e),
}
}
async fn cleanup_internal(&self) -> Result<Vec<Command>> {
let mut commands = self.scaler.cleanup().await.unwrap_or_default();
for config in self.required_config.iter() {
match config.cleanup().await {
Ok(cmds) => commands.extend(cmds),
// Explicitly logging, but continuing, in the case of an error to make sure
// we don't prevent other cleanup tasks from running
Err(e) => {
error!("Error occurred while cleaning up config scalers: {}", e);
}
}
}
for secret in self.required_secrets.iter() {
match secret.cleanup().await {
Ok(cmds) => commands.extend(cmds),
// Explicitly logging, but continuing, in the case of an error to make sure
// we don't prevent other cleanup tasks from running
Err(e) => {
error!("Error occurred while cleaning up secret scalers: {}", e);
}
}
}
Ok(commands)
}
/// Sets a timed cleanup task to clear the expected events list after a timeout
async fn set_timed_event_cleanup(&self) {
let mut event_cleaner = self.event_cleaner.lock().await;
// Clear any existing handle
if let Some(handle) = event_cleaner.take() {
handle.abort();
}
let expected_events = self.expected_events.clone();
let timeout = self.cleanup_timeout;
*event_cleaner = Some(tokio::spawn(
async move {
tokio::time::sleep(timeout).await;
trace!("Reached event cleanup timeout, clearing expected events");
expected_events.write().await.clear();
}
.instrument(tracing::trace_span!("event_cleaner", scaler_id = %self.id())),
));
}
/// Sets a timed cleanup task to clear the backoff status after a timeout
async fn set_timed_status_cleanup(&self, timeout: Duration) {
let mut status_cleaner = self.status_cleaner.lock().await;
// Clear any existing handle
if let Some(handle) = status_cleaner.take() {
handle.abort();
}
let backoff_status = self.backoff_status.clone();
*status_cleaner = Some(tokio::spawn(
async move {
tokio::time::sleep(timeout).await;
trace!("Reached status cleanup timeout, clearing backoff status");
backoff_status.write().await.take();
}
.instrument(tracing::trace_span!("status_cleaner", scaler_id = %self.id())),
));
}
}
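TODO(#253) above notes that the backoff duration should be stored and repeated exponentially rather than fixed at 5 seconds. A common scheme (a hypothetical helper, not wadm's implementation) doubles a base delay per consecutive failure up to a cap:

```rust
use std::time::Duration;

// Hypothetical helper for the exponential backoff TODO(#253) describes:
// double a base delay per consecutive failure, capped at a maximum.
fn backoff_delay(base: Duration, failures: u32, max: Duration) -> Duration {
    // saturating_* avoids overflow for very large failure counts
    base.saturating_mul(2u32.saturating_pow(failures)).min(max)
}

fn main() {
    let base = Duration::from_secs(5);
    let max = Duration::from_secs(60);
    assert_eq!(backoff_delay(base, 0, max), Duration::from_secs(5));
    assert_eq!(backoff_delay(base, 2, max), Duration::from_secs(20));
    assert_eq!(backoff_delay(base, 6, max), Duration::from_secs(60)); // capped
}
```

The computed duration would replace the hardcoded `Duration::from_secs(5)` passed to `set_timed_status_cleanup`.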
#[async_trait]
/// The [`Scaler`] trait implementation for the [`BackoffWrapper`] is mostly a simple wrapper,
/// with three exceptions, which allow scalers to sync state between different wadm instances.
///
/// * `handle_event` calls an internal method that uses a notifier to publish notifications to
/// all Scalers, even running on different wadm instances, to handle that event. The resulting
/// commands from those scalers are ignored as this instance is already handling the event.
/// * `reconcile` calls an internal method that uses a notifier to ensure all Scalers, even
/// running on different wadm instances, compute their expected events in response to the
/// reconciliation commands in order to "back off".
/// * `status` will first check to see if the scaler is in a backing off state, and if so, return
/// the backoff status. Otherwise, it will return the status of the scaler.
impl<T, P, C> Scaler for BackoffWrapper<T, P, C>
where
T: Scaler + Send + Sync,
P: Publisher + Send + Sync + 'static,
C: ConfigSource + SecretSource + Send + Sync + Clone + 'static,
{
fn id(&self) -> &str {
// Pass through the ID of the wrapped scaler
self.scaler.id()
}
fn kind(&self) -> &str {
// Pass through the kind of the wrapped scaler
self.scaler.kind()
}
fn name(&self) -> String {
self.scaler.name()
}
async fn status(&self) -> StatusInfo {
// If the scaler has a backoff status, return that, otherwise return the status of the scaler
if let Some(status) = self.backoff_status.read().await.clone() {
status
} else {
self.scaler.status().await
}
}
async fn update_config(&mut self, config: TraitProperty) -> Result<Vec<Command>> {
self.scaler.update_config(config).await
}
async fn handle_event(&self, event: &Event) -> Result<Vec<Command>> {
self.handle_event_internal(event).await
}
async fn reconcile(&self) -> Result<Vec<Command>> {
self.reconcile_internal().await
}
async fn cleanup(&self) -> Result<Vec<Command>> {
self.cleanup_internal().await
}
}
/// A specialized function that compares an incoming lattice event to an "expected" event
/// stored alongside a [Scaler](Scaler).
///
/// This is not a PartialEq or Eq implementation because there are strict assumptions that do not always hold.
/// For example, an incoming and expected event are equal even if their claims are not equal, because we cannot
/// compute that information from a [Command](Command). However, this is not a valid comparison if actually
/// comparing two events for equality.
fn evt_matches_expected(incoming: &Event, expected: &Event) -> bool {
match (incoming, expected) {
(
Event::ProviderStarted(ProviderStarted {
annotations: a1,
image_ref: i1,
host_id: h1,
provider_id: p1,
..
}),
Event::ProviderStarted(ProviderStarted {
annotations: a2,
image_ref: i2,
host_id: h2,
provider_id: p2,
..
}),
) => a1 == a2 && i1 == i2 && p1 == p2 && h1 == h2,
(
Event::ProviderStartFailed(ProviderStartFailed {
provider_id: p1,
provider_ref: i1,
host_id: h1,
..
}),
Event::ProviderStartFailed(ProviderStartFailed {
provider_id: p2,
provider_ref: i2,
host_id: h2,
..
}),
) => p1 == p2 && h1 == h2 && i1 == i2,
(
Event::ComponentScaled(ComponentScaled {
annotations: a1,
image_ref: i1,
component_id: c1,
host_id: h1,
..
}),
Event::ComponentScaled(ComponentScaled {
annotations: a2,
image_ref: i2,
component_id: c2,
host_id: h2,
..
}),
) => a1 == a2 && i1 == i2 && c1 == c2 && h1 == h2,
(
Event::ComponentScaleFailed(ComponentScaleFailed {
annotations: a1,
image_ref: i1,
component_id: c1,
host_id: h1,
..
}),
Event::ComponentScaleFailed(ComponentScaleFailed {
annotations: a2,
image_ref: i2,
component_id: c2,
host_id: h2,
..
}),
) => a1 == a2 && i1 == i2 && c1 == c2 && h1 == h2,
_ => false,
}
}
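The subset comparison above can be shown with one simplified variant (hypothetical stand-in struct, not wadm's event type): events match on the fields a `Command` can predict, deliberately ignoring fields like claims that cannot be known when the command is issued.

```rust
// Sketch of the subset comparison: match only on fields a Command can
// predict (ids, refs), ignoring fields like claims that it cannot know.
// Simplified stand-in type, not wadm's real event.
#[derive(Debug)]
struct ComponentScaled {
    component_id: String,
    host_id: String,
    image_ref: String,
    claims: Option<String>, // unknowable from a Command; deliberately ignored
}

fn matches_expected(incoming: &ComponentScaled, expected: &ComponentScaled) -> bool {
    incoming.component_id == expected.component_id
        && incoming.host_id == expected.host_id
        && incoming.image_ref == expected.image_ref
}

fn main() {
    let expected = ComponentScaled {
        component_id: "echo".to_string(),
        host_id: "host-1".to_string(),
        image_ref: "example.com/echo:0.1.0".to_string(),
        claims: None, // a command cannot predict claims
    };
    let incoming = ComponentScaled {
        component_id: expected.component_id.clone(),
        host_id: expected.host_id.clone(),
        image_ref: expected.image_ref.clone(),
        claims: Some("issuer-key".to_string()), // present on the real event
    };
    // Matches despite differing claims, which a PartialEq impl would reject
    assert!(matches_expected(&incoming, &expected));
}
```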
/// Computes the sha256 digest of the given parameters to form a unique ID for a scaler
pub(crate) fn compute_id_sha256(params: &[&str]) -> String {
let mut hasher = Sha256::new();
for param in params {
hasher.update(param.as_bytes())
}
let hash = hasher.finalize();
format!("{hash:x}")
}
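An analogous dependency-free sketch of the deterministic-id idea: hash every distinguishing parameter so that any change produces a different id. The real function above uses SHA-256 from the `sha2` crate; std's `DefaultHasher` is substituted here only to keep the example self-contained (its output is not stable across Rust releases, so it would be unsuitable for persisted ids).

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Analogous sketch of deterministic scaler ids. The real wadm code uses
// SHA-256; DefaultHasher is used here only to stay dependency-free.
fn compute_id(params: &[&str]) -> String {
    let mut hasher = DefaultHasher::new();
    for param in params {
        param.hash(&mut hasher);
    }
    format!("{:x}", hasher.finish())
}

fn main() {
    let a = compute_id(&["model", "scaler", "target"]);
    let b = compute_id(&["model", "scaler", "target"]);
    let c = compute_id(&["model", "scaler", "other"]);
    assert_eq!(a, b); // same inputs always yield the same id
    assert_ne!(a, c); // any differing parameter changes the id
}
```

Because the id is derived from every parameter that makes a scaler unique, upgrades can compare ids to decide whether a scaler is unchanged.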

use anyhow::{Context, Result};
use async_trait::async_trait;
use tokio::sync::RwLock;
use tracing::{debug, error, instrument, trace};
use wadm_types::{
api::{StatusInfo, StatusType},
Policy, SecretProperty, TraitProperty,
};
use wasmcloud_secrets_types::SecretConfig;
use crate::{
commands::{Command, DeleteConfig, PutConfig},
events::{ConfigDeleted, ConfigSet, Event},
scaler::Scaler,
workers::SecretSource,
};
use super::compute_id_sha256;
const SECRET_SCALER_KIND: &str = "SecretScaler";
pub struct SecretScaler<SecretSource> {
secret_source: SecretSource,
/// The key to use in the configdata bucket for this secret
secret_name: String,
secret_config: SecretConfig,
id: String,
status: RwLock<StatusInfo>,
}
impl<S: SecretSource> SecretScaler<S> {
pub fn new(
secret_name: String,
policy: Policy,
secret_property: SecretProperty,
secret_source: S,
) -> Self {
// Compute the id of this scaler based on all of the values that make it unique.
// This is used during upgrades to determine if a scaler is the same as a previous one.
let mut id_parts = vec![
secret_name.as_str(),
policy.name.as_str(),
policy.policy_type.as_str(),
secret_property.name.as_str(),
secret_property.properties.policy.as_str(),
secret_property.properties.key.as_str(),
];
if let Some(version) = secret_property.properties.version.as_ref() {
id_parts.push(version.as_str());
}
id_parts.extend(
policy
.properties
.iter()
.flat_map(|(k, v)| vec![k.as_str(), v.as_str()]),
);
let id = compute_id_sha256(&id_parts);
let secret_config = config_from_manifest_structures(policy, secret_property)
.expect("failed to create secret config from policy and secret properties");
Self {
id,
secret_name,
secret_config,
secret_source,
status: RwLock::new(StatusInfo::reconciling("")),
}
}
}
#[async_trait]
impl<S: SecretSource + Send + Sync + Clone> Scaler for SecretScaler<S> {
fn id(&self) -> &str {
&self.id
}
fn kind(&self) -> &str {
SECRET_SCALER_KIND
}
fn name(&self) -> String {
self.secret_config.name.to_string()
}
async fn status(&self) -> StatusInfo {
let _ = self.reconcile().await;
self.status.read().await.to_owned()
}
async fn update_config(&mut self, _config: TraitProperty) -> Result<Vec<Command>> {
debug!("SecretScaler does not support updating config, ignoring");
Ok(vec![])
}
async fn handle_event(&self, event: &Event) -> Result<Vec<Command>> {
match event {
Event::ConfigSet(ConfigSet { config_name })
| Event::ConfigDeleted(ConfigDeleted { config_name }) => {
if config_name == &self.secret_name {
return self.reconcile().await;
}
}
// This is a workaround to ensure that the config has a chance to periodically
// update itself if it is out of sync. For efficiency, we only fetch configuration
// again if the status is not deployed.
Event::HostHeartbeat(_) => {
if !matches!(self.status.read().await.status_type, StatusType::Deployed) {
return self.reconcile().await;
}
}
_ => {
trace!("SecretScaler does not support this event, ignoring");
}
}
Ok(Vec::new())
}
#[instrument(level = "trace", skip_all, fields(scaler_id = %self.id))]
async fn reconcile(&self) -> Result<Vec<Command>> {
debug!(self.secret_name, "Fetching configuration");
match self.secret_source.get_secret(&self.secret_name).await {
// If configuration matches what's supplied, this scaler is deployed
Ok(Some(config)) if config == self.secret_config => {
*self.status.write().await = StatusInfo::deployed("");
Ok(Vec::new())
}
// If configuration is out of sync, we put the configuration
Ok(_config) => {
debug!(self.secret_name, "Putting secret");
match self.secret_config.clone().try_into() {
Ok(config) => {
*self.status.write().await = StatusInfo::reconciling("Secret out of sync");
Ok(vec![Command::PutConfig(PutConfig {
config_name: self.secret_name.clone(),
config,
})])
}
Err(e) => {
*self.status.write().await = StatusInfo::failed(&format!(
"Failed to convert secret config to map: {}.",
e
));
Ok(vec![])
}
}
}
Err(e) => {
error!(error = %e, "SecretScaler failed to fetch configuration");
*self.status.write().await = StatusInfo::failed(&e.to_string());
Ok(Vec::new())
}
}
}
#[instrument(level = "trace", skip_all)]
async fn cleanup(&self) -> Result<Vec<Command>> {
Ok(vec![Command::DeleteConfig(DeleteConfig {
config_name: self.secret_name.clone(),
})])
}
}
/// Merge policy and properties into a [`SecretConfig`] for later use.
fn config_from_manifest_structures(
policy: Policy,
reference: SecretProperty,
) -> anyhow::Result<SecretConfig> {
let mut policy_properties = policy.properties.clone();
let backend = policy_properties
.remove("backend")
.context("policy did not have a backend property")?;
Ok(SecretConfig::new(
reference.name.clone(),
backend,
reference.properties.key.clone(),
reference.properties.field.clone(),
reference.properties.version.clone(),
policy_properties
.into_iter()
.map(|(k, v)| (k, v.into()))
.collect(),
))
}
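The merge above pulls the `backend` key out of the policy properties and treats the remainder as backend-specific configuration. A minimal standalone sketch of that split (using a plain `Result<_, String>` in place of anyhow's `Context`):

```rust
use std::collections::BTreeMap;

// Sketch of the policy merge: extract the required `backend` key and treat
// the remaining properties as backend-specific configuration.
fn split_backend(
    mut properties: BTreeMap<String, String>,
) -> Result<(String, BTreeMap<String, String>), String> {
    let backend = properties
        .remove("backend")
        .ok_or_else(|| "policy did not have a backend property".to_string())?;
    Ok((backend, properties))
}

fn main() {
    let props = BTreeMap::from([
        ("backend".to_string(), "nats-kv".to_string()),
        ("bucket".to_string(), "SECRETS".to_string()),
    ]);
    let (backend, rest) = split_backend(props).expect("backend present");
    assert_eq!(backend, "nats-kv");
    assert_eq!(rest.len(), 1); // only backend-specific keys remain
    assert!(split_backend(BTreeMap::new()).is_err()); // missing backend fails
}
```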
#[cfg(test)]
mod test {
use super::config_from_manifest_structures;
use crate::{
commands::{Command, PutConfig},
events::{ConfigDeleted, Event, HostHeartbeat},
scaler::Scaler,
test_util::TestLatticeSource,
};
use std::collections::{BTreeMap, HashMap};
use wadm_types::{api::StatusType, Policy, SecretProperty, SecretSourceProperty};
#[tokio::test]
async fn test_secret_scaler() {
let lattice = TestLatticeSource {
claims: HashMap::new(),
inventory: Default::default(),
links: Vec::new(),
config: HashMap::new(),
};
let policy = Policy {
name: "nats-kv".to_string(),
policy_type: "secrets-backend".to_string(),
properties: BTreeMap::from([("backend".to_string(), "nats-kv".to_string())]),
};
let secret = SecretProperty {
name: "test".to_string(),
properties: SecretSourceProperty {
policy: "nats-kv".to_string(),
key: "test".to_string(),
field: None,
version: None,
},
};
let secret_scaler = super::SecretScaler::new(
secret.name.clone(),
policy.clone(),
secret.clone(),
lattice.clone(),
);
assert_eq!(
secret_scaler.status().await.status_type,
StatusType::Reconciling
);
let cfg = config_from_manifest_structures(policy, secret.clone())
.expect("failed to merge policy");
assert_eq!(
secret_scaler
.reconcile()
.await
.expect("reconcile did not succeed"),
vec![Command::PutConfig(PutConfig {
config_name: secret.name.clone(),
config: cfg.clone().try_into().expect("should convert to map"),
})],
);
assert_eq!(
secret_scaler.status().await.status_type,
StatusType::Reconciling
);
// Configuration deleted, relevant
assert_eq!(
secret_scaler
.handle_event(&Event::ConfigDeleted(ConfigDeleted {
config_name: secret.name.clone()
}))
.await
.expect("handle_event should succeed"),
vec![Command::PutConfig(PutConfig {
config_name: secret.name.clone(),
config: cfg.clone().try_into().expect("should convert to map"),
})]
);
assert_eq!(
secret_scaler.status().await.status_type,
StatusType::Reconciling
);
// Configuration deleted, irrelevant
assert_eq!(
secret_scaler
.handle_event(&Event::ConfigDeleted(ConfigDeleted {
config_name: "some_other_config".to_string()
}))
.await
.expect("handle_event should succeed"),
vec![]
);
assert_eq!(
secret_scaler.status().await.status_type,
StatusType::Reconciling
);
// Periodic reconcile with host heartbeat
assert_eq!(
secret_scaler
.handle_event(&Event::HostHeartbeat(HostHeartbeat {
components: Vec::new(),
providers: Vec::new(),
host_id: String::default(),
issuer: String::default(),
friendly_name: String::default(),
labels: HashMap::new(),
version: semver::Version::new(0, 0, 0),
uptime_human: String::default(),
uptime_seconds: 0,
}))
.await
.expect("handle_event should succeed"),
vec![Command::PutConfig(PutConfig {
config_name: secret.name.clone(),
config: cfg.clone().try_into().expect("should convert to map"),
})]
);
assert_eq!(
secret_scaler.status().await.status_type,
StatusType::Reconciling
);
}
}

use anyhow::Result;
use async_trait::async_trait;
use tokio::sync::RwLock;
use tracing::instrument;
use wadm_types::{api::StatusInfo, TraitProperty};
use crate::{
commands::{Command, DeleteLink, PutLink},
events::{
Event, LinkdefDeleted, LinkdefSet, ProviderHealthCheckInfo, ProviderHealthCheckPassed,
ProviderHealthCheckStatus,
},
scaler::{compute_id_sha256, Scaler},
storage::ReadStore,
workers::LinkSource,
};
pub const LINK_SCALER_KIND: &str = "LinkScaler";
/// Configuration for a [`LinkScaler`]
pub struct LinkScalerConfig {
/// Component identifier for the source of the link
pub source_id: String,
/// Target identifier or group for the link
pub target: String,
/// WIT Namespace for the link
pub wit_namespace: String,
/// WIT Package for the link
pub wit_package: String,
/// WIT Interfaces for the link
pub wit_interfaces: Vec<String>,
/// Name of the link
pub name: String,
/// Lattice ID the Link is configured for
pub lattice_id: String,
/// The name of the wadm model this LinkScaler is under
pub model_name: String,
/// List of configurations for the source of this link
pub source_config: Vec<String>,
/// List of configurations for the target of this link
pub target_config: Vec<String>,
}
/// The LinkScaler ensures that link configuration exists on a specified lattice.
pub struct LinkScaler<S, L> {
pub config: LinkScalerConfig,
// TODO(#253): Reenable once we figure out https://github.com/wasmCloud/wadm/issues/123
#[allow(unused)]
store: S,
ctl_client: L,
id: String,
status: RwLock<StatusInfo>,
}
#[async_trait]
impl<S, L> Scaler for LinkScaler<S, L>
where
S: ReadStore + Send + Sync,
L: LinkSource + Send + Sync,
{
fn id(&self) -> &str {
&self.id
}
fn kind(&self) -> &str {
LINK_SCALER_KIND
}
fn name(&self) -> String {
format!(
"{} -({}:{})-> {}",
self.config.source_id,
self.config.wit_namespace,
self.config.wit_package,
self.config.target
)
}
async fn status(&self) -> StatusInfo {
let _ = self.reconcile().await;
self.status.read().await.to_owned()
}
async fn update_config(&mut self, _config: TraitProperty) -> Result<Vec<Command>> {
// NOTE(brooksmtownsend): Updating a link scaler essentially means you're creating
// a totally new scaler, so just do that instead.
self.reconcile().await
}
#[instrument(level = "trace", skip_all, fields(scaler_id = %self.id))]
async fn handle_event(&self, event: &Event) -> Result<Vec<Command>> {
match event {
// Trigger linkdef creation if this component starts and belongs to this model
Event::ComponentScaled(evt)
if evt.component_id == self.config.source_id
|| evt.component_id == self.config.target =>
{
self.reconcile().await
}
Event::ProviderHealthCheckPassed(ProviderHealthCheckPassed {
data: ProviderHealthCheckInfo { provider_id, .. },
..
})
| Event::ProviderHealthCheckStatus(ProviderHealthCheckStatus {
data: ProviderHealthCheckInfo { provider_id, .. },
..
// NOTE(brooksmtownsend): Ideally we shouldn't actually care about the target being healthy, but
// I'm leaving this part in for now to avoid strange conditions in the future where we might want
// to re-put a link or at least reconcile if the target changes health status.
}) if provider_id == &self.config.source_id || provider_id == &self.config.target => {
// Wait until we know the provider is healthy before we link. This also avoids the race condition
// where a provider is started by the host
self.reconcile().await
}
Event::LinkdefDeleted(LinkdefDeleted {
source_id,
wit_namespace,
wit_package,
name,
}) if source_id == &self.config.source_id
&& name == &self.config.name
&& wit_namespace == &self.config.wit_namespace
&& wit_package == &self.config.wit_package =>
{
self.reconcile().await
}
Event::LinkdefSet(LinkdefSet { linkdef })
if linkdef.source_id() == self.config.source_id
&& linkdef.target() == self.config.target
&& linkdef.name() == self.config.name =>
{
*self.status.write().await = StatusInfo::deployed("");
Ok(Vec::new())
}
_ => Ok(Vec::new()),
}
}
#[instrument(level = "trace", skip_all, fields(source_id = %self.config.source_id, target = %self.config.target, link_name = %self.config.name, scaler_id = %self.id))]
async fn reconcile(&self) -> Result<Vec<Command>> {
let source_id = &self.config.source_id;
let target = &self.config.target;
let linkdefs = self.ctl_client.get_links().await?;
let (exists, _config_different) = linkdefs
.into_iter()
.find(|linkdef| {
linkdef.source_id() == source_id
&& linkdef.target() == target
&& linkdef.name() == self.config.name
})
.map(|linkdef| {
(
true,
// TODO(#88): reverse compare too
// Ensure all supplied configs (both source and target) are the same
linkdef
.source_config()
.iter()
.eq(self.config.source_config.iter())
&& linkdef
.target_config()
.iter()
.eq(self.config.target_config.iter()),
)
})
.unwrap_or((false, false));
// TODO(#88)
// If it already exists, but values are different, we need to have a delete event first
// and recreate it with the correct values second
// let mut commands = values_different
// .then(|| {
// trace!("Linkdef exists, but values are different, deleting and recreating");
// vec![Command::DeleteLinkdef(DeleteLinkdef {
// component_id: component_id.to_owned(),
// provider_id: provider_id.to_owned(),
// contract_id: self.config.provider_contract_id.to_owned(),
// link_name: self.config.provider_link_name.to_owned(),
// model_name: self.config.model_name.to_owned(),
// })]
// })
// .unwrap_or_default();
// if exists && !values_different {
// trace!("Linkdef already exists, skipping");
// } else if !exists || values_different {
// trace!("Linkdef does not exist or needs to be recreated");
// commands.push(Command::PutLinkdef(PutLinkdef {
// component_id: component_id.to_owned(),
// provider_id: provider_id.to_owned(),
// link_name: self.config.provider_link_name.to_owned(),
// contract_id: self.config.provider_contract_id.to_owned(),
// values: self.config.values.to_owned(),
// model_name: self.config.model_name.to_owned(),
// }))
// };
let commands = if !exists {
*self.status.write().await = StatusInfo::reconciling(&format!(
"Putting link definition between {source_id} and {target}"
));
vec![Command::PutLink(PutLink {
source_id: self.config.source_id.to_owned(),
target: self.config.target.to_owned(),
name: self.config.name.to_owned(),
wit_namespace: self.config.wit_namespace.to_owned(),
wit_package: self.config.wit_package.to_owned(),
interfaces: self.config.wit_interfaces.to_owned(),
source_config: self.config.source_config.clone(),
target_config: self.config.target_config.clone(),
model_name: self.config.model_name.to_owned(),
})]
} else {
*self.status.write().await = StatusInfo::deployed("");
Vec::with_capacity(0)
};
Ok(commands)
}
async fn cleanup(&self) -> Result<Vec<Command>> {
Ok(vec![Command::DeleteLink(DeleteLink {
model_name: self.config.model_name.to_owned(),
source_id: self.config.source_id.to_owned(),
link_name: self.config.name.to_owned(),
wit_namespace: self.config.wit_namespace.to_owned(),
wit_package: self.config.wit_package.to_owned(),
})])
}
}
impl<S: ReadStore + Send + Sync, L: LinkSource> LinkScaler<S, L> {
/// Construct a new LinkScaler with specified configuration values
pub fn new(store: S, link_config: LinkScalerConfig, ctl_client: L) -> Self {
// Compute the id of this scaler based on all of the configuration values
// that make it unique. This is used during upgrades to determine if a
// scaler is the same as a previous one.
let mut id_parts = vec![
LINK_SCALER_KIND,
&link_config.model_name,
&link_config.name,
&link_config.source_id,
&link_config.target,
&link_config.wit_namespace,
&link_config.wit_package,
];
id_parts.extend(
link_config
.wit_interfaces
.iter()
.map(std::string::String::as_str),
);
id_parts.extend(
link_config
.source_config
.iter()
.map(std::string::String::as_str),
);
id_parts.extend(
link_config
.target_config
.iter()
.map(std::string::String::as_str),
);
let id = compute_id_sha256(&id_parts);
Self {
store,
config: link_config,
ctl_client,
id,
status: RwLock::new(StatusInfo::reconciling("")),
}
}
}
#[cfg(test)]
mod test {
use std::{
collections::{BTreeMap, HashMap, HashSet},
sync::Arc,
vec,
};
use chrono::Utc;
use wasmcloud_control_interface::Link;
use super::*;
use crate::{
events::{ComponentScaled, ProviderHealthCheckInfo, ProviderInfo},
storage::{Component, Host, Provider, Store},
test_util::{TestLatticeSource, TestStore},
APP_SPEC_ANNOTATION,
};
async fn create_store(lattice_id: &str, component_ref: &str, provider_ref: &str) -> TestStore {
let store = TestStore::default();
store
.store(
lattice_id,
"component".to_string(),
Component {
id: "component".to_string(),
reference: component_ref.to_owned(),
..Default::default()
},
)
.await
.expect("Couldn't store component");
store
.store(
lattice_id,
"provider".to_string(),
Provider {
id: "provider".to_string(),
reference: provider_ref.to_owned(),
..Default::default()
},
)
.await
.expect("Couldn't store provider");
store
}
#[tokio::test]
async fn test_different_ids() {
let lattice_id = "id_generator".to_string();
let component_ref = "component_ref".to_string();
let component_id = "component_id".to_string();
let provider_ref = "provider_ref".to_string();
let provider_id = "provider_id".to_string();
let source_config = vec!["source_config".to_string()];
let target_config = vec!["target_config".to_string()];
let scaler = LinkScaler::new(
create_store(&lattice_id, &component_ref, &provider_ref).await,
LinkScalerConfig {
source_id: provider_id.clone(),
target: component_id.clone(),
wit_namespace: "wit_namespace".to_string(),
wit_package: "wit_package".to_string(),
wit_interfaces: vec!["wit_interface".to_string()],
name: "default".to_string(),
lattice_id: lattice_id.clone(),
model_name: "model".to_string(),
source_config: source_config.clone(),
target_config: target_config.clone(),
},
TestLatticeSource::default(),
);
let other_same_scaler = LinkScaler::new(
create_store(&lattice_id, &component_ref, &provider_ref).await,
LinkScalerConfig {
source_id: provider_id.clone(),
target: component_id.clone(),
wit_namespace: "wit_namespace".to_string(),
wit_package: "wit_package".to_string(),
wit_interfaces: vec!["wit_interface".to_string()],
name: "default".to_string(),
lattice_id: lattice_id.clone(),
model_name: "model".to_string(),
source_config: source_config.clone(),
target_config: target_config.clone(),
},
TestLatticeSource::default(),
);
assert_eq!(scaler.id(), other_same_scaler.id(), "LinkScaler ID should be the same when scalers have the same kind, model name, link name, source, target, WIT namespace, package, interfaces, and config values");
let different_scaler = LinkScaler::new(
create_store(&lattice_id, &component_ref, &provider_ref).await,
LinkScalerConfig {
source_id: provider_id.clone(),
target: component_id.clone(),
wit_namespace: "wit_namespace".to_string(),
wit_package: "wit_package".to_string(),
wit_interfaces: vec!["wit_interface".to_string()],
name: "default".to_string(),
lattice_id: lattice_id.clone(),
model_name: "model".to_string(),
source_config: vec!["foo".to_string()],
target_config: vec!["bar".to_string()],
},
TestLatticeSource::default(),
);
assert_ne!(
scaler.id(),
different_scaler.id(),
"LinkScaler ID should be different when scalers have different configured values"
);
}
#[tokio::test]
async fn test_no_linkdef() {
let lattice_id = "no-linkdef".to_string();
let component_ref = "component_ref".to_string();
let component_id = "component".to_string();
let provider_ref = "provider_ref".to_string();
let provider_id = "provider".to_string();
let scaler = LinkScaler::new(
create_store(&lattice_id, &component_ref, &provider_ref).await,
LinkScalerConfig {
source_id: component_id.clone(),
target: provider_id.clone(),
wit_namespace: "namespace".to_string(),
wit_package: "package".to_string(),
wit_interfaces: vec!["interface".to_string()],
name: "default".to_string(),
lattice_id: lattice_id.clone(),
model_name: "model".to_string(),
source_config: vec![],
target_config: vec![],
},
TestLatticeSource::default(),
);
// Run a reconcile and make sure it returns a single put linkdef command
let commands = scaler.reconcile().await.expect("Couldn't reconcile");
assert_eq!(commands.len(), 1, "Expected 1 command, got {commands:?}");
assert!(matches!(commands[0], Command::PutLink(_)));
}
#[tokio::test]
async fn test_existing_linkdef() {
let lattice_id = "existing-linkdef".to_string();
let component_ref = "component_ref".to_string();
let component_id = "component".to_string();
let provider_ref = "provider_ref".to_string();
let provider_id = "provider".to_string();
let linkdef = Link::builder()
.source_id(&component_id)
.target(&provider_id)
.wit_namespace("namespace")
.wit_package("package")
.interfaces(vec!["interface".to_string()])
.name("default")
.build()
.unwrap();
let scaler = LinkScaler::new(
create_store(&lattice_id, &component_ref, &provider_ref).await,
LinkScalerConfig {
source_id: linkdef.source_id().to_string(),
target: linkdef.target().to_string(),
wit_namespace: linkdef.wit_namespace().to_string(),
wit_package: linkdef.wit_package().to_string(),
wit_interfaces: linkdef.interfaces().clone(),
name: linkdef.name().to_string(),
source_config: vec![],
target_config: vec![],
lattice_id: lattice_id.clone(),
model_name: "model".to_string(),
},
TestLatticeSource {
links: vec![linkdef],
..Default::default()
},
);
let commands = scaler.reconcile().await.expect("Couldn't reconcile");
assert_eq!(
commands.len(),
0,
"Scaler shouldn't have returned any commands"
);
}
#[tokio::test]
async fn can_put_linkdef_from_triggering_events() {
let lattice_id = "can_put_linkdef_from_triggering_events";
let echo_ref = "fakecloud.azurecr.io/echo:0.3.4".to_string();
let echo_id = "MASDASDIAMAREALCOMPONENTECHO";
let httpserver_ref = "fakecloud.azurecr.io/httpserver:0.5.2".to_string();
let host_id_one = "NASDASDIMAREALHOSTONE";
let store = Arc::new(TestStore::default());
// STATE SETUP BEGIN
store
.store(
lattice_id,
host_id_one.to_string(),
Host {
components: HashMap::from_iter([(echo_id.to_string(), 1)]),
friendly_name: "hey".to_string(),
labels: HashMap::from_iter([
("cloud".to_string(), "fake".to_string()),
("region".to_string(), "us-brooks-1".to_string()),
]),
providers: HashSet::from_iter([ProviderInfo {
provider_id: "VASDASD".to_string(),
provider_ref: httpserver_ref.to_string(),
annotations: BTreeMap::from_iter([(
APP_SPEC_ANNOTATION.to_string(),
"foobar".to_string(),
)]),
}]),
uptime_seconds: 123,
version: None,
id: host_id_one.to_string(),
last_seen: Utc::now(),
},
)
.await
.expect("should be able to store a host");
store
.store(
lattice_id,
"VASDASD".to_string(),
Provider {
id: "VASDASD".to_string(),
reference: httpserver_ref.to_string(),
..Default::default()
},
)
.await
.expect("should be able to store provider");
// STATE SETUP END
let link_scaler = LinkScaler::new(
store.clone(),
LinkScalerConfig {
source_id: echo_id.to_string(),
target: "VASDASD".to_string(),
wit_namespace: "wasmcloud".to_string(),
wit_package: "httpserver".to_string(),
wit_interfaces: vec![],
name: "default".to_string(),
source_config: vec![],
target_config: vec![],
lattice_id: lattice_id.to_string(),
model_name: "foobar".to_string(),
},
TestLatticeSource::default(),
);
let commands = link_scaler
.reconcile()
.await
.expect("link scaler to handle reconcile");
// Since no link exists, we should expect a put link command
assert_eq!(commands.len(), 1);
// Component starts, put into state and then handle event
store
.store(
lattice_id,
echo_id.to_string(),
Component {
id: echo_id.to_string(),
reference: echo_ref.to_string(),
..Default::default()
},
)
.await
.expect("should be able to store component");
let commands = link_scaler
.handle_event(&Event::ComponentScaled(ComponentScaled {
annotations: BTreeMap::from_iter([(
APP_SPEC_ANNOTATION.to_string(),
"foobar".to_string(),
)]),
claims: None,
image_ref: echo_ref,
component_id: echo_id.to_string(),
max_instances: 1,
host_id: host_id_one.to_string(),
}))
.await
.expect("should be able to handle components started event");
assert_eq!(commands.len(), 1);
let commands = link_scaler
.handle_event(&Event::LinkdefSet(LinkdefSet {
linkdef: Link::builder()
// NOTE: the WIT namespace, package, link name, and target match, but the source is different
.source_id("nm0001772")
.target("VASDASD")
.wit_namespace("wasmcloud")
.wit_package("httpserver")
.name("default")
.build()
.unwrap(),
}))
.await
.expect("should be able to handle linkdef set event");
assert!(commands.is_empty());
let commands = link_scaler
.handle_event(&Event::ProviderHealthCheckPassed(
ProviderHealthCheckPassed {
data: ProviderHealthCheckInfo {
provider_id: "VASDASD".to_string(),
host_id: host_id_one.to_string(),
},
},
))
.await
.expect("should be able to handle provider health check");
assert_eq!(commands.len(), 1);
}
}


@ -0,0 +1,66 @@
use anyhow::Result;
use async_trait::async_trait;
use wadm_types::{api::StatusInfo, TraitProperty};
use crate::{commands::Command, events::Event, scaler::Scaler};
/// The StatusScaler is a scaler that only reports a predefined status and does not perform any actions.
/// It is primarily used as a placeholder for a scaler that wadm failed to initialize, for reasons that
/// couldn't be caught during deployment and won't be resolved until a new version of the app is deployed.
pub struct StatusScaler {
id: String,
kind: String,
name: String,
status: StatusInfo,
}
#[async_trait]
impl Scaler for StatusScaler {
fn id(&self) -> &str {
&self.id
}
fn kind(&self) -> &str {
&self.kind
}
fn name(&self) -> String {
self.name.to_string()
}
async fn status(&self) -> StatusInfo {
self.status.clone()
}
async fn update_config(&mut self, _config: TraitProperty) -> Result<Vec<Command>> {
Ok(vec![])
}
async fn handle_event(&self, _event: &Event) -> Result<Vec<Command>> {
Ok(Vec::with_capacity(0))
}
async fn reconcile(&self) -> Result<Vec<Command>> {
Ok(Vec::with_capacity(0))
}
async fn cleanup(&self) -> Result<Vec<Command>> {
Ok(Vec::with_capacity(0))
}
}
impl StatusScaler {
pub fn new(
id: impl AsRef<str>,
kind: impl AsRef<str>,
name: impl AsRef<str>,
status: StatusInfo,
) -> Self {
StatusScaler {
id: id.as_ref().to_string(),
kind: kind.as_ref().to_string(),
name: name.as_ref().to_string(),
status,
}
}
}
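The placeholder behavior above can be modeled in a few lines. The following is a minimal, self-contained sketch of the pattern, with a simplified `Status` struct standing in for `wadm_types::api::StatusInfo` and the `Scaler` trait methods reduced to plain inherent methods; all names here are illustrative, not the crate's API:

```rust
// A simplified stand-in for wadm_types::api::StatusInfo.
#[derive(Clone, Debug, PartialEq)]
struct Status {
    message: String,
}

// A placeholder scaler that holds a fixed status and never produces work.
struct FixedStatusScaler {
    id: String,
    status: Status,
}

impl FixedStatusScaler {
    fn new(id: impl AsRef<str>, status: Status) -> Self {
        Self {
            id: id.as_ref().to_string(),
            status,
        }
    }

    // Reconciling a placeholder scaler always returns an empty command list.
    fn reconcile(&self) -> Vec<String> {
        Vec::with_capacity(0)
    }

    fn status(&self) -> Status {
        self.status.clone()
    }
}

fn main() {
    let scaler = FixedStatusScaler::new(
        "scaler-id",
        Status {
            message: "failed to initialize: bad manifest".to_string(),
        },
    );
    assert!(scaler.reconcile().is_empty());
    assert_eq!(
        scaler.status().message,
        "failed to initialize: bad manifest"
    );
    println!("{}", scaler.id);
}
```

The design point is that the status is set once at construction and every action method is a no-op, so a broken app still surfaces a useful status without risking further commands.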


@ -1,32 +1,34 @@
use async_nats::{Client, Subscriber};
use async_nats::{
jetstream::{kv::Store, stream::Stream},
Client, Subscriber,
};
use futures::StreamExt;
use tracing::{info, instrument, warn};
use wadm_types::api::DEFAULT_WADM_TOPIC_PREFIX;
use crate::{publisher::Publisher, storage::Store};
use crate::publisher::Publisher;
mod handlers;
mod notifier;
mod parser;
mod types;
mod storage;
use handlers::Handler;
pub use notifier::ManifestNotifier;
pub use parser::CONTENT_TYPE_HEADER;
pub use types::*;
/// The default topic prefix for the wadm API;
pub const DEFAULT_WADM_TOPIC_PREFIX: &str = "wadm.api";
pub(crate) use storage::ModelStorage;
const QUEUE_GROUP: &str = "wadm_server";
/// A server for the wadm API
pub struct Server<S, P> {
handler: Handler<S, P>,
pub struct Server<P> {
handler: Handler<P>,
subscriber: Subscriber,
prefix: String,
multitenant: bool,
}
impl<S: Store + Send + Sync, P: Publisher> Server<S, P> {
impl<P: Publisher> Server<P> {
/// Returns a new server configured with the given store, NATS client, and optional topic
/// prefix. Returns an error if it can't subscribe on the right topics
///
@ -34,11 +36,13 @@ impl<S: Store + Send + Sync, P: Publisher> Server<S, P> {
/// when you may need to set a custom prefix for security purposes or topic segregation
#[instrument(level = "info", skip_all)]
pub async fn new(
store: S,
store: Store,
client: Client,
topic_prefix: Option<&str>,
multitenant: bool,
status_stream: Stream,
notifier: ManifestNotifier<P>,
) -> anyhow::Result<Server<S, P>> {
) -> anyhow::Result<Server<P>> {
// Trim off any spaces or trailing/preceding dots
let prefix = topic_prefix
.unwrap_or(DEFAULT_WADM_TOPIC_PREFIX)
@ -49,7 +53,13 @@ impl<S: Store + Send + Sync, P: Publisher> Server<S, P> {
anyhow::bail!("Given prefix was empty")
}
let topic = format!("{prefix}.>");
let topic_prefix = if multitenant {
format!("*.{prefix}")
} else {
prefix.clone()
};
let topic = format!("{topic_prefix}.>");
info!(%topic, "Creating API subscriber");
// NOTE(thomastaylor312): Technically there is a condition where two people try to send an
// update to the same manifest. We are protected against this overwriting each other (we
@ -63,12 +73,14 @@ impl<S: Store + Send + Sync, P: Publisher> Server<S, P> {
Ok(Server {
handler: Handler {
store,
store: ModelStorage::new(store),
client,
notifier,
status_stream,
},
subscriber,
prefix,
multitenant,
})
}
@ -79,7 +91,7 @@ impl<S: Store + Send + Sync, P: Publisher> Server<S, P> {
#[instrument(level = "info", skip_all)]
pub async fn serve(mut self) -> anyhow::Result<()> {
while let Some(msg) = self.subscriber.next().await {
if !msg.subject.starts_with(&self.prefix) {
if !msg.subject.starts_with(&self.prefix) && !self.multitenant {
warn!(subject = %msg.subject, "Received message on an invalid subject");
continue;
}
@ -101,54 +113,99 @@ impl<S: Store + Send + Sync, P: Publisher> Server<S, P> {
match parsed {
ParsedSubject {
account_id,
lattice_id,
category: "model",
operation: "list",
object_name: None,
} => self.handler.list_models(msg, lattice_id).await,
} => {
warn!("Received deprecated subject: model.list. Please use model.get instead");
self.handler
.list_models_deprecated(msg, account_id, lattice_id)
.await
}
ParsedSubject {
account_id,
lattice_id,
category: "model",
operation: "get",
object_name: Some(name),
} => self.handler.get_model(msg, lattice_id, name).await,
} => {
self.handler
.get_model(msg, account_id, lattice_id, name)
.await
}
ParsedSubject {
account_id,
lattice_id,
category: "model",
operation: "get",
object_name: None,
} => self.handler.list_models(msg, account_id, lattice_id).await,
ParsedSubject {
account_id,
lattice_id,
category: "model",
operation: "put",
object_name: None,
} => self.handler.put_model(msg, lattice_id).await,
} => self.handler.put_model(msg, account_id, lattice_id).await,
ParsedSubject {
account_id,
lattice_id,
category: "model",
operation: "del",
object_name: Some(name),
} => self.handler.delete_model(msg, lattice_id, name).await,
} => {
self.handler
.delete_model(msg, account_id, lattice_id, name)
.await
}
ParsedSubject {
account_id,
lattice_id,
category: "model",
operation: "versions",
object_name: Some(name),
} => self.handler.list_versions(msg, lattice_id, name).await,
} => {
self.handler
.list_versions(msg, account_id, lattice_id, name)
.await
}
ParsedSubject {
account_id,
lattice_id,
category: "model",
operation: "deploy",
object_name: Some(name),
} => self.handler.deploy_model(msg, lattice_id, name).await,
} => {
self.handler
.deploy_model(msg, account_id, lattice_id, name)
.await
}
ParsedSubject {
account_id,
lattice_id,
category: "model",
operation: "undeploy",
object_name: Some(name),
} => self.handler.undeploy_model(msg, lattice_id, name).await,
} => {
self.handler
.undeploy_model(msg, account_id, lattice_id, name)
.await
}
ParsedSubject {
account_id,
lattice_id,
category: "model",
operation: "status",
object_name: Some(name),
} => self.handler.model_status(msg, lattice_id, name).await,
} => {
self.handler
.model_status(msg, account_id, lattice_id, name)
.await
}
ParsedSubject {
account_id: _,
lattice_id: _,
category: "model",
operation: "history",
@ -172,12 +229,24 @@ impl<S: Store + Send + Sync, P: Publisher> Server<S, P> {
fn parse_subject<'a>(&self, subject: &'a str) -> anyhow::Result<ParsedSubject<'a>> {
// Topic structure: wadm.api.{lattice-id}.{category}.{operation}.{object}
// First, clean off the prefix and then split and iterate
// Multitenant topic structure: {account-id}.wadm.api.{lattice-id}.{category}.{operation}.{object}
// First, clean off the account if multitenant, then prefix and then split and iterate
let (account_id, subject) = if self.multitenant {
if let Some((account_id, rest)) = subject.split_once('.') {
(Some(account_id), rest)
} else {
anyhow::bail!("Expected to find account ID in multitenant subject")
}
} else {
(None, subject)
};
let mut trimmed = subject
.trim_start_matches(&self.prefix)
.trim_start_matches('.')
.split('.')
.fuse();
let lattice_id = trimmed
.next()
.ok_or_else(|| anyhow::anyhow!("Expected to find lattice ID"))?;
@ -191,9 +260,10 @@ impl<S: Store + Send + Sync, P: Publisher> Server<S, P> {
let object_name = trimmed.next();
// Catch malformed long subjects
if trimmed.next().is_some() {
anyhow::bail!("Found extra components of subject")
anyhow::bail!("Found extra components of subject, ensure your manifest name consists of only alphanumeric characters, dashes, and underscores.")
}
Ok(ParsedSubject {
account_id,
lattice_id,
category,
operation,
@ -203,6 +273,7 @@ impl<S: Store + Send + Sync, P: Publisher> Server<S, P> {
}
struct ParsedSubject<'a> {
account_id: Option<&'a str>,
lattice_id: &'a str,
category: &'a str,
operation: &'a str,


@ -1,14 +1,12 @@
use cloudevents::{EventBuilder, EventBuilderV10};
use cloudevents::Event as CloudEvent;
use tracing::{instrument, trace};
use wadm_types::Manifest;
use crate::{
events::{EventType, ManifestPublished, ManifestUnpublished},
model::Manifest,
events::{Event, ManifestPublished, ManifestUnpublished},
publisher::Publisher,
};
const WADM_SOURCE: &str = "wadm";
/// A notifier that publishes changes about manifests with the given publisher
pub struct ManifestNotifier<P> {
prefix: String,
@ -26,26 +24,20 @@ impl<P: Publisher> ManifestNotifier<P> {
}
}
#[instrument(level = "trace", skip(self, data))]
#[instrument(level = "trace", skip(self))]
async fn send_event(
&self,
lattice_id: &str,
ty: &str,
data: serde_json::Value,
event_subject_key: &str,
event: Event,
) -> anyhow::Result<()> {
let event = EventBuilderV10::new()
.id(uuid::Uuid::new_v4().to_string())
.source(WADM_SOURCE)
.time(chrono::Utc::now())
.ty(ty)
.data("application/json", data)
.build()?;
let event: CloudEvent = event.try_into()?;
// NOTE(thomastaylor312): A future improvement could be retries here
trace!("Sending notification event");
self.publisher
.publish(
serde_json::to_vec(&event)?,
Some(&format!("{}.{lattice_id}", self.prefix)),
Some(&format!("{}.{lattice_id}.{event_subject_key}", self.prefix)),
)
.await
}
@ -53,8 +45,8 @@ impl<P: Publisher> ManifestNotifier<P> {
pub async fn deployed(&self, lattice_id: &str, manifest: Manifest) -> anyhow::Result<()> {
self.send_event(
lattice_id,
ManifestPublished::TYPE,
serde_json::to_value(manifest)?,
"manifest_published",
Event::ManifestPublished(ManifestPublished { manifest }),
)
.await
}
@ -62,9 +54,9 @@ impl<P: Publisher> ManifestNotifier<P> {
pub async fn undeployed(&self, lattice_id: &str, name: &str) -> anyhow::Result<()> {
self.send_event(
lattice_id,
ManifestUnpublished::TYPE,
serde_json::json!({
"name": name,
"manifest_unpublished",
Event::ManifestUnpublished(ManifestUnpublished {
name: name.to_owned(),
}),
)
.await


@ -1,6 +1,6 @@
use async_nats::HeaderMap;
use crate::model::Manifest;
use wadm_types::Manifest;
/// The name of the header in the NATS request to use for content type inference. The header value
/// should be a valid MIME type


@ -0,0 +1,264 @@
use std::collections::BTreeSet;
use anyhow::Result;
use async_nats::jetstream::kv::{Operation, Store};
use tracing::{debug, instrument, trace};
use crate::model::StoredManifest;
// TODO(thomastaylor312): Once async nats has concrete error types for KV, we should switch out
// anyhow for concrete error types so we can indicate whether a failure was due to something like a
// CAS failure or a network error
/// Storage for models, with some logic around updating a list of all models in a lattice to make
/// calls more efficient
#[derive(Clone)]
pub(crate) struct ModelStorage {
store: Store,
}
impl ModelStorage {
pub fn new(store: Store) -> ModelStorage {
Self { store }
}
/// Gets the stored data and its current revision for the given model, returning None if it
/// doesn't exist
// NOTE(thomastaylor312): The model name is an AsRef purely so we don't have to clone a bunch of
// things when fetching in the manager. If we expose this struct outside of the crate, we should
// either revert this to be `&str` or make it `AsRef<str>` everywhere
#[instrument(level = "debug", skip(self, model_name), fields(model_name = %model_name.as_ref()))]
pub async fn get(
&self,
account_id: Option<&str>,
lattice_id: &str,
model_name: impl AsRef<str>,
) -> Result<Option<(StoredManifest, u64)>> {
let key = model_key(account_id, lattice_id, model_name.as_ref());
debug!(%key, "Fetching model from storage");
self.store
.entry(&key)
.await
.map_err(|e| anyhow::anyhow!("{e:?}"))?
.and_then(|entry| {
// Skip any delete or purge operations
if matches!(entry.operation, Operation::Delete | Operation::Purge) {
return None;
}
Some(
serde_json::from_slice::<StoredManifest>(&entry.value)
.map_err(anyhow::Error::from)
.map(|m| (m, entry.revision)),
)
})
.transpose()
}
/// Updates the stored data with the given model, overwriting any existing data. The optional
/// `current_revision` parameter can be used to compare whether or not you're updating the model
/// with the latest revision
#[instrument(level = "debug", skip(self, model), fields(model_name = %model.name()))]
pub async fn set(
&self,
account_id: Option<&str>,
lattice_id: &str,
model: StoredManifest,
current_revision: Option<u64>,
) -> Result<()> {
debug!("Storing model in storage");
// We need to store the model, then update the set. This is because if we update the set
// first and the model fails, it will look like the model exists when it actually doesn't
let key = model_key(account_id, lattice_id, model.name());
trace!(%key, "Storing manifest at key");
let data = serde_json::to_vec(&model).map_err(anyhow::Error::from)?;
if let Some(revision) = current_revision.filter(|r| r > &0) {
self.store
.update(&key, data.into(), revision)
.await
.map_err(|e| anyhow::anyhow!("{e:?}"))?;
} else {
self.store
.put(&key, data.into())
.await
.map_err(|e| anyhow::anyhow!("{e:?}"))?;
}
trace!("Adding model to set");
self.retry_model_update(
account_id,
lattice_id,
ModelNameOperation::Add(model.name()),
)
.await
}
/// Fetches a summary of all manifests for the given lattice.
#[instrument(level = "debug", skip(self))]
pub async fn list(
&self,
account_id: Option<&str>,
lattice_id: &str,
) -> Result<Vec<StoredManifest>> {
debug!("Fetching list of models from storage");
let futs = self
.get_model_set(account_id, lattice_id)
.await?
.unwrap_or_default()
.0
.into_iter()
// We can't use filter map with futures, but we can use map and then flatten it below
.map(|model_name| async move {
match self.get(account_id, lattice_id, &model_name).await {
Ok(Some((manifest, _))) => Some(Ok(manifest)),
Ok(None) => None,
Err(e) => Some(Err(e)),
}
});
// Flatten, collect, and sort on name
futures::future::join_all(futs)
.await
.into_iter()
.flatten()
.collect::<Result<Vec<StoredManifest>>>()
}
/// Deletes the given model from storage. This also removes the model from the list of all
/// models in the lattice
#[instrument(level = "debug", skip(self))]
pub async fn delete(
&self,
account_id: Option<&str>,
lattice_id: &str,
model_name: &str,
) -> Result<()> {
debug!("Deleting model from storage");
// We need to delete from the set first, then delete the model itself. This is because if we
// delete the model but then cannot delete the item from the set, then we end up in a
// situation where we say it already exists when creating. If the model doesn't delete it is
// fine, because a set operation will overwrite it
self.retry_model_update(
account_id,
lattice_id,
ModelNameOperation::Delete(model_name),
)
.await?;
let key = model_key(account_id, lattice_id, model_name);
trace!("Deleting model from storage");
self.store
.purge(&key)
.await
.map_err(|e| anyhow::anyhow!("{e:?}"))
}
/// Helper function that returns the list of models for the given lattice along with the current
/// revision for use in updating
async fn get_model_set(
&self,
account_id: Option<&str>,
lattice_id: &str,
) -> Result<Option<(BTreeSet<String>, u64)>> {
match self
.store
.entry(model_set_key(account_id, lattice_id))
.await
.map_err(|e| anyhow::anyhow!("{e:?}"))?
{
Some(entry) if !matches!(entry.operation, Operation::Delete | Operation::Purge) => {
let models: BTreeSet<String> =
serde_json::from_slice(&entry.value).map_err(anyhow::Error::from)?;
Ok(Some((models, entry.revision)))
}
Some(_) | None => Ok(None),
}
}
/// Convenience wrapper around retrying a key update
#[instrument(level = "debug", skip(self))]
async fn retry_model_update<'a>(
&self,
account_id: Option<&str>,
lattice_id: &str,
operation: ModelNameOperation<'a>,
) -> Result<()> {
// Always retry 3 times for now. We can make this configurable later if we want
for i in 0..3 {
trace!("Fetching current models from storage");
let (mut model_list, current_revision) =
match self.get_model_set(account_id, lattice_id).await? {
Some((models, revision)) => (models, revision),
None if matches!(operation, ModelNameOperation::Delete(_)) => {
debug!("No models exist in storage for delete, returning early");
return Ok(());
}
None => (BTreeSet::new(), 0),
};
match operation {
ModelNameOperation::Add(model_name) => {
if !model_list.insert(model_name.to_owned()) {
debug!("Model was already in list, returning early");
return Ok(());
}
}
ModelNameOperation::Delete(model_name) => {
if !model_list.remove(model_name) {
debug!("Model was not in list, returning early");
return Ok(());
}
}
}
match self
.store
.update(
model_set_key(account_id, lattice_id),
serde_json::to_vec(&model_list)
.map_err(anyhow::Error::from)?
.into(),
current_revision,
)
.await
{
Ok(_) => return Ok(()),
// NOTE(thomastaylor312): This is brittle but will be replaced once the NATS client
// has a concrete error for KV stuff
Err(e) if e.to_string().contains("wrong last sequence") => {
debug!(error = %e, attempt = i+1, "Model list update failed due to the underlying data changing, retrying");
continue;
}
Err(e) => {
// If it wasn't a wrong last sequence error, then we should bail
anyhow::bail!("{e:?}")
}
}
}
Err(anyhow::anyhow!(
"Model list update failed due to conflicts after multiple retries"
))
}
}
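The compare-and-swap retry in `retry_model_update` can be illustrated with an in-memory stand-in for the NATS KV bucket. The `FakeKv` type below is illustrative only: an update succeeds only when the caller's revision matches the store's, mimicking JetStream's "wrong last sequence" rejection, and the caller re-reads and retries on conflict:

```rust
use std::collections::BTreeSet;

// In-memory stand-in for a JetStream KV bucket with revision-checked updates.
struct FakeKv {
    value: BTreeSet<String>,
    revision: u64,
}

impl FakeKv {
    fn get(&self) -> (BTreeSet<String>, u64) {
        (self.value.clone(), self.revision)
    }

    // Succeeds only if the caller saw the latest revision (optimistic CAS).
    fn update(&mut self, new: BTreeSet<String>, expected_revision: u64) -> Result<(), ()> {
        if self.revision != expected_revision {
            return Err(()); // analogous to "wrong last sequence"
        }
        self.value = new;
        self.revision += 1;
        Ok(())
    }
}

fn add_with_retry(kv: &mut FakeKv, name: &str) -> Result<(), String> {
    // Always retry 3 times, as in the original
    for _ in 0..3 {
        let (mut set, revision) = kv.get();
        if !set.insert(name.to_string()) {
            return Ok(()); // already in the set, nothing to do
        }
        if kv.update(set, revision).is_ok() {
            return Ok(());
        }
        // Revision moved underneath us; loop re-reads and retries
    }
    Err("update failed due to conflicts after multiple retries".to_string())
}

fn main() {
    let mut kv = FakeKv {
        value: BTreeSet::new(),
        revision: 0,
    };
    add_with_retry(&mut kv, "echo").unwrap();
    assert!(kv.value.contains("echo"));
}
```

The early return on a no-op insert or remove is what makes the operation idempotent: concurrent writers converge without spending retries on changes that are already reflected in the set.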
#[derive(Debug)]
enum ModelNameOperation<'a> {
Add(&'a str),
Delete(&'a str),
}
fn model_set_key(account_id: Option<&str>, lattice_id: &str) -> String {
if let Some(account) = account_id {
format!("{}-{}", account, lattice_id)
} else {
lattice_id.to_string()
}
}
fn model_key(account_id: Option<&str>, lattice_id: &str, model_name: &str) -> String {
if let Some(account) = account_id {
format!("{}-{}-{}", account, lattice_id, model_name)
} else {
format!("{}-{}", lattice_id, model_name)
}
}
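Given the two helpers above, the resulting KV key layout can be checked directly. This snippet reproduces the functions as written so the expected keys are visible (the account, lattice, and model names used in the assertions are illustrative):

```rust
// Key for the per-lattice set of model names; account ID prepended when present.
fn model_set_key(account_id: Option<&str>, lattice_id: &str) -> String {
    if let Some(account) = account_id {
        format!("{}-{}", account, lattice_id)
    } else {
        lattice_id.to_string()
    }
}

// Key for a single stored model within a lattice.
fn model_key(account_id: Option<&str>, lattice_id: &str, model_name: &str) -> String {
    if let Some(account) = account_id {
        format!("{}-{}-{}", account, lattice_id, model_name)
    } else {
        format!("{}-{}", lattice_id, model_name)
    }
}

fn main() {
    assert_eq!(model_set_key(None, "default"), "default");
    assert_eq!(model_set_key(Some("ACCT"), "default"), "ACCT-default");
    assert_eq!(model_key(None, "default", "echo"), "default-echo");
    assert_eq!(model_key(Some("ACCT"), "default", "echo"), "ACCT-default-echo");
}
```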


@ -4,9 +4,10 @@ use std::{collections::HashMap, ops::Deref};
pub mod nats_kv;
pub mod reaper;
pub(crate) mod snapshot;
mod state;
pub use state::{Actor, Host, Provider, ProviderStatus, WadmActorInstance};
pub use state::{Component, Host, Provider, ProviderStatus, WadmComponentInfo};
/// A trait that must be implemented with a unique identifier for the given type. This is used in
/// the construction of keys for a store
@ -51,7 +52,7 @@ pub trait Store: ReadStore {
/// By default this will just call [`Store::store_many`] with a single item in the list of data
async fn store<T>(&self, lattice_id: &str, id: String, data: T) -> Result<(), Self::Error>
where
T: Serialize + DeserializeOwned + StateKind + Send,
T: Serialize + DeserializeOwned + StateKind + Send + Sync + Clone, // Needs to be clone in order to retry updates
{
self.store_many(lattice_id, [(id, data)]).await
}
@ -61,7 +62,7 @@ pub trait Store: ReadStore {
///
/// The given data can be anything that can be turned into an iterator of (key, value). This
/// means you can pass a [`HashMap`](std::collections::HashMap) or something like
/// `["key".to_string(), Actor{...}]`
/// `["key".to_string(), Component{...}]`
///
/// This function has several required bounds. It needs to be serialize and deserialize because
/// some implementations will need to deserialize the current data before modifying it.
@ -70,7 +71,7 @@ pub trait Store: ReadStore {
/// sendable between threads
async fn store_many<T, D>(&self, lattice_id: &str, data: D) -> Result<(), Self::Error>
where
T: Serialize + DeserializeOwned + StateKind + Send,
T: Serialize + DeserializeOwned + StateKind + Send + Sync + Clone, // Needs to be clone in order to retry updates
D: IntoIterator<Item = (String, T)> + Send;
/// Delete a state entry
@ -78,7 +79,7 @@ pub trait Store: ReadStore {
/// By default this will just call [`Store::delete_many`] with a single item in the list of data
async fn delete<T>(&self, lattice_id: &str, id: &str) -> Result<(), Self::Error>
where
T: Serialize + DeserializeOwned + StateKind + Send,
T: Serialize + DeserializeOwned + StateKind + Send + Sync,
{
self.delete_many::<T, _, _>(lattice_id, [id]).await
}
@ -96,7 +97,7 @@ pub trait Store: ReadStore {
/// sendable between threads
async fn delete_many<T, D, K>(&self, lattice_id: &str, data: D) -> Result<(), Self::Error>
where
T: Serialize + DeserializeOwned + StateKind + Send,
T: Serialize + DeserializeOwned + StateKind + Send + Sync,
D: IntoIterator<Item = K> + Send,
K: AsRef<str>;
}
@ -106,7 +107,7 @@ pub trait Store: ReadStore {
impl<S: Store + Send + Sync> Store for std::sync::Arc<S> {
async fn store_many<T, D>(&self, lattice_id: &str, data: D) -> Result<(), Self::Error>
where
T: Serialize + DeserializeOwned + StateKind + Send,
T: Serialize + DeserializeOwned + StateKind + Send + Sync + Clone,
D: IntoIterator<Item = (String, T)> + Send,
{
self.as_ref().store_many(lattice_id, data).await
@ -114,7 +115,7 @@ impl<S: Store + Send + Sync> Store for std::sync::Arc<S> {
async fn delete_many<T, D, K>(&self, lattice_id: &str, data: D) -> Result<(), Self::Error>
where
T: Serialize + DeserializeOwned + StateKind + Send,
T: Serialize + DeserializeOwned + StateKind + Send + Sync,
D: IntoIterator<Item = K> + Send,
K: AsRef<str>,
{
@ -209,7 +210,7 @@ impl<S: Store + Sync> ScopedStore<S> {
/// Store a piece of state. This should overwrite existing state entries
pub async fn store<T>(&self, id: String, data: T) -> Result<(), S::Error>
where
T: Serialize + DeserializeOwned + StateKind + Send,
T: Serialize + DeserializeOwned + StateKind + Send + Sync + Clone,
{
self.inner.store(&self.lattice_id, id, data).await
}
@ -218,7 +219,7 @@ impl<S: Store + Sync> ScopedStore<S> {
/// allows for stores to perform multiple writes simultaneously or to leverage transactions
pub async fn store_many<T, D>(&self, data: D) -> Result<(), S::Error>
where
T: Serialize + DeserializeOwned + StateKind + Send,
T: Serialize + DeserializeOwned + StateKind + Send + Sync + Clone,
D: IntoIterator<Item = (String, T)> + Send,
{
self.inner.store_many(&self.lattice_id, data).await
@ -227,7 +228,7 @@ impl<S: Store + Sync> ScopedStore<S> {
/// Delete a state entry
pub async fn delete<T>(&self, id: &str) -> Result<(), S::Error>
where
T: Serialize + DeserializeOwned + StateKind + Send,
T: Serialize + DeserializeOwned + StateKind + Send + Sync,
{
self.inner.delete::<T>(&self.lattice_id, id).await
}
@ -236,7 +237,7 @@ impl<S: Store + Sync> ScopedStore<S> {
/// simultaneously or to leverage transactions
pub async fn delete_many<T, D, K>(&self, data: D) -> Result<(), S::Error>
where
T: Serialize + DeserializeOwned + StateKind + Send,
T: Serialize + DeserializeOwned + StateKind + Send + Sync,
D: IntoIterator<Item = K> + Send,
K: AsRef<str>,
{
@@ -245,10 +246,3 @@ impl<S: Store + Sync> ScopedStore<S> {
.await
}
}
/// A helper function for generating a unique ID for any given provider. This is exposed purely to
/// be a common way of creating a key to access/store provider information
pub fn provider_id(public_key: &str, link_name: &str) -> String {
// TODO: Update this to also use contract ID when 0.62 comes out
format!("{}/{}", public_key, link_name)
}
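The removed helper above composed provider keys as `{public_key}/{link_name}`. A tiny standalone sketch of the same helper, using a made-up nkey value purely for illustration:

```rust
/// Mirrors the removed helper: provider state is keyed by `{public_key}/{link_name}`.
fn provider_id(public_key: &str, link_name: &str) -> String {
    format!("{}/{}", public_key, link_name)
}

fn main() {
    // "VABCDEF" is a placeholder for a real provider public nkey
    let key = provider_id("VABCDEF", "default");
    assert_eq!(key, "VABCDEF/default");
    println!("{key}");
}
```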


@@ -9,17 +9,19 @@
//! the encoding in the future. Because of this, DO NOT depend on accessing this data other than
//! through this module
//!
//! All data is currently stored in a single encoded map per type (host, actor, provider), where the
//! keys are the ID as given by [`StateId::id`]. Once again, we reserve the right to change this
//! All data is currently stored in a single encoded map per type (host, component, provider), where
//! the keys are the ID as given by [`StateId::id`]. Once again, we reserve the right to change this
//! structure in the future
use std::collections::HashMap;
use std::io::Error as IoError;
use std::time::Duration;
use async_nats::{
jetstream::kv::{Operation, Store as KvStore},
Error as NatsError,
};
use async_trait::async_trait;
use futures::Future;
use serde::{de::DeserializeOwned, Serialize};
use tracing::{debug, error, field::Empty, instrument, trace};
use tracing_futures::Instrument;
@@ -69,23 +71,91 @@ impl NatsKvStore {
let key = generate_key::<T>(lattice_id);
tracing::Span::current().record("key", &key);
debug!("Fetching data from store");
match self.store.entry(key).await? {
Some(entry) if !matches!(entry.operation, Operation::Delete | Operation::Purge) => {
match self.store.entry(key).await {
Ok(Some(entry)) if !matches!(entry.operation, Operation::Delete | Operation::Purge) => {
trace!(len = %entry.value.len(), "Fetched bytes from store...deserializing");
serde_json::from_slice::<'_, HashMap<String, T>>(&entry.value)
.map(|d| (d, entry.revision))
.map_err(NatsStoreError::from)
}
// If it was a delete entry, we still need to return the revision
Some(entry) => {
Ok(Some(entry)) => {
trace!("Data was deleted, returning last revision");
Ok((HashMap::with_capacity(0), entry.revision))
}
None => {
Ok(None) => {
debug!("No data found for key, returning empty");
Ok((HashMap::with_capacity(0), 0))
}
Err(e) => Err(NatsStoreError::Nats(e.into())),
}
}
/// Helper that retries update operations
// NOTE(thomastaylor312): We could probably make this even better with some exponential backoff,
// but this is easy enough for now since generally there isn't a ton of competition for updating
// a single lattice
async fn update_with_retries<T, F, Fut>(
&self,
lattice_id: &str,
key: &str,
timeout: Duration,
updater: F,
) -> Result<(), NatsStoreError>
where
T: Serialize + DeserializeOwned + StateKind + Send,
F: Fn(HashMap<String, T>) -> Fut,
Fut: Future<Output = Result<Vec<u8>, NatsStoreError>>,
{
let res = tokio::time::timeout(timeout, async {
loop {
let (current_data, revision) = self
.internal_list::<T>(lattice_id)
.in_current_span()
.await?;
debug!(revision, "Updating data in store");
let updated_data = updater(current_data).await?;
trace!("Writing bytes to store");
// If the function doesn't return any data (such as for deletes), just return early.
// Everything is an update (right now), even for deletes, so the only case where we'd
// have an empty vec is if we aren't updating anything
if updated_data.is_empty() {
return Ok(())
}
match self.store.update(key, updated_data.into(), revision).await {
Ok(_) => return Ok(()),
Err(e) => {
if e.to_string().contains("wrong last sequence") {
debug!(%key, %lattice_id, "Got wrong last sequence when trying to update state. Retrying update operation");
continue;
}
return Err(NatsStoreError::Nats(e.into()));
}
// TODO(#316): Uncomment this code once we can update to the latest
// async-nats, which actually allows us to access the inner source of the error
// Err(e) => {
// let source = match e.source() {
// Some(s) => s,
// None => return Err(NatsStoreError::Nats(e.into())),
// };
// match source.downcast_ref::<PublishError>() {
// Some(e) if matches!(e.kind(), PublishErrorKind::WrongLastSequence) => {
// debug!(%key, %lattice_id, "Got wrong last sequence when trying to update state. Retrying update operation");
// continue;
// },
// _ => return Err(NatsStoreError::Nats(e.into())),
// }
// }
}
}
})
.await;
match res {
Err(_e) => Err(NatsStoreError::Other(
"Timed out while retrying updates to key".to_string(),
)),
Ok(res2) => res2,
}
}
}
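The `update_with_retries` helper added above is optimistic concurrency control: every write carries the revision it read, a "wrong last sequence" rejection means another writer won the race, and the loop re-reads and reapplies the update. A minimal synchronous sketch of the same pattern, with a `Mutex`-backed stand-in for the revisioned KV bucket and a bounded attempt count in place of wadm's wall-clock timeout (both stand-ins are assumptions of this sketch, not wadm types):

```rust
use std::sync::Mutex;

/// A stand-in for a revisioned KV bucket: (revision, value).
struct FakeKv {
    inner: Mutex<(u64, Vec<u8>)>,
}

impl FakeKv {
    fn new(initial: Vec<u8>) -> Self {
        FakeKv { inner: Mutex::new((0, initial)) }
    }

    fn get(&self) -> (u64, Vec<u8>) {
        self.inner.lock().unwrap().clone()
    }

    /// Write succeeds only if the caller read the latest revision (compare-and-swap).
    fn update(&self, value: Vec<u8>, expected_revision: u64) -> Result<u64, ()> {
        let mut guard = self.inner.lock().unwrap();
        if guard.0 != expected_revision {
            return Err(()); // analogous to NATS "wrong last sequence"
        }
        guard.0 += 1;
        guard.1 = value;
        Ok(guard.0)
    }
}

/// Read-modify-write loop: re-fetch and retry whenever the revision moved.
fn update_with_retries(
    kv: &FakeKv,
    max_attempts: u32,
    updater: impl Fn(Vec<u8>) -> Vec<u8>,
) -> Result<(), ()> {
    for _ in 0..max_attempts {
        let (revision, data) = kv.get();
        let updated = updater(data);
        if kv.update(updated, revision).is_ok() {
            return Ok(());
        }
        // Revision changed underneath us; loop and re-read
    }
    Err(())
}

fn main() {
    let kv = FakeKv::new(b"a".to_vec());
    update_with_retries(&kv, 3, |mut d| { d.push(b'b'); d }).unwrap();
    assert_eq!(kv.get().1, b"ab".to_vec());
}
```

The real helper bounds the whole loop with `tokio::time::timeout` rather than an attempt count, which caps total latency even under heavy contention.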
@@ -140,7 +210,7 @@ impl Store for NatsKvStore {
///
/// The given data can be anything that can be turned into an iterator of (key, value). This
/// means you can pass a [`HashMap`](std::collections::HashMap) or something like
/// `["key".to_string(), Actor{...}]`
/// `["key".to_string(), Component{...}]`
///
/// This function has several required bounds. It needs to be serialize and deserialize because
/// some implementations will need to deserialize the current data before modifying it.
@@ -150,79 +220,76 @@ impl Store for NatsKvStore {
#[instrument(level = "debug", skip(self, data), fields(key = Empty))]
async fn store_many<T, D>(&self, lattice_id: &str, data: D) -> Result<(), Self::Error>
where
T: Serialize + DeserializeOwned + StateKind + Send,
T: Serialize + DeserializeOwned + StateKind + Send + Sync + Clone,
D: IntoIterator<Item = (String, T)> + Send,
{
let key = generate_key::<T>(lattice_id);
tracing::Span::current().record("key", &key);
let (mut current_data, revision) = self
.internal_list::<T>(lattice_id)
.in_current_span()
.await?;
debug!("Updating data in store");
for (id, item) in data.into_iter() {
if current_data.insert(id, item).is_some() {
// NOTE: We may want to return the old data in the future. For now, keeping it simple
trace!("Replaced existing data");
} else {
trace!("Inserted new entry");
};
}
let serialized = serde_json::to_vec(&current_data)?;
// NOTE(thomastaylor312): This could not matter, but because this is JSON and not consuming
// the data it is serializing, we are now holding a vec of the serialized data and the
// actual struct in memory. So this drops it immediately to hopefully keep memory usage down
// on busy servers
drop(current_data);
trace!(len = serialized.len(), "Writing bytes to store");
self.store
.update(key, serialized.into(), revision)
.await
.map(|_| ())
.map_err(NatsStoreError::from)
let data: Vec<(String, T)> = data.into_iter().collect();
self.update_with_retries(
lattice_id,
&key,
Duration::from_millis(1500),
|mut current_data| async {
let cloned = data.clone();
async move {
for (id, item) in cloned.into_iter() {
if current_data.insert(id, item).is_some() {
// NOTE: We may want to return the old data in the future. For now, keeping it simple
trace!("Replaced existing data");
} else {
trace!("Inserted new entry");
};
}
serde_json::to_vec(&current_data).map_err(NatsStoreError::SerDe)
}
.await
},
)
.in_current_span()
.await
}
#[instrument(level = "debug", skip(self, data), fields(key = Empty))]
async fn delete_many<T, D, K>(&self, lattice_id: &str, data: D) -> Result<(), Self::Error>
where
T: Serialize + DeserializeOwned + StateKind + Send,
T: Serialize + DeserializeOwned + StateKind + Send + Sync,
D: IntoIterator<Item = K> + Send,
K: AsRef<str>,
{
let key = generate_key::<T>(lattice_id);
tracing::Span::current().record("key", &key);
let (mut current_data, revision) = self
.internal_list::<T>(lattice_id)
.in_current_span()
.await?;
debug!("Updating data in store");
let mut updated = false;
for id in data.into_iter() {
if current_data.remove(id.as_ref()).is_some() {
// NOTE: We may want to return the old data in the future. For now, keeping it simple
trace!(id = %id.as_ref(), "Removing existing data");
updated = true;
} else {
trace!(id = %id.as_ref(), "ID doesn't exist in store, ignoring");
};
}
// If we updated nothing, return early
if !updated {
return Ok(());
}
let serialized = serde_json::to_vec(&current_data)?;
// NOTE(thomastaylor312): This could not matter, but because this is JSON and not consuming
// the data it is serializing, we are now holding a vec of the serialized data and the
// actual struct in memory. So this drops it immediately to hopefully keep memory usage down
// on busy servers
drop(current_data);
trace!(len = serialized.len(), "Writing bytes to store");
self.store
.update(key, serialized.into(), revision)
.await
.map(|_| ())
.map_err(NatsStoreError::from)
let data: Vec<String> = data.into_iter().map(|s| s.as_ref().to_string()).collect();
self.update_with_retries(
lattice_id,
&key,
Duration::from_millis(1500),
|mut current_data: HashMap<String, T>| async {
let cloned = data.clone();
async move {
let mut updated = false;
for id in cloned.into_iter() {
if current_data.remove(&id).is_some() {
// NOTE: We may want to return the old data in the future. For now, keeping it simple
trace!(%id, "Removing existing data");
updated = true;
} else {
trace!(%id, "ID doesn't exist in store, ignoring");
};
}
// If we updated nothing, return early
if !updated {
return Ok(Vec::with_capacity(0));
}
serde_json::to_vec(&current_data).map_err(NatsStoreError::SerDe)
}
.await
},
)
.in_current_span()
.await
}
}
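Stripped of the retry machinery, `store_many` and `delete_many` reduce to simple map merges: stores insert-or-overwrite by ID, and deletes track whether anything was actually removed so an untouched map can skip the KV write entirely (the empty-`Vec` early return above). A sketch with plain `String` values standing in for the serialized state (an assumption of this sketch; the real code holds deserialized `T` values):

```rust
use std::collections::HashMap;

/// Merge semantics of `store_many`: later entries overwrite existing IDs.
fn store_many(
    state: &mut HashMap<String, String>,
    data: impl IntoIterator<Item = (String, String)>,
) {
    for (id, item) in data {
        state.insert(id, item);
    }
}

/// Delete semantics of `delete_many`: report whether anything actually changed,
/// so the caller can skip the write when the batch was a no-op.
fn delete_many<'a>(
    state: &mut HashMap<String, String>,
    ids: impl IntoIterator<Item = &'a str>,
) -> bool {
    let mut updated = false;
    for id in ids {
        updated |= state.remove(id).is_some();
    }
    updated
}

fn main() {
    let mut state = HashMap::new();
    store_many(&mut state, [("c1".to_string(), "v1".to_string())]);
    store_many(&mut state, [("c1".to_string(), "v2".to_string())]); // overwrites
    assert_eq!(state["c1"], "v2");
    assert!(!delete_many(&mut state, ["missing"])); // nothing changed: skip the write
    assert!(delete_many(&mut state, ["c1"]));
    assert!(state.is_empty());
}
```

Note that because the updater closure in the diff is a `Fn` that may run once per retry attempt, the incoming iterator is collected into a `Vec` up front and cloned on each attempt — an iterator can only be consumed once.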


@@ -1,5 +1,5 @@
//! Contains helpers for reaping Hosts that haven't received a heartbeat within a configured amount
//! of time and actors and providers on hosts that no longer exist
//! of time and components and providers on hosts that no longer exist
use std::collections::HashMap;
@@ -7,7 +7,7 @@ use chrono::{Duration, Utc};
use tokio::{task::JoinHandle, time};
use tracing::{debug, error, info, instrument, trace, warn};
use super::{Actor, Host, Provider, Store};
use super::{Component, Host, Provider, Store};
/// A struct that can reap various pieces of data from the given store
pub struct Reaper<S> {
@@ -102,7 +102,7 @@ impl<S: Store + Clone + Send + Sync + 'static> Undertaker<S> {
loop {
ticker.tick().await;
trace!("Tick fired, running reap tasks");
// We want to reap hosts first so that the state is up to date for reaping actors and providers
// We want to reap hosts first so that the state is up to date for reaping components and providers
self.reap_hosts().await;
// Now get the current list of hosts
let hosts = match self.store.list::<Host>(&self.lattice_id).await {
@@ -112,8 +112,9 @@ impl<S: Store + Clone + Send + Sync + 'static> Undertaker<S> {
continue;
}
};
// Reap actors and providers simultaneously
futures::join!(self.reap_actors(&hosts), self.reap_providers(&hosts));
// Reap components and providers
self.reap_components(&hosts).await;
self.reap_providers(&hosts).await;
trace!("Completed reap tasks");
}
}
@@ -151,46 +152,49 @@ impl<S: Store + Clone + Send + Sync + 'static> Undertaker<S> {
}
#[instrument(level = "debug", skip(self, hosts), fields(lattice_id = %self.lattice_id))]
async fn reap_actors(&self, hosts: &HashMap<String, Host>) {
let actors = match self.store.list::<Actor>(&self.lattice_id).await {
async fn reap_components(&self, hosts: &HashMap<String, Host>) {
let components = match self.store.list::<Component>(&self.lattice_id).await {
Ok(n) => n,
Err(e) => {
error!(error = %e, "Error when fetching actors from store. Will retry on next tick");
error!(error = %e, "Error when fetching components from store. Will retry on next tick");
return;
}
};
let (actors_to_remove, actors_to_update): (HashMap<String, Actor>, HashMap<String, Actor>) =
actors
.into_iter()
.filter_map(|(id, mut actor)| {
let current_num_hosts = actor.instances.len();
// Only keep the instances where the host exists
actor
.instances
.retain(|host_id, _| hosts.contains_key(host_id));
// If we got rid of something, that means this needs to update
(current_num_hosts != actor.instances.len()).then_some((id, actor))
})
.partition(|(_, actor)| actor.instances.is_empty());
let (components_to_remove, components_to_update): (
HashMap<String, Component>,
HashMap<String, Component>,
) = components
.into_iter()
.map(|(id, mut component)| {
// Only keep the instances where the host exists and the component is in its map
component.instances.retain(|host_id, _| {
hosts
.get(host_id)
.map(|host| host.components.contains_key(&component.id))
.unwrap_or(false)
});
(id, component)
})
.partition(|(_, component)| component.instances.is_empty());
debug!(to_remove = %actors_to_remove.len(), to_update = %actors_to_update.len(), "Filtered out list of actors to update and reap");
debug!(to_remove = %components_to_remove.len(), to_update = %components_to_update.len(), "Filtered out list of components to update and reap");
if let Err(e) = self
.store
.store_many(&self.lattice_id, actors_to_update)
.store_many(&self.lattice_id, components_to_update)
.await
{
warn!(error = %e, "Error when storing updated actors. Will retry on next tick");
warn!(error = %e, "Error when storing updated components. Will retry on next tick");
return;
}
if let Err(e) = self
.store
.delete_many::<Actor, _, _>(&self.lattice_id, actors_to_remove.keys())
.delete_many::<Component, _, _>(&self.lattice_id, components_to_remove.keys())
.await
{
warn!(error = %e, "Error when deleting actors from store. Will retry on next tick")
warn!(error = %e, "Error when deleting components from store. Will retry on next tick")
}
}
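The reap step above is a retain-then-partition: each component's instance map is pruned to hosts that still exist (and, in the new code, still report the component), and the results are split into components with no instances left (delete) and components that merely shrank (update). A simplified sketch, with per-host counts flattened to `u32` and only the host-liveness check (the real code additionally consults each host's own `components` map):

```rust
use std::collections::{HashMap, HashSet};

/// component id -> (host id -> instance count)
type Components = HashMap<String, HashMap<String, u32>>;

/// Prune dead hosts, then split into (ids to delete, entries to re-store).
fn reap(components: Components, live_hosts: &HashSet<String>) -> (Vec<String>, Components) {
    let (to_remove, to_update): (Components, Components) = components
        .into_iter()
        .map(|(id, mut instances)| {
            // Only keep the instances whose host is still alive
            instances.retain(|host_id, _| live_hosts.contains(host_id));
            (id, instances)
        })
        .partition(|(_, instances)| instances.is_empty());
    (to_remove.into_keys().collect(), to_update)
}

fn main() {
    let components = HashMap::from([
        ("alive".to_string(), HashMap::from([("host1".to_string(), 1u32)])),
        ("orphan".to_string(), HashMap::from([("gone".to_string(), 1u32)])),
    ]);
    let live = HashSet::from(["host1".to_string()]);
    let (remove, update) = reap(components, &live);
    assert_eq!(remove, vec!["orphan".to_string()]);
    assert_eq!(update["alive"]["host1"], 1);
}
```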
@@ -199,7 +203,7 @@ impl<S: Store + Clone + Send + Sync + 'static> Undertaker<S> {
let providers = match self.store.list::<Provider>(&self.lattice_id).await {
Ok(n) => n,
Err(e) => {
error!(error = %e, "Error when fetching actors from store. Will retry on next tick");
error!(error = %e, "Error when fetching providers from store. Will retry on next tick");
return;
}
};
@@ -244,10 +248,13 @@ impl<S: Store + Clone + Send + Sync + 'static> Undertaker<S> {
#[cfg(test)]
mod test {
use super::*;
use std::{collections::HashSet, sync::Arc};
use std::{
collections::{BTreeMap, HashSet},
sync::Arc,
};
use crate::{
storage::{ProviderStatus, ReadStore, WadmActorInstance},
storage::{ProviderStatus, ReadStore, WadmComponentInfo},
test_util::TestStore,
};
@@ -256,9 +263,7 @@ mod test {
let store = Arc::new(TestStore::default());
let lattice_id = "reaper";
let actor_id = "testactor";
let actor_instance_id_one = "asdasdj-asdada-132123-ffff";
let actor_instance_id_two = "123abc-asdada-132123-ffff";
let component_id = "testcomponent";
let host1_id = "host1";
let host2_id = "host2";
@@ -268,21 +273,23 @@
lattice_id,
[
(
actor_id.to_string(),
Actor {
id: actor_id.to_string(),
component_id.to_string(),
Component {
id: component_id.to_string(),
instances: HashMap::from([
(
host1_id.to_string(),
HashSet::from_iter([WadmActorInstance::from_id(
actor_instance_id_one.to_string(),
)]),
HashSet::from_iter([WadmComponentInfo {
annotations: BTreeMap::default(),
count: 1,
}]),
),
(
host2_id.to_string(),
HashSet::from_iter([WadmActorInstance::from_id(
actor_instance_id_two.to_string(),
)]),
HashSet::from_iter([WadmComponentInfo {
annotations: BTreeMap::default(),
count: 1,
}]),
),
]),
..Default::default()
@@ -290,13 +297,14 @@
),
(
"idontexist".to_string(),
Actor {
Component {
id: "idontexist".to_string(),
instances: HashMap::from([(
host1_id.to_string(),
HashSet::from_iter([WadmActorInstance::from_id(
actor_instance_id_one.to_string(),
)]),
HashSet::from_iter([WadmComponentInfo {
annotations: BTreeMap::default(),
count: 1,
}]),
)]),
..Default::default()
},
@@ -326,7 +334,7 @@
(
host1_id.to_string(),
Host {
actors: HashMap::from([(actor_id.to_string(), 1)]),
components: HashMap::from([(component_id.to_string(), 1)]),
providers: HashSet::default(),
id: host1_id.to_string(),
last_seen: Utc::now(),
@@ -336,7 +344,7 @@
(
host2_id.to_string(),
Host {
actors: HashMap::default(),
components: HashMap::from([(component_id.to_string(), 1)]),
providers: HashSet::default(),
id: host2_id.to_string(),
// Make this host stick around for longer
@@ -357,18 +365,118 @@
// Wait for first node to be reaped (two ticks)
tokio::time::sleep(wait * 2).await;
// Now check that the providers, actors, and hosts were reaped
// Now check that the providers, components, and hosts were reaped
let hosts = store.list::<Host>(lattice_id).await.unwrap();
assert_eq!(hosts.len(), 1, "Only one host should be left");
let actors = store.list::<Actor>(lattice_id).await.unwrap();
assert_eq!(actors.len(), 1, "Only one actor should remain in the store");
actors
.get(actor_id)
.expect("Should have the correct actor in the store");
let components = store.list::<Component>(lattice_id).await.unwrap();
assert_eq!(
components.len(),
1,
"Only one component should remain in the store"
);
components
.get(component_id)
.expect("Should have the correct component in the store");
assert!(
store.list::<Provider>(lattice_id).await.unwrap().is_empty(),
"No providers should exist"
);
}
#[tokio::test]
async fn test_stale_component() {
let store = Arc::new(TestStore::default());
let lattice_id = "reaper";
let component_id = "testcomponent";
let host1_id = "host1";
let host2_id = "host2";
// Prepopulate the store
store
.store(
lattice_id,
component_id.to_string(),
Component {
id: component_id.to_string(),
instances: HashMap::from([
(
host1_id.to_string(),
HashSet::from_iter([WadmComponentInfo {
annotations: BTreeMap::default(),
count: 1,
}]),
),
(
host2_id.to_string(),
HashSet::from_iter([WadmComponentInfo {
annotations: BTreeMap::default(),
count: 1,
}]),
),
]),
..Default::default()
},
)
.await
.unwrap();
store
.store_many(
lattice_id,
[
(
host1_id.to_string(),
Host {
components: HashMap::from([(component_id.to_string(), 1)]),
providers: HashSet::default(),
id: host1_id.to_string(),
last_seen: Utc::now() + Duration::milliseconds(600),
..Default::default()
},
),
(
host2_id.to_string(),
Host {
components: HashMap::default(),
providers: HashSet::default(),
id: host2_id.to_string(),
last_seen: Utc::now() + Duration::milliseconds(600),
..Default::default()
},
),
],
)
.await
.unwrap();
let reap_interval = std::time::Duration::from_millis(50);
// Interval + wiggle
let wait = std::time::Duration::from_millis(70);
let _reaper = Reaper::new(store.clone(), reap_interval, [lattice_id.to_owned()]);
// Wait for first tick
tokio::time::sleep(wait).await;
// Make sure we only have one instance of the component left
let components = store.list::<Component>(lattice_id).await.unwrap();
let component = components
.get(component_id)
.expect("Should have the correct component in the store");
assert_eq!(
component.instances.len(),
1,
"Only one host should remain in instances"
);
assert_eq!(
component
.instances
.get(host1_id)
.expect("Should have instance left on the correct host")
.len(),
1,
"Only one instance should remain on host"
);
}
}


@@ -0,0 +1,193 @@
use std::collections::HashMap;
use std::sync::Arc;
use tokio::sync::RwLock;
use tracing::debug;
use wasmcloud_control_interface::Link;
use wasmcloud_secrets_types::SecretConfig;
use crate::storage::{Component, Host, Provider, ReadStore, StateKind};
use crate::workers::{ConfigSource, LinkSource, SecretSource};
// NOTE(thomastaylor312): This type is real ugly and we should probably find a better way to
// structure the ReadStore trait so it doesn't have the generic T we have to work around here. This
// is essentially a map of "state kind" -> map of ID to partially serialized state. I did try to
// implement some sort of getter trait but it has to be generic across T
type InMemoryData = HashMap<String, HashMap<String, serde_json::Value>>;
/// A store and claims/links source implementation that contains a static snapshot of the data that
/// can be refreshed periodically. Please note that this is scoped to a specific lattice ID and
/// should be constructed separately for each lattice ID.
///
/// Since configuration is fetched infrequently, and configuration might be large, we instead
/// query the configuration source directly when we need it.
///
/// NOTE: This is a temporary workaround until we get a proper caching store in place
pub struct SnapshotStore<S, L> {
store: S,
lattice_source: L,
lattice_id: String,
stored_state: Arc<RwLock<InMemoryData>>,
links: Arc<RwLock<Vec<Link>>>,
}
impl<S, L> Clone for SnapshotStore<S, L>
where
S: Clone,
L: Clone,
{
fn clone(&self) -> Self {
Self {
store: self.store.clone(),
lattice_source: self.lattice_source.clone(),
lattice_id: self.lattice_id.clone(),
stored_state: self.stored_state.clone(),
links: self.links.clone(),
}
}
}
impl<S, L> SnapshotStore<S, L>
where
S: ReadStore,
L: LinkSource + ConfigSource + SecretSource,
{
/// Creates a new snapshot store that is scoped to the given lattice ID
pub fn new(store: S, lattice_source: L, lattice_id: String) -> Self {
Self {
store,
lattice_source,
lattice_id,
stored_state: Default::default(),
links: Arc::new(RwLock::new(Vec::new())),
}
}
/// Refreshes the snapshotted data, returning an error if it couldn't update the data
pub async fn refresh(&self) -> anyhow::Result<()> {
// SAFETY: All of these unwraps are safe because we _just_ deserialized from JSON
let providers = self
.store
.list::<Provider>(&self.lattice_id)
.await?
.into_iter()
.map(|(key, val)| (key, serde_json::to_value(val).unwrap()))
.collect::<HashMap<_, _>>();
let components = self
.store
.list::<Component>(&self.lattice_id)
.await?
.into_iter()
.map(|(key, val)| (key, serde_json::to_value(val).unwrap()))
.collect::<HashMap<_, _>>();
let hosts = self
.store
.list::<Host>(&self.lattice_id)
.await?
.into_iter()
.map(|(key, val)| (key, serde_json::to_value(val).unwrap()))
.collect::<HashMap<_, _>>();
// If we fail to get the links, that likely just means the lattice source is down, so we
// just fall back on what we have cached
if let Ok(links) = self.lattice_source.get_links().await {
*self.links.write().await = links;
} else {
debug!("Failed to get links from lattice source, using cached links");
};
{
let mut stored_state = self.stored_state.write().await;
stored_state.insert(Provider::KIND.to_owned(), providers);
stored_state.insert(Component::KIND.to_owned(), components);
stored_state.insert(Host::KIND.to_owned(), hosts);
}
Ok(())
}
}
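The pattern above is a read-through snapshot: readers only ever consult the in-memory copy, and `refresh` swaps in freshly fetched data under a write lock. A minimal synchronous sketch, using `std::sync::RwLock` and plain `String` payloads in place of the async `tokio::sync::RwLock` and `serde_json::Value` (both substitutions are assumptions of this sketch):

```rust
use std::collections::HashMap;
use std::sync::RwLock;

/// kind (e.g. "host", "component", "provider") -> id -> snapshotted data
struct Snapshot {
    state: RwLock<HashMap<String, HashMap<String, String>>>,
}

impl Snapshot {
    fn new() -> Self {
        Snapshot { state: RwLock::new(HashMap::new()) }
    }

    /// Replace the cached data for one kind wholesale, like `refresh` above.
    fn refresh(&self, kind: &str, fresh: HashMap<String, String>) {
        self.state.write().unwrap().insert(kind.to_string(), fresh);
    }

    /// Reads never touch the backing store; they only consult the snapshot.
    fn get(&self, kind: &str, id: &str) -> Option<String> {
        self.state.read().unwrap().get(kind)?.get(id).cloned()
    }
}

fn main() {
    let snap = Snapshot::new();
    snap.refresh("component", HashMap::from([("c1".to_string(), "v1".to_string())]));
    assert_eq!(snap.get("component", "c1").as_deref(), Some("v1"));
    assert_eq!(snap.get("host", "h1"), None); // kind never refreshed: empty snapshot
}
```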
#[async_trait::async_trait]
impl<S, L> ReadStore for SnapshotStore<S, L>
where
// NOTE(thomastaylor312): We need this bound so we can pass through the error type.
S: ReadStore + Send + Sync,
L: Send + Sync,
{
type Error = S::Error;
// NOTE(thomastaylor312): See other note about the generic T above, but this is hardcore lolsob
async fn get<T>(&self, _lattice_id: &str, id: &str) -> Result<Option<T>, Self::Error>
where
T: serde::de::DeserializeOwned + StateKind,
{
Ok(self
.stored_state
.read()
.await
.get(T::KIND)
.and_then(|data| {
data.get(id).map(|data| {
serde_json::from_value::<T>(data.clone()).expect(
"Failed to deserialize data from snapshot, this is programmer error",
)
})
}))
}
async fn list<T>(&self, _lattice_id: &str) -> Result<HashMap<String, T>, Self::Error>
where
T: serde::de::DeserializeOwned + StateKind,
{
Ok(self
.stored_state
.read()
.await
.get(T::KIND)
.cloned()
.unwrap_or_default()
.into_iter()
.map(|(key, val)| {
(
key,
serde_json::from_value::<T>(val).expect(
"Failed to deserialize data from snapshot, this is programmer error",
),
)
})
.collect())
}
}
#[async_trait::async_trait]
impl<S, L> LinkSource for SnapshotStore<S, L>
where
S: Send + Sync,
L: Send + Sync,
{
async fn get_links(&self) -> anyhow::Result<Vec<Link>> {
Ok(self.links.read().await.clone())
}
}
#[async_trait::async_trait]
impl<S, L> ConfigSource for SnapshotStore<S, L>
where
S: Send + Sync,
L: ConfigSource + Send + Sync,
{
async fn get_config(&self, name: &str) -> anyhow::Result<Option<HashMap<String, String>>> {
self.lattice_source.get_config(name).await
}
}
#[async_trait::async_trait]
impl<S, L> SecretSource for SnapshotStore<S, L>
where
S: Send + Sync,
L: SecretSource + Send + Sync,
{
async fn get_secret(&self, name: &str) -> anyhow::Result<Option<SecretConfig>> {
self.lattice_source.get_secret(name).await
}
}


@@ -0,0 +1,361 @@
use std::borrow::{Borrow, ToOwned};
use std::collections::{BTreeMap, HashMap, HashSet};
use std::hash::{Hash, Hasher};
use chrono::{DateTime, Utc};
use semver::Version;
use serde::{Deserialize, Serialize};
use super::StateKind;
use crate::events::{ComponentScaled, HostHeartbeat, HostStarted, ProviderInfo, ProviderStarted};
/// A wasmCloud Capability provider
// NOTE: We probably aren't going to use this _right now_ so we've kept it pretty minimal. But it is
// possible that we could query wadm for more general data about the lattice in the future, so we do
// want to store this
#[derive(Debug, Serialize, Deserialize, Clone, Default)]
pub struct Provider {
/// ID of the provider, normally a public nkey
pub id: String,
/// Name of the provider
pub name: String,
/// Issuer of the (signed) provider
pub issuer: String,
/// The reference used to start the provider. Can be empty if it was started from a file
pub reference: String,
/// The hosts this provider is running on
pub hosts: HashMap<String, ProviderStatus>,
}
#[derive(Debug, Serialize, Deserialize, Clone, PartialEq)]
pub enum ProviderStatus {
/// The provider is starting and hasn't returned a heartbeat yet
Pending,
/// The provider is running
Running,
/// The provider failed to start
// TODO(thomastaylor312): In the future, we'll probably want to decay out a provider from state
// if it hasn't had a heartbeat
// if it fails a recent health check
Failed,
}
impl Default for ProviderStatus {
fn default() -> Self {
Self::Pending
}
}
impl std::fmt::Display for ProviderStatus {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(
f,
"{}",
match self {
Self::Pending => "pending".to_string(),
Self::Running => "running".to_string(),
Self::Failed => "failed".to_string(),
}
)
}
}
impl StateKind for Provider {
const KIND: &'static str = "provider";
}
impl From<ProviderStarted> for Provider {
fn from(value: ProviderStarted) -> Self {
let (name, issuer) = value.claims.map(|c| (c.name, c.issuer)).unwrap_or_default();
Provider {
id: value.provider_id,
name,
issuer,
reference: value.image_ref,
..Default::default()
}
}
}
impl From<&ProviderStarted> for Provider {
fn from(value: &ProviderStarted) -> Self {
Provider {
id: value.provider_id.clone(),
name: value
.claims
.as_ref()
.map(|c| c.name.clone())
.unwrap_or_default(),
issuer: value
.claims
.as_ref()
.map(|c| c.issuer.clone())
.unwrap_or_default(),
reference: value.image_ref.clone(),
..Default::default()
}
}
}
/// A representation of a unique component (as defined by its annotations) and its count. This struct
/// has a custom implementation of PartialEq and Hash that _only_ compares the annotations. This is
/// not a very "pure" way of doing things, but it lets us access current counts of components without
/// having to do a bunch of extra work.
#[derive(Debug, Serialize, Deserialize, Clone, Default, Eq)]
pub struct WadmComponentInfo {
pub annotations: BTreeMap<String, String>,
pub count: usize,
}
impl PartialEq for WadmComponentInfo {
fn eq(&self, other: &Self) -> bool {
self.annotations == other.annotations
}
}
impl Hash for WadmComponentInfo {
fn hash<H: Hasher>(&self, state: &mut H) {
self.annotations.hash(state);
}
}
impl Borrow<BTreeMap<String, String>> for WadmComponentInfo {
fn borrow(&self) -> &BTreeMap<String, String> {
&self.annotations
}
}
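The `PartialEq`/`Hash`/`Borrow` trio above is what lets a `HashSet<WadmComponentInfo>` be queried with a bare annotation map: `HashSet::get(&Q)` works for any `Q` the element type borrows to, provided hashing and equality agree across the borrow, which they do here because both are defined over the annotations alone. A self-contained sketch of the same trick (the struct is renamed `Info` for brevity):

```rust
use std::borrow::Borrow;
use std::collections::{BTreeMap, HashSet};
use std::hash::{Hash, Hasher};

/// Equality and hashing consider only the annotations, mirroring WadmComponentInfo.
#[derive(Debug, Clone, Eq)]
struct Info {
    annotations: BTreeMap<String, String>,
    count: usize,
}

impl PartialEq for Info {
    fn eq(&self, other: &Self) -> bool {
        self.annotations == other.annotations
    }
}

impl Hash for Info {
    fn hash<H: Hasher>(&self, state: &mut H) {
        self.annotations.hash(state);
    }
}

impl Borrow<BTreeMap<String, String>> for Info {
    fn borrow(&self) -> &BTreeMap<String, String> {
        &self.annotations
    }
}

fn main() {
    let annotations = BTreeMap::from([("managed-by".to_string(), "wadm".to_string())]);
    let set = HashSet::from([Info { annotations: annotations.clone(), count: 3 }]);
    // Look up the count for a set of annotations without building a full Info
    let found = set.get(&annotations).expect("annotations should be present");
    assert_eq!(found.count, 3);
}
```

The `Borrow` contract requires that the borrowed view hash and compare identically to the owning value; the custom `PartialEq`/`Hash` impls guarantee that by construction.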
/// A wasmCloud Component
#[derive(Debug, Serialize, Deserialize, Clone, Default)]
pub struct Component {
/// ID of the component
pub id: String,
/// Name of the component
pub name: String,
/// Issuer of the (signed) component
pub issuer: String,
/// All instances of this component running in the lattice, keyed by the host ID and contains a hash
/// map of annotations -> count for each set of unique annotations
pub instances: HashMap<String, HashSet<WadmComponentInfo>>,
/// The reference used to start the component. Can be empty if it was started from a file
pub reference: String,
}
impl Component {
/// A helper method that returns the total count of running copies of this component, regardless of
/// which host they are running on
pub fn count(&self) -> usize {
self.instances
.values()
.map(|instances| instances.iter().map(|info| info.count).sum::<usize>())
.sum()
}
/// A helper method that returns the total count of running copies of this component on a specific
/// host
pub fn count_for_host(&self, host_id: &str) -> usize {
self.instances
.get(host_id)
.map(|instances| instances.iter().map(|info| info.count).sum::<usize>())
.unwrap_or_default()
}
}
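The two count helpers are nested sums over `instances`: per-annotation-set counts are summed within each host, and `count` additionally sums across hosts. A sketch with `Vec<usize>` standing in for the `HashSet<WadmComponentInfo>` per host (an assumption of this sketch):

```rust
use std::collections::HashMap;

/// Total running copies across every host, like `Component::count`.
fn total_count(instances: &HashMap<String, Vec<usize>>) -> usize {
    instances
        .values()
        .map(|counts| counts.iter().sum::<usize>())
        .sum()
}

/// Running copies on one host, like `Component::count_for_host`.
fn count_for_host(instances: &HashMap<String, Vec<usize>>, host_id: &str) -> usize {
    instances
        .get(host_id)
        .map(|counts| counts.iter().sum())
        .unwrap_or_default() // unknown host contributes zero
}

fn main() {
    let instances = HashMap::from([
        ("host1".to_string(), vec![2, 3]), // two annotation sets on host1
        ("host2".to_string(), vec![5]),
    ]);
    assert_eq!(total_count(&instances), 10);
    assert_eq!(count_for_host(&instances, "host2"), 5);
    assert_eq!(count_for_host(&instances, "missing"), 0);
}
```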
impl StateKind for Component {
const KIND: &'static str = "component";
}
impl From<ComponentScaled> for Component {
fn from(value: ComponentScaled) -> Self {
let (name, issuer) = value.claims.map(|c| (c.name, c.issuer)).unwrap_or_default();
Component {
id: value.component_id,
name,
issuer,
reference: value.image_ref,
instances: HashMap::from_iter([(
value.host_id,
HashSet::from_iter([WadmComponentInfo {
annotations: value.annotations,
count: value.max_instances,
}]),
)]),
}
}
}
impl From<&ComponentScaled> for Component {
fn from(value: &ComponentScaled) -> Self {
Component {
id: value.component_id.clone(),
name: value
.claims
.as_ref()
.map(|c| c.name.clone())
.unwrap_or_default(),
issuer: value
.claims
.as_ref()
.map(|c| c.issuer.clone())
.unwrap_or_default(),
reference: value.image_ref.clone(),
instances: HashMap::from_iter([(
value.host_id.clone(),
HashSet::from_iter([WadmComponentInfo {
annotations: value.annotations.clone(),
count: value.max_instances,
}]),
)]),
}
}
}
/// A wasmCloud host
#[derive(Debug, Serialize, Deserialize, Clone, Default)]
pub struct Host {
/// A map of component IDs to the number of instances of the component running on the host
#[serde(alias = "actors")]
pub components: HashMap<String, usize>,
/// The randomly generated friendly name of the host
pub friendly_name: String,
/// An arbitrary hashmap of string labels attached to the host
pub labels: HashMap<String, String>,
/// A set of running providers on the host
pub providers: HashSet<ProviderInfo>,
/// The current uptime of the host in seconds
pub uptime_seconds: usize,
/// The host version that is running
// NOTE(thomastaylor312): Right now a host started event doesn't emit the version, so a newly
// started host can't be registered with one. We should probably add that to the host started
// event and then modify it here
pub version: Option<Version>,
/// The ID of this host, in the form of its nkey encoded public key
pub id: String,
/// The time when this host was last seen, as a RFC3339 timestamp
pub last_seen: DateTime<Utc>,
}
impl StateKind for Host {
const KIND: &'static str = "host";
}
impl From<HostStarted> for Host {
fn from(value: HostStarted) -> Self {
Host {
friendly_name: value.friendly_name,
id: value.id,
labels: value.labels,
last_seen: Utc::now(),
..Default::default()
}
}
}
impl From<&HostStarted> for Host {
fn from(value: &HostStarted) -> Self {
Host {
friendly_name: value.friendly_name.clone(),
id: value.id.clone(),
labels: value.labels.clone(),
last_seen: Utc::now(),
..Default::default()
}
}
}
impl From<HostHeartbeat> for Host {
fn from(value: HostHeartbeat) -> Self {
let components = value
.components
.into_iter()
.map(|component| {
(
component.id().into(),
// SAFETY: Unlikely to not fit into a usize, but fallback just in case
component.max_instances().try_into().unwrap_or(usize::MAX),
)
})
.collect();
let providers = value
.providers
.into_iter()
.map(|provider| ProviderInfo {
provider_id: provider.id().to_string(),
// NOTE: Provider should _always_ have an image ref. The control interface type should be updated.
provider_ref: provider.image_ref().map(String::from).unwrap_or_default(),
annotations: provider
.annotations()
.map(ToOwned::to_owned)
.map(BTreeMap::from_iter)
.unwrap_or_default(),
})
.collect();
Host {
components,
friendly_name: value.friendly_name,
labels: value.labels,
providers,
uptime_seconds: value.uptime_seconds as usize,
version: Some(value.version),
id: value.host_id,
last_seen: Utc::now(),
}
}
}
impl From<&HostHeartbeat> for Host {
fn from(value: &HostHeartbeat) -> Self {
let components = value
.components
.iter()
.map(|component| {
(
component.id().to_owned(),
// SAFETY: Unlikely to not fit into a usize, but fallback just in case
component.max_instances().try_into().unwrap_or(usize::MAX),
)
})
.collect();
let providers = value
.providers
.iter()
.map(|provider| ProviderInfo {
provider_id: provider.id().to_owned(),
provider_ref: provider.image_ref().map(String::from).unwrap_or_default(),
annotations: provider
.annotations()
.map(ToOwned::to_owned)
.map(BTreeMap::from_iter)
.unwrap_or_default(),
})
.collect();
Host {
components,
friendly_name: value.friendly_name.clone(),
labels: value.labels.clone(),
providers,
uptime_seconds: value.uptime_seconds as usize,
version: Some(value.version.clone()),
id: value.host_id.clone(),
last_seen: Utc::now(),
}
}
}
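The `try_into().unwrap_or(usize::MAX)` pattern used for `max_instances` above saturates instead of panicking when a `u64` count doesn't fit into `usize`. A minimal standalone sketch of that conversion:

```rust
// Saturating u64 -> usize conversion, as used for max_instances above.
// On 64-bit targets the conversion always succeeds; on narrower targets
// an oversized value clamps to usize::MAX instead of panicking.
fn saturating_to_usize(n: u64) -> usize {
    n.try_into().unwrap_or(usize::MAX)
}

fn main() {
    assert_eq!(saturating_to_usize(42), 42usize);
    assert_eq!(saturating_to_usize(u64::MAX) as u128, usize::MAX as u128);
}
```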


@@ -3,11 +3,15 @@ use std::{collections::HashMap, sync::Arc};
use serde::{de::DeserializeOwned, Serialize};
use tokio::sync::RwLock;
use wasmcloud_control_interface::HostInventory;
use wasmcloud_control_interface::{HostInventory, Link};
use wasmcloud_secrets_types::SecretConfig;
use crate::publisher::Publisher;
use crate::storage::StateKind;
use crate::workers::{Claims, ClaimsSource, InventorySource};
use crate::workers::{
secret_config_from_map, Claims, ClaimsSource, ConfigSource, InventorySource, LinkSource,
SecretSource,
};
fn generate_key<T: StateKind>(lattice_id: &str) -> String {
format!("{}_{lattice_id}", T::KIND)
@@ -58,7 +62,7 @@ impl crate::storage::ReadStore for TestStore {
impl crate::storage::Store for TestStore {
async fn store_many<T, D>(&self, lattice_id: &str, data: D) -> Result<(), Self::Error>
where
T: Serialize + DeserializeOwned + StateKind + Send,
T: Serialize + DeserializeOwned + StateKind + Send + Sync + Clone,
D: IntoIterator<Item = (String, T)> + Send,
{
let key = generate_key::<T>(lattice_id);
@@ -79,7 +83,7 @@ impl crate::storage::Store for TestStore {
async fn delete_many<T, D, K>(&self, lattice_id: &str, data: D) -> Result<(), Self::Error>
where
T: Serialize + DeserializeOwned + StateKind + Send,
T: Serialize + DeserializeOwned + StateKind + Send + Sync,
D: IntoIterator<Item = K> + Send,
K: AsRef<str>,
{
@@ -104,9 +108,11 @@ impl crate::storage::Store for TestStore {
#[derive(Clone, Default, Debug)]
/// A test "lattice source" for use with testing
pub(crate) struct TestLatticeSource {
pub(crate) claims: HashMap<String, Claims>,
pub(crate) inventory: Arc<RwLock<HashMap<String, HostInventory>>>,
pub struct TestLatticeSource {
pub claims: HashMap<String, Claims>,
pub inventory: Arc<RwLock<HashMap<String, HostInventory>>>,
pub links: Vec<Link>,
pub config: HashMap<String, HashMap<String, String>>,
}
#[async_trait::async_trait]
@@ -123,6 +129,32 @@ impl InventorySource for TestLatticeSource {
}
}
#[async_trait::async_trait]
impl LinkSource for TestLatticeSource {
async fn get_links(&self) -> anyhow::Result<Vec<Link>> {
Ok(self.links.clone())
}
}
#[async_trait::async_trait]
impl ConfigSource for TestLatticeSource {
async fn get_config(&self, name: &str) -> anyhow::Result<Option<HashMap<String, String>>> {
Ok(self.config.get(name).cloned())
}
}
#[async_trait::async_trait]
impl SecretSource for TestLatticeSource {
async fn get_secret(&self, name: &str) -> anyhow::Result<Option<SecretConfig>> {
let secret_config = self
.get_config(format!("secret_{name}").as_str())
.await
.map_err(|e| anyhow::anyhow!("{e:?}"))?;
secret_config.map(secret_config_from_map).transpose()
}
}
/// A publisher that does nothing
#[derive(Clone, Default)]
pub struct NoopPublisher;


@@ -0,0 +1,117 @@
use tracing::{instrument, trace};
use crate::{
commands::*,
consumers::{
manager::{WorkError, WorkResult, Worker},
ScopedMessage,
},
};
use super::insert_managed_annotations;
/// A worker implementation for handling incoming commands
#[derive(Clone)]
pub struct CommandWorker {
client: wasmcloud_control_interface::Client,
}
impl CommandWorker {
/// Creates a new command worker with the given connection pool.
pub fn new(ctl_client: wasmcloud_control_interface::Client) -> CommandWorker {
CommandWorker { client: ctl_client }
}
}
#[async_trait::async_trait]
impl Worker for CommandWorker {
type Message = Command;
#[instrument(level = "trace", skip_all)]
async fn do_work(&self, mut message: ScopedMessage<Self::Message>) -> WorkResult<()> {
let res = match message.as_ref() {
Command::ScaleComponent(component) => {
trace!(command = ?component, "Handling scale component command");
// Order here is intentional to prevent scalers from overwriting managed annotations
let mut annotations = component.annotations.clone();
insert_managed_annotations(&mut annotations, &component.model_name);
self.client
.scale_component(
&component.host_id,
&component.reference,
&component.component_id,
component.count,
Some(annotations.into_iter().collect()),
component.config.clone(),
)
.await
}
Command::StartProvider(prov) => {
trace!(command = ?prov, "Handling start provider command");
// Order here is intentional to prevent scalers from overwriting managed annotations
let mut annotations = prov.annotations.clone();
insert_managed_annotations(&mut annotations, &prov.model_name);
self.client
.start_provider(
&prov.host_id,
&prov.reference,
&prov.provider_id,
Some(annotations.into_iter().collect()),
prov.config.clone(),
)
.await
}
Command::StopProvider(prov) => {
trace!(command = ?prov, "Handling stop provider command");
// Order here is intentional to prevent scalers from overwriting managed annotations
let mut annotations = prov.annotations.clone();
insert_managed_annotations(&mut annotations, &prov.model_name);
self.client
.stop_provider(&prov.host_id, &prov.provider_id)
.await
}
Command::PutLink(ld) => {
trace!(command = ?ld, "Handling put linkdef command");
// TODO(thomastaylor312): We should probably change ScopedMessage to allow us `pub`
// access to the inner type so we don't have to clone, but no need to worry for now
self.client.put_link(ld.clone().try_into()?).await
}
Command::DeleteLink(ld) => {
trace!(command = ?ld, "Handling delete linkdef command");
self.client
.delete_link(
&ld.source_id,
&ld.link_name,
&ld.wit_namespace,
&ld.wit_package,
)
.await
}
Command::PutConfig(put_config) => {
trace!(command = ?put_config, "Handling put config command");
self.client
.put_config(&put_config.config_name, put_config.config.clone())
.await
}
Command::DeleteConfig(delete_config) => {
trace!(command = ?delete_config, "Handling delete config command");
self.client.delete_config(&delete_config.config_name).await
}
}
.map_err(|e| anyhow::anyhow!("{e:?}"));
match res {
Ok(ack) if !ack.succeeded() => {
message.nack().await;
Err(WorkError::Other(
anyhow::anyhow!("{}", ack.message()).into(),
))
}
Ok(_) => message.ack().await.map_err(WorkError::from),
Err(e) => {
message.nack().await;
Err(WorkError::Other(e.into()))
}
}
}
}

File diff suppressed because it is too large.


@@ -0,0 +1,324 @@
use anyhow::{bail, Context};
use async_nats::jetstream::stream::Stream;
use std::collections::{BTreeMap, HashMap};
use std::fmt::Debug;
use wasmcloud_secrets_types::SecretConfig;
use tracing::{debug, instrument, trace, warn};
use wadm_types::api::Status;
use wasmcloud_control_interface::{HostInventory, Link};
use crate::{commands::Command, publisher::Publisher, APP_SPEC_ANNOTATION};
/// A subset of needed claims to help populate state
#[derive(Debug, Clone)]
pub struct Claims {
pub name: String,
pub capabilities: Vec<String>,
pub issuer: String,
}
/// A trait for anything that can fetch a set of claims information about components.
///
/// NOTE: This trait exists right now as a convenience for two things. First, testing: without
/// something like this we'd require a network connection to unit test. Second, there is no concrete
/// claims type returned from the control interface client. This allows us to abstract that away
/// until such time as we do export one, and then we'll be able to switch without breaking our API
#[async_trait::async_trait]
pub trait ClaimsSource {
async fn get_claims(&self) -> anyhow::Result<HashMap<String, Claims>>;
}
/// NOTE(brooksmtownsend): This trait exists in order to query a host's inventory
/// upon receiving a heartbeat since the heartbeat doesn't contain enough
/// information to properly update the stored data for components
#[async_trait::async_trait]
pub trait InventorySource {
async fn get_inventory(&self, host_id: &str) -> anyhow::Result<HostInventory>;
}
/// A trait for anything that can fetch the links in a lattice
///
/// NOTE: This trait right now exists as a convenience for testing. It isn't ideal to have this just
/// due to testing, but it does allow us to abstract away the concrete type of the client
#[async_trait::async_trait]
pub trait LinkSource {
async fn get_links(&self) -> anyhow::Result<Vec<Link>>;
}
/// A trait for anything that can fetch a piece of named configuration
///
/// In the future this could be expanded to fetch more than just a single piece of configuration,
/// but for now it's limited to a single config in an attempt to keep the scope of fetching
/// configuration small and the data passed around efficient.
#[async_trait::async_trait]
pub trait ConfigSource {
async fn get_config(&self, name: &str) -> anyhow::Result<Option<HashMap<String, String>>>;
}
/// A trait for anything that can fetch a secret.
#[async_trait::async_trait]
pub trait SecretSource {
async fn get_secret(&self, name: &str) -> anyhow::Result<Option<SecretConfig>>;
}
/// Converts the configuration map of strings to a secret config
pub fn secret_config_from_map(map: HashMap<String, String>) -> anyhow::Result<SecretConfig> {
match (
map.get("name"),
map.get("backend"),
map.get("key"),
map.get("policy"),
map.get("type"),
) {
(None, _, _, _, _) => bail!("missing name field in secret config"),
(_, None, _, _, _) => bail!("missing backend field in secret config"),
(_, _, None, _, _) => bail!("missing key field in secret config"),
(_, _, _, None, _) => bail!("missing policy field in secret config"),
(_, _, _, _, None) => bail!("missing type field in secret config"),
(Some(name), Some(backend), Some(key), Some(policy), Some(secret_type)) => {
Ok(SecretConfig {
name: name.to_string(),
backend: backend.to_string(),
key: key.to_string(),
field: map.get("field").map(|f| f.to_string()),
version: map.get("version").map(|v| v.to_string()),
policy: serde_json::from_str(policy)
.context("failed to deserialize policy from string")?,
secret_type: secret_type.to_string(),
})
}
}
}
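The tuple match above reports the first missing required field before building the config. The same pattern can be shown in a self-contained sketch (a hypothetical `MiniSecretConfig` with plain `String` errors stands in for `wasmcloud_secrets_types::SecretConfig` and `anyhow`, and only three of the required fields are modeled):

```rust
use std::collections::HashMap;

// Hypothetical, simplified stand-in for the real SecretConfig type
#[derive(Debug, PartialEq)]
struct MiniSecretConfig {
    name: String,
    backend: String,
    key: String,
}

// Mirrors the tuple-match pattern: report the first missing required field
fn mini_config_from_map(map: &HashMap<String, String>) -> Result<MiniSecretConfig, String> {
    match (map.get("name"), map.get("backend"), map.get("key")) {
        (None, _, _) => Err("missing name field in secret config".into()),
        (_, None, _) => Err("missing backend field in secret config".into()),
        (_, _, None) => Err("missing key field in secret config".into()),
        (Some(name), Some(backend), Some(key)) => Ok(MiniSecretConfig {
            name: name.clone(),
            backend: backend.clone(),
            key: key.clone(),
        }),
    }
}

fn main() {
    let mut map = HashMap::new();
    map.insert("name".to_string(), "api-key".to_string());
    map.insert("backend".to_string(), "nats-kv".to_string());
    // "key" is absent, so the third arm fires
    assert_eq!(
        mini_config_from_map(&map),
        Err("missing key field in secret config".to_string())
    );
    map.insert("key".to_string(), "secrets/api-key".to_string());
    assert!(mini_config_from_map(&map).is_ok());
}
```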
#[async_trait::async_trait]
impl ClaimsSource for wasmcloud_control_interface::Client {
async fn get_claims(&self) -> anyhow::Result<HashMap<String, Claims>> {
match self.get_claims().await.map_err(|e| anyhow::anyhow!("{e}")) {
Ok(ctl_resp) if ctl_resp.succeeded() => {
let claims = ctl_resp.data().context("missing claims data")?.to_owned();
Ok(claims
.into_iter()
.filter_map(|mut claim| {
// NOTE(thomastaylor312): I'm removing instead of getting since we own the data and I
// don't want to clone every time we do this
// If we don't find a subject, we can't actually get the component ID, so skip this one
Some((
claim.remove("sub")?,
Claims {
name: claim.remove("name").unwrap_or_default(),
capabilities: claim
.remove("caps")
.map(|raw| raw.split(',').map(|s| s.to_owned()).collect())
.unwrap_or_default(),
issuer: claim.remove("iss").unwrap_or_default(),
},
))
})
.collect())
}
_ => Err(anyhow::anyhow!("Failed to get claims")),
}
}
}
#[async_trait::async_trait]
impl InventorySource for wasmcloud_control_interface::Client {
async fn get_inventory(&self, host_id: &str) -> anyhow::Result<HostInventory> {
match self
.get_host_inventory(host_id)
.await
.map_err(|e| anyhow::anyhow!("{e:?}"))?
{
ctl_resp if ctl_resp.succeeded() && ctl_resp.data().is_some() => Ok(ctl_resp
.into_data()
.context("missing host inventory data")?),
ctl_resp => Err(anyhow::anyhow!(
"Failed to get inventory for host {host_id}, {}",
ctl_resp.message()
)),
}
}
}
// NOTE(thomastaylor312): A future improvement here that would make things more efficient is if this
// was just a cache of the links. On startup, it could fetch once, and then it could subscribe to
// the KV store for updates. This would allow us to not have to fetch every time we need to get
// links
#[async_trait::async_trait]
impl LinkSource for wasmcloud_control_interface::Client {
async fn get_links(&self) -> anyhow::Result<Vec<Link>> {
match self
.get_links()
.await
.map_err(|e| anyhow::anyhow!("{e:?}"))?
{
ctl_resp if ctl_resp.succeeded() && ctl_resp.data().is_some() => {
Ok(ctl_resp.into_data().context("missing link data")?)
}
ctl_resp => Err(anyhow::anyhow!(
"Failed to get links, {}",
ctl_resp.message()
)),
}
}
}
#[async_trait::async_trait]
impl ConfigSource for wasmcloud_control_interface::Client {
async fn get_config(&self, name: &str) -> anyhow::Result<Option<HashMap<String, String>>> {
match self
.get_config(name)
.await
.map_err(|e| anyhow::anyhow!("{e:?}"))?
{
ctl_resp if ctl_resp.succeeded() && ctl_resp.data().is_some() => {
Ok(ctl_resp.into_data())
}
// TODO(https://github.com/wasmCloud/wasmCloud/issues/1906): The control interface should return a None when config isn't found
// instead of returning an error.
ctl_resp => {
debug!("Failed to get config for {name}, {}", ctl_resp.message());
Ok(None)
}
}
}
}
#[async_trait::async_trait]
impl SecretSource for wasmcloud_control_interface::Client {
async fn get_secret(&self, name: &str) -> anyhow::Result<Option<SecretConfig>> {
match self
.get_config(name)
.await
.map_err(|e| anyhow::anyhow!("{e:?}"))?
{
ctl_resp if ctl_resp.succeeded() && ctl_resp.data().is_some() => {
secret_config_from_map(ctl_resp.into_data().context("missing secret data")?)
.map(Some)
}
ctl_resp if ctl_resp.data().is_none() => {
debug!("Failed to get secret for {name}, {}", ctl_resp.message());
Ok(None)
}
ctl_resp => {
debug!("Failed to get secret for {name}, {}", ctl_resp.message());
Ok(None)
}
}
}
}
/// A struct for publishing status updates
#[derive(Clone)]
pub struct StatusPublisher<Pub> {
publisher: Pub,
// Stream for querying current status to avoid duplicate updates
status_stream: Option<Stream>,
// Topic prefix, e.g. wadm.status.default
topic_prefix: String,
}
impl<Pub> StatusPublisher<Pub> {
/// Creates a new status publisher configured with the given publisher that will send to the
/// manifest status topic using the given prefix
pub fn new(
publisher: Pub,
status_stream: Option<Stream>,
topic_prefix: &str,
) -> StatusPublisher<Pub> {
StatusPublisher {
publisher,
status_stream,
topic_prefix: topic_prefix.to_owned(),
}
}
}
impl<Pub: Publisher> StatusPublisher<Pub> {
#[instrument(level = "trace", skip(self))]
pub async fn publish_status(&self, name: &str, status: Status) -> anyhow::Result<()> {
let topic = format!("{}.{name}", self.topic_prefix);
// NOTE(brooksmtownsend): This direct get may not always query the jetstream leader. In the
// worst case where the last message isn't all the way updated, we may publish a duplicate
// status. This is an acceptable tradeoff to not have to query the leader directly every time.
let prev_status = if let Some(status_stream) = &self.status_stream {
status_stream
.direct_get_last_for_subject(&topic)
.await
.map(|m| serde_json::from_slice::<Status>(&m.payload).ok())
.ok()
.flatten()
} else {
None
};
match prev_status {
// If the status hasn't changed, skip publishing
Some(prev_status) if prev_status == status => {
trace!(%name, "Status hasn't changed since last update. Skipping");
Ok(())
}
_ => {
self.publisher
.publish(serde_json::to_vec(&status)?, Some(&topic))
.await
}
}
}
}
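The publish path above skips a write when the last stored status matches the new one. Stripped of the JetStream lookup, the decision reduces to a small predicate (hypothetical one-field `Status`, not the real `wadm_types::api::Status`):

```rust
// Hypothetical minimal model of the skip-if-unchanged publish decision
#[derive(Debug, Clone, PartialEq)]
struct Status {
    phase: String,
}

// Publish only when there is no previous status or it differs from the new one
fn should_publish(prev: Option<&Status>, next: &Status) -> bool {
    prev.map_or(true, |p| p != next)
}

fn main() {
    let deployed = Status { phase: "deployed".into() };
    // No previous status: always publish
    assert!(should_publish(None, &deployed));
    // Unchanged status: skip
    assert!(!should_publish(Some(&deployed), &deployed.clone()));
    // Changed status: publish again
    let failed = Status { phase: "failed".into() };
    assert!(should_publish(Some(&deployed), &failed));
}
```

Since the `direct_get_last_for_subject` read may lag the leader, a false "changed" answer only costs a duplicate publish, which the comment in the source calls an acceptable tradeoff.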
/// A struct for publishing commands
#[derive(Clone)]
pub struct CommandPublisher<Pub> {
publisher: Pub,
topic: String,
}
impl<Pub> CommandPublisher<Pub> {
/// Creates a new command publisher configured with the given publisher that will send to the
/// specified topic
pub fn new(publisher: Pub, topic: &str) -> CommandPublisher<Pub> {
CommandPublisher {
publisher,
topic: topic.to_owned(),
}
}
}
impl<Pub: Publisher> CommandPublisher<Pub> {
#[instrument(level = "trace", skip(self))]
pub async fn publish_commands(&self, commands: Vec<Command>) -> anyhow::Result<()> {
futures::future::join_all(
commands
.into_iter()
// Generally commands are purely internal to wadm and so shouldn't have an error serializing. If it does, warn and continue onward
.filter_map(|command| {
match serde_json::to_vec(&command) {
Ok(data) => Some(data),
Err(e) => {
warn!(error = %e, ?command, "Got malformed command when trying to serialize. Skipping this command");
None
}
}
})
.map(|data| self.publisher.publish(data, Some(&self.topic))),
)
.await
.into_iter()
.collect::<anyhow::Result<()>>()
}
}
/// Inserts managed annotations into the given `annotations` map.
pub fn insert_managed_annotations(annotations: &mut BTreeMap<String, String>, model_name: &str) {
annotations.extend([
(
crate::MANAGED_BY_ANNOTATION.to_owned(),
crate::MANAGED_BY_IDENTIFIER.to_owned(),
),
(APP_SPEC_ANNOTATION.to_owned(), model_name.to_owned()),
])
}
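Because `BTreeMap::extend` overwrites existing entries for the same keys, calling this after copying a command's annotations guarantees the managed keys win, which is the ordering the command worker's comments insist on. A standalone sketch (the key and value constants here are hypothetical stand-ins for wadm's real `MANAGED_BY_ANNOTATION`, `APP_SPEC_ANNOTATION`, and `MANAGED_BY_IDENTIFIER`):

```rust
use std::collections::BTreeMap;

// Hypothetical annotation keys standing in for wadm's real constants
const MANAGED_BY: &str = "example.dev/managed-by";
const APP_SPEC: &str = "example.dev/appspec";

// extend() replaces any existing values for these keys, so inserting the
// managed annotations last prevents scalers from overwriting them
fn insert_managed(annotations: &mut BTreeMap<String, String>, model_name: &str) {
    annotations.extend([
        (MANAGED_BY.to_owned(), "wadm".to_owned()),
        (APP_SPEC.to_owned(), model_name.to_owned()),
    ]);
}

fn main() {
    // A stale value for the appspec key gets overwritten, not kept
    let mut annotations = BTreeMap::from([(APP_SPEC.to_owned(), "stale".to_owned())]);
    insert_managed(&mut annotations, "my-app");
    assert_eq!(annotations.get(APP_SPEC).map(String::as_str), Some("my-app"));
    assert_eq!(annotations.len(), 2);
}
```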


@@ -7,5 +7,6 @@ mod event;
mod event_helpers;
pub use command::CommandWorker;
pub(crate) use event::get_commands_and_result;
pub use event::EventWorker;
pub use event_helpers::*;

Binary file not shown (previously a 677 B image).

flake.lock (new file, 704 lines)

@@ -0,0 +1,704 @@
{
"nodes": {
"advisory-db": {
"flake": false,
"locked": {
"lastModified": 1737565911,
"narHash": "sha256-WxIWw1mSPJVU1JfIcTdIubU5UoIwwR8h7UcXop/6htg=",
"owner": "rustsec",
"repo": "advisory-db",
"rev": "ffa26704690a3dc403edcd94baef103ee48f66eb",
"type": "github"
},
"original": {
"owner": "rustsec",
"repo": "advisory-db",
"type": "github"
}
},
"advisory-db_2": {
"flake": false,
"locked": {
"lastModified": 1730464311,
"narHash": "sha256-9xJoP1766XJSO1Qr0Lxg2P6dwPncTr3BJYlFMSXBd/E=",
"owner": "rustsec",
"repo": "advisory-db",
"rev": "f3460e5ed91658ab94fa41908cfa44991f9f4f02",
"type": "github"
},
"original": {
"owner": "rustsec",
"repo": "advisory-db",
"type": "github"
}
},
"crane": {
"locked": {
"lastModified": 1737689766,
"narHash": "sha256-ivVXYaYlShxYoKfSo5+y5930qMKKJ8CLcAoIBPQfJ6s=",
"owner": "ipetkov",
"repo": "crane",
"rev": "6fe74265bbb6d016d663b1091f015e2976c4a527",
"type": "github"
},
"original": {
"owner": "ipetkov",
"repo": "crane",
"type": "github"
}
},
"crane_2": {
"locked": {
"lastModified": 1730652660,
"narHash": "sha256-+XVYfmVXAiYA0FZT7ijHf555dxCe+AoAT5A6RU+6vSo=",
"owner": "ipetkov",
"repo": "crane",
"rev": "a4ca93905455c07cb7e3aca95d4faf7601cba458",
"type": "github"
},
"original": {
"owner": "ipetkov",
"repo": "crane",
"type": "github"
}
},
"crane_3": {
"inputs": {
"flake-compat": "flake-compat",
"flake-utils": [
"wasmcloud",
"nixify",
"nix-log",
"nixify",
"flake-utils"
],
"nixpkgs": [
"wasmcloud",
"nixify",
"nix-log",
"nixify",
"nixpkgs"
],
"rust-overlay": [
"wasmcloud",
"nixify",
"nix-log",
"nixify",
"rust-overlay"
]
},
"locked": {
"lastModified": 1679255352,
"narHash": "sha256-nkGwGuNkhNrnN33S4HIDV5NzkzMLU5mNStRn9sZwq8c=",
"owner": "rvolosatovs",
"repo": "crane",
"rev": "cec65880599a4ec6426186e24342e663464f5933",
"type": "github"
},
"original": {
"owner": "rvolosatovs",
"ref": "feat/wit",
"repo": "crane",
"type": "github"
}
},
"fenix": {
"inputs": {
"nixpkgs": [
"nixpkgs"
],
"rust-analyzer-src": []
},
"locked": {
"lastModified": 1738132439,
"narHash": "sha256-7q5vsyPQf6/aQEKAOgZ4ggv++Z2ppPSuPCGKlbPcM88=",
"owner": "nix-community",
"repo": "fenix",
"rev": "f94e521c1922784c377a2cace90aa89a6b8a1011",
"type": "github"
},
"original": {
"owner": "nix-community",
"repo": "fenix",
"type": "github"
}
},
"fenix_2": {
"inputs": {
"nixpkgs": [
"wasmcloud",
"nixify",
"nixpkgs-nixos"
],
"rust-analyzer-src": "rust-analyzer-src"
},
"locked": {
"lastModified": 1731047492,
"narHash": "sha256-F4h8YtTzPWv0/1Z6fc8fMSqKpn7YhOjlgp66cr15tEo=",
"owner": "nix-community",
"repo": "fenix",
"rev": "da6332e801fbb0418f80f20cefa947c5fe5c18c9",
"type": "github"
},
"original": {
"owner": "nix-community",
"repo": "fenix",
"type": "github"
}
},
"fenix_3": {
"inputs": {
"nixpkgs": [
"wasmcloud",
"nixify",
"nix-log",
"nixify",
"nixpkgs"
],
"rust-analyzer-src": "rust-analyzer-src_2"
},
"locked": {
"lastModified": 1679552560,
"narHash": "sha256-L9Se/F1iLQBZFGrnQJO8c9wE5z0Mf8OiycPGP9Y96hA=",
"owner": "nix-community",
"repo": "fenix",
"rev": "fb49a9f5605ec512da947a21cc7e4551a3950397",
"type": "github"
},
"original": {
"owner": "nix-community",
"repo": "fenix",
"type": "github"
}
},
"flake-compat": {
"flake": false,
"locked": {
"lastModified": 1673956053,
"narHash": "sha256-4gtG9iQuiKITOjNQQeQIpoIB6b16fm+504Ch3sNKLd8=",
"owner": "edolstra",
"repo": "flake-compat",
"rev": "35bb57c0c8d8b62bbfd284272c928ceb64ddbde9",
"type": "github"
},
"original": {
"owner": "edolstra",
"repo": "flake-compat",
"type": "github"
}
},
"flake-utils": {
"inputs": {
"systems": "systems"
},
"locked": {
"lastModified": 1731533236,
"narHash": "sha256-l0KFg5HjrsfsO/JpG+r7fRrqm12kzFHyUHqHCVpMMbI=",
"owner": "numtide",
"repo": "flake-utils",
"rev": "11707dc2f618dd54ca8739b309ec4fc024de578b",
"type": "github"
},
"original": {
"owner": "numtide",
"repo": "flake-utils",
"type": "github"
}
},
"flake-utils_2": {
"inputs": {
"systems": "systems_2"
},
"locked": {
"lastModified": 1726560853,
"narHash": "sha256-X6rJYSESBVr3hBoH0WbKE5KvhPU5bloyZ2L4K60/fPQ=",
"owner": "numtide",
"repo": "flake-utils",
"rev": "c1dfcf08411b08f6b8615f7d8971a2bfa81d5e8a",
"type": "github"
},
"original": {
"owner": "numtide",
"repo": "flake-utils",
"type": "github"
}
},
"flake-utils_3": {
"locked": {
"lastModified": 1678901627,
"narHash": "sha256-U02riOqrKKzwjsxc/400XnElV+UtPUQWpANPlyazjH0=",
"owner": "numtide",
"repo": "flake-utils",
"rev": "93a2b84fc4b70d9e089d029deacc3583435c2ed6",
"type": "github"
},
"original": {
"owner": "numtide",
"repo": "flake-utils",
"type": "github"
}
},
"macos-sdk": {
"flake": false,
"locked": {
"lastModified": 1694769349,
"narHash": "sha256-TEvVJy+NMPyzgWSk/6S29ZMQR+ICFxSdS3tw247uhFc=",
"type": "tarball",
"url": "https://github.com/roblabla/MacOSX-SDKs/releases/download/macosx14.0/MacOSX14.0.sdk.tar.xz"
},
"original": {
"type": "tarball",
"url": "https://github.com/roblabla/MacOSX-SDKs/releases/download/macosx14.0/MacOSX14.0.sdk.tar.xz"
}
},
"nix-filter": {
"locked": {
"lastModified": 1730207686,
"narHash": "sha256-SCHiL+1f7q9TAnxpasriP6fMarWE5H43t25F5/9e28I=",
"owner": "numtide",
"repo": "nix-filter",
"rev": "776e68c1d014c3adde193a18db9d738458cd2ba4",
"type": "github"
},
"original": {
"owner": "numtide",
"repo": "nix-filter",
"type": "github"
}
},
"nix-filter_2": {
"locked": {
"lastModified": 1678109515,
"narHash": "sha256-C2X+qC80K2C1TOYZT8nabgo05Dw2HST/pSn6s+n6BO8=",
"owner": "numtide",
"repo": "nix-filter",
"rev": "aa9ff6ce4a7f19af6415fb3721eaa513ea6c763c",
"type": "github"
},
"original": {
"owner": "numtide",
"repo": "nix-filter",
"type": "github"
}
},
"nix-flake-tests": {
"locked": {
"lastModified": 1677844186,
"narHash": "sha256-ErJZ/Gs1rxh561CJeWP5bohA2IcTq1rDneu1WT6CVII=",
"owner": "antifuchs",
"repo": "nix-flake-tests",
"rev": "bbd9216bd0f6495bb961a8eb8392b7ef55c67afb",
"type": "github"
},
"original": {
"owner": "antifuchs",
"repo": "nix-flake-tests",
"type": "github"
}
},
"nix-flake-tests_2": {
"locked": {
"lastModified": 1677844186,
"narHash": "sha256-ErJZ/Gs1rxh561CJeWP5bohA2IcTq1rDneu1WT6CVII=",
"owner": "antifuchs",
"repo": "nix-flake-tests",
"rev": "bbd9216bd0f6495bb961a8eb8392b7ef55c67afb",
"type": "github"
},
"original": {
"owner": "antifuchs",
"repo": "nix-flake-tests",
"type": "github"
}
},
"nix-log": {
"inputs": {
"nix-flake-tests": "nix-flake-tests",
"nixify": "nixify_2",
"nixlib": "nixlib_2"
},
"locked": {
"lastModified": 1681933283,
"narHash": "sha256-phDsQdaoUEI4DUTErR6Tz7lS0y3kXvDwwbqtxpzd0eo=",
"owner": "rvolosatovs",
"repo": "nix-log",
"rev": "833d31e3c1a677eac81ba87e777afa5076071d66",
"type": "github"
},
"original": {
"owner": "rvolosatovs",
"repo": "nix-log",
"type": "github"
}
},
"nix-log_2": {
"inputs": {
"nix-flake-tests": "nix-flake-tests_2",
"nixify": [
"wasmcloud",
"wit-deps",
"nixify"
],
"nixlib": [
"wasmcloud",
"wit-deps",
"nixlib"
]
},
"locked": {
"lastModified": 1681933283,
"narHash": "sha256-phDsQdaoUEI4DUTErR6Tz7lS0y3kXvDwwbqtxpzd0eo=",
"owner": "rvolosatovs",
"repo": "nix-log",
"rev": "833d31e3c1a677eac81ba87e777afa5076071d66",
"type": "github"
},
"original": {
"owner": "rvolosatovs",
"repo": "nix-log",
"type": "github"
}
},
"nixify": {
"inputs": {
"advisory-db": "advisory-db_2",
"crane": "crane_2",
"fenix": "fenix_2",
"flake-utils": "flake-utils_2",
"macos-sdk": "macos-sdk",
"nix-filter": "nix-filter",
"nix-log": "nix-log",
"nixlib": [
"wasmcloud",
"nixlib"
],
"nixpkgs-darwin": "nixpkgs-darwin",
"nixpkgs-nixos": "nixpkgs-nixos",
"rust-overlay": "rust-overlay_2"
},
"locked": {
"lastModified": 1731068753,
"narHash": "sha256-6H+vYAYl/koFsiBEM4WHZhOoOQ2Hfzd+MtcxFfAOOtw=",
"owner": "rvolosatovs",
"repo": "nixify",
"rev": "7b83953ebfb22ba1f623ac06312aebee81f2182e",
"type": "github"
},
"original": {
"owner": "rvolosatovs",
"repo": "nixify",
"type": "github"
}
},
"nixify_2": {
"inputs": {
"crane": "crane_3",
"fenix": "fenix_3",
"flake-utils": "flake-utils_3",
"nix-filter": "nix-filter_2",
"nixlib": "nixlib",
"nixpkgs": "nixpkgs_2",
"rust-overlay": "rust-overlay"
},
"locked": {
"lastModified": 1679748566,
"narHash": "sha256-yA4yIJjNCOLoUh0py9S3SywwbPnd/6NPYbXad+JeOl0=",
"owner": "rvolosatovs",
"repo": "nixify",
"rev": "80e823959511a42dfec4409fef406a14ae8240f3",
"type": "github"
},
"original": {
"owner": "rvolosatovs",
"repo": "nixify",
"type": "github"
}
},
"nixlib": {
"locked": {
"lastModified": 1679187309,
"narHash": "sha256-H8udmkg5wppL11d/05MMzOMryiYvc403axjDNZy1/TQ=",
"owner": "nix-community",
"repo": "nixpkgs.lib",
"rev": "44214417fe4595438b31bdb9469be92536a61455",
"type": "github"
},
"original": {
"owner": "nix-community",
"repo": "nixpkgs.lib",
"type": "github"
}
},
"nixlib_2": {
"locked": {
"lastModified": 1679791877,
"narHash": "sha256-tTV1Mf0hPWIMtqyU16Kd2JUBDWvfHlDC9pF57vcbgpQ=",
"owner": "nix-community",
"repo": "nixpkgs.lib",
"rev": "cc060ddbf652a532b54057081d5abd6144d01971",
"type": "github"
},
"original": {
"owner": "nix-community",
"repo": "nixpkgs.lib",
"type": "github"
}
},
"nixlib_3": {
"locked": {
"lastModified": 1731200463,
"narHash": "sha256-qDaAweJjdFbVExqs8aG27urUgcgKufkIngHW3Rzustg=",
"owner": "nix-community",
"repo": "nixpkgs.lib",
"rev": "e04234d263750db01c78a412690363dc2226e68a",
"type": "github"
},
"original": {
"owner": "nix-community",
"repo": "nixpkgs.lib",
"type": "github"
}
},
"nixpkgs": {
"locked": {
"lastModified": 1738163270,
"narHash": "sha256-B/7Y1v4y+msFFBW1JAdFjNvVthvNdJKiN6EGRPnqfno=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "59e618d90c065f55ae48446f307e8c09565d5ab0",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "release-24.11",
"repo": "nixpkgs",
"type": "github"
}
},
"nixpkgs-darwin": {
"locked": {
"lastModified": 1730891215,
"narHash": "sha256-i85DPrhDuvzgvIWCpJlbfM2UFtNYbapo20MtQXsvay4=",
"owner": "nixos",
"repo": "nixpkgs",
"rev": "c128e44a249d6180740d0a979b6480d5b795c013",
"type": "github"
},
"original": {
"owner": "nixos",
"ref": "nixpkgs-24.05-darwin",
"repo": "nixpkgs",
"type": "github"
}
},
"nixpkgs-nixos": {
"locked": {
"lastModified": 1730883749,
"narHash": "sha256-mwrFF0vElHJP8X3pFCByJR365Q2463ATp2qGIrDUdlE=",
"owner": "nixos",
"repo": "nixpkgs",
"rev": "dba414932936fde69f0606b4f1d87c5bc0003ede",
"type": "github"
},
"original": {
"owner": "nixos",
"ref": "nixos-24.05",
"repo": "nixpkgs",
"type": "github"
}
},
"nixpkgs_2": {
"locked": {
"lastModified": 1679577639,
"narHash": "sha256-7u7bsNP0ApBnLgsHVROQ5ytoMqustmMVMgtaFS/P7EU=",
"owner": "nixos",
"repo": "nixpkgs",
"rev": "8f1bcd72727c5d4cd775545595d068be410f2a7e",
"type": "github"
},
"original": {
"owner": "nixos",
"ref": "nixpkgs-22.11-darwin",
"repo": "nixpkgs",
"type": "github"
}
},
"root": {
"inputs": {
"advisory-db": "advisory-db",
"crane": "crane",
"fenix": "fenix",
"flake-utils": "flake-utils",
"nixpkgs": "nixpkgs",
"wasmcloud": "wasmcloud"
}
},
"rust-analyzer-src": {
"flake": false,
"locked": {
"lastModified": 1730989300,
"narHash": "sha256-ZWSta9893f/uF5PoRFn/BSUAxF4dKW+TIbdA6rZoGBg=",
"owner": "rust-lang",
"repo": "rust-analyzer",
"rev": "1042a8c22c348491a4bade4f664430b03d6f5b5c",
"type": "github"
},
"original": {
"owner": "rust-lang",
"ref": "nightly",
"repo": "rust-analyzer",
"type": "github"
}
},
"rust-analyzer-src_2": {
"flake": false,
"locked": {
"lastModified": 1679520343,
"narHash": "sha256-AJGSGWRfoKWD5IVTu1wEsR990wHbX0kIaolPqNMEh0c=",
"owner": "rust-lang",
"repo": "rust-analyzer",
"rev": "eb791f31e688ae00908eb75d4c704ef60c430a92",
"type": "github"
},
"original": {
"owner": "rust-lang",
"ref": "nightly",
"repo": "rust-analyzer",
"type": "github"
}
},
"rust-overlay": {
"inputs": {
"flake-utils": [
"wasmcloud",
"nixify",
"nix-log",
"nixify",
"flake-utils"
],
"nixpkgs": [
"wasmcloud",
"nixify",
"nix-log",
"nixify",
"nixpkgs"
]
},
"locked": {
"lastModified": 1679537973,
"narHash": "sha256-R6borgcKeyMIjjPeeYsfo+mT8UdS+OwwbhhStdCfEjg=",
"owner": "oxalica",
"repo": "rust-overlay",
"rev": "fbc7ae3f14d32e78c0e8d7865f865cc28a46b232",
"type": "github"
},
"original": {
"owner": "oxalica",
"repo": "rust-overlay",
"type": "github"
}
},
"rust-overlay_2": {
"inputs": {
"nixpkgs": [
"wasmcloud",
"nixify",
"nixpkgs-nixos"
]
},
"locked": {
"lastModified": 1731032894,
"narHash": "sha256-dQSyYPmrQiPr+PGEd+K8038rubFGz7G/dNXVeaGWE0w=",
"owner": "oxalica",
"repo": "rust-overlay",
"rev": "d52f2a4c103a0acf09ded857b9e2519ae2360e59",
"type": "github"
},
"original": {
"owner": "oxalica",
"repo": "rust-overlay",
"type": "github"
}
},
"systems": {
"locked": {
"lastModified": 1681028828,
"narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=",
"owner": "nix-systems",
"repo": "default",
"rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e",
"type": "github"
},
"original": {
"owner": "nix-systems",
"repo": "default",
"type": "github"
}
},
"systems_2": {
"locked": {
"lastModified": 1681028828,
"narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=",
"owner": "nix-systems",
"repo": "default",
"rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e",
"type": "github"
},
"original": {
"owner": "nix-systems",
"repo": "default",
"type": "github"
}
},
"wasmcloud": {
"inputs": {
"nixify": "nixify",
"nixlib": "nixlib_3",
"wit-deps": "wit-deps"
},
"locked": {
"lastModified": 1731409523,
"narHash": "sha256-Q/BnuJaMyJfY+p9VpdyBWtRjEo4TdRvFMMhfdDFj6cU=",
"owner": "wasmCloud",
"repo": "wasmCloud",
"rev": "579455058513b907c7df4a4ec13728f83c6b782b",
"type": "github"
},
"original": {
"owner": "wasmCloud",
"ref": "wash-cli-v0.37.0",
"repo": "wasmCloud",
"type": "github"
}
},
"wit-deps": {
"inputs": {
"nix-log": "nix-log_2",
"nixify": [
"wasmcloud",
"nixify"
],
"nixlib": [
"wasmcloud",
"nixlib"
]
},
"locked": {
"lastModified": 1727963723,
"narHash": "sha256-urAGMGMH5ousEeVTZ5AaLPfowXaYQoISNXiutV00iQo=",
"owner": "bytecodealliance",
"repo": "wit-deps",
"rev": "eb7c84564acfe13a4197bb15052fd2e2b3d29775",
"type": "github"
},
"original": {
"owner": "bytecodealliance",
"ref": "v0.4.0",
"repo": "wit-deps",
"type": "github"
}
}
},
"root": "root",
"version": 7
}

flake.nix (new file, 264 lines)

@@ -0,0 +1,264 @@
{
nixConfig.extra-substituters =
[ "https://wasmcloud.cachix.org" "https://crane.cachix.org" ];
nixConfig.extra-trusted-public-keys = [
"wasmcloud.cachix.org-1:9gRBzsKh+x2HbVVspreFg/6iFRiD4aOcUQfXVDl3hiM="
"crane.cachix.org-1:8Scfpmn9w+hGdXH/Q9tTLiYAE/2dnJYRJP7kl80GuRk="
];
description = "A flake for building and running wadm";
inputs = {
nixpkgs.url = "github:NixOS/nixpkgs/release-24.11";
crane.url = "github:ipetkov/crane";
fenix = {
url = "github:nix-community/fenix";
inputs.nixpkgs.follows = "nixpkgs";
inputs.rust-analyzer-src.follows = "";
};
flake-utils.url = "github:numtide/flake-utils";
advisory-db = {
url = "github:rustsec/advisory-db";
flake = false;
};
# The wash CLI flag is always after the latest host release tag we want
wasmcloud.url = "github:wasmCloud/wasmCloud/wash-cli-v0.37.0";
};
outputs =
{ self, nixpkgs, crane, fenix, flake-utils, advisory-db, wasmcloud, ... }:
flake-utils.lib.eachDefaultSystem (system:
let
pkgs = nixpkgs.legacyPackages.${system};
inherit (pkgs) lib;
craneLib = crane.mkLib pkgs;
src = craneLib.cleanCargoSource ./.;
# Common arguments can be set here to avoid repeating them later
commonArgs = {
inherit src;
strictDeps = true;
buildInputs = [
# Add additional build inputs here
] ++ lib.optionals pkgs.stdenv.isDarwin [
# Additional darwin specific inputs can be set here if needed
];
# Additional environment variables can be set directly here if needed
# MY_CUSTOM_VAR = "some value";
};
craneLibLLvmTools = craneLib.overrideToolchain
(fenix.packages.${system}.complete.withComponents [
"cargo"
"llvm-tools"
"rustc"
]);
# Get the lock file for filtering
rawLockFile = builtins.fromTOML (builtins.readFile ./Cargo.lock);
# Filter out the workspace members
filteredLockFile = rawLockFile // {
package = builtins.filter (x: !lib.strings.hasPrefix "wadm" x.name)
rawLockFile.package;
};
cargoVendorDir =
craneLib.vendorCargoDeps { cargoLockParsed = filteredLockFile; };
cargoLock = craneLib.writeTOML "Cargo.lock" filteredLockFile;
# Build *just* the cargo dependencies (of the entire workspace); we skip building
# anything else in the crate so that changing workspace crates does not force a
# rebuild of the cached dependency artifacts
cargoArtifacts = let
commonArgs' = removeAttrs commonArgs [ "src" ];
# Get the manifest file for filtering
rawManifestFile = builtins.fromTOML (builtins.readFile ./Cargo.toml);
# Filter out the workspace members from manifest
filteredManifestFile = with lib;
let
filterWadmAttrs =
filterAttrs (name: _: !strings.hasPrefix "wadm" name);
workspace = removeAttrs rawManifestFile.workspace [ "members" ];
in rawManifestFile // {
workspace = workspace // {
dependencies = filterWadmAttrs workspace.dependencies;
package = workspace.package // {
# pin version to avoid rebuilds on bumps
version = "0.0.0";
};
};
dependencies = filterWadmAttrs rawManifestFile.dependencies;
dev-dependencies =
filterWadmAttrs rawManifestFile.dev-dependencies;
build-dependencies =
filterWadmAttrs rawManifestFile.build-dependencies;
};
cargoToml = craneLib.writeTOML "Cargo.toml" filteredManifestFile;
dummySrc = craneLib.mkDummySrc {
src = pkgs.runCommand "wadm-dummy-src" { } ''
mkdir -p $out
cp --recursive --no-preserve=mode,ownership ${src}/. -t $out
cp ${cargoToml} $out/Cargo.toml
'';
};
args = commonArgs' // {
inherit cargoLock cargoToml cargoVendorDir dummySrc;
cargoExtraArgs = ""; # disable `--locked` passed by default by crane
};
in craneLib.buildDepsOnly args;
individualCrateArgs = commonArgs // {
inherit (craneLib.crateNameFromCargoToml { inherit src; }) version;
# TODO(thomastaylor312) We run unit tests here and e2e tests externally. The nextest step
# wasn't letting me pass in the fileset
doCheck = true;
};
fileSetForCrate = lib.fileset.toSource {
root = ./.;
fileset = lib.fileset.unions [
./Cargo.toml
./Cargo.lock
./tests
./oam
(craneLib.fileset.commonCargoSources ./crates/wadm)
(craneLib.fileset.commonCargoSources ./crates/wadm-client)
(craneLib.fileset.commonCargoSources ./crates/wadm-types)
];
};
# Build the top-level crates of the workspace as individual derivations.
# This allows consumers to depend on (and build) only what they need.
# It is also possible to build the entire workspace as a single derivation,
# so how to organize things is left up to you
#
# Note that the cargo workspace must define `workspace.members` using wildcards,
# otherwise, omitting a crate (like we do below) will result in errors since
# cargo won't be able to find the sources for all members.
# TODO(thomastaylor312) I tried using `doInstallCargoArtifacts` and passing in things to the
# next derivations as the `cargoArtifacts`, but that ended up always building things twice
# rather than caching. We should look into it more and see if there's a way to make it work.
wadm-lib = craneLib.cargoBuild (individualCrateArgs // {
inherit cargoArtifacts;
pname = "wadm";
cargoExtraArgs = "-p wadm";
src = fileSetForCrate;
});
wadm = craneLib.buildPackage (individualCrateArgs // {
inherit cargoArtifacts;
pname = "wadm-cli";
cargoExtraArgs = "--bin wadm";
src = fileSetForCrate;
});
wadm-client = craneLib.cargoBuild (individualCrateArgs // {
inherit cargoArtifacts;
pname = "wadm-client";
cargoExtraArgs = "-p wadm-client";
src = fileSetForCrate;
});
wadm-types = craneLib.cargoBuild (individualCrateArgs // {
inherit cargoArtifacts;
pname = "wadm-types";
cargoExtraArgs = "-p wadm-types";
src = fileSetForCrate;
});
in {
checks = {
# Build the crates as part of `nix flake check` for convenience
inherit wadm wadm-client wadm-types;
# Run clippy (and deny all warnings) on the workspace source,
# again, reusing the dependency artifacts from above.
#
# Note that this is done as a separate derivation so that
# we can block the CI if there are issues here, but not
# prevent downstream consumers from building our crate by itself.
workspace-clippy = craneLib.cargoClippy (commonArgs // {
inherit cargoArtifacts;
cargoClippyExtraArgs = "--all-targets -- --deny warnings";
});
workspace-doc =
craneLib.cargoDoc (commonArgs // { inherit cargoArtifacts; });
# Check formatting
workspace-fmt = craneLib.cargoFmt { inherit src; };
# Audit dependencies
workspace-audit = craneLib.cargoAudit { inherit src advisory-db; };
# Audit licenses
# my-workspace-deny = craneLib.cargoDeny {
# inherit src;
# };
# TODO: the wadm e2e tests use docker compose and things like `wash up` to test things
# (which accesses network currently). We would need to fix those tests to do something
# else to work properly. The low hanging fruit here would be to use the built artifact
# in the e2e tests so we can output those binaries from the nix build and then just
# run the tests from a separate repo. We could also do something like outputting the
# prebuilt artifacts into the current directory to save on build time. But that is
# left for later
runE2ETests = pkgs.runCommand "e2e-tests" {
nativeBuildInputs = with pkgs;
[
nats-server
# wasmcloud.wasmcloud
];
} ''
touch $out
'';
};
packages = {
inherit wadm wadm-client wadm-types wadm-lib;
default = wadm;
} // lib.optionalAttrs (!pkgs.stdenv.isDarwin) {
workspace-llvm-coverage = craneLibLLvmTools.cargoLlvmCov
(commonArgs // { inherit cargoArtifacts; });
};
apps = {
wadm = flake-utils.lib.mkApp { drv = wadm; };
default = flake-utils.lib.mkApp { drv = wadm; };
};
devShells.default = craneLib.devShell {
# Inherit inputs from checks.
checks = self.checks.${system};
RUST_SRC_PATH =
"${pkgs.rust.packages.stable.rustPlatform.rustLibSrc}";
# Extra inputs can be added here; cargo and rustc are provided by default.
packages = [
pkgs.nats-server
pkgs.natscli
pkgs.docker
pkgs.git
wasmcloud.outputs.packages.${system}.default
];
};
});
}

oam.schema.json

@@ -0,0 +1,567 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "Manifest",
"description": "Manifest file based on the Open Application Model (OAM) specification for declaratively managing wasmCloud applications",
"type": "object",
"required": [
"apiVersion",
"kind",
"metadata",
"spec"
],
"properties": {
"apiVersion": {
"description": "The OAM version of the manifest",
"type": "string"
},
"kind": {
"description": "The kind or type of manifest described by the spec",
"type": "string"
},
"metadata": {
"description": "Metadata describing the manifest",
"allOf": [
{
"$ref": "#/definitions/Metadata"
}
]
},
"spec": {
"description": "The specification for this manifest",
"allOf": [
{
"$ref": "#/definitions/Specification"
}
]
}
},
"additionalProperties": false,
"definitions": {
"CapabilityProperties": {
"type": "object",
"properties": {
"application": {
"description": "Information to locate a component within a shared application. Cannot be specified if the image is specified.",
"anyOf": [
{
"$ref": "#/definitions/SharedApplicationComponentProperties"
},
{
"type": "null"
}
]
},
"config": {
"description": "Named configuration to pass to the provider. The merged set of configuration will be passed to the provider at runtime using the provider SDK's `init()` function.",
"type": "array",
"items": {
"$ref": "#/definitions/ConfigProperty"
}
},
"id": {
"description": "The component ID to use for this provider. If not supplied, it will be generated as a combination of the [Metadata::name] and the image reference.",
"type": [
"string",
"null"
]
},
"image": {
"description": "The image reference to use. Required unless the component is a shared component that is defined in another shared application.",
"type": [
"string",
"null"
]
},
"secrets": {
"description": "Named secret references to pass to the provider. The provider will be able to retrieve these values at runtime using `wasmcloud:secrets/store`.",
"type": "array",
"items": {
"$ref": "#/definitions/SecretProperty"
}
}
},
"additionalProperties": false
},
"Component": {
"description": "A component definition",
"type": "object",
"oneOf": [
{
"type": "object",
"required": [
"properties",
"type"
],
"properties": {
"properties": {
"$ref": "#/definitions/ComponentProperties"
},
"type": {
"type": "string",
"enum": [
"component"
]
}
}
},
{
"type": "object",
"required": [
"properties",
"type"
],
"properties": {
"properties": {
"$ref": "#/definitions/CapabilityProperties"
},
"type": {
"type": "string",
"enum": [
"capability"
]
}
}
}
],
"required": [
"name"
],
"properties": {
"name": {
"description": "The name of this component",
"type": "string"
},
"traits": {
"description": "A list of various traits assigned to this component",
"type": [
"array",
"null"
],
"items": {
"$ref": "#/definitions/Trait"
}
}
}
},
"ComponentProperties": {
"type": "object",
"properties": {
"application": {
"description": "Information to locate a component within a shared application. Cannot be specified if the image is specified.",
"anyOf": [
{
"$ref": "#/definitions/SharedApplicationComponentProperties"
},
{
"type": "null"
}
]
},
"config": {
"description": "Named configuration to pass to the component. The component will be able to retrieve these values at runtime using `wasi:runtime/config`.",
"type": "array",
"items": {
"$ref": "#/definitions/ConfigProperty"
}
},
"id": {
"description": "The component ID to use for this component. If not supplied, it will be generated as a combination of the [Metadata::name] and the image reference.",
"type": [
"string",
"null"
]
},
"image": {
"description": "The image reference to use. Required unless the component is a shared component that is defined in another shared application.",
"type": [
"string",
"null"
]
},
"secrets": {
"description": "Named secret references to pass to the component. The component will be able to retrieve these values at runtime using `wasmcloud:secrets/store`.",
"type": "array",
"items": {
"$ref": "#/definitions/SecretProperty"
}
}
},
"additionalProperties": false
},
"ConfigDefinition": {
"type": "object",
"properties": {
"config": {
"type": "array",
"items": {
"$ref": "#/definitions/ConfigProperty"
}
},
"secrets": {
"type": "array",
"items": {
"$ref": "#/definitions/SecretProperty"
}
}
}
},
"ConfigProperty": {
"description": "Properties for the config list associated with components, providers, and links\n\n## Usage Defining a config block, like so: ```yaml source_config: - name: \"external-secret-kv\" - name: \"default-port\" properties: port: \"8080\" ```\n\nWill result in two config scalers being created, one with the name `external-secret-kv` and one with the name `default-port`. Wadm will not resolve collisions with configuration names between manifests.",
"type": "object",
"required": [
"name"
],
"properties": {
"name": {
"description": "Name of the config to ensure exists",
"type": "string"
},
"properties": {
"description": "Optional properties to put with the configuration. If the properties are omitted in the manifest, wadm will assume that the configuration is externally managed and will not attempt to create it, only reporting the status as failed if not found.",
"type": [
"object",
"null"
],
"additionalProperties": {
"type": "string"
}
}
},
"additionalProperties": false
},
"LinkProperty": {
"description": "Properties for links",
"type": "object",
"required": [
"interfaces",
"namespace",
"package",
"target"
],
"properties": {
"interfaces": {
"description": "WIT interfaces for the link",
"type": "array",
"items": {
"type": "string"
}
},
"name": {
"description": "The name of this link",
"type": [
"string",
"null"
]
},
"namespace": {
"description": "WIT namespace for the link",
"type": "string"
},
"package": {
"description": "WIT package for the link",
"type": "string"
},
"source": {
"description": "Configuration to apply to the source of the link",
"anyOf": [
{
"$ref": "#/definitions/ConfigDefinition"
},
{
"type": "null"
}
]
},
"source_config": {
"deprecated": true,
"writeOnly": true,
"type": [
"array",
"null"
],
"items": {
"$ref": "#/definitions/ConfigProperty"
}
},
"target": {
"description": "Configuration to apply to the target of the link",
"allOf": [
{
"$ref": "#/definitions/TargetConfig"
}
]
},
"target_config": {
"deprecated": true,
"writeOnly": true,
"type": [
"array",
"null"
],
"items": {
"$ref": "#/definitions/ConfigProperty"
}
}
},
"additionalProperties": false
},
"Metadata": {
"description": "The metadata describing the manifest",
"type": "object",
"required": [
"annotations",
"name"
],
"properties": {
"annotations": {
"description": "Optional data for annotating this manifest, see <https://github.com/oam-dev/spec/blob/master/metadata.md#annotations-format>",
"type": "object",
"additionalProperties": {
"type": "string"
}
},
"labels": {
"description": "Optional data for labeling this manifest, see <https://github.com/oam-dev/spec/blob/master/metadata.md#label-format>",
"type": "object",
"additionalProperties": {
"type": "string"
}
},
"name": {
"description": "The name of the manifest. This must be unique per lattice",
"type": "string"
}
}
},
"Policy": {
"description": "A policy definition",
"type": "object",
"required": [
"name",
"properties",
"type"
],
"properties": {
"name": {
"description": "The name of this policy",
"type": "string"
},
"properties": {
"description": "The properties for this policy",
"type": "object",
"additionalProperties": {
"type": "string"
}
},
"type": {
"description": "The type of the policy",
"type": "string"
}
}
},
"SecretProperty": {
"type": "object",
"required": [
"name",
"properties"
],
"properties": {
"name": {
"description": "The name of the secret. This is used as a reference by the component or capability to get the secret value as a resource.",
"type": "string"
},
"properties": {
"description": "The properties of the secret that indicate how to retrieve the secret value from a secrets backend and which backend to actually query.",
"allOf": [
{
"$ref": "#/definitions/SecretSourceProperty"
}
]
}
}
},
"SecretSourceProperty": {
"type": "object",
"required": [
"key",
"policy"
],
"properties": {
"field": {
"description": "The field to use for retrieving the secret from the backend. This is optional and can be used to retrieve a specific field from a secret.",
"type": [
"string",
"null"
]
},
"key": {
"description": "The key to use for retrieving the secret from the backend.",
"type": "string"
},
"policy": {
"description": "The policy to use for retrieving the secret.",
"type": "string"
},
"version": {
"description": "The version of the secret to retrieve. If not supplied, the latest version will be used.",
"type": [
"string",
"null"
]
}
}
},
"SharedApplicationComponentProperties": {
"type": "object",
"required": [
"component",
"name"
],
"properties": {
"component": {
"description": "The name of the component in the shared application",
"type": "string"
},
"name": {
"description": "The name of the shared application",
"type": "string"
}
}
},
"Specification": {
"description": "A representation of an OAM specification",
"type": "object",
"required": [
"components"
],
"properties": {
"components": {
"description": "The list of components for describing an application",
"type": "array",
"items": {
"$ref": "#/definitions/Component"
}
},
"policies": {
"description": "The list of policies describing an application. This is for providing application-wide settings such as configuration for a secrets backend, how to render Kubernetes services, etc. It can be omitted if no policies are needed for an application.",
"type": "array",
"items": {
"$ref": "#/definitions/Policy"
}
}
}
},
"Spread": {
"description": "Configuration for various spreading requirements",
"type": "object",
"required": [
"name",
"requirements"
],
"properties": {
"name": {
"description": "The name of this spread requirement",
"type": "string"
},
"requirements": {
"description": "An arbitrary map of labels to match on for scaling requirements",
"type": "object",
"additionalProperties": {
"type": "string"
}
},
"weight": {
"description": "An optional weight for this spread. Higher weights are given more precedence",
"type": [
"integer",
"null"
],
"format": "uint",
"minimum": 0.0
}
},
"additionalProperties": false
},
"SpreadScalerProperty": {
"description": "Properties for spread scalers",
"type": "object",
"required": [
"instances"
],
"properties": {
"instances": {
"description": "Number of instances to spread across matching requirements",
"type": "integer",
"format": "uint",
"minimum": 0.0
},
"spread": {
"description": "Requirements for spreading those instances",
"type": "array",
"items": {
"$ref": "#/definitions/Spread"
}
}
},
"additionalProperties": false
},
"TargetConfig": {
"type": "object",
"required": [
"name"
],
"properties": {
"config": {
"type": "array",
"items": {
"$ref": "#/definitions/ConfigProperty"
}
},
"name": {
"description": "The target this link applies to. This should be the name of a component in the manifest",
"type": "string"
},
"secrets": {
"type": "array",
"items": {
"$ref": "#/definitions/SecretProperty"
}
}
}
},
"Trait": {
"type": "object",
"required": [
"properties",
"type"
],
"properties": {
"properties": {
"description": "The properties of this trait",
"allOf": [
{
"$ref": "#/definitions/TraitProperty"
}
]
},
"type": {
"description": "The type of trait specified. This should be a unique string for the type of scaler. As we plan on supporting custom scalers, these traits are not enumerated",
"type": "string"
}
},
"additionalProperties": false
},
"TraitProperty": {
"description": "Properties for defining traits",
"anyOf": [
{
"$ref": "#/definitions/LinkProperty"
},
{
"$ref": "#/definitions/SpreadScalerProperty"
},
true
]
}
}
}

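The schema above requires `apiVersion`, `kind`, `metadata`, and `spec` at the top level, and `annotations` and `name` under `metadata`. A minimal sketch of that required-field check in plain Python (no validator library; the sample manifest values are illustrative):

```python
# Required fields taken from the schema above; the sample manifest is illustrative.
REQUIRED_TOP_LEVEL = ["apiVersion", "kind", "metadata", "spec"]
REQUIRED_METADATA = ["annotations", "name"]

def check_manifest(manifest: dict) -> list:
    """Return a list of missing required fields (top-level and metadata only)."""
    missing = [k for k in REQUIRED_TOP_LEVEL if k not in manifest]
    if "metadata" in manifest:
        missing += [
            f"metadata.{k}" for k in REQUIRED_METADATA
            if k not in manifest["metadata"]
        ]
    return missing

sample = {
    "apiVersion": "core.oam.dev/v1beta1",
    "kind": "Application",
    "metadata": {"name": "my-app", "annotations": {}},
    "spec": {"components": []},
}
print(check_manifest(sample))  # → []
```

For full validation (the `oneOf` component variants, `additionalProperties: false`, and the nested definitions), a draft-07 JSON Schema validator should be pointed at `oam.schema.json` itself rather than re-implementing the rules.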
Some files were not shown because too many files have changed in this diff.