Compare commits

...

64 Commits

Author SHA1 Message Date
Yaron Schneider 22536a9640
Merge pull request #1438 from KentHsu/fix-perfsprint-linter-error
fix perfsprint linter error
2025-07-02 10:56:08 -07:00
KentHsu fff4c9158f fix linter error
Signed-off-by: KentHsu <chiahaohsu9@gmail.com>
2025-05-13 22:09:22 +08:00
Kent (Chia-Hao), Hsu 3198d9ca7e
Merge branch 'master' into fix-perfsprint-linter-error 2025-05-13 22:07:20 +08:00
Yaron Schneider b51eab0d84
Merge pull request #1509 from twinguy/master
Add detection for incompatible flags with --run-file
2025-05-05 18:59:17 -07:00
twinguy b31a9f2c56
chore: added go mod tidy to clear up pipeline issues
Signed-off-by: twinguy <twinguy17@gmail.com>
2025-04-14 21:37:58 -05:00
twinguy c939814420
Use a compatible-flags approach instead of an incompatible-flags one
Signed-off-by: twinguy <twinguy17@gmail.com>
2025-04-11 21:43:37 -05:00
Yaron Schneider a85b8132db
Merge branch 'master' into fix-perfsprint-linter-error 2025-03-29 00:40:15 +03:00
twinguy 5da3528524
Remove deprecated tests
Signed-off-by: twinguy <twinguy17@gmail.com>
2025-03-24 16:56:21 -05:00
twinguy e7c1a322d7
Refactor warning message for incompatible flags in --run-file
Signed-off-by: twinguy <twinguy17@gmail.com>
2025-03-24 16:32:13 -05:00
twinguy ce0b9fb4d9
Add detection for incompatible flags with --run-file
Signed-off-by: twinguy <twinguy17@gmail.com>
2025-03-23 23:39:44 -05:00
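
A minimal, hypothetical sketch of such a check with cobra/pflag (the flag library this CLI builds on); the allow-list and warning text below are illustrative and do not mirror the actual #1509 code:

```go
package main

import (
	"fmt"

	"github.com/spf13/cobra"
	"github.com/spf13/pflag"
)

// compatibleWithRunFile is an illustrative allow-list of flags that still make
// sense next to --run-file; the real list lives in the CLI and differs.
var compatibleWithRunFile = map[string]bool{
	"run-file":   true,
	"kubernetes": true,
	"log":        true,
}

// warnIncompatibleFlags reports any explicitly set flag that is not on the
// allow-list when --run-file is used.
func warnIncompatibleFlags(cmd *cobra.Command) {
	if !cmd.Flags().Changed("run-file") {
		return
	}
	cmd.Flags().Visit(func(f *pflag.Flag) {
		if !compatibleWithRunFile[f.Name] {
			fmt.Printf("warning: flag --%s is ignored when --run-file is set\n", f.Name)
		}
	})
}

func main() {
	cmd := &cobra.Command{
		Use: "run",
		Run: func(cmd *cobra.Command, _ []string) { warnIncompatibleFlags(cmd) },
	}
	cmd.Flags().StringP("run-file", "f", "", "path to a multi-app run file")
	cmd.Flags().String("app-id", "", "application id (single-app run only)")
	cmd.SetArgs([]string{"--run-file", "dapr.yaml", "--app-id", "myapp"})
	_ = cmd.Execute()
}
```
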
Yaron Schneider 29f8962111
Merge release 1.15 into master (#1499)
* use non-deprecated flags in List operation (#1478)

Signed-off-by: yaron2 <schneider.yaron@live.com>

* Scheduler: set broadcast address to localhost:50006 in selfhosted (#1480)

* Scheduler: set broadcast address to localhost:50006 in selfhosted

Signed-off-by: joshvanl <me@joshvanl.dev>

* Set scheduler override flag for edge and dev

Signed-off-by: joshvanl <me@joshvanl.dev>

---------

Signed-off-by: joshvanl <me@joshvanl.dev>

* Fix scheduler broadcast address for windows (#1481)

Signed-off-by: Anton Troshin <anton@diagrid.io>

* Remove deprecated flags (#1482)

* remove deprecated flags

Signed-off-by: yaron2 <schneider.yaron@live.com>

* update Dapr version in tests

Signed-off-by: yaron2 <schneider.yaron@live.com>

---------

Signed-off-by: yaron2 <schneider.yaron@live.com>

* Fix daprsystem configuration retrieval when renewing certificates (#1486)

The issue was found when a similar resource that uses the name "configurations" was installed in k8s.
In this case Knative's "configurations.serving.knative.dev/v1" was the last in the list and the command returned the error
`Error from server (NotFound): configurations.serving.knative.dev "daprsystem" not found`

Signed-off-by: Anton Troshin <anton@diagrid.io>

* fix: arguments accept units (#1490)

* fix: arguments accept units
`max-body-size` and `read-buffer-size` now accept units as defined in the docs.

Fixes #1489

Signed-off-by: Mike Nguyen <hey@mike.ee>

* chore: gofumpt

Signed-off-by: Mike Nguyen <hey@mike.ee>

* refactor: modify logic to comply with vetting

Signed-off-by: Mike Nguyen <hey@mike.ee>

* chore: gofumpt -w .

Signed-off-by: Mike Nguyen <hey@mike.ee>

* refactor: set defaults
`max-body-size` is defaulted to 4Mi
`request-buffer-size` is defaulted to 4Ki

This is in line with the runtime.

Signed-off-by: Mike Nguyen <hey@mike.ee>

* fix: set defaults in run and annotate

Signed-off-by: Mike Nguyen <hey@mike.ee>

* chore: gofumpt

Signed-off-by: Mike Nguyen <hey@mike.ee>

* refactor: exit with error rather than panic

Co-authored-by: Anton Troshin <troll.sic@gmail.com>
Signed-off-by: Mike Nguyen <hey@mike.ee>

---------

Signed-off-by: Mike Nguyen <hey@mike.ee>
Co-authored-by: Anton Troshin <troll.sic@gmail.com>

* Fix scheduler pod count for 1.15 version when testing master and latest (#1492)

Signed-off-by: Anton Troshin <anton@diagrid.io>

* Fix podman CI (#1493)

* Fix podman CI
Update to podman 5.4.0

Signed-off-by: Anton Troshin <anton@diagrid.io>

* fix --cpus flag

Signed-off-by: Anton Troshin <anton@diagrid.io>

---------

Signed-off-by: Anton Troshin <anton@diagrid.io>

* Fix dapr upgrade command incorrectly detecting HA mode for new version 1.15 (#1494)

* Fix dapr upgrade command detecting HA mode for new version 1.15
The issue is that the scheduler by default uses 3 replicas, which incorrectly identified a non-HA install as HA.

Signed-off-by: Anton Troshin <anton@diagrid.io>

* Fix e2e

Signed-off-by: Anton Troshin <anton@diagrid.io>

---------

Signed-off-by: Anton Troshin <anton@diagrid.io>

* Fix scheduler address for dapr run with file on Windows (#1497)

Signed-off-by: Anton Troshin <anton@diagrid.io>

* release: test upgrade/downgrade for 1.13/1.14/1.15 + mariner (#1491)

* release: test upgrade/downgrade for 1.13/1.14/1.15 + mariner

Signed-off-by: Mike Nguyen <hey@mike.ee>

* fix: version skews

Co-authored-by: Anton Troshin <troll.sic@gmail.com>
Signed-off-by: Mike Nguyen <hey@mike.ee>

* Update tests/e2e/upgrade/upgrade_test.go

Accepted

Co-authored-by: Anton Troshin <troll.sic@gmail.com>
Signed-off-by: Yaron Schneider <schneider.yaron@live.com>

* Update tests/e2e/upgrade/upgrade_test.go

Co-authored-by: Anton Troshin <troll.sic@gmail.com>
Signed-off-by: Yaron Schneider <schneider.yaron@live.com>

* Fix downgrade issue from 1.15 by deleting previous version scheduler pods
Update 1.15 RC to latest RC.18

Signed-off-by: Anton Troshin <anton@diagrid.io>

* Fix downgrade 1.15 to 1.13 scenario with 0 scheduler pods

Signed-off-by: Anton Troshin <anton@diagrid.io>

* increase update test timeout to 60m and update latest version to 1.15

Signed-off-by: Anton Troshin <anton@diagrid.io>

* fix httpendpoint tests cleanup and checks

Signed-off-by: Anton Troshin <anton@diagrid.io>

* make sure each matrix leg runs the appropriate tests; previously every leg ran the same tests

Signed-off-by: Anton Troshin <anton@diagrid.io>

* skip TestKubernetesRunFile on HA

Signed-off-by: Anton Troshin <anton@diagrid.io>

* fix skip TestKubernetesRunFile on HA

Signed-off-by: Anton Troshin <anton@diagrid.io>

* update to latest dapr 1.15.2

Signed-off-by: Anton Troshin <anton@diagrid.io>

* add logs when waiting for pod deletion

Signed-off-by: Anton Troshin <anton@diagrid.io>

---------

Signed-off-by: Mike Nguyen <hey@mike.ee>
Signed-off-by: Yaron Schneider <schneider.yaron@live.com>
Signed-off-by: Anton Troshin <anton@diagrid.io>
Co-authored-by: Anton Troshin <anton@diagrid.io>
Co-authored-by: Yaron Schneider <schneider.yaron@live.com>
Co-authored-by: Anton Troshin <troll.sic@gmail.com>

* Fix dapr init test latest version retrieval (#1500)

Lint

Signed-off-by: Anton Troshin <anton@diagrid.io>

* Fix downgrade stuck (#1501)

* Fix goroutine channel leaks and ensure proper cleanup in tests

Signed-off-by: Anton Troshin <anton@diagrid.io>

* Add artificial delay before deleting scheduler pods during downgrade

Signed-off-by: Anton Troshin <anton@diagrid.io>

* Add timeout to helm upgrade tests; they sometimes get stuck for 5+ minutes

Signed-off-by: Anton Troshin <anton@diagrid.io>

* bump helm.sh/helm/v3 to v3.17.1

Signed-off-by: Anton Troshin <anton@diagrid.io>

---------

Signed-off-by: Anton Troshin <anton@diagrid.io>

---------

Signed-off-by: yaron2 <schneider.yaron@live.com>
Signed-off-by: joshvanl <me@joshvanl.dev>
Signed-off-by: Anton Troshin <anton@diagrid.io>
Signed-off-by: Mike Nguyen <hey@mike.ee>
Signed-off-by: Yaron Schneider <schneider.yaron@live.com>
Co-authored-by: Yaron Schneider <schneider.yaron@live.com>
Co-authored-by: Josh van Leeuwen <me@joshvanl.dev>
Co-authored-by: Mike Nguyen <hey@mike.ee>
2025-03-13 18:36:13 -07:00
Anton Troshin 16aeac5701
Merge branch 'master' into merge-release-1.15-into-master 2025-03-13 18:01:03 -05:00
Anton Troshin 16cc1d1b59
Fix downgrade stuck (#1501)
* Fix goroutine channel leaks and ensure proper cleanup in tests

Signed-off-by: Anton Troshin <anton@diagrid.io>

* Add artificial delay before deleting scheduler pods during downgrade

Signed-off-by: Anton Troshin <anton@diagrid.io>

* Add timeout to helm upgrade tests; they sometimes get stuck for 5+ minutes

Signed-off-by: Anton Troshin <anton@diagrid.io>

* bump helm.sh/helm/v3 to v3.17.1

Signed-off-by: Anton Troshin <anton@diagrid.io>

---------

Signed-off-by: Anton Troshin <anton@diagrid.io>
2025-03-12 18:27:40 -07:00
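
The goroutine channel leaks mentioned here are the usual Go failure mode where a producer goroutine blocks forever on an unread channel. A generic, context-based sketch of the cleanup pattern (not the actual CLI test code):

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// watch starts a producer goroutine that exits when ctx is cancelled instead
// of blocking forever on an unread channel (the classic goroutine leak).
func watch(ctx context.Context) <-chan string {
	results := make(chan string)
	go func() {
		defer close(results) // lets the consumer's range loop terminate
		for i := 0; ; i++ {
			select {
			case <-ctx.Done():
				return // cleanup path: the goroutine exits with the caller
			case results <- fmt.Sprintf("tick %d", i):
				time.Sleep(10 * time.Millisecond)
			}
		}
	}()
	return results
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 50*time.Millisecond)
	defer cancel() // in a test, this would sit in t.Cleanup
	for msg := range watch(ctx) {
		fmt.Println(msg)
	}
}
```
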
Anton Troshin ecc4ea4953
Fix dapr init test latest version retrieval (#1500)
Lint

Signed-off-by: Anton Troshin <anton@diagrid.io>
2025-03-07 13:42:23 -08:00
Mike Nguyen 4c1c26f2b6
release: test upgrade/downgrade for 1.13/1.14/1.15 + mariner (#1491)
* release: test upgrade/downgrade for 1.13/1.14/1.15 + mariner

Signed-off-by: Mike Nguyen <hey@mike.ee>

* fix: version skews

Co-authored-by: Anton Troshin <troll.sic@gmail.com>
Signed-off-by: Mike Nguyen <hey@mike.ee>

* Update tests/e2e/upgrade/upgrade_test.go

Accepted

Co-authored-by: Anton Troshin <troll.sic@gmail.com>
Signed-off-by: Yaron Schneider <schneider.yaron@live.com>

* Update tests/e2e/upgrade/upgrade_test.go

Co-authored-by: Anton Troshin <troll.sic@gmail.com>
Signed-off-by: Yaron Schneider <schneider.yaron@live.com>

* Fix downgrade issue from 1.15 by deleting previous version scheduler pods
Update 1.15 RC to latest RC.18

Signed-off-by: Anton Troshin <anton@diagrid.io>

* Fix downgrade 1.15 to 1.13 scenario with 0 scheduler pods

Signed-off-by: Anton Troshin <anton@diagrid.io>

* increase update test timeout to 60m and update latest version to 1.15

Signed-off-by: Anton Troshin <anton@diagrid.io>

* fix httpendpoint tests cleanup and checks

Signed-off-by: Anton Troshin <anton@diagrid.io>

* make sure each matrix leg runs the appropriate tests; previously every leg ran the same tests

Signed-off-by: Anton Troshin <anton@diagrid.io>

* skip TestKubernetesRunFile on HA

Signed-off-by: Anton Troshin <anton@diagrid.io>

* fix skip TestKubernetesRunFile on HA

Signed-off-by: Anton Troshin <anton@diagrid.io>

* update to latest dapr 1.15.2

Signed-off-by: Anton Troshin <anton@diagrid.io>

* add logs when waiting for pod deletion

Signed-off-by: Anton Troshin <anton@diagrid.io>

---------

Signed-off-by: Mike Nguyen <hey@mike.ee>
Signed-off-by: Yaron Schneider <schneider.yaron@live.com>
Signed-off-by: Anton Troshin <anton@diagrid.io>
Co-authored-by: Anton Troshin <anton@diagrid.io>
Co-authored-by: Yaron Schneider <schneider.yaron@live.com>
Co-authored-by: Anton Troshin <troll.sic@gmail.com>
2025-03-05 17:19:56 -08:00
Anton Troshin a0921c7820
Fix scheduler address for dapr run with file on Windows (#1497)
Signed-off-by: Anton Troshin <anton@diagrid.io>
2025-03-05 17:18:23 -08:00
Anton Troshin 6c9bcc6dcf
Fix dapr upgrade command incorrectly detecting HA mode for new version 1.15 (#1494)
* Fix dapr upgrade command detecting HA mode for new version 1.15
The issue is that the scheduler by default uses 3 replicas, which incorrectly identified a non-HA install as HA.

Signed-off-by: Anton Troshin <anton@diagrid.io>

* Fix e2e

Signed-off-by: Anton Troshin <anton@diagrid.io>

---------

Signed-off-by: Anton Troshin <anton@diagrid.io>
2025-02-26 10:08:12 -08:00
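
A rough sketch of the corrected heuristic: replica counts still drive HA detection, but the scheduler, which defaults to 3 replicas even in a non-HA install, is excluded from the count. The service names and the exact rule below are assumptions rather than the CLI's actual upgrade code:

```go
package main

import "fmt"

// Replicas per control-plane service, as reported by the cluster (example data).
var deployments = map[string]int{
	"dapr-operator":         1,
	"dapr-sentry":           1,
	"dapr-placement-server": 1,
	"dapr-scheduler-server": 3, // defaults to 3 even in non-HA installs (per #1494)
}

// isHA ignores the scheduler, since its replica count is not an HA signal.
func isHA(replicas map[string]int) bool {
	for name, n := range replicas {
		if name == "dapr-scheduler-server" {
			continue
		}
		if n > 1 {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println("HA mode:", isHA(deployments)) // false
}
```
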
Anton Troshin 42b93eaa7a
Merge branch 'master' into fix-perfsprint-linter-error 2025-02-24 11:13:45 -06:00
Anton Troshin 98b9da9699
Fix scheduler pod count for 1.15 version when testing master and latest (#1488)
Signed-off-by: Anton Troshin <anton@diagrid.io>
2025-02-24 09:11:00 -08:00
Anton Troshin bd09c94b77
Fix podman CI (#1493)
* Fix podman CI
Update to podman 5.4.0

Signed-off-by: Anton Troshin <anton@diagrid.io>

* fix --cpus flag

Signed-off-by: Anton Troshin <anton@diagrid.io>

---------

Signed-off-by: Anton Troshin <anton@diagrid.io>
2025-02-24 09:07:42 -08:00
Anton Troshin 06f38ed9bc
Fix scheduler pod count for 1.15 version when testing master and latest (#1492)
Signed-off-by: Anton Troshin <anton@diagrid.io>
2025-02-21 19:11:41 -08:00
Mike Nguyen 0cd0585b64
fix: arguments accept units (#1490)
* fix: arguments accept units
`max-body-size` and `read-buffer-size` now accept units as defined in the docs.

Fixes #1489

Signed-off-by: Mike Nguyen <hey@mike.ee>

* chore: gofumpt

Signed-off-by: Mike Nguyen <hey@mike.ee>

* refactor: modify logic to comply with vetting

Signed-off-by: Mike Nguyen <hey@mike.ee>

* chore: gofumpt -w .

Signed-off-by: Mike Nguyen <hey@mike.ee>

* refactor: set defaults
`max-body-size` is defaulted to 4Mi
`request-buffer-size` is defaulted to 4Ki

This is in line with the runtime.

Signed-off-by: Mike Nguyen <hey@mike.ee>

* fix: set defaults in run and annotate

Signed-off-by: Mike Nguyen <hey@mike.ee>

* chore: gofumpt

Signed-off-by: Mike Nguyen <hey@mike.ee>

* refactor: exit with error rather than panic

Co-authored-by: Anton Troshin <troll.sic@gmail.com>
Signed-off-by: Mike Nguyen <hey@mike.ee>

---------

Signed-off-by: Mike Nguyen <hey@mike.ee>
Co-authored-by: Anton Troshin <troll.sic@gmail.com>
2025-02-21 08:11:11 -08:00
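
To illustrate what accepting units means for these flags, here is a self-contained parser for the binary suffixes mentioned in the commit (4Mi, 4Ki); the CLI's real implementation may instead rely on a shared resource-quantity library:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseSize converts values such as "4Mi" or "4Ki" into bytes. Only the binary
// suffixes named in the commit (plus Gi and a bare number) are handled here.
func parseSize(s string) (int64, error) {
	multipliers := map[string]int64{"Ki": 1 << 10, "Mi": 1 << 20, "Gi": 1 << 30, "": 1}
	for _, suffix := range []string{"Ki", "Mi", "Gi", ""} {
		if !strings.HasSuffix(s, suffix) {
			continue
		}
		n, err := strconv.ParseInt(strings.TrimSuffix(s, suffix), 10, 64)
		if err != nil {
			return 0, fmt.Errorf("invalid size %q: %w", s, err)
		}
		return n * multipliers[suffix], nil
	}
	return 0, fmt.Errorf("invalid size %q", s)
}

func main() {
	for _, v := range []string{"4Mi", "4Ki", "1048576"} { // 4Mi and 4Ki are the documented defaults
		b, err := parseSize(v)
		if err != nil {
			fmt.Println(err)
			continue
		}
		fmt.Printf("%s = %d bytes\n", v, b)
	}
}
```
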
Anton Troshin 728b329ce5
Merge branch 'master' into fix-perfsprint-linter-error 2025-02-18 10:28:56 -06:00
Anton Troshin a968b18f08
Fix daprsystem configuration retrieval when renewing certificates (#1486)
The issue found when similar resource were installed in k8s that use the name "configurations".
In this case the knative's "configurations.serving.knative.dev/v1" was the last in the list and the command returned the error
`Error from server (NotFound): configurations.serving.knative.dev "daprsystem" not found`

Signed-off-by: Anton Troshin <anton@diagrid.io>
2025-02-14 17:49:29 -08:00
Yaron Schneider ac6822ea69
Remove deprecated flags (#1482)
* remove deprecated flags

Signed-off-by: yaron2 <schneider.yaron@live.com>

* update Dapr version in tests

Signed-off-by: yaron2 <schneider.yaron@live.com>

---------

Signed-off-by: yaron2 <schneider.yaron@live.com>
2025-02-03 13:27:22 -08:00
Anton Troshin f8ee63c8f4
Fix scheduler broadcast address for windows (#1481)
Signed-off-by: Anton Troshin <anton@diagrid.io>
2025-01-31 12:34:07 -08:00
Josh van Leeuwen 8ce6b9fed9
Scheduler: set broadcast address to localhost:50006 in selfhosted (#1480)
* Scheduler: set broadcast address to localhost:50006 in selfhosted

Signed-off-by: joshvanl <me@joshvanl.dev>

* Set scheduler override flag for edge and dev

Signed-off-by: joshvanl <me@joshvanl.dev>

---------

Signed-off-by: joshvanl <me@joshvanl.dev>
2025-01-27 10:16:37 -08:00
Yaron Schneider 953c4a2a3f
use non-deprecated flags in List operation (#1478)
Signed-off-by: yaron2 <schneider.yaron@live.com>
2025-01-21 22:52:26 -08:00
Mike Nguyen c3f0fb2472
release: pin go to 1.23.5 (#1477)
Signed-off-by: mikeee <hey@mike.ee>
Co-authored-by: Yaron Schneider <schneider.yaron@live.com>
2025-01-20 16:50:10 -08:00
Mike Nguyen 3acfa7d7e4
fix: population of the schedulerhostaddress in self-hosted mode (#1475)
* fix: population of the schedulerhostaddress in self-hosted mode

The scheduler host address is pre-populated when using the self-hosted mode multi-app run similarly to the single app run.
Kubernetes multi-app run is not affected and you will still need to specify a scheduler host address.

Signed-off-by: mikeee <hey@mike.ee>

* chore: lint

Signed-off-by: mikeee <hey@mike.ee>

---------

Signed-off-by: mikeee <hey@mike.ee>
Co-authored-by: Yaron Schneider <schneider.yaron@live.com>
2025-01-20 14:18:31 -08:00
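
Combining the two scheduler-address changes above (localhost:50006 as the self-hosted default from #1480, and the pre-population described here), the intended behaviour can be sketched as follows; the function shape and the Kubernetes handling are assumptions for illustration:

```go
package main

import "fmt"

// defaultSchedulerHostAddress sketches the pre-population described above:
// self-hosted runs get a default scheduler address, Kubernetes runs do not.
// The port comes from the #1480 commit; the function shape is an assumption.
func defaultSchedulerHostAddress(kubernetes bool, userValue string) string {
	if userValue != "" {
		return userValue // an explicit flag always wins
	}
	if kubernetes {
		return "" // Kubernetes multi-app run still requires an explicit address
	}
	return "localhost:50006" // self-hosted default broadcast address
}

func main() {
	fmt.Println(defaultSchedulerHostAddress(false, "")) // localhost:50006
	fmt.Println(defaultSchedulerHostAddress(true, ""))  // empty: user must set it
}
```
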
Mike Nguyen 104849bc74
chore: remove gopkg / cleanup repo (#1415)
* chore: remove gopkg

Signed-off-by: mikeee <hey@mike.ee>

* chore: upgrade actions versions and remove explicit caching steps

Signed-off-by: mikeee <hey@mike.ee>

---------

Signed-off-by: mikeee <hey@mike.ee>
Co-authored-by: Yaron Schneider <schneider.yaron@live.com>
2025-01-15 05:28:22 -08:00
Anton Troshin 17f4785906
Update dependencies, Go version and address CVEs (#1474)
* Update dependencies, Go version and address CVEs

Signed-off-by: Anton Troshin <anton@diagrid.io>

* update golangci-lint version and list of disabled linters from dapr/dapr

Signed-off-by: Anton Troshin <anton@diagrid.io>

* adjust golangci-lint settings and fix lint issues

Signed-off-by: Anton Troshin <anton@diagrid.io>

* fix test

Signed-off-by: Anton Troshin <anton@diagrid.io>

---------

Signed-off-by: Anton Troshin <anton@diagrid.io>
2025-01-09 14:16:44 -08:00
Filinto Duran efe1d6c1e2
add image pull policy (#1462)
* add image pull policy

Signed-off-by: Filinto Duran <filinto@diagrid.io>

* add allowed values

Signed-off-by: Filinto Duran <filinto@diagrid.io>

* feedback refactor allowed values name

Signed-off-by: Filinto Duran <filinto@diagrid.io>

* add unit tests

Signed-off-by: Filinto Duran <filinto@diagrid.io>

* lint

Signed-off-by: Filinto Duran <filinto@diagrid.io>

* lint

Signed-off-by: Filinto Duran <filinto@diagrid.io>

* more lint

Signed-off-by: Filinto Duran <filinto@diagrid.io>

* more lint

Signed-off-by: Filinto Duran <filinto@diagrid.io>

---------

Signed-off-by: Filinto Duran <filinto@diagrid.io>
Co-authored-by: Anton Troshin <anton@diagrid.io>
Co-authored-by: Mike Nguyen <hey@mike.ee>
2025-01-07 09:09:00 -08:00
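
The allowed values presumably follow the Kubernetes ImagePullPolicy set (Always, Never, IfNotPresent); a minimal validation sketch under that assumption, not the actual CLI code:

```go
package main

import (
	"fmt"
	"os"
)

// allowedPullPolicies assumes the Kubernetes ImagePullPolicy values; whether
// the CLI validates exactly this set is inferred from the commit, not confirmed.
var allowedPullPolicies = []string{"Always", "Never", "IfNotPresent"}

func validatePullPolicy(v string) error {
	for _, p := range allowedPullPolicies {
		if v == p {
			return nil
		}
	}
	return fmt.Errorf("invalid image pull policy %q, allowed values: %v", v, allowedPullPolicies)
}

func main() {
	if err := validatePullPolicy("SometimesMaybe"); err != nil {
		fmt.Fprintln(os.Stderr, err) // report and exit instead of panicking
		os.Exit(1)
	}
}
```
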
Mike Nguyen dbbe022a8a
fix: allow the scheduler client to initialise for edge builds (#1467)
Signed-off-by: mikeee <hey@mike.ee>
2024-11-26 13:09:01 -08:00
Anton Troshin 25d9ece42f
Add version parsing check to skip malformed versions and avoid panic (#1469)
* Add version parsing check to skip malformed versions and avoid panic

Signed-off-by: Anton Troshin <anton@diagrid.io>

* lint

Signed-off-by: Anton Troshin <anton@diagrid.io>

* do not return error on nil, skip bad versions

Signed-off-by: Anton Troshin <anton@diagrid.io>

* simplify condition to skip prerelease and versions with metadata
print warning on error and non-semver version tag

Signed-off-by: Anton Troshin <anton@diagrid.io>

---------

Signed-off-by: Anton Troshin <anton@diagrid.io>
2024-11-22 13:59:51 -08:00
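
A sketch of the skip-malformed-versions idea using golang.org/x/mod/semver; the CLI may use a different semver library, and the tag format handling here is assumed:

```go
package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

// latestStable walks release tags, skipping anything that is not a plain
// semver release (malformed, prerelease, or carrying build metadata) instead
// of panicking on it.
func latestStable(tags []string) string {
	latest := ""
	for _, tag := range tags {
		v := "v" + tag // x/mod/semver requires the leading "v"
		if !semver.IsValid(v) || semver.Prerelease(v) != "" || semver.Build(v) != "" {
			fmt.Printf("warning: skipping non-semver or pre-release tag %q\n", tag)
			continue
		}
		if latest == "" || semver.Compare(v, latest) > 0 {
			latest = v
		}
	}
	return latest
}

func main() {
	fmt.Println(latestStable([]string{"1.14.4", "1.15.0-rc.18", "not-a-version", "1.15.2"}))
}
```
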
Anton Troshin 6d5e64d964
Fix tests running master in CI with specific dapr version (#1461)
* Fix tests running master in CI with specific dapr version

Signed-off-by: Anton Troshin <anton@diagrid.io>

* move env version load into common

Signed-off-by: Anton Troshin <anton@diagrid.io>

* fix k8s test files

Signed-off-by: Anton Troshin <anton@diagrid.io>

* Revert "fix k8s test files"

This reverts commit 344867d19ca4b38e5a83a82a2a00bb04c1775bab.

Signed-off-by: Anton Troshin <anton@diagrid.io>

* Revert "move env version load into common"

This reverts commit 39e8c8caf54a157464bb44dffe448fc75727487f.

Signed-off-by: Anton Troshin <anton@diagrid.io>

* Revert "Fix tests running master in CI with specific dapr version"

This reverts commit a02c81f7e25a6bbdb8e3b172a8e215dae60d321f.

Signed-off-by: Anton Troshin <anton@diagrid.io>

* Add GetRuntimeVersion to be able to compare semver dapr versions for conditional tests
Use GetRuntimeVersion in test

Signed-off-by: Anton Troshin <anton@diagrid.io>

---------

Signed-off-by: Anton Troshin <anton@diagrid.io>
2024-11-08 10:33:08 -08:00
Yaron Schneider 8cd81b0477
Merge pull request #1460 from antontroshin/merge-release-1.14-to-master
Merge release 1.14 to master
2024-11-05 17:29:12 -08:00
Anton Troshin 5446171840
Add versioned pod number validation
Signed-off-by: Anton Troshin <anton@diagrid.io>
2024-11-05 17:38:38 -06:00
Anton Troshin 1152a1ef55
Add placement and scheduler in slim mode self-hosted tests
Signed-off-by: Anton Troshin <anton@diagrid.io>
2024-11-05 16:22:01 -06:00
Anton Troshin d781b03002
Change podman mount to home
Signed-off-by: Anton Troshin <anton@diagrid.io>
2024-11-05 03:07:52 -06:00
Anton Troshin 9bc96c2fa0
Fix test
Signed-off-by: Anton Troshin <anton@diagrid.io>
2024-11-05 02:11:25 -06:00
Anton Troshin 376690b43e
Fix number of HA mode pods to wait in tests
Signed-off-by: Anton Troshin <anton@diagrid.io>
2024-11-05 01:35:53 -06:00
Anton Troshin 086a3b9adb
Fix number of pods to wait in tests
Signed-off-by: Anton Troshin <anton@diagrid.io>
2024-11-05 01:09:47 -06:00
Anton Troshin 1f080952a5
Add logs
Signed-off-by: Anton Troshin <anton@diagrid.io>
2024-11-05 00:56:45 -06:00
Anton Troshin 002a223864
Merge branch 'master' into merge-release-1.14-to-master 2024-11-04 20:14:52 -06:00
Anton Troshin db712e7eed
Fixing e2e tests where podman and scheduler fail with mariner images (#1450)
* Fix standalone e2e tests
Fix podman e2e install
Fix scheduler start failure on standalone mariner image variant

Signed-off-by: Anton Troshin <anton@diagrid.io>

* fix test

Signed-off-by: Anton Troshin <anton@diagrid.io>

* revert

Signed-off-by: Anton Troshin <anton@diagrid.io>

* Fix podman machine settings
Add cleanups
Remove parallel tests
Fix mariner volume mount location
Remove old build tags

Signed-off-by: Anton Troshin <anton@diagrid.io>

---------

Signed-off-by: Anton Troshin <anton@diagrid.io>
2024-10-01 09:19:35 -07:00
Yetkin Timocin a3571c8464
Fix a typo (#1453)
Signed-off-by: ytimocin <ytimocin@microsoft.com>
2024-10-01 09:17:12 -07:00
Josh van Leeuwen e08443b9b3
Remove Docker client dependency from standalone run (#1443)
Signed-off-by: joshvanl <me@joshvanl.dev>
2024-08-15 13:53:26 -07:00
Yaron Schneider aefcca1899
Merge branch 'master' into fix-perfsprint-linter-error 2024-08-13 18:21:22 -07:00
Rishab Kumar aa0436ebe0
Added new holopin cli badge (#1399)
Signed-off-by: Rishab Kumar <rishabkumar7@gmail.com>
Co-authored-by: Mukundan Sundararajan <65565396+mukundansundar@users.noreply.github.com>
Co-authored-by: Mike Nguyen <hey@mike.ee>
2024-08-06 14:08:58 -07:00
Yaron Schneider 027f5da3e1
update redis version (#1439)
Signed-off-by: yaron2 <schneider.yaron@live.com>
2024-07-25 09:58:14 -07:00
KentHsu 22059c399e fix perfsprint linter error
Signed-off-by: KentHsu <chiahaohsu9@gmail.com>
2024-07-24 11:12:02 +08:00
Yaron Schneider fecf47d752
pin bitnami chart version for redis (#1437)
Signed-off-by: yaron2 <schneider.yaron@live.com>
2024-07-23 15:49:14 -07:00
Anton Troshin ad67ce58c4
Add dapr scheduler server to the status command (#1434)
Signed-off-by: Anton Troshin <anton@diagrid.io>
2024-07-22 16:38:06 -07:00
Cassie Coyle e72f95393c
Fix Scheduler Data Dir Permissions Issue (#1432)
* fix w/ @joshvanl & anton

Signed-off-by: Cassandra Coyle <cassie@diagrid.io>

* add a .

Signed-off-by: Cassandra Coyle <cassie@diagrid.io>

---------

Signed-off-by: Cassandra Coyle <cassie@diagrid.io>
2024-07-19 11:49:08 -07:00
Josh van Leeuwen 16a513ba7a
Change scheduler container data dir from `/var/run/...` to `/var/lib`. (#1429)
Signed-off-by: joshvanl <me@joshvanl.dev>
2024-07-18 16:29:43 -07:00
Josh van Leeuwen 30d888f4e6
Fix dapr init scheduler (#1428)
* Add E2E to validate scheduler after init.

Signed-off-by: Artur Souza <asouza.pro@gmail.com>

* Adds dapr_scheduler verify container

Signed-off-by: joshvanl <me@joshvanl.dev>

* Assert eventually TCP connect

Signed-off-by: joshvanl <me@joshvanl.dev>

* Fix eventually t check

Signed-off-by: joshvanl <me@joshvanl.dev>

* Write etcd-data-dir to custom path with volume

Signed-off-by: joshvanl <me@joshvanl.dev>

* Adds Helpers to test funcs

Signed-off-by: joshvanl <me@joshvanl.dev>

* Adds container name to TCP check

Signed-off-by: joshvanl <me@joshvanl.dev>

* Use rc.3 for scheduler

Signed-off-by: joshvanl <me@joshvanl.dev>

* Print container logs on failed TCP conn

Signed-off-by: joshvanl <me@joshvanl.dev>

* Fix params

Signed-off-by: joshvanl <me@joshvanl.dev>

* Use b

Signed-off-by: joshvanl <me@joshvanl.dev>

* Adds `dev` flag to rc init

Signed-off-by: joshvanl <me@joshvanl.dev>

* Fix version check

Signed-off-by: joshvanl <me@joshvanl.dev>

* Skip TCP check on slim mode

Signed-off-by: joshvanl <me@joshvanl.dev>

* Remove debug test code

Signed-off-by: joshvanl <me@joshvanl.dev>

---------

Signed-off-by: Artur Souza <asouza.pro@gmail.com>
Signed-off-by: joshvanl <me@joshvanl.dev>
Co-authored-by: Artur Souza <asouza.pro@gmail.com>
2024-07-18 12:50:16 -07:00
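
The assert-eventually-TCP-connect step can be sketched with testify's Eventually helper; the address, timeouts, and test name are illustrative rather than the actual e2e code:

```go
package e2e

import (
	"net"
	"testing"
	"time"

	"github.com/stretchr/testify/assert"
)

// assertTCPReachable keeps dialing the given address until it answers or the
// deadline passes, mirroring the "assert eventually TCP connect" idea above.
func assertTCPReachable(t *testing.T, addr string) {
	assert.Eventually(t, func() bool {
		conn, err := net.DialTimeout("tcp", addr, 500*time.Millisecond)
		if err != nil {
			return false
		}
		conn.Close()
		return true
	}, 30*time.Second, time.Second, "scheduler not reachable at %s", addr)
}

func TestSchedulerReachable(t *testing.T) {
	assertTCPReachable(t, "localhost:50006") // assumed default self-hosted scheduler port (see #1480)
}
```
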
Josh van Leeuwen 29d29ab549
Give scheduler a default volume, making it resilient to restarts by (#1423)
* Give scheduler a default volume, making it resilient to restarts by default

Signed-off-by: joshvanl <me@joshvanl.dev>

* Remove dapr_scheduler volume on uninstall, gated by `--all`

Signed-off-by: joshvanl <me@joshvanl.dev>

* Fix containerErrs in standalone.go

Signed-off-by: joshvanl <me@joshvanl.dev>

* Do not attempt to delete scheduler volume if no container runtime

Signed-off-by: joshvanl <me@joshvanl.dev>

* Increase upgrade test timeout to 40m

Signed-off-by: joshvanl <me@joshvanl.dev>

---------

Signed-off-by: joshvanl <me@joshvanl.dev>
Co-authored-by: Artur Souza <artursouza.ms@outlook.com>
2024-07-17 12:13:51 -07:00
Yaron Schneider ddf43a5f55
update link (#1426)
Signed-off-by: yaron2 <schneider.yaron@live.com>
2024-07-16 23:42:12 -07:00
Josh van Leeuwen ad3442b252
dapr_scheduler: Adds scheduler-volume flag (#1422)
* dapr_scheduler: pre-create data dir

Signed-off-by: joshvanl <me@joshvanl.dev>

* Adds --scheduler-volume to specify volume for data directory

Signed-off-by: joshvanl <me@joshvanl.dev>

---------

Signed-off-by: joshvanl <me@joshvanl.dev>
2024-07-12 08:22:34 -07:00
Mike Nguyen ed0d3af2d0
fix: scheduler host address passed to runtime (#1421)
* fix: scheduler host address passed to runtime

Signed-off-by: mikeee <hey@mike.ee>

* fix: scheduler client stream initialised for 1.14<

Signed-off-by: mikeee <hey@mike.ee>

* fix: modify scheduler host address validation

if the scheduler container is not active, the scheduler flag will
not be passed to the runtime

Signed-off-by: mikeee <hey@mike.ee>

* fix: lint and refactor

Signed-off-by: mikeee <hey@mike.ee>

---------

Signed-off-by: mikeee <hey@mike.ee>
2024-07-10 09:37:34 -07:00
Artur Souza 762e2bb4ac
Fix check for version to contain scheduler. (#1417)
Signed-off-by: Artur Souza <asouza.pro@gmail.com>
2024-07-05 20:16:40 -07:00
Cassie Coyle fd1d8e85bf
Distributed Scheduler CLI Changes (#1405)
* wip

Signed-off-by: Cassandra Coyle <cassie@diagrid.io>

* rm scheduleJob logic. keep only init/uninstall logic

Signed-off-by: Cassandra Coyle <cassie@diagrid.io>

* Fixes install and uninstall of scheduler in standalone mode.

Signed-off-by: Artur Souza <asouza.pro@gmail.com>

* Fixing path for Go tools in Darwin.

Signed-off-by: Artur Souza <asouza.pro@gmail.com>

* Fix Go bin location for MacOS.

Signed-off-by: Artur Souza <asouza.pro@gmail.com>

* Fix min scheduler version to be 1.14.x

Signed-off-by: Artur Souza <asouza.pro@gmail.com>

* Use env var to pass scheduler host.

Signed-off-by: Artur Souza <asouza.pro@gmail.com>

* Fix CLI build to work with latest MacOS runners from GH

Signed-off-by: Artur Souza <asouza.pro@gmail.com>

---------

Signed-off-by: Cassandra Coyle <cassie@diagrid.io>
Signed-off-by: Artur Souza <asouza.pro@gmail.com>
Co-authored-by: Artur Souza <asouza.pro@gmail.com>
2024-07-03 14:58:55 -07:00
dependabot[bot] 4881ca11d7
Bump golang.org/x/net from 0.21.0 to 0.23.0 (#1401)
Bumps [golang.org/x/net](https://github.com/golang/net) from 0.21.0 to 0.23.0.
- [Commits](https://github.com/golang/net/compare/v0.21.0...v0.23.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-05-14 20:01:32 +05:30
90 changed files with 2367 additions and 2908 deletions

.github/holopin.yml (vendored, 6 changes)

@ -1,6 +1,6 @@
organization: dapr
defaultSticker: clmjkxscc122740fl0mkmb7egi
defaultSticker: clutq4bgp107990fl1h4m7jp3b
stickers:
-
id: clmjkxscc122740fl0mkmb7egi
alias: ghc2023
id: clutq4bgp107990fl1h4m7jp3b
alias: cli-badge


@ -29,7 +29,7 @@ jobs:
name: Build ${{ matrix.target_os }}_${{ matrix.target_arch }} binaries
runs-on: ${{ matrix.os }}
env:
GOLANG_CI_LINT_VER: v1.55.2
GOLANG_CI_LINT_VER: v1.61.0
GOOS: ${{ matrix.target_os }}
GOARCH: ${{ matrix.target_arch }}
GOPROXY: https://proxy.golang.org
@ -39,7 +39,7 @@ jobs:
WIX_BIN_PATH: 'C:/Program Files (x86)/WiX Toolset v3.11/bin'
strategy:
matrix:
os: [ubuntu-latest, windows-latest, macOS-latest]
os: [ubuntu-latest, windows-latest, macOS-latest, macOS-latest-large]
target_arch: [arm, arm64, amd64]
include:
- os: ubuntu-latest
@ -48,6 +48,8 @@ jobs:
target_os: windows
- os: macOS-latest
target_os: darwin
- os: macOS-latest-large
target_os: darwin
exclude:
- os: windows-latest
target_arch: arm
@ -55,44 +57,28 @@ jobs:
target_arch: arm64
- os: macOS-latest
target_arch: arm
- os: macOS-latest
target_arch: amd64
- os: macOS-latest-large
target_arch: arm
- os: macOS-latest-large
target_arch: arm64
steps:
- name: Prepare Go's bin location - MacOS
if: matrix.target_os == 'darwin'
run: |
export PATH=$HOME/bin:$PATH
echo "$HOME/bin" >> $GITHUB_PATH
echo "GOBIN=$HOME/bin" >> $GITHUB_ENV
mkdir -p $HOME/bin
- name: Check out code into the Go module directory
uses: actions/checkout@v3
uses: actions/checkout@v4
- name: Set up Go
uses: actions/setup-go@v3
uses: actions/setup-go@v5
id: setup-go
with:
go-version-file: 'go.mod'
- name: Cache Go modules (Linux)
if: matrix.target_os == 'linux'
uses: actions/cache@v3
with:
path: |
~/.cache/go-build
~/go/pkg/mod
key: ${{ matrix.target_os }}-${{ matrix.target_arch }}-go-${{ steps.setup-go.outputs.go-version }}-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ matrix.target_os }}-${{ matrix.target_arch }}-go-${{ steps.setup-go.outputs.go-version }}-
- name: Cache Go modules (Windows)
if: matrix.target_os == 'windows'
uses: actions/cache@v3
with:
path: |
~\AppData\Local\go-build
~\go\pkg\mod
key: ${{ matrix.target_os }}-${{ matrix.target_arch }}-go-${{ steps.setup-go.outputs.go-version }}-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ matrix.target_os }}-${{ matrix.target_arch }}-go-${{ steps.setup-go.outputs.go-version }}-
- name: Cache Go modules (macOS)
if: matrix.target_os == 'darwin'
uses: actions/cache@v3
with:
path: |
~/Library/Caches/go-build
~/go/pkg/mod
key: ${{ matrix.target_os }}-${{ matrix.target_arch }}-go-${{ steps.setup-go.outputs.go-version }}-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ matrix.target_os }}-${{ matrix.target_arch }}-go-${{ steps.setup-go.outputs.go-version }}-
- name: Run golangci-lint
if: matrix.target_arch == 'amd64' && matrix.target_os == 'linux'
uses: golangci/golangci-lint-action@v3.2.0
@ -187,7 +173,7 @@ jobs:
runs-on: windows-latest
steps:
- name: Check out code
uses: actions/checkout@v3
uses: actions/checkout@v4
- name: Parse release version and set REL_VERSION
run: python ./.github/scripts/get_release_version.py
- name: Update winget manifests


@ -11,7 +11,7 @@ jobs:
pull-requests: write
packages: write
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
- name: "Publish Features"
uses: devcontainers/action@v1


@ -22,7 +22,7 @@ jobs:
- ubuntu:latest
- mcr.microsoft.com/devcontainers/base:ubuntu
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
- name: "Install latest devcontainer CLI"
run: npm install -g @devcontainers/cli
@ -39,7 +39,7 @@ jobs:
features:
- dapr-cli
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
- name: "Install latest devcontainer CLI"
run: npm install -g @devcontainers/cli
@ -52,7 +52,7 @@ jobs:
runs-on: ubuntu-latest
continue-on-error: true
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
- name: "Install latest devcontainer CLI"
run: npm install -g @devcontainers/cli


@ -9,7 +9,7 @@ jobs:
validate:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
- name: "Validate devcontainer-feature.json files"
uses: devcontainers/action@v1


@ -34,7 +34,7 @@ jobs:
FOSSA_API_KEY: b88e1f4287c3108c8751bf106fb46db6 # This is a push-only token that is safe to be exposed.
steps:
- name: "Checkout code"
uses: actions/checkout@v3
uses: actions/checkout@v4
- name: "Run FOSSA Scan"
uses: fossas/fossa-action@v1.3.1 # Use a specific version if locking is preferred


@ -38,7 +38,7 @@ jobs:
steps:
# Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
- uses: actions/checkout@v3
- uses: actions/checkout@v4
# Install Dapr
- name: Install DAPR CLI


@ -50,11 +50,10 @@ jobs:
name: E2E tests for K8s (KinD)
runs-on: ubuntu-latest
env:
DAPR_RUNTIME_PINNED_VERSION: 1.13.0-rc.2
DAPR_RUNTIME_PINNED_VERSION: 1.14.4
DAPR_DASHBOARD_PINNED_VERSION: 0.14.0
DAPR_RUNTIME_LATEST_STABLE_VERSION:
DAPR_DASHBOARD_LATEST_STABLE_VERSION:
DAPR_TGZ: dapr-1.13.0-rc.2.tgz
strategy:
fail-fast: false # Keep running if one leg fails.
matrix:
@ -80,23 +79,14 @@ jobs:
kind-image-sha: sha256:9be91e9e9cdf116809841fc77ebdb8845443c4c72fe5218f3ae9eb57fdb4bace
steps:
- name: Check out code onto GOPATH
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
path: ./src/github.com/dapr/cli
- name: Set up Go
uses: actions/setup-go@v3
uses: actions/setup-go@v5
id: setup-go
with:
go-version-file: './src/github.com/dapr/cli/go.mod'
- name: Cache Go modules
uses: actions/cache@v3
with:
path: |
~/.cache/go-build
~/go/pkg/mod
key: ${{ matrix.k8s-version }}-${{ matrix.kind-version }}-go-${{ steps.setup-go.outputs.go-version }}-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ matrix.k8s-version }}-${{ matrix.kind-version }}-go-${{ steps.setup-go.outputs.go-version }}-
- name: Configure KinD
# Generate a KinD configuration file that uses:
@ -183,6 +173,7 @@ jobs:
export TEST_OUTPUT_FILE=$GITHUB_WORKSPACE/test-e2e-kind.json
echo "TEST_OUTPUT_FILE=$TEST_OUTPUT_FILE" >> $GITHUB_ENV
export GITHUB_TOKEN=${{ secrets.GITHUB_TOKEN }}
export TEST_DAPR_HA_MODE=${{ matrix.mode }}
make e2e-build-run-k8s
shell: bash
- name: Run tests with Docker hub
@ -191,6 +182,7 @@ jobs:
export TEST_OUTPUT_FILE=$GITHUB_WORKSPACE/test-e2e-kind.json
echo "TEST_OUTPUT_FILE=$TEST_OUTPUT_FILE" >> $GITHUB_ENV
export GITHUB_TOKEN=${{ secrets.GITHUB_TOKEN }}
export TEST_DAPR_HA_MODE=${{ matrix.mode }}
make e2e-build-run-k8s
shell: bash
- name: Upload test results


@ -38,23 +38,24 @@ jobs:
GOARCH: ${{ matrix.target_arch }}
GOPROXY: https://proxy.golang.org
ARCHIVE_OUTDIR: dist/archives
DAPR_RUNTIME_PINNED_VERSION: "1.13.0-rc.2"
DAPR_RUNTIME_PINNED_VERSION: "1.14.4"
DAPR_DASHBOARD_PINNED_VERSION: 0.14.0
DAPR_RUNTIME_LATEST_STABLE_VERSION: ""
DAPR_DASHBOARD_LATEST_STABLE_VERSION: ""
GOLANG_PROTOBUF_REGISTRATION_CONFLICT: warn
PODMAN_VERSION: 4.4.4
PODMAN_VERSION: 5.4.0
strategy:
# TODO: Remove this when our E2E tests are stable for podman on MacOS.
fail-fast: false # Keep running if one leg fails.
matrix:
os: [macos-latest, ubuntu-latest, windows-latest]
# See https://github.com/actions/runner-images
os: [macos-latest-large, ubuntu-latest, windows-latest]
target_arch: [amd64]
dapr_install_mode: [slim, complete]
include:
- os: ubuntu-latest
target_os: linux
- os: macOS-latest
- os: macos-latest-large
target_os: darwin
- os: windows-latest
target_os: windows
@ -62,46 +63,24 @@ jobs:
- os: windows-latest
dapr_install_mode: complete
steps:
- name: Prepare Go's bin location - MacOS
if: matrix.os == 'macos-latest-large'
run: |
export PATH=$HOME/bin:$PATH
echo "$HOME/bin" >> $GITHUB_PATH
echo "GOBIN=$HOME/bin" >> $GITHUB_ENV
mkdir -p $HOME/bin
- name: Check out code into the Go module directory
uses: actions/checkout@v3
uses: actions/checkout@v4
- name: Set up Go
uses: actions/setup-go@v3
uses: actions/setup-go@v5
id: setup-go
with:
go-version-file: "go.mod"
- name: Cache Go modules (Linux)
if: matrix.target_os == 'linux'
uses: actions/cache@v3
with:
path: |
~/.cache/go-build
~/go/pkg/mod
key: ${{ matrix.target_os }}-${{ matrix.target_arch }}-go-${{ steps.setup-go.outputs.go-version }}-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ matrix.target_os }}-${{ matrix.target_arch }}-go-${{ steps.setup-go.outputs.go-version }}-
- name: Cache Go modules (Windows)
if: matrix.target_os == 'windows'
uses: actions/cache@v3
with:
path: |
~\AppData\Local\go-build
~\go\pkg\mod
key: ${{ matrix.target_os }}-${{ matrix.target_arch }}-go-${{ steps.setup-go.outputs.go-version }}-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ matrix.target_os }}-${{ matrix.target_arch }}-go-${{ steps.setup-go.outputs.go-version }}-
- name: Cache Go modules (macOS)
if: matrix.target_os == 'darwin'
uses: actions/cache@v3
with:
path: |
~/Library/Caches/go-build
~/go/pkg/mod
key: ${{ matrix.target_os }}-${{ matrix.target_arch }}-go-${{ steps.setup-go.outputs.go-version }}-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ matrix.target_os }}-${{ matrix.target_arch }}-go-${{ steps.setup-go.outputs.go-version }}-
- name: Install podman - MacOS
timeout-minutes: 15
if: matrix.os == 'macos-latest' && matrix.dapr_install_mode == 'complete'
if: matrix.os == 'macos-latest-large' && matrix.dapr_install_mode == 'complete'
run: |
# Install podman
curl -sL -o podman.pkg https://github.com/containers/podman/releases/download/v${{ env.PODMAN_VERSION }}/podman-installer-macos-amd64.pkg
@ -111,8 +90,10 @@ jobs:
# Start podman machine
sudo podman-mac-helper install
podman machine init
podman machine init -v $HOME:$HOME --memory 16384 --cpus 12
podman machine start --log-level debug
podman machine ssh sudo sysctl -w kernel.keys.maxkeys=20000
podman info
echo "CONTAINER_RUNTIME=podman" >> $GITHUB_ENV
- name: Determine latest Dapr Runtime version including Pre-releases
if: github.base_ref == 'master'
@ -149,7 +130,7 @@ jobs:
echo "DAPR_DASHBOARD_LATEST_STABLE_VERSION=$LATEST_STABLE_DASHBOARD_VERSION" >> $GITHUB_ENV
shell: bash
- name: Set the test timeout - MacOS
if: matrix.os == 'macos-latest'
if: matrix.os == 'macos-latest-large'
run: echo "E2E_SH_TEST_TIMEOUT=30m" >> $GITHUB_ENV
- name: Run E2E tests with GHCR
# runs every 6hrs


@ -74,24 +74,14 @@ jobs:
kind-image-sha: sha256:9be91e9e9cdf116809841fc77ebdb8845443c4c72fe5218f3ae9eb57fdb4bace
steps:
- name: Check out code onto GOPATH
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
path: ./src/github.com/dapr/cli
- name: Set up Go
uses: actions/setup-go@v3
uses: actions/setup-go@v5
id: setup-go
with:
go-version-file: './src/github.com/dapr/cli/go.mod'
- name: Cache Go modules
uses: actions/cache@v3
with:
path: |
~/.cache/go-build
~/go/pkg/mod
key: ${{ matrix.k8s-version }}-${{ matrix.kind-version }}-go-${{ steps.setup-go.outputs.go-version }}-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ matrix.k8s-version }}-${{ matrix.kind-version }}-go-${{ steps.setup-go.outputs.go-version }}-
- name: Configure KinD
# Generate a KinD configuration file that uses:
@ -147,6 +137,7 @@ jobs:
run: |
export TEST_OUTPUT_FILE=$GITHUB_WORKSPACE/test-e2e-upgrade-kind.json
echo "TEST_OUTPUT_FILE=$TEST_OUTPUT_FILE" >> $GITHUB_ENV
export TEST_DAPR_HA_MODE=${{ matrix.mode }}
make e2e-build-run-upgrade
- name: Run tests with Docker hub
@ -154,6 +145,7 @@ jobs:
run: |
export TEST_OUTPUT_FILE=$GITHUB_WORKSPACE/test-e2e-upgrade-kind.json
echo "TEST_OUTPUT_FILE=$TEST_OUTPUT_FILE" >> $GITHUB_ENV
export TEST_DAPR_HA_MODE=${{ matrix.mode }}
make e2e-build-run-upgrade
- name: Upload test results

.gitignore (vendored, 5 changes)

@ -6,6 +6,9 @@
*.dylib
cli
# Handy directory to keep local scripts and help files.
.local/
# Mac's metadata folder
.DS_Store
@ -37,4 +40,4 @@ go.work
#Wix files
*.wixobj
*.wixpdb
*.msi
*.msi


@ -4,7 +4,7 @@ run:
concurrency: 4
# timeout for analysis, e.g. 30s, 5m, default is 1m
deadline: 10m
timeout: 10m
# exit code when at least one issue was found, default is 1
issues-exit-code: 1
@ -16,28 +16,22 @@ run:
#build-tags:
# - mytag
issues:
# which dirs to skip: they won't be analyzed;
# can use regexp here: generated.*, regexp is applied on full path;
# default value is empty list, but next dirs are always skipped independently
# from this option's value:
# third_party$, testdata$, examples$, Godeps$, builtin$
skip-dirs:
exclude-dirs:
- ^pkg.*client.*clientset.*versioned.*
- ^pkg.*client.*informers.*externalversions.*
- pkg.*mod.*k8s.io.*
# which files to skip: they will be analyzed, but issues from them
# won't be reported. Default value is empty list, but there is
# no need to include all autogenerated files, we confidently recognize
# autogenerated files. If it's not please let us know.
skip-files: []
# - ".*\\.my\\.go$"
# - lib/bad.go
# output configuration options
output:
# colored-line-number|line-number|json|tab|checkstyle, default is "colored-line-number"
format: tab
formats:
- format: tab
# print lines of code with issue, default is true
print-issued-lines: true
@ -71,9 +65,6 @@ linters-settings:
statements: 40
govet:
# report about shadowed variables
check-shadowing: true
# settings per analyzer
settings:
printf: # analyzer name, run `go tool vet help` to see all analyzers
@ -82,13 +73,18 @@ linters-settings:
- (github.com/golangci/golangci-lint/pkg/logutils.Log).Warnf
- (github.com/golangci/golangci-lint/pkg/logutils.Log).Errorf
- (github.com/golangci/golangci-lint/pkg/logutils.Log).Fatalf
- github.com/dapr/cli/pkg/print.FailureStatusEvent
- github.com/dapr/cli/pkg/print.SuccessStatusEvent
- github.com/dapr/cli/pkg/print.WarningStatusEvent
- github.com/dapr/cli/pkg/print.InfoStatusEvent
- github.com/dapr/cli/pkg/print.StatusEvent
- github.com/dapr/cli/pkg/print.Spinner
# enable or disable analyzers by name
enable:
- atomicalign
enable-all: false
disable:
- shadow
enable-all: false
disable-all: false
revive:
# linting errors below this confidence will be ignored, default is 0.8
@ -106,9 +102,6 @@ linters-settings:
gocognit:
# minimal code complexity to report, 30 by default (but we recommend 10-20)
min-complexity: 10
maligned:
# print struct with more effective memory layout or not, false by default
suggest-new: true
dupl:
# tokens count to trigger issue, 150 by default
threshold: 100
@ -141,7 +134,7 @@ linters-settings:
# XXX: if you enable this setting, unused will report a lot of false-positives in text editors:
# if it's called for subdir of a project it can't find funcs usages. All text editor integrations
# with golangci-lint call it on a directory with the changed file.
check-exported: false
exported-fields-are-used: false
unparam:
# Inspect exported functions, default is false. Set to true if no external program/library imports your code.
# XXX: if you enable this setting, unparam will report a lot of false-positives in text editors:
@ -216,12 +209,17 @@ linters-settings:
# Allow case blocks to end with a whitespace.
# Allow declarations (var) to be cuddled.
allow-cuddle-declarations: false
testifylint:
disable:
- require-error
linters:
fast: false
enable-all: true
disable:
# TODO Enforce the below linters later
- musttag
- dupl
- errcheck
- funlen
@ -230,39 +228,48 @@ linters:
- gocyclo
- gocognit
- godox
- interfacer
- lll
- maligned
- scopelint
- unparam
- wsl
- gomnd
- testpackage
- nestif
- goerr113
- nlreturn
- exhaustive
- gci
- noctx
- exhaustivestruct
- exhaustruct
- gomoddirectives
- paralleltest
- noctx
- gci
- tparallel
- wastedassign
- cyclop
- forbidigo
- tagliatelle
- thelper
- paralleltest
- wrapcheck
- varnamelen
- forcetypeassert
- tagliatelle
- ireturn
- golint
- nosnakecase
- errchkjson
- contextcheck
- gomoddirectives
- godot
- cyclop
- varnamelen
- errorlint
- forcetypeassert
- maintidx
- nilnil
- predeclared
- tenv
- thelper
- wastedassign
- containedctx
- gosimple
- nonamedreturns
- asasalint
- rowserrcheck
- sqlclosecheck
- inamedparam
- tagalign
- varcheck
- deadcode
- structcheck
- ifshort
- testifylint
- mnd
- canonicalheader
- exportloopref
- execinquery
- err113
- fatcontext
- forbidigo


@ -40,7 +40,7 @@ Before you file an issue, make sure you've checked the following:
- 👎 down-vote
1. For bugs
- Check it's not an environment issue. For example, if running on Kubernetes, make sure prerequisites are in place. (state stores, bindings, etc.)
- You have as much data as possible. This usually comes in the form of logs and/or stacktrace. If running on Kubernetes or other environment, look at the logs of the Dapr services (runtime, operator, placement service). More details on how to get logs can be found [here](https://docs.dapr.io/operations/troubleshooting/logs-troubleshooting/).
- You have as much data as possible. This usually comes in the form of logs and/or stacktrace. If running on Kubernetes or other environment, look at the logs of the Dapr services (runtime, operator, placement, scheduler service). More details on how to get logs can be found [here](https://docs.dapr.io/operations/troubleshooting/logs-troubleshooting/).
1. For proposals
- Many changes to the Dapr runtime may require changes to the API. In that case, the best place to discuss the potential feature is the main [Dapr repo](https://github.com/dapr/dapr).
- Other examples could include bindings, state stores or entirely new components.

Gopkg.lock (generated, 991 changes)

@ -1,991 +0,0 @@
# This file is autogenerated, do not edit; changes may be undone by the next 'dep ensure'.
[[projects]]
digest = "1:80004fcc5cf64e591486b3e11b406f1e0d17bf85d475d64203c8494f5da4fcd1"
name = "cloud.google.com/go"
packages = ["compute/metadata"]
pruneopts = "UT"
revision = "8c41231e01b2085512d98153bcffb847ff9b4b9f"
version = "v0.38.0"
[[projects]]
digest = "1:6b1426cad7057b717351eacf5b6fe70f053f11aac1ce254bbf2fd72c031719eb"
name = "contrib.go.opencensus.io/exporter/ocagent"
packages = ["."]
pruneopts = "UT"
revision = "dcb33c7f3b7cfe67e8a2cea10207ede1b7c40764"
version = "v0.4.12"
[[projects]]
digest = "1:b88fe174accff6609eee9dc7e4ec9f828cbda83e3646111538dbcc7f762f1a56"
name = "github.com/Azure/go-autorest"
packages = [
"autorest",
"autorest/adal",
"autorest/azure",
"autorest/date",
"logger",
"tracing",
]
pruneopts = "UT"
revision = "f29a2eccaa178b367df0405778cd85e0af7b4225"
version = "v12.1.0"
[[projects]]
digest = "1:d5e752c67b445baa5b6cb6f8aa706775c2aa8e41aca95a0c651520ff2c80361a"
name = "github.com/Microsoft/go-winio"
packages = [
".",
"pkg/guid",
]
pruneopts = "UT"
revision = "6c72808b55902eae4c5943626030429ff20f3b63"
version = "v0.4.14"
[[projects]]
branch = "master"
digest = "1:174cfc45f3f3e0f24f4bc3d2c80d8bcd4c02e274f9a0c4e7fcb6ff3273c0eeee"
name = "github.com/Pallinder/sillyname-go"
packages = ["."]
pruneopts = "UT"
revision = "97aeae9e6ba11ec62a40cf8b6b4bc42116c0a303"
[[projects]]
digest = "1:04457f9f6f3ffc5fea48e71d62f2ca256637dee0a04d710288e27e05c8b41976"
name = "github.com/Sirupsen/logrus"
packages = ["."]
pruneopts = "UT"
revision = "839c75faf7f98a33d445d181f3018b5c3409a45e"
version = "v1.4.2"
[[projects]]
digest = "1:937ce6e0cd5ccfec205f444a0d9c74f5680cbb68cd0a992b000559bf964ea20b"
name = "github.com/briandowns/spinner"
packages = ["."]
pruneopts = "UT"
revision = "e3fb08e7443c496a847cb2eef48e3883f3e12c38"
version = "v1.6.1"
[[projects]]
digest = "1:c1100fc71e23b6a32b2c68a5202a848fd13811d5a10b12edb8019c3667d1cd9a"
name = "github.com/cenkalti/backoff"
packages = ["."]
pruneopts = "UT"
revision = "4b4cebaf850ec58f1bb1fec5bdebdf8501c2bc3f"
version = "v3.0.0"
[[projects]]
digest = "1:fdb4ed936abeecb46a8c27dcac83f75c05c87a46d9ec7711411eb785c213fa02"
name = "github.com/census-instrumentation/opencensus-proto"
packages = [
"gen-go/agent/common/v1",
"gen-go/agent/metrics/v1",
"gen-go/agent/trace/v1",
"gen-go/metrics/v1",
"gen-go/resource/v1",
"gen-go/trace/v1",
]
pruneopts = "UT"
revision = "a105b96453fe85139acc07b68de48f2cbdd71249"
version = "v0.2.0"
[[projects]]
digest = "1:95ea6524ccf5526a5f57fa634f2789266684ae1c15ce1a0cab3ae68e7ea3c4d0"
name = "github.com/dapr/dapr"
packages = [
"pkg/apis/components",
"pkg/apis/components/v1alpha1",
"pkg/components",
"pkg/config/modes",
]
pruneopts = "UT"
revision = "c75b111b7d2258ce7339f6b40c6d1af5b0b6de22"
version = "v0.2.0"
[[projects]]
digest = "1:ffe9824d294da03b391f44e1ae8281281b4afc1bdaa9588c9097785e3af10cec"
name = "github.com/davecgh/go-spew"
packages = ["spew"]
pruneopts = "UT"
revision = "8991bc29aa16c548c550c7ff78260e27b9ab7c73"
version = "v1.1.1"
[[projects]]
digest = "1:76dc72490af7174349349838f2fe118996381b31ea83243812a97e5a0fd5ed55"
name = "github.com/dgrijalva/jwt-go"
packages = ["."]
pruneopts = "UT"
revision = "06ea1031745cb8b3dab3f6a236daf2b0aa468b7e"
version = "v3.2.0"
[[projects]]
digest = "1:4ddc17aeaa82cb18c5f0a25d7c253a10682f518f4b2558a82869506eec223d76"
name = "github.com/docker/distribution"
packages = [
"digestset",
"reference",
]
pruneopts = "UT"
revision = "2461543d988979529609e8cb6fca9ca190dc48da"
version = "v2.7.1"
[[projects]]
digest = "1:c4c7064c2c67a0a00815918bae489dd62cd88d859d24c95115d69b00b3d33334"
name = "github.com/docker/docker"
packages = [
"api/types",
"api/types/blkiodev",
"api/types/container",
"api/types/events",
"api/types/filters",
"api/types/mount",
"api/types/network",
"api/types/reference",
"api/types/registry",
"api/types/strslice",
"api/types/swarm",
"api/types/time",
"api/types/versions",
"api/types/volume",
"client",
"pkg/tlsconfig",
]
pruneopts = "UT"
revision = "092cba3727bb9b4a2f0e922cd6c0f93ea270e363"
version = "v1.13.1"
[[projects]]
digest = "1:811c86996b1ca46729bad2724d4499014c4b9effd05ef8c71b852aad90deb0ce"
name = "github.com/docker/go-connections"
packages = [
"nat",
"sockets",
"tlsconfig",
]
pruneopts = "UT"
revision = "7395e3f8aa162843a74ed6d48e79627d9792ac55"
version = "v0.4.0"
[[projects]]
digest = "1:e95ef557dc3120984bb66b385ae01b4bb8ff56bcde28e7b0d1beed0cccc4d69f"
name = "github.com/docker/go-units"
packages = ["."]
pruneopts = "UT"
revision = "519db1ee28dcc9fd2474ae59fca29a810482bfb1"
version = "v0.4.0"
[[projects]]
digest = "1:865079840386857c809b72ce300be7580cb50d3d3129ce11bf9aa6ca2bc1934a"
name = "github.com/fatih/color"
packages = ["."]
pruneopts = "UT"
revision = "5b77d2a35fb0ede96d138fc9a99f5c9b6aef11b4"
version = "v1.7.0"
[[projects]]
digest = "1:abeb38ade3f32a92943e5be54f55ed6d6e3b6602761d74b4aab4c9dd45c18abd"
name = "github.com/fsnotify/fsnotify"
packages = ["."]
pruneopts = "UT"
revision = "c2828203cd70a50dcccfb2761f8b1f8ceef9a8e9"
version = "v1.4.7"
[[projects]]
digest = "1:2cd7915ab26ede7d95b8749e6b1f933f1c6d5398030684e6505940a10f31cfda"
name = "github.com/ghodss/yaml"
packages = ["."]
pruneopts = "UT"
revision = "0ca9ea5df5451ffdf184b4428c902747c2c11cd7"
version = "v1.0.0"
[[projects]]
branch = "master"
digest = "1:adddf11eb27039a3afcc74c5e3d13da84e189012ec37acfc2c70385f25edbe0f"
name = "github.com/gocarina/gocsv"
packages = ["."]
pruneopts = "UT"
revision = "2fc85fcf0c07e8bb9123b2104e84cfc2a5b53724"
[[projects]]
digest = "1:4d02824a56d268f74a6b6fdd944b20b58a77c3d70e81008b3ee0c4f1a6777340"
name = "github.com/gogo/protobuf"
packages = [
"proto",
"sortkeys",
]
pruneopts = "UT"
revision = "ba06b47c162d49f2af050fb4c75bcbc86a159d5c"
version = "v1.2.1"
[[projects]]
digest = "1:489a99067cd08971bd9c1ee0055119ba8febc1429f9200ab0bec68d35e8c4833"
name = "github.com/golang/protobuf"
packages = [
"jsonpb",
"proto",
"protoc-gen-go/descriptor",
"protoc-gen-go/generator",
"protoc-gen-go/generator/internal/remap",
"protoc-gen-go/plugin",
"ptypes",
"ptypes/any",
"ptypes/duration",
"ptypes/struct",
"ptypes/timestamp",
"ptypes/wrappers",
]
pruneopts = "UT"
revision = "b5d812f8a3706043e23a9cd5babf2e5423744d30"
version = "v1.3.1"
[[projects]]
digest = "1:a6181aca1fd5e27103f9a920876f29ac72854df7345a39f3b01e61c8c94cc8af"
name = "github.com/google/gofuzz"
packages = ["."]
pruneopts = "UT"
revision = "f140a6486e521aad38f5917de355cbf147cc0496"
version = "v1.0.0"
[[projects]]
digest = "1:582b704bebaa06b48c29b0cec224a6058a09c86883aaddabde889cd1a5f73e1b"
name = "github.com/google/uuid"
packages = ["."]
pruneopts = "UT"
revision = "0cd6bf5da1e1c83f8b45653022c74f71af0538a4"
version = "v1.1.1"
[[projects]]
digest = "1:65c4414eeb350c47b8de71110150d0ea8a281835b1f386eacaa3ad7325929c21"
name = "github.com/googleapis/gnostic"
packages = [
"OpenAPIv2",
"compiler",
"extensions",
]
pruneopts = "UT"
revision = "7c663266750e7d82587642f65e60bc4083f1f84e"
version = "v0.2.0"
[[projects]]
digest = "1:c7810b83a74c6ec1d14d16d4b950c09abce6fbe9cc660ac2cde5b57efa8cc12e"
name = "github.com/gophercloud/gophercloud"
packages = [
".",
"openstack",
"openstack/identity/v2/tenants",
"openstack/identity/v2/tokens",
"openstack/identity/v3/tokens",
"openstack/utils",
"pagination",
]
pruneopts = "UT"
revision = "c2d73b246b48e239d3f03c455905e06fe26e33c3"
version = "v0.1.0"
[[projects]]
digest = "1:4f30fff718a459f9be272e7aa87463cdf4ba27bb8bd7f586ac34c36d670aada4"
name = "github.com/grpc-ecosystem/grpc-gateway"
packages = [
"internal",
"runtime",
"utilities",
]
pruneopts = "UT"
revision = "cddead4ec1d10cc62f08e1fd6f8591fbe71cfff9"
version = "v1.9.1"
[[projects]]
digest = "1:67474f760e9ac3799f740db2c489e6423a4cde45520673ec123ac831ad849cb8"
name = "github.com/hashicorp/golang-lru"
packages = ["simplelru"]
pruneopts = "UT"
revision = "7087cb70de9f7a8bc0a10c375cb0d2280a8edf9c"
version = "v0.5.1"
[[projects]]
digest = "1:c0d19ab64b32ce9fe5cf4ddceba78d5bc9807f0016db6b1183599da3dcc24d10"
name = "github.com/hashicorp/hcl"
packages = [
".",
"hcl/ast",
"hcl/parser",
"hcl/printer",
"hcl/scanner",
"hcl/strconv",
"hcl/token",
"json/parser",
"json/scanner",
"json/token",
]
pruneopts = "UT"
revision = "8cb6e5b959231cc1119e43259c4a608f9c51a241"
version = "v1.0.0"
[[projects]]
digest = "1:a0cefd27d12712af4b5018dc7046f245e1e3b5760e2e848c30b171b570708f9b"
name = "github.com/imdario/mergo"
packages = ["."]
pruneopts = "UT"
revision = "7c29201646fa3de8506f701213473dd407f19646"
version = "v0.3.7"
[[projects]]
digest = "1:870d441fe217b8e689d7949fef6e43efbc787e50f200cb1e70dbca9204a1d6be"
name = "github.com/inconshreveable/mousetrap"
packages = ["."]
pruneopts = "UT"
revision = "76626ae9c91c4f2a10f34cad8ce83ea42c93bb75"
version = "v1.0"
[[projects]]
digest = "1:f5a2051c55d05548d2d4fd23d244027b59fbd943217df8aa3b5e170ac2fd6e1b"
name = "github.com/json-iterator/go"
packages = ["."]
pruneopts = "UT"
revision = "0ff49de124c6f76f8494e194af75bde0f1a49a29"
version = "v1.1.6"
[[projects]]
digest = "1:fa4b4bfa0edfb83c4690d8746ea25bc2447ad0c20063ba72adcb5725f54acde0"
name = "github.com/klauspost/compress"
packages = [
"flate",
"gzip",
"zlib",
]
pruneopts = "UT"
revision = "4e96aec082898e4dad17d8aca1a7e2d01362ff6c"
version = "v1.9.2"
[[projects]]
digest = "1:31e761d97c76151dde79e9d28964a812c46efc5baee4085b86f68f0c654450de"
name = "github.com/konsorten/go-windows-terminal-sequences"
packages = ["."]
pruneopts = "UT"
revision = "f55edac94c9bbba5d6182a4be46d86a2c9b5b50e"
version = "v1.0.2"
[[projects]]
digest = "1:5a0ef768465592efca0412f7e838cdc0826712f8447e70e6ccc52eb441e9ab13"
name = "github.com/magiconair/properties"
packages = ["."]
pruneopts = "UT"
revision = "de8848e004dd33dc07a2947b3d76f618a7fc7ef1"
version = "v1.8.1"
[[projects]]
digest = "1:c658e84ad3916da105a761660dcaeb01e63416c8ec7bc62256a9b411a05fcd67"
name = "github.com/mattn/go-colorable"
packages = ["."]
pruneopts = "UT"
revision = "167de6bfdfba052fa6b2d3664c8f5272e23c9072"
version = "v0.0.9"
[[projects]]
digest = "1:e150b5fafbd7607e2d638e4e5cf43aa4100124e5593385147b0a74e2733d8b0d"
name = "github.com/mattn/go-isatty"
packages = ["."]
pruneopts = "UT"
revision = "c2a7a6ca930a4cd0bc33a3f298eb71960732a3a7"
version = "v0.0.7"
[[projects]]
digest = "1:0356f3312c9bd1cbeda81505b7fd437501d8e778ab66998ef69f00d7f9b3a0d7"
name = "github.com/mattn/go-runewidth"
packages = ["."]
pruneopts = "UT"
revision = "3ee7d812e62a0804a7d0a324e0249ca2db3476d3"
version = "v0.0.4"
[[projects]]
branch = "master"
digest = "1:56aff9bb73896906956fee6927207393212bfaa732c1aab4feaf29de4b1418e9"
name = "github.com/mitchellh/go-ps"
packages = ["."]
pruneopts = "UT"
revision = "621e5597135b1d14a7d9c2bfc7bc312e7c58463c"
[[projects]]
digest = "1:53bc4cd4914cd7cd52139990d5170d6dc99067ae31c56530621b18b35fc30318"
name = "github.com/mitchellh/mapstructure"
packages = ["."]
pruneopts = "UT"
revision = "3536a929edddb9a5b34bd6861dc4a9647cb459fe"
version = "v1.1.2"
[[projects]]
digest = "1:33422d238f147d247752996a26574ac48dcf472976eda7f5134015f06bf16563"
name = "github.com/modern-go/concurrent"
packages = ["."]
pruneopts = "UT"
revision = "bacd9c7ef1dd9b15be4a9909b8ac7a4e313eec94"
version = "1.0.3"
[[projects]]
digest = "1:e32bdbdb7c377a07a9a46378290059822efdce5c8d96fe71940d87cb4f918855"
name = "github.com/modern-go/reflect2"
packages = ["."]
pruneopts = "UT"
revision = "4b7aa43c6742a2c18fdef89dd197aaae7dac7ccd"
version = "1.0.1"
[[projects]]
branch = "master"
digest = "1:6491080aa184f88c2bb8e2f6056e5e0e9a578b2d8666efbd6e97bc37a0c41e72"
name = "github.com/nightlyone/lockfile"
packages = ["."]
pruneopts = "UT"
revision = "0ad87eef1443f64d3d8c50da647e2b1552851124"
[[projects]]
digest = "1:abcdbf03ca6ca13d3697e2186edc1f33863bbdac2b3a44dfa39015e8903f7409"
name = "github.com/olekukonko/tablewriter"
packages = ["."]
pruneopts = "UT"
revision = "e6d60cf7ba1f42d86d54cdf5508611c4aafb3970"
version = "v0.0.1"
[[projects]]
digest = "1:ee4d4af67d93cc7644157882329023ce9a7bcfce956a079069a9405521c7cc8d"
name = "github.com/opencontainers/go-digest"
packages = ["."]
pruneopts = "UT"
revision = "279bed98673dd5bef374d3b6e4b09e2af76183bf"
version = "v1.0.0-rc1"
[[projects]]
digest = "1:6eea828983c70075ca297bb915ffbcfd3e34c5a50affd94428a65df955c0ff9c"
name = "github.com/pelletier/go-toml"
packages = ["."]
pruneopts = "UT"
revision = "903d9455db9ff1d7ac1ab199062eca7266dd11a3"
version = "v1.6.0"
[[projects]]
digest = "1:7413525ee648f20b4181be7fe8103d0cb98be9e141926a03ee082dc207061e4e"
name = "github.com/phayes/freeport"
packages = ["."]
pruneopts = "UT"
revision = "b8543db493a5ed890c5499e935e2cad7504f3a04"
version = "1.0.2"
[[projects]]
digest = "1:cf31692c14422fa27c83a05292eb5cbe0fb2775972e8f1f8446a71549bd8980b"
name = "github.com/pkg/errors"
packages = ["."]
pruneopts = "UT"
revision = "ba968bfe8b2f7e042a574c888954fccecfa385b4"
version = "v0.8.1"
[[projects]]
digest = "1:bb495ec276ab82d3dd08504bbc0594a65de8c3b22c6f2aaa92d05b73fbf3a82e"
name = "github.com/spf13/afero"
packages = [
".",
"mem",
]
pruneopts = "UT"
revision = "588a75ec4f32903aa5e39a2619ba6a4631e28424"
version = "v1.2.2"
[[projects]]
digest = "1:08d65904057412fc0270fc4812a1c90c594186819243160dc779a402d4b6d0bc"
name = "github.com/spf13/cast"
packages = ["."]
pruneopts = "UT"
revision = "8c9545af88b134710ab1cd196795e7f2388358d7"
version = "v1.3.0"
[[projects]]
digest = "1:645cabccbb4fa8aab25a956cbcbdf6a6845ca736b2c64e197ca7cbb9d210b939"
name = "github.com/spf13/cobra"
packages = ["."]
pruneopts = "UT"
revision = "ef82de70bb3f60c65fb8eebacbb2d122ef517385"
version = "v0.0.3"
[[projects]]
digest = "1:1b753ec16506f5864d26a28b43703c58831255059644351bbcb019b843950900"
name = "github.com/spf13/jwalterweatherman"
packages = ["."]
pruneopts = "UT"
revision = "94f6ae3ed3bceceafa716478c5fbf8d29ca601a1"
version = "v1.1.0"
[[projects]]
digest = "1:c1b1102241e7f645bc8e0c22ae352e8f0dc6484b6cb4d132fa9f24174e0119e2"
name = "github.com/spf13/pflag"
packages = ["."]
pruneopts = "UT"
revision = "298182f68c66c05229eb03ac171abe6e309ee79a"
version = "v1.0.3"
[[projects]]
digest = "1:0b60fc944fb6a7b6c985832bd341bdb7ed8fe894fea330414e7774bb24652962"
name = "github.com/spf13/viper"
packages = ["."]
pruneopts = "UT"
revision = "72b022eb357a56469725dcd03918449e2278d02e"
version = "v1.5.0"
[[projects]]
digest = "1:f4b32291cad5efac2bfdba89ccde6aa04618b62ce06c1a571da2dc4f3f2677fb"
name = "github.com/subosito/gotenv"
packages = ["."]
pruneopts = "UT"
revision = "2ef7124db659d49edac6aa459693a15ae36c671a"
version = "v1.2.0"
[[projects]]
digest = "1:c468422f334a6b46a19448ad59aaffdfc0a36b08fdcc1c749a0b29b6453d7e59"
name = "github.com/valyala/bytebufferpool"
packages = ["."]
pruneopts = "UT"
revision = "e746df99fe4a3986f4d4f79e13c1e0117ce9c2f7"
version = "v1.0.0"
[[projects]]
digest = "1:15ad8a80098fcc7a194b9db6b26d74072a852e4faa957848c8118193d3c69230"
name = "github.com/valyala/fasthttp"
packages = [
".",
"fasthttputil",
"stackless",
]
pruneopts = "UT"
revision = "e5f51c11919d4f66400334047b897ef0a94c6f3c"
version = "v20180529"
[[projects]]
digest = "1:4c93890bbbb5016505e856cb06b5c5a2ff5b7217584d33f2a9071ebef4b5d473"
name = "go.opencensus.io"
packages = [
".",
"internal",
"internal/tagencoding",
"metric/metricdata",
"metric/metricproducer",
"plugin/ocgrpc",
"plugin/ochttp",
"plugin/ochttp/propagation/b3",
"plugin/ochttp/propagation/tracecontext",
"resource",
"stats",
"stats/internal",
"stats/view",
"tag",
"trace",
"trace/internal",
"trace/propagation",
"trace/tracestate",
]
pruneopts = "UT"
revision = "43463a80402d8447b7fce0d2c58edf1687ff0b58"
version = "v0.19.3"
[[projects]]
branch = "master"
digest = "1:bbe51412d9915d64ffaa96b51d409e070665efc5194fcf145c4a27d4133107a4"
name = "golang.org/x/crypto"
packages = ["ssh/terminal"]
pruneopts = "UT"
revision = "e1dfcc566284e143ba8f9afbb3fa563f2a0d212b"
[[projects]]
branch = "master"
digest = "1:01a2f697724170f98739b5261ec830eafc626f56b6786c730578dabcce47649c"
name = "golang.org/x/net"
packages = [
"context",
"context/ctxhttp",
"http/httpguts",
"http2",
"http2/hpack",
"idna",
"internal/socks",
"internal/timeseries",
"proxy",
"trace",
]
pruneopts = "UT"
revision = "a4d6f7feada510cc50e69a37b484cb0fdc6b7876"
[[projects]]
branch = "master"
digest = "1:645cb780e4f3177111b40588f0a7f5950efcfb473e7ff41d8d81b2ba5eaa6ed5"
name = "golang.org/x/oauth2"
packages = [
".",
"google",
"internal",
"jws",
"jwt",
]
pruneopts = "UT"
revision = "9f3314589c9a9136388751d9adae6b0ed400978a"
[[projects]]
branch = "master"
digest = "1:382bb5a7fb4034db3b6a2d19e5a4a6bcf52f4750530603c01ca18a172fa3089b"
name = "golang.org/x/sync"
packages = ["semaphore"]
pruneopts = "UT"
revision = "112230192c580c3556b8cee6403af37a4fc5f28c"
[[projects]]
branch = "master"
digest = "1:656ce95b4cdf841c00825f4cb94f6e0e1422d7d6faaf3094e94cd18884a32251"
name = "golang.org/x/sys"
packages = [
"unix",
"windows",
]
pruneopts = "UT"
revision = "a129542de9ae0895210abff9c95d67a1f33cb93d"
[[projects]]
digest = "1:8d8faad6b12a3a4c819a3f9618cb6ee1fa1cfc33253abeeea8b55336721e3405"
name = "golang.org/x/text"
packages = [
"collate",
"collate/build",
"internal/colltab",
"internal/gen",
"internal/language",
"internal/language/compact",
"internal/tag",
"internal/triegen",
"internal/ucd",
"language",
"secure/bidirule",
"transform",
"unicode/bidi",
"unicode/cldr",
"unicode/norm",
"unicode/rangetable",
]
pruneopts = "UT"
revision = "342b2e1fbaa52c93f31447ad2c6abc048c63e475"
version = "v0.3.2"
[[projects]]
branch = "master"
digest = "1:9fdc2b55e8e0fafe4b41884091e51e77344f7dc511c5acedcfd98200003bff90"
name = "golang.org/x/time"
packages = ["rate"]
pruneopts = "UT"
revision = "9d24e82272b4f38b78bc8cff74fa936d31ccd8ef"
[[projects]]
digest = "1:5f003878aabe31d7f6b842d4de32b41c46c214bb629bb485387dbcce1edf5643"
name = "google.golang.org/api"
packages = ["support/bundler"]
pruneopts = "UT"
revision = "aac82e61c0c8fe133c297b4b59316b9f481e1f0a"
version = "v0.6.0"
[[projects]]
digest = "1:04f2ff15fc59e1ddaf9900ad0e19e5b19586b31f9dafd4d592b617642b239d8f"
name = "google.golang.org/appengine"
packages = [
".",
"internal",
"internal/app_identity",
"internal/base",
"internal/datastore",
"internal/log",
"internal/modules",
"internal/remote_api",
"internal/urlfetch",
"urlfetch",
]
pruneopts = "UT"
revision = "54a98f90d1c46b7731eb8fb305d2a321c30ef610"
version = "v1.5.0"
[[projects]]
branch = "master"
digest = "1:3565a93b7692277a5dea355bc47bd6315754f3246ed07a224be6aec28972a805"
name = "google.golang.org/genproto"
packages = [
"googleapis/api/httpbody",
"googleapis/rpc/status",
"protobuf/field_mask",
]
pruneopts = "UT"
revision = "a7e196e89fd3a3c4d103ca540bd5dac3a736e375"
[[projects]]
digest = "1:e8800ddadd6bce3bc0c5ffd7bc55dbdddc6e750956c10cc10271cade542fccbe"
name = "google.golang.org/grpc"
packages = [
".",
"balancer",
"balancer/base",
"balancer/roundrobin",
"binarylog/grpc_binarylog_v1",
"codes",
"connectivity",
"credentials",
"credentials/internal",
"encoding",
"encoding/proto",
"grpclog",
"internal",
"internal/backoff",
"internal/balancerload",
"internal/binarylog",
"internal/channelz",
"internal/envconfig",
"internal/grpcrand",
"internal/grpcsync",
"internal/syscall",
"internal/transport",
"keepalive",
"metadata",
"naming",
"peer",
"resolver",
"resolver/dns",
"resolver/passthrough",
"stats",
"status",
"tap",
]
pruneopts = "UT"
revision = "501c41df7f472c740d0674ff27122f3f48c80ce7"
version = "v1.21.1"
[[projects]]
digest = "1:2d1fbdc6777e5408cabeb02bf336305e724b925ff4546ded0fa8715a7267922a"
name = "gopkg.in/inf.v0"
packages = ["."]
pruneopts = "UT"
revision = "d2d2541c53f18d2a059457998ce2876cc8e67cbf"
version = "v0.9.1"
[[projects]]
digest = "1:4d2e5a73dc1500038e504a8d78b986630e3626dc027bc030ba5c75da257cdb96"
name = "gopkg.in/yaml.v2"
packages = ["."]
pruneopts = "UT"
revision = "51d6538a90f86fe93ac480b35f37b2be17fef232"
version = "v2.2.2"
[[projects]]
digest = "1:86ad5797d1189de342ed6988fbb76b92dc0429a4d677ad69888d6137efa5712e"
name = "k8s.io/api"
packages = [
"admissionregistration/v1beta1",
"apps/v1",
"apps/v1beta1",
"apps/v1beta2",
"auditregistration/v1alpha1",
"authentication/v1",
"authentication/v1beta1",
"authorization/v1",
"authorization/v1beta1",
"autoscaling/v1",
"autoscaling/v2beta1",
"autoscaling/v2beta2",
"batch/v1",
"batch/v1beta1",
"batch/v2alpha1",
"certificates/v1beta1",
"coordination/v1",
"coordination/v1beta1",
"core/v1",
"events/v1beta1",
"extensions/v1beta1",
"networking/v1",
"networking/v1beta1",
"node/v1alpha1",
"node/v1beta1",
"policy/v1beta1",
"rbac/v1",
"rbac/v1alpha1",
"rbac/v1beta1",
"scheduling/v1",
"scheduling/v1alpha1",
"scheduling/v1beta1",
"settings/v1alpha1",
"storage/v1",
"storage/v1alpha1",
"storage/v1beta1",
]
pruneopts = "UT"
revision = "6e4e0e4f393bf5e8bbff570acd13217aa5a770cd"
version = "kubernetes-1.14.1"
[[projects]]
digest = "1:78f6a824d205c6cb0d011cce241407646b773cb57ee27e8c7e027753b4111075"
name = "k8s.io/apimachinery"
packages = [
"pkg/api/errors",
"pkg/api/meta",
"pkg/api/resource",
"pkg/apis/meta/v1",
"pkg/apis/meta/v1/unstructured",
"pkg/apis/meta/v1beta1",
"pkg/conversion",
"pkg/conversion/queryparams",
"pkg/fields",
"pkg/labels",
"pkg/runtime",
"pkg/runtime/schema",
"pkg/runtime/serializer",
"pkg/runtime/serializer/json",
"pkg/runtime/serializer/protobuf",
"pkg/runtime/serializer/recognizer",
"pkg/runtime/serializer/streaming",
"pkg/runtime/serializer/versioning",
"pkg/selection",
"pkg/types",
"pkg/util/clock",
"pkg/util/errors",
"pkg/util/framer",
"pkg/util/intstr",
"pkg/util/json",
"pkg/util/naming",
"pkg/util/net",
"pkg/util/runtime",
"pkg/util/sets",
"pkg/util/validation",
"pkg/util/validation/field",
"pkg/util/yaml",
"pkg/version",
"pkg/watch",
"third_party/forked/golang/reflect",
]
pruneopts = "UT"
revision = "6a84e37a896db9780c75367af8d2ed2bb944022e"
version = "kubernetes-1.14.1"
[[projects]]
digest = "1:37f699391265222af7da4bf8e443ca03dd834ce362fbb4b19b4d67492ff06781"
name = "k8s.io/client-go"
packages = [
"discovery",
"kubernetes",
"kubernetes/scheme",
"kubernetes/typed/admissionregistration/v1beta1",
"kubernetes/typed/apps/v1",
"kubernetes/typed/apps/v1beta1",
"kubernetes/typed/apps/v1beta2",
"kubernetes/typed/auditregistration/v1alpha1",
"kubernetes/typed/authentication/v1",
"kubernetes/typed/authentication/v1beta1",
"kubernetes/typed/authorization/v1",
"kubernetes/typed/authorization/v1beta1",
"kubernetes/typed/autoscaling/v1",
"kubernetes/typed/autoscaling/v2beta1",
"kubernetes/typed/autoscaling/v2beta2",
"kubernetes/typed/batch/v1",
"kubernetes/typed/batch/v1beta1",
"kubernetes/typed/batch/v2alpha1",
"kubernetes/typed/certificates/v1beta1",
"kubernetes/typed/coordination/v1",
"kubernetes/typed/coordination/v1beta1",
"kubernetes/typed/core/v1",
"kubernetes/typed/events/v1beta1",
"kubernetes/typed/extensions/v1beta1",
"kubernetes/typed/networking/v1",
"kubernetes/typed/networking/v1beta1",
"kubernetes/typed/node/v1alpha1",
"kubernetes/typed/node/v1beta1",
"kubernetes/typed/policy/v1beta1",
"kubernetes/typed/rbac/v1",
"kubernetes/typed/rbac/v1alpha1",
"kubernetes/typed/rbac/v1beta1",
"kubernetes/typed/scheduling/v1",
"kubernetes/typed/scheduling/v1alpha1",
"kubernetes/typed/scheduling/v1beta1",
"kubernetes/typed/settings/v1alpha1",
"kubernetes/typed/storage/v1",
"kubernetes/typed/storage/v1alpha1",
"kubernetes/typed/storage/v1beta1",
"pkg/apis/clientauthentication",
"pkg/apis/clientauthentication/v1alpha1",
"pkg/apis/clientauthentication/v1beta1",
"pkg/version",
"plugin/pkg/client/auth",
"plugin/pkg/client/auth/azure",
"plugin/pkg/client/auth/exec",
"plugin/pkg/client/auth/gcp",
"plugin/pkg/client/auth/oidc",
"plugin/pkg/client/auth/openstack",
"rest",
"rest/watch",
"third_party/forked/golang/template",
"tools/auth",
"tools/clientcmd",
"tools/clientcmd/api",
"tools/clientcmd/api/latest",
"tools/clientcmd/api/v1",
"tools/metrics",
"tools/reference",
"transport",
"util/cert",
"util/connrotation",
"util/flowcontrol",
"util/homedir",
"util/jsonpath",
"util/keyutil",
]
pruneopts = "UT"
revision = "1a26190bd76a9017e289958b9fba936430aa3704"
version = "kubernetes-1.14.1"
[[projects]]
digest = "1:c696379ad201c1e86591785579e16bf6cf886c362e9a7534e8eb0d1028b20582"
name = "k8s.io/klog"
packages = ["."]
pruneopts = "UT"
revision = "e531227889390a39d9533dde61f590fe9f4b0035"
version = "v0.3.0"
[[projects]]
branch = "master"
digest = "1:8b40227d4bf8b431fdab4f9026e6e346f00ac3be5662af367a183f78c57660b3"
name = "k8s.io/utils"
packages = ["integer"]
pruneopts = "UT"
revision = "8fab8cb257d50c8cf94ec9771e74826edbb68fb5"
[[projects]]
digest = "1:7719608fe0b52a4ece56c2dde37bedd95b938677d1ab0f84b8a7852e4c59f849"
name = "sigs.k8s.io/yaml"
packages = ["."]
pruneopts = "UT"
revision = "fd68e9863619f6ec2fdd8625fe1f02e7c877e480"
version = "v1.1.0"
[solve-meta]
analyzer-name = "dep"
analyzer-version = 1
input-imports = [
"github.com/Pallinder/sillyname-go",
"github.com/briandowns/spinner",
"github.com/dapr/dapr/pkg/components",
"github.com/dapr/dapr/pkg/config/modes",
"github.com/docker/docker/client",
"github.com/fatih/color",
"github.com/gocarina/gocsv",
"github.com/google/uuid",
"github.com/mitchellh/go-ps",
"github.com/nightlyone/lockfile",
"github.com/olekukonko/tablewriter",
"github.com/phayes/freeport",
"github.com/spf13/cobra",
"github.com/spf13/viper",
"gopkg.in/yaml.v2",
"k8s.io/api/core/v1",
"k8s.io/apimachinery/pkg/apis/meta/v1",
"k8s.io/client-go/kubernetes",
"k8s.io/client-go/plugin/pkg/client/auth",
"k8s.io/client-go/plugin/pkg/client/auth/gcp",
"k8s.io/client-go/tools/clientcmd",
]
solver-name = "gps-cdcl"
solver-version = 1

View File

@ -1,54 +0,0 @@
# Gopkg.toml example
#
# Refer to https://golang.github.io/dep/docs/Gopkg.toml.html
# for detailed Gopkg.toml documentation.
#
# required = ["github.com/user/thing/cmd/thing"]
# ignored = ["github.com/user/project/pkgX", "bitbucket.org/user/project/pkgA/pkgY"]
#
# [[constraint]]
# name = "github.com/user/project"
# version = "1.0.0"
#
# [[constraint]]
# name = "github.com/user/project2"
# branch = "dev"
# source = "github.com/myfork/project2"
#
# [[override]]
# name = "github.com/x/y"
# version = "2.4.0"
#
# [prune]
# non-go = false
# go-tests = true
# unused-packages = true
[[constraint]]
name = "github.com/fatih/color"
version = "1.7.0"
[[constraint]]
name = "github.com/spf13/cobra"
version = "0.0.3"
[[constraint]]
name = "github.com/spf13/viper"
version = "1.5.0"
[[override]]
version = "kubernetes-1.14.1"
name = "k8s.io/apimachinery"
[[override]]
version = "kubernetes-1.14.1"
name = "k8s.io/api"
[[override]]
version = "kubernetes-1.14.1"
name = "k8s.io/client-go"
[prune]
go-tests = true
unused-packages = true

View File

@ -59,6 +59,7 @@ ifeq ($(LOCAL_OS),Linux)
else ifeq ($(LOCAL_OS),Darwin)
TARGET_OS_LOCAL = darwin
GOLANGCI_LINT:=golangci-lint
PATH := $(PATH):$(HOME)/go/bin/darwin_$(GOARCH)
export ARCHIVE_EXT = .tar.gz
else
TARGET_OS_LOCAL ?= windows
@ -173,7 +174,7 @@ e2e-build-run-k8s: build test-e2e-k8s
################################################################################
.PHONY: test-e2e-upgrade
test-e2e-upgrade: test-deps
gotestsum --jsonfile $(TEST_OUTPUT_FILE) --format standard-verbose -- -timeout 30m -count=1 -tags=e2e ./tests/e2e/upgrade/...
gotestsum --jsonfile $(TEST_OUTPUT_FILE) --format standard-verbose -- -timeout 60m -count=1 -tags=e2e ./tests/e2e/upgrade/...
################################################################################
# Build, E2E Tests for Kubernetes Upgrade #

View File

@ -68,7 +68,7 @@ Install windows Dapr CLI using MSI package.
### Install Dapr on your local machine (self-hosted)
In self-hosted mode, dapr can be initialized using the CLI with the placement, redis and zipkin containers enabled by default(recommended) or without them which also does not require docker to be available in the environment.
In self-hosted mode, Dapr can be initialized using the CLI with the placement, scheduler, redis, and zipkin containers enabled by default (recommended), or without them, which also does not require Docker to be available in the environment.
#### Initialize Dapr
@ -89,10 +89,11 @@ Output should look like so:
✅ Downloaded binaries and completed components set up.
daprd binary has been installed to $HOME/.dapr/bin.
dapr_placement container is running.
dapr_scheduler container is running.
dapr_redis container is running.
dapr_zipkin container is running.
Use `docker ps` to check running containers.
✅ Success! Dapr is up and running. To get started, go here: https://aka.ms/dapr-getting-started
✅ Success! Dapr is up and running. To get started, go here: https://docs.dapr.io/getting-started
```
> Note: To see that Dapr has been installed successfully, from a command prompt run the `docker ps` command and check that the `daprio/dapr:latest`, `dapr_redis` and `dapr_zipkin` container images are all running.
@ -118,10 +119,11 @@ Output should look like so:
✅ Downloaded binaries and completed components set up.
daprd binary has been installed to $HOME/.dapr/bin.
placement binary has been installed.
✅ Success! Dapr is up and running. To get started, go here: https://aka.ms/dapr-getting-started
scheduler binary has been installed.
✅ Success! Dapr is up and running. To get started, go here: https://docs.dapr.io/getting-started
```
>Note: When initializing Dapr with the `--slim` flag only the Dapr runtime binary and the placement service binary are installed. An empty default components folder is created with no default configuration files. During `dapr run` user should use `--resources-path` (`--components-path` is deprecated and will be removed in future releases) to point to a components directory with custom configurations files or alternatively place these files in the default directory. For Linux/MacOS, the default components directory path is `$HOME/.dapr/components` and for Windows it is `%USERPROFILE%\.dapr\components`.
>Note: When initializing Dapr with the `--slim` flag, only the Dapr runtime, placement, and scheduler service binaries are installed. An empty default components folder is created with no default configuration files. During `dapr run`, the user should use `--resources-path` (`--components-path` is deprecated and will be removed in future releases) to point to a components directory with custom configuration files, or alternatively place these files in the default directory. For Linux/MacOS, the default components directory path is `$HOME/.dapr/components` and for Windows it is `%USERPROFILE%\.dapr\components`.
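As a minimal illustration of the default directory resolution described in the note above (the helper name here is illustrative, not the CLI's actual implementation), the components folder can be derived from the user's home directory on both platforms:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// defaultComponentsDir resolves the default components directory described above:
// $HOME/.dapr/components on Linux/macOS and %USERPROFILE%\.dapr\components on
// Windows. os.UserHomeDir covers both cases.
func defaultComponentsDir() (string, error) {
	home, err := os.UserHomeDir()
	if err != nil {
		return "", err
	}
	return filepath.Join(home, ".dapr", "components"), nil
}

func main() {
	dir, err := defaultComponentsDir()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(dir)
}
```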
#### Install a specific runtime version
@ -171,7 +173,7 @@ Move to the bundle directory and run the following command:
> If you are not running the above command from the bundle directory, provide the full path to the bundle directory as input. For example, assuming the bundle directory path is $HOME/daprbundle, run `$HOME/daprbundle/dapr init --from-dir $HOME/daprbundle` to have the same behavior.
> Note: Dapr Installer bundle just contains the placement container apart from the binaries and so `zipkin` and `redis` are not enabled by default. You can pull the images locally either from network or private registry and run as follows:
> Note: The Dapr installer bundle contains only the placement and scheduler containers apart from the binaries, so `zipkin` and `redis` are not enabled by default. You can pull the images locally, either from the network or a private registry, and run them as follows:
```bash
docker run --name "dapr_zipkin" --restart always -d -p 9411:9411 openzipkin/zipkin
@ -199,6 +201,9 @@ dapr init --network dapr-network
> Note: When installed to a specific Docker network, you will need to add the `--placement-host-address` argument to `dapr run` commands run in any containers within that network.
> The format of the `--placement-host-address` argument is either `<hostname>` or `<hostname>:<port>`. If the port is omitted, the default port `6050` for Windows and `50005` for Linux/MacOS applies.
> Note: When installed to a specific Docker network, you will need to add the `--scheduler-host-address` argument to `dapr run` commands run in any containers within that network.
> The format of the `--scheduler-host-address` argument is either `<hostname>` or `<hostname>:<port>`. If the port is omitted, the default port `6060` for Windows and `50006` for Linux/MacOS applies.
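For illustration only (this helper is hypothetical, not part of the CLI), the `<hostname>` or `<hostname>:<port>` format described above can be normalized by appending the platform's default port when none is given:

```go
package main

import (
	"fmt"
	"net"
)

// withDefaultPort is a hypothetical helper: it accepts "<hostname>" or
// "<hostname>:<port>" and appends defaultPort when the port is omitted,
// mirroring the flag format described in the notes above.
func withDefaultPort(addr, defaultPort string) string {
	if _, _, err := net.SplitHostPort(addr); err == nil {
		return addr // already has a port
	}
	return net.JoinHostPort(addr, defaultPort)
}

func main() {
	fmt.Println(withDefaultPort("dapr_scheduler", "50006"))       // dapr_scheduler:50006
	fmt.Println(withDefaultPort("dapr_scheduler:50007", "50006")) // dapr_scheduler:50007
}
```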
#### Install with a specific container runtime
You can install the Dapr runtime using a specific container runtime
@ -228,7 +233,7 @@ For more details, see the docs for dev containers with [Visual Studio Code](http
### Uninstall Dapr in a standalone mode
Uninstalling will remove daprd binary and the placement container (if installed with Docker or the placement binary if not).
Uninstalling will remove the daprd binary along with the placement and scheduler containers (if installed with Docker), or the placement and scheduler binaries (if not).
```bash
@ -237,7 +242,7 @@ dapr uninstall
> For Linux users, if you run your docker cmds with sudo, you need to use "**sudo dapr uninstall**" to remove the containers.
The command above won't remove the redis or zipkin containers by default in case you were using it for other purposes. It will also not remove the default dapr folder that was created on `dapr init`. To remove all the containers (placement, redis, zipkin) and also the default dapr folder created on init run:
The command above won't remove the redis or zipkin containers by default in case you were using them for other purposes. It will also not remove the default dapr folder that was created on `dapr init`. To remove all the containers (placement, scheduler, redis, zipkin) and also the default dapr folder created on init, run:
```bash
dapr uninstall --all
@ -245,7 +250,7 @@ dapr uninstall --all
The above command can also be run when Dapr has been installed in a non-Docker environment; in that case it will only remove the installed binaries and the default dapr folder.
> NB: The `dapr uninstall` command will always try to remove the placement binary/service and will throw an error is not able to.
> NB: The `dapr uninstall` command will always try to remove the placement and scheduler binaries/services and will throw an error if it is not able to.
**You should always run a `dapr uninstall` before running another `dapr init`.**
@ -284,7 +289,7 @@ Output should look like as follows:
Note: To install Dapr using Helm, see here: https://docs.dapr.io/getting-started/install-dapr/#install-with-helm-advanced
✅ Deploying the Dapr control plane to your cluster...
✅ Success! Dapr has been installed to namespace dapr-system. To verify, run "dapr status -k" in your terminal. To get started, go here: https://aka.ms/dapr-getting-started
✅ Success! Dapr has been installed to namespace dapr-system. To verify, run "dapr status -k" in your terminal. To get started, go here: https://docs.dapr.io/getting-started
```
#### Supplying Helm values
@ -407,7 +412,7 @@ dapr init --network dapr-network
dapr run --app-id nodeapp --placement-host-address dapr_placement node app.js
```
> Note: When in a specific Docker network, the Redis, Zipkin and placement service containers are given specific network aliases, `dapr_redis`, `dapr_zipkin` and `dapr_placement`, respectively. The default configuration files reflect the network alias rather than `localhost` when a docker network is specified.
> Note: When in a specific Docker network, the Redis, Zipkin, placement, and scheduler service containers are given the network aliases `dapr_redis`, `dapr_zipkin`, `dapr_placement`, and `dapr_scheduler`, respectively. The default configuration files reflect the network alias rather than `localhost` when a docker network is specified.
### Use gRPC

View File

@ -21,6 +21,11 @@ import (
"net/url"
"os"
"path/filepath"
"strconv"
"github.com/dapr/dapr/pkg/runtime"
"k8s.io/apimachinery/pkg/api/resource"
"github.com/spf13/cobra"
@ -60,8 +65,8 @@ var (
annotateReadinessProbeThreshold int
annotateDaprImage string
annotateAppSSL bool
annotateMaxRequestBodySize int
annotateReadBufferSize int
annotateMaxRequestBodySize string
annotateReadBufferSize string
annotateHTTPStreamRequestBody bool
annotateGracefulShutdownSeconds int
annotateEnableAPILogging bool
@ -221,7 +226,6 @@ func readInputsFromFS(path string) ([]io.Reader, error) {
inputs = append(inputs, file)
return nil
})
if err != nil {
return nil, err
}
@ -319,12 +323,23 @@ func getOptionsFromFlags() kubernetes.AnnotateOptions {
if annotateAppSSL {
o = append(o, kubernetes.WithAppSSL())
}
if annotateMaxRequestBodySize != -1 {
o = append(o, kubernetes.WithMaxRequestBodySize(annotateMaxRequestBodySize))
if annotateMaxRequestBodySize != "-1" {
if q, err := resource.ParseQuantity(annotateMaxRequestBodySize); err != nil {
print.FailureStatusEvent(os.Stderr, "error parsing value of max-body-size: %s, error: %s", annotateMaxRequestBodySize, err.Error())
os.Exit(1)
} else {
o = append(o, kubernetes.WithMaxRequestBodySize(int(q.Value())))
}
}
if annotateReadBufferSize != -1 {
o = append(o, kubernetes.WithReadBufferSize(annotateReadBufferSize))
if annotateReadBufferSize != "-1" {
if q, err := resource.ParseQuantity(annotateReadBufferSize); err != nil {
print.FailureStatusEvent(os.Stderr, "error parsing value of read-buffer-size: %s, error: %s", annotateMaxRequestBodySize, err.Error())
os.Exit(1)
} else {
o = append(o, kubernetes.WithReadBufferSize(int(q.Value())))
}
}
if annotateHTTPStreamRequestBody {
o = append(o, kubernetes.WithHTTPStreamRequestBody())
}
@ -386,8 +401,8 @@ func init() {
AnnotateCmd.Flags().StringVar(&annotateDaprImage, "dapr-image", "", "The image to use for the dapr sidecar container")
AnnotateCmd.Flags().BoolVar(&annotateAppSSL, "app-ssl", false, "Enable SSL for the app")
AnnotateCmd.Flags().MarkDeprecated("app-ssl", "This flag is deprecated and will be removed in a future release. Use \"app-protocol\" flag with https or grpcs instead")
AnnotateCmd.Flags().IntVar(&annotateMaxRequestBodySize, "max-request-body-size", -1, "The maximum request body size to use")
AnnotateCmd.Flags().IntVar(&annotateReadBufferSize, "http-read-buffer-size", -1, "The maximum size of HTTP header read buffer in kilobytes")
AnnotateCmd.Flags().StringVar(&annotateMaxRequestBodySize, "max-body-size", strconv.Itoa(runtime.DefaultMaxRequestBodySize>>20)+"Mi", "The maximum request body size to use")
AnnotateCmd.Flags().StringVar(&annotateReadBufferSize, "read-buffer-size", strconv.Itoa(runtime.DefaultReadBufferSize>>10)+"Ki", "The maximum size of HTTP header read buffer in kilobytes")
AnnotateCmd.Flags().BoolVar(&annotateHTTPStreamRequestBody, "http-stream-request-body", false, "Enable streaming request body for HTTP")
AnnotateCmd.Flags().IntVar(&annotateGracefulShutdownSeconds, "graceful-shutdown-seconds", -1, "The number of seconds to wait for the app to shutdown")
AnnotateCmd.Flags().BoolVar(&annotateEnableAPILogging, "enable-api-logging", false, "Enable API logging for the Dapr sidecar")
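As a quick, self-contained sketch of the quantity parsing used in the annotate changes above (assuming `k8s.io/apimachinery/pkg/api/resource`, which the diff imports), values such as `4Mi` and `4Ki` parse into byte counts via `resource.ParseQuantity` and `Value()`:

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	for _, s := range []string{"4Mi", "4Ki", "1G"} {
		q, err := resource.ParseQuantity(s)
		if err != nil {
			fmt.Println("parse error:", err)
			continue
		}
		// Value() returns the quantity as an integer number of bytes,
		// which is what the annotate options above pass along as an int.
		fmt.Printf("%s -> %d bytes\n", s, q.Value())
	}
}
```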

View File

@ -18,6 +18,7 @@ import (
"net"
"os"
"os/signal"
"strconv"
"github.com/pkg/browser"
"github.com/spf13/cobra"
@ -180,9 +181,9 @@ dapr dashboard -k -p 0
}()
// url for dashboard after port forwarding.
webURL := fmt.Sprintf("http://%s", net.JoinHostPort(dashboardHost, fmt.Sprint(portForward.LocalPort))) //nolint: perfsprint
webURL := "http://" + net.JoinHostPort(dashboardHost, strconv.Itoa(portForward.LocalPort))
print.InfoStatusEvent(os.Stdout, fmt.Sprintf("Dapr dashboard found in namespace:\t%s", foundNamespace))
print.InfoStatusEvent(os.Stdout, "Dapr dashboard found in namespace:\t"+foundNamespace)
print.InfoStatusEvent(os.Stdout, fmt.Sprintf("Dapr dashboard available at:\t%s\n", webURL))
err = browser.OpenURL(webURL)
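For context, a small sketch of the perfsprint-style rewrite applied in the dashboard change above: `fmt.Sprintf` calls with a single `%s`/`%d` verb are replaced by plain concatenation and `strconv.Itoa`, producing the same string without the formatting overhead.

```go
package main

import (
	"fmt"
	"net"
	"strconv"
)

func main() {
	host, port := "localhost", 8080

	// Before: the form flagged by the perfsprint linter.
	before := fmt.Sprintf("http://%s", net.JoinHostPort(host, fmt.Sprint(port)))

	// After: concatenation plus strconv.Itoa, as in the dashboard change above.
	after := "http://" + net.JoinHostPort(host, strconv.Itoa(port))

	fmt.Println(before == after, after) // true http://localhost:8080
}
```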

View File

@ -45,6 +45,7 @@ var (
fromDir string
containerRuntime string
imageVariant string
schedulerVolume string
)
var InitCmd = &cobra.Command{
@ -146,7 +147,7 @@ dapr init --runtime-path <path-to-install-directory>
print.FailureStatusEvent(os.Stderr, err.Error())
os.Exit(1)
}
print.SuccessStatusEvent(os.Stdout, fmt.Sprintf("Success! Dapr has been installed to namespace %s. To verify, run `dapr status -k' in your terminal. To get started, go here: https://aka.ms/dapr-getting-started", config.Namespace))
print.SuccessStatusEvent(os.Stdout, fmt.Sprintf("Success! Dapr has been installed to namespace %s. To verify, run `dapr status -k' in your terminal. To get started, go here: https://docs.dapr.io/getting-started", config.Namespace))
} else {
dockerNetwork := ""
imageRegistryURI := ""
@ -170,12 +171,12 @@ dapr init --runtime-path <path-to-install-directory>
print.FailureStatusEvent(os.Stdout, "Invalid container runtime. Supported values are docker and podman.")
os.Exit(1)
}
err := standalone.Init(runtimeVersion, dashboardVersion, dockerNetwork, slimMode, imageRegistryURI, fromDir, containerRuntime, imageVariant, daprRuntimePath)
err := standalone.Init(runtimeVersion, dashboardVersion, dockerNetwork, slimMode, imageRegistryURI, fromDir, containerRuntime, imageVariant, daprRuntimePath, &schedulerVolume)
if err != nil {
print.FailureStatusEvent(os.Stderr, err.Error())
os.Exit(1)
}
print.SuccessStatusEvent(os.Stdout, "Success! Dapr is up and running. To get started, go here: https://aka.ms/dapr-getting-started")
print.SuccessStatusEvent(os.Stdout, "Success! Dapr is up and running. To get started, go here: https://docs.dapr.io/getting-started")
}
},
}
@ -210,7 +211,7 @@ func init() {
InitCmd.Flags().BoolVarP(&devMode, "dev", "", false, "Use Dev mode. Deploy Redis, Zipkin also in the Kubernetes cluster")
InitCmd.Flags().BoolVarP(&wait, "wait", "", false, "Wait for Kubernetes initialization to complete")
InitCmd.Flags().UintVarP(&timeout, "timeout", "", 300, "The wait timeout for the Kubernetes installation")
InitCmd.Flags().BoolVarP(&slimMode, "slim", "s", false, "Exclude placement service, Redis and Zipkin containers from self-hosted installation")
InitCmd.Flags().BoolVarP(&slimMode, "slim", "s", false, "Exclude placement service, scheduler service, Redis and Zipkin containers from self-hosted installation")
InitCmd.Flags().StringVarP(&runtimeVersion, "runtime-version", "", defaultRuntimeVersion, "The version of the Dapr runtime to install, for example: 1.0.0")
InitCmd.Flags().StringVarP(&dashboardVersion, "dashboard-version", "", defaultDashboardVersion, "The version of the Dapr dashboard to install, for example: 0.13.0")
InitCmd.Flags().StringVarP(&initNamespace, "namespace", "n", "dapr-system", "The Kubernetes namespace to install Dapr in")
@ -219,6 +220,7 @@ func init() {
InitCmd.Flags().String("network", "", "The Docker network on which to deploy the Dapr runtime")
InitCmd.Flags().StringVarP(&fromDir, "from-dir", "", "", "Use Dapr artifacts from local directory for self-hosted installation")
InitCmd.Flags().StringVarP(&imageVariant, "image-variant", "", "", "The image variant to use for the Dapr runtime, for example: mariner")
InitCmd.Flags().StringVarP(&schedulerVolume, "scheduler-volume", "", "dapr_scheduler", "Self-hosted only. Specify a volume for the scheduler service data directory.")
InitCmd.Flags().BoolP("help", "h", false, "Print this help message")
InitCmd.Flags().StringArrayVar(&values, "set", []string{}, "set values on the command line (can specify multiple or separate values with commas: key1=val1,key2=val2)")
InitCmd.Flags().String("image-registry", "", "Custom/private docker image repository URL")

View File

@ -67,7 +67,7 @@ dapr mtls export -o ./certs
}
dir, _ := filepath.Abs(exportPath)
print.SuccessStatusEvent(os.Stdout, fmt.Sprintf("Trust certs successfully exported to %s", dir))
print.SuccessStatusEvent(os.Stdout, "Trust certs successfully exported to "+dir)
},
PostRun: func(cmd *cobra.Command, args []string) {
kubernetes.CheckForCertExpiry()

View File

@ -103,7 +103,7 @@ dapr mtls renew-cert -k --valid-until <no of days> --restart
print.InfoStatusEvent(os.Stdout, "Using password file to generate root certificate")
err = kubernetes.RenewCertificate(kubernetes.RenewCertificateParams{
RootPrivateKeyFilePath: privateKey,
ValidUntil: time.Hour * time.Duration(validUntil*24),
ValidUntil: time.Hour * time.Duration(validUntil*24), //nolint:gosec
Timeout: timeout,
ImageVariant: imageVariant,
})
@ -113,7 +113,7 @@ dapr mtls renew-cert -k --valid-until <no of days> --restart
} else {
print.InfoStatusEvent(os.Stdout, "generating fresh certificates")
err = kubernetes.RenewCertificate(kubernetes.RenewCertificateParams{
ValidUntil: time.Hour * time.Duration(validUntil*24),
ValidUntil: time.Hour * time.Duration(validUntil*24), //nolint:gosec
Timeout: timeout,
ImageVariant: imageVariant,
})
@ -129,7 +129,7 @@ dapr mtls renew-cert -k --valid-until <no of days> --restart
logErrorAndExit(err)
}
print.SuccessStatusEvent(os.Stdout,
fmt.Sprintf("Certificate rotation is successful! Your new certicate is valid through %s", expiry.Format(time.RFC1123)))
"Certificate rotation is successful! Your new certicate is valid through "+expiry.Format(time.RFC1123))
if restartDaprServices {
restartControlPlaneService()
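A tiny sketch of the `--valid-until` computation shown above (the day count is illustrative): the number of days is converted to a `time.Duration` by multiplying by 24 hours, and the explicit integer-to-`time.Duration` conversion appears to be what the `//nolint:gosec` comments are silencing.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	validUntil := 365 // days, e.g. as passed via --valid-until

	// Same shape as the ValidUntil computation above: days * 24 hours.
	d := time.Hour * time.Duration(validUntil*24)

	fmt.Println(d) // 8760h0m0s
}
```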

View File

@ -20,13 +20,19 @@ import (
"os"
"path/filepath"
"runtime"
"slices"
"strconv"
"strings"
"time"
"golang.org/x/mod/semver"
"github.com/spf13/cobra"
"github.com/spf13/pflag"
"github.com/spf13/viper"
daprRuntime "github.com/dapr/dapr/pkg/runtime"
"github.com/dapr/cli/pkg/kubernetes"
"github.com/dapr/cli/pkg/metadata"
"github.com/dapr/cli/pkg/print"
@ -38,34 +44,35 @@ import (
)
var (
appPort int
profilePort int
appID string
configFile string
port int
grpcPort int
internalGRPCPort int
maxConcurrency int
enableProfiling bool
logLevel string
protocol string
componentsPath string
resourcesPaths []string
appSSL bool
metricsPort int
maxRequestBodySize int
readBufferSize int
unixDomainSocket string
enableAppHealth bool
appHealthPath string
appHealthInterval int
appHealthTimeout int
appHealthThreshold int
enableAPILogging bool
apiListenAddresses string
runFilePath string
appChannelAddress string
enableRunK8s bool
appPort int
profilePort int
appID string
configFile string
port int
grpcPort int
internalGRPCPort int
maxConcurrency int
enableProfiling bool
logLevel string
protocol string
componentsPath string
resourcesPaths []string
appSSL bool
metricsPort int
maxRequestBodySize string
readBufferSize string
unixDomainSocket string
enableAppHealth bool
appHealthPath string
appHealthInterval int
appHealthTimeout int
appHealthThreshold int
enableAPILogging bool
apiListenAddresses string
schedulerHostAddress string
runFilePath string
appChannelAddress string
enableRunK8s bool
)
const (
@ -73,6 +80,15 @@ const (
runtimeWaitTimeoutInSeconds = 60
)
// Flags that are compatible with --run-file
var runFileCompatibleFlags = []string{
"kubernetes",
"help",
"version",
"runtime-path",
"log-as-json",
}
var RunCmd = &cobra.Command{
Use: "run",
Short: "Run Dapr and (optionally) your application side by side. Supported platforms: Self-hosted",
@ -123,6 +139,14 @@ dapr run --run-file /path/to/directory -k
},
Run: func(cmd *cobra.Command, args []string) {
if len(runFilePath) > 0 {
// Check for incompatible flags
incompatibleFlags := detectIncompatibleFlags(cmd)
if len(incompatibleFlags) > 0 {
// Print warning message about incompatible flags
warningMsg := "The following flags are ignored when using --run-file and should be configured in the run file instead: " + strings.Join(incompatibleFlags, ", ")
print.WarningStatusEvent(os.Stdout, warningMsg)
}
runConfigFilePath, err := getRunFilePath(runFilePath)
if err != nil {
print.FailureStatusEvent(os.Stderr, "Failed to get run file path: %v", err)
@ -165,25 +189,26 @@ dapr run --run-file /path/to/directory -k
}
sharedRunConfig := &standalone.SharedRunConfig{
ConfigFile: configFile,
EnableProfiling: enableProfiling,
LogLevel: logLevel,
MaxConcurrency: maxConcurrency,
AppProtocol: protocol,
PlacementHostAddr: viper.GetString("placement-host-address"),
ComponentsPath: componentsPath,
ResourcesPaths: resourcesPaths,
AppSSL: appSSL,
MaxRequestBodySize: maxRequestBodySize,
HTTPReadBufferSize: readBufferSize,
EnableAppHealth: enableAppHealth,
AppHealthPath: appHealthPath,
AppHealthInterval: appHealthInterval,
AppHealthTimeout: appHealthTimeout,
AppHealthThreshold: appHealthThreshold,
EnableAPILogging: enableAPILogging,
APIListenAddresses: apiListenAddresses,
DaprdInstallPath: daprRuntimePath,
ConfigFile: configFile,
EnableProfiling: enableProfiling,
LogLevel: logLevel,
MaxConcurrency: maxConcurrency,
AppProtocol: protocol,
PlacementHostAddr: viper.GetString("placement-host-address"),
ComponentsPath: componentsPath,
ResourcesPaths: resourcesPaths,
AppSSL: appSSL,
MaxRequestBodySize: maxRequestBodySize,
HTTPReadBufferSize: readBufferSize,
EnableAppHealth: enableAppHealth,
AppHealthPath: appHealthPath,
AppHealthInterval: appHealthInterval,
AppHealthTimeout: appHealthTimeout,
AppHealthThreshold: appHealthThreshold,
EnableAPILogging: enableAPILogging,
APIListenAddresses: apiListenAddresses,
SchedulerHostAddress: schedulerHostAddress,
DaprdInstallPath: daprRuntimePath,
}
output, err := runExec.NewOutput(&standalone.RunConfig{
AppID: appID,
@ -225,6 +250,15 @@ dapr run --run-file /path/to/directory -k
output.DaprHTTPPort,
output.DaprGRPCPort)
}
if (daprVer.RuntimeVersion != "edge") && (semver.Compare(fmt.Sprintf("v%v", daprVer.RuntimeVersion), "v1.14.0-rc.1") == -1) {
print.InfoStatusEvent(os.Stdout, "The scheduler is only compatible with dapr runtime 1.14 onwards.")
for i, arg := range output.DaprCMD.Args {
if strings.HasPrefix(arg, "--scheduler-host-address") {
output.DaprCMD.Args[i] = ""
}
}
}
print.InfoStatusEvent(os.Stdout, startInfo)
output.DaprCMD.Stdout = os.Stdout
@ -306,14 +340,14 @@ dapr run --run-file /path/to/directory -k
stdErrPipe, pipeErr := output.AppCMD.StderrPipe()
if pipeErr != nil {
print.FailureStatusEvent(os.Stderr, fmt.Sprintf("Error creating stderr for App: %s", err.Error()))
print.FailureStatusEvent(os.Stderr, "Error creating stderr for App: "+err.Error())
appRunning <- false
return
}
stdOutPipe, pipeErr := output.AppCMD.StdoutPipe()
if pipeErr != nil {
print.FailureStatusEvent(os.Stderr, fmt.Sprintf("Error creating stdout for App: %s", err.Error()))
print.FailureStatusEvent(os.Stderr, "Error creating stdout for App: "+err.Error())
appRunning <- false
return
}
@ -322,13 +356,13 @@ dapr run --run-file /path/to/directory -k
outScanner := bufio.NewScanner(stdOutPipe)
go func() {
for errScanner.Scan() {
fmt.Println(print.Blue(fmt.Sprintf("== APP == %s", errScanner.Text())))
fmt.Println(print.Blue("== APP == " + errScanner.Text()))
}
}()
go func() {
for outScanner.Scan() {
fmt.Println(print.Blue(fmt.Sprintf("== APP == %s", outScanner.Text())))
fmt.Println(print.Blue("== APP == " + outScanner.Text()))
}
}()
@ -382,7 +416,7 @@ dapr run --run-file /path/to/directory -k
}
appCommand := strings.Join(args, " ")
print.InfoStatusEvent(os.Stdout, fmt.Sprintf("Updating metadata for app command: %s", appCommand))
print.InfoStatusEvent(os.Stdout, "Updating metadata for app command: "+appCommand)
err = metadata.Put(output.DaprHTTPPort, "appCommand", appCommand, output.AppID, unixDomainSocket)
if err != nil {
print.WarningStatusEvent(os.Stdout, "Could not update sidecar metadata for appCommand: %s", err.Error())
@ -454,13 +488,14 @@ func init() {
// By marking this as deprecated, the flag will be hidden from the help menu, but will continue to work. It will show a warning message when used.
RunCmd.Flags().MarkDeprecated("components-path", "This flag is deprecated and will be removed in the future releases. Use \"resources-path\" flag instead")
RunCmd.Flags().String("placement-host-address", "localhost", "The address of the placement service. Format is either <hostname> for default port or <hostname>:<port> for custom port")
RunCmd.Flags().StringVarP(&schedulerHostAddress, "scheduler-host-address", "", "localhost", "The address of the scheduler service. Format is either <hostname> for default port or <hostname>:<port> for custom port")
// TODO: Remove below flag once the flag is removed in runtime in future release.
RunCmd.Flags().BoolVar(&appSSL, "app-ssl", false, "Enable https when Dapr invokes the application")
RunCmd.Flags().MarkDeprecated("app-ssl", "This flag is deprecated and will be removed in the future releases. Use \"app-protocol\" flag with https or grpcs values instead")
RunCmd.Flags().IntVarP(&metricsPort, "metrics-port", "M", -1, "The port of metrics on dapr")
RunCmd.Flags().BoolP("help", "h", false, "Print this help message")
RunCmd.Flags().IntVarP(&maxRequestBodySize, "dapr-http-max-request-size", "", -1, "Max size of request body in MB")
RunCmd.Flags().IntVarP(&readBufferSize, "dapr-http-read-buffer-size", "", -1, "HTTP header read buffer in KB")
RunCmd.Flags().StringVarP(&maxRequestBodySize, "max-body-size", "", strconv.Itoa(daprRuntime.DefaultMaxRequestBodySize>>20)+"Mi", "Max size of request body in MB")
RunCmd.Flags().StringVarP(&readBufferSize, "read-buffer-size", "", strconv.Itoa(daprRuntime.DefaultReadBufferSize>>10)+"Ki", "HTTP header read buffer in KB")
RunCmd.Flags().StringVarP(&unixDomainSocket, "unix-domain-socket", "u", "", "Path to a unix domain socket dir. If specified, Dapr API servers will use Unix Domain Sockets")
RunCmd.Flags().BoolVar(&enableAppHealth, "enable-app-health-check", false, "Enable health checks for the application using the protocol defined with app-protocol")
RunCmd.Flags().StringVar(&appHealthPath, "app-health-check-path", "", "Path used for health checks; HTTP only")
@ -494,6 +529,8 @@ func executeRun(runTemplateName, runFilePath string, apps []runfileconfig.App) (
// Set defaults if zero value provided in config yaml.
app.RunConfig.SetDefaultFromSchema()
app.RunConfig.SchedulerHostAddress = validateSchedulerHostAddress(daprVer.RuntimeVersion, app.RunConfig.SchedulerHostAddress)
// Validate validates the configs and modifies the ports to free ports, appId etc.
err := app.RunConfig.Validate()
if err != nil {
@ -510,6 +547,7 @@ func executeRun(runTemplateName, runFilePath string, apps []runfileconfig.App) (
exitWithError = true
break
}
// Combined multiwriter for logs.
var appDaprdWriter io.Writer
// appLogWriter is used when app command is present.
@ -650,6 +688,17 @@ func executeRunWithAppsConfigFile(runFilePath string, k8sEnabled bool) {
}
}
// populate the scheduler host address based on the dapr version.
func validateSchedulerHostAddress(version, address string) string {
// If no SchedulerHostAddress is supplied, set it to default value.
if semver.Compare(fmt.Sprintf("v%v", version), "v1.15.0-rc.0") == 1 {
if address == "" {
return "localhost"
}
}
return address
}
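A standalone sketch (assuming `golang.org/x/mod/semver`, which the diff imports) of the version comparison this function relies on; note that `semver.Compare` requires a `v` prefix, which is why the code above formats the runtime version with `fmt.Sprintf("v%v", ...)`:

```go
package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

func main() {
	// Compare returns -1, 0, or +1. Release versions sort after their
	// release candidates, so 1.15.0 > 1.15.0-rc.0.
	fmt.Println(semver.Compare("v1.15.0", "v1.15.0-rc.0"))      // 1
	fmt.Println(semver.Compare("v1.14.0", "v1.15.0-rc.0"))      // -1
	fmt.Println(semver.Compare("v1.15.0-rc.0", "v1.15.0-rc.0")) // 0

	// Without the "v" prefix the string is not a valid semantic version
	// and compares lower than any valid one.
	fmt.Println(semver.IsValid("1.15.0")) // false
}
```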
func getRunConfigFromRunFile(runFilePath string) (runfileconfig.RunFileConfig, []runfileconfig.App, error) {
config := runfileconfig.RunFileConfig{}
apps, err := config.GetApps(runFilePath)
@ -1031,3 +1080,26 @@ func getRunFilePath(path string) (string, error) {
}
return path, nil
}
// getConflictingFlags checks if any flags are set other than the ones passed in the excludedFlags slice.
// Used for logic or notifications when any of the flags are conflicting and should not be used together.
func getConflictingFlags(cmd *cobra.Command, excludedFlags ...string) []string {
var conflictingFlags []string
cmd.Flags().Visit(func(f *pflag.Flag) {
if !slices.Contains(excludedFlags, f.Name) {
conflictingFlags = append(conflictingFlags, f.Name)
}
})
return conflictingFlags
}
// detectIncompatibleFlags checks if any incompatible flags are used with --run-file
// and returns a slice of the flag names that were used
func detectIncompatibleFlags(cmd *cobra.Command) []string {
if runFilePath == "" {
return nil // No run file specified, so no incompatibilities
}
// Get all flags that are not in the compatible list
return getConflictingFlags(cmd, append(runFileCompatibleFlags, "run-file")...)
}
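As a self-contained illustration of the cobra pattern the two helpers above use (flag and command names here are illustrative), `Flags().Visit` walks only the flags the user actually set, so filtering against an allow-list yields the conflicting flags:

```go
package main

import (
	"fmt"
	"slices"

	"github.com/spf13/cobra"
	"github.com/spf13/pflag"
)

// changedFlagsExcept reports the names of flags the user set, minus an allow-list.
func changedFlagsExcept(cmd *cobra.Command, allowed ...string) []string {
	var conflicting []string
	cmd.Flags().Visit(func(f *pflag.Flag) { // Visit only sees flags that were changed
		if !slices.Contains(allowed, f.Name) {
			conflicting = append(conflicting, f.Name)
		}
	})
	return conflicting
}

func main() {
	cmd := &cobra.Command{Use: "demo"}
	cmd.Flags().String("app-id", "", "")
	cmd.Flags().String("run-file", "", "")
	cmd.Flags().Bool("kubernetes", false, "")

	_ = cmd.Flags().Set("app-id", "myapp")
	_ = cmd.Flags().Set("run-file", "dapr.yaml")

	// "app-id" conflicts; "run-file" is allowed; "kubernetes" was never set.
	fmt.Println(changedFlagsExcept(cmd, "run-file", "kubernetes")) // [app-id]
}
```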

cmd/run_test.go (new file, 78 lines)
View File

@ -0,0 +1,78 @@
package cmd
import (
"testing"
"github.com/spf13/cobra"
"github.com/stretchr/testify/assert"
)
func TestValidateSchedulerHostAddress(t *testing.T) {
t.Run("test scheduler host address - v1.14.0-rc.0", func(t *testing.T) {
address := validateSchedulerHostAddress("1.14.0-rc.0", "")
assert.Equal(t, "", address)
})
t.Run("test scheduler host address - v1.15.0-rc.0", func(t *testing.T) {
address := validateSchedulerHostAddress("1.15.0", "")
assert.Equal(t, "localhost:50006", address)
})
}
func TestDetectIncompatibleFlags(t *testing.T) {
// Setup a temporary run file path to trigger the incompatible flag check
originalRunFilePath := runFilePath
runFilePath = "some/path"
defer func() {
// Restore the original runFilePath
runFilePath = originalRunFilePath
}()
t.Run("detect incompatible flags", func(t *testing.T) {
// Create a test command with flags
cmd := &cobra.Command{Use: "test"}
cmd.Flags().String("app-id", "", "")
cmd.Flags().String("dapr-http-port", "", "")
cmd.Flags().String("kubernetes", "", "") // Compatible flag
cmd.Flags().String("runtime-path", "", "") // Compatible flag
cmd.Flags().String("log-as-json", "", "") // Compatible flag
// Mark flags as changed
cmd.Flags().Set("app-id", "myapp")
cmd.Flags().Set("dapr-http-port", "3500")
cmd.Flags().Set("kubernetes", "true")
cmd.Flags().Set("runtime-path", "/path/to/runtime")
cmd.Flags().Set("log-as-json", "true")
// Test detection
incompatibleFlags := detectIncompatibleFlags(cmd)
assert.Len(t, incompatibleFlags, 2)
assert.Contains(t, incompatibleFlags, "app-id")
assert.Contains(t, incompatibleFlags, "dapr-http-port")
assert.NotContains(t, incompatibleFlags, "kubernetes")
assert.NotContains(t, incompatibleFlags, "runtime-path")
assert.NotContains(t, incompatibleFlags, "log-as-json")
})
t.Run("no incompatible flags when run file not specified", func(t *testing.T) {
// Create a test command with flags
cmd := &cobra.Command{Use: "test"}
cmd.Flags().String("app-id", "", "")
cmd.Flags().String("dapr-http-port", "", "")
// Mark flags as changed
cmd.Flags().Set("app-id", "myapp")
cmd.Flags().Set("dapr-http-port", "3500")
// Temporarily clear runFilePath
originalRunFilePath := runFilePath
runFilePath = ""
defer func() {
runFilePath = originalRunFilePath
}()
// Test detection
incompatibleFlags := detectIncompatibleFlags(cmd)
assert.Nil(t, incompatibleFlags)
})
}

View File

@ -98,7 +98,7 @@ dapr stop --run-file /path/to/directory -k
func init() {
StopCmd.Flags().StringVarP(&stopAppID, "app-id", "a", "", "The application id to be stopped")
StopCmd.Flags().StringVarP(&runFilePath, "run-file", "f", "", "Path to the run template file for the list of apps to stop")
StopCmd.Flags().BoolVarP(&stopK8s, "kubernetes", "k", false, "Stop deployments in Kunernetes based on multi-app run file")
StopCmd.Flags().BoolVarP(&stopK8s, "kubernetes", "k", false, "Stop deployments in Kubernetes based on multi-app run file")
StopCmd.Flags().BoolP("help", "h", false, "Print this help message")
RootCmd.AddCommand(StopCmd)
}

View File

@ -43,7 +43,7 @@ var UninstallCmd = &cobra.Command{
# Uninstall from self-hosted mode
dapr uninstall
# Uninstall from self-hosted mode and remove .dapr directory, Redis, Placement and Zipkin containers
# Uninstall from self-hosted mode and remove .dapr directory, Redis, Placement, Scheduler, and Zipkin containers
dapr uninstall --all
# Uninstall from Kubernetes
@ -99,7 +99,7 @@ func init() {
UninstallCmd.Flags().BoolVarP(&uninstallKubernetes, "kubernetes", "k", false, "Uninstall Dapr from a Kubernetes cluster")
UninstallCmd.Flags().BoolVarP(&uninstallDev, "dev", "", false, "Uninstall Dapr Redis and Zipkin installations from Kubernetes cluster")
UninstallCmd.Flags().UintVarP(&timeout, "timeout", "", 300, "The timeout for the Kubernetes uninstall")
UninstallCmd.Flags().BoolVar(&uninstallAll, "all", false, "Remove .dapr directory, Redis, Placement and Zipkin containers on local machine, and CRDs on a Kubernetes cluster")
UninstallCmd.Flags().BoolVar(&uninstallAll, "all", false, "Remove .dapr directory, Redis, Placement, Scheduler (and default volume in self-hosted mode), and Zipkin containers on local machine, and CRDs on a Kubernetes cluster")
UninstallCmd.Flags().String("network", "", "The Docker network from which to remove the Dapr runtime")
UninstallCmd.Flags().StringVarP(&uninstallNamespace, "namespace", "n", "dapr-system", "The Kubernetes namespace to uninstall Dapr from")
UninstallCmd.Flags().BoolP("help", "h", false, "Print this help message")

go.mod (309 lines changed)
View File

@ -1,272 +1,245 @@
module github.com/dapr/cli
go 1.21
go 1.23.5
require (
github.com/Azure/go-autorest/autorest v0.11.28 // indirect
github.com/Azure/go-autorest/autorest/adal v0.9.22 // indirect
github.com/Masterminds/semver v1.5.0
github.com/Masterminds/semver/v3 v3.3.0
github.com/Pallinder/sillyname-go v0.0.0-20130730142914-97aeae9e6ba1
github.com/briandowns/spinner v1.19.0
github.com/dapr/dapr v1.13.0-rc.6
github.com/dapr/go-sdk v1.6.1-0.20240209153236-ac26e622c4a6
github.com/docker/docker v20.10.21+incompatible
github.com/fatih/color v1.15.0
github.com/dapr/dapr v1.15.0-rc.3.0.20250107220753-e073759df4c1
github.com/dapr/go-sdk v1.11.0
github.com/dapr/kit v0.13.1-0.20241127165251-30e2c24840b4
github.com/docker/docker v25.0.6+incompatible
github.com/evanphx/json-patch/v5 v5.9.0
github.com/fatih/color v1.17.0
github.com/gocarina/gocsv v0.0.0-20220927221512-ad3251f9fa25
github.com/hashicorp/go-retryablehttp v0.7.1
github.com/hashicorp/go-retryablehttp v0.7.7
github.com/hashicorp/go-version v1.6.0
github.com/kolesnikovae/go-winjob v1.0.0
github.com/mitchellh/go-ps v1.0.0
github.com/nightlyone/lockfile v1.0.0
github.com/olekukonko/tablewriter v0.0.5
github.com/phayes/freeport v0.0.0-20220201140144-74d24b5ae9f5
github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c
github.com/shirou/gopsutil v3.21.11+incompatible
github.com/spf13/cobra v1.8.0
github.com/spf13/cobra v1.8.1
github.com/spf13/pflag v1.0.5
github.com/spf13/viper v1.13.0
github.com/stretchr/testify v1.8.4
golang.org/x/sys v0.17.0
github.com/stretchr/testify v1.10.0
golang.org/x/mod v0.22.0
golang.org/x/sys v0.28.0
gopkg.in/yaml.v2 v2.4.0
helm.sh/helm/v3 v3.11.1
k8s.io/api v0.28.4
k8s.io/apiextensions-apiserver v0.28.4
k8s.io/apimachinery v0.28.4
k8s.io/cli-runtime v0.28.4
k8s.io/client-go v0.28.4
helm.sh/helm/v3 v3.17.1
k8s.io/api v0.32.1
k8s.io/apiextensions-apiserver v0.32.1
k8s.io/apimachinery v0.32.1
k8s.io/cli-runtime v0.32.1
k8s.io/client-go v0.32.1
k8s.io/helm v2.16.10+incompatible
sigs.k8s.io/yaml v1.4.0
)
require github.com/Masterminds/semver/v3 v3.2.0
require (
github.com/alphadose/haxmap v1.3.1 // indirect
github.com/antlr4-go/antlr/v4 v4.13.0 // indirect
github.com/armon/go-metrics v0.4.1 // indirect
github.com/boltdb/bolt v1.3.1 // indirect
github.com/evanphx/json-patch v5.7.0+incompatible // indirect
github.com/go-chi/chi/v5 v5.0.11 // indirect
github.com/go-chi/cors v1.2.1 // indirect
github.com/hashicorp/go-hclog v1.5.0 // indirect
github.com/hashicorp/go-immutable-radix v1.3.1 // indirect
github.com/hashicorp/go-msgpack v0.5.5 // indirect
github.com/hashicorp/go-msgpack/v2 v2.1.1 // indirect
github.com/hashicorp/golang-lru v1.0.2 // indirect
github.com/hashicorp/raft v1.4.0 // indirect
github.com/hashicorp/raft-boltdb v0.0.0-20230125174641-2a8082862702 // indirect
github.com/matttproud/golang_protobuf_extensions/v2 v2.0.0 // indirect
github.com/panjf2000/ants/v2 v2.8.1 // indirect
github.com/segmentio/asm v1.2.0 // indirect
github.com/sourcegraph/conc v0.3.0 // indirect
github.com/spiffe/go-spiffe/v2 v2.1.6 // indirect
github.com/zeebo/errs v1.3.0 // indirect
go.mongodb.org/mongo-driver v1.12.1 // indirect
go.opentelemetry.io/otel/metric v1.23.1 // indirect
go.uber.org/multierr v1.11.0 // indirect
golang.org/x/mod v0.14.0 // indirect
golang.org/x/tools v0.17.0 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20240116215550-a9fa1716bcac // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20240205150955-31a09d347014 // indirect
)
require (
cloud.google.com/go/compute v1.23.3 // indirect
cloud.google.com/go/compute/metadata v0.2.3 // indirect
cel.dev/expr v0.18.0 // indirect
contrib.go.opencensus.io/exporter/prometheus v0.4.2 // indirect
github.com/Azure/go-ansiterm v0.0.0-20210617225240-d185dfc1b5a1 // indirect
github.com/Azure/go-autorest v14.2.0+incompatible // indirect
github.com/Azure/go-autorest/autorest/date v0.3.0 // indirect
github.com/Azure/go-autorest/logger v0.2.1 // indirect
github.com/Azure/go-autorest/tracing v0.6.0 // indirect
github.com/BurntSushi/toml v1.2.1 // indirect
dario.cat/mergo v1.0.1 // indirect
github.com/AdaLogics/go-fuzz-headers v0.0.0-20230811130428-ced1acdcaa24 // indirect
github.com/Azure/go-ansiterm v0.0.0-20230124172434-306776ec8161 // indirect
github.com/BurntSushi/toml v1.4.0 // indirect
github.com/Code-Hex/go-generics-cache v1.3.1 // indirect
github.com/MakeNowJust/heredoc v1.0.0 // indirect
github.com/Masterminds/goutils v1.1.1 // indirect
github.com/Masterminds/sprig/v3 v3.2.3 // indirect
github.com/Masterminds/squirrel v1.5.3 // indirect
github.com/Microsoft/go-winio v0.6.0 // indirect
github.com/Microsoft/hcsshim v0.9.6 // indirect
github.com/Masterminds/sprig/v3 v3.3.0 // indirect
github.com/Masterminds/squirrel v1.5.4 // indirect
github.com/Microsoft/go-winio v0.6.2 // indirect
github.com/PuerkitoBio/purell v1.2.1 // indirect
github.com/andybalholm/brotli v1.0.6 // indirect
github.com/aavaz-ai/pii-scrubber v0.0.0-20220812094047-3fa450ab6973 // indirect
github.com/alphadose/haxmap v1.4.0 // indirect
github.com/anshal21/go-worker v1.1.0 // indirect
github.com/antlr4-go/antlr/v4 v4.13.0 // indirect
github.com/asaskevich/govalidator v0.0.0-20230301143203-a9d515a09cc2 // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/blang/semver/v4 v4.0.0 // indirect
github.com/bufbuild/protocompile v0.6.0 // indirect
github.com/cenkalti/backoff/v4 v4.2.1 // indirect
github.com/cespare/xxhash/v2 v2.2.0 // indirect
github.com/cenkalti/backoff/v4 v4.3.0 // indirect
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/chai2010/gettext-go v1.0.2 // indirect
github.com/chebyrash/promise v0.0.0-20230709133807-42ec49ba1459 // indirect
github.com/cloudevents/sdk-go/binding/format/protobuf/v2 v2.14.0 // indirect
github.com/cloudevents/sdk-go/v2 v2.14.0 // indirect
github.com/containerd/containerd v1.6.18 // indirect
github.com/containerd/continuity v0.3.0 // indirect
github.com/cyphar/filepath-securejoin v0.2.4 // indirect
github.com/dapr/components-contrib v1.13.0-rc.4 // indirect
github.com/dapr/kit v0.13.0
github.com/cloudevents/sdk-go/v2 v2.15.2 // indirect
github.com/containerd/containerd v1.7.24 // indirect
github.com/containerd/errdefs v0.3.0 // indirect
github.com/containerd/log v0.1.0 // indirect
github.com/containerd/platforms v0.2.1 // indirect
github.com/cyphar/filepath-securejoin v0.3.6 // indirect
github.com/dapr/components-contrib v1.15.0-rc.1.0.20241216170750-aca5116d95c9 // indirect
github.com/dapr/durabletask-go v0.5.1-0.20241216172832-16da3e7c3530 // indirect
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
github.com/decred/dcrd/dcrec/secp256k1/v4 v4.2.0 // indirect
github.com/docker/cli v20.10.21+incompatible // indirect
github.com/docker/distribution v2.8.1+incompatible // indirect
github.com/distribution/reference v0.6.0 // indirect
github.com/dlclark/regexp2 v1.10.0 // indirect
github.com/docker/cli v25.0.1+incompatible // indirect
github.com/docker/distribution v2.8.3+incompatible // indirect
github.com/docker/docker-credential-helpers v0.7.0 // indirect
github.com/docker/go-connections v0.4.0 // indirect
github.com/docker/go-connections v0.5.0 // indirect
github.com/docker/go-metrics v0.0.1 // indirect
github.com/docker/go-units v0.4.0 // indirect
github.com/docker/go-units v0.5.0 // indirect
github.com/emicklei/go-restful/v3 v3.11.0 // indirect
github.com/evanphx/json-patch/v5 v5.8.1
github.com/exponent-io/jsonpath v0.0.0-20151013193312-d6023ce2651d // indirect
github.com/evanphx/json-patch v5.9.0+incompatible // indirect
github.com/exponent-io/jsonpath v0.0.0-20210407135951-1de76d718b3f // indirect
github.com/felixge/httpsnoop v1.0.4 // indirect
github.com/fsnotify/fsnotify v1.7.0 // indirect
github.com/fxamacker/cbor/v2 v2.7.0 // indirect
github.com/ghodss/yaml v1.0.1-0.20190212211648-25d852aebe32 // indirect
github.com/go-chi/chi/v5 v5.1.0 // indirect
github.com/go-chi/cors v1.2.1 // indirect
github.com/go-errors/errors v1.4.2 // indirect
github.com/go-gorp/gorp/v3 v3.0.2 // indirect
github.com/go-gorp/gorp/v3 v3.1.0 // indirect
github.com/go-kit/log v0.2.1 // indirect
github.com/go-logfmt/logfmt v0.5.1 // indirect
github.com/go-logr/logr v1.4.1 // indirect
github.com/go-logfmt/logfmt v0.6.0 // indirect
github.com/go-logr/logr v1.4.2 // indirect
github.com/go-logr/stdr v1.2.2 // indirect
github.com/go-ole/go-ole v1.2.6 // indirect
github.com/go-openapi/jsonpointer v0.20.0 // indirect
github.com/go-openapi/jsonpointer v0.21.0 // indirect
github.com/go-openapi/jsonreference v0.20.2 // indirect
github.com/go-openapi/swag v0.22.4 // indirect
github.com/go-openapi/swag v0.23.0 // indirect
github.com/gobwas/glob v0.2.3 // indirect
github.com/goccy/go-json v0.10.2 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang-jwt/jwt/v4 v4.5.0 // indirect
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
github.com/golang/mock v1.6.0 // indirect
github.com/golang/protobuf v1.5.3 // indirect
github.com/google/btree v1.1.2 // indirect
github.com/google/cel-go v0.18.2 // indirect
github.com/google/gnostic v0.6.9 // indirect
github.com/golang/protobuf v1.5.4 // indirect
github.com/google/btree v1.1.3 // indirect
github.com/google/cel-go v0.22.0 // indirect
github.com/google/gnostic-models v0.6.8 // indirect
github.com/google/go-cmp v0.6.0 // indirect
github.com/google/gofuzz v1.2.0 // indirect
github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510 // indirect
github.com/google/uuid v1.6.0 // indirect
github.com/gorilla/mux v1.8.1 // indirect
github.com/gorilla/websocket v1.5.3 // indirect
github.com/gosuri/uitable v0.0.4 // indirect
github.com/gregjones/httpcache v0.0.0-20190611155906-901d90724c79 // indirect
github.com/grpc-ecosystem/go-grpc-middleware v1.4.0 // indirect
github.com/grpc-ecosystem/grpc-gateway/v2 v2.16.0 // indirect
github.com/grpc-ecosystem/grpc-gateway/v2 v2.22.0 // indirect
github.com/hashicorp/errwrap v1.1.0 // indirect
github.com/hashicorp/go-cleanhttp v0.5.2 // indirect
github.com/hashicorp/go-multierror v1.1.1 // indirect
github.com/hashicorp/golang-lru/v2 v2.0.7 // indirect
github.com/hashicorp/hcl v1.0.0 // indirect
github.com/huandu/xstrings v1.3.3 // indirect
github.com/imdario/mergo v0.3.16 // indirect
github.com/huandu/xstrings v1.5.0 // indirect
github.com/inconshreveable/mousetrap v1.1.0 // indirect
github.com/jhump/protoreflect v1.15.2 // indirect
github.com/jmoiron/sqlx v1.3.5 // indirect
github.com/jhump/protoreflect v1.15.3 // indirect
github.com/jmoiron/sqlx v1.4.0 // indirect
github.com/josharian/intern v1.0.0 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/klauspost/compress v1.17.4 // indirect
github.com/kolesnikovae/go-winjob v1.0.0
github.com/klauspost/compress v1.17.10 // indirect
github.com/lann/builder v0.0.0-20180802200727-47ae307949d0 // indirect
github.com/lann/ps v0.0.0-20150810152359-62de8c46ede0 // indirect
github.com/lestrrat-go/blackmagic v1.0.2 // indirect
github.com/lestrrat-go/httpcc v1.0.1 // indirect
github.com/lestrrat-go/httprc v1.0.4 // indirect
github.com/lestrrat-go/httprc v1.0.5 // indirect
github.com/lestrrat-go/iter v1.0.2 // indirect
github.com/lestrrat-go/jwx/v2 v2.0.19 // indirect
github.com/lestrrat-go/jwx/v2 v2.0.21 // indirect
github.com/lestrrat-go/option v1.0.1 // indirect
github.com/lib/pq v1.10.7 // indirect
github.com/lib/pq v1.10.9 // indirect
github.com/liggitt/tabwriter v0.0.0-20181228230101-89fcab3d43de // indirect
github.com/magiconair/properties v1.8.6 // indirect
github.com/magiconair/properties v1.8.7 // indirect
github.com/mailru/easyjson v0.7.7 // indirect
github.com/marusama/semaphore/v2 v2.5.0 // indirect
github.com/mattn/go-colorable v0.1.13 // indirect
github.com/mattn/go-isatty v0.0.20 // indirect
github.com/mattn/go-runewidth v0.0.9 // indirect
github.com/microsoft/durabletask-go v0.4.1-0.20240122160106-fb5c4c05729d // indirect
github.com/mitchellh/copystructure v1.2.0 // indirect
github.com/mitchellh/go-wordwrap v1.0.0 // indirect
github.com/mitchellh/go-wordwrap v1.0.1 // indirect
github.com/mitchellh/mapstructure v1.5.1-0.20220423185008-bf980b35cac4 // indirect
github.com/mitchellh/reflectwalk v1.0.2 // indirect
github.com/moby/locker v1.0.1 // indirect
github.com/moby/spdystream v0.2.0 // indirect
github.com/moby/term v0.0.0-20221205130635-1aeaba878587 // indirect
github.com/moby/spdystream v0.5.0 // indirect
github.com/moby/term v0.5.0 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/monochromegane/go-gitignore v0.0.0-20200626010858-205db1a8cc00 // indirect
github.com/morikuni/aec v1.0.0 // indirect
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f // indirect
github.com/opencontainers/go-digest v1.0.0 // indirect
github.com/opencontainers/image-spec v1.1.0-rc2 // indirect
github.com/opencontainers/runc v1.1.12 // indirect
github.com/opencontainers/image-spec v1.1.0 // indirect
github.com/openzipkin/zipkin-go v0.4.2 // indirect
github.com/panjf2000/ants/v2 v2.8.1 // indirect
github.com/pelletier/go-toml v1.9.5 // indirect
github.com/pelletier/go-toml/v2 v2.0.6 // indirect
github.com/pelletier/go-toml/v2 v2.0.9 // indirect
github.com/peterbourgon/diskv v2.0.1+incompatible // indirect
github.com/pkg/errors v0.9.1 // indirect
github.com/pkoukk/tiktoken-go v0.1.6 // indirect
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
github.com/prometheus/client_golang v1.17.0 // indirect
github.com/prometheus/client_model v0.5.0 // indirect
github.com/prometheus/common v0.45.0 // indirect
github.com/prometheus/procfs v0.12.0 // indirect
github.com/prometheus/client_golang v1.20.4 // indirect
github.com/prometheus/client_model v0.6.1 // indirect
github.com/prometheus/common v0.59.1 // indirect
github.com/prometheus/procfs v0.15.1 // indirect
github.com/prometheus/statsd_exporter v0.22.7 // indirect
github.com/rubenv/sql-migrate v1.2.0 // indirect
github.com/rubenv/sql-migrate v1.7.1 // indirect
github.com/russross/blackfriday/v2 v2.1.0 // indirect
github.com/shopspring/decimal v1.2.0 // indirect
github.com/segmentio/asm v1.2.0 // indirect
github.com/shopspring/decimal v1.4.0 // indirect
github.com/sirupsen/logrus v1.9.3 // indirect
github.com/sony/gobreaker v0.5.0 // indirect
github.com/sourcegraph/conc v0.3.0 // indirect
github.com/spf13/afero v1.8.2 // indirect
github.com/spf13/cast v1.6.0 // indirect
github.com/spf13/cast v1.7.0 // indirect
github.com/spf13/jwalterweatherman v1.1.0 // indirect
github.com/spf13/pflag v1.0.5 // indirect
github.com/stoewer/go-strcase v1.2.0 // indirect
github.com/stretchr/objx v0.5.0 // indirect
github.com/spiffe/go-spiffe/v2 v2.1.7 // indirect
github.com/stoewer/go-strcase v1.3.0 // indirect
github.com/subosito/gotenv v1.4.1 // indirect
github.com/tidwall/transform v0.0.0-20201103190739-32f242e2dbde // indirect
github.com/tklauser/go-sysconf v0.3.10 // indirect
github.com/tklauser/numcpus v0.4.0 // indirect
github.com/valyala/bytebufferpool v1.0.0 // indirect
github.com/valyala/fasthttp v1.51.0 // indirect
github.com/tklauser/go-sysconf v0.3.12 // indirect
github.com/tklauser/numcpus v0.6.1 // indirect
github.com/tmc/langchaingo v0.1.12 // indirect
github.com/x448/float16 v0.8.4 // indirect
github.com/xeipuuv/gojsonpointer v0.0.0-20190905194746-02993c407bfb // indirect
github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415 // indirect
github.com/xeipuuv/gojsonschema v1.2.0 // indirect
github.com/xlab/treeprint v1.2.0 // indirect
github.com/yusufpapurcu/wmi v1.2.2 // indirect
github.com/yusufpapurcu/wmi v1.2.3 // indirect
github.com/zeebo/errs v1.3.0 // indirect
go.mongodb.org/mongo-driver v1.14.0 // indirect
go.opencensus.io v0.24.0 // indirect
go.opentelemetry.io/otel v1.23.1 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.21.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.21.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.21.0 // indirect
go.opentelemetry.io/otel/exporters/zipkin v1.21.0 // indirect
go.opentelemetry.io/otel/sdk v1.21.0 // indirect
go.opentelemetry.io/otel/trace v1.23.1 // indirect
go.opentelemetry.io/proto/otlp v1.0.0 // indirect
go.starlark.net v0.0.0-20230525235612-a134d8f9ddca // indirect
golang.org/x/crypto v0.19.0 // indirect
golang.org/x/exp v0.0.0-20240119083558-1b970713d09a // indirect
golang.org/x/net v0.21.0 // indirect
golang.org/x/oauth2 v0.16.0 // indirect
golang.org/x/sync v0.6.0 // indirect
golang.org/x/term v0.17.0 // indirect
golang.org/x/text v0.14.0 // indirect
golang.org/x/time v0.3.0 // indirect
gomodules.xyz/jsonpatch/v2 v2.4.0 // indirect
google.golang.org/appengine v1.6.8 // indirect
google.golang.org/grpc v1.61.0 // indirect
google.golang.org/protobuf v1.32.0 // indirect
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.54.0 // indirect
go.opentelemetry.io/otel v1.32.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.30.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.30.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.26.0 // indirect
go.opentelemetry.io/otel/exporters/zipkin v1.26.0 // indirect
go.opentelemetry.io/otel/metric v1.32.0 // indirect
go.opentelemetry.io/otel/sdk v1.30.0 // indirect
go.opentelemetry.io/otel/trace v1.32.0 // indirect
go.opentelemetry.io/proto/otlp v1.3.1 // indirect
go.uber.org/multierr v1.11.0 // indirect
golang.org/x/crypto v0.31.0 // indirect
golang.org/x/exp v0.0.0-20241204233417-43b7b7cde48d // indirect
golang.org/x/net v0.33.0 // indirect
golang.org/x/oauth2 v0.23.0 // indirect
golang.org/x/sync v0.10.0 // indirect
golang.org/x/term v0.27.0 // indirect
golang.org/x/text v0.21.0 // indirect
golang.org/x/time v0.7.0 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20240924160255-9d4c2d233b61 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20241202173237-19429a94021a // indirect
google.golang.org/grpc v1.68.1 // indirect
google.golang.org/protobuf v1.35.2 // indirect
gopkg.in/evanphx/json-patch.v4 v4.12.0 // indirect
gopkg.in/inf.v0 v0.9.1 // indirect
gopkg.in/ini.v1 v1.67.0 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
k8s.io/apiserver v0.28.4 // indirect
k8s.io/component-base v0.28.4 // indirect
k8s.io/klog/v2 v2.110.1 // indirect
k8s.io/kube-openapi v0.0.0-20231113174909-778a5567bc1e // indirect
k8s.io/kubectl v0.26.0 // indirect
k8s.io/utils v0.0.0-20240102154912-e7106e64919e // indirect
oras.land/oras-go v1.2.2 // indirect
sigs.k8s.io/controller-runtime v0.16.3 // indirect
sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd // indirect
sigs.k8s.io/kustomize/api v0.15.0 // indirect
sigs.k8s.io/kustomize/kyaml v0.15.0 // indirect
sigs.k8s.io/structured-merge-diff/v4 v4.4.1 // indirect
k8s.io/apiserver v0.32.1 // indirect
k8s.io/component-base v0.32.1 // indirect
k8s.io/klog/v2 v2.130.1 // indirect
k8s.io/kube-openapi v0.0.0-20241105132330-32ad38e42d3f // indirect
k8s.io/kubectl v0.32.1 // indirect
k8s.io/utils v0.0.0-20241104100929-3ea5e8cea738 // indirect
oras.land/oras-go v1.2.5 // indirect
sigs.k8s.io/controller-runtime v0.19.0 // indirect
sigs.k8s.io/json v0.0.0-20241010143419-9aa6b5e7a4b3 // indirect
sigs.k8s.io/kustomize/api v0.18.0 // indirect
sigs.k8s.io/kustomize/kyaml v0.18.1 // indirect
sigs.k8s.io/structured-merge-diff/v4 v4.4.2 // indirect
)
replace (
github.com/Azure/go-autorest => github.com/Azure/go-autorest v14.2.0+incompatible
github.com/docker/docker => github.com/moby/moby v17.12.0-ce-rc1.0.20200618181300-9dc6525e6118+incompatible
github.com/russross/blackfriday => github.com/russross/blackfriday v1.5.2
k8s.io/api => k8s.io/api v0.25.2
k8s.io/apimachinery => k8s.io/apimachinery v0.25.2
k8s.io/cli-runtime => k8s.io/cli-runtime v0.25.2
k8s.io/client => github.com/kubernetes-client/go v0.0.0-20190928040339-c757968c4c36
k8s.io/client-go => k8s.io/client-go v0.25.2
k8s.io/kube-openapi => k8s.io/kube-openapi v0.0.0-20221012153701-172d655c2280
sigs.k8s.io/kustomize/api => sigs.k8s.io/kustomize/api v0.12.1
sigs.k8s.io/kustomize/kyaml => sigs.k8s.io/kustomize/kyaml v0.13.9
)

go.sum (1437 changes) — file diff suppressed because it is too large.

@ -55,8 +55,8 @@ const (
daprReadinessProbeThresholdKey = "dapr.io/sidecar-readiness-probe-threshold"
daprImageKey = "dapr.io/sidecar-image"
daprAppSSLKey = "dapr.io/app-ssl"
daprMaxRequestBodySizeKey = "dapr.io/http-max-request-size"
daprReadBufferSizeKey = "dapr.io/http-read-buffer-size"
daprMaxRequestBodySizeKey = "dapr.io/max-body-size"
daprReadBufferSizeKey = "dapr.io/read-buffer-size"
daprHTTPStreamRequestBodyKey = "dapr.io/http-stream-request-body"
daprGracefulShutdownSecondsKey = "dapr.io/graceful-shutdown-seconds"
daprEnableAPILoggingKey = "dapr.io/enable-api-logging"


@ -292,7 +292,6 @@ func TestAnnotate(t *testing.T) {
var out bytes.Buffer
in := []io.Reader{inputFile}
for i, annotation := range tt.annotations {
annotation := annotation
annotator := NewK8sAnnotator(K8sAnnotatorConfig{
TargetResource: &annotation.targetResource,
TargetNamespace: &annotation.targetNamespace,
@ -334,7 +333,7 @@ func TestAnnotate(t *testing.T) {
for i := range expectedDocs {
if tt.printOutput {
t.Logf(outDocs[i])
t.Logf(outDocs[i]) //nolint:govet
}
assert.YAMLEq(t, expectedDocs[i], outDocs[i])
}


@ -15,7 +15,7 @@ package kubernetes
import (
"bytes"
"fmt"
"errors"
"testing"
"github.com/stretchr/testify/assert"
@ -242,7 +242,7 @@ func TestComponents(t *testing.T) {
err := writeComponents(&buff,
func() (*v1alpha1.ComponentList, error) {
if len(tc.errString) > 0 {
return nil, fmt.Errorf(tc.errString)
return nil, errors.New(tc.errString)
}
return &v1alpha1.ComponentList{Items: tc.k8sConfig}, nil


@ -43,7 +43,7 @@ func GetDaprControlPlaneCurrentConfig() (*v1alpha1.Configuration, error) {
if err != nil {
return nil, err
}
output, err := utils.RunCmdAndWait("kubectl", "get", "configurations/daprsystem", "-n", namespace, "-o", "json")
output, err := utils.RunCmdAndWait("kubectl", "get", "configurations.dapr.io/daprsystem", "-n", namespace, "-o", "json")
if err != nil {
return nil, err
}
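
Using the group-qualified resource name (configurations.dapr.io) keeps the lookup unambiguous when more than one installed API group serves a resource called "configurations". A rough standalone equivalent of the call above, using os/exec directly rather than the CLI's utils.RunCmdAndWait helper; the dapr-system namespace here is only an illustrative assumption:

package main

import (
    "fmt"
    "os/exec"
)

func main() {
    // Group-qualified name pins the lookup to the dapr.io API group instead of
    // whichever CRD happens to resolve the short name "configurations".
    out, err := exec.Command(
        "kubectl", "get", "configurations.dapr.io/daprsystem",
        "-n", "dapr-system", // assumed namespace for this sketch
        "-o", "json",
    ).CombinedOutput()
    if err != nil {
        fmt.Printf("kubectl failed: %v\n%s\n", err, out)
        return
    }
    fmt.Println(string(out))
}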


@ -15,7 +15,7 @@ package kubernetes
import (
"bytes"
"fmt"
"errors"
"testing"
"github.com/stretchr/testify/assert"
@ -221,7 +221,7 @@ func TestConfigurations(t *testing.T) {
err := writeConfigurations(&buff,
func() (*v1alpha1.ConfigurationList, error) {
if len(tc.errString) > 0 {
return nil, fmt.Errorf(tc.errString)
return nil, errors.New(tc.errString)
}
return &v1alpha1.ConfigurationList{Items: tc.k8sConfig}, nil


@ -40,6 +40,7 @@ const (
daprReleaseName = "dapr"
dashboardReleaseName = "dapr-dashboard"
latestVersion = "latest"
bitnamiStableVersion = "17.14.5"
// dev mode constants.
thirdPartyDevNamespace = "default"
@ -47,7 +48,7 @@ const (
redisChartName = "redis"
zipkinReleaseName = "dapr-dev-zipkin"
redisReleaseName = "dapr-dev-redis"
redisVersion = "6.2"
redisVersion = "6.2.11"
bitnamiHelmRepo = "https://charts.bitnami.com/bitnami"
daprHelmRepo = "https://dapr.github.io/helm-charts"
zipkinHelmRepo = "https://openzipkin.github.io/zipkin"
@ -99,9 +100,10 @@ func Init(config InitConfiguration) error {
if config.EnableDev {
redisChartVals := []string{
"image.tag=" + redisVersion,
"replica.replicaCount=0",
}
err = installThirdPartyWithConsole(redisReleaseName, redisChartName, latestVersion, bitnamiHelmRepo, "Dapr Redis", redisChartVals, config)
err = installThirdPartyWithConsole(redisReleaseName, redisChartName, bitnamiStableVersion, bitnamiHelmRepo, "Dapr Redis", redisChartVals, config)
if err != nil {
return err
}
@ -124,7 +126,7 @@ func installThirdPartyWithConsole(releaseName, chartName, releaseVersion, helmRe
defer installSpinning(print.Failure)
// releaseVersion of chart will always be latest version.
err := installThirdParty(releaseName, chartName, latestVersion, helmRepo, chartValues, config)
err := installThirdParty(releaseName, chartName, releaseVersion, helmRepo, chartValues, config)
if err != nil {
return err
}
@ -214,7 +216,7 @@ func getHelmChart(version, releaseName, helmRepo string, config *helm.Configurat
pull.Settings = &cli.EnvSettings{}
if version != latestVersion && (releaseName == daprReleaseName || releaseName == dashboardReleaseName) {
if version != latestVersion && (releaseName == daprReleaseName || releaseName == dashboardReleaseName || releaseName == redisChartName) {
pull.Version = chartVersion(version)
}
@ -247,10 +249,10 @@ func daprChartValues(config InitConfiguration, version string) (map[string]inter
helmVals := []string{
fmt.Sprintf("global.ha.enabled=%t", config.EnableHA),
fmt.Sprintf("global.mtls.enabled=%t", config.EnableMTLS),
fmt.Sprintf("global.tag=%s", utils.GetVariantVersion(version, config.ImageVariant)),
"global.tag=" + utils.GetVariantVersion(version, config.ImageVariant),
}
if len(config.ImageRegistryURI) != 0 {
helmVals = append(helmVals, fmt.Sprintf("global.registry=%s", config.ImageRegistryURI))
helmVals = append(helmVals, "global.registry="+config.ImageRegistryURI)
}
helmVals = append(helmVals, config.Args...)
@ -263,9 +265,9 @@ func daprChartValues(config InitConfiguration, version string) (map[string]inter
if err != nil {
return nil, err
}
helmVals = append(helmVals, fmt.Sprintf("dapr_sentry.tls.root.certPEM=%s", string(rootCertBytes)),
fmt.Sprintf("dapr_sentry.tls.issuer.certPEM=%s", string(issuerCertBytes)),
fmt.Sprintf("dapr_sentry.tls.issuer.keyPEM=%s", string(issuerKeyBytes)),
helmVals = append(helmVals, "dapr_sentry.tls.root.certPEM="+string(rootCertBytes),
"dapr_sentry.tls.issuer.certPEM="+string(issuerCertBytes),
"dapr_sentry.tls.issuer.keyPEM="+string(issuerKeyBytes),
)
}
@ -299,7 +301,7 @@ func install(releaseName, releaseVersion, helmRepo string, config InitConfigurat
}
if releaseName == daprReleaseName {
err = applyCRDs(fmt.Sprintf("v%s", version))
err = applyCRDs("v" + version)
if err != nil {
return err
}
@ -309,7 +311,7 @@ func install(releaseName, releaseVersion, helmRepo string, config InitConfigurat
installClient.ReleaseName = releaseName
installClient.Namespace = config.Namespace
installClient.Wait = config.Wait
installClient.Timeout = time.Duration(config.Timeout) * time.Second
installClient.Timeout = time.Duration(config.Timeout) * time.Second //nolint:gosec
values, err := daprChartValues(config, version)
if err != nil {
@ -338,7 +340,7 @@ func installThirdParty(releaseName, chartName, releaseVersion, helmRepo string,
installClient.ReleaseName = releaseName
installClient.Namespace = thirdPartyDevNamespace
installClient.Wait = config.Wait
installClient.Timeout = time.Duration(config.Timeout) * time.Second
installClient.Timeout = time.Duration(config.Timeout) * time.Second //nolint:gosec
values := map[string]interface{}{}
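
The Redis chart settings above (image.tag=6.2.11, replica.replicaCount=0) are passed around as flat key=value strings and only merged into a nested Helm values map later. A minimal sketch of that merge step using Helm v3's strvals package — an illustration of the mechanism, not the CLI's exact code path:

package main

import (
    "fmt"

    "helm.sh/helm/v3/pkg/strvals"
)

func main() {
    chartVals := []string{"image.tag=6.2.11", "replica.replicaCount=0"}

    // ParseInto expands dotted keys into nested maps, which is the shape
    // Helm expects for chart values.
    values := map[string]interface{}{}
    for _, kv := range chartVals {
        if err := strvals.ParseInto(kv, values); err != nil {
            fmt.Println("invalid chart value:", kv, err)
            return
        }
    }
    fmt.Printf("%#v\n", values)
}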


@ -28,9 +28,9 @@ func TestListPodsInterface(t *testing.T) {
output, err := ListPodsInterface(k8s, map[string]string{
"test": "test",
})
assert.Nil(t, err, "unexpected error")
assert.NoError(t, err, "unexpected error")
assert.NotNil(t, output, "Expected empty list")
assert.Equal(t, 0, len(output.Items), "Expected length 0")
assert.Empty(t, output.Items, "Expected length 0")
})
t.Run("one matching pod", func(t *testing.T) {
k8s := fake.NewSimpleClientset((&v1.Pod{
@ -46,9 +46,9 @@ func TestListPodsInterface(t *testing.T) {
output, err := ListPodsInterface(k8s, map[string]string{
"test": "test",
})
assert.Nil(t, err, "unexpected error")
assert.NoError(t, err, "unexpected error")
assert.NotNil(t, output, "Expected non empty list")
assert.Equal(t, 1, len(output.Items), "Expected length 0")
assert.Len(t, output.Items, 1, "Expected length 0")
assert.Equal(t, "test", output.Items[0].Name, "expected name to match")
assert.Equal(t, "test", output.Items[0].Namespace, "expected namespace to match")
})


@ -14,6 +14,7 @@ limitations under the License.
package kubernetes
import (
"errors"
"fmt"
"io"
"net/http"
@ -133,7 +134,7 @@ func (pf *PortForward) Init() error {
return fmt.Errorf("can not get the local and remote ports: %w", err)
}
if len(ports) == 0 {
return fmt.Errorf("can not get the local and remote ports: error getting ports length")
return errors.New("can not get the local and remote ports: error getting ports length")
}
pf.LocalPort = int(ports[0].Local)


@ -52,7 +52,6 @@ func RenewCertificate(conf RenewCertificateParams) error {
conf.RootCertificateFilePath,
conf.IssuerCertificateFilePath,
conf.IssuerPrivateKeyFilePath)
if err != nil {
return err
}
@ -60,7 +59,6 @@ func RenewCertificate(conf RenewCertificateParams) error {
rootCertBytes, issuerCertBytes, issuerKeyBytes, err = GenerateNewCertificates(
conf.ValidUntil,
conf.RootPrivateKeyFilePath)
if err != nil {
return err
}
@ -123,7 +121,7 @@ func renewCertificate(rootCert, issuerCert, issuerKey []byte, timeout uint, imag
// Reuse the existing helm configuration values i.e. tags, registry, etc.
upgradeClient.ReuseValues = true
upgradeClient.Wait = true
upgradeClient.Timeout = time.Duration(timeout) * time.Second
upgradeClient.Timeout = time.Duration(timeout) * time.Second //nolint:gosec
upgradeClient.Namespace = status[0].Namespace
// Override the helm configuration values with the new certificates.
@ -148,12 +146,12 @@ func createHelmParamsForNewCertificates(ca, issuerCert, issuerKey string) (map[s
args := []string{}
if ca != "" && issuerCert != "" && issuerKey != "" {
args = append(args, fmt.Sprintf("dapr_sentry.tls.root.certPEM=%s", ca),
fmt.Sprintf("dapr_sentry.tls.issuer.certPEM=%s", issuerCert),
fmt.Sprintf("dapr_sentry.tls.issuer.keyPEM=%s", issuerKey),
args = append(args, "dapr_sentry.tls.root.certPEM="+ca,
"dapr_sentry.tls.issuer.certPEM="+issuerCert,
"dapr_sentry.tls.issuer.keyPEM="+issuerKey,
)
} else {
return nil, fmt.Errorf("parameters not found")
return nil, errors.New("parameters not found")
}
for _, v := range args {


@ -297,7 +297,7 @@ func createDeploymentConfig(client versioned.Interface, app runfileconfig.App) d
Name: app.AppID,
Image: app.ContainerImage,
Env: getEnv(app),
ImagePullPolicy: corev1.PullAlways,
ImagePullPolicy: corev1.PullPolicy(app.ContainerImagePullPolicy),
},
},
},
@ -317,7 +317,7 @@ func createDeploymentConfig(client versioned.Interface, app runfileconfig.App) d
if app.AppPort != 0 {
dep.Spec.Template.Spec.Containers[0].Ports = []corev1.ContainerPort{
{
ContainerPort: int32(app.AppPort),
ContainerPort: int32(app.AppPort), //nolint:gosec
},
}
}


@ -33,6 +33,7 @@ var controlPlaneLabels = []string{
"dapr-placement-server",
"dapr-sidecar-injector",
"dapr-dashboard",
"dapr-scheduler-server",
}
type StatusClient struct {
@ -64,7 +65,6 @@ func NewStatusClient() (*StatusClient, error) {
// List status for Dapr resources.
func (s *StatusClient) Status() ([]StatusOutput, error) {
//nolint
client := s.client
if client == nil {
return nil, errors.New("kubernetes client not initialized")


@ -83,7 +83,7 @@ func TestStatus(t *testing.T) {
if err != nil {
t.Fatalf("%s status should not raise an error", err.Error())
}
assert.Equal(t, 0, len(status), "Expected status to be empty list")
assert.Empty(t, status, "Expected status to be empty list")
})
t.Run("one status waiting", func(t *testing.T) {
@ -102,8 +102,8 @@ func TestStatus(t *testing.T) {
}
k8s := newTestSimpleK8s(newDaprControlPlanePod(pd))
status, err := k8s.Status()
assert.Nil(t, err, "status should not raise an error")
assert.Equal(t, 1, len(status), "Expected status to be non-empty list")
assert.NoError(t, err, "status should not raise an error")
assert.Len(t, status, 1, "Expected status to be non-empty list")
stat := status[0]
assert.Equal(t, "dapr-dashboard", stat.Name, "expected name to match")
assert.Equal(t, "dapr-system", stat.Namespace, "expected namespace to match")
@ -131,8 +131,8 @@ func TestStatus(t *testing.T) {
}
k8s := newTestSimpleK8s(newDaprControlPlanePod(pd))
status, err := k8s.Status()
assert.Nil(t, err, "status should not raise an error")
assert.Equal(t, 1, len(status), "Expected status to be non-empty list")
assert.NoError(t, err, "status should not raise an error")
assert.Len(t, status, 1, "Expected status to be non-empty list")
stat := status[0]
assert.Equal(t, "dapr-dashboard", stat.Name, "expected name to match")
assert.Equal(t, "dapr-system", stat.Namespace, "expected namespace to match")
@ -140,7 +140,7 @@ func TestStatus(t *testing.T) {
assert.Equal(t, "0.0.1", stat.Version, "expected version to match")
assert.Equal(t, 1, stat.Replicas, "expected replicas to match")
assert.Equal(t, "True", stat.Healthy, "expected health to match")
assert.Equal(t, stat.Status, "Running", "expected running status")
assert.Equal(t, "Running", stat.Status, "expected running status")
})
t.Run("one status terminated", func(t *testing.T) {
@ -160,8 +160,8 @@ func TestStatus(t *testing.T) {
k8s := newTestSimpleK8s(newDaprControlPlanePod(pd))
status, err := k8s.Status()
assert.Nil(t, err, "status should not raise an error")
assert.Equal(t, 1, len(status), "Expected status to be non-empty list")
assert.NoError(t, err, "status should not raise an error")
assert.Len(t, status, 1, "Expected status to be non-empty list")
stat := status[0]
assert.Equal(t, "dapr-dashboard", stat.Name, "expected name to match")
assert.Equal(t, "dapr-system", stat.Namespace, "expected namespace to match")
@ -169,7 +169,7 @@ func TestStatus(t *testing.T) {
assert.Equal(t, "0.0.1", stat.Version, "expected version to match")
assert.Equal(t, 1, stat.Replicas, "expected replicas to match")
assert.Equal(t, "False", stat.Healthy, "expected health to match")
assert.Equal(t, stat.Status, "Terminated", "expected terminated status")
assert.Equal(t, "Terminated", stat.Status, "expected terminated status")
})
t.Run("one status pending", func(t *testing.T) {
@ -193,8 +193,8 @@ func TestStatus(t *testing.T) {
k8s := newTestSimpleK8s(pod)
status, err := k8s.Status()
assert.Nil(t, err, "status should not raise an error")
assert.Equal(t, 1, len(status), "Expected status to be non-empty list")
assert.NoError(t, err, "status should not raise an error")
assert.Len(t, status, 1, "Expected status to be non-empty list")
stat := status[0]
assert.Equal(t, "dapr-dashboard", stat.Name, "expected name to match")
assert.Equal(t, "dapr-system", stat.Namespace, "expected namespace to match")
@ -202,13 +202,13 @@ func TestStatus(t *testing.T) {
assert.Equal(t, "0.0.1", stat.Version, "expected version to match")
assert.Equal(t, 1, stat.Replicas, "expected replicas to match")
assert.Equal(t, "False", stat.Healthy, "expected health to match")
assert.Equal(t, stat.Status, "Pending", "expected pending status")
assert.Equal(t, "Pending", stat.Status, "expected pending status")
})
t.Run("one status empty client", func(t *testing.T) {
k8s := &StatusClient{}
status, err := k8s.Status()
assert.NotNil(t, err, "status should raise an error")
assert.Error(t, err, "status should raise an error")
assert.Equal(t, "kubernetes client not initialized", err.Error(), "expected errors to match")
assert.Nil(t, status, "expected nil for status")
})
@ -233,6 +233,9 @@ func TestControlPlaneServices(t *testing.T) {
{"dapr-sidecar-injector-74648c9dcb-5bsmn", "dapr-sidecar-injector", daprImageTag},
{"dapr-sidecar-injector-74648c9dcb-6bsmn", "dapr-sidecar-injector", daprImageTag},
{"dapr-sidecar-injector-74648c9dcb-7bsmn", "dapr-sidecar-injector", daprImageTag},
{"dapr-scheduler-server-0", "dapr-scheduler-server", daprImageTag},
{"dapr-scheduler-server-1", "dapr-scheduler-server", daprImageTag},
{"dapr-scheduler-server-2", "dapr-scheduler-server", daprImageTag},
}
expectedReplicas := map[string]int{}
@ -260,7 +263,7 @@ func TestControlPlaneServices(t *testing.T) {
k8s := newTestSimpleK8s(runtimeObj...)
status, err := k8s.Status()
assert.Nil(t, err, "status should not raise an error")
assert.NoError(t, err, "status should not raise an error")
assert.Equal(t, len(expectedReplicas), len(status), "Expected status to be empty list")
@ -302,8 +305,8 @@ func TestControlPlaneVersion(t *testing.T) {
pd.imageURI = tc.imageURI
k8s := newTestSimpleK8s(newDaprControlPlanePod(pd))
status, err := k8s.Status()
assert.Nil(t, err, "status should not raise an error")
assert.Equal(t, 1, len(status), "Expected status to be non-empty list")
assert.NoError(t, err, "status should not raise an error")
assert.Len(t, status, 1, "Expected status to be non-empty list")
stat := status[0]
assert.Equal(t, tc.expectedVersion, stat.Version, "expected version to match")
}


@ -41,7 +41,7 @@ func Uninstall(namespace string, uninstallAll bool, uninstallDev bool, timeout u
}
uninstallClient := helm.NewUninstall(config)
uninstallClient.Timeout = time.Duration(timeout) * time.Second
uninstallClient.Timeout = time.Duration(timeout) * time.Second //nolint:gosec
// Uninstall Dashboard as a best effort.
// Chart versions < 1.11 for Dapr will delete dashboard as part of the main chart.
@ -54,7 +54,6 @@ func Uninstall(namespace string, uninstallAll bool, uninstallDev bool, timeout u
}
_, err = uninstallClient.Run(daprReleaseName)
if err != nil {
return err
}


@ -14,16 +14,22 @@ limitations under the License.
package kubernetes
import (
"context"
"errors"
"fmt"
"net/http"
"os"
"strings"
"time"
helm "helm.sh/helm/v3/pkg/action"
"helm.sh/helm/v3/pkg/chart"
"helm.sh/helm/v3/pkg/release"
core_v1 "k8s.io/api/core/v1"
meta_v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/helm/pkg/strvals"
"github.com/Masterminds/semver/v3"
"github.com/hashicorp/go-version"
"github.com/dapr/cli/pkg/print"
@ -48,6 +54,8 @@ var crdsFullResources = []string{
"httpendpoints.dapr.io",
}
var versionWithHAScheduler = semver.MustParse("1.15.0-rc.1")
type UpgradeConfig struct {
RuntimeVersion string
DashboardVersion string
@ -111,7 +119,7 @@ func Upgrade(conf UpgradeConfig) error {
if !hasDashboardInDaprChart && willHaveDashboardInDaprChart && dashboardExists {
print.InfoStatusEvent(os.Stdout, "Dashboard being uninstalled prior to Dapr control plane upgrade...")
uninstallClient := helm.NewUninstall(helmConf)
uninstallClient.Timeout = time.Duration(conf.Timeout) * time.Second
uninstallClient.Timeout = time.Duration(conf.Timeout) * time.Second //nolint:gosec
_, err = uninstallClient.Run(dashboardReleaseName)
if err != nil {
@ -156,13 +164,42 @@ func Upgrade(conf UpgradeConfig) error {
return err
}
// used to signal the deletion of the scheduler pods only when downgrading from 1.15 to previous versions to handle incompatible changes
// in other cases the channel should be nil
var downgradeDeletionChan chan error
if !isDowngrade(conf.RuntimeVersion, daprVersion) {
err = applyCRDs(fmt.Sprintf("v%s", conf.RuntimeVersion))
err = applyCRDs("v" + conf.RuntimeVersion)
if err != nil {
return fmt.Errorf("unable to apply CRDs: %w", err)
}
} else {
print.InfoStatusEvent(os.Stdout, "Downgrade detected, skipping CRDs.")
targetVersion, errVersion := semver.NewVersion(conf.RuntimeVersion)
if errVersion != nil {
return fmt.Errorf("unable to parse dapr target version: %w", errVersion)
}
currentVersion, errVersion := semver.NewVersion(daprVersion)
if errVersion != nil {
return fmt.Errorf("unable to parse dapr current version: %w", errVersion)
}
if currentVersion.GreaterThanEqual(versionWithHAScheduler) && targetVersion.LessThan(versionWithHAScheduler) {
downgradeDeletionChan = make(chan error)
// Must delete all scheduler pods from cluster due to incompatible changes in version 1.15 with older versions.
go func() {
// Add an artificial delay to allow helm upgrade to progress and delete the pods only when necessary.
time.Sleep(15 * time.Second)
errDeletion := deleteSchedulerPods(status[0].Namespace, currentVersion, targetVersion)
if errDeletion != nil {
downgradeDeletionChan <- fmt.Errorf("failed to delete scheduler pods: %w", errDeletion)
print.FailureStatusEvent(os.Stderr, "Failed to delete scheduler pods: "+errDeletion.Error())
}
close(downgradeDeletionChan)
}()
}
}
chart, err := GetDaprHelmChartName(helmConf)
@ -179,6 +216,15 @@ func Upgrade(conf UpgradeConfig) error {
return fmt.Errorf("failure while running upgrade: %w", err)
}
// wait for the deletion of the scheduler pods to finish
if downgradeDeletionChan != nil {
select {
case <-downgradeDeletionChan:
case <-time.After(3 * time.Minute):
return errors.New("timed out waiting for downgrade deletion")
}
}
if dashboardChart != nil {
if dashboardExists {
if _, err = upgradeClient.Run(dashboardReleaseName, dashboardChart, vals); err != nil {
@ -201,6 +247,73 @@ func Upgrade(conf UpgradeConfig) error {
return nil
}
func deleteSchedulerPods(namespace string, currentVersion *semver.Version, targetVersion *semver.Version) error {
ctxWithTimeout, cancel := context.WithTimeout(context.Background(), time.Second*30)
defer cancel()
var pods *core_v1.PodList
// wait for at least one pod of the target version to be in the list before deleting the rest
// check the label app.kubernetes.io/version to determine the version of the pod
foundTargetVersion := false
for {
if foundTargetVersion {
break
}
k8sClient, err := Client()
if err != nil {
return err
}
pods, err = k8sClient.CoreV1().Pods(namespace).List(ctxWithTimeout, meta_v1.ListOptions{
LabelSelector: "app=dapr-scheduler-server",
})
if err != nil && !errors.Is(err, context.DeadlineExceeded) {
return err
}
if len(pods.Items) == 0 {
return nil
}
for _, pod := range pods.Items {
pv, ok := pod.Labels["app.kubernetes.io/version"]
if ok {
podVersion, err := semver.NewVersion(pv)
if err == nil && podVersion.Equal(targetVersion) {
foundTargetVersion = true
break
}
}
}
time.Sleep(5 * time.Second)
}
if pods == nil {
return errors.New("no scheduler pods found")
}
// get a fresh client to ensure we have the latest state of the cluster
k8sClient, err := Client()
if err != nil {
return err
}
// delete scheduler pods of the current version, i.e. >1.15.0
for _, pod := range pods.Items {
if pv, ok := pod.Labels["app.kubernetes.io/version"]; ok {
podVersion, err := semver.NewVersion(pv)
if err == nil && podVersion.Equal(currentVersion) {
err = k8sClient.CoreV1().Pods(namespace).Delete(ctxWithTimeout, pod.Name, meta_v1.DeleteOptions{})
if err != nil {
return fmt.Errorf("failed to delete pod %s during downgrade: %w", pod.Name, err)
}
}
}
}
return nil
}
// WithRetry enables retry with the specified max retries and retry interval.
func WithRetry(maxRetries int, retryInterval time.Duration) UpgradeOption {
return func(o *UpgradeOptions) {
@ -240,7 +353,7 @@ func helmUpgrade(client *helm.Upgrade, name string, chart *chart.Chart, vals map
// create a totally new helm client, this ensures that we fetch a fresh openapi schema from the server on each attempt.
client, _, err = newUpgradeClient(client.Namespace, UpgradeConfig{
Timeout: uint(client.Timeout),
Timeout: uint(client.Timeout), //nolint:gosec
})
if err != nil {
return nil, fmt.Errorf("unable to create helm client: %w", err)
@ -255,6 +368,11 @@ func highAvailabilityEnabled(status []StatusOutput) bool {
if s.Name == "dapr-dashboard" {
continue
}
// Skip the scheduler server because it's in HA mode by default since version 1.15.0
// This will fall back to other dapr services to determine if HA mode is enabled.
if strings.HasPrefix(s.Name, "dapr-scheduler-server") {
continue
}
if s.Replicas > 1 {
return true
}
@ -267,7 +385,7 @@ func applyCRDs(version string) error {
url := fmt.Sprintf("https://raw.githubusercontent.com/dapr/dapr/%s/charts/dapr/crds/%s.yaml", version, crd)
resp, _ := http.Get(url) //nolint:gosec
if resp != nil && resp.StatusCode == 200 {
if resp != nil && resp.StatusCode == http.StatusOK {
defer resp.Body.Close()
_, err := utils.RunCmdAndWait("kubectl", "apply", "-f", url)
@ -286,18 +404,18 @@ func upgradeChartValues(ca, issuerCert, issuerKey string, haMode, mtls bool, con
if err != nil {
return nil, err
}
globalVals = append(globalVals, fmt.Sprintf("global.tag=%s", utils.GetVariantVersion(conf.RuntimeVersion, conf.ImageVariant)))
globalVals = append(globalVals, "global.tag="+utils.GetVariantVersion(conf.RuntimeVersion, conf.ImageVariant))
if mtls && ca != "" && issuerCert != "" && issuerKey != "" {
globalVals = append(globalVals, fmt.Sprintf("dapr_sentry.tls.root.certPEM=%s", ca),
fmt.Sprintf("dapr_sentry.tls.issuer.certPEM=%s", issuerCert),
fmt.Sprintf("dapr_sentry.tls.issuer.keyPEM=%s", issuerKey),
globalVals = append(globalVals, "dapr_sentry.tls.root.certPEM="+ca,
"dapr_sentry.tls.issuer.certPEM="+issuerCert,
"dapr_sentry.tls.issuer.keyPEM="+issuerKey,
)
} else {
globalVals = append(globalVals, "global.mtls.enabled=false")
}
if len(conf.ImageRegistryURI) != 0 {
globalVals = append(globalVals, fmt.Sprintf("global.registry=%s", conf.ImageRegistryURI))
globalVals = append(globalVals, "global.registry="+conf.ImageRegistryURI)
}
if haMode {
globalVals = append(globalVals, "global.ha.enabled=true")
@ -334,7 +452,7 @@ func newUpgradeClient(namespace string, cfg UpgradeConfig) (*helm.Upgrade, *helm
client.Namespace = namespace
client.CleanupOnFail = true
client.Wait = true
client.Timeout = time.Duration(cfg.Timeout) * time.Second
client.Timeout = time.Duration(cfg.Timeout) * time.Second //nolint:gosec
return client, helmCfg, nil
}
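
The downgrade handling added above (deleteSchedulerPods) boils down to two standard client-go calls: list pods by the scheduler label, then delete the ones still running the old version. A stripped-down sketch of that pattern, assuming an already-constructed kubernetes.Interface and plain string comparison on the version label instead of the semver checks the CLI performs:

package upgradesketch

import (
    "context"
    "fmt"
    "time"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// deleteOldSchedulerPods deletes dapr-scheduler-server pods whose version
// label still matches oldVersion. Hypothetical helper for illustration only.
func deleteOldSchedulerPods(client kubernetes.Interface, namespace, oldVersion string) error {
    ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
    defer cancel()

    pods, err := client.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{
        LabelSelector: "app=dapr-scheduler-server",
    })
    if err != nil {
        return err
    }

    for _, pod := range pods.Items {
        if pod.Labels["app.kubernetes.io/version"] != oldVersion {
            continue // already rolled to the target version
        }
        if err := client.CoreV1().Pods(namespace).Delete(ctx, pod.Name, metav1.DeleteOptions{}); err != nil {
            return fmt.Errorf("failed to delete pod %s: %w", pod.Name, err)
        }
    }
    return nil
}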


@ -31,6 +31,38 @@ func TestHAMode(t *testing.T) {
assert.True(t, r)
})
t.Run("ha mode with scheduler and other services", func(t *testing.T) {
s := []StatusOutput{
{
Name: "dapr-scheduler-server",
Replicas: 3,
},
{
Name: "dapr-placement-server",
Replicas: 3,
},
}
r := highAvailabilityEnabled(s)
assert.True(t, r)
})
t.Run("non-ha mode with only scheduler image variant", func(t *testing.T) {
s := []StatusOutput{
{
Name: "dapr-scheduler-server-mariner",
Replicas: 3,
},
{
Name: "dapr-placement-server-mariner",
Replicas: 3,
},
}
r := highAvailabilityEnabled(s)
assert.True(t, r)
})
t.Run("non-ha mode", func(t *testing.T) {
s := []StatusOutput{
{
@ -41,6 +73,46 @@ func TestHAMode(t *testing.T) {
r := highAvailabilityEnabled(s)
assert.False(t, r)
})
t.Run("non-ha mode with scheduler and other services", func(t *testing.T) {
s := []StatusOutput{
{
Name: "dapr-scheduler-server",
Replicas: 3,
},
{
Name: "dapr-placement-server",
Replicas: 1,
},
}
r := highAvailabilityEnabled(s)
assert.False(t, r)
})
t.Run("non-ha mode with only scheduler", func(t *testing.T) {
s := []StatusOutput{
{
Name: "dapr-scheduler-server",
Replicas: 3,
},
}
r := highAvailabilityEnabled(s)
assert.False(t, r)
})
t.Run("non-ha mode with only scheduler image variant", func(t *testing.T) {
s := []StatusOutput{
{
Name: "dapr-scheduler-server-mariner",
Replicas: 3,
},
}
r := highAvailabilityEnabled(s)
assert.False(t, r)
})
}
func TestMTLSChartValues(t *testing.T) {


@ -69,7 +69,7 @@ func tryGetRunDataLock() (*lockfile.Lockfile, error) {
return nil, err
}
for i := 0; i < 10; i++ {
for range 10 {
err = lockFile.TryLock()
// Error handling is essential, as we only try to get the lock.


@ -14,7 +14,7 @@ limitations under the License.
package runexec
import (
"fmt"
"errors"
"io"
"os"
"os/exec"
@ -86,7 +86,7 @@ func (c *CmdProcess) WithOutputWriter(w io.Writer) {
// SetStdout should be called after WithOutputWriter.
func (c *CmdProcess) SetStdout() error {
if c.Command == nil {
return fmt.Errorf("command is nil")
return errors.New("command is nil")
}
c.Command.Stdout = c.OutputWriter
return nil
@ -99,7 +99,7 @@ func (c *CmdProcess) WithErrorWriter(w io.Writer) {
// SetStdErr should be called after WithErrorWriter.
func (c *CmdProcess) SetStderr() error {
if c.Command == nil {
return fmt.Errorf("command is nil")
return errors.New("command is nil")
}
c.Command.Stderr = c.ErrorWriter
return nil
@ -108,7 +108,7 @@ func (c *CmdProcess) SetStderr() error {
func NewOutput(config *standalone.RunConfig) (*RunOutput, error) {
// set default values from RunConfig struct's tag.
config.SetDefaultFromSchema()
//nolint
err := config.Validate()
if err != nil {
return nil, err


@ -20,6 +20,8 @@ import (
"strings"
"testing"
"github.com/stretchr/testify/require"
"github.com/stretchr/testify/assert"
"github.com/dapr/cli/pkg/standalone"
@ -80,10 +82,10 @@ func setupRun(t *testing.T) {
componentsDir := standalone.GetDaprComponentsPath(myDaprPath)
configFile := standalone.GetDaprConfigPath(myDaprPath)
err = os.MkdirAll(componentsDir, 0o700)
assert.Equal(t, nil, err, "Unable to setup components dir before running test")
assert.NoError(t, err, "Unable to setup components dir before running test")
file, err := os.Create(configFile)
file.Close()
assert.Equal(t, nil, err, "Unable to create config file before running test")
assert.NoError(t, err, "Unable to create config file before running test")
}
func tearDownRun(t *testing.T) {
@ -94,9 +96,9 @@ func tearDownRun(t *testing.T) {
configFile := standalone.GetDaprConfigPath(myDaprPath)
err = os.RemoveAll(componentsDir)
assert.Equal(t, nil, err, "Unable to delete default components dir after running test")
assert.NoError(t, err, "Unable to delete default components dir after running test")
err = os.Remove(configFile)
assert.Equal(t, nil, err, "Unable to delete default config file after running test")
assert.NoError(t, err, "Unable to delete default config file after running test")
}
func assertCommonArgs(t *testing.T, basicConfig *standalone.RunConfig, output *RunOutput) {
@ -120,9 +122,9 @@ func assertCommonArgs(t *testing.T, basicConfig *standalone.RunConfig, output *R
assertArgumentEqual(t, "components-path", standalone.GetDaprComponentsPath(daprPath), output.DaprCMD.Args)
assertArgumentEqual(t, "app-ssl", "", output.DaprCMD.Args)
assertArgumentEqual(t, "metrics-port", "9001", output.DaprCMD.Args)
assertArgumentEqual(t, "dapr-http-max-request-size", "-1", output.DaprCMD.Args)
assertArgumentEqual(t, "max-body-size", "-1", output.DaprCMD.Args)
assertArgumentEqual(t, "dapr-internal-grpc-port", "5050", output.DaprCMD.Args)
assertArgumentEqual(t, "dapr-http-read-buffer-size", "-1", output.DaprCMD.Args)
assertArgumentEqual(t, "read-buffer-size", "-1", output.DaprCMD.Args)
assertArgumentEqual(t, "dapr-listen-addresses", "127.0.0.1", output.DaprCMD.Args)
}
@ -180,8 +182,8 @@ func TestRun(t *testing.T) {
AppProtocol: "http",
ComponentsPath: componentsDir,
AppSSL: true,
MaxRequestBodySize: -1,
HTTPReadBufferSize: -1,
MaxRequestBodySize: "-1",
HTTPReadBufferSize: "-1",
EnableAPILogging: true,
APIListenAddresses: "127.0.0.1",
}
@ -294,21 +296,21 @@ func TestRun(t *testing.T) {
basicConfig.ProfilePort = 0
basicConfig.EnableProfiling = true
basicConfig.MaxConcurrency = 0
basicConfig.MaxRequestBodySize = 0
basicConfig.HTTPReadBufferSize = 0
basicConfig.MaxRequestBodySize = ""
basicConfig.HTTPReadBufferSize = ""
basicConfig.AppProtocol = ""
basicConfig.SetDefaultFromSchema()
assert.Equal(t, -1, basicConfig.AppPort)
assert.True(t, basicConfig.HTTPPort == -1)
assert.True(t, basicConfig.GRPCPort == -1)
assert.True(t, basicConfig.MetricsPort == -1)
assert.True(t, basicConfig.ProfilePort == -1)
assert.Equal(t, -1, basicConfig.HTTPPort)
assert.Equal(t, -1, basicConfig.GRPCPort)
assert.Equal(t, -1, basicConfig.MetricsPort)
assert.Equal(t, -1, basicConfig.ProfilePort)
assert.True(t, basicConfig.EnableProfiling)
assert.Equal(t, -1, basicConfig.MaxConcurrency)
assert.Equal(t, -1, basicConfig.MaxRequestBodySize)
assert.Equal(t, -1, basicConfig.HTTPReadBufferSize)
assert.Equal(t, "4Mi", basicConfig.MaxRequestBodySize)
assert.Equal(t, "4Ki", basicConfig.HTTPReadBufferSize)
assert.Equal(t, "http", basicConfig.AppProtocol)
// Test after Validate gets called.
@ -316,14 +318,52 @@ func TestRun(t *testing.T) {
assert.NoError(t, err)
assert.Equal(t, 0, basicConfig.AppPort)
assert.True(t, basicConfig.HTTPPort > 0)
assert.True(t, basicConfig.GRPCPort > 0)
assert.True(t, basicConfig.MetricsPort > 0)
assert.True(t, basicConfig.ProfilePort > 0)
assert.Positive(t, basicConfig.HTTPPort)
assert.Positive(t, basicConfig.GRPCPort)
assert.Positive(t, basicConfig.MetricsPort)
assert.Positive(t, basicConfig.ProfilePort)
assert.True(t, basicConfig.EnableProfiling)
assert.Equal(t, -1, basicConfig.MaxConcurrency)
assert.Equal(t, -1, basicConfig.MaxRequestBodySize)
assert.Equal(t, -1, basicConfig.HTTPReadBufferSize)
assert.Equal(t, "4Mi", basicConfig.MaxRequestBodySize)
assert.Equal(t, "4Ki", basicConfig.HTTPReadBufferSize)
assert.Equal(t, "http", basicConfig.AppProtocol)
})
t.Run("run with max body size without units", func(t *testing.T) {
basicConfig.MaxRequestBodySize = "4000000"
output, err := NewOutput(basicConfig)
require.NoError(t, err)
assertArgumentEqual(t, "max-body-size", "4M", output.DaprCMD.Args)
})
t.Run("run with max body size with units", func(t *testing.T) {
basicConfig.MaxRequestBodySize = "4Mi"
output, err := NewOutput(basicConfig)
require.NoError(t, err)
assertArgumentEqual(t, "max-body-size", "4Mi", output.DaprCMD.Args)
basicConfig.MaxRequestBodySize = "5M"
output, err = NewOutput(basicConfig)
require.NoError(t, err)
assertArgumentEqual(t, "max-body-size", "5M", output.DaprCMD.Args)
})
t.Run("run with read buffer size set without units", func(t *testing.T) {
basicConfig.HTTPReadBufferSize = "16001"
output, err := NewOutput(basicConfig)
require.NoError(t, err)
assertArgumentEqual(t, "read-buffer-size", "16001", output.DaprCMD.Args)
})
t.Run("run with read buffer size set with units", func(t *testing.T) {
basicConfig.HTTPReadBufferSize = "4Ki"
output, err := NewOutput(basicConfig)
require.NoError(t, err)
assertArgumentEqual(t, "read-buffer-size", "4Ki", output.DaprCMD.Args)
})
}
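
The expectations in these tests ("4000000" becoming "4M", while "4Mi" and "4Ki" pass through unchanged) match Kubernetes resource-quantity canonicalization, which the k8s.io/apimachinery resource package — imported by the run config further below — models. A small sketch of that behaviour in isolation, not the CLI's exact code path:

package main

import (
    "fmt"

    "k8s.io/apimachinery/pkg/api/resource"
)

func main() {
    for _, in := range []string{"4000000", "4Mi", "5M", "16001", "4Ki"} {
        q, err := resource.ParseQuantity(in)
        if err != nil {
            fmt.Println(in, "-> parse error:", err)
            continue
        }
        // String() renders the canonical form: bare "4000000" collapses to "4M",
        // while suffixed values such as "4Mi" keep their original notation.
        fmt.Printf("%s -> %s\n", in, q.String())
    }
}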


@ -41,8 +41,9 @@ type RunFileConfig struct {
// ContainerConfiguration represents the application container configuration parameters.
type ContainerConfiguration struct {
ContainerImage string `yaml:"containerImage"`
CreateService bool `yaml:"createService"`
ContainerImage string `yaml:"containerImage"`
ContainerImagePullPolicy string `yaml:"containerImagePullPolicy"`
CreateService bool `yaml:"createService"`
}
// App represents the configuration options for the apps in the run file.


@ -26,6 +26,8 @@ import (
"gopkg.in/yaml.v2"
)
var imagePullPolicyValuesAllowed = []string{"Always", "Never", "IfNotPresent"}
// Parse the provided run file into a RunFileConfig struct.
func (a *RunFileConfig) parseAppsConfig(runFilePath string) error {
var err error
@ -70,7 +72,7 @@ func (a *RunFileConfig) validateRunConfig(runFilePath string) error {
a.Common.ResourcesPaths = append(a.Common.ResourcesPaths, a.Common.ResourcesPath)
}
for i := 0; i < len(a.Apps); i++ {
for i := range len(a.Apps) {
if a.Apps[i].AppDirPath == "" {
return errors.New("required field 'appDirPath' not found in the provided app config file")
}
@ -97,6 +99,15 @@ func (a *RunFileConfig) validateRunConfig(runFilePath string) error {
if len(strings.TrimSpace(a.Apps[i].ResourcesPath)) > 0 {
a.Apps[i].ResourcesPaths = append(a.Apps[i].ResourcesPaths, a.Apps[i].ResourcesPath)
}
// Check containerImagePullPolicy is valid.
if a.Apps[i].ContainerImagePullPolicy != "" {
if !utils.Contains(imagePullPolicyValuesAllowed, a.Apps[i].ContainerImagePullPolicy) {
return fmt.Errorf("invalid containerImagePullPolicy: %s, allowed values: %s", a.Apps[i].ContainerImagePullPolicy, strings.Join(imagePullPolicyValuesAllowed, ", "))
}
} else {
a.Apps[i].ContainerImagePullPolicy = "Always"
}
}
return nil
}
@ -212,9 +223,6 @@ func (a *RunFileConfig) resolvePathToAbsAndValidate(baseDir string, paths ...*st
return err
}
absPath := utils.GetAbsPath(baseDir, *path)
if err != nil {
return err
}
*path = absPath
if err = utils.ValidateFilePath(*path); err != nil {
return err
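
The allowed values checked earlier in this file (Always, Never, IfNotPresent) map one-to-one onto the PullPolicy constants in k8s.io/api/core/v1, which is why the deployment config earlier in this diff can cast the validated string directly. A compact sketch of that validation-plus-cast, using a hypothetical helper under the same three allowed values:

package runfilesketch

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

// toPullPolicy validates a run-file string and converts it to the typed
// corev1.PullPolicy value, defaulting to Always when unset.
// Hypothetical helper for illustration only.
func toPullPolicy(s string) (corev1.PullPolicy, error) {
    switch s {
    case "":
        return corev1.PullAlways, nil // default applied by the run file config
    case string(corev1.PullAlways), string(corev1.PullNever), string(corev1.PullIfNotPresent):
        return corev1.PullPolicy(s), nil
    default:
        return "", fmt.Errorf("invalid containerImagePullPolicy: %s, allowed values: Always, Never, IfNotPresent", s)
    }
}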


@ -14,6 +14,7 @@ limitations under the License.
package runfileconfig
import (
"fmt"
"os"
"path/filepath"
"strings"
@ -32,6 +33,9 @@ var (
runFileForPrecedenceRuleDaprDir = filepath.Join(".", "testdata", "test_run_config_precedence_rule_dapr_dir.yaml")
runFileForLogDestination = filepath.Join(".", "testdata", "test_run_config_log_destination.yaml")
runFileForMultiResourcePaths = filepath.Join(".", "testdata", "test_run_config_multiple_resources_paths.yaml")
runFileForContainerImagePullPolicy = filepath.Join(".", "testdata", "test_run_config_container_image_pull_policy.yaml")
runFileForContainerImagePullPolicyInvalid = filepath.Join(".", "testdata", "test_run_config_container_image_pull_policy_invalid.yaml")
)
func TestRunConfigFile(t *testing.T) {
@ -41,7 +45,7 @@ func TestRunConfigFile(t *testing.T) {
err := appsRunConfig.parseAppsConfig(validRunFilePath)
assert.NoError(t, err)
assert.Equal(t, 2, len(appsRunConfig.Apps))
assert.Len(t, appsRunConfig.Apps, 2)
assert.Equal(t, 1, appsRunConfig.Version)
assert.NotEmpty(t, appsRunConfig.Common.ResourcesPath)
@ -60,7 +64,7 @@ func TestRunConfigFile(t *testing.T) {
apps, err := config.GetApps(validRunFilePath)
assert.NoError(t, err)
assert.Equal(t, 2, len(apps))
assert.Len(t, apps, 2)
assert.Equal(t, "webapp", apps[0].AppID)
assert.Equal(t, "backend", apps[1].AppID)
assert.Equal(t, "HTTP", apps[0].AppProtocol)
@ -86,8 +90,8 @@ func TestRunConfigFile(t *testing.T) {
assert.Equal(t, filepath.Join(apps[1].AppDirPath, ".dapr", "resources"), apps[1].ResourcesPaths[0])
// test merged envs from common and app sections.
assert.Equal(t, 2, len(apps[0].Env))
assert.Equal(t, 2, len(apps[1].Env))
assert.Len(t, apps[0].Env, 2)
assert.Len(t, apps[1].Env, 2)
assert.Contains(t, apps[0].Env, "DEBUG")
assert.Contains(t, apps[0].Env, "tty")
assert.Contains(t, apps[1].Env, "DEBUG")
@ -229,7 +233,7 @@ func TestRunConfigFile(t *testing.T) {
config := RunFileConfig{}
apps, err := config.GetApps(runFileForLogDestination)
assert.NoError(t, err)
assert.Equal(t, 6, len(apps))
assert.Len(t, apps, 6)
assert.Equal(t, "file", apps[0].DaprdLogDestination.String())
assert.Equal(t, "fileAndConsole", apps[0].AppLogDestination.String())
@ -251,6 +255,51 @@ func TestRunConfigFile(t *testing.T) {
})
}
func TestContainerImagePullPolicy(t *testing.T) {
testcases := []struct {
name string
runfFile string
expectedPullPolicies []string
expectedBadPolicyValue string
expectedErr bool
}{
{
name: "default value is Always",
runfFile: validRunFilePath,
expectedPullPolicies: []string{"Always", "Always"},
expectedErr: false,
},
{
name: "custom value is respected",
runfFile: runFileForContainerImagePullPolicy,
expectedPullPolicies: []string{"IfNotPresent", "Always"},
expectedErr: false,
},
{
name: "invalid value is rejected",
runfFile: runFileForContainerImagePullPolicyInvalid,
expectedPullPolicies: []string{"Always", "Always"},
expectedBadPolicyValue: "Invalid",
expectedErr: true,
},
}
for _, tc := range testcases {
t.Run(tc.name, func(t *testing.T) {
config := RunFileConfig{}
config.parseAppsConfig(tc.runfFile)
err := config.validateRunConfig(tc.runfFile)
if tc.expectedErr {
assert.Error(t, err)
assert.Contains(t, err.Error(), fmt.Sprintf("invalid containerImagePullPolicy: %s, allowed values: Always, Never, IfNotPresent", tc.expectedBadPolicyValue))
return
}
assert.Equal(t, tc.expectedPullPolicies[0], config.Apps[0].ContainerImagePullPolicy)
assert.Equal(t, tc.expectedPullPolicies[1], config.Apps[1].ContainerImagePullPolicy)
})
}
}
func TestMultiResourcePathsResolution(t *testing.T) {
config := RunFileConfig{}
@ -297,7 +346,7 @@ func TestMultiResourcePathsResolution(t *testing.T) {
for _, tc := range testcases {
t.Run(tc.name, func(t *testing.T) {
assert.Equal(t, tc.expectedNoOfResources, len(config.Apps[tc.appIndex].ResourcesPaths))
assert.Len(t, config.Apps[tc.appIndex].ResourcesPaths, tc.expectedNoOfResources)
var rsrcFound bool
for _, resourcePath := range config.Apps[tc.appIndex].ResourcesPaths {
if rsrcFound = strings.Contains(resourcePath, tc.expectedResourcesPathsContains); rsrcFound {


@ -0,0 +1,24 @@
version: 1
common:
resourcesPath: ./app/resources
appProtocol: HTTP
appHealthProbeTimeout: 10
env:
DEBUG: false
tty: sts
apps:
- appDirPath: ./webapp/
resourcesPath: ./resources
configFilePath: ./config.yaml
appPort: 8080
appHealthProbeTimeout: 1
containerImagePullPolicy: IfNotPresent
containerImage: ghcr.io/dapr/dapr-workflows-python-sdk:latest
- appID: backend
appDirPath: ./backend/
appProtocol: GRPC
appPort: 3000
unixDomainSocket: /tmp/test-socket
env:
DEBUG: true
containerImage: ghcr.io/dapr/dapr-workflows-csharp-sdk:latest


@ -0,0 +1,24 @@
version: 1
common:
resourcesPath: ./app/resources
appProtocol: HTTP
appHealthProbeTimeout: 10
env:
DEBUG: false
tty: sts
apps:
- appDirPath: ./webapp/
resourcesPath: ./resources
configFilePath: ./config.yaml
appPort: 8080
appHealthProbeTimeout: 1
containerImagePullPolicy: Invalid
containerImage: ghcr.io/dapr/dapr-workflows-python-sdk:latest
- appID: backend
appDirPath: ./backend/
appProtocol: GRPC
appPort: 3000
unixDomainSocket: /tmp/test-socket
env:
DEBUG: true
containerImage: ghcr.io/dapr/dapr-workflows-csharp-sdk:latest


@ -55,10 +55,10 @@ func isStringNilOrEmpty(val *string) bool {
return val == nil || strings.TrimSpace(*val) == ""
}
func (b *bundleDetails) getPlacementImageName() string {
func (b *bundleDetails) getDaprImageName() string {
return *b.DaprImageName
}
func (b *bundleDetails) getPlacementImageFileName() string {
func (b *bundleDetails) getDaprImageFileName() string {
return *b.DaprImageFileName
}


@ -44,8 +44,8 @@ func TestParseDetails(t *testing.T) {
assert.Equal(t, "0.10.0", *bd.DashboardVersion, "expected versions to match")
assert.Equal(t, "dist", *bd.BinarySubDir, "expected value to match")
assert.Equal(t, "docker", *bd.ImageSubDir, "expected value to match")
assert.Equal(t, "daprio/dapr:1.7.2", bd.getPlacementImageName(), "expected value to match")
assert.Equal(t, "daprio-dapr-1.7.2.tar.gz", bd.getPlacementImageFileName(), "expected value to match")
assert.Equal(t, "daprio/dapr:1.7.2", bd.getDaprImageName(), "expected value to match")
assert.Equal(t, "daprio-dapr-1.7.2.tar.gz", bd.getDaprImageFileName(), "expected value to match")
}
func TestParseDetailsMissingDetails(t *testing.T) {


@ -25,8 +25,10 @@ const (
DefaultConfigFileName = "config.yaml"
DefaultResourcesDirName = "resources"
defaultDaprBinDirName = "bin"
defaultComponentsDirName = "components"
defaultDaprBinDirName = "bin"
defaultComponentsDirName = "components"
defaultSchedulerDirName = "scheduler"
defaultSchedulerDataDirName = "data"
)
// GetDaprRuntimePath returns the dapr runtime installation path.


@ -85,7 +85,6 @@ func confirmContainerIsRunningOrExists(containerName string, isRunning bool, run
// If 'docker ps' failed due to some reason.
if err != nil {
//nolint
return false, fmt.Errorf("unable to confirm whether %s is running or exists. error\n%v", containerName, err.Error())
}
// 'docker ps' worked fine, but the response did not have the container name.
@ -100,7 +99,6 @@ func confirmContainerIsRunningOrExists(containerName string, isRunning bool, run
}
func isContainerRunError(err error) bool {
//nolint
if exitError, ok := err.(*exec.ExitError); ok {
exitCode := exitError.ExitCode()
return exitCode == 125
@ -109,7 +107,6 @@ func isContainerRunError(err error) bool {
}
func parseContainerRuntimeError(component string, err error) error {
//nolint
if exitError, ok := err.(*exec.ExitError); ok {
exitCode := exitError.ExitCode()
if exitCode == 125 { // see https://github.com/moby/moby/pull/14012


@ -26,8 +26,8 @@ func TestDashboardRun(t *testing.T) {
assert.NoError(t, err)
assert.Contains(t, cmd.Args[0], "dashboard")
assert.Equal(t, cmd.Args[1], "--port")
assert.Equal(t, cmd.Args[2], "9090")
assert.Equal(t, "--port", cmd.Args[1])
assert.Equal(t, "9090", cmd.Args[2])
})
t.Run("start dashboard on random free port", func(t *testing.T) {
@ -35,7 +35,7 @@ func TestDashboardRun(t *testing.T) {
assert.NoError(t, err)
assert.Contains(t, cmd.Args[0], "dashboard")
assert.Equal(t, cmd.Args[1], "--port")
assert.NotEqual(t, cmd.Args[2], "0")
assert.Equal(t, "--port", cmd.Args[1])
assert.NotEqual(t, "0", cmd.Args[2])
})
}


@ -64,7 +64,7 @@ func (s *Standalone) Invoke(appID, method string, data []byte, verb string, path
}
func makeEndpoint(lo ListOutput, method string) string {
return fmt.Sprintf("http://127.0.0.1:%s/v%s/invoke/%s/method/%s", fmt.Sprintf("%v", lo.HTTPPort), api.RuntimeAPIVersion, lo.AppID, method) //nolint: perfsprint
return fmt.Sprintf("http://127.0.0.1:%d/v%s/invoke/%s/method/%s", lo.HTTPPort, api.RuntimeAPIVersion, lo.AppID, method)
}
func handleResponse(response *http.Response) (string, error) {

View File

@ -66,7 +66,7 @@ func List() ([]ListOutput, error) {
for _, proc := range processes {
executable := strings.ToLower(proc.Executable())
if (executable == "daprd") || (executable == "daprd.exe") {
procDetails, err := process.NewProcess(int32(proc.Pid()))
procDetails, err := process.NewProcess(int32(proc.Pid())) //nolint:gosec
if err != nil {
continue
}
@ -105,9 +105,9 @@ func List() ([]ListOutput, error) {
enableMetrics = true
}
maxRequestBodySize := getIntArg(argumentsMap, "--dapr-http-max-request-size", runtime.DefaultMaxRequestBodySize)
maxRequestBodySize := getIntArg(argumentsMap, "max-body-size", runtime.DefaultMaxRequestBodySize)
httpReadBufferSize := getIntArg(argumentsMap, "--dapr-http-read-buffer-size", runtime.DefaultReadBufferSize)
httpReadBufferSize := getIntArg(argumentsMap, "read-buffer-size", runtime.DefaultReadBufferSize)
appID := argumentsMap["--app-id"]
appCmd := ""


@ -62,7 +62,7 @@ func (s *Standalone) Publish(publishAppID, pubsubName, topic string, payload []b
},
}
} else {
url = fmt.Sprintf("http://localhost:%s/v%s/publish/%s/%s%s", fmt.Sprintf("%v", instance.HTTPPort), api.RuntimeAPIVersion, pubsubName, topic, queryParams) //nolint: perfsprint
url = fmt.Sprintf("http://localhost:%d/v%s/publish/%s/%s%s", instance.HTTPPort, api.RuntimeAPIVersion, pubsubName, topic, queryParams)
}
contentType := "application/json"
@ -94,7 +94,7 @@ func (s *Standalone) Publish(publishAppID, pubsubName, topic string, payload []b
}
func getDaprInstance(list []ListOutput, publishAppID string) (ListOutput, error) {
for i := 0; i < len(list); i++ {
for i := range list {
if list[i].AppID == publishAppID {
return list[i], nil
}
@ -112,7 +112,7 @@ func getQueryParams(metadata map[string]interface{}) string {
}
// Prefix with "?" and remove the last "&".
if queryParams != "" {
queryParams = fmt.Sprintf("?%s", queryParams[:len(queryParams)-1])
queryParams = "?" + queryParams[:len(queryParams)-1]
}
return queryParams
}


@ -212,7 +212,7 @@ func TestGetQueryParams(t *testing.T) {
queryParams := getQueryParams(tc.metadata)
if queryParams != "" {
assert.True(t, queryParams[0] == '?', "expected query params to start with '?'")
assert.True(t, strings.HasPrefix(queryParams, "?"), "expected query params to start with '?'")
queryParams = queryParams[1:]
}

View File

@ -14,6 +14,7 @@ limitations under the License.
package standalone
import (
"context"
"fmt"
"net"
"os"
@ -23,12 +24,14 @@ import (
"strconv"
"strings"
"k8s.io/apimachinery/pkg/api/resource"
"github.com/Pallinder/sillyname-go"
"github.com/phayes/freeport"
"gopkg.in/yaml.v2"
"github.com/dapr/cli/pkg/print"
"github.com/dapr/dapr/pkg/components"
localloader "github.com/dapr/dapr/pkg/components/loader"
)
type LogDestType string
@ -78,8 +81,8 @@ type SharedRunConfig struct {
ResourcesPaths []string `arg:"resources-path" yaml:"resourcesPaths"`
// Specifically omitted from annotations as appSSL is deprecated.
AppSSL bool `arg:"app-ssl" yaml:"appSSL"`
MaxRequestBodySize int `arg:"dapr-http-max-request-size" annotation:"dapr.io/http-max-request-size" yaml:"daprHTTPMaxRequestSize" default:"-1"`
HTTPReadBufferSize int `arg:"dapr-http-read-buffer-size" annotation:"dapr.io/http-read-buffer-size" yaml:"daprHTTPReadBufferSize" default:"-1"`
MaxRequestBodySize string `arg:"max-body-size" annotation:"dapr.io/max-body-size" yaml:"maxBodySize" default:"4Mi"`
HTTPReadBufferSize string `arg:"read-buffer-size" annotation:"dapr.io/read-buffer-size" yaml:"readBufferSize" default:"4Ki"`
EnableAppHealth bool `arg:"enable-app-health-check" annotation:"dapr.io/enable-app-health-check" yaml:"enableAppHealthCheck"`
AppHealthPath string `arg:"app-health-check-path" annotation:"dapr.io/app-health-check-path" yaml:"appHealthCheckPath"`
AppHealthInterval int `arg:"app-health-probe-interval" annotation:"dapr.io/app-health-probe-interval" ifneq:"0" yaml:"appHealthProbeInterval"`
@ -87,10 +90,11 @@ type SharedRunConfig struct {
AppHealthThreshold int `arg:"app-health-threshold" annotation:"dapr.io/app-health-threshold" ifneq:"0" yaml:"appHealthThreshold"`
EnableAPILogging bool `arg:"enable-api-logging" annotation:"dapr.io/enable-api-logging" yaml:"enableApiLogging"`
// Specifically omitted from annotations; see https://github.com/dapr/cli/issues/1324 .
DaprdInstallPath string `yaml:"runtimePath"`
Env map[string]string `yaml:"env"`
DaprdLogDestination LogDestType `yaml:"daprdLogDestination"`
AppLogDestination LogDestType `yaml:"appLogDestination"`
DaprdInstallPath string `yaml:"runtimePath"`
Env map[string]string `yaml:"env"`
DaprdLogDestination LogDestType `yaml:"daprdLogDestination"`
AppLogDestination LogDestType `yaml:"appLogDestination"`
SchedulerHostAddress string `arg:"scheduler-host-address" yaml:"schedulerHostAddress"`
}
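
A minimal sketch (hypothetical, trimmed-down struct) of how the `arg` tags above are turned into daprd CLI flags via reflection, which is what getArgsFromSchema does further down in this file.

package main

import (
	"fmt"
	"reflect"
)

// sketchConfig is a hypothetical stand-in for SharedRunConfig;
// only the `arg` tags matter for this illustration.
type sketchConfig struct {
	MaxRequestBodySize string `arg:"max-body-size"`
	HTTPReadBufferSize string `arg:"read-buffer-size"`
}

func main() {
	cfg := sketchConfig{MaxRequestBodySize: "4Mi", HTTPReadBufferSize: "4Ki"}

	v := reflect.ValueOf(cfg)
	args := []string{}
	for i := range v.NumField() {
		key := v.Type().Field(i).Tag.Get("arg")
		val := fmt.Sprintf("%v", v.Field(i).Interface())
		if key != "" && val != "" {
			args = append(args, "--"+key, val)
		}
	}

	fmt.Println(args) // [--max-body-size 4Mi --read-buffer-size 4Ki]
}
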
func (meta *DaprMeta) newAppID() string {
@ -112,8 +116,8 @@ func (config *RunConfig) validateResourcesPaths() error {
return fmt.Errorf("error validating resources path %q : %w", dirPath, err)
}
}
componentsLoader := components.NewLocalComponents(dirPath...)
_, err := componentsLoader.Load()
localLoader := localloader.NewLocalLoader(config.AppID, dirPath)
err := localLoader.Validate(context.Background())
if err != nil {
return fmt.Errorf("error validating components in resources path %q : %w", dirPath, err)
}
@ -127,15 +131,34 @@ func (config *RunConfig) validatePlacementHostAddr() error {
}
if indx := strings.Index(placementHostAddr, ":"); indx == -1 {
if runtime.GOOS == daprWindowsOS {
placementHostAddr = fmt.Sprintf("%s:6050", placementHostAddr)
placementHostAddr += ":6050"
} else {
placementHostAddr = fmt.Sprintf("%s:50005", placementHostAddr)
placementHostAddr += ":50005"
}
}
config.PlacementHostAddr = placementHostAddr
return nil
}
func (config *RunConfig) validateSchedulerHostAddr() error {
schedulerHostAddr := config.SchedulerHostAddress
if len(schedulerHostAddr) == 0 {
return nil
}
if indx := strings.Index(schedulerHostAddr, ":"); indx == -1 {
if runtime.GOOS == daprWindowsOS {
schedulerHostAddr += ":6060"
} else {
schedulerHostAddr += ":50006"
}
}
config.SchedulerHostAddress = schedulerHostAddr
return nil
}
func (config *RunConfig) validatePort(portName string, portPtr *int, meta *DaprMeta) error {
if *portPtr <= 0 {
port, err := freeport.GetFreePort()
@ -205,18 +228,38 @@ func (config *RunConfig) Validate() error {
if config.MaxConcurrency < 1 {
config.MaxConcurrency = -1
}
if config.MaxRequestBodySize < 0 {
config.MaxRequestBodySize = -1
qBody, err := resource.ParseQuantity(config.MaxRequestBodySize)
if err != nil {
return fmt.Errorf("invalid max request body size: %w", err)
}
if config.HTTPReadBufferSize < 0 {
config.HTTPReadBufferSize = -1
if qBody.Value() < 0 {
config.MaxRequestBodySize = "-1"
} else {
config.MaxRequestBodySize = qBody.String()
}
qBuffer, err := resource.ParseQuantity(config.HTTPReadBufferSize)
if err != nil {
return fmt.Errorf("invalid http read buffer size: %w", err)
}
if qBuffer.Value() < 0 {
config.HTTPReadBufferSize = "-1"
} else {
config.HTTPReadBufferSize = qBuffer.String()
}
err = config.validatePlacementHostAddr()
if err != nil {
return err
}
err = config.validateSchedulerHostAddr()
if err != nil {
return err
}
return nil
}
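
A minimal, standalone sketch (with illustrative values) of how resource.ParseQuantity interprets the unit strings now accepted by max-body-size and read-buffer-size; the defaults of 4Mi and 4Ki resolve to 4194304 and 4096 bytes respectively, and negative quantities are mapped back to "-1" by Validate above.

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	for _, s := range []string{"4Mi", "4Ki", "16Mi", "-1"} {
		q, err := resource.ParseQuantity(s)
		if err != nil {
			fmt.Println(s, "-> invalid:", err)
			continue
		}
		// Value() returns the quantity as an int64 number of bytes (rounded up
		// for fractional values); String() returns the canonical form.
		fmt.Printf("%s -> %d bytes (canonical form %s)\n", s, q.Value(), q.String())
	}
}
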
@ -239,12 +282,27 @@ func (config *RunConfig) ValidateK8s() error {
if config.MaxConcurrency < 1 {
config.MaxConcurrency = -1
}
if config.MaxRequestBodySize < 0 {
config.MaxRequestBodySize = -1
qBody, err := resource.ParseQuantity(config.MaxRequestBodySize)
if err != nil {
return fmt.Errorf("invalid max request body size: %w", err)
}
if config.HTTPReadBufferSize < 0 {
config.HTTPReadBufferSize = -1
if qBody.Value() < 0 {
config.MaxRequestBodySize = "-1"
} else {
config.MaxRequestBodySize = qBody.String()
}
qBuffer, err := resource.ParseQuantity(config.HTTPReadBufferSize)
if err != nil {
return fmt.Errorf("invalid http read buffer size: %w", err)
}
if qBuffer.Value() < 0 {
config.HTTPReadBufferSize = "-1"
} else {
config.HTTPReadBufferSize = qBuffer.String()
}
return nil
@ -264,7 +322,7 @@ func (meta *DaprMeta) portExists(port int) bool {
if port <= 0 {
return false
}
//nolint
_, ok := meta.ExistingPorts[port]
if ok {
return true
@ -321,7 +379,7 @@ func (config *RunConfig) getArgs() []string {
// Recursive function to get all the args from the config struct.
// This is needed because the config struct has embedded structs.
func getArgsFromSchema(schema reflect.Value, args []string) []string {
for i := 0; i < schema.NumField(); i++ {
for i := range schema.NumField() {
valueField := schema.Field(i).Interface()
typeField := schema.Type().Field(i)
key := typeField.Tag.Get("arg")
@ -363,7 +421,7 @@ func (config *RunConfig) SetDefaultFromSchema() {
}
func (config *RunConfig) setDefaultFromSchemaRecursive(schema reflect.Value) {
for i := 0; i < schema.NumField(); i++ {
for i := range schema.NumField() {
valueField := schema.Field(i)
typeField := schema.Type().Field(i)
if typeField.Type.Kind() == reflect.Struct {
@ -389,7 +447,7 @@ func (config *RunConfig) getEnv() []string {
// Handle values from config that have an "env" tag.
schema := reflect.ValueOf(*config)
for i := 0; i < schema.NumField(); i++ {
for i := range schema.NumField() {
valueField := schema.Field(i).Interface()
typeField := schema.Type().Field(i)
key := typeField.Tag.Get("env")
@ -449,7 +507,7 @@ func (config *RunConfig) getAppProtocol() string {
func (config *RunConfig) GetEnv() map[string]string {
env := map[string]string{}
schema := reflect.ValueOf(*config)
for i := 0; i < schema.NumField(); i++ {
for i := range schema.NumField() {
valueField := schema.Field(i).Interface()
typeField := schema.Type().Field(i)
key := typeField.Tag.Get("env")
@ -473,7 +531,7 @@ func (config *RunConfig) GetEnv() map[string]string {
func (config *RunConfig) GetAnnotations() map[string]string {
annotations := map[string]string{}
schema := reflect.ValueOf(*config)
for i := 0; i < schema.NumField(); i++ {
for i := range schema.NumField() {
valueField := schema.Field(i).Interface()
typeField := schema.Type().Field(i)
key := typeField.Tag.Get("annotation")

View File

@ -31,6 +31,7 @@ import (
"sync"
"time"
"github.com/Masterminds/semver"
"github.com/fatih/color"
"gopkg.in/yaml.v2"
@ -43,6 +44,7 @@ const (
daprRuntimeFilePrefix = "daprd"
dashboardFilePrefix = "dashboard"
placementServiceFilePrefix = "placement"
schedulerServiceFilePrefix = "scheduler"
daprWindowsOS = "windows"
@ -72,6 +74,8 @@ const (
// DaprPlacementContainerName is the container name of placement service.
DaprPlacementContainerName = "dapr_placement"
// DaprSchedulerContainerName is the container name of scheduler service.
DaprSchedulerContainerName = "dapr_scheduler"
// DaprRedisContainerName is the container name of redis.
DaprRedisContainerName = "dapr_redis"
// DaprZipkinContainerName is the container name of zipkin.
@ -81,6 +85,12 @@ const (
healthPort = 58080
metricPort = 59090
schedulerHealthPort = 58081
schedulerMetricPort = 59091
schedulerEtcdPort = 52379
daprVersionsWithScheduler = ">= 1.14.x"
)
var (
@ -133,6 +143,7 @@ type initInfo struct {
imageRegistryURL string
containerRuntime string
imageVariant string
schedulerVolume *string
}
type daprImageInfo struct {
@ -154,8 +165,27 @@ func isBinaryInstallationRequired(binaryFilePrefix, binInstallDir string) (bool,
return true, nil
}
// isSchedulerIncluded returns true if the scheduler is included in the given Dapr runtime version.
func isSchedulerIncluded(runtimeVersion string) (bool, error) {
c, err := semver.NewConstraint(daprVersionsWithScheduler)
if err != nil {
return false, err
}
v, err := semver.NewVersion(runtimeVersion)
if err != nil {
return false, err
}
vNoPrerelease, err := v.SetPrerelease("")
if err != nil {
return false, err
}
return c.Check(&vNoPrerelease), nil
}
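
A minimal sketch of why isSchedulerIncluded strips the prerelease before checking: Masterminds semver constraints do not match prerelease versions by default, so 1.14.0-rc.1 would otherwise be reported as not satisfying >= 1.14.x. The import path below assumes the same github.com/Masterminds/semver package already used in this file.

package main

import (
	"fmt"

	"github.com/Masterminds/semver"
)

func main() {
	c, _ := semver.NewConstraint(">= 1.14.x")
	v, _ := semver.NewVersion("1.14.0-rc.1")

	fmt.Println(c.Check(v)) // false: constraints skip prerelease versions by default

	// Stripping the prerelease, as isSchedulerIncluded does, makes the check pass.
	stripped, _ := v.SetPrerelease("")
	fmt.Println(c.Check(&stripped)) // true
}
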
// Init installs Dapr on a local machine using the supplied runtimeVersion.
func Init(runtimeVersion, dashboardVersion string, dockerNetwork string, slimMode bool, imageRegistryURL string, fromDir string, containerRuntime string, imageVariant string, daprInstallPath string) error {
func Init(runtimeVersion, dashboardVersion string, dockerNetwork string, slimMode bool, imageRegistryURL string, fromDir string, containerRuntime string, imageVariant string, daprInstallPath string, schedulerVolume *string) error {
var err error
var bundleDet bundleDetails
containerRuntime = strings.TrimSpace(containerRuntime)
@ -240,8 +270,10 @@ func Init(runtimeVersion, dashboardVersion string, dockerNetwork string, slimMod
createComponentsAndConfiguration,
installDaprRuntime,
installPlacement,
installScheduler,
installDashboard,
runPlacementService,
runSchedulerService,
runRedis,
runZipkin,
}
@ -274,6 +306,7 @@ func Init(runtimeVersion, dashboardVersion string, dockerNetwork string, slimMod
imageRegistryURL: imageRegistryURL,
containerRuntime: containerRuntime,
imageVariant: imageVariant,
schedulerVolume: schedulerVolume,
}
for _, step := range initSteps {
// Run init on the configurations and containers.
@ -302,6 +335,7 @@ func Init(runtimeVersion, dashboardVersion string, dockerNetwork string, slimMod
if slimMode {
// Print info on placement and scheduler binaries only on slim install.
print.InfoStatusEvent(os.Stdout, "%s binary has been installed to %s.", placementServiceFilePrefix, daprBinDir)
print.InfoStatusEvent(os.Stdout, "%s binary has been installed to %s.", schedulerServiceFilePrefix, daprBinDir)
} else {
runtimeCmd := utils.GetContainerRuntimeCmd(info.containerRuntime)
dockerContainerNames := []string{DaprPlacementContainerName, DaprRedisContainerName, DaprZipkinContainerName}
@ -309,6 +343,10 @@ func Init(runtimeVersion, dashboardVersion string, dockerNetwork string, slimMod
if isAirGapInit {
dockerContainerNames = []string{DaprPlacementContainerName}
}
hasScheduler, err := isSchedulerIncluded(info.runtimeVersion)
if err == nil && hasScheduler {
dockerContainerNames = append(dockerContainerNames, DaprSchedulerContainerName)
}
for _, container := range dockerContainerNames {
containerName := utils.CreateContainerName(container, dockerNetwork)
ok, err := confirmContainerIsRunningOrExists(containerName, true, runtimeCmd)
@ -379,7 +417,6 @@ func runZipkin(wg *sync.WaitGroup, errorChan chan<- error, info initInfo) {
args = append(args, imageName)
}
_, err = utils.RunCmdAndWait(runtimeCmd, args...)
if err != nil {
runError := isContainerRunError(err)
if !runError {
@ -445,7 +482,6 @@ func runRedis(wg *sync.WaitGroup, errorChan chan<- error, info initInfo) {
args = append(args, imageName)
}
_, err = utils.RunCmdAndWait(runtimeCmd, args...)
if err != nil {
runError := isContainerRunError(err)
if !runError {
@ -489,15 +525,15 @@ func runPlacementService(wg *sync.WaitGroup, errorChan chan<- error, info initIn
if isAirGapInit {
// if --from-dir flag is given load the image details from the installer-bundle.
dir := path_filepath.Join(info.fromDir, *info.bundleDet.ImageSubDir)
image = info.bundleDet.getPlacementImageName()
err = loadContainer(dir, info.bundleDet.getPlacementImageFileName(), info.containerRuntime)
image = info.bundleDet.getDaprImageName()
err = loadContainer(dir, info.bundleDet.getDaprImageFileName(), info.containerRuntime)
if err != nil {
errorChan <- err
return
}
} else {
// otherwise load the image from the specified repository.
image, err = getPlacementImageName(imgInfo, info)
image, err = getDaprImageName(imgInfo, info)
if err != nil {
errorChan <- err
return
@ -532,7 +568,6 @@ func runPlacementService(wg *sync.WaitGroup, errorChan chan<- error, info initIn
args = append(args, image)
_, err = utils.RunCmdAndWait(runtimeCmd, args...)
if err != nil {
runError := isContainerRunError(err)
if !runError {
@ -545,6 +580,141 @@ func runPlacementService(wg *sync.WaitGroup, errorChan chan<- error, info initIn
errorChan <- nil
}
func runSchedulerService(wg *sync.WaitGroup, errorChan chan<- error, info initInfo) {
defer wg.Done()
if info.slimMode {
return
}
hasScheduler, err := isSchedulerIncluded(info.runtimeVersion)
if err != nil {
errorChan <- err
return
}
if !hasScheduler {
return
}
runtimeCmd := utils.GetContainerRuntimeCmd(info.containerRuntime)
schedulerContainerName := utils.CreateContainerName(DaprSchedulerContainerName, info.dockerNetwork)
exists, err := confirmContainerIsRunningOrExists(schedulerContainerName, false, runtimeCmd)
if err != nil {
errorChan <- err
return
} else if exists {
errorChan <- fmt.Errorf("%s container exists or is running. %s", schedulerContainerName, errInstallTemplate)
return
}
var image string
imgInfo := daprImageInfo{
ghcrImageName: daprGhcrImageName,
dockerHubImageName: daprDockerImageName,
imageRegistryURL: info.imageRegistryURL,
imageRegistryName: defaultImageRegistryName,
}
if isAirGapInit {
// if --from-dir flag is given load the image details from the installer-bundle.
dir := path_filepath.Join(info.fromDir, *info.bundleDet.ImageSubDir)
image = info.bundleDet.getDaprImageName()
err = loadContainer(dir, info.bundleDet.getDaprImageFileName(), info.containerRuntime)
if err != nil {
errorChan <- err
return
}
} else {
// otherwise load the image from the specified repository.
image, err = getDaprImageName(imgInfo, info)
if err != nil {
errorChan <- err
return
}
}
args := []string{
"run",
"--name", schedulerContainerName,
"--restart", "always",
"-d",
"--entrypoint", "./scheduler",
}
if info.schedulerVolume != nil {
// Don't touch this file location unless things start breaking.
// When Docker creates a volume and mounts it, the volume adopts the file
// permissions of the target directory if that directory already exists in the container.
// If the directory did not previously exist, Docker creates it owned by root
// and not writeable.
// We are lucky in that the Dapr containers have a world-writeable directory at
// /var/lock, so the Docker volume can be mounted there.
// TODO: update the Dapr scheduler dockerfile to create a scheduler user id writeable
// directory at /var/lib/dapr/scheduler, then update the path here.
if strings.EqualFold(info.imageVariant, "mariner") {
args = append(args, "--volume", *info.schedulerVolume+":/var/tmp")
} else {
args = append(args, "--volume", *info.schedulerVolume+":/var/lock")
}
}
osPort := 50006
if info.dockerNetwork != "" {
args = append(args,
"--network", info.dockerNetwork,
"--network-alias", DaprSchedulerContainerName)
} else {
if runtime.GOOS == daprWindowsOS {
osPort = 6060
}
args = append(args,
"-p", fmt.Sprintf("%v:50006", osPort),
"-p", fmt.Sprintf("%v:2379", schedulerEtcdPort),
"-p", fmt.Sprintf("%v:8080", schedulerHealthPort),
"-p", fmt.Sprintf("%v:9090", schedulerMetricPort),
)
}
if strings.EqualFold(info.imageVariant, "mariner") {
args = append(args, image, "--etcd-data-dir=/var/tmp/dapr/scheduler")
} else {
args = append(args, image, "--etcd-data-dir=/var/lock/dapr/scheduler")
}
if schedulerOverrideHostPort(info) {
args = append(args, fmt.Sprintf("--override-broadcast-host-port=localhost:%v", osPort))
}
_, err = utils.RunCmdAndWait(runtimeCmd, args...)
if err != nil {
runError := isContainerRunError(err)
if !runError {
errorChan <- parseContainerRuntimeError("scheduler service", err)
} else {
errorChan <- fmt.Errorf("%s %s failed with: %w", runtimeCmd, args, err)
}
return
}
errorChan <- nil
}
func schedulerOverrideHostPort(info initInfo) bool {
if info.runtimeVersion == "edge" || info.runtimeVersion == "dev" {
return true
}
runV, err := semver.NewVersion(info.runtimeVersion)
if err != nil {
return true
}
v115rc5, _ := semver.NewVersion("1.15.0-rc.5")
return runV.GreaterThan(v115rc5)
}
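
A small illustration (using the same Masterminds semver package) of the cutoff logic above: anything newer than 1.15.0-rc.5 — including the final 1.15.0 release — gets the broadcast-host-port override, and unparseable versions such as edge or dev default to applying it.

package main

import (
	"fmt"

	"github.com/Masterminds/semver"
)

func main() {
	cutoff, _ := semver.NewVersion("1.15.0-rc.5")

	for _, raw := range []string{"1.15.0-rc.4", "1.15.0-rc.6", "1.15.0", "edge"} {
		v, err := semver.NewVersion(raw)
		if err != nil {
			// "edge" and "dev" fail to parse, so the override is applied.
			fmt.Println(raw, "-> override: true (unparseable version)")
			continue
		}
		fmt.Println(raw, "-> override:", v.GreaterThan(cutoff))
	}
}
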
func moveDashboardFiles(extractedFilePath string, dir string) (string, error) {
// Move /release/os/web directory to /web.
oldPath := path_filepath.Join(path_filepath.Dir(extractedFilePath), "web")
@ -609,7 +779,29 @@ func installPlacement(wg *sync.WaitGroup, errorChan chan<- error, info initInfo)
}
}
// installBinary installs the daprd, placement or dashboard binaries and associated files inside the default dapr bin directory.
func installScheduler(wg *sync.WaitGroup, errorChan chan<- error, info initInfo) {
defer wg.Done()
if !info.slimMode {
return
}
hasScheduler, err := isSchedulerIncluded(info.runtimeVersion)
if err != nil {
errorChan <- err
return
}
if !hasScheduler {
return
}
err = installBinary(info.runtimeVersion, schedulerServiceFilePrefix, cli_ver.DaprGitHubRepo, info)
if err != nil {
errorChan <- err
}
}
// installBinary installs the daprd, placement, scheduler, or dashboard binaries and associated files inside the default dapr bin directory.
func installBinary(version, binaryFilePrefix, githubRepo string, info initInfo) error {
var (
err error
@ -715,7 +907,7 @@ func createSlimConfiguration(wg *sync.WaitGroup, errorChan chan<- error, info in
func makeDefaultComponentsDir(installDir string) error {
// Make default components directory.
componentsDir := GetDaprComponentsPath(installDir)
//nolint
_, err := os.Stat(componentsDir)
if os.IsNotExist(err) {
errDir := os.MkdirAll(componentsDir, 0o755)
@ -782,7 +974,7 @@ func unzip(r *zip.Reader, targetDir string, binaryFilePrefix string) (string, er
return "", err
}
if strings.HasSuffix(fpath, fmt.Sprintf("%s.exe", binaryFilePrefix)) {
if strings.HasSuffix(fpath, binaryFilePrefix+".exe") {
foundBinary = fpath
}
@ -840,7 +1032,6 @@ func untar(reader io.Reader, targetDir string, binaryFilePrefix string) (string,
foundBinary := ""
for {
header, err := tr.Next()
//nolint
if err == io.EOF {
break
} else if err != nil {
@ -863,7 +1054,7 @@ func untar(reader io.Reader, targetDir string, binaryFilePrefix string) (string,
continue
}
f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, os.FileMode(header.Mode))
f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, os.FileMode(header.Mode)) //nolint:gosec
if err != nil {
return "", err
}
@ -912,14 +1103,14 @@ func moveFileToPath(filepath string, installLocation string) (string, error) {
if !strings.Contains(strings.ToLower(p), strings.ToLower(destDir)) {
destDir = utils.SanitizeDir(destDir)
pathCmd := "[System.Environment]::SetEnvironmentVariable('Path',[System.Environment]::GetEnvironmentVariable('Path','user') + '" + fmt.Sprintf(";%s", destDir) + "', 'user')"
pathCmd := "[System.Environment]::SetEnvironmentVariable('Path',[System.Environment]::GetEnvironmentVariable('Path','user') + '" + ";" + destDir + "', 'user')"
_, err := utils.RunCmdAndWait("powershell", pathCmd)
if err != nil {
return "", err
}
}
return fmt.Sprintf("%s\\daprd.exe", destDir), nil
return destDir + "\\daprd.exe", nil
}
if strings.HasPrefix(fileName, daprRuntimeFilePrefix) && installLocation != "" {
@ -944,7 +1135,7 @@ func createRedisStateStore(redisHost string, componentsPath string) error {
redisStore.Spec.Metadata = []componentMetadataItem{
{
Name: "redisHost",
Value: fmt.Sprintf("%s:6379", redisHost),
Value: redisHost + ":6379",
},
{
Name: "redisPassword",
@ -979,7 +1170,7 @@ func createRedisPubSub(redisHost string, componentsPath string) error {
redisPubSub.Spec.Metadata = []componentMetadataItem{
{
Name: "redisHost",
Value: fmt.Sprintf("%s:6379", redisHost),
Value: redisHost + ":6379",
},
{
Name: "redisPassword",
@ -1189,16 +1380,16 @@ func copyWithTimeout(ctx context.Context, dst io.Writer, src io.Reader) (int64,
}
}
// getPlacementImageName returns the resolved placement image name for online `dapr init`.
// getDaprImageName returns the resolved Dapr image name for online `dapr init`.
// It can either be resolved to the image-registry if given, otherwise GitHub container registry if
// selected or fallback to Docker Hub.
func getPlacementImageName(imageInfo daprImageInfo, info initInfo) (string, error) {
func getDaprImageName(imageInfo daprImageInfo, info initInfo) (string, error) {
image, err := resolveImageURI(imageInfo)
if err != nil {
return "", err
}
image, err = getPlacementImageWithTag(image, info.runtimeVersion, info.imageVariant)
image, err = getDaprImageWithTag(image, info.runtimeVersion, info.imageVariant)
if err != nil {
return "", err
}
@ -1206,8 +1397,8 @@ func getPlacementImageName(imageInfo daprImageInfo, info initInfo) (string, erro
// if default registry is GHCR and the image is not available in or cannot be pulled from GHCR
// fallback to using dockerhub.
if useGHCR(imageInfo, info.fromDir) && !tryPullImage(image, info.containerRuntime) {
print.InfoStatusEvent(os.Stdout, "Placement image not found in Github container registry, pulling it from Docker Hub")
image, err = getPlacementImageWithTag(daprDockerImageName, info.runtimeVersion, info.imageVariant)
print.InfoStatusEvent(os.Stdout, "Image not found in Github container registry, pulling it from Docker Hub")
image, err = getDaprImageWithTag(daprDockerImageName, info.runtimeVersion, info.imageVariant)
if err != nil {
return "", err
}
@ -1215,7 +1406,7 @@ func getPlacementImageName(imageInfo daprImageInfo, info initInfo) (string, erro
return image, nil
}
func getPlacementImageWithTag(name, version, imageVariant string) (string, error) {
func getDaprImageWithTag(name, version, imageVariant string) (string, error) {
err := utils.ValidateImageVariant(imageVariant)
if err != nil {
return "", err

View File

@ -98,7 +98,6 @@ func TestResolveImageWithGHCR(t *testing.T) {
}
for _, test := range tests {
test := test
t.Run(test.name, func(t *testing.T) {
t.Parallel()
got, err := resolveImageURI(test.args)
@ -144,7 +143,6 @@ func TestResolveImageWithDockerHub(t *testing.T) {
}
for _, test := range tests {
test := test
t.Run(test.name, func(t *testing.T) {
t.Parallel()
got, err := resolveImageURI(test.args)
@ -190,7 +188,6 @@ func TestResolveImageWithPrivateRegistry(t *testing.T) {
}
for _, test := range tests {
test := test
t.Run(test.name, func(t *testing.T) {
t.Parallel()
got, err := resolveImageURI(test.args)
@ -328,9 +325,31 @@ func TestInitLogActualContainerRuntimeName(t *testing.T) {
t.Skip("Skipping test as container runtime is available")
}
err := Init(latestVersion, latestVersion, "", false, "", "", test.containerRuntime, "", "")
assert.NotNil(t, err)
err := Init(latestVersion, latestVersion, "", false, "", "", test.containerRuntime, "", "", nil)
assert.Error(t, err)
assert.Contains(t, err.Error(), test.containerRuntime)
})
}
}
func TestIsSchedulerIncluded(t *testing.T) {
scenarios := []struct {
version string
isIncluded bool
}{
{"1.13.0-rc.1", false},
{"1.13.0", false},
{"1.13.1", false},
{"1.14.0", true},
{"1.14.0-rc.1", true},
{"1.14.0-mycompany.1", true},
{"1.14.1", true},
}
for _, scenario := range scenarios {
t.Run("isSchedulerIncludedIn"+scenario.version, func(t *testing.T) {
included, err := isSchedulerIncluded(scenario.version)
assert.NoError(t, err)
assert.Equal(t, scenario.isIncluded, included)
})
}
}

View File

@ -18,6 +18,7 @@ package standalone
import (
"fmt"
"strconv"
"syscall"
"github.com/dapr/cli/utils"
@ -31,10 +32,10 @@ func Stop(appID string, cliPIDToNoOfApps map[int]int, apps []ListOutput) error {
// Kill the Daprd process if Daprd was started without CLI, otherwise
// kill the CLI process which also kills the associated Daprd process.
if a.CliPID == 0 || cliPIDToNoOfApps[a.CliPID] > 1 {
pid = fmt.Sprintf("%v", a.DaprdPID) //nolint: perfsprint
pid = strconv.Itoa(a.DaprdPID)
cliPIDToNoOfApps[a.CliPID]--
} else {
pid = fmt.Sprintf("%v", a.CliPID) //nolint: perfsprint
pid = strconv.Itoa(a.CliPID)
}
_, err := utils.RunCmdAndWait("kill", pid)

View File

@ -20,6 +20,7 @@ import (
"time"
"github.com/dapr/cli/utils"
"github.com/kolesnikovae/go-winjob"
"golang.org/x/sys/windows"
)

View File

@ -24,13 +24,17 @@ import (
"github.com/dapr/cli/utils"
)
func removeContainers(uninstallPlacementContainer, uninstallAll bool, dockerNetwork, runtimeCmd string) []error {
func removeContainers(uninstallPlacementContainer, uninstallSchedulerContainer, uninstallAll bool, dockerNetwork, runtimeCmd string) []error {
var containerErrs []error
if uninstallPlacementContainer {
containerErrs = removeDockerContainer(containerErrs, DaprPlacementContainerName, dockerNetwork, runtimeCmd)
}
if uninstallSchedulerContainer {
containerErrs = removeDockerContainer(containerErrs, DaprSchedulerContainerName, dockerNetwork, runtimeCmd)
}
if uninstallAll {
containerErrs = removeDockerContainer(containerErrs, DaprRedisContainerName, dockerNetwork, runtimeCmd)
containerErrs = removeDockerContainer(containerErrs, DaprZipkinContainerName, dockerNetwork, runtimeCmd)
@ -59,6 +63,20 @@ func removeDockerContainer(containerErrs []error, containerName, network, runtim
return containerErrs
}
func removeSchedulerVolume(containerErrs []error, runtimeCmd string) []error {
print.InfoStatusEvent(os.Stdout, "Removing volume if it exists: dapr_scheduler")
_, err := utils.RunCmdAndWait(
runtimeCmd, "volume", "rm",
"--force",
"dapr_scheduler")
if err != nil {
containerErrs = append(
containerErrs,
fmt.Errorf("could not remove dapr_scheduler volume: %w", err))
}
return containerErrs
}
func removeDir(dirPath string) error {
_, err := os.Stat(dirPath)
if os.IsNotExist(err) {
@ -84,19 +102,27 @@ func Uninstall(uninstallAll bool, dockerNetwork string, containerRuntime string,
placementFilePath := binaryFilePathWithDir(daprBinDir, placementServiceFilePrefix)
_, placementErr := os.Stat(placementFilePath) // check if the placement binary exists.
uninstallPlacementContainer := errors.Is(placementErr, fs.ErrNotExist)
schedulerFilePath := binaryFilePathWithDir(daprBinDir, schedulerServiceFilePrefix)
_, schedulerErr := os.Stat(schedulerFilePath) // check if the scheduler binary exists.
uninstallSchedulerContainer := errors.Is(schedulerErr, fs.ErrNotExist)
// Remove .dapr/bin.
err = removeDir(daprBinDir)
if err != nil {
print.WarningStatusEvent(os.Stdout, "WARNING: could not delete dapr bin dir: %s", daprBinDir)
}
// We don't delete .dapr/scheduler by choice since it holds state.
// To delete .dapr/scheduler, the user is expected to pass the `--all` flag, which deletes the entire .dapr folder.
// The same applies to the .dapr/components folder.
containerRuntime = strings.TrimSpace(containerRuntime)
runtimeCmd := utils.GetContainerRuntimeCmd(containerRuntime)
containerRuntimeAvailable := false
containerRuntimeAvailable = utils.IsContainerRuntimeInstalled(containerRuntime)
if containerRuntimeAvailable {
containerErrs = removeContainers(uninstallPlacementContainer, uninstallAll, dockerNetwork, runtimeCmd)
} else if uninstallPlacementContainer || uninstallAll {
containerErrs = removeContainers(uninstallPlacementContainer, uninstallSchedulerContainer, uninstallAll, dockerNetwork, runtimeCmd)
} else if uninstallSchedulerContainer || uninstallPlacementContainer || uninstallAll {
print.WarningStatusEvent(os.Stdout, "WARNING: could not delete supporting containers as container runtime is not installed or running")
}
@ -105,6 +131,10 @@ func Uninstall(uninstallAll bool, dockerNetwork string, containerRuntime string,
if err != nil {
print.WarningStatusEvent(os.Stdout, "WARNING: could not delete dapr dir %s: %s", installDir, err)
}
if containerRuntimeAvailable {
containerErrs = removeSchedulerVolume(containerErrs, runtimeCmd)
}
}
err = errors.New("uninstall failed")

View File

@ -26,6 +26,7 @@ import (
"golang.org/x/sys/windows"
"github.com/dapr/cli/pkg/print"
"github.com/kolesnikovae/go-winjob"
"github.com/kolesnikovae/go-winjob/jobapi"
)

View File

@ -15,6 +15,7 @@ package version
import (
"encoding/json"
"errors"
"fmt"
"io"
"net/http"
@ -112,23 +113,30 @@ func GetLatestReleaseGithub(githubURL string) (string, error) {
}
if len(githubRepoReleases) == 0 {
return "", fmt.Errorf("no releases")
return "", errors.New("no releases")
}
defaultVersion, _ := version.NewVersion("0.0.0")
latestVersion := defaultVersion
for _, release := range githubRepoReleases {
if !strings.Contains(release.TagName, "-rc") {
cur, _ := version.NewVersion(strings.TrimPrefix(release.TagName, "v"))
if cur.GreaterThan(latestVersion) {
latestVersion = cur
}
cur, err := version.NewVersion(strings.TrimPrefix(release.TagName, "v"))
if err != nil || cur == nil {
print.WarningStatusEvent(os.Stdout, "Malformed version %s, skipping", release.TagName)
continue
}
// Prerelease versions and versions with metadata are skipped.
if cur.Prerelease() != "" || cur.Metadata() != "" {
continue
}
if cur.GreaterThan(latestVersion) {
latestVersion = cur
}
}
if latestVersion.Equal(defaultVersion) {
return "", fmt.Errorf("no releases")
return "", errors.New("no releases")
}
return latestVersion.String(), nil
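
The revised loop above skips tags that fail to parse (such as vedge) and tags that carry a prerelease or metadata suffix, rather than crashing on a nil version. A minimal sketch of that filtering logic follows; it uses github.com/Masterminds/semver purely for illustration (the version package may rely on a different semver library, but the accessors and the skip behaviour are the same).

package main

import (
	"fmt"
	"strings"

	"github.com/Masterminds/semver"
)

func main() {
	// Tags mirroring the test data below: "vedge" is malformed, the RC has a
	// prerelease, and only the plain release should become the latest version.
	tags := []string{"vedge", "v1.5.1-rc.1", "v1.5.1"}

	latest, _ := semver.NewVersion("0.0.0")
	for _, tag := range tags {
		cur, err := semver.NewVersion(strings.TrimPrefix(tag, "v"))
		if err != nil {
			fmt.Println("skipping malformed tag:", tag)
			continue
		}
		if cur.Prerelease() != "" || cur.Metadata() != "" {
			fmt.Println("skipping prerelease/metadata tag:", tag)
			continue
		}
		if cur.GreaterThan(latest) {
			latest = cur
		}
	}
	fmt.Println("latest:", latest) // latest: 1.5.1
}
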
@ -144,7 +152,7 @@ func GetLatestReleaseHelmChart(helmChartURL string) (string, error) {
return "", err
}
if len(helmChartReleases.Entries.Dapr) == 0 {
return "", fmt.Errorf("no releases")
return "", errors.New("no releases")
}
for _, release := range helmChartReleases.Entries.Dapr {
@ -159,6 +167,6 @@ func GetLatestReleaseHelmChart(helmChartURL string) (string, error) {
return release.Version, nil
}
return "", fmt.Errorf("no releases")
return "", errors.New("no releases")
})
}

View File

@ -162,6 +162,52 @@ func TestGetVersionsGithub(t *testing.T) {
"no releases",
"",
},
{
"Malformed version no releases",
"/malformed_version_no_releases",
`[
{
"url": "https://api.github.com/repos/dapr/dapr/releases/186741665",
"html_url": "https://github.com/dapr/dapr/releases/tag/vedge",
"id": 186741665,
"tag_name": "vedge",
"target_commitish": "master",
"name": "Dapr Runtime vedge",
"draft": false,
"prerelease": false
}
] `,
"no releases",
"",
},
{
"Malformed version with latest",
"/malformed_version_with_latest",
`[
{
"url": "https://api.github.com/repos/dapr/dapr/releases/186741665",
"html_url": "https://github.com/dapr/dapr/releases/tag/vedge",
"id": 186741665,
"tag_name": "vedge",
"target_commitish": "master",
"name": "Dapr Runtime vedge",
"draft": false,
"prerelease": false
},
{
"url": "https://api.github.com/repos/dapr/dapr/releases/44766923",
"html_url": "https://github.com/dapr/dapr/releases/tag/v1.5.1",
"id": 44766923,
"tag_name": "v1.5.1",
"target_commitish": "master",
"name": "Dapr Runtime v1.5.1",
"draft": false,
"prerelease": false
}
] `,
"",
"1.5.1",
},
}
m := http.NewServeMux()
s := http.Server{Addr: ":12345", Handler: m, ReadHeaderTimeout: time.Duration(5) * time.Second}
@ -179,7 +225,7 @@ func TestGetVersionsGithub(t *testing.T) {
for _, tc := range tests {
t.Run(tc.Name, func(t *testing.T) {
version, err := GetLatestReleaseGithub(fmt.Sprintf("http://localhost:12345%s", tc.Path))
version, err := GetLatestReleaseGithub("http://localhost:12345" + tc.Path)
assert.Equal(t, tc.ExpectedVer, version)
if tc.ExpectedErr != "" {
assert.EqualError(t, err, tc.ExpectedErr)
@ -288,7 +334,7 @@ entries:
for _, tc := range tests {
t.Run(tc.Name, func(t *testing.T) {
version, err := GetLatestReleaseHelmChart(fmt.Sprintf("http://localhost:12346%s", tc.Path))
version, err := GetLatestReleaseHelmChart("http://localhost:12346" + tc.Path)
assert.Equal(t, tc.ExpectedVer, version)
if tc.ExpectedErr != "" {
assert.EqualError(t, err, tc.ExpectedErr)

View File

@ -19,10 +19,12 @@ import (
"os"
"path/filepath"
"runtime"
"strconv"
"strings"
"testing"
"time"
"github.com/Masterminds/semver/v3"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
core_v1 "k8s.io/api/core/v1"
@ -46,12 +48,23 @@ const (
ClusterRoles
ClusterRoleBindings
numHAPods = 13
numNonHAPods = 5
numHAPodsWithScheduler = 16
numHAPodsOld = 13
numNonHAPodsWithHAScheduler = 8
numNonHAPodsWithScheduler = 6
numNonHAPodsOld = 5
thirdPartyDevNamespace = "default"
devRedisReleaseName = "dapr-dev-redis"
devZipkinReleaseName = "dapr-dev-zipkin"
DaprModeHA = "ha"
DaprModeNonHA = "non-ha"
)
var (
VersionWithScheduler = semver.MustParse("1.14.0-rc.1")
VersionWithHAScheduler = semver.MustParse("1.15.0-rc.1")
)
type VersionDetails struct {
@ -73,6 +86,7 @@ type TestOptions struct {
CheckResourceExists map[Resource]bool
UninstallAll bool
InitWithCustomCert bool
TimeoutSeconds int
}
type TestCase struct {
@ -104,6 +118,29 @@ func GetVersionsFromEnv(t *testing.T, latest bool) (string, string) {
return daprRuntimeVersion, daprDashboardVersion
}
func GetRuntimeVersion(t *testing.T, latest bool) *semver.Version {
daprRuntimeVersion, _ := GetVersionsFromEnv(t, latest)
runtimeVersion, err := semver.NewVersion(daprRuntimeVersion)
require.NoError(t, err)
return runtimeVersion
}
func GetDaprTestHaMode() string {
daprHaMode := os.Getenv("TEST_DAPR_HA_MODE")
if daprHaMode != "" {
return daprHaMode
}
return ""
}
func ShouldSkipTest(mode string) bool {
envDaprHaMode := GetDaprTestHaMode()
if envDaprHaMode != "" {
return envDaprHaMode != mode
}
return false
}
func UpgradeTest(details VersionDetails, opts TestOptions) func(t *testing.T) {
return func(t *testing.T) {
daprPath := GetDaprPath()
@ -124,14 +161,20 @@ func UpgradeTest(details VersionDetails, opts TestOptions) func(t *testing.T) {
args = append(args, "--image-variant", details.ImageVariant)
}
if opts.TimeoutSeconds > 0 {
args = append(args, "--timeout", strconv.Itoa(opts.TimeoutSeconds))
}
output, err := spawn.Command(daprPath, args...)
t.Log(output)
require.NoError(t, err, "upgrade failed")
done := make(chan struct{})
defer close(done)
podsRunning := make(chan struct{})
defer close(podsRunning)
go waitAllPodsRunning(t, DaprTestNamespace, opts.HAEnabled, done, podsRunning)
go waitAllPodsRunning(t, DaprTestNamespace, opts.HAEnabled, done, podsRunning, details)
select {
case <-podsRunning:
t.Logf("verified all pods running in namespace %s are running after upgrade", DaprTestNamespace)
@ -197,7 +240,7 @@ func GetTestsOnInstall(details VersionDetails, opts TestOptions) []TestCase {
{"clusterroles exist " + details.RuntimeVersion, ClusterRolesTest(details, opts)},
{"clusterrolebindings exist " + details.RuntimeVersion, ClusterRoleBindingsTest(details, opts)},
{"apply and check components exist " + details.RuntimeVersion, ComponentsTestOnInstallUpgrade(opts)},
{"apply and check httpendpoints exist " + details.RuntimeVersion, HTTPEndpointsTestOnInstallUpgrade(opts)},
{"apply and check httpendpoints exist " + details.RuntimeVersion, HTTPEndpointsTestOnInstallUpgrade(opts, TestOptions{})},
{"check mtls " + details.RuntimeVersion, MTLSTestOnInstallUpgrade(opts)},
{"status check " + details.RuntimeVersion, StatusTestOnInstallUpgrade(details, opts)},
}
@ -325,10 +368,10 @@ func ComponentsTestOnInstallUpgrade(opts TestOptions) func(t *testing.T) {
}
}
func HTTPEndpointsTestOnInstallUpgrade(opts TestOptions) func(t *testing.T) {
func HTTPEndpointsTestOnInstallUpgrade(installOpts TestOptions, upgradeOpts TestOptions) func(t *testing.T) {
return func(t *testing.T) {
// if dapr is installed with httpendpoints.
if opts.ApplyHTTPEndpointChanges {
if installOpts.ApplyHTTPEndpointChanges {
// apply any changes to the httpendpoint.
t.Log("apply httpendpoint changes")
output, err := spawn.Command("kubectl", "apply", "-f", "../testdata/namespace.yaml")
@ -337,12 +380,17 @@ func HTTPEndpointsTestOnInstallUpgrade(opts TestOptions) func(t *testing.T) {
output, err = spawn.Command("kubectl", "apply", "-f", "../testdata/httpendpoint.yaml")
t.Log(output)
require.NoError(t, err, "expected no error on kubectl apply")
require.Equal(t, "httpendpoints.dapr.io/httpendpoint created\nhttpendpoints.dapr.io/httpendpoint created\n", output, "expected output to match")
httpEndpointOutputCheck(t, output)
if installOpts.ApplyHTTPEndpointChanges && upgradeOpts.ApplyHTTPEndpointChanges {
require.Equal(t, "httpendpoint.dapr.io/httpendpoint unchanged\n", output, "expected output to match")
} else {
require.Equal(t, "httpendpoint.dapr.io/httpendpoint created\n", output, "expected output to match")
}
t.Log("check applied httpendpoint exists")
_, err = spawn.Command("kubectl", "get", "httpendpoint")
output, err = spawn.Command("kubectl", "get", "httpendpoint")
require.NoError(t, err, "expected no error on calling to retrieve httpendpoints")
httpEndpointOutputCheck(t, output)
}
}
}
@ -352,6 +400,12 @@ func StatusTestOnInstallUpgrade(details VersionDetails, opts TestOptions) func(t
daprPath := GetDaprPath()
output, err := spawn.Command(daprPath, "status", "-k")
require.NoError(t, err, "status check failed")
version, err := semver.NewVersion(details.RuntimeVersion)
if err != nil {
t.Error("failed to parse runtime version", err)
}
var notFound map[string][]string
if !opts.HAEnabled {
notFound = map[string][]string{
@ -361,6 +415,11 @@ func StatusTestOnInstallUpgrade(details VersionDetails, opts TestOptions) func(t
"dapr-placement-server": {details.RuntimeVersion, "1"},
"dapr-operator": {details.RuntimeVersion, "1"},
}
if version.GreaterThanEqual(VersionWithHAScheduler) {
notFound["dapr-scheduler-server"] = []string{details.RuntimeVersion, "3"}
} else if version.GreaterThanEqual(VersionWithScheduler) {
notFound["dapr-scheduler-server"] = []string{details.RuntimeVersion, "1"}
}
} else {
notFound = map[string][]string{
"dapr-sentry": {details.RuntimeVersion, "3"},
@ -369,6 +428,9 @@ func StatusTestOnInstallUpgrade(details VersionDetails, opts TestOptions) func(t
"dapr-placement-server": {details.RuntimeVersion, "3"},
"dapr-operator": {details.RuntimeVersion, "3"},
}
if version.GreaterThanEqual(VersionWithScheduler) {
notFound["dapr-scheduler-server"] = []string{details.RuntimeVersion, "3"}
}
}
if details.ImageVariant != "" {
@ -376,6 +438,9 @@ func StatusTestOnInstallUpgrade(details VersionDetails, opts TestOptions) func(t
notFound["dapr-sidecar-injector"][0] = notFound["dapr-sidecar-injector"][0] + "-" + details.ImageVariant
notFound["dapr-placement-server"][0] = notFound["dapr-placement-server"][0] + "-" + details.ImageVariant
notFound["dapr-operator"][0] = notFound["dapr-operator"][0] + "-" + details.ImageVariant
if notFound["dapr-scheduler-server"] != nil {
notFound["dapr-scheduler-server"][0] = notFound["dapr-scheduler-server"][0] + "-" + details.ImageVariant
}
}
lines := strings.Split(output, "\n")[1:] // remove header of status.
@ -384,13 +449,13 @@ func StatusTestOnInstallUpgrade(details VersionDetails, opts TestOptions) func(t
cols := strings.Fields(strings.TrimSpace(line))
if len(cols) > 6 { // at least 6 fields are verified from status (Age and created time are not).
if toVerify, ok := notFound[cols[0]]; ok { // get by name.
require.Equal(t, DaprTestNamespace, cols[1], "namespace must match")
require.Equal(t, "True", cols[2], "healthly field must be true")
require.Equal(t, "Running", cols[3], "pods must be Running")
require.Equal(t, toVerify[1], cols[4], "replicas must be equal")
require.Equal(t, DaprTestNamespace, cols[1], "%s namespace must match", cols[0])
require.Equal(t, "True", cols[2], "%s healthy field must be true", cols[0])
require.Equal(t, "Running", cols[3], "%s pods must be Running", cols[0])
require.Equal(t, toVerify[1], cols[4], "%s replicas must be equal", cols[0])
// TODO: Skip the dashboard version check for now until the helm chart is updated.
if cols[0] != "dapr-dashboard" {
require.Equal(t, toVerify[0], cols[5], "versions must match")
require.Equal(t, toVerify[0], cols[5], "%s versions must match", cols[0])
}
delete(notFound, cols[0])
}
@ -544,7 +609,7 @@ func GenerateNewCertAndRenew(details VersionDetails, opts TestOptions) func(t *t
done := make(chan struct{})
podsRunning := make(chan struct{})
go waitAllPodsRunning(t, DaprTestNamespace, opts.HAEnabled, done, podsRunning)
go waitAllPodsRunning(t, DaprTestNamespace, opts.HAEnabled, done, podsRunning, details)
select {
case <-podsRunning:
t.Logf("verified all pods running in namespace %s are running after certficate change", DaprTestNamespace)
@ -575,7 +640,7 @@ func UseProvidedPrivateKeyAndRenewCerts(details VersionDetails, opts TestOptions
done := make(chan struct{})
podsRunning := make(chan struct{})
go waitAllPodsRunning(t, DaprTestNamespace, opts.HAEnabled, done, podsRunning)
go waitAllPodsRunning(t, DaprTestNamespace, opts.HAEnabled, done, podsRunning, details)
select {
case <-podsRunning:
t.Logf("verified all pods running in namespace %s are running after certficate change", DaprTestNamespace)
@ -608,7 +673,7 @@ func UseProvidedNewCertAndRenew(details VersionDetails, opts TestOptions) func(t
done := make(chan struct{})
podsRunning := make(chan struct{})
go waitAllPodsRunning(t, DaprTestNamespace, opts.HAEnabled, done, podsRunning)
go waitAllPodsRunning(t, DaprTestNamespace, opts.HAEnabled, done, podsRunning, details)
select {
case <-podsRunning:
t.Logf("verified all pods running in namespace %s are running after certficate change", DaprTestNamespace)
@ -815,8 +880,11 @@ func uninstallTest(all bool, devEnabled bool) func(t *testing.T) {
require.NoError(t, err, "uninstall failed")
// wait for pods to be deleted completely.
// needed to verify that status checks fail correctly.
podsDeleted := make(chan struct{})
done := make(chan struct{})
defer close(done)
podsDeleted := make(chan struct{})
defer close(podsDeleted)
t.Log("waiting for pods to be deleted completely")
go waitPodDeletion(t, done, podsDeleted)
select {
@ -894,7 +962,7 @@ func componentsTestOnUninstall(opts TestOptions) func(t *testing.T) {
lines := strings.Split(output, "\n")
// An extra empty line is there in output.
require.Equal(t, 3, len(lines), "expected header and warning message of the output to remain")
require.Len(t, lines, 3, "expected header and warning message of the output to remain")
}
}
@ -924,7 +992,7 @@ func httpEndpointsTestOnUninstall(opts TestOptions) func(t *testing.T) {
lines := strings.Split(output, "\n")
// An extra empty line is there in output.
require.Equal(t, 2, len(lines), "expected kubernetes response message to remain")
require.Len(t, lines, 2, "expected kubernetes response message to remain")
}
}
}
@ -947,19 +1015,19 @@ func componentOutputCheck(t *testing.T, opts TestOptions, output string) {
}
if opts.UninstallAll {
assert.Equal(t, 2, len(lines), "expected at 0 components and 2 output lines")
assert.Len(t, lines, 2, "expected at 0 components and 2 output lines")
return
}
lines = strings.Split(output, "\n")[2:] // remove header and warning message.
lines = lines[2:] // remove header and warning message.
if opts.DevEnabled {
// default, test statestore.
// default pubsub.
// 3 components.
assert.Equal(t, 3, len(lines), "expected 3 components")
assert.Len(t, lines, 3, "expected 3 components")
} else {
assert.Equal(t, 2, len(lines), "expected 2 components") // default and test namespace components.
assert.Len(t, lines, 2, "expected 2 components") // default and test namespace components.
// for fresh cluster only one component yaml has been applied.
testNsFields := strings.Fields(lines[0])
@ -1119,6 +1187,8 @@ func waitPodDeletionDev(t *testing.T, done, podsDeleted chan struct{}) {
devRedisReleaseName: "dapr-dev-redis-master-",
devZipkinReleaseName: "dapr-dev-zipkin-",
}
t.Logf("dev pods waiting to be deleted: %d", len(list.Items))
for _, pod := range list.Items {
t.Log(pod.ObjectMeta.Name)
for component, prefix := range prefixes {
@ -1136,7 +1206,7 @@ func waitPodDeletionDev(t *testing.T, done, podsDeleted chan struct{}) {
if len(found) == 2 {
podsDeleted <- struct{}{}
}
time.Sleep(15 * time.Second)
time.Sleep(10 * time.Second)
}
}
@ -1148,23 +1218,32 @@ func waitPodDeletion(t *testing.T, done, podsDeleted chan struct{}) {
default:
break
}
ctx := context.Background()
ctxt, cancel := context.WithTimeout(ctx, 10*time.Second)
defer cancel()
k8sClient, err := getClient()
require.NoError(t, err, "error getting k8s client for pods check")
list, err := k8sClient.CoreV1().Pods(DaprTestNamespace).List(ctxt, v1.ListOptions{
Limit: 100,
})
require.NoError(t, err, "error getting pods list from k8s")
if len(list.Items) == 0 {
podsDeleted <- struct{}{}
} else {
t.Logf("pods waiting to be deleted: %d", len(list.Items))
for _, pod := range list.Items {
t.Log(pod.ObjectMeta.Name)
}
}
time.Sleep(15 * time.Second)
time.Sleep(5 * time.Second)
}
}
func waitAllPodsRunning(t *testing.T, namespace string, haEnabled bool, done, podsRunning chan struct{}) {
func waitAllPodsRunning(t *testing.T, namespace string, haEnabled bool, done, podsRunning chan struct{}, details VersionDetails) {
for {
select {
case <-done: // if timeout was reached.
@ -1174,15 +1253,18 @@ func waitAllPodsRunning(t *testing.T, namespace string, haEnabled bool, done, po
}
ctx := context.Background()
ctxt, cancel := context.WithTimeout(ctx, 10*time.Second)
defer cancel()
k8sClient, err := getClient()
require.NoError(t, err, "error getting k8s client for pods check")
list, err := k8sClient.CoreV1().Pods(namespace).List(ctxt, v1.ListOptions{
Limit: 100,
})
require.NoError(t, err, "error getting pods list from k8s")
t.Logf("waiting for pods to be running, current count: %d", len(list.Items))
countOfReadyPods := 0
for _, item := range list.Items {
t.Log(item.ObjectMeta.Name)
// Check pods running, and containers ready.
if item.Status.Phase == core_v1.PodRunning && len(item.Status.ContainerStatuses) != 0 {
size := len(item.Status.ContainerStatuses)
@ -1196,11 +1278,50 @@ func waitAllPodsRunning(t *testing.T, namespace string, haEnabled bool, done, po
}
}
}
if len(list.Items) == countOfReadyPods && ((haEnabled && countOfReadyPods == numHAPods) || (!haEnabled && countOfReadyPods == numNonHAPods)) {
pods, err := getVersionedNumberOfPods(haEnabled, details)
if err != nil {
t.Error(err)
}
if len(list.Items) == countOfReadyPods && countOfReadyPods == pods {
podsRunning <- struct{}{}
}
time.Sleep(15 * time.Second)
time.Sleep(5 * time.Second)
cancel()
}
}
func getVersionedNumberOfPods(isHAEnabled bool, details VersionDetails) (int, error) {
if isHAEnabled {
if details.UseDaprLatestVersion {
return numHAPodsWithScheduler, nil
}
rv, err := semver.NewVersion(details.RuntimeVersion)
if err != nil {
return 0, err
}
if rv.GreaterThanEqual(VersionWithScheduler) {
return numHAPodsWithScheduler, nil
}
return numHAPodsOld, nil
} else {
if details.UseDaprLatestVersion {
return numNonHAPodsWithHAScheduler, nil
}
rv, err := semver.NewVersion(details.RuntimeVersion)
if err != nil {
return 0, err
}
if rv.GreaterThanEqual(VersionWithHAScheduler) {
return numNonHAPodsWithHAScheduler, nil
}
if rv.GreaterThanEqual(VersionWithScheduler) {
return numNonHAPodsWithScheduler, nil
}
return numNonHAPodsOld, nil
}
}
@ -1210,7 +1331,6 @@ func exportCurrentCertificate(daprPath string) error {
os.RemoveAll("./certs")
}
_, err = spawn.Command(daprPath, "mtls", "export", "-o", "./certs")
if err != nil {
return fmt.Errorf("error in exporting certificate %w", err)
}

View File

@ -0,0 +1,113 @@
/*
Copyright 2024 The Dapr Authors
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package common
import (
"testing"
"github.com/stretchr/testify/assert"
)
func TestGetVersionedNumberOfPods(t *testing.T) {
tests := []struct {
name string
isHAEnabled bool
details VersionDetails
expectedNumber int
expectedError bool
}{
{
name: "HA enabled with latest version",
isHAEnabled: true,
details: VersionDetails{UseDaprLatestVersion: true},
expectedNumber: numHAPodsWithScheduler,
expectedError: false,
},
{
name: "HA enabled with old version",
isHAEnabled: true,
details: VersionDetails{UseDaprLatestVersion: false, RuntimeVersion: "1.13.0"},
expectedNumber: numHAPodsOld,
expectedError: false,
},
{
name: "HA disabled with latest version",
isHAEnabled: false,
details: VersionDetails{UseDaprLatestVersion: true},
expectedNumber: numNonHAPodsWithScheduler,
expectedError: false,
},
{
name: "HA disabled with old version",
isHAEnabled: false,
details: VersionDetails{UseDaprLatestVersion: false, RuntimeVersion: "1.13.0"},
expectedNumber: numNonHAPodsOld,
expectedError: false,
},
{
name: "HA enabled with new version",
isHAEnabled: true,
details: VersionDetails{UseDaprLatestVersion: false, RuntimeVersion: "1.14.4"},
expectedNumber: numHAPodsWithScheduler,
expectedError: false,
},
{
name: "HA disabled with new version",
isHAEnabled: false,
details: VersionDetails{UseDaprLatestVersion: false, RuntimeVersion: "1.14.4"},
expectedNumber: numNonHAPodsWithScheduler,
expectedError: false,
},
{
name: "HA enabled with invalid version",
isHAEnabled: true,
details: VersionDetails{UseDaprLatestVersion: false, RuntimeVersion: "invalid version"},
expectedNumber: 0,
expectedError: true,
},
{
name: "HA disabled with invalid version",
isHAEnabled: false,
details: VersionDetails{UseDaprLatestVersion: false, RuntimeVersion: "invalid version"},
expectedNumber: 0,
expectedError: true,
},
{
name: "HA enabled with new RC version",
isHAEnabled: true,
details: VersionDetails{UseDaprLatestVersion: false, RuntimeVersion: "1.15.0-rc.1"},
expectedNumber: numHAPodsWithScheduler,
expectedError: false,
},
{
name: "HA disabled with new RC version",
isHAEnabled: false,
details: VersionDetails{UseDaprLatestVersion: false, RuntimeVersion: "1.15.0-rc.1"},
expectedNumber: numNonHAPodsWithScheduler,
expectedError: false,
},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
number, err := getVersionedNumberOfPods(tc.isHAEnabled, tc.details)
if tc.expectedError {
assert.Error(t, err)
} else {
assert.NoError(t, err)
assert.Equal(t, tc.expectedNumber, number)
}
})
}
}

View File

@ -17,12 +17,17 @@ limitations under the License.
package kubernetes_test
import (
"fmt"
"testing"
"github.com/dapr/cli/tests/e2e/common"
)
func TestKubernetesNonHAModeMTLSDisabled(t *testing.T) {
if common.ShouldSkipTest(common.DaprModeNonHA) {
t.Skip(fmt.Sprintf("Skipping %s mode test", common.DaprModeNonHA))
}
// ensure clean env for test
ensureCleanEnv(t, false)
@ -54,6 +59,10 @@ func TestKubernetesNonHAModeMTLSDisabled(t *testing.T) {
}
func TestKubernetesHAModeMTLSDisabled(t *testing.T) {
if common.ShouldSkipTest(common.DaprModeHA) {
t.Skip(fmt.Sprintf("Skipping %s mode test", common.DaprModeHA))
}
// ensure clean env for test
ensureCleanEnv(t, false)
@ -85,6 +94,10 @@ func TestKubernetesHAModeMTLSDisabled(t *testing.T) {
}
func TestKubernetesDev(t *testing.T) {
if common.ShouldSkipTest(common.DaprModeNonHA) {
t.Skip(fmt.Sprintf("Skipping %s mode test", common.DaprModeNonHA))
}
// ensure clean env for test
ensureCleanEnv(t, false)
@ -119,6 +132,10 @@ func TestKubernetesDev(t *testing.T) {
}
func TestKubernetesNonHAModeMTLSEnabled(t *testing.T) {
if common.ShouldSkipTest(common.DaprModeNonHA) {
t.Skip(fmt.Sprintf("Skipping %s mode test", common.DaprModeNonHA))
}
// ensure clean env for test
ensureCleanEnv(t, false)
@ -150,6 +167,10 @@ func TestKubernetesNonHAModeMTLSEnabled(t *testing.T) {
}
func TestKubernetesHAModeMTLSEnabled(t *testing.T) {
if common.ShouldSkipTest(common.DaprModeHA) {
t.Skip(fmt.Sprintf("Skipping %s mode test", common.DaprModeHA))
}
// ensure clean env for test
ensureCleanEnv(t, false)
@ -182,6 +203,10 @@ func TestKubernetesHAModeMTLSEnabled(t *testing.T) {
}
func TestKubernetesInitWithCustomCert(t *testing.T) {
if common.ShouldSkipTest(common.DaprModeNonHA) {
t.Skip(fmt.Sprintf("Skipping %s mode test", common.DaprModeNonHA))
}
// ensure clean env for test
ensureCleanEnv(t, false)
@ -216,6 +241,10 @@ func TestKubernetesInitWithCustomCert(t *testing.T) {
// Test for certificate renewal
func TestRenewCertificateMTLSEnabled(t *testing.T) {
if common.ShouldSkipTest(common.DaprModeNonHA) {
t.Skip(fmt.Sprintf("Skipping %s mode test", common.DaprModeNonHA))
}
// ensure clean env for test
ensureCleanEnv(t, false)
@ -251,6 +280,10 @@ func TestRenewCertificateMTLSEnabled(t *testing.T) {
}
func TestRenewCertificateMTLSDisabled(t *testing.T) {
if common.ShouldSkipTest(common.DaprModeNonHA) {
t.Skip(fmt.Sprintf("Skipping %s mode test", common.DaprModeNonHA))
}
// ensure clean env for test
ensureCleanEnv(t, false)
@ -286,6 +319,10 @@ func TestRenewCertificateMTLSDisabled(t *testing.T) {
}
func TestRenewCertWithPrivateKey(t *testing.T) {
if common.ShouldSkipTest(common.DaprModeNonHA) {
t.Skip(fmt.Sprintf("Skipping %s mode test", common.DaprModeNonHA))
}
// ensure clean env for test
ensureCleanEnv(t, false)
@ -328,6 +365,10 @@ func TestRenewCertWithPrivateKey(t *testing.T) {
}
func TestKubernetesUninstall(t *testing.T) {
if common.ShouldSkipTest(common.DaprModeNonHA) {
t.Skip(fmt.Sprintf("Skipping %s mode test", common.DaprModeNonHA))
}
// ensure clean env for test
ensureCleanEnv(t, false)
@ -359,6 +400,10 @@ func TestKubernetesUninstall(t *testing.T) {
}
func TestRenewCertWithIncorrectFlags(t *testing.T) {
if common.ShouldSkipTest(common.DaprModeNonHA) {
t.Skip(fmt.Sprintf("Skipping %s mode test", common.DaprModeNonHA))
}
common.EnsureUninstall(true, true)
tests := []common.TestCase{}
@ -397,6 +442,10 @@ func TestRenewCertWithIncorrectFlags(t *testing.T) {
// install dapr control plane with mariner docker images.
// Renew the certificate of this control plane.
func TestK8sInstallwithMarinerImagesAndRenewCertificate(t *testing.T) {
if common.ShouldSkipTest(common.DaprModeNonHA) {
t.Skip(fmt.Sprintf("Skipping %s mode test", common.DaprModeNonHA))
}
// ensure clean env for test
ensureCleanEnv(t, false)
@ -435,6 +484,10 @@ func TestK8sInstallwithMarinerImagesAndRenewCertificate(t *testing.T) {
}
func TestKubernetesInstallwithoutRuntimeVersionFlag(t *testing.T) {
if common.ShouldSkipTest(common.DaprModeNonHA) {
t.Skip(fmt.Sprintf("Skipping %s mode test", common.DaprModeNonHA))
}
// ensure clean env for test
ensureCleanEnv(t, true)
@ -467,6 +520,10 @@ func TestKubernetesInstallwithoutRuntimeVersionFlag(t *testing.T) {
}
func TestK8sInstallwithoutRuntimeVersionwithMarinerImagesFlag(t *testing.T) {
if common.ShouldSkipTest(common.DaprModeNonHA) {
t.Skip(fmt.Sprintf("Skipping %s mode test", common.DaprModeNonHA))
}
// ensure clean env for test
ensureCleanEnv(t, true)

View File

@ -47,6 +47,10 @@ var (
)
func TestKubernetesRunFile(t *testing.T) {
if common.ShouldSkipTest(common.DaprModeNonHA) {
t.Skip(fmt.Sprintf("Skipping %s mode test", common.DaprModeNonHA))
}
ensureCleanEnv(t, false)
// setup tests

View File

@ -1,5 +1,4 @@
//go:build e2e || template
// +build e2e template
/*
Copyright 2022 The Dapr Authors
@ -19,8 +18,10 @@ package standalone_test
import (
"context"
"fmt"
"path/filepath"
"strings"
"github.com/dapr/cli/pkg/standalone"
"github.com/dapr/cli/tests/e2e/common"
"github.com/dapr/cli/tests/e2e/spawn"
"github.com/dapr/cli/utils"
@ -47,6 +48,47 @@ func cmdDashboard(ctx context.Context, port string) error {
return fmt.Errorf("Dashboard could not be started: %s", errOutput)
}
func cmdProcess(ctx context.Context, executable string, log func(args ...any), args ...string) (string, error) {
path, err := GetExecutablePath(executable)
if err != nil {
return "", err
}
stdOutChan, stdErrChan, err := spawn.CommandWithContext(ctx, path, args...)
if err != nil {
return "", err
}
if log != nil {
go func() {
for {
select {
case output := <-stdOutChan:
if output != "" {
log(executable + ": " + output)
}
case output := <-stdErrChan:
if output != "" {
log(executable + ": " + output)
}
case <-ctx.Done():
return
}
}
}()
}
return "", nil
}
func GetExecutablePath(executable string) (string, error) {
path, err := standalone.GetDaprRuntimePath("")
if err != nil {
return "", err
}
return filepath.Join(path, "bin", executable), nil
}
// cmdInit installs Dapr with the init command and returns the command output and error.
//
// When DAPR_E2E_INIT_SLIM is true, it will install Dapr without Docker containers.
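//
// Illustrative usage, mirroring the init tests later in this change:
//
//	output, err := cmdInit("--runtime-version", latestDaprRuntimeVersion, "--dev")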

View File

@ -1,5 +1,4 @@
//go:build e2e && !template
// +build e2e,!template
/*
Copyright 2022 The Dapr Authors
@ -33,7 +32,6 @@ func TestStandaloneInitNegatives(t *testing.T) {
require.NoError(t, err, "expected no error on querying for os home dir")
t.Run("run without install", func(t *testing.T) {
t.Parallel()
output, err := cmdRun("")
require.Error(t, err, "expected error status on run without install")
path := filepath.Join(homeDir, ".dapr", "components")
@ -45,21 +43,18 @@ func TestStandaloneInitNegatives(t *testing.T) {
})
t.Run("list without install", func(t *testing.T) {
t.Parallel()
output, err := cmdList("")
require.NoError(t, err, "expected no error status on list without install")
require.Equal(t, "No Dapr instances found.\n", output)
})
t.Run("stop without install", func(t *testing.T) {
t.Parallel()
output, err := cmdStopWithAppID("test")
require.NoError(t, err, "expected no error on stop without install")
require.Contains(t, output, "failed to stop app id test: couldn't find app id test", "expected output to match")
})
t.Run("uninstall without install", func(t *testing.T) {
t.Parallel()
output, err := cmdUninstall()
require.NoError(t, err, "expected no error on uninstall without install")
require.Contains(t, output, "Removing Dapr from your machine...", "expected output to contain message")

View File

@ -1,5 +1,4 @@
//go:build e2e && !template
// +build e2e,!template
/*
Copyright 2022 The Dapr Authors

View File

@ -1,5 +1,4 @@
//go:build e2e && !template
// +build e2e,!template
/*
Copyright 2022 The Dapr Authors
@ -18,12 +17,17 @@ package standalone_test
import (
"context"
"net"
"os"
"path/filepath"
"runtime"
"strconv"
"strings"
"testing"
"time"
"github.com/Masterminds/semver"
"github.com/dapr/cli/pkg/version"
"github.com/dapr/cli/tests/e2e/common"
"github.com/dapr/cli/tests/e2e/spawn"
"github.com/docker/docker/api/types"
@ -153,10 +157,57 @@ func TestStandaloneInit(t *testing.T) {
daprPath := filepath.Join(homeDir, ".dapr")
require.DirExists(t, daprPath, "Directory %s does not exist", daprPath)
latestDaprRuntimeVersion, latestDaprDashboardVersion := common.GetVersionsFromEnv(t, true)
latestDaprRuntimeVersion, err := version.GetDaprVersion()
require.NoError(t, err)
latestDaprDashboardVersion, err := version.GetDashboardVersion()
require.NoError(t, err)
verifyContainers(t, latestDaprRuntimeVersion)
verifyBinaries(t, daprPath, latestDaprRuntimeVersion, latestDaprDashboardVersion)
verifyConfigs(t, daprPath)
placementPort := 50005
if runtime.GOOS == "windows" {
placementPort = 6050
}
verifyTCPLocalhost(t, placementPort)
})
t.Run("init version with scheduler", func(t *testing.T) {
// Ensure a clean environment
must(t, cmdUninstall, "failed to uninstall Dapr")
latestDaprRuntimeVersion, latestDaprDashboardVersion := common.GetVersionsFromEnv(t, true)
args := []string{
"--runtime-version", latestDaprRuntimeVersion,
"--dev",
}
output, err := cmdInit(args...)
t.Log(output)
require.NoError(t, err, "init failed")
assert.Contains(t, output, "Success! Dapr is up and running.")
homeDir, err := os.UserHomeDir()
require.NoError(t, err, "failed to get user home directory")
daprPath := filepath.Join(homeDir, ".dapr")
require.DirExists(t, daprPath, "Directory %s does not exist", daprPath)
verifyContainers(t, latestDaprRuntimeVersion)
verifyBinaries(t, daprPath, latestDaprRuntimeVersion, latestDaprDashboardVersion)
verifyConfigs(t, daprPath)
placementPort := 50005
schedulerPort := 50006
if runtime.GOOS == "windows" {
placementPort = 6050
schedulerPort = 6060
}
verifyTCPLocalhost(t, placementPort)
verifyTCPLocalhost(t, schedulerPort)
})
t.Run("init without runtime-version flag with mariner images", func(t *testing.T) {
@ -176,7 +227,11 @@ func TestStandaloneInit(t *testing.T) {
daprPath := filepath.Join(homeDir, ".dapr")
require.DirExists(t, daprPath, "Directory %s does not exist", daprPath)
latestDaprRuntimeVersion, latestDaprDashboardVersion := common.GetVersionsFromEnv(t, true)
latestDaprRuntimeVersion, err := version.GetDaprVersion()
require.NoError(t, err)
latestDaprDashboardVersion, err := version.GetDashboardVersion()
require.NoError(t, err)
verifyContainers(t, latestDaprRuntimeVersion+"-mariner")
verifyBinaries(t, daprPath, latestDaprRuntimeVersion, latestDaprDashboardVersion)
verifyConfigs(t, daprPath)
@ -187,10 +242,11 @@ func TestStandaloneInit(t *testing.T) {
// Note: in the case of a slim installation, the containers are not installed and
// this test is automatically skipped.
func verifyContainers(t *testing.T, daprRuntimeVersion string) {
t.Helper()
t.Run("verifyContainers", func(t *testing.T) {
if isSlimMode() {
t.Log("Skipping container verification because of slim installation")
return
t.Skip("Skipping container verification because of slim installation")
}
cli, err := dockerClient.NewClientWithOpts(dockerClient.FromEnv)
@ -205,6 +261,12 @@ func verifyContainers(t *testing.T, daprRuntimeVersion string) {
"dapr_redis": "",
}
v, err := semver.NewVersion(daprRuntimeVersion)
require.NoError(t, err)
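// The dapr_scheduler container only ships with runtime 1.14 and later, so only expect it for those versions.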
if v.Major() >= 1 && v.Minor() >= 14 {
daprContainers["dapr_scheduler"] = daprRuntimeVersion
}
for _, container := range containers {
t.Logf("Found container: %v %s %s\n", container.Names, container.Image, container.State)
if container.State != "running" {
@ -233,6 +295,8 @@ func verifyContainers(t *testing.T, daprRuntimeVersion string) {
// verifyBinaries ensures that the correct binaries are present in the correct path.
func verifyBinaries(t *testing.T, daprPath, runtimeVersion, dashboardVersion string) {
t.Helper()
binPath := filepath.Join(daprPath, "bin")
require.DirExists(t, binPath, "Directory %s does not exist", binPath)
@ -247,6 +311,8 @@ func verifyBinaries(t *testing.T, daprPath, runtimeVersion, dashboardVersion str
for bin, version := range binaries {
t.Run("verifyBinaries/"+bin, func(t *testing.T) {
t.Helper()
file := filepath.Join(binPath, bin)
if runtime.GOOS == "windows" {
file += ".exe"
@ -265,6 +331,8 @@ func verifyBinaries(t *testing.T, daprPath, runtimeVersion, dashboardVersion str
// verifyConfigs ensures that the Dapr configuration and component YAMLs
// are present in the correct path and have the correct values.
func verifyConfigs(t *testing.T, daprPath string) {
t.Helper()
configSpec := map[interface{}]interface{}{}
// tracing is not enabled in slim mode by default.
if !isSlimMode() {
@ -353,3 +421,22 @@ func verifyConfigs(t *testing.T, daprPath string) {
})
}
}
// verifyTCPLocalhost verifies a given localhost TCP port is being listened to.
func verifyTCPLocalhost(t *testing.T, port int) {
t.Helper()
if isSlimMode() {
t.Skip("Skipping container verification because of slim installation")
}
// Check that the server is up and can accept connections.
endpoint := "127.0.0.1:" + strconv.Itoa(port)
assert.EventuallyWithT(t, func(c *assert.CollectT) {
conn, err := net.Dial("tcp", endpoint)
//nolint:testifylint
if assert.NoError(c, err) {
conn.Close()
}
}, time.Second*10, time.Millisecond*10)
}

View File

@ -1,5 +1,4 @@
//go:build e2e && !template
// +build e2e,!template
/*
Copyright 2022 The Dapr Authors

View File

@ -1,5 +1,4 @@
//go:build e2e && !template
// +build e2e,!template
/*
Copyright 2022 The Dapr Authors
@ -137,6 +136,13 @@ func TestStandaloneList(t *testing.T) {
t.Log(output)
require.NoError(t, err, "expected no error status on list")
require.Equal(t, "No Dapr instances found.\n", output)
// This test is skipped on Windows, but as a safeguard only try to terminate the dashboard process on non-Windows OSs.
// stopProcess uses the kill command, which won't work on Windows.
if runtime.GOOS != "windows" {
err = stopProcess("dashboard", "--port", "5555")
require.NoError(t, err, "failed to stop dashboard process")
}
})
}

View File

@ -1,5 +1,4 @@
//go:build e2e && !template
// +build e2e,!template
/*
Copyright 2022 The Dapr Authors

View File

@ -1,6 +1,4 @@
//go:build !windows && (e2e || template)
// +build !windows
// +build e2e template
/*
Copyright 2023 The Dapr Authors
@ -23,7 +21,6 @@ import (
"context"
"fmt"
"io/ioutil"
"os"
"path/filepath"
"strings"
"testing"
@ -43,6 +40,7 @@ type AppTestOutput struct {
}
func TestRunWithTemplateFile(t *testing.T) {
cleanUpLogs()
ensureDaprInstallation(t)
t.Cleanup(func() {
// remove dapr installation after all tests in this function.
@ -54,8 +52,7 @@ func TestRunWithTemplateFile(t *testing.T) {
runFilePath := "../testdata/run-template-files/wrong_emit_metrics_app_dapr.yaml"
t.Cleanup(func() {
// assumption in the test is that there is only one set of app and daprd logs in the logs directory.
os.RemoveAll("../../apps/emit-metrics/.dapr/logs")
os.RemoveAll("../../apps/processor/.dapr/logs")
cleanUpLogs()
stopAllApps(t, runFilePath)
})
args := []string{
@ -104,8 +101,7 @@ func TestRunWithTemplateFile(t *testing.T) {
runFilePath := "../testdata/run-template-files/dapr.yaml"
t.Cleanup(func() {
// assumption in the test is that there is only one set of app and daprd logs in the logs directory.
os.RemoveAll("../../apps/emit-metrics/.dapr/logs")
os.RemoveAll("../../apps/processor/.dapr/logs")
cleanUpLogs()
stopAllApps(t, runFilePath)
})
args := []string{
@ -161,8 +157,7 @@ func TestRunWithTemplateFile(t *testing.T) {
runFilePath := "../testdata/run-template-files/env_var_not_set_dapr.yaml"
t.Cleanup(func() {
// assumption in the test is that there is only one set of app and daprd logs in the logs directory.
os.RemoveAll("../../apps/emit-metrics/.dapr/logs")
os.RemoveAll("../../apps/processor/.dapr/logs")
cleanUpLogs()
stopAllApps(t, runFilePath)
})
args := []string{
@ -212,8 +207,7 @@ func TestRunWithTemplateFile(t *testing.T) {
runFilePath := "../testdata/run-template-files/no_app_command.yaml"
t.Cleanup(func() {
// assumption in the test is that there is only one set of app and daprd logs in the logs directory.
os.RemoveAll("../../apps/emit-metrics/.dapr/logs")
os.RemoveAll("../../apps/processor/.dapr/logs")
cleanUpLogs()
stopAllApps(t, runFilePath)
})
args := []string{
@ -264,8 +258,7 @@ func TestRunWithTemplateFile(t *testing.T) {
runFilePath := "../testdata/run-template-files/empty_app_command.yaml"
t.Cleanup(func() {
// assumption in the test is that there is only one set of app and daprd logs in the logs directory.
os.RemoveAll("../../apps/emit-metrics/.dapr/logs")
os.RemoveAll("../../apps/processor/.dapr/logs")
cleanUpLogs()
stopAllApps(t, runFilePath)
})
args := []string{
@ -313,8 +306,7 @@ func TestRunWithTemplateFile(t *testing.T) {
runFilePath := "../testdata/run-template-files/app_output_to_file_and_console.yaml"
t.Cleanup(func() {
// assumption in the test is that there is only one set of app and daprd logs in the logs directory.
os.RemoveAll("../../apps/emit-metrics/.dapr/logs")
os.RemoveAll("../../apps/processor/.dapr/logs")
cleanUpLogs()
stopAllApps(t, runFilePath)
})
args := []string{
@ -372,6 +364,10 @@ func TestRunTemplateFileWithoutDaprInit(t *testing.T) {
// remove any dapr installation before this test.
must(t, cmdUninstall, "failed to uninstall Dapr")
t.Run("valid template file without dapr init", func(t *testing.T) {
t.Cleanup(func() {
// assumption in the test is that there is only one set of app and daprd logs in the logs directory.
cleanUpLogs()
})
args := []string{
"-f", "../testdata/run-template-files/no_app_command.yaml",
}

View File

@ -1,5 +1,4 @@
//go:build e2e && !template
// +build e2e,!template
/*
Copyright 2022 The Dapr Authors
@ -17,19 +16,39 @@ limitations under the License.
package standalone_test
import (
"context"
"fmt"
"runtime"
"testing"
"github.com/dapr/cli/tests/e2e/common"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestStandaloneRun(t *testing.T) {
ensureDaprInstallation(t)
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
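// In slim mode the placement (and, on newer runtimes, scheduler) services are not installed as
// containers, so start them here as local processes for the duration of the test.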
if isSlimMode() {
output, err := cmdProcess(ctx, "placement", t.Log, "--metrics-port", "9091", "--healthz-port", "8081")
require.NoError(t, err)
t.Log(output)
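// Only start the scheduler service for runtime versions that include it.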
if common.GetRuntimeVersion(t, false).GreaterThan(common.VersionWithScheduler) {
output, err = cmdProcess(ctx, "scheduler", t.Log, "--metrics-port", "9092", "--healthz-port", "8082")
require.NoError(t, err)
t.Log(output)
}
}
t.Cleanup(func() {
// remove dapr installation after all tests in this function.
must(t, cmdUninstall, "failed to uninstall Dapr")
// Call cancelFunc to stop the processes
cancelFunc()
})
for _, path := range getSocketCases() {
t.Run(fmt.Sprintf("normal exit, socket: %s", path), func(t *testing.T) {
@ -54,7 +73,11 @@ func TestStandaloneRun(t *testing.T) {
output, err := cmdRun(path, "--dapr-internal-grpc-port", "9999", "--", "bash", "-c", "echo test")
t.Log(output)
require.NoError(t, err, "run failed")
assert.Contains(t, output, "Internal gRPC server is running on port 9999")
if common.GetRuntimeVersion(t, false).GreaterThan(common.VersionWithScheduler) {
assert.Contains(t, output, "Internal gRPC server is running on :9999")
} else {
assert.Contains(t, output, "Internal gRPC server is running on port 9999")
}
assert.Contains(t, output, "Exited App successfully")
assert.Contains(t, output, "Exited Dapr successfully")
assert.NotContains(t, output, "Could not update sidecar metadata for cliPID")

View File

@ -1,5 +1,4 @@
//go:build e2e && !template
// +build e2e,!template
/*
Copyright 2022 The Dapr Authors
@ -18,6 +17,7 @@ package standalone_test
import (
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
@ -25,6 +25,14 @@ import (
func TestStandaloneStop(t *testing.T) {
ensureDaprInstallation(t)
time.Sleep(5 * time.Second)
t.Cleanup(func() {
// remove dapr installation after all tests in this function.
must(t, cmdUninstall, "failed to uninstall Dapr")
})
executeAgainstRunningDapr(t, func() {
t.Run("stop", func(t *testing.T) {
output, err := cmdStopWithAppID("dapr_e2e_stop")

View File

@ -1,6 +1,4 @@
//go:build !windows && (e2e || template)
// +build !windows
// +build e2e template
/*
Copyright 2023 The Dapr Authors
@ -22,7 +20,6 @@ package standalone_test
import (
"encoding/json"
"fmt"
"os"
"testing"
"time"
@ -31,6 +28,9 @@ import (
)
func TestStopAppsStartedWithRunTemplate(t *testing.T) {
// clean up logs before starting the tests
cleanUpLogs()
ensureDaprInstallation(t)
t.Cleanup(func() {
// remove dapr installation after all tests in this function.
@ -38,6 +38,9 @@ func TestStopAppsStartedWithRunTemplate(t *testing.T) {
})
t.Run("stop apps by passing run template file", func(t *testing.T) {
t.Cleanup(func() {
cleanUpLogs()
})
go ensureAllAppsStartedWithRunTemplate(t)
time.Sleep(10 * time.Second)
cliPID := getCLIPID(t)
@ -50,6 +53,9 @@ func TestStopAppsStartedWithRunTemplate(t *testing.T) {
})
t.Run("stop apps by passing a directory containing dapr.yaml", func(t *testing.T) {
t.Cleanup(func() {
cleanUpLogs()
})
go ensureAllAppsStartedWithRunTemplate(t)
time.Sleep(10 * time.Second)
cliPID := getCLIPID(t)
@ -60,6 +66,9 @@ func TestStopAppsStartedWithRunTemplate(t *testing.T) {
})
t.Run("stop apps by passing an invalid directory", func(t *testing.T) {
t.Cleanup(func() {
cleanUpLogs()
})
go ensureAllAppsStartedWithRunTemplate(t)
time.Sleep(10 * time.Second)
output, err := cmdStopWithRunTemplate("../testdata/invalid-dir")
@ -72,6 +81,9 @@ func TestStopAppsStartedWithRunTemplate(t *testing.T) {
})
t.Run("stop apps started with run template", func(t *testing.T) {
t.Cleanup(func() {
cleanUpLogs()
})
go ensureAllAppsStartedWithRunTemplate(t)
time.Sleep(10 * time.Second)
cliPID := getCLIPID(t)
@ -95,8 +107,7 @@ func ensureAllAppsStartedWithRunTemplate(t *testing.T) {
func tearDownTestSetup(t *testing.T) {
// remove dapr installation after all tests in this function.
must(t, cmdUninstall, "failed to uninstall Dapr")
os.RemoveAll("../../apps/emit-metrics/.dapr/logs")
os.RemoveAll("../../apps/processor/.dapr/logs")
cleanUpLogs()
}
func getCLIPID(t *testing.T) string {

View File

@ -1,5 +1,4 @@
//go:build e2e && !template
// +build e2e,!template
/*
Copyright 2022 The Dapr Authors

View File

@ -1,5 +1,4 @@
//go:build e2e || template
// +build e2e template
/*
Copyright 2022 The Dapr Authors
@ -157,3 +156,40 @@ func containerRuntime() string {
}
return ""
}
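// getRunningProcesses returns the output of `ps -o pid,command` as one trimmed line per running
// process, or nil if the command fails.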
func getRunningProcesses() []string {
cmd := exec.Command("ps", "-o", "pid,command")
output, err := cmd.Output()
if err != nil {
return nil
}
processes := strings.Split(string(output), "\n")
// clean the process output whitespace
for i, process := range processes {
processes[i] = strings.TrimSpace(process)
}
return processes
}
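// stopProcess force-kills (kill -9) every running process whose command line contains the given
// arguments joined with spaces.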
func stopProcess(args ...string) error {
processCommand := strings.Join(args, " ")
processes := getRunningProcesses()
for _, process := range processes {
if strings.Contains(process, processCommand) {
processSplit := strings.SplitN(process, " ", 2)
cmd := exec.Command("kill", "-9", processSplit[0])
err := cmd.Run()
if err != nil {
return err
}
}
}
return nil
}
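// cleanUpLogs removes the .dapr/logs directories generated by the emit-metrics and processor test apps.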
func cleanUpLogs() {
os.RemoveAll("../../apps/emit-metrics/.dapr/logs")
os.RemoveAll("../../apps/processor/.dapr/logs")
}

View File

@ -1,5 +1,4 @@
//go:build e2e && !template
// +build e2e,!template
/*
Copyright 2022 The Dapr Authors

View File

@ -1,6 +1,4 @@
//go:build windows && (e2e || template)
// +build windows
// +build e2e template
/*
Copyright 2023 The Dapr Authors
@ -115,7 +113,6 @@ func startAppsWithAppLogDestFile(t *testing.T, file string) {
assert.NotContains(t, output, "msg=\"All outstanding components processed\" app_id=emit-metrics")
assert.Contains(t, output, "Received signal to stop Dapr and app processes. Shutting down Dapr and app processes.")
}
func startAppsWithAppLogDestConsole(t *testing.T, file string) {
@ -139,5 +136,4 @@ func startAppsWithAppLogDestConsole(t *testing.T, file string) {
assert.NotContains(t, output, "msg=\"All outstanding components processed\" app_id=emit-metrics")
assert.Contains(t, output, "Received signal to stop Dapr and app processes. Shutting down Dapr and app processes.")
}

View File

@ -33,48 +33,30 @@ var supportedUpgradePaths = []upgradePath{
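// Each path installs the previous version first and then upgrades (or, for the downgrade cases,
// rolls back) to the next version, running the checks from getTestsOnUpgrade against both.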
{
// test upgrade on mariner images.
previous: common.VersionDetails{
RuntimeVersion: "1.8.0",
DashboardVersion: "0.10.0",
ImageVariant: "mariner",
ClusterRoles: []string{"dapr-operator-admin", "dashboard-reader"},
ClusterRoleBindings: []string{"dapr-operator", "dapr-role-tokenreview-binding", "dashboard-reader-global"},
CustomResourceDefs: []string{"components.dapr.io", "configurations.dapr.io", "subscriptions.dapr.io", "resiliencies.dapr.io"},
},
next: common.VersionDetails{
RuntimeVersion: "1.8.7",
DashboardVersion: "0.10.0",
ImageVariant: "mariner",
ClusterRoles: []string{"dapr-operator-admin", "dashboard-reader"},
ClusterRoleBindings: []string{"dapr-operator", "dapr-role-tokenreview-binding", "dashboard-reader-global"},
CustomResourceDefs: []string{"components.dapr.io", "configurations.dapr.io", "subscriptions.dapr.io", "resiliencies.dapr.io"},
},
},
{
previous: common.VersionDetails{
RuntimeVersion: "1.9.5",
DashboardVersion: "0.11.0",
ClusterRoles: []string{"dapr-operator-admin", "dashboard-reader"},
ClusterRoleBindings: []string{"dapr-operator", "dapr-role-tokenreview-binding", "dashboard-reader-global"},
CustomResourceDefs: []string{"components.dapr.io", "configurations.dapr.io", "subscriptions.dapr.io", "resiliencies.dapr.io"},
},
next: common.VersionDetails{
RuntimeVersion: "1.10.7",
DashboardVersion: "0.12.0",
RuntimeVersion: "1.14.4",
DashboardVersion: "0.15.0",
ClusterRoles: []string{"dapr-dashboard", "dapr-injector", "dapr-operator-admin", "dapr-placement", "dapr-sentry"},
ClusterRoleBindings: []string{"dapr-operator-admin", "dapr-dashboard", "dapr-injector", "dapr-placement", "dapr-sentry"},
CustomResourceDefs: []string{"components.dapr.io", "configurations.dapr.io", "subscriptions.dapr.io", "resiliencies.dapr.io"},
CustomResourceDefs: []string{"components.dapr.io", "configurations.dapr.io", "subscriptions.dapr.io", "resiliencies.dapr.io", "httpendpoints.dapr.io"},
},
next: common.VersionDetails{
RuntimeVersion: "1.15.2",
DashboardVersion: "0.15.0",
ClusterRoles: []string{"dapr-dashboard", "dapr-injector", "dapr-operator-admin", "dapr-placement", "dapr-sentry"},
ClusterRoleBindings: []string{"dapr-operator-admin", "dapr-dashboard", "dapr-injector", "dapr-placement", "dapr-sentry"},
CustomResourceDefs: []string{"components.dapr.io", "configurations.dapr.io", "subscriptions.dapr.io", "resiliencies.dapr.io", "httpendpoints.dapr.io"},
},
},
{
previous: common.VersionDetails{
RuntimeVersion: "1.11.0",
RuntimeVersion: "1.13.6",
DashboardVersion: "0.14.0",
ClusterRoles: []string{"dapr-dashboard", "dapr-injector", "dapr-operator-admin", "dapr-placement", "dapr-sentry"},
ClusterRoleBindings: []string{"dapr-operator-admin", "dapr-dashboard", "dapr-injector", "dapr-placement", "dapr-sentry"},
CustomResourceDefs: []string{"components.dapr.io", "configurations.dapr.io", "subscriptions.dapr.io", "resiliencies.dapr.io", "httpendpoints.dapr.io"},
},
next: common.VersionDetails{
RuntimeVersion: "1.12.0",
RuntimeVersion: "1.14.4",
DashboardVersion: "0.14.0",
ClusterRoles: []string{"dapr-dashboard", "dapr-injector", "dapr-operator-admin", "dapr-placement", "dapr-sentry"},
ClusterRoleBindings: []string{"dapr-operator-admin", "dapr-dashboard", "dapr-injector", "dapr-placement", "dapr-sentry"},
@ -83,14 +65,30 @@ var supportedUpgradePaths = []upgradePath{
},
{
previous: common.VersionDetails{
RuntimeVersion: "1.12.0",
RuntimeVersion: "1.13.6",
DashboardVersion: "0.14.0",
ClusterRoles: []string{"dapr-dashboard", "dapr-injector", "dapr-operator-admin", "dapr-placement", "dapr-sentry"},
ClusterRoleBindings: []string{"dapr-operator-admin", "dapr-dashboard", "dapr-injector", "dapr-placement", "dapr-sentry"},
CustomResourceDefs: []string{"components.dapr.io", "configurations.dapr.io", "subscriptions.dapr.io", "resiliencies.dapr.io", "httpendpoints.dapr.io"},
},
next: common.VersionDetails{
RuntimeVersion: "1.13.0-rc.2",
RuntimeVersion: "1.15.2",
DashboardVersion: "0.14.0",
ClusterRoles: []string{"dapr-dashboard", "dapr-injector", "dapr-operator-admin", "dapr-placement", "dapr-sentry"},
ClusterRoleBindings: []string{"dapr-operator-admin", "dapr-dashboard", "dapr-injector", "dapr-placement", "dapr-sentry"},
CustomResourceDefs: []string{"components.dapr.io", "configurations.dapr.io", "subscriptions.dapr.io", "resiliencies.dapr.io", "httpendpoints.dapr.io"},
},
},
{
previous: common.VersionDetails{
RuntimeVersion: "1.14.4",
DashboardVersion: "0.14.0",
ClusterRoles: []string{"dapr-dashboard", "dapr-injector", "dapr-operator-admin", "dapr-placement", "dapr-sentry"},
ClusterRoleBindings: []string{"dapr-operator-admin", "dapr-dashboard", "dapr-injector", "dapr-placement", "dapr-sentry"},
CustomResourceDefs: []string{"components.dapr.io", "configurations.dapr.io", "subscriptions.dapr.io", "resiliencies.dapr.io", "httpendpoints.dapr.io"},
},
next: common.VersionDetails{
RuntimeVersion: "1.15.2",
DashboardVersion: "0.14.0",
ClusterRoles: []string{"dapr-dashboard", "dapr-injector", "dapr-operator-admin", "dapr-placement", "dapr-sentry"},
ClusterRoleBindings: []string{"dapr-operator-admin", "dapr-dashboard", "dapr-injector", "dapr-placement", "dapr-sentry"},
@ -100,14 +98,14 @@ var supportedUpgradePaths = []upgradePath{
// test downgrade.
{
previous: common.VersionDetails{
RuntimeVersion: "1.13.0-rc.2",
RuntimeVersion: "1.15.2",
DashboardVersion: "0.14.0",
ClusterRoles: []string{"dapr-dashboard", "dapr-injector", "dapr-operator-admin", "dapr-placement", "dapr-sentry"},
ClusterRoleBindings: []string{"dapr-operator-admin", "dapr-dashboard", "dapr-injector", "dapr-placement", "dapr-sentry"},
CustomResourceDefs: []string{"components.dapr.io", "configurations.dapr.io", "subscriptions.dapr.io", "resiliencies.dapr.io", "httpendpoints.dapr.io"},
},
next: common.VersionDetails{
RuntimeVersion: "1.12.0",
RuntimeVersion: "1.14.4",
DashboardVersion: "0.14.0",
ClusterRoles: []string{"dapr-dashboard", "dapr-injector", "dapr-operator-admin", "dapr-placement", "dapr-sentry"},
ClusterRoleBindings: []string{"dapr-operator-admin", "dapr-dashboard", "dapr-injector", "dapr-placement", "dapr-sentry"},
@ -116,14 +114,30 @@ var supportedUpgradePaths = []upgradePath{
},
{
previous: common.VersionDetails{
RuntimeVersion: "1.12.0",
RuntimeVersion: "1.15.2",
DashboardVersion: "0.14.0",
ClusterRoles: []string{"dapr-dashboard", "dapr-injector", "dapr-operator-admin", "dapr-placement", "dapr-sentry"},
ClusterRoleBindings: []string{"dapr-operator-admin", "dapr-dashboard", "dapr-injector", "dapr-placement", "dapr-sentry"},
CustomResourceDefs: []string{"components.dapr.io", "configurations.dapr.io", "subscriptions.dapr.io", "resiliencies.dapr.io", "httpendpoints.dapr.io"},
},
next: common.VersionDetails{
RuntimeVersion: "1.11.0",
RuntimeVersion: "1.13.6",
DashboardVersion: "0.14.0",
ClusterRoles: []string{"dapr-dashboard", "dapr-injector", "dapr-operator-admin", "dapr-placement", "dapr-sentry"},
ClusterRoleBindings: []string{"dapr-operator-admin", "dapr-dashboard", "dapr-injector", "dapr-placement", "dapr-sentry"},
CustomResourceDefs: []string{"components.dapr.io", "configurations.dapr.io", "subscriptions.dapr.io", "resiliencies.dapr.io", "httpendpoints.dapr.io"},
},
},
{
previous: common.VersionDetails{
RuntimeVersion: "1.14.4",
DashboardVersion: "0.14.0",
ClusterRoles: []string{"dapr-dashboard", "dapr-injector", "dapr-operator-admin", "dapr-placement", "dapr-sentry"},
ClusterRoleBindings: []string{"dapr-operator-admin", "dapr-dashboard", "dapr-injector", "dapr-placement", "dapr-sentry"},
CustomResourceDefs: []string{"components.dapr.io", "configurations.dapr.io", "subscriptions.dapr.io", "resiliencies.dapr.io", "httpendpoints.dapr.io"},
},
next: common.VersionDetails{
RuntimeVersion: "1.13.6",
DashboardVersion: "0.14.0",
ClusterRoles: []string{"dapr-dashboard", "dapr-injector", "dapr-operator-admin", "dapr-placement", "dapr-sentry"},
ClusterRoleBindings: []string{"dapr-operator-admin", "dapr-dashboard", "dapr-injector", "dapr-placement", "dapr-sentry"},
@ -146,7 +160,7 @@ func getTestsOnUpgrade(p upgradePath, installOpts, upgradeOpts common.TestOption
{Name: "clusterroles exist " + details.RuntimeVersion, Callable: common.ClusterRolesTest(details, upgradeOpts)},
{Name: "clusterrolebindings exist " + details.RuntimeVersion, Callable: common.ClusterRoleBindingsTest(details, upgradeOpts)},
{Name: "previously applied components exist " + details.RuntimeVersion, Callable: common.ComponentsTestOnInstallUpgrade(upgradeOpts)},
{Name: "previously applied http endpoints exist " + details.RuntimeVersion, Callable: common.HTTPEndpointsTestOnInstallUpgrade(upgradeOpts)},
{Name: "previously applied http endpoints exist " + details.RuntimeVersion, Callable: common.HTTPEndpointsTestOnInstallUpgrade(installOpts, upgradeOpts)},
{Name: "check mtls " + details.RuntimeVersion, Callable: common.MTLSTestOnInstallUpgrade(upgradeOpts)},
{Name: "status check " + details.RuntimeVersion, Callable: common.StatusTestOnInstallUpgrade(details, upgradeOpts)},
}...)
@ -172,16 +186,19 @@ func getTestsOnUpgrade(p upgradePath, installOpts, upgradeOpts common.TestOption
// Upgrade path tests.
func TestUpgradePathNonHAModeMTLSDisabled(t *testing.T) {
if common.ShouldSkipTest(common.DaprModeNonHA) {
t.Skip(fmt.Sprintf("Skipping %s mode test", common.DaprModeNonHA))
}
// Ensure a clean environment.
common.EnsureUninstall(false, false) // does not wait for pod deletion.
for _, p := range supportedUpgradePaths {
t.Run(fmt.Sprintf("setup v%s to v%s", p.previous.RuntimeVersion, p.next.RuntimeVersion), func(t *testing.T) {
t.Run(deleteCRDs+p.previous.RuntimeVersion, common.DeleteCRD(p.previous.CustomResourceDefs))
t.Run(deleteCRDs+p.next.RuntimeVersion, common.DeleteCRD(p.next.CustomResourceDefs))
})
}
for _, p := range supportedUpgradePaths {
t.Run(fmt.Sprintf("v%s to v%s", p.previous.RuntimeVersion, p.next.RuntimeVersion), func(t *testing.T) {
installOpts := common.TestOptions{
HAEnabled: false,
@ -206,6 +223,7 @@ func TestUpgradePathNonHAModeMTLSDisabled(t *testing.T) {
common.ClusterRoles: true,
common.ClusterRoleBindings: true,
},
TimeoutSeconds: 120,
}
tests := getTestsOnUpgrade(p, installOpts, upgradeOpts)
@ -217,16 +235,19 @@ func TestUpgradePathNonHAModeMTLSDisabled(t *testing.T) {
}
func TestUpgradePathNonHAModeMTLSEnabled(t *testing.T) {
if common.ShouldSkipTest(common.DaprModeNonHA) {
t.Skip(fmt.Sprintf("Skipping %s mode test", common.DaprModeNonHA))
}
// Ensure a clean environment.
common.EnsureUninstall(false, false) // does not wait for pod deletion.
for _, p := range supportedUpgradePaths {
t.Run(fmt.Sprintf("setup v%s to v%s", p.previous.RuntimeVersion, p.next.RuntimeVersion), func(t *testing.T) {
t.Run(deleteCRDs+p.previous.RuntimeVersion, common.DeleteCRD(p.previous.CustomResourceDefs))
t.Run(deleteCRDs+p.next.RuntimeVersion, common.DeleteCRD(p.next.CustomResourceDefs))
})
}
for _, p := range supportedUpgradePaths {
t.Run(fmt.Sprintf("v%s to v%s", p.previous.RuntimeVersion, p.next.RuntimeVersion), func(t *testing.T) {
installOpts := common.TestOptions{
HAEnabled: false,
@ -251,6 +272,7 @@ func TestUpgradePathNonHAModeMTLSEnabled(t *testing.T) {
common.ClusterRoles: true,
common.ClusterRoleBindings: true,
},
TimeoutSeconds: 120,
}
tests := getTestsOnUpgrade(p, installOpts, upgradeOpts)
@ -262,16 +284,18 @@ func TestUpgradePathNonHAModeMTLSEnabled(t *testing.T) {
}
func TestUpgradePathHAModeMTLSDisabled(t *testing.T) {
if common.ShouldSkipTest(common.DaprModeHA) {
t.Skip(fmt.Sprintf("Skipping %s mode test", common.DaprModeHA))
}
// Ensure a clean environment.
common.EnsureUninstall(false, false) // does not wait for pod deletion.
for _, p := range supportedUpgradePaths {
t.Run(fmt.Sprintf("setup v%s to v%s", p.previous.RuntimeVersion, p.next.RuntimeVersion), func(t *testing.T) {
t.Run(deleteCRDs+p.previous.RuntimeVersion, common.DeleteCRD(p.previous.CustomResourceDefs))
t.Run(deleteCRDs+p.next.RuntimeVersion, common.DeleteCRD(p.next.CustomResourceDefs))
})
}
for _, p := range supportedUpgradePaths {
t.Run(fmt.Sprintf("v%s to v%s", p.previous.RuntimeVersion, p.next.RuntimeVersion), func(t *testing.T) {
installOpts := common.TestOptions{
HAEnabled: true,
@ -296,6 +320,7 @@ func TestUpgradePathHAModeMTLSDisabled(t *testing.T) {
common.ClusterRoles: true,
common.ClusterRoleBindings: true,
},
TimeoutSeconds: 120,
}
tests := getTestsOnUpgrade(p, installOpts, upgradeOpts)
@ -307,16 +332,19 @@ func TestUpgradePathHAModeMTLSDisabled(t *testing.T) {
}
func TestUpgradePathHAModeMTLSEnabled(t *testing.T) {
if common.ShouldSkipTest(common.DaprModeHA) {
t.Skip(fmt.Sprintf("Skipping %s mode test", common.DaprModeHA))
}
// Ensure a clean environment.
common.EnsureUninstall(false, false) // does not wait for pod deletion.
for _, p := range supportedUpgradePaths {
t.Run(fmt.Sprintf("setup v%s to v%s", p.previous.RuntimeVersion, p.next.RuntimeVersion), func(t *testing.T) {
t.Run(deleteCRDs+p.previous.RuntimeVersion, common.DeleteCRD(p.previous.CustomResourceDefs))
t.Run(deleteCRDs+p.next.RuntimeVersion, common.DeleteCRD(p.next.CustomResourceDefs))
})
}
for _, p := range supportedUpgradePaths {
t.Run(fmt.Sprintf("v%s to v%s", p.previous.RuntimeVersion, p.next.RuntimeVersion), func(t *testing.T) {
installOpts := common.TestOptions{
HAEnabled: true,
@ -341,6 +369,7 @@ func TestUpgradePathHAModeMTLSEnabled(t *testing.T) {
common.ClusterRoles: true,
common.ClusterRoleBindings: true,
},
TimeoutSeconds: 120,
}
tests := getTestsOnUpgrade(p, installOpts, upgradeOpts)
@ -354,14 +383,12 @@ func TestUpgradePathHAModeMTLSEnabled(t *testing.T) {
// HTTPEndpoint Dapr resource is a new type as of v1.11.
// This test verifies install/upgrade functionality with this additional resource.
func TestUpgradeWithHTTPEndpoint(t *testing.T) {
if common.ShouldSkipTest(common.DaprModeHA) {
t.Skip(fmt.Sprintf("Skipping %s mode test", common.DaprModeHA))
}
// Ensure a clean environment.
common.EnsureUninstall(false, false) // does not wait for pod deletion.
for _, p := range supportedUpgradePaths {
t.Run(fmt.Sprintf("setup v%s to v%s", p.previous.RuntimeVersion, p.next.RuntimeVersion), func(t *testing.T) {
t.Run(deleteCRDs+p.previous.RuntimeVersion, common.DeleteCRD(p.previous.CustomResourceDefs))
t.Run(deleteCRDs+p.next.RuntimeVersion, common.DeleteCRD(p.next.CustomResourceDefs))
})
}
for _, p := range supportedUpgradePaths {
ver, err := semver.NewVersion(p.previous.RuntimeVersion)
@ -373,11 +400,17 @@ func TestUpgradeWithHTTPEndpoint(t *testing.T) {
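// The HTTPEndpoint resource only exists from Dapr 1.11 onwards, so skip upgrade paths that start on an older version.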
if ver.Major() != 1 || ver.Minor() < 11 {
return
}
t.Run(fmt.Sprintf("setup v%s to v%s", p.previous.RuntimeVersion, p.next.RuntimeVersion), func(t *testing.T) {
t.Run(deleteCRDs+p.previous.RuntimeVersion, common.DeleteCRD(p.previous.CustomResourceDefs))
t.Run(deleteCRDs+p.next.RuntimeVersion, common.DeleteCRD(p.next.CustomResourceDefs))
})
t.Run(fmt.Sprintf("v%s to v%s", p.previous.RuntimeVersion, p.next.RuntimeVersion), func(t *testing.T) {
installOpts := common.TestOptions{
HAEnabled: true,
MTLSEnabled: true,
ApplyComponentChanges: false,
ApplyComponentChanges: true,
ApplyHTTPEndpointChanges: true,
CheckResourceExists: map[common.Resource]bool{
common.CustomResourceDefs: true,
@ -397,6 +430,7 @@ func TestUpgradeWithHTTPEndpoint(t *testing.T) {
common.ClusterRoles: true,
common.ClusterRoleBindings: true,
},
TimeoutSeconds: 120,
}
tests := getTestsOnUpgrade(p, installOpts, upgradeOpts)

View File

@ -76,7 +76,6 @@ func TestContainerRuntimeUtils(t *testing.T) {
}
for _, tc := range testcases {
tc := tc
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
actualValid := IsValidContainerRuntime(tc.input)
@ -120,7 +119,6 @@ func TestContains(t *testing.T) {
}
for _, tc := range testcases {
tc := tc
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
actualValid := Contains(tc.input, tc.expected)
@ -165,7 +163,6 @@ func TestGetVersionAndImageVariant(t *testing.T) {
}
for _, tc := range testcases {
tc := tc
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
version, imageVariant := GetVersionAndImageVariant(tc.input)
@ -202,7 +199,6 @@ func TestValidateFilePaths(t *testing.T) {
}
for _, tc := range testcases {
tc := tc
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
actual := ValidateFilePath(tc.input)
@ -244,7 +240,6 @@ func TestGetAbsPath(t *testing.T) {
}
for _, tc := range testcases {
tc := tc
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
actual := GetAbsPath(baseDir, tc.input)
@ -302,7 +297,6 @@ func TestResolveHomeDir(t *testing.T) {
}
for _, tc := range testcases {
tc := tc
t.Run(tc.name, func(t *testing.T) {
if tc.skipWindows && runtime.GOOS == "windows" {
t.Skip("Skipping test on Windows")
@ -335,7 +329,6 @@ func TestReadFile(t *testing.T) {
},
}
for _, tc := range testcases {
tc := tc
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
_, actual := ReadFile(tc.input)
@ -398,7 +391,6 @@ func TestFindFileInDir(t *testing.T) {
},
}
for _, tc := range testcases {
tc := tc
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
filePath, err := FindFileInDir(tc.input, "dapr.yaml")
@ -480,7 +472,6 @@ func TestPrintDetail(t *testing.T) {
}
for _, tc := range testcases {
tc := tc
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
var buf bytes.Buffer
@ -552,7 +543,6 @@ func TestSanitizeDir(t *testing.T) {
}
for _, tc := range testcases {
tc := tc
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
actual := SanitizeDir(tc.input)